Your company has upgraded from a legacy LLM to a new model that supports longer input sequences and higher token limits. What is the most likely result of upgrading to the new model?
Transformers are useful for language modeling because their architecture is uniquely suited for handling which of the following?
What metrics would you use to evaluate the performance of a RAG workflow in terms of the accuracy of responses generated in relation to the input query? (Choose two.)
In the context of developing an AI application using NVIDIA’s NGC containers, how does the use of containerized environments enhance the reproducibility of LLM training and deployment workflows?
What are the main advantages of instruction-tuned large language models over traditional, small language models (< 300M parameters)? (Choose two.)
Imagine you are training an LLM consisting of billions of parameters, and your training dataset is significantly larger than the available RAM in your system. Which of the following approaches would allow training to proceed despite this limitation?
Which technique is designed to train a deep learning model by adjusting the weights of the neural network based on the error between the predicted and actual outputs?
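The technique in question is backpropagation with gradient descent. A minimal sketch (a hypothetical single linear neuron, not any specific framework): the error between the predicted and actual output is used to compute gradients, and the weights are adjusted against those gradients.

```python
# Backpropagation sketch: one linear neuron y = w*x + b trained by
# gradient descent on squared error (illustrative example only).

def train_step(w, b, x, y_true, lr=0.1):
    y_pred = w * x + b
    error = y_pred - y_true           # predicted minus actual output
    # Gradients of the squared-error loss with respect to w and b
    grad_w = 2 * error * x
    grad_b = 2 * error
    # Adjust weights in the direction that reduces the error
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=2.0, y_true=5.0)
```

After repeated updates the neuron's prediction for `x = 2.0` converges to the target `5.0`; real networks apply the same rule layer by layer via the chain rule.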
Which machine learning algorithm is commonly used to generate new data based on existing data?
In the context of machine learning model deployment, how can Docker be utilized to enhance the process?
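Docker's value for deployment is packaging the model, code, and exact dependency versions into one portable image that runs identically on a laptop, a CI server, or a production cluster. A minimal, hypothetical sketch (the NGC base-image tag and the `requirements.txt`/`train.py` file names are assumptions for illustration):

```dockerfile
# Hypothetical Dockerfile; the base-image tag is illustrative.
FROM nvcr.io/nvidia/pytorch:24.01-py3

# Pin application dependencies so every build is identical
COPY requirements.txt /workspace/
RUN pip install --no-cache-dir -r /workspace/requirements.txt

COPY train.py /workspace/
WORKDIR /workspace
CMD ["python", "train.py"]
```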
Which of the following principles are widely recognized for building trustworthy AI? (Choose two.)
What distinguishes BLEU scores from ROUGE scores when evaluating natural language processing models?
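The core distinction: BLEU is precision-oriented (how much of the candidate text appears in the reference, typically for machine translation), while ROUGE is recall-oriented (how much of the reference appears in the candidate, typically for summarization). A simplified sketch at the unigram level only; real BLEU/ROUGE add higher-order n-grams, count clipping, and brevity penalties:

```python
# Unigram-level contrast between BLEU-style precision and
# ROUGE-style recall (simplified illustration, not the full metrics).

def unigram_precision(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    # Fraction of candidate tokens that also appear in the reference
    return sum(1 for tok in cand if tok in ref) / len(cand)

def unigram_recall(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    # Fraction of reference tokens that also appear in the candidate
    return sum(1 for tok in ref if tok in cand) / len(ref)
```

For candidate "the cat sat" against reference "the cat sat on the mat", precision is 1.0 (every candidate word is in the reference) while recall is only 4/6, showing how the two metrics reward different behavior.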
Which of the following best describes the purpose of attention mechanisms in transformer models?
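Attention lets each token weight every other token's representation by relevance. A pure-Python sketch of scaled dot-product attention (vectors as plain lists, made-up values; production implementations use batched tensor math):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors (sketch)."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of the query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns scores into attention weights that sum to 1
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the weighted average of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

A query aligned with one key receives nearly all of that key's value: `attention([[10, 0]], [[10, 0], [0, 10]], [[1, 0], [0, 1]])` returns a vector close to `[1, 0]`.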
In the development of Trustworthy AI, what is the significance of ‘Certification’ as a principle?
In the context of data preprocessing for Large Language Models (LLMs), what does tokenization refer to?
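Tokenization splits raw text into units (tokens) that the model maps to integer IDs. Production LLMs use subword schemes such as BPE or WordPiece; this hypothetical sketch shows the simpler word-level idea:

```python
# Word-level tokenization sketch: build a vocabulary from a corpus,
# then map text to integer token IDs (unknown words fall back to <unk>).

def build_vocab(corpus):
    vocab = {"<unk>": 0}
    for text in corpus:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def tokenize(text, vocab):
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]
```

With a vocabulary built from "hello world", `tokenize("hello there", vocab)` maps "hello" to its ID and "there" to the `<unk>` ID 0; subword tokenizers avoid this out-of-vocabulary problem by splitting unknown words into known pieces.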
When using NVIDIA RAPIDS to accelerate data preprocessing for an LLM fine-tuning pipeline, which specific feature of RAPIDS cuDF enables faster data manipulation compared to traditional CPU-based Pandas?
In the context of fine-tuning LLMs, which of the following metrics is most commonly used to assess the performance of a fine-tuned model?
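A common intrinsic metric here is perplexity: the exponential of the average negative log-likelihood the model assigns to the actual next tokens (lower is better). A minimal sketch of the computation:

```python
import math

def perplexity(token_probs):
    """Perplexity from the probabilities the model assigned to each
    actual next token: exp of the mean negative log-likelihood."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

A model that assigns probability 0.25 to every correct token has perplexity 4.0, as if choosing uniformly among 4 options; a perfect model (probability 1.0 everywhere) has perplexity 1.0.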
In the development of trustworthy AI systems, what is the primary purpose of implementing red-teaming exercises during the alignment process of large language models?
In Natural Language Processing, words are mapped to vector representations known as word representations (also called word embeddings) during problem formulation. Which of the following are deep learning models that can be used to produce these representations for NLP tasks? (Choose two.)
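Whatever model produces them (Word2Vec, GloVe, ELMo, BERT), embeddings share one property: each word becomes a dense vector, and semantic similarity becomes geometric similarity. A toy sketch with made-up vector values, using cosine similarity:

```python
import math

# Hypothetical 3-dimensional embeddings; real models learn hundreds of
# dimensions from data. The values below are invented for illustration.
embeddings = {
    "king":  [0.9, 0.1, 0.4],
    "queen": [0.85, 0.15, 0.45],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

With these toy vectors, "king" is far more similar to "queen" than to "apple", which is exactly the structure downstream NLP tasks exploit.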
What are some methods to overcome limited data-transfer throughput between the CPU and GPU? (Choose two.)