
1z0-1127-25 Questions and Answers

Question # 6

How does the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

A.

Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.

B.

Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.

C.

Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.

D.

Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.

Question # 7

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

A.

"Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.

B.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

C.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

D.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

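The Top k versus Top p distinction asked about above can be made concrete in a few lines. The sketch below is illustrative only, not the OCI implementation, and the probability values are invented.

```python
import numpy as np

# Illustrative next-token distribution (values are invented).
probs = np.array([0.40, 0.25, 0.15, 0.12, 0.08])

def top_k_filter(probs, k):
    # Top k: keep only the k highest-probability tokens, then renormalize.
    keep = np.argsort(probs)[::-1][:k]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def top_p_filter(probs, p):
    # Top p (nucleus): keep the smallest set of top tokens whose cumulative
    # probability reaches p, then renormalize.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

print(top_k_filter(probs, k=2))    # keeps only the 2 most probable tokens
print(top_p_filter(probs, p=0.8))  # keeps tokens until cumulative probability reaches 0.8
```
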
Question # 8

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

A.

It updates all the weights of the model uniformly.

B.

It selectively updates only a fraction of weights to reduce the number of parameters.

C.

It selectively updates only a fraction of weights to reduce computational load and avoid overfitting.

D.

It increases the training time as compared to Vanilla fine-tuning.

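For context on the options above: T-Few is a parameter-efficient method, so most weights stay frozen and only a small fraction is trained. The PyTorch-style sketch below illustrates that general idea only; it is not Cohere's actual T-Few recipe, and the `model` argument and the "adapter" name filter are assumptions.

```python
import torch

def freeze_all_but_adapters(model: torch.nn.Module, trainable_substring: str = "adapter"):
    """Freeze every weight except those whose name contains `trainable_substring`.

    Only a small fraction of parameters receives gradient updates, which reduces
    computational load and the risk of overfitting on a small fine-tuning set.
    """
    trainable, total = 0, 0
    for name, param in model.named_parameters():
        param.requires_grad = trainable_substring in name
        total += param.numel()
        trainable += param.numel() if param.requires_grad else 0
    print(f"Trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```
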
Question # 9

What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)?

A.

It allows the LLM to access a larger dataset.

B.

It eliminates the need for any training or computational resources.

C.

It provides examples in the prompt to guide the LLM to better performance with no training cost.

D.

It significantly reduces the latency for each model request.

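The few-shot idea this question tests is purely prompt-side: labeled examples are placed in the prompt and no training job runs. A small illustrative prompt (the reviews and labels are invented):

```python
# A few labeled examples placed directly in the prompt steer the model's
# behavior at inference time; no fine-tuning job, no extra training cost.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "The screen cracked within a week." -> Negative
Review: "Setup took five minutes and everything just worked." -> Positive

Review: "The speaker crackles at high volume." ->"""

# This string would then be sent, unchanged, to any text-generation endpoint.
```
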
Question # 10

Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:

1. "Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50."

2. "Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question."

3. "To understand the impact of greenhouse gases on climate change, let’s start by defining what greenhouse gases are. Next, we’ll explore how they trap heat in the Earth’s atmosphere."

A.

1: Step-Back, 2: Chain-of-Thought, 3: Least-to-Most

B.

1: Least-to-Most, 2: Chain-of-Thought, 3: Step-Back

C.

1: Chain-of-Thought, 2: Step-Back, 3: Least-to-Most

D.

1: Chain-of-Thought, 2: Least-to-Most, 3: Step-Back

Question # 11

In which scenario is soft prompting appropriate compared to other training styles?

A.

When there is a significant amount of labeled, task-specific data available

B.

When the model needs to be adapted to perform well in a domain on which it was not originally trained

C.

When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training

D.

When the model requires continued pretraining on unlabeled data

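Soft prompting trains a handful of extra embedding vectors that are prepended to the input while the LLM's own weights stay frozen. A minimal PyTorch-style sketch of that idea; the dimensions and names are illustrative and not any specific library's implementation.

```python
import torch

class SoftPrompt(torch.nn.Module):
    """Prepend `num_virtual_tokens` trainable vectors to the input embeddings.

    Only `self.prompt` is optimized; the frozen LLM consumes the concatenated
    sequence as if the virtual tokens were ordinary input tokens.
    """

    def __init__(self, num_virtual_tokens: int = 20, embedding_dim: int = 768):
        super().__init__()
        self.prompt = torch.nn.Parameter(torch.randn(num_virtual_tokens, embedding_dim) * 0.02)

    def forward(self, input_embeddings: torch.Tensor) -> torch.Tensor:
        # input_embeddings: (batch, seq_len, embedding_dim)
        batch = input_embeddings.size(0)
        expanded = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([expanded, input_embeddings], dim=1)
```
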
Question # 12

What does a cosine distance of 0 indicate about the relationship between two embeddings?

A.

They are completely dissimilar

B.

They are unrelated

C.

They are similar in direction

D.

They have the same magnitude

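Cosine distance is 1 minus cosine similarity, so a distance of 0 means the two embeddings point in the same direction, whatever their magnitudes. A quick numeric check with made-up vectors:

```python
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance([1, 2, 3], [2, 4, 6]))  # ~0.0: same direction, different magnitude
print(cosine_distance([1, 0], [0, 1]))        # 1.0: orthogonal directions
```
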
Question # 13

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

A.

25 unit hours

B.

40 unit hours

C.

20 unit hours

D.

30 unit hours

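Dedicated AI cluster billing is active hours multiplied by the number of units in the cluster. The sketch below assumes the fine-tuning cluster size given in OCI's documentation (two units for a fine-tuning dedicated AI cluster); treat that figure as an assumption to verify against the current service documentation.

```python
# Billing sketch: unit hours = active hours x units in the cluster.
FINE_TUNING_CLUSTER_UNITS = 2   # assumption: documented size of a fine-tuning dedicated AI cluster
active_hours = 10

unit_hours = active_hours * FINE_TUNING_CLUSTER_UNITS
print(unit_hours)  # 20 under the assumption above
```
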
Question # 14

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

A.

Support for tokenizing longer sentences

B.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

C.

Emphasis on syntactic clustering of word embeddings

D.

Capacity to translate text in over 100 languages

Question # 15

How does the structure of vector databases differ from that of traditional relational databases?

A.

It stores data in a linear or tabular format.

B.

It is not optimized for high-dimensional spaces.

C.

It uses simple row-based data storage.

D.

It is based on distances and similarities in a vector space.

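Being "based on distances and similarities in a vector space" means lookups are nearest-neighbor searches over embeddings rather than exact matches on column values. A toy brute-force sketch with invented data; real vector databases add approximate indexes such as HNSW or IVF.

```python
import numpy as np

# Toy "vector store": each row is an embedding, with a parallel list of documents.
documents = ["reset a password", "configure a VPN", "bake sourdough bread"]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.95],
])

def search(query_embedding, k=2):
    # Rank documents by cosine similarity to the query: distance in vector
    # space, not equality of column values as in a relational WHERE clause.
    q = np.asarray(query_embedding, dtype=float)
    sims = embeddings @ q / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q))
    best = np.argsort(sims)[::-1][:k]
    return [(documents[i], float(sims[i])) for i in best]

print(search([0.85, 0.15, 0.05]))
```
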
Question # 16

Which statement best describes the role of encoder and decoder models in natural language processing?

A.

Encoder models and decoder models both convert sequences of words into vector representations without generating new text.

B.

Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.

C.

Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.

D.

Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text.

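The encoder and decoder roles this question contrasts can be sketched with the Hugging Face transformers library: an encoder maps text to a vector representation, and a decoder generates a sequence of words. The checkpoints below are common public models chosen only for illustration, and running the snippet downloads them.

```python
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

# Encoder: text -> vector representation.
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
inputs = enc_tok("Vector databases store embeddings.", return_tensors="pt")
sentence_vector = encoder(**inputs).last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)

# Decoder: conditioning text -> generated continuation.
dec_tok = AutoTokenizer.from_pretrained("gpt2")
decoder = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = dec_tok("Vector databases store", return_tensors="pt")
generated = decoder.generate(**prompt, max_new_tokens=10)
print(dec_tok.decode(generated[0], skip_special_tokens=True))
```
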
Question # 17

What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?

A.

Overfitting

B.

Underfitting

C.

Data Leakage

D.

Model Drift

Question # 18

How does the structure of vector databases differ from that of traditional relational databases?

A.

A vector database stores data in a linear or tabular format.

B.

It is not optimized for high-dimensional spaces.

C.

It is based on distances and similarities in a vector space.

D.

It uses simple row-based data storage.

Question # 19

Which is NOT a typical use case for LangSmith Evaluators?

A.

Measuring coherence of generated text

B.

Aligning code readability

C.

Evaluating factual accuracy of outputs

D.

Detecting bias or toxicity

Question # 20

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

A.

Increasing the temperature removes the impact of the most likely word.

B.

Decreasing the temperature broadens the distribution, making less likely words more probable.

C.

Increasing the temperature flattens the distribution, allowing for more varied word choices.

D.

Temperature has no effect on probability distribution; it only changes the speed of decoding.

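Temperature divides the logits before the softmax: higher values flatten the distribution across the vocabulary, lower values concentrate probability on the top token. A small numeric illustration with made-up logits:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Higher temperature -> flatter distribution (more varied word choices);
    # lower temperature -> probability mass concentrates on the most likely token.
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]
for t in (0.5, 1.0, 2.0):
    print(t, np.round(softmax_with_temperature(logits, t), 3))
```
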
Question # 21

What is the purpose of Retrievers in LangChain?

A.

To train Large Language Models

B.

To retrieve relevant information from knowledge bases

C.

To break down complex tasks into smaller steps

D.

To combine multiple components into a single pipeline

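In LangChain, a Retriever exposes a simple query-in, documents-out interface over a knowledge source. The sketch below uses an in-memory FAISS vector store with a placeholder embedding model so it runs without credentials; class and method names follow common LangChain usage, which shifts between releases, so check your installed version.

```python
# pip install langchain langchain-community faiss-cpu  (package layout varies by version)
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import FakeEmbeddings  # placeholder embeddings for the sketch

texts = [
    "Dedicated AI clusters host fine-tuned models.",
    "Retrieval Augmented Generation grounds answers in retrieved documents.",
]
vectorstore = FAISS.from_texts(texts, embedding=FakeEmbeddings(size=128))

# The retriever wraps the vector store behind a "query in, relevant documents out" interface.
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
docs = retriever.invoke("What grounds a RAG answer?")
print(docs[0].page_content)  # with a real embedding model this would be the most similar text
```
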
Question # 22

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and is typically used when no training data exists.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.

Question # 23

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

A.

When the LLM already understands the topics necessary for text generation

B.

When the LLM does not perform well on a task and the data for prompt engineering is too large

C.

When the LLM requires access to the latest data for generating outputs

D.

When you want to optimize the model without any instructions

Question # 24

What does "k-shot prompting" refer to when using Large Language Models for task-specific applications?

A.

Providing the exact k words in the prompt to guide the model's response

B.

Explicitly providing k examples of the intended task in the prompt to guide the model’s output

C.

The process of training the model on k different tasks simultaneously to improve its versatility

D.

Limiting the model to only k possible outcomes or answers for a given task

Question # 25

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost

Question # 26

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

A.

It updates all the weights of the model uniformly.

B.

It does not update any weights but restructures the model architecture.

C.

It selectively updates only a fraction of the model’s weights.

D.

It increases the training time as compared to Vanilla fine-tuning.
