
1z0-1110-23 Questions and Answers

Question # 6

You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring

basis in a production environment. This job will pick up sensitive data from an Object Storage

bucket, train a model, and save it to the model catalog.

How would you design the authentication mechanism for the job?

A.

Create a pre-authenticated request (PAR) for the Object Storage bucket, and use that in the

job code.

B.

Use the resource principal of the job run as the signer in the job code, ensuring there is a

dynamic group for this job run with appropriate access to Object Storage and the model

catalog.

C.

Store your personal OCI config file and keys in the Vault and access the Vault through the job

run resource principal.

D.

Package your personal OCI config file and keys in the job artifact.

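For context, a minimal sketch of the resource principal pattern referenced in the options, assuming a Python job artifact and placeholder bucket/object names (the job run's dynamic group and its policies must already grant access to Object Storage and the model catalog):

import ads
import oci

# The job run authenticates as itself through its resource principal; no user keys ship with the artifact.
signer = oci.auth.signers.get_resource_principals_signer()
object_storage = oci.object_storage.ObjectStorageClient(config={}, signer=signer)

namespace = object_storage.get_namespace().data
# "training-bucket" and "sensitive.csv" are placeholder names for illustration only.
payload = object_storage.get_object(namespace, "training-bucket", "sensitive.csv").data.content

# ADS calls (for example, saving the trained model to the model catalog) can reuse the same identity.
ads.set_auth(auth="resource_principal")
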
Question # 7

The feature type TechJob has the following registered validators:

TechJob.validator.register(name='is_tech_job', handler=is_tech_job_default_handler)
TechJob.validator.register(name='is_tech_job', handler=is_tech_job_open_handler, condition=('job_family',))
TechJob.validator.register(name='is_tech_job', handler=is_tech_job_closed_handler, condition={'job_family': 'IT'})

When you run is_tech_job(job_family='Engineering'), what does the feature type validator system do?

A.

Execute the is_tech_job_default_handler handler.

B.

Throw an error because the system cannot determine which handler to run.

C.

Execute the is_tech_job_closed_handler handler.

D.

Execute the is_tech_job_open_handler handler.

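To see why the arguments matter, here is a simplified, standalone illustration of the dispatch rule documented for ADS feature type validators (this is not the ADS implementation): a dict condition requires an exact keyword/value match, a tuple condition only requires the keyword to be present, and the unconditioned handler is the fallback.

def pick_handler(registered, **kwargs):
    # registered: list of (condition, handler) pairs; the most specific match wins.
    for condition, handler in registered:
        if isinstance(condition, dict) and all(kwargs.get(k) == v for k, v in condition.items()):
            return handler                                    # exact value match, e.g. job_family == 'IT'
    for condition, handler in registered:
        if isinstance(condition, tuple) and all(k in kwargs for k in condition):
            return handler                                    # keyword supplied with any value
    return next(h for c, h in registered if c is None)        # default handler

handlers = [
    (None, "is_tech_job_default_handler"),
    (("job_family",), "is_tech_job_open_handler"),
    ({"job_family": "IT"}, "is_tech_job_closed_handler"),
]

print(pick_handler(handlers, job_family="Engineering"))       # is_tech_job_open_handler
print(pick_handler(handlers, job_family="IT"))                # is_tech_job_closed_handler
print(pick_handler(handlers))                                 # is_tech_job_default_handler
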
Question # 8

Where do calls to stdout and stderr from score.py go in a model deployment?

A.

The predict log in the Oracle Cloud Infrastructure (OCI) Logging service as defined in the deployment.

B.

The OCI Cloud Shell, which can be accessed from the console.

C.

The file that was defined for them on the Virtual Machine (VM).

D.

The OCI console.

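For reference, a minimal score.py sketch (the model loading and scoring logic are placeholders); when a predict log is attached to the deployment, the stdout and stderr output below is what appears in it.

import sys

def load_model():
    # Placeholder: a real artifact would deserialize the trained model here.
    print("loading model ...")              # stdout
    return object()

def predict(data, model=load_model()):
    try:
        print(f"scoring payload: {data}")   # stdout
        return {"prediction": 0}
    except Exception as exc:
        print(exc, file=sys.stderr)         # stderr
        raise
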
Question # 9

For your next data science project, you need access to public geospatial images.

Which Oracle Cloud service provides free access to those images?

A.

Oracle Open Data

B.

Oracle Big Data Service

C.

Oracle Cloud Infrastructure Data Science

D.

Oracle Analytics Cloud

Question # 10

You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for

various types of text analyses. Which TWO capabilities can you utilize with this tool?

A.

Topic classification

B.

Table extraction

C.

Sentiment analysis

D.

Sentence diagramming

E.

Punctuation correction

Question # 11

You are a data scientist working for a utilities company. You have developed an algorithm that

detects anomalies from a utility reader in the grid. The size of the model artifact is about 2 GB, and

you are trying to store it in the model catalog. Which three interfaces could you use to save the

model artifact into the model catalog?

A.

Git CLI

B.

Oracle Cloud Infrastructure (OCI) Command Line Interface (CLI)

C.

Accelerated Data Science (ADS) Software Development Kit (SDK)

D.

ODSC CLI

E.

Console

F.

OCI Python SDK

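As an illustration of one of these interfaces, a hedged sketch of saving an artifact with the ADS SDK (the estimator, conda slug, and display name below are placeholders; the OCI CLI, OCI Python SDK, and Console follow their own flows):

import ads
import numpy as np
from sklearn.ensemble import IsolationForest
from ads.model.generic_model import GenericModel

ads.set_auth(auth="resource_principal")     # or api_key when running outside OCI

# Stand-in estimator; the real 2 GB anomaly detection model would be trained elsewhere.
estimator = IsolationForest().fit(np.random.rand(100, 4))

model = GenericModel(estimator=estimator, artifact_dir="./anomaly_artifact")
model.prepare(inference_conda_env="generalml_p38_cpu_v1", force_overwrite=True)   # example slug
model.save(display_name="utility-anomaly-detector")
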
Question # 12

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

A.

Define the compute scaling strategy.

B.

Configure the deployment infrastructure.

C.

Define the inference server dependencies.

D.

Execute the inference logic code.

Question # 13

As you are working in your notebook session, you find that your notebook session does not have

enough compute CPU and memory for your workload.

How would you scale up your notebook session without losing your work?

A.

Create a temporary bucket on Object Storage, write all your files and data to Object Storage,

delete your notebook session, provision a new notebook session on a larger compute shape,

and copy your files and data from your temporary bucket onto your new notebook session.

B.

Ensure your files and environments are written to the block volume storage under the

/home/datascience directory, deactivate the notebook session, and activate the notebook

session with a larger compute shape selected.

C.

Download all your files and data to your local machine, delete your notebook session,

provision a new notebook session on a larger compute shape, and upload your files from

your local machine to the new notebook session.

D.

Deactivate your notebook session, provision a new notebook session on a larger compute

shape and re-create all of your file changes.

Question # 14

You are a data scientist designing an air traffic control model, and you choose to leverage Oracle

AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and

automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML

pipeline?

A.

Algorithm selection, Feature selection, Adaptive sampling, Hyperparameter tuning

B.

Adaptive sampling, Algorithm selection, Feature selection, Hyperparameter tuning

C.

Adaptive sampling, Feature selection, Algorithm selection, Hyperparameter tuning

D.

Algorithm selection, Adaptive sampling, Feature selection, Hyperparameter tuning

Question # 15

You are attempting to save a model from a notebook session to the model catalog by using the

Accelerated Data Science (ADS) SDK, with resource principal as the authentication signer, and you

get a 404 authentication error. Which two should you look for to ensure permissions are set up

correctly?

A.

The model artifact is saved to the block volume of the notebook session.

B.

A dynamic group has rules matching the notebook sessions in its compartment.

C.

The policy for your user group grants manage permissions for the model catalog in this

compartment.

D.

The policy for a dynamic group grants manage permissions for the model catalog in its

compartment.

E.

The networking configuration allows access to Oracle Cloud Infrastructure services through a

Service Gateway.

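As a concrete illustration (group, compartment, and OCID values below are hypothetical), the setup being tested here usually pairs a dynamic group rule such as

ALL {resource.type = 'datasciencenotebooksession', resource.compartment.id = '<notebook_compartment_ocid>'}

with a policy statement such as

allow dynamic-group acme-ds-dynamic-group to manage data-science-family in compartment acme-data-science
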
Question # 16

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

A.

Call the Accelerated Data Science (ADS) command to enable AI integration.

B.

Create and upload the API signing key and config file

C.

Import the REST API

D.

Create and upload execute.py and runtime.yaml

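For context, a minimal sketch of using an uploaded API signing key and config file from a notebook session (the config path and profile are the usual defaults; the Language client is just one example of an AI service client):

import oci

# Assumes the API signing key and config file were created and uploaded to ~/.oci in the notebook session.
config = oci.config.from_file("~/.oci/config", "DEFAULT")
language_client = oci.ai_language.AIServiceLanguageClient(config)
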
Question # 17

You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?

A.

Model type selection

B.

Model validation

C.

Data access

D.

Feature engineering

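As a small illustration of that step, a hedged pandas sketch that derives the new variable from a timestamp (column and category names are hypothetical):

import pandas as pd

df = pd.DataFrame({"event_time": pd.to_datetime(["2023-05-01 07:30", "2023-05-01 14:10", "2023-05-01 21:45"])})

def part_of_day(hour):
    # Bucket the hour into the three engineered categories.
    if hour < 12:
        return "morning"
    if hour < 18:
        return "afternoon"
    return "evening"

df["part_of_day"] = df["event_time"].dt.hour.map(part_of_day)
print(df)
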
Question # 18

You have created a conda environment in your notebook session. This is the first time you are

working with published conda environments. You have also created an Object Storage bucket with

permission to manage the bucket.

Which two commands are required to publish the conda environment?

A.

odsc conda publish --slug

B.

odsc conda list --override

C.

odsc conda init --bucket_namespace --bucket_name

D.

odsc conda create --file manifest.yaml

E.

conda activate /home/datascience/conda/

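For reference, publishing is typically a two-step sequence, run in this order (values are placeholders; flags shown as they appear in the options above):

odsc conda init --bucket_namespace <tenancy_namespace> --bucket_name <publish_bucket>
odsc conda publish --slug <environment_slug>
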
Question # 19

You train a model to predict housing prices for your city. Which two metrics from the

Accelerated Data Science (ADS) ADSEvaluator class can you use to evaluate the regression model?

A.

Explained Variance Score

B.

F-1 Score

C.

Weighted Precision

D.

Weighted Recall

E.

Mean Absolute Error

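For context, a hedged sketch of evaluating a regression model with ADSEvaluator (the toy data and linear model stand in for the housing-price model; exact wrappers can vary with the ADS version):

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator

# Toy stand-in for the housing-price data set.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

reg = ADSModel.from_estimator(LinearRegression().fit(X_train, y_train))
evaluator = ADSEvaluator(ADSData(X_test, y_test), models=[reg])
print(evaluator.metrics)    # regression metrics such as Explained Variance Score and Mean Absolute Error
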
Question # 20

You want to ensure that all stdout and stderr from your code are automatically collected and

logged, without implementing additional logging in your code. How would you achieve this with Data

Science Jobs?

A.

On job creation, enable logging and select a log group. Then, select either a log or the option

to enable automatic log creation.

B.

Make sure that your code is using the standard logging library and then store all the logs to

Object Storage at the end of the job.

C.

Create your own log group and use a third-party logging service to capture job run details for

log collection and storing.

D.

You can implement custom logging in your code by using the Data Science Jobs logging

service.

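As an illustration, a hedged sketch of attaching logging when defining a job with the ADS jobs API (all OCIDs and the script name are placeholders):

from ads.jobs import Job, DataScienceJob, ScriptRuntime

job = (
    Job(name="recurring-training-job")
    .with_infrastructure(
        DataScienceJob()
        .with_log_group_id("ocid1.loggroup.oc1..example")   # placeholder log group
        .with_log_id("ocid1.log.oc1..example")              # omit to let the service create a log automatically
    )
    .with_runtime(ScriptRuntime().with_source("train.py"))  # placeholder script
)

With this in place, everything the job writes to stdout and stderr is collected into the selected log without any extra logging code.
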
Question # 21

Which TWO statements are true about published conda environments?

A.

The odsc conda init command is used to configure the location of published conda environments.

B.

They can be used in Data Science Jobs and model deployments.

C.

Your notebook session acts as the source for sharing published conda environments with team members.

D.

You can only create a published conda environment by modifying a Data Science conda environment.

E.

They are curated by Oracle Cloud Infrastructure (OCI) Data Science.

Question # 22

As a data scientist, you create models for cancer prediction based on mammographic images.

The correct identification is very crucial in this case. After evaluating two models, you arrive at the

following confusion matrix.

• Model 1 has a test accuracy of 80% and a recall of 70%.

• Model 2 has a test accuracy of 75% and a recall of 85%.

Which model would you prefer and why?

A.

Model 2, because recall is high.

B.

Model 1, because the test accuracy is high.

C.

Model 2, because recall has more impact on predictions in this use case.

D.

Model 1, because recall has less impact on predictions in this use case.

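For reference, recall = TP / (TP + FN), the fraction of actual positive cases the model catches. In a cancer screening setting a false negative (a missed cancer) is far more costly than a false positive, so recall rather than overall accuracy typically drives the model choice.
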
Question # 23

The Accelerated Data Science (ADS) model evaluation classes support different types of machine

learning modeling techniques. Which three types of modeling techniques are supported by ADS

Evaluators?

A.

Principal Component Analysis

B.

Multiclass Classification

C.

K-means Clustering

D.

Recurrent Neural Network

E.

Binary Classification

F.

Regression Analysis

Question # 24

While reviewing your data, you discover that your data set has a class imbalance. You are aware

that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation

tools for data set transformation. Which would be the right tool to correct any imbalance between

the classes?

A.

visualize_transforms()

B.

auto_transform()

C.

sample()

D.

suggest_recommendations()

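For context, a hedged sketch of where these transformation tools sit in the ADS dataset workflow (the file name and target column are placeholders):

from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("churn.csv", target="churned")   # placeholder data set

ds.suggest_recommendations()          # lists detected issues, including class imbalance, with suggested fixes
transformed = ds.auto_transform()     # applies the suggested transformations, including up/down-sampling
ds.visualize_transforms()             # shows which transformations were applied
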