
AAISM Questions and Answers

Question # 6

Which of the following is the MOST effective action an organization can take to address data security risk when using generative AI features in an application?

A. Rely on the AI provider’s independent third-party audit reports for assurance
B. Establish policies and awareness training for acceptable use of AI
C. Require opt-out provisions for data usage in service agreements
D. Establish guidelines and best practices with third parties for intellectual property ownership

Question # 7

Which AI model is BEST suited to ensure explainability in an HR department’s pre-screening tool for candidate resumes?

A. Support vector machine
B. Neural network
C. Decision tree
D. Gradient boosting machine

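Note on Question # 7: decision trees are generally considered explainable because their learned logic can be exported as plain if/then rules. A minimal sketch, assuming scikit-learn is available and using the bundled iris data purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Fit a small tree and print its rules in a form a reviewer can read directly
    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
    print(export_text(tree, feature_names=list(data.feature_names)))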
Question # 8

Which of the following is the MAIN objective of the operational phase of AI life cycle management?

A. Optimize the model’s algorithms
B. Align the model to business needs
C. Monitor model performance
D. Obtain end-user feedback

Question # 9

Which of the following is the MOST effective defense against cyberattacks that alter input data to avoid detection by the model?

A. Conducting periodic monitoring activities on the model’s decisions
B. Enhancing model robustness through adversarial training
C. Implementing restricted access to the model’s internal parameters
D. Applying differential privacy controls on training datasets

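Note on Question # 9: adversarial training augments the training set with deliberately perturbed inputs so the model also learns from evasive variants. A toy sketch, assuming scikit-learn is available and using the sign-of-the-weights trick that applies to a linear model:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=10, random_state=1)
    base = LogisticRegression(max_iter=1000).fit(X, y)

    eps = 0.3                                   # illustrative perturbation budget
    # Shift each sample against its own class direction to mimic an evasion attempt
    X_adv = X - eps * np.sign(base.coef_[0]) * np.where(y == 1, 1, -1)[:, None]

    # Retrain on clean plus perturbed copies, then check accuracy on the perturbed set
    robust = LogisticRegression(max_iter=1000).fit(np.vstack([X, X_adv]), np.hstack([y, y]))
    print(robust.score(X_adv, y))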
Question # 10

An organization is reviewing an AI application to determine whether it is still needed. Engineers have been asked to analyze the number of incorrect predictions against the total number of predictions made. Which of the following is this an example of?

A. Control self-assessment (CSA)
B. Model validation
C. Key performance indicator (KPI)
D. Explainable decision-making

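Note on Question # 10: the ratio described in the question (incorrect predictions divided by total predictions) is simply the model's error rate, a metric that can be tracked over time. A minimal sketch with hypothetical counts:

    incorrect_predictions = 37      # hypothetical counts for illustration only
    total_predictions = 1_000

    error_rate = incorrect_predictions / total_predictions
    print(f"Error rate: {error_rate:.1%}")      # 3.7%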
Question # 11

Implementing which of the following would MOST effectively address bias in generative AI models?

A. Data augmentation
B. Data minimization
C. Adversarial training
D. Fairness constraints

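Note on Question # 11: one common way fairness constraints are operationalized is by bounding the gap in positive-outcome rates between groups (demographic parity). A minimal sketch with made-up predictions and an assumed tolerance:

    import numpy as np

    predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])      # hypothetical model outputs
    group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    gap = abs(predictions[group == "A"].mean() - predictions[group == "B"].mean())
    MAX_GAP = 0.2                                          # assumed tolerance, a policy decision
    print(f"Parity gap {gap:.2f}: constraint {'met' if gap <= MAX_GAP else 'violated'}")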
Question # 12

Which of the following factors is MOST important for preserving user confidence and trust in generative AI systems?

A. Bias minimization
B. Access controls and secure storage solutions
C. Transparent disclosure and informed consent
D. Data anonymization

Question # 13

A data scientist creating categories and training an algorithm on large data sets is performing which learning technique?

A. Supervised
B. Reinforcement
C. Unsupervised
D. Machine learning (ML)

Question # 14

Which of the following BEST describes the role of risk documentation in an AI governance program?

A. Providing a record of past AI-related incidents for audits
B. Outlining the acceptable levels of risk for AI-related initiatives
C. Offering detailed analyses of technical risk and vulnerabilities
D. Demonstrating governance, risk, and compliance (GRC) for external stakeholders

Question # 15

Which of the following is the BEST control for preventing deepfakes?

A. Output provenance verification
B. Regular AI risk assessment
C. AI governance policies
D. System input validation

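Note on Question # 15: output provenance verification generally means attaching verifiable origin information to generated content so downstream consumers can confirm it came from an approved pipeline. Production systems rely on signed content-credential manifests; the toy sketch below only illustrates the verification idea with a keyed hash:

    import hashlib
    import hmac

    SIGNING_KEY = b"provenance-demo-key"      # assumption: a real key lives in a KMS

    def sign(content: bytes) -> str:
        return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

    def verify(content: bytes, signature: str) -> bool:
        return hmac.compare_digest(sign(content), signature)

    media = b"...generated media bytes..."
    tag = sign(media)
    print(verify(media, tag))                 # True while the content is unchanged
    print(verify(media + b"x", tag))          # False once the content is altered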
Question # 16

An organization plans to implement a new AI system. Which of the following is the MOST important factor in determining the level of risk monitoring activities required?

A. The organization’s risk appetite
B. The organization’s number of AI system users
C. The organization’s risk tolerance
D. The organization’s compensating controls

Question # 17

An organization is commissioning a third-party AI system using sensitive data. Which metric is MOST important to consider?

A. Accessibility rating
B. Model response time
C. Accuracy thresholds
D. Service availability

Question # 18

An organization develops and implements an AI-based plug-in for users that summarizes their individual emails. Which of the following is the GREATEST risk associated with this application?

A. Lack of application vulnerability scanning
B. Data format incompatibility
C. Insufficient rate limiting for APIs
D. Inadequate controls over parameters

Question # 19

AI developers often find it difficult to explain the processes inside deep learning systems PRIMARILY because:

A. Training data input for learning is spread throughout the public domain and continues to change
B. Generated knowledge dynamically changes in memory without being tracked by change history logs
C. Applied algorithms are based on probability theories to improve system performance
D. Neural network architectures can include statistical methods that are not fully understood

Question # 20

A financial organization is concerned about AI data poisoning. Which control BEST mitigates this risk?

A. Implementing a break-glass policy
B. Transparency with customers about data sources
C. Using training data from multiple sources
D. Delivering AI-specific security awareness training

Question # 21

Which of the following would MOST effectively obtain ongoing support from stakeholders to align AI initiatives with business objectives?

A. Conducting periodic organization-wide AI staff training
B. Addressing and optimizing AI-related risk
C. Developing and monitoring the AI strategic roadmap
D. Quantifying and communicating the value of AI solutions

Question # 22

The PRIMARY ethical concern of generative AI is that it may:

A. Produce unexpected data that could lead to bias
B. Cause information integrity issues
C. Cause information to become unavailable
D. Breach the confidentiality of information

Question # 23

In a new supply chain management system, AI models used by participating parties are interactively connected to generate advice in support of management decision making. Which of the following is the GREATEST challenge related to this architecture?

A. Establishing clear lines of responsibility for AI model outputs
B. Identifying hallucinations returned by AI models
C. Determining the aggregate risk of the system
D. Explaining the overall benefit of the system to stakeholders

Question # 24

Secure aggregation enhances federated learning security by:

A. Encrypting individual model updates so only the server can access them
B. Applying differential privacy to training data
C. Ensuring client contributions remain confidential even if the server is compromised
D. Processing client updates in isolation

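Note on Question # 24: the core idea of secure aggregation is that clients add masks that cancel out in the server's sum, so only the aggregate update is ever visible. A two-client toy illustration (values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    u1, u2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])   # true client updates
    mask = rng.normal(size=2)                             # pairwise mask the clients agree on

    masked1, masked2 = u1 + mask, u2 - mask               # what each client actually sends
    print(masked1, masked2)                               # individually meaningless to the server
    print(masked1 + masked2)                              # equals u1 + u2, i.e. [4. 6.]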
Question # 25

Within an incident handling process, which of the following would BEST help restore end-user trust in an AI system?

A. The AI model prioritizes incidents based on business impact
B. AI is being used to monitor incident detection and alerts
C. The AI model’s outputs are validated by team members
D. Remediation of the AI system based on lessons learned

Question # 26

A CISO must provide KPIs for the organization’s newly deployed AI chatbot. Which metrics are BEST?

A. Response time and throughput
B. Error rate and bias detection
C. Customer effort score and user retention
D. Explainability and F1 score

Question # 27

A financial organization relies on AI-based identity verification and fraud detection services. Which of the following BEST integrates AI security risk into the business continuity plan (BCP)?

A. Using explainable AI to document decision paths
B. Periodic retraining using pre-labeled data
C. Including AI model supporting infrastructure in disaster recovery scenarios
D. Duplicating AI microservices across multiple availability zones

Question # 28

An organization is deploying an automated AI cybersecurity system. Which of the following would be the MOST effective strategy to minimize human error and improve overall security?

A. Conducting periodic penetration testing
B. Using historical data to train AI detection software
C. Utilizing machine learning (ML) algorithms to ensure responsible use
D. Implementing manual monitoring of potential alerts

Question # 29

Within an incident handling process, which of the following would BEST help restore end-user trust in an AI system?

A. Remediation of the AI system based on lessons learned
B. The AI model’s outputs are validated by team members
C. AI is used to monitor incident detection and alerts
D. The AI model prioritizes incidents based on business impact

Question # 30

Which of the following is the GREATEST benefit of implementing an AI tool to safeguard sensitive data and prevent unauthorized access?

A. Timely analysis of endpoint activities
B. Timely initiation of incident response
C. Reduced number of false positives
D. Reduced need for data classification

Question # 31

AI developers often find deep learning systems difficult to explain PRIMARILY because:

A. Knowledge dynamically changes without logs
B. Neural network architectures include statistical methods not fully understood
C. Algorithms rely on probability theories
D. Training data is spread across public domains

Question # 32

When robust input controls cannot prevent prompt injections in an LLM, what is the BEST compensating control?

A. Fine-tune the system to validate inputs
B. Implement identity and access management (IAM)
C. Conduct human reviews of AI system inputs
D. Review and annotate the AI system's outputs

Question # 33

When robust input controls are not practical on a large language model (LLM) to prevent prompt injection attacks from external threats, which of the following would be the BEST compensating control to address the risk?

A. Review and annotate the AI system's outputs
B. Implement identity and access management (IAM)
C. Conduct human reviews of the AI system's inputs
D. Fine-tune the system to validate the AI system's inputs

Question # 34

An aerospace manufacturing company that prioritizes accuracy and security has decided to use generative AI to enhance operations. Which of the following large language model (LLM) adoption plans BEST aligns with the company’s risk appetite?

A. Developing a public LLM to automate critical functions
B. Purchasing an LLM dataset on the open market
C. Contracting LLM access from a reputable third-party provider
D. Developing a private LLM to automate non-critical functions

Question # 35

An organization has requested a developer to apply AI algorithms to existing modules in order to improve customer service quality. At this stage, which of the following should be considered FIRST?

A. The developer may need to be held accountable for business inquiries raised by customers
B. IT management may need to revise the service agreement if AI behavior cannot be predefined
C. Project sponsors may need to agree on a phased approach in order to ensure safe release
D. The organization may need to explain the performance of the applied AI algorithm

Question # 36

An organization has discovered that employees have started regularly utilizing open-source generative AI without formal guidance. Which of the following should be the CISO’s GREATEST concern?

A. Lack of monitoring
B. Policy violations
C. Data leakage
D. Model hallucinations

Question # 37

When evaluating a new AI tool for intrusion prevention, which is MOST important to ensure fit within the existing program architecture?

A. Ensure automated response orchestration
B. Prioritize real-time anomaly detection
C. Confirm tool capabilities align with control objectives
D. Select a tool that integrates with the SIEM

Question # 38

Which of the following is the GREATEST concern when a vendor enables generative AI features for an organization's critical system?

A. Access to the model
B. Proposed regulatory enhancements
C. Security monitoring and alerting
D. Bias and ethical practices

Question # 39

Which of the following is the MOST effective way to prevent a model inversion attack?

A. Monitor model output for anomalies
B. Utilize data pseudonymization
C. Implement differential privacy during model training
D. Ensure data minimization

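Note on Question # 39: the usual way differential privacy is applied during training (the DP-SGD idea) is to clip each per-example gradient and add calibrated noise before the update, which limits what any single record can reveal. A toy sketch with illustrative parameters; production use relies on vetted DP libraries:

    import numpy as np

    rng = np.random.default_rng(0)
    per_example_grads = rng.normal(size=(32, 10))         # stand-in gradients for one batch

    CLIP_NORM, NOISE_MULTIPLIER = 1.0, 1.1                # illustrative privacy parameters
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, CLIP_NORM / norms)

    noise = rng.normal(scale=NOISE_MULTIPLIER * CLIP_NORM, size=clipped.shape[1])
    private_grad = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    print(private_grad)                                   # the privatized average gradient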
Question # 40

When using AI as part of incident response, which of the following BEST ensures the automation aligns with regulatory and governance obligations?

A. Use deep learning models to autonomously classify all incidents
B. Train the AI incident response platform to mirror legacy response workflows and log containment
C. Apply anomaly detection models to filter incoming threats and automate containment
D. Implement a tiered automation strategy where severity ratings inform the need for human oversight

Question # 41

Which of the following would MOST effectively ensure an organization developing AI systems has comprehensive data classification and inventory management?

A. Creating a centralized team to oversee the classification of data used in AI projects
B. Conducting quarterly audits of AI data sets for anomalies and missing metadata
C. Establishing a manual process to categorize data based on business needs and regulatory compliance
D. Implementing an automated data cataloging tool that integrates with all organizational data repositories

Question # 42

Which of the following mitigation control strategies would BEST reduce the risk of introducing hidden backdoors during model fine-tuning via third-party components?

A. Leveraging open-source models and packages
B. Performing threat modeling and integrity checks
C. Disabling runtime logs during model training
D. Implementing unsupervised learning methods

Question # 43

Which of the following is the MOST important consideration when an organization is adopting generative AI for personalized advertising?

A. Fraud risk
B. Reputational risk
C. Commercial risk
D. Regulatory risk

Question # 44

Which of the following AI data management techniques involves creating validation and test data?

A. Training
B. Annotating
C. Splitting
D. Learning

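Note on Question # 44: splitting refers to partitioning the available data into training, validation, and test subsets. A minimal sketch, assuming scikit-learn is available and using illustrative proportions:

    import numpy as np
    from sklearn.model_selection import train_test_split

    X, y = np.arange(100).reshape(-1, 1), np.arange(100)

    X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    print(len(X_train), len(X_val), len(X_test))          # 70 / 15 / 15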
Question # 45

A global organization experienced multiple incidents of staff pasting confidential data into public chatbots. Which action is MOST important to reduce short-term risk?

A. Deliver role-based, scenario-driven AI security training mapped to job functions
B. Require employees to complete an annual generic phishing and deepfake module
C. Publish an AI acceptable use policy and collect signatures
D. Block access to public LLMs at the network perimeter

Question # 46

Which of the following BEST describes an adversarial attack on an AI model?

A. Attacking the underlying hardware of the AI system
B. Providing inputs that mislead the AI model into incorrect predictions
C. Reverse engineering the AI model using social engineering techniques
D. Conducting denial-of-service (DoS) attacks against AI APIs

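Note on Question # 46: an adversarial (evasion) input is a deliberately perturbed sample that flips a model's prediction even though the change is small. A toy sketch against a linear classifier, assuming scikit-learn is available; the perturbation is computed just large enough to cross the decision boundary:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    x = X[0].copy()
    w, score = clf.coef_[0], clf.decision_function([x])[0]

    eps = 1.05 * abs(score) / np.abs(w).sum()             # just enough budget to cross the boundary
    x_adv = x - np.sign(score) * eps * np.sign(w)         # FGSM-style sign perturbation

    print(clf.predict([x])[0], clf.predict([x_adv])[0])   # the prediction flips
    print(np.max(np.abs(x_adv - x)))                      # per-feature change stays at eps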
Question # 47

An organization implementing an LLM application sees unexpected cost increases due to excessive computational resource usage. Which vulnerability is MOST likely in need of mitigation?

A. Excessive agency
B. Sensitive information disclosure
C. Unbounded consumption
D. System prompt leakage

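Note on Question # 47: a common mitigation for unbounded consumption is enforcing per-client request and token budgets before any call reaches the model. A minimal sketch with hypothetical names and limits (window reset omitted for brevity):

    from collections import defaultdict

    MAX_REQUESTS_PER_HOUR = 100                 # assumed limits from capacity planning
    MAX_TOKENS_PER_HOUR = 50_000

    usage = defaultdict(lambda: {"requests": 0, "tokens": 0})

    def allow_request(client_id: str, estimated_tokens: int) -> bool:
        u = usage[client_id]
        if u["requests"] >= MAX_REQUESTS_PER_HOUR or u["tokens"] + estimated_tokens > MAX_TOKENS_PER_HOUR:
            return False                        # throttle instead of forwarding the call
        u["requests"] += 1
        u["tokens"] += estimated_tokens
        return True

    print(allow_request("tenant-42", 1_200))    # True while the client is within budget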
Question # 48

Which defense is MOST effective against cyberattacks that alter input data to avoid detection?

A. Enhancing model robustness through adversarial training
B. Restricting access to internal model parameters
C. Conducting periodic monitoring of decisions
D. Applying differential privacy to training data

Question # 49

A large corporation has received an influx of sophisticated credential-phishing emails and wants to leverage an AI solution to detect and quarantine these messages before they reach employees. Which of the following blue-team AI features is BEST suited to this task?

A. Large language model (LLM)
B. Natural language processing (NLP)
C. Natural language generation (NLG)
D. Retrieval-augmented generation (RAG)

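Note on Question # 49: natural language processing in this context means analyzing message text to score it as phishing or benign. A minimal sketch with a toy corpus, assuming scikit-learn is available; a real deployment needs a large labeled dataset:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Urgent: verify your password at this link now",
        "Your account is locked, confirm credentials immediately",
        "Team lunch moved to 1 pm on Thursday",
        "Quarterly report attached for your review",
    ]
    labels = [1, 1, 0, 0]                       # 1 = phishing, 0 = benign (toy labels)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(emails, labels)
    print(model.predict(["Please confirm your password to unlock your account"]))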
Question # 50

When creating a use case for an AI model that provides sensitive decisions affecting end users, which of the following is the GREATEST benefit of using model cards?

A. Ethical considerations of the model are documented
B. Technical instructions for model deployment are created
C. Data collection requirements are reduced
D. Model type selection is documented

Question # 51

Which of the following is a key risk indicator (KRI) for an AI system used for threat detection?

A. Number of training epochs
B. Training time of the model
C. Number of layers in the neural network
D. Number of system overrides by cyber analysts

Question # 52

Which of the following methods provides the MOST effective protection against model inversion attacks?

A. Using adversarial training
B. Reducing the model’s complexity
C. Implementing regularization output
D. Increasing the number of training iterations

Question # 53

Which phase of the AI data life cycle presents the GREATEST inherent risk?

A. Monitoring
B. Maintenance
C. Preparation
D. Training

Question # 54

An aerospace manufacturer prioritizing accuracy and security wants to use generative AI. Which LLM adoption plan BEST aligns with its risk appetite?

A. Developing a private LLM to automate non-critical functions
B. Contracting LLM access from a reputable third-party provider
C. Developing a public LLM to automate critical functions
D. Purchasing an LLM dataset on the open market

Question # 55

Which of the following BEST strengthens information security controls around the use of generative AI applications?

A. Ensuring controls exceed industry benchmarks
B. Monitoring AI outputs against policy
C. Implementing a kill switch
D. Validating AI model training data

Question # 56

Which BEST describes the role of model cards in AI solutions?

A. They visualize AI model performance
B. They document training data and AI model use cases
C. They help developers create synthetic data
D. They automatically fine-tune AI models

Question # 57

Which of the following controls BEST mitigates the risk of bias in AI models?

A. Robust access control techniques
B. Regular data reconciliation
C. Cryptographic hash functions
D. Diverse data sourcing strategies

Question # 58

During red-team testing of an AI system used to make lending decisions, which of the following techniques BEST simulates a data poisoning attack?

A. Inputting encrypted data into the model
B. Adding noise to output predictions
C. Stealing model weights from a deployed API
D. Corrupting training data sets to manipulate outcomes

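Note on Question # 58: a data poisoning simulation can be as simple as corrupting a slice of the training labels and comparing the poisoned model with one trained on clean data. A toy sketch, assuming scikit-learn is available:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    y_bad = y_tr.copy()
    flip = np.random.default_rng(0).choice(len(y_tr), size=len(y_tr) // 4, replace=False)
    y_bad[flip] = 1 - y_bad[flip]               # corrupt 25% of the training labels
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

    print(clean.score(X_te, y_te), poisoned.score(X_te, y_te))   # compare test accuracy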
Question # 59

Which of the following is the BEST approach for minimizing risk when integrating acceptable use policies for AI foundation models into business operations?

A. Limit model usage to predefined scenarios specified by the developer
B. Rely on the developer's enforcement mechanisms
C. Establish AI model life cycle policy and procedures
D. Implement responsible development training and awareness

Question # 60

Which AI data management technique involves creating validation and test data?

A. Learning
B. Splitting
C. Training
D. Annotating

Question # 61

What is the PRIMARY purpose of a dedicated AI management system policy?

A. Minimizing environmental impact
B. Optimizing AI model accuracy
C. Complying with external regulations
D. Providing a framework to set AI objectives

Question # 62

Which of the following key risk indicators (KRIs) is MOST relevant when evaluating the effectiveness of an organization’s AI risk management program?

A. Number of AI models deployed into production
B. Percentage of critical business systems with AI components
C. Percentage of AI projects in compliance
D. Number of AI-related training requests submitted

Question # 63

A vendor switched its chatbot’s AI model without due diligence, causing the chatbot to provide unethical investment advice. What control BEST prevents this scenario?

A. Master services agreement
B. Change management
C. Shared responsibility model
D. Data minimization

Question # 64

A data scientist creating categories and training the algorithm on large data sets is an example of which type of AI model learning technique?

A. Reinforcement
B. Unsupervised
C. Machine learning (ML)
D. Supervised

Question # 65

Which of the following AI system vulnerabilities is MOST easily exploited by adversaries?

A. Inaccurate generalizations from new data by the AI model
B. Weak controls for access to the AI model
C. Lack of protection against denial of service (DoS) attacks
D. Inability to detect input modifications causing inappropriate AI outputs

Question # 66

An organization plans to apply an AI system to its business, but developers find it difficult to predict system results due to a lack of visibility into the inner workings of the AI model. Which of the following is the GREATEST challenge associated with this situation?

A. Gaining the trust of end users through explainability and transparency
B. Assigning a risk owner who is responsible for system uptime and performance
C. Determining average turnaround time for AI transaction completion
D. Continuing operations to meet expected AI security requirements

Question # 67

An organization plans to use AI to analyze the shopping patterns of its customers to predict interests and send targeted, customized marketing emails. Which of the following should be done FIRST?

A. Obtain customer consent
B. Train the marketing department
C. Update the terms of service
D. Verify customer email addresses

Question # 68

How can an organization BEST remain compliant when decommissioning an AI system that recorded patient data?

A. Perform a post-destruction risk assessment
B. Ensure backups are tested and access controls are audited
C. Update governance policies based on lessons learned
D. Ensure a certificate of destruction is received and archived

Question # 69

Which of the following should be the PRIMARY consideration for an organization concerned about liabilities associated with unforeseen behavior from agentic AI systems?

A. Model dependencies
B. Approved base models
C. Accountability model
D. Acceptable risk level

Question # 70

Which of the following types of data is used to tune hyperparameters?

A. Validation
B. Configuration
C. Training
D. Test

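Note on Question # 70: validation data is the split used to compare candidate hyperparameter values; the test split is held back for the final evaluation. A minimal sketch, assuming scikit-learn is available:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, random_state=0)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    best_c, best_score = None, -1.0
    for c in [0.01, 0.1, 1.0, 10.0]:                        # candidate hyperparameter values
        model = make_pipeline(StandardScaler(), LogisticRegression(C=c, max_iter=1000)).fit(X_tr, y_tr)
        score = model.score(X_val, y_val)                   # validation data drives the choice
        if score > best_score:
            best_c, best_score = c, score

    final = make_pipeline(StandardScaler(), LogisticRegression(C=best_c, max_iter=1000)).fit(X_tr, y_tr)
    print(best_c, final.score(X_te, y_te))                  # test data gives the final estimate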
Question # 71

A programmer suspects an AI system is inferring sensitive user information. What is the BEST action?

A. Inform the governance panel
B. Suggest fine-tuning
C. Conduct a code review
D. Alert the CIO

Question # 72

Which of the following is the BEST way to ensure role clarity and staff effectiveness when implementing AI-assisted security monitoring tools?

A. Defer implementation until the security team can be expanded with data scientists.
B. Update the security program to include cross-functional AI-specific responsibilities.
C. Transition responsibilities for AI tools to external consultants for improved scalability.
D. Increase training budgets for business staff to obtain vendor-neutral AI certifications.

Question # 73

A viral video shows a blurry person making claims about a product safety issue, and the video contains random low-quality sections. Which threat does this MOST likely represent?

A. Hallucinations
B. Model drift
C. Data poisoning
D. Deepfake

Question # 74

An AI system that supports critical processes has deviated from expected performance and is producing biased outcomes. Which of the following is the BEST course of action?

A. Retrain the model with a new and expanded dataset
B. Perform a root cause analysis to identify mitigation steps
C. Conduct audits of the data and the model
D. Activate the model kill switch

Question # 75

Security and assurance requirements for AI systems should FIRST be embedded in the:

A. Model design phase
B. Model training phase
C. Model testing phase
D. Model deployment phase

Question # 76

Which of the following strategies is the MOST effective way to protect against AI data poisoning?

A. Increasing model complexity to better handle data variations
B. Ensuring the model is trained on diverse data sources
C. Incorporating more features and data into model training
D. Using robust data validation techniques and anomaly detection

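Note on Question # 76: data validation combined with anomaly detection typically means screening candidate training records and quarantining outliers before they reach the training pipeline. A minimal sketch, assuming scikit-learn is available and using synthetic data:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    clean_rows = rng.normal(0, 1, size=(500, 5))
    suspect_rows = rng.normal(8, 1, size=(5, 5))            # injected outliers for illustration
    candidate_data = np.vstack([clean_rows, suspect_rows])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(candidate_data)
    flags = detector.predict(candidate_data)                # -1 marks anomalous rows
    print("rows quarantined for review:", int((flags == -1).sum()))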