Which of the following are true regarding the CPU of a QEMU virtual machine? (Choose two.)
A. The CPU architecture of a QEMU virtual machine is independent of the host system's architecture.
B. Each QEMU virtual machine can only have one CPU with one core.
C. For each QEMU virtual machine, one dedicated physical CPU core must be reserved.
D. QEMU uses the concept of virtual CPUs to map the virtual machines to physical CPUs.
E. QEMU virtual machines support multiple virtual CPUs in order to run SMP systems.
The CPU architecture of a QEMU virtual machine is independent of the host system's architecture. QEMU can emulate many CPU architectures, including x86, ARM, Alpha, and SPARC, regardless of the host system's architecture. This allows QEMU to run guest operating systems that are not compatible with the host system's hardware. Therefore, option A is correct. QEMU virtual machines also support multiple virtual CPUs (vCPUs) in order to run SMP systems. Each vCPU is a thread that runs on a physical CPU core, and QEMU allows the user to specify the number of vCPUs and the CPU model for each virtual machine. QEMU can run SMP systems with multiple vCPUs as well as single-processor systems with one vCPU. Therefore, option E is also correct. Option B is wrong because QEMU virtual machines can have more than one CPU with more than one core. Option C is wrong because QEMU does not require a dedicated physical CPU core for each virtual machine; physical CPU cores can be shared among multiple virtual machines, depending on the load and the scheduling policy. Option D is wrong because QEMU does not itself map virtual machines to physical CPUs: each vCPU is an ordinary host thread, and the host kernel's scheduler decides which physical CPU it runs on.
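As a minimal sketch, a guest with several vCPUs can be started as follows (the memory size and disk image name are placeholders):
qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -hda guest.img
# -smp 4 exposes four vCPUs to the guest, which can then boot as an SMP system;
# each vCPU is a host thread that the host scheduler places on the physical cores.
References: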
QEMU vs VirtualBox: What’s the difference? - LinuxConfig.org
QEMU / KVM CPU model configuration — QEMU documentation
Introduction — QEMU documentation
Qemu/KVM Virtual Machines - Proxmox Virtual Environment
Which file in a cgroup directory contains the list of processes belonging to this cgroup?
pids
members
procs
casks
subjects
The file procs in a cgroup directory contains the list of processes belonging to this cgroup. Each line in the file shows the PID of a process that is a member of the cgroup. A process can be moved to a cgroup by writing its PID into the cgroup's procs file. For example, to move the process with PID 24982 to the cgroup cg1, the following command can be used: echo 24982 > /sys/fs/cgroup/cg1/procs. The file procs is different from the file tasks, which lists the threads belonging to the cgroup. The file procs can be used to move all threads in a thread group at once, while the file tasks can be used to move individual threads.
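As a minimal sketch (run as root; the cgroup name cg1 is an example, and on a cgroup2 mount the file is exposed as cgroup.procs):
mkdir /sys/fs/cgroup/cg1                    # create a new cgroup
echo $$ > /sys/fs/cgroup/cg1/cgroup.procs   # move the current shell into it
cat /sys/fs/cgroup/cg1/cgroup.procs         # list member PIDs, one per line
References: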
Creating and organizing cgroups · cgroup2 - GitHub Pages
Control Groups — The Linux Kernel documentation
Which statement is true regarding the Linux kernel module that must be loaded in order to use QEMU with hardware virtualization extensions?
It must be loaded into the kernel of the host system only if the console of a virtual machine will be connected to a physical console of the host system.
It must be loaded into the kernel of each virtual machine that will access files and directories from the host system's file system.
It must be loaded into the kernel of the host system in order to use the virtualization extensions of the host system's CPU.
It must be loaded into the kernel of the first virtual machine as it interacts with the QEMU bare metal hypervisor and is required to trigger the start of additional virtual machines.
It must be loaded into the kernel of each virtual machine to provide paravirtualization, which is required by QEMU.
The Linux kernel module that must be loaded in order to use QEMU with hardware virtualization extensions is KVM (Kernel-based Virtual Machine). KVM is a full virtualization solution that allows a user space program (such as QEMU) to utilize the hardware virtualization features of various processors (such as Intel VT or AMD-V). KVM consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. KVM must be loaded into the kernel of the host system in order to use the virtualization extensions of the host system’s CPU. This enables QEMU to run multiple virtual machines with unmodified Linux or Windows images, each with private virtualized hardware. KVM is integrated with QEMU, so there is no need to load it into the kernel of each virtual machine or the first virtual machine. KVM also does not require paravirtualization, which is a technique that modifies the guest operating system to communicate directly with the hypervisor, bypassing the emulation layer.
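As a minimal sketch, the module can be loaded and verified on the host as follows (use kvm_amd instead of kvm_intel on AMD systems):
modprobe kvm_intel   # pulls in the core kvm module as a dependency
lsmod | grep kvm     # confirm that kvm and kvm_intel are loaded
References: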
Features/KVM - QEMU
Kernel-based Virtual Machine
KVM virtualization on Red Hat Enterprise Linux 8 (2023)
Which of the following statements are true regarding VirtualBox?
It is a hypervisor designed as a special kernel that is booted before the first regular operating system starts.
It only supports Linux as a guest operating system and cannot run Windows inside a virtual machine.
It requires dedicated shared storage, as it cannot store virtual machine disk images locally on block devices of the virtualization host.
It provides both a graphical user interface and command line tools to administer virtual machines.
It is available for Linux only and requires the source code of the currently running Linux kernel to be available.
VirtualBox is a hosted hypervisor, which means it runs as an application on top of an existing operating system, not as a special kernel that is booted before the first regular operating system starts. VirtualBox supports a large number of guest operating systems, including Windows, Linux, Solaris, OS/2, and OpenBSD. VirtualBox does not require dedicated shared storage, as it can store virtual machine disk images locally on block devices of the virtualization host, on network shares, or on iSCSI targets. VirtualBox provides both a graphical user interface (GUI) and command line tools (VBoxManage) to administer virtual machines. VirtualBox is available for Windows, Linux, macOS, and Solaris hosts, and does not require the source code of the currently running Linux kernel to be available.
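As a minimal sketch of the command line interface (the VM name demo is a placeholder):
VBoxManage list vms                         # enumerate registered virtual machines
VBoxManage startvm "demo" --type headless   # start a VM without opening a GUI window
References: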
Oracle VM VirtualBox: Features Overview
What happens when the following command is executed twice in succession?
docker run -tid -v data:/data debian bash
A. The container resulting from the second invocation can only read the content of /data/ and cannot change it.
B. Each container is equipped with its own independent data volume, available at /data/ in the respective container.
C. Both containers share the contents of the data volume, have full permissions to alter its content and mutually see their respective changes.
D. The original content of the container image data is available in both containers, although changes stay local within each container.
E. The second command invocation fails with an error stating that the volume data is already associated with a running container.
The command docker run -tid -v data:/data debian bash creates and runs a new container from the debian image, with an interactive terminal and in detached mode, and mounts a named volume data at /data in the container. If the volume data does not exist, it is created automatically. If the command is executed twice in succession, two containers are created and run, each with its own terminal and process ID, but they share the same volume data. This means that both containers can access, modify, and see the contents of the data volume, and any changes made by one container are reflected in the other container. Therefore, statement C is true and the correct answer. Statements A, B, D, and E are false, as they do not describe the behavior of the command or the volume correctly.
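As a minimal sketch demonstrating the shared volume (the container names c1 and c2 are added here for illustration):
docker run -tid --name c1 -v data:/data debian bash
docker run -tid --name c2 -v data:/data debian bash
docker exec c1 sh -c 'echo hello > /data/file'
docker exec c2 cat /data/file   # prints "hello": both containers see the change
References: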
docker run | Docker Docs
Docker run reference | Docker Docs - Docker Documentation
Use volumes | Docker Documentation
How to Use Docker Run Command with Examples - phoenixNAP
If a Dockerfile contains the following lines:
WORKDIR /
RUN cd /tmp
RUN echo test > test
where is the file test located?
/tmp/test within the container image.
/root/test within the container image.
/test within the container image.
/tmp/test on the system running docker build.
test in the directory holding the Dockerfile.
The WORKDIR instruction sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. The RUN instruction executes commands in a new layer on top of the current image and commits the results. The RUN cd command does not change the working directory for the next RUN instruction, because each RUN command runs in a new shell and a new environment. Therefore, the file test is created in the root directory (/) of the container image, not in the /tmp directory.
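As a minimal sketch, a corrected Dockerfile that places the file in /tmp would use WORKDIR instead of RUN cd:
WORKDIR /tmp
RUN echo test > test
# the file now ends up at /tmp/test, because WORKDIR persists across instructions
References: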
Dockerfile reference: WORKDIR
Dockerfile reference: RUN
difference between RUN cd and WORKDIR in Dockerfile
Which of the following statements in aDockerfileleads to a container which outputs hello world? (Choose two.)
ENTRYPOINT "echo Hello World"
ENTRYPOINT [ "echo hello world" ]
ENTRYPOINT [ "echo", "hello", "world" ]
ENTRYPOINT echo Hello World
ENTRYPOINT "echo", "Hello", "World*
The ENTRYPOINT instruction in a Dockerfile specifies the default command to run when a container is started from the image. It can be written in two forms: the exec form, a JSON array such as ENTRYPOINT [ "executable", "param1", "param2" ], and the shell form, a plain string such as ENTRYPOINT executable param1 param2, which Docker wraps in /bin/sh -c before executing it. Option C uses the exec form correctly: echo is the executable and hello and world are its arguments, so the container prints hello world. Option D uses the shell form correctly: the whole line is passed to /bin/sh -c, which runs echo and prints the greeting. Therefore, options C and D are correct. Option A is wrong because in the shell form the surrounding double quotes become part of the command, so the shell looks for a command literally named echo Hello World. Option B is wrong because each element of an exec form array is a single token; Docker would try to execute a binary named echo hello world, which does not exist. Option E is wrong because it is not valid Dockerfile syntax: it places comma-separated strings outside of a JSON array.
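As a minimal sketch, both working forms look like this in a Dockerfile:
FROM debian
# exec form: runs echo directly, printing "hello world"
ENTRYPOINT [ "echo", "hello", "world" ]
# shell form alternative (Docker wraps it in /bin/sh -c):
# ENTRYPOINT echo hello world
References: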
Dockerfile reference | Docker Docs
Using the Dockerfile ENTRYPOINT and CMD Instructions - ATA Learning
Difference Between run, cmd and entrypoint in a Dockerfile
Which of the following values are valid in the type attribute of a <domain> element in libvirt? (Choose two.)
proc
namespace
kvm
cgroup
lxc
The type attribute of a <domain> element selects the hypervisor driver that runs the domain. Valid values include kvm, qemu, xen, and lxc, so kvm and lxc are the correct options. proc, namespace, and cgroup are not valid domain types.
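As a minimal sketch, a domain definition using the kvm driver looks like this (the name and sizes are placeholders):
<domain type='kvm'>
  <name>demo</name>
  <memory unit='KiB'>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
References: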
https://libvirt.org/formatcaps.html
What is the purpose of the kubelet service in Kubernetes?
Provide a command line interface to manage Kubernetes.
Build a container image as specified in a Dockerfile.
Manage permissions of users when interacting with the Kubernetes API.
Run containers on the worker nodes according to the Kubernetes configuration.
Store and replicate Kubernetes configuration data.
The purpose of the kubelet service in Kubernetes is to run containers on the worker nodes according to the Kubernetes configuration. The kubelet is an agent or program that runs on each node and communicates with the Kubernetes control plane. It receives a set of PodSpecs that describe the desired state of the pods that should be running on the node, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet also reports the status of the node and the pods back to the control plane. The kubelet does not manage containers that were not created by Kubernetes.
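As a minimal sketch of inspecting the kubelet on a worker node (assuming a systemd-managed installation):
systemctl status kubelet   # the node-local agent runs as a regular service
kubectl get nodes          # the READY status is based on what each kubelet reports
References: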
Kubernetes Docs - kubelet
Learn Steps - What is kubelet and what it does: Basics on Kubernetes
Which CPU flag indicates the hardware virtualization capability on an AMD CPU?
HVM
VIRT
SVM
PVM
VMX
The CPU flag that indicates the hardware virtualization capability on an AMD CPU is SVM. SVM stands for Secure Virtual Machine, and it is a feature of AMD processors that enables the CPU to run virtual machines with hardware assistance. SVM is also known as AMD-V, which is AMD’s brand name for its virtualization technology. SVM allows the CPU to support a hypervisor, which is a software layer that creates and manages virtual machines. A hypervisor can run multiple virtual machines on a single physical machine, each with its own operating system and applications. SVM improves the performance and security of virtual machines by allowing the CPU to directly execute privileged instructions and handle memory access, instead of relying on software emulation or binary translation. SVM also provides nested virtualization, which is the ability to run a virtual machine inside another virtual machine. To use SVM, the CPU must support it and the BIOS must enable it. The user can check if the CPU supports SVM by looking for the svm flag in the /proc/cpuinfo file or by using the lscpu command. The user can also use the virt-host-validate command to verify if the CPU and the BIOS are properly configured for hardware virtualization.
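As a minimal sketch of these checks on an AMD host:
grep -c svm /proc/cpuinfo        # a non-zero count means AMD-V is available
lscpu | grep -i virtualization   # prints "Virtualization: AMD-V" on capable CPUs
virt-host-validate               # validates CPU flags and BIOS/firmware settings
References: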
How to check if CPU supports hardware virtualization (VT technology)
Processor support - KVM
How to Enable Virtualization in BIOS for Intel and AMD
Which of the following kinds of data can cloud-init process directly from user-data? (Choose three.)
Shell scripts to execute
Lists of URLs to import
ISO images to boot from
cloud-config declarations in YAML
Base64-encoded binary files to execute
Cloud-init is a tool that allows users to customize the configuration and behavior of cloud instances during the boot process. Cloud-init can process different kinds of data that are passed to the instance via user-data, which is a mechanism provided by various cloud providers to inject data into the instance. Among the kinds of data that cloud-init can process directly from user-data are:
Shell scripts to execute: Cloud-init can execute user-data that is formatted as a shell script, starting with the #!/bin/sh or #!/bin/bash shebang. The script can contain any commands that are valid in the shell environment of the instance. The script is executed as the root user during the boot process.
Lists of URLs to import: Cloud-init can import user-data that is formatted as a list of URLs, separated by newlines. The URLs can point to any valid data source that cloud-init supports, such as shell scripts, cloud-config files, or include files. The URLs are fetched and processed by cloud-init in the order they appear in the list.
cloud-config declarations in YAML: Cloud-init can process user-data that is formatted as a cloud-config file, which is a YAML document that contains declarations for various cloud-init modules. The cloud-config file can specify various aspects of the instance configuration, such as hostname, users, packages, commands, services, and more. The cloud-config file must start with the #cloud-config header.
The other kinds of data listed in the question are not directly processed by cloud-init from user-data. They are either not supported, not recommended, or require additional steps to be processed. These kinds of data are:
ISO images to boot from: Cloud-init does not support booting from ISO images that are passed as user-data. ISO images are typically used to install an operating system on a physical or virtual machine, not to customize an existing cloud instance. To boot from an ISO image, the user would need to attach it as a secondary disk to the instance and configure the boot order accordingly.
Base64-encoded binary files to execute: Cloud-init does not recommend passing binary files as user-data, as they may not be compatible with the instance's architecture or operating system. Base64-encoding does not change this fact, as it only converts the binary data into ASCII characters. To execute a binary file, the user would need to decode it and make it executable on the instance.
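As a minimal sketch of cloud-config user-data (the hostname, package, and command are examples):
#cloud-config
hostname: demo
packages:
  - htop
runcmd:
  - echo "configured by cloud-init" > /etc/motd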
References:
User-Data Formats — cloud-init 22.1 documentation
User-Data Scripts
Include File
Cloud Config
How to Boot From ISO Image File Directly in Windows
How to run a binary file as a command in the terminal?
FILL BLANK
What is the default path to the Docker daemon configuration file on Linux? (Specify the full name of the file, including path.)
/etc/docker/daemon.json
The default path to the Docker daemon configuration file on Linux is /etc/docker/daemon.json. This file is a JSON file that contains the settings and options for the Docker daemon, which is the service that runs on the host operating system and manages the containers, images, networks, and other Docker resources. The /etc/docker/daemon.json file does not exist by default, but it can be created by the user to customize the Docker daemon behavior. The file can also be specified by using the --config-file flag when starting the Docker daemon. The file must be a valid JSON object and follow the syntax and structure of the dockerd reference docs.
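As a minimal sketch of an /etc/docker/daemon.json (the option values are examples; JSON allows no comments, so note that these settings cap each container's log file at 10 MB using the default json-file log driver):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m"
  }
}
References: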
Docker daemon configuration file - Medium
Docker daemon configuration overview | Docker Docs
docker daemon | Docker Docs
What is true about containerd?
It is a text file format defining the build process of containers.
It runs in each Docker container and provides DHCP client functionality
It uses runc to start containers on a container host.
It is the initial process run at the start of any Docker container.
It requires the Docker engine and Docker CLI to be installed.
Containerd is an industry-standard container runtime that uses runc (a low-level container runtime) by default, but can be configured to use others as well. Containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision. It supports the standards established by the Open Container Initiative (OCI). Containerd does not require the Docker engine and Docker CLI to be installed, as it can be used independently or with other container platforms. Containerd is not a text file format, nor does it run in each Docker container or provide DHCP client functionality. It is also not the initial process run at the start of any Docker container; a container's initial process is the command launched by the low-level runtime, runc.
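As a minimal sketch using containerd's own ctr client, without Docker (the image and task name are examples):
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo hello
# containerd invokes runc behind the scenes to create and start the container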