Which statement about the Kubernetes network model is correct?
A. Pods can only communicate with Pods exposed via a Service.
B. Pods can communicate with all Pods without NAT.
C. The Pod IP is only visible inside a Pod.
D. The Service IP is used for the communication between Services.
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT). That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate. Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network. Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.
=========
What are the advantages of adopting a GitOps approach for your deployments?
A. Reduce failed deployments, operational costs, and fragile release processes.
B. Reduce failed deployments, configuration drift, and fragile release processes.
C. Reduce failed deployments, operational costs, and learn git.
D. Reduce failed deployments, configuration drift, and improve your reputation.
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
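To make the reconciliation loop concrete, here is a minimal sketch of an Argo CD Application, assuming Argo CD is installed in the cluster; the repository URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                      # hypothetical application name
  namespace: argocd                 # Argo CD's default install namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-config.git  # placeholder Git repo (source of truth)
    targetRevision: main
    path: apps/my-app               # directory holding manifests/Helm/Kustomize for this app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                   # remove resources that were deleted from Git
      selfHeal: true                # revert manual changes (drift) back to the Git state
```

With selfHeal and prune enabled, the controller corrects drift automatically, and reverting a Git commit rolls the cluster back to the previous desired state.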
=========
Which of the following is a correct definition of a Helm chart?
A. A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
B. A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
C. A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
D. A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
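For orientation, a minimal chart layout might look like the following sketch (chart name, image, and values are illustrative, not taken from any published chart):

```yaml
# Chart.yaml -- chart metadata
apiVersion: v2
name: my-app            # hypothetical chart name
description: Example application packaged as a Helm chart
version: 0.1.0          # chart version
appVersion: "1.2.3"     # version of the packaged application

---
# values.yaml -- default, overridable configuration
replicaCount: 2
image:
  repository: registry.example.com/my-app   # placeholder registry/image
  tag: "1.2.3"
service:
  type: ClusterIP
  port: 80
# The templates/ directory holds the Kubernetes manifests (Deployment, Service, ...)
# written as templates that reference these values, e.g. {{ .Values.image.tag }}
```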
=========
Which Kubernetes Service type exposes a service only within the cluster?
A. ClusterIP
B. NodePort
C. LoadBalancer
D. ExternalName
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes. If no type is explicitly specified in a Service manifest, Kubernetes automatically assigns it as ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
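For illustration, a minimal internal-only Service might look like this sketch (name, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api            # hypothetical Service name
spec:
  type: ClusterIP              # also the default when type is omitted
  selector:
    app: backend-api           # Pods carrying this label become the endpoints
  ports:
  - port: 80                   # Service port reachable at the ClusterIP
    targetPort: 8080           # container port on the backend Pods
```

Inside the cluster, clients would reach it by DNS, for example backend-api.<namespace>.svc.cluster.local.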
=========
Can a Kubernetes Service expose multiple ports?
A. No, you can only expose one port per each Service.
B. Yes, but you must specify an unambiguous name for each port.
C. Yes, the only requirement is to use different port numbers.
D. No, because the only port you can expose is port number 443.
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port must have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
The naming requirement exists because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name, and a name helps humans and automation reliably select the correct port. The Service API makes the name field optional only when a single port is defined; as soon as a Service defines more than one port, every port must be named so that the ports are unambiguous.
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations. Option D is incorrect and nonsensical—Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
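A sketch of such a multi-port Service (names and numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                    # hypothetical Service name
spec:
  selector:
    app: web
  ports:
  - name: http                 # names are required once more than one port is defined
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: metrics              # e.g. a scrape endpoint served by the same Pods
    port: 9090
    targetPort: 9090
    protocol: TCP
```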
=========
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
A. /var/log/pods/
B. ~/.kube/config
C. /var/log/k8s/
D. /etc/kubernetes/
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk under /var/log/pods/, organized by namespace, Pod name/UID, and container. On many setups there is also a /var/log/containers/ directory that holds symlinks into those per-Pod log files, depending on the distro and container runtime configuration.
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========
What is Helm?
A. An open source dashboard for Kubernetes.
B. A package manager for Kubernetes applications.
C. A custom scheduler for Kubernetes.
D. An end-to-end testing project for Kubernetes applications.
Helm is best described as a package manager for Kubernetes applications, making B correct. Helm packages Kubernetes resource manifests (Deployments, Services, ConfigMaps, Ingress, RBAC, etc.) into a unit called a chart. A chart includes templates and default values, allowing teams to parameterize deployments for different environments (dev/stage/prod) without rewriting YAML.
From an application delivery perspective, Helm solves common problems: repeatable installation, upgrade management, versioning, and sharing of standardized application definitions. Instead of copying and editing raw YAML, users install a chart and supply a values.yaml file (or CLI overrides) to configure image tags, replica counts, ingress hosts, resource requests, and other settings. Helm then renders templates into concrete Kubernetes manifests and applies them to the cluster.
Helm also manages releases: it tracks what has been installed and supports upgrades and rollbacks. This aligns with cloud native delivery practices where deployments are automated, reproducible, and auditable. Helm is commonly integrated into CI/CD pipelines and GitOps workflows (sometimes with charts stored in Git or Helm repositories).
The other options are incorrect: a dashboard is a UI like Kubernetes Dashboard; a scheduler is kube-scheduler (or custom scheduler implementations, but Helm is not that); end-to-end testing projects exist in the ecosystem, but Helm’s role is packaging and lifecycle management of Kubernetes app definitions.
So the verified, standard definition is: Helm = Kubernetes package manager.
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
A. The policies must be written in Python language.
B. Kubernetes can use it to validate requests and apply policies.
C. Policies can only be tested when published.
D. It cannot be used outside Kubernetes.
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
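As a hedged illustration, the sketch below uses OPA Gatekeeper and assumes the commonly published K8sRequiredLabels ConstraintTemplate (which carries the Rego) is already installed; the constraint name and required label are placeholders:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels          # kind defined by the assumed ConstraintTemplate
metadata:
  name: require-cost-center
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]       # enforce the rule on Namespace objects
  parameters:
    labels: ["cost-center"]      # placeholder: label every Namespace must carry
```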
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
A. OpenID Connect
B. OpenID Container
C. OpenID Cluster
D. OpenID CNCF
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
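As a rough sketch of how this is wired on a self-managed control plane, the kube-apiserver can be started with OIDC flags like the following; the issuer URL, client ID, claim names, and image tag are placeholders, many required apiserver flags are omitted, and managed Kubernetes services expose equivalent settings through their own configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.30.0        # placeholder version
    command:
    - kube-apiserver
    # ... other required apiserver flags omitted for brevity ...
    - --oidc-issuer-url=https://idp.example.com          # placeholder OIDC issuer
    - --oidc-client-id=kubernetes                        # client ID registered with the IdP
    - --oidc-username-claim=email                        # claim mapped to the Kubernetes username
    - --oidc-groups-claim=groups                         # claim mapped to Kubernetes groups for RBAC
```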
=========
Which of the following sentences is true about namespaces in Kubernetes?
A. You can create a namespace within another namespace in Kubernetes.
B. You can create two resources of the same kind and name in a namespace.
C. The default namespace exists when a new cluster is created.
D. All the objects in the cluster are namespaced by default.
The true statement is C: the default namespace exists when a new cluster is created. Namespaces are a Kubernetes mechanism for partitioning cluster resources into logical groups. When you set up a cluster, Kubernetes creates some initial namespaces (including default, and commonly kube-system, kube-public, and kube-node-lease). The default namespace is where resources go if you don’t specify a namespace explicitly.
Option A is false because namespaces are not hierarchical; Kubernetes does not support “namespaces inside namespaces.” Option B is false because within a given namespace, resource names must be unique per resource kind. You can’t have two Deployments with the same name in the same namespace. You can have a Deployment named web in one namespace and another Deployment named web in a different namespace—namespaces provide that scope boundary. Option D is false because not all objects are namespaced. Many resources are cluster-scoped (for example, Nodes, PersistentVolumes, ClusterRoles, ClusterRoleBindings, and StorageClasses). Namespaces apply only to namespaced resources.
Operationally, namespaces support multi-tenancy and environment separation (dev/test/prod), RBAC scoping, resource quotas, and policy boundaries. For example, you can grant a team access only to their namespace and enforce quotas that prevent them from consuming excessive CPU/memory. Namespaces also make organization and cleanup easier: deleting a namespace removes most namespaced resources inside it (subject to finalizers).
So, the verified correct statement is C: the default namespace exists upon cluster creation.
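A small sketch combining a Namespace with a ResourceQuota scoped to it (the namespace name and limits are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                  # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a             # quotas are namespaced and apply only within this namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```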
=========
What is ephemeral storage?
A. Storage space that need not persist across restarts.
B. Storage that may grow dynamically.
C. Storage used by multiple consumers (e.g., multiple Pods).
D. Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
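A sketch of a Pod using an emptyDir scratch volume together with ephemeral-storage requests and limits (name, image, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch        # temporary data; lost when the Pod is deleted or rescheduled
    resources:
      requests:
        ephemeral-storage: 1Gi       # considered by the scheduler and for eviction decisions
      limits:
        ephemeral-storage: 2Gi
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi                 # cap the volume so runaway temp files don't exhaust node disk
```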
=========
A site reliability engineer needs to temporarily prevent new Pods from being scheduled on node-2 while keeping the existing workloads running without disruption. Which kubectl command should be used?
A. kubectl cordon node-2
B. kubectl delete node-2
C. kubectl drain node-2
D. kubectl pause deployment
In Kubernetes, node maintenance and availability are common operational tasks, and the platform provides specific commands to control how the scheduler places Pods on nodes. When the requirement is to temporarily prevent new Pods from being scheduled on a node without affecting the currently running Pods, the correct approach is to cordon the node.
The command kubectl cordon node-2 marks the node as unschedulable. This means the Kubernetes scheduler will no longer place any new Pods onto that node. Importantly, cordoning a node does not evict, restart, or interrupt existing Pods. All workloads already running on the node continue operating normally. This makes cordoning ideal for scenarios such as diagnostics, monitoring, or preparing for future maintenance while ensuring zero workload disruption.
Option B, kubectl delete node-2, is incorrect because deleting a node removes it entirely from the cluster. This action would cause Pods running on that node to be terminated and rescheduled elsewhere, resulting in disruption—exactly what the question specifies must be avoided.
Option C, kubectl drain node-2, is also incorrect in this context. Draining a node safely evicts Pods (except for certain exclusions like DaemonSets) and reschedules them onto other nodes. While drain is useful for maintenance and upgrades, it does not keep existing workloads running on the node, making it unsuitable here.
Option D, kubectl pause deployment, applies only to Deployments and merely pauses rollout updates. It does not affect node-level scheduling behavior and has no impact on where Pods are placed by the scheduler.
Therefore, the correct and verified answer is Option A: kubectl cordon node-2, which aligns with Kubernetes operational best practices and official documentation for non-disruptive node management.
=========
Which component of the node is responsible for running workloads?
A. The kubelet.
B. The kube-proxy.
C. The kube-apiserver.
D. The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible for running workloads” is the container runtime, option D.
=========
What is the main purpose of etcd in Kubernetes?
A. etcd stores all cluster data in a key-value store.
B. etcd stores the containers running in the cluster for disaster recovery.
C. etcd stores copies of the Kubernetes config files that live in /etc/.
D. etcd stores the YAML definitions for all the cluster components.
The main purpose of etcd in Kubernetes is to store the cluster’s state as a distributed key-value store, so A is correct. Kubernetes is API-driven: objects like Pods, Deployments, Services, ConfigMaps, Secrets, Nodes, and RBAC rules are persisted by the API server into etcd. Controllers, schedulers, and other components then watch the API for changes and reconcile the cluster accordingly. This makes etcd the “source of truth” for desired and observed cluster state.
Options B, C, and D are misconceptions. etcd does not store the running containers; that’s the job of the kubelet/container runtime on each node, and container state is ephemeral. etcd does not store /etc configuration file copies. And while you may author objects as YAML manifests, Kubernetes stores them internally as API objects (serialized) in etcd—not as “YAML definitions for all components.” The data is structured key/value entries representing Kubernetes resources and metadata.
Because etcd is so critical, its performance and reliability directly affect the cluster. Slow disk I/O or poor network latency increases API request latency and can delay controller reconciliation, leading to cascading operational problems (slow rollouts, delayed scheduling, timeouts). That’s why etcd is typically run on fast, reliable storage and in an HA configuration (often 3 or 5 members) to maintain quorum and tolerate failures. Backups (snapshots) and restore procedures are also central to disaster recovery: if etcd is lost, the cluster loses its state.
Security is also important: etcd can contain sensitive information (especially Secrets unless encrypted at rest). Proper TLS, restricted access, and encryption-at-rest configuration are standard best practices.
So, the verified correct answer is A: etcd stores all cluster data/state in a key-value store.
=========
Which of these is a valid container restart policy?
A. On login
B. On update
C. On start
D. On failure
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Pods managed by Deployments use Always (the only value a Deployment’s Pod template accepts) because the workload should keep serving traffic and a crashed container should be restarted. Controllers also interact with restarts: readiness probes determine whether a Pod receives Service traffic during a rollout, while a Job counts completions and failures based on Pod termination behavior.
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
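A minimal Job sketch using OnFailure (name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: import-job                  # hypothetical batch job
spec:
  backoffLimit: 3                   # retry failed Pods up to 3 times
  template:
    spec:
      restartPolicy: OnFailure      # restart the container only on a non-zero exit
      containers:
      - name: import
        image: busybox
        command: ["sh", "-c", "echo importing && exit 0"]
```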
=========
What does “Continuous Integration” mean?
A. The continuous integration and testing of code changes from multiple sources manually.
B. The continuous integration and testing of code changes from multiple sources via automation.
C. The continuous integration of changes from one environment to another.
D. The continuous integration of new tools to support developers in a project.
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous” part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
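As one hedged example of such automation, a pipeline definition might look like the following GitHub Actions sketch; the workflow name, test command, image name, and registry are placeholders:

```yaml
# .github/workflows/ci.yaml (illustrative)
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Run unit tests
      run: make test                # placeholder test entry point
    - name: Build container image
      run: docker build -t registry.example.com/my-app:${{ github.sha }} .
```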
=========
What is the role of the ingressClassName field in a Kubernetes Ingress resource?
A. It defines the type of protocol (HTTP or HTTPS) that the Ingress Controller should process.
B. It specifies the backend Service used by the Ingress Controller to route external requests.
C. It determines how routing rules are prioritized when multiple Ingress objects are applied.
D. It indicates which Ingress Controller should implement the rules defined in the Ingress resource.
The ingressClassName field in a Kubernetes Ingress resource is used to explicitly specify which Ingress Controller is responsible for processing and enforcing the rules defined in that Ingress. This makes option D the correct answer.
In Kubernetes clusters, it is common to have multiple Ingress Controllers running at the same time. For example, a cluster might run an NGINX Ingress Controller, a cloud-provider-specific controller, and an internal-only controller simultaneously. Without a clear mechanism to select which controller should handle a given Ingress resource, multiple controllers could attempt to process the same rules, leading to conflicts or undefined behavior.
The ingressClassName field solves this problem by referencing an IngressClass object. The IngressClass defines the controller implementation (via the controller field), and the Ingress resource uses ingressClassName to declare which class—and therefore which controller—should act on it. This creates a clean and explicit binding between an Ingress and its controller.
Option A is incorrect because protocol handling (HTTP vs HTTPS) is defined through TLS configuration and service ports, not by ingressClassName. Option B is incorrect because backend Services are defined in the rules and backend sections of the Ingress specification. Option C is incorrect because routing priority is determined by path matching rules and controller-specific logic, not by ingressClassName.
Historically, annotations were used to select Ingress Controllers, but ingressClassName is now the recommended and standardized approach. It improves clarity, portability, and compatibility across different Kubernetes distributions and controllers.
In summary, the primary purpose of ingressClassName is to indicate which Ingress Controller should implement the routing rules for a given Ingress resource, making Option D the correct and verified answer.
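A sketch of the binding between an IngressClass and an Ingress (the controller string shown is the one used by ingress-nginx; hostnames and Service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx      # identifies which controller implementation handles this class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx               # binds this Ingress to the IngressClass above
  rules:
  - host: app.example.com               # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                   # backend Service and port live here, not in ingressClassName
            port:
              number: 80
```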
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
A. FaaS
B. DevOps
C. CloudCost
D. FinOps
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
How can you extend the Kubernetes API?
A. Adding a CustomResourceDefinition or implementing an aggregation layer.
B. Adding a new version of a resource, for instance v4beta3.
C. With the command kubectl extend api, logged in as an administrator.
D. Adding the desired API object as a kubelet parameter.
A is correct: Kubernetes’ API can be extended by adding CustomResourceDefinitions (CRDs) and/or by implementing the API Aggregation Layer. These are the two canonical extension mechanisms.
CRDs let you define new resource types (new kinds) that the Kubernetes API server stores in etcd and serves like native objects. You typically pair a CRD with a controller/operator that watches those custom objects and reconciles real resources accordingly. This pattern is foundational to the Kubernetes ecosystem (many popular add-ons install CRDs).
The aggregation layer allows you to add entire API services (aggregated API servers) that serve additional endpoints under the Kubernetes API. This is used when you want custom API behavior, custom storage, or specialized semantics beyond what CRDs provide (for example, the resource metrics API at metrics.k8s.io is served through the aggregation layer by metrics-server).
Why the other answers are wrong:
B is not how API extension works. You don’t “extend the API” by inventing new versions like v4beta3; versions are defined and implemented by API servers/controllers, not by users arbitrarily.
C is fictional; there is no standard kubectl extend api command.
D is also incorrect; kubelet parameters configure node agent behavior, not API server types and discovery.
So, the verified ways to extend Kubernetes’ API surface are CRDs and API aggregation, which is option A.
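A minimal CRD sketch, loosely following the style of the example in the Kubernetes documentation (group, kind, and schema fields are placeholders):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames: ["ct"]
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:                 # validation schema for the custom objects
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```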
=========
Scenario: You have a Kubernetes cluster hosted in a public cloud provider. When trying to create a Service of type LoadBalancer, the external-ip is stuck in the "Pending" state. Which Kubernetes component is failing in this scenario?
A. Cloud Controller Manager
B. Load Balancer Manager
C. Cloud Architecture Manager
D. Cloud Load Balancer Manager
When you create a Service of type LoadBalancer in a cloud environment, Kubernetes relies on cloud-provider integration to provision an external load balancer and allocate a public IP (or equivalent). The control plane component responsible for this integration is the cloud-controller-manager, so A is correct.
In Kubernetes, a LoadBalancer Service triggers a controller loop that calls the cloud provider APIs to create/update a load balancer that forwards traffic to the cluster (often via NodePorts on worker nodes, or via provider-specific mechanisms). The Service remains with EXTERNAL-IP: Pending until the cloud provider resource is successfully created and the controller updates the Service status with the assigned external address. If that status never updates, it usually indicates the cloud integration path is broken—commonly due to: missing cloud provider configuration, broken credentials/IAM permissions, the cloud-controller-manager not running/healthy, or a misconfigured cloud provider implementation.
The other options are not real Kubernetes components. Kubernetes does not include a “Load Balancer Manager” or “Cloud Architecture Manager” component name in its standard architecture. In many managed Kubernetes offerings, the cloud-controller-manager (or its equivalent) is provided/managed by the provider, but the responsibility remains the same: reconcile Kubernetes Service resources into cloud load balancer resources.
Therefore, in this scenario, the failing component is the Cloud Controller Manager, which is the Kubernetes control plane component that interfaces with the cloud provider to provision external load balancers and update the Service status.
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
A. Border Gateway Protocol
B. IP Address Management
C. Pod Security Policy
D. Network Policies
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
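A sketch of a NetworkPolicy that admits ingress to backend Pods only from frontend Pods on a single port (labels and the port are illustrative); it only takes effect if the installed CNI enforces NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy applies to these Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only frontend Pods in this namespace may connect
    ports:
    - protocol: TCP
      port: 8080
```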
=========
In a Kubernetes cluster, what is the primary role of the Kubernetes scheduler?
A. To manage the lifecycle of the Pods by restarting them when they fail.
B. To monitor the health of the nodes and Pods in the cluster.
C. To handle network traffic between services within the cluster.
D. To distribute Pods across nodes based on resource availability and constraints.
The Kubernetes scheduler is a core control plane component responsible for deciding where Pods should run within a cluster. Its primary role is to assign newly created Pods that do not yet have a node assigned to an appropriate node based on a variety of factors such as resource availability, scheduling constraints, and policies.
When a Pod is created, it enters a Pending state until the scheduler selects a suitable node. The scheduler evaluates all available nodes and filters out those that do not meet the Pod’s requirements. These requirements may include CPU and memory requests, node selectors, node affinity rules, taints and tolerations, topology spread constraints, and other scheduling policies. After filtering, the scheduler scores the remaining nodes to determine the best placement for the Pod and then binds the Pod to the selected node.
Option A is incorrect because restarting failed Pods is handled by other components such as the kubelet and higher-level controllers like Deployments, ReplicaSets, or StatefulSets—not the scheduler. Option B is incorrect because monitoring node and Pod health is primarily the responsibility of the kubelet and the Kubernetes controller manager, which reacts to node failures and ensures desired state. Option C is incorrect because handling network traffic is managed by Services, kube-proxy, and the cluster’s networking implementation, not the scheduler.
Option D correctly describes the scheduler’s purpose. By distributing Pods across nodes based on resource availability and constraints, the scheduler helps ensure efficient resource utilization, high availability, and workload isolation. This intelligent placement is essential for maintaining cluster stability and performance, especially in large-scale or multi-tenant environments.
According to Kubernetes documentation, the scheduler’s responsibility is strictly focused on Pod placement decisions. Once a Pod is scheduled, the scheduler’s job is complete for that Pod, making option D the accurate and fully verified answer.
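To make the inputs concrete, here is a sketch of a Pod spec carrying the kinds of constraints the scheduler evaluates (labels, taint keys, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 500m              # nodes without this much allocatable CPU are filtered out
        memory: 256Mi
  nodeSelector:
    disktype: ssd              # placeholder node label the target node must carry
  tolerations:
  - key: "dedicated"           # placeholder taint the Pod is allowed to tolerate
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
```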
=========
What is the name of the Kubernetes resource used to expose an application?
A. Port
B. Service
C. DNS
D. Deployment
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
Option A (Port) is not a Kubernetes resource type; ports are fields within resources. Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app. Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
=========
What is a sidecar container?
A. A Pod that runs next to another container within the same Pod.
B. A container that runs next to another Pod within the same namespace.
C. A container that runs next to another container within the same Pod.
D. A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
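A sketch of a Pod pairing a main container with a log-tailing sidecar that shares an emptyDir volume (images, paths, and commands are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                       # shared scratch space for both containers
  containers:
  - name: app                          # main application container
    image: busybox
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder                # sidecar: reads what the app writes to the shared volume
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```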
=========
What is the purpose of the kube-proxy?
A. The kube-proxy balances network requests to Pods.
B. The kube-proxy maintains network rules on nodes.
C. The kube-proxy ensures the cluster connectivity with the internet.
D. The kube-proxy maintains the DNS rules of the cluster.
The correct answer is B: kube-proxy maintains network rules on nodes. kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS, depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration—not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
=========
Which command will list the resource types that exist within a cluster?
A. kubectl api-resources
B. kubectl get namespaces
C. kubectl api-versions
D. curl https://kubectrl/namespaces
To list the resource types available in a Kubernetes cluster, you use kubectl api-resources, so A is correct. This command queries the API server’s discovery endpoints and prints a table of resources (kinds) that the cluster knows about, including their names, shortnames, API group/version, whether they are namespaced, and supported verbs. It’s extremely useful for learning what objects exist in a cluster—especially when CRDs are installed, because those custom resource types will also appear in the output.
Option C (kubectl api-versions) lists available API versions (group/version strings like v1, apps/v1, batch/v1) but does not directly list the resource kinds/types. It’s related discovery information but answers a different question. Option B (kubectl get namespaces) lists namespaces, not resource types. Option D is invalid (typo in URL and conceptually not the Kubernetes discovery mechanism).
Practically, kubectl api-resources is used during troubleshooting and exploration: you might use it to confirm whether a CRD is installed (e.g., certificates.cert-manager.io kinds), to check whether a resource is namespaced, or to find the correct kind name for kubectl get. It also helps understand what your cluster supports at the API layer (including aggregated APIs).
So, the verified correct command to list resource types that exist in the cluster is A: kubectl api-resources.
=========
Which are the two primary modes for Service discovery within a Kubernetes cluster?
A. Environment variables and DNS
B. API calls and LDAP
C. Labels and RADIUS
D. Selectors and DHCP
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, the kubelet injects environment variables for Services that exist in the same namespace at the time the Pod starts. For a Service named my-service, these include variables such as MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT. This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
=========
What is a Service?
A. A static network mapping from a Pod to a port.
B. A way to expose an application running on a set of Pods.
C. The network configuration for a group of Pods.
D. An NGINX load balancer that gets deployed for an application.
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
Which GitOps engine can be used to orchestrate parallel jobs on Kubernetes?
A. Jenkins X
B. Flagger
C. Flux
D. Argo Workflows
Argo Workflows (D) is the correct answer because it is a Kubernetes-native workflow engine designed to define and run multi-step workflows—often with parallelization—directly on Kubernetes. Argo Workflows models workflows as DAGs (directed acyclic graphs) or step-based sequences, where each step is typically a Pod. Because each step is expressed as Kubernetes resources (custom resources), Argo can schedule many tasks concurrently, control fan-out/fan-in patterns, and manage dependencies between steps (e.g., “run these 10 jobs in parallel, then aggregate results”).
The question calls it a “GitOps engine,” but the capability being tested is “orchestrate parallel jobs.” Argo Workflows fits because it is purpose-built for running complex job orchestration, including parallel tasks, retries, timeouts, artifacts passing, and conditional execution. In practice, many teams store workflow manifests in Git and apply GitOps practices around them, but the distinguishing feature here is the workflow orchestration engine itself.
Why the other options are not best:
Flux (C) is a GitOps controller that reconciles cluster state from Git; it doesn’t orchestrate parallel job graphs as its core function.
Flagger (B) is a progressive delivery operator (canary/blue-green) often paired with GitOps and service meshes/Ingress; it’s not a general workflow orchestrator for parallel batch jobs.
Jenkins X (A) is CI/CD-focused (pipelines), not primarily a Kubernetes-native workflow engine for parallel job DAGs in the way Argo Workflows is.
So, the Kubernetes-native tool specifically used to orchestrate parallel jobs and workflows is Argo Workflows (D).
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
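As a hedged, tool-agnostic sketch of such a pipeline (the image name, registry, test script, and scanner choice are placeholders, not prescribed tooling):

    # build and test
    docker build -t registry.example.com/team/app:${GIT_SHA} .
    ./run-unit-tests.sh                                   # placeholder test step
    # security gate (Trivy shown purely as an example scanner)
    trivy image registry.example.com/team/app:${GIT_SHA}
    # publish and deploy
    docker push registry.example.com/team/app:${GIT_SHA}
    kubectl set image deployment/app app=registry.example.com/team/app:${GIT_SHA}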
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) describes build artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
In Kubernetes, which command is the most efficient way to check the progress of a Deployment rollout and confirm if it has completed successfully?
kubectl get deployments --show-labels -o wide
kubectl describe deployment my-deployment --namespace=default
kubectl logs deployment/my-deployment --all-containers=true
kubectl rollout status deployment/my-deployment
When performing rolling updates in Kubernetes, it is important to have a clear and efficient way to track the progress of a Deployment rollout and determine whether it has completed successfully. The most direct and purpose-built command for this task is kubectl rollout status deployment/my-deployment, making option D the correct answer.
The kubectl rollout status command is specifically designed to monitor the state of rollouts for resources such as Deployments, StatefulSets, and DaemonSets. It provides real-time feedback on the rollout process, including whether new Pods have been created, old Pods are being terminated, and if the desired number of updated replicas has become available. The command blocks until the rollout either completes successfully or fails, which makes it especially useful in automation and CI/CD pipelines.
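For example, a hedged snippet as it might appear in a pipeline step (the Deployment name and timeout are illustrative):

    kubectl rollout status deployment/my-deployment --timeout=120s
    # exits 0 when the rollout completes; non-zero if it fails or the timeout is reached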
Option A is incorrect because kubectl get deployments only provides a snapshot view of deployment status fields and does not actively track rollout progress. Option B can provide detailed information and events, but it is verbose and not optimized for quickly confirming rollout completion. Option C is incorrect because Deployment objects themselves do not produce logs; logs are generated by Pods and containers, not higher-level workload resources.
The rollout status command watches the Deployment’s status conditions, so it accurately reflects how the current update is progressing under the configured update strategy. If a rollout is stuck due to failed Pods, readiness probe failures, or resource constraints, the command will indicate that the rollout is not progressing, helping operators quickly identify issues.
In summary, kubectl rollout status deployment/my-deployment is the most efficient and reliable way to check rollout progress and confirm success. It is purpose-built for rollout tracking, easy to interpret, and widely used in production Kubernetes workflows, making Option D the correct and verified answer.
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
Why is Cloud-Native Architecture important?
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit. Option C is vague (“modern”) and again doesn’t capture the core value proposition. Option D is incorrect because cloud native is not primarily about being “bleeding edge”—it’s about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
kubeadm
kubelet
kube-apiserver
kubectl
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or CRI endpoints are unreachable, kubelet will report errors and Pods can get stuck in ContainerCreating, image pull failures, or runtime errors. Debugging often involves checking kubelet logs and runtime service health, because kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
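A hedged starting point for that kind of node-level debugging, assuming a systemd-managed kubelet and crictl configured for your runtime socket:

    journalctl -u kubelet --since "10 min ago"   # kubelet errors, including failed CRI calls
    crictl pods                                  # Pod sandboxes as the runtime sees them
    crictl ps                                    # containers as the runtime sees them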
So, the node-level component responsible for CRI integration is the kubelet—option B.
=========
Which option represents best practices when building container images?
Use multi-stage builds, use the latest tag for image version, and only install necessary packages.
Use multi-stage builds, pin the base image version to a specific digest, and install extra packages just in case.
Use multi-stage builds, pin the base image version to a specific digest, and only install necessary packages.
Avoid multi-stage builds, use the latest tag for image version, and install extra packages just in case.
Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.
Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.
Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.
Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
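A hedged Dockerfile sketch combining these practices; the digests shown are placeholders (not real image digests) and the Go project layout is assumed for illustration:

    # build stage: compilers and build tools stay here only
    FROM golang:1.22@sha256:<build-image-digest> AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # runtime stage: minimal base, only the compiled binary is copied in
    FROM gcr.io/distroless/static@sha256:<runtime-image-digest>
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]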
Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages “just in case” contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.
According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.
=========
What is the default eviction timeout when the Ready condition of a node is Unknown or False?
Thirty seconds.
Thirty minutes.
One minute.
Five minutes.
The verified correct answer is D (Five minutes). In Kubernetes, node health is continuously monitored. When a node stops reporting status (heartbeats from the kubelet) or is otherwise considered unreachable, the Node controller updates the Node’s Ready condition to Unknown (or it can become False). From that point, Kubernetes has to balance two risks: acting too quickly might cause unnecessary disruption (e.g., transient network hiccups), but acting too slowly prolongs outage for workloads that were running on the failed node.
The “default eviction timeout” refers to the control plane behavior that determines how long Kubernetes waits before evicting Pods from a node that appears unhealthy/unreachable. After this timeout elapses, Kubernetes begins eviction of Pods so controllers (like Deployments) can recreate them on healthy nodes, restoring the desired replica count and availability.
This is tightly connected to high availability and self-healing: Kubernetes does not “move” Pods from a dead node; it replaces them. The eviction timeout gives the cluster time to confirm the node is truly unavailable, avoiding flapping in unstable networks. Once eviction begins, replacement Pods can be scheduled elsewhere (assuming capacity exists), which is the normal recovery path for stateless workloads.
It’s also worth noting that graceful operational handling can be influenced by PodDisruptionBudgets (for voluntary disruptions) and by workload design (replicas across nodes/zones). But the question is testing the default timer value, which is five minutes in this context.
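Under taint-based eviction, the same five-minute default surfaces as tolerationSeconds that are added to Pod specs automatically; a hedged sketch of that fragment of a Pod spec:

    tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300   # five minutes before eviction begins
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300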
Therefore, among the choices provided, the correct answer is D.
=========
What is the API that exposes resource metrics from the metrics-server?
custom.k8s.io
resources.k8s.io
metrics.k8s.io
cadvisor.k8s.io
The correct answer is C: metrics.k8s.io. Kubernetes’ metrics-server is the standard component that provides resource metrics (primarily CPU and memory) for nodes and pods. It aggregates this information (sourced from kubelet/cAdvisor) and serves it through the Kubernetes aggregated API under the group metrics.k8s.io. This is what enables commands like kubectl top nodes and kubectl top pods, and it is also a key data source for autoscaling with the Horizontal Pod Autoscaler (HPA) when scaling on CPU/memory utilization.
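A few hedged commands to see this API in action, assuming metrics-server is installed in the cluster:

    kubectl top nodes
    kubectl top pods -A
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"   # raw response from the aggregated API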
Why the other options are wrong:
custom.k8s.io is not the standard API group for metrics-server resource metrics. Custom metrics are typically served through the custom metrics API (commonly custom.metrics.k8s.io) via adapters (e.g., Prometheus Adapter), not metrics-server.
resources.k8s.io is not the metrics-server API group.
cadvisor.k8s.io is not exposed as a Kubernetes aggregated metrics API. cAdvisor is a component integrated into kubelet that provides container stats, but metrics-server is the thing that exposes the aggregated Kubernetes metrics API, and the canonical group is metrics.k8s.io.
Operationally, it’s important to understand the boundary: metrics-server provides basic resource metrics suitable for core autoscaling and “top” views, but it is not a full observability system (it does not store long-term metrics history like Prometheus). For richer metrics (SLOs, application metrics, long-term trending), teams typically deploy Prometheus or a managed monitoring backend. Still, when the question asks specifically which API exposes metrics-server data, the answer is definitively metrics.k8s.io.
=========
What is a DaemonSet?
It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
It’s a type of workload that needs to be run periodically on a given schedule.
It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,” but “one replica per node” (subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
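A minimal, hedged DaemonSet sketch restricted to Linux nodes; the names and image are illustrative:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-agent
    spec:
      selector:
        matchLabels:
          app: node-agent
      template:
        metadata:
          labels:
            app: node-agent
        spec:
          nodeSelector:
            kubernetes.io/os: linux            # run only on Linux nodes
          containers:
          - name: agent
            image: registry.example.com/node-agent:1.0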
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like master/control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
=========
What is the role of a NetworkPolicy in Kubernetes?
The ability to cryptic and obscure all traffic.
The ability to classify the Pods as isolated and non isolated.
The ability to prevent loopback or incoming host traffic.
The ability to log network security events.
A Kubernetes NetworkPolicy defines which traffic is allowed to and from Pods by selecting Pods and specifying ingress/egress rules. A key conceptual effect is that it can make Pods “isolated” (default deny except what is allowed) versus “non-isolated” (default allow). This aligns best with option B, so B is correct.
By default, Kubernetes networking is permissive: Pods can typically talk to any other Pod. When you apply a NetworkPolicy that selects a set of Pods, those selected Pods become “isolated” for the direction(s) covered by the policy (ingress and/or egress). That means only traffic explicitly allowed by the policy is permitted; everything else is denied (again, for the selected Pods and direction). This classification concept—isolated vs non-isolated—is a common way the Kubernetes documentation explains NetworkPolicy behavior.
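A hedged example that isolates Pods labeled app=db for ingress and then allows traffic only from app=web Pods; the labels are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-web
    spec:
      podSelector:
        matchLabels:
          app: db          # these Pods become "isolated" for ingress
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: web     # only web Pods may connect; everything else is denied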
Option A is incorrect: NetworkPolicy does not encrypt (“cryptic and obscure”) traffic. Encryption is typically handled by mTLS via a service mesh or application-layer TLS. Option C is not the primary role; loopback and host traffic handling depend on the network plugin and node configuration, and NetworkPolicy is not a “prevent loopback” mechanism. Option D is incorrect because NetworkPolicy is not a logging system; while some CNIs can produce logs about policy decisions, logging is not NetworkPolicy’s role in the API.
One critical Kubernetes detail: NetworkPolicy enforcement is performed by the CNI/network plugin. If your CNI doesn’t implement NetworkPolicy, creating these objects won’t change runtime traffic. In CNIs that do support it, NetworkPolicy becomes a foundational security primitive for segmentation and least privilege: restricting database access to app Pods only, isolating namespaces, and reducing lateral movement risk.
So, in the language of the provided answers, NetworkPolicy’s role is best captured as the ability to classify Pods into isolated/non-isolated by applying traffic-allow rules—option B.
=========
What is the purpose of the kubelet component within a Kubernetes cluster?
A dashboard for Kubernetes clusters that allows management and troubleshooting of applications.
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
A component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet is the primary node agent in Kubernetes. It runs on every worker node (and often on control-plane nodes too if they run workloads) and is responsible for ensuring that containers described by PodSpecs are actually running and healthy on that node. The kubelet continuously watches the Kubernetes API (via the control plane) for Pods that have been scheduled to its node, then it collaborates with the node’s container runtime (through CRI) to pull images, create containers, start them, and manage their lifecycle. It also mounts volumes, configures the Pod’s networking (working with the CNI plugin), and reports Pod and node status back to the API server.
Option D captures the core: “an agent on each node that makes sure containers are running in a Pod.” That includes executing probes (liveness, readiness, startup), restarting containers based on the Pod’s restartPolicy, and enforcing resource constraints in coordination with the runtime and OS.
Why the other options are wrong: A describes the Kubernetes Dashboard (or similar UI tools), not kubelet. B describes kube-proxy, which programs node-level networking rules (iptables/ipvs/eBPF depending on implementation) to implement Service virtual IP behavior. C describes the kube-scheduler, which selects a node for Pods that do not yet have an assigned node.
A useful way to remember kubelet’s role is: scheduler decides where, kubelet makes it happen there. Once the scheduler binds a Pod to a node, kubelet becomes responsible for reconciling “desired state” (PodSpec) with “observed state” (running containers). If a container crashes, kubelet will restart it according to policy; if an image is missing, it will pull it; if a Pod is deleted, it will stop containers and clean up. This node-local reconciliation loop is fundamental to Kubernetes’ self-healing and declarative operation model.
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
DaemonSet
StatefulSet
kubectl
Deployment
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
Therefore, the correct and verified answer is DaemonSet (A).
=========
Which of the following best describes horizontally scaling an application deployment?
The act of adding/removing node instances to the cluster to meet demand.
The act of adding/removing applications to meet demand.
The act of adding/removing application instances of the same application to meet demand.
The act of adding/removing resources to application instances to meet demand.
Horizontal scaling means changing how many instances of an application are running, not changing how big each instance is. Therefore, the best description is C: adding/removing application instances of the same application to meet demand. In Kubernetes, “instances” typically correspond to Pod replicas managed by a controller like a Deployment. When you scale horizontally, you increase or decrease the replica count, which increases or decreases total throughput and resilience by distributing load across more Pods.
Option A is about cluster/node scaling (adding or removing nodes), which is infrastructure scaling typically handled by a cluster autoscaler in cloud environments. Node scaling can enable more Pods to be scheduled, but it’s not the definition of horizontal application scaling itself. Option D describes vertical scaling—adding/removing CPU or memory resources to a given instance (Pod/container) by changing requests/limits or using VPA. Option B is vague and not the standard definition.
Horizontal scaling is a core cloud-native pattern because it improves availability and elasticity. If one Pod fails, other replicas continue serving traffic. In Kubernetes, scaling can be manual (kubectl scale deployment ... --replicas=N) or automatic using the Horizontal Pod Autoscaler (HPA). HPA adjusts replicas based on observed metrics like CPU utilization, memory, or custom/external metrics (for example, request rate or queue length). This creates responsive systems that can handle variable traffic.
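Hedged examples of both approaches, with an illustrative Deployment name and thresholds:

    kubectl scale deployment my-app --replicas=5
    kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70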
From an architecture perspective, designing for horizontal scaling often means ensuring your application is stateless (or manages state externally), uses idempotent request handling, and supports multiple concurrent instances. Stateful workloads can also scale horizontally, but usually with additional constraints (StatefulSets, sharding, quorum membership, stable identity).
So the verified definition and correct choice is C.
=========
When modifying an existing Helm release to apply new configuration values, which approach is the best practice?
Use helm upgrade with the --set flag to apply new values while preserving the release history.
Use kubectl edit to modify the live release configuration and apply the updated resource values.
Delete the release and reinstall it with the desired configuration to force an updated deployment.
Edit the Helm chart source files directly and reapply them to push the updated configuration values.
Helm is a package manager for Kubernetes that provides a declarative and versioned approach to application deployment and lifecycle management. When updating configuration values for an existing Helm release, the recommended and best-practice approach is to use helm upgrade, optionally with the --set flag or a values file, to apply the new configuration while preserving the release’s history.
Option A is correct because helm upgrade updates an existing release in a controlled and auditable manner. Helm stores each revision of a release, allowing teams to inspect past configurations and roll back to a previous known-good state if needed. Using --set enables quick overrides of individual values, while using -f values.yaml supports more complex or repeatable configurations. This approach aligns with GitOps and infrastructure-as-code principles, ensuring consistency and traceability.
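A hedged sequence showing that flow; the release name, chart path, and values are placeholders:

    helm upgrade my-release ./my-chart -f values-prod.yaml --set image.tag=1.2.3
    helm history my-release        # inspect the stored revisions
    helm rollback my-release 2     # return to a known-good revision if needed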
Option B is incorrect because modifying Helm-managed resources directly with kubectl edit breaks Helm’s state tracking. Helm maintains a record of the desired state for each release, and manual edits can cause configuration drift, making future upgrades unpredictable or unsafe. Kubernetes documentation and Helm guidance strongly discourage modifying Helm-managed resources outside of Helm itself.
Option C is incorrect because deleting and reinstalling a release discards the release history and may cause unnecessary downtime or data loss, especially for stateful applications. Helm’s upgrade mechanism is specifically designed to avoid this disruption while still applying configuration changes safely.
Option D is also incorrect because editing chart source files directly and reapplying them bypasses Helm’s release management model. While chart changes are appropriate during development, applying them directly to a running release without helm upgrade undermines versioning, rollback, and repeatability.
According to Helm documentation, helm upgrade is the standard and supported method for modifying deployed applications. It ensures controlled updates, preserves operational history, and enables safe rollbacks, making option A the correct and fully verified best practice.
=========
What native runtime is Open Container Initiative (OCI) compliant?
runC
runV
kata-containers
gvisor
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems.
In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
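A hedged way to check which high-level runtime your nodes report (the low-level OCI runtime, typically runC, sits beneath it):

    kubectl get nodes -o wide      # CONTAINER-RUNTIME column, e.g. containerd://1.7.x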
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers; these can be used with Kubernetes via compatible integrations, but the canonical “native OCI runtime” answer in most curricula is runC. Finally, “runV” is not a common modern Kubernetes runtime choice in typical OCI discussions. So the most correct, standards-based answer here is A (runC) because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
=========
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
fallback
resiliency
failover
high reachability
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
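A hedged demonstration that the encoding is trivially reversible; the Secret name, key, and value are illustrative:

    kubectl create secret generic db-cred --from-literal=password=s3cr3tpass
    kubectl get secret db-cred -o jsonpath='{.data.password}' | base64 -d   # prints s3cr3tpass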
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection—base64 encoding is the right answer.
=========
A Pod named my-app must be created to run a simple nginx container. Which kubectl command should be used?
kubectl create nginx --name=my-app
kubectl run my-app --image=nginx
kubectl create my-app --image=nginx
kubectl run nginx --name=my-app
In Kubernetes, the simplest and most direct way to create a Pod that runs a single container is to use the kubectl run command with the appropriate image specification. The command kubectl run my-app --image=nginx explicitly instructs Kubernetes to create a Pod named my-app using the nginx container image, which makes option B the correct answer.
The kubectl run command is designed to quickly create and run a Pod from the command line. In current kubectl versions it creates a standalone Pod by default (older releases could also generate higher-level workload resources, but that generator behavior has been removed). This makes it ideal for simple use cases like testing, demonstrations, or learning scenarios where only a single container is required.
Option A is incorrect because kubectl create nginx --name=my-app is not valid syntax; the create subcommand requires a resource type (such as pod, deployment, or service) or a manifest file. Option C is also incorrect because kubectl create my-app --image=nginx omits the resource type and therefore is not a valid kubectl create command. Option D is incorrect because kubectl run takes the resource name as its positional argument and does not support a --name flag, so kubectl run nginx --name=my-app would not produce a Pod named my-app.
Using kubectl run with explicit naming and image flags is consistent with Kubernetes command-line conventions and is widely documented as the correct approach for creating simple Pods. The resulting Pod can be verified using commands such as kubectl get pods and kubectl describe pod my-app.
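A hedged variation that prints the generated Pod manifest for review instead of creating it:

    kubectl run my-app --image=nginx --dry-run=client -o yaml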
In summary, Option B is the correct and verified answer because it uses valid kubectl syntax to create a Pod named my-app running the nginx container image in a straightforward and predictable way.
=========
Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?
Frontend proxy
Kube-proxy
Envoy proxy
Reverse proxy
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component. Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality. Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy as the correct and verified answer.
=========
What is the Kubernetes abstraction that allows groups of Pods to be exposed inside a Kubernetes cluster?
Deployment
Daemon
Unit
Service
In Kubernetes, Pods are ephemeral by design. They can be created, destroyed, rescheduled, or replaced at any time, and each Pod receives its own IP address. Because of this dynamic nature, directly relying on Pod IPs for communication is unreliable. To solve this problem, Kubernetes provides the Service abstraction, which allows a stable way to expose and access a group of Pods inside (and sometimes outside) the cluster.
A Service defines a logical set of Pods using label selectors and provides a consistent virtual IP address and DNS name for accessing them. Even if individual Pods fail or are replaced, the Service remains stable, and traffic is automatically routed to healthy Pods that match the selector. This makes Services a fundamental building block for internal communication between applications within a Kubernetes cluster.
Deployments (Option A) are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or exposure capabilities. They control how Pods run, not how they are accessed.
Option B, “Daemon,” is not a valid Kubernetes resource. The correct resource is a DaemonSet, which ensures that a copy of a Pod runs on each (or selected) node in the cluster. DaemonSets are used for node-level workloads like logging or monitoring agents, not for exposing Pods.
Option C, “Unit,” is not a Kubernetes concept at all and does not exist in Kubernetes architecture.
Services can be configured in different ways depending on access requirements, such as ClusterIP for internal access, NodePort or LoadBalancer for external access, and Headless Services for direct Pod discovery. Regardless of type, the core purpose of a Service is to expose a group of Pods in a stable and reliable way.
Therefore, the correct and verified answer is Option D: Service, which is the Kubernetes abstraction specifically designed to expose groups of Pods within a cluster.
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is the primary node-level agent in Kubernetes and plays a critical role in ensuring that workloads run correctly on each worker node. Its main responsibility is to ensure that the containers described in Pod specifications are running and remain healthy on that node, which makes option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over execution responsibilities. It watches the API server for Pod specifications that are scheduled to its node and then interacts with the container runtime to start, stop, and manage the containers defined in those Pods. The kubelet continuously monitors container health and reports Pod and node status back to the API server, enabling Kubernetes to make informed decisions about restarts, rescheduling, or remediation.
Health checks are another key responsibility of the kubelet. It executes liveness, readiness, and startup probes as defined in the Pod specification. Based on probe results, the kubelet may restart containers or update Pod status to reflect whether the application is ready to receive traffic. This behavior directly supports Kubernetes’ self-healing capabilities.
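A hedged sketch of how such probes appear in a container spec fragment; the paths, port, and timings are illustrative:

    containers:
    - name: web
      image: nginx:1.27
      livenessProbe:
        httpGet:
          path: /healthz
          port: 80
        periodSeconds: 10      # kubelet restarts the container if this keeps failing
      readinessProbe:
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5       # failing readiness removes the Pod from Service endpoints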
Option A is incorrect because persistent storage allocation and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet itself. Option B is incorrect because cluster state management and scheduling decisions are the responsibility of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet acts as the “node supervisor” for Kubernetes workloads. By ensuring containers are running as specified and continuously reporting their status, the kubelet forms the essential link between the Kubernetes control plane and the actual execution of applications on worker nodes. This clearly aligns with Option C as the correct and verified answer.
=========
What is the difference between a Deployment and a ReplicaSet?
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
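Hedged commands for inspecting and reverting that revision history; the Deployment name is illustrative:

    kubectl rollout history deployment/my-deployment
    kubectl rollout undo deployment/my-deployment --to-revision=2
    kubectl get replicasets        # shows the old and new ReplicaSets owned by the Deployment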
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas and can also be controlled by HPA.
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
What Kubernetes component handles network communications inside and outside of a cluster, using operating system packet filtering if available?
kube-proxy
kubelet
etcd
kube-controller-manager
kube-proxy is the Kubernetes component responsible for implementing Service networking on nodes, commonly by programming operating system packet filtering / forwarding rules (like iptables or IPVS), which makes A correct.
Kubernetes Services provide stable virtual IPs and ports that route traffic to a dynamic set of Pod endpoints. kube-proxy watches the API server for Service and EndpointSlice/Endpoints updates and then configures the node’s networking so that traffic to a Service is correctly forwarded to one of the backend Pods. In iptables mode, kube-proxy installs NAT and forwarding rules; in IPVS mode, it programs kernel load-balancing tables. In both cases, it leverages OS-level packet handling to efficiently steer traffic. This is the “packet filtering if available” concept referenced in the question.
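On a node running kube-proxy in iptables mode, a hedged way to glimpse those rules (requires node access and root):

    sudo iptables -t nat -L KUBE-SERVICES | head   # entry points for Service virtual-IP DNAT rules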
kube-proxy’s work affects both “inside” and “outside” paths in typical setups. Internal cluster clients reach Services via ClusterIP and DNS, and kube-proxy rules forward that traffic to Pods. For external traffic, paths often involve NodePort or LoadBalancer Services or Ingress controllers that ultimately forward into Services/Pods—again relying on node-level service rules. While some modern CNI/eBPF dataplanes can replace or bypass kube-proxy, the classic Kubernetes architecture still defines kube-proxy as the component implementing Service connectivity.
The other options are not networking dataplane components: kubelet runs Pods and reports status; etcd stores cluster state; kube-controller-manager runs control loops for API objects. None of these handle node-level packet routing for Services. Therefore, the correct verified answer is A: kube-proxy.
=========
What is Serverless computing?
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, “serverless” doesn’t mean there are no servers; it means developers don’t manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the “as-used basis” phrasing: you pay for what you run rather than maintaining always-on capacity.
It’s also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than “orchestrate containers yourself.”
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
Which of the following characteristics is associated with container orchestration?
Application message distribution
Dynamic scheduling
Deploying application JAR files
Virtual machine distribution
A core capability of container orchestration is dynamic scheduling, so B is correct. Orchestration platforms (like Kubernetes) are responsible for deciding where containers (packaged as Pods in Kubernetes) should run, based on real-time cluster conditions and declared requirements. “Dynamic” means the system makes placement decisions continuously as workloads are created, updated, or fail, and as cluster capacity changes.
In Kubernetes, the scheduler evaluates Pods that have no assigned node, filters nodes that don’t meet requirements (resources, taints/tolerations, affinity/anti-affinity, topology constraints), and then scores remaining nodes to pick the best target. This scheduling happens at runtime and adapts to the current state of the cluster. If nodes go down or Pods crash, controllers create replacements and the scheduler places them again—another aspect of dynamic orchestration.
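A hedged fragment of a Pod spec showing the kinds of declared requirements the scheduler weighs; the label and values are illustrative:

    spec:
      nodeSelector:
        disktype: ssd               # only nodes carrying this label are candidates
      containers:
      - name: app
        image: registry.example.com/app:1.0
        resources:
          requests:
            cpu: "500m"             # the node must have this much allocatable CPU free
            memory: "256Mi"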
The other options don’t define container orchestration: “application message distribution” is more about messaging systems or service communication patterns, not orchestration. “Deploying application JAR files” is a packaging/deployment detail relevant to Java apps but not a defining orchestration capability. “Virtual machine distribution” refers to VM management rather than container orchestration; Kubernetes focuses on containers and Pods (even if those containers sometimes run in lightweight VMs via sandbox runtimes).
So, the defining trait here is that an orchestrator automatically and continuously schedules and reschedules workloads, rather than relying on static placement decisions.
=========
Which command lists the running containers in the current Kubernetes namespace?
kubectl get pods
kubectl ls
kubectl ps
kubectl show pods
The correct answer is A: kubectl get pods. Kubernetes does not manage “containers” as standalone top-level objects; the primary schedulable unit is the Pod, and containers run inside Pods. Therefore, the practical way to list what’s running in a namespace is to list the Pods in that namespace. kubectl get pods shows Pods and their readiness, status, restarts, and age—giving you the canonical view of running workloads.
If you need the container-level details (images, container names), you typically use additional commands and output formatting:
kubectl describe pod
kubectl get pods -o jsonpath=... or -o wide to surface more fields
kubectl get pods -o=json to inspect .spec.containers and .status.containerStatuses
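For instance, a hedged one-liner that prints each Pod name alongside its container images:

    kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'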
But among the provided options, kubectl get pods is the only real kubectl command that lists the running workload objects in the current namespace.
The other options are not valid kubectl subcommands: kubectl ls, kubectl ps, and kubectl show pods are not standard Kubernetes CLI operations. Kubernetes intentionally centers around the API resource model, so listing resources uses kubectl get followed by a resource type (for example, kubectl get pods or kubectl get deployments).
So while the question says “running containers,” the Kubernetes-correct interpretation is “containers in running Pods,” and the appropriate listing command in the namespace is kubectl get pods, option A.
=========
How do you deploy a workload to Kubernetes without additional tools?
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> submits that desired state to the API server, and the relevant controllers then create or update the underlying objects to match it.
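A hedged, minimal example of that flow; the names, image, and file name are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.27

Saved as a file and applied with kubectl apply -f deployment.yaml, the Deployment controller then creates the ReplicaSet and Pods to match the declared state.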
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
What do Deployments and StatefulSets have in common?
They manage Pods that are based on an identical container spec.
They support the OnDelete update strategy.
They support an ordered, graceful deployment and scaling.
They maintain a sticky identity for each of their Pods.
Both Deployments and StatefulSets are Kubernetes workload controllers that manage a set of Pods created from a Pod template, meaning they manage Pods based on an identical container specification (a shared Pod template). That is why A is correct. In both cases, you declare a desired state (replicas, container images, environment variables, volumes, probes, etc.) in spec.template, and the controller ensures the cluster converges toward that state by creating, updating, or replacing Pods.
The differences are what make the other options incorrect. The OnDelete update strategy is offered by StatefulSets (and DaemonSets) but not by Deployments, whose strategies are RollingUpdate and Recreate, so B is not something the two have in common. Ordered, graceful deployment and scaling is a hallmark of StatefulSets (ordered Pod creation and termination with stable identities), not Deployments, so C is not shared. A sticky identity per Pod (a stable network identity and per-replica storage, commonly via a StatefulSet plus a headless Service) is specifically a StatefulSet characteristic, not a Deployment feature, so D is not common to both.
A useful way to think about it is: both controllers manage replicas of a Pod template, but they differ in semantics. Deployments are designed primarily for stateless workloads and typically focus on rolling updates and scalable replicas where any instance is interchangeable. StatefulSets are designed for stateful workloads and add identity and ordering guarantees: each replica gets a stable name (like db-0, db-1) and often stable PersistentVolumeClaims.
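To make the shared pattern concrete, here is a hedged StatefulSet sketch (names and image are illustrative); the spec.template block is exactly the structure a Deployment uses as well, which is the commonality option A describes:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # StatefulSet-specific: headless Service for stable identities
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:                  # the shared Pod template: identical container spec for every replica
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16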
So the shared commonality the question is testing is the basic workload-controller pattern: both controllers manage Pods created from a common template (identical container spec). Therefore, A is the verified answer.
=========
Which of the following are tasks performed by a container orchestration tool?
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
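As a hedged illustration of how these three responsibilities appear in a workload spec (a fragment of a Deployment spec, with illustrative names and values):
spec:
  replicas: 3                       # scaling: desired number of interchangeable instances
  template:
    spec:
      containers:
      - name: api
        image: example.com/api:1.0
        resources:
          requests:
            cpu: 100m               # scheduling input: the scheduler places the Pod where this fits
            memory: 128Mi
        livenessProbe:              # health management: restart the container if this check fails
          httpGet:
            path: /healthz
            port: 8080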
=========
Which of the following options includes valid API versions?
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Kubernetes API versions follow a consistent naming pattern that indicates stability level and versioning. The valid forms include stable versions like v1, and pre-release versions such as v1alpha1 and v1beta1. Option C consists of strings that follow the Kubernetes API version format (v1alpha1, v2beta3, v2), so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
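A quick, hedged way to see which versions a particular cluster actually serves (output varies by cluster and version):
kubectl api-versions              # e.g. apps/v1, batch/v1, rbac.authorization.k8s.io/v1
kubectl api-resources -o wide     # each resource with its apiVersion, kind, and supported verbs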
Therefore, among the choices, C is the only option composed entirely of valid Kubernetes-style API version strings.
=========
In Kubernetes, what is the primary function of a RoleBinding?
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
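A hedged sketch of that ConfigMap example (the namespace, names, and service account are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: configmap-reader
rules:
- apiGroups: [""]                 # "" is the core API group, where ConfigMaps live
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-configmaps
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: team-a
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io
The Role defines what is allowed; the RoleBinding attaches it to the app-sa service account, and only inside the team-a namespace.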
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
=========
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
1.0
2021-10-11
0.1.0-rc
v1beta1
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
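A small worked example of SemVer precedence (pre-releases always sort before the corresponding release):
0.1.0-rc  <  0.1.0  <  0.2.0  <  1.0.0-alpha.1  <  1.0.0  <  1.0.1  <  1.1.0  <  2.0.0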
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
ReplicaSet
Deployment
DaemonSet
Pod
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
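A hedged sketch of the day-to-day Deployment workflow (names and images are illustrative):
kubectl create deployment web --image=nginx:1.25 --replicas=3
kubectl set image deployment/web nginx=nginx:1.26     # triggers a rolling update
kubectl rollout status deployment/web                 # watch the rollout converge
kubectl rollout undo deployment/web                   # roll back to the previous revision
kubectl scale deployment/web --replicas=5             # adjust the replica count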
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. In iptables mode, kube-proxy creates NAT rules that rewrite traffic sent to the Service virtual IP so it reaches one of the Pod endpoints; in IPVS mode, it programs kernel load-balancing tables for more scalable Service routing. Some CNI implementations replace or bypass kube-proxy with their own (often eBPF-based) dataplane, but the classic kube-proxy role remains the canonical answer. In every case, the job is the same: connect the Service IP and port to the Pod IP/port endpoints behind it.
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
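A few hedged ways to observe this in practice (the Service name is illustrative, and the iptables chain applies to iptables mode):
kubectl get service my-svc                                          # shows the Service's ClusterIP and ports
kubectl get endpointslices -l kubernetes.io/service-name=my-svc     # the backend Pod IPs kube-proxy forwards to
# on a node, in iptables mode, the NAT rules kube-proxy programs are visible:
# sudo iptables -t nat -L KUBE-SERVICES | grep my-svc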
=========
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
Summary
Counter
Histogram
Gauge
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage—anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the “single value that goes up and down” primitive the question asks for.
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
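For illustration, in the Prometheus text exposition format a gauge and a counter might look like this (metric names are made up):
# TYPE queue_depth gauge
queue_depth 42
# TYPE http_requests_total counter
http_requests_total 10234
In PromQL you would typically alert on the gauge's current value directly (e.g., queue_depth > 100) and derive a per-second rate from the counter (e.g., rate(http_requests_total[5m])).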
So, for “a single numerical value that can go up and down,” the correct Prometheus metric type is Gauge (D).
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
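As a hedged fragment of a Pod template showing what right-sizing looks like in practice (names and values are illustrative and should be tuned from observed usage):
containers:
- name: api
  image: example.com/api:1.4.2
  resources:
    requests:
      cpu: 250m          # what the scheduler reserves; drives bin-packing and node sizing
      memory: 256Mi
    limits:
      memory: 512Mi      # hard ceiling; the container is OOM-killed above this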
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents (log collectors, node monitoring agents, storage daemons, CNI components, security agents), wherever you want a presence on each node to interact with node resources. This overlaps with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is broader: it covers “all or certain nodes” (selected via node selectors, affinity, and taints/tolerations) and makes clear that the managed unit is a Pod, not just a process.
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
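A minimal DaemonSet sketch (names and image are illustrative; the toleration is optional and only needed if the agent should also run on control-plane nodes):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: example.com/log-agent:2.1
Note there is no replicas field: the Pod count follows the number of eligible nodes.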
=========