CCAK Exam Practice Questions Available
For full access to the complete question bank and topic-wise explanations, visit:
CertQuestionsBank.com
FB page: https://www.facebook.com/certquestionsbank
Some sample CCAK exam questions are shared below.
1.When reviewing a third-party agreement with a cloud service provider, which of the following should
be the GREATEST concern regarding customer data privacy?
A. Return or destruction of information
B. Data retention, backup, and recovery
C. Patch management process
D. Network intrusion detection
Answer: A
Explanation:
When reviewing a third-party agreement with a cloud service provider, the greatest concern regarding
customer data privacy is the return or destruction of information. This is because customer data may
contain sensitive or personal information that needs to be protected from unauthorized access, use,
or disclosure. The cloud service provider should have clear and transparent policies and procedures
for returning or destroying customer data upon termination of the agreement or upon customer
request. The cloud service provider should also provide evidence of the return or destruction of
customer data, such as certificates of destruction, audit logs, or reports. The return or destruction of
information should comply with applicable laws and regulations, such as the General Data Protection
Regulation (GDPR), the California Consumer Privacy Act (CCPA), or the Health Insurance Portability
and Accountability Act (HIPAA). The cloud service provider should also ensure that any
subcontractors or affiliates that have access to customer data follow the same policies and
procedures12.
Reference: Cloud Services Agreements - Protecting Your Hosted Environment
CSP agreements, price lists, and offers - Partner Center
2.A cloud auditor should use statistical sampling rather than judgment (nonstatistical) sampling when:
A. generalized audit software is unavailable.
B. the auditor wants to avoid sampling risk.
C. the probability of error must be objectively quantified.
D. the tolerable error rate cannot be determined.
Answer: C
Explanation:
According to the ISACA Cloud Auditing Knowledge Certificate Study Guide, a cloud auditor should
use statistical sampling rather than judgment (nonstatistical) sampling when the probability of error
must be objectively quantified1. Statistical sampling is a sampling technique that uses random
selection methods and mathematical calculations to draw conclusions about the population from the
sample results. Statistical sampling allows the auditor to measure the sampling risk, which is the risk
that the sample results do not represent the population, and to express the confidence level and
precision of the sample1. Statistical sampling also enables the auditor to estimate the rate of
exceptions or errors in the population based on the sample1.
The other options are not valid reasons for using statistical sampling rather than judgment sampling.
Option A is irrelevant, as generalized audit software is a tool that can facilitate both statistical and
judgment sampling, but it is not a requirement for either technique.
Option B is incorrect, as statistical sampling does not avoid sampling risk, but rather measures and
controls it.
Option D is illogical, as the tolerable error rate is a parameter that must be determined before
conducting any sampling technique, whether statistical or judgmental.
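As a worked illustration (the figures are hypothetical, not from the study guide), statistical attribute sampling lets the auditor turn sample results into a quantified error estimate at a stated confidence level:

```python
import math

def attribute_sample_estimate(sample_size, deviations_found, z_score=1.645):
    """Estimate the population deviation rate from a random attribute sample.

    A simple normal approximation: the z-score encodes the chosen confidence
    level (1.645 ~ 90% one-sided), which is what allows sampling risk to be
    expressed objectively rather than by judgment.
    """
    rate = deviations_found / sample_size                 # observed sample deviation rate
    std_err = math.sqrt(rate * (1 - rate) / sample_size)  # standard error of that rate
    upper_limit = rate + z_score * std_err                # achieved upper deviation limit
    return rate, upper_limit

# Hypothetical sample: 125 randomly selected change tickets, 3 exceptions found.
rate, upper = attribute_sample_estimate(125, 3)
print(f"Sample deviation rate: {rate:.2%}, upper limit at ~90% confidence: {upper:.2%}")
```

If the computed upper limit stays below the tolerable error rate set during planning, the auditor can accept the control with a known, quantified level of sampling risk.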
Reference: ISACA Cloud Auditing Knowledge Certificate Study Guide, page 17-18.
3.When establishing cloud governance, an organization should FIRST test by migrating:
A. legacy applications to the cloud.
B. a few applications to the cloud.
C. all applications at once to the cloud.
D. complex applications to the cloud
Answer: B
Explanation:
When establishing cloud governance, an organization should first test by migrating a few applications
to the cloud. Cloud governance is the process of defining and implementing policies, procedures,
standards, and controls to ensure the effective, efficient, secure, and compliant use of cloud services.
Cloud governance requires a clear understanding of the roles, responsibilities, expectations, and
objectives of both the cloud service provider and the cloud customer, as well as the alignment of the
cloud strategy with the business strategy. Cloud governance also involves monitoring, measuring,
and reporting on the performance, availability, security, compliance, and cost of cloud services.
Migrating a few applications to the cloud can help an organization to test and validate its cloud
governance approach before scaling up to more complex or critical applications.
Migrating a few applications can also help an organization to:
Identify and prioritize the business requirements, risks, and benefits of moving to the cloud.
Assess the readiness, suitability, and compatibility of the applications for the cloud.
Choose the appropriate cloud service model (such as SaaS, PaaS, or IaaS) and deployment model
(such as public, private, hybrid, or multi-cloud) for each application.
Define and implement the necessary security, compliance, privacy, and data protection measures for
each application.
Establish and enforce the roles and responsibilities of the cloud governance team and other
stakeholders involved in the migration process.
Develop and execute a migration plan that includes testing, validation, verification, and rollback
procedures for each application.
Monitor and measure the performance, availability, security, compliance, and cost of each application
in the cloud.
Collect feedback and lessons learned from the migration process and use them to improve the cloud
governance approach.
Migrating a few applications to the cloud can also help an organization to avoid some common pitfalls
and challenges of cloud migration, such as:
Migrating legacy or incompatible applications that require significant re-engineering or refactoring to
work in the cloud.
Migrating all applications at once without proper planning, testing, or governance, which can result in
operational disruptions, data loss, security breaches, or compliance violations.
Migrating complex or critical applications without adequate testing or governance, which can increase
the risk of failure or downtime.
Migrating applications without considering the impact on the end-users or customers, who may
experience changes in functionality, performance, usability, or accessibility.
Therefore, migrating a few applications to the cloud is a recommended best practice for establishing
cloud governance. It can help an organization to gain experience and confidence in using cloud
services while ensuring that its cloud governance approach is effective, efficient, secure, and
compliant.
Reference: Migration environment planning checklist - Cloud Adoption Framework
Cloud Governance: What You Need To Know - Forbes
Cloud Governance: A Comprehensive Guide - BMC Blogs
4.What is a sign that an organization has adopted a shift-left concept of code release cycles?
A. Large entities with slower release cadences and geographically dispersed systems
B. A waterfall model to move resources through the development to release phases
C. Maturity of start-up entities with high-iteration to low-volume code commits
D. Incorporation of automation to identify and address software code problems early
Answer: D
Explanation:
The shift-left concept of code release cycles is an approach that moves testing, quality, and
performance evaluation early in the development process, often before any code is written. The goal
of shift-left testing is to anticipate and resolve software defects, bugs, errors, and vulnerabilities as
soon as possible, reducing the cost and time of fixing them later in the production stage. To achieve
this, shift-left testing relies on automation tools and techniques that enable continuous integration,
continuous delivery, and continuous deployment of code. Automation also facilitates collaboration and
feedback among developers, testers, security experts, and other stakeholders throughout the
development lifecycle. Therefore, the incorporation of automation to identify and address software
code problems early is a sign that an organization has adopted a shift-left concept of code release
cycles.
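Purely as an illustrative sketch (the file layout, pattern, and policy are assumptions, not taken from the references below), a shift-left practice might wire a simple automated check into the commit or build stage so defects are flagged long before release:

```python
import re
import sys
from pathlib import Path

# Hypothetical early-stage check: flag likely hardcoded credentials before code is merged.
SECRET_PATTERN = re.compile(r"(password|api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE)

def scan_for_secrets(source_dir):
    """Return 'file:line' locations that look like hardcoded credentials."""
    findings = []
    for path in Path(source_dir).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if SECRET_PATTERN.search(line):
                findings.append(f"{path}:{lineno}")
    return findings

if __name__ == "__main__":
    issues = scan_for_secrets(sys.argv[1] if len(sys.argv) > 1 else "src")
    for issue in issues:
        print(f"Possible hardcoded secret: {issue}")
    sys.exit(1 if issues else 0)  # a non-zero exit fails the build, stopping the defect early
```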
Reference: The ‘Shift Left’ Is A Growing Theme For Cloud Cybersecurity In 2022
Shift left vs shift right: A DevOps mystery solved
How to shift left with continuous integration
5.Which of the following provides the BEST evidence that a cloud service provider's continuous
integration and continuous delivery (CI/CD) development pipeline includes checks for compliance as
new features are added to its Software as a Service (SaaS) applications?
A. Compliance tests are automated and integrated within the CI tool.
B. Developers keep credentials outside the code base and in a secure repository.
C. Frequent compliance checks are performed for development environments.
D. Third-party security libraries are continuously kept up to date.
Answer: A
Explanation:
Compliance tests that are automated and integrated within the continuous integration (CI) tool provide the best evidence that compliance is checked as new features are added. When compliance requirements are codified as tests that run automatically on every build or commit, each change to the SaaS application is evaluated against the applicable policies and control requirements before it is released, and the results are repeatable and auditable. The other options are useful security practices but do not demonstrate that compliance is verified continuously within the pipeline: keeping credentials out of the code base is a secrets-management control, periodic checks of development environments are point-in-time activities rather than pipeline-integrated checks, and keeping third-party libraries up to date addresses vulnerability management rather than compliance verification.
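To make this concrete, a minimal, hypothetical sketch of a compliance test that a CI tool could run automatically on every change (the configuration structure and policy thresholds are assumptions, written in a pytest style):

```python
# test_compliance.py - executed automatically by the CI tool on every commit (e.g., with pytest).
import json
from pathlib import Path

def load_deployment_config(path="deployment_config.json"):
    """Load the hypothetical deployment configuration produced by the build step."""
    return json.loads(Path(path).read_text())

def test_encryption_at_rest_enabled():
    config = load_deployment_config()
    assert config["storage"]["encryption_at_rest"] is True, "Data at rest must be encrypted"

def test_audit_log_retention_meets_policy():
    config = load_deployment_config()
    # Hypothetical policy: audit logs must be retained for at least 365 days.
    assert config["logging"]["retention_days"] >= 365, "Audit log retention below policy minimum"
```

Because the checks run and are recorded on every build, they produce continuous, auditable evidence rather than point-in-time assessments.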
Reference = ISACA and the Cloud Security Alliance (CSA) emphasize embedding automated policy and compliance checks into CI/CD pipelines as part of a DevSecOps, or "shift-left," approach, so that assurance evidence is generated continuously. The CCAK materials and the Cloud Controls Matrix (CCM) address application and interface security and change control in cloud development and release processes.
6.Under GDPR, an organization should report a data breach within what time frame?
A. 48 hours
B. 72 hours
C. 1 week
D. 2 weeks
Answer: B
Explanation:
Under the General Data Protection Regulation (GDPR), organizations are required to report a data
breach to the appropriate supervisory authority within 72 hours of becoming aware of it. This
timeframe is critical to ensure timely communication with the authorities and affected individuals, if
necessary, to mitigate any potential harm caused by the breach.
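As a trivial illustration of the 72-hour window (the timestamps are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical moment the organization became aware of the breach.
aware_at = datetime(2024, 5, 6, 14, 30, tzinfo=timezone.utc)
report_deadline = aware_at + timedelta(hours=72)  # GDPR notification window

print(f"Aware of breach:      {aware_at.isoformat()}")
print(f"Report no later than: {report_deadline.isoformat()}")
```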
Reference = This requirement is outlined in the GDPR guidelines, which emphasize the importance of
prompt reporting to maintain compliance and protect individual rights and freedoms12345.
7.In relation to testing business continuity management and operational resilience, an auditor should
review which of the following database documentation?
A. Database backup and replication guidelines
B. System backup documentation
C. Incident management documentation
D. Operational manuals
Answer: A
Explanation:
Database backup and replication guidelines are essential for ensuring the availability and integrity of
data in the event of a disruption or disaster. They describe how the data is backed up, stored,
restored, and synchronized across different locations and platforms. An auditor should review these
guidelines to verify that they are aligned with the business continuity objectives, policies, and
procedures of the organization and the cloud service provider. The auditor should also check that the
backup and replication processes are tested regularly and that the results are documented and
reported.
Reference: ISACA, Certificate of Cloud Auditing Knowledge (CCAK) Study Guide, 2021, p. 96
Cloud Security Alliance (CSA), Cloud Controls Matrix (CCM) v4.0, 2021, BCR-01: Business
Continuity Planning/Resilience
8.Organizations maintain mappings between the different control frameworks they adopt to:
A. help identify controls with common assessment status.
B. avoid duplication of work when assessing compliance,
C. help identify controls with different assessment status.
D. start a compliance assessment using the latest assessment.
Answer: B
Explanation:
Organizations maintain mappings between the different control frameworks they adopt to avoid
duplication of work when assessing compliance. This is because different control frameworks may
have overlapping or equivalent controls that address the same objectives or risks. By mapping these
controls, organizations can streamline their compliance assessment process and reduce the cost and
effort involved. Mappings also help organizations to identify any gaps or inconsistencies in their
control coverage and address them accordingly. This is part of the Cloud Control Matrix (CCM)
domain COM-03: Control Frameworks, which states that "The organization should identify and adopt
applicable control frameworks, standards, and best practices to support the cloud compliance
program."1
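A minimal sketch of how such a mapping avoids duplicate work (the control identifiers, mapping, and results below are illustrative placeholders, not an authoritative crosswalk):

```python
# Illustrative mapping: each internal control objective points to the equivalent
# controls in the frameworks the organization has adopted.
control_mapping = {
    "ACCESS-REVIEW":  {"ISO27002": "9.2.5",  "NIST80053": "AC-2"},
    "BACKUP-TESTING": {"ISO27002": "12.3.1", "NIST80053": "CP-9"},
}

# Assessment results already obtained against one framework.
assessed = {("ISO27002", "9.2.5"): "effective"}

def reuse_assessment(objective):
    """Return an existing assessment result for any mapped control, if one exists."""
    for framework, control_id in control_mapping[objective].items():
        result = assessed.get((framework, control_id))
        if result:
            return f"{objective}: reuse {framework} {control_id} result ({result})"
    return None

print(reuse_assessment("ACCESS-REVIEW"))   # existing evidence reused, no duplicate testing
print(reuse_assessment("BACKUP-TESTING"))  # None -> still needs a fresh assessment
```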
Reference: = CCAK Study Guide, Chapter 3: Cloud Compliance Program, page 54
9.To ensure a cloud service provider is complying with an organization's privacy requirements, a
cloud auditor should FIRST review:
A. organizational policies, standards, and procedures.
B. adherence to organization policies, standards, and procedures.
C. legal and regulatory requirements.
D. the IT infrastructure.
Answer: A
Explanation:
To ensure a cloud service provider is complying with an organization’s privacy requirements, a cloud
auditor should first review the organizational policies, standards, and procedures that define the
privacy objectives, expectations, and responsibilities of the organization. The organizational policies,
standards, and procedures should also reflect the legal and regulatory requirements that apply to the
organization and its cloud service provider, as well as the best practices and guidelines for cloud
privacy. The organizational policies, standards, and procedures should provide the basis for
evaluating the cloud service provider’s privacy practices and controls, as well as the contractual
terms and conditions that govern the cloud service agreement. The cloud auditor should compare the
organizational policies, standards, and procedures with the cloud service provider’s self-disclosure
statements, third-party audit reports, certifications, attestations, or other evidence of compliance123.
Reviewing the adherence to organization policies, standards, and procedures (B) is a subsequent
step that the cloud auditor should perform after reviewing the organizational policies, standards, and
procedures themselves. The cloud auditor should assess whether the cloud service provider is
following the organization’s policies, standards, and procedures consistently and effectively, as well
as whether the organization is monitoring and enforcing the compliance of the cloud service provider.
The cloud auditor should also identify any gaps or deviations between the organization’s policies,
standards, and procedures and the actual practices and controls of the cloud service provider123.
Reviewing the legal and regulatory requirements (C) is an important aspect of ensuring a cloud service
provider is complying with an organization’s privacy requirements, but it is not the first step that a
cloud auditor should take. The legal and regulatory requirements may vary depending on the
jurisdiction, industry, or sector of the organization and its cloud service provider. The legal and
regulatory requirements may also change over time or be subject to interpretation or dispute.
Therefore, the cloud auditor should first review the organizational policies, standards, and procedures
that incorporate and translate the legal and regulatory requirements into specific and measurable
privacy objectives, expectations, and responsibilities for both parties123.
Reviewing the IT infrastructure (D) is not a relevant or sufficient step for ensuring a cloud service
provider is complying with an organization’s privacy requirements. The IT infrastructure refers to the
hardware, software, network, and other components that support the delivery of cloud services. The
IT infrastructure is only one aspect of cloud security and privacy, and it may not be accessible or
visible to the cloud auditor or the organization. The cloud auditor should focus on reviewing the
privacy practices and controls that are implemented by the cloud service provider at different layers of
the cloud service model (IaaS, PaaS, SaaS), as well as the contractual terms and conditions that
define the privacy rights and obligations of both parties123.
Reference:
Cloud Audits and Compliance: What You Need To Know - Linford & Company LLP
Trust in the Cloud in audits of cloud services - PwC
Cloud Compliance & Regulations Resources | Google Cloud
10.In all three cloud deployment models (IaaS, PaaS, and SaaS), who is responsible for the patching
of the hypervisor layer?
A. Cloud service provider
B. Shared responsibility
C. Cloud service customer
D. Patching on hypervisor layer not required
Answer: A
Explanation:
The cloud service provider is responsible for the patching of the hypervisor layer in all three cloud
deployment models (IaaS, PaaS, and SaaS). The hypervisor layer is the software that allows the
creation and management of virtual machines on a physical server. The hypervisor layer is part of the
cloud infrastructure, which is owned and operated by the cloud service provider. The cloud service
provider is responsible for ensuring that the hypervisor layer is secure, reliable, and up to date with
the latest patches and updates. The cloud service provider should also monitor and report on the
status and performance of the hypervisor layer, as well as any issues or incidents that may affect it.
The cloud service customer is not responsible for the patching of the hypervisor layer, as they do not
have access or control over the cloud infrastructure. The cloud service customer only has access and
control over the cloud resources and services that they consume from the cloud service provider,
such as virtual machines, storage, databases, applications, etc. The cloud service customer is
responsible for ensuring that their own cloud resources and services are secure, compliant, and
updated with the latest patches and updates.
The patching of the hypervisor layer is not a shared responsibility between the cloud service provider
and the cloud service customer, as it is solely under the domain of the cloud service provider. The
shared responsibility model in cloud computing refers to the division of security and compliance
responsibilities between the cloud service provider and the cloud service customer, depending on the
type of cloud deployment model. For example, in IaaS, the cloud service provider is responsible for
securing the physical infrastructure, network, and hypervisor layer, while the cloud service customer
is responsible for securing their own operating systems, applications, data, etc. In PaaS, the cloud
service provider is responsible for securing everything up to the platform layer, while the cloud service
customer is responsible for securing their own applications and data. In SaaS, the cloud service
provider is responsible for securing everything up to the application layer, while the cloud service
customer is responsible for securing their own data and user access.
Patching on hypervisor layer is required, as it is essential for maintaining the security, reliability, and
performance of the cloud infrastructure. Patching on hypervisor layer can help prevent vulnerabilities,
bugs, errors, or exploits that may compromise or affect the functionality of the virtual machines or
other cloud resources and services. Patching on hypervisor layer can also help improve or enhance
the features or capabilities of the hypervisor software or hardware.
Reference: Patching process - AWS Prescriptive Guidance
What is a Hypervisor in Cloud Computing and Its Types? - Simplilearn
In all three cloud deployment models, (IaaS, PaaS, and … - Exam4Training
Reference Architecture: App Layering | Citrix Tech Zone
Hypervisor - GeeksforGeeks
11.To qualify for CSA STAR attestation for a particular cloud system, the SOC 2 report must cover:
A. Cloud Controls Matrix (CCM) and ISO/IEC 27001:2013 controls.
B. ISO/IEC 27001:2013 controls.
C. all Cloud Controls Matrix (CCM) controls and TSPC security principles.
D. maturity model criteria.
Answer: A
Explanation:
To qualify for CSA STAR attestation, the SOC 2 report must cover both the Cloud Controls Matrix
(CCM) and ISO/IEC 27001:2013 controls. The CSA STAR Attestation integrates SOC 2 reporting with
additional cloud security criteria from the CSA CCM. This combination provides a comprehensive
framework for assessing the security and privacy controls of cloud services, ensuring that they meet
the rigorous standards required for STAR attestation.
Reference = The information is supported by the Cloud Security Alliance’s resources, which outline the STAR program’s emphasis on transparency, rigorous auditing, and harmonization of standards as per the CCM. Additionally, the CSA STAR Certification process leverages the requirements of the ISO/IEC 27001:2013 management system standard together with the CSA Cloud Controls Matrix.
12.Which of the following is the MOST relevant question in the cloud compliance program design
phase?
A. Who owns the cloud services strategy?
B. Who owns the cloud strategy?
C. Who owns the cloud governance strategy?
D. Who owns the cloud portfolio strategy?
Answer: B
Explanation:
The most relevant question in the cloud compliance program design phase is who owns the cloud strategy. The cloud strategy defines the business-driven direction, objectives, and risk appetite for the organization's use of cloud computing, and the compliance program must be designed to support it. Identifying the owner of the cloud strategy establishes accountability and decision rights, determines the reporting structure and escalation process for cloud compliance issues, and ensures the appropriate degree of investment and control around the life cycle of cloud services1. The cloud strategy owner should be a senior executive who has the vision, influence, and resources to drive the cloud compliance program and align it with the business objectives2.
Reference: Building Cloud Governance From the Basics - ISACA
[Cloud Governance | Microsoft Azure]
13.What should be the control audit frequency for an organization's business continuity management
and operational resilience strategy?
A. Annually
B. Biannually
C. Quarterly
D. Monthly
Answer: A
Explanation:
The control audit frequency for an organization’s business continuity management and operational
resilience strategy should be conducted annually. This frequency is considered appropriate for most
organizations to ensure that their business continuity plans and operational resilience strategies
remain effective and up-to-date with the current risk landscape. Conducting these audits annually
aligns with the best practices of reviewing and updating business continuity plans to adapt to new
threats, changes in the business environment, and lessons learned from past incidents.
Reference = The annual audit frequency is supported by industry standards and guidelines that emphasize the importance of regular reviews to maintain operational resilience. These include resources from professional bodies and industry groups that outline the need for periodic assessments to ensure the effectiveness of business continuity and resilience strategies.
14.What aspect of Software as a Service (SaaS) functionality and operations would the cloud
customer be responsible for and should be audited?
A. Access controls
B. Vulnerability management
C. Patching
D. Source code reviews
Answer: A
Explanation:
According to the cloud shared responsibility model, the cloud customer is responsible for managing
the access controls for the SaaS functionality and operations, and this should be audited by the cloud
auditor12. Access controls are the mechanisms that restrict and regulate who can access and use the
SaaS applications and data, and how they can do so. Access controls include identity and access
management, authentication, authorization, encryption, logging, and monitoring. The cloud customer
is responsible for defining and enforcing the access policies, roles, and permissions for the SaaS
users, as well as ensuring that the access controls are aligned with the security and compliance
requirements of the customer’s business context12.
The other options are not the aspects of SaaS functionality and operations that the cloud customer is
responsible for and should be audited.
Option B is incorrect, as vulnerability management is the process of identifying, assessing, and
mitigating the security weaknesses in the SaaS applications and infrastructure, and this is usually
handled by the cloud service provider12.
Option C is incorrect, as patching is the process of updating and fixing the SaaS applications and
infrastructure to address security issues or improve performance, and this is also usually handled by
the cloud service provider12.
Option D is incorrect, as source code reviews are the process of examining and testing the SaaS
applications’ source code to detect errors or vulnerabilities, and this is also usually handled by the
cloud service provider12.
Reference: Shared responsibility in the cloud - Microsoft Azure
The Customer’s Responsibility in the Cloud Shared Responsibility Model - ISACA
16.Which of the following key stakeholders should be identified FIRST when an organization is
designing a cloud compliance program?
A. Cloud strategy owners
B. Internal control function
C. Cloud process owners
D. Legal functions
Answer: A
Explanation:
When designing a cloud compliance program, the first key stakeholders to identify are the cloud
strategy owners. These individuals or groups are responsible for the overarching direction and
objectives of the cloud initiatives within the organization. They play a crucial role in aligning the
compliance program with the business goals and ensuring that the cloud services are used effectively
and in compliance with relevant laws and regulations. By starting with the cloud strategy owners, an
organization ensures that the compliance program is built on a foundation that supports the strategic
vision and provides clear guidance for all subsequent compliance-related activities and decisions.
Reference = The answer is based on generally accepted best practices for cloud compliance and stakeholder management, and it aligns with the recognized approach of prioritizing strategic leadership in the initial stages of designing a compliance program, as emphasized in guidance from ISACA and the Cloud Security Alliance (CSA).
17.Which of the following aspects of risk management involves identifying the potential reputational
and financial harm when an incident occurs?
A. Likelihood
B. Mitigation
C. Residual risk
D. Impact analysis
Answer: D
Explanation:
Impact analysis is the aspect of risk management that involves identifying the potential reputational
and financial harm when an incident occurs. Impact analysis is the process of estimating the
consequences or effects of a risk event on the business objectives, operations, processes, or
functions. Impact analysis helps to measure and quantify the severity or magnitude of the risk event,
as well as to prioritize and rank the risks based on their impact. Impact analysis also helps to
determine the appropriate level of response and mitigation for each risk event, as well as to allocate
the necessary resources and budget for risk management123.
Likelihood (A) is not the aspect of risk management that involves identifying the potential reputational
and financial harm when an incident occurs. Likelihood is the aspect of risk management that involves
estimating the probability or frequency of a risk event occurring. Likelihood is the process of
assessing and evaluating the factors or causes that may trigger or influence a risk event, such as
threats, vulnerabilities, assumptions, uncertainties, etc. Likelihood helps to measure and quantify the
chance or possibility of a risk event happening, as well as to prioritize and rank the risks based on
their likelihood123.
Mitigation (B) is not the aspect of risk management that involves identifying the potential reputational
and financial harm when an incident occurs. Mitigation is the aspect of risk management that involves
reducing or minimizing the likelihood or impact of a risk event. Mitigation is the process of
implementing and applying controls or actions that can prevent, avoid, transfer, or accept a risk event,
depending on the risk appetite and tolerance of the organization. Mitigation helps to improve and
enhance the security and resilience of the organization against potential risks, as well as to optimize
the cost and benefit of risk management123.
Residual risk (C) is not the aspect of risk management that involves identifying the potential
reputational and financial harm when an incident occurs. Residual risk is the aspect of risk
management that involves measuring and monitoring the remaining or leftover risk after mitigation.
Residual risk is the process of evaluating and reviewing the effectiveness and efficiency of the
mitigation controls or actions, as well as identifying and addressing any gaps or issues that may arise.
Residual risk helps to ensure that the actual level of risk is aligned with the desired level of risk, as
well as to update and improve the risk management strategy and plan123.
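A simple worked illustration (the ratings are hypothetical) of how impact and likelihood combine into a score used to rank risks and prioritize responses:

```python
# Hypothetical risk register entries: likelihood and impact rated on a 1-5 scale,
# where impact reflects estimated reputational and financial harm.
risks = [
    {"name": "Customer data disclosure", "likelihood": 2, "impact": 5},
    {"name": "Service outage",           "likelihood": 4, "impact": 3},
    {"name": "Misconfigured storage",    "likelihood": 3, "impact": 4},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]  # simple qualitative risk score

# Rank risks so mitigation effort and budget can be allocated to the highest scores first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: likelihood {risk["likelihood"]}, impact {risk["impact"]}, score {risk["score"]}')
```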
Reference: Risk Analysis: A Comprehensive Guide | SafetyCulture
Risk Assessment and Analysis Methods: Qualitative and Quantitative - ISACA
Risk Management Process - Risk Management | Risk Assessment | Risk …
18.In cloud computing, which KEY subject area relies on measurement results and metrics?
A. Software as a Service (SaaS) application services
B. Infrastructure as a Service (IaaS) storage and network
C. Platform as a Service (PaaS) development environment
D. Service level agreements (SLAs)
Answer: D
Explanation:
SLAs in cloud computing define performance metrics and uptime commitments, making them crucial
for monitoring and measuring service delivery against predefined benchmarks. Metrics from SLAs
help in tracking service performance, compliance with contractual obligations, and cloud service
provider accountability. ISACA’s CCAK outlines the importance of SLAs for cloud governance and
risk
management, as they provide a measurable baseline that informs cloud audit activities (referenced in
CCM under Governance, Risk, and Compliance - GOV-05).
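For example, a minimal sketch (the figures are assumed) of turning raw availability measurements into an SLA metric that can be compared with the contracted commitment:

```python
# Hypothetical monthly figures for a cloud service.
minutes_in_month = 30 * 24 * 60
downtime_minutes = 50          # measured unplanned downtime
sla_commitment = 99.9          # availability percentage promised in the SLA

measured_availability = (1 - downtime_minutes / minutes_in_month) * 100
print(f"Measured availability: {measured_availability:.3f}%")
print("SLA met" if measured_availability >= sla_commitment else "SLA breached")
```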
19.During the cloud service provider evaluation process, which of the following BEST helps identify
baseline configuration requirements?
A. Vendor requirements
B. Product benchmarks
C. Benchmark controls lists
D. Contract terms and conditions
Answer: C
Explanation:
During the cloud service provider evaluation process, benchmark controls lists BEST help identify
baseline configuration requirements. Benchmark controls lists are standardized sets of security and
compliance controls that are applicable to different cloud service models, deployment models, and
industry sectors1. They provide a common framework and language for assessing and comparing the
security posture and capabilities of cloud service providers2. They also help cloud customers to
define their own security and compliance requirements and expectations based on best practices and
industry standards3.
Some examples of benchmark controls lists are:
The Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM), which is a comprehensive list of 133
control objectives that cover 16 domains of cloud security4.
The National Institute of Standards and Technology (NIST) Special Publication 800-53, which is a
catalog of 325 security and privacy controls for federal information systems and organizations,
including cloud-based systems5.
The International Organization for Standardization (ISO) / International Electrotechnical Commission
(IEC) 27017, which is a code of practice that provides guidance on 121 information security controls
for cloud services based on ISO/IEC 270026.
Vendor requirements, product benchmarks, and contract terms and conditions are not the best
sources for identifying baseline configuration requirements. Vendor requirements are the
specifications and expectations that the cloud service provider has for its customers, such as
minimum hardware, software, network, or support requirements7. Product benchmarks are the
measurements and comparisons of the performance, quality, or features of different cloud services or
products8. Contract terms and conditions are the legal agreements that define the rights, obligations,
and responsibilities of the parties involved in a cloud service contract9. These sources may provide
some information on the configuration requirements, but they are not as comprehensive,
standardized, or objective as benchmark controls lists.
Reference: CSA Security Guidance for Cloud Computing | CSA, section on Identify necessary security and compliance requirements
Evaluation Criteria for Cloud Infrastructure as a Service - Gartner, section on Security Controls
Checklist: Cloud Services Provider Evaluation Criteria | Synoptek, section on Security
Cloud Controls Matrix | CSA, section on Overview
NIST Special Publication 800-53 - NIST Pages, section on Abstract
ISO/IEC 27017:2015(en), Information technology - Security techniques …, section on Scope
What is vendor management? Definition from WhatIs.com, section on Vendor management
What is Benchmarking? Definition from WhatIs.com, section on Benchmarking
What is Terms and Conditions? Definition from WhatIs.com, section on Terms and Conditions
21.Which of the following types of risk is associated specifically with the use of multi-cloud
environments in an organization?
A. Risk of supply chain visibility and validation
B. Risk of reduced visibility and control
C. Risk of service reliability and uptime
D. Risk of unauthorized access to customer and business data
Answer: B
Explanation:
In multi-cloud environments, organizations use cloud services from multiple providers. This can lead
to challenges in maintaining visibility and control over the data and services due to the varying
management tools, processes, and security controls across different providers. The complexity of
managing multiple service models and the reliance on different cloud service providers can reduce an
organization’s ability to monitor and control its resources effectively, thus increasing the risk of
reduced visibility and control.
Reference = The information aligns with the principles outlined in the CCAK materials, which
emphasize the unique challenges of auditing the cloud, including ensuring the right controls for
confidentiality, integrity, and accessibility, and mitigating risks such as those associated with multi-
cloud environments12.
22.While using Software as a Service (SaaS) to store secret customer information, an organization
identifies a risk of disclosure to unauthorized parties. Although the SaaS service continues to be
used, secret customer data is not processed.
Which of the following risk treatment methods is being practiced?
A. Risk acceptance
B. Risk transfer
C. Risk mitigation
D. Risk reduction
Answer: C
Explanation:
Risk mitigation is a risk treatment approach in which controls or actions are implemented to reduce the likelihood or impact of a risk event. In this scenario, while the SaaS is still in use, the organization has chosen to limit exposure by no longer processing secret customer data, thereby mitigating the risk of unauthorized disclosure. This aligns with ISACA’s guidance in CCAK, which emphasizes limiting risk
exposure by controlling data handling and processing policies, a practice that is documented in
CSA’s Cloud Controls Matrix (CCM) guidelines for data protection and data minimization (CSA CCM
Domain DSI-05, Data Security and Information Lifecycle Management).
24.Which of the following is a cloud-native solution designed to counter threats that do not exist within
the enterprise?
A. Rule-based access control
B. Attribute-based access control
C. Policy-based access control
D. Role-based access control
Answer: B
Explanation:
Attribute-based access control (ABAC) is a cloud-native solution that uses attributes (such as user
role, location, or device) to dynamically control access. This method is highly flexible for the cloud,
where user attributes and environmental factors vary, unlike traditional enterprise security models.
ISACA’s CCAK emphasizes ABAC in cloud environments for its adaptability to multi-tenant
architectures and complex access control requirements, aligning with CCM controls in Domain
IAM-12 (Identity and Access Management) for flexible, secure access mechanisms.
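A minimal, hypothetical sketch of an attribute-based access decision (the attributes and policy are illustrative only):

```python
def abac_decision(user, resource, context):
    """Grant access only when user, resource, and environmental attributes all satisfy policy."""
    return (
        user["role"] == "finance-analyst"
        and user["department"] == resource["owning_department"]
        and context["device_managed"] is True
        and context["location"] in {"EU", "US"}
    )

user = {"role": "finance-analyst", "department": "finance"}
resource = {"owning_department": "finance", "classification": "confidential"}
context = {"device_managed": True, "location": "EU"}

print("ALLOW" if abac_decision(user, resource, context) else "DENY")
```

Changing any single attribute, such as an unmanaged device or an unexpected location, flips the decision to deny without redefining user roles.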
25.Which of the following BEST ensures adequate restriction on the number of people who can
access the pipeline production environment?
A. Separation of production and development pipelines
B. Ensuring segregation of duties in the production and development pipelines
C. Role-based access controls in the production and development pipelines
D. Periodic review of the continuous integration and continuous delivery (CI/CD) pipeline audit logs to
identify any access violations
Answer: C
Explanation:
Role-based access controls (RBAC) are a method of restricting access to resources based on the
roles of individual users within an organization. RBAC allows administrators to assign permissions to
roles, rather than to specific users, and then assign users to those roles. This simplifies the
management of access rights and reduces the risk of unauthorized or excessive access. RBAC is
especially important for ensuring adequate restriction on the number of people who can access the
pipeline production environment, which is the final stage of the continuous integration and continuous
delivery (CI/CD) process where code is deployed to the end-users. Access to the production
environment should be limited to only those who are responsible for deploying, monitoring, and
maintaining the code, such as production engineers, release managers, or site reliability engineers.
Developers, testers, or other stakeholders should not have access to the production environment, as
this could compromise the security, quality, and performance of the code. RBAC can help enforce this
separation of duties and responsibilities by defining different roles for different pipeline stages and
granting appropriate permissions to each role. For example, developers may have permission to
create, edit, and test code in the development pipeline, but not to deploy or modify code in the
production pipeline.
Conversely, production engineers may have permission to deploy, monitor, and troubleshoot code in
the production pipeline, but not to create or edit code in the development pipeline. RBAC can also
help implement the principle of least privilege, which states that users should only have the minimum
level of access required to perform their tasks. This reduces the attack surface and minimizes the
potential damage in case of a breach or misuse. RBAC can be configured at different levels of
granularity, such as at the organization, project, or object level, depending on the needs and
complexity of the organization. RBAC can also leverage existing identity and access management
(IAM) solutions, such as Azure Active Directory or AWS IAM, to integrate with cloud services and
applications.
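As a minimal sketch (role names and permissions are hypothetical), role-based restriction of the production pipeline could look like this:

```python
# Hypothetical role-to-permission mapping for a CI/CD pipeline.
ROLE_PERMISSIONS = {
    "developer":           {"dev:edit_code", "dev:run_tests"},
    "release_manager":     {"prod:approve_release"},
    "production_engineer": {"prod:deploy", "prod:view_logs"},
}

def is_allowed(user_roles, permission):
    """Return True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed({"developer"}, "prod:deploy"))            # False - developers cannot reach production
print(is_allowed({"production_engineer"}, "prod:deploy"))  # True  - only the production role can deploy
```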
Reference: Set pipeline permissions - Azure Pipelines
Azure DevOps: Access, Roles and Permissions
Cloud Computing - What IT Auditors Should Really Know
26.An organization currently following the ISO/IEC 27002 control framework has been charged by a
new CIO to switch to the NIST 800-53 control framework.
Which of the following is the FIRST step to this change?
A. Discard all work done and start implementing NIST 800-53 from scratch.
B. Recommend no change, since the scope of ISO/IEC 27002 is broader.
C. Recommend no change, since NIST 800-53 is a US-scoped control framework.
D. Map ISO/IEC 27002 and NIST 800-53 and detect gaps and commonalities.
Answer: D
Explanation:
The first step to switch from the ISO/IEC 27002 control framework to the NIST 800-53 control
framework is to map ISO/IEC 27002 and NIST 800-53 and detect gaps and commonalities. This step
can help the organization to understand the similarities and differences between the two frameworks,
and to identify which controls are already implemented, which controls need to be added or modified,
and which controls are no longer applicable. Mapping can also help the organization to leverage the
existing work done under ISO/IEC 27002 and avoid starting from scratch or discarding valuable
information. Mapping can also help the organization to align with both frameworks, as they are not
mutually exclusive or incompatible. In fact, NIST SP 800-53, Revision 5 provides a mapping table
between NIST 800-53 and ISO/IEC 27001 in Appendix H-21. ISO/IEC 27001 is a standard for
information security management systems that is based on ISO/IEC 27002, which is a code of
practice for information security controls2.
Reference: NIST SP 800-53, Revision 5 Control Mappings to ISO/IEC 27001
ISO - ISO/IEC 27002:2013 - Information technology - Security techniques - Code of practice for information security controls
27.In the context of Infrastructure as a Service (IaaS), a vulnerability assessment will scan virtual
machines to identify vulnerabilities in:
A. both operating system and application infrastructure contained within the cloud service
provider’s instances.
B. both operating system and application infrastructure contained within the customer’s instances.
C. only application infrastructure contained within the cloud service provider’s instances.
D. only application infrastructure contained within the customer's instance
Answer: B
Explanation:
In the context of Infrastructure as a Service (IaaS), a vulnerability assessment will scan virtual
machines to identify vulnerabilities in both operating system and application infrastructure contained
within the customer’s instances. IaaS is a cloud service model that provides customers with access
to virtualized computing resources, such as servers, storage, and networks, hosted by a cloud service
provider (CSP). The customer is responsible for installing, configuring, and maintaining the operating
system and application software on the virtual machines, while the CSP is responsible for managing
the underlying physical infrastructure. Therefore, a vulnerability assessment will scan the customer’s
instances to detect any weaknesses or misconfigurations in the operating system and application
layers that may expose them to potential threats. A vulnerability assessment can help the customer to
prioritize and remediate the identified vulnerabilities, and to comply with relevant security standards
and regulations12.
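Purely as an illustration (the package names, versions, and vulnerability list are made up), a vulnerability assessment of a customer instance conceptually compares what is installed in the operating system and application layers against known-vulnerable versions:

```python
# Hypothetical software inventory collected from the customer's virtual machine instance.
installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "customer-app": "2.4.1"}

# Hypothetical list of known-vulnerable versions (a real scanner would use maintained CVE feeds).
known_vulnerable = {("openssl", "1.1.1k"): "CVE-XXXX-0001", ("nginx", "1.16.0"): "CVE-XXXX-0002"}

findings = [
    f"{pkg} {ver}: {known_vulnerable[(pkg, ver)]}"
    for pkg, ver in installed.items()
    if (pkg, ver) in known_vulnerable
]

for finding in findings:
    print("Vulnerable component found:", finding)
```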
Reference: Azure Security Control - Vulnerability Management | Microsoft Learn
How to Implement Enterprise Vulnerability Assessment - Gartner
28.The CSA STAR Certification is based on criteria outlined in the Cloud Security Alliance (CSA) Cloud
Controls Matrix (CCM) in addition to:
A. GDPR CoC certification.
B. GB/T 22080-2008.
C. SOC 2 Type 1 or 2 reports.
D. ISO/IEC 27001 implementation.
Answer: D
Explanation:
The CSA STAR Certification is based on criteria outlined in the Cloud Security Alliance (CSA) Cloud
Controls Matrix (CCM) in addition to ISO/IEC 27001 implementation. The CCM is a cybersecurity
control framework for cloud computing that covers 17 domains and 197 control objectives that
address all key aspects of cloud technology. ISO/IEC 27001 is a standard for information security
management systems that specifies the requirements for establishing, implementing, maintaining,
and continually improving an information security management system within the context of the
organization. The CSA STAR Certification demonstrates that a cloud service provider conforms to the
applicable requirements of ISO/IEC 27001, has addressed issues critical to cloud security as outlined
in the CCM, and has been assessed against the STAR Capability Maturity Model for the management
of activities in CCM control areas1. The CSA STAR Certification is a third-party independent
assessment of the security of a cloud service provider and provides a high level of assurance and
trust to customers2.
Reference: CSA STAR Certification - Azure Compliance | Microsoft Learn
STAR | CSA
29.The PRIMARY purpose of Open Certification Framework (OCF) for the CSA STAR program is to:
A. facilitate an effective relationship between the cloud service provider and cloud client.
B. ensure understanding of true risk and perceived risk by the cloud service users.
C. provide global, accredited, and trusted certification of the cloud service provider.
D. enable the cloud service provider to prioritize resources to meet its own requirements.
Answer: C
Explanation:
According to the CSA website, the primary purpose of the Open Certification Framework (OCF) for
the CSA STAR program is to provide global, accredited, trusted certification of cloud providers1 The
OCF is an industry initiative to allow global, trusted independent evaluation of cloud providers. It is a
program for flexible, incremental and multi-layered cloud provider certification and/or attestation
according to the Cloud Security Alliance’s industry leading security guidance and control framework2
The OCF aims to address the gaps within the IT ecosystem that are inhibiting market adoption of
secure and reliable cloud services, such as the lack of simple, cost effective ways to evaluate and
compare providers’ resilience, data protection, privacy, and service portability2 The OCF also aims to
promote industry transparency and reduce complexity and costs for both providers and customers3
The other options are not correct because:
Option A is not correct because facilitating an effective relationship between the cloud service
provider and cloud client is not the primary purpose of the OCF for the CSA STAR program, but
rather a potential benefit or outcome of it. The OCF can help facilitate an effective relationship
between the provider and the client by providing a common language and framework for assessing
and communicating the security and compliance posture of the provider, as well as enabling trust and
confidence in the provider’s capabilities and performance. However, this is not the main goal or
objective of the OCF, but rather a means to achieve it.
Option B is not correct because ensuring understanding of true risk and perceived risk by the cloud
service users is not the primary purpose of the OCF for the CSA STAR program, but rather a possible
implication or consequence of it. The OCF can help ensure understanding of true risk and perceived
risk by the cloud service users by providing objective and verifiable information and evidence about
the provider’s security and compliance level, as well as allowing comparison and benchmarking with
other providers in the market. However, this is not the main aim or intention of the OCF, but rather a
result or effect of it.
Option D is not correct because enabling the cloud service provider to prioritize resources to meet its
own requirements is not the primary purpose of the OCF for the CSA STAR program, but rather a
potential advantage or opportunity for it. The OCF can enable the cloud service provider to prioritize
resources to meet its own requirements by providing a flexible, incremental and multi-layered
approach to certification and/or attestation that allows the provider to choose the level of assurance
that suits their business needs and goals. However, this is not the main reason or motivation for the
OCF, but rather a benefit or option for it.
Reference: 1: Open Certification Framework Working Group | CSA
2: Open Certification Framework | CSA - Cloud Security Alliance
3: Why your cloud services need the CSA STAR Registry listing
30.Which of the following is a detective control that may be identified in a Software as a Service
(SaaS) service provider?
A. Data encryption
B. Incident management
C. Network segmentation
D. Privileged access monitoring
Answer: D
Explanation:
A detective control is a type of internal control that seeks to uncover problems in a company’s
processes once they have occurred1. Examples of detective controls include physical inventory
checks, reviews of account reports and reconciliations, as well as assessments of current controls1.
Detective controls use platform telemetry to detect misconfigurations, vulnerabilities, and potentially
malicious activity in the cloud environment2.
In a Software as a Service (SaaS) service provider, privileged access monitoring is a detective control
that can help identify unauthorized or suspicious activities by users who have elevated permissions to
access or modify cloud resources, data, or configurations. Privileged access monitoring can involve
logging, auditing, alerting, and reporting on the actions performed by privileged users3. This can help
detect security incidents, compliance violations, or operational errors in a timely manner and enable
appropriate responses.
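A minimal, hypothetical sketch of such a detective control, scanning an exported activity log for privileged actions and raising alerts after the fact (the log format and alert rules are assumptions):

```python
# Hypothetical audit log entries exported from the SaaS provider's admin activity log.
audit_log = [
    {"user": "alice", "action": "read_report",              "privileged": False, "hour": 10},
    {"user": "bob",   "action": "grant_admin_role",         "privileged": True,  "hour": 3},
    {"user": "carol", "action": "export_all_customer_data", "privileged": True,  "hour": 14},
]

def detect_suspicious(entries):
    """Flag privileged actions performed outside business hours or involving bulk data export."""
    alerts = []
    for e in entries:
        if e["privileged"] and (e["hour"] < 7 or e["hour"] > 19 or "export" in e["action"]):
            alerts.append(f'ALERT: {e["user"]} performed {e["action"]} at {e["hour"]:02d}:00')
    return alerts

for alert in detect_suspicious(audit_log):
    print(alert)
```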
Data encryption, incident management, and network segmentation are examples of preventive
controls, which are designed to prevent problems from occurring in the first place. Data encryption
protects the confidentiality and integrity of data by transforming it into an unreadable format that can
only be decrypted with a valid key1. Incident management is a process that aims to restore normal
service operations as quickly as possible after a disruption or an adverse event4. Network
segmentation divides a network into smaller subnetworks that have different access levels and
security policies, reducing the attack surface and limiting the impact of a breach1.
Reference: Detective controls - SaaS Lens - docs.aws.amazon.com, section on Privileged access monitoring
Detective controls | Cloud Architecture Center | Google Cloud, section on Detective controls
Internal control: how do preventive and detective controls work?, section on SaaS Solutions to Support Internal Control
Detective Control: Definition, Examples, Vs. Preventive Control, section on What Is a Detective Control?
32.Which of the following should a cloud auditor recommend regarding controls for application
interfaces and databases to prevent manual or systematic processing errors, corruption of data, or
misuse?
A. Assessment of contractual and regulatory requirements for customer access
B. Establishment of policies and procedures across multiple system interfaces, jurisdictions, and
business functions to prevent improper disclosure, alteration, or destruction
C. Data input and output integrity routines
D. Testing in accordance with leading industry standards such as OWASP
Answer: C
Explanation:
The correct answer is
C. Data input and output integrity routines (i.e., reconciliation and edit checks) are controls that can
be implemented for application interfaces and databases to prevent manual or systematic processing
errors, corruption of data, or misuse. This is stated in the Cloud Controls Matrix (CCM) control AIS-03:
Data Integrity123, which is part of the Application & Interface Security domain. The CCM is a
cybersecurity control framework for cloud computing that can be used by cloud customers to build an
operational cloud risk management program.
The other options are not directly related to the question.
Option A refers to the CCM control AIS-02: Customer Access Requirements2, which addresses the
security, contractual, and regulatory requirements for customer access to data, assets, and
information systems.
Option B refers to the CCM control AIS-04: Data Security / Integrity2, which establishes policies and
procedures to support data security across multiple system interfaces, jurisdictions, and business
functions.
Option D refers to the CCM control AIS-01: Application Security2, which requires applications and
programming interfaces (APIs) to be designed, developed, deployed, and tested in accordance with
leading industry standards (e.g., OWASP for web applications).
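A minimal sketch of what such routines look like in practice (field names, rules, and totals are illustrative):

```python
# Edit check: validate each input record before the application interface processes it.
def edit_check(record):
    """Return a list of validation errors for a single input record."""
    errors = []
    if record["amount"] <= 0:
        errors.append("amount must be positive")
    if len(record["account_id"]) != 10:
        errors.append("account_id must be 10 characters")
    return errors

# Reconciliation: confirm that what the interface sent matches what the database recorded.
def reconcile(sent_records, stored_records):
    """Compare record counts and control totals between interface output and database input."""
    sent_total = sum(r["amount"] for r in sent_records)
    stored_total = sum(r["amount"] for r in stored_records)
    return len(sent_records) == len(stored_records) and sent_total == stored_total

sent = [{"account_id": "ACCT000001", "amount": 125.00}, {"account_id": "ACCT000002", "amount": 75.50}]
stored = [{"account_id": "ACCT000001", "amount": 125.00}, {"account_id": "ACCT000002", "amount": 75.50}]

print(edit_check({"account_id": "SHORT", "amount": -5}))  # both rules violated
print("Reconciled" if reconcile(sent, stored) else "Mismatch detected")
```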
Reference:
Certificate of Cloud Auditing Knowledge (CCAK) Study Guide, Chapter 5: Cloud Assurance Frameworks
What is the Cloud Controls Matrix (CCM)? - Cloud Security Alliance
AIS-03: Data Integrity - CSF Tools - Identity Digital
AIS: Application & Interface Security - CSF Tools - Identity Digital
PR.DS-6: Integrity checking mechanisms are used to verify software … - CSF Tools - Identity Digital