Cloud Computing Unit-V

This document discusses cloud security challenges, common standards, and applications, highlighting issues such as data loss, user account hijacking, and denial-of-service attacks. It outlines various cloud security standards, including NIST, ISO 27017, and GDPR, which provide frameworks for ensuring data safety and compliance. It also covers the Hadoop framework, its architecture, and its advantages, emphasizing its scalability and cost-effectiveness in processing large data volumes.

Security, Standards, and Applications
Mahfooz Alam
Assistant Professor
Department of MCA
G. L. Bajaj College of Technology
and Management, Greater Noida
Outlines
1. Security in Clouds:
• Cloud security challenges
• Software as a Service Security
2. Common Standards:
• The Open Cloud Consortium
• The Distributed Management Task Force
• Standards for application developers
• Standards for messaging
• Standards for security
• End-user access to cloud computing
• Mobile Internet devices and the cloud
• Hadoop
• MapReduce
• VirtualBox
• Google App Engine
• Programming environment for Google App Engine
Security Challenges in Cloud Computing
Cloud computing undoubtedly provides many advantages, but it also raises security issues. Some of the security issues in cloud computing are as follows:
1. Data Loss: This is one of the most common issues in cloud computing, also known as data leakage. Our sensitive data is in the hands of somebody else, and we do not have full control over our database. If hackers break the security of the cloud service, they may gain access to our sensitive data or personal files.
2. Interference of Hackers and Insecure APIs: It is important to protect the interfaces and APIs used by external users. In cloud computing, some services are exposed in the public domain; these are a vulnerable part of cloud computing because third parties may access them, and hackers can use such services to compromise or damage our data.
3. User Account Hijacking: Account hijacking is among the most serious security issues in cloud computing. If the account of a user or an organization is hijacked by a hacker, the hacker gains full authority to perform unauthorized activities.
Security Challenges in Cloud Computing
4. Changing Service Provider: Vendor lock-in is also an important security issue in cloud computing. Organizations face many problems while shifting from one vendor to another. For example, if an organization wants to shift from AWS to Google Cloud, it must migrate all of its data, and because the two clouds provide different techniques and functions, the move brings additional compatibility problems. The charges of AWS may also differ from those of Google Cloud.
5. Lack of Skill: Day-to-day operation, shifting to another service provider, adding an extra feature, or learning how to use a feature are difficult for IT companies that lack skilled employees. Working with cloud computing requires skilled people.
6. Denial of Service (DoS) Attack: This type of attack occurs when the system receives more traffic than it can handle. DoS attacks mostly target large organizations such as the banking sector and the government sector. When a DoS attack occurs, services become unavailable and data may be lost, so recovering requires a great amount of money as well as time.
What are Cloud Security Standards?
Because of the various security dangers facing the cloud, it was essential to establish guidelines for how work is done there. Cloud security standards offer a thorough framework for how cloud security is upheld with regard to both the user and the service provider.
• Cloud security standards provide a roadmap for businesses transitioning from a
traditional approach to a cloud-based approach by providing the right tools,
configurations, and policies required for security in cloud usage.
• It helps to devise an effective security strategy for the organization.
• It also supports organizational goals like privacy, portability, security, and
interoperability.
• Certification with cloud security standards increases trust and gives businesses a
competitive edge.
Need for Cloud Security Standards
• Ensure cloud computing is an appropriate environment: Organizations need to make sure that cloud computing is the appropriate environment for their applications, as security and risk mitigation are major concerns.
• To ensure that sensitive data is safe in the cloud: Organizations need a way to
make sure that the sensitive data is safe in the cloud while remaining compliant
with standards and regulations.
• No existing clear standard: Cloud security standards are essential because previously there were no clear standards defining what constitutes a secure cloud environment, making it difficult for cloud providers and cloud users to determine what needed to be done to ensure a secure environment.
Common Cloud Security Standards
1. NIST (National Institute of Standards and Technology)

NIST is a federal organization in the US that creates metrics and standards to boost competition in the scientific and technology industries. The National Institute of Standards and Technology developed the Cybersecurity Framework to support compliance with US regulations such as the Federal Information Security Management Act (FISMA) and the Health Insurance Portability and Accountability Act (HIPAA).

2. ISO-27017

Published in 2015, this standard offers further direction in the cloud computing information security field. Its purpose is to supplement the advice provided in ISO/IEC 27002 and various other ISO27k standards, such as ISO/IEC 27018 on the privacy implications of cloud computing and ISO/IEC 27031 on business continuity.

3. ISO-27018

This standard covers the protection of personally identifiable information (PII) in public clouds acting as PII processors. Although it is aimed especially at public-cloud service providers such as AWS or Azure, it is also relevant to the cloud customers who entrust PII to those providers.
Common Cloud Security Standards [Cont…]
4. CIS Controls
Organizations can secure their systems with the help of the Center for Internet Security (CIS) Controls, which are open-source policies based on consensus. To easily access a list of evaluations for cloud security, consult the CIS Benchmarks customized for particular cloud service providers.
5. FISMA
In accordance with the Federal Information Security Management Act (FISMA), all federal agencies and their contractors are required to safeguard information systems and assets. Under FISMA, NIST was given authority to define the framework security standards, which it does through NIST SP 800-53.
6. Cloud Architecture Framework
These frameworks, which frequently cover operational effectiveness, security, and cost-value factors, can be viewed as best-practice standards for cloud architects. For example, the Well-Architected Framework, developed by Amazon Web Services, aids architects in designing workloads and applications on the Amazon cloud.
7. General Data Protection Regulation (GDPR)
The GDPR is the European Union's law governing data protection and privacy. Even though this law only applies to the European Union, you should keep it in mind if you store or otherwise handle any personal information of EU residents.
Common Cloud Security Standards [Cont…]
8. SOC Reporting
A System and Organization Controls 2 (SOC 2) report is a form of audit of the operational processes used by IT businesses offering any service. SOC 2 reporting is a worldwide standard for cybersecurity risk management systems. A SOC 2 audit report shows that your company's policies, practices, and controls are in place to meet the five trust principles: security, availability, processing integrity, confidentiality, and privacy. If you offer software as a service, potential clients might request proof that you adhere to SOC 2 standards.
9. PCI DSS
The PCI DSS (Payment Card Industry Data Security Standard) provides a set of security criteria for all merchants who accept credit or debit cards and for businesses that handle cardholder data. The PCI DSS specifies fundamental technological and operational criteria for safeguarding cardholder data, and it is intended to protect cardholders from identity theft and credit card fraud.
Common Cloud Security Standards [Cont…]
10. HIPAA
The Health Insurance Portability and Accountability Act (HIPAA), passed by the US Congress to safeguard individual health information, has parts specifically dealing with information security. Businesses that handle medical data must abide by HIPAA. The HIPAA Security Rule (HSR) is the part most relevant to information security: it specifies rules for protecting the electronic personal health information that a covered entity generates, acquires, uses, or maintains.
11. CIS AWS Foundations v1.2
Any business that uses Amazon Web Services cloud resources can help safeguard sensitive IT systems and data by adhering to the CIS AWS Foundations Benchmark.
12. ACSC Essential Eight
The ACSC Essential Eight (which grew out of the ASD Top 4) is a list of eight cybersecurity mitigation strategies for small and large firms. The Australian Signals Directorate (ASD) and the Australian Cyber Security Centre (ACSC) developed the Essential Eight to improve security controls, protect businesses' computing resources and systems, and protect data from cybersecurity attacks.
Open Cloud Consortium
The Open Cloud Consortium (OCC) supports the development of standards for cloud computing and frameworks for interoperating between clouds; supports the development of benchmarks for cloud computing; supports open-source software for cloud computing; manages a testbed for cloud computing called the Open Cloud Testbed; and sponsors workshops and other events related to cloud computing.
DMTF: Distributed Management Task Force
The DMTF is a non-profit standards development organization. It develops open management standards that span emerging and long-established IT infrastructures and services, including virtualization, networking, cloud computing, servers, and data storage, and it promotes interoperability in support of enterprise and Internet environments. The head office of the DMTF is situated in Portland, Oregon.
The corporations whose representatives form the DMTF board of directors include:
• Broadcom Inc.
• Intel Corporation
• Dell Technologies
• Hitachi, Ltd.
• NetApp
• HP Inc.
• Cisco
• Lenovo
• Hewlett Packard Enterprise
Evolution of DMTF
History
• In 1992, the DMTF was first established as the "Desktop Management Task Force".
• In 1999, the organization was renamed the "Distributed Management Task Force" because it had grown and begun dealing with distributed management through further standards.
DMTF Standards
Some of the DMTF standards include:
• CADF, an abbreviation of Cloud Auditing Data Federation.
• DASH, an abbreviation of Desktop and Mobile Architecture for System Hardware.
• MCTP, an abbreviation of Management Component Transport Protocol.
• NC-SI, an abbreviation of Network Controller Sideband Interface.
• OVF, an abbreviation of Open Virtualization Format.
• PLDM, an abbreviation of Platform Level Data Model.
• SMASH, an abbreviation of Systems Management Architecture for Server Hardware.
DMTF Advantages
• Member corporations get front-line access to information concerning DMTF standards and tools.
• Member corporations get a chance to take part in the design of these DMTF standards.
• Member corporations get a chance to work together on tasks alongside implementers of these DMTF standards and tools.
Hadoop
Hadoop is an open-source framework from Apache used to store, process, and analyze data that is very huge in volume. Hadoop is written in Java and is not an OLAP (online analytical processing) system; it is used for batch/offline processing. It is used by Facebook, Yahoo, Google, Twitter, LinkedIn, and many more. Moreover, it can be scaled up just by adding nodes to the cluster.
Modules of Hadoop
1. HDFS: Hadoop Distributed File System. Google published its GFS paper, and HDFS was developed on the basis of it. Files are broken into blocks and stored on nodes across the distributed architecture.
2. YARN: Yet Another Resource Negotiator, used for job scheduling and managing the cluster.
3. MapReduce: A framework that helps Java programs perform parallel computation on data using key-value pairs. The Map task takes input data and converts it into a data set that can be computed as key-value pairs. The output of the Map task is consumed by the Reduce task, and the output of the Reducer gives the desired result.
4. Hadoop Common: Java libraries used to start Hadoop; they are also used by other Hadoop modules.
Hadoop Architecture
The Hadoop architecture is a package of the file system, MapReduce engine, and the
HDFS (Hadoop Distributed File System). The MapReduce engine can be MapReduce/MR1
or YARN/MR2.
A Hadoop cluster consists of a single master and multiple slave nodes. The master node runs the JobTracker and the NameNode, whereas each slave node runs a TaskTracker and a DataNode.
Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) is a distributed file system for Hadoop. It has a master/slave architecture, consisting of a single NameNode that performs the role of master and multiple DataNodes performing the role of slaves. Both the NameNode and the DataNode are capable of running on commodity machines. HDFS is developed in the Java language, so any machine that supports Java can easily run the NameNode and DataNode software.
NameNode
o It is the single master server in the HDFS cluster.
o As it is a single node, it may become a single point of failure.
o It manages the file system namespace by executing operations such as opening, renaming, and closing files.
o Its single-master design simplifies the architecture of the system.
DataNode
o The HDFS cluster contains multiple data nodes.
o Each DataNode contains multiple data blocks.
o These data blocks are used to store data.
o It is the responsibility of the DataNode to serve read and write requests from the file system's clients.
Hadoop Distributed File System (HDFS)
o It performs block creation, deletion, and replication upon instruction from the
NameNode.
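To make the NameNode/DataNode division of labor concrete, here is a minimal sketch of an HDFS client in Java, assuming the Hadoop client libraries are on the classpath; the NameNode address and file path are illustrative, not taken from this document.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // fs.defaultFS points the client at the NameNode (hostname/port assumed).
    conf.set("fs.defaultFS", "hdfs://namenode-host:9000");
    FileSystem fs = FileSystem.get(conf);

    // The NameNode supplies only metadata (which blocks live on which
    // DataNodes); the client then streams the file data to the DataNodes.
    Path file = new Path("/user/demo/hello.txt");
    try (FSDataOutputStream out = fs.create(file)) {
      out.writeUTF("Hello, HDFS!");
    }
    System.out.println("Replication factor: "
        + fs.getFileStatus(file).getReplication());
  }
}
```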
Job Tracker
o The role of the JobTracker is to accept MapReduce jobs from clients and process the data by using the NameNode.
o In response, the NameNode provides metadata to the JobTracker.
Task Tracker
o It works as a slave node for the JobTracker.
o It receives tasks and code from the JobTracker and applies that code to the file. This process can also be called a Mapper.
MapReduce Layer
The MapReduce layer comes into play when the client application submits a MapReduce job to the JobTracker. In response, the JobTracker sends the request to the appropriate TaskTrackers. Sometimes a TaskTracker fails or times out; in such a case, that part of the job is rescheduled.
Advantages of Hadoop
o Fast: In HDFS the data is distributed over the cluster and mapped, which helps in faster retrieval. Even the tools that process the data are often on the same servers, thus reducing the processing time. Hadoop is able to process terabytes of data in minutes and petabytes in hours.
o Scalable: Hadoop cluster can be extended by just adding nodes in the cluster.
o Cost Effective: Hadoop is open source and uses commodity hardware to store data so
it is really cost-effective as compared to traditional relational database management
systems.
o Resilient to failure: HDFS can replicate data over the network, so if one node is down or some other network failure happens, Hadoop takes another copy of the data and uses it. Normally data is replicated three times, but the replication factor is configurable, as the sketch below shows.
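As a small illustration of that configurability, here is a hedged Java sketch using Hadoop's client API; the file path is hypothetical, and dfs.replication is the property governing the default replication factor for files created by this client. Cluster-wide, the same property is usually set once in hdfs-site.xml rather than per client.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // New files written through this client keep 2 copies instead of 3.
    conf.set("dfs.replication", "2");
    FileSystem fs = FileSystem.get(conf);

    // The factor can also be changed for a file that is already stored.
    fs.setReplication(new Path("/user/demo/hello.txt"), (short) 4);
  }
}
```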
History of Hadoop
Hadoop was started by Doug Cutting and Mike Cafarella in 2002. Its origin was the Google File System paper published by Google.

Let's trace the history of Hadoop in the following steps:


o In 2002, Doug Cutting and Mike Cafarella started to work on a project, Apache Nutch. It is an open-source web crawler software project.
o While working on Apache Nutch, they were dealing with big data. Storing that data would have cost a great deal, which threatened the project. This problem became one of the important reasons for the emergence of Hadoop.
o In 2003, Google introduced a file system known as GFS (Google File System). It is a proprietary distributed file system developed to provide efficient access to data.
o In 2004, Google released a white paper on MapReduce. This technique simplifies data processing on large clusters.
History of Hadoop [Cont…]
o In 2005, Doug Cutting and Mike Cafarella introduced a new file system known as NDFS (Nutch Distributed File System). This file system also included MapReduce.
o In 2006, Doug Cutting joined Yahoo. On the basis of the Nutch project, he introduced a new project, Hadoop, with a file system known as HDFS (Hadoop Distributed File System). Hadoop's first version, 0.1.0, was released that year.
o Doug Cutting named his project Hadoop after his son's toy elephant.
o In 2007, Yahoo ran two clusters of 1000 machines.
o In 2008, Hadoop became the fastest system to sort 1 terabyte of data on a 900-node
cluster within 209 seconds.
o In 2013, Hadoop 2.2 was released.
o In 2017, Hadoop 3.0 was released.
Map Reduce
MapReduce is a programming model used for efficient parallel processing of large datasets in a distributed manner. The data is first split and then combined to produce the final result. MapReduce libraries have been written in many programming languages, with various optimizations. The purpose of MapReduce in Hadoop is to map each job into smaller tasks and then reduce those tasks to the final result, keeping overhead on the cluster network low and reducing the processing power required. The MapReduce task is mainly divided into two phases, the Map phase and the Reduce phase.
Components of Map Reduce Architecture
1. Client: The MapReduce client is the one who brings the job to MapReduce for processing. There can be multiple clients that continuously send jobs to the Hadoop MapReduce Master for processing.
2. Job: The MapReduce job is the actual work that the client wants to do, comprising many smaller tasks that the client wants to process or execute.
3. Hadoop MapReduce Master: It divides the particular job into subsequent job-parts.
4. Job-Parts: The tasks or sub-jobs obtained after dividing the main job. The results of all the job-parts are combined to produce the final output.
5. Input Data: The data set that is fed to MapReduce for processing.
6. Output Data: The final result obtained after processing.
Map Reduce Working
The MapReduce task is mainly divided into 2 phases i.e. Map phase and Reduce phase.
1. Map: As the name suggests, its main use is to map the input data into key-value pairs. The input to the map may itself be a key-value pair, where the key can be an ID of some kind of address and the value is the actual value it holds. The Map() function is executed in its memory repository on each of these input key-value pairs and generates intermediate key-value pairs, which work as input for the Reducer or Reduce() function.
2. Reduce: The intermediate key-value pairs that work as input for the Reducer are shuffled, sorted, and sent to the Reduce() function. The Reducer aggregates or groups the data based on its key-value pairs according to the reducer algorithm written by the developer. The classic word-count job sketched below shows both phases.
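The following is a minimal sketch of that classic word-count job against Hadoop's Java MapReduce API, assuming the Hadoop MapReduce client libraries are on the classpath; the class name and input/output paths are illustrative. The mapper emits a (word, 1) pair for every token, and after the shuffle-and-sort step the reducer sums the counts for each word.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: after shuffle and sort, sum the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a JAR, it would typically be launched with something like hadoop jar wordcount.jar WordCount /input /output (paths illustrative).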
Map Reduce Working [Cont…]
How the Job tracker and the task tracker deal with MapReduce:
1. Job Tracker: The work of the JobTracker is to manage all the resources and all the jobs across the cluster, and to schedule each map on a TaskTracker running on the same data node, since there can be hundreds of data nodes available in the cluster.
2. Task Tracker: The TaskTrackers can be considered the actual slaves that work on the instructions given by the JobTracker. A TaskTracker is deployed on each of the nodes available in the cluster, and it executes the Map and Reduce tasks as instructed by the JobTracker.
What is Google App Engine?
Google App Engine is a cloud computing Platform as a Service (PaaS) that provides web app developers and businesses with access to Google's scalable hosting in Google-managed data centers and tier-1 Internet service. It enables developers to take full advantage of its serverless platform. Applications must be written in one of the supported languages, namely Java, Python, PHP, Go, Node.js, .NET, or Ruby; a minimal Java handler is sketched below. Applications in Google App Engine use the Google query language (GQL) and store data in Google Bigtable.
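To show how little code a basic App Engine application needs, here is a minimal sketch of a request handler for the Java standard environment; the class name and URL pattern are illustrative, and deployment descriptors (such as appengine-web.xml) are omitted.

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// App Engine's Java standard environment serves ordinary servlets;
// the platform handles scaling, routing, and instance management.
@WebServlet(name = "Hello", urlPatterns = {"/"})
public class HelloAppEngine extends HttpServlet {
  @Override
  public void doGet(HttpServletRequest request, HttpServletResponse response)
      throws IOException {
    response.setContentType("text/plain");
    response.getWriter().println("Hello from Google App Engine!");
  }
}
```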
Advantages of Google App Engine
Google App Engine has a lot of benefits that can help you advance your app ideas. These include:
1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably
the safest in the entire world. Since the application data and code are hosted on
extremely secure servers, there has rarely been any kind of illegal access to date.
2. Faster Time to Market: For every organization, getting a product or service to market quickly is crucial. Easy development and maintenance of an app are essential to releasing the product rapidly, and a firm can grow swiftly with Google Cloud App Engine's assistance.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the
app to users because there is no hardware or product to buy and maintain.
Advantages of Google App Engine [Cont…]
4. Easy to Use: The tools that you need to create, test, launch, and update the
applications are included in Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App
Engine enable developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When using Google App Engine to construct apps, you may access technologies like GFS, Bigtable, and others that Google uses to build its own apps.
7. Performance and Reliability: Google ranks among the top international brands, and the platform's performance and reliability reflect that standing.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or
even do it yourself. The money you save might be put toward developing other
areas of your company.
9. Platform Independence: Since the app engine platform only has a few
dependencies, you can easily relocate all of your data to another environment.
Major Features of Google App Engine
Some of the prominent GAE features that users can take advantage of include:
• Language support: Google App Engine lets users build applications in some of the most popular languages, including Java, Python, Ruby, Go, Node.js, C#, and PHP.
• Flexibility: Google App Engine offers the flexibility to import libraries & frameworks
through Docker containers.
• Diagnostics: Google App Engine uses cloud monitoring and logging to monitor the health and performance of an application, which helps diagnose and fix bugs quickly. The error reporting facility helps developers fix bugs immediately.
• Traffic splitting: Google App Engine automatically routes incoming traffic to different application versions as part of A/B testing. This enables users to easily create environments for development, staging, production, and testing.
• Security: Google App Engine enables users to define access rules in Engine’s firewall
and utilize SSL/TLS certificates on custom domains for free.
• Pricing: GAE offers a usage-based plan for its users, with a free quota to try out the service without costs.
Mobile Internet Device
• A Mobile Internet Device (MID) is a small multimedia-enabled mobile device that
provides wireless Internet access. MIDs facilitate real-time and two-way
communication by filling the multimedia gap between mobile phones and tablets.
• A MID is larger than a handheld device, like a smartphone, but smaller than an ultra-
mobile PC (UMPC). MID technology focuses on providing entertainment, information,
and location-based services to individual consumers, rather than enterprises.
• A MID has several advantages over both smaller and larger devices. It provides a larger display than a typical mobile phone, with preloaded Internet functionality that facilitates Web browsing. The compact MID design allows users to easily carry a MID in a backpack or purse. Also, MIDs are significantly lighter than standard laptops.
• MIDs provide efficient wireless connectivity. Features based on Intel's 2007 prototype are as follows:
• Display screen: 4.5 to 6 inches
• Pixel resolution: 800×480 or 1024×600
• Boot time: faster than a UMPC
• Easy interface
• Manufacturer's suggested retail price (MSRP): lower than a UMPC
• Wireless local area network (WLAN) or Wi-Fi technology
• Random access memory (RAM): 256 or 512 MB
End User Computing
End-user computing (EUC) is a combination of technologies, policies, and processes that
gives your workforce secure, remote access to applications, desktops, and data they
need to get their work done. Modern enterprises use EUC so that their employees can
work from wherever they are, across multiple devices, in a safe and scalable way. A well-
designed EUC program gives users immediate access to the digital technologies they
need for productivity, both on-premises and remotely in the cloud.
Why is EUC important?
The term end-user computing (EUC) covers technologies such as:
• Remote workforce management
• Virtual desktop infrastructure (VDI)
• Application virtualization and application streaming platforms
Benefits of End User Computing
Business continuity: Hundreds of millions of people now work remotely, with no or
limited access to company hardware. EUC has been critical to business continuity as
employees access the business applications and data they need on their personal
devices.
Security: EUC services and applications are centrally managed. They store data and
desktops centrally in the cloud and stream only pixels out to endpoint devices, avoiding
sensitive data downloads to user devices. End users can use multi-factor authentication
to access EUC systems, even further decreasing the risk of unauthorized data access if an
employee's device is lost or stolen.
Agility: Cloud-based EUC solutions greatly improve agility by providing instant scalability,
making it more straightforward for organizations to deploy and manage secure virtual
desktop infrastructure. You can release new software and new tools or turn existing
Windows- or Linux-based applications into Software as a Service (SaaS) applications
faster than you could on-premises. EUC also makes it more efficient to oversee
migrations and deployments.
Benefits of End User Computing
Cost savings: It is more cost-effective for enterprises to manage centralized desktops and
EUC solutions in the cloud. You can manage and install applications in a single location,
reducing management complexity and cost. You can also add and remove users on
demand and pay only for what you use when you use it. You no longer need to overbuy
hardware and licenses for peak user capacity.

Testing: EUC is an optimum environment for rapid prototyping and testing. Organizations
can offer new tools to their workers to test how effective and user-friendly they are on a
large scale. EUC allows you to change design and structure to rapidly carry out testing
campaigns.

Collaboration: Teams using EUC services can securely collaborate on projects, with their
content stored on the cloud. You can also share project information with external users
for cross-organizational collaboration with changes made in real-time.
