Research Paper - Serverless Computing
Manan Sharma, Deepak Saini, Utpal Chaudhary, Riteek Raj Modi, Vani Gupta,
Monika
{20BCS4130,20BCS4066,20BCS4160,20BCS4115,20BCS4068,
20BCS4070}@cuchd.in
1. Introduction
Cloud computing is the on-demand delivery of computing resources, such as computing power and storage, usually over the public internet. It relies on sharing resources among its users and typically follows a 'pay-as-you-go' model, which means users now pay only OPEX (operational expense) rather than CAPEX (capital expense). This has certainly helped reduce capital expenses, but it has also led to unexpected operating expenses for users.
AWS was one of the first companies to commercialize cloud computing. It was launched in July 2002 with the aim of enabling innovative, entrepreneurial applications to be built on the cloud. By 2010, other vendors such as Microsoft Azure and Google Cloud Platform had entered the picture. After June 2015, the term serverless began to gain popularity following the release of the AWS service called AWS Lambda. Today there are many cloud providers besides AWS, such as Microsoft Azure, Google Cloud, Rackspace, IBM, and Oracle.
With so many vendors offering the same kind of product, choosing the right one becomes difficult. Before selecting a vendor, a business should consider factors such as cost, performance, limits, programming language, programming model, composability, deployment, security, accounting, monitoring, and debugging. AWS dominates the cloud computing market; the figure below gives a pictorial representation of the different cloud vendors and their market shares.
For many years in cloud computing, the entire machine, down to the hardware layers, was virtualized with the help of a hypervisor. In this architecture, a pool of virtual machines, each carrying its own copy of the operating system, was created on top of the physical machine. In short, a virtual machine creates a copy of the hardware or computer system, which makes it possible for a single machine to host several virtual machines that act as if they were separate pieces of hardware.
These virtual machines were self-contained units that did not interact with other virtual machines on the same host and contained copies of everything from the operating system to libraries and databases. With time, however, it turned out that this was not the most effective architecture. Whereas the old approach virtualized everything down to the hardware level, in the new architecture virtualization is limited to the software levels above the operating system. This new architecture makes use of containers instead of virtual machines. According to IBM, "Containers are a lighter-weight, more agile way of handling virtualization — since they don't use a hypervisor, you can enjoy faster resource provisioning and speedier availability of new applications." So instead of virtualizing the hardware, containers virtualize the operating system, and each individual container holds only the application and its libraries and dependencies. Containers deliver a much higher level of abstraction than virtual machines and also provision resources faster.
Containers are integral elements of many PaaS (Platform as a Service) offerings, but they are not limited to PaaS. Organizations often use containers in the local data center for exactly the same benefits, enabling users to easily deploy and spawn applications within containers. Although containers are far more efficient and faster than virtual machines, their growth is still constrained by the basic infrastructural elements called servers. This gives rise to a new computing architecture known as serverless computing.
Although it is called serverless, this does not mean that no servers are involved; rather, users can develop their applications without thinking about servers, because the cloud provider manages the servers on behalf of its customers. Serverless computing is a programming model and cloud architecture in which the application is decomposed into 'triggers' (events) and 'actions' (functions), and small code snippets are executed in the cloud without any control over the underlying resources. Billing is based not on the resources allocated but on execution time: the application consumes resources only while it executes and releases them once execution is over. Because the pricing model covers only the time during which resources are in use, and the application developer does not pay for resources that are not executing, the model is referred to as 'serverless'.
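To see how pay-per-execution billing works out in practice, the short sketch below computes an illustrative monthly bill from invocation count, average duration, and memory size. The per-GB-second and per-request rates are assumed placeholders for illustration, not any provider's actual prices.

```python
# Illustrative cost model for a pay-per-execution service.
# The rates below are hypothetical placeholders, not real provider prices.
PRICE_PER_GB_SECOND = 0.0000167   # assumed charge per GB-second of execution
PRICE_PER_REQUEST = 0.0000002     # assumed charge per invocation


def monthly_cost(invocations, avg_duration_seconds, memory_gb):
    """Charges accrue only while the function runs; idle time costs nothing."""
    compute_gb_seconds = invocations * avg_duration_seconds * memory_gb
    return compute_gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST


# Example: one million invocations of 200 ms each with 512 MB of memory.
print(round(monthly_cost(1_000_000, 0.2, 0.5), 2))
```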
Since users are charged only for execution time and the whole model revolves around functions, serverless computing is also known as FaaS (Function as a Service). FaaS takes a simple and straightforward approach, much like writing functions in a program so the code can be reused later: developers register functions on the various cloud service platforms, which support a variety of languages such as C++, Java, Python, and JavaScript, and users define the event that triggers each function.
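As a minimal illustration of this model (Python is used here; the "new order" event shape and names are hypothetical rather than any provider's actual schema), a FaaS function is simply a handler that the platform invokes whenever its configured trigger fires:

```python
# A minimal FaaS-style function: the platform supplies `event` and `context`
# and runs this code only when the registered trigger (e.g. a new-order event) fires.
def handler(event, context):
    order_id = event.get("orderId", "unknown")   # hypothetical event field
    print(f"Processing order {order_id}")        # logs are captured by the platform
    return {"status": "processed", "orderId": order_id}
```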
The platform also needs to queue events and, based on the state of the queues and the arrival rate of events, schedule the execution of functions and stop and deallocate resources for idle function instances. Moreover, cloud providers need to handle scalability and failure management in the cloud, provide heterogeneous hardware support, and explore support for a variety of programming languages.
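The sketch below is a deliberately simplified toy model of that behaviour, not any real platform's implementation: events are queued, warm function instances are reused or created on demand, and instances that stay idle past a timeout are deallocated.

```python
import time
from collections import deque

IDLE_TIMEOUT_SECONDS = 60  # assumed idle limit before an instance is reclaimed


class FunctionInstance:
    """A warm copy of a deployed function."""

    def __init__(self, name):
        self.name = name
        self.last_used = time.time()

    def invoke(self, event):
        self.last_used = time.time()
        print(f"{self.name} handling {event}")


class ToyPlatform:
    """Queues events, schedules them onto instances, and reclaims idle instances."""

    def __init__(self):
        self.queue = deque()   # pending (function name, event) pairs
        self.instances = {}    # function name -> warm instance

    def submit(self, name, event):
        self.queue.append((name, event))

    def schedule(self):
        # Drain the queue, creating an instance on demand (a "cold start") if needed.
        while self.queue:
            name, event = self.queue.popleft()
            instance = self.instances.setdefault(name, FunctionInstance(name))
            instance.invoke(event)
        # Deallocate instances that have been idle for too long.
        now = time.time()
        self.instances = {n: i for n, i in self.instances.items()
                          if now - i.last_used < IDLE_TIMEOUT_SECONDS}
```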
2. Literature Review
Dahanayake A. (2017) researched how serverless architecture helps reduce operational costs in comparison with the traditional solution of constantly keeping a server running. As a result of the study by Acharya B. (2020), a fully serverless, scalable application was designed and implemented on a serverless database with a newly built API. An effort was made to demonstrate how writing functions, watching DynamoDB streams, the events subsystem, the API, and the storage pattern should be carried out. Finally, the study discusses the advantages of next-generation serverless technology, which frees developers from thinking about scaling and managing servers.
Utomo P. (2020) deep-dived into serverless computing, discussing how GitHub provides an integrated environment for developing and publishing websites. With GitHub Pages, a static website can be hosted easily, quickly, and for free. It helps developers integrate the development and deployment process, because GitHub Pages is integrated with the GitHub environment, which supports continuous integration and continuous delivery and also acts as a content delivery network (CDN), as part of the JAMstack building blocks.
Vergadia P. (2020) described how Compute Engine offers the scale, performance, and value that allow you to easily launch large compute clusters on Google's infrastructure. For security and authentication, Dani A. (2020) examined numerous techniques provided by AWS, including the use of a single sign-on session. OTP-based authentication was found to be more reliable and secure than HTTP-based methods. Cloud implications relating to infrastructure elasticity, load balancing, provisioning variation, infrastructure, and memory reservation size were also studied.
Shelar N. (2021) talked about creating a simple serverless website that lets users request feedback forms for college surveys. Serverless here covers the services, practices, and strategies used to build a website so that it can innovate and adapt to changes faster. Serverless computing takes on infrastructure management tasks such as capacity provisioning and patching.
Krishnan H. (2021) found that with serverless computing, time to market is shorter, costs are lower, and efficiency is higher. In serverless cloud computing, the developer need not worry about owning, managing, and maintaining servers, as this is carried out by the cloud service provider.
Anand K. (2021) eloquently discussed the technologies used to construct a serverless web application on the AWS platform, using services such as AWS Lambda, Amazon API Gateway, AWS Amplify, Amazon DynamoDB, and Amazon Cognito. Amazon DynamoDB serves as the database for the web application, AWS Lambda provides the functions that read from and write to the database, and Amazon API Gateway exposes a REST API that lets the web application interact with those functions over a RESTful interface. Amazon S3 hosts the web application, and Amazon CloudFront delivers it from the location nearest to the user.
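To make that division of labour concrete, the sketch below shows the kind of Lambda function that sits behind Amazon API Gateway and reads from DynamoDB. The table name, key schema, and route parameters are hypothetical, not taken from the cited study.

```python
import json

import boto3  # AWS SDK for Python, available in the Lambda runtime

# Hypothetical table name and key used for illustration only.
table = boto3.resource("dynamodb").Table("WebAppData")


def lambda_handler(event, context):
    """Handles an API Gateway proxy request and looks up one item in DynamoDB."""
    item_id = (event.get("pathParameters") or {}).get("id", "unknown")
    response = table.get_item(Key={"id": item_id})
    item = response.get("Item")

    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        # default=str handles DynamoDB's Decimal numbers during serialization.
        "body": json.dumps(item or {"error": "not found"}, default=str),
    }
```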
The studies reviewed above are summarized below.

| Author | Year | Tools / Services | Limitations | Findings |
| --- | --- | --- | --- | --- |
| Ajantha Dahanayake | 2017 | AWS (Amazon Web Services) Lambda, API Gateway, CloudWatch | The development of the serverless prototype application is limited to the Amazon Web Services (AWS) environment. | It shows how serverless architecture helps reduce operational costs in comparison with the traditional solution of constantly keeping a running server. |
| Prayudi Utomo | 2020 | GitHub platform | The usage of GitHub as web hosting and JAMstack as an approach for developing the web is not very popular among developers. | The usage of GitHub as web hosting and JAMstack as an approach for developing the web is not very popular among developers. |
| A. Dani | 2020 | AWS Lambda along with other existing AWS services such as S3, DynamoDB, CloudWatch, etc. | Large internet companies like Amazon, Netflix, and LinkedIn deploy big multistage applications in the cloud, which can be developed, tested, deployed, scaled, operated, and upgraded independently. | This study helps to implement abstraction for serverless architecture, which enables efficient cross-server data management, resource allocation, and isolation, amongst other things. |
| Priyanka Vergadia | 2020 | Deploying websites on Google Cloud using Google Compute Engine | What happens if our website gets really popular and traffic grows from hundreds to millions of users? We need to make sure that our application can gracefully handle peaks and dips in traffic. | Compute Engine offers scale, performance, and value that allow you to easily launch large compute clusters on Google's infrastructure. There are no upfront investments, and you can run thousands of virtual CPUs on a system designed to be fast and to offer strong consistency of performance. |
| Hari Krishnan Andi | 2021 | AWS Lambda, Azure, and Google Cloud; basic components such as the serverless API gateway, FaaS (Function as a Service), and BaaS (Backend as a Service) | Control over the infrastructure is lost; as the cloud service provider takes care of it, we cannot make any change to the infrastructure. | In serverless cloud computing, the developer need not worry about owning, managing, and maintaining servers, as this is carried out by the cloud service provider. |
| Nikita Shelar | 2021 | GitHub platform | Security concerns for backend-phase developers. | It is a cloud service platform which offers compute power, database storage, content delivery, and other functionality to help developers. |
AWS Lambda is a computing service that lets you run code without provisioning or managing servers. With AWS Lambda, you can run code for virtually any type of application or backend service with zero administration.
Principles such as strong data consistency, data normalization, and transactions are the basis of SQL databases. You can still use these principles, but some of them are not ideal for serverless computing: SQL databases are structured and do not scale easily, so a NoSQL database is what you need if you need a highly scalable solution. In the serverless world, data frequently moves across the system as a reaction to a particular event, and that data is only eventually consistent. Transactions are also not the best option in this situation. Data is frequently duplicated and denormalized, since databases need to be arranged for quick reads in order to perform at their peak. Containers are where the serverless system's components, glue, and core run. You can control how powerful your hardware is with a memory setting, which ranges from 128 MB to 3 GB. Even though that is not much, most functions do not need more, and if multiple functions run at the same time, each one gets a new container with fresh capacity.
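As an aside on the memory setting mentioned above, the allocation can be changed per function through the provider's API; the sketch below uses the AWS SDK for Python (boto3) with a hypothetical function name, and the CPU share made available to the function scales with the chosen memory.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation of a (hypothetical) function to 512 MB;
# more memory also means a larger share of CPU for each invocation.
lambda_client.update_function_configuration(
    FunctionName="my-serverless-function",  # assumed name, for illustration
    MemorySize=512,                         # in MB
)
```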
2. Search for IAM in the services and features column.
3. Create a new role in IAM for full control of DynamoDB.
4. Create a policy for that role granting full access to DynamoDB and the S3 bucket.
5. Search for S3 and connect your S3 bucket to the database you created in DynamoDB.
7. Search for AWS Lambda and connect it to the bucket created in S3 by creating a new Lambda function, following the guided steps.
9. Scroll up the page and you will see the editor where you can write, deploy, and manage code.
10. In that editor, write the code that invokes the database when triggered by the S3 bucket; a minimal sketch of such a function is shown below.
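The sketch below illustrates the kind of function step 10 refers to. It assumes an S3 "object created" notification is configured as the trigger and that a hypothetical DynamoDB table named UploadsMetadata (with objectKey as its partition key) already exists; both names are placeholders.

```python
import json
import urllib.parse

import boto3  # AWS SDK for Python, available in the Lambda runtime

# Hypothetical table; the partition key "objectKey" is an assumed schema.
table = boto3.resource("dynamodb").Table("UploadsMetadata")


def lambda_handler(event, context):
    """Invoked by S3 object-created notifications; records each upload in DynamoDB."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"].get("size", 0)

        # Write one item per uploaded object.
        table.put_item(Item={"objectKey": key, "bucket": bucket, "sizeBytes": size})

    return {"statusCode": 200, "body": json.dumps("ok")}
```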
4. Challenges
Lack of control: You do not own or control the server infrastructure when using serverless computing, which means that problems with the provider's hardware, software, or other components could have a significant negative impact on your business.
Possible compatibility issues: You may want to use cloud services from multiple suppliers, which can present compatibility problems. Although this is theoretically possible, you can run into compatibility issues if you use one provider for your serverless architecture but wish to combine its capabilities with other cloud services.
Cold starts: The drawback here is that once an idle function's resources spin down completely, there is a known delay the next time it needs to execute, because running the function requires reallocating resources, which takes time. You end up with one set of performance characteristics for recently used "hot" functions and another profile for "cold" functions, which must first be provisioned by the platform before they can execute.
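A small sketch makes the hot/cold distinction concrete: code at module scope runs once, when the platform creates a container (the cold start), while warm invocations reuse that state. The handler below simply reports how long its container has been alive; the names and the one-second threshold are illustrative assumptions.

```python
import time

# Module-level code runs once per container, during the cold start.
# Warm ("hot") invocations reuse this state, which is why they respond faster.
CONTAINER_CREATED_AT = time.time()
EXPENSIVE_CLIENT = {"initialized_at": CONTAINER_CREATED_AT}  # stand-in for a DB/SDK client


def lambda_handler(event, context):
    container_age = time.time() - CONTAINER_CREATED_AT
    # A container that was just created reports a near-zero age; a reused one
    # reports how long it has stayed warm between invocations.
    return {
        "coldStart": container_age < 1.0,  # illustrative threshold
        "containerAgeSeconds": round(container_age, 2),
    }
```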
Unsafe configuration: Cloud service providers offer a wide variety of options and features, and unattended incorrect settings or configurations can pose serious security risks, serving as a point of entry for attacks against serverless infrastructure.
Security: As perimeters evaporate, serverless architecture calls for new security paradigms and best practices. It is exceedingly challenging to reliably implement security standards throughout the entire application, because each container or serverless workload provider uses its own security framework.
Cost: The costs of serverless computing can be both direct and indirect. For instance, heavy reliance on API calls can increase expenses and lead to performance bottlenecks that are difficult to fix.
6. Conclusion
Serverless computing is a growing market, and it continues to evolve as cloud providers come up with new solutions for its drawbacks and challenges. Serverless computing allows developers to focus on developing applications and business logic while the cloud provider takes care of the underlying resources.
Sadly, there has been a lack of interest from the research community and academia in this area. It seems that the task of solving the wide variety of technically challenging and intellectually deep problems in this space, ranging from infrastructure issues such as optimizations for the cold-start problem to the design of a composable programming model, has been left to the big cloud providers or to future generations. We strongly feel that if more academics get involved, it will benefit not only the cloud industry but also humanity, since proper and efficient usage of servers can lower the carbon footprint to a significant level.
References
1. Prayudi Utomo, Falahah Suprapto, "GitHub platform". Online; accessed 2020.
4. Hari Krishnan Andi, "AWS Lambda, Azure, and Google Cloud". Online; accessed 2021.