CNF QNA

Object/Relational Mapping (ORM) automates the persistence of Java objects to relational database tables using metadata. Key ORM problems include defining persistent classes, mapping metadata, handling class inheritance, and managing object identity. Docker is an open platform for developing and running applications, with key components including Docker Engine, images, and containers, while Docker Compose simplifies multi-container application management.

Uploaded by Tripthi Dubey

1a) What is ORM? List any four generic ORM problems. {I have listed all the problems.}
In a nutshell, object/relational mapping is the automated (and transparent) persistence of objects in
a Java application to the tables in a relational database, using metadata that describes the mapping
between the objects and the database. ORM, in essence, works by (reversibly) transforming data
from one representation to another.

The following list of issues, which we’ll call the O/R mapping problems, are the fundamental
problems solved by a full object/relational mapping tool in a Java environment.

• What do persistent classes look like? Are they fine-grained JavaBeans? Or are they instances of
some (coarser granularity) component model like EJB? How transparent is the persistence tool? Do
we have to adopt a programming model and conventions for classes of the business domain?

• How is mapping metadata defined? Since the object/relational transformation is governed entirely
by metadata, the format and definition of this metadata is a centrally important issue. Should an
ORM tool provide a GUI to manipulate the metadata graphically? Or are there better approaches to
metadata definition?

• How should we map class inheritance hierarchies? There are several standard strategies. What
about polymorphic associations, abstract classes, and interfaces?

• How do object identity and equality relate to database (primary key) identity? How do we map
instances of particular classes to particular table rows?

• How does the persistence logic interact at runtime with the objects of the business domain? This is
a problem of generic programming, and there are a number of solutions including source generation,
runtime reflection, runtime bytecode generation, and build-time bytecode enhancement. The
solution to this problem might affect your build process.

• What is the lifecycle of a persistent object? Does the lifecycle of some objects depend upon the
lifecycle of other associated objects? How do we translate the lifecycle of an object to the lifecycle of
a database row?

• What facilities are provided for sorting, searching, and aggregating? The application could do some
of these things in memory. But efficient use of relational technology requires that this work
sometimes be performed by the database.

• How do we efficiently retrieve data with associations? Efficient access to relational data is usually
accomplished via table joins. Object-oriented applications usually access data by navigating an object
graph. Two data access patterns should be avoided when possible: the n+1 selects problem, and its
complement, the Cartesian product problem (fetching too much data in a single select).
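To make the identity problem in the list above concrete: Java distinguishes object identity (==) from object equality (equals()), and neither automatically matches database identity (the primary key value). A common convention, sketched here with a hypothetical User class (not from any specific framework), bases equals() and hashCode() on the mapped identifier:

```java
import java.util.Objects;

// Hypothetical persistent class; in a real mapping, `id` would correspond to
// the table's primary key (e.g. the field carrying a JPA @Id annotation).
public class User {
    private Long id;      // database identity: the primary key value
    private String name;

    public User(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }

    // Equality follows database identity, not Java object identity (==).
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        return id != null && id.equals(((User) o).id);
    }

    @Override
    public int hashCode() { return Objects.hashCode(id); }

    public static void main(String[] args) {
        User a = new User(1L, "Alice");
        User b = new User(1L, "Alice (reloaded)");
        // Two distinct in-memory objects representing the same row compare equal.
        System.out.println(a == b);      // false: different Java objects
        System.out.println(a.equals(b)); // true: same primary key
    }
}
```

Two instances loaded for the same table row then compare equal even though they are different objects in memory, which is one way an ORM tool reconciles the two notions of identity.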

1b) Infer the steps for connecting to a database from Spring using Hibernate XML-based
configuration.
https://www.javaguides.net/2018/11/spring-mvc-5-hibernate-5-xml-based-configuration-
example.html

{Out of syllabus question because he didn't teach it but expects us to learn it ourselves. But it might come.}
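The linked tutorial's XML-based approach boils down to a few steps: (1) declare a DataSource bean with the JDBC driver, URL, and credentials; (2) declare a LocalSessionFactoryBean that references the DataSource and lists the entity packages and Hibernate properties; (3) declare a HibernateTransactionManager wired to the SessionFactory; (4) inject the SessionFactory into your DAOs. A minimal sketch follows; the connection URL, credentials, and package name are placeholders, and the pooling DataSource class is one common choice among several:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Step 1: DataSource with the JDBC connection details (placeholders) -->
    <bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource">
        <property name="driverClassName" value="com.mysql.cj.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
        <property name="username" value="root"/>
        <property name="password" value="password"/>
    </bean>

    <!-- Step 2: Hibernate SessionFactory built from the DataSource -->
    <bean id="sessionFactory"
          class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
        <property name="dataSource" ref="dataSource"/>
        <property name="packagesToScan" value="com.example.entity"/>
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQL8Dialect</prop>
                <prop key="hibernate.show_sql">true</prop>
            </props>
        </property>
    </bean>

    <!-- Step 3: transaction manager wired to the SessionFactory -->
    <bean id="transactionManager"
          class="org.springframework.orm.hibernate5.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory"/>
    </bean>
</beans>
```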
2. Write program 7 {JPA program}.
3a. Provide an overview of Docker and its key components.
Docker - Introduction

• Docker is an open platform for developing, shipping, and running applications.

• Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

• With Docker, you can manage your infrastructure in the same way you manage your applications.

• By taking advantage of Docker's methodologies for shipping, testing, and deploying code, you can
significantly reduce the delay between writing code and running it in production.

Docker components
Docker Engine: It's the core part of the Docker system. Docker uses a client-server architecture. It
consists of:

The Docker server, or daemon, is a process that runs continuously and is used to control and manage
the containers. The Docker daemon listens for Docker API requests and manages Docker objects
such as images, containers, and networks; it also communicates with other daemons to manage
Docker services.

A REST API (Representational State Transfer Application Programming Interface): the interface
that programs use to talk to the daemon and instruct it what to do.

The Docker Client: a command-line interface (CLI) for sending instructions to the Docker
Daemon using Docker commands. Though a client can run on the host machine, it relies
on Docker Engine's REST API to connect with the daemon, even remotely.

IMAGE: An image is a read-only template with instructions for creating a container. Images
define application dependencies and the processes that should run when the application
launches. We can get images from Docker Hub or create our own by writing specific
instructions in a file called a Dockerfile.

Containers: A container is a runnable instance of an image. We can create, start, stop, move, or
delete a container using the Docker API or CLI. An image is like a class, and a container is like an
instance of that class. It is possible to connect a container to one or more networks, attach
storage to it, or even create a new image based on its current state.
3b) Design a Dockerfile to pull a MySQL image, set the MySQL root password, and
expose port 3306. Write the command to build the Docker image from the file.

# Use the official MySQL image from Docker Hub
FROM mysql:latest

# Set the MySQL root password
ENV MYSQL_ROOT_PASSWORD=password

# Expose port 3306 to allow outside connections
EXPOSE 3306

To build the Docker image from this Dockerfile, you can use the docker build command.
Assuming the Dockerfile is in your current directory, you can execute the following command
in your terminal:
docker build -t my_mysql_image .
This command will build the Docker image using the Dockerfile in the current directory (.)
and tag it with the name my_mysql_image. Make sure to replace my_mysql_image with the
desired name for your image.

4a) Describe Docker Compose's purpose and provide a basic overview of its usage.
Docker Compose is a tool for defining and running multi-container applications. It is the key
to unlocking a streamlined and efficient development and deployment experience. It also
has commands for managing the whole lifecycle of your application:
● Start, stop, and rebuild services
● View the status of running services
● Stream the log output of running services
● Run a one-off command on a service
Using Docker Compose offers several benefits that streamline the development,
deployment, and management of containerized applications:
● Simplified control
● Efficient collaboration
● Rapid application development
● Portability across environments
● Extensive community and support

A typical Docker Compose file looks like this:
version: "3.8"
services:
  app:
    image: myorg/myapp:latest
    ports:
      - 8080:8080
    environment:
      spring.datasource.url: jdbc:mysql://db:3306/student
      spring.datasource.driverClassName: com.mysql.cj.jdbc.Driver
      spring.datasource.username: admin
      spring.datasource.password: root
    depends_on:
      - db
    networks:
      - default

  db:
    image: mysql:latest
    environment:
      DATABASE_HOST: docker-mysql
      DATABASE_PORT: 3306
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: student
      MYSQL_USER: admin
      MYSQL_PASSWORD: root
    networks:
      - default
4b) Justify that Kubernetes is more than just an orchestration system.

5a) Derive the relationship between the cluster nodes and pods in Kubernetes.


ChatGPT ans:
In Kubernetes, the cluster is comprised of nodes, which serve as the foundational
infrastructure for running applications. Each node functions as a worker within the
Kubernetes cluster, providing the necessary resources for executing and managing
application workloads.
Pods, on the other hand, represent the smallest deployable units in Kubernetes. They
encapsulate one or more containers and are scheduled to run on nodes within the cluster.
Pods share common resources such as storage and networking, enabling seamless
communication and interaction between the containers they host.
The relationship between nodes and pods is pivotal in Kubernetes:
Nodes: Nodes serve as the execution environment for pods. They supply the computing,
memory, and storage resources needed to run pod containers. Additionally, nodes house
essential Kubernetes components like the kubelet, kube-proxy, and container runtime.
Pods: Pods are assigned to run on specific nodes within the cluster. Each pod contains one or
more containers, tightly integrated and sharing the same network context. Pods can be
independently managed, scaled, and deployed.
Pod-to-Node Relationship: Kubernetes orchestrates the placement of pods onto nodes. Once
scheduled, a pod becomes associated with a particular node, which is responsible for
executing the containers within it. Pods within a node can communicate with each other via
the node's networking infrastructure.
Node Management: Kubernetes oversees the lifecycle of nodes, ensuring their availability
and health. It monitors node status and allocates pods based on resource requirements and
operational constraints, thereby optimizing workload distribution across the cluster.
In essence, nodes provide the underlying infrastructure for running pods, while pods
encapsulate and manage application workloads within the Kubernetes environment.
Understanding this symbiotic relationship is crucial for effectively deploying and managing
applications in Kubernetes.
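The pod-to-node scheduling relationship described above can be seen in a minimal Pod manifest; a nodeSelector is one standard way a pod ends up bound to a particular kind of node (the pod name, image, and label below are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd        # the scheduler only places this pod on nodes carrying this label
  containers:
    - name: app
      image: nginx:latest
      ports:
        - containerPort: 80
```

Once the scheduler binds the pod to a matching node, that node's kubelet pulls the image and runs the container, which is exactly the node-as-execution-environment role described above.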
Sir's dabba answers:
5b) Explain the fundamental concepts of microservices architecture.
Microservices - also known as the microservice architecture - is an architectural style that
structures an application as a collection of services that are:

• Independently deployable
• Loosely coupled

Services are typically organized around business capabilities. Each service is often owned by a
single, small team.
The microservice architecture enables an organization to deliver large, complex applications
rapidly, frequently, reliably and sustainably - a necessity for competing and winning in today’s
world.

What are microservices?


• Microservices are small, independent, and loosely coupled. A single small team of
developers can write and maintain a service.

• Each service is a separate codebase, which can be managed by a small development team.
• Services can be deployed independently. A team can update an existing service
without rebuilding and redeploying the entire application.

• Services are responsible for persisting their own data or external state. This differs
from the traditional model, where a separate data layer handles data persistence.

• Services communicate with each other by using well-defined APIs. Internal
implementation details of each service are hidden from other services.

• Supports polyglot programming. For example, services don't need to share the same
technology stack, libraries, or frameworks.

• Besides the services themselves, some other components appear in a typical
microservices architecture:

• Management/orchestration. This component is responsible for placing services on
nodes, identifying failures, rebalancing services across nodes, and so forth. Typically
this component is an off-the-shelf technology such as Kubernetes, rather than
something custom built.

• API Gateway. The API gateway is the entry point for clients. Instead of calling services
directly, clients call the API gateway, which forwards the call to the appropriate
services on the back end.

• Advantages of using an API gateway include:

• It decouples clients from services. Services can be versioned or refactored without
needing to update all of the clients.

• Services can use messaging protocols that are not web friendly, such as AMQP.
• The API Gateway can perform other cross-cutting functions such as authentication,
logging, SSL termination, and load balancing.
• Out-of-the-box policies, like for throttling, caching, transformation, or validation.
Benefits
Agility. Because microservices are deployed independently, it's easier to manage bug fixes
and feature releases. You can update a service without redeploying the entire application,
and roll back an update if something goes wrong. In many traditional applications, if a bug is
found in one part of the application, it can block the entire release process. New features
might be held up waiting for a bug fix to be integrated, tested, and published.
Small, focused teams. A microservice should be small enough that a single feature team can
build, test, and deploy it. Small team sizes promote greater agility. Large teams tend to be less
productive, because communication is slower, management overhead goes up, and agility
diminishes.
Small code base. In a monolithic application, there is a tendency over time for code
dependencies to become tangled. Adding a new feature requires touching code in a lot of
places. By not sharing code or data stores, a microservices architecture minimizes
dependencies, and that makes it easier to add new features.
Mix of technologies. Teams can pick the technology that best fits their service, using a mix of
technology stacks as appropriate.
Fault isolation. If an individual microservice becomes unavailable, it won't disrupt the entire
application, as long as any upstream microservices are designed to handle faults correctly.
For example, you can implement the Circuit Breaker pattern, or you can design your solution
so that the microservices communicate with each other using asynchronous messaging
patterns.
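The Circuit Breaker pattern mentioned under fault isolation can be sketched in plain Java. This is an illustrative toy, not code from Resilience4j or any real library: after a threshold of consecutive failures the breaker "opens", and subsequent calls return a fallback immediately instead of hitting the broken service.

```java
import java.util.function.Supplier;

// Toy circuit breaker: the class and method names here are illustrative.
class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Run the remote call unless the breaker is open; return the fallback on failure.
    <T> T call(Supplier<T> remoteCall, T fallback) {
        if (open) {
            return fallback;             // fail fast: skip the remote call entirely
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;     // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;             // trip the breaker after repeated failures
            }
            return fallback;
        }
    }

    boolean isOpen() { return open; }
}

public class CircuitBreakerDemo {
    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        Supplier<String> failingService = () -> { throw new RuntimeException("service down"); };

        for (int i = 0; i < 3; i++) {
            breaker.call(failingService, "fallback");
        }
        System.out.println(breaker.isOpen()); // prints "true": the breaker has tripped
    }
}
```

A production breaker would also add a timeout after which the breaker "half-opens" to probe whether the downstream service has recovered; that is omitted here to keep the sketch short.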
Scalability. Services can be scaled independently, letting you scale out subsystems that
require more resources, without scaling out the entire application. Using an orchestrator
such as Kubernetes, you can pack a higher density of services onto a single host, which
allows for more efficient utilization of resources.
Data isolation. It is much easier to perform schema updates, because only a single
microservice is affected. In a monolithic application, schema updates can become very
challenging, because different parts of the application might all touch the same data, making
any alterations to the schema risky.
