CNF QNA
In a nutshell, object/relational mapping is the automated (and transparent) persistence of objects in
a Java application to the tables in a relational database, using metadata that describes the mapping
between the objects and the database. ORM, in essence, works by (reversibly) transforming data
from one representation to another.
The following list of issues, which we’ll call the O/R mapping problems, covers the fundamental
problems solved by a full object/relational mapping tool in a Java environment.
• What do persistent classes look like? Are they fine-grained JavaBeans? Or are they instances of
some (coarser granularity) component model like EJB? How transparent is the persistence tool? Do
we have to adopt a programming model and conventions for classes of the business domain?
• How is mapping metadata defined? Since the object/relational transformation is governed entirely
by metadata, the format and definition of this metadata is a centrally important issue. Should an
ORM tool provide a GUI to manipulate the metadata graphically? Or are there better approaches to
metadata definition?
• How should we map class inheritance hierarchies? There are several standard strategies. What
about polymorphic associations, abstract classes, and interfaces?
• How do object identity and equality relate to database (primary key) identity? How do we map
instances of particular classes to particular table rows?
• How does the persistence logic interact at runtime with the objects of the business domain? This is
a problem of generic programming, and there are a number of solutions including source generation,
runtime reflection, runtime bytecode generation, and build-time bytecode enhancement. The
solution to this problem might affect your build process.
• What is the lifecycle of a persistent object? Does the lifecycle of some objects depend upon the
lifecycle of other associated objects? How do we translate the lifecycle of an object to the lifecycle of
a database row?
• What facilities are provided for sorting, searching, and aggregating? The application could do some
of these things in memory. But efficient use of relational technology requires that this work
sometimes be performed by the database.
• How do we efficiently retrieve data with associations? Efficient access to relational data is usually
accomplished via table joins. Object-oriented applications usually access data by navigating an object
graph. Two data access patterns should be avoided when possible: the n+1 selects problem, and its
complement, the Cartesian product problem (fetching too much data in a single select).
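To make the last point concrete, here is a hedged JPQL sketch. The Order and Item entity names and the one-to-many items association are hypothetical, not from the text: navigating a lazily loaded graph issues one select for the orders plus one select per order for its items (the n+1 selects problem), whereas a fetch join retrieves the same graph in a single select.

```sql
-- n+1 pattern: one select for the orders...
SELECT o FROM Order o
-- ...then, while navigating each order's items collection,
-- one additional select per order.

-- Fetch-join alternative: a single select that joins in the items.
SELECT DISTINCT o FROM Order o JOIN FETCH o.items
```

Overusing fetch joins across several collections in one query, however, leads straight to the complementary Cartesian product problem described above.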
1b) infer the steps for connecting to database from spring using hibernate xml based
configuration.?
https://www.javaguides.net/2018/11/spring-mvc-5-hibernate-5-xml-based-configuration-
example.html
{Out-of-syllabus question because he didn’t teach it but expects us to learn it ourselves. But it might come.}
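In short, the linked article’s approach comes down to three beans: a JDBC DataSource, a Hibernate SessionFactory built from that DataSource plus mapping metadata, and a HibernateTransactionManager. A minimal sketch of the XML follows; the JDBC URL, credentials, and package name are placeholders, not values from the article:

```xml
<!-- applicationContext.xml (sketch): Spring + Hibernate 5 XML wiring.
     URL, username, password, and packagesToScan are placeholders. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Step 1: the JDBC DataSource -->
    <bean id="dataSource"
          class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="com.mysql.cj.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
        <property name="username" value="root"/>
        <property name="password" value="root"/>
    </bean>

    <!-- Step 2: the Hibernate SessionFactory built from the DataSource -->
    <bean id="sessionFactory"
          class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
        <property name="dataSource" ref="dataSource"/>
        <property name="packagesToScan" value="com.example.entity"/>
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop>
                <prop key="hibernate.show_sql">true</prop>
            </props>
        </property>
    </bean>

    <!-- Step 3: the transaction manager wrapping the SessionFactory -->
    <bean id="transactionManager"
          class="org.springframework.orm.hibernate5.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory"/>
    </bean>
</beans>
```

With this in place, DAO classes obtain sessions from the injected SessionFactory and methods annotated @Transactional run inside the transaction manager.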
2. Write program 7 {JPA program.}
3a. provide an overview of docker and its key components.
Docker - Introduction
• Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
• By taking advantage of Docker's methodologies for shipping, testing, and deploying code, you can
significantly reduce the delay between writing code and running it in production.
Docker components
Docker Engine: It's a core part of Docker system. Docker uses a client-server architecture. It consists
of
The Docker server, or daemon, is a process that runs continuously and is used to control and
manage the containers. The Docker daemon listens for Docker API requests and handles Docker
images, containers, and networks; it also communicates with other daemons to manage Docker
services.
A REST API (Representational State Transfer Application Programming Interface): the interface
that programs use to talk with the daemon and instruct it what to do.
The Docker client: a command-line interface (CLI) for sending instructions to the Docker
daemon using Docker commands. Though the client can run on the host machine, it relies
on Docker Engine’s REST API to communicate with the daemon, which may be local or remote.
Image: an image is a read-only template with instructions for creating a container. Images
define the application’s dependencies and the processes that should run when the application
launches. We can get images from Docker Hub or create our own images by writing specific
instructions in a file called a Dockerfile.
Containers: A container is a runnable instance of an image. We can create, start, stop, move, or
delete a container using the Docker API or CLI. An image is a class and the container is an
instance of that class. It is possible to connect a container to one or more networks, attach
storage to it, or even create a new image based on its current state.
3b) Design a Dockerfile to pull a MySQL image, set the MySQL root password, and
expose port 3306. Write the command to build the Docker image from the file.
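The answer below jumps straight to the build command, so here is a sketch of the Dockerfile itself (the root password value is a placeholder; in practice it should come from a secret, not the image):

```dockerfile
# Start from the official MySQL image on Docker Hub
FROM mysql:latest

# Set the root password (placeholder value)
ENV MYSQL_ROOT_PASSWORD=root

# Document that the MySQL server listens on port 3306
EXPOSE 3306
```

Once built, a container can be started from the image with, for example, `docker run -d -p 3306:3306 my_mysql_image`.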
To build the Docker image from this Dockerfile, you can use the docker build command.
Assuming the Dockerfile is in your current directory, you can execute the following command
in your terminal:
docker build -t my_mysql_image .
This command will build the Docker image using the Dockerfile in the current directory (.)
and tag it with the name my_mysql_image. Make sure to replace my_mysql_image with the
desired name for your image.
Alternatively, the same database can be defined as a docker-compose service:
db:
  image: mysql:latest
  environment:
    DATABASE_HOST: docker-mysql
    DATABASE_PORT: 3306
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: student
    MYSQL_USER: admin
    MYSQL_PASSWORD: root
  networks:
    - default
4b) Justify that Kubernetes is not merely an orchestration system.
• Independently deployable
• Loosely coupled
• Services are typically organized around business capabilities. Each service is often owned by a
single, small team.
The microservice architecture enables an organization to deliver large, complex applications
rapidly, frequently, reliably and sustainably - a necessity for competing and winning in today’s
world.
• Services are responsible for persisting their own data or external state. This differs
from the traditional model, where a separate data layer handles data persistence.
• Supports polyglot programming. For example, services don't need to share the same
technology stack, libraries, or frameworks.
• Besides the services themselves, some other components appear in a typical
microservices architecture:
• API Gateway. The API gateway is the entry point for clients. Instead of calling services
directly, clients call the API gateway, which forwards the call to the appropriate
services on the back end.
• Services can use messaging protocols that are not web friendly, such as AMQP.
• The API Gateway can perform other cross-cutting functions such as authentication,
logging, SSL termination, and load balancing.
• Out-of-the-box policies, such as throttling, caching, transformation, or validation.
Benefits
Agility. Because microservices are deployed independently, it's easier to manage bug fixes
and feature releases. You can update a service without redeploying the entire application,
and roll back an update if something goes wrong. In many traditional applications, if a bug is
found in one part of the application, it can block the entire release process. New features
might be held up waiting for a bug fix to be integrated, tested, and published.
Small, focused teams. A microservice should be small enough that a single feature team can
build, test, and deploy it. Small team sizes promote greater agility. Large teams tend to be less
productive, because communication is slower, management overhead goes up, and agility
diminishes.
Small code base. In a monolithic application, there is a tendency over time for code
dependencies to become tangled. Adding a new feature requires touching code in a lot of
places. By not sharing code or data stores, a microservices architecture minimizes
dependencies, and that makes it easier to add new features.
Mix of technologies. Teams can pick the technology that best fits their service, using a mix of
technology stacks as appropriate.
Fault isolation. If an individual microservice becomes unavailable, it won't disrupt the entire
application, as long as any upstream microservices are designed to handle faults correctly.
For example, you can implement the Circuit Breaker pattern, or you can design your solution
so that the microservices communicate with each other using asynchronous messaging
patterns.
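The Circuit Breaker pattern mentioned above can be sketched in plain Java. This is a minimal, hypothetical illustration, not a specific library’s API (real implementations such as Resilience4j also add a half-open state and time-based recovery): after a threshold of consecutive failures, the breaker opens and subsequent calls fail fast with a fallback instead of hitting the broken service.

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch: trips open after a fixed number of
// consecutive failures, then short-circuits every later call.
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public <T> T call(Supplier<T> service, T fallback) {
        if (open) {
            return fallback;            // fail fast: don't hit the broken service
        }
        try {
            T result = service.get();
            consecutiveFailures = 0;    // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;            // trip the breaker
            }
            return fallback;
        }
    }

    public boolean isOpen() { return open; }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        System.out.println(breaker.call(failing, "fallback")); // failure 1 -> "fallback"
        System.out.println(breaker.call(failing, "fallback")); // failure 2 -> "fallback", breaker opens
        System.out.println(breaker.isOpen());                  // true
    }
}
```

The key design point is that the caller always gets an immediate answer (the fallback) once the breaker is open, so a failing downstream service cannot tie up threads across the whole application.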
Scalability. Services can be scaled independently, letting you scale out subsystems that
require more resources, without scaling out the entire application. Using an orchestrator
such as Kubernetes, you can pack a higher density of services onto a single host, which
allows for more efficient utilization of resources.
Data isolation. It is much easier to perform schema updates, because only a single
microservice is affected. In a monolithic application, schema updates can become very
challenging, because different parts of the application might all touch the same data, making
any alterations to the schema risky.