G-07-Autonomous Database - Database Dedicated-Transcript

This document discusses deployment considerations and best practices for Autonomous Database Dedicated. It provides guidance on network architecture, including support for public and private subnets. It recommends best practices for production and development setups, including using separate VCNs or private subnets for different tiers. It also outlines the steps for getting started with a private cloud setup, including requesting infrastructure limits, setting up roles, creating compartments and networks, and deploying the Autonomous Exadata Infrastructure and databases.

Welcome back.

In this lesson, we are going to look at deployment considerations, some of the
architectural guidelines, as well as best practices for Autonomous Database
Dedicated. When it comes to network architecture, dedicated autonomous database has
full virtual cloud network (VCN) support, so customers can deploy it in public as
well as private subnets.

Private IPs are used at the cluster level, that is, for the Exadata infrastructure
itself. Cloud-native connections from compute in OCI subnets or peered VCNs are all
supported here. For customer corporate network connections, you can connect over VPN
using a DRG for IPSec tunneling, or you can use FastConnect for high speed, keeping
the traffic completely off the internet. With a recent addition, secure connections
from Microsoft Azure to Oracle Cloud Infrastructure are also supported.

In terms of best practices for the customer VCN setup, the production setup is where
isolation matters most. You can deploy everything in a single VCN, with the Exadata
cluster in a private subnet, the client tier in a separate private subnet, and the
web tier applications in a public subnet, as per your requirements. Or you can use
peered VCNs, with Autonomous Dedicated in a private subnet and the client tiers in a
separate VCN or private subnets. That way, you are able to provide maximum
isolation.

For development setups, you can use a single VCN with a bastion host in a public
subnet, or connect through a VPN endpoint on the DRG, with the autonomous database
in a private subnet. Bastion routing rules have to be factored in, as well as easy
developer connections from your laptop. On the right-hand side, the same thing is
depicted: you have an Oracle Cloud region, and inside that, different availability
domains, and the placement of all these backends, the app tier, and the databases is
shown here.

So in terms of getting started with the private cloud setup, it normally starts with
requesting a service limit increase for an Exadata quarter rack. The request goes to
Oracle's internal team, which approves it. The next step is the role setup, that is,
the fleet and DBA roles. These are unique to dedicated Exadata autonomous
deployments, and several OCI policies have been created to separate the service user
responsibilities and create private cloud isolation.

The next step is to set up your private cloud. You will need OCI compartments, which
are assigned to IT and end users based on your organizational structure, and you
will create your network overlay along the same lines. Then, the Autonomous Exadata
Infrastructure and the required container databases are created in those
compartments. Different shapes are available to choose from (quarter, half, and full
rack) when it comes to dedicated deployment. This provides self-service access for
end users to create and use autonomous databases.

So in a dedicated deployment, you have an extra layer of infrastructure that the
customer, in essence, is managing themselves, which is why we have added these
roles. The IT fleet admin is concerned with what infrastructure is needed and with
the deployment of that infrastructure. Then we have the DBA role, which is used for
creating the container databases and the database-related things within those
container databases.

The fleet admin allocates and configures the infrastructure resources, that is, the
clusters and container databases, so they can meet the service level agreements. DB
admins create the databases, select the SLAs, and then monitor the health of their
databases.
In terms of identity and access management, you might have already gone through the
previous classes on IAM. But Autonomous Dedicated introduces a somewhat different
set of resource types. You need to create a separation of responsibility for fleet
versus database administration, and for that, OCI provides different resource types.

One of them is the autonomous Exadata infrastructure, which represents the dedicated
hardware resources. Then you have the autonomous container database, a runtime
environment that meets your specific service level agreements. The autonomous
database resource class is for the application databases themselves, and finally
there are autonomous backups. These four are the resource types for OCI autonomous.
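
For reference, these map to the resource-type names used later in the policy
statements in this lesson (a minimal summary, not an exhaustive list):

    autonomous-exadata-infrastructures   -- the dedicated hardware (Exadata infrastructure)
    autonomous-container-databases       -- the container databases (CDBs)
    autonomous-databases                 -- the application databases themselves
    autonomous-backups                   -- the database backups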

And you need to know how to write these policies. You allow a specific group, for
example the IT or fleet admin role versus a DBA role, access to particular kinds of
resources within each compartment. Just to summarize: a group is nothing but a set
of users with the same privileges; policies are used to bind the privileges of a
group to a specific set of resources in a compartment; and a compartment is an
operating context for a specific set of service resources, accessible only to groups
that are explicitly granted access.

In a policy statement, the verb is going to be one of inspect, read, use, or manage.
You might have seen this in much more detail in our identity and access management
class. Manage is the highest level of privilege, and inspect is the lowest level of
privilege on those resources.

Let's take an example here. We are talking about three groups: AcmeFA, which stands
for fleet admin, and two DBA groups, Roadrunner and Coyote. The compartments
corresponding to them are FACompartment for the fleet admin and RoadrunnerCompartment
for the Roadrunner DBA group, and so on. You need group policies for both of these
DBA sides, as well as for the fleet admin side.

To write them, you create policy statements that allow group CoyoteDBA to manage
autonomous-databases in the Coyote compartment, and, for the other DBA group, allow
group RoadrunnerDBA to manage autonomous-databases in compartment
RoadrunnerCompartment. If they also need to manage backups, autonomous-backups is
the resource to use. This way, you are able to write the policy statements, as
sketched below.
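
Written out, the DBA-side statements would look roughly like this (the compartment
name CoyoteCompartment is an assumption, since the transcript does not spell it out):

    Allow group CoyoteDBA to manage autonomous-databases in compartment CoyoteCompartment
    Allow group CoyoteDBA to manage autonomous-backups in compartment CoyoteCompartment
    Allow group RoadrunnerDBA to manage autonomous-databases in compartment RoadrunnerCompartment
    Allow group RoadrunnerDBA to manage autonomous-backups in compartment RoadrunnerCompartment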

Fleet admins are more concerned with managing the infrastructure side of it. They
are given manage privileges on autonomous-exadata-infrastructures and on
autonomous-container-databases, while the DBA groups get read on those container
databases, each in the corresponding compartments; a sketch follows below. We also
have several scripts already created, which are available on GitHub and can be
downloaded to get more details on getting started with dedicated.
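
A rough sketch of the fleet admin side, reusing the group and compartment names from
the earlier example (the exact statements on the slide may differ):

    Allow group AcmeFA to manage autonomous-exadata-infrastructures in compartment FACompartment
    Allow group AcmeFA to manage autonomous-container-databases in compartment FACompartment
    Allow group CoyoteDBA to read autonomous-container-databases in compartment FACompartment
    Allow group RoadrunnerDBA to read autonomous-container-databases in compartment FACompartment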

Another IAM example: here, the groups are split across two departments, Manufacturing
and IT. The first consideration is IAM service limits at the compartment level. For
manufacturing, we allow developers to manage autonomous-databases in compartment
Manufacturing, and also to read autonomous-container-databases in compartment IT.
Fleet admins are responsible for the infrastructure, so we allow the fleet group to
manage autonomous-exadata-infrastructures in compartment IT, as well as
autonomous-container-databases in compartment IT. This way, based on the different
roles, you are able to write the policy statements; see the sketch below.
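
Written out, those statements would look roughly like this (the group names
Developers and FleetAdmins are assumptions; the transcript only describes the roles):

    Allow group Developers to manage autonomous-databases in compartment Manufacturing
    Allow group Developers to read autonomous-container-databases in compartment IT
    Allow group FleetAdmins to manage autonomous-exadata-infrastructures in compartment IT
    Allow group FleetAdmins to manage autonomous-container-databases in compartment IT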

So Autonomous Database Dedicated is essentially a dedicated private database cloud in
the public cloud; simply put, a private cloud within the public cloud. Its simplest
deployment has a single cluster and a single container database. In this example, we
are showing WebStore as a single container database on this dedicated deployment, and
all the other databases, like Shop and Ship, are created in that single container
database. So there is one service level and one maintenance update schedule for all
of the databases, since they are part of one single cluster.

The administrator specifies the size, region, and availability domain of the desired
dedicated Exadata infrastructure. Then the administrator partitions the system by
specifying the desired clusters as well as container databases. Database users simply
provision their databases within these container databases: you just have to specify
the DB compute (how many OCPUs) as well as the maximum storage. CPU and storage can
elastically grow and shrink online, as is the case with serverless.

Billing is based on the size of the Exadata infrastructure and the number of OCPUs,
so there are two components: an infrastructure component and a database OCPU
component. Customers can also bring their existing database licenses if they want to
lower the cost.

In terms of isolation: isolation improves the overall security posture of the
deployment, and Dedicated allows multiple levels of isolation. First, you have
dedicated server hosts, which can satisfy internal compliance requirements. Different
kinds of security and compliance controls can be applied, like private IPs, transport
layer security (TLS), gold images, or detailed tenant-level audits. So these kinds of
features are available in terms of security.

You have container databases, a cluster of VMs, separate hardware, and
hardware-enforced private network VCNs; they all contribute to the isolation of the
environment. The level of security and performance isolation can be tailored to the
needs of each of your databases. We all know that isolation is normally complex, but
autonomous gives you flexibility in the way you want to implement it.

In terms of customizable operational policies, you can have a very sophisticated
private cloud in a public cloud. You've seen that you have separate container
databases (CDBs), so that way you are able to segregate business areas based on
criticality, and test workloads can be isolated from production workloads.

Test databases can get updates and new versions first, before you bring them to the
production systems. That way, database separation policies are in place: you have
separate software update schedules for your test/dev and production, and you can
avoid any updates during peak periods. Think of a busy [INAUDIBLE] season, where you
don't want to put any patch on the system; you can delay that patch. That way, you
are able to control any kind of update so it conforms to application-certified
versions.

Availability policies are also customizable: you specify the level of HA as well as
DR needed for each of the container databases, and you can customize overprovisioning
and peak usage policies. For non-production databases, it definitely makes sense to
overprovision. For example, for functional testing carried out by QA engineers or
developers who just need to do some application development, you can have an
overprovisioned system and utilize the infrastructure for this.

As I said, there are two administrative roles, and they are unique to autonomous.
Fleet Admin activities are separated from DB Admin activities using IAM privileges.
The Fleet Admin group handles business forecasting, dev/test versus production
allocation, and spending caps. The Fleet Admin also provisions the resources using
the GUI or APIs, and allocates budget or service limits by compartment throughout the
organization. They are responsible for provisioning the Exadata infrastructure: they
specify the name, region, and availability domain, and they provision the cluster as
well as the container database (CDB) in the cluster. For provisioning a cluster, you
have to specify the name, software version, service levels, and whether you need Data
Guard or not. Those are the tasks of the Fleet Admin. You can always use BYOL if
customers already have Oracle Database licenses, and the Fleet Admin specifies that
during provisioning.

On the other hand, the Database Admin will be creating the new databases.
Deployment-wise it is nothing different; it is the same as serverless. You just need
to select the database type, whether it's going to be Autonomous Transaction
Processing or Autonomous Data Warehouse; the CPU count, which relates to performance;
the database storage size limit, that is, how many terabytes you want allocated; and
the container database that will contain the DB, which is specific to dedicated.

Then you create the database users as well as the schemas. Performance resources are
allocated proportionately to the number of CPUs chosen here. For example, if your
database gets 15% of the CPU on the Exadata server, it gets the same percentage of
memory. The same goes for IO per second, as well as the storage server CPUs and flash
cache.

The CPU and memory allocated to a CDB grow dynamically as PDB CPUs are added to it.
There is no need to specify sessions, files, processes, buffer cache, or PGA, because
these parameters are all automatic on autonomous database.

In terms of database operations on the cloud control plane, lifecycle operations are
mostly performed through the cloud UI, REST APIs, or SDKs such as Java or Python.
This covers capacity planning; setting up the Exadata clusters and CDBs with create,
delete, start, and stop operations; database creation and release; starting and
stopping; and scaling OCPUs, storage, or any other resource.

Backup and restore, point-in-time recovery, scheduling updates for the Exadata
Infrastructure, VM cluster, or container database, monitoring, and even
notifications: these are all administrative tasks, and they are managed through these
different interfaces. You can always download the connection information, that is,
the wallet used for encrypted connections. It is the same behavior as serverless for
client credentials and wallets.

In terms of performance monitoring, as well as scripts and schema design, all the
standard tools, such as SQL Developer and web-based SQL Developer, are available in
dedicated as well. Performance Hub is natively integrated and provides a lot of
insight into your database: AWR reports, average active sessions, SQL monitoring, and
Active Session History analytics are all natively available in the Oracle Cloud
Console.

You will be able to monitor all the SQL coming into the database. You can
additionally use your existing Enterprise Manager Cloud Control deployment, if you
are already using it; starting with version 13.3, EM Cloud Control supports
autonomous transaction processing databases.

Now, we'll focus on client connections and try to understand how these are done in a
dedicated environment. As we've seen, it provides full virtual cloud network support,
public and private subnets, and Oracle Cloud native connections, so compute running
on a subnet in the cluster's VCN or in a peered VCN can connect.

Connections to Autonomous Database Dedicated can also come from the customer
corporate network through VPN, FastConnect, or the peering with Microsoft Azure. All
of those are secure connections to your autonomous database.

In terms of secure, highly available client connectivity, all connections are secured
by default. Both wallet-based TLS certificate connections and normal SQL*Net
connections are available in Autonomous Dedicated. Connection services can be
priority-based or workload-specific, for transaction processing or reporting.

Different kinds of connection services are provided; we'll discuss those services on
the next slide. It also supports Transparent Application Continuity, and this is very
important because it can track and record the session state up to the last
transaction. It can recover and replay in the event of an unplanned outage,
proactively drains services before maintenance, and hides planned switchover or
failover events. All of that is available on Autonomous Dedicated.

The services are used to control workload priorities, and they come in TLS and
non-TLS pairs. Applications connect using a pre-defined database service, and that
service controls the SQL parallelism, the relative priority, and the maximum
concurrency for the executing users.

Most OLTP applications use the TP service, and batch jobs use the LOW service. The
services are TPURGENT, TP, HIGH, MEDIUM, and LOW, so basically five services are
defined, and you decide based on the workload type. For OLTP, TPURGENT and TP are
used most of the time. For data warehouse, batch, and reporting kinds of queries, you
select between HIGH, MEDIUM, and LOW, considering how many concurrent sessions or
operations you are going to run.

All of these services are used to connect to the Autonomous Database. The username is
going to be ADMIN in all cases, and you use the password you entered when you created
the autonomous database, since that is when you specify the admin password. So you
connect with the ADMIN user, the password, and one of these service names, which come
from the client credential wallet. With those, you are able to connect to the
database.

As part of the credentials, you get the tnsnames.ora file, and it has all five
database service names that we see here, from TPURGENT to LOW. These pre-defined
service names provide different levels of performance and concurrency for autonomous
transaction processing. A connection sketch follows below.
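
As a rough sketch of connecting with the wallet from SQL*Plus (the wallet file name,
directory, and database name mydb are placeholders for illustration; the transcript
does not give them):

    # Unzip the credential wallet into a directory and point TNS_ADMIN at it,
    # then connect as ADMIN using one of the pre-defined services (here the TP service).
    unzip Wallet_mydb.zip -d /home/oracle/wallet_mydb
    export TNS_ADMIN=/home/oracle/wallet_mydb
    sqlplus admin@mydb_tp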

Going into the detail of TPURGENT, it is the highest-priority application connection
service, for any kind of time-critical transaction processing operation. Keep in mind
that this connection service supports manual parallelism. TP, on the other hand, is
the typical application connection service for transaction processing operations, and
this connection service does not run with parallelism.

If you look at HIGH as the service level, it is a high-priority application
connection service, mostly used for reporting as well as batch operations. All
operations run in parallel and are subject to queueing. MEDIUM, on the other hand, is
a typical application connection service for reporting and batch; all operations run
in parallel and are also subject to queueing.

The LOW service is the lowest-priority application connection service, again for
reporting and batch, and this connection service does not run with parallelism. These
are the differences you need to keep in mind.

Sessions on these services may get disconnected if they stay idle for more than five
minutes. That way, resources are freed up for other active users of your database.
When you download the credential ZIP file in order to connect to the autonomous
database, you get the tnsnames.ora file. If you look inside, there are entries for
the different service names, HIGH, LOW, MEDIUM, TP, and TPURGENT, along with the
address protocol, port information, host information, and the security settings.

You have to make sure that the credential wallet and the tnsnames.ora information are
correct so that you can connect to autonomous. As for concurrency: for example, if
you have 16 OCPUs, you basically get 3 concurrent queries for HIGH, 20 for MEDIUM,
and 32 concurrent queries for the LOW service.

Some best practices: OLTP apps should normally use TP, with batch reporting using
LOW. That gives maximum concurrent requests while scheduling reporting requests at
low priority, and it keeps parallelism at 1 to minimize the global cache activity
impact on transactional requests.

When would you change this common configuration? Extremely time-sensitive transaction
requests can run in a session using TPURGENT, so they get the highest priority.
Reporting and analytics that are somewhat time-sensitive can use MEDIUM as well; it
gives you more resources and parallelism so queries can return faster, with marginal
impact on a heavy transaction workload. Rarely, if ever, would you use HIGH with OLTP
applications.

On the other side, data warehouse applications normally use MEDIUM, which is a better
balance between parallelism and concurrency. When would you change this? If you have
a data warehouse with a lot of real end users running relatively small queries that
need more concurrency, you should mostly be using the LOW service there. And for
time-critical reports that are system-driven and run infrequently, you would be
looking at changing as well.

This slide shows how to download the client connection credential wallet. On the
Autonomous Database page, under Administration, the Database Connection option lets
you download it. When you look into the tnsnames.ora file, you will see the different
service names and their connection strings, and the same credential wallet will also
be used by SQL Developer to connect to the autonomous services.

Continuing with the customer VCN for database users: a SCAN, a single client access
name, is available and leverages the OCI VCN. It uses three IP addresses as well as
DNS, and DNS automatically adjusts on a service move using GARP messaging. Services
open on only one node in case you are running with fewer than 16 OCPUs; that gives
the best out-of-the-box performance.

Co-location tagging enables request routing to specific RAC nodes, and this is unique
to Autonomous Dedicated at the moment. It is useful when running with more than 16
OCPUs, which opens a database on more than one node.

The COLOCATION_TAG parameter takes an alphanumeric string in the CONNECT_DATA section
of the TNS connect string. It looks like the sketch below: if COLOCATION_TAG equals
interactive, requests carrying that tag are routed together to the same node where
possible.
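
A minimal sketch of such a connect string (the alias, host, port, and service name
are placeholders, not values given in the lesson):

    mydb_tp_interactive =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCPS)(HOST = scan.example.oraclevcn.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = mydb_tp.adb.oraclecloud.com)
          (COLOCATION_TAG = interactive)
        )
      )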

Transparent Application Continuity is enabled on the server side by default, so there
is nothing to configure for client drivers 19c and above. It is enabled per session
using TNSNAMES.ORA, so do not use EZCONNECT naming here. In the example, we have a
service, sales.us.example.com, and it has CONNECT_TIMEOUT, RETRY_COUNT, RETRY_DELAY,
and TRANSPORT_CONNECT_TIMEOUT. All of the parameters needed for transparent
application continuity are in the service descriptor, sketched below.
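
A rough reconstruction of such a tnsnames.ora entry, using the service name from the
slide (the alias, host, port, and the specific timeout values are assumptions):

    sales =
      (DESCRIPTION =
        (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 30)(RETRY_DELAY = 3)(TRANSPORT_CONNECT_TIMEOUT = 3)
        (ADDRESS = (PROTOCOL = TCP)(HOST = sales-scan.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = sales.us.example.com))
      )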

If you want to disable it, you can execute DBMS_APP_CONT.DISABLE_FAILOVER with
"HIGH", where "HIGH" can be replaced with any of the service names you are using:
HIGH, MEDIUM, LOW, TP, or TPURGENT. See the Developer Guide for older driver details;
drivers 12.1 and above are supported.
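
A minimal PL/SQL sketch of that call, taking the procedure name and the service-name
argument exactly as stated above (worth verifying against the current documentation
for your driver and database version):

    -- Run as the ADMIN user to disable failover for the HIGH service.
    BEGIN
      DBMS_APP_CONT.DISABLE_FAILOVER('HIGH');
    END;
    /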

With this, we are concluding part 2 of Autonomous Database Dedicated. In the next
part, we are going to cover the security options available in Autonomous Database.
Thanks for watching, and I will see you in part 3.
