<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <copyright>Copyright TechTarget - All rights reserved</copyright>
        <description></description>
        <docs>https://cyber.law.harvard.edu/rss/rss.html</docs>
        <generator>Techtarget Feed Generator</generator>
        <language>en</language>
        <lastBuildDate>Mon, 27 Apr 2026 02:12:00 GMT</lastBuildDate>
        <link>https://searchservervirtualization.techtarget.com</link>
        <managingEditor>editor@techtarget.com</managingEditor>
        <item>
            <body>&lt;p&gt;AI took center stage at Microsoft Ignite 2025, this year's installment of the tech giant's annual conference.&lt;/p&gt; 
&lt;p&gt;From Nov. 18-21, conference-goers gathered at San Francisco's Moscone Center for hundreds of live sessions, demonstrations and labs. Key topics this year included cloud and AI platforms, AI-powered security and AI business tools.&lt;/p&gt; 
&lt;p&gt;In a shakeup from years past, Microsoft CEO Satya Nadella did not make an appearance at this year's event. Judson Althoff, CEO of Microsoft's commercial business, delivered the opening keynote -- where he highlighted the company's AI innovations -- alongside senior Microsoft engineering leaders.&lt;/p&gt; 
&lt;p&gt;Dive into our editorial coverage below to catch up on the major announcements and news analysis from this year's Microsoft Ignite conference, and stay tuned for future updates.&lt;/p&gt;</body>
            <description>Our guide to Microsoft Ignite 2025 has everything you need to know about the annual conference, including live news updates, expert analysis and highlights from last year's show.</description>
            <link>https://www.techtarget.com/searchwindowsserver/conference/Microsoft-Ignite-conference-coverage</link>
            <pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
            <title>Microsoft Ignite 2025 conference coverage</title>
        </item>
        <item>
            <body>&lt;p&gt;Managing an enterprise network has never been more complicated. With distributed users, applications and environments, and changing demands, network administrators are turning to automation tools to ease the burden. Ansible, Terraform and Vagrant are network automation tools used to deploy, manage and provision network changes.&lt;/p&gt; 
&lt;p&gt;Given their shared characteristics, network administrators might wonder about the similarities and differences between the tools and how they relate to one another. While the three infrastructure automation platforms seem similar, they fulfill different functions.&lt;/p&gt; 
&lt;p&gt;This article compares Ansible, Terraform and Vagrant, diving into their pros and cons, potential use cases and how network administrators can use a combination of the three tools for a comprehensive network automation effort.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What is Ansible?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Ansible?&lt;/h2&gt;
 &lt;p&gt;&lt;a target="_blank" href="https://www.ansible.com/overview/how-ansible-works" rel="noopener"&gt;Ansible&lt;/a&gt; is a Python-based IT system configuration automation tool. It has gained wide acceptance as a network automation system, due in part to its agentless architecture -- it doesn't require the use of an agent to automate the system. Other network automation systems, such as NAPALM, can easily integrate with Ansible, broadening vendor support and increasing its appeal.&lt;/p&gt;
 &lt;p&gt;Ansible's actions are configured using YAML-formatted files, called &lt;i&gt;playbooks&lt;/i&gt;, which network engineers are often more comfortable using than programmatic automation frameworks, such as Nornir. Ansible has a large online community, and many resources are available to &lt;a href="https://www.techtarget.com/searchitoperations/answer/How-do-I-get-started-with-an-Ansible-playbook"&gt;learn how to use it&lt;/a&gt;. Red Hat provides a commercially supported version, Ansible Automation Platform, formerly known as Ansible Tower.&lt;/p&gt;
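 &lt;p&gt;A minimal playbook sketch -- the inventory group, module choice and backup path below are illustrative examples, not defaults -- shows the YAML format in practice:&lt;/p&gt;

```yaml
---
# Hypothetical playbook: back up running configs from a "routers" inventory group.
# The cisco.ios collection is used as an example; swap in the module for your platform.
- name: Back up router configurations
  hosts: routers
  gather_facts: false
  tasks:
    - name: Save the running config to a local backup directory
      cisco.ios.ios_config:
        backup: true
        backup_options:
          dir_path: ./backups
```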
 &lt;h3&gt;Benefits of Ansible&lt;/h3&gt;
 &lt;p&gt;Advantages of Ansible include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;YAML-formatted playbook files.&lt;/b&gt; YAML's human-readable format makes playbooks easier to interpret than code in general-purpose programming languages, and playbook data can easily be converted to other formats -- such as &lt;a href="https://www.theserverside.com/definition/JSON-Javascript-Object-Notation"&gt;JSON&lt;/a&gt; and &lt;a href="https://www.techtarget.com/whatis/definition/XML-Extensible-Markup-Language"&gt;XML&lt;/a&gt; -- for use with other tools.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Agentless operation.&lt;/b&gt; Network administrators typically prefer Ansible's agentless architecture, which simplifies deployment, reduces overhead and increases security.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Multivendor capability.&lt;/b&gt; Ansible's modules support devices and platforms from many vendors, which makes it useful for automating multivendor environments.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Large community and vendor support. &lt;/b&gt;Ansible has a large, active online community of users, which speeds up the development of new processes, quickly identifies bugs and helps maintain the platform's efficiency and relevancy.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Disadvantages of Ansible&lt;/h3&gt;
 &lt;p&gt;Challenges of Ansible include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Slow speeds. &lt;/b&gt;Ansible can be slow to collect a large volume of information.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Specific use cases.&lt;/b&gt; Ansible's functions are tailored more toward device-specific configuration than toward broader infrastructure provisioning.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Limited UI.&lt;/b&gt; Because Ansible is driven from the command line, its interface is limited compared with other network automation tools, which can make advanced, complex environments difficult to manage.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Ansible use cases&lt;/h3&gt;
 &lt;p&gt;Ansible's primary use case is network automation, and network administrators can use the tool to automate several areas of the network, including the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Network configuration management.&lt;/b&gt; Ansible enables network administrators to automate repetitive configuration management tasks, such as software deployment and device configuration. Automating network tasks reduces human error and downtime, which helps increase network reliability.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Preexisting system configuration.&lt;/b&gt; Ansible can also manage configuration settings that already exist in the network, such as routing, switching and device settings.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Software deployment.&lt;/b&gt; Network administrators use Ansible to install software, provision servers, offer continuous delivery and roll out updates. By automating these processes, Ansible enables network professionals to maintain software consistency across the network environment.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Security and compliance.&lt;/b&gt; Network admins can use Ansible to automate security policies to ensure their networks remain in compliance. Furthermore, network pros can also ensure their Ansible playbooks remain in compliance by defining requirements and regularly monitoring and managing playbooks.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;            
&lt;section class="section main-article-chapter" data-menu-title="What is Terraform?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Terraform?&lt;/h2&gt;
 &lt;p&gt;&lt;a target="_blank" href="https://www.terraform.io/intro/index.html" rel="noopener"&gt;Terraform&lt;/a&gt; is an infrastructure as code (IaC) DevOps tool used to create, maintain and decommission large data center infrastructure. Terraform is a cloud infrastructure management tool that works across multiple cloud providers, which makes it ideal for the full lifecycle of data center infrastructure.&lt;/p&gt;
 &lt;p&gt;The configurations are specified in the declarative programming language of HashiCorp Configuration Language. The declarative nature of HCL requires practitioners to specify the change they want the code to make. As the configuration changes, Terraform determines the steps to transform an infrastructure into the new desired state.&lt;/p&gt;
 &lt;p&gt;Terraform doesn't have a GUI. This might be considered a liability, but it isn't. The declarative language is ideal for working in a code repository with version control, which is necessary for IaC. Having too many systems with GUIs could result in a complex maze of screens with choices and dialog boxes that could be accomplished with a few lines of configuration syntax.&lt;/p&gt;
 &lt;p&gt;Terraform's three-step workflow consists of the following phases:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Write.&lt;/b&gt; Network administrators write the Terraform configuration. The configuration, written in HCL, should define the desired state of the network and specify the resources.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Plan.&lt;/b&gt; Based on the written code, Terraform creates a plan for configuring the infrastructure. Here, Terraform can complete one of three actions: create, update or destroy.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Apply.&lt;/b&gt; On the network administrator's command, Terraform applies the configuration changes to the network.&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;p&gt;The plan phase is one of &lt;a href="https://www.techtarget.com/searchitoperations/news/252485153/HashiCorp-Terraform-beta-brings-long-awaited-features"&gt;Terraform's most helpful features&lt;/a&gt; because it shows the changes that would occur without performing them. It's essentially a test drive of a proposed change. The output enables teams to verify that the changes are what they intended to happen and that the desired end state is achieved.&lt;/p&gt;
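 &lt;p&gt;As a minimal sketch of what the written configuration looks like -- the AWS provider, region and CIDR blocks here are illustrative examples, not recommendations -- the following HCL declares a VPC and subnet whose planned create, update or destroy actions &lt;i&gt;terraform plan&lt;/i&gt; would display before &lt;i&gt;terraform apply&lt;/i&gt; changes anything:&lt;/p&gt;

```hcl
# Hypothetical configuration: declare the desired end state of one VPC and subnet.
# Terraform computes the steps needed to reach this state.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "lab" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "lab" {
  vpc_id     = aws_vpc.lab.id
  cidr_block = "10.0.1.0/24"
}
```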
 &lt;h3&gt;Benefits of Terraform&lt;/h3&gt;
 &lt;p&gt;Advantages of Terraform include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Infrastructure provisioning &lt;/b&gt;&lt;b&gt;and management.&lt;/b&gt; Terraform can configure infrastructure and &lt;a href="https://www.techtarget.com/searchnetworking/tip/A-guide-to-network-lifecycle-management"&gt;oversee lifecycle management&lt;/a&gt;, independent of a cloud provider.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Cloud service and resource integration. &lt;/b&gt;Terraform can integrate cloud services with external functions, such as email and DNS.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Visibility.&lt;/b&gt; The &lt;i&gt;plan&lt;/i&gt; phase provides visibility into changes before applying them.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Three-step workflow.&lt;/b&gt; Terraform's three-step workflow is integral to the tool's functionality. It creates a comprehensive IaC tool that facilitates and helps manage multi-cloud environments.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Disadvantages of Terraform&lt;/h3&gt;
 &lt;p&gt;Challenges of Terraform include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;Steep learning curve.&lt;/b&gt; Terraform can be difficult for network administrators without a background in IaC. In addition, HCL can be a barrier to adoption for network professionals who are more familiar with &lt;a href="https://www.techtarget.com/searchapparchitecture/tip/A-beginners-guide-to-learning-new-programming-languages"&gt;other languages&lt;/a&gt; and formats, such as YAML.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Complexity.&lt;/b&gt; Terraform is designed to manage large, complex infrastructures, which can be daunting for network administrators. In addition, the state files Terraform uses to track infrastructure can be complicated to manage.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Troubleshooting challenges.&lt;/b&gt; Terraform's error messages can be vague, which makes problems difficult to debug.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Terraform use cases&lt;/h3&gt;
 &lt;p&gt;Although Terraform is primarily a DevOps IaC tool designed for cloud management and infrastructure provisioning, network administrators can use Terraform to support networking-specific applications and other automation tasks. Examples of Terraform use cases for network teams include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Software-defined networking.&lt;/b&gt; Terraform can work together with software-defined networks to automate configuration changes depending on the network's requirements.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Multi-cloud environment management. &lt;/b&gt;Terraform's three-step workflow facilitates multi-cloud provisioning and management by providing a comprehensive and unified approach to manage resources across various cloud providers.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Kubernetes deployment. &lt;/b&gt;Terraform's unified workflow and lifecycle management capabilities make it useful to deploy containerized applications with Kubernetes because administrators can provision apps from a centralized platform.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Consistent environments.&lt;/b&gt; Terraform helps administrators ensure configurations remain uniform across multiple environments.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Infrastructure provisioning.&lt;/b&gt; Terraform can automate changes in larger infrastructures that contain multiple components.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/networking-ansible_vs_terraform_vs_vagrant-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/networking-ansible_vs_terraform_vs_vagrant-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/networking-ansible_vs_terraform_vs_vagrant-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/networking-ansible_vs_terraform_vs_vagrant-f.png 1280w" alt="Ansible vs. Terraform vs. Vagrant" data-credit="Informa TechTarget" height="434" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Compare Ansible, Terraform and Vagrant, looking at pros and cons as well as use cases.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;                 
&lt;section class="section main-article-chapter" data-menu-title="What is Vagrant?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Vagrant?&lt;/h2&gt;
 &lt;p&gt;&lt;a target="_blank" href="https://www.vagrantup.com/intro" rel="noopener"&gt;Vagrant&lt;/a&gt; is an open source platform used to deploy, manage and automate VM environments. Vagrant, also from HashiCorp, is primarily used to replicate a development environment among multiple developers who need to guarantee consistency. This is particularly important to ensure consistency in software library versions, environment variables and dependency versions.&lt;/p&gt;
 &lt;p&gt;Vagrant can incorporate other automation tools, such as &lt;a href="https://www.techtarget.com/searchitoperations/feature/Ansible-vs-Chef-vs-Puppet-vs-SaltStack-A-comparison"&gt;Ansible, Puppet or Chef&lt;/a&gt;, to perform specific VM configuration tasks. Developers specify the software version and elements they want in the environment, and Vagrant performs the actions necessary to create a VM with that configuration. Other developers can use the same Vagrant configuration file to replicate the VM quickly.&lt;/p&gt;
 &lt;p&gt;Development environment consistency is critical for eliminating bugs related to differences between each software developer's environment. Vagrant is also valuable for quickly and consistently instantiating software test systems, which enables developers to easily fire up test systems when checking new features and bug fixes.&lt;/p&gt;
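 &lt;p&gt;A minimal Vagrantfile sketch -- the box name, private IP and playbook path are illustrative -- shows how a single file captures an environment that any team member can reproduce with &lt;i&gt;vagrant up&lt;/i&gt;:&lt;/p&gt;

```ruby
# Hypothetical Vagrantfile: one Ubuntu VM on a private network,
# provisioned by an Ansible playbook so every developer gets the same setup.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.network "private_network", ip: "192.168.56.10"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```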
 &lt;h3&gt;Benefits of Vagrant&lt;/h3&gt;
 &lt;p&gt;Advantages of Vagrant include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Simplified development environments. &lt;/b&gt;Vagrant makes it easy for teams to configure and deploy standard environments for development and testing. This is especially useful to create consistent environments quickly and easily for testing without compromising the production environment.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;VM management.&lt;/b&gt; Although most administrators use Vagrant for development environments, Vagrant can also be used as a general VM management tool. Vagrant can abstract VMs, which enables administrators to create a consistent, exact replication of the environments they plan to reproduce.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Widely used. &lt;/b&gt;Many software development organizations have adopted Vagrant, which makes it easier for teams to pick up. In addition, the breadth of information available on Vagrant, through official documentation or online communities, provides network admins with more support for the tool.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Community resources.&lt;/b&gt; Because Vagrant is so commonly used, teams can also access a wide selection of resources made available online by community members.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Disadvantages of Vagrant&lt;/h3&gt;
 &lt;p&gt;Challenges of Vagrant include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Unable to implement changes. &lt;/b&gt;Although Vagrant can show what changes would look like in a production environment, it's not designed to handle &lt;a href="https://www.techtarget.com/searchnetworking/answer/What-does-a-network-infrastructure-upgrade-project-involve"&gt;infrastructure changes&lt;/a&gt;.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;High resource consumption.&lt;/b&gt; Because Vagrant provisions multiple VMs -- which are resource-intensive -- it consumes a large amount of memory and CPU, which can strain the host machine.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Slower performance. &lt;/b&gt;Vagrant's resource-intensive nature means the platform also offers lower performance compared to other configuration management tools.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Vagrant use cases&lt;/h3&gt;
 &lt;p&gt;Vagrant is primarily a DevOps tool for software testing and development, and it's most often used to create consistent development and testing VM environments. However, network admins can also take advantage of its capabilities and use it to help with networking-specific applications and purposes. Use cases for Vagrant include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Abstraction. &lt;/b&gt;Network administrators can use Vagrant to create abstractions and manage configurations, where they can test &lt;a href="https://www.techtarget.com/searchnetworking/tip/Common-types-of-enterprise-network-connections"&gt;network connectivity options&lt;/a&gt;, such as connecting to a public network or creating a private network.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Network simulation.&lt;/b&gt; Network administrators can use Vagrant to simulate a network to test network design changes or other changes to the network configuration.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Interface management.&lt;/b&gt; Network admins can use Vagrant to configure network interfaces as either static or dynamic using Dynamic Host Configuration Protocol.&lt;/li&gt; 
 &lt;/ul&gt;
  &lt;h3&gt;Choose between Ansible, Terraform and Vagrant&lt;/h3&gt;
 &lt;p&gt;Ansible, Terraform and Vagrant each perform automation in some way, but their functionality is decidedly different. A single organization could likely use all three tools: Ansible for network configuration management, Terraform to manage cloud infrastructure across one or more cloud providers, and Vagrant for software development and test platform standardization.&lt;/p&gt;
 &lt;p&gt;As always, it is best to select the tool that matches your business requirements.&lt;/p&gt;
 &lt;p&gt;&lt;b&gt;Editor's note: &lt;/b&gt;&lt;i&gt;This article was originally written by Terry Slattery and expanded by Deanna Darah to add comparison information about the tools.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Terry Slattery is an independent consultant who specializes in network management and network automation. He founded Netcordia and invented NetMRI, a network analysis appliance that provides visibility into the issues and complexity of modern router- and switch-based IP networks.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Deanna Darah is site editor for Informa TechTarget's SearchNetworking site.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Ansible sets up networks agentlessly, Terraform manages cloud infrastructure and Vagrant creates consistent development environments. Each serves distinct network automation needs.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/competition_a299069360.jpg</image>
            <link>https://www.techtarget.com/searchnetworking/tip/Ansible-vs-Terraform-vs-Vagrant-Whats-the-difference</link>
            <pubDate>Fri, 12 Sep 2025 12:02:00 GMT</pubDate>
            <title>Ansible vs. Terraform vs. Vagrant: What's the difference?</title>
        </item>
        <item>
            <body>&lt;p&gt;Electric power is one of the most important resources to protect when it comes to critical infrastructure. Virtually every business can experience power loss, and the results can be disastrous.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.techtarget.com/searchdisasterrecovery/definition/business-continuity"&gt;Business continuity&lt;/a&gt; is an organization's ability to maintain critical business functions during and after a disruption. Power outages can strike at any moment, leading to extended downtime or data loss. Incorporating power outage preparedness into a business continuity plan can help when the lights go down.&lt;/p&gt; 
&lt;p&gt;A business continuity plan for power outages must be part of an organization's incident response protocols. Organizations can also take various measures to minimize the likelihood of power outages, such as infrastructure testing and ensuring ample backup power supply access.&lt;/p&gt; 
&lt;p&gt;This article covers the consequences of power loss, its causes, and detailed steps to create and implement an outage recovery plan.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Consequences of power loss"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Consequences of power loss&lt;/h2&gt;
 &lt;p&gt;A loss of power can &lt;a href="https://www.techtarget.com/searchdisasterrecovery/Free-incident-response-plan-template-for-disaster-recovery-planners"&gt;shut down an entire business&lt;/a&gt; unless organizations take suitable precautions. A complete loss of commercial power is the worst-case scenario, as opposed to local and/or regional outages that are confined to specific locations. Extended outages might last hours, days or weeks and can cause &lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/Real-life-business-continuity-failures-Examples-to-study"&gt;catastrophic business losses&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Unplanned downtime due to a power outage might result in the following, if not remedied quickly:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Data loss.&lt;/li&gt; 
  &lt;li&gt;Financial/legal troubles for not meeting compliance requirements.&lt;/li&gt; 
  &lt;li&gt;Reputational harm.&lt;/li&gt; 
  &lt;li&gt;Damage to critical systems.&lt;/li&gt; 
  &lt;li&gt;Employee injuries and even fatalities.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Causes of business power outages"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Causes of business power outages&lt;/h2&gt;
 &lt;p&gt;The events most likely to cause power outages are natural ones, such as severe weather. Naturally occurring events likely to cause power outages include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Tornadoes.&lt;/li&gt; 
  &lt;li&gt;Wildfires.&lt;/li&gt; 
  &lt;li&gt;Earthquakes.&lt;/li&gt; 
  &lt;li&gt;Flooding.&lt;/li&gt; 
  &lt;li&gt;Mudslides.&lt;/li&gt; 
  &lt;li&gt;Lightning strikes.&lt;/li&gt; 
  &lt;li&gt;Sinkholes.&lt;/li&gt; 
  &lt;li&gt;Solar flares and storms.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Outages can also be manmade, by error or malicious action. Disruptions caused by human activity can include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Flooding caused by damaged plumbing.&lt;/li&gt; 
  &lt;li&gt;Electrical damage caused by improper wiring or lack of grounding.&lt;/li&gt; 
  &lt;li&gt;Incorrect data entry or programming of power management systems.&lt;/li&gt; 
  &lt;li&gt;Failure of systems within the nation's electric grid.&lt;/li&gt; 
  &lt;li&gt;Damage to high-voltage overhead power lines and towers.&lt;/li&gt; 
  &lt;li&gt;&lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/Why-BCDR-teams-should-consider-EMP-disaster-recovery-plans"&gt;Electromagnetic pulses&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;Damage to power cables by construction equipment.&lt;/li&gt; 
  &lt;li&gt;Incorrect power system installation.&lt;/li&gt; 
  &lt;li&gt;Insufficient fueling of backup power systems.&lt;/li&gt; 
  &lt;li&gt;Failure to regularly test backup power systems.&lt;/li&gt; 
  &lt;li&gt;Fire/arson.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/ZetTrqWFE_w?si=TnvyxSiYJ0RLIzpX?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="8 steps to design and implement a business continuity plan for power outages"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;8 steps to design and implement a business continuity plan for power outages&lt;/h2&gt;
 &lt;p&gt;As with most business continuity plans, several activities should occur before plan development begins. The following eight tasks guide IT teams and BCDR personnel through designing and implementing a plan.&lt;/p&gt;
 &lt;h3&gt;1. Secure senior management approval and funding&lt;/h3&gt;
 &lt;p&gt;Few, if any, business initiatives get off the ground if leadership is not on board, so it is critical to secure management approval early. Discussions with management will likely shape who is involved in a business continuity plan, as well as &lt;a href="https://www.techtarget.com/searchdisasterrecovery/A-disaster-recovery-budget-template-A-free-download-and-guide"&gt;determine the budget&lt;/a&gt;. This stage is an opportunity to make the case for investing in business continuity and evaluating power sources for potential outages.&lt;/p&gt;
 &lt;h3&gt;2. Establish a project team&lt;/h3&gt;
 &lt;p&gt;The size of a &lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/Establish-a-business-continuity-team-to-get-the-full-picture"&gt;business continuity team&lt;/a&gt; varies depending on the size of the organization. These teams are typically made up of IT personnel, with input from various department heads and HR. When creating a business continuity plan for power outages, members might also include internal facilities employees and external power professionals.&lt;/p&gt;
 &lt;h3&gt;3. Conduct a business impact analysis (BIA) and risk assessment&lt;/h3&gt;
 &lt;p&gt;Risks vary by organization, so it is critical that the business continuity team &lt;a href="https://www.techtarget.com/searchdisasterrecovery/feature/Using-a-business-impact-analysis-BIA-template-A-free-BIA-template-and-guide"&gt;conduct a business impact analysis&lt;/a&gt; (BIA) and risk assessment before creating a plan. A BIA will identify the severity of different risks and how badly they are likely to affect key business processes. For example, the analysis might show that a loss of electricity to a data center's power source would cause significant downtime, resulting in compliance violations or data loss. That finding would affect the structure of a business continuity plan, making sure that the power source is a high priority.&lt;/p&gt;
 &lt;p&gt;A &lt;a href="https://www.techtarget.com/searchsecurity/definition/risk-assessment"&gt;risk assessment&lt;/a&gt;, on the other hand, determines the likelihood of different risks that might affect operations. When it comes to a power outage plan, key risks to assess would be aging infrastructure, local weather patterns &lt;a target="_blank" href="https://www.energy.gov/articles/department-energy-releases-report-evaluating-us-grid-reliability-and-security" rel="noopener"&gt;and the reliability&lt;/a&gt; of the region's electric grid.&lt;/p&gt;
 &lt;h3&gt;4. Prepare a list of all power resources&lt;/h3&gt;
 &lt;p&gt;When an outage strikes, you don't want to be scrambling to find the electric company's contact information. For a power outage business continuity plan, make sure someone has a copy of or access to this information. This could be a member of the business continuity team or HR.&lt;/p&gt;
 &lt;p&gt;Organizations should have contact information for &lt;a href="https://www.techtarget.com/searchdatacenter/feature/Data-center-power-infrastructure-essentials-prevent-downtime"&gt;key power resources&lt;/a&gt;, which might include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Primary and alternate electric utility companies.&lt;/li&gt; 
  &lt;li&gt;Emergency power system vendors.&lt;/li&gt; 
  &lt;li&gt;Fuel companies for backup power systems.&lt;/li&gt; 
  &lt;li&gt;Electricians and specialized contractors.&lt;/li&gt; 
  &lt;li&gt;Access to power system engineers and consultants.&lt;/li&gt; 
  &lt;li&gt;Access to suppliers of power protection resources.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;5. Establish procedures for responding to power outages&lt;/h3&gt;
 &lt;p&gt;Statistically, the likelihood of a short-term power interruption is fairly high. Short-term outages -- lasting roughly 15 minutes or less -- are nuisances, but they are unlikely to disrupt business operations. By contrast, longer-term power outages, lasting hours or days, require a more intensive business recovery process.&lt;/p&gt;
 &lt;p&gt;Organizations must establish several key procedures for resolving power outages. While some of the technical details will differ by organization, two key elements should be top priorities: people and power.&lt;/p&gt;
 &lt;p&gt;Confirm that employees are unharmed and commence &lt;a target="_blank" href="https://www.ready.gov/evacuation" rel="noopener"&gt;evacuation of personnel&lt;/a&gt; as quickly as possible. Establish outside meeting locations where employees can gather and receive further instructions from management and first responders, and where management can take headcounts.&lt;/p&gt;
 &lt;p&gt;Launching emergency power systems can help keep the organization running, unless other circumstances necessitate a physical evacuation. &lt;a href="https://www.techtarget.com/searchdatacenter/tip/Data-center-backup-power-systems-standards-to-address-downtime"&gt;Backup power and remote working&lt;/a&gt; make sense as long as the outage is relatively short and largely confined to a specific geographic area, such as a city or section of one, rather than a statewide or nationwide disruption.&lt;/p&gt;
 &lt;p&gt;For larger-area and extended power outages, the above strategies might be insufficient, so it is important to discuss short- and long-term power outage strategies periodically with senior management, facilities teams and utility companies.&lt;/p&gt;
 &lt;h3&gt;6. Establish recovery procedures post-outage&lt;/h3&gt;
 &lt;p&gt;Once power returns, the business will need time to recover: &lt;a href="https://www.techtarget.com/searchwindowsserver/definition/System-Restore"&gt;restart systems&lt;/a&gt;, reestablish &lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/How-to-maintain-network-continuity-in-a-DR-strategy"&gt;network connections&lt;/a&gt; and resume related activities. Check with building facilities personnel on the cause of the outage and determine remedial actions that can help prevent future occurrences.&lt;/p&gt;
 &lt;h3&gt;7. Establish and schedule testing activities&lt;/h3&gt;
 &lt;p&gt;Testing is key to any business continuity plan. &lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/Strengthen-a-business-continuity-plan-with-testing-exercises"&gt;IT teams must test&lt;/a&gt; incident response activities, backup and restore operations, and communications resources to make sure the company can return to business smoothly. When creating a business continuity plan for power outages, you must also consider physical infrastructures that might be affected and test their backup power systems.&lt;/p&gt;
 &lt;h3&gt;8. Schedule periodic assessments of power infrastructures&lt;/h3&gt;
 &lt;p&gt;Business continuity plans typically include strategies for power outage response and necessary resources. These include local backup power systems, spare power supplies for equipment racks and devices, spare power cables, power connectors and spare power outlets.&lt;/p&gt;
 &lt;p&gt;Periodically inspect the building infrastructure for power protection equipment, and be sure to include the following resources and strategies:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Lightning arrestors.&lt;/li&gt; 
  &lt;li&gt;Grounding.&lt;/li&gt; 
  &lt;li&gt;Power conditioners.&lt;/li&gt; 
  &lt;li&gt;Surge suppressors.&lt;/li&gt; 
  &lt;li&gt;Cabling with the proper rating for the intended usage.&lt;/li&gt; 
  &lt;li&gt;Backup power systems.&lt;/li&gt; 
  &lt;li&gt;Flashlights.&lt;/li&gt; 
  &lt;li&gt;Diverse cable routing in vertical and horizontal raceways and cable paths.&lt;/li&gt; 
  &lt;li&gt;Diverse power cable routing into the building.&lt;/li&gt; 
  &lt;li&gt;Service from two different utility power substations.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Make sure that emergency lighting is in place throughout each floor of the office and in stairwells. If an organization is a tenant in an office building or manufacturing facility, check with the facilities management team on their power protection activities.&lt;/p&gt;
&lt;/section&gt;                            
&lt;section class="section main-article-chapter" data-menu-title="Include power loss in BCDR and resilience plans"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Include power loss in BCDR and resilience plans&lt;/h2&gt;
 &lt;p&gt;Power loss is one of the principal risks and threats to business continuity. &lt;a href="https://www.techtarget.com/searchdisasterrecovery/feature/Sample-business-continuity-plan-template-for-SMBs-Free-download-and-guide"&gt;No matter the size&lt;/a&gt; or location of an organization, IT teams must prepare for several vulnerabilities to prevent or reduce the effects of potential power outages.&lt;/p&gt;
 &lt;p&gt;When developing BCDR, incident response and resilience plans, organizations must include power disruptions in risk assessments and BIAs. These analyses help identify ways to prepare for power outages and how to &lt;a href="https://www.techtarget.com/searchdisasterrecovery/tip/How-to-calculate-maximum-allowable-downtime"&gt;mitigate the severity of an outage to the business&lt;/a&gt;. These assessments will also show the organization the likelihood of different disruptions, helping it dedicate resources where they are needed most.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Paul Kirvan, FBCI, CISA, is an independent consultant and technical writer with more than 35 years of experience in business continuity, disaster recovery, resilience, cybersecurity, GRC, telecom and technical writing. &lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Loss of electric power presents a major risk to business continuity, and no organization is immune. Take these steps to create a solid business continuity plan for power outages.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/collab_a362306286.jpg</image>
            <link>https://www.techtarget.com/searchdisasterrecovery/tip/Make-a-power-outage-business-continuity-plan-with-these-tips</link>
            <pubDate>Tue, 19 Aug 2025 13:15:00 GMT</pubDate>
            <title>Building a power outage business continuity plan: Step by step</title>
        </item>
        <item>
            <body>&lt;p&gt;At this year's HPE Discover conference, it was clear that IT pros' demands for virtualization alternatives and infrastructure for AI have been heard.&lt;/p&gt; 
&lt;p&gt;Here are three important facts from the event that IT leaders must be aware of as we enter the second half of 2025.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Demands for virtualization alternatives"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Demands for virtualization alternatives&lt;/h2&gt;
 &lt;p&gt;When there is a massive uptick in demand, the vendor community will respond, though the response might take a few quarters.&lt;/p&gt;
 &lt;p&gt;Since Broadcom &lt;a href="https://www.techtarget.com/searchcloudcomputing/news/366591935/Broadcom-faces-challenges-with-latest-VMware-releases"&gt;adjusted VMware's licensing model&lt;/a&gt; 18 months ago, many organizations have investigated potential alternatives. While some businesses have started to integrate alternative options, the majority of &lt;a href="https://www.techtarget.com/searchvmware/news/366621112/VMware-dominance-remains-despite-challengers"&gt;organizations have decided to stand pat&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;At HPE Discover, CEO Antonio Neri highlighted the availability of HPE Private Cloud Business Edition with &lt;a href="https://www.techtarget.com/searchcloudcomputing/news/366623936/HPE-adds-Morpheus-Data-to-KVM-hypervisor-for-enterprises"&gt;HPE Morpheus VM Essentials&lt;/a&gt;, specifically calling out the potential to save 90% on virtualization costs.&lt;/p&gt;
 &lt;p&gt;Morpheus VM Essentials enables users to provision and manage both VMware-based VMs and HVM-based VMs -- HVM being HPE's own hypervisor built on the Kernel-based Virtual Machine -- from a single interface. With HPE Private Cloud Business Edition, HPE provides Morpheus VM Essentials as part of a private cloud built on the HPE Alletra disaggregated hyperconverged infrastructure platform.&lt;/p&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchitoperations/news/366624593/Red-Hat-OpenShift-Virtualization-roadmap-chases-VMware"&gt;Red Hat&lt;/a&gt;, Microsoft, Nutanix and Verge.io also provide &lt;a href="https://www.techtarget.com/searchdatacenter/news/366618713/VMware-alternative-vendors-see-2025-as-year-to-make-a-mark"&gt;alternative hypervisor options&lt;/a&gt;. If your organization is interested in diversifying its hypervisor environment, it is important to recognize that this space is evolving quickly. Keep an eye out, as additional alternatives are likely to emerge, and their capabilities should increase over the next several quarters.&lt;/p&gt;
 &lt;p&gt;When evaluating alternatives, security, scalability and cost are all key concerns. But, more importantly, understand how easily an alternative can integrate into your existing environment, while also providing greater agility to meet future demands, such as supporting hybrid cloud options and container-based workloads.&lt;/p&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="IT infrastructure for AI on the rise"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;IT infrastructure for AI on the rise&lt;/h2&gt;
 &lt;p&gt;According to 2025 hybrid cloud research from Enterprise Strategy Group, now part of Omdia, 91% of organizations say they are making or planning to make significant infrastructure investments to support new AI initiatives. This new wave of investment in AI is transforming vendor roadmaps and priorities.&lt;/p&gt;
 &lt;p&gt;One of the earliest signs of shifting vendor roadmaps appeared on the server side, with vendors adding the ability to integrate more GPU accelerators into a single system, such as the HPE Compute XD690, to improve the density of the deployment for training and inference. On the storage side, nearly every vendor now offers a high-performance, highly scalable storage option in their portfolios to support AI demands.&lt;/p&gt;
 &lt;p&gt;HPE highlighted the HPE Alletra Storage MP X10000, which was &lt;a href="https://www.techtarget.com/searchdatacenter/news/366615895/Object-storage-VM-offerings-emerge-for-HPE-GreenLake"&gt;announced late last year&lt;/a&gt;. This storage infrastructure offers what you expect: a highly scalable, high-performance software-defined storage system that offers object storage services to support the anticipated large volumes of unstructured data required for AI training and inference. In addition to the foundational specifications, however, HPE adds the ability to integrate a prevalidated portfolio of generative AI models designed to tag the metadata of object data inline on ingest.&lt;/p&gt;
 &lt;p&gt;Quality data is essential to success in AI. Prepping the data to identify and tag the right data to train or augment models is a complex and time-consuming activity. With the ability to integrate prevalidated models, the X10000 should help simplify the data preparation process to support internal AI projects. While a similar process could be done using external systems leveraging similar generative AI models, this integrated approach should simplify deployment and reduce network bandwidth once in place, since the tagging happens inline within the system itself.&lt;/p&gt;
 &lt;p&gt;Beyond the X10000, HPE also announced a &lt;a href="https://www.techtarget.com/searchenterpriseai/news/366626405/HPE-beefs-up-AI-factory-fueled-offerings-with-Nvidia-upgrades"&gt;new generation of its HPE Private Cloud AI&lt;/a&gt;, which provides turnkey AI factory infrastructure for enterprise environments. Importantly, this technology can integrate with the previous generation of HPE Private Cloud AI.&lt;/p&gt;
 &lt;p&gt;Given how new AI environments are, there is an open question: Is a turnkey approach with predefined configurations, which should make deployment simpler, superior, or is a more customized approach that tailors the hardware to the use case preferable for improving ROI? With that consideration in mind, HPE also offers options for a more customized deployment.&lt;/p&gt;
 &lt;p&gt;For IT decision-makers investing in AI, the takeaway is that success requires more than a GPU investment. The way your organization manages its data environment to support the AI environment is a critical design factor in ensuring success in AI. As businesses become more mature in their use of AI, their needs will move beyond compute and storage.&lt;/p&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="Networking essential for AI success"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Networking essential for AI success&lt;/h2&gt;
 &lt;p&gt;In his keynote address, Neri listed networking, along with AI and hybrid cloud, as the three pillars of HPE's corporate strategy. The strategic importance of networking is likely fueling &lt;a href="https://www.techtarget.com/searchnetworking/news/366626897/DOJ-clears-road-for-HPEs-14B-Juniper-Networks-acquisition"&gt;HPE's plans to acquire Juniper Networks&lt;/a&gt; to augment its networking portfolio.&lt;/p&gt;
 &lt;p&gt;Networking obviously plays a critical role in ensuring the distributed application environment operates properly. As organizations scale their internal AI initiatives, &lt;a href="https://www.techtarget.com/searchnetworking/tip/Building-networks-for-AI-workloads"&gt;modernizing networking infrastructure&lt;/a&gt; has become increasingly vital to ensuring that the surrounding data pipeline infrastructure -- storage and networking -- can support the needs of the accelerator technology. In your AI architecture investment plans, networking should be a critical consideration.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Scott Sinclair is practice director with TechTarget's Enterprise Strategy Group, now part of Omdia, covering the storage industry.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>IT vendors continue to deliver VMware alternatives and expand infrastructure systems they offer to support enterprise AI workloads.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/3.jpg</image>
            <link>https://www.techtarget.com/searchdatacenter/opinion/Virtualization-alternatives-and-AI-Lessons-from-HPE-Discover</link>
            <pubDate>Mon, 07 Jul 2025 11:51:00 GMT</pubDate>
            <title>Virtualization alternatives and AI: Lessons from HPE Discover</title>
        </item>
        <item>
            <body>&lt;p&gt;With all the backup-focused products available today, there's no excuse not to back up systems and data. The key to making all of this work is having a backup schedule.&lt;/p&gt; 
&lt;p&gt;Knowing how to create a backup schedule and develop a thorough scheduling strategy is essential.&lt;/p&gt; 
&lt;p&gt;The principal goal of a backup schedule is establishing time frames to back up an entire system, multiple systems, data and databases, network files, and other critical systems and data.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Why is a backup schedule needed?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why is a backup schedule needed?&lt;/h2&gt;
 &lt;p&gt;Backup schedules are essential IT activities, as they ensure the following vital functions are addressed:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Disaster recovery.&lt;/b&gt; This task involves recovering and restarting critical systems, VMs, data files and databases.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Defining start times and completion times for all regular backups.&lt;/b&gt; The schedule must include all data backup activities and testing activities. Any backup tools and network resources used should also be specified.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Restoring files that have been accidentally deleted.&lt;/b&gt; We have all experienced this at some point, and it helps to have a safety net if work files or other critical data are erased.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Gauging the effect of backup activities on production activities.&lt;/b&gt; A backup schedule can help keep systems operating at peak performance and allow backups to occur outside of production schedules.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Satisfying compliance and audit requirements.&lt;/b&gt; Schedules can be essential sources of evidence for organizations that must comply with data protection regulations and standards and be periodically audited for general IT controls.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Complying with recovery time and recovery point objectives.&lt;/b&gt; &lt;a href="https://www.techtarget.com/searchstorage/feature/What-is-the-difference-between-RPO-and-RTO-from-a-backup-perspective"&gt;RTO and RPO&lt;/a&gt; are essential for managing data backup and recovery; schedules can demonstrate that these metrics are being addressed.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/FIL6L7f32Bs?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Key issues in backup scheduling"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Key issues in backup scheduling&lt;/h2&gt;
 &lt;p&gt;Although scheduling backups might seem like a no-brainer, several process components must be addressed within IT departments and reviewed with business unit leaders and senior management.&lt;/p&gt;
 &lt;p&gt;Addressing these items will ensure a comprehensive, auditable schedule that is easily understood and can be implemented by designated data backup team members and others in an emergency. Following is a list of key concerns for &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Best-practices-10-basic-steps-for-better-backup"&gt;planning and executing data backups&lt;/a&gt;.&lt;/p&gt;
 &lt;h3&gt;1. What needs to be backed up?&lt;/h3&gt;
 &lt;p&gt;Data and system owners should specify what should be backed up and how frequently. Normally, data administrators should back up everything -- or specific parts -- in the IT environment with a frequency that is acceptable to business unit leaders and cost-effective to operate.&lt;/p&gt;
 &lt;p&gt;Organizations should also consider the cost of backups and their &lt;a href="https://www.techtarget.com/searchdatabackup/tip/How-to-improve-backup-performance"&gt;effects on system -- and company -- performance&lt;/a&gt;. For example, it might make sense to replicate the entire system or critical portions of the system and specific individual files and databases to an alternate storage medium and perform incremental backups to that environment.&lt;/p&gt;
 &lt;h3&gt;2. Location of systems and files to be backed up&lt;/h3&gt;
 &lt;p&gt;Identify whether backups will be stored on an on-site server, on a dedicated storage device or in &lt;a href="https://www.techtarget.com/searchdatabackup/definition/cloud-backup"&gt;cloud-based backup&lt;/a&gt;. This can be included in the backup schedule and should also be specified in data backup policies and procedures, especially from compliance and audit perspectives.&lt;/p&gt;
 &lt;h3&gt;3. Who performs backups?&lt;/h3&gt;
 &lt;p&gt;A data backup administrator's activities should be governed by their discussions with all systems and data owners. If individual users back up their data files, this should be addressed by an IT policy for data management. Other IT employees should be identified as potential backup staff to the primary backup administrator(s). This might involve internal &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Backup-training-for-general-IT-admins"&gt;training from the data admins&lt;/a&gt; and vendors whose technology resources are used for backups.&lt;/p&gt;
 &lt;h3&gt;4. Time frames for backups&lt;/h3&gt;
 &lt;p&gt;Points in time when data and system backups can occur should be defined based on business requirements. For example, some systems and data files might need to be backed up as soon as they're modified. This reflects their criticality to the business. Full backups are often performed after business hours and over the weekend. More frequent backups are governed by the business, and their execution might depend on specific systems and network resources.&lt;/p&gt;
 &lt;p&gt;Several variables influence backup time frames:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;System or service that performs the backups.&lt;/li&gt; 
  &lt;li&gt;Location of the backups -- for example, &lt;a href="https://www.techtarget.com/searchdatabackup/feature/Cloud-backup-vs-local-traditional-backup-advantages-disadvantages"&gt;on-site or remote&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;Time of day for performing backups.&lt;/li&gt; 
  &lt;li&gt;Use of mounted or unmounted file systems.&lt;/li&gt; 
  &lt;li&gt;RPO/RTO metrics to be satisfied.&lt;/li&gt; 
  &lt;li&gt;Requirements as specified by system/data owners and senior management.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Backup administrators should periodically consult with system owners on these criteria to ensure the appropriateness of backup policies, procedures and schedules.&lt;/p&gt;
 &lt;h3&gt;5. How frequently do systems and data files need to be backed up?&lt;/h3&gt;
 &lt;p&gt;Some files, such as customer data files, are updated often during the day, requiring admins to back up these files more frequently. They might consider backing them up at the end of each day -- factoring in all incremental revisions -- so that an up-to-date backup is saved.&lt;/p&gt;
 &lt;p&gt;Other situations might require files to be backed up immediately so they are always current. Other files might not need to be backed up regularly and, as such, could be candidates for &lt;a href="https://www.techtarget.com/searchdatabackup/tip/How-to-keep-physical-backup-media-storage-safe"&gt;alternative storage, such as tape&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;RPO requirements might also influence the frequency of backups. For example, if the RPO for certain critical files is 10 seconds or less, the backups will likely be more frequent, and the technology used for those backups -- e.g., data mirroring, data replication, high-speed low-latency networks -- will also need to be considered.&lt;/p&gt;
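&lt;p&gt;As a rough illustration of how an RPO constrains scheduling -- the function name and safety margin below are hypothetical, not from any backup product -- the interval between backups can never exceed the RPO, and in practice should sit well below it:&lt;/p&gt;

```python
# Sketch: worst-case data loss equals the time since the last completed
# backup, so the backup interval must stay below the RPO.

def max_backup_interval_seconds(rpo_seconds: float, safety_margin: float = 0.5) -> float:
    """Return a conservative backup interval for a given RPO.

    A margin below 1.0 leaves headroom for backup runtime and retries.
    """
    if rpo_seconds > 0:
        return rpo_seconds * safety_margin
    raise ValueError("RPO must be positive")

# A 10-second RPO forces near-continuous protection (mirroring/replication),
# while a 24-hour RPO can be met with a nightly backup job.
print(max_backup_interval_seconds(10))          # 5.0 seconds
print(max_backup_interval_seconds(24 * 3600))   # 43200.0 seconds (12 hours)
```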
 &lt;p&gt;System backups might need a different schedule than the one used for data files and databases. Backups should occur any time one or more parameters in a system change during daily operations. This suggests a more ad hoc approach to system backups; each organization must establish those requirements.&lt;/p&gt;
 &lt;h3&gt;6. Restoration of data from backups&lt;/h3&gt;
 &lt;p&gt;Backups are created to ensure that if systems or data must be recovered or restored, those resources are as current as possible. Organizations should consider the criticality of systems and files to establish backup and restoration priority. These items should be factored into the backup schedule.&lt;/p&gt;
 &lt;h3&gt;7. Location of restored systems and data&lt;/h3&gt;
 &lt;p&gt;It might be necessary to restore systems or data to an alternate platform in an emergency -- a key consideration for disaster recovery. Cloud-based platforms are increasingly popular approaches to this requirement. MSPs that &lt;a href="https://www.msptoday.com/topics/msp-today/articles/458031-top-3-msp-backup-solutions-2024.htm" target="_blank" rel="noopener"&gt;specialize&lt;/a&gt; in data backup and storage are also viable alternatives. The key is to locate backed-up resources at a sufficient distance from the firm's primary location to reduce the risk of loss at alternate storage locations.&lt;/p&gt;
&lt;/section&gt;                        
&lt;section class="section main-article-chapter" data-menu-title="Types of backups and examples"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Types of backups and examples&lt;/h2&gt;
 &lt;p&gt;The following are &lt;a href="https://www.techtarget.com/searchdatabackup/feature/Full-incremental-or-differential-How-to-choose-the-correct-backup-type"&gt;typical types of backups&lt;/a&gt;:&lt;/p&gt;
 &lt;h3&gt;Day zero backups&lt;/h3&gt;
 &lt;p&gt;A day zero backup is performed when a new system is fully installed and accepted by the system owner. It establishes the initial baseline for future updates.&lt;/p&gt;
 &lt;h3&gt;Full backups&lt;/h3&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchdatabackup/definition/Full-Backup"&gt;Full backups&lt;/a&gt; store all the systems and files within the system, or selected systems and files as defined by the system/data owner. Companies should perform these regularly, such as once a week, and consider an additional full backup whenever a significant change to the IT infrastructure occurs.&lt;/p&gt;
 &lt;h3&gt;Incremental backups&lt;/h3&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchdatabackup/definition/incremental-backup"&gt;Incremental backups&lt;/a&gt; create a copy of all the files that have changed since a previous backup.&lt;/p&gt;
 &lt;h3&gt;Differential backups&lt;/h3&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchdatabackup/definition/differential-backup"&gt;Differential backups&lt;/a&gt; create a copy of all the files that have changed since the last full backup.&lt;/p&gt;
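&lt;p&gt;The distinction can be made concrete with a small sketch -- the file names and timestamps below are invented for illustration. An incremental backup copies only files changed since the most recent backup of any type, while a differential copies everything changed since the last full backup:&lt;/p&gt;

```python
# Sketch: which files each backup type copies, given last-modified times.

def files_to_copy(files: dict, last_full: float, last_any: float, kind: str) -> set:
    """files maps file name to its last-modified timestamp."""
    # Differential measures from the last full; incremental from the last backup.
    baseline = last_full if kind == "differential" else last_any
    return {name for name, mtime in files.items() if mtime > baseline}

files = {"orders.db": 5, "staff.xlsx": 2, "logo.png": 1}
# A full backup ran at t=0; an incremental ran at t=3.
print(files_to_copy(files, last_full=0, last_any=3, kind="incremental"))
# {'orders.db'} -- only changes since t=3
print(files_to_copy(files, last_full=0, last_any=3, kind="differential"))
# all three files -- every change since the full backup at t=0
```

&lt;p&gt;This is also why differentials grow steadily until the next full backup runs, while incrementals stay small but take longer to restore from.&lt;/p&gt;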
 &lt;p&gt;Examples of frequently used systems and files for backup scheduling include:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Individual user files.&lt;/li&gt; 
  &lt;li&gt;Databases.&lt;/li&gt; 
  &lt;li&gt;VMs.&lt;/li&gt; 
  &lt;li&gt;Password and group files.&lt;/li&gt; 
  &lt;li&gt;Accounting files.&lt;/li&gt; 
  &lt;li&gt;Configuration files.&lt;/li&gt; 
  &lt;li&gt;Terminal and port files.&lt;/li&gt; 
  &lt;li&gt;Network files.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/whatis-pillar_full_incremental_differential_backup.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/whatis-pillar_full_incremental_differential_backup_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/whatis-pillar_full_incremental_differential_backup_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/whatis-pillar_full_incremental_differential_backup.png 1280w" alt="Diagram comparing how full vs. incremental vs. differential data backups work." height="621" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Data backup administrators rely on full and partial backups to keep information safe.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;             
&lt;section class="section main-article-chapter" data-menu-title="Best practices for developing and implementing a backup schedule"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Best practices for developing and implementing a backup schedule&lt;/h2&gt;
 &lt;p&gt;Setting up a backup schedule requires a detailed and accurate list of all systems, data files and databases to be backed up. The schedule should be prepared on a platform that facilitates timely changes, as data and system backup schedules can be very dynamic. Use &lt;a href="https://www.techtarget.com/whatis/definition/recovery-point-objective-RPO"&gt;RPO values&lt;/a&gt; to assist with creating the schedules, as these values determine how frequently backups must be scheduled. Work with the organization's backup software vendor to assist with schedule preparation. The same applies to external resources, such as MSPs and cloud-based backup and storage firms.&lt;/p&gt;
 &lt;p&gt;Follow these steps to establish a reliable backup schedule to protect data:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;Define the type(s) of backup needed: day zero, full, incremental or differential.&lt;/li&gt; 
  &lt;li&gt;Determine how frequently these backup jobs will be performed.&lt;/li&gt; 
  &lt;li&gt;Establish the retention period for each type of backup. A common strategy is to perform incremental backups daily and full backups weekly. You will need to account for the size of the data and the resources available to perform these backups.&lt;/li&gt; 
  &lt;li&gt;Once the backup schedule is defined, use &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Automated-backup-How-it-works-and-why-you-should-use-it"&gt;backup software to automate the process&lt;/a&gt; and promote consistency and reliability.&lt;/li&gt; 
  &lt;li&gt;Regularly monitor the backup schedule by checking backup logs for errors and &lt;a href="https://www.techtarget.com/searchdatabackup/answer/What-is-a-good-backup-test-frequency"&gt;testing restores to confirm data can be recovered&lt;/a&gt;. This should catch potential issues before they escalate.&lt;/li&gt; 
 &lt;/ol&gt;
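&lt;p&gt;The first steps above can be sketched as a simple schedule definition that an automation tool would consume -- the structure, names and retention values here are illustrative, not a real product's format:&lt;/p&gt;

```python
from datetime import timedelta

# Steps 1-3: define backup types, frequency and retention in one place.
SCHEDULE = [
    {"kind": "full",        "every": timedelta(weeks=1), "retain": timedelta(weeks=8)},
    {"kind": "incremental", "every": timedelta(days=1),  "retain": timedelta(weeks=2)},
]

def jobs_per_month(schedule) -> dict:
    """Rough job counts over a 28-day period, to help size backup windows."""
    month = timedelta(days=28)
    return {job["kind"]: month // job["every"] for job in schedule}

print(jobs_per_month(SCHEDULE))  # {'full': 4, 'incremental': 28}
```

&lt;p&gt;Keeping the schedule in a single machine-readable definition like this also supports steps 4 and 5: the same data can drive the automation tool and the monitoring checks.&lt;/p&gt;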
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/data_backup-backup-schedule.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/data_backup-backup-schedule_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/data_backup-backup-schedule_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/data_backup-backup-schedule.png 1280w" alt="Image of a sample backup schedule." height="387" width="559"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;A backup schedule can include a few full data backups and many incremental backups over a month.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Some organizations might need to perform incremental backups several times daily, while others might need them less frequently. The increased use of VMs makes efficient and timely backups even more important. In a disaster, enterprises must recover and restore quickly to resume operations with minimal downtime.&lt;/p&gt;
 &lt;p&gt;&lt;b&gt;Editor's note:&lt;/b&gt; &lt;i&gt;This article was updated in April 2025 to include additional best practices and to improve the reader experience.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Paul Kirvan, FBCI, CISA, is an independent consultant and technical writer with more than 35 years of experience in business continuity, disaster recovery, resilience, cybersecurity, GRC, telecom and technical writing.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>For daily work files, databases and mission-critical systems and data, the more frequent the backup, the better. But how often should you back up all your data?</description>
            <image>https://cdn.ttgtmedia.com/visuals/searchDataManagement/master_data_strategy/datamanagement_article_023.jpg</image>
            <link>https://www.techtarget.com/searchdatabackup/tip/Backup-scheduling-best-practices-to-ensure-availability</link>
            <pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
            <title>Backup scheduling best practices to ensure availability</title>
        </item>
        <item>
            <body>&lt;p&gt;Total cost of ownership (TCO) is an estimation of the expenses associated with purchasing, deploying, managing, using and retiring &lt;a href="https://www.techtarget.com/searchcio/definition/IT-asset-management-information-technology-asset-management"&gt;IT assets&lt;/a&gt;, such as a product or piece of equipment.&lt;/p&gt; 
&lt;p&gt;TCO, or actual cost, quantifies the cost of the purchase across the product's entire lifecycle. Therefore, it offers a more accurate basis for determining the value -- cost vs. return on investment (&lt;a href="https://www.techtarget.com/searchcio/definition/ROI"&gt;ROI&lt;/a&gt;) -- of an investment than the purchase price alone.&lt;/p&gt; 
&lt;p&gt;TCO can be calculated as the initial purchase price plus costs of operation across the asset's lifespan. It is especially critical in &lt;a href="https://www.techtarget.com/searchdatacenter/definition/IT"&gt;IT&lt;/a&gt;, manufacturing, &lt;a href="https://www.techtarget.com/searcherp/definition/supply-chain-management-SCM"&gt;supply chain management&lt;/a&gt; and &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/cloud-computing"&gt;cloud computing&lt;/a&gt;, where operational costs often exceed initial purchase costs.&lt;/p&gt; 
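&lt;p&gt;As a worked illustration of that formula -- the figures below are invented, not benchmarks -- even a modest annual operating cost can exceed the purchase price over an asset's lifespan:&lt;/p&gt;

```python
# Sketch: TCO = purchase price + operating costs over the asset's lifespan.

purchase_price = 10_000          # server hardware (hypothetical figure)
annual_operating_cost = 3_500    # power, support, admin time (hypothetical)
lifespan_years = 5

tco = purchase_price + annual_operating_cost * lifespan_years
print(tco)  # 27500 -- operating costs outweigh the purchase price here
```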
&lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineImages/ROI_example.jpg"&gt;
 &lt;img data-src="https://www.techtarget.com/rms/onlineImages/ROI_example_mobile.jpg" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineImages/ROI_example_mobile.jpg 960w,https://www.techtarget.com/rms/onlineImages/ROI_example.jpg 1280w" alt="graphic illustrating how to calculate return on investment in an acquisition" height="226" width="519"&gt;
 &lt;figcaption&gt;
  &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Total cost of ownership -- aka money spent -- is a factor to consider when determining return on investment.
 &lt;/figcaption&gt;
 &lt;div class="main-article-image-enlarge"&gt;
  &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
 &lt;/div&gt;
&lt;/figure&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What factors determine TCO?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;&lt;a name="_gb22my6sfg0k"&gt;&lt;/a&gt;What factors determine TCO?&lt;/h2&gt;
 &lt;p&gt;Overall TCO includes direct and indirect expenses, as well as some intangible ones that may be assigned a monetary value. TCO includes the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Direct costs, e.g., purchase price, installation, maintenance.&lt;/li&gt; 
  &lt;li&gt;Indirect costs, e.g., training, downtime, performance inefficiencies.&lt;/li&gt; 
  &lt;li&gt;Intangible costs, e.g., employee productivity loss, compliance risks.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;For example, a &lt;a href="https://www.techtarget.com/whatis/definition/server"&gt;server&lt;/a&gt;'s TCO might include a high purchase price, but its indirect costs could be offset by inexpensive ongoing IT support and low &lt;a href="https://www.techtarget.com/searchitoperations/definition/systems-management"&gt;systems management&lt;/a&gt; time thanks to a user-friendly interface.&lt;/p&gt;
 &lt;p&gt;TCO factors in the costs accumulated from purchase to decommissioning of the asset.&lt;/p&gt;
 &lt;p&gt;For a &lt;a href="https://www.techtarget.com/searchdatacenter/definition/data-center"&gt;data center&lt;/a&gt; server, for example, this means initial acquisition price, repairs, maintenance costs, upgrades, service or support contracts, network integration, security, &lt;a href="https://www.techtarget.com/searchcio/definition/software-license"&gt;software licenses&lt;/a&gt; and employee training.&lt;/p&gt;
 &lt;p&gt;It can even account for the credit terms on which the company purchased the product. Through analysis, the purchasing manager might assign a monetary value to intangible costs, such as systems management time, electricity used, &lt;a href="https://www.techtarget.com/whatis/definition/uptime-and-downtime"&gt;downtime&lt;/a&gt;, insurance and other overhead.&lt;/p&gt;
 &lt;p&gt;Total cost of ownership must be compared to total benefits of ownership to determine the viability of a purchase.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/cio-it_asset_lifecycle_management.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/cio-it_asset_lifecycle_management_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/cio-it_asset_lifecycle_management_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/cio-it_asset_lifecycle_management.png 1280w" alt="graphic illustrating the lifecycle of an IT asset from planning to retiring the asset" height="560" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Total cost of ownership of an IT product covers not just the initial purchase price, but costs of operating that product across its lifespan -- when it is retired by an organization.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;&lt;a name="_wkzp0xj81m3s"&gt;&lt;/a&gt;New TCO considerations today&lt;/h3&gt;
  &lt;p&gt;In addition to the aforementioned criteria, total cost of ownership today may also include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;Cloud computing and software as a service (&lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/Software-as-a-Service"&gt;SaaS&lt;/a&gt;) costs 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;SaaS subscription models introduce unpredictable long-term costs.&lt;/li&gt; 
    &lt;li&gt;Hidden fees for data storage, &lt;a href="https://www.techtarget.com/searchapparchitecture/definition/application-program-interface-API"&gt;application programming interface&lt;/a&gt; calls and &lt;a href="https://www.techtarget.com/searchdatacenter/definition/vendor-lock-in"&gt;vendor lock-in&lt;/a&gt; affect TCO.&lt;/li&gt; 
    &lt;li&gt;&lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/8-key-steps-of-a-cloud-exit-strategy"&gt;Cloud egress&lt;/a&gt; fees -- costs to move data out of a cloud provider -- are often overlooked.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;Artificial intelligence (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence"&gt;AI&lt;/a&gt;)-powered cost optimization 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;AI can help to predict maintenance costs and optimize &lt;a href="https://www.techtarget.com/searchdatacenter/definition/infrastructure"&gt;IT infrastructure&lt;/a&gt; spending.&lt;/li&gt; 
    &lt;li&gt;&lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/predictive-analytics"&gt;Predictive analytics&lt;/a&gt; helps forecast vendor pricing changes.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;&lt;a href="https://www.techtarget.com/whatis/definition/business-sustainability"&gt;Sustainability&lt;/a&gt; and &lt;a href="https://www.techtarget.com/searchcio/definition/green-IT-green-information-technology"&gt;green IT&lt;/a&gt; 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Energy-efficient servers, cloud data centers and carbon footprint reductions affect TCO.&lt;/li&gt; 
    &lt;li&gt;Regulatory requirements for sustainability reporting add compliance costs.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;            
&lt;section class="section main-article-chapter" data-menu-title="The challenges in calculating TCO"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;&lt;a name="_evpq73ihsncd"&gt;&lt;/a&gt;The challenges in calculating TCO&lt;/h2&gt;
  &lt;p&gt;There are several methodologies and &lt;a href="https://www.techtarget.com/searchapparchitecture/definition/software"&gt;software&lt;/a&gt; tools to calculate total cost of ownership, but the process is not perfect. Many enterprises fail to standardize on a single methodology, so they cannot base purchasing decisions on uniform information.&lt;/p&gt;
 &lt;p&gt;Another problem is that it is difficult to determine the scope of &lt;a href="https://www.techtarget.com/whatis/definition/OPEX-operational-expenditure"&gt;operating costs&lt;/a&gt; for any single piece of IT equipment. Some hidden cost factors are easily overlooked, such as depreciation and warranty, or inaccurately compared from one product to another.&lt;/p&gt;
  &lt;p&gt;For example, support costs on one server may include the cost of spare parts. This may make support cost more than it does on another server, but it can eliminate the acquisition cost of buying a replacement system.&lt;/p&gt;
 &lt;p&gt;Cost of ownership analysis generally doesn't anticipate unpredictable rising costs over time -- for example, if &lt;a href="https://www.techtarget.com/searchnetworking/tip/How-to-plan-and-start-a-network-upgrade"&gt;upgrade&lt;/a&gt; part costs jump substantially more than expected due to a distributor change.&lt;/p&gt;
 &lt;p&gt;TCO calculations cannot account for the availability of upgrades and services or the impact of vendor relationships.&lt;/p&gt;
 &lt;p&gt;If a software vendor cancels a particular functionality after three years, no longer stocks parts after five years or ends support for certain software, the enterprise may be subject to unexpected and significant additional costs, which could drive TCO far beyond its initial estimate.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineImages/converged_infras-hci_tco_checklist-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineImages/converged_infras-hci_tco_checklist-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineImages/converged_infras-hci_tco_checklist-f_mobile.png 960w,https://www.techtarget.com/rms/onlineImages/converged_infras-hci_tco_checklist-f.png 1280w" alt="Text graphic outlining the various cost elements associated with an infrastructure project" height="498" width="559"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;To understand the range of factors that go into any TCO calculation, consider what goes into TCO for hyperconverged infrastructure when compute, network and storage are bundled in a single platform.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="TCO in cloud vs. on-premises IT infrastructure"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;&lt;a name="_6ahm1p82matk"&gt;&lt;/a&gt;TCO in cloud vs. on-premises IT infrastructure&lt;/h2&gt;
  &lt;p&gt;Building on the factors covered in the previous section, the following table breaks down TCO factors for a &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Evaluate-on-premises-vs-cloud-computing-pros-and-cons"&gt;cloud versus on-premises&lt;/a&gt; infrastructure.&lt;/p&gt;
 &lt;table class="main-article-table"&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Factor&lt;/td&gt; 
    &lt;td&gt;Cloud (SaaS, IaaS, PaaS)&lt;/td&gt; 
    &lt;td&gt;On-premises (servers, data centers)&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Upfront cost&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;Low (subscription-based)&lt;/td&gt; 
    &lt;td&gt;High (Capex investment)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Maintenance cost&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;Managed by provider&lt;/td&gt; 
    &lt;td&gt;In-house IT team&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;High (elastic)&lt;/td&gt; 
    &lt;td&gt;Limited&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Energy cost&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;Included in fees&lt;/td&gt; 
    &lt;td&gt;Paid separately&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Vendor lock-in risk&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;High&lt;/td&gt; 
    &lt;td&gt;Low&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Long-term TCO&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;Variable&lt;/td&gt; 
    &lt;td&gt;Predictable&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt;
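The cost pattern in the table -- low upfront but recurring cloud fees versus capex-heavy but predictable on-premises spend -- can be illustrated with a small sketch. All dollar figures here are invented for illustration only; real comparisons need the full range of factors discussed above.

```python
# Hypothetical comparison of cumulative cloud vs. on-premises spend over
# five years, mirroring the table's low-upfront/recurring vs.
# high-upfront/predictable cost pattern. All figures are invented.

def cumulative_cost(upfront, annual, years):
    """Running total of spend at the end of each year."""
    return [upfront + annual * y for y in range(1, years + 1)]

cloud = cumulative_cost(upfront=0, annual=40_000, years=5)          # subscription
on_prem = cumulative_cost(upfront=120_000, annual=12_000, years=5)  # capex + ops

# First year, if any, in which cumulative cloud spend exceeds on-premises.
crossover = next(
    (y for y, (c, o) in enumerate(zip(cloud, on_prem), start=1) if c > o), None
)
print(crossover)  # 5
```

With these invented numbers, the subscription model costs less for four years and then overtakes the capex purchase in year five, which is why long-term cloud TCO is listed as "variable" in the table.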
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Best practices to optimize TCO calculations"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Best practices to optimize TCO calculations&lt;/h2&gt;
 &lt;p&gt;Enterprise managers and purchasing decision-makers complete cost analyses for multiple options and then compare TCO to determine overall costs and, ultimately, the lowest long-term cost. The following &lt;a href="https://www.techtarget.com/searchsoftwarequality/definition/best-practice"&gt;best practices&lt;/a&gt; can help organizations optimize TCO:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;a name="_xvdr86npgg5z"&gt;&lt;/a&gt;Use AI and predictive analytics 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;AI-powered tools can analyze historical spending patterns and forecast future costs.&lt;/li&gt; 
    &lt;li&gt;AI-driven &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Cloud-cost-management-tools-you-should-know-about"&gt;cloud cost management platforms&lt;/a&gt; -- e.g., Apptio, Cloudability -- help optimize cloud TCO.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;&lt;a name="_tl521w723gwf"&gt;&lt;/a&gt;Factor in &lt;a href="https://www.techtarget.com/whatis/definition/environmental-social-and-governance-ESG"&gt;environmental, sustainability and governance&lt;/a&gt; costs 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Track &lt;a href="https://www.techtarget.com/sustainability/feature/Understand-greenhouse-gas-emissions-vs-carbon-emissions"&gt;carbon emissions&lt;/a&gt; from IT infrastructure.&lt;/li&gt; 
    &lt;li&gt;Invest in &lt;a href="https://www.techtarget.com/searchdatacenter/tip/Four-ways-to-reduce-data-center-power-consumption"&gt;energy-efficient&lt;/a&gt; hardware to lower long-term operational costs.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;&lt;a name="_5vdc8j2163ef"&gt;&lt;/a&gt;Consider the true cost of vendor lock-in 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Assess long-term &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/cloud-migration"&gt;migration&lt;/a&gt; costs before committing to a vendor.&lt;/li&gt; 
    &lt;li&gt;Understand how proprietary solutions limit future flexibility.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;&lt;a name="_2royozww7454"&gt;&lt;/a&gt;Use a standardized TCO framework 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Gartner's TCO model is used for &lt;a href="https://www.techtarget.com/searchdatacenter/definition/infrastructure"&gt;IT infrastructure&lt;/a&gt; and cloud migration.&lt;/li&gt; 
    &lt;li&gt;ISO 15686-5 is used for building lifecycle cost analysis.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
 &lt;/ul&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/FpNVy_nPTbY?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
 &lt;p&gt;&lt;em&gt;Not sure what it will cost to run workloads in the cloud? Discover the key variables to &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/How-to-calculate-your-cloud-TCO"&gt;calculate cloud total cost of ownership&lt;/a&gt; when comparing on-premises deployment with cloud deployment to avoid costly surprises later on.&lt;/em&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Total cost of ownership (TCO) is an estimation of the expenses associated with purchasing, deploying, managing, using and retiring IT assets, such as a product or piece of equipment.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/5.jpg</image>
            <link>https://www.techtarget.com/searchdatacenter/definition/TCO</link>
            <pubDate>Thu, 10 Apr 2025 09:00:00 GMT</pubDate>
            <title>What is total cost of ownership (TCO)?</title>
        </item>
        <item>
            <body>&lt;p&gt;A green data center is a repository for the storage, processing, management and dissemination of data in which the physical space and the mechanical and electrical subsystems are designed to maximize energy efficiency and minimize the environmental impact. The construction and operation of a green data center includes advanced technologies and strategies.&lt;/p&gt; 
&lt;p&gt;The following are examples of some of the technologies and strategies used in green data center initiatives:&lt;/p&gt; 
&lt;ul class="default-list"&gt; 
 &lt;li&gt;Minimized building footprints with physical spaces optimized for effective airflow.&lt;/li&gt; 
 &lt;li&gt;Low-emission building materials, carpets and paints.&lt;/li&gt; 
 &lt;li&gt;Sustainable landscaping.&lt;/li&gt; 
 &lt;li&gt;Extensive use of virtualization to maximize hardware utilization while reducing space and heat generation.&lt;/li&gt; 
 &lt;li&gt;Use of AI platforms to analyze data center operations and optimize data center infrastructure management (DCIM) techniques.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.techtarget.com/sustainability/definition/e-waste"&gt;E-waste&lt;/a&gt; recycling.&lt;/li&gt; 
 &lt;li&gt;Catalytic converters on backup generators.&lt;/li&gt; 
  &lt;li&gt;Alternative cooling technologies, including heat pumps, &lt;a href="https://www.techtarget.com/searchdatacenter/definition/data-center-evaporative-cooling-swamp-cooling"&gt;evaporative cooling&lt;/a&gt;, naturally cooled facilities in cold geographic locations and liquid immersion cooling for servers and other computing equipment.&lt;/li&gt; 
 &lt;li&gt;Heat recovery and redirection designs that redirect and reuse heat from data center facilities to other cooler areas of the building.&lt;/li&gt; 
 &lt;li&gt;Alternative and renewable energy sources, such as photovoltaic technology, biofuels and wind technology.&lt;/li&gt; 
 &lt;li&gt;Hybrid and electric vehicles.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Building and certifying a green data center or other facility can be expensive upfront, but long-term cost savings can be realized on operations and maintenance. Green facilities offer employees a healthy, comfortable work environment and enhance relations with local communities.&lt;/p&gt; 
&lt;p&gt;Environmentalists and, increasingly, the general public are pressuring governments to offer green incentives. Companies sometimes receive tax incentives and other monetary support for the development and use of &lt;a href="https://www.techtarget.com/searchdatacenter/tip/Considerations-for-sustainable-data-center-design"&gt;environmentally responsible technologies&lt;/a&gt;.&lt;/p&gt; 
&lt;div class="youtube-iframe-container"&gt;
 &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/yGkfBo2iSiI?si=_iJvhdBHlh6lhAPt?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
&lt;/div&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Why do you need a green data center?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why do you need a green data center?&lt;/h2&gt;
 &lt;p&gt;Green data centers have become essential for enterprise computing. A properly designed and well-implemented green data center design can use a range of energy-efficient technologies to do the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Lower power consumption, which is one of the largest data center expenses.&lt;/li&gt; 
  &lt;li&gt;Reduce carbon emissions, which demonstrates environmental responsibility, improves public brand perception and helps the environment.&lt;/li&gt; 
  &lt;li&gt;Conserve valuable resources, such as water and fossil fuels.&lt;/li&gt; 
  &lt;li&gt;Improve the sustainability of data center operations, which is a critical concern for &lt;a href="https://www.techtarget.com/searchdisasterrecovery/definition/business-continuity"&gt;business continuity&lt;/a&gt;.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Establishing a green data center can be complex and expensive. Traditional data centers can be upgraded or retrofitted with green technologies, such as virtualization, AI-driven DCIM, and refurbishing and recycling programs. However, the deployment of a full-service green data center with a comprehensive suite of green technologies and strategies typically requires the construction of a new data center facility.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What are the benefits of a green data center?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the benefits of a green data center?&lt;/h2&gt;
 &lt;p&gt;Adopting a green approach to data center energy and environment management is a significant investment. However, over time, these data centers provide a range of advantages. Some benefits of green data centers are the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Lower long-term operating costs.&lt;/li&gt; 
  &lt;li&gt;Reduced physical space requirements.&lt;/li&gt; 
   &lt;li&gt;Lower carbon emissions and a smaller &lt;a href="https://www.techtarget.com/whatis/definition/carbon-footprint"&gt;carbon footprint&lt;/a&gt;, due to fewer pieces of more energy-efficient data center gear.&lt;/li&gt; 
  &lt;li&gt;Decreased water use, expanding the range of suitable data center locations while reducing contention for water with local communities.&lt;/li&gt; 
  &lt;li&gt;Reduced waste output from packaging reductions, gear redeployments and recycling.&lt;/li&gt; 
  &lt;li&gt;Enhanced business continuity and regulatory compliance postures.&lt;/li&gt; 
  &lt;li&gt;Reduced electricity consumption.&lt;/li&gt; 
  &lt;li&gt;More emphasis on renewable and sustainable data center technology and resources.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/actions_to_help_reduce_ewaste-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/actions_to_help_reduce_ewaste-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/actions_to_help_reduce_ewaste-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/actions_to_help_reduce_ewaste-f.png 1280w" alt="list of ways to reduce e-waste" height="413" width="559"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Four ways data center operators can deal with e-waste.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Green data center performance metrics"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Green data center performance metrics&lt;/h2&gt;
 &lt;p&gt;Numerous metrics have been developed to measure energy use and sustainability. They help demonstrate and certify that buildings are using energy efficiently and don't harm the environment. Organizations typically adopt and track several green energy metrics to provide the most complete picture of energy efficiency and sustainable practices and operation.&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;Power usage effectiveness. &lt;/b&gt;Developed in 2007 by The Green Grid, part of the Information Technology Industry Council, &lt;a href="https://www.techtarget.com/searchdatacenter/definition/power-usage-effectiveness-PUE"&gt;PUE&lt;/a&gt; measures the power consumption of a data center. It's the ratio of the total power provided to the data center divided by the power the IT equipment in the data center uses. The goal is to have the ratio come as close as possible to one. A ratio of one means all of the power provided to the data center is being used by the computing equipment, with none lost to cooling, lighting, power distribution or other overhead. Ratios greater than one indicate lower efficiency.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Data center infrastructure efficiency. &lt;/b&gt;&lt;a href="https://www.techtarget.com/searchdatacenter/definition/data-center-infrastructure-efficiency-DCIE"&gt;DCiE&lt;/a&gt; measures how effectively a data center uses the available infrastructure. It divides the power consumed by IT gear by the total power the data center uses. DCiE is the inverse of PUE. A DCiE approaching one indicates greater energy efficiency. As with PUE, DCiE is a direct gauge of efficiency and is often used to help engineers measure the impact of infrastructure changes on efficiency.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Carbon usage effectiveness.&lt;/b&gt; The Green Grid also developed the CUE metric to show if a data center has attained its sustainability goals. It's the ratio of carbon dioxide emissions the data center generates divided by the energy consumption of data center equipment. The goal is to have the lowest possible value, which indicates that the data center is effectively controlling its carbon dioxide emissions and carbon footprint.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Water usage effectiveness. &lt;/b&gt;The WUE metric evaluates a data center's water consumption relative to its energy use. It's determined by dividing the total annual water use in liters by the total annual energy use in kilowatt-hours. The result is WUE expressed in liters per &lt;a href="https://www.techtarget.com/whatis/definition/watt-hour-Wh"&gt;kWh&lt;/a&gt;. WUE is most useful for facilities that rely on water-based cooling systems.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Renewable energy usage.&lt;/b&gt; REU is the ratio of energy derived from renewable sources, including photovoltaic, wind and hydroelectric power, divided by the total electricity used by the data center. This renders a ratio, but REU is typically expressed as a percentage. When the REU reaches 100%, all of the facility's energy use is from renewable sources. REU is most helpful for green data centers that emphasize sustainability through renewable energy sources.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Energy reuse factor.&lt;/b&gt; ERF is a less common metric that expresses energy efficiency in terms of reuse. ERF is the ratio of the energy reused, such as using waste heat from servers to heat other spaces, to the total energy the data center uses. This ratio is expressed as a percentage and is typically low, but larger percentages indicate more effective energy reuse.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Data center carbon footprint.&lt;/b&gt; The DCCF metric gauges the environmental impact of a data center. It's determined by multiplying the total power the data center consumes by the carbon emission factor of the power source. A lower DCCF means a lower carbon impact. Engineers often use this metric to determine the sustainability of the data center and as justification for investments in renewable, low-carbon emission energy sources.&lt;/li&gt; 
 &lt;/ul&gt;
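Because each of these metrics is a simple ratio, they are easy to compute once the underlying measurements are in hand. The sketch below mirrors the definitions in the list above; all input values are invented for illustration.

```python
# Hypothetical sketch of the ratio-based green data center metrics defined
# above. All input values are invented for illustration.

def pue(total_facility_kwh, it_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh, it_kwh):
    """Data center infrastructure efficiency: the inverse of PUE."""
    return it_kwh / total_facility_kwh

def wue(annual_water_liters, annual_energy_kwh):
    """Water usage effectiveness, in liters per kWh."""
    return annual_water_liters / annual_energy_kwh

def reu(renewable_kwh, total_kwh):
    """Renewable energy usage, expressed as a percentage of total electricity."""
    return 100 * renewable_kwh / total_kwh

print(pue(1_500_000, 1_000_000))             # 1.5
print(round(dcie(1_500_000, 1_000_000), 3))  # 0.667
print(wue(1_800_000, 1_000_000))             # 1.8 liters per kWh
print(reu(600_000, 1_500_000))               # 40.0
```

In this invented example, a facility drawing 1.5 million kWh to power 1 million kWh of IT load has a PUE of 1.5, meaning a third of its electricity goes to overhead rather than computing.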
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/data_center-pue.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/data_center-pue_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/data_center-pue_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/data_center-pue.png 1280w" alt="How power usage effectiveness (PUE) works" height="286" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Power usage effectiveness is a metric used to assess the efficiency of a data center.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Energy efficiency certifications"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Energy efficiency certifications&lt;/h2&gt;
 &lt;p&gt;Three certifications are available to validate that a building or IT device is energy-efficient and environmentally friendly:&lt;/p&gt;
 &lt;ol type="1" start="1" class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Leadership in Energy and Environmental Design.&lt;/b&gt; The U.S. Green Building Council created this certification. &lt;a href="https://www.techtarget.com/searchdatacenter/definition/LEED-Leadership-in-Energy-and-Environmental-Design"&gt;LEED&lt;/a&gt; certification of a building means it has satisfied a rigorous set of criteria to reduce energy consumption and be environmentally friendly.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Energy Star.&lt;/b&gt; Developed by the U.S. Environmental Protection Agency and the U.S. Department of Energy, the &lt;a href="https://www.techtarget.com/searchdatacenter/definition/Energy-Star"&gt;Energy Star&lt;/a&gt; designation certifies that a machine or device is energy-efficient. Use of Energy Star-certified products has &lt;a target="_blank" href="https://www.energystar.gov/about?s=mega" rel="noopener"&gt;saved residential and business users more than 5 trillion kilowatt-hours&lt;/a&gt; of electricity since 1992.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Certified Energy Efficient Datacenter Award.&lt;/b&gt; CEEDA is a certification framework designed around a mix of standards, including ones from the American Society of Heating, Refrigerating, and Air Conditioning Engineers; Energy Star; the European Code of Conduct; European Telecommunications Standards Institute; Green Grid metrics; and the International Organization for Standardization. This global certification program independently recognizes the successful implementation of energy-efficiency practices in data centers across a variety of specialized disciplines.&lt;/li&gt; 
 &lt;/ol&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Components of a green data center"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Components of a green data center&lt;/h2&gt;
 &lt;p&gt;Virtually every component of a data center, from how the building is constructed to the equipment in use, can be made more energy-efficient and environmentally friendly.&lt;/p&gt;
 &lt;figure class="main-article-image half-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/storage-energy_efficient_storage_checklist-h.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/storage-energy_efficient_storage_checklist-h_half_column_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/storage-energy_efficient_storage_checklist-h_half_column_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/storage-energy_efficient_storage_checklist-h.png 1280w" alt="list of factors affecting energy-efficiency of storage" height="427" width="279"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Efficient storage technology is a key part of a green data center. See what it takes to have energy-efficient storage.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Energy efficiency and environmental considerations are essential design components when upgrading an existing data center or building a new one. Organizations planning a green data center can use a design firm with experience in designing energy-efficient and environmentally friendly buildings.&lt;/p&gt;
 &lt;p&gt;Design considerations and components of a green data center include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Cold and hot aisles.&lt;/b&gt; Data center servers are placed and contained in &lt;a href="https://www.techtarget.com/searchdatacenter/definition/hot-cold-aisle"&gt;cold and hot aisles&lt;/a&gt; that enable hot air to be pumped to air conditioner returns and cold air from cold aisles to where it's needed for cooling.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Free air cooling.&lt;/b&gt; &lt;a href="https://www.techtarget.com/searchdatacenter/definition/free-cooling"&gt;Free cooling&lt;/a&gt; systems use outdoor air to cool data centers that are strategically located in cooler climates.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Liquid cooling.&lt;/b&gt; Immersion cooling technologies submerge data center gear in a circulated bath of nonconductive oil that contains and carries heat more efficiently than air cooling while using less power in chiller and circulation equipment than traditional heating, ventilation and air conditioning gear.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Low-power servers.&lt;/b&gt; These servers consume less energy and generate less heat than traditional servers, reducing both power and cooling demands in the data center.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Virtualized servers.&lt;/b&gt; Virtualization lets a single physical server host multiple virtual server instances. One physical server can function as many different servers and reduce the total server count in the data center.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Modular data centers.&lt;/b&gt; These energy-efficient data centers are portable and can be quickly set up wherever they're needed. They're also called &lt;i&gt;data centers in a box&lt;/i&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Evaporative cooling.&lt;/b&gt; Various technologies, such as evaporation pads and high-pressure spray systems, reduce heat through the evaporation of water.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Heat recovery and reuse.&lt;/b&gt; Waste heat from data center power use is reused to heat other facilities.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Ultrasonic humidification.&lt;/b&gt; Energy-efficient ultrasound is used to create the moisture needed to establish proper environmental conditions to run some devices in a data center. For example, adequate humidity reduces static accumulation and potentially damaging static discharges.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Renewable energy use.&lt;/b&gt; Green data centers will typically integrate one or more renewable energy sources, such as photovoltaic, wind, hydroelectric or biofuel installations. This reduces their carbon footprint, improves sustainability and reduces dependence on traditional utilities.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Infrastructure monitoring and management.&lt;/b&gt; Software technologies such as DCIM let data center operators monitor energy use, optimize resource allocation, ensure safe operational environments, and oversee facility and IT infrastructure performance.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Building design.&lt;/b&gt; Factors such as building size and positioning, insulation type, natural lighting and air handling can reduce ongoing energy demands and affect data center efficiency.&lt;/li&gt; 
 &lt;/ul&gt;
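To make the virtualization item above concrete, here is a back-of-the-envelope sketch of how consolidating physical servers onto virtualized hosts reduces server count and energy use. All workload sizes, consolidation ratios and wattages are hypothetical example figures, not measurements.

```python
# Illustrative sketch: estimate server-count and energy reduction from
# consolidating physical servers onto fewer virtualization hosts.
import math

def consolidation_estimate(physical_servers, vms_per_host,
                           watts_per_server, watts_per_host):
    """Return (hosts_needed, annual_kwh_before, annual_kwh_after)."""
    hosts_needed = math.ceil(physical_servers / vms_per_host)
    hours_per_year = 24 * 365
    kwh_before = physical_servers * watts_per_server * hours_per_year / 1000
    kwh_after = hosts_needed * watts_per_host * hours_per_year / 1000
    return hosts_needed, kwh_before, kwh_after

hosts, before, after = consolidation_estimate(
    physical_servers=100,   # lightly loaded 300 W servers (assumed)
    vms_per_host=10,        # assumed consolidation ratio
    watts_per_server=300,
    watts_per_host=600)     # larger, busier virtualization host (assumed)
print(f"{hosts} hosts, {before:,.0f} kWh -> {after:,.0f} kWh per year")
# prints: 10 hosts, 262,800 kWh -> 52,560 kWh per year
```

Even with each host drawing twice the power of a single legacy server, the tenfold consolidation in this example cuts annual energy use by roughly 80%, which is the basic mechanism behind virtualization's green data center benefit.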
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="Cloud computing and green data centers"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Cloud computing and green data centers&lt;/h2&gt;
 &lt;p&gt;The global rise of cloud computing has altered the calculus of green data center design and implementation for many organizations. In effect, cloud computing allows an organization to use computing resources and services remotely and on demand. This treats computing as if it were a utility such as electricity or water.&lt;/p&gt;
 &lt;p&gt;For most organizations, the availability of cloud computing has reduced -- and in some cases eliminated -- the demand for green data center design. By moving workloads off premises, cloud computing sidesteps issues such as hardware counts, power supply and use, cooling needs and physical space.&lt;/p&gt;
 &lt;p&gt;For example, an organization might opt to migrate key workloads and data to a cloud provider rather than invest the time, financial capital and personnel in building and operating a green data center. Some of the most progressive businesses might forgo a data center entirely and commit to a cloud-first or cloud-native posture for their business software and services.&lt;/p&gt;
 &lt;p&gt;However, for cloud computing providers with many large data centers around the world, the need to design, build and manage green data centers is more acute than ever.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Learn more about what factors to consider to &lt;/i&gt;&lt;a href="https://www.techtarget.com/searchdatacenter/feature/Get-started-with-green-energy-for-your-data-center"&gt;&lt;i&gt;run a sustainable data center&lt;/i&gt;&lt;/a&gt;&lt;i&gt;.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>A green data center is a repository for the storage, processing, management and dissemination of data in which the physical space and the mechanical and electrical subsystems are designed to maximize energy efficiency and minimize the environmental impact.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/1.jpg</image>
            <link>https://www.techtarget.com/searchdatacenter/definition/green-data-center</link>
            <pubDate>Thu, 06 Mar 2025 09:00:00 GMT</pubDate>
            <title>What is a green data center?</title>
        </item>
        <item>
            <body>&lt;p&gt;Few technology projects evoke as much excitement -- and anxiety -- as a private cloud. An organization that can deliver resources, services and applications with high levels of control and self-service can find many opportunities for business innovation and transformation.&lt;/p&gt; 
&lt;p&gt;However, private clouds can be notoriously complex and demanding, especially when a business tries to implement private cloud technology in-house. Fortunately, there are many alternatives available today that can provide powerful DIY solutions, along with proven and convenient private cloud platforms as a service.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What is a private cloud?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is a private cloud?&lt;/h2&gt;
 &lt;p&gt;A &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/private-cloud"&gt;private cloud&lt;/a&gt; is a cloud computing environment dedicated to the exclusive use of a single organization. This cloud deployment model is intended to deliver resources, services and applications to users, while supporting a high level of self-service, automation, flexibility and user autonomy -- much like a public cloud.&lt;/p&gt;
 &lt;p&gt;However, private clouds can have noteworthy limitations. Because private clouds are typically provisioned from an organization's existing IT infrastructure, the scalability and range of services available to users can be far more limited than in public clouds. Even when private cloud capabilities are delivered through third-party providers, scalability and scope should be considered carefully.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/storage-four_cloud_options_02-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/storage-four_cloud_options_02-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/storage-four_cloud_options_02-f_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/storage-four_cloud_options_02-f.png 1280w" alt="Four cloud deployment models" height="286" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Understand the similarities and differences among the public cloud, private cloud and hybrid cloud models.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Why choose a private cloud platform?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why choose a private cloud platform?&lt;/h2&gt;
 &lt;p&gt;Organizations choose a private cloud deployment model when they need a combination of characteristics that are beneficial to the business, including the following.&lt;/p&gt;
 &lt;h3&gt;Security&lt;/h3&gt;
 &lt;p&gt;A private cloud offers a dedicated single-tenant computing environment, which avoids many of the risks inherent in multi-tenant public cloud environments. The organization sets the security posture and configurations needed to ensure appropriate data protection and security for the private cloud environment. A private cloud is well suited to business environments that require tight security around business data and applications.&lt;/p&gt;
 &lt;h3&gt;Customization&lt;/h3&gt;
 &lt;p&gt;A private cloud typically supplies a defined set of resources, services and applications that are available to users on demand. The business can choose any combination of these assets and select how they are requested by -- and provided to -- the user base. In effect, a private cloud can deliver the look and feel the business desires.&lt;/p&gt;
 &lt;h3&gt;Control&lt;/h3&gt;
 &lt;p&gt;A private cloud enables a business to exert direct control over the behavior and operational characteristics of the computing environment. This level of control may be needed to maintain appropriate business governance and &lt;a href="https://www.techtarget.com/searchcio/tip/Top-cloud-compliance-standards-and-how-to-use-them"&gt;regulatory compliance&lt;/a&gt;. For example, private clouds are popular with government agencies, financial institutions, healthcare organizations and businesses with mission-critical operations.&lt;/p&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="Top private cloud providers and software"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Top private cloud providers and software&lt;/h2&gt;
 &lt;p&gt;It's an embarrassment of riches for IT pros considering paid or open source private cloud platform options. Some platforms, such as Azure Stack Hub, are unique to specific vendors and pose some risk of vendor lock-in. Other vendors offer private cloud as a service, often partitioning a portion of the provider's public cloud infrastructure and dedicating that portion to a customer for its single-tenant use as a private cloud.&lt;/p&gt;
 &lt;p&gt;Finally, there are software frameworks, such as Apache CloudStack, that organizations can use to cobble together a private cloud within their existing local data center. Selecting a private cloud platform depends on many factors, including users' current data center platforms, hybrid cloud goals -- if any -- security and support needs, current IT staff expertise and cost limitations.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/servervirt-private_cloud_provider_features.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/servervirt-private_cloud_provider_features_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/servervirt-private_cloud_provider_features_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/servervirt-private_cloud_provider_features.png 1280w" alt="How to choose a private cloud provider" height="252" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Evaluate private cloud providers.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Here is a list, ordered alphabetically, of top private cloud service providers and software options according to market research.&lt;/p&gt;
 &lt;h3&gt;Apache CloudStack&lt;/h3&gt;
 &lt;p&gt;An open source IaaS cloud platform, Apache CloudStack offers a comprehensive management system that features usage metering and image deployment. It supports hypervisors such as Kernel-based Virtual Machine (KVM); VMware vSphere, including ESXi and vCenter; XenServer/XCP; and XCP-ng. Beyond its own API, CloudStack also supports AWS APIs and Open Cloud Computing Interface from the Open Grid Forum.&lt;/p&gt;
 &lt;p&gt;CloudStack offers scalable infrastructure with high availability and handles features like tiered storage, Active Directory integration and some software-defined networking. As with other open source platforms, it takes a knowledgeable IT staff to install and support CloudStack.&lt;/p&gt;
 &lt;h3&gt;Azure Stack Hub&lt;/h3&gt;
 &lt;p&gt;Azure Stack Hub is part of Microsoft's Azure Stack portfolio and enables organizations to run apps in an Azure environment located on-premises, which can deliver Azure services within a data center. Use consistent tools, environments and applications, and easily transfer them between Azure and Azure Stack Hub. Stack Hub provides an autonomous private cloud that runs independent of internet connectivity and does not require the Azure public cloud. However, the two can be connected seamlessly to create an Azure &lt;a href="https://www.techtarget.com/searchcloudcomputing/feature/Public-cloud-vs-private-cloud-Key-benefits-and-differences"&gt;hybrid cloud&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Azure Stack Hub, like Microsoft's public cloud services, offers pay-as-you-use pricing.&lt;/p&gt;
 &lt;h3&gt;Eucalyptus&lt;/h3&gt;
 &lt;p&gt;Eucalyptus software provides a modular, open source, private IaaS cloud platform for CentOS, Fedora and Ubuntu environments capable of building AWS-compatible clouds. Eucalyptus is API-compatible with Amazon EC2, S3, Identity and Access Management, Elastic Load Balancing, Auto Scaling and CloudWatch services, enabling Eucalyptus to create hybrid clouds between local and AWS resources.&lt;/p&gt;
 &lt;p&gt;Eucalyptus offers a free community edition.&lt;/p&gt;
 &lt;h3&gt;HPE private cloud&lt;/h3&gt;
 &lt;p&gt;HPE private cloud offerings combine infrastructure and software to form a private cloud delivered through the HPE GreenLake platform. GreenLake enables private clouds to support workloads and data in data centers, at the edge, in colocation facilities and across public cloud landing zones as business needs change. Users can employ a consistent cloud experience using intuitive, self-service access to important resources and tools. Staff can consolidate disparate environments in ways that deliver flexibility, control and security. HPE's solutions can be delivered as a hosted or self-managed platform.&lt;/p&gt;
 &lt;p&gt;HPE GreenLake uses the pay-per-use pricing model but also offers upfront options for certain private cloud solutions, such as HPE GreenLake for Private Cloud Business Edition.&lt;/p&gt;
 &lt;h3&gt;IBM Cloud Private&lt;/h3&gt;
 &lt;p&gt;IBM Cloud Private is a packaged enterprise-grade platform for developing and managing container-based applications. It offers a fully integrated environment for managing containers that includes the Kubernetes container orchestrator, private image registry, management system and monitoring framework. IBM Cloud Private uses open source components, including Docker and Helm, and is intended to integrate with multiple public cloud service providers.&lt;/p&gt;
 &lt;p&gt;IBM offers several private cloud solutions, and its pricing models vary.&lt;/p&gt;
 &lt;h3&gt;Nutanix&lt;/h3&gt;
 &lt;p&gt;Nutanix software and services combine the self-service and agility of public cloud with the performance, security and cost benefits of private clouds inside the on-premises data center. Nutanix offerings can deliver applications, services and data at scale, while maintaining performance and reliability, automating application lifecycle management, protecting cloud data with native data encryption, offering visibility into actual cloud costs and optimizations, consolidating data storage and eliminating storage silos, and assisting with business governance and continuity.&lt;/p&gt;
 &lt;p&gt;Nutanix offers different pricing and licensing options.&lt;/p&gt;
 &lt;h3&gt;OpenNebula&lt;/h3&gt;
 &lt;p&gt;The OpenNebula project aims to offer a turnkey, versatile, feature-rich and vendor-agnostic platform for creating and managing private, public and hybrid clouds atop virtualized data centers, with support for hybrid deployments on providers such as AWS and Equinix. OpenNebula touts a self-service portal, comprehensive UIs, automated and unified management, a marketplace, performance and capacity management, high availability and good integration across third-party tools to &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/4-best-practices-to-avoid-cloud-vendor-lock-in"&gt;avoid vendor lock-in&lt;/a&gt;. OpenNebula supports KVM and Kubernetes clusters in a common shared environment and provides support for multi-tenancy, automatic provisioning and service elasticity.&lt;/p&gt;
 &lt;p&gt;OpenNebula follows a subscription-based pricing model.&lt;/p&gt;
 &lt;h3&gt;OpenStack&lt;/h3&gt;
 &lt;p&gt;OpenStack software builds on existing hypervisors to provision and manage compute, storage and networking resources as a complete open source cloud OS for VMs, containers and bare-metal systems. It relies on a set of software components that combine to provide common services for cloud infrastructure. It supports VMware ESXi, Microsoft Hyper-V, Citrix XenServer and open source KVM.&lt;/p&gt;
  &lt;p&gt;Although OpenStack itself is free to download and use, it's complex and requires extensive expertise to deploy and operate -- often a significant cost for the business. Support is also community-driven, so effective operation depends on &lt;a href="https://www.techtarget.com/whatis/feature/Top-20-cloud-computing-skills-to-boost-your-career"&gt;knowledgeable staff&lt;/a&gt;.&lt;/p&gt;
 &lt;h3&gt;Oracle Cloud Infrastructure&lt;/h3&gt;
 &lt;p&gt;Oracle Cloud Infrastructure (OCI) provides public, hybrid, multicloud and dedicated (private) cloud services. Private cloud services can be deployed in a dedicated region with just a few racks and are preconfigured to optimize their intended use cases. The private deployment supports more than 150 services to handle migration, modernization and different plans for innovation. OCI runs almost any workload, including mission-critical and AI workloads, while addressing stringent data sovereignty and regulatory requirements.&lt;/p&gt;
 &lt;p&gt;Oracle follows a pay-as-you-go pricing model.&lt;/p&gt;
 &lt;h3&gt;Platform9 Private Cloud Director&lt;/h3&gt;
 &lt;p&gt;Platform9 is a third-party, public-private hybrid cloud provider based on OpenStack, Kubernetes and Fission that enables organizations to create and manage hybrid clouds. Capabilities include cluster blueprints, advanced cluster resource management, cluster-wide software-defined networks and infrastructure self-service features. It can scale to hundreds of virtualized clusters and hundreds of hypervisors. Platform9 supports KVM, VMware vSphere and Docker. Since Platform9 handles managing OpenStack and Kubernetes and the essential hybrid cloud structure, users are relieved of the configuration and upgrade issues involved.&lt;/p&gt;
 &lt;p&gt;Today, Platform9 has three deployment options and is available as a hosted, fully managed SaaS control plane or self-managed deployment. Private Cloud Director community edition is free. The commercial versions -- SaaS or self-hosted -- follow a subscription-based pricing model.&lt;/p&gt;
 &lt;h3&gt;Red Hat OpenShift&lt;/h3&gt;
 &lt;p&gt;Red Hat OpenShift is a comprehensive, consistent and turnkey platform of tools and services used to develop, modernize and deploy enterprise applications at scale. It can be used with traditional, modernized and &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/A-beginners-guide-to-cloud-native-application-development"&gt;cloud‑native applications&lt;/a&gt; as either VMs or containers.&lt;/p&gt;
 &lt;p&gt;Red Hat can supply OpenShift as a cloud service or as a fully self-managed edition. Pricing is typically based on the chosen deployment model and resources required.&lt;/p&gt;
 &lt;h3&gt;VMware Cloud Foundation&lt;/h3&gt;
 &lt;p&gt;The VMware Cloud Foundation platform provides private and hybrid cloud infrastructure with enterprise-grade compute, networking, storage, security, and resilience management tools, as well as features such as disaster recovery and compliance support. The platform provides sophisticated networking, vSphere Kubernetes Service, license portability and automated provisioning, and it supports scalable &lt;a href="https://www.techtarget.com/searchcio/tip/Top-edge-computing-trends-to-watch-in-2020"&gt;deployments at the edge&lt;/a&gt;. The platform follows a licensing- and subscription-based pricing model.&lt;/p&gt;
 &lt;p&gt;VMware pricing is generally customized based on various factors, such as support levels.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 30 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>There are many factors to consider when choosing a private cloud platform. Explore these popular providers to see which one suits your business best.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/cloud_g1297025080.jpg</image>
            <link>https://www.techtarget.com/searchcloudcomputing/answer/Explore-private-cloud-platform-options-Paid-and-open-source</link>
            <pubDate>Fri, 10 Jan 2025 09:00:00 GMT</pubDate>
            <title>Explore the top 12 private cloud providers of 2025</title>
        </item>
        <item>
            <body>&lt;p&gt;Private clouds appeal to businesses that need the flexibility and self-service found in a public cloud, with the control and transparency found in on-premises infrastructures. However, implementing and managing a private cloud can be a challenging endeavor fraught with complex problems.&lt;/p&gt; 
&lt;p&gt;Gain a better understanding of how private and public clouds differ, as well as the most common problems in private environments, to avoid deployment and management headaches.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Private vs. public clouds"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Private vs. public clouds&lt;/h2&gt;
 &lt;p&gt;Private and public clouds are two models of cloud computing intended to deliver different sets of benefits to a business:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;A &lt;b&gt;public cloud&lt;/b&gt;, such as AWS, Microsoft Azure or Google Cloud, is designed for extremely high scalability, offering a &lt;a href="https://www.techtarget.com/searchcloudcomputing/feature/A-cloud-services-cheat-sheet-for-AWS-Azure-and-Google-Cloud"&gt;broad set of services&lt;/a&gt; and resources across a global footprint. Public clouds operate using multi-tenant architectures, where resources and services are shared, and business data is primarily retained remotely in the public cloud.&lt;/li&gt; 
  &lt;li&gt;A &lt;b&gt;private cloud&lt;/b&gt; is designed for high levels of control and oversight, which enable the business to provide cloud self-service and autonomy, while enforcing direct control over the infrastructure and data that constitute the private cloud. But this also limits the scope, scalability and services of most private clouds.&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;p&gt;Consequently, &lt;a href="https://www.techtarget.com/searchcloudcomputing/feature/Public-cloud-vs-private-cloud-Key-benefits-and-differences"&gt;public and private clouds&lt;/a&gt; are not mutually exclusive and can be used simultaneously to deliver different business benefits. Public and private clouds can also connect to provide a &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/hybrid-cloud"&gt;hybrid cloud&lt;/a&gt;, ideally bringing the benefits and capabilities of both cloud paradigms to the business.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f_mobile.png 960w,https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f.png 1280w" alt="Compare public vs private cloud" height="252" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Private clouds provide users with direct control over data and resources but with a higher upfront cost than a public cloud model.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Private cloud deployment issues"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Private cloud deployment issues&lt;/h2&gt;
 &lt;p&gt;After defining what a private cloud is, let's discuss the 10 most common issues to consider when establishing a private cloud.&lt;/p&gt;
 &lt;h3&gt;1. Undefined objectives&lt;/h3&gt;
 &lt;p&gt;Tech envy is the bane of modern businesses. Don't implement costly technology, like a private cloud, just because it's in the media or pursued by a competitor. Understand the needs or justifications for a private cloud, and assess the value of such a project &lt;a href="https://online.hbs.edu/blog/post/cost-benefit-analysis" target="_blank" rel="noopener"&gt;with a cost-benefit analysis&lt;/a&gt;. Users need to understand the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Why a private cloud is needed.&lt;/li&gt; 
  &lt;li&gt;What it needs to do for the business.&lt;/li&gt; 
  &lt;li&gt;How it should align with business goals.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;2. Infrastructure costs&lt;/h3&gt;
 &lt;p&gt;Private clouds rely on on-premises infrastructure, so a business needs to provision -- or build -- on-premises infrastructure that is dedicated to private cloud use. This demands significant capital investment, which can initially cost more than the pay-as-you-go model of public clouds. Understand the hardware, software, talent and time investments needed to build a private cloud, and budget accordingly.&lt;/p&gt;
 &lt;p&gt;Also, the different types of private cloud come with different costs. Note the following when planning a private cloud deployment:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Self-managed private cloud.&lt;/b&gt; Also known as an &lt;i&gt;on-premises cloud&lt;/i&gt;, an organization creates and manages this cloud autonomously. The facility that houses the infrastructure is either an on-premises server room, a company-owned data center or a colocation center on rented rack space.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Managed private cloud.&lt;/b&gt; A third-party provider manages the cloud infrastructure, which is reserved for the use of one organization.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Virtual private cloud.&lt;/b&gt; A VPC is the division of a service provider's public, multi-tenant cloud architecture to support private cloud computing.&lt;/li&gt; 
 &lt;/ul&gt;
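The capital-investment point above can be made concrete with a rough break-even sketch: an upfront private cloud build versus ongoing public cloud pay-as-you-go spend. The capex, operating cost and public cloud figures below are hypothetical examples, not vendor pricing.

```python
# Rough break-even sketch: months until a private cloud's cumulative cost
# (upfront capex plus monthly opex) falls below public cloud pay-as-you-go.
def breakeven_months(capex, private_opex_per_month, public_cost_per_month):
    """First month at which cumulative private cost drops below public cost,
    or None if the private cloud never catches up within 10 years."""
    if private_opex_per_month >= public_cost_per_month:
        return None  # running cost alone already exceeds public cloud spend
    for month in range(1, 121):  # look ahead up to 10 years
        private_total = capex + private_opex_per_month * month
        public_total = public_cost_per_month * month
        if private_total < public_total:
            return month
    return None

print(breakeven_months(capex=500_000,
                       private_opex_per_month=20_000,
                       public_cost_per_month=45_000))  # prints 21
```

In this example, the build pays for itself after 21 months; if the monthly operating cost were to meet or exceed the public cloud bill, the upfront investment would never be recovered, which is why the cost-benefit analysis discussed earlier matters.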
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/types_of_private_cloud-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/types_of_private_cloud-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/types_of_private_cloud-f_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/types_of_private_cloud-f.png 1280w" alt="Compare the different types of private cloud" height="280" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;The three types of private cloud provide users with varying degrees of convenience and control, from virtual private clouds to private clouds entirely self-managed by the user.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;3. Poor expertise&lt;/h3&gt;
 &lt;p&gt;Private clouds can be complex to design, build, manage and maintain -- especially supporting services and frameworks, such as enterprise applications, software services, automation and orchestration. This demands extensive expertise from IT staff, which might not be present within the current available personnel. These skills could include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Infrastructure management related to on-premises hardware and virtualization technologies.&lt;/li&gt; 
  &lt;li&gt;Security management focused on private cloud needs.&lt;/li&gt; 
   &lt;li&gt;Advanced networking expertise to handle complex configurations arising from strict security needs.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;A private cloud project might require new staff, or current staff might need extensive new &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Upskill-your-cloud-team-for-success"&gt;training and education&lt;/a&gt;.&lt;/p&gt;
 &lt;h3&gt;4. Limitations&lt;/h3&gt;
 &lt;p&gt;Consider the pressing limitations for a private cloud. IT infrastructure is finite, so private clouds rarely approach the capabilities of modern public clouds. There is only so much money, time and talent available to a modern business.&lt;/p&gt;
  &lt;p&gt;For example, a private cloud rarely offers the capacity, scope of services or level of scalability found in the public cloud, where providers have far more experienced staff and a global data center footprint to draw on. For this reason, some businesses choose a hybrid cloud approach.&lt;/p&gt;
 &lt;h3&gt;5. Compliance and governance requirements&lt;/h3&gt;
  &lt;p&gt;Consider how creating a private cloud impacts vital compliance and governance issues. Depending on the industry and business, there may be &lt;a href="https://www.techtarget.com/searchcio/tip/Top-cloud-compliance-standards-and-how-to-use-them"&gt;strict compliance requirements&lt;/a&gt; to protect personal data, such as HIPAA in healthcare or GDPR for companies that do business with the European Union.&lt;/p&gt;
 &lt;p&gt;Strict data privacy compliance regulations need to be reflected in user access, data storage and retention throughout the private cloud. Similarly, business governance must evolve to reflect the new capabilities and risks of a private cloud, including data access, usage, security and business continuance.&lt;/p&gt;
 &lt;h3&gt;6. Resilience&lt;/h3&gt;
 &lt;p&gt;Systems and devices fail, and failures within a private cloud can profoundly impact the business. Public clouds can fail over to other regions or even to other providers.&lt;/p&gt;
 &lt;p&gt;Consider the level of resilience needed to ensure system and data availability. This might include high availability architecture designs, real-time data protection and backup/restoration capabilities, and other technologies to mitigate downtime.&lt;/p&gt;
 &lt;h3&gt;7. Configuration and data protection&lt;/h3&gt;
 &lt;p&gt;Private cloud design should include careful consideration of security features, such as encryption, firewalls and access controls.&lt;/p&gt;
 &lt;p&gt;A large portion of security problems arise from poorly configured infrastructure and excessive (loose) permissions. Private clouds demand close consideration of hardware and software configurations, strict change management and careful behavioral monitoring. This helps to ensure that the private cloud is secure and that minimum access and privileges are provisioned to users.&lt;/p&gt;
 &lt;h3&gt;8. Monitoring&lt;/h3&gt;
 &lt;p&gt;Is the private cloud working and maintaining service levels the way it should? Use a suite of monitoring tools that can gather and report important performance metrics across the private cloud. Decide what the &lt;a href="https://www.techtarget.com/searchcloudcomputing/feature/Metrics-that-matter-in-cloud-application-monitoring"&gt;vital metrics&lt;/a&gt; should be, along with desired performance parameters. These metrics could include resource utilization, such as CPU, memory and storage. Also, consider how metrics should be reported and reviewed.&lt;/p&gt;
 &lt;p&gt;When private cloud designers can understand how the cloud should work, it's far easier to identify, understand and remediate issues before they escalate.&lt;/p&gt;
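The review step above can be sketched as a simple comparison of gathered metrics against desired performance parameters. The metric names and threshold values here are hypothetical examples, not the output of any particular monitoring suite:

```python
# Minimal sketch of a metrics review: flag any metric that exceeds its
# desired performance parameter. Names and thresholds are illustrative.

def find_breaches(metrics, thresholds):
    """Return the metrics that exceed their desired parameters."""
    return {
        name: value
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }

# Sample utilization figures, in percent, as a monitoring tool might report.
current = {"cpu": 92.0, "memory": 71.5, "storage": 88.0}
desired = {"cpu": 85.0, "memory": 90.0, "storage": 80.0}

print(find_breaches(current, desired))  # cpu and storage exceed their parameters
```

In practice, a real deployment would feed this kind of check from its monitoring tools and route breaches into the reporting and review process the article describes.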
 &lt;h3&gt;9. Continuous optimization&lt;/h3&gt;
 &lt;p&gt;IT infrastructure is rarely static. Private clouds benefit from periodic reviews and upgrades to enhance vital factors, such as reliability, efficiency, capability, performance and capacity. Understand how to use monitoring and reporting to evaluate private cloud performance and set the stage for periodic upgrades and optimizations over time. Optimizations should also include careful attention to changing business goals and strategies, ensuring that the private cloud aligns with business needs.&lt;/p&gt;
 &lt;h3&gt;10. Technologies&lt;/h3&gt;
 &lt;p&gt;Technologies represent the "how" of a private cloud and are often the last factor to consider. As with any data center endeavor, private cloud designers should build an infrastructure using reliable and extensive systems or devices that are well suited to established goals. Technologies should fit business goals, not the reverse. Select vendors for their product reliability, compatibility and support.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 30 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Organizations that want to create a private cloud must prepare for a multitude of issues, including security, compliance, staff expertise and cost.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/cloud_g1161559437.jpg</image>
            <link>https://www.techtarget.com/searchcloudcomputing/answer/Creating-a-private-cloud-with-minimal-issues</link>
            <pubDate>Tue, 12 Nov 2024 09:00:00 GMT</pubDate>
            <title>10 common issues when creating a private cloud</title>
        </item>
        <item>
            <body>&lt;p&gt;Many organizations are running mission-critical workloads in public or hybrid cloud environments. But there are also several advantages to using a private cloud.&lt;/p&gt; 
&lt;p&gt;Private cloud services offer capabilities similar to those that major public cloud providers deliver, except that they do so using organization-owned resources that reside on premises. One of the most appealing &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/private-cloud"&gt;private cloud&lt;/a&gt; advantages is that organizations can use their existing hardware and software resources. This enables users to keep their hardware on premises, select the exact hardware they want to support their private cloud and have greater control over security. Other advantages include potential cost savings, scalability and stronger compliance features.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Control over hardware choices"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Control over hardware choices&lt;/h2&gt;
 &lt;p&gt;Public cloud providers keep their prices low by sharing hardware among multiple tenants. Although larger public cloud providers allow tenants to lease dedicated hardware, this hardware comes at a price that is substantially higher than shared offerings. There is also the potential for &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/noisy-neighbor-cloud-computing-performance"&gt;noisy neighbor&lt;/a&gt; syndrome.&lt;/p&gt;
 &lt;p&gt;Because an organization's hardware supports its private cloud, it has the freedom to select the hardware it uses and the resources that best meet its needs. A business can decide whether to use cutting-edge hardware or make use of lower-end commodity options. With full access to this hardware, users can perform upgrades or maintenance on an as-needed basis.&lt;/p&gt;
 &lt;p&gt;Additionally, dedicated hardware eliminates the security risks associated with sharing hardware with tenants outside the organization. With hardware resources on premises, organizations can enforce their own security policies.&lt;/p&gt;
 &lt;blockquote class="main-article-pullquote"&gt;
  &lt;div class="main-article-pullquote-inner"&gt;
   &lt;figure&gt;
    Because an organization's hardware supports its private cloud, it has the freedom to select the hardware it uses.
   &lt;/figure&gt;
   &lt;i class="icon" data-icon="z"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/blockquote&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="The issue of scalability"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;The issue of scalability&lt;/h2&gt;
 &lt;p&gt;When it comes to scalability, public clouds have a distinct advantage. Public cloud providers have servers in their data centers providing extra capacity any time it's needed. They also make it easy to scale workloads up or down in response to demand spikes.&lt;/p&gt;
 &lt;p&gt;It is possible to build a private cloud that allows workloads to scale as needed. However, there is one critical difference between public and private clouds in this regard. Public clouds offer near-limitless scalability. Scalability in private clouds is limited by an organization's hardware resources. Workloads can be scaled up to the point that they exhaust all the unused capacity within the private cloud but cannot scale beyond that point.&lt;/p&gt;
 &lt;p&gt;Though users might need to add new resources to scale private cloud environments, it's relatively common for an organization to begin by creating a small private cloud. The organization can then expand its existing hardware over time. However, that hardware comes at a cost. Furthermore, data center hardware only provides ROI when the hardware is being used. Hardware that exists solely for the purpose of providing unused capacity is not providing ROI unless a spike occurs.&lt;/p&gt;
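The scaling ceiling described above can be sketched in a few lines. This is an illustrative model, not any platform's actual capacity API: a private cloud can satisfy a scale-up request only from its own unused capacity, whereas a public or hybrid cloud could burst beyond that limit.

```python
# Hypothetical sketch: a private cloud can only scale a workload up to
# the point where its unused capacity is exhausted.

def can_scale(total_capacity, in_use, requested):
    """True if the requested additional units fit within unused capacity."""
    return requested <= total_capacity - in_use

# 100 units of private cloud capacity, 80 already in use.
print(can_scale(100, 80, 15))  # True: fits within the 20 unused units
print(can_scale(100, 80, 30))  # False: demand exceeds the private cloud's ceiling
```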
 &lt;p&gt;Rather than investing heavily in extra private cloud capacity, most organizations embrace a hybrid approach, bursting into public cloud resources to scale up. Those public cloud resources are then released when the workload can be scaled back down.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f_mobile.png 960w,https://www.techtarget.com/rms/onlineImages/cloud_computing-public_vs_private_cloud-f.png 1280w" alt="Compare public vs. private cloud" height="252" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Explore major differences between public cloud and private cloud.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="Granular control over infrastructure"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Granular control over infrastructure&lt;/h2&gt;
 &lt;p&gt;Private cloud services provide complete control over the entire infrastructure. Users can base their private cloud around virtualization tools, such as Hyper-V and Microsoft System Center, or a VMware-based infrastructure. The user has full control.&lt;/p&gt;
 &lt;p&gt;However, a hypervisor such as VMware, Hyper-V or Kernel-based Virtual Machine is not the only option. Those who wish to build a private or &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/hybrid-cloud"&gt;hybrid cloud&lt;/a&gt; that mimics the public cloud can opt to purchase resources from public cloud providers. For example, Microsoft offers Azure Stack Hub, which lets organizations run Azure services in their own data centers. Similarly, Amazon offers AWS Outposts, which consists of a fully managed AWS infrastructure that runs on premises.&lt;/p&gt;
 &lt;p&gt;The main advantage to building a private cloud -- as opposed to using public cloud alternative services -- is control. Having control of a private cloud gives access to resources that aren't exposed by public cloud providers. Some public cloud providers offer cloud-based directory services, but they block access to some of the group policy settings and built-in accounts. These providers also usually prevent access to low-level hypervisor settings to prevent any intervention with the provider's security model and their ability to manage the cloud infrastructure.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Sizing instances to match exact requirements"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Sizing instances to match exact requirements&lt;/h2&gt;
 &lt;p&gt;A private cloud environment gives users the ability to establish granular control of the sizing of VM instances. When creating a new VM instance in AWS, for example, you have to choose from several predetermined instance sizes. The instance size determines the amount of memory, CPU and storage resources that are available to the VM instance.&lt;/p&gt;
 &lt;p&gt;However, the predefined instance sizes might not meet an organization's exact requirements. Users might have to settle for an instance that's smaller than they prefer, which can affect performance. Conversely, users might have to select an instance that is larger than needed, which increases costs and wastes resources. A private cloud provides flexibility to size instances to match exact requirements.&lt;/p&gt;
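The sizing tradeoff can be made concrete with a small sketch: with predefined sizes, a workload must round up to the next available instance, and the difference is wasted capacity. The size catalog below is hypothetical, not any provider's actual offering:

```python
# Illustrative sketch: a workload must round up to the smallest
# predefined instance size that covers its requirement.

SIZES_GB_RAM = [2, 4, 8, 16, 32, 64]  # hypothetical predefined memory sizes

def smallest_fit(required_gb):
    """Return the smallest predefined size that covers the requirement."""
    for size in SIZES_GB_RAM:
        if size >= required_gb:
            return size
    raise ValueError("requirement exceeds the largest predefined size")

required = 10  # workload needs 10 GB of memory
chosen = smallest_fit(required)
print(chosen, chosen - required)  # a 16 GB instance is chosen; 6 GB is wasted
```

In a private cloud, the VM could simply be provisioned with 10 GB, eliminating that waste.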
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Avoiding monthly bills and fluctuating costs"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Avoiding monthly bills and fluctuating costs&lt;/h2&gt;
 &lt;p&gt;One of the main reasons the public cloud became popular was the promise of cost savings. The public cloud offered consumption-based pricing, which freed organizations from having to make large upfront investments in server hardware and software.&lt;/p&gt;
 &lt;p&gt;In retrospect, this consumption-based pricing model has been beneficial for startups and for small businesses that lack the capital to purchase data center hardware. The public cloud can also save money when running new workloads for which an organization lacks the necessary hardware and software.&lt;/p&gt;
 &lt;p&gt;When it comes to existing workloads, however, many organizations aren't seeing these promised cloud savings. In some cases, organizations are even finding that it costs more to run a workload in the public cloud than it does to run it in-house. As such, the last few years have seen organizations &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Is-cloud-repatriation-the-only-answer"&gt;repatriating cloud-based workloads&lt;/a&gt; by bringing those workloads back into their own data centers.&lt;/p&gt;
 &lt;p&gt;One of the greatest benefits of using a private cloud has traditionally been avoiding monthly bills from a cloud provider. However, some of the major data center hardware vendors have begun to implement their own consumption-based pricing. In other words, hardware usage is metered, and the organization must pay a monthly fee based on its hardware use. Businesses must ensure that they are aware of any ongoing financial commitments associated with hardware purchases.&lt;/p&gt;
 &lt;p&gt;There are other advantages to using nonmetered hardware besides not receiving a bill each month. Organizations can avoid cost fluctuations that are so common with the public cloud. In the public cloud, costs tend to increase over time as data accumulates and users deploy more demanding workloads. These instances, in conjunction with cloud providers' complex billing calculations, can make cloud costs difficult to predict.&lt;/p&gt;
 &lt;p&gt;Finally, there have been several instances over the past several years of public cloud &lt;a href="https://www.techtarget.com/searchcloudcomputing/tutorial/How-to-survive-a-cloud-service-outage"&gt;providers experiencing outages&lt;/a&gt;. Private clouds aren't immune to outages. However, using a private cloud does ensure that users have the control to manage these outages without having to wait for their cloud provider to solve the issue. Avoiding further downtime can help organizations save money.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/types_of_private_cloud-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/types_of_private_cloud-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/types_of_private_cloud-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/types_of_private_cloud-f.png 1280w" alt="Compare the three types of private clouds." height="280" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;The different models of private cloud provide users with varying amounts of convenience and control.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="Staying compliant"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Staying compliant&lt;/h2&gt;
 &lt;p&gt;Private clouds were once the obvious choice for organizations who were subject to regulatory compliance mandates. Private clouds allow an organization to have complete and total control over the low-level cloud infrastructure, which can be helpful with compliance. More importantly, private clouds guarantee &lt;a href="https://www.techtarget.com/whatis/definition/data-sovereignty"&gt;data sovereignty&lt;/a&gt; and make it easy to guarantee that data is being stored domestically.&lt;/p&gt;
 &lt;p&gt;However, today, it is no longer so cut and dried. Public cloud has had time to mature, and service providers know that nearly all their biggest customers are subject to regulatory requirements. As such, public cloud providers go to great lengths to ensure that their services are compliant with various regulations. AWS, Microsoft and Google have &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Learn-the-basics-of-industry-cloud-platforms"&gt;industry cloud offerings&lt;/a&gt; that provide specialized services and capabilities to serve security- and compliance-focused industries, such as finance and healthcare.&lt;/p&gt;
 &lt;p&gt;Both public and private clouds can be made compliant. In some cases, it might be easier to ensure compliance within the public cloud. This is because public cloud providers have already done so much of the work. This is particularly true for organizations that opt to run a workload using a managed service within the public cloud, as opposed to building a solution using cloud-based VM instances.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Brien Posey is a former 22-time Microsoft MVP and a commercial astronaut candidate. In his more than 30 years in IT, he has served as a lead network engineer for the U.S. Department of Defense and a network administrator for some of the largest insurance companies in America.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>There's more to private clouds than control over infrastructure and security. Consider the benefits of private cloud services before jumping on the public cloud bandwagon.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/cloud_g1297025236.jpg</image>
            <link>https://www.techtarget.com/searchcloudcomputing/tip/Understand-major-private-cloud-services-advantages</link>
            <pubDate>Tue, 08 Oct 2024 09:00:00 GMT</pubDate>
            <title>Examine the top benefits of private cloud</title>
        </item>
        <item>
            <body>&lt;p&gt;Virtual machine backup mistakes can occur at any level. It's up to backup administrators to spot and rectify these errors to reduce the risk of data loss.&lt;/p&gt; 
&lt;p&gt;Virtual machine backups enable organizations to protect VMs with the same reliability and security as &lt;a href="https://www.techtarget.com/searchitoperations/tip/Virtual-servers-vs-physical-servers-What-are-the-differences"&gt;traditional physical server&lt;/a&gt; backup. However, there are several ways that VM backup can go awry.&lt;/p&gt; 
&lt;p&gt;Some roadblocks are relatively simple, such as bottlenecks and a lack of resources. More complex issues, such as guest OS difficulties and virtual disk corruption, can complicate data protection efforts significantly.&lt;/p&gt; 
&lt;p&gt;Below are 12 common virtual machine backup mistakes that administrators must watch out for. Catching these errors and quickly remedying them is key to &lt;a href="https://www.techtarget.com/searchitoperations/tip/Proactive-backup-measures-simplify-virtual-server-recovery"&gt;keeping VM data safe&lt;/a&gt;.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="1. Performing guest OS backups"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;1. Performing guest OS backups&lt;/h2&gt;
 &lt;p&gt;Backing up through the &lt;a href="https://www.techtarget.com/searchitoperations/definition/guest-OS-guest-operating-system"&gt;guest OS&lt;/a&gt; is probably the most common VM backup mistake. It's best to perform backups at the VM host level rather than installing a backup agent onto a VM's operating system.&lt;/p&gt;
 &lt;p&gt;The reason why it is best to avoid guest OS backups whenever possible is that they are inefficient and difficult to manage at scale. In addition, if several virtual machines run guest OS backups simultaneously, they can collectively cause significant performance bottlenecks.&lt;/p&gt;
 &lt;p&gt;Another reason to perform host-level backups is that doing so keeps administrators from having to manage each VM backup individually. New VMs are created all the time, and it's easy to forget to include a new VM in backups. Backing up at the host level avoids this problem altogether, because newly created virtual machines are backed up automatically.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="2. Backing up virtual hard disk files directly"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;2. Backing up virtual hard disk files directly&lt;/h2&gt;
 &lt;p&gt;Users should never back up virtual hard disk files directly at the physical storage device, bypassing the virtualization layer. Although there are ways of backing up a virtual hard disk outside the virtualization layer, doing so bypasses the safeguards built into the hypervisor. A simple mistake can corrupt the entire virtual hard disk, especially if snapshots or checkpoints are present.&lt;/p&gt;
&lt;/section&gt;  
&lt;section class="section main-article-chapter" data-menu-title="3. Treating VM snapshots as a backup alternative"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;3. Treating VM snapshots as a backup alternative&lt;/h2&gt;
 &lt;p&gt;VM &lt;a href="https://www.techtarget.com/searchvmware/definition/VMware-snapshot"&gt;snapshots&lt;/a&gt; -- or &lt;a href="https://www.techtarget.com/searchitoperations/tip/A-beginners-guide-to-Hyper-V-checkpoints"&gt;checkpoints&lt;/a&gt;, if using Microsoft -- preserve the state of a VM from the point in time when the snapshot was taken. In addition, users can create multiple snapshots to provide more than one restore point to choose from. While this can be useful in certain situations, it should never be used as a primary method for backing up VMs.&lt;/p&gt;
 &lt;p&gt;A virtual machine backup contains a full copy of the VM's virtual hard disk. Conversely, a snapshot does not copy a virtual machine's contents. That is &lt;a href="https://www.techtarget.com/searchitoperations/tip/Learn-the-differences-between-VM-snapshot-vs-backup"&gt;why snapshots are not true backups&lt;/a&gt;. If a storage problem causes a virtual machine to be lost, the snapshots will likely also be destroyed. Even if the snapshots remain, they are useless without the original virtual hard disk. Snapshots should be treated as a convenient feature rather than a backup alternative.&lt;/p&gt;
 &lt;p&gt;Snapshots also tend to diminish read performance, especially if multiple snapshots exist for a VM. Each hypervisor vendor has its own way of doing things, but generally speaking, the act of creating a snapshot causes a new virtual hard disk to be created. The original virtual hard disk is treated as read-only.&lt;/p&gt;
 &lt;p&gt;This means that when a read operation occurs, the hypervisor must read the snapshot virtual disk first and then perform a second read against the original virtual hard disk if the snapshot virtual hard disk does not contain the requested data. Creating multiple snapshots can result in several virtual hard disks having to be read every time a read operation occurs.&lt;/p&gt;
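The read path described above can be modeled in a few lines. This is a simplified sketch of the general differencing-disk behavior, not any specific hypervisor's implementation: each snapshot disk holds only the blocks written since it was taken, so a read walks the chain from the newest disk back toward the original virtual hard disk.

```python
# Simplified model of snapshot read resolution: disks are dicts mapping
# block numbers to data, ordered newest snapshot first, base disk last.

def read_block(chain, block):
    """Return the first copy of the block found along the snapshot chain."""
    for disk in chain:
        if block in disk:
            return disk[block]
    return None  # block never written on any disk in the chain

base = {0: "a0", 1: "b0", 2: "c0"}  # original (now read-only) virtual disk
snap1 = {1: "b1"}                   # block 1 rewritten after the first snapshot
snap2 = {2: "c2"}                   # block 2 rewritten after the second snapshot

chain = [snap2, snap1, base]
print([read_block(chain, b) for b in (0, 1, 2)])  # ['a0', 'b1', 'c2']
```

Note that reading block 0 required checking all three disks, which is exactly why long snapshot chains diminish read performance.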
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/VmgE3vQG3lc?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="4. Not keeping backup software up to date"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;4. Not keeping backup software up to date&lt;/h2&gt;
 &lt;p&gt;Backup applications are just like any other application in that they can contain bugs or security vulnerabilities. They must be kept up to date through patching. The unique thing about backup applications, however, is that a bug can &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Data-backup-failure-Five-tips-for-prevention"&gt;jeopardize entire backups&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;As an example, at one time there was an issue with VMware Data Recovery that caused its catalogs to become corrupt. A catalog is essentially an index of the data that has been backed up and is used by most backup applications. The catalog corruption issue was fixed with a patch, but some admins who failed to update their software in a timely manner found themselves having to rebuild their backup catalogs from scratch.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="5. Not assigning the right permissions"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;5. Not assigning the right permissions&lt;/h2&gt;
 &lt;p&gt;Some backup applications require each protected host server to have a service account that can facilitate the backup process. These types of backup applications can be prone to backup errors related to insufficient permissions. For example, a backup might fail if the account policy forces a password change, but the backup application itself is not made aware of the password change. When this occurs, the backup usually fails before any data can be processed, and the logs reflect a security error or a read failure.&lt;/p&gt;
 &lt;p&gt;As important as it is for a backup application to have the necessary permissions, it is also important to avoid assigning excessive permissions to a backup account. If a backup application backs data up to a backup vault, for example, then it is best to remove the permissions required to delete, encrypt or modify the vault or the data within it. That way, if the backup account were to become compromised, a cybercriminal would be unable to use that account to &lt;a href="https://www.techtarget.com/searchdatabackup/feature/How-ransomware-variants-are-neutralizing-data-backups"&gt;destroy existing backups&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="6. Using unsupported OS versions"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;6. Using unsupported OS versions&lt;/h2&gt;
 &lt;p&gt;Unsupported guest operating systems are another potential cause of virtual machine backup failures. For example, a backup application that fully supports backing up VMs that run Windows Server 2022 might view Windows Server 2025 as an unsupported operating system unless the &lt;a href="https://www.techtarget.com/searchitoperations/tip/Top-10-VM-backup-tools-for-VMware-and-Hyper-V"&gt;backup software&lt;/a&gt; is updated to make it aware of the new OS version.&lt;/p&gt;
 &lt;p&gt;The problems caused by a lack of OS support can be avoided by verifying backup support before upgrading virtual machines to a new operating system. It is possible, however, that even if a backup application does not recognize the operating system running on a VM, it might still be able to create an image backup of that VM.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="7. Overloading the host server"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;7. Overloading the host server&lt;/h2&gt;
 &lt;p&gt;Another virtual machine backup mistake is overstressing the host server. If a VM resides on a disk that is already I/O bound, then the disk might not be able to deliver sufficient performance to keep the backup from timing out. The fix to this problem is to correct the storage bottleneck.&lt;/p&gt;
 &lt;p&gt;While backing up at the virtualization layer reduces resource usage on VMs when backups occur, resource usage will still be high on the hosts and storage devices when backups are running.&lt;/p&gt;
 &lt;p&gt;Resource starvation problems often come down to &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Backup-scheduling-best-practices-to-ensure-availability"&gt;backup scheduling&lt;/a&gt;. Hosts typically share the same data stores in virtual environments, and bottlenecks caused by too many simultaneous VM backups on a single data store will affect all hosts that have VMs running on it. Likewise, if too many VMs on the same host are being backed up at the same time, it will create bottlenecks for all the VMs on that host.&lt;/p&gt;
 &lt;p&gt;A better option is to &lt;a href="https://www.techtarget.com/searchdatabackup/answer/The-benefits-and-drawbacks-of-CDP-solutions"&gt;use continuous data protection&lt;/a&gt;. CDP backups will initially be large and resource-intensive. However, once the initial backup is complete, all future backups will generally be small, because CDP captures changes continuously -- every few seconds to every few minutes -- rather than in one monolithic scheduled backup.&lt;/p&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="8. Virtual hard disk corruption"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;8. Virtual hard disk corruption&lt;/h2&gt;
 &lt;p&gt;Just as a physical hard disk can become corrupt, so too can a virtual hard disk. If corruption exists within a virtual hard disk, then a backup application might have trouble backing up the corresponding VM.&lt;/p&gt;
 &lt;p&gt;Typically when this occurs, the backup application logs will contain either read errors or data integrity errors. These errors can be clues that corruption might exist within a virtual hard disk.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="9. Not quiescing properly"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;9. Not quiescing properly&lt;/h2&gt;
 &lt;p&gt;Backups of VMs that are running Windows Server as a guest OS generally rely on the &lt;a href="https://www.techtarget.com/searchwindowsserver/tip/How-to-configure-shadow-copies-on-file-servers"&gt;Volume Shadow Copy Service&lt;/a&gt; (VSS). This service performs a quiesce operation that enables applications running on the VM to be backed up in an application-consistent -- as opposed to a crash-consistent -- manner.&lt;/p&gt;
 &lt;p&gt;The Volume Shadow Copy Service uses a collection of VSS writers to facilitate the backup of various applications and OS components, such as Active Directory. If any of the VSS writers required by the backup process were to fail, then the entire backup could fail as a result.&lt;/p&gt;
 &lt;p&gt;If an administrator suspects that a VSS failure might be to blame for a VM backup failure, they should check the state of the VSS writers within the virtual machine. The &lt;samp&gt;vssadmin list writers&lt;/samp&gt; command, within the guest OS, displays the state of each VSS writer.&lt;/p&gt;
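A quick way to triage that output is to flag any writer whose state is not Stable. The sample text below only mimics the general shape of `vssadmin list writers` output for illustration; run the real command inside the guest OS to get the actual writer states:

```python
# Sketch of triaging `vssadmin list writers` output: list the writers
# whose State line is not Stable. SAMPLE_OUTPUT is illustrative only.

SAMPLE_OUTPUT = """\
Writer name: 'System Writer'
   State: [1] Stable
   Last error: No error
Writer name: 'NTDS'
   State: [8] Failed
   Last error: Non-retryable error
"""

def unstable_writers(text):
    """Return the names of writers whose reported state is not Stable."""
    bad, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Writer name:"):
            current = line.split(":", 1)[1].strip().strip("'")
        elif line.startswith("State:") and "Stable" not in line:
            bad.append(current)
    return bad

print(unstable_writers(SAMPLE_OUTPUT))  # ['NTDS']
```

A failed or waiting writer is usually resolved by restarting its associated service or rebooting the VM, after which the backup can be retried.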
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="10. Using buggy applications"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;10. Using buggy applications&lt;/h2&gt;
 &lt;p&gt;Virtual machine backups can fail because an application that is running on a VM is buggy. For example, Microsoft once released an Exchange Server patch called &lt;a href="https://www.techtarget.com/searchenterprisedesktop/tip/Windows-10-updates-to-avoid-and-how-to-address-them"&gt;Cumulative Update&lt;/a&gt; 3 for Exchange Server 2013. Among other things, the patch contained a fix for a bug that randomly caused Exchange Server backups to fail.&lt;/p&gt;
 &lt;p&gt;If you are experiencing inconsistent problems with backing up a VM, check to see if there are any known bugs with applications running on the VM.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="11. Security software configuration issues"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;11. Security software configuration issues&lt;/h2&gt;
 &lt;p&gt;Occasionally, security software might keep a backup from completing properly. For example, there have been plenty of documented instances of &lt;a href="https://www.techtarget.com/searchsecurity/tip/10-antimalware-tools-for-ransomware-protection-and-removal"&gt;antimalware software&lt;/a&gt; interfering with certain backup applications. Similarly, some backup applications might require exceptions to be added to firewalls.&lt;/p&gt;
&lt;/section&gt;  
&lt;section class="section main-article-chapter" data-menu-title="12. Starving backup servers of resources"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;12. Starving backup servers of resources&lt;/h2&gt;
 &lt;p&gt;Backup servers are basically like pumps: Data is read from a source, goes into the backup server and then is sent from the backup server to the target device. The volume that a backup server can handle is determined by the resources assigned to it, and the more resources that are available, the faster it can pump data.&lt;/p&gt;
 &lt;p&gt;Backing up VMs can heavily tax primary and backup storage resources, as well as the network, but there is more to backups than just moving data from Point A to Point B. Backup servers handle advanced functions, including &lt;a href="https://www.techtarget.com/searchdatabackup/tip/Compression-deduplication-and-encryption-Whats-the-difference"&gt;deduplication, compression&lt;/a&gt; and determining which disk blocks need to be backed up. For a backup server to achieve maximum throughput, it must have sufficient resources to avoid creating a bottleneck in &lt;a target="_blank" href="https://mosimtec.com/5-insightful-bottleneck-analysis-examples" rel="noopener"&gt;any one resource area&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Backup administrators should monitor the backup server's resource usage. In practice, it's better for a backup server to have too many resources than too few: a server with sufficient resources can move data at maximum speed, which decreases the time required to back up data.&lt;/p&gt;
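 &lt;p&gt;On a Windows-based backup server, for example, the built-in performance counters can reveal which resource is the bottleneck. A minimal sketch using PowerShell's &lt;samp&gt;Get-Counter&lt;/samp&gt; cmdlet -- the counter paths are standard, but the sampling interval and count shown here are illustrative:&lt;/p&gt; 
 &lt;pre&gt;# Sample CPU, free memory and disk queue depth every 5 seconds for 1 minute
Get-Counter -Counter '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length' `
    -SampleInterval 5 -MaxSamples 12&lt;/pre&gt;
 &lt;p&gt;Sustained high CPU utilization, low available memory or a disk queue length that stays above the number of spindles during a backup window all point to the resource that needs attention.&lt;/p&gt;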
 &lt;p&gt;&lt;i&gt;Brien Posey is a 22-time Microsoft MVP and a commercial astronaut candidate. In his more than 30 years in IT, he has served as a lead network engineer for the U.S. Department of Defense and a network administrator for some of the largest insurance companies in America.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Despite an administrator's best efforts, virtual machine backups can fail. Determine the cause of the failure and modify the VM backup strategy to prevent future mistakes.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/folder-files06.jpg</image>
            <link>https://www.techtarget.com/searchdatabackup/tip/Top-five-mistakes-made-when-backing-up-VMs-and-how-to-prevent-them</link>
            <pubDate>Wed, 17 Jul 2024 15:00:00 GMT</pubDate>
            <title>12 common virtual machine backup mistakes</title>
        </item>
        <item>
            <body>&lt;p&gt;IT pros commonly use Microsoft Configuration Manager or Windows Server Update Services to manage patches on Windows Server and desktop systems. Both options are well known, widely deployed and integrated into the larger Windows ecosystem, making either one a natural fit for organizations running Microsoft's OSes.&lt;/p&gt; 
&lt;p&gt;The &lt;a href="https://www.techtarget.com/searchenterprisedesktop/definition/patch-management"&gt;patch management&lt;/a&gt; features in Configuration Manager -- formerly System Center Configuration Manager and now part of the Microsoft Intune family of products -- can help administrators manage the complex tasks of tracking and applying updates. Configuration Manager includes a set of integrated tools for updating software manually or automatically, as well as controlling when and &lt;a href="https://www.techtarget.com/searchsecurity/tip/5-enterprise-patch-management-best-practices"&gt;how patches are deployed&lt;/a&gt;. It also offers other management functions, giving IT a single tool to carry out many of the tasks associated with administering Windows computers.&lt;/p&gt; 
&lt;p&gt;Configuration Manager uses Windows Server Update Services (&lt;a href="https://www.techtarget.com/searchwindowsserver/definition/Windows-Server-Update-Services-WSUS"&gt;WSUS&lt;/a&gt;) to synchronize updates and to conduct update applicability scans. However, organizations don't need Configuration Manager to use WSUS, which is a free server role in Windows Server used to manage and distribute updates. Like Configuration Manager, WSUS provides an integrated tool to patch Windows machines but without the expense or overhead.&lt;/p&gt; 
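&lt;p&gt;Standing up WSUS on Windows Server requires only the free role and a post-install step. A sketch of the two commands involved, assuming &lt;samp&gt;D:\WSUS&lt;/samp&gt; as an illustrative content directory:&lt;/p&gt; 
&lt;pre&gt;# Install the WSUS role and its management console
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools
# Complete setup by pointing WSUS at a local content directory
&amp; 'C:\Program Files\Update Services\Tools\wsusutil.exe' postinstall CONTENT_DIR=D:\WSUS&lt;/pre&gt; 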
&lt;section class="section main-article-chapter" data-menu-title="Why consider WSUS or Configuration Manager alternatives?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why consider WSUS or Configuration Manager alternatives?&lt;/h2&gt;
 &lt;p&gt;Although many organizations use Configuration Manager and WSUS for Windows patching, both products have limitations. For example, some IT professionals consider Configuration Manager to be expensive and overly complex. The product also offers limited support for non-Windows platforms and applications and must be installed on Windows Server.&lt;/p&gt;
 &lt;p&gt;Like Configuration Manager, WSUS also has to be installed on Windows Server, which can bring additional licensing fees. The product also has a reputation for being inefficient, cumbersome and buggy at times. In addition, WSUS provides only rudimentary automation and little in the way of reporting capabilities.&lt;/p&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchenterprisedesktop/tip/12-best-patch-management-software-and-tools"&gt;Third-party patching tools&lt;/a&gt; seek to address these limitations, either by extending WSUS or Configuration Manager or by providing a separate tool for patch management. Third-party tools can help streamline and simplify patching operations, while providing greater control over the patching process. But not all patching tools are the same.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Top WSUS alternatives for patch management"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Top WSUS alternatives for patch management&lt;/h2&gt;
 &lt;p&gt;Here, in alphabetical order, are eight prominent WSUS alternatives, each of which takes a different approach to updating the Windows OS and the applications that run on it.&lt;/p&gt;
 &lt;h3&gt;1. Automox&lt;/h3&gt;
 &lt;p&gt;Automox is a cloud-based, cross-platform IT operations system rather than purely a patch management tool. In addition to patching, it can be used for configuration management, software deployment, reporting and policy enforcement. Another key feature is single-click remediation of vulnerabilities that have been identified by CrowdStrike, Rapid7 or Tenable.&lt;/p&gt;
 &lt;p&gt;Automox performs patch management of OSes and applications across a variety of endpoints. Agents are currently available for Windows, macOS and Linux, and Automox can natively patch dozens of third-party applications across the platforms.&lt;/p&gt;
 &lt;p&gt;Automox is a SaaS product, and the vendor offers two types of subscriptions: Standard and Pro. The Standard plan is geared toward organizations that need patch management capabilities. Besides OS and application patching, it includes features such as full device inventories, basic reporting and end-user notifications. Pro includes all the same features as Standard but adds multizone management, automated vulnerability remediation, remote control and other capabilities.&lt;/p&gt;
 &lt;h3&gt;2. GFI LanGuard&lt;/h3&gt;
 &lt;p&gt;Emphasizing cross-platform patch management and vulnerability scanning for smaller organizations, GFI LanGuard is a network security tool for patch management, network auditing and vulnerability detection. It can be used with or without agents, and its reporting engine has been specifically designed to assist with regulatory compliance for &lt;a href="https://www.techtarget.com/searchhealthit/definition/HIPAA"&gt;HIPAA&lt;/a&gt;, PCI DSS, etc. It recently added &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/generative-AI"&gt;generative AI&lt;/a&gt; for analytics and automated configuration.&lt;/p&gt;
 &lt;p&gt;GFI LanGuard supports patch management for Windows, macOS and Linux systems, as well as third-party applications. The software scans managed nodes to identify missing patches and also checks for more than 60,000 known vulnerabilities. If a vulnerability is detected, the software provides a graphical indication of its severity, as well as a recommended course of action. GFI LanGuard is also designed to integrate with over 4,000 security applications. Its vulnerability scanning is not limited to Windows, macOS and Linux, either: it also supports Apple iOS and Android and can scan some network hardware, such as printers and switches.&lt;/p&gt;
 &lt;p&gt;GFI LanGuard is licensed based on the number of nodes to be managed. GFI offers three different licensing plans. Each plan is based on an annual subscription and includes all the same features. The only difference between the plans is the cost, with larger organizations paying less per node. GFI LanGuard is also part of the GFI Unlimited plan, which includes licenses for a number of other GFI products.&lt;/p&gt;
 &lt;h3&gt;3. Ivanti Security Controls&lt;/h3&gt;
 &lt;p&gt;An enterprise-grade tool running on physical or virtual hardware, Ivanti Security Controls, formerly Patch for Windows, is a versatile patch management product for Windows computers, VMs and VM templates. It also supports &lt;a href="https://www.techtarget.com/searchvmware/definition/VMware-ESXi"&gt;VMware ESXi&lt;/a&gt; hosts, Windows applications and Linux distributions, including Red Hat Enterprise Linux and CentOS. Security Controls has a centralized interface designed to make it easy to scan physical and virtual systems, assess and deploy patches, and schedule remote operations while providing granular privilege management to balance access and security.&lt;/p&gt;
 &lt;p&gt;Admins can configure Security Controls to automatically run scheduled recurring scans and deploy any missing patches that are detected during the scans. Security Controls can detect and categorize software and hardware, track asset inventory over time and control a computer's power state, such as shutdowns and restarts. It also gives admins a way to run &lt;a href="https://www.techtarget.com/searchwindowsserver/definition/PowerShell"&gt;PowerShell&lt;/a&gt; scripts to carry out tasks or automate operations. The REST APIs integrate Security Controls with other products and support remote access and control while offering a method to automate operations.&lt;/p&gt;
 &lt;p&gt;Security Controls generates multiple reports that provide a variety of information, such as the installed OSes, machine power states, patch deployments and status, and machine compliance. Admins can use database queries to generate custom reports. Security Controls can display applications and their services and components, as well as import the &lt;a href="https://www.techtarget.com/searchsecurity/definition/Common-Vulnerabilities-and-Exposures-CVE"&gt;CVE&lt;/a&gt; list. It can also show which patches are related to each CVE.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/Ivanti_Security_Controls_protects_Windows_and_Linux_machines_ABN.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/Ivanti_Security_Controls_protects_Windows_and_Linux_machines_ABN_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/Ivanti_Security_Controls_protects_Windows_and_Linux_machines_ABN_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/Ivanti_Security_Controls_protects_Windows_and_Linux_machines_ABN.jpg 1280w" alt="Ivanti Security Controls patching tool" data-credit="Ivanti" height="367" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Ivanti Security Controls protects Windows and Linux machines.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;4. Kaseya VSA&lt;/h3&gt;
 &lt;p&gt;Cloud-based Kaseya VSA is a remote monitoring and management service that includes patch management capabilities to install, deploy and update software on Windows, macOS and Linux machines. It uses policy-based patch management that automates and standardizes software maintenance. Admins can approve, schedule and install patches, as well as schedule regular network scans for analyzing computers and automating software updates.&lt;/p&gt;
 &lt;p&gt;Kaseya VSA has a centralized console to assist with patch management operations, including uninstalling and repairing software. Admins can scan computers for missing patches, view a summary of the patch status for each machine and exclude patches from specific machines. Kaseya VSA can also run procedures before or after updates. For example, an admin can use a procedure to automate setting up a newly added computer.&lt;/p&gt;
 &lt;p&gt;Kaseya VSA gives administrators both manual and automated options for updating software while offering granular control over the patching process. Admins can also set up patch reports to see compliance across their environments and quickly identify endpoints and applications that need attention. In addition, Kaseya VSA aggregates the patch status of all machines to see which CVEs need to be addressed on each. Admins can also use the product to access recent network scans to identify installed and missing patches.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/Kaseya_vsa_monitors_and_remediates_vulnerabilities_ABN.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/Kaseya_vsa_monitors_and_remediates_vulnerabilities_ABN_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/Kaseya_vsa_monitors_and_remediates_vulnerabilities_ABN_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/Kaseya_vsa_monitors_and_remediates_vulnerabilities_ABN.jpg 1280w" alt="Kaseya VSA patch management" data-credit="Kaseya" height="309" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Kaseya VSA monitors and remediates vulnerabilities.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;5. ManageEngine Patch Manager Plus&lt;/h3&gt;
 &lt;p&gt;ManageEngine, a division of Zoho, offers Patch Manager Plus, a versatile patch management tool available as on-premises software or as a cloud service. It supports Windows, macOS and &lt;a href="https://www.techtarget.com/searchenterprisedesktop/tip/3-crucial-Linux-patch-management-best-practices-for-IT"&gt;Linux endpoints&lt;/a&gt;, along with more than 850 third-party applications. Admins can carry out patching operations from a single interface and use the vendor's prebuilt packages to streamline the process. They can also automate patch deployment for OSes and applications.&lt;/p&gt;
 &lt;p&gt;Patch Manager Plus includes numerous auditing, analytics and reporting features for visibility into the patch status of computers and applications. ManageEngine offers a free edition and two paid editions, Professional and Enterprise. Professional is designed for LANs, while Enterprise is suitable for WAN use. The Enterprise edition includes all of the features found in Professional but adds antivirus definition updates, the ability to test and approve patches and a distribution server that can help conserve bandwidth. The free edition offers features similar to the paid editions but is limited to 20 workstations and five servers.&lt;/p&gt;
 &lt;p&gt;With the Enterprise edition, IT can automate the &lt;a href="https://www.techtarget.com/searchenterprisedesktop/tip/Use-this-10-step-patch-management-process-to-ensure-success"&gt;entire patch management process&lt;/a&gt;. This includes scanning endpoints for missing patches, downloading patches from vendor websites, deploying the patches and generating reports on the patch management process. However, all three editions include such features as service pack deployments, Active Directory authentication, roaming user patching, role-based administration and on-demand remote shutdown.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/Manage_Engine_PatchManager_Plus_Missing_Patches.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/Manage_Engine_PatchManager_Plus_Missing_Patches_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/Manage_Engine_PatchManager_Plus_Missing_Patches_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/Manage_Engine_PatchManager_Plus_Missing_Patches.jpg 1280w" alt="The Patch Manager Plus patching tool" data-credit="ManageEngine" height="317" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;ManageEngine Patch Manager Plus checks for missing patches.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;6. PDQ Deploy&lt;/h3&gt;
 &lt;p&gt;PDQ Deploy from PDQ.com is a lightweight software deployment tool for automating patch management on Windows Server and desktop machines. It supports more than 250 Windows applications, which can be updated using the vendor's prebuilt, pretested packages. In addition, IT can create custom packages, as well as copy over files, send messages to users or force reboots on managed systems. PDQ offers plans for small business, education and enterprise use. Licensing costs are based on the number of admins who will be using the software. A 14-day trial is available.&lt;/p&gt;
 &lt;p&gt;PDQ Deploy uses a centralized console for installing, uninstalling, updating, repairing and making other changes across the network. The console also provides access to the prebuilt application packages. In addition, PDQ offers a CLI for working with packages.&lt;/p&gt;
 &lt;p&gt;Admins can also use scripts to automate operations with support for several scripting languages, including &lt;a href="https://www.techtarget.com/whatis/definition/Visual-Basic-VB"&gt;Visual Basic&lt;/a&gt;, PowerShell and batch files. In addition, IT can set up multiple distribution points for sharing custom packages, schedules and target lists.&lt;/p&gt;
 &lt;p&gt;For most deployments, admins use the scheduling capabilities to deploy packages at specified intervals. They can also create automatic deployments for new package versions as they become available from the package library. In addition, PDQ Deploy can send an email with details about patch deployments, including which computers or software were updated and which systems might need more attention. Admins can also access built-in reports that provide deployment and scheduling information.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/PDQ_Deploy.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/PDQ_Deploy_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/PDQ_Deploy_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/PDQ_Deploy.jpg 1280w" alt="PDQ Deploy patch manager" data-credit="PDQ.com" height="464" width="558"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;PDQ Deploy automates patch management and updates system changes.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;7. Quest KACE Systems Management Appliance&lt;/h3&gt;
 &lt;p&gt;Quest KACE Systems Management Appliance is designed to be a &lt;a href="https://www.techtarget.com/searchenterprisedesktop/definition/unified-endpoint-management-UEM"&gt;unified endpoint management&lt;/a&gt; system that can do far more than just basic patch management. Admins can also use it for tasks such as software distribution, software license tracking, and server management and monitoring. There is even a mobile app for managing service tickets on the go.&lt;/p&gt;
 &lt;p&gt;The appliance can perform patch management for Windows, Linux and macOS systems, along with a vast number of applications. Quest uses a system of smart labels that enable admins to classify both endpoints and updates and then perform actions or create reports based on the labels. For example, patches might be labeled by vendor, date or severity. The appliance can also perform automated vulnerability scans, as well as configuration management and policy enforcement. In multisite environments, the appliance can use replication servers to minimize bandwidth consumption during patch management.&lt;/p&gt;
 &lt;p&gt;Quest offers several deployment options for KACE Systems Management Appliance. It can be deployed as an on-premises virtual appliance or hosted in VMware, Hyper-V, Nutanix or the Microsoft Azure cloud.&lt;/p&gt;
 &lt;h3&gt;8. SolarWinds Patch Manager&lt;/h3&gt;
 &lt;p&gt;Intended for organizations that are already using WSUS or Configuration Manager, SolarWinds Patch Manager offers broad support for third-party applications and more comprehensive reporting capabilities than most tools. It builds on and extends WSUS and Configuration Manager to provide a patch management tool for &lt;a target="_blank" href="https://www.forbes.com/sites/forbestechcouncil/2019/08/06/why-software-patches-dont-fix-everything" rel="noopener"&gt;addressing&lt;/a&gt; software vulnerabilities and managing third-party applications. IT teams can automatically apply Windows updates using customized schedules that target specific business groups or system categories, based on such factors as OS or IP range. Patch Manager also helps teams proactively identify which Windows machines need to be patched and then quickly deploy the patches to those systems, including virtualized workloads.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/SolarWinds_Patch_Manager_WSUS_sync_status_and_settings_ABN.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/SolarWinds_Patch_Manager_WSUS_sync_status_and_settings_ABN_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/SolarWinds_Patch_Manager_WSUS_sync_status_and_settings_ABN_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/SolarWinds_Patch_Manager_WSUS_sync_status_and_settings_ABN.jpg 1280w" alt="SolarWinds Patch Manager" data-credit="SolarWinds" height="337" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;SolarWinds Patch Manager provides WSUS sync status and settings.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Admins can create pre- and post-update package scenarios to verify third-party patch deployments. Patch Manager includes a Custom Package Wizard that lets admins build packages for any application without the need for complex scripting or System Center Updates Publisher. SolarWinds also offers prebuilt, pretested application packages that admins can quickly deploy through WSUS or Configuration Manager.&lt;/p&gt;
 &lt;p&gt;Patch Manager features a web console for centralizing patch management and viewing important patch information. The console offers several reporting options for determining patch status and demonstrating patch compliance to auditors. Admins can also view information about the latest available patches, missing patches on their systems and the general health of the patch environment. In addition, Patch Manager can notify admins when updates become available, either through the console or by email.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Brien Posey is a 15-time Microsoft MVP with two decades of IT experience. He has served as a lead network engineer for the U.S. Department of Defense and as a network administrator for some of the largest insurance companies in America.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Robert Sheldon is a technical consultant and freelance technology writer. He has written numerous books, articles and training materials related to Windows, databases, business intelligence and other areas of technology.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Compare the features of eight prominent patch management tools for Microsoft OSes and third-party applications to find the right option for your organization.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/container_g1275954204.jpg</image>
            <link>https://www.techtarget.com/searchwindowsserver/feature/5-WSUS-alternatives-for-patch-management</link>
            <pubDate>Wed, 15 May 2024 09:00:00 GMT</pubDate>
            <title>8 WSUS alternatives for patch management</title>
        </item>
        <item>
            <body>&lt;section class="section main-article-chapter" data-menu-title="What is a data center?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is a data center?&lt;/h2&gt;
 &lt;p&gt;A data center is a facility composed of networked computers, storage systems and computing infrastructure that organizations use to assemble, process, store and disseminate large amounts of data. A business typically relies heavily on the applications, services and data contained within a data center, making it a critical asset for everyday operations. The main components of a data center typically include routers, firewalls, switches, storage systems and &lt;a href="https://www.techtarget.com/searchnetworking/definition/Application-delivery-controller"&gt;application delivery controllers&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;  
&lt;section class="section main-article-chapter" data-menu-title="What is a modern data center?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is a modern data center?&lt;/h2&gt;
 &lt;p&gt;In the past, data centers were highly controlled physical environments. However, modern infrastructures have shifted away from physical servers to virtualized environments, facilitating the deployment of applications and workloads across diverse &lt;a href="https://www.techtarget.com/searchstorage/ehandbook/Who-needs-a-multi-cloud-environment-and-how-best-to-deploy-one"&gt;multi-cloud environments&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Modern data centers can support a variety of workloads, from traditional enterprise apps to modern cloud-native services. These enterprise data centers increasingly incorporate facilities for securing and protecting &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/cloud-computing"&gt;cloud computing&lt;/a&gt; and in-house, on-site resources. They're designed to meet the growing demands of businesses for computing resources, while optimizing energy efficiency and reducing operational costs.&lt;/p&gt;
 &lt;p&gt;As enterprises turn to cloud computing and multi-cloud environments, conventional &lt;a href="https://www.techtarget.com/searchdatacenter/feature/Will-data-centers-become-obsolete"&gt;data centers are evolving&lt;/a&gt;, blurring the lines between the data centers of cloud providers and those of enterprises.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="How do data centers work?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How do data centers work?&lt;/h2&gt;
 &lt;p&gt;A data center facility enables an organization to consolidate its resources and infrastructure for data processing, storage and communications. It includes the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Systems for storing, sharing, accessing and processing data across the organization.&lt;/li&gt; 
  &lt;li&gt;Physical infrastructure for supporting data processing and data communications.&lt;/li&gt; 
  &lt;li&gt;Utilities such as cooling, electricity, network security access and uninterruptible power supplies (&lt;a href="https://www.techtarget.com/searchdatacenter/definition/uninterruptible-power-supply"&gt;UPSes&lt;/a&gt;).&lt;/li&gt; 
  &lt;li&gt;&lt;a href="https://www.techtarget.com/searchdatacenter/tip/Data-center-safety-checklist-Best-practices-to-follow"&gt;Physical safety mechanisms&lt;/a&gt;, such as monitoring across the entire building, safety personnel, metal detectors and &lt;a href="https://www.techtarget.com/searchsecurity/definition/biometrics"&gt;biometric&lt;/a&gt; systems.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Gathering all these resources in a data center enables the organization to do the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Protect proprietary systems and data.&lt;/li&gt; 
  &lt;li&gt;Centralize IT and data processing employees, contractors and vendors.&lt;/li&gt; 
  &lt;li&gt;Apply &lt;a href="https://www.techtarget.com/searchsecurity/tip/How-to-write-an-information-security-policy-plus-templates"&gt;information security&lt;/a&gt; controls to proprietary systems and data.&lt;/li&gt; 
  &lt;li&gt;Realize economies of scale by consolidating sensitive systems in one place.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Why are data centers important?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why are data centers important?&lt;/h2&gt;
 &lt;p&gt;Data centers support almost all of an enterprise's computation, data storage, networking and business applications. To the extent that the business of a modern enterprise is run on computers, the data center &lt;em&gt;is&lt;/em&gt; the business.&lt;/p&gt;
 &lt;p&gt;Data centers are crucial for the following reasons:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Information and storage processing.&lt;/b&gt; Data centers are essentially huge computers that store and process vast amounts of information, making them indispensable for tech firms and businesses that rely on digital data.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Support for IT operations. &lt;/strong&gt;Data centers support IT operations and critical applications, providing infrastructure for computing, storage and networking needs. They can be owned and operated by organizations, managed by third-party or &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Top-public-cloud-providers-A-brief-comparison"&gt;public cloud providers&lt;/a&gt;, or rented spaces inside &lt;a href="https://www.techtarget.com/searchdatacenter/definition/colocation-colo"&gt;colocation&lt;/a&gt; facilities.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Support for cloud technology. &lt;/b&gt;With the increasing reliance on cloud technology, cloud data centers have become popular. Tech companies dedicated to cloud computing typically operate cloud data centers, further emphasizing the significance of data centers in modern technology.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Proximity and connectivity. &lt;/strong&gt;Data centers are ideally located in areas that are minimally susceptible to natural disasters, near stable and reliable sources of electricity and with access to high-capacity network connections. The closer a data center is to the users it serves, the lower the network latency.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Data management and security.&lt;/strong&gt; Data centers house crucial organizational data, user data and important applications, making their security and reliability essential for businesses. They also offer scalability, security, efficiency and state-of-the-art technology to address the growing demands of businesses.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Business agility and resiliency. &lt;/strong&gt;As businesses become increasingly digitalized, data has become the most important asset in a &lt;a href="https://www.techtarget.com/searchcio/definition/digital-economy"&gt;digital economy&lt;/a&gt;. Data centers are essential for managing data and ensuring compliance and security, all of which are crucial for organizational adaptability and resilience.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What are the core components of data centers?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the core components of data centers?&lt;/h2&gt;
 &lt;p&gt;Elements of a data center are generally divided into the following primary categories:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Facility.&lt;/strong&gt; This includes the physical location with security access controls and sufficient square footage to house the data center's infrastructure and equipment.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Networking equipment.&lt;/strong&gt; This equipment supports the storage and processing of applications and data by handling tasks such as switching, routing, &lt;a href="https://www.techtarget.com/searchnetworking/definition/load-balancing"&gt;load balancing&lt;/a&gt; and analytics.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Enterprise data storage.&lt;/strong&gt; A modern data center houses an organization's data systems in a well-protected physical and storage infrastructure along with servers, storage subsystems, networking switches, routers, firewalls, cabling and physical racks.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Support infrastructure.&lt;/strong&gt; This equipment helps sustain the highest possible uptime. Components of the support infrastructure include the following: 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Power distribution and supplemental power subsystems.&lt;/li&gt; 
    &lt;li&gt;Electrical switching.&lt;/li&gt; 
    &lt;li&gt;UPSes.&lt;/li&gt; 
    &lt;li&gt;Backup generators.&lt;/li&gt; 
    &lt;li&gt;Ventilation and data center cooling systems, such as in-row cooling configurations and &lt;a href="https://www.techtarget.com/searchdatacenter/definition/computer-room-air-conditioning-unit"&gt;computer room air conditioners&lt;/a&gt;.&lt;/li&gt; 
    &lt;li&gt;Adequate provisioning for network carrier connectivity.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Operational staff.&lt;/strong&gt; These employees are required to maintain and monitor IT and infrastructure equipment around the clock.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/typical_data_center_equipment-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/typical_data_center_equipment-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/typical_data_center_equipment-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/typical_data_center_equipment-f.png 1280w" alt="A chart showing the components of a data center." height="453" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Several types of equipment are used in data centers.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What are the types of data centers?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the types of data centers?&lt;/h2&gt;
 &lt;p&gt;Depending on the ownership and precise requirements of a business, a data center's size, shape, location and capacity can vary.&lt;/p&gt;
 &lt;p&gt;Common data center types include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Enterprise data centers. &lt;/strong&gt;These proprietary data centers are built and owned by organizations for their internal end users. They support the IT operations and critical applications of a single organization and can be located both on premises and off-site.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Managed services data centers. &lt;/strong&gt;Managed by third parties, these data centers provide all aspects of data storage and computing services. Companies lease, instead of buy, the infrastructure and services.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Cloud-based data centers.&lt;/strong&gt; These off-site distributed data centers are managed by third-party or public cloud providers, such as &lt;a href="https://www.techtarget.com/searchaws/definition/Amazon-Web-Services"&gt;Amazon Web Services&lt;/a&gt;, Google or Microsoft. Based on an &lt;a href="https://www.techtarget.com/searchsecurity/tip/5-step-IaaS-security-checklist-for-cloud-customers"&gt;infrastructure-as-a-service model&lt;/a&gt;, the leased infrastructure enables customers to provision a virtual data center within minutes.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Colocation data centers.&lt;/strong&gt; These rental spaces inside &lt;a href="https://www.techtarget.com/searchdatacenter/feature/Top-5-colocation-providers"&gt;colocation facilities are owned by third parties&lt;/a&gt;. The renting organization provides the hardware, and the data center provides and manages the infrastructure, including physical space, bandwidth, cooling and security systems. Colocation is appealing to organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Edge data centers. &lt;/strong&gt;These are smaller facilities that solve the latency problem by being geographically closer to the edge of the network and data sources. Edge data centers also enhance application performance and customer experience, particularly for real-time, data-intensive tasks, such as &lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/big-data-analytics"&gt;big data analytics&lt;/a&gt;, &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence"&gt;artificial intelligence&lt;/a&gt; and content delivery.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Hyperscale data centers. &lt;/strong&gt;Synonymous with large-scale providers, such as Amazon, Meta and Google, these hyperscale computing infrastructures maximize hardware density, while minimizing the cost of cooling and administrative overhead.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Micro data centers.&lt;/strong&gt; Micro data centers are compact facilities associated with &lt;a href="https://www.techtarget.com/searchdatacenter/definition/edge-computing"&gt;edge computing&lt;/a&gt;. While smaller than traditional data centers, micro data centers deliver comparable functionality. They simplify edge computing setups through quick deployment and reduced space and power requirements. A standard micro data center container or locker typically houses fewer than 10 servers and 100 &lt;a href="https://www.techtarget.com/searchitoperations/definition/virtual-machine-VM"&gt;virtual machines&lt;/a&gt;.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What are the standards of a data center?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the standards of a data center?&lt;/h2&gt;
 &lt;p&gt;Small businesses can operate successfully with several servers and storage arrays networked within a closet or small room, while major computing organizations might fill an enormous warehouse space with data center equipment and infrastructure. In other cases, data centers can be assembled in mobile installations, such as shipping containers, also known as &lt;em&gt;data centers in a box&lt;/em&gt;, which can be moved and deployed as required.&lt;/p&gt;
  &lt;p&gt;Regardless of size, data centers can be classified by levels of reliability or resilience, sometimes referred to as &lt;em&gt;data center tiers&lt;/em&gt;. In 2005, the American National Standards Institute and the Telecommunications Industry Association published standard &lt;a href="https://www.techtarget.com/searchdatacenter/tip/A-quick-primer-on-the-ANSI-TIA-942-standard"&gt;ANSI/TIA-942&lt;/a&gt;, "Telecommunications Infrastructure Standard for Data Centers," which defines four tiers of data center design and implementation guidelines.&lt;/p&gt;
 &lt;p&gt;Tiers can be differentiated by available resources, data center capacities or uptime guarantees. The &lt;a target="_blank" href="https://uptimeinstitute.com/tiers" rel="noopener"&gt;Uptime Institute&lt;/a&gt; defines data center tiers as follows:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Tier I.&lt;/strong&gt; These are the most basic types of data centers, and they incorporate a UPS. Tier I data centers don't provide redundant systems but should guarantee at least 99.671% uptime.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Tier II.&lt;/strong&gt; These data centers include system, power and cooling redundancy and guarantee at least 99.741% uptime. An annual downtime of 22 hours can be expected from a Tier II data center.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Tier III. &lt;/strong&gt;These data centers provide partial &lt;a href="https://www.techtarget.com/searchdisasterrecovery/definition/fault-tolerant"&gt;fault tolerance&lt;/a&gt;, 72 hours of outage protection, full redundancy and a 99.982% uptime guarantee.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Tier IV.&lt;/strong&gt; These data centers guarantee 99.995% uptime -- or no more than 26.3 minutes of downtime per year -- as well as full fault tolerance, system redundancy and 96 hours of outage protection.&lt;/li&gt; 
 &lt;/ul&gt;
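The tier uptime guarantees above map directly onto maximum annual downtime: the allowed downtime is simply (1 - uptime) multiplied by the minutes in a year. A minimal sketch of that arithmetic, using the percentages from the list above:

```python
# Convert a tier's uptime guarantee into its maximum annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(uptime_pct: float) -> float:
    """Maximum minutes of downtime per year for a given uptime percentage."""
    return (1 - uptime_pct / 100) * MINUTES_PER_YEAR

for tier, uptime in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {annual_downtime_minutes(uptime) / 60:.1f} hours/year")
```

Tier II works out to about 22.7 hours, matching the roughly 22 hours of expected annual downtime cited above, and Tier IV's 0.4 hours is the 26.3 minutes noted in the list.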
 &lt;p&gt;Beyond the basic issues of cost and facility size, sites are selected based on a multitude of criteria, such as geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications, and even the prevailing political environment.&lt;/p&gt;
 &lt;p&gt;Once a site is secured, the data center architecture can be designed with attention to the mechanical and electrical infrastructure, as well as the composition and layout of the IT equipment. All these issues are guided by the availability and efficiency goals of the desired data center tier.&lt;/p&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/ucmnHYCawyA?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="How are data centers managed?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How are data centers managed?&lt;/h2&gt;
 &lt;p&gt;Data center management encompasses the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Facilities management.&lt;/strong&gt; Managing the physical data center facility can include duties related to the real estate of the facility, utilities, access control and personnel.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Data center inventory or asset management.&lt;/strong&gt; Data center facilities include hardware assets, as well as software licensing and release management.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Data center infrastructure management. &lt;/strong&gt;&lt;a href="https://www.techtarget.com/searchdatacenter/definition/data-center-infrastructure-management-DCIM"&gt;DCIM&lt;/a&gt; lies at the intersection of IT and facility management and is usually accomplished through monitoring the data center's performance to optimize energy, equipment and floor space use.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Technical support.&lt;/strong&gt; The data center provides technical services to the organization, and as such, it must also provide technical support to enterprise end users.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Operations.&lt;/strong&gt; Data center management includes day-to-day processes and services that are provided by the data center.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Infrastructure management and monitoring.&lt;/strong&gt; Modern data centers use monitoring tools that enable remote IT data center administrators to oversee the facility and equipment, measure performance, detect failures and implement corrective actions without ever physically entering the data center room.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Energy consumption and efficiency.&lt;/strong&gt; A simple data center might need less energy, but enterprise data centers can require more than 100 megawatts. Today, the &lt;a href="https://www.techtarget.com/searchdatacenter/definition/green-data-center"&gt;green data center&lt;/a&gt;, which is designed for minimum environmental impact through the use of low-emission building materials, catalytic converters and alternative energy technologies, is growing in popularity.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Data center security and safety.&lt;/strong&gt; Data center design must also implement sound safety and security practices, including the layout of doorways and access corridors to accommodate the movement of large IT equipment and employee access. &lt;a href="https://www.techtarget.com/searchdatacenter/tip/What-to-know-about-data-center-fire-protection"&gt;Fire suppression&lt;/a&gt; is another key safety area, and the extensive use of high-energy electrical and electronic equipment precludes common sprinklers.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/best_practices_for_data_center_management-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/best_practices_for_data_center_management-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/best_practices_for_data_center_management-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/best_practices_for_data_center_management-f.png 1280w" alt="Bulleted list of data center management best practices." height="280" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Explore several best practices for data center management.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What is data center consolidation?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is data center consolidation?&lt;/h2&gt;
  &lt;p&gt;Data center consolidation is the process of downsizing or consolidating many servers, storage systems, networking systems or even locations into a more efficient set of systems. Consolidation typically occurs during mergers and acquisitions, when the acquiring business no longer needs the data centers owned by the acquired business.&lt;/p&gt;
 &lt;p&gt;There are many benefits of consolidating data centers, including the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Reduced latency.&lt;/b&gt; Consolidating into a smaller set of well-placed installations, often two or more locations for resilience, lets organizations locate workloads closer to users, which lowers latency and improves application performance.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Cost reduction.&lt;/b&gt; A business with multiple data centers could opt to &lt;a href="https://www.techtarget.com/searchnetworking/tip/3-steps-for-a-successful-data-center-consolidation-plan"&gt;consolidate data centers&lt;/a&gt;, reducing the number of locations to minimize the costs of IT operations. By reducing the operational costs of multiple data centers and energy expenses, businesses can gain significant cost savings.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Improved efficiency.&lt;/strong&gt; Consolidating data centers can streamline operations, simplify infrastructure and make it easier for IT staff to control, optimize and manage the organization's infrastructure.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Optimized IT governance and compliance.&lt;/strong&gt; With fewer data centers to operate, organizations can more easily adhere to regulatory requirements, data protection standards and industry best practices, reducing compliance risks and potential penalties.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Improved business continuity and disaster recovery (BCDR).&lt;/strong&gt; Centralizing data and backup infrastructure enhances BCDR capabilities. Organizations can apply more thorough backup and &lt;a href="https://www.techtarget.com/searchdisasterrecovery/definition/data-replication"&gt;replication&lt;/a&gt; strategies, ensure data availability across geographically dispersed locations, and reduce &lt;a href="https://www.techtarget.com/whatis/definition/recovery-time-objective-RTO"&gt;recovery time objectives&lt;/a&gt; and &lt;a href="https://www.techtarget.com/whatis/definition/recovery-point-objective-RPO"&gt;recovery point objectives&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Environmental benefits.&lt;/strong&gt; With fewer facilities in use, data center consolidation reduces energy consumption and carbon footprint. It also optimizes server efficiency through virtualization, minimizes electronic waste and extends hardware life spans. This plays a crucial role in the &lt;a href="https://www.techtarget.com/searchcio/feature/Sustainability-in-business-practices-What-IT-should-know"&gt;sustainability strategies of businesses&lt;/a&gt;, aligning with global efforts to combat climate change.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Enhanced security.&lt;/strong&gt; Consolidation reduces the attack surface and enables more effective use of security measures, which strengthens the overall security posture of an organization.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Data center vs. cloud vs. server farm: What are the differences?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Data center vs. cloud vs. server farm: What are the differences?&lt;/h2&gt;
  &lt;p&gt;How and where data is stored plays a crucial role in an organization's overall success. Over time, businesses have transitioned from simple on-site server farms and large enterprise data centers to cloud infrastructures.&lt;/p&gt;
 &lt;p&gt;The key differences among enterprise data centers, cloud service vendors and server farms include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Enterprise data centers&lt;/strong&gt; are designed for mission-critical businesses and are built with availability and scalability in mind. They offer everything required to maintain seamless business operations, including physical computer equipment and storage devices, as well as &lt;a href="https://www.techtarget.com/searchdatacenter/tip/Discover-disaster-recovery-and-backup-for-edge-networks"&gt;DR and backup&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Cloud vendors&lt;/strong&gt; enable users to purchase access to the cloud service provider's resources without having to build or buy their own infrastructure. Customers can manage their virtualized or nonvirtualized resources without having physical access to the cloud provider's facility.&lt;br&gt;The main difference between a cloud data center and a typical enterprise data center is scale. Because cloud data centers serve many different organizations, they can be huge.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Server farms&lt;/strong&gt; are bare-bones data centers. Many interconnected servers live inside the same facility to provide centralized control and easy accessibility. Even with cloud computing gaining popularity, many businesses still prefer server farms because they offer cost savings, security and performance optimization. In fact, cloud providers also use server farms inside their data centers.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Further blurring the lines between these platforms is the growth of &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/hybrid-cloud"&gt;hybrid cloud&lt;/a&gt;. As enterprises increasingly rely on public cloud providers, they must incorporate connectivity between their own data centers and their cloud providers.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/Google_data_center_in_Douglas_County_George.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/Google_data_center_in_Douglas_County_George_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/Google_data_center_in_Douglas_County_George_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/Google_data_center_in_Douglas_County_George.jpg 1280w" alt="A photo of a large-scale Google data center in Ga." data-credit="Google" height="373" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Enterprises like Google can require large data centers, like this Google data center in Douglas County, Ga.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="Evolution of data centers"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Evolution of data centers&lt;/h2&gt;
 &lt;p&gt;Data centers have undergone a significant evolution over the years, adapting to technological advancements and changing business needs:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;1940s.&lt;/b&gt; The origins of the first data centers can be traced back to early computer systems, such as the &lt;a href="https://www.computerweekly.com/news/252496341/ENIAC-anniversary-What-75-years-of-computer-technology-have-delivered"&gt;Electronic Numerical Integrator and Computer&lt;/a&gt;, or ENIAC. These machines, which were used by the military, were complex to maintain and operate. They required specialized computer rooms with racks, cable trays, cooling mechanisms and access restrictions to accommodate all the equipment and execute the proper security measures.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;1960s.&lt;/b&gt; The creation of the &lt;a href="https://www.techtarget.com/searchdatacenter/definition/mainframe"&gt;mainframe computer&lt;/a&gt; by IBM led to the development of dedicated mainframe rooms at large companies and government agencies, some of which needed their own free-standing buildings, marking the birth of the first data centers.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;1990s.&lt;/b&gt; The term &lt;em&gt;data center&lt;/em&gt; first came into use when IT operations started to expand and inexpensive networking equipment became available. It became possible to store all of a company's necessary servers in a room within the company. These specialized computer rooms were dubbed &lt;em&gt;data centers&lt;/em&gt; within the organizations, and the term gained traction. Around the time of the &lt;a href="https://www.techtarget.com/searchcio/definition/dot-com-bubble"&gt;dot-com bubble&lt;/a&gt; in the late 1990s, the need for internet speed and a constant internet presence for companies necessitated bigger facilities to house the large amount of networking equipment needed. It was at this point that data centers became popular and began to resemble the ones described above.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Recent years.&lt;/b&gt; Current data centers reflect a shift toward greater efficiency, flexibility and integration with cloud resources to meet the demands of modern computing and storage.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;&lt;em&gt;Learn the essentials of &lt;a href="https://www.techtarget.com/searchdatacenter/How-to-design-and-build-a-data-center"&gt;designing a data center&lt;/a&gt; efficiently. Explore key components, infrastructure and industry standards before embarking on the project.&lt;/em&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>A data center is a facility composed of networked computers, storage systems and computing infrastructure that organizations use to assemble, process, store and disseminate large amounts of data.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/3.jpg</image>
            <link>https://www.techtarget.com/searchdatacenter/definition/data-center</link>
            <pubDate>Fri, 03 May 2024 09:00:00 GMT</pubDate>
            <title>data center</title>
        </item>
        <item>
            <body>&lt;p&gt;In today's IT world, you can have workloads both on premises and in the cloud. A common denominator for each is a need for disaster recovery.&lt;/p&gt; 
&lt;p&gt;Azure Site Recovery is one option for administrators who need flexibility with workload protection. Azure Site Recovery works in the data center with physical servers, Windows VMs, Linux VMs and virtual machines in the Azure cloud. One of the benefits for the &lt;a href="https://www.techtarget.com/searchsecurity/tip/Business-continuity-vs-disaster-recovery-vs-incident-response"&gt;enterprise is having DR capabilities&lt;/a&gt; without the need to spend a significant amount on hardware and software.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What is Azure Site Recovery?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Azure Site Recovery?&lt;/h2&gt;
 &lt;p&gt;As the name indicates, Azure Site Recovery is a service based in Microsoft's Azure cloud platform. Customers can replicate their on-premises Windows and Linux VMs running on VMware or Hyper-V and physical Windows or Linux servers to Azure. In the event of a disaster, such as a power outage or &lt;a href="https://www.techtarget.com/searchdatacenter/tip/How-to-prevent-and-recover-from-server-failure"&gt;hardware failure&lt;/a&gt;, VMs fail over and continue to operate in the Azure cloud to minimize downtime.&lt;/p&gt;
 &lt;p&gt;An agent installed on the physical server or virtual machine facilitates the replication process in Azure and assists with tasks related to configuration, monitoring and troubleshooting.&lt;/p&gt;
 &lt;p&gt;Azure Site Recovery supports application-aware replication for Microsoft products, such as Exchange, SQL Server, Active Directory and SharePoint, and major vendors, including Oracle, SAP and IBM.&lt;/p&gt;
 &lt;p&gt;Azure Site Recovery also replicates Azure VM workloads.&lt;/p&gt;
 &lt;p&gt;Organizations can tailor Azure Site Recovery for their specific needs and customize the &lt;a href="https://www.techtarget.com/whatis/definition/recovery-time-objective-RTO"&gt;recovery time objective&lt;/a&gt; (RTO) and &lt;a href="https://www.techtarget.com/whatis/definition/recovery-point-objective-RPO"&gt;recovery point objective&lt;/a&gt; (RPO) for more frequent replication of important workloads to avoid data loss. Advanced settings also offer a way to build a recovery plan for an orchestrated failover and use of automation via PowerShell scripts to accelerate DR efforts.&lt;/p&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="What are the benefits of Azure Site Recovery?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the benefits of Azure Site Recovery?&lt;/h2&gt;
  &lt;p&gt;One of the real advantages of this Azure service for a Microsoft-based shop is integration. All the functionality is built into the admin portal and requires little effort to configure beyond the agent installation, which can be done automatically. Offerings from other vendors, such as Zerto and Veeam, provide similar functionality but require additional configuration using a management suite based outside the Azure portal.&lt;/p&gt;
 &lt;p&gt;As a cloud-based service, Azure Site Recovery is under constant development. Microsoft introduced Azure Site Recovery in 2014 and issues rollups nearly every month to add features or improve functionality with the service and related software.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="What is Azure Site Recovery pricing?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Azure Site Recovery pricing?&lt;/h2&gt;
 &lt;p&gt;One of the big issues for any platform is cost. Microsoft charges based on the number of protected instances. Organizations interested in testing the service can use it for free for the first 31 days.&lt;/p&gt;
  &lt;p&gt;Each instance protected to Azure costs $25 per month, with additional fees for the Azure Site Recovery license, storage in Azure, storage transactions and outbound data transfer. For an instance protected to a customer-owned site, the cost is $16 per instance per month.&lt;/p&gt;
 &lt;p&gt;As with most systems, there are caveats, including how replication and recovery are tied to specific Azure regions depending on the location of the cluster.&lt;/p&gt;
  &lt;p&gt;Microsoft does not charge for incoming replication, but does charge for outgoing replication. The total &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Implement-these-Azure-cost-optimization-best-practices"&gt;cost depends on the Azure region&lt;/a&gt; and how much egress data is transferred. Each disk attached to the instance adds to the charges, with faster disk types costing more.&lt;/p&gt;
 &lt;p&gt;During a failover, Microsoft adds a compute charge for the VMs that are running in Azure.&lt;/p&gt;
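Putting these pricing pieces together, a back-of-the-envelope monthly estimate can be sketched. The per-instance figures ($25 to Azure, $16 to a customer-owned site) come from the text above; the storage and egress rates below are hypothetical placeholders, since actual rates vary by Azure region and disk type:

```python
# Rough Azure Site Recovery monthly cost estimate. Per-instance figures are
# from the article; storage and egress rates are hypothetical placeholders,
# not real Azure prices.
ASR_TO_AZURE_USD = 25.0     # per protected instance per month, replicating to Azure
ASR_TO_OWN_SITE_USD = 16.0  # per protected instance per month, customer-owned site

def estimate_monthly_cost(instances: int, to_azure: bool = True,
                          storage_gb: float = 0.0, storage_rate: float = 0.05,
                          egress_gb: float = 0.0, egress_rate: float = 0.08) -> float:
    """Per-instance charges plus hypothetical storage and egress charges (USD)."""
    per_instance = ASR_TO_AZURE_USD if to_azure else ASR_TO_OWN_SITE_USD
    return instances * per_instance + storage_gb * storage_rate + egress_gb * egress_rate

# Example: 10 VMs replicated to Azure with 500 GB of replica storage.
print(f"${estimate_monthly_cost(10, storage_gb=500):.2f}")
```

Note that this sketch omits the compute charge Microsoft adds for VMs that actually run in Azure during a failover.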
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="What are Azure Site Recovery prerequisites and requirements?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are Azure Site Recovery prerequisites and requirements?&lt;/h2&gt;
 &lt;p&gt;Using Azure Site Recovery requires an active subscription to Azure.&lt;/p&gt;
 &lt;p&gt;In Azure, there must be an Azure virtual network to place the VMs in a failover and a storage account to hold replicated data.&lt;/p&gt;
 &lt;p&gt;Replication for VMware VMs and physical servers requires a configuration server to handle replication to Azure. The server is set up as a highly available VMware VM and uses a registration key from the recovery vault for access.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="How does Azure Site Recovery work with cloud workloads?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How does Azure Site Recovery work with cloud workloads?&lt;/h2&gt;
 &lt;p&gt;Microsoft integrates Azure Site Recovery with cloud workloads to make protection of those instances easy. To add Azure Site Recovery to a single VM, click the DR option on the VM pane, select the region to recover to and accept the defaults.&lt;/p&gt;
 &lt;p&gt;While this is a quick way to set up DR for the single VM, it isn't best practice because it ignores a lot of the advanced options and configuration settings. Admins who might have been overwhelmed by earlier versions of Azure Site Recovery that were overly complicated and difficult to deploy will appreciate recent updates that simplify the process.&lt;/p&gt;
 &lt;p&gt;After setting up Azure Site Recovery to protect a VM, the Azure subscription objects section now has a Recovery Services vault that ends in -asr. Microsoft creates this vault when you use the simpler method.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="How to work with the recovery vault"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How to work with the recovery vault&lt;/h2&gt;
 &lt;p&gt;The ability to fail over in a consistent manner is a significant benefit that gives the enterprise more fine-grained control over groups of VMs.&lt;/p&gt;
 &lt;p&gt;Ideally, the admin sets up a recovery vault -- or several -- in advance to group applications for a more manageable and consistent failover experience.&lt;/p&gt;
 &lt;p&gt;To create a recovery vault, click &lt;b&gt;Create a resource&lt;/b&gt; from the Azure portal, search for &lt;b&gt;Recovery Services vault&lt;/b&gt; and then select &lt;b&gt;Create&lt;/b&gt; to build the vault, following the configuration prompts to give it a name, resource group and Azure region.&lt;/p&gt;
 &lt;p&gt;To add VMs to a recovery vault, use the &lt;b&gt;Disaster Recovery&lt;/b&gt; button on the &lt;b&gt;Recovery Services vault&lt;/b&gt; blade, and select the new vault.&lt;/p&gt;
 &lt;p&gt;The vault provides options to &lt;a href="https://www.techtarget.com/searchdatabackup/feature/The-7-critical-backup-strategy-best-practices-to-keep-data-safe"&gt;manage the backup policies&lt;/a&gt; for stored VMs, restore a VM from a recovery point and monitor the replication status of protected VMs.&lt;/p&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="What are some potential issues with Azure Site Recovery?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are some potential issues with Azure Site Recovery?&lt;/h2&gt;
 &lt;p&gt;There are some potential issues during the Azure Site Recovery setup process, but they are usually easily diagnosed.&lt;/p&gt;
 &lt;p&gt;If the VM has just been provisioned, it may still be completing tasks in the background, such as installing agents, so try again after several minutes if it doesn't appear in Azure Site Recovery.&lt;/p&gt;
 &lt;p&gt;If the VM is on premises, then make sure the Azure Site Recovery agent is installed. It may be prudent to reboot and try to add it to Azure Site Recovery again.&lt;/p&gt;
 &lt;p&gt;Communication problems are among the most common issues. Microsoft provides lengthy &lt;a target="_blank" href="https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-manage-network-interfaces-on-premises-to-azure" rel="noopener"&gt;documentation&lt;/a&gt; on setup requirements and implementation for on-premises VMs to Azure. It takes some time to make sure everything is properly configured.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stuart Burns is a virtualization expert at a Fortune 500 company. He specializes in VMware and system integration with additional expertise in disaster recovery and systems management. Burns received vExpert status in 2015.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Understand the costs and the requirements to use this flexible disaster recovery service that works with Linux and Windows Server in the data center and virtual machines in the Azure cloud.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/check_g496816315.jpg</image>
            <link>https://www.techtarget.com/searchwindowsserver/tutorial/Try-an-Azure-Site-Recovery-setup-for-DR-needs</link>
            <pubDate>Mon, 08 Apr 2024 09:00:00 GMT</pubDate>
            <title>Azure Site Recovery tutorial for workload disaster recovery</title>
        </item>
        <item>
            <body>&lt;p&gt;Xen and KVM offer distinct advantages, such as the ability to run multiple OSes simultaneously and greater network flexibility.&lt;/p&gt; 
&lt;p&gt;These hypervisors are Linux-based and have vendor support for management tools through Citrix, Oracle and Red Hat. Although the underlying code is open source, they represent revenue streams for companies offering support.&lt;/p&gt; 
&lt;p&gt;Kernel-based VM (KVM) and Xen take advantage of &lt;a href="https://www.techtarget.com/searchitoperations/feature/How-to-choose-the-best-CPU-for-virtualization"&gt;CPU virtualization&lt;/a&gt; instructions present on both AMD and Intel processors. Arm-based systems using v7 and later CPUs support virtualization extensions on KVM.&lt;/p&gt; 
&lt;p&gt;The decision between the two comes down to the organization's primary infrastructure, staff resources and interest in using the cloud. Cost is a significant part of the equation in terms of both initial acquisition and long-term support.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Terminology and definitions"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Terminology and definitions&lt;/h2&gt;
 &lt;p&gt;Before digging into Xen and KVM specifically, it's important to understand a few key terms to avoid confusion:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Type 1 hypervisor.&lt;/b&gt; A Type 1 hypervisor runs directly on the physical hardware of a host machine without the need for an underlying OS.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Type 2 hypervisor.&lt;/b&gt; A Type 2 hypervisor typically runs on top of an OS with indirect access to the underlying hardware.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Paravirtualization.&lt;/b&gt; &lt;a href="https://www.techtarget.com/searchitoperations/definition/paravirtualization"&gt;Paravirtualization&lt;/a&gt; is a software interface presented to a VM that mimics the underlying hardware/software interface.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Full virtualization.&lt;/b&gt; Full virtualization uses binary translation and direct execution of user requests.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Hardware virtualization.&lt;/b&gt; In the x86 world, both AMD and Intel provide virtualization features in their CPU products that provide virtualization-specific instructions.&lt;/li&gt; 
 &lt;/ul&gt;
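 &lt;p&gt;On a Linux x86 host, the hardware virtualization support described in the last bullet can be checked by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. A minimal sketch, assuming the typical /proc/cpuinfo format:&lt;/p&gt;

```python
# Minimal sketch: detect x86 hardware virtualization support by scanning
# the CPU flags Linux exposes in /proc/cpuinfo. "vmx" indicates Intel
# VT-x and "svm" indicates AMD-V; either enables hardware-assisted
# virtualization for hypervisors such as Xen and KVM.

def has_hw_virtualization(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# Typical usage on a Linux host:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virtualization(f.read()))

print(has_hw_virtualization("flags\t\t: fpu vme vmx sse2"))  # True
```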
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="What is Xen?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is Xen?&lt;/h2&gt;
 &lt;p&gt;Researchers at the University of Cambridge created the Xen Type 1 hypervisor in the late 1990s, and The Linux Foundation took over the project in 2013.&lt;/p&gt;
 &lt;p&gt;All Xen-based systems implement a Type 1 hypervisor, which enables IT administrators to run multiple OSes on the same hardware and provides a small management layer to help admins manage shared resources. Xen uses paravirtualization for most Linux-based guest OSes, while incorporating &lt;a href="https://www.techtarget.com/searchitoperations/definition/hardware-assisted-virtualization"&gt;hardware-assisted virtualization&lt;/a&gt; for Windows guest OSes.&lt;/p&gt;
 &lt;p&gt;Citrix and Oracle use Xen for their virtualization products. Citrix co-opted the Xen name but rebranded XenServer as Citrix Hypervisor to differentiate it from the open source offering. Support for virtual desktops remains a high priority for Citrix, and XenServer has been optimized for that type of workload.&lt;/p&gt;
 &lt;p&gt;Citrix was absorbed by Cloud Software Group in 2022. Since that time, XenServer has emerged as a new entity with its own website and marketing strategy. Based on Xen, Citrix Hypervisor comes at no additional cost to existing Citrix Virtual Apps and Desktops customers.&lt;/p&gt;
 &lt;p&gt;The most recent Xen-based project has the name XCP-ng (Xen Cloud Platform - next generation) with &lt;a target="_blank" href="https://xcp-ng.org/blog/2024/02/15/xcp-ng-8-3-beta-2/" rel="noopener"&gt;version 8.3 Beta 2 released&lt;/a&gt; Feb. 15, 2024. XCP-ng started as a fork of XenServer and is a Xen Project incubation project hosted by The Linux Foundation. It provides a number of GUI-based management tools that aren't in the basic Xen distribution.&lt;/p&gt;
 &lt;h3&gt;Xen pros&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;A true Type 1 hypervisor that provides lower overhead due to having direct access to the hardware.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Xen cons&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;No ability to share resources of an underlying OS.&lt;/li&gt; 
  &lt;li&gt;No support for sVirt.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;          
&lt;section class="section main-article-chapter" data-menu-title="What is KVM?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is KVM?&lt;/h2&gt;
 &lt;p&gt;Adopted into Linux in 2007, KVM is a hypervisor that virtualizes OSes on x86 server hardware. Because it's in the Linux kernel but runs guest OS software, there is debate about its classification as a Type 1 or Type 2 hypervisor. It's possible to run KVM as a Type 1 hypervisor using a custom installation process.&lt;/p&gt;
 &lt;p&gt;Choosing to run KVM on top of a Linux OS brings additional benefits, such as resource swapping among guests, shared common libraries and optimized system performance. It also adds security features you don't get from using a Type 1 hypervisor, such as sVirt.&lt;/p&gt;
 &lt;p&gt;OpenStack and oVirt currently use KVM as the default hypervisor. KVM lets admins run the following guest OSes: Berkeley Software Distribution, Solaris, Windows, ReactOS and macOS with QEMU. Most mainstream Linux distributions offer KVM support, including openSUSE, &lt;a href="https://www.techtarget.com/searchdatacenter/definition/Red-Hat-Enterprise-Linux-RHEL"&gt;RHEL&lt;/a&gt; and Ubuntu.&lt;/p&gt;
 &lt;p&gt;Primary KVM vendor support is through Red Hat, plus the Linux kernel development team. Both admins and vendors consider this support an advantage, and Amazon has actively moved toward a more hybrid approach to include KVM integration.&lt;/p&gt;
 &lt;p&gt;Red Hat purchased Qumranet in 2008 and is the intellectual property owner of everything KVM. Like the business model Red Hat uses for other open source products, such as RHEL, Red Hat makes its money on service and updates. In 2019, &lt;a href="https://www.techtarget.com/searchcloudcomputing/news/252466466/Red-Hat-IBM-deal-closes-both-promise-a-friendly-coopetition"&gt;Red Hat was acquired by IBM&lt;/a&gt;.&lt;/p&gt;
 &lt;h3&gt;KVM pros&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Integrated into the Linux kernel and, as such, receives regular security and performance updates, plus bug fixes through the normal Linux upgrade channels.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;KVM cons&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Requires Linux OS running on host hardware for Type 2 functionality.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;          
&lt;section class="section main-article-chapter" data-menu-title="Vendor usage"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Vendor usage&lt;/h2&gt;
 &lt;p&gt;Citrix has been tied to Microsoft as a remote desktop platform since its inception. XenServer continues in that vein, running Windows desktops on a centrally managed server. This provides any number of advantages from a security perspective, including controlling the vulnerability footprint through managed images.&lt;/p&gt;
 &lt;p&gt;KVM has a solid stance in the world of cloud providers, not the least of which is Amazon. It's also the mainstay of Red Hat virtualization and an entire family of offerings. It's the primary virtualization platform for OpenShift. In the OpenShift context, there's a close tie between &lt;a href="https://www.techtarget.com/searchitoperations/answer/Containers-vs-VMs-What-are-the-key-differences"&gt;containers and VMs&lt;/a&gt;. This provides a path for those looking to move to containerized platforms without getting rid of their current infrastructure.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Differences between KVM and Xen hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Differences between KVM and Xen hypervisors&lt;/h2&gt;
 &lt;p&gt;The Xen hypervisor uses a microkernel design that runs on bare-metal hardware and can run on systems without virtualization extensions, a capability that matters mainly for older hardware because most modern servers include those extensions.&lt;/p&gt;
 &lt;p&gt;Xen version 4.18, released in November 2023, delivered security, performance and architecture features focused on &lt;a href="https://www.techtarget.com/searchenterpriseai/feature/10-common-uses-for-machine-learning-applications-in-business"&gt;AI and machine learning applications&lt;/a&gt;. This version brings support for the latest Arm and Intel CPU hardware, including Sapphire Rapids and Granite Rapids for x86 workloads.&lt;/p&gt;
 &lt;p&gt;An advantage of KVM is that it functions within the Linux kernel, which means KVM receives bug fixes and security updates as Linux publishes new releases. From a security standpoint, KVM also benefits from sVirt and mandatory access control security measures, which prevent manual labeling attacks.&lt;/p&gt;
 &lt;p&gt;Nitro is Amazon's most advanced Elastic Compute Cloud architecture and can carve out isolated compute environments within the same instance. It uses a custom minimal hypervisor based on KVM. Nitro also uses custom-designed hardware cards with application-specific integrated circuits to implement network and storage I/O. These features help provide security isolation between the different subsystems and serve as the primary way to protect sensitive data at the VM level.&lt;/p&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Choosing between the two"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Choosing between the two&lt;/h2&gt;
 &lt;p&gt;The organization's primary infrastructure is the main factor in deciding between KVM and Xen hypervisors. Other deciding factors include UX, staff knowledge and code requirements.&lt;/p&gt;
 &lt;p&gt;Admins should have a good understanding of current dependencies with specific vendors and a clear vision of where their IT projects are heading.&lt;/p&gt;
 &lt;p&gt;As of February 2024, Broadcom discontinued VMware ESXi Free, which could lead SMBs and home lab users to switch to KVM-based &lt;a href="https://www.techtarget.com/searchitoperations/tutorial/Get-started-with-this-Nutanix-Community-Edition-installation-guide"&gt;Nutanix Community Edition&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Oracle and Citrix have a large customer base and push Xen as their primary hypervisor. Red Hat, SUSE and Canonical support KVM as a virtualization option for their Linux versions.&lt;/p&gt;
 &lt;p&gt;For cloud, admins face a similar decision: Citrix and Oracle have Xen-based offerings, while Google runs on KVM. Amazon offers both Xen and KVM, so an admin's infrastructure requirements are the deciding factor. For example, admins who choose Amazon as their cloud provider for a new project might be more inclined toward KVM, while IT teams that use Citrix or Oracle and move their systems to the cloud will favor Xen.&lt;/p&gt;
 &lt;p&gt;It's also important to evaluate the growing popularity of hybrid and on-premises cloud offerings. If admins investigate these options, they must understand and consider their existing virtualization software and how well it integrates with any &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Top-public-cloud-providers-A-brief-comparison"&gt;prospective cloud provider&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Amazon supports both Xen and KVM, but the vendor still maintains a working relationship with Citrix.&lt;/p&gt;
 &lt;p&gt;Admins should fully understand all UX options before they make a final decision. Major cloud vendors provide both web-based and programmatic interfaces to enable flexibility for IT teams and admins.&lt;/p&gt;
 &lt;p&gt;Automation is the main way to manage any large-scale virtualization project, and automation requires someone to write code. The availability of capable staff is the primary factor in planning for any code-writing requirements.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Paul Ferrill has been writing in the computer trade press for over 25 years.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Admins often evaluate Xen and KVM as open source options. The main factors to consider in a primary hypervisor are organizational infrastructure and cloud adoption interests.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/security_a244600171.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/tip/Xen-vs-KVM-What-are-the-differences</link>
            <pubDate>Mon, 25 Mar 2024 09:00:00 GMT</pubDate>
            <title>Xen vs. KVM: What are the differences?</title>
        </item>
        <item>
            <body>&lt;p&gt;IT administrators who combine virtualization with a server consolidation plan can improve server utilization and reduce data center power consumption and hardware costs. A server consolidation plan can include one of two consolidation methods: migrating workloads to a server's OS or using virtualization to run applications inside of VMs.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.techtarget.com/searchitoperations/definition/What-is-server-virtualization-The-ultimate-guide"&gt;Virtualization&lt;/a&gt; is often the preferred choice for server consolidation because it introduces a host of benefits, such as reduced power and cooling costs, as well as high availability for workloads. To achieve these benefits, admins must formulate a server consolidation plan that aligns with business needs.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What is server consolidation?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is server consolidation?&lt;/h2&gt;
 &lt;p&gt;Server consolidation is the process of reducing the number of physical servers in a data center by combining workloads. If a data center underutilizes server hardware, then &lt;a href="https://www.techtarget.com/searchdatacenter/answer/How-should-I-spec-our-new-server-hardware-configuration"&gt;admins can configure servers &lt;/a&gt;to host multiple workloads. This means servers can better utilize hardware, which reduces the total number of servers required to run workloads.&lt;/p&gt;
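 &lt;p&gt;The reduction in server count can be put in rough numbers. A minimal sketch, assuming average CPU utilization is the only limiting resource (real sizing must also weigh memory, storage and I/O):&lt;/p&gt;

```python
import math

def consolidated_host_count(server_utilizations, target_utilization=0.7):
    """Estimate how many hosts are needed if workloads from lightly used
    servers are combined until each host reaches a target utilization.
    Utilizations are fractions of one server's capacity (0.0 to 1.0)."""
    total_demand = sum(server_utilizations)  # work, in "whole servers"
    return max(1, math.ceil(total_demand / target_utilization))

# Ten servers averaging 10% utilization collapse onto two busier hosts:
print(consolidated_host_count([0.10] * 10))  # 2
```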
&lt;/section&gt;  
&lt;section class="section main-article-chapter" data-menu-title="What are the benefits of server consolidation?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are the benefits of server consolidation?&lt;/h2&gt;
 &lt;p&gt;Benefits of server consolidation include reduced hardware maintenance costs and the overall data center footprint, which is beneficial for organizations that lease space in a colocation facility.&lt;/p&gt;
 &lt;p&gt;Admins should consider several factors during the early phases of their server consolidation planning process. For example, admins should outline specific goals, determine whether one consolidation method is better suited to achieve those goals, devise a plan to deal with &lt;a href="https://www.techtarget.com/searchsecurity/feature/10-types-of-security-incidents-and-how-to-handle-them"&gt;possible security and compliance violations&lt;/a&gt; and consider whether the cost savings justify the total expense of the server consolidation project.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Types of server consolidation"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Types of server consolidation&lt;/h2&gt;
 &lt;p&gt;There are two ways admins can consolidate servers. First, admins can run multiple workloads on a server's OS. The other option is to use server virtualization. The idea here is to convert physical servers into VMs. As virtualization hosts can accommodate multiple VMs, the net result will be fewer servers running in the data center.&lt;/p&gt;
&lt;/section&gt;  
&lt;section class="section main-article-chapter" data-menu-title="Server consolidation architecture"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Server consolidation architecture&lt;/h2&gt;
 &lt;p&gt;Before deciding on one of the two server consolidation methods, consider the advantages and disadvantages associated with the two architectures.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virtualization-traditional_virtual_architecture.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virtualization-traditional_virtual_architecture_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virtualization-traditional_virtual_architecture_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virtualization-traditional_virtual_architecture.jpg 1280w" alt="Server virtualization architecture" height="314" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Virtualization uses software that simulates hardware functionality to create a virtual system, enabling organizations to operate multiple OSes and applications on a single server.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;Migrate workloads to a server's OS&lt;/h3&gt;
 &lt;p&gt;The first server consolidation method involves combining multiple workloads onto a single server. The main advantages associated with this technique are that it's relatively easy to implement and it's less expensive than using server virtualization. Only a single OS license is required and there is no need to purchase and learn how to use virtualization software.&lt;/p&gt;
 &lt;p&gt;Although this type of consolidation is simpler and less expensive than using server virtualization, there are some disadvantages to consider:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Potential interference with other workloads running on the server.&lt;/b&gt; Because virtualization software isn't being used, there is nothing to keep a workload's resource utilization in check. If a workload consumes excessive hardware resources, other workloads running on the server will suffer.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Security implications.&lt;/b&gt; Because multiple workloads share a common OS, a single application with a security vulnerability can act as a point of entry for attackers, who could then compromise all of the other workloads running on the server.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Server outage.&lt;/b&gt; If the server were to suffer an outage, then all of the workloads running on that server would incur downtime as a result. As such, the &lt;a href="https://www.techtarget.com/searchdatacenter/tip/How-to-prevent-and-recover-from-server-failure"&gt;failure of a single server&lt;/a&gt; could cause a major outage in the absence of a high-availability platform.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Use server virtualization&lt;/h3&gt;
 &lt;p&gt;The second method of consolidating servers is through virtualization. When admins use virtualization, the physical server takes on the role of a virtualization host.&lt;/p&gt;
 &lt;p&gt;There are numerous advantages to using virtualization technology for workload consolidation:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Isolated workloads.&lt;/b&gt; Applications run inside VMs with their own OS, which helps isolate workloads from one another. A security breach within one virtual server typically doesn't affect another.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Fault tolerance for VMs.&lt;/b&gt; &lt;a href="https://www.techtarget.com/searchdisasterrecovery/definition/fault-tolerant"&gt;Fault tolerance&lt;/a&gt; means that if a problem were to occur on a virtualization host, then all of the VMs running on that host can automatically fail over to a different host, thereby preventing downtime.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Automation for monitoring sufficient hardware resources.&lt;/b&gt; Similarly, automation can be used to monitor VMs to make sure that they are receiving the required hardware resources. If a VM needs additional hardware resources to deal with an activity spike, then the underlying virtualization technology can automatically handle the resource allocation. If sufficient hardware resources aren't available, then the virtualization software can even be configured to automatically &lt;a href="https://www.techtarget.com/searchitoperations/tip/How-to-perform-a-Hyper-V-to-VMware-migration"&gt;migrate the VM&lt;/a&gt; to a different host where the required resources are available.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;However, virtualization isn't without its disadvantages. The main disadvantages are that virtualization is often expensive and complex to implement. Not only is there the software license cost to consider, but you might need to purchase additional hardware, such as a storage array that can accommodate your VMs.&lt;/p&gt;
&lt;/section&gt;            
&lt;section class="section main-article-chapter" data-menu-title="What to consider before consolidating servers"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What to consider before consolidating servers&lt;/h2&gt;
 &lt;p&gt;In most cases, virtualization is the preferred tool for server consolidation. Even so, there are several factors admins should consider during the virtualization planning process.&lt;/p&gt;
 &lt;h3&gt;Data center capacity&lt;/h3&gt;
 &lt;p&gt;One of the first factors is &lt;a href="https://www.techtarget.com/searchdatacenter/feature/What-you-should-consider-when-right-sizing-a-data-center"&gt;data center capacity&lt;/a&gt;. Admins must consider the number of physical servers required to host the anticipated number of VMs. It's worth noting, however, that VMs aren't all created equally. Some can be larger than others, based on workload requirements, and it's important to make sure that virtualization hosts can provide the required capacity.&lt;/p&gt;
 &lt;p&gt;It's also important to design the IT infrastructure with more capacity than is actually needed. This extra capacity is useful for accommodating future workloads. More importantly, however, it's needed in case one or more virtualization hosts fail. VMs can only fail over to another host if the destination host has enough capacity available to handle its existing workload, plus the workload from the failed host.&lt;/p&gt;
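 &lt;p&gt;That headroom requirement amounts to an N+1 sizing check. A minimal sketch with hypothetical numbers, assuming identical hosts and a single capacity metric:&lt;/p&gt;

```python
import math

def hosts_needed(total_vm_demand, host_capacity, spare_hosts=1):
    """N+1-style sizing: enough hosts for the total VM demand, plus
    spare hosts so survivors can absorb a failed host's workload."""
    base = math.ceil(total_vm_demand / host_capacity)
    return base + spare_hosts

def survives_failure(total_vm_demand, host_capacity, host_count, failures=1):
    """Can the remaining hosts still carry all VMs after `failures` hosts fail?"""
    return (host_count - failures) * host_capacity >= total_vm_demand

# Hypothetical cluster: 90 vCPUs of VM demand on 32-vCPU hosts.
n = hosts_needed(total_vm_demand=90, host_capacity=32)  # ceil(90/32) + 1
print(n, survives_failure(90, 32, n))  # 4 True
```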
 &lt;h3&gt;Virtualization architecture&lt;/h3&gt;
 &lt;p&gt;Another key step is to decide which virtualization architecture to use, such as &lt;a href="https://www.techtarget.com/searchitoperations/answer/Hyper-V-vs-VMware-comparison-What-are-the-differences"&gt;Hyper-V or VMware&lt;/a&gt;. Admins must then decide how many server nodes to include in a cluster and choose a storage architecture.&lt;/p&gt;
 &lt;h3&gt;Public cloud hosting&lt;/h3&gt;
 &lt;p&gt;It's also a good idea to consider the &lt;a href="https://www.techtarget.com/searchcloudcomputing/definition/public-cloud"&gt;public cloud&lt;/a&gt; as an option for hosting VMs. Cloud providers offer virtualization platforms as a managed service. This means that organizations can focus their server management efforts on the VMs themselves without having to worry about the underlying service-level infrastructure. The cloud computing provider handles low-level server management and optimization on the customer's behalf.&lt;/p&gt;
 &lt;p&gt;Using the cloud for VM hosting can be a good option since it's less complex than hosting VMs on premises. This approach also reduces energy consumption within an organization's data center because servers are running in the cloud rather than running on site.&lt;/p&gt;
 &lt;p&gt;There are some &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Explore-the-pros-and-cons-of-cloud-computing"&gt;disadvantages to the public cloud&lt;/a&gt;. For example, cloud providers don't give their customers full control over the virtualization environment. Those needing granular control might be better off with an on-premises platform. Depending on the use case, running virtual servers in the cloud might be more costly than hosting those VMs on premises, though the opposite can also be true.&lt;/p&gt;
 &lt;h3&gt;Server consolidation costs&lt;/h3&gt;
 &lt;p&gt;As admins progress through this process, they should keep an eye on the project's cost. Hypervisor licensing is one of the first costs to consider, but admins might also require supplementary management and monitoring tools, or capacity planning and management tools. In addition, admins might require a support contract depending on which virtualization platform they choose.&lt;/p&gt;
 &lt;h3&gt;Migration methods&lt;/h3&gt;
 &lt;p&gt;Finally, admins should consider how to migrate their workloads to a virtualized system. Options vary by workload, but the migration process should be seamless and avoid service interruptions.&lt;/p&gt;
&lt;/section&gt;               
&lt;section class="section main-article-chapter" data-menu-title="How to consolidate servers"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How to consolidate servers&lt;/h2&gt;
 &lt;p&gt;The server consolidation process varies widely depending on the consolidation method and the types of workloads being consolidated. It also depends on whether workloads will continue running on premises or are being migrated to the cloud.&lt;/p&gt;
 &lt;p&gt;There are two main consolidation methods that tend to be used, though there are countless variations.&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;One technique involves using software to assist in the consolidation process. For example, cloud providers often offer tools to assist in the process of migrating a server's workload to a cloud-based VM. There are also software companies that offer database migration tools or physical-to-virtual &lt;a target="_blank" href="https://greatminds.consulting/en/insight/physical-to-virtual-tools-plugins-for-virtualisation-projects" rel="noopener"&gt;conversion tools&lt;/a&gt; that can automate the VM provisioning process.&lt;/li&gt; 
  &lt;li&gt;Another common approach uses disaster recovery software. This method involves backing up a physical server's workload and then restoring the backup to the physical server or VM to which the workload is being migrated. A more complex variation of this technique uses disaster recovery software to perform a workload failover to the destination host, enabling the workload to be migrated without incurring downtime.&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;p&gt;&lt;i&gt;Brien Posey is a 15-time Microsoft MVP with two decades of IT experience. He has served as a lead network engineer for the U.S. Department of Defense and as a network administrator for some of the largest insurance companies in America. &lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Server consolidation enables admins to boost server utilization and decrease power consumption, which can also reduce costs and improve performance.</description>
            <image>https://cdn.ttgtmedia.com/visuals/searchCRM/management/CRM_article_008.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/tutorial/2-ways-to-craft-a-server-consolidation-project-plan</link>
            <pubDate>Fri, 15 Mar 2024 10:30:00 GMT</pubDate>
            <title>Server consolidation benefits, types and considerations</title>
        </item>
        <item>
            <body>&lt;p&gt;IT administrators often have to deal with virtualization problems, such as &lt;a href="https://www.techtarget.com/whatis/definition/virtualization-sprawl-virtual-server-sprawl"&gt;VM sprawl&lt;/a&gt;, network congestion, server hardware failures, reduced VM performance, software licensing restrictions and container issues. But companies can mitigate these issues before they occur with lifecycle management tools and business policies.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.techtarget.com/searchitoperations/definition/What-is-server-virtualization-The-ultimate-guide"&gt;Server virtualization&lt;/a&gt; brings far better system utilization, workload flexibility and other benefits to the data center. In spite of those benefits, however, virtualization isn't perfect: The hypervisors themselves are sound, but the issues that arise from virtualization can waste resources and drive administrators to the breaking point.&lt;/p&gt; 
&lt;p&gt;Here are six common virtualization issues admins encounter and how to effectively address them.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="1. VM sprawl wastes valuable computing resources"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;1. VM sprawl wastes valuable computing resources&lt;/h2&gt;
  &lt;p&gt;Organizations often virtualize a certain number of workloads, only to buy more servers down the line to accommodate additional ones. This occurs because companies usually don't have the business policies in place to plan or manage VM creation.&lt;/p&gt;
 &lt;p&gt;Before virtualization, a new server took weeks -- if not months -- to deploy because companies had to plan a budget for systems and coordinate deployment. Bringing a new workload online was a big deal that IT professionals and managers scrutinized. With virtualization, a hypervisor can allocate computing resources and spin up a new VM on an available server in minutes. Hypervisors include offerings such as &lt;a href="https://www.techtarget.com/searchitoperations/answer/Hyper-V-vs-VMware-comparison-What-are-the-differences"&gt;Microsoft Hyper-V and VMware vSphere&lt;/a&gt;, along with a cadre of well-proven open source alternatives.&lt;/p&gt;
 &lt;p&gt;However, once VMs are in the environment, there are rarely any default processes or monitoring in place to tell if anyone needs or uses them. This results in VMs that potentially fall into disuse. Yet, those abandoned VMs accumulate over time and consume computing, backup and disaster recovery resources. This is known as &lt;i&gt;VM sprawl&lt;/i&gt;, and it's a phenomenon that imposes operational costs and overhead for those unused VMs but brings no tangible value to the business.&lt;/p&gt;
 &lt;p&gt;Because VMs are so easy to create and destroy, organizations need policies and procedures that help them understand when they need a new VM, determine how long they need it and justify it as if it were a new server. Organizations should also consider tracking VMs with lifecycle management tools. There should be clear review dates and removal dates so that the organization can either extend or retire the VM. Other tools, such as &lt;a href="https://www.techtarget.com/searchenterprisedesktop/definition/Application-monitoring-app-monitoring"&gt;application performance management&lt;/a&gt; platforms, can also help gather utilization and performance metrics about each workload operating across the infrastructure.&lt;/p&gt;
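  &lt;p&gt;As a minimal sketch of such a lifecycle policy -- assuming a hypothetical inventory with per-VM review dates, since the exact fields depend on the management tool in use -- a short script can flag VMs whose review date has passed so owners can extend or retire them:&lt;/p&gt;

```python
from datetime import date

# Toy VM inventory. In practice this data would come from the hypervisor's
# management API or a CMDB; the names and fields here are purely illustrative.
inventory = [
    {"name": "vm-payroll", "owner": "finance", "review_date": date(2024, 1, 15)},
    {"name": "vm-test-042", "owner": "dev", "review_date": date(2023, 6, 1)},
    {"name": "vm-crm-db", "owner": "sales", "review_date": date(2024, 9, 30)},
]

def vms_due_for_review(vms, today):
    """Return VMs whose review date has passed, oldest first."""
    overdue = [vm for vm in vms if today > vm["review_date"]]
    return sorted(overdue, key=lambda vm: vm["review_date"])

due = vms_due_for_review(inventory, date(2024, 3, 1))
for vm in due:
    days = (date(2024, 3, 1) - vm["review_date"]).days
    print(f"{vm['name']} (owner: {vm['owner']}) overdue by {days} days")
```

  &lt;p&gt;A real implementation would feed the same check from live inventory data and notify the listed owners rather than print to a console.&lt;/p&gt;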
 &lt;p&gt;All this helps tie VMs to specific departments, divisions or other stakeholders so organizations can see exactly how much of the IT environment that part of the business needs and how the IT infrastructure is being used. Some businesses even use chargeback tactics to bill departments for the amount of computing they use. Chances are that a workload owner that needs to pay for VMs takes a diligent look at every one of them.&lt;/p&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="2. VMs can congest network traffic"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;2. VMs can congest network traffic&lt;/h2&gt;
 &lt;p&gt;Network congestion is another common problem. For example, an organization that routinely runs its system numbers might notice that it has enough memory and CPU cores to host 25 VMs on a single server. But, once IT admins load those VMs onto the server, they might discover that the server's only network interface card (&lt;a href="https://www.techtarget.com/searchnetworking/definition/network-interface-card"&gt;NIC&lt;/a&gt;) port is already saturated, which can impair VM communication and cause some VMs to report network errors or suffer performance issues.&lt;/p&gt;
 &lt;p&gt;Before virtualization, one application on a single server typically used only a fraction of the server's network bandwidth. But, as multiple VMs take up residence on the virtualized server, each VM on the server demands some of the available network bandwidth. Most servers are only fitted with a single NIC port, and it doesn't take long for network traffic on a virtualized server to cause a bottleneck that overwhelms the NIC. Workloads sensitive to network latency might report errors or even crash.&lt;/p&gt;
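  &lt;p&gt;The arithmetic behind NIC saturation is straightforward. The figures below are illustrative assumptions -- real per-VM demand should come from monitoring data -- but they show how quickly a single Gigabit Ethernet port runs out of headroom:&lt;/p&gt;

```python
# Back-of-envelope NIC sizing. All figures are illustrative assumptions.
nic_capacity_mbps = 1000          # one Gigabit Ethernet port
avg_vm_demand_mbps = 80           # assumed average network demand per VM
vm_count = 25

total_demand = vm_count * avg_vm_demand_mbps
utilization = total_demand / nic_capacity_mbps

print(f"Aggregate demand: {total_demand} Mbps on a {nic_capacity_mbps} Mbps port")
print(f"Utilization: {utilization:.0%}")  # values above 100% mean saturation

# How many VMs the port supports before saturating at this demand profile:
max_vms = nic_capacity_mbps // avg_vm_demand_mbps
print(f"VMs supported at this demand profile: {max_vms}")
```

  &lt;p&gt;At these assumed numbers, 25 VMs would demand twice what the port can deliver, and only about a dozen VMs fit before the NIC becomes the bottleneck.&lt;/p&gt;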
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-common_problems.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-common_problems_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-common_problems_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-common_problems.png 1280w" alt="6 common virtualization problems" height="308" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Although virtualization comes with many benefits, there are some challenges organizations often encounter as well.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Standard Gigabit Ethernet ports can typically support traffic from several VMs, but organizations &lt;a href="https://www.techtarget.com/searchitoperations/tutorial/2-ways-to-craft-a-server-consolidation-project-plan"&gt;planning high levels of consolidation&lt;/a&gt; might need to upgrade servers with multiple NIC ports -- or a higher-bandwidth NIC and LAN segment -- to provide adequate network connectivity. Organizations can sometimes relieve short-term traffic congestion problems by rebalancing workloads to spread out bandwidth-hungry VMs across multiple servers.&lt;/p&gt;
 &lt;p&gt;As an alternative rebalancing strategy, two bandwidth-hungry VMs routinely communicating across two different physical servers might be migrated so that both VMs are placed on the same physical server. This effectively takes that busy VM-to-VM communication off the LAN and enables it to take place within the physical server itself -- alleviating the congestion on the LAN caused by those demanding VMs.&lt;/p&gt;
 &lt;p&gt;Remember that NIC upgrades might also demand additional switch ports or switch upgrades. In some cases, organizations might need to distribute the traffic from multiple NICs across multiple switches to prevent switch backplane saturation. This requires the attention of a network architect involved in the virtualization and consolidation effort from the earliest planning phase.&lt;/p&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="3. Consolidation multiplies the effect of hardware failures"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;3. Consolidation multiplies the effect of hardware failures&lt;/h2&gt;
 &lt;p&gt;Consider 10 VMs all running on the same physical server. Virtualization provides tools such as snapshots and live migration that can protect VMs and ensure their continued operation under normal conditions. But virtualization does nothing to protect the underlying hardware. So, what happens when the server fails? It's the age-old cliche of putting all your eggs in one basket.&lt;/p&gt;
  &lt;p&gt;The physical hardware platform becomes a &lt;a href="https://www.techtarget.com/searchdatacenter/definition/Single-point-of-failure-SPOF"&gt;single point of failure&lt;/a&gt; and affects all the workloads running on the platform. Greater levels of consolidation mean more workloads on each server, and server failures affect those workloads. This is significantly different from traditional physical deployments, where a single server supported one application. Similar effects can occur in the LAN, where a fault in a switch or other network gear can isolate one or more servers -- and disrupt the activity of all the VMs on those servers.&lt;/p&gt;
 &lt;p&gt;In a properly architected and deployed environment, the affected workload fails over and restarts on other servers. But there is some disruption to the workload's availability during the restart. Remember that the workload must restart from a snapshot in storage and move from disk to memory on an available server. The recovery process might take several minutes depending on the size of the image and the amount of traffic on the network. An &lt;a href="https://www.techtarget.com/searchnetworking/answer/What-are-the-3-most-common-network-issues-to-troubleshoot"&gt;already congested network&lt;/a&gt; might take much longer to move the snapshot into another server's memory. A network fault might prevent any recovery at all.&lt;/p&gt;
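  &lt;p&gt;A rough estimate makes the recovery delay concrete. Assuming an illustrative 40 GB snapshot image and a Gigabit link running at half its rated bandwidth because of congestion -- both numbers are assumptions, not measurements -- the transfer alone takes on the order of ten minutes:&lt;/p&gt;

```python
# Rough estimate of VM restart delay after a host failure: the time to move
# a snapshot image across the network. All numbers are illustrative assumptions.
image_size_gb = 40
link_speed_mbps = 1000            # Gigabit Ethernet
effective_share = 0.5             # congested link: only half the bandwidth is available

image_size_megabits = image_size_gb * 8 * 1000
effective_mbps = link_speed_mbps * effective_share
transfer_seconds = image_size_megabits / effective_mbps

print(f"Estimated image transfer time: {transfer_seconds / 60:.1f} minutes")
```
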
 &lt;p&gt;There are several tactics for mitigating server hardware failures and downtime. In the short term, organizations can opt to redistribute workloads across multiple servers -- perhaps on different LAN segments -- to prevent multiple critical applications from residing on a single server. It might also be possible to lower consolidation levels in the short term to limit the number of workloads on each physical system.&lt;/p&gt;
 &lt;p&gt;Over the long term, organizations can deploy high availability servers for important consolidation platforms. These servers might include redundant power supplies and numerous memory protection technologies, such as memory sparing and memory mirroring.&lt;/p&gt;
 &lt;p&gt;These &lt;a href="https://www.techtarget.com/searchdatacenter/feature/Learn-the-major-types-of-server-hardware-and-their-pros-and-cons"&gt;server hardware features&lt;/a&gt; help to prevent errors or, at least, prevent them from becoming fatal. The most critical workloads might reside on server clusters, which keep multiple copies of each workload in synchronization. If one server fails, another node in the cluster takes over and continues operation without disruption. IT infrastructure engineers must consider the potential effect of hardware failures in a virtualized environment and implement the architectures, hardware and policies needed to mitigate faults before they occur.&lt;/p&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="4. Application performance can still be marginal in a VM"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;4. Application performance can still be marginal in a VM&lt;/h2&gt;
 &lt;p&gt;Organizations that decide to move their 25-year-old, custom-written corporate database server into a VM might discover that the database performs slower than molasses. Or, if organizations decide to virtualize a modern application, they might notice that it runs erratically or is slow. There are several possibilities when it comes to VM performance problems.&lt;/p&gt;
  &lt;p&gt;Many older, in-house and custom-built applications were written to make direct calls to specific hardware because that was one of the most efficient ways to code software. Unfortunately, simple &lt;a href="https://www.techtarget.com/searchdatacenter/tip/Guide-to-lift-and-shift-data-center-migration"&gt;lift-and-shift migrations&lt;/a&gt; can be treacherous for such legacy applications. Any time organizations change the hardware or abstract it from the application, the software might not work correctly, and it usually needs to be recoded.&lt;/p&gt;
 &lt;p&gt;It's possible that antique software simply isn't compatible with virtualization; organizations might need to update it, switch to some other commercial software product or SaaS offering that does the same job or continue using the old physical system the application was running on before. But none of these are particularly attractive or practical options for organizations on a tight budget.&lt;/p&gt;
 &lt;p&gt;Organizations with a modern application that performs poorly after virtualization might find the workload &lt;a href="https://www.techtarget.com/searchitoperations/feature/How-to-choose-the-best-CPU-for-virtualization"&gt;needs more computing resources&lt;/a&gt;, such as memory space, CPU cycles and cores. Organizations can typically run a benchmark utility and identify any resources that are overutilized and then provision additional computing resources to provide some slack. For example, if memory is too tight, the application might rely on disk file swapping, which can slow performance. Adding enough memory to avoid disk swapping can improve performance. In some cases, migrating a poorly performing VM to another server -- perhaps a newer or lightly used server -- can help address the problem.&lt;/p&gt;
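  &lt;p&gt;The memory-sizing check described above can be sketched as a simple comparison of a VM's working set against its allocated memory. The figures and VM names here are illustrative assumptions; real working-set data would come from a benchmark utility or an APM tool:&lt;/p&gt;

```python
# Simple sizing check: if a VM's working set exceeds its allocated memory,
# the guest swaps to disk and performance suffers. Figures are illustrative.
def swap_pressure_gb(allocated_gb, working_set_gb):
    """Memory shortfall that will spill to disk (0 when there is none)."""
    return max(0, working_set_gb - allocated_gb)

# Hypothetical VMs: name mapped to (allocated GB, measured working set GB).
vms = {"vm-db": (8, 11), "vm-web": (4, 3), "vm-cache": (16, 15)}
for name, (allocated, working_set) in vms.items():
    shortfall = swap_pressure_gb(allocated, working_set)
    if shortfall > 0:
        print(f"{name}: add at least {shortfall} GB to avoid disk swapping")
    else:
        print(f"{name}: headroom of {allocated - working_set} GB")
```
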
 &lt;p&gt;Whether the application in question is &lt;a target="_blank" href="https://www.purestorage.com/knowledge/legacy-apps-vs-modern-apps.html" rel="noopener"&gt;modern or legacy&lt;/a&gt;, testing in a lab environment prior to virtualization could have helped identify troublesome applications and given organizations the opportunity to discover issues and formulate answers to virtualization problems before rolling the VM out into production.&lt;/p&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="5. Software licensing is a slippery slope in a virtual environment"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;5. Software licensing is a slippery slope in a virtual environment&lt;/h2&gt;
 &lt;p&gt;Software licensing was always confusing and expensive, but software vendors have quickly caught up with virtualization technology and updated their licensing rules to account for VMs, multiple CPUs and other resource provisioning loopholes that virtualization enables. The bottom line is that organizations can't expect to clone VMs ad infinitum without buying licenses for the OS and the application running in each VM.&lt;/p&gt;
 &lt;p&gt;Organizations must always review and &lt;a href="https://www.techtarget.com/searchcloudcomputing/tip/Learn-the-basics-of-SaaS-licensing-and-pricing-models"&gt;understand the licensing rules for any software&lt;/a&gt; that they deploy. Large organizations might even retain a licensing compliance officer to track software licensing and offer guidance for software deployment, including virtualization. Organizations should involve these professionals if they are available. Modern systems management tools increasingly provide functionality for software licensing tracking and reporting.&lt;/p&gt;
 &lt;p&gt;License breaches can expose organizations to litigation and substantial penalties. Major software vendors often reserve the right to audit organizations and verify their licensing. Most vendors are more interested in getting their licensing fees than litigation, especially for first offenders. But, when organizations consider that a single license might cost thousands of dollars, careless VM proliferation can be financially crippling.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="6. Containers can lead to conundrums"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;6. Containers can lead to conundrums&lt;/h2&gt;
  &lt;p&gt;The emergence of virtual containers has only exacerbated the traditional challenges present in VMs. Containers, such as those managed through Docker and Apache Mesos, are &lt;a href="https://www.techtarget.com/searchitoperations/answer/Containers-vs-VMs-What-are-the-key-differences"&gt;essentially small and resource-efficient VMs&lt;/a&gt; that can be generated in seconds and live for minutes or hours as needed. Consequently, containers can be present in enormous numbers with startlingly short lifecycles. This demands high levels of automation and orchestration with specialized tools, such as Kubernetes.&lt;/p&gt;
 &lt;p&gt;When implemented properly and managed well, containers provide an attractive and effective mechanism for application deployment -- often in parallel with traditional VMs. But containers are subject to the same realities of risk and limitations discussed for VMs. When you consider that containers can be far more plentiful and harder to manage, the challenges for containers demand even more careful attention from IT staff.&lt;/p&gt;
 &lt;p&gt;Server virtualization has changed the face of modern corporate computing with VMs and containers. It enables efficient use of computing resources on fewer physical systems and provides more ways to protect data and ensure availability. But virtualization isn't perfect, and it creates new problems that organizations must understand and address to keep the data center running smoothly.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Organizations can correct common problems with virtualization, such as VM sprawl and network congestion, through business policies rather than purchasing additional technology.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/container_g1294273513.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/feature/5-common-virtualization-problems-and-how-to-solve-them</link>
            <pubDate>Tue, 12 Mar 2024 09:00:00 GMT</pubDate>
            <title>6 common virtualization problems and how to solve them</title>
        </item>
        <item>
            <body>&lt;p&gt;The idea behind all virtualization is to abstract a computer's hardware resources from the software that uses those resources. A hypervisor is a software tool installed on the host system to provide this layer of abstraction. Once a hypervisor is installed, OSes and applications interact with the virtualized resources abstracted by the hypervisor -- not the physical resources of the actual host computer.&lt;/p&gt; 
&lt;p&gt;There are different types of virtualization based on the level of isolation provided: full virtualization vs. paravirtualization.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What is full virtualization?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is full virtualization?&lt;/h2&gt;
 &lt;p&gt;Virtualization is often approached as full virtualization. That is, the hypervisor provides complete abstraction through a software-based virtualization layer, and assigns abstracted resources to one or more logical entities called virtual machines (&lt;a href="https://www.techtarget.com/searchitoperations/definition/virtual-machine-VM"&gt;VMs&lt;/a&gt;). This is typically referred to as Type 1 virtualization and is seen in products such as VMware ESXi. Each VM and its guest OS work as though they run alone on independent computers, and the OSes and applications require no special modifications or adaptations to operate in a typical VM. Every VM is logically isolated from every other VM. VMs don't communicate or share resources unless the VMs are deliberately set up to do so -- usually through standard network intercommunications.&lt;/p&gt;
  &lt;p&gt;However, early hypervisors had a performance problem. They relied on emulating hardware in software -- a VM monitor performing binary translation -- to mediate between physical hardware and virtual resources, such as CPUs and memory spaces. This constant translation imposes a performance penalty on the host computer. In the early days of full virtualization, this performance penalty limited the practical number of VMs that a system could host and frequently limited the types of applications that could run in a VM successfully.&lt;/p&gt;
  &lt;p&gt;These early &lt;a href="https://www.techtarget.com/searchitoperations/feature/5-common-virtualization-problems-and-how-to-solve-them"&gt;performance problems in virtualization&lt;/a&gt; have long been resolved through the common use of hardware-assisted processors that incorporate extensions to the processors' instruction set, including Intel Virtualization Technology (VT) and AMD Virtualization (AMD-V) extensions. Today, full virtualization operates at hardware speeds and offers excellent performance for server virtualization in enterprise production environments. This has caused paravirtualization to fall into disuse, though it's still important to understand what paravirtualization is and how it fits into the available spectrum of virtualization technologies.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-full_virtualization_vs_paravirtualization.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-full_virtualization_vs_paravirtualization_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-full_virtualization_vs_paravirtualization_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-full_virtualization_vs_paravirtualization.png 1280w" alt="Full virtualization vs. paravirtualization comparison chart" height="285" width="559"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Full virtualization is a complete abstraction of resources from the underlying hardware, whereas paravirtualization requires the OS to communicate with the hypervisor.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;h3&gt;Benefits of full virtualization&lt;/h3&gt;
  &lt;p&gt;Full virtualization (Type 1) doesn't require OS assistance to virtualize a computer or create VMs. This allows IT administrators to run OSes and applications without any modifications because the hypervisor fully manages resources and translates instructions quickly. The hypervisor can also present emulated hardware to the guest OS, which can improve reliability, security and productivity in a system.&lt;/p&gt;
 &lt;p&gt;Full virtualization enables admins to run applications on completely isolated guest OSes, which provides support for multiple OSes simultaneously -- such as Windows Server 2016 in one VM, Windows Server 2019 in another VM, and Ubuntu Linux in yet another VM -- all on the same computer. Full virtualization also provides other features, such as &lt;a href="https://www.techtarget.com/searchitoperations/tip/Top-10-VM-backup-tools-for-VMware-and-Hyper-V"&gt;easy VM backups&lt;/a&gt; and migrations, enabling VMs to be easily moved from one computer to another without disrupting the VM and its workload. This kind of flexibility enables organizations to reduce hardware costs and simplify system hardware maintenance.&lt;/p&gt;
 &lt;h3&gt;Disadvantages of full virtualization&lt;/h3&gt;
 &lt;p&gt;Despite virtualization's broad adoption and continued success, there are some drawbacks to the technology. The use of hypervisors and hardware-assisted processors offers excellent performance compared to bare-metal (nonvirtualized) OS and application deployments, but the hypervisor itself still adds a layer of additional complexity to the technology stack that an organization must procure, license, implement and manage.&lt;/p&gt;
 &lt;p&gt;Applications that require direct access to the underlying computer's hardware won't function properly in a VM. Today, such applications are exceedingly rare and, typically, represent a minuscule minority of &lt;a href="https://www.techtarget.com/searchitoperations/definition/legacy-application"&gt;legacy applications&lt;/a&gt;. Even in extreme cases where a legacy application can't be updated, it can continue operating on a dedicated server and shouldn't affect the adoption and use of full virtualization for other enterprise workloads.&lt;/p&gt;
 &lt;p&gt;IT professionals must consider availability and risk in VM deployments. In bare-metal environments, a server fault will affect one workload. In a virtualized environment, a physical server fault or failure can affect every VM running on the system. For example, if a server running five VMs should fail, all five of those workloads will fail as well and must be recovered. Multiple VMs on the same system can also potentially congest the system's available LAN bandwidth. This makes load balancing, protection and recovery schemes very important in virtualized data centers.&lt;/p&gt;
&lt;/section&gt;            
&lt;section class="section main-article-chapter" data-menu-title="What is paravirtualization?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is paravirtualization?&lt;/h2&gt;
 &lt;p&gt;&lt;i&gt;Para&lt;/i&gt; means alongside or partial, and &lt;a href="https://www.techtarget.com/searchitoperations/definition/paravirtualization"&gt;paravirtualization&lt;/a&gt; gained attention as one potential answer to full virtualization's early performance issues. Paravirtualization sought to bolster early virtualization performance by enabling an OS to recognize the presence of a hypervisor. Products such as IBM LPAR enabled the OS to communicate directly with that hypervisor to share activity that would otherwise be complex and time-consuming for the hypervisor's VM manager to handle. Commands sent from the OS to the hypervisor are dubbed &lt;i&gt;hypercalls&lt;/i&gt;. At the same time, the OS can still talk to and manage the underlying hardware layer -- that's the &lt;i&gt;para&lt;/i&gt; or &lt;i&gt;partial&lt;/i&gt; angle.&lt;/p&gt;
 &lt;p&gt;For paravirtualization to work, admins must modify or adapt the guest VM OSes to implement an API capable of exchanging hypercalls with the paravirtualization hypervisor. Typically, a paravirtualized hypervisor, such as Xen, requires OS support and drivers built into the Linux kernel and other OSes.&lt;/p&gt;
 &lt;p&gt;Unmodified, proprietary OSes, such as Microsoft Windows, won't run in a paravirtualized environment, although paravirtualization-aware device drivers might be available to enable an unmodified OS to &lt;a href="https://www.techtarget.com/searchitoperations/tip/Xen-vs-KVM-What-are-the-differences"&gt;run on a Xen hypervisor&lt;/a&gt;. Admins must modify the OS to communicate with the hypervisor, but the applications themselves don't require any modifications.&lt;/p&gt;
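  &lt;p&gt;A toy model can illustrate why hypercalls helped early hypervisors. Under trap-and-emulate, each privileged guest instruction forces a separate transition into the hypervisor, whereas a paravirtualized guest can batch related operations into a single hypercall. This is a conceptual sketch only -- the operation names, batch size and counts are assumptions, not real hypervisor code:&lt;/p&gt;

```python
# Toy model contrasting trap-and-emulate with batched hypercalls.
# Conceptual illustration only; not real hypervisor code.
def trap_and_emulate(privileged_ops):
    """One guest-to-hypervisor transition per privileged instruction."""
    return len(privileged_ops)

def hypercall_batched(privileged_ops, batch_size=32):
    """Guest batches operations and issues one hypercall per batch."""
    return -(-len(privileged_ops) // batch_size)  # ceiling division

# A guest performing 1,000 page-table updates (illustrative workload):
ops = ["update_page_table"] * 1000
print("trap-and-emulate transitions:", trap_and_emulate(ops))
print("hypercall transitions:", hypercall_batched(ops))
```

  &lt;p&gt;In this sketch, batching cuts roughly a thousand hypervisor transitions down to a few dozen, which is the kind of saving that made paravirtualization attractive before hardware-assisted processors closed the gap.&lt;/p&gt;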
 &lt;h3&gt;Benefits of paravirtualization&lt;/h3&gt;
 &lt;p&gt;Paravirtualization relies on direct communication between the guest OS kernel and the underlying hypervisor in a system. In the early days of virtualization, it could offer improved performance levels and system utilization compared to full hypervisors without the benefit of hardware-assisted processors. Paravirtualization also promised easier backups, faster migrations, improved &lt;a href="https://www.techtarget.com/searchitoperations/tutorial/2-ways-to-craft-a-server-consolidation-project-plan"&gt;server consolidation&lt;/a&gt; and reduced power consumption compared to hypervisors on legacy hardware.&lt;/p&gt;
 &lt;p&gt;Today, the benefits of paravirtualization are largely mitigated by its disadvantages when compared to full virtualization &lt;a href="https://www.techtarget.com/searchdatacenter/Server-hardware-guide-to-architecture-products-and-management"&gt;running on modern hardware&lt;/a&gt; with hardware-assisted processors.&lt;/p&gt;
 &lt;h3&gt;Disadvantages of paravirtualization&lt;/h3&gt;
  &lt;p&gt;Despite paravirtualization's early benefits, it also carries some important criticisms. Because admins must modify the OS, paravirtualization is limited to OS versions -- mostly open source -- that are properly modified and validated for such use, which limits the number of OS options available to an enterprise. Major proprietary OSes, such as Windows Server, simply don't support paravirtualization.&lt;/p&gt;
 &lt;p&gt;Paravirtualization also requires a hypervisor and modified OS capable of communicating with each other through APIs. This direct communication creates a tight dependency between the OS and hypervisor, potentially resulting in version compatibility problems where a hypervisor or OS update might break the virtualization. The intentional communication could also pose possible &lt;a href="https://www.techtarget.com/searchsecurity/feature/How-to-fix-the-top-5-cybersecurity-vulnerabilities"&gt;security vulnerabilities&lt;/a&gt; to the system. There is simply more to go wrong.&lt;/p&gt;
  &lt;p&gt;Another disadvantage of paravirtualization is the inability to predict performance gains. Many of paravirtualization's benefits vary depending on the workload. Essentially, the number of paravirtualization APIs and the amount of compute those APIs receive from the system determine the benefits workloads receive.&lt;/p&gt;
&lt;/section&gt;           
&lt;section class="section main-article-chapter" data-menu-title="Key differences between full virtualization and paravirtualization"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Key differences between full virtualization and paravirtualization&lt;/h2&gt;
 &lt;p&gt;Paravirtualization attempts to accomplish the same goals as Type 1 (full) virtualization. But paravirtualization modifies the OS as a workaround, which is often undesirable because it makes virtualization dependent on the OS. Any &lt;a href="https://www.techtarget.com/searchenterprisedesktop/tip/Use-this-10-step-patch-management-process-to-ensure-success"&gt;OS patches or updates&lt;/a&gt; -- or even a need to switch to a different OS -- can cripple virtualization on that system. Type 1 virtualization provides total isolation and leaves virtualization completely independent of any other OSes running on the system. In fact, a host OS is unnecessary when running Type 1 virtualization. The Type 1 hypervisor itself effectively acts as the host OS.&lt;/p&gt;
 &lt;p&gt;Where paravirtualization attempts to implement a virtualization layer below a modified host OS, Type 2 (guest) virtualization simply adds a hypervisor as an ordinary application installed above a standard unmodified OS. Once a Type 2 hypervisor is installed, admins can create VMs as needed. Type 2 virtualization is the foundation of container virtualization using a specialized hypervisor called a container engine. &lt;a href="https://www.techtarget.com/searchitoperations/definition/Type-2-hypervisor-hosted-hypervisor"&gt;Type 2 hypervisors&lt;/a&gt; and container engines can share a host OS, but again, the host OS requires no modifications and doesn't create an undesirable dependency for VMs.&lt;/p&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Understanding hardware-assisted virtualization"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Understanding hardware-assisted virtualization&lt;/h2&gt;
 &lt;p&gt;Full virtualization works by using a software hypervisor to abstract a computer's hardware resources -- memory, processors, network I/O -- and provide logical representations of those resources to logical VM instances. This imposes another layer of software used to manage resources and handle the translation between logical and physical resources. In the early days of virtualization, this continuous translation between physical and logical resources imposed a serious performance penalty that limited the number of VMs that a hypervisor could practically create and support on a computer.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-system_virt_implementations.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-system_virt_implementations_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-system_virt_implementations_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineimages/server_virt-system_virt_implementations.png 1280w" alt="Full vs. para vs. hardware-assisted virtualization" height="366" width="520"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Before hardware-assisted virtualization, virtualization was accomplished using two techniques: full virtualization and paravirtualization. 
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;Computer hardware designers quickly realized that many of the time-consuming processes needed to handle full virtualization could be vastly accelerated through the addition of specific microprocessor instructions, rather than using software to emulate those functions outside of the processor. The addition of these new instruction sets was dubbed &lt;a href="https://www.techtarget.com/searchitoperations/definition/hardware-assisted-virtualization"&gt;&lt;i&gt;hardware-assisted virtualization&lt;/i&gt;&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;Starting in 2005, major microprocessor vendors Intel and AMD added sets of new instructions to their processor families designed specifically to accelerate the tasks associated with virtualization. Intel called these extensions Intel VT, while AMD named the extensions AMD-V. Today, virtually all microprocessors, with perhaps a few exceptions for dedicated or task-specific microcontrollers, support virtualization instruction sets.&lt;/p&gt;
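 &lt;p&gt;On Linux, one quick way to see whether a processor exposes these extensions is to look for the &lt;i&gt;vmx&lt;/i&gt; (Intel VT) or &lt;i&gt;svm&lt;/i&gt; (AMD-V) flags in /proc/cpuinfo. The following Python sketch shows the idea; the helper function is illustrative, not part of any standard library:&lt;/p&gt;

```python
# Detect hardware virtualization support by scanning CPU flags.
# On Linux, /proc/cpuinfo lists one "flags" line per logical CPU;
# "vmx" indicates Intel VT-x and "svm" indicates AMD-V.

def virtualization_extensions(cpuinfo_text: str) -> set[str]:
    """Return the virtualization-related flags found in cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            found.update({"vmx", "svm"} & set(flags))
    return found

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            exts = virtualization_extensions(f.read())
    except OSError:  # non-Linux host, or /proc unavailable
        exts = set()
    print("hardware-assisted virtualization:", exts or "not detected")
```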
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Full virtualization vs. paravirtualization: The verdict"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Full virtualization vs. paravirtualization: The verdict&lt;/h2&gt;
 &lt;p&gt;Today, most general-purpose microprocessors intended for enterprise-class servers include either Intel or AMD extensions for hardware-assisted virtualization. This effectively eliminated the performance penalties associated with full virtualization and enabled Type 1 full virtualization to become the preeminent approach to enterprise virtualization. Even Type 2 virtualization and containerization -- through a shared host OS -- &lt;a target="_blank" href="https://www.baeldung.com/linux/virtual-machine-vs-native-hardware" rel="noopener"&gt;can approach native hardware performance&lt;/a&gt; using modern microprocessors.&lt;/p&gt;
 &lt;p&gt;Given full virtualization's isolation benefits and its ability to run any OS without modification, paravirtualization hasn't gained much traction in enterprise data centers. Full virtualization became the de facto standard for much of the industry, while paravirtualization is generally relegated to experimental and niche use cases.&lt;/p&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/WIPi7l0d8Ww?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Full virtualization and paravirtualization both enable hardware resource abstraction, but the two technologies differ when it comes to isolation levels.</description>
            <image>https://cdn.ttgtmedia.com/visuals/searchServerVirtualization/infrastructure_architecture/servervirtualization_article_014.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/tip/Full-virtualization-vs-paravirtualization-Key-differences</link>
            <pubDate>Mon, 11 Mar 2024 12:30:00 GMT</pubDate>
            <title>Full virtualization vs. paravirtualization: Key differences</title>
        </item>
        <item>
            <body>&lt;p&gt;The main difference between Type 1 vs. Type 2 hypervisors is that Type 1 runs on bare metal and Type 2 runs atop an operating system. Each hypervisor type also has its own pros and cons and specific use cases.&lt;/p&gt; 
&lt;p&gt;Virtualization works by abstracting physical hardware and devices from the applications running on that hardware. The process of virtualization provisions and manages the system's resources, including processor, memory, storage and network resources. This enables the system to host more than one workload simultaneously, making more cost- and energy-efficient use of the available servers and systems across the organization.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="What are hypervisors?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What are hypervisors?&lt;/h2&gt;
 &lt;p&gt;Virtualization requires the use of a &lt;a href="https://www.techtarget.com/searchitoperations/definition/hypervisor"&gt;hypervisor&lt;/a&gt;, which was originally called a virtual machine monitor or VMM. A hypervisor abstracts operating systems and applications from their underlying hardware. The physical hardware that a hypervisor runs on is typically referred to as a host machine, whereas the VMs that the hypervisor creates and supports are collectively called guest machines, guest VMs or simply VMs.&lt;/p&gt;
 &lt;p&gt;A hypervisor lets the host hardware operate multiple VMs independent of each other and share abstracted resources among those VMs. Virtualization with a hypervisor increases a data center's efficiency compared to physical workload hosting.&lt;/p&gt;
 &lt;p&gt;There are two types of hypervisors: Type 1 and Type 2 hypervisors. Both hypervisor varieties can &lt;a href="https://www.techtarget.com/searchitoperations/feature/How-to-choose-the-best-CPU-for-virtualization"&gt;virtualize common elements such as CPU&lt;/a&gt;, memory and networking. But based on its location in the stack, the hypervisor virtualizes these elements differently.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virt-hypervisor.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virt-hypervisor_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virt-hypervisor_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineImages/server_virt-hypervisor.jpg 1280w" alt="Hypervisor types." height="336" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;There are many differences between Type 1 and Type 2 hypervisors.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Type 1 hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Type 1 hypervisors&lt;/h2&gt;
 &lt;p&gt;A &lt;a href="https://www.techtarget.com/searchitoperations/definition/bare-metal-hypervisor"&gt;Type 1 hypervisor&lt;/a&gt; runs directly on the host machine's physical hardware, and it's &lt;a href="https://www.techtarget.com/searchitoperations/tip/A-beginners-guide-to-hosted-and-bare-metal-virtualization"&gt;referred to as a bare-metal hypervisor&lt;/a&gt;. The Type 1 hypervisor doesn't have to load an underlying OS. With direct access to the underlying hardware and no other software -- such as OSes and device drivers -- to contend with for virtualization, Type 1 hypervisors are regarded as the most efficient and best-performing hypervisors available for enterprise computing. In fact, a Type 1 hypervisor is often described as a virtualization OS because it effectively serves as the system's operating system.&lt;/p&gt;
 &lt;p&gt;Hypervisors that run directly on physical hardware are also highly secure. Virtualization mitigates the risk of attacks that target security flaws and vulnerabilities in OSes because each guest has its own OS. This ensures an attack on a guest VM is logically isolated to that VM and can't spread to others running on the same hardware.&lt;/p&gt;
 &lt;h3&gt;Type 1 hypervisor uses and capabilities&lt;/h3&gt;
 &lt;p&gt;Type 1 hypervisors have long been preferred and are the de facto standard for enterprise-class virtualization. The ability to &lt;a href="https://www.techtarget.com/searchitoperations/tip/Right-sizing-VMs-improves-performance-combats-resource-contention"&gt;create VMs of almost any size&lt;/a&gt; and configuration makes bare-metal VMs well suited for hosting large and complex enterprise workloads. The close connection between the VM and the underlying hardware enables excellent performance, especially since virtualization instruction sets were added to modern microprocessors.&lt;/p&gt;
 &lt;p&gt;The Type 1 hypervisor provides several key benefits for the enterprise:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Reliability.&lt;/b&gt; IT organizations use Type 1 hypervisors for production-level workloads that require increased uptimes, advanced failover and other production-ready features.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Scalability. &lt;/b&gt;The typical Type 1 hypervisor can scale to virtualize workloads across several terabytes of RAM and hundreds of CPU cores.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Advanced features.&lt;/b&gt; Type 1 hypervisors often provide support for software-defined storage and networking, which creates additional security and portability for virtualized workloads. However, such features come with a much higher initial cost and greater support contract requirements.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Strong management.&lt;/b&gt; The typical Type 1 hypervisor requires some level of external management -- with interfaces such as Microsoft System Center Virtual Machine Manager or VMware vCenter -- to access the full scope of the hypervisor's abilities.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Foundation for the cloud.&lt;/b&gt; Virtualization, and Type 1 hypervisors in particular, played an enormous role in enabling cloud computing technologies. The ability to provision, deploy and manage virtual environments on-demand through software was a pivotal characteristic for computing efficiency and the key to software-based, on-demand, user-driven capabilities that are endemic to successful cloud computing. There can be no cloud without virtualization and its related hypervisors.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="Type 2 hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Type 2 hypervisors&lt;/h2&gt;
 &lt;p&gt;A Type 2 hypervisor is typically installed on top of an existing host OS. It is sometimes called a &lt;a href="https://www.techtarget.com/searchitoperations/definition/Type-2-hypervisor-hosted-hypervisor"&gt;hosted hypervisor&lt;/a&gt; because it relies on the host machine's preexisting OS to manage calls to CPU, memory, storage and network resources.&lt;/p&gt;
 &lt;p&gt;Type 2 hypervisors trace their roots back to the early days of x86 virtualization when the hypervisor was added above the existing systems' OSes. Although the purpose and goals of Type 1 and Type 2 hypervisors are identical, the presence of an underlying OS with Type 2 hypervisors introduces unavoidable latency. All the hypervisor's activities and the work of every VM must pass through a single common host OS. Any security flaws or vulnerabilities in the host OS could also potentially compromise all of the VMs running above it.&lt;/p&gt;
 &lt;h3&gt;Type 2 hypervisor uses and capabilities&lt;/h3&gt;
  &lt;p&gt;The traditional limitations of Type 2 hypervisors have confined them largely to client or end-user systems, or to experimental environments where performance and security are lesser concerns than in a full production environment. For example, software developers might use a Type 2 hypervisor to create VMs to test a software product prior to release. Similarly, Type 2 hypervisors have seen significant use hosting smaller, high-volume virtual instances, and IT organizations typically use them to create the virtual desktops common in VDI deployments.&lt;/p&gt;
 &lt;p&gt;Still, Type 2 hypervisors have seen a strong surge in popularity because of several attractive benefits:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;Small and fast.&lt;/b&gt; Containers hosted on Type 2-style engines don't need individual guest operating systems the way traditional VMs do. This results in simpler and smaller logical entities that use far fewer resources, are faster to create, and are easier to migrate or manipulate.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Highly scalable.&lt;/b&gt; Because a container can use far fewer computer resources than a full VM, a computer can potentially host many more containers than traditional VMs.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Foundation for containers.&lt;/b&gt; The concept of Type 2 hypervisors is core to the emergence of virtualized containers. Containers use specialized Type 2 hypervisors called container engines, such as Docker or Apache Mesos, which let containers share a common OS. Container technology has spawned a new and highly efficient type of application architecture called &lt;a href="https://www.techtarget.com/searchapparchitecture/definition/microservices"&gt;microservices&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Foundation for cloud.&lt;/b&gt; Most public cloud providers offer native services that directly support the creation and management of virtual containers alongside traditional Type 1 VMs.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;In some businesses, container technology has displaced traditional Type 1 VMs as the preferred or most popular virtualization type.&lt;/p&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="Key differences between Type 1 and Type 2 hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Key differences between Type 1 and Type 2 hypervisors&lt;/h2&gt;
 &lt;p&gt;When selecting a hypervisor, it's important to understand the key differences between the Type 1 and Type 2 technologies:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;Complexity for IT.&lt;/b&gt; A Type 1 hypervisor is the functional OS and focal point of the system. It's typically installed on an enterprise-class server with the intention of creating and hosting multiple VMs, which requires comprehensive knowledge of the Type 1 hypervisor and detailed &lt;a href="https://www.techtarget.com/searchitoperations/tip/6-virtual-server-management-best-practices"&gt;server management and administration&lt;/a&gt;. A Type 2 hypervisor takes the form of a more traditional end-user application that can be installed and operated on simpler systems by less technical staff, though a solid knowledge of creating and managing Type 2 VMs is still highly recommended.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Installation.&lt;/b&gt; A Type 1 hypervisor is installed directly atop a computer's hardware. No underlying operating system is needed to operate a Type 1 hypervisor. A Type 2 hypervisor requires an underlying operating system (a host OS), and the Type 2 hypervisor operates atop the OS as any other application.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Access to computer resources.&lt;/b&gt; A Type 1 hypervisor has direct access to the computer's memory, CPU and other hardware resources that the hypervisor will virtualize, provision and manage directly. A Type 2 hypervisor must access and virtualize the computer's resources, but this must be accomplished through the host operating system.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;VM performance.&lt;/b&gt; A Type 1 hypervisor will offer the best performance for VMs and the workloads running inside each VM because the Type 1 hypervisor has direct access to the computer's underlying hardware resources. A Type 2 hypervisor must operate through an underlying operating system to access the computer's hardware. This results in additional latency and slightly lower performance for Type 2 VMs. Modern microprocessors and computer hardware designs can help to mitigate this performance gap, but it remains an important consideration for performance-sensitive applications.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;VM security.&lt;/b&gt; Type 1 hypervisors &lt;a href="https://www.techtarget.com/searchitoperations/tip/What-to-keep-in-mind-when-securing-virtual-environments"&gt;provide excellent security&lt;/a&gt; by invoking high levels of logical isolation between VMs -- no resources or services are shared between VMs. A security breach in one VM doesn't place other VMs at risk. Type 2 hypervisors offer good logical isolation as well, but the shared host OS poses a common threat. All of the vulnerabilities and risks of the host OS can affect the Type 2 hypervisor and all of the Type 2 VMs running above the host OS. It's critical to keep both hypervisor types patched and updated, and on Type 2 systems the host OS must also be aggressively patched and updated.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Hardware support for Type 1 and Type 2 hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Hardware support for Type 1 and Type 2 hypervisors&lt;/h2&gt;
 &lt;p&gt;Hardware acceleration technologies &lt;a href="https://www.techtarget.com/searchitoperations/tip/Understand-hardware-support-for-virtualization"&gt;are widely available for virtualization's tasks&lt;/a&gt;. Such technologies include Intel Virtualization Technology extensions for Intel processors and AMD Virtualization extensions for AMD processors. There are numerous other virtualization-based extensions and features, including second-level address translation and support for nested virtualization.&lt;/p&gt;
  &lt;p&gt;Hardware acceleration technologies perform many of the process-intensive tasks needed to create and manage virtual resources on a computer. Hardware acceleration improves virtualization performance and raises the practical number of VMs a computer can host beyond what the hypervisor could achieve alone.&lt;/p&gt;
 &lt;p&gt;Both Type 1 and Type 2 hypervisors use hardware acceleration support, but to varying degrees. Type 1 hypervisors rely on hardware acceleration technologies and typically don't function without those technologies available and enabled through the system's BIOS.&lt;/p&gt;
  &lt;p&gt;Type 2 hypervisors are generally capable of using hardware acceleration technologies if those features are available, but they typically &lt;a target="_blank" href="https://history-computer.com/whats-the-difference-between-hardware-and-software-emulation/" rel="noopener"&gt;fall back on software emulation&lt;/a&gt; in the absence of native hardware support. Computers that rely on software emulation suffer significant performance penalties that restrict both the number of VMs they can host and the performance of those VMs.&lt;/p&gt;
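 &lt;p&gt;QEMU is a concrete example of this fallback pattern: it runs VMs with KVM hardware acceleration when it's available and otherwise drops back to its TCG software emulator. A simplified, hypothetical sketch of that decision logic in Python (the function and return values are illustrative):&lt;/p&gt;

```python
import os

# Sketch of a hypervisor's accelerator-selection logic: prefer the
# hardware-assisted path, fall back to software emulation otherwise.
# (QEMU makes a similar choice between its KVM and TCG back ends.)

def choose_backend(kvm_device_present: bool, cpu_flags: set[str]) -> str:
    """Pick an execution back end for new VMs."""
    if kvm_device_present and ({"vmx", "svm"} & cpu_flags):
        return "hardware-assisted"   # near-native performance
    return "software-emulation"      # works anywhere, but much slower

if __name__ == "__main__":
    has_kvm = os.path.exists("/dev/kvm")  # Linux KVM device node
    print(choose_backend(has_kvm, {"vmx"}))
```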
 &lt;p&gt;Although all enterprise-class servers now include excellent hardware acceleration for virtualization, it's worth checking with your hypervisor vendor to determine a specific hypervisor's hardware support requirements.&lt;/p&gt;
 &lt;div class="youtube-iframe-container"&gt;
  &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/0cAcYq7YyWQ?autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://searchservervirtualization.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="Type 1 and Type 2 hypervisor vendors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Type 1 and Type 2 hypervisor vendors&lt;/h2&gt;
 &lt;p&gt;The hypervisor market contains several vendors, including VMware, Microsoft, Oracle and Citrix. Below are some popular products for both Type 1 and Type 2 hypervisors.&lt;/p&gt;
 &lt;h3&gt;Type 1 hypervisor products&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;VMware vSphere.&lt;/b&gt; VMware vSphere includes the ESXi hypervisor and vCenter management software to provide a suite of virtualization products, such as the vSphere Client, vSphere software development kits, Storage vMotion, the Distributed Resource Scheduler and Fault Tolerance. VMware vSphere is geared toward enterprise data centers; smaller businesses might find it difficult to justify the price.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Microsoft Hyper-V.&lt;/b&gt; Microsoft Hyper-V runs on Windows OSes and lets admins run multiple OSes inside a VM. Admins and developers often use Hyper-V to build test environments to run software on several OSes by creating VMs for each test.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;KVM.&lt;/b&gt; The KVM hypervisor is an open source virtualization architecture made for Linux distributions. The KVM hypervisor lets admins convert a Linux kernel into a hypervisor and has direct access to hardware along with any VMs hosted by the hypervisor. Features include live migration, scheduling and resource control.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Xen hypervisor. &lt;/b&gt;The open source Xen Project originally began as a research project at the University of Cambridge in 2003. It later moved under the purview of the Linux Foundation. &lt;a href="https://www.techtarget.com/searchitoperations/tip/Xen-vs-KVM-What-are-the-differences"&gt;Xen is used as the upstream version for other hypervisors&lt;/a&gt;, including Oracle VM and Citrix Hypervisor. Amazon Web Services uses a customized version of the Xen hypervisor as the foundation for its Elastic Compute Cloud.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Oracle VM.&lt;/b&gt; Oracle VM is an open source virtualization architecture that uses Xen at its core and lets admins deploy OSes and application software in VMs. Oracle VM features include creation and configuration of server pools, creation and management of storage repositories, VM cloning, VM migration and load balancing.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Citrix Hypervisor.&lt;/b&gt; The Citrix Hypervisor, previously known as Citrix XenServer, is an open source server virtualization platform based on the Xen hypervisor. Admins use the Citrix Hypervisor to deploy, host and manage VMs as well as distribute hardware resources to those VMs. Some key features include VM templates, XenMotion and host live patches. The Citrix Hypervisor comes in two versions: Standard and Enterprise.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Type 2 hypervisor products&lt;/h3&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Oracle VM VirtualBox.&lt;/b&gt; Oracle VM VirtualBox is an open source hosted hypervisor that runs on a host OS to support guest VMs. VirtualBox supports a variety of host OSes, such as Windows, Apple macOS, Linux and Oracle Solaris. VirtualBox offers multigeneration branched snapshots, Guest Additions, guest multiprocessing, ACPI support and Preboot Execution Environment network boot. Other Oracle hypervisor offerings include Oracle Solaris Zones and Oracle VM Server for x86.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;VMware Workstation Pro and VMware Fusion&lt;/b&gt;. VMware Workstation Pro is a 64-bit hosted hypervisor capable of implementing virtualization on Windows and Linux systems. Some of Workstation's features include host/guest file sharing, the creation and deployment of encrypted VMs, and VM snapshots. VMware developed Fusion as an alternative to Workstation. VMware Fusion offers many of the same capabilities as Workstation but is macOS compatible and comes with fewer features at a reduced price.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;QEMU. &lt;/b&gt;QEMU is an &lt;a href="https://www.techtarget.com/searchitoperations/tip/5-open-source-software-applications-for-virtualization"&gt;open source virtualization tool&lt;/a&gt; that emulates CPU architectures as well as lets developers and admins run applications compiled for one architecture on another. QEMU offers features such as support for non-volatile dual in-line memory module hardware, share file system, secure guests and memory encryption.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Parallels Desktop.&lt;/b&gt; Primarily geared toward macOS admins, Parallels Desktop lets Windows, Linux and Google Chrome OSes and applications run on Apple Mac. Common features include network conditioning; support for 128GB per VM; and Chef/Ohai, Docker and HashiCorp Vagrant integrations. Parallels Desktop comes in three modes: Coherence, Full Screen and Modality mode.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="Considerations for using Type 1 vs. Type 2 hypervisors"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Considerations for using Type 1 vs. Type 2 hypervisors&lt;/h2&gt;
 &lt;p&gt;When choosing between a Type 1 and Type 2 hypervisor, admins must consider the type and size of their workloads. If admins primarily work in an enterprise or large organization and must deploy hundreds of VMs, a Type 1 hypervisor will suit their needs.&lt;/p&gt;
 &lt;p&gt;But if admins have a smaller deployment, less-demanding workloads or require a testing environment, Type 2 hypervisors are less complex and have a smaller price tag. Enterprises and organizations can use Type 2 hypervisors as needed for workloads that suit the technology. Virtual containers are founded on Type 2 concepts, and many organizations will &lt;a href="https://www.techtarget.com/searchitoperations/answer/Containers-vs-VMs-What-are-the-key-differences"&gt;deploy containers rather than traditional VMs&lt;/a&gt; for some software types.&lt;/p&gt;
 &lt;p&gt;Ultimately, Type 1 and Type 2 hypervisors aren't mutually exclusive. Both hypervisors serve different purposes, and both can exist simultaneously within the same IT environment. It's even possible to operate both hypervisors on the same computer, such as nesting a Type 2 hypervisor in a Type 1 VM, though such combinations are exceedingly rare.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Brian Kirsch, an IT architect and Milwaukee Area Technical College instructor, has been in IT for more than 20 years, holds multiple certifications and sits on the VMUG board of directors.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Choosing between the two hypervisor types largely depends on whether IT administrators oversee an enterprise data center or client-facing, end-user systems.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/keyboard_g158089151.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/tip/Whats-the-difference-between-Type-1-vs-Type-2-hypervisor</link>
            <pubDate>Thu, 07 Mar 2024 00:00:00 GMT</pubDate>
            <title>What's the difference between Type 1 vs. Type 2 hypervisor?</title>
        </item>
        <item>
            <body>&lt;p&gt;A lack of insight has plagued IT organizations since the earliest days of computing, leading to inefficiency and wasted capital, nagging performance problems and perplexing availability issues that can be costly and time-consuming to resolve. That's why IT monitoring has long been an essential element of any enterprise IT strategy.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.techtarget.com/searchitoperations/definition/IT-monitoring"&gt;IT monitoring&lt;/a&gt; is needed to oversee and guide the data center and its constituent applications and services. IT administrators and business leaders track metrics over time to ensure that the IT organization &lt;a href="https://www.techtarget.com/searchitoperations/definition/IT-performance-management-information-technology-performance-management"&gt;maintains its required level of performance&lt;/a&gt;, security and availability, using trends to validate infrastructure updates and changes long before applications and services are affected. At the same time, real-time alerting lets admins respond to immediate problems that could harm the business.&lt;/p&gt; 
&lt;p&gt;Enterprise IT monitoring uses software-based instrumentation, such as APIs and software agents, to gather operational information about hardware and software across the enterprise infrastructure. Such information can include basic device or application health checks, as well as far more detailed metrics that track resource availability and utilization, system and network response times, and error rates and alarms.&lt;/p&gt; 
&lt;p&gt;IT monitoring employs the following three fundamental layers:&lt;/p&gt; 
&lt;ol class="default-list"&gt; 
 &lt;li&gt;The &lt;i&gt;foundation layer&lt;/i&gt; gathers data from the IT environment, often using combinations of agents, logs, APIs or other standardized communication protocols to access data from hardware and software.&lt;/li&gt; 
 &lt;li&gt;The &lt;i&gt;software layer&lt;/i&gt; then processes and analyzes the raw data. From this, the &lt;a href="https://www.techtarget.com/searchitoperations/feature/Compare-8-tools-for-IT-monitoring"&gt;monitoring tools&lt;/a&gt; establish trends and generate alarms.&lt;/li&gt; 
 &lt;li&gt;The &lt;i&gt;interface layer&lt;/i&gt; displays the analyzed data in graphs or charts through a GUI dashboard.&lt;/li&gt; 
&lt;/ol&gt; 
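&lt;p&gt;The three layers can be sketched as a tiny pipeline in Python -- the function names, sample data and alarm threshold below are all illustrative:&lt;/p&gt;

```python
from statistics import mean

# Toy three-layer monitoring pipeline (names and thresholds illustrative).

def foundation_layer() -> list[float]:
    """Gather raw data, e.g. CPU utilization samples from agents or APIs."""
    return [42.0, 55.5, 61.2, 97.8]  # stand-in for real collection

def software_layer(samples: list[float], alarm_at: float = 90.0) -> dict:
    """Process raw data into a trend and any alarms."""
    return {
        "average": mean(samples),
        "alarms": [s for s in samples if s >= alarm_at],
    }

def interface_layer(report: dict) -> str:
    """Render the analysis for a dashboard or console."""
    return (f"avg CPU {report['average']:.1f}% | "
            f"{len(report['alarms'])} alarm(s): {report['alarms']}")

if __name__ == "__main__":
    print(interface_layer(software_layer(foundation_layer())))
```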
&lt;section class="section main-article-chapter" data-menu-title="What is an IT monitoring strategy?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is an IT monitoring strategy?&lt;/h2&gt;
 &lt;p&gt;In simplest terms, an IT monitoring strategy represents the organization's game plan for managing the health, performance and availability of applications and infrastructure. A monitoring strategy defines why monitoring is needed, what needs to be watched and how it needs to be watched. An IT monitoring strategy encompasses four essential levels:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Goals.&lt;/b&gt; This is the "why" of IT monitoring. Monitoring for its own sake is a waste of important resources. An IT monitoring strategy should be built on a meaningful or tangible business purpose. For example, monitoring might be needed to improve application availability, ensure a satisfactory UX or measure revenue per transaction.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Metrics and KPIs.&lt;/b&gt; This is the "what" of IT monitoring. Business and IT leaders can select from a wide array of metrics and KPIs that will help the business meet its goals. Metrics and KPIs can be measured directly or indirectly calculated by performing simple calculations based on other direct measurements.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Tools.&lt;/b&gt; This is the "how" of IT monitoring. Procure or build software tools that can collect, store, process and deliver metrics and KPIs to IT and business leaders. Not all tools are suited to all metrics or KPIs, and many tools provide a high level of customization to accommodate a wide range of environments and use cases. It's important to select the right tools for the job at hand.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Reporting.&lt;/b&gt; This addresses what happens to metric and KPI data once tools collect it -- an important and often overlooked part of the "how" of IT monitoring. Metrics and KPIs aren't an end unto themselves -- simply having all that data offers little value. Tools must collect, process and present data points to IT and business leaders in a form that's clear and actionable. Although reporting is a function of the tool set, it's vital to consider how that data is presented. Some reporting might be tactical or immediate in the form of alerts, while other reporting might be more strategic or trend-related in the form of dashboard summaries.&lt;/li&gt; 
 &lt;/ol&gt;
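 &lt;p&gt;As a small illustration of direct vs. derived measurements, KPIs such as error rate and availability are simple calculations over directly measured counts; the figures below are invented:&lt;/p&gt;

```python
# Sketch: a KPI can be measured directly (e.g., request count) or derived
# from direct measurements, as with error rate and availability.

def error_rate(errors: int, requests: int) -> float:
    """Derived KPI: the fraction of requests that failed."""
    return 0.0 if requests == 0 else errors / requests

def availability_pct(uptime_s: float, window_s: float) -> float:
    """Derived KPI: percentage of a reporting window the service was up."""
    return 100.0 * uptime_s / window_s

# 12 failures across 4,800 requests; 86,040 s of uptime in a 24-hour window
print(error_rate(12, 4800))
print(availability_pct(86040, 86400))
```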
 &lt;h3&gt;Monitoring vs. observability&lt;/h3&gt;
 &lt;p&gt;The evolution of IT has introduced a &lt;a href="https://www.techtarget.com/searchitoperations/tip/Observability-vs-monitoring-Whats-the-difference"&gt;distinction between the concepts of monitoring and observability&lt;/a&gt;, and admins will most likely encounter both terms when developing an IT monitoring strategy and selecting suitable tools.&lt;/p&gt;
 &lt;p&gt;In simplest terms, monitoring is used to collect data and make conclusions about the outputs of an application, service or device. For example, it's a simple matter to measure the bandwidth utilization of a network segment and report that as a percentage of available bandwidth.&lt;/p&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchitoperations/definition/observability"&gt;Observability has a deeper meaning&lt;/a&gt;, encompassing the collection, processing and reporting of data points that can provide a more detailed and holistic picture of the environment's behavior -- and more effectively pinpoint potential problems. With the bandwidth example, observability might deliver a more detailed picture of workloads and services consuming the available bandwidth on that network segment.&lt;/p&gt;
 &lt;p&gt;For the purposes of this guide, the needs and factors that drive monitoring and observability are identical. The concept of an IT monitoring strategy can entail both monitoring and observability.&lt;/p&gt;
&lt;/section&gt;        
&lt;section class="section main-article-chapter" data-menu-title="Why is having an IT monitoring strategy important?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Why is having an IT monitoring strategy important?&lt;/h2&gt;
 &lt;p&gt;An IT monitoring strategy is a &lt;a href="https://www.techtarget.com/searchitoperations/tip/IT-operations-management-best-practices-you-need-to-know"&gt;cornerstone of IT operations&lt;/a&gt;. Because almost every modern enterprise derives revenue from applications and data running on an IT infrastructure, it's essential for the business to know that those applications, data sets and underlying infrastructure are all working within acceptable parameters. IT monitoring lets the business remediate -- even prevent -- problems that can affect customer satisfaction and revenue.&lt;/p&gt;
 &lt;p&gt;An IT monitoring strategy is the high-tech equivalent of quality control (QC) in a traditional factory.&lt;/p&gt;
 &lt;p&gt;Consider a traditional factory in the business of manufacturing a physical product for sale. The business implements a QC activity that evaluates the suitability of raw materials, gauges the functionality and quality of each production machine's output and validates the final product against physical dimensions, functional behavior or other parameters.&lt;/p&gt;
 &lt;p&gt;Traditional QC is responsible for making sure that the company is manufacturing quality products that operate properly and are visually and functionally suitable for sale. Without QC, the company has no objective means of measuring the quality or suitability of the products being manufactured. Quality products make for happier customers and fewer returns.&lt;/p&gt;
 &lt;p&gt;IT monitoring is closely analogous to its physical counterpart. IT monitoring can ensure that applications are available and healthy, related data stores are available and valid and that all the supporting servers, storage, networking and services are functioning normally -- all with the goal of delivering applications and data to users.&lt;/p&gt;
 &lt;p&gt;When an application crashes, performs poorly, can't access data or becomes otherwise unavailable, customer satisfaction and revenue fall, costly time and effort are spent troubleshooting and the business might even face regulatory consequences. Without IT monitoring, the business has no objective means of knowing how well applications are working until help requests start flooding in. IT monitoring, and the strategies adopted to implement it, give the business objective insight into its operations and their effect on revenue. In many cases, proper IT monitoring can even mitigate potential problems before they manifest to the user.&lt;/p&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="Types of IT monitoring"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Types of IT monitoring&lt;/h2&gt;
 &lt;p&gt;Although the need for IT monitoring is ubiquitous, monitoring approaches have proliferated and diversified through the years. This has yielded an array of strategies and tools focused on specific aspects of monitoring: IT infrastructure, public cloud, networking, security, and application and UX monitoring.&lt;/p&gt;
 &lt;h3&gt;IT infrastructure monitoring&lt;/h3&gt;
 &lt;p&gt;The traditional foundation of compute infrastructure lies in an organization's local data center environment: fleets of physical and virtual servers as well as storage assets such as local disks and disk arrays. Server monitoring discovers and classifies the existing assets, including hardware, OSes and drivers, and then collects and processes a series of metrics.&lt;/p&gt;
 &lt;p&gt;Metrics can track physical and virtual server availability -- uptime -- and performance, and measure resource capacity, such as processor core availability and clock usage, memory capacity and utilization, disk storage and any associated storage area network.&lt;/p&gt;
 &lt;p&gt;Monitoring can also check and enforce system configurations to ensure uniformity and security. By watching capacity metrics, a business can find unavailable, unused or underutilized resources and make informed predictions about when and how much to upgrade.&lt;/p&gt;
 &lt;p&gt;Metrics also let admins oversee benchmarks that gauge actual vs. normal compute performance from which they can immediately identify and correct infrastructure failures. Changes in performance trends over time also might indicate potential future compute problems, such as over-stressed servers.&lt;/p&gt;
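 &lt;p&gt;As an illustration of gauging actual vs. normal compute performance, the following sketch flags metrics that exceed an assumed baseline by more than a tolerance; the baseline values and readings are invented:&lt;/p&gt;

```python
# Sketch: compare actual server metrics against a baseline benchmark and
# flag deviations beyond a tolerance; all numbers here are hypothetical.

BASELINE = {"cpu_pct": 40.0, "mem_pct": 55.0, "disk_pct": 60.0}

def deviations(actual: dict, baseline: dict, tolerance_pct: float = 50.0):
    """Flag metrics that exceed their baseline by more than tolerance_pct."""
    flagged = {}
    for name, base in baseline.items():
        value = actual.get(name)
        if value is not None and value > base * (1 + tolerance_pct / 100):
            flagged[name] = value
    return flagged

# CPU is more than 50% above its baseline; memory and disk are within range
print(deviations({"cpu_pct": 92.0, "mem_pct": 58.0, "disk_pct": 61.0}, BASELINE))
```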
 &lt;h3&gt;Public cloud infrastructure monitoring&lt;/h3&gt;
 &lt;p&gt;As organizations expand off-premises compute environments, infrastructure monitoring has expanded to include remote and cloud infrastructures. Although cloud monitoring has some limitations, IaaS providers often allow infrastructure visibility down to the server and OS level, including processors, memory and storage. Some native tools let IT managers dig into the details of log data. Still, public cloud infrastructures are shrouded by a virtualization layer that conceals the provider's underlying physical assets. Cloud providers offer native monitoring tools, and third-party tools are available for multi-cloud environments.&lt;/p&gt;
 &lt;h3&gt;Network monitoring&lt;/h3&gt;
 &lt;p&gt;Servers and storage have little value without a LAN and WAN to connect them, so network monitoring has evolved as an important IT monitoring type. Unique devices and services in the network, including switches, routers, firewalls and gateways, rely on APIs and common communication protocols to provide details about configuration, such as routing and forwarding tables. Monitoring tools yield network performance metrics -- such as uptime, errors, bandwidth consumption and latency -- across all the subnets of a complex LAN. It's challenging to find a single network monitoring tool to cover all network devices and process network metrics into meaningful intelligence.&lt;/p&gt;
 &lt;p&gt;Another reason network monitoring is a separate branch of the IT monitoring tree is security. A network is the road system that carries data around an enterprise and to its users. It's also the principal avenue for attack on the organization's servers and storage. That makes it essential to have an array of network security tools for intrusion detection and prevention, vulnerability monitoring and access logging. The notion of continuous security monitoring relies on automation. It promises real-time, end-to-end oversight of the security environment, alerting security teams to potential breaches.&lt;/p&gt;
 &lt;h3&gt;Security monitoring&lt;/h3&gt;
 &lt;p&gt;Beyond performance and availability, security has become a principal goal for IT monitoring strategies. Security is vital for protecting business data and helping strengthen the organization's compliance and governance posture. Security has many aspects, such as the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Authentication and authorization to ensure that only the right users have access to the right resources and services.&lt;/li&gt; 
  &lt;li&gt;Intrusion detection and alerting to watch for unusual network, application and storage traffic.&lt;/li&gt; 
  &lt;li&gt;Detecting and preventing malware such as viruses, spyware and ransomware.&lt;/li&gt; 
  &lt;li&gt;Setting and enforcing a prescribed configuration environment to ensure that systems and applications operate within known, well-defined infrastructure for security and compliance -- usually part of a broader change management process.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Security monitoring might require different tools to cover each aspect. For example, one tool might handle intrusion detection and prevention, while another tool might handle authentication and authorization monitoring.&lt;/p&gt;
 &lt;h3&gt;Application and user experience monitoring&lt;/h3&gt;
 &lt;p&gt;Even an infrastructure and network that work perfectly might provide inadequate support for demanding applications, leaving users frustrated and dissatisfied. This potentially leads to wasted investment and lost business. Ultimately, it's availability and performance of the workload -- not the infrastructure -- that really matters to users. This has led to two relatively recent expressions of IT monitoring: &lt;a href="https://www.techtarget.com/searchenterprisedesktop/definition/Application-monitoring-app-monitoring"&gt;application performance monitoring&lt;/a&gt; (APM) and UX monitoring.&lt;/p&gt;
 &lt;p&gt;APM typically uses traditional infrastructure and network monitoring tools to assess the underlying behaviors of the application's environment, but it also gathers metrics specifically related to application performance. These can include average latency, latency under peak load and bottleneck data such as delays accessing an essential database or the time it takes to service a user's request.&lt;/p&gt;
 &lt;p&gt;An organization that demonstrates how an application performs within acceptable parameters and operates as expected can strengthen its governance posture. When an application's performance deviates from acceptable parameters, leaders can remediate problems quickly -- often without users ever knowing a problem existed.&lt;/p&gt;
 &lt;p&gt;APM took root in local data centers, but public cloud providers offer support tools -- such as Amazon CloudWatch and Azure Monitor -- for application-centric cloud monitoring. Organizations with a strong public cloud portfolio will need a range of cloud monitoring tools that not only track application performance, but also ensure security and calculate how efficiently resources are used. Common cloud application metrics include resource availability, response time, application errors and network traffic levels.&lt;/p&gt;
 &lt;p&gt;A recent iteration of APM relates to applications based on a &lt;a href="https://www.techtarget.com/searchapparchitecture/definition/microservices"&gt;microservices&lt;/a&gt; architecture, which uses APIs to integrate and allow communication between individual services. Such architectures are particularly &lt;a href="https://www.techtarget.com/searchapparchitecture/tip/What-semantic-monitoring-can-and-cant-do-for-microservices"&gt;challenging to monitor&lt;/a&gt; because of the ephemeral nature of microservices containers and the emphasis on LAN connectivity and performance. Consequently, microservices applications often rely on what's called "semantic monitoring" to run periodic, simulated tests within production systems, gathering metrics on the application's performance, availability, functionality and response times.&lt;/p&gt;
 &lt;p&gt;UX monitoring is closely related to APM, but the metrics are gathered from the users' perspective. For example, an application-responsiveness metric might measure the time it takes for an application to deliver a requested web page. If this metric is low, the request was completed quickly and the user is likely satisfied with the result. As the request takes longer to complete, the user grows less satisfied.&lt;/p&gt;
 &lt;p&gt;Regardless of monitoring type, the goal is to answer four essential questions:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;What is present?&lt;/li&gt; 
  &lt;li&gt;Is it working?&lt;/li&gt; 
  &lt;li&gt;How well is it working?&lt;/li&gt; 
   &lt;li&gt;How many resources are being used?&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-it_monitoring_is_everywhere-i.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-it_monitoring_is_everywhere-i_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-it_monitoring_is_everywhere-i_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-it_monitoring_is_everywhere-i.png 1280w" alt="IT monitoring throughout the enterprise technology stack" height="428" width="559"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;IT monitoring happens throughout the enterprise IT stack, gathering metrics about system and application performance, network and security updates and user experiences.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;                          
&lt;section class="section main-article-chapter" data-menu-title="IT monitoring and DevOps"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;IT monitoring and DevOps&lt;/h2&gt;
 &lt;p&gt;As IT activities transition into a business service, the role of IT monitoring has expanded from an infrastructure focus to include IT processes to ensure that workflows proceed quickly, efficiently and successfully. At the same time, software development has emerged as one of the most important IT processes. Agile and continuous development paradigms, such as &lt;a href="https://www.techtarget.com/searchitoperations/definition/DevOps"&gt;DevOps&lt;/a&gt;, embrace rapid cycles of iteration and testing to drive software product development.&lt;/p&gt;
 &lt;p&gt;Rapid cyclical workflows can be rife with bottlenecks and delays caused by human error, poorly planned processes and inadequate or inappropriate tools. Monitoring the DevOps workflow enables an organization to collect metrics on deployment frequency, lead time, change volume, failed deployments, defect -- bug -- volume, mean time to detection/recovery, service-level agreement (SLA) compliance and other steps and missteps. &lt;a href="https://www.techtarget.com/searchitoperations/tip/Nine-DevOps-metrics-you-should-use-to-gauge-improvement"&gt;These metrics can preserve DevOps efficiency&lt;/a&gt; while identifying potential areas for improvement.&lt;/p&gt;
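 &lt;p&gt;Several of the DevOps metrics above -- deployment frequency, change failure rate and mean time to recovery -- can be derived from a simple deployment log. This sketch uses an invented log:&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Sketch: compute DevOps workflow metrics from a hypothetical deployment log.
deploys = [
    {"at": datetime(2024, 5, 1), "failed": False, "recovered_after": None},
    {"at": datetime(2024, 5, 2), "failed": True,
     "recovered_after": timedelta(minutes=45)},
    {"at": datetime(2024, 5, 4), "failed": False, "recovered_after": None},
    {"at": datetime(2024, 5, 8), "failed": True,
     "recovered_after": timedelta(minutes=15)},
]

window_days = 7
frequency = len(deploys) / window_days        # deployments per day
failures = [d for d in deploys if d["failed"]]
failure_rate = len(failures) / len(deploys)   # change failure rate
# Mean time to recovery (MTTR) across the failed deployments
mttr = sum((d["recovered_after"] for d in failures), timedelta()) / len(failures)

print(frequency, failure_rate, mttr)
```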
 &lt;p&gt;IT monitoring tools used in DevOps environments often focus on end-to-end aspects of application performance and UX monitoring. Rather than simply observe the total or net performance, the goal is to help developers and project managers delve into the many associations and dependencies that occur around application performance. This helps them determine the root of performance problems and troubleshoot more effectively.&lt;/p&gt;
 &lt;p&gt;For example, it's useful to know that it takes an average of 12 seconds to return a user request. That metric, however, doesn't identify why the action takes so long. Tools monitoring the underlying infrastructure might reveal that web server 18 needs 7.4 seconds to get a response from the database server, and this is ultimately responsible for the unacceptable delay. With such insight, developers and admins can address the underlying issues to improve and maintain application performance.&lt;/p&gt;
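 &lt;p&gt;The drill-down described above can be sketched as summing per-component latency spans and surfacing the dominant contributor; the span names and timings below are illustrative, not from any particular tracing tool:&lt;/p&gt;

```python
# Sketch: break a net latency figure into per-component spans and find
# the bottleneck; span names and timings are invented for illustration.

spans = {"tls_handshake": 0.3, "app_logic": 1.8, "db_query": 7.4,
         "render": 1.5, "network": 1.0}

total = sum(spans.values())
bottleneck, cost = max(spans.items(), key=lambda kv: kv[1])
print(f"total {total:.1f}s; worst: {bottleneck} at {cost}s "
      f"({100 * cost / total:.0f}% of latency)")
```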
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="IT monitoring and containers"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;IT monitoring and containers&lt;/h2&gt;
 &lt;p&gt;IT monitoring has also expanded to support virtualized containers, as many new applications are designed and built to deploy in containers. The rise of containers brings a host of &lt;a href="https://www.techtarget.com/searchitoperations/tip/Improve-container-monitoring-with-these-strategies-and-tools"&gt;container monitoring challenges&lt;/a&gt; to IT organizations. Containers are notoriously ephemeral -- they spawn within the environment and last only as long as needed, sometimes only seconds. Containers use relatively few resources compared to VMs, but they are far more numerous.&lt;/p&gt;
 &lt;p&gt;Moreover, a working container environment involves an array of supporting elements:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Container hosts.&lt;/li&gt; 
  &lt;li&gt;Container engines, such as Docker or CRI-O.&lt;/li&gt; 
  &lt;li&gt;Cluster orchestration and management systems, such as Kubernetes.&lt;/li&gt; 
  &lt;li&gt;Service routing and mesh services.&lt;/li&gt; 
  &lt;li&gt;Application deployment paradigms, such as microservices.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Taken together, containers greatly multiply the number of objects that DevOps and IT teams must monitor, rendering traditional monitoring setup and configuration activities inadequate.&lt;/p&gt;
 &lt;p&gt;Container monitoring involves familiar metrics such as resource and network utilization, but there are other metrics to contend with, including node utilization, pods vs. capacity and kube-system alerts. Even container logging must be reviewed and updated to ensure that meaningful log data is collected from the application, volume and container engine for analysis.&lt;/p&gt;
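 &lt;p&gt;A sketch of the container KPIs named above -- pods vs. capacity and node utilization -- over an invented three-node cluster:&lt;/p&gt;

```python
# Sketch: compute pods vs. capacity at cluster and node level and warn
# when a node nears its pod capacity; nodes and counts are hypothetical.

nodes = [
    {"name": "node-a", "pods": 92, "pod_capacity": 110},
    {"name": "node-b", "pods": 64, "pod_capacity": 110},
    {"name": "node-c", "pods": 108, "pod_capacity": 110},
]

cluster_pods = sum(n["pods"] for n in nodes)
cluster_capacity = sum(n["pod_capacity"] for n in nodes)
print(f"cluster: {cluster_pods}/{cluster_capacity} pods")

crowded = []
for n in nodes:
    util = n["pods"] / n["pod_capacity"]
    if util > 0.9:  # alert threshold: node nearly at pod capacity
        crowded.append(n["name"])
        print(f"WARN {n['name']} at {util:.0%} of pod capacity")
```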
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-kpis_container-f.png"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-kpis_container-f_mobile.png" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-kpis_container-f_mobile.png 960w,https://searchservervirtualization.techtarget.com/rms/onlineImages/itops-kpis_container-f.png 1280w" alt="Container KPIs" height="290" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Breakdown of key performance indicators for container infrastructure monitoring.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;       
&lt;section class="section main-article-chapter" data-menu-title="Advanced IT monitoring concepts"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Advanced IT monitoring concepts&lt;/h2&gt;
 &lt;p&gt;Other advances in enterprise IT monitoring include the rise of &lt;a href="https://www.techtarget.com/whatis/definition/real-time-monitoring"&gt;real-time monitoring&lt;/a&gt; and trend/predictive monitoring. Real-time monitoring isn't just a matter of agents forwarding collected data and sending alerts to IT admins. Instead, the goal is to stream continuous real-time information where it can be collected, analyzed and used to make informed decisions about immediate events and assess trends over time. Data is collected from the infrastructure, but with increasingly resilient and distributed modern infrastructures, the data collected from applications can be more valuable to IT staff.&lt;/p&gt;
 &lt;p&gt;Such monitoring employs a more analytical approach to alerting and thresholds, with techniques such as classification analysis and regression analysis used to make smart decisions about normal vs. abnormal &lt;a href="https://www.techtarget.com/searchitoperations/tip/Explore-the-benefits-limitations-of-time-series-monitoring-in-IT"&gt;system and application behaviors&lt;/a&gt;. Classification analysis organizes data points into groups or clusters, allowing events outside of the classification to be easily identified for closer evaluation. Regression analysis generally makes decisions or predictions based on past events or behaviors. A normal distribution plots events by probability to determine the mean -- the average -- and the variation -- standard deviations -- again allowing unusual events to be found quickly and effectively.&lt;/p&gt;
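 &lt;p&gt;As a sketch of the statistical approach above -- flagging readings that fall too many standard deviations from the mean -- the following uses Python's standard statistics module on an invented latency series, with a two-sigma cutoff:&lt;/p&gt;

```python
import statistics

# Sketch: flag events far from the mean of a sample as candidate anomalies.
# The latency values and the two-sigma cutoff are illustrative choices.

def outliers(samples, sigmas=2.0):
    """Return samples more than `sigmas` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > sigmas * stdev]

latencies_ms = [101, 99, 103, 98, 102, 100, 97, 250]  # one abnormal spike
print(outliers(latencies_ms))
```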
 &lt;p&gt;Classification and regression analysis are also closely related to machine learning (ML) and AI. Both technologies are making inroads in IT monitoring. ML uses collected data to build a behavioral model and then expands and refines the model over time to provide accurate predictions about how something will behave. ML technologies have proved effective in IT tasks such as failure prediction and predictive maintenance. AI and AIOps build on ML to bring autonomy to the ML model and enable software to make -- and respond to -- informed decisions based on dynamic conditions.&lt;/p&gt;
 &lt;div class="articleVideoFullWidth "&gt;
  &lt;video id="singlePlayer-0" class="video-js" data-account="1367663370" data-player="241dc03c-5fb7-411b-a162-bdf807c489ba" data-embed="default" data-video-id="6029948913001" controls=""&gt;&lt;/video&gt;
  &lt;script src="//players.brightcove.net/1367663370/241dc03c-5fb7-411b-a162-bdf807c489ba_default/index.min.js"&gt;&lt;/script&gt;
 &lt;/div&gt;
 &lt;p&gt;Network monitoring poses another challenge. Frequent use of traditional Simple Network Management Protocol (SNMP) communication can be disruptive to busy modern networks, so a type of real-time monitoring called &lt;i&gt;streaming telemetry&lt;/i&gt; pushes network operational data to data collection points. This offers better real-time monitoring than SNMP without disrupting network devices.&lt;/p&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="How to build an effective IT monitoring strategy"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How to build an effective IT monitoring strategy&lt;/h2&gt;
 &lt;p&gt;A good IT monitoring strategy saves money, conserves limited IT resources, speeds troubleshooting and mitigation and reduces the burden of managing many disparate tools. There are several best practices an organization can build upon to create an overall strategy:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Reduce or limit the number of monitoring tools.&lt;/b&gt; Seek a single-pane-of-glass monitoring environment wherever possible. This works well for relatively homogeneous organizations that use a limited number of systems, architectures, workflows and policies. As one example, a company that does business with just one public cloud provider might use that provider's native monitoring tools along with one or two tools to support the local data center. However, this might be impractical for heterogeneous organizations with broad mixes of hardware, architectures and workflow models.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Connect IT monitoring with business outcomes.&lt;/b&gt; It's easy to deploy so many tools and collect so much data that IT spends vital time and resources watching behaviors and parameters that have no effect on the business. When an IT monitoring strategy starts with a clear perspective on business goals, the subsequent decisions on metrics -- and the tools to gather them -- can be far more deliberate and focused. This will yield results that most directly benefit the business.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Consider the monitoring approach or focus.&lt;/b&gt; There are many ways to approach monitoring that focus on specific areas, such as applications, performance, infrastructure, security, governance and compliance. Each employs different metrics and KPIs. It's possible to embrace several approaches simultaneously, but this requires additional tools and more complex reporting.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Develop monitoring that closely ties to the application architecture. &lt;/b&gt;For example, create a UI for application data using microservices, public cloud, serverless and managed services. This approach is intended for newer application architectures, which can be designed and supported from the ground up; it doesn't work well for legacy or heterogeneous architectures.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Develop an in-house monitoring environment.&lt;/b&gt; One common example is to use log aggregation and analytics tools to create a central repository of operational data, and analyze, report and even predict alerts. This strategy can integrate multiple monitoring tools along with database, data integration, monitoring and visualization tools to create a custom monitoring resource. Be aware that the DIY approach can be time-consuming and expensive to create and maintain.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Use the data that's collected.&lt;/b&gt; Data without purpose is useless. Consider how each metric or KPI will be used. If a metric or KPI isn't needed for a tangible purpose, there's no benefit in gathering and storing that data. Consider how the data will be processed, reported, retained and eventually destroyed according to the organization's data retention policies. If monitoring data isn't included in those policies, it should be. Data should yield meaningful alerting and reporting.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Adopt an autonomous operations platform. &lt;/b&gt;Tools such as Moogsoft, Datameer, VictorOps, Opsgenie and AlertOps use data integration and ML to effectively create a unified monitoring system with a growing level of intelligence and autonomy to help speed IT incident reporting and responses.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;Once the strategy is clear, organizations can make more granular choices about implementation approaches and tools. There are plenty of options.&lt;/p&gt;
 &lt;h3&gt;Agents vs. agentless monitoring&lt;/h3&gt;
 &lt;p&gt;Monitoring is, at its core, the process of collecting, processing and reporting data. But which data is collected -- and how -- can vary dramatically. A truly effective monitoring tool sees each target hardware or software object and can query it for details. In most cases, this requires installing agents on each object to be discovered and monitored. Although agents produce extremely detailed monitoring data, they must be patched, updated and otherwise managed. They also incur processing and network overhead, potentially harming the performance of the object on which the agent operates. Using agents echoes the age-old scientific truism: The act of observing a thing changes the behavior of that thing.&lt;/p&gt;
 &lt;p&gt;Agentless monitoring forgoes the use of agents and instead collects data through standardized communication protocols, such as the Intelligent Platform Management Interface or SNMP, or through services such as interoperable APIs. Agentless monitoring sheds the disadvantages of agents, but the data it collects tends to be limited in quantity and detail. Many monitoring products support both agent-based and agentless data collection.&lt;/p&gt;
 &lt;h3&gt;Reactive monitoring vs. proactive monitoring&lt;/h3&gt;
 &lt;p&gt;This is another expression of real-time vs. trend monitoring. Collecting and reporting real-time statistics -- from processor and memory utilization to overall service health and availability -- is a time-tested, proven approach for alerting and troubleshooting in a 24/7 data center environment. In this approach, admins react to an event once it occurs.&lt;/p&gt;
 &lt;p&gt;Proactive monitoring seeks to look ahead and make assessments and recommendations that can potentially prevent problems from occurring. For example, if a monitoring tool alerts admins that memory wasn't released when a VM was destroyed, they can address the leak in the VM application before the affected server runs out of memory and crashes. Proactive monitoring depends on reactive tools to collect data and create trends for the proactive tool to analyze, and is increasingly augmented with ML and AI technologies to help spot abnormal behaviors and recurring events. For example, if ML detects a recurring rise in application demand or traffic, it can increase resources to the workload automatically to preserve performance and UX without any human intervention.&lt;/p&gt;
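 &lt;p&gt;A minimal sketch of the proactive side, assuming daily disk-usage readings: fit a linear trend and estimate when the disk fills. The readings and growth rate below are invented:&lt;/p&gt;

```python
# Sketch: least-squares trend over daily disk-usage readings, projected
# forward to estimate days until the disk is full; all figures invented.

readings = [62.0, 63.5, 65.0, 66.5, 68.0]  # % disk used, one reading per day

n = len(readings)
days = range(n)
mean_x, mean_y = sum(days) / n, sum(readings) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, readings))
         / sum((x - mean_x) ** 2 for x in days))  # growth in % per day
days_left = (100.0 - readings[-1]) / slope
print(f"growing {slope:.2f}%/day; full in about {days_left:.0f} days")
```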
 &lt;h3&gt;Distributed applications&lt;/h3&gt;
 &lt;p&gt;Applications that traditionally run in the local data center are increasingly distributed across multiple computing infrastructure models, such as remote data centers and both hybrid cloud and multi-cloud environments. For example, an application might run multiple instances in the public cloud, where ample scalability is readily available, but rely on other applications or data still hosted in the local data center. This adds tremendous monitoring complexity because each component of the overall application must be monitored to ensure that it operates properly.&lt;/p&gt;
 &lt;p&gt;One key choice in such complex environments is centralization or decentralization. Centralizing collects monitoring data from local and cloud platforms into a single tool to present a single, unified view. This is best to provide end-to-end monitoring across cloud and local infrastructures, although it requires careful integration. By contrast, decentralization continues the use of cloud and local tools without coordination or interdependency. This is simpler to manage and maintain with few dependencies, but organization and analysis of multiple monitoring tools and data sources can be a challenge.&lt;/p&gt;
 &lt;h3&gt;Monitoring and virtualization&lt;/h3&gt;
 &lt;p&gt;Virtualization is a staple of cloud and local data centers and is responsible for vastly improved resource utilization and versatility through software-defined technologies, such as software-defined networks. Monitoring must account for the presence of virtualization layers, whether hypervisors or container engines, and see through to the underlying physical layer wherever possible. Modern monitoring tools are typically virtualization-aware, but it's important to validate each tool's behavior. Containers are a variation of virtualization technology and share the same need for monitoring and management.&lt;/p&gt;
 &lt;p&gt;For example, network virtualization divides a physical network into many logical networks, but it can mask performance or device problems from traditional monitoring tools. Proper monitoring at the network level might require monitoring individual VMs and hypervisors, or containers and container engines, to ensure a complete performance picture.&lt;/p&gt;
 &lt;h3&gt;The role of ML and AI&lt;/h3&gt;
 &lt;p&gt;Enterprise IT monitoring involves a vast amount of information. There's real-time data and streaming telemetry to watch for current events and track trends over time, and countless detailed logs generated by servers, devices, OSes and applications to sort and analyze for event triggers and root causes. Many monitoring alarms and alerts are false positives or have no real effect on performance or stability. It can be daunting for admins to identify and isolate meaningful events from inconsequential ones.&lt;/p&gt;
 &lt;p&gt;Consider the issue of &lt;a href="https://www.techtarget.com/searchitoperations/tip/Overcome-these-challenges-to-detect-anomalies-in-IT-monitoring"&gt;anomaly detection&lt;/a&gt;. Common thresholds can trigger an alert, but human intervention determines whether the alert is important. Monitoring tools increasingly incorporate AI and ML capabilities, which apply math and trends to flag events as statistically significant and help admins separate the signal from the noise. In effect, AI sets thresholds automatically to reduce false positives and identify and prioritize the most important incidents.&lt;/p&gt;
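 &lt;p&gt;A simple statistical version of this idea flags any sample that strays several standard deviations from the mean -- a rough stand-in for the "math and trends" such tools apply, though real products use far more sophisticated models. The function below is an illustrative sketch, not taken from any product:&lt;/p&gt;

```python
import statistics

def flag_anomalies(values, z_cutoff=3.0):
    """Return indices of samples whose z-score exceeds the cutoff."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # flat series: nothing can be anomalous
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_cutoff]

# A steady latency series with one spike: only the spike is flagged.
latency_ms = [50] * 20 + [500]
print(flag_anomalies(latency_ms))  # [20]
```

 &lt;p&gt;Note that ordinary variation stays below the cutoff; only the genuinely unusual sample surfaces, which is exactly the signal-from-noise separation described above.&lt;/p&gt;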
 &lt;p&gt;ML also aids anomaly detection in log analytics, a monitoring practice that is particularly effective for root cause analysis and troubleshooting. Here, ML uses regression analysis and event correlation to flag potential anomalies and predict future events and can even adjust for seasonal or daily variations in trends to reduce false positives.&lt;/p&gt;
 &lt;p&gt;For an example of ML and AI in monitoring, consider the vast amounts of network traffic that an organization receives. Divining an attempted hack or other attack from that volume of traffic can be extremely challenging. But anomaly detection techniques can combine a view of traffic content, behaviors and log reporting to pinpoint likely attacks and take proactive steps to block the activity while it's investigated.&lt;/p&gt;
 &lt;p&gt;Although ML provides powerful benefits for IT monitoring, the benefits aren't automatic. Every business is different, so there's no single algorithm or model for ML. This means IT admins and software developers must create a model that drives ML for the organization, using a vast array of metrics, such as network traffic volumes, source and target IP addresses, memory, storage, application latency, replication latency and message queue length. A practical ML exercise might apply the K-means clustering algorithm to metrics gathered from a cluster framework such as Apache Mesos.&lt;/p&gt;
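 &lt;p&gt;As a hedged sketch of what such an exercise looks like, the following implements Lloyd's K-means algorithm in plain Python and clusters made-up (traffic, latency) samples into "quiet" and "busy" groups. The data and names are illustrative only:&lt;/p&gt;

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means (Lloyd's algorithm): alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # seed initial centers with actual samples
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Made-up (traffic_mbps, latency_ms) samples: a quiet group and a busy group.
quiet = [(10, 5), (12, 6), (11, 5)]
busy = [(90, 40), (95, 42), (92, 41)]
centers, clusters = kmeans(quiet + busy, k=2)
```

 &lt;p&gt;A real deployment would use a library implementation -- for example, scikit-learn's KMeans -- over many more dimensions, but the mechanics are the same: assign each sample to its nearest center, then move each center to its cluster's mean.&lt;/p&gt;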
&lt;/section&gt;                      
&lt;section class="section main-article-chapter" data-menu-title="Best practices for IT monitoring"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Best practices for IT monitoring&lt;/h2&gt;
 &lt;p&gt;IT monitoring is a dynamic process that requires regular attention to monitored data, thresholds and alerts, visualization and dashboard setup, and integrations with other tools and workflows, &lt;a href="https://www.techtarget.com/searchsoftwarequality/CI-CD-pipelines-explained-Everything-you-need-to-know"&gt;such as CI/CD&lt;/a&gt; and AIOps. ML and AI can help to alleviate some of the routine tasks involved, but regular attention is essential to maintain the automated workflows and to validate the evolving ML model.&lt;/p&gt;
 &lt;p&gt;Consider the simple &lt;a href="https://www.techtarget.com/searchitoperations/tip/Get-started-with-threshold-monitoring"&gt;importance of thresholds in IT monitoring&lt;/a&gt;. Monitoring can employ static and dynamic thresholds. Static thresholds are typically set based on worst-case situations, such as maximum processor or memory utilization percentages, and can typically be adjusted from any default thresholds included with the monitoring tool. A static threshold is rarely changed and doesn't account for variations in the environment. It applies to every instance, so it's easy to wind up over- or under-reporting critical issues, resulting in missed problems or false positives.&lt;/p&gt;
 &lt;p&gt;By comparison, dynamic thresholds generally use ML to determine what's normal and generate alerts only when the determined threshold is exceeded. Dynamic thresholds can adjust for seasonal or cyclical trends and can better separate real events from false positives. Thresholds are adjusted automatically based on cyclical trends and new input. Dynamic thresholds are imperfect, and they can be disrupted when activity occurs outside of established patterns. Thus, dynamic thresholds still require some human oversight to ensure that any ML and automation proceeds in an acceptable manner.&lt;/p&gt;
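 &lt;p&gt;To make the contrast concrete: a static threshold is just a fixed comparison (for example, alert when utilization exceeds 90%), while a dynamic threshold learns a baseline from recent history. The class below is a deliberately simple illustration of the dynamic approach -- a rolling mean plus a standard-deviation band -- not how any particular product implements it:&lt;/p&gt;

```python
from collections import deque
import statistics

class DynamicThreshold:
    """Alert when a sample strays k standard deviations from a rolling baseline."""

    def __init__(self, window=60, k=3.0):
        self.history = deque(maxlen=window)  # recent samples form the baseline
        self.k = k

    def check(self, sample):
        alert = False
        if len(self.history) >= 10:  # wait for a minimal baseline before alerting
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid divide-by-zero
            alert = abs(sample - mean) / stdev > self.k
        self.history.append(sample)
        return alert

# Thirty normal samples build the baseline; the spike to 400 raises the only alert.
dt = DynamicThreshold()
alerts = [dt.check(v) for v in [40, 41, 42] * 10 + [400]]
print(alerts.count(True))  # 1
```

 &lt;p&gt;One caveat visible even in this sketch: the spike itself enters the history once flagged. A more careful implementation would exclude or down-weight flagged samples so a single outlier doesn't distort the baseline -- one reason dynamic thresholds still need human oversight.&lt;/p&gt;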
 &lt;p&gt;Overall, the best practices for enterprise IT monitoring and responses can be broken down into a series of practical guidelines:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Focus on the system and apps.&lt;/b&gt; There are countless metrics that can be collected and analyzed, but the only metrics that most IT admins should worry about are the metrics related to system -- infrastructure -- and application performance. Everything else is extraneous or can't readily be acted upon by IT. For example, a metric such as cost per transaction has little value to IT teams but might be vitally important to business leaders. Conversely, a metric such as transaction latency might be meaningless to business leaders but can be vital to adequate performance and SLA compliance where IT teams are directly responsible.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Carefully configure alerts.&lt;/b&gt; Thresholds and alerts are typically the first line of defense when issues arise. Direct alerts to the most appropriate team members and then be sure to hold those staffers accountable. Ideally, IT should know about any problem before a supervisor -- or a customer. Integrate alerts into an automated ticketing or incident system, if possible, to speed assignment and remediation.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Be selective with alerts and reports.&lt;/b&gt; Don't overwhelm IT staff with needless or informational alerts. Only configure alerts for metrics that pertain directly to IT operations and turn off alerting for metrics over which the IT staff has no control. This reduces noise and stress, plus it lets staff &lt;a href="https://www.techtarget.com/searchitoperations/tip/How-to-respond-to-3-common-IT-alerts"&gt;focus on the most relevant alerts&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Match people to data.&lt;/b&gt; Monitoring is typically a team effort where different staff see and respond to different data. For example, workload owners might need to see application transaction or revenue-related data and reports; IT staff will want to see infrastructure metrics and capacity/performance reporting; helpdesk teams will likely be the front line for alerts. Understand who sees what, and how those responsible individuals will respond.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Review and update monitoring plans.&lt;/b&gt; IT monitoring strategies aren't static entities. Plans are often codified into formal documents, and they need regular updates to keep pace with changing business needs, new tools and evolving regulatory and governance requirements. Review and update the IT monitoring strategy on a regular basis and ensure that the plan meets everyone's needs.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Consider data retention requirements.&lt;/b&gt; IT monitoring can create a substantial amount of data in the form of log files and metrics data streams. All that data is business data and falls under the policies and practices of data security, retention and destruction. Consider the specific retention needs for metrics, KPIs, logs and alerts, and establish data lifecycle management workflows for monitoring data accordingly. Generally, retention for monitoring data is far shorter than typical business data, but the proper management of monitoring data prevents storage sprawl (wasted storage) and strengthens business governance.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Correlate data where possible.&lt;/b&gt; Look for opportunities to combine or correlate data from varied but related metrics. Establishing correlations can help the business find cause-and-effect relationships that enhance observability and might expose opportunities for improvement. For example, a business that sees surges in network traffic to an application and simultaneously notices spikes in server lag and declines in UX has the basis for potential configuration and infrastructure changes. Tools with analytics, ML and AI capabilities can often yield the best results in data analysis tasks.&lt;/li&gt; 
 &lt;/ol&gt;
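 &lt;p&gt;The last guideline -- correlating varied metrics -- can start with something as simple as a correlation coefficient. The sketch below computes Pearson's r for fabricated traffic and latency samples; a value near 1 suggests the two move together and is a cue for deeper investigation, not proof of causation:&lt;/p&gt;

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated paired samples: network traffic (Mbps) and app latency (ms).
traffic = [100, 120, 150, 200, 260, 330]
latency = [20, 22, 25, 33, 41, 55]
r = pearson(traffic, latency)  # close to 1: traffic and latency rise together
```

 &lt;p&gt;Monitoring suites with analytics capabilities perform this kind of analysis across many metric pairs automatically; the point here is only that the underlying arithmetic is straightforward.&lt;/p&gt;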
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="Important metrics to include in an IT monitoring strategy"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Important metrics to include in an IT monitoring strategy&lt;/h2&gt;
 &lt;p&gt;There's no single universal suite of metrics suited for all businesses and industries. However, there are some common categories and data points that are typically included in a monitoring plan.&lt;/p&gt;
 &lt;p&gt;There are five general categories for metrics and KPIs:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;Performance.&lt;/li&gt; 
  &lt;li&gt;Quality.&lt;/li&gt; 
  &lt;li&gt;Security.&lt;/li&gt; 
  &lt;li&gt;Velocity.&lt;/li&gt; 
  &lt;li&gt;Value.&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;p&gt;Each category carries an array of common metrics to consider. Only a small sampling of potential metrics is shown as an example below.&lt;/p&gt;
 &lt;h3&gt;Performance metrics&lt;/h3&gt;
 &lt;p&gt;Performance data indicates the operational state of workloads, services and infrastructure. These are typically most relevant to IT teams and include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Capacity.&lt;/b&gt; The amount of resources used by -- or available to -- a system or application.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Errors.&lt;/b&gt; The number of failed queries or requests -- or other problems -- that occur over time.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Health.&lt;/b&gt; The availability and overall condition of an application or system.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Mean time between failures.&lt;/b&gt; The average time between incidents or failures requiring intervention.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Mean time to repair (MTTR).&lt;/b&gt; The average time needed to mitigate or remediate an incident or failure.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Response time.&lt;/b&gt; The time needed to respond to a query or request -- sometimes termed latency.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Throughput.&lt;/b&gt; The number of queries or requests that a system can handle over time.&lt;/li&gt; 
 &lt;/ul&gt;
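 &lt;p&gt;MTBF and MTTR fall straight out of an incident log. The example below uses fabricated timestamps to show the arithmetic: MTTR averages each incident's downtime, while MTBF averages the uptime between one recovery and the next failure:&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Fabricated incident log: (failure_start, service_restored) pairs.
incidents = [
    (datetime(2024, 1, 1, 2, 0), datetime(2024, 1, 1, 2, 30)),
    (datetime(2024, 1, 11, 14, 0), datetime(2024, 1, 11, 15, 0)),
    (datetime(2024, 1, 21, 9, 0), datetime(2024, 1, 21, 9, 30)),
]

# MTTR: average downtime per incident.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# MTBF: average uptime between one recovery and the next failure.
gaps = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(mttr)  # 0:40:00
print(mtbf)  # 10 days, 2:45:00
```

 &lt;p&gt;Monitoring tools compute these figures automatically, but knowing the definitions helps when comparing numbers across tools that count incidents differently.&lt;/p&gt;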
 &lt;h3&gt;Quality metrics&lt;/h3&gt;
 &lt;p&gt;Quality metrics outline and quantify UX. These data points can be useful to business and technology leaders, as well as software developers and workload stakeholders. Factors such as health, errors and failure rates are often part of the quality discussion. Other quality metrics include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Bug density.&lt;/b&gt; The number or rate of defects encountered in a software build or test cycle.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Escaped defects.&lt;/b&gt; The number or rate of defects not detected during the test cycle -- and encountered in later deployment.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Pass/fail rates.&lt;/b&gt; The number or percentage of successful commit-to-build cycles -- often vs. the number or percentage of unsuccessful commit-to-build cycles.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Throughput.&lt;/b&gt; The rate of new builds or the rate at which work is being performed.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;UX.&lt;/b&gt; A flexible metric often deduced from other factors such as response time and error rates.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Security metrics&lt;/h3&gt;
 &lt;p&gt;Security metrics are used to quantify issues related to compliance and risk, and can be vital for IT teams responsible for security, developers creating new code and business leaders responsible for regulatory and governance consequences. Common security metrics include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Code quality.&lt;/b&gt; The assessment of code quality prior to a build.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Issues detected.&lt;/b&gt; The number or rate of security incidents such as detected attacks, bad login attempts, malicious acts blocked and unauthorized changes attempted.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Mean time to detect vulnerabilities.&lt;/b&gt; The average time needed to find a vulnerability in code or infrastructure.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Mean time to resolve vulnerabilities.&lt;/b&gt; The average time needed to resolve a vulnerability in code or infrastructure once detected.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Number of critical or high vulnerabilities.&lt;/b&gt; The number or rate of serious vulnerabilities detected in code or infrastructure.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Response time.&lt;/b&gt; The amount of time needed to respond to and remediate a security incident.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Scan frequency.&lt;/b&gt; The rate of scans for intrusion or other malicious actions.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Technical debt ratio.&lt;/b&gt; The number of software or infrastructure updates or changes delayed -- effectively putting off work until later, creating a "debt" that eventually needs to be met by developers or IT staff.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Velocity metrics&lt;/h3&gt;
 &lt;p&gt;Velocity metrics &lt;a target="_blank" href="https://www.7pace.com/blog/velocity-metrics" rel="noopener"&gt;indicate the speed&lt;/a&gt; at which work is being accomplished. Factors such as MTTR and other "mean time" metrics can be related to velocity. Velocity is an indirect gauge of efficiency and is most important to business leaders. Other velocity metrics for infrastructure and development include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Change volume. &lt;/b&gt;The number of changes implemented in a given time, otherwise known as change rate. This might indicate changes to the infrastructure -- such as configuration changes -- but is most often used to report changes to code.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Completion rate. &lt;/b&gt;The number of tickets -- or issues -- addressed and resolved in a given time.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Customer tickets. &lt;/b&gt;The number of change requests or help/issue tickets. "Change" might be related to personnel or infrastructure -- such as onboarding or offboarding an employee -- or bug requests from software users.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Cycle time.&lt;/b&gt; The time needed to complete a development cycle or iteration.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Deployment frequency.&lt;/b&gt; The rate or percentage of tested builds that are deployed to production.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Resolution time.&lt;/b&gt; The time needed to resolve an issue or help ticket.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Response time.&lt;/b&gt; The time needed to address an issue, change or help ticket.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;h3&gt;Value metrics&lt;/h3&gt;
 &lt;p&gt;Value metrics are often extrapolated from cost, revenue, velocity and other available data to provide indications of business value and outcomes. Value metrics are typically most important to business leaders and workload stakeholders. Common value metrics include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
   &lt;li&gt;&lt;b&gt;Cost.&lt;/b&gt; Correlating spending data with performance, quality and velocity metrics can yield unit cost figures -- and comparative cost-savings data. Common cost variants include cost-per-user, cost-per-ticket or cost-per-asset, such as the cost of running a server.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Customer satisfaction.&lt;/b&gt; Correlating UX metrics with other business data, such as repeat sales or average spend, can help business leaders gauge the overall happiness of customers and their willingness to do business.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Customer usage.&lt;/b&gt; This is a utilization measure that can cover a wide range of possible metrics, such as time spent on a website or service, or the number of return visits in a given time.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Lead time.&lt;/b&gt; The time needed to implement, deploy or ship.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Opportunity cost.&lt;/b&gt; The estimated value of an opportunity lost vs. the opportunity taken.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Return on investment.&lt;/b&gt; The amount of revenue, or profit, generated from an investment.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Time to market.&lt;/b&gt; The time needed to bring concept to product, or a product or service to market.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;                    
&lt;section class="section main-article-chapter" data-menu-title="IT monitoring tools"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;IT monitoring tools&lt;/h2&gt;
 &lt;p&gt;IT admins can only act on what they see, and what they see is enabled through tools. Organizations can employ a multitude of tools to oversee and manage infrastructure and services, but tools have various limitations in scope, discovery, interoperability and capability.&lt;/p&gt;
 &lt;p&gt;An IT team needs a clear perspective on criteria -- what problems is it trying to solve with tools? For example, a business concerned with network performance or traffic analysis needs a network monitoring tool; a tool intended for server monitoring might offer some network insights, but that data likely isn't meaningful enough to be useful.&lt;/p&gt;
 &lt;p&gt;In the end, an IT team faces a difficult decision: deploy a suite or framework that does everything to some extent or use tools from a variety of vendors that provide detailed information but in a pieced-together arrangement that can be hard to integrate, learn and maintain.&lt;/p&gt;
 &lt;p&gt;This difficult decision is exacerbated by the sheer number of tools available. Tools can be chosen from system vendors, third-party providers or SaaS and other cloud services.&lt;/p&gt;
 &lt;h3 class="splash-heading"&gt;IT monitoring tool examples&lt;/h3&gt;
 &lt;p&gt;The following is only a partial (alphabetical) list of recognized offerings compiled from public research and reporting -- there are countless other tools available to suit almost any business size and need:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Amazon CloudWatch.&lt;/li&gt; 
  &lt;li&gt;AppDynamics.&lt;/li&gt; 
  &lt;li&gt;BMC TrueSight Infrastructure Management.&lt;/li&gt; 
  &lt;li&gt;Broadcom DX Unified Infrastructure Management.&lt;/li&gt; 
  &lt;li&gt;Cisco CloudCenter.&lt;/li&gt; 
  &lt;li&gt;Datadog.&lt;/li&gt; 
  &lt;li&gt;Dynatrace.&lt;/li&gt; 
  &lt;li&gt;Google Cloud's operations suite.&lt;/li&gt; 
  &lt;li&gt;Grafana Cloud.&lt;/li&gt; 
  &lt;li&gt;Grafana Enterprise Stack.&lt;/li&gt; 
  &lt;li&gt;Hewlett Packard Enterprise (HPE) OpsRamp.&lt;/li&gt; 
  &lt;li&gt;IBM Cloud Monitoring.&lt;/li&gt; 
  &lt;li&gt;Kaseya VSA.&lt;/li&gt; 
  &lt;li&gt;ManageEngine Applications Manager.&lt;/li&gt; 
   &lt;li&gt;Microsoft Azure Monitor.&lt;/li&gt; 
   &lt;li&gt;Microsoft System Center Operations Manager (SCOM).&lt;/li&gt; 
   &lt;li&gt;Nagios XI.&lt;/li&gt; 
  &lt;li&gt;NetApp Cloud Insights.&lt;/li&gt; 
  &lt;li&gt;New Relic.&lt;/li&gt; 
  &lt;li&gt;Oracle Application Performance Monitoring Cloud Service.&lt;/li&gt; 
  &lt;li&gt;SolarWinds Network Performance Monitor.&lt;/li&gt; 
  &lt;li&gt;SolarWinds Server and Application Monitor.&lt;/li&gt; 
  &lt;li&gt;Splunk Infrastructure Monitoring.&lt;/li&gt; 
  &lt;li&gt;Veeam ONE.&lt;/li&gt; 
  &lt;li&gt;VMware Aria Operations.&lt;/li&gt; 
  &lt;li&gt;Zabbix.&lt;/li&gt; 
  &lt;li&gt;Zenoss Cloud.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;It's important to establish a clear understanding of desired features, capabilities and compatibility before narrowing this enormous field to several possible candidates. At that point, it should be possible to review candidates more closely and implement several proof-of-concept projects to test and validate the tools, along with performance and interoperability in the enterprise environment, before making a final selection for procurement and deployment.&lt;/p&gt;
 &lt;p&gt;Sometimes, new and innovative technologies offer powerful opportunities for monitoring, optimization and troubleshooting. One example of this innovation is the emergence of log analytics tools. Almost every system produces log files that contain valuable data about events, changes and errors. But logs can be huge, difficult to parse and challenging to correlate, making it almost impossible for humans to find real value in them.&lt;/p&gt;
 &lt;p&gt;A relatively new classification of log analytics tools can discover, aggregate, analyze and report insights gleaned from logs across the infrastructure and applications. The recent addition of ML and AI capabilities to log analytics enables such tools to pinpoint anomalous behaviors and even predict potential events or issues. In addition to logs, the ability to access and aggregate vast amounts of monitoring data from other tools enables &lt;a href="https://www.techtarget.com/searchitoperations/feature/Compare-Grafana-vs-Datadog-for-IT-monitoring"&gt;products such as Grafana or Datadog&lt;/a&gt; to offer more comprehensive pictures of what's happening in an environment.&lt;/p&gt;
 &lt;p&gt;Organizations with a local data center typically adopt some form of server monitoring tool to oversee each server's health, resources and performance. Many tools provide server and application or service management features. Tools include Cacti, ManageEngine Applications Manager, Microsoft SCOM, Nagios, Opsview, SolarWinds Server and Application Monitor and Zabbix.&lt;/p&gt;
 &lt;p&gt;IT must also decide between vendor-native or third-party monitoring tools. Third-party tools such as SolarWinds Virtualization Manager and Veeam ONE monitor virtualized assets, such as VMs, and can potentially provide superior visualizations and integrations at a lower cost than native hypervisor offerings, such as Microsoft's System Center 2022 or VMware vRealize Operations 8.0 and later.&lt;/p&gt;
 &lt;p&gt;Extensibility and interoperability are critical when selecting an IT monitoring tool. Plugins, modules, connectors and other types of software-based interfaces enable tools to discover, configure, manage and troubleshoot additional systems and services. Adding a new plugin can be far easier and cheaper than purchasing a new tool. One example is the use of modules to &lt;a href="https://www.techtarget.com/searchitoperations/tip/How-and-why-to-add-SolarWinds-modules"&gt;extend a tool such as SolarWinds&lt;/a&gt; for additional IT operations tasks.&lt;/p&gt;
 &lt;p&gt;Interoperability is critical in building a broader monitoring and automation umbrella, and some tools are rising to the challenge. For example, the Dynatrace AIOps engine now collects metrics from the Kubernetes API and Prometheus time-series monitoring tool for Kubernetes clusters. Ideally, such integration &lt;a href="https://www.techtarget.com/searchitoperations/tip/Tips-and-tools-for-collecting-helpful-Kubernetes-metrics"&gt;improves detection of root cause events in Kubernetes&lt;/a&gt;; more broadly, the implications for integration and IT automation portend powerful advancements for AI in operations.&lt;/p&gt;
 &lt;p&gt;The ability to process and render vast amounts of infrastructure data at various levels, from dashboards to graphs, adds tremendous value to server and system monitoring. Sometimes, a separate visualization tool is most appropriate. &lt;a href="https://www.techtarget.com/searchitoperations/tip/Evaluate-Grafana-vs-Kibana-for-IT-data-visualization"&gt;Examples include Kibana&lt;/a&gt;, an open source log analysis platform that discovers, visualizes and builds dashboards on top of log data; and Grafana, a similar open source visualization tool, which is used with a variety of data stores and supports metrics.&lt;/p&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://searchservervirtualization.techtarget.com/rms/onlineImages/KibanaGrafana2.jpg"&gt;
  &lt;img data-src="https://searchservervirtualization.techtarget.com/rms/onlineImages/KibanaGrafana2_mobile.jpg" class="lazy" data-srcset="https://searchservervirtualization.techtarget.com/rms/onlineImages/KibanaGrafana2_mobile.jpg 960w,https://searchservervirtualization.techtarget.com/rms/onlineImages/KibanaGrafana2.jpg 1280w" alt="Grafana alerting dashboard" height="349" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;This is how Grafana presents an alerting dashboard.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;The shift of infrastructure and applications to the cloud means organizations must track those resources as part of their enterprise IT monitoring efforts. Public cloud providers have opened their traditionally opaque infrastructures to accommodate this, and service providers offer their own native tools for cloud monitoring. The service formerly known as Google Stackdriver -- now part of Google Cloud's operations suite -- monitors Google Cloud as well as applications and VMs that run on AWS Elastic Compute Cloud; Microsoft Azure Monitor collects and analyzes data and resources from the Azure cloud; and &lt;a href="https://www.techtarget.com/searchaws/tip/Compare-CloudWatch-vs-Datadog-and-New-Relic-for-AWS-monitoring"&gt;AWS users have Amazon CloudWatch&lt;/a&gt;. Additional options include Oracle Application Performance Monitoring cloud service and Cisco CloudCenter, as well as tools such as Datadog for cloud analytics and monitoring and New Relic to track web applications.&lt;/p&gt;
 &lt;p&gt;Another major class of IT monitoring tools focuses on networks and security. Such tools can include physical devices and services such as firewalls and load balancers. They watch network activity for traffic sources, patterns and performance between servers, systems and services.&lt;/p&gt;
 &lt;p&gt;A typical &lt;a href="https://www.techtarget.com/searchnetworking/feature/Explore-evolving-network-performance-monitoring-tools"&gt;network monitoring tool&lt;/a&gt; -- such as Zabbix, Nagios, Wireshark, Datadog or SolarWinds' Network Performance Monitor -- will offer automatic discovery, automatic node and device inventory along with automatic and configurable trouble alerts and reporting. The interface should feature easy-to-read dashboards or charts, and it should include the ability to generate a network topology map.&lt;/p&gt;
 &lt;p&gt;Virtualization and application awareness enable the tool to support advanced technologies such as network virtualization and APM. Network monitoring can use agents but might not need agents for all devices or applications. Graphing and reporting should ideally support interoperability with data visualization, log analytics and other monitoring tools.&lt;/p&gt;
 &lt;p&gt;Finally, organizations can use a variety of application and UX monitoring tools, &lt;a href="https://www.techtarget.com/searchitoperations/tip/Learn-how-New-Relic-works-and-when-to-use-it-for-IT-monitoring"&gt;such as New Relic&lt;/a&gt;, to ensure application performance and user experience or satisfaction. These tools gather metrics on application behaviors, analyze that data to identify errors and troublesome transaction types and offer detailed alerting and reporting to illustrate application and user metrics, as well as highlight SLA assessments. Others in the APM and UX segments that offer products to assist with monitoring include Datadog, Dynatrace, AppDynamics and Splunk.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>This comprehensive IT monitoring guide examines strategies to track systems, from servers to software UIs, and how to choose tools for every monitoring need.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/4.jpg</image>
            <link>https://www.techtarget.com/searchitoperations/The-definitive-guide-to-enterprise-IT-monitoring</link>
            <pubDate>Thu, 25 Jan 2024 13:23:00 GMT</pubDate>
            <title>The definitive guide to enterprise IT monitoring</title>
        </item>
        <title>SearchServerVirtualization Resources and Information from TechTarget</title>
        <ttl>60</ttl>
        <webMaster>webmaster@techtarget.com</webMaster>
    </channel>
</rss>
