COMPTIA NETWORK + The Ultimate Comprehensive Beginners Guide To Learn About The CompTIA Network+ Certification From A-Z (With Practice Exercises) Vol.2 by Benoit Stevens
By
Benoit Stevens
TABLE OF CONTENTS
INTRODUCTION
CHAPTER 1: NETWORK STANDARDS, PROTOCOLS AND
OPTIMIZING OPERATING SYSTEM
Address Resolution Protocol (ARP)
Optimizing Operating Systems
Maintaining Operating Systems
Windows Patch Management
CHAPTER 2: ALL YOU NEED TO KNOW ABOUT NETWORK
VIRTUALIZATION
Understanding Network Service
TCP/IP Configuration Choices
Understanding Cloud Computing and Virtualization
Concepts of Cloud Computing
Cloud Services
Types of Clouds
Understanding Virtualization
The Purpose of Virtual Machines
The Hypervisor
Setting Up and Using Client-Side Virtualization
CHAPTER 3: MITIGATING NETWORK THREATS
CHAPTER 4: MANAGING AND TROUBLESHOOTING THE
NETWORK
Determine whether you have a software or hardware issue
Troubleshooting a wireless network
CHAPTER 5: ALL YOU NEED TO KNOW ABOUT OPERATIONAL
PROCEDURES
Understanding Safety Procedures
Identifying Potential Safety Hazards
CONCLUSION
CHAPTER 1: NETWORK STANDARDS, PROTOCOLS AND OPTIMIZING OPERATING SYSTEM
Address Resolution Protocol (ARP)
ARP translates IP addresses to MAC addresses. You use ARP when connected to an Ethernet network to determine which machine on the network is using a specific IP address. When the requesting device receives the reply, it adds the IP-to-MAC mapping to its ARP table for future reference.
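You can inspect a machine's ARP table from the command line. For instance, on a Windows or Linux host (the output will vary by system):
arp -a
This lists each known IP address alongside the MAC (physical) address it resolves to.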
ping
ping is an essential utility that allows you to determine whether you can connect to a host and whether the host is responding. The ping syntax is as follows:
ping [IP address or hostname]
There are many options that you can use with the ping command, each of which reveals different information about the connection to the workstation or host.
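For example, on a Windows machine (the address here is illustrative):
ping 192.168.1.1
ping -n 10 192.168.1.1
ping -t 192.168.1.1
The first command sends four echo requests (the Windows default), the second sends ten, and the third pings continuously until you stop it with Ctrl+C.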
nslookup
The nslookup utility helps you query a name server to identify the IP addresses to which specific names resolve. This is especially handy when you are configuring a new workstation or server to access the Internet.
The nslookup command also helps you discover details about a given domain name, such as which name servers are responsible for it and how those servers are configured.
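As a quick illustration (the domain names here are just examples):
nslookup www.google.com
nslookup -type=mx google.com
nslookup -type=ns google.com
The first command resolves a hostname to its IP address, the second lists the mail servers for a domain, and the third lists the domain's name servers.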
Optimizing Operating Systems
Every computer running a modern operating system (OS) requires both occasional optimizations to keep the system running snappily and ongoing maintenance to make sure nothing goes wrong. Microsoft, Apple, and the many Linux developers use decades of experience with operating systems to find ways to make maintaining and optimizing easy and largely automatic, but there's still a lot to do to keep things humming along.
This chapter covers maintenance and optimization, so you need to know what these two terms mean. Maintenance refers to jobs you do from time to time to keep the OS running well, such as running hard drive utilities. CompTIA sees optimization as changes you make to a system to make it better, such as adding RAM. This chapter covers the standard maintenance and optimization activities performed on Windows, Mac OS X, and Linux, and the tools techs use to achieve them.
NOTE This chapter covers maintenance and optimization techniques for all the operating systems currently on the CompTIA A+ exams. But, reflecting both the exams and the reality of market share, Windows gets far more coverage than Mac OS X or Linux.
Even the best maintained, most perfectly optimized computer is going to
run into trouble. Hard drives crash, naïve coworkers delete files, and those
super great new video card drivers sometimes fail. The secret isn’t to try to
avoid trouble, because trouble will find you, but rather to make sure you’re
ready to deal with problems when they arise.
This is one area that very few users handle well, and it's our job as techs to make recovery from trouble as painless as possible. OS developers give us plenty of tools to prepare for problems; we just need to make sure we use them properly.
Maintaining Operating Systems
Maintaining modern operating systems can be compared to maintaining a
new automobile.
Of course, a new automobile comes with a warranty, so most of us just
take it to the dealer to get work done. In this case, however, you are the
mechanic, so you need to think as an auto mechanic would think. First, an
auto mechanic needs to apply recalls when the automaker finds a severe
problem. For a PC tech, that means installing the latest system patches
released by Microsoft. You also need to maintain the parts that wear down
over time. On a car, that might mean changing the oil or rotating the tires. In
a Windows system, that includes keeping the hard drive and Registry organized and uncluttered. Mac OS X and Linux require a little less maintenance.
Windows Patch Management
There’s no such thing as a perfect operating system, and Windows is no
exception. From the moment Microsoft releases a new version of Windows,
malware attacks, code errors, new hardware, new features, and many other
issues compel Microsoft to provide updates, known more generically as
patches in the computing world, to the operating system. The process of
keeping software updated in a safe and timely fashion is known as patch
management. Microsoft’s primary distribution tool for handling patch
management is a Control Panel applet called Windows Update.
Windows Update separates patches into two distinct kinds: updates and service packs. Updates in Windows Vista and 7 are individual fixes that come out fairly often, on the order of once a week. Individual updates are usually reasonably small, rarely more than a few megabytes. A service pack is a large bundle of updates plus anything else Microsoft might choose to add. Service packs are invariably large (hundreds of megabytes) and are often packaged with Windows, as shown in Figure 15-1.
NOTE Windows Vista has two service packs: SP1 and SP2. Windows 7 has one service pack: SP1. In Windows 8, 8.1, and 10, updates have replaced the use of service packs.
With Windows 8 and later, Microsoft ditched the service pack terminology and used only updates to indicate changes. Significant updates get a revision number, like Windows 8 to Windows 8.1.
Windows Update checks your system, grabs the updates, and patches your system automatically. Even if you don't allow Windows Update to run automatically, it will nag you about updates until you apply them.
Microsoft provides Windows Update for all versions of Windows. Windows 8/8.1 offers two interfaces for Windows Update: one in Control Panel and one in the PC Settings app. In Windows 10, the Control Panel Windows Update applet is gone, and the only Windows Update interface is in the Settings app under Update & Security.
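If you ever need to verify which patches a machine already has, one quick way (a sketch, assuming a Windows system with PowerShell available) is the Get-HotFix cmdlet:
Get-HotFix
This lists installed updates with their KB identifiers and installation dates; the older wmic qfe list command returns similar information from a Command Prompt.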
CHAPTER 2: ALL YOU NEED TO KNOW
ABOUT NETWORK VIRTUALIZATION
You have almost certainly been on the Internet, and these days everyone seems to be continuously connected to it. Web-enabled smartphones and other mobile devices have made the connection seemingly persistent.
Anytime you visit a web page, you make a connection from your device, which is the client, to a web server. To be more specific, the connection is requested by your Internet software using the Hypertext Transfer Protocol (HTTP), part of the TCP/IP protocol suite. Your client needs to know the IP address of the web server, and it will make the request on port 80.
Secure connections are made using the Hypertext Transfer Protocol Secure (HTTPS) on port 443.
The web server is configured with web hosting software, which listens for inbound requests on port 80 or port 443. Two common web platforms are the open-source Apache and Microsoft's Internet Information Services (IIS), although a few other packages are available as well. Web servers provide content on request, which can include text, images, and videos, and they can also run scripts that enable additional functions, such as processing credit card transactions and querying databases.
Web servers regularly function as download servers too, which means they also use FTP and listen for incoming requests on TCP ports 20 and 21.
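One hands-on way to see port 80 in action (a sketch, assuming the optional Telnet client feature is enabled and using a placeholder hostname) is to open a raw connection to the web server and type a minimal HTTP request:
telnet www.example.com 80
GET / HTTP/1.1
Host: www.example.com
Press Enter twice after the Host line, and the web server replies with its HTTP headers and page content.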
Independent individuals or companies can manage their own web servers, but most websites are managed and operated by an Internet service provider or web hosting company. One web server can be configured to host dozens of smaller websites using the same IP address, provided it has sufficient resources to handle the traffic. On the flip side, enormous sites such as Amazon.com and Google are made up of multiple web servers acting as one site. It is estimated that Google has over 900,000 servers, and Microsoft claims to have over one million servers!
If a company wants to host its own web server, the best place for it is in the DMZ. This configuration provides easy access and the best security. The firewall can be configured to allow inbound port 80 and 443 requests to the DMZ but not to allow incoming requests on those ports to reach the internal corporate network.
DHCP Scopes
DHCP servers are configured with a scope, which contains the various pieces of information that the server can provide to its clients. A DHCP server needs at least one scope, but it can have more than one if the need arises. The following items are included within the scope:
Address pool: This is the range of addresses that the server can give out to its clients. For instance, the pool may be configured to give out addresses ranging from 192.168.0.100 to 192.168.0.200. If all the addresses are given out, the server cannot provide an address to any new clients. The address pool configuration also includes the subnet mask if the network is using IPv4 addressing.
Lease durations: IP addresses given out by the DHCP server are leased to clients, and each lease has an expiration period. Before the lease expires, the client will typically be allowed to renegotiate to renew it. If the lease does expire, the address becomes available to assign to another client. If you find yourself in a situation where there are limited IP addresses but lots of clients coming on and off the network frequently, you might want to shorten the lease period. The downside is that shorter leases generate a bit more network broadcast traffic.
Address reservations: Some IP addresses can be reserved for specific clients, based on the client's MAC address. This is particularly important for devices that need to keep the same IP address, like printers, servers, and routers.
Scope options: These provide extra configuration items beyond the IP address and subnet mask. Most common are the address of the default gateway (the router) and the DNS servers. Other options may include the addresses of servers that provide additional functions, such as time synchronization, NetBIOS name resolution, or telephony services, or the domain name for the client to use.
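From a Windows client you can see the lease a DHCP server handed out, and force the process to happen again, which is handy when testing a scope:
ipconfig /all
ipconfig /release
ipconfig /renew
The first command shows the DHCP server plus the lease obtained and lease expires times; the second gives the leased address back to the server; the third requests a fresh lease.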
TCP/IP Configuration Choices
When configuring TCP/IP on a network, you have three choices for assigning IP addresses: manually, automatically using a DHCP server, or a hybrid approach. The manual option takes the most work, and it works only for smaller networks. The administrator needs to keep track of all of the addresses that have been assigned so that the same address doesn't accidentally end up assigned to multiple computers; duplicate addresses will cause communication problems. Most administrators also have better things to do than manage IP addresses manually.
DHCP is extremely convenient. The administrator sets up a scope and options and lets the server manage all of the IP addresses. For devices that require a consistent address, such as printers, servers, and routers, the administrator can configure address reservations based on their MAC addresses. That way, every time a specific printer comes online, it always receives the same address. This takes a bit of setup in the beginning, or when a new device is added, but in the long run it is worth it. The DHCP server becomes the single point of management for all IP addresses on the network, which makes administration and troubleshooting much easier.
The hybrid option is a combination of manual assignments and DHCP. For instance, devices that need static IP addresses can be assigned manually, whereas clients get their information from a DHCP server. In a situation like this, the administrator might set a DHCP client pool address range of 192.168.0.100 to 192.168.0.200 and then use addresses 192.168.0.1 to 192.168.0.99 for static devices. The problem with this approach is that it takes extra administrative effort to manage the static addresses and confirm that the same address isn't assigned multiple times. Furthermore, if another administrator looks at the DHCP server and sees the scope, that administrator may not know that addresses lower than 100 are used for static assignments and could increase the scope to incorporate those addresses. Doing so would cause the DHCP server to give out addresses that conflict with devices that were configured manually. So hybrid is an option, but it is often not recommended. The best option is to use the DHCP server to manage all IP addresses on the network.
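For the devices that do get manual addresses, the configuration can be scripted. Here is a minimal sketch on Windows, assuming an interface named "Local Area Connection" and the sample addressing above:
netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.5 255.255.255.0 192.168.0.1
netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.0.1
The first line assigns the static IP address, subnet mask, and default gateway; the second points the client at a DNS server.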
How Does DNS Work?
The DNS server uses its zone file whenever a computer makes a query. For instance, if you were to ask this DNS server, "Who is mydomain.com?" the response would be 192.168.1.25. If you then ask it, "Who is www.mydomain.com?" it will see that www is an alias for mydomain.com and provide the same IP address. If you are the DNS administrator for a network, you will be asked to manage the zone file, including entering hostnames and IP addresses as appropriate.
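In a zone file, those two answers would come from records like the following (a sketch using the example domain above; exact formatting varies slightly between DNS servers):
mydomain.com.       IN  A      192.168.1.25
www.mydomain.com.   IN  CNAME  mydomain.com.
The A record maps a name directly to an IP address, while the CNAME record declares www as an alias for the domain itself.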
DNS on the Internet
The Internet is massive. So big, in fact, that there's no way one DNS server could manage all of the computer name mappings out there. The creators of DNS foresaw this, and they designed the system in a way that reduces potential issues. For example, suppose you are looking for the website www.wiley.com. When the DNS server your computer is configured to use is asked for a resolution, it will check its zone file first to see if it knows the IP address. If it doesn't, it will check its cache, a temporary database of recently resolved names and IP addresses, to see if the record is in there. If it still doesn't know the answer, it can query another DNS server for help. The first server it asks is called a root server. The Internet namespace is designed as a hierarchical structure, and the dot at the end of a name is the broadest categorization, known as "the root." The next level of the hierarchy is the top-level domains, such as .com, .net, and .edu.
You've probably used the Internet many times, and you may be thinking, "Where does this trailing dot come from?" The dot at the end represents the root. Without it, a domain name is not technically a fully qualified domain name fit for Internet use. By convention, though, users don't need to type the dot into an Internet browser. Even when it's omitted, the browser understands that it's technically looking for www.yahoo.com. with the final period. There are 13 global root servers, and all DNS servers need to be configured to ask a root server for help. The root server will return the name of a top-level domain DNS server, and the querying DNS server will then ask that server for assistance. The process continues like that until the querying DNS server finds a server that can resolve the name www.wiley.com. The querying DNS server will then cache the resolved name so that subsequent lookups are faster. The length of time that the name is held in the cache is configurable by the DNS administrator.
If you're curious, the list of root servers is maintained at www.iana.org/domains/root/servers, among other places. When you visit a website you've never visited before, it can sometimes take longer than usual to load. The next time you visit that site, it will probably appear faster because the name resolution is held in the cache.
A DNS server used for intranet name resolution can be located inside the network firewall. If it's used for Internet name resolution, it's most effective to place it in the DMZ. DNS uses UDP or TCP port 53.
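You can watch pieces of this hierarchy yourself with nslookup (the domains are illustrative):
nslookup -type=ns com
nslookup www.wiley.com
The first query lists the name servers responsible for the .com top-level domain; the second lets your configured DNS server walk the hierarchy for you and return the final answer.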
Proxy Server
A proxy server is a server that makes requests for resources on behalf of a client. The most common one you will see is a web proxy, but you might run into caching proxies as well.
Configuring Windows 7 to Use a Proxy Server
1. If you are using Internet Explorer, click Tools ➢ Internet
Options.
2. If you are using Google Chrome, open Internet Options by
clicking the Menu icon and choosing Settings ➢ Show
Advanced Settings ➢ Change Proxy Settings.
3. Click the Connections tab and then the LAN Settings button.
4. Check the top box under Proxy Server, and add the server's
address and port number.
5. Click OK to save the settings, and close the window.
6. Click OK on the Internet Options dialog box to save the settings.
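These steps set the per-user proxy used by the browser. Windows also keeps a separate system-wide WinHTTP proxy, which can be set from an elevated Command Prompt (the address and port here are illustrative):
netsh winhttp set proxy proxy-server="192.168.1.10:8080" bypass-list="*.local"
netsh winhttp show proxy
The bypass list names hosts that should be contacted directly rather than through the proxy, and the second command confirms the current setting.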
Authentication Server
An authentication server is a device that examines the credentials of a user trying to access the network, and it determines whether network access is granted. Authentication servers are gatekeepers and critical components of network security. An authentication server can be a wireless router or access point, an Ethernet switch, or a remote access server (RAS). A frequent term that you will hear in the Microsoft world is domain controller, which is a centralized authentication server. Other types of authentication servers are RAS, Remote Authentication Dial-in User Service (RADIUS), Terminal Access Controller Access-Control System Plus (TACACS+), and Kerberos. The method differs slightly between servers, but generally what happens is that the user (or computer) trying to access the network presents credentials. If the credentials are deemed appropriate, the authentication server issues the user a security code or a ticket that grants access to resources. For most users, the required credentials are a username and password.
To increase security, some systems require multiple items in addition to a username to authenticate users, such as a password and a security token. These systems are said to use multifactor authentication. In multifactor authentication, the user is usually required to present two of these three items:
Something they know, such as a password or a PIN.
Something they have, such as a smart card or a PIN generated from a security token.
Something they are, which generally refers to biometrics. Examples include fingerprints, facial recognition, and retinal scanners.
Preserving the security of an authentication server is critical for network security. If an attacker breaches the authentication server, it can have catastrophic implications for the network. Therefore, most authentication servers are securely tucked behind a firewall. For users who need remote access, RAS or RADIUS servers are often placed in a DMZ.
Internet Appliances
An Internet appliance is a device that makes it easy to access the Internet. The CompTIA A+ certification exams list three items under Internet appliances, and they are all related to Internet security for networks rather than ease of Internet access. They are intrusion detection systems, intrusion prevention systems, and unified threat management.
Intrusion Detection and Prevention Systems
Intrusion detection systems and intrusion prevention systems are two Internet appliances that are closely related. Both devices monitor network traffic and look for any suspicious activity that might be a sign of a network-based attack. You can see them as being somewhat analogous to antivirus (AV) programs. AV programs examine individual files, looking for signs of malicious content. IDSs and IPSs look for malicious content as well, but in network traffic patterns.
Both IDSs and IPSs are quite different from firewalls. Firewalls have sets of rules that allow or deny packets entering a network, based on criteria like the packet's origination or destination address, the protocol being used, or the source or destination ports. Their major function is to block malicious traffic from entering the network in the first place. Firewalls may also have built-in IDS or IPS software, or IDS and IPS devices can be stand-alone hardware devices.
An IDS is a passive device: it watches network traffic and detects anomalies that might represent an attack. For instance, if an attacker were trying to flood a network with traffic on a specific port, the IDS would sense that the additional traffic on that port was unusual. Then it would log the anomaly and send an alert to an administrator. Note that it doesn't prevent the attack; it simply logs relevant information about the attack and sends an alert.
By contrast, an IPS is an active device that also monitors network traffic, but when it detects an anomaly, it can take action to try to stop the attack. For instance, if it senses suspicious inbound traffic on a specific IP port, it can shut the port down, block the sender, or reset the TCP connection. If the malicious stream of data is intended for one computer, it can prevent the attacking host from communicating with the victim computer or prevent the attacker from communicating with any other computer. The specific actions it can take depend on the device.
Both types of devices come in network-based varieties (NIDS and NIPS)
and host-based varieties (HIDS and HIPS). As you might expect based on
their names, the network-based versions are designed to protect multiple
systems, whereas the host-based ones can protect only one computer.
Unified Threat Management
The Internet is an amazing place, but it can also be a scary one. For every video of puppies doing cute things, there are hackers lurking in dark corners trying to steal identities or crash servers. It's an unfortunate reality of the Internet age. Software and hardware solutions have been springing up in response to different types of threats, and managing them can be a challenge. For example, a network might need a firewall, anti-malware and anti-spam software, and perhaps content filtering and IPS devices as well. It's a lot to deal with.
The goal of unified threat management (UTM) is to centralize security management, permitting administrators to manage all of their security-related hardware and software through a single device or interface. For administrators, having a single management point greatly reduces administration difficulties. The disadvantage is that it introduces a single point of failure: if all network security is managed through one device, a failure of that device could be problematic. UTM is generally implemented as a stand-alone device on a network, and it replaces the traditional firewall. A UTM device can provide the following types of services:
Packet filtering and inspection
Intrusion protection service
Gateway anti-malware
Spam blocking
Malicious website blocking
Application control
Many in the industry see UTM as the next generation of firewalls, and its
popularity is likely to increase over the next couple of years.
Legacy and Embedded Systems
Legacies are usually considered a good thing. Most of us want to leave a legacy of some kind, whether it's within a community, an organization, or our families. A legacy is something that lives on beyond a human life span. If you mention the term legacy system in the computer world, though, you will likely be met with groans and eye rolls. It means that the system is old and outdated by today's computing standards.
Legacy systems are usually defined as those using old technology in one
or more of the following areas:
Hardware
Software
Network protocols
Many legacy systems were state of the art when they were originally implemented in the 1970s or 1980s, but they have not been upgraded or replaced. They are old and slow, and specialized knowledge is required to maintain and operate them. For example, someone might need to know the Pick operating system, which came out in the 1970s; how to operate an IBM AS/400 or manage a VAX; or how to configure the IPX/SPX network protocol. An embedded system is one that's critical to a process; other systems or processes depend on it. For purposes of this section, we will treat legacy and embedded systems as the same because administrators face similar issues with both. So why don't companies replace these legacy systems?
First, most companies don't have large IT budgets, and replacing the legacy systems would be very expensive. This is especially true when the systems are still providing reliable service.
Second, the cost of a failed upgrade could be tragic. The world's global financial systems are in various places supported by legacy systems, and messing up such a migration could be a career-limiting move.
Third, the time it would take to test the new system, verify the functionality, and roll out the implementation could be extensive. Time is money, and as stated earlier, IT budgets are generally tight. Furthermore, it's difficult to find technicians and consultants who really understand legacy systems. People move from company to company, or consultants retire and take their specialized knowledge with them. For instance, a person who was a mid-20s computer wizard in 1975 will be in their mid-60s now and is probably looking forward to retirement. The cost to find someone knowledgeable on these systems can be high, and finding replacement hardware can be difficult, sometimes nearly impossible, and expensive at the same time. The cost of maintenance might surpass the cost of upgrading, or it might not. So what's a network administrator to do? If possible, replacing or repurposing legacy systems can provide long-term benefits to most companies, but at the same time, recognize the risk involved. If replacement is not an option, the best advice is to learn as much as you can about these systems. Hopefully, the system is based on established standards, so you can look them up on the Internet and learn as much as you can. You can also see what operating manuals you can track down, or pick the brains of those who understand how they operate. As difficult as legacy systems can be, you can make yourself quite useful by being an expert on them.
A common administrative option is to try to separate the legacy system as much as possible so that its lack of speed won't affect the rest of the network. This is much easier to do with hardware or protocols than with software. For example, the network could be set up with one segment that hosts the legacy systems or protocols.
One technology that can help in replacing and updating legacy systems is virtualization, which may obviate the need for one-to-one hardware-to-software relationships.
Understanding Cloud Computing and Virtualization
The computer industry is full of big trends: a new technology comes along and becomes popular until the next wave of newer, faster, and better objects comes along to distract everyone from the former wave. Looking back over the past 20 years, there have been several big waves, including the rise of the Internet, wireless networking, and mobile computing. Within each trend, there are smaller ones. For example, Internet access was provided by modems and ISPs in the mid-1990s, and then broadband access took over. Wireless networking has also seen several generations of faster technology, from 11Mbps 802.11b, which was really cool when it came out, to the promise of gigabit wireless. Mobile computing has been a long-lasting trend, from laptop computers becoming more popular than desktops to, more recently, handheld devices like smartphones and tablets that essentially function like computers.
The biggest recent wave in the computing world is cloud computing. Its name comes from the fact that the technology is Internet-based; in most computer literature, the Internet is represented by a graphic that looks like a cloud. It seems like everyone is jumping on the cloud, and technicians need to be well aware of what it can provide as well as its limitations. The most important core technology that supports cloud computing is virtualization, and both topics are covered in the following sections.
Concepts of Cloud Computing
You must have heard the term cloud computing a lot in recent times. What exactly is the cloud?
From the way it sounds, it might seem like one giant, magical entity that does everything you could want a computer to do. Only it's not quite that big, and it's not even one thing.
Cloud computing is a method by which you access remote servers to store files or run applications for you. There isn't one cloud but hundreds of commercial clouds in existence today. Many of them are owned by big companies, such as Microsoft, Google, HP, Apple, Netflix, and Amazon. They set up the hardware and software for you on their network, and you pay for what you use.
Using the cloud sounds simple, and in most cases it is. From the administrator's side, though, things can be a little tricky. Cloud computing involves a concept called virtualization, which means that there isn't a one-to-one relationship between a physical server and a logical server. There might be one physical server that virtually hosts cloud servers for a dozen companies, or there might be several physical servers working together as one logical server. From the end user's side, the distinction between a physical machine and a virtual machine doesn't come into play, because it's all handled behind the scenes. We'll cover virtualization in more depth later in this chapter.
There are many advantages to cloud computing, and the most important ones revolve around money. Cloud providers can achieve economies of scale by having an enormous pool of resources available to share among many clients. It may be entirely possible for them to add more clients without having to add new hardware, which results in greater profit. From a client company's standpoint, the company pays for only the resources it needs without investing large amounts of capital into hardware that will be outdated in a few years. Using the cloud is often cheaper than the alternative. Plus, if there's a hardware failure within the cloud, the provider handles it. If the cloud is set up right, the client won't even know that a failure occurred. Other advantages of cloud computing include fast scalability for clients and easy access to resources regardless of location.
The biggest downside of the cloud has been security. The company's data is stored on someone else's server, and company employees are sending it back and forth via the Internet. Cloud providers have dramatically increased their security over the last few years, but this can still be a problem, especially if the data is highly sensitive material or personally identifiable information (PII). Also, some companies don't like the fact that they don't own the assets. Now let's get into the kinds of services clouds provide, the types of clouds, cloud-specific terms with which you should be familiar, and some examples of using a cloud from the client side.
Cloud Services
Cloud providers sell everything "as a service." The type of service is named for the highest level of technology provided. For instance, if computing and storage are the highest levels, the client is purchasing infrastructure as a service. If applications are involved, it's software as a service. Nearly everything that can be digitized can be provided as a service. Let's take a look at the three common types of services offered by cloud providers, from the bottom up:
Infrastructure as a Service: If a company needs extra network capacity, including processing power, storage, and networking services (such as firewalls) but doesn't have the money to buy more network hardware, it can buy infrastructure as a service (IaaS), which is a lot like paying for utilities: the client pays for what it uses. Of the three, IaaS requires the most network management expertise from the client. In an IaaS setup, the client provides and manages the software.
Platform as a Service: Platform as a service (PaaS) adds to IaaS a layer that includes software development tools such as runtime environments. Because of this, it can be very helpful to software developers; the vendor manages the various hardware platforms, which frees up the developer to focus on building and scaling their application. The best PaaS solutions allow the client to export their developed programs and run them in an environment other than the one where they were developed. Examples of PaaS include Google App Engine, Amazon Web Services Elastic Beanstalk, Engine Yard, Heroku, and offerings from Microsoft and Red Hat.
Software as a Service: The highest of these three levels of service is software as a service (SaaS), which handles the task of managing software and its deployment, and it includes the platform and infrastructure as well. This is the model you're probably most familiar with, because it's the one employed by Google Docs, Microsoft Office 365, and even storage solutions like Dropbox. The advantage of this model is reduced cost for software ownership and management; clients typically sign up for subscriptions to use the software and renew as required.
Beyond those three, providers sell many other services, including the following:
Hardware as a service (HaaS), which is analogous to IaaS but more likely relates specifically to data storage
Communications as a service (CaaS), which provides things like Voice over IP (VoIP), instant messaging, and video collaboration
Network as a service (NaaS), which provides network infrastructure
Desktop as a service (DaaS), which provides virtual desktops so that users with multiple devices or platforms can have a uniform desktop experience across all systems
Data as a service (also DaaS), which provides multiple sources of data in a mash-up
Business processes as a service (BPaaS), which is used to provide business processes such as payroll, IT help desk, or other services
Anything/Everything as a service (XaaS), which is a combination of the services already discussed
The level of responsibility between the provider and the client is specified in the contract. It should be very clear which party has responsibility for specific elements should anything go awry.
Types of Clouds
Running a cloud isn't restricted to big companies offering services over the Internet. Companies can buy virtualization software to set up individual clouds within their own network. That sort of setup is referred to as a private cloud. Running a private cloud gives up some of the features that companies want from the cloud, like rapid scalability and eliminating the need to buy and manage computer assets. The big advantage, though, is that it allows the company to control its own security within the cloud.
Broad network access: Broad network access means cloud capabilities are accessible over the network by different kinds of clients, such as workstations, laptops, and mobile phones, using common access software such as web browsers. The ability of users to get the data they want, when they want, how they want, is often referred to as ubiquitous access.
Resource pooling: The idea of resource pooling is closely related to virtualization. The provider's resources are seen as one large pool, which can be divided among clients as needed. Clients should be able to access additional resources as needed, even though the client may not be aware of where those resources are physically located.
Cloud-Based Storage
Storage is the area in which cloud computing got its start. The idea is simple: users store files just as they would on a hard drive, but with two major advantages. One, they don't need to buy the hardware. Two, different users can access the files no matter where they are physically located. Users can be located in the United States, China, and Germany, and every one of them has access via their web browser. This is particularly helpful for multinational organizations. There is no shortage of cloud-based storage providers in the market today, and each one offers slightly different features. Most of them will offer limited storage for free and premium services for more data-heavy users; note, however, that the data limits and costs can change. Most of these providers also offer business plans with unlimited storage for an additional cost.
Most cloud storage providers offer desktop synchronization, which gives you a folder on your computer, just as if it were on your hard drive. It's important to note that the folder will always have the most current edition of the files stored in the cloud.
Accessing the sites is done through your web browser. Once you are on the site, managing your files is much like managing them on your local computer. You have a couple of options for sharing a folder with another user. One way is to right-click the folder and choose Share. You'll be asked to enter the other user's name or email address and to indicate whether they can view or edit the file. To share multiple items, you can check the boxes in front of folder names and then click the icon that shows a person and a plus sign above the checkbox. That will take you to the same sharing menu, which will ask for the name and email address.
Cloud-Based Applications
Google popularized the use of web-based applications. After all, the whole Chromebook platform, which has been very successful, is based on this premise! Other companies have gotten into the cloud-based application space as well, like Microsoft with Office 365. The menu and the layout are a little different from PC-based versions of Microsoft Office, but if you're familiar with Office, you can use Office 365 with ease, and all the files are stored in the cloud.
Cloud-based apps run through your web browser, and for end users this is great for two reasons:
One, your system does not have to use its own hardware to run the application.
Two, different client OSs can run the application without worrying about compatibility issues.
When choosing a cloud provider, you may use any one you like, but it's better to experience the differences in how providers store files and let you manage and manipulate them before making your choice.
Using Google’s Cloud Services
1. Open Google at www.google.com.
2. If you do not have a Google account, you will need to create one. With it, you can use Google's online apps and storage as well as a Gmail account.
3. If you're doing this on your own, create a second account to share files and folders with.
4. Once you're logged in, click the Apps icon in the upper-right corner. It's the one that has nine small squares.
5. In Apps, click Drive. This will open Google Drive.
6. Create a folder and share it with the other account. Also, create a document or spreadsheet using Google's online software.
7. If necessary, log out and log in to the other account you created to access the resources that were shared with you.
The latest trend in web applications and cloud storage is the streaming of media. Companies like Netflix, Amazon, Pandora, and Apple store movies and music on their clouds. You can download their client software, and for a monthly subscription fee, you can stream media to your device, whether that's a smartphone, tablet, computer, or home entertainment system. Before the invention of broadband network technologies, this sort of setup would have been impossible, but now it has become the mainstream way that people receive audio and video entertainment.
Understanding Virtualization
Perhaps the easiest way to understand virtualization is to compare and contrast it with more traditional technologies. In the traditional computing model, a computer is identified as a physical machine running some combination of software, such as an operating system and other applications. There's a one-to-one relationship between the hardware and the operating system. For instance, imagine a machine that is a file server and needs to perform the functions of a web server as well. For this to happen, the administrator would need to make sure that the PC has enough resources to support the service, install web server software, configure the appropriate files and permissions, and bring it back online as both a file server and a web server. These would be straightforward administrative tasks. But now imagine that the machine in question is being asked to run Windows Server and Linux at the same time. Now there's a problem.
In the traditional computing model, only one OS can run at one time, because each OS completely controls the hardware resources in the computer. An administrator can install a second OS and configure the server to dual-boot, which means the OS to run is chosen during the boot process, but still only one OS can run at a time. So if the requirement is to have a Windows-based file server and a Linux-based Apache web server, there's a problem: two physical computers are needed.
Similarly, imagine a Windows-based workstation being used by an application programmer. The programmer has been asked to code an app that works in Linux, or Mac OS, or anything other than Windows. When the programmer needs to test the app to see how well it works, what does she do? She can configure her system to dual-boot, but once again, in the traditional computing model, she's limited to one OS at a time per physical computer. Her company could purchase a second system, but that quickly starts to get expensive when there are multiple users with similar needs.
This is where virtualization comes in. Virtualization is defined as creating virtual versions of something. In computer terms, it means creating a virtual environment where "computers" can operate. Virtualization is usually used to let multiple OSs run on one physical machine at the same time. They are still bound by the physical characteristics of the machine on which they reside, but virtualization breaks down the traditional one-to-one relationship between a physical set of hardware and an OS.
Virtualization has been around the industry since 1967, but it has exploded in popularity recently because of the flexibility that the Internet offers.
The Purpose of Virtual Machines
The virtualized version of a computer is appropriately called a virtual machine (VM). Because of VMs, dual-boot machines are far less common today than in the past. VMs also make technology like the cloud possible. A cloud provider can have one incredibly powerful server that runs five instances of an OS for client use, and every client can act as if it had its own server. On the flip side, cloud providers can pool resources from various physical servers into what looks like one system to the client, effectively giving clients nearly unlimited processing or storage capabilities. The underlying purpose of all of this is to save money. Cloud providers can achieve economies of scale because adding additional clients doesn't necessarily require the acquisition of additional hardware. Clients don't need to buy hardware and pay only for the services they use. And end users, as in the workstation example provided earlier, can have multiple environments to use without having to buy additional hardware.
The Hypervisor
The key enabler for virtualization is a piece of software called the hypervisor, also referred to as a virtual machine manager (VMM). The hypervisor software allows multiple operating systems to share the same host, and it manages the physical resources allocated to those virtual OSs. There are two types of hypervisors: Type 1 and Type 2.
A Type 1 hypervisor sits directly on the hardware, and it is sometimes referred to as a bare-metal hypervisor. In this instance, the hypervisor is effectively the OS for the physical machine. This setup is commonly used for server-side virtualization, because the hypervisor itself typically has very low hardware requirements to support its functions. Type 1 is generally considered to have better performance than Type 2, because there is no host OS involved and the system is devoted to supporting virtualization. Virtual OSs run within the hypervisor, and the virtual (guest) OSs are completely independent of each other.
Examples of Type 1 hypervisors are Microsoft Hyper-V, VMware, and Citrix XenServer.
A Type 2 hypervisor sits on top of an existing OS, called the host OS. This is often used in client-side virtualization, where multiple OSs are managed on a client machine rather than on a server. An example would be a Windows user who wants to run Linux at the same time as Windows. The user could install a hypervisor, install Linux within the hypervisor, and run both OSs concurrently and independently. The downsides of Type 2 are that the host OS consumes resources such as processor time and memory, and a host OS failure means the guest OSs fail as well. Examples of Type 2 hypervisors are Microsoft Virtual PC and Virtual Server, Oracle VirtualBox, VMware Workstation, and KVM.
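Most hypervisors can be driven from the command line as well as from their GUI. With Oracle VirtualBox, for example, the bundled VBoxManage tool can inspect and start VMs (the VM name here is illustrative):
VBoxManage list vms
VBoxManage list runningvms
VBoxManage startvm "Lubuntu"
The first command shows all registered virtual machines, the second shows only those currently running, and the third boots a VM by name.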
Client-Side Virtualization Requirements
As you might expect, running multiple OSs on one physical workstation can require more resources than running one OS. No rule says a workstation being used for virtualization is required to have more robust hardware than another machine, but for performance reasons, the system should be well equipped. This is especially true for systems running a Type 2 hypervisor, which sits on top of a host OS. The host OS needs resources too, and it will compete with the VMs for those resources.
Resource Requirements
The major resources here are the same ones you'd expect when discussing any computer's performance: CPU, RAM, hard drive space, and network performance. From the CPU standpoint, know that the hypervisor can treat each core of a processor as a separate virtual processor, and it can even create multiple virtual processors out of a single core. The general rule here is that the faster the processor, the better, and the more cores a processor has, the more virtual OSs it can support quickly. Within the hypervisor, there will most likely be an option to set the allocation of physical resources, such as CPU priority and the amount of RAM, for each VM.
Some hypervisors require that the CPU be specially designed to support virtualization. For Intel chips, this technology is called Virtualization Technology (VT), and AMD chips need to support AMD-V. Also, many system BIOSs have an option to turn virtualization support on or off. If a processor supports virtualization but the hypervisor won't install, check the BIOS and enable virtualization. The specific steps to do this vary based on the BIOS, so check the manufacturer's documentation.
Memory is always a big concern for computers, and virtual ones are no different. When installing the guest OS, the hypervisor will ask how much memory to allocate to the VM. This can be modified later if the guest OS within the VM requires more memory to run properly. Always remember that the host OS requires RAM too. So if the host OS needs 4GB of RAM and the guest OS needs 4GB of RAM, the system needs at least 8GB of RAM to support both adequately.
Hard disk space works the same way as RAM. Each OS needs its own hard disk space, and the guest OS's allocation is configured via the hypervisor. Make sure that the physical computer has enough free disk space to support the guest OSs.
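If you're unsure whether a Windows machine supports and has enabled hardware virtualization, one quick check (a sketch; the relevant output section appears on Windows 8 and later) is:
systeminfo
Near the bottom of the output, the Hyper-V Requirements section includes lines such as "VM Monitor Mode Extensions" and "Virtualization Enabled In Firmware," which indicate CPU support and whether the BIOS/UEFI option is turned on.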
Finally, from a networking standpoint, each of the virtual desktops will need full network access, and configuring the permissions for each can sometimes be tricky. The virtual desktop is often called a virtual desktop interface (VDI), a term that encompasses the software and hardware needed to create the virtual environment. The hypervisor will create a virtual NIC for the VM and manage the resources of that NIC. The virtual NIC doesn't have to be connected to the physical NIC; an administrator could create an entire virtual network within the virtual environment where the virtual machines just talk to each other. That's not normally done in the real world, though, so the virtual NIC will usually be connected to the physical NIC. A virtual switch manages the traffic to and from the virtual NICs and attaches to the physical NIC logically. Network bandwidth is often the biggest bottleneck when running multiple hypervisors. With Type 1 hypervisors, virtual desktops are often used with remote administration. This can allow a remote administrator to work on the workstation with or without the knowledge of the user sitting in front of the machine.
Emulator Requirements
Virtual machines are created to exist and perform just like a physical machine. Because of that, all of the requirements that a physical machine would have need to be replicated by the hypervisor; that process is called emulation. The terms hypervisor and emulator are often used interchangeably, even though they don't mean the same thing. A hypervisor can support multiple OSs, whereas technically, an emulator is built to work with one specific OS. As for requirements, the emulator or hypervisor needs to be compatible with the host OS. That's about it. Some mobile-based games are very popular and do not have PC equivalents; in other words, you need to run them from iOS or Android. Android is based on open source code, meaning that there are Android emulators on the market. One such emulator is Andy, at www.andyroid.net. People can install Andy on their desktops and then install Android apps within Andy. That way, they can play their favorite mobile games on their desktop computer.
Security Requirements
In the early days of the cloud, a common misconception was that virtual machines couldn't be hacked. Unfortunately, some hackers proved this wrong. Instead of attacking the OS within the VM, hackers have turned their attention to attacking the hypervisor. Why hit one OS when you can hit all of them on the computer at the same time? Several virtualization-specific threats have cropped up that focus on the hypervisor, but updates have fixed the issues as they have become known. The solution to most virtual machine threats is to always apply the most recent updates to keep the system current. At the same time, all security concerns that affect individual computers also apply to VMs. For example, if Windows is being run in a VM, that instance of Windows still needs anti-malware software installed on it.
Setting Up and Using Client-Side Virtualization
If given a scenario on the A+ exam or faced with one in real life, you should be able to figure out and show someone how to use client-side virtualization. Here are some sample steps to follow:
1. Determine the client's needs for virtualization. For example, do they have a Windows or Mac client and wish to run Linux? Or perhaps they have a Windows computer and need to run Mac OS X at the same time? Determine their needs, and then obtain the additional OSs before beginning the installation.
2. Evaluate the computer to make sure that it can support the VM.
a. Does the processor support virtualization?
b. How much RAM does the computer have? Is it enough to meet the minimum requirements of all installed OSs?
c. How much free hard drive space is there? There needs to be enough to install the hypervisor and guest OS and to store any files that need to be saved from within the guest OS.
d. Does the system have a fast enough network connection if the guest OS needs access to the network?
3. Consider which hypervisor to use. Is there a version of the hypervisor that's compatible with the host OS?
4. Consider security requirements. If the guest OS will be on the Internet, will it have proper security software installed?
5. Once all conditions are deemed sufficient, install the hypervisor and the guest OS. It should not affect the host OS, but it's always a good idea to back up the system before installing any major new software packages!
Now that the key concepts behind client-side virtualization have been covered, it's time to try it out. Normally, installing a second OS involves a relatively complicated process in which you dual-boot your computer. You're not going to do that here.
Instead, you'll use the VirtualBox hypervisor, which lets you create a new virtual system on your hard drive without affecting your existing Windows installation. We promise that this exercise will not ruin Windows on your computer! And when you're finished, you can simply uninstall VirtualBox if you like, and nothing will have changed on your system.
Installing VirtualBox and Lubuntu on Windows 7
The first two steps are for preparation only. You need to download Oracle VirtualBox and a version of Lubuntu. Any version of Linux is fine, but we'll point you to a 32-bit version of Lubuntu, which should minimize any compatibility issues. Depending on your network speed, the downloads could take an hour or more.
1. Download Oracle VirtualBox from www.virtualbox.org/wiki/Downloads. Select VirtualBox For Windows Hosts, unless, of course, you have a Mac, in which case you need the one for OS X hosts. Save it to your desktop for ease of access.
2. Download Lubuntu from http://lubuntu.net/tags/download. There is a link on the left side for the Lubuntu x86 CD; choose that one. It will download a zipped file with an .iso extension. You will need that ISO file later; it will essentially act as a bootable CD for your OS installation. Now you can begin the installation of VirtualBox.
3. Double-click the VirtualBox icon. If you get a security warning, click the Run button. Then click Next on the Setup Wizard screen.
4. On the Custom Setup screen, click Next, and then Next again. You will get a warning about your network interfaces. Click Yes. (Your network connections will come back automatically.)
5. Click Install. This might take several minutes. (You may also need to clear another security warning box.)
6. Once the install is complete, click Finish. It's time to configure VirtualBox.
7. You might get a VirtualBox warning telling you that an image file isn't currently accessible.
8. Click the blue New icon to create a new virtual machine. Give it a name. The Type and Version boxes aren't critical; they don't affect anything. If you type in Lubuntu for a name, it will automatically set Type to Linux and Version to Ubuntu (32 bit). Click Next.
9. In the Memory Size window, click Next.
10. In the hard drive window, the default is Create A Virtual Hard Drive. Leave that option selected and click Create.
11. You'll be prompted for the hard drive file type you want to create. Leave it on VDI and click Next.
12. On the next screen, either Dynamically Allocated or Fixed Size is fine. If you're low on disk space, go with Dynamically Allocated. Click Next.
13. In the File Location And Size window, it's probably best to leave it at the default size of 8GB. Don't make it any smaller. Click Create. Now you'll see a screen with a virtual drive. Great! You now have a virtual machine on your hard drive. Now you just need to put something on it, more specifically an OS.
14. Click the Settings button.
15. In the Lubuntu – Settings window, click the Storage icon on the left.
16. Under one of your controllers, you should see something that looks like a disc icon and says Empty.
17. On the far right side of the window, you'll see another disc icon with a down arrow on it. Click that, and a menu will pop up. Select Choose A Virtual CD/DVD Disk File.
18. In the window that pops up, navigate to the directory where you stored the Lubuntu ISO file that you downloaded. Highlight the file and click Open.
19. Back in the Lubuntu – Settings dialog, the drive that used to be empty should now say Lubuntu. Click OK.
20. Now you're back at the Oracle VirtualBox Manager screen. With Lubuntu highlighted on the left, press the green Start arrow. This will begin the installation of Lubuntu.
21. Choose a language and then, on the next screen, choose Install Lubuntu and press Enter.
22. Follow the Lubuntu installation process.
23. You'll get to a screen that asks you for an installation type. It sounds scary, but choose the Erase Disk And Install Lubuntu option. This will install Lubuntu on the virtual disk you created earlier with VirtualBox; it will not wipe out your entire hard drive.
24. Continue with the installation process. When in doubt, choose the default and move to the next step.
25. Once installation is complete, click the Restart Now button.
Now that the installation is complete, play around in your new operating system!
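If you are comfortable at a command prompt, note that VirtualBox also ships with a command-line tool called VBoxManage that can do the same setup as the wizard. The following is a minimal sketch, assuming VBoxManage is on your PATH; the VM name, memory size, and file names are example values only:
rem Create and register a new VM, then give it 1GB of RAM
VBoxManage createvm --name "Lubuntu" --ostype Ubuntu --register
VBoxManage modifyvm "Lubuntu" --memory 1024
rem Create an 8GB virtual hard drive, then attach it and the install ISO
VBoxManage createhd --filename Lubuntu.vdi --size 8192
VBoxManage storagectl "Lubuntu" --name "IDE" --add ide
VBoxManage storageattach "Lubuntu" --storagectl "IDE" --port 0 --device 0 --type hdd --medium Lubuntu.vdi
VBoxManage storageattach "Lubuntu" --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium lubuntu.iso
VBoxManage startvm "Lubuntu"
The graphical steps above do the same work behind the scenes; use whichever approach you are more comfortable with.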
Summary
In this chapter, you learned about various server roles and technologies that work on local networks, as well as the ones that work on the Internet to make the cloud possible. First, you learned about specific server roles, which include web, file, print, DHCP, DNS, proxy, mail, and authentication servers. We talked about what all of these do as well as where they should be located on the network, either inside the secure network or in the DMZ. In addition to servers, many networks will have Internet appliances dedicated to security, like IDS, IPS, and UTM devices. Some networks also support legacy or embedded systems. While these systems are old and outdated, they often provide critical functionality on the network.
The next topic was cloud computing. Cloud computing has become one of the hottest topics in IT circles. There are several different types of services that cloud providers sell, like IaaS, PaaS, and SaaS. There are also different types of clouds, like public, private, community, and hybrid. Cloud computing depends upon virtualization. Virtualization removes the one-to-one relationship between computer hardware and an operating system. You also learned about what virtualization does, about the core piece of software that makes it possible, the hypervisor, and about the requirements for client-side virtualization. This chapter finished with an exercise on how to install a hypervisor and Lubuntu Linux on a Windows computer.
CHAPTER 3: MITIGATING NETWORK
THREATS
● Denial of service
Denial of service (DoS) attacks block you from accessing the network or resources associated with the network. They can be carried out in many ways. DoS attacks are frequent today, and they usually target large corporations.
The attackers can flood the organization's website with more traffic than the servers can handle, making it impossible for legitimate users to access the site.
● Ping of death
The ping of death is one of the common forms of DoS attacks. As you learned earlier, ping is a command-line utility used to determine whether a device is reachable and responding to IP requests.
So what happens in a ping of death attack? To communicate, your device sends ICMP packets to a remote host to establish its availability. In a ping of death attack, however, the intruder bombards the remote host with oversized or malformed ICMP packets that its TCP/IP stack cannot handle. The attacker expects the remote host to be overwhelmed, so that it either hangs or keeps rebooting.
Luckily, most operating systems today are designed with patches and security updates that protect them from such an attack.
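You can see the packet-size element of this attack with an ordinary, legitimate ping. On Windows, the -l option sets the size of the data buffer that is sent; the address below is a placeholder, and you should only ping hosts you are authorized to test:
rem Send pings with a 65,500-byte data buffer
ping -l 65500 192.168.1.10
Windows caps the buffer at 65,500 bytes precisely because an IP packet larger than 65,535 bytes is illegal; the original ping of death exploited systems that crashed while reassembling such oversized packets.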
● Smurfs
Forget about the tiny adorable blue fellows you have seen on TV. Smurf attacks are nowhere close to charming. What smurf attacks do is flood your network with spoofed ping broadcasts. Spoofing is using someone else's IP address.
How is a smurf attack executed? The attacker will spoof your IP address, and after that, direct a huge number of pings to the broadcast addresses associated with your network. A router that receives the request will proceed to broadcast the pings on the network, assuming it is a standard broadcast request. From here, all the other hosts will pick up on the broadcast. Since every host echoes a reply to the spoofed address, the victim is buried in responses. In a short while, you will have a nightmare on the network, because every host on the network is trying to respond to the requests.
Smurf attacks are more effective when they target a vast network, because they benefit from economies of scale. Smurf attacks might be a thing of the past, but you can never rule out the possibility of one occurring. There is always that one random hacker who decides to take an old-school approach that no one would suspect. Most routers today are programmed so that they will not forward directed broadcasts haphazardly.
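You can also make sure your own hosts never act as amplifiers for such an attack. On Linux, for example, the kernel can be told to ignore pings sent to broadcast addresses; this is a minimal sketch, assuming root privileges:
# Check whether broadcast pings are ignored (1 means ignored, the safe setting)
sysctl net.ipv4.icmp_echo_ignore_broadcasts
# Enable the protection on the running kernel
sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1
Modern Linux distributions generally ship with this enabled by default, which is one reason smurf attacks have faded.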
● Tribe flood network
Tribe flood network attacks are very sophisticated and are a form of what is commonly known as a distributed denial of service (DDoS) attack. They are orchestrated when the attacker launches a series of coordinated DoS attacks from several compromised sources, all directed at a single target.
● SYN flood attack
This is an attack where your server or device is flooded with a lot of meaningless connection requests. An attacker initiates this process by sending a lot of SYN packets, the first step of the TCP handshake, to your network.
When the requests are delivered to your device, it attempts to respond to all of them, holding each half-open connection while it waits for a reply that never comes. Within a short while, your network resources are depleted or stretched to capacity. At this point, any legitimate incoming requests will be rejected, because your network is struggling to deal with the flood of SYN requests.
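A pile-up of half-open connections is often visible with netstat. The commands below are illustrative; the exact state name depends on the operating system:
rem Windows: list connections stuck waiting for the handshake to finish
netstat -an | find "SYN_RECEIVED"
# Linux: the equivalent state is SYN_RECV
netstat -ant | grep SYN_RECV
A handful of these entries is normal on a busy server; hundreds from scattered source addresses suggest a SYN flood in progress.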
● Virus
The funny thing about viral attacks is that they usually get a lot of media
attention. A lot of people fall victim to these attacks before they realize they
are affected. A virus is a simple program whose effect depends on the
intention of the developer who coded it. It is not always easy to determine the
motivation behind a virus, but it is wise to assume complete devastation.
After all, anyone who gains unwarranted access to your devices is definitely
up to no good.
Some of the common reactions that have been reported on virus infections
include devices going on a rebooting loop, wiping your entire hard drive
clean, deleting some files, sending meaningless messages and emails to
everyone in your contact list, and so forth.
An interesting thing about viruses is that they never execute on their own. They depend on the user to do something to trigger them. Something as simple as downloading a photo online could have dire ramifications for your network if the picture has a virus hidden in it. Most viral infections target places that people frequent, like social networks. Once someone has one on their device, it spreads like wildfire.
File viruses are the most common, and there is a likelihood that you might have fallen victim at some point. These are viruses hidden in files that you share all the time. Since you trust the individual who shares the file, you will barely suspect a thing. Once you open the file, the virus executes, and your problems begin.
Remember when we mentioned that viruses are often planted around places that people frequent? Well, most of them are created to hide in applications that you use all the time, like a spreadsheet or an MS Word document. You might have enjoyed a presentation and asked the presenter to share the PowerPoint file with you for future reference. If the file were infected, you would become a victim.
A file virus is written to infect an application or system file that is executable. For those who run Microsoft Windows operating systems, these are files with extensions like .exe or .dll. The chances are high that you will overlook infected files, assuming that they are legitimate system files.
When you access the infected file, the virus executes and loads into system memory, where it waits for you to launch a new application or program. The moment that happens, it infects that program too, and before you know it, you cannot do anything on the network.
A macro virus is a script that performs the intended hack without your knowledge. Unlike file viruses, you do not need to run a program to trigger it; simply opening an infected document, such as a spreadsheet with macros enabled, is enough. Macro viruses are common because they are some of the easiest scripts to write. Most of them are harmless, but you can never take chances.
A boot sector virus is one of the worst kinds of viruses that can affect your
devices. This virus embeds itself into the master boot record. When the
master boot record of your device is compromised, there is very little that you
can do other than wiping out the entire hard drive and reinstalling the
operating system.
This virus overwrites the boot sector, preventing the operating system from identifying the boot sector or boot order. If you turn on your computer but it cannot boot, citing an absent hard drive or an operating system not detected, chances are high that you have a boot sector virus infection.
A multipartite virus affects both your files and the boot sector. This is one of the most dangerous infections, and you will almost certainly have to wipe the hard drive. Some of these infections can lurk undetected for months after the point of infection and then wreak havoc when you least expect it.
Today a lot of companies have measures in place to protect their networks and devices from such attacks. However, you can take additional steps to protect yourself. Install and regularly update your antivirus programs, and avoid websites and networks that are risky and notorious for viral infections. If you frequent tabloid news sites, for example, you are far more likely to become a victim of viral infections.
● Worms
Worms operate like viruses, but with one critical difference: once you are infected, they spread on their own. You do not need to do anything. As long as worms are in your system, they can do as they please. Worms activate, operate, and spread autonomously.
How attacks happen
Depending on their ultimate goal, each attacker will often have an action plan to get what they want. Some attackers will stalk you over a period of time, learning as much as they can about you and your network without your knowledge before they pounce and execute their attack.
Attackers can interact with you in very subtle ways, and you will rarely realize you are being targeted. Something as simple as a computer game can be sufficient to give them access to your network. Armed with the knowledge that someone out there is always trying to gain privileged access to your network, you must exercise caution.
A directed attack is one that is deliberately orchestrated against you by an attacker, someone who is looking for something specific or who has a particular reason for hacking your network. A directed attack is different from something like a virus, which might be transmitted from one device to another by unwitting users. Viruses usually take advantage of a weak system and embed themselves into the device. Some viruses clone themselves to look like files you see or access all the time, so you never suspect them. Here are some of the most common network attacks that you might experience while managing a network:
● ActiveX attack
These attacks are embedded in small apps that you must install on your computer to gain access to something specific. Some websites require you to install Java or Adobe plugins to play certain media. These are simple ways for attackers to access your devices. Once you install the app, the program runs in the background without your knowledge, collecting information that is transmitted to the hacker's server.
Hackers who use this technique can remotely access everything on your devices without your knowledge. This is a dangerous attack because someone who can see everything on your hard drive could just as easily store damning information or plant doctored evidence on your device.
● Auto rooter
An auto rooter is an automated hack. Hackers who perform this rely on rootkits. This type of attack is common with hackers who want to spy on your network. Once your system is affected, they have access to everything you do and can monitor your device for as long as they need to.
● IP spoofing
An IP spoof is a situation where the attacker sends data packets, but instead of using their real source address, they use a fake one. More often than not, your network is susceptible because the IP address is spoofed to look like it is coming from a device within the network, when in reality the packets originate from an outside IP address.
The problem with IP spoofing is that routers will treat the packets as standard requests from within the network, because the source address appears to be local. The best way to deal with such an attack is to have a firewall in place.
There is a lot of privileged information on the internet today. Think about the number of people who shop online, the companies that store user credentials and information in the cloud, and so forth. Data is the new gold, and everyone is trying to hack into some system to obtain useful data. Corporate espionage, for example, is one of the most prominent black hat businesses. People are paid a lot of money to hack into a system and obtain specific data. Implementing firewalls on your networks is one of the best ways to avoid this problem.
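A firewall can enforce a simple anti-spoofing rule: packets arriving on the internet-facing interface should never claim a source address from inside your network. As a minimal sketch using Linux iptables, and assuming eth0 is the internet-facing interface and 192.168.0.0/16 is your internal range:
# Drop inbound packets that claim an internal source address
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
Commercial firewalls and routers offer the same ingress filtering under different names, but the principle is identical.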
● Backdoor access
A backdoor is a path created by a developer to allow access to a program or app without going through the normal processes. Developers create backdoors for different reasons; some use them as a means of troubleshooting the app when standard access methods fail.
For hackers, backdoors are a way to invade the network. It is always advisable to monitor and inspect the network as often as possible to detect backdoors. You should also conduct system audits frequently to make sure your network security is up to par with current standards.
Considering the current state of affairs in data security, most countries place a lot of emphasis on the need for system and network audits. You must do everything in your power to protect the networks you use, or you might be held liable if a data breach results in misuse of the data obtained.
● Application layer attack
An application-layer attack exploits loopholes in the programs or applications that you use. Most of these attacks target apps and programs that require permissions, because those collect a lot of privileged information. Anyone who hacks into your system with such an attack will not just gain access to the system and devices; they will have access to an information goldmine.
● Packet sniffing
Packet sniffers are tools that network managers use to troubleshoot the network. They scour the network for problems. However, these same tools can also be used by hackers. Packet sniffers are commonly used by identity fraudsters to steal login credentials on breached networks, along with any other information that might be relevant to their cause.
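tcpdump is a common command-line sniffer on Linux and other Unix-like systems. As a quick illustration, assuming eth0 is the interface you want to watch:
# Capture web traffic on eth0 without resolving names
tcpdump -i eth0 -n port 80
Watching credentials scroll past in plain text on port 80 is a convincing argument for encryption; a sniffer cannot read what it cannot decrypt.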
● Brute force
In a brute force attack, the hacker runs a program that tries credential after credential against a service on your network, such as a server login, until one works. Once they gain account privileges, they can plant backdoor access, and later on they can get into your network without needing passwords at all.
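Account lockout policies blunt brute force attacks by limiting how many guesses an attacker gets. On Windows, for example, the built-in net accounts command can set this; the threshold of 5 below is just an example value:
rem Lock an account after five failed logon attempts
net accounts /lockoutthreshold:5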
● Network reconnaissance
Network reconnaissance is merely spying on the network. The hackers take their time to obtain as much information as they can about your network before they pounce. They can scan network ports or employ phishing techniques to get the information they need.
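Port scanners such as nmap are the classic reconnaissance tool, and running one against your own network shows you exactly what an attacker would see. The address range below is a placeholder; scan only networks you administer:
# Scan the well-known ports across an entire subnet
nmap -p 1-1024 192.168.1.0/24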
● Password attack
A password attack is one where the hacker poses as a valid user on the network, gaining access to that user's resources and credentials. Password attacks can be initiated in many ways, and they can also be used alongside other attacks.
● Man in the middle
Man-in-the-middle attacks take place when the hacker intercepts your communication and reads the captured data before it is delivered to the recipient. Compromised internet service providers, credit and debit card swipe machines, and rogue ATMs are common vehicles for these attacks.
● Trust exploits
It has often been said that the weakest link in any security apparatus is the
human interface. We tend to hold trust in high regard, and this is what brings
down the entire system. You believe that someone cannot do something
wrong because you know them personally, and they are not of that character.
Unknown to you, they might have ulterior motives, or someone else might be
using them without their knowledge. Trust exploits occur when a hacker
manages to exploit the trust relationships that you have in the network.
● Port redirection
In this attack, the hacker uses a host machine that is already compromised, one that your network treats as friendly. Since this machine already has privileges, the hacker uses it to funnel traffic into the network, traffic that your firewall would normally block.
● Phishing
Phishing is one of the most sophisticated hacking attempts you will come across today, and there is a good reason why it is referred to as social engineering. As networks evolve, so do the tactics hackers use to obtain privileged information. Phishing attacks get you to offer the hacker information that you would never provide in your right mind, and you do so without knowing it.
The majority of network administrators have taken steps to protect their networks from attacks, and hacking such networks is not easy. Instead of going through the trouble, hackers simply create something that looks legitimate and pass it on to the users. With this, they obtain all the information they need from you.
An excellent example of this is a hacker who wants to collect some
information like identification details, date and place of birth, and so forth.
Such a hacker can create a loan app, make it look legitimate, and put it up in
an app store. From there, they market the loan app convincingly and make sure it gets the right attention.
Once you download this app, perhaps because you are looking for a cheap loan despite a bad credit score, you feed in all the essential details you would provide a bank or any lender and wait for feedback. You never get any response from the phishing hackers. They already have information like your full name, residential address, phone number, and email address, and some even prompt you to create a password, which is especially dangerous because most people reuse the same password across all their accounts.
Some phishing hackers even clone emails and make them appear to come from a legitimate source like a well-known government entity or bank. To avoid these, always make sure you use official links from websites you access. When in doubt, call the organization's official contact number to verify before you provide your information.
Some of the phishing attacks use keyloggers. Keyloggers can run in your
system undetected. Today most keyloggers are built in such a way that they
can send not just the keystrokes, but some even capture screenshots and
upload them to the hacker’s server without your knowledge. Since you might
be typing a lot on the keyboard, keyloggers can be programmed to capture
only specific information like emails and passwords, phone numbers, and so
forth.
Protecting your network
While most, if not all, of the network threats discussed can be mitigated, many network administrators fail to do so for different reasons. Some administrators assume certain risks are beyond their purview, and as a result, they do not give them the attention they require. To protect your network, you should perform the following:
● Active detection
Active detection is a process where you scan your network continuously to detect intrusion. This should be standard practice for any network administrator. Remember how you double-check the door before you leave, even though you already turned the key all the way? The same vigilance applies to network security.
● Passive detection
Passive detection practices require that you log all network activities and events to a file for review later on. One of the best examples of passive detection is installing CCTV cameras on the premises. You might not be watching the cameras all the time, but you are confident that they capture everything that transpires. If something is amiss, you can go back to the footage later and review it. The same concept applies to networks.
In the event of any network issue, you can always go back to the event
logs and try to find out what might have happened.
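On a Windows server, the Security event log is a natural place to start such a review. The built-in wevtutil tool can pull recent entries from the command line; the count of 20 is just an example:
rem Show the 20 most recent Security log events, newest first, as readable text
wevtutil qe Security /c:20 /rd:true /f:text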
● Proactive protection
Proactive protection is about preparing for the worst possible scenario. You work to make sure that the network is as impenetrable as possible. All procedures and steps that you take to achieve this are part of your proactive protection mechanisms. Proactive protection is about vigilance.
What can you do to protect your network from intrusion? Each organization must have rules, regulations, procedures, or policies that guide its operations. When it comes to network security, no precaution is ever too paranoid to be considered.
Make sure you perform a network audit, preferably by an external auditor. An audit examines your network to determine whether all the components are safe. While an internal auditor can perform the analysis, you need an external auditor for industry-certified standard audits.
Ensure that you communicate the necessary security policies effectively, so that everyone is aware of their existence. This can be done in the form of a notification on user devices. Something as simple as “UNAUTHORIZED ACCESS IS PROHIBITED AND IS PUNISHABLE BY LAW” displayed clearly at logon can act as a constant reminder to users that they must stay in line.
Any ports that are not in operation should be disabled. This way, guests in
the office cannot use them. Someone might come in and plug their laptop into
one of the free network ports, and in the process, introduce a virus into the
network without knowing it.
It is good practice to reset network passwords at regular intervals. Some organizations force these changes monthly, while others do it as often as weekly or even daily. In the most extreme cases, the password you use when you come to work in the morning expires when you sign out at the end of the workday, and you get a new one when you sign in the next morning.
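Password aging can be enforced centrally rather than left to memory. On Windows, for instance, the net accounts command sets a maximum password age; the 30 days below is only an example value:
rem Force every password to expire after 30 days
net accounts /maxpwage:30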
Always make sure your network has firewalls running. Firewalls guard the internet connections so that only those with authorized access can use the network resources. There are many firewall products on the market; be sure to use one that suits your budget and the size of your company.
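It is worth verifying, not just assuming, that the firewall is actually running. On a Windows machine, for instance:
rem Display the firewall state for all network profiles
netsh advfirewall show allprofiles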
Keep your antivirus programs updated to the latest version at all times, and run system scans frequently to weed out potential threats to the network. Most organizations perform this security maintenance over the weekend when the office is not busy, so that when everyone reports to their desks on Monday morning, the systems are ready.
CHAPTER 4: MANAGING AND TROUBLESHOOTING THE NETWORK
From time to time, you will need to troubleshoot the network. You can do this on a schedule or on impulse, in response to an immediate threat. More often, the need for troubleshooting catches you off guard. At times it is the straightforward issues that make things difficult on the network. More often than not, you worry about a severe problem, struggling to understand the cause, only to realize it was something simple, and perhaps all you needed was to reboot the network.
Network problems can overwhelm you, and it gets worse when you have a problem at peak hours. Everyone on the network is unable to do their work until you sort out the problem. The pressure can be intense, particularly if you work in a fast-paced organization.
The first step in troubleshooting a network is to identify and narrow down
the possibilities. The network issue might be caused by one of many reasons.
Narrow them down and eliminate them one by one, especially if you cannot
deduce an immediate cause.
For troubleshooting, no reason is ever too simple to be possible. Eliminate
the possibility of a problem as a result of simple human errors. The following
are the four essential procedures that you should follow when troubleshooting
a network concern:
• Check the network to ensure all the simple things are okay.
• Determine whether you have a software or hardware issue.
• Determine whether the problem is localized to the server or workstation.
• Find out the sectors in the network that are affected.
These four steps will help you eliminate possible causes one by one until
you identify the problem and fix it. Let’s delve deeper into it.
Check the network to ensure all the simple things are okay
At times it is the simplest explanation that solves your problem. Before you worry about complex reasons for the network issue, try to eliminate the possibility of a tiny problem. Many are the times when someone will call you frantically because they are unable to access their account on the network, only for you to discover that they had Caps Lock on.
While assessing the problem, ensure that the correct procedure is being followed to access the network. Check to make sure the credentials are proper; someone might be keying in the wrong details inadvertently. You'd be surprised how many times people enter incorrect information and lock themselves out of their accounts.
You can also set limits on the number of failed sign-in attempts users are allowed. This alerts you when someone is struggling to access their account, so you can reach out and assist them accordingly. It can also come in handy by informing you when someone is trying to access a device they are not supposed to.
● Login problems
If the network problem is user-specific, ensure their login credentials are correct. Where possible, try to sign in to the account from a separate workstation. If that works, try it on the problematic workstation to rule out any other issues.
If none of those possibilities pans out, go through the documentation for your network to determine whether there are any restrictions in place that you might not be aware of, and make sure the user is not in violation of any such restrictions.
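On a Windows machine, the net user command is a quick way to check a specific account's status and logon restrictions; the username below is a placeholder:
rem Show account status, lockout state, and logon restrictions for one user
net user jsmith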
● Power connection
Check the power switch. Are all the devices that should be powered on actually running? There is always a risk that someone tripped over one of the cables and pulled it out of the power source. You'd be surprised how many times people complain about having a blank screen on a computer that is powered on, only to find that the power cable to the screen was not plugged in correctly.
● Collision and link lights
Check the collision light and the link lights. The collision light blinks amber. You should see it on the hubs or on the Ethernet network interface card. If this light is on, you have a collision on the network. On a busy network, collisions are common. However, if the light blinks frequently, there may be too many collisions, which degrades network traffic. Check to make sure the network interface card and the other network devices are working correctly, because one of them might have malfunctioned.
The link light is green. If the link lights are on for the network interface card and for the hub port where the workstation is connected, it is a sign that the hub and workstation are communicating properly.
● Operator problems
Individual operators can have difficulties that have nothing to do with the network but still lock them out and prevent them from accessing the network altogether. Perhaps the system is simply unfamiliar to the user; if they do not understand it, chances are high that they will struggle to use it. Find out if the user has any challenges, and if so, walk them through it carefully so that they do not feel you are undermining them or looking down on them.
Explain to them why they are experiencing the problem. Be patient, and make the user feel confident about reaching out whenever they have a similar problem or any other. If you do not inspire confidence in the user, they may shy away from informing you of a problem and instead attempt to solve it on their own, which only makes things worse.
Determine whether you have a software or hardware issue
Hardware problems can be extreme. One of the devices might have outlived its useful life. Hardware problems might also mean you need to plan for data recovery or retrieval if the hardware fails. Fixes for hardware problems involve replacing the devices, updating device drivers, or tweaking the device settings.
Troubleshooting software problems depends on the nature of the issue at hand. Most programs today operate on a subscription basis. Perhaps the subscription has expired and was not renewed in good time, so you are locked out of the system or your user privileges have been limited to free account terms. In such a case, follow up with the relevant parties and pay the subscription fee to restore full access.
Remember that whether you are dealing with hardware or software issues, you might need to back up your data. Ensure you have sufficient space for this.
Determine whether the issue is localized to the server or workstation
Identifying the extent of the problem helps you gauge how severe it is. If it is a server problem, a lot of people will be affected, and you will have far more to deal with than if it were just one workstation.
For a workstation problem, you can try to sign in to that account from a different workstation in the same workgroup. If that works, you can trace through the necessary steps to fix the problem. Check the connections, the cable, the keyboard, and so forth. Chances are the problem is something simple.
Find out the sectors in the network that are affected
Determining which areas of the network are affected by the problem is not going to be an easy task. There are many possibilities here. If a lot of people on the network are affected, your network might be suffering from an address conflict.
Check your TCP/IP settings to make sure that all IP addresses on the network are correct. The problem arises when two devices on the network end up with the same address, which causes duplicate IP errors, and it might take you a while to spot. If everyone on the network has the same problem, it could be an issue with a server to which they are all connected. That is an easy one to solve.
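Two quick commands can help confirm a suspected address conflict. On a Windows workstation, for example:
rem Show the full TCP/IP configuration, including IP address and subnet mask
ipconfig /all
rem List the IP-to-MAC mappings this machine has seen; one IP paired with two different MAC addresses over time suggests a duplicate
arp -a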
Check the cables
The way the network is cabled could be causing you problems. If you have checked everything else on the network and it is fine, but the system is still down, you need to look at the cables. Ensure all the wires are connected to their appropriate ports. Patch cables between wall jacks and your workstations might need replacing; people step on the wires, wheel over them with their chairs, and so forth. If cables are run across the office floor, you might need to replace them, and you should probably consider a better way of running cables.
There are several cable issues that you might be experiencing. Most of them are basic, but cables are the foundation of your network, so you have to know about them. Here are some of the cabling issues you might experience:
● Interference
Computers are highly susceptible to signal interference. Radio transmitters and TV sets are frequent culprits, because these devices generate radio frequencies during transmission. To avoid this problem, use shielded network cables throughout the network.
● Shorts
A physical fault might cause a short circuit in the cabling. Today there are special tools that you can use to locate the short. More often than not, you will need to fix or replace the cable.
● Collisions
If two devices on your network transmit at the same time on the same segment, there will be a collision. Collisions are likely if you are still using older Ethernet networks or hubs. Replace hubs in the workplace with switches where possible, because switches are intelligent and can help you prevent collisions on the network.
● Echo
An echo is a signal reflection caused by an open circuit or an impedance mismatch. With cable-testing equipment, you can tell whether your cables are completing the circuit, and you can test to identify a bad connection. If you detect an echo on all the wires at the same place, you might have a cut cable that needs replacing. Some special testing equipment today can show the exact location of a cut even if the wires run behind a wall.
● Attenuation
Attenuation is a situation where the medium through which signals travel degrades the signal. All networks experience this problem. The risk of attenuation depends on how you lay the cable. Take copper, for example: you should regenerate the signal with a switch or a hub after every 100 meters. If you use fiber optic, however, the signal travels a much longer distance before it degrades. Consider your organization's needs, and if possible, use fiber optic cables instead of copper. If you cannot afford fiber optic cables, place hubs or switches at appropriate intervals to counter attenuation.
● Cross talk
Wires that are in close proximity to one another experience crosstalk when they carry current. To reduce the risk of crosstalk, the wires in each pair are twisted around one another, and cables that must cross are run at 90 degrees to each other. The tighter the twist, the less crosstalk you will experience on the network.
Troubleshooting a wireless network
Most users appreciate wireless networks today, mainly because they are
easy to access from a wide range, depending on the settings. Wireless
networks also take away the problem of running cables all over the place. For
network administrators, wireless networks might present one of the biggest
challenges during troubleshooting.
First, wireless networks are synonymous with configuration problems. More often than not, when you have a problem with a wireless connection, you first go through the steps discussed above to make sure the hardware is okay, and then you get into troubleshooting the network itself. The following are some of the common challenges you might experience with a wireless network:
● Encryption challenges
Encryption is mandatory to protect communication across a wireless network. Each network uses a particular encryption protocol; some networks use WPA2, others use the older WEP, and so forth. For the sake of security, use the strongest encryption protocol available for your network, and to make life easier, make sure everyone on the network has their devices configured for that same encryption.
● Interference
Wireless networks transmit data packets through radio waves. For this reason, they are more susceptible to interference than cabled networks. A wireless network might suffer interference from something as small as a Bluetooth device attached to a computer in the office. This is prevalent especially when the source of interference is close to the network.
● Channel problems
A lot of wireless networks operate in the 2.4 GHz and 5 GHz frequency bands, and within those bands there are many overlapping networks. Some channels overlap less than others, which is why they are cleaner and more stable. Most of the time, you will barely have an issue with channel configuration, unless someone intentionally or accidentally forces their device to use the wrong channel.
● Mismatched ESSID
A wireless device will always search for nearby Service Set Identifiers (SSIDs). It might also search for an Extended Service Set Identifier (ESSID). If you are operating in a building where there are many ESSIDs, you might experience interference, especially when one of them has a stronger broadcast than your own.
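A quick survey shows you what your clients are seeing. On a Windows laptop, for instance, the built-in netsh tool lists every visible network along with its signal strength and channel:
rem List visible wireless networks, including per-access-point signal and channel
netsh wlan show networks mode=bssid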
● Frequency issues
Each channel determines the frequency that the wireless devices must use. However, some devices give you the freedom to set a specific frequency manually. If you choose to configure the frequency manually, always remember to do the same for all the devices on the network; if you do not, the devices will not communicate. If you have many devices to add to the network, it is always safer to use the default setting.
● Distance
The distance problem arises when clients are too far from the access point. One solution is to move the antenna or router as close to the clients as possible. On the other hand, if you own a device with a powerful signal, you have to think about how far that broadcast reaches, because you might be exposing the network to unwarranted access.
● Antenna placement
The best placement for a wireless antenna is at the center of the wireless network, or as close to it as you can manage. If this is not possible, you can also set an antenna farther from the network and connect a cable to it. Poor antenna placement translates to poor network performance, and in some cases you might not have network access at all.
● Bounce
Bounce is common in a wireless network that transmits signals over a wide area. To ensure that everyone has proper access to the network, it is advisable to install reflectors or repeaters to boost the signal. However, you should only do this if you can control where the signal reaches; otherwise, you will end up creating a sprawling network that is difficult to manage and more susceptible to hacks.
Procedure for troubleshooting a network
Having looked at all the possible ways of troubleshooting a network issue,
the following are the appropriate steps you should follow:
1. Gathering information
You cannot solve a problem without knowing what it is all about. Collect as much information as you can about the network problem. How long has the problem persisted? What challenges are users experiencing? Which part of the internet or network are they unable to access? Ask all the questions; even those that seem insignificant might point you in the right direction.
2. Identify the affected sectors
Whenever there is a network glitch, someone somewhere will be unable to
do their work. Investigate to know who is affected and how. During this
process, if someone comes to you with a network problem, at times, it might
help to have them walk you through it from the beginning. You might just
realize the cause of the problem in an instant.
3. Scan for recent changes
If you walk through the problem with the user and manage to recreate it as they described, you can then look for any changes that might have taken place on the network in light of their recent activities. Take note of the error messages displayed, as they might also help you diagnose and solve the problem.
4. Hypothesize the possible causes
Hypothesizing is about listing the possible causes of the problem and then narrowing them down to the right one. In some cases, you might determine the cause immediately. In severe cases, however, a network problem is a culmination of many other minor problems, so having a list of possible causes helps you diagnose and fix all the small ones as you work up to the larger issue.
5. Does the problem warrant escalation?
While you should be able to fix most networking problems on your own, some situations will be out of your hands and require you to escalate the issue to someone with more experience in dealing with them. The sooner you realize you cannot handle the problem on your own and escalate it, the better, because you can bring in the experts in record time.
6. Come up with a plan of action
Having figured out the problem, communicate to the affected party that you are sorting it out. If necessary, walk them through the process. Each solution should have an immediate or expected effect. Be clear in describing it, so that the user can alert you if they notice something different from the baseline performance after you fix the problem.
7. Monitor the results
One of the biggest mistakes network administrators make is solving a problem and then assuming that everything else is okay. Every fix can have a domino effect on something else in the network; at times, by solving one problem you might end up creating a more significant one. This is why you need to study the results to ensure that the rest of the network stays safe.
8. Documentation
If everything works just fine, remember to document the process and the solution. Earlier on, we looked at the importance of documentation; it will come in handy later when the same problem occurs somewhere else. In the documentation, include the possible conditions that might have caused the problem.
Remember to mention the software version in use. If you managed to reproduce the problem during testing, include this in the documentation. Mention all the solutions that you tried and their effects, highlighting why you opted out of those solutions. Present the final solution that worked, and why you chose it as your best option.
CHAPTER 5: ALL YOU NEED TO KNOW ABOUT OPERATIONAL PROCEDURES
Every day we go to work, we try our best to do what it takes to get the job done. As an IT professional, you have millions of facts crammed into your head about how different hardware components work and what software configuration settings work best for your systems. You know which servers or workstations give you constant trouble and which end users need more help than others. You are expected to be an expert in knowing what to do to keep computers and networks up and running smoothly. Even if you don't think about it overtly, you also need to be an expert on how to get things done. The how might not be first on your mind, but it should be integrated into your everyday work processes. Operational procedures define the 'how', and they guide you on the proper way to get various tasks accomplished.
In this chapter, we will start by talking about safety, which includes your own safety, the safety of your co-workers, and environmental concerns. Observing proper safety procedures can help prevent injury to yourself and others.
Our discussion about the environment will be two-sided. The environment affects computers through things like dust, sunlight, and water, but computers can also pose potential harm to the environment. We will consider both sides as we move through this chapter. We will also cover some legal aspects of operational procedures, including licensing of software, protection of personally identifiable information, and incident response.
In the final part of this chapter, we'll switch to discussing professionalism and communication and focus on topics that you need to know for your exam study. Applying the skills learned here will help you pass the exam, but on a more practical level, it will help you become a better technician and possibly advance your career.
Understanding Safety Procedures
The proliferation of computers in today's society has created lots of jobs for technicians. Presumably, that's why you're reading this book: you want to get your CompTIA A+ certification. Many others who don't fix computers professionally enjoy working on them as a hobby. In times past, only expert users tried to crack the case on a computer; frequently, repairing the system meant using a soldering iron. Today, because of cheap parts, computer repair is not quite as involved. No matter your skills or intent, if you're going to be inside a computer, you always need to be aware of safety issues.
There's no point in getting yourself hurt or killed. As a provider of a hands-on service, repairing, maintaining, or upgrading someone's computer, you need to be aware of some general safety tips, because if you're not careful, you'll harm yourself or the equipment. Clients expect you to solve their problems, not make them worse by injuring yourself or those around you. In the following sections, we'll talk about identifying safety hazards and creating a safe working environment.
Identifying Potential Safety Hazards
Almost anything, both human-made and environmental, can be a potential safety hazard when you're working with and around computers. Perhaps the most critical aspect of computers you should be aware of is that not only do they use electricity, but they also store electrical charge after they're turned off. This makes the power supply and the monitor off-limits to anyone except a repairperson explicitly trained for those devices. Also, the computer's processor, and parts of a printer, run at extremely high temperatures, and you can get burned if you try to handle them immediately after they've been in operation.
Those are the two major safety concerns. There are lots more. When discussing safety issues concerning PCs, let's break them down into four general areas:
Computer components
Electrostatic discharge
Electromagnetic interference
Natural elements
Computer Components
As mentioned earlier, computers use electricity, and as you're probably aware, electricity can hurt or kill you. The first safety rule when working inside a computer is to make sure that it's turned off. If you have to open the computer to inspect or replace parts, be sure to turn off the machine before you begin. Leaving it plugged in is usually fine, and many times it is preferred, because it grounds the equipment and can help prevent electrostatic discharge.
The power-off rule has one exception: you don't have to turn off the computer when working with hot-swappable parts, which are designed to be unplugged and plugged back in while the computer is on. Most of these components have an external interface, so you don't need to crack the computer case.
The Power Supply
Do not take the issue of safety and electricity lightly. Removing the power supply from its external casing can be dangerous. The current flowing through the power supply normally follows a complete circuit; when your body breaks that circuit, your body becomes part of it. Getting inside the power supply is the most dangerous thing you can do as an untrained technician.
The biggest dangers with power supplies are burning or electrocuting yourself, and these risks go hand in hand. If you touch a bare wire that is carrying current, you could be electrocuted. A large enough current passing through you will cause severe burns. It can also cause your heart to stop, your muscles to seize, and your brain to stop functioning correctly. It can kill you. Electricity always finds the best path to ground, and because the human body is essentially a bag of saltwater (a good conductor of electricity), the current will use you as a conductor if you are grounded.
Although it is possible to open a power supply to work on it, doing so is not advisable. Power supplies contain lots of capacitors that can hold lethal charges long after they have been unplugged! It is undoubtedly dangerous to open the case of a power supply. Besides, power supplies are cheap; it would probably cost less to replace one than to try to fix it, and replacement is much safer.
In the late 1990s, a few mass computer manufacturers experimented with using open power supplies in their computers to save money. We don't know if any deaths occurred because of such incompetence, but it was a terrible idea.
The Monitor
Other than the power supply, the most dangerous component to try to repair is a computer monitor, especially an older-style monitor. It is best to stay away from repairing monitors altogether, to avoid the hazardous high-voltage environment inside: a monitor can retain a high-voltage charge for hours after it has been turned off. It is better to take it to a certified monitor technician or television repair shop. The repair shop or certified technician will know the right procedures for discharging the monitor, which involve attaching a resistor to the flyback transformer's charging capacitor to release the high-voltage charge that builds up during use. They will also be able to determine whether the monitor can be repaired or whether it needs to be replaced. Remember, the monitor works in its own highly protected environment (the monitor case) and will not respond well to your attempts to open it.
Although repairing monitors isn't recommended, the A+ exam may test your knowledge of the safety practices to use if you ever need to fix a monitor yourself. If you have any reason to open a monitor, you must first discharge the high-voltage charge on it using a high-voltage probe. This probe has a large needle, a gauge that indicates volts, and a wire with an alligator clip. Attach the alligator clip to ground. Put the probe needle under the high-voltage cup on the monitor. You will see the gauge spike to around 15,000 volts and gradually reduce to zero. When it reaches zero, you can remove the high-voltage probe and proceed with the repair.
The Case
One component that people regularly overlook is the case. Cases are made of metal, and some computer cases have very sharp edges inside, so you need to be careful when handling them. You can cut yourself by jamming your fingers between the case and the frame when you try to force the case back on. Another dangerous area is the drive bays. Numerous technicians have scraped or cut their hands on drive bays while trying in vain to plug a drive cable into the motherboard. Particularly sharp edges can be covered with duct tape; just make sure you're covering only metal and nothing with electrical components on it.
The Printer
If you've ever attempted to repair a printer, you might have thought that there was a little monster in there hiding all of the screws from you. Besides missing screws, here are some things to watch out for when repairing printers:
When handling a toner cartridge from an electrostatic (EP) or page printer, do not turn it upside down. You will find yourself spending more time cleaning the printer and the surrounding area than fixing the printer.
Don't put any objects into the feeding system in an attempt to clear the path while the printer is running.
Laser printers generate a laser that is hazardous to your eyes. Do not look directly into the source of the laser.
If it's an inkjet printer, do not try to blow into the ink cartridge to clear a clogged opening (that is, unless you like the taste of ink).
Some parts of an electrostatic printer will be damaged if you touch them. Your skin produces oils and has a small surface layer of dead skin cells. These substances can collect on the fragile surface of the EP cartridge and cause malfunctions. Bottom line: keep your fingers out of places where they don't belong!
Laser printers use a high-voltage power source to charge internal components, which can cause severe injuries. Laser printers can also get extremely hot, so don't burn yourself on internal components.
Using an egg carton is a good way to store and keep track of the screws that you remove from a device while you're working on it.
When working with printers, you need to follow some simple guidelines. If there's a messed-up setting, paper jam, or ink or toner problem, we will fix it. If it's something other than that, we call a certified printer repair person. The inner workings of printers can get pretty complicated, and it's best to call someone trained to make those types of repairs.
Electrostatic Discharge
So far, we've discussed how electricity can hurt people, but it can also pose safety issues for computer components. One of the biggest concerns for components is electrostatic discharge (ESD). For the most part, ESD won't do severe damage to a person beyond a little shock. But even small amounts of ESD can cause severe damage to computer components, damage that can show up as computers hanging, rebooting, or failing altogether. ESD happens when two objects of dissimilar charge come into contact with one another. The two objects exchange electrons in order to equalize the electrostatic charge between them, and this charge can, and often does, damage electronic components.
CONCLUSION
You are embarking on a journey that will take you far and change your life. The lessons you learn in this book will carry you a long way in your career as a networking expert. Once you are done reading this book, set aside some time and think about everything you have read. Each chapter offers useful information and pointers that will guide you.
One of the essential things you need in networking is a practice lab, or at least a computer on which you can try out some of the lessons you learn in this book. The world of networking keeps advancing and developing over time, and some conventions might change in a few years. With this in mind, make sure you have access to some practice material to help you stay abreast of networking technologies.
If you have been in the corporate space for a long time, you will have noticed that hiring managers today focus more on applied skill than on paper qualifications. You might have excellent credentials, but if you are unable to apply the knowledge you have learned and solve problems for the manager, they will not see the benefit of hiring you for the job.
There is so much you can learn about networks and how to manage them
effectively. At the moment, network security is one of the biggest concerns
that a lot of organizations grapple with. You are expected to know how to
deal with this. When hired, the decision-makers in your organization believe
that you have what it takes to protect and safeguard their network resources.
The beauty of computing today is that there is so much evolution taking
place. Things change so fast, yet somehow they remain the same. With in-
depth knowledge of CompTIA Network+, you learn valuable lessons that will
help you advance and evolve with technological advances as they happen.
CompTIA Network+ prepares you not just by teaching you the necessary
information you need to pass the exams, but also by showing you the hands-
on approach to solving problems.