Solution of TNM Paper(2016)
Answer:- FCAPS is a framework and model for network management. The term itself is an
acronym that stands for Fault, Configuration, Accounting, Performance and Security.
Fault Management: The goals and objectives include early fault recognition, isolation of
negative effects, fault correction and logging of the corrections to assist in
improvement. The network operator must ensure that (usually automatic) fault
notification is followed by rapid manual or monitored automatic activity to identify,
evaluate and correct the fault in a timely manner.
Configuration Management: This involves collecting and storing configuration data,
preferably in easily accessible databases, simplifying configuration procedures for
each network device, logging configuration changes, and provisioning transmission
paths through non-switched networks.
Accounting Management: Also called billing management, this involves collecting such
network user data as link utilization, disk drive or data storage usage, and CPU
processing time.
Performance Management: In view of investments made to set up the network, this
examines and monitors the current network efficiency and plans ahead for future
changes or upgrades. While constantly monitoring the health of the network and
searching for trends, network parameters are tracked and logged; these include data
transmission rate (throughput), error rates, downtime/uptime, use-time percentages
and response time to user and automated inputs or requests.
Security Management: This is mostly concerned with authenticated and authorized
access to the network as well as encryption of data, i.e. controlling all access and
securing all data.
Answer:- The Open Systems Interconnection (OSI) model has seven layers. They are described
below, beginning with the 'lowest' in the hierarchy (the physical layer) and proceeding to
the 'highest' (the application layer). The layers are stacked this way:
Application
Presentation
Session
Transport
Network
Data Link
Physical
PHYSICAL LAYER:-The physical layer, the lowest layer of the OSI model, is concerned with the
transmission and reception of the unstructured raw bit stream over a physical medium. It
describes the electrical/optical, mechanical, and functional interfaces to the physical medium,
and carries the signals for all of the higher layers. It provides:
Data encoding: modifies the simple digital signal pattern (1s and 0s) used by the PC to better
accommodate the characteristics of the physical medium, and to aid in bit and frame
synchronization.
Physical medium attachment: determines, for example, how many pins the connectors have
and what each pin is used for.
Transmission technique: determines whether the encoded bits will be transmitted by baseband
(digital) or broadband (analog) signaling.
Physical medium transmission: transmits bits as electrical or optical signals appropriate for the
physical medium, and determines, for example, how many volts/dB should be used to
represent a given signal state on a given physical medium.
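
For illustration, a minimal Python sketch of one common data encoding scheme, Manchester coding, in which the mid-bit transition provides the bit synchronization mentioned above (the choice of scheme here is illustrative):

# Minimal sketch: Manchester line coding. Each bit becomes a pair of
# signal levels with a mid-bit transition (IEEE 802.3 convention:
# 0 -> high-low, 1 -> low-high), which lets the receiver stay in sync.

def manchester_encode(bits):
    symbols = []
    for b in bits:
        symbols.extend((0, 1) if b else (1, 0))
    return symbols

def manchester_decode(symbols):
    bits = []
    for first, second in zip(symbols[::2], symbols[1::2]):
        assert first != second, "missing mid-bit transition"
        bits.append(1 if (first, second) == (0, 1) else 0)
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data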
DATA LINK LAYER:-The data link layer provides error-free transfer of data frames from one
node to another over the physical layer, allowing layers above it to assume virtually error-free
transmission over the link. To do this, the data link layer provides:
Link establishment and termination: establishes and terminates the logical link between two
nodes.
Frame traffic control: tells the transmitting node to "back-off" when no frame buffers are
available.
Media access management: determines when the node "has the right" to use the physical
medium.
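
For illustration, a minimal Python sketch of link-level error detection: a CRC-32 frame check sequence is appended on transmit and verified on receive (real links, e.g. Ethernet, compute the CRC in hardware, but the idea is the same):

import zlib

# Append a CRC-32 frame check sequence (FCS) on transmit; verify on receive.
def frame(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def check(received: bytes) -> bytes:
    payload, fcs = received[:-4], received[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        raise ValueError("frame corrupted; request retransmission")
    return payload

good = frame(b"hello")
assert check(good) == b"hello"
bad = bytes([good[0] ^ 0x01]) + good[1:]   # flip one bit in transit
try:
    check(bad)
except ValueError as e:
    print(e)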
NETWORK LAYER:-The network layer controls the operation of the subnet, deciding which
physical path the data should take based on network conditions, priority of service, and other
factors. It provides:
Subnet traffic control: routers (network layer intermediate systems) can instruct a sending
station to "throttle back" its frame transmission when the router's buffer fills up.
Subnet usage accounting: has accounting functions to keep track of frames forwarded by
subnet intermediate systems, to produce billing information.
Communications Subnet
The network layer software must build headers so that the network layer software residing in
the subnet intermediate systems can recognize them and use them to route data to the
destination address.
This layer relieves the upper layers of the need to know anything about the data transmission
and intermediate switching technologies used to connect systems. It establishes, maintains and
terminates connections across the intervening communications facility (one or several
intermediate systems in the communication subnet).
In the network layer and the layers below, peer protocols exist between a node and its
immediate neighbor, but the neighbor may be a node through which data is routed, not the
destination station. The source and destination stations may be separated by many
intermediate systems.
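
For illustration, a minimal Python sketch of path selection through intermediate systems, choosing a route by fewest hops (real routers also weigh network conditions and priority of service, as described above); the topology is hypothetical:

from collections import deque

# Breadth-first search over an adjacency map finds the fewest-hop path.
def shortest_path(topology, src, dst):
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in topology[node]:
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None

topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(shortest_path(topology, "A", "D"))   # e.g. ['A', 'B', 'D']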
TRANSPORT LAYER:-The transport layer ensures that messages are delivered error-free, in
sequence, and with no losses or duplications. It relieves the higher layer protocols from any
concern with the transfer of data between them and their peers.
The size and complexity of a transport protocol depends on the type of service it can get from
the network layer. For a reliable network layer with virtual circuit capability, a minimal
transport layer is required. If the network layer is unreliable and/or supports only datagrams,
the transport protocol should include extensive error detection and recovery.
Message segmentation: accepts a message from the (session) layer above it, splits the message
into smaller units (if not already small enough), and passes the smaller units down to the
network layer. The transport layer at the destination station reassembles the message.
Message traffic control: tells the transmitting station to "back-off" when no message buffers are
available.
Session multiplexing: multiplexes several message streams, or sessions onto one logical link and
keeps track of which messages belong to which sessions (see session layer).
Typically, the transport layer can accept relatively large messages, but there are strict message
size limits imposed by the network (or lower) layer. Consequently, the transport layer must
break up the messages into smaller units, or frames, prepending a header to each frame.
The transport layer header information must then include control information, such as message
start and message end flags, to enable the transport layer on the other end to recognize
message boundaries. In addition, if the lower layers do not maintain sequence, the transport
header must contain sequence information to enable the transport layer on the receiving end
to get the pieces back together in the right order before handing the received message up to
the layer above.
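
For illustration, a minimal Python sketch of message segmentation and reassembly using sequence numbers and an end flag; the segment size limit is a placeholder:

# Split a message into numbered segments, deliver them out of order,
# and use the sequence numbers plus an end flag to rebuild the message.
MAX_SEGMENT = 4   # hypothetical size limit imposed by the lower layers

def segment(message: bytes):
    chunks = [message[i:i + MAX_SEGMENT] for i in range(0, len(message), MAX_SEGMENT)]
    return [
        {"seq": n, "last": n == len(chunks) - 1, "data": chunk}
        for n, chunk in enumerate(chunks)
    ]

def reassemble(segments):
    ordered = sorted(segments, key=lambda s: s["seq"])
    assert ordered[-1]["last"] and len(ordered) == ordered[-1]["seq"] + 1
    return b"".join(s["data"] for s in ordered)

segs = segment(b"hello transport layer")
segs.reverse()   # simulate out-of-order arrival
assert reassemble(segs) == b"hello transport layer"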
End-to-end layers
Unlike the lower "subnet" layers whose protocol is between immediately adjacent nodes, the
transport layer and the layers above are true "source to destination" or end-to-end layers, and
are not concerned with the details of the underlying communications facility. Transport layer
software (and software above it) on the source station carries on a conversation with similar
software on the destination station by using message headers and control messages.
SESSION LAYER:-The session layer allows session establishment between processes running on
different stations. It provides:
Session support: performs the functions that allow these processes to communicate over the
network, performing security, name recognition, logging, and so on.
Data compression: reduces the number of bits that need to be transmitted on the network.
Data encryption: encrypts data for security purposes, for example password encryption.
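
For illustration, a minimal Python sketch of the data compression service, using the standard zlib module to show the reduction in bits on the wire:

import zlib

# Compress before transmission; the receiver decompresses to recover
# the original data exactly. A cipher would be applied the same way.
text = b"aaaa bbbb aaaa bbbb aaaa bbbb" * 10
compressed = zlib.compress(text)
print(len(text), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == text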
APPLICATION LAYER:-The application layer serves as the window for users and application
processes to access network services. This layer contains a variety of commonly needed
functions:
Inter-process communication
Network management
Directory services
Answer:- Several correlation techniques are used to isolate and localize faults in networks. All
are based on (1) Detecting and filtering of events (2) Correlating observed events to isolate and
localize the fault either topologically or functionally (3) Identifying the cause of the problem. In
all three cases, different reasoning methods distinguish one technique from another.
Six approaches to correlation techniques:
(1) Rule-based reasoning (2) Model-based reasoning (3) Case-based reasoning (4) Codebook (5)
State transition graph model (6) Finite state machine model
State transition graph model:- A state transition graph model is used by the Seagate
NerveCenter correlation system. It can be used as a stand-alone system or integrated with an
NMS, as HP OpenView and some other vendors have done.
A simple state diagram with two states for a ping/response process is shown in the figure. The
two states are ping mode and receive response. When the NMS sends a ping, it transitions
from the ping mode state to the receive response state. When it receives a response, it
transitions back to the ping mode state. This method is how the health of all the components
is monitored by the NMS.
[Figure: two-state transition diagram; the states are ping mode and receive response, with a
ping transition in one direction and a response transition back]
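
For illustration, a minimal Python sketch of the two-state ping/response model above; the ping invocation assumes a Unix-style ping command:

import subprocess

class PingMonitor:
    """Two-state machine: 'ping mode' <-> 'receive response'."""
    def __init__(self, host):
        self.host = host
        self.state = "ping mode"

    def poll(self):
        self.state = "receive response"            # ping sent, await reply
        reachable = subprocess.run(
            ["ping", "-c", "1", self.host],        # use "-n 1" on Windows
            capture_output=True,
        ).returncode == 0
        if reachable:
            self.state = "ping mode"               # response received
        return reachable

monitor = PingMonitor("127.0.0.1")
print(monitor.poll(), monitor.state)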
Answer:- In recent years the role of the IT department has gradually moved from a support
role into that of a business driver. As technology continues to evolve at a furious pace, it
presents both opportunities and challenges. So what are the biggest issues facing IT managers
and business owners over the coming year?
Big data and the Internet of Things (IoT): being able to capture large amounts of data is
changing the way we do business, but data in itself is worth nothing if it cannot be mined and
used to improve processes and create new revenue streams. An increasing amount of data is
generated via IoT, which sees devices and inanimate objects directly communicating via the
internet. Industry experts at Gartner predict we will see 25 billion things connected to the
internet by 2020. While IoT is still a relatively young arena, it is set to transform some
businesses and could potentially be the demise of others. Business owners need to recognize that as big data drives
storage, hardware and network infrastructure developments, it is the underpinning technology
solution and communications ‘plumbing’ that will really influence its success.
The IT skills gap: As IT becomes more complex the skills shortage becomes more acute. The
issue was debated in Parliament last year, and an estimated 45% of UK businesses are dealing
with a shortage of IT talent that is set to harm innovation. Rather than trying to maintain high
levels of expertise in-house, one option is to switch to the benefits of tailored managed services
or outsourcing models provided by an expert IT partner. Bringing in a third party to handle
some – or all – of their IT requirements keeps the client in the driving seat and resolves the
issue of day to day skills capability as well as allowing them to tap into high level strategic
expertise as and when needed.
Improving management overview: the age-old problem of having access to real time
management information will continue to rest with the IT department, meaning integrating
disparate systems and data from cloud and in-house systems will remain at the top of the IT
manager’s agenda. A full ‘from the inside out’ review of infrastructure and systems to assess
whether the tail is actually wagging the dog may sound daunting but will pay dividends in the
long term. What worked for you five years ago may no longer be appropriate, as your business
evolves.
Establishing anytime, anywhere access: As boundaries between work and home continue to
blur, many of us expect to be ‘always connected’. This has prompted a huge rise in bring your
own device (BYOD), where staff use their own mobile devices such as laptops, tablets and smart
phones to access company data and applications and presents an ongoing challenge to
businesses trying to balance accessibility and potential cost savings with security and control. IT
managers need to work with HR and operations teams to make sure the company and its staff
understand the risks, responsibilities and obligations around how BYOD is implemented and
used.
Tightened security: even with all the security tools at the IT manager’s disposal, data and
security breaches are a fact of business life and will continue to make the headlines in 2016.
The IT industry has seen a recent explosion in tools that detect and deal with disruptions,
such as the SolarWinds N-able network monitoring platform. Combining regular
and comprehensive preventative maintenance with real time monitoring of critical network and
desktop devices can remove a major headache for IT managers and ensure network reliability
and stability.
Shrinking budgets: the continuous push to do more with less puts IT managers under more
pressure than ever before to deliver. Step back and consider your IT requirements like a clean
sheet – encourage a strategic, creative approach to IT issues, review the potential benefits of
unified communications and cloud services to switch from capital expenditure to operating
expenditure models, and bring the right partner in for advice and support.
You might be pleasantly surprised by how fresh thinking and a new approach to your IT can not
only help you solve current and future challenges, but also deliver big benefits across the
business.
Answer:- Several network management standards are in use today. Table 1 lists four standards
and their salient points, and a fifth standard based on emerging technology. They are the OSI
model, the Internet model, TMN, IEEE LAN/MAN, and Web-based management.
i. The Open System Interconnection (OSI) management standard is the standard adopted by the
International Standards Organization (ISO).
ii. The OSI management protocol standard is the Common Management Information Protocol
(CMIP); its built-in services, the Common Management Information Service (CMIS), specify
the basic services needed to perform the various functions.
iii. It is the most comprehensive set of specifications, and addresses all seven layers of the OSI
Reference Model. The specifications are object-oriented and hence managed objects are based
on object classes and inheritance rules.
Table 1. Network management standards

Standard: OSI/CMIP
Salient points: International standard; management of data communication networks, both
LAN and WAN; deals with all 7 layers; most complete; object oriented.

Standard: SNMP/Internet
Salient points: Industry standard (IETF); originally intended for management of Internet
components; based on the OSI network management framework; addresses both network and
administrative aspects of management.

Comparison of SNMP versions (SNMPv1 / SNMPv2 / SNMPv3):

Version:
SNMPv1 was the first version of SNMP.
SNMPv2 currently exists in at least three flavors: SNMPv2c, SNMPv2u, and SNMPv2.
SNMPv3 is the newest version of SNMP.

Protocol operations:
SNMPv1: simple request/response protocol; protocol operations are Get, GetNext, Set, and Trap.
SNMPv2: keeps the Get, GetNext, and Set operations; changes the Trap message format; adds
the new protocol operations GetBulk and Inform.
SNMPv3: uses the SNMPv2 protocol operations and its PDU message format.

Security:
SNMPv1: no security from someone with access to the network.
SNMPv2: failed to improve on security.
SNMPv3: its primary feature is enhanced security.

Complexity:
SNMPv1: performance and security limitations.
SNMPv2: more powerful but more complex than SNMPv1.
SNMPv3: focuses on improving the security aspect.

Susceptible to packet injection attacks: SNMPv1 yes; SNMPv2 no; SNMPv3 no.
Susceptible to replay attacks: SNMPv1 yes; SNMPv2 no; SNMPv3 no.
Susceptible to sniffing of session keys: SNMPv1 yes; SNMPv2 no; SNMPv3 no.
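
For illustration, a sketch of the Get operation shared by all three versions, assuming the third-party pysnmp library is installed; the agent address, community string, and OID below are placeholders:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=0),        # mpModel=0 selects SNMPv1
    UdpTransportTarget(("192.0.2.1", 161)),    # placeholder agent address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if errorIndication:
    print(errorIndication)                     # e.g. request timed out
elif errorStatus:
    print(errorStatus.prettyPrint())           # agent-reported error
else:
    for varBind in varBinds:
        print(varBind)                         # OID = value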
i) Rule-based reasoning (RBR) is the earliest form of correlation technique. It is also known by
many other names, including rule-based expert system, expert system, production system, and
blackboard system.
ii) It has a working memory, an inference engine and a knowledge base. The three levels
representing the three components are the data level, control level, and knowledge level,
respectively.
iii) The knowledge base contains expert knowledge as to (1) definition of a problem in the
network and (2) action that needs to be taken if a particular condition occurs.
iv) The knowledge base information is rule-based in the form of if-then or condition-action,
containing rules that indicate which operations are to be performed when.
v) The working memory contains, as working memory elements, the topological and state
information of the network being monitored.
vi) The working memory recognizes when the network goes into a faulty state.
vii) The inference engine, in cooperation with the knowledge base, compares the current state
with the condition (left) side of the rules in the rule base and finds the closest match; the
action on the right side of the matching rule is then executed on the working memory element.
viii) As shown in Figure 1, the rule-based paradigm is interactive among the three components
and is iterative. Several strategies are available for the rule-based paradigm.
ix) A specific strategy is implemented in the inference engine. When a specific rule has been
chosen, an action is performed on the working memory element, which can then initiate
another event. This process continues until the correct state is achieved in the working
memory.
x) Rules are established in the knowledge base from the expertise of people in the field. The
rule is an exact match and the action is very specific.
xi) If the current state does not exactly match the antecedent of any rule, the paradigm breaks
down; this behavior is called brittleness. It can be fixed by adding more rules, but doing so
increases the database size and degrades performance, leading to what is called the knowledge
acquisition bottleneck.
xii) As the number of working memory elements grows, memory requirements grow
exponentially. In addition, the action is specific, which can cause unwanted behavior.
xiii) For example, we can define the alarm condition for packet loss as follows (a code sketch
follows item (xv) below):
If packet loss < 10%, alarm green
If 10% <= packet loss < 15%, alarm yellow
If packet loss >= 15%, alarm red
xiv) The left side conditions are the working memory elements, which if detected would
execute the appropriate rule defined in the rule-base
xv) This action could cause the alarm condition to flip back and forth in boundary cases. An
application of fuzzy logic is used to remedy this problem, but it is difficult to implement.
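
For illustration, a minimal Python sketch of the packet-loss rule base from item (xiii): the inference step scans the condition side of each rule and fires the action of the first match:

# Each rule pairs a condition (left side) with an action (right side).
RULES = [
    (lambda loss: loss < 10.0,         "alarm green"),
    (lambda loss: 10.0 <= loss < 15.0, "alarm yellow"),
    (lambda loss: loss >= 15.0,        "alarm red"),
]

def infer(packet_loss_percent):
    for condition, action in RULES:
        if condition(packet_loss_percent):
            return action

for loss in (3.0, 12.5, 20.0):
    print(loss, "->", infer(loss))

A loss rate oscillating around a boundary such as 10% would toggle the alarm between green and yellow on successive evaluations, which is the flip-flop problem noted in item (xv).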
Model-based reasoning:- In artificial intelligence, model-based reasoning refers to an inference
method used in expert systems based on a model of the physical world. With this approach, the
main focus of application development is developing the model. Then, at run time, an "engine"
combines this model knowledge with observed data to derive conclusions such as a diagnosis
or a prediction.
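
For illustration, a minimal Python sketch of model-based reasoning applied to fault diagnosis: a model of the physical world (here, which node hangs off which hub) is combined with observed data to infer a cause; all names are hypothetical:

# Model of the physical topology: hub -> attached nodes.
MODEL = {"hub1": ["nodeA", "nodeB"], "hub2": ["nodeC", "nodeD"]}

def diagnose(unreachable):
    # If every node behind a hub is down, suspect the hub, not the nodes.
    for hub, nodes in MODEL.items():
        if all(n in unreachable for n in nodes):
            return f"suspect {hub} (all attached nodes unreachable)"
    return f"suspect individual nodes: {sorted(unreachable)}"

print(diagnose({"nodeA", "nodeB"}))   # -> suspect hub1 ...
print(diagnose({"nodeC"}))            # -> suspect individual nodes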
Case-based reasoning:- Case-based reasoning, broadly construed, is the process of solving new
problems based on the solutions of similar past problems. An auto mechanic who fixes an
engine by recalling another car that exhibited similar symptoms is using case-based reasoning.
A lawyer who advocates a particular outcome in a trial based on legal precedents, or a judge
who creates case law, is using case-based reasoning. So, too, an engineer copying working elements
of nature (practicing biomimicry), is treating nature as a database of solutions to problems.
Case-based reasoning is a prominent kind of analogy making.
It has been argued that case-based reasoning is not only a powerful method for computer
reasoning, but also a pervasive behavior in everyday human problem solving; or, more radically,
that all reasoning is based on past cases personally experienced. This view is related
to prototype theory, which is most deeply explored in cognitive science.
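
For illustration, a minimal Python sketch of case-based retrieval: the stored case whose symptoms best overlap the new problem supplies the solution; the case base is hypothetical:

# Each case pairs a set of observed symptoms with the fix that worked.
CASE_BASE = [
    ({"high packet loss", "link flapping"}, "replace faulty cable"),
    ({"high CPU", "slow response"},         "upgrade router memory"),
    ({"high packet loss", "high CPU"},      "reroute traffic"),
]

def solve(symptoms):
    # Retrieve the past case with the largest symptom overlap.
    best = max(CASE_BASE, key=lambda case: len(case[0] & symptoms))
    return best[1]

print(solve({"high packet loss", "link flapping", "timeouts"}))
# -> replace faulty cable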
Codebook:- The codebook technique treats event correlation as a decoding problem. Each
problem that can occur in the network is associated with the set of symptoms (alarm events) it
causes, and this causal knowledge is captured in a correlation matrix, the codebook, in which
every problem is represented by a code: a vector indicating which symptoms it generates. The
codebook is derived from a causality model of the network and can be reduced to the minimal
set of symptoms needed to distinguish the problems, which keeps correlation fast.
At run time, the vector of observed alarms is compared with the codes in the codebook, and
the problem whose code is closest (for example, by minimum Hamming distance) is reported
as the cause. Because the decoding tolerates a few lost or spurious alarms, the technique is
robust to noise in the event stream. This approach was developed by Yemini et al. and is used
in the SMARTS InCharge correlation system.
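
For illustration, a minimal Python sketch of codebook decoding by Hamming distance; the problems, alarms, and codes are hypothetical:

# Codebook: each problem's code is the vector of symptoms it causes
# (columns: alarm1..alarm4).
CODEBOOK = {
    "link failure":   [1, 1, 0, 0],
    "router failure": [1, 1, 1, 1],
    "server crash":   [0, 0, 0, 1],
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(observed):
    return min(CODEBOOK, key=lambda p: hamming(CODEBOOK[p], observed))

# One of router failure's alarms was lost in transit, but decoding
# still finds the nearest code.
print(decode([1, 0, 1, 1]))   # -> router failure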
State transition graph model:- A state transition graph model is used by the Seagate
NerveCenter correlation system. It can be used as a stand-alone system or integrated with an
NMS, as HP OpenView and some other vendors have done.
A simple state diagram with two states for a ping/response process is shown in the figure. The
two states are ping mode and receive response. When an NMS sends a ping, it transitions from
the ping mode state to the receive response state. When it receives a response, it transitions
back to the ping mode state. This method is how the health of all the components is monitored
by the NMS.
[Table fragment] ILMI - private/public; service control: service activation, service assurance
(maintenance), usage metering (performance, billing); protocol: ILMI (SNMP-based).
i. The M1 interface is between an SNMP management system and an SNMP agent in an ATM
device, as shown in Figure2.
Answer:- Fault in a network is normally associated with failure of a network component and
subsequent loss of connectivity. Fault management involves a five-step process:
(1) Fault detection, (2) Fault location, (3) Restoration of service, (4) Identification of root cause
of the problem, and (5) Problem resolution.
i. The fault should be detected as quickly as possible by the centralized management system,
preferably before or at about the same time as when the users notice it.
ii. Fault location involves identifying where the problem is located. We distinguish this from
problem isolation, although in practice it could be the same.
iii. The reason for doing this is that it is important to restore service to the users as quickly as
possible, using alternative means.
iv. The restoration of service takes a higher priority over diagnosing the problem and fixing it.
v. Identification of the root cause of the problem could be a complex process, which is
examined in greater depth below.
vi. After identifying the source of the problem, a trouble ticket can be generated to resolve the
problem.
vii. In an automated network operations center, the trouble ticket could be generated
automatically by the NMS.
Fault Detection:-
i. Fault detection is accomplished using either a polling scheme (the NMS polling management
agents periodically for status) or by the generation of traps (management agents based on
information from the network elements sending unsolicited alarms to the NMS).
ii. An application program in the NMS generates the ping command periodically and waits for a
response. Connectivity is declared broken when a preset number of consecutive responses are
not received.
iii. The frequency of pinging and the preset number for failure detection may be optimized for
balance between traffic overhead and the rapidity with which failure is to be detected.
iv. The alternative detection scheme is to use traps. One of the advantages of traps is that
failure detection is accomplished faster with less traffic overhead.
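
For illustration, a minimal Python sketch of the polling scheme: each agent is pinged periodically and connectivity is declared broken after a preset number of consecutive missed responses; the hosts, threshold, and interval are placeholders to be tuned against traffic overhead:

import subprocess, time

HOSTS = ["192.0.2.10", "192.0.2.11"]   # placeholder agent addresses
FAILURE_THRESHOLD = 3                  # consecutive misses before fault
POLL_INTERVAL = 10                     # seconds between polling cycles
misses = {h: 0 for h in HOSTS}

def ping(host):
    return subprocess.run(["ping", "-c", "1", host],   # "-n 1" on Windows
                          capture_output=True).returncode == 0

for _ in range(FAILURE_THRESHOLD):     # a few polling cycles
    for host in HOSTS:
        misses[host] = 0 if ping(host) else misses[host] + 1
        if misses[host] >= FAILURE_THRESHOLD:
            print(f"FAULT: connectivity to {host} declared broken")
    time.sleep(POLL_INTERVAL)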
Fault Location and Isolation Techniques:-
i. A simple approach to fault location would be to detect all the network components that have
failed. The origin of the problem could then be traced by walking down the topology tree to
where the problem starts.
ii. Thus, if an interface card on a router has failed, all managed components connected to that
interface would indicate failure.
iii. After having located where the fault is, the next step is to isolate the fault (i.e. determine the
source of the problem).
iv. First, we should distinguish between failure of the component and failure of the physical
link. Thus, in the above example, the interface card may be functioning well, but the link to the
interface may be down. We need to use various diagnostic tools to isolate the cause.
v. Let us assume for the moment that the link is not the problem but that the interface card is.
We then proceed to isolate the problem to the layer that is causing it. It is possible that
excessive packet loss is causing disconnection.
vi. We can measure packet loss by pinging, if pinging can be used. We can query the various
Management Information Base (MIB) parameters on the node itself or other related nodes to
further localize the cause of the problem.
vii. For example, error rates calculated from the interface group parameters ifInDiscards,
ifInErrors, ifOutDiscards, and ifOutErrors, with respect to the input and output packet rates,
could help us isolate the problem to the interface card.
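
For illustration, a minimal Python sketch of the error-rate calculation from interface group counters; the counter values stand in for deltas between two polls:

# Error rate derived from MIB interface-group counters: errors and
# discards relative to total input packets point at a failing card.
def error_rate(if_in_errors, if_in_discards, if_in_ucast_pkts):
    total_in = if_in_ucast_pkts + if_in_errors + if_in_discards
    return (if_in_errors + if_in_discards) / total_in if total_in else 0.0

rate = error_rate(if_in_errors=120, if_in_discards=30, if_in_ucast_pkts=850)
print(f"input error rate: {rate:.1%}")   # 15.0% -> isolate this card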
Service Restoration:-
Service is restored to users as quickly as possible, if necessary by alternative means (for
example, switching to standby components or rerouting traffic), before the root cause is
diagnosed and fixed.
Root Cause Analysis:-
Root Cause Analysis (RCA) is a popular and often-used technique that helps people answer the
question of why the problem occurred in the first place. It seeks to identify the origin of a
problem using a specific set of steps, with associated tools, to find the primary cause of the
problem, so that you can determine what happened, why it happened, and how to reduce the
likelihood that it will happen again.
Problem Resolution:-
The problem is corrected by hardware and software techniques: managed objects are repaired
or replaced, and operations are returned to normal, indicating that the problem has been
solved.