
Kerit Ved

Ans 1: Microsoft, one of the biggest advocates of anti-piracy legislation, appears to be playing on both sides of the fence. Microsoft was apparently convicted of violating intellectual property laws (to use Microsoft's own term: piracy) in France. What is really surprising in this turn of events is that the conviction somehow escaped the notice of the press. You can read more about this here (see Microsoft convicted of software piracy).

In the US antitrust case, much to Microsoft's dismay, Judge Colleen Kollar-Kotelly decided to allow a late entry from the states that are continuing to dispute the remedies already agreed upon by Microsoft, the DOJ, and 9 of the original 18 states involved in the suit. Apparently a modular version of Windows is not only possible but already exists in test form. Judge Kollar-Kotelly has decided to admit this evidence, stating that it is information the court needs to have. Read more here (see Judge sets scene for battle over modular Windows).

by Jay Fougere / iEntry Staff Editor

Recent studies have shown that using Linux throughout your enterprise can lower IT costs (TCO, or total cost of ownership) for your organization by as much as 34%. In fact, because Linux does not require hardware upgrades nearly as frequently as proprietary systems, even more savings can theoretically be achieved over the long run.

These are the results of a study performed by Cybersource, an independent consulting agency (cyber.com.au). In fact, contrary to Microsoft's claim that the initial licensing cost amounts to only about 8% of the TCO of software (see Microsoft's letter to Congressman Villanueva), Cybersource shows in its study that over a three-year span (the average life cycle of proprietary operating systems and software), the initial licensing costs can make up as much as 28% of the cost of maintaining these systems if purchased with new hardware. That figure climbs to over 38% when existing hardware is used. These are not the trivial figures Microsoft would have you believe.
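
As a sanity check on those percentages, the arithmetic is simple enough to sketch. The dollar amounts below are hypothetical placeholders chosen only so the resulting shares land near the 28% and 38% figures quoted above; they are not taken from the Cybersource study itself.

# Rough sketch of how the licensing share of a three-year TCO is computed.
# Dollar amounts are hypothetical placeholders, not figures from the study.
def licensing_share(license_cost, hardware_cost, support_cost, staff_cost):
    """Return initial licensing cost as a fraction of total cost of ownership."""
    tco = license_cost + hardware_cost + support_cost + staff_cost
    return license_cost / tco

# Scenario A: licenses purchased along with new hardware.
share_new_hw = licensing_share(license_cost=28_000, hardware_cost=30_000,
                               support_cost=22_000, staff_cost=20_000)

# Scenario B: the same licenses deployed on existing hardware.
share_old_hw = licensing_share(license_cost=28_000, hardware_cost=0,
                               support_cost=22_000, staff_cost=22_000)

print(f"licensing share with new hardware:    {share_new_hw:.0%}")  # 28%
print(f"licensing share on existing hardware: {share_old_hw:.0%}")  # 39%, i.e. over 38%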

The whitepaper, available from Cybersource (see Linux_vs_Windows), is thorough and gives full explanations of how the results of this study are calculated.

Licensing is only one small part of the big picture when one is considering a major transition in infrastructure. Many other factors come into play, most of which are of greater concern to today's IT shops.

Security and Liability 


One issue that comes to mind is security, which has become a buzzword in the IT industry lately. First of all, let me say that no software is 100% secure, and comparing two different software products in terms of security is very difficult. No security professional can honestly say that one system is categorically more secure than another; the security of a system depends largely on how well it is configured after it has been installed. Even after the initial installation, patches and fixes will need to be applied as they become available.

Rather than try to compare apples and oranges with regard to security, you may instead want to have a closer look at liability. Many software vendors (Microsoft among them) would have you believe that because no single company owns Linux, there is no accountability in the case of a security breach, or any other system problem for that matter. What Microsoft neglects to tell you is that if you read their EULA (End User License Agreement; look at %SystemRoot%\System32\eula.txt on your Windows system), you will find that they are no more accountable for lost productivity, lost data, or any incidental or consequential damages than an Open Source vendor is. In fact, other than replacement of defective media, Microsoft claims no responsibility for anything that may happen as a result of using their software.
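
If you want to read that license text for yourself, the path above is conventionally %SystemRoot%\System32\eula.txt. The snippet below is a minimal sketch of locating and printing it; the exact file name and location vary between Windows versions, so treat it as illustrative rather than definitive.

# Minimal sketch: locate and print the Windows EULA text file mentioned above.
# Assumes the conventional %SystemRoot%\System32\eula.txt location, which
# varies between Windows versions.
import os

system_root = os.environ.get("SystemRoot", r"C:\Windows")
eula_path = os.path.join(system_root, "System32", "eula.txt")

if os.path.exists(eula_path):
    with open(eula_path, "r", errors="replace") as f:
        print(f.read())
else:
    print(f"No EULA file found at {eula_path}; it may live elsewhere on your version of Windows.")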

Support 
Support is an issue that many IT managers are (rightly) concerned about. Most distributions offer some installation support (usually 60 or 90 days by email), and will also offer enterprises pay-per-incident and pay-by-the-hour support. In fact, I recently read a very informative article by Network Computing that gave a very positive impression of the companies that offer Linux support. You can find that article here (see 'Team' Work Pays Off for Linux). It sounds as though Network Computing had much better luck with their Linux support issues than I had with Microsoft the two or three times that I called. (Just for the record, for those Windows zealots who claim that getting a lost CD replaced by Microsoft is a walk in the park: you must enjoy things like gargling with razor blades. Five phone calls to different numbers and two hours on the phone, and there was still no help to be found.)

Applications 
Another argument that I hear very often is that Linux has a limited number of applications. This depends entirely on what you intend to do with your computers. There are many applications now available for Linux that integrate Linux desktops into Windows networks. The first that comes to mind is Codeweaver's CrossOver plugin. This fine little piece of code allows Linux users to run Microsoft Office 97 and 2000 on Linux machines (you can find out more at Codeweaver's site; see Linux Users To Run MS Office & Lotus Notes Without Windows). You can also look to Ximian for their Ximian Connector, a product similar to Codeweaver's CrossOver plugin that allows Linux users to access a Microsoft Exchange server for email, calendaring, and so on (see Ximian Connector for Microsoft Exchange).

There are other ways of running Windows applications on Linux as well. The Wine project, although not yet perfect, has improved by leaps and bounds in the past year. Wine and its associated documentation can be found here (see WINE). If you need a more versatile version of Wine, Transgaming's Winex does a really nice job of running Windows applications. Winex is based on Wine (although Winex is proprietary) and was designed to allow gamers to play their Windows game releases on Linux. However, it is not limited to games, and it may be the answer for that one Windows application that you or your users cannot live without.
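
If you want to script that kind of fallback, Wine's usual command line is simply "wine program.exe". Below is a minimal Python wrapper sketch; it assumes the wine binary is installed and on the PATH, and the .exe path shown is a hypothetical example, not an application that ships with Wine.

# Minimal sketch: launch a Windows executable under Wine from a script.
# Assumes the `wine` binary is on the PATH; the .exe path below is a
# hypothetical placeholder.
import shutil
import subprocess
import sys

def run_under_wine(exe_path, *args):
    if shutil.which("wine") is None:
        sys.exit("wine not found on PATH; install Wine (or Winex) first.")
    # Wine is invoked as: wine program.exe [arguments...]
    return subprocess.run(["wine", exe_path, *args]).returncode

if __name__ == "__main__":
    run_under_wine("/opt/apps/legacy_payroll.exe")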

A directory service is often a necessity for large networks. If your business depends on directory services, the only option that I know of is Novell's eDirectory (which is neither free nor Open Source). The one really nice feature of eDirectory (in my eyes) is that it runs on several different platforms, greatly reducing the burden of integrating Linux and Windows workstations. This may be worth consideration. You can find out more at Novell's site (see Novell eDirectory 8.6.2).

If you have custom software that was written for Windows on which your business depends, Linux may not be a good choice for you. On the other hand, porting a custom application can often be much less expensive than the costs associated with a Microsoft upgrade (especially if hardware is involved). If you have developers working in-house, you can have them porting software to Linux and testing it long before you intend to migrate.

Migration From Windows to Linux 


Speaking of migration, one of the biggest arguments from the Windows camp is that the cost to migrate from Windows to Linux is prohibitive. In response, I would point out that those costs are not going to go down in the future, while Microsoft is going to keep forcing upgrades whose cost will probably far outweigh the cost of a migration. It seems to me that biting the bullet now can prevent you from being locked into a situation you can't get out of later. Also, there are many tools available that can assist you with your migration from Windows to Linux (see LSP: migrate from Windows NT to Linux).

Quality/Usability of Applications 
The quality of applications for Linux has improved dramatically over the past few years, so the argument that Open Source is simply not on par with commercial software is a blatant falsehood. If you don't believe me, would you believe Microsoft? An internal memo was leaked around the end of October 1998 that described the threat of Open Source software (OSS). In the memo, later dubbed "The Halloween Documents", Microsoft personnel claim that "Recent case studies (the Internet) provide very dramatic evidence in customer's eyes that commercial quality can be achieved / exceeded by OSS projects." and "Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are undertaking projects whose size & complexity had heretofore been the exclusive domain of commercial, economically-organized/motivated development teams. Examples include the Linux Operating System and Xfree86 GUI." You can find the original Halloween Documents with commentary here (see The Halloween Documents).

Now, I used Linux back in 1999, and it was nothing like the Linux that is available today in terms of graphical user environments and end-user usability. The KDE project, which originated in Germany, has recently released the third major version of KDE (the K Desktop Environment). The reason I mention this piece of software is that it is every bit as intuitive as Microsoft's offerings, and much better looking by many accounts. If you are worried about the learning curve involved in moving your employees from Windows to Linux, you should not have a difficult time getting users to adjust to this great desktop environment. I wrote an article discussing KDE3 in which I tried an experiment: I invited my good friend Peter Thiruselvam to try out a brand new KDE3 installation. Peter had never used Linux before, and what he had to say about it may quell some of your worries. See the article here (see Introducing KDE3).

If KDE3 is not up your alley, there is always Ximian's Gnome desktop. This is another easy-to-use desktop environment that will require very little training to bring your end users up to speed.

In Closing 
If you are seriously considering a transition to Linux, below are some links that will get you started. For ease of integration, I would suggest testing several different distributions to see which have the features you will rely on. For instance, Lycoris Linux (formerly Redmond Linux) has a very nice interface for browsing Windows networks set up by default. If ease of installation is an important factor, there are several distributions that I can easily recommend as being as easy as, or easier than, Windows products to install. Lycoris, SuSE, RedHat, and Mandrake come to mind immediately, although there are many others as well.

If rolling out large numbers of preconfigured desktops is important to you, RedHat may be a good choice. With its very easy-to-use KickStart configuration tool, setting up automated installations is a snap, whether you are installing from CD, FTP, or an NFS file share.
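
As a rough illustration of what "preconfigured" means in practice, the sketch below writes out a bare-bones KickStart file. The partition sizes, root password, and package group are hypothetical placeholders; real files should be built with Red Hat's own KickStart configurator and validated against the release you actually deploy.

# Minimal sketch: generate a bare-bones KickStart file for automated installs.
# Directive values (partition sizes, root password, package group) are
# hypothetical placeholders; validate real files against your RedHat release.

KS_TEMPLATE = """\
install
cdrom
lang en_US
keyboard us
rootpw {root_password}
timezone {timezone}
bootloader --location=mbr
clearpart --all --initlabel
part /boot --size 64
part swap --size 256
part / --size 1024 --grow

%packages
@ Workstation
"""

def write_kickstart(path, root_password, timezone="America/New_York"):
    with open(path, "w") as f:
        f.write(KS_TEMPLATE.format(root_password=root_password, timezone=timezone))

write_kickstart("ks.cfg", root_password="changeme")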

You may find that one distribution suits one department in your organization
while another distribution may work better in another department.
Interoperability will remain consistent (within reason -- KickStart files are not
going to work with distributions other than RedHat, naturally) so you can make
that choice if you need to. That is the beauty of Linux -- choice.  

With the level of involvement that many large vendors are showing, it only makes sense to consider Linux. It is not just a geek's toy anymore, but rather a powerful, secure, and stable operating system. But don't take my word for it; check out the Linux offerings from IBM, HP, Dell, Compaq, and many, many more. Linux is coming to the desktop whether you are ready for it or not; be ready.

In summary, when you consider the cost, security and liability, support, and usability of Linux compared with other operating systems, you can quickly see how a move to Linux can be beneficial to you and your company. Considering the options, all that you may be giving up is proprietary file formats and an endless cycle of "upgrades" that may or may not offer any value to your users.

iEntry, Inc. provided this and many other very useful and informative articles through their free newsletter. Take advantage of it and order it for yourself. This one was significant enough that I preserved it here for your use.

Microsoft and Linux

Microsoft is scared of Linux because it sees the free operating system as a threat rather than an opportunity. The company reasons that if people are using a free OS and have access to free office suites and other apps, they won't buy commercial software.
Nonsense. Linux represents about one percent of the market, a segment Microsoft could target with an Office for Linux. Just because people choose an alternative OS, as Mac users do, doesn't mean they won't choose the world's most popular office suite, as Mac users also do.

The Linux Strategy

Linus Torvalds started Linux by creating his own Unix-like kernel in 1991. Torvalds eventually released Linux under the GNU General Public License (GPL), which allows both free and commercial distribution of the open source operating system.
Linux has become very popular on servers, but it has only a small presence in personal computing.

The Linux strategy is to make the operating system free, open, and customizable. Programmers are free to modify
their installations to best meet their needs.

The problem with the Linux strategy is that it's free, open, and customizable. There are numerous Linux
distributions, many of them offering two or three user interface options - GNOME, KDE, or Xfce. All distributions
share the Linux name, and as a free, open source operating system, there's little incentive to market Linux.

Many companies are finding the cost advantages of using Linux to be too compelling to ignore, especially
in times when they're looking for every cost advantage they can get. The result is that the use of Linux
continues to grow.

The opportunity for those who are familiar with Linux is clear. Because Linux can be licensed at the enterprise level rather than per system, it can eliminate the need for multiple copies of licensed operating systems in a company. By doing so, it allows a company to save hundreds, even thousands, of dollars in server and desktop operating system license fees.

Linux is not a fit for every situation but it's certainly worth evaluating, especially when you have large
numbers of PC desktops and servers in your company. The savings can be significant.

Supported system architectures include, but are not limited to, Intel x86, Intel Itanium, AMD64, and IBM's zSeries, iSeries, pSeries, and S/390.

Below, you will find the standard Linux cost-saving strategy template used in Mike Sisco's Technology
Cost Saving Strategies book and training module, which contains fifty specific strategies to help IT
managers improve their companies' bottom line.

After the template, you will find two sample cost examples to give you an idea of the potential savings. In addition, an excellent paper titled "Total Cost of Ownership for Linux in the Enterprise" by the Robert Francis Group is worth reading.

Linux
Description: The Linux operating system is challenging Microsoft for a share of the server operating
system space and has a significantly lower price tag. Because Linux is licensed per enterprise vs. per
user or system, the cost savings can be significant.

Category: Direct savings

Identify: Inventory all servers, PCs, and computer systems that have the potential to run Linux instead of Microsoft or other traditional operating systems.
Quantify: Compare the cost of the multiple OS licenses required from Microsoft for the identified servers and/or workstations vs. one license for all workstations and one license for all servers with Linux.

Investment: Yes, a Linux license (nominal)

Tips:
 Start with “noncritical” application servers.
 Test Linux stability until you are satisfied.
 Test a small group of IT users for Linux on desktop PCs.

Example #1
Let's say you have identified an opportunity to implement Linux in a Web server environment requiring seven servers to meet the expected daily hit volume. A comparison of the operating system licensing costs for the seven Web servers is shown in Table A below:

Table A: Operating system licenses for 7 Web servers

  Red Hat Linux ES        $799
  Windows Server 2003     $2,793
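
The arithmetic behind Table A is worth making explicit. The sketch below assumes, as the example implies, that a single Red Hat Linux ES subscription covers the group of servers while Windows Server 2003 is licensed per machine (the table's total works out to roughly $399 per server); actual pricing varies with edition, support level, and client access licenses.

# Sketch of the license comparison in Table A: seven Web servers, one
# Red Hat Linux ES subscription vs. per-server Windows Server 2003 licenses.
# Per-unit prices are inferred from the table; real pricing varies with
# edition, support level, and client access licenses.
servers = 7
redhat_es_subscription = 799               # single subscription in this example
windows_per_server = 2_793 / servers       # about $399 per server, from the table

linux_total = redhat_es_subscription
windows_total = windows_per_server * servers

print(f"Linux licensing:   ${linux_total:,.0f}")
print(f"Windows licensing: ${windows_total:,.0f}")
print(f"Direct savings:    ${windows_total - linux_total:,.0f}")   # $1,994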

Ans 3:

Definitions of storage technology on the Web:

 Storage Technology refers to the technology used to store the images or information obtained through the use of some form of Capture Technology (3.2). This includes the medium used for storage (3.3.1) and the compression methodology used to minimize the amount of storage medium employed.

Advancements to 100-year-old Recording Technology Open Doors for 10-fold Hard Drive Capacity Expansion
  While the hard drive industry has been using longitudinal
recording successfully for five decades, it is now within two
product generations of reaching its practical limit.
For about the past decade, scientists and engineers have
pondered the potential effects of a natural phenomenon
called superparamagnetism and postulated when its
presence might interfere with the progress of the hard disk
drive (HDD) industry.
Since the first commercial hard drive was introduced in
1956, the industry has grown storage capacity
exponentially by decreasing the size of the magnetic grains
that make up data bits. In effect, the smaller the magnetic
grain, the smaller the bit, the more data that can be stored
on a disk. With longitudinal recording, we are getting close
to the point where data integrity will be harmed if we
continue to shrink the magnetic grains. This is due to the
superparamagnetic effect.
Superparamagnetism occurs when the microscopic
magnetic grains on the disk become so tiny that random
thermal vibrations at room temperature cause them to lose
their ability to hold their magnetic orientations. What
results are “flipped bits” – bits whose magnetic north and
south poles suddenly and spontaneously reverse – that
corrupt data, rendering it and the storage device unreliable.
Today, the hard drive industry’s ability to push out the
superparamagnetic limit is more critical than ever as
capacity requirements continue to grow dramatically. This is
due, in large part, to the increasing use of hard drives in
consumer electronic devices. Consumers wanting to store
more music, photos and video are looking to the hard drive
industry to pack more and more storage capacity on smaller
devices. The superparamagnetic effect on current magnetic
recording technologies will make that growth impossible
within one to two years.
Thanks to renewed interest in a magnetic recording method
first demonstrated more than 100 years ago, there’s
confidence at Hitachi Global Storage Technologies (Hitachi
GST) and elsewhere in the storage industry that the natural
effects of superparamagnetism can be further stalled. That
method is called perpendicular recording, which when fully
realized over the next 5-7 years is expected to enable a 10-
fold increase in storage capacity over today’s technology.
This would enable, for example, a 60-GB one-inch
Microdrive from Hitachi GST, which is used in MP3 players,
personal media players, digital cameras, PDAs and other
handheld devices.
Hitachi, earlier this month, demonstrated a perpendicular
recording data density of 230 gigabits/square inch – twice
that of today’s density on longitudinal recording -- which
could result in a 20 gigabyte Microdrive in 2007.
 
Perpendicular and Longitudinal Recording: How They
Differ
  For nearly 50 years, the disk drive industry has focused
nearly exclusively on a method called longitudinal magnetic
recording, in which the magnetization of each data bit is
aligned horizontally in relation to the drive’s spinning
platter. In perpendicular recording, the magnetization of the
bit is aligned vertically – or perpendicularly – in relation to
the disk drive’s platter.
Perpendicular recording was first demonstrated in the late
19th century by Danish scientist Valdemar Poulsen, the first
person to demonstrate that sound could be recorded
magnetically. Advances in perpendicular recording were
sporadic until 1976 when 
Dr. Shun-ichi Iwasaki – president and chief director of the
prestigious Tohoku Institute of Technology in Japan and
generally considered the father of modern perpendicular
recording – verified distinct density advantages in
perpendicular recording. His work laid the foundation for
more aggressive perpendicular recording research that
continued even as the industry made advances in areal
density using longitudinal recording.
To help understand how perpendicular recording works,
consider the bits as small bar magnets. In conventional
longitudinal recording, the magnets representing the bits
are lined up end-to-end along circular tracks in the plane of
the disk. If you consider the highest-density bit pattern of
alternating ones and zeros, then the adjacent magnets end
up head-to-head (north-pole to north pole) and tail-to-tail
(south-pole to south-pole). In this scenario, they want to
repel each other, making them unstable against thermal
fluctuations. In perpendicular recording, the tiny magnets
are standing up and down. Adjacent alternating bits stand
with north pole next to south pole; thus, they want to
attract each other and are more stable and can be packed
more closely. This geometry is the key to making the bits
smaller without superparamagnetism causing them to lose
their memory.
Perpendicular recording allows hard drive manufacturers to
put more bits of data on each square inch of disk space –
called areal density or data density -- because of magnetic
geometry. Moreover, perpendicular recording results in the
improved ability of a bit to retain its magnetic charge, a
property called coercivity.
Though it departs from the current method of recording,
perpendicular recording is technically the closest alternative
to longitudinal recording, thus enabling the industry to
capitalize on current knowledge while delaying the
superparamagnetic effect.
The exact areal density at which the superparamagnetic
effect occurs has been a moving target, subject to much
scientific and engineering debate. As early as the 1970’s,
scientists predicted that the limit would be reached when
data densities reached 25 megabits per square inch. Such
predictions were woefully inaccurate; they did not consider
the ingenuity of scientists and engineers to skirt technical
obstacles. Through innovations in laboratories at Hitachi
GST and other companies, those limits have moved forward
dramatically. Today, the highest areal density with
longitudinal recording has surpassed 100 gigabits per
square inch. However, researchers believe the technology
will begin losing its ability to maintain data integrity at areal
densities much beyond 120 gigabits per square inch, at
which time, perpendicular recording will become the
dominant magnetic recording technology.
The superparamagnetic barrier is drawing nearer, forcing
the industry to slow the historically rapid pace of growth in
disk drive capacity – a pace that, at its peak over the past
decade, doubled capacity every 12 months. Using
perpendicular recording, scientists at Hitachi GST and other
companies believe that the effects of superparamagnetism
can be further forestalled, which would create opportunities
for continued robust growth in areal density at a rate of
about 40 percent each year.
The geometry and coercivity advantages of perpendicular
recording led scientists to believe in potential areal densities
that are up to 10 times greater than the maximum possible
with longitudinal recording. Given current estimates, that
would suggest an areal density using perpendicular
recording as great as one terabit per square inch -- making
possible in two to three years a 3.5-inch disk drive capable
of storing an entire terabyte of data.
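
Those projections are consistent with the growth rate quoted earlier: compounding at roughly 40 percent per year, capacity takes about seven years to grow tenfold, which lines up with the 5-7 year horizon mentioned above. A short sketch of that compounding arithmetic (the rates are the ones quoted in the text, nothing more):

# Compounding check for the capacity projections above: how long does a
# ~40% annual areal-density growth rate take to deliver a 10x increase?
import math

annual_growth = 0.40
target_multiple = 10

years_needed = math.log(target_multiple) / math.log(1 + annual_growth)
print(f"~{years_needed:.1f} years to reach {target_multiple}x "
      f"at {annual_growth:.0%} growth per year")          # ~6.8 years

# The doubling-every-12-months pace mentioned earlier is 100% annual growth:
print(f"~{math.log2(target_multiple):.1f} years at 100% growth per year")  # ~3.3 years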
  
Perpendicular Recording: Opportunity and Challenges
  Perpendicular magnetic recording represents an important
opportunity for Hitachi GST and others in the hard drive
industry to continue to grow capacities at a reasonable
pace. Such growth is needed to satisfy the burgeoning
information requirements of society: A 2003 University of
California-Berkeley study estimates that more than 4 million
terabytes of information were produced and stored
magnetically in 2002 – more than double the 1.7 million
terabytes produced and stored in 2000. There are no signs
that the requirements for hard-disk storage are ebbing. On
the contrary, all signs indicate that the demand for hard
drive storage will continue to grow at a staggering rate,
fueled by IT applications and, increasingly, consumer
electronics requirements.
Industry analysts have predicted that hard drives for
consumer electronics will account for 40 percent of all hard
drive shipments by 2008, up from 9 percent in 2003 and 15
percent in 2004. But unlike hard drives used in IT
applications where performance is key, hard drives for
consumer electronic applications have ultra-high storage
capacity as their main requirement. More than ever,
consumers are holding their entertainment and personal
data in digital formats and have demonstrated an insatiable
appetite for storing more music, photos, video and other
personal documents. So much so, that Hitachi GST believes
in the next 5-10 years, the average household will have 10-
20 hard drives in various applications. This vision will
require the successful adoption of perpendicular recording.
Even though perpendicular recording is technically akin to
the current generation of longitudinal devices, a number of
technical challenges remain. For example, engineers at
Hitachi GST are engaged in research to invent new kinds of
read/write heads; to experiment with new materials that
have enhanced magnetic properties and improved surface
finishes; to maintain signal-to-noise ratios as the magnetic
bits and signals become smaller; and to detect and
interpret the magnetic signals using ever more advanced
algorithms.
Though large, all of these challenges are familiar – the
same type of challenges that the industry has traditionally
faced and overcome. But successfully meeting the new
challenges will take time, engineering resources and
ingenuity on a massive scale – the kind of scale most likely
to come from Hitachi GST and other vertically-integrated
companies who research and produce their own hard drive
technologies.
Equally important, perpendicular recording is not a panacea
for all storage requirements. Rather, it is a stepping stone
that will give the disk drive industry breathing room to
explore and invent new methods of extending magnetic
recording. One method called patterned media, for
example, may one day reduce the size of a bit to a single
grain as compared to the 100 or so grains that comprise a
bit today. The approach uses lithography to etch a pattern
onto the platter. Once engineered, it is a technology that
should be easily and economically replicated, adding no
significant cost to the drive and potentially improving areal
densities by another factor of 10. Significant research is
being undertaken in Hitachi GST laboratories on this
approach.
A fundamental challenge researchers are facing is that high
media coercivity is normally associated with an increased
difficulty in writing. Potential approaches to resolve this
problem include thermally-assisted magnetic recording. The
heat assist allows recording on improved media with a high
coercivity. Another approach is tilted perpendicular
recording. This approach sets the magnetization at a
diagonal to theoretically improve the media’s ability to hold
magnetic charge while still being recordable.
 
Confidence for the Future
  When the first five-megabyte drive was introduced 50 years ago, few, if any, could have predicted the current state of the industry. They would likely not believe that a read/write head could fly at a hundred miles an hour over a spinning platter at a distance less than 1/10,000th of the width of a human hair, or that drives the size of quarters would be capable of storing entire music libraries. This would all be in the realm of science fiction.
Yet they would likely understand the scientific concepts and
physical laws that have made these advances possible.
While there has been a great deal of invention, the basic
science – like Valdemar Poulsen’s discovery more than 100
years ago – has remained relatively constant.
Such constancy gives rise to confidence across the industry
that the challenge of superparamagnetism will be met.
Perpendicular recording is the most likely first technology
bridge but it is by no means the last.
 
Backup/recovery

Learn ways to shorten your backup window or recover data from the brink of disaster.

Storage management

Storage isn't just a cabinet full of disks blindly attached to a production network. It is a resource, so it must be allocated and managed as a resource in order to achieve the greatest benefit for the corporation.

Ans 5:

Functions Of Operating System 


Today most operating systems perform the following important functions:
1. Processor management, that is, assignment of processor to different tasks being performed by
the computer system.
2. Memory management, that is, allocation of main memory and other storage areas to the system
programmes as well as user programmes and data.
3. Input/output management, that is, co-ordination and assignment of the different input and output devices while one or more programmes are being executed.
4. File management, that is, the storage of files on the various storage devices and the transfer of files from one storage device to another. It also allows all files to be easily changed and modified through the use of text editors or some other file manipulation routines.

5. Establishment and enforcement of a priority system. That is, it determines and maintains the order
in which jobs are to be executed in the computer system.
6. Automatic transition from job to job as directed by special control statements.
7. Interpretation of commands and instructions. 
8. Coordination and assignment of compilers, assemblers, utility programs, and other software to the various users of the computer system.
9. Facilitation of easy communication between the computer system and the computer operator (human). It also establishes data security and integrity.

The major functions of an OS are:

-resource management,
-data management,
-job (task) management, and
-standard means of communication between user and computer.

The resource management function of an OS allocates computer resources such as CPU time, main
memory, secondary storage, and input and output devices for use.

The data management functions of an OS govern the input and output of the data and their location,
storage, and retrieval.

The job management function of an OS prepares, schedules, controls, and monitors jobs submitted for
execution to ensure the most efficient processing. A job is a collection of one or more related programs
and their data.


The OS establishes a standard means of communication between users and their computer systems. It
does this by providing a user interface and a standard set of commands that control the hardware.

Typical Day-to-Day Uses of an Operating System

-Executing application programs.
-Formatting floppy diskettes.
-Setting up directories to organize your files.
-Displaying a list of files stored on a particular disk.
-Verifying that there is enough room on a disk to save a file.
-Protecting and backing up your files by copying them to other disks for safekeeping.
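
A few of these everyday tasks map directly onto calls the operating system exposes to programs. The snippet below is a minimal sketch of two of them (listing files and checking free space before a save); the directory and file size are placeholders.

# Minimal sketch of two day-to-day tasks the operating system performs on
# request: listing the files in a directory and verifying free disk space.
import os
import shutil

target_dir = os.path.expanduser("~")      # placeholder: any directory you like

# "Displaying a list of files stored on a particular disk."
for name in sorted(os.listdir(target_dir))[:10]:
    print(name)

# "Verifying that there is enough room on a disk to save a file."
usage = shutil.disk_usage(target_dir)
file_size = 50 * 1024 * 1024              # hypothetical 50 MB file
print("enough room" if usage.free > file_size else "disk is too full")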

How Do Operating Systems Differ?

Operating systems for large computers are more complex and sophisticated than those for
microcomputers because the operating systems for large computers must address the needs of a very
large number of users, application programs, and hardware devices, as well as supply a host of
administrative and security features.

Operating system capabilities can be described in terms of

-the number of users they can accommodate at one time,
-how many tasks can be run at one time, and
-how they process those tasks.

Number of Users:
A single-user operating system allows only one user at a time to access a computer.

Most operating systems on microcomputers, such as DOS and Windows 95, are single-user access systems.

A multiuser operating system allows two or more users to access a computer at the same time (UNIX).

The actual number of users depends on the hardware and the OS design.
Time sharing allows many users to access a single computer.
This capability is typically found on large computer operating systems where many users need access at
the same time.

Number of Tasks

An operating system can be designed for single tasking or multitasking.

A single tasking operating system allows only one program to execute at a time, and the program must
finish executing completely before the next program can begin.

A multitasking operating system allows a single CPU to execute what appears to be more than one
program at a time.

Context switching allows several programs to reside in memory but only one to be active at a time. The
active program is said to be in the foreground. The other programs in memory are not active and are said
to be in the background. Instead of having to quit a program and load another, you can simply switch the
active program in the foreground to the background and bring a program from the background into the
foreground with a few keystrokes.

Cooperative multitasking is a form of multitasking in which a background program uses the CPU during the idle time of the foreground program. For example, the background program might sort data while the foreground program waits for a keystroke.
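
One rough way to picture cooperative multitasking is as tasks that voluntarily hand control back to a scheduler. The generator-based sketch below only illustrates that scheduling idea; it is not how an operating system actually implements it.

# Illustration of cooperative multitasking: each "task" voluntarily yields
# control, and a trivial round-robin scheduler switches between them.
def background_sort(data):
    """Bubble-sort in passes, yielding to the scheduler after each pass."""
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
        yield  # hand control back after each pass

def foreground_input(keystrokes):
    """Consume simulated keystrokes one at a time (the foreground task)."""
    for key in keystrokes:
        print(f"foreground: got keystroke {key!r}")
        yield

data = [5, 3, 8, 1, 9, 2]
tasks = [foreground_input("abc"), background_sort(data)]

while tasks:                      # keep switching until every task finishes
    for task in tasks[:]:
        try:
            next(task)
        except StopIteration:
            tasks.remove(task)

print("background finished sorting:", data)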

Time-slice multitasking enables a CPU to switch its attention between the requested tasks of two or more
programs. Each task receives the attention of the CPU for a fraction of a second before the CPU moves
on to the next. Depending on the application, the order in which tasks receive CPU attention may be
determined sequentially (first come first served) or by previously defined priority levels.

Multithreading supports several simultaneous tasks within the same application. For example, with only
one copy of a database management system in memory, one database file can be sorted while data is
simultaneously entered into another database file.
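
The database example can be mimicked with ordinary threads. In the minimal sketch below, the "sort" and "data entry" workloads are simulated stand-ins rather than a real database engine.

# Minimal multithreading sketch: one thread sorts an existing record set while
# another thread "enters" new records at the same time. Both workloads are
# simulated stand-ins, not a real database management system.
import threading

existing_records = [42, 7, 19, 3, 28]
incoming_records = []

def sort_existing():
    existing_records.sort()
    print("sorter: existing records sorted:", existing_records)

def enter_new_data():
    for value in (11, 54, 2):
        incoming_records.append(value)
    print("entry: new records captured:", incoming_records)

threads = [threading.Thread(target=sort_existing),
           threading.Thread(target=enter_new_data)]
for t in threads:
    t.start()
for t in threads:
    t.join()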
