Microsoft Convicted of Software Piracy: Kerit Ved
Recent studies have shown that using Linux throughout your enterprise can
lower IT costs (TCO, or Total Cost of Ownership) for your organization by as
much as 34%. In fact, because Linux does not require hardware upgrades nearly
as frequently as proprietary systems, even more savings can theoretically be
achieved over the long run.
Rather than try to compare apples and oranges with regard to security, you
instead may want to have a closer look at liability. Many software vendors (Microsoft among them) would have you believe that because there is no single company that
owns Linux, there is no accountability in the case of a security breach or other
system problem for that matter. What Microsoft neglects to tell you is that if you
adhere to their EULA (End User License Agreement; see %SystemRoot%\System32\eula.txt on your Windows system), they are no more accountable for lost productivity, lost data, or any incidental or consequential damages than Open Source software is. In fact, other than replacement of defective
media, Microsoft claims no responsibility for anything that may happen as a
result of using their software.
Support
Support is an issue that many IT managers are (rightly) concerned about. I know
that most distributions offer some installation support (usually 60 or 90 days by email), and many also offer enterprises pay-per-incident and pay-by-the-hour
support. In fact, I recently read an informative article from Network Computing that gave a very positive impression of companies that offer Linux support. You can find that article here...(see 'Team' Work Pays Off for Linux).
It sounds as though Network Computing had much better luck with its Linux support issues than I had with Microsoft the two or three times that I called (just for the record, for those Windows zealots who claim that getting a lost CD replaced by Microsoft is a walk in the park: you must enjoy things like gargling with razor blades; after five phone calls to different numbers and two hours on the phone, there was still no help to be found).
Applications
Another argument that I hear very often is that Linux has a limited number of
applications. This depends entirely on what you intend to do with your
computers. There are many applications now available for Linux that integrate
Linux desktops into Windows networks. The first that comes to mind is
CodeWeavers' CrossOver plugin. This fine little piece of code allows Linux users to run Microsoft Office 97 and 2000 on Linux machines (you can find out more at CodeWeavers' site...see Linux Users To Run MS Office & Lotus Notes Without Windows). You can also look to Ximian for its Ximian Connector, a product in a similar spirit to CrossOver; the Connector allows Linux users to access a Microsoft Exchange server for email, calendaring, and more (see Ximian Connector for Microsoft Exchange).
There are other ways of running Windows applications on Linux as well. The
Wine project, although not yet perfect, has improved by leaps and bounds in the past year. Wine and its associated documentation can be found here...(see WINE). If you need a somewhat more versatile alternative to Wine, TransGaming's WineX does a really nice job of running Windows applications. WineX is based on Wine (although WineX itself is proprietary) and was designed to allow gamers to play their Windows game releases on Linux. However, it is not
limited to games and it may be the answer for that one Windows application that
you or your users cannot live without.
If you have custom software that was written for Windows on which your
business depends, Linux may not be a good choice for you. On the other hand,
porting a custom application can often be much less expensive than the costs associated with a Microsoft upgrade (especially if hardware is involved). If you have developers working in-house, you can have them port software to Linux and test it long before you intend to migrate.
Quality/Usability of Applications
The quality of applications for Linux has improved dramatically over the past
few years, so that the argument that Open Source is simply not on par with
commercial software is a blatant falsehood. If you don't believe me, would you
believe Microsoft? An internal memo was leaked around the end of October 1998 that described the threat of Open Source software (OSS). In the memo, later dubbed "The Halloween Documents," Microsoft personnel
claim that "Recent case studies (the Internet) provide very dramatic evidence in
customer's eyes that commercial quality can be achieved / exceeded by OSS
projects." and "Another barrier to entry that has been tackled by OSS is project
complexity. OSS teams are undertaking projects whose size & complexity had
heretofore been the exclusive domain of commercial, economically-
organized/motivated development teams. Examples include the Linux Operating
System and Xfree86 GUI." You can find the original Halloween Documents with
commentary here...(see The Halloween Documents).
Now, I used Linux back in 1999 and it was nothing like the Linux that is
available today, in terms of graphical user environments and end user usability.
The KDE project, which originated in Germany, has recently released the third major version of its K Desktop Environment (KDE 3). The reason that I mention this
piece of software is that it is every bit as intuitive as Microsoft's offerings, and
much better looking by many accounts. If you are worried about the learning
curve involved with moving your employees from Windows to Linux, you should
not have a difficult time getting users to adjust to this great desktop
environment. I wrote an article discussing KDE3 in which I tried an experiment.
I invited my good friend Peter Thiruselvam to try out a brand new KDE3
installation. Peter had never before used Linux and what he had to say about it
may quell some of your worries. See the article here...(see Introducing KDE3).
If KDE3 is not up your alley, there is always Ximian's GNOME desktop. This is another easy-to-use desktop environment that will require very little training to bring your end users up to speed.
In Closing
If you are seriously considering a transition to Linux, below are some links that
will get you started. For ease of integration I would suggest testing several
different distributions to see which have features that you will rely on. For
instance, Lycoris Linux (formerly Redmond Linux) has a very nice interface for
browsing Windows networks set up by default. If ease of installation is an
important factor, there are several distributions available that I can easily recommend as being as easy as, or easier than, Windows products to install. Lycoris, SuSE, RedHat, and Mandrake come to mind immediately, although there are
many others as well.
You may find that one distribution suits one department in your organization
while another distribution may work better in another department.
Interoperability will remain consistent (within reason -- KickStart files are not
going to work with distributions other than RedHat, naturally) so you can make
that choice if you need to. That is the beauty of Linux -- choice.
With the level of involvement from many large vendors, it only makes sense to consider Linux. It is not just a geek's toy anymore, but rather a powerful,
secure, and stable operating system. But don't take my word for it, check out
Linux offerings from IBM, HP, Dell, Compaq and many, many more. Linux is
coming to the desktop whether you are ready for it or not; be ready.
In summary, when you consider the cost, security and liability, support and
usability of Linux when compared to other operating systems, you can quickly
see how a move to Linux can be beneficial to you and your company.
Considering the options, all that you may be giving up is proprietary file formats
and an endless cycle of "upgrades" that may or may not offer any value to your
users.
iEntry, Inc. provided this and many other very useful and informative articles through its free newsletter. Take advantage of it and subscribe for yourself. This one was important enough that I preserved it here for your use.
Microsoft is scared of Linux because it sees the free operating system as a threat rather than an opportunity. The
company reasons that if people are using a free OS and have access to free office suites and other apps, they won't
buy commercial software.
Nonsense. Linux is one percent of the market, and Microsoft could target it with an "Office: Linux." Just because people choose an alternate OS, as Mac users do, doesn't mean they won't choose the world's most popular office suite, as Mac users also do.
Linus Torvalds started Linux by creating his own Unix-like kernel in 1991. Torvalds eventually released Linux under the GNU General Public License, which allows for both free and commercial distribution of the open source operating system.
Linux has become very popular on servers, but it has only a small presence in personal computing.
The Linux strategy is to make the operating system free, open, and customizable. Programmers are free to modify
their installations to best meet their needs.
The problem with the Linux strategy is that it's free, open, and customizable. There are numerous Linux
distributions, many of them offering two or three user interface options: GNOME, KDE, or Xfce. All distributions share the Linux name, and because it is a free, open source operating system, there is little incentive to market Linux.
Many companies are finding the cost advantages of using Linux to be too compelling to ignore, especially
in times when they're looking for every cost advantage they can get. The result is that the use of Linux
continues to grow.
The opportunity for those who are familiar with Linux is clear. Because Linux is priced at the enterprise level rather than per system, it makes it possible to eliminate multiple copies of licensed operating systems in a company, saving hundreds or even thousands of dollars in server and desktop operating system license fees.
Linux is not a fit for every situation but it's certainly worth evaluating, especially when you have large
numbers of PC desktops and servers in your company. The savings can be significant.
Supported system architectures include, but are not limited to, Intel x86, Intel Itanium, AMD64, and IBM's zSeries, iSeries, pSeries, and S/390.
Below, you will find the standard Linux cost-saving strategy template used in Mike Sisco's Technology
Cost Saving Strategies book and training module, which contains fifty specific strategies to help IT
managers improve their companies' bottom line.
After the template, you will find two sample cost examples to give you an idea of the potential savings. In addition, an excellent paper titled "Total Cost of Ownership for Linux in the Enterprise," by Robert Francis Group, is worth reading.
Linux
Description: The Linux operating system is challenging Microsoft for a share of the server operating
system space and has a significantly lower price tag. Because Linux is licensed per enterprise vs. per
user or system, the cost savings can be significant.
Category: Direct savings
Identify: Inventory all servers, PCs, and computer systems that have potential to operate on Linux vs.
Microsoft or other traditional operating systems.
Quantify: Calculate the cost of multiple OS licenses from Microsoft for the identified servers and/or workstations vs. one license for all workstations and one license for all servers with Linux.
Tips:
Start with “noncritical” application servers.
Test Linux stability until you are satisfied.
Test Linux on desktop PCs with a small group of IT users.
Example #1
Let's say you have identified an opportunity to implement Linux in a Web server environment requiring seven servers to handle the expected daily hits. A comparison of the operating costs to support seven Web servers is shown in Table A below:
Table A: Linux vs. Windows server operating cost comparison (figures not preserved in this copy).
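For a rough sense of how the per-system versus per-enterprise arithmetic works out, a quick calculation might look like the sketch below. The license and support prices are illustrative placeholders, not the figures from Table A.

# Hypothetical license-cost comparison for the seven-server Web farm in
# Example #1. All prices here are illustrative assumptions, not the
# numbers from Table A.

WINDOWS_SERVER_LICENSE = 800.0   # assumed per-server OS license cost
LINUX_SUPPORT_CONTRACT = 1500.0  # assumed single enterprise-wide contract
NUM_SERVERS = 7

windows_cost = WINDOWS_SERVER_LICENSE * NUM_SERVERS  # licensed per system
linux_cost = LINUX_SUPPORT_CONTRACT                  # licensed per enterprise
savings = windows_cost - linux_cost

print(f"Windows licensing for {NUM_SERVERS} servers: ${windows_cost:,.2f}")
print(f"Linux enterprise contract: ${linux_cost:,.2f}")
print(f"Estimated direct savings: ${savings:,.2f}")

The same calculation scales to desktops: multiply a per-seat license price by the number of PCs and compare it against a single enterprise agreement.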
Ans 3:
Advancements to 100-year-old Recording Technology Open Doors for 10-fold Hard Drive Capacity Expansion
While the hard drive industry has been using longitudinal
recording successfully for five decades, it is now within two
product generations of reaching its practical limit.
For about the past decade, scientists and engineers have
pondered the potential effects of a natural phenomenon
called superparamagnetism and postulated when its
presence might interfere with the progress of the hard disk
drive (HDD) industry.
Since the first commercial hard drive was introduced in
1956, the industry has grown storage capacity
exponentially by decreasing the size of the magnetic grains
that make up data bits. In effect, the smaller the magnetic
grain, the smaller the bit, the more data that can be stored
on a disk. With longitudinal recording, we are getting close
to the point where data integrity will be harmed if we
continue to shrink the magnetic grains. This is due to the
superparamagnetic effect.
Superparamagnetism occurs when the microscopic
magnetic grains on the disk become so tiny that random
thermal vibrations at room temperature cause them to lose
their ability to hold their magnetic orientations. What
results are “flipped bits” – bits whose magnetic north and
south poles suddenly and spontaneously reverse – that
corrupt data, rendering it and the storage device unreliable.
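Although the explanation above is qualitative, engineers usually quantify the effect by comparing a grain's magnetic energy barrier with the thermal energy around it. The rule of thumb used in the sketch below (a ratio of roughly 40-60 for long-term retention) and the material numbers are generic illustrations, not Hitachi figures.

# Back-of-the-envelope thermal-stability check behind the
# superparamagnetic effect. K_u (anisotropy energy density) and the grain
# size are assumed, illustrative values.
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # room temperature, K
K_u = 2.0e5              # assumed anisotropy energy density, J/m^3
grain_diameter = 8e-9    # assumed grain diameter, m

V = (math.pi / 6.0) * grain_diameter ** 3   # grain volume, treated as a sphere
stability = (K_u * V) / (k_B * T)

print(f"K_u*V / k_B*T = {stability:.1f}")
# A ratio of roughly 40-60 or more is commonly cited for multi-year
# retention; shrinking the grain shrinks V and pushes the ratio down,
# which is exactly the problem described above.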
Today, the hard drive industry’s ability to push out the
superparamagnetic limit is more critical than ever as
capacity requirements continue to grow dramatically. This is
due, in large part, to the increasing use of hard drives in
consumer electronic devices. Consumers wanting to store
more music, photos and video are looking to the hard drive
industry to pack more and more storage capacity on smaller
devices. The superparamagnetic effect on current magnetic
recording technologies will make that growth impossible
within one to two years.
Thanks to renewed interest in a magnetic recording method
first demonstrated more than 100 years ago, there’s
confidence at Hitachi Global Storage Technologies (Hitachi
GST) and elsewhere in the storage industry that the natural
effects of superparamagnetism can be further stalled. That
method is called perpendicular recording, which, when fully realized over the next 5-7 years, is expected to enable a 10-fold increase in storage capacity over today's technology.
This would enable, for example, a 60-GB one-inch
Microdrive from Hitachi GST, which is used in MP3 players,
personal media players, digital cameras, PDAs and other
handheld devices.
Hitachi, earlier this month, demonstrated a perpendicular
recording data density of 230 gigabits/square inch – twice today's longitudinal recording density – which could result in a 20-gigabyte Microdrive in 2007.
Perpendicular and Longitudinal Recording: How They
Differ
For nearly 50 years, the disk drive industry has focused
nearly exclusively on a method called longitudinal magnetic
recording, in which the magnetization of each data bit is
aligned horizontally in relation to the drive’s spinning
platter. In perpendicular recording, the magnetization of the
bit is aligned vertically – or perpendicularly – in relation to
the disk drive’s platter.
Perpendicular recording was first demonstrated in the late
19th century by Danish scientist Valdemar Poulsen, the first
person to demonstrate that sound could be recorded
magnetically. Advances in perpendicular recording were
sporadic until 1976 when
Dr. Shun-ichi Iwasaki – president and chief director of the
prestigious Tohoku Institute of Technology in Japan and
generally considered the father of modern perpendicular
recording – verified distinct density advantages in
perpendicular recording. His work laid the foundation for
more aggressive perpendicular recording research that
continued even as the industry made advances in areal
density using longitudinal recording.
To help understand how perpendicular recording works,
consider the bits as small bar magnets. In conventional
longitudinal recording, the magnets representing the bits
are lined up end-to-end along circular tracks in the plane of
the disk. If you consider the highest-density bit pattern of
alternating ones and zeros, then the adjacent magnets end
up head-to-head (north-pole to north pole) and tail-to-tail
(south-pole to south-pole). In this scenario, they want to
repel each other, making them unstable against thermal
fluctuations. In perpendicular recording, the tiny magnets
are standing up and down. Adjacent alternating bits stand
with north pole next to south pole; thus, they want to
attract each other and are more stable and can be packed
more closely. This geometry is the key to making the bits
smaller without superparamagnetism causing them to lose
their memory.
Perpendicular recording allows hard drive manufacturers to
put more bits of data on each square inch of disk space – a measure called areal density or data density – because of this magnetic geometry. Moreover, perpendicular recording results in the
improved ability of a bit to retain its magnetic charge, a
property called coercivity.
Though it departs from the current method of recording,
perpendicular recording is technically the closest alternative
to longitudinal recording, thus enabling the industry to
capitalize on current knowledge while delaying the
superparamagnetic effect.
The exact areal density at which the superparamagnetic
effect occurs has been a moving target, subject to much
scientific and engineering debate. As early as the 1970s,
scientists predicted that the limit would be reached when
data densities reached 25 megabits per square inch. Such
predictions were woefully inaccurate; they did not consider
the ingenuity of scientists and engineers to skirt technical
obstacles. Through innovations in laboratories at Hitachi
GST and other companies, those limits have moved forward
dramatically. Today, the highest areal density with
longitudinal recording has surpassed 100 gigabits per
square inch. However, researchers believe the technology
will begin losing its ability to maintain data integrity at areal
densities much beyond 120 gigabits per square inch, at
which time, perpendicular recording will become the
dominant magnetic recording technology.
The superparamagnetic barrier is drawing nearer, forcing
the industry to slow the historically rapid pace of growth in
disk drive capacity – a pace that, at its peak over the past
decade, doubled capacity every 12 months. Using
perpendicular recording, scientists at Hitachi GST and other
companies believe that the effects of superparamagnetism
can be further forestalled, which would create opportunities
for continued robust growth in areal density at a rate of
about 40 percent each year.
The geometry and coercivity advantages of perpendicular
recording led scientists to believe in potential areal densities
that are up to 10 times greater than the maximum possible
with longitudinal recording. Given current estimates, that
would suggest an areal density using perpendicular
recording as great as one terabit per square inch -- making
possible in two to three years a 3.5-inch disk drive capable
of storing an entire terabyte of data.
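The 40 percent annual growth rate and the one-terabit-per-square-inch target quoted above can be sanity-checked with a little compounding arithmetic, as in the short sketch below.

# Compounding check of the areal-density figures quoted in the article:
# roughly 100 Gb/in^2 with longitudinal recording today, a 230 Gb/in^2
# perpendicular demonstration, ~40% annual growth, and a 1 Tb/in^2 target.
import math

ANNUAL_GROWTH = 0.40
TARGET = 1000.0  # gigabits per square inch (one terabit)

for start in (100.0, 230.0):
    years = math.log(TARGET / start) / math.log(1.0 + ANNUAL_GROWTH)
    print(f"{start:.0f} -> {TARGET:.0f} Gb/in^2 at 40%/year: about {years:.1f} years")

# Starting from ~100 Gb/in^2 the answer is roughly 7 years, consistent
# with the article's 5-7 year horizon for a 10-fold capacity increase.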
Perpendicular Recording: Opportunity and Challenges
Perpendicular magnetic recording represents an important
opportunity for Hitachi GST and others in the hard drive
industry to continue to grow capacities at a reasonable
pace. Such growth is needed to satisfy the burgeoning
information requirements of society: A 2003 University of
California-Berkeley study estimates that more than 4 million
terabytes of information were produced and stored
magnetically in 2002 – more than double the 1.7 million
terabytes produced and stored in 2000. There are no signs
that the requirements for hard-disk storage are ebbing. On
the contrary, all signs indicate that the demand for hard
drive storage will continue to grow at a staggering rate,
fueled by IT applications and, increasingly, consumer
electronics requirements.
Industry analysts have predicted that hard drives for
consumer electronics will account for 40 percent of all hard
drive shipments by 2008, up from 9 percent in 2003 and 15
percent in 2004. But unlike hard drives used in IT
applications where performance is key, hard drives for
consumer electronic applications have ultra-high storage
capacity as their main requirement. More than ever,
consumers are holding their entertainment and personal
data in digital formats and have demonstrated an insatiable
appetite for storing more music, photos, video and other
personal documents. So much so that Hitachi GST believes that in the next 5-10 years the average household will have 10-20 hard drives in various applications. This vision will
require the successful adoption of perpendicular recording.
Even though perpendicular recording is technically akin to
the current generation of longitudinal devices, a number of
technical challenges remain. For example, engineers at
Hitachi GST are engaged in research to invent new kinds of
read/write heads; to experiment with new materials that
have enhanced magnetic properties and improved surface
finishes; to maintain signal-to-noise ratios as the magnetic
bits and signals become smaller; and to detect and
interpret the magnetic signals using ever more advanced
algorithms.
Though large, all of these challenges are familiar – the
same type of challenges that the industry has traditionally
faced and overcome. But successfully meeting the new
challenges will take time, engineering resources and
ingenuity on a massive scale – the kind of scale most likely
to come from Hitachi GST and other vertically-integrated
companies who research and produce their own hard drive
technologies.
Equally important, perpendicular recording is not a panacea
for all storage requirements. Rather, it is a stepping stone
that will give the disk drive industry breathing room to
explore and invent new methods of extending magnetic
recording. One method called patterned media, for
example, may one day reduce the size of a bit to a single
grain, as compared to the 100 or so grains that make up a
bit today. The approach uses lithography to etch a pattern
onto the platter. Once engineered, it is a technology that
should be easily and economically replicated, adding no
significant cost to the drive and potentially improving areal
densities by another factor of 10. Significant research is
being undertaken in Hitachi GST laboratories on this
approach.
A fundamental challenge researchers are facing is that high
media coercivity is normally associated with an increased
difficulty in writing. Potential approaches to resolve this
problem include thermally-assisted magnetic recording. The
heat assist allows recording on improved media with a high
coercivity. Another approach is tilted perpendicular
recording. This approach sets the magnetization at a
diagonal to theoretically improve the media’s ability to hold
magnetic charge while still being recordable.
Confidence for the Future
When the first five-megabyte drive was introduced 50 years
ago, few, if any, could have predicted the current state of the industry. They would likely not believe that a read/write head could fly at a hundred miles an hour over a spinning platter at a distance less than 1/10,000th the width of a human hair, or that drives the size of quarters would be capable of
storing entire music libraries. This would all be in the realm
of science fiction.
Yet they would likely understand the scientific concepts and
physical laws that have made these advances possible.
While there has been a great deal of invention, the basic
science – like Valdemar Poulsen’s discovery more than 100
years ago – has remained relatively constant.
Such constancy gives rise to confidence across the industry
that the challenge of superparamagnetism will be met.
Perpendicular recording is the most likely first technology
bridge but it is by no means the last.
Ans 5:
5. Establishment and enforcement of a priority system. That is, it determines and maintains the order
in which jobs are to be executed in the computer system.
6. Automatic transition from job to job as directed by special control statements.
7. Interpretation of commands and instructions.
8. Coordination and assignment of compilers, assemblers, utility programs, and other software to the
various users of the computer system.
9. Facilitation of easy communication between the computer system and the computer operator (human).
The operating system also establishes data security and integrity.
The functions of an operating system can be grouped into four broad areas:
- resource management,
- data management,
- job (task) management, and
- standard means of communication between user and computer.
The resource management function of an OS allocates computer resources such as CPU time, main
memory, secondary storage, and input and output devices for use.
The data management functions of an OS govern the input and output of the data and their location,
storage, and retrieval.
The job management function of an OS prepares, schedules, controls, and monitors jobs submitted for
execution to ensure the most efficient processing. A job is a collection of one or more related programs
and their data.
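As a small illustration of the scheduling side of job management, the sketch below keeps submitted jobs in a priority queue and always dispatches the highest-priority job next. It is a generic Python sketch of the idea, not the mechanism of any particular operating system, and the job names and priorities are invented.

# Minimal priority-based job queue: lower number = higher priority, and
# submission order breaks ties, as in many batch systems.
import heapq

class JobQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0   # preserves submission order among equal priorities

    def submit(self, name, priority):
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def dispatch_all(self):
        while self._heap:
            priority, _, name = heapq.heappop(self._heap)
            print(f"running job {name!r} (priority {priority})")

queue = JobQueue()
queue.submit("payroll", priority=1)
queue.submit("nightly-backup", priority=3)
queue.submit("report", priority=2)
queue.dispatch_all()   # runs payroll, then report, then nightly-backup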
The OS establishes a standard means of communication between users and their computer systems. It
does this by providing a user interface and a standard set of commands that control the hardware.
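To picture what a "standard set of commands" looks like in practice, here is a toy command interpreter in Python. The commands (date, echo) and their behavior are made up for illustration; they control nothing on a real system.

# Toy command interpreter: read a command line, look the command up in a
# table, and run the matching routine or report an error.
import datetime

def cmd_date(args):
    print(datetime.datetime.now().isoformat(timespec="seconds"))

def cmd_echo(args):
    print(" ".join(args))

COMMANDS = {"date": cmd_date, "echo": cmd_echo}

def interpret(line):
    parts = line.split()
    if not parts:
        return
    name, args = parts[0], parts[1:]
    handler = COMMANDS.get(name)
    if handler is None:
        print(f"{name}: command not found")
    else:
        handler(args)

interpret("echo hello operator")
interpret("date")
interpret("format c:")   # unknown command -> error message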
Operating systems for large computers are more complex and sophisticated than those for
microcomputers because the operating systems for large computers must address the needs of a very
large number of users, application programs, and hardware devices, as well as supply a host of
administrative and security features.
Number of Users:
A single-user operating system allows only one user at a time to access a computer.
Most operating systems on microcomputers, such as DOS and Windows 95, are single-user access
systems.
A multiuser operating system allows two or more users to access a computer at the same time (UNIX).
The actual number of users depends on the hardware and the OS design.
Time sharing allows many users to access a single computer.
This capability is typically found on large computer operating systems where many users need access at
the same time.
Number of Tasks
A single tasking operating system allows only one program to execute at a time, and the program must
finish executing completely before the next program can begin.
A multitasking operating system allows a single CPU to execute what appears to be more than one
program at a time.
Context switching allows several programs to reside in memory but only one to be active at a time. The
active program is said to be in the foreground. The other programs in memory are not active and are said
to be in the background. Instead of having to quit a program and load another, you can simply switch the
active program in the foreground to the background and bring a program from the background into the
foreground with a few keystrokes.
In cooperative multitasking, a background program uses the CPU during idle time of the foreground
program. For example, the background program might sort data while the foreground program waits for a
keystroke.
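The sketch below illustrates the cooperative idea using Python generators rather than a real operating system: the background task only receives the CPU when the foreground task reports that it is idle, and the "keystroke" is simulated rather than read from a keyboard.

# Cooperative multitasking sketch: the scheduler runs the background task
# only while the foreground task says it is idle.
def foreground():
    for _ in range(3):
        print("foreground: waiting for a keystroke...")
        yield "idle"            # nothing to do, let the background run
    print("foreground: keystroke received, handling it")
    yield "busy"

def background():
    data = [5, 3, 4, 1, 2]
    while data:
        data.sort()             # stand-in for a long sort done in small steps
        print("background: sorted item ->", data.pop(0))
        yield

fg, bg = foreground(), background()
for state in fg:                # very small cooperative scheduler
    if state == "idle":
        next(bg, None)          # background gets the CPU during idle time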
Time-slice multitasking enables a CPU to switch its attention between the requested tasks of two or more
programs. Each task receives the attention of the CPU for a fraction of a second before the CPU moves
on to the next. Depending on the application, the order in which tasks receive CPU attention may be
determined sequentially (first come first served) or by previously defined priority levels.
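To make the round-robin behavior concrete, the sketch below treats each "program" as a Python generator and gives every task one step per turn, first come first served; the task names and step counts are invented.

# Time-slice (round-robin) multitasking sketch: each task gets one step of
# CPU attention per turn until it finishes.
from collections import deque

def make_task(name, steps):
    def task():
        for i in range(1, steps + 1):
            print(f"{name}: step {i} of {steps}")
            yield
    return task()

ready_queue = deque([make_task("editor", 2),
                     make_task("compiler", 3),
                     make_task("mail", 1)])

while ready_queue:
    task = ready_queue.popleft()   # next task in line gets the CPU
    try:
        next(task)                 # run one time slice
        ready_queue.append(task)   # not finished: go to the back of the line
    except StopIteration:
        pass                       # task completed; drop it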
Multithreading supports several simultaneous tasks within the same application. For example, with only
one copy of a database management system in memory, one database file can be sorted while data is
simultaneously entered into another database file.
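A rough illustration of that example in Python follows; the "files", record counts, and delays are invented, and the threading module simply stands in for a multithreaded DBMS. One thread sorts one list while a second thread enters records into another at the same time.

# Multithreading sketch: two tasks run concurrently inside one program.
import random
import threading
import time

def sort_file(records):
    records.sort()
    print("sort thread: finished sorting", len(records), "records")

def enter_data(records, count):
    for i in range(count):
        records.append(f"record-{i}")
        time.sleep(0.01)            # simulate typing delays
    print("entry thread: added", count, "records")

file_a = [random.randint(0, 1000) for _ in range(10000)]
file_b = []

t1 = threading.Thread(target=sort_file, args=(file_a,))
t2 = threading.Thread(target=enter_data, args=(file_b, 20))
t1.start(); t2.start()
t1.join(); t2.join()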