
Evolution of Client-Server Computing

The document discusses the evolution of client-server computing from mainframe-based systems to microcomputer-based systems. It describes how hardware and software advances enabled microcomputers to become more powerful, affordable and reliable for businesses. This led to the development of client-server models in which individual departments could control their own data and applications, moving away from centralized mainframe systems. Two-tier client-server systems were developed first, and then three-tier systems with separate application servers were created to improve the scalability and management of large enterprise systems.


The evolution of client-server computing has been driven by business needs, as well as by the increasing cost of host (mainframe and midrange) machines and their maintenance, the decreasing cost and increasing power of micro-computers, and the increased reliability of LANs (Local Area Networks). Over the past twenty years there have been dramatic improvements in the hardware and software technologies for micro-computers. Micro-computers have become affordable for small businesses and organisations, and at the same time their performance has become more and more reliable. Mainframes, by contrast, have fallen in price at a much slower rate, and little new development has been achieved with them. The improvements made in micro-computers include the following:

Hardware: The speed of desktop microprocessors has grown exponentially, from 8 MHz 386-based computers to 100 MHz Pentium-based microprocessors. These mass-produced microprocessors are cheaper and more powerful than those used in mainframe and midrange computers. Meanwhile, the capacity of main memory in micro-computers has been quadrupling every three years; a typical main memory size nowadays is 16 Megabytes. In addition, the amount of backup storage, such as hard disks and CD-ROMs, available to micro-computers has put an almost unlimited amount of data within reach of end-users.

Software: The development and acceptance of GUIs (Graphical User Interfaces) such as Windows 3.1 and OS/2 has made the PC working environment more user-friendly, and users learn new application software more efficiently in a graphical environment. Besides GUIs, the use of multithreaded processing and relational databases has also contributed to the popularity of client-server computing.

Evolution

Client/server computing was created because computer managers needed to respond quickly to business demands, which they could not do easily with central, mainframe-based applications. Application development time was too slow, and the results could not be tailored to the special needs of each department. The personal computer environment allowed users to have computing power and data under their own control. Unfortunately, this environment did not lend itself to collaboration between workers.

There was a great need for a system that would allow each department to have control over its own formatting and data usage standards; this led to departmental-level client/server. (Stallings & Van Slyke, 1997)

The next step was a move to a two-tier client/server system. The only real change here was that a true DBMS was substituted for the file server. This database server is a computer responsible for database storage, access, and processing in a client/server environment, while the client workstation is responsible for managing the user interface, including presentation logic, data processing logic and business rules logic. The result was a very reliable multi-user system, which made it a good solution for many different problems; a minimal sketch of the pattern is given below.
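To make the two-tier division of labour concrete, here is a minimal sketch in Java using the standard JDBC API. The connection URL, credentials, table and the sample business rule are illustrative assumptions rather than details from the text; the point is only that the workstation program holds the interface and business logic while the DBMS handles storage, access and query processing.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Two-tier sketch: this program is the client tier. It owns the user
    // interface and the business rules; the DBMS behind the JDBC URL owns
    // storage, access and query processing. (URL, schema and the "VIP"
    // rule are hypothetical.)
    public class TwoTierClient {
        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                    "jdbc:odbc:salesdb", "clerk", "secret")) {
                PreparedStatement query = db.prepareStatement(
                    "SELECT customer, balance FROM accounts WHERE balance > ?");
                query.setInt(1, 0);
                try (ResultSet rows = query.executeQuery()) {
                    while (rows.next()) {
                        // A business rule applied on the client workstation.
                        String flag = rows.getInt("balance") > 10000 ? " (VIP)" : "";
                        System.out.println(rows.getString("customer") + flag);
                    }
                }
            }
        }
    }

Note that every workstation opens its own database connection and carries its own copy of the business rules, which is precisely why, as the text goes on to observe, two-tier systems become hard to scale and manage.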

The problems with two-tier unfortunately soon became obvious: a two-tier system does not scale well and is not suitable for enterprise computing, and management problems grow with the size of the system. This led to the development of the three-tier system. Adding an application server to handle business and data logic created the three-tier system. This improvement provided more power, reduced the need for software on the client, and added scalability at reduced support cost.

Three-tier architecture has the database as the top tier. It works like any client-server database on a server, waiting to process data requests from approved users. The middle tier acts as a mediator, processing requests coming from the user and responses coming from the database. It maintains a full-time connection to the database using either native drivers, Open Database Connectivity (ODBC) or Java Database Connectivity (JDBC), and the middle tier often has its own user login to make that connection. All database interaction occurs at the middle tier. Significantly, the database believes that only one user is accessing it in this model: with a client-server database such as Oracle, Sybase or SQL Server serving 500 people, the system still sees only the one middle-tier login. At the bottom of the structure is a very thin "client" tier, probably written in Java or a Web-based technology that allows it to run inside a browser. The connection from the client tier to the middle tier is carried out through technologies chosen to suit the hardware platform and the development environment; a sketch of such a middle tier is given at the end of this section.

This type of architecture has several advantages. First, the part of the software that must be transmitted to the user's terminal can be quite small; a very thin client loads only a limited amount of data, which allows faster start-up times. In large businesses this kind of architecture also provides a simple means of centralized configuration management, and because this layer only has to handle the results of the application, the thin client can easily support a multi-platform environment. These improvements also came with some challenges: within this type of system there are more potential points of failure, few tools are available, performance sometimes suffers, and upgrades become a significant task.

The Internet is an example of a successful adaptation of a three-tier client/server system that uses an open set of standards, allowing diverse networks to interconnect. The introduction of a separate Web server expands the power and use of the system, and a mainframe can be brought into the system, allowing existing legacy applications to be used in a new context. Microsoft, IBM, and Netscape are each promoting their own version of what the standard should be. (Anonymous, 1996; Vandersluis, 1999)
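As a closing illustration, here is a hedged sketch of the middle tier described above, using only JDBC and the JDK's built-in HTTP server. The connection URL, login, port and SQL are assumptions for illustration; the essential points from the text are that the middle tier holds the single, full-time database login, and that thin clients send it small requests over HTTP instead of talking to the database directly.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Middle-tier sketch: one process mediates between thin clients and
    // the database. The DBMS sees a single login ("middletier") no matter
    // how many browsers connect. URL and schema are hypothetical.
    public class MiddleTier {
        public static void main(String[] args) throws Exception {
            // The full-time connection, opened with the middle tier's own login.
            Connection db = DriverManager.getConnection(
                    "jdbc:odbc:salesdb", "middletier", "secret");

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/customers", exchange -> {
                StringBuilder body = new StringBuilder();
                try (Statement st = db.createStatement();
                     ResultSet rows = st.executeQuery("SELECT name FROM customers")) {
                    while (rows.next()) {
                        body.append(rows.getString("name")).append('\n');
                    }
                } catch (Exception e) {
                    body.append("error: ").append(e.getMessage());
                }
                byte[] bytes = body.toString().getBytes();
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(bytes);
                }
            });
            server.start(); // Thin clients now issue, e.g., GET /customers.
        }
    }

A browser-hosted thin client then needs nothing more than an HTTP GET to /customers; a production middle tier would pool several such logins rather than share one connection across threads, but the database would still see only the middle tier, not the 500 end-users.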
