Network
Distributed Processing
The structure of a distributed processing network can significantly
affect its functionality, cost, reliability, and performance. The
first distributed processing architecture to use network data
communications was the file server approach. Its limitations gave
rise to the client/server approach, the structure widely used by
businesses to downsize from centralized mainframe computer
processing. The third common approach is peer-to-peer. (Turban,
McLean and Wetherbe, 1996)
File Server Architecture
File server distributed processing is the simplest approach for a
relatively large number of network processors to share data.
Enterprise-wide data are digitally stored in a designated file
server processor connected to the network. When an application at
any node needs to perform processing on data contained in a file,
the application requests the file from the file server. Then, the
file server responds by transmitting the entire file to the
requesting node processor, and the requesting node performs the
necessary processing locally. This arrangement has several
limitations, which helped spur the development of the
client/server computing architecture.
First, large files can choke the network whenever they are
transmitted to a node for processing, especially if they are
transmitted often to a large number of nodes.
Second, many nodes may alter the same file simultaneously,
creating chaos when each copy is "returned" to the file server.
Third, the file server can allow only one node at a time to
update a file, which slows access when many users need the same
data.
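The first two limitations can be illustrated with a minimal in-memory sketch (class and file names are illustrative, not from the source): the server ships the entire file to each requesting node, which edits it locally, and a later "return" silently overwrites an earlier one.

```python
# Toy simulation of the file-server pattern and its lost-update problem.

class FileServer:
    def __init__(self):
        self.files = {}          # filename -> list of records

    def request_file(self, name):
        # The whole file crosses the network, even for a one-record change.
        return list(self.files.get(name, []))

    def return_file(self, name, contents):
        # A later "return" overwrites earlier updates: the chaos described
        # above when many nodes edit the same file.
        self.files[name] = contents


server = FileServer()
server.files["accounts"] = [100, 200, 300]

# Two nodes fetch the same file and edit their local copies.
node_a = server.request_file("accounts")
node_b = server.request_file("accounts")
node_a[0] += 50          # node A's update
node_b[2] -= 25          # node B's update

server.return_file("accounts", node_a)
server.return_file("accounts", node_b)   # overwrites A's change

print(server.files["accounts"])          # A's update is lost: [100, 200, 275]
```

Locking the file so only one node may update it at a time prevents this overwrite, which is exactly the third limitation above: correctness is bought at the cost of concurrency.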
Client/Server Architecture
Client/server (C/S) refers to computing technologies in which the
hardware and software components, i.e., the clients and servers,
are distributed across a network. The term covers both the
traditional database-oriented client/server technology and more
recent general distributed computing technology. A client/server
system is a user-centric system that emphasizes the user's
interaction with the data. Client/server computing splits
processing between "clients" and "servers"; the user experiences
the network as a single system, with all functions, both client
and server, integrated and accessible.
The client is the user's point of entry for the required function
in a client/server computing application; it is normally a
desktop computer, workstation, or laptop. The user generally
interacts directly with only the client portion of the
application, typically through a graphical user interface. The
user typically utilizes it to input data and query a database to
retrieve data. Once the data have been retrieved, the user can
analyze and report on them, using fourth-generation packages such
as spreadsheets, word processors, and graphics applications
available on the client machine on the user's own desktop.
The server satisfies some or all of the user's requests for data or
functionality and might be anything from a supercomputer or
mainframe to another desktop computer. Servers store and process
shared data and also perform back-end functions not visible to
users, such as managing peripheral devices and controlling access
to shared databases.
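The contrast with the file server approach can be sketched in a few lines (the class and method names are illustrative only): the server holds the shared data and runs the query itself, so only the matching rows travel back to the client rather than the entire file.

```python
# Minimal sketch of the client/server split: back-end processing on the
# server, front-end interaction on the client.

class DatabaseServer:
    def __init__(self, rows):
        self.rows = rows                     # shared data lives on the server

    def query(self, predicate):
        # Processing happens here; only the results cross the network.
        return [r for r in self.rows if predicate(r)]


class Client:
    """Front end: gathers the user's request and presents the results."""
    def __init__(self, server):
        self.server = server

    def find_balances_over(self, limit):
        return self.server.query(lambda r: r["balance"] > limit)


server = DatabaseServer([
    {"account": "A-1", "balance": 120},
    {"account": "A-2", "balance": 80},
    {"account": "A-3", "balance": 300},
])
client = Client(server)
print(client.find_balances_over(100))   # only the two matching rows return
```

Because the server filters the data before transmission, a query touching a huge shared table no longer floods the network the way a full file transfer would.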
Every organization has its own data handling and data processing
requirements. For instance, data communication at Northern Bank
requires sharing data between three branch offices, located in
two adjacent towns, and a credit card authorization center.
Automatic teller machines (ATMs) and workstations are hardwired
with coaxial cable to a server in each branch. Northern Bank uses
a flexible client/server network design. In a client/server
network, the user's computer, the client, takes on more
responsibility than it does in a traditional server-oriented
network. The client computer handles the user interface software.
Northern's ATMs and personal computers are diskless workstations;
they have no disk drives because the data are maintained by the
server.
The advantages of client/server computing include user
convenience, scalability, and greater ability to accommodate and
maintain hardware and software from different vendors.
Furthermore, there is another category of tool, called middleware,
between the client tools and the server tools. Middleware
controls communication between clients and servers and performs
whatever translation is necessary to make a client's request
understandable to a server device. It provides required services
such as remote database access, interprocess communication,
distributed object management, directory services, and security
services. Products and technologies in this area include:
The Common Object Request Broker Architecture (CORBA) from the
Object Management Group (OMG),
The Distributed Computing Environment (DCE) from the Open
Software Foundation (OSF),
The Component Object Model (COM) and OLE 2.0 from Microsoft, and
Open Database Connectivity (ODBC) from Microsoft.
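The translation role middleware plays can be seen in miniature in Python's standard database API (PEP 249), where the same connect/cursor/execute calls work regardless of which database driver sits underneath; here the built-in sqlite3 driver stands in for a networked ODBC data source, and the table and values are invented for illustration.

```python
import sqlite3

# The DB-API layer translates uniform calls into driver-specific requests,
# much as ODBC middleware translates a client's request for a server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE branches (town TEXT, atms INTEGER)")
cur.executemany("INSERT INTO branches VALUES (?, ?)",
                [("Northtown", 4), ("Southtown", 2)])

# The client states what it wants; the driver handles how to get it.
cur.execute("SELECT town FROM branches WHERE atms > ?", (3,))
rows = cur.fetchall()
print(rows)   # [('Northtown',)]
conn.close()
```

Swapping sqlite3 for an ODBC or network database driver would leave the application code essentially unchanged, which is precisely the portability middleware is meant to provide.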
Peer-To-Peer Architecture
Peer-to-peer architecture is an important alternative to
client/server for small computer networks. In a peer-to-peer
network, each workstation can communicate directly with every
other workstation without going through a specialized server.
Peer-to-peer is appropriate when the network users mostly do
their own work but occasionally need to exchange data. In these
cases, it may be more efficient to keep data and copies of the
software at each workstation to avoid the delays of downloading
data and software each time a user gets started. However,
peer-to-peer also has potential problems in security and
consistency. For example, with data at someone else's
workstation, the data may be difficult to retrieve when that
person is out of the office and the workstation is shut off.
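A short sketch (with invented class and file names) captures both the direct exchange and the availability risk just described: every peer holds its own data and answers requests from any other peer, but a request fails outright when the owner's machine is switched off.

```python
# Toy peer-to-peer exchange: no central server, every node is both
# data owner and requester.

class Peer:
    def __init__(self, name, online=True):
        self.name = name
        self.online = online
        self.files = {}

    def fetch_from(self, other, filename):
        # Direct peer-to-peer request; it fails if the owning
        # workstation is shut off, the risk noted above.
        if not other.online:
            raise ConnectionError(f"{other.name} is switched off")
        return other.files[filename]


alice = Peer("alice")
bob = Peer("bob")
bob.files["report.txt"] = "Q3 figures"

print(alice.fetch_from(bob, "report.txt"))   # direct transfer succeeds

bob.online = False                           # Bob leaves and shuts down
try:
    alice.fetch_from(bob, "report.txt")
except ConnectionError as err:
    print(err)                               # data now unreachable
```

There is no server to fall back on: the data's availability is tied to the availability of the workstation that holds it.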