Grid Computing - The Future of Internetworking

If you use the internet from your desktop, your computer is part of a network, and that network is in turn part of other networks. In short, the internet is a networking infrastructure. It grew out of visionary thinking in the early 1960s by people who saw great potential value in allowing computers to share information on research and development in scientific and military fields. Shrinking computer sizes and information-sharing services eventually made it possible for computers all over the world to interconnect. The 'World Wide Web' is a service built on top of the internet which enables millions of computers around the world to share information.

The internet has served us well so far, but it has shortcomings, and scientists and computing experts are always on the lookout for the next leap forward. By itself, the internet only lets you share information. To go beyond that, experts have devised a new system called the 'Grid'. A grid computing system is a type of parallel computing system which enables us to share computing power, disk storage, databases and software applications. The term 'Grid' first appeared in Ian Foster and Carl Kesselman's seminal work, The Grid: Blueprint for a New Computing Infrastructure. Ian Foster is the Associate Division Director in the Mathematics and Computer Science Division at Argonne National Laboratory (United States), where he leads the Distributed Systems Laboratory, and he is a Professor in the Department of Computer Science at the University of Chicago. In one of his articles, Foster lists these primary attributes of a Grid:

1. Computing resources are not administered centrally
2. Open standards are used
3. Nontrivial quality of service is delivered

A grid computing system, a form of distributed computing system, relies on computers connected to a network through a conventional interface such as Ethernet. Working together, these computers can yield results comparable to, or better than, those of a supercomputer. Each computer remains independently controlled and can perform tasks unrelated to the grid at its operator's discretion. This is what Ian Foster indicates in the first point of his checklist.

The striking feature of a grid computing system is that it enables sharing of computing power. One form of this is CPU scavenging, in which the grid harvests the unused cycles of participating computers, that is, the time their processors would otherwise sit idle. Shared computing is the act of spreading a task over multiple computers: it is like getting a job done by a crowd of people rather than by one strong individual. As a result, a task that once took days can finish in a fraction of the time. IBM has been working on a global-scale supercomputer based on this concept, named 'Project Kittyhawk', with the ambition of running the entire internet as an application. Shared computing is one of the services that a grid will provide.
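The split-and-combine pattern behind shared computing can be sketched in a few lines of Python. The prime-counting job, the `split` helper, and the thread pool below are all illustrative stand-ins: on a real grid, middleware would dispatch each chunk to a different machine rather than to local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) -- one CPU-bound chunk of the larger job."""
    lo, hi = bounds
    return sum(
        1
        for n in range(max(lo, 2), hi)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

def split(lo, hi, parts):
    """Divide the range [lo, hi) into `parts` roughly equal sub-ranges."""
    step = (hi - lo) // parts
    edges = [lo + i * step for i in range(parts)] + [hi]
    return list(zip(edges[:-1], edges[1:]))

# One chunk per "node"; on a real grid each would run on a separate machine.
chunks = split(0, 50_000, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(count_primes, chunks))
total = sum(partials)  # combine the partial results
print(total)           # same answer as computing the whole range at once
```

Because the chunks are independent, the partial counts can be computed in any order on any node and simply summed at the end; that independence is what makes a job a good fit for a grid.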

Desktops, laptops, supercomputers and clusters combine to form a grid. These computers can have different hardware and operating systems, and grids are usually loosely coupled over a decentralized network rather than contained in a single location, as the computers in a cluster often are. This flexibility is what distinguishes a grid from a cluster. Moreover, a grid is built from open, standard protocols and interfaces, such as the TCP/IP suite, which lets it span organizations and countries rather than being confined to a single site.

In addition to computing power, a grid allows you to share disk storage, databases and software applications. Such a system can help a wide range of people, from scientists to consumers. Most computers in mid-size and large organizations sit idle a large percentage of the time; CPU scavenging puts those idle processors to work on other important tasks. Scientists would rather visualize their applications in real time than wait while results are shipped out, verified, and sent back. With the help of a grid, people from different fields of expertise can hook up to remote computers and share their findings, which both speeds up the process and improves the quality of the results.

One of the biggest grid projects is being carried out in Switzerland by the European Organization for Nuclear Research (CERN). Thousands of desktops, laptops, mobile phones, data vaults, meteorological sensors and telescopes will constitute a grid that must absorb an annual output of roughly 15 million gigabytes of data. The data will be produced when two beams of subatomic particles called hadrons collide in the Large Hadron Collider, a gigantic scientific instrument about 100 meters underground near Geneva, Switzerland. Thousands of scientists around the world want to access and analyze this data, so CERN is collaborating with institutions in 33 different countries to operate a distributed computing and data storage infrastructure: the LHC Computing Grid (LCG).

To reap the benefits of such a system reliably, it is important that the computers performing the calculations be trustworthy. The designers of the system must therefore introduce measures to prevent malfunctioning or malicious participants from producing false, misleading, or incorrect results, and from using the system as an attack vector. They must also tolerate churn: in a grid, computers drop out at any time, either voluntarily or through failure.
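One common defense, used by volunteer-computing systems such as BOINC, is redundant computation: the same work unit is sent to several independent nodes, and a result is accepted only when a quorum of them agree. A minimal sketch of that voting step follows; the function name and the sample replies are illustrative, not taken from any real grid middleware.

```python
from collections import Counter

def accept_result(replies, quorum=2):
    """Accept a work unit's result only if at least `quorum` independent
    nodes returned the same answer; otherwise return None so the
    scheduler can re-issue the work unit to other nodes."""
    answer, votes = Counter(replies).most_common(1)[0]
    return answer if votes >= quorum else None

# Three nodes computed the same work unit; one is faulty or malicious.
print(accept_result([5133, 5133, 99]))  # majority agrees, result accepted
print(accept_result([1, 2, 3]))         # no agreement, work unit re-issued
```

Replication trades computing power for trust: each work unit costs two or more times as much, but a single bad node can no longer corrupt the final result unnoticed.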
