The Next Big Speed Boost: Parallel Computing With High-Speed Networks
December 11, 1997
GAINESVILLE — An accountant for XYZ Widget Co. has stepped out for lunch and left his computer running with the Dilbert screen saver doing its job.
In his absence and without his knowledge, a co-worker downstairs in engineering has silently tapped into the very brain of the accountant’s computer, borrowing powerful processor cycles and using them for her own purposes.
A hacker at work? Not at all. It’s parallel computing with high-performance networks, a concept University of Florida researchers are developing, and it’s perfectly legal.
Alan George, director of UF’s High-performance Computing and Simulation Research Lab, says parallel computing could make computers thousands of times faster than today’s models and sidestep the physical limitations of ever-shrinking microprocessors. However, to achieve effective parallel computing, computer communication networks must undergo major advances. That’s what the lab is after.
“The lack of virtually instant access and immediate problem solving is keeping us from reaching the full potential of the computer revolution,” said George.
With parallel computing, multiple processors are linked in a network so that computers, whether across the hallway or across the globe, can talk to one another as easily and quickly as possible. One step toward that goal is replacing existing network wires with fiber-optic cables.
With advances under way for next-generation networks, George said, a user working on a large, difficult problem could borrow unused processing power from other computers on the network to boost the power of his or her own machine.
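In rough terms, that kind of borrowing works like a shared to-do list: the machine with the big job posts tasks on the network, and idle machines pull tasks off the list and send back answers. The Python sketch below illustrates the pattern using the standard library's multiprocessing.managers module; the port number, the shared key and the toy squaring "work" are placeholders for illustration, not details of the UF lab's system.

    # Minimal sketch of "borrowed cycles" (illustrative placeholders only):
    # one machine publishes a shared work queue; an idle machine on the
    # network connects, pulls tasks, and returns results.
    # Usage:  python borrow.py server   (on the machine with work to share)
    #         python borrow.py worker   (on the idle machine)
    import sys
    import queue
    from multiprocessing.managers import BaseManager

    PORT = 50000                   # hypothetical port for the demo
    AUTHKEY = b"demo-secret"       # shared secret so only co-workers connect


    class WorkManager(BaseManager):
        """Exposes a task queue and a result queue over the network."""


    def run_server():
        tasks, results = queue.Queue(), queue.Queue()
        WorkManager.register("tasks", callable=lambda: tasks)
        WorkManager.register("results", callable=lambda: results)
        for n in range(10):        # seed some CPU-bound work (here: numbers to square)
            tasks.put(n)
        manager = WorkManager(address=("", PORT), authkey=AUTHKEY)
        manager.get_server().serve_forever()    # blocks; workers connect to it


    def run_worker(host="localhost"):
        WorkManager.register("tasks")
        WorkManager.register("results")
        manager = WorkManager(address=(host, PORT), authkey=AUTHKEY)
        manager.connect()
        tasks, results = manager.tasks(), manager.results()
        while not tasks.empty():
            n = tasks.get()
            results.put((n, n * n))             # stand-in for the real computation


    if __name__ == "__main__":
        role = sys.argv[1] if len(sys.argv) > 1 else "worker"
        run_server() if role == "server" else run_worker()

The idle machine need do nothing special beyond running a worker like the one above; its owner never notices the borrowed cycles.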
George said network bottlenecks mean conventional computers can now tap only a fraction of a network's full potential. The computer revolution, he said, has depended largely on advances in microprocessors, which are nearing their peak because of limits on chip density.
Existing technologies such as the World Wide Web, teleconferencing and e-mail allow people to interact, but limits on computing and communication speed pose barriers. Whether the task is parallel computing or Web browsing, today's computers simply were not designed to exploit the full potential of fast communication among machines. The future, he said, will depend on high-performance networks that support parallel and other forms of computing.
“The performance we can achieve is inevitably driven by the weakest link in that chain,” George said.
So what are the potential benefits? Operating rooms could be equipped with video conferencing so the best surgeons from other medical facilities could consult on an operation in real time, with no perceptible delay.
Another example: Meteorologists could build a complex hurricane-tracking model and run the simulation in minutes or hours, well before the hurricane makes landfall. Today's computers may take a week to build and run such a model; by that time, the storm has come ashore and done its damage.
This form of parallel computing already plays a large role in computer animation. "Toy Story," the first full-length feature film created entirely by computer, was produced on many workstations working at once. Each frame of the film was broken into sections, each section was computed by a separate machine, and the finished pieces were stitched back together.
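A single-machine analogue of that divide-and-reassemble approach can be sketched in a few lines of Python; the frame size, the number of strips and the toy per-pixel "rendering" below are made-up stand-ins, not anything from the film's actual pipeline.

    # Illustrative sketch: split one frame into horizontal strips, "render"
    # each strip in a separate worker process, then reassemble the strips
    # in order into the finished frame. All numbers are hypothetical.
    from concurrent.futures import ProcessPoolExecutor

    WIDTH, HEIGHT, STRIPS = 640, 480, 8


    def render_strip(job):
        """Stand-in for the expensive per-pixel rendering of one strip."""
        index, y_start, y_end = job
        pixels = [[(x ^ y) % 256 for x in range(WIDTH)]   # toy pattern, not a real shader
                  for y in range(y_start, y_end)]
        return index, pixels


    def render_frame():
        strip_height = HEIGHT // STRIPS
        jobs = [(i, i * strip_height, (i + 1) * strip_height) for i in range(STRIPS)]
        with ProcessPoolExecutor() as pool:               # one worker per processor core
            finished = dict(pool.map(render_strip, jobs))
        frame = []
        for i in range(STRIPS):                           # piece the sections back together
            frame.extend(finished[i])
        return frame


    if __name__ == "__main__":
        frame = render_frame()
        print(f"rendered a {len(frame[0])}x{len(frame)} frame in {STRIPS} parallel sections")

The same structure scales from processor cores on one machine to workstations on a network; only the cost of moving the pieces around changes, which is why faster networks matter.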
Bill Phipps, an electrical and computer engineering graduate student who has been working at the UF lab for more than two years, said linking multiple computers with high-speed communication may be the next big step in carrying computers into the 21st century.
“This is going to be the thing that saves the day,” Phipps said.
Parallel computing with high-performance networks likely will reach the commercial market early in the next decade, though George said several problems must still be solved before it reaches the home, such as how to bring inexpensive fiber-optic cabling into houses in place of telephone and cable-television wires.