
Faster supercomputers are needed to solve important scientific and engineering problems. Computing twice as fast would be impressive; ten times faster would be even more impressive.
Philip Emeagwali, a doctoral candidate in scientific computing
in the College of Engineering and the 1989 recipient of the
Gordon Bell Prize for his supercomputing research, has increased
the speed of a massively parallel supercomputer to as much as
1,000 times faster than a mainframe computer and 1,000,000
times faster than a personal computer.
"The supercomputer industry and much of the academic establishment have claimed that massively parallel computers were suited only for certain types of problems," Emeagwali says. "But in the past few months, reports at scientific gatherings and in the news media have indicated that some investigators using the Connection Machine the largest massively parallel supercomputer now available  have proved the establishment wrong." Emeagwali had already been looking at computationintensive problems from a theoretical standpoint. When he learned of a $1,000 prize offered by the Institute of Electrical and Electronic Engineers Computer Society for the fastest computation in a scientific and engineering problem requiring trillions of calculations, he decided to compete. Emeagwali studied the U.S. government's list of the 20 most computationally difficult problems. The one that interested him most involved calculating oil. Even before the onset of war in the Persian Gulf, American experts recognized the importance of improving the efficiency of oil extraction. "The oil industry purchases 10 percent of all supercomputers and is keenly aware of the difficulty of computing oilfield flow," Emeagwali says. Oil has properties that make calculating its flow patterns within an oil field more difficult than modeling the flow of groundwater. To model oilfield flow in a computer requires the simulation of the distribution of the oil at tens of thousands of locations throughout the farflung field. At each location, the computer must be programmed to make hundreds of simultaneous calculations at regular intervals of time to determine such variables as temperature, direction of oil flow, viscosity, pressure and several geological properties of the basin holding the oil. "Even a supercomputer working at the rate of millions of calculations a second is far too slow to reach a result that can be acted on in a timely fashion," Emeagwali explains. 
"The oil companies need the results quickly enough to decide how to recover the maximum amount of oil." Since an average of only 30 percent of oil is recovered in an oil field, Emeagwali notes, "It's easy to understand why the industry is keenly interested in more accurate simulations of oil flow. An improvement to even a 31 percent recovery rate  just one percentage point  translates into billions of dollars of savings." Emeagwali attracted the attention of many industries and investigators when he won the Gordon Bell Prize by showing how he used a $6 million massively parallel computer to perform the trillions of oil fieldmodeling computations at three times the speed of the mightiest $30 million supercomputer. He hit a computational speed of 3.1 billion calculations per second. How did he do it? It took some creative mathematical thinking for Emeagwali, who was renowned for mathematical prowess even as a child in Nigeria, to hit upon a `new' technique that resurrected some equations that had grown dusty in the computing field for 50 years. Rather than use the equations that have been used throughout the century to calculate oilfield flow and similar phenomena, Emeagwali asked himself, "When did we start using these equations, and why did we start using them?"
He researched those equations and learned that in the late
19th century "a type of partial differential equation similar
to the classical `heat equation' was derived to perform the kinds of
calculations required to describe oilfield flow."
In 1938 a Soviet mathematician, B. K. Risenkampf, derived a set of partial differential equations that included a fourth force: inertia. The Risenkampf equations belong to the hyperbolic category. Until the invention of the massively parallel computer, it made no sense to try to apply Risenkampf's equations to problems like oilfield flow; it would have required too many computations for existing computing technology, from calculating machines to supercomputers.

"The fourth, or inertial, force affecting the slow flow of oil in the ground is about 10,000 times smaller than the three other forces," Emeagwali explains, "so neglecting inertia didn't result in much error; the solutions still resembled those of the parabolic equations. If I put 10,000 dollar bills on the table in ones and you take a dollar, I'm not likely to detect and report the crime. In the same way, it was reasonable to ignore the inertial force back then."
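A textbook way to see the distinction (an illustrative pair of model equations, not necessarily Risenkampf's exact system): retaining a small inertial term turns the familiar heat-type equation into a hyperbolic one.

```latex
% Parabolic (heat-type) model: inertia neglected
\frac{\partial p}{\partial t} = \kappa\,\nabla^{2} p

% Hyperbolic model: a small inertial term retained
\tau\,\frac{\partial^{2} p}{\partial t^{2}}
  + \frac{\partial p}{\partial t} = \kappa\,\nabla^{2} p
```

When the coefficient of the inertial term is on the order of 10,000 times smaller than the others, matching Emeagwali's "10,000 dollar bills" analogy, the two models give nearly identical answers, which is why dropping inertia was a reasonable approximation before parallel hardware made the fuller equations tractable.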
Emeagwali had become interested in the Risenkampf equations while working at the National Weather Service, and decided to take a "top-down approach" by seeing if the hyperbolic equations would result in a better model of the oilfield flow. "I knew that hyperbolic equations result in solutions that more accurately reflect the real world," Emeagwali says, and so he expected them to yield a better representation of the real properties of oilfield flow.

Even though they are more complex, Emeagwali theorized, hyperbolic equations would open "a shorter and quicker path" to the solution of modeling flow. And in terms of his academic goals, using hyperbolic equations on a massively parallel computer would show that "calculations that could take months, even years, to perform on a personal computer could be done in seconds or minutes."

"If we had massively parallel computers a hundred years ago," Emeagwali continues, "we would have used hyperbolic equations instead of parabolic. The serial computer hardware we have today reflects the absence of a need to go the hyperbolic route. But once you have a certain kind of hardware, it reinforces the methods you've used. It's not that anyone is to blame for it, but in a sense computers have developed down a blind alley."

In the future, Emeagwali says, and the very near future at that, the architecture of massively parallel computers like the Connection Machine will trickle down to the personal computer level. They will increase realism in what computer buffs call artificial reality (AR). More important for civilization will be the impact of massive parallelism at the supercomputer level. Emeagwali expects to see quite soon "automakers using these computers to fully simulate car crashes on the computer rather than crashing expensive rigged-out models at up to $750,000 a test."
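Emeagwali's point about hyperbolic equations and parallel hardware can be made concrete. In an explicit scheme for a wave-type (hyperbolic) equation, each grid point's next value depends only on its own recent history and its four nearest neighbors, so in principle every point can be updated simultaneously by its own processor with purely local communication. The sketch below is an assumed serial NumPy illustration of one such "leapfrog" step, not Connection Machine code.

```python
import numpy as np

def leapfrog_step(u_prev, u_curr, c2=0.25):
    """One leapfrog step of a 2-D wave-type equation (c2 = (c*dt/dx)**2)."""
    # Each interior point needs only its four nearest neighbors...
    lap = (u_curr[:-2, 1:-1] + u_curr[2:, 1:-1] +
           u_curr[1:-1, :-2] + u_curr[1:-1, 2:] - 4.0 * u_curr[1:-1, 1:-1])
    # ...plus its own values at the two previous time steps.
    u_next = u_curr.copy()
    u_next[1:-1, 1:-1] = (2.0 * u_curr[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                          + c2 * lap)
    return u_next

# A point disturbance spreading as a wave across a small grid.
u_prev = np.zeros((64, 64))
u_curr = np.zeros((64, 64))
u_curr[32, 32] = 1.0
for _ in range(20):
    u_prev, u_curr = u_curr, leapfrog_step(u_prev, u_curr)

print(u_curr.sum())
```

Because no update reaches beyond its immediate neighbors, the grid can be carved into blocks, one per processor, with only the block edges exchanged each step; this locality is what makes hyperbolic formulations such a natural fit for massively parallel machines.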
In medicine, "Investigators will find that using computers based on the technology of massive parallelism will permit them to study human diseases by studying humans without compromising human health, instead of using mice, chimpanzees and the like." "Any way you look at it, " Emeagwali concludes, "the computer industry will have no choice. They will have to switch to massive parallelism." Emeagwali hopes to give the industry a big nudge in early 1991 if his latest submission for the international computing contest is as convincing as last year's. "I'm trying to prove that we know how to reach the Holy Grail of computing  computing at the teraflops level by performing trillions of calculations in a second" [see main article].
Emeagwali says massively parallel supercomputers are approximately five times faster
than conventional machines now, but he forecasts that the advantage will
approach 100-to-1 in 10 years. If he's right, you can expect radical changes in the computer
industry very soon.
Reported by John Woodford in the February 1991 issue of Michigan Today.
See emeagwali.com for more information.
