Back to Basics - Why go parallel?
Parallel programs are no longer limited to the realm of high-end computing. In this column, Darin Ohashi takes us back to basics to explain why we all need to go parallel.
Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?
For years, processor designers were able to increase the performance of processors by increasing their clock speeds. But a few years ago, they ran into a few serious problems. RAM speeds were not able to keep up with the increased speed of processors, causing processors to waste clock cycles waiting for data. The speed at which signals can propagate through wires is limited, leading to delays within the chip itself. Finally, increasing a processor's clock speed also increases its power requirements. Increased power requirements lead to the processor generating more heat (which is why overclockers come up with such ridiculous cooling solutions).
With so many issues making it increasingly difficult to increase clock speed, designers needed to find another way to improve performance. That's when they realized that instead of increasing the core's clock speed, they could keep the clock speed fairly constant and put more cores on the chip. Thus was born the multi-core revolution.
Unfortunately, the shift to multi-core processors has led to some serious issues on the software side. In the past, processor clock speed was doubling about every 18 months. Thus a piece of software would run faster over time simply because it was running on newer, faster processors.
With multi-core processors, this speedup no longer occurs. Clock speeds have settled around the two to three gigahertz range. New processors may be slightly faster for non-parallel (single-threaded) applications (usually due to architectural changes, as opposed to clock speed increases), but the real increase in computer processing power comes from multiple cores.
A single-threaded application will never be able to utilize the increase in power provided by multiple cores. You could be using a processor that is many times more powerful, but a single-threaded application will not show a corresponding increase in speed.
In other words, if you want an application to get faster, you can no longer rely on processor clock speed increasing over time. To take advantage of processor improvements and speed up an application, the application must be written in parallel and it must be able to scale to the number of available cores.
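To make this concrete, here is a minimal sketch in Python of what "scaling to the number of available cores" can look like: a batch of independent, CPU-bound tasks is spread across a pool of worker processes sized to the machine. The function names (`heavy_work`, `run_parallel`) and the sum-of-squares workload are illustrative stand-ins, not part of the original column.

```python
import os
from multiprocessing import Pool

def heavy_work(n):
    # Stand-in for a CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    # Spread independent chunks of work across all available cores.
    # A plain single-threaded loop over `inputs` would keep only one
    # core busy, no matter how many the machine has.
    with Pool(processes=os.cpu_count()) as pool:
        return pool.map(heavy_work, inputs)

if __name__ == "__main__":
    results = run_parallel([100_000] * 8)
```

On a machine with eight cores, the eight tasks above can run essentially concurrently; the same work written as a sequential loop would take roughly eight times as long, which is exactly the gap a single-threaded application leaves on the table.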
As an aside, please note that I have been talking about parallelizing for performance reasons. Certain types of programs, or parts of programs, are traditionally implemented using multiple threads, such as graphical user interfaces (GUIs). These applications generally use threads to handle multiple inputs at the same time, or to reduce the latency of certain interactions. In these cases, having access to multiple cores often does not improve the overall performance. Thus, even programs such as GUIs will need to parallelize their time-consuming algorithms if they are to take advantage of (and thus get faster on) multi-core machines.
Hopefully I've convinced you that parallelizing applications is not just important, but necessary. But for a more in-depth look at the software issues, take a look at The Free Lunch Is Over by Herb Sutter, which originally appeared in Dr. Dobb's Journal, 30(3), March 2005.
A version of this story originally appeared in Darin Ohashi's blog.