
Back to Basics - Why go parallel?

Darin Ohashi is a senior kernel developer at Maplesoft. His background is in mathematics and computer science, with a focus on algorithm and data structure design and analysis. For the last few years he has been focused on developing tools to enable parallel programming in Maple, a well-known scientific software package.

Parallel programs are no longer limited to the realm of high-end computing. In this column, Darin Ohashi takes us back to basics to explain why we all need to go parallel.

Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?

For years, processor designers were able to increase the performance of processors by increasing their clock speeds. But a few years ago, they ran into several serious problems. RAM speeds were not able to keep up with the increased speed of processors, causing processors to waste clock cycles waiting for data. Electrical signals can only propagate through a chip's wires at a limited speed, leading to delays within the chip itself. Finally, increasing a processor's clock speed also increases its power requirements. Increased power requirements lead to the processor generating more heat (which is why overclockers come up with such elaborate cooling solutions).

Glossary of terms:
  • core: the part of a processor responsible for executing a single series of instructions at a time.
  • processor: the physical chip that plugs into a motherboard. A computer can have multiple processors, and each processor can have multiple cores.
  • process: a running instance of a program. A process's memory is usually protected from access by other processes.
  • thread: a running instance of a process's code. A single process can have multiple threads, and multiple threads can execute at the same time on multiple cores.
  • parallel: the ability to utilize more than one processor at a time to solve problems more quickly, usually by being multi-threaded.
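The process/thread distinction above can be seen in a short sketch. Python is used here purely as an illustration (the article itself names no language), and note one caveat: in CPython, CPU-bound threads do not actually run in parallel because of the global interpreter lock. The example still shows the key point of the glossary: several threads running inside one process and sharing that process's memory.

```python
import threading

def count_up(n, results, i):
    # Each thread runs this function independently, but all threads
    # share the same `results` list because they live in one process.
    total = 0
    for _ in range(n):
        total += 1
    results[i] = total

# One process, four threads sharing the same memory.
results = [0] * 4
threads = [threading.Thread(target=count_up, args=(100_000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread to finish

print(results)  # [100000, 100000, 100000, 100000]
```

If these were separate processes rather than threads, each would have its own protected copy of `results`, and the writes would not be visible to the others without explicit inter-process communication.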

With so many issues making it increasingly difficult to increase clock speed, designers needed to find another way to improve performance. That's when they realized that instead of increasing the core's clock speed, they could keep the clock speed fairly constant and put more cores on the chip. Thus was born the multi-core revolution.

Unfortunately, the shift to multi-core processors has led to some serious issues on the software side. In the past, processor clock speed was doubling about every 18 months. Thus a piece of software would run faster over time simply because it was running on newer, faster processors.

With multi-core processors, this automatic speedup no longer occurs. Clock speeds have settled in the two to three gigahertz range. New processors may be slightly faster for non-parallel (single-threaded) applications, usually due to architectural changes rather than clock speed increases. But the real increase in computer processing power comes from multiple cores.

A single-threaded application will never be able to utilize the increase in power provided by multiple cores. You could be using a processor that is many times more powerful, but a single-threaded application will not show a corresponding increase in speed.

In other words, if you want an application to get faster, you can no longer rely on processor clock speeds increasing over time. To take advantage of processor improvements and speed up an application, the application must be written to run in parallel, and it must be able to scale to the number of available cores.

As an aside, please note that I have been talking about parallelizing for performance reasons. Certain types or parts of programs are traditionally implemented using multiple threads, such as graphical user interfaces (GUIs). These applications generally use threads to handle multiple inputs at once, or to reduce the latency of certain interactions. In these cases, having access to multiple cores often does not improve overall performance. Thus, even programs such as GUIs will need to parallelize their time-consuming algorithms if they are to take advantage of (and thus get faster on) multi-core machines.

Hopefully I've convinced you that parallelizing applications is not just important, but necessary. But for a more in-depth look at the software issues, take a look at The Free Lunch Is Over by Herb Sutter, which originally appeared in Dr. Dobb's Journal, 30(3), March 2005.

A version of this story originally appeared in Darin Ohashi's blog.


Copyright © 2021 Science Node ™
