Opinion - The rise of parallelism (and other computing challenges)


The ILLIAC IV supercomputer, whose design began in 1966, was a pioneering parallel computing machine. Only a quarter of the planned machine was ever built, yet it took eleven years to complete at nearly four times the original estimated cost.
Image courtesy of Steve Jurvetson

In the past, parallelism was just one solution among the many available to manufacturers who wanted to offer computer architectures with attractive peak performance.

Today, parallelism is no longer an "option": manufacturers must make extensive use of parallelism in order to offer attractive solutions.

Parallelism is no longer confined to the field of high-performance or high-speed computing. As a consequence, it is almost everywhere: parallelism is used in PCs, cellular phones and much more. This extensive use of parallelism has turned "More than Moore" into reality, contributing to the sustained amazement of modern users of computing devices.

The double edges of the parallel sword

Fields such as computer science and numerical computing have traditionally faced a number of important challenges; however, the advent of grid computing and the massive use of parallelism have now raised many more important questions.

Will the convergence of parallel and distributed computing change the very nature of computer science and numerical computing? Will communication libraries and interfaces such as MPI or Open MPI continue to allow programmers to achieve high performance? Do the numerical methods presently in use suit massive parallelism and the presence of faults in the systems? These are just a few of the important questions that have arisen with the advent of parallel and distributed computing.
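For readers unfamiliar with MPI, the sketch below shows, in C, the style of programming it supports: every process runs the same program, learns its rank, works on its own share of the data, and the partial results are combined with a collective operation. The workload (summing the integers from 1 to 1,000,000) is invented purely for illustration.

    /* Minimal MPI sketch: each process sums its own slice of the
       range and a collective reduction combines the partial sums. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us? */

        /* Each process handles every size-th integer, starting at
           its own rank: an even split with no shared data. */
        long long local = 0;
        for (long long i = rank + 1; i <= 1000000; i += size)
            local += i;

        /* Combine the partial sums on process 0. */
        long long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %lld\n", total);

        MPI_Finalize();
        return 0;
    }

Launched with, say, mpirun -np 8, the same binary runs as eight communicating processes; the question raised above is whether this model will keep delivering high performance as machines grow to hundreds of thousands of processes.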

To ensure efficient use of new parallel and distributed architectures, new concepts related to communication, synchronization, fault tolerance and self-organization must emerge and be widely adopted.

Parallel problems can be split into many smaller sub-problems, so that each sub-problem can be worked on by a different processor. This means that many sub-problems can be worked on "in parallel," thus increasing the speed of your computation.
Stock image courtesy of sxc.hu
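The same divide-and-combine idea applies on the multicore chips found in ordinary PCs. Here is a minimal shared-memory sketch in C using OpenMP; the array contents are made up, and the directive simply tells the compiler to split the loop iterations among the available cores and combine the per-thread partial sums.

    /* Shared-memory sketch of splitting a problem into sub-problems:
       OpenMP divides the loop among the cores of a multicore PC. */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N];  /* placeholder data, initialized below */

    int main(void)
    {
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* Each thread sums a chunk of the array; the reduction
           clause combines the partial sums into one total. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }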

Innovation through evolution

Manufacturers agree that the architecture of future supercomputers will be massively parallel; as a consequence, these machines will need to be fault tolerant and well suited to dynamic environments. A kind of self-organization will also be needed, since efficient control of such very large systems will not necessarily be possible solely from the outside.

Parallel and distributed algorithms will also have to cope more and more with the asynchronous nature of communication networks and the presence of faults in the system.

Further, concepts such as asynchronous algorithms, whereby each process can run at its own pace according to its load and performance, present many similarities with the concept of wait-free processes in distributed computing, but they have yet to gain the popularity they deserve.
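To make the idea concrete, below is a minimal sketch of an asynchronous iteration using POSIX threads in C. Each thread repeatedly relaxes its own component of a vector using whatever values of the neighboring components it happens to see, with no barrier between sweeps; the simple averaging update is a placeholder for a real fixed-point mapping, and C11 atomics merely keep the individual reads and writes well defined.

    /* Asynchronous iteration sketch: no thread ever waits for the
       others, so reads may see stale values; for contracting
       fixed-point mappings such schemes still converge. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define N 4           /* one component per thread     */
    #define SWEEPS 100000 /* local iterations per thread  */

    static _Atomic double x[N] = {0.0, 10.0, 20.0, 30.0};

    static void *relax(void *arg)
    {
        int i = *(int *)arg;
        for (int k = 0; k < SWEEPS; k++) {
            double left  = x[(i + N - 1) % N]; /* possibly stale */
            double right = x[(i + 1) % N];     /* possibly stale */
            /* Placeholder averaging update; any uniform vector is
               a fixed point, so the components drift to a common
               value regardless of the threads' relative speeds. */
            x[i] = 0.5 * x[i] + 0.25 * (left + right);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        int id[N];

        for (int i = 0; i < N; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, relax, &id[i]);
        }
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);

        for (int i = 0; i < N; i++)
            printf("x[%d] = %f\n", i, x[i]);
        return 0;
    }

The contrast with a synchronous scheme is the absence of any barrier: a slow or heavily loaded thread never holds the others up, which is exactly what makes such methods attractive on large, heterogeneous systems.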

Ideas such as these are gaining more and more attention in many fields, particularly among computer scientists working on communication libraries such as Open MPI. Thus many more questions are raised: where will parallelism lead us and along which roads will we travel to get there? All of these questions must be answered and new solutions found if we are to continue to drive the evolution of computing.

These questions and concepts will be discussed at the 16th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2008), which will be held from 13-15 February 2008 in Toulouse, France. Eighty-three papers from 22 countries in Asia, Europe, North America and South America have been selected by the Program Committee.

In addition to the conference's main track, Special Sessions will address hot topics such as grids; parallel and distributed bioinformatics; virtualization in distributed systems; security in networked and distributed systems; modeling, simulation and optimization of peer-to-peer environments; and next-generation web computing. Computer manufacturers will also present their architectures, processors and strategies.

- Didier El Baz, Head of the Distributed Computing and Asynchronism team, LAAS-CNRS
