iSGTW is now Science Node


Merging black holes

This movie is divided into two parts, each showing a different numerical simulation, with brief captions describing what is shown. Part 1: Binary black holes orbit, lose energy to gravitational radiation, and finally collide, forming a single black hole; the gravitational waveform, spacetime curvature, and orbital trajectories are shown. Part 2: Event horizon and apparent horizons for the head-on collision of two black holes. CC BY-NC 2.5 Simulating eXtreme Spacetimes, a Caltech-Cornell project.

The Simulating eXtreme Spacetimes (SXS) project generates fantastic simulations like those shown above using a code called the Spectral Einstein Code, or SpEC for short.

We tracked down a member of the collaboration to ask a few questions.

iSGTW: The first paper using SpEC was published in 2000. Has the code continued to undergo development since then, and is development still ongoing?

Harald Pfeiffer, Canadian Institute for Theoretical Astrophysics and University of Toronto: The code has been under continual development since then. In fact, previous versions of the code date back several years earlier.

iSGTW: How resource-intensive is this code? Can it do these simulations overnight on a workstation, or does it need many hundreds or thousands of CPU-hours?

Pfeiffer: Binary compact object simulations (where each object can be either a black hole or a neutron star) require tens to hundreds of thousands of CPU-hours per run. For binary black holes, the high cost is mostly driven by the high accuracy required for gravitational wave detectors (these detectors use our simulations as filters to enhance their sensitivity). For neutron star-black hole and neutron star-neutron star binaries, the high cost is mostly driven by the large number of physical effects that need to be simulated: hydrodynamics, magnetic fields, nuclear physics, neutrinos...
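To get an intuitive feel for what "tens to hundreds of thousands of CPU-hours" means, a back-of-envelope calculation helps. The figures below (100,000 CPU-hours, 500 cores) are illustrative assumptions, not numbers from the interview:

```python
# Back-of-envelope: ideal wall-clock time for a run of a given total
# CPU-hour cost, spread across a fixed number of cores.
# All numbers are illustrative assumptions, not SXS figures.

def wall_clock_days(cpu_hours, cores):
    """Perfectly parallel wall-clock time in days."""
    return cpu_hours / cores / 24

# A hypothetical 100,000 CPU-hour binary black hole run on 500 cores
# would occupy the machine for roughly a week:
print(f"{wall_clock_days(100_000, 500):.1f} days")  # -> 8.3 days
```

Even under this idealized perfect-scaling assumption, such a run ties up hundreds of cores for days, which is why these simulations live on clusters and supercomputers rather than workstations.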

iSGTW: How much of the SpEC code is parallelized, and what kind of parallelism are we talking about -- are the parallel calculations independent of each other, or are they dependent, requiring a low-latency connection between nodes?

Pfeiffer: Given our CPU requirements, we have to be parallel. We use MPI and need a moderately fast interconnect. InfiniBand is best; Gigabit Ethernet loses about 20% efficiency. That efficiency loss is not terrible, and we do run on GigE clusters, as it is often easier to get compute time there.
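The trade-off Pfeiffer describes can be quantified with a small sketch: a fractional parallel-efficiency loss translates directly into a wall-clock slowdown. The 20% figure comes from the interview; the conversion itself is a generic assumption of a simple efficiency model:

```python
# Illustrative only: convert a fractional parallel-efficiency loss
# (e.g. 20% on Gigabit Ethernet vs. InfiniBand, per the interview)
# into a wall-clock slowdown factor for the same run.

def slowdown(efficiency_loss):
    """Wall-clock multiplier relative to the faster interconnect."""
    return 1.0 / (1.0 - efficiency_loss)

# A 20% efficiency loss means the same job runs about 1.25x longer:
print(f"{slowdown(0.20):.2f}x")  # -> 1.25x
```

A 25% longer queue-to-finish time is often a fair price for a cluster where allocations are easier to obtain, which matches the collaboration's pragmatic use of GigE clusters.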

iSGTW: What kind of architectures does SpEC run on -- has it run on clusters? Grids? Clouds? Supercomputers?

Pfeiffer: Beowulf clusters and supercomputers. We run on in-house clusters at Caltech and CITA, and on various supercomputers (Kraken, Ranger, and Lonestar, funded through the NSF TeraGrid, and SciNet at the University of Toronto, funded by Compute Canada).

For more simulations, or to learn more about extreme spacetime physics, visit the SXS collaboration's homepage, or skip straight to their movies page.

Copyright © 2015 Science Node ™  |  Privacy Notice  |  Sitemap
