Throw a ball in a well-lit room and you'll probably be able to follow its path without much difficulty. But throw that ball in an unlit room, and you should probably duck.
Seeing the unseen is the task of sonography. Rendering sonograms at ever-higher resolution is the challenge clinicians face as they help patients understand what's going on inside their bodies.
That's the task Robert C. Waag and his team from HABICO, Inc. and the University of Rochester, in New York, US, set out to tackle. Waag's research, recently published in the Journal of the Acoustical Society of America, uses supercomputers to enable earlier detection of harmful growths beneath the skin, and with it, higher survival rates.
In a typical breast examination, doctors first look to mammography to give them eyes in the dark. Clinicians may also consult magnetic resonance imaging (MRI) technology to create diagnostic images. Ultrasound, sending sound waves through tissue, provides another way to read what's going on beneath our skin.
But like the ball bouncing in the dark, it's difficult to know how the wave will move. A sound wave will behave one way when sent through water, but quite differently when it encounters an obstruction.
“If I assume there's no tissue there, I know I can get the beam to concentrate,” says Andrew Hesford, senior engineer at HABICO and research assistant professor at the University of Rochester. “But as soon as you put a breast or any other structure there, the focus suffers from aberrations, and the image doesn't look as good as it could.”
A sound wave moving through water would appear as an expanding set of ripples. In contrast, when propagated through obstructing layers of connective and fat tissues found in the breast, the sound waves fragment and arrive at different times.
To correct for these aberrations, scientists rely on the k-space model, which describes how a sound wave evolves as it propagates through time, and the fast multipole method, which accounts for wave behavior across different frequencies. These models put the focus where it needs to be, enabling clinicians to see the ball bouncing in the dark.
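The k-space idea can be sketched in a few lines. Below is a minimal, illustrative 1D propagator, not the team's code (the function name and parameters are assumptions for illustration): it advances a pressure field using a spectral Laplacian whose sinc correction makes the time stepping exact in a uniform medium, which is what makes the method attractive for large simulations.

```python
import numpy as np

def kspace_step(p, p_prev, c, dt, dx):
    """One time step of a 1D k-space pseudospectral wave propagator.

    In a homogeneous medium the sinc correction makes the scheme
    exact for band-limited fields, regardless of the time step.
    """
    n = len(p)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # spatial frequencies
    kappa = np.sinc(c * k * dt / (2 * np.pi))     # sin(ck dt/2) / (ck dt/2)
    # Spectral Laplacian with the k-space correction applied
    lap = np.real(np.fft.ifft(-(k * kappa) ** 2 * np.fft.fft(p)))
    return 2 * p - p_prev + (c * dt) ** 2 * lap

# Propagate a right-travelling Gaussian pulse and compare with the
# exact shifted solution p(x, t) = f(x - c t).
x = np.arange(256) * 0.01
f = lambda s: np.exp(-((s - 0.8) / 0.1) ** 2)
c, dt = 1.0, 0.005
p_prev, p = f(x + c * dt), f(x)                   # states at t = -dt and t = 0
for _ in range(100):
    p, p_prev = kspace_step(p, p_prev, c, dt, 0.01), p
error = np.max(np.abs(p - f(x - c * 100 * dt)))   # tiny: spectral accuracy
```

In real tissue the sound speed varies from point to point, so production codes add heterogeneous-medium terms on top of this skeleton; the sketch only shows the homogeneous core of the method.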
But are these correctives to be trusted? As scientists enhance diagnostic instruments, how can we know these numerical models will work in that context?
To achieve greater confidence, Hesford and his colleagues mapped realistic sound wave simulations against digitized MRI breast images. Reading how sound traveled through these digitized samples required 3D representations to show sound speed, density, and attenuation — approximately eight million samples of each parameter per cubic centimeter of breast tissue.
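The scale of those digitized samples is easy to work out with back-of-the-envelope arithmetic. The per-cubic-centimeter figure comes from the article; the specimen volume below is an illustrative assumption:

```python
# Eight million samples of each parameter per cubic centimetre implies
# roughly 200 samples per centimetre along each axis (200**3 = 8e6),
# i.e. a grid spacing of about 50 micrometres.
samples_per_cm3 = 8_000_000
samples_per_cm = round(samples_per_cm3 ** (1 / 3))   # 200
spacing_um = 10_000 / samples_per_cm                 # 50.0 micrometres

# For an assumed 500 cm^3 specimen and three parameters (sound speed,
# density, attenuation) stored as 4-byte floats:
volume_cm3 = 500                                     # illustrative assumption
total_samples = 3 * samples_per_cm3 * volume_cm3     # 1.2e10 values
memory_gb = total_samples * 4 / 1e9                  # 48.0 GB
```

Tens of gigabytes for the model alone, before any wave computation begins, helps explain why the work lands on supercomputers rather than desktops.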
This is a very computationally intensive task, so the HABICO team relied on supercomputers to save time. For GPU-based simplified physics visualizations, the researchers used Longhorn (now decommissioned), a supercomputer at the Texas Advanced Computing Center (TACC) in Austin, US, that contained 2,048 compute cores, 14.5 terabytes of aggregate memory, and 512 GPUs. Waag's team also maintains an allocation on Maverick, also at TACC.
To segment and convert MRI images of breast specimens into a 3D numerical model, Hesford released the Kraken. Maintained by the National Institute for Computational Sciences at the University of Tennessee, in Knoxville, US, Kraken (now decommissioned) was composed of 9,408 compute nodes, each containing two six-core processors and 16 GB of RAM.
Fully modeling the behavior of an ultrasound scan would have required a century on an average desktop computer, a time frame that would eliminate the value of the simulations, Hesford estimates.
“Our motivation is to provide clinicians with greater diagnostic tools and information to allow them to make decisions more quickly, earlier on in the disease cycle. The more information they have and the better resolution they can get out of ultrasound, then the earlier they can catch disease, and that translates directly to higher survival rates,” Hesford says.
With close to 40,000 women succumbing and another 300,000 receiving the dreaded diagnosis in the US each year, time is of the essence. Thanks to the efforts of Hesford and his collaborators, we know the aberration corrections are sound, and advancing those algorithms to improve diagnostic equipment is the right way forward.
This research was funded in part by the National Institute of Biomedical Imaging and Bioengineering in the National Institutes of Health (grants R01 EB009692 and R01 EB010069). Calculations associated with the work were made possible by the Extreme Science and Engineering Discovery Environment (XSEDE, project TG-DMS110016), which is supported by the National Science Foundation (grant number OCI-1053575).