Would your teenage daughter purchase the exact same clothes as her grandmother? The answer is probably no. When they both get sick, however, they are likely to receive nearly identical treatment plans and drug choices. Traditional evidence-based medicine, in which testing and prescription follow a patient's reported symptoms, is a one-size-fits-all approach that will one day be replaced by predictive, ongoing, quantitative assessments of health.
From individual genome sequencing and pharmacogenomics to computational pathology, the future is here. Jürgen Schulze, assistant research scientist at the Qualcomm Institute at the University of California, San Diego, US, says computational medicine "will revolutionize medical care over the next decade and well into the future." Schulze, along with other experts, spoke last week at the 2014 International Supercomputing Conference (ISC) held in Leipzig, Germany.
Schulze was just one of several speakers highlighting advances in the life sciences at ISC. Klaus Schulten, a leader in the field of computational biophysics and professor at the University of Illinois at Urbana-Champaign, US, predicts that scientists, having already simulated an entire organelle, will simulate a complete cell within 10 to 15 years. Schulten led research - using the Blue Waters supercomputer at the National Center for Supercomputing Applications - to determine the chemical structure of the HIV capsid in 2013. You can read more about Schulten in our feature this week: Supporting world-class research at ISC'14.
The process of diagnosing a genetic disease is often extremely slow, sometimes taking years, as answers typically reveal themselves one test at a time. One hospital, however, is using high-performance computing and next-generation sequencing to quickly identify markers of genetic diseases in premature infants and acutely ill babies. Children's Mercy Hospital in Kansas City, Missouri, US, launched STAT-Seq as part of the US National Institutes of Health Genomic Sequencing and Newborn Screening Disorders research program.
Shane Corder, a high-performance computing software engineer at Children's Mercy, says combining advanced next-generation sequencing technologies with high-performance compute and storage clusters is key to quickly confirming disease-causing variants. STAT-Seq uses software developed at Children's Mercy to automate identification of DNA variations that explain a patient's condition, with turnaround times measuring just under a day. TaGSCAN, another test available at Children's Mercy, enables doctors to rule out around 750 genetic diseases. The test is relatively inexpensive and available to patients and physicians via routine order.
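The core of such a pipeline is matching a patient's sequenced variants against known disease-gene associations. The sketch below is purely illustrative - it is not Children's Mercy's actual software, and the gene names, variant identifiers, and panel entries are hypothetical examples - but it shows the kind of filtering step these systems automate:

```python
# Illustrative sketch of automated variant filtering (NOT the actual
# STAT-Seq software). Gene names and variant IDs are hypothetical examples.

# Patient variants from sequencing: (gene, variant_id, predicted_impact)
patient_variants = [
    ("CFTR",  "c.1521_1523delCTT", "high"),
    ("BRCA2", "c.68-7T>A",         "low"),
    ("SMN1",  "c.840C>T",          "high"),
]

# A small stand-in for a disease-gene panel mapping genes to conditions
disease_panel = {
    "CFTR": "cystic fibrosis",
    "SMN1": "spinal muscular atrophy",
}

def candidate_diagnoses(variants, panel, required_impact="high"):
    """Return (disease, gene:variant) pairs for high-impact hits in panel genes."""
    hits = []
    for gene, variant_id, impact in variants:
        if gene in panel and impact == required_impact:
            hits.append((panel[gene], f"{gene}:{variant_id}"))
    return hits

print(candidate_diagnoses(patient_variants, disease_panel))
# The CFTR and SMN1 variants are flagged; the low-impact BRCA2 variant is not.
```

In a real clinical pipeline this filtering runs over millions of variants against thousands of gene-disease associations, which is where the high-performance compute and storage clusters come in.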
Powerful computational platforms like those at Children's Mercy are enabling scientists to identify, analyze, and compare the fundamental biological components and processes that regulate human diseases and their impacts. Advances in modeling and simulation are also playing a key role in improved patient outcomes.
The techniques of pathology, the germ theory of disease, and modern public health have a history going back to the mid-19th century. The main tool pathologists used at that time - and still use today - is the microscope. When looking at a tissue sample under a microscope, pathologists can reliably identify only a handful of things by eye, such as how malignant the cells look or how frequently they are proliferating.
Andrew Beck, a pathologist at Beth Israel Deaconess Medical Center in Boston, Massachusetts, US, developed C-Path (Computational Pathologist), a computer algorithm that can look at not only a small subset of features, but also thousands of other points of interest like relationships between neighboring cells, feature variance across an image, and the characteristics of supporting cells.
Once these characteristics are measured, Beck says, machine learning and statistics can be used to identify the subset most useful for predicting patient outcomes. "Personalized medicine relies on taking very detailed, accurate quantitative information from patient tissue samples. We need new tools like C-Path to match patients to the exact treatment that is most effective for their disease, not just the treatment that happens to be the most commonly used."
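The underlying idea can be sketched in a few lines. The toy example below is our illustration, not Beck's actual algorithm: it scores candidate image-derived features (the feature names and values are invented) by how cleanly each one separates two outcome groups, then ranks them - a crude stand-in for the feature selection C-Path performs over thousands of features:

```python
# Toy sketch of outcome-driven feature ranking (NOT the actual C-Path
# algorithm). Feature names and measurements are hypothetical.
import statistics

# Hypothetical per-patient feature measurements, plus outcome (1 = survived)
patients = [
    {"nuclear_size_var": 0.6, "stromal_cell_density": 0.9, "outcome": 0},
    {"nuclear_size_var": 0.5, "stromal_cell_density": 0.8, "outcome": 0},
    {"nuclear_size_var": 0.5, "stromal_cell_density": 0.2, "outcome": 1},
    {"nuclear_size_var": 0.4, "stromal_cell_density": 0.1, "outcome": 1},
]

def score(feature):
    """Separation between outcome groups, in units of the pooled stdev."""
    g0 = [p[feature] for p in patients if p["outcome"] == 0]
    g1 = [p[feature] for p in patients if p["outcome"] == 1]
    pooled = statistics.pstdev(g0 + g1)
    return abs(statistics.mean(g0) - statistics.mean(g1)) / pooled

features = ["nuclear_size_var", "stromal_cell_density"]
ranked = sorted(features, key=score, reverse=True)
# In this toy data the stromal feature separates outcomes best,
# echoing Beck's finding about the stroma.
print(ranked[0])
```

A production system would use regularized survival models over thousands of features and many patients, but the principle - let the data say which measurements matter - is the same.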
Using C-Path, Beck discovered that tumor cells themselves are actually not the most important indicator of patient survival; the connective tissue surrounding the cancer cell, known as the stroma, provides significant additional information. "In the future, we see pathologists routinely working with computers to make the most informative interpretation of cancer tissue samples," says Beck.
From uncovering complex interactions and disease mechanisms to testing therapy efficacy and aiding in diagnosis, computational models help researchers and physicians develop a deeper understanding of the intricate dynamics of human disease.
Sudden cardiac arrest (SCA) continues to be a leading cause of death worldwide. In the US, 400,000 people die of SCA each year. Globally, one person dies every five seconds. Ionic currents flowing through each cell's membrane - giving rise to the action potential - drive the pumping action of the human heart. In 2008, it took nearly two weeks of compute time to simulate a single heartbeat.
"What we really need in this domain," says Matthias Reumann of IBM Research in Zurich, Switzerland, "is to simulate the heart for minutes, if not hours." Reumann applies his expertise in supercomputing to push the boundaries of modeling and analytic approaches to the electrophysiology of the heart.
By 2012, Reumann was part of a team at the IBM T. J. Watson Research Center and IBM Research - Australia that, in close collaboration with Lawrence Livermore National Laboratory (LLNL) in California, US, developed the Cardioid code, which could effectively harness the power of Sequoia's 1.5 million computational cores. The final results demonstrated 60 beats simulated in 67.2 seconds, almost real time and over 1,200 times faster than previous state-of-the-art simulations in the field.
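A quick back-of-the-envelope check shows why this counts as "almost real time". Assuming a resting heart rate of 60 beats per minute (our assumption for the arithmetic, not a figure from the team), 60 beats span 60 seconds of wall-clock heart time:

```python
# Arithmetic check on the Cardioid result: 60 beats in 67.2 s of compute.
# Assumes a 60 bpm heart rate, i.e. 1 second of real time per beat.
beats = 60
sim_seconds = 67.2
real_seconds = beats * 1.0  # 1 s per beat at the assumed 60 bpm

slowdown = sim_seconds / real_seconds
print(f"{slowdown:.2f}x real time")  # -> 1.12x real time
```

At roughly 1.12 times real time, simulating minutes or hours of cardiac activity finally becomes tractable, compared with the two weeks per beat of 2008.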
Reumann is now working toward heart models that can be applied in clinical settings to assess risk of sudden cardiac events, as well as potential applications related to drug interactions and personalized medicine. The team has overcome the computational challenge but now faces a big data analytics challenge, because the time to analyze, visualize, and interpret results has become the rate-limiting step.
The IBM Research team and LLNL are now working on data processing and reduction schemes to make high-resolution cardiac models practical in clinical settings, where long wait times are unacceptable.
Various challenges must be overcome before computational medicine becomes a routine part of patient care. The foundations, however, are already in place: from the sequencing cluster to the simulated heart, powerful computational platforms are giving scientists and physicians an increasingly detailed picture of the biological processes that drive human disease.