
Hacking Zika in the Lone Star state

Speed read
  • Zika hackathon relies on NSF-funded big data HPC.
  • TACC Wrangler supercomputer provides storage space.
  • Hackathon has created a repeatable platform applicable to other outbreaks.

More than 50 data scientists, engineers, and UT Austin students joined forces at the Austin Zika Hackathon to use big data to fight the spread of Zika.

Hackathon participants investigated ways to pool together different sets of data — outbreak reports, stagnant water sources, empty swimming pools and ponds that are potential mosquito breeding grounds, and even Facebook and Twitter feeds.
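In practice, that pooling starts with joining records that share a common key. A minimal sketch, assuming hypothetical CSV files and a shared county column, of how outbreak reports and stagnant-water sites could be combined into one queryable table:

```python
# Minimal sketch (hypothetical file and column names): pool two of the data
# sets mentioned above -- outbreak reports and stagnant-water sites -- into
# one table keyed on county, so they can be queried together.
import pandas as pd

outbreaks = pd.read_csv("zika_outbreak_reports.csv")   # county, report_date, cases
water     = pd.read_csv("stagnant_water_sites.csv")    # county, site_id, lat, lon

# Count potential breeding sites per county, then attach that count
# to every outbreak report from the same county.
sites_per_county = (
    water.groupby("county")
         .size()
         .rename("n_water_sites")
         .reset_index()
)
combined = outbreaks.merge(sites_per_county, on="county", how="left")

print(combined.head())
```

From a table like this, questions such as "do counties with more standing-water sites report more cases?" become simple group-bys rather than manual cross-referencing.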

Hackin' in the heart of Texas. Hackers convened in Austin to take on the Zika virus. XSEDE supercomputer Wrangler provided the data storage for the hackathon. Courtesy TACC.

The Texas Advanced Computing Center (TACC) plans to store all the data on a new data-intensive supercomputer called Wrangler.

Wrangler is one of the newest Extreme Science and Engineering Discovery Environment (XSEDE) supercomputing resources, and is supported by the National Science Foundation (NSF).

“We're trying to collect these disparate pieces of data, and there's not a good way for people to ask questions about that data — that's the big problem,” says Ari Kahn, human translational genomics coordinator at TACC.

Zika, a mosquito-borne disease that can cause fever and birth defects, threatens to spread to the United States. As of mid-May 2016, Mexico had reported 272 cases of Zika. The problem has grown so large that President Obama has requested $1.9 billion to halt the spread of Zika.

The US Centers for Disease Control and Prevention (CDC) is now ramping up collection of data that tracks the spread of Zika. But big gaps exist in linking different kinds of data, and that makes it tough for experts to predict where the virus will go next and what to do to prevent it.


The Zika hackers formed groups and worked on creating demo projects based on sample CDC and other data. One project developed a working TensorFlow model that used machine learning to search through aerial images for pools of stagnant water, potential breeding grounds for mosquitoes that carry Zika.
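As a rough illustration of that approach (not the team's actual model), a small TensorFlow image classifier for flagging aerial tiles that may contain standing water could look like this; the directory layout, image size, and network architecture are all assumptions:

```python
# Illustrative sketch only, not the hackathon team's model. Assumes aerial
# image tiles sorted into 'water/' and 'no_water/' subfolders under
# aerial_tiles/, and trains a small TensorFlow (Keras) binary classifier.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "aerial_tiles/",          # hypothetical directory of labeled tiles
    image_size=(128, 128),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # water / no water
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Tiles the classifier flags could then be fed into the kind of water-source overlay described later in the article.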

Another team developed a mobile app with nodes that would allow researchers to report developing cases of mosquito-borne illness. One demonstrated a way to map microcephaly occurrences in Brazil using R's Leaflet mapping interface. Another made headway into readying CDC data from Puerto Rico to layer with CIA World Factbook data for a richer understanding of how Zika has progressed there.
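The mapping demo was built with R's Leaflet interface; the same idea, sketched here with folium (a Python wrapper around Leaflet) and placeholder data, is only a few lines:

```python
# Sketch of a Leaflet-style case map via the folium wrapper. Coordinates
# and counts are placeholders, not real microcephaly data.
import folium

occurrences = [           # (lat, lon, reported_cases) -- illustrative only
    (-8.05, -34.90, 12),  # near Recife
    (-12.97, -38.50, 7),  # near Salvador
]

m = folium.Map(location=[-10.0, -37.0], zoom_start=5)
for lat, lon, cases in occurrences:
    folium.CircleMarker(
        location=[lat, lon],
        radius=4 + cases,                 # scale marker size by case count
        popup=f"{cases} reported cases",
    ).add_to(m)

m.save("microcephaly_map.html")           # open in a browser to view
```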

“The Zika Hackathon is about bringing awareness and building a platform that is repeatable, not just for the Zika virus data analysis,” says Zika Hackathon organizer Eddie Garcia, chief security architect at Cloudera. “Someone can take what we did here today and apply it to some other unknown outbreak. It's really about getting people together, excited, bringing awareness, and building out a platform that is repeatable.”

Leader of the hack. Human translational genomics coordinator Ari Kahn helped lead the Zika hackathon in Austin. Courtesy TACC.

The Zika Hackathon brought together an emerging kind of scientist: the data scientist. Data scientists specialize both in translating information from many different sources into data that can be used together and in applying new technologies to extract knowledge from today's massive data collections.

“There are three classes of work that get put under the umbrella of data science,” says data scientist Juliet Hougland of Cloudera. “1) Data scrubbing — getting data in the right format, in the right place — is a huge part of any job where you're going to do something useful with that data. 2) Investigative analytics means looking at historic data and doing interesting, useful analysis on it. 3) Operational analytics supports recommendation engines, fraud detection systems, and more.”
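A toy example of the first category, data scrubbing, using hypothetical records: inconsistent county names, mixed date formats, and a missing case count are coerced into one analysis-ready table.

```python
# Toy data-scrubbing sketch (hypothetical fields): normalize county names,
# parse mixed date formats, and fill missing case counts so the records
# can actually be analyzed together.
import pandas as pd

raw = pd.DataFrame({
    "County": [" Travis ", "travis", "HARRIS"],
    "report_date": ["05/12/2016", "2016-05-13", "13 May 2016"],
    "cases": ["3", "1", None],
})

clean = pd.DataFrame({
    "county": raw["County"].str.strip().str.title(),       # 'Travis', 'Harris'
    "report_date": raw["report_date"].map(pd.Timestamp),   # real datetimes
    "cases": pd.to_numeric(raw["cases"]).fillna(0).astype(int),
})

print(clean)
```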


At the hackathon, software developer David Walling of TACC's Data Intensive Computing group spoke about his current research on extracting rich data from ‘grey literature’: unofficial records, often just images embedded in PDF files, that are a bane of data scientists. His work uses natural language processing techniques to map where and when a given species, such as a fish, appears in the grey literature. Progress on this problem would translate well to getting researchers more information about Zika.
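A heavily simplified sketch of that kind of extraction (not Walling's actual pipeline) might use an off-the-shelf named-entity recognizer for locations and dates and a hand-made list for species:

```python
# Simplified illustration: pull location and date mentions from free text
# with spaCy's named-entity recognizer, and match species names against a
# small hand-made list.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
KNOWN_SPECIES = {"largemouth bass", "channel catfish"}   # hypothetical list

text = ("Several largemouth bass were collected near Lake Travis, Texas, "
        "on 12 May 2016 during the survey.")

doc = nlp(text)
locations = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC", "FAC")]
dates     = [ent.text for ent in doc.ents if ent.label_ == "DATE"]
species   = [s for s in KNOWN_SPECIES if s in text.lower()]

print("species:", species)
print("locations:", locations)
print("dates:", dates)
```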

“If you can see where all the water sources are and then overlay how the reports of outbreaks are happening, then you can create a model for how it's spreading and how it will spread in the future based on where the water sources are. Then maybe you can come up with some plans to offset that so the spreading doesn't happen as fast or doesn't happen at all,” Kahn says.
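One small step toward the model Kahn describes, sketched with made-up coordinates: for each reported outbreak, count how many mapped stagnant-water sites lie within roughly 2 km, the kind of feature a spread model could then be trained on.

```python
# Sketch with placeholder coordinates, not real surveillance data: count the
# stagnant-water sites within ~2 km of each reported outbreak location.
import numpy as np

outbreaks   = np.array([[30.27, -97.74], [30.30, -97.70]])   # (lat, lon)
water_sites = np.array([[30.28, -97.75], [30.26, -97.73], [30.50, -97.60]])

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (a[0], a[1], b[0], b[1]))
    h = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(h))

for point in outbreaks:
    nearby = sum(haversine_km(point, site) <= 2.0 for site in water_sites)
    print(f"outbreak at {point}: {nearby} water sites within 2 km")
```

A spatial index or geopandas would scale this beyond a handful of points, but the overlay logic stays the same.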

Cloudera Cares, the charitable arm of the data analytics company, along with TACC and other local partners, is planning to hold quarterly hackathons as part of a larger project to use big data to battle Zika and other threats. The project aims to help prevent outbreaks and to make it easier for researchers to get answers.

To learn more about the Zika outbreak, check these resources from the CDC, the World Health Organization, and the European Centre for Disease Prevention and Control.


Reprinted with permission from the Texas Advanced Computing Center. Read the original article here.



