- The elimination of poverty is a critical global development goal
- A computer model can be trained to pick out significant features from satellite imagery that are predictive of poverty
- Better measurement of poverty at a large scale will improve understanding of what works in the fight against poverty
More than 700 million people around the world live in extreme poverty. They struggle to fulfill basic needs like health, nutrition, and access to water, sanitation, and education. The problem is so acute that the United Nations has made the elimination of poverty its number one Sustainable Development Goal.
But development officials need a way to track progress and discover whether poverty reduction programs are effective. The majority of people living in extreme poverty are concentrated in Southern Asia and sub-Saharan Africa, particularly in rural areas. Researchers typically collect poverty information at the village level via household surveys, which are both expensive and time-consuming.
Researchers at the Sustainability and Artificial Intelligence Lab at Stanford University are working to change that by using a combination of satellite imagery and machine learning to produce more frequent data that covers larger areas.
“We see our approach as not replacing household surveys but as a complement to the traditional methodology,” says Marshall Burke, assistant professor of Earth System Science. “In areas where you can’t do the surveys or in years when you can’t do the surveys, we now have another measurement tool that can fill those gaps.”
Using a combination of nighttime and high-resolution daytime satellite images of five African countries, Burke and his lab trained a computer model to recognize landscape features that are predictive of poverty.
While the nighttime images, or ‘nightlights,’ are an imperfect proxy for economic development, they do provide some information about whether an area is poor. The research team used them to help the computer model identify which features in the daytime imagery are important.
“We’re not telling the model what to look for,” says Burke. “Some of the things it finds are things that you or I would recognize as important, like roads, urban areas, waterways, and farmland, but other patterns the computer determines are significant are not things that actually look like anything to us. The computer uses both of them to make the final prediction.”
Learning what poverty looks like
Burke collaborated with the group of Stanford computer science professor Stefano Ermon, which provided the necessary computing infrastructure and know-how.
The Ermon Group, funded in part by the US National Science Foundation, is part of the Stanford AI Lab, which offers scientists 3,232 processor cores and ~18 TB of memory.
Using the Caffe deep learning framework, the team needed about 48 hours of compute time to train the model to recognize significant features in the images.
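In broad strokes, the training step treats the nightlights as free labels: each daytime tile is tagged with the binned intensity of the lights above it, and a convolutional network learns to predict that bin from the daytime pixels alone. The sketch below illustrates the idea only; it uses PyTorch and a toy network with random stand-in data rather than the lab’s actual Caffe setup, and the three-way intensity binning is an assumption.

```python
# Illustrative sketch of "nightlights as proxy labels": a CNN learns to predict
# binned nighttime-light intensity from daytime image tiles, so the features it
# picks up are daytime patterns that correlate with economic activity.
# The architecture, bin count, and random stand-in data are assumptions,
# not the lab's actual Caffe configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_BINS = 3  # assumed binning: low / medium / high nightlight intensity

# Toy stand-in for a set of 1km-by-1km daytime RGB tiles with nightlight labels.
tiles = torch.randn(64, 3, 224, 224)
nightlight_bins = torch.randint(0, NUM_BINS, (64,))
loader = DataLoader(TensorDataset(tiles, nightlight_bins), batch_size=16, shuffle=True)

# Small CNN for illustration; the real model was far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_BINS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # a couple of passes, purely for illustration
    for batch_tiles, batch_bins in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_tiles), batch_bins)
        loss.backward()
        optimizer.step()
```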
Scientists imported about 400,000 1km-by-1km satellite images (about 100 GB of data) from all over Africa for use during training. Once they optimized the model, they began the real work of predicting poverty on the ground in Nigeria, Tanzania, Uganda, Malawi, and Rwanda.
In all, the researchers worked with nearly 2 million satellite images (at around 3m resolution) to arrive at their conclusions.
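The article does not detail how the learned image features are turned into a poverty estimate, but a common pattern, and the one sketched below, is to regress survey-measured wealth for the clusters that do have household data on the network’s per-tile features, then apply the fitted model to tiles that were never surveyed. The ridge regression, feature dimension, and random stand-in data are illustrative assumptions.

```python
# Sketch of the downstream prediction step: features extracted by the trained
# CNN for each surveyed village cluster are regressed against a survey-based
# wealth measure, and the fitted model is then applied to unsurveyed tiles.
# Ridge regression and the 512-dim features are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one CNN feature vector per surveyed cluster, plus that
# cluster's survey-measured wealth index.
cnn_features = rng.normal(size=(300, 512))
survey_wealth = rng.normal(size=300)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, cnn_features, survey_wealth,
                         cv=5, scoring="r2")  # held-out explanatory power
print("cross-validated R^2:", scores.mean())

# Fit on all surveyed clusters, then estimate wealth for unsurveyed tiles.
model.fit(cnn_features, survey_wealth)
unsurveyed_features = rng.normal(size=(10, 512))
poverty_estimates = model.predict(unsurveyed_features)
```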
“If you train a deep learning model with millions of parameters, you need a lot of computational power,” says Burke.
Onward and upward
Burke and his lab have big plans for the future of their model. They hope to improve their predictions by incorporating imagery from different wavelengths beyond the visible spectrum. They would also like to scale up across countries to create an Africa-wide poverty map, and expand their work to Asia.
Another goal is to make poverty predictions over time, an effort currently hampered by the lack of available training data. A few surveys provide a snapshot of poverty in a single location for individual years, but even fewer offer repeated observations of poverty in the same villages.
Says Burke, “The fundamental problem is that we just don’t have that much data on poverty. That’s the whole motivation of doing the project.”
Looking at the earth from a bird’s-eye view could also yield benefits beyond predicting and reducing poverty. The same modeling approach may prove useful for measuring food security, health outcomes, or childhood malnutrition. Researchers are only just beginning to explore the possibilities of solving complex problems from high above.