Feature - Neighborly efficiency: Scaling kNN problems (nearly) linearly using GPUs
A new approach to solving a class of computational problems known as k-Nearest Neighbor could speed up applications ranging from face and fingerprint recognition to music genre classification.
There are a number of approaches to solving kNN problems, explained Libin Sun, a doctoral student studying computer vision at Brown University.
"Current approaches tend to have strong assumptions on the dimensionality or the geometric properties of the data, whereas our approach is highly general and can be applied to all kinds of kNN problems," Sun said.
As an example, Sun points to the parallel approach of Piyush Kumar, which focuses on 3-D spatial data. Although Kumar's method is generally considered to be state of the art, it cannot solve problems involving thousands of dimensions. Yet such problems arise frequently in computer vision, where a single image can have millions of pixels, each contributing three dimensions for color (red, green, and blue).
"Some people are using sequential approaches. Some people are using approximations. Some are using one layer of parallelization," explained Cyrus Stoller, a software engineer at the Palo Alto Medical Foundation. "We wanted to calculate the actual k-nearest neighbors."
Stoller and Sun were both undergraduates double majoring in computer science and mathematics at Swarthmore College, taking a course in parallel and distributed computing under associate professor Tia Newhall, when they collaborated with Newhall and Andrew Danner on a new approach to kNN problems. They split the calculations to create two layers of parallelization: the first, more conventional layer distributed the data across nodes using the popular MPI message-passing standard, while the second layer used GPUs to accelerate the calculations on each node.
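The team's MPI code and GPU kernels are not shown in the article, but the per-node workload can be sketched in Python, with NumPy's bulk array arithmetic standing in for the GPU. In this illustrative sketch (the function name, shapes, and use of squared Euclidean distance are assumptions, not the authors' code), each node receives one chunk of the reference points, computes all query-to-chunk distances at once, and keeps the k nearest:

```python
import numpy as np

def local_knn(queries, chunk, k):
    """Per-node work: distances from every query to every point in this
    node's chunk (the bulk arithmetic a GPU accelerates), then keep the
    k nearest within the chunk."""
    # Squared Euclidean distances, shape (n_queries, chunk_size)
    d = ((queries[:, None, :] - chunk[None, :, :]) ** 2).sum(axis=2)
    idx = np.argsort(d, axis=1)[:, :k]  # k nearest within this chunk
    return np.take_along_axis(d, idx, axis=1), idx
```

Because every query-chunk distance is independent, this inner loop maps naturally onto thousands of GPU threads, while MPI handles handing out chunks across nodes.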
Next, they tested their approach on the Lincoln cluster at the National Center for Supercomputing Applications, which offers 96 Intel 64-bit Linux nodes, each equipped with two NVIDIA Tesla S1070 accelerator units. The result, according to a poster they presented at the TeraGrid conference this past summer, was impressive: results came back 80 times faster than they would expect using the GPU layer alone.
Of course, this solution isn't right for all problems. Stoller and Sun's approach is best suited to particularly large ones.
"Our experiments show that our method scales close to linear if the problem is large enough," Stoller said. "As long as the problem is large enough, the time taken for the merge phase will always be dominated by the time needed for each node to calculate/sort distances for their designated chunk of data. This allows us to (almost linearly) scale by throwing in more nodes."
Sun expressed hopes that their approach would prove useful to the larger computer vision field.
"A recent trend in computer vision is the emphasis on data driven approaches, which almost always require performing kNN on datasets containing millions of images," Sun said. "We anticipate that our method can be of use to a greater field of research, not just handling 3D spatial/GIS data, which motivated most of the previous kNN algorithms."
Stoller, Sun, Newhall, and Danner plan to follow up with additional experiments, with an eye towards eventually publishing a paper.
For more information, download the poster PDF.
-Miriam Boon, iSGTW