Data Logistics Toolkit delivers big data for analysis
Developed by university researchers, the toolkit simplifies how users sort and organize big data sets, giving them faster access to the data most pertinent to their research.
The hype around big data is still being sorted from reality, but the promise it holds for government organizations is clear. Getting to actionable results, however, will require overcoming significant hurdles, not least of which is managing the huge streams of data involved.
The Data Logistics Toolkit (DLT), a university-based project, is one solution that’s in use now and is close to being a turnkey product. Funded by a National Science Foundation grant, it aims to help organizations more effectively store and share the massive amounts of data that big data initiatives require.
Not only will the DLT help position data closer to where it is needed, it will also make it easier for users to manipulate the data.
“It makes it simpler for users to traverse related elements of data and to link them together in ways that are specific to the kinds of things that are being analyzed,” said Martin Swany, an associate professor at Indiana University’s School of Informatics and Computing. “That makes it a lot easier to do the analysis.”
Researchers at Vanderbilt University and the University of Tennessee Knoxville are the other partners in the DLT project.
One of the basic problems big data researchers confront is that there’s still no good way to zero in on the specific segments of huge data sets, created by sensors and other sources, that are of particular interest. Users first have to download an entire data set, often from a site far away, and then sift through all of that information to isolate the data that’s most relevant.
That takes up a lot of time and network resources.
DLT gets around that issue by first positioning data closer to the end user. In physical logistics, a package whose recipient isn’t home isn’t sent back to its original source; it’s held at a warehouse near its final destination. DLT likewise parks data in local caches, where it’s immediately available when the user demands it.
Most network architectures simply don’t provide that kind of logistics.
“The Internet doesn’t have a holistic way of enabling (data) logistics, and that’s why you have such things as content distribution and peer-to-peer networks,” Swany said. “The ability to cache the data locally means it can be streamed, and users can begin working with the data before it’s completely downloaded.”
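To make the idea concrete, here is a minimal sketch of the caching pattern Swany describes, written in Python with hypothetical names; DLT’s actual interfaces differ. A request is served from a nearby depot when the data is already staged there; otherwise it streams from the distant source, filling the cache as the bytes arrive, so analysis can begin before the download finishes.

```python
import os
import urllib.request

CACHE_DIR = "/var/cache/depot"  # hypothetical local depot location
CHUNK = 1 << 20  # stream in 1 MiB pieces

def stream_dataset(name, remote_url):
    """Yield a dataset chunk by chunk: from the local cache if it's
    already staged nearby, otherwise from the distant source, filling
    the cache as the bytes arrive."""
    path = os.path.join(CACHE_DIR, name)
    if os.path.exists(path):
        # Cache hit: serve straight from local storage.
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                yield chunk
        return
    # Cache miss: fetch remotely, caching as we stream, so the user
    # can begin working before the download completes. (A real
    # implementation would also handle interrupted transfers.)
    os.makedirs(CACHE_DIR, exist_ok=True)
    with urllib.request.urlopen(remote_url) as resp, \
            open(path, "wb") as out:
        while chunk := resp.read(CHUNK):
            out.write(chunk)
            yield chunk
```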
In a way, DLT glues various storage backends together, so that capacity now locked in isolated islands of storage across data centers and content caches can be linked more effectively. It also acts as an overlay that brings different forms of storage, including services such as Dropbox, into one overall environment. The end user, however, sees it all as a single resource.
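A minimal sketch of that overlay idea, again with hypothetical names rather than DLT’s real interfaces: each island of storage implements one small interface, and a composite object presents them all to the user as a single resource.

```python
from abc import ABC, abstractmethod
from typing import Optional

class StorageBackend(ABC):
    """One island of storage: a data-center depot, a content cache,
    a Dropbox folder behind an adapter, and so on."""

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class Overlay(StorageBackend):
    """Glues backends together so the user sees a single resource:
    reads search the backends in order, writes land on the first."""

    def __init__(self, backends):
        self.backends = backends

    def get(self, key: str) -> Optional[bytes]:
        for backend in self.backends:
            data = backend.get(key)
            if data is not None:
                return data
        return None

    def put(self, key: str, data: bytes) -> None:
        self.backends[0].put(key, data)


class DictBackend(StorageBackend):
    """Trivial in-memory backend standing in for a real depot."""

    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, data):
        self.store[key] = data


# One namespace for the user; the overlay finds the bytes.
overlay = Overlay([DictBackend(), DictBackend()])
overlay.put("landsat/scene-001", b"scene data")
assert overlay.get("landsat/scene-001") == b"scene data"
```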
Once those logistics are in place, users can apply intelligence to the big data management problem by choosing which data to stream, prioritizing whatever is most urgent for their analysis. That, Swany said, is the key goal for big data over the next few years.
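That prioritization can be pictured as a simple priority queue over pending transfers. The sketch below is illustrative only, with hypothetical names, and is not DLT’s actual scheduling machinery.

```python
import heapq

class TransferQueue:
    """Orders pending dataset requests so the most urgent data
    for the analysis moves first (lower number = more urgent)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps equal priorities FIFO

    def request(self, dataset, priority):
        heapq.heappush(self._heap, (priority, self._seq, dataset))
        self._seq += 1

    def next_transfer(self):
        return heapq.heappop(self._heap)[2] if self._heap else None


# e.g. stage a small calibration subset before the bulk archive
q = TransferQueue()
q.request("bulk-archive-2015", priority=5)
q.request("calibration-subset", priority=1)
assert q.next_transfer() == "calibration-subset"
```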
Researchers at Vanderbilt already use DLT to manage and share nearly a petabyte of data generated by the Large Hadron Collider experiments near Geneva, Switzerland. Closer to home, it’s being used by AmericaView, a nationwide partnership of scientists working with remote sensing data from Landsat and other satellites. The Earth Observation Depot Network, which aims to accelerate remote sensing workflows for sites across the United States, grew out of that effort.
Swany said DLT is also being used in parts of the world where Internet connections are scarce. A colleague has developed DLT-based hardware that he takes to Africa: requests for data are queued while offline, and when a connection becomes available, the device processes those requests and downloads the data so it can be worked on offline afterward.
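The queue-and-sync pattern he describes could look something like the following sketch, with a hypothetical file name and helper functions rather than the actual device’s software: requests accumulate on disk while there’s no link, then are fetched in bulk when one appears.

```python
import json
import os

QUEUE_FILE = "pending_requests.json"  # hypothetical on-disk queue

def queue_request(dataset):
    """Record a data request while there is no connection."""
    pending = []
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            pending = json.load(f)
    pending.append(dataset)
    with open(QUEUE_FILE, "w") as f:
        json.dump(pending, f)

def flush_when_connected(fetch):
    """Once a link is available, download everything queued so it
    can be analyzed offline later. `fetch` does the transfer and
    stores the result locally."""
    if not os.path.exists(QUEUE_FILE):
        return
    with open(QUEUE_FILE) as f:
        pending = json.load(f)
    for dataset in pending:
        fetch(dataset)
    os.remove(QUEUE_FILE)
```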
Outside of research institutions, Swany said DLT should also prove ideal for Smart City projects that need to analyze increasingly large volumes of data produced by sensors embedded in roads, bridges and other infrastructure.
The DLT development team has another year left on the NSF grant, and Swany expects the entire package to reach production quality by then. The individual components of DLT are at that level now, he said, but the overall integration is still in beta and needs “some assembly,” such as interface configuration.
DLT 1.0 is available for download now, however. Swany said it’s at the point where any competent software or IT person in an organization should be able to work with it and make it usable. Quarterly updates will follow over the next year, by which time the turnkey version should be delivered.