Essential big data tools for government
Public-sector priorities: a strong IT foundation and tools for prepping data.
This is the first in a series about big data tools. Read part two and part three.
Big data. It’s massive. It comes in all types of formats. It’s dynamic, changing.
Government managers are looking to derive value from the mountains of data collected by their agencies to tackle a host of issues, including cybersecurity, fraud detection, crime prevention, medical research, weather modeling, intellectual property protection, operational efficiency and situational awareness.
A growing challenge is choosing the right technology to aid in collecting, processing, analyzing and storing massive amounts of data, especially since the pool of big data tools keeps expanding.
Unfortunately, there are no “must have” tool sets, since any initial big data deployment will be driven by an individual agency’s business requirements, according to the TechAmerica Foundation’s report, “Demystifying Big Data.”
Tools that ingest and extract data, index it, translate it and then clean it up for analysis and presentation are part of an expanding big data ecosystem, said Barbara Toohill, vice president and director of Mitre’s Homeland Security Systems Engineering and Development Institute.
To put these technologies to best use, agency managers need to understand the problem they are trying to solve and then determine what data is needed to solve that problem before investing heavily in tools, Toohill advised an audience of government and industry representatives at a recent FCW Executive Briefing on big data in Washington, D.C. “One of the challenges is that tools sound great in PowerPoint presentations, but are much more challenging when people start using them,” she said.
Still, there are core technologies, available as both open-source and proprietary solutions, that can support any agency’s big data portfolio. But where to start?
A sound IT foundation
If there is a prerequisite for successful big data analytics, it would be an IT infrastructure with the necessary bandwidth and storage.
A good number of agencies have already laid the groundwork for enterprise uses of big data. The National Oceanic and Atmospheric Administration, for example, built a high-speed gigabit network to give researchers secure access to large volumes of complex, high-resolution climate and weather images. N-Wave, a 10-gigabit Ethernet wide-area network, connects NOAA’s high-performance computing sites, data archives and researchers. It lets scientists collaborate and transfer data without network constraints.
Before N-Wave, NOAA scientists often had to ship hard drives to each other to share data. Now NOAA even has the ability to scale to 100-gigabit Ethernet and beyond as research demands increase and next-generation services are added to the network. N-Wave relies on Cisco’s Carrier Routing System, a self-healing mesh network with redundant routers, able to reroute traffic if a communications link goes down.
For agencies that don’t want to invest in hardware and other systems for large data workloads, the cloud is an option, said Mark Ryland, chief solutions architect for Amazon Web Services. In fact, the cloud is better suited to the dynamic aspects of big data, he said.
The Securities and Exchange Commission’s Market Information Data Analytics System runs in the AWS cloud. MIDAS, an internal system that gives the SEC information on the securities markets, scoops up 1 billion records a day, each time-stamped to the microsecond, drawn from the consolidated tapes and the proprietary feeds of each stock exchange and covering all posted orders and quotes as well as trades both on- and off-exchange, according to the SEC.
With MIDAS in the cloud, “there is no hardware to support, no software upgrades to maintain, no data feeds to handle, and hence no SEC resources are required for these tasks,” Gregg Berman, associate director of the SEC’s Office of Analytics and Research in the Division of Trading and Markets, said in an address at the Securities Industry and Financial Markets Association Tech conference.
“If we need to perform a very large analysis, we employ multiple servers and invoke parallel jobs. Access to processing power is just not an issue, at least not at present,” he said.
Data ingestion tools
While agencies need an IT foundation for big data, source data must be prepped and ingested before it is ready to be stored for later use.
Data ingestion often involves altering individual files by editing their content or reformatting them to fit into a larger document so that they can be quickly accessed. And the complexity increases when dealing with streaming data, multiple data sources and formats, and millions of records.
Apache Flume is a distributed system for collecting, aggregating and moving log data from multiple sources and writing it to a centralized data store such as the Hadoop Distributed File System (HDFS). According to a Dr. Dobb’s article, Flume is becoming a de facto standard for directing data streams into Hadoop because it is robust and easy to configure.
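That configuration takes the form of a simple properties file that wires a source, a channel and a sink together into an agent. The sketch below shows roughly what such a file might look like; the agent name, spool directory and HDFS path are hypothetical placeholders, not settings from any agency deployment.

    # Hypothetical Flume agent: watch a local spool directory and land events in HDFS
    agent1.sources  = logsrc
    agent1.channels = memch
    agent1.sinks    = hdfssink

    # Source: pick up files dropped into a local spool directory (placeholder path)
    agent1.sources.logsrc.type     = spooldir
    agent1.sources.logsrc.spoolDir = /var/log/agency-app
    agent1.sources.logsrc.channels = memch

    # Channel: buffer events in memory between source and sink
    agent1.channels.memch.type     = memory
    agent1.channels.memch.capacity = 10000

    # Sink: write events to HDFS, partitioned by date (placeholder cluster name)
    agent1.sinks.hdfssink.type                   = hdfs
    agent1.sinks.hdfssink.channel                = memch
    agent1.sinks.hdfssink.hdfs.path              = hdfs://namenode/flume/logs/%Y-%m-%d
    agent1.sinks.hdfssink.hdfs.fileType          = DataStream
    agent1.sinks.hdfssink.hdfs.useLocalTimeStamp = true

The agent would then be started with Flume’s flume-ng command, along the lines of flume-ng agent -n agent1 -f flume.conf, after which anything dropped into the spool directory flows into the date-partitioned HDFS path.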
Apache Sqoop is designed to transfer data between Hadoop and relational databases. Sqoop automates the import and export of data from relational databases, enterprise data warehouses and NoSQL systems, according to the Apache Software Foundation. It uses MapReduce to import and export data, which provides parallel operation as well as fault tolerance. (A sample import command appears below.)

Because agencies need more analytics in real time, “government customers are starting to merge big and fast data tools,” said Rich Campbell, chief technologist of EMC Federal. Agencies are combining tools such as Pivotal HD, a commercially supported distribution of Apache Hadoop, and Pivotal GemFire, used for data ingestion, to run traditional big data management and analytics.
They are also using Pivotal HAWQ for advanced database services and data fabric services. “These solutions let government agencies perform more real-time queries with less of a requirement to move data, allowing for better response times,” Campbell said.
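Returning to Sqoop, the command below is a minimal sketch of the kind of batch import it automates. The JDBC connection string, credentials, table name and target directory are hypothetical placeholders, not a real agency system.

    # Hypothetical Sqoop import: pull a relational table into HDFS with four parallel map tasks
    sqoop import \
      --connect jdbc:mysql://db.example.gov/caseload \
      --username etl_user -P \
      --table case_records \
      --target-dir /data/ingest/case_records \
      --num-mappers 4

The --num-mappers flag controls how many map tasks each pull a slice of the table, which is where the MapReduce-based parallelism and fault tolerance mentioned above come in.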
Other tools include IBM’s InfoSphere Streams, an advanced analytic platform that allows user-developed applications to quickly ingest, analyze and correlate information from multiple data sources in real time.
Next: Tools for big data storage, processing and integration