Big data parking lot: Tools for fast storage, retrieval and integration
Hadoop is a must-have big data tool, but it has drawbacks, including the need for specialized expertise and for infrastructure with ample internal bandwidth.
This is the second in a series about big data tools. Read part one.
In the emerging big data ecosystem, storage providers offer the infrastructure on which all analytic tools run, and by far the most common system for storing and batch processing enterprise big data is HDFS, the Hadoop Distributed File System.
“Hadoop is a unifying element for people using big data because it is a standard to store and retrieve large data sets. It is like a big data parking lot,” said Abe Usher, chief innovation officer of the HumanGEO Group, where he works with defense and intelligence agencies.
Hadoop is an open-source framework that breaks up large data sets and distributes the processing work across a cluster of servers. Once the data is loaded into the cluster, a user queries it with the MapReduce framework, which “maps” the query to the nodes where the relevant data lives, processes it there, and then “reduces” the results from the distributed machines into a single answer. Commercial distributions of Hadoop are available from companies such as Cloudera, Hortonworks and IBM.
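To make the map-and-reduce pattern concrete, the sketch below counts word occurrences across a set of text files using Hadoop Streaming, which lets ordinary scripts act as the mapper and reducer. This is a minimal illustration of the pattern, not a production job; the script names are hypothetical.

```python
#!/usr/bin/env python
# mapper.py -- the "map" step: emits "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word.lower(), 1))
```

```python
#!/usr/bin/env python
# reducer.py -- the "reduce" step: sums the counts for each word.
# Hadoop sorts the mapper output by key, so identical words arrive together.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

A job like this is submitted with the Hadoop Streaming jar, pointing -mapper and -reducer at the two scripts and -input and -output at HDFS directories. The framework runs the mapper on the nodes that hold each block of input, then routes the sorted intermediate pairs to reducers that produce the final counts.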
Hadoop has been used in several successful government programs, including one at the National Cancer Institute’s Frederick National Laboratory, which built an infrastructure capable of cross-referencing the relationships between 17,000 genes and five major cancer subtypes. In 2010, GSA revamped its USASearch, a hosted search service used by more than 550 government websites. Using HDFS, Hadoop and Apache Hive, GSA improved search results by aggregating and analyzing big data on users’ search behavior.
But agencies should realize that the framework carries manpower and cost concerns. It is a relatively new technology that requires people with Hadoop expertise, and it needs to run on multiple servers in a Tier 1 data center with good internal bandwidth and management. And because Hadoop is a batch processing engine, it is not optimized for real-time analysis. Deployment costs can hit the $50,000 range, Usher said.

Oracle has moved to address issues of cost and complexity with the Oracle Big Data Appliance, which incorporates Cloudera's software (including Apache Hadoop) into Oracle hardware, said Mark A. Johnson, director of Engineered Systems for Oracle Public Sector. The appliance comes prebuilt, optimized and tuned to lower the costs of big data projects. Similarly, IBM offers InfoSphere BigInsights, a Hadoop-based analysis tool that includes visualization, advanced analytics, security and administration features.
Data integration and retrieval
Traditional relational databases weren’t designed to cope with the variety, velocity and volume of unstructured data coming from audio devices, machine-to-machine communications, cell phones, sensors, social media platforms and video. Instead, NoSQL databases are built to write data much faster than an RDBMS and deliver fast query speeds across large volumes. They are distributed tools that manage unstructured and semi-structured data that requires frequent access. Some examples include:
- MongoDB leverages in-memory computing and is built for scalability, performance and high availability, scaling from single-server deployments to large, complex multisite architectures (see the sketch after this list).
- Apache Cassandra handles big data workloads across multiple data centers with no single point of failure, providing enterprises with high database performance and availability.
- Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's BigTable data storage system.
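Despite their differences, these stores share a schema-flexible write-and-query model: records are written as documents or wide rows without a predefined table layout and are queried by field. The sketch below uses pymongo, MongoDB's Python driver; the connection string, database, collection and field names are hypothetical.

```python
# Minimal write-and-query sketch against MongoDB (hypothetical names throughout).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local test instance
collection = client["sensors"]["readings"]          # database "sensors", collection "readings"

# Documents need no predefined schema; fields can vary from record to record.
collection.insert_one({"device": "gate-12", "type": "license_plate", "plate": "ABC123"})
collection.insert_one({"device": "cam-07", "type": "video", "duration_sec": 42})

# Query by field and iterate the matching documents.
for doc in collection.find({"device": "gate-12"}):
    print(doc)
```

Because there is no fixed schema, new fields can be added as data sources change, which is part of what makes these stores a better fit than a traditional RDBMS for fast-arriving, loosely structured records.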
Other NoSQL systems were built with government security sensitivities in mind. MarkLogic’s enterprise-grade platform can integrate diverse data from legacy databases, open-source technologies and Web information sources. The NoSQL database, which offers government-grade security, has been used for fraud detection, risk analysis and vendor and bid management.

And in 2008, the National Security Agency created Accumulo and contributed it to the Apache Foundation as an incubator project in September 2011. Because it includes cell-level security, the tool can restrict users’ access to only particular fields of the database. This enables data of various security levels to be stored within the same row, and users with varying degrees of access to query the same table, while preserving data confidentiality. According to the NSA, hundreds of developers are currently using Accumulo.
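The effect of cell-level security can be pictured with a toy filter: each value carries a visibility expression, and a query returns only the cells whose expression is satisfied by the user's authorizations. The sketch below is a simplified Python illustration of that rule, not the Accumulo API, and it handles only flat "&" (and) and "|" (or) expressions; the row data and labels are invented.

```python
# Toy illustration of cell-level visibility filtering (not the Accumulo API).
# A cell is returned only if its visibility expression is satisfied by the
# user's authorizations. Only flat "&" and "|" expressions are handled here.

def visible(expression, user_auths):
    """Return True if the visibility expression is satisfied by user_auths."""
    if not expression:                      # unlabeled cells are visible to everyone
        return True
    if "&" in expression:
        return all(label.strip() in user_auths for label in expression.split("&"))
    return any(label.strip() in user_auths for label in expression.split("|"))

# One row whose cells carry different visibility labels.
row = [
    ("name",     "J. Smith",  "unclassified"),
    ("location", "Site B",    "secret&ops"),
    ("source",   "HUMINT-17", "topsecret|ops"),
]

analyst_auths = {"unclassified", "ops"}
for column, value, vis in row:
    if visible(vis, analyst_auths):
        print(column, "=", value)   # prints name and source; location is filtered out
```

In Accumulo itself, the visibility expression is stored with each key-value pair and evaluated at scan time against the authorizations passed in with the query, which is what lets a single table safely hold data at multiple classification levels.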
Extraction, transformation and loading tools
Extraction, transformation and loading (ETL) processes are critical for migrating data from one database to another or for feeding a data warehouse or business intelligence system. An ETL tool retrieves data from operational systems and prepares it for further analysis by reformatting, cleaning, mapping and standardizing it. As ETL tools mature, they increasingly support integration with Hadoop.

Talend provides traditional ETL capabilities but also simplifies big data integration. The company’s Open Studio for Big Data offers a unified open-source environment that simplifies the loading, extraction, transformation and processing of large and diverse data sets. Pentaho Data Integration, the company’s enterprise version of the Kettle ETL engine, consists of a core data integration engine and GUI applications that let users define data integration jobs and transformations.
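The extract-transform-load cycle itself is simple to picture. The sketch below is a generic, standard-library Python illustration (not tied to Talend or Pentaho) that pulls rows from a hypothetical CSV export, standardizes a couple of fields and loads the cleaned records into a SQLite staging table; the file name, column names and table layout are all assumptions.

```python
# Minimal ETL sketch: extract from a CSV export, transform, load into SQLite.
# File name, column names and table layout are hypothetical.
import csv
import sqlite3

# Extract: read raw rows from an operational system's CSV export.
with open("contracts_export.csv", newline="") as f:
    raw_rows = list(csv.DictReader(f))

# Transform: standardize formats and drop records missing key fields.
cleaned = []
for row in raw_rows:
    if not row.get("vendor_name") or not row.get("award_amount"):
        continue                                      # skip incomplete records
    cleaned.append((
        row["vendor_name"].strip().upper(),           # standardize vendor names
        float(row["award_amount"].replace(",", "")),  # "1,250,000.00" -> 1250000.0
        row.get("award_date", "")[:10],               # keep a YYYY-MM-DD date string
    ))

# Load: write the cleaned records into a staging table for analysis.
conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS awards (vendor TEXT, amount REAL, award_date TEXT)")
conn.executemany("INSERT INTO awards VALUES (?, ?, ?)", cleaned)
conn.commit()
conn.close()
```

Commercial ETL suites wrap this same cycle in visual job designers, connectors to many source systems, and scheduling and monitoring, and increasingly can push the transformation work out to a Hadoop cluster.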
Universal information access is another emerging area of big data that combines elements of database and search technologies, giving users a single point of access to all data, regardless of source, format or location. UIA offers the reporting and visualization features commonly found in business intelligence applications.
Attivio’s Active Intelligence Engine reportedly unifies disconnected systems, combining enterprise search, business intelligence and big data technologies. AIE ingests all types of structured and unstructured content and builds a schema-less index that can be accessed with a single query. Cambridge Semantics’ Anzo Unstructured combines data from databases, spreadsheets and documents from any source across the enterprise and automatically discovers new relationships between data.