Do you know where your data's been?
Data lineage gives IT managers visibility so they can trace errors, debug applications or recreate lost output.
As data becomes increasingly central to agency decision-making, it’s not enough to just have the data. Officials must be confident that the data can be verified -- where it came from, who has had access to it and where it has been before landing in its current location, according to data experts.
The practice of keeping track of this kind of information is called data lineage, and it's especially important when an organization is leveraging data from multiple sources, according to Ronald Layne, the manager of data quality and governance at George Washington University.
“Data lineage provides that end-to-end traceability of your data assets,” Layne said in a webinar on government use of data hosted by FedInsider.
The costs of not developing clear data lineage can be significant. According to a post on the National Institutes of Health's data science site, up to 90 percent of a researcher’s time is spent simply cleaning and deciphering data.
Having robust and reliable data lineage gives data scientists visibility into the data so they can trace errors, debug applications or recreate lost output. It can be useful in a number of situations, Layne said: undergoing an audit, finding the cause of a problem with data, seeing what will be affected when data is changed and preventing breaches by knowing who has access to it.
An important part of data lineage is metadata, said Steven Totman, who leads Cloudera’s chief data office for the global financial services industry. Metadata, in a way, does for data what food labels do for products in a grocery store, Totman said. Without the labels, it's hard to know what's inside.
There are generally three layers of metadata: business metadata, the language used inside a given organization to understand its data; technical metadata, the descriptions of where data is stored and how it's moved; and operational metadata, which records the processes that actually ran and which datasets or files they accessed.
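What those three layers might look like for a single dataset can be sketched roughly as follows. The field names and example values here are illustrative, not any particular tool's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class BusinessMetadata:
    # The organization's own terms for the dataset
    name: str          # e.g. "Quarterly permit applications"
    steward: str       # person or office responsible for the data
    definition: str    # what the numbers mean in plain language

@dataclass
class TechnicalMetadata:
    # Where the data lives and how it moves
    location: str      # e.g. "hdfs:///warehouse/permits/2023"
    file_format: str   # e.g. "parquet"
    source_system: str # system the data was extracted from

@dataclass
class OperationalMetadata:
    # What actually happened when the data was processed
    job_name: str      # the job that touched the data
    run_time: datetime # when the job ran
    inputs: list = field(default_factory=list)   # datasets or files read
    outputs: list = field(default_factory=list)  # datasets or files written
```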
Tracking the lineage for every piece of data can be challenging, Totman said, but it is typically done in one of two ways. One method focuses on design-time information, which looks at the way the data is meant to move through a system. The other method, which is becoming more common, looks at operational data, or information on how the data actually moved through the system.
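The design-time approach can be sketched in miniature like this, with made-up dataset names: the intended flow is declared up front, and lineage questions are answered by walking that declaration backwards.

```python
# Design-time lineage: the flow the pipeline is *supposed* to follow,
# declared as a mapping from each output dataset to its inputs.
declared_flow = {
    "permits_clean": ["permits_raw"],
    "permits_by_ward": ["permits_clean", "ward_boundaries"],
    "dashboard_extract": ["permits_by_ward"],
}

def upstream_sources(dataset, flow):
    """Walk the declared flow backwards to find every source a dataset depends on."""
    sources = set()
    for parent in flow.get(dataset, []):
        sources.add(parent)
        sources |= upstream_sources(parent, flow)
    return sources

print(upstream_sources("dashboard_extract", declared_flow))
# {'permits_by_ward', 'permits_clean', 'ward_boundaries', 'permits_raw'}
```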
Cloudera’s Navigator data governance solution plugs into the Hadoop stack, watches the running system and builds up lineage from its audit logs. Other tools, called enterprise metadata repositories, are taking on the job of collecting the metadata needed for lineage and automating the entire process, Totman said.
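The operational approach works from what actually happened rather than what was planned. A minimal sketch of the idea, using hypothetical audit-log records (the field names and datasets are made up for illustration and are not any product's actual log format):

```python
from collections import defaultdict

# Hypothetical audit-log records: which job read or wrote which dataset.
audit_log = [
    {"job": "clean_permits", "action": "read",  "dataset": "permits_raw"},
    {"job": "clean_permits", "action": "write", "dataset": "permits_clean"},
    {"job": "ward_rollup",   "action": "read",  "dataset": "permits_clean"},
    {"job": "ward_rollup",   "action": "write", "dataset": "permits_by_ward"},
]

def lineage_from_logs(records):
    """Derive dataset-to-dataset lineage edges from what the jobs actually did."""
    reads, writes = defaultdict(set), defaultdict(set)
    for rec in records:
        if rec["action"] == "read":
            reads[rec["job"]].add(rec["dataset"])
        elif rec["action"] == "write":
            writes[rec["job"]].add(rec["dataset"])
    # Every dataset a job wrote descends from every dataset that job read.
    return {(src, dst) for job in reads for src in reads[job] for dst in writes[job]}

print(lineage_from_logs(audit_log))
# edges: permits_raw -> permits_clean, permits_clean -> permits_by_ward
```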
Once these processes are implemented throughout a network, they can help end users better understand what they’re working with, according to David Yokum, the director of The Lab @ D.C., which does research to help inform Washington, D.C.'s public policy.
Knowing whether data was self-reported or measured by a sensor can affect how it is analyzed. For example, if a dataset has GPS information in it, an analyst would likely take into account the fact that GPS tends to have some variation, Yokum said in an interview.
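As a rough illustration of why that matters, the sketch below assumes each GPS reading carries about 10 meters of horizontal error (an assumed figure; actual error depends on the device and conditions) and only treats two readings as distinct locations when they differ by more than that noise.

```python
import math

GPS_ERROR_M = 10.0  # assumed horizontal error per reading, in meters

def meters_apart(lat1, lon1, lat2, lon2):
    """Approximate distance between two nearby points, fine for short distances."""
    m_per_deg_lat = 111_320
    m_per_deg_lon = 111_320 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot((lat2 - lat1) * m_per_deg_lat, (lon2 - lon1) * m_per_deg_lon)

def clearly_different(p1, p2):
    """Only count two readings as separate locations if they differ by more than the noise."""
    return meters_apart(*p1, *p2) > 2 * GPS_ERROR_M

print(clearly_different((38.9072, -77.0369), (38.9073, -77.0369)))  # ~11 m apart -> False
```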
“For me it really does go back to really understanding what those numbers mean and how they map back to something happening in the real world,” he said.