How to know the source of shared data
As data is shared, multiple copies of that data can emerge. A Tagged Data Authority Server could help solve the question of who has authority over the data by maintaining the master copy.
Here's a quick question for government systems managers: Do you know what the authority is for all the data you import? If not, do you have a way to establish that authority and maintain data integrity?
Maybe a data authority server could help track those details for your organization. But first, here is a little background.
More than ever, government agencies are under pressure to share data among departments and agencies and between the federal government and state and local agencies. This pressure to share is sparked not only by national security issues but also by agencies' desire to consolidate systems.
The problem is that as data is shared, multiple copies of data can emerge.
It's common for government agencies to share data that's been converted to Extensible Markup Language files because XML can easily be imported, exported and even split up and integrated with other data to create new files in multiple locations. The catch is that although data managers usually know where the files originate, it's increasingly likely they don't know where every tagged data element within those files originates.
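To make that gap concrete, here is a minimal Python sketch of the kind of per-element provenance tag that could close it. The element names and the "source" attribute are illustrative assumptions, not an existing standard.

```python
# A minimal sketch: a shared XML record whose file-level source is known, but
# whose individual tagged elements would be anonymous without a provenance tag.
# The element names and the "source" attribute are hypothetical illustrations.
import xml.etree.ElementTree as ET

record = """<person>
  <name source="hr.state.example">Jane Doe</name>
  <address source="dmv.state.example">100 Main St</address>
  <ssn source="ssa.gov">123-45-6789</ssn>
</person>"""

root = ET.fromstring(record)
for element in root:
    # Without a per-element attribute like "source", only the file's origin is known.
    print(element.tag, element.text, "from", element.get("source", "unknown"))
```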
That makes it difficult for system managers to know the data's original form, how old each data element is and who is responsible for that data. For example, if a record references a person, tagged data elements might include name, address, Social Security number and so on. But addresses change, and any data can become inaccurate over time.
That is where the concept of establishing authority for all tagged data comes in. The Social Security number is a perfect example. No matter how many times a person's Social Security number is collected on government forms and stored on government systems, the ultimate authority for a person's Social Security number is, of course, the Social Security Administration. So rather than storing multiple copies of a Social Security number at multiple government offices, systems can be designed to always look upstream — even all the way to SSA — to confirm that a Social Security number is accurate.
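As a rough illustration of that "look upstream" pattern, the sketch below routes a verification call to the authority of record instead of trusting a stored copy. The AuthorityClient class and its stand-in record store are hypothetical; SSA publishes no such interface, so this only shows the shape of the design.

```python
# A hedged sketch of upstream verification: the local system holds a copy of a
# value but asks the designated authority to confirm it before relying on it.
from dataclasses import dataclass

@dataclass
class AuthorityClient:
    name: str      # the system of record, e.g., the Social Security Administration
    records: dict  # stands in here for the authority's own data store

    def verify(self, person_id: str, claimed_value: str) -> bool:
        # A real client would make an authenticated call to the authority of record;
        # this stub just checks against the stand-in store.
        return self.records.get(person_id) == claimed_value

# Usage: confirm a locally held Social Security number instead of trusting the copy.
ssa = AuthorityClient(name="Social Security Administration",
                      records={"person-42": "123-45-6789"})
print(ssa.verify("person-42", "123-45-6789"))  # True only if the authority agrees
```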
I've been doing a lot of research lately into the concept of what I call tagged data authority servers. They are servers that, on the fly, help with data cleansing, merging, deduplicating and updating operations.
What makes this solution different from simple data mirroring is its mission of reaching across multiple networks and domains, even into domains an agency does not directly control, to help update a variety of data sources. It also goes beyond simple cleansing and updating by encouraging agencies to establish a firm taxonomy for their multiple data sources, including details on where the data comes from, how it is stored and how copies are checked against the original source.
The engine is a server that maintains a master set of metadata tracking where every tagged data element originates, including data imported from sources outside the organization. When any of those data sources change, the metadata in the authority server can be updated in one of two ways:
- Details can be sent directly to the authority server by the server that houses the original data source. That is probably the simplest way, but it's also difficult to set up when you are dealing with many data sources from many different providers.
- The authority server can be designed to call out to the location of the original data to see whether it has been updated, by comparing time stamps or the data itself, as sketched after this list.
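Here is a hedged Python sketch of both pieces: a master metadata record for one tagged element, and a pull-style staleness check. The field names are assumptions rather than a published schema, and the content hash is just one convenient way to compare the data itself without shipping the full value around.

```python
# A sketch of the authority server's master metadata and of the pull approach:
# each record notes where a tagged element originates, and a poll of the source
# compares a time stamp or a fingerprint to decide whether the record is stale.
import hashlib
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ElementAuthorityRecord:
    element_path: str        # e.g., "person/ssn" within a shared XML record
    authority: str           # system of record, e.g., the Social Security Administration
    source_location: str     # where the original tagged element lives
    last_verified: datetime  # when the authority server last confirmed the element
    checksum: str            # fingerprint of the element as last seen

def content_hash(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def needs_update(record: ElementAuthorityRecord,
                 remote_value: str, remote_modified: datetime) -> bool:
    # Either signal is enough to trigger a refresh: a newer time stamp at the
    # source, or a fingerprint that no longer matches the one on file.
    return (remote_modified > record.last_verified
            or content_hash(remote_value) != record.checksum)
```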
At this point, the tagged data authority server is still just a concept. But there are models to follow as the concept evolves. The most obvious models rely on data caching.
One obvious model is the Internet's Domain Name System. DNS relies on a hierarchy of authoritative name servers, with the root servers at the top. It's basically a distributed database that takes a client/server approach to data distribution. Regional and lower-level DNS servers don't copy the full range of information held upstream. They send a query only when they need to resolve a specific domain name to an IP address, then keep the answer in memory for a short period set by the record's time to live. Eventually, they query again to be sure they have the latest official answer.
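That resolver-style behavior is easy to sketch. The toy cache below keeps an answer only for a short lifetime and then goes back to the authoritative source; the lookup_upstream callable is an assumption standing in for a real query.

```python
# A toy cache in the spirit of a DNS resolver: answers expire after a short,
# source-specified lifetime, forcing a fresh query to the authoritative source.
import time

class TTLCache:
    def __init__(self, lookup_upstream, ttl_seconds: float = 300.0):
        self._lookup = lookup_upstream   # stand-in for a query to the authoritative source
        self._ttl = ttl_seconds
        self._entries = {}               # key -> (value, expiry time)

    def get(self, key):
        value, expires = self._entries.get(key, (None, 0.0))
        if time.monotonic() < expires:
            return value                 # still fresh: answer from the cache
        value = self._lookup(key)        # stale or missing: go back upstream
        self._entries[key] = (value, time.monotonic() + self._ttl)
        return value
```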
Another model might be similar to what Akamai Technologies does. Akamai helps large content providers speed delivery of their material by transparently mirroring it to multiple servers located around the globe. To do this, it needs to time stamp and establish version control for multiple datasets, coordinating how servers store the data.
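In the same spirit, a version-stamped copy on each mirror lets a system decide which copy is current when they disagree. The structure below is only an illustration of that idea, not Akamai's actual mechanism.

```python
# A hedged sketch of version-stamped mirroring: the highest version wins, with
# the time stamp as a tie-breaker. VersionedCopy is an illustrative structure.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class VersionedCopy:
    mirror: str            # which server holds this copy
    version: int           # version number assigned when the copy was written
    stamped_at: datetime   # time stamp used to break version ties

def freshest(copies: List[VersionedCopy]) -> VersionedCopy:
    # Prefer the highest version number; break ties with the time stamp.
    return max(copies, key=lambda c: (c.version, c.stamped_at))
```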
Other ideas I've heard include expanding the technologies of Coda, a network file system that uses a local cache to provide access to server data when a network connection is lost, or Lustre, an object-based, distributed file system generally used for large-scale cluster computing.
Whatever model it follows, the tagged data authority server is an idea whose time has come.
For that reason, I'm interested in hearing about other models that could help agencies better synchronize multiple copies of data.