Earlier today, a reader of my book asked me a good question: “Why did you feel we needed the Unified Information Model given the initiatives on the Semantic Web and the Dublin Core metadata standards?” There is nothing like a good question with which to start the day!
The Problems to be Solved
When you are looking for information, you do so with a task of Work before you to complete. You may need to make a business decision, you may be researching specific questions about how well a marketing campaign is working, or you may be evaluating whether a potential joint venture partner is accurately reporting its performance. But, in every instance, we seek information in order to complete Work (each italicized term used in this post is defined more fully in Achieving Digital Trust).
The Semantic Web and Dublin Core are impressive initiatives to improve the discoverability and management of information assets across the vast dimensions of the Net. But that is as far as they go. They enable the management of information; they do not enable our ability to calculate whether to trust a specific information asset as a Resource to use to complete the defined Work.
That is the first problem I needed to solve—how do we decide to trust information (notably digital information) as a resource to perform work? It is not enough merely to find the information, no matter how efficiently the Semantic Web and Dublin Core allow us to do so. Before we rely on the information, we want to affirmatively calculate whether the information is trustworthy for the work to be performed.
Of course, we can presume the information is trustworthy and skip the calculation of our trust in the information. But that involves a level of risk that is increasing; as Caleb Barlow, IBM’s security VP, recently observed: “In 2017, trust in systems will be broken as bad guys move from just exfiltrating data to changing it.” So, to enable the calculation of trust in information, what do we need?
Every trust calculation is rules-based. Just like any mathematical formula, the rules define placeholders to be filled in by values that enable the calculation to proceed. Those values are themselves information. That’s right, even when the Resource is information, we need information about information.
That introduced a portfolio of questions defining the second problem that needed to be solved: To evaluate information, what information is needed? How should it be classified and organized? How and when do we input this added information? This collection of information is called decisional information—the information we require to calculate our decision whether (and how much) to trust any Resource.
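As a purely illustrative sketch (the field names, rules, and weights below are my own assumptions for this post, not part of the model itself), a rules-based trust calculation fills its placeholders with decisional information and combines the results:

```python
# Hypothetical sketch: a rules-based trust calculation.
# Each rule defines a placeholder; decisional information about the
# information asset supplies the value that fills it.

def trust_score(decisional_info, rules):
    """Combine weighted rule results into a trust score between 0.0 and 1.0."""
    total_weight = sum(weight for _, weight in rules)
    score = sum(weight for check, weight in rules if check(decisional_info))
    return score / total_weight

# Decisional information *about* the asset (illustrative values only).
asset_info = {
    "author_verified": True,
    "average_rating": 4.2,     # e.g., crowd-sourced evaluations
    "last_updated_days": 30,
}

# Rules as (placeholder check, weight) pairs; the thresholds are assumptions.
rules = [
    (lambda i: i["author_verified"], 0.5),
    (lambda i: i["average_rating"] >= 4.0, 0.3),
    (lambda i: i["last_updated_days"] <= 90, 0.2),
]

print(trust_score(asset_info, rules))  # all rules satisfied -> 1.0
```

The point is not the particular arithmetic; it is that none of the calculation can proceed without the decisional information that fills the placeholders.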
The Answers (in Summary)
To answer these questions, I spent months performing Einstein-like thought experiments and researching available materials on the Semantic Web and the Dublin Core, as well as other standards and rules for structuring parts, but not all, of the decisional information relating to any information asset. These two initiatives make a great deal of progress toward the goal of ‘ambient findability’, the phrase Peter Morville developed. They are enabling us to find information resources and to manage them through powerful, synchronized metadata structures and dictionaries. Everything these efforts are building helps populate the Descriptive Layer of the Unified Information Model.
But, and this is the important distinction, they do not anticipate the decisional information we require to affirmatively calculate our trust in information resources we need to perform work.
In response, the Unified Information Model introduces two major classifications and integrates them into its structure (see illustration above). The first is the Evaluation Layer, which has three components. When we are selecting any resource, one of our key trust decisions is whether it performs well in relation to the work we need to complete. For that, we are vitally interested in acquiring and evaluating historical performance data, whether metrics generated within a computer system or third-party evaluations, including crowd-sourced reviews of the quality of a book. The three components of the Evaluation Layer anticipate these types of decisional information.
Now, as you well know, evaluative data can be hard to find. It is often not connected to, or available with, the information asset for which we are calculating trust. But just try to decide whether to trust something to do work without any idea of how well it has previously performed in similar activities. All of your radar senses go on full alert! Just imagine how much more care you would use in choosing a movie if none of the reviews were available, or in purchasing something on eBay with no ratings of the sellers or the products. Reviews and ratings are examples of evaluative information, and the same is true when the resource is a digital information asset!
In fact, the layered architecture of the Unified Information Model offers a provocative notion: we care so much about evaluative data that we seek that information even before we actually look at the core asset itself. If no favorable evaluations rise to the level our rules require, we do not go further. That is why the Evaluation Layer sits where it does in the Unified Information Model. One must pass through it in order to move closer to the actual content we wish to use.
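The gating role of the Evaluation Layer can be sketched in code. Again, the thresholds and field names here are hypothetical illustrations, not part of the model: the essential behavior is that the evaluative data is consulted first, and the content is never reached if the evaluations fail our rules.

```python
# Hypothetical sketch: the Evaluation Layer as a gate in front of content.
# We consult evaluative data *before* touching the asset itself; if it
# fails our rules, we stop and never retrieve the content at all.

MIN_REVIEW_COUNT = 10   # assumed thresholds set by our trust rules
MIN_AVG_RATING = 4.0

def passes_evaluation_layer(evaluations):
    """Return True only if the evaluative data satisfies our rules."""
    if len(evaluations) < MIN_REVIEW_COUNT:
        return False
    return sum(evaluations) / len(evaluations) >= MIN_AVG_RATING

def fetch_content_if_trusted(asset):
    """One must pass through the Evaluation Layer to reach the content."""
    if not passes_evaluation_layer(asset["evaluations"]):
        return None   # stop: do not go further
    return asset["content"]

asset = {
    "evaluations": [5, 4, 4, 5, 3, 4, 5, 4, 4, 5],  # average 4.3
    "content": "the substantive information we wish to use",
}
print(fetch_content_if_trusted(asset))
```

With too few reviews, or an average below the threshold, `fetch_content_if_trusted` returns `None` and the content is never accessed, which mirrors the layer's position in the model.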
The second additional classification is the Navigation Layer. This layer consists of the data that allows us to find what we are looking for. Even if a data source has great evaluations, it must still serve our work requirements; we have to be able to find the right data. Imagine an encyclopedia without a table of contents, a search function, or an index. The resource may be researched and prepared with the highest quality, but if you cannot find the information your work needs, you may never actually use it.
These two layers, the Evaluation Layer and the Navigation Layer, are indispensable to our calculations of whether to trust information as a resource to be used to perform our work. Yet the information in these layers is often disconnected from the substantive content. If we can design our information assets so that these layers are more closely linked and trust can be more quickly calculated, the substantive content becomes easier to trust and our work will be more readily completed.
To learn more, of course, one must read Achieving Digital Trust. But I hope the preceding helps clarify the kinds of problems addressed, and solved, by the Unified Information Model. It is just one of the Trust Engineering Tools introduced within the pages of that work. I believe these tools supplement, and enhance, the work being done on the Semantic Web and Dublin Core.