A key principle in modern democracies is that the rule of law is known. Statutes, regulations, court decisions, agency deliberations, and even the minutes of Federal Reserve meetings are published and made available. The operating premise is that, if the rules are accessible, civil order and social continuity will be strengthened, and those who violate the rules are more easily prosecuted. The old saying that “Ignorance of the law is no excuse” rests on an important premise—the law must be published and accessible. The Internet has made much of the content of the rule of law even more accessible.
This morning, while grading a discussion forum about privacy notices, a rather big issue came into focus: how can we gain access to all of the technical rules relied upon ‘behind the curtain’ to design and operate the systems we are asked to trust in our daily lives? Think about this: while a privacy notice may promise you that your personal information will be secured, the real evidence of that promise is discovered only with further inquiry.
What standards and best practices were selected as the foundation on which to build security? What is the documentation indicating how those rules were executed properly in the design? What evidence exists to indicate those rules are being properly executed in operation, at the moment you push a submit button and transfer your personal information into the control of the system at the other end?
The New Rules of Law and Order
We are living through an important migration. The rules that provide us, within cyberspace, an ability to achieve civil order and social continuity are no longer found within the laws and regulations of the nation state. Instead, those rules are being published and maintained as standards and best practices. Unlike public law, the actual standards and best practices, such as the publications of the International Organization for Standardization (ISO) or the American National Standards Institute (ANSI), can often be accessed only by purchasing them for a fee. Some, like the rules for assuring the security of credit card information, are the work of specialized organizations; the PCI Security Standards Council does provide public access to its standards at no cost, but you have to know where to find them (www.pcisecuritystandards.org).
But merely finding the rules is not enough. Many technology standards give the ‘regulated’ entity flexibility in how to interpret and apply the rules. In turn, the standards include mechanisms by which independent third parties can be engaged to audit compliance and provide certifications that a particular system conforms to the requirements of a particular standard.
To be complete, asking whether the promise that your personal data is secured has been honored now requires more. The standard or standards relied upon by those developing the system must be identified; the interpretive design documentation must be evaluated; and the existence and accuracy of the certification (if any) must be validated. In rigorous corporate due diligence exercises, the quality and integrity of the certifying third party (essentially an auditor) are also evaluated against other published standards.
As citizens of the Internet, the outcome for any of us is fairly dramatic: we do not know the rules by which to evaluate the promises, or the performance, of those in custody of our information. For corporations, the same issues arise. The rules defining permissible conduct are no longer part of the public record; they sit instead in private standards that are more opaque, and compliance with them is more difficult to evaluate.
What Has to Change?
When we make a decision to trust a website, an information record, a database, or any digital device, our decision is a calculation. There are two critical resources required to perform that calculation. We need to identify and select the rules against which the target of our calculation must conform, and we need information on how well the target actually performs against those rules. The process is nearly mathematical, much like having an algorithm (aka the rules) and plugging in the values (aka the information).
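The algorithm-plus-values analogy can be made concrete. The sketch below is purely illustrative: the rule names and the all-or-nothing decision logic are hypothetical, standing in for whatever standards and evidence a real evaluation would use.

```python
# Hypothetical illustration of the trust calculation: the rules are the
# algorithm's inventory, and evidence about the target supplies the values.
RULES = {
    "encrypts_data_in_transit": True,    # e.g., TLS required
    "publishes_security_standard": True, # names the standard it builds on
    "holds_current_certification": True, # e.g., an audited attestation
}

def calculate_trust(evidence: dict) -> bool:
    """Award trust only if every rule is satisfied by the evidence.
    Missing evidence counts against the target: absent facts cannot
    support an affirmative trust decision."""
    return all(evidence.get(rule) == required
               for rule, required in RULES.items())

# With complete evidence, the calculation fires affirmatively:
print(calculate_trust({
    "encrypts_data_in_transit": True,
    "publishes_security_standard": True,
    "holds_current_certification": True,
}))  # True

# With a gap in the evidence, trust is withheld rather than presumed:
print(calculate_trust({"encrypts_data_in_transit": True}))  # False
```

The point of the sketch is the shape of the decision, not its details: without access to both the rules and the evidence, the calculation simply cannot run.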
Of course, when we do not know the rules by which a target has been designed or operated, it is not possible to realistically calculate whether or not to award our trust. As the rules migrate away from the public sector folio and into private standards and best practices, our ability to perform that calculation becomes more and more difficult. The logical outcome is far too familiar: rather than affirmatively calculate trust, we merely presume to trust the target, only to discover after an adverse event that our trust was, indeed, misplaced.
As corporations grow in their understanding of this ‘hidden’ migration of the rules, they are beginning to assemble the resources to change the process. They are building autonomous agents and query tools that investigate systems, devices, and processes before their trust in those targets is awarded by a transfer of data (or the opening of a port to allow access to corporate data). The agents and tools embed inventories of the private rules and issue calls for the evidence of compliance, as well as evidence of non-compliance. Only once those calculations have been performed is trust awarded, and the related business transactions executed.
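A rough sketch of that corporate pattern might look like the following. Every name here—the rule inventory, the attestation fields, the class names—is an invented placeholder; real agents would query published attestations and audit records against standards such as those named earlier.

```python
# Hypothetical sketch of a rules-based trust agent: it embeds an inventory
# of private rules, gathers evidence of compliance from a target, and only
# transfers data once the trust calculation succeeds.
class Target:
    """A system asking to receive data, carrying its compliance claims."""
    def __init__(self, attestations: dict):
        self.attestations = attestations

    def receive(self, payload: bytes) -> str:
        return f"accepted {len(payload)} bytes"

class TrustAgent:
    def __init__(self, rule_inventory: dict):
        # e.g., {"pci_dss": "v4.0", "iso_27001": "certified"}
        self.rule_inventory = rule_inventory

    def gather_evidence(self, target: Target) -> dict:
        # In practice this would call out to the target's published
        # attestations and certification records.
        return {rule: target.attestations.get(rule)
                for rule in self.rule_inventory}

    def transfer(self, target: Target, payload: bytes) -> str:
        evidence = self.gather_evidence(target)
        compliant = all(evidence.get(rule) == required
                        for rule, required in self.rule_inventory.items())
        if not compliant:
            return "trust withheld: no data transferred"
        return target.receive(payload)

agent = TrustAgent({"pci_dss": "v4.0", "iso_27001": "certified"})
good = Target({"pci_dss": "v4.0", "iso_27001": "certified"})
stale = Target({"pci_dss": "v3.2.1"})
print(agent.transfer(good, b"cardholder-data"))  # accepted 15 bytes
print(agent.transfer(stale, b"cardholder-data"))  # trust withheld: no data transferred
```

The design choice worth noting is that the check runs before any data moves: trust is the output of the calculation, never its starting assumption.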
But those rules-based agents and tools are the assets of the sophisticated. For small to medium sized businesses, and for individuals, the migration of rules away from the public rule of law is disabling their ability to test, evaluate, and render affirmative trust decisions. Most of us truly do not know what rules have been followed, nor whether the rules have failed to achieve their intended objectives.
The online businesses that understand the value of trust are responding—they are publishing more and more detail regarding the design and operation of their systems, as well as describing their controls, in order to enable our trust calculations to fire, and to fire with greater confidence, rather than rest on presumption.
Anyone even casually familiar with eBay sees the evidence in action. Seller ratings, buyer ratings, dispute resolutions, security protocols—all of these are steps toward transparency. As users, we understand the rules, and gain access to functional evidence with which to calculate our trust in the platform as well as individual transaction prospects.
Transparency Enables Trust
So, that becomes the key—building transparency into the system’s architecture of rules, and the performance of the system against those rules. When that type of information and data is offered, the company doing so gains an advantage. For the customer, rather than presume trust, affirmative decisions can be made that, in themselves, become more trusted because they are based on disclosed truth. Of course, the data must, itself, be trustworthy, and that may require even further investigation.
But the 20th century business model of relying on public law, and public agencies, to assure the civil order and our trust decisions in commerce, is fast disappearing. The 21st century model is one that will be built on transparency—both of the rules of design and operation, and of the data proving those rules are being properly executed. Those are the building blocks for digital trust, and to the winner who delivers them will surely go the spoils.