A fascinating legal theory is beginning to emerge that may change how we think of digital information as property. What do you think?
An unauthorized computer access event occurs. System logs and other operating data provide evidence that personal information records were accessed. The logs indicate that the information records were copied and exported; however, there is no further evidence, including from the affected individuals themselves, that the personal records have been improperly used, such as for identity theft.
A second variation involves the physical theft of a laptop or other computer equipment on which personal information was stored, perhaps even in unencrypted formats that would allow fairly easy access and use of the information. Again, there is no further evidence the personal information has, in fact, been improperly used. That exact fact pattern was the basis of a recent Illinois court decision, Maglio v. Advocate Health and Hospitals Corporation.
In each variation, the individuals to whom the information relates file lawsuits, claiming negligence by the custodian and seeking compensation. However, none of them could actually prove they had experienced any direct injuries, such as the unauthorized use of credit cards, false mortgage loan applications, etc.
It is easy to imagine a more corporate scenario. A cloud service provider is storing sensitive business data of a corporate client. The service provider’s systems are compromised but, again, there is no evidence that proves any harm to the client, other than the loss of confidentiality. Indeed, the Canadian case involving Boeing, discussed in an earlier post here, involved exactly those facts. Based on the public record to date, there is no evidence the alleged thief actually harmed Boeing by delivering the copied files to a competitor. The data was still on Boeing’s systems; there was no actual loss, such as the theft of a lawnmower out of your garage.
So the new legal theory is that, unless the owners or data subjects can prove economic harm or other injuries, the mere unauthorized access does not give them the basis to pursue any legal remedy against the data custodian. Indeed, that was the exact holding of the Illinois court in rejecting a case filed by the affected individuals.
Clearly, in any of these scenarios, there is a sense of loss, a sense of invasion, a sense that trust in the custodian has truly been compromised. But will the law provide the affected subject a right of recourse where there is no economically measurable injury? Will courts of other nations recognize rulings in another country seeking the recovery of damages when, in fact, there is no measurable injury?
What is the right outcome here? Can we put a price tag on the loss of digital trust?
To be honest, I am still trying to form my own opinion. I would welcome hearing yours.
Recently, searchcompliance.com invited me to comment on whether the Digital Age requires companies to toss their entire 20th century records management programs and technologies out and start fresh with information governance for digital records. In fact, there are a number of important features of vintage records management programs worth keeping. To find out which ones, just click on the link above (may require free registration to access).
The Internet of Things is a great noun. In just three words, it describes an entirely new generation of interconnectivity among the devices with which we interact in our daily lives—toasters, refrigerators, ovens, HVAC in the home, pet monitors, baby monitors, televisions, sound systems, smoke alarms. The Internet of Things builds into all of these appliances connectivity to the Net and the Cloud, giving you better electronic control and, as reported in many sources, enabling increased surveillance of how you live your life. Consumption patterns, usage patterns within the home, food preferences, sleeping styles, and on and on. But the Internet of Things also invites malicious actors—hackers that intrude electronically. One story I saw this weekend reported hacking into a baby monitor and broadcasting offensive sounds to the infant!
So, what will it take for you to trust the Internet of Things? Will you begin to use these appliances without a second thought or, perhaps, will you take that second and third and further thought? If you do the latter, what will be the content of your thoughts? How do you decide to trust an entirely new device, and its potentials for good and for harm? What sources of knowledge will you seek out to inform you of the questions to be asking and, as well, the answers? What kind of trial experiences will you want to conduct? If there is no eBay-like crowd sourcing of evaluations of each new product, how will you decide to put these devices into your home?
All of these questions highlight that each new technology we consider as a tool with which to conduct our lives confronts us with the need to make an affirmative decision—do you, or do you not, trust the device? It is a fairly binary analysis, but immensely complicated in its execution. If you decide to trust the new tool, and it does not work as promised, or causes injury or harm, what will be your reaction to the next new device making up the Internet of Things? How will your bad experience change your criteria for evaluating the next tool? Will you change your requirements or use the same considerations you used the first time? Well, that one is easy—every experience we have shapes our next related decisions. Good choices and good outcomes move our criteria in one direction; bad choices and bad outcomes change our criteria in another direction.
As the Internet of Things moves front and center into our lives, this is a great time to focus on how each of us makes our trust decisions about new devices that extend the reach of the Net. But the stories about these new devices also confront us with the reality that every device is now a recording agent, collecting, processing, and communicating data about your interactions with the device and transporting that data into large, analytical tools that help inform the refinement and design of even more tools and their related data assets. The stories also soberly remind us that every digital device becomes an access point for malicious actors, even the sound monitors you install to keep an ear close to the murmurs of your sleeping child. What criteria will you express, and what answers will you require, before you trust each new device that populates the Internet of Things?
In my new book, I introduce a very cool thing—a tool so simple in its essential design that you can draw it out on a dinner napkin—the trust prism. Using the trust prism, each stakeholder in a new device—the investor, the designer, the manufacturer, the service provider, and the homeowner consumer—can better visualize the questions they need to be asking to reach affirmative decisions to trust the device. In doing so, the answers can also emerge more quickly and, hopefully, with fewer adverse incidents. After all, the malicious actors are only exploiting one true failure—the failure of each of these stakeholders to ask and answer the questions that effective trust decisions require!
What are your questions? What will it take for you to trust the Internet of Things? What will you want to know to trust the devices with the detailed information they will collect about how you live your life?
Why is privacy such an enormous headache for companies? For centuries, knowing your customer has been an essential requirement for success in commerce. Each evolution in business has been shaped by an improvement in the capacity of companies to better identify their customers and to create products and services that align to those customers’ profiles. Collecting information about a customer is how companies create new wealth—the information enables the companies to produce something that customers will value. The economics are simple: the more useful a product proves to be in meeting a customer’s needs, the more the customer is prepared to pay for it. For most customers, sharing information with suppliers is part of the negotiation required to secure the best fit between the product and the customer.
But privacy, as a legal issue, did not begin as a corporate issue. Both in the United States and in Europe, privacy found its place in the rule of law among the tensions between governments and the citizens that are governed. Why has privacy become so difficult for corporate entities and their supplier networks?
The simple truth is that neither the corporate entities nor the consumers properly factored in the astounding surveillance and monitoring that 21st century technologies now enable. The technologies are gathering so much more information than was previously considered to be possible. Moreover, the technologies are enabling information about consumer conduct and behavior to be collected that has nothing to do directly with the primary vendor, but can have genuine economic value to secondary consumers of the collected information.
Case in point is the announcement by Facebook that its app can now activate the microphone in cell phones to identify TV shows, music broadcasts, or live music. The FB app is collecting information that has real value, for which real companies will pay a great deal of money in order to acquire consumer-descriptive information they could not directly acquire themselves.
But no consumer pays FB anything in money; rather, they pay with the consents given to allow FB to engage in data collection. Now, as the economic value of that data becomes more and more visible to consumers, we are all waking up to the fact that the old model economics simply are not in play. Each of us is enabling access and sharing information outside the negotiations involved in directly purchasing goods or services. And that has become the sticking point.
Privacy requires a new architecture. For corporations, the winners will be the ones that understand they must be conspicuous, transparent, and ethical in describing the information they collect and how they will use it. The value of the information must be expressed in a manner that effectively induces the consumer to see the information sharing as directly connected to the reality of their commercial relationships with corporate suppliers of the goods and services they purchase.
Privacy policies, as Annie Anton has so brilliantly demonstrated with her research, are simply ineffective jumbles of words that do not meaningfully enable the consumer to understand, or factor into their buying decisions, how corporations use their personal information. What is needed is a way for consumers and companies to effectively express their respective terms, and achieve agreements that are meaningful and economically balanced. Properly designed, corporate rules of the game that are clear, transparent, and enforceable may actually motivate consumers to provide greater information, which ultimately enhances and strengthens the buyer-seller relationship.
The solution, I submit, is to follow the lead of industry to develop a functional lexicon of abbreviations that enable automated expressions and negotiations. For example, one of the great successes in enabling international trade was the publication of INCOTERMS, a concise expression of terms that could be integrated into electronic ordering systems to enable more complex legal terms of sale to be abbreviated with consistent meaning. We need the same solution to enable consumers around the world to more effectively connect with their suppliers, and produce rules of the game that are precise, controlled, and meaningful to each consumer-corporate relationship. Yes, it requires work to achieve, but the level of work is so much less than the endless bickering, litigation, legislating, and puffing that currently defines the privacy battlefields.
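To make the idea concrete, here is a minimal sketch, in Python, of what an INCOTERMS-style lexicon of coded privacy terms might look like, and how software agents could negotiate against it automatically. Every code, meaning, and function name here is invented for illustration; no such standard vocabulary exists yet.

```python
# Hypothetical sketch: a small lexicon of coded privacy terms, analogous to
# INCOTERMS, embedded in an automated consumer-supplier negotiation.
# All codes and their meanings are invented for illustration only.

PRIVACY_TERMS = {
    "PT-COLLECT-TXN": "Collect only data generated by the purchase transaction itself",
    "PT-NO-RESALE":   "Collected data may not be sold or shared with third parties",
    "PT-RETAIN-1Y":   "Collected data must be deleted within one year",
    "PT-ANON-AGG":    "Data may be used only in anonymized, aggregated form",
}

def terms_acceptable(offered: set, required: set) -> bool:
    """A consumer's agent accepts an offer only if every required code is present."""
    return required.issubset(offered)

# A supplier offers a coded bundle of terms; the consumer's software checks it.
offer = {"PT-COLLECT-TXN", "PT-RETAIN-1Y", "PT-ANON-AGG"}
consumer_requires = {"PT-COLLECT-TXN", "PT-NO-RESALE"}

print(terms_acceptable(offer, consumer_requires))  # False: the no-resale term is missing
```

Because each code carries one consistent meaning, the comparison reduces to a simple set operation rather than a lawyer's reading of a privacy policy, which is exactly what made INCOTERMS workable in electronic ordering systems.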
I look forward to introducing next month at the Computers, Freedom, & Privacy conference three key insights that are presented in my book, and explaining how they can be used to reshape privacy as a feature of global commerce.
Last week, the European equivalent of the US Supreme Court issued a controversial decision. A Spanish citizen petitioned the court to require Google to remove information from its search results about the citizen that related to a 1998 government-ordered auction required to recover debts the citizen owed. The information was published, and still accessible, on a Spanish newspaper’s website; the court concluded that information could be retained there by the newspaper as part of the “media”.
Google is not a newspaper, but the court concluded Google does collect and process personal data and, for that reason, is to be classified as a “data controller” under the EU privacy and data protection directives. As such, data controllers have an obligation to remove data from their systems if the data is “inadequate, irrelevant, or no longer relevant, or excessive in relation to the purposes of the processing”. The court required Google to honor the requests of individuals for the removal of their personal data meeting those standards.
As some have described this decision, the EU court confirmed an individual’s ‘right to be forgotten’. Commentators and pundits are flaming in all directions about the impact of this decision on privacy, freedom of speech, journalistic integrity, and the inevitability of ubiquitous surveillance and recordation of our lives.
But, in terms of digital trust, the question I care about is, “How will I trust that a data asset has been fully removed and is inaccessible?” The complexities of search engines, distributed databases, and the inter-dependencies of systems and data services create an enormous challenge. Even if a search service subject to the EU’s jurisdiction affirmatively confirms it has removed personal data as requested by a data subject, the data subject has no means of fully validating the compliance.
First, the data subject must be able to confirm the initial search service has complied. Second, the data subject has the burden of tracking down other search services that have linked to, or independently archived, various search results, links, or content. Not only must primary copies of data be accounted for, but also secondary copies, backup copies, reformattings of data into different databases, etc.
What must be done differently? The “break” that no one is discussing is that the data subject never had control of the manner in which their data can be used. In fact, the newspaper and Google were merely republishing public record content. So, the “root cause” was that the public agency established no controls on the re-use of their public records, including to sell newspapers or side-bar ads on search services.
What this case exposes is that every single acquisition of data involves a negotiation (or the absence of one). Privacy laws vest in individuals the right to control their personal data through consent mechanisms which are, simply, a contract exercise of offer and acceptance. Digital trust depends on the same negotiations and contract formations at every link in the chain.
Data sources, including virtually any public sector website, can establish suitable controls; commercial sources, such as journalists and search services, must similarly impose controls to better assure their right to use the information.
To me, this is not a constraint on freedom of speech nor on journalistic investigation. Reporters still value, and respect, the need for facts to be independently confirmed by at least two sources. Now, a new standard is emerging—can we trust that we have a right to use and publish the information in a digital age? Once that right is explicit, and its scope (including the secondary linking to that information) is more clear, then that trust is possible.
But all of those negotiations must be engineered to be more automated. It is not that hard, really. We already tag data elements with explicit descriptors; we merely need to add a way to connect those descriptors to the rights of publication and use that are associated with them. Those are the rules that must connect to the data. Use of the data requires, as a predicate, assurances the related rules will be followed.
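Here is a minimal sketch, in Python, of the design the paragraph above describes: a data element that carries both its familiar descriptors and the rights of use attached to it, so that any use must first be checked against the rules. The field names, rights vocabulary, and example record are all invented for illustration.

```python
# Hypothetical sketch: connecting a data element's descriptors to the rights
# of publication and use that travel with it. All names are invented.

from dataclasses import dataclass, field

@dataclass
class TaggedRecord:
    content: str
    descriptors: dict                         # the tags we already attach today
    rights: set = field(default_factory=set)  # the rules that must follow the data

def may_use(record: TaggedRecord, intended_use: str) -> bool:
    """Use of the data requires, as a predicate, that the use is an explicit right."""
    return intended_use in record.rights

# Modeled loosely on the Spanish case: a public record whose issuing agency
# never granted commercial re-use rights.
auction_notice = TaggedRecord(
    content="1998 government-ordered auction notice",
    descriptors={"type": "public record", "jurisdiction": "ES"},
    rights={"official-notice"},
)

print(may_use(auction_notice, "search-indexing"))  # False: no such right was granted
print(may_use(auction_notice, "official-notice"))  # True
```

The point of the sketch is the predicate: the rights check happens before the use, at every link in the chain, rather than being litigated years afterward.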
The Spanish citizen never intended for the descriptive information about the auction of his assets to become the content which helps sustain advertising revenue for news media or search services. So now we know the rules that should be in place; how long will we take to change the game?
In the last week, several online news sources were publishing analyses about the challenges of structuring compliance. One writer observed that, across different industries, compliance executives were facing common compliance challenges. Another analyst talked about the perils and uncertainties of potential compliance with differing legal rules for what will be required to build and maintain effective information security. But no one is talking about the real monster in the room—the capacity of the computer to serve as the definitive, objective, and authoritative witness.
Take a quick look at the headlines announcing virtually any new enforcement action or agreed settlement, whether in the United States or any other nation—time and again, the government agencies are building their case and prosecuting compliance actions based on digital information and records extracted from the computers. These are not merely electronic mails that someone was careless enough to send (and too busy to think about deleting a long time ago). Instead, compliance is being proven by telephone call records (including from mobile phones), application and device log-in data, revision histories to critical digital files, operating logs documenting improper access to, or alteration of, a company’s digital histories, stored in the form of routine business records.
So, how about a new, easy to explain definition of what compliance means in the 21st century? It is actually quite simple. Compliance is defined as follows:
First, compliance requires rules, rules whose performance (or absence of performance) can be recorded and documented. That means that any rules that rely on ambiguous expressions such as “reasonable,” “adequate,” “appropriate to the level of sensitivity of the data,” or similar vocabulary are not workable.
Second, compliance requires performance pursuant to the rules. No shoulda, woulda, sorta did explanations are functional. The activities of the actor to which the rules apply must be executed in a manner that allows performance to be measured against the rules. The actor can be a human, an application, a device, a system, or a company—what matters is that their conduct can be affirmatively measured and compared against the rules.
Third, compliance requires the evidence of performance (or non-performance) of the rules to be preserved and accessible. That evidence must be authoritative, objective, and its integrity cannot be questioned—in other words, the records of a company’s performance must be trusted. Without such evidence, compliance becomes merely a calculated guess as to whether a company is, in fact, performing the rules.
Stop the presses! That means compliance executives have, as perhaps their most important role, the creation and preservation of the evidence of how their company performs the rules that apply to their business. Yes, training, cultural norms, and ethical values are important to develop. But if a compliance executive does not succeed in demonstrating their job is to create and preserve the digital evidence of due performance, they will fail in their job.
So, compliance is defined far more simply: know the rules, perform the rules, and create the trusted evidence of their due performance. It really is just that simple.
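The three-part definition above can be sketched in a few lines of code. This is a toy illustration, not a real compliance system: the rule names, the measured facts, and the evidence structure are all invented, but they show rules that are measurable (no “reasonable” vocabulary), performance that is compared against those rules, and evidence of the comparison that is preserved in a tamper-evident way.

```python
# Hypothetical sketch of the definition: measurable rules, measured performance,
# and preserved evidence of that performance. All names and figures are invented.

import hashlib
import json
import time

RULES = {
    # Measurable rules only -- no "reasonable" or "adequate" vocabulary.
    "backup-daily": lambda facts: facts["hours_since_backup"] <= 24,
    "log-retained": lambda facts: facts["log_retention_days"] >= 365,
}

evidence_log = []  # in practice: write-once, tamper-evident storage

def check_compliance(facts: dict) -> dict:
    """Measure performance against each rule and preserve evidence of the result."""
    results = {name: rule(facts) for name, rule in RULES.items()}
    record = {"time": time.time(), "facts": facts, "results": results}
    # A digest over the record makes later alteration of the evidence detectable.
    payload = {k: record[k] for k in ("time", "facts", "results")}
    record["digest"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    evidence_log.append(record)
    return results

print(check_compliance({"hours_since_backup": 12, "log_retention_days": 400}))
```

Run against those facts, both rules pass, and the evidence log now holds an authoritative record of that performance, which is the compliance executive’s most important deliverable under this definition.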
But writing rules, especially within companies required to navigate different sets of public rules, can be really hard. So, in my forthcoming book, I have included the Rules for Composing Rules, a set of eight rules for how to author rules in order to enable true compliance to be achieved in a digital world. What is really cool is that I have tested these rules with graduate students in both law school and information systems engineering, and both groups of students are thriving at applying the Rules for Composing Rules (affectionately known as the RCRs).
Do you think that definition works? What makes compliance more complicated? Feel free to post your ideas.
During the last 18 months or so, I had suspended contributing to this blog in order to focus on the creation of my new book on building digital trust. The second full draft is now complete and being submitted to several publishers for consideration. This blog will become my primary voice for addressing the “great challenge” that confronts our global community—how do we learn to trust digital information? I will be previewing some of the key concepts and insights in the book. I will also be commenting on ongoing developments in order to highlight how having a different model for thinking about trust can make all the difference in how the problems are solved. It is time to begin, with a word or two about the book!
The working title is Building Digital Trust—A New Architecture to Create Wealth and Govern a Wired World. That may remain as the title—I understand publishers have a lot to say about that sort of thing. But the working title gives some good clues as to what is inside!
First, to achieve trust in digital assets, you can no longer presume the information is trustworthy. That trust must be built and “a new architecture” suggests our existing models for designing information systems and information assets are not working.
Second, anything new that involves the Net must be grounded in the motivating force of creating wealth. The book does something that many will think is a bit dull, but is terribly important: the book enables new designs to connect the traditional services of governing information (such as records management and information security) to the wealth creation purposes for which any company exists. In other words, using the tools in the book, you can make the business case for how those services do not degrade profits, but actually serve to increase wealth.
Third, whenever society creates a new type of asset, and marketplaces for that asset come into being, it is our human instinct to stabilize that asset, and create confidence and vitality for the related marketplaces. Thus, I discovered the designs offered in the book for creating wealth can also serve to re-direct and shape how we govern all of the dimensions of the Net through the rules of the nation-states, international organizations, technology standards, and best practices. Providing a unified platform that can enable wealth to be created and for governance to proceed forward with increased effectiveness is an essential deliverable to you, as a reader.
So, what do you think about the working title? If you have some questions or ideas, feel free to comment on the post.
Finally, for this first post, I discovered an important truth about undertaking a major book—you learn quickly that, in looking under the rocks and within the crevices of your big ideas, there is a LOT of work to be done to move from a grand design to a completed structure. Indeed, the challenges of the building architect became an important metaphor for my own journey. Sure, I had been thinking and talking about digital trust for nearly two decades. But when it came time to mature my napkin drawings into a full-fledged, provocative, and defensible analysis, I discovered that the Devil is truly in the details. In fact, there was more than one Devil lurking along the path, taunting me to think harder, tear apart and re-draft, and look differently at the problems . . . and the solutions.
Perhaps the coolest dimension of the entire exercise was that I discovered, as an author and as a source of the insights and solutions I was writing about, I was no different than the source of any other information asset you must determine to trust. One of the really neat things about how you evaluate, filter, and ultimately choose to rely on information is that you must first determine you are prepared to trust the source itself. This becomes a critical juncture in trust analysis on the Net: do you trust the source in order to trust the information?
So, as a source to you, as a reader, I realized I had to earn your affirmative trust at every step of unfolding the information in the book in order for you to place any confidence in the information itself. Thus, writing the book became a test of the very principles which are at the core of my work. Since the book was moving into unknown areas, that test became a bit more challenging. How would you determine to trust a guide through a dark and mysterious jungle if the guide confesses they had never been there before?
The key is transparency. The book shares with you how I found the path forward, and is open and honest about when, occasionally, I got lost and found my way again. But that honesty serves, I hope, to provide to you an ability to follow along and gain confidence in my ability to lead you to new insights into how you make trust decisions in every moment of your day, and how you can better design and build trust in the digital work you produce.
So, do you now trust me? What do I need to say or do to earn your trust? Let me know so your investment of time in reading this blog yields an exceptional return!
On Saturday, my son-in-law described his passion and interest in the neurosciences of fear. As a black belt martial artist, he is skilled in knowing how to defend and, when necessary, attack. But he explained that there is a real difference between the emotions of worry and fear. The skilled martial artist knows how to avoid fear, in part because they control the way they react to risk—what might be truly fearsome to most of us is, to the trained fighter, merely a worry. Action is required to avoid injury, but the action is not fear-based. There are certainly times when we need to react because we truly fear some outcome—but often the reaction can be ill-considered and, sadly, ineffective at deterring what has made us afraid. How does one evaluate the situation at a particular instance in time and control the fear? This question is his focus—absolutely fascinating.
Then, today I spoke with Jeff Lowder, president of the Society for Information Risk Analysts (SIRA). SIRA exists to improve how we analyze risks to information. In our discussion, we began to explore what it means to manage risk—what is one managing? Where did the idea originate that business management embraces managing risk? Is one managing the objectives of the business, or trying to manage the likelihood of events interfering with those objectives? If we accept that one cannot manage what cannot be measured (the essence of Six Sigma and other enduring management models), what must be measured to gain control of risks, and the likelihood of bad things happening?
These are surprisingly difficult questions to answer. But I wonder if those managing information risk can learn something by working out in the gym with a martial artist. When done well, both benefit from slowing down the passage of time, learning how to assess all of the surrounding circumstances, process and evaluate all of the relevant indicators and evidence, and then make rational, informed decisions. Yet, so often those addressing risk management in business act more on fear than on the actual evidence around them. The professionals develop extensive controls—both offensive and defensive—for responding to risks, but do not really try to calculate the real probabilities and make the controls proportionate to the probabilities.
One dominant method of organizing risk analysis is to grade the risks based on color—red for extreme risk, yellow for moderate risk, green for low risk. But, in a world in which we can measure and automate assessment of so many variables, why are we still relying on a methodology that is not much better than trying to fight with your eyes covered by a blindfold, unable to sense and evaluate all of the variables?
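To illustrate the alternative the paragraph above is arguing for, here is a minimal sketch, in Python, of grading risks by a measured expected-loss figure rather than by color. The risk names, probabilities, and loss amounts are invented for illustration; a real analysis would estimate them from evidence.

```python
# Hypothetical sketch: replacing red/yellow/green grades with a measured
# expected-loss figure, so controls can be made proportionate to probability.
# All probabilities and loss figures below are invented for illustration.

risks = [
    {"name": "debit card fraud",   "annual_probability": 0.30, "loss_if_occurs": 2_000},
    {"name": "laptop theft",       "annual_probability": 0.05, "loss_if_occurs": 50_000},
    {"name": "database intrusion", "annual_probability": 0.02, "loss_if_occurs": 500_000},
]

# Expected annual loss = probability of the event times the loss if it occurs.
for r in risks:
    r["expected_annual_loss"] = r["annual_probability"] * r["loss_if_occurs"]

# Rank by measured exposure rather than by color.
for r in sorted(risks, key=lambda r: r["expected_annual_loss"], reverse=True):
    print(f"{r['name']}: {r['expected_annual_loss']:,.0f} per year")
```

Notice how the numbers can reorder intuition: the "red" headline event is not automatically the largest exposure once probability is taken into account, which is precisely the assessment the blindfolded color scheme cannot make.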
On Friday of last week, my wife was routinely reviewing our bank statement online and saw 12 transactions in four states within the preceding 24 hours. Of course, I had not left our home except to get groceries. Dang it—my debit card had been compromised, something we had suffered through last year when my wife’s card was compromised while we were travelling in France. Twice in one year! We were able to immediately call our bank, report the transactions, cancel the card, and already the credits are being restored. On the one hand, the risks to us were properly managed—and our bank provided terrific support. But it left me wondering—if we have been compromised twice in one year, are the risks being properly managed? Is the fact banks have fraud reporting hotlines some indication that, in martial arts parlance, not enough training is occurring?
Over the next few months, I will be exploring the questions and the answers.
As anyone following the Olympics on even a casual basis knows, a few days ago four badminton teams were disqualified for trying to intentionally lose matches in order to improve their position in subsequent rounds. They were shown the dreaded “black card”.
At first, I was not going to write anything, then a British cyclist intentionally crashed half-way through a team race on the velodrome, which entitled the team to a restart. On the “do over” the team, indeed, had a superior time. There was no disqualification. Then today, a runner was disqualified for not running hard enough in a qualifying heat, it being reported he was “saving himself” for another event in which he was entered. So, similar behavior in different sports, and with different outcomes. But, there is unanimity in the ethical conclusion that the behavior is not acceptable.
Now, I am truly perplexed. Are there any circumstances in business where one might wish to intentionally “lose” in competition? Once you start thinking about it, you realize there are tons of situations where losing on purpose makes sense. Securities markets trade “short” and “long”; employees “throw” transactions in exchange for other consideration, etc. So, in designing your business systems, what controls are needed to protect your company against the possibility that an employee may wish to pretend to play the game?
Using my Trust Prism is somewhat like a truth serum. Just like a prism disperses light, my analytical process demands that all of the operating data from a system or process be capable of being examined. Only by doing so can we expose the type of data that may be indicative of when someone on our team is actually playing to lose.
We were watching a game of women’s handball between Angola and Russia. For those who like their sport fast-paced, physical, and tough, this is a great event! It is somewhat surprising the sport is not more popular in the US.
But here we are, sitting in Virginia, watching live video from London on a laptop, two national teams from different continents, speaking entirely different languages, playing the same game.
Even without the commentator’s audio, as spectators we were able to understand the game. Though separated by distance, language, and radically different cultures, both teams were able to play to win. When the game was over, both teams knew who had won, and graciously accepted the outcome.
So, what makes this possible? The rules of the game create the shared language for playing. The rules are the building blocks of understanding, communication, scoring, penalties, winning and losing. The rules prescribe the court, the size of the net, the shooting lines, the size of the ball, and the materials with which the ball is constructed.
Even when translated into different human languages, the rules deliver the consistency, the structure, the methodology, and the manner of scoring and winning. They enable everyone to trust the game, and play with vigor and resolve.
In the global inventory of complex information systems, each system begins as its own nation. But to engage in commerce, to exchange information, to execute transactions, and to create wealth for the operators, the systems must move onto a new, shared playing field.
It is not enough to look at the Internet as the only field of play. Health care, consumer games, financial services, supply chain management—each adds its own rules to create the game to be played. But playing the game requires that a system (and a system’s owner) know, and execute, all of the rules—the business rules, the technology rules, and the legal rules. The rules must be authored so that everyone who wants to play can access, and understand, the rules.
Using my Trust Prism, I help companies see and better analyze what all the rules are, build an inventory of those rules, and develop the systems and processes for assuring those rules are followed. In doing so, a company is better prepared to play to win.