Let the battle begin. At stake is global supremacy in achieving compliance with complex legal rules. For now, there are only two fighters, and only one of them realizes the fight has already begun.

In one corner, IBM's Watson, selected by two different vendors employed by the same customer: the US Air Force. IBM is best known in game-playing circles for Deep Blue, which beat the world chess champion back in 1997 (so very 20th century); Watson itself bested the best human Jeopardy! champions in 2011 and is the center-stage star of IBM's new "cognitive computing" marketing campaign. The Air Force has assigned the vendors the mission to ingest, deconstruct, and reassemble the entire suite of Federal regulations that govern the purchasing process.
According to the Washington Post, the Federal Acquisition Regulations are 1,897 pages of “the densest prose on the planet.” (The link to their story is at the end of this post). One Federal representative is quoted as saying, “It’s unreasonable to expect that a single individual [can] fully understand all of the relevant [pages] to answer a specific question.”
In the opposing corner, Google DeepMind's AlphaGo. No one has hired AlphaGo to do anything yet with any Federal regulations. But earlier this week, AlphaGo prevailed in a stunning 4-1 victory over Lee Sedol, a 9-dan professional and one of the world's best players, in a very different, far more complex game: the game of Go. (Type 'computer wins at go' into your search engine of choice to learn more.)
Chess vs. Go? Artificial intelligence developers consider Go far more complex than chess; the number of possible board configurations exceeds the number of atoms in the observable universe. AlphaGo's distinctive feature is that it uses two neural networks to calculate the probability and strength of each possible move.
One of those networks, the first one fired up for each move, is called the "policy network". The policy network evaluates the current context (i.e., the field of play on the Go board) and, based on that context, filters out millions of possible moves because they do not make sense within it.
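In code, that filtering step might look roughly like this. To be clear, this is my own toy illustration, not DeepMind's implementation: the `policy_probs` scorer here is a stand-in for a trained network, and the names are invented.

```python
# Toy sketch of a "policy network" pruning step: score every legal move for
# the current context, keep only the handful of most promising candidates.

def policy_probs(board, legal_moves):
    """Stand-in for a trained policy network: here we simply favor moves
    closer to the center of a 19x19 board. A real network would score
    moves from learned patterns of play."""
    center = 9  # 0-indexed center of a 19x19 board
    scores = [1.0 / (1 + abs(r - center) + abs(c - center)) for r, c in legal_moves]
    total = sum(scores)
    return [s / total for s in scores]  # normalize into a probability distribution

def prune_moves(board, legal_moves, keep_top=5):
    """Filter the full move list down to the few most promising candidates."""
    probs = policy_probs(board, legal_moves)
    ranked = sorted(zip(legal_moves, probs), key=lambda mp: mp[1], reverse=True)
    return [move for move, _ in ranked[:keep_top]]

board = None  # the toy scorer ignores board contents
all_moves = [(r, c) for r in range(19) for c in range(19)]  # 361 candidates
candidates = prune_moves(board, all_moves)
print(len(all_moves), "->", len(candidates), "candidate moves")
```

The point is the shape of the process: a large universe of possibilities enters, context-based scoring draws a boundary, and a small subset comes out for deeper evaluation.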
Wait. AlphaGo's first major component is pre-populated with rules against which contextual information is ingested in order to draw a boundary around a smaller subset of rules with which to evaluate the next moves in a game? And they call it a policy network? That process is exactly the one used by the Rules for Composing Rules (also known as the RCRs), one of the key tools presented in my book.
The RCRs are a set of eight rules for how to author rules. Their goal is to guide those who author rules so that machines can execute the new rules, and achieve compliance, with greater precision and fewer errors, if any at all. There are two areas in which the RCRs offer tremendous power.
The first is guiding the mapping of rules with ambiguous terms into structures and expressions that can be engineered into applications and programs. Governmental regulations are the archetypal target for this use; their ambiguity makes human or automated navigation nearly impossible. That is, of course, why the Air Force is taking on the problem with Watson.
The second is aiding the extraction, from human behavior and "instincts," of the rules by which we make decisions. That is exactly what AlphaGo has achieved, ingesting and learning from over 100,000 games played by some of the world's best players.
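To make the first idea concrete, here is a minimal sketch of what a rule looks like once its ambiguous terms have been mapped into something a machine can execute. The procurement rule and all of the names below are hypothetical illustrations of mine; the book's eight RCRs are not reproduced here.

```python
# A rule authored for machine execution pairs an unambiguous condition
# (when does the rule apply?) with a measurable requirement (what must be
# true for compliance?), so compliance can be computed rather than argued.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    condition: Callable[[dict], bool]    # when does the rule apply?
    requirement: Callable[[dict], bool]  # what must be true for compliance?

def evaluate(rule: Rule, record: dict) -> str:
    """Return a computed compliance verdict for one record."""
    if not rule.condition(record):
        return "not-applicable"
    return "compliant" if rule.requirement(record) else "non-compliant"

# Hypothetical procurement rule: purchases over $10,000 need two approvals.
rule = Rule(
    rule_id="FAR-EX-1",
    condition=lambda r: r["amount"] > 10_000,
    requirement=lambda r: r["approvals"] >= 2,
)

print(evaluate(rule, {"amount": 25_000, "approvals": 2}))  # compliant
print(evaluate(rule, {"amount": 25_000, "approvals": 1}))  # non-compliant
print(evaluate(rule, {"amount": 500, "approvals": 0}))     # not-applicable
```

Contrast that with prose like "significant purchases require appropriate review": no machine, and few humans, can compute compliance with it.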
So, sitting here on a Saturday morning, I realized that the RCRs are actually connecting to two of the great advances toward effective artificial intelligence now underway. That is kind of cool. But you likely want to know a bit more, without going out to buy my book or attend one of my courses at Johns Hopkins or Oxford.
The RCRs define a controlled, scalable process for how to author rules in order to achieve calculated, affirmative outcomes with mathematical certainty. But the RCRs go one step further—they guide the author of rules to also design the records that preserve the history of the outcomes. In other words, the RCRs enable rules to be engineered which both assure and document compliance. That is the essence of governance itself—requiring certain processes to be performed (or prohibited) and having available the information required to calculate whether compliance with those processes has occurred.
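That second half, designing the records alongside the rules, can be sketched in a few lines. Again, this is my own illustration with invented names, not an excerpt from the book: the point is that every rule evaluation also writes the evidence that governance later needs.

```python
# Sketch: each compliance check appends an audit record, so the same
# process that assures compliance also documents it.

import json
from datetime import datetime, timezone

audit_log = []

def check_and_record(rule_id, subject, passed):
    """Record the outcome of one rule evaluation as durable evidence."""
    record = {
        "rule": rule_id,
        "subject": subject,
        "outcome": "compliant" if passed else "non-compliant",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(record)
    return record

check_and_record("FAR-EX-1", "PO-1001", True)
check_and_record("FAR-EX-1", "PO-1002", False)

# The log itself answers the governance question: did compliance occur?
print(json.dumps(audit_log, indent=2))
```

Without the log, a rule can only require behavior; with it, the system can also prove, or disprove, that the behavior occurred.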
In my Oxford course on Building Information Governance, my students have creatively realized the RCRs are also a fabulous way of auditing existing rules to determine if they properly direct an actor to do the right thing. If performance cannot be defined and capable of mathematical measurement, there will be an inherent risk of failure. They use the RCRs to identify where rules fall short, and then author the new rules to improve performance (and auditability of compliance).
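A toy version of that audit, with hypothetical rule text of my own, might flag rules whose required performance is not stated in measurable terms:

```python
# Crude audit pass: a rule is auditable only if its requirement can be
# measured; vague qualifiers signal an inherent risk of failure.

import re

rules = {
    "R1": "Respond to security incidents promptly.",         # vague
    "R2": "Respond to security incidents within 24 hours.",  # measurable
    "R3": "Use reasonable efforts to encrypt data.",         # vague
}

MEASURABLE = re.compile(r"\d+\s*(hours?|days?|percent|%|\$)")
VAGUE_TERMS = ("promptly", "reasonable", "appropriate", "timely")

def audit(rule_text):
    """Classify one rule as measurable, vague, or needing manual review."""
    if MEASURABLE.search(rule_text):
        return "measurable"
    if any(term in rule_text.lower() for term in VAGUE_TERMS):
        return "vague: rewrite with a measurable threshold"
    return "review manually"

for rid, text in rules.items():
    print(rid, "->", audit(text))
```

A real audit is of course far more than keyword matching, but the output illustrates the exercise: the vague rules are exactly the ones the students would rewrite.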
Watson is currently trying to make sense of a complex set of existing rules. AlphaGo has taken a different approach: beginning with the very simple rules of Go and learning from experience (which is another way of saying that AlphaGo adapts by using the records created and stored from prior games) to derive new rules that enable the machine to beat a human at the game. In other words, it seems AlphaGo, particularly in its use of the policy network, is further along.
When Watson and AlphaGo begin to tackle the same challenge, the battle will be explosive. Other competitors will come out of stealth mode and begin to join the fight. Much like professional wrestling, a cage will need to be lowered down over the ring to contain the fight.
But something will be required to set things off. What will be the challenge that ignites the battle? Complying with the procurement regulations of a single nation for just one of its military forces is actually a smallish project compared to what lies ahead.
The big one already confounding corporations is how to engineer compliance by corporate networks and systems with the global, conflicting, volatile, and often incomplete rules for assuring the security of those assets against adverse actors and threats. The rising costs of security, tracked against the increasing complexity and success of the attacks, evidence the failure rate of our current efforts.
In some respects, while its effort is laudable, the Air Force is not tackling the elephant in the room. The most expensive economic drain on our current capabilities is our inability to comply with the security and privacy rules that already exist. Using AI to evolve those rules—whether governmental, industrial, or internal—into a coherent, integrated, programmable set of requirements with which compliance can be calculated will do far more for everyone.
Achieving digital trust means we acquire confidence that our systems, networks, devices, applications, and data assets comply with known rules which are being properly executed. The RCRs can accelerate our progress toward that end. Unlike the current political process of apportioning delegates based on the proportion of votes won, the contest to achieve digital trust will more likely be one of “winner takes all”. Watson has some catching up to do.
Here is the link to the Washington Post coverage of the Air Force’s Watson initiative.