One of the powerful ambitions we have in creating RitterMaps is to enable the legal professional to be more effective in communicating with the digital world about the legal requirements that exist, and how those requirements can be met through the effective design and management of IT. Visualization is critical, not only to harnessing the power of pictures to communicate, but also to mapping facts, events, and information of legal significance against the legal requirements.
While lawyers are beginning to migrate toward visual tools in the courtroom to communicate the stories they present, they continue to lag behind the world in which their clients operate when it comes to using visual tools for the rest of the work that precedes trial. A big part of that work is mapping real statistics of how the e-discovery process is moving forward against the requirements and the overall project plan.
My thanks to Jeff Brandt, who identified this useful blog post on the value of visualization to IT executives. I strongly endorse a quick read so that lawyers can better understand the power of RitterMaps in managing the complexity of e-discovery.
Beginning March 29, 2012, I will be joining 1SecureAudit in a new four part briefing series designed for in-house counsel on automating the management of legal risk. This series focuses on four risk areas where technology and best practices in managing risk can be innovatively applied to reduce the likelihood of adverse events. The remaining briefings are scheduled once per month during the next quarter. Together, these briefings will introduce a model for building a legal risk management program that works in the 21st century.
The first briefing is Managing the Legal Risks of Outside Counsel. Many corporate lawyers presume that their law firms know how to manage and protect digital information, including the sensitive records that may serve as evidence in lawsuits and government investigations. But do law firms really do the right thing? How can corporations better address the legal and business risks that exist when outside counsel is entrusted with valuable corporate records?
Those attending this webinar will walk away with a detailed RitterMap™ that presents a structure for conducting a dialogue with outside law firms on how the firms implement modern information security management practices, as well as a detailed checklist of issues to be addressed in the engagement letters under which a law firm is employed. Attendees will also receive a discount coupon for a future 1SecureAudit eDiscovery Survival Executive Briefing and Workshop.
Registration for each webinar and access to further information about the series can be done here.
For some time, RitterMaps have received very different reactions from the professionals who have seen how I converge legal information into the power and flexibility of mind maps. Generally, those who are older or more experienced in their careers hesitate, or pause, or simply express a disfavor for visual tools. There is a strong preference for information to be presented in text format, perhaps in spreadsheets, but without the structure, context, color and visual architecture that RitterMaps enable. Of course, for those who are "born digital", there is a much more comfortable and positive response. Nearly 5 years ago, during my first training of Associates in a major law firm, one of the newer lawyers concluded the program by walking up and sharing that, "Your maps are like gold!" That enthusiasm has continued, and nearly anyone who has been raised with a mouse beneath their fingertips has been comfortable with the visual information maps.
Last week, something happened that was truly noteworthy. The 25-year-old son of one of my good friends was visiting for dinner. My friend encouraged me to show his son, an aspiring lawyer, the RitterMaps and how we are developing these tools to support the performance of legal services, and the collaborative, team management of digital information. We relocated to my office and, sitting side-by-side, I began to show him how the mind mapping function worked, and some of the features and capabilities of the RitterMaps. Then, it happened. While my own hand was gesturing in the air, the aspiring lawyer simply reached for and took control of the mouse. No permission was requested, nor expected. Instead, acting nearly on instinct, and eager to explore the further depth and complexity of the content, he took over.
Now, the dynamics changed entirely. My guest was asking questions, opening and closing topics, experimenting with restructuring and re-organizing the content, engaging and interacting with the RitterMaps naturally and without any training, instruction, or guidance on how to do things. Reflecting later that night on the moment, I remembered the scientist who is playing the keyboard for the visiting alien spaceship in Close Encounters of the Third Kind. The scientist removed his hands from the keyboard and the spacecraft took over, playing the music on its own. I felt much the same as the scientist – awed and humbled.
The singular incident underscored that the challenge of reorganizing legal content to be more accessible to those responsible for managing the digital record is worth all of the effort, false starts, and resistance from the status quo. We run out of alphabet letters, and it's difficult to keep track of whether any one of us belongs in the X, Y, or other generations. But the current and future generations are simply being wired differently to interact with, explore, acquire and apply information. The inherent presence of the digital screen, and the near-infinite accessibility of information that can be transformed into knowledge, empowers individuals to point, click, explore, defy structure, and fearlessly persist in shaping the information into the knowledge structures they require to learn, work, and even play.
Traditional publication formats–most notably hardcover textbooks, three-ring notebooks, and the ubiquitous slide deck– simply no longer work as effective tools to organize, present, and deliver knowledge that enriches, informs and empowers each of us. In developing RitterMaps, unique in their integration of legal and technology content into unified presentations, I was trying to solve a simple problem: to equip both legal and IT professionals with a resource that enables them to work better, collaborate, and reduce the risks of not understanding each other’s business languages, cultures, and performance objectives. But it turns out that we may be doing something much more important and provocative.
By taking the first steps to present legal and technology content together, and using visual information mind maps as the publication structure, we are building tools that enable those "born digital" to explore faster, to learn better, and ultimately communicate with one another in a visual space that requires no training to navigate. Yesterday, an adjunct professor at Columbia University introduced me to their master's program on Information and Knowledge Strategy. It's a very cool new program. In our conversation, she pointed out that RitterMaps have another purpose. They accelerate the capability of the learner to transform and share the information they have learned with others. A RitterMap, or any mind map, allows the learner to immediately take the instrument from which they have acquired the knowledge, and present that same content to someone else: other team members, supervisors, or perhaps even judges. It turns out this is one of the most important "knowledge strategies": to design and implement the means to share knowledge above the din of endless digital information.
It is clear that our strategy must embrace giving the X, Y, and younger generations hands-on experience with RitterMaps. If they can touch, edit, modify and adapt the content, each user becomes an owner of both the information and the knowledge they are experiencing. This control enables the user to construct their own architecture and, in the final analysis, use the knowledge to their best advantage. And isn't that essential if effective information governance in a digital world is to be achieved?
I will long remember the evening when I lost control of my mouse. It was one of the best experiences of this journey.
This blog is being published here and, in a few days, at The Ritter Academy. While focused on e-discovery, it has even broader implications when the analysis is applied to broader initiatives in building trust in digital information. Whether for improved privacy, information security, electronic commerce, unified communications, cloud-based service delivery—or electronic discovery—success can be improved if we can employ maps that show the path forward, put people on the same page and allow them to see the entire picture!
The economic pain and adverse outcomes that companies have endured as a result of their historical underfunding of information management have sent the message –we have to improve how we find and manage digital information as evidence.
But, despite the message (and the fact that fewer messengers are being killed for having delivered it!), companies (and their lawyers) are still struggling to implement functional, defensible programs.
After all, if they were all getting e-discovery right, it's unlikely Gartner would have projected the global marketplace for e-discovery software to grow 47% from 2009 to 2011! And, Gartner acknowledges, that projection largely takes into account only the U.S. market. What does Gartner project as the overall spend for this year? $1.49 billion.
Of course, Gartner is only looking at one of the many checks to be written to install improved e-discovery capabilities. External costs include legal, consulting, hardware, training, software support, etc. Internal costs include many of the same, as well as operating costs associated with the facilities, personnel, utilities, and management. To my knowledge, no one has estimated the overall spending that is being allocated to the overall investment required for e-discovery processes to be improved.
But, in my mind, spending on e-discovery software (as well as the other costs) is a patch: a remedy for the absence of the records and information management infrastructure that would enable organizations to find and validate information required for any legitimate purpose (including as evidence) with appropriate speed, quality and accuracy. So, why are companies spending so much? Why does the number of cases reported by e-discovery analysts continue to increase, nearing two published decisions per week in 2011?
E-Discovery as Organizational Change
To answer those questions requires looking at electronic discovery differently: let's think of the innovations demanded by improving electronic discovery as organizational change. When we view e-discovery in that manner, a critical truth to navigate emerges: across industries, technologies, geographies, and processes, organizational change usually fails!
According to consistent research during the last decade, here are some sobering statistics:
- Up to 85% of organizational change initiatives fail.
- Up to 70% of these failures are due to flawed execution.
- The failure rate of change initiatives dependent upon people (reengineering, TQM, cultural change) is 80-90%.
- Less than 10% of what is taught to staff in the classroom is transferred to the job.
In fact, e-discovery process improvement demands change on many levels. To name a few:
- movement from indexed boxes of stored paper records to massive digital archives and tapes;
- movement from stable content on a printed page to dynamic content, such as web pages and databases, that demands constant management to assure required preservation occurs;
- movement from controlled records stored within an enterprise to outsourced and/or cloud-based storage involving multiple locations, often shifting without the prior control of the record owner;
- migration from wired networks and systems of controlled devices to wireless, remote, mobile devices;
- acceleration of discovery from a deferred exercise (after all legal motions and counter-motions on legal grounds are exhausted) to an exercise requiring immediate and comprehensive action at the outset of a case (or even before, if litigation is "reasonably foreseeable");
- reconfiguration of the teams required to recover and manage potential evidence, from largely internal resources (records managers and key witnesses) to significant engagement of contracted assets: software, hosting services, outside law firms, e-discovery consultants, forensic analysts, etc.;
- migration from largely manual management (Bates numbering, indexing, physical review) to automated management (electronic UPC codes, digital processing, automated reviews with search terms, review of electronic files, etc.);
- introduction of automated logic, predictive coding, management of false positives, etc. to replace attorney review and judgment on relevancy and responsiveness.
That list is pretty compelling—and a good insight into how much e-discovery process improvement demands as organizational change. But what can we learn from the world of organizational change management to help improve the investment returns and probabilities of success within companies? What can consultants, law firms, software vendors, and data custodians do to improve their own credibility with corporate customers?
Improving Organizational Change with Maps
The opening statistics hint at where the e-discovery process can be dramatically improved. But we are continuing to also learn how RitterMaps are more powerful than we first realized at enabling improved outcomes for organizations.
Flawed execution is a reflection of poor planning, poor discipline in execution, and the absence of meaningful metrics against which to incrementally measure progress. But flawed execution is also a function of individuals not being trained well enough in the overall program's objectives, and in the risks of failure, to be motivated to perform as expected.
RitterMaps enable more effective training, but they also extend beyond the actual courses, keeping a picture of the path forward in front of you. Used as a platform against which to customize your program, RitterMaps put everyone on the same page.
A structured change management methodology is critical to implementing significant changes. Studies report that, aside from effective sponsorship, a structured process with tools is the number two contributor to success.
RitterMaps serve as valuable visual tools with which to structure and design the path forward in e-discovery process improvement. The range of depth on different RitterMaps is intentionally designed to enable high-level managers to engage with the process, while detailed structures become functional checklists. The maps are tools—use them beyond the classroom and you begin to leverage your investment in Academy enrollment to an entirely new level.
An effective communications plan, as well as a plan for continually reinforcing to your team members throughout the organization the value of the new processes, is vital. It is stunning that companies invest heavily in educating staff, only to see that 90% of the knowledge delivered in the classroom is never transferred to the job, much less transferred from the people attending the class to their colleagues, supervisors, and others.
RitterMaps are powerful tools for enabling improved communications. Designed to be more than “a picture which says 1,000 words”, our RitterMaps enable you to share with your team members what has been learned—literally putting everyone on the same page!
Our RitterMaps are also there to enable you to do your job better, putting the knowledge acquired in a course or lesson at your fingertips. In instructional design theory, these are called "sidekicks": tools that carry the knowledge forward into the workplace, allowing the learner to do their job.
So, perhaps we are onto something important. E-discovery confronts companies with immense organizational change challenges, with high risks of failure inherent to the process. RitterMaps are much more valuable than colorful artifacts of learning—they can become tools that enable collaboration, improve project execution, facilitate structured processes and empower better communication and better application of knowledge in how people do their jobs!
For many information professionals, outsourcing, cloud sourcing, and distributed information systems have greatly complicated their ability to construct and manage corporate compliance programs. These business models embrace data mobility, velocity, and dissemination (data created, distributed to, and stored in, more locations), variables which are in direct conflict with the control methods, process documentation, and management models which are the pillars of good compliance.
With increasing frequency, I am asked how companies build their compliance around data-intensive rules and regulations, such as privacy and data protection, when the mobility, velocity, and dissemination of data continue to accelerate. The conflicts are translating into obstacles to potential efficiencies in business—in many cases, companies elect to ignore those conflicts, forging ahead and abandoning the assurance that compliance can be maintained.
This post introduces a design strategy that is rapidly maturing—and is a foundation stone for rules-based design and information governance. It explains what I am trying to achieve in creating RitterMaps and working with information mapping to integrate legal compliance fully into IT design and management.
The problem is relatively simple to explain: long before the Internet, governance established itself as something that occurs within defined geographic boundaries. The authority of a government was limited to the boundaries of its jurisdiction. One immediately thinks of the sheriff's posse in hot pursuit that stops at the edge of the river, having reached the state border, with no authority to continue the chase.
As corporate compliance developed as a business value and process, compliance has adopted the same notion of geographic boundary. Compliance begins by asking what assets (people, property, technology, data) are present in identified locations, and then asking, based on that presence, what rules exist for which compliance programs should be developed. Similarly, if a jurisdiction did not have identified rules, companies have previously designed, and continue to design, their operations to take advantage of operating in a low-regulation environment.
When new locations are evaluated for expansion, or acquisitions are considered, compliance (and the related cost of compliance) is a routine required element in the analysis. The same business process has proved useful in the first decades of outsourcing (before on-demand, cloud-based sourcing). Based on location, companies calculate how the outsourcing impacted compliance. In all of these situations, the one variable that was viewed as stable was geographic location.
Trial lawyers use the same analytical process: the first questions asked, for any case, are “Where is the court located? What are the rules? Are there alternative forums that offer better rules?” For them, once the venue is identified, the rules (and the obligation of the lawyer to work within those rules) are known and relatively stable.
All of these analytical models struggle to sustain themselves as current (and future) generations of technology gain momentum. Essentially, the digital information assets are acquiring a liquidity that defies compliance tied to specific locations, personnel, or devices. As the business case for increased liquidity (through cloud-based sourcing and distributed storage) increases, the compliance effectiveness is proportionately compromised.
In negotiating agreements for cloud services, companies confront the data liquidity issue directly, particularly as lawyers struggle to achieve a result that assures the compliance function that the mobility of the data across a vendor’s cloud facility will not jeopardize the data protection and privacy controls. A common strategy, often unsuccessful, is to limit the cloud provider’s flexibility so the contracted services will be performed only in identified locations.
What, then, is a solution that works? The answer is not to abandon the importance of the geographic variable, but embrace it differently. We are not going to change the mobility, velocity, or dissemination of data—but we can change how we view building and administering compliance. The strategy ultimately enables any business to achieve even higher scores of mobility, velocity, and dissemination (all of which contribute to business efficiency, agility and operating profitability), with more efficient and lower-cost compliance.
The key is to presume that data will always be mobile. But, however much the data moves, the data will always be some "where": it will be present in a physical location, stored in a device that has some known geographic location. Just as it has since the days of horse-galloping posses, geography matters. The difference is that we must build an inventory of the rules for all possible locations and then, and only then, build suitable compliance processes that work across all acceptable locations.
In other words, do not let the location of a corporation’s offices, manufacturing facilities, or data processing centers drive the analysis. Instead, assume the mobility of the data and design compliance to enable that mobility to be maximized to your business benefit.
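This inversion can be made concrete with a minimal sketch (all location names, rule names, and control names below are hypothetical, invented for illustration): rather than fixing storage locations first, enumerate candidate locations and keep only those where the controls already implemented satisfy every rule that applies there. Data may then move freely among the surviving locations without jeopardizing compliance.

```python
# Hypothetical example: filter candidate data-storage locations down to
# those where our implemented controls cover every applicable rule.

REQUIRED = {
    # location -> rules that data stored there must satisfy (invented)
    "US-East": {"breach-notification", "sector-privacy"},
    "Germany": {"breach-notification", "eu-data-protection"},
    "Singapore": {"breach-notification"},
}

# Controls our compliance program has actually implemented (invented)
IMPLEMENTED = {"breach-notification", "sector-privacy"}


def acceptable_locations(required, implemented):
    """Return locations where every applicable rule is covered."""
    return sorted(loc for loc, rules in required.items()
                  if rules <= implemented)  # subset test: all rules met


print(acceptable_locations(REQUIRED, IMPLEMENTED))
# -> ['Singapore', 'US-East']  (Germany fails: eu-data-protection uncovered)
```

The design choice is that mobility is the default and compliance is the filter, not the other way around: adding a new candidate location is a data change, not a redesign.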
In pursuit of this concept, some companies have been building “compliance maps”. Most often, the work product is not a map at all but a detailed legal inventory of the applicable rules. But technology now exists to organize and present the compliance inventory differently—using maps themselves.
Mapping software used for geographic maps includes the ability to show layers of information, often stored as "rasters". Layers can show different types of information: by using color codes for the pixels of a given location, one can view elevation, terrain quality, land use, water density, etc.
In many respects, when we are trying to build an inventory of the rules applicable to data in a specified location, we are also constructing compliance "rasters": international rules (such as conventions), national rules (laws and regulations), sub-national rules (state and provincial laws and regulations), local rules, etc. In effect, we are identifying the rules by the boundaries of the locations to which they apply and stacking them up to determine the overall inventory that applies at any location.
The data employed to inform a raster sits in a separate database. The process is within reach to build databases that can be layered, and then visually portrayed. But, in doing so, we are also building a complete inventory of the rules, and delineating, based on the triggers which make a rule applicable, when those rules require compliance.
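The layering idea can be sketched in a few lines of code (the jurisdictions and rule names below are invented for illustration, not a real legal inventory): each layer maps a boundary to the rules it contributes, and the inventory for a data location is the stack of every layer covering that location.

```python
# Hypothetical layered rule inventory: international, national, and
# sub-national layers each contribute rules; the inventory at a data
# location is the stack of all layers that cover it.

RULE_LAYERS = {
    "international": {"EU": ["EU Data Protection Directive"]},
    "national": {"US": ["HIPAA", "GLBA"], "DE": ["BDSG"]},
    "sub-national": {("US", "CA"): ["CA breach notification law"]},
}


def inventory(country, region=None, in_eu=False):
    """Stack the rule layers that apply to data stored at one location."""
    rules = []
    if in_eu:  # international layer covers the whole EU boundary
        rules += RULE_LAYERS["international"].get("EU", [])
    rules += RULE_LAYERS["national"].get(country, [])
    if region is not None:  # sub-national layer keyed by (country, region)
        rules += RULE_LAYERS["sub-national"].get((country, region), [])
    return rules


print(inventory("US", region="CA"))  # US layer plus California layer
print(inventory("DE", in_eu=True))   # EU layer stacked on German law
```

Just as with geographic rasters, the visual portrayal is a view over the database: the layers themselves are ordinary data, so adding a jurisdiction or a rule is an update to the inventory, not to the lookup logic.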
In building and expanding this type of inventory, which uses location to evaluate the rules that govern mobile data, we are putting in place an infrastructure component that allows faster evaluations of compliance requirements and, in turn, better and improved negotiations among customers and suppliers. There are challenges to doing so, but the reward is the ability to truly map the law into structures that can be integrated into IT governance and compliance with far greater business agility.
If you would like to have a conversation about mapping the law for your business, pick up the phone and give me a call.
In the short life of the Ritter Academy, we have uncovered an unexpected barrier to training that, no matter how powerful and effective our content, we must call out in order to find a way to overcome it. Another title for this post might begin “You can lead a horse to water, but. . . .”
A reality we have learned is that professionals are afraid of technology, and that fear is inhibiting their enthusiasm to learn. One young litigator even confessed to me that he was upset he was required to attend training on e-discovery—“I went to law school to avoid all this computer technology stuff.”
Except for the IT professional, many experienced professionals remain quietly intimidated by technology and its implications for how they do their jobs. But, instead of investing in overcoming those fears, they remain largely in denial, believing that someone else can take the responsibility, “cover” what they do not know, or minimize the depth of learning they must pursue.
But in evaluating what could be the “root cause”, we have identified some other factors that may be more responsible than simple fear.
- First, to take a formal class or attend a conference is, itself, an admission that you don't know what others might expect you to know. That can be tough, particularly as an individual gains seniority.
- Second, attending a formal class or conference also means suspending the revenue-oriented activities the professional is otherwise expected to perform. This is a direct cost that makes it harder, particularly in the current economy, to voluntarily choose to attend formal training.
- Third, the legal profession was built on a business model which presumed that research, analysis, and mastering the rules for particular matters were billable activities—in other words, ongoing learning was paid for by the client. That is no longer the case—clients expect their team members to know the answers. They are less and less willing to pay for research, particularly on topics that the client presumes the attorney already has mastered.
- Yet, the legal profession’s requirements for continuing professional education, set by each state for its lawyers, are among the lowest for any profession. State rules vary, but the norm is about 12 hours per year. By contrast, medical doctors generally are required to complete 50 hours per year, and even more in some certified specialties.
- In some respects, the low requirements in the legal profession for continuing education are aligned to the business model for billable research—in other words, the bar association need not establish more demanding requirements since, for every lawyer, ongoing research is presumed to be part of any engagement.
So, we know we have built a learning resource that (a) enables a professional to learn at their own pace, without attending formal events, (b) delivers flexible access and control, allowing a professional to learn when doing so minimally interferes with their ongoing job duties, and (c) by using our visually engaging, interactive RitterMaps, rapidly accelerates the transfer of what one needs to learn, thereby making the overall time investment smaller, and more effective.
But how do we overcome the fear of change? How do we make professionals more comfortable with continual learning as part of how they do their jobs? How do we get experienced professionals to overcome their pride and embrace learning as a strategy for remaining competitive and valued, particularly as courts continue to impose sanctions on lawyers who demonstrate incompetence in managing electronic evidence?
I welcome your thoughts and dialogue about this important issue.
It did not surprise me that the second indictment after Bernie Madoff charged his software developers. To succeed as the wizard of lies, the title given to him by Diana B. Henriques in her new book, Madoff needed to create digital information that was accepted as evidence of the truth. No matter how gracious he could be as a salesman, investor or executive, the data had to be convincing. And we know the rest of the story. The software itself served up convincing presentations but was, itself, designed to deceive.
I have not had the time to fully investigate all of the fine reporting done by others; I look forward to doing so. But I am fairly confident that the SEC’s failure in this case, and a persistent weakness in continued operations, is the absence of rigorous rules-based review of software integrity. Software that is properly designed, with solid documentation, that can be evaluated by auditors, should be the essential requirement for doing business in a regulated space.
The shift being suggested is, perhaps, significant. The records of a business have long been the focus of regulatory supervision. But we have come a long way from the anecdotes of two sets of books under the counter—one for the accountant and one for the government. Instead, government, to serve the public mission, must develop rules for the systems themselves from which records originate. Focusing on security is useful, but Madoff and his developers demonstrated that the strongest perimeter cannot truly provide trusted records. Instead, we must recognize that the inability to see how systems are designed, to understand how code is authored, and to validate the integrity of the resulting reports and data—all of these current conditions degrade trust.
But, even without public sector regulation, our own business interests should compel the same result. Our businesses demand reliable and trustworthy data on which we can make business decisions. As cloud services continue to tempt companies, the reality is that the service providers themselves degrade trust by not taking seriously their customers' need for assurance that their data, once mobile across the cloud, retains the same trusted attributes it holds within a corporate system.
So, whether sourcing from cloud providers, software applications (commercial or home-grown), or yet-to-be-invented solutions offering further economic efficiency, we fail to discharge our obligation to our investors, shareholders and customers if we do not learn from the lessons of Madoff and insist upon greater transparency in knowing the structure and function of the systems and services from which the data is produced.
Earlier this week, I presented a webinar for ISACA and Searchcompliance.com on developing a cloud strategy. The central theme: demand that the service be performed against known rules, for which compliance is integrated into the contract. In other words, “trust . . . but verify.”
Over 1,200 people around the world were actively logged in. At the end of my remarks, I offered that if someone wanted the RitterMap I had used, send me an e-mail. Astonishing—over 250 requests, and from corporations and organizations whose names are immediately known, flooded in (in a good way)! So, if you would like to obtain the same RitterMap, send an email to firstname.lastname@example.org requesting the RitterMap on Developing a Cloud Strategy.
It’s really exciting to know that what I am doing with visual maps is at the cutting edge. Now, with nearly 2 million users, Mindjet has invited my team to join their Developer Network, which allows us to connect our passion for mapping law and technology together with some of the most innovative developers integrating mapping into other practical applications and solutions.
Mindjet realized that mappers who create really cool content and developers creating really cool apps can achieve even more when they connect. This is an exciting development for us as we move forward toward the launch of the Ritter Academy, an online training facility that uses visual-based learning to build trust in digital information.
Thanks, Mindjet. We will try to earn your confidence with our continued innovation.
Wow. You know you are doing something really neat when a major software company, Mindjet, features your solutions in their monthly newsletter to over 1.8 million users of their mapping software tool! Last month, senior executives at Mindjet were introduced to the provocative innovation of RitterMaps: using visual mapping to be able to see, and navigate, all of the rules by which digital information is to be governed. They were excited and asked me to write a blog post about it for their website. They were so pleased with the post that they featured it in their monthly newsletter.
So, take a look at what I wrote for them here, then come back and, if it makes sense, give me a call to learn how I can show your team how to use visual mapping to navigate the hazards and reach the destination we seek in building information governance—information we can trust to speak the truth.