Debating AI in an Aspen Summer: “Collective Cognitive Agility”

In the summer of 2012, I attended a small strategy retreat in Aspen on the theme “Navigating the Emerging Economy”, discussing how organisations can win. What is a leader to do in a world that is not only changing but also changing your own organisation’s ability to deal with change?
At the end of the deliberations, James Manyika, Andrew McAfee, Erik Brynjolfsson and I decided to continue one particular argument over coffee and snacks: how fast will AI take over jobs requiring general cognitive competence? Funnily enough, someone captured my argument with McAfee in the corridors earlier in the day (see pic).
Six years on, I haven’t changed a single point of view.
AI will improve the “collective cognitive agility” of teams, one of the biggest problems in management today.
The “strategic overhead” of keeping aligned even a small team of people who are supposed to work together on intellectual tasks is so high that changes and visions requiring multiple steps over long periods of time almost never get executed within budget. And such initiatives, when completed at all, routinely fall short.
A huge chunk of that problem relates to bad tools that make syncing ideas excruciatingly hard. This is particularly so among generalist knowledge workers, who tend to be more intellectually independent than ultra-specialists, blue-collar workers and semi-skilled labourers. “Herding cats,” I believe is the phrase.
AI-enhanced tools will make it far easier to illustrate patterns, present data, reduce descriptive ambiguities, and help team players know when they are actually making the same point using different lexicons (a frequent cause of wasteful conflict). Today’s discussion tools are poor as they do not adapt to the unique communication styles, cognitive preferences, and learning needs of discussants.
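As a purely illustrative sketch of the “same point, different lexicons” problem: a real AI tool would use learned semantic embeddings, but even a toy version with a hand-written synonym map (entirely hypothetical here) and cosine similarity over word counts shows the idea of normalising vocabularies before comparing statements.

```python
import math
import string
from collections import Counter

# Hypothetical synonym table standing in for real semantic embeddings.
SYNONYMS = {"revenue": "sales", "turnover": "sales",
            "buyers": "clients", "customers": "clients"}

def bag_of_words(text):
    """Lowercase, strip punctuation, and map each word through the synonym table."""
    words = text.lower().translate(str.maketrans("", "", string.punctuation)).split()
    return Counter(SYNONYMS.get(w, w) for w in words)

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def same_point(statement_a, statement_b, threshold=0.7):
    """Flag two statements that make the same point in different lexicons."""
    return cosine_similarity(bag_of_words(statement_a),
                             bag_of_words(statement_b)) >= threshold
```

With this stub, “We should grow revenue from our buyers” and “We should grow sales from our clients” normalise to identical word bags and are flagged as the same point, whereas unrelated statements are not.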
Adaptive AI will also reduce the time spent actually putting together presentations, charts, diagrams, etc., to convey ideas, thereby enabling people to focus on the merit and utility of those ideas. I reckon that a wholly new language is imminent: “Cognitive Agility Markup Language”.
Does that mean that the BCGs and McKinseys of this world would be robbed of some of their razzmatazz? Of course. But who wants that? Do even McKinsey hotshots prefer the status quo? I am not so sure.
A lot of consulting initiatives fail because synchronising the analytical styles of consultants, the ideation preferences of entrepreneurs and the processing heuristics of managers is today an absolute change agent’s nightmare!
Consultants cannot be results-oriented in the current system, full stop! They have to hold on to the outmoded “time and materials” billing approach precisely because deep down they know that a results-oriented focus will be held hostage to the “cognitive synchronization” problem I have described above. The costs of syncing with their clients using current tools are simply prohibitive. Even the most advanced neurolinguistic programming techniques achieve only crude results.
Sophisticated artificial intelligence would thus remove the “mechanical drudgery” not just from semi-skilled labour contexts but also from the knowledge industry.
What many people don’t realise is that so much of knowledge work (even in academic research) today is packed full of drudgery! The growing popularity of tools like Dedoose and NVivo testifies to this. But imagine the power of truly intelligent “ideas conveyance” tools in the collective hands of research and execution TEAMS.
The counter-argument of Brynjolfsson and McAfee then was that over time a “productivity” effect shows up, and inevitably one finds that one needs fewer teams, whether of the research or execution variety.
My contention is that this kind of “productivity” actually allows team sizes to expand and more teams to coordinate. And though it also enables a smaller team to do more, that “more” often refers to coordination and collaboration to overcome critical execution hurdles that would otherwise have been ignored, to the peril of the entire enterprise.
We know that 70% of all change efforts fail. Overall, execution fails far more often than it succeeds. This implies wasted resources: products that don’t get created, sales that aren’t closed, systems that do not improve, etc.
What AI productivity proponents fail to realise is that very often these are BOTTLENECKS that not only limit labour throughput but also results throughput. Solving those problems more effectively allows more execution to happen, which grows the opportunity to experiment even more with change ideas, opening up additional pathways for deploying capital creatively. More intellectual labour then becomes a sounder INVESTMENT proposition than is the case now in many enterprise work segments outside data science.
There is another compelling reason why I believe I am more on point on this score than the likes of McAfee and Brynjolfsson. Investment decision-making cannot be taken for granted in the evolution of AI “form factors”.
People tend to think of AI only in terms of “general capabilities”. This is “algorithm-level” rather than “system-level” thinking.
It goes something like this: “we have algorithm x that can argue a closing brief better than a human lawyer”. And “we have algorithm y that can read the facial expressions of jury members and the judge much better and adapt the arguments over time.” Combining these algorithms with others and adding superior data analytics should create a criminal trial robot-attorney that no human can match. Ergo, in a few decades criminal trial lawyers will be out of business.
The problem with this thinking is that it relies on the “Concorde fallacy”, which is that capabilities drive investment decisions. The value proposition of Concorde was premised almost entirely on speed. In the end, that wasn’t enough. Other factors, not least safety, pricing and service-chain issues, derailed the proposition.
The fact of the matter is that investment decisions often drive capabilities.
Where are the dollars going as far as law practice technology is concerned? Where are the AI capabilities emerging as a result being embedded? Would AI capabilities in the medium term lead to more effective legal teams and enhance the contributions of paralegals or to robot lawyers, judges and jurors?
One simply has to look at universities such as William & Mary, Tilburg and Stanford and examine their investment decisions in virtual-reality courtrooms and legal practice simulation technologies. Doing so, one is more likely to predict a greater focus on technologies that complement legal teams, creating opportunities to divert monies currently wasted on tedious legal research and inefficient litigation practices. Such repurposed funds can then more productively serve emerging opportunities in arbitration, mediation, discovery, etc., thereby creating more opportunities for legal knowledge workers to work with smart people they cannot work with today due to “intersubjective dissonance”. More collaboration and coordination open up new disciplines and create more work as the new spaces need to be tended to.
AI companies seeking to target law firms would be rather myopic not to shape the form factors (the presentation interface) of their technology to intersect with the investment trends in the target industry.
In simple terms: investment decisions into technology follow a “co-innovation” paradigm: multiple parties must understand the trends well enough to invest in a generally complementary way. So regardless of “capabilities”, technologies are presented in ways that align with the collective intelligence of the groups that use them. Technologies that boost that collective intelligence are more likely to be *successfully* adopted, driving further investments.
Technologies that bypass group thinking dynamics tend to become fringe technologies.
This is why I believe that, in the medium term, successful AI technologies will be those that boost team effectiveness rather than replace individual knowledge workers. These technologies will drive overall productivity, paving the way for more employment as the strategic and investment budgets of organisations shift towards top-line growth rather than stay focused on optimising the bottom line by cutting costs.
In fact, cost-cutting (via labour-reduction) AI in the above sense is a suicide mission proposition for both producer and adopter.