In the tussle over what Ghanaians should call their high seat of government – Flagstaff House or Jubilee House – we have seen the replay of an old drama: history-baiting.

It recalls the fight over the country’s first President’s decision to relocate his personal residence to Fort Christiansborg (the property rechristened as “Osu Castle”). The Opposition at the time denounced the move, seeing it as a symbolic declaration that Kwame Nkrumah had finally stepped into the shoes of the departed colonial overlords. Later, stories would circulate that the castle was haunted, its occupants enduring sleepless nights from the howls and moans of the ghosts of desecrated slaves.

In 2005, the matter sprang up again, this time over a decision by the Government of the day to borrow $50 million from the Indians to construct a new, “more befitting” presidential palace to replace the Osu Castle, and thereby mark a symbolic close to a sordid colonial chapter. The Opposition stormed out of Parliament in protest.

There is indeed a general sense that the “forts and castles” dotting the country’s coasts (the 40 or so remnants that remain in Ghana represent the largest concentration of European fortifications in Africa) are haunted by the savage humiliations of our past. That they are merely a scar of European subjugation, and their only utility now lies in serving as a constant reminder of the brutality of a bygone era.

But what if things are more nuanced, as true history usually is? What if these structures instead represent a towering monument to the collective Ghanaian disinterest in the complexity of the country’s history – to Ghanaians’ sheer lack of familiarity with the important details of how the society has travelled to where it is now on the 500-year-old road of intercourse with the Europeans?

I am provoked to ask because of a niggling confusion I have endured for many, many years. It is simply this: the major castles that have come to dominate our consciousness are architectural oddities.

Their emergence is said to have begun in the High Renaissance period with the building of the fortification which would later come to be known as “Elmina Castle”. Yet the structures jarringly fail to resemble any of the forms of castellation derived from the European traditions of the period and the succeeding epochs when they were built.

Take Christiansborg, for instance, and have a careful look at the attached picture. The castle is usually said to have been constructed by the Danes in the year that Frederick III declared himself absolute Monarch of Denmark (1661). And yet it bears the monogram of Christian VII, the troubled Danish King who ruled from 1767 until his death in 1808 (“ruled” being a euphemism, as he spent most of his days drugged). Why is that?

The truth is that there really DOESN’T EXIST *ONE CHRISTIANSBORG* castle.

What we call Christiansborg castle has a past shrouded in some mystery. It is more accurately dated to 1640, when the Portuguese are said to have constructed a battlement at its current location after they had been dispossessed of encampments elsewhere. But as historian Walton Claridge modestly acknowledges: “important forts or castles are suddenly mentioned as being in existence at Christiansborg and Cape Coast, but of their origin next to nothing is known”.

The record does show that the Swedes were in possession of this fortified base in 1645, and seem to have held it until a turncoat called Carlof secured a commission from Frederick III in 1657 – a few years before this strong-willed Danish monarch disbanded his Parliament and tore apart his country’s version of the Magna Carta – to seize Swedish possessions on the Gold Coast, following the defeat of Frederick’s forces by the Swedes in the European theater.

The Danish conquest of this Swedish fort – Ursu Lodge – and subsequent enhancements to its battlements began a process of permanent association of the structure with Danish design. Never mind that it changed hands numerous times (including, at one point, to the shrewd Akwamu Chieftain, Asamaning), and that most of the different powers made extensions, modifications, and renovations to its essential design.

In short, like so many of the European forts and “castles” in Ghana, Christiansborg is an architectural mongrel. Its design is “local”, in the sense that it reflects unique historical and geographical conditions prevailing in historical Ghana over a long period of time. It has literally “grown” in this soil. And though heinous crimes did occur in them, that alone does not make them totally alien impositions on our landscape.

It is not too hard to see that most of the European forts that remain show the same mongrelised features, as if pandering to the whims of some lost architectural school. How much of this character was influenced by native builders and artisans, though? We know that, with the exception of Elmina Castle (the Fort of St. George of the Mine), there are no significant records of pre-fabricated structures having been shipped in to build these forts. A good deal of the stonemasonry happened locally. Was the foremanship entirely European over the many centuries that these structures evolved, given how much they have evolved and how eclectic their features are?

We know from European imperialistic experiences elsewhere that native artisanship is usually crucial in erecting these stone structures. The common anti-period and anti-style feel and look of the European fortifications that remain on our shores, despite the hundreds of years and divergent national origins that separate them, are made all the more peculiar by the consistency of the “mongrel” method that generated them.

One merely has to contrast this emergent motif with the Renaissance forts we find in Europe, and even with those constructed in other spheres of imperialism for the same rough and dirty purposes for which those here were apparently built (see the attached pictures). One can only conclude that these structures are unique creations of our own soil, bearing in their rugged veins the clots and fluids of many stories of contest and intercourse, in some of which we were more, far more, than merely victims and bystanders.

Or, maybe, it is simply that History is not so easily appropriated for simplistic political narratives.

Nothing gets my pulse racing and my juices flowing like the suspicion that the professional class has retreated into silence in the face of clear “policy incoherence” – i.e. an “Emperor’s New Clothes” situation.
The Bank of Ghana may have very prudent and sensible reasons for raising the minimum paid-up capital for universal banks in Ghana to nearly $100 million. But I haven’t heard them. And, trust me, I’ve been scouring the financial press trying to find some cogent arguments. So far, none.
The country’s financial analysts and specialists have made some comments, but all I get is a confusion of “minimum equity” with “capital adequacy”. I have been a keen follower of public policy in this country for nearly a decade and a half; I can smell incoherence a mile off.
Setting the minimum “own funds” requirement at such a high level, rather than focusing on the ratio between core capital (in simple terms: what the owners of the bank have invested, together with past profits from operations) and risk-weighted assets (crudely: loans advanced to borrowers, mostly with deposits made by the public, and other exposures), simply promotes big banks over medium banks, regardless of the business strategy of individual banks.
There are banks focused on segments of the market with less risk. There are banks better at managing risks. There are banks focused on small projects. There are banks targeting medium sized businesses. In short, there are banks with less risky assets. And fewer assets in total, who perform critical roles in the country by providing credit at lower cost to niche customers they know extremely well. Such banks don’t need to be massive to be profitable and catalytic.
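To make the distinction concrete, here is a toy sketch in Python (all figures hypothetical, and the Basel-style arithmetic deliberately simplified) contrasting the two yardsticks:

```python
# Toy illustration, with hypothetical figures: minimum paid-up capital vs capital adequacy.

def capital_adequacy_ratio(core_capital_m: float, risk_weighted_assets_m: float) -> float:
    """Crude CAR: core capital as a share of risk-weighted assets ($ millions)."""
    return core_capital_m / risk_weighted_assets_m

# A niche bank: small balance sheet, low-risk loan book it knows well.
niche_car = capital_adequacy_ratio(core_capital_m=30, risk_weighted_assets_m=150)    # 0.20
# A giant bank: sails past a $100m floor, yet is more thinly buffered.
giant_car = capital_adequacy_ratio(core_capital_m=120, risk_weighted_assets_m=1500)  # 0.08

print(f"Niche bank CAR: {niche_car:.0%} | Giant bank CAR: {giant_car:.0%}")
# A flat $100m floor fails the niche bank and passes the giant,
# even though the niche bank holds 2.5x the proportional safety buffer.
```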
A hundred million dollars in minimum capital, if translated into Basel III terms, might suggest that the Bank of Ghana sees the minimum viable asset footprint of a bank in Ghana as roughly a billion dollars (at the 10% capital adequacy ratio, $100 million of core capital supports about $1 billion of risk-weighted assets). In a tiny economy of $42 billion, this is merely the triumph of size over substance.
Such consolidation may well reduce competition and simply act to shield already unresponsive market leaders. And this at a time when regulators in the most sophisticated markets are doing everything possible to spur competition and reduce the influence of the banking majors. In the UK, we have seen nearly 20 new banking licences in the last half-decade, with 20 more under consideration. In fact, both the PRA and FCA are now open to discussing capital requirements on a case-by-case basis.
The notion that only giant universal banks can play essential roles in the top end of the economy isn’t very persuasive. India, with an economy several dozen times our size, has had a minimum equity/capital requirement of $77.5 million for a good while now, without its banks imploding under the weight of the economic obligations placed on them. Australia, one of the world’s strictest banking jurisdictions, maintains a minimum own funds limit of $50 million. In Canada, the minimum paid-up capital is a remarkable $5 million.
The minimum subscribed capital in Luxembourg is less than $8 million. In Switzerland, it is a little more than $10.2 million. Don’t the Swiss need banks with giant muscles to pursue the “big ticket transactions” we are usually told are the reason why banks in Ghana need to consolidate? There is, in any case, no real logical basis for assuming that syndication and risk pooling by multiple independent banks are not more than adequate for large-scale transaction financing; in some cases they are actually more prudent. Even the Nigerians, from whom the Ghanaian regulators believe they have been borrowing these “macho” banking regulations, have a minimum capital requirement of $70 million for national banks (our equivalent of “universal banks”).
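The syndication point also yields to simple arithmetic. A minimal sketch, assuming the widely used large-exposure rule that caps any single-borrower exposure at roughly 25% of a bank’s capital (the figures below are hypothetical):

```python
# Toy syndication arithmetic (hypothetical figures). The common large-exposure
# rule caps exposure to any single borrower at about 25% of a bank's capital.
SINGLE_OBLIGOR_LIMIT = 0.25

def max_ticket(bank_capital_m: float) -> float:
    """Largest loan one bank may advance to a single borrower ($ millions)."""
    return SINGLE_OBLIGOR_LIMIT * bank_capital_m

club = [60, 80, 100, 50, 70]  # capital of five hypothetical mid-sized banks ($m)
capacity = sum(max_ticket(c) for c in club)

print(f"Joint single-deal capacity: ${capacity:.0f}m")  # $90m
# Five mid-sized banks can jointly fund a $90m "big ticket" transaction,
# with the default risk pooled across five balance sheets rather than one.
```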
But here is what is interesting: Nigeria is now proposing to CAP or LIMIT total capital to $280 million. Why? Because they have been going too far in promoting consolidation, to the point where “size-related systemic risks” have been rising and innovation is suffering. Nigeria has one of the world’s lowest mobile money penetration rates: 1%, compared to 60% in Kenya, where in 2016 lawmakers rejected an attempt to force minimum capital requirements up from $10 million to $50 million.
When I began searching at dawn today for a sophisticated banking jurisdiction with a minimum paid-in capital requirement of $100 million or more, I was hopeful of finding a few within minutes. I haven’t seen any so far.
As I said at the beginning, it is important that we don’t confuse “capital adequacy” and liquidity ratios with absolute equity amounts. Both Basel III and the intimidating EU CRR emphasise capital adequacy rather than minimum paid-up capital, for the simple reason that capital adequacy is the more effective instrument for matching a bank’s safety buffers with its risk appetite. And Ghana’s 10% requirement, if well policed, seems fine enough (though one could actually get away with 8% under the Basel Accords). Minimum capital thresholds are a much blunter tool. A large capital base does not translate in any significant sense into prudent banking practices.
Of course it is harder to do risk weighting and monitor governance and personnel competence than it is to measure the absolute levels of capital, but that is not the soundest excuse for preferring the latter.
At any rate, technology is beginning to erode the differences among the different tiers of banking anyway.
A wag may say that it will soon be shrewder to bag 250 rural banking licences (“one district, one bank”) for 250 million Ghana Cedis than one universal banking licence for 400 million Ghana Cedis, if the goal is to aggregate privileges.
But the more serious point is that technology is on the verge of transforming risk management in ways never before possible; hence the aggressive push in more sophisticated markets to remove arbitrary barriers and promote smarter, more sensitive regulations that encourage competition and innovation whilst detecting fraud and cheating well before their effects escalate or cascade.
I would of course be happy to change my attitude to the policy in the face of superior argument, analysis, and logic. What a pity then that so little insight has so far been forthcoming from the lofty quarters where these decisions are taken.

In the Summer of 2012, I attended a small strategy retreat in Aspen to discuss how organisations “Navigating the Emerging Economy” can win. What is a leader to do in a world that is not only changing but also changing the organisation’s very ability to deal with change?
At the end of the deliberations, James Manyika, Andrew McAfee, Erik Brynjolfsson and I decided to continue one particular argument over coffee and snacks: how fast will AI take over jobs requiring general cognitive competence? Funnily enough, someone captured my argument with McAfee in the corridors earlier in the day (see pic).
Six years on, I haven’t changed a single point of view.
AI will improve the “collective cognitive agility” of teams, addressing one of the biggest problems in management today.
The “strategic overhead” of keeping even small teams of people aligned on shared intellectual tasks is so high that executing changes and visions that require multiple steps over long periods of time almost never happens within budget. And the tasks, if completed at all, routinely fail.
A huge chunk of that problem relates to bad tools that make syncing ideas excruciatingly hard. This is particularly so among generalist knowledge workers, who tend to be more intellectually independent than ultra-specialists, blue-collar workers and semi-skilled labourers. “Herding cats”, I believe, is the phrase.
AI-enhanced tools will make it far easier to illustrate patterns, present data, reduce descriptive ambiguities, and help team players know when they are actually making the same point using different lexicons (a frequent cause of wasteful conflict). Today’s discussion tools are poor as they do not adapt to the unique communication styles, cognitive preferences, and learning needs of discussants.
Adaptive AI will also reduce the time spent actually putting together the presentations, charts, diagrams and the like used to convey ideas, thereby enabling people to focus on the merit and utility of those ideas. I reckon that a wholly new language is imminent: “Cognitive Agility Markup Language”.
Does that mean that the BCGs and McKinseys of this world would be robbed of some of their razzmatazz? Of course. But who wants that? Do even McKinsey hotshots prefer the status quo? I am not too sure.
A lot of consulting initiatives fail because synchronising the analytical styles of consultants, the ideation preferences of entrepreneurs and the processing heuristics of managers is today an absolute change agent’s nightmare!
Consultants cannot be results-oriented in the current system, full stop! They have to hold on to the outmoded “time and materials” billing approach precisely because deep down they know that a results-oriented focus would be held hostage to the “cognitive synchronisation” problem I have described above. The costs of syncing with their clients using current tools are simply prohibitive. Even the most advanced neurolinguistic programming techniques achieve only crude results.
Sophisticated artificial intelligence would thus remove the “mechanical drudgery” not just from semi-skilled labour contexts but also from the knowledge industry.
What many people don’t realise is that so much of knowledge work (even in academic research) today is packed full of drudgery! The growing popularity of tools like Dedoose and Nvivo testifies to this. But imagine the power of truly intelligent “ideas conveyance” tools in the collective hands of research and execution TEAMS.
The counter-argument of Brynjolfsson and McAfee then was that over time a “productivity” effect shows up, and inevitably one finds that one needs fewer teams, whether of the research or execution variety.
My contention is that this kind of “productivity” actually allows team sizes to expand and more teams to coordinate. And though it also enables a smaller team to do more, that “more” often consists of the coordination and collaboration needed to overcome critical execution hurdles that would otherwise have been ignored, to the peril of the entire enterprise.
We know that 70% of all change efforts fail. Overall, execution fails far more than it succeeds. This implies wasted resources: products that don’t get created, sales that aren’t closed, systems that do not improve, etc.
What AI productivity proponents fail to realise is that very often these are BOTTLENECKS that not only limit labour throughput but also results throughput. Solving those problems more effectively allows more execution to happen, which grows the opportunity to experiment even more with change ideas, opening up additional pathways for deploying capital creatively. More intellectual labour then becomes a sounder INVESTMENT proposition than is the case now in many enterprise work segments outside data science.
There is another compelling reason why I believe I am more on point on this score than the likes of McAfee and Brynjolfsson: investment decision-making cannot be taken for granted in the evolution of AI “form factors”.
People tend to think of AI only in terms of “general capabilities”. This is “algorithm-level” rather than “system-level” thinking.
It goes something like this: “we have algorithm x that can argue a closing brief better than a human lawyer”. And “we have algorithm y that can read the facial expressions of jury members and the judge much better and adapt the arguments over time.” Combining these algorithms with others and adding superior data analytics should create a criminal trial robot-attorney that no human can match. Ergo, in a few decades criminal trial lawyers will be out of business.
The problem with this thinking is that it relies on the “Concorde fallacy”: the assumption that capabilities drive investment decisions. The value proposition of the Concorde was premised almost entirely on speed. In the end, that wasn’t enough. Other factors, not least safety, pricing and service chain issues, derailed the proposition.
The fact of the matter is that investment decisions often drive capabilities.
Where are the dollars going as far as law practice technology is concerned? Where are the AI capabilities emerging as a result being embedded? Would AI capabilities in the medium term lead to more effective legal teams and enhance the contributions of paralegals, or to robot lawyers, judges and jurors?
One simply has to look at universities such as William & Mary, Tilburg and Stanford, and examine their investment decisions in virtual reality courtrooms and legal practice simulation technologies. If one does, one is more likely to predict a greater focus on technologies that complement legal teams, freeing up monies currently wasted on tedious legal research and inefficient litigation practices. Such repurposed funds can then more productively serve emerging opportunities in arbitration, mediation, discovery and the like, thereby creating more opportunities for legal knowledge workers to work with smart people they can’t work with today due to “intersubjective dissonance”. More collaboration and coordination opens up new disciplines and creates more work, as the new spaces need to be tended to.
AI companies seeking to target law firms would be rather myopic not to shape the form factors (the presentation interface) of their technology to intersect with the investment trends in the target industry.
In simple terms: investment in technology follows a “co-innovation” paradigm: multiple parties must understand the trends well enough to invest in a generally complementary way. So regardless of “capabilities”, technologies are presented in ways that align with the collective intelligence of the groups that use the technology. Technologies that boost that collective intelligence are more likely to be *successfully* adopted, driving further investments.
Technologies that bypass group thinking dynamics tend to become fringe technologies.
This is why I believe that, in the medium term, successful AI technologies will be those that boost team effectiveness rather than replace individual knowledge workers. These technologies will drive overall productivity, paving the way for more employment as the strategic and investment budgets of organisations shift to topline growth rather than stay focused on optimising the bottomline by cutting costs.
In fact, cost-cutting (via labour-reduction) AI in the above sense is a suicide mission proposition for both producer and adopter.

The problem is Aristotle. Wait, don’t laugh. I’m serious. The old chap has forced into our collective consciousness, through the circular concept of his “eudaimonia”, the notion that happiness must be the ultimate aim of all that is worthwhile. That’s nonsense.

There are many pursuits that are their own ultimate ends, and many of these pursuits are worthy in their own right.

Happiness must be sought in its own lane. The notion that happiness is some pot of gold at the end of the arc is bunk. Happiness is a compartment. And life has many compartments.

Once you understand this, everything becomes clear. You learn to apportion the pieces of your life properly. You learn to spare some time for happiness every now and then, but you move on, tending to the many plants in that vast garden of self-discovery. You come to understand that happiness is seasoning. Not some grand clue to human purpose. Not some hidden gem that, once discovered, makes everything fall into place, the true path illuminated, all that really matters lining up like Doric columns towards the true dome of self-actualisation, every extraneous want and need having fallen by the wayside.

This is a lie. Happiness is one thing among many that make humankind complete. Spend some time on it. Harvest some of it. Every now and then. It will, however, never satiate your total being, however much you prize and honour it by screening all that you do according to the degree it promises to yield happiness at the end of the arc. Happiness is overrated.

The critical flaw at the heart of all visions of the “future of work” in which artificial intelligence makes most professions obsolete and drives billions out of work may be summed up in the phrase, “internal anachronism”.
 
This concept is best illustrated by a major defect of most science fiction movies.
 
Take Luc Besson’s Valerian, for instance. A civilisation that has learnt to travel across hyperdimensional space still uses staccato-firing weapons and relies on natural zoological species to replicate physical objects.
 
In the Wachowskis’ Jupiter Ascending, a civilisation that has conquered the light barrier still uses wolverine DNA to promote aggression in its soldiery, whilst, wait for it, growing feathery wings on their backs for air mobility.
 
The problem stems from the difficulty of true multidisciplinary thinking. To project well into the future, one needs to understand a vast array of disciplines, scientific and humanities-based, and deeply grasp how findings in one field impact and grow atop developments in other fields, ensuring some degree of harmony in technological advancement.
 
That is why cooking in microwave ovens and talking to other people using wireless devices appear to be starkly divergent cultural realities and yet provide a common defining hallmark of the late twentieth century. In fact, a span of only five years (1967 to 1972) separated the commercialisation of the two technologies to the point where household ubiquity was only a matter of time.
 
The principle at play here is that a civilisation that has mastered electromagnetic radiation would apply it to remarkably diverse aspects of its culture, triggering powerful trends in multiple areas of research and application. Indeed, “microwave sterilisation” in the food sciences (which takes preservation of food to a whole new level) seems far removed from the nodes used in the Internet of Things, until we start seeing wireless sensors in microwaves, as is the case with the Tovala smart oven. Smart ovens like the Tovala should eventually enable anyone to translate virtually any recipe into an adequate meal within a very short period of time.
 
The interplay of wireless technologies in this manner thus means that, at a certain point in the near future, the impact of wireless on our nutritional lives will not be limited to food-ordering apps. It is very likely that in a world where sensor technology has advanced to the point of true ubiquity, food-ordering apps would be obsolete and “smart cooking” would become a better reflection of the “proper harmony across the state of relevant technologies”. If that is the case, then showing a food-ordering app being used in a city completely swathed in driverless car infrastructure would constitute a case of “internal anachronism”.
 
I deliberately chose a very subtle example to illustrate the nuances. A much easier example would be an intelligent ambulance driving a person to an emergency theater rather than implementing the stabilisation and recovery techniques in situ.
 
If the long preamble has served its purpose of explaining the essence of the idea, then it should be easy to see how it applies to the dystopic visions of the future of work.
 
Simply put, the kind of general, human-level intelligence expected of computers in the next couple of years can only materialise if a vast multitude of fields accelerates alongside it, both because AI at such an advanced state should dramatically boost research in every other field, and because solving some of the complex problems confronting AI today would require advancement in a wide range of adjacent and not-so-adjacent fields, from networks to smart materials to micropower engineering.
 
The impact on our economy would be transformational to the point of opening up whole new frontiers in subsea habitation, space outposts, urban greening, species restoration, subterranean complexes and so on. It is not simply that “new industries” will create new types of jobs. It is also that existing industries will grapple with scale issues of unimaginable proportions that cannot be automated a priori, until the social, economic and technological interests have aligned enough to allow automation.
 
And to the extent that many of these new opportunities would first require human investment decisions, political clearance, and economic reconfiguration to take off, the employment cycle would follow the pattern of high human uptake, followed by a productivity plateau, and thereafter automation to improve returns on investment. Though the cycles would grow shorter, the new waves of “employment expansion” will also come faster and more intensely, even as the growth in human populations slows (a universal phenomenon of industrialisation).
 
A distorted understanding of how inter-disciplinary cross-fertilisation powers the growth of technological capacity, and how economic and cultural cycles interweave with waves of structural change, greatly underplays the power of “new frontier” exploration to open up new opportunities for humans to engage productively with the construction of new societies, even when the forces of change seem uncontrollable.
 
If you think that this is a far-off vision, then you get the point. Human-replacement level AI is a far-off vision too, because its emergence presupposes the vision I have described above. The same forces that can undermine, frustrate or drag that “massive scale” vision are the very ones that shall slow down the impact of human-replacement level AI and encourage instead many intermediate steps of human-complementary AI.

Building anything of consequence in Ghana, and by extension Africa, if you are an African, requires immense patience.

If the 350MW Cenpower power plant promoted by Brew Butler and Kweku Awotwi is commissioned this year in Tema, Ghana (I note that both the mid-2017 and end-2017 timelines were missed, so the official Q2 2018 timeline may also be optimistic), it would have taken 11 years since the consortium was given its wholesale licence to commence operations, and four years since it announced that it had finally succeeded in raising the money needed to build the plant. Raising the money alone took more than 10 years.

Cenpower, the first licensed fully private IPP in Ghana, would be coming on stream a full 8 years after Sunon Asogli, the other contender for that distinction. Total development time was fifteen years if counting from the company’s incorporation year, and more if counting from idea inception.

In the course of that time, the project scale was downsized by 50MW, and the price tag ballooned from $300 million to $450 million and then to $900 million, as multiple investor rounds brought cash in tranches and backers cashed out and in, adding layers of cost. The EPC contract itself was tendered in 2010, but serious construction only started sometime in 2015, after Parliamentary ratification of certain government concessions in 2012.

The “Ghana ownership” component is now about 21%, with the founders’ equity whittled down to about 10%.

That’s what it takes to build a billion dollar behemoth in Ghana: 20 years and the boldness and sense to be willing to trade a 90% stake in your dream for realistic success.

Fallacies may lead to sub-optimal behaviour at the individual level, but that does not make them irrational. At the social level especially, they provide some of the most rational explanations for seemingly irrational behaviour.
Take the media industry in Nigeria, for example. It is common for journalists to have salary arrears that stretch back for more than a year. In fact, the more prestigious the newspaper brand, the higher the likelihood of long arrears.
Paradoxically, Nigerian journalists work harder than journalists in many African countries, and the longer the pay delays, the higher the output.
My initial thought when I first witnessed this was to assume that “grey journalism” (what we call “soli” in Ghana) is simply stronger in Nigeria, so the formal salary is merely a formality. But more probing suggested that this was not the case.
Then last week it came to me: the sunk cost fallacy.
The longer pay is delayed, the higher the amount that would be lost if one left the job, and the more valuable that amount becomes from a lump-sum investment opportunity point of view. Journalists then see a strong incentive in keeping the business running, so that: 1) it does not fail completely; and 2) they don’t get laid off for performance issues. Journalists reinforce this with “equity theory” – the idea that it would be unfair to lose their arrears whilst those who stay benefit from the work done to date. The owners then factor this into their financing model (i.e. they see the interest-free borrowing from employees as lower-cost than the high interest charged by the banks for short-term working capital needs). In the end everyone’s logic reinforces everyone else’s.
This is obviously fallacious thinking on all sides, because economic optimality would advise the journalists to look to the future opportunities they are losing by staying put in a high-risk job, and not to the “sunk costs” of the unpaid salaries, which are in the past. It would also advise newspaper proprietors to simply trim staff and reorganise their business models.
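A toy decision model (all figures hypothetical) makes the gap between the two framings concrete:

```python
# Toy sunk-cost sketch, hypothetical monthly figures (in thousands of naira).
# The arrears already accrued are the same whether the journalist stays or leaves,
# so a forward-looking comparison should weigh them only via the odds of recovery.

arrears_owed = 1200        # 12 months of unpaid salary (sunk either way)
p_paid_if_stay = 0.3       # assumed odds the struggling paper ever settles up
stay_monthly_cash = 40     # "soli" and irregular part-payments while staying
leave_monthly_cash = 100   # salary actually paid at an alternative job
horizon_months = 24

ev_stay = stay_monthly_cash * horizon_months + p_paid_if_stay * arrears_owed
ev_leave = leave_monthly_cash * horizon_months  # assume arrears forfeited on exit

print(f"Expected value of staying:  {ev_stay:.0f}")   # 1320
print(f"Expected value of leaving:  {ev_leave:.0f}")  # 2400
# The sunk-cost framing fixates on the 1200 at risk; the opportunity-cost framing
# shows the forgone outside income exceeds even a certain recovery of the arrears.
```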
But in the same way that scammed investors throw more money into pyramid schemes the riskier they get, Nigerian journalists opt instead to stay.
From a higher vantage point, however, the fallacy provides the most rational way of describing the group behaviour, whatever we may think of the individual judgements that aggregate into it.