The longest civic advocacy campaign I have ever participated in is the attempt made by Ghana’s CSOs (Civil Society Organisations) to block the totally unjustifiable effort by Ghana’s Electoral Commission (EC) to jettison millions of dollars worth of perfectly functional, in many cases brand new, equipment and subject voters to a needless re-registration exercise.

The EC did not stop at this wasteful endeavour, which shall surely one day come under the scrutiny it deserves; it went further, outlawing the existing voter identification card and blocking the use of birth certificates as proof of citizenship, in a decision that was not merely bizarre but truly absurd. That even the fears and anxieties of a pandemic in full flight could not restrain this exercise of wanton power made the situation all the more frightening.

It is important that the record shows clearly that the EC’s actions were morally, ethically and administratively irresponsible. It must do so because this shall not be the last time a governmental entity will attempt to use raw power to push a cause so antithetical to the values of sound governance that they leave in their wake dangerous precedents that will continue to haunt the nation. When such a mess happens again, people should not be able to feign indecision about where they stand under the excuse of inconclusive information.

So, though the CSOs could not get in the way of brute power, and the things that they set out to prevent have happened, it helps to make unambiguous the sheer moral calamity that befell the Republic of Ghana, even if only for posterity.

Frequent readers of this blog should recognise the key claims that the EC made to dress its decisions and conduct with a modicum of rationality and propriety. Nonetheless, I will repeat them here:

1. The existing biometric equipment was obsolete because it dated from 2011 and had not since been replaced or refurbished. Furthermore, it was alleged that the absence of facial biometric capacity in the existing machines rendered them less effective and accurate. The CSOs made it the central plank of their advocacy to expose these plain untruths, made more alarming by the barefaced shamelessness with which they were repeated non-stop by a major constitutional organ of the State.

2. The EC insisted at every turn that pandemic fears were overblown because the EC had the capacity to implement health and safety protocols that would ensure that the outdoor registration activities posed no risk to registrants. The CSOs, in their ill-fated amicus brief and in many media appearances, emphasised the sheer mendacity of these assurances, given what everyone knows about the conditions under which voter registration takes place in Ghana.

These affairs shall be remembered in the history of civic activism and public administration in Ghana. When they are, let it be without doubt that the evidence eventually vindicated all the fears and assertions of the CSOs, and repudiated all the assurances and claims of the EC.

These videos sent to us by some journalists who took pains to collect the footage to help seal the record establish the following without equivocation:

A. The EC was unable to bring in enough Thales’ equipment for the registration due to the rushed and shambolic procurement exercise. So the very equipment that was said to be obsolete and had to be tossed aside to create a procurement opportunity worth tens of millions of dollars had to be mobilised for the registration exercise.

B. Readers will observe that in many polling stations in poorer neighbourhoods, almost all the BVR (biometric voter registration) equipment in use were the same USB-compatible HP and Dell ones that were used in the last limited voters’ registration exercise in 2019. Our estimate is that at least 40% of the equipment being used for the current registration exercise is made up of old BVRs, some with the Neurotechnology-provided software.

C. It will also be observed in the videos how the EC used a number of indoor facilities, contrary to claims that the exercise was entirely outdoors. Worse, it is evident that it is not the irresponsibility of a few recalcitrant citizens that led to the depressing incidents of overcrowding reported in the news; instead, in many cases, the process and facilities had been designed such that overcrowding was inevitable.

D. Zero efforts were made to enforce any social distancing rules, mask wearing or other health, hygiene and safety protocols in many registration centers, especially in poorer neighbourhoods.

E. No specialist equipment for facial biometrics was deployed. The LED lights used were paired with standard digital photography. Hence, the claims that any facial recognition requirements could not be met with existing visual data were palpably false.

Let the record stand.

(Are you at the registration stations? Send me notes on your experience, please.)

For the record: the EC’s claims and assurances exposed for what they are.
Overcrowding by design.
The equipment that was so bad it had to be thrown away to make way for brand new multimillion-dollar stuff, in use in full view of the public and media.
BVDs used for last year’s limited registration had to be scrambled to save the day.
The old BVRs and BVDs are the only reason the EC was able to conduct the mass re-registration as the rushed procurement could not deliver the full set of machines.

Since my initial skim, I have only now had time to scrutinise carefully the legal arguments made by Ghana’s Electoral Commission (EC) in justification of its position that prospective registrants should not be able to use existing voter ID cards to register in the upcoming mass voter re-registration exercise.

The justification they gave is simply that, in the past, voter identification cards could be acquired without one having to show material proof of citizenship, and consequently, because of this laxity, a “chain of provenance” problem has emerged. In simple terms, people who may not have been constitutionally entitled to a Voter ID card ended up getting one anyway. In subsequent registration exercises, such persons, it is speculated, proceeded to use the same cards to acquire new ones. Since no records exist today of who these individuals might be, the EC is urging the court to assume that they could be in such numbers as to render the entire register substantially unconstitutional. “Unconstitutionality” in this case is a consequence of the EC being unable to confine the right to vote to citizens, as the constitution enjoins, as a result of a register so flawed that it does not equip them to satisfactorily police the citizenship-eligibility of voters.

My firm conclusion, after carefully examining that logic, is that the “chain of provenance” argument cannot be upheld without eventually succumbing to plain absurdity. Even ignoring the obvious fact that people who may have registered without proof of citizenship in a previous electoral cycle may have subsequently acquired such proof, besides the Voter ID card they secured previously, and used this new proof in subsequent registration exercises, the fact remains that it is the duty of the EC to identify such individuals and deal with them appropriately without disturbing the rights of non-infringing voters on the roll. After all, voting is about individual, not collective, eligibility. But there are graver concerns about the whole logic.

The first exhibit in the argument I am going to present is the image of the Ghanaian passport application form below.

It should be clear even from a cursory scan that, historically, the administrative determination of what qualifies as proof of citizenship in Ghana has been maximalist. The authorities have always viewed the matter as one to be settled by inquiry instead of established by forensic investigation. The process has typically involved a request to the applicant to bring “some documents” that can help the Authority in question come to “a reasonable determination” whether the applicant is more likely than not to be a citizen. The open-ended nature of the range of documents historically allowed, from employment and school ID documents to “other ID cards”, suggests an effort to be inclusive rather than exclusive.

That can only be the reason why the Passport Authorities have been inclined for many decades now to accept such a wide range of documents. The principal goal has been to reduce false negatives (citizens wrongly excluded) rather than to eliminate false positives (non-citizens wrongly included), simply because the harm of excluding a true citizen from exercising their rights exceeds the harm of allowing a non-citizen to enjoy those rights. That, indeed, has for most of modern national life been the general principle of civil identification, empirical evidence of which abounds in the sheer numbers of so-called “illegal migrants” around the world versus the number of wrongly denationalised citizens.

To allow Ghanaian passports to be used to register for electoral purposes, when they are founded on such liberal grounds, and then to refuse to accept existing Voter ID cards amounts to patent absurdity.  

From the way passport eligibility has been administratively defined, one could well use a Voter ID card to acquire one in principle. Or one can use the Voter ID card to register at a school and then use the school ID card to register for the passport, which then presumably qualifies one to be registered as a voter in the new dispensation created by the truly obnoxious CI 126 law passed today by Parliament to outlaw use of birth certificates and existing Voter ID cards. Because, alas, chain of provenance logic never holds up to scrutiny.

Worse still, this law allows prospective registrants to “prove” their citizenship (not “identity”, mind you) by the mere attestation of two registered voters. As if some magical process exists through which attestation can transform someone not meeting the “origin” rules of Ghanaian ancestry required for citizenship into a Ghanaian. Talk of metaphysics! More on this obnoxious law in subsequent posts. For now, suffice it to say that the circular illogicality foisted on the electoral process by CI 126 emanates from the view that, somehow, “contamination” of the electoral roll can be adequately cured by fiat at one level of civil identification, the electoral. Of all the flaws bedeviling such logic, this is the most pernicious.

A weakness at any point in any chain of identification does in fact enable an individual to obtain one document which leads to another until eventually they have the “right document” for some specific purpose. This is why some have argued that multipurpose ID cards tend to be more robust generally than single-purpose cards, like the Voter ID.

“Chain of provenance” logic, as championed by the EC, would almost always become a servant, and never the cure, of absurdity, because administrative rules never stay static. One can always go back to some arbitrary point in the past when the rules or their enforcement differed from the present. There are people who are citizens of Ghana merely because they were born before a certain year, for instance. No amount of back-auditing would have anything useful to say about their status, and any suggestion that a civil register must aim for genealogical perfection before it can meet basic civil needs can thus only be absurd.

The only means of preventing the criminal acquisition of citizenship rights, in light of the above, is effective general law enforcement, not civil identification. It is the duty of the Police and Immigration authorities to implement measures to prevent and deter criminal gangs and other unscrupulous individuals from attempting to exploit the rightfully liberal civil identification regime for whatever ends. This duty cannot be neglected by the right agencies, only to be transmuted into a metaphysical search for fool-proof, perfect-origin documents for the gratification of the civil identification authorities.

Civil identification is not by its nature hinged on archival investigation and genealogical evidence, upon which an absolute emphasis on origin-based citizenship must ultimately rest. The philosophy of civil identification in general is, rather, focused on being able to assign a cluster of records to an individual. In most serious societies, this objective matters far more than the conclusive determination of an individual’s history. Accurately linking the records of an individual to their identity would, of course, require some level of security and authentication to preserve the integrity of whichever process that record-ascription is meant to achieve, but wherever and whenever breaches become rampant enough to undermine the overall effectiveness of the intended objective, it is the law enforcement authorities that are equipped to tackle the problem, not the custodians of civil registers.

For example, there are latex devices that can enable the forging of fingerprints. Theoretically, a voter with the right technological savvy can impersonate a whole host of people with this technology and by so doing beat biometric barriers to overvoting. No civil identification measure can stop this. It is the province of the Police and the other security and intelligence services to address such organised attacks on the integrity of the electoral system, or any other civil process. Any attempt to devolve all that burden to civil identification and registration bodies is bound to fail. And when such entities abandon the time-tested due process principle of “proportionality” and go for high-handed, blunderbuss measures to force into being a delusional notion of perfection, they end up undermining the very integrity they claim to be seeking to protect.

The Ghanaian EC’s declared intent of using registration tools and policies, now backed with law, to achieve a goal properly domiciled in the domain of general security and law enforcement is thus seriously misguided and will not lead to an outcome that edifies Ghana’s democratic credentials. 

Diagram of pooling method for serum samples from blood donors.

Source: Gibran Horemheb-Rubio et al, 2017


A curious thing happened about a week ago. The Government of Ghana, which has been touting its investments into Covid-19 testing capacity to universal acclaim, as the country galloped up the continental league tables, suddenly found itself in the dock. Before long, prominent health leaders in the country were accusing the Government of massaging testing numbers for PR benefits.

Ghana’s Magic

By the time the dust settled, it had become clear that it wasn’t massive investments in diagnostic assays and reagents, not to talk of RT-PCR testing machines, that had catapulted Ghana to its celebrated position of number two in the league table of African countries that have carried out the most tests. Rather, it was largely the ingenuity of scientists at the country’s preeminent biomedical institution, the Noguchi Memorial Institute for Medical Research.

Without waiting for the usual bureaucracy of national ethical review approvals or situation-specific peer-reviewed studies, they had decided to deploy the well-known process of “pooled sampling”, thereby expanding testing capacity severalfold literally overnight. I chronicle these fascinating matters here and here.

With countries like Nigeria struggling to obtain enough reagents and other testing consumables in the international market, and with their testing output stuck at 0.0055% of the population (compared to Ghana’s 0.328%, which is nearly 60 times higher on a proportional basis), Ghana’s feats had seemed like pure magic.

A Matter of Some Urgency

The Africa Centers for Disease Control (Africa CDC), the lead continental agency coordinating the regional Covid-19 response, in its 23rd April situation report, gave the total number of tests conducted on the continent as 415,000, of which Ghana alone was responsible for at least a sixth. By the 27th of April, Ghana had completed over 100,000 tests, whilst Nigeria was hovering around 12,000 tests. The Head of the Nigerian CDC continued to speak openly about the challenges the country was having in securing supplies of reagents because of severe global shortages.

Recognising that Ghana’s ten-fold multiple of Nigeria’s testing capacity is almost entirely down to a clever algorithm – testing based on pooled samples – one immediately begins to wonder why other African countries should not immediately adopt this algorithm and accelerate their testing coverage dramatically. Especially bearing in mind that with the worldwide testing average now in the region of 1.5%, Africa’s less than 0.01% is no longer tenable. Should Africa not embrace pooled testing continentally right away?

Perhaps, we first need to break down the principles of pooled sampling. When I touched on the subject in my earlier articles, many people reached out to me on social media and through my blog to challenge my thinking and ask clarification questions about my analysis. Clearly, it is the most critical short-term decision African public health authorities would need to make regarding how to alleviate the constraints holding back mass testing.

Pooled Testing: The Basics

Suppose you had 100 items you needed to test. You know that a couple of these items have a particular defect but you don’t know which specific items. You could certainly test each item one after the other. But knowing that only a few of these items have the defect, that would seem wasteful, especially if each test consumes significant time and resources.

Suppose the items could be mixed in such a manner that the resulting mixture was generally similar in properties to the individual items. If so, then it would make sense to group the 100 items into smaller pools of ten items each (leaving a reserve amount from each sample) and proceed to test each pool. If any of the pools tests positive for the defect in question, then the reserve samples from only that pool can be tested one after the other to find out which sample or samples from that pool might contain the defect that is triggering the positive result.

This algorithm can be summarised very simply, if also crudely, as follows.

Number of Samples: 100

Test Round 1
  Number of Pools: 10
  Number of Tests: 10
  Pools Testing Positive: 2
  Tests “Wasted”: 8

Test Round 2
  Number of Pools: 0
  Number of Tests: 20
  Tests Returning Positive: 3
  Positive Case Rate: 3%
  Tests “Wasted”: 17

Total Number of Tests: 30
Tests Saved: 70


In effect, assuming simplistically that one test kit is consumed per test, 30 test kits have been used to do what would have required 100 test kits in the conservative one-by-one model.
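The two-round procedure summarised in the table above is straightforward to simulate. The sketch below counts tests under the classic Dorfman scheme (one test per pool, then one test per member of every positive pool); the positions of the positive samples are hypothetical, chosen to reproduce the table's scenario of three positives falling in two pools.

```python
def dorfman_test_count(samples, pool_size):
    """Count tests used by two-round (Dorfman) pooled testing.

    samples: list of booleans, True = positive sample.
    Round 1 runs one test per pool; round 2 retests every member
    of each positive pool individually.
    """
    tests = 0
    for i in range(0, len(samples), pool_size):
        pool = samples[i:i + pool_size]
        tests += 1                    # one test for the whole pool
        if any(pool):                 # positive pool: retest each member
            tests += len(pool)
    return tests

# 100 samples with 3 positives falling in 2 pools, mirroring the table:
samples = [False] * 100
for idx in (4, 7, 53):               # hypothetical positive positions
    samples[idx] = True

print(dorfman_test_count(samples, pool_size=10))   # 30, vs 100 one-by-one
```

Note that the saving depends entirely on how the positives cluster: had the three positives landed in three different pools, the count would rise to 40 tests.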

More Math Than Biology

The first thing that may have jumped out at the reader is the statistical nature of the decision-making here. It is not molecular virology that determines, fundamentally, the issues at play. True, that important discipline plays a role in addressing some of the concerns about sensitivity (a measure of how effective a testing method is at fishing out genuinely defective samples) and specificity (a measure of how effective a testing method is at establishing that no defect exists) that the rest of this article will grapple with, but the primary logic derives from statistical analysis. Not surprising, then, that the first person to rigorously publish about the process was a political economist writing for a statistical journal.


Only a tiny modicum of imagination is needed to spark the most obvious concern: might mixing the ten samples into one pool not dilute it to the point where it falls below the threshold of detection? In lay terms, and in the context of Covid-19, could it not happen that when a small quantity from one or more original samples is scooped out and mixed with others to create a new single sample (“the pool”), that smaller sub-quantity might lack the viral RNA? This is the not-so-difficult-to-fathom issue of dilution. Because the tube or well in which each test reaction takes place inside the PCR machine is finite in size, pooling samples necessarily means reducing the amount of sub-sample taken from each original sample.  RT-PCR tests may be the gold standard for detection now, but they are certainly not perfect. There is a roughly 10% chance anytime a PCR test is conducted that the result may be invalid. Will dilution not compound matters?

Luckily, there has been quite a steady stream of experimental research work that has established that modern assays are now so sensitive that the dilution risk is not necessarily elevated by pooling samples, though considerable optimisation is required for different infection scenarios.
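The arithmetic behind the dilution worry is simple to state. If PCR were perfectly efficient, with the template doubling every cycle, then diluting a positive sample k-fold into a pool would delay detection by log2(k) cycles. The back-of-envelope sketch below makes that assumption explicit (real assays are somewhat less efficient), which is why ten-fold pooling is tolerable for strongly positive samples but risky for those already near the assay's cycle cutoff.

```python
import math

def ct_shift(pool_size):
    """Extra PCR cycles needed to detect a sample after pool_size-fold
    dilution, assuming ideal PCR (template doubles every cycle)."""
    return math.log2(pool_size)

for k in (2, 5, 10):
    print(f"pool of {k}: ~{ct_shift(k):.2f} extra cycles")
```

A ten-fold pool thus costs roughly 3.3 extra cycles, which is negligible for a sample that would otherwise turn positive at cycle 25, but potentially fatal for one hovering near a cutoff of, say, 40.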

So, all good then, we can effectively expand capacity by up to ten times across Africa by instituting pooling as a continental protocol. Except that there are some very important caveats to consider before celebrating this potential bonanza.

Pooling Efficiency

As already indicated, the algorithmic logic of pooling is based on simple mathematics, but one that can become very complex very quickly. I will spare the reader the exotic combinatorics and focus on the easy matter of “pre-test probability”.

After weeks of conducting Covid-19 tests, every country now has a rough idea of what the probability of detecting a positive case might be at any point in time. While this factor is changing all the time, the shifts are fairly steady and programmable. In Nigeria, for instance, the probability that a test will return positive is nearly 9.5%. In Ghana, it is currently around 1.5%. The vast disparity is significant and will be touched on later.

This “likelihood of finding a defect” (or, in the Covid-19 context, an individual testing positive) has a huge effect on the effectiveness of pooled sampling. Let us return to our simple table above. The positive rate in that thought experiment was 3%. But suppose we brought it closer to the Nigerian case and assumed a 9% pre-test probability of finding a defect (or positive case)?

Number of Samples: 100

Test Round 1
  Number of Pools: 10
  Number of Tests: 10
  Pools Testing Positive: 9
  Tests “Wasted”: 1

Test Round 2
  Number of Pools: 0
  Number of Tests: 90*
  Tests Returning Positive: 9
  Tests “Wasted”: 91

Total Number of Tests: 100
Tests Saved: 0

*Every member of each of the nine positive pools is retested individually; because the viral RNA is assumed to distribute uniformly through the mixture, a positive pool result cannot indicate which member triggered it.

It follows then that as the pre-test probability of positivity rises, a point arrives where, mathematically, no test kits or time are saved at all. In fact, an important feature of RT-PCR in general is that the saving in time does not scale linearly with the saving in reagents. This is because some time-consuming steps, like viral RNA extraction, machine setup, maintenance and the like, are common to all methods, whether pooled or individual. In that regard, where the pre-test probability of finding positive cases is high, one can actually lose time by pooling. There are certainly many optimisation algorithms to evade some of these constraints. The Ghana model, for instance, could be enhanced by successive division of pools instead of complete retesting of every pool that tests positive. But there is only so much optimisation one can do to push back the theoretical limits.
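The break-even point can be made precise. Under the two-round scheme, with prevalence p and pool size k, the expected number of tests per sample is 1/k + 1 − (1 − p)^k: one pooled test shared by k samples, plus k individual retests whenever the pool contains at least one positive. The sketch below scans pool sizes at a few illustrative prevalence values (the values are mine, chosen to bracket the Ghanaian and Nigerian figures quoted above).

```python
def expected_tests_per_sample(p, k):
    """Expected tests per sample under two-round (Dorfman) pooling:
    1/k pooled tests per sample, plus k individual retests with
    probability 1 - (1-p)^k that the pool holds at least one positive."""
    return 1 / k + 1 - (1 - p) ** k

for p in (0.015, 0.03, 0.095, 0.30):
    k = min(range(2, 21), key=lambda k: expected_tests_per_sample(p, k))
    print(f"prevalence {p:.1%}: best pool size {k}, "
          f"{expected_tests_per_sample(p, k):.2f} tests per sample")
```

At 1.5% prevalence the optimal pool saves roughly three-quarters of all tests; near 30% prevalence the expected cost climbs back towards one test per sample, and pooling stops paying at all.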

The insight illustrated above has led many experimentalists to argue that when the confirmed cases are above one or two percent of total tested, pooled testing may not be advisable. The only government that has reviewed the situation extensively and taken a policy and ethical (more on “ethicality” later) stance on pooled sampling/testing so far is the Government of India, which has decided to limit the pooling size to 5 samples and the context of use to “screening”, which is most beneficial during the early stages of community transmission of the disease. Which brings us to the last two important caveats.

Error Replication

Once the testing process moves out of clinics into the broader community, through for instance the community screening and contact tracing models, the whole affair becomes a complicated supply chain puzzle to unknot.

Public health officers have to be equipped with sample collection devices and sent off to various corners of the country to collect samples. A single officer may collect more than ten samples and there could be more than 1000 sample takers. The samples have to be transported in cold boxes, deposited at aggregation points and finally delivered to labs, which now have to manage these samples as one would any inventory process, with planning for storage, handling, batching and tracking.

Even in the most rigorous of mass testing situations, errors will occur, and far more than would be the case in a routine testing situation. Some samples will be more poorly collected and transported than others.

Pooling, in these circumstances, can easily obscure traceability by making it highly difficult to identify points within the supply chain that are defective, since for the vast majority of tests, there is no link between a test result and its supply chain history.

Should a sample be cross-contaminated, the effect is not just on that one sample but on an entire pool. Quite apart from interfering with corrective action protocols, pooled sampling can increase workload due to error replication.

In effect, whilst pooling under laboratory conditions can be heavily controlled to remove error effects, additional safeguards are required in mass testing protocols where samples are not collected inside health facilities.

But by far the most critical issues are ethical in nature.

Ethical Clearance

In the pooled sampling and testing methodology, pools that test negative are not retested, for obvious reasons: the whole point is to save time and consumables.

In some cases, this is not an issue, but in others it is. During an epidemic or pandemic, such as the current one, testing is not a monolithic affair. We must look at the mass testing protocol in every country as a multi-layered one involving different cohorts of testing subjects.

First, there are those who were ill or felt ill and reported to a clinic or other health setting. The clinician suspected Covid-19 and sent off samples for testing, a process often referred to as “routine surveillance”. The pre-test probability in such cases (what we might also call the “index of suspicion”) is relatively high. In some cases, despite a negative test result, and considering the inherent imperfection of all tests, re-testing is imperative. An example would be a situation where the individual’s symptoms and travel history closely match those of people who typically test positive for the disease. In these circumstances, pooling can be harmful to such patients because negative results are not reviewed individually.

Second, there are those who come into contact with persons who have been confirmed to have the disease. In a country with a sophisticated contact tracing system, the risk level for such persons can be considered elevated if they have been in contact with multiple positively tested individuals in settings that heighten exposure. Here too, despite a negative result, it might be necessary to re-test but in a pooling scenario that would be impractical.

Lastly, we have those individuals who fit some theoretically determined risk profile (for example, living in a neighbourhood where many people have tested positive). Testing in such “community screening” scenarios is driven largely by the evolving picture of the outbreak and by the quality of data, modelling and analysis. In countries where public health institutions are weak, the data, models and analyses involved may well also be weak.

In Ghana’s case, the empirically observed situation accords cleanly with the above descriptions. Using the 21st April 2020 Covid-19 situational report for the Greater Accra Region (the capital city region that is presently the epicenter of the outbreak in Ghana, accounting for more than 85% of all infections), we computed a positive case rate of 8.5% for test subjects selected through routine surveillance and 8% for those selected through contact tracing. Community screening, on the other hand, yielded a positive case rate of less than 0.1% of the total number of individuals tested.

Ghana’s fascinatingly low 1.5% rate could thus be the result of a major dilution effect from potentially weak screening. If the community screening results are removed, the Ghanaian observed prevalence rate becomes almost identical with that of neighbours such as Nigeria and Ivory Coast.
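That dilution effect can be checked with simple algebra. Collapsing the cohorts into just two strata (a simplification of mine, using the rates quoted above: roughly 8.5% positivity for clinically driven testing and roughly 0.1% for community screening), the share w of all tests that community screening must contribute for the blend to come out at the observed ~1.5% solves overall = w × 0.001 + (1 − w) × 0.085.

```python
def screening_share(overall, community_rate, clinical_rate):
    """Fraction of all tests that community screening must supply for
    the blended positivity rate to equal `overall`, assuming just two
    strata (a simplification of the cohorts described above)."""
    return (clinical_rate - overall) / (clinical_rate - community_rate)

w = screening_share(overall=0.015, community_rate=0.001, clinical_rate=0.085)
print(f"{w:.0%}")   # ~83%: about five of every six samples would have
                    # to come from low-prevalence community screening
```

On these rough assumptions, community screening would need to account for over four-fifths of all samples tested, which is consistent with the observation that such screening dominates Ghana's testing volume.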

Pooled Sampling Should be Used Selectively

But even if the community screening exercise is based on perfectly sound models, and I do admit that the current positive rate from routine surveillance nationwide in Ghana is around 2.5% (most likely due to loosening surveillance standards especially outside the capital) and thus less dramatically divergent from the community screening results, individuals screened under this method have different risk profiles and therefore clinical needs.

Those tested under the routine surveillance route, clearly because they are more likely to be sick, and some of those tested through the contact tracing route, because their risk is higher, may all have a right to a standard of care involving re-testing where necessary. Grouping test subjects into cohorts based on various indices of suspicion and vulnerability seems, therefore, like an elementary ethical requirement.

In summary, whilst pooled testing could considerably boost testing capacities around the African continent, it should only be implemented in each country following peer-reviewed studies to establish the right optimisation models and after national ethical clearance has established the right of patients to differential diagnosis.

It is probably best then that such methods are reserved for community screening (typically more than 60% of the overall testing volume anyway) only. The evidence shows clearly that this category of testing has the lowest risk, at least before there has been widespread community infection. In the same vein, it may be prudent to avoid using the pooled technique on samples from individuals or patients identified through routine surveillance and contact tracing.

On 11th April, 2020, I wrote a blogpost warning that if the government of Ghana failed to get its data management under control, it would start to lose public trust, regardless of how well the actual management of the Covid-19 outbreak itself was going.

On 19th April, 2020, the President of Ghana announced to Ghanaians that he was, effective the 20th of April, lifting the “partial lockdown” imposed on the country since 30th March, 2020.

He cited the “robustness of data” and the “constancy” of the situation as the basis for the decision. In an earlier article, I have discussed why the President’s assurances did not go down well with many of the country’s health leaders.

The government has left the metrics and indicators that will trigger specific actions (such as the loosening of restrictions) much too loose and ad hoc. The trade-off of this flexibility is second-guessing by experts outside the government’s team.

In this shorter article, I shall be listing, for those who could not read any of the earlier posts, the outstanding data-related issues that will continue to fuel controversy if not comprehensively addressed.

  1. The GHS’ public bulletins are too data-thin and patchy to serve researchers.

The Ghana Health Service (GHS) releases periodic updates on a portal to inform the public about new developments. Compared to the “situational reports” prepared at regional and district level for health administrators, the data on the portal (though visualisation has been improving) is not very useful to researchers as it lacks granularity. None of the reports circulating around is in a format that can be exported to spreadsheets for analysis anyway. The inability of independent researchers to build models to explain features of the disease phenomena is forcing them to speculate and rely on the grapevine. Experts without the help of data are rarely more insightful than ordinary joes.

  2. Beyond data on the spread of the disease, there is also no information on the protocols guiding the response itself.

The Covid-19 response in Ghana is, according to the authorities, based on three main pillars: trace, test and treat (including isolation if necessary). Each of these strategies has many underpinning operational elements, yet none of them is guided by published, widely available national protocols and standard operating procedures. In the absence of documentation, speculation is rife about all manner of things. In this short post, we will deal primarily with the first two strategies: trace and test.

  3. The “supply chain” for delivering tests to people needs work.

The country’s mass testing protocol has many gaps and lags. Clinical referrals for testing during routine surveillance (where people with Covid-19-like symptoms are identified by clinicians and sampled at a health facility) are currently not automated. The performance of the different tracing teams (public health personnel who actively search for cases in the community) differs considerably. Samples need aggregation before they are sent off to the various labs (virtually all of them in the country’s two major cities of Accra and Kumasi), and inefficiencies at this stage can affect the quality of the sample and thus testing integrity.

Per WHO and US CDC standards, samples need a cold chain at all times. If a sample will take more than five days to reach the lab, dry ice (-70 degrees Celsius) is required. Ideally, samples should be transported in protein-antibiotic mixtures called viral transport media (VTM). Some laboratory scientists complain of delivered samples lacking even basic saline buffers, arriving unsealed, or having such small sample volumes as to interfere with viral RNA extraction.

Whilst Ghana’s laboratory scientists are consummate professionals doing their best in trying circumstances, there is a limit to what their ingenuity alone can achieve.

  4. The government’s “aggressive tracing regime” has been faltering.

Given all these supply chain and resource constraints, it is not too surprising then that the “aggressive tracing” promised as a partial substitute for the lockdowns appears to be slackening.

In the first week following the lockdown, tracing surged from 635 contacts reached to 5308. Sample collection rose from 589 to 4969. By 19th April, the day the lockdown was lifted, contact tracing figures were down to 2049 and samples collected were as low as 1018. Considering that infection stats are highly sensitive to overall levels of tracing and sample collection, this apparent slackening is worrying.

  5. The government’s interpretations of the ratio of positive cases to total tests are statistically loose.

At the time of the lifting of the lockdown, much was made of the fact that only 1042 out of 68,591 (ergo, 1.52%) tested subjects were positive. However, that analysis involves a bit of mixing apples and oranges. Some of the testing protocols are so different in their quality that their results should not be allowed to dilute the overall picture.
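The dilution effect can be made concrete with a little arithmetic. Only the national totals (1,042 positives out of 68,591 tests) come from the figures quoted above; the per-category split below is purely hypothetical, chosen only to show how a large, low-yield cohort drags down the blended ratio:

```python
# Illustrative sketch of how a low-yield testing category dilutes the
# overall positivity ratio. The per-category splits are hypothetical
# round numbers; only the blended total mirrors the reported figures.

def positivity(positive, tested):
    """Positive case ratio as a percentage."""
    return 100 * positive / tested

# Hypothetical split: a small, high-yield clinical cohort and a large,
# low-yield community-screening cohort.
surveillance = (850, 10_000)   # ~8.5% positive (clinical referrals)
community    = (192, 58_591)   # ~0.3% positive (risk-model screening)

overall_pos = surveillance[0] + community[0]
overall_n   = surveillance[1] + community[1]

print(f"surveillance: {positivity(*surveillance):.2f}%")
print(f"community:    {positivity(*community):.2f}%")
print(f"blended:      {positivity(overall_pos, overall_n):.2f}%")
```

The blended 1.52% says almost nothing about either cohort; the headline number is dominated by whichever category contributes the most tests, not the most signal.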

To illustrate this point, I shall focus solely on the central hotspot of the epidemic in Ghana, Greater Accra. I was lucky enough to get a hold of the Greater Accra Covid-19 situational report of 21st April 2020, just around the period the lockdown was lifted.

The high-level breakdown of the aggregated numbers in Greater Accra as at that date was as follows:

In Accra, at the time the lockdown was lifted, people referred for testing because they were showing Covid-19 symptoms (i.e. through “routine surveillance”) had an 8.5% chance of testing positive. People identified for testing because they had come close to someone confirmed as infected had an 8% chance of being positive too.

These two categories of people are being targeted for testing using well-established and epidemiologically grounded methods. The people being randomly tested based on the GHS model of which communities are at risk tend to have a much lower probability of testing positive. The interpretation the government’s advisors have given to this fact is that community spread is low. The more likely answer is that the GHS’ model is weak. Since the GHS refuses to publish and defend it before independent analysts, most biostatisticians I have discussed this issue with dismiss the model out of hand.

  6. Even the higher ratio of positive cases in routine surveillance may be underestimating true spread.

Ghana’s routine surveillance programs have considerable weaknesses. In 2010/2011, and again in 2017, they failed to detect H1N1 outbreaks till very late. In the 2017 episode, four KUMACA students who died of H1N1 at the Komfo Anokye Teaching Hospital were diagnosed only after death. According to the present Auditor-General, the Veterinary Services Department failed or neglected to set up an Asian Influenza pandemic preparedness system in 2010, opting instead to devote the money to workshops. When the epizootic crisis hit, over 400,000 livestock belonging to poor Ghanaian farmers perished from H1N1. The 8.5% positive testing rate recorded in Accra for Covid-19 suspected cases at the time of lifting the lockdown may thus have been lower than the true situation.

I do note that, in more recent days, the national-level routine surveillance ratio of positive cases has fallen to as low as 2.5%. Since situation reports for other regions are hard to come by, it is unclear if suspicion parameters and case definitions are identical nationwide. For instance, when composing the national picture, the GHS, unlike the regional directorates, lumps the community screening activities (based on its proprietary model) together with the enhanced contact tracing activities, thereby obscuring the fact that it is detecting virtually no cases through the so-called “community screening” exercises, most likely due to weak modelling.

  7. There is concrete evidence that the GHS risk-based model for community screening is weak.

Using this proprietary model, the government decided to concentrate its efforts in Ayawaso West. At one point, it was even suggested that screening in this district would be universal and compulsory, only for the proclamation to be withdrawn later without explanation or ceremony.

It soon became clear that the transmission dynamics were far more complex. In a few days, the virus penetrated deeply into Ayawaso Central, an extremely high-density inner-city enclave where suburbs such as Nima, Maamobi and Kanda are clustered. Then it stormed Accra Central (Jamestown, the High Street, the Central Business District etc) before making the most fascinating move of all: turning Korle Klottey (Osu, Ridge, North Adabraka, Odorna etc) into the fastest-growing hotspot.

A simple biostatistical model based on covariance analysis (examining how trends in positive case confirmations align across economically connected zones) should have shown clearly that the commuter patterns of informal labour pools criss-crossing the Maamobi, Tudu, CBD, Odorna and North Adabraka inner-city rings were the primary features of interest. Some serious urbanography, not just epidemiology, should have been deployed immediately. The lack of open data prevented urban researchers from joining the fray.
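The kind of covariance analysis suggested here can be sketched in a few lines. The daily case series below are synthetic (the real sub-metro counts were never published), so the numbers only illustrate the method, not actual Accra data:

```python
# A minimal sketch of correlating daily confirmed-case trends across
# zones to flag shared transmission channels. All series are synthetic.
import numpy as np

days = np.arange(14)
rng = np.random.default_rng(0)

# Hypothetical zones: one enclave leads, a commuter-linked zone follows
# with a short lag, while an economically disconnected zone stays flat.
maamobi = np.exp(0.25 * days) + rng.normal(0, 0.5, 14)
cbd     = np.exp(0.25 * np.clip(days - 3, 0, None)) + rng.normal(0, 0.5, 14)
distant = np.full(14, 2.0) + rng.normal(0, 0.5, 14)

series = np.vstack([maamobi, cbd, distant])
corr = np.corrcoef(series)  # pairwise correlation of the zone trends
print(np.round(corr, 2))
```

A high correlation between commuter-linked zones, against a near-zero one for the disconnected zone, is exactly the signature that would have pointed analysts at the informal labour commuting rings.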

  8. Urbanographic analysis is clearly critical in anticipating the worst.

Ayawaso East, Ayawaso West and Korle Klottey are the areas where Covid-19 related hospitalisations are likely to increase due to the growth trend of routine surveillance results. Accra Central and Ayawaso Central appear to be harbouring fast-growing numbers of asymptomatic individuals. How the trends will move from here on requires an urbanographic, not just epidemiological, lens. Covid-19 always seems containable until it finds a vulnerable population or highly susceptible community in some cluster and then starts wreaking havoc. We can only hope that we can race ahead of the virus to identify such communities and ring-fence them before Covid-19 does.

  9. The government’s attempt to push the narrative that it was investing heavily in testing resources created the confusion about testing capacity.

It turns out that it was the clever scientists at Noguchi who had found a workaround: pooled sampling, not massive injections of resources into testing infrastructure as the country was being told. Pooled sampling refers to the consolidation of multiple samples from different individuals into a single thermocycling run (i.e. a single test).
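For readers unfamiliar with the technique, here is a minimal sketch of Dorfman-style pooling: the general idea, not Noguchi’s actual protocol, and with illustrative parameters throughout:

```python
# Sketch of Dorfman pooled testing: samples are combined into pools of
# size k; a negative pool clears everyone in one run, a positive pool
# triggers individual retests. Parameters below are illustrative only.
import random

def dorfman_tests(samples, k):
    """Return the number of PCR runs needed under pooled testing."""
    tests = 0
    for i in range(0, len(samples), k):
        pool = samples[i:i + k]
        tests += 1                # one run for the whole pool
        if any(pool):             # pool positive -> retest each sample
            tests += len(pool)
    return tests

random.seed(1)
n, prevalence, k = 1000, 0.015, 10   # ~1.5% positivity, 10 per well
samples = [random.random() < prevalence for _ in range(n)]

print("individual tests:", n)
print("pooled tests:    ", dorfman_tests(samples, k))
```

At low prevalence most pools come back negative, so a thousand samples can be cleared with a few hundred runs; that is the entire trick behind the apparent surge in testing capacity.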

  10. Pooled sampling has rescued the country but it has important limits.

India is one of the few countries in the world to have commissioned a detailed efficacy and ethical review of whether to update the national protocol on testing by allowing pooled samples. It did so just a little over a week ago but added many caveats, which should be of concern to Ghana too.

The Indian Council of Medical Research’s (ICMR’s) decision to impose a cap of five samples, a threshold Ghana initially adopted before “escalating” to 10 samples per well, speaks to the fear that larger pools degrade diagnostic sensitivity. The ICMR also went further, permitting pooled sampling only if the pre-test probability of positivity is lower than 2%.

Stanford’s Benjamin Pinsky, a clinical virologist, recently led a team to conduct mass community screening for Covid-19 (especially at sub-clinical level) in San Francisco. His team determined that a pre-test probability of 1% is a reasonable threshold for allowing pooled sampling. These precautions, put in place in other epidemiological contexts, raise important issues for Ghana’s continued use of pooled sampling.

Firstly, a pooled sampling protocol is highly responsive to the specifics of the test kit in use, the epidemiological background, and the goals of screening. Hence, the protocols must be submitted to peer review and national-level ethical clearance, as has been done in India.

The ethical issues are compounded because differential diagnosis remains the standard of care in a context like Covid-19, where observed symptoms can be highly non-specific. Many respiratory pathogens could be implicated in the clinical presentation. (Even some health workers have taken to calling SARS-CoV-2, the virus that causes Covid-19, a “flu virus”, but it belongs to a completely different family of viruses.) In that regard, re-sampling for further tests could be warranted even if a negative result ensues. In a pooled testing scenario, this situation is complicated, especially in the absence of patient consent.

In light of this, only the mass screening exercises appear ethically suited for pooled sampling. Routine surveillance and enhanced contact tracing cases, with their high pre-test positivity rates, are, on the other hand, best not confirmed through pooled sampling.

  11. The issue of whether the denominator used in determining the positive case ratio is being overestimated remains unresolved.

The different testing labs for Covid-19 in Ghana at present maintain separate indexes and case investigation form-coding procedures. There is currently no efficient way to harmonise and consolidate multiple tests performed on different samples from the same individual, especially as the case investigation forms map to a unique sample ID but not to a unique patient ID. Multiple samples from the same person submitted to different labs would automatically count as separate cases.

Multiple samples submitted to the same lab can be harmonised against a single patient if the records can be traced back to the individual. In a pooled sampling regime, it is problematic to trace every sample constituting a pool back to a patient history without defeating the original goal of saving time.
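The double-counting risk can be illustrated with a toy record set (all IDs and results below are invented). If a patient identifier existed, the per-patient picture would differ from the per-sample one that gets reported:

```python
# Toy illustration of sample-ID-keyed records inflating the test
# denominator. All IDs and results are hypothetical.

records = [
    {"sample_id": "LAB1-0001", "patient": "A", "result": "negative"},
    {"sample_id": "LAB1-0002", "patient": "B", "result": "positive"},
    {"sample_id": "LAB2-0001", "patient": "A", "result": "positive"},  # retest at another lab
]

tested_samples = len(records)                          # what gets reported
tested_people  = len({r["patient"] for r in records})  # what should count

positives = {r["patient"] for r in records if r["result"] == "positive"}
sample_positives = [r for r in records if r["result"] == "positive"]

print(f"per-sample positivity: {len(sample_positives)/tested_samples:.0%}")
print(f"per-patient positivity: {len(positives)/tested_people:.0%}")
```

Patient A appears twice, so the sample-keyed denominator is inflated and the reported positivity understates the per-patient rate, which is precisely the unresolved denominator question.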

These issues need to be clarified properly in an open and transparent manner before the 1.45% total positivity rate reported by the GHS on 22nd April 2020 can be accepted at face value.

  12. Open Data is NOT the enemy.

Open Data is clearly not the enemy here. If anything, it is the scorned friend waiting on the sidelines to save the day.


On 19th April, 2020, the President of Ghana announced to Ghanaians that he was, effective from 20th April, lifting the “partial lockdown” imposed on the country since 30th March, 2020.

He cited the “robustness of data” and the “constancy” of the situation as the basis for the decision, but he did not elaborate whether by this he meant that the lockdown had achieved the generally cited purpose of that draconian form of epidemic control: reducing the number of new infections (called “incidence” in epidemiology).

In later explanations to the media, the Presidential Advisor on Health would add some nuances to the effect that the purpose of the lockdown was to gather more insights into the nature of the disease’s spread so as to pave the way for more targeted measures. Such a step, in his view, was critical to preventing the possibility of a worse humanitarian emergency in poorer communities for whom the lockdown was exacting a frightening economic toll. The inference here must be that the Administration had evaluated the state of the pro-poor economic reliefs it had rolled out at the beginning of the lockdown and determined that they couldn’t do the job of abating suffering, and possible social unrest, should the lockdown continue.

Throughout all these explanations, the country was nevertheless not served with any data-backed models to enable independent researchers and other critical observers to analyse the totality of the situation for themselves.

Hardly surprising, then, that before long, senior public health figures outside the government, including a former Head of the Ghana Health Service, the country’s preeminent primary and secondary health agency, decided to break ranks and take serious issue with the quality and integrity of both the data and the decisions purportedly based on them.

I couldn’t suppress the wry smile on my face. On April 11th, at the peak of the partial lockdown, I had pleaded with the government to be more transparent and far more diligent than it had been to date in showing people in advance what specific metrics would prompt which specific actions. To avoid the charge of data manipulation and afterthought justifications of conclusions predetermined on the basis of factors other than the public interest, the public, I argued, has to be educated comprehensively about how different status indicators shall serve as triggers for specific actions. That required, of course, that said status indicators were themselves clearly defined, solidly rigorous, well thought through, and reasonably comprehensive.

I am afraid to say that the government did not listen. Its status indicators remain hard to pin down, and what can be gleaned from its interpretation of the data leads to ambiguity on the most important issues, such as whether to intensify or loosen lockdowns, whether to change tack or stay the course in its approach to “mass/community screening”, and whether to admit to an acceleration of incidence or continue to insist on this “constancy” theory.

The government’s earnest and vivacious “communicators” keep pushing one theme: everyone should keep calm, trust government-appointed and favoured “experts”, and have faith that all decisions are being driven by “science” and a dogged commitment to the public interest.

Unfortunately for our good friends in the government’s various PR brigades, science and expertise are not hymn sheets from which a harmonised tune can be blared from loudspeakers on the high and imperious walls of Jubilee House.

“Science”, whether in a pandemic or in normal times, is a highly variegated domain of knowledge, replete with competing worldviews among disciplinary specialists of many stripes. That is why almost every university and think tank worth its salt in every sophisticated country has its own set of theories and models about the virus, complete with different, sometimes contradictory, trajectories and forecasts. If Ghana were a more sophisticated country, there would have been more, not fewer, of the disputes amongst different types and grades of health “experts” witnessed in the aftermath of the President’s decision to lift the lockdown.

Another obvious truth in this matter is that an expert not armed with data is rarely more effective than the ordinary joe in making sense of emerging phenomena. By failing or refusing to publish sound models and put out enough data to aid serious analysis, the government consigned experts to the grapevine with the rest of us. As snippets of information and anecdotal evidence swirled around, contextless documents added a veneer of depth to opinions, until eventually things came to a head when some of the country’s most eminent health thinkers openly accused the government of “massaging” the Covid-19 incidence and “observed prevalence” statistics. The two main labs in the country handling the surge in testing found themselves in the cross-fire, accused of misrepresenting their testing capacity.

The government was on the verge of squandering the trust it had built up in the early days of the pandemic when citizens rallied behind their leaders as predicted by the sociology of emergencies.

In this brief article, I shall be shedding light on the two main data-related controversies: (a) is the incidence rate of Covid-19 “stable” in Ghana? and (b) is “pooled sampling” a satisfactory rebuttal to those who question the effective testing capacity in Ghana? I shall be doing so with a view to reiterating my earlier points about the extreme importance of trustworthy data in decision-making during a crisis such as the one we face now.

From the onset of the pandemic in Ghana, the Ghana Health Service (GHS) has published periodic bulletins to inform the public about major developments. Researchers have complained about the unavailability of deeper and broader data sets in a format that can be exported into spreadsheets to enable them to chart trend curves and develop elaborate, on-the-fly models to explain various facets of this unprecedented phenomenon. They have humbly requested that these data sets be released promptly and consistently to allow for dynamic modelling and analysis, yet the GHS refuses to budge.

A part of the reason may well be capacity. The country’s mass testing protocol has many gaps and lags. Clinical referrals for testing during routine surveillance (where people with Covid-19-like symptoms are identified by clinicians and sampled at a health facility) are currently not automated. The performance of the different tracing teams (public health personnel who actively search for cases in the community) varies. Collected samples have to be aggregated and sent to the various labs (virtually all of them are in the country’s two major cities of Accra and Kumasi) using an ad hoc supply chain.

Per WHO and US CDC standards, samples need a cold chain (between 2 and 8 degrees Celsius), and if delays in dispatch will last beyond five days, dry ice is required for storage and transport. Ideally, samples should be transported in viral transport media (VTM), protein-antibiotic mixtures that some GHS tracing teams don’t have. Some laboratory scientists complain of the occasional sample coming in without even basic saline medium, or improperly sealed, or in such small volumes as to interfere with viral RNA extraction. In short, all the messiness of real-world supply chains.

In a country with such weak health infrastructure, it is a miracle that the fine professionals working in the reference labs are still keeping things together. The end result, however, is that there are inefficiencies in the data collection process, as I alluded to in my initial article. Until capacity is ramped up at all levels, the data will continue to be patchy. The delays and lumpiness in the release schedule, despite widespread perception, are not all caused by the Ministry of Health and the Ghana Health Service’s insistence on “validating” the data before public release.

But all this shouldn’t be a problem if the entire mass testing protocol were transparent to researchers and critical observers. Statistical methods exist to clean and fix data weaknesses and gaps. The real problem is the lack of interest on the part of the government’s Covid-19 response team in engaging candidly and openly with the research community on the data question.

Even as the government’s communicators were justifying the decision to lift the lockdown on the grounds of accelerated tracing, tracing figures were actually declining in Accra. In the first week following the lockdown, tracing surged from 635 contacts reached to 5308. Sample collection rose from 589 to 4969. By 19th April, the day the announcement was made to lift the lockdown, contact tracing figures were down to 2049 and the daily tally of collected samples was as low as 1018. Obviously, the facts on the ground did not align with the view that lockdown measures were being replaced with aggressive contact tracing. Considering that infection stats are highly sensitive to overall levels of tracing and sample collection, the discrepancy is curious.

Which brings us to the issue of the positive case ratio. Apart from the fact that testing positive for the virus does not imply a clinical diagnosis of Covid-19, there is also our presently limited understanding of the role of viral load or concentration in both symptom progression and detectability. These complexities require care and diligence in executing a mass testing protocol.

Thus, when the President’s advisors conclusively argued in the days following the decision to lift the lockdown that because only 1042 out of 68,591 individuals (ergo, 1.52%) tested were positive, the degree of community spread was somewhat restrained, they were missing some very critical granular facts. The insight is not in these global numbers, made up as they are of apples and oranges forced into a mix. It can rather only be found by careful disaggregation and analysis.

To illustrate, let me use the central hotspot of the epidemic in Ghana. I was lucky enough to get a hold of the Greater Accra Covid-19 situational report of 21st April 2020, just around the period the lockdown was lifted.

The high-level breakdown of the global numbers as at that date was as follows:

Even at this high level, it can quickly be seen that the “positivity profile” of the tested individuals in the different groups listed above (those referred for testing because they were showing some symptoms; those identified for testing because they had come into contact with someone who tested positive; and those who were randomly tested because they live in an area considered as lying within a perimeter or cordon of risk as determined by the prevailing hotspot models of the GHS) varies considerably. Whilst those referred by clinicians for presenting Covid-19-like symptoms had a roughly 8.5% chance of testing positive, those who were identified because they had been in contact with a positive case had a slightly less than 8% chance. More crucially, those earmarked for testing simply because they were caught in the GHS risk-based net had virtually no chance of testing positive.

Different biostatisticians and epidemiologists may draw different conclusions. But one of the most plausible would be that the GHS risk-based community screening model is broken and should not be lumped together with the more epidemiologically grounded models of routine surveillance and primary contact tracing.

True, Ghana’s routine surveillance programs have their own weaknesses. In 2010/2011, and again in 2017, the country’s capacity to detect outbreaks of H1N1 in time to avert mass casualties was tested and found wanting. In fact, in the 2017 wave, four KUMACA students who died of H1N1 at the Komfo Anokye Teaching Hospital were diagnosed post-mortem. Eventually, 96 cases would be detected, but well past the point when a functional surveillance program should have caught them.

According to the present Auditor-General, the Veterinary Services Department failed or neglected to set up an Asian Influenza pandemic preparedness system in 2010, opting instead to devote the money to workshops and other such trifles. When the epizootic crisis hit, over 400,000 livestock belonging to poor Ghanaian farmers perished from H1N1. But all that is merely to say that the positivity rate for routine surveillance is very likely to be higher than 8.5%. Letting the unproven “community screening” numbers dilute the overall positive case ratio amounts to a failure of sound analysis. Even more so since the GHS refuses to publish any document to explain the logic driving it.

Furthermore, enhanced granularity only deepens anxiety. The preliminary “hotspot perimeter designation model” orally described by the President’s coordinators of the Covid-19 response effort, recommended a concentration of effort in Ayawaso West. At one point, it was even suggested that testing there should be universal and compulsory, only for the proclamation to be withdrawn without explanation or ceremony.

It soon became clear that the transmission dynamics were far more complex. In a few days, the virus charged into Ayawaso Central, an extremely high-density inner-city enclave where suburbs such as Nima, Maamobi and Kanda huddle tightly together. Then Accra Central (Jamestown, the High Street, the Central Business District etc) took its turn, before the most fascinating development of all: the emergence of Korle Klottey (Osu, Ridge, North Adabraka, Odorna etc) as the fastest-growing hotspot. Ethnographic economic data shows trends tied to the commuting habits of specific pools of informal labour that reside in one enclave and ply particular types of trades in other enclaves.

Rudimentary covariance analysis of the interrelationships among trends in shifting hotspots would have highlighted the spatio-economic links driving the spread of the virus from one high-density spot to the other. But serious urbanography, beyond pure epidemiology, would be required to deepen the insight, once again making a strong case for both a wider sharing of prompt and complete data and for maintaining a multidisciplinary stance.

The social relief strategy advised by such a stance would not have prioritised warm rations over invasive but respectful broader public health interventions beyond just disease surveillance. Communal toilets, communal baths, communal pipe stands and similar locations would have seen re-engineering to enhance a modicum of social distancing however difficult that prospect might be in places like Old Fadama, where human density at certain seasonal peaks exceeds 3000 per hectare. The distribution of hygiene products would have received the same attention as the sharing of warm rations.

Knowing that Ayawaso East, Ayawaso West and Korle Klottey are the areas where Covid-19 related hospitalisations are likely to increase due to the growth trend of routine surveillance results, mobile screening facilities should have been deployed more aggressively in a kind of shifting sentinel strategy. Accra Central and Ayawaso Central, having become a source of worry because of the fast-growing number of asymptomatic individuals observed during contact tracing, should have been earmarked for limited serological surveys as part of the post-lockdown measures.

In short, the main point here is not to quibble over the President’s decision to lift the lockdown per se. The focus here is on highlighting what a truly data-driven set of measures would have looked like.

And the other controversy over testing numbers?

Here too, much of the hoopla could have been avoided had politicians not attempted to take credit for Ghana’s supposedly high ranking on testing league tables in Africa. The impression was created that the country’s performance was the result of massive investments in RT-PCR platforms, reagents, and diagnostic assays.

People with connections in the scientific community knew however that these resources were yet to be made available in any quantity capable of effecting such a dramatic transformation of the country’s testing capacity. Judging from the experience in other countries, the recently donated test kits from the Jack Ma Foundation were yet to be put to use because compatible reagents were not available. No wonder then that these PR narratives triggered protests, most famously from the former Director-General of the Ghana Health Service itself.

It turns out that the clever scientists at Noguchi had found a workaround: pooled sampling. It is true that the original algorithms and supporting logic for how to maintain experimental validity when running tests in aggregates date all the way back to the work of the political economist Robert Dorfman, notably in his 1943 paper in the Annals of Mathematical Statistics (another toast to multidisciplinary thinking). But the sheer boldness to respond in this manner to the national call for a surge in testing, even without the corresponding resources, deserves respect.

Contingent ingenuity in the laboratory does not, however, excuse incoherence at the level of national policy. As Noguchi’s leadership freely admits, pooled sampling shall only remain effective if infection rates in Ghana are subdued. It is not for nothing that many reference labs around the world have not got around to accepting pooled sampling for routine Covid-19 case confirmation. Whilst India did so just a little over a week ago, the many caveats added by the country’s apex health authority, the Indian Council of Medical Research (ICMR), show clearly that the very real risk of false negatives due to dilution remains a concern for many laboratory quality assurance and bioethics experts.

The ICMR’s decision to impose a cap of five samples, a threshold Ghana initially adopted before “escalating” to 10 samples per well, speaks to the fear that larger pools degrade diagnostic sensitivity. The ICMR also went further to permit pooled sampling only if the pre-test probability of positivity is lower than 2%. Stanford’s Benjamin Pinsky, a clinical virologist, recently led a team to conduct mass community screening for Covid-19 (especially at sub-clinical level) in San Francisco. He would peg the suspected positivity ratio at 1%. These precautions, put in place in other epidemiological contexts, raise important points for our continued use of pooled sampling.

Firstly, a pooled sampling protocol is highly responsive to the specifics of the test kit in use, the epidemiological background, and the goals of screening. Consequently, the protocols tend to be submitted to peer review. In Ghana, the protocol has not even been published. Secondly, there are some important ethical issues that arise when human subject testing protocols are changed midstream. Institutional Review Committee approval is typically required. In a public health emergency, national-level ethical clearance, as was the case in India, becomes important. Given that members of the medical fraternity in Ghana appeared unaware, I would reckon that this is yet to be done. At any rate, in India, the process was openly announced.

It is fair to wonder if this is all mere bureaucracy. Alas, it isn’t. In routine surveillance referrals for testing, clinical outcomes for the individual remain important notwithstanding the prioritisation of public health. Differential diagnosis remains the standard of care in medical situations like Covid-19, where observed symptoms can be highly non-specific. Many respiratory pathogens could be implicated in the clinical presentation. (Even some health workers have taken to calling SARS-CoV-2, the virus that causes Covid-19, a “flu virus”, yet it belongs to a completely different family of viruses.) In that regard, re-sampling for further tests could be warranted even in the event of a negative test result. In a pooled testing scenario, this situation is complicated, especially in the absence of patient consent.

Moreover, because best practice also recommends the use of double probes to target different gene/ORF regions of viral RNA to increase sensitivity, a negative result can, theoretically, involve discordance between the two probes, which again may require retesting.

More crucially, in a routine surveillance or enhanced contact tracing situation, molecular tests are contributory but not determinative in every instance. A high index of suspicion due to travel history, clinical observations and other factors might warrant re-confirmation. A protocol which conserves resources (and to a somewhat lesser extent, time) by eliminating individual retesting of negative samples in all cases risks bumping up against ethical norms.

How to resolve the conundrum of reconciling the national demand for surged testing with the rights of the individual patient? Simple: segment the testing pool into its original cohorts. Mass community screening, which constitutes the bulk of current testing, has a limited connection to clinical management and should probably not generate any serious ethical issues in a pooled sampling regime. Routine surveillance and enhanced contact tracing cases, on the other hand, present a clear challenge and are best not confirmed through pooled sampling, especially considering the higher pre-test probability of positivity evident in Ghana’s Covid-19 testing data, well above what stringent review boards elsewhere have determined.

To set minds at ease, the reference labs may consider independent IRB evaluation of the claims that even in the case of samples collected from asymptomatic individuals, presumably with very low viral loads, cDNA concentration techniques and additional amplification time are not required during thermocycling to bring the probability of false negatives within acceptable limits. I acknowledge the steady stream of papers in the protocol investigation literature providing reassurance about the merits of pooled testing, but those testimonials are precisely the kind of evidence that ethical committees are best qualified to weigh.

The last strand of this particular controversy concerns the possibility of “multiple counting” of the same individual as a result of multiple tests per individual. A claim has been made that compartmentalisation of results for recovering patients who are tested a total of three times to resolve a case addresses all the issues. Unfortunately it does not.

The different testing labs at present maintain separate indexes and case investigation form coding procedures. There is currently no efficient way to harmonise and consolidate multiple case investigations of the same individual, especially as the case investigation forms map to a unique sample ID but not to any patient ID at all. Multiple cases submitted to different labs would automatically count as separate cases. Multiple cases submitted to the same lab can be harmonised against a single patient if the data is re-identified. In a pooled sampling regime, it is problematic to trace the samples constituting every pool back to patient histories without defeating the original goal of saving time. So, here too, it is best for limitations to be openly acknowledged and solutions widely canvassed through open and sincere national conversations.

On the whole, Ghanaians seem quite impressed by the enthusiasm with which members of the country’s long neglected scientific community have embraced their role in responding to the public health emergency. The constraints being imposed by politicians’ lack of “data candour” are however slowly undermining the confidence some sections of the society have in the respectful coexistence of science and politics that marked the early days of the government’s response.








The Government of Ghana has announced a three-pronged strategy for comprehensively responding to the Covid-19 crisis: Testing, Tracing & Treatment.

Of those three dimensions, many observers feel that the first two are the most critical in the current phase of the crisis, as they are more visible and more closely linked with prevention, which, given the country’s limited resources, is far more critical than curing.

There is no doubt that tracing and testing are critical, but the strategy for doing both well is even more important.

In addition to early detection, effective tracing and testing also enable responders to use the number of *confirmed cases* to project/predict the pattern of *true cases*. In every epidemic, there are always many people with the disease whose condition is not known and has therefore not been recorded by the official health system. Hence, confirmed cases always lag and underrepresent true cases. What is important is for the confirmed cases to track the true situation on the ground reasonably faithfully.

For that to happen though, confirmed cases must constitute a *representative sample* of true cases.

Crudely, C → xT, where “x” is a roughly constant ratio, “C” is a measure of the confirmed cases and “T” is a measure of true cases. If one considers each daily count/announcement of the infection level as a term in a series, each term should be expressed as closely as possible by this crude mapping relationship: C → xT.
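
The mapping can be illustrated with hypothetical numbers (the true-case counts and the sampling ratio below are assumed purely for illustration; the true counts are, of course, unknowable in reality):

```python
# Illustrating C -> xT: when confirmed cases are a representative sample of
# true cases, the ratio x stays roughly constant, so growth inferred from
# the confirmed series mirrors the true growth.
true_cases      = [4000, 4400, 5100, 6000]   # hypothetical, unknown in reality
sampling_ratio  = 0.05                        # x: fraction captured by testing
confirmed_cases = [round(t * sampling_ratio) for t in true_cases]

for day, (c, t) in enumerate(zip(confirmed_cases, true_cases), start=1):
    print(f"day {day}: confirmed={c}, true={t}, ratio={c / t:.3f}")

# Day-over-day growth inferred from confirmed cases matches the true growth
# only because the ratio is constant:
growth_c = confirmed_cases[1] / confirmed_cases[0]
growth_t = true_cases[1] / true_cases[0]
print(f"confirmed growth {growth_c:.2f} vs true growth {growth_t:.2f}")
```

If the sampling ratio drifted from day to day, say because testing suddenly concentrated on one hotspot, the confirmed series would show growth or decline that has nothing to do with the true trend.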

The reason why one needs a roughly stable relationship between confirmed cases and true cases is because that is the only way one can use the official count of confirmed cases for any kind of policy management.

If the trend in confirmed cases does not reflect the underlying trend of true cases, then the official count becomes useless. No one can tell, in those circumstances, whether any policy, such as a lockdown, is working or not.

For the confirmed counts to be representative of the true level of prevalence, the total number of tests doesn’t really matter as much as usually supposed, unless the number of tests covers a very large proportion of the overall population, i.e. runs into the millions. In Ghana’s case, tests underway number about 44,000, of which 15,000 have been completed (Presidential Advisor on Health, April 11th, 2020). The condition of overwhelming proportionality therefore does not apply.

What matters more than anything else then is how public health authorities determine and secure a *representative sample* of the likely exposed populations for testing without overestimating the true extent of the spread.

So far, I haven’t seen officialdom in Ghana clearly explain the analytical logic by which that high bar is being aimed for.

And given the logistical challenges in pooling samples, running tests, batching results, and sending them to the Ministry of Health, which then releases the data to the GHS strongroom before proceeding to inform the public, the actual data points in any global number announced on any particular day could be coming from any of the preceding days, spanning a two- or even three-week period.

So, when the Ghana Health Service (GHS) announces that 30 more infections have been recorded between, say, the 10th and 11th of April, the breakdown of that “30” figure could easily be something like this:

A. 10 out of the 30 people announced as positive for that 24-hour cycle may have been tested 3 weeks ago.

B. 11 people tested 17 days ago.

C. 3 people tested 3 days ago.

D. 6 people tested on 10th April.

Thus, one is not looking at some kind of real-time dashboard of a consistently evolving situation. One is, in fact, looking at a mixed reality composed of different snapshots across time. A lagging, composite picture; not a sequential reel.
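
The composite picture can be sketched in a few lines, using the hypothetical A–D breakdown above:

```python
# A toy illustration (using the hypothetical breakdown above) of why a daily
# announcement is a composite of samples collected on different days.
from datetime import date, timedelta

announcement_day = date(2020, 4, 11)
# (number of positives, days before the announcement the sample was taken)
batches = [(10, 21), (11, 17), (3, 3), (6, 1)]

total = sum(n for n, _ in batches)
print(f"announced on {announcement_day}: {total} new cases, drawn from:")
for n, lag in batches:
    print(f"  {n} samples taken on {announcement_day - timedelta(days=lag)}")
```

The headline “30 new cases” is accurate as a count, but as a *time series data point* it smears three weeks of epidemic history into a single day.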

It is thus meaningless to say that infections are growing, slowing, growing faster or slowing sluggishly, etc., by simply relying on these global numbers. The current structure of data collection and delivery does not really allow a mere observer to say that.

The Government itself, on the other hand, has better insight into which tests came from which batches etc and therefore has better official intelligence to make those determinations. The general public unfortunately does not.

When the Government wishes to change the tone of policy, however, it would need to align its private picture of the epidemic with the public picture it has painted over time. That process, currently, is a work in progress as the authorities are now in the process of bringing more testing capacity on board by activating other laboratories in the veterinary services, the Tamale Teaching Hospital, the CSIR, the Food & Drug Authority and even, as I have recently heard, Korle Bu Teaching Hospital.

This will make the official counts (example: 408 infections as of 11th April) a truly dynamic picture of the true trends.

Aligning the *public trend picture* with the *official trend picture* is however only one of the two critical things that have to happen to make government policy more reflective of the supporting data.

The second task is what I mentioned earlier: aligning the confirmed cases picture with the true cases picture by ensuring something as close to a constant/common ratio in the daily progression of announced counts. That is to say, work must be done to increase confidence that the confirmed case count for day one is roughly consistent with the true, unknown, case count on day one and the confirmed case count for day two is roughly consistent with the true case count for day two.

In simple terms, if on day one there were 200 confirmed cases but the true number of infected individuals was 4000, then if the number of confirmed infections moves to 220 on day two, the true level of infections must also shift close to 4400. Note that this is more critical than the absolute number, whether 200 or 220. And therein lies an important distinction between the alignment point canvassed in this brief note and other concerns swirling around about what the true prevalence level might be.

These two alignments would then enable the Government to make forward-looking policy based on whether previous policies are having a statistically significant effect or not.

Until those alignments are in place, policy is merely provisional.

Naturally, I have had to severely simplify the epidemiological statistics in order to make the quick point I intended to make here. But the core points are valid. Refinements using standard biostatistical methods and techniques won’t change the fundamental insights too much.

How can these alignments be achieved then?

Aligning the public and official trend pictures would require improved logistics for sampling and increased capacity, which the Government is already working on. The strategy there is quite clear.

Aligning the confirmed case count more uniformly with the true level of prevalence requires serious modelling of the spatial distribution of the Covid-19 burden in Ghana at present, using historical data of where people from overseas usually disperse among the population, and then conducting mass randomised testing that omni-axially tracks infection dynamics along certain key radial pathways. But it also requires deliberate validation of “control sites”. One does not want to overestimate prevalence any more than one wishes to underestimate it.

In connection with this second angle on alignment, the government’s plans are vague. What has been said publicly suggests considerable gaps in process design since the entire enhanced tracing regime has been based on direct tracing of returnees and attempts to identify and test their direct contacts.

At any rate, the distribution of contact tracers in the current process does not follow a statistically rigorous pattern. Well-noted returnee hotspots like Asante Akim and the Techiman area have seen very limited tracing and limited risk-based sampling for mass testing. Nor is any attempt being made to validate assumptions about “non-hotspots” in order to reduce “data anisotropy”.

Part of the challenge arose from the initial skewing introduced by designing contact tracing around the 1030 international arrivals placed under mandatory quarantine. Depending on the day of the week, such a cohort of arrivals would not be adequately representative of international arrivals since the outbreak intensified. The Government’s decision to extend the coverage to nearly the entirety of March helps matters but does not entirely dispel the data challenges, since the training and effective distribution of trackers nationwide takes time to build up, during which period case contact trails become more convoluted.

The most critical issue of all, though, is the lack of public awareness, even at elite levels, of these gaps and the timeline for fixing them. This makes political milestone management lax since critical observers don’t know how to measure the progression of the health authorities towards this all critical point of alignment.

The media, in these times that civil society is taking a backseat to give Government space to focus on relief, needs to better understand the statistical aspects of the pandemic so that they can nudge the government towards delivering and communicating more effectively on the *twin alignments* discussed here.

It is absolutely imperative that Government assessments of whether the country is doing well or not be sufficiently transparent and logically easy to follow so that the roadmap to success is not hijacked by distrust, morbid partisanship and confusion.

When the time comes for the Government to loosen restrictions and actively kickstart the resumption of economic activities, the collaboration of the citizenry shall be vital. Much better if the logical journey to making those decisions has been made clear from the outset to the larger part of the population.

Elections in Africa have become some of the most expensive in the world.

A major part of the reason is deep mistrust among the participants leading to levels of paranoia about rigging that would make an Afghan politician squirm.

But at the heart of the matter is the simple reality that, with the atrophy of most of its pillar institutions, democracy has been reduced to elections in most of Africa. When elections are all there is, the vote attains sacramental significance in the ritualised state, and all fiscal logic must be sacrificed on its altar of convenience.

Hence the UK spent $150 million on its 2015 general elections, whilst its former colony and democratic imitator, Ghana, splurged $212 million on its 2016 version (the budget was in fact $278 million, but due to constraints about 40% could not be disbursed). On a cost-per-voter basis, Ghana spent about $13.5 whilst the UK spent about $3.2. Put another way, elections cost Ghana, in relative terms, more than four times what they cost the UK.
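
A quick back-of-envelope check of those per-voter figures (the electorate sizes below are merely implied by the stated totals, not independent data):

```python
# Back-of-envelope check of the per-voter election cost figures quoted above.
ghana_total, ghana_per_voter = 212e6, 13.5   # USD
uk_total,    uk_per_voter    = 150e6, 3.2    # USD

# Electorate sizes implied by the stated totals:
ghana_voters = ghana_total / ghana_per_voter   # roughly 15.7 million
uk_voters    = uk_total / uk_per_voter         # roughly 46.9 million

ratio = ghana_per_voter / uk_per_voter
print(f"Ghana: ~{ghana_voters / 1e6:.1f}m voters; UK: ~{uk_voters / 1e6:.1f}m voters")
print(f"Per-voter cost ratio (Ghana/UK): {ratio:.1f}x")
```

The implied electorate of roughly 15.7 million is consistent with Ghana’s 2016 register, and the per-voter ratio works out to about 4.2, hence “more than four times”.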

The mistrust and paranoia alluded to above have endowed African countries like Ghana with some of the most advanced electoral systems in the world. Roughly half of African countries have deployed state-of-the-art biometric technologies to prevent cheating at the polls; nearly no Western European or North American country has.

In those places where democracy has led to sound institutions, voters can rely on identification established by other state institutions. In Africa, electoral bodies and the political parties insist on a separate, watertight, identification system just for elections. Everything else can keep using the leaky, messy, systems, but not elections. Elections are holy ground that must not be profaned.

But paranoia-driven, bleeding-edge, technology is not the full story. African electoral management bodies are not the efficiency paragons their political party enablers would have them be. They are as wasteful and disorganised as many of the other atrophying democratic institutions in your typical African country. Once again, let’s use the case of Ghana.

Though the Electoral Commission (EC), Ghana’s elections management body, charges considerable filing fees when political candidates seek to stand for office, and also rents out its services to various political parties and institutions for their own internal elections, it refuses to consistently report its “internally generated funds” (the jargon in Ghana for earnings made by state institutions, usually from commercial services) for financial planning purposes. Expenditure planning datasheets used by the Ghanaian Ministry of Finance to map out spending over the medium-term horizon have in recent times sported blank spaces where the EC’s internally generated funds data should be. State auditors report spending on capital infrastructure at a level ten times what the legislature approves. They also lament the institution’s failure to submit accounts for auditing by statutory timelines.

At one point, the EC even sold access to voters’ data on the electoral roll to a private fintech startup called BSystems and promptly forgot, or so it claims, to collect the revenue.

The 3x growth in its budget from circa $65 million in 2008 to circa $200 million today is thus not all due to the “escalation in sophistication” of the electoral process as a result of paranoia. A significant proportion of the cost escalation is due to sheer waste. And nothing exemplifies that fact more than the EC’s ongoing effort to rip out the existing biometric system used for identifying voters, along with its accompanying electronic infrastructure, and spend as much as $150 million (inclusive of contingency) building a brand new one.

When the EC started its campaign, its claims seemed rather reasonable. It anchored its arguments on the apparent fact that the existing biometric system was largely obsolescent, having been implemented for the 2012 elections and used since then for two general elections and multiple district-level polls, by-elections, mop-up registration exercises, referenda, council of state elections, etc.

According to the EC, the Israeli contractor responsible for procuring key components of the biometric system – BVRs, used for voter registration, and BVDs, used for verifying voters on voting day – had informed them that all the equipment was at the end of its serviceable life and ought to be replenished and upgraded. The vendor had, according to the EC, quoted $74 million for this activity. Buying a brand-new system, on the other hand, would cost “just” $56 million.

Who can argue with such logic? If throwing away $60 million of existing equipment would lead to cost avoidance of $74 million and a replacement expense of just $56 million, then surely this makes sense? Even if doing so would also mean an additional $70 million spent registering and collecting the biometrics of all 17 million voters afresh.

Except that the whole argument betrays a mindset of such perverse wastefulness that were the account to be true, every one of the eighty-one information technology (IT) personnel on the organisation’s payroll, who shall be responsible for this new setup, would need to be whipped. And they might deserve it since previous state audit inspections have found equipment gathering dust in various stores, and an absence of even a rudimentary asset register.

In this instant case of “throwing away millions to save millions is cheaper”, though, a certain historical rite comes to mind: the Florentine Bonfire of the Vanities.

In Renaissance Florence, a Dominican friar called Savonarola emerged whose art of persuasion was so strong that he convinced some of that storied city’s most famed artists and scholars to turn over priceless works of art and scholarship to be burned in massive bonfires as acts of edification. Irreplaceable valuables, rare manuscripts and fine ware were collected from many eminent citizens and brought to the bonfire to be consumed before chanting crowds. Until, finally, the Church began to fear Savonarola’s excessive influence, and had him excommunicated and executed.

Ghana is in the throes of a similar seduction. Many watch on helplessly, and some have even been seduced by the rhetoric, as the EC prepares to rip out $60 million worth of equipment and burn it (for burn it they must, as there is no market for second-hand electoral technology).

Yet, the EC’s arguments do not pass the most basic of common-sense litmus tests, much less expert review. Those seriously interested in this topic may consider reading this document issued by IMANI on behalf of a large group of civic organisations in Ghana opposed to the EC’s plans.

This simple table below provides a snapshot of the state of the current system by focusing on the key question of wear and tear.

Fig. 1. Ballpark Asset Wear & Tear Analysis*

Biometric Voters Registration Machines (BVRs)

| Class | Baseline | Number of times used | Depreciated Value |
| --- | --- | --- | --- |
| A | $10.5 million | 8 | $5.5 million |
| B | $7 million | 4 | $5.5 million |
| C | $5.25 million | 2 | $4.75 million |

Biometric Voters Verification Devices (BVDs)

| Class | Baseline | Number of times used | Depreciated Value |
| --- | --- | --- | --- |
| A | $12 million | 10 | $4 million |
| B | $4.2 million | 4 | $3 million |
| C | $5.25 million | 2 | $4 million |
| D | $5.25 million | 0 | $5 million |
*For ease of analysis, the software and communications peripherals (such as ground satellites) also costing millions of dollars and scheduled for destruction were not considered in this analysis. Thus, not all the $60 million worth of assets to be “bonfired” are represented in this table.
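
For convenience, the totals implied by the table can be tallied as follows (figures taken from the table only; software and peripherals excluded, as noted):

```python
# Totals implied by the wear-and-tear table above.
# Each class maps to (baseline value in $m, times used, depreciated value in $m).
bvr = {"A": (10.5, 8, 5.5), "B": (7.0, 4, 5.5), "C": (5.25, 2, 4.75)}
bvd = {"A": (12.0, 10, 4.0), "B": (4.2, 4, 3.0),
       "C": (5.25, 2, 4.0), "D": (5.25, 0, 5.0)}

def totals(classes):
    baseline    = sum(b for b, _, _ in classes.values())
    depreciated = sum(d for _, _, d in classes.values())
    return baseline, depreciated

for name, classes in (("BVRs", bvr), ("BVDs", bvd)):
    b, d = totals(classes)
    print(f"{name}: baseline ${b:.2f}m, depreciated value ${d:.2f}m")
```

In other words, of roughly $49 million of baseline asset value shown in the table, over $31 million of residual value would be destroyed in the “bonfire”.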

One analogical framework that has brought this fairly complex technical subject home to people is referring to the BVRs as “laptops with fingerprint sensors and webcams” and the BVDs as mobile phones or POS payment terminals. When people are then asked to imagine the wear-and-tear rate of such devices used only once or twice a year, their eyes light up, the scales fall away, and the con suddenly becomes very clear. The pictures below should make visualisation even easier.

A Biometric Voters’ Registration Kit (BVR)

Credit: Laxton Group

A Biometric Voters’ Verification Device (BVD)

Credit: Isidore Kafui Dorpenyo

The trick to understanding the con is realising that the BVRs and BVDs were not all supplied in 2011 or 2012 and that not all of them are used for each registration or voting exercise.

The mid-term registration programs (called “limited registration” in Ghana) for instance typically require half of the overall portfolio of BVRs and there is usually an excess margin of about 15,000 BVDs that are not deployed during even general elections, not to talk of smaller-scale elections such as the Council of State elections or by-elections. Meanwhile, about 60% of the system dates from just 2016. Some equipment has, in fact, never before been used.

Furthermore, continuous purchase of fresh equipment from the original Dutch vendors, HSB, has always involved an upgrade of the auxiliary software both on the devices as well as in the datacentre, including the critical fingerprint matching technology, known as the ABIS, supplied by another Dutch vendor, Genkey.

Thus, as is amply clear from the table, it can easily be shown that millions of dollars’ worth of equipment has been used much too infrequently to be damaged beyond repair. And many of the devices were bought too recently to be end-of-lifecycle or incapable of being refurbished except at cutthroat costs.

If the EC were to continue buying replacements for only truly worn out devices, it would also avoid the $70 million it intends to splurge on fresh biometric enrollment of 17 million voters plus many other middleware components.

By buying an extra 3000 BVRs and 10,000 BVDs in a truly transparent and competitive tender, in continuance of the current continuous-upgrading regime, it shall be entitled to an upgrade of the auxiliary software as well. Its total expenditure shall be in the region of $15 million (throwing in datacentre and communications systems upgrades) and not the $150 million it plans to spend.

The EC commissioned an audit of the biometric system in 2016, the only time it has done so. The findings provide no support whatsoever for the obsolescence theory. In fact, the EC’s current claims about how much it will cost to refurbish or upgrade the system are all based on what it claims it was told by a self-interested contractor whose agreement with the EC had been abrogated a year ago. No other independent audit of the biometric system has been conducted since 2016. Requests for even internal equipment audit reports that recommend this drastic, wholesale, replacement action are met by blank stares and stonewalling by the EC.

The costs related to the complete replacement of the system, the EC’s preferred course of action, on the other hand, seem to have emerged from a shadowy tender that could qualify as the most opaque ever conducted in the world. The hints the EC has given about likely pricing following the sham tender ($3000 for a BVR and $400 for a BVD) suggest costs almost double what countries like Zimbabwe, Kenya and Nigeria have been able to obtain in the market in recent years for one component or the other.

Very untypically for a public tender, neither the Expression of Interest nor the Request for Proposal was published on the EC’s website or exposed to the press. (Mysteriously, the EC’s website went down on January 12th, as soon as the controversy over its plans to build a new register and biometric system flared up again, and has since not come back up.) Despite the tender having closed in April 2019, not a single press release has been issued by the EC to provide information on the bidders, the longlist, the shortlist, or the winning bids. Naturally, no evaluation reports have been seen.

The EC’s claims of being motivated by savings are thus as ephemeral as the purported divine benefits of Savonarola’s famed bonfires.

The Bank of Ghana (BoG) on November 29th, 2019, released a document justifying the need for the introduction of “higher-value denomination” currency (HVD) notes.

The question of whether HVD notes are warranted at any point in time is not a merely conceptual one. It is, in fact, a highly empirical enquiry to be approached from a careful analysis of considerable amounts of data about the proportional use of different notes and the differential costs of security and distribution. One cannot have random opinions about such a subject and expect to be taken seriously.

I don’t have any data on the contemporary usage patterns of the different Ghana Cedi (GHS) notes in circulation or about the costs of printing different notes. Like most Ghanaians, therefore, I did not have too much difficulty delegating all thinking about the issue to the fine technocrats at the central bank. Until now.

A bunch of controversies arising from the refusal of major supermarkets and even banks to accept the new notes, because they lack the means to validate their authenticity, compelled me to finally take a look at the BoG’s explanatory documents today. After all, item authentication has been a decade-and-a-half interest of mine.

Having now read the BoG document and subsequent BoG press releases on the subject, I am suddenly completely unsure of my earlier faith in the central bank to do all our thinking for us in this matter. I will simply lay out the worrying things I found and let readers of this blog judge for themselves.

  1. The most problematic element of the BoG’s analysis is the central reason it offers for introducing the HVD notes. In the very first paragraph of the original Q&A-formatted release, it states as follows:

 High levels of inflation and currency depreciation in the past have eroded some of the gains from redenomination. The deadweight burden, reflected in high transaction cost has re-emerged.

In this one statement is contained all the alarm bells about the depth of technical preparation that should have gone into this exercise.

    • Except in cases of hyperinflation and hyper-depreciation requiring a new currency series, high-value denomination notes are almost never justified by the need to counter or offset routine inflationary pressures over time. For example, when the $100 bill was first introduced in the US in 1869 (and reintroduced as a federal reserve note in 1914), it was for a time the highest in circulation. In due course, larger denominations surfaced, mostly as a result of wartime and other contingent exigencies. By 1946, a firm decision to stop issuing HVD notes had been taken and, by 1969, all notes higher than $100 were being removed from active circulation to reduce the costs of fighting counterfeiting and money laundering. Today, the $100 bill is worth only $4 if measured against its original value, as a result of cumulative inflation. It remains the highest denomination in circulation purely in keeping with contemporary policy. An alternative policy of trying to preserve the face value of the $100 note at the time of the 1969 currency reforms would have required the introduction of $1000 and $2000 bills in the United States, something that cannot be countenanced in today’s anti-terror and anti-narcotics climate.
    •  It is easier to understand the argument when one recalls that in a floating exchange regime, currencies can rise and fall over time. Is the logic here then that should the Ghana cedi strengthen against the USD consistently over time, that would dictate the retirement of the largest of the HVD notes?
    • Things are even starker in a managed exchange rate regime. Consider this fact. By the time of the 1984 budget, the official cedi – USD exchange rate was 35 cedis to the dollar. Governor J.S. Addo lowered the value to 38.5 cedis to 1 dollar, in a conclusive reversal of the last peg from 1978 (of 2.75 cedis to the dollar) then in effect. Yet, ahead of the latest round of managed devaluations in 1982, the highest note, the 50 cedi note, had been removed from circulation (or “demonetised”) ostensibly as an anti-corruption measure (echoing an earlier currency confiscation in 1979).
    • When in 1984 the HVD notes of 50, 100, and 200 cedis were introduced (or, in the case of the 50 cedi note, reintroduced), with no serious explanation as to why fears of corruption were no longer an issue, the effective worth of the highest-value note was officially $5. By the time of the 1985 budget, it was $3.7. More interestingly, the parallel unofficial (in actual fact, market) rate was about 156 cedis/USD, suggesting that the 50 cedi note that just 10 years earlier had been nominally worth $50 was by then worth just 32 US cents ($0.32), whilst the highest circulating note was worth just $1.28. In short, the history of “face value preservation” tactics and politics in Ghana betrays a ridiculous mixture of arbitrariness and confusion. Trying to bulwark the face value of Ghana’s benighted national currency notes against depreciation and inflation trends is clearly to court absurdity, and also distrust, in view of this messy track record.
    • At any rate, the historical record clearly shows that Ghana’s highest-value notes have for the most part generally exchanged for low dollar amounts. In the 1985 to 1990 period, the 500 cedi note moved from $5.5 to $1.5, peak to trough. In the 1990 to 2000 period, the 10000 and 20000 notes emerged as the highest-value notes, yet at their strongest they were about $1.4 and $2.7 respectively.
    • The dramatic change of affairs represented by the introduction of the 50 Ghana cedi note in July 2007, with its debut value of $54 marking the high point of Ghanaian currency face-value politics in 30 years, was a break with the past precisely because it also marked the start of a new series, effectively a new currency. Such an exceptional development cannot set a precedent for the routine preservation of the currency’s purchasing strength at some arbitrary USD rate by printing larger and larger notes to approximate the continuous appreciation of the highest notes’ nominal value(s).
    • By the time of the introduction of the 100 and 200 Ghana cedi notes in late 2019, the value of the 50 Ghana cedi note in USD terms was about $9. In comparison with other HVD notes in Ghana’s history, adjusting for both US and local inflation, this amount, as has already been shown above, was big enough.
    • The largest Nigerian Naira note exchanges today for $2.75. The largest Kenyan Shilling note exchanges for $9.91. The largest South African Rand note exchanges for $13.8. By the time the Government of India removed HVD notes from circulation due to purported fears about money laundering and crime, the highest Rupee note was exchanging for $14. Simply put, economies comparable to Ghana’s in various characteristics have HVD notes well within the value range of Ghana’s last version, the GHS50 note.
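The face-value arithmetic in the first bullet above can be restated in a few lines. This is a minimal sketch using only the figures quoted in the text (the ~156 cedis/USD parallel rate of 1985); no outside data is assumed.

```python
# Restatement of the face-value arithmetic above, using the 1985
# parallel-market rate of ~156 cedis/USD quoted in the text.
def usd_face_value(note_face, cedis_per_usd):
    """USD worth of a cedi note at a given exchange rate."""
    return note_face / cedis_per_usd

print(round(usd_face_value(50, 156), 2))   # 0.32 -> the "32 US cents"
print(round(usd_face_value(200, 156), 2))  # 1.28 -> the highest note's worth
```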

2. The BoG makes plausibly persuasive points about an expansion of incomes leading to an increase in preference for the then highest-value notes, the GHS20 and the GHS50. In the European common market, for instance, it is the 50 euro note, not the higher-value 100 and 200 euro notes (or the 500 euro note, which is being demonetised over the usual concerns), that is in widest circulation. In the US, the highest-circulation bill is the $20, whilst the highest-value bill ($100) is mostly preferred outside the US. Thus, the preference for higher-value notes in Ghana is a significant point. However, the point as it was put in the BoG release, that “GHC50 and GHC20 account for about 70% of the total demand”, is completely vague. Is this “demand” being expressed in monetary value or in quantity terms? If in value terms, then it is completely unremarkable: since the face values of these notes are several multiples of those of the lower denominations, their quantity could well be tiny and still constitute 70% of usage. In such a scenario, the need to print larger quantities would be far from clear, and user preference may well tilt to the middle portion of the value spectrum, as is the case in many other countries.
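The ambiguity flagged above can be made concrete with a toy example. The note counts below are invented purely for illustration; they are not BoG figures.

```python
# Toy illustration of why "70% of total demand" is ambiguous.
# The counts below are HYPOTHETICAL, not BoG data.
notes = {1: 500, 2: 400, 5: 300, 10: 250, 20: 200, 50: 100}  # face value: count

total_count = sum(notes.values())
total_value = sum(face * count for face, count in notes.items())

top_count = notes[20] + notes[50]
top_value = 20 * notes[20] + 50 * notes[50]

# The two highest denominations dominate by VALUE (~63%) while being a
# small minority by QUANTITY (~17%): the same headline share could
# describe either situation.
print(round(top_value / total_value, 2), round(top_count / total_count, 2))
```

Under these invented counts, a "70%-style" value share coexists with a sub-20% quantity share, which is exactly why the BoG's phrasing settles nothing about how many notes need printing.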

3. It is important in connection with the above point to note that convenience is affected both by the concern about carrying large sums of low-value notes and by the almost converse concern about finding change when counterparties pay for goods and services with large notes. Thus, preference is an entirely empirical matter and rather technically complex to compute. If the BoG wished to educate the public in this matter by releasing that statement, it ought to have been clearer.

4. Most strangely, the BoG chose to provide none of the critical information that would have best assisted curious persons in evaluating the propriety of its decision to introduce high-value denominations against the global trend of demonetising high-value notes. At the heart of any sound analysis would be a trade-off between printing less money (and the concomitant result of users carrying less money) and creating highly tempting targets for crime.

The cost of printing higher quantities of money in the face of a rising average size of transactions in the economy reduces the seigniorage revenue of the central bank. Thus, if the average size of transactions moves upward, it makes sense to introduce more HVD notes. At the same time, such notes are more expensive than lower-value notes because they require more security features to prevent counterfeiting and more policing to suppress money laundering. For example, in the US, the most expensive notes (such as the $50 bill) can be as much as four times (4x) costlier to print than the cheapest ones (such as the $1 bill). The right balance between transactional convenience and policing cost is, as always, entirely empirical. By refusing to provide a breakdown of the costs of printing the different bills and the quantities in circulation, as well as the average velocity per class and other critical monetary parameters, the BoG shows a complete lack of interest in helping analysts come to any sensible conclusions about the need for these new HVD notes.

5. The weirdest of all the claims in the BoG document is the assertion that, somehow, printing high value notes will reduce the “deadweight burden” associated with current transaction costs. This claim is manifestly erroneous. There is no welfare loss context here to even warrant use of the “deadweight burden” term.

6. Lastly, we live in a time of great cynicism about the procurement practices of governments. The introduction of a new class of currency notes represents a potentially major procurement opportunity for agents and representatives of the mints and printing presses in Europe and America with whom the Republic of Ghana deals on these matters. Where the new bills are significantly more expensive than older bills, commissions may be in order, opening the door to unpleasant allegations about the real motives impelling the procurement action. It would have been reassuring had the BoG provided information about the procurement terms of this new production, whether by De La Rue, the loss-making, financially struggling security printing firm, or Crane Currency, the controversy-plagued contractor now embroiled in corruption allegations in Liberia.

The decision to shroud all these important matters in silence, including even the name of the printer/mint, is doing very little to dispel lingering doubts and confusion about this whole HVD notes printing business.

My friend and colleague, Selorm Branttie, forwarded to me snippets of a Facebook thread about my recent comment on the Year of Return; he had been tagged in the thread and expressly asked to bring it to my attention.

My inclination was to ignore the thread, for obvious reasons.

First, it is Christmas for Christ’s sake! Second, I have had to switch off Facebook because it was taking too much of my time. Facebook is designed to compel conversation. I simply don’t have hours in the day to spare for casual banter anymore; the daily hustle has made sure of that. Twitter, which I have been using more and more, is, on the other hand, a broadcast medium that allows me to disseminate random thoughts with far less expenditure of time. Lastly, I recognised the key discussants on the thread as strong partisans of the ruling party in Ghana with whom I share a bit of a history.

In the past it has been virtually impossible to have a sincere debate about anything since they seem sworn to see any dissent from positions of the ruling NPP government as motivated by “cynicism” and “ill will”. In my brief and occasional dealings with such people, sincere and mutually respectful discussion of facts and figures consistently degenerated into motive-questioning and other such base exchanges.

Frankly, I have neither the time nor the ability to understand what it is that makes some people so loyal to political parties to the point of losing all capacity for objectivity, but, well, we live in a complex world.

I have decided to respond to the Facebook thread for only one reason: a debate over data is always worth having in the peculiar circumstances of Ghana’s development.

Having chosen to mount what amounts to a mini-campaign about the issue of the Government’s reliance on bad data to appraise the generally successful Year of Return initiative, I feel obliged to address reactions, even if they are accompanied by insults accusing me of “cynicism” and statistical ignorance, so long as they present themselves as offering counter evidence.

The “counter evidence” presented in the said Facebook thread was along these lines:

  1. In the thread, I was accused of presenting data covering all “international arrivals” instead of just “tourist arrivals”.
  2. My data is thus, according to this framing, a superset. The more relevant subset of tourist arrivals is much smaller. The data that I had culled from the CEIC and IATA datasets, and which was corroborated by Ghanaian Civil Aviation Authority disclosures (some of which are openly available on the Ghana Airports Company website), could thus, in the view of my Facebook accusers, not be relied upon since not all international visitors are tourists.
  3. If the smaller tourist arrivals dataset is used, the claims by Government functionaries that $1.9 billion has been generated from the Year of Return would be justified.

This is the coherent part of the argument. How exactly the exponent of this theory jumps from these premises to satisfy the requirement of showing that $1.9 billion has been generated by the Year of Return initiative is both murky and confused.

Here is an extract from the thread so that the reader can judge for herself:

At the beginning of 2019, the Ghana Tourist Authority (GTA) projected that they were expecting 150,000 more tourists from the African diaspora, thus making 2019 tourist number 500,000 compared to 350,000 of 2018. They estimated total spending by tourists to be about $925m.

Then in September 2019, with arrival data, the actual number was calculated at over 750,000, surpassing the 500,000. Based on the rate of arrivals, and the peaking of activities in December, they estimated 1 million tourists by year end. Note that due to the doubling of the numbers, the expected spending by tourists, also doubled to $1.9bn.

Recall that my argument was very simple:

  1. In 2018, international arrivals were about 984,000.
  2. In 2019, the most optimistic projection suggests 1 million arrivals.
  3. The most optimistic allotment of any increase in arrivals due to the Year of Return cannot thus exceed 15,000 extra visitors in 2019.
  4. Because we have a reasonably strong estimate of average spending per visitor (about $1800 in 2018), we can confidently project an additional measurable revenue gain of $30 million if we use average spending of $2000 per visitor. This of course does not account for any qualitative gains, which are almost impossible to measure in this case due to paucity of data.
  5. Based on both the mean rate of arrivals for the last decade and the mean rate of projected arrivals for the decade after 2017, we have no evidence that the Year of Return has boosted arrival numbers beyond the mere increase in arrivals recorded in 2019 over 2018. And even that increase counts as a boost only to the extent that the growth rate exceeds the average growth rate in visits over the last couple of years, which, sadly, it does not.
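The arithmetic in points 1 to 4 above can be restated in a few lines. This sketch uses only figures from the argument itself (the 2018 arrivals tally and the padded $2,000 average spend); nothing here is new data.

```python
# Restating the post's own arithmetic: 2018 arrivals, the most
# optimistic 2019 projection, and the padded ~$2,000 average spend.
arrivals_2018 = 984_250
arrivals_2019_projected = 1_000_000   # most optimistic projection
avg_spend_per_visitor = 2_000         # padded from the ~$1,800 estimate

extra_visitors = arrivals_2019_projected - arrivals_2018
extra_revenue = extra_visitors * avg_spend_per_visitor
print(extra_visitors)  # 15750
print(extra_revenue)   # 31500000, i.e. roughly the $30 million ceiling
```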

The counter-argument being canvassed in the Facebook thread is that we need to use figures preferred by the Ghana Tourism authorities and that if we do we shall record 650,000 extra or additional visitors in 2019 over the 2018 figure. Presumably, the interlocutor has no objection to the use of a spend-per-tourist figure closer to the $2000 number, which should give us $1.3 billion extra yield supposedly attributable to the Year of Return. Still, no $1.9 billion in sight though.

Except that all of this seeming “analysis” is actually empty of both research value and statistical fidelity. The process of determining whether an intervention has had an “effect” is a very elementary one, taught in even the most basic introductory statistics course. It is standard “hypothesis testing”. In a situation such as this one, where the means and variances are all so well known due to near-complete historical data, there is hardly any need for the kind of quibbling seen on that thread. Which brings us to the central point: data integrity and validity.
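For readers unfamiliar with the procedure, here is a minimal sketch of such a test. The historical growth rates below are hypothetical placeholders chosen only to illustrate the method; they are not actual Ghana arrivals data.

```python
# Minimal sketch of the standard test alluded to above. The growth
# rates here are HYPOTHETICAL placeholders, not real arrivals data.
from statistics import mean, stdev

historical_growth = [0.08, 0.11, 0.09, 0.12, 0.10, 0.09, 0.11, 0.10]
growth_2019 = 0.016  # e.g. ~16,000 extra visitors on a ~984,000 base

mu, sigma = mean(historical_growth), stdev(historical_growth)
z_score = (growth_2019 - mu) / sigma

# A strongly negative z-score means 2019 growth sits far BELOW the
# historical trend: no evidence of an intervention-driven boost.
print(round(z_score, 2))
```

The point of the exercise is simply that a year's growth must be compared against the historical distribution of growth rates before any "effect" can be claimed.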

If indeed the tourist subset is about a third of the set of total arrivals, such that we had 350,000 tourists out of the 984,000 arrivals in 2018, then one can only expect 2019 to record total arrivals in excess of 3 million for the tourist component alone to hit the 1 million mark, as canvassed by our Facebook commentator. A completely absurd projection that is at variance with several Government projections.

In fact, there is no logic in holding that the arrivals data I used is the superset and that the Tourism authorities in Ghana have higher-resolution subset data that can be used to home in on tourist numbers with better precision, simply because such a supposition is both superfluous and easily verified to be false.

Here is the Tourism Ministry’s own set of data available from page 22 of its latest strategic plan:


This data is actually of lower resolution, and includes land border arrivals that are very difficult to process due to the high translocalism of intra-African borders. My decision to focus on data that can be corroborated and triangulated using international aviation statistics was driven by this very fact: that data prunes non-visitor noise wherever possible.

But all that is beside the point really. The most important point here is that this data does not in any way vindicate any of the strange and statistically perverse claims made by our interlocutor on the Facebook thread. The Ministry’s tourist arrivals figure for 2016, the most recent year for which it had an actual tally, is 1,202,200. Its projected figure for 2018 is 1,454,700.

Average visitor spend, according to the Ministry, for 2016, the latest year for which it had complete data, was $1,890, virtually identical to the Ministry of Finance estimate of $1,800 for 2018.

It is important to bear in mind that this average spending amount is comprehensive. To focus solely on spending on tourism activities, strictly speaking, would be to considerably shrink tourism receipt numbers since direct spending on pure tourism in Ghana is incredibly low. According to the Ghana Statistical Service’s “Trends in the Tourism Market in Ghana 2005 – 2014” report (page 15), visitor spending adjusted for inflation has actually been falling for some time:

Visitor arrivals to selected major tourist sites rose from 381,600 to 592,300 over the decade. Real revenue also rose marginally from GHȼ490,000 to GHȼ492,000 over the same period with average spend per arrival falling from 1.3 GH¢ to 0.8 GH¢. Both arrivals and real revenue grew positively in the first half of the period, at an annual average growth rate of 10.7% and 6.1% respectively, while during the second half, both categories fell every year on average by -1% and -3.3% respectively.

We are seriously talking about total earnings in the hundreds of thousands of dollars for the whole country here, and average spending at tourist sites of about a dollar, sustained over the course of a full decade. One has to employ a broader definition of visitor spending for tourism revenues to make any sort of statistical sense, which points to the absurdity of trying to use a narrower, lower-resolution specification for defining “tourist arrivals”.

So, where did our commentator on Facebook find his numbers? Surely not from the Tourism Authorities, despite purporting to be filtering out tourism data. How did he construct his hypothesis that the “tourist arrivals” number used by the authorities is lower than the “international arrivals” number widely used by global observers and by the Ministry of Finance to compute tourism revenues in Ghana? Surely not from any reputable or defensible data source.

But even if we are to use the lower-resolution, looser dataset employed by the Tourism Ministry for strategic planning, the fact remains that there is no mechanism to “extract” this strange $1.9 billion figure through any statistical manipulation known to science.

For instance, we can use the 1.45 million arrivals projection for 2018 (as made by the Tourism Ministry in 2017 in the table reproduced above) and also use the most aggressive projection for arrivals in 2019, which is 1.5 million, and we will still only be able to come up with $100 million as the level of tourist receipts in 2019 in excess of the figure recorded in 2018. But we would then need to address the hurdle of this “increase” being lower than previous year-on-year increases. For example, the last three years have seen arrivals growth rates of about 10% per annum. The apparent increase in numbers in 2019 compared to 2018 would, however, if we are to benchmark against the Ministry’s data, be less than 7%, suggesting lower growth in 2019 than the trend over the last couple of years.
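The comparison just made can be restated numerically. This sketch uses only the figures cited above (the 1.45 million projection for 2018, the aggressive 1.5 million projection for 2019, and a ~$2,000 spend per visitor); no new data is introduced.

```python
# Restating the ceiling computation above with the Ministry's figures.
arrivals_2018_projected = 1_450_000   # Ministry projection for 2018
arrivals_2019_aggressive = 1_500_000  # most aggressive 2019 projection
avg_spend = 2_000                     # ~$2,000 per visitor

extra_visitors = arrivals_2019_aggressive - arrivals_2018_projected
extra_receipts = extra_visitors * avg_spend
growth_pct = extra_visitors / arrivals_2018_projected * 100

print(extra_receipts)        # 100000000 -> the ~$100 million ceiling
print(round(growth_pct, 1))  # 3.4 -> well below the ~10% p.a. trend
```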

In short, rather than contribute to the debate in a sincere effort to enhance policymaking in Ghana through a promotion of facts, figures, data and analysis, the Facebook thread I referred to earlier was primarily concerned with pushing a partisan agenda, and a partisan agenda alone.

From what I can see on social media, many Ghanaians are now completely tired of this inability of politically engaged people to rise above petty partisan squabbling. I share in this frustration.

Ghanaian officials are basking in glory and ecstasy following a very successful marketing and branding campaign for the “Year of Return”, an initiative of the President of Ghana marking a major milestone in the sordid history of black slavery.

So far, impressive feats of national showcasing are there for all to see: tons of positive international press, great mentions and fabulous celebrity endorsements, most of it at no cost whatsoever to Ghana.

So why is Ghanaian officialdom so keen on selling the success of the initiative on the tabletop of incoherent statistics and woolly numbers instead of better cataloguing these clear achievements? It is a very strange sight to behold. Is this sad spate of fuzzy arithmetic just another example of how as a country Ghana struggles to master data-driven policymaking or is this an isolated case of mere overexuberance?

I know that no malice is intended, but before I am pummeled to pulp for being a killjoy, let me hasten to point out that sound data is important for drawing accurate inferences.

Unfortunately, various government agencies and supporters of the Year of Return program have bandied figures such as “200,000” extra arrivals, “1.5 million” total visitors and “$1.9 billion” extra tourist spending as measurements of outcomes related to the Year of Return with zero commitment to using actual, widely available, statistical data.

This means that instead of focusing on what so spectacularly went well – the brilliant coopting of African American celebrities like Steve Harvey as informal brand ambassadors – we shall soon be luxuriating in fictitious numbers bearing no resemblance to reality.

Here is the data we do have. As at the last count, 750,000 international visitors had made their way to Ghana in 2019. The Authorities are projecting total arrivals for the year to hit 1 million. This is, however, doubtful considering the proximity to year-end.

But even if the numbers do hit 1 million, that would only mean a tiny fraction more than the 984,250 visitors who showed up in 2018, in fact a mere 15,750 more.

According to Ministry of Finance computations, average spending per tourist was $1512 in 2014, rising to roughly $1800 in 2018. Let’s pad this to $2000; though with Cedi exchange rate depreciation outstripping inflation, foreigners should actually find Ghana about 5% cheaper than last year and might spend less. Be that as it may, the “extra spending” that could conceivably be attributed to an increase in arrivals due to the Year of Return (if the projected 1 million visitors estimate holds up) would amount to about $30 million in this scenario.

By what conceivable mechanism can a $30 million optimistic projection mutate into $1.9 billion?

As already hinted, growth in tourist numbers in 2019 may well be below the average 3% per year rate seen over the last couple of years (and certainly below the 5.1% annual growth rate trend experts have projected between 2017 and 2027). There is further grounding for such speculation in the author’s estimate of a rise in hotel and short-stay apartment rooms inventory and a fall in average room rates based on an analysis of several weeks of (a major travel site) data.

But all this should really be beside the point, since no data-conscious person would insist that every successful marketing exercise must bear fruit even whilst it is still underway. There is almost always a time lag before results materialise. The danger in elevating phantom figures to the level of truth is in the complacency they can breed: instead of girding our loins to build on this successful marketing exercise and translating the increased awareness about Ghana and its enduring international goodwill into tangible tourism gains, we would declare victory on all fronts, relying on shaky, unchallenged numbers, and then promptly relapse into business as usual.

The only reason, therefore, for sounding the alarm about these widely publicized and widely believed numbers is the wish to forestall such a bad outcome and to motivate the authorities to see their successful marketing and communications strategies as merely the foundation on which to erect a truly effective sales plan for Ghana’s tourism and investment climate potential.

And we absolutely have their back.