The Chinese government’s plans to introduce a “social credit” scheme, rating and ranking the behaviour and conduct of its citizens far beyond their financial circumstances (the current focus of Western-created “credit scoring” systems), have predictably rattled observers.

One journalist summed up the situation starkly:

“The Communist Party’s plan is for every one of its 1.4 billion citizens to be at the whim of a dystopian social credit system, and it’s on track to be fully operational by the year 2020.”[i]

Many of the discussions have followed similar lines, focusing on the harrowing implications of such an intrusive state-run machine for individual freedoms and the right to privacy.

What has been less investigated is the essential structure of the social algorithms required to achieve the objectives of the Chinese government, and in particular the tensions between technical efficiency and political economy when mass surveillance is devolved to machine power and incorporated into social-behavioural systems in the presence of capitalist incentives.

Few treatments of the notion of “sovereign privacy” give it any respect. Yet there are framings of the “state secrecy” question that go beyond mere necessity (especially in such contexts as “law enforcement” and “national security”).

LSE’s Andrew Murray provides an interesting angle in his brief 2011 take on transparency:

“All bodies corporate (be they private or public) are in fact organisms made up of thousands, or even tens of thousands, of decision makers; individuals who collectively form the ‘brain’ of the organisation. The problem is that individuals need space to make decisions free from scrutiny, or else they are likely to make a rushed or panicked decision.”[ii]

When viewed as a “hive” of personnel insecurities, biases, errors, stereotypes, ambitions and proclivities, central governments emerge from the monolithic pyramid we tend to envisage atop the panopticon of general surveillance and descend onto a more examinable stage, where their foibles, miscalculations and misdiagnoses can also receive useful attention.

Because the Communist Party’s 90 million members are integral to its overall structure, its social management policies rely greatly on their ability to participate and contribute.[iii]

Many of the 20 million people who work in the 49,000-plus state enterprises, especially from middle management up, are fully paid-up members of the Party. Some estimates put the share of the country’s 2 million press and online censors who belong to the Party at 90%. Last year, the last barrier between the Party and command at all levels of state paramilitary and security institutions was removed, bringing an even larger number of non-career security commissars into both operational and oversight positions.

Such broad-based participation in the “social management strategy” might at first sight appear to favour the decentralised nature of social credit-based control. The only problem with that view is that the strategists behind the scheme see it in purgatorial terms:

“The main problems that exist include: a credit investigation system that covers all of society has not yet been formed, credit records of the members of society are gravely flawed, incentive mechanisms to encourage keeping trust and punishments for breaking trust are incomplete, trust-keeping is insufficiently rewarded, the costs of breaking trust tend to be low; credit services markets are not developed, service systems are immature, there are no norms for service activities, the credibility of service bodies is insufficient, and the mechanisms to protect the rights and interests of credit information subjects are flawed; the social consciousness of sincerity and credit levels tend to be low, and a social atmosphere in which agreements are honoured and trust is honestly kept has not yet been shaped, especially grave production safety accidents, food and drug security incidents happen from time to time, commercial swindles, production and sales of counterfeit products, tax evasion, fraudulent financial claims, academic impropriety and other such phenomena cannot be stopped in spite of repeated bans, there is still a certain difference between the extent of sincerity in government affairs and judicial credibility, and the expectations of the popular masses.”[iv]

The goal is as much about moral self-policing as it is about social control. Self-policing inevitably induces low-intensity and highly diffuse factionalism and clique politics.

Chinese observers certainly understand the critical factor of power-play in these circumstances, as is obvious from the following passage by the PhD student Samantha Hoffman:

“The first is the struggle for power within the Party. The Party members in charge of day-to-day implementation of social management are also responsible to the Party. As the systems were being enabled in the early 2000s, these agencies had a large amount of relatively unregulated power. The age-old problem of an authoritarian system is that security services require substantial power in order to secure the leadership’s authority. The same resources enabling management of the Party-society relationship can be abused by Party members and used against other[s] within the Party (War on the Rocks, July 18, 2016). This appears to be the case with Zhou Yongkang, Bo Xilai, and others ahead of the 18th Party Congress. The problem will not disappear in a Leninist system, which [is] not subject to external checks and balances. And it is why ensuring loyalty is a major part of the management of the party side of “state security”.”[v]

But Hoffman and many like her misconstrue the implications of fragmented trust for social credit-based control.

Complex social algorithms over time start to amplify signals that their makers do not fully understand and cannot control in advance. We have seen this many times with even much simpler systems like Facebook, Twitter and Instagram, whose operators have extremely narrow objectives: maximising attention retention to attract advertisers.

In a system designed to compel conformance with ideal criteria and yet dependent on large numbers of participants to shape those criteria, deviance can easily become more prominent once algorithms start to reinforce once-latent patterns. Whether it is preening on Facebook or bullying on Twitter, there is a fundamental logic in all simple systems trying to mould complex behaviours, and this logic tends to accentuate deviancy because algorithms are signal-searching.
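
A toy simulation makes the logic concrete (the update rule and parameters below are invented for illustration, not drawn from any documented scoring system): give a scorer even a slight bias toward statistically distinctive behaviour, and a faint pattern of deviance comes to dominate the signal within a few rounds.

    # Toy model: a signal-seeking scorer amplifies an initially faint pattern.
    # All parameters are illustrative assumptions, not documented system values.
    import random

    random.seed(42)
    # 1,000 users; roughly 5% carry a faint "deviant" behavioural signal.
    users = [{"deviant": random.random() < 0.05, "visibility": 1.0}
             for _ in range(1000)]

    for round_ in range(10):
        # The scorer promotes whatever is statistically distinctive, so
        # distinctive (deviant) users gain visibility faster than conformers.
        for u in users:
            gain = 1.5 if u["deviant"] else 1.02
            u["visibility"] *= gain
        total = sum(u["visibility"] for u in users)
        deviant_share = sum(u["visibility"] for u in users if u["deviant"]) / total
        print(f"round {round_ + 1}: deviant share of attention = {deviant_share:.1%}")

Run it and the deviant 5% commands well over half of all “attention” by the tenth round: amplification without any change in underlying behaviour.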

This is where the “sovereign privacy” point comes in. A state like China seeks inscrutability. It also seeks harmony of purpose. Social algorithms, by contrast, tend to surface hidden patterns and concentrate attention. The time-lag involved renders tweaking the algorithms in advance for specified ends highly unreliable; very often, the operator is reduced to reacting to whatever trends surface. The danger of rampant “leaking” of intentions and officially inadmissible trends rises exponentially as the nodes in the system – financial, political, social, economic, psychological, etc. – increase. The “transparency” that results from the inadvertent disrobing of the intents of millions of Chinese state actors does not have to be the kind that simply forces the withdrawal of official propaganda positions. It can also be the kind that reveals which steps they are taking to regain control of the social management system.

The problem is somewhat philosophical. Right now, membership in the Communist Party and public conformance with the creed is non-revelatory. Integrating multiple “real behaviour” nodes to compel “sincerity”, as is the official goal of the program, could immediately endanger the status of tens of millions of until-that-moment perfectly loyal cadres and enforcers of moral loyalty. The proper political economy response, at least in the transition stages, is to flatten the sensitivity of the algorithms. Doing so, however, removes the very efficiency that makes the algorithms more effective than the current “manual” system of managing social conformity.
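
A hypothetical sketch of that “flattening” trade-off (the numbers and the compression rule are illustrative assumptions, not any real system’s design): compress raw scores toward the mean and the revealing outliers stop standing out, but so does everything else the algorithm was built to detect.

    # Hypothetical illustration of "flattening" a scoring algorithm's sensitivity.
    def flatten(scores, alpha):
        """Compress scores toward the mean; alpha=1 keeps raw sensitivity,
        alpha=0 makes everyone indistinguishable."""
        mean = sum(scores) / len(scores)
        return [mean + alpha * (s - mean) for s in scores]

    raw = [12.0, 55.0, 58.0, 61.0, 97.0]   # two revealing outliers at the tails
    for alpha in (1.0, 0.5, 0.1):
        flat = flatten(raw, alpha)
        spread = max(flat) - min(flat)
        print(f"alpha={alpha}: scores={[round(s, 1) for s in flat]} spread={spread:.1f}")
    # As alpha falls, the outliers stop standing out, and so does everything else.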

Unfortunately, such efficiency would render redundant large swathes of the current order. Which in turn means that lower levels of the control pyramid have very little incentive to provide complete data. Highly clumpy data exacerbates algorithmic divergence from other aspects of social reality (in the same way that Twitter fuels political partisanship in America rather than merely reporting it) and prompts “re-interpretations” of the results churned out by the system. Over time, the system itself begins to need heavy manual policing. The super-elite start to distrust it. Paranoia about the actions of their technocratic underlings grows in tandem, along with dark fears of a “Frankenstein revolt”.
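
A toy illustration of the clumpy-data problem (all figures invented for this sketch): if lower levels suppress the very records that reflect badly on them, the system’s apparent reality diverges sharply from ground truth, in a predictable direction.

    # Toy illustration: selective under-reporting skews the aggregate picture.
    # The figures are invented for this sketch.
    import random

    random.seed(7)
    # Ground truth: roughly 30% of 10,000 incident records are "unfavourable".
    records = [("unfavourable" if random.random() < 0.30 else "favourable")
               for _ in range(10000)]

    def reported(records, suppression):
        # Local officials forward favourable records reliably but suppress
        # a fraction of unfavourable ones before they reach the centre.
        return [r for r in records
                if r == "favourable" or random.random() > suppression]

    for suppression in (0.0, 0.5, 0.9):
        sample = reported(records, suppression)
        rate = sum(r == "unfavourable" for r in sample) / len(sample)
        print(f"suppression={suppression}: apparent unfavourable rate = {rate:.1%}")

With 90% suppression, a true 30% unfavourable rate presents to the centre as roughly 4%: the algorithm is not wrong, merely starved.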

At the core of all of this is the simple reality that any system that can realistically achieve mass deprivation of privacy will threaten sovereign privacy as well, and would thus not be allowed by the powers that be to attain that level of intrusion.

Notes:

[i] Megan Palin. “China’s ‘social credit’ system is a real-life ‘Black Mirror’ nightmare”. 19 September 2018.

[ii] Andrew D Murray. 2011. “Transparency, Scrutiny and Responsiveness: Fashioning a Private Space within the Information Society”. The Political Quarterly.

[iii] See: Yanjie Bian, Xiaoling Shu and John R. Logan. 2001. “Communist Party Membership and Regime Dynamics in China.” Social Forces, Vol. 79, No. 3, pp. 805-841.

[iv] “State Council Notice concerning Issuance of the Planning Outline for the Construction of a Social Credit System (2014-2020)”. GF No. (2014)21.

[v] Samantha Hoffman. 2017. “Managing the State: Social Credit, Surveillance and the CCP’s Plan for China”. China Brief, Volume 17, Issue 11.

PROPOSITION 1: A Ponzi scheme may exist if an economy has a large public sector (the size of the public sector is bounded below), and the assets of the state could be used for a bailout (the probability of bailout is bounded below), and the probability of early termination of the Ponzi scheme by a regulator is low (bounded above), and there is inexpensive access to citizens through the mass media (the cost of access is bounded above), and there are no severe penalties on promoters of Ponzi schemes (the penalty is bounded above).
PROPOSITION 2: A Ponzi scheme will exist even under partial bailout, if the condition in Proposition 1 holds and the probability of bailout is higher than (1 − n*), where n* is the critical fraction of citizens that are required to be involved for there to exist the possibility of a bailout.
– Utpal Bhattacharya, “The Optimal Design of Ponzi Schemes in Finite Economies”, 2003.
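
Since the parameter symbols in the passage above were garbled in transcription, the two propositions can be restated schematically with placeholder notation (the symbols below are chosen for readability and are not Bhattacharya’s own):

    % Placeholder notation (not Bhattacharya's original symbols):
    %   g = size of the public sector      b = probability of a state bailout
    %   q = probability of early termination by a regulator
    %   c = cost of reaching citizens via mass media
    %   d = penalty on promoters           n* = critical participation fraction
    \text{Prop. 1:}\quad
      g \ge \underline{g},\;\; b \ge \underline{b},\;\;
      q \le \overline{q},\;\; c \le \overline{c},\;\; d \le \overline{d}
      \;\Longrightarrow\; \text{a Ponzi scheme may exist.}
    \\[4pt]
    \text{Prop. 2:}\quad
      \text{(Prop. 1 conditions hold)} \;\wedge\; b > 1 - n^{*}
      \;\Longrightarrow\; \text{a Ponzi scheme exists even under a partial bailout.}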

The smart money is on the prediction that the more sophisticated regulatory frameworks around the world will tend to balance technology growth and privacy protection, if they are to retain their political legitimacy in an environment where consumer rights and economic competitiveness have attained nearly equal status in policy debates.

Skilled regulators have already begun to justify new reforms on the grounds that privacy measures can stimulate considerable technological progress.

Consequently, the growing concerns of consumers about the abuse of their personal data and the misuse of targeting algorithms to interfere with their decision-making autonomy are spurring some of the most fascinating work in the platform architecture design space today. A broad range of blockchain applications, for instance, is now anchored to the premise of giving users greater control over their own data, with provisioning of this data to service providers based solely on the wishes of the data owners.
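
A minimal sketch of that premise follows (the class and method names are hypothetical, invented for this example rather than drawn from any specific platform’s API): a consent registry that gates every release of an owner’s records to a service provider.

    # Minimal sketch of consent-gated data provisioning.
    # All names here are hypothetical illustrations, not a real platform's API.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        # Maps (owner_id, provider_id) -> set of fields the owner has approved.
        grants: dict = field(default_factory=dict)

        def grant(self, owner_id: str, provider_id: str, fields_: set) -> None:
            self.grants.setdefault((owner_id, provider_id), set()).update(fields_)

        def revoke(self, owner_id: str, provider_id: str) -> None:
            self.grants.pop((owner_id, provider_id), None)

        def release(self, store: dict, owner_id: str, provider_id: str) -> dict:
            # Only fields explicitly granted by the owner are ever released.
            allowed = self.grants.get((owner_id, provider_id), set())
            record = store.get(owner_id, {})
            return {k: v for k, v in record.items() if k in allowed}

    registry = ConsentRegistry()
    registry.grant("alice", "acme-insurance", {"age", "postcode"})
    data = {"alice": {"age": 34, "postcode": "10115", "income": 72000}}
    print(registry.release(data, "alice", "acme-insurance"))  # income stays private

On a blockchain-based platform the grants would live on a shared ledger rather than in one operator’s database, but the gating logic is the same.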

Savvy governments have recognised this development and have begun developing regulatory frameworks that focus on rewarding creative privacy management rather than stymieing novel business models and technologies on the basis of misguided precautionary principles. Others are just starting to align with the times.

Consider, for instance, Costa Rica’s Executive Decree No. 40008-JP. Enacted in December 2016 to amend extant provisions on privacy, the subsidiary legislation considerably transformed what had begun as a wholesale “precautionary approach” regime into an innovation-compatible system of rules specifically designed to facilitate investment, business, and technology development in the data-rich arts and sciences. How exactly does Decree No. 40008-JP achieve these goals?

Firstly, it retires the provision in Executive Decree No. 37544-JP, an annex to the primary Law No. 8968, which had introduced a highly restrictive requirement for the “registration” of databases, registers, and other data repositories with the country’s main data ombudsman, the PRODHAB. Instead, it calls for the vetting of the security protocols employed to safeguard such data repositories against malicious breaches or inadvertent disclosures of personal data.

Furthermore, regulated financial institutions in the Central American country are exempt from the requirement of database registration with PRODHAB. The dynamics of inter-party certification in the financial industry, whereby security and privacy certification is very often a prerequisite for interoperability (PCI-DSS being an obvious example), already deliver a higher standard of personal data protection, in a more efficient and decentralised manner, than most purely government-managed regimes can achieve.

The amendments also take into account the reality of cross-border data movements within federated entities by focusing on systematic compliance and downplaying exaggerated concerns about jurisdictional fragmentation. The mere act of data crossing a border does not necessarily invoke jurisdictional issues if the technology platform observes uniform standards that may be higher than domestic requirements. The ability to investigate claims of abuse in electronic systems is rarely hindered, in practice, by such jurisdictional fragmentation, yet policymaking on “data sovereignty” and “data domicile” considerations frequently operates on unscientific notions that treat physical borders as determinate.

Costa Rica’s focus on ensuring that the country’s Data Protection Law evolves to reflect the growing appreciation of its technocrats for “embedded regulation” vindicates the hope that fast-paced technology progress can be aligned with pro-privacy regulatory regimes.

Embedded regulations seek to strengthen industry standards and promote cross-network accountability among industry actors in a relatively more decentralised fashion. Thus, whereas the previous regime required “written individual consent”, the amendments now enable the use of digital assent, bringing the process more closely in line with the fast-growing trend of “e-signature management as a service”. The pace of innovation in the e-signature management space is such that the cost of complying with “individual consent” will continue to drop dramatically without sacrificing the quality of compliance.
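
A minimal sketch of machine-verifiable digital assent, using only Python’s standard library (the record fields and the shared-secret scheme are simplifications for illustration; production e-signature services rely on public-key certificates and audited timestamping):

    # Hypothetical sketch: a digitally signed consent record.
    import hashlib
    import hmac
    import json
    import time

    SECRET = b"shared-secret-held-by-the-signing-service"  # illustrative only

    def sign_consent(subject_id: str, purpose: str) -> dict:
        record = {
            "subject": subject_id,
            "purpose": purpose,
            "timestamp": int(time.time()),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return record

    def verify_consent(record: dict) -> bool:
        body = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

    consent = sign_consent("user-1234", "marketing-emails")
    assert verify_consent(consent)  # tampering with any field breaks the check

The point of the sketch is that “individual consent” becomes a cheap, auditable artefact rather than a filing cabinet of wet-ink forms, which is precisely what makes the compliance cost curve fall.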

The experience of Singapore is also instructive in clarifying this “embedment” notion of weaving regulations into the fabric of a country’s technology enterprise culture.

In Singapore, the Personal Data Protection Commission (PDPC) sees itself as a “capacity building” institution mandated to bear a significant portion of the costs and capacity burden of transitioning businesses, particularly small and upstart ones, from complacency and ineptitude to readiness and vigilance. The PDPC strives to transform enterprises of varying levels of sophistication into data-savvy operators equipped with the latest tools for complying with the law, whilst contributing at the same time to the tiny entrepôt’s declared vision of becoming the world’s “data hub”.

Singapore’s government has invested in a significant range of tools for seamless compliance tracking and reporting, so that small businesses seeking to create disruptive technologies are not distracted from that state-sanctioned mission.

This does not mean, however, as it might seem at first, that consumer needs and rights have been deprioritised. On the contrary, the country is convinced that improved privacy protection is a public good requiring technical investment, and a baseline that its technology industry can leverage for leadership.

The government of Singapore, in the context of dynamic privacy protection, refers to the “embedded regulations” notion used above as “data protection by design”. This language has become popular in recent years within stringent regimes, but the assumption in such regimes has usually been that businesses are responsible for rebuilding critical infrastructure in order to comply. The Singaporean government, on the other hand, takes the view that this is best achieved through the cultivation of an “ecosystem of trust”, and that the key role of the public sector is not primarily that of a police service, enforcing aloof laws on a suspicious crowd of businesses, but that of an investor safeguarding a key resource: trust.

The Singaporean government’s posture on this matter is summed up in this quote from the country’s data ombudsman:

“The key challenge lies in enabling the use and disclosure of data to support the progress of technology and innovation, whilst protecting personally identifiable information, to allay privacy concerns.”[1]

By highlighting the considerable reputational and business-disruption risks that businesses confront when data is mishandled, Singapore’s privacy regulators have succeeded in driving consensus on a baseline of “data hygiene and ethics” that fosters collective action. Such action, when backed by public investments, advances the state’s preferred motif of an “ecosystem of trust” beyond rhetoric into substantive interventions in critical data governance areas such as dispute resolution, advance notification of disclosures, profile reviews, and aggregation.

Whilst many countries focus on writing laws that merely heighten the risk barriers for legitimate enterprises while doing nothing to facilitate the identification and penalisation of rogue operators, Singapore prefers a broad principles-based regime coupled with an active, co-investing regulator that is respected by consumers for operating a transparent and highly communicative process, and trusted by businesses for a pro-innovation mindset that welcomes joint exploration of risk-fraught emerging disciplines such as big data, supervised machine learning on live diagnostic data, and behaviour profiling.

It is too early to judge conclusively whether role modelling in the international community will be sophisticated enough for the experiences of the likes of Singapore, and even Costa Rica, to become yardsticks of emulation. But with the heating up of competition in the machine learning space, it is very likely that international data treaties among like-minded countries will in due course begin to drive the formation of “smart country leagues” akin to “free trade areas”. Data treaties should, in places like East Africa, prevent unnecessary replication of infrastructure whilst at the same time addressing concerns about “sovereign” data control. Should this happen, the world is likely to witness some short-term schism in the trajectory of data innovation: a veritable new digital divide between countries in pro-innovation “data leagues” and those locked out due to incompatible privacy and data protection regimes.

In the long term, however, the sense that only the rapid advancement of above-board technology, developed in the open, can safeguard consumers and citizens from powerful, malicious actors who do not give a toss about privacy is likely to prompt an overall race to the top.

[1] Leong Keng Thai, quoted on the website of the Info-communications Media Development Authority in an article titled “Balancing Innovation & Personal Data Protection”, posted on 3 November 2017.