Is artificial intelligence a ticket to Borges’ Babylon?

A thought experiment under construction

Juan Ortiz Freuler
Nov 9, 2017

Versión en castellano (Spanish Version)

As I tried to imagine what a world completely governed by un-explainable AI would look like, I was reminded of Jorge Luis Borges’ “The Lottery in Babylon”. In this story, Borges describes how a simple Lottery eventually became a complex (and secret) institution. At first, like every other lottery, it oversaw the random process of assigning the jackpot. Eventually, all inhabitants of Babylon were forced to participate and the lottery became more complex: people could lose (or win) a job, a position among the nobles, the love of their lives, life itself, honor… “The complexities of the new system are understood by only a handful of specialists (…) the number of drawings is infinite. No decision is final; all branch into others.” At its peak, every aspect of a person’s life became subject to the secret rulings of the Lottery.

This blogpost does not claim that AI will trigger the end of humanity. It argues, however, that the underlying structures powering it might lead to profound change. In particular, it argues that the era in which human rationality was granted center stage in our social and economic system could be coming to an end.

AI is being presented to us as a sorcerer that offers magic to those willing to take a leap of faith. If we cave (or perhaps even if we don’t?), our lives may end up governed by an endless chain of secret lotteries.

But what is artificial intelligence?

When you tag a picture of your friend Ana on Facebook, you are basically helping train Facebook’s AI to distinguish Ana from every other friend you have on the network. You tag her in a picture in which she’s smiling. And one in which her lips are covered by a coffee mug, and one in which her head is partially turned. At first Facebook typically suggests the wrong tags. But, over time, it becomes quite good at figuring out the combination of factors that make Ana different from Emma.

Creepy success! You’ve trained the AI system to be better than that professor who still can’t tell Ana from Emma, even though you are already several months into the course.

What’s AI?

AI is basically a catch-all phrase used to describe a broad set of methodologies. Machine learning, among the most popular, involves “training” a model on a case-by-case basis, such as tagging and correcting wrong tags on Facebook pictures. Through this method the machine learning system eventually develops an implicit system of rules and exceptions underlying the examples it was taught, which it then applies to new cases (such as a new photo of Ana).
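To make that concrete, here is a minimal, toy sketch of the idea: a classifier trained on labeled “photos” (random pixel vectors standing in for real images) and then asked about a photo it has never seen. This illustrates supervised learning in general, not Facebook’s actual system; the data, the names and the model choice are all assumptions made for the example.

```python
# A toy illustration of supervised learning: NOT Facebook's actual system.
# Each "photo" is pretended to be a tiny 8x8 grayscale image, flattened into
# 64 pixel values, and each tag ("ana" / "emma") is a training label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake training data: 50 tagged photos of Ana, 50 of Emma.
# (Real systems learn from millions of real, labeled images.)
ana_photos = rng.normal(loc=0.3, scale=0.1, size=(50, 64))
emma_photos = rng.normal(loc=0.7, scale=0.1, size=(50, 64))
X = np.vstack([ana_photos, emma_photos])   # pixel features
y = ["ana"] * 50 + ["emma"] * 50           # the tags you typed

# "Training" distils the tagged examples into an implicit rule.
model = LogisticRegression(max_iter=1000).fit(X, y)

# A new, never-tagged photo: the model applies the rule it inferred.
new_photo = rng.normal(loc=0.32, scale=0.1, size=(1, 64))
print(model.predict(new_photo))   # -> ['ana'] (most likely)
```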

A revolutionary component of some AI methods is the possibility of continuous training, unsupervised by humans. In this way, progress on a specific task (like distinguishing between people, or choosing the best move in a game of chess) is exponentially quicker than the pace at which humans learn the same tasks. Machines don’t need a break.
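One simple shape such a loop can take is self-training (sometimes called pseudo-labeling): the system labels new, untagged examples with its own confident guesses and retrains on them, with no human in the loop. The sketch below uses fake data similar to the example above; it is illustrative only, not how Facebook or a chess engine actually improves.

```python
# A minimal sketch of "continuous training without a human in the loop",
# using self-training (pseudo-labeling). One simple illustrative technique,
# not a description of any real deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_photos(center, n):
    """Fake flattened 8x8 'photos' drawn around a per-person pixel pattern."""
    return rng.normal(loc=center, scale=0.1, size=(n, 64))

# A small human-labeled seed set...
X = np.vstack([make_photos(0.3, 20), make_photos(0.7, 20)])
y = np.array([0] * 20 + [1] * 20)   # 0 = Ana, 1 = Emma
model = LogisticRegression(max_iter=1000).fit(X, y)

# ...then the system keeps training itself on a stream of unlabeled photos,
# trusting its own confident guesses as new labels. No human tags these.
for _ in range(5):
    unlabeled = np.vstack([make_photos(0.3, 50), make_photos(0.7, 50)])
    probs = model.predict_proba(unlabeled)
    confident = probs.max(axis=1) > 0.95          # keep only confident guesses
    X = np.vstack([X, unlabeled[confident]])
    y = np.concatenate([y, probs[confident].argmax(axis=1)])
    model = LogisticRegression(max_iter=1000).fit(X, y)

print(f"training set grew to {len(y)} examples without new human tags")
```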

Expectations regarding how AI could upgrade healthcare, education…[you name it]… are sky high. These expectations are often based on myths that go far beyond what the technology affords today.

The same positive hype is mirrored by an equally extreme set of fears. “With artificial intelligence, we are summoning the demon”, claimed Elon Musk in a 2014 interview. Yet most specialists in the field argue today’s artificial intelligence is too basic to justify such grand fears. If the robot uprising is your uncle’s obsession, reassure him that the robots won’t be taking over the world anytime soon, and have him watch this gif on a loop for a while.

As a reaction to the anxiety, some governments have chosen to draft regulation. In 2018 a right to an explanation will take effect in the EU. It is meant to ensure that individuals affected by automated decision-making processes can be told why their specific case led to a specific decision. It establishes that algorithms that substantively affect people’s lives cannot hide their inner processes in a black box: they have to be understandable by the people they affect.

But what does this right to an explanation actually require? This is still being debated. Some claim the complexity of AI systems means that the explanations that can be developed would be meaningless to a human.

Imagine Facebook’s algorithm trying to explain why Ana isn’t Emma: it wouldn’t say that one has a freckle, thinner hair, etc. Computers don’t abstract the way we do. The AI system probably turned their faces into pixels, and isn’t assessing the freckle as “a freckle” but as a disturbance in a pattern of pixels. And so on with each difference.
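To see what that means in practice, here is a tiny sketch of what a simple model would actually receive as input: a grid of brightness values, with nothing in it labeled “freckle”. A real pipeline would start from an actual photo file; here a random image is fabricated so the snippet stands on its own, and the 8x8 resolution is only to keep the printout small.

```python
# What a simple model actually "sees": a grid of numbers, not "a freckle".
# A real pipeline would start from a photo, e.g. Image.open("ana.jpg");
# here we fabricate a random image so the snippet runs on its own.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
fake_photo = Image.fromarray(rng.integers(0, 256, size=(64, 64), dtype=np.uint8))

img = fake_photo.convert("L").resize((8, 8))          # tiny grayscale version
pixels = np.asarray(img, dtype=np.float32) / 255.0    # brightness values in [0, 1]

print(pixels)             # an 8x8 grid of numbers
print(pixels.flatten())   # the 64-number vector a simple model would receive

# Nothing in this grid is labeled "freckle" or "thinner hair": to the model,
# a freckle is only a small dip in brightness somewhere among these numbers.
```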

Producing an explanation might require reverse-engineering the effect of each of the (potentially millions of) images the system was exposed to throughout the training process. The quantity of information this reverse-engineering would spit out is of a similar size and complexity to the original problem…the problem we decided to delegate to computers precisely because of its size and complexity.
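A crude way to picture this is leave-one-out influence: remove one training image, retrain the model, see how much the prediction for the new case moves, and repeat for every image. The sketch below does this on fake data similar to the earlier examples. It illustrates the scaling problem rather than any deployed system; research techniques such as influence functions exist precisely to approximate this without retraining once per example.

```python
# A crude sketch of "reverse-engineering the effect of each training example":
# leave-one-out influence. Remove one example, retrain, and measure how much
# the prediction for a new case moves. Note that this needs one retraining
# per training image, which is exactly why it does not scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.3, 0.1, (50, 64)), rng.normal(0.7, 0.1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)                 # 0 = Ana, 1 = Emma
new_photo = rng.normal(0.32, 0.1, (1, 64))

full_model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = full_model.predict_proba(new_photo)[0, 1]

influences = []
for i in range(len(X)):                           # one retraining PER training image
    X_minus_i = np.delete(X, i, axis=0)
    y_minus_i = np.delete(y, i)
    model_i = LogisticRegression(max_iter=1000).fit(X_minus_i, y_minus_i)
    influences.append(model_i.predict_proba(new_photo)[0, 1] - baseline)

# 100 toy images -> 100 retrainings. A million training images -> a million.
most_influential = int(np.argmax(np.abs(influences)))
print(f"most influential training image: #{most_influential}")
```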

Therefore, the argument goes, we can’t expect to enjoy the benefits of AI systems AND understand how these outputs came to be. There is a trade-off.

If this trade-off is inherent to AI systems, and the benefits of AI are as high as expected, the candle of rationality that we have told ourselves has been key to understanding our world and personal history might be blown out. A complete paradigm shift. Not having an explanation for something like Facebook’s distinction between Ana and Emma might be fine. When a computer is allowed to tell a judge that you are guilty of a crime, or unworthy of credit, it’s another matter.

Let’s go back to Borges’ Babylon now. As mentioned, a place where every aspect of a person’s life was subjected to the secret rulings of the lottery.

“The [Lottery], with godlike modesty, shuns all publicity”, begins the closing paragraph. The narrator then wonders whether or not the Lottery still governs the fate of Babylonians today…or if the Lottery ever existed in the first place, but concludes that

“it makes no difference whether one affirms or denies the reality of the shadowy corporation, because Babylon is nothing but an infinite game of chance.”

Image: Juan Ortiz Freuler, CC-BY

But…does it matter if the Lottery exists?

This was Borges’ mind game decades before AI and explainability were up for debate.

Let’s assume that in both worlds you would face the exact same fate: would you be indifferent between a world in which an unknown third party is executing that fate and one in which it merely occurs?

I believe the answer is no. We are not indifferent.

Borges’ narrator mentions having overheard back-alley discussions about whether the Lottery was corrupt and actually favoring a privileged few. The Lottery publicly denied these claims and insisted that its representatives were mere executors of fate with no actual power. The very existence of a space for doubt should tilt us towards picking the world in which no third party has the power to decide whether or not to meddle with our lives in such a way.

But then again we are told there is an inherent trade-off between accuracy and explainability. So the contrast is not merely between two equivalent outcomes. Unexplainable models, so they say, come with extra benefits.

Would we choose the world of unexplainability for an extra dollar a day? Probably not worth the dread.

-Ok. What about a million dollars?

-Hmm…?

Yet the problem with framing the choice in this way is that, if we allow companies to build an artificial layer that is in fact unintelligible on top of our existing chaos, we wouldn’t really be able to assess the trade-off itself! We might no longer know if we were given an extra dollar, a million, or if the system actually sucked our wallet dry. As in Borges’ story, we might be told that actually “No decision is final; all branch into others.”

So it turns into a matter of trust…

Would such a trade-off require our societies to abandon rationality as a guiding principle and have us adopt a mechanism of faith in the gods of silicon?

Borges’ Babylon seems run by a pretty unaccountable Company. Questioning outcomes was something to be done in a dark alley, not in the public square.

At this point in time, being afraid of AI as an entity is as reasonable as being afraid of dice. Self-conscious AI is many decades away according to the optimists, and even further away according to the rest of the experts, many of whom claim it is not something that can be achieved.

Yet we should learn from the past. Those who claim to be interpreters of the whims of god tend to reserve for themselves a disproportionate share of “god’s gifts”. Such is the effect of power on humans. Thus today we should focus on questions like: Are the dice loaded? Who gets to define and/or execute the consequences of a draw? We should focus on those building AI systems, and those who sign their paychecks. These are the people who might be fiddling with the idea of setting up tomorrow’s Lottery.

We should not be lured into trading our drive to understand the world for shiny mirrors.

Nor should we feel paralyzed by these challenges.

Tangible progress towards the explainability of algorithms is already underway. More can and needs to be done.

What’s our compass in these troubled waters?

#1- Fairness

Those who broker the distribution of the efficiency gains created by the deployment of AI in low-risk cases should ensure a reasonable portion of these benefits is geared towards developing AI whose outcomes in high-risk cases are auditable and fair. This research is particularly urgent in areas such as access to public services and the judicial system.

#2- Don’t cave to oppression

Until we develop robust mechanisms to interpret and explain outputs, and to ensure a degree of fairness, governments should not impose these systems on people. Offering them as an alternative to human decision-makers might sound attractive. Given systemic discrimination and existing human bias, some people might reasonably prefer a black box to a racist human judge, for example. Understanding these baselines is important. Such is the way the jury system is being deployed in the Province of Buenos Aires: understanding that the introduction of juries might significantly alter the odds of being convicted, the judicial system offers the jury as an option the accused can opt into, striving to ensure the accused do not perceive the change as a violation of their right to a fair and impartial trial. The jury is not an imposed feature of the process. Perhaps, like in Babylon, the lottery begins as “a game played by commoners”. Those with nothing to lose. The two-tiered system that would ensue is unethical, even if it is in the narrow, immediate interest of each person who chooses it. As such, we need to foster a broader conversation which acknowledges that governments have a duty to eliminate the underlying systems of oppression.

#3- Public disclosure

Our political representatives need to create incentives for developers to open the black boxes before our governments contract their services or buy their products. This could take the form of conditions included in public tenders, tests carried out as part of public tenders, or liability for not disclosing certain risks, for example.

Over time, advances in explainability in high-stakes areas such as those driven by government contractors could lead to the development and adoption of explainable models in more areas.

That is precisely the role of government: to look into the future and design an incentive structure that, honoring the rights and balancing the interests of each individual and group that forms the social fabric, triggers the coordination required for the construction of a world. A world that each and every one of us can look forward to. As such, at a time in which technology is actively reshaping social relations and the distribution of wealth, governments need to double down on these responsibilities.

So, is artificial intelligence a ticket to Borges’ Babylon? Not necessarily. Babylon is nothing but a possible world. One we should not settle down for.

-
*Working draft. Comments and suggestions welcome

  • See also “How ChatGPT is hallucinating us into a Borgean universe” (2023): https://juanof.medium.com/how-chatgpt-is-hallucinating-us-into-a-borgean-universe-ef8f3ad51d5
