In January 2020, an African American man was arrested in Michigan and handcuffed outside his home in front of his family. The arrest warrant rested on a match produced by an Artificial Intelligence (AI) facial recognition system, which identified him as the perpetrator of a theft. The AI had been trained mostly on white faces and simply misidentified the offender. It was probably the first wrongful arrest of its kind.
Around the same time, some 26,000 families in the Netherlands were accused of benefits fraud. What they had in common was a migrant background. The affair ruined thousands of innocent people, who lost homes and jobs and were forced to repay the social assistance they had received.
It was an error that produced an “unprecedented injustice” in that country, and the entire cabinet resigned over the scandal. The alleged fraud had been flagged by an AI system.
AI: dataist utopias and dystopias
Novel as AI is, the use of mathematics to manage social questions is not new.
Philosophy brims with dataist utopias. For Thomas More, a new method of government had to rest on a tool that guaranteed excellence in the administration of affairs: mathematics. With AI, Saint-Simon’s utopia of “the good administration of things and the good government of people” promises a renewed opportunity through algorithms processed by machines, not for nothing called “computers.”
AI involves the interaction between software that learns and adapts, hardware with massive computing power, and vast amounts of data. It has been defined as “a constellation of processes and technologies that enable computers to complement or replace specific tasks that would otherwise be performed by humans, such as decision making and problem-solving.”
Its advantages are many: better-informed decision-making, management of massive volumes of information, tools against the climate crisis, restoration of ecosystems and habitats, slowing of biodiversity loss, more efficient allocation of social resources, improved humanitarian aid and social assistance, medical diagnoses and health applications, control of traffic flows, and more.
Now, the distinct political connotations of the uses of mathematics have long been noticed. Engels wrote to Marx in 1881: “Yesterday, at last, I found the courage to study your mathematical manuscripts and, although I did not consult reference books, I was glad to see that I did not need them. I congratulate you on your work. The matter is as clear as daylight, so it never ceases to surprise me how mathematicians insist on mystifying it. It must come from their one-sided way of thinking.”
Karl Popper, the author of The Open Society and Its Enemies, considered the “bible of Western democracies” by Bertrand Russell (himself a mathematician), began his career as a teacher of mathematics and physics.
Thomas Hobbes’s Leviathan, a thoroughly anti-republican political program, held that good government comes from modeling the state on a machine: “that great LEVIATHAN called a COMMONWEALTH, or STATE … is but an artificial man, though of greater stature and strength than the natural.”
AI, that “artificial man,” promises to be neutral but is often partial: it thus produces an “algorithmic Leviathan.” It frequently operates as a black box: the information fed to the algorithm is known, but the process by which it reaches a given result is not. Under these conditions, if discrimination occurs, there is no way to know whether it happened on the basis of sex, ethnicity, skin color, age, religion, ideology, or some other dimension.
Without black boxes (in some countries they are beginning to be legally regulated) it would be possible to identify how an algorithm discriminates. In general, discrimination arises because the information on which algorithms are trained is partial, or because they reproduce pre-existing discriminatory biases. Nor can it be ignored that the technology industry has historically been staffed mainly by white men drawn from fairly homogeneous class strata and cultural frameworks.
Discrimination, however, can also be intentional. Hate, division, and lies are good for business: they multiply the interactions to be monetized. In this field, the production of discrimination can hide behind trade secrecy.
Algorithmic racism
The notion of race occupies a central role in algorithmic discrimination.
This centrality is reflected in the Recommendation on the Ethics of Artificial Intelligence, the first global instrument on the subject, adopted in November 2021 by 193 UNESCO Member States. Among other objectives, the document seeks to ensure equity and non-discrimination in the implementation of AI, to prevent existing social inequalities from being perpetuated, and to protect vulnerable groups.
There is no scientific way to justify the existence of human “races.” All human beings share 99.99% identical DNA; the traits that determine people’s physical appearance involve only 0.01% of the genetic material. The concept of race is a result of racism, not its origin.
AI cloaks its behavior on matters of race in science, assuring us that race is an invisible variable, yet it often operates on pseudoscientific foundations.
The first recorded formal use of the term “pseudoscience” dates to 1824, to describe phrenology. Facial recognition systems that claim to predict dangerousness, character, or personality from photographs reproduce the logic of that pseudoscience.
An expression of scientific racism, like craniometry, racial demography, and criminal anthropology, phrenology claimed that character, personality traits, and criminal tendencies could be determined from the shape of the skull, the head, and the facial features. It lost any scientific standing long ago.
However, as Achille Mbembe, the leading Cameroonian philosopher, observes, “new security devices [such as facial recognition using AI] take up elements of the past from previous regimes: the disciplinary and penal regimes of slavery, elements of colonial conquest and wars of occupation, juridical-legal techniques of exception.”
There is ample evidence of this. The COMPAS system, used in the United States to predict recidivism, has been challenged because African American defendants are twice as likely to be wrongly classified by it. An identical résumé is 50% more likely to lead to a job interview if the algorithm reads the candidate’s name as European American rather than African American.
Joy Adowaa Buolamwini, a computer scientist at the Massachusetts Institute of Technology (MIT) and the subject of the Netflix documentary Coded Bias, has evaluated facial recognition systems from cutting-edge companies in the field. Her conclusion: error rates for lighter-skinned men were no more than 1%, while for darker-skinned women they reached 35%.
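To make the arithmetic behind such audits concrete, here is a minimal Python sketch of a disaggregated evaluation: instead of a single aggregate accuracy figure, error rates are computed separately for each demographic subgroup. The function name, the field names, and the four records are invented for illustration; they are not drawn from Buolamwini’s benchmarks.

```python
# Minimal sketch of a disaggregated audit: error rates are computed
# per demographic subgroup rather than as one aggregate figure.
# All records below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of dicts with 'group', 'label', 'prediction'."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: an aggregate error rate of 25% would hide
# the fact that all of the errors fall on one subgroup.
audit = [
    {"group": "lighter-skinned man", "label": "m", "prediction": "m"},
    {"group": "lighter-skinned man", "label": "m", "prediction": "m"},
    {"group": "darker-skinned woman", "label": "f", "prediction": "m"},
    {"group": "darker-skinned woman", "label": "f", "prediction": "f"},
]
print(error_rates_by_group(audit))
# {'lighter-skinned man': 0.0, 'darker-skinned woman': 0.5}
```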
Leading facial recognition systems have failed to recognize the faces of Michelle Obama, Oprah Winfrey, and Serena Williams. Twitter’s image-cropping algorithm failed to register Barack Obama’s face. A rapper built with Artificial Intelligence was “fired” for reproducing racist stereotypes. Tay, an AI designed to have “cool” interactions with its users, went in less than 24 hours from saying that humans were “super cool” to saying that “Hitler did nothing wrong.” And Black users could be excluded from receiving home-sale ads on Facebook.
Is AI racist?
Let ChatGPT (GPT-3) answer the question. Its response, translated:
Yes, AI (artificial intelligence) can be racist if it has been trained on data containing racial prejudice or discrimination. AI learns by processing large amounts of data and, if those data contain prejudice or discrimination, the AI can replicate them in its decisions and results.
For example, if a machine learning algorithm is trained on data that underrepresent certain racial groups, the AI may learn to make biased and discriminatory predictions. Similarly, if the dataset used to train the AI contains racial stereotypes, the AI may reproduce those same stereotypes.
It is important to point out that AI has no intentions or emotions and cannot be consciously racist the way people can. However, AI’s biased decisions and results can have discriminatory consequences in real life, which makes it crucial to address the problem of bias and discrimination in machine learning algorithms and to improve the quality of the data used to train them.
AI is only as good as the data it processes. A poorly designed algorithm diffuses bias at scale. Accepting the data to be processed by AI without subjecting them to critical scrutiny is a sleep of reason that breeds monsters.
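As a minimal illustration of that point, the toy “model” below does nothing more than memorize the majority historical decision for each profile. If the historical decisions were discriminatory, faithfully learning from them automates the discrimination. Every record, group label, and decision here is invented; the sketch stands for the general mechanism, not for any real system.

```python
# A deliberately tiny illustration of "AI is only as good as its data":
# this model just learns the majority past outcome for each profile.
# If past decisions were discriminatory, faithful learning reproduces
# the discrimination at scale. All records are invented.
from collections import Counter, defaultdict

history = [
    # (qualifications, group) -> past human decision
    (("degree", "A"), "hire"),
    (("degree", "A"), "hire"),
    (("degree", "B"), "reject"),  # equally qualified, historically rejected
    (("degree", "B"), "reject"),
]

def train(examples):
    """Memorize the most common historical decision per profile."""
    by_profile = defaultdict(Counter)
    for profile, decision in examples:
        by_profile[profile][decision] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_profile.items()}

model = train(history)
print(model[("degree", "A")])  # hire
print(model[("degree", "B")])  # reject: the bias is now automated
```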
Police data series, for example, rest on records built, in many cases, from partial data and from practices and methods that were lawful at the time but have since been outlawed, within widespread contexts of police racism.
Racism is a structural, social, and cultural legacy that is continually rebuilt: it is reinterpreted, evolves, and reproduces itself. When processing data, it is not enough to apply a criterion of justice such as “non-discrimination,” one that anonymizes the names involved, renders racial data invisible, and understands justice as “treating everyone the same.”
A well-known case in the United States showed that this criterion cannot guarantee fair outcomes: an AI system for granting bank loans omitted names and any data that could point to skin color, yet it still produced markedly racist results.
The investigation showed that asking for each applicant’s home zip code reintroduced race, even though the intention had been to expel any racial marker from the data collected: zip codes of areas with majority African American populations were disadvantaged compared to neighborhoods the algorithm identified as mostly white.
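A toy sketch can make this proxy mechanism visible. In the hypothetical Python fragment below, the race column is deliberately ignored, in the spirit of “fairness through unawareness,” yet scores computed from zip code alone still track racial groups, because residential segregation makes the two variables carry much the same information. The zip codes and records are invented for illustration.

```python
# Minimal sketch of proxy discrimination, as in the lending case above:
# the race column is dropped before "training," but zip code survives
# as a near-perfect proxy for it. All data are invented.
historical = [
    # (zip_code, race, repaid_loan); race is ignored below
    ("48205", "black", True), ("48205", "black", False),
    ("48205", "black", False), ("48009", "white", True),
    ("48009", "white", True), ("48009", "white", False),
]

def approval_rate_by_zip(rows):
    """Score each zip code by its historical repayment rate,
    a record shaped by decades of segregation and redlining."""
    stats = {}
    for zip_code, _race, repaid in rows:  # race deliberately unused
        ok, total = stats.get(zip_code, (0, 0))
        stats[zip_code] = (ok + repaid, total + 1)
    return {z: ok / total for z, (ok, total) in stats.items()}

rates = approval_rate_by_zip(historical)
print(rates)  # {'48205': 0.333..., '48009': 0.666...}
# The model never saw race, yet its scores track it closely, because
# residential segregation makes zip code carry the same information.
```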
The present carries its history: what is thrown out the door comes back through the window. Overcoming history requires making it visible, not the other way around.
Technological solutionism is not the solution
The solutions offered by machines come with an aura of ideological neutrality and technological efficiency, and promise new capacities for facing old problems. AI is presented as the technological management of the organization of common affairs. It is easy to frame it within the non-partisan ideology of “technological solutionism.”
Yet for Cathy O’Neil, a U.S. mathematician and activist, algorithms are “opinions embedded in mathematics.” Without a commitment to race-aware statistics, without data that register the socioeconomic differences between population groups, and without guaranteeing participation, oversight, and transparency in the collection and use of data, the algorithm loses much of its technological fascination and reveals, rather crudely, the political nature of the context in which it operates.
Race does not exist, but racism does. AI is not racist per se, but it produces racist results. Without taking charge of history, the algorithm is an opinion that encodes exclusion and programs the discrimination dominant in the history inscribed in its data.