A few years ago, China launched its new social credit system, aimed at rating individuals’ trustworthiness in society. The algorithms behind it assign citizens a score based on available user data, from social interactions to consumer behavior. The project legitimately raises questions about social control by the Chinese government, and a great number of Western media outlets have described the initiative as “dystopian” or “Orwellian”. However, in the age of big data and social media, the algorithms used in the Chinese project are not so far removed from the ones already classifying and rating us every day.

Tinder, Uber and Facebook are examples of platforms where algorithms have replaced human judgement. Automation and the massive availability of personal data give algorithms ever more material with which to analyse, rate and classify us. Computer programs have been used in financial scoring around the world for decades. In the US, the system that determines whether you are trustworthy enough to get credit is called FICO. Through its secret computer programs, it rates individuals based on past data, such as whether you have paid your debts on time and the rest of your credit history.
Until now, no social data has been involved in classifying citizens. However, platforms and services are steadily widening the sources they draw on to classify their users more precisely. In the financial sector, start-ups are integrating data taken from social media in order to score individuals financially. For instance, Lendoo, a Hong Kong-based start-up, uses whatever social data is available on the Internet to score individuals who lack a sufficient credit history, claiming to “expand access to credit”. And the trend is growing: three years ago, Facebook patented a technology that would enable it to financially assess its users.
Algorithms are already assessing our value far beyond financial scoring. The dating app Tinder, for example, uses a similar program to rank its users’ profiles by attractiveness when deciding whom to show to whom, increasing the likelihood of a match. This system, known as the Elo score, has an obvious impact on who you might end up with. It also raises a critical question: on what terms can an algorithm judge something as subjective as attractiveness from someone’s profile?
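Tinder has never published the details, but the classic Elo update that such a score is reportedly based on can be sketched in a few lines. The Python below is purely illustrative: the step size k and the idea of treating a right-swipe as a “win” are assumptions for the sketch, not Tinder’s actual code.

def expected_score(rating_a, rating_b):
    # Probability that profile A "wins" the comparison against profile B
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a, rating_b, a_won, k=32):
    # Move each rating towards the observed outcome; k controls the step size
    expected_a = expected_score(rating_a, rating_b)
    outcome_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (outcome_a - expected_a)
    new_b = rating_b + k * ((1.0 - outcome_a) - (1.0 - expected_a))
    return new_a, new_b

# Two equally rated profiles; a single right-swipe shifts both scores.
print(update_elo(1400, 1400, a_won=True))   # -> (1416.0, 1384.0)

Every swipe nudges the ratings, so over time the system quietly sorts people into attractiveness tiers without anyone ever defining what attractiveness means.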
Nowadays, the proliferation of big data models has given algorithms an alarming power over the futures of individuals. And the problem is that we tend to trust them blindly. They define who gets a loan, who your next partner will be, who gets an interview for a job and who the police consider likely to commit a crime. According to Cathy O’Neil, author of the book “Weapons of Math Destruction” and a big data sceptic, you might not have noticed it, but your interactions with almost any bureaucratic entity are likely to pass through an “algorithm in the form of a scoring system”.
But what if algorithms are wrong?
An algorithm is a set of rules written in computer code that the computer understands and executes. Algorithms can be trained to predict future events by analysing patterns in historical data and comparing them with a pre-coded definition of what is being looked for, whether that is attractiveness, trustworthiness or success.
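In practice, that training step often looks like the sketch below. It is a deliberately toy example: the figures, the two features and the “repaid the loan” label are invented for illustration and do not come from any real scoring system.

# A model is "trained" on historical examples and then scores new cases
# against a pre-coded target (here, a hypothetical "repaid the loan" label).
from sklearn.linear_model import LogisticRegression

# Toy historical data: [income in thousands, years of credit history]
past_applicants = [[20, 1], [35, 3], [50, 8], [75, 12], [28, 2], [60, 10]]
repaid_loan     = [0, 0, 1, 1, 0, 1]   # the pre-coded definition of "success"

model = LogisticRegression()
model.fit(past_applicants, repaid_loan)          # learn patterns from the past

new_applicant = [[40, 4]]
print(model.predict_proba(new_applicant)[0][1])  # a "trustworthiness" score between 0 and 1

Everything the score can ever say about the new applicant is inherited from the past cases and from whoever decided what counts as “success”.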
There is a commonly accepted idea that algorithms are maths and therefore true and objective. This belief is what developer Fred Benenson calls “mathwashing”: the human tendency to assume algorithms are objective simply because they have maths at their core. In a recent Q&A, Benenson clarified:
“Algorithm and data-driven products will always reflect the design choices of the humans who built them and it’s irresponsible to assume otherwise”.
But the threat also lies in the uncertainty over where the bias comes from, since both the code and the historical data used to run the program can be the origin of discrimination. Furthermore, most algorithms offer little access to their inner workings; they are opaque “black boxes”. It is therefore difficult for the average individual, who typically lacks the technical knowledge required to understand an algorithm, to know whether they have been assessed fairly.
Can a bridge be racist? The question might seem odd, but it is the argument made by political theorist Langdon Winner in “Do Artifacts Have Politics?”. He claims that all technologies are embedded with their creators’ biases, which follow from the choices made during their creation. They therefore carry political implications and embody a certain form of power. In his essay, he references the social and racial prejudices of Robert Moses, the twentieth-century New York urban planner. Moses’s bridges were too low for public buses, so the public parks beyond them were accessible only to people who owned cars, who were mostly white and upper class. If a bridge can be racist, then an algorithm most certainly can be too.
The automation of human judgement by algorithms will inevitably create winners and losers, accentuating existing inequalities. The human beings behind the software are often unaware of the moral dimension of their work, and they are rarely given the grounding in social issues that it requires.
One example of algorithmic bias comes from a US investigation into Compas, a program that predicts a criminal’s likelihood of reoffending. The algorithm was used in courtrooms with a view to making risk judgements more objective. And guess what? According to the investigation, black defendants who did not reoffend were almost twice as likely to be misclassified as future reoffenders. In the racial context of the United States, computer programs could have been a powerful tool against human racial bias, but as it turns out, racism can also be coded.
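The heart of that finding is a simple group-by-group comparison of error rates, the kind of check anyone auditing such a system could run. The sketch below uses made-up placeholder records rather than the investigation’s actual data, purely to show the shape of the test.

# How often were people who did NOT reoffend nevertheless flagged as
# high risk, broken down by group? The records below are placeholders.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True,  False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    # People in this group who did not reoffend...
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    # ...but were still flagged as high risk
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(group))

A scoring system can look accurate overall while its mistakes fall far more heavily on one group than another, which is exactly what the investigation reported.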
Now think how the same kind of pattern could reappear in a similar situation: an algorithm scanning the résumés of job applicants. Such algorithms rely on internal dictionaries called “word embeddings”, which allow the computer to associate words, for instance capitals with their corresponding countries. Research from Princeton University has shown that while some male names were associated with “boss” and “computer programmer”, female names were linked to “housekeeper” or “artist”. Whether the bias enters through the code or through the historical data fed to the program, the outcome also depends heavily on how a “successful application” is defined. As a result, female applicants might be less likely to be selected for a job at a technology firm.
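The mechanism is easy to demonstrate. In a word embedding, every word is a vector of numbers, and two words are “associated” when their vectors point in similar directions. The Python below illustrates the idea with tiny made-up vectors; in the Princeton study the vectors came from real embeddings trained on web text, which is precisely where the stereotypes crept in.

import math

def cosine(u, v):
    # Similarity between two vectors: 1.0 means they point the same way
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

vectors = {                      # hypothetical 3-dimensional embeddings
    "john":        [0.9, 0.1, 0.2],
    "emily":       [0.1, 0.9, 0.2],
    "programmer":  [0.8, 0.2, 0.3],
    "housekeeper": [0.2, 0.8, 0.3],
}

print(cosine(vectors["john"], vectors["programmer"]))    # high: the words co-occur in the training text
print(cosine(vectors["emily"], vectors["programmer"]))   # low: the association a résumé screener inherits

If a hiring model leans on such associations, the stereotype in the text it was trained on silently becomes part of its definition of a promising candidate.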
Thankfully, it does not have to be this way, and individuals should not feel powerless against automated value judgements that affect their lives. As Cathy O’Neil argues, “data scientists are not ethical deciders”, and technology should be a tool working for us rather than against us. To accomplish that, she suggests algorithmic auditing. The first step is to make algorithms more accessible and to allow individuals to challenge the data likely to significantly affect their lives. There is also a growing demand for algorithmic accountability through government awareness and regulation. In that vein, the European Union has recently adopted a measure giving citizens the right to ask for an explanation of data-driven decisions, notably in online credit applications and e-recruiting.
The critical difference between this kind of algorithmic scoring and the Chinese system of social rating is that in China the government alone sets the variables and the definitions; here, we can still challenge them. As data increasingly means money across many sectors, it is important to ask ourselves: where is my data going, and for what purpose? Law professor Lawrence Lessig put it well:
“We should interrogate the architecture of cyberspace as we interrogate the code of Congress”.