Tech giants forced to reveal AI secrets – here’s how this could make life better for everyone

Boost AI transparency and accountability. PopTika/Shutterstock

The European Commission is forcing 19 tech giants including Amazon, Google, TikTok and YouTube to explain their artificial intelligence (AI) algorithms under the Digital Services Act. Asking these businesses – platforms and search engines with more than 45 million EU users – for this information is a much-needed step towards making AI more transparent and accountable. This will make life better for everyone.

AI is expected to impact every aspect of our lives – from healthcare, to education, to what we watch and hear, and even how well we write. But AI also generates a lot of fear, often revolving around a god-like computer becoming smarter than us, or the risk that a machine with an innocuous task could inadvertently destroy humanity. More pragmatically, people often wonder if AI will make them redundant.

We’ve been there before: machines and robots have already replaced many factory workers and bank clerks without putting an end to work. But AI-based productivity gains come with two new problems: transparency and accountability. And everyone will lose if we don’t think seriously about how best to address these problems.

Of course, we already have experience of being evaluated by algorithms. Banks use software to check our credit scores before giving us a mortgage, as do insurance and cell phone companies. Ride-sharing apps make sure we’re pleasant enough before offering us a ride. These evaluations use a limited amount of information, selected by people: your credit rating depends on your payment history; your Uber rating depends on how previous drivers felt about you.

Black box ratings

But new AI-based technologies collect and organize data without human supervision. That makes it much harder to hold anyone accountable – or even to understand which factors produced a machine-made rating or decision.

What if you find that no one is calling you back when you apply for a job, or you are not allowed to borrow money? This could be due to some error about you somewhere on the internet.

In Europe, you have the right to be forgotten and to ask online platforms to remove inaccurate information about you. But it will be difficult to know what the wrong information is if it comes from an unsupervised algorithm. Most likely, no one will know the exact answer.

If errors are bad, accuracy can be even worse. What would happen, for example, if you let an algorithm look at all the data available about you and assess your ability to repay credit?

A high-performance algorithm might infer that, all else being equal, a woman, a member of an ethnic group that tends to be discriminated against, a resident of a poor neighborhood, or someone who speaks with a foreign accent or is not “good looking” is less creditworthy.

Research shows that these groups can expect to earn less than others and are therefore less likely to repay their credit – and algorithms will “know” this too. Although there are rules to stop bank staff from discriminating against potential borrowers, an algorithm acting on its own might consider it fair to charge these people more to borrow money. Such statistical discrimination can create a vicious circle: if you have to pay more to borrow, you may struggle to make these higher repayments.
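To see how that vicious circle feeds on itself, consider a minimal, purely illustrative sketch in Python. Everything in it is an invented assumption rather than a real lending model: a lender reprices loans each round to cover the defaults it observes, while higher rates in turn make defaults more likely.

```python
# Illustrative only: statistical discrimination as a feedback loop.
# All numbers and the pricing rule are invented assumptions.

def default_rate(interest_rate: float, base_risk: float) -> float:
    """Hypothetical model: defaults rise with the repayment burden."""
    return min(1.0, base_risk + 0.5 * interest_rate)

def simulate(base_risk: float, rounds: int = 5) -> list[float]:
    rate = 0.05  # everyone starts at the same 5% interest rate
    history = []
    for _ in range(rounds):
        observed_defaults = default_rate(rate, base_risk)
        # The lender reprices to cover observed losses (a crude rule).
        rate = 0.05 + 0.2 * observed_defaults
        history.append(rate)
    return history

# Two groups identical except for a small initial disadvantage
# (e.g. lower average income), which the pricing loop then locks in.
print("advantaged:   ", [round(r, 3) for r in simulate(base_risk=0.05)])
print("disadvantaged:", [round(r, 3) for r in simulate(base_risk=0.15)])
```

In this toy model the group that starts slightly worse off is quoted higher rates in every round, and the gap never closes: the repricing rule turns a small initial difference into a permanent penalty.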

Even if you prevent the algorithm from using data about protected characteristics, it may reach similar conclusions based on what you buy, the movies you watch, the books you read, or even the way you write and the jokes that make you laugh. Yet algorithms like these are already being used to screen job applications, assess students and assist the police.
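To see how strong such proxies can be, here is a minimal sketch using synthetic data and the scikit-learn library; the features and their correlations are invented for illustration. The model is never shown the protected attribute, yet it recovers it from correlated habits far better than chance.

```python
# Illustrative only: recovering a hidden protected attribute from proxies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)  # protected attribute, never shown to the model
# Proxy features that merely correlate with group membership (invented).
shopping = group + rng.normal(0, 0.8, n)  # e.g. spending patterns
viewing = group + rng.normal(0, 0.8, n)   # e.g. what you watch
X = np.column_stack([shopping, viewing])

X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_train, g_train)
print(f"group recovered from proxies alone: {clf.score(X_test, g_test):.0%}")
# Well above the 50% a blind guess would achieve, so a downstream model
# using these features can discriminate without ever seeing the attribute.
```

Dropping the protected column is therefore not enough; auditors also need to check whether a model’s decisions track it indirectly.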

The exact cost

Apart from matters of fairness, statistical discrimination can hurt everyone. A study of French supermarkets, for example, showed that when employees with Muslim-sounding names work under a biased manager, they are less productive, because the manager’s prejudice becomes a self-fulfilling prophecy.

Research on Italian schools shows that gender stereotypes affect achievement. When a teacher believes that girls are weaker than boys in mathematics and stronger in literature, students organize their efforts accordingly and prove the teacher right. Some girls who could have been excellent mathematicians, or boys who could have been great writers, may choose the wrong career as a result.

When people are involved in decision-making, we can measure and, to some extent, correct bias. But it’s impossible to hold unsupervised algorithms accountable if we don’t know the exact information they use to make their decisions.
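That kind of accountability is exactly what access to a system’s decisions would enable. As a minimal sketch, with an invented decision log: one of the simplest audits a regulator could run is to compare outcome rates across groups, sometimes called the demographic parity gap.

```python
# Illustrative only: a simple fairness audit on a model's decision log.
from collections import Counter

decisions = [  # (group, approved?) pairs, invented for illustration
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

approved = Counter(g for g, ok in decisions if ok)
total = Counter(g for g, _ in decisions)
rates = {g: approved[g] / total[g] for g in sorted(total)}
print(rates)                                    # {'A': 0.75, 'B': 0.25}
print("parity gap:", max(rates.values()) - min(rates.values()))  # 0.5
```

A large gap does not prove discrimination on its own, but it tells regulators where to look – which is impossible if the decision log itself is secret.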

Some people may find AI decision-making helpful. Land Picture/Shutterstock

If AI is really going to improve our lives, transparency and accountability will be key – ideally before algorithms are even introduced into decision-making. That is the goal of the EU’s Artificial Intelligence Act. And, as often happens, EU rules could quickly become the global standard. This is why companies should share commercially sensitive information with regulators before deploying AI for sensitive practices such as hiring.

Of course, this type of regulation is about striking a balance. The big tech companies see AI as the next big thing, and innovation in this area has also become a geopolitical race. But innovation often only happens when companies can keep some of their technology secret, so there’s always the risk that too much regulation will stifle progress.

Some believe that the EU’s lack of major AI innovation is a direct consequence of its strict data protection laws. But if we don’t hold companies accountable for the results of their algorithms, many of the potential economic benefits of AI development could be lost anyway.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Renaud Foucart does not work for, consult with, or hold shares in any company or organization that would benefit from this article, and has disclosed no relevant interests beyond his academic appointment.
