Rules for robots - towards a more civil robot apocalypse

UM’s Katleen Gabriels is a moral philosopher specialising in computer ethics. She studies how morality and computer technologies influence each other. Her new book, Rules for Robots: Ethics and Artificial Intelligence, asks questions that will need answering very soon.

Don’t worry: the robot apocalypse is at least another decade away. Self-driving cars, on the other hand, are coming soon. Their developers assure us that the cars can make decisions far faster and more accurately than humans can. But if the choice is between killing two pensioners and killing a toddler, what will that decision be based on?

Katleen Gabriels is a moral philosopher studying computer technologies. “Artificial intelligence (AI) has come a long way,” she explains, “from if-then programming to associative learning to machine learning. Deep learning has been around since the 1980s but now there are commercial structures providing vast amounts of data to train the machines.”
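To make that progression concrete, here is a minimal sketch (in Python, with invented data) contrasting the two poles Gabriels names: a decision written by hand as an if-then rule versus one learned from examples.

```python
# Hand-written if-then rule: the designer encodes the decision directly.
def spam_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# Learned rule: the behaviour comes from (toy, invented) training data.
# A one-feature threshold classifier, trained by picking the split that
# makes the fewest mistakes -- the idea that machine learning scales up.
examples = [(0.1, False), (0.2, False), (0.7, True), (0.9, True)]  # (score, is_spam)

def train_threshold(data):
    return min((score for score, _ in data),
               key=lambda t: sum((s >= t) != label for s, label in data))

threshold = train_threshold(examples)
print(spam_rule("FREE MONEY inside!"))  # True: decision written by hand
print(0.8 >= threshold)                 # True: decision learned from data
```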

Machines and moral beings

The progress in AI was fuelled by conceptual shifts – in how we think the world can be made sense of – but also by technological breakthroughs, such as vastly increased storage capacity and processing power. The latter is often summarised as Moore’s law: the observation that the number of transistors on a microprocessor doubles roughly every two years.
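The arithmetic behind that observation is simple exponential growth: N(t) = N0 · 2^(t/2), with t in years. A quick illustration, using the Intel 4004’s roughly 2,300 transistors (1971) as a baseline:

```python
# Moore's law as arithmetic: transistor counts double roughly every two years.
# The 2,300-transistor baseline is the Intel 4004 (1971), used for illustration.
def transistors(years_elapsed: float, n0: float = 2300.0, doubling: float = 2.0) -> float:
    return n0 * 2 ** (years_elapsed / doubling)

for year in (1971, 1981, 2001, 2021):
    print(year, f"{transistors(year - 1971):,.0f}")
# 50 years is 25 doublings: a factor of 2**25, about 33 million --
# taking 2,300 transistors into the tens of billions, roughly today's chips.
```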

While AI’s evolution is impressive, it pales compared to our own. Gabriels conducted her PhD research on Second Life, an online social virtual world. “We are fundamentally moral beings. Based on our sense of reciprocity and underlying desire to be loved and to excel, a morality emerges that is not inherent in the virtual world.” In this digital Wild West, morality still manifested itself – and Gabriels’ curiosity was piqued.

Frankenstein’s regrets

Gabriels cites former Apple engineer Tony Fadell, who has now voiced regrets regarding his design choices. “He noticed that at dinnertime, his children were constantly checking their phones – and he realised that they had designed these products for themselves, i.e. people in their twenties, single, no kids. But they’d never considered different use contexts, like a family setting.”

Gabriels thinks this is a more general problem, one that universities have a role in addressing. “Designers often think that they are doing exact science – little by little, students now understand that what they do isn’t neutral.” In her book, she pleads for ethics by design and even argues for a variation of the Hippocratic Oath committing designers to do no harm.

Don’t come to us for the truth!

The choices and trade-offs involved are complex and often opaque. Gabriels thinks students have to be made aware of all the potential implications of design choices and be taught the conceptual toolkit to navigate these thought processes. “I’ve taught mandatory ethics courses for designers at TU Eindhoven and I think this shift will come – like it did with ethics in medicine.”

Gabriels doesn’t trade in easy answers; the focus is on ways of thinking and dialogue. “Don’t come to us for the truth! It’s not like ethicists just know… but we can teach how to reflect together.” She now gets to do just that in Digital Society, UM’s new interfaculty bachelor programme combining a technical understanding of digital technology with tools and perspectives from different disciplines.

Attention thieves

Isn’t it just apps? What’s the fuss? “Four out of ten Americans already used Facebook as a news source in 2016, so this is highly relevant to civil democratic society – to all of us, right now.” Given the nonchalant attempts at moral disengagement, is it fair to assume the industry is this insidious? Is it really a group of people pondering how to steal and monetise our attention?

“It’s literally that. Sitting around a desk and working out how these products can be made as distracting as possible – from how to present notifications and how to drive interaction, to things like autoplay (i.e. the next video starting automatically), refreshing and endless scrolling. This is called ‘distraction by design’.”
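As a toy illustration (not any platform’s actual code; the names and mechanics here are invented), the common thread in the techniques she lists is removing every natural stopping point:

```python
import itertools
import time

# Toy sketch of 'distraction by design'; everything here is invented
# for illustration. Each feature removes a natural stopping cue.

def endless_feed():
    # Endless scrolling: the generator never ends, so the interface
    # never signals "you've seen everything".
    for n in itertools.count(1):
        yield f"post #{n}"

def autoplay(feed, delay_seconds=0.0):
    # Autoplay: the next item starts without any action from the user,
    # so inaction continues the session instead of ending it.
    for post in feed:
        print(post)
        time.sleep(delay_seconds)  # the only choice left is to interrupt

# autoplay(endless_feed())  # uncomment to run -- it never stops, by design
```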

Lack of understanding and regulation

Policy makers had hoped the sector would self-regulate – after all, if it had worked so well in finance, why not here too... The paucity of oversight reached a memorable nadir when a smirking Mark Zuckerberg attended a US Senate hearing to explain the internet to (and take selfies with) a group of woefully underprepared lawmakers nominally in charge of defending the interests of their electorate.

“The media’s lack of nuance doesn’t help either,” Gabriels disparages the state of the public discourse. Analogous to its tendency to divide foodstuffs into those preventing and those causing cancer, the media have taken to reprinting lyrical press releases about the next app/gadget to make man complete, while at the same time conjuring up estimates as to when killer robots will harvest our organs for heavy metals.

Neither perfect, nor neutral

While our discourse on algorithms is wanting, algorithms already influence our discourse to a terrifying extent. “Just because it’s technology doesn’t mean it’s value-neutral. All those algorithms have to be trained using existing data sets. If the data set is biased, then the algorithm will be biased too.” Gabriels cites the famous example of page after page of saccharine stock photos when googling “three white teenagers” versus the deluge of mugshots when googling “three black teenagers”.
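A minimal sketch of how that inheritance works, using invented data: a toy “model” trained on historically skewed outcomes reproduces the skew for identical inputs.

```python
# Toy, invented data: a 'hiring' model trained on historically skewed labels.
# Tuples are (years_experience, group, hired); qualifications are identical,
# but group 1 was historically rejected more often.
history = [
    (5, 0, 1), (6, 0, 1), (4, 0, 1),   # group 0: mostly hired
    (5, 1, 0), (6, 1, 0), (4, 1, 1),   # group 1: mostly rejected
]

def predict(years, group):
    # Majority vote per group: the simplest possible learner, and it
    # faithfully reproduces the bias present in its training data.
    outcomes = [hired for y, g, hired in history if g == group]
    return sum(outcomes) * 2 >= len(outcomes)

print(predict(5, 0))  # True  -- same qualifications...
print(predict(5, 1))  # False -- ...different answer, learned from biased data
```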

Beyond human biases, one ought to remain sceptical of grandiose industry claims too. “We use the term intelligence very inconsistently. AI is absolutely amazing at computational intelligence like playing chess but it displays hardly any autonomy, struggles with language processing and when it comes to emotion recognition software, a lot of things on the market are basically untrustworthy bullshit,” says Gabriels, steering well clear of academic jargon.

Be aware and wary

For her, transparency is key. “The EU’s General Data Protection Regulation gives people the right to know how an AI system arrives at a decision; in quite a few cases, companies can’t or won’t provide this information. I think they should also have to publish the accuracy of their products.” In any case, Gabriels thinks it is problematic to shift all the responsibility onto the user. “After all, Facebook also tracks people who don’t use their service. ‘Offline’ is increasingly an illusion.”

Apart from regulation, her answer is better dialogue. “People need to understand what distraction by design is, or that you pay with your time and attention for the service you’re using.” She takes her outreach duties very seriously. “The threat of mass surveillance has been an academic debate for 40 years – but not a societal one. Academics should be encouraged to participate more in societal debates.”

By Florian Raith
