Webinar Report: Researching Discrimination in E-Commerce and Online Advertising


M-EPLI, along with the Maastricht Law & Tech Lab and the Institute of Data Science, hosted the online webinar ‘Researching Discrimination in E-Commerce and Online Advertising’ on the 4th and 5th of March 2021. Throughout the two-day event, speakers from different countries, institutes and disciplines addressed discrimination issues present in online advertising practices. The first day saw experienced scholars weighing in on these topics, while the second day welcomed young researchers who presented their ideas.

After a brief opening by Dr. Caroline Cauffman and Pedro V. Hernández Serrano (Maastricht University), the first speakers began by discussing protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence. In their draft paper and their presentation, Prof. Dr. Janneke Gerards (Utrecht University) and Prof. Dr. Frederik Zuiderveen Borgesius (Radboud University Nijmegen) argue that, although current non-discrimination law offers some protection from algorithmic discrimination, the use of AI poses several challenges to the law. They noted that certain new types of differentiation used by AI (e.g., differentiation by postal code) could evade existing non-discrimination law altogether, as they do not fall into the category of protected characteristics. Given these loopholes in the protection offered by current European non-discrimination law, they hypothesise that the most suitable way to address algorithmic discrimination by private bodies is the adoption of a hybrid system, with a semi-closed list of grounds and an open system of justifications. The example grounds would serve as benchmarks and create clarity, but since the list would not be fully closed, other grounds, such as postal code, could also be added. The open possibility of exemptions would account for the diversity of potential justifications and offer the flexibility to accommodate new developments.

After the discussion, Prof. Dr. Hans Micklitz (University of Helsinki) was the second speaker on the first day. Referring to his work, he discussed personalised advertising, and started by establishing that the personalisation of marketing leads to the universal and structural vulnerability of consumers and increases imbalance. To define personalisation, he relied on a composite of four different elements: 1) basic information on the consumer, such as age, sex, and residence; 2) individual preferences inferred from the data the consumer leaves behind on the internet; 3) data on social ties inferred from social media; and 4) proxies used by marketing businesses to fill in the remaining gaps. Prof. Micklitz argues that it should not be for the individual consumer to understand how these four elements interact to shape individual advertising; therefore, he suggests that the appropriate solution is to shift the burden of proof onto the marketing businesses.

Prof. Dr. Anne-Marie Hakstian (Salem State University, USA) was the third speaker, and she discussed cases of discrimination on online platforms such as Uber and Airbnb. She examined three lawsuits, namely Selden v. Airbnb, Harrington v. Airbnb and Ramos v. Uber Tech, with the Airbnb cases concerning racial discrimination, and the Ramos case concerning discrimination on the ground of disability. Prof. Hakstian explained the legal challenges these cases posed under US law. Firstly, it had to be assessed whether there was intentional discrimination on behalf of the platform, and the Harrington case answered this in the affirmative. It was further confirmed that businesses in the platform economy may be characterized as ‘places of public accommodation’, and that platform businesses are not per se exempt from the Fair Housing Act. An additional legal challenge, not yet definitively answered by the courts, is whether a platform can be immune from liability under the Communications Decency Act. Lastly, the question arose whether platform users are bound by mandatory arbitration clauses, thus preventing them from suing. The court in Selden held that they are, while in Ramos it decided to the contrary, as the terms and conditions were not appropriately visible to the user.

After a discussion round and a break, the following presenter was Prof. Dr. Christo Wilson (Northeastern University, USA), who discussed algorithm auditing in the Amazon Buy Box. The Buy Box is a feature of the Amazon website which prominently features a certain seller for a specific item, and it is particularly important, as 80–90% of buyers purchase the product via the Buy Box. Prof. Wilson found that the Buy Box algorithm considers the lowest price and the most positive customer feedback to be the most important features for ending up in the Buy Box. He noted that if Amazon itself sells a product, it will win the Buy Box over 90% of the time. An important takeaway from his research is that 50% of the products that end up in the Buy Box are priced highly dynamically. These dynamic sellers peg their price offer to a certain benchmark, such as the lowest or second-lowest price, or the price offered by Amazon, and thus ensure that they remain competitive sellers.
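The benchmark-pegging behaviour described above can be sketched in a few lines of code. This is an illustrative toy, not a reconstruction of any actual repricing tool: the strategy names and the one-cent undercut margin are assumptions made for the example.

```python
# A minimal sketch of a "dynamic seller" repricing strategy that pegs its
# offer to a benchmark drawn from the current competitor offers. Strategy
# names and the default one-cent margin are illustrative assumptions.

def reprice(own_cost, competitor_prices, amazon_price=None,
            strategy="undercut_lowest", margin=0.01):
    """Return a new price pegged to a benchmark among competitor offers."""
    prices = sorted(competitor_prices)
    if strategy == "undercut_lowest":
        target = prices[0] - margin          # just below the lowest offer
    elif strategy == "match_second_lowest":
        target = prices[1]                   # track the second-lowest price
    elif strategy == "match_amazon" and amazon_price is not None:
        target = amazon_price                # peg to Amazon's own offer
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return max(round(target, 2), own_cost)   # never price below cost

print(reprice(8.00, [10.99, 11.50, 12.00]))  # -> 10.98
```

Run repeatedly against fresh competitor prices, such a loop keeps the seller competitive for the Buy Box while guarding against pricing below cost.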

Prof. Dr. Philipp Hacker (European University Viadrina, Germany) presented next, examining algorithmic affirmative action. He stated that algorithmic fairness is a valuable tool, as it attempts to entrench legal and societal values at the code level of the machine learning model itself. Fairness in computer science may entail either individual or group fairness, and as Prof. Hacker explained, increased group fairness will lead to a decrease in individual fairness. To bridge the division between the two fairness notions, he developed a model and demonstrated its use by reflecting on the LSAT scores of white applicants as compared to black applicants. In the second half of the presentation, he turned to the legal challenges of the use of such algorithmic tools. In a European context, the case law of the CJEU suggests that the legality of algorithmic affirmative action depends on the moment at which it is applied. In the selection phase (post-processing), the CJEU assessed this in a restrictive manner, holding that automated rerankings are not permissible and human intervention is necessary. On the other hand, if the affirmative action takes place before the selection phase (e.g., a decision on whom to invite for an interview; pre- and in-processing), the CJEU held that this concerns equality of opportunity and is, therefore, more open to positive action measures. Finally, Prof. Hacker challenged the view that post-processing is more problematic than pre- and in-processing.
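To make the group-fairness notion discussed above concrete, one common metric compares selection rates across groups (demographic parity). The sketch below is a generic illustration of that metric, not Prof. Hacker's model; the example data are invented.

```python
# An illustrative group-fairness check: demographic parity compares the
# selection rates of two groups. Decisions are encoded 1 = selected,
# 0 = rejected. The example groups below are invented for illustration.

def selection_rate(decisions):
    """Fraction of candidates in a group who were selected."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

group_a = [1, 1, 1, 0]   # 75% selected
group_b = [1, 0, 0, 0]   # 25% selected
print(demographic_parity_gap(group_a, group_b))  # -> 0.5
```

Closing such a gap (e.g., by re-ranking candidates) improves group fairness, but it can mean treating two individuals with identical scores differently, which is exactly the tension between group and individual fairness noted above.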

The final speaker of the first day was Dr. Oana Goga (French National Centre for Scientific Research), whose presentation discussed targeted online political advertising on Facebook. Her research focused on the characterization of certain ads as political ads: on Facebook, it is the advertisers themselves who must declare whether an ad is political in nature. To what extent this is controlled by Facebook is unclear; therefore, the true number of political ads is uncertain. In her empirical research on the 2018 Brazilian elections, Dr. Goga found that 2% of ads shown to Facebook users were political in nature but not declared as such. This demonstrates that Facebook missed a considerable number of political ads that were not self-reported by advertisers. Furthermore, she illustrated that issue ads (e.g., concerning abortion) can be difficult to characterize in practice. Her research showed that people disagree on more than 50% of ads, with some saying that a certain ad is political in nature and some saying it is not. To support transparency, Dr. Goga suggests that social media platforms should provide ad libraries including all advertisements. The first day was concluded with a final discussion round and some closing remarks.

After the opening of the second day of the webinar, the first speaker was Li Qian (Maastricht University), whose presentation discussed how competition law should respond to AI-enabled price discrimination. After defining when AI-enabled price discrimination occurs, she illustrated that this may impede effective competition and harm consumers if done by dominant undertakings. As a suggested response, she analysed Article 17 of the Chinese Anti-Monopoly Law and its three requirements, namely that 1) the undertaking must hold a dominant market position; 2) there must be discriminatory treatment; and 3) no justifications are available. This legal regime, although not specifically tailored to AI-based discrimination, still applies to the digital market and can inspire other jurisdictions.

The second speaker of the second day was Dr. Fabrizio Esposito (NOVA University Lisbon), who spoke about why the right to know impersonal prices exists and empowers consumers. His presentation focused on EU law, more specifically a provision in the Consumer Rights Directive which entails that there is a right to know if a price was personalised. To illustrate the problems with the provision, he showed an example of an advertisement that merely states that the price was calculated on prior purchasing behaviour, which was held to satisfy the requirements of the provision. Instead, he suggests that the consumer should not only have a right to know that the price was personalised, but also what the impersonal price is compared to the personalised one.

Andreea Grigoriu (Maastricht University) was the following speaker, who discussed the elements of establishing a valuable dataset in her research into abusive language on YouTube. In the collection of text data, Andreea focused on divisive topics like veganism, gender identity and workplace diversity. In order to define abusive language, she relied on the common denominators present in legal definitions relating to hate speech: 1) incitement to hatred; 2) threat, violent behaviour, or offensive language; and 3) degradation or belittlement. Andreea opted not to use binary labels, but instead created a scale of 1 to 7 to identify the degree of abusiveness in a text. Lastly, she trained law students with some pre-existing knowledge to be annotators, as opposed to using crowdsourced volunteers.
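The 1-to-7 scale described above can be illustrated with a small aggregation sketch: several annotators score each comment, and the scores are averaged rather than collapsed into a binary abusive/not-abusive label. The comments and scores below are invented examples, not data from the actual research.

```python
# A toy sketch of aggregating multi-annotator abusiveness scores on a
# 1-7 scale. Averaging preserves the degree of abusiveness, which a
# binary label would discard. All example data are invented.

from statistics import mean

def aggregate_labels(annotations):
    """Average the per-comment scores given by multiple annotators."""
    return {comment: round(mean(scores), 2)
            for comment, scores in annotations.items()}

annotations = {
    "comment_1": [1, 2, 1],   # broadly benign
    "comment_2": [6, 7, 5],   # strongly abusive
    "comment_3": [3, 4, 4],   # borderline
}
print(aggregate_labels(annotations))
```

A graded label like this also makes annotator disagreement visible: a comment averaging 3.67 signals a genuinely borderline case, whereas a forced binary label would hide that uncertainty.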

This was followed by the presentation of Richard Frissen and Dr. Rohan Nanda (Maastricht University), who reflected on bias indicators in recruitment processes using Natural Language Processing (NLP). For their upcoming research, they explained that resume screening may reveal certain information which consciously or unconsciously influences the hiring personnel’s decision about which candidates to invite for an interview. Their research will identify possible bias indicators in a resume, and once the indicators are identified, they aim to investigate how NLP can contribute to the realization of a blind resume. This will be done by developing an automated process that makes it possible to hide the identified indicators on a resume.
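The hiding step described above can be sketched as follows. Real systems would rely on NLP techniques such as named-entity recognition; this minimal illustration uses simple regular expressions instead, and the three indicator patterns (name line, date of birth, email address) are assumptions chosen for the example, not the indicators the researchers will identify.

```python
# A minimal sketch of a "blind resume" step: masking assumed bias
# indicators in resume text with placeholder tags. The patterns below
# are illustrative assumptions; a real pipeline would use NLP (e.g.,
# named-entity recognition) rather than hand-written regexes.

import re

PATTERNS = {
    "[NAME]":  r"\bName:\s*.+",              # a "Name: ..." header line
    "[DOB]":   r"\b\d{2}/\d{2}/\d{4}\b",     # dd/mm/yyyy dates
    "[EMAIL]": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
}

def blind_resume(text):
    """Replace every match of each indicator pattern with its tag."""
    for tag, pattern in PATTERNS.items():
        text = re.sub(pattern, tag, text)
    return text

resume = "Name: Jane Doe\nBorn 01/02/1990\nContact: jane.doe@example.com"
print(blind_resume(resume))
```

The hiring personnel would then screen only the redacted text, which is the core idea behind the blind resume.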

The subsequent speaker was Dr. Antonio Davola (LUISS Guido Carli University, Italy & University of Amsterdam), who spoke on fostering consumer protection in the granular market. After elaborating on the advantages and disadvantages of price discrimination, he distinguished three main approaches to regulating algorithmic discrimination: 1) the functional interpretation of GDPR-related user rights; 2) through competition law; and 3) through consumer law. None of these solutions is without shortcomings; therefore, he concluded that further harmonization is necessary. He argued that private law should be used as a conceptually unifying framework to preserve consumers’ free will. Reflecting on existing consumer law, he hypothesised that rules on defective consent can provide a valid solution to correct some adverse and discriminatory effects, while at the same time preserving benefits.

The presentation of Marvin van Bekkum and Prof. Dr. Frederik Zuiderveen Borgesius (Radboud University Nijmegen) followed, who discussed using special categories of data to prevent discrimination by AI, and whether the GDPR needs a new exception. They explained that, in order to investigate whether an AI system discriminates on the basis of a certain characteristic (e.g., ethnicity), that characteristic must be known. However, the GDPR prohibits collecting data about special categories (such as people’s ethnicity), subject to exceptions. The research of Marvin van Bekkum and Prof. Zuiderveen Borgesius examines whether the GDPR needs a new exception to the prohibition on using special categories of data, to help prevent discrimination by artificial intelligence.

The last presentation of the webinar was delivered by Jingxi Liu (Queen Mary University London), who discussed gender bias in behavioural advertising. She explained that advertisers may pay a higher price when the target group of an advertisement is women; on this view, the differential placement of online advertising is not discriminatory behaviour but is determined by market factors. Nevertheless, the effect may be that an advertisement for a higher-paying job is shown to women less often than to men, depriving women of choice and resulting in indirect discrimination. Therefore, Jingxi Liu reflects on the dilemma of whether there should be a new legal framework requiring advertisers to target both genders equally.

After a final discussion round and the delivery of closing remarks, the webinar was concluded. Overall, the webinar provided a topical overview of discrimination in e-commerce and online advertising. Although there is no universal agreement on the most appropriate tool to tackle these concerns, the debate led to the discussion of multiple proposed solutions.

Written by Laura Zsarnai, M-EPLI intern

The links to the recorded two-day webinar can be found here: Day 1 and Day 2