21 Sep 2022

Indirect discrimination can occur when there are rules or arrangements that apply to a group of employees or job applicants and appear neutral, but which in practice disadvantage employees with a protected characteristic (e.g. sex, race, age, disability, sexual orientation or gender reassignment).

Indirect discrimination can be lawful if the employer can justify the otherwise discriminatory rule or arrangement on objective grounds, by showing that: (a) it is pursuing a legitimate aim; and (b) the rule or arrangement is a proportionate means of achieving that aim.

For example, the fire service requires all job applicants to take a number of physical tests. This could amount to indirect discrimination, because individuals with certain protected characteristics are less likely to pass the tests. However, the fire service has a legitimate aim in ensuring that candidates are fit enough to perform the physically demanding elements of the job, and the physical tests are a proportionate means of achieving that aim.

This article examines the 2021 case of Filcams CGIL Bologna and others v Deliveroo Italia SRL in the Bologna Labour Court, in which the court held that Deliveroo’s algorithm, which determined riders’ priority access to delivery time slots, was discriminatory.

The facts

Deliveroo riders used a digital platform downloaded onto their smartphones, which created a personalised rider profile. An algorithm then determined when riders could access shifts based on a “score” awarded to each one.

This algorithm scored the individuals based on a number of criteria, including:

  • the number of occasions on which the rider, despite having booked a session, did not participate within the first 15 minutes of the session;
  • the number of times the rider became available for work between the hours of 8 pm and 10 pm on Fridays for home consumption food delivery; and
  • the penalties imposed on riders who cancelled booked sessions with less than 24 hours’ notice.

When new shifts were released each week, the riders with the highest scores were given first access, and the riders with the lowest scores were given last access.
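
The judgment does not set out the algorithm’s actual formula (and, as discussed below, Deliveroo declined to reveal its inner workings), but a simplified, hypothetical sketch of this kind of scoring and ranking system might look like the following. All field names, weights and rider data here are illustrative assumptions, not Deliveroo’s code:

```python
from dataclasses import dataclass

@dataclass
class RiderStats:
    # Hypothetical inputs, mirroring the criteria described in the decision
    late_no_shows: int          # booked a session but did not participate within the first 15 minutes
    peak_sessions_worked: int   # times available for Friday 8-10 pm home food delivery
    late_cancellations: int     # sessions cancelled with less than 24 hours' notice

def reliability_score(stats: RiderStats) -> float:
    """Illustrative score: rewards peak availability, penalises no-shows and late cancellations.
    The weights are invented for illustration; the real weighting was never disclosed."""
    return (2.0 * stats.peak_sessions_worked
            - 1.5 * stats.late_no_shows
            - 1.0 * stats.late_cancellations)

def shift_access_order(riders: dict[str, RiderStats]) -> list[str]:
    """Riders with the highest scores are offered newly released shifts first."""
    return sorted(riders, key=lambda rider: reliability_score(riders[rider]), reverse=True)

# rider_b cancels late twice (perhaps for childcare) and drops down the queue,
# because the score takes no account of the reason for the cancellation.
riders = {
    "rider_a": RiderStats(late_no_shows=0, peak_sessions_worked=4, late_cancellations=0),
    "rider_b": RiderStats(late_no_shows=0, peak_sessions_worked=4, late_cancellations=2),
}
print(shift_access_order(riders))  # ['rider_a', 'rider_b']
```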

Deliveroo’s objective justification for this policy was simply that the commercial relationship between the company and its riders entitled it to monitor and distribute shifts as it saw fit, and that all riders were treated in the same manner.

Court decision

The court ruled that:

  • a company like Deliveroo is entitled to require its riders to be reliable and committed, in order to ensure that there are sufficient riders available to cover peak shifts;
  • however, although the system was applied consistently to all staff, the flaw in Deliveroo’s system was that it failed to consider the reasons for cancellation of, or non-participation in, a shift. This meant that the system could indirectly discriminate: for example, a rider might cancel for childcare reasons or for a doctor’s appointment;
  • because the algorithm took no account of the reason for a late cancellation or for non-participation in peak shifts, it was potentially indirectly discriminatory. It was for Deliveroo to objectively justify the system, which it failed to do, in part because of the company’s reluctance to reveal the algorithm’s inner workings (discussed below).

Absence of transparency

Organisations using algorithms are often unwilling to disclose how those algorithms operate in practice. This is usually because either:

  • AI software does not allow a user to ‘look under the bonnet’ to verify how a decision has been reached. Instead of storing what it has learned in a neat block of digital memory, it diffuses the input information in a way that is exceedingly difficult to decipher. It may be difficult to track the logic behind how a decision has been reached; or
  • developers are unwilling to disclose their algorithms (even in court) for fear of losing their competitive advantage.

Lack of transparency was a feature of the Deliveroo case since the company declined to “prove” the “concrete mechanism” at the heart of the algorithm. Since there were factual disputes in the case over how the algorithm worked, the court upheld the claimants’ case in relation to those factual matters in the absence of Deliveroo “proving” its alternative factual narrative.

It is worth noting that a similar approach was taken in a Dutch case in 2020, which concluded that a discriminatory algorithm was being used where the state authority that deployed it failed to provide a full and proper explanation of how the algorithm operated.

It is likely that a court or tribunal in the UK would take a similarly dim view of organisations that deploy algorithms alleged to be discriminatory but decline to explain how they work. It is a fundamental legal principle that, once a claimant has established facts from which discrimination could be inferred, the burden of proof shifts to the employer to demonstrate otherwise.

Over-reliance on output

Whilst the algorithm at the heart of the Deliveroo case is no longer in use, there is an important lesson to be learnt from the decision: unthinking reliance on algorithms can lead to unintended discrimination.

The flaw in Deliveroo’s system was its failure to distinguish between the reasons for cancellation of, or non-participation in, a shift. Deliveroo did not question the algorithm’s output or consider the repercussions, i.e. was the reason for absence linked (or potentially linked) to a protected characteristic? The system failed to recognise and sensibly accommodate disadvantaged staff, leaving the company open to accusations of discrimination.
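
The decision does not prescribe a fix, but a minimal sketch of one possible mitigation, reusing the hypothetical RiderStats class from the earlier sketch and assuming invented reason codes, is to check any declared cancellation reason before applying an automatic penalty, and to route sensitive cases to a human reviewer:

```python
# Hypothetical reason codes a rider might declare when cancelling a booked session.
# Which reasons count as sensitive is a policy and legal judgement, not one the code can make.
SENSITIVE_REASONS = {"childcare", "medical_appointment", "disability_related", "religious_observance"}

# (rider_id, declared_reason) pairs parked for a human decision rather than an automatic penalty
human_review_queue: list[tuple[str, str]] = []

def apply_late_cancellation(rider_id: str, stats: RiderStats, declared_reason: str | None) -> RiderStats:
    """Apply the late-cancellation penalty only when no sensitive reason is declared;
    otherwise refer the case for human review instead of feeding it straight into the score."""
    if declared_reason in SENSITIVE_REASONS:
        human_review_queue.append((rider_id, declared_reason))
        return stats  # score unchanged pending a human decision
    return RiderStats(
        late_no_shows=stats.late_no_shows,
        peak_sessions_worked=stats.peak_sessions_worked,
        late_cancellations=stats.late_cancellations + 1,
    )
```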

As such, employers should avoid relying solely on AI systems governed by data-driven metrics to assess employee performance. Human oversight of the output of AI is key.

This article is part of a series written by Daniel Gray on “AI in the Workplace”.