Dutch court rejects Uber drivers’ ‘robo-firing’ charge but tells Ola to explain algo-deductions
Uber has prevailed in litigation in the Netherlands, where its European business is headquartered, brought by drivers who alleged the company uses algorithms to terminate them; the court rejected that charge.
The ride-hailing giant has also been largely successful in fending off wide-ranging requests from drivers seeking more of the personal data it holds on them.
A number of Uber drivers filed the suits last year with the support of the App Drivers & Couriers Union (ADCU), in part because they are seeking to port the data Uber’s platform holds on them to a union-administered data trust (called Worker Info Exchange) that they want to set up to further their ability to collectively bargain with the platform giant.
The court did not object to them seeking data, saying such a purpose does not stand in the way of exercising their personal data access rights, but it rejected most of their specific requests — at times saying they were too general or had not been sufficiently explained or must be balanced against other rights (such as passenger privacy).
The ruling hasn’t gone entirely Uber’s way, though, as the court ordered the tech giant to hand over a little more data to the litigating drivers than it has so far. While it rejected driver access to information including manual notes about them, tags and reports, Uber has been ordered to provide drivers with individual ratings given by riders on an anonymized basis — with the court giving it two months to comply.
In another win for Uber, the court did not find that its (automated) dispatch system results in a “legal or similarly significant effect” for drivers under EU law, and it has therefore allowed the system to be applied without additional human oversight.
The court also rejected a request by the applicants that any data Uber provides to them be supplied via a CSV file or API, finding that the PDF format Uber has provided is sufficient to comply with legal requirements.
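For context on why the format dispute matters: CSV or API output is structured and can be loaded straight into analysis tools, whereas a PDF generally has to be scraped first. A rough sketch in Python, using invented trip records and field names rather than Uber’s actual export schema, shows how directly CSV data lends itself to programmatic analysis:

```python
import csv

# Invented example records; field names are illustrative only and do
# not reflect Uber's actual data export schema.
trips = [
    {"trip_id": "t-001", "fare_eur": 12.50, "rating": 5},
    {"trip_id": "t-002", "fare_eur": 8.20, "rating": 4},
]

# A structured export like this can be re-read and aggregated in a few
# lines, which is the practical difference drivers were arguing for.
with open("trips.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["trip_id", "fare_eur", "rating"])
    writer.writeheader()
    writer.writerows(trips)
```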
In response to the judgments, an Uber spokesman sent us this statement:
“This is a crucial decision. The Court has confirmed Uber’s dispatch system does not equate to automated decision making, and that we provided drivers with the data they are entitled to. The Court also confirmed that Uber’s processes have meaningful human involvement. Safety is the number one priority on the Uber platform, so any account deactivation decision is taken extremely seriously with manual reviews by our specialist team.”
The ADCU said the litigation has established that drivers taking collective action to seek access to their data is not an abuse of data protection rights, and it lauded the aspects of the judgment where Uber was ordered to hand over more data.
It also said it sees potential grounds for appeal, saying it’s concerned that some aspects of the judgments unduly restrict the rights of drivers, which it said could interfere with the right of workers to access employment rights — “to the extent they are frustrated in their ability to validate the fare basis and compare earnings and operating costs”.
“We also feel the court has unduly put the burden of proof on workers to show they have been subject to automated decision making before they can demand transparency of such decision making,” it added in a press release. “Similarly, the court has required drivers to provide greater specificity on the personal data sought rather than placing the burden on firms like Uber and Ola to clearly explain what personal data is held and how it is processed.”
The two Court of Amsterdam judgments can be found here and here (both are in Dutch; we’ve used Google Translate for the sections quoted below).
Our earlier reports on the legal challenges can be found here and here.
The Amsterdam court has also ruled on similar litigation filed against India-based Ola last year, ordering the ride-hailing company to hand over a wider array of data than it currently does and to explain the main criteria of a ‘penalties and deductions’ algorithm that can be applied to drivers’ earnings.
The judgment is available here (in Dutch). See below for more details on the Ola judgment.
Commenting in a statement, James Farrar, a former Uber driver who is now director of the aforementioned Worker Info Exchange, said: “This judgment is a giant leap forward in the struggle for workers to hold platform employers like Uber and Ola Cabs accountable for opaque and unfair automated management practices. Uber and Ola Cabs have been ordered to make transparent the basis for unfair dismissals, wage deductions and the use of surveillance systems such as Ola’s Guardian system and Uber’s Real Time ID system. The court completely rejected Uber Ola’s arguments against the right of workers to collectively organize their data and establish a data trust with Worker Info Exchange as an abuse of data access rights.”
Update: Speaking to TechCrunch in a call following the ADCU’s press statement, Farrar emphasized that there are many other Article 22 cases still outstanding against Uber, related to other types of terminations. “The Article 22 cases have not been decided yet,” he told us. “This is an indication for one set of cases but there are many more where we think are absolutely pure automated decisions where there has been no intervention.
“I am really heartened by the Ola case where automated decision-making was identified. Because I think that is more analogous to the types of automated firings that Uber has been doing than what were decided today.”
The challenge with automated decision-making is this: if the people subject to such systems can’t get full transparency into the data and processes involved, how can they determine whether the outcome was fair?
Discussing the Uber judgments, Jill Toh, a PhD researcher in data rights at the University of Amsterdam, told us: “It has shown that drivers are still insufficiently able to obtain sufficient data and/or a more comprehensive understanding of their work and how they are algorithmically managed. The one good point of the judgment is that Uber’s claim that workers are abusing the GDPR was not granted.
“In some parts where the court rejected workers request for access to data, the court’s explanation is that workers are not specific enough to the exact claims of their personal data, yet it is precisely because they do not know what specific data is being captured of them that they need this access,” she went on. “Another irony is that in the robo-firing case, one of the alleged fraudulent actions by drivers (related to trip fraud), is the use of software to game Uber system’s by attempting to pick their rides. It’s clear that these tactics and strategies are aimed to gain more control over their work. Both judgments have shown that workers are still a long way from that. What is also evident is that workers are still unprotected by unfair dismissal, whether by an algorithm/automated decision-making system or by humans.”
In an interesting (related) development in Spain, which we reported on yesterday, the government there has said it will reform labor law for delivery platforms, requiring them to provide workers’ legal representatives with information on the rules of any algorithms that manage and assess them.
Court did not find Uber does ‘robo-firings’
In one of the lawsuits, the applicants argued that Uber had infringed their right not to be subject to automated decision-making when it terminated their driver accounts, and that it had not complied with its transparency obligations (within the meaning of GDPR Articles 13, 14 and 15).
Article 22 GDPR gives EU citizens the right not to be subject to a decision based solely on automated processing (including profiling) where the decision has legal or otherwise significant consequences for them. There must be meaningful human interaction in the decision-making process for it to not be considered solely automated processing.
Uber argued that it does not carry out automated terminations of drivers in the region and therefore that the law does not apply — telling the court that potential fraudulent activities are investigated by a specialized team of Uber employees (aka the ‘EMEA Operational Risk team’).
And while Uber said the team makes use of software that can detect potentially fraudulent activity, it told the court that investigations are carried out by employees following internal protocols, which require them to analyze potential fraud signals and the “facts and circumstances” to confirm or rule out the existence of fraud.
Uber said that if a consistent pattern of fraud is detected, a decision to terminate requires a unanimous decision from two employees of the Risk team. When the two employees do not agree, Uber says a third conducts an investigation, presumably to cast a deciding vote.
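What Uber described to the court amounts to a two-reviewer consensus rule with escalation to a third investigator. As a minimal sketch only, with hypothetical names and structure (Uber has not published its actual workflow), the logic looks something like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the two-reviewer consensus rule Uber described
# to the court; names and structure are illustrative, not Uber's code.

@dataclass
class Review:
    reviewer_id: str
    found_fraud: bool

def termination_decision(first: Review, second: Review,
                         tiebreaker: Optional[Review] = None) -> bool:
    """Terminate only on a unanimous fraud finding by two reviewers;
    if they disagree, a third reviewer's investigation decides."""
    if first.found_fraud == second.found_fraud:
        return first.found_fraud  # unanimous either way
    if tiebreaker is None:
        raise ValueError("reviewers disagree: a third investigation is required")
    return tiebreaker.found_fraud  # the deciding vote, per Uber's description
```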
It provided the court with explanations for each of the terminations of the litigating applicants — and the court writes that Uber’s explanations of its decision-making process for terminations were not disputed. “In the absence of evidence to the contrary, the court will assume that the explanation provided by Uber is correct,” it wrote.
Interestingly, in the case of one of the applicants, Uber told the court they had been using (unidentified) software to manipulate the Uber Driver app so they could view a passenger’s destination before accepting the ride, enabling them to cherry-pick the more expensive jobs, a practice that’s against Uber’s terms. Uber said the driver was warned that if they used the software again they would be terminated. But a few days later they did so, leading to another investigation and a termination.
However, it’s worth noting that the activity in question dates back to 2018, and Uber has since changed how its service operates to show drivers the destination before they accept a ride, a change it flagged in response to a recent UK Supreme Court ruling that confirmed the drivers who brought that challenge are workers, not self-employed.
Some transparency issues were found
On the associated question of whether Uber had violated its transparency obligations to terminated drivers, the court found that in the cases of two of the four applicants Uber had done so (but not for the other two).
“Uber did not clarify which specific fraudulent acts resulted in their accounts being deactivated,” the court writes in the case of the two applicants it found had not been provided with sufficient information related to their terminations. “Based on the information provided by Uber, they cannot check which personal data Uber used in the decision-making process that led to this decision. As a result, the decision to deactivate their accounts is insufficiently transparent and verifiable. As a result, Uber must provide [applicant 2] and [applicant 4] with access to their personal data pursuant to Article 15 of the GDPR insofar as they were the basis for the decision to deactivate their accounts, in such a way that they are able to verify the correctness and lawfulness of the processing of their personal data.”
The court dismissed Uber’s attempt to evade disclosure on the grounds that providing more information would give the drivers insight into its anti-fraud detection systems, which it suggested could then be used to circumvent them, writing: “In this state of affairs, Uber’s interest in refusing access to the processed personal data of [applicant 2] and [applicant 4] cannot outweigh the right of [applicant 2] and [applicant 4] to access their personal data.”
Compensation claims related to the charges were rejected, including in the case of the two applicants who were not provided with sufficient data on their terminations, with the court saying they had not provided “reasons for damage to their honor or good name or damage to their person in any other way”.
The court has given Uber two months to provide the two applicants with personal data pertaining to their terminations. No penalty has been ordered.
“For the time being, the trust is justified that Uber will voluntarily comply with the order for inspection [of personal data] and will endeavor to provide the relevant personal data,” it adds.
No legal/significant effect from Uber’s algo-dispatch
The litigants’ data access case also sought to challenge Uber’s algorithmic management of drivers through its use of an algorithmic batch matching system to allocate rides, arguing that, under EU law, drivers had a right to information about the automated decision-making and profiling Uber uses to run the service, so that they could assess the impacts of that automated processing.
However the court did not find that automated decision-making “within the meaning of Article 22 GDPR” takes place in this instance, accepting Uber’s argument that “the automated allocation of available rides has no legal consequences and does not significantly affect the data subject”.
Again, the court found that the applicants had “insufficiently explained” their request.
From the judgment:
It has been established between the parties that Uber uses personal data to make automated decisions. This also follows from section 9 ‘Automated decision-making’ included in its privacy statement. However, this does not mean that there is an automated decision-making process as referred to in Article 22 GDPR. After all, this requires that there are also legal consequences or that the data subject is otherwise significantly affected. The request is only briefly explained on this point. The applicants argue that Uber has not provided sufficient concrete information about its anti-fraud processes and has not demonstrated any meaningful human intervention. Unlike in the case with application number C/13/692003/HA RK 20/302, in which an order is also given today, the applicants did not explain that Uber concluded that they were guilty of fraud. The extent to which Uber has taken decisions about them based on automated decision-making is therefore insufficiently explained. Although it is obvious that the batched matching system and the upfront pricing system will have a certain influence on the performance of the agreement between Uber and the driver, it has not been found that there is a legal consequence or a significant effect, as referred to in the Guidelines. Since Article 15 paragraph 1 under h GDPR only applies to such decisions, the request under I (iv) is rejected.
Ola must hand over data and algo criteria
In this case the court ruled that Ola must provide applicants with a wider range of data than it currently does, including a ‘fraud probability profile’ it maintains on drivers and data within a ‘Guardian’ surveillance system it operates.
The court also found that the algorithmic decisions Ola uses to make deductions from driver earnings do fall under Article 22 of the GDPR, as there is no significant human intervention and the deductions/fines themselves may have a significant effect on drivers.
On this point, it ordered Ola to provide applicants with information on how these algorithmic decisions are made by communicating “the main assessment criteria and their role in the automated decision… so that [applicants] can understand the criteria on the basis of which the decisions were taken and they are able to check the correctness and lawfulness of the data processing”.
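The court’s wording points to disclosure of the main criteria and their role in the decision rather than the code itself. A minimal sketch of what criteria-level transparency could look like, with entirely hypothetical criteria and weights (the judgment does not reveal Ola’s actual factors):

```python
# Entirely hypothetical criteria and weights, for illustration only; the
# judgment orders Ola to disclose its actual "main assessment criteria".
CRITERIA_WEIGHTS = {
    "cancellations_after_accept": 0.5,
    "route_deviation_score": 0.3,
    "passenger_complaint_rate": 0.2,
}

def deduction_decision(signals):
    """Score a driver and return each criterion's contribution, so the
    basis for any deduction can be communicated back to the driver."""
    contributions = {
        name: weight * signals.get(name, 0.0)
        for name, weight in CRITERIA_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

# e.g. deduction_decision({"cancellations_after_accept": 2.0})
# -> (1.0, {...}) with each criterion's share of the score itemized
```

The design point is simply that an itemized breakdown, not just a final number, is what would let a driver “check the correctness and lawfulness” of a deduction in the way the court describes.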
Ola has been contacted for comment.
This report was updated with additional comment.
Source: TechCrunch