This week, a third driver won his license back after being unfairly dismissed by Uber. The driver was one of six who challenged Uber’s automated deactivations in the Amsterdam District Court earlier this year. All drivers in the case were alleged to have engaged in fraudulent activity on the basis of geolocation checks conducted through Uber’s Hybrid Real Time ID (RTID) system. We analysed driver data, obtained through a subject access and data portability request, to decode Uber’s claims. As Worker Info Exchange, we have been supporting a growing number of drivers who have been unfairly dismissed after being flagged by the RTID system. Challenging deactivations forms a significant and urgent aspect of our work, since a dismissal by Uber means not only losing access to the platform but also losing one’s livelihood. Uber reports all dismissals to Transport for London (TfL), which takes licensing action on the basis of Uber’s algorithmic decisions. This can result in the revocation of drivers’ PHV licenses, which strips away their ability to work for any operator.
The facial recognition component of RTID - Microsoft’s FACE API - which asks drivers to submit a live selfie to verify their identities, has been in the spotlight recently due to its failure to match faces correctly. We have raised our concerns about FACE API with Microsoft, who noted in response that the tech provider on the one hand, and “the entity that builds and operates the system utilising that technology” on the other, “each have distinct roles and responsibilities to play in enabling the system to operate appropriately and fairly.”
The slew of cases we are currently handling suggests that neither Uber nor TfL is fit to exercise this responsibility, and facial recognition is not the only issue. RTID is also tied to the complex location profiling systems Uber operates to prevent fraud on its platform. These location detection systems are triggered when RTID selfies are submitted, prompting Uber’s algorithms to review the precise GPS locations of the devices associated with the driver account.
The majority of deactivations we are seeing happen in this context. Drivers are accused of account-sharing after geolocation checks determine that their accounts are being “accessed” from two devices that are “a significant distance apart” within a short timeframe. This vague description of “account access” leaves drivers in a complete state of uncertainty and insecurity, facing mounting expenses and months of lost work as they struggle to appeal the decision. In its allegations of fraud, Uber does not offer an interpretation of what access means, while some license revocations refer only to incidents of “attempted access,” with no clarification of whether the suspect device ever successfully logged in to the driver account.
Through the cases we have examined, we have discovered a variety of reasons for GPS locations taking on an irregular appearance. It may be that the driver lost reception on one device and switched to another; that a family member used one device while the driver was elsewhere completing trips on his work device; or, as we’ll explore further in this blog, that the driver previously used a friend’s device to access his account, and that device has remained linked to the account, generating data suggestive of suspicious activity.
Sometimes, Uber contacts drivers after they’ve been dismissed to ask a series of questions about whether the drivers have encountered any unusual or unexpected events, security issues or suspicious behaviour, without explaining what these events or behaviours might look like. Often, drivers are not informed of the outcomes of these reviews. In these instances, we have to perform the ‘human review’ that Uber claims all deactivation decisions are subject to and decipher what real life activities the data correlates with. Here is one such example:
In late 2020, an Uber driver had his account suspended due to an allegation of fraud. The driver had been working with Uber for seven years and had an exceptional rating of 4.97 obtained through nearly 15,000 trips. The driver denied the allegation and tried to contact Uber numerous times over the course of a week to reverse the decision. He was dismissed by Uber two weeks later and subsequently had his license revoked by TfL the following month. TfL’s letter stated:
“You were dismissed by Uber... This was after they conducted a verification check at 09:13 BST... Two devices were found to be used to attempt to access your account at the same time from two different locations which were a significant distance apart, suggesting that someone other than yourself was attempting to use your account.”
Indeed, at 09:11 the driver was asked to submit a selfie for an ID check, which he passed. Through this check, it was verified that he was at the location where he had just completed a trip. However, Uber then conducted a further check to identify the locations of the other devices connected with his account, and at 09:13 it detected a device several miles away that could potentially access his account.
In order to get more information on the device causing the suspicious activity, we obtained the driver’s data. Details of the data we retrieved are available in the guidance document provided by Uber. This included two datasets of particular interest to us:
1. Driver Detailed Device Data, which contains information on GPS locations, time stamps for these locations as well as further information about the device such as model, operating system, carrier, serial number and IP address.
2. Driver Online Offline, which contains information on GPS locations corresponding to the status of the driver on the app. These are: Open, En route, On trip and Offline.
These datasets indicated to us that there are two different states in which the Uber app may be transmitting location information, and therefore be considered “accessed”: online, with the app running in the foreground or background and ready to accept work; or offline, with the app unavailable for bookings.
The Driver Detailed Device Data for the day of the suspension confirmed that two devices were being tracked. We separated the corresponding GPS data using the serial numbers provided for the devices (Fig. 1). Device 1 represents the device taking the trips and Device 2 represents the device attempting to access the account:
Driver Detailed Device Data - Device 1
Driver Detailed Device Data - Device 2
However, when we mapped the Driver Online/Offline data onto the Detailed Device Data, we found that it only matched the activity of Device 1 (Fig. 2). This showed us that Device 2 had not been online and had not been used to accept any fares or take any trips; nor was there any history of it attempting to do so.
Driver Detailed Device Data - Device 1
Driver Detailed Device Data - Device 2
Driver Online/Offline
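The two analysis steps described above (separating the Detailed Device Data by serial number, then matching the Online/Offline log against each device’s pings) can be sketched in a few lines. The records and field names below are illustrative stand-ins, not Uber’s exact export schema:

```python
from collections import defaultdict

# Illustrative records in the shape of the "Driver Detailed Device Data"
# and "Driver Online Offline" exports (field names are our assumption).
device_data = [
    {"serial": "DEV1", "time": "09:05", "lat": 51.51, "lon": -0.12},
    {"serial": "DEV1", "time": "09:11", "lat": 51.50, "lon": -0.13},
    {"serial": "DEV2", "time": "09:13", "lat": 51.55, "lon": -0.05},
]
online_offline = [
    {"time": "09:05", "status": "On trip", "lat": 51.51, "lon": -0.12},
    {"time": "09:11", "status": "Open", "lat": 51.50, "lon": -0.13},
]

# Step 1: separate the GPS trail by device serial number (as in Fig. 1).
by_device = defaultdict(list)
for row in device_data:
    by_device[row["serial"]].append(row)

# Step 2: match each device's pings against the online/offline log (Fig. 2).
# A ping "matches" if an online-status record shares its timestamp and location.
online_keys = {(r["time"], r["lat"], r["lon"]) for r in online_offline}
for serial, pings in by_device.items():
    matched = [p for p in pings if (p["time"], p["lat"], p["lon"]) in online_keys]
    print(serial, "online pings:", len(matched), "of", len(pings))
    # DEV1 online pings: 2 of 2
    # DEV2 online pings: 0 of 1
```

On the real data, exactly this pattern emerged: every online record lined up with Device 1, while Device 2 had no online activity at all.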
Further inspecting the data, we were able to identify the home base of Device 2, which the driver then confirmed as the address of a friend, whose device he had used to carry out account maintenance on a prior occasion.
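One simple way to infer a device’s home base from this kind of GPS trail is to take its most frequent overnight location. The sketch below is purely illustrative: the field names, coordinate rounding, and night-hours heuristic are our assumptions, not a description of how either we or Uber formally processed the data.

```python
from collections import Counter

def infer_home_base(pings, night_hours=range(0, 6)):
    """Guess a device's 'home base' as its most frequent overnight GPS
    location, rounded to ~100 m. Illustrative heuristic only."""
    overnight = [
        (round(p["lat"], 3), round(p["lon"], 3))
        for p in pings
        if p["hour"] in night_hours
    ]
    if not overnight:
        return None
    return Counter(overnight).most_common(1)[0][0]

# Hypothetical pings: two overnight fixes cluster at one address.
pings = [
    {"hour": 1, "lat": 51.5501, "lon": -0.0502},
    {"hour": 2, "lat": 51.5502, "lon": -0.0501},
    {"hour": 14, "lat": 51.5074, "lon": -0.1278},
]
print(infer_home_base(pings))  # (51.55, -0.05)
```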
To offer some context on device use, it is common practice for drivers to use multiple devices to access their accounts. In fact, many drivers point to the necessity of carrying both a private and a work phone, due to the technical glitches that can occur when using the Uber app alongside other apps and phone functions. Uber does not provide any specific guidance on device selection or usage, and is clear that the responsibility for any technical failures rests solely with the drivers:
“You are responsible for acquiring and updating compatible hardware or devices necessary to access and use the Services and Applications and any updates thereto. Uber does not guarantee that the Services, or any portion thereof, will function on any particular hardware or devices. In addition, the Services may be subject to malfunctions and delays inherent in the use of the Internet and electronic communications.”[1]
Drivers sometimes choose to access their accounts on different devices to complete various administrative jobs as well. The Uber driver portal provides many other services beyond the dispatch of jobs. For instance, drivers can download invoices or XML files detailing the trips they have carried out for Uber. These functions may not be supported by some smartphones and require the use of other, suitably configured devices, such as more advanced smartphones or personal computers. Drivers therefore frequently switch between devices to carry out a range of personal and business-related tasks.
These would have been easy and straightforward explanations for the driver to offer, had he been presented with the data in the first instance. However, Uber’s choice to leverage these use patterns as evidence of fraud, rather than offering drivers clear information on how particular types of device activity may result in deactivation, indicates that this system is really designed for the surveillance of drivers, under the guise of providing them with independence and flexibility.
That this simple action resulted in a dismissal and license revocation that took months of stressful legal proceedings to resolve raises important questions for us about the role of transparency and human review in algorithmic systems. If the absence of fraudulent activity is so obvious to us, why was it not apparent to Uber’s reviewers? Is the objective of human review the identification of actual instances of fraud, or merely the possibility of fraud? If it is the latter, how does this differ from the objective of the algorithm? Or, perhaps more importantly, is Uber willing to recognise the human complexities of how its technologies are used?
It is possible we are seeing a recurrence of the same issues because there is no review taking place. We know this to be the case with TfL, which accepts the specious data presented by Uber without question. In one of the recent licensing appeal cases, the court remarked: “We have sat several hours today hearing two very similar cases which…are predicated on the fact that there is little forensic analysis of the relationship between TfL and Uber and the information it provides.”
All of these failures, human and technological, amount to one thing: the persistent shifting of costs, risks and the burden of proof onto the workers, who must contest information that has not been disclosed to them. One of the critical challenges of being algorithmically managed is continually being measured against unknown classifications of fraud, improper use, or irregular activity. As Worker Info Exchange, we work to reverse this informational asymmetry and restore the balance of power in the gig economy by defending workers’ rights to data access and transparency. However, despite countless data requests and ongoing cases in London and Amsterdam, we’re only beginning to scratch the surface.