Scandals and debates about facial recognition have become almost daily news around the world, and Europe is no exception. Pilot projects and system trials are widespread, and in France and Sweden, these trials have taken place in schools.
Cédric O, France’s Secretary of State for Digital Affairs, stated that facial recognition experiments are necessary for the development of French industry and that democratic debate would only be contemplated afterwards.
Recently, data protection authorities (DPAs) in France and Sweden expressed their opposition to two of these experiments, both of which specifically targeted children. In this post, we look at the decisions taken by these DPAs and what they mean for the deployment of facial recognition in the EU.
Tech companies tried to use students as guinea pigs to test their systems…
In November 2018, a Swedish school started using facial recognition in classes to keep track of students’ attendance. Participating students were filmed as they entered a classroom, and the camera images were then compared to pre-registered images of their faces.
This pilot project was conducted in partnership with Tieto, a Finnish software and IT consulting firm that provided the technology. The Swedish DPA, which learned of the project through media reports, opened an inquiry into whether the trial complied with data protection rules. It found that the project’s managers had failed to carry out a proper impact assessment which, given the risks involved, should have led them to consult the authority beforehand.
Similarly, in France, two high schools in Nice and Marseille used facial recognition technology, provided free of charge by the U.S. tech company Cisco, to control access at the school gates. NGOs, teachers’ unions, and parents campaigned against the project, leading the region to ask the French DPA, the CNIL, for its opinion. The project has been on hold since then.
While the companies did not initially charge the schools for the systems, both Cisco and Tieto would in return be able to use students as training subjects for their facial recognition systems. Such pilot projects are crucial for companies to test and improve their products so that they can sell them going forward.
…but data protection authorities said no
Both the Swedish and French DPAs found these projects to be in violation of EU data protection law, the General Data Protection Regulation (GDPR). The Swedish DPA fined the municipality around €20,000 for using the technology and issued a warning against further use. The French DPA, for its part, determined that the project could not be “implemented legally”.
- Consent is not enough to roll out facial recognition in these projects
Under the GDPR, the use of biometric data — such as the image of your face — for identification purposes is generally prohibited unless you have given explicit consent.
The French and Swedish projects both claimed to rely on the consent of students and parents. The DPAs, however, concluded that consent was not sufficient in this context: it was not an appropriate legal ground for processing the students’ personal data.
The authorities have not given a definitive answer on whether any legal basis under the GDPR could support introducing a facial recognition system in a school. They concluded only that, in these specific cases, the requirements for processing sensitive biometric data on the basis of consent were not fulfilled.
The Swedish DPA, for instance, stated that students could not freely give their consent because of the power dynamics of the school-student relationship. Students depend on the school for their grades, education, and future studies and employment, and therefore cannot be expected to feel free to refuse a technology deployed on school grounds. The DPA added that even where a choice is nominally possible, as it was here on paper, this substantial power imbalance could pressure students into accepting the use of facial recognition technology.
This analysis need not be restricted to the school system: it could equally apply to many public places, such as shopping centres and train stations. One may therefore wonder whether, and how, facial recognition can be legally deployed in public spaces at all.
- There are (always) less intrusive means than facial recognition
Under the GDPR and the principle of proportionality, personal data should be processed only if no less intrusive options are available.
Both DPAs agreed that student monitoring can easily be carried out by less intrusive means (checks carried out by staff, badges, etc.), and that the use of facial recognition for identification purposes is therefore disproportionate.
As Wojciech Wiewiórowski, the European Data Protection Supervisor, wrote, we must ask ourselves:
“Is there any evidence yet that we need the technology at all? Are there really no other less intrusive means to achieve the same goal? Obviously, ‘efficiency’ and ‘convenience’ could not stand as sufficient”.
Both DPAs also underlined that the projects entailed substantial intrusion and major privacy risks. The CNIL further noted that this kind of facial recognition monitoring would intensify the feeling of surveillance among students.
What’s next?
As more facial recognition projects develop, we can already see that the GDPR provides useful human rights safeguards that can be enforced against the unlawful collection and use of sensitive data such as biometrics. But the irresponsible and often unfounded hype around the efficiency of these technologies, and the economic interests underlying them, could lead central and local governments and private companies to attempt to circumvent the law.
For instance, despite the CNIL’s warning, the president of the South Region and the mayor of Nice remain eager to get the project running and intend to file a new application as soon as possible.
The European Union Agency for Fundamental Rights (FRA) recently published a report on the fundamental rights considerations of facial recognition technology in the context of law enforcement. The FRA concludes that “[g]iven the novelty of the technology as well as the lack of experience and detailed studies on the impact of facial recognition technologies, multiple aspects are key to consider before deploying such a system in real life applications”.
No matter what new technologies are deployed, the EU and its member states must implement and uphold the GDPR. If, after open and transparent public debates, the EU or member states decide to authorise certain uses of facial recognition, then we must examine the need for additional safeguards to adequately protect people’s rights. But for now, as Wojciech Wiewiórowski puts it, it is time to “determine whether — if ever — facial recognition technology can be permitted in a democratic society”.
This post was written by Laureline Lemoine.