This blog is a shortened and edited version of a speech (full version available here) delivered at the G7-DPA Roundtable on 6 September 2022.
Data flows with trust are critical, not only for a free and open internet, but also for realising human rights online. But none of these benefits can be achieved without robust and comprehensive data protection, data security, privacy safeguards, and human rights frameworks that protect people’s information. In the last decade, governments around the world have made progress in protecting personal data – yet remaining gaps prevent people from fully exercising their data protection and privacy rights.
When it comes to protecting personal data, the largest outstanding gap relates to data harvesting. This practice currently underpins the digital economic model, despite the misgivings of many civil society representatives, academics, and regulators about its economic and social benefits, or even its legality. Most tech companies and many governments take a “collect it all” approach, which is directly at odds with basic data protection principles such as data minimisation and purpose limitation. To be sustainable, the digital economy requires that public and private organisations alike move from data harvesting to data detoxing: prioritising quality over quantity.
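To make the contrast concrete, here is a minimal, purely illustrative sketch (not drawn from the speech) of what data minimisation and purpose limitation can look like at the point of collection. Every name in it – the ALLOWED_FIELDS list, the collect_signup_data function, the field names – is hypothetical.

```python
# Illustrative only: data minimisation and purpose limitation at collection time.
# ALLOWED_FIELDS and collect_signup_data are hypothetical names.

ALLOWED_FIELDS = {
    # field -> the specific, documented purpose it serves
    "email": "account login and service notifications",
    "display_name": "shown to other users in the interface",
}

def collect_signup_data(submitted: dict) -> dict:
    """Store only fields that have a documented purpose; discard the rest."""
    minimised = {field: value for field, value in submitted.items()
                 if field in ALLOWED_FIELDS}
    # Fields with no stated purpose (e.g. "location") are never stored:
    # the opposite of "collect it all".
    return minimised

# Extra fields submitted by a form are simply dropped.
print(collect_signup_data({
    "email": "user@example.com",
    "display_name": "Alex",
    "location": "London",  # no documented purpose, so not stored
}))
# -> {'email': 'user@example.com', 'display_name': 'Alex'}
```

The point is structural: a field that is never stored can never be leaked, misused, or transferred across borders.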
The nature of the internet and the online economy demands that data can flow easily, to ensure services are delivered across borders. However, this is not, and should not be, a free pass to ignore or weaken data protection principles. In today’s world, protecting personal data is not a luxury; it is a necessity. Most countries have recognised that privacy is a human right, and many of them have adopted, modernised, or started developing data protection frameworks that include tools for data transfers.
“Collect it all”, but why?
Despite legislative progress to protect personal data, many companies have yet to update their practices and business models in a way that respects and promotes data protection and privacy. Huge amounts of information are still being collected, stored, and moved around, and privacy considerations are often secondary.
A 2015 study of more than 300 ICT companies in the UK, France, and Germany showed that 72% of the data gathered by these companies was never used. Other studies from the US and around the world suggest that anywhere between 60% and 80% of the data collected by companies goes unused. With the rise of artificial intelligence and profiling, companies are often trying to figure out how to make money from this unused data. Instead, companies would do well to ask themselves, or be pushed by regulators to ask: why was this data collected in the first place?
The danger of dormant data
Online entities have largely been collecting any and all data they want about any person. This can harm people considerably and lead to data-driven discrimination that can, among other things, affect employment, health, housing, and educational opportunities. Data minimisation and purpose limitation principles, which have existed for some time, should apply to stop this from happening – but they are largely ignored.
If these principles are not binding or not enforced, companies and other entities will continue to collect more data than they need, and the potential for people to be harmed will increase. Every minute that unused data remains stored unnecessarily, and every time it is transferred across jurisdictions without need, people’s rights are placed at risk. In addition, companies are potentially liable if the data is leaked, misused, or accessed without proper authorisation.
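One practical answer to dormant data is routine, automated deletion once a retention period tied to the data’s purpose has passed. The sketch below is purely illustrative and not drawn from the speech; the record structure, record kinds, and retention periods are all assumptions.

```python
# Illustrative only: deleting dormant data once its retention period expires.
# Record kinds and retention periods here are hypothetical assumptions.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "support_ticket": timedelta(days=365),
    "marketing_consent": timedelta(days=730),
}

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records that are still inside their retention window."""
    now = datetime.now(timezone.utc)
    kept = []
    for record in records:
        limit = RETENTION.get(record["kind"])
        # A record with no defined retention period has no defined purpose,
        # so it is dropped along with anything past its window.
        if limit is not None and now - record["collected_at"] <= limit:
            kept.append(record)
    return kept

# Example: a two-year-old support ticket is purged, a recent one is kept.
records = [
    {"kind": "support_ticket",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=730)},
    {"kind": "support_ticket",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
]
print(len(purge_expired(records)))  # -> 1
```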
Some companies justify continuing their “collect it all” practices by promising transparency and strong security. These promises are important, but insufficient if not paired with a reduction in the amount of data collected. The best way to prevent data breaches is not to have the data in the first place.
Less is more when it comes to data collection
It is also important to think of data minimisation in the context of the development of artificial intelligence. Regulators should address the relationship between the data harvesting business model and the way algorithms are currently being built. Troves of data are often fed into opaque AI and automated systems to place ads, sell products, and even generate text or image content. It is critical that regulators address the misconception that the digital economy and AI need as much data as possible to succeed; what is actually needed is data of the best possible quality, which is inherently more limited.
Paving the way forward
While enforcing data minimisation is not a silver bullet for data transfer disputes or other data protection challenges, it can reduce tensions. After all, the less data collected, the less data to be secured or moved around. We must incentivise companies to collect only the data they need to deliver their products or services, to do so transparently, and to store all data securely. Collecting data that may be “useful” for a theoretical, undefined future purpose goes against data protection principles.
Data protection isn’t about never using any data; it’s about using data in a wise, secure, necessary, and proportionate way. Data minimisation, purpose limitation, data security, and transparency are not interchangeable; each of these overlapping, complementary principles is as important as the others. They allow businesses to operate with confidence and reassure people that they and their information are safe.