Internet of Things (IoT) products seem to grow in number and variety every week. They include products sold to consumers as toys or everyday devices, to industry as enterprise solutions, or to governments as new tools. They are household appliances, fitness trackers, jewelry, and vehicles, as well as city infrastructure and lampposts, and entirely new things, like the “virtual assistants” Amazon Echo and Google Home, which don’t have easy analogues in the less-connected world.
While there are countless efforts around the world to grapple with keeping the Internet of Things secure (and we are starting to see the emergence of IoT regulation), there are few that look specifically at the human rights implications. As a human rights organisation, we see that issue as the central consideration, and so at our 2017 RightsCon summit in Brussels, we held a panel discussion and workshop to open the discussion with experts from around the globe.
How does the Internet of Things interfere with human rights?
Fantastic question! There are many ways in which IoT can interfere with human rights, either actively or passively. Not all countries have the privacy or data protection frameworks in place to adequately protect our personal data from a pervasive IoT. There is a real threat of near-ubiquitous surveillance when hackers can exploit vulnerabilities in IoT toys like Cayla and governments have the capacity to spy on you using more than just your computer and smartphone. That’s dangerous for human rights. Recent research has repeatedly shown that surveillance not only harms users’ privacy but also chills free expression. We don’t do or say the same things when we are watched. These same studies suggest that the negative impact on our behavior and on society could become even broader.
So increase the safeguards, why don’t you?
Indeed, it’s clear that we lack sufficient safeguards, either in law or in current practice, to address the impact of IoT on human rights. For example, fair information practices, and in many cases, laws on the books, require explicit user consent in order for companies to collect personal data. Yet consent is tricky with the Internet of Things. A company may be able to get consent from someone who has purchased an IoT device or application, but the device may still need to collect data passively to function, affecting the privacy of others. For instance, self-driving cars collect data about their surroundings, and that includes data about the people around them. It might not be possible to implement a traditional user notification regime for this application of IoT technology. Yet people still have a fundamental right to privacy, and if they were given a choice, they might not consent to this form of data collection.
A key question here: To what degree should there be an exception to rules regarding consent when data collection is (strictly) necessary for a device to function? People buy many IoT devices because they collect and analyse data; one buys a smart fridge so one doesn’t have to remember when to buy groceries. This product’s analysis of your personal habits is what makes it valuable to you; otherwise, the product can’t deliver on its promises.
Even with these issues in play, it is still feasible to apply human rights principles to the IoT ecosystem. Here’s a look at what we discussed with the experts at RightsCon, including an exploration of principles such as data minimization and privacy by design.
What sort of rights should we ensure for individuals in the IoT world?
Let’s start with (in)security. At DEF CON 2016, hackers tested the security of a variety of products in the IoT “village.” They were able to find 47 new zero-day vulnerabilities across 23 different types of devices from 21 different manufacturers. Most of these were simple, easily preventable security gaps, as many IoT vulnerabilities are. In fact, one of the biggest IoT botnets, known as Mirai, was built explicitly to exploit IoT’s omnipresent weak security by scanning for IoT devices accessible over the internet and protected only by factory-default or hard-coded usernames and passwords. These kinds of attacks keep increasing as companies continue to neglect basic security.
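To make the default-credential problem concrete, here is a minimal, hypothetical Python sketch of how an owner might audit one of their own devices for factory-default logins on its HTTP admin interface. The device address and the credential list are placeholders for illustration, not details from the reporting above.

```python
# Minimal sketch: check whether one of *your own* devices still accepts a
# well-known factory-default login over its HTTP admin interface.
# The address and credential list below are hypothetical placeholders.
import requests

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
]

def uses_default_login(base_url: str) -> bool:
    """Return True if the device accepts any of the sample default logins."""
    for user, password in DEFAULT_CREDENTIALS:
        try:
            resp = requests.get(base_url, auth=(user, password), timeout=5)
        except requests.RequestException:
            continue  # device unreachable or not speaking HTTP
        if resp.status_code == 200:  # rough heuristic: login was accepted
            print(f"Device at {base_url} still accepts {user}/{password}")
            return True
    return False

if __name__ == "__main__":
    uses_default_login("http://192.168.1.50")  # placeholder address on a home network
```

Mirai automated essentially this kind of probing at internet scale, which is why shipping devices without forcing a credential change is such a consequential design choice.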
At RightsCon, participants in the IoT panel discussed key security concepts: security of data, separation of security vs. functionality updates, security updates for a reasonable lifetime of the given product, and digital security more broadly — using encryption and information security to ensure the privacy and integrity of the devices, services, and the data they produce. These are baseline best practices, so it’s a shame we still have to address them in the IoT product and services market. Yet, as we’ve shown above, that’s where we are.
However, even if we achieve better security for the Internet of Things, it does not guarantee better privacy. Our expert panel concluded that there should be “system defaults” for IoT that go beyond tech specifications. Examples include notice prior to changes in terms of service (on top of an existing clear, rights-respecting terms of service), location privacy, a set date for data expiry, a purpose limitation for the data collected, and strict rules for data minimization. These rights are already guaranteed in certain jurisdictions. Some experts also argued for rules to support consumer choice, such as guaranteeing the interoperability and portability of personal data, as well as the context for the data, so that no provider would be able to lock people into a service or product forever. There was consensus that people should be told when third parties are involved in providing an IoT device or service, whether those parties handle data or hardware security.
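As a rough illustration of what “data expiry,” “purpose limitation,” and “data minimization” could look like inside a device’s own data store, here is a hypothetical Python sketch; the field names, retention period, and records are invented for the example, not drawn from the panel.

```python
# Hypothetical sketch of data expiry, purpose limitation, and minimisation.
# All field names, the retention window, and the sample records are invented.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)                   # "set date for data expiry"
PURPOSE_FIELDS = {"temperature", "timestamp"}    # purpose limitation: only what the feature needs

def minimise(record: dict) -> dict:
    """Drop every field that is not needed for the declared purpose."""
    return {k: v for k, v in record.items() if k in PURPOSE_FIELDS}

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Delete records older than the retention window."""
    cutoff = now - RETENTION
    return [r for r in records if r["timestamp"] >= cutoff]

now = datetime.now(timezone.utc)
raw = [
    {"temperature": 4.2, "timestamp": now, "owner_location": "home"},  # location gets dropped
    {"temperature": 4.9, "timestamp": now - timedelta(days=45)},       # expired, gets purged
]
stored = purge_expired([minimise(r) for r in raw], now)
print(stored)  # only the recent record remains, without the location field
```

The point of the sketch is that these “system defaults” are not exotic: they are a handful of rules a vendor can enforce at the point where data is written, before anything leaves the device.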
The issue for human rights isn’t so much that an IoT device collects and analyses data, since that’s often its raison d’être. The issue is that individuals should be able to maintain their autonomy and adjust their products to work for them, rather than the other way around. Individuals must be able to inspect, audit, repair, or simply examine their IoT devices and services and the data they produce. Furthermore, we should always have the option to turn our IoT products off (we are talking Faraday mode-level powered off) or to use them offline, meaning that the products should still work without a connection to the internet, so individuals can opt out without losing basic functionality.
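Here, too, a small hypothetical Python sketch may help show what “works offline” means as a design principle: the core feature runs locally, and network sync is treated as optional rather than required. The endpoint, file name, and reading are placeholders.

```python
# Hypothetical sketch of a local-first IoT feature: the device keeps working
# with no connection, and cloud sync is an optional extra. Placeholders only.
import json
import urllib.error
import urllib.request

def record_reading(value: float, log_path: str = "readings.jsonl") -> None:
    """Core feature: store the reading locally, so the device works offline."""
    with open(log_path, "a") as f:
        f.write(json.dumps({"value": value}) + "\n")

def try_cloud_sync(value: float, endpoint: str = "https://example.invalid/api") -> bool:
    """Optional enhancement: sync if a connection exists; fail quietly if not."""
    data = json.dumps({"value": value}).encode()
    req = urllib.request.Request(endpoint, data=data,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=3)
        return True
    except (urllib.error.URLError, OSError):
        return False  # offline, or the user opted out: local functionality is unaffected

record_reading(21.5)   # always works
try_cloud_sync(21.5)   # nice to have, never a requirement
```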
Of course, this list is not exhaustive. There are significant issues left to address, such as transparency and algorithmic accountability, to ensure that the IoT ecosystem can function in a way that respects human rights. However, it is a good beginning to the broader conversation that needs to happen.
So how do human rights organisations and other stakeholders that care about IoT privacy begin to articulate these concerns for everyday users? We’re inspired by the efforts of the Norwegian consumer protection agency Forbrukerrådet, which created the following easy-to-understand IoT “wishlist” for consumers:
- Help me understand the rules
- Don’t change the rules behind my back
- Give me real choices
- Don’t snoop
- Don’t take liberties with my stuff
- Don’t over-share
- Let me move my stuff
- Don’t evict me or my stuff for no reason
- Let go of me
- Don’t forget to lock the door
Amen.
Putting words into action
While governments can implement some protections for human rights, companies must also step up and build protections at the software and hardware levels, and of course end users need to take action as well (and may ultimately even be required to, though they must be given the tools to do so). This will require steady cooperation among civil society (including efforts such as Ranking Digital Rights), consumer protection groups, and technology companies. That is a collaboration Access Now will continue to support and engage in as best we can.
Some further reading:
- Overview from Wired: The Internet of Things Is Wildly Insecure — And Often Unpatchable
- Deep dive from Schneier on Security: Security and the Internet of Things
- The global effort from ITU: Internet of Things Global Standards Initiative
- Vulnerabilities of wearables from Nature: What could derail the wearables revolution?
- EU gets technical from Body of European Regulators for Electronic Communications report: Enabling the Internet of Things
- Look at vulnerabilities from Wired: How the Internet of Things Got Hacked
- Regulation emerges with Tunisia’s Information and Communications Technology Ministry: draft law