New technologies are being built that discriminate against already vulnerable populations. Can we do better?
In 2005, Stanford placed a plaque on its Gates Computer Science Building commemorating the “birth of the internet.” While the text on the plaque indicates that “thousands, if not tens to hundreds of thousands” contributed to the evolution of the internet, it names only 33 people. Of those 33, just three — less than 10% — are not male. And while you can’t determine race and cultural heritage from a name or photo alone, it appears that only two were not white.
As the plaque indicates, these 33 people are not the sole parents of the internet. The Wikipedia article on internet pioneers is much longer and includes many more women and people of color. Yet those named in both places are still overwhelmingly white men, which suggests that this demographic has had an outsized influence in shaping the internet from its earliest days.
I don’t bring this up to cheapen the creation of the internet; in fact, I’d argue that it is perhaps the most revolutionary innovation of our time. But I like to think it could have been better. For example, since at least 1992 experts have pointed out that the internet was not built securely. A few years later, in 1998, the renowned hacker group L0pht (another group of white men) would tell a panel of U.S. senators (itself a group of white men) that the internet was extremely vulnerable, explaining that they could take it down in about 30 minutes.
In the nearly 20 years since that hearing, these kinds of warnings have gone largely unheeded, and today we have an internet that remains open to many forms of attack. Some are the kind that threaten governments themselves, and the public networks, services, and infrastructure that we all depend on. Others are launched by governments against marginalized people and communities. For those of us on the margins, the internet is not only a tool for empowerment but also a weapon used against us for censorship, tracking, and surveillance.
Imagine the internet that could have been. What if those in marginalized communities, who are intimately familiar with the many forms of oppression, had had a bigger influence on how the internet was and is developing? While there have been many innovations and contributions from the margins of societies globally, under-recognized though they may be, let’s suppose they had been more central. Would we have been able to build systems that enabled free expression and innovation yet resisted pervasive surveillance by repressive governments? Could we have anticipated the suppression of speech and ideas, through both censorship and hate speech, and adjusted for it? Would that adjustment have undermined some of the good of the internet as we know it, or created a stronger, more flexible platform that benefits more people?
We will never know the answer to these questions. What we do know is that today, we are creating internet-enabled tools and systems that are designed to discriminate. While I may join those who chuckle when voice recognition tools fail to interpret certain accents, it’s not remotely funny that Google’s image recognition algorithm labeled black faces as “gorillas,” and it’s downright horrifying that similarly bad algorithms are disproportionately misidentifying black faces as suspects in criminal activity. We have learned that virtual reality systems lead to motion sickness more often in women than in men, and that Apple’s health app could track just about every aspect of health except menstruation. These are only a few examples of tools that we could all potentially benefit from, yet that have clearly been designed to benefit a very specific demographic: the affluent white man.
When it comes to the design of the internet and digital security, ignoring or failing to protect vulnerable populations can have life-or-death consequences. Abusive partners use the internet to stalk their victims and maintain power over them, including by threatening or carrying out violent acts. Dictators use it to monitor, censor, arrest, imprison, or kill those who dare to speak against them. Companies label traditionally black neighborhoods “dangerous.” Criminal groups accumulate vast databases of stolen personal data, to be auctioned off to the highest bidder.
Unfortunately, despite our best efforts, the internet as we know it may never protect the most vulnerable people in our societies. But we should not just accept the status quo. Improvement starts with recognizing the problem, and we are beginning to see early signs of that recognition. For example, some startups today have begun to take diversity and representation seriously and are preparing to dig into the big issues. This could lead to broader recognition of the discriminatory purpose or impact of products and services and help create technologies that go beyond the white, male “standard” to better serve the rest of us. But it may be a while before we really get it right.
Ultimately, we won’t be able to change things in any significant way so long as we create and facilitate an environment that is hostile toward diversity. Instead of burying our heads in the sand when scandals erupt, those of us working in tech should embrace change and invest in identifying and promoting smart, diverse voices. That means developing institutional and operational systems and processes that respect the range of backgrounds and experiences that diversity brings; creating public policies that are not developed or dictated by a single point of view; and providing platforms for discussion, such as panels or events, that highlight the voices and perspectives of under-represented people and organizations that are breaking through societal roadblocks and developing valuable expertise, often at great personal cost.
We can’t re-create the internet. But there’s nothing to stop us from developing better ways to build the technology for a stronger, more secure future for everyone, not just the privileged few.