Data shadows: challenges of a data-rich world

Stephanie Hankey

‘Like the wallpaper sticks to the wall
Like the seashore clings to the sea
You’ll never get rid of your shadow
You’ll never get rid of me’


These words, written in 1927, are the start of the well-known song ‘Me and My Shadow’. When we at Tactical Tech launched our project of the same name, 85 years later, we weren’t thinking of two inseparable friends but of the inseparable shadows each of us has grown in the digital realm – digital shadows formed by the personal details we’ve put on social networking sites, the times we’ve clicked ‘yes’ without knowing what we’re agreeing to, or the ‘bots’ that share our details with each other. Whatever the shape of our digital shadow and however much of it we know or don’t know, we’re starting to realize that, just like the shadow in the song, it sticks to us ‘like the seashore clings to the sea’. The effects of this digital shadow vary. They may be useful, annoying or extremely dangerous. In all cases, however, they have direct implications for privacy, civil liberties, freedom of expression and human rights, and therefore should not be ignored by foundations concerned with any of these issues.

How did we get here?

The recent Wall Street Journal series What They Know[1] maps what data the 50 biggest websites and mobile services collect on their users, what they do with it, and whether they give users the chance to opt out. In parallel, tech activists and practitioners are trying to empower users, lifting them out of unwitting data slavery. Mozilla, for example, recently launched a plug-in, Collusion, that lets you watch, as you browse, which third parties are collecting data on you, giving you some control over what you are giving away and the ability to make better decisions about which services you use.
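
To make this concrete, here is a minimal sketch – in Python, with invented domain names, and not Collusion’s actual code – of the kind of classification such a tool performs: deciding, for each request a page triggers, whether it goes to the site you visited or to a third party.

    from urllib.parse import urlparse

    # Hypothetical request log for a single page load; in a real browser
    # these would be observed live by an extension such as Collusion.
    PAGE_URL = "http://example-news-site.com/article"
    REQUESTS = [
        "http://example-news-site.com/style.css",
        "http://ads.tracker-network.com/pixel.gif",
        "http://cdn.social-widget.com/like-button.js",
        "http://analytics.bigco.com/collect?uid=abc123",
    ]

    def registrable_domain(url):
        # Crude last-two-labels comparison; real tools use the Public Suffix List.
        host = urlparse(url).hostname or ""
        return ".".join(host.split(".")[-2:])

    page_domain = registrable_domain(PAGE_URL)
    for req in REQUESTS:
        party = "first-party" if registrable_domain(req) == page_domain else "THIRD-PARTY"
        print(party, req)

Run against a real browsing session, a log like this quickly shows how many third parties a single page quietly contacts.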

Pesky or risky?
Looking at What They Know or using Collusion can quickly make you quite paranoid. But the question is, do we need to be? When is data collection and analysis ‘pesky’ and when is it ‘risky’? Pesky is what marketers call predictive analytics – analysing your behaviour, supposedly to give you what you want, and using your data to serve you up more expensive products. Orbitz, for example, reportedly shows Mac users more expensive hotels, having discovered that on average Mac users spend $20 to $30 more on hotels than their PC counterparts. Risky is when you are part of a marginalized community and thought you were organizing in a closed group, but Facebook changes its settings and everything you thought was private becomes public. In these circumstances, our lack of control over our data puts our communities and livelihoods, even our lives, at risk.
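
The mechanics behind this kind of steering can be remarkably simple. The sketch below is purely hypothetical – Orbitz has not published its code, and the names and prices are invented – but it shows how a site could reorder results based on nothing more than the User-Agent header the browser sends with every request:

    # Hypothetical sketch of User-Agent-based result steering; the hotel
    # names, prices and rule are invented, not Orbitz's actual logic.
    hotels = [
        {"name": "Budget Inn", "price": 79},
        {"name": "Midtown Hotel", "price": 129},
        {"name": "Grand Plaza", "price": 249},
    ]

    def rank_results(hotels, user_agent):
        # If the visitor appears to be on a Mac, show pricier hotels first.
        pricier_first = "Macintosh" in user_agent
        return sorted(hotels, key=lambda h: h["price"], reverse=pricier_first)

    mac_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7) AppleWebKit/534.57"
    for hotel in rank_results(hotels, mac_ua):
        print(hotel["name"], hotel["price"])

Nothing here requires knowing who you are; one routine header is enough to treat you differently.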

Whose data is it anyway?

This problem isn’t limited to our online socializing, reading and buying. As we start to use online services that do the work for us, for free, such as those that help us publish our photographs or videos, we are handing over not only the details of who we are, where we are and what we like, but also our content. What we gain in convenience, we lose in control. As we move our data off our devices and into ‘the cloud’, this has some serious consequences. We entrust our content to companies that are themselves finding it hard to keep on top of the fast pace of the digital world. Last year, the popular file storage and transfer service Dropbox experienced a security breach that made it possible to use any account without proper authentication. It lasted only four hours, but the incident confirmed that this is a fledgling environment and we should be careful about giving up control of our information.
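
To give a sense of how small such a defect can be, here is a hypothetical sketch – not Dropbox’s actual code – of a login check in which a single misplaced clause accepts any password for any existing account:

    import hashlib
    import hmac

    # Hypothetical reconstruction of an authentication-bypass bug; an
    # illustration of how tiny the defect behind such a breach can be.
    USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

    def check_password(username, password):
        stored = USERS.get(username)
        if stored is None:
            return False
        supplied = hashlib.sha256(password.encode()).hexdigest()
        # BUG: the stray "or True" (standing in for the real defect) means
        # any password is accepted for any existing account.
        return hmac.compare_digest(stored, supplied) or True

    print(check_password("alice", "totally wrong"))  # True - anyone gets in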

Who’s in control?
Service providers are companies in the real world, so naturally they will hand over information to government agencies when requested. The Electronic Frontier Foundation has done some interesting work on this. Its report When the Government Comes Knocking, Who Has Your Back? uses a star system to rate the main service providers on whether they publish information about requests for user data and whether they defend users’ rights against the authorities. Some service providers clearly come out better than others, and some have been embarrassed by taking censorship into their own hands and taking down user content.

Despite widespread media coverage of repressive states leaning on technology companies for information, the evidence shows this is not restricted to places considered to lack freedom of expression. In July 2012, Twitter published its first transparency report, showing that the US government made 679 requests for user data in the first half of the year, more than all other governments combined. In 2011, Google reported that US government agencies asked for Google users’ data 12,271 times. Twitter is heralded as a carrier of revolutions; Google and Facebook are among the biggest data repositories in the world. All three are American companies.

Of course, control of our data is not entirely in US hands. 27 January 2011 will no doubt go down in history as proof of this, as the world watched the Egyptian government demonstrate that there was an internet ‘kill switch’ and that it could press the button. During 2012, WikiLeaks, Privacy International and the Wall Street Journal published large amounts of data about what they call the new ‘arms industry’: the extent of surveillance technologies being sold to governments worldwide. The leaks showed how governments are using these technologies not only to protect and defend but also to track the huge amounts of data created and exchanged by all their citizens.

Who ultimately controls the data we own, share and use and who is responsible for it is currently being fought out not only by users and companies, but also by academics, policymakers, bilateral organizations and government agencies, at conferences and international forums from Sweden to Brazil. What was once a technical issue for geeks has now become a major global political issue.

What does this mean for journalists and activists?
This month, the United Nations Human Rights Council passed a resolution recognizing that freedom of expression principles in the real world should be mirrored in the online world. The Arab uprisings have undoubtedly driven the Council to this point. The internet was fundamental in enabling expression and action during the Arab Spring, but it was also a major source of vulnerability, exposing thousands of activists across the region. In Syria, activists have been compromised through their use of social media networks and publishing platforms to such an extent that many high-profile activists now use online platforms only for broadcasting and publishing, not for mobilizing and organizing: the digital traces they leave behind are far too great to take the risk.

Nor is it only governments who are using these new means of surveillance. In Uganda last year a gay rights activist was murdered after a local tabloid published personal information taken from Facebook. In Mexico, the biggest threat to outspoken young bloggers comes from drug cartels, who monitor the internet, directly target individuals, and torture or murder them.

What now?
It is no longer possible to separate the issues of technology and data – traditionally the domain of IT or communications departments – from the programmatic work of foundations. The technologies and the work they facilitate have become intertwined; the people and their digital shadows have become inseparable. The fact that the universe of things we cannot control as users has become so large and so difficult to influence makes it all the more important to look at what can be done at a practical level and to focus on what is within our control. This is doubly true for those working on the frontline, for whom these issues have switched from troubling to life-threatening.

We need to start helping users change their behaviour, supporting them to better understand the services and tools they use, and continuing to question what has too quickly and easily become the norm. Technology providers have a responsibility to treat users’ rights with the same care as human rights. Privacy means different things to different people: it can hinge on your philosophical and political approach to society, on where you are and on who you are. Everyone would no doubt agree, for example, that it is critical in extreme cases like Syria; but in circumstances where data about people’s political beliefs is less life-threatening, there needs to be respect for choice. The default position should be that the user can opt in to sharing data, not that they bear the responsibility for opting out.
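
In code, the difference between these two positions is often a single default value. A hypothetical sketch of what opt-in-by-default settings might look like (the field names are illustrative, not any real platform’s API):

    from dataclasses import dataclass

    @dataclass
    class SharingSettings:
        # Privacy-respecting defaults: nothing is shared until the user
        # explicitly opts in. Field names here are illustrative.
        share_profile_publicly: bool = False
        share_location: bool = False
        share_with_advertisers: bool = False

    settings = SharingSettings()    # a new user starts with everything off
    settings.share_location = True  # sharing happens only as an explicit opt-in
    print(settings)

Flip those defaults to True and you have the opt-out world most users live in today.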

Foundations should be supporting not only watchdogs and policymakers to keep up the debate but also those who help users understand and control their digital shadows; they should support people to be playful, creative and subversive. Internet technology is still in the age of prototyping. We as users should be challenging and improving it.

[1] http://blogs.wsj.com/wtk

Stephanie Hankey is executive director of Tactical Tech. Email Stephanie at ttc@tacticaltech.org

Recommended and related links
Explore your own shadow: http://myshadow.org
Digital security toolkit: https://security.ngoinabox.org
Digital security tips: https://onorobot.org
Mobiles: https://guardianproject.info
Anonymity: https://torproject.org
Policy and legal aspects: http://www.eff.org

