Facebook has turned data against us. Here's how we fight back

The more data the recommendation algorithms that power Facebook and Google have about us, the more disempowered we become

Data-driven personalisation has become central to our lives online. Curated recommendations govern our online experiences: the search results we receive, the music we listen to, the content that fills our social media feeds.

These recommendations are powered by algorithms, trained on unimaginable volumes of behavioural and self-disclosed data to understand our individual interests. Some of them now claim to know more about us than we know about ourselves — they appear to be able to predict our wants and needs before we even realise we have them.
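To make the mechanism concrete, here is a minimal sketch of how behavioural signals can be turned into an interest profile and used to score unseen items. Everything in it, from the topic labels to the scoring rule, is invented for illustration; real recommender systems are learned models trained on vastly more data.

```python
from collections import Counter

# Hypothetical sketch: infer an "interest profile" from behavioural
# signals (e.g. weighted clicks), then score unseen items against it.
# Real recommenders are learned models over vastly more data; this
# only illustrates the shape of the idea.

def build_profile(events):
    """events: list of (topic, weight) behavioural signals."""
    counts = Counter()
    for topic, weight in events:
        counts[topic] += weight
    total = sum(counts.values()) or 1
    return {topic: w / total for topic, w in counts.items()}

def score(item_topics, profile):
    """Rank an item by how well its topics match the inferred profile."""
    return sum(profile.get(t, 0.0) for t in item_topics)

# The profile is inferred from behaviour, never explicitly requested.
profile = build_profile([("cycling", 3), ("politics", 1), ("cycling", 2)])
items = {"bike review": ["cycling"], "op-ed": ["politics"]}
ranked = sorted(items, key=lambda i: score(items[i], profile), reverse=True)
print(ranked)  # predicts what we "want" before we have asked for anything
```

The point of the sketch is that the user never supplies the profile: it is assembled from observed behaviour, which is precisely what makes the resulting predictions feel uncanny.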

In 2020, we will finally open our eyes to the reality that personalisation does not actually serve our best interests. Rather, it serves the best interests of private companies, driven by advertising – the business model of the internet.

Personalisation drives profit because it reduces digital advertising waste. The richer and more comprehensive its data sources, the more targeted and dynamic it can be. The endgame of digital marketing is to build relationships through the real-time execution of campaigns tailored to the individual.
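A toy model makes the economics visible. All campaign names and figures below are invented: the idea is simply that, rather than showing every ad to everyone, the system shows each person only the campaign their profile predicts they are most likely to respond to.

```python
# Toy model of targeted ad selection (all names and figures invented):
# show each user only the campaign with the highest predicted response,
# so fewer impressions are "wasted" on people unlikely to engage.

campaigns = {
    "running_shoes": {"fitness": 0.8, "travel": 0.1},
    "city_breaks": {"travel": 0.7, "fitness": 0.1},
}

def predicted_response(affinities, user_profile):
    # Dot product of campaign topic affinities with the user's profile:
    # the richer the profile, the sharper (and more profitable) the match.
    return sum(a * user_profile.get(t, 0.0) for t, a in affinities.items())

user = {"fitness": 0.9, "travel": 0.2}  # inferred, not self-declared
best = max(campaigns, key=lambda c: predicted_response(campaigns[c], user))
print(best)  # the single ad this user sees: "running_shoes"
```

Seen this way, "reducing waste" and "knowing more about us" are the same lever: every additional data point sharpens the match, and the match is what advertisers pay for.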

On the surface, personalisation is difficult to critique, because the word is so strongly associated with relevance. Who can argue that relevance does not add value? But that is the wrong question: the more important one is who decides what is relevant. The deeper, and more alarming, issue with online personalisation is that it disempowers us: the algorithm, not the user, controls our choices. And the more data it has about us, the more disempowered we become.

We are now reaching a crunch point for personalisation, where we will need to rethink its costs and benefits. Predictive algorithms have become so sophisticated that they are effectively impossible to understand, and they operate in intimate areas of our lives – even our facial expressions are processed in real time, and deeply personal characteristics such as sexual orientation are inferred without our knowledge. At the same time, data breaches, machine bias and fake-news scandals have escalated the importance of privacy and algorithmic transparency. In 2020, we will enter a new era of digital human rights and data ethics.

The EU’s General Data Protection Regulation (GDPR), which recognises the right to be forgotten and the right to access data held about us, was just the beginning of a long discussion about data ethics. Now we are looking at the right to influence how algorithms represent us, and more generally the right to be protected from AI. New data-protection laws will undoubtedly ensue.

We will also discover alternatives to personalisation in the digital world. One of them, customisation, takes us back to the early years of the internet. Customisation empowers the user by providing choice: it relies on explicit user instructions rather than algorithmic recommendations, which makes it remarkably transparent. Twitter’s home feed is an excellent example of our renewed respect for customisation, and its history shows the power of users. In 2016, the company replaced its more “democratic” timeline, which displayed the most recent tweets from accounts a user followed, with one that displayed the “best” and most relevant tweets as determined by an algorithm, including tweets from accounts the user did not follow. The backlash was so strong that Twitter reinstated the time-based timeline two years later.
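The contrast between the two models is easy to sketch. This is a hedged illustration, not Twitter’s actual system: the customised feed is fully explained by two user choices (who they follow, newest first), while the personalised feed depends on an opaque relevance score and can surface accounts the user never chose.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    timestamp: int
    predicted_engagement: float  # opaque output of a ranking model

def customised_feed(tweets, followed):
    # Customisation: governed by explicit user instructions (who they
    # follow), newest first. The user can see exactly why a tweet appears.
    return sorted((t for t in tweets if t.author in followed),
                  key=lambda t: t.timestamp, reverse=True)

def personalised_feed(tweets):
    # Personalisation: an opaque relevance score decides the order, and
    # can surface accounts the user never chose to follow.
    return sorted(tweets, key=lambda t: t.predicted_engagement, reverse=True)

tweets = [
    Tweet("friend", timestamp=100, predicted_engagement=0.2),
    Tweet("stranger", timestamp=90, predicted_engagement=0.9),
]
print([t.author for t in customised_feed(tweets, {"friend"})])  # ['friend']
print([t.author for t in personalised_feed(tweets)])  # stranger ranks first
```

The first feed can be explained to any user in one sentence; the second cannot be explained without explaining the model, which is the transparency gap the Twitter backlash was about.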

In 2020, we will see a rise in the number of internet companies that choose customisation over personalisation, and the adoption of easier-to-understand algorithms. And this will be good for business. With so much consent having been taken away from us, and trust in AI low, simplicity and transparency will be the key differentiators between services on the internet.

Yin Yin Lu is a product manager at Researcher, a platform for academics, and a research affiliate at the Oxford Internet Institute

This article was originally published by WIRED UK