"There is no such thing as the unknown – only things temporarily hidden, temporarily not understood," said Captain James T Kirk in Star Trek: The Corbomite Manoeuvre, 1966. Photograph: James Vaughan/flickr

A manifesto for the future of the 'right to be forgotten' debate

Julia Powles and Luciano Floridi

The landmark ruling against Google Spain presents the opportunity for a proper debate. Here are five strategies for reframing the ethics of our online lives

For more than two months now, the inaptly-named “right to be forgotten” has remained buoyant in the news cycle. The reasons are as complex as the distortions, but distinctly missing from the discourse is critical engagement with the foundations and implications of the European data protection regime that gave flight to the discussion.

Also missing is an exploration of the proposed course of action by Google and other players engaged in implementation, and an assessment of how those responses could be strengthened and improved.

The Article 29 Working Party – the advisory association of European data protection authorities that ought to be at the helm of navigating solutions – has called Google and other search engines to a meeting in Brussels on 24 July to discuss the ruling and its concerns. This is a welcome though belated move, but what is required is a thorough discussion of what is possible: proactive, not reactive.

What is the "right to be forgotten" ruling?

On 13 May, the European Court of Justice (ECJ) ruled that, in some circumstances – notably, where personal information online is inaccurate, inadequate, irrelevant, or excessive in relation to data-processing purposes – links should be removed from Google’s search index. A Spanish lawyer, Mario Costeja González, was concerned that Google searches on his name prominently featured two foreclosure notices published under legal requirement in 1998, when his home was repossessed for debt.

The Spanish data protection authority rejected his claim to remove the original archived notices, but asked Google to remove referring links from its index. Google appealed, and the Spanish court requested guidance from the ECJ. The ECJ accepted Mr Costeja’s claim that indexing the notices was irrelevant to Google’s purposes as a search engine under the 1995 EU Data Protection Directive, triggering an international discussion on the availability and accessibility of information online.

The ECJ’s ruling was unexpected. Contrary to the advisory opinion of Advocate General Niilo Jääskinen, Google was found to be a European data controller with associated responsibilities. More controversially, by allowing the deletion of links in this case, the ECJ opened the door to a challenging regulatory debate about privacy, freedom of expression, and access to legally-published information, not to mention complex questions associated with implementation.

The result is a watershed in the evolution of the infosphere – the informational environment represented by our increasingly hyperconnected world. So what were the foundations of the ruling, and what strategies might be considered in formulating a suitable response?

Why is it called the 'right to be forgotten'?

The ECJ’s ruling has been labelled the ‘right to be forgotten’ – an unfortunate and polarising catchphrase that inadequately reflects its legal origins and practical effect.

Legally, Mr Costeja’s right is just one facet of his individual privacy rights derived from the EU Data Protection Directive and the European Convention on Human Rights. Privacy rights inevitably involve boundary problems as they come into conflict with rights to freedom of expression and access to information. These rights are not absolute; fair balance is required.

In this context, and against the current default, the ECJ strongly emphasised the importance of individual privacy interests in otherwise privatised, economically-driven digital curation and navigation, recognising the ubiquity of digital information and the ever-increasing influence of search engines and other intermediaries in shaping who we are and what we do online.

So what is the ruling about?

The ECJ’s ruling appealed at base to deeply-held social values of autonomy, forgiveness, and closure. It correctly recognised that, in physical (offline), virtual (online), and increasingly hybrid (or ‘onlife’) spaces, we must afford one another the possibility to make mistakes, to restart, and to move on.

However, ‘to forget’ is a misleading label, and the vague interpretation that the court gave to the notion of relevance – which is always relative to changing interests – is unsatisfactory. Although current data protection law precludes such a course, a better ground might be to consider a clearly identifiable reference to whether information is harmful, prejudicial, or exclusively personal.

Neither the taxonomy (‘right to be forgotten’) nor the logic (relevance determined by age as justification for suppression) truly addresses the broader need for information sedimentation – solutions, adapted to the infosphere, that enable us, individually and as a society, to remember without recalling; to live with, but also beyond, experience; to acknowledge without constraining.

How is the ruling being put into practice?

There is understandable discomfort concerning implementation of this ruling by Google and other intermediaries. It applies to 500 million European citizens whose data are strewn across billions of webpages. When Google first responded with an online complaint form allowing individuals to identify “irrelevant, outdated, or otherwise inappropriate” links, apparently 40,000 claims were made within the first six days, with another 30,000 in the month following.

The risk is that, in order to manage the interests recognised in the ruling at scale, powerful but blunt tools may be deployed. Such tools, it is feared, may serve the interests of disinformation, rather than better information and more social cohesion.

Nevertheless, it is unhelpful to claim that the ruling invites censorship. These criticisms fail to engage with the interests at stake or to consider creative solutions. It must be recalled, prosaically, that the search industry is a business. There are already well-established practices in reputation management and search result optimisation that customise online information delivery.

Five principles for creative solutions

The crucial challenge is how to achieve appropriate information sedimentation inside the infosphere. So far, there have been few public details on the procedures being applied by Google and other intermediaries to deal with requests in response to the ruling.

While the haste and seriousness of efforts to comply must be lauded, caution must be applied to ensure that any solutions adequately address both individuals’ interests in information concerning them, and the public interest in useful information curation. Here are five strategies worth testing.

1. Build on past experience in a transparent fashion

Currently, Google’s complaint form has an unconstrained 1,000-character input box. It could become more useful by structuring inputs according to how different types of information may be treated, with a hierarchy based on the seriousness of the privacy intervention.
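To picture what such structuring might look like, here is a minimal sketch in Python of a structured removal request, with the claimant choosing a category and a seriousness tier instead of composing free text. The category names, tiers, and fields are assumptions for illustration only; they are not drawn from Google’s actual form.

```python
# Illustrative sketch only: hypothetical categories and seriousness tiers
# for a structured removal request (not Google's actual form fields).
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InfoCategory(Enum):
    EXCLUSIVELY_PERSONAL = "exclusively personal"    # eg. health, family life
    FINANCIAL_HISTORY = "financial history"          # eg. old foreclosure notices
    SPENT_CRIMINAL_RECORD = "spent criminal record"
    PROFESSIONAL_CONDUCT = "professional conduct"
    OTHER = "other"

class Seriousness(Enum):
    SEVERE = 3     # clear, ongoing harm to the individual
    MODERATE = 2   # outdated or excessive, limited public interest
    MINOR = 1      # largely a matter of preference

@dataclass
class RemovalRequest:
    claimant_name: str
    disputed_url: str
    category: InfoCategory
    seriousness: Seriousness
    explanation: str                        # a short, bounded statement
    publication_year: Optional[int] = None  # when the disputed item appeared
```

A structured request of this kind could also be compared against aggregated past cases far more readily than a block of free text.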

Costeja’s case is just one addition to the considerable experience that national data protection authorities, tasked with balancing personal data protection rights against the public interest, have accumulated.

Google could work with these authorities, other data controllers, academics, and practitioners to aggregate past cases, guidelines, and experience, in order to chart the contours of individual interests that data protection law must respect, and to integrate that knowledge with the growing volume of new requests. This will allow the refinement of tools and policies that can be applied consistently and respectfully, by private or public data controllers alike, in a way that is transparent to the public.

2. Seek interoperable, durable, empowering responses

It would be better for the infosphere, and individuals’ experiences within it, if Google’s solutions could integrate with those developed by other search engines and information aggregators, from Bing and Yahoo, to Twitter and Facebook, and if those solutions could be unconstrained regionally (currently anyone can access information that has been ‘forgotten’ by simply using a search engine not based in the EU).

If a collaborative approach is pursued, it could also provide options for smaller, less-resourced intermediaries, ensuring that when individuals operate across multiple platforms, there is some institutional memory and coordination to prevent either under- or over-determination of individual requests.

The online form creates collateral problems in collecting identifying information, and in relying on individuals to identify and bring information to the attention of search engines. Not every user has the awareness, skills, and resources to monitor his or her information online, so we must be conscious of usability issues, particularly (though this is not the case at the moment) in view of a possible future expectation that the presence, rather than absence, of damaging information is a matter of choice.

Empowering users with a self-filled form is undoubtedly an improvement, but it should be accompanied by an exploration of proactive responsibilities for information quality control by other agents in the infosphere.

3. Constrain discretion and arbitrariness

The fruitful synthesis of decided cases and new claims will help to focus the technical expertise of Google and other search engines. Individuals should be able to assess readily – ideally, through a simple online tool – whether personal information that they notify to a data controller will be dealt with in an automated fashion or by applying human discretion.

In both cases, the principles and guidelines being applied should be made clear.
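As a rough illustration of what such an assessment tool might tell a claimant, here is a small Python sketch; the triage rules are invented for the example and do not reflect any search engine’s actual policy.

```python
# Illustrative triage sketch: decides whether a hypothetical removal request
# would be handled automatically or referred to a human reviewer.
def triage(category: str, seriousness: int) -> str:
    """Return how a removal request would likely be handled (illustrative rules)."""
    clearly_private = category in {"exclusively personal", "spent criminal record"}
    if clearly_private and seriousness >= 3:
        return "automated: link removed from name-based search results"
    if category == "other":
        return "human review: the public-interest balance is unclear"
    return "human review against published guidelines"

print(triage("exclusively personal", 3))  # automated: link removed ...
print(triage("professional conduct", 2))  # human review against published guidelines
```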

4. Investigate a generally available right to comment

There has been considerable speculation regarding whether Google will issue individual take-down notices, as is presently done for copyright-infringing material, or otherwise notify the public (eg. via the original publisher) that information has been removed. While there is certainly a need for transparency, this can be done more successfully via the three mechanisms described above – in an aggregated and de-identified manner – than by highlighting individual cases.

Issuing individual notices may create speculation and attention that could be more damaging than the information was in the first place, undermining information sedimentation.

It is appropriate that notices should be sent to webmasters so there is an opportunity for independent assessment of the merits of a case and for archival purposes.

However, it is not appropriate for those notifications to then be republished in identified form to the public at large or to appear on a website such as hiddenfromgoogle.com, which would likely cause unwanted damage or distress, potentially in breach of domestic law.

There are many alternative technical possibilities to de-indexing, including reordering information results, de-identifying information, and appending additional qualifying information. In particular, where possible, the capacity to tag information at scale could be very useful, since it would also improve the quality of the infosphere. One of the problems with search, and with the infosphere more generally, is that the popularity of some information is not a good reflection of its truthfulness or utility.

Instead of an all-or-nothing approach, internet services could be encouraged to consider a generally-available right to comment – for example, by linking metadata to alternative URLs that clarify, update, or contextualise. This would retain an important element of transparency, and could be effectively deployed alongside a more limited right to removal.
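One way to imagine such a mechanism is as a small annotation record attached to a search result, carrying pointers to clarifying or updating URLs rather than removing the result itself. The sketch below is hypothetical: the field names and example URLs are invented for illustration.

```python
# Hypothetical 'right to comment' record: metadata linking a search result
# to URLs that clarify, update, or contextualise it (illustration only).
from dataclasses import dataclass, field

@dataclass
class ResultAnnotation:
    result_url: str                                        # the indexed page being annotated
    comment_urls: list[str] = field(default_factory=list)  # clarifications, updates, context
    note: str = ""                                         # short, moderated summary shown alongside
    community_endorsed: bool = False                       # eg. confirmed via crowdsourced review

annotation = ResultAnnotation(
    result_url="https://example.org/1998-notice",
    comment_urls=["https://example.org/2010-update"],
    note="The matter referred to on this page was later resolved.",
)
```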

The problem is the conflict between information and disinformation – particularly in preventing web interfaces and search results from being overburdened with augmented content. Here, the expertise of Wikipedia’s Jimmy Wales, who has been co-opted to advise Google on this ruling, will be invaluable.

There is now considerable experience in crowdsourcing, community standards, and endorsement, all of which could ensure that tagging of search results produces an information experience of higher quality, usability, and reliability, which is at the same time respectful of individuals’ past lives and legal requirements, as well as being technically feasible and economically viable.

The more we live in the infosphere, the more natural it will be to take care of it as our informational environment.

5. Contrast removal at source with removal on republication

Although a limited right to comment might be generally warranted, the removal of information is also envisaged by the ruling, and must be addressed. The important distinction here is between the right affirmed by the ECJ – to have a link removed from a search index, ie. on republication – and the right under existing and revised data protection law, which would allow removal at source.

These different solutions have different strengths and weaknesses, making them more suitable to some applications than others. They require focus on a uniquely digital issue, which is that the availability (there and then, eg. as a printed text in a physical newspaper in an archive) and the accessibility (anywhere and anytime, eg. as a link on a search engine online) of information have been decoupled.

While the reality and redundancy of digital information, once made broadly accessible, may make it impractical ever to forget completely, we should study ways in which information can be made easier or harder to find, more visible or opaque and, as a result, more useful and less damaging, when required.

How we deal with this difficulty requires sensitivity to competing interests, now and in the future, and a full appreciation of legal, social, technical, economic, and ethical considerations.

The seriousness of acting responsibly

Today, episodes of our lives in the infosphere appear as digital traces across sources beyond our control. As those traces grow ever larger and move towards near complete reflection and inspection of our lives, it is important that we reflect carefully on how this information and its sedimentation can be pro-actively and safely managed.

We are, after all, designing the environment in which future generations will spend their lives. It is an extraordinary opportunity and a huge responsibility. We must take both very seriously.

Julia Powles researches law, science and technology at the Faculty of Law, University of Cambridge (St John’s College)

Luciano Floridi is Director of Research and Professor of Philosophy and Ethics of Information at the Oxford Internet Institute, University of Oxford (St Cross College). Professor Floridi has been appointed as an independent, unpaid, external member of Google’s Advisory Council on the ruling discussed
