Balancing in the GDPR: legitimate interests v. right to object

[Note: This post was originally published on the CiTiP Blog]

Balancing exercises permeate data protection law. This post investigates the interaction between two core manifestations of such balancing in the GDPR: the last lawful ground (Art.6(1)f) and the right to object (Art.21(1)).

Balancing in Article 6(1): From lawfulness to guiding principle?

Balancing exercises play a pivotal role in the General Data Protection Regulation (GDPR). They are implied in concepts such as fairness and proportionality that permeate the GDPR. The centrepiece of balancing exercises within the Regulation is to be found in Article 6(1)f. This last lawful ground permits the processing of personal data whenever ‘necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject’. In other words, this provision pits the legitimate interests of the controller and those of third parties against the interests, fundamental rights and freedoms of the data subject.

The balancing exercise put forward in Article 6(1)f arguably transcends merely being one lawful ground among the others. Indeed, the previous four grounds (necessity for (b) the performance of a contract, (c) compliance with a legal obligation, (d) the protection of the data subject’s vital interests, (e) tasks carried out in the public interest) could be qualified as variations on a theme. They are simply situations where the interest in processing prevails by default. In light of the fairness principle, one may even claim that the first lawful ground, consent, is subject to some level of balancing under Article 6(1)f. The very act of consenting can be qualified as a strong indicator that a balance exists. This is particularly true in light of the new Article 7, which inter alia puts forward the unconditional ability to withdraw one’s consent and guards against all-or-nothing clauses.

The practical relevance of this consideration is that the lawfulness of any processing operation, regardless of its lawful ground, will have to be assessed in light of Article 6(1)f’s balancing exercise. The Art. 6(1)f balancing test will inform – not determine – the validity of the other lawful grounds. In sum, the balancing test in Article 6(1)f is the manifestation of the fairness principle (Article 5(1)a) in the form of a lawful ground. As such it can be used as a proxy for evaluating the validity of any of the lawful grounds.

Balancing and the right to object

The balancing exercise in Article 6(1)f can be qualified as an ex ante obligation: it needs to be complied with before the processing operation begins. In practice, this balancing exercise will unilaterally be defined by the controller. Enter data subject rights. The rights to object and to erasure install ex post rights, effectively empowering data subjects to challenge the balance put forward by the controller. The right to object (Art.21) is particularly interesting in this regard, because it defines its very own balancing exercise. When a data subject objects ‘on grounds relating to his or her particular situation, […] the controller shall no longer process the personal data unless the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject […]’. As such, the right to object may be one of the clearest manifestations of the fairness principle in the shape of a data subject right.

Despite the striking resemblance between the balancing exercises in Articles 6(1)f and 21(1), there are some key differences. The right to object explicitly requires the data subject to put forward grounds relating to his/her particular situation, but at the same time puts the onus on the controller to establish overriding ‘compelling legitimate grounds’. Compared to its predecessor (Art.14 of Directive 95/46), the burden of proof seems to have settled halfway: both controller and data subject carry one, though the controller’s is more onerous. In light of the accountability principle in Article 5(2), the burden of proof for establishing a balance under Art.6(1)f a priori lies with the controller alone. But, to all intents and purposes, a data subject who wishes to effectively challenge this balance will have to substantiate that claim.

Importantly, Art.21(1) does not include a reference to third party interests, which makes it seem narrower than Art.6(1)f’s balance. Reading more closely, however, the provision merely requires the controller to demonstrate overriding ‘compelling legitimate grounds’. The language does not constrain these grounds to those of the controller alone and could therefore encompass much more than only ‘the interests pursued by the controller or by a third party’ in Art.6(1)f. This would make it harder for data subjects to exercise their right to object than to challenge the lawfulness of the processing operation(s) itself. This puzzling conclusion might partially be countered by the fact that Article 21(1) does not require the rights and freedoms of the data subject to be fundamental (contrary to Art.6(1)f).

Things get even more confusing when reading recital 69, which conflicts with Art.21(1) on three points, referring to (a) the controller’s own legitimate interests only; (b) ‘interests’ instead of ‘grounds’, the former being narrower; and (c) the data subject’s fundamental rights and freedoms. The end result is that it is unclear how exactly the balancing exercise under Art.21(1) should be performed, leaving data subjects less empowered.

In conclusion, one may wonder what the added value of Article 21(1)’s balancing exercise is in the first place. Firstly, data subjects are already free – at least in theory – to challenge lawful grounds (including Art.6(1)f). Secondly, a closer look at Art.21(1) suggests that its balancing exercise is less likely to benefit data subjects than the one in Article 6(1)f. In light of data protection law’s rationale – routinely confirmed by the CJEU – the right to object should be interpreted from the perspective of ensuring a high and effective level of protection. Such a systematic reading implies that the right to object’s balancing exercise should be mirrored against (not equated to) the one in Art.6(1)f. The right to object can still succeed even though the processing is lawful stricto sensu under Article 6(1)f. The main added value of Art.21(1) therefore appears to be (a) allocating the burden of proof; and (b) empowering data subjects to challenge the status quo.

The Personal Data Equaliser

[Note: This post was originally published on the CiTiP Blog]

The concept of personal data – key in determining data protection law’s material scope of application – may seem pretty straightforward in the abstract. In practice, particularly when assessing the applicability of specific data subject rights, things get a lot murkier.

Personal Data – a Disharmonious Concept

It is a truism to say that personal data constitutes data protection law’s central building block. Indeed, personal data is the key factor in determining the framework’s applicability. Directive 95/46 – as well as the upcoming General Data Protection Regulation (GDPR) – is pretty concise in defining the concept: “any information relating to an identified or identifiable natural person.” No a priori distinction is made between different sources, types, formats, and so on. Only the contentious sub-category of ‘sensitive data’ explicitly enjoys special status. Personal data’s incredibly wide definition has been documented and criticised widely. Data protection law’s ‘information-agnosticism’ is also an important element separating it from the general right to privacy, the latter primarily covering more intimate data (or elements/activities) affecting an individual’s private sphere.

In reality, not all (personal) data are equal. Many legal texts specify or differentiate particular kinds of personal data because of the heightened risk to individuals’ rights, interests or freedoms. Even though the data protection framework explicitly refers to some sub-categories of data – sensitive data, (online) identifiers, pseudonyms, traffic and location data – it appears unfeasible to devise an overall taxonomy for personal data. The many attempts that have been made – in privacy policies, by consultants and by academics in different fields – fail to offer a satisfactory and comprehensive overview.

The few data categories explicitly or implicitly appearing throughout the GDPR are useful indicators for assessing the extent of rights and obligations. However, the applicability of data subject rights – and the right to erasure in particular – cannot be reduced to a mere qualification of the underlying data in one of these categories. Google’s and Facebook’s privacy policies differentiate personal data on the basis of its origin, its form and/or its function. But data can also be differentiated on the basis of its nature, sensitivity or visibility/obscurity. To complicate things even more, predefined categories often overlap in practice, further rendering nonsensical any attempt at straitjacketing personal data into predefined categories.

Still, the category of data will often impact the exercise of data subject rights. The rights to erasure and to object illustrate this quite well. Different data-types will have a different impact on the data subject and the balancing exercise generally accompanying a request to erase/object. Sensitive data may be the most obvious example of a data category that will generally tip the balance in favour of the data subject. In short, there is clearly some merit in qualifying the relevant personal data in one way or another. In light of the concept’s incredible heterogeneity however, attempts at developing a comprehensive ‘personal data taxonomy’ are doomed from the start.

Personal Data Equaliser

Instead of trying to come up with a data taxonomy – or even a more modest list of specific data categories – an alternative can be envisaged. From the perspective of exercising one’s data subject rights, it makes more sense to identify relevant variables on a case-by-case basis. These may relate to the data itself (e.g. accuracy, public interest, sensitivity, format), the source (e.g. voluntarily shared, inferred), the data subject (e.g. role in public life, child), time, context, etc. Each of these ‘variables’ – some of which correspond with categories in obsolete data taxonomies – should be seen as non-binary continuums.

By analogy, one could think of an audio equaliser, ubiquitous in eighties’ stereo sound systems. Every slider represents a variable, impacting – to a greater or lesser extent – what comes out of the speakers. Similar to its audio counterpart, the ‘personal data equaliser’ comes with certain pre-sets: for certain situations or ‘data types’, there will be pre-defined defaults. Depending on the circumstances, certain sliders will be hardwired (e.g. format of the data, controller), whereas others might still be tweakable (e.g. visibility/obscurity). Crucially, determining the configuration of parameters is only possible a posteriori, when evaluating the applicability of data subjects’ rights in a particular case.
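For readers who think in code, the equaliser metaphor can be sketched as a tiny data model. This is purely illustrative: the slider names, numeric values and the ‘public-register’ preset are all hypothetical, and a real assessment is a legal judgment, not a numeric one. The sketch only shows the structural idea: variables as non-binary continuums, situation-specific presets, and some sliders hardwired while others remain tweakable.

```python
from dataclasses import dataclass, field

@dataclass
class Slider:
    """One variable on the equaliser, positioned on a continuum from 0.0 to 1.0."""
    name: str
    value: float
    hardwired: bool = False  # some sliders are fixed by the circumstances of the case

    def set(self, value: float) -> None:
        # Hardwired sliders cannot be moved once the situation fixes them
        if self.hardwired:
            raise ValueError(f"slider '{self.name}' is hardwired in this situation")
        # Clamp to the continuum; sliders are gradual, never binary on/off switches
        self.value = min(1.0, max(0.0, value))

@dataclass
class Equaliser:
    sliders: dict = field(default_factory=dict)

    @classmethod
    def preset(cls, situation: str) -> "Equaliser":
        # Pre-defined defaults for a given situation or 'data type' (hypothetical values)
        presets = {
            "public-register": [
                Slider("public interest", 0.9, hardwired=True),  # fixed by law
                Slider("sensitivity", 0.2),
                Slider("visibility", 0.8),  # still tweakable, e.g. via de-indexing
            ],
        }
        eq = cls()
        for s in presets.get(situation, []):
            eq.sliders[s.name] = s
        return eq
```

Under this sketch, a request to reduce the visibility of register data moves a tweakable slider, while the public-interest slider stays locked – mirroring the point that configuration can only be settled a posteriori, case by case.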

The Data Equaliser acknowledges the complexity of today’s information processing landscape. It recognises the impossibility of determining a priori the potential implications for an individual of one type of personal data or another. Today’s vast – and quickly expanding – data processing ecosystem transforms seemingly trivial and/or anonymous data into personal data and vice versa. Unsurprisingly, determining the reach of data protection rights (notably, the right to erasure) is a tough exercise in the abstract. Though helpful indicators, the personal data categories defined by the legislator do not offer quick-and-easy answers either. The idea behind the ‘personal data equaliser’ recognises the messiness of data and the importance of looking at the particular circumstances of each individual case. It acknowledges the fluidity of ‘personal data’, depending on time and context.

Looking ahead, attempts at bringing more structure to the concept of personal data should focus on identifying potential variables rather than types of personal data. Such a functional approach will be much more valuable to the interpretation of data subject rights in practice.

French Court ordering Selfie-Takedown

From the Maw-Law Blog: “On December 12, 2015, Brahim Zaibat, a dancer and choreographer, posted on social media a selfie he had taken two years earlier, showing him in an airplane, just a seat behind the one where Jean-Marie Le Pen, the honorary president of the French National Front, had fallen asleep. …” Read more at:

Irish Case highlights different tiers of responsibilities regarding RtbF implementation

The Irish Times recently reported that: “Google has refused to remove search results that show the names of newly naturalised Irish citizens in the State’s official gazette, because of the Government’s ‘ongoing choice’ to make the information public.”

Indeed, after the individuals approached the Irish DP Commissioner, the latter stated that the publication of information is mandated by a 1956 law.

Shortly after the Irish Times reported on these issues, the official government website publishing the names made a technical change in order to prevent indexing by search engines (while still maintaining the actual content).

The overlap with both the Google Spain Case and the still pending Manni Case is striking and demonstrates a broader issue. Though search engines clearly have some responsibility in the delisting process, this seems the right time for regulators and official public registers to proactively reassess their – often decades-old – publication/divulgation policies.

The divulgation or removal of information should not be seen as a binary, nor can responsibilities in this regard simply be imposed on one entity exclusively.

CJEU is asked to rule on the ‘Right to be Forgotten’ again

The Italian Supreme Court recently asked the CJEU for a preliminary ruling on two questions regarding the ‘right to be forgotten’.

[Disclaimer: this information is loosely translated from official documents published by the Dutch Ministry of Foreign Affairs. The original request can be found here [it]. The CJEU’s documents folder (still empty at the time of writing) can be found here.]

Facts & Procedure (Case C-398/15 – Manni)

The business of the original plaintiff, Salvatore Manni, had gone bankrupt in 1992. This was recorded in a public Company Register, managed by the defendant (Camera di commercio di Lecce). The plaintiff argued that he – his house-selling business in particular – suffered damages, and requested the defendant to anonymise his name or restrict access to the register. The defendant stated that the Company Register is a public database with the primary function of providing (on request) relevant information about companies. The case escalated all the way to the Italian Supreme Court (Corte Suprema di Cassazione), which referred two questions to the CJEU.

Questions referred

The Italian Court essentially wonders whether information legally consigned to (and made public by) the defendant, can be erased, anonymised or access-restricted after a certain time. The Court does point out the importance of the public Register (for legal certainty). Referring to the Google Spain Case (C-131/12), the Court asks not whether the information should be erased from the Register, but whether limits should be put as to the (further) use of this public information by third parties.

  1. Does Article 6(1)(e) of the Data Protection Directive take precedence over the publication through the company register mandated by Directive 68/151/EEC and corresponding national legislation, to the extent that the latter requires that anyone should have access to the personal data in the register without restrictions?
  2. Does Article 3 of Directive 68/151/EEC allow, in derogation from the rule that the Company Register stores public information for an indeterminate time and can be consulted by anyone, the information to be made no longer ‘public’, though still available to a specific group, with this to be decided on a case-by-case basis by the Register’s manager?


The underlying facts in this ‘Manni’ Case are strikingly similar to the ones in the Google Spain Case. Instead of focusing on the third party, however, the CJEU is now asked to evaluate the obligations of the original publisher. In Google Spain, it had already been decided (by the national DPA) that the original publication could not be touched, before the case even reached the CJEU. In the Manni Case, the original source also has a legal obligation to publish. Yet, it is not asked to remove personal data from the source altogether – only whether the source can be asked to make the data less accessible. This raises very interesting questions – left unanswered in Google Spain – as to the obligations on the shoulders of original publishers and different degrees of publicity.

To be continued…!

Forget, Erase & Delist – But Don’t Forget The Broader Issue

Current State of Affairs

A new year, a new CPDP Conference (Computers, Privacy & Data Protection, 21-23 January 2015). In the past 12 months we have seen privacy and data protection issues taking a much more prominent role in many different internet policy discussions. One of the key examples in this regard is the so-called Google Spain case by the Court of Justice of the EU (CJEU). Acknowledging the right of individuals to ask search engines to delist certain name-based search results, the ruling sent shockwaves through internet policy circles. If anything, the case has made us all re-think the balance between different fundamental rights and interests, the allegedly ‘neutral’ role of search engines and the extra-territorial reach of local regulations. Besides the unprecedented public debate and media coverage, the CJEU’s decision also resulted in the Article 29 Working Party publishing interpretation guidelines and Google setting up its very own ‘Advisory Council’, which held public hearings across the EU.

Unsurprisingly, the Google Spain ruling is usually talked about against the backdrop of the so-called ‘Right to be Forgotten’. This ‘right’ has been criticised fiercely by freedom of expression advocates and is emblematic of the fissure between the US and EU regarding online privacy policy-making. Nonetheless, there is at least one point all sides seem to agree on: the terminology is very problematic. Hence, it is great to see that the latest draft of the EU’s ‘Proposal for a Data Protection Regulation’ simply refers to the ‘right to erasure’ (Article 17). Still, the provision has been attacked for being unclear on both its scope and how it is to be implemented. But can we really be that upset about this? Isn’t the exact goal of legal norms to put forward an abstract – and especially future-proof – principle that should be interpreted differently, depending on the relevant facts and context of each case? Shouldn’t we rather look at the judicial (courts) and executive (e.g. Data Protection Authorities) branches to help make sense of the rules put forward by the legislator?

Some Thoughts on Balancing

Requesting the removal of certain information (on whatever legal ground) will always generate a conflict of interests and rights. In the context of the ‘right to be forgotten/erasure’ debate, the most recurring conflicts relate to either the right to freedom of expression (Article 11, CFREU) or economic freedoms (Article 16, CFREU). The ‘conflict’ between privacy and freedom of expression interests, however, is immensely inflated (to the great benefit of the big data industry).

As most readers will know, Europe has a rich legal tradition of balancing the two rights (most notably in the case law of the European Court of Human Rights), with clear criteria and safeguards for restricting either right. As a side note, one can only applaud the CJEU in Google Spain for clearly distinguishing the responsibilities and protections of actual speakers (i.e., newspapers) from third parties (i.e., search engines), each subject to a different balancing test.

Additionally, if you look at Google’s transparency report, it becomes clear that the majority of delisting requests do not relate to legitimate news reporting websites in the first place. Instead, most requestors seem to be average Joes concerned about how some websites gratuitously show information about them on the basis of a mere name search. In short, the ‘right to be delisted’ is about online obscurity, not about eradicating information from the internet altogether.

More importantly, we should not be swayed by ‘right to be forgotten’ rhetoric professing that it constitutes a fundamental threat to freedom of expression online. The right to erasure has a much more important – and largely understated – goal: empowering data subjects with regard to their data being harvested and exploited ‘behind the scenes’ (e.g. for commercial/political profiling, digital market manipulation, etc.). No conflict with the right to freedom of expression exists in these contexts. Instead, these situations usually require a balancing exercise between individuals’ privacy and data protection interests on the one hand and the data controller’s economic freedoms (Article 16, CFREU) on the other. With regard to the latter, the Google Spain case made clear that such freedoms weigh considerably less when compared to freedom of expression interests. Still, neither fundamental right/interest can be discarded without giving it due regard in the balancing exercise first.


The right to erasure undoubtedly results in a conflict of rights/interests that needs to be solved. Whereas many cases will be very straightforward, a considerable portion will require a more thorough balancing exercise. We should be wary, however, not to be blinded by the rhetoric of freedom of expression absolutists, libertarians and corporate lobbyists defending their own agendas. We should not want to reinvent the wheel either. Balancing exercises and the proportionality principle are deeply embedded in the European legal framework. Applying them to the issue(s) at hand might not always be straightforward. But is it really asking too much from entities that, ultimately, benefit from using personal data?

Finally, the whole debate on the ‘Right to be Forgotten’, the ‘Right to Erasure’ and the Google Spain case seems to be far from finished. I hope you join me in congratulating Computers, Privacy & Data Protection (CPDP) for having provided a fertile platform (books and panels) for discussing these issues.

*This blog post originally appeared as an op-ed on the Internet Policy Review*

R2E – R2O – RtbF: What’s in a Name?

– Excerpt from Draft Article –

The Google Spain case has definitely added fuel to the fire of the so-called ‘Right to be Forgotten’ debate. Much of the discourse, however, has mixed up several related concepts. Hence, it is important (once again) to distinguish and clarify three important notions: the ‘right to be forgotten’, the ‘right to object’ and the ‘right to erasure’. First of all, the so-called ‘right to be forgotten’ is not mentioned in the Directive (and it has been removed from the latest DPR draft as well), but is rather used as a catchphrase in the rhetoric of different sides in the debate (pro and contra). In fact, the ‘right to be forgotten’ can be described as an umbrella term, aiming to encapsulate different important rights. Its very name already causes confusion by suggesting an obligation on third parties to ‘forget’. Instead, it is a rather clumsy translation into law of a broader policy goal. In this regard, it can be traced back to the French droit à l’oubli. This ‘right’ has traditionally been applied in situations where an individual was confronted with publicity of his personal life in the media in a disproportionate or unreasonable way (e.g. an ex-convict who sees new articles appearing decades after the facts). Nevertheless, it has no dedicated legal ground and is usually invoked on the basis of a variety of legal frameworks (e.g. the right of personal portrayal, defamation, the general right to privacy, etc.). Given the potential conflict with the right to freedom of expression, the right has only been applied sporadically by courts. In any event, this right implies the presence of a (potential) imminent harm that can only be prevented by removing the information or at least preventing its (further) publicity.

The right to object and the right to erasure can be found in the data protection framework. The rationale of these rights is not so much to prevent/withdraw the publication of one’s personal data, but rather to empower data subjects. Instead of rights to ‘stop me from speaking about you’, they are intended as a check on how personal data is used and allow individuals to control the use of their personal data over time. The right to object (Article 14) can be invoked on the basis of compelling and legitimate grounds relating to one’s particular situation. It should be emphasised, however, that this right only relates to a specific processing activity. When successfully exercised, the controller will no longer be allowed to process the personal data for the purposes objected to. The same personal data might still be processed for different purposes, for as long as these other activities comply with the data protection framework. A social network, for example, will no longer be able to process my personal data for direct marketing, but can still use it for other purposes (e.g. statistics, personalisation, etc.). The right to erasure (Article 12(b)), on the other hand, addresses the personal data itself. It can be invoked whenever the controller does not comply with the Directive, in particular because of the incomplete or inaccurate nature of the data. In other words, when the data subject can demonstrate that the controller has violated any of its legal obligations under Directive 95/46, he/she can – depending on the facts – obtain the removal of his/her personal data. If successful, the data cannot be used for any other purpose.

To summarise, whereas the right to object relates to a specific processing activity, the right to erasure relates to the data itself. The ‘right to be forgotten’ constitutes a catchy – though deceptive – policy goal.