Irish Data Protection Authority on Facebook

  • Facebook’s real-name policy was seen as having “substantial benefits in protecting the people who use Facebook,” and the DPC said the social network had “valid and justified” reasons for prohibiting pseudonyms;
  • Information collected through the use of social plug-ins is not associated with individuals (users or non-users);
  • Facebook’s tag suggestion tool does not go against DP regulation (though could be more transparent);
  • Advertisement-based business-model is legitimate;
  • It is not possible for third-party developers to repeatedly access personal data;

See: Facebook Gets Passing Grade From Irish Agency Audit.

Shunning Facebook, and Living to Tell About It – NYTimes.com

“As Facebook prepares for a much-anticipated public offering, the company is eager to show off its momentum by building on its huge membership: more than 800 million active users around the world, Facebook says, and roughly 200 million in the United States, or two-thirds of the population.

But the company is running into a roadblock in this country. Some people, even on the younger end of the age spectrum, just refuse to participate, including people who have given it a try.

One of Facebook’s main selling points is that it builds closer ties among friends and colleagues. But some who steer clear of the site say it can have the opposite effect of making them feel more, not less, alienated.”

See: Shunning Facebook, and Living to Tell About It – NYTimes.com.

Privacy Protection and Intermediary Liability

During the past decade, interactivity on the Internet has grown considerably, partly fostered by the rise of User Generated Content (UGC) platforms, which is in turn driven by a growing demand for interactivity. In the last five years or so, interaction and UGC platforms have also become more and more ‘social’. In other words, information flows have evolved from uni-directional to bi-directional to multi-directional. This evolution has raised some major issues, especially with regard to privacy. One interesting question concerns the liability of intermediaries in privacy conflicts: are platforms such as YouTube, Facebook, Flickr, etc. liable for potentially infringing content that they host? In Europe, an intriguing interaction between two important directives (the e-Commerce Directive and the Data Protection Directive (DPD)) can be observed.

Imagine the following situation: X uploads a video to YouTube of him and his friends bullying a classmate (Y) on the playground. The video becomes very popular and appears in the top ten most-viewed videos in that region. Y discovers the video and notifies YouTube, which in turn removes it. A similar situation was the subject of the famous Italian Google case in 2009. The court convicted three executives mainly because Google had allegedly not sufficiently ‘warned’ X before the upload. The Italian judge’s reasoning is inconsistent with the law: prior warning is completely irrelevant in evaluating liability. In the following paragraphs I will propose the correct interpretation of the relevant rules.

For as long as the intermediary has no ‘actual knowledge’ of the illegality of the content, it cannot be held liable. Many different interpretations exist of what constitutes ‘actual knowledge’, but proper notification is undeniably one of them. In other words, when someone notifies an intermediary that content it hosts or helps distribute is illegal, the intermediary should take it down in order to benefit from the exemption regime in the e-Comm Directive. This simple principle – mainly used in copyright cases – has been ignored by many judges in a quest for easy targets with deep pockets.

The question that I am trying to answer here is what should happen when an intermediary is notified that the content in question relates to a third party. Initially it had no knowledge of the actual content, so it could not be regarded as a (personal) data controller in the sense of the DPD with regard to the individuals potentially appearing in the video. It was, however, already a data controller vis-à-vis the user who uploaded the content, with regard to the file as a whole. So, from the moment it becomes aware of the presence of third parties in the content itself, it becomes subject to the DPD. But for as long as the intermediary does nothing with this new information (without being instructed to by the uploader), it is a mere ‘data processor’, processing the data under the authority of the original poster, i.e. the data controller. However, when it starts using the data for its own (new) purposes (e.g. picture-tagging on Facebook results in the platform adjusting the relevant (commercial) profiles), the intermediary becomes a controller itself and consequently has to comply with all the relevant DPD provisions.

Nevertheless, as long as the intermediary remains a mere data processor, it can legally presume that the data controller has complied with the DPD, and more specifically with article 7 of the Directive. In other words, the data controller/original poster is (a priori) the only person to be held liable for infringing the data subject’s rights (e.g. for not asking prior consent, not removing data upon request, etc.). However, when the data processor – i.e. the intermediary – learns about this infringing behavior, it has to act promptly to avoid liability. You could say that the data controller’s non-compliance with the DPD renders the respective content ‘illegal’. Just as in copyright cases, the intermediary becomes liable from the moment it gains ‘actual knowledge’ of this illegality (see e-Comm Directive). Instant takedown upon notification clearly exonerates the intermediary. This interpretation seems compatible with recital 14 and art. 1(5)(b) of the e-Comm Directive.
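The chain of reasoning above can be summarised as a simple decision procedure. The sketch below is purely illustrative: the function names and boolean inputs are my own simplification of the e-Comm Directive’s ‘actual knowledge’ rule and the DPD’s controller/processor distinction, not a statement of the law.

```python
def intermediary_status(uses_data_for_own_purposes: bool) -> str:
    """Role of the intermediary under the DPD once it is aware that
    hosted content contains third-party personal data.
    (Illustrative simplification, not legal advice.)"""
    # Passive hosting under the uploader's authority -> mere processor;
    # using the data for its own (new) purposes -> controller.
    return "controller" if uses_data_for_own_purposes else "processor"


def intermediary_liable(notified: bool, took_down_promptly: bool) -> bool:
    """e-Comm Directive exemption, simplified: no liability without
    'actual knowledge'; once properly notified, the intermediary must
    act promptly (take the content down) to keep the exemption."""
    if not notified:
        return False              # no actual knowledge -> exempt
    return not took_down_promptly  # knowledge + inaction -> liable


# Under this model, a platform that removes content promptly
# after notification keeps the exemption:
assert intermediary_liable(notified=True, took_down_promptly=True) is False
```

The design mirrors the two separate questions the directives pose: first *what* the intermediary is (controller vs. processor), then *whether* it is liable (knowledge plus inaction).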

Many important questions arise from the above: How should ‘actual knowledge’ be interpreted? Do these intermediaries have a monitoring obligation despite art. 15 e-Comm Directive (recital 14 e-Comm Directive excludes application of the directive to issues covered by the DPD)? How should conflicting values such as copyright and free speech be dealt with (taking down the privacy-infringing content might violate the original poster’s rights too)? How does the household exemption (recital 12 & art. 3(2) DPD) fit into all this?

Despite the high complexity of all these questions, I will try to frame a brief answer to the last one. When the uploader knowingly uploads the content without any viewing restrictions (‘public’ on YouTube), or does not act instantly upon learning that the content is infringing, he clearly does not benefit from the exemption. When the content is shared only among Facebook friends, arguably there is not much the data subject can do (just as he/she would not be able to prevent someone from showing the picture to his friends in an offline environment). When the intermediary itself is notified, it will have to make its own decision whether to protect the one person’s privacy or the other person’s freedom of speech and/or copyright.
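The household-exemption analysis for the uploader can be sketched in the same illustrative style (again a deliberate simplification with hypothetical parameter names, not legal doctrine):

```python
def household_exemption_applies(public_upload: bool,
                                acted_promptly_on_notice: bool) -> bool:
    """Illustrative sketch of the argument above: an uploader who
    knowingly shares content without viewing restrictions, or who
    fails to act once aware of the infringement, forfeits the
    household exemption (recital 12 & art. 3(2) DPD). Content shared
    only within a restricted circle of friends arguably keeps it."""
    if public_upload:
        return False  # 'public' upload: not a purely personal/household activity
    if not acted_promptly_on_notice:
        return False  # inaction after gaining knowledge also forfeits it
    return True


# X in the Italian case uploaded the video publicly:
assert household_exemption_applies(public_upload=True,
                                   acted_promptly_on_notice=False) is False
```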

To get back to the Italian case: X arguably cannot benefit from the household exemption and definitely did not comply with the DPD. YouTube, on the other hand, cannot be held liable because it removed the content instantly after it gained ‘actual knowledge’.

In conclusion, the issue discussed here constitutes just one aspect of a larger problem regarding intermediary liability in Europe. Although the applicability of different regulations (AVMS Directive, e-Comm Directive, DPD, etc.) may give the impression of complexity, many cases can be solved quite easily if the rules are interpreted correctly. Many courts in Europe, however, have failed to grasp the intent of the European legislator and show short-sightedness in their decisions. Let us hope that the current re-evaluations of the e-Comm Directive and the DPD will ‘enlighten’ judges by clarifying some important issues and taking the modern technology landscape into account.