“In his Opinion delivered today, Advocate General Pedro Cruz Villalón takes the view that the Data Retention Directive is as a whole incompatible with the requirement, laid down by the Charter of Fundamental Rights of the European Union, that any limitation on the exercise of a fundamental right must be provided for by law.”
[Excerpt from work in progress…]
It seems fair to say that the title of the proposed provision – the Right to be Forgotten and to Erasure – is ill-conceived. It has led to largely unfounded fears among critics and overblown hopes among enthusiasts. It has been used as a well-resonating political slogan, consolidating the general demand for more control over personal data in today’s information society. As Google’s European head of privacy sharply remarked, the provision can be compared to a Rorschach test: “people can see in it what they want.” It would indeed be desirable if the terminology were abandoned in the final text, in favour of the more accurate – and already existing – ‘right to erasure’. But even then, the application of this right will not be as straightforward as its name suggests.
Tomorrow, Advocate General Jääskinen is to release his Opinion in the much-debated Google v Spain (aka Right to be Forgotten) case. According to Google’s adversaries, search engines are to be considered data controllers under the Data Protection Directive for the personal data (on the websites) they refer to, and are therefore (under certain circumstances) required to remove links (for more info, see my previous blogpost on this case).
A counter-argument often invoked by Internet intermediaries against assertions of liability relates to the automated nature of their processing activities. In other words, intermediaries often argue that they merely provide a neutral service: content-agnostic and fully automated. After all, it is claimed, decisions are made by algorithms and no human eyes actually ‘see’ the information. In 2010 the CJEU seemed to acknowledge such an interpretation, stating that services that only perform activities ‘of a mere technical, automatic and passive nature … having neither knowledge of nor control over the information which is transmitted or stored’ should be exempted (Louis Vuitton v Google, C-236/08). In the UK, Justice Eady has also ruled that automation precludes intentionality (in that case, Google was not held liable for defamatory snippets it displayed in its search results). The argument that automation equals neutrality, however, seems to be falling apart. Being a mere (automated) ‘organizing agent’ does not necessarily entail neutrality, nor does it necessarily validate an exemption from liability. After all, as U. Kohl aptly describes: “both organization and automation require human judgment and thus have assumptions, values and goals embedded into them.” Put differently, the use of algorithms does not imply neutrality. Instead of looking at the automated nature of an intermediary service, one should look at how it is designed. This interpretation was also followed by AG Jääskinen in L’Oréal v eBay. In his Opinion, he clearly stated that neutrality does not even constitute the right test to decide on the exemption from liability. Instead, one should look at the type of activities of the relevant service provider. The liability of a search engine, for example, may depend on whether it simply refers to information, displays snippets or auto-suggests the information.
Particularly with regard to defamatory content, a considerable number of – widely diverging – cases have emerged over the last few years. A number of French courts have ruled search engines liable for auto-suggesting information. Very recently, the German Federal Court of Justice also overturned two decisions, requiring Google to block defamatory auto-complete search suggestions. In a 2011 Italian judgment, the Court followed the plaintiff’s claim that Google should be considered to have ‘created’ the auto-complete terms. Even if search engines are not considered to create the terms themselves, they still make a deliberate decision to adopt autocomplete functionality and design the underlying algorithms. Moreover, it is particularly hard to rely on the neutrality argument (based on automation and alleged content-agnosticism) after the intermediary has received a notification (infra).
Put briefly, we see a crumbling of the ‘automation = neutrality = exemption from liability’ argument. Particularly taking into account AG Jääskinen’s previous statements in L’Oréal v eBay (specifically expressing concern about the argument that “in order to establish whether the liability of a paid internet referencing service provider may be limited under Article 14 of Directive 2000/31, it is necessary to examine whether the role played by that service provider is neutral in the sense that its conduct is merely technical, automatic and passive, pointing to a lack of knowledge or control of the data which it stores”), it is most likely that he will come to a similar conclusion in his Opinion in the Google v Spain case, which is expected to be released tomorrow.
There seems to be a general consensus in many EU jurisdictions that minors should benefit from stronger protection of their privacy and personal data. The European Commission’s proposal for a new Data Protection Regulation expressly states that the processing of personal data of a child below the age of 13 shall only be lawful if and to the extent that consent is given or authorised by the child’s parent or custodian (art. 8). In one of its recitals, the proposal mentions that minors deserve extra protection because they may be less aware of risks, consequences, safeguards and their rights. The proposed Regulation also provides extra protection to children in specific provisions relating to transparency (art. 11, recital 46), the right to erasure (art. 17, recital 53), data protection impact assessments (art. 33) and codes of conduct (art. 38).
One of the main issues regarding the personal data protection of minors is the age at which they can be expected to give valid consent. The law is unclear on this point and practices vary across jurisdictions. Many European jurisdictions seem to draw a vague line around the age of 14. Nevertheless, potential data controllers have an extra duty of care when dealing with minors. In Germany, for example, professionals such as doctors, social workers or teachers have such a Fürsorgepflicht when assessing the consent of minors from 12 upwards. In practice this means that the minor’s consent is a priori valid, but the data controller must make a professional judgement and consult the parents – or even refuse consent – if deemed appropriate. Failure to do so might result in a breach of their duty. The French Data Protection Authority (DPA) has emphasised the importance of involving parents, and expressly stated that written parental consent is required for the collection of personal data in a school environment (en milieu scolaire). Both Germany and France have also stressed that minors should not be consulted with regard to personal data that does not relate to them (but, for example, to their parents or siblings). The Belgian Privacy Commission has stated that extra care is required for minors who have not yet reached ‘maturity’, but leaves this concept deliberately vague. Although the Belgian Privacy Act does not explicitly provide a specific regime for minors, its provisions are flexible enough to strike an appropriate balance depending on the context and actors at stake. Put briefly, according to the Belgian DPA specific parental consent will be required when the processing relates to sensitive data (e.g. health information); when the child has not yet reached maturity; when the purpose is not in the direct interest of the child (e.g. direct marketing); or when the data is intended for publication.
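Purely as an illustration, the Belgian DPA's four criteria can be read as a disjunctive rule: if any one of the conditions applies, specific parental consent is needed. The sketch below is a hypothetical encoding of that reading (the function name and parameters are my own shorthand, not terms from the Belgian DPA's opinion):

```python
def parental_consent_required(sensitive_data: bool,
                              mature: bool,
                              in_childs_direct_interest: bool,
                              intended_for_publication: bool) -> bool:
    """Hypothetical sketch of the Belgian DPA criteria described above:
    specific parental consent is required if ANY of the four conditions holds."""
    return (sensitive_data                     # e.g. health information
            or not mature                      # child has not yet reached 'maturity'
            or not in_childs_direct_interest   # e.g. direct marketing
            or intended_for_publication)       # data meant to be published

# A mature minor, non-sensitive data, processing in the child's own
# interest, no publication: no specific parental consent required.
print(parental_consent_required(False, True, True, False))  # → False
```

The point of the sketch is only that the criteria are alternatives, not cumulative conditions; in practice, of course, each element (notably 'maturity') calls for a contextual, case-by-case assessment rather than a boolean flag.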
The Portuguese DPA has emphasised that although children over 14 can give valid consent (even from the age of 12 in trivial matters), it will generally be required that their parents are at least consulted. In Spain, data protection legislation explicitly states that the personal data of minors over 14 may be processed with their consent, except ‘in those cases where the law requires the assistance of parents or guardians in the provision of such data’. The general rule of thumb in Denmark seems to be to apply the age of legal competency (15) to data protection as well. The DPA, however, has stressed that this is merely a rule of thumb and that all relevant elements of each particular situation should be taken into account. Sweden has a similar guideline (age 14-15, exceptionally 13) that remains subject to context-specific elements and the minor’s level of maturity. The European NGO Alliance for Child Safety Online (eNACSO), finally, has stated that parental consent is required whenever a minor cannot be expected to understand the data transaction. The Alliance adds that service providers cannot deduce general consent from the fact that their service is paid for or contracted by the minor’s parents.
Besides issues related to consent, it has been stressed that extra care must be taken with the transparency requirement when dealing with minors. The data controller will have to make its information very accessible, simple and direct. Data controllers in Spain are even legally bound to provide this information in easily understandable language, with express indication of the minor’s rights. In Sweden, data controllers will always have to inform the parents of minors, even if the minors are deemed capable of giving valid consent. According to the Belgian Privacy Commission, minors should retain full control over their personal data and be encouraged to inform their parents of their online activities.
Put briefly, when processing the personal data of minors, data controllers will always have to take extra care. Although age would constitute a straightforward and easy criterion for deciding whether or not consent is an adequate ground of legitimacy, other criteria are deemed more important (e.g. level of maturity). Data controllers have an important responsibility and duty of care when dealing with minors. Each situation of data processing will have to be assessed independently, taking into account the specific context, the identity of the actors and the type of personal data (processing). As a general rule, data controllers are advised to put extra effort into all their legal obligations (supra). More specifically, they should provide short and understandable information, ask for parental consent and clearly define the purpose and scope of the processing.
From the Abstract:
This research paper analyses societal implications of Deep Packet Inspection (DPI) technologies.
Deep Packet Inspection (DPI) surveillance technologies are communications surveillance tools that are able to monitor the traffic of network data that is sent over the Internet at all seven layers of the OSI Reference Model of Internet communication, which includes the surveillance of content data.
The analysis presented in this paper is based on product sheets, self-descriptions, and product presentations by 20 European security technology companies that produce and sell DPI technologies. For each company, we have conducted a document analysis of the available files. It focused on the following four aspects:
1) Description and use of the Internet surveillance technologies that are produced and sold.
2) The self-description of the company.
3) The explanation of the relevance of Internet surveillance, i.e. why the company thinks it is important that it produces and sells such technologies.
4) A documentation of what the company says about opportunities and problems that can arise in the context of Internet surveillance.
The assessment of societal implications of DPI is based on opinions of security industry representatives, scholars, and privacy advocates that were voiced in white papers, tech reports, research reports, on websites, in press releases, and in news media. The results can be summarized in the form of several impact dimensions:
1. Potential advantages of DPI
2. Net neutrality
3. The power of Internet Service Providers (ISPs) for undermining users’ trust
4. Potential function creep of DPI surveillance
5. Targeted advertising
6. The surveillance of file sharers
7. Political repression and social discrimination
The conducted analysis of Deep Packet Inspection (DPI) technologies shows that there is a variety of potential impacts of this technology on society. A general conclusion is that, for understanding new surveillance technologies, we need not only privacy and data protection assessments, but also broader societal and ethical impact assessments.
You can find the full paper here.
German Court of First Instance rules that YouTube bears only secondary liability for users’ infringing uploads, but must prevent future infringements of identified works by screening new uploads and implementing a word filter.
Not only should people be allowed to block websites from collecting and keeping their data, he says, but that should be the default setting — on European browsers, at least.