When discussing Artificial Intelligence (AI) policy, it is hard not to talk about the General Data Protection Regulation (GDPR) at the same time. That’s because GDPR has had the most impact of any law globally in terms of creating a more regulated data market – while data is the key ingredient for AI applications. Admittedly, the link between GDPR and AI raises intriguing questions in policy-related conversations. In EU policy making, AI and machine learning are the new hype as Europe aims to be a global leader in AI adoption. But the crux of AI discussions and potential new regulations falls outside the scope of the GDPR.
Consider the Communication on Artificial Intelligence for Europe, which the European Commission published in April 2018. The document aims to set out the relevant legal and ethical framework, to create an environment of trust and accountability, and to ensure that Europe develops and uses AI in accordance with its values. Several countries in the EU (and globally) are adopting national policies that set ambitious plans for AI adoption.
But what is the interaction between GDPR and AI? And how are AI systems regulated by the EU’s GDPR? Are they two friends, enemies – or something in between?
Profiling and automated, individual decision making
A set of specific provisions within the GDPR affects AI-based decisions about individuals, especially those related to automated decision-making and profiling. Many of these are contained in Article 22. But the intent of the GDPR around these provisions is not always clear. This is partly due to the technical complexity of the issue and partly due to the extensive negotiations and compromises that took place during the legislative process. The result is sometimes a misunderstanding of what the General Data Protection Regulation really requires. To get it right, it is important to have an accurate reading of the letter of the law while keeping in view the intention of the legislature.
Without going into a full legal analysis, here are a few comments on these provisions.
Article 22 is a general restriction on automated decision-making and profiling. It applies only when a decision is based *solely* on automated processing – including profiling – and *produces legal effects* concerning the data subject or *similarly significantly affects* the data subject. This wording sets a high threshold for triggering the limitations of the article.
In addition, the stricter transparency requirements in Article 15 relate specifically to automated individual decision-making and profiling that fall within the narrow scope of Article 22. These give the individual the right to be informed of:

- the “existence” of automated decision-making, including profiling;
- “meaningful information about the logic involved”; and
- “the significance and the envisaged consequences of such processing” for the individual.
The bottom line: If Article 22 does not apply, these additional obligations do not apply either.
Despite the narrow applicability of Article 22, the GDPR includes a handful of provisions that apply to all profiling and automated decision-making (such as those relating to the right of access and the right to object). Finally, to the extent that profiling and automated decision-making involve the processing of personal data, all GDPR provisions – including, for example, the principles of fair and transparent processing – apply.
The European data protection authorities have issued guidelines on how to interpret these provisions. While the guidelines may be useful in some respects, the courts will ultimately provide the legally binding interpretation of these rules.
Meaningful information about the logic involved versus algorithmic ‘explanation’
One of the most frequently discussed topics in the context of GDPR and AI concerns the so-called GDPR “right to explanation.” Despite common misinterpretations, the GDPR does not establish – or even refer to – a right to explanation that extends to the “how” and “why” of an automated individual decision.
“Meaningful information about the logic involved” in relation to Article 22 of the GDPR is to be understood as information about the general functionality of the algorithm used, rather than an explanation of the rationale behind a specific automated decision. For example, if a loan application is rejected, Article 15 may require the controller to provide information about the input data relating to the individual and the general parameters set in the algorithm that produced the automated decision. But it does not require disclosure of the source code or an explanation of how and why the specific decision was made.
Algorithmic transparency, accountability, and the intelligibility of AI systems are, however, being discussed outside the GDPR. We need further research and reflection on these issues.
Setting aside intellectual property issues around the source code, would an explanation of the algorithm itself really be useful to individuals? It would probably be more meaningful to provide information on the data used as input to the algorithm, as well as information on how the algorithm's output was used in relation to the individual decision. Ensuring data quality, addressing algorithmic bias, and applying and improving techniques for interpreting what an algorithm does can all play a key role in the fair and ethical use of artificial intelligence.
New rules on AI and machine learning
The European Commission’s recently adopted Communication on AI recognizes the potential impact on AI development of the GDPR and other EU legislative proposals, such as the Regulation on the Free Flow of Non-Personal Data, the e-Privacy Regulation, and the Cybersecurity Act. In particular, the paper announces that AI ethics guidelines will be developed at EU level by the end of 2018. Safety and liability are two other main areas addressed.
The AI community needs to prepare for several EU laws that regulate specific issues at the heart of AI use and development. For example, the recently proposed regulation on promoting fairness and transparency for business users of online intermediation services includes provisions requiring transparency about the main parameters of the algorithms used to rank offerings. Another example is the recently proposed revision of the EU Consumer Rights Directive, which provides that online marketplaces must inform consumers of the main parameters determining the ranking of offers presented in response to an online search query.
GDPR and AI: Neither friends nor enemies
GDPR and AI are neither friends nor enemies. The GDPR sometimes limits – or at least complicates – the processing of personal data in an AI context. But in the end, it can help build the trust needed for consumer and government acceptance of AI as we continue to move toward a fully regulated data market. When all is said and done, GDPR and AI are lifelong partners. Their relationship will mature and solidify as more AI- and data-specific rules emerge in Europe and globally.