PhD research Tilburg Institute for Law, Technology, and Society (TILT)
At present, the following research is being carried out by our internal PhD candidates:
Algorithmic Transparency as protection against automated data processing under the relevant legal frameworks
Emre Bayamlioglu
Major research question and the research framework
What transparency needs are engendered by data-driven decision-making practices, are these ‘transparency desiderata’ properly addressed in the current EU data protection regime, and to what extent do IP rights stand as an impediment?
The study unfolds and elaborates the major research question through sub-questions within the following framework:
- A legal framework of transparency that goes beyond the conventional understanding – certain access rights and disclosure requirements – to ensure the intelligibility of algorithmic processes and their possibly discriminatory and privacy-invasive outcomes.
- Whether the EU data protection regime is compatible with the extent, forms and mechanisms of the transparency desiderata prescribed within the study.
- Taking into account Recital 63 of the GDPR – which provides that the transparency allowances of the data protection regime should not adversely affect the rights and freedoms of others – to what extent do IP rights stand as an impediment to the transparency framework?
Supervisor: Prof. dr. Ronald Leenes
Sensing the risk
How citizen sensing may transform the governance of environmental risk to public health
Anna Berti Suman
‘Citizen Sensing’, framed as grassroots-driven monitoring initiatives based on sensor technology, is increasingly influencing the governance of environmental risk to public health. When lay people distrust official information or simply want to fill data gaps, they may resort to sensors and data infrastructures to visualize, monitor, and report risks posed by environmental factors to public health. Although it may initially conflict with institutional processes, Citizen Sensing may ultimately converge with and actually strengthen institutional risk governance. This PhD project investigates under which conditions community-led Citizen Sensing, responding to a risk and possibly generated by distrust, can complement institutional risk governance, and which interventions are needed for the practice to result in this contributory outcome.
The practice of Citizen Sensing is legitimized on the basis of individual rights – the right to live in a healthy environment and the right to access environmental information – and promises to contribute to a more accountable governance of the risk. This complementary potential is assessed by empirically researching the policy uptake of Citizen Sensing and its ability to produce the mitigation or even removal of the risk at issue. A number of case studies, e.g. the Safecast radiation monitoring case and the AiREAS air monitoring case, are examined through a mix of qualitative and quantitative methods. The case studies are analysed through a theoretical framework built on theories derived from legal, socio-political and STS scholarship on Risk Governance, the Risk Society, and Boundary Work, combined with theories on Co-production, Communicative Action and the role of Non-Expert Knowledge and Scientific Knowledge in society.
Supervisors: Prof. dr. Ronald Leenes and Prof. dr. Jonathan Verschuuren
The puzzle of sharing personal data collected by private companies for law enforcement purposes
Magda Brewczyńska
The European data protection framework consists of two separate regimes with different thresholds for the protection of personal data. The first, governed by the General Data Protection Regulation (GDPR), applies to data processing operations carried out in both the private and public sectors, with the exception of activities that serve law enforcement purposes. Processing for those purposes, whenever performed by competent authorities, falls under the second regime, established by the Directive on the processing of personal data by Police and Criminal Justice Authorities (Police Directive).
This seemingly straightforward dualistic system is, however, challenged by increasingly common instances of collaboration between the private and public sectors for the prevention, detection and investigation of crime. Such collaboration may take different forms, with varying degrees of involvement of the private parties. It can be limited to ad hoc facilitation of access to records maintained by private entities, or take the form of continuous, active sharing of personal data with law enforcement authorities, pursued for instance within Public-Private Partnerships (PPPs) established to that end.
The aim of Magda’s PhD project is threefold. Firstly, it will offer a comprehensive overview of the European dualistic data protection framework, as well as its interplay with other legal frameworks that impose obligations on the private sector to share personal data of their clients for law enforcement purposes, such as the anti-money laundering and terrorism financing framework. Secondly, the project will provide a critical analysis of current data sharing practices in the light of the existing regulatory landscape and the jurisprudence of the CJEU and the ECtHR. Thirdly, it will propose a solution to the identified challenges that may contribute to strengthening the right to the protection of personal data.
Supervisors: Prof. dr. Eleni Kosta and Dr. Esther Keymolen
Alexa, Google Assistant, Siri: intelligent assistants and the protection of the private sphere at home
Silvia de Conca
In her research, Silvia analyses how intelligent assistants (such as Alexa or Google Assistant) interact with the private sphere inside the home, and with the legal tools designed to protect it (the GDPR and the ePrivacy Directive in particular).
Intelligent voice assistants, or smart speakers, are gaining popularity in North America and in Europe, thanks to the many functionalities that promise to simplify daily life and entertain both adults and children. The intensive use of AI and cloud computing on which the assistants base their intelligence can make the home, traditionally the sanctuary of the private sphere, more permeable to external interference beyond the dwellers’ control.
Silvia’s research is divided into two parts.
Part one analyses how the home and the private sphere are conceptualized. This analysis has been carried out using concepts and theories from several disciplines, including behavioural sciences, philosophy, history, economics, and law. The result is a conceptual framework that connects these disciplines and highlights three aspects that can be affected by intelligent assistants: the mismatch between perceived and actual control (including considerations about anthropomorphising and crowding in the home); the mediation of reality that occurs when individuals use intelligent assistants, in particular through the vocal interface (based on captology and on the theory of the mediation of technology); and the clustering of the private spheres of different users due to the profiling performed by the assistants, with the emergence of a collective dimension of the private sphere (based on Actor-Network Theory and on theories from the sociology of law).
Part two analyses how the tools existing at the European level to protect the private sphere, in particular the GDPR and the ePrivacy Directive (and the proposed amending Regulation), can be applied to intelligent assistants. The analysis takes an eco-systemic approach, considering the interaction of the assistants with users, temporary guests, IoT devices, third-party app developers, and their mother companies. It also focuses on aspects that appear particularly critical, for instance the role of consent in the vocal interface; voice data as biometric data (and the related special protection); the issues arising when owners of intelligent assistants act as controllers of their guests’ data; data protection by design and by default; the restrictions concerning profiling and automated decisions; cross-border data transfers; and cookies and geolocation.
The incongruities between the frictions highlighted in Part one and the protection analysed in Part two are then discussed, in order to show how the existing legislation might not be fully capable of addressing some of the changes occurring in the private sphere due to the presence of intelligent assistants.
Supervisors: Prof. dr. Ronald Leenes and Mr. dr. Colette Cuijpers
Safeguarding data Protection in an Open data World (SPOW)
Lorenzo Dalla Corte
We are on the cusp of a revolution in urbanism – a shift from data-informed urbanism to data-driven, networked urbanism. An ever-increasing deluge of data is being collected, analyzed, and used to fuel what has been described with the umbrella term “smart city”: an environment in which an extended network of sensors, coupled with big data analytics techniques, produces an extremely large amount of data, making it possible to manage and control diverse facets of the urban ecosystem.
The unprecedented amount of information that smart cities are bound to bring forth, however, warrants a cautious approach and calls for clear-cut values to orient the design of the data gathering and processing infrastructures on which smart cities will be based. On the one hand, the data gathered by and through the smart city environment can revolutionize urbanism and enable a plethora of positive effects and constructive consequences. On the other, the array of networked sensors and the extensive data processing capabilities that define the smart city’s technological stack raise a number of legal and policy issues, which need to be tackled from the very outset of the smart city’s development – from the design phase on.
Privacy and data protection, in primis, are naturally threatened by the deluge of data gathered by the multiplicity of sensors on which the smart city is based. The future evolution of large-scale smart environments has the potential to shift the normality of urban dwelling from a paradigm in which anonymity is the norm and identification the exception to one in which inhabitants are identified by default, and anonymous by exception. The SPOW project, carried out together with TU Delft’s Open Data Knowledge Centre (KcOD), investigates how to balance open data – both a by-product of and a precondition for the smart city’s development – against the right to personal data protection.
Supervisors: Prof. dr. Eleni Kosta (TiU) and Dr. ir. Bastiaan van Loenen (TU Delft)
"Care to Explain?" Articulating legal demands to explain AI-infused decisions, responsibly
Aviva de Groot
The implementation of decision (support) tools that make use of complex computational methods is accelerating. Their current opacity raises concerns of many kinds, e.g. around domination, objectification, and moral agency. Taking the much-discussed duties in European data protection law to explain automatedly generated decisions as a starting point, Aviva de Groot's research aims to identify rights-relevant explanatory benchmarks for these AI-infused practices.
A premise it shares with various disciplines' research paths towards responsible AI design is that the extent to which these technologies cease, for various reasons, to resonate with our intuitive and logical reasoning – common as well as expert – poses obstacles. De Groot's focus in this space is not on the affordances of the technologies we create, but on the affordances of the human epistemic practices that underlie our explanatory exchanges. Understanding 'explanation' as a form of testimony, the question becomes what such a practice should entail in order to be called responsible and just.
Theory is applied from philosophical accounts of epistemic (in)justice, with reflection and corroboration sought in case studies of domains where explanation is, or became, regulated by law and/or professional ethics. The spotlight is on the decision maker/explainer rather than on their explainees, taking on board their interdependence and fundamental equality. Justification of authority, and the maintenance of our communal knowledge space through individually responsible conduct, are emphasized. Finally, a non-human-exceptionalist, non-naive understanding of dignity is called upon to strengthen the element of 'care' in explanatory exchanges where the power imbalances at play are particularly salient.
Supervisors: Dr. Nadezhda Purtova and Prof. dr. Ronald Leenes
Standardising the protection of personal data in the Internet of Things era: a European perspective in an interconnected world
Irene Kamara
Irene’s PhD dissertation explores the role of standardisation in the field of human rights by examining the case of technical standards in support of the right to protection of personal data in the EU.
The research mainly aims to contribute to the data protection literature by introducing a framework of principles and safeguards under which technical standards may support the protection of the right enshrined in Article 8 of the EU Charter of Fundamental Rights. The framework takes into account the legitimacy and governance issues of standards-development processes, the human rights nature of the right to the protection of personal data, and the policy and regulatory appraisal of technical standards as an instrument that may support the aims of EU secondary legislation.
Supervisors: Prof. dr. Paul de Hert, Prof. dr. Kees Stuurman and Prof. dr. Eleni Kosta
The impact of the use of financial services technologies (fintech) and biometric technologies in Kenya on issues of data justice
Hellen Mukiri-Smith
Hellen Mukiri-Smith is undertaking her PhD research within Dr. Linnet Taylor’s Global Data Justice Project. She is conducting research on the impact that the use of financial services technologies (fintech) and biometric technologies in Kenya has on issues of data justice. The research explores:
1. the extent to which biometric and fintech data ecosystems or data value chains create power asymmetries, and how power is distributed within these ecosystems or value chains among different actors;
2. the regulatory environment that governs fintech and biometric technologies, including data protection and competition regulations and other upcoming regulations meant to govern the use of biometrics;
3. how users of fintech and biometric technologies experience using these platforms and sharing their data through them. What freedoms or unfreedoms do they experience? What are platform users’ valued functionings?
Supervisors: Dr. Linnet Taylor and Prof. Morag Goodwin
A risk-based approach to fundamental rights in the context of personal data processing. Is the risk-based approach of the General Data Protection Regulation compatible with the aim to achieve fundamental rights protection?
Claudia Quelle
This research project concerns the risk-based approach of the General Data Protection Regulation, and in particular its relation to the objective to protect fundamental rights. The risk-based approach is understood as a starting point in compliance and enforcement practices which entails that the applicable legal obligations are, or should be regarded as, more or less stringent in accordance with the level of risk posed by the processing operation to the rights and freedoms at stake. The focus is not only on the letter of the law, but also on the underlying duty to prevent adverse effects on the individuals concerned. The data protection impact assessment (DPIA) and the prior consultation together play a pivotal role in the articulation, assessment and subsequent mitigation of risk.
The risk-based approach can be seen as a meaningful supplement or alternative to user empowerment, embodied in data protection law through consent and data subject rights. This is because the onus of bringing about proper rights protection is placed first and foremost on controllers and the supervisory authorities which are to hold them to account. It is also a flexible instrument, able to cope with societal and technological change.
However, its suitability as a regulatory instrument to bring about the protection of fundamental rights can be questioned. I am researching a number of facets of this main concern. Will the data protection impact assessment be taken seriously by controllers, and what would that require? If low-risk situations are neglected, can we still speak of full-fledged fundamental rights protection? Lastly, can we speak of such protection if its content and scope are determined, first and foremost, by the controller and its supervisory authority, rather than by the (ideally: empowered) data subjects concerned?
Supervisors: Prof. dr. Ronald Leenes and Prof. dr. Bert van Roermund
Transparency requirements in Big Data practices in the law enforcement domain
Sascha van Schendel
The increased use of Big Data analytics to extract information and patterns from large datasets, and to construct predictions, contributes to the importance of data and its authoritative role in decision making. Especially in sectors such as law enforcement, Big Data analytics can change the way processes work and decisions are made. In the law enforcement sector, decisions have a very serious impact on the human rights of suspects or other citizens in the case at hand. In the course of the general policing task, the fundamental rights of individuals or groups can be affected as well by the use of Big Data analytics. A specific issue is the opacity of these processes towards the individuals affected and the general public, which creates a lack of awareness as well as obstacles to the exercise of human rights, such as the right to an effective remedy.
The research targets specific practices of Big Data analytics and analyzes the relevant safeguards and requirements under the frameworks of criminal law and data protection legislation, at both the EU and Dutch levels, with specific attention to transparency requirements.
Supervisor: Prof. dr. Eleni Kosta
The following PhD candidates will upload a research description later:
- Tom Chokrevski
- Shazade Jameson
- Mara Paun