The Evolving Cybersecurity Landscape and Regulatory Approaches

Track E: AI and Data Protection

In Track E, we take a critical view of the main data protection and privacy challenges arising from the use of AI in the private and public sectors.

The past couple of years have witnessed the spreading use of artificial intelligence systems across the public and private sectors, ranging from education and healthcare to the delivery of benefits under social protection schemes. By way of example, the public sector has adopted AI to enable efficient delivery of public services and as a policing tool for law enforcement authorities. The push towards the adoption of digital products and services as a result of the COVID-19 pandemic has further accelerated the use of AI in the private sector.

Against this background, concerns have been raised about the privacy and data protection risks of such widespread adoption. These include the lack of effective safeguards for the exchange of personal information across different platforms, the difficulty of obtaining valid consent when providing services to minors, the auditability of platforms delivering public services, and many more. These concerns are not novel: some of them have existed for a long time without a convincing solution. The rapid developments in technology (exemplified by the recent hype around general-purpose AI systems, such as ChatGPT) have been argued to have outpaced policy debates and regulatory frameworks concerning privacy and data protection. Multiple regulations and guidelines have been introduced across jurisdictions to address the challenges associated with the use of AI.

In this track, we aim to take a critical view of the main data protection and privacy challenges arising from the use of AI in the private and public sectors. The following is an indicative list of topics to which contributions for the track may relate:

  • General issues in the AI and law domain, e.g.: 
    • Trust, accountability, and fair and lawful processing of personal data in AI applications.  
    • Re-identification and identifiability in AI systems.  
    • Collective privacy/group privacy to address AI harms. 
    • Explainable AI.  
    • Impact on minors’ privacy rights. 
  • Sector-specific data protection issues in the use of AI, e.g.: 
    • Use of AI in education and research.  
    • Use of AI in law enforcement.   
    • Use of AI in justice. 
    • Use of AI for social welfare delivery/social protection schemes.  
    • Use of AI in healthcare. 
  • Regulatory and market developments in the AI domain, e.g.:   
    • Fundamental rights impact assessment and risk-based approach. 
    • Regulatory models and multistakeholderism in the governance of AI. 
    • The proposed EU AI Act and fundamental rights.  
    • The proposed EU AI Liability Directive and fundamental rights. 
    • The US Blueprint for an AI Bill of Rights: a call for digital constitutionalism?
    • Public-private partnerships for the development of AI / the role of AI-developing companies and platforms vis-à-vis privacy and data protection challenges. 

Deadline for submitting extended abstracts and proposals for panels/interactive workshops via the EasyChair conference system: January 31, 2024

For questions about possible presentations for this track, please contact Dr. Marco Bassini: m.bassini@tilburguniversity.edu