Big Data | AI & Law
Date: Time: 09:15 Location: Faculty club

The multidisciplinary executive program AI & Law offers you a broad overview of the legal, regulatory, and ethical issues that arise with the development and use of Artificial Intelligence.

During this program you will learn exactly what role Artificial Intelligence (AI) and Big Data play in today’s geopolitical and global legal landscape. You will get a comprehensive view of the latest developments in the different forms of AI, including Large Language Models (LLMs). You will become fully acquainted with the risk-based rules of the upcoming AI Act regulating the various AI systems and learn how to implement adequate risk management for the various forms of AI. You will learn how these new rules align with the General Data Protection Regulation (GDPR) and how to build upon your existing privacy management programs, for example your existing Data Protection Impact Assessments (DPIAs), to comply with the new requirements under the AI Act for Conformity and Fundamental Rights Impact Assessments (FRIAs). You will be trained in the underlying ethical rules and in how to respond to ethical dilemmas in everyday practice.

Having completed this program, you will have an up-to-date understanding of the legal and ethical challenges of AI, and you will be able to apply the relevant laws and regulations and respond to ethical dilemmas when dealing with such technologies.

This program will be offered at the Faculty Club and Auberge du Bonheur next to Tilburg University. 

Includes lunch and coffee breaks

Register now for AI & Law

"I recently participated in the Big Data, AI & Law course at the Tilburg Institute for Law, Technology, and Society, and it was an excellent addition to my professional development. As an AI governance specialist in financial services, I found the course to be comprehensive and in depth in its approach to the latest advancements in Big Data and Artificial Intelligence (AI) within the legal field. The workshops, all led by experts, were particularly engaging, effective and really enjoyable."

Tony Hibbert, AI Risk Manager, ING Bank

The AI & Law program in short

  • Start: June 3, 2024
  • Price: € 3,500 (Tilburg University alumni and employees receive a 10% discount)
  • Duration: 4 full days
  • NOvA hours: 24
  • Location: The Faculty Club, Tilburg University Campus and Auberge du Bonheur
    Route description
  • Language of instruction: English

Standard program AI & Law

Day 1 - June 3, 2024

Understanding Artificial Intelligence and its societal relevance (morning session)


This morning session of the first day will give a general introduction to what AI is and the societal issues that accompany it, ranging from more recent discussions on the implications of Generative AI to older discussions on discrimination and machine autonomy. We will first look at the history of AI and see how its definition evolved in interaction with technical and social developments. We will then discuss how in the past two decades, key developments within the field of AI have been at the heart of significant societal changes that have triggered calls for regulation.

Learning goals for this session are:

  • Insight into different perspectives on what AI is
  • Understanding of societal issues particular to AI
  • Understanding why developments in AI would warrant new regulation

The new European Regulation on Artificial Intelligence (afternoon session)


This afternoon session will provide an introduction to the new EU AI Act and will cover in depth a number of regulatory and interpretative issues that have already arisen since its adoption. The new Regulation opens a new chapter not only in the obligations of providers, importers, distributors, and users of AI systems, but more importantly creates a new risk-based assessment system for AI systems, introducing four levels of risk, along with specific provisions to identify risks related to general purpose AI models. The session will provide an in-depth explanation of these four levels of risk, offer a comprehensive approach to categorizing AI systems into one of these categories, and present the system of obligations and assessment of compliance with the provisions of the AI Act.

  • Definition of AI
  • Concept of an AI system
  • Prohibited practices in the field of AI 
  • High-risk AI systems
  • Specific transparency risk for certain AI systems 
  • Systemic risks and general-purpose AI models (and large-scale production AI models)
  • Compliance assessment

Day 2 - June 10, 2024

Ethical issues in relation to Big Data & AI (morning session)


The morning session of the second day will deal with the relevance and use of ethics in relation to big data analytics. Topics will include the relevance of ethics for privacy, accountability, transparency and fairness. We will also consider the new problems on the individual, group and societal level that are arising through data analytics, algorithms and AI, and will discuss how ethical frameworks intersect with legal ones in guiding the real-world activities of data scientists. Goals for this module are for participants to:

  • Understand the different perspectives on ethics in relation to data science, and be able to apply these perspectives to evaluate practice;
  • Differentiate between the risks of data analytics that are addressed by current data protection frameworks and those that are not;
  • Determine which conceptual framework to apply to a given problem, and how to relate ethical principles and practice to broader problems arising from emerging data analytic techniques.

Where Ethics meets Personal Data Protection - practical dilemmas and ways of dealing with them (afternoon session)


The day will end with a discussion of ethical dilemmas encountered in practice when developing and implementing AI tooling:

  • Retail (loyalty programs)
  • Distribution (optimization)
  • HR recruiting tools
  • Fraud prevention tooling
  • Financial services (inclusive financing, customer duty of care)

Day 3 - June 17, 2024

Controlling AI: how to establish a governance framework to facilitate responsible adoption and scale of AI (morning session)


Supervisory authorities around the globe typically consider the so-called three lines of defense model as best practice for risk management and internal control. This model is not fit for purpose when it comes to digital innovation. Because new technologies are not fully regulated yet, it is difficult to perform a clear-cut compliance check. AI opens up a whole new range of design issues and associated ethical dilemmas. Years of controls by the compliance function have undermined the self-learning capacity of the business to make contextual assessments and factor in ethical considerations. This leaves the compliance department with no other option but to reject the innovation. Responsible innovation is only possible if the relevant compliance experts are part of the innovation team and if teams take joint responsibility for compliance.

  • How to adapt the three-lines-of-defense model to ensure responsible innovation
  • How to train your data scientists and innovation teams on ethical dilemmas
  • How to implement quality assurance and business controls to ensure ‘responsible AI’

Responsible AI in practice: Interactive case study & reflections on real-life challenges around AI risk and accountability (afternoon session)


In our increasingly information-based and algorithmic society, it is not surprising that attention to Responsible AI is growing rapidly. Responsible AI is about organizations ensuring that their use of AI fulfills a number of criteria: first, that it is ethically sound and complies with regulations in all respects; second, that it is underpinned by a robust foundation of end-to-end governance; and third, that it is supported by strong performance pillars addressing bias and fairness, interpretability and explainability, and robustness and security.

While theories in this domain are still at an early stage of development, practice already confronts designers, developers, users, and subjects of these AI systems with important risks, dilemmas and choices.

In this session we will dive into the practice of Responsible AI. Through an interactive case study, we will look at what it takes to develop an AI algorithm and what the key risks and considerations are in doing so. Furthermore, we will explore current approaches and instruments for AI Accountability, using examples from practice.

Day 4 - June 24, 2024

Conformity of AI Systems & Algorithmic Accountability (morning & afternoon session)


The morning and afternoon sessions of the fourth day will build upon the topics and issues dealt with in the afternoon session of the first day. Prof. Lokke Moerel and Marijn Storm will discuss in detail the conformity requirements that apply to AI systems under the AI Act and the GDPR, and how algorithmic accountability (including white-box development) can be used to comply with such conformity requirements.

Participants will get insights on:

  • The requirements for AI under the AI Act and the GDPR, how these align, and how existing privacy risk management programs can be leveraged for compliance with the AI Act (for example, building upon your existing Data Protection Impact Assessments (DPIAs) to comply with the new requirements under the AI Act for Conformity and Fundamental Rights Impact Assessments (FRIAs)).
  • The latest on how to address unlawful bias in algorithms
  • How to deploy an algorithm to prevent existing unlawful discrimination
  • Cursing in the privacy church: why we need sensitive data categories to address bias
  • How to train AI if you need to make distinctions between minority groups, like in healthcare
  • When does disparate impact turn into disparate treatment?
  • Alternatives to facilitate explainability to individuals if the AI is a black box
  • How the AI Act relates to AI laws around the world: if we comply with the AI Act and the GDPR, are we also compliant elsewhere in the world?

In addition, we will discuss developments in generative AI, the main risks for companies, and the various options companies have to address these risks. We will also discuss the main do’s and don’ts for employees and the core components of an internal generative AI workforce policy.

Practical information AI & Law program 


The classes will be held at The Faculty Club, Tilburg University Campus and Auberge du Bonheur, Tilburg.

Route description

Time schedule

Each class starts at 09:30 hrs. and ends around 17:00 hrs. The complete AI & Law program equals 24 NOvA hours. At the end of the program you will receive, upon request, a certificate stating the sessions/modules and associated hours.

Download the brochure

Costs and lifelong learning discount for Tilburg University alumni

The tuition fee for AI & Law is € 3,500 for the 4-day program. The tuition fee includes all course materials and catering, but does not include lodging expenses. Tilburg University alumni receive a 10% discount.

Tilburg University Alumni

Lifelong learning is needed in our ever-changing environment. At Tilburg University we want to encourage lifelong learning. To make participating in Professional Learning programs even more appealing, Tilburg University offers a 10% discount on every Professional Learning program for Tilburg University alumni.

To apply for the discount, simply state that you are a Tilburg University alumnus in the registration form of the program you are interested in.

Your profile

The program AI & Law is open to everyone who is interested in learning more about big data and law.

Teaching methods

The program AI & Law consists of a combination of keynotes, lectures, workshops and discussions led by experts in the field. All sessions are characterized by a mix of theoretical and practical aspects of big data and law.

General terms and conditions Professional Learning

You can register for this course only by filling out the online registration form on the website. You will immediately receive a digital confirmation of your admission.


Course fee

The course fee is due 14 days before the start of the course. No VAT will be charged for this course.


Tilburg University reserves the right to change parts of the course in the event of unforeseen circumstances or recent developments. You will be informed of any changes as soon as possible.


Cancellation by a course participant needs to take place in writing. If you cancel in writing no later than four weeks before the first course day, the course fee will be refunded. Tilburg University reserves the right to cancel the course if an insufficient number of participants have registered.


After every course day, the course participants will be given an evaluation form. It is very important for the further development of the post-academic courses to receive the course participants’ feedback through these evaluation forms. The information obtained will be used as much as possible in organizing and designing future courses.


The course participant can report any complaints to Tilburg University in writing. The complaint needs to be described in detail. Complaints do not suspend the obligation to pay the course fee. If the complaint is upheld by Tilburg University, the course participant will receive a reduction of the course fee.


Are you interested?


For general information or questions, please mail:

Alexandra Ziaka - Moderator
Shweta Degalahal - Moderator
prof. dr. Eleni Kosta - Academic director
prof. dr. Lokke Moerel - Academic director