Keynote speakers TILTing Perspectives 2019
The following keynote speakers have confirmed their attendance at the Conference.
- Opening address: Karen Yeung
- Health and Environment: Geert van Calster
- Justice and Data Market: Seda Gürses
- Digital Clearinghouse: Alexandre de Streel
- Responsibility in Artificial Intelligence: Virginia Dignum
- Data Protection: Lee Bygrave
- Intellectual Property and Innovation: Niva Elkin-Koren
Please find their bios and abstracts below.
Opening address: Karen Yeung
Karen Yeung is the University of Birmingham’s first Interdisciplinary Chair, taking up the post of Interdisciplinary Professorial Fellow in Law, Ethics and Informatics in the School of Law and the School of Computer Science in January 2018. She has been a Distinguished Visiting Fellow at Melbourne Law School since 2016.
Karen is actively involved in several technology policy and related initiatives in the UK and worldwide, including those concerned with the governance of AI, one of her key research interests. In particular, she is a member of the EU’s High Level Expert Group on Artificial Intelligence (since June 2018) and a Member and Rapporteur for the Council of Europe’s Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). Since March 2018 she has been the ethics advisor and a member of the Expert Advisory Panel on Digital Medicine for the Topol Independent Technology Review for the NHS. From 2016 to 2018 she was the Chair of the Nuffield Council on Bioethics Working Party on Genome Editing and Human Reproduction, and during that time she was also a member of the World Economic Forum Global Future Council on Biotechnology.
Title and Abstract
"Law, Regulation & Technology"
Keynote Health and Environment: Geert van Calster
Geert Van Calster is a full professor at the University of Leuven, a visiting professor at King’s College London and at Monash University, Melbourne, and a member of the Belgian Bar. His research, teaching and practice involve many aspects of international litigation and regulation, including private international law, trade law, and environmental law.
Prof Van Calster’s research interests in Artificial Intelligence relate to risk analysis as well as regulatory structures. Risk analysis and risk management have been a staple of Geert’s research output since he started reviewing the legal aspects of the ‘risk society’ at the end of the 1990s. This has resulted in roll-over grants researching society’s response, particularly its legal response, to emerging and new technologies, including research into biotech, food and feed, nanotechnologies, and shale gas. Prof Van Calster’s focus lies not just on the actual shape and form of the legal discipline surrounding these technologies, but also on their structural set-up: the involvement of private and public players, co- and self-regulation, and the differences between the EU and other regimes.
In the field of environmental law, of note are his co-authorship, with Dr Leonie Reins, of Edward Elgar’s reference handbook on EU environmental law, and his single-authored Handbook of EU Waste Law with Oxford University Press.
Title and Abstract
"Too clever by half? What the regulation of AI might want to learn from environmental law"
Geert will discuss some of the core suggestions currently being made for the regulation of artificial intelligence. He will then test these against the lessons we may or may not have learnt from environmental regulation specifically, and from the regulation of new technologies in general.
Keynote Justice and Data Market: Seda Gürses
Seda Gürses is currently a FWO post-doctoral fellow at COSIC/ESAT in the Department of Electrical Engineering at KU Leuven, Belgium. She is also a research associate at the Center for Information Technology and Policy at Princeton University. Prior to that she was a fellow at the Media, Culture and Communications Department at NYU Steinhardt and at the Information Law Institute at NYU Law School working together with a dandy group of researchers under the leadership of Helen Nissenbaum. During her time at NYU, she was also part of the Intel Science and Technology Center on Social Computing.
Outside of the university, she has been affiliated with a number of groups and initiatives. After many years of collaboration, she has finally become a member of her favorite collective, Constant VZW. One concrete outcome of that collaboration has been the course “networked social” at the Ecole de Recherche Graphique, which they have been teaching since 2012. She has also been a member and supporter of Alternatif Bilisim Dernegi, an association based in Turkey working on digital rights.
Title and Abstract
"Beyond Privacy? Protective Optimization Technologies"
In the 90s, software engineering shifted from packaged software and PCs to services and clouds, enabling distributed architectures that incorporate real-time feedback from users. In the process, digital systems became layers of technologies metricized under the authority of objective functions. These functions drive the selection of software features, service integration, cloud usage, user interaction and growth, customer service, and environmental capture, among others. Whereas information systems focused on the storage, processing and transport of information and on organizing knowledge (with associated risks of surveillance), contemporary systems leverage the knowledge they gather not only to understand the world, but also to optimize it, seeking maximum extraction of economic value through the capture and manipulation of people's activities and environments. The ability of these optimization systems to treat the world not as a static place to be known, but as one to sense and co-create, poses social risks and harms such as social sorting, mass manipulation, asymmetrical concentration of resources, majority dominance, and minority erasure. In the vocabulary of optimization, these harms arise from choosing inadequate objective functions. During the talk, I will provide an account of what we mean by optimization systems, detail their externalities, and make a proposition for Protective Optimization Technologies.
Keynote Digital Clearinghouse: Alexandre de Streel
(Please note that our Keynote Alan Riley unfortunately had to cancel)
Alexandre de Streel is Professor of Law at the University of Namur where he is the director of the Research Centre in Information, Law and Society (CRIDS). His research focuses on regulation and competition law in network industries.
Alexandre is also a Joint Academic Director at the Centre on Regulation in Europe (CERRE) in Brussels, and a member of the Scientific Committee of the Florence School of Regulation at the European University Institute.
Alexandre regularly advises international organisations (including the European Commission, European Parliament, OECD, EBRD) and he is an Assessor (member of the decisional body) at the Belgian Competition Authority.
Title and Abstract
"Redesigning regulation for digital platforms"
Responsibility in Artificial Intelligence: Virginia Dignum
Virginia Dignum is an Associate Professor at the Faculty of Technology, Policy and Management, Delft University of Technology. She received her PhD in 2004 from Utrecht University for her thesis A Model for Organizational Interaction. Prior to her PhD, she worked for more than 12 years in consultancy and system development in the areas of expert systems and knowledge management.
In 2006, she was awarded the prestigious Veni grant from NWO (Dutch Organization for Scientific Research) for her work on agent-based organizational frameworks, which includes the OperA framework for analysis, design and simulation of organizational systems.
Title and Abstract
“Responsible Artificial Intelligence”
As Artificial Intelligence (AI) systems are increasingly making decisions that directly affect users and society, many questions arise across social, economic, political, technological, legal, ethical and philosophical issues. Can machines make moral decisions? Should artificial systems ever be treated as ethical entities? What are the legal and ethical consequences of human enhancement technologies, or cyber-genetic technologies? How should moral, societal and legal values be part of the design process? In this talk, we look at ways to ensure ethical behaviour by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems. We will focus in particular on the ART principles for AI: Accountability, Responsibility, and Transparency.
Keynote Data Protection: Lee Bygrave
Lee A. Bygrave is professor at the Department of Private Law, University of Oslo, where he is in charge of the Norwegian Research Center for Computers and Law (NRCCL). For the past three decades, Lee has been engaged in researching and developing regulatory policy for information and communications technology. He has functioned as expert advisor on technology regulation for numerous organizations, including the European Commission, Nordic Council of Ministers, and Internet Corporation for Assigned Names and Numbers. He was recently appointed by the Norwegian government to sit on Norway’s ICT Security Commission, with a mandate to recommend improvements to the country’s cybersecurity framework. He currently heads two major research projects at the NRCCL: VIROS (‘Vulnerability in the Robot Society’), which canvasses legal and ethical implications of AI-empowered robotics; and SIGNAL (‘Security in Internet Governance and Networks: Analyzing the Law’), which studies transnational changes in the legal frameworks for security of critical internet infrastructure and cloud computing. Lee has published extensively within the field of data protection law where his two principal books on the subject – Data Protection Law: Approaching Its Rationale, Logic and Limits (Kluwer 2002) and Data Privacy Law: An International Perspective (Oxford University Press 2014) – are widely acknowledged as standard international texts. He has just completed co-editing and co-authoring a comprehensive Commentary on the EU General Data Protection Regulation, which will be published by Oxford University Press later this year.
Title and Abstract
In this keynote address, Lee revisits basic tenets of European data protection law as they have evolved over the last 50 years. He argues that the latest iteration of European data protection law revolves predominantly around three normative prongs, abbreviated as ‘p’, ‘d’, and ‘f’. He further argues that this pdf-triad (the meaning of which will be revealed in the address) is not radically different from the normative thrust of previous iterations of data protection law, but is the result of a gradual, largely harmonious evolution of norms. Nonetheless, particular elements of the triad were only faintly visible in past decades. Lee also considers the utility of the triad in ensuring that technological-organisational developments do not deleteriously affect human rights and freedoms. He suggests that while the triad cannot, in practice, adequately meet all of the myriad challenges thrown up by technological-organisational change, it has a flexibility that such change, at least in principle, cannot easily ‘outrun’. And, in the hands of activist data protection authorities or, perhaps more importantly, in the hands of a judiciary that is sympathetic to the safeguarding of fundamental human rights, the triad is a potentially powerful regulatory tool, both now and for the foreseeable future.
Keynote Intellectual Property and Innovation: Niva Elkin-Koren
Niva Elkin-Koren is a Professor of Law at the University of Haifa Faculty of Law and a Faculty Associate at the Berkman Klein Center at Harvard University. She is the Founding Director of the Haifa Center for Law & Technology (HCLT) and a Co-Director of the Center for Cyber, Law and Policy. From 2009 to 2012 she served as Dean of the Faculty of Law at the University of Haifa.
Her research focuses on innovation policy and access to knowledge, digital governance, online platforms, and the legal implications of AI and big data. She is currently studying the implications of governmental access to data for data-driven innovation. She is also interested in predictive justice and seeks to develop new measures for keeping a check on algorithmic adjudication.
Prof. Elkin-Koren has been a Visiting Professor of Law at Harvard University, Columbia Law School, UCLA, NYU, George Washington University and Villanova University School of Law. She is the Chair of the Scientific Advisory Council of the Alexander von Humboldt Institute for Internet and Society in Berlin, a member of the Executive Committee of the Association for the Advancement of Teaching and Research in Intellectual Property (ATRIP), and a member of the Scientific Advisory Board of the Munich Intellectual Property Law Center (MIPLC) at the Max Planck Institute for Innovation and Competition. She is also a member of the editorial boards of the Journal of the Copyright Society (since 2009), the Journal of Information Policy (since 2010) and the Internet Policy Review (since 2016).
Prof. Elkin-Koren received her LL.B from Tel-Aviv University Faculty of Law in 1989, her LL.M from Harvard Law School in 1991, and her S.J.D from Stanford Law School in 1995.
She is the co-author of The Limits of Analysis: Law and Economics of Intellectual Property in the Digital Age (2012) and Law, Economics and Cyberspace: The Effects of Cyberspace on the Economic Analysis of Law (2004). She is the co-editor of Law and Information Technology (2011) and The Commodification of Information (2002).
Title and Abstract
The growing pressure on online platforms to expeditiously remove illegitimate content is fostering the use of Artificial Intelligence (AI) to minimize their potential liability.
This is potentially game-changing for democracy. It facilitates the rise of unchecked power, which often escapes judicial oversight and constitutional restraints. The use of AI to filter unwarranted content cannot be sufficiently addressed by traditional legal rights and procedures, since these tools are ill-equipped to address the robust, non-transparent and dynamic nature of governance by AI. Consequently, in a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content, and means of securing due process and ensuring freedom of speech.
I propose to address AI-based content moderation by introducing contesting algorithms. The rationale of Contesting Algorithms is that algorithmic content moderation systems often seek to optimize a single goal (e.g., removing copyright-infringing materials as defined by right holders), while other public-interest values (fair use, free speech) are often neglected. Contesting Algorithms introduce an adversarial design which reflects conflicting interests, and thereby offer a check on dominant removal systems.
The presentation will introduce the strategy of Contesting Algorithms and demonstrate how regulatory measures could promote its development and implementation in online content moderation.