TiLPS

The Tilburg Center for Logic, Ethics, and Philosophy of Science studies knowledge, reasoning, and value in all their forms.


TILPS Colloquium

Upcoming colloquia

Thursday, 30 November 2017, 16:45-18:30 - CANCELLED

Speaker: Lee Elkin (Munich)

Wednesday, 6 December 2017, 12:45-14:30

Room: RTZ 501 (Reitse Toren)

Speaker: Jeffrey White (Korean Advanced Institute of Science and Technology)

Title: Machine Ethics – developing a fully autonomous artificial moral agent

Long abstract: Machine ethics involves, first, understanding human morality in a way that may in principle be engineered into an artificial agent, so that machines may be evaluated in traditional human terms, and, secondly, adapting such a schema in the design and construction of artificial moral agents, given adequate technology. It is not to be confused with robot ethics, which concerns the effects of semi-autonomous and robotic agents on human beings and their society: for example, worker displacement due to robotic automation of the workplace and its broader economic consequences, or safety and liability issues related to self-driving automobiles. The distinction between the two, robot and machine ethics, can be drawn roughly along the lines of autonomy, with machine ethics focused on developing genuinely autonomous agents and robot ethics focused on far more limited systems.

Traditionally, machine autonomy and moral agency have been approached from the “outside-in”, with researchers focused on how to program digital computers with rules and principles derived from human experience and rendered in purely symbolic terms in some sort of logical framework. The fragility of such systems is well known, and the subject of popular adaptations, for example in Asimov’s famous four laws of robotics. However, this has not stopped researchers from pursuing exactly this tack. More than fifty years ago, Hubert Dreyfus famously analyzed the problem: researchers tried to apply methods successful in relatively simple, formal contexts to increasingly complex, informal contexts, only to be met with disappointment. And he was able to bring this analysis to bear on the generations of AI developed since: good old-fashioned artificial intelligence, expert systems informed by millions of individual explicit facts, and even relatively recent efforts in dynamical-systems-inspired neural network models. All have aspired to what is now discussed under the heading of “artificial general intelligence” and have failed, and will fail, for the same reasons. All lack authentic subjective grounds for moral agency. None are genuinely autonomous.

This brings us to what I feel is a fourth distinct generation of AI, and with it an era ripe for an “inside-out” rather than an “outside-in” approach to morality in an artificial agent. The bulk of this talk concerns this approach, and with it an appreciation of the research platform that facilitates its pursuit. First, we will review the inherited (Western) view of moral agency as articulated by Aristotle more than two thousand years ago and then as transformed by Kant for an increasingly liberal Christian Europe more than two hundred years ago. These views deeply influenced the framers of the US Constitution, for example, and continue to fundamentally shape ethical and moral discourse, so they remain important in understanding artificial agents in terms equivalent to those we apply to human beings today. At the root of this view is a general model of agency within the constraints of a natural world, with others situated in the same terms. We will isolate this basic model of agency and explain how Kant’s famous categorical imperative emerges through its normal exercise, rather than being programmed into a machine as a primitive principle, externally and without authentic subjective grounds. Finally, we will specify what is required of an artificial agent for it to embody such a moral capacity, and speculate briefly on what it might mean for us to live amongst fully autonomous artificial agents when we finally do develop an essentially moral machine.

Thursday, 12 December 2017, 16:45-18:30

Epistemology and Philosophy of Science

Room: tba

Speaker: Sanneke de Haan (Tilburg University)

Title and abstract: tba


Recent colloquia

Tuesday, 12 September 2017, 16:45-18:00

Ethics

Room: PZ 002

Speaker: Sara Protasi (University of Puget Sound)

Title: The Perfect Bikini Body: Can We All Really Have It? Loving Gaze as an Antioppressive Beauty Ideal

Abstract: In this paper, I ask whether there is a defensible philosophical view according to which everybody is beautiful. I review two purely aesthetical versions of this claim. The No Standards View claims that everybody is maximally and equally beautiful. The Multiple Standards View encourages us to widen our standards of beauty. I argue that both approaches are problematic: the former fails to be aspirational and empowering, while the latter fails to be sufficiently inclusive. I conclude by presenting a hybrid ethical–aesthetical view according to which everybody is beautiful in the sense that everybody can be perceived through a loving gaze (with the exception of evil individuals who are wholly unworthy of love). I show that this view is inclusive, aspirational and empowering, and authentically aesthetical.