Data, algorithmic, and methodological aspects
Tilburg University has considerable expertise in natural language processing, machine learning (including deep learning), agents and robotics, decision making, and computer vision. Our research not only evaluates the utility and viability of these technologies in new domains but also contributes to improvements in these technologies.
Tilburg University connects expertise in computer science with knowledge of human cognition
Supported by increasing computational power, the broad impact of artificial intelligence methods and techniques is largely due to breakthroughs achieved with deep neural networks. This illustrates the traditionally tight connection between AI and research in cognitive science, two disciplines that study intelligent behavior from different perspectives: that of the machine on the one hand, and that of the human, in both individual and collective settings, on the other.
Historically, the neural network models that served as precursors to deep learning were fundamentally inspired by the neural architecture of the human brain. More recently, research on the human brain has greatly profited from the power of deep learning models. These models give cognitive scientists the capacity to make sense of the vast amounts of complex data produced by the brain and to approximate highly complex biological functions such as perception, speech and language processing, and decision making. The outcome of this scientific endeavor provides, in turn, valuable insights for advances in robotics and brain-computer interfacing. The close relation between artificial intelligence and the social sciences is one of the reasons why research and education in the domain of AI in the Netherlands has traditionally been conducted mainly at general universities rather than purely technical institutions.
In the same vein, core AI research at Tilburg University is supported by multidisciplinary teams that connect expertise in computer science with knowledge of human cognition. The university currently distinguishes six focal areas, defined in line with the Dutch AI Manifesto¹:
Natural Language Processing
The theme of Natural Language Processing (NLP) represents a long-standing, uninterrupted research line at Tilburg University, originating in the 1980s. It involves the study of how humans comprehend and generate speech and language, and the creation and study of computational models that mimic human language in its spoken, written, and signed forms. Even though the desire to equip machines with the same language capabilities as humans has been at the core of the AI endeavor from the very beginning, this goal has proven to be one of the most difficult to achieve.
The aim of providing a natural human-machine communication interface has proven elusive, although the use of deep neural networks in the NLP domain has improved performance on several tasks. In general, NLP research focuses on the representations and biases learned by state-of-the-art language comprehension and machine translation models, network analysis of large corpora of human language judgments, and the building and testing of cognitive models for language-related tasks ranging from word segmentation to meaning and multi-sentence reading. Interestingly, these techniques can be applied not only to human languages but also to programming languages. For example, an NLP-based analysis of code can support automatic detection of inconsistencies and violations of good programming practice, thereby uncovering potential indicators of software weaknesses, and can even generate code for particular applications.
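As a toy illustration of how lexical analysis of source code can surface inconsistencies, the sketch below flags identifiers that break with the dominant naming convention in a snippet. The function and heuristic are hypothetical examples for this document, not a description of the tools used in this research line; real systems use far richer lexical and syntactic features.

```python
import re

def naming_inconsistencies(source: str) -> set:
    """Flag identifiers in the minority naming style when a code snippet
    mixes snake_case and camelCase. A deliberately simple heuristic."""
    identifiers = set(re.findall(r"\b[a-zA-Z_][a-zA-Z0-9_]*\b", source))
    snake = {i for i in identifiers if "_" in i and i.islower()}
    camel = {i for i in identifiers if i[0].islower() and any(c.isupper() for c in i)}
    # If both styles occur, report the minority style as inconsistent.
    if snake and camel:
        return camel if len(camel) < len(snake) else snake
    return set()

code = "def load_data():\n    userName = read_file(file_path)\n    return userName\n"
print(naming_inconsistencies(code))  # → {'userName'}
```

The camelCase identifier is flagged because the surrounding snippet predominantly uses snake_case; a production tool would of course also consult project-wide conventions and a proper parser.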
Other projects in this theme conducted at Tilburg University include building new frameworks to understand visually grounded spoken language via multi-tasking, the creation of multilingual intelligent virtual tutors capable of limited interaction with the user using naturalistic synthesized speech, and the research and development of innovative machine translation frameworks that can be implemented in mobile applications to facilitate the exchange of information between deaf, hard of hearing, and hearing individuals.
Machine Learning
The theme of Machine Learning focuses primarily on the customization, application, and automatic interpretation of deep learning networks. This theme spans research as diverse as developing techniques for more explainable AI, state-of-the-art classification of signals and images, learning multi-modal representations, capturing sequential information from transactional records, and generating useful synthetic data. Two goals unite this research line: applying unsupervised and semi-supervised learning architectures, such as Variational Auto-Encoders, and improving our understanding of the benefits and costs of over-parametrization and parameter constraints in deep networks.
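The Variational Auto-Encoder named above optimizes a loss that combines a reconstruction term with a closed-form KL divergence between the diagonal-Gaussian posterior and a standard-normal prior. The NumPy sketch below illustrates that objective only; it is a hypothetical, minimal illustration (using mean squared error as the reconstruction term), not the group's implementation.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for a VAE with a diagonal Gaussian posterior.

    x, x_recon : data and its reconstruction (same shape)
    mu, log_var: encoder outputs parameterizing q(z|x) = N(mu, exp(log_var))
    """
    # Reconstruction term: mean squared error (a Gaussian-likelihood stand-in).
    recon = np.mean((x - x_recon) ** 2)
    # KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians.
    kl = -0.5 * np.mean(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

# With a perfect reconstruction and q(z|x) = N(0, I), the loss is zero.
x = np.ones((4, 8))
print(vae_loss(x, x, mu=np.zeros((4, 2)), log_var=np.zeros((4, 2))))  # → 0.0
```

In training, both terms would be minimized jointly by backpropagation through the encoder and decoder; the KL term regularizes the latent space toward the prior.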
An important application domain explored by Tilburg University researchers concerns the segmentation and analysis of 2D/3D time-lapse biomedical videos and images using deep learning architectures. The information extracted by these models serves as input for medical experts, for example, to detect tiny bone fractures in radiography images or to identify cell changes for early cancer detection. Traditional and advanced machine learning techniques, such as convolutional neural networks, have been used to assess the effects of different medical treatments, e.g., treatments for chronic stress and psychopathologies such as obsessive-compulsive disorder. Considerable effort is devoted to developing and testing algorithms that offer both high performance and interpretability, for example, by building classifiers able to explain their decision process rather than black-box solutions.
Autonomous Agents & Robotics
Autonomous Agents & Robotics involves the study of human and artificial minds and their interaction in virtual, mixed, and augmented reality, as well as the development of autonomous agents. This research line focuses on building and studying interfaces that are non-invasive, ethical, and engaging, for example, by employing intelligent interactive agents in mixed reality environments, robots that interact socially to support learning, and sensor-based interfaces between brains and computers.
The use of robots and agents in our daily lives requires that they be able to automatically sense and interpret human social behavior such as gestures, facial expressions, and the prosodic qualities of speech. At the same time, the use of VR agents in lab settings allows for the detection of other neurocognitive and behavioral markers, including EEG, eye tracking, heart rate variability, and skin conductance changes, which support the development of rich user models to improve human-AI interaction and potentially serve other goals, including learning and collaboration.
Planning and Search/Decision Making
Planning and Search/Decision Making examines reasoning in real-world environments that are characterized by high levels of uncertainty. Passively logged human data are modeled using AI and computational cognitive models and are used to extend existing theoretical accounts of behavior. The computational techniques employed include pattern recognition, anomaly detection, network analysis, classification, prediction, and recommendation. The resulting models provide insights into learning, memory, individual preferences, as well as collective behavior. For example, information collected from wearable sensors can be used to create models of human socio-spatial networks providing explanations for information transfer and collective decision making. A particularly challenging task lies in understanding collaborative problem solving in work domains that require fast decisions for complex problems, such as aviation and healthcare.
Computer Vision
Computer Vision applies existing algorithms and develops new approaches to the automatic visual understanding of the world, traditionally building on knowledge of human visual perception. Making sense of visual data is achieved with deep learning architectures, with applications in image classification and video recognition. For instance, deep learning algorithms are developed to estimate body shape characteristics from images or videos. Computer vision solutions are also integrated with natural language processing for the purposes of automated image and video description.
Data Engineering and Analytics
Data Engineering and Analytics involves the study of methods and techniques developed to extract relevant information from complex unstructured or semi-structured data sets. These techniques include data mining, machine learning, text mining, network analysis approaches, and visual analytics to provide insights to stakeholders. An active area of interest in this research line concerns temporal analysis methods for time-series data. A recent application area for this theme involves analysis of data that are collected to monitor relevant environmental features represented in the soundscape, such as animal vocalizations, for research lines focusing on sustainable and healthy living communities.
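One elementary building block of temporal analysis for monitoring data, such as the soundscape recordings mentioned above, is flagging points that deviate sharply from recent history. The rolling z-score sketch below is a hypothetical, minimal illustration of that idea; the function name, window size, and threshold are this document's own examples, not a specific method used in the research line.

```python
from statistics import mean, stdev

def rolling_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value deviates strongly from the preceding window.

    Each point is compared with the mean and sample standard deviation
    of the `window` points immediately before it (a rolling z-score).
    """
    flagged = []
    for i in range(window, len(series)):
        prior = series[i - window:i]
        m, s = mean(prior), stdev(prior)
        if s > 0 and abs(series[i] - m) / s > threshold:
            flagged.append(i)
    return flagged

# A steady signal with one sudden spike at index 8.
data = [1.0, 1.1, 0.9, 1.0, 1.1, 0.9, 1.0, 1.1, 9.0, 1.0]
print(rolling_anomalies(data))  # → [8]
```

Real deployments would handle seasonality, drift, and multivariate signals, but the same compare-against-recent-context principle underlies many time-series monitoring methods.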
All six research themes rely heavily on using and developing emerging technologies, including machine learning, deep learning, virtual/mixed reality, and robotics. Our research not only evaluates the utility and viability of these technologies in new domains but also contributes to improvements in these technologies. In many of our research projects, these technologies are used in conjunction with computational cognitive models to improve our understanding of human and artificial intelligence systems. This research is often done in collaboration with non-academic partners, including corporate, governmental, and not-for-profit organizations.
- Pieter Spronck
Full Professor of Computer Science
- Marie Postma
Associate Professor of Cognitive Science and Artificial Intelligence