To facilitate collaboration among researchers from different AI-related fields, Tilburg University brings together AI experts from various domains in the TAISIG Talks series to discuss their most recent findings. Each Talk features scientists with different backgrounds and at different stages of their careers.
Upcoming TAISIG Talks
TAISIG Talks 15: Thursday 14-07-2022 (17.00 - 18.00 hrs)
TAISIG Talks 16: Thursday 22-09-2022 (17.00 - 18.00 hrs)
TAISIG Talks 17: Thursday 20-10-2022 (17.00 - 18.00 hrs)
TAISIG Talks 18: Thursday 17-11-2022 (17.00 - 18.00 hrs)
Past TAISIG Talks
TAISIG Talks 14
TAISIG Talks 14 - Part 1 by Marijn van Wingerden: Data science applications to clinical data
In this first part of the 14th edition of TAISIG Talks, dr. Marijn van Wingerden discusses the application of data science methods to clustering and prediction in (longitudinal) clinical data. The data were provided by different partners at the Elisabeth-Tweesteden Hospital in the context of several WeCare collaborations focusing on machine learning models for predicting patient outcomes.
TAISIG Talks 14 - Part 2 by Boris Čule: Pattern-based time series classification
Efficient and interpretable classification of time series is an essential data mining task with many real-world applications. Boris Čule discusses mining sequential patterns in a set of time series and using the presence or absence of these patterns to classify new instances. In doing so, he designs a highly interpretable classifier while maintaining competitive performance in terms of efficiency and accuracy.
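The general idea behind pattern-based classification can be sketched as a toy (this is an illustration of the principle, not Čule's actual algorithm): discretize each series into symbols, mine short symbolic patterns that recur across the training set, and represent every series by which patterns it contains.

```python
from collections import Counter

def discretize(series, low, high):
    """Map numeric values to symbols: 'a' (below low), 'b' (middle), 'c' (above high)."""
    return "".join("a" if v < low else "c" if v > high else "b" for v in series)

def mine_patterns(symbolic_series, length=2, min_support=2):
    """Keep the length-`length` substrings that occur in at least
    `min_support` of the training series."""
    counts = Counter()
    for s in symbolic_series:
        counts.update({s[i:i + length] for i in range(len(s) - length + 1)})
    return sorted(p for p, c in counts.items() if c >= min_support)

def featurize(s, patterns):
    """Binary feature vector: presence or absence of each mined pattern."""
    return [int(p in s) for p in patterns]

# Toy training set: label 1 = series with a spike, label 0 = no spike.
train = [([0, 0, 9, 0], 1), ([0, 9, 9, 0], 1), ([0, 0, 0, 0], 0), ([1, 1, 0, 1], 0)]
symbolic = [discretize(x, low=1, high=5) for x, _ in train]
patterns = mine_patterns(symbolic)
features = [featurize(s, patterns) for s in symbolic]

def classify(series):
    """1-nearest neighbour on Hamming distance over the pattern features."""
    f = featurize(discretize(series, low=1, high=5), patterns)
    dists = [sum(a != b for a, b in zip(f, x)) for x in features]
    return train[dists.index(min(dists))][1]
```

The interpretability comes from the features themselves: each one is a readable statement of the form "this series contains the pattern `ac`, a jump from low to high".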
TAISIG Talks 14 - Part 3 by Marijn van Wingerden and Boris Čule: Predicting patient deterioration using data science
In the last part of the 14th edition of TAISIG Talks, dr. Marijn van Wingerden and Boris Čule discuss their shared research on predicting patient deterioration using data science methods.
To what extent do neural networks, trained on static or dynamic data, predict deterioration better and provide more clinical utility to the Elisabeth-Tweesteden Hospital than the MEWS (Modified Early Warning Score) and DI (Deterioration Index)?
TAISIG Talks 13
TAISIG Talks 13 Part 1 - Javad Pourmostafa: Creating parallel in-domain data for Neural Machine Translation
In this first part of the 13th edition of TAISIG Talks, PhD candidate Javad Pourmostafa discusses his research on improving the quality of machine translations by mitigating the lack of domain-specific data, i.e., the shortage of domain-specific language data for machine translation systems such as Google Translate.
TAISIG Talks 13 - Part 2 by Stefan Bloemheuvel: Time series analysis with graph neural networks
In the second part of the 13th edition of TAISIG Talks, PhD candidate Stefan Bloemheuvel discusses his research on time series analysis with graph neural networks (e.g. using seismic data to estimate the magnitude and location of an incoming earthquake).
TAISIG Talks 12
TAISIG Talks 12 Part 1 - Phillip Brown: Recognition and interpretation of pain and cyber sickness in VR
In this first part of the 12th edition of TAISIG Talks, PhD candidate Phillip Brown elaborates on his research on cyber sickness: a condition, with symptoms similar to those of motion sickness, that can occur when VR is used as a distraction from symptoms of chronic pain.
TAISIG Talks 12 Part 2 - David Peeters: VR: a tool to study the psychology of language and communication
In the second part of the 12th edition of TAISIG Talks, dr. David Peeters discusses virtual reality as a tool in the study of the psychology of language and communication.
TAISIG Talks 11
TAISIG Talks 11 Part 1 - dr. Marie Postma: Can AI have consciousness?
In this first part of the eleventh TAISIG Talk, dr. Marie Postma elaborates on research examining whether AI can, or will in the future, have consciousness.
TAISIG Talks 11 Part 2 - dr. Werner Liebregts: Social signal processing in entrepreneurial research
In the second part of the eleventh TAISIG Talk, dr. Werner Liebregts talks about how social signal processing can affect decision-making in an entrepreneurial setting.
TAISIG Talks 11 Part 3 - Federico Zamberlan: How can psychedelics make us experience things?
In this final part of the eleventh TAISIG Talk, Federico Zamberlan elaborates on how psychedelics can be used to induce experiences we normally do not have (such as a near-death experience).
TAISIG Talks 10
TAISIG Talks 9
TAISIG Talks 8
TAISIG Talks 7
TAISIG Talks 6
TAISIG Talks 6 Part 1: Irene Kamara on conformity assessment
Dr. Irene Kamara is a researcher at the Tilburg Institute for Law, Technology and Society. Her research focuses on conformity assessment and standardization in the areas of data protection, cybersecurity and non-discrimination. Conformity assessment is the demonstration that specified requirements are, or are not, fulfilled. It is applied to products, processes, services and persons. The law can, for example, require toys to be safe. Conformity assessment organizations define and specify what ‘safe’ means, in collaboration with the European Commission and other public regulators. Public law thus harvests and relies on information from private regulators and lawmakers. The new EU AI regulation proposal is in fact based on laws from the 1980s. Conformity assessment has a central role within the proposal, on which Irene elaborates in her talk.
TAISIG Talks 6 Part 2: Emiel Krahmer on problems and prospects of GPT-3
Prof.dr. Emiel Krahmer was trained as a computational linguist at Tilburg University in the late 1980s. Since then, his research has focused on data-to-text generation: systems that take data as input and generate a new, coherent narrative about it. In his talk, Emiel addresses the problems and prospects of GPT-3, one of the breakthrough AI models of 2020, created by OpenAI. The basic principle behind GPT-3 is word prediction. Most people are familiar with word prediction through WhatsApp. The difference is that WhatsApp makes a prediction based on only the last one or two words, whereas GPT-3 is a complex deep-learning model with 96 layers that takes large contexts into account when making predictions. This complexity helps it produce coherent texts, but those texts are not grounded in reality and may contain statements that are untrue.
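The word-prediction principle itself can be illustrated with a minimal bigram model over a made-up corpus. GPT-3 differs in scale, not in goal: it conditions on thousands of preceding tokens through 96 transformer layers rather than on a single previous word.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For each word, count which words follow it in the corpus."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower, like a phone keyboard suggestion."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Made-up toy corpus.
corpus = [
    "the talk was about language models",
    "the talk was recorded",
    "the model predicts the next word",
]
model = train_bigram(corpus)
print(predict_next(model, "talk"))  # "was"
```

Exactly as with GPT-3, the model's "knowledge" is only co-occurrence statistics: it will happily predict fluent continuations that bear no relation to reality.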
TAISIG Talks 6 Part 3: Maryam Alimardani on Brain Computer Interfaces
Dr. Maryam Alimardani conducts research on brain-computer interfaces (BCIs): systems that record brain activity and use it to understand what the user is experiencing while performing tasks. With these systems, researchers can measure a user's intentions, emotional state and cognitive state.
The media usually portray BCIs as invasive systems such as exoskeletons or surgically implanted electrodes. Maryam's research differs: she uses a non-invasive BCI method, namely a wireless EEG cap or headset, to measure brain activity. One of her research topics is neurofeedback and augmented learning.
Currently, little is known about the learning process and experience of users. In the video below, Maryam explains the benefits of understanding the learning process and how BCIs can be applied to measure it.
TAISIG Talks 5
TAISIG Talks 5 Part 1: Sharon Ong on AI for diagnostic imaging
Dr. Sharon Ong is an assistant professor in the Department of Cognitive Science and Artificial Intelligence. In her talk she argues for the need for, and usefulness of, AI in clinical decision making. Hospitals gather large amounts of data through medical tests and examinations such as X-rays and CT scans. Manually checking and analyzing this data is time-consuming. AI can reduce the time needed to analyze the data and improve the quality of the decisions made by medical experts. Sharon gives three examples of the successful use of AI in diagnostic imaging. The first project concerns fracture detection in scaphoids using X-ray images. The second concerns predicting cognitive outcomes for a patient using MRI scans and clinical variables. The final project concerns detecting osteolytic bone lesions in patients with multiple myeloma using CT scans.
TAISIG Talks 5 Part 2: Yash Satsangi on challenges of active perception
Yash Satsangi discusses the challenges posed by active perception and how planning and learning algorithms can help tackle them. Sensor selection is an active perception task: it arises, for example, when the algorithm of a multi-camera surveillance system must decide how to allocate scarce resources such as computational power. To fulfil this task, the agent needs to model partial observability, reason about the consequences of a decision, assign an objective value to its estimate of uncertainty, deal with a combinatorial action space and learn from its past actions. These challenges are addressed with a combination of approaches; Yash elaborates on three of them: decision-theoretic planning, submodularity and deep anticipatory networks (DAN).
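Submodularity is what makes sensor selection tractable despite the combinatorial action space. The sketch below is an illustrative toy, not Yash's method: hypothetical cameras each cover a set of regions, and a greedy loop repeatedly picks the camera with the largest marginal coverage gain.

```python
def greedy_select(sensors, budget):
    """Greedily pick the sensor with the largest marginal coverage gain.
    For submodular objectives such as coverage, greedy selection is
    guaranteed to achieve at least (1 - 1/e) of the optimal value."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(sensors, key=lambda s: len(sensors[s] - covered))
        if not sensors[best] - covered:
            break  # no remaining sensor adds new coverage
        chosen.append(best)
        covered |= sensors[best]
    return chosen, covered

# Hypothetical cameras and the regions each one observes.
cameras = {
    "cam1": {1, 2, 3},
    "cam2": {3, 4},
    "cam3": {4, 5, 6},
    "cam4": {1, 6},
}
chosen, covered = greedy_select(cameras, budget=2)
```

With a budget of two cameras, the greedy loop first takes `cam1` (three new regions), then `cam3` (three more), covering all six regions without ever enumerating every pair.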
TAISIG Talks 5 Part 3: Kenny Meesters on using data to solve crises
Drs.ing. Kenny Meesters specializes in information management during crises. Together with a team of his students, he was part of the National Operational Team Corona, responsible for structuring and analyzing the data around Covid-19 in the early stages of the pandemic. Kenny explains that, in order to make good decisions, a decision maker needs all the relevant information and the ability to predict events and the consequences of actions. When a disaster happens, neither requirement is met: data comes in as isolated signals, which need to be structured and organized to be useful. In his talk he presents how he and his team handled this and what needs to be considered when it comes to data, technologies, organizations and people.
TAISIG Talks 4
TAISIG Talks 4 Part 1: Katrijn Van Deun on data challenges of multidisciplinary research
Our first speaker, dr. Katrijn Van Deun, kicked off with her talk on the data challenges posed by multidisciplinary research and how we can solve them. Katrijn's research focuses on the domain of health and wellbeing, which poses two big questions: who is at risk of low wellbeing, and why? When accounting for risk factors, researchers consider not only lifestyle but also environmental exposure and genetic constitution. Data is gathered from one group of respondents to form a multi-block data set. This type of data gathering creates challenges for the analysis. The first is exploratory in nature: finding new relationships and the relevant variables is like looking for a needle in a haystack. The second challenge is that correlations within one block are usually stronger than those between blocks, while the latter are what researchers are interested in. In her talk, Katrijn gives us insight into her recently published research, in which she developed a Principal Component Analysis method that uses block regularization to find the inter-block correlations and identify the joint mechanisms.
TAISIG Talks 4 Part 2: Afra Alishahi on decoding deep neural models
Dr. Afra Alishahi, our second speaker, is an expert on cognitive models of language learning. The domain of developing interpretability techniques for deep neural models of language is young yet fast-moving. Current neural network architectures, such as the front-runner Transformer network and the BERT language model, are complex due to their many layers. This complexity makes the models powerful, but also turns them into black boxes when it comes to their inner dynamics. In the video below, Afra presents three general approaches that researchers have proposed to understand the inner workings of neural models. The first is input manipulation, where the input is adjusted one word at a time to gain insight into which words receive the model's attention. The second approach is analyzing the internal representations, which shows which aspects of language the model encodes and in which layer. The third and last approach is representational similarity analysis, which correlates the similarity structures that different representations induce over the same set of stimuli.
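The first approach, input manipulation, can be sketched with a leave-one-out loop: drop each word in turn and see how much the model's output changes. The scoring function below is a deliberately simple stand-in; in a real study it would be a neural model.

```python
def word_importance(sentence, score):
    """Leave-one-out input manipulation: remove each word in turn and
    measure how much the model's score drops. `score` is any callable
    mapping a list of words to a number."""
    words = sentence.split()
    base = score(words)
    return {w: base - score(words[:i] + words[i + 1:]) for i, w in enumerate(words)}

# Stand-in "sentiment model": counts positive words.
POSITIVE = {"great", "clear", "insightful"}
def toy_score(words):
    return sum(w in POSITIVE for w in words)

importance = word_importance("a great and insightful talk", toy_score)
```

Words whose removal changes the score are the ones the model attends to; here `great` and `insightful` get importance 1 and the rest 0.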
TAISIG Talks 4 Part 3: Pieter Spronck on artificial intelligence in games
Our final speaker, prof.dr.ir. Pieter Spronck, specializes in artificial intelligence in games. AI has been linked to games from its beginnings: Deep Blue was developed and defeated then-world champion Garry Kasparov in 1997. Game research later expanded to videogames, modern board games and tabletop role-playing games. Within this research, the game world is the content and AI agents are the players. From 2015 onwards, Google DeepMind has accomplished multiple breakthroughs, developing AI that, using deep convolutional neural networks and Monte Carlo tree search, played better than human champions. Remarkable as this is, we have not yet accomplished general AI, which could play any game based on its mere description. High-complexity board games, games with more than two players and role-playing games in which players create a story along the way still pose challenges for the AI player. In his Talk, Pieter presents three cases from Google DeepMind and shares his remarks.
TAISIG Talks 3
TAISIG Talks 3 Part 1: Wouter De Baene on brain and behaviour
Wouter De Baene presented his research in the domain of cognitive neuroscience and its clinical applications. Classical studies in this domain show that specific regions of the brain are responsible for performance concerning, for example, cognitive flexibility or working memory. However, none of these regions works in isolation, as they are both structurally and functionally connected to one another. Damage in a specific area may thus have a broader impact on patients’ cognitive performance post-surgery. Wouter presented two cases in which machine learning methods were applied to draw individual-level inferences, replacing traditional group-level analyses. His project thus aims to (better) predict the functional outcome for tumor patients after surgery. One interesting practical aspect of his research concerns questions regarding health data privacy and how AI analyses can be conducted safely on sensitive data.
TAISIG Talks 3 Part 2: Lieke Gelderloos on learning through self-supervision
Lieke Gelderloos gave the second talk, on active word learning through self-supervision. Studying which words are mapped to which objects in the environment is one of the important goals of computational cognitive science. There are many methods and models in this field, but all are based on the idea that the learner simply happens upon language input. However, research has shown that parents follow their children’s attention when teaching them new words, implying that children can shape their own learning trajectory. Can AI learn from this insight? In her work, Lieke formalized the aspect of curiosity and investigated whether it can accelerate computational word learning. In the video accompanying this TAISIG Talks edition, she shares the model and the main results of the research.
TAISIG Talks 3 Part 3: Ronald Leenes on regulation of AI
Finally, Ronald Leenes discussed the need for AI regulation by law. It is well known that AI applications have both benefits and drawbacks. Standard examples involve the use of facial recognition, which can help keep us safe in public places, yet takes away our anonymity. Autonomous vehicles take over driving tasks, providing comfort for the driver; on the downside, they may not react appropriately in unique hazardous situations. (Killer) drones may take over the role of soldiers, but operating them from a distance may desensitize us to violence and acts of war. In recent years, ethical frameworks have been created to regulate AI, but it appears that regulation needs to be enforced by law. On April 13, 2021, a draft of the EU AI Regulation was leaked, constituting the first attempt to regulate AI by law. During his talk, Ronald gave his first impression of the regulations and discussed the timeline required for their implementation.
TAISIG Talks 2
TAISIG Talks 2 part 1: Eva Vanmassenhove on gender bias in machine translators
The first speaker of the evening, dr. Eva Vanmassenhove, is an expert in natural language processing with a focus on machine translation. In her lecture, she addressed the problem of algorithmic bias in automatic, big-data-based translations of gender. Some natural languages, e.g., Bulgarian, use different morphological forms to express grammatical gender in adjectives, demonstratives and verbs, while others, such as English, make only highly limited use of linguistic gender. Translating from the second kind of language to the first can amplify social biases by producing gender-biased output, and can also lead to a loss of linguistic richness. A possible solution lies in adding metadata that codes for gender information. So hopefully, in the future, we will be able to know for sure whether it was what she said or what he said.
TAISIG Talks 2 part 2: Patricia Prüfer on successfully making skill-based job transitions
The second talk was presented by dr. Patricia Prüfer, Head of the Data Science Unit at CentERdata. Together with her team, Patricia conducts research on skill-based job transitions. Technological advances combined with the effects of the COVID-19 pandemic brought about a great deal of uncertainty on the job market. While in some sectors, such as hospitality, unemployment has been growing, other sectors are thriving and in need of qualified employees. To solve this problem, Patricia’s data science team developed a matching tool that looks for “good-fit” job transitions based on the desired skill set. The good news is that, on average, any given job position can be substituted with 33 others, of which almost half include an increase in salary.
TAISIG Talks 2 part 3: Max Louwerse on improving education by using AI
Professor Max Louwerse kicked off his talk with the confession that he has been struggling with the traditional education system for many years. The considerable progress of our society over the last centuries has brought about remarkably few changes in how students are taught, in classrooms and outside them, all over the world. A telling example is the practice of rereading texts, highlighting and taking notes, despite scientific research demonstrating that these can be detrimental to the learning process. Max and his team are involved in a number of projects where the process of learning is turned into a personalized interactive experience supported by intelligent tutoring systems, virtual reality, serious gaming and machine learning. Some of these projects take place in the DAF Technology Lab on the campus of Tilburg University, an advanced mixed reality lab that offers state-of-the-art equipment for both research and educational purposes.
TAISIG Talks 1
TAISIG Talks 1 Part 1: Eric Postma on extraterrestrial life
The first speaker during the kick-off, Professor Eric Postma, expert on computer vision and deep learning, demonstrated how convolutional neural networks can help detect exoplanets moving in front of stars. The pattern recognition abilities of AI automatically process huge amounts of data collected by the TESS telescope and recognize the dimming of brightness indicative of an exoplanet. Since exoplanets in the habitable zone might contain water, AI research helps in the search for extraterrestrial life and the potential identification of habitable exoplanets.
TAISIG Talks 1 Part 2: Esther Keymolen on ethical issues
The second talk was presented by Esther Keymolen, Associate Professor at the Tilburg Institute for Law, Technology and Society. Esther’s research concerns the intersection of law and ethics, where she focuses on trust, trustworthiness and privacy in technological applications. Currently, much of our daily activity takes place online, and we worry about how our personal data is being used for commercial purposes. It is important that developers of new technologies work together with end users and address potential ethical challenges related to the intended context of use.
TAISIG Talks 1 Part 3: Chris Emmery on regaining privacy
The final speaker of the evening was Chris Emmery, AI researcher specialized in the field of adversarial computational stylometry. He explained how in his scientific work, he first examines what kinds of personal information can be collected by mining our publicly available text data, for example, texts placed on social media. Subsequently, using style obfuscation with auto-encoders, he creates and employs open source tools that combat invasive author profiling and thus help us protect ourselves against attempts to compromise our personal identity.