Data challenges, deep neural models, AI in games
On Thursday, May 20, 2021, the fourth edition of the TAISIG Talks took place. TAISIG is the Tilburg University Artificial Intelligence Special Interest Group: a research community committed to Artificial Intelligence (AI). Katrijn Van Deun, Afra Alishahi and Pieter Spronck shared their insights.
Data challenges of multidisciplinary research
Our first speaker, dr. Katrijn Van Deun, kicked off with her talk on the data challenges posed by multidisciplinary research and how we can solve them. Katrijn's research focuses on the domain of health and wellbeing, which has two big questions: who is at risk of low wellbeing, and why? When accounting for risk factors, researchers take not only lifestyle into consideration but also environmental exposure and genetic constitution. Data on all of these is gathered from one group of respondents, resulting in a multi-block data set. This type of data gathering creates challenges for the analysis. The first is exploratory in nature: finding new relationships and the relevant variables is like looking for a needle in a haystack. The second challenge is that correlations within one block are usually stronger than those between blocks, while the latter are what researchers are interested in. In her talk, Katrijn gives us insight into her recently published research, in which she developed a regularized Principal Component Analysis method. This method uses block regularization to find the inter-block correlations and identify the joint mechanisms.
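To make the idea of sparse, regularized component analysis concrete, here is a minimal toy sketch (the data, block sizes, and threshold are invented for illustration, and plain soft-thresholded PCA stands in for the block-regularization method described in the talk): two blocks measured on the same respondents are concatenated, a first principal component is extracted, and small loadings are shrunk to zero so that only the variables carrying the joint signal remain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two data blocks measured on the SAME respondents (hypothetical sizes):
# e.g. lifestyle questionnaire items and environmental-exposure measures.
n = 100
lifestyle = rng.normal(size=(n, 8))
exposure = rng.normal(size=(n, 5))

# Concatenate the blocks column-wise and centre each variable.
X = np.hstack([lifestyle, exposure])
X -= X.mean(axis=0)

# Ordinary PCA via the singular value decomposition.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt[0]  # loadings of the first principal component

# Soft-thresholding: shrink small loadings to zero, so only the
# strongest variables survive (a simple stand-in for the block
# regularization that targets inter-block structure).
threshold = 0.2
sparse_loadings = np.sign(loadings) * np.maximum(np.abs(loadings) - threshold, 0.0)

print(sparse_loadings)
```

The surviving non-zero loadings are the candidate variables for a joint mechanism; in a real analysis the penalty would be tuned per block rather than set to a single global threshold.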
Decoding deep neural models
Dr. Afra Alishahi, our second speaker, is an expert on cognitive models of language learning. The domain of developing interpretability techniques for deep neural models of language is young, yet fast-moving. Current neural network architectures, such as the front-running Transformer network and the BERT language model, are complex due to their many layers. This complexity makes the models powerful, but also turns them into black boxes when it comes to their inner dynamics. In the video below, Afra presents three general approaches that researchers have proposed to understand the inner workings of neural models. The first is input manipulation, where the input is adjusted one word at a time to reveal which words receive the model's attention. The second approach is analyzing the internal representations, which shows which aspects of language the model encodes and at which layer. The third and last approach is representational similarity analysis, which compares how the same set of stimuli is arranged by different representations: it measures the correlation between the pairwise similarities computed in each representation.
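The third approach fits in a few lines of code. Below is a toy sketch (the stimuli and both representations are random stand-ins, not data from Afra's experiments): we build a cosine-similarity matrix over the same ten stimuli under two different representations, then correlate the upper triangles of the two matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical representations of the SAME 10 stimuli from two sources,
# e.g. one layer of a neural language model and human judgements.
rep_a = rng.normal(size=(10, 64))
rep_b = rep_a @ rng.normal(size=(64, 32))  # a transformed view of rep_a

def similarity_matrix(rep):
    """Cosine similarity between every pair of stimuli."""
    unit = rep / np.linalg.norm(rep, axis=1, keepdims=True)
    return unit @ unit.T

sim_a = similarity_matrix(rep_a)
sim_b = similarity_matrix(rep_b)

# Compare only the upper triangles (the pairwise similarities) and
# correlate them: a high correlation means the two representations
# arrange the stimuli in a similar way.
iu = np.triu_indices(10, k=1)
rsa_score = np.corrcoef(sim_a[iu], sim_b[iu])[0, 1]
print(round(rsa_score, 3))
```

Because the correlation is computed on similarity structure rather than on the raw vectors, the two representations may have completely different dimensionalities, which is exactly what makes the method useful for comparing model layers with behavioural or neural data.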
Artificial Intelligence in games
Our final speaker, prof.dr.ir. Pieter Spronck, specializes in artificial intelligence in games. AI has been linked to games from its very beginning: Deep Blue was developed to play chess and defeated world champion Garry Kasparov in 1997. Beyond chess, game research expanded to video games, modern board games, and tabletop role-playing games. Within this research, the game world is the content and AI agents are the players. From 2015 onwards, Google DeepMind has accomplished multiple breakthroughs: it developed AI, using deep convolutional neural networks and Monte Carlo tree search, that played better than human champions. Remarkable as this is, we have not yet accomplished general AI that can play any game based on its mere description. High-complexity board games, games with more than two players, and role-playing games where players create a story along the way still pose challenges for AI players. In his talk, Pieter presents three cases from Google DeepMind and shares his remarks.
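To give a flavor of the Monte Carlo idea behind those breakthroughs, here is a toy sketch (the game of Nim and all parameters are my own illustration, and plain rollout evaluation stands in for the full Monte Carlo tree search DeepMind used): each candidate move is scored by the fraction of random playouts it goes on to win.

```python
import random

random.seed(0)

# Toy game of Nim: players alternately remove 1-3 stones;
# whoever takes the last stone wins.

def random_playout(stones, my_turn):
    """Play uniformly random moves to the end; True if 'we' take the last stone."""
    while True:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return my_turn  # the player who just moved wins
        my_turn = not my_turn

def best_move(stones, n_playouts=2000):
    """Score each move by its win rate over random playouts."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        if take == stones:
            return take  # immediately winning move
        wins = sum(
            random_playout(stones - take, my_turn=False)
            for _ in range(n_playouts)
        )
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)

print(best_move(5))
```

With 5 stones, taking 1 leaves the opponent facing 4 stones, from which every reply can be punished; the rollout statistics recover this without any game-specific knowledge. A full tree search would additionally grow a tree of positions and balance exploring new moves against exploiting promising ones.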