* Giovanni Cassani website, version 0.0.1.
* Last update: Dec 2021.
* All content is released under the Creative Commons BY-NC-ND 3.0 license
* https://creativecommons.org/licenses/by-nc-nd/3.0/

Currently an Assistant Professor in the Department of Cognitive Science and Artificial Intelligence (CSAI) at Tilburg University.

I'm a researcher in computational psycholinguistics: I work on the relation between form and meaning, and on linking language acquisition and language evolution. I'm also interested in data science: I collaborate with Tooso on user intent prediction for e-commerce websites. I hold a Ph.D. in Computational Psycholinguistics from the CLiPS Research Center of the University of Antwerp, with research focused on how children learn lexical categories (PhD thesis). Before the PhD, I earned an MSc in Cognitive Science at the Center for Mind/Brain Sciences of the University of Trento, using computational linguistics and computer vision to study how children build conceptual representations from words and images (MSc thesis), and a BA in Literature and Linguistics, also at the University of Trento, studying conceptual representations in congenitally blind people using feature norms (BA thesis).

I worked for a while at the press office of the University of Trento and am very fond of science communication. I cook, go to concerts, watch plays, movies, and series, cycle, run, read, and listen to vinyl.

If you want to know more about what I did and do, you can check my LinkedIn profile and my GitHub repo, or just e-mail me.

You can find an updated, extended version of my CV here, while this is a compact, one-page summary.


* Cognitive stuff: Computational models of language acquisition, Conceptual representations, Semantics, Category learning, Language change

A bunch of interesting questions to which I'd love to find an answer:
How do children learn their native languages? What information do they rely upon? Which learning models do the job? How do they form and represent concepts and categories? How do they understand meaning? What is the relation between language and perceptual information in creating meaning, concepts, and categories? How does what you know influence what you learn?

* Data stuff: user intent prediction, sequence modeling, computational social sciences

Problems I'd like to collaborate on:
* Deciding whether users will perform a certain action on a website, based on how they interact with it
* Network relations and interface influences in online elections
* Modeling labor market trajectories, using sequence modeling and language models to learn a grammar of unemployment
* Identifying people at risk of dropping out of school, therapy, or the like, using data about their interactions with teachers, doctors, and so on
* ...

* Stuff I'd like to work more with: brain imaging, cognitive psychology

If you're working on problems about brain imaging or cognitive psychology where my expertise in computational modeling, statistics, and data analysis may come in handy drop me a line, I'd love to be involved!


Data Science & Society Master, Tilburg University - Fall 2021 (ongoing)

Course on the analysis of user trajectories on websites, covering recommender systems, intent prediction, community detection, and the like. I focus on intent prediction; the course is co-taught with Dr. Boris Čule and Dr. Gonzalo Napoles.

Data Science & Society Master, Tilburg University - Fall 2019 (ongoing)

Course on the building blocks of NLP, appropriate machine learning methods, applications, and evaluation techniques. Aimed at social scientists, data scientists, and the like.

Cognitive Science and Artificial Intelligence Bachelor, Tilburg University - Spring 2020 (ongoing)

Course on CL and NLP (co-taught with Dr. Raquel Alhama), focusing on the different levels of linguistic analysis, the methods to address them, and how to combine those methods into end-to-end NLP models. Aimed at cognitive science and AI students.

Cognitive Science and Artificial Intelligence Bachelor, Tilburg University - Fall 2019 and Fall 2020

Intermediate-level course on Generalised Linear Mixed Models (co-taught with Dr. Travis Wiltshire) and their applications in Cognitive Science and AI. Previous experience with core statistical concepts (distributions, regression, ...) is required.

Available materials:
Linear Mixed Effects Models.
An introduction to linear mixed effects models with random intercepts and random slopes.
Logistic regression.
An introduction to logistic regression, including paragraphs on risk and odds, how to interpret regression coefficients, and how to compute CIs.
Bayes' Rule.
An entry-level presentation on Bayes' Rule using Star Wars and Covid-19.
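To give a flavor of the Bayes' Rule material, here is a minimal Python sketch of the kind of diagnostic-test computation the Covid-19 example revolves around; all numbers and the function name are hypothetical, chosen only for illustration.

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' Rule."""
    # Numerator: P(+ | disease) * P(disease)
    true_pos = sensitivity * prior
    # P(+ | no disease) * P(no disease)
    false_pos = (1 - specificity) * (1 - prior)
    # Bayes' Rule: divide by the total probability of testing positive
    return true_pos / (true_pos + false_pos)

# With 1% prevalence, 95% sensitivity, and 90% specificity,
# a positive test still leaves the posterior below 10%:
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(p, 3))
```

The sketch makes the usual point concrete: with a rare condition, even an accurate test yields mostly false positives, so the posterior stays low unless the prior is substantial.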


First Author publications

* Not just form, not just meaning: Words with consistent form-meaning mappings are learned earlier. QJEP, 2021. [presentation]
* Words with Consistent Diachronic Usage Patterns are Learned Earlier: A Computational Analysis Using Temporally Aligned Word Embeddings. Cognitive Science, 2021. [presentation]
* On the semantics of non-words and their lexical categories. JEP:LMC, 2020. [poster]
* Lexical category acquisition is facilitated by uncertainty in distributional co-occurrences. PLoS One, 2018.
* Distributional learning and lexical category acquisition: What makes words easy to categorize? CogSci 2017. [presentation]
* Constraining the space in cross-situational learning: Different models make different predictions. CogSci 2016. [poster]
* Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition. CogACLL 2015. [poster]

Other publications

* Voting, fast and slow. Contemporary Italian Politics, forthcoming.
* SIGIR 2021 E-Commerce Workshop Data Challenge. eCommerce workshop, SIGIR 2021.
* Shopper intent prediction from clickstream e-commerce data with minimal browsing information. Scientific Reports, 10(16983), 2020.
* Prediction is very hard, especially about conversion. Fashion AI, KDD 2019.
* Children Probably Store Short Rather Than Frequent or Predictable Chunks: Quantitative Evidence From a Corpus Study. Frontiers in Psychology, 2019.
* Evidence for a facilitatory effect of multi-word units on child word learning. CogSci 2017.
* Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation. Frontiers in Psychology, 2017.
* Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings. CogACLL 2015.


* The rare, the new, and the expressive. Dante's linguistic creativity and a few computational models to take it seriously
Dante Symposium, Tilburg University, 2021
* Rethinking the arbitrariness of the sign
CIMeC, University of Trento (Rovereto), 2020
* Prediction is very hard, especially about conversion
Jheronimus Academy of Data Science ('s-Hertogenbosch), 2019


* Sound symbolic associations via contrastive learning of text and images
AMLaP, 2022
* Can Joe be brave and bad? Experimental and computational approaches to thick terms
AMLaP, 2022
* A bridge between form and meaning. Modeling the relation between names and attributes in fictional characters
AMLaP, 2022
* Computational approaches to incrementally study systematicity in word learning
AMLaP, 2022
* Nomen est omen: Fictional characters' names encode polarity, gender, and age
AMLaP, 2021
* The bootstrapping toolbox: Which cues are more useful to learn lexical categories and why
PhD Defence, 2019
* Distributional nuggets for lexical categories: Finding the useful information in co-occurrence patterns
PiF 2018
* From lexical categories to acceptable lexical choices: A discriminative learning approach
PiF 2018
* Multimodal distributional semantic models and conceptual representations in sensory deprived subjects
CLIN26, 2015
* Distributional Semantics for Child-Directed Speech
CLIN25, 2015