Language and Vision (LaVi)

Language and Vision are two fundamental modalities through which human beings acquire knowledge about the world. We see and speak about things and events around us, and by doing so, we learn properties and relations of objects. These two modalities are highly interdependent, and we constantly mix information we acquire through them. However, computational models of language and vision have been developed separately, and the two research communities were for a long time unaware of each other's work. Interestingly, through these parallel research lines, they have developed highly compatible representations of words and images, respectively.

The importance of developing computational models of language and vision together has been highlighted by philosophers and cognitive scientists since the birth of the Artificial Intelligence paradigm. Only recently, however, has the challenge been taken up empirically by computational linguists and computer vision researchers.

In the last two decades, the availability of large amounts of text on the web has led to tremendous improvements in NLP research. Sophisticated textual search engines are now well consolidated and part of everybody's daily life. Images are the natural next challenge for the digital society, and the combination of language and vision is the key to this new era.

The UniTN researchers are at the forefront of this new challenge. Driven by theoretical questions, we look at applications as the test bed for our models. The focus so far has been on the investigation of multimodal models combining linguistic and visual vector representations; cross-modal mapping from visual to language space; and the enhancement of visual recognizers through language models. The recent results on LaVi at UniTN have profited from close collaboration with the teams of the ERC project COMPOSES and the MHUG Research Group.
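Cross-modal mapping from visual to language space is often realized as a simple linear projection learned from paired image and word vectors. The sketch below is a minimal illustration of that idea, not our actual models: all data is synthetic, and the dimensions, ridge penalty, and function names are arbitrary assumptions.

```python
import numpy as np

# Minimal illustrative sketch of cross-modal mapping: learn a linear map W
# that projects visual feature vectors into a word-embedding space via ridge
# regression. All data here is synthetic; dimensions are arbitrary.
rng = np.random.default_rng(0)
n, d_vis, d_txt = 100, 50, 30

V = rng.standard_normal((n, d_vis))            # visual vectors (one per image)
W_true = rng.standard_normal((d_vis, d_txt))   # hidden "true" mapping
T = V @ W_true + 0.01 * rng.standard_normal((n, d_txt))  # paired word vectors

lam = 0.1  # ridge penalty
# Closed-form ridge solution: W = (V^T V + lam*I)^{-1} V^T T
W = np.linalg.solve(V.T @ V + lam * np.eye(d_vis), V.T @ T)

def nearest_word(v, vocab):
    """Project a visual vector into language space and return the index of
    the most cosine-similar word vector (a common evaluation setup)."""
    p = v @ W
    sims = vocab @ p / (np.linalg.norm(vocab, axis=1) * np.linalg.norm(p))
    return int(np.argmax(sims))

# The learned map should reconstruct the paired word vectors closely.
rel_err = np.linalg.norm(V @ W - T) / np.linalg.norm(T)
```

In practice the visual vectors would come from a convolutional network and the word vectors from a distributional semantic model; the linear map is then evaluated by how often a projected image lands nearest its correct word.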

We are part of the COST Action "The European Network on Integrating Vision and Language".

News: Call for one PhD position.

If you are interested in working with us, look at the CIMeC PhD call.

People:

Publications