Abstract:
With the rise of multi-modal applications, the need for semantic knowledge of language and vision becomes prominent. While modern applications often consider both text and images, human perception is usually only a secondary consideration. In my doctoral studies, I research the quantification of visual differences between concepts with respect to human perception. Initially, I looked at local visual differences between concepts and their subordinate concepts, measuring the variety gap between images of, e.g., car and vehicle. In a follow-up study, I used data mining on Web-crawled images to estimate psycholinguistic metrics such as the imageability of words. In this way, the tendency towards low or high imageability can be estimated on a dictionary level, defining the gap between words like peace and car. Going forward, I want to create visualization demos to analyze psycholinguistic relationships in image datasets.
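To make the idea more concrete, the following is a minimal sketch, not the method from the paper, of how a visual "variety" score per word could be computed from image features: images crawled for a word are embedded (e.g., with a pretrained CNN), and their dispersion serves as a rough proxy for imageability. All function names, feature sources, and the choice of dispersion measure are illustrative assumptions.

    # Illustrative sketch only, not the paper's actual method: estimate a
    # visual "variety" score for one word as the mean pairwise cosine
    # distance of its image feature vectors. Low variety suggests a visually
    # concrete (high-imageability) word such as "car"; high variety suggests
    # an abstract (low-imageability) word such as "peace".
    import numpy as np

    def visual_variety(features: np.ndarray) -> float:
        """Mean pairwise cosine distance of image features for one word."""
        normed = features / np.linalg.norm(features, axis=1, keepdims=True)
        sims = normed @ normed.T                      # cosine similarities
        n = len(normed)
        upper = sims[np.triu_indices(n, k=1)]         # unique pairs only
        return float(np.mean(1.0 - upper))

    # Toy usage with random vectors standing in for CNN embeddings of
    # Web-crawled images; real features would come from a pretrained model.
    rng = np.random.default_rng(0)
    car_feats = rng.normal(0.0, 0.1, size=(50, 512)) + 1.0   # tight cluster
    peace_feats = rng.normal(0.0, 1.0, size=(50, 512))        # scattered
    print(visual_variety(car_feats) < visual_variety(peace_feats))  # True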
Type: Paper at ACM Multimedia (ACMMM) 2019 Doctoral Symposium
Publication date: October 2019