Abstract:
Text-to-Image (T2I) generation is the task of synthesizing images that correspond to a given text input. Recent innovations in artificial intelligence have enhanced the capabilities of conventional T2I generation, yielding increasingly powerful models. However, their behavior is known to become unstable when text inputs contain nonwords, i.e., strings that have no definition within a language. This behavior not only produces images that fail to match human expectations but also prevents these models from being used in psycholinguistic applications and simulations. This paper exploits the human tendency to associate nonwords with phonetically and phonologically similar words, and uses it to propose a T2I generation framework that is robust against nonword inputs. The framework comprises a phonetics-aware language model and an adjusted T2I generation model. Our evaluations confirm that the proposed nonword-to-image generation synthesizes images depicting the visual concepts of phonetically similar words more stably than comparison methods. We also assess how well the generated images match human expectations, showing better agreement than a phonetics-blind baseline.
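The paper's framework is not reproduced here; purely as a hedged illustration of the underlying intuition, the sketch below maps a nonword to its phonetically closest real word via string similarity over a crude phonetic encoding, then notes how the resolved word could feed an off-the-shelf T2I pipeline. The `LEXICON`, `crude_phonetic`, and `nearest_real_word` names are hypothetical stand-ins, and the similarity measure is a gross simplification of a true phonetics-aware language model.

```python
# A minimal sketch of the general idea only, not the paper's actual
# framework: resolve a nonword to a phonetically similar real word,
# then prompt a standard T2I model with the resolved word.
from difflib import SequenceMatcher

# Hypothetical toy lexicon; a real system would use a full vocabulary
# with proper phoneme transcriptions (e.g., from a G2P converter).
LEXICON = ["cat", "cactus", "kettle", "guitar", "castle", "kitten"]

def crude_phonetic(word: str) -> str:
    """Crude phonetic normalization (a stand-in for real grapheme-to-
    phoneme conversion; this mapping is an illustrative assumption)."""
    word = word.lower()
    for src, dst in [("ck", "k"), ("qu", "kw"), ("ph", "f"), ("c", "k")]:
        word = word.replace(src, dst)
    # Collapse doubled letters, e.g. "kettle" -> "ketle".
    collapsed = []
    for ch in word:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)
    return "".join(collapsed)

def nearest_real_word(nonword: str) -> str:
    """Pick the lexicon entry whose phonetic form best matches the nonword."""
    target = crude_phonetic(nonword)
    return max(
        LEXICON,
        key=lambda w: SequenceMatcher(None, target, crude_phonetic(w)).ratio(),
    )

if __name__ == "__main__":
    word = nearest_real_word("kaktos")  # nonword -> likely "cactus"
    print(word)
    # The resolved word could then drive any standard T2I pipeline, e.g.:
    # from diffusers import StableDiffusionPipeline
    # pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # image = pipe(f"a photo of a {word}").images[0]
```

A real implementation along the paper's lines would replace the edit-distance heuristic with a learned phonetics-aware language model and adjust the T2I model itself, rather than merely substituting the prompt.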
Type: Journal paper in IEEE Access, vol. 12, pp. 41299-41316, Mar. 2024.
Publication date: March 2024