TY - JOUR
AU - Wu, Patrick Y.
AU - Mebane, Walter R.
PY - 2022/05/03
Y2 - 2024/03/28
TI - MARMOT: A Deep Learning Framework for Constructing Multimodal Representations for Vision-and-Language Tasks
JF - Computational Communication Research
JA - CCR
VL - 4
IS - 1
SE - Articles
UR - https://computationalcommunication.org/ccr/article/view/102
SP - 275
EP - 322
AB - Political activity on social media presents a data-rich window into political behavior, but the vast amount of data means that almost all content analyses of social media require a data labeling step. However, most automated machine classification methods ignore the multimodality of posted content, focusing either on text or images. State-of-the-art vision-and-language models are unusable for most political science research: they require all observations to have both image and text and require computationally expensive pretraining. This paper proposes a novel vision-and-language framework called multimodal representations using modality translation (MARMOT). MARMOT presents two methodological contributions: it can construct representations for observations missing image or text, and it replaces the computationally expensive pretraining with modality translation. MARMOT outperforms an ensemble text-only classifier in 19 of 20 categories in multilabel classifications of tweets reporting election incidents during the 2016 U.S. general election. Moreover, MARMOT shows significant improvements over the results of benchmark multimodal models on the Hateful Memes dataset, improving the best result set by VisualBERT in terms of accuracy from 0.6473 to 0.6760 and area under the receiver operating characteristic curve (AUC) from 0.7141 to 0.7530. The GitHub repository for MARMOT can be found at github.com/patrickywu/MARMOT.
ER -