The ACM International Conference on Multimedia Retrieval, ICMR’20, continues a decade-long tradition of being the top conference for introducing new ideas and paradigms in the domain of multimedia information retrieval and search. The initial meeting, held in Trento, Italy, in 2011, resulted from merging two conferences, ACM CIVR and ACM MIR, to create the flagship ACM multimedia retrieval conference. It was followed by meetings in Hong Kong, China, 2012; Dallas, USA, 2013; Glasgow, UK, 2014; Shanghai, China, 2015; New York, USA, 2016; Bucharest, Romania, 2017; Yokohama, Japan, 2018; and Ottawa, Canada, 2019. From the call for papers: “Effectively and efficiently retrieving information based on user needs is one of the most exciting areas in multimedia research. The Annual ACM International Conference on Multimedia Retrieval (ICMR) offers a great opportunity for exchanging leading-edge multimedia retrieval ideas among researchers, practitioners and other potential users of multimedia retrieval systems.”

Since the beginning of ICMR, each year’s conference has been accompanied by a special issue of IJMIR that highlights the best work from that year and allows the authors to present significantly deeper treatments of their work. This process starts by asking the ICMR program chairs which papers they recommend as the best from the conference and then inviting the corresponding authors to submit extended versions.

This has also been an unusual year due to the global pandemic. Most of the major computer science conferences were significantly delayed in 2020, and ICMR was no exception, shifting from a summer to a late-fall date. Because of worldwide health and travel guidelines, 2020 was also the first year that ICMR was held virtually, using remote video presentations. Moreover, many researchers faced unexpected difficulties because critical facilities were inaccessible during national lockdowns or because of health emergencies, and as a result several of the recommended papers were indefinitely delayed.

For ICMR’20, the program chairs were Klaus Schoeffmann (Klagenfurt University, Austria), Phoebe Chen (La Trobe University, Melbourne) and Noel O’Connor (Dublin City University, Ireland). We are happy to include the papers they recommended: “Multimodal News Analytics using Measures of Cross-modal Entity and Context Consistency” by Eric Müller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Sherzod Hakimov and Ralph Ewerth; and “Counterfactual Attribute-based Visual Explanations for Classification” by Sadaf Gulshad and Arnold Smeulders.

One of the leading multimedia research areas is multimedia/multimodal consistency. In real-world use, consistent multimedia can give humans a wider and more diverse conception of the overall message, and in numerous situations a lack of consistency between modalities can be a good indicator of fake or controversial news. In the paper “Multimodal News Analytics using Measures of Cross-modal Entity and Context Consistency”, the authors present a groundbreaking system for assessing multimedia consistency in real-world news articles, with the aim of assisting human assessors. Unlike previous work, the system is unsupervised and does not rely on any predefined training data. The authors also introduce several novel measures of the cross-modal similarity between text and images.
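As a rough, generic illustration of cross-modal similarity scoring (a sketch of the general idea only, not the authors’ actual measures), one can embed the entities mentioned in an article’s text and the article’s images into a shared space and check how well each textual entity is supported by at least one image; the embeddings here are assumed to be precomputed and are hypothetical:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cross_modal_consistency(entity_embeddings, image_embeddings):
    """Average, over all textual entities, of the similarity to the
    best-matching article image; higher values suggest the text and the
    images tell a consistent story. Purely illustrative, not the paper's method."""
    if not entity_embeddings or not image_embeddings:
        return 0.0
    best_matches = [
        max(cosine_similarity(ent, img) for img in image_embeddings)
        for ent in entity_embeddings
    ]
    return float(np.mean(best_matches))
```

A low score for, say, the persons named in the text versus the persons visible in the photos would flag the article for closer inspection by a human assessor.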

The leading trend in machine learning and multimedia analysis has centered on deep learning with neural networks because of their high accuracy and performance. However, one of the grand challenges is that it is very difficult to understand how a neural network makes its decisions, the so-called black-box problem. How can we explain what is happening in these large, deep neural networks? In the paper “Counterfactual Attribute-based Visual Explanations for Classification”, the authors aim to explain how deep neural networks make decisions. Inspired by human approaches to explanation, the proposed novel method utilizes both example-based and attribute-based explanations. The authors find that these explanations are intuitive and understandable by humans.
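The counterfactual idea can be sketched in a few lines: given a classifier that operates on an interpretable attribute representation, search for a small attribute change that flips the prediction, and report that change as the explanation. The snippet below is a simplified, hypothetical stand-in for the concept, not the authors’ method:

```python
def counterfactual_attribute_explanation(classifier, attributes, target_class, candidate_values):
    """Greedy single-attribute search: return (attribute_index, new_value) such that
    changing that one attribute makes `classifier` predict `target_class`,
    or None if no single change suffices.

    classifier       -- callable mapping an attribute list to a class label (assumed)
    attributes       -- current attribute values of the input, as a list
    candidate_values -- for each attribute index, the alternative values to try
    """
    for i, alternatives in enumerate(candidate_values):
        for value in alternatives:
            perturbed = list(attributes)
            perturbed[i] = value
            if classifier(perturbed) == target_class:
                # Explanation: "had attribute i been `value`, the prediction
                # would have been `target_class`."
                return i, value
    return None
```

Such an explanation reads naturally to a human (e.g., “the image would have been assigned class X if attribute A had taken value V”), which is the intuition behind combining attribute-based and example-based explanations.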

On behalf of the ICMR’20 program chairs and the IJMIR editorial board, we hope to see you at a future ACM ICMR.