ABSTRACT
When conducting empirical research, it can be difficult to find experts to evaluate the obtained results, especially when the work is extensive and requires detailed feedback. For undergraduate and graduate students, gaining access to Human-Computer Interaction (HCI) experts can be even more challenging. This paper reports a strategy used to overcome this problem by taking advantage of the opportunities that the Brazilian HCI community offers. The strategy, applied to conclude a master's research project, consisted of using IHC 2019 as a venue to evaluate the proposed product. The result was the participation of experts with varied profiles, who provided in-depth and detailed feedback, a goal that is difficult to achieve in the context in which the research was conducted, where HCI researchers are still scarce. We hope that sharing our experience can help students who need to evaluate their work but face similar difficulties in finding experts. We also envision this strategy as a way to increase the integration between students and researchers.