In reality, we often find ourselves having to determine the right data for addressing a particular research question. We regularly weigh the pros and cons of individual data collection methods in order to unveil new insights. While in some instances we have clear ideas in mind throughout this exploration process, in other situations we find it useful to filter by some parameters and, with these parameters in mind, examine specific situations and their underlying data. A practical example is one of our research projects, in which we investigate the honeypot effect in more detail. Here, we first filter the body tracking data for situations worth elaborating on. Filters can be, but are not limited to, aspects such as the ones described by Azad et al. [3]: How many people enter a scene from the left, the right, or the front? How many people slow down or start interacting? With regard to the honeypot effect, we look at situations where initially only one person was standing in front of a display installation and others then joined this person. Next, we try to identify patterns in the underlying body tracking data in order to, ultimately, find other occurrences algorithmically. While we can obtain one or many instances of the honeypot effect quantitatively this way, we are then required to provide some context for these instances to give them meaning.
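The filtering step described above can be sketched in code. The following is a minimal, hypothetical illustration, not the project's actual implementation: it assumes body tracking data has been reduced to a series of timestamped person counts, and flags moments where a single person stood in front of the display for some minimum duration before others joined (a candidate honeypot situation). The `Frame` structure, the `solo_min` threshold, and the function name are all assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One body tracking sample: timestamp in seconds and number of tracked people."""
    t: float
    n_people: int

def honeypot_candidates(frames, solo_min=10.0):
    """Hypothetical sketch: return (start, join) time pairs where exactly one
    person was present for at least `solo_min` seconds before others joined."""
    candidates = []
    solo_since = None  # timestamp when the scene last became single-occupant
    for f in frames:
        if f.n_people == 1:
            if solo_since is None:
                solo_since = f.t
        elif f.n_people > 1:
            # Others joined: record the situation if the solo phase was long enough.
            if solo_since is not None and f.t - solo_since >= solo_min:
                candidates.append((solo_since, f.t))
            solo_since = None
        else:
            # Scene is empty; reset the solo phase.
            solo_since = None
    return candidates
```

Candidates found this way would still have to be inspected manually and enriched with context data, as discussed below.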
Context data is required for interpreting what can actually be observed in body tracking and interaction data. As mentioned before, it can make a difference whether we are looking at a data set collected during holidays or at a time when the information needs within a company change. Context data can be, but is not limited to:
This list of context information can be expanded to include more complex aspects such as organisational work processes, for instance, in agile software development teams. Here, questions arise such as: Which sprint are the teams currently working on? When is the next release scheduled? When is the next on-site team meeting? What is the status of the individual teams? Post-COVID questions also emerge, such as: How can the hybridity of work processes be included in the understanding of the context and in data processing? In hybrid work situations, the actors are exposed to the duality of the work space (i.e., both the physical and the digital space exist simultaneously as communication and interaction spaces) [15].
Another type of context data is the location of an installation. If we collect data from several screens, it might be interesting to document location-related factors for every screen separately (e.g., to determine whether the data is complementary or comparable). Context data can also be obtained automatically, e.g., from calendars or (historical) weather services, but it can likewise be gathered as part of research projects in the form of interviews or the documentation of additional observations. We generally try to adhere to a procedure of keeping laboratory journals indicating special events and times that might be relevant for interpreting usage data later on. Last but not least, it is worth mentioning that, as Dourish [3] vividly describes, the meaning of a specific context is by definition flexible and in constant negotiation among its participants. We therefore have to regularly review our initial understanding of context during a study, tying it back to the initial goal definition or adapting it to the research process where necessary.
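Joining automatically obtained context records (holiday flags, weather summaries, lab journal notes) with usage sessions can be sketched as follows. This is a minimal illustration under assumed conventions: a CSV-style context source with columns `date,holiday,weather,note`, and sessions represented as dictionaries keyed by day. The column names and function names are assumptions for this sketch, not an actual project interface.

```python
import csv
from datetime import date

def load_context(lines):
    """Hypothetical sketch: parse daily context records from CSV lines with
    the assumed columns date,holiday,weather,note into a lookup keyed by date."""
    context = {}
    for row in csv.DictReader(lines):
        context[date.fromisoformat(row["date"])] = row
    return context

def annotate_sessions(sessions, context):
    """Attach the matching day's context record (or None) to each usage session."""
    return [dict(s, context=context.get(s["day"])) for s in sessions]
```

A session on a day without any context record simply receives `context=None`, making gaps in the lab journal or the external services visible during analysis.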
The written contributions for this workshop cover what has been addressed in the previous sections: Rohde et al. [19] describe an infrastructure for interaction logging. Fietkau [10] showcases a toolset for logging and visualizing body tracking data. Koch et al. [12] document a long-term ITW deployment of multiple public screens. Cabalo et al. [6] and Lacher et al. [14] propose and test two different approaches for analyzing body tracking data for determining engagement or attention. Buhl et al. [5] report on a limited-time gamification study to check whether such a change in the application leads to different user behavior.
Below are some open questions that are raised in the workshop papers or that emerge from the bigger picture formed by the collective contributions:
We have attempted to address some of these pressing issues in our field in Figure 1. In the workshop, we aim to discuss this preliminary methodological blueprint and thereby revise it in a meaningful way.
A further important issue to be discussed in the workshop concerns research data management (e.g., how to manage the collection and storage of interaction logs and qualitative data such as interviews), including long-term data storage and making data accessible to others for future studies.
Finally, another interesting topic, which is closely related to long-term ITW deployments, is the “sustainability” of IT research in practice. Nowadays, research in applied computing requires researchers to engage deeply in the field (e.g., with practitioners) in order to design innovative IT artifacts and understand their appropriation. An as-yet unsolved problem is what happens when the research project is completed (see, for example, Simone et al. [22] for a broader discussion on this matter).
As a research group, the workshop organizers are currently working on the DFG-funded research project “Investigation of the honeypot effect on (semi-)public interactive ambient displays in long-term field studies.” 1 They are eager to extend their internal discussions beyond the project's scope and to exchange insights with the broader community.
Michael Koch is a professor of HCI at the University of the Bundeswehr Munich, Germany. His main interests in research and education are cooperation systems, i.e., bringing collaboration technology into use in teams. In the past decades, he has worked on several projects in the field of public displays and has conducted multiple long-term field studies in this domain.
Julian Fietkau is a post-doc researcher in HCI at the University of the Bundeswehr Munich. His recently concluded doctoral project involved the design and evaluation of public displays of different kinds to support older adults in outdoor activities.
Susanne Draheim is a post-doc researcher and Managing Director of the Research and Transfer Centre “Smart Systems” at Hamburg University of Applied Sciences. She has an academic background in sociology, educational sciences, and cultural sciences. She works on datafication & qualitative social research methods, companion technology, and digital transformation.
Jan Schwarzer is a post-doc researcher in the Creative Space for Technical Innovations (CSTI) group at Hamburg University of Applied Sciences, working on long-term evaluations of user behavior around ambient displays deployed in authentic environments. Recently, he has concentrated on algorithmic approaches to distilling underlying patterns from quantitative usage behavior data.
Kai von Luck is a professor of computer science at Hamburg University of Applied Sciences and the Academic Director of the CSTI group. His background in artificial intelligence informs and enriches his work on ambient displays and tangible interfaces.
1: https://gepris.dfg.de/gepris/projekt/451069094