To fulfill their task, SUOs require information about their users (see Section 4.1: something is only “smart” if it can be adjusted to individual users). A central solution for fulfilling this requirement – e.g., via a central profile service – is often rejected by users. The justified distrust of central storage of their data can be countered by a decentralized profile store on a user's own personal device.
The user profile is stored completely on the user's mobile device, allowing independent control and handling of personal data. Users can edit their profile information and set preferences for how it may be used – e.g., restricting it to certain types of services. Fig. 9 shows a configuration screen of our prototype implementation. Valuable input for SUOs is a broadly defined user profile with information on interaction preferences, perceptual limitations, and motor disabilities. Furthermore, an important profile value is the comfort zone (Kötteritzsch et al. 2016) – a representation of the areas in which the user feels comfortable, derived from past activities. One goal of the overall system is to expand the comfort zone and give seniors more room to maneuver in their environment.
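As an illustration, such a decentralized profile can be thought of as a simple data structure held only on the device. The following Kotlin sketch is a minimal, hypothetical example – the field names and types are our assumptions, not the actual UrbanLife+ data model:

```kotlin
// Minimal sketch of a decentralized user profile kept on the mobile device.
// All fields are illustrative assumptions, not the project's actual model.

data class GeoArea(val centerLat: Double, val centerLon: Double, val radiusMeters: Double)

data class UserProfile(
    val interactionPreferences: Set<String>,  // e.g., "touch", "audio"
    val perceptualLimitations: Set<String>,   // e.g., "low-vision", "hard-of-hearing"
    val motorDisabilities: Set<String>,       // e.g., "wheelchair"
    val comfortZone: List<GeoArea>,           // areas derived from past activities
    val allowedServiceTypes: Set<String>      // restriction to certain types of services
)
```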
In addition to the modeling of profile information, user identification is a relevant aspect as well. To identify users, we use the Bluetooth LE protocol via mobile devices (or iBeacons). SUOs permanently offer a Bluetooth interface to which the mobile devices of passersby can connect. The usage can be described as follows: elderly users carry their device (e.g., a smartphone) with them, which runs a BLE central – i.e., an application that reacts to recognized SUOs and exchanges data with them. The SUOs thereby know who is close to them and which impairments they have, allowing them to respond to the individual user in the best possible way.
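On Android, such a BLE central could be sketched as follows – a minimal illustration assuming a hypothetical service UUID under which SUOs advertise themselves; permission handling, connection setup, and the actual data exchange are omitted:

```kotlin
import android.bluetooth.le.BluetoothLeScanner
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanFilter
import android.bluetooth.le.ScanResult
import android.bluetooth.le.ScanSettings
import android.os.ParcelUuid
import java.util.UUID

// Hypothetical service UUID under which SUOs advertise themselves (an assumption).
val SUO_SERVICE_UUID = ParcelUuid(UUID.fromString("0000feed-0000-1000-8000-00805f9b34fb"))

// Reacts to recognized SUOs; a real app would connect and exchange profile data here.
class SuoScanCallback : ScanCallback() {
    override fun onScanResult(callbackType: Int, result: ScanResult) {
        // The signal strength (RSSI) gives a rough proximity estimate.
        println("SUO detected: ${result.device.address}, RSSI ${result.rssi}")
    }
}

fun startSuoScan(scanner: BluetoothLeScanner) {
    val filter = ScanFilter.Builder().setServiceUuid(SUO_SERVICE_UUID).build()
    val settings = ScanSettings.Builder()
        .setScanMode(ScanSettings.SCAN_MODE_LOW_POWER) // battery-friendly background scanning
        .build()
    scanner.startScan(listOf(filter), settings, SuoScanCallback())
}
```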
A flood of information and obscure fare structures regularly cause many people – especially the elderly and physically impaired – to despair when buying tickets. To help users overcome these challenges, we designed a personalized ticket vending machine. Elderly users with low perceived self-efficacy and people with visual impairments should receive important information quickly and easily through this individual adaptation.
The concept is a smart ticket vending machine that uses proximity detection to recognize when a person who is registered in the system is standing in front of it. The vending machine then accesses the stored profile and personalizes the selection options on the start screen accordingly. This makes it possible to quickly book tickets to known destinations stored in the profile – e.g., home, relatives, or a popular shopping destination.
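The personalization step itself can be thought of as a simple mapping from profile data to start-screen options. The following Kotlin fragment is illustrative only; the FavoriteDestination type and the frequency-based ranking are assumptions:

```kotlin
// Hypothetical representation of a destination stored in the user profile.
data class FavoriteDestination(val label: String, val stationId: String, val visits: Int)

// Picks the most frequently used destinations for the personalized start screen.
fun quickBookingOptions(
    favorites: List<FavoriteDestination>,
    maxOptions: Int = 4
): List<FavoriteDestination> =
    favorites.sortedByDescending { it.visits }.take(maxOptions)
```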
The start screen (Fig. 10) is not only personalized in terms of content, but also in terms of design. The size and contrast of elements such as icons and text are adapted to known visual impairments. Audio output can also be provided. In any case, the design is clear and intuitively understandable.
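The design adaptation can likewise be derived from the profile. As a minimal sketch – the threshold values and impairment labels are assumptions – known visual impairments could be mapped to scaling and contrast parameters like this:

```kotlin
// Illustrative display parameters derived from the profile; the concrete
// values and impairment labels are assumptions for this sketch.
data class DisplayParams(val fontScale: Float, val highContrast: Boolean, val audioOutput: Boolean)

fun displayParamsFor(perceptualLimitations: Set<String>): DisplayParams =
    DisplayParams(
        fontScale = if ("low-vision" in perceptualLimitations) 1.5f else 1.0f,
        highContrast = "low-vision" in perceptualLimitations,
        audioOutput = "blind" in perceptualLimitations
    )
```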
In addition to cash payment at the machine, billing is also possible via the user profile. This means that the amount due can be transferred conveniently from home or paid directly by direct debit.
As presented in Chapter 2, our research aims at closing the gap between high-level challenges and the low-level design of SUOs by discussing challenges that we identified and categorized while designing a broad set of urban objects.
When looking into the properties of urban space, we encounter prerequisites for interaction that should be taken into consideration: usage is voluntary, and there is little control over who uses the objects and under which conditions. These properties lead us to the challenge of walk-up-and-use and, due to the voluntary usage, to a need for joyfulness in use (to improve the motivation to use).
If we add these two challenges to the challenges already identified in Chapter 2, i.e., personalization and data privacy and multi-user usage, we get the following list of core challenges: adaptability (including personalization and data privacy), multi-user usage, walk-up-and-use, and joy of use.
In the following, we briefly address how these core challenges may be addressed in general, and how they have been addressed in particular in some of the SUOs developed in UrbanLife+.
In HCI in general, the adaptability of a system is considered fundamental for usability (Heinecke 2012). In public space – which is characterized by a lack of control over users and conditions of use – heterogeneous needs come together. The fit senior citizen in their mid-80s should be addressed by the technology just as well as the 60-year-old wheelchair user. When involving people of different sizes, cognitive capacities, interests, and motor skills, HCI must allow for diverse models of input and output, adapted presentations, and changes in content and structure.
There is a lot of research on adaptivity in HCI, starting with early work by Brusilovsky on adaptive hypermedia systems: “by adaptive hypermedia systems we mean all hypertext and hypermedia systems which reflect some features of the user in the user model and apply this model to adapt various visible aspects of the system to the user” (Brusilovsky 1996).
So, there has to be a user model and adaptivity on different levels: from changes in procedures or structures in the system (pragmatic level), via changes in the content with which the user interacts (semantic level), changes in the basic interaction with the system (syntactic level), and changes in the presentation of information (lexical level), to physical changes in input and output (sensomotoric level).
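These levels can be made more concrete by pairing each of them with an example adaptation from our domain. The following Kotlin enum is purely illustrative; the example mappings are our own assumptions:

```kotlin
// The five adaptivity levels named above, each with an illustrative example
// from the SUO domain (the examples are assumptions, not an exhaustive mapping).
enum class AdaptivityLevel(val example: String) {
    PRAGMATIC("skip procedure steps the user has already completed"),
    SEMANTIC("show destinations relevant to this user's comfort zone"),
    SYNTACTIC("offer an audio dialog instead of a touch dialog"),
    LEXICAL("enlarge fonts and raise contrast for low vision"),
    SENSOMOTORIC("physically lower the display for wheelchair users")
}
```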
For our SUOs, we first defined a rich user profile including interaction preferences, restrictions on perception, and motor disabilities. We then address the issue of privacy control when this profile is made available to SUOs by storing highly sensitive personal data only on mobile devices – not in central storage – and restricting its use to the minimum necessary exchange between SUOs and the profile service. One important profile value is the comfort zone, derived from past activities, which we use to motivate activities (Kötteritzsch et al. 2016). In our experience, the issue of data privacy, which was often stressed by the authors mentioned in Chapter 2, is thus closely related to the issue of adaptability.
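The principle of minimum necessary exchange can be illustrated by deriving a reduced view of the profile before anything is handed to an SUO. The sketch below reuses the hypothetical UserProfile type from above; which fields a given SUO actually needs is an assumption:

```kotlin
// Reduced, non-identifying view of the profile: only what an SUO needs to
// adapt its output. The field selection is an illustrative assumption.
data class SuoProfileView(
    val perceptualLimitations: Set<String>,
    val motorDisabilities: Set<String>,
    val interactionPreferences: Set<String>
)

// Returns nothing unless the user has allowed this type of service.
fun minimalViewFor(profile: UserProfile, suoServiceType: String): SuoProfileView? =
    if (suoServiceType in profile.allowedServiceTypes)
        SuoProfileView(
            profile.perceptualLimitations,
            profile.motorDisabilities,
            profile.interactionPreferences
        )
    else null
```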
For smart information displays and the smart bus stop, we have provided adaptation of the information displayed and of the modes of interaction with the display – including the possibility to physically lower the display (e.g., when a user approaches in a wheelchair).
For smart activity support, we have tried displaying different symbols and allowing different forms of interaction based on the user profile. Smart informants should only point out dangers that are relevant for the approaching person, and smart signposts can display different directions according to the abilities of the user (e.g., different accessible paths).
In terms of privacy and information sovereignty, one of our major design decisions was to avoid general location tracking techniques (such as GPS or other device vendor location services), instead relying only on proximity detection of users near the SUOs (which have a known position). This goes a long way towards avoiding the impression of user tracking, as proximity detection is necessary for personalized SUO interactions anyway and thus unavoidable. It also makes it easy for the user to disable all location tracking by toggling a switch in our mobile app or by turning off Bluetooth.
“As computation gradually becomes part of everyday physical space, the spatial context within which interaction between humans and computation takes place radically changes from a fairly static single-user, location-independent world to a dynamic multi-user situated environment”
All urban objects can be used by multiple users one after another, and most of them even by multiple users at the same time. This multiple usage does not have to be coordinated: one user watching a public display from a distance while another user interacts with it is already multi-user usage. Challenges stemming from the need to serve different users simultaneously include balancing single-user and multi-user contexts and facilitating collaboration among multiple users who may be strangers (Ardito et al. 2015, Lin et al. 2015).
In the UrbanLife+ project, we found that the most difficult issue is that most devices can be seen by more than one user at a time. This is true for large smart information displays (several users standing in front of a screen in different interaction zones) as well as for small devices like micro-information radiators or lamps in a smart lighting scenario.
One example of our work on multi-user capability concerns the question of which direction of text movement on the screen provides the best legibility. The use of moving text is motivated by various recommendations to use animations to attract or increase the attention of users (e.g., Huang et al. 2008). Classically, it is assumed that leading – i.e., moving a sequence of words from right to left – is the optimal animation method (So & Chan 2009). However, this work does not take into account that 1) the view of the screen may be partially blocked by other users, and 2) users may not stand rigidly in front of the screen but may move around while viewing it. In a laboratory study, we therefore ran through these scenarios with different directions of text movement and determined the variant that offers the best subjective readability (Nutsi & Koch 2016). The result was that the typical text animation direction (right to left) is not always the best choice: for a user standing in front of the screen, the best results are achieved when the text is animated vertically (from top to bottom); for moving users, it is optimal if the text moves with the user (in the direction of movement).
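Applied at runtime, these findings reduce to a small decision rule. The following sketch encodes the study results; the simplified user state representation is a hypothetical assumption:

```kotlin
// Hypothetical, simplified user state as seen by the display.
sealed class UserState {
    object Standing : UserState()
    data class Moving(val leftToRight: Boolean) : UserState()
}

enum class TextAnimation { TOP_TO_BOTTOM, LEFT_TO_RIGHT, RIGHT_TO_LEFT }

// Decision rule following the study results (Nutsi & Koch 2016): vertical
// animation for standing users, movement-aligned animation for moving users.
fun bestAnimationFor(state: UserState): TextAnimation = when (state) {
    is UserState.Standing -> TextAnimation.TOP_TO_BOTTOM
    is UserState.Moving ->
        if (state.leftToRight) TextAnimation.LEFT_TO_RIGHT else TextAnimation.RIGHT_TO_LEFT
}
```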
For the large smart information displays, we also looked into personal areas displayed for different users and into addressing users to the left and to the right of the screen differently.
For micro-information radiators, we looked into using different colors for different users. However, this proved not to be ideal when several users are addressed at the same time. We had to limit ourselves to addressing the nearest user – which in itself is hard to determine, since it depends not only on physical distance but also on who can see the device better or approach it faster.
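One way to operationalize “nearest” beyond raw distance is a weighted score over candidate users. The following sketch is a hypothetical heuristic – the weights as well as the visibility and approach estimates are assumptions:

```kotlin
// Hypothetical per-user observations; in practice these would come from
// proximity detection and simple motion estimation.
data class Candidate(
    val userId: String,
    val distanceMeters: Double,
    val hasLineOfSight: Boolean,
    val approachSpeedMps: Double // positive when moving toward the device
)

// Lower score = better candidate. The weights are illustrative assumptions.
fun score(c: Candidate): Double {
    val visibilityPenalty = if (c.hasLineOfSight) 0.0 else 5.0
    val approachBonus = -2.0 * maxOf(0.0, c.approachSpeedMps)
    return c.distanceMeters + visibilityPenalty + approachBonus
}

fun nearestUser(candidates: List<Candidate>): Candidate? = candidates.minByOrNull(::score)
```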
Walk-up-and-use refers to the characteristic of systems that they can be used immediately, without an introduction or the study of a manual. This includes, firstly, an intuitive user interface, but also drawing attention to the systems and making potential users aware that the systems are interactive.
Intuitive usability has been defined, for example, as follows: “A technical system is intuitively usable if it leads to effective interaction through unconscious application of prior knowledge by the user” (Mohs et al. 2006). Raskin addressed the connection between intuitiveness and familiarity even earlier (Raskin 1994). However, the concept of the intuitiveness of user interfaces has not been conclusively clarified (Herczeg 2009).
In the context of smart information displays, for example, we specifically address the question of how someone who walks past the screens can 1) be made aware of the screen and its interactivity, 2) be motivated to approach the screen, and 3) be motivated and enabled to perform beneficial touch interaction with it. The model is based on temporal zones of interaction (see Fig. 11).
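In software, such temporal zones can be represented as a small phase model driven by proximity events. The phase names below follow the three steps just mentioned; the distance thresholds are illustrative assumptions, not the values from Fig. 11:

```kotlin
// Illustrative encoding of temporal interaction zones as a phase model.
// The distance thresholds are assumptions for this sketch.
enum class InteractionPhase { PASSING_BY, AWARE, APPROACHING, TOUCH_INTERACTION }

fun phaseFor(distanceMeters: Double, touching: Boolean): InteractionPhase = when {
    touching -> InteractionPhase.TOUCH_INTERACTION
    distanceMeters < 1.5 -> InteractionPhase.APPROACHING
    distanceMeters < 6.0 -> InteractionPhase.AWARE
    else -> InteractionPhase.PASSING_BY
}
```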
In our project we experimented with different ways to communicate the “how-to-use” to our users. The simplest was a text sign on a large smart information display that points to the possibilities of interaction. More potential was identified in showing a personal information area and playing a personal audio greeting when the device was approached.
The best solution for intuitive usability was often achieved when the result appeared without the need for explicit interaction – i.e., when the users “only” had to approach the devices. But even then, the result had to be “intuitive” with respect to interpretation and understanding. An analysis of the walk-up-and-use design of the smart activity support system is provided in the form of a process description.
Joy of use is a sub-topic of the design of interactive systems that has appeared occasionally in the HCI literature since the late 1990s. Roughly speaking, it describes the extent to which interaction with a technical system can trigger feelings of joy, happiness, or fun in the user. Unfortunately, there is no uniform and generally accepted definition. Probably the best-known and most frequently cited attempt to define the term is (Hassenzahl et al. 2001), although there have been other efforts to develop the term systematically, e.g. (Hatscher 2000). A useful overview of the definitions proposed up to its time of publication is provided in (Reeps 2004), which also discusses a number of neighboring terms (e.g., gamification, funology) and their delimitation.
An interesting question – one whose research potential, according to our own review of the literature, has not yet been tapped very deeply – is how to link joy of use and its associated methods with technology in public space. Especially in older HCI publications, a single-user context in private or professional environments is often implied. For the UrbanLife+ project, however, joy of use in public spaces is particularly interesting, because it requires new interpretation for the elderly as a user group – consider the spontaneous gathering of several users who want to interact with a system at the same time (see the section on multi-user usage above), or theoretical work on the design of prosocial game experiences (Cook et al. 2016).
That is not to say that there is no other work on joy of use in public spaces at all. Strands of research have recently coalesced around the terms “playable city” (Nijholt 2017) and “urban gamification” (Thibault 2019). Research challenges in these areas often mirror those of other urban technology deployments – how to operationalize user attention and engagement, how to gather data about users’ perceived emotional experience, etc.
In UrbanLife+ we designed and implemented a gamified reward system for urban exploration based around the game element of “quests”. Senior users are presented with opportunities for urban activities (outside their comfort zone if possible), such as visiting a specific café, museum, or social gathering. If they successfully complete such a quest, they receive a small but tangible reward, such as a voucher for their next visit or a free cup of coffee. This idea draws upon existing research on both economic incentive systems (cf. loyalty programs such as “Payback”) and personal narratives driven by self-determination (“being the hero of your own story”). The concept is described in more detail in (Fietkau 2019).
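To give an impression of the concept, a quest and its reward can be captured in a very small data model. The following sketch is illustrative only; the actual system described in (Fietkau 2019) may be structured differently:

```kotlin
// Illustrative data model for the quest-based reward system.
// Field names and the completion rule are assumptions for this sketch.
data class Reward(val description: String) // e.g., "free cup of coffee"

data class Quest(
    val title: String,               // e.g., "Visit the museum café"
    val destinationId: String,
    val outsideComfortZone: Boolean, // preferred when selecting quests for a user
    val reward: Reward,
    var completed: Boolean = false
)

// Hands out the reward once the user is detected at the destination
// (detection itself would use SUO proximity, as described above).
fun completeQuest(quest: Quest, detectedAtDestination: Boolean): Reward? =
    if (detectedAtDestination && !quest.completed) {
        quest.completed = true
        quest.reward
    } else null
```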
We conducted a brief empirical evaluation of this system prototype in the form of user tests and qualitative interviews with seven participants, some of whom were seniors (four persons older than 60 years) who were asked to judge from their own perspective, and some of whom were experts from the field of geriatric care who were asked to speak from their professional experience.
Within our interview group, we observed a diversity of opinion regarding material rewards as a potential motivator. Some subjects were immediately taken with the “prizes” and explicitly noted them as an important motivation for activities outside the home; others showed indifference, or in one case even clear rejection.
Independently of the rewards, the participants showed broad acceptance of the quest concept. The motivating function of the quests as a way to structure activity offers was repeatedly judged positively – not only for the participants’ own experience, but also, they hypothesized, as helpful for other elderly users.
It should be noted that joy of use is not limited to gamified experiences. For example, during sessions with seniors using interactive technology, we were able to observe that highly joyful reactions (a powerful motivation to overcome fear of technology) were sparked by showing photos of people or places that the seniors could recognize. One resulting design – which was implemented as a prototype but ultimately not pursued for empirical research – was a digital jigsaw puzzle that allowed senior users to assemble photos of their social group or of places outside their home.
Another lesson we learned about joy of use is that it is closely related to the purpose of the object. “Using” an object is not considered a burden if it is purposeful – in our case, if the user gains additional safety by using the object.
In this report we briefly reviewed challenges for designing objects for urban space – and particularly presented experiences we gained in designing SUOs for improving the safety of seniors in urban space in the UrbanLife+ project.
We found the most important challenges to be adaptability (including data privacy), multi-user usage, walk-up-and-use and joy of use. While other papers have presented a much broader view here (e.g. (Stephanidis et al. 2019)), we found that our four challenges are more practical – they provide direct issues to look at when designing systems. That helped us a lot in the project.
There is a lot that can still be done – each of the four challenges could be addressed in a separate book. We will continue to use this thematic structure for presenting and discussing the challenges and possible solutions in one particular class of SUOs: information radiators – both large smart information displays and micro-information radiators.
One final remark about seniors as a user group: we found that the need to address our four challenges is not unique to this user group – however, some challenges play out differently for seniors than for other user groups. One example is walk-up-and-use. Like other designers and researchers, we experienced that seniors are in general much more reluctant to use technology than other user groups. But even for this issue, we found that providing a real benefit and presenting the objects in settings such as focus groups helps to overcome the challenge.
Prof. Dr. Michael Koch
michael.koch@unibw.de
www.unibw.de/inf2/personen/professoren/univ-prof-dr-michael-koch
Prof. Dr. Michael Koch studied computer science at the TU Munich and received his doctorate in the subject. After an industrial stay at the Xerox Research Centre Europe and a subsequent habilitation in computer science, again at the TU Munich, he now teaches at the University of the Federal Armed Forces Munich, where he holds the professorship for Human-Computer Interaction.
Anna Buck
anna@koetteritzsch.net
www.koetteritzsch.net
Anna Buck (née Kötteritzsch) studied Applied Cognitive and Media Sciences (M. Sc.) at the University of Duisburg-Essen. Until 2019, she worked as a research assistant in the UrbanLife+ project at the University of the Federal Armed Forces Munich. Currently, she is working in the field of IT training.
Julian Fietkau
julian.fietkau@unibw.de
www.unibw.de/julian.fietkau
Julian Fietkau studied computer science and human-computer interaction at the University of Hamburg, then worked for a year as a research assistant at Bauhaus University Weimar until he joined the Universität der Bundeswehr München in 2016. Since then, he has been working as a PhD student with Prof. Dr. Michael Koch in the field of human-computer interaction, especially in the context of the research project UrbanLife+.
Laura Stojko
laura.stojko@unibw.de
www.unibw.de/inf2/personen/wissen_mitarbeiter/laura-stojko
Laura Stojko studied Information Systems at the University of Regensburg (Bachelor) and at the Technical University of Munich (Master) and has been working on her PhD in Human-Computer Interaction with Prof. Dr. Michael Koch at the University of the Federal Armed Forces Munich since September 2019. She is a research assistant and supports teaching and research projects.