Compatibility of Support and Autonomy in Personalized HCI

Schriften zur soziotechnischen Integration, Volume 6
Julian Fietkau
Universität der Bundeswehr München
Mandy Balthasar
Universität der Bundeswehr München

Abstract

Developers and designers of interactive digital systems are faced with many challenges, some less visible than others. One of these more subtle balancing acts is between personalized user support and unrestricted user autonomy. It raises the question of how a supportive design that takes a user’s needs and preferences into account can be implemented within a technical system while at the same time allowing the user to make decisions freely and without restrictions. How can autonomy by design be incorporated into the process? In this paper we describe the breadth of choice heuristic, which can show where the system’s support of the user ends and a restriction of the user’s autonomy begins. To this end, we reference existing literature on similar issues, develop our own conclusions and apply them to our work in a project on IT for elderly people.

Keywords

Interaction design, user autonomy, value sensitive design, persuasive design, privacy, ethics

License

Creative Commons Attribution-NonCommercial-NoDerivatives

The content of this series is licensed to the public under the Creative Commons Attribution Non Commercial No Derivatives 4.0 International license. This means that you are permitted to copy and distribute this content, provided that the work is correctly attributed to the authors, the content is not used commercially, and the text is not edited, shortened or otherwise changed.


1. Introduction

Steve Krug’s Don’t Make Me Think (Krug 2014) has been a valuable guiding principle for the design of easy-to-operate systems, but its catchy title does not provide a full picture for the responsible design of human-computer interaction (HCI). Clearly, interactive systems cannot take all critical thought out of the user’s hands. The blurry line between support and paternalism, between paving the way and prescribing the way, must be clarified.

When we speak of autonomy, we do not mean self-legislation as described by Kant (Kant 2012). Rather, autonomy refers to a process that opens up all possibilities for individuals to interact, or not, with their physical and digital world, and to position themselves within this exchange (Liggieri & Müller 2019). Autonomy can thus also be understood here as authenticity (Misselhorn 2018).

The daily experience of people is increasingly influenced by digital technology. Their data is collected, linked, analyzed and used in many ways, often unnoticed. In addition, the Privacy-by-Default Principle is frequently not applied, leading to the phenomenon of the Privacy Paradox (Joinson et al. 2010, Balthasar & Gerl 2019).

In this paper, especially in Section 2, we gather theoretical knowledge about user autonomy in the context of HCI. In Section 3 we share our approach taken in the UrbanLife+ project to preserve the autonomy of users of a personalizable information system. Subsequently, in Section 4 we present the breadth of choice heuristic, which is our core contribution to the understanding of the role of user autonomy in the design of digital systems. Finally, in Section 5 we subject our heuristic to a brief critical discussion.


2. Related Work

One of the first scholars to think critically about the role of user autonomy in the context of HCI was J. C. R. Licklider (Licklider 1960) in his work on human-computer symbiosis, in which he anticipates a future reality of fast and dynamic information flows between the human mind and computers. He concludes that a functional symbiosis of humans and computers can only be achieved if humans remain autonomous in their ability to think and make decisions, even as computing power increases.

In recent years, a branch of HCI that deals with playful interactions has paid special attention to Self-Determination Theory (SDT) as an explanation of motivation. SDT as described by Ryan and Deci (Ryan & Deci 2000) considers three core aspects of intrinsic motivation: besides competence and relatedness, the autonomy of the user is of central importance. This autonomy should enable users to make their own decisions at their own pace. Thus SDT, like Licklider, builds on the aspect of user autonomy. According to a literature analysis by Tyack and Mekler (Tyack & Mekler 2020), the use of SDT in HCI system design is mostly very abstract and therefore often superficial.

Two other related concepts are User Openness (Davis et al. 1989), which arises in connection with the appropriation of software solutions, and End-User Development (Lieberman et al. 2006), which originates from Participatory Design.

A number of promising procedures such as Ethics by Design (Spiekermann & Winkler 2020), Value Sensitive Design (Shilton 2018) or the (now often legally prescribed) Data Protection by Design (Danezis et al. 2015) can be drawn upon to support the ethical design of technology. In addition, participatory ethics evaluation procedures such as MEESTAR (Weber 2015) can provide concrete guidance. General brainstorming and analysis tools such as the Ethics Canvas (Lewis et al. 2018) are also available and will certainly be developed further.


3. Project Background

Without stakeholder commitment, the aspect of user autonomy in HCI remains purely theoretical and the effort an intellectual exercise. In order to avoid this, we took the opportunity to incorporate the idea of autonomy by design into a larger project. This project, named UrbanLife+, is a five-year endeavor in which a number of German-speaking universities and companies participate. It has developed concepts and prototypes for an interactive system that aims to influence the behavior of seniors by encouraging them to engage in more outdoor activities (Fietkau & Stojko 2020).

The resulting system acts autonomously by collecting and aggregating data about the local context without explicit instructions from the user. For this reason, the topic of implicit interaction was thoroughly discussed within the design framework. The result is a system that reacts to users as soon as they enter its sensor range, whether they are aware of it or not. Each system is designed to provide specific value, but the human being has their unique dignity (Frankl et al. 2019), which is granted strong legal protection in the German Basic Law (Grundgesetz, Article 1). Wherever the two (the value of the system and the dignity of the human) may come into conflict, design decisions need to be made with great care.

Since the system can take action autonomously, questions arise about possible violations of the human being’s rights, particularly regarding personal autonomy and data sovereignty. In order to preserve this autonomy within the system, architectural and design decisions were made only after deliberate ethical consideration. This also involved differentiating between two scenarios: on the one hand, motivating and supporting seniors in exploring their environment, and on the other hand, manipulating people into leaving existing routines and comfort zones.

To ensure that the user remains in control, the system described above offers the option to disable user recognition on an individual basis. Information provided to users is usually personalized, but if the user prefers to interact anonymously, a simple button press makes this possible. This is enabled by conducting user recognition via Bluetooth in combination with a mobile phone app that provides a toggle switch. When personalization is deactivated, the user’s identity remains hidden from the system as well as from other nearby users. At the same time, however, the user’s own preferences previously defined in the system, such as data protection or the consideration of accessibility in the selection of information, remain untouched. A conscious integration of autonomy prevents its loss to automation.
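
As an illustration of this mechanism, the following sketch shows how such a toggle could withhold the user’s identity while keeping their stored preferences in effect. It is a minimal, hypothetical example in Python; the names and data structures are our own and are not taken from the UrbanLife+ codebase.

  # Hypothetical sketch: an anonymity toggle that hides identity but keeps preferences.
  # Names and structure are illustrative only, not taken from the UrbanLife+ system.

  from dataclasses import dataclass, field


  @dataclass
  class UserProfile:
      user_id: str              # identity shared with the system when personalization is on
      display_name: str         # identity potentially visible to nearby users
      preferences: dict = field(default_factory=dict)  # e.g. accessibility and privacy settings
      personalization_enabled: bool = True              # toggled from the mobile phone app


  def build_detection_response(profile: UserProfile) -> dict:
      """Compose the data the system may use when this user enters its sensor range."""
      response = {
          # Preferences always apply, regardless of the anonymity toggle.
          "preferences": dict(profile.preferences),
      }
      if profile.personalization_enabled:
          # Personalized mode: the system may address the user directly.
          response["identity"] = {
              "user_id": profile.user_id,
              "display_name": profile.display_name,
          }
      else:
          # Anonymous mode: no identity is exposed to the system or to other users,
          # but accessibility and data protection preferences remain in effect.
          response["identity"] = None
      return response


  if __name__ == "__main__":
      profile = UserProfile(
          user_id="u-1234",
          display_name="M.",
          preferences={"large_text": True, "share_location": False},
      )
      profile.personalization_enabled = False  # the "interact anonymously" button press
      print(build_detection_response(profile))

The design choice illustrated here is that anonymity and personalization are controlled by a single, easily reversible switch, while the user’s preferences are stored and applied independently of it.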


4. The Breadth of Choice Heuristic

As described in Section 2, existing guidelines for ethical interaction design do not yet make it easy for designers to assess the impact of concrete design decisions on user autonomy. The provision of rich personalized functionality requires detailed information about the user and their needs and preferences. How can we determine during the development of a system whether its design respects the user’s autonomy? In answering this question, it is important to note that the line between truly helpful support and unwelcome paternalism cannot easily be drawn for the general case, since it depends strongly on the person in question. Where this line lies will vary from moment to moment and from situation to situation, so the framework of our project allows users to draw it themselves by making their own decisions.

During the design process of our UrbanLife+ systems, we have developed a heuristic to anticipate the impact of a design choice on user autonomy. The question is: Does this interaction broaden the user’s spectrum of available actions, or does it narrow it? We observe that interaction designs which undermine user autonomy will usually have a specific activity at their core that they want the user to perform, such as watching a video advertisement. They will then attempt to ensure that this activity is simpler and more convenient than any possible alternative, e.g. by making the button to close the advertisement as small and difficult to see as possible. The user’s ability to decide what their technology should do is intentionally subverted, manipulating the user and undermining their autonomy.

In contrast, system designs that respect the user’s autonomy usually offer additional options for action or provide equally transparent conditions for all options. While the system may provide assessment information on less advisable options or even warn the user about potential dangers, ultimately its goal is to empower the user to make rational decisions based on all available information.
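
To make this contrast concrete, the following hypothetical Python sketch (our own illustration, not part of any existing system) presents every option with the same prominence and attaches advisory notes instead of removing or hiding alternatives.

  # Hypothetical sketch: presenting choices with equal transparency.
  # Advisory notes may discourage an option, but no option is removed or hidden.

  from dataclasses import dataclass
  from typing import List, Optional


  @dataclass
  class Option:
      label: str                      # what the user can choose to do
      advisory: Optional[str] = None  # assessment or warning, never a veto


  def present_options(options: List[Option]) -> None:
      """Render every option with the same prominence; warnings inform, they do not decide."""
      for index, option in enumerate(options, start=1):
          line = f"[{index}] {option.label}"
          if option.advisory:
              line += f"  (note: {option.advisory})"
          print(line)


  if __name__ == "__main__":
      present_options([
          Option("Take the shorter route"),
          Option("Take the scenic route", advisory="contains stairs, no elevator"),
          Option("Stay at home"),  # the 'do nothing' option remains available
      ])

The point is not the rendering itself but the contract it encodes: the set of options is never silently reduced on the user’s behalf.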

The logical argument for the breadth of choice heuristic is as follows:

  1. If the system respects the user’s autonomy,
  2. then the system leaves all important decisions to the user. If that is the case,
  3. then the system extends the user’s range of possible actions.

And in the opposite case:

  1. If the system does not respect the user’s autonomy,
  2. then the system makes important decisions on behalf of the user. If that is the case,
  3. then the system limits the user’s range of possible actions.

This breadth of choice heuristic is easier to evaluate and to discuss in the context of a design process than the abstract criterion of respect for the user’s autonomy. At each design decision, it asks: does this decision aim to increase or decrease the user’s range of available options?
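
Stated semi-formally (the notation is ours and intended only as a sketch of the argument above): let A(s) mean that system s respects the user’s autonomy, D(s) that s leaves all important decisions to the user, and B(s) that s extends the user’s range of possible actions. The two chains then read:

  \[
    A(s) \Rightarrow D(s) \Rightarrow B(s)
    \qquad\text{and}\qquad
    \neg A(s) \Rightarrow \neg D(s) \Rightarrow \neg B(s).
  \]

Read heuristically in the diagnostic direction, an observed narrowing of the user’s range of actions (¬B) is treated as a warning sign that important decisions are being made on the user’s behalf, and thus that the design may not respect the user’s autonomy.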


5. Discussion

As the size and complexity of a project increase, there is a tendency towards a loss of autonomy for individuals within it. The responsibility for the resulting system becomes blurred, as partial results pass through many different hands and thought processes. Awareness of a moral responsibility for ideas, functions and their implementation does not arise automatically. Once an idea has developed into a first prototype, and that prototype has made contact with the system’s context, it is no longer possible to “reset” the context back to the earlier state, no matter how small the step taken (Liggieri & Müller 2019). This is all the more reason to reflect on possible consequences during the design process and not just at the end.

Describing the work of astronauts in space capsules, Anders (Anders 1994) paints a picture of humans serving the technical system instead of the other way around: astronauts do not “fly” their capsule in the same sense that a pilot “flies” a plane, instead they are flown by the capsule. This allegory describes the handing over of the two human responsibilities of action and creation, leaving humans only with the operation of the system.

In our project we have attempted to avoid reducing the users’ decision-making to a mere operation of the system. The goal is to offer choices that users can decide on based on knowledge gained from experience and self-reflection – a design of HCI that makes Descartes’ anima rationalis (Liggieri & Müller 2019) possible: to be a free soul.

Nonetheless, the phenomenon of decision paralysis, in which someone confronted with one or several choices may experience anxiety and avoid or postpone the decision-making process even to their personal disadvantage, must not be underestimated (Anderson 2003). The answer cannot be to leave every single decision up to the user. As designers and engineers, we still need to make judgments about which decisions are important enough to involve the user’s mental resources.

A codification of the goal of autonomy within an ethical system development process is not yet available, and its preconditions must first be clarified. If, however, a large number of relevant actors are involved, as in many interdisciplinary projects like ours, such a process has the potential to produce a binding result.

There is no simple, straightforward answer to the problem of support and autonomy in HCI, just as there can be no universally optimal chess move (Frankl et al. 2019), but one possible next step could be to derive more concrete design guidelines from the presented heuristic. By bringing this conversation into the wider HCI community, we hope not only that our heuristic can be useful to practitioners, but also that it can serve as a building block in future theories on ethical system design that respects user autonomy.


6. Acknowledgments

This article was originally submitted and accepted for the NordiCHI 2020 workshop “Strengthening human autonomy in the era of autonomous technology” organized by Tone Bratteteig, Diana Saplacan, Rebekka Soma, and Johanne Svanes Oskarsen. The workshop submissions were not published. At the time of this publication, the workshop participants are considering future work to build upon core aspects of the workshop discussion and contributions, including this article.

As the first author’s contributions to this paper were developed in the scope of the UrbanLife+ project, this work has been partially supported by the Federal Ministry of Education and Research, Germany, under grant 16SV7443. We thank all project partners for their commitment.


References

Anders, Günther (1994): Der Blick vom Mond: Reflexionen über Weltraumflüge. In: Beck’sche Reihe 1056, Orig.-Ausg. München: Beck.
Anderson, Christopher J (2003): The Psychology of Doing Nothing: Forms of Decision Avoidance Result from Reason and Emotion. Psychological Bulletin, 1/2003 (129), S. 139–167.
Balthasar, Mandy & Gerl, Armin (2019): Privacy in the toolbox of freedom. In: 12th CMI Conference on Cybersecurity and Privacy (CMI). Copenhagen, Denmark, S. 1–4.
Danezis, George; Domingo-Ferrer, Josep; Hansen, Marit; Hoepman, Jaap-Henk; Metayer, Daniel Le; Tirtea, Rodica & Schiffner, Stefan (2015): Privacy and Data Protection by Design – from policy to engineering.
Davis, Fred D; Bagozzi, Richard P & Warshaw, Paul R (1989): User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 8/1989 (35), S. 982–1003.
Fietkau, Julian & Stojko, Laura (2020): A system design to support outside activities of older adults using smart urban objects. In: Proc. Europ. Conf. on Computer-Supported Cooperative Work 2020. EUSSET.
Frankl, Viktor E; Bauer, Joachim & Vesely, Franz (2019): Über den Sinn des Lebens, 1st. Auflage. Weinheim, Germany: Beltz.
Joinson, Adam; Reips, Ulf-Dietrich; Buchanan, Tom & Paine Schofield, Carina (2010): Privacy, Trust, and Self-Disclosure Online. Human-Computer Interaction, 1/2010 (25). Taylor & Francis, S. 1–24.
Kant, Immanuel (2012): Grundlegung zur Metaphysik der Sitten. In: Reclams Universal-Bibliothek 4507. Ditzingen, Germany: Reclam.
Krug, Steve (2014): Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability. In: Voices That Matter, 3rd. Auflage. San Francisco, CA, USA: New Riders.
Lewis, Dave; Pandit, Harshvardhan J; Devinney, Hannah & Reijers, Wessel (2018): Towards an Open Data Vocabulary for Canvas Driven Innovation Ethics. In: Waterman, K Krasnow; McGuinness, Deborah L & Hendler, James A (Hrsg.): Proceedings of the Workshop on Semantic Web for Social Good 2018, 2182. Monterey, CA, USA.
Licklider, Joseph Carl Robnett (1960): Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics, 1/1960 (HFE-1), S. 4–11.
Lieberman, Henry; Paternò, Fabio & Wulf, Volker (2006): End-User Development. In: Human-Computer Interaction Series 9, 1st. Auflage. Dordrecht, Netherlands: Springer.
Liggieri, Kevin & Müller, Oliver (2019): Mensch-Maschine-Interaktion: Handbuch zu Geschichte – Kultur – Ethik, 1st. Auflage. Stuttgart, Germany: J.B. Metzler.
Misselhorn, Catrin (2018): Grundfragen der Maschinenethik. In: Reclams Universal-Bibliothek 19583, 2nd. Auflage. Ditzingen, Germany: Reclam.
Ryan, Richard M & Deci, Edward L (2000): Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist, 1/2000 (55), S. 68–78.
Shilton, Katie (2018): Values and Ethics in Human-Computer Interaction. Foundations and Trends in Human-Computer Interaction, 2/2018 (12), S. 107–171.
Spiekermann, Sarah & Winkler, Till (2020): Value-based Engineering for Ethics by Design. SSRN Electronic Journal, 2020.
Tyack, April & Mekler, Elisa (2020): Self-Determination Theory in HCI Games Research: Current Uses and Open Questions. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20. New York, NY, USA: Association for Computing Machinery.
Weber, Karsten (2015): MEESTAR: Ein Modell zur ethischen Evaluierung sozio-technischer Arrangements in der Pflege- und Gesundheitsversorgung. Technisierung des Alltags – Beitrag für ein gutes Leben?, May/2015, S. 247–262.