Teaching Showcase: Academic Writing Quality Criteria

As part of the “Human-Computer Interaction” course of 2021 at Universität der Bundeswehr München, I conceptualized and conducted a voluntary extra session for the students. Its goal was to deepen the participants’ ability to recognize and produce high-quality academic texts. The overarching course in which this session was embedded revolves around how research in HCI is conducted. It also includes a research project phase, in which students write a detailed scientific report that is graded at the end of the course.

Context

This extra session was conceptualized as a supplementary unit to the Human-Computer Interaction (HCI) course at Universität der Bundeswehr München. The course itself introduces students in the computer science master’s degree programme to the topic of human-computer interaction as a research field, as well as to relevant research processes and methods. The 2021 iteration of the course, taught by Prof. Dr. Michael Koch, was conducted fully online.

In the German higher education system post-Bologna, degree programmes consist of modules which can then contain one or several courses. To give you a full picture of the HCI module, I provide a complete copy of its official module description (as of October 2020, unofficial translation into English by me). You can find the original on UniBw M’s MSc Computer science degree overview page.

Module name: Human-Computer Interaction
Module number: 1167
Account: WPFL Vert.: SIM - INF 2021
Responsible for module: Univ.-Prof. Dr. Michael Koch
Module type: Elective
Recommended trimester: 1
Workload: 270 h (presence time: 60 h, self-study time: 210 h)
ECTS points: 9
Associated courses:
  • 11671, Lecture: Human-Computer Interaction (participation: elective, 3 TWH)
  • 11672, Exercise: Project Human-Computer Interaction (participation: elective, 4 TWH)
  • Sum (mandatory and elective): 7 TWH
Recommended preconditions
For the self-study of the recommended literature for the module, basic English proficiency is required.
Qualification goals
The central educational objective is a comprehensive overview of the goals and research questions in the discipline of human-computer interaction. Students gain a basic understanding of how interactive products can be developed with special consideration of user and task requirements. Designing usable products whose use can also be fun is the goal of this design process. Participants know the basic principles of interaction design to create interactive products. Participants know the basics of human information processing and their consequences for the design of interactive products. Participants know common process models, methods and tools for the creation of interactive products. Participants are capable of creating their own interaction designs for interactive products. Participants know basic evaluation procedures to judge interactive products.
Content

The content of this course follows the recommendations by the ACM Curriculum Human-Computer-Interaction and the GI FG 2.3.1 Software-Ergonomie:

  • Goals, principles and design spaces
  • Historical development
  • Perception psychology and information processing
  • Practical contexts for interactive products
  • Process model, design and prototyping
  • Input and output devices, interaction techniques
  • User-centered design
  • Usability evaluation
  • Cooperative systems (Groupware, CSCW)
  • Connections to other disciplines (e.g. design, pedagogy, psychology)
  • Integration into software development

A selection of these topics is developed more deeply using recent academic publications.

The acquired design principles, methods, tools and processes are applied in the project phase.

Literature
  • Preece J., Rogers Y., Sharp H.: Interaction Design, John Wiley & Sons, 2002 (www.id-book.com)
  • Dahm M.: Grundlagen der Mensch-Computer Interaktion, Pearson Studium, 2006
  • Norman D. A.: The Design of Everyday Things, Currency Doubleday, 1990
  • Shneiderman B., Plaisant C.: Designing the User Interface, Addison Wesley, 4th Edition, 2005
Attestation of completion
Shared grade attestation for active work in the lecture and the project.
Applicability
The module is not conceived to serve as a basis for other modules. A combination with the module 1164 “Computer-supported Cooperative Work” is nonetheless recommended.
Duration and frequency

The module takes 2 trimesters to complete.

The module begins in the winter trimester of each year. The project phase is usually worked on during the following spring trimester.

The course does not utilize all of its weekly time slots for synchronous activities. Most weeks consist of either a traditional lecture or a discussion of assignment results that have been handed in. In both cases, the sessions take place in the university’s video conferencing tool of choice, BigBlueButton. However, several weeks leave the time slot open for self-study activities and students receive pointers towards valuable study materials. For the last such week without a scheduled lecture, I decided to offer a voluntary bonus session to sharpen the students’ understanding of how academic text quality can be judged – to help them find high-quality sources for their project work as well as to assist them in writing good project reports.

Hard Preconditions (course)

Group size
16 active participants (19 registered in total)
Location
Online (BigBlueButton)
Time
Fridays, 8:00 AM to 10:15 AM (integrated lecture and exercises)
Target audience
Students in the computer science MSc programme
Mandatory or elective
Elective
Exam
Graded project report

Soft Preconditions (session)

Prior knowledge
Course participants have read and reviewed at least two academic HCI publications and have written term papers and a bachelor’s thesis prior to taking this course.
Expectations
Participants will expect that being part of the extra session will offer additional clarification for the remainder of the course and that their time investment will be respected.
Interest and Motivation
Participants have shown investment in the research and documentation process, likely fueled in part by the requirement of a final report.
Studying/working habits
Prior sessions have established that the course, while primarily lecturer-driven, is often interactive and requires individual participation. The 16 participants have turned in prior assignments on time.

The bonus session took place on March 12th, near the end of the winter trimester and thus of the lecture-driven part of the course. The date was chosen so that the session could build upon the participants’ experience from the course assignments up to that point and strengthen their foundational knowledge as they proceeded into the project phase.

Goals and Approach

Up to this point in the course, the participants had been working with specific academic texts and reflecting on their strengths and weaknesses. For the bonus session, my goal was to help them consider quality criteria of academic writing in a more explicit way, to verbalize and discuss them. Generalizing from expressions of quality criteria to the quality criteria themselves (as concepts that can be discussed, judged, and weighed) is a jump in abstraction. Accordingly, all the main learning goals for the bonus session can be found at the higher abstraction levels of Bloom’s taxonomy.

Besides the learning goals, a secondary motivation for the course organizers was that up to this point there had been no comprehensive grading rubric for the project reports. We thought that the students would likely appreciate clearer guidelines on how their project reports would be graded. I decided to make use of this opportunity by turning the compilation of grading criteria for the project report (or at least a rough outline of them) into the motivation for delving into quality criteria. By letting participants influence their own grading criteria, I established a very close connection between the session’s content, the learning goals, and the grading process for the course, which is in line with current didactic theories such as Constructive alignment.

Going into the broad planning phase for the session, I used the AVIVA model to establish a didactic throughline.

The AVIVA model (Städeli et al. 2010, PDF, German) is a Swiss guideline for the structure of successful learning opportunities. The five letters of the acronym refer to five phases that can serve as a template for the broad strokes of a teaching unit design. See the following table, translated from the original German publication, for a brief overview.

Phase: Arrival and preparation (“Ankommen und einstimmen”)
  Instruction (“direct teaching”): Learning goals and steps are presented.
  Self-directed learning (“indirect teaching”): The situation or problem is presented. Learners decide on goals and processes largely unaided.

Phase: Activate preexisting knowledge (“Vorwissen aktivieren”)
  Instruction: Learners activate their prior knowledge under the direction and structure given by the teacher.
  Self-directed learning: Learners activate their prior knowledge on their own.

Phase: Inform/Instruct (“Informieren”)
  Instruction: Resources are developed or expanded together, with the teacher directing the way.
  Self-directed learning: Learners decide what resources they need to acquire and develop their own process.

Phase: Process (“Verarbeiten”)
  Instruction: Learners actively engage with the given resources: process, consolidate, practice, apply.
  Self-directed learning: Learners actively engage with the new resources: process, consolidate, practice, apply, discuss.

Phase: Evaluate (“Auswerten”)
  Instruction: Goal, process and learning success are evaluated.
  Self-directed learning: Goal, process and learning success are evaluated.

These goals and process guidelines were condensed into the following rough plan:

  1. Make session goals transparent for the participants, establish shared motivation
  2. Prompt participants to recall experience with prior course assignments
  3. Relate that experience to the topic of text quality criteria
  4. Main section: Work collaboratively on a structured overview of possible quality criteria based on participants’ own expertise as well as selected external inspirations
  5. Finish by explaining how the session result will serve as a building block for the project report grading schema

For a session driven to this extent by collaborative work, a suitable tool had to be selected as a shared workspace. After comparing a few different ones, I decided on Explain Everything.

Explain Everything logo

It was clear at this stage of planning that the bonus session would be highly interactive and collaborative, drawing on the participants’ experience and expertise. From prior workshops I had attended, I knew about digital whiteboard tools that could be integrated into video conferencing systems like Zoom. However, in all cases I had experienced, they had been relegated to brief, constrained moments within a framework of PowerPoint-style slides. In order to take full advantage of the opportunity presented by the bonus session, I decided against using one of those functionalities and opted instead for a tool named Explain Everything to prepare a digital framework for the session.

The compelling idea behind Explain Everything is that it takes the classic premise of “the content consists of prepared slides, with a small number of optional collaborative moments in specific places” and turns it on its head: here, everything is collaborative first, unless the presenter specifically marks certain parts as non-interactive. It still allows you to prepare a sequence of views, but in Explain Everything each one is a collaborative whiteboard instead of a static slide. Elements can be locked in place to prevent accidental modification, but even that is easily undone. I carried this mindset into the design phase for the course materials. (It should be noted that Explain Everything also offers more restrictive access control modes, but the trust-based fully collaborative setting seemed like a perfect fit for this session.)

Screenshot of the Explain Everything whiteboard tool with its function buttons

This is what Explain Everything looks like for an individual user. Additional participants can be invited to the shared whiteboard.

The date for the bonus session was announced to the students a few weeks in advance, and it was made clear that participation would be 100% voluntary (though attendance was not compulsory for the lecture anyway). The topic for the bonus session, quality criteria for academic writing in HCI, was also shared in advance.

Planned Schedule

  • 5 min: Greetings, notice on data privacy
    Goal(s): Establish a productive atmosphere, convey organizational information
  • 5 min: Establishing goals for this session
    Didactic approach: Lecture-style presentation
    Goal(s): Create a shared understanding of the intended results and lessons behind today’s activities
    Tools and materials: Prepared slides
  • 5 min: Reminder of students’ prior experiences consuming and producing academic texts in earlier assignments and other contexts
    Didactic approach: Lecture-style presentation
    Goal(s): Activate memories that will be helpful for the collaborative activities in this session
    Tools and materials: Prepared slides
  • 10 min: Collecting ideas for possible quality criteria for academic writing in HCI
    Didactic approach: Collaborative brainstorming
    Goal(s): Gather an unsorted and unstructured collection of quality criteria, incorporate ideas from all participants
    Tools and materials: Digital whiteboard
  • 10 min: Clustering and structuring the collected criteria
    Didactic approach: Collaborative mindmap-esque structuring work
    Goal(s): Sort the gathered quality criteria into an initial structure, identify duplicates, discuss aspects where participants show disagreement
    Tools and materials: Digital whiteboard
  • 20 min: Getting inspiration for further quality criteria from external sources
    Didactic approach: Text analysis in small groups or individually
    Goal(s): Develop expertise on established quality criteria in various academic contexts and communities
    Tools and materials: Breakout sessions in the video conference system
  • 10 min: Integrating new impulses into the earlier collection
    Didactic approach: Collaborative mindmap-esque structuring work
    Goal(s): Incorporate the external ideas from the individual experts into the shared criteria collection
    Tools and materials: Digital whiteboard
  • 5 min: Specifying individual weights for each criterion
    Didactic approach: Individual placement of colored dots
    Goal(s): Gather a visual impression of participants’ individual opinions about how heavily each criterion should be weighed
    Tools and materials: Digital whiteboard
  • 5 min: Summary of results
    Didactic approach: Lecture-style presentation
    Goal(s): Establish a shared baseline of understanding of the collaborative results, instill a sense of accomplishment
    Tools and materials: Digital whiteboard
  • 5 min: Overview of next steps
    Didactic approach: Lecture-style presentation
    Goal(s): Explain to participants how today’s results will be used going forward
    Tools and materials: Prepared slides
  • 10 min: Goodbyes and feedback survey
    Goal(s): End the synchronous session and give participants plenty of time for anonymous feedback
    Tools and materials: Anonymous online survey

Detailed Retrospective

For anyone with a deep interest in the details of the session, this section provides a complete archive of the associated materials, annotated with post-hoc comments. It includes all slides, whiteboard templates, and work results in full detail.

Human-Computer Interaction, March 12th, 2021, Julian Fietkau
Bonus session: Quality criteria for academic writing in HCI

Just as announced, the bonus session started at 8:00 AM on Friday, March 12th, 2021. I met with the participants in the BigBlueButton room for the course and shared my view of a title slide in Explain Everything. Of the 16 active course participants, twelve joined the session.

After waiting one or two minutes for possible latecomers, I greeted the participants, announced the topic of the session and proceeded to present a few notes on privacy and data protection, since the tool we were going to use was new to the students.

A few words ahead of time on data privacy...

I let the students know that we would be using Explain Everything and that their activities on the collaborative whiteboards would not only be visible to all participants, but also recorded and published later as part of my didactic training and evaluation. I notified them that, for the same reason, I was making an audio recording of my own voice, but not theirs – things they said over our BigBlueButton conference would be audible to me and all other participants, but not recorded. I let them know that Explain Everything also asks to use their device’s microphone, which I recommended declining since we were not using the platform’s audio conferencing feature.

Goals for today
  • Collect quality criteria for academic texts in HCI
  • To that end: look back on previous assignments
  • Also look outside the box: what do journals, conferences and reviewers say they want?
  • Build a catalogue of criteria out of the collected impulses
  • Assess the importance of individual criteria to create a basis for a transparent grading rubric for your project reports
In summary: collect and collate your (diffuse) knowledge of text quality, verbalize it and reflect on it, so you can be even better at recognizing and producing high-quality academic texts in HCI.

Next, I gave an overview of the intended goals (learning and otherwise) for the session.

This slide largely speaks for itself.

Engaging with academic writing
Consume:
  • literature research
  • academic reviewing
  • collegial feedback
Produce:
  • homework, term papers
  • degree theses
  • your own academic publications

After I was confident that I had gotten my plan across, I moved on to an additional lecture-style note to reinforce the topic of the session. I tried to elucidate the image of consuming and producing academic texts as two sides of the same coin, different but related skills that may pull distinct quality criteria to the forefront, but are ultimately founded on the same notion of quality.

Brainstorming: quality criteria

With the topic and goals introduced and everyone’s mind (hopefully) in gear, I proceeded to the first collaborative board, which initially looked like this.

The inner rectangle does not have any technical significance; I simply wanted to give the participants something that evokes the image of a canvas and visually invites them to add their contributions.

I very briefly introduced the method of brainstorming, just in case one of the participants was not familiar with it yet. Then I invited them to add any possible ideas for quality criteria that came to their minds to the board.

Brainstorming: quality criteria
  • understandable language
  • scientific value
  • clear guiding theme
  • correctly referenced
  • no slang
  • sufficient length (enough explanation)
  • comprehensible
  • factually correct
  • purposeful use of pictures, tables
  • adhere to formal standards and requirements

This was the result a few minutes later.

I also actively took part in this activity to signal to the participants that I would not be a neutral observer or judge of their work, but that we would be working together towards a shared result. The bottom two criteria were added by me. I do not believe that the participants set the font sizes intentionally; in this collaboration tool, a new text element seems to start out at whatever size was last selected by that user, so it should be assumed that the sizes do not carry any meaning.

Initial mindmap: quality criteria
  • no slang
  • understandable language
  • sufficient length (enough explanation)
  • correctly referenced
  • clear guiding theme
  • comprehensible
  • scientific value
  • purposeful use of pictures, tables
  • factually correct
  • adhere to formal standards and requirements

With an initial collection of ideas for criteria available, I made a copy of the whiteboard and gave the students a few minutes to order the criteria into some sort of structure. There was no specific method or process given, except that related criteria should end up close together. I let the participants work for a few minutes until activity on the board died down. This was the result.

So, what does ....... say?
  1. Springer: Peer Review Policy, Process and Guidance
  2. Springer: Writing a journal manuscript
  3. ACM CHIIR 2020: Reviewing Guidelines
  4. ACM CHI 2020: Guide to Reviewing Papers
  5. ACM DGOV: Review guidelines
  6. Forrer/Spitzmüller: quality criteria for academic texts
  7. Uni Siegen: pointers for term papers and theses
  8. Munsch: evaluation criteria for scientific works

With the students’ initial ideas for criteria in place and roughly a third of the session behind us, it was time to introduce external inspirations. For this step I pulled up a prepared whiteboard that looked like this.

I had eight external resources prepared that could serve as useful inspirations for quality criteria. Some were from the HCI community, others from different fields or topic-agnostic. They cover stances from journal editors, conference organizers, and writing advisors. Sources 6 to 8 were in German, the others in English. Here are the links in case you are interested: 1, 2, 3, 4, 5, 6, 7, 8

First, I asked all participants to pick out one of the external sources based on the title, and put a dot next to that source’s number at the bottom right of its square using Explain Everything’s marker tool. Because I could not predict the exact number of active participants in the bonus session ahead of time, I prepared for two possibilities: (1) if many participants are present and active, several of them may share one of the sources, or (2) if not many participants are present, or they are present but do not actively take part in the session, not all sources may be assigned.

After a few moments of waiting, four of the twelve participants had picked a source. The others did not speak up when prompted, so I left it at that and decided to handle the remaining four sources on my own. Since the remaining participants stayed silent, I only gained further insight into their reasons from the feedback survey afterwards (see below). After activity on the whiteboard had stopped, I made the links to the sources available via the shared notes section of our BigBlueButton room.

Had several participants needed to share one source, I would have created an appropriate number of breakout rooms in BigBlueButton. With only four participants having picked a source, I simply asked them to work individually.

So, what does ....... say?
  1. Springer: Peer Review Policy, Process and Guidance
     • Quality of the data and the empiricism
  2. Springer: Writing a journal manuscript
     • Correct statistics
     • Abstract & conclusion supported by central thesis
  3. ACM CHIIR 2020: Reviewing Guidelines
     • Originality
     • Appropriate (research) methods
  4. ACM CHI 2020: Guide to Reviewing Papers
     • Novel contribution to the research field (at least 25% novel for text based on prior research)
     • Transparency of research
     • No results that have already been published elsewhere (exception: translation by original author)
  5. ACM DGOV: Review guidelines
     • Partially subjective decision criteria (e.g. is it well written?)
     • Four recommendations: Accept, Minor Revision, Major Revision, Reject
     • No personal criticism, only constructive
     • Reviewers stay anonymous
     • Avoid Conflict of Interest (COI) (e.g. the reviewer is related to the author)
  6. Forrer/Spitzmüller: quality criteria for academic texts
     • Structure of outline reflects structure of content
     • Fun to read due to linguistic variation
  7. Uni Siegen: pointers for term papers and theses
     • Precise central question
     • Diverse sources
     • High-quality sources
     • Clear guidance
     • Grammar/phrasing
     • Appealing layout
     • Use (correct) quotes
     • Explain jargon terms
     • Answer central question in conclusion
     • Gender-neutral language
  8. Munsch: evaluation criteria for scientific works
     • Spelling and grammar
     • Language & expression (sentences, word choice)
     • Reflection
     • Use tables and statistics with purpose and integration
     • Find clear central question
     • Derive structure from core question
     • Avoid stereotypes and generalizations
     • Do not plagiarize

I tasked them with examining their source and looking for quality criteria that they would consider interesting or worthwhile additions to our mindmap. To begin with, they were asked to add their notes to the current slide, into the respective source’s box. They were given 20 minutes for this task. After the time was up, this is what it looked like.

At this point, slightly over halfway through the planned time, I asked the participants whether they’d prefer to take a short break or power through the last 30 minutes. Two students replied in chat that they’d prefer continuing without a break, so that is what we did.

Amended mindmap: quality criteria
  • no slang
  • understandable language
  • correctly referenced
  • high-quality sources
  • no plagiarism
  • purposeful use of pictures, tables
  • adhere to formal standards and requirements
  • avoid Conflict of Interest (COI) (e.g. the reviewer is related to the author)
  • originality
  • sufficient length (enough explanation)
  • comprehensible
  • clear guiding theme
  • precisely defined central question
  • scientific value / clear core question
  • factually correct
  • transparency of research
  • scientific method
  • appropriate scientific methods
  • clean statistics

With the external sources successfully mined for additional criteria, I returned to our mindmap, made another copy and asked the participants to carry over any important criteria that had been collected on the source whiteboard. Simultaneously, they were prompted to reorganize the mindmap as necessary to achieve a coherent result. After another ten minutes, this was the result.

The two-column structure was in no way guided by me; it emerged as the participants amended the mindmap.

Relative importance: quality criteria (very important / moderately important / less important)
  • no slang
  • understandable language
  • correctly referenced
  • high-quality sources
  • no plagiarism
  • purposeful use of pictures, tables
  • adhere to formal standards and requirements
  • avoid Conflict of Interest (COI) (e.g. the reviewer is related to the author)
  • originality
  • sufficient length (enough explanation)
  • comprehensible
  • clear guiding theme
  • precisely defined central question
  • scientific value / clear core question
  • factually correct
  • transparency of research
  • scientific method
  • appropriate scientific methods
  • clean statistics

After activity on the whiteboard came to a halt, I continued to the next and final task: on yet another new copy of the latest mindmap, I asked the participants to place colored dots next to each criterion based on how (relatively) important they would judge it. This was, of course, meant to give us an impression of how the students would want the criteria to be weighed in the project report grading scheme. After a few short minutes, this was the result.

Once again I waited until activity on the whiteboard came to a natural stop, then provided a brief off-the-cuff summary of the result: which kinds of criteria prompted a high rate of red or green votes, and where there might be relations between some of the criteria.

What now?
  • The course organizers will now create a catalogue of grading criteria with relative weights out of today’s results for your final project reports. (Additions and removals reserved.)
  • The complete grading schema will be available to you starting in April.
  • If it proves successful, it will likely be used again in future iterations of this course.

After that was done, I switched over to my last prepared slide to let the participants know how we would be proceeding from here.

To close out the session, I thanked the participants for their time and wished them a nice weekend. Before anyone left, I shared a link to a short anonymous feedback survey in the chat, the results of which are shown below.

The bonus session ended at about 9:15 AM, roughly 15 minutes earlier than planned.

Feedback and Reflection

At the very end of the bonus session, I gave the participants a link to a very brief feedback survey, consisting of three questions answered on a five-point scale and one optional text field for more detailed feedback. Of the twelve students that were present, eight opted to fill out the survey. That is not enough to allow for meaningful statistical inferences, but we can still get an impression of how the session was received. The results are presented below.

Q1: Did today’s session deepen your understanding of the topic “quality criteria for academic writing”?

For this question we can see a fairly balanced distribution of answers. Very few participants felt like they learned a great deal or nothing at all; most are somewhere in the middle. This seems like it can be counted as a modest success.

Q2: Did the didactic approach as a whole (collaborative work, individual study etc.) seem sensible to you?

I expected more contrarian answers to this question, especially considering surveys are usually biased to select for respondents who have things to criticize, so having a strong majority in the positive half of the scale is a welcome result. It is no surprise that the interactive approach would not be favored by everyone, but the fact that there are zero votes for the middle answer is pretty interesting. It seems the session provoked a “love it or hate it” reaction.

Q3: Please give an overall rating for today’s session.

For the overall opinions on the session, we have almost all answers in the middle or somewhat positive categories, which seems sensible. Certainly there are valid criticisms and aspects that can be improved, so receiving this kind of rating distribution comes across as very fair. One participant appears to have hated the session, and I don’t want to dismiss them outright, but it is also true that you cannot always reach everyone.

The free text field at the end of the survey was used by six of the eight respondents. Most of the responses consisted of constructive criticism and helpful suggestions. To ensure confidentiality, they are not reproduced verbatim here; instead, I extract and present my personal takeaways below.

When using a new (to the participants) collaboration tool, offer a space and (short) time for familiarization, e.g. a canvas to let them try out the different markers.
I think this is an excellent suggestion. It was also my first time using Explain Everything and I spent some of my preparation time getting used to the idiosyncrasies of its text editor. We would have easily had enough time to let the students scribble around on the title slide for a few minutes to get the hang of it, and this is something I will definitely keep in mind for the next time an opportunity like this comes up.
The title of the session mentions academic writing in HCI, but the developed criteria (and some of the external sources) were independent of the scientific field. The content of the session did not necessarily live up to its name.
I can empathize with this reaction, especially since general academic writing advice can be found in plenty of other places besides an HCI course. It is worth noting that I did not know ahead of time how this session would unfold and what the results would be, and I certainly hoped that more HCI-specific quality criteria would emerge. For the future, I will still take more care to properly manage the expectations I set through session titles and topic announcements.
Some participants – by choice or necessity – follow the course on a mobile device instead of a PC. For them, actively taking part in this session would have been difficult or impossible.
This is honestly a possibility that had not occurred to me ahead of time, which shows my personal bias as someone who avoids using his phone for productivity tasks. Nonetheless, I cannot think of anything about today’s session that I would have changed, had I known earlier that some participants would be using mobile devices. I obviously do not take it personally if someone decides not to take part in the active work portions of a voluntary bonus session, but in fairness, it was announced that it would be an interactive session requiring active participation. The two lessons for me here are to put more thought into how well users of mobile devices can be integrated into the workflow in the future, and then also communicate more clearly to participants what level of activity is going to be expected of them.

In addition to the points mentioned above, the one improvement that I would make for a future iteration of the same session plan would be to find a good way to evaluate the work results with the participants. In this session we ended with the complete criteria mindmap, but had no benchmark to judge it by or compare it against. Ideally I would like to give the participants an opportunity to find out how well they did. However, this is very difficult with a topic as layered and open-ended as this one, where having a “sample solution” in the back pocket would defeat the purpose.

All in all, I am satisfied with the bonus session. The plan translated well into practice and the results can be built upon in the future. I took a slight gamble by leaving so much of the session’s success to the participants, and I am content that it worked out.