[Slide: Consume – literature research, academic reviewing, collegial feedback | Produce – homework, term papers, degree theses, your own academic publications]
After I was confident that I had gotten my plan across, I moved on to an additional lecture-style note to reinforce the topic of the session. I tried to convey the image of consuming and producing academic texts as two sides of the same coin: different but related skills that may pull distinct quality criteria to the forefront, but are ultimately founded on the same notion of quality.
With the topic and goals introduced and everyone’s mind (hopefully) in gear, I proceeded to the first collaborative board, which initially looked like this.
The inner rectangle does not have any technical significance; I simply wanted to evoke the image of a canvas and visually invite the participants to add their contributions.
I very briefly introduced the method of brainstorming, just in case any of the participants were not familiar with it yet. Then I invited them to add to the board any ideas for quality criteria that came to their minds.
This was the result a few minutes later.
I also actively took part in this activity to signal to the participants that I would not be a neutral observer or judge of their work, but that we would be working together towards a shared result. The bottom two criteria were added by me. I do not believe that the participants set the font sizes intentionally; in this collaboration tool, a new text element seems to start out at whatever size the user last selected, so the sizes should not be read as carrying any meaning.
With an initial collection of ideas for criteria available, I made a copy of the whiteboard and gave the students a few minutes to order the criteria into some sort of structure. There was no specific method or process given, except that related criteria should end up close together. I let the participants work until activity on the board died down. This was the result.
With the students’ initial ideas for criteria in place and about a third of the session behind us, it was time to introduce external inspirations. For this step I pulled up a prepared whiteboard that looked like this.
I had prepared eight external resources that could serve as useful inspirations for quality criteria. Some were from the HCI community, others came from different fields or were topic-agnostic. They covered stances from journal editors, conference organizers, and writing advisors. Sources 6 to 8 were in German, the others in English. Here are the links in case you are interested: 1, 2, 3, 4, 5, 6, 7, 8
First, I asked all participants to pick out one of the external sources based on its title and put a dot next to that source’s number at the bottom right of its square using Explain Everything’s marker tool. Because I could not predict the exact number of active participants in the bonus session ahead of time, I had prepared for two possibilities: (1) if many participants were present and active, several of them might have to share one of the sources, or (2) if few participants were present, or they were present but did not actively take part in the session, not all sources might be assigned.
After a few moments of waiting, four of the twelve participants had picked a source. The others did not speak up when prompted, so I left it at that and decided to handle the remaining four sources on my own; the only further insight into why came from the feedback survey afterwards (see below). After activity on the whiteboard had stopped, I made the links to the sources available via the shared notes section of our BigBlueButton room.
If several participants had needed to share one source, I would have created an appropriate number of breakout rooms in BigBlueButton. With only four participants having picked a source, I simply asked them to work individually.
I tasked them with examining their source and looking for quality criteria that they would consider interesting or worthwhile additions to our mindmap. To begin with, they were asked to add their notes to the current slide, into the respective source’s box. They were given 20 minutes for this task. After the time was up, this is what it looked like.
At this point, slightly over halfway through the planned time, I asked the participants whether they’d prefer to take a short break or power through the last 30 minutes. Two students replied in chat that they’d prefer continuing without a break, so that is what we did.
With the external sources successfully mined for additional criteria, I returned to our mindmap, made another copy, and asked the participants to carry over any important criteria from the prior whiteboard. Simultaneously, they were prompted to reorganize the mindmap as necessary to achieve a coherent result. After another ten minutes, this was the result.
The two-column structure was in no way guided by me, but emerged as the participants amended the mindmap.
After activity on the whiteboard came to a halt, I continued to the next and final task: on yet another new copy of the latest mindmap, I asked the participants to place colored dots next to each criterion based on how (relatively) important they judged it. This was, of course, meant to give us an impression of how the students would want the criteria to be weighted in the project report grading scheme. After a few short minutes, this was the result.
Once again I waited until activity on the whiteboard came to a natural stop, then provided a brief off-the-cuff summary of the result: which kinds of criteria prompted mostly red or green votes, and where there might be relations between some of the criteria.
After that was done, I switched over to my last prepared slide to let the participants know how we would be proceeding from here.
To close out the session, I thanked the participants for their time and wished them a nice weekend. Before anyone left, I shared a link to a short anonymous feedback survey in the chat, the results of which are shown below.
The bonus session ended at about 9:15 AM, circa 15 minutes earlier than planned.
Feedback and Reflection
At the very end of the bonus session, I gave the participants a link to a very brief feedback survey, consisting of three questions answered on a five-point scale and one optional text field for more detailed feedback. Of the twelve students present, eight opted to fill out the survey. That is not enough to allow for meaningful statistical inferences, but we can still get an impression of how the session was received. The results are presented below.
Q1: Did today’s session deepen your understanding of the topic “quality criteria for academic writing”?
For this question we can see a pretty balanced distribution of answers. Very few participants felt like they learned a ton or nothing at all; most were somewhere in the middle. This seems like it can be counted as a modest success.
Q2: Did the didactic approach as a whole (collaborative work, individual study etc.) seem sensible to you?
I expected more contrarian answers to this question, especially considering surveys are usually biased to select for respondents who have things to criticize, so having a strong majority in the positive half of the scale is a welcome result. It is no surprise that the interactive approach would not be favored by everyone, but the fact that there are zero votes for the middle answer is pretty interesting. It seems the session provoked a “love it or hate it” reaction.
Q3: Please give an overall rating for today’s session.
For the overall opinions on the session, we have almost all answers in the middle or somewhat positive categories, which seems sensible. Certainly there are valid criticisms and aspects that can be improved, so receiving this kind of rating distribution comes across as very fair. One participant appears to have hated the session, and I don’t want to dismiss them outright, but it is also true that you cannot always reach everyone.
The free text field at the end of the survey was used by six of the eight respondents. Most of the responses consisted of constructive criticism and helpful suggestions. To ensure confidentiality, they are not reproduced verbatim here, but I will share my personal takeaways from them.
When using a new (to the participants) collaboration tool, offer a space and (short) time for familiarization, e.g. a canvas to let them try out the different markers.
I think this is an excellent suggestion. It was also my first time using Explain Everything and I spent some of my preparation time getting used to the idiosyncrasies of its text editor. We would have easily had enough time to let the students scribble around on the title slide for a few minutes to get the hang of it, and this is something I will definitely keep in mind for the next time an opportunity like this comes up.
The title of the session mentions academic writing in HCI, but the developed criteria (and some of the external sources) were independent of the scientific field. The content of the session did not necessarily live up to its name.
I can empathize with this reaction, especially since general academic writing advice can be found in plenty of places besides an HCI course. I did not know ahead of time how this session would unfold and what the results would be, and there was certainly some hope on my part that more HCI-specific quality criteria would emerge. Still, in the future I will take more care to properly manage the expectations I set through session titles and topic announcements.
Some participants – by choice or necessity – follow the course on a mobile device instead of a PC. For them, actively taking part in this session would have been difficult or impossible.
This is honestly a possibility that had not occurred to me ahead of time, which shows my personal bias as someone who avoids using his phone for productivity tasks. Nonetheless, I cannot think of anything about today’s session that I would have changed had I known earlier that some participants would be using mobile devices. I obviously do not take it personally if someone decides not to take part in the active work portions of a voluntary bonus session, but in fairness, it was announced that this would be an interactive session requiring active participation. The two lessons for me here are to put more thought into how users of mobile devices can be integrated into the workflow, and to communicate more clearly to participants what level of activity will be expected of them.
In addition to the points mentioned above, the one improvement that I would make for a future iteration of the same session plan would be to find a good way to evaluate the work results with the participants. In this session we ended with the complete criteria mindmap, but had no benchmark to judge it by or compare it against. Ideally I would like to give the participants an opportunity to find out how well they did. However, this is very difficult with a topic as layered and open-ended as this one, where having a “sample solution” in the back pocket would defeat the purpose.
All in all, I am satisfied with the bonus session. The plan translated well into practice and the results can be built upon in the future. I took a slight gamble by leaving so much of the session’s success in the participants’ hands, and I am content that it worked out.