UniSA pilots AcaWriter automated writing feedback

The Teaching Innovation Unit (TIU) at the University of South Australia undertook a pilot of AcaWriter over the 2018 academic year. This preliminary evaluation highlighted the tool’s potential to provide early feedback on students’ writing. The pilot also revealed technical issues that need to be addressed to support broader dissemination and uptake. The outcomes of this small study serve as a useful basis for establishing a technical roadmap for future development.

The following limitations were identified in an early public demonstration version of AcaWriter:

  • The feedback provided to students was, at this stage of development, limited. The tool simply highlighted features of the submitted text that matched predetermined descriptors of reflective or academic textual genres. In some cases, the feedback was confusing for end-users. For instance, every occurrence of the modal ‘would’ in a trial text was highlighted as an ‘Expression indicating self-critique’, even when its function was to indicate a habitual action in the past (see the sketch after this list). These feedback inconsistencies can be readily addressed in future iterations of the tool.
  • The tool did not provide evaluative feedback on whether the text as a whole was consistent with the requirements of the selected genre, nor suggestions on how the text could be improved.
  • Finally, the trial group noted that the user interface could be improved. For example, once a text was uploaded, users expected that clicking a ‘feedback’ button would generate the tool’s response. Instead, they had to click the ‘Get Feedback’ button, which confusingly displays a ‘download’ icon, and then access the feedback by clicking the ‘Reflective Report’ tab. Again, future modifications to the tool will readily address these minor concerns.
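
The ‘would’ misclassification illustrates a general limitation of purely lexical feedback rules: a pattern match alone cannot distinguish the self-critical use of a modal from its habitual-past use. The following minimal Python sketch is our own illustration of the problem; the patterns are hypothetical and are not AcaWriter’s actual rules. It contrasts a naive match, which flags every ‘would’, with a narrower, context-aware pattern:

```python
import re

# Naive rule (illustrative): flags every occurrence of the modal 'would'
# as a potential expression of self-critique.
NAIVE_PATTERN = re.compile(r"\bwould\b", re.IGNORECASE)

# Hypothetical context-aware rule: only flags 'would' in a first-person
# construction typical of self-critique, e.g. "I would do X differently",
# and skips habitual-past uses such as "we would meet every Friday".
SELF_CRITIQUE = re.compile(
    r"\bI\s+would\s+(?:\w+\s+){0,4}differently\b", re.IGNORECASE
)

sentences = [
    "Next time I would structure the interview differently.",  # self-critique
    "Every Friday we would meet to review the data.",          # habitual past
]

for s in sentences:
    naive = bool(NAIVE_PATTERN.search(s))
    aware = bool(SELF_CRITIQUE.search(s))
    print(f"{s!r}: naive={naive}, context-aware={aware}")
```

Run against the two sentences above, the naive rule fires on both while the context-aware rule fires only on the genuinely self-critical one, which is the kind of refinement future iterations could apply.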

We are aware that, since the initial evaluation at UniSA, further technical work has been carried out by other users and developers. These changes should improve AcaWriter’s usability and effectiveness. For example, experimentation is underway to trial machine learning approaches, rather than educator-generated rubrics, as sources of criteria for text evaluation.
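
To give a sense of one plausible shape such experimentation could take, the sketch below (our own illustration, assuming scikit-learn; the training sentences and labels are invented) learns a criterion for self-critique from annotated examples instead of encoding it as a hand-written rubric rule:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = expresses self-critique (a reflective
# move), 0 = does not. A real trial would use a corpus annotated by
# educators rather than these toy examples.
sentences = [
    "I would approach the interview differently next time.",
    "On reflection, my questions were too leading.",
    "We would meet every Friday to collect samples.",
    "The survey was distributed to 40 participants.",
]
labels = [1, 1, 0, 0]

# The criteria are learned from the data rather than authored as rules.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(sentences, labels)

print(model.predict(["Next time I would plan the session differently."]))
```

A learned model of this kind could, in principle, generalise beyond the exact wordings a rubric author anticipated, at the cost of requiring labelled training data.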

Since our pilot, there has been further work at UTS co-designing the tool-generated feedback to make it more informative and more useful in supporting the development of draft texts.

Finally, the project has been improving the back-end of AcaWriter, with an API to be released in the near future. All of these modifications should encourage users to engage further with the tool and support its continued development.
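
To indicate what programmatic access might enable, the sketch below shows a hypothetical call to such an API. Because the API had not been released at the time of writing, the endpoint URL, parameters, and response shape are all assumptions, not the actual interface:

```python
import requests

# Placeholder endpoint: the real API URL and schema were unpublished
# at the time of writing, so every name here is an assumption.
API_URL = "https://acawriter.example.edu/api/v1/feedback"

response = requests.post(
    API_URL,
    json={
        "text": "On reflection, I would design the survey differently...",
        "genre": "reflective",  # assumed parameter selecting the feedback rubric
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # assumed: structured feedback on the submitted text
```

An interface along these lines would let institutions embed AcaWriter feedback inside their own learning management systems rather than directing students to a separate web tool.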

A benefit of this ATN collaboration has been increased awareness of priority areas, of the problems (both technical and pedagogical) that other institutions are tackling, and of how we can share resources to collectively move the sector forward in applying machine learning.