Mitigate AI problems with alternative assessments to the essay

In July of this year I wrote a teaching tip entitled “Adapting the Essay to Maintain Authenticity in the Era of AI” [1]. That tip advised building revision, reflection, and feedback cycles into essay assignments. A fair critique of that advice is that instructors and students do not have time to employ such a strategy for every instance of essay writing in a typical course. This teaching tip offers advice on finding assessments other than the essay that will measure your stated learning objectives, and on ways to embrace AI use alongside such assessments while maintaining academic rigor.

Text-generative AI continues to be a disruptive force in education. Our university is not the only institution struggling to find its footing in this rapidly shifting landscape. Across the globe, educators are actively considering what to do about AI tools, with responses ranging from open-armed adoption to outright prohibition.

Revisit Learning Objectives

According to Jenny Frederick, associate provost at Yale University, that institution never considered banning ChatGPT or any other AI tool. Frederick raises the importance of learning objectives with a back-to-basics question: “What do I want my students to learn in this course?” [2]. That inquiry naturally centers on the nature of learning objectives (LOs). Good course design has instructors devote significant time to crafting LOs with enough specificity that they are clear and measurable. A skill or competency written in such a manner can be measured with different methods. While the essay has a long tradition in Western education, it is not the only way to accurately measure different aspects of student competencies.

Permutational Multiple Choice Questions

Objective questions can reveal a student’s ability to define, recall, identify, and interpret information in a variety of ways. The most common objective question is the multiple-choice question (MCQ). The advantage of MCQs is that they can be rapidly graded, and now, with (yes) AI tools, rapidly generated as well. If you need to assess knowledge or competencies on the “lower end” of Bloom’s taxonomy, there is nothing wrong with using MCQs. But they do not assess analytical, evaluative, or creative skills, and they can be answered by guessing to a certain degree. These problems, according to the authors of an article in ACM SIGCSE Bulletin, can be overcome through the use of Permutational Multiple-Choice Questions (PMCQs) [3]. PMCQs ask students to match each of two or more coupled statements to the single best response drawn from a shared pool of similar options.

Farthing, Jones, and McPhee argue that PMCQs nearly eliminate the possibility that students will guess a correct answer. At the same time, producing the correct answer sequence demands that students “distinguish between similar concepts.”

Which of these most closely describes the identifying characteristic of:

Black-box software testing __d__
White-box software testing __a__

a. Tests are designed with full knowledge of what is in the source code
b. Tests are run unattended
c. A program is re-tested after a change
d. The application is checked to ensure the users’ needs are satisfied
e. The test results are checked for correctness
f. The entire application is retested after one part of it is changed

An example of a Permutational Multiple-Choice Question, from Figure 2 in Farthing, Jones, and McPhee’s article on PMCQs [3].

In answering a PMCQ, the student constructs a solution only after considering the proper sequence and the fine differences between possible answers. Although the authors admit that these questions are more difficult to develop than simple MCQs, they accurately measure higher-order thinking skills, and they sharply reduce the odds of success by guessing, as the sketch below illustrates.
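To make the guessing claim concrete, here is a minimal Python sketch (my own illustration, not from the Farthing, Jones, and McPhee article) that computes the probability of answering a PMCQ correctly by random guessing, under the assumption that each statement must be matched to a distinct response:

```python
from math import perm

def pmcq_guess_probability(num_options: int, num_stems: int) -> float:
    """Chance of randomly matching every stem of a PMCQ to its correct
    response, assuming each stem takes a distinct option (an assumption;
    some PMCQ designs may allow a response to be reused)."""
    return 1 / perm(num_options, num_stems)

# The example above: six options (a-f) and two coupled statements.
print(pmcq_guess_probability(6, 2))  # 1/30, about 3.3%

# A conventional single-answer MCQ with the same six options.
print(pmcq_guess_probability(6, 1))  # 1/6, about 16.7%
```

With more statements drawing on the same pool, the odds fall quickly: under the same assumption, a question with six options and four statements can be guessed with probability 1/360.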

Concept Maps

Another form of assessment (and of learning) is the concept map. A concept map has a student visually present their understanding of the structure of something and the relationships among its component parts. The concept map was developed by Joseph Novak in 1972, and he has since advocated for its use at all levels of education.

At first glance at a concept map, the viewer is invited to explore the depth of the hierarchical structure and traverse the relationships between the building blocks. The complexity of a map can reveal the degree to which a student understands the concepts, and it also provides teachers with opportunities for corrective feedback and assessment. When concept maps were invented in the 1970s, graphics tools were not readily available; now there are a variety of web and learning-management-system tools that facilitate making them.
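One way to see the structure a concept map captures is to represent it in code. The following Python sketch (my own illustration, not a tool from Novak and Cañas’s report) stores a map as labeled concept-link-concept triples, the “propositions” that concept-map scoring typically counts; the example triples loosely paraphrase the map in [4]:

```python
# A concept map stored as (concept, linking phrase, concept) triples.
ConceptMap = list[tuple[str, str, str]]

cmap: ConceptMap = [
    ("Concept maps", "represent", "organized knowledge"),
    ("Organized knowledge", "is built from", "concepts"),
    ("Organized knowledge", "is built from", "propositions"),
    ("Propositions", "are formed by", "linking words that connect concepts"),
]

def propositions(cm: ConceptMap) -> list[str]:
    """Render each triple as the readable proposition a grader would check."""
    return [f"{a} -- {link} --> {b}" for a, link, b in cm]

for p in propositions(cmap):
    print(p)
```

One common approach to scoring, going back to Novak’s own rubric, counts valid propositions along with levels of hierarchy and cross-links, which is why the triple structure is a convenient starting point for assessment.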

For instructors to use concept maps as adequate assessment tools, students have to be trained in creating and interpreting them. This tool training would take time from a semester’s schedule, but it may be worth considering, especially if the approach were adopted by a degree program, taught first in the introductory classes and used until a student graduates.

A concept map of concept maps. The boxed words and phrases are concepts, and the labeled connector lines indicate how one concept is related to another. From “The Theory Underlying Concept Maps and How to Construct and Use Them” by J. D. Novak and A. J. Cañas [4].

Academic Rigor

Finally, there is the option to embrace text-generative AI for essay and essay-like assessments. The advice here is not to pretend that tools such as ChatGPT do not exist, but to communicate to students what is acceptable and to provide enough structure that students employ creativity and higher-order thinking skills in an academically rigorous manner. At the very least, instructors should indicate:

  • which assignments, if any, can be completed with the aid of AI tools
  • how students should cite the use of AI tools (APA and MLA have already published guidelines for this)
  • the risks of violating the academic honesty provisions in the Student Code of Conduct

I know of one biostatistics professor at Georgia State University who further requires students to provide a reflection piece on the tool’s use: what the student learned, any errors or hallucinations identified in the AI output, what prompts were employed, and the reasoning behind any revisions to those prompts. The professor directs students to attach all of this information as an appendix to their work.

References

[1] LaSota, D. (2023, July). Adapting the Essay to Maintain Authenticity in the Era of AI [Teaching tip].

[2] Ryan-Mosley, T. (2023, September 4). How one elite university is approaching ChatGPT this school year. MIT Technology Review. Retrieved September 17, 2023, from https://www.technologyreview.com/2023/09/04/1078932/elite-university-chatgpt-this-school-year/

[3] Farthing, D. W., Jones, D. M., & McPhee, D. (1998). Permutational multiple-choice questions: an objective and efficient alternative to essay-type examination questions. ACM SIGCSE Bulletin, 30(3), 81-85.

[4] Novak, J. D., & Cañas, A. J. (2008). The theory underlying concept maps and how to construct and use them (Technical Report IHMC CmapTools 2006-01 Rev 01-2008). Florida Institute for Human and Machine Cognition. http://cmap.ihmc.us/docs/pdf/TheoryUnderlyingConceptMaps.pdf

Dan LaSota

Instructional Designer
Certified QM Peer Reviewer
Certified QM Training Facilitator

dlasota@alaska.edu
