
AI Caramba! Dancing with the New Reality in Legal Education: On Quality Loss, Lack of Awareness, and Panic

Jessie Levano & Arno R. Lodder
ALTI – Amsterdam Law & Technology Institute – Vrije Universiteit

The text below appeared in Dutch on AI Forum, an online platform and journal started in 2025 by publisher deLex “to make high-quality legal information on AI and law accessible.”

Introduction

In February 2019, due to concerns about large-scale fake news and other forms of misuse, OpenAI did not fully release GPT-2:

“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2.” (Cohen 2019)

Almost four years later, ChatGPT was introduced to the general public, and by then the system had improved so much that even seasoned AI experts were surprised. Over the past year and a half, the use of language models has not gone unnoticed in legal education.

The narrative that students are breezing through their academic careers by letting language models do the work for them – together with fears of an erosion in both critical thinking skills and overall academic integrity and quality – has dominated many front pages: from clear-cut statements (We are giving diplomas to chatbots, Parool, June 2025), to sharp accusations (Everyone is cheating their way through college, New York Magazine, May 2025), to polite requests (Please do not teach students to write essays with the ‘help’ of ChatGPT, De Standaard, June 2025).

In academia, too, the question of the dangers (and certainly also the opportunities) of AI in education has been widely discussed.

The aim of this contribution is neither to reinforce this fear (or possibility) nor to pass judgment on it. Instead, we reflect on our personal experiences with the (alleged) use of language models in education over the past academic year – which, it must be said, are not particularly encouraging. At the same time, there is room for more positive notes, and we end with a critical look ahead. After a brief description of the “first dance steps” in the use of AI in (legal) education, we identify three problems based on our experience: the quality problem, the related awareness problem, and finally, the panic problem.

First dance steps

One, two…

In the spring of 2023, the first experiments with language models took place within our department. As a kind of sparring partner, it worked quite well. You could ask a question about, for example, a provision of the GDPR and then follow up, with the answers becoming increasingly specific and generally accurate. The language model often answered old exam questions surprisingly well, even when they included half a page of case description.

This observation called into question the use of “take-home” exams, which had become common during the COVID-19 pandemic. What followed was a long discussion about the role of AI in education. Opinions varied (and still do): from an outright ban on using language models (Ban AI at the law faculty, NRC, July 2025) to teaching students how to craft good prompts and thereby integrating language models into education.

This also raised the more existential question: What is the purpose of a law degree? What are we training lawyers for? Should lawyers primarily understand the law and be able to analyse and reason legally, or must they also be equipped to navigate the practice of the future? The subtitle of the 1999 Dutch handbook Information Technology for Lawyers – The handbook for the lawyer in the 21st century – pointed to the latter. Presumably, realism led to the omission of this subtitle from the second edition in 2002.

At the beginning of the 1990s, when personal computers emerged, university law programs often offered practical courses such as “computer use for lawyers.” About a decade ago, the rise of the internet and of technology in general, together with a sudden surge in simple “legal tech,” prompted the question: should we teach law students how to code? The advantage would be that they would better understand what technology can and cannot do. However, it is difficult to find a form of education that serves that purpose without reducing it to simply teaching programming skills.

Whereas programming classes also aim to deepen students’ understanding of the law surrounding technology, teaching students to prompt language models is more like the computer-use courses for lawyers in the 1990s: useful in itself, but not necessarily something that directly belongs in a law curriculum.

Step! Examinations with a new look

One thing is certain: education is changing, and that demands creativity and flexibility from teachers. An example of how we are dealing with this is the “return” to on-site examinations. This does mean that students lose practice in the writing and research skills that take-home essays typically train – skills that also prepare them for the bachelor’s and master’s theses that some have already pronounced dead.

Experiments are also being conducted with new, mixed forms of examination. For the Media & Communication Law course, for example, the examination over the past two years consisted of two components: writing an essay, and writing a critical note on an essay produced by a language model on the same subject. The idea is that this trains students to detect both the weaknesses and the potential strengths (such as clear structure and sentence construction) of language models, while the AI-generated essay can also serve as comparative material against the student’s authentic work. Although students generally did well in criticising the AI essay, that sharpness stood in stark contrast to the leniency with which they viewed their own work – as is often the case when students, or people in general, have to criticise something they produced themselves.

For other subjects, we work with more creative assignments, such as keeping a portfolio that requires a weekly critical reflection on the material discussed during lectures and seminars. Although it is obviously not possible to rule out the use of language models here, for instance for (re)structuring sentences and notes, this assignment does require active participation in the course. Reading the portfolios was often a relief, especially in comparison with other types of assignments.

Dancing fool: hallucinations, repetition and unusual sentences

It is appropriate to briefly focus on the “why” behind the aforementioned measures. We distinguish – in short – between two related problems, namely the quality problem and the awareness problem.

Quality problem

We use our E-Commerce Law course as an illustration. Students were assigned to write a short essay on a current news item relevant to the course, viewed in the light of the material covered during the lectures. At least a third of the 150 students produced texts that were unmistakably the work of language models, often containing entertaining hallucinations. For example, a chapter by Lodder was attributed to our Danish colleague Savin, and vice versa; Paola Cardozo wrote a (non-existent) article in the second (!) issue of the (non-existent) E-Commerce Law Journal 2025, before the first month of the year was even over; Eline Leijten co-authored the Draghi report. Although such careless mistakes make it easy to call students to account for their (unauthorised) use of language models, this is more difficult when the suspicion rests on extremely generic texts and sentences that are hard to follow. Before the final assignment, we made the following announcement to students:

“Unfortunately, it has come to our attention that many students have used a language model (e.g. ChatGPT) to create their paper; this is usually easy to recognise in your texts. Remember that although the use of language models is not prohibited by the VU, your work should be the result of your own thinking and writing.”

It must be said that the final assignments were significantly better, but enough students still could not resist the temptation of the language model. A “Professor Muller”, unknown to us, was cited as one of the lecturers of the course, and one student even claimed, as part of the assignment, to have visited a company with a group of fellow students to ask questions about certain EU legislative initiatives. At the examination board’s fraud hearing, when asked which company they had visited, he stated that he had asked “a friend who worked for a Chinese company”. This student received a nice round 0.0 from the examination board.

Awareness problem

In addition to the often poor quality of academic texts generated by language models, the above examples highlight an even more pressing issue. Students often lack a healthy degree of critical reflection – sometimes to such an extent that a “statement about their own work” is included in a text full of hallucinations, incorrect references and difficult-to-follow sentences. This also raises critical questions about how students view the university system itself. This awareness problem – concerning both the quality of the work submitted and the deeper issues of a growing dependence on language models for producing written texts – is perhaps even more problematic than the poor quality itself. If students do not recognise that texts generated by language models are often flawed, incoherent and sometimes factually wrong, we are in serious trouble.

Panic! At the university

Although concerns about the laxity with which some students shamelessly use language models and pass the work off as their own are often justified, we would like to add some nuance to the prevailing narrative of panic. The idea that students in general are “lazy” is stigmatising and (often) simply untrue. C-grade students have existed since the dawn of time, just like excellent, hard-working ones. To think that the former group – the barely passing students – will now, with the help of language models, grow into the world leaders of the future is about as arbitrary as a footnote generated by ChatGPT. Nor is it self-evident that students with a genuine interest in their subject who love to dive into their books – because yes, they still exist – will now drop out en masse and abandon their intellectual curiosity. When Google emerged, there were alarmist voices too: they no longer need to learn anything, they can simply look it up. Ultimately, those superior search engines did not prove to be the death of education.

Is there a problem? Of course. Is this problem really as insurmountable as it is sometimes presented? Doubtful.

As discussed earlier, there are ways to teach students without ever having to use a language model (if we so choose). However, this requires effort, creativity, and flexibility. Those few writing assignments? Perhaps some students will (continue to) use language models. However, that does not mean that the heart surgeons, hydraulic engineers, or bright lawyers of the future will no longer be able to formulate a normal sentence or will completely lack critical thinking skills. What is essential in all this, of course, is that students become more aware of what we characterise as a worrying awareness problem.

The fact that students submit empty, repetitive texts, often with strange or missing source references, is not only frustrating for the teacher but also contributes little to the student’s development – at most in a negative sense, by dulling their minds.

Instead of worrying about whether or not a student has had a paragraph corrected by a language model, we should perhaps focus on what lies beneath that panic: perhaps a fear among universities of being exposed as “learning factories” rather than institutions that inspire creative thinking and dynamic discussion.

After all, there is much more to becoming a “valuable” professional. Being “book smart” is only one aspect; values such as intrinsic motivation and social skills cover a considerably larger part, and it is precisely there that the core of the learning process lies, something that AI is still a long way from being able to match.

Let us focus on how to move forward, and as universities, let us ask ourselves the critical question: what are we here for? The ballet has only just begun; we have plenty of time to reposition ourselves before the coda begins.
