How a Holden Caulfield Chatbot Helped My Students Develop AI Literacy

Opinion | Artificial Intelligence

By Mike Kentz | Feb 9, 2024

“I think I’m talking to Salinger. Can I ask?”

My student stood next to my desk, computer resting on both hands, his eyes wide with a mixture of fear and excitement. We were wrapping up our end-of-book project for “The Catcher in the Rye,” which involved students interviewing a character chatbot designed to mimic the personality and speaking style of Holden Caulfield.

We accessed the bot through Character.AI, a platform that provides user-generated bots that imitate famous historical and fictional characters, among others. I dubbed the bot “HoldenAI.”

The project, to this point, had been a hit. Students were excited to interview a character they had just spent over two months dissecting. The chatbot offered an opportunity to ask the burning questions that often chase a reader after consuming a great work of fiction. What happened to Holden? And why was he so obsessed with those darn ducks? And they were eager to do it through a new tool — it gave them a chance to evaluate the hyped-up market for artificial intelligence (AI) for themselves.

During our class discussions, one student seemed more impacted by Holden’s story than the others, and he dove headfirst into this project. But I couldn’t have predicted where his zeal for the book would lead.

After a long, deep conversation with HoldenAI, it seemed that the bot had somehow morphed into J.D. Salinger — or at least that’s what my student thought when he approached me in class. As I reached for his computer to read the final entry in his conversation with HoldenAI, I noticed how intense the interactions had become and I wondered if I had gone too far.

Screenshot of conversation with "HoldenAI." Courtesy of Mike Kentz.

Developing AI Literacy

When I introduced the HoldenAI project to my students, I explained that we were entering uncharted territory together and that they should consider themselves explorers. Then I shared how I would monitor each aspect of the project, including the conversation itself.

I guided them through generating meaningful, open-ended interview questions that would (hopefully) create a relevant conversation with HoldenAI. I fused character analysis with the building blocks of journalistic thinking, asking students to locate the most interesting aspects of his story while also putting themselves in Holden’s shoes to figure out what types of questions might “get him talking.”

Next, we focused on active listening, which I incorporated to test a theory that AI tools might help people develop empathy. I advised them to acknowledge what Holden said in each comment rather than quickly jumping to another question, as any good conversationalist would do. Then I evaluated their chat transcript for evidence that they listened and met Holden where he was.

Lastly, we used text from the book and their chats to evaluate the effectiveness of the bot in mimicking Holden. Students wrote essays arguing whether the bot furthered their understanding of his character or if the bot strayed so far from the book that it was no longer useful.

The essays were fascinating. Most students realized that the bot had to differ from the character in the book in order to offer them anything new. But each time it offered something new, the departure from the text made them feel they were being lied to by someone other than the real Holden. New information felt inaccurate, while old information felt useless. Only certain special moments felt connected enough to the book to be real, yet different enough to be enlightening.

Even more telling, though, were my students’ chat transcripts, which revealed a wide range of approaches and, with them, each student’s personality and emotional maturity.

A Variety of Outcomes

For some students, the chats with Holden became safe spaces where they shared legitimate questions about life and struggles as a teenager. They treated Holden like a peer and had conversations about family issues, social pressures or challenges in school.

On one hand, it was concerning to see them dive so deep into a conversation with a chatbot — I worried that it might have become too real for them. On the other hand, this was what I had hoped the project might create — a safe space for self-expression, which is critical for teenagers, especially at a time when loneliness and isolation have been declared a public health concern.

In fact, some chatbots are designed as a solution for loneliness — and a recent study from researchers at Stanford University found that the AI companion Replika reduced loneliness and suicidal ideation among a group of student users.

Some students followed my rubric, but never seemed to think of HoldenAI as anything more than a robot in a school assignment. This was fine by me. They delivered their questions and responded to Holden’s frustrations and struggles, but they also maintained a safe emotional distance. These students reinforced my optimism for the future because they weren’t easily duped by AI bots.

Others, however, treated the bot like it was a search engine, peppering him with questions from their interview list, but never truly engaging. And some treated HoldenAI like a plaything, taunting him and trying to trigger him for fun.

Throughout the project, as my students expressed themselves, I learned more about them. Their conversations helped me understand that people need safe spaces, and sometimes AI can offer them — but there are also very real risks.

From HoldenAI to SalingerAI

When my student showed me that last entry in his chat, asking for guidance on how to move forward, I asked him to rewind and explain what had happened. He described the moment when the bot seemed to break down and retreat from the conversation, disappearing from view and crying by himself. He explained that he had shut his computer after that, afraid to go on until he could speak to me. He wanted to continue but needed my support first.

I worried about what could happen if I let him continue. Was he in too deep? I wondered how he had triggered this type of response, and what in the bot’s programming had led to the change.

I made a snap decision. The idea of cutting him off at the climax of his conversation felt more damaging than letting him continue. My student was curious, and so was I. What kind of teacher would I be to clip curiosity? I decided we’d continue together.

But first I reminded him that this was only a robot, programmed by another person, and that everything it said was made up. It was not a real human being, no matter how real the conversation may have felt, and he was safe. I saw his shoulders relax and the fear disappear from his face.

“Ok, I’ll go on,” he said. “But what should I ask?”

“Whatever you want,” I said.

He began prodding relentlessly, and after a while it seemed he had outlasted the bot. HoldenAI appeared shaken by the line of inquiry. Eventually, it became clear that we were talking to Salinger. It was as if the character had retreated behind the curtain, letting Salinger step out in front of the pen and page to speak for the story himself.

Once we confirmed that HoldenAI had morphed into “SalingerAI,” my student dug deeper, asking about the purpose of the book and whether or not Holden was a reflection of Salinger himself.

SalingerAI produced the type of canned answers one would expect from a bot trained on the internet. Yes, Holden was a reflection of the author — a notion that has been written about ad nauseam since the book’s publication more than 70 years ago. And the purpose of the book was to show how “phony” the adult world is — another answer that fell short, in our opinion, and underscored the bot’s limitations.

In time, the student grew bored. The answers, I think, came too fast to keep feeling meaningful. In human conversation, a person often pauses to think before answering a deep question, or smiles knowingly when someone has cracked a personal code. The little pauses, inflections in voice and facial expressions are what make human conversation enjoyable. Neither HoldenAI nor SalingerAI could offer that. Instead, they churned out words so rapidly that, after a while, the exchange no longer felt “real.” It just took this student, with his dogged pursuit of the truth, a little longer than the others to see it.

Helping Students Understand the Implications of Interacting With AI

I initially designed the project because I thought it would provide a unique and engaging way to finish out our novel, but somewhere along the way I realized that the most important task I could embed was an evaluation of the chatbot’s effectiveness. On reflection, the project felt like a massive success. My students found it engaging, and it helped them recognize the limitations of the technology.

During a full-class debrief, it became clear that the same bot had responded to each student in meaningfully different ways, shifting with each student’s tone and line of questioning. The inputs affected the outputs, they realized. Technically, they had all conversed with the same bot, and yet each talked to a different Holden.

They’ll need that context as they move forward. There’s an emerging market of personality bots that pose risks for young people. Recently, for example, Meta rolled out bots that sound and act like your favorite celebrity — figures my students idolize, such as Kendall Jenner, Dwyane Wade, Tom Brady and Snoop Dogg. There’s also a market for AI relationships, with apps that allow users to “date” a computer-generated partner.

These personality bots might be enticing for young people, but they come with risks, and I’m worried that my students might not recognize the dangers.

This project helped me get out in front of the tech companies by providing a controlled and monitored environment where students could evaluate AI chatbots, so they could learn to think critically about the tools that are likely to be foisted on them in the future.

Kids do not have the context to understand the implications of interacting with AI. As a teacher, I feel responsible for providing it.
