Education

CPS educators weigh use of artificial intelligence in classrooms

When ChatGPT launched in November 2022, Jordan Smith, who teaches language arts at Battle High School, worried.

As an AI chatbot, ChatGPT is able to mimic human speech through a technology known as a large language model. In seconds, ChatGPT can write an essay in Shakespearean dialogue or create lyrics to the melody of a Taylor Swift song.

The platform’s overnight success gave teachers little time to prepare for its effects on curriculum and learning.

“I was having a little bit of an existential moment whenever I found out that this was a thing that was going to be really hard to detect,” Smith said. “It was a tool that students were going to be able to use in a multifaceted way that might impact the traditional learning process that we’ve been going for: what it means to become a writer, what it means to write and why we write.”

The surge of generative AI has brought both opportunity and concern for teachers. It has challenged them to consider whether to, or how best to, incorporate the new technology into the classroom while also ensuring students’ work reflects their own learning. Interpretations of how to find the balance between the two can shift and evolve.

As soon as ChatGPT and other AI platforms became available, students in Smith’s class began using AI to complete school work without his knowledge. Now, Smith said, teachers are talking more frequently and intentionally about AI, what it means to use it and how to use it responsibly.

“I’m not as doomsday about it as I was,” Smith said. “But I still think that there are a lot of potentially problematic issues that are going to arise with generative AI — especially how quickly it’s changing.”

National and local policies

When ChatGPT launched, some of the nation’s largest school districts, such as the Los Angeles Unified School District and New York City Public Schools, initially banned the platform. Some of these districts have since revised those bans.

Districts have since taken a variety of approaches in addressing classroom use of generative AI. After initially blocking ChatGPT, Los Angeles Unified, the second-largest school district in the country, announced in August it would implement its own AI chatbot. In other districts, such as Washington’s Walla Walla School District, educators attended workshops to learn about AI chatbots.

There has also been some conversation about establishing a national AI policy for education. In a 65-page report, the U.S. Office of Educational Technology outlined recommendations on crafting related policies. The report recommends that although AI can be used as a tool to assist teaching and learning, teachers should be highly involved in its implementation and evaluation in classrooms.

“We envision a technology-enhanced future more like an electric bike and less like robot vacuums,” the document states. “On an electric bike, the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by a complementary technological enhancement. Robot vacuums do their job, freeing the human from involvement or oversight.”

This year, Columbia Public Schools added a proactive statement on academic integrity to its student handbooks.

“We noticed in our student handbook that we only had information about academic integrity when it was after the fact and you were trying to punish someone for a lack of academic integrity,” district library media coordinator Kerry Townsend said.

The handbook states it is a “violation of policy” when a student takes credit for work that is not their own without the proper attribution or authorization, including through the use of “technologically generated writing.”

Although the statement doesn’t include the words “artificial intelligence,” AI-generated work falls under this umbrella.

“We would not want to write a policy or procedure that includes something so trendy,” Townsend said. “It needs to be encompassing so you don’t have to keep rewriting your policy over and over.”

Chatbots like ChatGPT and Google’s Bard are blocked on students’ school accounts, but students can still access them through their personal accounts and devices. Jayme Pingrey, a media specialist at Battle High, compares blocking individual platforms to playing whack-a-mole.

“If you block one thing, another thing is gonna come up. It’s kind of a losing battle,” Pingrey said. She said she wanted people to take a more proactive approach by educating students about AI and its moral and educational implications.

Earlier this year, media specialists at Battle hosted two presentations for teachers that touched on how to speak to students about AI and how teachers can safely use the technology. All of the district’s English teachers were required to attend the August presentation. In October, the presentation was open to any instructors interested in learning more about AI.

Academic dishonesty and AI

A common concern among teachers is that students will use AI to turn in assignments that are not their own work.

ChatGPT was released in the middle of the school year, so teachers were already attuned to each student’s previous work. If an essay significantly improved overnight, they could easily pick out the discrepancy.

This year, students started their school year having access to platforms like ChatGPT, meaning it might be more difficult for teachers to spot AI-generated work. In response, many teachers have turned to other methods to discourage academic dishonesty.

Sometimes, the structure of an assignment can naturally encourage students to generate their own work. For example, students might need to complete a rough draft with a pen and paper, especially when practicing for timed writings on Advanced Placement exams.

“We find that when students have started it themselves, they’re more likely to finish it themselves,” said Brian Corrigan, a social studies teacher at Battle High.

Corrigan said significant changes between a first draft and a second draft can raise suspicion.

“Typically, the first clue that I get is, ‘Oh, this is completely different than what you started with. What you started with wasn’t perfect but was doable and clearly something I’m used to hearing you read. But this seems to be in a totally different voice, on a totally different topic.’ That’s usually my cue that maybe something didn’t go right,” Corrigan said.

Teachers have access to tools that aim to detect AI-generated work. Turnitin.com, a commonly used plagiarism detector, released its own AI detection platform in April. The district provides teachers with a subscription to the platform.

However, these platforms can raise concerns of their own. Turnitin reported that 1% of human-written submissions were incorrectly flagged as AI-generated.

“While 1% is small, behind each false positive instance is a real student who may have put real effort into their original work,” Chief Product Officer Annie Chechitelli said in a news release for Turnitin. “We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances.”

Teachers have differing views on how, and whether, AI should be incorporated into classwork. At times, the line between using AI as a tool and academic dishonesty can be subject to differing interpretations.

“I think we also should respond as teachers and understand how to introduce AI responsibly — not that we should just let kids use it and stop writing, but we should introduce the fact that this is a tool, and if it’s going to be used, there are responsible ways to use it,” Smith said. “Plagiarism is still plagiarism. Not attributing your work to others is still academic dishonesty.”

Concerns and hopes

Whether it be graphing calculators or cellphones, as technology advances, it inevitably creates questions about how education will be shaped alongside it. AI platforms can now be added to that list.

“I think (AI) definitely will and should change the way we teach, but I don’t quite know what that will look like yet,” Corrigan said.

Many national recommendations for using AI in education have focused on it as a potential tool for teaching and learning.

The presentations hosted this fall by Battle High media specialists referenced some resources that teachers could use. One was Google Applied Digital Skills, a free module that teaches students about AI technology and its daily uses. Another was Canva’s Magic Write, an AI tool that will soon be available for teachers that can help brainstorm lesson ideas and create question prompts.

Some teachers have chosen to embrace AI and incorporate it into their curriculum in their own ways.

“One of the ideas that I had was to have AI write a sample response to one of our essay prompts and then show students the AI-generated response and evaluate it,” Corrigan said. “Like, ‘If this was a student writing, what is it missing? What parts of the rubric did it do? What parts of the rubric would need to change for it to actually meet our standards in class?’”

Other teachers are including their students in discussions about AI. In past years, Sawyer Wade, a digital media teacher at Battle High, led discussions in his class about the ethics of using Photoshop to enhance or manipulate photos. This year, Wade has added responsible use of AI to the conversation.

Wade said that the nature of his digital media class means his material will have to lean into AI, but he remains optimistic. He plans to use ChatGPT as a tool to teach students how to code websites. The catch: To actually use the code ChatGPT gives them, students must understand the coding language.

“Some teachers want to go against AI; some teachers want to lean into it,” Wade said. “In my opinion, I think we have no choice but to lean into it. We’re kind of too far along to not.”

Finding a way to shape AI into a tool that contributes to learning, as opposed to detracting from it, can be difficult. As teachers make sense of these emerging technologies, some concerns have arisen over what AI could mean for the future of education.

Smith worries that AI might affect the development of certain skills that students will need after graduation.

“The messy first step of making your own writing is, in my 12 years of teaching, super vital. And if we’re not doing that anymore, or doing it a lot less, then I am afraid of what the results might be,” Smith said. “We need students to learn to be critical thinkers, and writing is one way to get there.”

Despite these concerns, Smith compares AI to the introduction of the calculator into math classrooms. He said that, at the time, teachers worried students wouldn’t know how to do math anymore because a calculator could do it for them.

“There was a lot of growing pains with that, but now students are doing math, they’re just doing it differently,” Smith said. “So, who knows? Maybe AI would become that in some way in the future.”

ChatGPT created a wave of discussions around AI, but new platforms continue to emerge. The conversation about generative AI in the classroom will not stop, as this software is continuously evolving.

“It’s ever-changing,” Smith said. “And so coming up with policies or providing guidelines for students is still a bit hard.”

Source: https://www.columbiamissourian.com/news/k12_education/cps-educators-weigh-use-of-artificial-intelligence-in-classrooms/article_e2f1fec6-6eac-11ee-b361-13a949354d27.html