Loyola University reports a sharp rise in AI-linked academic dishonesty, with 64% of honor code violations this year involving AI tools. A student survey found that 75% of respondents are aware of AI-related cheating on campus. As the university responds, experts say education on responsible AI use is essential.
Loyola University is facing a growing academic integrity problem driven by cheating and plagiarism cases linked to artificial intelligence (AI) tools. The Honor Council reports that 64% of reported honor code violations this academic year involve AI, up from 52% last year and just 27% two years ago, a trend that tracks students' increasing reliance on platforms like ChatGPT for coursework.
A recent survey by The Greyhound, with more than 120 participants, underscored the issue: 75% of students said they were aware of instances of AI-related cheating at Loyola. The trend is not confined to Loyola; students at universities and high schools nationwide are turning to AI for academic work. According to the Wall Street Journal, OpenAI's ChatGPT is used by roughly 400 million people weekly, with students making up a substantial share of that user base.
In response, Mark Lee, the Administrative Moderator for the Loyola Honor Council, is working to educate students about the consequences of misusing AI. He acknowledges its allure: instant answers can tempt students to lean on it for assignments. "It might make some students feel like the research process is easier, because maybe they'll just type in the question and get the exact answer… it might give them ideas that they didn't have before," Lee said.
Sophia Graney, a sophomore English major and Honor Council member, is skeptical of AI in academia. While she sees its potential as a tool, its integration into classrooms frustrates her. "I'm not the biggest fan of AI in the classroom, I must admit," Graney said. She believes that relying on AI undermines the genuine work of writing, especially in a discipline where crafting words from scratch is essential.
Graney encourages her peers to use the resources Loyola offers for assignment help rather than turning to AI. She also notes that students need clearer communication from professors about AI's permitted uses. "What I will say is that I think it should be clearer to students what the AI regulations are… Having professors assert the use of AI… could be helpful," she suggested, pointing to a common source of confusion.
Dr. Michael Puma, Dean of Undergraduate Studies, acknowledges the rise in AI-related academic dishonesty but cautions that a blanket policy could create more problems than it solves. Teaching students to use AI responsibly, he argues, may be more effective. "I think it's going to be contingent on faculty and others on campus to think about what they hope to achieve through their assignments and their learning outcomes," he reasoned.
Looking ahead, as Loyola hires a new Assistant Vice President for Faculty Development, Dr. Puma expects part of that role to involve guidance on AI in education. Gregory Hoplamazian, a professor who uses AI himself, believes classroom conversations need a shift in perspective. "If we can get students to think about the personal benefits of them learning a topic, that can do a fair amount to push people away from the appeals of plagiarism," he remarked, emphasizing intrinsic motivation over shortcuts.
AI is steadily embedding itself in students' daily lives, from Google and email to tools like Grammarly, and its presence in education appears to be only beginning. Lee remains optimistic that Loyola students will uphold the honor code despite the temptations AI introduces. "We are all adjusting to how to integrate AI in our lives," he concluded.
Loyola University thus faces a difficult fight against rising academic misconduct tied to AI. With honor code violations involving tools like ChatGPT climbing sharply, students and faculty alike are navigating blurred lines around acceptable AI use. As educators work through this multifaceted issue, promoting understanding and responsible use appears key to preserving academic integrity on campus, a challenging road, but one that offers room to grow alongside new technologies.
Original Source: thegreyhound.org