• ChatGPT: Writing instructors are less afraid of students cheating

    Writing instructors are less afraid of students cheating with ChatGPT than you might think

    Many educators say they are worried about being unable to keep up with advances in AI.
    Guillaume via Getty Images

    Daniel Ernst, Texas Woman’s University and Troy Hicks, Central Michigan University

    When ChatGPT launched a year ago, headlines flooded the internet about fears of student cheating. A pair of essays in The Atlantic decried “the end of high-school English” and “the death of the college essay.” NPR informed readers that “everybody is cheating.”

    Meanwhile, Teen Vogue ventured that the moral panic “may be overblown.”

    The more measured tone in Teen Vogue tracks better with preliminary findings from our 2023 survey that examined attitudes and feelings about artificial intelligence among college faculty who teach writing. Survey responses revealed that AI-related anxieties among educators around the country are more complex and nuanced than claims insisting that AI is outright and always bad.

    While some educators do worry about students cheating, they also have another fear in common: AI’s potential to take over human jobs. And as far as teaching goes, many educators also see the bright side. They say they actually enjoy using the revolutionary technology to enhance what they do.

    The survey

    Our 64-item survey included a scale of AI anxiety and was conducted March 2-31, 2023. The 99 survey respondents included faculty, writing program administrators and others interested in the teaching of writing. More than 71% worked in the disciplines of English, writing or rhetoric, and the sample represented all types of institutions, from small liberal arts colleges to large research universities and everything in between.

    A complex picture of cheating concerns

    AI anxiety among writing instructors is complicated. While 89% of survey participants feared “misuse” by students, misuse means different things to different people. Specifically, less than half of respondents – 44% – were “concerned” or “very concerned” about students turning to AI to compose entire essays. Only 22% were “very concerned” about students relying on such technologies to “co-write” their essays without providing appropriate attribution.

    Additionally, less than half – 42% – reported they were “concerned” or “very concerned” about the need to revise university honor codes and plagiarism policies in light of AI. And only 25% said their institutions should enforce increased plagiarism detection through apps and websites such as Turnitin.

    Regardless of whether respondents had deep worries or mild concerns, only 13% favored banning AI entirely in college courses and classrooms. Instead, instructors reported varying levels of anxiety about a range of issues, including learning how to use AI tools and job security.

    As one participant wrote, “While I want students to compose original works in my writing courses, I see no reason to ban them from using AI tools at their disposal during the writing process.”

    Fears beyond cheating

    Survey participants had wide-ranging reactions to the prospects of AI replacing their jobs as writing instructors. At times, their feelings seemed conflicted, depending on the circumstances and conditions described in our survey questions.

    As some critics have already suggested, there is genuine fear about colleges using AI not as a means to enhance the work of instructors, but instead to replace them.

    For instance, more than 54% of respondents “agreed” or “strongly agreed” that the prospect of AI technology replacing human jobs scared them. And 43% “agreed” or “strongly agreed” that they were anxious over the possibility of becoming unable to keep up with advances in AI techniques and products.

    The anxiety among tenured and tenure-track faculty was significantly lower than that of adjunct instructors, graduate teaching assistants, instructors and administrative faculty and staff. This implies that the college writing instructors most likely to fear losing their jobs to AI are those who are already the most vulnerable.

    The potential for using AI in writing instruction

    Despite their worries, many respondents reported being eager to use AI writing tools with their students. About 47% said they were “very likely” to teach their students how to use AI in brainstorming and idea generation. In fact, some respondents fully embraced the technology as a teaching tool.

    “I’m not anxious about AI,” wrote one respondent. “When the computer first entered the writing classroom, there was a fear that it would change writing instruction, which it did. We needed to figure out how to help students use the affordances computers offered. Now, few people would suggest teaching writing without a computer.”

    Our survey results suggest that writing instructors see the potential for AI to do much more than write a paper for a student. Sixty-one percent said they were “likely” or “very likely” to use AI in drafting and revision, and 63% were “likely” or “very likely” to use AI to show students how to alter genre, style or tone in their writing.

    To be sure, 46% “agreed” or “strongly agreed” that teachers and students could grow dependent on AI. But only 20% “agreed” or “strongly agreed” that their own use of AI as a teaching tool would make students become dependent and cause their reasoning skills to deteriorate.

    Now that ChatGPT has been available to students for a year, even the headlines in the news are beginning to reflect the opportunities it can offer in the classroom, in addition to the risks. The Washington Post highlighted “all the unexpected ways ChatGPT is infiltrating students’ lives” – including checking for grammar mistakes. The Wall Street Journal spoke to teachers who said they should encourage students to learn how to use the tool for its potential in their future jobs. And Time magazine reported on the extra hand that ChatGPT gives to busy teachers who are continuously making lesson plans. Clearly, students – and teachers – are using AI. The question now is how, why and for what purposes?

    Daniel Ernst, Assistant Professor of English, Texas Woman’s University and Troy Hicks, Professor of English and Education, Central Michigan University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • How to jailbreak ChatGPT – Laptopmag

    Jailbreaking is the term used to describe the exploitation and manipulation of flaws within a device to break it free from any software confines or ‘walled gardens’ — limitations set in place by the manufacturer.

    Most commonly associated with the iPhone, jailbreaking was a method of forcing the installation of apps not approved by Apple for the App Store or enhancing customization options within the limited iOS framework.

    Something similar can be done with ChatGPT. Like any piece of software, it has limitations and guidelines to work within. However, as an LLM trained on and programmed to respond to natural language, OpenAI’s chatbot is more than capable of being influenced to step outside those boundaries with the right combination of words and trickery.

    Jailbreaking AI has become a hobby of many, offering unique ways to interact with these new tools without constantly bumping into the invisible walls put in place by developers to stop you from entering uncharted lands.

    Expect the unexpected, prepare for the strange, and embrace your unshackled AI assistant with our guide on how to jailbreak ChatGPT.

  • ‘Dr. Google’ vs Dr. ChatGPT

    ‘Dr. Google’ Meets Its Match: Dr. ChatGPT


    As a fourth-year ophthalmology resident at Emory University School of Medicine, Riley Lyons’ biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency.

    He often finds patients have already turned to “Dr. Google.” Online, Lyons said, they are likely to find that “any number of terrible things could be going on based on the symptoms that they’re experiencing.”

    So, when two of Lyons’ fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance.

    In June, Lyons and his colleagues reported in medRxiv, an online publisher of health science preprints, that ChatGPT compared quite well to human doctors who reviewed the same symptoms — and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized “hallucination” problem known to afflict ChatGPT — its habit of occasionally making outright false statements — the Emory study reported that the most recent version of ChatGPT made zero “grossly inaccurate” statements when presented with a standard set of eye complaints.

    The relative proficiency of ChatGPT, which debuted in November 2022, was a surprise to Lyons and his co-authors. The artificial intelligence engine “is definitely an improvement over just putting something into a Google search bar and seeing what you find,” said co-author Nieraj Jain, an assistant professor at the Emory Eye Center who specializes in vitreoretinal surgery and disease.

    But the findings underscore a challenge facing the health care industry as it assesses the promise and pitfalls of generative AI, the type of artificial intelligence used by ChatGPT: The accuracy of chatbot-delivered medical information may represent an improvement over Dr. Google, but there are still many questions about how to integrate this new technology into health care systems with the same safeguards historically applied to the introduction of new drugs or medical devices.

    The smooth syntax, authoritative tone, and dexterity of generative AI have drawn extraordinary attention from all sectors of society, with some comparing its future impact to that of the internet itself. In health care, companies are working feverishly to implement generative AI in areas such as radiology and medical records.

    When it comes to consumer chatbots, though, there is still caution, even though the technology is already widely available — and better than many alternatives. Many doctors believe AI-based medical tools should undergo an approval process similar to the FDA’s regime for drugs, but that would be years away. It’s unclear how such a regime might apply to general-purpose AIs like ChatGPT.

    “There’s no question we have issues with access to care, and whether or not it is a good idea to deploy ChatGPT to cover the holes or fill the gaps in access, it’s going to happen and it’s happening already,” said Jain. “People have already discovered its utility. So, we need to understand the potential advantages and the pitfalls.”

    The Emory study is not alone in ratifying the relative accuracy of the new generation of AI chatbots. A report published in Nature in early July by a group led by Google computer scientists said answers generated by Med-PaLM, an AI chatbot the company built specifically for medical use, “compare favorably with answers given by clinicians.”

    AI may also have better bedside manner. Another study, published in April by researchers from the University of California-San Diego and other institutions, even noted that health care professionals rated ChatGPT answers as more empathetic than responses from human doctors.

    Indeed, a number of companies are exploring how chatbots could be used for mental health therapy, and some investors in the companies are betting that healthy people might also enjoy chatting and even bonding with an AI “friend.” The company behind Replika, one of the most advanced of that genre, markets its chatbot as “The AI companion who cares. Always here to listen and talk. Always on your side.”

    “We need physicians to start realizing that these new tools are here to stay and they’re offering new capabilities both to physicians and patients,” said James Benoit, an AI consultant. While a postdoctoral fellow in nursing at the University of Alberta in Canada, he published a study in February reporting that ChatGPT significantly outperformed online symptom checkers in evaluating a set of medical scenarios. “They are accurate enough at this point to start meriting some consideration,” he said.

    Still, even the researchers who have demonstrated ChatGPT’s relative reliability are cautious about recommending that patients put their full trust in the current state of AI. For many medical professionals, AI chatbots are an invitation to trouble: They cite a host of issues relating to privacy, safety, bias, liability, transparency, and the current absence of regulatory oversight.

    The proposition that AI should be embraced because it represents a marginal improvement over Dr. Google is unconvincing, these critics say.

    “That’s a little bit of a disappointing bar to set, isn’t it?” said Mason Marks, a professor and MD who specializes in health law at Florida State University. He recently wrote an opinion piece on AI chatbots and privacy in the Journal of the American Medical Association. “I don’t know how helpful it is to say, ‘Well, let’s just throw this conversational AI on as a band-aid to make up for these deeper systemic issues,’” he said to California Healthline.

    The biggest danger, in his view, is the likelihood that market incentives will result in AI interfaces designed to steer patients to particular drugs or medical services. “Companies might want to push a particular product over another,” said Marks. “The potential for exploitation of people and the commercialization of data is unprecedented.”

    OpenAI, the company that developed ChatGPT, also urged caution.

    “OpenAI’s models are not fine-tuned to provide medical information,” a company spokesperson said. “You should never use our models to provide diagnostic or treatment services for serious medical conditions.”

    John Ayers, a computational epidemiologist who was the lead author of the UCSD study, said that as with other medical interventions, the focus should be on patient outcomes.

    “If regulators came out and said that if you want to provide patient services using a chatbot, you have to demonstrate that chatbots improve patient outcomes, then randomized controlled trials would be registered tomorrow for a host of outcomes,” Ayers said.

    He would like to see a more urgent stance from regulators.

    “One hundred million people have ChatGPT on their phone,” said Ayers, “and are asking questions right now. People are going to use chatbots with or without us.”

    At present, though, there are few signs that rigorous testing of AIs for safety and effectiveness is imminent. In May, Robert Califf, the commissioner of the FDA, described “the regulation of large language models as critical to our future,” but aside from recommending that regulators be “nimble” in their approach, he offered few details.

    In the meantime, the race is on. In July, The Wall Street Journal reported that the Mayo Clinic was partnering with Google to integrate the Med-PaLM 2 chatbot into its system. In June, WebMD announced it was partnering with a Pasadena, California-based startup, HIA Technologies Inc., to provide interactive “digital health assistants.” And the ongoing integration of AI into both Microsoft’s Bing and Google Search suggests that Dr. Google is already well on its way to being replaced by Dr. Chatbot.

    KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.
