AI Should Not Replace Thinking at My University

The Atlantic

I'm dismayed that any academic institution would encourage us to use chatbots rather than our intellects. I used to drive a stick-shift car, but a few years ago, I switched over to an automatic. I didn't mind relinquishing the control of gear-changing to a machine. It was different, however, when spell checkers came around. I didn't want a mechanical device constantly looking over my shoulder and automatically changing my typing, such as replacing "hte" with "the." I had always been a good speller, and I wanted to be self-reliant, not machine-reliant. Perhaps more important, I often write playfully, and I didn't want to be corrected if I deliberately played with words. So I made sure to turn off this feature in any word processor that I used. Some years later, when grammar correctors became an option with word processors, I felt the same instinctive repugnance, but with considerably more intensity, so of course I always disabled such devices.

It was thus with great dismay that I read the email that just arrived from University Information Technology Services at Indiana University, where I have taught for several decades. The subject line was "Experiment with AI," and to my horror, "Experiment" was an imperative verb, not a noun. The idea of the university-wide message was to encourage all faculty, staff, and students to jump on the bandwagon of generative AI tools (it specifically cited ChatGPT, Microsoft Copilot, and Google Bard) in creating our own lectures, essays, emails, reviews, courses, syllabi, posters, designs, and so forth. Although it offered some warnings about not releasing private data, such as students' names and grades, it essentially gave the green light to all IU affiliates to let machines hop into the driver's seat and do much more than change gears for them.

Here is the key passage from the website that the bureaucratic email pointed to (and please don't ask me what "from a data management perspective" means, because I don't have the foggiest idea):

From a data management perspective, examples of acceptable uses of generative AI include:

- Syllabus and lesson planning: Instructors can use generative AI to help outline course syllabi and lesson plans, getting suggestions for learning objectives, teaching strategies, and assessment methods. Course materials that the instructor has authored (such as course notes) may be submitted by the instructor.
- Correspondence when no student or employee information is provided: Students, faculty, or staff may use fake information (such as an invented name for the recipient of an email message) to generate drafts of correspondence using AI tools, as long as they are using general queries and do not include institutional data.
- Professional development and training presentations: Faculty and staff can use AI to draft materials for potential professional development opportunities, including workshops, conferences, and online courses related to their field.
- Event planning: AI can assist in drafting event plans, including suggesting themes, activities, timelines, and checklists.
- Reviewing publicly accessible content: AI can help you draft a review, analyze publicly accessible content (for example, proposals, papers, and articles) to aid in drafting summaries, or pull together ideas.

I was completely blown away with shock when I read this passage. It seemed that the humans behind this message had decided that all people at this institution of learning were now replaceable by chatbots.
In other words, they'd decided that ChatGPT and its ilk were now just as capable as I myself am of writing (or at least drafting) my essays and books; ditto for my lectures and my courses, my book reviews and my grant reviews, my grant proposals, my emails, and so on. The tone was clear: I should be thrilled to hand over all of these sorts of chores to the brand-new mechanical tools that could deal with them all very efficiently for me.

I'm sorry, but I can't imagine the cowardly, cowed, and counterfeit-embracing mentality that it would take for a thinking human being to ask such a system to write in their place, say, an email to a colleague in distress, or an essay setting forth original ideas, or even a paragraph or a single sentence thereof. Such a concession would be like intentionally lying down and inviting machines to walk all over you.

It's bad enough when the public is eagerly playing with chatbots and seeing them as just amusing toys when, despite their cute-sounding name, chatbots are in fact a grave menace to our entire culture and society. But it's even worse when people who are employed to use their minds in creating and expressing new ideas are told, by their own institution, to step aside and let their minds take a back seat to mechanical systems whose behavior no one on Earth can explain, and which are constantly churning out bizarre, if not crazy, word salads. (In recent weeks, friends sent me two different proofs of Fermat's last theorem created by ChatGPT, both of which made pathetic errors at a middle-school level.)

When, many years ago, I joined Indiana University's faculty, I conceived of AI as a profound philosophical quest to try to unveil the mysterious nature of thinking. It never occurred to me that my university would one day encourage me to replace myself (my ideas, my words, my creativity) with AI systems that have ingested as much text as have all the professors in the whole world, but that, as far as I can tell, have not understood anything they've ingested in the way that an intelligent human being would. And I suspect that my university is not alone in our land in encouraging its thinkers to roll over and play brain-dead. This is not just a shameful development, but a deeply frightening one.