Generative AI & Higher Education. “Friendly” To Professors. But Students Better Be Careful.

I work in higher education. There are challenges, for sure: rising costs, student debt, research relevancy, declining enrollments, curriculum readiness, campus safety, effective leadership and educational alternatives, among others, all of which suggest some reinvention of the industry is overdue. But while these problems are well-known (and growing), there’s a new one we should be concerned about: generative AI.


Stephen Marche writes in The Atlantic: “The College Essay is Dead” and “Nobody is Prepared for How AI Will Transform Academia.”

Here’s what Marche says:

“And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it.”

Tony Wan confirms Marche’s concerns:

“Usually there’s a lag period between the emergence of new technology and its recognition and adoption in education. What is remarkable with ChatGPT and GPT-3 is the rapid uptake by people across the community. Students are leading the charge, using it to write papers (so long, paper mills!) and stirring cries of cheating among teachers. Other educators have experimented with it to create syllabi and grading rubrics and champion its creative potential.”

According to Chris Stokel-Walker, it’s worse: “Students Are Using AI Text Generators to Write Papers” and “High School and College Students are Awakening to the Grade-Boosting Possibilities of Text-Generating Software. Their Teachers are Struggling to Catch Up.”

How’s this actually working for students?

“He logged into GPT-3, a text-generating tool developed by OpenAI, which can create written content from simple prompts. Trained on a vast corpus of preexisting language drawn from Wikipedia, Common Crawl, and other sources, GPT-3 is intended as a tool for automating writing tasks. But it’s also increasingly helping students … avoid some of the tedium of academic writing and skip right to the fun part (being done).”


What about professors? What can they extract from ChatGPT? How about the development of a syllabus? Here’s what Ethan Mollick asked:

“Create a syllabus for a 12 session MBA-level introduction to an entrepreneurship class, and provide the first four sessions. For each, include readings and assignments, as well as a summary of what will be covered. Include class policies at the end.”

Here’s the response the professor got from ChatGPT:


“Session 1: Introduction to Entrepreneurship

Readings:

  • “The Lean Start-Up” by Eric Ries
  • “The Innovator’s Dilemma” by Clayton Christensen

Assignments:

  • Complete the “Entrepreneur Mindset” quiz on the website
  • Read the assigned chapters from “The Lean Startup” and “The Innovator’s Dilemma”

“In this session, students will be introduced to the concepts of entrepreneurship and innovation. They will learn about the entrepreneurial mindset and the role it plays in successful entrepreneurship. They will also learn about the key challenges and opportunities faced by entrepreneurs and the importance of customer discovery and validation in the early stages of a startup.”

As the professor says:

“This is a perfectly fine syllabus for an introductory class for MBAs. The readings are reasonably modern (though it does not give page numbers, among other mistakes), and it actually has a reasonable structure building up to a final project.”

In fact, all four sessions were developed by ChatGPT complete with elaborate assignments.

How about coding? Can ChatGPT write code? You bet: “it can also code in many … languages. This is a potentially huge change to our research.”

How about reviews? Here’s the question posed to ChatGPT:

“I want to write an academic review paper on why crowdfunding can help entrepreneurship. Write me the introduction in an academic style for a top management journal. Explain why crowdfunding is a context that generalizes to the study of venture-backed companies, and what theories it can help explore.”

It works.

I could go on with examples. They’re everywhere.

But Students — & Others — Better Be Careful

Whenever there’s a new technology, there’s a response. Enter GPTZero, a next-generation plagiarism-detection application built to flag AI-generated text. There will be many of these, especially because there’s a business opportunity here. At least five detection tools are already available to professors (or anyone), with more on the way. So are we worrying too much about generative tools? Students (and all those who use ChatGPT and similar tools) better be careful. Detection is still possible.
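The article doesn’t describe how tools like GPTZero actually work, but one heuristic commonly discussed in this space is “burstiness”: human prose tends to mix short and long sentences, while machine-generated text is often more uniform. Here is a toy, standard-library-only sketch of that idea (the function name, the naive sentence splitter, and the whole approach are illustrative assumptions, not GPTZero’s real algorithm):

```python
import re
from statistics import mean, pstdev

def burstiness_score(text: str) -> float:
    """Crude 'burstiness' proxy: variation in sentence length.

    Returns the coefficient of variation (std dev / mean) of
    sentence lengths in words. Higher values mean more variation,
    which this toy heuristic treats as more 'human-like'.
    """
    # Naive split on runs of ., !, or ? (real detectors use far
    # more sophisticated segmentation and language models).
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The committee, after months of contentious debate "
          "over funding, finally voted. Unanimously.")

print(burstiness_score(uniform))  # 0.0 — all sentences are 4 words
print(burstiness_score(varied))   # > 1 — lengths swing from 1 to 11
```

Real detectors add measures like perplexity under a language model; this sketch only shows why uniform, formulaic text is easier to flag than varied prose.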

Now What?

GPT-4 will be available in 2023, and it will be much more powerful than GPT-3, which means applicability and relevancy will increase dramatically. You get the idea. Higher education is only one of the industries impacted by generative AI. But since “words” play such a huge role in the educational experience, higher education is perhaps one of the most vulnerable to tools that are really good with words. Within higher education, generative AI can generate syllabi, exams, essays, research papers, annual performance reviews, presentations, research proposals and so much more. (Will the National Science Foundation have to “verify” proposals?) What about marketing campaigns? Debates? Novels? Screenplays? Will a new core competency include how to engage ChatGPT and similar large language models? (Or will ChatGPT just rewrite bad questions?)

Generative AI (in this case, ChatGPT, built on GPT-3; though there are many more generative tools) has already invaded every campus in the world. Higher education must understand and analyze this technology if it’s to maintain its fundamental integrity. If higher education responds quickly with smart detection tools, it will respond “correctly.” But if it’s slow, how many “A” papers will fall through the cracks?

But is there another opportunity here? Is ChatGPT like Microsoft Office? Will it be embedded in MS Office, search and other commonplace activities? Will it become embedded in other applications, like ERP applications, which are becoming increasingly automated? We just don’t know. Nor do we know where “detection” will begin and end, or what will be considered “acceptable” and what won’t. Will algorithmic “explainability” be part of this trend as it is with more traditional AI and machine learning applications? Will hate speech generated by these tools be regulated? Or will self-governance prevail?

What’s next?

