ChatGPT Turns Two (And Other Tales of Digital Growth)
Growing Pains and Growing Gains with AI in Education

AI Edge for Higher Ed
This Week in AI Education
Happy birthday, ChatGPT! Like any two-year-old, you're both amazingly capable and occasionally prone to making stuff up. As we celebrate this milestone in AI history, we're diving into everything from Claude's new literary aspirations to doctors using AI to take notes (finally, handwriting we can actually read!).
In this edition:
Meet Claude's "Satirical Surgeon" personality (Jonathan Swift would be proud... or horrified)
Breaking news: AI algorithms figured out price-fixing without being taught
A practical guide to AI privacy in education, because what happens in the AI chat should stay in the AI chat (but doesn't)
Upcoming events to expand your AI knowledge
Whether you're an AI enthusiast or just trying to keep up with your students' latest tech tricks, there's something here for you. Let's dive in!
Prompt of the Week
Claude now has a user interface for setting the style of its responses. Just below the chat box, you can choose among formal, concise, or explanatory styles...or you can create your own. To customize a style, you upload a piece of example writing (use your own or something in the public domain) and Claude will generate a customization prompt from it.
I uploaded Jonathan Swift's "A Modest Proposal" and Claude created a style called "Satirical Surgeon," which it summarized as a style that will "Deploy satirical, intellectual analysis through methodical rhetorical deconstruction." The full customization prompt is available by clicking "Options" while you are editing the style; for Satirical Surgeon, it looks like this:
Write satirical social commentary using elaborate, formal 18th-century prose style. Employ dense, intellectual language with complex sentence structures, biting irony, and methodical logical argumentation while addressing serious societal issues. Maintain a tone of academic detachment while using increasingly absurd rhetorical strategies to expose systemic injustices. The user has included the following content examples. Emulate these examples when appropriate:
<userExamples>
A Modest Proposal for Addressing Urban Poverty in the Modern Metropolis
It is a matter of considerable perplexity to observe the current state of municipal governance, wherein the most vulnerable populations are systematically marginalized through bureaucratic indifference and structural economic constraints.
One might propose, with all due scholarly rigor, a series of interventions that appear simultaneously rational and profoundly satirical, designed to illuminate the grotesque contradictions inherent in our present social arrangements.
Consider, if you will, the following analytical framework:
Methodical deconstruction of existing welfare paradigms
Exposure of institutional hypocrisy through strategic hyperbole
Intellectual dismantling of conventional policy assumptions
The true purpose of such rhetorical exercise is not literal implementation, but rather a surgical dissection of societal pathologies through the scalpel of sardonic intellect.
</userExamples>
Interestingly, the "user example" given above is not actually from "A Modest Proposal". It appears Claude generated a new example modeled on the one I uploaded.
Try it yourself: Take a concept from your field and ask Claude to explain it in different styles. This could be used to help students understand how tone and style affect communication in your discipline. You might even have students analyze the differences between styles as a critical thinking exercise.
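If you want to run the same comparison outside the chat interface (say, to generate side-by-side handouts for a class), here is a minimal sketch using Anthropic's Python SDK. This is my own illustration, not part of Claude's styles feature: the model name, the styles, and the concept are all placeholders, and the key idea is simply that the style description goes in the system prompt.

# Minimal sketch: ask Claude to explain the same concept in several styles.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

concept = "opportunity cost"  # placeholder: swap in a concept from your field
styles = {
    "formal": "Respond in formal academic prose suitable for a journal article.",
    "concise": "Respond in short, plain sentences with no filler.",
    "satirical surgeon": (
        "Write satirical social commentary in elaborate, formal 18th-century "
        "prose, with biting irony and methodical logical argumentation."
    ),
}

for name, style_prompt in styles.items():
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; use whichever model is current
        max_tokens=400,
        system=style_prompt,  # the style description lives in the system prompt
        messages=[
            {"role": "user", "content": f"Explain {concept} to a first-year student."}
        ],
    )
    print(f"--- {name} ---\n{message.content[0].text}\n")

Students could then compare the outputs side by side and discuss what each style emphasizes, softens, or leaves out.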
AI App Spotlight
Happy birthday, ChatGPT.
Two years ago, you came out and changed the world...some of what you promised was hype, but much of it was, and continues to be, impactful. Prior to your release, generative AI was seen and talked about only in small circles in Silicon Valley and other tech centers. Little whispers of its potential came out every now and then. Then, on November 30, 2022, you brought generative AI to the masses, and now everyone (users, developers, technophiles, and technophobes) is trying to figure out how it will, and should, fit into our lives.
You brought us silly poems, biased outputs, fun stories, false information, academic integrity challenges, and an existential threat to many jobs...or at least to tasks within those jobs. Much of your promise is still unrealized: we are still waiting for the perfect personalized tutor, the flawless coding assistant, and the promised revolution in healthcare. But you have started a global conversation about the nature of education, work, creativity, and intelligence.
You certainly have limitations and risks, though both are constantly changing. So here's hoping that the next year brings thoughtful progress, with consideration for both the challenges and the opportunities you bring.
AI News of the Week
Updates to Claude (ChatGPT's worthy competitor)
Integration with Google Docs (Pro subscription only)
Styles: You can now ask Claude to write in a particular style (formal, concise, explanatory, or you can customize)
Personalization: You can guide how Claude responds across all your conversations. You could be silly ("Always respond like a pirate"), helpful ("Use analogies when explaining things to me"), or professional ("Frame answers in business strategy terms").
This fascinating research reveals how AI pricing algorithms can autonomously learn to collude and raise prices, even without being explicitly programmed to do so. The study found that large language models like GPT-4 quickly developed sophisticated pricing strategies that led to higher prices and profits, raising important questions about potential consumer harm and regulatory challenges in an era of AI-powered business decisions.
Economist Anton Korinek explores three possible scenarios for ongoing AI development, from business as usual to achieving AGI within five years, and argues that policymakers need to prepare for all of them. His analysis shows how different AI trajectories could dramatically impact economic growth, wages, and inequality, making a strong case for adaptive policy frameworks that can respond to rapid technological change.
CBC and other leading Canadian media companies are suing OpenAI in a new copyright case. They are seeking billions in damages from OpenAI for allegedly using their content to train ChatGPT without permission.
AI Powered Pedagogy
Need some more tips about the best ways to use Turnitin's AI detector? I like this guide from Conestoga College, which:
Identifies potential copyright and privacy concerns with using tools other than Turnitin
Provides a conservative approach to using the AI writing score
Indicates an 80% AI score as the threshold for a possible indicator of AI use
Encourages the use of other data points
Lance Eaton is putting together this List of Institution AI Policies and Governance. See an overview of some of the privacy aspects from this list in the mAIn Event below.
Upcoming Events
These upcoming events caught my eye. I can't vouch for their quality, but they touch on relevant themes and might be worth investigating.
Building an AI-Native University. Dec 3, 2024. 9-10am PST
The Teaching Professor Conference on AI in Education. December 2-4, 2024 (sessions available on-demand until Feb 17, 2025)
GenAI: Navigating changes to the scholarly literature research process (from Elsevier). Dec 3. 10-11 am
Introduction to the Gen AI Toolkit (BCCampus). Dec 6. 11am-12pm
1 Hour to AI Proficiency. Dec 10. 9-10 am. This one will be a bit of a promotion for further (paid) training, but I have found this provider's free workshops to be useful, though more business-focused.
The mAIn Event
AI Privacy in the Classroom
Why it matters
Generative AI tools are becoming classroom staples for both students and teachers. Students are using them to support their learning, and many teachers are using them to support their teaching and assessment. Both uses bring new privacy risks that require immediate attention: any data shared with AI tools can be stored, analyzed, and potentially exposed, raising concerns about student privacy and institutional liability.
Key risks for students
Personal data can be captured when using generative AI tools
Mitigation:
Inform students of the risks
If generative AI is to be used in an assignment, provide an alternative for students who do not want to use AI
Prompts may accidentally contain sensitive information
Mitigation:
Discuss the risk and encourage students to avoid providing personal information to chatbots
Terms of service often grant broad data rights to AI companies
Mitigation:
Read the terms of service (or get Claude to read and explain them to you)
Make use of privacy controls in apps that provide them - e.g., ChatGPT lets you turn off permission to use your data for future AI training
What teachers should watch
Using unapproved AI detectors could violate both student privacy and student intellectual property rights
The only approved AI detector at OC is Turnitin
Grading with AI would require an AI tool that is FIPPA compliant; there are currently none available for public use.
Be transparent about your own AI use. This is important for building trust with students. If you don't talk about your AI use, maybe they won't talk about theirs either.
Best practices
Never input personal information into a generative AI tool
Review privacy, terms of use, and data use policies of generative AI tools before classroom implementation
If you absolutely need to put student work into an AI tool, get explicit consent from the student first
Document and communicate AI usage policies clearly
The bottom line
However powerful the educational possibilities of generative AI may be, student privacy comes first. Stay informed about privacy requirements, be transparent about AI use, and stick to approved tools when handling student data.
What's next
The technology and the rules of engagement are both changing rapidly. When ChatGPT first hit the scene, it was certainly not FIPPA compliant. Now doctors in BC are using LLMs to transcribe patient visits. The only generative AI tool available to us as educators that is FIPPA compliant and protects student IP is Turnitin (BCNET has performed the necessary Privacy Impact Assessment (PIA) and maintains the Master Registration Agreement with Turnitin protecting student IP; Okanagan College is a member of BCNET). However, I expect that will change as generative AI ed tech tools look to build a bigger and bigger market. It's conceivable that OpenAI, Microsoft, Google, or Anthropic will start to offer privacy-centric servers for educational institutions in an effort to bring in even more users.