News & Views

Artificial Intelligence, Incorporating Human Creativity and Ingenuity

Dr. Reshan Richards, Director of Studies at New Canaan Country School, and Stephen J. Valentine, Associate Head of School at Montclair Kimberley Academy

Example 1: A restaurant in New York City serves a complimentary glass of champagne to someone dining alone. The gesture becomes part of their signature service.
Example 2: To reduce damage during delivery, an e-bike company decides to print an image of a flat-screen TV on its bike boxes. The move works—spectacularly. People, intentionally or unintentionally, believe a flat-panel TV needs to be handled with more care than a bike. Or perhaps the delivery companies have mandates to reduce damage claims on high-priced electronics in their care.

Example 3: A New England school considers how to use a new property, walkable from but not close to campus. A problem becomes clear: the most amazing physical space, educational experience, or educator can be present at the new site, but people won't use the site if it is not convenient (or delightful) to go there. A member of the planning group suggests they design a hayride experience to transport students and teachers to and from the new property.

What do these three examples have in common? In their quiet ways, they celebrate a kind of lo-fi human ingenuity. While the AI wars are raging, it is important to make the case for a form of intelligence that lends a little grace, joy, delight, or ease to the world.

Human ingenuity is not born of rigorous data collection and analysis. It is not manufactured because it happened many times before or received the most votes or clicks. It is not the economical choice (though it need not be a costly choice, either). It is an action born of a blend of “I notice” and “I care” and “let’s try.” Think of it as inquiry’s output, sparked by belonging.

Advances in generative artificial intelligence (most recently ChatGPT) have certainly raised the alarm among teachers and school leaders. “Is this the death of homework?!” “Of the analytical essay?!” “Of our jobs?!” While it is still too early to say anything definitive regarding the above, ChatGPT and other GenAI products lead to natural, even healthy, inquiry about learning and creativity. A stiff challenge or constraint can, at the very least, help us get clear first about our beliefs and next about our options.

Facts are facts. ChatGPT acquired a massive user base out of the gate. It reached one million users in roughly five days, faster than tech products that have become cultural mainstays, such as Instagram or Spotify.

When a typical entrepreneur saw ChatGPT for the first time, they were likely to ask, “How can I build something with that?” They might have arrived at this question (and the options that it generated) because they believed experimentation can lead to products or services that might generate profit or at least further learning.

When a typical teacher saw ChatGPT for the first time, they may have been more likely to first think about the ways that students might use it to cheat on assignments. Or, slightly more positively, they might have vocalized that such a product would cost them time; they would now need to change their prompts or assessments. Like the entrepreneurs described above, these teachers reacted to ChatGPT based on their beliefs—about the purpose of assignments, about the types of prompts we should be offering in schools, about the uses of assessments, about students themselves.

ChatGPT is certainly an interesting tool. How we react to it can indicate beliefs about learning and the creativity that drives the best versions of it. Are our beliefs leading to the options we would hope to see for our students, our staff, and ourselves?

When trying to lead and shape the learning of others, few moves are more powerful than noticing where the learners stand in their learning process while caring enough about their learning (i.e., caring enough about them) to help them to take some meaningful next step.

Such work requires constant human ingenuity on the part of a teacher. Each learner’s struggle with the problem in front of them is usually a little bit different, requiring a unique nudge. Every unique nudge is an outgrowth of lo-fi human ingenuity. And of course, the truest human ingenuity on the part of a teacher is to ensure that the students themselves leave each semester (or year) having internalized the problem-solving process so as to no longer need the teacher.

There are many different ways to define the desired ends of education; we hope all students become the type of people who scan both their environments, in search of ways to improve them for others, and their own minds, in order to apply their intelligence in what Bridle defines as “active, interpersonal, and generative ways.”

It is difficult to talk about ingenuity, and the type of intelligence it unleashes on the world, without at least glancing at Generative AI. A commitment to human ingenuity may help us, ultimately, to absorb or reject Generative AI—appropriately.

Human Ingenuity
Human ingenuity, after all, is a commitment to the ongoing construction of a robust toolbox rather than the single-minded application of a single tool. A good problem solver sometimes reaches for a pencil, sometimes for a wrench, sometimes for an analogy, sometimes for a calculator. Advancing on a problem step by step, angle by angle, keystroke by keystroke, tinker by tinker, produces a solution that could only have come from an engaged individual. A proliferation of such solutions makes environments habitable, hospitable, and even joyful. That is how we build communities, with and for each other, in an ongoing way.

For the problem solver, advancing on a problem by jumping over the opportunity to pour themself into the solution—by skipping the steps, angles, keystrokes, and tinkering—by definition leads to a solution that includes less of the problem solver. This changes the solution; it also changes the problem solver.

Writing and other forms of creating are about output. But more importantly, they are about the process of the writer or creator finding increasingly precise ways to express exactly what they believe, exactly how they see and feel and exist in the world, exactly who they are, in that moment, becoming. It is worth noting the most profound way in which Generative AI is not like a good teacher: It is a product of massive human ingenuity that diminishes the human ingenuity of those who use it without the appropriate amount of reflection, thought, discipline, and discretion.

While schools are figuring out whether to ban ChatGPT or how it will fit into their anti-plagiarism policies, it’s important to remember: Nothing about good human learning should change in the face of increasingly advanced AI.

Learning at one level is about generating correct answers. When teachers believe that this level of learning is important, they will likely assign tasks that ask students to ignore easily accessible technology (calculators, Google) or to memorize facts and figures. Their assessments, meanwhile, will ask students to pick an answer from a list or to be very precise in response to a static situation. Students may not need to show their work; if they do, they may or may not receive credit for it.

To be clear, “helping students arrive at the correct answer” should not be characterized as deep learning or an appropriate aspirational peak for any teacher or school. Learning at a deep level is subtly and essentially different—it’s a matter of developing a significant understanding of the way(s) of reaching a correct or workable solution.

If teachers believe that this deeper level of learning is important, they will prepare students to face—and relish facing— novel situations. Their assessments, meanwhile, will ask students to pull from their past experiences and knowledge to make some step forward (for themselves, for others, etc.).

In the face of rapidly improving and accessible Generative AI, shallow learning and teachers who seek to inspire shallow learning might ultimately be considered easily replaceable. Deeper learning and teachers who seek to inspire deeper learning, on the other hand, will continue to be essential. In fact, the need for great human teachers (and parental figures) will likely become even more apparent because of another deep learning need to which generative AI points: discerning between probabilistic reality (i.e., AI-generated nature) and actual nature.

Many AI systems draw on both their initial, pre-launch training and their ongoing, in-the-world training to make data-driven guesses at what a correct and acceptable output might be. These are not guesses in the human sense but probabilistic predictions, sampled from learned distributions at a speed, density, and scale that has rarely been experienced commercially before.
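For readers curious about the mechanics, a generative model's "guess" at each step is a draw from a probability distribution over possible next words. The sketch below is purely illustrative: the vocabulary, the scores, and the function names are invented for this example and do not come from any real model.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Pick the next word by sampling, not by following IF/THEN rules."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy example: scores a model might assign to candidate next words
vocab = ["bike", "TV", "champagne", "hayride"]
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits)
print({w: round(p, 3) for w, p in zip(vocab, probs)})
print(sample_next_token(vocab, logits))
```

The key point for educators is the word "sampling": the model usually picks the most likely word, but not always, which is why the same prompt can yield different answers on different tries.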

Humans, in our way, do something similar when tapping into prior knowledge, experiences, and connections in order to provide a response, idea, or action. At that task—tapping into prior knowledge, experiences, and connections—we are not nearly as effective as even a decent generative AI model; our search can never be as systematic or complete. But we do have one advantage, at least currently. In the moment before suggesting its answer, generative AI is not (yet, at least) seeking the kind of feedback for which humans are hard-wired: feedback from others, from environments, from context, from body language, from culture.

Perhaps an AI tool might ask for a rating of its response as part of its workflow in order to help improve its training, but that human-in-the-moment feedback is not part of the AI’s initial response. Such a rating, its best version of “reading the room,” will only inform future interactions (which is not a bad thing but surely a limitation).

To guide a student toward durable and deep learning, the best teachers access some of what a generative AI can access—prior knowledge, experiences, and connections—and most of what a sensitive, thoughtful human can access—understanding where students are, demonstrating expertise with a personal message, leading people to fill in particular gaps.

Speed, density, and scale will, no doubt, continue to advance in ways that push the limits of what human development can keep up with. That might be okay, too, as long as the advances are being used to help humans do what they do best—being creative, exhibiting emotional and interpersonal intelligence, and finding fulfillment in contributing to something larger than themselves, but not at the expense of the wellness of themselves or others.