[POST] Socratic, so what?
AI just provides a more classical way to learn earlier in the educational journey
I use AI (Perplexity) a great deal in my daily analysis. I ask it questions galore both fore and aft of my writing. Before I commit to writing, I ask it a lot of “Is this true?” questions, testing ideas I have about a subject I’ve encountered in the media. After I’ve penned something, I also check a lot of my statements to make sure I haven’t gone off some particular ledge.
I don’t always agree with Perplexity, and I always take its analysis with a grain of salt (often checking the sources cited), but our lengthy Q&A sessions are incredibly helpful for me, parsing huge amounts of information and analysis so that I can locate exactly what I want to say about a subject without getting so idiosyncratic that I alone understand what I’m saying.
Perplexity is often described as this brilliantly fast research assistant, and it is.
Does using AI atrophy your analytic hunter-gatherer skill-set? Yes, yes it does, but only in this real-world sense. I used to spend hours/days/weeks physically tracking down information and data and now I take that instant access almost completely for granted.
Sure, such research-montage scenes are great in films, but, in real life, that digging is incredibly tedious, exhausting, and time-consuming.
But what would I have lost if that earlier hands-on, working-in-the-real-world learning-by-doing hadn’t occurred?
Not all that much, by my calculations. When you’re digging for information, you’re constantly asking yourself, Is this important? It’s a very in-or-out question: Should I know this for my analysis to be strong or is it just incidental?
That is, in effect, how I build my Sunday Cutdown list every week: I find something I suspect is important, something I need to make up my mind about and place within my larger thinking. So, that basic hunter-gatherer skill-set still exists within me, and, when I look at my kids and how they use the internet, I see they’re quite adept at it.
But, once they find something, they often come to me like Peeta Mellark turning to Katniss Everdeen in the final movie of that Hunger Games series and asking about something he thinks he remembers and knows: Real or not real?
I sometimes find my kids’ questions quite disturbing (How can you not know it’s real?) and completely reasonable (How can anybody tell if anything is real nowadays?).
Well, as a 63-year-old who learned how to parse reality in the Before Time (analog era), I still possess some instinctive skills that my kids lack.
An example: There are cultures and languages in Africa that don’t have words for right and left; instead, speakers simply cite whichever compass direction comes closest to the point being made, as in, Take your north hand and pick up that rock. Which is your north hand? It’s the one currently facing north, duh!
How can this be possible? Go back far enough in time and knowing north/south/east/west at all times was simply a skill you needed for survival. You were your own compass.
I lived in that world just long enough that I retain that skillset. I pretty much always know NSEW no matter where I am. I can just feel it (often by the angle of sunlight), like I could always tell which way was the ocean when I lived on the East Coast (far easier because you could always smell the salt and just sort of feel the vastness of the water).
Now, if you’re born into a world where you can get directions from your phone, you never develop that real-world skill. I bump into this reality all the time when giving directions to young people or my kids. I’ll say head north and then west, and they just have no idea, like I’m posing a riddle just to be unhelpful.
Of course, put me in some first-person shooter game with them and it’s like they have this super-detailed mental map of the world they’re moving within while I don’t have a clue (I mean, I know up and down alright because I’m constantly falling on my face). So, they have a version of that skill; it’s just been learned within different, virtual parameters, in environments that I would argue are every bit as complex as the real world, if not more so.
If you want to escape some real-world natural disaster fast and you ain’t got your smartphone, see me. You want to evade all those zombies in some virtual environment, don’t see me, because I’m just desperately trying to make my way up some stairs right now.
Okay then, so the hunter-gatherer thing still exists, it’s just sped up fantastically in terms of real-time processing for my kids because they’re so comfortable operating in these supremely complex virtual environments.
And yes, that sense of orientation is still there, it’s just more customized and computer-aided per the environment encountered. Nothing really wrong there unless you’re a prepper and you are convinced that the old skills are the only skills that will matter when Armageddon finally comes.
All of that is to say, when I come across an article like this, I must process it somehow in relation to my own thinking processes, which are definitely elevated far beyond what I was capable of in the past. I just can’t tell whether that’s me getting wiser as I age or being artificially boosted by all this AI; the mix remains a mystery to me:
NYT: A Classroom Experiment — We explain how schools are using artificial intelligence
So, what’s going on? Rampant “cheating” or advanced-stage learning?
For now, mostly experimenting.
After years of hesitancy and hand-wringing about A.I., schools are starting to experiment with chatbots — some with enhanced privacy guardrails, some without. In a nationally representative survey, nearly half of districts reported having provided A.I. training for their teachers as of last fall. That’s twice the number from the previous year.
[Yes, I hate that the NYT is sticking with this A.I. punctuation requirement. It’s stupid and wastes time and energy. Plus, they are an outlier pissing in the wind on that one.]
So, how is this experimentation with AI translating for the kids?
In Kelso, Wash., middle and high schoolers used Google’s Gemini this school year for tasks like research and writing. In Newark, an A.I. tool from Khan Academy helps teachers place elementary-school students into study groups based on their skill levels. It also answers students’ questions as teachers give lessons.
Sounds good. Basic info still being relayed, and that’s an important building block. But the kids can quickly start leveraging AI for clarifying questions, which is great, in my mind.
But are the children “learning”?
I kind of hate that binary question. Of course they’re learning. The question is, what are they learning?
While tech companies promise that A.I. can facilitate “personalized learning,” many students and educators are simply using chatbots as a sophisticated search engine. (Some also use it to cheat, including by drafting essays.)
A sophisticated search engine? I’m not going to count that as a disappointment.
Drafting essays? Even there I’m not totally sure that is the ultimate peak of learning to be scaled. I mean, it sure as hell ain’t the peak for your average dyslexic, whose intelligence cannot adequately be measured along that axis.
Maybe, just maybe, by getting there (i.e., essay drafting) that fast, deeper forms of learning and analysis become possible.
Think about your own life and experiences and rank your sources of learning:
There’s what I see.
There’s what I hear.
There’s what I read.
There’s what I remember.
There’s what I think/analyze.
There’s what I say to express what I’m thinking.
There’s what I write down to codify my thinking.
All legit sources (I’m sure you can add an item or two), and, depending on your learning style, you will favor certain pathways and combinations.
But, in my experience, none of these input/output pathways compare to a question-and-answer dialogue with someone more knowledgeable than you.
I think of everything I learned in school and then compare that to long conversations with my Mom or Dad, say, while we’re driving home to Boscobel in the dark from Madison. To me, those were magical moments of great intellectual breakthroughs, and, yeah, in some way I was asking Peeta’s question about some thought I had — as in, Realistic or not realistic?
Grade-school teachers don’t get to play that role all that much because they’re so busy loading you up with the basic facts, which can be incredibly helpful without being absolutely necessary for that Q&A dialogue to unfold.
High-school teachers get more opportunities, and that’s when teachers really start influencing who you become, in my opinion. The exchange is less dictatorial and more collaborative.
And then you get to college and you really find yourself using the Socratic method. From GoogleAI:
The Socratic method is a form of cooperative argumentative dialogue, often used in teaching, where a teacher uses a series of probing questions to help students explore their own beliefs and assumptions, leading to a deeper understanding of a topic. It's characterized by a back-and-forth exchange rather than a lecture format, encouraging critical thinking and self-discovery.
That’s what I did with my parents and with mentors throughout my education and career. It’s what I do every day with my kids. And it’s what I do with PerplexityAI every workday.
Now, the big question is, What happens if we expose that first-grader to their own personalized Socrates? Because I think we’re talking about a whole other level of learning than what’s been attempted in the past with learning software.
I do worry that Peeta is always going to be confused and disoriented by these firehose flows of information and analysis, but I also think that higher levels of analysis and understanding will become accessible, and explainable, giving us the ability to anticipate events and outcomes far better and making us more resilient in the process, with lucidity (clarity of thought) as the Holy Grail.
This is what my colleague and longtime friend Steve DeAngelis is proposing with his REAL construct as an answer to our VUCA world and the BANI sensations it creates within our heads.
See Steve’s recent, very intriguing post on this:
DEANGELIS REVIEW: Personal and Enterprise Resiliency: Going from BANI to REAL
Because I think Steve is really on to something with his REAL formulation:
To sum up:
A VUCA world is full of volatility, uncertainty, complexity and ambiguity.
That VUCA world creates, in humans, the BANI mix of brittleness, anxiety, non-linearity, and incomprehensibility.
To overcome the latter, Steve proposes a directly countering paradigm: REAL.
From his post:
To escape this BANI condition, we need to construct its complete opposite: a world in which resilience replaces brittleness, explainability reduces anxiety, anticipatory tames non-linearity, and lucidity surpasses incomprehensibility.
That’s how we get REAL (resilient, explainable, anticipatory, lucid) with our world.
To me, and this may be supremely simplistic on my part (but Hey! I’m walking here!), getting REAL is turning AI into your own personal Socrates — that role played by your parents, elder siblings, teachers, coaches, and mentors in your life. They get you to explainable. That allows you to anticipate — without fear. That enables your resilience when things do happen. And that’s how you keep your cool — lucidity.
In Steve’s emerging approach, this is how we craft a moral framework within which AI and humans can collaborate and co-evolve.
I like Steve’s ambition here: to explain AI is to know good AI from bad AI, and that requires a moral compass, something I wrote about previously regarding Steve’s thinking in this post:
[POST] Constructing a moral compass for the looming Singularity
Steve DeAngelis and the Art of the Practical
To me, that’s not surrendering to AI but harnessing it, and yeah, that has to begin as early as possible in the classroom.
So, as far as I am concerned, the deeper the educational embrace of AI, the better.
Feel any better about AI? I do, and that’s enough for me to warrant this post.
And yes, if you’re wondering, Steve and I play Socrates to one another all the time: with me in the lead more on politics and security, and Steve more the mentor on economics and technology.
And, if that’s cheating, it’s the best kind of cheating.