Patrick Hall’s Updates

Update 6 - Overall AI Reflection

Note: as of now, I have not received any peer reviews of my own paper.

Work Enhancement:

The AI review of my paper gave me a clear indication that my paper was not perfect. Granted, I knew this before I read any of the criteria. I did think the review was overly generous in awarding a 3 out of 4 (above expectations) on every rubric criterion. It made me wonder how badly I would need to write in order to score a 2.

I rambled a bit in one section of my paper (I was tired of writing), discussing the relevance and importance of my profession as an ELA teacher, and the review flagged that passage as unprofessional writing. I will go back and replace it with more theory-based or research-based content.

It also flagged that I did not provide enough context for my specific interest in this paper's topic (students using AI as a vehicle for cognitive offloading, sabotaging their own learning). I will be adding specific examples of students using GenAI to sidestep the obligation to think through their writing.

Lastly, it flagged that I presented my modification of a scripted curriculum as a pedagogically sound moment for GenAI integration without framing that moment in any education research. I'll be adding source material there to strengthen the validity of this idea.

I am happy to report that my final product will be greatly improved by the Cyberhelper review. I know almost exactly what I will be marked down on if I don't make changes, and I have the impetus to score as highly as possible. Got to keep that 4.0 GPA!!

Knowledge Gain:

I absolutely think that the peer review process gave me new ideas and knowledge. That's a natural feature of reading something of interest -- it sparks ideas in the mind. The AI peer review -- not so much (there's something less engaging about a review produced automatically rather than by a human -- this warrants research!). Reading what my peers are interested in and what they produced gave me inspiration for how to improve my final paper. I drew strong sources from both of the peer reviews I did, and I encountered new criticisms of integrating AI into the classroom by reading other people's updates.

AI / Human Review Reflection:

I have sadly received no human reviews myself, so I cannot compare their efficacy against the AI review. I will instead say a bit about why I think the AI review of my paper was less than interesting.

Because the Cyberhelper performs the task with ease and is so very complimentary, I think I hold it in a bit of disdain or scorn. What takes me hours (it took me 2-3 hours to complete my peer reviews), it completes in minutes. We are repeatedly being taught not to trust AI and to check its source material, and this skepticism carries over into my interpretation of its criticism. It doesn't REALLY know what it's talking about. It's simply regurgitating a synthesis of others' ideas without the human context that can create a carefully crafted review. Then again, as a parent of a 2-year-old, I don't really have the energy to craft the kind of peer review my peers deserve, so maybe Cyberhelper is better than I am.

AI Competence:

I'm not sure my ability to wield AI has changed much as a result of this course, but I will say that my understanding of why its limitations and strengths exist, and what its pedagogical applications are, is a bit stronger. I will certainly be experimenting with AI as a tool for planning and differentiation in the upcoming school year, and I MAY allow students some limited AI applications. I'm not convinced, though, that the guardrails yet exist in Chicago Public Schools to prevent impulsive 7th graders from using AI to replace their thinking rather than extend it.