My friend Ryan and I are writing a book of essays about how tech is changing how we think, decide, and experience meaning. We both use AI a lot. Much of what we want to write about is the unintended negative consequences of AI. What we are losing. Why things feel subtly off.
We used ChatGPT to draft our thesis and our plan for putting the book together. We recently started to write individual chapters separately. I started by brainstorming with ChatGPT about what to include in my chapter. It felt natural to continue the conversation and ask ChatGPT to draft a vignette for the start of the chapter. And then the segue to the rest of the chapter. A little bit more. Add in some references. Soon I had a first draft of the chapter that was 90% ChatGPT.
I tried reading and editing the text. It didn’t read well. Sentences were short and choppy. Sections were verbose and repetitive. We, ChatGPT and I, kept making similar vague, high-level claims without backing them up or getting into the details. The chapter would get poor marks as an essay for school.
I did what I always do when writing: start from the beginning of my current draft and obsess over each word choice, stopping when my attention starts to wane, and then restarting at the beginning of the text the next time I have free time and am feeling interested in the project. I went over that opening vignette each time, changing it slightly on almost every read-through. Ryan’s first comment on reviewing my work: the opening vignette reads very ChatGPT, and not in a good way. I copy-pasted it in at the bottom of this post. Let me know what you think.
The references ChatGPT added were interesting. They weren’t references I was familiar with. Many were decades old. At first glance, they seemed relevant and interesting. Herbert Simon? Marshall McLuhan? When I dug in, some of the references were quite interesting, prescient, still influential, and meaningful for my analysis. Some were none of those things.
I’m still trying to process this. Are my negative reactions a reflection of a bias against AI, the same bias that is driving Ryan and me to write this book? I will work with Ryan to edit that first chapter and see how it ends up. For the next chapter, I will try writing the first draft myself, without LLM help, and then ask ChatGPT for thoughts on my draft. I will write another chapter without using AI at all. Then compare notes.
Conclusions? AI is good for short-form writing but can’t put together a coherent, quality book chapter. And either the base version of ChatGPT isn’t great with references, or AI in general isn’t.
For coding and other work? Good for defining functions but not putting together a whole service. Maybe I just need “agents.”
Disclaimer: this post was written without using AI at all. Except for the vignette below.
Vignette:
You wake up and reach for your phone. It’s a reflex at this point. A headline sharp enough to provoke a reaction. A post from someone you haven’t spoken to in years. A photograph with an interesting detail that makes you zoom in. You pause. The system notices. Somewhere, a counter increments. A model updates, imperceptibly.
Emails, instant messages, and text messages ping notifications in the background. Work and personal messages. You rely on automatically generated replies to respond. The velocity of it all seems to be increasing. At night, as the noise fades, you sense a quiet disorientation. You have been busy all day, engaged constantly, yet you don’t feel as if you accomplished anything.
