We choose to think about stuff
Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it.
George Orwell, 1984
I watched as my LinkedIn feed filled with the words "delve" and "foster," separated by a scattershot of em dashes and cringe emoji placements. A sad simulacrum of a 40-year-old trying to be 20…but weirder.
A colleague of mine's writing style changed into the sort where bullet points are named and the names are bolded. It did not match the way he took notes. Or the way he talked. Or his opinions on anything, really. Another colleague sent me a research report, which I intended to use as fodder for an essay I was writing, and the first sentence of the first paragraph began with "absolutely."
My wife's collaborator sent her a doc to edit, and she noticed that some of the sources were fake. Clear hallucinations, discovered at bedtime, that she had to fix by hand all night because the deadline was the next day. These were serious documents going to serious people in serious orgs.
I found a great word for this in an article by internet AI writer Gwern. He calls it "ensloppification": the flood of so-called AI slop that has permeated written content since November 2022, when ChatGPT was released. But there is a layer that goes beyond the cut-and-paste of LLM output.
I was at a conference in Chicago, looking at a LinkedIn post from a collaborator. I immediately called it out as ChatGPT output. She said no. Every emoji and visual pattern that I had dismissed as LLM output was actually carefully engineered by her. I was shocked. Nothing against her; she did the right thing: she used LinkedIn best practices to optimize for engagement. It just so happens that LinkedIn is so ensloppified at this point that optimizing for it means learning to write like an LLM.
Gwern notes that people learning English as a second language will largely get their feedback from LLMs, and will thus learn to speak the way LLMs speak. Accordingly, in a few years there could be a whole swath of the population whose carbon-based writing is indistinguishable from that of LLMs.
But this is not the part that concerns me.
The part that concerns me is that the act of thinking about stuff seems to be going out the window. Thinking about stuff is a core value of mine. A large part of getting my PhD, and the tactical skills therein like learning how to code, was of course to advance my career, but it was also to make me a better thinker. As I've learned, though, that is not necessarily a core value of everyone.
Famous AI researcher Andrej Karpathy wrote in a tweet that "agency > intelligence." He elaborates:
People with strong agency tend to set goals and pursue them with confidence, even in the face of obstacles. They’re the type to say, “I’ll figure it out,” and then actually do it. On the flip side, someone low in agency might feel more like a passenger in their own life, waiting for external forces—like luck, other people, or circumstances—to dictate what happens next.
This is all well and good. I am in full agreement that agency is important. I tell my colleagues all the time "you can just do stuff." And this is something that has definitely helped me get to where I am. But on the other hand, we still have the elephant in the room: what stuff should we do? What goals should we set? What things should we figure out?
The thing that is missing here is what I like to call the "relevance filter."[1] This is the ability to look at a bunch of stuff and say "…that one." At 39, I have built up a relevance filter for the life sciences that started around 20 years ago when I began taking chemistry and biology classes in undergrad.
The relevance filter is what kicked in yesterday when my intern had spent four hours debugging a program, I got wind of it, and I realized we could scrap the whole thing, do something simpler, and get the same output. No more debugging. Just move on to the next thing.
The relevance filter is why I can do what I can do. It's understanding that a personal website can double up as a portfolio, and trump any resume. It's understanding that LinkedIn can double up as an open research forum, which can in turn help with marketing, market research, product development, deepening my craft, making connections, and letting the world know that my intern is looking to do a PhD and study the microbiome, and showing off her work. It's seeing things that matter, that others miss.
The relevance filter has belt ranks to it. As of now, I am probably mid-tier. What does a black belt level relevance filter look like? When I was in grad school and running out of money, my aunt said "why don't you try consulting for companies on the side." I thought the idea preposterous, because what the heck did I know? But I went for it, and it changed my life. Here I am, in Berlin, with my own consulting company. My aunt's little nudge is a big part of it.
That same aunt had talked to my sister many years prior. My sister was majoring in biology, on track to become a nurse…and she was miserable. My aunt asked what she was good at and enjoyed, and she said art. But art didn't pay the bills. My aunt said, "Why don't you go into graphic design? The internet is getting big and there's going to be a need for web designers and digital content" (this was circa 2010). So my sister gave it a shot. Fell in love with it. Met her husband in one of the classes. They now have a kid. She is an associate creative director at a graphic design firm.
This is the highest level of relevance filter: knowing exactly which little nudge can change someone's entire life. But alas, there is another word for that. It's not intelligence. It's something else: wisdom. There is not much discussion of wisdom in AI circles right now. But that's really the heart of this.
The idea behind doing things the old-fashioned way at least some of the time (writing and coding without AI, for example, or at least trying it myself before getting AI feedback) is to train my relevance filter. There is a proper way to use AI that still allows for the training of the relevance filter and the pursuit of wisdom, but it has not been fully fleshed out: some protocol that balances using your head and using whatever AI tools are at hand. This is what I'm trying to get right at the moment.
What this looks like is to be determined, but I think it's going to be something to the effect of thinking about the thing first and/or trying the thing out myself, and then consulting AI. A sort of interdependence rather than a dependence. This will get harder as the AI tools become better, more pervasive, and, as I've seen in my Gmail, more pop-up-y. Beyond that, I think the AI companies want us to become dependent on AI, so that we enter an infantile state where we need a subscription to some external tool to properly "think" about stuff. And write emails. And wedding vows. And funeral eulogies.
One of the reasons I am writing this article is as a sort of letter to my future self. The AI tools are going to get better, and with these new tools the core values of society are going to change. It might be that at some point we have smart glasses giving us continuous real-time AI input, so that the line between human thinking and LLM-generated thoughts is completely blurred. It might be that school and university become meaningless in a world where AI can do the thing better and instantly.
It could very well be that, as in the book Fahrenheit 451, "intellectual" becomes a swear word. It could be that I eventually slip into the grasp of AI in the face of otherwise impossible deadlines: 10,000 lines of code due tomorrow. A 300-page report due tomorrow. An email inbox in the thousands that can only be processed by AI (because the bureaucrats will add more paperwork to everyday life in order to justify their existence).
In such a world, I aim to stick to my guns. I'm not anti-AI by any means. I am fine with AI, but I am also pro-wisdom. If Andrej Karpathy is right and agency > intelligence, I counter by saying wisdom > agency. And you can only get wisdom by knowing how to think about stuff in the absence of AI, and by actively doing so. It's interdependence with AI, not dependence on AI. You can only get wisdom by training your relevance filter.
And so I write an essay that could very well be maligned in later years. It could end up in the cringe archives of the future. The stubborn luddite doing old things the old way. Or worse: some future equivalent of being cancelled (as we all know at this point that societal core values are applied retroactively).
But I nonetheless stand here to proclaim to my future self, with his GPT-12-connected smart glasses, along with the like-minded who stand beside me, that we choose to operate without AI sometimes. We choose to train our relevance filters. We choose to pursue wisdom. We choose to value wisdom above agency.
We choose to think about stuff.
[1] My "relevance filter" wording can be attributed to the official term "relevance realization" coined by cognitive scientist and philosopher John Vervaeke. He connects this directly to what wisdom is. So I'm not just blowing wind here.