March 2023


This is the subjective state of March 2023, written for the sake of those reading in the near or distant future. To readers treating this as a news digest: here is what I have seen in my little corner of the physical and digital world this month. To readers in 2030 or after: here is how things were going before the AIs destroyed the world (I'm joking…kinda).


As we struggle to understand large language models, AI guru Andrej Karpathy has created a completely free online class that takes you from the basics of deep learning all the way to building a GPT from scratch. I like to learn how things work from the ground up, so this class is a gold mine.
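In the spirit of that ground-up approach, here is the kind of idea the whole class builds on: gradient descent, sketched below as a toy one-parameter example. This is my own sketch, not code from the course.

#+begin_src python
# Toy gradient descent on f(w) = (w - 3)**2, whose minimum is at w = 3.
# My own illustrative sketch of the core idea, not code from the course.

w = 0.0    # initial guess for the parameter
lr = 0.1   # learning rate (step size)

for step in range(50):
    grad = 2 * (w - 3)   # derivative of (w - 3)**2 with respect to w
    w -= lr * grad       # nudge the parameter against the slope

print(w)  # ~3.0: gradient descent found the minimum
#+end_src

Everything else in deep learning is, loosely, this loop scaled up to billions of parameters.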


ChatGPT, the large language model interface that has been causing quite a stir over the past six months or so, was banned in Italy over possible non-compliance with the EU's GDPR. The GDPR is a data protection regulation that, on my end, has mostly produced paranoia about anything I put on this website. This matters to me because I live next door in Germany. If these large language models do conflict with the GDPR, I'll be at a huge disadvantage against competitors outside of Europe. But hey, free healthcare.


COVID introduced new pro-vs-anti things to argue over: whether we should wear masks, whether we should get the vaccine, and whether there should be legal mandates around each. Now, in the age of ChatGPT, we have a new and rather surprising one: whether we should continue AI research at all. An open letter was published urging a six-month pause on AI research so that alignment (AI-doesn't-kill-us-all) research can catch up.

"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Eliezer Yudkowsky, a prominent AI alignment researcher also known for his writing on the art of rationality, thinks we should permanently stop all AI research. He's been thinking about this longer than I have, and he started an entire blog/forum/movement on how to think properly, so we should probably at least listen and carefully consider what he has to say.

This said, Twitter (which we all still use, distant future readers) seems to be splitting, loudly, into pro- and anti-AI-research camps, depending on who thinks we're all going to die. I was influenced by a recent podcast conversation between Daniel Schmachtenberger and Liv Boeree discussing AI in terms of Moloch, i.e. negative-sum games. This raises game-theoretic questions (if we pause, will China pause?) and societal ones (AI beauty filters giving teens body dysmorphia), so the problem is much bigger than pro/anti AI research. Schmachtenberger calls it the metacrisis.
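To see the Moloch structure in miniature, here is a toy payoff matrix for the pause question, where "we" and "they" are any two rival labs or states. The numbers are made up to show the shape of the trap, not measured stakes.

|          | They pause | They race |
|----------+------------+-----------|
| We pause | 3, 3       | 0, 4      |
| We race  | 4, 0       | 1, 1      |

Racing is the better move whichever the other side picks (4 beats 3, 1 beats 0), so both sides race and land on (1, 1), even though (3, 3) was available to everyone. That is the negative-sum dynamic Schmachtenberger and Boeree are pointing at.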

So do I think we're all going to die because of a metacrisis accelerated by AI? Well, yes. But when? It could be next year, or it could be in 100 years. Or we pull through. I wouldn't be writing this for distant future readers if I thought we were goners in a year. But I don't know.


Every generation has its oracles: people who seem to have an idea of what's coming next. For us, in these times, with respect to AI, it's Gwern, a mysterious internet writer whose real name and face are unknown but whose reputation precedes him, rather like L from Death Note.

He was tinkering with large language models before GPT-3, and he was the first person I saw write about prompt programming and how to do it, long (on AI time scales) before most people were thinking about it.
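For future readers to whom "prompt programming" is either obvious or ancient history: the idea is that you steer a trained model purely through the text you feed it, for example by showing it a few worked examples of a task instead of retraining it. A made-up illustration (not one of Gwern's actual prompts):

#+begin_src python
# A toy few-shot prompt: the "program" is the examples themselves.
# Made-up illustration of the idea, not a prompt from Gwern's writing.
prompt = """Translate English to French.

English: cheese
French: fromage

English: bread
French: pain

English: wine
French:"""
# Fed to a language model, the expected completion is " vin".
#+end_src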

He also wrote about the scaling hypothesis, long before ChatGPT: the idea that the secret to AGI is simply making existing neural networks bigger, with more parameters, more training data, and more compute, and nothing more elaborate than that. The opposite view is that you need something more complex: new algorithms, and/or something more modular like the human brain. The way things are looking with GPT-4, Gwern was right.
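To give a feel for why the scaling hypothesis was taken seriously: the scaling-laws literature (Kaplan et al., 2020) found that a language model's test loss falls as a smooth power law in parameter count \(N\) and dataset size \(D\). Roughly, this is the shape of the result, with the fitted constants left abstract:

\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
\]

where \(N_c\), \(D_c\), \(\alpha_N\), \(\alpha_D\) are empirically fitted constants. Nothing in the formula asks for a new algorithm; loss just keeps falling as you add parameters and data.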

And finally, when Bing Chat came out, it was Gwern who, in a very long comment on a LessWrong post (scroll to bottom, first one), predicted that Bing was using an early version of GPT-4. He was right.

How does Gwern see things going when AGI finally happens? Have a look at this fictional story he wrote to get a feel for how he thinks it could play out.


We are just on the heels of the Silicon Valley Bank failure. This is outside my domain, but it is being covered by independent journalist Balaji Srinivasan. I'm putting it in this newsletter because he predicts hyperinflation, to the point of making a million-dollar bet that 1 BTC would be worth 1 million dollars roughly 90 days after the banking crisis started. He's pro-bitcoin, so note the potential conflict of interest. This said, many more banks are apparently insolvent, because the Fed raised interest rates after the banks had loaded up on long-dated bonds, whose market value falls when rates rise (a toy calculation below makes this concrete). Here is a relevant Twitter thread.

As I link this, I am reminded that a lot of the most up-to-date news I get these days comes from Twitter. Second place is blogs. For science, it's preprints. News media and peer-reviewed publications are too slow these days, especially in fast-moving fields like AI.
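Here is that toy calculation, using a hypothetical zero-coupon bond with illustrative numbers, not any real bank's balance sheet:

#+begin_src python
# Toy illustration: why rising rates hurt bond portfolios.
# Price of a zero-coupon bond = face_value / (1 + rate)**years_to_maturity.
# Illustrative numbers only, not any real bank's book.

def zero_coupon_price(face_value: float, rate: float, years: float) -> float:
    """Present value of a zero-coupon bond at a given market rate."""
    return face_value / (1 + rate) ** years

face, years = 1000.0, 10
price_bought = zero_coupon_price(face, 0.01, years)  # bought when rates were ~1%
price_now = zero_coupon_price(face, 0.05, years)     # marked to market at ~5%

print(f"bought at: {price_bought:.2f}")               # ~905.29
print(f"worth now: {price_now:.2f}")                  # ~613.91
print(f"loss: {(1 - price_now / price_bought):.1%}")  # ~32% mark-to-market loss
#+end_src

A bank holding such bonds is fine if it can wait out the ten years, but insolvent on paper the moment depositors force it to sell at today's prices, which is roughly the SVB story as I understand it.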

What matters for this newsletter is that he thinks we're going to enter hyperinflation soon. We're talking 100% or more, as opposed to the roughly 10% rates the US (and Germany) saw recently. Either way, it will be interesting for future readers to look back at this, because if there are serious problems with the banks, something somewhere is probably going to have to give.

Emacs 28.1 (Org mode 9.5.2)