2024 May

Home

I originally was going to have this be a newsletter, but I could not keep up. So now I am considering these to be snapshots. I'll post a few here and there, and if I don't post for a while, that's ok. The value is not in being up to date with whatever is happening, but in looking back at what was going on in my little corner of the world (and my mind) however many years ago. It's already interesting to look back a year and see my impression of where the world was at. Helps with things like hindsight bias.


I have been watching a YouTube creator called Tsoding, who was described in a Twitter post as the "Bob Ross of coding." He does what he calls recreational programming, and his YouTube channel is essentially his Twitch live streams. I watch his content the way my mom watches cooking shows: in part for entertainment and in part for tacit knowledge. These are the things he teaches without directly teaching them. You can see him decompose problems, organize his thoughts, see how often he tests the code he's writing, how he reacts when there's an error. There's a ton of information here that you otherwise would not get if he had some prepared video on how to do a thing.

On the topic of tacit knowledge, there is a beautiful LessWrong post which is essentially a compendium of "tacit knowledge videos" like this across various subjects, that help one get tacit knowledge around anything from programming to building a bow and arrow in the wilderness (of note: Tsoding isn't here, but definitely should be).

In short, I am starting to value the concept of tacit knowledge a heck of a lot more. I did a 3-day bioinformatics seminar in Wisconsin last March and I realized that although I had a ton of stuff prepared, the times where I seemed to be teaching the most stuff per unit time were when I just had a blank text editor open and was coding on the fly, based on some left-field question. Perhaps I should do live streams myself accordingly.

This is part of a bigger discussion happening at this point in history that revolves around what makes humans valuable as AI's capabilities expand. If AI can write and code, then how can human writers and coders add value? What is the "human touch" exactly? These are hard questions, and they change month to month as AI gets better, but I think there is some signal in this whole discussion around tacit knowledge being shaken out of in-person lessons, livestreams, and these kinds of things.


As an update on the AI front, GPT-4o came out recently, which has multimodal input, and can talk to you just like the AI in the movie Her. There is some drama around Scarlett Johansson's voice here that I won't get into.

The point of talking about AI here is for future readers to see exactly where things stood right now, in order to serve as inoculation against hindsight bias. It's funny to see the different camps here.

We have the people who think that AI is going to completely wipe everyone out. Eliezer Yudkowsky is the flagship example of this. He has been on podcasts that are titled "we're all going to die." Yudkowsky has written millions of words on how to think, and has studied AI at a level much deeper than I have, so I'm inclined to at least consider his position.

Then, at least on LinkedIn (a prominent B2B social media channel in 2024), we see some people strawmanning AI. Here's a cherry-picked example of ChatGPT hallucinating, therefore AI is not going to do anything of substance ever. Not in the next 10 years of innovation. Not at all.

Then we have so-called effective accelerationism, or e/acc in shorthand. These are people, perhaps trolls, who profess that they want to bring about the so-called singularity as fast as possible. That it will lead to utopia, and anyone who says otherwise is a doomer or whatever. The most prominent person here is Guillaume Verdon, the man who calls himself Beff Jezos on Twitter, who you can see in this Lex Fridman podcast.

Though people are slowly getting used to the idea that AI is just going to be part of our lives from here on out, there is still a bit of fear in terms of how the human is going to stay relevant. Some of this is derived not from AI itself but from the intersection between AI and for-profit incentives. If you can replace 90% of knowledge workers with AI systems, with no hit on profit, well…if you don't, then your competitors will, and then they will steamroll you given all the capital they save with the lighter payrolls. Or maybe you just expect that each worker has to produce 100x more code or marketing copy or whatever else now. Remember when email showed up and made communication so much faster? Did we all just work four-hour days from then on? No…more was expected per worker.


This year is an election year. For those reading this long after the 2024 election, it is between Joe Biden and Donald Trump. It is basically a complete mess. I'm not here to make political commentary but no one seems to be looking forward to this one.

My mom and stepdad stopped watching the news altogether. Between the election, the current Russia-Ukraine and Israel-Hamas wars, and everything else going on, it's too depressing.

I am jotting this down here because perhaps a reader in 2030 is looking back at the simple and peaceful times of 2024, thinking how relatively nice it was and how it sure would be nice to go back to those good old days or whatever. This is just to say that there are a handful of us right now who think that things are an absolute mess and are only going to get worse from here. And it sure would be nice to go back to the relatively simple times of 2017.

One lens I am looking at the state of the world through is that of things like planetary boundaries, many of which have apparently been crossed. Climate change, ocean acidification, fossil fuels. The stuff the hippies have been talking about for a long time. As soon as we see the effects here, all the other stuff on the news (politics, wars, etc.) gets a whole lot worse. Nate Hagens does a great job talking about this in a very straightforward manner. If we're seeing effects from being a crappy steward of the biosphere at the time of reading this, you'll have to tell me if it looks something like Nate Hagens's idea of the Great Simplification.

In short, we're going to want AGI to be working on these kinds of problems. What we don't want is companies using AGI to extract stuff from the Earth faster than competing companies trying to do the same thing.


You can see in my writings that I tend to link a lot of stuff. This is in part because there are things I've seen that I think are relevant to whatever I'm writing. This is also in part to remind me of cool websites, videos, podcasts and whatever else that I once was into and then forgot. But there's a fundamental problem here. Many websites are disappearing.

According to one study by Chapekis and colleagues from Pew Research Center, (hopefully still) linked here, 38% of web pages from 2013 are no longer accessible. In this regard, what I really should be doing is linking sites using the Internet Archive. Internet writer Gwern wrote about this in 2011. Apparently it's been a problem for a while. He excerpts a passage from a Wikipedia article on the topic of link rot:

In a 2003 experiment, Fetterly et al 2003 discovered that about one link out of every 200 disappeared each week from the Internet. McCown et al 2005 discovered that half of the URLs cited in D-Lib Magazine articles were no longer accessible 10 years after publication [the irony!], and other studies have shown link rot in academic literature to be even worse (Spinellis, 2003, Lawrence et al 2001). Nelson & Allen 2002 examined link rot in digital libraries and found that about 3% of the objects were no longer accessible after one year.

The rest of his article goes deep into this, including solutions like making your own archives. A lot of blogs and social media accounts make posts to be read and liked and commented on in the moment, and then it's time for the next one. But if you're making posts that are intended to be read 30 or 300 years from now (depending on how optimistic you are about the future of humanity), then you should probably look into these solutions.
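To make the two ideas above concrete, here's a minimal sketch in Python. The function names are my own; the URLs are the Wayback Machine's public "Save Page Now" endpoint and its availability API as I understand them (worth double-checking against the Internet Archive's current docs before relying on them). The last function just does the arithmetic implied by Fetterly's figure: if roughly 1 in 200 links dies each week, the half-life of a link works out to about 138 weeks, or 2.7 years.

```python
import math
from urllib.parse import quote

def wayback_save_url(url):
    """Build the Internet Archive 'Save Page Now' URL for a link.
    Fetching (or visiting) this URL asks the Wayback Machine to archive it."""
    return "https://web.archive.org/save/" + url

def wayback_availability_url(url):
    """Build a query URL for the Wayback availability API; fetching it
    returns JSON describing the closest archived snapshot, if any."""
    return "https://archive.org/wayback/available?url=" + quote(url, safe="")

def link_half_life_weeks(weekly_loss_rate):
    """Half-life implied by a constant per-week disappearance rate,
    e.g. Fetterly et al's ~1 in 200 links per week."""
    return math.log(0.5) / math.log(1.0 - weekly_loss_rate)

print(wayback_save_url("https://gwern.net/archiving"))
print(round(link_half_life_weeks(1 / 200), 1))  # ~138.3 weeks, about 2.7 years
```

Note this decay model assumes a constant loss rate, which real link rot surely doesn't follow exactly, but it gives a feel for the timescale.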


It's always fun to see something totally new out of biology. This preprint describes a completely new form of life: viroid-like elements being called Obelisks. From the abstract:

Here, we describe the “Obelisks,” a previously unrecognised class of viroid-like elements that we first identified in human gut metatranscriptomic data. “Obelisks” share several properties: (i) apparently circular RNA ~1kb genome assemblies, (ii) predicted rod-like secondary structures encompassing the entire genome, and (iii) open reading frames coding for a novel protein superfamily, which we call the “Oblins”. We find that Obelisks form their own distinct phylogenetic group with no detectable sequence or structural similarity to known biological agents.

At the time of writing [2024-05-24 Fri] it is a preprint. However, this comes from the lab of Andrew Fire at Stanford (Nobel Prize winner and all around passionate about all things biology, teacher in my Advanced Genetics class in Fall 2011), so I trust that it is good work.

Emacs 28.1 (Org mode 9.5.2)