Hello readers, and welcome to From AI to ZI! For the next year, I’m being funded by Open Philanthropy to learn about, and hopefully contribute to, AI safety. As part of my grant, I’ll be keeping a blog to document my progress and thoughts, raise awareness, and clarify my own thinking by putting it into writing. This is that blog!
For those who don’t know, my name is Robert Huben, and I’m trying to get into AI safety. I completed my Ph.D. in mathematics at the University of Nebraska-Lincoln in 2021, and spent the next year as a scientist at a defense contractor. I believe strong, unaligned AI poses an existential risk to humanity, and that avoiding such a catastrophe is one of the most pressing issues of our time. Luckily, I also have a large enough ego to think that I could contribute to that pivotal endeavor, so for the next year I’ll be doing a self-directed study/research program to get up to speed, find my best fit in the world of AI safety, and hopefully start contributing my own insights.
What can you expect from this blog? For now, a post every 1-3 weeks, diving into a paper, book, AI system, or safety project, or describing my priors on AI capabilities and safety. In the longer term, it’s harder to say, although I am willing to make two solemn commitments right now:
1. I don’t have the self-discipline to keep this blog to only AI topics, so I will blog about other thoughts that I can’t otherwise get out of my head. Each of these posts will come with an exclusive rating on a scale from “AI” to “ZI”, depending on its relevance to AI safety.
2. If I solve AI alignment in the next twelve months, I will break the news on this blog first.
Thanks to Open Philanthropy for their generous support, and thank you for reading!