I've had a few people new to the world of AI alignment and AGI x-risk reach out about beginning resources, so here's a quick compilation.
I've kept it short to focus on the highest value content. You can find plenty more by following links and searching terms in the resources below.
- [Post] Benefits & Risks of Artificial Intelligence by Future of Life Institute (good intro)
- [Post] Road to Superintelligence by Tim Urban (fun, accessible intro)
- [Book] Superintelligence by Nick Bostrom (somewhat dense, but very comprehensive, and it argues the case for taking AGI x-risk seriously very well)
- [Videos] Intro to AI Safety, by Rob Miles (plenty more on his YouTube channel)
At that point, if you want to go deeper, I highly recommend the AGI Safety Fundamentals online course. It's the next level of depth when it comes to AGI safety & alignment.
If you'd like to follow along with the field but don't think you can go as deep as a full course, I can suggest my own podcast, The AGI Show (also on YouTube). It's targeted at technical but non-expert audiences who want to better understand the state of the field and how to contribute.
Finally, Holden Karnofsky has some great resources on how to contribute, targeted at a variety of audiences:
- Jobs that can help (get a job in the space)
- What AI companies can do (work on this with your employer)
- What governments can do (work on this with your government via public service, lobbying, voting, etc.)
One last note: It's very easy in this field to get caught in the "doomer" whirlpool – endless content that's quite negative and defeatist. Almost everyone in this field has fallen into this trap at some point. As you go through AI safety content, remember to stay positive and pragmatic. Ultimately, all we can do is contribute meaningfully (by taking action) and accept that the future is fundamentally uncertain. Anybody who says they know the future is wrong!
Good luck! Don't hesitate to reach out if I can be helpful with your journey into this space as well 🙂.