I love learning new things and working on different, important problems that make the world a better place.
Some of the roles I've worked in over the years:
- Startup founder (many hard-earned lessons, one modest exit)
- Software engineer (e.g. at Plaid, during its 15 -> 170 person growth stage)
- Hardware engineer (e.g. at Vow, building lab automation systems)
- Engineering manager (e.g. at Plaid, managing 6-12 team members, including cloud infra, dev tooling & cybersecurity)
- Head of Engineering (0 -> 8 team members, at early-stage startup Vow)
- Head of Technology (~35 team members, at early-stage startup Vow)
- A long list of other roles you take on in startups, from building recruitment processes and designing equity plans to negotiating commercials, setting up manufacturing, and much more.
All in all, I absolutely love wearing many hats and figuring out ambiguous problems with highly motivated, early-stage teams.
Right now, I'm on a sabbatical working full-time on AI alignment, because I believe we're living in the most important century, as Holden Karnofsky aptly puts it.
I host The AGI Show podcast, where I interview AI experts on the risks & opportunities that artificial general intelligence (AGI) poses for our world. I deeply value all feedback, requests & questions - reach out via your preferred platform or at the contact at the bottom of this page.
If you're new to the challenge of AI alignment, check out Want to learn about AGI safety & alignment? - Beginner resources.
If you're skeptical about the field of AGI alignment, please reach out with your arguments. I would love to be wrong about the risks of misaligned AI and to find out that things will be OK even if we don't work on this issue. So far, I have not seen such arguments.
If you're working on AI alignment and think I can be helpful, don't hesitate to reach out. Please message me at the contact below:
You can find my email address behind a CAPTCHA here.
You can also find me on: