About me

Hi, I'm Soroush 👋🏽

I love learning new things and working on different, important problems that make the world a better place. I'm currently working to ensure ever more advanced general AI systems are safe, aligned, & positive for our world at my new startup, Harmony Intelligence.

Past roles

  • Startup founder (many hard-earned lessons, one modest exit)
  • Software engineer (e.g. at Plaid, during its 15 -> 170 person growth stage)
  • Hardware engineer (e.g. at Vow, building lab automation systems)
  • Engineering manager (e.g. at Plaid, managing 6-12 team members across cloud infra, dev tooling & cybersecurity)
  • Head of Engineering (0 -> 8 team members, early-stage startup Vow)
  • Head of Technology (~35 team members, mostly PhD scientists & engineers, early-stage startup Vow)
  • A long list of other roles you take on at startups: building recruitment processes, designing equity plans, negotiating commercial terms, setting up manufacturing, and much more.

All in all, I absolutely love wearing many hats and figuring out ambiguous problems with highly motivated, early stage teams in pursuit of an important mission.

Current work in AI safety & alignment

Since Jan 2022, I've been working full-time on AI safety through upskilling, research, and evaluating opportunities for impact.

I'm now building a world-class team of AI safety experts to help evaluate & make advanced general AI models (e.g. LLMs) safer at a new startup called Harmony Intelligence, with a particular focus on catastrophic risk through misuse and misalignment. My co-founder at Harmony is Alex Browne, one of the most talented & high-integrity engineers I've ever had the pleasure of working with.

Reach out if you're interested in AI model evaluations, red-teaming, safety, and cybersecurity.

I also host The AGI Show podcast to interview AI experts on the risks & opportunities of artificial general intelligence (AGI) for our world. I deeply value all feedback, requests & questions - please reach out.

If you're new to the challenge of AI alignment, check out "Want to learn about AGI safety & alignment? - Beginner resources".

If you're skeptical about the field of AGI alignment, please reach out with your arguments. I would love to be wrong about the risks of misaligned AI and to find out that it will be OK even if we don't work on this issue. So far, I have not seen such arguments.

If you're working on AI alignment and think I can be helpful, never hesitate to reach out.

Contact me

You can find my email address behind a CAPTCHA here.

You can also find me on: