Search results

  1. Rationality: A-Z (or "The Sequences") is a series of blog posts by Eliezer Yudkowsky on human rationality and irrationality in cognitive science. It is an edited and reorganized version of posts published to Less Wrong and Overcoming Bias between 2006 and 2009.

  2. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it.

LessWrong (www.lesswrong.com)

And the older Yudkowsky-led research on decision theory, tiling, and reflective probability is relevant. But this basic argument is in some sense simpler than those essays: less advanced, but also more radical ("at the root").

  4. Jan 9, 2024 · In Episode #10, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, ...

    • 28 min · 10.5K · For Humanity Podcast
  5. Apr 8, 2023 · Eliezer says that we can in principle make AI safe, but argues that it could take decades to advance AI safety to the point where we can be sufficiently confident that creating an AGI would have net positive utility.

  6. Nov 14, 2023 · Madhumita Murgia and John Thornhill speak to Yoshua Bengio, a pioneer of generative AI, Yann LeCun, head of AI at Meta, and Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute.

  7. Apr 5, 2024 · In my first foray into this area, I briefly mentioned Eliezer Yudkowsky’s essay Taboo Your Words, where Yudkowsky suggests people replace words with descriptions of the concepts to which those words are meant to refer, in order to prevent differences in definitions from causing unnecessary confusion.