Russell thinks the key to making AI systems both safer and more powerful lies in making their aims inherently unclear, or, in computer-science terms, in introducing uncertainty into their objectives. As he says, “I think we actually have to rebuild AI from its foundation upwards. The foundation that has been established is that of the rational [human-like] agent in optimization of objectives. That’s just a special case.” Russell and his team are developing algorithms that actively seek to learn from people what they want to achieve and which values matter to them. He describes how such a system can provide some protection, “because you can show that a machine that is uncertain about its objective is willing, for example, to be switched off.”
— Read on blogs.scientificamerican.com/observations/dont-panic-about-ai/
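The switch-off point can be illustrated with a toy calculation (a minimal sketch, not Russell's actual formulation; the belief distribution and payoffs here are hypothetical). A robot holds an uncertain belief about the utility of its action. Acting directly yields the unknown utility U; switching itself off yields 0; deferring lets a human who knows U shut the robot down whenever U is negative. In expectation, deferring is never worse than either alternative, so an objective-uncertain robot has no incentive to resist the off switch:

```python
import random

def expected_utilities(belief, n=100_000):
    """Compare acting directly with deferring to a human overseer.

    belief: a function returning samples from the robot's uncertain
    belief over the true utility U of its action.
    Acting directly yields U; deferring yields U when the human
    allows the action (U >= 0) and 0 when the human switches it off.
    """
    samples = [belief() for _ in range(n)]
    act = sum(samples) / n                         # E[U]: just act
    defer = sum(max(u, 0.0) for u in samples) / n  # E[max(U, 0)]: defer
    return act, defer

random.seed(0)
# Hypothetical belief: the robot thinks U is roughly Normal(0.1, 1.0),
# i.e. the action is probably good but might be quite bad.
act, defer = expected_utilities(lambda: random.gauss(0.1, 1.0))
assert defer >= act   # deferring (staying switch-off-able) never hurts
assert defer >= 0.0   # and beats shutting down unilaterally
```

The inequality holds sample by sample, since max(u, 0) is at least u and at least 0; uncertainty is what gives the human's veto positive expected value, which is the protective effect Russell describes.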
