Observatorio IA - The Future of AI

Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI (1), there is a lack of consensus about how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development (R&D) with proactive, adaptive governance mechanisms for a more commensurate preparation.
I meet Sutskever, cofounder and chief scientist of OpenAI, at his company's offices on a nondescript street in San Francisco's Mission District (California, USA). The goal is for him to share his predictions about the future of a technology known worldwide, one whose development he has had a great deal to do with. I also want to know what he thinks is coming next and, in particular, why building the next generation of OpenAI's flagship generative models is no longer the focus of his work. Instead of building the next GPT or the image generator DALL-E, Sutskever says his new priority is figuring out how to prevent an artificial superintelligence, a hypothetical future technology he sees coming with the foresight of a true believer, from going rogue.
