Observatorio IA - regulation

Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI (1), there is a lack of consensus about how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development (R&D) with proactive, adaptive governance mechanisms for a more commensurate preparation.
Recording of the interview conducted by Lorena Fernández Álvarez, Director of Digital Communication at the Universidad de Deusto, with Gemma Galdon Clavell, an expert in public policy analysis specializing in the social impact of digitalization and Artificial Intelligence, and CEO of Eticas Consulting. She reflects on the social impact of algorithmic systems and on how AI processes reproduce society's biases. AI outputs tend to generate small, homogeneous worlds instead of demanding larger, more heterogeneous ones. The interview was recorded on 26 October 2023, on the occasion of her keynote at the DeustoForum event "Los Retos Éticos ante la Inteligencia Artificial" at the Universidad de Deusto (Bilbao campus).
Gary Marcus TED (12/05/2023)
Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future.