Observatorio IA - article

A new set of principles has been created to help universities ensure students and staff are ‘AI literate’ so they can capitalise on the opportunities technological breakthroughs provide for teaching and learning. The statement, published today (4 July) and backed by the 24 Vice-Chancellors of the Russell Group, will shape institution- and course-level work to support the ethical and responsible use of generative AI, new technology and software like ChatGPT. Developed in partnership with AI and educational experts, the new principles recognise the risks and opportunities of generative AI and commit Russell Group universities to helping staff and students become leaders in an increasingly AI-enabled world. The five principles set out in today’s joint statement are:

1. Universities will support students and staff to become AI-literate.
2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
4. Universities will ensure academic rigour and integrity is upheld.
5. Universities will work collaboratively to share best practice as the technology and its application in education evolves.
Enrique Dans (30/07/2023)
We are like the family in Poltergeist: sitting in front of an untuned television and saying that line, “they’re here”… No, this is not bad, it is not negative, and we are not going to ban it or regulate it out of existence. It is, quite simply, inevitable. Either we develop a way of distributing wealth that adapts to these new times that are already here, or we will have a very serious problem. And it will not be a problem with the technology: it will be entirely our own.
There’s been a lot of discussion in recent months about the risks associated with the rise of generative AI for higher education. Much of the conversation has centred around the threat that tools like ChatGPT - which can generate essays and other text-based assessments in seconds - pose to academic integrity. More recently, others have started to explore more subtle risks of AI in the classroom, including issues of equity and the impact on the teacher-student relationship. Much less work has been done on exploring the negative consequences that might result from not embracing AI in education.
There are a lot of AI-powered “summariser” tools on the market. These tools allow us to paste in unstructured text and have AI identify important sentences, extract key phrases and summarise the main points of the document. My research shows that many of us are using AI summariser tools to help us learn more from the notes we take in class and at work, and while reading documents, watching videos and listening to podcasts. But while summarising and giving structure to information can help to manage cognitive load and support basic recall, it doesn’t in itself help us to learn.
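To make concrete what these tools do, here is a minimal sketch of the classic extractive technique many summarisers build on: score each sentence by the weight of its terms and keep the top-ranked ones. This is a generic illustration under stated assumptions (TF-IDF scoring, naive sentence splitting), not the method of any specific product mentioned above.

```python
# A minimal extractive summariser sketch: rank sentences by the total
# TF-IDF weight of their terms and return the highest-scoring ones.
from sklearn.feature_extraction.text import TfidfVectorizer

def summarise(text: str, n_sentences: int = 2) -> str:
    # Naive sentence splitting; real tools use a proper sentence tokenizer.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    # Each row of the matrix holds the TF-IDF weights for one sentence.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    # Score a sentence by the total weight of its terms (rewards key phrases).
    scores = tfidf.sum(axis=1).A1
    # Keep the top-scoring sentences, restored to their original order.
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    top = sorted(ranked[:n_sentences])
    return ". ".join(sentences[i] for i in top) + "."

notes = ("Generative AI can draft essays in seconds. Universities worry about "
         "academic integrity. New principles commit them to AI literacy. "
         "Critics note risks to equity and the teacher-student relationship.")
print(summarise(notes))
```

As the passage argues, tools like this compress and structure information, but the selection and condensation is done for us, which is precisely why it supports recall more than learning.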
My initial research suggests that just six months after OpenAI gave the world access to AI, we are already seeing the emergence of a significant AI-Education divide. If the current trend continues, there is a very real risk that - rather than democratising education - the rise of AI will widen the digital divide and deepen socio-economic inequality. In this week’s blog post I’ll share some examples of how AI has negatively impacted educational equity and - on a more positive note - suggest some ways to reverse this trend and decrease, rather than increase, the digital and socio-economic divide.
