Picture yourself in 1989, right before the creation of the World Wide Web. Imagine that you knew, before anyone else, the impact that the Internet was going to have. What skills would you say were the most important to teach to the ‘new generation’? This is not an easy question to answer. In many cases, what we saw between the mid-1990s and 2000s was that education systems focused on helping students learn how to use specific technology tools, and not on how to thrive in a society being re-shaped by the Internet.
Now let’s get “back to the future” and jump ahead to where we find ourselves today.
Imagine that you know, before anybody else, how artificial intelligence (AI) and data-intensive digital services will shape the next decade. What skills would you say are the most important to teach to the generation of learners currently in school? This is also not an easy question to answer. In many cases, education systems still focus primarily on helping learners use specific technology tools. Perhaps this time we can do better.
There are many approaches today to developing digital skills. Depending on the framework adopted, digital skills may or may not be included among so-called ‘21st-century skills’. Most of the literature highlights the relevance of these capacities for the world that is coming (in some cases, for a world that is already here). There is no single framework to use as an unequivocal reference (although some have more visibility than others): different countries and regions have decided to assess and measure these skills using different techniques and approaches.
Perhaps one of the differences between 1989 and 2019 is that societies today are more aware of the influence of digital technologies on almost every aspect of our lives. Given that our use of technologies has become more sophisticated, it is reasonable to expect that the skills and knowledge required are also more complex. As we will see, this complexity is not about how difficult it is to interact with certain tools (simplicity in technology is the golden rule) but rather about the capacity to think critically and assess contexts.
The current expansion of so-called “smart technologies” (adaptive, predictive, personalized) might make our lives easier in some cases (e.g. assistive chatbots or robotic caregivers). But it is also true that we are now much more aware of some of the unintended (in some cases negative) consequences of these new technologies than we were in the recent past.
Some people might feel uncomfortable with the idea that AI or robots can be part of the current conversation in education. What is unknown tends to produce fear, or outright rejection. On the other hand, perhaps it is time to think about how to better prepare the next generation to thrive in contexts where data-intensive systems might assist or replace a number of skills and capacities currently developed in schools.
Learning how to interact with robots (understand, use, collaborate, behave, trust, feel) might not be sci-fi anymore. Joseph Aoun, the author of Robot-Proof, suggests that reading, writing and mathematics remain baseline capabilities for modern society, but that new challenges demand at least three more literacies: data literacy (to read, analyze and use an ever-rising tide of information); technological literacy (including coding and understanding how machines tick); and human literacy (understanding how to function in the human milieu).
Before taking a strong position on what role AI can play in education, it might be a good idea to remember that humans have all sorts of intelligence and capacities that go far beyond what narrow AI can do today. Rosemary Luckin emphasizes that human intelligence is immensely rich and varied. Considering social intelligence, emotional intelligence and self-efficacy, Professor Luckin contends that one potential role for AI in education is to augment human intelligence, with AI supporting decision-making processes rather than replacing people through automation.
Neil Selwyn, a professor of education at Victoria’s Monash University and the author of a new book, Should Robots Replace Teachers?, suggests that “the worry is not that teachers are going to be replaced but that they’re going to be displaced or de-professionalized.” Others go further, suggesting that AI will replace jobs or whole sections of the workforce. Whatever the case, research shows that educators might need support and guidance to adopt and teach these new forms of knowledge and language, and that their role in students’ learning experience remains fundamental.
Which would you rather be: the passenger or the driver?
Policies promoting the development of computational thinking have gained visibility and relevance. Countries like the UK have adopted computational thinking as a central component of their national curriculum. Today it is possible to find a growing number of countries (and civil society initiatives) promoting not only learning how to use technologies, but also how to create new ones. Perhaps one of the most interesting questions in this context is whether to teach computational thinking as a subject, or to incorporate it across different disciplines, integrated as a “transversal literacy”. Both approaches have pros and cons, and it is quite likely that this will remain an ongoing conversation.
Those who promote computational thinking emphasize that it is not about coding but about understanding, from the driver’s seat rather than the passenger’s, how the technology works and what its implications are for today’s society. As recently announced by the OECD, the 2021 PISA assessment will incorporate aspects of computational thinking for the first time. The emphasis will be on the processes and mental models (e.g. abstraction, algorithmic thinking, automation, decomposition and generalization) that learners need to succeed in an increasingly technological world.
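To make these mental models a bit more concrete, here is a minimal Python sketch (purely illustrative, not drawn from the PISA framework or any national curriculum) showing how a small problem can be decomposed, abstracted, expressed as an algorithm and then generalized:

```python
# Illustrative sketch only: a toy example of the mental models mentioned
# above (decomposition, abstraction, algorithmic thinking, generalization).

def tokenize(text):
    # Decomposition: break the larger question ("which words appear most
    # often?") into a smaller, well-defined step.
    return text.lower().split()

def count_words(words):
    # Abstraction: represent the text as word -> frequency pairs, ignoring
    # details (order, formatting) that do not matter for this question.
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

def top_words(text, n=3):
    # Algorithmic thinking: a clear sequence of steps a machine can automate.
    # Generalization: the same procedure works for any text and any n.
    counts = count_words(tokenize(text))
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)[:n]

print(top_words("the web changed schools and the web changed work"))
# [('the', 2), ('web', 2), ('changed', 2)]
```

The point is not the code itself but the habits of mind it exercises: the same way of framing a problem applies whether or not a computer is involved.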
Interestingly, the more social the experience of using technologies becomes, the closer the connection between digital skills and so-called social-emotional skills. As important as learning to code might be, it will also be important to learn how to decode. Identifying new problems can lead us to change the way we see today’s skills. Here are some examples of transversal competencies that illustrate socio-technical capacities worth considering:
- Algorithmic thinking: To what extent might the information presented by algorithms influence ideas, feelings or decisions? How, when, and to what end might automated systems impact people’s lives? How to understand the potential cost of automated decisions? How to develop algorithm awareness to deal with potential bias (see the sketch after this list)?
- Smart skepticism: How to develop selective trust to deal with deepfakes or fake news? What techniques, protocols or good practices can help us select reliable information? How to manage trust in data-intensive environments? How to promote independent thinking, demanding evidence or even thinking scientifically, with a healthy dose of skepticism?
- Ethical fluency: How to infuse ethical thinking into the design, deployment, and adoption of information technologies? How to incorporate privacy and data protection in every stage of technology adoption? How to transition from the motto “move fast and break things” towards work that benefits your community but does not negatively affect others?
- Self-regulation: In contexts of over-stimulation and hyper-connection, how to self-regulate one’s behavior, emotions, and thoughts in different digital environments, especially when they might affect others (or yourself)? What are the best strategies for maintaining online attentional focus?
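The algorithm-awareness question raised in the first item above can be made tangible with a toy example. The Python sketch below uses invented data and a made-up scoring rule (both are assumptions for illustration, not a real system) to show how a seemingly neutral automated ranking can quietly favor one group when it leans on a biased proxy variable:

```python
# Hypothetical example: an automated ranking that encodes bias through a
# proxy variable (hours spent online), which reflects Internet access at
# home rather than merit.

applicants = [
    {"name": "A", "test_score": 88, "hours_online": 30},
    {"name": "B", "test_score": 91, "hours_online": 5},   # limited connectivity at home
    {"name": "C", "test_score": 75, "hours_online": 40},
]

def naive_rank(applicant):
    # The weight on 'hours_online' looks harmless, but it systematically
    # rewards applicants who simply have better Internet access.
    return applicant["test_score"] + 0.5 * applicant["hours_online"]

def score_only_rank(applicant):
    # One possible correction: drop the proxy and rank on the score alone.
    return applicant["test_score"]

print(sorted(applicants, key=naive_rank, reverse=True)[0]["name"])       # 'A'
print(sorted(applicants, key=score_only_rank, reverse=True)[0]["name"])  # 'B'
```

Noticing that the two rules pick different “winners”, and asking why, is exactly the kind of algorithm awareness the list above points to.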
Some people might be interested in learning about AI4K12. This North American community of scholars promotes national guidelines (but not curricula) for AI education in K-12 and argues that virtually everyone will need a basic understanding of the technologies that underpin machine learning and artificial intelligence. They argue that students should understand and evaluate new AI technologies and critically consider the ethical and societal questions these technologies raise.
The future of education poses a number of challenging questions: If machines are learning, what should we teach to those who are not machines? How should we design future-proof capacities for the coming generations? Instead of training students to deal with today’s technologies, how can we better prepare them to make sense of complex or unknown problems tomorrow? What are the foundational knowledge and capacities that won’t become obsolete? And, equally important: how can educators (among other experts) be involved in this conversation?
“We’re always predicting the future and we’re always wrong about it”, says Ian McEwan. Although we might not be able to predict the future, collapsing the time between 1989 and 2019 offers an opportunity to think forward and design transformative solutions where more people can be drivers toward their own destinations, and not only passengers in other people’s vehicles.
Source: World Bank