Big ideas in AI education

By Sue Sentance.

The AI4K12 project’s Five Big Ideas in AI, © AI4K12

Originally published in Hello World: The Big Book of Computing Content, Oct 2022. All information true at the time of original publishing.

From September 2021 to March 2022, the Raspberry Pi Foundation hosted a series of seminars in partnership with The Alan Turing Institute focused on artificial intelligence (AI), machine learning, and data science education (helloworld.cc/AIseminars). These topics are important both for the Foundation’s learning resources for learners and educators and for our programmes of research, and they will only become more important as AI becomes increasingly ingrained in our societies. In this article, I will summarise and explore some of the ideas shared in one of these seminars, presented by Professor Dave Touretzky and Professor Fred Martin, about how to approach teaching AI.

AI4K12

The AI4K12 project (ai4k12.org), spearheaded by Touretzky and Martin, focuses on teaching AI in K–12 (that is, to learners aged 5–18) in the US. The AI4K12 team has aligned its vision for AI education to the CSTA standards for computer science education (helloworld.cc/CSTAstandards). These standards, published in 2017, describe what educators should teach in US schools across the discipline of computer science, but they say very little about AI. This gap was the stimulus for starting the AI4K12 initiative.

The AI4K12 project has a number of goals. One is to develop a curated resource directory for K–12 teachers, and another is to create a community of K–12 resource developers. Several members of the AI4K12 working group are classroom practitioners who have made a huge contribution to taking this project from idea stage to fruition. If you’ve heard of AI4K12 before, it’s probably because of the Five Big Ideas the team has set out to encompass the AI field from the perspective of school-aged children (helloworld.cc/fivebigideas). These ideas are:

  1. Perception: the idea that computers perceive the world through sensing

  2. Representation and reasoning: the idea that agents maintain representations of the world and use them for reasoning

  3. Learning: the idea that computers can learn from data

  4. Natural interaction: the idea that intelligent agents require many types of knowledge to interact naturally with humans

  5. Societal impact: the idea that artificial intelligence can impact society in both positive and negative ways

We sometimes hear concerns that resources being developed to teach AI concepts to young people are too narrowly focused on machine learning, particularly supervised learning for classification. It’s clear from the AI4K12 Five Big Ideas that the team’s definition of the AI field encompasses much more than this one area. Although the five ideas were developed for a US audience, I believe the description they lay out is immensely useful to all educators, researchers, and policymakers around the world who are interested in AI education.

During the seminar, Touretzky and Martin shared some great practical examples. Martin explained how the big ideas translate into learning outcomes for each of the four age groups (ages 5–8, 9–11, 12–14, and 15–18). You can find out more about their examples in their presentation slides (helloworld.cc/AI4K12ppt) or the seminar recording (helloworld.cc/AI4K12seminar).

I was struck by how much the AI4K12 team has thought about progression — what you learn when, and in which sequence — which we really do need to understand well before we can start to teach AI in any formal way. For example, looking at how we might teach visual perception to young people: children might start, when very young, by using a tool such as Teachable Machine (helloworld.cc/teachablemachine) to understand that they can teach a computer to recognise what they want it to see. They might then move on to building an application using Scratch plug-ins or CalypsoAI (calypsoai.com), and later to learning about the different levels of visual structure and understanding the abstraction pipeline — the hierarchy of increasingly abstract things.
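To give a flavour of what happens when a child trains a tool like Teachable Machine, the short sketch below uses scikit-learn (my own illustration, not how Teachable Machine itself is built): the computer is given a few labelled examples, each reduced to a couple of made-up numeric features, and then recognises a new one.

# A minimal sketch of the idea behind tools like Teachable Machine:
# the computer learns to recognise categories from labelled examples.
# (Illustrative only; real tools work on raw images with neural networks.)
from sklearn.neighbors import KNeighborsClassifier

# Each example is described by two made-up features:
# [average redness, roundness], both on a 0-1 scale.
examples = [
    [0.9, 0.8],  # a red, round fruit
    [0.8, 0.9],
    [0.2, 0.3],  # a yellow, elongated fruit
    [0.1, 0.2],
]
labels = ["apple", "apple", "banana", "banana"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(examples, labels)            # "teaching" the computer

print(model.predict([[0.85, 0.75]]))   # -> ['apple']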

Glass and opaque boxes

Touretzky and Martin support teaching AI to children using a glass-box approach. By this we mean that we should give students information about how AI systems work, and show the inner workings, so to speak. The opposite would be an opaque-box approach, which would mean showing students an AI system’s inputs and outputs only, to demonstrate what AI is capable of without trying to teach any technical detail.

The AI4K12 researchers are keen for learners to understand, at an age-appropriate level, what is going on inside an AI system, not just what the system can do. They believe it’s important for young people to build mental models of how AI systems work, and that when young people get older, they should be able to use their increasing knowledge and skills to develop their own AI applications.
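One way to make this distinction concrete (a sketch of my own, not an AI4K12 resource) is to contrast two views of the same trained model: calling it only to get predictions, versus opening up what it has learnt. scikit-learn’s decision trees lend themselves to this, because the learnt rules can be printed out and discussed.

# Opaque box vs glass box: the same trained model, viewed two ways.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(iris.data, iris.target)

# Opaque-box view: inputs go in, a prediction comes out.
print(model.predict([iris.data[0]]))

# Glass-box view: the learnt decision rules are opened up for discussion.
print(export_text(model, feature_names=list(iris.feature_names)))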

What does AI thinking look like?

Touretzky addressed the question of what AI thinking looks like in school. His approach was to start with computational thinking (he used the Barefoot project’s description of computational thinking as a starting point; helloworld.cc/barefootCT) and to describe AI thinking as an extension that includes the following skills:

  • Perception

  • Reasoning

  • Representation

  • Machine learning

  • Language understanding

  • Autonomous robots

He went on to describe AI thinking as furthering the ideas of abstraction and algorithmic thinking commonly associated with computational thinking, stating that with AI, computation actually is thinking. My view is that to fully define AI thinking, we need to dig a bit deeper into, for example, what is involved in developing an understanding of perception and representation.

Thinking back to a previous Raspberry Pi Foundation research seminar, Professor Matti Tedre and Dr Henriikka Vartiainen shared their description of computational thinking 2.0 (helloworld.cc/tedreseminar). Their description focuses only on the ‘Learning’ aspect of the AI4K12 Five Big Ideas, and on the distinct ways that thinking underlies data-driven programming and traditional programming. From this, we can see some differences between how different groups of researchers describe the thinking skills young people need in order to understand and develop AI systems. Tedre and Vartiainen are working on a more granular description of machine learning thinking, which has the potential to impact the way we teach machine learning in school.
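A simple way to illustrate that distinction (again, my own sketch rather than Tedre and Vartiainen’s material) is to solve the same small task twice: once with rules written by the programmer, and once with rules learnt from labelled data.

# Traditional programming: the programmer writes the rules.
def is_spam_by_rules(message):
    return "free" in message.lower() or "winner" in message.lower()

# Data-driven programming: the rules are learnt from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["You are a winner, claim your free prize",
            "Free entry in our prize draw",
            "Are we still meeting for lunch tomorrow?",
            "Here are the notes from today's lesson"]
labels = ["spam", "spam", "not spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(is_spam_by_rules("Claim your FREE prize now"))   # rule-based answer
print(model.predict(["Claim your prize now"]))         # learnt answer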

Another description of AI thinking comes from Juan David Rodríguez García, who presented his system, LearningML, at another of the Raspberry Pi Foundation’s seminars (helloworld.cc/garciaseminar). Rodríguez García drew on a paper by Brummelen, Shen, and Patton (helloworld.cc/brummelen2019), who extended Brennan and Resnick’s CT framework of concepts, practices, and perspectives (helloworld.cc/brennan2012) to include concepts such as classification, prediction, and generation, together with practices such as training, validating, and testing.
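To make the practices of training, validating, and testing concrete, a short sketch such as the one below (my own illustration in scikit-learn, not drawn from LearningML) shows the three distinct roles that labelled data plays.

# Training, validating, and testing as three separate uses of labelled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold back a test set, then split the rest into training and validation sets.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)                                 # training
print("validation accuracy:", model.score(X_val, y_val))    # validating (used to tune choices)
print("test accuracy:", model.score(X_test, y_test))        # testing (final check on unseen data)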

What I take from this is that there is much still to research and discuss in this area! It’s a real privilege to be able to hear from experts in the field and compare and contrast different standpoints and views.

Read more from our AI and data science education presenters in their write-ups of their seminars (helloworld.cc/RPFseminarpapers).


