Let’s take a minute to go back to some of our favourite learning theorists: Jean Piaget and Jerome Bruner (helloworld.cc/piaget1952 and helloworld.cc/bruner1964). Piaget believed learners couldn’t even begin abstract thinking until they were eleven, and Bruner recognised that learners needed to do repeated actions first (action-based thinking) before they could represent those actions on paper (image-based thinking). Both theorists support the idea that we need to work on a learner’s concrete understanding and that, as a learner progresses, they will transfer this to more abstract contexts.
This application of learning theory supports what many educators have found when using manipulatives such as Bee-Bots. For example, researchers Sapounidis and Demetriadis conducted a study comparing a tangible user interface for controlling a robot with a graphical one (helloworld.cc/sapounidis2013). In interviews, the participating children initially said they preferred the tangible interface, finding it more fun and engaging. Younger children also got on better with the tangible system, although this could have more to do with their still-developing mouse-control skills. Other research on physical computing also finds increased engagement and greater problem-solving skills with hands-on tools, so there is definitely support for this approach. But this is where things started to unravel for me.
I found that learners could explain what an algorithm was, and could tell me that a program is ‘a set of instructions that runs on a computer to tell it what to do’. Both answers met the curriculum requirements, but I wasn’t convinced the learners could link these two facts together. Could they connect what they were doing with the Bee-Bot to the computing systems around them? Did they understand what a computer was?
What is a computer?
According to my class of nine- to eleven-year-olds, a computer is:
A piece of technology
A keyboard and a screen
A search engine
A machine used for work
A metal brain
A machine with a keyboard
An information device
This simple question highlighted a wealth of alternative conceptions about programming and computing systems. Many children stated that a computer needed a keyboard. Many also believed that the terms ‘machine’, ‘technology’, ‘electrical device’, and ‘computer’ were all synonyms. The other common theme was defining the computer by its function, as if knowing what it does is enough to define what it is. This view leads to a reduced understanding of what computers are capable of.
Here’s a useful activity to explore this question with younger children. First, give each learner a piece of paper folded into quarters. In the first quarter, learners have two minutes to draw a picture of a computer. Nearly all of them will draw a laptop. Discuss what they drew: did their laptops include a keyboard and a mouse? What about a screen? By naming the parts of a computer, you can later explore which parts are necessary for a computer to work. Now move on to the second quarter. This time, ask learners to draw a different type of computer; you will usually get a mixture of desktop computers and games consoles connected to a TV. Again, talk about the parts. Now you can discuss why there is no keyboard on a games console. Repeat this process, but change the question to ‘What objects do you think have a computer inside them?’ Each drawing leads to interesting discussions, from traffic lights, to remote-control cars, to iPads.
My learners now had two discrete chunks of knowledge: how to program a Bee-Bot, and that laptops were computers. However, without a bridge to connect them, this learning began to seem disjointed. If, in a learner’s mind, a Bee-Bot isn’t a computer, then it can’t run a program, so what are they learning from playing with it? The answer took me back to the research on manipulatives and those early-learning theories I introduced at the start of the article. Learners needed a concrete, conceptual understanding of what a computer is before they could start comprehending the more abstract role of a program in that system. We needed to spend more time teaching computing systems.
What does that look like?
Even the youngest learners can start learning what a computer is and how to recognise one. They begin by spotting buttons, wires, and batteries, and then talk about what each does. If they recognise that when a button is pressed, there are instructions to follow, they’re beginning to understand what a computer is and where you’re likely to find one. As children move through lower primary, we can discuss what might happen if we press a particular button. This is where we can start differentiating between things that simply use electricity and things that run a program.
By upper primary, we explore the world around us and try to work out what the algorithm would be. We use input–process–output to decide if something has a computer inside it (see the next article for more on this). Each time we use this model, we reaffirm what an input and output are, as well as the basic concept of programs running on computers. For example, what’s the input on an iPad? How do we tell it what to do? There’s a home button and a touchscreen, or we can talk to it using Siri. What code runs when we press the home button? Something like ‘when button pressed, show home screen’. And then the output? We can see it on the touchscreen. This simple model allows us to test different machines or items of technology and tell if they’re computers or not.
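For teachers who also work with text-based languages, the input–process–output reasoning above can even be sketched as a toy program. This is purely an illustrative Python sketch of the model; the device names and rules are made up, not real iPad code:

```python
# A toy sketch of the input-process-output model.
# Input: a button press. Process: look up the rule the program holds.
# Output: what the device does in response.

def run_device(rules, button):
    """Return the output for a given input, using the device's program (rules)."""
    return rules.get(button, "do nothing")  # no matching rule means no response

# A pretend tablet's program: each input maps to an output.
tablet_rules = {
    "home button": "show home screen",
    "volume up": "increase volume",
}

print(run_device(tablet_rules, "home button"))  # prints "show home screen"

# A monitor with no computer attached has no program to run:
monitor_rules = {}
print(run_device(monitor_rules, "menu button"))  # prints "do nothing"
```

The point of the sketch is the same as the classroom discussion: a device is only a computer if there is a program connecting its inputs to its outputs.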
One misconception I regularly hear is children referring to a monitor as a computer. Using this model, we can test that alternative conception. What’s the input? There are buttons. What happens when we press them? The screen says ‘no input’. What program is it running? It’s not doing anything, because there’s no laptop plugged in. So is it a computer? No. We now have a way to start conversations about whether a device is a computer, and therefore whether a device is running a program.
Having developed this solution to my problem with teaching computing systems prior to programming, I repeated the ‘What is a computer?’ question a year later with learners of the same age. This time I got much more varied and detailed responses. Here are some examples:
A computer has lots of switches and plugs to plug things into; it doesn’t have to have a screen
A computer needs code on a microchip to make it work; without that, pressing a letter would make nothing happen
Not all computers look like a computer; they have different shapes and designs and are used for different things
While these answers are not perfect, in just a year I saw noticeable progress in the complexity of the answers given. I found similar benefits when teaching programming, where learners could tell me that a wide range of devices ran programs, including Bee-Bots and beyond! Since these early discoveries, I have made sure that each September I start teaching with an age-appropriate introduction to computing systems, and I make regular links back to this learning when I teach programming later on. The Bee-Bot discussed here was one example of a manipulative, but there are many more, from floor robots, to Raspberry Pis, to microcontrollers. There are also many ways to challenge learners’ concept of what a computer is, such as the embedded systems found in washing machines, traffic lights, and automatic doors.
Learners must grasp the ubiquity of programming to grow their understanding of the world they’re a part of. And as a teacher, once you start these conversations, you never know where they’ll end up! Take some time this week to ask your class ‘What is a computer?’, and carve out time in your curriculum to ensure learners have a foundational understanding of computing systems.
Sapounidis, T., & Demetriadis, S. (2013). Tangible versus graphical user interfaces for robot programming: Exploring cross-age children’s preferences. Personal and Ubiquitous Computing, 17(8), 1775–1786.