The Sumerian Game
Hello everyone. Before we get into this week’s episode, I’ve got a recommendation. Shortly after I recorded Episode 1 with its overview of the early history of games, I was in a local bookstore and came across something I was surprised I hadn’t run into before. It’s a graphic novel by Jonathan Hennessey and Jack McGowan called The Comic Book Story of Video Games: The Incredible History of the Electronic Gaming Revolution. You can tell what it’s about from the title, and I thought it might be of interest to those of you who enjoyed the part of Episode 1 dealing with the same period.
To clarify: this is a book about game history generally, not learning games. But it has lots of stuff to fill in the gaps where I made decisions to cut and condense in Episode 1. In talking about the earliest digital games in that episode, I chose to focus on the historical developments in non-digital gaming that led up to them. There’s another side of the story, which is the technological developments that led to them. Chapter One of the book covers that, starting off with the development of the CRT – that’s the earliest electrical device we developed to display a moving image, and the same technology televisions and monitors used into the early 2000s – and goes up to right before the first digital games.
Chapter Two picks up there, and has some fun details about the very earliest, tech-demo-y games from the 1940s and 50s, which I mentioned only in passing. The rest of Chapter Two on through Chapter Five overlaps with what I covered in Episode 1, though in greater and significantly more illustrated detail. I like Chapter Five in particular because it’s got great coverage of the early days of Atari, including the human drama, which I didn’t talk about in Episode 1.
The whole book is fun and worth a read. The comics are full of easter egg references to dozens of games – there might actually be hundreds, and I bet some of them went over my head – and the authors have a lot of fun weaving them into the history in clever ways. I’d just note that if you don’t want to spoil the historical narrative I’m going to follow in the podcast, stop at the start of Chapter Five. But I won’t be offended if you don’t.
OK, back to our regularly scheduled program.
Today, we’re starting our tour of digital learning games in earnest with what is generally regarded as the first digital learning game: The Sumerian Game, released in 1964. In addition to getting credit for being the first learning game, The Sumerian Game is also the owner of quite a few other significant firsts in videogame and learning game history: so many, in fact, that it’s easy to lose count.
Just so we don’t lose count, I’ve decided to use a convenient auditory device so you can keep track of these firsts if you’re scoring at home. So, since I mentioned it already, let’s ring the bell for its being the first learning game – in fact, the first educational game, since it was designed explicitly for teaching. [Ding].
Because 1964 is quite a while ago, and what computing looked like at this time is so different from the way it looks today, I thought I’d start by setting the stage, now that I know the bell is working.
The sixties were the early days of America’s space program, and this provides a handy way to put the tech landscape of the time in context. At the same time The Sumerian Game was in development, a team at MIT was working on the computer for the Apollo lunar missions.
Spacecraft in early manned programs like Mercury and Gemini were controlled manually by the astronauts. But even the most ambitious flights never left Earth orbit. The Apollo missions were going to go longer and farther, and involved many complex maneuvers, not the least of which was the landing on the lunar surface. NASA recognized that they needed a computer to do things like orient the craft in space, navigate and control the engines, with the ability to make adjustments along the way. These were numerous and advanced tasks for computers at the time. And they all had to be done by something that could fit on a tiny spacecraft along with all the other systems and the astronauts themselves.
The Apollo Guidance Computer, or AGC, was the answer. The computer itself was about the size of a large briefcase. The “guts” in the briefcase connected to a terminal with a numeric keypad, a few other buttons and some indicator lights that the astronauts used for input and status. The Apollo missions carried two AGCs each: one in the command module that the three-man crew used to get to the moon, and a second in the lunar module that two of them used to get to and from the surface.
What made the AGC’s small size possible was that it relied on a cutting-edge technology: the integrated circuit. You may have heard the integrated circuit referred to by more popular names such as microchip or just chip, and sixty years later, ICs are still the foundation of computing technology. The processors in your computer, your smartphone, your game console and everything else, really, are all based on integrated circuits.
The “integrated” part of “integrated circuit” comes from the fact that the various components that make up the circuit are integrated into a single package and made as part of a single manufacturing process. This allows the components, and the devices that contain them, to be very small: much smaller than what came before.
ICs were cutting-edge, literal space-age technology at the time of the Apollo program. The prior generation of computers – like the ones NASA used planet-side to do the same calculations as the AGC – were based on older transistor technology, and took up entire rooms. If you’ve ever seen the film Apollo 13, you might remember Tom Hanks bragging to a tour group about NASA’s computers of the day taking up only a single room. I’m sure that line was designed to get a chuckle out of audiences, even in 1995 when the movie came out, but the joke is kind of on the writers, if you think about it. An astronaut like the one Hanks played, Jim Lovell, would have had a far smaller and, for the time, much more impressive computer to brag about in the AGC. And Lovell’s VIP tour group would almost certainly have been familiar with the room-sized computers of the era that NASA was using: mainframes.
Mainframes were the mainstream computers of the mid-sixties. “Mainstream” in that 1960s context means something different than it does today, when it means “in every home” or “in every pocket”: the technology to do that was still just for spaceships. Nobody had a computer in their home, let alone their pocket: the pocket calculator wasn’t even invented until the seventies. That’s why we see NASA astronauts using slide rules in the film to check Lovell’s math after the whole “Houston, we have a problem…” moment.
To be in the market for a room-sized computer, you had to have a good chunk of change and, well, the room… as well as the computational needs. In the sixties, that meant places like government agencies, research labs, some larger companies like airlines and, most relevant to our story, research universities.
One such university – really the university you wanted to be at if you were into computing at this time – was the one that developed the aforementioned AGC: MIT in Cambridge, Massachusetts.
In the late 1950s, IBM released its 7000 series of mainframe computers. By 1960, MIT had one, a 7090. As you might expect at a university, a lot of people were interested in getting their hands on the machine, but coordinating the scarce compute cycles was a challenge. In 1961, MIT released the first computer time-sharing system to address this problem. Before time sharing, a computer like the 7090 could run only one program at a time, one after another. Time sharing allowed a single computer to run many programs for many different users at once through a technique called multitasking, where the computer switches rapidly between running one program, then another, then another, and then back, so it seems like everything is happening simultaneously. It’s the same technique your computer uses today when you have multiple apps open and doing things on a single processor. MIT’s early time-sharing operating system, CTSS, also gave each user a way of storing their programs and data.
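To make the multitasking trick concrete, here’s a toy sketch in Python – purely illustrative, and nothing like how CTSS actually worked under the hood – of a round-robin loop that gives each “program” one small slice of work at a time, so that several of them appear to run at once on a single processor.

```python
# A toy illustration of time-sharing via round-robin multitasking.
# (Illustrative only: real systems like CTSS preemptively interrupted
# running programs; here each "program" politely yields after one step.)

def make_program(name, steps):
    """A pretend program that yields control after each unit of work."""
    for i in range(1, steps + 1):
        yield f"{name}: step {i} of {steps}"

def round_robin(programs):
    """Run one step of each program in turn until all are finished."""
    log = []
    while programs:
        prog = programs.pop(0)
        try:
            log.append(next(prog))   # give this program one time slice
            programs.append(prog)    # not done yet: back of the line
        except StopIteration:
            pass                     # finished: drop it from the rotation
    return log

log = round_robin([make_program("A", 2), make_program("B", 3)])
# The steps of A and B interleave, as if both were running at once.
```

From the point of view of a user sitting at their terminal, their program is always making progress; they never notice the slices the machine is spending on everyone else.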
Time sharing was a major unlock, and it allowed different computer scientists and software developers to pursue all sorts of varied projects. The innovation spread beyond MIT, and other time-sharing operating systems for the 7090 and related IBM computers began to be used elsewhere. One such place was in Westchester County, New York, just north of New York City, where a government agency called the Board Of Cooperative Educational Services, or BOCES, was working with IBM. These boards, which exist throughout New York State even today, provide support services – curricular, management and, notably, technology – to the public school districts in their area.
Westchester County also happened to be where IBM was headquartered, and in 1962, some folks in the curriculum division at BOCES started talking to their neighbors about potential uses of computers in K-12 education.
I want to push pause right here, because that 1962 date is quite interesting and powerful when you look at it in the broader context of educational technology at that time. 1962 is before there was widespread use, let alone acceptance, of television as an educational technology. While there had been pure entertainment children’s programming from the earliest days of broadcast TV, this was seven years before the debut of Sesame Street. When we think of contemporary video games with their high production values, compelling narratives and voice casts that sometimes feature Hollywood stars, there’s an understandable tendency to view them as derivative of film and TV. But these elements weren’t present in the earliest videogames, and they didn’t play a role in causing the people at BOCES and IBM to think videogames could have educational potential. As we’ll see, something closer to the opposite is true: the makers of The Sumerian Game decided to incorporate mixed media techniques into the game to enrich the player experience, but as an afterthought.
What did influence them, particularly the head of the curriculum group at BOCES, weren’t even early computer games, but computer simulations. By this time, universities and businesses were using computers to create simulations of various kinds, including business and economic simulations, and the BOCES-IBM team wondered whether a simulation could be used to teach in K-12 classrooms.
At this point, BOCES did a couple of things that will sound familiar to anyone who’s worked in academia. First, they held a series of workshops with local teachers, some folks at IBM and a curriculum researcher from BOCES, Dr. Richard Wing. Then, encouraged by the workshop results, they applied for, and won, a research grant to explore the question. This lineage makes The Sumerian Game, the first digital learning game, also a research-based one: another first! [Ding].
There are, of course, many great learning and educational games that don’t have a research pedigree, to say nothing of learning and educational games that aren’t even grounded in what you could call solid learning science. I’d argue that there are some downright great educational games that grew out of nothing more than what someone reckoned might be a good way to learn something, and without even considering how to determine whether the players were truly learning anything at all. But I think the fact that the very first learning game was a research project indicates a certain seriousness about designing a game that really is effective at teaching something in a way where any claims of learning stand up to scrutiny.
Some of what went on at those early workshops will also be familiar to game designers, learning and otherwise. The participants spent time brainstorming and discussing ideas, but also paper prototyping their concepts. For those new to game design, paper prototyping is the process of using non-digital means to explore, test and validate what is ultimately intended to be a digital game. It’s a common practice in the game industry, in part because it allows designers to rapidly explore and iterate, and to test their ideas with actual users without incurring the cost of technical development. One of the concepts prototyped – introduced by Bruce Moncrief of IBM – involved fusing ideas from simulations with ideas from boardgames, specifically Monopoly, to teach basic concepts in economics. The conceit that Moncrief developed involved doing this through the lens of an ancient civilization’s economy. Feeling that pre-Greek civilizations were underrepresented in the curricula of the day, he decided to set his game in Sumer.
It’s worth reflecting here that while there are, of course, simulation games, not every simulation is a game. Moncrief was drawing on prior examples and research showing computer simulations were promising, if not outright effective, teaching tools. But it seems like he and other participants – ultimately, the entire project team – had a sense that a simulation alone didn’t have the same potential for education and engagement as a simulation game, that is, a simulation not only with rules (as all simulations have), but also goals and challenge. And yes, that makes The Sumerian Game the first simulation game. [Ding].
After being awarded the grant, Dr. Wing at BOCES – in another recognizable academic research move – put out a request for proposals to the participants from the various workshops for the specific game to be made and tested. One of the proposals came from a 4th grade teacher and workshop participant named Mabel Addis. Addis’ proposal built on Moncrief’s concept, and his emphasis on a non-Greek civilization resonated with her since she had studied ancient Mesopotamia in college. Her proposal was the one that was accepted.
There’s a scene from the television show Mad Men – which, like our story, is set in the 1960s – that involves an elderly secretary passing away at work in the Manhattan office of the ad agency where the series is set. Her boss says of her, “She was born in 1898 in a barn. She died on the thirty-seventh floor of a skyscraper. She’s an astronaut.” Mabel Addis wasn’t born until 1912 and lived to see the 21st Century, but I think the spirit of that quote applies to her.
Mabel Addis began her teaching career in a one-room schoolhouse in Westchester County in 1937, having already earned a Masters in education from Columbia. Before running into the team at BOCES, she taught in several school districts around the county, ending up in the town of Katonah in 1950, where she’d go on to teach for the next 24 years. A love of history weaves through her whole career, showing up in The Sumerian Game as well as in the various articles and projects on local history she pursued.
The Sumerian Game is what we’d call today a resource management game: the first resource management game, in fact. [Ding]. I like the way the Game Mechanics Wiki describes this genre, so I’ll quote from its summary now:
“Resource management is about collecting, monitoring, and leveraging quantitative resources with incomplete information… In simple English, this means you get money, ore, pylons, or whatever… and [these have] to be used wisely to compete. You never have enough information to make fully informed decisions and must develop the best strategy you can with imperfect information.”
This description implies that resource management can also be a mechanic, or if we want to stick to the game designer-y, verb-y way of talking about mechanics, “managing resources” is. And I think this is true. In this sense, lots of games, maybe even most digital games, have resource management mechanics. A player of an RPG who has to decide what loot to sell versus keep versus upgrade is managing resources. You could argue that any game with a health bar involves resource management, in that you have to decide how much of your health you’re willing to risk by going to a certain area when you don’t know what’s there or by diving into a boss fight where the outcome is uncertain.
But as a genre of games, resource management refers to games that have it as the core mechanic, and often follow similar conventions. Managing resources is at the heart of many simulations, too, largely because the purpose of many simulations, particularly economic ones, is to develop strategies that will tend to maximize gaining or keeping certain desirable resources. One way to think of resource management games as a genre is that a resource management game is an economic simulation with win states that involve a player maximizing certain resources and loss states that involve failing to do so.
Previously, I’ve talked about Gamestar Mechanic, the first educational game I worked on. I mentioned that Gamestar teaches players how to make games and uses a games-as-a-system paradigm to do so. The other side of the coin is that Gamestar’s goal is to teach systems thinking and to help players use it to understand real-world systems. Resource management games are, in essence, models of systems. Playing well involves understanding the dynamics of the system being modeled. And if the system itself is modeled well, then the understanding of the game translates to an understanding of the system.
This was precisely the theory of learning Addis had in mind when she designed The Sumerian Game, and it represents a very strong alignment of game mechanics and learning mechanics. I think it’s worth another tip of the hat to Addis and the people behind The Sumerian Game – and there are going to be more – for coming out of the gate so strong. It shows a visionary understanding of the power of what games can do. Especially given the computing constraints of the time, it would have been understandable if the first educational game was nothing more than a glorified pop-quiz: a design approach that, by the way, many educational games take today. Taking on the challenge of modeling a real-world system like an economy, even in simplified form, and, on top of that, taking on the challenge of making it fun and engaging for kids to interact with that system was a gutsy call. The Sumerian Game was a moonshot, happening at the same time America was getting ready to make her literal one.
Now, I say the game represents an economy in simplified form, but I don’t want to give the wrong impression: the economic simulation here ain’t all that simple. If you read through the report that was produced at the end of the research study for which the game was made, you will find pages upon pages of details about the design of the simulation: the elements of the economy being represented, the math behind the simulation, the different ways those elements do and don’t interact. It’s actually quite rich – a lot richer than many of the simulation games we play today.
I’ll come back to this issue at the end of the episode because I suspect the complexity of the simulation hurts the game in certain ways and contributed to the mixed findings the study found in the game’s effectiveness as a teaching tool. But for now, I bring it up because: it would take several very tedious hours of podcasting to go into every detail of the design and, as much as I like hearing myself talk, I’m not going to put you through that. If you’re genuinely interested, the paper is linked in the show notes.
Instead, what I’m going to do is sketch out the basic play experience so you can put yourselves in the shoes of a sixth grader who experienced it, then layer in some – but not all – of the details so you can at least get a taste of the game’s richness.
The Sumerian Game is set around 3500 BCE in the city of Lagash, one of the world’s oldest known city-states, which sat near the junction of the Tigris and Euphrates rivers in modern-day Iraq. As a player, you take on the role of three successive priest-rulers of the city-state: Luduga I, Luduga II and Luduga III. The Ludugas, to be clear, were not real historical figures. Addis made them up as part of the game’s fiction. I’m guessing that the name “Luduga” is based on the Latin ludus, meaning “play” or “game”, which you see as the root of some English words like “ludology,” meaning “the study of games or play”. Hey: that’s what we do here! Welcome, fellow ludologists!
I know I promised a second ago to get into the play experience, but I haven’t used the bell in a while so… the fact that Addis invented the fictional Ludugas makes them the first videogame characters. [Ding]. Ah… that’s better. If you’ve listened to Episode 1, you’ll recall that the very earliest non-learning videogames were titles like Tennis For Two, a sports game, and Spacewar!, where the players control starships. But in casting the player as a fictional ruler, The Sumerian Game is the first videogame in which the player assumes the identity of a fictional human being.
Early videogames like Tennis For Two and Spacewar! start with the player dumped into the action with no introduction or context. Addis recognized that this probably wouldn’t fly for her intended sixth grade audience, who weren’t expected to know anything about Sumer or basic economics. So rather than dump them right into interaction with the computer, the play experience that Addis crafted begins with a lecture.
The lecture wasn’t just an explanation of how to play – I’m sure the programmers of earlier games would stand next to the machine and give new players instructions – it was an introduction to the game itself. It introduced the historical and fictional world, the player characters, the structure of the game, and the play mechanics, as well as the economic concepts players would need at the outset. And this was no mere verbal lecture: this was a multimedia extravaganza, because the opening lecture was accompanied by a slideshow featuring custom artwork to set the stage.
This may be a slight misuse of the bell, but I think it’s fair to say that this makes The Sumerian Game the first game to feature an opening cinematic. [Ding]. And even if you don’t buy that, because Addis designed and scripted the opening lecture (in addition to the in-game text), it also makes her the first videogame writer. [Ding]. The bell is in danger of coming apart, ladies and gentlemen. In fairness, I should point out that this opening lecture took twenty minutes, which would have today’s players pressing F to skip, as I imagine more than a few of those sixth graders in 1964 would have done if they could.
After the introduction, the player enters the first of three turn-based rounds, taking on the role of Luduga I. On each turn during this first round, Luduga’s advisors report that the harvest is in, yielding a certain quantity of grain: 5,000 bushels on the first turn. The player is then asked to decide how much of this grain to save to plant for the next season, taking into account the city’s current population (500 people on the first turn), with the unsaved grain being used to feed the people. The player inputs their choice into the computer, and the game’s simulation engine uses this to determine the city’s conditions at the start of the next turn: its standard of living, population, the result of the next harvest, etc. Each turn is meant to represent six months in the life of the city. If the player manages things well, the population expands, which creates increased demands for resources the player must take into account, and soon the player gains the ability to invest grain in expanding and irrigating more farmland to feed their growing population. The reign of Luduga I lasts thirty turns and is focused on the management of this agricultural economy.
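If you want a feel for the shape of that loop in code, here’s a deliberately tiny sketch in Python. To be clear: every formula and constant below is invented for illustration – the real simulation, documented in the project’s research report, is far more elaborate – but the rhythm of the turn (harvest, allocate, feed, grow) is the same one the sixth graders worked through.

```python
# An invented, minimal sketch of one turn of the game's first round.
# None of these rules or numbers come from the actual game; they just
# illustrate the harvest-allocate-feed-grow loop the player manages.

BUSHELS_PER_PERSON = 5   # invented: grain one person eats in a season

def run_turn(harvest, population, saved_for_planting):
    """Resolve one six-month turn: feed the people with the unsaved
    grain, then adjust the population and next harvest accordingly."""
    food = harvest - saved_for_planting
    fed_ratio = min(1.0, food / (population * BUSHELS_PER_PERSON))
    # A well-fed city grows; a hungry one shrinks (invented rule).
    population = int(population * (0.8 + 0.4 * fed_ratio))
    # The next harvest depends on how much seed grain was planted.
    next_harvest = saved_for_planting * 4    # invented yield factor
    return next_harvest, population

# Turn 1, using the opening numbers from the game:
harvest, population = 5000, 500
harvest, population = run_turn(harvest, population, saved_for_planting=1500)
```

Even in this toy version you can see the core trade-off the game teaches: save too little grain and next season’s harvest collapses, save too much and the people go hungry.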
Before Luduga I shuffles off the mortal coil and his son, Luduga II, takes over, I want to go back to something I glossed over in describing the gameplay. I mentioned that the player inputs their choices about how much grain to save at the end of the turn. How exactly does the player do this? You might assume using a keyboard, and that’s sort of correct. But if you’re picturing that keyboard sitting on a desk in front of a monitor, that’s definitely not. Computer displays didn’t come into use, at least with mainframes, until the 1970s. In 1964, mainframe users – including the children playing The Sumerian Game – used something called a teleprinter as their input and output mechanism.
In the unlikely event you don’t have a teleprinter at home, and aren’t in a position to search for a picture at the moment, I’ve included one in the show notes if you want to check it out later. But I’ll try to paint a picture with words. Imagine a typewriter, but instead of it holding one sheet of paper at a time, it’s fed by a roll of paper… like the receipt paper in a cash register, but as wide as a piece of writing paper. The teleprinter, like a typewriter, could sit on a desk, but it was hooked up to the mainframe: remember, the mainframe is this massive thing off in a room somewhere. When the user pressed keys on the keyboard, the keystrokes were transmitted back to the mainframe as input. And the mainframe could send commands back to the teleprinter to print characters on the paper and advance to the next line.
When a user played The Sumerian Game, the program “sent” the messages from Luduga’s fictional advisors to the teleprinter to be printed. And when the player issued commands, they typed them on the teleprinter’s keyboard to send them to the mainframe as input to the program. The entire game would unfold on paper, and what was printed on the roll would provide a record of the entire gameplay session.
Parenthetically, this way of handling output and, in particular, input to the mainframe was fine for interacting with a program like The Sumerian Game in this back-and-forth way. But using a teleprinter to get large amounts of data into a computer was horribly inefficient and error-prone. This is where punched cards came in: they could represent larger amounts of data, like entire programs, in a much more durable and consistent way. Punched cards were the dominant mechanism for storing, representing and transferring computer data until they were supplanted by magnetic tape (the same technology used for audio cassettes if you lived through the 80s and 90s, or 8-tracks if you’re a decade or two older), floppy disks and eventually hard drives and the other storage formats we still use. But I digress… it’s time for the reign of Luduga II.
In the second round of the game, the simulation becomes richer. In addition to storing grain for next year’s harvest and for feeding the growing population, Luduga II has the ability to invest grain in developing new technologies and to specialize his labor force to do things other than farming. Investing in technology and specialization improves quality of life, helping the city’s population to grow further and faster.
By the time we get to the end of the reign of the next and final ruler, Luduga III, or Tre, as I like to call him, the game has layered in raiding, trading (and with it concepts of supply and demand), the production of specialized commodities, middlemen who charge markups, the need to maintain resources and quite a number of other components and mechanics. Oh: and there are also random events (like natural disasters) that can harm the city and persistent drains on resources like the fact that stored grain can rot. You get why I said before that this ends up being a lot.
More on that in a second, but first, I want to draw attention to something very cool in the instructional and game design. In starting with a limited set of mechanics at the start of Luduga I’s reign and slowly layering in more and more complexity, the challenge of the game increases. But Addis is obviously conscious of layering in the complexity and challenge in a controlled way: she doesn’t add it all at once, or start the game on hard mode. This is really good game design, and learning design. I mentioned this in passing in Episode 1, but Addis is applying a game and instructional design principle known to Montessorians as isolation of difficulty. The idea behind it is that if you want someone to learn something difficult or complex, it helps to break it down into small chunks that you introduce one at a time.
The Montessori approach to teaching a small child to use scissors is a favorite example of mine. As adults who have mastered using scissors, we tend to think of cutting with them as a single skill. But if you’re just learning to use them, there are a number of challenging things you have to master before you can successfully do something like cut a circular shape out of a piece of paper. If you give a pair of scissors to a child who’s never used them before and just ask her to do that cold, she’s likely to fail and get frustrated.
The Montessori approach breaks down the cutting into its constituent skills and introduces them one at a time. First, you have to master the hand motion that makes the blades go up and down. Then you have to master coordinating moving the paper between them at the appropriate time – but just to snip the paper. We’re not trying to cut anything out, or even cut following a straight line. That comes later, and cutting along a straight line is introduced before cutting along a curve because the latter is harder and requires new skills that straight-line cutting doesn’t, skills you probably take for granted: for example, being able to rotate the paper continuously and in small increments to follow the outline of a circle.
Another group of educators who are really good at this – and I would be remiss if I didn’t shout them out because I come from a family of them – are physical education teachers and coaches. If you’ve worked on your swing with a golf pro, you know they’re adept at isolating the different components of the swing: the weight shift, the club movement, and so forth, and building them up individually. There are tons of what are called developmental games in phys ed that are intended to isolate and teach different aspects of sports: I remember playing a game in middle school called Box Basketball that was designed to introduce the concept of zone defense.
Good videogames do this too as they teach you how to play. As I talked about in Episode 1, Level 1-1 of Super Mario Bros. is the most frequently cited example of this. In the very first sequence of the game, four different skills are introduced in progression: lateral movement, jumping in place, jumping while moving laterally (which combines the first two), and jumping laterally to avoid or stomp on an enemy. Addis is doing this in The Sumerian Game in the way that good educators do, and it’s cool to see a real-world learning principle applied in the very first educational game.
Because the game’s challenge increases along with its complexity, Addis is also applying another game design principle: progressive difficulty. Put simply, this means that the game gets harder as you go along. Most well-designed games we encounter today – and indeed going back to the time of Super Mario Bros. and beyond – implement progressive difficulty, some better than others. But videogames before The Sumerian Game didn’t, so we have another first. [Ding]. I know: I missed the bell, too.
But in spite of the isolation and progression of difficulty – or maybe because Addis thought the isolation and progression would keep things manageable – The Sumerian Game ends up being pretty darn complex. I don’t want to be too hard on her, because to do so would be to judge her by today’s standards and with the context of sixty subsequent years of game design she didn’t have the benefit of. She was literally inventing the field of learning game design as she went. But with the benefit of that knowledge, I think a lot of game designers today would at least be suspicious of whether the game was going to work, and especially of whether it would succeed in its ultimate educational goal of getting children to understand economic concepts. Very few modern game designers would introduce that complex a system that fast, if at all. A couple of places in the research report give me the impression that the game strained under the weight of its complexity.
First, in talking about the game’s design and development, the paper mentions that the game underwent a significant revision after its initial round of testing with kids. Some of these revisions weren’t related to gameplay – for example, making the introductory presentation clearer and increasing its “production value,” and using pre-recorded audio for some of the updates from Luduga’s advisors to reduce the amount of text the children had to consume. But the revisions that were about gameplay show a pattern of reducing its complexity. The number of turns per round is reduced. In the second round, all the decisions about planting, harvesting and storing are taken over by the computer, leaving the student free to focus on learning and using the new technology development and resource allocation mechanics. A similar revision that simplified the third round to, again, have the computer take over more of the work and focus the player on the new mechanics was planned but never implemented.
Before these revisions, the game functioned as an increasingly rich simulation of Lagash’s economy, with more dimensions of it being layered in as the game progressed from round to round. But especially if you pretend that the planned Round 3 changes were implemented, what you end up with is something closer to a game where each of the three rounds simulates a different dimension of the city’s economy: agriculture in the first, division of labor in the second and trade in the third. It’s less progressive and more isolating, which reduces the burden on the player and would have thrown the individual concepts being taught into sharper relief.
The second piece of evidence I’d point to is the results of the research study itself. Among the hypotheses the researchers were testing was that children introduced to the underlying economic concepts through the game would better understand and formulate those concepts than children who were introduced to them through conventional teaching methods. To test this hypothesis, the researchers created a control group who did receive the conventional instruction and compared their understanding and retention of the concepts to those who played the game using pre-, post- and follow-on assessments, i.e. tests. Again, if you’ve done any kind of learning research, this surely sounds familiar.
The results were mixed. Children in the experimental group – that is, the ones who played the game – did show a statistically significant greater knowledge gain than the control group, as measured by the pre- and post-assessments; though the experimental students started with more knowledge on average, which somewhat undermines the result. The researchers also admit that the assessments they used here were just so-so, and their conclusion is that you can’t really say that the kids who played the game learned more or did better than the control group, but it’s probably fair to say they didn’t do worse.
But on the retention assessment, the control group meaningfully outperformed the experimental group, meaning the kids who got the conventional instruction held onto what they learned much better than the kids who played the game. The researchers are at a loss to explain this, but I have a hypothesis. The game and the presentation were initially engaging and motivating, as games are, leading the children who played to be invested in what they were doing and retain what they learned decently well in the immediate aftermath. But because the game was so complex, the children held the concepts in sort of a muddy way, which led to things being jumbled when it came time to recall them after more time had passed.
That’s not to say that the study’s findings were all bad. In fact, there are some interesting nuggets, some surprising and some not. First, and not surprisingly, the concepts that were retained best were the ones that were repeated most often in the game. The strong readers got more out of the game than the weaker ones, which is also unsurprising given the game’s emphasis on reading. The researchers realized this in the middle of the study, hence the introduction of the recorded audio segments and lowering of the number of turns per round, which reduced the amount of reading.
More surprising is that the students who spent the least time in front of the computer had the most learning gains. At first glance, this might seem counterintuitive, but the researchers hypothesize that this is because these were also the most capable students (as measured by IQ tests). They were just able to get through the game faster.
If you agree with the researchers’ “worst-case scenario” in which the children who played the game had about the same learning gains as the children who got the conventional instruction, the most interesting finding is that the children who played the game achieved those gains in about half the instructional time, on average, that it took for the children who learned the old-fashioned way. Even if the data don’t support the conclusion that the players learned more or better, they do show that with games, the children learned more efficiently. I think this finding has parallels with some of what we’re seeing as AI plays a greater role in education, where there’s emerging data that shows that children who learn via AI-powered, adaptive learning systems can not only progress much faster, but can also achieve greater learning gains than their peers receiving conventional instruction. This is by no means a settled issue: there’s not enough data of enough quality, but there’s enough for it to be worth tracking.
Even if the researchers didn’t prove everything they set out to, what they certainly demonstrated in this first foray was that educational games were a valid and promising teaching tool, and one worthy of further investment and consideration. Everyone who has made or used a learning game, or worked in the field, owes them a debt of gratitude.
Mabel Addis developed the concept of the Sumerian Game, crafted the player experience and wrote the game, but she didn’t program it. She wasn’t a programmer. That job was done by IBM employee William McKay. We see this kind of specialization in games today, of course. Making a modern commercial game, or even most hobbyist and student projects, requires multiple disciplines from art to writing to programming to design. As I like to say, making games is a team sport. Because of the specific things she did in developing the game, and because she did them as part of a team with a division of labor, Mabel Addis is traditionally cited as the first videogame designer, learning or otherwise. And I think that’s worthy of a resounding [Gadong] on our list of firsts.
Because of her gender, she’s also traditionally cited as the first female game designer. And while that’s true, I find that perspective strange. Mabel Addis was indeed female, and she was the first game designer. Those things are facts. But to single her out for being the first female game designer seems to me to smuggle in a false assumption that, obviously, the first game designer was, or would have to be, male. It’s only appropriate to call someone the first of their gender, or race, or the first person on the block to do something if they’re also not the first to do it, period. And that’s what Mabel Addis was: the first.
As I mentioned before, Mabel passed away in 2004. At the Game Developers Conference in 2023, she was honored posthumously with the Game Developers Choice Awards Pioneer Award for her work on The Sumerian Game. An article I read in preparing for the podcast refers to The Sumerian Game as “the most important video game you’ve never heard of.” You’ve heard of it now, of course, but most gamers, and most educators who use games – which is to say most educators – never have. I do think that’s a shame for a game that can claim so many firsts. It’s not every game that launches a genre, an entire branch of the gaming tree and an entire gamemaking discipline. And you see echoes of its design in many later games, most notably the Civilization franchise.
My favorite thing from the research study comes at the very beginning. The researchers – and I assume they’re speaking for Addis, too – articulate what they believe the aim of education to be. They say:
“Our interest in computer-based instruction… is founded on the hope that a theory of instruction based on broad principles of individualization may provide clues for the improvement of education. This hope comes at a time when new computer technologies offer promise of new ways of establishing effective learning.
“An educational system should provide a learning environment in which each individual can learn those skills, concepts and attitudes which are appropriate to his own ability and ambitions and improve his character and personality in ways corresponding to an ideal notion of human worth.”
If that isn’t the perfect start of a manifesto on education, and on games and education, I don’t know what is.
If you’re enjoying the podcast, be sure to check out the Substack at historyoflearning.games, where if you subscribe, you’ll get each episode in your inbox along with show notes that include bonus info not covered in the podcast, including images of games discussed. I also post transcripts of each episode separately.
If you’d like to support the show, the best thing you can do is leave a review on your podcast platform of choice. If you’re so inclined, I would be truly grateful.
Thanks for listening and I’ll see you next time.