Michael Littman is a University Professor of Computer Science at Brown University and Division Director of Information and Intelligent Systems at the National Science Foundation. Littman is a Fellow of both Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He was selected by the American Association for the Advancement of Science as a Leadership Fellow for Public Engagement with Science in Artificial Intelligence.
Below, Michael shares five key insights from his new book, Code to Joy: Why Everyone Should Learn a Little Programming. Listen to the audio version—read by Michael himself—in the Next Big Idea App.
1. There are just a few kinds of bricks.
An amazing fact about computers is that they are “universal.” The same box of circuits can act like a calculator, camera, phone, TV, book, and tool for creating equations, images, sound, animations, and text. How do they do that? Programmability. The main task that a computer does is to follow the instructions you give it that tell it what task to do. Very meta.
You typically give the computer instructions by pointing it to instructions written by someone else. That’s what you are doing when you download software and then click on it to run it. But computers give you more fine-grained control than that: you can write the specific instructions yourself. Given that computers can do practically anything, it seems like you’d have to learn an awful lot of instructions. But there are really only a handful. Just as a LEGO™ Master Model Builder can create anything at all from a small set of bricks, you only need a few programming concepts to build simple or complex programs.
The building blocks are commands, conditionals, variables, loops, and functions. Each one is important because each one is useful and powerful on its own. But the real superpower comes when you learn how to combine these units in creative ways. By learning about each of these pieces in isolation, you have a fabulous foundation to build on.
2. You are already soaking in it.
The building blocks are sufficient for composing any program ever written and expressing any task imaginable. So, you’d think they’d be pretty high-tech. But the fact of the matter is that we use these ideas all the time. They entered human culture long before the invention of computers. When you are telling computers what to do, it’s not so different from how you might go about explaining to another person how to carry out a particular task.
Commands are just individual steps that can be strung together in a sequence. You see this idea in a movie script, which is a sequence of lines to read; or a piece of piano music, which is a sequence of notes to play.
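As a minimal sketch of this idea (my own illustration, not an example from the book), here is a program that is nothing but commands, executed in order like notes in a score:

```python
# A sequence of commands, run top to bottom, like notes in a piece of music.
notes = []                 # command: start with an empty list
notes.append("C")          # command: add the note C
notes.append("E")          # command: add the note E
notes.append("G")          # command: add the note G
melody = " ".join(notes)   # command: join the notes into one phrase
print(melody)              # → C E G
```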
“When you are telling computers what to do, it’s not so different from how you might go about explaining to another person how to carry out a particular task.”
Conditionals are branch points that specify when it’s appropriate to follow particular commands. We often convey this idea to each other using the word “if.” For example, Homeland Security insists: “If you see something, say something.” Tony Orlando sang: “Knock three times on the ceiling if you want me.” And Winston, in Ghostbusters, said: “When someone asks you if you’re a god, you say YES!”
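The Ghostbusters line can be sketched as a conditional in Python (the `respond` function and its phrasing are my own invention for illustration):

```python
def respond(question):
    # Branch point: one behavior if the condition holds, another otherwise.
    if "are you a god" in question.lower():
        return "YES!"
    return "Let me think about that."

print(respond("Gozer asks: ARE YOU A GOD?"))  # → YES!
print(respond("What time is it?"))            # → Let me think about that.
```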
Variables are stand-ins for other values. Returning to the Ghostbusters quote, the word “someone” is playing the role of a variable. It can stand in for whoever might be asking you the question. In the movie, it was Gozer the Gozerian. But the instruction says the same behavior should be taken no matter who asks the question. Words act as variables all the time. If I challenge you to a contest and say, “Winner takes all”, who does “winner” refer to? It’s whoever wins, of course. So, “winner” is standing in for that person and therefore acting like a variable.
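The “winner takes all” idea translates directly into code; in this hedged sketch (the scores are made up for illustration), `winner` is a stand-in for whoever happens to win:

```python
# "winner" is a stand-in: it names whoever ends up with the top score.
scores = {"you": 7, "me": 5}          # made-up contest results
winner = max(scores, key=scores.get)  # the variable now refers to that person
print(f"{winner} takes all")          # → you takes all
```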
Loops tell us to follow the same set of instructions multiple times. “Drop and give me twenty!” is a succinct way of saying “drop”, “do a pushup”, “do a pushup”, “do a pushup”, “do a pushup.” The word “twenty” acts as a loop statement telling us to repeat the implicit activity twenty times.
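“Drop and give me twenty!” can be sketched as a loop, with the repetition written once instead of twenty times:

```python
# The loop replaces writing "do a pushup" twenty separate times.
pushups = 0
for _ in range(20):   # repeat the body twenty times
    pushups += 1      # do one pushup
print(pushups)        # → 20
```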
Functions provide a way of packaging up a set of instructions and giving them a name for later reference. I remember bringing my kids to Chuck E. Cheese for a birthday party. Chuck E. would say, “When I say happy, you say ‘boithday.’ Happy… Happy….” He’d pause after each “Happy” to give us a chance to yell “boithday!” Here, “happy” is acting as a function, telling us to yell “boithday.” If functions sound a little bit like variables, it’s because they are. They are essentially variables that represent instructions.
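The Chuck E. Cheese call-and-response can be sketched as a function: instructions packaged under a name and invoked by that name, as many times as you like:

```python
# Packaging an instruction under a name, to be invoked later by that name.
def happy():
    return "boithday!"

# Each call of the function runs the packaged instruction again.
print(happy(), happy())   # → boithday! boithday!
```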
3. You can start today.
Given that the basic programming blocks are already familiar, there’s nothing stopping you from trying them out. Of course, it takes a lot of practice to write big, complicated programs, but there are plenty of simple systems that let you do practical things with just one programming building block at a time, almost right away. Using the blocks in concert is a superpower that will significantly increase your capabilities.
Creating online questionnaires and defining keyboard macros in the Emacs text editor can help you practice sequencing commands.
The interactive fiction authoring tool Twine and trigger-action programming with IFTTT (If This Then That) or Alexa Routines are good ways to become comfortable branching into different behaviors using conditionals.
“Using the blocks in concert is a superpower that will significantly increase your capabilities.”
Defining formulas in a spreadsheet or creating your own BuzzFeed quiz provides exposure to storing information in variables.
Experimenting with repeating calendar events in Google Calendar, Apple Calendar, Yahoo! Calendar, and the like, or designing your own video games with low-code online tools gives you an on-ramp to wrapping your head around consolidating instructions into loops.
Finally, the grouping functionality in drawing programs and building new behaviors in Google Apps Script can reveal the value of defining functions.
4. AI is a brick producer.
There’s a saying that goes: The mediocre teacher tells. The good teacher explains. The superior teacher demonstrates. The great teacher inspires.
These four verbs—tell, explain, demonstrate, and inspire—map nicely onto the four principal ways we have for conveying tasks to computers. “Telling” corresponds to coding, using the ideas like loops and variables mentioned earlier. The other three are all flavors of machine learning, which has been producing fantastic advances in artificial intelligence and holds the promise of greatly expanding everyone’s capacity to make computers more powerful and useful to themselves.
Explaining involves spelling out for the computer what its objective is rather than the specific steps needed to carry out that objective; in effect, the machine works out the steps for itself. This kind of machine learning is known as “reinforcement learning” because it generally involves deciding on rewards and punishments for the machine and letting it tune its behavior to match. State-of-the-art programs for playing board games and video games use this idea, as do commercial applications like keeping data centers cool while minimizing electricity use and maximizing throughput. Such systems are also trading stocks and optimizing supply chains: problems where it’s easier for people to explain what success looks like than to decide on the specific steps to achieve it.
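To make the reward-and-punishment idea concrete, here is a deliberately tiny sketch of reinforcement-style learning (my own toy example, not from the book, and far simpler than real reinforcement learning): we only state which action earns reward, and the machine tunes its estimates until the rewarded action wins.

```python
import random

# We "explain" the objective by assigning rewards; the machine tunes itself.
random.seed(0)
rewards = {"fan_on": 1.0, "fan_off": -1.0}   # reward and punishment
value = {"fan_on": 0.0, "fan_off": 0.0}      # the machine's learned estimates

for _ in range(100):
    action = random.choice(list(value))      # try an action
    # Nudge the estimate toward the reward received (learning rate 0.1).
    value[action] += 0.1 * (rewards[action] - value[action])

best = max(value, key=value.get)
print(best)   # → fan_on
```

Note that nowhere did we list the steps for choosing well; the behavior emerged from the rewards.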
“These four verbs—tell, explain, demonstrate, and inspire—map nicely onto the four principal ways we have for conveying tasks to computers.”
Demonstration also plays a key role in modern computer systems. The branch of machine learning known as “supervised learning” produces high-performing software by example. Programs that decode speech and translate between languages are now regularly created by giving the computer giant collections of inputs and their corresponding outputs; supervised learning constructs the sequences of instructions that map one to the other. One kind of supervised learning solves problems by producing collections of conditional branches known as “decision trees.” Another organizes intermediate computations as values of a set of variables known as a “neural network.” These variables can themselves be organized into units analogous to function definitions.
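Here is a hand-written illustration (not produced by a real learning algorithm, and with invented features and labels) of the kind of conditional-branch rule a decision-tree learner distills from labeled examples:

```python
# The sort of nested-conditional rule a decision-tree learner might produce
# after seeing many labeled (temperature, humidity) -> weather examples.
def classify(temperature, humidity):
    # Each "if" is one node of the tree, testing a single feature.
    if temperature > 30:
        if humidity > 70:
            return "storm"
        return "sunny"
    return "mild"

print(classify(35, 80))   # → storm
print(classify(20, 50))   # → mild
```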
Telling a machine what to do via “inspiration” is much less common, but a number of important applications of the idea are emerging. A recent example is in generating driving directions. It’s hard to tell a computer what route to pick for every possible location and destination. It’s hard even to explain what the objective is: mostly people prefer shorter routes, but they will sometimes go a bit further if a route has fewer turns. How many fewer? I don’t know. You can demonstrate routes to the computer, but how do those generalize to new or rare location-destination pairs? What engineers are starting to do is use a combination of demonstrating and explaining: basically, use demonstrations of routes that people select to extract a rule that explains why they picked those routes over other possibilities. Once the computer has extracted that rule, it can apply the learned explanation to new routes. I think of that as inspiration because our examples serve as a source for the machine to do new and exciting things.
5. Mashups are the way forward.
Chatbots are very cool and they can even help us program. But at the end of the day, it’s still up to us to understand what we want the machine to do and to convey that message. Future programming might be a lot easier, but it still involves the same set of skills we use today. We already use these skills when dealing with other people, so that’s a great head start. But actively developing these skills will remain essential, perhaps even more so, in the age of AI.
Machine learning is making it easier for people to tell machines what to do. But, typically, a given system works by just one of these modes: telling, explaining, demonstrating, or inspiring. In contrast, when we ask other people to do something for us, we generally convey the task using a combination of these ideas, especially descriptions and examples. That’s because descriptions without examples can be abstract and ambiguous, and examples without descriptions leave you to guess the intent. Either alone is error prone.
For example, Carl Sagan said, “The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.”
He told us that we are made of starstuff. But he also gave a handful of examples to make sure we really understood what he was conveying. Similarly, in the context of telling machines what we want them to do, combining rules and examples will be game-changing. Making it easy and reliable to guide computers to carry out our wishes is where we really see AI reaching its full potential.
I hope you are inspired to use computers more deliberately and with a sense of self-efficacy and empowerment. After all, computers are machines to which we can delegate our will. Delegating our will does sound profound, and I think Carl Sagan would be proud.
To listen to the audio version read by author Michael Littman, download the Next Big Idea App today: