Kevin Gausman made progress with his much-needed curveball in his first full season with the Orioles. (LM Otero)
It has become a pleasurable challenge to sit opposite Jonathan Flombaum in the Sun’s podcast studio and listen to him think out loud. An assistant professor in the department of psychological and brain sciences at Johns Hopkins University, Flombaum is associated with the Visual Thinking Lab at Homewood. He’s focused on big questions about the brain that seem more philosophical than scientific: what we already know about how it works and, more importantly, what we don’t, and why so much of what the brain does remains a mystery to us.
We get into this matter in Wednesday’s Roughly Speaking with a discussion about algorithms and curveballs: You can’t learn to throw one from descriptions and diagrams in a book. Why is that?
“I’ve been thinking about algorithms and machine learning lately because the terms are all over the front pages, and also because they reveal a lot about what human intelligence is and is not,” Flombaum wrote me as we prepared for the podcast.
“When computer scientists talk about algorithms, they are not talking about instructions in a specific computer language. That’s a program. The algorithm is the basic set of instructions that can be translated into really any computer language. Every computer you interact with, from your phone to your laptop, uses essentially the same algorithm to do addition.”
But, Flombaum says, even a simple algorithm like that embodies knowledge that comes from outside the machine.
“Adding is not a trivial achievement by any stretch, and once implemented, computers can often do things that people do, but much faster and more cheaply. Algorithms form the basis for a lot of what we do on computers. But are they intelligent? Do they possess ‘knowledge’ or ‘wisdom’? To any extent that they do, it is vicariously, through the humans who program them.”
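To make the distinction concrete, here is a minimal sketch, in Python, of the grade-school addition algorithm. The rules themselves are the algorithm; writing them out in Python, or in any other language, yields a program. The function name and the digit-list representation are my own choices for illustration, not anything Flombaum specified.

```python
def add_digits(a, b):
    """Grade-school column addition, written as a Python program.
    a and b are lists of decimal digits, least significant digit first."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 47 + 85 = 132, with digits stored least significant digit first
print(add_digits([7, 4], [5, 8]))  # prints [2, 3, 1]
```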
Artificial intelligence, machine learning and deep learning
“Artificial intelligence and its subfield of machine learning are concerned with building algorithms that are genuinely intelligent, in the way that humans are,” Flombaum writes. “Are there algorithms that can figure out how to do things that people have not explained to them in detail? Are there algorithms that can do things that people have not done yet, perhaps things that humans couldn’t do at all? Are there algorithms that can discover things and acquire new knowledge on their own, maybe even make discoveries that have yet to be made? In a word, are there algorithms that can learn?
“The answer, of course, is yes. Machine learning is in part a field that has married computer science with a little neuroscience and a good amount of psychology. Machine-learning algorithms — the kind that pick Netflix movies for you, that buy and sell stocks at some hedge funds, that serve you ads on the Internet, that pilot autonomous cars — are algorithms that learn, algorithms that obtain knowledge apart from the endowment of their human overlords.
“Deep learning algorithms are based on some facts about how the human brain works, and some ideas about how learning takes place there. The idea can be summarized with a famous quote by a psychologist and neuroscientist named Donald Hebb: ‘Neurons that fire together wire together.’
“The brain is made up of cells called neurons. These cells have electrical properties, including occasional rapid changes in electric potential, what neuroscientists refer to as ‘action potentials,’ ‘spikes,’ and ‘firing.’ Neurons also have input and output connections to other neurons, hence the term ‘wire.’ So when one neuron fires, it can cause other neurons to also fire, if they are wired to one another in the right way.
“Hebb’s idea, at this point a central dogma in neuroscience and psychology, is that a good deal of learning happens through neurons strengthening or weakening the degrees to which they influence one another. . . . Deep learning essentially builds computer programs that work like this. They consist of ‘units,’ each with some amount of current activity. The activity in a unit is determined by the current activity in all the units it is connected to and the relative strengths of those connections. And, crucially, those strengths are altered after every experience, and those alterations change how the network behaves in the future. In other words, learning!
“With just two units behaving this way, or two neurons, one does not get very far, not much farther than Pavlov’s dog learning what a bell means. But deep learning works by including hundreds, thousands and sometimes many more units in a network.
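To picture how such a network behaves, here is a minimal sketch, again in Python, of a few units wired together in the way Flombaum describes, using the Pavlov example. The unit names, the learning rate and the simple ‘fire together, wire together’ update rule are illustrative assumptions of mine, not code from any real deep-learning system.

```python
class Unit:
    """A toy 'unit': its activity is driven by the units it is wired to."""
    def __init__(self, name):
        self.name = name
        self.activity = 0.0
        self.incoming = {}  # source unit -> connection strength

    def connect(self, source, strength=0.0):
        self.incoming[source] = strength

    def update_activity(self):
        # Activity is the activity of connected units, scaled by the strengths.
        self.activity = sum(src.activity * w for src, w in self.incoming.items())

    def hebbian_update(self, rate=0.1):
        # 'Neurons that fire together wire together': strengthen a connection
        # in proportion to the joint activity of the two units.
        for src in self.incoming:
            self.incoming[src] += rate * src.activity * self.activity

bell, food, saliva = Unit("bell"), Unit("food"), Unit("salivation")
saliva.connect(bell, strength=0.0)  # at first, the bell means nothing
saliva.connect(food, strength=1.0)  # food already triggers salivation

for _ in range(5):                  # pair the bell with food five times
    bell.activity, food.activity = 1.0, 1.0
    saliva.update_activity()
    saliva.hebbian_update()         # the weights change after every experience

print(round(saliva.incoming[bell], 2))  # about 0.74: the bell now drives salivation
```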
Curveballs and Three-Pointers
“Many of the things that deep learning algorithms are especially good for are things that people can do but can’t explain. Psychologists have for a long time distinguished between explicit and implicit knowledge. Explicit knowledge includes all the facts, experiences and opinions we talk about. Implicit knowledge is all the stuff we clearly know but can’t explain.”
Ask Orioles Hall of Famer Jim Palmer how to throw a curveball, and he’ll probably give an excellent description. But there are critical aspects of the pitch he won’t be able to explain.
“Ask Steph Curry how to shoot a three-pointer,” Flombaum says, turning to professional basketball. “He can probably give you some tips for improving your shot. But ask him to write down a set of step-by-step instructions . . . . Can you write down instructions on how to cut with scissors, how to write the letters in the English alphabet, or how to drive?”
Pedro Domingos, author of The Master Algorithm, explains why programs that acquire implicit human knowledge are so important: “As of today, people can write many programs that computers can’t learn. But more surprisingly, computers can learn programs that people can’t write. We know how to drive cars and decipher handwriting, but these skills are subconscious; we’re not able to explain to a computer how to do these things.”
Says Flombaum: “For a good chunk of our history with computers, we could only get them to do things that we could explain in painstaking detail. They may still be doing something we can do, but we don’t need to tell them how, and that’s fortunate, because we often don’t know how we do those things ourselves.”
Autonomous Cars and Teenagers
“The reason I especially like driving as an example is that it makes me think about the possibility that we humans learn to drive in the same way as autonomous cars,” Flombaum writes. “Cars teach themselves to drive through experience. They drive, while a huge network of weights adjusts itself in response to good things happening (staying on the road) and bad things happening (bumping into other things).
“Here is Domingos again explaining the process of deep learning in general terms: ‘If we give a learner a sufficient number of examples of each, however, it will happily figure out how to do them on its own, at which point we can turn it loose.’
“He uses the term ‘learner’ to refer to computer algorithms that are capable of learning. Wouldn’t it also be a fair description of a 16-year-old kid? What does the driving teacher or parent do when teaching someone to drive? Specific instructions may well matter, such as ‘Slow down, ease off the pedal, hand-over-hand.’ But much of that amounts to positive or negative feedback, heard by the driver merely as ‘Good, bad, very bad!’
“Maybe the skills that become implicit to humans are also learned mostly implicitly. The learner gets lots of experience, and figures out through trial, error and time which actions will lead to desirable versus undesirable outcomes. Once the learner has learned, he can’t explain what he knows, nor can anyone else really get in there and unpack it. A self-driving car, a teenager — what’s the difference?”
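To make that kind of feedback-only learning concrete, here is one more rough Python sketch: a learner that tries actions, hears nothing more informative than ‘Good’ or ‘Bad,’ and gradually comes to favor the action that earns a ‘Good.’ The actions, the feedback rule and the learning rate are invented for illustration; real autonomous-driving systems are vastly more elaborate.

```python
import random

# The learner's options and its current preference for each.
actions = ["ease off the pedal", "slam the brakes", "swerve"]
preference = {a: 0.0 for a in actions}

def feedback(action):
    # The teacher never explains anything; the outcome is simply graded.
    return 1.0 if action == "ease off the pedal" else -1.0

def choose():
    # Mostly pick the currently favored action, but occasionally explore.
    if random.random() < 0.2:
        return random.choice(actions)
    return max(preference, key=preference.get)

for trial in range(200):  # lots of experience, lots of trial and error
    action = choose()
    preference[action] += 0.1 * (feedback(action) - preference[action])

print(max(preference, key=preference.get))  # "ease off the pedal"
```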
“My own take is that not all learning in humans is deep learning, nor is it all implicit learning. However, deep learning in all its applications reminds us that a lot of what intelligence is amounts to the vexingly unexplainable, and that ‘teaching’ — something I do professionally and as a parent — need not always be explaining. Explaining might actually be useless a lot of the time. It would be facile to say something like ‘Show, don’t tell.’ But the truth may be even simpler: Just say ‘Good,’ or ‘Bad.’”
Contact Jonathan Flombaum at flombaum@jhu.edu or on Twitter @flombaum.