Truly Modern: More Room for Learning

In 1909, during the decades when the American K-12 school system was taking its modern form, Charles Eliot, Harvard’s newly retired president, was consummating a deal to publish a “five-foot shelf” of books that would contain all of the important knowledge needed by an educated person.

As first printed in 1910 each set of “The Harvard Classics” consisted of 50 hefty volumes containing the works of 300 authors. Eliot claimed that these fifty books could serve as a “portable university” making available six distinct courses of study: “The History of Civilization,” “Religion and Philosophy,” “Education,” “Science,” “Politics,” and “Criticism of Literature and the Fine Arts.” Eliot further claimed that reading the complete set would provide the equivalent of a liberal college education.  (Kirsch, 2001)

Although we are more than a century beyond the five-foot shelf, the idea that there is a fixed body of facts and procedures to know remains embedded in the DNA of our educational culture.

The modern version of the five-foot shelf, too, is to be mastered largely by reading the essential texts under the supervision of teachers who monitor student progress with a system of regularly scheduled tests.

Significantly, both Eliot’s five-foot shelf and our schools were developed well before we understood very much about our brains or how people learned. (Sawyer, 2006, pp. 1-2)

Only in the past three or four decades has the study of learning been grounded in findings from psychology, computer science, philosophy, sociology, and other disciplines. These new understandings have shown that many of the assumptions underlying traditional educational practices programmed into our educational DNA are seriously flawed.

During the decades when the assumptions and practices of the modern school emerged it was believed that children were essentially just smaller, ignorant adults.

To become proper adults, they needed teachers who would stuff them full of improving knowledge, such as that found in Eliot’s five-foot shelf of books.

To illustrate: During a nationwide tour of American schools in 1892, pediatrician Dr. Joseph Mayer Rice recorded his impressions of a New York City elementary school and in particular of the pedagogical views of its principal: “She believes that when a child enters upon school life his vocabulary is so small that it is practically worthless, and his power to think so feeble that his thoughts are worthless. She is consequently of the opinion that what a child knows and is able to do on coming to school should be entirely disregarded, and he should not be allowed to waste time, either in thinking or in finding his own words to express his thoughts, but that he should be supplied with ready-made thoughts as given in a ready-made vocabulary…Each child is treated as if he possessed a memory and the faculty of speech, but no individuality, no sensibilities, no soul.” (Rice, 1893, pp. 30-31)

But in contrast to the beliefs of Rice’s principal, the research findings from the past four decades have shown that the human baby’s brain is “a system of organs of computation, designed by natural selection to solve the kinds of problems our ancestors faced in their foraging way of life, in particular, understanding and outmaneuvering objects, animals, plants, and other people.” (Pinker, 1997, p. 21)

One of the contributions of computer science to our understanding of the brain has grown out of attempts to create intelligent machines, or artificial intelligence (AI). Early explorations in AI involved the development of artificial neural networks to simulate the brain’s neocortex with its billions of networked neurons.

Advances in technology (more powerful processors and larger storage in addition to more sophisticated mathematics) have created larger and more powerful artificial neural networks.

“Last June, a Google deep-learning system that had been “trained by viewing” 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7%, translated them into Chinese language text, and then simulated his own voice uttering them in Mandarin.” (Hof, 2016)

The method used to train a neural network to accomplish these impressive tasks, “with the eventual goal…to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog,” is “much the same as how a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.” (Hof, 2016)
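The training method Hof describes can be sketched at toy scale. The example below is a generic illustration, not Google’s system: a tiny two-layer network is shown the same examples thousands of times, and after each exposure its connection weights are nudged slightly toward the right answer (backpropagation). The pattern here is XOR, a classic small task that a network without a hidden layer cannot learn.

```python
# A minimal sketch of neural-network training (illustrative, not Google's system).
import numpy as np

rng = np.random.default_rng(0)

# Training examples: four input pairs and the pattern (label) we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR of the two inputs

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeated exposure: show the examples many times, each time nudging every
# weight a little in the direction that reduces the error (backpropagation).
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                   # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                 # the network's current guess
    grad_out = (out - y) * out * (1 - out)     # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)   # error pushed back to hidden layer
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

preds = (out > 0.5).astype(int)
print(preds.ravel())  # the pattern is learned when this matches [0 1 1 0]
```

The point of the sketch is the contrast drawn in the text: even this four-example pattern takes thousands of carefully engineered weight adjustments, while a child learns “dog” from a handful of furry, barking encounters.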

The wonders of what virtual neurons are capable of learning may divert attention from the ensorcelling power of human brains with real neural networks to learn, imagine and create.

While the virtual neural network needs lots of smart and expensive engineers in order to learn to recognize (most of the time!) the image of a cat in a YouTube video, give a baby brain a safe, well-lighted place with humans to interact with, and she can learn several languages that she will be using by the time she’s four to wrap the adults around her tiny fingers.

Like living organisms, human institutions such as schools are more likely to survive in a changing world when their practices evolve toward greater consistency with new empirically based understandings, such as the finding that the brain has been shaped by evolution to solve the problems of existence rather than to serve as a passive storehouse of facts and procedures.


Bransford, John D. (2001). How People Learn: Brain, Mind, Experience, and School, Expanded Edition. Washington, D.C.: National Academies Press.

Fisher, Douglas, Nancy Frey, and Carol Rothenberg (2008). Content Area Conversations. ASCD.

Hof, Robert D. (2016). “Deep Learning: With massive amounts of computational power, machines can now recognize and translate speech in real time. Artificial intelligence is finally getting smart.” MIT Technology Review. Retrieved October 7, 2016.

Kirsch, Adam (2001). Harvard Magazine, November-December 2001.

Pinker, Steven (1997). How the Mind Works. New York: W. W. Norton.

Rice, Joseph Mayer (1893). The Public-School System of the United States. New York: The Century Co.

Sawyer, R. Keith, ed. (2006). “The New Science of Learning.” In The Cambridge Handbook of the Learning Sciences. Cambridge, U.K.: Cambridge University Press.



brain, neurons, virtual neurons, Artificial Intelligence, learning, DNA, Charles Eliot, The Harvard Classics

