Tag Archives: STEM

Changing the Relationship Between Knowledge and the Child

Robotics combines engineering, engineering design, and technology in a way that, in the words of Marina Bers of the Tufts DevTech group, “connects the T and the E of STEM,” and it certainly merits a prominent position in STEM education.
Robotics might appear to fit into the STEM sequence as a subject for older students. At the Tufts DevTech research group, however, robotics has been introduced successfully to pre-Kindergarten children. The reasoning: because interventions that begin early are, in the long run, less costly and have greater impact than those that begin later, robotics should begin early.
Watch this DevTech produced video clip that documents the robotics work of young children.
The spirited dancing of the children is accompanied by dance movements enacted by two-wheeled robots which, while less enthusiastic, are more rhythmic and more disciplined than the children’s.
It is notable that although the robots are better at following the music’s rhythms, the robots’ movements were programmed by the same somewhat syncopated children. The video supports the case that young children are quite capable of engaging in robotics in non-trivial ways.
In their work with young children, the DevTech research group uses a computer language called CHERP (Creative Hybrid Environment for Robotic Programming). CHERP substitutes a set of interlocking wooden blocks for typed-in text. Each block is labeled with a graphic representing a command such as FORWARD, BACKWARD, BEGIN, or END, and a program is “written” by arranging the blocks. “The shape of the interlocking blocks and icons creates a physical syntax that prevents the creation of invalid programs and also eliminates the possibility of typographical errors,” notes Marina Bers. Once the blocks have been arranged into a program for the robot to follow, a scanner on the robot reads the program into the robot’s memory.
The behavior of the robot mimics the program developed by the child-programmer. Because the program is represented by the arrangement of blocks, children can change the program by rearranging the blocks. In addition, they can observe one another’s work and see how other children have solved a particular problem (“how did you make the robot spin five times?”).
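The “physical syntax” idea can be sketched in ordinary code. The following is a minimal illustration, not DevTech’s actual software; the SPIN command and the `scan_program` helper are our own invention for the example:

```python
# A minimal sketch (not DevTech's actual software) of how CHERP-style
# interlocking blocks create a "physical syntax": only known command
# blocks exist, so misspelled or invalid commands cannot be assembled.
VALID_BLOCKS = {"BEGIN", "FORWARD", "BACKWARD", "SPIN", "END"}

def scan_program(blocks):
    """Mimic the robot's scanner: accept the block sequence only if every
    block is a known command and the program is framed by BEGIN ... END."""
    if not blocks or blocks[0] != "BEGIN" or blocks[-1] != "END":
        raise ValueError("a program must start with BEGIN and finish with END")
    for block in blocks[1:-1]:
        if block not in VALID_BLOCKS:
            raise ValueError(f"unknown block: {block}")
    return blocks[1:-1]  # the commands the robot will execute

print(scan_program(["BEGIN", "FORWARD", "SPIN", "END"]))  # ['FORWARD', 'SPIN']
```

Rearranging the blocks and rescanning is exactly the edit cycle described above: the child changes the list, and the robot’s behavior changes with it.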
In a number of published studies Bers and her colleagues have collected evidence that
Robotics offers young children and teachers a new and exciting way to tangibly interact with traditional early childhood curricular themes. This study demonstrates that it is possible to teach Pre-Kindergarten children to program a robot with developmentally appropriate tools, and, in the process, children may not only learn about technology and engineering, but also practice foundational math, literacy, and arts concepts. While there are many challenges to overcome when implementing robotics in a busy Pre-Kindergarten classroom, our work provides preliminary evidence that teaching young children about and through computer programming and robotics using developmentally appropriate tools may be a powerful tool for educating children across multiple domains.
Why is it that, in addition to learning robotics and computer programming, the “children may not only learn about technology and engineering, but also practice foundational math, literacy, and arts concepts”?
Seymour Papert, a co-developer of the LOGO computer language in the late 1960s, asserted that a programming language like LOGO (or CHERP) changes the relationship between the child and knowledge.
He argued that most school instruction was based on “transmission” or the passing of “knowledge” from its possessor (the teacher) to the receiver (the student). When computers are used in schools, Papert’s argument continued, they are used to “program the child” in the same way that teachers program the child with the “required” knowledge.
The LOGO computer language was designed to enable the child to communicate with the computer. LOGO included a graphical Turtle that the computer’s user could move around on the screen: RIGHT 90 would cause the Turtle to turn 90° to the right, FORWARD 10 would command the Turtle to move 10 paces ahead, and so forth.
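The Turtle’s behavior can be simulated as plain state changes. The sketch below is illustrative only, not Papert’s LOGO; the class and method names are our own:

```python
import math

# A toy simulation of LOGO-style Turtle commands (illustrative names,
# not Papert's implementation): the Turtle keeps a position and heading.
class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0  # degrees; the Turtle starts facing "up"

    def right(self, degrees):   # RIGHT 90 -> turn 90 degrees clockwise
        self.heading -= degrees

    def forward(self, paces):   # FORWARD 10 -> move 10 paces ahead
        rad = math.radians(self.heading)
        self.x += paces * math.cos(rad)
        self.y += paces * math.sin(rad)

t = Turtle()
for _ in range(4):   # FORWARD 10, RIGHT 90, four times over:
    t.forward(10)    # the classic way a child discovers that a
    t.right(90)      # square brings the Turtle back to its start
print(round(t.x), round(t.y))  # 0 0
```

Debugging why the Turtle did *not* come back to its start is precisely the “thinking about thinking” Papert describes below.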
In the LOGO environment, the traditional relationship between the child and the knowledge was changed.
[t]he child, even at preschool ages, is in control: The child programs the computer. And in teaching the computer how to think, children embark on an exploration about how they themselves think. The experience can be heady: Thinking about thinking turns the child into an epistemologist, an experience not even shared by most adults. (Papert)
In addition, in the usual “teacher as source of knowledge” model, the child is placed in the “got it right/wrong” mode and, worse, may not know either what was wrong or how to fix the error.
As Papert notes, when you learn to program, you seldom get it right the first time: “Learning to be a master programmer is learning to become highly skilled at isolating and correcting ‘bugs’….”
This is also the case with other products of the intellect; they are usually neither “right” nor “wrong” but are “buggy” works in progress.
If Papert is correct, changing the relationship between the child and knowing is fundamental to learning. Robotics with young children is perhaps a place to begin the change.
Resources:
Epistemologist: one who studies epistemology: the theory of knowledge, especially with regard to its methods, validity, and scope. Epistemology is the investigation of what distinguishes justified belief from opinion.
Bers, M. U., Seddighin, S., & Sullivan, A. (2013). Ready for Robotics: Bringing Together the T and E of STEM in Early Childhood Teacher Education. Journal of Technology and Teacher Education, 21(3), 355–377.

Sullivan, A., Kazakoff, E. R., & Bers, M. U. (2013). The Wheels on the Bot go Round and Round: Robotics Curriculum in Pre-Kindergarten. Journal of Information Technology Education: Innovations in Practice, 12, 203-219. Retrieved from http://www.jite.org/documents/Vol12/JITEv12IIPp203-219Sullivan1257.pdf

Seymour Papert (1980). Mindstorms, Basic Books.


Two STEAM Examples

We present two real world examples of ways that STEM and the arts can be connected.

In the first, art museum professionals partner with medical educators to improve medical practice.

In the second, art gallery visitors are guided through an exhibition by A.I. (artificial intelligence technology).

Examining Art to Improve the Medical Examination.

In June of 2016, Bonnie Pitman, recently retired as Director of the Dallas Museum of Art, convened a major conference entitled “The Art of Examination: Art Museum and Medical School Partnerships” at New York’s Museum of Modern Art (MoMA). Participants represented sixty art museums and their partner medical schools.

Partnerships between art museums and medical schools are part of an emerging field called the medical humanities, an interdisciplinary field in which knowledge from the arts contributes to medical education and practice.

The conference “provided a sound overview of the field’s best practices, goals, history, terminology, evaluation, and future directions.” Such partnerships at major art and medical institutions in the U.S. and abroad advocate for these programs and “build a bridge between the arts and sciences.” (Pitman, 2016)

Careful examination of the history, composition, and themes presented by an art object marks the work of the art critic or art educator.

Similarly, a physician begins her work with an examination of the patient’s various physical and affective characteristics, some of which may be important to the diagnosis while others are not. The ability to discriminate the meaningful from the inconsequential is therefore an important skill shared by the art educator and the physician.

The first such art museum–medical school partnership was created in 1999, when Dr. Irwin Braverman (Yale Medical School) and Linda Friedlaender (Yale Center for British Art) began working together to develop the observational skills of medical students by training them to use the techniques and language of art criticism as they learned how to examine their patients. (Pitman, 2016)

The connections between art and medical practice have led to at least one hundred such partnerships to date. Sixty were represented at the conference, from the U.S., Canada, England, and Italy; forty more were on a waiting list.

A growing body of research published in medical journals also attests to the power of these collaborations between art museum professionals and medical educators, to the benefit of health care professionals and the community at large.

Going to the Tate Britain with an (artificially) Intelligent computer program named Recognition

You can visit Tate Britain in London, in person or online, to see and interact with the exhibition called Recognition. (Do not delay: Recognition closes on November 27.)

The development of the exhibition was stimulated by the 2016 IK Prize, which offers incentives to promote the use of digital technology in the arts.

The Tate Britain’s “mission is to increase the public’s enjoyment and understanding of British art from the 16th century to the present day…” as well as to increase the number of people who come to view the art, particularly young millennials, who, it is hoped, will become the next generation of art lovers.

However, Tony Guillan of the Tate recognized that looking at art and seeing art are not necessarily the same: looking is simple discrimination (“that’s a painting”), while seeing connects the art to reality.

The successful quest for the IK Prize began with the insight that the project would use A.I. technology, “…because getting machines to do what humans can do is one of the most exciting frontiers in technology…Is there anything more human than looking at art?” (Dobrzynski, 2016)

To compete, Tate Britain enlisted a number of partners: Microsoft; JoliBrain, a French A.I. company; and Fabrica, an Italian communication research company. Fabrica would lead the development of Tate’s entry.

The team at Fabrica began with the question: “What if we could link our everyday lives to the Tate’s collection to illuminate similarities between the present and the past?” They developed the idea that the goal could be met by allowing the viewers to “see the world through two different lenses,” how the world has been represented historically by artists and how the world is represented today, through the news media.  (Dobrzynski, 2016)

Under Fabrica’s leadership, the partners set to work: Microsoft provided programming support, and JoliBrain contributed its DeepDetect API (application programming interface: a set of routines, protocols, etc. that makes it easier to develop programs) as well as a DeepDetect server where the program would run.

Fabrica put a variety of artificial intelligence technologies together, “including computer vision capabilities, such as object recognition, facial recognition, colour and composition analysis; and natural language processing of text associated with images, allowing it to analyze context and subject matter and produce written descriptions of the image comparisons.” (Tate.org.uk)

As Recognition (or [re][cognition]) works, it creates a virtual collection of images by matching works from the Tate Britain collection with contemporary news photos from the news agency Reuters. The matches are based on similarities of objects, faces, composition, and theme that the A.I. finds as it views images.
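The matching idea can be illustrated with a toy sketch. Nothing below is Fabrica’s actual pipeline: the feature vectors are invented (a real system would derive them from the vision analyses described above), and only the general idea of ranking pairs by a similarity score is carried over:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented feature vectors standing in for object/face/composition analysis.
collection = {
    "Ophelia": [0.9, 0.1, 0.3],
    "The Lady of Shalott": [0.2, 0.8, 0.5],
}
news_photo = [0.85, 0.15, 0.25]

# Pair the news photo with the most similar work in the collection.
best = max(collection, key=lambda title: cosine(collection[title], news_photo))
print(best)  # Ophelia
```

The human feedback described next (“what makes this an interesting match?”) is what lets such a system refine scores beyond raw visual similarity.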

The human viewer can click to stop the process and examine any of the matches in the virtual gallery, giving Recognition feedback by responding to the prompt: “what makes this an interesting match?”

A.I. has been used in health care and transportation, but A.I. in art is “uncharted space,” according to Microsoft’s Eric Horvitz, which is why Microsoft was interested in the project: it is an opportunity to see how A.I. can be “creative and make mistakes and meander.” (Dobrzynski, 2016)

The humanities and science and technology are often seen as separate worlds: the one supposedly subjective, intuitive, vague; the other objective, precise, fact-filled.

But perhaps not. A medical student constructs her examination of a patient using language and insights from art criticism. Science? Art?

Human art gallery visitors are given a tour by an A.I. program that shows works from the gallery matched with news photos.

The humans are asked for their assessment of the match. The program uses the human generated assessments to refine its matches.

Humans and machine learn from one another. Humanities? Art? Technology? Science?

Time to reassess our categories.

 

Resources:

 

Dobrzynski, J. H. (2016). Artificial Intelligence as a Bridge for Art and Reality. New York Times, p. 18. Retrieved from http://www.nytimes.com/2016/10/30/arts/design/artificial-intelligence-as-a-bridge-for-art-and-reality.html

Sheets, H. M. (2016). How an Aesthete’s Eye Can Help a Doctor’s Hand. New York Times. Retrieved from http://www.nytimes.com/2016/10/30/arts/design/how-an-aesthetes-eye-can-help-a-doctors-hand.html

 

Pitman, B. (2016). The Art of Examination: Art Museum and Medical School Partnerships. Proceedings from The Art of Examination: Art Museum and Medical School Partnerships, New York and Dallas.

Tags:

STEM, STEAM, art museum-medical school partnerships, clinical practice, A.I., Tate Britain, human assessment, A.I. and art