Animation Used to Explain Scheme Functions

Oliver Grillmeyer

Computer Science Division
and School of Education
University of California
Berkeley, CA 94720


A multimedia text for teaching introductory computer science in Scheme is presented. The text contains a tool that animates a set of Scheme functions. Students often form erroneous models of these functions based on a few simple examples. The studies presented show that using this tool, in classroom demonstrations and interactively by individuals, can improve students' understanding of these functions.


Multimedia offers unique learning and teaching opportunities and a broad avenue of research possibilities. These new directions dramatically change how one typically views instruction and learning. Pea and Gomez [5] give examples of how one should view the learner and the teaching process in a multimedia setting, including viewing knowledge as socially constructed, viewing learning as situated in communities of practice, taking a learner-centered view, and modeling expert practice. Chi et al. [1] and Pirolli and Recker [6] have found that the deepest learning occurs when the learner solves problems herself or carries out metacognitive tasks such as reflection, self-evaluation, and assessment. In a multimedia text, as the learner receives new information she can be tested and assessed directly, and appropriate actions can be taken based on the assessment results. This yields an environment similar to one-on-one tutoring.

Animations and illustrations can improve learning. White [7] studied animations in microworlds to introduce ideas to students. She found that students using her ThinkerTools system outperformed students studying physics in a regular science class. Mayer and Gallini [4] found that students with low prior knowledge in a domain had significantly better conceptual recall and problem solving abilities when presented with diagrams that augmented the text.


I have developed a multimedia computer science textbook based on Exploring Computer Science with Scheme, Grillmeyer [2], an introductory computer science text. In addition to typical hypertext features, I have incorporated tools to facilitate learning: an animator that graphically displays the actions that various functions perform; a Scheme interpreter for experimenting with and testing code; a personal notebook in which the user can enter notes or copy sections of the text or interactions with the Scheme interpreter; and a bookmark tool for marking and naming sections of the text for easy access.

The animator illustrates how various list functions and applicative operators work by making their normally tacit actions visible to the user. These include the list functions cons, list, and append; the applicative operators map, apply, and for-each; and functions taken from Common LISP: find-if, remove-if, count-if, reduce, and every. The user specifies function calls, which are illustrated by moving and manipulating the arguments. Using the integrated Scheme interpreter, new functions can be written and passed directly to the applicative operators.
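For reference, a few representative calls to the animated functions, with results following standard Scheme and Common LISP semantics:

```scheme
; List construction: cons adds one element to the front of a
; list, list gathers its arguments, append splices lists together.
(cons 1 '(2 3))        ; => (1 2 3)
(list 1 '(2 3))        ; => (1 (2 3))
(append '(1 2) '(3 4)) ; => (1 2 3 4)

; Applicative operators: map applies a function to each element
; and collects the results; apply calls a function with the
; elements of a list as its arguments.
(map (lambda (n) (* n n)) '(1 2 3)) ; => (1 4 9)
(apply + '(1 2 3))                  ; => 6
```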

The creation and design of the animator was motivated by my teaching experiences. Students have difficulty using the list creation functions and applicative operators properly, in large part because of their incomplete notions of these functions. Often their understanding of an applicative operator is predicated on the function being passed rather than on the operator itself. This, coupled with a poor sense of the intrinsic actions of the applicative operators, leaves students struggling to decide which applicative operator to choose when solving problems. Students often rely on an oversimplified model based on a few simple examples and have difficulty extending beyond it. For example, many students think that (map + '(1 2 3)) returns the sum 6 instead of the list (1 2 3). Another mistake is to think that (find-if - '(1 -2 3)) will return the first negative number, -2, instead of the first number yielding a true result when called with -, which would be 1.
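Both misconceptions can be dispelled by evaluating the calls directly (find-if here follows the Common LISP semantics the animator uses):

```scheme
; map applies + to each element individually; (+ 1) is just 1,
; so the result is a list, not a sum.
(map + '(1 2 3))      ; => (1 2 3)
(apply + '(1 2 3))    ; => 6, the call students likely intend

; find-if returns the first element for which the predicate
; yields a true value.  (- 1) is -1, which is true (any non-#f
; value is), so the very first element is returned.
(find-if - '(1 -2 3)) ; => 1
```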


I have undertaken a series of studies to assess the animation tool. Pilot studies helped illustrate the misconceptions that students have and how the animator could help correct these. Areas in which the animator does not help or actually hinders were also exposed.

One study looked at using the animator to teach the list functions in a classroom presentation. Students had had a brief introduction to the list functions but had not done many exercises creating lists. Each student was asked to complete some function calls to build different lists using the list functions. I then gave a demonstration of the animator illustrating these functions, after which students were given the opportunity to refine their answers. After the intervention, 28% of the students' answers improved, 24% remained correct, 46% remained incorrect, and 2% got worse. Notably, this followed only a five-minute demonstration; with individual interactive interventions, I would expect higher success rates.

A more detailed list study compared use of the animator with reading a section of the text. The study used subjects with no prior knowledge of the list functions. They either used the animator or read a section of text, and then answered questions involving the list functions. The group using the animator showed statistically significant improvement (p < 0.05) over the group reading the text.

Classroom studies akin to the list classroom study explored how effective a brief intervention is at providing a more correct understanding of the applicative operators. Students were asked what various calls to applicative operators would return and how they work. They were then given a five-minute demonstration of the animator and allowed to correct their answers. In one study, students improved on 23% of their questions after the intervention, 75% remained the same, and 2% got worse. Excluding the cases in which students had already answered correctly (47% of the questions), the results were 43% improved, 53% stayed the same, and 4% declined. These are very positive results given such a short, noninteractive intervention.

Another study used a variety of questions to assess the students' basic and detailed understanding of the applicative operators and the functions that can be passed to them. This study used a pre-test, intervention, post-test design with three treatment groups: one using the animator; another using a tool called the replacement modeler (Mann et al. [3]) that represents applicative operators in terms of functions the students are already familiar with; and a control group doing exercises directly on a Scheme interpreter instead of using either software tool. Overall there were no statistically significant differences, but on some individual questions students using the animator performed significantly better. Students who used the animator enjoyed it, and on average they did perform better than students who used the replacement modeler or the Scheme interpreter. Students using the animator often talked about the applicative operators using the visualizations presented (e.g., "the elements of the list will go down to the function and get evaluated and then the results will go into a list one by one").

My theory is that the animator's simple visual metaphor gives students a clear picture of the actions these functions perform and helps them write functions and answer questions about their use. This more accurate model will, I hope, replace incomplete models based on a few simple exemplars. In further research I hope to better understand this mechanism by studying the cognitive processes involved, so that the software can be improved.


[1] Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R., Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182, 1989.

[2] Grillmeyer, O., Exploring Computer Science with Scheme. Springer-Verlag, 1998.

[3] Mann, L. M., Linn, M. C., & Clancy, M. J., Can tracing tools contribute to programming proficiency? The LISP Evaluation Modeler. Interactive Learning Environments, 4, (Special Issue on Computer Programming Environments), (1), 96-113, 1994.

[4] Mayer, R. & Gallini, J., When is an illustration worth ten thousand words? Journal of Educational Psychology, 82, (4), 715-726, 1990.

[5] Pea, R. D. & Gomez, L. M., Distributed multimedia learning environments: Why and how? Interactive Learning Environments, 2, (2), 73-109, 1992.

[6] Pirolli, P. L. & Recker, M., Modelling individual differences in students' learning strategies. Journal of the Learning Sciences, 4, (1), 1-38, 1995.

[7] White, B., ThinkerTools: Causal models, conceptual change, and science education. Cognition and Instruction, 10, (1), 1-100, 1993.