
Helping Learners Visualize and Comprehend Algorithms*
Steven R. Hansen, Air Command and Staff College
N. Hari Narayanan, Auburn University
Dan Schrimpsher, Teledyne Brown Engineering

The idea of using animations to illustrate dynamic behaviors of computer algorithms is over fifteen years old. Over a hundred algorithm animation systems have been built since then, with most developed in the belief that the animations would serve as effective learning aids for students. However, only recently have researchers started asking the question – do algorithm animations really help? Unfortunately, results of experiments driven by this question have been disappointing. We believe that previous attempts at using animation to teach algorithm behavior were unsatisfactory not because of a flaw with animation as a technique, but because of the approach used to convey the animations. This paper provides an overview of our algorithm visualization research, based on the premise that a rethinking of algorithm animation design is required in order to harness its power to enhance learning. A novel theoretical framework for the design of effective algorithm visualizations, one which espouses embedding interactive analogies and animations in hypermedia to enhance contextual learning, is presented first. We then describe the architecture of HalVis, an implemented system based on this framework, and illustrate the various learning components of this architecture through an annotated slideshow of a visualization of the SelectionSort algorithm. Finally, a summary of results from eight empirical studies conducted over three years, involving more than 230 undergraduate students, is provided. These experiments demonstrated a statistically significant advantage of our framework over both traditional means of instruction and algorithm animations representative of extant research on this topic. They also led to a surprising discovery of the important role of interactive and animated analogies in priming learning about algorithms from subsequent visualizations.

1. Introduction

Over nineteen years ago, with the unveiling of the movie Sorting Out Sorting at a Computer Graphics Conference (ACM SIGGRAPH-81), the idea of using graphics and animation to illustrate the dynamic behavior and functionality of computer algorithms was born. Algorithm animations appeared to hold great promise as instructional aids, with many dozens having been built since then, such as Balsa (Brown, 1988) and Tango (Stasko, 1990). Most were developed in the belief that algorithm animations would serve as effective supplements to lectures for students to learn about algorithms. This belief has a strong intuitive basis. Students have always had a relatively difficult time understanding abstract mathematical notions, especially when these include the dynamics of how algorithms manipulate data, so concretizing these notions graphically and animating them to illustrate the dynamics ought to improve learning.

Although research on designing and deploying algorithm animations spans almost two decades, only recently have researchers started asking the question – do algorithm animations really help? Dozens of experiments have been conducted to test whether algorithm animations lead to improved understanding of algorithms. Hundhausen (1997) presents an excellent summary of the results of 29 empirical studies pertaining to the comprehension efficacy of algorithm animations and, more generally, software visualizations. Unfortunately, all one can say about this accumulated evidence is that the results, at best, are mixed. While animations are enthusiastically received by the students, none of the studies has proven conclusively that these visual presentations actually improve learning (Badre et al., 1991; Byrne, Catrambone & Stasko, 1996). The following quotation succinctly expresses the frustration felt by researchers working in this area: "Unfortunately, the viability of algorithm animations as instructional aids remains rooted in intuition. No substantive empirical evidence has ever been presented to support these claims" (Stasko, Badre & Lewis, 1993). Is this because animation is an ineffective teaching tool? Intuition tells us that this is not likely.

2. Rethinking Algorithm Animations - From Animations to Visualizations

Research presented here is based on the premise that animations are indeed useful. We believe that previous attempts at using animation to teach algorithm behavior were unsatisfactory not because of a flaw with animation as a technique, but because of the approach used to convey the animations. Hence, a rethinking of algorithm animation design is required in order to harness its power to enhance learning. Theoretical foundations of the resulting novel approach to algorithm visualization are based on research in several areas.

Drawing from these sources, we have developed a framework for the design of algorithm visualizations and a corresponding software system. The key idea is to embed algorithm animations at various levels of abstraction inside a hypermedia environment. Distinctive features of our design framework and their rationale are described below.

  1. Learning Objectives: Algorithm visualization design starts with a top-down process in which explicit learning objectives are used to divide, describe and deliver information. For example, an important learning objective for students in learning the BubbleSort algorithm is to first comprehend its central metaphor of large values "bubbling up." Another learning objective is to understand the core operation of swapping two data values that all sorting algorithms employ. This learning objective-based top-down design approach is learner-centered ("what do students need to know") rather than technology-centered ("what can we show"). In this framework, the focus is not on the animations but rather on the orchestration of various hypermedia elements to meet the learning objectives, leading to more effective visualization.
  2. Scaffolding: The self-learning process of students begins with familiar analogies designed to provide a basic conceptual understanding of the essential elements of the algorithm. An example is the animated playing cards that illustrate sorting and merging in the MergeSort algorithm visualization. These analogies serve the purpose of "scaffolding" (Hmelo & Guzdial, 1996) and are a natural prelude to subsequent learning. Douglas, Hundhausen & McKeown (1995) and Stasko (1997) have observed that when students are asked to design algorithm visualizations, they employ real-world metaphors such as a football game. In our design framework, the analogy both introduces the algorithm, and bridges the conceptual gap between the abstract components and behaviors of the algorithm and the concrete graphical representations used in subsequent animations to depict these. This facilitates comprehension by allowing the learner to internally connect the analogy and the algorithm animations. All this occurs prior to the showing of any algorithm animation. We hypothesize that analogies scaffold learning by setting the stage for subsequent comprehension of the detailed steps of the algorithm, helping to improve long-term retention, and leading to a more effective visualization experience.
  3. Animation Chunking: With the scaffolding provided by the analogy, learning can proceed to assimilating the discrete operations on data that form the building blocks of any algorithm. Unfortunately, current algorithm animation systems present the detailed dynamics of an algorithm as a one-shot, stand-alone show that entertains but tends to obscure the very details the student needs to learn. Research on cognitive media types (Recker et al., 1995) points to the importance of using different types of information (definitions, examples, etc.) in teaching algorithmic concepts. Kehoe & Stasko (1996) explain the importance of multiple representations and media, concluding that text is important for precision, pseudocode is useful for conveying steps of the algorithm, and animations are good for depicting operational behavior. We devised a technique called "animation chunking" in which animations are broken up into meaningful bite-sized pieces. Each animation chunk is presented in one pane of a window, in synchrony with other representations. For instance, textual explanations of the animation appear in one pane while a step-wise description of the algorithm (called pseudocode), with simultaneous highlighting of the steps being executed for the animation, is shown in another pane. Combining, synchronizing, and presenting information in multiple media in discrete chunks in this fashion is more than mere animation; it is visualization.
  4. Animations at Multiple Levels of Granularity: Three distinct kinds of animations provide views of algorithm behavior at different levels of granularity. The first is an animated real-world analogy intended to function as a scaffold and a bridge by conveying the essential features of the algorithm in familiar terms. The second is a detailed micro-level animation that focuses on specific algorithmic operations. The third is a macro-level animation that shows the algorithm's behavior on large data sets. The detailed animation is chunked at multiple levels (at the level of a single instruction execution, at the level of a "pass" for algorithms that make multiple passes over data, etc.). Each chunk is presented in tandem with pseudocode highlighting and textual/aural explanations. The learner can elect to see the animation at any of the available chunking levels. The micro-animation is presented before the macro-animation, both of which are seen by the learner only after the animated analogy. This sequential design is intended to address a particular difficulty uncovered by an earlier study: "a paradoxical problem with the animation is that it shows the big picture or emergent qualities that might be appreciated only by those that already understand the algorithm at the mechanical level" (Byrne, Catrambone & Stasko, 1996).
  5. Rich Interactivity: Most educators agree that increased student interaction improves learning. Experiments conducted by Lawrence, Badre & Stasko (1994) indicated a learning advantage for students who had active involvement in the creation of input values to algorithm animations. On the other hand, passively watching an animation may not facilitate learning (Rappin et al., 1997; Stasko, Badre & Lewis, 1993). In our algorithm visualization framework, all animations allow active intervention by the learner. Besides providing the usual replay and speed controls for animations, students are encouraged to alter the data and rerun the animations to see how the algorithm performs on data sets of their own choosing. Students can also make predictions about performance and compare these with actual results. All static and dynamic presentations are embedded within a hypermedia structure which, by means of hyperlinks that provide pathways to multimedia explanations of basic algorithmic concepts, provides additional knowledge on-demand and in-context. The additional explanations appear only when demanded by the learner through a clicking-on-hyperlink action. These explanations of basic concepts appear in the context of their use in illustrating a specific algorithm. Increasing the level of interaction in these different ways can lead to more effective visualization.
  6. Critical Thinking: A potential problem with multimedia presentations is often stated as "hands-on, mind-off," where the interactions engage the user so much that analytical thought about the content of the presentation is inhibited. Two kinds of probes are employed to avoid this syndrome. One, called "ticklers," includes questions that pop up in random order, but always in an appropriate context, and can be dismissed by the user. These questions require considerable mental effort in the form of self-explanations, which have been shown to promote comprehension (Chi et al., 1989). The system does not force students to answer these questions before they can proceed. The other kind, which we call "articulation points," involves questions the learner must answer before proceeding, and the system provides immediate feedback about the correctness of the answer. These questions are tied to learning objectives set out by the designer and help students self-diagnose their learning and allow them to reflect on what they know, what they don't, and where to find relevant information. Ticklers appear randomly while articulation points are placed between different modules of an algorithm visualization.
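The chunking idea in point 3 can be made concrete in code. The sketch below is our own Python illustration, not part of HalVis (which was implemented in Asymmetrix Toolbook): a generator runs BubbleSort and yields a pause point after every statement-level event and after every pass, so a hypothetical viewer can choose the granularity at which the animation stops. The event names ('step', 'pass', 'done') are illustrative assumptions.

```python
# Hypothetical sketch of "animation chunking": a generator yields pause
# points at different granularities while BubbleSort runs. Event names
# ('step', 'pass', 'done') are our own, not HalVis's.

def bubble_sort_chunked(data):
    """Yield (event, state) pairs: 'step' after each comparison/possible
    swap, 'pass' after each full sweep, and 'done' when sorted."""
    a = list(data)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # the core swap operation
            yield 'step', list(a)                # finest chunk: one statement
        yield 'pass', list(a)                    # coarser chunk: one pass
    yield 'done', list(a)                        # coarsest: run to completion

# A viewer choosing the "pass" granularity would pause only on 'pass' events:
passes = [state for event, state in bubble_sort_chunked([5, 1, 4, 2])
          if event == 'pass']
```

Selecting which events to pause on mirrors the learner's choice of chunking level described above; a one-shot animation corresponds to ignoring everything but the final 'done' event.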

3. System Architecture

We built a system, called HalVis (Hypermedia Algorithm Visualizations), as a proof-of-concept and a prototype for the empirical evaluation of the framework presented in the previous section. It is implemented using Asymmetrix Toolbook™ and employs all the features described previously. Visualizations of four popular sorting algorithms (BubbleSort, SelectionSort, MergeSort and QuickSort) and one common graph algorithm (Dijkstra's Shortest Path algorithm) were created for experimental purposes. Each visualization consists of the five modules (see Figure 1) described below.

HalVis SelectionSort example available for download:
PowerPoint file (4.8 MB)
For Windows: Zip file (2.9 MB) of the PowerPoint file.
For Mac: HQX file (2.9 MB)
Or, view as web pages (image quality is lower than in the downloadable files).

Figure 1. HalVis Architecture.

  1. Conceptual View: This module introduces a specific algorithm in very general terms using a familiar analogy. For instance, BubbleSort is introduced using a flask of water with bubbles that rise to the surface according to their size. The Shortest Path algorithm is introduced with a simple game that allows the student to observe a map of airline fares through various cities and to select the route with the cheapest fare. This module uses animations, text and audio to present the essentials of the algorithm in familiar terms, and sufficient bridging information to help learners proceed from the visual elements in the analogy to the graphical depictions of data and algorithm operations in later modules.
  2. Detailed View: This module contains two screens. One consists of a detailed textual description of the algorithm alongside a pseudocode representation of it. Embedded in the text are hyperlinks to related information in the Fundamentals module. The second screen contains four windows that depict various aspects of the algorithm's behavior. The Execution Animation window shows how steps of the algorithm modify data using smooth animation. The animation is chunked at multiple levels of granularity corresponding to meaningful units of the algorithm's behavior. The level of chunking (which determines when the animation pauses, providing the learner with an opportunity for reflection) can be altered by the learner. At the lowest level, the animation displays the execution of an individual statement, pausing for the learner's signal to proceed. The next level corresponds to a logical set of operations, like completion of a single pass within a loop. At the highest level, the animation proceeds to completion without pausing. The Execution Status Message window provides comments and textual feedback to the student about key events and actions during execution. The Pseudocode window shows the steps involved in the algorithm, which are highlighted synchronously with the animation. Finally, the Execution Variables window contains a scoreboard-like panorama of the variables involved in the algorithm and their changing values. Before launching the animation, students can change the data input to the algorithm as well as the speed and granularity of animation. Execution of each step of the algorithm affects the display in the four windows simultaneously. HalVis intentionally limits the number of data items in the Execution Animation window to focus attention on the micro-behavior of the algorithm.
    A screenshot (25 KB) of the Detailed View.
  3. Populated View: This module is designed to provide students with an animated view of the algorithm on large data sets to make its macro-behavior explicit. Many of the details presented in the detailed view are obscured to enhance the student's focus on the algorithm's overall performance. For example, the pseudocode is not shown, nor are the variable values. Animations in this module are similar to those provided by other algorithm animation systems, but there are two novel features. One is a panel of counters that show pertinent performance-oriented information such as number of comparisons, swaps, recursive calls, and so on. Another is a facility for the student to make a prediction about different parameters of algorithm performance and then compare those against the actual performance when the animation is running. When the student launches the animation, the system prompts for predictions, creates a random initial data configuration, and proceeds to animate the algorithm.
  4. Questions: The Questions module presents the student with a combination of multiple choice, true-false, and algorithm debugging (by reordering the steps) questions to measure competency and comprehension. The system provides immediate feedback on student responses. HalVis also pops up context-sensitive tickler questions to help focus student attention on key aspects of the algorithm in the other modules.
  5. Fundamentals: The Fundamentals module contains information about basic building blocks and concepts common to virtually all algorithms. Examples include Comparing & Swapping Data, Looping Operation, Recursion, and so on. Generally, this module is accessible only through hyperlinks from the Conceptual, Detailed and Populated Views, so that the basic information is presented on-demand (in response to a learner request in the form of clicking on a hyperlink) and in-context (of algorithm-specific information within which the hyperlink is embedded).
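The counter panel of the Populated View can be approximated by instrumenting the sort itself. The following is a hedged Python sketch of our own (HalVis was built in Toolbook, not Python): SelectionSort counting comparisons and swaps, the kind of performance figures a student would predict and then check against the running animation.

```python
import random

def selection_sort_counted(data):
    """SelectionSort instrumented with the kind of counters shown in the
    Populated View: number of comparisons and number of swaps."""
    a = list(data)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1          # every element is examined once per pass
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]
            swaps += 1                # at most one swap per pass
    return a, comparisons, swaps

# As in the Populated View, a student could predict the counts for a random
# data configuration and compare the prediction against the actual numbers.
data = random.sample(range(100), 20)
result, comps, swaps = selection_sort_counted(data)
# SelectionSort always makes n*(n-1)/2 comparisons; for n = 20 that is 190.
```

A prediction exercise falls out naturally: the comparison count is fixed by n, while the swap count depends on the initial data, which is exactly the kind of contrast the prediction facility is meant to surface.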

4. Experiments with Hypermedia Algorithm Visualization

We conducted a series of eight experiments over three years, involving 232 computer science undergraduate students as subjects. These experiments had two primary goals. The first was to evaluate the effectiveness of hypermedia visualizations of algorithms, specifically to validate our hypothesis that visualizations designed according to our framework are more effective than traditional means by which students learn about algorithms. The second was to determine the differential contributions of various features of HalVis to learning effectiveness. In each of the experiments, subjects were randomly divided into two (or four) matched groups. All groups took a pretest that measured their prior knowledge, learned one or more algorithms under different experimental conditions (using full or partial versions of HalVis or by a traditional method), and then took a posttest that measured their knowledge improvement. Students were tested on their ability to recognize and reorder pseudocode descriptions of algorithms, mentally simulate algorithmic operations, and predict resulting data structure changes. In these studies, we did not differentiate between visual and verbal learners since HalVis contains rich textual and visual presentations to support both kinds of learner dispositions. What follows is a summary of findings from these experiments. Other publications (Hansen, 1999; Hansen, Schrimpsher & Narayanan, 1998; Hansen, Schrimpsher, Narayanan & Hegarty, 1998; Hansen & Narayanan, 2000) contain additional experimental details.

Table 1 lists information about these experiments, showing the level and number of students involved in each, learning method used by the experimental groups and the extent of their learning, the algorithm(s) studied, and the statistical significance of results. For the first five experiments, group learning is indicated in parentheses by the mean percentage posttest score. For the next three experiments comparing different versions of HalVis, group learning is indicated by the mean percentage of pretest to posttest score improvement.

Table 1.
Experiment Summary.

Experiment I showed that hypermedia visualization is more effective than a textbook for learning an algorithm. This result was successfully replicated in Experiment II using more advanced students. Experiment III compared learning from HalVis to learning from a compilation of the best algorithm descriptions (extracted from 19 textbooks published between 1974 and 1997) followed by problem solving activities. While the posttest averages favored the HalVis group, the difference between the two groups was not statistically significant in this case. This experiment indicated that interactive visualization could be as effective as learning from carefully crafted text, when problem solving followed that textual learning.

Experiment IV was designed to compare and evaluate how algorithm visualization (AV) and conventional classroom lectures interact and contribute to student learning. Our hypothesis was that AV alone would assist learning more than a lecture. If so, it follows that AV used in conjunction with lecture should assist learning even more. We used a 100-minute live classroom lecture, delivered by a computer science faculty member known for his teaching ability, in two consecutive class sessions. Participants were second-year computer science students at Auburn University, divided into two matched groups. One group used HalVis before attending the lecture (called the visualization-lecture –VL– group) and the other group used HalVis after the lecture (called the lecture-visualization –LV– group). Having both groups attend the same lecture eliminated variations that can occur in different lecture sessions, while allowing common interactions with the teacher. A first posttest was given between the two phases, and a second posttest was given after both phases. The first posttest results (shown in the table) indicated that the VL group learned significantly more from HalVis than the LV group did from the lecture. Following a session with HalVis, the VL group's average posttest score was 70% compared to 44% for the LV group. The second posttest results showed that after receiving both the lecture and the visualization, both groups performed at the same level (72%). These results suggest that interactive visualizations can have a significantly higher learning impact than a lecture, and that a combination of both is even better.

Experiment V compared the effectiveness of learning from HalVis to learning from an algorithm animation typical of extant research on this topic. One of the most mature, widely reported, and publicly available algorithm animation platforms is the Tango software suite developed by Stasko (1997). The Tango software distribution contains a library of animated algorithms, including eight animations of the Shortest Path algorithm. Of these eight, we selected one that appeared to be the most complete, easiest to understand, and which most closely matched the features of the HalVis system, as a representative animation. One group of students interacted with HalVis and the other group interacted with the Tango animation. The second group was also provided with a textual description of the algorithm, to which the HalVis group did not have access. On the posttest, the HalVis group averaged 89% while the Tango group averaged 71%, indicating that a hypermedia algorithm visualization is more effective than a mere algorithm animation.

With five studies demonstrating the learning benefits of HalVis, we next asked the question: which features or modules of HalVis are producing the observed learning benefits? One approach to answering this is to build different versions of HalVis by selectively eliding specific features or modules, and then to experimentally compare these versions against the original. We designed and carried out three such studies. In Experiment VI we identified three features as most likely to have an impact on learning – animation chunking, highlighting steps of the pseudocode, and questions (both ticklers and articulation points). We built three versions of the QuickSort algorithm visualization in HalVis, with each of these features removed. Four groups of students worked with the full and three elided versions. They were tested for their knowledge of the algorithm before and afterwards. As seen in the table, the HalVis group learned the most, followed by the group that did not get highlighted steps of the pseudocode, the group that did not get the questions, and the group that saw one-shot animations with no chunking. While these results did not attain statistical significance, the trend indicates that animation chunking may be the most important feature, followed by questions and highlighting steps of the algorithm in tandem with the animations.

Experiments VII and VIII were designed to reveal learning contributions of the three modules of HalVis – Conceptual View, Detailed View and Populated View. In Experiment VII four matched groups of students interacted with one of four versions of HalVis illustrating the QuickSort algorithm – a complete version (CDP version: all three views) and three elided versions (DP version: Conceptual View removed; CP version: Detailed View removed; and CD version: Populated View removed). Our hypothesis was that the most important view was the most information-rich view – the Detailed View. We also expected that the Populated View would follow in significance and that the contribution of the Conceptual View would be the least since it contained the least amount of algorithm-specific information. As expected, the group that received all three HalVis views performed the best. However, the groups that performed at the next level were not the ones exposed to the Detailed View but rather the groups that interacted with the Conceptual View. Perhaps the most noteworthy observation from this experiment is the effect of the Conceptual View in apparently priming the learning of information presented in subsequent views. The groups that interacted with the Conceptual View in any combination with other views performed more than twice as well as the group that lacked the Conceptual View.

Experiment VIII isolated and compared each of the three views. Its procedure was identical to that of Experiment VII, with four matched groups of participants. The CDP group worked with a full version of HalVis illustrating Dijkstra's Shortest Path algorithm. The C group worked only with the Conceptual View of this algorithm. The D group worked only with the Detailed View of this algorithm. The P group worked only with the Populated View of this algorithm. Our hypothesis was that the Detailed View would prove to be the most valuable because of the amount of information it provided. We were uncertain as to how the impacts of the other views would rank, since neither the Populated View nor the Conceptual View contained the volume or depth of information available in the Detailed View. As expected, the CDP group outperformed the others, followed closely by the D group. Interestingly, the C group outperformed the P group by 21%. It is illuminating to note how well the C group did with the limited amount of information (only the analogy) that they received. This reinforces the important role of interactive and animated analogies in learning about algorithms from visualizations.

5. Conclusion

This paper contains an overview of algorithm visualization research conducted at the Intelligent & Interactive Systems Laboratory at Auburn University over the last three years. This work was motivated by the present state of affairs in algorithm animation research. While the idea of using animations to illustrate the dynamic behaviors of computer algorithms is nearly two decades old, experiments intended to demonstrate the learning benefits of traditional algorithm animations have mostly produced disappointing results. We based our work on the premise that a rethinking of algorithm animation design is required in order to harness its power to enhance learning. The resulting theoretical framework for the design of effective algorithm visualizations, the architecture of HalVis – an implemented system based on this framework – and a summary of eight empirical studies are presented. Collectively, these experiments provide statistical validity to the hypothesis that hypermedia algorithm visualizations can be significantly more effective than traditional teaching methods and algorithm animations, considered in isolation. They also led to a surprising discovery of the important role of interactive and animated analogies in priming learning about algorithms from subsequent visualizations. More information on this research is available at our group's web site.

6. References

Badre, A., Beranek, M., Morris, J. M. & Stasko, J. T. (1991). Assessing program visualization systems as instructional aids. (Technical Report No. GIT-GVU-91-23). Atlanta, GA: College of Computing, Georgia Institute of Technology.

Brown, M. H. (1988). Exploring algorithms using Balsa-II. Computer, 21(5): 14-36.

Byrne, M., Catrambone, R., & Stasko, J. T. (1996). Do algorithm animations aid learning? (Technical Report No. GIT-GVU-96-18). Atlanta, GA: College of Computing, Georgia Institute of Technology.

Chi, M.T.H., Bassok, M., Lewis, M., Reimann, P., & Glaser, R. (1989). Self-explanations: how students study and use examples in learning to solve problems. Cognitive Science, 13: 145-182.

Crosby, M., & Stelovsky, J. (1995). From multimedia instruction to multimedia evaluation. Journal of Educational Multimedia and Hypermedia, 4(2/3): 147-162.

Daily, B. (1994). Multimedia and its impact on training engineers. International Journal of Human Computer Interaction, 6(2): 191-204.

Douglas, S. A., Hundhausen, C. D., & McKeown, D. (1995). Toward empirically-based software visualization languages. In Proceedings of the 1995 IEEE Symposium on Visual Languages (pp. 342-349). Los Alamitos, CA: IEEE CS Press.

Hansen, S. R. (1999). A framework for animation-embedded hypermedia visualization of algorithms. Unpublished Ph. D. Dissertation, Auburn, AL: Dept. of Computer Science & Software Engineering, Auburn University.

Hansen, S., & Narayanan, N. H. (2000). On the role of animated analogies in algorithm visualizations. In Proceedings of the International Conference of the Learning Sciences (to appear), Mahwah, NJ: Lawrence Erlbaum Associates.

Hansen, S., Schrimpsher, D., & Narayanan, N. H. (1998). Learning algorithms by visualization: A novel approach using animation-embedded hypermedia. In Proceedings of the International Conference of the Learning Sciences (pp. 125-130), Charlottesville, VA: Association for the Advancement of Computing in Education.

Hansen, S., Schrimpsher, D., & Narayanan, N. H. (1999). From algorithm animations to animation-embedded hypermedia visualizations. In Proceedings of the World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-MEDIA'99), Charlottesville, VA: Association for the Advancement of Computing in Education.

Hansen, S., Schrimpsher, D., Narayanan, N. H., & Hegarty, M. (1998). Empirical studies of animation-embedded hypermedia algorithm visualizations. (Technical Report No. CSE98-06). Auburn, AL: Dept. of Computer Science & Software Engineering, Auburn University.

Hmelo, C. E., & Guzdial, M. (1996). Of black and glass boxes: Scaffolding for learning and doing. In Proceedings of the International Conference of the Learning Sciences (pp. 128-134). Charlottesville, VA: Association for the Advancement of Computing in Education.

Hundhausen, C. D. (1997). A meta-study of software visualization effectiveness. Unpublished comprehensive examination paper. Eugene, OR: Dept. of Computer Science, University of Oregon.

Kehoe, C. M., & Stasko, J. T. (1996). Using animations to learn about algorithms: An ethnographic case study. (Technical Report No. GIT-GVU-96-20). Atlanta, GA: College of Computing, Georgia Institute of Technology.

Lawrence, A. W., Badre, A. N., & Stasko, J. T. (1994). Empirically evaluating the use of animations to teach algorithms. In Proceedings of the IEEE Symposium on Visual Languages (pp. 48-54). Los Alamitos, CA: IEEE CS Press.

Narayanan, N. H., & Hegarty, M. (1998). On designing comprehensible interactive hypermedia manuals. International Journal of Human-Computer Studies, 48: 267-301.

Rappin, N., Guzdial, M., Realff, M., & Ludovice, P. (1997). Balancing usability and learning in an interface. In Proceedings of the ACM Human Factors in Computing Systems Conference (CHI'97) (pp. 479-486). New York, NY: ACM Press.

Recker, M., Ram, A., Shikano, T., Li, G., & Stasko, J. T. (1995). Cognitive media types for multimedia information access. Journal of Educational Multimedia and Hypermedia, 4(2/3): 183-210.

Stasko, J. (1990). TANGO: A framework and system for algorithm animation. IEEE Computer, 23(9): 27-39.

Stasko, J. (1997). Using student-built algorithm animations as learning aids. In Proceedings of the SIGCSE Technical Symposium on Computer Science Education (pp. 25-29). New York, NY: ACM Press.

Stasko, J., Badre, A., & Lewis, C. (1993). Do algorithm animations assist learning? An empirical study and analysis. In Proceedings of the INTERCHI'93 Conference (pp. 61-66). New York, NY: ACM Press.

7. Acknowledgements

This research has been supported by the National Science Foundation under contracts CDA-9616513 and REC-9815016. We gratefully acknowledge the advice of Mary Hegarty, Department of Psychology, University of California at Santa Barbara, who serves as an expert consultant on experiment design and data analysis to this project. Finally, the experiments could not have been carried out without the voluntary participation of students from the following courses: Fundamentals of Computer Science II (CSE 220) and Fundamental Algorithm Design and Analysis (CSE 360).

* This is a revised and extended version of the paper "From algorithm animations to animation-embedded hypermedia visualizations" that appeared in the Proceedings of the World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-MEDIA'99), Association for the Advancement of Computing in Education (Hansen, Schrimpsher & Narayanan, 1999).


IMEJ multimedia team member assigned to this paper: Yue-Ling Wong