

Development of a Virtual Laboratory System for Science Education
Nils Jensen, University of Hanover
Gabriele von Voigt, University of Hanover
Wolfgang Nejdl, University of Hanover
Stephan Olbrich, University of Hanover

Abstract
The goal of this work is to develop synthetic laboratories to teach natural and engineering sciences by means of interactive 3D visualization. The development framework was designed over six years and scales from small to large computer simulations that are distributed on the Net. Pilot studies demonstrate technical feasibility, indicate educational value, and have helped us to understand better how learners use 3D media and where specific care in designing content is necessary to master learning challenges in a successful way.

1. Introduction
Developing educational visualization and simulation environments ("virtual labs") is hard, but they are worth the effort because they support self-driven learning (Emigh, 1998; Youngblut, 1998; Dede et al., 1999; Trindade et al., 2002). One obstacle to their development is the substantial time and money required to make them an effective part of the natural and engineering sciences curriculum at universities. Decision-makers are reluctant to integrate virtual laboratories into undergraduate curricula because standards and empirical results across user domains are lacking. We aim to narrow this gap between developers and teachers by reporting our experiences at the University of Hanover, in co-operation with Uppsala University, on the rapid development of virtual laboratories to support higher education. Researchers and students have used the software to study the dynamics of fluids, molecules, atmospheric convection, and complex shapes through interactive, customized computer simulations and 3D visualization, in which users track multi-dimensional patterns. In this article we report studies of technical feasibility and educational value in a meteorology class and a course on scientific visualization (cf. Jensen et al., 2004); see Nielsen (1994, chap. 5) on heuristic evaluation.

2. Technology for the Development of Virtual Labs
CoVASE is a toolkit for rapidly developing educational visualization and simulation environments for students, researchers, and tutors in the natural and engineering sciences. Developers compose existing components, customize them, and integrate domain-specific visualization and simulation programs through pre-defined application programming interfaces (APIs). In this way, users share existing research data and simulations that are augmented by educational scaffolds (Leigh et al., 1997). Care must be taken, however, that the system scales with the volume of data: data-intensive simulations that run on proprietary laboratory hardware may not run on a student's workstation. The solution is to distribute the software across networked processors. Simulations generate visualizations on servers, and users receive the results on clients. Users control the system according to what they see in real time. Clients run software that receives data over a network, displays graphics, and manages input. A client sends input back partly to the data generator via the server, and partly to other clients to support collaboration between synchronous peers. Jensen et al. (2003; 2004) give details on the implementation of multi-user scenarios.
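As a minimal sketch of the client side of such a pipeline, the following function parses one message out of a byte stream. The framing here (a 4-byte big-endian length prefix followed by the payload) is an assumption for illustration only, not the actual DSVR wire format:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical framing, NOT the DSVR protocol: each message is a 4-byte
   big-endian length followed by that many payload bytes.
   Returns the number of bytes consumed, or 0 if the buffer does not yet
   hold a complete frame (or the payload would not fit into out).
   On success, copies the payload into out and sets *payload_len. */
size_t parse_frame(const uint8_t *buf, size_t len,
                   uint8_t *out, size_t out_cap, size_t *payload_len) {
  if (len < 4) return 0; /* header incomplete */
  uint32_t n = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
               ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
  if (len < 4 + (size_t)n || n > out_cap) return 0; /* payload incomplete */
  memcpy(out, buf + 4, n);
  *payload_len = n;
  return 4 + (size_t)n;
}
```

A client loop would append received bytes to a buffer, call `parse_frame` until it returns 0, and hand each complete payload to the renderer.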

CoVASE supports visualization and basic use of dialogue as a means to find common properties in a series of observations. For collaborative work, the system has text-based, vocal, and pictorial chat facilities, symbolic data panels to externalize information, tele-pointers to reference visual entities, and virtual desktops that connect to remote computers to run external applications. To support interactivity, CoVASE allows users to control simulations and manipulate 3D graphics. Sessions can be recorded.

The system comprises a run-time library (VASE) that displays and manages interface elements specified in C and XML, and another run-time library (DSVR) that manages data traffic from a remote server over TCP/IP to update interface elements. Clients also communicate with each other.

The user has two options to configure CoVASE. The first option is to edit the XML file to build 3D world views where an avatar moves and where 2D content is embedded in the 3D world. The second option is to build 3D model views (VISLETs) where the avatar usually does not move or where 3D content is surrounded by 2D Web content.

Dedicated hardware and Voice-over-IP lines support high-quality audio-visual conferencing with little latency. The virtual lab displays the video images (captured via a grabber card) on a virtual wall to avoid the use of multiple physical screens.

We have integrated voice control using a commercial speech recognition package to support freehand demonstrations, mainly for presenters in lecture halls, and an Open Source library for spatial sound that attaches voices to the avatars so that they can be localized. Haptic inspection (i.e., by touch) of virtual entities is implemented to communicate data to users without cluttering displays.

Demo movie: screen capture (Flash, ~4.5 MB) demonstrating the option of editing the XML file to build 3D world views, where an avatar moves and 2D content is embedded in the 3D world.

Demo movie: screen capture (Flash, ~8.8 MB) demonstrating the option of building 3D model views, where the avatar usually does not move or where 3D content is surrounded by 2D Web content.

3. Development and Evaluation of a Virtual Lab in Meteorology
A senior lecturer had created a large-scale program to simulate atmospheric convection. Standard visualization methodology was applied to represent humidity, air flow, temperature, and air pressure as 3D animations. The lecturer could steer graphical elements to inspect the results of simulation runs and probe variations of his code to see if they behaved as expected. We wanted to embed the simulation in a CoVASE teaching environment so that undergraduates, around their third semester in the lecturer's meteorology course, could conduct similar probing and experimenting. The intent was that students would

  1. apply social skills to mutually help each other when problems occur and to achieve common goals,
  2. test their collaborative problem solving abilities by use of ill-defined contextual information, and
  3. acquire technical skills to use visualization software.

Due to the complexity of the subject, we designed the virtual room to provide space for task-sharing and peer-to-peer communication through virtual artifacts. We composed a virtual classroom that was populated with all available tools except the virtual desktop. As a design guideline, we took care to avoid cluttered displays or long routes to "walk" from one tool to another (Olbrich & Jensen, 2003).

The subject (the convective atmospheric boundary layer: its development, processes, and structures) was complicated and ill-defined; no complete theory was known in the literature. Though we did not expect students to master the subject in three hours, we wanted to see whether it would be feasible to motivate students to work on the learning goals, and how far they could get with the equipment.

Participants from two study groups were briefed and answered questions about their computer skills. They were then invited to attend a lecture, after which they answered two questions on the taught subject in a pre-test questionnaire. They could review the slides on the CoVASE virtual slide projector.

We connected client workstations over an intranet and provided headsets and audio/video conferencing so that participants could work together.

The virtual lab gave access to expert tools that had demonstrably supported the work of professionals: gigabytes of results from complex calculations could be interpreted instantly. Like professional researchers, participants were to investigate the visualization over time, where ideally one would steer, one would watch, and another would record findings and plan next steps depending on intermediary results. Steering was reduced to placing probing elements (slicers, different threshold values) to select information. The participants had to decide which information would prove valuable to complement their preliminary knowledge of the taught subject.
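A slicer in the sense used above selects one planar layer out of a 3D data volume. The following sketch shows the idea on a flat voxel array with the same row-major indexing used in the pseudo-code later in this article; the function name and dimensions are illustrative, and the lab's actual probing elements are more elaborate:

```c
#define VX 16
#define VY 16
#define VZ 16

/* Copies the horizontal layer at fixed z index t out of the flat
   volume array into a VX*VY slice buffer. Voxel (r, s, t) is stored
   at volume[t * VY * VX + s * VX + r]. */
void slice_z(const float *volume, int t, float *slice) {
  for (int s = 0; s < VY; ++s)
    for (int r = 0; r < VX; ++r)
      slice[s * VX + r] = volume[t * VY * VX + s * VX + r];
}
```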

After the test, we asked participants to assess software quality and tool use. Further, they were asked to answer four detailed questions on what they saw in the experiment, on issues that were related to the subject taught in the lecture and queried in the pre-test.

Six meteorology students volunteered; all were Caucasian, half of them male. Ages ranged from 21 to 26 years, with a mean of 24. More details are in the original paper (Jensen et al., 2004).

3.1 Findings
Jensen et al. (2004) confirm that learning goals one and three were achieved. The questions addressing learning goal two, however, proved difficult for the students: they did not orchestrate their activities during the lab session, and their strategy for placing probes was not systematic. Despite the initial training time and the explanation of how to interpret visualizations, substantial contextual information in the form of methodology training was missing.
Self-directed learning placed a higher burden on students than we had thought: participants must be taught how to organize their collaborative work and probe data systematically.

4. Development and Evaluation of a Virtual Lab in Scientific Visualization
To teach students the operative aspects of scientific visualization, a second course was evaluated. The course had been conducted for several semesters, but was supported by interactive test runs for the first time. We designed a demonstration run of the computer graphics algorithm Marching Cubes (MC). We were interested in whether the degree of interactivity with a simple simulation model would influence learning outcome, and how much time we would need to invest to develop effective learning material. We therefore re-used slides from existing lectures on MC and created, in one day, a test program that used MC to visualize volume data.
The learning goals for a participant were to
  1. test problem solving abilities by connecting information sources in an autonomous way, and
  2. acquire technical skills to use visualization software.

Participants were able to review pseudo-code of the test program (see below). We chose a less complex problem than in the previous experiment so that students could solve it in half an hour. So that learning would not be too easy, participants worked alone this time.

External link: Read about Marching Cubes: http://www.exaflop.org/docs/marchcubes/ind.html

float volume[VX * VY * VZ]; // 16^3 voxels, initialized with zero
Isosurface mesh;

void createVolume(float volume[], int nr) {
  double x, y, z;
  int r, s, t;
  for (z = -1, t = 0; z < 1 && t < VZ; z += 2.0 / VZ, ++t) {
    for (y = -1, s = 0; y < 1 && s < VY; y += 2.0 / VY, ++s) {
      for (x = -1, r = 0; x < 1 && r < VX; x += 2.0 / VX, ++r) {
        // arbitrary function, here: Cayley
        double value = 4.0 * (pow(x, 2) + pow(y, 2) + pow(z, 2)) + 16.0 * x * y * z;
        if (nr > 0) { // filter: only the first nr voxels are set
          volume[t * VY * VX + s * VX + r] = value;
          --nr;
        }
      }
    }
  }
}

int main() {
  int nr = 0;
  double threshold = 1500;
  int frames = 5000;
  while (nr < frames) { // for each frame of the animation
    createVolume(volume, nr); // initialize the volume (more and more voxels set per frame)
    updateThreshold(threshold);
    mesh = applyMarchingCubes(volume, threshold / frames);
    display(render(mesh));
    ++nr;
  }
}
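For reference, the volume-sampling step of the pseudo-code can be made self-contained and run directly. In the sketch below, `createFullVolume` and `countBelow` are our own names: the frame filter is removed so that every voxel is set in one pass, and instead of `applyMarchingCubes` (not reproduced here) we merely count the voxels at or below the iso-threshold, i.e. the set that an iso-surface at that threshold would enclose:

```c
#include <math.h>

#define VX 16
#define VY 16
#define VZ 16

static float volume[VX * VY * VZ];

/* Same Cayley sampling as the pseudo-code above, with the per-frame
   filter removed: every voxel is set in one pass. */
void createFullVolume(void) {
  double x, y, z;
  int r, s, t;
  for (z = -1, t = 0; z < 1 && t < VZ; z += 2.0 / VZ, ++t)
    for (y = -1, s = 0; y < 1 && s < VY; y += 2.0 / VY, ++s)
      for (x = -1, r = 0; x < 1 && r < VX; x += 2.0 / VX, ++r)
        volume[t * VY * VX + s * VX + r] =
            (float)(4.0 * (x * x + y * y + z * z) + 16.0 * x * y * z);
}

/* Counts voxels with a density at or below the iso-threshold,
   standing in for the mesh extraction step. */
int countBelow(double threshold) {
  int n = 0;
  for (int i = 0; i < VX * VY * VZ; ++i)
    if (volume[i] <= threshold) ++n;
  return n;
}
```

At the grid corner (-1, -1, -1) the Cayley function evaluates to 4·3 + 16·(-1) = -4, which is a convenient spot check for the indexing.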

Participants from two study groups were briefed and answered questions about their computer skills. They specified their knowledge of the MC algorithm by answering four multiple-choice questions. We then studied usage of the single-user version of a customized VISLET on a multimedia workstation, one participant at a time.

The VISLET presented an example from mathematics, the Cayley function, and showed the regular construction of the function by visualizing a 3D iso-surface. The user shifted the iso-surface by changing a threshold that determined between which levels of density an iso-surface would be created: it classified which parts of the volume contained densities lower than or equal to the threshold, and which parts did not.

We created a non-interactive version by recording a session with the VISLET and presented it to half of the participants instead of the fully interactive version. Both media would help the student study MC closely and find information on MC's input data, output data, structure of output data, and the effect of the threshold parameter by watching an application example. Further, the example had the advantage that it could correct misconceptions in terms other than textual specifications ("a picture tells more than 1000 words"). Finally, participants could improve their technical skills by using the visualization program on the computer.

After the test, we asked participants to assess software quality and tool use (if applicable, to verify that the usability of the interactive version was not worse than in the first study). Finally, they explained the purpose, use, and working of MC by answering the four multiple-choice questions from the first test again, plus two additional questions.

Care was taken that the questions could, in principle, be answered by means of any combination of media. We designed media and the questionnaires in accordance with the evaluations by the lecturer of the scientific visualization course, and in part by a psychologist.

Twelve volunteers participated. All were Caucasian; one was female. Ages ranged from 23 to 38 years, with a mean of 27. All volunteers came from computer science or a related field; some were graduated professionals, the rest students. All were fluent in German, but three were native speakers of Russian and one of Romanian.

The 12 participants rated themselves according to Kolb's (1984) learner models. When asked about their preferred learning styles ("by example," "by formulas," "by probing," "by observing"), only three stated that they would equally prefer all of them; the others selected all except "by formulas."

Demo movie: screen capture (Flash, ~2 MB). The VISLET showed the regular construction of the function by visualizing a 3D iso-surface; the user shifted the iso-surface by changing a threshold that determined between which levels of density an iso-surface would be created.

4.1 Findings
In the short term, we were interested in correlations between interactivity and positive learning outcome ("gain"), where gain denotes a positive, non-null difference between the questions answered correctly in the post-test and in the pre-test; "loss" denotes a negative difference. Only one person managed to answer both additional questions in the post-test correctly, so these were excluded from the ranking (Figure 1).
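The gain/loss/neutral classification can be stated precisely as a comparison of the pre- and post-test scores. A small sketch (our own formulation; the score pairs in the test are made up, not the study's raw data):

```c
/* Classification used in the evaluation: gain = more questions answered
   correctly in the post-test than in the pre-test, loss = fewer,
   neutral = no difference. */
typedef enum { GAIN, LOSS, NEUTRAL } Outcome;

Outcome classify(int pre_correct, int post_correct) {
  if (post_correct > pre_correct) return GAIN;
  if (post_correct < pre_correct) return LOSS;
  return NEUTRAL;
}
```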

In the post-test, the slight improvement in performance, from an average of 2.66 to 2.75 correctly answered questions, was expected. One participant dropped from three to one correctly answered questions, for unknown reasons. Participants read the slides before the pre-test for as long as they wished and had no prior knowledge of the subject. Between pre- and post-test, only the 3D model (interactive or non-interactive) was shown.

We suggest that the 3D model stimulated thinking about the problem or helped users clarify textual information. Interestingly, every participant who claimed not to prefer one learning style over another showed a learning benefit.


Figure 1. Learning performance for each participant. Higher values indicate better performance. The light bars denote the results from the pre-test, the others from the post-test.

We believe that interaction can encourage users to actively eliminate learning deficiencies, thus improving learning outcome. The opposite hypothesis would have been that interaction decreases learning outcome because of the increased interaction complexity.

Correlations   Interactive 3D model   Video of 3D model   Total
Gain                    2                     3              5
Loss                    2                     2              4
Neutral                 2                     1              3
Total                                                       12

Table 1. Gains, losses, and neutral development due to the use of 3D media.

Neither hypothesis was supported by our study (Table 1). Instead, the tendency was a nearly equal distribution across gain, loss, and neutral development, with a small emphasis on gain; the degree of interaction was irrelevant. Taken together, gains and neutral developments comprised twice the number of participants whose scores decreased after they interacted with the 3D model. This finding indicates that the use of interactive and non-interactive 3D models did not harm the learning process. The explanation for the losses is not fully clear, but we assume that some people were misled by the "obvious" properties of the 3D model and did not "dig deep enough" to understand its more subtle features. The models were not augmented by narration or deictic elements.

Students accepted the lab. Most fulfilled all learning goals without additional contextual information other than the slides.

5. Conclusion
We have developed CoVASE to create virtual labs (www.l3s.de/vase3) and have demonstrated media design. Preliminary evidence shows that inspecting 3D visualizations can improve learner satisfaction thanks to the vivid presentation, while at least maintaining learning efficacy. Content must not distract from the learning objectives and must provide clear views that leave students little room to misinterpret data. The next steps are to investigate how interactivity, collaboration, and content design influence each other with regard to learning performance, satisfaction, and comprehension in courses.
External link: CoVASE: http://www.l3s.de/vase3

6. References
Dede, C., Salzman, M. C., Loftin, R. B. & Sprague, D. (1999) Multisensory Immersion as a Modeling Environment for Learning Complex Scientific Concepts. In W. Feurzeig and N. Roberts (Eds.), Computer Modeling and Simulation in Science Education. NY: Springer.

Emigh, D. (1998) Scientific Visualization in the Classroom. Proc. ACM / IEEE Supercomputing '98, (pp. 1-7).

Jensen, N., Seipel, S., Nejdl, W. & Olbrich, S. (2003) CoVASE --Collaborative Visualization for Constructivist Learning. CSCL Conference 2003, (pp. 249-253).

Jensen, N., Seipel, S., von Voigt, G., Raasch, S., Olbrich, S. & Nejdl, W. (2004) Development of a Virtual Laboratory System For Science Education and the Study of Collaborative Action. ED-Media Conference 2004, (pp. 2148-2153).

Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.

Leigh, J., Johnson, A.E. & DeFanti, T.A. (1997) Issues in the Design of a Flexible Distributed Architecture for Supporting Persistence and Interoperability in Collaborative Virtual Environments. Proc. ACM / IEEE Supercomputing '97, (pp. 1-14).

Nielsen, J. (1994) Usability Engineering. San Francisco, CA: Morgan Kaufmann.

Olbrich, S. & Jensen, N. (2003) Lessons Learned in Designing a 3D Interface for Collaborative Inquiry in Scientific Visualization. HCI International 2003, (pp. 1121-1125).

Trindade, J., Fiolhais, C. & Almeida, L. (2002) Science Learning in Virtual Environments. British Journal of Educational Technology 33, 4, (pp. 471-488).

Youngblut, C. (1998) Educational Use of Virtual Reality Technology. Technical Report, Institute for Defense Analyses, US.
