Abstract
1. Introduction
2. Technology for the Development of Virtual Labs

CoVASE supports visualization and the basic use of dialogue as a means to find common properties in a series of observations. For collaborative work, the system offers text-based, vocal, and pictorial chat facilities, symbolic data panels to externalize information, tele-pointers to reference visual entities, and virtual desktops that connect to remote computers to run external applications. To support interactivity, CoVASE allows users to control simulations and manipulate 3D graphics, and sessions can be recorded. The system comprises a run-time library (VASE) that displays and manages interface elements specified in C and XML, and a second run-time library (DSVR) that manages data traffic from a remote server over TCP/IP to update those interface elements. Clients communicate with each other.
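The paper does not specify DSVR's message format, so the following sketch only illustrates the division of labour it describes: interface elements live on the client, and a networking layer delivers state changes that are applied to them. Every name, the message layout, and the hard-coded updates below are our own assumptions, not the actual CoVASE/DSVR API; in a real client the messages would be read from the TCP/IP connection to the remote server.

    #include <stdio.h>

    /* Hypothetical interface element, roughly as the VASE run-time library
     * might load it from an XML/C specification. */
    typedef struct {
        int   id;          /* identifier referenced by incoming updates  */
        char  name[32];    /* e.g. a data panel shown in the virtual lab */
        float value;       /* current state displayed to the users       */
    } InterfaceElement;

    /* Hypothetical update message, as a DSVR-like layer might deliver it
     * after reading it from the connection to the remote server. */
    typedef struct {
        int   element_id;
        float new_value;
    } UpdateMessage;

    static InterfaceElement elements[] = {
        { 0, "temperature_panel", 0.0f },
        { 1, "slicer_height",     0.0f },
    };

    /* Apply one incoming update to the matching interface element. */
    static void apply_update(const UpdateMessage *msg)
    {
        int n = (int)(sizeof(elements) / sizeof(elements[0]));
        for (int i = 0; i < n; ++i) {
            if (elements[i].id == msg->element_id) {
                elements[i].value = msg->new_value;
                printf("update %s -> %.2f\n", elements[i].name, elements[i].value);
                return;
            }
        }
    }

    int main(void)
    {
        /* In a real client these messages would arrive over TCP/IP;
         * two hard-coded updates stand in for the network layer here. */
        UpdateMessage incoming[] = { { 0, 21.5f }, { 1, 0.8f } };
        for (int i = 0; i < 2; ++i)
            apply_update(&incoming[i]);
        return 0;
    }

Keeping interface state and rendering on the client while the server only ships state changes is consistent with the description above of DSVR updating interface elements rather than streaming rendered output.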
Hardware and Voice-over-IP lines support high-quality audio-visual conferencing with little delay. The virtual lab displays the video images (captured via a grabber card) on a virtual wall, which avoids the need for multiple physical screens. We have integrated voice control, using a commercial speech recognition package, to support freehand demonstrations, mainly for presenters in lecture halls, and an open-source library for spatial sound that attaches voices to the avatars so that speakers can be localized. Haptic (i.e. touch-based) inspection of virtual entities is implemented to communicate data to users without cluttering the displays.
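Neither the speech recognition package nor the spatial sound library is named in the text, so the following is only an illustrative sketch of what attaching a voice to an avatar can mean in practice: the voice signal is attenuated with distance from the listener and panned by the horizontal angle towards the avatar. The coordinate convention (listener facing the +z axis) and all function names are our own assumptions.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    /* Illustrative distance attenuation and stereo pan for one avatar's
     * voice. Positions are in the coordinates of the virtual room; the
     * listener is assumed to face the +z axis. */
    static void voice_gains(Vec3 listener, Vec3 avatar,
                            float *left_gain, float *right_gain)
    {
        float dx = avatar.x - listener.x;
        float dy = avatar.y - listener.y;
        float dz = avatar.z - listener.z;
        float dist = sqrtf(dx * dx + dy * dy + dz * dz);

        /* simple inverse-distance attenuation, clamped near the listener */
        float gain = 1.0f / (1.0f + dist);

        /* pan by the horizontal angle: -pi/2 = hard left, +pi/2 = hard right */
        float angle = atan2f(dx, dz);
        float pan = 0.5f + 0.5f * sinf(angle);   /* 0 = left, 1 = right */

        *left_gain  = gain * (1.0f - pan);
        *right_gain = gain * pan;
    }

    int main(void)
    {
        Vec3 listener = { 0.0f, 0.0f, 0.0f };
        Vec3 avatar   = { 2.0f, 0.0f, 1.0f };
        float left, right;
        voice_gains(listener, avatar, &left, &right);
        printf("left %.2f  right %.2f\n", left, right);
        return 0;
    }

Gain and pan values of this kind would be applied to each avatar's voice stream in the audio mixer, which is what lets participants tell which avatar is speaking without extra visual cues.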
3. Development and Evaluation of a Virtual Lab in Meteorology
Due to the complexity of the subject, we designed the virtual room so that it would provide space for task-sharing and peer-to-peer communication via virtual artifacts. We composed a virtual classroom that was populated with all available tools except the virtual desktop. As a design guideline, we took care to avoid cluttered displays and long routes to "walk" from one tool to another (Olbrich & Jensen, 2003). The subject, the convective atmospheric boundary layer (its development, processes, and structures), is complicated and ill-defined; no complete theory is known in the literature. Although we did not expect students to master the subject in three hours, we wanted to see whether it would be feasible to motivate students to work on the learning goals, and how far they could get with the equipment. Participants, drawn from two groups, were briefed and answered questions about their computer skills. They were then invited to attend a lecture, after which they answered two pre-test questions on the taught subject. They could review the slides on the CoVASE virtual slide projector. We connected client workstations over an intranet and provided headsets and audio/video-conferencing so that participants could work together. The users
The virtual lab gave access to expert tools which were demonstrably effective in supporting the work of professionals. Gigabytes of results from complex calculations could be interpreted instantly. Participants were expected, like professional researchers, to investigate the visualization over time, where ideally one would steer, one would watch, and another would record findings and plan the next steps depending on intermediary results. Steering was reduced to placing probing elements (slicers, different threshold values) to select information. The participants had to decide which information would prove valuable to complement their preliminary knowledge of the taught subject. After the test, we asked participants to assess software quality and tool use. Further, they were asked to answer four detailed questions on what they saw in the experiment, on issues related to the subject taught in the lecture and queried in the pre-test. Six meteorology students volunteered; all were Caucasian, half were male. Their ages varied from 21 to 26 years, with a mean of 24 years. More details are given in the original paper (Jensen et al., 2004).
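The probing elements mentioned above reduce gigabytes of simulation output to a view that a small group can reason about. As a rough sketch only (the array layout and function names are ours, not the actual CoVASE steering interface), a slicer and a threshold filter over a 3D scalar field might look like this:

    #include <stdio.h>

    #define NX 16
    #define NY 16
    #define NZ 16

    /* Copy the horizontal slice at height index k out of a 3D scalar field,
     * e.g. one time step of a boundary-layer simulation result. */
    static void extract_slice(float field[NZ][NY][NX], int k, float slice[NY][NX])
    {
        for (int j = 0; j < NY; ++j)
            for (int i = 0; i < NX; ++i)
                slice[j][i] = field[k][j][i];
    }

    /* Count how many values in the slice exceed a chosen threshold,
     * a stand-in for highlighting those regions in the visualization. */
    static int count_above(float slice[NY][NX], float threshold)
    {
        int count = 0;
        for (int j = 0; j < NY; ++j)
            for (int i = 0; i < NX; ++i)
                if (slice[j][i] > threshold)
                    ++count;
        return count;
    }

    int main(void)
    {
        static float field[NZ][NY][NX];   /* zero-initialized demo data */
        float slice[NY][NX];
        extract_slice(field, NZ / 2, slice);
        printf("%d values above the threshold\n", count_above(slice, 0.5f));
        return 0;
    }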
3.1 Findings
4. Development and Evaluation of a Virtual Lab in Scientific Visualization

To teach students the operative aspects of scientific visualization, a second course was carried out. The course had been conducted for several semesters, but this was the first time it was supported by interactive test runs. We designed a demonstration run of the computer graphics algorithm Marching Cubes (MC). We were interested in whether the degree of interactivity with a simple simulation model would influence the learning outcome, and in how much time we would need to invest to develop effective learning material. Therefore, we re-used slides from existing lectures on MC and created, in one day, a test program that used MC to visualize volume data. The learning goals for a participant were to
Participants were able to review pseudo-code of the test program (see below). Compared to the previous experiment, we chose a less complex problem so that students could solve it in half an hour. To keep learning from being too easy, participants worked alone this time.
    float volume[VX * VY * VZ]; // 16^3 voxels initialized with zero
    Isosurface mesh;

    void createVolume(float volume[], int nr) {
        double x = 0, y = 0, z = 0;
        int r = 0, s = 0, t = 0;
        for (z = -1, t = 0; z < 1 && t < VZ; z += 2.0 / VZ, ++t) {
            for (y = -1, s = 0; y < 1 && s < VY; y += 2.0 / VY, ++s) {
                for (x = -1, r = 0; x < 1 && r < VX; x += 2.0 / VX, ++r) {
                    // arbitrary function, here: Cayley
                    double value = 4.0 * (pow(x, 2) + pow(y, 2) + pow(z, 2))
                                 + 16.0 * x * y * z;
                    if (nr > 0) { // filter
                        volume[t * VY * VX + s * VX + r] = value;
                        --nr;
                    }
                }
            }
        }
    }

    int main() {
        int nr = 0;
        double threshold = 1500;
        int frames = 5000;
        while (nr < frames) {          // for each frame of the animation
            createVolume(volume, nr);  // initialize the volume (more and more voxels set per frame)
            updateThreshold(threshold);
            mesh = applyMarchingCubes(volume, threshold / frames);
            display(render(mesh));
            ++nr;
        }
    }
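The threshold passed to applyMarchingCubes above decides, for every voxel, on which side of the iso-surface it lies; the paragraph that follows describes the same idea from the user's point of view. A minimal illustration of this classification (our own helper, not part of the test program) is:

    #include <stdio.h>

    /* Classify one voxel against the iso-surface threshold. Marching Cubes
     * places surface pieces exactly where neighbouring voxels fall on
     * different sides of this test. */
    static int inside_isosurface(double density, double threshold)
    {
        return density <= threshold;   /* 1 = inside, 0 = outside */
    }

    int main(void)
    {
        double threshold = 0.3;
        double samples[] = { 0.1, 0.3, 0.9 };
        for (int i = 0; i < 3; ++i)
            printf("density %.1f -> %s\n", samples[i],
                   inside_isosurface(samples[i], threshold) ? "inside" : "outside");
        return 0;
    }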
The VISLET presented an example from mathematics, the Cayley function, and showed the regular construction of the function by visualizing a 3D iso-surface. The user shifted the iso-surface by changing a threshold that determined between which density levels an iso-surface would be created, i.e. which parts of the volume contained densities lower than or equal to the threshold, and which parts did not. The users
We created a non-interactive version by recording a session with the VISLET and presented it to half of the participants instead of the fully interactive version. Both media would help the student to study MC closely and to find information on MC's input data, output data, the structure of the output data, and the effect of the threshold parameter by watching an application example. Further, the example had the advantage that it could clear up misconceptions in terms other than textual specifications ("a picture tells more than a thousand words"). Finally, participants could improve their technical skills by using the visualization program on the computer. After the test, we asked participants to assess software quality (where applicable, to verify that the usability of the interactive version was not worse than in the first study). Finally, they explained the purpose, use, and working of MC by answering the four multiple-choice questions from the first test again, plus two additional questions. Care was taken that the questions could, in principle, be answered by means of any combination of media. We designed the media and the questionnaires in accordance with evaluations by the lecturer of the scientific visualization course, and in part with a psychologist. Twelve volunteers participated. All were Caucasian; one participant was female. Their ages varied from 23 to 38 years, with a mean of 27 years. All volunteers came from the field of computer science or a related area; some were graduated professionals, the rest students. All volunteers were fluent in German, but three were native speakers of Russian and one of Romanian. The 12 participants rated themselves in accordance with Kolb's (1984) learner models; when asked about their preferences (by "example," "formulas," "probing," "observing"), only three stated that they would equally prefer all possible learning styles. The others selected all styles except "by formulas."
4.1 Findings

In the post-test, the slight improvement in performance from, on average, 2.66 to 2.75 correctly answered questions was expected. One participant had dropped from three to one correctly answered questions, for unknown reasons. Participants read the slides before the pre-test for as long as they wished and had no prior knowledge of the subject. Between pre- and post-test, only the 3D model (interactive or non-interactive) was shown. We suggest that the 3D model stimulated thinking about the problem or helped users to clarify the textual information. Interestingly, the learning benefit held consistently for those who claimed not to prefer one learning style over another.
Figure 1. Learning performance for each participant. Higher values indicate better performance. The light bars denote the results from the pre-test, the others from the post-test.
Table 1. Specification of gains, losses, and neutral development due to the use of 3D media.
Students accepted the lab. Most fulfilled all learning goals with no contextual information other than the slides.
5. Conclusion
6. References

Emigh, D. (1998). Scientific Visualization in the Classroom. Proc. ACM/IEEE Supercomputing '98, pp. 1-7.

Jensen, N., Seipel, S., Nejdl, W., & Olbrich, S. (2003). CoVASE: Collaborative Visualization for Constructivist Learning. Proc. CSCL Conference 2003, pp. 249-253.

Jensen, N., Seipel, S., von Voigt, G., Raasch, S., Olbrich, S., & Nejdl, W. (2004). Development of a Virtual Laboratory System for Science Education and the Study of Collaborative Action. Proc. ED-Media Conference 2004, pp. 2148-2153.

Kolb, D. (1984). Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.

Leigh, J., Johnson, A. E., & DeFanti, T. A. (1997). Issues in the Design of a Flexible Distributed Architecture for Supporting Persistence and Interoperability in Collaborative Virtual Environments. Proc. ACM/IEEE Supercomputing '97, pp. 1-14.

Nielsen, J. (1994). Usability Engineering. San Francisco, CA: Morgan Kaufmann.

Olbrich, S., & Jensen, N. (2003). Lessons Learned in Designing a 3D Interface for Collaborative Inquiry in Scientific Visualization. Proc. HCI International 2003, pp. 1121-1125.

Trindade, J., Fiolhais, C., & Almeida, L. (2002). Science Learning in Virtual Environments. British Journal of Educational Technology, 33(4), pp. 471-488.

Youngblut, C. (1998). Educational Use of Virtual Reality Technology. Technical Report, Institute for Defense Analyses, US.