Abstract

1. Introduction

Hypermedia systems may be particularly appropriate as learning environments because the associative knowledge structures of a topic or subject matter are made explicit. However, in creating hypermedia systems, attention must also be given to design features that can support students in making appropriate connections in their own developing knowledge structures (e.g., McKendree et al. 1995). Activities that emphasize the key concepts of a domain and engage students in problem solving and reflection are especially important in this regard. Examples include guided tours highlighting concepts in a domain and showing how they are related; multiple paths through materials that encourage consideration of the same concepts in differing contexts; questions that promote reflection on the relationships among concepts; and opportunities to generate and revise explanations of key concepts as understanding develops. The inclusion of these and other design features in hypermedia learning environments may have implications for the ways in which the software is used and for the quality of learning that results.
STC's designers created inquiry questions to provide scaffolding for students who need help building a mental structure or representation of the material (the directed inquiry mode). Students who did not need that help could browse the material through the concept map and standard navigational features (self-directed inquiry). As such, the inquiry questions were not intended as an assessment of student understanding. Rather, they provide an activity to help the student construct an understanding of foundational chemistry concepts and phenomena.

In practice, however, inquiry questions provide the primary instructional activity in STC. They can be quite useful in this regard, because the questions represent a synthesis of topics or content cards, and the same topics may be relevant for several questions. Revisiting the same topics (e.g., concentration) in multiple contexts (e.g., pH, precipitation) should promote students' understanding of the relationships among related concepts. Previous research has shown that "criss-crossing" concepts and topics facilitates connections among ideas (i.e., knowledge construction) (Jacobson 1991; Spiro et al. 1991), and that the quality of students' explanations of these concepts is directly related to their level of knowledge and understanding (e.g., Chi et al. 1989; Chi & van Lehn 1991).

Throughout STC, students make decisions about how the software is used: whether and when to read the content cards or view supplemental media; which of the four navigation modes to use; and whether and when to revise their explanations. How students respond to features of the software is referred to as patterns of use. Patterns of use are particularly relevant in hypermedia learning environments, where the student has the flexibility to make choices in interacting with the software, because these choices may affect the extent of learning. We cannot assume that all students have seen the same material.

In what follows, we consider two STC modules and examine the quality of student explanations provided in response to the inquiry questions. We chose these modules because they vary with respect to the opportunities for criss-crossing content cards when responding to inquiry questions. We expect that students will learn more (provide higher quality explanations) from instructional modules that provide more opportunities for criss-crossing. Our analysis of student answers and explanations to the inquiry questions provides a general measure of the impact of this design feature on student learning. We then examine student patterns of use and their relationship to students' explanations to the inquiry questions (used as a proxy for student learning or understanding) to elaborate on these general findings. We conclude with recommendations for software designers with regard both to interaction models and content design.
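As an illustration of what "opportunities for criss-crossing" could mean operationally (our sketch, not part of STC or the original study), one rough proxy is the number of content cards that are relevant to more than one inquiry question. The mapping below is hypothetical; the card names are illustrative only.

```python
from itertools import combinations

# Hypothetical mapping from inquiry questions to the content cards each
# one draws on; names are illustrative, not taken from STC.
question_cards = {
    "Q1": {"concentration", "pH"},
    "Q2": {"concentration", "precipitation"},
    "Q3": {"pH", "indicators"},
}

# A rough proxy for criss-crossing opportunity: cards that are relevant
# to two or more questions, i.e., revisited in different contexts.
shared = {
    card
    for a, b in combinations(question_cards.values(), 2)
    for card in a & b
}
print(f"Cards revisited across questions: {sorted(shared)}")
```

Under this toy mapping, "concentration" and "pH" would count as criss-crossed cards, since each is revisited in a second question's context.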
Table 1. Coverage in Acids & Bases and Solubility modules.
Figure 1. Answer correctness by question (n=180).
Explanations were scored on a 5-point scale from 0 to 4. In the Acids & Bases module, initial score totals range from 0 to 17 with a mean of 7.34 (sd=3.97). Totals after revisions range from 4 to 21, with a mean of 12.84 (sd=3.68). With six questions in the module, the means indicate that students average about 1 point per question (fragmented) for their initial explanations and about 2 points for their final explanations (partial). The change is statistically significant (t(df=179) = 18.65***). In the Solubility module, initial score totals range from 0 to 34, with a mean of 12.01 (sd=6.13). Final totals range from 6 to 36, with a mean of 19.06 (sd=6.44). With 10 questions in the module, this again means students improve from an average score just above 1 to just below 2. Again, the change is significant (t(df=179) = 15.40***).

In both modules we see that students generally select the correct answer, but they do not provide high quality explanations for their choices. Moreover, although students do revise their explanations and the change in explanation quality is statistically significant, the magnitude of the difference is small and the final result is still generally poor. Furthermore, contrary to expectations, there were no obvious differences in overall quality between the two modules. We therefore investigated whether differences were related to student patterns of use. We characterized the various patterns of use, the consistency with which students apply them within and between modules, and their relationship to explanation quality.
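The initial-versus-revised comparisons reported above are paired t-tests on the per-student score totals. A minimal sketch of that computation follows; the scores are made up for illustration (not the study data) and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Illustrative data only: per-student explanation score totals before and
# after revision (the study had n=180 students; these numbers are made up).
rng = np.random.default_rng(0)
initial = rng.integers(0, 18, size=180)                      # totals roughly 0-17
revised = np.clip(initial + rng.integers(0, 9, size=180), 0, 21)

# Paired (repeated-measures) t-test: the same students are measured twice.
t, p = stats.ttest_rel(revised, initial)
print(f"t(df={len(initial) - 1}) = {t:.2f}, p = {p:.3g}")
```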
The two components (whether cards are read before viewing a question, and whether cards are consulted between the question and the response) lead to a set of four base patterns, as shown in Figure 3.

Do students consistently apply a particular pattern or vary their approach? In Acids & Bases, 119 students use only one of the four base patterns, and a further 31 use one pattern for five of the six questions; therefore, 150 students (83.3%) use a single pattern almost exclusively. Similarly, in Solubility, 139 students use one pattern exclusively, while 19 more use one pattern for all but one question, so 158 (88%) use one pattern almost exclusively. Furthermore, only 20 students (11%) switched from a 'cards first' (cqr or cqcr) pattern to a 'questions first' (qr or qcr) pattern between the two modules.

In both modules, the qcr pattern is used by almost half the students on each question. Although students are allowed to answer the questions in any order, many of them proceed in numeric order starting with question 1. Notice then that, as the module progresses, students tend to refer to cards between the question and the response (e.g., the size of the qcr group increases while qr decreases, and likewise for cqcr and cqr). This is contrary to expectations: we expected students would visit cards for earlier questions and then not refer to them again.

To examine the effectiveness of these strategies, we examined mean scores within the base pattern groups. Initial explanation quality score means and standard deviations are shown in Table 2. (Only significant results are shown.) For each of the 16 questions, we conducted a one-way ANOVA with base pattern group (qr vs. cqr vs. qcr vs. cqcr) as the between-subjects factor. Base pattern group effects were significant for 5 of the 16 questions, and for 4 of these 5, significant group mean differences were found. For two questions (AB 1, Sol 7), the qr group had lower mean scores than all other groups. For a third question (Sol 10), the qr group had lower scores than the two cards-first groups (cqcr and cqr). Consistent with our expectations, these results suggest that patterns of use are related to performance: students who read cards (either before the question or between the question and response) generate better quality explanations than students who do not read cards before writing their response.
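To make the qr/cqr/qcr/cqcr coding and the consistency tallies above concrete, here is a minimal sketch of how such a classification could be computed from an ordered activity trace. The event labels and log format are our assumptions for illustration, not STC's actual logging.

```python
from collections import Counter

def base_pattern(events):
    """Classify one question's activity trace as qr, cqr, qcr, or cqcr.

    `events` is the ordered list of actions for that question, using the
    hypothetical labels 'card', 'question', and 'response'.
    """
    q = events.index("question")
    r = events.index("response")
    before = "c" if "card" in events[:q] else ""    # cards read before the question
    between = "c" if "card" in events[q:r] else ""  # cards read between question and response
    return f"{before}q{between}r"

def nearly_exclusive(patterns, slack=1):
    """True if one pattern accounts for all but `slack` of a student's questions."""
    top_count = Counter(patterns).most_common(1)[0][1]
    return top_count >= len(patterns) - slack

# Illustrative student: cards first on question 1, question-cards-response afterwards.
student = [
    ["card", "question", "response"],      # cqr
    ["question", "card", "response"],      # qcr
    ["question", "card", "response"],      # qcr
]
patterns = [base_pattern(seq) for seq in student]
print(patterns)                            # ['cqr', 'qcr', 'qcr']
print(nearly_exclusive(patterns))          # True: one pattern used on all but one question
```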
Table 2. Initial explanation quality score means and standard deviations, by base pattern group.
*p < .003 (.05 / 16) |
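The per-question tests above are one-way ANOVAs evaluated against a Bonferroni-adjusted threshold of .05/16 ≈ .003. A minimal sketch of one such test (made-up scores, not the study data; SciPy assumed):

```python
from scipy import stats

# Illustrative per-question explanation scores (0-4) for the four base
# pattern groups; these values are made up, not the study data.
groups = {
    "qr":   [0, 1, 1, 0, 1, 2, 1, 0],
    "cqr":  [2, 1, 3, 2, 2, 1, 3, 2],
    "qcr":  [2, 2, 1, 3, 2, 2, 1, 2],
    "cqcr": [3, 2, 2, 3, 1, 2, 3, 2],
}

# One-way ANOVA across the four groups for a single question.
f, p = stats.f_oneway(*groups.values())

# Bonferroni correction over the 16 questions tested: .05 / 16 = .003125.
alpha = 0.05 / 16
print(f"F = {f:.2f}, p = {p:.3g}, significant at alpha={alpha:.4g}: {p < alpha}")
```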
Figure 4. Incorrect answer, revision, and referral rates, by question.
On the other hand, students do not commonly refer to cards while revising (the third set of bars in Figure 4). The referral rate ranges from 6% to 38%, with the majority of questions in the 10% to 15% range. Statistically, referring to cards does not seem to influence explanation quality.

We also examined whether students consistently applied revision patterns. In Acids & Bases, 20 students use only one of the four revision patterns, and a further 28 students use one pattern for all but one question. Therefore, only 48 students (26.7%) use a single pattern almost exclusively. Similarly, in Solubility, 10 students use one pattern exclusively, while 25 more use one pattern for all but one question; thus, only 35 students (19.4%) use a pattern almost exclusively. Contrast this with the base patterns of use, where the corresponding values were 83% and 88%.

Base patterns are fairly consistent, but revision patterns are less so. This suggests that the base pattern is perhaps more characteristic of the student, while the tendency to revise or refer is more a reflection of the design features of the software.