Multimedia in Geographic Education: Design, Implementation, and Evaluation

John Krygier
Department of Geography
State University of New York at Buffalo

Catherine Reeves
National Geographic Interactive

Jason Cupp
Department of Geography
Pennsylvania State University

David DiBiase
Department of Geography
Pennsylvania State University

This paper describes an educational application of multimedia for geography and earth science education based on the assumption that multimedia is more than mere technology. Arguing for a focus on a coherent set of multimedia design guidelines informed by a broad array of evaluation functions, the paper takes the position that such design and evaluation guidelines must be shaped by broader educational and content (geography and earth science) goals. It suggests that this approach to the design, implementation, and evaluation of educational multimedia resources may guide other similar projects.

Proponents of multimedia claim it can change the way we understand, think, learn, and work; they have heralded it as bringing about an end to printed books and static graphics. Advocates of multimedia, both in its research and educational applications, see it as much more than mere technology. Multimedia is garnering increasing attention in cartography and geography, although there is a paucity of literature on the prospects of multimedia as a research or educational method in geography and the earth sciences.

Setting the Context

The typical design of multimedia is an array of representational forms (e.g., image, map, diagram, sound, video). Hypermedia is multimedia with substantive links between the various representational forms (Andrews & Tilton, 1993). For convenience, we will collapse the two terms into one (multimedia) in this paper. Multimedia does not necessarily require computers. For example, geographic educators often combine the use of slides, overheads, chalkboards, movies, videos, and sound recordings in their lectures and academic presentations. Further, atlases have a long tradition of integrating text, images, maps, diagrams, and graphs. Thus the multimedia concept is not completely new to geographers and cartographers. However, although we can draw on past experience to assist in the design and production of multimedia, we should not treat multimedia as if it were nothing substantially different from what we have done in the past.

Pennsylvania State University has been developing technology classrooms for the past five years (Morrow & Boettcher, 1995), equipping them with an array of computers, software, network connections, and projection equipment. Our College of Earth and Mineral Sciences (EMS) has made funds available for the development of educational multimedia resources for the course entitled "Gaia: An Introduction to Earth Science" taught in the College. The Gaia course is a large-enrollment course taught in several sections each semester by different faculty.

The Penn State Deasy GeoGraphics Laboratory, affiliated with the Department of Geography and the College of EMS, offered the expertise required to design, produce, implement, and evaluate multimedia teaching resources for the Gaia course (DiBiase & Krygier, 1994). Two graduate students, one undergraduate student, the director of the Deasy GeoGraphics Laboratory, and an educational technology specialist have spent the last two years working on multimedia resources produced on the Macintosh using Macromedia Director authoring software.

Our design, production, and evaluation strategy has been to synthesize an awareness of geographical and educational goals with multimedia design goals. The result is not only a series of multimedia resources for teaching a single course, but a detailed planning, design, production, and evaluation strategy currently being used to develop materials for other courses. The process that produced our design strategy (see Figure 1) reveals two primary interactions: those between content/educational goals and multimedia design goals; and, within the context of the designer's goals, the interaction between multimedia design guidelines and evaluation guidelines. We stress that the process of such interactions is integrated, as the diagram suggests.

Figure 1. Overview of Multimedia Design and Evaluation Process.

Designing Educational Multimedia Resources

Our strategy for the design and production of educational multimedia resources divides into two subsections. The first concerns the design of the general interface and structure for all of our resources, which we call lectureware. The second subsection concerns a typology of multimedia forms and functions used to guide the design of individual resources, defined as particular multimedia units that explain a single concept or idea. Both our lectureware and the resource typology were informed by the evaluative methods described in our third subsection.

Multimedia design and lectureware. The design of the general interface and overall structure of our materials has focused on the creation of what we call lectureware, designed to be used during lecture by instructors. Lectureware differs from courseware, which students use by themselves outside of lecture. Three design strategies that shaped our lectureware include (a) resources as a single concept or idea, (b) an easy-to-use interface, and (c) general graphic design guidelines.

Each multimedia resource should consist of one basic concept or idea, so that instructors can piece together a series of resources in the order with which they are comfortable. Instructors should also be able to use the same concept (resource) in several different lectures as appropriate. Designed to stand alone, the resources can be coherently related to each other when appropriate. Design of some resources aids the instructor in relating particular concepts or ideas learned to a more general goal. We have used graphic icons to make such relationships explicit. For example, an icon that summarizes the concept learned in one resource can be used at the beginning of a related resource that depends on understanding the material summarized in the icon.

Our second lectureware design strategy is the provision of an easy-to-use interface. First, we have constructed a simple menu-driven interface allowing easy access to the selected list of resources and other basic commands (e.g., editing a lecture menu, blanking the screen, or quitting). Second, we provide a lecture-building resource so the instructor can view available resources, search for a particular topic, and add resources to a lecture-list, which is then installed as a universal menu item used to access the resources during the lecture. Finally, we have endeavored to provide a consistent set of navigational buttons to guide movement within individual resources. Our general goal is to ensure that the same buttons do the same things and are located in the same place on the screen from resource to resource. Our third strategy is to maintain strict consistency in design, minimizing confusion and maximizing the ways in which different resources can be used together, so that a series of resources looks as if it were designed to be used in that order.
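The lecture-building workflow described above (browse available resources, search by topic, append matches to a lecture-list) can be sketched in Python. This is an illustrative sketch only: all names and the example catalog entries are hypothetical, and the original lectureware was authored in Macromedia Director, not Python.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """One multimedia resource: a single concept or idea."""
    title: str
    topic: str

@dataclass
class Lecture:
    """An instructor-assembled, ordered list of resources."""
    name: str
    resources: list = field(default_factory=list)

    def add(self, resource):
        self.resources.append(resource)

def search(catalog, topic):
    """Find available resources matching a topic keyword."""
    return [r for r in catalog if topic.lower() in r.topic.lower()]

# Hypothetical catalog of available resources.
catalog = [
    Resource("Plate Boundaries", "plate tectonics"),
    Resource("Seafloor Spreading", "plate tectonics"),
    Resource("The Water Cycle", "hydrology"),
]

# The instructor assembles a lecture in the order they prefer.
lecture = Lecture("Week 3: Plate Tectonics")
for r in search(catalog, "plate tectonics"):
    lecture.add(r)
```

Because each resource stands alone, the same `Resource` can be added to several different lectures, mirroring the reuse the lectureware design aims for.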

We have linked these three design strategies (resources as a single concept or idea, an easy-to-use interface, and general graphic design guidelines) to educational and content goals to shape the concept of lectureware. Our multimedia resource typology complements and closely interrelates with the design of particular multimedia resources.

Multimedia design and the resource typology. Complementing our lectureware design guidelines is a resource typology, which provides a means of linking content goals to appropriate representational forms, and assists in making design decisions about particular resources. Our resource typology consists of two dimensions (Figure 2). The first dimension, a range of representational forms, includes imagery, maps, diagrams, graphs, and tables. The typology's second dimension encompasses a range of resource functions, including static, animated, sequential, hierarchical, and conditional resources.

Figure 2. Resource Typology.

Resource forms are useful for matching particular educational goals to appropriate representational forms. Careful consideration of educational goals in tandem with available resource forms has allowed us to avoid replicating inadequate or inappropriate materials in our resources. The resource form continua also help guide consistent graphic design guidelines. For example, graphs have consistent design guidelines for colors, line widths, typography, and placement. Maps are usually designed to have labels that can be turned on and off. Such guidelines make the production of individual resources easier, and ensure consistency from resource to resource. Thus the resource forms match educational goals to appropriate representational forms while facilitating particular graphic design and production decisions.

Resource functions, combined with resource forms, facilitate matching educational goals to a logical level of multimedia functionality (or interactivity), while providing guidance for consistent navigation and graphic design. Resource functions include (a) static, (b) animated, (c) sequential, (d) hierarchical, and (e) conditional resources.
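The two-dimensional typology pairs a representational form with a level of functionality, so any individual resource can be specified as a cell in the form-by-function grid. A minimal sketch of that data model follows; the type and field names are our own illustrative assumptions, not part of the authors' software.

```python
from enum import Enum
from dataclasses import dataclass

class Form(Enum):
    """First dimension: representational forms."""
    IMAGE = "image"
    MAP = "map"
    DIAGRAM = "diagram"
    GRAPH = "graph"
    TABLE = "table"

class Function(Enum):
    """Second dimension: resource functions (levels of interactivity)."""
    STATIC = "static"
    ANIMATED = "animated"
    SEQUENTIAL = "sequential"
    HIERARCHICAL = "hierarchical"
    CONDITIONAL = "conditional"

@dataclass
class ResourceSpec:
    """A cell in the typology: one concept, one form, one function."""
    concept: str
    form: Form
    function: Function

# e.g., a hypothetical animated map illustrating continental drift
spec = ResourceSpec("continental drift", Form.MAP, Function.ANIMATED)
```

Specifying resources this way makes the design decision explicit: the educational goal picks the concept, and the typology constrains the form and function chosen to convey it.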

Evaluating Educational Multimedia Resources

Evaluation should be used not only as a means of assessing the impact of existing resources, but also as a means of shaping and informing the design process. Evaluation is useful for informing the design of educational multimedia resources but not for prescribing it. Given the new (and in many ways unexplored) technologies available for instruction, coherent, carefully designed, and innovative examples of educational technology need to be developed (Ebel, 1982). A broader sense of evaluation is needed to assist in shaping and informing the design of innovative educational multimedia resources. We have adopted a four-part approach to evaluation as described by Reeves (1992), consisting of evaluation functions and methods (see Figure 3).

Broadly defined, evaluation serves four (often interrelated) functions: goal refinement, documentation, formative evaluation, and impact evaluation. Each of these evaluation functions can be facilitated with a range of evaluation methods, including interviews, focus groups, questionnaires, observations, ratings assessment, expert review, and achievement tests (Reeves 1992).
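The relationship between functions and methods is many-to-many: each evaluation function can draw on several methods, and a method can serve several functions. A minimal sketch of one such pairing follows; the particular assignments below are illustrative assumptions for a hypothetical project plan, not the authors' actual choices.

```python
# The seven evaluation methods identified by Reeves (1992).
METHODS = {"interviews", "focus groups", "questionnaires",
           "observations", "ratings assessment", "expert review",
           "achievement tests"}

# A hypothetical plan mapping each of the four evaluation
# functions to the methods chosen to serve it.
evaluation_plan = {
    "goal refinement":      ["interviews", "focus groups"],
    "documentation":        ["observations"],
    "formative evaluation": ["expert review", "questionnaires"],
    "impact evaluation":    ["focus groups", "questionnaires",
                             "achievement tests"],
}

# Sanity check: every chosen method is a recognized evaluation method.
assert all(m in METHODS
           for f in evaluation_plan
           for m in evaluation_plan[f])
```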

Figure 3. Evaluation Functions and Methods (after Reeves 1992).

Evaluation I: Goal refinement function. Reeves (1992) defines goal refinement as a "clear cut vision of what the [educational] goals ... should be" (p. 520). These goals may change or evolve in practice, but it is important to begin the process of conceptualizing, designing, and producing educational multimedia resources with specific objectives in place. We have attempted to assess goals as seen by course instructors, students, and administrators (who provide funds for developing such resources). These goals, in turn, informed the initial design of our lectureware, with attention to the differing (yet usually resolvable) perspectives of each group. As an initial evaluative step, goal refinement is fundamental in shaping an overall design around explicit project goals.

Evaluation II: Documentation function. Reeves (1992) defines documentation as simply keeping a record of what is actually done throughout the process of creating educational resources (p. 521). From the beginning of the project we have compiled extensive documentation detailing what we thought we were doing, problems, ideas for changes, and reformulated goals. Information drawn from this documentation can make future projects more efficient. To keep a record of the implementation of the resources in the classroom, the project manager attended nearly every lecture taught with the resources, and the resulting documentation helped us to fix bugs and reshape our resources, thus serving a formative evaluation role (to be discussed below). Other important documentary sources are the yearly status and future planning reports compiled by the project managers, which document what we accomplished, and force the project managers to confront looming and unresolved issues. Careful and methodical documentation plays an important role in shaping our resources.

Evaluation III: Formative evaluation function. Flagg (1990) defines formative evaluation as the systematic collection of information for the purpose of informing decisions to design and improve the product. Formative evaluation required that we consult with experts in content and design. These consultations helped us rethink, reshape, and reform our original ideas to remove the kinks in our original design ideas.

Evaluation IV: Impact evaluation function. Impact evaluation considers a range of methods to assess the impact of given educational resources on student learning. Impact evaluation is appropriate after coherent, carefully designed, and innovative examples of educational technology have been produced, shaped by goal refinement, documentation, and formative evaluation functions. Effective and useful impact evaluation methods are often difficult to design. Although evidence for the effectiveness of particular educational multimedia resources exists (Podell, Kaminsky, & Cusimano, 1993), such studies say little or nothing about the quality and effectiveness of any particular application.

We have employed two impact evaluation methods: focus groups and questionnaires, both qualitative forms of impact evaluation. In addition, we have provided our general sense of the impact of our multimedia resources on the students. Information from these methods of impact evaluation has led to modifications of particular resources as well as a very general evaluation of some of the fundamental goals of our multimedia resources. Generally positive feedback from students has provided encouragement and has suggested that our resources have been refined enough to begin formulating more quantitative impact evaluations.

Focus groups seem relevant inasmuch as our lectureware is used with entire classes rather than individual students (Monmonier & Gluck, 1994). Feedback from student meetings provided us with an overall response--what they liked and didn't like, and how they would like to see the resources modified. The focus groups, while very positive about the resources and the course, were also able to articulate problems that had impeded their goals in the course. Thus, focus groups served as a rough check on our original goals, provided us with ideas for refining the resources, and provided a forum in which students could refine and define their educational goals in the midst of fundamental changes in the classroom.

Questionnaires were the second impact evaluation method we employed. In the two sections of the Gaia course taught by different instructors during our first semester, one section was taught without the lectureware and the other with it. The first questionnaire, given to the section taught without the lectureware (who saw the multimedia resources only in a single review lecture), queried students about the advantages and disadvantages of the multimedia materials. A second questionnaire, given to the section taught with the lectureware, queried students about its use and its positive and negative attributes. Responses to a later set of questionnaires were similar to, and even more favorable than, these. In addition, comments culled from our focus groups closely correlated with comments elicited by the questionnaires.

The majority of students who saw the multimedia resources for one day were in favor of their being used for the entire class, or at least in conjunction with other teaching methods. A number of students who were exposed to the use of our resources for the entire semester mentioned that they could not imagine learning the material without the resources. A high percentage of the respondents mentioned that the lectureware was interesting and that it helped them visualize and understand difficult concepts. They found the step-by-step buildup of ideas within sequential resources and the explicit relations between different resources useful in understanding concepts and their interrelations. Students particularly liked resources with 3-D graphics, movement, and conditional interactivity.

Problems included technical difficulties (bugs, crashed computers, fumbling with the computer), disrupted class, and wasted time. Lighting problems were the second largest concern among students who were in the multimedia section of the course. The more serious critiques concerned the atmosphere for learning and cannot be as easily fixed as the technical ones. Some students had difficulty gleaning the key points from complex resources. Others were bothered by the difficulty of depicting the gist of complex resources in their notes. Numerous students in the section that saw the multimedia resources for only one day expressed concern that the instructor would be distanced from the class by focusing attention on the computer and not the students. This problem, however, was expressed by only two of the students in the section that used the resources during the entire semester.

Qualitative methods of impact evaluation, along with goal refinement, documentation, and formative evaluation have helped inform original educational and design goals, and responses from students have provided us with some key issues that more quantitative methods of impact evaluation may address. Questions remain. For example, students strongly prefer multimedia resources with some kind of movement. Is this preference based on a desire to be entertained, or on the understanding that movement (particularly on graphs, diagrams, and maps) enhances the understanding of certain concepts and ideas? Students also claimed enhanced understanding from some of the more complex resources as compared to static depictions of the same materials in their course reader. Impact evaluations to assess such issues are currently being developed.


This integrated approach to the design, implementation, and evaluation of multimedia educational resources included a multimedia design based on consistent and coherent graphic design principles and the matching of educational and content goals to particular multimedia forms and functions. We used this design strategy to guide the production and implementation of educational multimedia resources from the outset of the project. Our approach to educational multimedia evaluation drew on a broad range of evaluation functions and methods, likewise implemented from the outset. Evaluation, broadly conceptualized as goal refinement, documentation, formative evaluation, and impact evaluation, played an informative role throughout the design, production, and implementation of multimedia resources rather than being introduced only at the end of the process. Our design and evaluation strategies were bound together by an iterative design approach in which the goals and expertise of content experts and educators shaped and were shaped by the goals and expertise of multimedia designers.

We hope that this approach to the design, implementation, and evaluation of educational multimedia resources may guide other similar projects, posing and addressing important questions about the prospects and impact of multimedia on earth science and geographic education.


Andrews, S. & D. Tilton. (1993). How multimedia and hypermedia are changing the look of maps. Proceedings of AUTOCARTO 11. Minneapolis, MN, 348-366.

DiBiase, D. & J. Krygier. (1994). Multimedia at Penn State. Cartographic Perspectives 19, 40-42.

Ebel, R. (1982). The future of educational research. Educational Researcher 11(8), 18-19.

Flagg, B. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Lawrence Erlbaum.

Monmonier, M. & M. Gluck. (1994). Focus groups for design improvement in dynamic cartography. Cartography and Geographic Information Systems 21(1), 37-47.

Morrow, C. & J. Boettcher. (1995). Technology classrooms: Design and implementation, a Penn State perspective. Syllabus 8(6), 18-20.

Podell, D., S. Kaminsky, & V. Cusimano. (1993). The effects of a microcomputer laboratory approach to physical science instruction on student motivation. Computers in the Schools 9(2/3), 65-73.

Reeves, T. (1992). Evaluating schools infused with technology. Education and Urban Society 24(4), 519-534.