On the Necessity for Grassroots Evaluation of Educational Uses of Technology: Recommendations for Higher Education 


Some college administrators and faculty members act as though the Web had magical educational powers. "Use it and outcomes will improve (even if no change is made in the processes or structures of learning)!" But major improvements in educational results are far more likely when the Web is used in ways that enable significant change in who can learn, what they learn (educational goals), or what they do when learning. Using the Web to support a distance learning program ought to be accompanied by a reexamination of educational goals, a fresh look at students (including new kinds of students), and a reexamination of support for staff and students. But those can be "sea changes" for many colleges and universities. Unfortunately, errors and ambushes are much more likely when educators and their institutions grope their way into such unfamiliar territory. Unable to draw on much of their well-honed skill and intuition, they are far more likely to make mistakes; it can amount to flying blind. Some of these mistakes are repeated time and again as new generations of innovators appear on the scene; others are new (Ehrmann, in press).

Many faculty members, administrators, and legislators respond to the risk of "blindness" and ambush by avoiding the danger. These people may use the Web, or recommend it, but they explicitly or implicitly restrict its use to familiar approaches. Unchanged practices unfortunately offer little additional benefit for students, even though the use of technology may increase the cost of education. For example, if students who previously read three articles on paper now read three similar articles on the Web, one may not detect much difference in what they learn. The failure to reexamine the fundamentals is a major reason why three decades of promises about educational uses of computing have led to frustration, both inside institutions and out.

We need more grassroots diagnostic studies of Web-enabled efforts to make programmatic improvements in the process and outcomes of education. 

Flying Blind

The TLT Group has done over 70 evaluation workshops for faculty and administrators. Even among those interested enough to attend such workshops, few have prior experience in doing studies themselves. Nor have most participants ever heard of even one study (grassroots or national) that produced usable, useful findings. This situation is understandable. For centuries, education was a craft where people and institutions could thrive while doing things more or less as they had always been done. Twenty years or more could pass from the first appearance of an innovation until even half of all colleges and universities had implemented the new technique: time enough to learn by osmosis. In those days, one could be a fine faculty member without ever having seen or done research on the shape of one's classrooms, one's instructional materials, or the fine points of lecturing. The great teachers spent decades honing their reactions so that the tiniest quiver of action or inaction in a classroom or in a student's essay would help them interpret what was happening, anticipate what was about to happen, and make informed decisions about what to do next. Experienced teachers were capable of simultaneously planning months ahead while also altering their plans on the fly as circumstances changed. Administrators had comparable skills for looking into their crystal balls. 

But widespread reliance on technology driven by Moore's Law destroyed that relative stability. Moore's Law states that computer chips double in power every eighteen months or so. If that steadily growing computer power is used to make periodic, qualitative changes in educational practice and structure, the result can be continual turbulence, hidden problems, and lurking opportunities. The old instincts are not worth as much anymore. Suddenly we are all groping in the dark, hoping for the best about the consequences of our next use of technology, and guessing about what might go wrong next.
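
To make that pace concrete, here is a minimal illustrative sketch (not part of the original testimony) of how an eighteen-month doubling period compounds over the span of a teaching career. The function name and the choice of years are illustrative assumptions; the arithmetic simply applies the doubling period quoted above.

    # Illustrative arithmetic only: compound growth implied by a doubling
    # period of roughly eighteen months (1.5 years), as quoted above.
    def growth_factor(years, doubling_period_years=1.5):
        """Return the multiple by which computing power grows after `years`."""
        return 2 ** (years / doubling_period_years)

    for years in (3, 6, 10, 15):
        print(f"After {years:2d} years: about {growth_factor(years):,.0f} times the power")

After a decade the factor is already roughly a hundredfold, which is one way to see why practices built around one generation of technology can be overtaken well within a single career.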

Imagine, for example, that you are teaching a course that uses the Web. You have redesigned the course in ways that depend on students using the Web to collaborate on homework projects, even though you have never asked students to do much work together on homework before. Now, two weeks into the term, it is hard to know for sure, but you fear that students are not collaborating online as much or as well as you had hoped. The course's schedule and success might be in jeopardy. Or maybe everything is OK. Is there really a problem? If so, why? 

Any of literally dozens of barriers could be hindering collaboration online. But which few of them are actually hindering your students? Unless you can find out quickly, the class may never achieve what you had hoped.

Educators Need Help in Doing Their Own Studies

Asking such questions is not "rocket science," but many educators need a little help in knowing what questions to ask and how to interpret the answers. Nor is there any need for educators to start from scratch in designing and carrying out such an inquiry.

You could, for example, implement a survey: ask students some pointed questions about how they are, and are not, working together online. Focus your attention on problems that would otherwise be invisible but that occur often enough to be worth checking for.
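
As a purely illustrative sketch, the short script below tabulates answers to a few such pointed questions. The question wording, answer categories, and sample responses are hypothetical, invented here for illustration rather than drawn from any published survey instrument.

    # Hypothetical example: tabulating a short diagnostic survey about
    # online collaboration. Questions, answer categories, and responses
    # are invented for illustration, not taken from any real instrument.
    from collections import Counter

    QUESTIONS = {
        "access": "How often were you unable to get online when your group needed you?",
        "value": "Does working with other students online help you learn the material?",
        "clarity": "Were the instructions for the group assignments clear?",
    }

    # Each response maps a question key to one student's chosen answer.
    responses = [
        {"access": "often", "value": "no", "clarity": "yes"},
        {"access": "rarely", "value": "yes", "clarity": "yes"},
        {"access": "often", "value": "not sure", "clarity": "no"},
    ]

    for key, text in QUESTIONS.items():
        counts = Counter(r[key] for r in responses)
        print(text)
        for answer, n in counts.most_common():
            print(f"  {answer}: {n} of {len(responses)} students")

Even a tally this simple can show at a glance whether a suspected barrier, such as unreliable access or skepticism about collaborative learning, is widespread or rare in a particular class.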

Yet, "simple" does not mean "easy." As any pollster will tell you, good surveys are hard to create. It is difficult to write an unambiguous, unbiased question, and few educators have enough experience to recognize common hidden problems and opportunities. Finally, because it's so hard to do a good study and because, in the past, few educators needed to do them, such studies rarely get done.

Recently, that trend has been changing. The non-profit Flashlight Program, which I direct, helps educators evaluate their own educational uses of technology. Forty-nine institutions around the world are members of our growing evaluative network and approximately 250 others have also made some use of our evaluative tools. For example, Flashlight Online helps users gather data from currently enrolled students about how the Web and other technologies are actually being used. Another example: our cost analysis handbook helps educators study the use of time, money, and other resources in technology-intensive programs. To help educators learn to use these tools, we offer training both face-to-face and online.

Flashlight's emphasis has been on general-purpose approaches to evaluating programs that depend on technology; however, there is a weakness in our approach. To make sure that each educator and each institution can tailor a unique study to its own local circumstances and needs, we sacrifice the option of sharing survey data across institutions, and we sacrifice time. It takes time for each educator and each institution to design its studies. Moreover, if the studies turn out to be very much alike, that investment becomes wasted time. If people have chosen not to spend the time, then an opportunity to gather useful data has been wasted. There is no such thing as a predesigned, general-purpose study. But it is possible to design targeted "turnkey" study packages that can be used by many educators who share the same "need to know" about a particular facet of Web use in education.

Nationally (and by that term, we mean to include not only the Federal Government but also national funders, state legislatures, and institutions), we need to do several things.

Recommendations

We need research on the educational possibilities and hazards of the most promising educational changes that can be supported with the Web. If we study enough similar efforts to make programmatic changes using the Web, we can uncover some of the most common hidden problems and opportunities. Those findings could guide the creation of easy-to-use diagnostic tools to help educators quickly discover whether any of those problems or opportunities are present locally. 

This research can also gather insights into how to respond to these issues. For example, what if you want your students to collaborate online but it turns out that 40% of your students believe that collaborative learning is an inferior, inefficient way to learn? What options does an instructor have in such a situation? Appropriate research would gather a library of responses to some of these problems and opportunities.

We must develop the specialized investigative tools that educators need to study and improve their own instructional programs; each such study package is likely to combine several elements.

Some of these tools will be comparatively general purpose, but most should focus on specific educational improvements that depend heavily on Web use.

Practitioners will require some training to use these study packages, even though the study tools should be designed to be relatively simple and easy to use. We suggest a mix of train-the-trainer, face-to-face, and online methods. This training ought to be organized on a national or even international basis, but some of it must be provided locally because some of it will need to be face-to-face.

Training will be the most expensive element of the program because the number of educators who will need at least a little training is enormous. The cost of the alternative, however, is even higher: allowing instructors to continue to fly blind with the result that they fail to make meaningful use of this very expensive technology.

Paying for this program will not be easy, despite its cost-effectiveness and the need. Many institutions are starting from almost zero: no skilled trainers to help their faculty members and staff, no budget, no tradition of work in the area. One helpful strategy is for institutions to band together and share the work and investment. The Flashlight Program began in this manner. Institutions that are members of the same consortium or system could take similar measures, agreeing on a shared plan to develop or acquire training materials or evaluation tools. They could create a shared R&D agenda. One member might create study designs for engineering educators who want to safely expand the role of design in the curriculum and who need help in monitoring the risks of faculty and student burnout. Another institutional member might be responsible for devising evaluative tools for monitoring and improving intercultural interaction online. As a reviewer of this article pointed out, it would also be useful to involve experts from relevant professional associations such as the American Evaluation Association (AEA) and the Association for Educational Communications and Technology (AECT).

A Long-Lived and Productive Investment

To make widespread grassroots diagnostic evaluation possible, we need to invest in evaluative tools and focused training. Fortunately, such an investment is likely to be far more long-lived and efficient than an investment in a piece of computer hardware. Generations of technology may make one another obsolete with almost absurd rapidity, but educational activities change far more sedately. A study package to help educators detect and deal with barriers to online collaboration could have been developed a quarter century ago, when PLATO (the pioneering computer-based education system developed at the University of Illinois) began using e-mail and conferencing, because few barriers to online collaboration are specific to the details of one generation of hardware, software, or telecommunications. Thus, a 1970s study package on diagnosing the problems that block online collaboration could be used with only slight modification today.

Unfortunately, no such package was developed for use with PLATO. We have paid the price in failed courses and lost opportunities ever since. However, if we do invest in such packages today, they should be useful with only minor modifications in many disciplines, at many levels of education, in many cultural contexts, and for many years. A little money can go a long way when invested this way. Over the coming years we could gradually build quite an extensive system of evaluative tools and training for educators.

Summary

The Web is of little instructional use unless it makes possible ambitious (and thus risky) changes in the organization, content, and support of instructional programs. Most educators are reluctant to make such changes in part because they sense hidden dangers. Local studies could help make such initiatives safer by revealing those dangers (and some hidden opportunities as well) in time for local educators to fix the problems (or seize the opportunities). 

We urge public sector funders to invest in the development of study packages and training that could help hundreds of thousands of educators avoid flying blind. Because these study packages and this training will focus on programmatic issues rather than just on the particulars of today's technology, they should have a useful life measured in decades.

[Editor's Note: This article is adapted from the author's e-testimony to the Web-Based Education Commission, August 15, 2000.]

References

Ehrmann, S. (in press). Technology and educational revolution: Ending the cycle of failure. Liberal Education. Retrieved September 26, 2000, from http://www.tltgroup.org/resources/V_Cycle_of_Failure.html

Ehrmann, S. (1997). Flashlight evaluation handbook. Washington, DC: TLT Group. Retrieved September 26, 2000, from http://www.tltgroup.org/programs/hownot.html