Environmental Scanning
During the 1960s and 1970s, planners and forecasters succeeded in developing many useful methods based on an "inside-out" perspective; that is, it was implicitly assumed that knowledge about issues internal to their organizations was most important. At the same time, however, analysts increasingly found that emerging external issues often had a greater impact on the future of their organizations than any of the internal issues. In response, they began to modify some of their techniques and concepts so that outside developments could be formally included in their results. Initially, the emphasis in tracking the outside world fell on monitoring developments that, from an inside perspective, had already been identified as potentially important (Renfro and Morrison 1982).

Eventually, even this so-called "monitoring" was found inadequate as entirely new issues emerged that had major effects through mechanisms that had not previously been recognized. Thus, it became the responsibility of the forecaster to scan more widely in the external environment for emerging issues, however remote. The search for the possibility, rather than the probability, of major impact became common. The importance of scanning in the new sense was first recognized in the national security establishment and later by the life insurance industry, when it discovered that its market was declining. From the inside-out perspective of the insurance industry, the decline could not be explained. The economy was growing. The population was growing. The baby boom was just entering the labor market, adding millions of potential new customers. Yet the sales of life insurance failed to reflect this expected growth. Somehow the industry had failed to perceive a fundamental social change--the emergence of the wife as a permanent, second earner in the family. While many women in the past worked briefly before marriage or before starting their families, many if not most left the labor force when they began their families. In the late 1960s and through the 1970s, however, more and more women returned to work after starting their families. And this change affected the demand for life insurance: The life insurance needs of a family with one income are much greater than those of the family protected by two incomes. This development, coupled with a postponement of forming families, a decline in the birthrate, and an increase in childless couples, all reduced the traditional market for life insurance. 
That so major an industry could have overlooked these social developments stimulated the development of environmental scanning methods, particularly as the scope of scanning activities expanded to include technological developments, economic developments, and legislative and regulatory developments.

Developing the environmental scanning structure
Two main barriers impede the introduction of environmental scanning techniques in higher education: (1) learning the new process and (2) achieving the necessary organizational acceptance and commitment to make the process work and be worthwhile (Renfro and Morrison 1983a). These two barriers pose several questions: How can an environmental scanning function be developed in an already existing organizational structure? How should environmental scanning work within the organization? What resources are needed for the process to function successfully?

While the organizational structure of the scanning function will vary according to a given institution's management style, the functions of the scanning process are universal. Developing a scanning function within an existing organizational structure is necessarily evolutionary because sudden organizational change is disruptive and costly. While the scanning function could be implemented in many ways, the most popular of the formal systems by far is through an in-house, interdisciplinary, high-level committee of four or five members (but no more than 12 or so). If assigned to a particular department or contracted out, the results of scanning can easily be ignored. And to achieve the widest appreciation of the potential interactions of emerging issues, the scanning function must be interdisciplinary. Without several disciplines involved, cross-cutting impacts, such as the impact of a technological development (for example, the home computer) on social issues (for example, the family), will most likely be missed. To facilitate the communication of the results of scanning throughout the institution, it is easiest to work directly with the various leaders of the institution rather than with their designated experts. Ideally, therefore, the chief executive officer of the institution should appoint the scanning committee, and to increase the likelihood that results will be incorporated into the decision-making process, the chair of the committee should be one of the president's or chancellor's most trusted advisors.

Perhaps the essential issue for the successful operation of a scanning committee is the selection of the other members. Ideally, membership should include a broad cross-section of department heads, vice presidents, deans, the provost, faculty members, trustees, and so forth. Certainly the institutional research office should be represented, if not by the director, then by a senior assistant. The objective is to ensure that all important positions of responsibility in the institution are represented on the committee.

High-level administrators should participate in scanning for several reasons. First, only those with a broad perspective on an institution's current operations and future directions can make an informed evaluation of the potential importance or relevance of an item identified in scanning. Second, the problems of gaining the necessary communication, recognition, and acceptance of change from the external environment are minimized. Hence, the time between recognition of a new issue and communication to the institutional leadership is reduced, if not eliminated. And when an issue arises that requires immediate action, a top-level scanning committee is ready to serve the institution's leadership, offering both experience and knowledge of the issue in the external world and within the institution. Third, one of the more subtle outcomes of being involved with a scanning system is that the participants begin to ask how everything they read and hear bears on the work of the scanning committee: "What is its possible relevance for my institution?" Indeed, the development within top-level executives of an active orientation to the external environment and to the future may well be as beneficial to the organization as any other outcome of the process.

A scanning committee does not need to have general authorization, for it serves only as an advisory board to the chief executive. In this sense it functions similarly to the planning office in preparing information to support the institution's authorized leadership. The scanning committee is, of course, available to be used as one of the institution's resources to implement a particular policy in anticipation of or response to an issue. But the basic purpose of the scanning committee is to identify important emerging issues that may constitute threats or opportunities, thereby facilitating the orderly allocation of the institution's resources to anticipate and respond to its changing external environment.

The environmental scanning process
Environmental scanning begins with gathering information about the external environment. This information can be obtained from various sources, both internal and external to the organization. Internal sources include key administrators and faculty members; they could be interviewed to identify emerging issues they believe will affect the institution but are not currently receiving the attention they will eventually merit. Such interviews usually release a flood of emerging issues, indicating that the organization's key leaders are already aware of many important new developments but rarely have the opportunity to deal with them systematically because they are so overburdened with crisis management.

Administrators and selected faculty members could identify the sources they use for information about the external world-the newspapers, magazines, trade publications, association journals, and other sources they regularly use to keep in touch with developments in the external world. Typically, these surveys show that administrators read basically the same publications but only selected sections.

Scanning includes a broad range of personal and organizational activities. It is a process of screening a large body of information for some particular bit or bits of information that meet certain screening criteria (Renfro and Morrison 1983b). For example, some people scan headlines in a newspaper for particular kinds of articles, and when they find that information, they stop scanning and read the article. Then they resume scanning. This process has several distinct steps:

  1. searching for information resources
  2. selecting information resources to scan
  3. identifying criteria by which to scan
  4. scanning, and
  5. determining special actions to take on the scanning results.

How these steps are taken determines the kind of scanning-passive, active, or directed. (For an excellent discussion of scanning used by business executives, see Aguilar 1967, pp. 9-30.)
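The five steps above can be sketched as a simple filtering loop. This is an illustration only: the resources, headlines, and keyword criteria below are invented, and keyword matching stands in for the committee's judgment about relevance.

```python
# Hypothetical sketch of the five scanning steps: search, select,
# set criteria, scan, and act on the results. All data is invented.

# Steps 1-2: search for and select information resources to scan.
resources = {
    "national_newspaper": [
        "Two-earner families reshape insurance demand",
        "City council approves parking garage",
        "Enrollment of adult learners rising sharply",
    ],
    "trade_newsletter": [
        "Corporate training programs adopt computer-based instruction",
    ],
}

# Step 3: identify screening criteria (keywords standing in for the
# committee's sense of what is relevant to the institution).
criteria = ["enrollment", "insurance", "instruction"]

# Step 4: scan each resource, keeping items that meet a criterion.
hits = []
for source, headlines in resources.items():
    for headline in headlines:
        if any(term in headline.lower() for term in criteria):
            hits.append((source, headline))

# Step 5: determine actions -- here, simply forward hits to the committee.
for source, headline in hits:
    print(f"Forward to committee: [{source}] {headline}")
```

In practice the "criteria" are the broad relevance questions discussed below, not literal keywords, but the structure of the process is the same.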

Passive scanning. Everyone scans continually. Whatever a particular individual's interests, goals, personal values, or professional objectives, it is an element of human nature to respond to incoming information that might be important. Ongoing scanning at an almost unconscious level is passive scanning. No effort is made to select a particular information resource to scan. The criteria of passive scanning are obscure, unspecified, and often continuously changing. Only ad hoc decisions are made on the results of this type of scanning.


Passive scanning has traditionally been a major source of information about the external world for most decision-makers and hence for their organizations. The external environment has historically been a subject of some interest to most people, requiring at least passive scanning at some level for the maintenance of one's chosen level of fluency in current or emerging issues. The pace of change in the external environment has moved this scanning from an element of good citizenship to a professional requirement--from a low-level personal interest satisfied by passive scanning to a high-level professional responsibility requiring active scanning--more like the special scanning used for subjects of particular importance, such as career development.

Active scanning. The components of active scanning are quite different from those of passive scanning. For example, the searching or screening process requires a much higher level of attention. The information resources scanned are specifically selected for their known or expected richness in the desired information. These resources may include some, but usually not all, of the regular incoming resources of passive scanning. Thus, a member of the scanning committee would not actively scan magazines about sailing for emerging issues of potential importance to the university. This is not to say that such issues will never appear in this literature but that passive scanning is sufficient to pick up any that do.

The criteria of screening for signals of emerging issues must be broad to ensure completeness, and they usually focus on certain questions: Is this item presently or potentially relevant to the institution's current or planned operations? Is the relationship between the likelihood and potential impact of the item sufficient to justify notifying the scanning committee? For example, a major renewal of central cities in the United States accompanied by high rates of inward migration might have tremendous impact on the educational system but just be too unlikely in the foreseeable future to warrant inclusion in the scanning process. It is not part of the institution's current "interesting future," which is a very small part of the whole future.

The interesting future is bounded by the human limitations of time, knowledge, and resources; it represents only that part of the future for which it is practical to plan or take actions now or in the foreseeable future. For almost all issues, this interesting future is bounded in time by the next three or four decades at the most, although most issues will fall in the period of the next 20 years. This time frame is defined as that period in which the major timely and practical policy options should, if planned or adopted now, begin to have significant impact.

The issues-policy-response time frame depends on the cycle time of the issue. For the issue of funding social security, the interesting future certainly runs from now for at least 75 to 85 years-the life expectancy of children born now. Actually, as their life expectancy will probably increase in the decades ahead, 90 to 100 years may be a more realistic minimum. For financial issues, the interesting future may be the next several budget cycles-just two or three years. For a new federal regulatory requirement that may be imposed next year, the interesting future runs from now until then.

The interesting future is bounded by a measure of the uncertainty that a particular issue might actually materialize. Developments that are virtually certain either to happen or not happen are of little interest in scanning, because they involve little uncertainty. If the institution has little ability to affect these more or less certain happenings, they should be referred to the appropriate department for inclusion in its planning assumptions. The aging of the baby boom, for example, is certain to happen and should be factored into the current strategic planning process. A potential new impact of the baby boom that may or may not happen-such as growing competition within the medical care system for federal resources-should be forwarded to the scanning committee for evaluation of both its probability and its importance. Thus, the interesting future comprises primarily those developments that are (1) highly uncertain, (2) important if they do or do not happen, and (3) responsive to current policy options.
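The three tests can be mirrored in a small triage function. The thresholds and the example issues are invented for illustration; in practice, "probability" and "importance" are judgmental estimates, not measurements.

```python
# Hypothetical triage of developments against the "interesting future"
# tests: an item belongs there only if it is (1) highly uncertain,
# (2) important either way, and (3) responsive to current policy options.

def triage(probability, important, policy_responsive):
    """Route a development based on the three interesting-future tests."""
    nearly_certain = probability >= 0.9 or probability <= 0.1
    if nearly_certain:
        # Little uncertainty: a planning assumption, not a scanning issue.
        return "planning assumptions"
    if important and policy_responsive:
        return "scanning committee"
    return "monitor only"

# The aging of the baby boom: virtually certain to happen.
print(triage(0.99, important=True, policy_responsive=False))
# Growing competition for federal medical-care resources: uncertain,
# important, and open to policy response.
print(triage(0.5, important=True, policy_responsive=True))
```

The 0.9/0.1 cutoffs are arbitrary stand-ins for the committee's judgment about what counts as "virtually certain."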

A second dimension of scanning concerns the time element of the information source being scanned. Information sources are either already existing resources, such as "the literature," or continuing resources, which continue to come in, such as a magazine subscription. Passive scanning uses all continuing resources-conversations at home, television and radio programs, conferences, meetings, memos, notes, and all other incoming information. Passive scanning rarely involves the use of existing resources. Active scanning involves the conscious selection of continuing resources and, from time to time, supplementing them with existing resources as needed. For example, an item resulting from scanning continuing resources may require the directed scanning of an existing resource to develop the necessary background, context, or history to support the determination of an appropriate response.

Directed scanning. The active scanning of a selected existing resource for specific items is directed scanning. Usually this scanning continues until the items are located, not necessarily until the resources are exhausted. For example, if a member of the scanning committee knows that a good analysis of an issue was in a particular journal some time last year, he could examine the table of contents of all volumes of the journal to locate the article. As the specific desired item is known and the resource can be specified, the scanning committee can delegate whatever directed scanning is necessary.


Scanning for the institution
To anticipate the changing conditions of its external environment, the institution needs both active and passive scanning of general and selected continuing information resources. The results of this process-in the form of clippings or photocopies of articles-will be reported to the scanning committee for evaluation. The chair of the committee (or its staff, if any) compiles the incoming clippings to prepare for the discussion of new issues at the committee's next regular meeting. In performing this task, the chair looks for reinforcing signals, for coincident items (each of which may have sufficient importance only if both happen), for items that may call for active or directed scans of new or different resources, and for information about the interesting future.

Developing a scanning taxonomy. Any number of taxonomies and mechanisms have been used to structure the scanning process. All of them attempt to satisfy several conflicting objectives. First, the taxonomy must be complete in that every possible development identified in the scanning has a logical place to be classified. Second, every such development should have only one place in the file system. Third, the total number of categories in the system must be small enough to be readily usable but detailed enough to separate different issues. The concepts developed from technology assessment in the mid-1970s provide an elementary taxonomy consisting of four categories: (1) social, (2) technological, (3) economic, and (4) legislative/regulatory.
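The three design objectives--completeness, a unique place for every item, and a manageable category count--can be made concrete in a minimal filing structure. The categories are the four named above; the clippings are invented examples.

```python
# Minimal scanning file built on the elementary four-category taxonomy.
# Each clipping must be filed in exactly one category.

CATEGORIES = ("social", "technological", "economic", "legislative/regulatory")

scanning_file = {category: [] for category in CATEGORIES}

def file_item(item, category):
    """File a clipping, enforcing the taxonomy's two key rules:
    the category must exist (completeness of the scheme), and each
    item may occupy only one place in the file system."""
    if category not in scanning_file:
        raise ValueError(f"no place to file {item!r}: unknown category {category!r}")
    for existing in scanning_file.values():
        if item in existing:
            raise ValueError(f"{item!r} is already filed; one place per item")
    scanning_file[category].append(item)

file_item("home computers in the family", "technological")
file_item("two-earner families", "social")
print(sorted(cat for cat, items in scanning_file.items() if items))
```

A real file would use the institution's own, more detailed category list; the point is only that the rules of the taxonomy can be enforced mechanically rather than by convention.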

The taxonomy at the University of Minnesota, for example, includes five areas (Richard B. Heydinger 1984, personal communication). The political area includes the changing composition and milieu of governmental bodies, with emphasis at the federal and state levels. The economic area identifies trends related to the national and regional economy, including projections of economic health, inflation rates, money supply, and investment returns. The social lifestyle area focuses on trends relating to changing individual values and their impact on families, job preferences, consumer decisions, and educational choices, and the relationship of changing career patterns and leisure activities to educational choices. The technological area includes changing technologies that can influence the workplace, the home, leisure activities, and education. The demographic manpower area includes the changing mix of population and resulting population momentum, including age cohorts, racial and gender mix for the region, the region's manpower needs, and the implications for curricula and needed research.

To develop a more specialized taxonomy, the scanning committee should focus on the issues of greatest concern to the institution. The committee can use any method it chooses to select these categories-brainstorming, questionnaires, meetings, for example. Whatever method is used, it should be thorough, democratic, and, to the extent possible, anonymous (so that results are not judged on the basis of personalities). One method that meets these criteria is to use a questionnaire based on an existing issues taxonomy. Sears Roebuck, for example, has over 35 major categories in its scanning system, ALCOA uses a taxonomy with over 150 categories, and the U.S. Congress organizes its pending legislation into over 200 categories. Such a list can be used as the basis of a questionnaire that asks respondents to rate the relative importance of each category and expand categories that may be of particular importance to the institution. For example, under the category of higher education, the committee may want to add subcategories concerning issues of tenure and the academic marketplace, among others.

Alternatively, the committee may want to develop its own taxonomy. Although using a detailed taxonomy like the one Congress uses helps to ensure thoroughness and although an organized system can be adapted to new issues as additional categories are opened, the advantage of starting with only four categories is simplicity.

When the questionnaire is complete, the categories named most frequently should be selected for scanning. That number is determined by the size of the committee; experience indicates that a 10- to 12-member committee can handle no more than 25 to 40 assigned categories for scanning, with each member having responsibility for two or three categories and the relevant sources to scan for each of them. The list of categories then becomes the subject index of the scanning files.
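The tallying and assignment step described above can be sketched as follows. The respondents, category names, and committee roster are invented, and a real questionnaire would cover far more categories; the sketch only shows the mechanics of selecting the most frequently named categories and spreading them across members.

```python
from collections import Counter

# Hypothetical questionnaire returns: each respondent names the
# categories he or she considers most important to the institution.
responses = [
    ["demographics", "technology", "state funding"],
    ["demographics", "social values", "state funding"],
    ["technology", "demographics", "federal regulation"],
]

# Select the most frequently named categories for scanning.
tally = Counter(cat for resp in responses for cat in resp)
members = ["chair", "IR director"]
capacity = len(members) * 3          # two or three categories per member
selected = [cat for cat, _ in tally.most_common(capacity)]

# Assign categories round-robin so no member carries more than
# two or three of them.
assignments = {m: [] for m in members}
for i, category in enumerate(selected):
    assignments[members[i % len(members)]].append(category)

print(assignments)
```

With a ten- to twelve-member committee, the same round-robin keeps the total within the 25 to 40 categories the text suggests as an upper bound.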

With this list of categories and a list of the publications and other resources already being scanned, the committee can identify the categories for which assigned scanning is necessary. At this point, the kind of resource takes on importance. For example, "alcoholism" may be an issue selected for scanning but one for which no current resource can be identified. For this issue, generic and secondary resources may be sufficient-newspapers, national weekly magazines, or other resources in the passive scanning network. Nevertheless, the resources designated for this issue and their designated scanners should be identified. Of course, a particular publication or resource may cover more than a single category, and it may take several publications to cover a single issue adequately.

What to scan. Determining which materials to scan is an extremely important and difficult task. This process involves deciding what "blinders" the committee will wear. It is obviously better to err on the side of inclusion rather than exclusion at this point, yet the amount of material committee members can (or will) scan is clearly limited. The decisions made at this point will determine for the most part the kind, content, and volume of information presented to the scanning committee and will ultimately determine its value to the institution. This question deserves substantial attention.

Because of the limitations of various resources, scanning must be limited to those resources reporting issues that have a primary or major impact on an institution, whether the issues originate in the external world or not. A college or university must anticipate, respond to, and participate in public issues-issues for which it may not be the principal organization affected but for which it nevertheless has an important responsibility to anticipate. It is useful, then, to formally structure the discussion of issues and their relative position to each other. An example of such a chart is shown in figure 4. Such a chart creates an orderly structure for the discussion of issues, ranging from an introspective focus to a focus on the entire world. The levels should be arranged so that all issues confronting the institution can be identified as having their focus at one of the levels.

The vertical dimensions of the chart are the areas of concern to the university. Although they will necessarily vary from time to time, the issues include students, research, finances, technological change, legislative/regulatory change, social values, and more. The relative importance of each of the intercepts of the horizontal and vertical axes can be evaluated using the Delphi process described in "Forecasting." For the most important areas-usually about 10 to 12-the next step is to identify specific resources to be scanned. An area that is ranked as among the most important but without acceptable scanning resources may require some additional research.
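One simple way to rank the intercepts is to average the panel's importance ratings and keep the top-ranked areas for resource assignment. The intercepts, the 1-to-5 scale, and every rating below are invented for illustration; a full Delphi process would also feed the results back to the panel for further rounds.

```python
# Hypothetical ranking of area x level intercepts from a chart like
# figure 4. Each intercept receives 1-5 importance ratings from the panel.

ratings = {
    ("economic conditions", "state"):        [5, 4, 5],
    ("technological change", "nation"):      [4, 4, 3],
    ("alumni support", "university system"): [2, 3, 2],
    ("social values", "nation"):             [5, 5, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

ranked = sorted(ratings, key=lambda k: mean(ratings[k]), reverse=True)
top = ranked[:2]   # in practice, about 10 to 12 areas
for intercept in top:
    print(intercept, round(mean(ratings[intercept]), 2))
```

The cutoff of two is only for the example; the text's guidance of roughly 10 to 12 areas would apply to a full chart.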





[Figure 4 (not reproduced): a chart whose horizontal axis runs across levels of focus, from the university system outward to the Western world, and whose vertical axis lists areas of concern such as technological change, legislative/regulatory change, economic conditions, alumni support, and sociopolitical implications.]

All members of the scanning committee should become more aware of their ongoing passive scanning. The special screen of the scanning criteria should be added to the flow of each person's continuing resources; it is a level of sensitivity that has to be learned with experience. It must be a rule of the committee that information in any form is acceptable. The process of passing notes, clippings, or copies from any resource must become second nature. The scanning coordinator or staff person will have the responsibility to process the incoming flow for the committee's formal review.

The committee must now address the question of the resources it will actively scan, and it must consider several aspects of the available resources in making the decision. First, a survey of the committee will show the specific resources included in its passive scanning. Then the committee must determine the kinds of resources it should be scanning, which involves the content and the kind of research-for example, germane to all issues, germane only to special issues, emerging or first impression of issues, the spread of issues.

In the process of assigning resources to issues, the committee should also address the question of the mix of the media it is using-from periodical to annual publications, from print to electronic forms-and it should review its resources to determine a balance in the mix of the media. A list of journals focusing on the general field of higher education or on specific aspects of the field is shown in Appendix A, and Appendix B includes publications focusing on external issues.

Popular scanning resources. Newspapers are a major scanning resource, and the members of the committee should cover four to six national newspapers to balance the newspapers' particular focuses and biases: the New York Times for its focus on international affairs, the Washington Post or Times for their focus on domestic political developments, the Chicago Tribune for its focus on the Midwest, the Los Angeles Times for its West Coast perspective, and one of the major papers of the Sunbelt. USA Today and the Wall Street Journal, with their emphasis on trends and forces for change, are perhaps the newspapers most popular with scanners. The national perspective should be supported by a review of the relevant major state, regional, and local newspapers.

Magazines, periodicals, newsletters, and specialized newspapers in each of the four major areas--social, technological, economic, and legislative/regulatory--should be included. But it is also important to include publications of special interest groups that are attempting to put their issues on the national agenda (Congresswatch, Fusion, the Union of Concerned Scientists, the Sierra Club, the National Organization for Women, and Eagle Forum, for example) and journals reporting new developments, such as the Swedish Journal of Social Change and Psychology Today. Although the list of scanning resources may appear formidable, the number of new periodicals added to existing resources may be quite small, for at most universities, some member of the faculty already sees one of the resources or it already is received in a campus library.

A special effort should be made to seek publications of the fringe literature-the underground press-as exemplified by the Village Voice and other non-establishment publications. Depending upon the results of the survey of literature already being covered by members of the scanning committee, a special effort could be made to include publications like Ms., Glamour, Working Woman, Working Mother, Family Today, and Ladies Home Journal. Finally, the scanning literature should include a few wild cards--High Times, Heavy Metal, Mother Jones, for example. The scanning staffer should maintain a list of publications that are being scanned and the committee members responsible for scanning them. Ideally, each member of the committee should be responsible for three to four titles.

Additional resources for scanning include trade and professional publications, association newsletters, conference schedules showing topics being addressed and considered, and, in particular, publications of societies and associations involved with education and training. For example, many instructional innovations are surfacing in corporate training programs and are being discussed at annual meetings of the American Society for Training and Development and in trade publications like Journal of Training and Development and Training: The Magazine of Human Resources Development. As a further example, the forecasting movement and the concept of strategic planning developed in the business sector years before most individuals in higher education were aware of them as potentially affecting colleges and universities. Other industries--health care and social services, for example--may experience issues before higher education. Strategies for cost containment in the health care sector, for example, may well merit adaptation by higher education as funding support lessens (Morgan 1983).

A number of associations and societies track or advocate social change. The World Future Society, for example, publishes The Futurist, The Futures Research Quarterly, and Future Survey, all of which are dedicated to the exploration and discussion of ideas about the future. The American Council of Life Insurance in Washington, D.C., publishes a newsletter, Straws in the Wind, and periodic reports on emerging issues called The ACLI Trend Report. In addition, major corporations use commercial services to supplement their scanning functions: Yankelovitch's Corporate Priorities, the Policy Analysis Company's CongresScan and Issue Paks, the Naisbitt Group's Trend Report, SRI International's Scan, and the Institute for Future Systems Research's Trend Digest. The more expensive outside resources are beyond the budgets of most colleges and universities, and they are not without their own liabilities: many of them attempt to cover all issues from all perspectives, making their results too general to meet the needs of specific organizations. Moreover, an overemphasis on outside resources violates the organizational requirement that the scanning function be developed within the existing structure rather than added on from the outside.

The scanning committee should make a special effort to include within the scanning process whatever fugitive literature it is able to obtain, that is, sources that are published privately and are available only if their existence is known and they are hunted down. Such literature would include, for example, the more than 25 articles, pamphlets, and other private publications now available on the new field of issues management compiled by the Issues Management Association in Washington, private publications on changing social values such as the 1981 Connecticut Mutual Life Insurance study, AT&T's Context of Legislation, and the publications of research organizations like the Rand Corporation, SRI International, or the Center for Futures Research at the University of Southern California. Fugitive literature often enters the established literature, but sometimes years after its initial private publication. Thus, it is necessary to develop personal and professional contacts throughout the scanning network to gain access to these materials. Professional associations like The World Future Society, the Issues Management Association, or the North American Society for Corporate Planning and their conferences can be major sources for fugitive literature.

Other resources. The scanning committee should tap the resources of its resident experts (Renfro and Morrison 1983b), best accomplished by the publication of a weekly or monthly scanning newsletter prepared by the committee's staff. This brief newsletter might present two to five of the more significant items recently found by the scanning committee. Such newsletters continue to build a constituency for the scanning process and an informal network for the recognition and appreciation of the results of scanning. The newsletter might be sent, for example, to all department chairs with an open invitation for their comments and for suggestions of new ideas they see in their fields. Colleges and universities are in a unique position to conduct scanning: Many organizations do not have the in-house experience that is available on most faculties.

Internal scanning newsletters frequently use political and issue cartoons found in major newspapers and in national magazines like The New Yorker. Such cartoons provide an important signal that at least the editors believe the issue has reached national standing and that some consensus on the issue exists for the cartoonist to create the foil and hence the humor. These cartoons serve the additional function of communicating a tremendous amount of information in a very small space--a picture is still worth a thousand words.

After operating for a year, the scanning committee needs to review the clippings and articles collected and eliminate outdated materials. A staff person should have the responsibility of maintaining the files, opening and closing categories only with the approval of the whole committee. To keep the scanning from becoming outdated, the list of publications scanned should be reviewed and those resources that yielded little information in the preceding year dropped.

Operating an environmental scanning process requires a commitment of time and resources. It may be desirable for colleges to form consortia to share resources, following the example of the life insurance industry. Or they may develop cooperative arrangements with local corporations through which they receive scanning information, particularly projections of the region's economy and emerging technology. It is imperative, however, to establish an effective scanning system in this fast-changing world to identify as early as possible those emerging trends and issues that may so dramatically affect the organization's future.

Evaluating the Issues
The most elementary environmental scanning system can quickly identify more emerging issues than the largest institution can address. Even Connecticut General Life Insurance Company (now part of CIGNA) limits itself to addressing no more than its six most important issues. The issues must be limited to some manageable number to ensure the organization's effectiveness. This limiting process is achieved by a rigorous, objective evaluation of the issues. The goal is to create a process within which the issues compete with one another to determine their relative and/or expected importance. The less important issues are the focus of continued monitoring and analysis or are used in the forecasting or other stages. The traditional methods of research analysis and forecasting can be used at this stage. Frequently, evaluation of the future impacts of an emerging issue must rest on opinion, belief, and judgmental forecasts. (Several techniques for gathering judgmental opinion as they apply to forecasting are described in the next section.) The methods described in this section for evaluating issues can also be used in forecasting.

Probability-impact charts
One method of evaluating the issues, events, or trends identified during scanning involves addressing three separate questions: (1) What is the probability that the emerging issue or event will actually happen during some future period, usually the next decade? (2) Assuming it actually happens, what will its impact be on the future of the institution? (3) What is the ability of the institution to effectively anticipate, respond to, and manage the emerging issue, trend, or event? While these questions appear easy to answer, their use and interpretation in the evaluation process involve care and subtlety. The results for the first two questions are frequently plotted on a simple chart to produce a distribution of probability and impact. Many possible interpretations of the results can easily be displayed on such a chart.

The first question, that of the probability of the event's happening, may be easy to understand but difficult to estimate. If the scanning process has identified a particular event (that is, something that will happen or not happen in such a way that it can be verified in retrospect), then estimating the probability can be relatively straightforward. Suppose, for example, the United States replaces the current income tax system with a flat tax. This sharp, clearly defined, verifiable event is one about which the question being asked is clear (although opinions may differ). If, on the other hand, the scanning process identifies a broader issue that does not have this focus on a specific event, it may be extremely difficult to define when an issue has emerged and happened. In essence, the emergence of an issue is somewhat like news: It is the process of learning of something that makes it news. Thus, an issue emerges when it is recognized by a broader and broader spectrum of the society and in particular by those whom it will affect.

Collecting judgments on an event's probability, impact, and degree of control can be done by using simple questionnaires or interviews and quantifying participants' opinions using various scales (for example, probability can range from 0 to 100, impact from 0 to 10). When all participants have made their forecasts, the next step is to calculate a group average or median score. Quantification is useful because it is fast, and it tends to focus the attention of the group on the subject rather than the source of the estimates.

The next question concerns evaluating the impact of the emerging issue or event, based on the assumption that it actually occurs. Frequently a scale of 0 to 10 is used to provide a range for the answers to this question, where 0 is no impact, 5 is moderate impact, and 10 is catastrophic or severe impact. Usually plus or minus answers can be incorporated. This question and the first question (an event's probability) can be combined in a single chart that displays a probability-impact space with positive and negative impacts on the vertical axis and probability from 0 to 100 on the horizontal axis. This chart can be used as a questionnaire in which respondents record their answers to the probability and impact questions by placing a mark on the chart at the coordinates of their opinion about the probability and the impact of the issue. When all of the participants have expressed their opinions, all of the votes can be transferred to a single chart to show the group's opinion. A sample chart with a group's opinions about an X-event and an O-event is shown in figure 5. The X-event shows reasonably good consensus that the event will probably happen and that it will have a positive impact; therefore, calculating an average for the group's response is useful and credible. For the O-event, however, the group shows reasonable agreement that the event has a low probability of occurring but is split on its probable impact.
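The arithmetic of aggregating such votes is simple enough to sketch. The figures below are hypothetical, and the convention that a small spread in the impact votes justifies reporting an average is only one plausible rule:

```python
# Sketch: aggregate a group's probability-impact votes (hypothetical data).
# Each vote is (probability 0-100, impact -10..+10), as marked on the chart.
from statistics import mean, pstdev

def summarize(votes):
    """Return mean probability, mean impact, and spread of the impact votes."""
    probs = [p for p, _ in votes]
    impacts = [i for _, i in votes]
    return mean(probs), mean(impacts), pstdev(impacts)

# An "X-event": good consensus -- likely to happen, positive impact.
x_votes = [(75, 6), (80, 7), (70, 5), (85, 6)]
# An "O-event": agreement on low probability, but a split on impact.
o_votes = [(20, 8), (15, -7), (25, 9), (10, -8)]

xp, xi, xs = summarize(x_votes)
op, oi, os_ = summarize(o_votes)
# A small impact spread means an average is credible; a large one means
# the group is split and the average would mislead.
```

For the O-event the mean impact is near zero even though every individual vote is large, which is exactly the situation in which an average hides rather than summarizes the group's opinion.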

Source: Renfro and Morrison 1983a.

The X-event highlights one of the problems of this particular method: Respondents tend to provide answers either from different perspectives or with some inherent net impact where positive impacts cancel or offset negative impacts. In reality, an emerging issue or event often has both positive and negative impacts. Thus, the question should be asked in two parts: What are the positive impacts of this event, and what are its negative impacts? In rank ordering events, two ranks are prepared--one for positive and one for negative impacts--to permit the development of detailed policies, responses, and strategies based upon a recognition of the dual impacts of most emerging issues. Even with the recognition of an event's dual impact, consensus may be insufficient to identify the average group response. In this case, it may be useful to return the group's opinion to the individual participants for further discussion and reevaluation of the issue. This process of anonymous voting with structured feedback is known as Delphi. Anonymity can be extremely useful. In one private study, for example, all of the participants in the project publicly supported the need to adopt a particular policy for the organization. But when asked to evaluate the policy anonymously on the probability-impact chart, the respondents indicated that though they believed the policy was likely to be adopted, they did not expect it to have any significant impact. This discovery allowed the decision-makers to avoid the risks and costs of a new policy that was almost certain to fail. (The Delphi process is described further in the next section.)
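A single Delphi feedback cycle can be sketched in a few lines. The revision rule below, in which each participant moves halfway toward the group median, is an illustrative assumption rather than a standard formula; real panelists revise, or decline to revise, for their own reasons:

```python
# Sketch of one Delphi feedback cycle (illustrative revision rule, not a
# standard formula): panelists see the group median and move partway
# toward it before the next round.
from statistics import median

def delphi_round(estimates, pull=0.5):
    """Return revised estimates after feeding back the group median."""
    m = median(estimates)
    return [e + pull * (m - e) for e in estimates]

round1 = [10, 40, 50, 60, 90]        # initial anonymous probability votes
round2 = delphi_round(round1)        # revised toward the median (50)
spread1 = max(round1) - min(round1)  # 80
spread2 = max(round2) - min(round2)  # 40 -- opinions converge
```

In practice the feedback also includes the reasons given for extreme votes, which is what distinguishes Delphi from mere averaging.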

When repeated reevaluations and discussions do not produce sufficient consensus, it may be necessary to redefine the question to evaluate the impact on particular subcategories; subcategories of the institution, for example, would include the impact on personnel, on finances, on curricula, or on faculty. As with all of today's judgmental forecasting techniques, the purpose is to produce useful substantive information about the future and to arrive at a greater understanding of the context, setting, and framework of the evolving future (De Jouvenel 1967, p. 67).

The most popular method of interpreting the result of a probability-impact chart is to calculate the weighted positive and negative importance--that is, the product of the average probability and the average (positive and negative) importance--for each event. The events, issues, and trends are then ranked according to this weighted importance. Thus, the event ranked as number one is that with the highest combined probability and impact. The other events are listed in descending priority according to their weighted importance.
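A minimal sketch of this calculation, with hypothetical events and estimates:

```python
# Sketch: rank events by weighted importance = average probability x
# average impact (event names and numbers are hypothetical).
def weighted_importance(prob, impact):
    """prob on a 0-100 scale, impact on a 0-10 scale."""
    return (prob / 100.0) * impact

events = {
    "flat tax adopted":        (30, 8),
    "enrollment decline":      (70, 6),
    "new federal aid formula": (50, 4),
}

ranked = sorted(events,
                key=lambda e: weighted_importance(*events[e]),
                reverse=True)
# "enrollment decline" (0.70 * 6 = 4.2) outranks the lower-probability
# "flat tax adopted" (0.30 * 8 = 2.4) despite the latter's larger impact.
```

In a full application the positive and negative impacts would be weighted separately, producing the two ranked lists described above.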

Ranking the issues according to weights calculated in this manner implicitly assumes that the item identified in the scanning is indeed an emerging issue--that is, one that has an element of surprise. If all of the items identified in scanning are new and emerging and portend this element of surprise (that is, they are unknown to the educational community or at least to the community of the institution now and will remain that way until they emerge with surprise and the potential for upset), then the strategic planning process would do well to focus on those that are most likely to do so and to have the greatest impact. If, however, the issues are not surprises, then another system of evaluating and ranking the events and issues will be necessary. For example, if the entire community knows of a particular event and expects that it will not happen, then this low probability will produce a low priority. Yet, if the event did in fact occur, it would be of great importance. The surprise then is in the occurrence of the unexpected. The key in this case is the upset expectation. It may be just as much of an upset if an item that everyone expects to occur does not in fact happen. Thus, the evaluation of a probability-impact chart depends on another dimension--that is, one of expectation and awareness. The most important events might be those of high impact and high uncertainty, that is, those centered around the 50 percent probability line. These are the events that are as likely as not to occur and portend an element of surprise for some portion of the community when they happen or do not happen.
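One way to formalize this alternative weighting, offered here only as an illustration and not as a method from the literature, is to multiply impact by an uncertainty term that peaks at the 50 percent line and vanishes at 0 or 100 percent:

```python
# One possible formalization (an assumption, not a standard formula):
# weight events by impact times uncertainty, where uncertainty peaks at
# a 50 percent probability and vanishes at 0 or 100 percent.
def uncertainty_weight(prob, impact):
    p = prob / 100.0
    return impact * 4 * p * (1 - p)   # 4p(1-p) equals 1.0 at p = 0.5

# A near-certain event carries little surprise...
sure = uncertainty_weight(95, 8)     # small weight
# ...while a 50-50 event of the same impact carries the most.
toss = uncertainty_weight(50, 8)     # equals the full impact, 8.0
```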

Another aspect of emerging issues that is often evaluated is their timing, that is, when they are most likely to emerge. If an issue or event is evaluated in several rounds, consensus about the probability is often achieved in the early rounds. In the later rounds, timing can be substituted for probability by relabeling the horizontal axis from a 0-to-100 probability scale to a time scale running from now to 10 years from now. Then the question becomes, In which of the next 10 years is the event most likely to happen? If necessary, additional questions can explore lead time for an issue's occurrence, year of last effective response opportunity, lag time to impact, and so on. All of these factors have been used to evaluate the relative importance of emerging issues and events.

Emerging issues and events that are ranked according to their weighted importance have a built-in assumption that should usually be challenged; that is, the ranking assumes that the administrators and the institution will be equally effective in addressing all of the issues. This assumption is almost certainly false, and the error is seldom trivial. Suppose that the top-priority issue is one on which the institution could have little influence and then only at great cost but that a lower-level item is one on which the institution could have a significant impact with a small investment of resources. It would clearly be foolish to squander great resources for little advantage when great advantage could be obtained for a much smaller investment. Thus, in addition to the estimation of the weighted importance, the extent to which the event might respond to institutional actions of various costs and difficulty must be evaluated. The cost-effectiveness ratio measures the relative efficiency of alternative institutional actions--actions that are expressions of strategy. This consideration may seem minor when the differences in ratios are small, but if the emerging issues are competing for the same resources, the cost-effectiveness ratios will be essential in guiding the effective use of the institution's limited resources. The top-ranked events may also be important to major administrative functions other than strategic planning. Many corporations, trade associations, and not-for-profit institutions have formed special "issues management committees" to support the authorized leadership of the institution in managing all of the resources they might have available to address an emerging issue. While such systems may be more formal than is needed at most institutions of higher education, they may serve as a useful model.
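The comparison described above can be sketched as follows; the actions and figures are invented for illustration:

```python
# Sketch: compare candidate institutional responses by a simple
# effectiveness-to-cost ratio (actions and numbers are hypothetical).
actions = [
    # (action, expected reduction in issue impact, cost in $000s)
    ("lobby against top-ranked issue",   1.0, 500),
    ("retrain staff for lower issue",    4.0, 100),
]

def cost_effectiveness(benefit, cost):
    return benefit / cost

best = max(actions, key=lambda a: cost_effectiveness(a[1], a[2]))
# The cheap, high-leverage response to the lower-ranked issue wins:
# 4.0/100 = 0.04 versus 1.0/500 = 0.002.
```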

Impact networks
Another simple evaluation method--the impact network--was derived from the concept of "relevance trees," which are essentially a graphical presentation of an outline of a complete analysis of an issue. Impact networks are a brainstorming technique designed to identify potential impacts of key events on future developments. An impact network is generated by identifying the possible effects of a given specific event. Such an event might be the abolition of tenure, the reduction of federally sponsored student financial aid, or the requirement that all professors be certified to teach in colleges and universities. When the issue has been selected and sharpened into a brief, clear statement, the group is ready to begin to form the impact network. The procedure is quite simple. Any impact that is likely to result from the event, whether negative or positive, is an acceptable impact. The question is one of possibility, not probability. With the initial event written in the middle of the page, each first-order impact is linked to the initial event by a single line (see figure 6).


When five or six first-order impacts have been identified or when the space around the initial event is occupied, the process is repeated for each first-order impact. Again, the task is to determine the possible impacts if this event were to occur. The second-order impacts are linked to their first-order impacts by two lines. These steps are repeated for third- and fourth-order impacts, or as far as the group would like to go. Typically, third- and fourth-order impacts are sufficient to explore all of the significant impacts of the initial event. Usually a group identifies several feedback loops; for example, a fourth-order impact might increase or decrease a third- or a second-order impact. The value of impact networks lies in their simplicity and in their potential to identify a wide range of impacts very quickly. If more impacts or higher-order impacts need to be considered, the process is repeated.

A simple example of the use of an impact network illustrates the impact of the elimination of tenure in higher education (Wagschall 1983). As shown in figure 7, the immediate or first-order consequences of the event were perceived to be (1) reduced personnel costs, (2) more frequent turn-over of faculty, and (3) an improvement in the academic quality of the faculty. Each consequence then becomes the center of an impact network, and the search for impacts continues. For example, the improvement of the faculty's academic quality causes improved learning experiences, students' increased satisfaction with their education, and the accomplishment of more research. The reduction in personnel costs produces stronger faculty unions, more funds for non-personnel items, and decreased costs per student. Increased faculty turnover produces a decrease in average faculty salary, an increase in overall quality of the faculty, and a decrease in the average age of the faculty. Each consequence in turn becomes the center of the third-order impact network, and so on. A completed impact network is often very revealing. In one sense, it serves as a Rorschach test of the authoring group or the organization because the members of the group are most likely to identify impacts highlighting areas of concern. In another sense, by trying to specify the range of second-order impacts, new insights into the total impact of a potential development can be identified. For example, while an event may stimulate a majority of small, positive, first-order impacts, these first-order impacts may stimulate a wide range of predominantly negative second-order impacts that in total would substantially reduce if not eliminate the positive value of the first-order impacts. Feedback loops may promote the growth of an impact that would far outweigh the original estimate of its importance.

Source: Wagschall 1983.
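Because an impact network is essentially a tree (with any feedback loops noted separately), it can be recorded as nested dictionaries. The sketch below encodes the tenure example; the wording of the entries is paraphrased from the text:

```python
# Sketch: an impact network as nested dictionaries, using the tenure
# example from the text; each key's children are its next-order impacts.
network = {
    "tenure eliminated": {                      # the initial event
        "reduced personnel costs": {            # first-order impacts...
            "stronger faculty unions": {},      # ...and second-order ones
            "more funds for non-personnel items": {},
            "decreased cost per student": {},
        },
        "more frequent faculty turnover": {
            "lower average faculty salary": {},
            "higher overall faculty quality": {},
            "lower average faculty age": {},
        },
        "improved academic quality of faculty": {
            "improved learning experiences": {},
            "increased student satisfaction": {},
            "more research accomplished": {},
        },
    },
}

def count_impacts(node):
    """Count all impacts, of any order, below a node."""
    return sum(1 + count_impacts(child) for child in node.values())
```

Recording the network this way makes it easy to extend any branch to third- and fourth-order impacts simply by replacing an empty dictionary with further entries.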

Scanning typically leads to the identification of more issues than the organization can reasonably expect to explore in depth, given its limitations of time, money, and people. Simple evaluation techniques like those described in the previous section can help reduce the set of candidates to manageable size. The surviving issues can then be subjected to detailed forecasting, analysis, and policy evaluation. Many methods have been developed for forecasting. This section surveys the range of methods, beginning with several varieties of the simplest, most popular type of forecasting, individual judgmental forecasting. It then briefly describes techniques of mathematical trend extrapolation and group forecasting, cross-impact models, and scenarios.

Implicit forecasting
According to Yogi Berra, "You can observe a lot just by watching." And much of what can be observed is the future. Despite the constant flood of assertions about the accelerating pace of change, despite endless warnings about impermanence and future shock, despite the vigor of the minor industry that produces one book or report after another that begins by telling us that we are on the verge of a societal transformation every bit as profound as the industrial revolution (all of which may actually be true), the present still foreshadows the future. If only we knew the past and present well enough, far fewer "surprises" would catch us unaware in the future. It pays to watch, and it especially pays to watch the largest systems-government, education, transportation, primary metals, finance, health care, energy--for they usually change very slowly and only after protracted debate and consensus building.

No one should have any difficulty with the notion that many of the developments causing turmoil and confusion in each of these systems today were being widely discussed--even passionately advocated or resisted--at least 10 or more years ago. Five or 10 years from now no one should find it hard to look back to today and discover that the same was true.

Administrators in large institutions know that very long lead times are often required before major decisions can be initiated and fully implemented. They also know that the environment can change in peculiar, sometimes unpredictable ways while these decisions are coursing through the system. The result can be that by the time the decisions should have been fully implemented, the world will have changed so much that they must be abandoned or radically altered. To the extent, however, that the original expectations were shattered by forces arising from large systems, why should administrators be surprised by the outcome? They may be exceedingly disappointed that they have persevered in a losing battle, but they should not be surprised.

Real surprises usually come from failing to keep track of small-scale developments in the external environment, not from excluding small-scale developments within one's own system. By systematically following these external developments it is possible not only to anticipate the directions and potential impacts of the slower, more pronounced, more profoundly influential changes but also to obtain the early warning needed for timely adjustments of strategy. Emerging patterns of events, the ebb and flow of particular sets of issues that can be revealed by close monitoring, provide a basis for forecasts relevant to policy. These forecasts are intuitive, to be sure, and perhaps seen only dimly in outline, but they are nonetheless the best forecasts available.

Even when the output from scanning consists of forecasts, we must still make our own judgments about the future, because we must decide what is relevant and we must make judgments as to whether we agree with the given forecasts. The same process is at play when we read newspapers, journals, reports, and government documents or listen to a broadcast. We constantly make personal forecasts on the basis of sparse and fragmented historical data in an attempt to distill the future that may be implied.

This process of trying to infer the future by mentally extending current or historical data is sometimes called "implicit forecasting." Such forecasting is obviously as useful as it is unavoidable when it comes to obtaining an appreciation of the broad outlines of possible futures. By itself, however, implicit forecasting is not sufficient when it comes to making today's decisions about our own most important long-range issues--the direction of a career, the development of a profession, the survival of an institution, department, or program, for example. In such cases, the need is also for methods that deal much more formally, systematically, and comprehensively with the nature and likely dynamics of future events, trends, and policy choices.

It is easy to see why our implicit forecasts of the general context are progressively less trustworthy as the questions at stake become more important. These forecasts are entirely subjective, they are no doubt idiosyncratic, they are often made on topics we are unqualified to assess because of a lack of relevant experience or knowledge, they rest very largely on unspoken arguments from historical precedent or analogy, and they are haphazard in that they are made primarily in response to information we receive that is itself usually developed haphazardly or opportunistically.

As futures research has developed since the mid-1960s, much work has gone into the invention and application of techniques intended to overcome these and other limitations of widely practiced methods of forecasting. In general, the newer methods are alike in that they tend to deal as explicitly and systematically as possible with the various elements of alternative futures, the aim being to provide the wherewithal for users to retrace the steps taken. The following paragraphs highlight some of these methods.

Genius forecasting
Apart from implicit forecasting, the most common approach to forecasting throughout history has been for a single individual simply to make explicit guesstimates about the future. In their weaker moments, many bright and otherwise well-informed people--including even futures researchers--are sometimes cajoled into offering such guesstimates, which typically take the form of one-line forecasts ("cancer will be cured," "no ship will ever be sunk by a bomb," or "the end is near"). But if they are persuaded to reflect on the future in a wide-ranging way, to try to articulate the underlying logic of affairs and its likely evolution over time, to reason through the obvious alternatives and imagine the not so obvious ones, when in short they offer a careful but creative image of the future in its richness and complexity, then a much different process is involved. It has no common name, but in futures research it is often lightly called "genius forecasting." It is a powerful and highly cost-effective way to obtain forecasts if the "genius" is indeed thoughtful, imaginative, and well read in many areas.

The disadvantages of genius forecasting are clear enough to require no enumeration here. "In the end, genius forecasting depends on more than the genius of the forecaster; it depends on luck and insight. There may be many geniuses whose forecasts are made with full measure of both, but it is nearly impossible to recognize them a priori, and this, of course, is the weakness of the method" (Gordon 1972, p. 167).

If used properly, however, the strengths of the method usually outweigh its weaknesses. The probability that the integrated forecast produced by the "genius" will prove correct in every particular is virtually zero: Time will show that the forecast was oversimplified, led astray by biases, and ignorant of critical possibilities. Yet the genius has the ability to identify unprecedented future events, to imagine current policies that might be abandoned, to assess the interplay of trends and future events in a far more meaningful way than any existing model can, to trace out the significance of this interplay, to identify opportunities for action that no one else might ever see, and to explain assumptions and reasoning. Although the genius forecast will be both "wrong" and incomplete, it will nevertheless have provided something very useful: an intelligent base case.

Occasionally, genius forecasts can serve as the only forecasts in a study. This approach makes excellent sense in studies being accomplished under severely constrained time and resources. Increasingly in futures research, however, studies are begun by commissioning one or more genius forecasts, which take the form of essays or scenarios of one sort or another. With them in hand, the investigators explore them carefully for omissions and inconsistencies, and then the forecasts are carefully pulled apart to identify the specific trends, events, and policies that appear to warrant detailed evaluation; that is, the most uncertain, problematical, intractable, and potentially valuable statements about the future can be selected. Being able to launch a more sophisticated forecasting effort from such a basis is much better than having random thoughts and blank paper.

Extrapolation of mathematical trends
Most forecasters and some practitioners of futures research use techniques of mathematical trend extrapolation that are well understood, rest on a fairly adequate theoretical foundation, convey the impression of being scientific and objective, and in skilled hands are usually quick and inexpensive to use. One of the most commonly used techniques is regression analysis, one purpose of which is to estimate the predicted values of a trend (the dependent variable) from observed values of other trends (the independent variables). Hierarchical regression models are sometimes referred to as "causal" models if an observed statistical relationship exists between the independent and dependent variables, if the independent variables occur before the dependent variable, and if one can develop a reasonable explanation for the causal relationship. A forecast of the independent variables makes possible a forecast of the dependent ones to which they are statistically linked, whether the case is simple or complex. In either case, however, the purpose behind causal regression models is always to explain complex dynamic trends (for example, college and university enrollment patterns) in terms of elementary stable trends (for example, demographics or government spending).
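A minimal example of the simplest case, one dependent and one independent trend fit by ordinary least squares, may make the mechanics concrete; the enrollment and population series are invented for illustration:

```python
# Sketch: a one-variable "causal" regression fit by ordinary least
# squares; the enrollment and population series are invented.
def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

population = [4.0, 4.2, 4.1, 4.4, 4.6]   # millions of 18-year-olds
enrollment = [2.1, 2.2, 2.15, 2.3, 2.4]  # millions enrolled

a, b = fit_line(population, enrollment)
# A forecast of the independent variable yields a forecast of the
# dependent one: if the population reaches 4.8 million...
forecast = a * 4.8 + b
```

The causal claim, of course, rests not on the arithmetic but on the analyst's reasons for believing that the independent trend drives the dependent one.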

When cause is not an essential factor, trends are often forecast using time as the independent variable. Much of the "trend extrapolation" in futures research takes this form. Common methods of time-series forecasting being used today are the smoothing, decomposition, and autoregression/moving average methods. Smoothing methods are used to eliminate randomness from a data series to identify an underlying pattern, if one exists, but they make no attempt to identify individual components of the underlying pattern. Decomposition methods can be used to identify those components--typically, the trend, the cycle, and the seasonal factors--which are then predicted individually. The recombination of these predicted patterns is the final forecast of the series. Like smoothing methods, decomposition methods lack a fully developed theoretical basis, but they are being used today because of their simplicity and short-term accuracy. Autoregression is essentially the same as the classical multivariate regression, the only difference being that the independent (predictor) variables are simply the time-lagged values of the dependent (predicted) variable. Because time-lagged values tend to be highly correlated, coupling autoregression with the moving average method produces a very general class of time-series models called autoregression/moving average (ARMA) models.
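A centered moving average is the simplest of the smoothing methods mentioned above; the sketch below uses an invented series:

```python
# Sketch: a centered moving average to smooth randomness out of a short
# (hypothetical) series, revealing the underlying pattern if one exists.
def moving_average(series, window=3):
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

raw      = [100, 104, 98, 106, 102, 110, 108]
smoothed = moving_average(raw)   # the jagged series flattens out
```

Note that the smoothed series is shorter than the raw one (the endpoints have no centered window), and that smoothing, as the text notes, makes no attempt to separate trend, cycle, and seasonal components.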

All regression and time-series methods rest on the assumption that the historical data can, by themselves, be used to forecast the future of a series. In other words, they assume that the future of a trend is exclusively a function of its past. This assumption, however, will always prove false eventually because of the influence of forces not measured by the time series itself. That is to say, unprecedented sorts of events always occur and affect the series, which is precisely why the historical data are so irregular.

These difficulties have not deterred many traditional analysts and long-range forecasters from using such methods and thereby generating dubious advice for their sponsors. Within futures research, however, these techniques--when used well--are applied in a very distinctive way. The objective is not to foretell the future, which is obviously impossible, but to provide purely extrapolative base-line projections to use as a point of reference when obtaining projections of the same trends by more appropriate methods. What would the world look like if past and current forces for change were allowed to play themselves out? What if nothing novel ever happened again? The only value of these mathematical forecasting techniques in futures research is to provide answers to these remarkably speculative questions. But once they are answered, a reference will have been established for getting on with more serious forecasting.

For example, in a study by Boucher and Neufeld (1981), a set of 111 trends was forecast 20 years hence both mathematically (using an ARMA technique) and judgmentally (using the Delphi technique). Analysis of the results showed that the average difference between the two sets of forecasts was over 15 percent. By the first forecasted year (which was less than a year from the date of the completion of the Delphi), the divergence already averaged more than 10 percent; by the 20th year, it had reached 20 percent. This result is interesting because even experienced managers usually accept mathematical forecasts uncritically: They like their apparent scientific objectivity, they have been trained in school to accept their plausibility, and acceptance has been reinforced by an endless stream of such projections from government, academia, and other organizations. Seeing judgmental and mathematical results side by side can thus be most instructive. Moreover, as some futures researchers believe, if the difference between such a pair of projections is 10 percent or more, it is probably worth examining in depth.
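The divergence measure itself is straightforward; the figures below are invented and are not Boucher and Neufeld's data:

```python
# Sketch: average percent divergence between a mathematical and a
# judgmental forecast of the same trends (values are hypothetical).
def avg_percent_diff(math_fc, judg_fc):
    """Average absolute percent difference, relative to the mathematical forecast."""
    diffs = [abs(m - j) / m * 100 for m, j in zip(math_fc, judg_fc)]
    return sum(diffs) / len(diffs)

arma   = [100.0, 120.0, 150.0]   # mathematical (ARMA-style) forecasts
delphi = [90.0, 100.0, 130.0]    # judgmental (Delphi-style) forecasts
gap = avg_percent_diff(arma, delphi)
# A gap of 10 percent or more is the rule-of-thumb signal to examine
# the trend in depth.
```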

The Delphi technique
Given the limitations of personal forecasting (implicit or genius) and of mathematical projections, it is now common--and usually wise--to rely on systematic methods for using a group of persons to prepare the forecasts and assessments needed in strategic planning. Experience suggests, however, that at least five conditions must be present before the decision to use a group should be made: (1) No "known" or "right" answers exist or can be had (that is, acceptable forecasts do not exist or are not available); (2) equally reputable persons disagree about the nature of the problem, the relative importance of various issues, and the probable future; (3) the questions to be investigated cross disciplinary, political, or jurisdictional lines, and no one individual is considered competent enough to cope with so many subjects; (4) cross-fertilization of ideas seems worthwhile and possible; and (5) a credible method exists for defining group consensus and evaluating group performance.

The fifth condition is especially important--and often slighted. As a matter of fact, the emphasis one places on this consideration often determines the method of group forecasting one chooses. If, for example, the person seeking the forecasts will be content with an oral summary of the results (or perhaps a memo for the record), then a conventional face-to-face meeting of some sort may be the appropriate method. If, at the other extreme, it is known that the intended user will insist on having a detailed comprehensive forecast and that the persons whose views should be solicited would never speak openly or calmly to each other at a face-to-face meeting, then a different scheme for eliciting, integrating, and reporting the forecasts would surely be required.

Considerations like these were responsible in large part for the invention of what is no doubt the most famous and popular of all forecasting methods associated with futures research: the Delphi technique. Delphi was designed to obtain consensus forecasts from a group of "experts" on the assumption that many heads are indeed often better than one, an assumption supported by the argument that a group estimate is at least as reliable as that of a randomly chosen expert (Dalkey 1969). But Delphi was developed to deal especially with the situation in which risks were inherent in bringing these experts together for a face-to-face meeting--for example, possible reluctance of some participants to revise previously expressed judgments, possible domination of the meeting by a powerful individual or clique, possible bandwagon effects on some issues, and similar problems of group psychology. The Delphi method was intended to overcome or minimize such obstacles to effective collaborative forecasting by four simple procedural rules, the first of which is desirable, the last three of which are mandatory.

First, no participant is told the identity of the other members of the group, which is easily accomplished if, as is common, the forecasts are obtained by means of questionnaires or individual interviews. When the Delphi is conducted in a workshop setting--one of the more productive ways to proceed in many cases--this rule cannot be honored, of course.

Second, no single opinion, forecast, or other key input is attributed to the individual who provided it or to anyone else. Delphi questionnaires, interviews, and computer conferences all easily provide this protection. In the workshop setting, it is more difficult to ensure, but it can usually be obtained by using secret ballots or various electronic machines that permit anonymous voting with immediate display of the distribution of answers from the group as a whole.

Third, the results from the initial round of forecasting must be collated and summarized by an intermediary (the experimenter), who feeds these data back to all participants and invites each to rethink his or her original answers in light of the responses from the group as a whole. If, for example, the participants have individually estimated an event's probability by some future year, the intermediary might compute the mean or median response, the interquartile range or upper and lower envelopes of the estimates, the standard deviation, and so forth, and pass these data back to the panelists for their consideration in making a new estimate. If the panelists provided qualitative information as well--for example, reasons for estimating the probabilities as they did or judgments as to the consequences of the event if it were actually to occur--the role of the intermediary would be to edit these statements, eliminate the redundant ones, and arrange them in some reasonable order before returning them for the group's consideration.
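The intermediary's collation step can be sketched in a few lines. The example below, with hypothetical first-round estimates (the function name and panel numbers are ours, for illustration only), computes the statistics named above for one event's probability estimates; a decline in the interquartile range from round to round is the usual measure of sharpening consensus:

```python
# A minimal sketch of the intermediary's collation of one Delphi round:
# summarizing the panel's probability estimates for a single event before
# feeding the statistics back for re-estimation. Numbers are hypothetical.
import statistics

def round_feedback(estimates):
    """Summary statistics the intermediary might return to the panel."""
    ordered = sorted(estimates)
    q1, q2, q3 = statistics.quantiles(ordered, n=4)  # quartile cut points
    return {
        "mean": statistics.mean(ordered),
        "median": q2,
        "interquartile_range": (q1, q3),
        "std_dev": statistics.stdev(ordered),
        "envelope": (ordered[0], ordered[-1]),  # lower and upper extremes
    }

# Hypothetical first-round probability estimates (percent) from 9 panelists
round1 = [20, 35, 40, 45, 50, 55, 60, 70, 90]
feedback = round_feedback(round1)
```

The same summary would be recomputed after each subsequent round so that panelists can see how the group's distribution of answers is shifting.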

Fourth, the process of eliciting judgments and estimates (deriving the group response, feeding it back, and asking for re-estimates in light of the results obtained so far) should be continued until either of two things happens: The consensus within the group is close enough for practical purposes, or the reasons why such a consensus cannot be achieved have been documented.

In sum, the defining characteristics of Delphi are anonymity of the estimates, controlled feedback, and iteration. The promise of Delphi was that if these characteristics were preserved, consensus within the panel would sharpen and the opinions or forecasts derived by the process would be closer to the "true" answer than forecasts derived by other judgmental approaches.

Thousands of Delphi studies of varying quality have been conducted throughout the world since 1964, when the first major report on the technique was published (Gordon and Helmer 1964). The subjects forecast have ranged from the future of absenteeism in the work force to the future of war and along the way have included topics as diverse as prospective educational technologies, the likely incidence of breast cancer, the future of the rubber industry, the design of an ideal telephone switchboard, and the future of Delphi itself. Some of these studies proved to be extremely helpful in strategic planning; a few virtually decided the future of the sponsoring organization. But most had little or no effect, apart from providing general background information or satisfying a momentary curiosity about this novel method of forecasting.

Part of the problem in many cases is that practitioners have had false hopes. The literature conveys the impression that Delphi is so powerful and simple that anyone can "run one" on any subject. What the literature often fails to mention is that no established conventions yet exist for any aspect of study design, execution, analysis, or reporting. Intermediaries, who are the key to useful and responsible results, are very much on their own. As novices they should examine studies by others, but because these studies are all different, it may be very difficult to find or recognize good models. Even with an excellent model in hand, the newcomer cannot fully appreciate what it means to use it. Only through practice can one discover the significance of four key facts about Delphi: (1) The amount of information and data garnered through the process can and will explode from round to round; (2) good questions are difficult to devise, and the better the design of the questions asked, the more likely it is that good participants will resign from the panel out of what has been called the BIF factor--boredom, irritation, and fatigue--because they will be asked to answer the same challenging questions again and again for each trend or event in the set they are forecasting; (3) the likelihood of such attrition within the panel means not that the questions should be cheapened but that large panels must be established so that each participant will have fewer questions to answer, which is very time consuming; (4) Delphi itself does not include procedures for synthesizing the entire set of specific forecasts and supporting arguments it produces, so that when the study is "completed," the work has usually just begun. And if, as one hopes, the intermediary and the panelists take the process and the questions seriously, the probability is high that the schedule will slip, the budget will be overrun, and so on and on.

Another reason that success with Delphi is hard to achieve is that, despite 20 years of serious applications, very little is known about how and why the consensus-building process in Delphi works or what it actually produces. No wide-ranging research on the fundamentals of the method has been done for more than a decade. According to Olaf Helmer, one of the inventors of Delphi, "Delphi still lacks a completely sound theoretical basis.... Delphi experience derives almost wholly either from studies carried out without proper experimental controls or from controlled experiments in which students are used as surrogate experts" (Linstone and Turoff 1975, p. v). The same is true today. The practical implication is that most of what is "known" about Delphi consists of rules of thumb based on the experience of individual practitioners.

For example, a goal of Delphi is to facilitate a sharpening of consensus forecasts from round to round of interrogation. And, in fact, there probably has yet to be a Delphi study in which the consensus among the participating experts did not actually grow closer on almost all of the estimates requested (as measured by, say, a decline in the size of the interquartile range of estimates). Yet the limited empirical evidence available on this phenomenon is replete with suggestions that increased consensus is produced only in slight part by the panelists' deliberations on the group feedback from the earlier round. The greater part of the shift seems to come from two other causes: (1) The panelists simply reread the questions and understood them better, and (2) the panelists are biased by the group's response in the preceding round of interrogation (that is, they allow themselves to drift toward the mean or median answer). The difficulty posed by this situation--which is far from atypical of the problems presented by Delphi--is that no way has yet been found to sort out the effects of these different influences on the final forecast. Accordingly, the investigator must be extremely careful when interpreting the results. Claims that Delphi is "working" are always suspect.

On the positive side, though again as a strictly practical, non-theoretical matter, Delphi appears to have a number of important advantages as a group evaluation or forecasting technique. It is not difficult to explain the essence of the method to potential participants or to one's superiors. It is quite likely that some types of forecasts could not be obtained from a group without the guarantee of anonymity and the opportunity for second thoughts in later rounds (certainly true when hostile stake holders are jointly evaluating the implications of policy actions that might affect them differently). Areas of agreement and disagreement within the panel can be readily identified, thanks to the straightforward presentation of data. Perhaps most important, every participant's opinion can be heard on the forecasts in every round, and every participant has the opportunity to comment on every qualitative argument or assessment. For this reason, it becomes much easier to determine the uncertainties that responsible persons have about the problem under study. If the panelists are chosen carefully, a full spectrum of hopes, fears, and other expectations can be defined.

When successes with Delphi occur, it would seem that the explanation is not that the panel converged from round to round (which, as indicated earlier, almost always happens). Nor is it that the mean or median response moved toward the "true" answer (which is something that no one could know at the time). Rather, it is that the investigation was conducted professionally and that the results did in fact have the effect of increasing the user's understanding of the uncertainties surrounding the problem, the range of strategic options available in light of those uncertainties, and the need to monitor closely the possible, real-world consequences of options that may actually be implemented.

Delphi has been used in many policy studies in higher education. In one case, it was used to determine priorities for a program in family studies (Young 1978). Nash (1978), after reviewing its use in a number of studies concerning educational goals and objectives, curriculum and campus planning, and effectiveness and cost-benefit measures, concluded that the Delphi is a convenient methodology appropriate for a non-research-oriented population. The technique has also been used in a number of planning studies (Judd 1972). For example, it was used as a tool for getting planning data to meet the needs of adult part-time students in North Carolina (Fendt 1978).

In general, the more successful practitioners of Delphi appear to have tried to follow the 15 steps presented in figure 8. These "rules" may appear platitudinous, and virtually no one has ever followed all of them in a single Delphi. Yet the intrinsic quality and practical value of Delphi results are certain to be a function of the degree to which they are followed.

  1. Understand Delphi (for example, that at least two rounds of interrogation are necessary).
  2. Specify the subject and the objectives. (Don't study "the future." Study alternative futures of X--and do so with clear purpose.)
  3. Specify whether the forecasting mode to be adopted is exploratory or normative--or some clear combination of both.
  4. Specify all desired products, level of effort, responsibilities, and schedule.
  5. Specify the uses to which the results will be put, if they are actually achieved.
  6. Exploit the methodology and substantive results developed in earlier Delphi studies.
  7. Design the study so that it includes only judgmental questions (except in extreme cases), and see to it that these questions are precisely phrased and cover all topics of interest as specifically as possible.
  8. Design all rounds of the study before administering the first round. (Don't forget that this step includes the design of forms or software for collating the responses.)
  9. Design the survey instrument so that the questions are explained clearly and simply, can be answered as painlessly as possible, and can be answered responsibly.
  10. Include appropriate historical data and a set of assumptions about the future in the survey instrument so that the respondents will all be dealing with future developments in the context of the same explicit past and "given" future.
  11. Assemble a group of respondents capable of answering the questions creatively, in depth, and on schedule, and large enough to ensure that all important points of view are represented.
  12. Collate the responses wisely, consistently, and promptly.
  13. Analyze the data wisely, consistently, and promptly.
  14. Probe the methodology and the substantive results constantly during and after the effort to identify problems and important needed improvements.
  15. Synthesize and present the final results to management intelligently.

Other group techniques
Delphi is generally considered one of the better techniques of pooling the insight, experience, imagination, and judgment of those who are knowledgeable in strategic matters and who have an obligation to deal with them responsibly. Many other ways, however, can be used to exploit the power of groups in forecasting and futures research: brainstorming, gaming, synectics, the nominal group technique, focus groups, and others, including the Quick Environmental Scanning Technique (QUEST), the Focused Planning Effort (FPE), and the Delphi Decision Support System (D2S2). The last three are discussed in this section because they are currently used in futures research.

QUEST (Nanus 1982) was developed to quickly and inexpensively provide the grist for strategic planning: forecasts of events and trends, an indication of the interrelationships among them and hence the opportunities for policy intervention, and scenarios that synthesize these results into coherent alternative futures. It is a face-to-face technique, accomplished through two day-long meetings spaced about a month apart. The procedure produces a comprehensive analysis of the external environment and an assessment of an organization's strategic options.

A QUEST exercise usually begins with the recognition of a potentially critical strategic problem. The process requires a moderator, who may be an outside consultant, to facilitate posing questions that challenge obsolete management positions and to maintain an objective perspective on ideas generated during the activity. The process also requires a project coordinator, who must be an "insider," to facilitate translating the results of QUEST exercises into scenarios that address strategic questions embedded in the organizational culture.

QUEST involves four steps. The first step, preparation, requires defining the strategic issue to be analyzed, selecting participants (12 to 15), developing an information notebook elaborating the issue, and selecting distraction-free workshop sites.

The second step is to conduct the first planning session. It is important that at least one day be scheduled to provide sufficient time to discuss the strategic environment in the broadest possible terms. This discussion includes identifying the organization's strategic mission, the objectives reflected in this mission, key stake holders, priorities, and critical environmental events and trends that may have significant impacts on the organization. Much of this time will be spent evaluating the magnitude and likelihood of these impacts and their cross-impacts on each other and on the organization's strategic posture. Participants are encouraged to focus on strategic changes but not on the strategic implications of these changes. This constraint is imposed to delay evaluations and responses until a complete slate of alternatives is developed.

The third step is to summarize the results of the first planning session in two parts: (1) a statement of the organization's strategic position, mission, objectives, stake holders, and so on, and (2) a statement of alternative scenarios illustrating possible external environments facing the organization over the strategic period. It is important that the report be attributed to the group, not sections to particular individuals. Correspondingly, it is important that the report reflect that ideas were considered on the basis of merit, not who advanced them. The report should be distributed a few days before the second group meeting, the final step.

The second meeting focuses on the report and the strategic options facing the organization. These options are evaluated for their responsiveness to the changing external environment and for their consistency with internal strengths and weaknesses. While this process will not produce an immediate change in strategy, it should result in directions to evaluate the most important options in greater depth. Consequently, a QUEST exercise ends with specific assignments vis-à-vis the general nature of the inquiry needed to evaluate each option, including a completion date.

The Focused Planning Effort was developed in 1971 (Boucher 1972). Like QUEST, it is an unusual kind of face-to-face meeting that draws systematically on the judgment and imagination of line and staff managers to define future threats and opportunities and find practical actions for dealing with them. Because the process is perfectly general--that is, it can be used to address any complex judgmental questions on future mission or strategic policy--the range of applications has been widely varied. In recent years, topics have ranged from the potential merit of technologies to improve agricultural yields, to alternative futures for the data communications industry, to the assessment of human resources in the future.

The FPE has the following features, which in concert make it a distinctive approach to strategic forecasting and policy assessment:

  • All topics relevant to the subject chosen for investigation are explored, one by one and in context with each other. An FPE seeks to be comprehensive. Typically, the participants define the organization's mission, objectives, and goals, and then identify, forecast, and evaluate several issues: (1) the elements of their business environment, including relevant prospective social, economic, technological, and political developments; (2) the alternatives open to the organization; (3) criteria for deciding among the alternatives vis-à-vis the organization's mission, objectives, and goals; (4) the degree to which each important alternative satisfies the criteria; and (5) the dynamic cross-support interrelationships among the preferred alternatives.
  • No idea is off-limits. As in brainstorming, the first objective is to expand the group's sense of the options available.
  • All participants have a full and equal opportunity to influence the outcome at each step. In particular, each participant evaluates every important issue raised after it has been examined in face-to-face discussion by the group.
  • These individual evaluations become the group's response, but the range of opinion (that is, the uncertainty or lack of consensus) is captured and serves as a basis for clarifying differences and sharpening the group's final judgment.
  • Thus, the participants typically respond to the opinion of the group, not to the opinions of individuals within the group. In this way, team building is enhanced and personal confrontations avoided.
  • The FPE is highly systematic, thanks to the use of an interlocking combination of methods that have proven successful in structuring and eliciting judgment. Unlike QUEST, which uses a fixed combination of techniques, the mix used in an FPE varies depending on the subject, the number of participants, and the time available. It can include relevance tree analysis, brainstorming, the Delphi technique, subjective trend extrapolation, polling, operational gaming, cross-support and cross-impact analysis, and scenario development. And while such techniques are used in the FPE, they are not given a particular prominence; they are treated as means, not ends.
  • All judgments on important issues are quantified through individual votes, usually taken on private ballots. This quantification permits objective comparisons of the subjective inputs. Anonymous voting enables everyone to speak his mind.
  • The judgment of the group as a whole is available to each participant at the completion of every step of the FPE. These results then become the basis of the next step, thus helping to ensure that each part of the problem being addressed is dealt with in a context.
  • The major results of the FPE are available at the end of the activity, in writing, and each participant has a copy of the results to take with him.

The FPE process has three parts. The first--pre-meeting design--is the key. Each FPE requires its own design, and the process does not involve a pat formula. The design phase usually requires 10 to 15 days, spread over a few calendar weeks. During this phase, the problem is structured, needed historical data are collected, the FPE logic is defined in detail, and first-cut answers to the more important questions are obtained through interviews or a questionnaire or both. These preliminary answers serve as a check on the FPE design and as a basis for the discussion that will occur during the FPE itself. Ordinarily, this information is gathered from a larger group of people than the one that will participate in the FPE.

The final design is usually formulated in two ways: first, as an agenda, which is distributed to the participants, and, second, as a set of written "modules," each describing a specific task to be completed in the FPE, its purpose, the methods to be used, the anticipated outcomes, and the time allotted for each step in the task. These modules serve as the basis of the sign-off in the final pre-FPE review.

The second part of the process is the FPE itself. The number of participants can range from as few as seven or eight to as many as 20 to 25. The FPE normally requires two to three full days of intensive work, though FPEs have run anywhere from one to 12 days. The period can be consecutive or be spread out in four-hour blocks over a schedule that is convenient to all participants. Typically, the FPE is preceded by a luncheon or dinner meeting and a brief roundtable discussion, which serves to break the ice and helps to clarify expectations about the work to follow.

The FPE can be manual or computer-assisted. D2S2™, developed by the Policy Analysis Company, uses a standard floppy disk and personal computer, usually connected to a large-screen monitor or projector (Renfro 1985). The larger the group of participants, the greater the desirability of using such computer assistance. Not only is the collation of individual votes greatly speeded; in addition, the software developed by some consulting organizations that provide the FPE service (for example, the ICS Group and the Policy Analysis Company) can reveal the basis of differences among subgroups of the participants and draw certain inferences that are implied by the data but not readily apparent on the basis of the estimates themselves. In D2S2™, these capabilities include confidence weighting, vote sharing, and vote assignment.

Although the design of the FPE is quite detailed, it is never rigid. On-the-spot changes are always required during the FPE in light of the flow of the group's discussion and the discoveries it makes. But the design makes it possible to know the opportunity costs of these adjustments and hence when it is appropriate to rein in the group and return to the agenda.

The final part of the process is post-meeting analysis and documentation of the results and specification of areas requiring action or further analysis. Although the principal findings will be known at the end of the FPE, this post-meeting activity is important because the results will have been quantified, and it is necessary to transcend the numbers and capture in words the reasons for various estimates, the basis of irreducible disagreements, and the areas of greatest uncertainty. Additionally, it may be necessary to perform special analyses to distill the full implications of these results.

Cross-impact analysis
Cross-impact analysis is an advanced form of forecasting that builds upon the results achieved through the various subjective and objective methods described in the preceding pages. Although as many as 16 distinct types of cross-impact analysis models have been identified (Linstone 1983), an idea common to each is that separate and explicit account is taken of the causal connections among a set of forecasted developments (perhaps derived by genius forecasting or Delphi). Among some futures researchers, a model that includes only the interactions of events is called a cross-impact model. A model that includes only the interactions of events on forecasted trends but not the impacts of the events on each other is called a "trend impact analysis" (TIA) model. In the general case, however, "cross-impact analysis" is increasingly coming to refer to models in which event-to-event and event-to-trend impacts are considered simultaneously. Constructing such a model involves estimating how the occurrence of each event in the set might affect ("impact") the probability of occurrence of every other event in the set as well as the nominal forecast of each of the trends. (These nominal trend forecasts may be derived through mathematical trend extrapolation or subjective projections.) When these relationships have been specified, it then becomes possible to let events "happen"--either randomly in accordance with their estimated probability or in some prearranged way--and then trace out a distinct, plausible, and internally consistent future. Importantly, it also becomes possible to introduce policy choices into the model to explore their potential value.

Developing a cross-impact model and defining the cross-impact relationships are tedious and demanding tasks. The most complex model that can be built today (using existing software) can include as many as 100 events and 85 trends. Although these may seem like small numbers--after all, how many truly important problems can be described with reference to only 85 trends and 100 possible "surprise" events?--consider the magnitude of the effort required to specify such a model. First, it is necessary to identify where "hits" exist among pairs of events or event-trend pairs. For a model of this size, 18,400 possible cross-impact relationships need to be evaluated (9,900 for the events on the events and 8,500 for the events on the trends). This evaluation is done judgmentally, usually by a team of experts. Experience suggests that hits will be found in about 20 percent of the possible cases, which means that some 3,700 impacts of events on events or events on trends will need to be described in detail.

How are they described? In the most sophisticated model, seven estimates are required to depict the impact of one event on the probability of another:

  1. The length of time from the occurrence of the impacting event before its effects would first be felt by the impacted event;
  2. The degree of change in the probability of the impacted event at that point when the impacting event would have its maximum impact;
  3. The length of time from the occurrence of the impacting event until this maximum impact (that is, change in probability) would be achieved;
  4. The length of time from the occurrence of the impacting event that this maximum impact level would endure;
  5. If the maximum impact might taper off, the change in probability of the impacted event when its new, stable level were reached;
  6. The length of time from the occurrence of the impacting event to reach this stable impact level;
  7. A judgment as to whether or not these effects had been taken into account when estimating the probability of the impacting and impacted events in the Delphi.

Eight cross-impact factors need to be estimated to describe the hit of an event on a trend. The first seven are the same as those specified above, except that estimates 2 and 5 are not for changes in probability but for changes in the nominal forecasted value of the trend. The eighth estimate specifies whether the changes in the trend values are to be multiplicative or additive.
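One plausible way to organize these estimates is as a simple record, one per hit. The sketch below (field names are ours, for illustration only, not taken from any particular cross-impact software) maps the seven event-on-event estimates, plus the optional eighth for event-on-trend hits, onto a data structure:

```python
# A sketch of one "hit" as a data structure, following the seven
# (event-on-event) or eight (event-on-trend) estimates described above.
# Field names are illustrative, not drawn from any actual model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrossImpact:
    delay_years: float       # 1. time before first effect is felt
    max_impact: float        # 2. change in probability (or trend value)
    years_to_max: float      # 3. time until the maximum impact is reached
    max_duration: float      # 4. how long the maximum level endures
    stable_impact: float     # 5. impact level after any taper-off
    years_to_stable: float   # 6. time until the stable level is reached
    already_counted: bool    # 7. effect reflected in the original estimates?
    multiplicative: Optional[bool] = None  # 8. event-on-trend hits only

# A hypothetical event-on-trend hit: after a 1-year delay the trend is
# raised 10 percent (multiplicatively), peaking at year 3 and lasting
# 2 years before settling to a stable +5 percent by year 6.
hit = CrossImpact(1.0, 0.10, 3.0, 2.0, 0.05, 6.0, False, multiplicative=True)
```

Collecting several thousand such records, one per hit, is what the specification effort described above amounts to.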

In short, if we have 3,700 hits to describe and if, say, 60 percent of them (2,220) are impacts of events on events and 40 percent (1,480) are of events on trends, then 27,380 judgments must be made to construct the model (that is, 2,220 x 7 + 1,480 x 8). With these estimates, plus the initial forecasts of the probability of the events and the level of the trends, the model is complete. It can then be run to generate an essentially unlimited number of individual futures. In one version of cross-impact analysis, developed at the University of Southern California, the model can be run so that the human analyst has the opportunity to intervene in the future as it emerges, introducing policies that can change the probabilities of the events or the level of the trends. This model operates as follows:
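The model-sizing arithmetic above can be verified in a few lines:

```python
# Checking the bookkeeping for a model of 100 events and 85 trends.
events, trends = 100, 85
event_on_event = events * (events - 1)   # 9,900 ordered event pairs
event_on_trend = events * trends         # 8,500 event-trend pairs
relationships = event_on_event + event_on_trend
print(relationships)                     # 18,400 possible cross-impacts

hits = round(relationships * 0.20)       # roughly 20 percent prove to be hits
judgments = 2220 * 7 + 1480 * 8          # 60/40 split of the ~3,700 hits
print(hits, judgments)                   # 3,680 hits; 27,380 judgments
```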

  1. The time period is divided into annual intervals.
  2. The cross-impact model computes the probabilities of occurrences of each of the events in the first year.
  3. A random number generator is used to decide which (if any) of the events occurred in the first year. (It should perhaps be emphasized that once the estimated probability of an event exceeds zero, the event can happen. No one may think it will happen, or conversely everyone may be convinced that it will. If it happens--or fails to happen--the event is a surprise. In cross-impact analysis, events are made to "happen" in accordance with their probability; that is, a 10 percent event will happen in 10 percent of all futures, a 90 percent event will happen in 90 percent of them, and so on. One would be surprised indeed if he or she were betting on a future world in which the 10 percent event was expected not to happen but did, and the 90 percent event was expected to happen but did not.)
  4. The results of the simulated first year are used to adjust the probabilities of the remaining events in subsequent years and the trend forecasts for the end of the first year and their projected performance for the subsequent years.
  5. The computer reports these results to the human analysts interacting with the simulation and stops, awaiting additional instructions.
  6. The human analysts assume that the simulated time is real time and assess the result as they think they would had this outcome actually taken place. They decide which aspects of their strategy (if any) they would change and input these changes to the computer model, which then simulates the next year's results using the same procedure described for the first year.
  7. The simulation repeats these steps until all of the years in the strategic time period have been decided (Enzer 1983, p. 80).
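Steps 1 through 7 can be sketched as a simple annual loop. The event names, probabilities, and one-occurrence rule below are illustrative assumptions, not the USC model's actual structure:

```python
import random

# A minimal sketch of the interactive year-by-year simulation in steps 1-7.
def simulate(events, years, seed=None, adjust=None):
    """events: dict of name -> annual probability of occurrence.
    adjust: optional callback (year, occurred, probs) -> new probs, standing
    in for both the cross-impact adjustments (step 4) and the analysts'
    policy interventions (steps 5-6)."""
    rng = random.Random(seed)
    probs = dict(events)
    history = []
    for year in range(1, years + 1):
        # Step 3: a random number decides which events occur this year, so
        # a 10 percent event occurs in roughly 10 percent of futures.
        occurred = [e for e, p in probs.items() if rng.random() < p]
        for e in occurred:
            probs[e] = 0.0               # an event happens at most once
        if adjust:
            probs = adjust(year, occurred, probs)
        history.append((year, occurred))
    return history

# One simulated future over a five-year strategic period.
future = simulate({"tuition cap enacted": 0.10, "enrollment rebound": 0.30},
                  years=5, seed=42)
```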

When all intervals are complete, one possible long-term future is described by modified trend projections over time, the events that occurred and the years in which they occurred, a list of the policy changes introduced by the analysts, and the impacts of those policy changes on the resulting scenario. The analysts may also prepare a narrative describing how they viewed the simulated conditions and how effective their policy choices appeared in retrospect.

By repeating the simulation many times, perhaps with different groups of analysts, it is possible to develop a number of alternative futures, thereby minimizing surprise when the transition is made from the analytic model to the real world. Perhaps the most important contribution that the USC model (or cross-impact methods generally) can make in improving strategic planning, however, is in its continued use as the strategic plan is implemented (Enzer 1980a, 1980b). The uncertainty captured in the initial model will be subject to change as anticipations give way to reality. Such changes may in turn suggest revisions to the plan.

Models of such complexity are expensive to develop and currently can be run only on a large mainframe computer. For these reasons, their use is warranted only in the most seriously perplexing and vital situations. A number of less complex microcomputer-based cross-impact models are under development, however. For example, the Institute for Future Systems Research, Inc. (Greenwood, South Carolina), has developed a cross-impact model that can be run on an Apple IIe. Although still in the alpha stage of development, this model can accommodate 30 events and 20 policies impacting three trends.

Much simpler models are commonplace. In essence, they are the same, but the rigorous calculations required for complex models can be approximated manually while preserving much of the qualitative value of the results, such as identifying the most important events in a small set. In the simplified manual calculation, the impact of the event is multiplied by its probability: A 50 percent probable event will have 50 percent of its impact occur, a 75 percent event will have 75 percent of its impact occur, and so on. This probability-weighted impact is calculated and, depending on its direction, added to or subtracted from the level of the extrapolated trend at point a (see figure 9). The event-impacted forecast for the years from b is determined by connecting points b and a with the dashed line as shown. This process is repeated for each of the potential surprise events until a final expected value of the event-impacted indicator is developed. The event with the highest product of probability and impact is the most important event or the event having the greatest potential impact on the trend. This simple calculation is the basis of cross-impact analysis, though the detail and complexity (not to mention effort and cost) can be much greater in computer simulations. (For a more detailed discussion of this approach, including an example from the field of education, see Renfro and Morrison 1982.)
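The manual calculation can be sketched in a few lines; the events, probabilities, and impacts below are invented for illustration:

```python
# Each event contributes probability x impact to the extrapolated trend, and
# the event with the largest probability-weighted impact is the most important.
def event_impacted_forecast(baseline, events):
    """baseline: extrapolated trend value; events: (name, probability, impact)."""
    expected = baseline + sum(p * impact for _, p, impact in events)
    ranked = sorted(events, key=lambda e: abs(e[1] * e[2]), reverse=True)
    return expected, ranked

events = [("inflation jump", 0.50, -200),   # 50 percent of -200 is felt
          ("aid cut",        0.75, -100),   # 75 percent of -100 is felt
          ("CAI boom",       0.25, 300)]    # 25 percent of +300 is felt
forecast, ranked = event_impacted_forecast(5000, events)
print(forecast)       # → 4900.0
print(ranked[0][0])   # → inflation jump
```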

Source: Renfro and Morrison 1982.

Policy impact analysis
Most of the techniques of futures research developed in the last 20 years provide information about futures in which the decision makers who have the information are presumed not to use it; that is, new decisions and policies are not included in the futures described by these techniques (Renfro 1980c). The very purpose of this information, however, is to guide decision makers as they adopt policies designed to achieve more desirable futures--to change their expected future. In this sense, traditional techniques of futures research describe futures that happen to the decision makers, but decision makers use this information to work toward futures that happen for them. Apart from policy-oriented uses of cross-impact analysis, policy impact analysis is the first model that focuses on identifying and evaluating policies, strategies, and decisions designed to respond to information generated by traditional techniques of futures research.

The steps involved in policy impact analysis are based on the results obtained from the probabilistic forecasting procedure outlined previously. When the events have been ranked according to their importance (their probability weighted impacts), these results are typically fed back to the group, panel, or decision makers providing the judgmental estimates used to generate the forecast. Just as this group was asked to select and evaluate the surprise events, it is now asked to nominate specific policies that would modify the probability and impact of those events. Decision makers may change the forecast of a trend in three principal ways: first, by implementing policies to change the probability of one or more of the events that have been judged to influence the future of the trend; second, by implementing policies to change the timing, direction, or magnitude of the impact of one or more of the events; and third, by adopting policies that in effect create new events. If all or most of the important events affecting a trend have been considered, then new events should have little or no direct impact on the indicator. For some events, such as the return of double-digit inflation, it may not be possible for the decision makers at one university to change the events' probability, but it may be possible to affect the timing and magnitude of their impacts if they did occur. For example, it may not be possible to affect the president's decision to issue a particular executive order, such as cutting federal aid to higher education, but its impact can be diminished if administrators develop other sources of funding. Usually it is possible to identify policies that change both the probability and the impact of each event (Renfro 1980a).

Policies are typically nominated on the basis of their effect on one particular event. To ensure that primary (or secondary) impacts on other events do not upset the intended effect of the policy, the potential impact of each policy on all events should be reviewed, easily done by the use of a simple chart like the one shown in figure 10.


Source: Renfro 1980b.
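The review that figure 10 calls for can be kept as a simple chart of policies against events. The policies, events, and cell values below are hypothetical probability changes in percentage points:

```python
# A hypothetical policy-by-event review chart in the spirit of figure 10:
# every policy is checked against every event, not just its target event.
events = ["federal aid cut", "inflation jump", "CAI boom"]

# chart[policy][event] = estimated change the policy makes to the event's
# probability (percentage points); 0 means no cross-effect is expected.
chart = {
    "develop alternate funding": {"federal aid cut": 0, "inflation jump": 0,
                                  "CAI boom": 5},
    "joint industry programs":   {"federal aid cut": -5, "inflation jump": 0,
                                  "CAI boom": 10},
}

# Lay the chart out as rows so unintended side effects stand out at a glance.
for policy, row in chart.items():
    cells = "  ".join(f"{row[e]:+4d}" for e in events)
    print(f"{policy:28s} {cells}")
```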

Policies can impact the forecasts of an indicator in three ways: through the events, through the events and directly on the trends, and directly on the trends only. The relationship of policies to trends to the indicators might be envisioned as shown in figure 11. The policies that affect the indicator through events have four avenues of impact. A policy can change the probability of an event by making it more or less likely to occur, or a policy can change the impact of an event by increasing or decreasing the level of an impact, by changing the timing of an impact, or by changing both the level and timing of an impact (see figure 12). (If a computer-based routine is used in policy impact analysis, numerical estimates must be developed to describe completely the shape and timing of the impacts, which, for the impact of one event on a trend, may require as many as eight estimates. These detailed mathematical estimates quickly mushroom into a monumental task that can overwhelm the patience and intellectual capacities of the most dedicated professionals if the task is not structured and managed to ease the burden. For a discussion of the details of the numerical estimates, see Renfro 1980b.)

Source: Renfro 1980b.

Source: Renfro 1980b.

The new estimates of probability and impact are used to recalculate the probabilistic forecasts along the lines outlined earlier. The difference between the probabilistic forecast and the policy-impacted forecast shows the benefit of implementing each of the policies identified. Completed output of all of the steps results in three forecasts: the extrapolated surprise-free forecast, the probabilistic event-impacted forecast, and the policy-impacted forecast.
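The relationship among the three forecasts can be sketched numerically; the baseline and the before/after event estimates below are invented for illustration:

```python
# The three forecasts of a completed policy impact analysis: the surprise-free
# extrapolation, the event-impacted probabilistic forecast, and the
# policy-impacted forecast after policies have reduced one adverse event's
# probability and softened another's impact.
def expected_forecast(baseline, events):
    """events: list of (probability, impact) pairs affecting the trend."""
    return baseline + sum(p * impact for p, impact in events)

baseline = 5000                          # extrapolated, surprise-free forecast
before = [(0.50, -200), (0.75, -100)]    # original probability and impact
after  = [(0.25, -200), (0.75, -40)]     # estimates after the policies

probabilistic   = expected_forecast(baseline, before)   # 4825.0
policy_impacted = expected_forecast(baseline, after)    # 4920.0
benefit = policy_impacted - probabilistic               # 95.0, the policy benefit
```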

To illustrate, suppose that the policy issue being studied is enrollment in liberal arts baccalaureate programs and that measurements of those enrollments since 1945 are part of the database available to a research study team. Further assume that those enrollments were forecast to decrease over the next 10 years, although the desired future would be one in which they would remain the same or increase. In this stage of the model, the team would first identify those events that could affect enrollments adversely--for example, a sudden jump in the rate of inflation, sharply curtailed federally funded financial aid, a significant cut in private financial support, and so on. The team would also identify events that could positively affect enrollments--for example, commercial introduction of low cost, highly sophisticated CAI programs for use on personal computers for mid-career retraining, a new government program to help fund the efforts of major corporations to provide continuing professional education programs for their employees, and so on. Such events may positively affect enrollments because a widely held assumption of liberal arts education is that it facilitates the development of thinking and communication skills easily translatable to a wide variety of requirements for occupational skills.

The next step would be to identify possible policies that could affect those events (or that could affect enrollments directly). For example, policies could be designed to increase enrollments by aggressively pursuing marketing strategies lauding the value of a liberal arts education as essential preparation for later occupational training. This strategy could be undertaken with secondary school counselors and students and with first- and second-year undergraduates and their advisors. Graduate and professional school faculty could be encouraged to consider adopting and publicly announcing admissions policies that grant preferential consideration to liberal arts graduates. Another policy could be to form coalitions with higher education organizations in other regions to press for increased federal aid to students and to institutions. With respect to the potential market in the business, industrial, and civil service sectors, policies with respect to establishing joint programs to provide liberal arts education on a part-time or "special" semester basis could be designed and implemented.

Policies could also be designed to maintain enrollments within the current student population. For example, one policy could concern an "early warning" system to identify liberal arts students who may be just experiencing academic difficulty. Others could be designed to inhibit attrition by improving the quality of the educational environment. Such policies would involve establishing faculty and instructional development programs and improving student personnel services, among others.

Next, the policies need to be linked formally to the events they are intended to affect, and their influence can then be evaluated. (As part of this process it is also important to look carefully at the cross-impacts among the policies themselves, as several of them may work against each other.) The result of this somewhat complex activity is a policy-impacted forecast for undergraduate baccalaureate programs, given the implementation of specific policies designed to improve enrollments. Thus, competing policy options may be evaluated by identifying those policies with the most favorable cost-benefit ratio, those having the most desirable effect, those with the most acceptable trade-offs, and so on.

Figure 13 is an example of a complete policy impact analysis where one may examine the relationship of an organizational goal for a particular trend, the extrapolative forecast, the probabilistic forecast, and the policy-impacted forecast. Note that the distinction between the projected forecasts is the result of the difference between the assumptions involved; that is, the extrapolative forecast does not include the probable impact of surprise events, whereas the probabilistic forecast does. Furthermore, the probabilistic forecast includes not only the effects of events on the trend but also the interactive effects of particular events on the trend. The policy impact forecast not only incorporates those features distinguishing probabilistic forecasts; it also includes estimates of the impact of policies on events affecting the trend as well as on the trend itself.

Source: Renfro 1980c.

Evaluation occurs when the policy impact analysis model is iterated after the preferred policies have been implemented in the real world. That is, the process of monitoring begins anew, thereby enabling the staff to evaluate the effectiveness of the policies by comparing actual impacts with those forecast. Implementation of this model requires that a data base of social/educational indicators be updated and maintained by the scanning committee to evaluate the forecasts and policies and to add new trends as they are identified as being important in improving education in the future, that new and old events be reevaluated, and that probabilistic forecasts be updated to enable goals to be refined and reevaluated. This activity leads to the development of new policies or reevaluated old ones, which in turn enables the staff to update policy impacted forecasts (Morrison 1981b). (The techniques of futures research described here, particularly the probabilistic forecasting methods, have been developed only within the last 10 to 20 years, and they have been used primarily in business and industry, with mixed results. The success of this model depends upon the ability of the staff to identify those events that may affect a trend directly or indirectly, accurately assign subjective probabilities to those events, design and obtain a reliable and valid data base of social/educational indicators, and specify appropriate factors that depict the interrelationships among the events, the trends, and the policies. The efficacy of the policy impact analysis model depends upon the close interaction of the research staff and decision-makers within each stage of the model.)

The scenario
A key tool of integrative forecasting is the scenario--a story about the future. Many types of scenarios exist (Boucher 1984), but in general they are written either as a history of the future, describing developments across a period ranging from a few years to a century or more, or as a slice of time at some point in the future. Scenarios written as future histories are the more useful tool in planning because they explain the developments along the way that lead to the particular circumstances found in the final state.

A good scenario has a number of properties. To be useful in planning, it should be credible, it should be self-contained (in that it includes the important developments central to the issue being addressed), it should be internally consistent, it should be consistent with one's impression of how the world really works, it should clearly identify the events that are pivotal in shaping the future described, and it should be interesting and readable to ensure its use. Scenarios have been used both as launching devices to stimulate thinking about the future at the beginning of a study and as wrap-ups designed to summarize, integrate, and communicate the many detailed results of a forecasting study. For example, the information generated in the policy impact analysis process can easily be used to generate scenarios. A random number generator is used to determine which events happen and when. This sequence of events provides the outline of a scenario. With this technique, a wide range of scenarios can quickly be produced.
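The random-number procedure described above can be sketched as follows; the events and annual probabilities are illustrative:

```python
import random

# Turning probabilistic forecasts into scenario outlines: a random number
# generator decides which events happen and when.
def scenario_outline(events, horizon, seed=None):
    """events: dict of name -> annual probability; returns [(year, name)]."""
    rng = random.Random(seed)
    remaining = dict(events)
    outline = []
    for year in range(1, horizon + 1):
        for name, p in list(remaining.items()):
            if rng.random() < p:
                outline.append((year, name))
                del remaining[name]      # each event occurs at most once
    return outline

# Different seeds yield different candidate scenarios from the same forecasts.
outline = scenario_outline({"aid cut": 0.2, "inflation jump": 0.1},
                           horizon=10, seed=1)
```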

Frequently, several alternative scenarios are written, each based upon a central theme. For example, in the 1970s many studies on energy resources focused on three scenarios: (1) an energy-rich scenario, in which continued technological innovations and increased energy production eliminate energy shortages; (2) a muddling-through scenario, in which events remain essentially out of control and no resolution of the energy situation is realized; and (3) an energy-scarce scenario, in which we are unable to increase production or to achieve desired levels of conservation.

By creating multiple scenarios, one hopes to gain further insight into not only the potential range of demographic, technological, political, social, and economic trends and events but also how these developments may interact with each other, given various chance events and policy initiatives. Each scenario deals with a particular sequence of developments. Of course, if the scenarios are based on the results from earlier forecasting, the range of possibilities should already be reasonably well known, and the scenarios will serve to synthesize this knowledge. If, however, the earlier research has not been done, then the scenarios must be made of whole cloth. This practice is very common; indeed, some consulting organizations recommend it. Such scenarios can be quite effective, as long as the user recognizes that the product is actually a form of genius forecasting and shares all of the strengths and weaknesses of that approach.

Slice-of-time scenarios serve to provide a context for planning; indeed, they are similar to the budgeting or enrollment assumptions that often accompany planning instructions. Yet instead of single assumptions for each planning parameter, a range of assumptions may be considered. In turn, assumptions for different parameters are woven together to form internally consistent wholes, each of which forms a particular scenario, and the set may then be distributed as background for a new cycle of planning.
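The weaving of assumption ranges into internally consistent wholes can be sketched as a small enumeration. The planning parameters, values, and consistency rule below are invented for illustration:

```python
from itertools import product

# Weave ranges of assumptions into internally consistent slice-of-time
# scenarios by enumerating combinations and filtering out inconsistent ones.
assumptions = {
    "economy":    ["strong", "weak"],
    "enrollment": ["up", "down"],
    "state aid":  ["up", "down"],
}

def consistent(combo):
    # Illustrative rule: rising state aid is implausible in a weak economy.
    return not (combo["economy"] == "weak" and combo["state aid"] == "up")

scenarios = [dict(zip(assumptions, values))
             for values in product(*assumptions.values())]
scenarios = [s for s in scenarios if consistent(s)]
print(len(scenarios))  # → 6 of the 8 raw combinations survive
```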

Multiple scenarios communicate to planners that while the future is unknowable, it may be anticipated and its possible forms can surely be better understood. In the language of strategic planning, a plan may be assessed against any scenario to test its "robustness." An effective plan, therefore, is one that recognizes the possibility of any plausible scenario. For example, in a planning conference with the president, the academic vice president might speculate how a particular strategy being proposed would "play itself out" if the future generally followed Scenario I and, then, what would happen given Scenario II. Heydinger has developed several plausible scenarios for higher education, which, although lacking the specificity required for actual institutional planning purposes, convey the flavor of a scenario (see figure 14).

The analysis of multiple scenarios requires attention to a number of factors discussed elsewhere in this monograph, including improbable yet important developments (Heydinger and Zentner 1983). Moreover, in developing the scenarios, it is helpful to recognize that they can be used to describe futures on almost any level of generality, from higher education on the national level to the outlook for an individual department. In addition, agreement on a "time horizon" is necessary. Because many colleges and universities depend heavily on enrollment for income, the time horizon might be 15 years, a foreseeable horizon with regard to college attendance rates, students' demographic characteristics, and composition of the faculty.

  1. The Official Future
    Enrollments are down, and while adult and part-time students are more numerous, their presence has not offset the decline of traditional-age students. One in 10 state colleges has closed in the last seven years, and 25 percent of liberal arts colleges have closed since 1980. With the supply of traditional college-age students resurging, however, a mood of optimism is returning to campuses.
    Industry establishes its own training facilities at an unheard-of pace and competes with higher education for the best postgraduate students.
    In high-tech areas, cooperative research arrangements with industry are commonplace. Most campuses now find that academic departments divide into the "haves" (technology-related areas) and "have-nots" (humanities and social sciences).
  2. Tooling and Retooling
    With job skills changing at an ever-quickening pace, individuals now make several career changes in a lifetime, and college is still considered the best place for training. Nationwide enrollment has thus fallen only 1.5 percent.
    Students are more serious about their studies. Passive acceptance of poor teaching is a relic of the past, and lawsuits by students are common. The implicit view that the professor is somehow superior to the student (left over from the days of in loco parentis) is gone. As students focus almost exclusively on job skills, faculty who prize the liberal arts become a minority.
  3. Youth Reject Schooling
    The plummeting economy makes structural unemployment a reality. With fewer job openings that require a college degree, all but the most elite youth reject formal schooling. Most young people, weaned on fast-paced information with instant feedback, come to find college teaching methods archaic.
    Student bodies are smaller and more homogeneous, comprised mainly of those who can afford the high cost of post-secondary education. A spirit of elitism grows on campus. Among faculty, the mood is one of "minding the store" while waiting for better days.
  4. Long-Term Malaise
    The long-awaited enrollment decline hits, with full force, and the advent of lifelong learning never materializes. The slumping economy forces the states to make deeper funding cuts and close some public campuses.
    Faculty attention is focused on fighting closure, and little discussion of programmatic change is evident. Feeling themselves under increasing pressure, many of the best faculty flee the academy. Higher education becomes a shrunken image of its former self.
  5. A New Industry Is Born
    High technology creates a burgeoning demand for job skills. To meet the new challenge, some professional schools break away from their parent university to set up independent institutions. Private corporations establish larger training programs. Even individuals now hang out a shingle and offer educational training. Amid this explosion of new educational forms, the traditional research university breaks down. Community colleges flourish as they adapt to the new needs of the educational market.
Source: Richard Heydinger, cited in Administrator 3 (1): 2-3.

Scenario development is essentially a process of selecting from the total environment those external and internal elements most relevant to the purpose of the strategic plan. This process might well embrace information on demographic characteristics of students, legislative appropriations, research contracts, the health of the economy, public opinion (about the value of a college degree, for example), developments in the field of information processing and telecommunications, and so on.

Furthermore, assumptions about the behavior of a particular variable in a particular scenario must be explicated. Thus, if the size and composition of the 18- to 20-year-old cohort were the variable under consideration, different assumptions might be developed vis-a-vis college attendance rates. One scenario, for example, might assume that in 1995 the number of students in attendance would be the same as in 1983 but that the number of students in the 25- to 45-year-old group would equal the number of students in the 18- to 24-year-old group. An alternative scenario might assume that the number of students would increase by 1995 and that most of them would be third-generation students in the 18- to 21-year-old group. Similar assumptions must be developed for each variable included in the scenario.

Explicating these assumptions is the most important part of creating scenarios and can require a good deal of prior research or, in the case of genius forecast scenarios, great experience, knowledge, and imagination. Once the assumptions are established, however, the nature of each scenario is determined. Accordingly, to ensure that the scenarios are credible within the institution, it may be worthwhile to review them with local experts. For example, for key factors concerning students, the admissions office might be consulted. For economic variables, the economics department should be consulted. Such consultations are likely not only to improve the quality of the final products but also to build "ownership" into the scenarios, thereby enhancing the chances that they will be considered reasonable possibilities throughout the institution.

In addition to their other advantages, multiple scenarios force those involved in planning to put aside personal perspectives and to consider the possibility of other futures predicated on value sets that may not otherwise be articulated. Grappling with different scenarios also compels the user to deal explicitly with the cause-and-effect relationships of selected events and trends. Thus, multiple scenarios give a primary role to human judgment, the most useful and least well used factor in the planning process. Scenarios therefore provide a useful context in which planning discussions may take place and provide those within the college or university a shared frame of reference concerning the future. (See Heydinger and Zentner (1983) for a more complete discussion of multiple scenario analysis; see also Boucher and Ralston (1983) and Hawken, Ogilvy, and Schwartz (1982) for a more detailed discussion of the types and uses of scenarios.)

Goal Setting
Some years ago, in what was apparently the first serious attempt to understand the range and severity of difficulties that face long-range planners, UCLA's George Steiner surveyed real-world experiences in U.S. corporations (Steiner 1972). Steiner's questionnaire, which was completed by 215 executives in large corporations (typically, long-range planners themselves), presented a list of 50 possible planning pitfalls, invited the respondents to suggest others, and then asked three basic questions for each: (1) How would you rank the pitfalls by importance? (2) Has your own corporation recently fallen into any of the pitfalls, partly or completely? (3) If it has, how great an impact has the pitfall had on the effectiveness of long-range planning in your company?

Steiner used the answers to the first question--a more or less global assessment of the influence of the pitfalls on long-range planning--to rank order the items. He did not, however, exploit the much more interesting information about actual experience revealed by the answers to the second and third questions. Fortunately, he published the raw data in an appendix. An analysis of those data produces a very different picture of the obstacles to effective planning than does his rank-ordered list. If, for example, one looks for the pitfalls that the largest percentage of companies confess they have recently encountered, "partly" or "completely," the top 10 items are those shown in figure 15. This list is most instructive for planners in all types of organizations, including educational institutions, but seven of these 10 items did not appear anywhere among Steiner's top 10!

Far more significant, however, are the results from the third question, which asked the impact of the pitfalls on the effectiveness of the organization's long-range planning. After all, some mistakes or barriers are more serious than others. If one ranks all of the pitfalls on the basis of the frequency with which real-world planners cited them as having great negative impacts on their effectiveness, another list of the top items emerges (see figure 16). Again, the list is different from Steiner's, but this time five of his candidates appear.


Figure 15: The pitfalls most frequently encountered "partly" or "completely"

  • Failing to encourage managers to do good long-range planning by basing rewards solely on short-range performance measures.
  • Failing to make sure that top management and major line officers really understand the nature of long-range planning and what it will accomplish for them and the company.
  • Becoming so engrossed in current problems that top management spends insufficient time on long-range planning, and the process becomes discredited among other managers and staff.
  • Failing to use plans as standards for measuring managers' performance.
  • Failing to make realistic plans (as the result, for example, of overoptimism and/or overcautiousness).
  • Failing to exploit the fact that formal planning is a managerial process that can be used to improve managers' capabilities throughout a company.
  • Failing to develop a clear understanding of the long-range planning procedure before the process is actually undertaken.
  • Failing to develop company goals suitable as a basis for formulating long-range plans.
  • Doing long-range planning periodically and forgetting it between cycles.
  • Failing, on the part of top management and/or the planning staff, to give departments and divisions sufficient information and guidance (for example, top management's interests, environmental projections, etc.).
Source: Steiner 1972.


Figure 16: The pitfalls most frequently cited as having "much" negative impact on planning effectiveness

  1. Failing to develop company goals suitable as a basis for formulating long-range plans.
  2. Failing, by top management, to review with department and division heads the long-range plans they have developed.
  3. Becoming so engrossed in current problems that top management spends insufficient time on long-range planning, and the process becomes discredited among other managers and staff.
  4. Top management's consistently rejecting the formal planning mechanism by making intuitive decisions that conflict with formal plans.
  5. Failing to develop planning capabilities in major operating units.
  6. Thinking that a successful corporate plan can be moved from one company to another without change and with equal success.
  7. Rejecting formal planning because the system failed in the past to foresee a critical problem and/or did not result in substantive decisions that satisfied top management.
  8. Failing to encourage managers to do good long-range planning by basing rewards solely on short-range performance measures.
  9. Assuming that top management can delegate the planning function to a planner.
  10. Assuming that long-range planning is only strategic planning, or just planning for a major product, or simply looking ahead at likely development of a present product (that is, failing to see that comprehensive planning is an integrated managerial system).
  11. Extrapolating rather than rethinking the entire process in each cycle (that is, if plans are made for 1971 through 1975, adding 1976 in the 1972 cycle rather than redoing all plans from 1972 to 1975).
Source: Steiner 1972.
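The two rankings can be rebuilt from raw survey data in a few lines. The tallies below are invented for illustration; Steiner's appendix holds the real figures:

```python
# Rank pitfalls two ways: by how many corporations encountered each "partly"
# or "completely," and by how many rated its impact as "much."
tallies = {
    # pitfall: (no. encountering it, no. rating its impact "much")
    "rewards based on short-range performance":     (120, 40),
    "no suitable company goals":                    (95, 70),
    "top management engrossed in current problems": (110, 55),
}

by_frequency = sorted(tallies, key=lambda p: tallies[p][0], reverse=True)
by_impact    = sorted(tallies, key=lambda p: tallies[p][1], reverse=True)

print(by_frequency[0])  # most frequently encountered pitfall
print(by_impact[0])     # most debilitating pitfall, a different item
```

As the text notes, the two orderings differ: a frequently encountered pitfall is not necessarily the most damaging one.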

The results for pitfall 28 clearly underscore the importance of appropriate goal setting in an organization. Not only is failure to do it well one of the most frequently encountered barriers to long-range planning (as indicated in figure 15); it also surfaces at the top of the list of pitfalls that can most debilitate comprehensive planning (as shown in figure 16). Moreover, this finding has a certain face validity, for even if an organization has a good idea of what it wants to be (if, that is, it has what is known in strategic planning as a good "mission statement"), it is exceedingly improbable that its forecasting and planning will be fruitful in the absence of clear, actionable statements about how it will know if it is getting there. Such statements are variously called "goals" or "objectives."

Some confusion surrounds these terms in the planning literature. Most authors assert that objectives are more general than goal statements, that objectives are long range while goals are short range, that objectives are non-quantitative ("to provide students with a thorough grounding in the humanities") while goals are quantitative ("to require each student to complete two years of instruction in English, philosophy, and history"), that objectives are "timeless" statements ("to provide quality education that properly equips each student for his chosen career") while goals are "time-pegged" ("to implement a program of education, career counseling, and placement by 1989 such that at least 60 percent of graduates find employment for which they are qualified by virtue of their education at this institution"), and so on. But other authors argue other positions. This problem of vocabulary is in large part one of hierarchies or levels of discourse, as one person's objective can obviously be another person's goal (see Granger 1964 or Kastens 1976, chap. 9). For purposes of this paper, the terms are used interchangeably to mean simply a broad but non-platitudinous statement of a fundamental intention or aspiration for an organization, consistent with its mission. Metaphorically, a goal or objective in this sense is like a trend around which the actual performance of the institution is expected to fluctuate as closely as possible.

The purpose of goals is to provide discipline. More specifically, the "objectives for having objectives" include:

  • To ensure unanimity of purpose within the organization.
  • To provide a basis for the motivation of the organization's resources.
  • To develop a basis or standard for allocating an organization's resources.
  • To establish a general tone or organizational climate, for example, to suggest a businesslike operation.
  • To serve as a focal point for those who can identify with the organization's purpose and direction and as an explication to deter those who cannot from participating further in the organization's activities.
  • To facilitate the translation of objectives and goals into a work-breakdown structure involving the assignment of tasks to responsible elements within the organization.
  • To provide a specification of organizational purposes and the translation of these purposes into goals (that is, lower-level objectives) in such a way that the cost, time, and performance parameters of the organization's activities can be assessed and controlled (King and Cleland 1978, p. 124).

The last two purposes lead especially to management control systems, such as the Planning-Programming-Budgeting system, Zero-Based Budgeting, and Management by Objectives.

To these ends, goals are necessary for every formal structure within an organization, including temporary task forces. If, for example, futures research itself is recognized as a distinct function, the failure to specify goals adequately can lead the futures researcher to assume that his or her domain includes all possible future states of affairs. But the job then becomes futile; all too often the planner is reduced to rummaging in the future, looking willy-nilly for the hitherto unanticipated but "relevant" possibility (Boucher 1978).

Steiner's surprise that pitfall 28 ranked so high on the list of dangerous pitfalls prompted his asking several respondents why they had given it such prominence. Their answers clarify some of the attributes of an "unsuitable" goal:

  • It is too vague to be implemented ("optimize profits" or "establish the best faculty").
  • It is excessively optimistic. For example, an educational institution with a total annual budget of $10 million would be deluding itself if it sought to "establish the nation's premier faculty in physics."
  • It is clear enough to those on the top level who formulated it, but it provides "insufficient guidance" to those on lower levels.
  • Finally, it simply has not been formulated. For example, top management has recognized the need to develop goals for lower levels and lower levels would clearly welcome them, but management has not yet been able to specify goals.

How are goals or objectives developed? The short answer is that because they are about the future, they must at bottom be subjective and judgmental. In many organizations, especially small ones, no formal process is required to capture these judgments: The ultimate goals, at least, are the articulated or unarticulated convictions of the founder or top executives about how the organization is likely to look if everyone works intelligently to achieve the mission in the years ahead. The absence of a formal goal-setting process need not mean that the organization is doing something wrong. Indeed, for some of the largest and best-run firms in corporate America, the presence or absence of such a process appears not to matter greatly; what matters more is that a vision is shared and is regularly reinforced by the key people through direct, persistent contact with everyone else. For these companies, this process is a part of what has been called "Management by Wandering Around"--to discover what employees, customers, suppliers, investors, and other stakeholders actually think about the organization and its products or services (Peters 1983). By reinforcing a vision through such contacts, these companies are able to adjust their behavior by comparing their mission, goals, and interim performance toward those goals and then discarding subgoals that are blocking the performance they seek.

No educational institution, to our knowledge, practices Management by Wandering Around. Educational planners and policy makers are more likely to use a formal process for setting goals of some sort, particularly those recommended by business schools for use in strategic planning. The many models available (Granger 1964; Hughes 1965; King and Cleland 1978; Steiner 1969a) tend to be bad models in at least one respect: Almost without exception they fail to recognize the contribution that futures research itself can make to the process of setting goals. The tendency in the literature--and hence in practice--is to suggest that one should, of course, look ahead at the organization's alternative external and internal environments, but, having done that job, one should then proceed to other, more or less independent things, such as setting goals. But futures research can contribute much to this activity, and it can make this contribution directly. Indeed, when futures research is operating in the normative mode, goals or objectives may be its principal output.

The key to exploiting this source of information is for the organization to explicitly establish the preliminary statement of goals as one of the goals of its futures research. We can make this notion more tangible with a simple example. King and Cleland (1978, p. 148 ff.), among others, recommend a process of goal setting based largely on "claimant analysis." In that procedure, each of the organization's claimants, or stakeholders, is identified--for a public university or college, for example, they might include the trustees, the faculty, other employees, the students, government on all levels, vendors of one sort or another, competing universities, alumni, the local community, and the general public--and each group's principal "claims" on the organization are listed. The claims of students, for example, might include obtaining a quality education, varied extracurricular opportunities, contact with faculty, a good library and computer center, non-bureaucratic administrative support services, and so on. Then, for each such claim, a numerical measure is developed, whether direct or indirect. Although the measures will often be difficult to specify, especially in an enterprise as soft as education, the effort should be made. (For example, the quality of education at an institution can be measured in a variety of indirect ways, from counting the number of applications or the number of dropouts, to summing the scores on teacher rating sheets, to tracking the results of outside evaluations of the institution's own schools or departments, to measuring the socioeconomic status of alumni.) Finally, past and current levels of these measures are compared to discern whether the institution has been moving toward fulfilling each claimant's proper expectations. When it has not, the institution has found a new objective. When it has, the current objective has been sustained or rejustified.
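The comparison step at the heart of claimant analysis is easy to make concrete. The sketch below, in Python, is purely illustrative: the claimants, measures, levels, and the `review` helper are all invented for the example, not drawn from King and Cleland.

```python
# Illustrative sketch of the claimant-analysis bookkeeping described above.
# All claimants, measures, and numbers here are hypothetical.

claims = {
    ("students", "quality of education"): {
        "measure": "mean teacher-rating score (1-5 scale)",
        "past": 3.9,        # level five years ago
        "current": 3.6,     # most recent level
        "direction": "up",  # the movement that would fulfill the claim
    },
    ("students", "administrative support"): {
        "measure": "mean days to process a registration change",
        "past": 6.0,
        "current": 4.5,
        "direction": "down",
    },
}

def review(claims):
    """Compare past and current levels of each measure and report whether
    the existing objective is sustained or a new objective is needed."""
    report = {}
    for key, m in claims.items():
        improving = (m["current"] > m["past"]) if m["direction"] == "up" \
            else (m["current"] < m["past"])
        report[key] = "objective sustained" if improving else "new objective needed"
    return report

report = review(claims)
```

In this invented example the rating measure has slipped, so the review flags a new objective for the quality-of-education claim, while the improved turnaround time sustains the administrative-support objective.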

This process--whatever its merits--could be strengthened considerably through futures research. If we know who our claimants have been and are now, it is immediately relevant to ask how the nature and mix of claimants might change in the future--or how it should be made to change. The same is true for the claims they might make. By the same token, having measures of their claims, it is clearly worthwhile to project these measures into the future, perhaps using a technique like Delphi, to see what surprises may lie ahead, including conflicts among forecasted measures. With projections of the measures, it is readily possible to ask about the forces that might upset these projections, using a method like cross-impact analysis. Having these results makes it possible to explore the potential efficacy of alternative strategies. Discovering how these strategies might work can then be the source of insight into the need for new or revised goals--goals that not only are responsive to present conditions but also are likely to provide useful guidance as the future emerges. And all of these considerations could then easily be wrapped up in a small set of scenarios (or planning assumptions), which could serve as a framework for the development of future strategic and operational plans.

Forecasting and goal setting work together to define two alternative futures: the expected future and the desired future. The expected future assumes that things continue as they are. It is the "hands-off" future, in which decision makers do not use their newly acquired information about the future to change it. The desired future is the "hands-on" one, and it assumes that whatever the decision makers decide to do works and works well. In stable environments, the two worlds are the same for complacent administrators. But where stability is vanishing and complacency is much too dangerous (as seems to be the case in education today), management must lead in taking a final active step in the strategic planning process: establishing the policies, programs, and plans to move the organization from the expected future to the desired future.


If forecasting and goal setting have been done rigorously and professionally, much of the information needed to accomplish this stage is already identified. A complete forecast contains the structure, framework, and context in which it was produced so as to enable the user to identify appropriate policy responses (De Jouvenel 1967), which can then be implemented. Bardach (1977), Nakamura and Smallwood (1980), Pressman and Wildavsky (1973), and Williams and Elmore (1976) include excellent discussions of this type.

Monitoring is an integral part of environmental scanning and of strategic planning. Although the specific functions of monitoring are different in the two processes, they serve the same purposes--to renew the process cycle.

In many planning models, monitoring constitutes one of the first steps, for it is in this step that areas of study are identified and the indicators descriptive of those areas selected. These indicators are then prepared for analysis through the development of a data bank, which can then be used to display trend lines showing the history of the indicators. For example, if enrollments are the area of concern, it is important to select indicators that have historically shown important enrollment patterns and can be expected to do so in the future. That is, one would collect data containing information about entering students (sex, race, age, aptitude scores, major, high school, and rank in the school's graduating class) and perhaps how these students fared while enrolled (grade point average, graduation pattern, and so on). Furthermore, one might select information concerning characteristics of entering college students in similar institutions or nationally in all institutions so that entering students at one's own institution could be compared with others. Such comparisons are readily available through data gathered by the Cooperative Institutional Research Program, an annual survey of new college freshmen conducted by UCLA and the American Council on Education (Astin et al. 1984) and available directly from ACE or from the National Center for Education Statistics.

In this first role of monitoring, historical information is developed and prepared for analysis. This role depends upon the identification of selected areas for study. In the model described here, the areas for study would be developed around the issues identified from environmental scanning and rated as important during evaluation. Monitoring begins its initial cycle at this point in strategic planning. That is, indicators that describe these prioritized issues are selected and prepared for analysis during forecasting.

A number of criteria determine the selection of variables in this cycle. For example, does the trend describe a historical development related to the issue of concern? Is the trend or variable expected to describe future developments? Are the historical data readily available? Gathering data is expensive, and novel sources of data will introduce errors until new procedures are standardized and understood by those supplying the data.

A primary consideration involves the reliability and accuracy of the data. Several writers have dealt thoroughly with criteria for developing and assessing reliable and valid historical data (see, for example, Adams, Hawkins, and Schroeder 1978 and Halstead 1974), but information contained in variables derived from the data must be independent of other factors that would tend to mislead the analysis. For example, if the issue concerns educational costs, is this measurement independent of inflation?

Finally, history must be sufficient so that the data cover the cycle needed for projections; for example, if one is projecting over 10 years, are 10 years of historical data available on that trend?
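This sufficiency check is simple enough to state as code. A minimal sketch, with invented enrollment figures and a hypothetical `enough_history` helper:

```python
def enough_history(annual_values, horizon_years):
    """Apply the rule of thumb above: a projection over N years should rest
    on at least N annual observations of the trend."""
    return len(annual_values) >= horizon_years

# Hypothetical fall-enrollment headcounts, one value per year for ten years.
enrollment = [4100, 4180, 4250, 4300, 4310, 4290, 4350, 4420, 4500, 4560]

ok_10 = enough_history(enrollment, 10)  # ten years of data, ten-year horizon
ok_15 = enough_history(enrollment, 15)  # too little history for fifteen years
```

Ten annual observations are enough for a ten-year projection but not for a fifteen-year one, so the longer projection would require either more data gathering or a shorter horizon.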

The second role of monitoring begins after decision makers have developed goals and alternative strategies to reach those goals and have put in place a specific program to implement policies and strategies to move toward the goals. That is, new data in the area of concern are added for analysis so that managers can determine whether the organization is beginning to move toward its desired future or is continuing to move toward the expected future. For example, if the strategies discussed during implementation to increase liberal arts enrollment were employed, the second cycle of the monitoring stage would involve collecting data on enrollments and comparing the "new" data to the "old." Thus, in effect, monitoring is the stage where the effects of programs, policies, and strategies are estimated. The information thus obtained is again used during forecasting. In this fashion, the planning cycle is iterated.

For the environmental scanning model, the specific techniques of monitoring are a function of where an issue is in the development cycle of issues. For some issues, it may be useful to apply some concepts from the emerging field of issues management. (The Issues Management Association was first conceived in 1982 and formally established in 1983 with over 400 members. The major concepts and methods of issues management are still in the experimental and developmental stages.) The issues development cycle shown in figure 17 focuses on how issues move from the earliest stages of changing values and emerging social trends through the legislative process to the final stages of federal regulations (Renfro 1982). This model is used to understand the relative developmental stages of issues and to forecast their likely course of development. Thus, one can see, for example, how the publication of Rachel Carson's Silent Spring led to a social awakening to the problems of environmental pollution, which eventually culminated in the formation of the Environmental Protection Agency in 1970. Similarly, Betty Friedan's The Feminine Mystique helped to organize and stimulate the emerging social consciousness of the women's movement.

Copyright 1983 by Policy Analysis Co., Inc. Used by permission.

Championing issues through publications is not a new phenomenon. Upton Sinclair used the technique at the turn of the century to alert the country to the issue of food safety in Chicago's meat packing houses with The Jungle. Richard Henry Dana used it in Two Years Before the Mast, published in 1840, to alert the country to the plight of seamen, whose lives were in many ways similar to those of slaves. Thomas Paine's revolutionary pamphlet, Common Sense, may be the earliest use of the technique in this country.

Other key stages in the development of public issues are a defining event, recognition of the name of a national issue, and the formation of a group to campaign about the issue. The early stages have no particular order, but each has been essential for dealing with most recent public issues. For example, the nuclear power issue had everything except a defining event to put it into focus until Three Mile Island. Usually the defining event also gives the issue its name--Love Canal, the DC-10, the Pinto. Of course, not all issues make it through these stages, and many--if not most--are stopped somewhere along the way.

In addition to these general requirements for the development of an issue, an issue must usually meet several additional criteria to achieve recognition by the media: suddenness, clarity, confirmation of preexisting opinions or stereotypes, tragedy or loss, sympathetic persons, randomness, the ability to illustrate related or larger issues, the arrogance of powerful institutions toward the little guy, good opportunities for photos, and articulate, involved spokesmen. Issues that eventually appear in the national media usually have histories in the regional and local media, where many of the same factors operate (Naisbitt 1982).

At this stage, an issue is or already has been recognized by Congress--recognition being defined by the introduction of at least one bill specifically addressing the issue. Now the issue must compete with many others for priority on the congressional agenda.

For those issues legislated by Congress and signed into law by the president, the regulatory process begins. The basic guidelines for writing new rules are the Administrative Procedure Act (APA) and Executive Order 12291, which requires streamlined regulatory procedures, special regulatory impact analyses, and plain language. After the various notices in the Federal Register, proposed rules, and official public participation, the regulations may go into effect. This process usually takes three to ten or more years, making the evolving regulatory environment relatively easy to anticipate using this model and a legislative tracking and forecasting service like Legiscan® or CongresScan™, or by following developments in the Congressional Record.

This model of the national public issues process is of course continuously evolving. The early stages have shifted from national issues with a single focus to national issues with many local, state, or regional foci--as the drunk driving, child abuse, spouse abuse, and similar issues demonstrate. The legislative/regulatory process has also been evolving. First, many of the regulations themselves became an issue, especially those dealing with horizontal, social regulation rather than vertical, economic regulation. Regulations for the Clean Air Act, the Equal Employment Opportunity Commission, the Clean Water Act, the Occupational Safety and Health Administration, the Environmental Protection Agency, and the Federal Trade Commission, among others, have all defined new issues and stimulated the formation of new issue groups, which, like the original issue group, came to Congress for relief. Thus, Congress now is deeply involved in relegislation between organized, opposing issue groups--a slow, arduous process with few victories and no heroes.

With Congress stuck in relegislation at so detailed a level that it is, in effect, redrafting federal regulations itself, new issues are not moving through Congress. As a result, the list of public issues pending in Congress without resolution continues to grow. Frustrated with congressional delays, issue groups are turning to other forums--the courts, the states, and the regulatory agencies directly. No doubt the recycling of issues seen in Congress will eventually emerge in these forums as well (see figure 18).

Copyright 1983 by Policy Analysis Co., Inc. Used by permission.

The emergence of the states as a major forum for addressing national public issues is not related to new federalism, which is a fundamentally intergovernmental issue. States are taking the lead on a wide range of issues that a decade ago would have been resolved by Congress--the transportation and disposal of hazardous wastes, the right of privacy, the right of workers to know about carcinogens in their work environment, counterfeit drugs, Agent Orange, and noise pollution. The process of anticipating issues among the states requires another model, one focused not on the development of issues across time but on their spread across states. In most states, legislators do not have the resources or the experience to draft complicated legislation on major public issues. Moreover, issues tend to be addressed or dropped within one session of the legislative body, and such a hit-or-miss process is almost impossible to forecast. Thus, the legislative ideas from the first state to address an issue are likely to become de facto the national standard for legislation among the other states. The National Conference of State Legislatures and the Council of State Governments encourage this cribbing from one state to another, even publishing an annual volume of "Suggested State Legislation." A state legislator need only write in his or her state's name to introduce a bill on a major public issue. The process of forecasting legislative issues across the states then involves tracking the number of states that have introduced bills on the issue and the states that have passed or rejected those bills. While the particular language and detailed implementation policies will of course vary from state to state, this model is reasonably descriptive of the process and represents the current state of the art (Henton, Chmura, and Renfro 1984).
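The tracking step described here amounts to simple counting. A sketch in Python, with a hypothetical issue and invented state-by-state statuses:

```python
# Sketch of the cross-state tracking described in the text. The issue and
# the status assigned to each state are hypothetical.

bill_status = {  # "hazardous waste transport" bills, by state
    "OR": "passed",
    "CA": "passed",
    "NY": "introduced",
    "FL": "introduced",
    "TX": "rejected",
    "OH": "none",
}

def tally(status_by_state):
    """Count how many states fall into each legislative status."""
    counts = {}
    for status in status_by_state.values():
        counts[status] = counts.get(status, 0) + 1
    return counts

counts = tally(bill_status)
```

In this invented snapshot, two states have passed bills and two more have introduced them, which under the model suggests the first state's legislative language is on its way to becoming the de facto national standard.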

Like the model of the national legislative process, this model has been refined several times. Some states tend to lead on particular issues. While it was once theorized that generic precursor states exist, this concept has proved too crude to be useful today. On particular issues, however, the concept still has some value. Oregon, for example, tends to lead on environmental issues; it passed the first bottle bill more than 10 years ago. California and New York lead on issues of taxes, governmental procedures, and administration. Florida leads on the issue of the right of privacy.

The piggy-backing of issues is also important. Twenty-two states have passed legislation defining the cessation of brain activity as death. The issue is an important moral and religious one but without substantial impact on its own. Seven states have, however, followed this concept with the concept of a "living will"; that is, a person may authorize the suspension of further medical assistance when brain death is recognized. This piggybacked issue has tremendous importance for medical costs, social security, estate planning, nursing homes, and so on.

A state forecasting model would be incomplete without another phenomenon, policy cross-over. Occasionally after an issue has been through the entire legislative process, the legislative policy being implemented is reapplied to another related issue without repeating the entire process. The concept of providing minimum electric service to the poor, the elderly, and shut-ins took years to implement, but the concept was reapplied to telephone service in a matter of months. And telephone companies did not foresee the development.

The monitoring stage of the strategic planning process therefore involves tracking not only those variables of traditional interest to long-range planners in higher education (enrollment patterns, for example) but also issues identified through environmental scanning. Moreover, by identifying issues as to where they are in the development cycle of issues, more information is introduced for iteration in the planning process.