The Cardinal Rules of Educational Research


As a teacher of educational research methods, I spend much of my time in dialogue with graduate students over design issues. Each semester, I ask the same questions: "Have you crafted a researchable question?" "Have you justified your choice of data collection procedures?" "What analytical tools will you choose, and why?" While a handful of students each year leave my courses frustrated at what they perceive to be haggling over details ("Just let us collect our data!"), most leave recognizing the beauty of a carefully crafted research project, and with an awareness that the key is to make a real contribution to understanding. Given the wildly varying quality of research into educational uses of new technologies, I feel a particular responsibility to the growing number of my students conducting studies in this area.

An exemplar in this regard is McFadden’s (1999) research reported in the February 18, 1999, issue of Education Policy Analysis Archives. To conduct her study into the character of Internet "hits" made by college students, McFadden reviewed the cache of six computers, a randomly selected 10 percent of the machines in a state-supported university’s open access computer laboratories. Her thorough discussion of this review and of the attendant methodological choices she made offers a roadmap for others, including researchers wishing to replicate or expand upon her research. She notes, for example, that the review was conducted on January 25, 1999, in the late afternoon to determine how many and what kind of Internet "hits" were stored there. She points out that she made no attempt to determine how long each student spent at each site nor how long the user stayed in any particular category of site. As an administrator of an online learning enrichment program, I would feel confident making decisions based on her analysis and, as an instructor, I would share her methodologies with my graduate students. I would, for example, feel comfortable suggesting to a student wishing to conduct further research in this area that they consider dissecting the character of "hits" more finely than the McFadden study did. Because she was specifically interested in hits related to pornography and gambling, McFadden chose to collapse some 47 percent (or 1,097) of the other hits into a "General" category related to course activities, research, or personal interest, including anatomy, science, books, literature, airlines, government web sites, health and disease, psychology, and business statistics. Other students could easily adopt her procedures to conduct a similar study into the habits of students younger than college age.

A second, equally important, responsibility I have to my students is to mentor them as they become reasoned and sophisticated consumers of such educational research. My goal in this regard is to help them develop a collegial and constructive attitude and tone as they review and comment on the research of others. A counter-example I may use was provided by Jamie McKenzie (1999), editor of From Now On – The Educational Technology Journal, who takes on the CEO Forum Report and Star Chart (CEO Forum, 1999). The editor’s diatribe begins with a complaint regarding the identity of the report’s commissioners, CEOs selling $5 billion in hardware and systems to schools. "As we read through the recommendations of the report," he suggests, "we need to notice the dollar signs emerging between the lines" (p. 4). He recounts how he made repeated phone calls to discover that the CEO group hired two consultants without appropriate academic credentials to offer so-called authoritative findings on current national usage of computers in schools. The consultants breach every rule of academic research, he writes, including not naming themselves as authors, failing to be objective, neither reporting nor displaying actual data, not describing their data collection instruments, and not articulating their analytical procedures. "Sadly," McKenzie says, "this report has elements of the fox in the hen house. Mr. Fox, how would you suggest we lock the hen house?" (p. 3).

The report, he continues, supports the network version of the Emperor’s New Clothes (p. 4) and is fraught with hidden assumptions. Buyers of this research should "beware" (p. 11). While his methodological criticisms should prompt a critical review of this report, and indeed of all research in the burgeoning field of new technologies, presenting them as a rant is counterproductive. THE SKY IS FALLING! (to extend the poultry metaphor) will do as much for educators, and for those who care about education, as it did for Henny Penny.

I have suggested to my students that they have a responsibility, especially in an earnest quest for research rigor, to adopt an unimpeachably rational tone. The stakes, in terms of educational policy and funding decisions, are too great for them, and all of us, to do otherwise.


CEO Forum on Education and Technology. (1999, February 22). Year 2 report. Retrieved May 11, 1999, from the World Wide Web:

McFadden, A. (1999). College students’ use of the Internet. Education Policy Analysis Archives, 7(6). Retrieved May 9, 1999, from the World Wide Web:

McKenzie, J. (1999). Beware of CEOs bearing gifts! From Now On: The Educational Technology Journal, 8(7). Retrieved May 9, 1999, from the World Wide Web:

Critical Reviews

Critic GG

The paper makes several good points and is certainly relevant. I would, however, like to make the following comments:

  1. The title ought to mention "educational technology," in order to highlight the relevance for this journal's audience. For example, "The Cardinal Rules of (for) Research on Educational Technologies." However, this then creates a dilemma. What is unique about these rules for ed. tech. research? Nothing that I can see.
  2. Why is McFadden's paper an exemplar? First, it is a seemingly skimpy and superficial tally of "hits" from only 6 student computers. I say skimpy because the cache of only 6 computers was examined. Also, there is something strange about the data. Given the later comment that 47% of the hits equaled 1,097, this means that the total number of hits in the caches of the six machines was approximately 2,334, or an average of about 389 hits per computer. I am unaware that any browser stores this many Web pages in cache. Second, what was the "researchable question" that the author said earlier was crucial to good experimental design? Finally, why was the amount of time spent at a given site considered unimportant, or at least ignored?
  3. The reference to collapsing 47% of the other hits implies that 53% of the hits by students on school computers were to pornography/gambling sites. Doesn't this concern anybody? Why did McFadden ignore this finding, even though it was her primary interest, to focus on the "other" hits?
  4. What ARE the cardinal rules? It would help to enumerate them, either in the beginning or in a concluding paragraph. Is there any facet of these rules that is especially different from, or pertinent to, research on educational technology?

Critic G

This is an odd short piece. The title is bold, but we don't find out what the cardinal rules might be, except perhaps rigour and rationality ... not exactly a surprise, but surely these are not really rules. Such rules would give the boundaries and means for being rigorous or rational in this context. 'Craft your research plan' begs the question as to what a well-crafted plan might look like - a country stool or a Hepplewhite cabinet?

The references to the author's students suggest some anecdotal support will appear, but no, they aren't really there other than as recipients of the author's wisdom.

The McFadden study is praised although it is acknowledged that how long the student spent on each page was not analysed. Now, from what I know of other attempts to analyse student online activity, the time spent on a page, or specific site, is considered quite important in some contexts - simply because most pages accessed are not 'read'. Yet the observation is followed by an assertion that the author would be confident making decisions based on the analysis. Why? I would be hesitant about making decisions based on any single research report. Just because it seems to be sound? However sound, educational research has the problem of degree of generalisability to other institutions, contexts, cohorts etc.

So, although we apparently have a 'good' research report praised, and a 'dubious' one condemned, neither the limits nor the objectives (defined by the researchers themselves, or by those who commissioned them) of these reports are discussed. Governments have to act; researchers can just say that more research is needed (and if you give us a grant we'll happily do it). And the problem, as the positivist historians found at the turn of the century, to their chagrin, is that the more data you have, the harder it is to see a pattern, and the levers of historical development don't neatly pop up to be identified - far from it, even further from it! So research may be of limited use to those who need to do something now; it may even be conflicting, and usually is.

So, is this comparison really justified? And what really is the objective of the comparison? To highlight the cardinal rules. Well, it doesn't. Perhaps there is legitimate comment buried here about the way some agencies go about justifying their actions using highly flawed 'surveys', but this piece doesn't seem quite sure of what its objective is.

The second to last paragraph is meaningless to those who have not read McKenzie, and I don't understand all the stuff about hens and other poultry!

In fact I find the whole piece a bit obtuse.

PS An outsider might be forgiven for thinking that the cardinal rules of research are that the researchers have academic credentials, adopt a pretence of neutrality and 'objectivity', where objectivity is presumably defined by academic authority, and follow the rules of the current paradigm. When is special pleading not special pleading? Where is the line? Many of the academics who special plead like this (not necessarily the author here) don't have any problem with teaching without credentials. Do I smell hypocrisy?

Critic I

Un-publishable … as a commentary.

The article lacks a clear introduction to the topic at hand, and hence a direction for the commentary. Equally, there is no conclusion or summary of the focus pursued by this commentary.

The first two paragraphs are interesting in themselves and may be part of a plea for rigor and the requirement to have clear constructs in field research … The rest of the article is confusing and does not have a precise or discernible intent as it relates to educational research. For example, the title or topic of the commentary - "Cardinal rules of educational research" - does not appear in the introduction or the conclusion and is not the object of the discussion. What are those rules, and what is the argument of this commentary?