Classroom Assessment Techniques in Asynchronous Learning Networks

As more college and university courses are offered via asynchronous learning networks (ALN), an important question becomes this: How can classroom assessment techniques be implemented for distance students, especially students communicating asynchronously?

Cross and Steadman (1996) define classroom assessment as "small-scale assessments conducted continually in college classrooms by discipline-based teachers to determine what students are learning in that class" (p. 8). Classroom assessment provides in-process feedback and allows instructors to implement continuous quality improvement techniques in their class (Soetaert, 1998).

Part 1 of this article describes an attempt to adapt a specific type of classroom assessment technique (CAT) to a distance learning course at Washington State University. Part 2 provides a brief overview of CATs, along with issues to consider in future adaptations to online learning.

A Case Study in Adapting CATs to Distance Learning

The class analyzed was a junior-level introduction to production management. It was taught via the WSU Distance Degree Program during the fall 1999 semester. Twelve distance students were enrolled. The class was offered online and used a threaded discussion list called the Speakeasy Studio and Café.

The course was designed to promote student-instructor and student-student dialogue. A significant portion of the final grade was determined by weekly postings to the threaded discussion list as well as required responses to peers' postings. The original course design emphasized teamwork on all assignments, weekly graded homework, weekly answers to discussion questions, at least three comments on peers' submissions, and three group projects.

During the third week of class, the instructor encountered a significant problem: the threaded discussions were not going well. Answers to discussion questions generated few comments, and the discussion threads that did form were shallow, with little follow-up. In short, there was no extended back-and-forth dialogue among students on any particular problem.

The instructor decided to try a classroom assessment technique similar to a minute paper (Angelo & Cross, 1993). The delivery mechanism for the CAT was an online survey created with CTL Silhouette, a tool developed at the WSU Center for Teaching, Learning, and Technology. The tool was a good fit for distance students studying via an ALN: the survey's URL could be linked from the appropriate spot in the online learning environment, along with an explanation of what the CAT was about, and students were asked to complete the survey by a specific date.

The CAT the instructor used consisted of two questions:

  1. What is the one thing that helped you learn the most in this week's activities?
  2. What is the one thing in this course that is least helpful to your learning?
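
Since not every program has access to a tool like CTL Silhouette, the following sketch illustrates, purely as a hypothetical example, how a two-question CAT of this kind might be delivered as a simple Web form. It assumes the Python Flask framework and in-memory storage; it is not the survey tool the instructor actually used.

```python
# Minimal sketch of a two-question online CAT (hypothetical; not CTL Silhouette).
# Assumes the Flask web framework: pip install flask
from flask import Flask, request

app = Flask(__name__)
responses = []  # kept in memory for illustration; a real survey would persist these

FORM = """
<form method="post">
  <p>1. What is the one thing that helped you learn the most in this week's activities?</p>
  <p><textarea name="most_helpful" rows="4" cols="60"></textarea></p>
  <p>2. What is the one thing in this course that is least helpful to your learning?</p>
  <p><textarea name="least_helpful" rows="4" cols="60"></textarea></p>
  <p><input type="submit" value="Submit anonymously"></p>
</form>
"""

@app.route("/cat", methods=["GET", "POST"])
def cat_survey():
    if request.method == "POST":
        # Only the two answers are stored; no name, e-mail address, or login is recorded.
        responses.append({
            "most_helpful": request.form.get("most_helpful", ""),
            "least_helpful": request.form.get("least_helpful", ""),
        })
        return "Thank you. Your feedback has been recorded anonymously."
    return FORM

if __name__ == "__main__":
    app.run()
```

The URL of such a form could then be linked from the appropriate spot in the online learning environment, just as the Silhouette survey was, with a short explanation and a response deadline.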

The instructor had received negative feedback on the group work required, but had assumed that this was because students found it to be too much work. The short survey revealed that students' biggest concern was quite different: they were worried that other members of the ALN could see their work on the threaded discussion list before they were ready to post it.

The instructor therefore decided to change the format of the course. Only the three projects would still be done in groups; the rest of the work could be completed independently. Students were still required to post answers to the weekly questions and to comment on three peer postings each week, but they no longer had to do so as a group.

Three days after the CAT was given, the instructor posted a summary of student responses to the online learning environment, along with explanations of the changes being made as a result of student input, including changes to the group work requirement. The instructor also explained which student suggestions could not be implemented. This worked well. In the first two weeks of the course, before the CAT was given, students made an average of 16 postings to the threaded discussion list in response to the discussion question. The week after the changes were made, the number of postings jumped to over 70. Many of the discussion threads were also more extended, indicating that more back-and-forth discussion was generated. The increased volume of discussion could have been due to the Hawthorne effect (the tendency of people to change their behavior simply because they know they are being observed and responded to), or because students had become more comfortable with the technology, or perhaps because they found the new subject material more interesting. At the very least, however, the CAT cleared up the instructor's misperception and moved the focus of the course from the instructor's perceptions (teaching centered) to students' learning (learning centered). This is one of the most important effects of classroom assessment techniques (Angelo & Cross, 1993, p. 4).

Issues to Consider in Adapting CATs to Asynchronous Learning

CATs in asynchronous learning networks differ in several ways from face-to-face, in-class CATs. These differences may affect how students respond and should be taken into account when CATs are used online.

Students in ALNs may be at different stages in a course. Most face-to-face CATs are given during a specific class period. All students have participated in the same class activities and CATs usually focus on those activities. Students in ALNs may be at various stages: some might have finished the current topic and started on the next, while others are just beginning the current topic. If an instructor wants feedback on a specific topic, the CAT should be worded accordingly.

Students in ALNs do not experience the same learning environment. Students taking a CAT in a face-to-face course are all in the same physical environment; the instructor of an ALN course does not know where students are when they complete a CAT. They may be on the road, trying to connect via a hotel telephone; in a quiet office; or at home, trying to deal with a busy household. This is not necessarily a negative: instructors might even use CATs to find out about the conditions under which their students are studying and take those conditions into account in the course.

Anonymity of responses in ALNs may be a problem. Anonymity encourages honest feedback, but it is harder to guarantee online, where e-mail and discussion postings normally carry the sender's identity. Examples abound of distance education instructors adapting assessment techniques similar to CATs. Some traditional correspondence courses send students pre-addressed, stamped envelopes and encourage them to mail in feedback whenever they want. Many online courses solicit student feedback via e-mail; in some cases the instructor asks a third party to remove names, e-mail addresses, and other identifying information from the messages before forwarding them. The WSU case study had the advantage of an online survey tool, CTL Silhouette, that could collect responses anonymously.
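
Where feedback arrives by e-mail rather than through a survey tool, the third-party anonymizing step can be at least partly automated. The sketch below, which uses Python's standard email library, is a hypothetical illustration of the idea rather than a description of any institution's actual procedure: it copies only the text of a message into a new, header-free message, so the sender's name, address, and routing information never reach the instructor.

```python
# Hypothetical sketch of anonymizing e-mailed CAT feedback before forwarding it.
# Uses only the Python standard library.
from email import message_from_string
from email.message import EmailMessage


def anonymize_feedback(raw_email: str) -> EmailMessage:
    """Return a new message containing only the feedback text;
    From, Date, Received, Message-ID, and other headers are not copied."""
    original = message_from_string(raw_email)

    if original.is_multipart():
        # Keep just the plain-text parts of a multipart message.
        parts = [
            part.get_payload(decode=True).decode(errors="replace")
            for part in original.walk()
            if part.get_content_type() == "text/plain"
        ]
        body = "\n".join(parts)
    else:
        body = original.get_payload(decode=True).decode(errors="replace")

    anonymous = EmailMessage()
    anonymous["Subject"] = "Anonymous CAT feedback"
    anonymous.set_content(body)
    return anonymous
```

In practice, a departmental assistant or a small mail-handling script could apply a function like this to incoming feedback and forward only the result.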

Assessments in ALNs need to be planned. Like face-to-face CATs, those in ALNs must be well planned, ask pertinent questions, and return results to students quickly. The main difference is logistical: an ALN instructor cannot simply collect responses at the end of a class period, so the timing of the assessment, the deadline for responses, and how results will be reported back to students all have to be decided in advance.

With these issues in mind, it is worth reviewing the three-phase, nine-step classroom assessment "project cycle" that Angelo and Cross (1993, p. 34) map out for effective CATs:

Phase 1: Planning a Classroom Assessment Project

Step 1: Choose the focus class

Step 2: Focus on an assessable goal or question

Step 3: Plan CATs based on that goal

Phase 2: Implementing the Classroom Assessment Project

Step 4: Teach the target lesson related to that goal or question

Step 5: Collect feedback data

Step 6: Analyze student feedback

Phase 3: Responding to Results

Step 7: Interpret results and formulate an appropriate response to improve learning

Step 8: Communicate results, try out the response

Step 9: Evaluate the project's effect(s) on teaching and learning

Conclusion

CATs are effective and flexible, and they have been applied online for several years. This article has given one example: adapting a minute-paper-style CAT, delivered through an online survey tool, to a specific problem in an asynchronous course. Many other examples and many other types of CATs exist. A Web site from Eastern New Mexico University, CYBER CATS: Classroom Assessment Techniques Administered and Reported via the Internet, provides further background and examples (ENMU, 2001), and Cross and Steadman (1996) list over 40 techniques in their book on classroom research.

Experience with the WSU course indicates that CATs can be applied effectively to asynchronous learning networks. They are valuable, flexible learning tools and should be used no matter what the learning environment.

References

Angelo, T. (2000). Classroom assessment: Guidelines for success. Teaching Excellence: Toward the Best in the Academy, 12 (4), page numbers.

Angelo, T., & Cross, P. K. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.

Bonwell, initial. (1999, September 9). Title of presentation. Seminar presented at Washington State University, city, WA.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. The Wingspread Journal, 9 (2), page numbers.

Cross, P., & Steadman, M. (1996). Classroom research: Implementing the scholarship of teaching. San Francisco: Jossey-Bass.

ENMU. (2001). CYBER CATS: Classroom assessment techniques administered and reported via the internet. Retrieved April 1, 2001 from the World Wide Web: http://www.enmu.edu/users/smithl/Assess/classtech/cat.htm.

Soetaert, initial. (1998). Quality in the classroom: Classroom assessment techniques as TQM. In T. Angelo (Ed.), Classroom assessment and research: Uses, approaches, and research findings. New Directions for Teaching and Learning, no. 75. San Francisco: Jossey-Bass.

Steadman, M., & Svinicki, M. (1998). A student's gateway to better learning. In T. Angelo (Ed.), Classroom assessment and research: Uses, approaches, and research findings (pp. page numbers). San Francisco: Jossey-Bass.

Wiggins, G. (1993). Assessing student performance: Exploring the purpose and limits of testing. San Francisco: Jossey-Bass.