Classroom Assessment Techniques
Student Assessment of Learning Gains


Theory and Research
Research has found that effective teachers share several characteristics (Angelo & Cross, 1993; Davis, 1993; Reynolds, 1992; Murray, 1991; Shulman, 1990). Two of these characteristics are relevant to this type of instrument.

There is substantial research concluding that classroom instruments based on student perceptions of the efficacy of particular teaching methods can be both valid and reliable (Hinton, 1993). The SALG instrument discussed here is one method for obtaining information of direct utility to the classroom teacher about class content, teaching strategies (and the approach in which they are grounded), student activities, testing and grading procedures, materials and resources, organization, pacing, and workload. This information can be used to adjust aspects of any class so as to increase student learning. It can increase awareness of learning processes in both teacher and students, and it can form the basis for discussions between teachers and their students, teaching assistants, and colleagues about methods that increase learning.

The instrument has its origins both in a need expressed by instructors engaged in classroom innovation and in the evaluation findings from a five-year, multi-institution initiative to improve learning in undergraduate chemistry through "modular" teaching. Like other instructors implementing classroom changes, modular chemistry instructors seek new forms of assessment that better reflect their revised learning objectives and pedagogy. These include more appropriate and accurate tests of student learning, and more precise feedback from students on the value to their learning of different aspects of the class.

The basis for a useful form of student feedback to instructors (and their departments) emerged from the findings of a student interview study that formed part of the formative evaluation of the modular chemistry consortia. Three hundred and forty-four students were interviewed in a matched sample of modular and more traditionally taught[1] introductory chemistry classes at eight participating institutions. The sample was chosen to represent the range of institutions across the two consortia: two research universities, three liberal arts colleges, one community college, one state comprehensive university, and one historically Black college. (Two more community colleges and one research university were added to the sample later.)

The focus group interviews were tape-recorded and transcribed verbatim, and the text files were entered into a computer program to assist with the analysis. Student observations were of three types: answers to interviewers' questions, spontaneous observations, and agreements with observations made by other focus group members. There were 12,993 discrete comments of all three types. We analyzed these data in two ways: in terms of students' assessments of (1) instructor performance as teachers and (2) their own learning gains. In these analyses, we discovered that although students gave positive or negative ratings to specific aspects of the class or of their teacher's classroom performance (e.g., the quality of the teacher's lectures and demonstrations, or the fairness of their tests), the grand totals for all students' observations on how well instructors performed their teaching role were, for both the modular and the comparative classes, broadly 50 percent positive and 50 percent negative. Thus neither group of instructors got a clear picture of the overall utility of their classroom work when students offered judgments of their performance as professional teachers. This is, arguably, because students lack the knowledge or experience to make such judgments. This finding reflects the common instructor experience that asking students what they "liked" or "valued" about their classes, or how they evaluated their teacher's work (often without offering any criteria for these judgments), tells the teacher little about what students gained from their class.

By contrast, in both the modular and comparative classes, students gave clear indications about what they themselves had "gained" from specific aspects of their classes. When all specifically gain-related student observations were totaled and divided into three types (positive: things gained; negative: things not gained; and mixed: qualified assessments of gains), 55 percent of the observations were positive for both types of class, 33 percent (modular) and 32 percent (comparative) were negative, and 11 percent (modular) and 13 percent (comparative) were mixed. The strong similarity between the student learning-gains totals for the modular and comparative classes (though not for particular items) likely reflects the early stage of development of the modules and the teachers' limited experience in using them at the time of these interviews. The issue here, however, is not the relative merits of modular versus more traditional chemistry teaching, but the hypothesis suggested both by our data on instructor dissatisfaction with traditional course evaluation instruments and by these student interview data: it is more relevant and productive to ask students what they have gained from specific aspects of a class than what they liked or disliked.
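For illustration only, the short Python sketch below shows how gain-related comments coded as positive, negative, or mixed could be tallied and expressed as percentage splits of the kind reported above; it is not part of the study's actual analysis software, and the sample codes are made up.

    from collections import Counter

    def gain_percentages(coded_comments):
        # Tally coded gain-related comments and express each category
        # as a percentage of all comments.
        counts = Counter(coded_comments)
        total = sum(counts.values())
        return {category: 100 * n / total for category, n in counts.items()}

    # Hypothetical codes assigned to six comments from one transcript.
    codes = ["positive", "positive", "negative", "mixed", "positive", "negative"]
    print(gain_percentages(codes))  # {'positive': 50.0, 'negative': 33.3..., 'mixed': 16.6...}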

The ChemLinks Evaluator, Elaine Seymour, who developed the SALG instrument, first made it available to chemistry consortia participants in the fall of 1997. This first version was tested (originally as a paper-and-pencil instrument) by instructor volunteers in 14 lower-division modular chemistry courses at eight institutions in the spring and fall of 1998. This first phase of a two-part test was enabled by a grant from the Exxon Education Foundation. It gave the ChemLinks evaluation team (at the University of Colorado, Boulder) 14 sets of completed instruments (including students' write-in comments). For comparison, some instructors also provided completed sets of their institutional or departmental classroom evaluations from the same classes.

The original version of the instrument includes questions that express learning objectives of particular importance to the developers and adapters of the chemistry modules. However, a "generic" version of the instrument (that can be adapted for use by instructors in any discipline using any teaching methods) is offered on the web-site. Versions of the instrument created by users in different disciplines are also offered for adaptation and use by other colleagues. The author and web-site developer are considering additions to the site prompted both by their research findings and by feedback from users.

Findings (both about the efficacy of the instrument and about aspects of modular teaching) were offered in technical and substantive reports to the Exxon Foundation (Wiese, Seymour, & Hunter, 1999; Daffinrud, 1999), have been shared with ChemConnections participants, and have been presented at a number of conferences and meetings (including AAHE, June 1999).

A second round of testing, to determine the flexibility of the on-line instrument with instructors and their classes in a variety of science and non-science disciplines, is under way and will include interviews. A comparative analysis of the nature of students' write-in comments in both the eight-institution sample of SALG responses and a sample of more traditional classroom evaluation instruments is near completion. Publication of the findings from the two rounds of tests and the qualitative data analysis is projected for spring 2000, along with their presentation at the American Chemical Society meetings.


Links
Access the SALG Instrument: http://www.wcer.wisc.edu/salgains/instructor


References
Angelo, T. A., and Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed. San Francisco: Jossey-Bass.

Davis, B. G. (1993). Tools for Teaching. San Francisco: Jossey-Bass.

Daffinrud, S.M. (1999). Work Report for the Student Assessment of Their Learning Gains Web-Site. Report to the Exxon Education Foundation. LEAD Center, University of Wisconsin-Madison.

Hinton, H. (1993). Reliability and validity of student evaluations: Testing models versus survey research models. PS: Political Science and Politics, September: 562-569.

Murray, H. G. (1991). "Effective teaching behaviors in the college classroom." In J. C. Smart (ed.), Higher Education: Handbook of Theory and Research, Vol. 7 (pp. 135-172). New York: Agathon.

Reynolds, A. (1992). What is competent beginning teaching? A review of the literature. Review of Educational Research, 62: 1-35.

Shulman, L. S. (1990). Aristotle had it right: On knowledge and pedagogy. Occasional Paper No.4. East Lansing, MI: The Holmes Group.

Wiese, D., Seymour, E., and Hunter, A. B. (May 1999). Report on a panel testing of the student assessment of their learning gains instrument by instructors using modular methods to teach undergraduate chemistry. Report to the Exxon Education Foundation. Bureau of Sociological Research, University of Colorado, Boulder.


Selected Bibliography
Braskamp, L. and Ory, J. (1994). Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco: Jossey-Bass.

Centra, J. A. (1973). Effectiveness of student feedback in modifying college instruction. Journal of Educational Psychology, 65(3): 395-401.

Fowler, F. J. (1993). Survey Research Methods. Newbury Park, CA: Sage.

Gamson, Z. and Chickering, A. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39: 5-10.

Gutwill, J. and Seymour, E. (1999). ModularChem and ChemLinks Annual Evaluation Report. Presentation to the ModularChem National Visiting Committee, Berkeley, CA.

Henerson, M. E., Morris, L. L., and Fitz-Gibbon, C. T. (1987). How to Measure Attitudes. Newbury Park, CA: Sage.

National Research Council (1997). Science Teaching Reconsidered: A Handbook. Washington, D. C.: National Academy Press.

Seymour, E. and Hewitt, N. (1997). Talking About Leaving: Why Undergraduates Leave the Sciences. Boulder, CO: Westview Press.

Shulman, L. S. (1991). Ways of seeing, ways of knowing - ways of teaching, ways of learning about teaching. Journal of Curriculum Studies, 23, (5): 393-395.

Theall, M. and Franklin, J. (Eds.) (1990). Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning, No. 43. San Francisco: Jossey-Bass.


[1] It should be noted that the degree to which the matched comparative classes were "traditional" in their pedagogy varied considerably by institutional character. The comparative classes reflected whatever was considered the "normal" way to teach introductory chemistry classes at each institution in the sample.


