Classroom Assessment Techniques
Student Assessment of Learning Gains

(Screen 4 of 6)

Analysis

  • Once students have completed the SALG instrument, the instructor can check how many students responded and can view the raw, untabulated data. The instructor can also see which student IDs show responses, which is helpful if credit has been assigned for completion.
  • The instructor can select averages, distribution tables, and cross-tabulations, as well as the raw text and numerical data. I liked the fact that I could tailor the questions to my liking, that students could answer anonymously, that I could use the instrument as many times during a semester as I wanted, and that the analysis was done for me automatically.
  • The scale chosen for the instrument is not a true Likert scale, which has a neutral mid-point with two options above and below it. The authors wished to give students the option to distinguish between four possible levels of gain, from "very little" to "a great deal," as well as "no gains" and "not applicable" options. Thus, instructors may regard averages on particular questions that are above 3.0 as "positive," and averages close to 4 or above as indicating a "good" or "very good" level of perceived student gain (see the sketch after this list).
  • As in our tests of the instrument, instructors may find that the average for Question K (estimates of learning gains from "the way this class was taught overall") does not match the average across all the individual items. We have some doubts about the utility of questions asking for overall evaluations, but retained this question because it is popular with instructors, their departments, and their institutions.
  • Instructors can save the versions of the SALG instrument they have created, offer them as samples for other instructors to use, delete their own students' responses, and, if they wish, delete their instruments.
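
To make the scale arithmetic concrete, here is a minimal sketch in Python of how such averages might be computed, including the mismatch check between Question K and the mean of the individual items described above. This is not the SALG site's own code: the 1-to-5 coding of the answer options, the question labels, and the responses are all illustrative assumptions, and "not applicable" is treated as missing rather than scored.

    # Minimal sketch of the averaging logic described above (illustrative only).
    # Assumed coding, not taken from the SALG site itself:
    #   1 = no gains, 2 = very little, ... , 5 = a great deal
    #   None = "not applicable" (excluded from averages)
    from statistics import mean

    # Hypothetical responses: one dict per student, keyed by question label.
    responses = [
        {"A": 4, "B": 3, "K": 5},
        {"A": 5, "B": 2, "K": 4},
        {"A": 3, "B": None, "K": 4},   # "not applicable" on question B
    ]

    def question_average(responses, question):
        """Average one question, skipping "not applicable" answers."""
        scores = [r[question] for r in responses if r.get(question) is not None]
        return mean(scores) if scores else None

    for q in ["A", "B", "K"]:
        avg = question_average(responses, q)
        # Thresholds follow the rule of thumb quoted above.
        label = "very good" if avg >= 4 else "positive" if avg > 3 else "mixed"
        print(f"Question {q}: {avg:.2f} ({label})")

    # Question K (the overall question) need not match the mean of the items.
    item_avg = mean(question_average(responses, q) for q in ["A", "B"])
    print(f"Mean of individual items: {item_avg:.2f}; Question K: "
          f"{question_average(responses, 'K'):.2f}")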
The authors are considering adding other questions to the sample instrument, extending the statistical package, and providing a template for classifying and coding typed-in student responses. User feedback on these and other issues is encouraged.


Pros and Cons

  • Students are accustomed to multiple-choice instruments, so the experience is familiar and comfortable. They seem very willing to complete on-line instruments, and response rates are typically high.
  • Even reticent students are usually comfortable expressing their ideas in this format, and students are generally pleased that the instructor is interested.
  • Instructors can quickly gain information about students' perceptions of what they have gained from the aspects of the class their teachers consider important, and can do so more than once during the semester or term. The information gathered allows instructors to adjust their pedagogy to increase student gains in particular areas, and gives them a basis for discussing with their students and teaching assistants any issues that have arisen.
  • Survey findings are expressed in easily understood averages and distribution tables, as well as raw scores and typed-in comments (a sketch of such a distribution table follows this list).
  • The act of completing the instrument can promote reflection, increase students' self-awareness of their learning processes, and reassure them that their instructor wants to know how well they are learning.
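
As an illustration of such a distribution table, here is a small Python sketch in the same illustrative spirit as the one above; the answer labels, the 1-to-5 coding, and the counts are hypothetical.

    # Minimal sketch of a per-question distribution table (illustrative only).
    from collections import Counter

    LABELS = {5: "a great deal", 4: "a lot", 3: "some",
              2: "very little", 1: "no gains", None: "not applicable"}

    def distribution_table(scores):
        """Count how many students chose each option for one question."""
        counts = Counter(scores)
        total = len(scores)
        for value in (5, 4, 3, 2, 1, None):
            n = counts.get(value, 0)
            print(f"{LABELS[value]:>14}: {n:3d}  ({100 * n / total:.0f}%)")

    # Hypothetical responses to one question from a class of ten students.
    distribution_table([5, 4, 4, 4, 3, 3, 2, 5, None, 4])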
A fall 1999 faculty tester (in psychology) offered the following comment: "Overall, I think I'm getting a greater volume of analytic, honest, and potentially valuable feedback with this instrument than with any other I've used. I suspect it's partly the medium, and partly the high percentage of tailor-made questions."

However:

  • Instructors may discover that their students' estimates of how well various aspects of the class enable learning are quite different from their own assessment of how the class is going. They are then faced with the choice of changing some aspect of the class, discussing their methods with the students, or following through with the teaching methods and course content they have chosen.
  • Preserving anonymity is difficult with an on-line instrument, even with the system of ID assignment offered by this site. If student responses are to be candid, and thus optimally useful, the instructor must honor anonymity assurances faithfully and students must trust that those assurances will not be breached.
  • There is some imprecision in the scale, so instructors will have to decide at what average score they can regard student feedback as "positive." This may best be resolved by discussing with students what score indicates satisfaction with their own level of learning gain; the appropriate threshold may vary by school, class, student population, and so on.
