Online Evaluations Policy 22-13
The University Senate of Michigan Technological University
(Voting Units: Academic)
“Proposal to Amend Teaching and Course Evaluations”
(Ref: Senate Procedures 504.1.1 Section II Teaching Evaluation System)
The current paper-intensive method for course evaluation is cumbersome, slow, and provides extremely limited feedback and opportunities for analysis. The rigid structure of the current system is inappropriate given our diverse instructional modes and contexts, and longitudinal analysis of student opinion is not possible. Furthermore, the single measure of instructional excellence (question #20) provides inadequate detail to direct or motivate improvement, and may or may not be correlated appropriately with instructional effectiveness.
The Instructional Policy Committee recommends moving to online evaluations and also recommends structural changes to the evaluation questions. Though there is a need to maintain a consistent core of university-wide questions for the purposes of promotion and tenure, teaching award comparisons, etc., online surveys can be easily customized to suit different course types, departments, modes of delivery, or even for individual instructors and courses.
There are many advantages to using online evaluations:
Current print evaluations ask questions that are irrelevant to certain instructional contexts (online courses, courses with no textbook, etc.). Online evaluations would allow instructors to ask relevant questions tailored to their setting. For example, questions appropriate for evaluating a seminar might not be appropriate for evaluating a lab or an enterprise. Online evaluations would allow customizability in this regard.
The timing of course evaluations sometimes needs to vary. Most survey products do not limit the total number of surveys and provide flexible scheduling for midterm surveys, part term courses, or other special courses.
Online evaluations can be done during class since most students have access to a smartphone, tablet, or laptop, or they can be done outside of class time at a student’s convenience. This flexibility easily accommodates students who miss class on evaluation day due to activities, illness, etc.
Our current system provides minimal evaluation for summer courses (instructor request only) or online courses. Since we now teach more than 120 online courses each year, and since summer session contains several hundred evaluable sections, these courses represent a significant missed opportunity for evaluation.
Current evaluations take more than one month for processing and return. This lengthy turnaround time is especially problematic during spring term when the return of fall evaluation data happens after spring term starts, giving instructors no feedback to consider in the new term. Promotion and tenure considerations would also benefit from quicker turnaround.
Current paper evaluations are handled by many people. Instructors are supposed to have students return them to the department, but sometimes return them themselves. Center for Teaching and Learning staff have seen cases where students carry packets of evaluations around in backpacks for days, packets are left in classrooms, or forms are placed in the wrong envelopes. Online evaluations would eliminate these concerns.
Print evaluations pass through many hands (in departments, in the Center for Teaching and Learning, in the registrar’s office), which means that there are multiple opportunities for error in data collection and for breaches of confidentiality. An online system, by contrast, allows data to flow directly from student to system to instructor/chair/dean with minimal intervention.
Better data analysis
Our current system is hardcoded and provides minimal flexibility in terms of analysis. In addition to the fact that asking better questions allows better analysis, most purchased systems would provide much more flexible reporting (by year, gender, etc.) as well.
Online systems we’ve evaluated carry an annual cost of $12,000-$15,000. This seems high until the costs of the current system are considered. Paper forms, envelopes, etc. alone cost more than $5,000/year, and staff time has been estimated at 500+ hours/year of Center for Teaching and Learning staff time (approximately $8,000) and 100+ hours/year from the registrar’s office (approximately $1,500). Many “hidden” costs should also be added to these values, including mailroom time, staff time within departments, disposal/shredding/recycling, and deferred maintenance on the Scantron machine used for the forms in the registrar’s office. (This machine is 10+ years old and could need to be replaced at any moment.) Based on this analysis, we could adopt an online system without additional cost, and probably free staff to do many other things as well.
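To make the comparison concrete, the arithmetic behind these estimates can be sketched as follows (the dollar figures are the proposal's own estimates, not audited costs):

```python
# Rough annual cost comparison based on the estimates cited above.
current_system = {
    "paper forms, envelopes, etc.": 5000,     # direct materials
    "CTL staff time (~500 hrs)": 8000,        # estimated labor cost
    "registrar staff time (~100 hrs)": 1500,  # estimated labor cost
}

# Total of the itemized costs only; "hidden" costs (mailroom time,
# departmental staff time, shredding, Scantron maintenance) are extra.
current_total = sum(current_system.values())

online_low, online_high = 12000, 15000  # quoted annual cost range

print(f"Current system (itemized only): ${current_total:,}")
print(f"Online system: ${online_low:,}-${online_high:,}")
```

Even before the hidden costs, the itemized total ($14,500) already falls within the quoted range for a purchased online system.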
We are Michigan’s Technological University. New faculty are frequently surprised to find out that longitudinal evaluation or other data are unavailable, and students are surprised to be asked to fill out bubble sheets when more efficient and sustainable technologies are clearly available.
The main concern expressed about a move to online evaluations is that response rates will be diminished. Multiple research studies at a wide variety of other institutions have refuted this, finding that the key element correlated with a high response rate is students' belief that the evaluation results will be taken seriously by both the instructor and the administration. Though there are some difficulties in calculating it, our current paper response rate among students who should be evaluating instructors is between 63% and 77%. (The fact that we cannot even precisely determine this rate shows the low level of analyzable data from our current system!) Measuring the true response rate is complicated by several factors, including the aforementioned summer and online section gaps. In addition, sections with fewer than five students are not (and probably should not be) evaluated.
Multiple online evaluation systems now in use at other institutions have been reviewed. The recommended systems involve giving instructors the ability to see their evolving response rate (only) for their courses as the evaluation is in progress. In addition to online reminders provided through the system, instructors and chairs are charged with providing reminders and incentives to reach a benchmark – typically somewhere around 80%. Student incentives may be as simple as the release of a final exam review, extra time for an assignment, or some small redemption or extra credit applied uniformly. This system not only encourages high participation, but it also lets students know that evaluation responses are valued both by the instructor and the administration. Since the evaluations should be highly customizable by the department and instructor, this support is much more likely to be sincere, but consequences of not meeting the benchmarks could be determined as appropriate.
Though the change of mode will offer far more flexibility in terms of data release, this proposal does NOT change current Senate data release policies (504.1.1), except that release will be done electronically rather than on paper. The Instructional Policy Committee must address the many questions raised about additional release and set appropriate policies before the first full dissemination of evaluation data (end of Spring 2014) in order to take full advantage of the new system while maintaining confidentiality.
Sections II.A.2 and II.A.3 of Policy 504.1.1 require editing as follows to maintain existing data release policies under the new system.
Section II.A.2 will read:
Frequency of required student evaluation:
Faculty members and graduate teaching assistants will evaluate at least one section of each different course preparation each semester unless required to do more by the academic unit(s) associated with that course. Student rating of instruction surveys will be sent and summaries delivered only in sections with an enrollment of six or more students unless otherwise specified by an individual academic unit.
Section II.A.3 will read:
Procedures for student evaluations:
The Center for Teaching and Learning will electronically direct end-of-term survey requests to students only during the last three weeks of any term. Faculty will be notified when surveys are opened, and will have opportunities to see response rates and encourage responses at their own discretion during the evaluation period.
The Center for Teaching and Learning will electronically release all written comments and summarized numerical responses to the faculty member. For teaching assistants, this release will be done to an instructional supervisor designated by the department chair or Dean. The chief academic officer, or her/his designee, as well as other academic administrators will also be provided with copies of relevant section summaries.
Summaries from general education core course sections will constitute a special case and also be sent to the relevant core course coordinator and to the person charged by the chief academic officer with general education instructional oversight.
The Center for Teaching and Learning will not release any information related to the student rating of instruction scores of any instructor prior to the end of the grade submission period for that term. No release will occur at any time to any other parties without the prior written permission of that instructor.
The Center for Teaching and Learning will present an annual report on teaching at Michigan Tech to the Senate. This report must include but is not limited to statistical analysis of the university required questions.
There is increasing pressure for accountability at public universities, and several states have already mandated posting of student evaluation data for all courses. The Michigan Tech undergraduate student government made a similar request in Fall 2012. The Instructional Policy Committee suggests that a move to an online tool not only avoids a crisis should such a mandate come, but also allows a proactive intermediate approach that would satisfy demands for accountability while preserving faculty anonymity. One suggestion is to post averages or percentages of faculty above a threshold for the department or college, maintaining anonymity while combating some of the more informal measures (like RateMyProfessor.com) that inaccurately paint instruction at Michigan Tech in a less-than-favorable light.
The Provost’s office currently has in stock sufficient supplies to conduct Fall 2013 evaluations using the current system. Discussions with staff currently implementing this system indicate that, in terms of both TPR and teaching awards, a change fits best at the calendar year boundary, between fall and spring terms. The Instructional Policy Committee therefore proposes the following implementation schedule:
· Summer 2013: Tool selection and very limited pilot
· Fall 2013: Pilot to include online sections in addition to sections for some faculty not affected by TPR or consideration for awards
· Spring 2014: Full implementation for standard/default evaluation
· Summer 2014: Pilot for customization
· Fall 2014: Expanded to full customization
The Instructional Policy Committee recommends the survey be divided into four sections. The “student reflection” and “university” sections would remain standard on all evaluations. The “departmental” and “instructor” sections would be standardized initially, with a move toward customization (for those that want it) through departmental staff trained by the Center for Teaching and Learning. Recommended questions, largely adapted from our current evaluation questions, for each of the four sections are listed below.
The student reflection section targets the student’s own motivation and preparedness for the course.
I understood the goals and objectives of this course.
The goals and objectives of this course were relevant to me.
My effort in this course was adequate to meet course objectives.
I came prepared for each class session.
Essay: If you were meeting with another student about to start this class, what advice would you give him/her?
The required university section targets the fundamental attributes of quality teaching.
The first seven questions each emphasize one of the six dimensions of effective teaching highlighted most commonly in the literature. Note that the long-term implementation of these core questions will not contain a single “overall” question; this is intentional, to better evaluate the multiple dimensions present in instruction. Question #8 is included for calibration between the paper and online systems, and will cease use in the university required section at the end of academic year 2018/19. Where a single measurement is needed (teaching awards and other decisions historically driven by the former “question #20”), an average response to the first seven questions would be considered, with the two responses about engaging students weighted so that together they count equivalent to each other question.
1. The instructor was enthusiastic about the subject matter of the course.
2. The instructor communicated the course material clearly.
3. The instructor engaged students by encouraging participation during class.
4. The instructor engaged students by encouraging course preparation, reflection, or other activities outside of class.
5. The instructor provided timely feedback on my work (homework, assignments, exams, etc.).
6. The instructor displayed a personal interest in students and their learning.
7. The instructor used technology appropriately.
8. Taking everything into account, I consider this instructor to be an excellent teacher.
1) As I, the instructor, prepare to teach this class again, what aspects of this course (teaching methods, assignments, areas of emphasis, etc.) should I preserve that effectively furthered your learning?
2) What aspects of this course should I change to improve student learning? Specifically, what would you suggest?
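Where a single measurement is needed, the weighted average of the seven university questions described above can be sketched as follows (the function name and input format are illustrative, not part of the proposal):

```python
def weighted_overall(responses):
    """Compute the single-measure average of university questions 1-7.

    `responses` holds the mean rating for each of the seven required
    questions, in order. Questions 3 and 4 (the two "engaging students"
    items) are weighted 0.5 each so that together they count as one
    question, yielding six equally weighted dimensions of teaching.
    """
    weights = [1, 1, 0.5, 0.5, 1, 1, 1]
    return sum(w * r for w, r in zip(weights, responses)) / sum(weights)

# Example: uniform ratings of 4.0 yield 4.0 regardless of weighting.
print(weighted_overall([4.0] * 7))  # 4.0
```

This keeps the two engagement questions from dominating the composite score while still reporting one number for teaching awards and similar uses.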
The departmental section targets content or course policies. Instructors may or may not have control over many of the issues explored here. What’s below would be instituted initially, and in cases where departments have not provided other direction. The intention would be to allow departments the flexibility to modify these questions within a semester or so after implementation. Departments could also explore standard sets of questions for specific course types (labs, seminars, etc.), or eliminate these questions entirely.
The instructor found ways to help students answer their own questions.
The organization of the class helped me to learn.
The pace of this course was consistent with my ability to learn the material.
The course grading policies were fair.
Given the opportunity, I would take another course from this instructor.
Departmental Optional Questions (Examples)
The instructional resources (textbooks, handouts, etc.) furthered my learning.
The classroom and equipment (if applicable) were adequate to support effective learning.
Instructor Optional Questions
The instructor section targets individual practices within the class. This section again would have the following questions as a default, but within a semester or two of implementation, a system would be put in place allowing the instructor to either modify these questions directly or work with someone who could modify them. This could be a department coordinator or a single central coordinating staff member in either the Provost’s office or the Center for Teaching and Learning. One of the latter is suggested; most institutions offer instructor choices from among a finite number of vetted questions. (See the Syracuse University Item Bank at https://oira.syr.edu/assessment/StudentRate/ItemBank.htm.)
I wanted to take this course.
Class sessions were thought provoking.
The instructor made me aware of his/her scheduled office hours.
The instructor encouraged students to seek additional help outside of class.
The instructor used class time effectively.
The instructor made connections between new material and material previously covered in class.
I am more interested in the subject now than I was before I took this class.
Taking everything into account, I consider this instructor to be an excellent teacher.
Introduced to Senate: 27 March 2013
Friendly Amendment (in red): 09 April 2013
Amendment (in blue): 10 April 2013
Approved by Senate: 10 April 2013
Approved by Administration: 27 April 2013