Outcomes Assessment Overview

We have taken an integrated approach to assessing learning outcomes, using several commercial software packages as well as software designed and developed in-house. In the fall of 2005, we constituted an ad hoc Outcomes Assessment Committee to review current outcomes assessment initiatives at other business schools and to develop strategic and tactical measures for assessing learning outcomes. The committee met regularly throughout the school year and made several trips to peer institutions to determine the degree to which procedures there might be applicable to our situation. As a result of this investigation, we determined that we needed a means of collecting data relevant to a continuous improvement process based on an assessment of learning outcomes related to the delivery of our curriculum. Our vision was to develop a data store that would allow us to tie the concepts taught in courses to course evaluations and student performance.

After much debate, three major data collection instruments were proposed. The first was an Assurance of Learning Curriculum Content Inventory (AoL CCI) of the content of all classes taught by our faculty, including the concepts covered, together with a survey of the assessment tools used in each course to test student achievement.

Before the implementation of this instrument, the school based curriculum change decisions on anecdotal information and various satisfaction measures. With the data collected by the AoL CCI, the faculty can take a more reasoned approach to addressing discrepancies between student achievement and pedagogical expectations that extend beyond the boundaries of any one course.
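To make the inventory concrete, the sketch below shows one way a single CCI record might be represented. It is illustrative only: the field names, defaults, and example entries are hypothetical, and the 1-to-5 coverage scale is assumed from the table at the end of this section rather than taken from our actual schema.

```python
# Minimal sketch of an AoL CCI record (hypothetical schema, not the in-house one).
from dataclasses import dataclass, field

@dataclass
class CourseInventory:
    course: str                                   # e.g., "BSAD XXX"
    instructor: str
    coverage: dict = field(default_factory=dict)  # concept -> assumed 1-5 depth-of-coverage rating
    assessment_tools: list = field(default_factory=list)  # e.g., ["multiple-choice exam"]

inventory = [
    CourseInventory("BSAD XXX", "Faculty #1",
                    coverage={"antitrust law": 4, "tort law": 1},
                    assessment_tools=["multiple-choice exam"]),
]

def curriculum_average(records, concept):
    """Roll up coverage of one concept across every course in the inventory."""
    ratings = [r.coverage.get(concept, 1) for r in records]  # assume 1 = essentially not covered
    return sum(ratings) / len(ratings)

print(curriculum_average(inventory, "antitrust law"))  # 4.0 for this one-course example
```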

Next, we needed an instrument to evaluate student learning. We chose the Major Field Test in Business (MFT) developed and administered by the Educational Testing Service (ETS) of Princeton, New Jersey (www.ETS.org). In addition to tests covering business concepts, ETS has developed exams covering basic academic subjects, critical thinking, science, mathematics, reading and writing, and cognitive and technical skills, as well as workplace readiness and job-related skills. This standardized test provides an objective measure that allows for comparisons across our student body as well as with comparable institutions. We also felt that we could use the results from the MFT together with our AoL CCI database to determine if, and where, changes to our curriculum needed to be made. We selected the ETS MFT in Business because it is aimed at four-year colleges and universities and allows us to self-select peer reference groups. The tool also gives us learning outcome measures as well as comparisons and benchmarking of our students' progress.

Our graduating MBA students took the ETS exam for MBAs in Spring 2009, and we plan to administer it each spring for several years to obtain sufficient information for analysis. For the undergraduate program, we now have five data points: Fall 2004, Spring 2005, Fall 2006, Spring 2007, and Spring 2009. In reviewing the ETS MFT results for Fall 2006/Spring 2007 against those for Fall 2004/Spring 2005, we noticed a difference in the variation in scores between the two groups. While not statistically significant (p=0.15), anecdotal information suggests that a higher percentage of Fall 2006/Spring 2007 students may not have taken the test seriously. There were no incentives for these students and, in addition, they were charged a fee for the test. We and other researchers have found that without incentives it is very difficult to get students to approach the test seriously. As one incentive for future test takers, the faculty passed a resolution to include the test score and percentile on students' transcripts.
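The specific test we ran to compare score variability between the two cohorts is not detailed here; the sketch below shows one conventional way to perform such a check, assuming Levene's test and purely illustrative score lists.

```python
# Sketch of a variability comparison between two cohorts (assumed test and placeholder data).
from scipy import stats

cohort_2004_2005 = [152, 148, 160, 155, 149, 158]   # placeholder ETS MFT scores
cohort_2006_2007 = [162, 131, 170, 128, 166, 140]

stat, p_value = stats.levene(cohort_2004_2005, cohort_2006_2007)
print(f"Levene statistic = {stat:.2f}, p = {p_value:.2f}")  # p > 0.05 -> difference not significant
```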

Additional analysis of this data set indicated that our students' ETS scores were significantly correlated at the .01 level with three external student admission measures: SAT math (.388), SAT verbal (.404), and ACT (.512), and with two internal measures of student learning: cumulative GPA (.446) and capstone course grade (.366). This gives us some confidence that the ETS scores provide an external assessment of student learning.
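As a rough illustration of this analysis (not our production code), the sketch below computes Pearson correlations between ETS scores and the admission and learning measures, assuming a hypothetical per-student extract (student_outcomes.csv) with hypothetical column names.

```python
# Sketch of the correlation analysis described above (hypothetical file and column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("student_outcomes.csv")  # one row per student (hypothetical extract)

for predictor in ["sat_math", "sat_verbal", "act", "cum_gpa", "capstone_grade"]:
    paired = df[["ets_mft_score", predictor]].dropna()
    r, p = stats.pearsonr(paired["ets_mft_score"], paired[predictor])
    flag = "significant at .01" if p < 0.01 else "not significant at .01"
    print(f"{predictor}: r = {r:.3f} ({flag})")
```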

The final aspect of our assessment program aims to tie student satisfaction with student achievement (ETS MFT) and curricular coverage (AoL CCI). Since 2001, we have been using the Educational Benchmarking Institute's Undergraduate Business Assessment (EBI)[i] instrument to obtain a measure of student satisfaction with all aspects of a college academic program, course evaluations to measure student satisfaction on a per-course basis, and, finally, a survey of the effectiveness of the academic advising process. The program-wide satisfaction instrument (EBI) is administered at the end of the student's academic career, whereas the course and advising evaluations are collected at the end of each semester via web forms.

Additional measures:

In addition to actual measures of student accomplishment, the school is interested in the program’s educational value as perceived by the students. Various measures have been implemented over the years, from exit interviews offered by external agencies to course evaluations and advisor evaluations developed and administered internally. These satisfaction instruments may reflect levels of quality and rigor in the program, but also may bring to light issues that would affect student enrollment and retention rates.

Internal – Grades:

While student grades may measure the degree to which an individual student's learning has met a particular professor's standards, grades may not be a good measure of what a student knows or has learned while in an academic program. Grading policies vary by instructor, and the content of a course can vary even when the instructors teaching the same course agree to follow the same syllabus. Measures used to assess program effectiveness need to be applied consistently across instructors and be directly linked to the specific desired learning outcomes (Rogers, 2006). Grades, while useful, should not be the sole metric of student achievement.

Internal – Pre-Post tests:

Most assessment mechanisms measure achievement and competency, but in some instances it is important to evaluate student growth. Pre-testing and post-testing can give an indication of direct progress attributable to a particular course. For several years, we have been using questions from the SkillCheck Assessment Portfolio (http://www.fadvassessments.com) to create a customized 30-minute, skills-based test we call the Technology Readiness Assessment (TRA) to measure each student's hardware and software proficiency. In this test, students were asked to demonstrate their ability to perform basic software tasks (e.g., save a spreadsheet to your hard disk) using a simulated operating system interface. There were also several multiple-choice questions that simply asked about basic technology knowledge (e.g., which of the following is a web browser?). The TRA provides us with a direct assessment measure of student technology application skills. We hypothesize that this assessment will allow students to better customize their academic programs (e.g., it will serve as an added advising tool) while at the same time providing a yardstick to measure how much they have learned about applying technology to business.
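A minimal sketch of a pre/post growth calculation is shown below, assuming paired TRA scores for the same students; the numbers, variable names, and choice of a paired t-test are illustrative assumptions rather than our actual analysis.

```python
# Sketch of a pre/post growth check for the TRA (illustrative data and assumed paired t-test).
from scipy import stats

pre  = [55, 62, 48, 70, 66]   # percent correct before the course (placeholder values)
post = [72, 80, 63, 85, 79]   # percent correct after the course (placeholder values)

gains = [after - before for before, after in zip(pre, post)]
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on the same students
print(f"mean gain = {sum(gains) / len(gains):.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```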

Internal – Student Satisfaction with Courses:

In May 1999, acting on the recommendations of an Ad Hoc Teaching Evaluation Committee, the faculty approved a new questionnaire and the electronic collection of evaluations. The committee felt that while the old course evaluation form facilitated summary judgments about instructor performance, it provided little or no diagnostic information by which instructors might improve teaching effectiveness. The current evaluation form asks students to provide feedback on aspects of the course ranging from how effectively visual aids are used to the relationship of the course content to other courses in the business curriculum. The primary goal is to supply useful diagnostic feedback to instructors in order to improve teaching effectiveness and enhance student satisfaction with the classroom experience. The faculty agreed that the evaluation results for sections with a response rate of 70% or more be made publicly available (http://www.uvm.edu/business/?Page=info/eval_login.php). The evaluations are anonymous; the only student-specific information stored is whether or not the student completed the evaluation. The on-line course evaluation is divided into four parts:

Part A: 22 questions about the learning process in the course (5-point Likert scale), including questions relating to satisfaction with learning outcomes;

Part B: 4 questions on the instructor’s patience, interest, fairness and enthusiasm (7-point Likert scale);

Part C: 2 general questions about the course, each asking the student to "Please indicate your level of agreement or disagreement with the following statement";

Part D: open spaces for comments about the course (best aspects, worst aspects & how it could be improved).

For each class surveyed, instructors receive the following information: a histogram of the distribution of responses across the rating scale; the mean score on each question; the percentage of 4’s and 5’s (agree, strongly agree) on each question; and open-ended student comments.
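The sketch below illustrates how this per-question summary could be produced, assuming the raw 5-point responses are available per question; the data and variable names are illustrative and do not come from the actual reporting code.

```python
# Sketch of the per-question summary an instructor receives (illustrative data only).
from collections import Counter

responses = {"Q3": [5, 4, 4, 3, 5, 2, 4], "Q4": [3, 3, 4, 5, 4, 4, 2]}

for question, scores in responses.items():
    histogram = Counter(scores)                                  # distribution across the rating scale
    mean = sum(scores) / len(scores)
    pct_agree = 100 * sum(s >= 4 for s in scores) / len(scores)  # percentage of 4's and 5's
    print(question, dict(sorted(histogram.items())),
          f"mean = {mean:.2f}", f"4's and 5's = {pct_agree:.0f}%")
```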

In order to have comparison data for promotion and tenure cases, the mean response to each question in a subset of questions from the on-line form is computed, and the mean score of each instructor for each course is compared to the overall mean to obtain an "index of comparison." Questions 3, 4, 5, 6, 8, 12, and 13 constitute the subset of questions used for comparison between faculty.

To illustrate, consider Question #3, “Provides useful feedback on written work.” If, hypothetically, the mean score on this question across all faculty and all courses is 3.2, and the mean for Instructor “Smith’s” BSAD XXX course is 3.9, then the index of comparison is 3.9/3.2 = 1.219, or approximately 121.9%. Thus, any index score above 1.0 (i.e., 100%) represents a higher-than-average value on that question for the instructor, compared to the overall School average.
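The same arithmetic, expressed as a short sketch using the hypothetical Question #3 numbers above (instructor mean 3.9 against a school-wide mean of 3.2):

```python
# Sketch of the index-of-comparison calculation using the hypothetical numbers above.
def index_of_comparison(instructor_mean, overall_mean):
    return instructor_mean / overall_mean

index = index_of_comparison(3.9, 3.2)
print(f"index = {index:.3f} ({index:.1%})")  # 1.219, i.e., about 121.9%
```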

From Fall 1999 through Fall 2002, we "strongly encouraged" our students to complete the evaluations, but there were few incentives to encourage participation. In Spring 2003, we decided that the response rates were too low for the data to be valid and began a highly resource-intensive process of escorting students to computer labs during the weeks before final exams; overall participation rates by course section doubled to 80-85%. In 2006, we created an automated reminder process to reduce this staff- and lab-intensive effort, but have noted a drop-off in participation rates.

Internal – Student Satisfaction with Advising:

In an effort to enhance student retention, meet university expectations regarding assessment of advising, and address student concerns with faculty advising, the School of Business Administration implemented an On-Line Advisor Evaluation process in Spring 2004. To minimize the demands on students to complete surveys, this process is linked to our fall and spring course evaluation process; thus, we are able to assess advising at the end of each semester. The results are anonymous and are not linked with the student who completed the questionnaire. The only information stored in the database that is directly associated with the student is listed below; it is collected to ensure that a student fills out this evaluation only once per evaluation period:

  1. Did the student fill out an evaluation?
  2. Did the student think she knew her advisor's name? (i.e., did she choose the "Don't know my advisor…" option from the advisor list?)
  3. Did the student select the advisor of record correctly from the list of advisors?
  4. If the answer to #3 is "No," who did the student select from the list?

In addition to the above information, the survey also anonymously tracks the following:

  1. How often did the student meet with the advisor in the last advising period?

The objective of this survey is to provide constructive feedback to academic advisors and to gain insight that helps strengthen advising and related administrative processes and support systems in the School. The student is asked to select his or her advisor from a drop-down list, which gives us a fundamental indication of the quality of the relationship between the student and the advisor. If the incorrect advisor is selected or the student does not know the advisor's name, the student is shown an appropriate message with the name and photo of his or her advisor, and no further questions are asked. The data are used primarily to improve our advising processes and for reappointment and promotion decisions. We currently have six semesters' worth of data and are analyzing trends. The questionnaire and process description are available at http://www.bsad.uvm.edu/Academics/Courses/evals/AdvisorEvalProcess.htm.
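A minimal sketch of this advisor-selection check is shown below. The function and field names are hypothetical illustrations of the logic described above; only anonymized yes/no flags (and, if applicable, the incorrectly selected name) are retained.

```python
# Sketch of the advisor-selection check (hypothetical names; anonymized flags only).
def show_advisor_card(advisor_name):
    # Placeholder for the page showing the advisor's name and photo.
    print(f"Your advisor of record is {advisor_name}.")

def record_advisor_selection(selected, advisor_of_record):
    """Return the anonymized flags stored for one submission."""
    knows_name = selected != "Don't know my advisor"
    correct = knows_name and selected == advisor_of_record
    if not correct:
        # Student is shown the correct advisor; no further questions are asked.
        show_advisor_card(advisor_of_record)
    return {
        "completed_evaluation": True,
        "thought_knew_advisor": knows_name,
        "selected_correctly": correct,
        "selected_instead": None if correct else selected,
    }

print(record_advisor_selection("Prof. Jones", "Prof. Smith"))
```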

External – Student outcomes:

In Fall 2009, we contracted with Edumetry (http://edumetry.com/) to assess two individually written student assignments from a single course (BSAD 132), scoring them against two student learning outcomes (SLOs) using rubrics developed by UVM. Edumetry electronically embedded rich, actionable feedback into each student assignment based on how the student performed against the identified SLO and provided that score to the student and the instructor. They also provided a one-page summative report to the instructor on the overall performance of the class on each assignment. We are currently reviewing the results of this experiment to determine its usefulness and viability in our program.

External – Faculty/Staff Satisfaction:

In an attempt to evaluate faculty and staff satisfaction with the overall organizational performance of the School, we used the National Council for Performance Excellence's Baldrige Express Survey. This survey is based on the seven Baldrige criteria and can help pinpoint organizational problems and opportunities for improvement. It is currently the only instrument we use that collects faculty and staff perceptions of program effectiveness. The instrument was administered for the first time in August 2006 and again in August 2008. Results continue to show marked differences in perceptions among tenured faculty, untenured faculty, and staff, with communication at all levels among these groups appearing to be the greatest opportunity for improvement. In an attempt to improve communications, we developed a "Business BullETS" electronic newsletter that is sent to all employees at least once each month, outlining important events, policy changes, and related information.

Summary: Example of how various instruments are used in curriculum evaluation:

As an example of how we use the various instruments, consider the "Legal and Social Environment" area of our program. Using the EBI, ETS, and AoL CCI data, we analyzed this area and how various instructors elect to teach the topics covered in the one course in our curriculum designed to provide this knowledge. Ten percent of the questions on the standardized student achievement (ETS) exam deal with this subject and are broken down into four areas (see Table; a short sketch of how the table can be scanned for coverage gaps follows it).

Table: Legal and Social Environment – across the curriculum and as taught in the required course "Legal and Political Environment of Business"
(Content by faculty teaching the course*)

  Topic                                     Avg. across          Avg. of              Faculty #1   Faculty #2   Faculty #3
                                            entire curriculum    dedicated faculty
  1) Legal environment
     courts & legal systems                      2.23                 3.67                5.00         2.00         4.00
     constitution & business                     1.63                 4.33                5.00         4.00         4.00
     administrative law                          1.71                 2.67                3.00         2.00         3.00
  2) Regulatory environment
     employment & labor law                      2.29                 4.00                4.00         4.00         4.00
     antitrust law                               1.66                 4.67                4.00         5.00         5.00
     consumer protection                         1.46                 4.00                3.00         5.00         4.00
     tort law                                    1.14                 2.33                1.00         2.00         4.00
     crimes                                      1.20                 1.33                1.00         2.00         1.00
     environmental & international law           1.71                 4.00                4.00         3.00         5.00
  3) Business relationships
     contract & commercial law                   1.14                 1.33                1.00         1.00         2.00
     business organizations                      1.06                 1.33                1.00         1.00         2.00
     law of agency                               1.37                 1.33                1.00         1.00         2.00
     intellectual property                       1.54                 2.33                4.00         1.00         2.00
  4) Ethics and social responsibility
     ethics                                      3.11                 3.33                2.00         4.00         4.00
     social responsibility                       3.14                 3.33                2.00         3.00         5.00
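The sketch below shows one way the table can be scanned for coverage gaps, for example topics taught in depth only in the dedicated course or lightly covered everywhere. The thresholds and the subset of rows reproduced here are illustrative choices, not part of the formal review process.

```python
# Sketch of a coverage-gap scan over the table above (illustrative thresholds and row subset).
coverage = {
    "courts & legal systems":    {"curriculum": 2.23, "dedicated": 3.67},
    "antitrust law":             {"curriculum": 1.66, "dedicated": 4.67},
    "contract & commercial law": {"curriculum": 1.14, "dedicated": 1.33},
    "ethics":                    {"curriculum": 3.11, "dedicated": 3.33},
}

for topic, c in coverage.items():
    if c["dedicated"] >= 4.0 and c["curriculum"] < 2.0:
        print(f"{topic}: covered in depth only in the dedicated course")
    elif c["dedicated"] < 2.0:
        print(f"{topic}: lightly covered everywhere, including the dedicated course")
```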

[i] We have also used the EBI Alumni satisfaction surveys in 2001 and 2008 to obtain input from our undergraduate and MBA alums.