By Dr. Peter Smith

Recently, I completed an article for the Sloan-C Journal of Asynchronous Learning Networks entitled "From Scarcity to Abundance: IT's Role in Achieving Quality-Assured Mass Higher Education."

In an increasingly mobile society, with burgeoning opportunities to learn throughout life, the accuracy, clarity, and transparency of learning assessment at the course and program level is essential. Academic outcomes that are consistently and rigorously assessed by the institution and validated by third parties will be seen to have integrity in the 21st century, earning the respect of learners and employers alike.

Course level assessment (CLA) is one of our answers to this emerging need at Kaplan. CLA measures student learning and informs our continuous academic improvement process, using procedures developed to identify and assess learning outcomes. CLA measures student mastery of stated course level learning outcomes in an objective way. It is criterion-referenced, not norm-referenced.

The scores obtained measure the student's current mastery of the skills and knowledge described by the outcomes. CLAs support program-level outcomes while providing the framework for assessing specific learning objectives and activities within a course. Outcomes also share the following characteristics:

  • Each describes one primary area of knowledge or skill.
  • Each reflects specific behaviors underlying the knowledge or skills for which students should be able to demonstrate mastery by the end of the course.
  • Each is written in a style that reflects the appropriate level of complexity of the underlying cognitive tasks required for given levels of mastery.

The learning outcomes are supported by rubrics at the course level. For each course, faculty members assess student success in all of the course’s outcomes using standardized rubrics. Rubrics are developed for each outcome based on specific criteria, identifying student progress towards mastery. Scores on outcomes are then analyzed to determine if students are gaining the desired mastery. We evaluate on a 0-5 scale with 0 = no progress, 3 = “practiced”, and 5 = “mastery”. The objective is that each learner will reach mastery of discipline course outcomes by the end of the course and mastery of the general education outcomes by the time the degree is completed.
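As a minimal illustration of the 0-5 rubric scale described above, the share of students reaching "practiced" or better on an outcome can be computed like this (the data and function names are hypothetical, not part of Kaplan's actual system):

```python
# Hypothetical rubric scores for one course outcome, keyed by student ID.
# Scale: 0 = no progress, 3 = "practiced", 5 = "mastery".
scores = {"s01": 4, "s02": 3, "s03": 2, "s04": 5, "s05": 3, "s06": 1}

PRACTICED = 3  # threshold on the 0-5 scale

def pct_practiced_or_better(scores):
    """Percentage of students scoring at or above the 'practiced' level."""
    at_level = sum(1 for s in scores.values() if s >= PRACTICED)
    return 100 * at_level / len(scores)

print(f"{pct_practiced_or_better(scores):.0f}% practiced or better")
```

The same aggregation, run over every section and every term, is what makes trend figures like the one reported below possible.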

And, as students progress through their programs of study, their achievement of these outcomes is measured. An initial analysis of 2009 and 2010 data showed the share of students evaluated as "practiced" or better climbing steadily from 77% to 87%.

Tracking student learning outcomes at the course level allows us to gauge both the effectiveness and the career relevance of our instruction and our curriculum – and to engage in a continuous improvement process. And the rubric structure allows us to look at the “profile of learning” within a section or across all sections of a particular course to identify anomalies and success rates as well as levels of learning.
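One simple way to sketch the cross-section "profile of learning" analysis described above is to compare each section's mean rubric score against the course-wide mean and flag outliers. This is an illustrative sketch with invented data, not Kaplan's actual analysis pipeline:

```python
from statistics import mean, pstdev

# Hypothetical per-section mean rubric scores (0-5 scale) for one course outcome.
section_means = {"sec_A": 3.8, "sec_B": 3.6, "sec_C": 2.1, "sec_D": 3.7}

def flag_anomalies(section_means, threshold=1.5):
    """Return sections whose mean deviates from the course-wide mean
    by more than `threshold` standard deviations."""
    overall = mean(section_means.values())
    spread = pstdev(section_means.values())
    return [sec for sec, m in section_means.items()
            if spread and abs(m - overall) / spread > threshold]

print(flag_anomalies(section_means))
```

A flagged section prompts a closer look: the anomaly may reflect the cohort, the instruction, or the rubric's application, and only the underlying score profile can say which.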

In conclusion, technology is the key differentiator in being able to gather and analyze this kind of data.

  • First, an IT-based consistent approach to course and outcome design creates an environment where comparability is possible.
  • Second, technology allows us to scale this research to all of our learners, ultimately collecting information on hundreds of thousands of student/courses per year.
  • Third, technology allows us to collect consistent information across every section, something that would be impossible in a traditional institution’s classrooms.
  • And fourth, we know that we have clear control over the means and structure of learning assessment, leading to a high degree of consistency in that process as well.

My thanks to my colleagues Jason Levin, Kara Van Dam, and John Eads who contributed to this work.

Next week: Using IT to increase educational effectiveness 2: Curricular Matrixing

4 thoughts on "Peter Smith: Using IT to Increase Educational Effectiveness: Course Level Assessments"

  1. Keith: I really liked this series of articles. This one might actually be my favorite. The implications of technology are unrivaled. Being able to track outcomes at the course level is immensely powerful. The great thing about data is that the more you gather, the more utility you can derive (often pertaining to valuable things you might not have originally set out to acquire). So how do we decide which software or student management system to use, not only to get the best data but to derive the most utility from it?

