I would be hard-pressed to think of someone who can speak with more authority on the relationship of outcome assessment and educational technology than consultant Neil Allison of New Publishing Solutions. I asked Neil to provide us with his perspective on the state of outcome assessment.

Neil Allison is a member of the Higher Education Management Group, a LinkedIn Group.

KH: Outcomes assessment was identified a few years back as one of the great growth areas in educational technology for the years to come. How would you characterize this field today? From where are the great innovations emerging? Vendors? Schools?

NA: Watching the development of the market for outcomes assessment technology has been fascinating.  Remembering what marketers say about how many ideas we can digest at a time, I’ll try to follow their advice with three C’s:

Confused

You know the saying:  ask 10 people for a definition, and you’ll get 10 answers.  The same is true of “outcomes assessment” software.  In fact, you’ll find analysts using different terms to describe the field and providers describing their solutions differently.  Here are a few examples:

  • Analysts: learning outcomes management (Eduventures)
  • Providers: learning outcomes manager (eCollege); assessment management solution (TracDat); learning assessment and accreditation solutions (LiveText); online assessment management (WEAVEonline); outcomes assessment management (Blackboard)

In some ways, this fuzziness or confusion is to be expected:  the outcomes assessment field is too immature for people to agree on what it is or isn’t, and what features and functionality it covers.  Of course, that makes it hard for institutions to figure out how to evaluate software for outcomes assessment.  [Full disclosure – I used to work on the Blackboard Outcomes System].

It’s useful to reflect on what outcomes assessment is and how technology can support it.  Outcomes assessment is a reflective, repeatable process of defining goals, collecting and reflecting on evidence of performance, and informing actions to improve effectiveness.  In higher education, multiple technology capabilities can support this process:

  • Document management – collecting, presenting and sharing of documents.
  • Curriculum planning – defining and sharing standards and goals, rubrics, etc.
  • Data collection – designing and implementing of instruments like surveys, course evaluations, customized forms, etc. to measure achievement of goals and monitor progress.
  • Rubric-based evaluation – designing and sharing rubrics, collecting and evaluating student work against them, and reporting on results (a minimal data-model sketch follows this list).
  • Process design and management – organizing, managing and reporting on assessment and accreditation initiatives.
  • Reporting and analysis – collecting, reporting on and understanding collected data.
  • Portfolios – collecting, sharing, presentation of and reflection on student work or performance (and often evaluation as well).
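
To make the rubric-based evaluation piece concrete, here is a minimal sketch of the kind of data model that could sit behind it. The class and field names are invented for illustration and are not drawn from any particular vendor’s product.

```python
# Minimal, illustrative data model for rubric-based outcomes assessment.
# All names here are hypothetical, not tied to any specific product.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Outcome:
    code: str            # e.g. "GE-WRIT-1" (a general education writing goal)
    description: str


@dataclass
class Evaluation:
    student_id: str
    artifact_id: str          # the paper, project, or portfolio being evaluated
    scores: dict[str, int]    # outcome code -> rubric score awarded


def outcome_averages(evaluations: list[Evaluation]) -> dict[str, float]:
    """Roll individual rubric scores up into per-outcome averages for reporting."""
    by_outcome: dict[str, list[int]] = {}
    for ev in evaluations:
        for code, score in ev.scores.items():
            by_outcome.setdefault(code, []).append(score)
    return {code: mean(scores) for code, scores in by_outcome.items()}
```

Even something this simple captures the core loop described above: define goals, collect evidence against them, and report in a way that can inform improvement.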

Crawling

Of course, the fuzzy definition problem reflects the level of adoption of assessment technology.  The technology, though it needs to improve, isn’t the main challenge.  Outcomes assessment still isn’t widely and deeply adopted on campus (what assessment expert Peggy Maki refers to as systemic and systematic processes).  Assessment isn’t part of faculty culture or day-to-day behavior.  And, perhaps most importantly, assessment does not meaningfully inform curriculum or policy decisions at most institutions.

I’m optimistic, though, that with better incentives and policies, technology will play an increasing role in easing the adoption of assessment [see innovations in answer to the next question].

Connecting

Institutional assessment by its nature involves each course, program, department and unit, connected through multiple processes (regional accreditation, program accreditation, academic program review, etc.).  As I noted above, it also depends on numerous different technology platforms:  learning management systems, portfolios, content management, assessment engines, survey tools, reporting tools, etc.  The Design School might want its own portfolio tool, while the Ed School’s NCATE requirements demand another.  One size may or may not fit all.

Only connect. The theme and imperative needs to be connection: ensuring, for example, that course-level activity connects to program and general education goals, or that the rich learning performance data in a publisher platform connects with an LMS.  When it’s often hard enough to get assessment going at all, it’s better to encourage and enable innovation wherever software is found to be useful – even if that means different tools in different departments – and to make it work by ensuring smooth connections between them.

Where are the innovations? There are institutions improving their effectiveness by using technology in each category above.  I’ll focus on a few emerging innovations that promise to have a disproportionate impact on learning outcomes assessment:

  • Embedded assessment – Assessment needs to become part of day-to-day course activity to have an impact, and some of the most valuable innovation connects “assessment” to the day-to-day activity of teaching and learning to power both summative assessment (after) and formative assessment (during); see the sketch after this list.
      • Imagine if student papers from course assignments could be “harvested” from course management systems and the rubric-based evaluation process streamlined online.
      • Imagine if online assessments not only evaluated achievement of course objectives, but could also assess program goals or general education goals through the consistent embedding of specific questions.
  • Performance feedback systems – Think about the countless hours a faculty member spends providing feedback on student papers.  The current process is time-consuming, disconnected from consistent learning standards, and slow to return feedback to students.  It’s ripe for innovation, and it’s happening:
      • Better tools that let faculty embed feedback linked to high-quality, “bite-sized” instruction relevant to the same student issues;
      • Rubric-based feedback as part of the grading process;
      • Automated, artificial intelligence feedback (quicker, cheaper) via tools like Criterion;
      • Outsourced feedback via companies like SMARTHINKING and Edumetry [full disclosure – I was part of the founding executive team at SMARTHINKING].
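
To illustrate the “harvesting” idea, here is a small sketch of pulling rubric-scored work out of a course system and rolling it up against program-level goals. The lms_client object, its rubric_results method, and the criterion-to-goal mapping are all placeholders invented for this example; a real implementation would depend on whatever API the institution’s LMS actually exposes and on the curriculum plan’s own crosswalk.

```python
# Illustrative only: harvest rubric scores from course assignments and
# aggregate them by program / general education goal.
from collections import defaultdict

# Hypothetical crosswalk from course-level rubric criteria to program goals.
CRITERION_TO_GOAL = {
    "ENG101-thesis": "GE-WRIT-1",
    "ENG101-evidence": "GE-WRIT-2",
    "HIST210-argument": "GE-WRIT-1",
}


def harvest_program_scores(lms_client, course_ids):
    """Collect rubric scores from courses and group them by program goal."""
    goal_scores = defaultdict(list)
    for course_id in course_ids:
        # Assumed method: yields rows like
        # {"criterion": "ENG101-thesis", "score": 3, "student_id": "s42"}
        for row in lms_client.rubric_results(course_id):
            goal = CRITERION_TO_GOAL.get(row["criterion"])
            if goal is not None:
                goal_scores[goal].append(row["score"])
    # A per-goal average gives a simple program-level view of achievement.
    return {goal: sum(scores) / len(scores) for goal, scores in goal_scores.items()}
```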

KH: Compared to K12, institutions in higher education operate relatively independently of one another and have shown less interest in producing and sharing data on student success. This was one of the points made by the Spellings Commission in the U.S., for example, but policy makers have made similar arguments in other countries too. How might this impact our ability to share data across P-20 to help student success?

NA: We’re missing serious opportunities.  The Data Quality Campaign (funded in part by the Gates Foundation) has led an exciting movement to promote the collection, sharing and analysis of education data across P-20.  I recommend readers check out some of their recent publications on the state of data available and how innovative states have been using the data to inform better policies and practice – that’s the hard part.

I would put your question a slightly different way, and focus optimistically on what would be possible: what are the benefits of sharing student success data from higher education – across P-20 and to other stakeholders (students for example) – and what kinds of data need to be shared to make the sharing valuable?

School practice – Student success data about a school’s graduates, fed back from the colleges they attend, can help schools gauge their effectiveness and inform better decision-making.  Which courses or programs are best correlated with future success? Which students seem to be sticking it out through community college, and why?

Knowing student persistence data is one thing – are my students finishing courses and completing college?  Knowing how well they’re learning is another.  Course grades across institutions (and even within institutions, remembering my time in college) are far too inconsistent to provide meaningful information about student learning.  But where colleges do have clear standards and measures, schools can align their curriculum to them (see CSU’s Math Success program) and get more valid and granular measures of student learning.

Policy – Meaningful student success data can help policy-makers identify and spread potential best practices, whether in terms of school management, supplemental programs, or curriculum.  What if you could compare the college success of students by type of instructional approach or by participation in a government-funded program? Getting visibility across schools and approaches helps identify what works better.  Governments and foundations have a strong interest in getting this data.

Student decisions – Students face wildly different costs for attending college, choosing between community college ($4K), public university ($15K) or a four-year private university ($34K) [College Board 2007 numbers, assuming no financial aid]. In the current economic climate, many students are wondering whether the big-ticket options are really worth it.  Research suggests that they might be right.  [See Peter Ewell’s fine article in this month’s Change Magazine].  Better transparency – more of the right information – could be informative and even revolutionary: students should know more about what is expected of them and more about how successful colleges are at fostering student success.

KH: Many people in education have imagined a future in which web-based education allows us to not only analyze student activity/success, but to use this information to customize the student’s learning experience. How close are we to this, and what do you see as the remaining obstacles to overcome?

NA: We’ve got a lot more data now:  through online-delivered or digitally supported learning, the amount of data about student learning activity and performance has exploded (from learning management systems, publisher solutions, tutoring providers, etc.).  With mobile-supported education, the data will be even richer and cover much more of the student learning experience.

This is what makes it possible to take on a grand challenge, as John Campbell rightly terms it:  how to use data to inform and improve decisions about policies, budgets, and curriculum at the institution, program and course level.  Education has much to learn from other sectors; I recommend Bryan Hassel’s chapter, “Cutting Edge Strategies from Other Sectors,” in A Byte at the Apple from the Fordham Institute.

Better data and digital delivery of learning make it possible, in theory at least, to break from one-size-fits-all education and move to a more flexible, student-centered model in which the type, sequence and pace of learning experiences are customized based on relevant student characteristics (current and previous performance and activity, and other learner profile information that proves relevant).  The imperative to create this more effective, more personalized learning environment is an equally grand challenge.
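
A toy sketch of what that customization might look like at its simplest: choose the next learning activity from a student’s recent performance. The thresholds and activity names below are invented for illustration; a real system would draw on a far richer learner profile than a handful of quiz scores.

```python
# Illustrative adaptive-sequencing rule; thresholds and activity names are made up.
def next_activity(recent_scores, mastery_threshold=0.8, struggle_threshold=0.5):
    """Pick the next step from the average of a student's recent scores (0 to 1)."""
    if not recent_scores:
        return "diagnostic"                    # no data yet: start with a diagnostic
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= mastery_threshold:
        return "advance_to_next_module"        # mastered: pick up the pace
    if avg < struggle_threshold:
        return "remedial_practice_with_tutor"  # struggling: slow down, add support
    return "additional_practice"               # in between: more practice at this level


# Example: a student averaging 0.45 is routed to remedial practice with support.
print(next_activity([0.5, 0.4]))
```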

How close are we? Some customized learning is happening already in more limited scope.  Look at the innovation within course-based learning environments created by major publishers, like Pearson’s myLabs, and intelligent tutoring solutions developed by smaller companies like Carnegie Learning.

In the near term, I see continued customization innovation from the big 3 publishers expanding course-based solutions and from the larger proprietary / for-profit institutions, which have the incentives to increase success and retention, sophisticated internal digital distribution platforms, and greater control over course design.  In the longer term, we’ll need to see more planning to take advantage of this opportunity and better coordination across learning design, LMS, and other platforms.

What are the obstacles?  I’ll put them in three buckets:  measurement, incentives, and framework.

Measurement:  Again, back to assessment.  It’s hard to break with traditional teaching methods or course structure without clear measures to justify change.  A measuring stick would make it possible to weigh the benefits of different approaches to achieving defined learning outcomes, whether through customized learning experiences, teaching approach, etc. Without it, it’s easy to stick with the status quo.
As director of the National Center for Academic Transformation, Carol Twigg has been a stalwart promoter of using technology to redesign course delivery to reduce cost and improve or maintain learning outcomes.  She argues clearly for the “indispensable” role of assessment in justifying and supporting innovation in this article, “How Essential is Assessment?”  I’m hopeful that institutions with the most at stake – and the most to prove – will get together and create meaningful standards and assessments to prove just how good they are.

Incentives:  Carol’s work illustrates that assessment is necessary but certainly not sufficient to spark and spread innovative teaching practices.  Even if it were possible to show that more customized learning leads to better learning outcomes and reduced cost, even to improved access, it wouldn’t be enough.
Another key obstacle is the incentives both within and for educational institutions. It’s safe to say that they don’t currently reward better learning, improved access, or reduced cost, or at least not very well.  Every lever – state funding, accreditation, student financial aid policies, promotion and tenure, and faculty/staff reward systems – must be used to reward better outcomes.
With a very conservative higher education culture, we also need incentives to encourage the very process of innovation – the risky acts of creating new course designs, content, or technologies that could have an outsized impact.

Frameworks:  Making customized learning truly valuable depends on the sharing of information (about learners and their profiles, learning objects, etc.) across technology platforms and institutions and beyond the course.  It certainly requires the broad interoperability standards from the IMS Global Learning Consortium and others so that platforms can talk to each other.

But it also needs a learning outcome Babelfish, a universal translator that translates standards about learning objectives from one provider, institution, or state to another.
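
What might such a translator look like in the simplest possible terms? Here is a minimal sketch built around a hand-maintained crosswalk table; the schemes, codes, and mappings are entirely made up for illustration (AAC&U’s VALUE rubrics appear only as a stand-in for a shared target vocabulary).

```python
# Illustrative "outcome Babelfish": a crosswalk that maps one scheme's
# outcome codes onto another's.  All entries are invented examples.
CROSSWALK = {
    ("state_x", "ELA.W.1"): ("aacu_value", "Written Communication"),
    ("state_x", "ELA.W.2"): ("aacu_value", "Written Communication"),
    ("provider_y", "writing-args"): ("aacu_value", "Written Communication"),
}


def translate(source_scheme, source_code, target_scheme):
    """Translate an outcome code from one scheme to another, if a mapping exists."""
    mapped = CROSSWALK.get((source_scheme, source_code))
    if mapped and mapped[0] == target_scheme:
        return mapped[1]
    return None  # no agreed mapping yet: exactly the gap described above


print(translate("state_x", "ELA.W.1", "aacu_value"))  # -> "Written Communication"
```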
