What does a student know, and how do we know that they know it? This fundamental question in higher education drives learning outcomes assessment, a critical component in providing a quality education. Those who are more removed from the classroom and assessment committees may not give much consideration to the ongoing, demanding work that learning outcomes assessment entails. It is not easy, quick or glamorous. Improving Quality in American Higher Education: Learning Outcomes and Assessment for the 21st Century, edited by Richard Arum, Josipa Roksa, and Amanda Cook, is a good way to understand and appreciate the effort that goes into thoughtful assessment work.
Arum and Roksa are the authors of Academically Adrift, a well-known book that generated a great deal of attention throughout the academy. One of the complaints leveled against it was its heavy reliance on a standardized test, the Collegiate Learning Assessment (CLA). In this new volume, the editors bring to the reader the work of the Measuring College Learning (MCL) project, which was overseen by the Social Science Research Council. The MCL was for and about faculty from across higher education institutions in six disciplines: biology, business, communication, economics, history, and sociology. Each discipline generated a white paper on current and future learning outcomes assessment in that discipline. Those white papers are at the core of Improving Quality in American Higher Education.
The scope and scale of the project are appropriately ambitious. Each discipline has its own organizations, tests, practices and culture. Every discipline asks particular questions, uses different tools, and seeks different answers. Minimizing those distinctions would undermine meaningful learning assessment, so the MCL project emphasized flexibility within a shared structure. Faculty in each discipline came together to identify a limited number of common concepts and common competencies or skills in their discipline. Each of the white papers, or chapters, notes that there was a surprising degree of coherence in the faculty’s work and approach to their discipline. It is an interesting look at disciplinary priorities.
The editors give us context, pointing out the need for the MCL and for a more expansive understanding of higher education when it comes to policy. Grounding the effort are six principles, including: faculty should be at the forefront; students from all backgrounds should be given a fair opportunity to demonstrate their knowledge and skills; any single measure of student learning should be part of a larger plan; institutions should use assessment tools on a voluntary basis; and measurements of student learning should be rigorous, high quality, and help with comparisons over time and across institutions. AAC&U and the National Institute for Learning Outcomes Assessment (NILOA) helped with the work, as did disciplinary organizations. The goal was the creation of frameworks for ongoing work.
Improving Quality is not a book for a general audience. It gets into the weeds in each discipline and demands attention to academic process. Familiarity with how learning outcomes assessment takes place at the course, department, program and college level also makes it much easier to follow. Setting those demands on one side of the ledger, on the other this book provides great insight into how wise and committed faculty can build better structures to improve teaching and learning. It is meaningful and deliberate work that resists the sound bite or flashy headline. It does, though, really make a difference in how higher education pursues quality.
David Potash