How do we know if a student has learned enough to attain a degree or credential? Today the answer is most likely phrased in credit hours: 64 semester hours to earn an associate’s degree, 128 semester hours to earn a bachelor’s degree, and so on. But the credit hour, the most widely used currency for determining the work put in toward a degree, was never intended to measure student learning at all.

In a paper for the New America Foundation and Education Sector, “Cracking the Credit Hour,” Amy Laitinen explains the history of the credit hour as a tool not for measuring learning but for awarding pensions. It began as a way to compare high school work, but quickly migrated to colleges through Andrew Carnegie’s work as a trustee of Cornell University:

In the late 1800s, the National Education Association endorsed the concept of a ‘standard unit’ of time that students spent on a subject as an easy-to-compare measure. But the idea of standard time units didn’t stick until later, when Andrew Carnegie set out to fix a problem that had nothing to do with high school courses: the lack of pensions for college professors.

When promoting a new pension program for college professors, Carnegie tied eligibility to whether or not colleges used the new high school credit hour, which has since been known as the “Carnegie unit.” The system quickly codified the credit hour as a unit representing one hour of class time per week over a 15-week term, and colleges adopted this measure as the basis for determining work toward a degree.

Nothing in the history or application of the credit hour, however, attempts to measure student outcomes other than time spent in a classroom. “College degrees are still largely awarded based on ‘time served,’ rather than learning achieved, despite recent research suggesting that shocking numbers of college students graduate having learned very little,” Laitinen writes.

This problem is familiar to anyone who has worked with transfer credits. Institutions struggle with how many credits to accept for transfer from other institutions, a situation that would be less of an issue if one credit of English 101, for example, indicated a standard amount of learning rather than seat time. Institutions have attempted to remedy this with articulation agreements, but these agreements can be difficult for institutions to negotiate and for students to understand. “In other words, through its everyday actions, the higher education system itself routinely rejects the idea that credit hours are a reliable measure of how much students have learned,” writes Laitinen.

Enter the federal government. “The time-based credit hour has been used by the federal government to determine how much and for what period of time federal aid such as the Pell Grant, the largest aid program, is available to students,” writes Judith S. Eaton, president of the Council for Higher Education Accreditation (CHEA), in an article titled “Care, Caution and the Credit Hour Conversation.” “While the academy is not trying to do the government’s work of figuring out federal financing of the credit hour, the government has displayed considerable interest in doing the academy’s work—determining and judging student learning.”

The federal government became particularly involved in 2011, when the U.S. Department of Education mandated a definition of the credit hour and required accrediting agencies to enforce it. The definition retains at its heart the traditional formula of one hour in class plus two hours of study outside of class each week, but it allows for both time-based and outcomes-based ways of awarding credit. (Under the time-based formula, a single credit works out to roughly 15 hours of instruction and 30 hours of outside work over a 15-week term.)

Anyone who has read the definition, however, will recognize the complexity of the federal credit hour language, which Eaton describes as “complex and a work in progress.” “Given that the definition of credit hour has been the province of the academy for more than 100 years, why, for the first time, is the government defining this concept, superseding the work of the academy?” Eaton writes.

Learning outcomes as the new academic currency

Clearly, the credit hour is now the de facto currency in higher education. But what if there were a new currency? Deb Adair, director of Quality Matters (www.qualitymatters.org), suggests using learning outcomes to track student achievements. “We need to stop evaluating on proxy measures. We don’t have a really good measure of assessing what students know and can do,” she says.

Using learning outcomes instead of credit hours would allow institutions to look at student progress within a single course, within a program, or even between institutions, providing a better measure of what students have actually learned. This information could then be passed to the next instructor in the program or to the next institution the student attends, giving that instructor more complete direction for helping the student.

Such baseline assessments are already in place. Adair explains that institutions already assess transfer credit, deciding whether the local community college’s English 101 is substantially equivalent to their own. The same kind of judgment could be made on the basis of learning outcomes, such as the ability to write a term paper or to demonstrate critical thinking in academic analysis. Faculty members also routinely participate in this kind of assessment when they design a program and the courses that go into it, a process that often involves identifying the skills students should acquire and constructing courses designed to develop them. A curriculum map often shows the progression of desired skill development.

Further, faculty members often capture a great deal of data in a course that is lost in the grading process. For example, those who use rubrics may capture everything from how well a student uses MLA format in a research paper to how well the student exhibits grade-level critical thinking skills. However, this information is lost as soon as the course assessment is reduced to a single grade. To take an extreme example, in some classes a student might pass comfortably without demonstrating appropriate critical thinking, simply by meeting all the other criteria, leaving those who teach the student next with little to go on. “Grading rubrics are really helpful for faculty and can streamline evaluation. If you can track that, you will have a rich data set,” says Adair.
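To make the kind of information loss Adair describes concrete, here is a minimal, purely hypothetical sketch in Python; the rubric criteria, point scales, and pass cutoff are invented for the example and do not come from Quality Matters or any particular institution.

```python
# Hypothetical illustration: per-criterion rubric data versus a single grade.
# Criteria names, scores, and the 70% pass cutoff are invented for this sketch.

rubric_scores = {
    "mla_format": 4,           # out of 4: citations and formatting
    "thesis_and_argument": 4,  # out of 4: clarity of the main claim
    "critical_thinking": 1,    # out of 4: analysis and evaluation of sources
}

# Collapsing to a single grade is the step where the detail disappears.
percent = sum(rubric_scores.values()) / (4 * len(rubric_scores)) * 100
grade = "Pass" if percent >= 70 else "Fail"

print(grade)          # "Pass" -- about 75%, even though critical thinking scored 1 of 4
print(rubric_scores)  # the per-criterion record the next instructor never sees
```

In this toy example, the per-criterion record rather than the collapsed grade is the “rich data set” Adair refers to: the student passes, but the weakness in critical thinking is visible only if the rubric data itself is tracked.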

Adair explains that some schools are experimenting with competency-based models and even entire degrees based on student competency. There are examples of schools that issue traditional transcripts along with a competency list, while others use a portfolio process to demonstrate competency. Although the conversation is new, it is an important one to begin as institutions grapple with demonstrating that their students have not just attended but have actually learned. “Faculty are already doing the work of evaluating,” says Adair. “We need to do a better job of tracking.”

Jennifer Lorenzetti is editor of Academic Leader and a member of the Leadership in Higher Education Conference advisory board. She is a writer, speaker, higher education consultant, and the owner of Hilltop Communications. 

Reprinted from Academic Leader, 30.1 (2014): 6, 7. © Magna Publications. All rights reserved.