Toward a Framework for the Assessment of Integrative Learning

October 25, 2007

In the background paper for the Carnegie Foundation/Association of American Colleges & Universities project “Integrative Learning: Opportunities to Connect,” Carnegie Senior Scholars Mary Huber and Pat Hutchings summarized the promise and difficulty in fostering and assessing integrative learning within disciplines, across disciplines, between curriculum and co-curriculum, and between academic and professional knowledge and practice. The challenges are familiar and daunting. Despite the near ubiquity of “general education” requirements and the lofty language contained in many college mission statements, the predominant reality is that the college curricular experience is largely fragmented and general education requirements are still viewed by many as something to be “gotten out of the way” before the real business of college begins. The attempts to foster integrative learning through such activities as first-year learning seminars, learning communities, interdisciplinary studies, community-based learning, capstone projects and portfolios tend to be limited to a small number of students and generally isolated from other parts of the curriculum. Moreover, the historically insular character of departments, especially at larger universities, still militates powerfully against coherent efforts at fostering integrative learning in students.

It should therefore not come as a surprise that sustained efforts at assessing integrative learning, and good examples of such assessment, are rare. But existence proofs can be found. In this brief paper, I outline some of the characteristics that a good assessment of integrative learning in its various forms should possess. I lay claim to neither breadth of coverage nor depth of analysis. Rather, what follows is an attempt to specify some desirable properties of a sound assessment of the varied forms of integrative learning, from the individual classroom to a summative evaluation of the college experience, and finally to participation in civic life and discourse.

We should note at the outset that there is an understandable reluctance on the part of many faculty to attempt a formal assessment of such concepts as “liberal education” and “integrative learning.” Many feel that such attempts will ultimately trivialize these notions and induce students to adopt a formulaic approach to the assessment. There are good historical reasons for this reluctance. Educational testing is awash with examples of well-motivated and high-minded visions of important educational outcomes that have become polluted by the high-stakes character such assessments eventually assume. The SAT in college admissions testing is a classic case in point. Nevertheless, the attempt at assessment must be made, for it is axiomatic that if a goal of education is not assessed, then from the student’s perspective it is not valued.

Forms of Assessment
Assessment specialists make a distinction between objectively scored, standardized tests on the one hand, and “performance” tests on the other. Examples of the former include multiple-choice tests, true-false tests, and matching tests. Performance tests, by contrast, are “product- and behavior-based measurements based on settings designed to emulate real life contexts or conditions in which specific knowledge or skills are applied” (Standards for Educational and Psychological Testing, 1999). The virtues and shortcomings of both types of tests are well known. Objective tests can cover an impressively broad area of knowledge, but only in a shallow and relatively impoverished manner. Their hallmark, of course, is their efficiency in scoring. This essay starts with the premise that only performance tests are viable candidates for assessing integrative learning. Scoring such tests is typically labor intensive and may involve considerable time and effort in rubric development and assessor calibration. Short answer assessments are almost by definition inappropriate as measures of integrative learning. No multiple-choice, true-false or matching test can adequately capture students’ ability to integrate what they have learned and display skill in using their knowledge to solve important problems, argue a position, or participate meaningfully in civic life. Equally inappropriate are “well-structured” problems—problems that can be solved quickly, typically have single “correct” answers, and can be easily scored. In fact, the acid test of whether an assessment is inappropriate as a measure of integrative learning is the ease with which it can be scored. In general, the easier the scoring the less likely the assessment will be a viable candidate for gauging integration.

The Centrality of Writing
Before considering some of the elements of a sound system for the assessment of integrative learning, it may be well to discuss briefly the central role of writing in the majority of attempts to gauge integrative learning. Although not all disciplines require writing, and indeed an entire category of artistic endeavor (the performing arts) requires virtually no writing, these are the exceptions. In the vast majority of disciplines, writing about what one knows and can do is the predominant response mode. The requirement to write sometimes introduces a problem known in measurement circles as “construct irrelevant variance.” This concept is best illustrated by example. Imagine a test of quantitative reasoning ability that involves complicated word problems drawing heavily upon the student’s ability to decode verbal text. If the difficulty level of the verbal material is sufficiently high, the intended object of measurement (quantitative ability) may be confounded with verbal skills. That is, two persons of comparable quantitative ability could differ in their performance because of differences in the conceptually unrelated construct “verbal ability.”
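
The confounding is easy to see in a toy simulation (my illustration, not part of the original essay; all numbers, including the 0.5 verbal “load” on the score, are invented): even among examinees of essentially equal quantitative ability, observed scores still track the irrelevant verbal trait.

```python
# A toy simulation of construct irrelevant variance (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
quant = rng.normal(0, 1, n)    # the intended construct: quantitative ability
verbal = rng.normal(0, 1, n)   # the irrelevant construct: verbal ability

# Hypothetical scoring model: wordy problems let verbal ability "leak"
# into the quantitative score (the 0.5 loading is an assumption).
score = 1.0 * quant + 0.5 * verbal + rng.normal(0, 0.5, n)

# Hold quantitative ability essentially fixed (a narrow band around the
# mean) and check whether scores still vary with the unrelated verbal trait.
band = np.abs(quant) < 0.1
r = np.corrcoef(score[band], verbal[band])[0, 1]
print(f"correlation of score with verbal ability, quant held fixed: {r:.2f}")
# Prints roughly 0.7: persons of comparable quantitative ability differ
# in performance purely because of the confounded verbal construct.
```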

Construct irrelevant variance is a problem that formal test developers studiously guard against, but it should not distract us here. Full speed ahead. In the assessment of integrative learning, either in the classroom or as a summative senior-year experience, the requirement to write about what one knows should not be viewed as a nuisance. In this context, writing ability is not a confounding variable. I believe that one’s writing provides a reliable and valid insight into one’s thinking, which has often been defined as silent speech. Thinking is probably more than that, but I believe the analogy is largely apt. If you cannot write clearly and intelligibly (not brilliantly or eloquently, just clearly and intelligibly) about what you know and understand, perhaps it is because you do not really know and you do not really understand.

The Elements of a Sound System for Assessing Integrative Learning
A sound system for the comprehensive performance assessment of integrative learning consists, at a minimum, of the following elements:

(1) The development of a framework and set of assessment specifications; that is, a clear statement of what is to be assessed. This is typically a team effort, and in the present context includes all relevant disciplinary faculty, and in some cases top administrative officials as well.
(2) Development of exercises that reflect the agreed upon assessment specifications. This is no mean task and will require a faculty willing to work to iron out differences of opinion regarding content and emphasis. But it can be done.
(3) A scoring rubric, typically on a 4-point scale, that describes in some detail the elements and characteristics of “inadequate,” “acceptable,” “competent” and “superior” performance.
(4) An assessor (i.e., faculty) training protocol and a procedure for assessor calibration.
(5) A procedure for adjudicating disagreements between assessors.
(6) A quality control mechanism for assuring that assessors remain calibrated and do not “drift” over time (a minimal sketch of one such check follows this list).
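
By way of illustration only (the essay prescribes no particular statistic), one common calibration check is Cohen’s kappa: the chance-corrected agreement between two assessors who independently score the same student products on the 4-point rubric of element (3). The scores below are hypothetical.

```python
# Illustrative drift check: Cohen's kappa between two assessors who
# independently scored the same ten student products on the 4-point
# rubric (0 = inadequate, 1 = acceptable, 2 = competent, 3 = superior).
import numpy as np

def cohens_kappa(r1, r2, n_levels=4):
    """Unweighted Cohen's kappa for two raters on an ordinal rubric."""
    table = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        table[a, b] += 1          # joint frequency of the two ratings
    table /= table.sum()
    observed = np.trace(table)                # observed agreement rate
    expected = table.sum(1) @ table.sum(0)    # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical scores from one scoring session.
rater_a = [0, 1, 1, 2, 2, 2, 3, 3, 1, 2]
rater_b = [0, 1, 2, 2, 2, 1, 3, 3, 1, 2]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.71
```

Recomputing kappa after each scoring session, and recalibrating assessors when it falls below an agreed threshold, is one simple way to operationalize element (6).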

Although not a formal part of the assessment, one additional element should be a central component of a fair and valid assessment of integrative learning: what is expected of students, and the scoring rubric that will be applied to their products, should be made public and widely disseminated. There is no need for mystery or secrecy here. In fact, superior as well as inadequate samples of student attempts at integration (possibly with detailed annotations) should be available to students, perhaps on the Internet, so that there is no doubt about what makes some attempts at integration better than others.

No element in the above list should be treated lightly. An apt metaphor for the soundness of an assessment system for integrative learning is the familiar adage, “A chain is only as strong as its weakest link.” An otherwise superior assessment system can be destroyed by, for example, poor assessor training and calibration, and an outstanding and thorough assessment framework can be rendered useless if scoring is flawed.

Assessing Integration: Notes from the Field
Although the notion of integrative learning may in some sense be a unitary concept, in practice it takes different forms depending upon the level of integration desired. At the level of the academic department in, say, a college of arts and sciences, the goal may be for students to integrate the many concepts within a given discipline toward the solution of theoretical or practical problems, or to integrate their knowledge of two or more disciplines toward the solution of a practical problem. In professional education, the concern is typically that of putting professional knowledge into practice. At the highest institutional level, where “integrative learning” and “liberal education” become virtually indistinguishable, the goal is that students go beyond the integration of formal disciplines to adopt an enlightened disposition toward the world of work, society and civic life. Let us consider specific examples of each of these in turn.

The Assessment of Integrative Learning within a Discipline: An Example from Statistics
In one of the Carnegie Foundation’s Perspectives essays, I cited the example of a gifted instructor who gauged his own teaching by assigning an introductory statistics class a simple question about which of three local grocery stores had the lowest prices. Briefly, teams of three students were each given a week to grapple with this simple question and to describe and justify what they did to arrive at an answer. The same question was repeated at the end of the semester, after the class had been introduced to the elegant procedures of statistical inference. As I noted in that essay, the difference in quality between the before and after responses was astonishing.

Although this example was discussed in the context of an argument for pre/post or value-added testing in the classroom, it also serves powerfully to illustrate that the assessment of integrative learning within a discipline is within reach of the vast majority of instructors. The grocery store question is simple on its face, but the thought behind it, and the things students must do and know to respond adequately, are far from simple. The question has enormous “pulling power”: it evokes a variety of responses and approaches, and it provides deep insight into students’ thinking, into how they organize and integrate what they know to sustain a position. The problem requires the student to devise a sampling plan, to determine whether statistical weighting is appropriate, to decide upon an appropriate measure of central tendency, to specify a level of statistical significance, to actually carry out the plan, and finally, to analyze and report the results. In short, responses to the question reflect students’ ability to integrate virtually the sum total of their knowledge of inferential statistical procedures.
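
To make that chain of decisions concrete, here is one hedged sketch of an analysis a student team might produce (not the instructor’s actual assignment or any team’s real work; the basket size, the stores’ price effects, and the choice of the Friedman test are all my own assumptions): price the same randomly sampled basket of items at each store, compare medians, and test whether the stores differ.

```python
# A hypothetical analysis of the grocery-store question (all data invented).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n_items = 30                          # the sampling plan: a 30-item basket
base = rng.uniform(1, 10, n_items)    # underlying item prices in dollars

# Assumed store effects: store C runs about 5% cheaper on average.
prices = {
    "A": base * rng.normal(1.00, 0.03, n_items),
    "B": base * rng.normal(1.01, 0.03, n_items),
    "C": base * rng.normal(0.95, 0.03, n_items),
}

# Measure of central tendency: the median resists outlier items.
for store, p in prices.items():
    print(f"store {store}: median item price ${np.median(p):.2f}")

# Friedman test: a nonparametric repeated-measures test, fitting because
# the same items (blocks) are priced at all three stores.
stat, pval = friedmanchisquare(prices["A"], prices["B"], prices["C"])
print(f"Friedman chi-square = {stat:.1f}, p = {pval:.4f}")  # test at alpha = .05
```

Defending each of those choices in writing, from the basket to the test statistic, is precisely the act of integration the question is designed to elicit.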

Assessing Integrative Learning across Disciplines
Integrative learning across disciplines, and its assessment, present special challenges. First, individual professors may not know enough about the various fields to develop assessments and evaluate responses. This implies a team effort, with all of the administrative, personality and logistical problems that entails. Integration across disciplines also challenges us as educators to be more deliberate about how we see our own discipline and its connection with other disciplines, with education generally, and with life after formal schooling.

Some professions and majors appear to be natural venues for the development and assessment of cross-disciplinary integration. Engineering, ecology, history, urban planning and social work come immediately to mind, but architecture provides perhaps the archetypal example of a major where integrating across disciplines is not just an ideal; it lies at the very heart of professional practice. Among other things, architects must creatively integrate their knowledge of mathematics, structural engineering, geology, space and human interaction, not to mention their sense of the aesthetic. And although the “table of specifications” for their work may often be quite detailed, the problems they face are fundamentally ill-structured and there is never a single “right” answer. The great span across the Golden Gate could well have been something quite different from the graceful leap we have admired for generations.

In like manner, ecologists must integrate their understanding of various biological, chemical and social phenomena in such a way that the natural environment remains congenial to healthy plant and animal life while at the same time ensuring that economic growth and prosperity are not fatally compromised. Social work majors must integrate their knowledge of developmental, cognitive and social psychology, and of marriage and family relations. Learning portfolios and senior capstone projects that require urban planning or ecology majors to analyze a proposed construction project and its environmental impact are excellent examples of assessing integrative learning and thinking toward the solution of practical problems. Requiring a social work student to write a case study report on a wayward adolescent can likewise be framed in such a way that it provides profound insight into her ability to integrate the disciplines relevant to her work.

Assessing Integrative Learning at the Institutional Level
Perhaps nowhere are the measurement challenges more elusive and intractable than in the assessment of integrative learning at the institutional level. Here, integrative learning and liberal education are virtually synonymous concepts.

Although many scholars (beginning with Aristotle and continuing to the present day with Mortimer Adler, Lee Shulman, Robert Hutchins and others) have thought and written widely about the vision of the liberally educated individual, perhaps the most eloquent statement of that vision was crafted over a century ago by William Johnson Cory, the nineteenth-century master at Eton. Speaking to an incoming class, he said:

At school you are not engaged so much in acquiring knowledge as in making mental efforts under criticism…A certain amount of knowledge you can indeed with average faculties acquire so as to retain; nor need you regret the hours you spend on much that is forgotten, for the shadow of lost knowledge at least protects you from many illusions. But you go to school not so much for knowledge as for arts and habits; for the habit of attention, for the art of expression, for the art of assuming at a moment’s notice, a new intellectual position, for the art of entering quickly into another person’s thoughts, for the habit of submitting to censure and refutation, for the art of indicating assent or dissent in graduated terms, for the habit of regarding minute points of accuracy, for the art of working out what is possible in a given time; for taste, for discrimination, for mental courage and mental soberness.

Exemplary efforts to assess this vision are hard to find. The long and venerable assessment work at Alverno College perhaps comes closest. An extended discussion of these efforts is beyond the scope of this brief essay, but two publications describing the heroic work at Alverno are well worth the read: Student Assessment-as-Learning at Alverno College (The Alverno College Faculty, 1994) and the award-winning Learning That Lasts: Integrating Learning, Development, and Performance in College and Beyond (M. Mentkowski & Associates, 2000).

An axiom of the measurement and assessment community is “If you would understand a phenomenon, try to measure it.” Attempts to assess whether the undergraduate college experience has equipped students with the disposition to integrate the knowledge and skills they have acquired may well be the most important assessment challenge in higher education today. But initial attempts need not be flawless models of formal assessment; rather, it is important that the attempts be made, for the effort alone will go far in making clear to students one of the important goals of education, and in showing faculty where they have succeeded and where work still needs to be done.