I promise to get back to the issues of object-oriented ethics in my next post, when I am home and have books nearby. Today I am back playing my administrative role and find myself thinking along those lines in reading this recent report from the Social Science Research Council that once again raises the alarm of a crisis in higher education. Namely, according to their longitudinal study, students do not develop as critical thinkers in the way that they should, and when it comes down to it, the primary culprit is a lack of "rigor": undergrads need to be asked to write more and read more.
In my view, there are oh so many curious things about this report and its kin.
The report is based on the results of a test called the Collegiate Learning Assessment (CLA). As the report describes:
The CLA aims to measure general skills-based competencies such as critical thinking, analytical reasoning, and written communication. Measures used to assess student learning consist of three sets of open-ended prompts that have been carefully constructed in consultation with experts on student assessment and elaborately pre-tested and piloted in prior work. The three components include: performance task, make an argument, and break an argument. We focus on the performance task because that component of the CLA was administered most uniformly across institutions, had the largest completion rate, and is the state-of-the-art component of the assessment instrument. The performance task allows students 90 minutes to respond to a writing prompt that represents a "real-world" scenario in which they need to use a range of background documents (from memos and newspaper articles to reports, journal articles, and graphic representations) to solve a task or a dilemma.
In short, it's a timed essay examination. Now maybe you believe that performance on a timed essay exam tells you something important about human beings, and maybe you don't. Clearly the act in itself is not especially valuable. Perhaps, though, performance here is predictive of performance elsewhere on tasks that we might indeed value. Even so, in my view the measuring tool is itself flawed.
The report goes on to state that students who read more, wrote more, and studied alone more tended to perform better on the test. OK, so what you're telling me is that students who spent more time reading and writing independently did better on a test that essentially measures one's ability to read and write independently.
How much did this study cost again?
What's implicit, of course, is that the true measure of "critical thinking" (aka intelligence) is the ability to sit alone and read and write. And how do I know that? Well, I know that because that's what I do and I'm smart, right? Ahh, the mini-me pedagogy never ceases to amaze.
Shall we press on? The central problem the report identifies is that over the four years of college, students on the whole make little improvement. They say, for example, "Students who scored at the 50th percentile of students in their entering freshman cohort would have moved up only to the 68th percentile after four years of college (if, when graduating college, the students retook the test with a new cohort of entering freshmen)." Gee, that doesn't sound too good. I wonder what the results would have been like in 1950 or 1980. Too bad we don't know. It sounds bad because we have a mythology about learning that assumes improvement should be more dramatic. We have an expectation that the results should be different, but maybe our expectation, rather than our methods, is what's wrong.
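For a sense of scale, that percentile shift can be converted into an effect size. Here's a quick back-of-the-envelope sketch (my arithmetic, not the report's, and it assumes roughly normally distributed scores):

```python
# Back-of-the-envelope: what does moving from the 50th to the 68th
# percentile of an entering-freshman cohort mean in standard deviations?
# Assuming roughly normal score distributions (my assumption, not the
# report's), the gain is the z-score whose cumulative probability is 0.68.
from scipy.stats import norm

gain_in_sds = norm.ppf(0.68) - norm.ppf(0.50)  # ppf(0.50) is 0
print(f"Gain over four years: about {gain_in_sds:.2f} standard deviations")
# -> Gain over four years: about 0.47 standard deviations
```

In other words, under that assumption we're talking about roughly half a standard deviation of improvement over four years, which is the number doing all the alarm-raising.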
One thing the report certainly gets right is that, on the whole, college students don't spend a lot of time doing the kinds of activities that they are asked to perform on the CLA. Given that they don't spend a lot of time reading and writing, it's not terribly surprising that their improvement on this general test is not that great. Of course, the absurd error here, performed over and over and over, is the assumption that writing is a generalizable skill. Let's imagine a different test, shall we? Let's imagine we take first-year students with declared majors and give them a test that asks them to read material from their field, analyze it, and produce a text that reflects an appropriate disciplinary genre. Then give them the same test when they are seniors. Do you think the improvement from first-year to senior would be more dramatic than what was found with the CLA? I'm betting it would. By the same token, I'm probably no better at writing a chemistry lab report today than I was when I was 20. I guess my education failed me, huh?
That said, here are things in the report that I agree with:
- IF we want to make the goal of higher education the ability to perform tasks such as the one represented in the CLA, then we absolutely need to spend more time teaching reading and writing. Not just assigning more reading and writing, by the way, but actually teaching people how to do it. And I would contend that few if any universities have a faculty capable of meeting that goal, so we would be talking about a fundamental restructuring of not only undergraduate education but also the graduate education that would prepare future faculty in all disciplines. (Good luck with that, btw.)
- I don't find it surprising that liberal arts majors performed better than students in other areas. Maybe it's because they read and write more, though I'm not sure if that is the case with science and math students, who actually performed the best on the CLA. I would suggest that students who major in the liberal arts are more likely to have come to college out of a desire to learn rather than the necessity of a degree for a job. When it comes down to it, if you don't want to learn, you probably won't. And it won't matter how much I ask you to read or write.
- We do need a greater emphasis on undergraduate education. We could start by reducing the number of courses taught by adjuncts. That's not to say that adjuncts as a group are better or worse teachers than professors (who are generally not hired or tenured for their teaching ability, especially at universities). But professors are better positioned to participate in the larger shaping of curriculum and students over time than adjuncts are.
However, I don't think that our reformation of higher education should be imagined as trying to build some kind of time portal to the 19th or even the 20th century. My measure of success for higher education would be something along the lines of "Can students work together over an extended period of time (i.e., weeks) to investigate, research, and experiment, and to produce some creative, insightful, compelling solution, analysis, or response to an object, an issue, an experience, etc.?" If that seems overly general, it's my attempt to account for disciplinary differences. Let me try it more succinctly: can a group of students make something that works?
You tell me if that is a higher, lower, more or less rigorous standard than what is proposed in that report.