Andrea Loughry is the vice chair of the University of Tennessee System. In the most recent issue of Trusteeship she expresses the opinion that although there are "so many rankings (of institutions), there are so few measures of student learning" (p. 37).
Now before I respond in a calm and reasonable fashion, let me say that as a long-time inhabitant of the trenches on the front line of higher education, I don’t want boards of governance involved in measuring student learning any more than I want public officials involved. The reason is simple: neither one knows what the heck they’re doing.
Trusteeship is not a publication that I recommend to any serious academic, but I skim the articles and commentaries in each issue out of a sense of duty. I interact with trustees on a regular basis, and I like to know what these people who “run the show” think and read. It doesn’t inspire great confidence; fortunately, most trustees of my acquaintance are successful professionals who don’t have time to read this journal.
Now back to the commentary. What does Andrea mean when she says there are so few measures of student learning? What does this lady think I do with my time? What does she think I'm doing in my classes when I make students do all that work, read all that assigned material, write all those papers, engage in all those diverse laboratory activities and discussions, research the many assignments, and take all the exams? Students don't think the learning measures are few. And does Andrea think students who can do all these things successfully have still not demonstrated any learning? Or is it that I have failed to demonstrate to my trustees that they have learned anything? Do they want to see all the papers, assignments, exams, essays, and notebooks? I can have them boxed up and delivered. If such activities are not measuring and assessing student learning, then why the heck am I bothering?
In case you are unfamiliar with educational jargon, let me introduce you to “learning outcome measures.” If you are teaching faculty, you may be surprised to learn that you already use these. LOMs are basically all the things you do to evaluate student learning, but only people in colleges of education felt a noun-phrase label was needed. Here’s a more official definition: LOMs are any “activity, product, behavior, knowledge, skill, ability, or attitude that we want a student to manifest in measurable or observable ways.” I don’t have a problem with any of this, although phrases like “learning outcome measures” hurt my brain. But here’s the rub, and where Andrea and I part ways.
Andrea calls on her fellow trustees to ensure that their institutions are setting high standards and “measuring student learning in a transparent manner.” I wish I knew what she actually meant. To me being transparent means making sure students understand what they are expected to do and know, and how I will evaluate it. But I think Andrea wants some kind of "learning measure" that can lead to comparisons among institutions, and she suggests learning outcomes measured “by a variety of broad and narrow tests selected by the institutions.”
Yikes. No Child Left Behind testing comes to higher education! Oh, and this is better in what way than having public officials demand testing? So much for all the diverse, multifaceted LOMs that I employ to determine student learning in my classes! No, my trustees are being urged to use some type of banal standardized exam to obtain transparency in student learning assessment. Well, the outcomes will be transparent all right, because they will have no substance. One might say such testing will produce ethereal results.
Such testing is not going to capture the nuances of economic botany, plant taxonomy, rain forest ecology, or any other subject area, for that matter. What a crock! Any trustee who suggests such a thing should be dismissed on the spot for having displayed such a gross misunderstanding of the educational enterprise.
The take-home message is pretty straightforward. First, trustees like Andrea obviously don’t trust me to do my job. Second, somehow trustees understand better than I do how to set high standards and measure student learning. Third, and with all due respect to well-meaning trustees, this is what you get when amateurs are allowed to run an educational organization. And you only get this in higher education. Housekeepers don’t sit on the board of Procter & Gamble, but people whose only association with higher education is that they were once students who graduated from your institution are placed in the position of telling me how to do my job.
Oh, Andrea, it scares me that you don’t know better. Now I don’t mind telling you or showing you how I evaluate my students. And I guarantee that I set high standards and get my students to perform. But no testing you can imagine is going to give you something that allows comparison of student learning among institutions. Testing is not an assessment; how my students perform once they graduate, now that’s an assessment. And this is why faculty like me will fight tooth and nail to prevent something as banal as “a variety of broad and narrow tests” from acting as a stand-in for us, the only people in a position to actually measure student learning, even if most of us don't know what an LOM is.