Field of Science

Apple evaluation

With October looming, it was time to get out the baskets and ladders and harvest the apple crop.  As this was the first of what is hoped to be many (and perhaps larger) crops, the correct harvest time was unknown, but in general the Urban Apples are mid-season varieties that usually ripen in late September, so as the risk and worry of losing the entire crop to tree rats increased, the crop was picked.  It has been so warm that many apple varieties have stayed quite green, so color may not be a good guide to ripeness.  Another part of the reason was that yesterday the Phactors foraged successfully for squashes and apples, and as a result had Jonathan, Winesap, Jonagold, and Northern Spy apples on hand for tasting and comparison with our Tart Lime Urban Apple. Our evaluation was that the apple was perfectly ripe, firm and crisp but not hard.  It was juicy, with a nice appley sweet-tart flavor, not as complex perhaps as the Spy, but certainly not bad.  The F1 was quite impressed, and our overall finding for the Tart Lime Urban Apple was highly favorable. Just so you know, the Phactors like firmer, tarter apples than many people, who seem to prefer softer, sweeter apples.  As kids a McIntosh was fine, but our palates matured. 
And it's the same with apple cider. It isn't, and shouldn't be, just apple juice. It needs some of that tartness and maybe just a hint of cold-stored fermentation to be any good.  Some grocery store cider we got was just too-sweet apple juice; it was fine for sweetening and flavoring a curried squash soup, but no good to drink.  Maybe it needs a pinch of champagne yeast and a week or so of cold curing? 

Assessing assessment in higher education


In an article entitled “The Walking Dead in Higher Ed” (whatever that means; it never says), Geoff Irvine tees off on what passes for assessment in higher education, and at the institutional level, it is indeed woeful. So what the ever-loving hell does this guy mean when he says “they [colleges and universities] can’t prove that students are learning”? That is what this faculty member has done for the past 35 years! TPP constantly assesses student learning: they learn content, they learn to make connections with other fields of knowledge (a hallmark of a liberal education), they learn to understand concepts in more sophisticated ways, they learn to think, they learn to observe, they learn how to frame questions and test them, they learn how to learn. All of this can be demonstrated to just about anybody who has the time to wade through all the materials collected during a typical semester-long course. However, multiply this by the number of courses and the mountain of material becomes insurmountable, so the university hired this faculty member to provide a single-letter summary, a quite limited metric to be sure, and then they take my word for it that students have learned what the course was intended to teach them. Now of course what administrators really want, and what the non-educators who impose their views on higher education really want, is some nice easy metric that says this university is this much better or worse than that university (nothing new here). Oh, there are lots of metrics that don’t have anything to do with learning, e.g., graduation rate. It is important to graduate students at a high rate, but it may mean you are a good university, or an average university with very good students, or an easy university; it all can look the same. Teaching and learning are complicated things to assess, and at higher, institutional levels they never are assessed, because the essential interaction is at the grass-roots level between faculty and students. TPP has tried more new things, more new approaches, and more new techniques than you can imagine; some work well and others are quickly discarded. But really, you think students can pretend to learn and this faculty member won't notice?

What does this guy mean when he says, “the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys”? Course evaluation surveys tell you one thing and one thing only: they tell you what students liked and what they didn’t like. As a long-time professor, TPP can assure you that students can learn a great deal from things they don’t like, but fortunately, as a pretty creative instructor, he has found a lot of interesting, and yes, fun things from which they also learn. And of this learning he has direct evidence in many forms. Still, you must take my word for it, because even with all the materials you won’t know what is evidence of learning and what isn’t without my input.  To demonstrate this to an administrator once, TPP wrote three short justifications for a research project, told them one was a total fabrication, an out-and-out fib, and challenged them to pick it out. Of course, in a distant field, someone could do the same to me. 

Geoff says, “the problem is that faculty and program leaders are indeed content experts, but they are no more versed in effective assessment of student outcomes than anyone else on campus.” Say what? Who else is going to assess whether students learned anything in my class? Who else would know?  You must challenge students with situations, some few of which are called exams, where, on blank pieces of paper, they must first draw the dots, then label the dots using correct terminology and examples, and then connect the dots in a manner that demonstrates learning. 
What Geoff was trying to accomplish with his article was pretty easy to figure out: he’s an entrepreneur, a huckster for his own company’s assessment product. He’s just one more assessment consultant trying to win the hearts and minds of clueless administrators who believe his assessment-critique BS. Yes, that’s right, Geoff Irvine is the CEO of Chalk and Wire, where you will find “all you need, to achieve your assessment vision”.  TPP envisions people learning the correct use of commas, but here we have evidence that they don’t, an illustrative assessment.

Inside Higher Ed should provide a disclaimer before they let a guy with a vested monetary interest in criticizing assessment use their publication as a bully pulpit. In case it isn't obvious, faculty generally dislike guys like Geoff because they seem to lack a basic understanding of the educational process and to hold a general contempt for what faculty think of as assessment and how faculty assess learning.  At the very end of the article, Geoff says that institutions of higher education need “one essential unified goal: to make certain students are really learning”.  TPP always thought that was our single reason for existing, and to say we need such a goal rather indicates Geoff's disconnect from what he wishes to assess.

Students doing good - Real Assessment

Administrators and politicians always want to know how we know if we've taught our students anything. And of course they want it reduced to a simple little standardized exam. The Phactor has ranted about this issue in the past (here, here, and here), but it's too nice of a day to really rant. But you see, a former student who graduated a bit over a decade ago sent me an email to tell me that even though she mis-IDed a columbine on a final plant ID exam, she hasn't mis-IDed one since. One important thing to notice is that even though an exam indicated a failure of learning, learning took place, so something important was taught. This is something the dim bulbs that run many universities just don't get. Real learning is a complicated thing, and that's why you must rely on faculty to tell you when it has occurred. Students get to assess our teaching and courses even before they've finished a course, and the Phactor has gotten his share of brickbats and kudos, but the point here is simple. Most students don't know how much they learned, and sometimes a course just starts them on a trajectory of continued learning, and isn't that the point of higher education, developing interests and learning to learn? And when we do that, well, that's a real assessment of educational effectiveness. We've taught somebody something really important, and our students may not know or fully realize this until years after they graduate. The same SOB (he'd laugh out loud if he read this) taught the first and last biology course the Phactor ever took as an undergraduate, and it took me years to realize that even though he wasn't a kind, likable, warm and fuzzy type of faculty member, he was an exceedingly influential and effective instructor who was years ahead of the curve on science education, and important lessons were learned that had an impact on my successful career as an educator. So real assessment of real learning, and therefore of effective teaching, takes years. My role was simple: not just to teach some botany, but to instill an interest in plants and learning, and as a result, in some small part, my student has done very well for herself, and the Phactor is very proud of her active role in Seattle Tilth, spreading a legacy that plants are fun, important, and interesting. But many of our fearless leaders aren't very interested in such data points, probably because if they admitted these were important, they'd also have to admit that we faculty know what we're doing. So it will be interesting to see how the new mandate for more assessment of our teaching effectiveness will deal with this. Prediction: it won't.

Assessment versus teaching - again

The Chronicle of Higher Education has just published a commentary that chides teaching faculty for not embracing assessment. And if we don’t embrace assessment, how do we know our students are learning? Do “we rely on evidence that is dubious (teaching evaluations) or circular (grades)”, they ask?
Well, girls, why are you so interested in this issue? Oh, yes, their commentary is actually flogging their soon-to-be-published book on assessment, so you know of their deep commitment to learning. Once again the Phactor will pull on his boots and wade into this issue, because he continues to wonder why these women presume faculty don’t know when students are learning. You think we read about it in student evaluations? Hardly. Why would you think grades circular when those grades reflect levels of learning in evidence after extensive assessment? Basically it means they don’t trust faculty to do their jobs. Or maybe things are way more subjective in their disciplines? Granted, the ABCs are not a nuanced reflection of my extensive assessments, but graduate schools and employers don’t want to read my essays discussing those nuances about various students in various courses; they want a quick shorthand of relative learning: grades. When asked for recommendations, the long version is provided. No, what these twits want is what many administrators want: some sort of broad assessment “instrument” (their word, not mine) that can be used across disciplines, colleges, and universities, although they admit that neither the National Survey of Student Engagement nor any other standardized assessment instrument, blunt as they are, can capture disciplinary knowledge and approaches to critical thought. That, ladies, is why they bloody well need disciplinary experts like me! My conclusion that students have learned something is based upon their relative abilities to meet learning measures of several sorts, including their answers to exam questions.
Here, in its entirety less the more objective portions (definitions, factoids, etc.), is an example from just one of my disciplinary exams for an undergraduate course in plant diversity.
1. Chloroplasts and mitochondria are two of the cellular hallmarks of eukaryotic organisms. Evaluate the hypotheses that account for these organelles based upon observations and testable predictions.
2. Complex metabolisms appear to be constructed of smaller, simpler, ancestral components, some of which adopt new functions. Use photosynthesis and phylogenetic hypotheses to illustrate this concept.
3. Relative to the chemistry of the Universe how usual are the elemental components of life?
4. Ribosomal RNA sequence data provided biologists with a new phylogenetic understanding of all living organisms and had a major effect on our definition of Kingdom Monera. Explain.
5. Chlorophyll is composed of what type of building block molecule? What does phylogeny suggest about the hypothetical origin of chlorophyll? How does it differ in function from its presumed predecessor?
6. What are extremophile organisms, and why might our perspective of what is extreme be somewhat skewed? Why is the biology of extremophiles important to our understanding of early life on Earth?
7. What is the carbon biochemical fingerprint of life and what does it tell us?
8. Why are actin and actin-binding protein so important in the early evolution of eukaryotic organisms?
9. For the longest time no fossil evidence of life was known prior to the Cambrian, when fossils of large, conspicuous organisms suddenly appear. How was fossil evidence of ancient life found? How old and what kinds of fossils were found? Then evaluate the sudden appearance of fossils.
Now, having read those, do you think students who can answer such questions have failed to demonstrate any learning? Think you could pull the answers for an exam like that out of the air and BS me without having done adequate reading and study, and learned something? Do you think the instructor incapable of discriminating objectively among excellent, good, poor, and terrible answers? Oh yes, and later concepts are built upon these concepts, so cramming and purging won't do the trick. Do you still think you can capture disciplinary knowledge and critical thought on some sort of assessment instrument that doesn't simplify them to the banal? Sorry, ladies, you sound clueless about the depth, detail, and sophistication of disciplines, and that is the only gap that exists between teaching and assessment, so best leave assessment to us, the real professionals.

Moving up in the world?

Well, one hardly knows what to say when a humble botanist is quoted at length in the Chronicle of Higher Education simply because one writes a bit of a rant about assessment in response to a typical enough provost who typically enough doesn't get it.