A Facebook post called my attention to a neat little article about why swimming rules only recognize hundredths of seconds even though modern timing technology allows much more precise measurements. The gist is this: swimming rules recognize that construction technology limits the precision with which pools can be built to something like a few centimeters in a 50-meter pool. At top speed a swimmer covers about 2 millimeters in a thousandth of a second. So, if you award places based on differences of thousandths of a second, you can’t know whether you are rewarding faster swimming or the luck of swimming in a shorter lane.
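The arithmetic is easy to check. A rough sketch, using assumed round figures (a top speed of about 2 m/s and a construction tolerance of 3 cm, consistent with the "few centimeters" in the article):

```python
# Back-of-the-envelope check of the swimming example (assumed figures).
speed_m_per_s = 2.0          # assumed top swimming speed, roughly 2 m/s
timing_resolution_s = 0.001  # one thousandth of a second
lane_tolerance_m = 0.03      # assumed construction tolerance: a few centimeters

# Distance covered in one timing tick at top speed: about 2 mm.
distance_per_tick_m = speed_m_per_s * timing_resolution_s

# Time it takes to swim the length of the construction tolerance: about 15 ms,
# an order of magnitude larger than the timing resolution.
tolerance_in_seconds = lane_tolerance_m / speed_m_per_s

print(distance_per_tick_m)   # 0.002
print(tolerance_in_seconds)  # 0.015
```

The lane-length uncertainty swamps the timing precision by a factor of fifteen, which is exactly why thousandths would reward luck rather than speed.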
This observation points to the more general phenomena of false precision, misplaced concreteness (aka reification, hypostatization), and organizational irrationality rooted in sloppy and abusive quantification.
These are endemic in higher education.
Students graduate with a GPA and it’s taken as a real, meaningful thing. But if you look at what goes into it (exams designed more and less well, subjective letter grades on essays, variable “points off” for rule infractions, quirky weighting of assignments, arbitrary conversions of points to letter grades, curves, etc.), you’d have to allow for error bars the size of a city block.
Instructors fret about average scores on teaching evaluations.
“Data driven” policies are built around the analysis of tiny-N samples that are neither random nor representative.
Courses are fielded or not and faculty lines granted or not based on enrollment numbers with no awareness of the contribution of class scheduling, requirement finagling, course content overlap, perceptions of ease, and the wording of titles.
Budgets are built around seat-of-the-pants estimates and negotiated targets.
One could go on.
The bottom line is that decision makers need to recognize how all of these shaky numbers are aggregated to produce what they think are facts about the institution and its environment. This suggests two imperatives. First, we should reduce individual cases of crap quantification. Second, when we bring “facts” together (e.g., enrollment estimates and cost of instruction) we should adopt an “error bar” sensibility – in its simplest form, treat any number as being “likely between X and Y” – so that each next step is attended by an appropriate amount of uncertainty rather than an inappropriate amount of fantasized certainty.
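The “likely between X and Y” habit can even be made mechanical. A minimal sketch, with made-up numbers: carry each shaky figure as a (low, high) range and propagate the range through the calculation, instead of pretending to a single precise value.

```python
# "Error bar" arithmetic in its simplest form: every quantity is a
# (low, high) interval, and combining quantities combines the intervals.
# All figures below are hypothetical, for illustration only.

def mul(a, b):
    """Multiply two (low, high) intervals, assuming non-negative values."""
    return (a[0] * b[0], a[1] * b[1])

enrollment = (80, 120)          # "likely between 80 and 120" students
cost_per_student = (900, 1100)  # "likely between $900 and $1,100"

# The honest answer is a range, not a number.
instruction_cost = mul(enrollment, cost_per_student)
print(instruction_cost)  # (72000, 132000)
```

The point of the exercise is the width of that final interval: a budget line that could plausibly be anywhere from $72,000 to $132,000 should be treated very differently from one “known” to be $100,000.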