

Learning Assessment: Choosing Goals

Introduction

I recently received a copy of a new book on assessment, one that was highlighted at this year's big assessment conference: Fulcher, K. H., & Prendergast, C. (2021). Improving Student Learning at Scale: A How-to Guide for Higher Education. Stylus Publishing, LLC.

This is not a review of that book; I just want to highlight an important idea I came across on pages 60-63 of the paperback edition. The authors contrast two methods, described as deductive or inductive, for selecting learning goals that form the basis for program reporting and (ideally) improvement. Here's how they name the two methods, along with my suggested associations (objective, subjective) in italics:

Deductive (Objective): "A learning area is targeted for improvement because assessment data indicate that students are not performing as expected." (p. 60)

Inductive (Subjective): "[P]otential areas for improvement are identified through the experiences of the faculty (or th
Recent posts

Time to Graduation

How long does it take to complete a bachelor's degree? Furman's four-year graduation rate runs around 75% and the six-year rate is about 81%, so take a guess at the average time to graduation. The answer is an average of 3.8 years from start to graduation, a figure that has been steady for years. Surprised? I was, because the math doesn't seem to add up. Wouldn't it have to be more than four years?

Here's the code I used to calculate time to graduation.

grads <- grads %>%
  mutate(GradDate  = ymd(GradDate),
         StartDate = ymd(paste0(Cohort, "/8/20")),
         Time      = as.numeric((GradDate - StartDate) / 365.25))

This relies on the lubridate library to convert text strings like "05-07-2021" into a date format and to perform the difference calculation in the last line. Subtracting two dates gives the difference in days, which I divided by 365.25 to get years. I approximated the actual start dates by August 20 of the year they enr
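As a quick sanity check of the date arithmetic (this is not the post's R code, just the same calculation sketched in Python for a hypothetical student):

```python
from datetime import date

# Hypothetical example: a student who enrolled in fall 2017
# and graduated in spring 2021 (dates chosen for illustration).
start = date(2017, 8, 20)   # approximate start: August 20 of the cohort year
grad = date(2021, 5, 7)     # graduation date

days = (grad - start).days  # subtracting two dates gives days
years = days / 365.25       # divide by 365.25 to get years

print(days, round(years, 2))  # 1356 days, about 3.71 years
```

A student on the standard eight-semester path comes out around 3.7 years by this measure, because the clock runs from the start of the first fall term to the spring commencement date, not over four full calendar years. That alone makes an average below 4.0 plausible.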

Kuder-Richardson formula 20

My account emails me links to articles it thinks I'll like, and the recommendations are usually pretty good. A couple of weeks ago I came across a paper on the reliability of rubric ratings for critical thinking that way:

Saxton, E., Belanger, S., & Becker, W. (2012). The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments. Assessing Writing, 17(4), 251-270. [link]

Rater agreement is a topic I've been interested in for a while, and the reliability of rubric ratings is important to the credibility of assessment work. I've worked with variance measures like the intra-class correlation and agreement statistics like Fleiss's kappa, but I don't recall seeing Cronbach's alpha used as a rater-agreement statistic before. It usually comes up in assessing test items or survey components. Here's the original reference. Cronbach, L. J. (19
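Since the formula in the post's title is compact, here is a minimal sketch of KR-20 for dichotomous (0/1) item scores. The `kr20` function and the toy data are my own illustration, not from the paper, and I use the population variance of the total scores:

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson formula 20 for a (persons x items) matrix of 0/1 scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                   # number of items
    p = scores.mean(axis=0)               # proportion answering each item correctly
    q = 1 - p                             # proportion answering incorrectly
    total_var = scores.sum(axis=1).var()  # population variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Toy data: 5 examinees, 4 items of increasing difficulty
X = [[1, 1, 1, 1],
     [1, 1, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0]]

print(round(kr20(X), 3))  # 0.8
```

KR-20 is the special case of Cronbach's alpha for dichotomous items, which is the connection that makes alpha show up in both test-item analysis and the rubric-reliability setting.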

trace(AB) = trace(BA)

Last time I showed how we can think of matrix multiplication as (1) a matrix of inner products, or (2) a sum of outer products, and used the latter to unpack the "positive definite" part of the product \(X^tX\). At the end I mentioned that the outer-product version of matrix multiplication makes it easy to show that trace\((AB)\) = trace\((BA)\).

In the late 1980s I took a final exam that asked for the proof, and totally blanked. I mucked around with the sums for a while, and then time ran out. A little embarrassing, but also an indication that I hadn't really grasped the concepts. One can get quite a ways in math by just pushing symbols around, without developing intuition.

Some Sums

The matrix multiplication \(AB = C\) is usually defined as an array of inner products, so the entry in the \(i\)th row and \(j\)th column of the product is $$ c_{ij} = \sum_{k=1}^q a_{ik} b_{kj} $$ where \(q\) is the number of columns of \(A\), which must be the same as the number of rows of \(B\
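Working from that inner-product definition, the proof the exam asked for is a one-line index swap (my sketch, assuming \(A\) is \(p \times q\) and \(B\) is \(q \times p\) so both products are square):

$$ \operatorname{trace}(AB) = \sum_{i=1}^{p} c_{ii} = \sum_{i=1}^{p} \sum_{k=1}^{q} a_{ik} b_{ki} = \sum_{k=1}^{q} \sum_{i=1}^{p} b_{ki} a_{ik} = \operatorname{trace}(BA) $$

The middle step just swaps the order of the two finite sums, which is always allowed, and relabeling reveals the right-hand side as the diagonal sum of \(BA\).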