Writing Projects

In the Southern Association of Colleges and Schools (SACS) region, a Quality Enhancement Plan (QEP) is now part of the decennial accreditation reaffirmation process. A QEP is a project to improve student learning. At Coker we focused on writing, and I've stayed interested in how to better teach and assess writing. After meeting several others at the annual SACS conference who face similar challenges, I decided to try to compile a list of writing QEPs. The list is necessarily incomplete; if you know of others I can add, please email me.

The hyperlinks point to QEP documents where I could easily find them. I will update this list as I receive more information.

Auburn University-Montgomery (WAC site)
Caldwell Community College & Technical Institute
Catawba Valley Community College
Central Carolina Community College
Clear Creek Baptist Bible College
Coker College
Columbus State University
Judson College
King College
Liberty University (pdf)
Lubbock Christian University (pdf)
South College
Texas A&M International University
The University of Mississippi
University of North Carolina Pembroke (pdf)
University of Southern Mississippi (pdf)
Virginia Military Institute (qep) (core curriculum)

One source: List of 2004 class QEPs from SACS (pdf)

My blog posts on writing assessment


Comments

  1. Hi,

    I'm finding this useful as I try to pull together a presentation for WAC 2010. I'm a writing specialist at Marymount University in Arlington, VA, a position created under our QEP--even though the plan addresses inquiry more than writing. I'm trying to find out what writing program changes institutions make to satisfy a QEP.

  2. Anonymous, 7:44 AM

    Add Auburn University-Montgomery to the list.
    I can't find their QEP document, but here's their WAC site:
    http://www.aum.edu/indexm_ektid2916.aspx

