Writing Projects

In the Southern Association (SACS) region, a Quality Enhancement Plan (QEP) is now part of the decennial accreditation reaffirmation process. The QEP is a project to improve student learning. At Coker we focused on writing, and I've stayed interested in how to better teach and assess writing. After bumping into several others at the annual SACS meeting who face similar challenges in this area, I decided to compile a list of writing QEPs. The list is necessarily incomplete; if you have others I can add, please email me.

The hyperlinks point to QEP documents where I could easily find them. I will update this list as I get more information.

Auburn University-Montgomery (WAC site)
Caldwell Community College & Technical Institute
Catawba Valley Community College
Central Carolina Community College
Clear Creek Baptist Bible College
Coker College
Columbus State University
Judson College
King College
Liberty University (pdf)
Lubbock Christian University (pdf)
South College
Texas A&M International University
The University of Mississippi
University of North Carolina Pembroke (pdf)
University of Southern Mississippi (pdf)
Virginia Military Institute (qep) (core curriculum)

One source: List of 2004 class QEPs from SACS (pdf)

My blog posts on writing assessment


Comments

  1. Hi,

    I'm finding this useful as I try to pull together a presentation for WAC 2010. I'm a writing specialist at Marymount University in Arlington, VA, a position created under our QEP--even though the plan addresses inquiry more than writing. I'm trying to find out what writing program changes institutions make to satisfy a QEP.

  2. Anonymous

    Add Auburn University-Montgomery to the list.
    I can't find their QEP document, but here's their WAC site:
    http://www.aum.edu/indexm_ektid2916.aspx
