The original idea was to have some retrospective data to look at for retention purposes. It works like this. In fall 2008 we had, of course, a group of students who had attended in fall 2007 but didn't graduate and didn't return: our attrition pool. Because the survey forms are tagged by student IDs, we can look back and see what indicators there might have been. This has proven very useful. We have since started using the CIRP for the same purpose--it's been a great source of information. It's essential to capture as many student IDs as possible, however; without the IDs you can't tell who left and who stayed, and the data are far less useful.
Here's one method of mining the data. Create your database of student IDs--I'll use just first-year students from fall 2007 here--and use the trick I outlined here to get a 0/1 computed variable called 'retain' to denote attrit/retain. Add that as a column to the CIRP data or your custom survey by matching on student IDs. You can add other information too, like athlete/non-athlete, or zip code, or whatever. Load this data set into SPSS and run an ANOVA with 'retain' as the grouping variable.
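For those who'd rather script the data prep than do it by hand, here's a minimal sketch in Python/pandas. The file names (fall2007_cohort.csv, fall2008_enrolled.csv, cirp_2007.csv) and the student_id column are assumptions for illustration--the same join works just as well in Access or Excel:

```python
import pandas as pd

# Hypothetical inputs: first-year IDs from fall 2007, and the IDs of
# students still enrolled in fall 2008 (graduates already excluded).
cohort = pd.read_csv("fall2007_cohort.csv")
enrolled = pd.read_csv("fall2008_enrolled.csv")

# 0/1 computed variable: 1 = returned in fall 2008, 0 = attrited.
cohort["retain"] = cohort["student_id"].isin(enrolled["student_id"]).astype(int)

# Attach the retain flag (plus any extras like athlete status or zip
# code) to the survey records by matching on student ID.
cirp = pd.read_csv("cirp_2007.csv")
merged = cirp.merge(cohort[["student_id", "retain"]], on="student_id", how="inner")
merged.to_csv("cirp_with_retain.csv", index=False)
```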
I've had issues loading directly from Excel, and usually end up saving a table as a .csv file--it seems to import better that way. You can only add 100 variables at a time, and the CIRP is longer than that, so the import has to be done in chunks.
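If you want to automate the chunking, here's one way to split the merged file into pieces that fit under a 100-variable limit. This is a sketch that assumes the file from the previous step (cirp_with_retain.csv) and repeats the ID and retain columns in every chunk so the pieces stay linked:

```python
import pandas as pd

merged = pd.read_csv("cirp_with_retain.csv")
keep = ["student_id", "retain"]
survey_cols = [c for c in merged.columns if c not in keep]

# Leave room in each chunk for the key columns.
chunk_size = 100 - len(keep)
for i in range(0, len(survey_cols), chunk_size):
    chunk = merged[keep + survey_cols[i:i + chunk_size]]
    chunk.to_csv(f"cirp_chunk_{i // chunk_size + 1}.csv", index=False)
```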
When the results roll in, look for small numbers in the significance column. I usually use .02 as a benchmark; anything less than that is potentially interesting. Of course, this depends on other factors, like sample size and such.
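For readers without SPSS, a rough equivalent of the per-item ANOVA and the significance screen can be done with scipy. The .02 cutoff and the column names carry over from above; treating every item code as numeric is an assumption:

```python
import pandas as pd
from scipy.stats import f_oneway

merged = pd.read_csv("cirp_with_retain.csv")
stayed = merged[merged["retain"] == 1]
left = merged[merged["retain"] == 0]

# One-way ANOVA per survey item, with retain/attrit as the factor.
for item in merged.columns.drop(["student_id", "retain"]):
    f, p = f_oneway(stayed[item].dropna(), left[item].dropna())
    if p < 0.02:
        print(f"{item}: F = {f:.2f}, p = {p:.4f}")
```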
Now that's interesting--the ACCPT1st and CHOICE variables are very significant, meaning that they have power in distinguishing between those who returned and those who didn't. Since I already had this data set in Access, I did a simple query and used the pivot table view to look at the CHOICE variable; a cross-tab sketch follows the item text below. For reference, the text of the survey item is:
Is this college your:
1=Less than third choice?
2=Third choice?
3=Second choice?
4=First choice?
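As a stand-in for the Access pivot table view, here's a hedged pandas cross-tab of CHOICE against the retain flag (file and column names assumed as above); the normalized version shows the retention rate at each choice level:

```python
import pandas as pd

merged = pd.read_csv("cirp_with_retain.csv")

# Raw counts of retained (1) vs. attrited (0) at each choice level,
# then row percentages for the retention rate per level.
counts = pd.crosstab(merged["CHOICE"], merged["retain"])
rates = pd.crosstab(merged["CHOICE"], merged["retain"], normalize="index")
print(counts)
print(rates.round(2))
```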
Students for whom the institution was their first choice were the first to leave. Not only that, but these students are in the majority. This turned out to be a critical piece of information. By performing another ANOVA with CHOICE as the key variable, and then using the 'Compare Means -> Means' SPSS report, we can identify particular traits of these 'First Choicers,' as we have come to call them. We corroborate this with other information taken from the Assessment Day surveys, and a picture of these students emerges. I also geo-tagged their zip codes to see where they came from. More on First-Choicer characteristics will come in another post.
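One way to approximate that 'Compare Means -> Means' report outside SPSS is to group the survey items by a first-choice flag (CHOICE == 4, per the item coding above) and compare group means--items with the biggest gaps hint at what sets the First Choicers apart. All names are assumed as before:

```python
import pandas as pd

merged = pd.read_csv("cirp_with_retain.csv")
merged["first_choice"] = (merged["CHOICE"] == 4).astype(int)

# Mean of each item for first-choice vs. everyone else, plus the gap.
items = merged.columns.drop(["student_id", "retain", "CHOICE", "first_choice"])
means = merged.groupby("first_choice")[items].mean().T
means["gap"] = means[1] - means[0]
print(means.sort_values("gap", key=abs, ascending=False).head(20))
```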
This was the beginning of the Plan 9 attrition effort, which is deep in the planning phase now. The bottom line is that we discovered that many of our students don't understand the product they're buying, and we don't understand them very well either. It's not the kind of thing one can slap a band-aid fix on; it will require a complete re-think of many institutional practices.