
Generating Curricular Nets

I recently developed some code to take student enrollment information and convert it into a visual map of the curriculum, showing how enrollments flow from one course to another. For example, you'd expect a lot of BIO 101 students to take BIO 102 within the next two semesters. In order to 'x-ray' course offerings, I have to set thresholds for displaying links: for example, a link only shows up if at least 30% of one course's enrollment transfers to another. There are many ways to add meta-data in the form of text, color, and line weight; for example, the thickness of the graph edges (the connecting lines) signifies the magnitude of the flow. This is a directed graph, so it has arrows, although you can't see them at the resolution I've provided. Other data includes course names, enrollment statistics, and the college represented. The tool can also isolate one part of the curriculum at a time to get more fine-grained graphs.
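
To give a flavor of the pipeline, here is a stripped-down Perl sketch. The real script is more involved and, as noted in the comments, uses the GraphViz Perl module as glue; this version just emits DOT text for the dot command to render. It assumes hypothetical input of student,term,course lines with terms numbered as consecutive integers, and it keeps the 30% display threshold and flow-proportional edge thickness described above.

#!/usr/bin/env perl
use strict;
use warnings;

# Sketch only, not the production script: read student,term,course records
# from STDIN (terms assumed to be consecutive integers, e.g. 1,2,3,...),
# count how enrollment flows between courses, and emit a DOT graph on STDOUT.
# Render with:  perl curricular_net.pl < enrollments.csv | dot -Tpng -o net.png

my $THRESHOLD = 0.30;   # minimum fraction of a course's enrollment that must flow on
my $WINDOW    = 2;      # "within the next two semesters"

my (%history, %enrollment);
while (my $line = <STDIN>) {
    chomp $line;
    my ($student, $term, $course) = split /,/, $line;
    push @{ $history{$student} }, [ $term, $course ];
    $enrollment{$course}++;
}

# For each ordered pair of courses (A, B), count the students who took B
# within $WINDOW terms after taking A.
my %flow;
for my $records (values %history) {
    my @sorted = sort { $a->[0] <=> $b->[0] } @$records;
    for my $i (0 .. $#sorted) {
        for my $j ($i + 1 .. $#sorted) {
            my $gap = $sorted[$j][0] - $sorted[$i][0];
            next if $gap < 1 || $gap > $WINDOW;
            $flow{ $sorted[$i][1] }{ $sorted[$j][1] }++;
        }
    }
}

# Keep only links that carry at least $THRESHOLD of the source course's
# enrollment; edge thickness (penwidth) scales with the size of the flow.
print "digraph curriculum {\n  rankdir=LR;\n  node [shape=box];\n";
for my $from (sort keys %flow) {
    for my $to (sort keys %{ $flow{$from} }) {
        my $frac = $flow{$from}{$to} / $enrollment{$from};
        next if $frac < $THRESHOLD;
        printf qq{  "%s" -> "%s" [penwidth=%.1f, label="%.0f%%"];\n},
            $from, $to, 1 + 5 * $frac, 100 * $frac;
    }
}
print "}\n";

The thresholding is what keeps the picture readable; without it, almost every pair of courses shares at least a few students and the graph turns into a hairball.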

The graph below shows a whole institution's curriculum. The sciences, which are highly structured, clump together in the middle. Less strongly linked structures are visible as constellations around the center. I particularly like the dog shape at lower left. This sort of thing can be used to see where the log-jams are, and to compare what advisors think is happening to what actually is.


Comments

  1. Anonymous 1:44 PM

    That's pretty intriguing. What did you use to create the map?

    Replies
    1. It's a Perl script that takes registration information as input, does a lot of sorting and averaging, and uses the open-source package GraphViz (with the Perl package as glue) to create the graphs. I also use GraphViz to map correlations in big data sets. See this article, for example: http://highered.blogspot.com/2011/12/x-raying-survey-data.html

  2. Interesting, I've been doing a similar project with the newly released ACGME Milestones for graduate medical education. My goal is to empirically justify a rationale for centralized assessment while at the same time discerning which milestones should be centralized. Once all of the milestones are released I anticipate having to link over 1800 different competencies across 26 medical specializations. I've been using Gephi and adjacency tables on just 4 specializations, which has yielded 397 nodes and 145 edges.

  3. I remember you had mentioned this earlier, Dave, but I had not seen it. Right now this is based on conventional enrollment data. I would be curious to see what could be visualized with the more complex data that academic institutions can gather using eLumen, tracking individual students relative to expected student learning outcomes/capabilities/competencies.

    Replies
    1. David, this is something we could collaborate on if you want. If you've got the data, I can modify the program.

  4. Anonymous 8:34 AM

    More! More! As a user of eLumen, I can say that this and other similar ways to visualize emerging evidence would be most useful!


