Sorry for the l337 garbage in the title. I've been absorbing the ideas from Dr. Ed Nuhfer (see previous post) about knowledge surveys, in particular the notion that complexity and difficulty are orthogonal--that is to say, different dimensions. We could think of an abstraction of learning that incorporates these into a graph. Mathematicians love graphs. Well, applied mathematicians love graphs anyway.
I have used my considerable expertise with Paint to produce the glorious figure shown above. It's supposed to lay bare an abstraction of the learning process if we only consider difficulty (expressed here inversely as probability of success) and complexity. I think it's fair to say that we sometimes think of learning happening this way--that no matter what the task, probability of success is increased by training, and that more complex tasks are inherently more difficult than less complex ones. This may, of course, be utterly wrong.
We encountered Moravec's Paradox in a previous episode here. The idea is that some things that are very complex seem easy. For example, judging another person's character, or determining if another beer is worth four bucks at the bar. So, it may be that the vertical dimension of the graph isn't conveying anything of interest. But I have a way to wiggle out of that problem.
If we restrict ourselves to thinking about purely deductive reasoning tasks, then success depends on the learner's ability to execute an algorithm. Multiplication tables or solving standard differential equations--it's all step-by-step reasoning. In this case, it seems reasonable to assume that increased complexity (the number of steps involved) reduces the chance of success. In fact, if we abstract out a constant probability of success p for each step, and the steps are independent, then the probability of completing all c steps is p^c: we're multiplying the per-step probabilities together, so the chance of success decays exponentially as complexity increases.
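Here's a minimal Python sketch of that multiplication argument, just to make it concrete (the per-step probability of 0.95 is an assumption I picked, not anything measured):

```python
# A minimal sketch of the "multiply the step probabilities" model.
# Assumption: each step succeeds independently with the same
# probability p, so an item requiring c steps succeeds with
# probability p^c.

def success_probability(p: float, c: int) -> float:
    """Probability of completing all c independent steps."""
    return p ** c

# Even a high per-step probability decays quickly with complexity:
for c in [1, 2, 5, 10, 20]:
    print(f"c = {c:2d}: Pr[success] = {success_probability(0.95, c):.3f}")
```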
We could test this with math placement test results. An analysis of the problems on a standard college placement test should pretty quickly give us the approximate number of steps required to solve each problem (with suitable definitions and standards). The test results then give us a statistical estimate of the probability of success on each item. Looking at the two together should be interesting. If we assume that Pr[success on item of complexity c] = exp(d*c), where c is the complexity measured in solution steps and d is some negative constant to be determined (in the per-step model above, d = ln p), then we could identify problems that are unexpectedly easy or difficult. That is, we could gauge the preparation of students taking the placement test not just by the number of problems they get correct, but by the level of complexity they can handle.
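The fit itself would be straightforward. Here's a rough Python sketch with invented numbers--the item complexities, success rates, and the residual cutoff are all made up for illustration:

```python
# Fitting the model Pr[success] = exp(d*c) to placement test items.
# All of the data below is hypothetical.
import numpy as np

# (complexity in solution steps, observed success rate) per item
complexity = np.array([1, 2, 3, 4, 5, 6, 8])
success_rate = np.array([0.90, 0.78, 0.70, 0.55, 0.52, 0.40, 0.30])

# Taking logs turns the model into a line through the origin,
# log Pr[success] = d*c, so d comes from simple least squares.
log_p = np.log(success_rate)
d = np.sum(complexity * log_p) / np.sum(complexity ** 2)
print(f"estimated d = {d:.3f}")  # negative, as expected

# Items whose observed rates sit far from the fitted curve are the
# "unexpectedly easy or difficult" ones. The 0.15 cutoff is arbitrary.
residuals = log_p - d * complexity
for c, r in zip(complexity, residuals):
    if r > 0.15:
        print(f"c = {c}: easier than expected (residual {r:+.2f})")
    elif r < -0.15:
        print(f"c = {c}: harder than expected (residual {r:+.2f})")
```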
I've written a lot about complexity and analytical thinking, which you can sieve from the blog via the search box if you're interested. I won't recapitulate here.
I'll see if I can find some placement test data to try this out on. It should make for an interesting afternoon some day this summer. Stay tuned.
Update: It occurs to me upon reflection that if the placement test is multiple-choice, this may not work. Many kinds of deterministic processes are easier to verify backwards than to run forwards. For example, if a problem asks for the roots of a quadratic equation, it's likely easier to plug in the answer candidates and see if they work than it is to use the quadratic formula or complete the square (i.e., actually solve the problem directly). This would make the complexity weights dubious.
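To see the asymmetry, here's a toy Python sketch (the particular quadratic is just one I picked for illustration):

```python
# Verify-vs-solve asymmetry for a sample quadratic, x^2 - 5x + 6 = 0.
import math

a, b, c = 1.0, -5.0, 6.0

def verify(x: float) -> bool:
    """Checking a multiple-choice candidate is a single substitution."""
    return math.isclose(a * x * x + b * x + c, 0.0, abs_tol=1e-9)

def solve() -> tuple[float, float]:
    """Solving directly takes several steps: form the discriminant,
    take its square root, then compute two quotients."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# A test-taker facing answer choices can just verify each one...
print([x for x in [1.0, 2.0, 3.0, 4.0] if verify(x)])  # [2.0, 3.0]
# ...rather than carry out the forward computation:
print(solve())  # (3.0, 2.0)
```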