2013-08-29

The papers for ICER 2013 are available in the ACM Digital Library now at http://dl.acm.org/citation.cfm?id=2493394. I think they remain free for a month (until September 12), so grab them quickly.

ICER 2013 was a fabulous conference. I learned a lot, and I’m already using some of the ideas I gained there in my research and in my teaching. I can’t possibly summarize all the papers, so here’s my unofficial list of what struck me.

I was invited to be a discussant in the Doctoral Consortium, and that was an absolute thrill. The students were so bright and had so many interesting ideas. I’m eager to hear about many of their results. We also noted that we had participants from several major research universities this year (Stanford, MIT, Virginia Tech, University of Washington). For some, it was the first time that they’d ever sent someone to the ICER DC. Why? Andy Ko (U. Washington) said that it’s because it’s been three years since CE21 funding started, and that’s enough time for a doctoral student to have something worth showing. That really shows the importance of having funding in an area.

One of the big ideas for me at ICER this year was the value of big data — what can you do with lots of data? Neil Brown showed that the Computing At Schools website is growing enormously fast, and he told us that the BlueJ Blackbox data are now available to researchers. Elena Glassman talked about how to use and visualize student activity to support finding different paths to a solution.  Colleen Lewis presented with two of her undergraduate collaborators from Berkeley on data mining the AP CS exam answers.

My favorite example of the value of big data for CS Ed came from my favorite paper of the conference. Michael Lee and Andy Ko presented their research on how adding assessments to a programming video game increased persistence in the game. The graph below appears in their paper, but in the talk, Michael annotated it with what was being taught in the levels that led to drop-offs in participation. (Thanks to Michael for providing it to me.) The control and assessment groups split on lists. Variables were another big drop-off, as were objects and functions. Here is an empirical measurement of “how hard is that topic.” I’ve submitted my request for access to the Blackbox, because I’m starting to understand what questions we can ask with a bunch of anonymized data.

[Graph from Lee and Ko’s paper: participation drop-offs by level, annotated with the topic taught at each drop-off (lists, variables, objects, functions).]

There were several papers that looked at student artifacts as a proxy for student understanding. I was concerned about that practice. As Scott Klemmer told us in his opening keynote, people today mostly program by grabbing stuff off the Web and copying it, sometimes without understanding it. Can you really trust that a student’s use of some code means that they understand the idea behind that code?

Raymond Lister led a really great one-hour special session around the idea of “geek genes”: whether CS really does generate a bimodal distribution of grades, and whether the learning edge momentum theory describes our results. It was a great session because it played to ICER’s strengths (really intense discussion), and yet it generated moments of enormous laughter. I came away thinking that there are no geek genes, we don’t have bimodal distributions, and the jury is still out on learning edge momentum.

Elizabeth Patitsas presented a nice paper comparing introducing algorithms serially (“Here’s algorithm A that solves that problem…and now here’s algorithm B…”) versus in compare-and-contrast form (“Here are two algorithms that solve that problem…”). Compare-and-contrast is better, and the advantage for learning algorithms is even larger than the existing education literature suggests. I mentioned this result in class just yesterday. I’m teaching our TA preparation class, and a student who teaches algorithms asked me, “Am I responsible for my students’ learning?” I showed the students Elizabeth’s result and then asked, “If you know that teaching one way leads to more learning than another, aren’t you morally or ethically required to teach using the better method?”

Michelle Friend and Rob Cutler described a group of middle school girls figuring out a complicated algorithm problem (finding the maximum height from which an egg-drop protection mechanism still works). They showed that, even without scaffolding, the girls were able to come up with some fairly sophisticated algorithms and good analyses of the speed of their algorithms. We’re getting somewhere with our understanding of CS learning at the school level.

And I totally admit that my impression of this ICER is influenced by my paper on Media Computation winning the Chair’s Paper Award. Michael Lee won the “John Henry Award,” decided by popular vote. (I voted for him, too.)

I’m skipping a lot: Mike Hewner’s presentation on his thesis, an interesting replication of the McCracken study, and new ideas about PCK and threshold concepts. It was a great event, and I could write a half dozen posts about the ideas from the conference. Next year’s ICER is in Glasgow, 11-12 August. I’m very much looking forward to it, and I’m already working on papers to submit.

Tagged: BlueJ, computing education, ComputingAtSchools, ICER
