Here I am at SIGCSE again. This is a wonderful opportunity to reflect on how I help students learn Computer Science and become Computer Scientists, and to connect with other faculty, researchers, and practitioners who care about teaching, and teaching well.
An Examination of Layers of Quizzing in Two Computer Systems Courses -
In this work, the instructor taught an Intro Computer Systems course based on Bryant and O'Hallaron's book (paid link). After several years of teaching it, she introduced a new layer of quizzing. Before each class, students take a pre-quiz that is worth almost nothing individually (the 20 quizzes together count for 5% of the grade), so they arrive in class already having gotten feedback on their deficiencies. Since the quizzes were introduced, students have been performing better in these courses.
Subgoals Help Students Solve Parsons Problems - (previewed at Mark Guzdial's blog)
When learning to solve new kinds of problems, students benefit from labeling the subgoals of a solution. These labels provide a basis for solving similar problems later. There are two strategies for labeling: students can generate the labels themselves, or the assignment can provide them. An example labeling for loops: initialize, test, change. Students who generate their own labels do best when those labels generalize across problems; but when their self-generated labels are problem-specific, such as "are there more tips" (with respect to an array of tips), they do worse than students who are given the labels. Developing one's own labels can be valuable, but the expert may still need to provide guidance to help abstract them across problems (a small sketch of this appears below). This talk also had one of the great moments of the conference: someone asked a question, Briana replied, "So-and-so has done great work on this...", and the questioner pointed out that he is "so-and-so".
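As a concrete (and entirely hypothetical) sketch of what cross-problem subgoal labels look like in code, here is the tips example annotated with the generic loop labels from the talk; the data and variable names are my own, not the speakers':

```python
# Summing an array of tips, annotated with generic, cross-problem
# subgoal labels (initialize, test, change) rather than a
# problem-specific label like "are there more tips".
tips = [2.50, 3.00, 1.75, 4.25]

total = 0.0
i = 0                  # initialize: set up the loop variables
while i < len(tips):   # test: decide whether to keep looping
    total += tips[i]
    i += 1             # change: advance toward loop termination
print(total)           # 11.5
```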
As CS Enrollments Grow, Are We Attracting Weaker Students?: A Statistical Analysis of Student Performance in Introductory Programming Courses Over Time -
In this study, the instructor analyzed student assignment grades across seven years of Fall-semester offerings of the CS1 course. Several specific, clearly stated criteria were applied to obtain a clean, comparable data set. The first check is that student withdrawals remained constant as a percentage of total class size. The second is that the mean grades across offerings are statistically indistinguishable. The third is to fit a mixture model (a weighted combination of distributions) to each class's scores. A good fit is found with two Gaussian distributions: one for the "good students" and a second, higher-variance one for the "potentially weaker" students. From this, the study concluded that (at Stanford, in Fall CS1) there are both more "weak students" and more "strong students", because the growing enrollment is drawing from the same larger population.
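A minimal sketch of that mixture-model step, assuming the scores live in a one-dimensional array; the synthetic data, parameters, and variable names here are my own, not the paper's:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for one semester's scores (the real study
# used Stanford CS1 grade data; this is synthetic).
rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(85, 5, 300),   # tight cluster of "good students"
    rng.normal(65, 15, 100),  # higher-variance "potentially weaker" group
]).reshape(-1, 1)             # scikit-learn expects a 2-D array

# Fit a weighted combination of two Gaussians to the score distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)

for w, mean, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
    print(f"weight={w:.2f}  mean={mean[0]:.1f}  std={np.sqrt(cov[0, 0]):.1f}")
```

Comparing the fitted component weights across years would then show whether the "weaker" component is growing faster than the "stronger" one.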
An (Updated) Review of Empiricism at the SIGCSE Technical Symposium -
Using the proceedings from SIGCSE 2014 and 2015, the authors examined which papers included empirical evaluation and characterized those evaluations. How was the data collected in each paper? What was being evaluated (pedagogy, assignments, tools, etc.)? Was the subject novel, or a replication of other studies? Based on this study, would SIGCSE benefit from a separate track for longer paper submissions? Or from workshops on how to empirically validate results? This and other material is being developed under an NSF grant and will be released publicly.
Birds of a Feather -
In the evening, I attended two Birds of a Feather sessions. Both gave me further ideas for what I might do to (attempt to) further improve student learning, as well as possible collaborators toward that end.