
Friday, March 1, 2019

Conference Attendance: SIGCSE 2019 - Day 1.5

Back at SIGCSE again, this one the 50th to be held.  Much of my time is spent dashing about and renewing friendships.  That said, I made it to several sessions.  I've included at least one author and linked to their paper.

Day 2 began with the keynote from Mark Guzdial.

"The study of computers and all the phenomena associated with them." (Perlis, Newell, and Simon, 1967).  Early advocates of Computer Science (in the 1960s) proposed its inclusion in education to support all of education.  For example, given the equation "x = x0 + v*t + 1/2 a * t^2", we can also teach it as an algorithm / program.  The program then shows the causal relation of the components, benefiting the learning of other fields by integrating computer science.
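As an illustrative sketch of that idea (the function and variable names are my own, not from the keynote), the kinematics equation can be written as a short program:

```python
# x = x0 + v*t + (1/2)*a*t^2, expressed as a program.  Writing it this
# way makes the causal structure explicit: final position comes from
# initial position, velocity acting over time, and acceleration.
def position(x0, v, a, t):
    return x0 + v * t + 0.5 * a * t ** 2

# An object starting at rest and accelerating at 9.8 m/s^2 for 2 seconds:
print(position(0.0, 0.0, 9.8, 2.0))
```

Students can then vary the inputs and watch how each component contributes to the result, which is harder to do with the bare equation.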

Do we have computing for all?  Most high school students have no access to computing courses, and even when they do, most do not take them.

Computing is a 21st century literacy.  What is the core literacy that everyone needs?  Cf. K-8 Learning Trajectories Derived from Research Literature: Sequence, Repetition, Conditionals.  Our goal is not teaching Computer Science per se, but rather supporting learning.

For example, let's learn about acoustics.  Mark explained the straight physics, then brought up a program (in a block-based language) that displays the sound reaching the microphone.  The learning came from the program, demonstration, and prediction, not from writing and understanding the code itself: taking data and helping build narratives.

We need to build more, try more, and innovate.  To meet our mission, "to provide a global forum for educators to discuss research and practice related to the learning and teaching of computing at all levels."

Now for the papers from day 1:

Lisa Yan - The PyramidSnapshot Challenge

The core problem is that we usually see student work only as the final submitted snapshot.  The authors extended Eclipse with a plugin to record every compilation, giving over 130,000 snapshots from 2,600 students.  They then needed an automated approach to classifying the intermediate snapshots.  They tried autograders and abstract syntax trees, but those could not capture the full space.  But the output of the assignment is an image, so why not try image classification?  From the 138,531 snapshots, they generated 27,220 images.  Lisa then manually labeled 12,000 of those images into 16 labels that are effectively four milestones in development, and a neural network classifier then labeled the rest.  Plotting the milestones using a spectrum of colors (blue being start, red being perfect) shows that good students quickly reach the complete milestones, while struggling students spend much of their time in early debugging stages.  Tinkering students (~73rd percentile on exams) take a lot of time, but mostly spend it on later milestones.  From these, we can review assignments and check whether students pass through the declared milestones, or whether a different assignment structure is required.

For the following three papers, I served as the session chair.

Tyler Greer - On the Effects of Active Learning Environments in Computing Education

Replication study on the impact of using an active learning classroom versus a traditional room.  The same instructor taught the same course, but in different classrooms and with different lecture styles (traditional versus peer instruction).  The most significant factor was the use of active learning versus traditional lecture, with no clear impact from the type of room used.

Yayjin Ham, Brandon Myers - Supporting Guided Inquiry with Cooperative Learning in Computer Organization

Given a computer organization course taught with peer instruction and guided inquiry, can the peer instruction be traded for cooperative learning to further emphasize engagement and learning?  The cycle: exploration of a model (program, documentation), then concept invention (building an understanding), then application (applying the learned concepts to a new problem), with a reflection on the learning at the end of each "lecture".  In back-to-back semesters, they measured the learning gains from this intervention, along with surveys on secondary items (such as engagement and peer support).  However, the students in the intervention group did worse, although most of the difference is explained by prior GPA.  Across the other survey points, students in the intervention group also rated the course lower.  The materials used are available online.

Aman, et al - POGIL in Computer Science: Faculty Motivation and Challenges

Faculty try implementing POGIL in the classroom: starting with training, then implementing in the classroom, then continuing to innovate.  Faculty want to see more student motivation, better retention of the material, and students staying in the course (as well as in the program).  Students show a mismatch between their actual learning and their perceived learning.  There remain many challenges and concerns from faculty about the costs of adoption.

Tuesday, February 27, 2018

Conference Attendance SIGCSE 2018

I have just finished attending SIGCSE 2018 in Baltimore.  In contrast to my earlier conference attendance, this time I have had higher involvement in its execution.

On Wednesday I went to the New Educator's Workshop (NEW).  Even after being faculty for two years, there were still a number of things that were either new or good reminders, such as including or discussing learning objectives with each lecture and assignment, or being careful about increasing one's level of service.  As a new faculty member, each service request seems exciting, as no one has asked me before!  But many senior faculty emphasized that this is the time in which they are protecting us from service opportunities so that we can spend time on our teaching and research.

On Thursday morning, I presented my recent work that updated a programming assignment in Introduction to Computer Systems, and from which we saw improvements in student exam scores.  We did not isolate which change mattered, and are therefore left with two theories.  First, the improvement could be from using better style in the starter code and emphasizing this style in submissions.  Second, we redesigned the traces to require submissions to address different cases and thereby implement different features.  I lean toward the former, but have no data-driven basis for this hypothesis.

Let's discuss active learning briefly.  I attended (or ran) several sessions focused on this class of techniques.  The basic idea is that students have better engagement and learning by actively participating in class, and there are a variety of techniques that help increase student activity.  On Thursday afternoon, Sat Garcia of USD presented Improving Classroom Preparedness Using Guided Practice, which showed how student learning improved from participating in Peer Instruction, which particularly requires students to come to class prepared.  Shortly after, Cynthia Taylor joined Sat and me in organizing a Birds of a Feather (BoF) session on using Active Learning in Systems Courses.  We had about 30-40 attendees there, split into two groups discussing techniques they have used and problems they have observed.  Five years ago, a similar BoF had attendance around 15-20, so we are making progress as a field.

On Friday, I spoke with Brandon Myers, who has done work on using POGIL in Computer Organization and Architecture.  In POGIL, students work in groups of 3-4 with specific roles through a guided inquiry, discovering the concepts themselves.  We had a nice conversation and may merge our draft resources.  This last point is often the tricky part of using active learning: developing reasonable materials is time intensive and requires several iterations.

The Friday morning keynote presentation was given by Tim Bell, who spoke about K-12.  This topic is rather distant from my own work and research, so I was skeptical.  Yet, I came out quite enthused.  It was interesting to think about presenting Computer Science concepts in non-traditional ways, starting from his example of having to explain your field at an elementary school when the other presenters are a cop and a nurse.  How could you get six-year-olds to sort?  Or see the advantage of binary search as the data grows?
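To make that last point concrete (my own sketch, not an example from the keynote): the number of steps binary search needs grows only logarithmically, so its advantage over a linear scan becomes dramatic as the data grows.

```python
import math

# Worst-case number of checks for a linear scan versus binary search
# over sorted data of size n.  Binary search halves the range each step.
for n in [10, 1_000, 1_000_000]:
    linear = n                             # may have to check every item
    binary = math.ceil(math.log2(n)) + 1   # halvings until one item remains
    print(f"n={n:>9}: linear scan {linear:>9} steps, binary search {binary}")
```

At a million items, the linear scan may need a million checks while binary search needs about 21, which is the kind of contrast even young students can appreciate with a guessing game.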

In the afternoon, I was a session chair for the first time.  I moderated the session on Errors, so obviously the AV system stopped working for a short duration.  Beyond that incident, the session seemed to go well.

I always like going to SIGCSE.  It is rejuvenating and exhausting.  So many teachers to speak with about courses, curriculum, and other related topics.  And then you find that you've been social for 16 hours or so.

Saturday, March 11, 2017

Conference Time: SIGCSE 2017 - Day 2

I started my morning by attending my regular POGIL session.  I like the technique and using it in the classroom.  However, I should probably make the full transition: attend the multi-day workshop, and perhaps get one of those "ask me about POGIL" pins.

Lunch was then kindly provided by the CRA for all teaching-track faculty in attendance.  There is the start of an effort to ultimately prepare a memo to departments on how to best support and utilize us (including me).  One key item for me is establishing how to evaluate the quality of teaching and learning.

Micro-Classes: A Structure for Improving Student Experience in Large Classes - How can we provide the personal interactions that are valuable when enrollments are large and increasing?  We have a resource that scales with the class - the students.  The class is partitioned into micro-classes, with clear physical separation in the lecture room, and each micro-class has a dedicated TA / tutor.  Did this work in an advanced (sophomore/junior) class on data structures?

Even though the same instructor taught both the micro and the control class, the students reported higher scores for the instructor for preparedness, concern for students, etc.  Yet, there was no statistical difference in learning (as measured by grades).

Impact of Class Size on Student Evaluations for Traditional and Peer Instruction Classrooms - How can we compare the effectiveness of peer instruction used in courses of varying class sizes?  For dozens of courses, the evaluation scores for PI and non-PI classes were compared.  There was a statistical difference between the two sets, particularly for evaluations of the course and the instructor.  This difference exists even when splitting by course, and it does not stem from how often the course is offered, nor from the role of the instructor (teaching-track, tenure-track, etc.).