Active learning is a set of techniques that require students to take an active role in their learning during lecture. Research strongly supports that students learn more when lectures use these techniques, and I have measured this effect in my own courses. However, the same research shows that students like lectures using these techniques less, even though they are learning more. I have informally measured this as well, such as the student who said at the end of the first lecture, "If you are going to require me to participate in lecture, I will not return." Unfortunately, the present educational model relies on student evaluations (which primarily measure what students like) to evaluate the quality of instruction. Perversely, then, this model encourages suboptimal teaching and learning.
The paper therefore recommends that professors take time at the beginning of the semester to demonstrate the benefits and gain buy-in from the students, and then continue to do so throughout. Students want to learn, so they will support this pedagogy, and many will recognize its value with time, if they give it that time.
Saturday, September 28, 2019
Friday, March 1, 2019
Conference Attendance: SIGCSE 2019 - Day 1.5
Back at SIGCSE again, this one the 50th to be held. Much of my time is spent dashing about and renewing friendships. That said, I made it to several sessions. I've included at least one author and linked to their paper.
Day 2 begins with the keynote from Mark Guzdial.
"The study of computers and all the phenomena associated with them." (Perlis, Newell, and Simon, 1967). Early advocates of Computer Science (in the 1960s) proposed its inclusion in education to support all of education. For example, given the equation "x = x0 + v*t + 1/2 a * t^2", we can also teach it as an algorithm / program. The program then shows the causal relation of the components, benefiting the learning of other fields by integrating computer science.
Do we have computing for all? Most high school students have no access, and even when they do, few take the classes.
Computing is a 21st century literacy. What is the core literacy that everyone needs? C.f. K-8 Learning Trajectories Derived from Research Literature: Sequence, Repetition, Conditionals. Our goal is not teaching Computer Science, but rather supporting learning.
For example, let's learn about acoustics. Mark explains the straight physics. Then he brings up a program (in a block-based language) that can display the sound reaching the microphone. So the learning came from the program, demonstration, and prediction, not from writing and understanding the code itself: taking data and helping build narratives.
We need to build more, try more, and innovate. To meet our mission, "to provide a global forum for educators to discuss research and practice related to the learning and teaching of computing at all levels."
Now for the papers from day 1:
Lisa Yan - The PyramidSnapshot Challenge
The core problem is that we only view student work via the final submitted snapshots. They extended Eclipse with a plugin to record every compilation, giving over 130,000 snapshots (138,531 in total) from 2,600 students. They then needed an automated approach to classifying these intermediate snapshots. They tried autograders and abstract syntax trees, but those could not capture the full space. But! The output is an image, so why not try image classification? From the 138,531 snapshots, they generated 27,220 images. Lisa then manually labeled 12,000 of those images into 16 labels that are effectively four milestones in development. Then a neural network classified the remaining images. Plotting the milestones using a spectrum of colors (blue being start, red being perfect): good students quickly reach the complete milestones; struggling students are often stuck in early debugging stages; tinkering students (~73rd percentile on exams) take a lot of time, but mostly spend it on later milestones. From these, we can review assignments and whether students pass through the declared milestones, or whether a different assignment structure is required.
For the following three papers, I served as the session chair.
Tyler Greer - On the Effects of Active Learning Environments in Computing Education
Replication study on the impact of using an active learning classroom versus a traditional room. The same instructor taught the same course, but using different classrooms and lecture styles (traditional versus peer instruction). The most significant factor was the use of active learning versus traditional lecture, with no clear impact from the type of room used.
Yayjin Ham, Brandon Myers - Supporting Guided Inquiry with Cooperative Learning in Computer Organization
Starting from a computer organization course taught with peer instruction and guided inquiry, can the peer instruction be traded for cooperative learning to further emphasize engagement and learning? The structure: exploration of a model (program, documentation), then concept invention (building an understanding), then application (applying the learned concepts to a new problem), with reflection on the learning at the end of each "lecture". In back-to-back semesters, they measured the learning gains from this intervention, as well as surveying other secondary items (such as engagement and peer support). However, the students in the intervention group did worse, though most of the difference is explained by prior GPA. Across the other survey points, students in the intervention group also rated lower. The materials used are available online.
Aman, et al - POGIL in Computer Science: Faculty Motivation and Challenges
Faculty try implementing POGIL in the classroom: starting with training, then implementation, then continued innovation. Faculty want to see more motivation, better retention of the material, and students staying in the course (as well as in the program). Students show a mismatch between their learning and their perceived learning. There remain many challenges and concerns from faculty about the costs of adoption.
Tuesday, February 27, 2018
Conference Attendance SIGCSE 2018
I have just finished attending SIGCSE 2018 in Baltimore. In contrast to my earlier conference attendance, this time I have had higher involvement in its execution.
On Wednesday I went to the New Educators' Workshop (NEW). Even after being faculty for two years, there were still a number of things that were either new or good reminders, such as including or discussing learning objectives with each lecture and assignment, or being careful about increasing one's level of service. As a new faculty member, each service request seems exciting, as no one has asked me before! But many senior faculty emphasized that this is the time in which they protect us from the many service opportunities so that we can spend our time on teaching and research.
On Thursday morning, I presented my recent work that updated a programming assignment in Introduction to Computer Systems, from which we saw improvements in student exam scores. We did not research the specific cause, and are therefore left with two theories. First, the improvement could be from using better style in the starter code and emphasizing this style in submissions. Second, we redesigned the traces to require submissions to address different cases and thereby implement different features. I lean toward the former, but have no data-driven basis for this hypothesis.
Let's discuss active learning briefly. I attended (or ran) several sessions focused on this class of techniques. The basic idea is that students have better engagement and learning by actively participating in class, and there are a variety of techniques that help increase student activity. On Thursday afternoon, Sat Garcia of USD presented Improving Classroom Preparedness Using Guided Practice, which showed how student learning improved from participating in Peer Instruction, which particularly requires students to come to class prepared. Shortly after, Cynthia Taylor joined Sat and me in organizing a Birds of a Feather (BoF) session on using active learning in systems courses. We had about 30-40 attendees split into two groups discussing techniques they have used and problems they have observed. Five years ago, a similar BoF had attendance around 15-20, so we are making progress as a field.
On Friday, I spoke with Brandon Myers, who has done work on using POGIL in Computer Organization and Architecture. In POGIL, students work in groups of 3-4 with specific roles through a guided activity that leads them to discover the concepts themselves. We had a nice conversation and may end up merging our draft resources. This last point is often the tricky part of using active learning, in that developing reasonable materials is both time intensive and requires several iterations.
The Friday morning keynote presentation was given by Tim Bell, who spoke about K-12. This topic is rather distant from my own work and research, so I was skeptical. Yet I came out quite enthused. It was interesting to think about presenting Computer Science concepts in non-traditional ways, based initially on his having to explain his field at an elementary school where the other presenters were a cop and a nurse (his example). How could you get 6-year-olds to sort? Or see the advantage of binary search as the data grows?
In the afternoon, I was a session chair for the first time. I moderated the session on Errors, so obviously the AV system stopped working for a short duration. Beyond that incident, the session seemed to go well.
I always like going to SIGCSE. It is rejuvenating and exhausting. So many teachers to speak with about courses, curriculum, and other related topics. And then you find that you've been social for 16 hours or so.
Saturday, March 11, 2017
Conference Time: SIGCSE 2017 - Day 2
I started my morning by attending my regular POGIL session. I like the technique and using it in the classroom. However, I should probably make the transition, attend the (all / multi-day) workshop, and perhaps get one of those "ask me about POGIL" pins.
Lunch was then kindly provided by the CRA for all teaching-track faculty in attendance. There is the start of an effort to ultimately prepare a memo to departments on how to best support / utilize us (including me). One item for me is recognizing how to evaluate the quality of teaching / learning.
Micro-Classes: A Structure for Improving Student Experience in Large Classes - How can we provide the personal interactions that are valuable when enrollments are large and increasing? We have a resource that is scaling: the students. The class is partitioned into micro-classes, each with clear physical separation in the lecture room and a dedicated TA / tutor. Did this work in an advanced (sophomore/junior) class on data structures?
Even though the same instructor taught both the micro and the control class, the students reported higher scores for the instructor for preparedness, concern for students, etc. Yet, there was no statistical difference in learning (as measured by grades).
Impact of Class Size on Student Evaluations for Traditional and Peer Instruction Classrooms - How can we compare the effectiveness of peer instruction being used in courses of varying class sizes? For dozens of courses, the evaluation scores for PI and non-PI classes were compared. There was a statistical difference between the two sets, particularly for evaluating the course and instructor. This difference exists even when splitting by course. This difference does not stem from the frequency of the course offering, nor the role of the instructor (teaching, tenure, etc.).
Thursday, March 9, 2017
Conference Attendance SIGCSE 2017 - Day 1
Here in Seattle, where I used to live, attending SIGCSE 2017.
Exposed! CS Faculty Caught Lecturing in Public: A Survey of Instructional Practices - Postsecondary Instructional Practices Survey (24 items), 7,000 CS faculty invited, about 800 responses. If the evidence is clear that active learning is better for instruction, then we should be doing more of it. The overall split for CS was equal between student-centered and instructor-centered (exactly the same average, 61.5). The survey showed clear differences between non-STEM (student-centered) and STEM (instructor-centered). So CS is doing better than its overall group.
Now, to dig into the differences within the demographics. The major differences among instructors are for women, and for those with 15 years of experience versus 30, both showing a 5+ point difference between student- and instructor-centered. For those who are strongly committed, there are about 20% in each camp, while the remaining 60% are "whatevers," not strongly committed either way.
Investigating Student Plagiarism Patterns and Correlations to Grades - What are the patterns of plagiarism, such as copying parts or all, and how do students try to obfuscate their "work"? Data came from 2,400 students taking a sophomore-level data structures course. After discarding assignments with insufficient solution space, four assignments remained across six semesters. They used plagiarism detectors to find likely cases of cheating.
First, even though the assignments remained unchanged, the rate of cases stayed constant; most cases involved work from prior semesters. About two thirds of students who cheated did so on only one assignment. Second, the rate of cheating on individual assignments was similar to that on partner assignments. Third, while students who cheated did better on those assignments, they did not receive perfect scores, and those cheating did worse in the course overall than those who did not; those who took the follow-on course showed an even larger grade difference (p = 0.00019). Fourth, the analysis used the raw gradebook data, independent of the detection and its consequences.
Six detectors were used: a lazy detector (common case: remove comments and whitespace); three token-based detectors (all names become generic, functions sorted by token length): identical token stream, modified token edit distance, and inverted token index (compute 12-grams and inversely weight how common they are); a "weird variable name" detector (lowercased, underscores removed); and an obfuscation detector (all on one line, long variable names, etc.). The fraction of total cases found by each detector, respectively: 15.69%, 18.49%, 49.71%, 72.77%, 67.35%, 0.38%.
Monday, February 20, 2017
Repost: Learn by Doing
I want to take a brief moment to link to two of Mark Guzdial's recent posts, both touching on an important theme in teaching: students learn best by doing, not hearing. Oddly, students commonly hold the opposite misconception. If I structure our class time to place them as the ones doing something, rather than me "teaching" by speaking, the appraisal can be that I did not teach. They may not dispute that they learned, but I failed to teach them.
Students learn when they do, not just hear. And Learning in MOOCs does not take this requirement into account.
I have to regularly review these points. So much so that I was able to give them to a group of reporters last week (part of new faculty orientation, but still).
Friday, March 4, 2016
Conference Attendance SIGCSE 2016 - Day 2
After lunch when we are all in food comas, let's attend the best paper talk!
A Multi-institutional Study of Peer Instruction in Introductory Computing -
This study followed 7 instructors across different institutions as they used peer instruction. It showed that the technique is generally recognized as valuable, while also touching on routes by which it can go awry. Tell students why this technique is being used and what its effect is. Hard questions are good questions to ask, as students will discuss and learn from them. This requires that questions are graded for participation and not *correctness*. Possible questions and material for peer instruction are available.
Development of a Concept Inventory for Computer Science Introductory Programming -
A concept inventory is a set of questions that carefully tease out student misunderstandings and misconceptions. Take the exams and identify both the learning objective and the misconception that results in each incorrect answer.
#include <stdio.h>

int addFiveToNumber(int n)
{
    int c = 0;
    // Insert line here
    return c;
}

int main(int argc, char** argv)
{
    int x = 0;
    x = addFiveToNumber(x);
    printf("%d\n", x);
    return 0;
}
a) scanf("%d", &n);
b) n = n + 5;
c) c = n + 5;
d) x = x + 5;
Each incorrect answer illustrates a different misconception: for example, that input must come from the keyboard, or that variables are passed by reference.
Overall, this study illustrated how the concept inventory was developed, but not the impact of having it, or what it showed in the students and their learning.
Uncommon Teaching Languages - (specifically in intro courses)
An interesting effect of using an uncommon language in an introductory course is that novices and experts start with similar skills. Languages should be chosen to minimize churn; otherwise students feel that they haven't mastered any language. Related to this point, languages also exist in an institutional ecosystem. Furthermore, we want to minimize the keywords / concepts required for a simple program: a novice will adopt these keywords, but they are also "magic" and arcane. And consider how long the programs are, as we want novices to only have to write short code to start.
I also attended the SIGCSE business meeting and then the NCWIT reception. I have gone to NCWIT every year at SIGCSE, as I want to know what I should do (or not do) to not bias anyone's experience in Computer Science.
Thursday, December 17, 2015
Teaching Inclusively in Computer Science
When I teach, I want everyone to succeed and master the material, and I think that everyone in the course can. I only have so much time to work with and guide the students through the material, so how should I spend this time? What can I do to maximize student mastery? Are there seemingly neutral actions that might impact some students more than others? For example, before class this fall, I would chat with the students who arrived early, sometimes about computer games. Do those conversations create an impression that "successful programmers play computer games"? With these questions in mind, I want to revisit a pair of posts from the past year about better including the students.
The first is a Communications of the ACM post from the beginning of this year. It listed several seemingly neutral decisions that can bias against certain groups. Maintain a tone of voice that suggests every question is valuable, not "I've already explained that, so why don't you get it?" As long as students are doing their part in trying to learn, any failure is on me, the communicator.
The second is a Mark Guzdial post on Active Learning. The proposition is that using traditional lecture-style advantages the privileged students. And a key thing to remember is that most of us are the privileged, so even though I and others have "succeeded" in that setting, it may have been despite the system and not because of the teaching. Regardless of the instructor, the teaching techniques themselves have biases to different groups. So if we want students to master the material, then perhaps we should teach differently.
Active learning has a growing body of research showing that these teaching techniques help more students succeed at mastering a course, especially the less privileged students. Perhaps slightly less material is "covered", but students will learn and retain far more. Isn't that better?
Friday, August 28, 2015
Repost: Incentivizing Active Learning in the Computer Science Classroom
Studies have shown that using active learning techniques improves student learning and engagement. Anecdotally, students have brought up these points to me from my use of such techniques. I even published a study at SIGCSE on using active learning, comparing undergraduate and graduate students. That study raised an interesting point, to which I will return shortly: undergraduate students prefer these techniques more than graduate students do.
Mark Guzdial, far more senior than me, recently challenged Georgia Tech (where we both are) to incentivize the adoption of active learning. One of his recent blog posts lists the pushback he received, Active Learning in Computer Science. Personally, as someone who cares about the quality of my teaching, I support these efforts although I do not get to vote.
Faculty members at R1 institutions, such as Georgia Tech, primarily spend their time on research; however, they are not research scientists, and therefore they are also called upon to teach. You would expect that they would do this well. In meeting with faculty candidates, there was one who expressed that the candidate's mission as a faculty member would be to create new superstar researchers. Classes had been irrelevant to this candidate as a student, so there would be no need to teach well: the highest end (telos) of research justifies focusing solely on the students who succeed despite their instruction, just as the candidate did. Mark's blog post suggests that one day Georgia Tech or other institutions may be sued for this sub-par teaching.
What about engagement? I (along with many students and faculty) attended a visiting speaker's talk earlier this week and was able to pay attention for the hour-long talk, even though it was effectively a lecture. And for this audience, it was a good talk. The audience then takes away the meta-lesson that lectures can be engaging; after all, we paid attention. But we are experts in this subject! Furthermore, for most of us there, this is our subfield of Computer Science. Of course we find it interesting; we have repeatedly chosen to study it.
For us, the material we teach has become self-evidently interesting. I return to the undergraduate and graduate students that I taught. Which group is closer to being experts? Who has more experience learning despite the teaching? Who preferred that I just lecture? And in the end, both groups learned the material better.
Edit: I am by no means condemning all of the teaching at R1s, or even at Georgia Tech. There are many who teach well and who work on teaching well. The Dean of the College of Computing has also put some emphasis on this through teaching evaluations. Mark's post was partially noting that teaching evaluations are not enough; we can and should do more.