Thursday, July 13, 2017

PhD Defense - Automated Data-Driven Hint Generation for Learning Programming

Kelly Rivers defended her PhD work this afternoon.  She will be returning to CMU this fall as a teaching professor.

Student enrollment is increasing, so more work is needed to automate support, as TAs and instructors do not scale.  Prior work (the Hint Factory) built models from prior student submissions, so that a current student's work can be located within the model, which then suggests how to proceed.  However, programming may not fit this model, since the space of possible student solutions is larger and more varied.
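As a rough sketch of that idea (my gloss on the Hint Factory, not code from the talk): states observed in prior student traces form a graph, and a hint is the next state along a shortest path to a known correct solution.

from collections import deque

def next_step_hint(state, transitions, goals):
    # transitions: dict mapping each observed state to the states that
    # prior students moved to next; goals: set of correct solution states.
    if state in goals:
        return state
    parent = {state: None}
    queue = deque([state])
    while queue:
        cur = queue.popleft()
        if cur in goals:
            # walk back to the first move out of the student's state
            while parent[cur] != state:
                cur = parent[cur]
            return cur
        for nxt in transitions.get(cur, ()):
            if nxt not in parent:
                parent[nxt] = cur
                queue.append(nxt)
    return None  # the student's state is not connected to any solution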

First, student code proceeds through a series of canonicalization steps - parsing to an AST, anonymizing names, and simplifying the structure - so that the following Python code:

import string
def any_lowercase(s):
  lst = [string.ascii_lowercase]
  for elem in s:
    if (elem in lst) == True:
      return True
    return False

becomes:

import string
def any_lowercase(p0):
  for v1 in p0:
    return (v1 in string.ascii_lowercase)
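As a minimal sketch of just the anonymization step (not Rivers' actual pipeline, which also simplifies the tree, and whose numbering details differ), an ast.NodeTransformer can rename parameters to p0, p1, ... and assigned variables to v0, v1, ...:

import ast

source = '''
import string
def any_lowercase(s):
    for elem in s:
        return elem in string.ascii_lowercase
'''

class Anonymizer(ast.NodeTransformer):
    # Rename parameters to p0, p1, ... and assigned variables to v0, v1, ...
    # Module-level names like string.ascii_lowercase are left untouched.
    def __init__(self):
        self.mapping = {}
        self.var_count = 0

    def visit_FunctionDef(self, node):
        for i, arg in enumerate(node.args.args):
            self.mapping[arg.arg] = "p%d" % i
            arg.arg = self.mapping[arg.arg]
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        if node.id not in self.mapping and isinstance(node.ctx, ast.Store):
            self.mapping[node.id] = "v%d" % self.var_count
            self.var_count += 1
        node.id = self.mapping.get(node.id, node.id)
        return node

tree = Anonymizer().visit(ast.parse(source))
print(ast.unparse(tree))  # ast.unparse requires Python 3.9+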

The studies then covered 41 different problems with hundreds of correct solutions and thousands of incorrect solutions.  The model generates edits and chains them into hints as necessary.  In more than 99.9% of cases, the model could successfully generate a hint chain that reaches a correct solution.
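To give a feel for edit chaining (a hypothetical text-diff sketch; ITAP actually diffs canonicalized ASTs), one might pick the closest known solution and emit one edit at a time:

import difflib

def hint_chain(student, solutions):
    # Pick the known correct solution closest to the student's code,
    # then yield one hint-sized edit at a time until the goal is reached.
    goal = max(solutions,
               key=lambda s: difflib.SequenceMatcher(None, student, s).ratio())
    ops = difflib.SequenceMatcher(None, student, goal).get_opcodes()
    for tag, i1, i2, j1, j2 in ops:
        if tag != 'equal':
            yield (tag, student[i1:i2], goal[j1:j2])

Each yielded tuple is one edit; applying them in sequence walks the student's code to the chosen correct solution.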

To further test this approach, a model seeded with only the teacher's solution was compared against the final model trained on all student data.  Ideally, the final model proposes fewer edits than the initial model.  This was true for 56% of problems; 40% of problems were already optimal; and the remaining 3% are opportunities to improve the model.

Next, given this model, how do the hints impact student learning?  In a first study, half of the students were given optional access to the hint model; measured with a pre/post assessment, the results were a wash.  A second study was then designed that required students to use the system within a two-hour OLI module.  Hints were provided with every submission, either before or after the midtest in the OLI module.  Only half of the students actually proceeded through the module in order; however, most of the learning occurred within the pretest -> practice -> midtest sequence, so including those students increased the sample.  The results show that the hints reduce the time required to achieve the same amount of learning.

Interviews with students showed that they need and want targeted help on their own work; however, the hints generated thus far were not always useful.  She proposed another study based on different styles of hints: location, next-step, structure, and solution.  This study found that participants with lower expertise wanted more detailed hints.  Hints were sometimes used to learn what is wrong rather than how to solve it.  And often students already know what to do and just need a reference (an example or prior work) showing how to do it, rather than a hint about what to do.
