Description

Autograding systems are increasingly deployed to meet the challenges of teaching programming at scale. Studies show that formative feedback can greatly help novices learn programming. This work explores techniques for extending an autograder to provide corrective and formative feedback on programming assignment submissions, using a mixed approach. The dissertation first introduces a framework that helps instructors identify common student errors for a programming assignment and write hints that the autograder can deliver automatically when those errors occur. The approach starts with the design of a knowledge map, the set of concepts and skills necessary to complete an assignment, followed by the design of the assignment itself and of a comprehensive test suite for identifying logical errors in submitted code. The test cases are used both to test student submissions and to learn classes of common errors. For each assignment, the instructor trains a classifier that automatically categorizes the errors in a submission based on the outcome of the test suite. The instructor then maps each error class to the corresponding concepts and skills and writes hints that help students uncover their misconceptions and mistakes. I apply this methodology to two assignments in the Introduction to Computer Science course and find that the automatic error categorization achieves 90% average accuracy. I report and compare data from two semesters: one in which hints were given for the two assignments and one in which they were not. Results show that the percentage of students who complete an assignment after an initial erroneous submission is three times greater when hints are given. However, even when hints are provided, on average almost half of the students fail to correct their code so that it passes all the test cases.
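The error-categorization step described above can be sketched, purely illustratively, as a classifier over test-outcome vectors. The abstract does not specify the classifier type, so this minimal sketch assumes a nearest-neighbour vote; the error classes, hints, and labeled examples are all hypothetical stand-ins for the instructor-provided data.

```python
# Hypothetical sketch: categorize a submission's error class from its
# test-suite outcome vector (True = test passed), then look up the
# instructor-written hint for that class. Not the dissertation's actual
# classifier; labels and training data are invented for illustration.
from collections import Counter

def hamming(a, b):
    # number of tests on which two submissions disagree
    return sum(x != y for x, y in zip(a, b))

def classify(examples, outcome, k=3):
    # examples: (outcome_vector, error_label) pairs labeled by the
    # instructor from past submissions; k-nearest-neighbour vote
    nearest = sorted(examples, key=lambda ex: hamming(ex[0], outcome))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

HINTS = {  # instructor-written hints keyed by error class (hypothetical)
    "off_by_one": "Check your loop bounds: the last element may be skipped.",
    "empty_input": "What should your function return for an empty list?",
}

examples = [
    ((True, True, False, False), "off_by_one"),
    ((True, True, True, False), "off_by_one"),
    ((False, True, True, True), "empty_input"),
    ((False, False, True, True), "empty_input"),
]
print(HINTS[classify(examples, (True, True, False, True))])
```

A new submission's pass/fail pattern is matched against previously labeled patterns, so the hint shown to the student reflects the error class most consistent with which tests failed.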
While the first approach gave promising results as a first step toward providing formative feedback during autograding, it has limitations, in particular that it is labor-intensive. The second part of the dissertation therefore explores an algorithmic approach that requires far less instructor effort. Using insights about student errors gained from applying the first approach, the second approach repairs student programs and provides feedback on how to correct the program and on the associated misconceptions. I start by compiling a set of repairs that are commonly needed to fix student programs across assignments. The repair set includes modifications both to individual statements and to the program structure; these repairs address errors that point to misconceptions about program structuring and other programming concepts. Each repair modifies the behavior of the student program until it matches the expected behavior, measured with a test suite similar to that of the first approach. I also provide a search procedure for finding a set of repairs that corrects the behavior of a student program. I apply this approach to multiple assignments from the Introduction to Computer Science course and report a higher success rate (i.e., fully correcting the program) than with previous approaches, an increase of 20% of the attempted student programs for specific assignments. Manual analysis of the repaired programs reveals that the repairs do not introduce stylistic issues such as dead code. However, for some assignments the approach succeeds on fewer than half of the attempted student programs. More research is needed to understand how to provide accurate automated feedback for all student programs and all types of assignments, and how this kind of feedback specifically impacts learning and teaching.
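The repair search described above can be illustrated, under stated assumptions, as a breadth-first search over sequences of candidate repairs, accepting the first program variant that passes the full test suite. The abstract does not describe the search strategy or repair representation; the textual repairs, the toy `mean` program, and the test suite below are hypothetical stand-ins for the dissertation's structural modifications.

```python
# Illustrative sketch only: BFS over combinations of candidate repairs,
# stopping at the first variant whose behavior matches the test suite.
from collections import deque

def passes(src, tests):
    # run the (repaired) student source and check it against the tests
    env = {}
    try:
        exec(src, env)
        f = env["mean"]
        return all(f(x) == y for x, y in tests)
    except Exception:
        return False

# candidate repairs: textual stand-ins for statement/structure edits
REPAIRS = [
    ("fix divisor", lambda s: s.replace("len(xs) - 1", "len(xs)")),
    ("guard empty list", lambda s: s.replace(
        "    return total",
        "    if not xs:\n        return 0\n    return total")),
]

def search(src, tests, max_depth=3):
    # breadth-first search over repair sequences up to max_depth
    queue, seen = deque([(src, [])]), {src}
    while queue:
        prog, applied = queue.popleft()
        if passes(prog, tests):
            return prog, applied
        if len(applied) < max_depth:
            for name, repair in REPAIRS:
                variant = repair(prog)
                if variant not in seen:
                    seen.add(variant)
                    queue.append((variant, applied + [name]))
    return None, None

student = """
def mean(xs):
    total = sum(xs)
    return total / (len(xs) - 1)
"""
tests = [([2, 4], 3), ([5], 5)]
fixed, applied = search(student, tests)
print("applied repairs:", applied)
```

Because the search explores shorter repair sequences first, it favors minimal fixes, which is one plausible way to avoid the stylistic issues (such as dead code) that the manual analysis checks for.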