PyBryt is an open source Python library for pedagogical auto-assessment. Its goal is to empower students and educators to learn computer science and related disciplines through robust automated feedback, closing the gap between student need and instructor resources.
PyBryt is designed to allow instructors to write assignments such that students can implement any of a myriad of different solutions, all of which can be assessed by the same grading pipeline. PyBryt also incorporates a system for providing specific, in-depth feedback to students about how and where their implementations succeed or go wrong, while allowing the instructor to aggregate the results of its checks for grading.
Most autograders today are based on unit testing: students must implement a pre-defined API, and the autograder calls that API with specific inputs to assert that the submission produces the correct output. This naturally limits the ways instructors can construct assignments and the ways students can think about and solve complex problems. PyBryt aims to eliminate this restriction by allowing instructors to write checks that examine how a student goes about solving a problem, rather than simply asserting that a program with a specific input returns a specific output.
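To make the contrast concrete, a unit-testing autograder might look like the following hedged sketch (the `median` API, module name, and test values are illustrative, not part of PyBryt):

```python
import unittest

from submission import median  # the mandated API (hypothetical student module)


class TestMedian(unittest.TestCase):
    def test_median(self):
        # Only the final output is visible; how the student got there is not.
        self.assertEqual(median([1, 3, 2]), 2)
```

A PyBryt reference instead annotates values the solution is expected to produce along the way, so any student code that computes them, however it is structured, satisfies the check:

```python
import pybryt

pybryt.Value(sorted([1, 3, 2]))  # the data should be sorted at some point
pybryt.Value(2)                  # the correct median should appear in memory
```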
PyBryt’s core auto-assessment behavior operates by comparing a student implementation of a programming problem to a series of reference implementations provided by an instructor. A reference implementation defines a pattern of values and conditions expected to be present in each student implementation. By comparing each student implementation to any number of reference implementations, PyBryt allows the instructor to assess all manner of possible solutions, providing tailored, instructor-written feedback and advice to the student on how to bring their implementation closer to a reference. In so doing, PyBryt enables students to reflect on the design choices they made and improve their implementations in incremental steps.
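A minimal sketch of that workflow, using PyBryt’s annotation and checking APIs (the notebook file names and feedback messages here are placeholders):

```python
import pybryt

# --- In the reference notebook: annotate an expected intermediate value and
# attach instructor-written feedback for when it is (or is not) found.
data = [1, 3, 2]
pybryt.Value(
    sorted(data),
    success_message="Nice: you sorted the data before locating the median.",
    failure_message="Try sorting the data first; the median is positional.",
)

# --- On the grading side: compile the annotated notebook into a reference,
# trace the student's notebook, and check it against any number of references.
ref = pybryt.ReferenceImplementation.compile("median-reference.ipynb")
stu = pybryt.StudentImplementation("student-submission.ipynb")
result = stu.check(ref)
print(pybryt.generate_report(result))  # per-annotation feedback for the student
```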
Educators and institutions can leverage PyBryt to integrate auto-assessment and reference models into hands-on lab exercises and assessments. Some of PyBryt’s benefits are:
- Educators do not have to enforce the structure of the solution
- Learners practice algorithm design, code design, and solution implementation
- Learners receive quick and meaningful pedagogical feedback, which substantially contributes to the learning experience
- Complexity of the learner’s solution can be analyzed (see the sketch after this list)
- Plagiarism detection and support for reference implementations
- Easy integration into existing organizational or institutional grading infrastructure
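For instance, the complexity analysis mentioned above pairs a `TimeComplexity` annotation in the reference with a `check_time_complexity` block in the student’s notebook. A minimal sketch, assuming a hypothetical `binary_search` exercise:

```python
import pybryt
import pybryt.complexities as cplx

# Reference side: require the student's search to run in logarithmic time.
pybryt.TimeComplexity(cplx.logarithmic, name="search_runtime")

# Student side: each `check_time_complexity` block records the work done for
# the given input size; running it at a few sizes lets PyBryt fit the
# best-matching complexity class against the annotation.
for n in (2**8, 2**10, 2**12):
    data = list(range(n))
    with pybryt.check_time_complexity("search_runtime", n):
        index = binary_search(data, n - 1)  # hypothetical student function
```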
At Imperial College London, PyBryt has been used to assess pre-sessional materials for the Introduction to Python course and in both summative and formative assessments in the Modern Programming Methods course, which covers advanced Python and its scientific ecosystem. Students in these courses have described PyBryt’s “comprehensive feedback” as “immensely helpful” and a great tool for “pointing in the right direction.” PyBryt also helps students “find a breakthrough” on tough problems: once they have a basic idea, its feedback helps them fine-tune their solutions and gives “a good indication of where [the] errors [are].” Instructors in these courses appreciate PyBryt’s usefulness for providing feedback in an environment where instructor interaction can be limited: “Instead of raising their hand looking for a TA to explain the error message, students can rely on PyBryt explanations most of the time.” Another instructor comments that, even in its most basic form, PyBryt “saves teaching staff invaluable time by offering guidance, critique, and encouragement.”
You can learn more about PyBryt in its documentation, or from the Introduction to PyBryt and Advanced PyBryt Microsoft Learn modules.