Let us say we have N reference implementations for a particular problem. The student works on the problem and submits their solution. Using PyBryt, we compare the student's implementation against the N reference implementations, which yields N feedback reports (which annotations are or are not satisfied for each reference). The question is: what feedback do we give back to the student?
- Giving all N feedback reports to the student could be very confusing; the student would not know which feedback to follow.
- Could the solution be to derive a metric that quantifies "how close" the student is to each reference implementation? This way, we would provide the feedback report of the reference solution the student is most likely implementing (see the first sketch after this list).
- Should there be more sophisticated logic behind the scenes? For instance, if the student imported NumPy (or created an array of zeros), they are most likely following a particular reference (a toy version of this idea is sketched below as well).
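One way to make the "how close" idea concrete is to score each reference by the fraction of its annotations the student satisfies, and return the feedback report of the top-scoring reference. A minimal sketch, assuming we have already extracted one boolean per annotation from each of the N PyBryt reports (the reference names and data shapes here are illustrative, not PyBryt's actual API):

```python
from typing import Dict, List

def closest_reference(satisfaction: Dict[str, List[bool]]) -> str:
    """Pick the reference whose annotations the student satisfies most.

    `satisfaction` maps each reference's name to a list of booleans,
    one per annotation (True = satisfied). The score is simply the
    fraction of satisfied annotations; ties go to the first reference.
    """
    def score(flags: List[bool]) -> float:
        return sum(flags) / len(flags) if flags else 0.0

    return max(satisfaction, key=lambda name: score(satisfaction[name]))

# Example: the student satisfies 3/4 annotations of the NumPy-based
# reference but only 1/3 of the pure-Python one, so we would return
# the NumPy reference's feedback report.
reports = {
    "numpy-reference": [True, True, True, False],
    "pure-python-reference": [True, False, False],
}
print(closest_reference(reports))  # -> "numpy-reference"
```

A plain satisfaction fraction is only one possible metric; annotations could also be weighted by how discriminative they are between the references.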
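For the heuristic in the last bullet, a toy sketch that scans the student's source text for tell-tale signals of a reference. The signal table is entirely made up for illustration; a real one would be written per assignment by the instructor, and the resulting boosts could be combined with the annotation-satisfaction score above (e.g., as a tiebreaker):

```python
def reference_signals(source: str) -> dict:
    """Boost a reference's score when the student's code shows a
    tell-tale signal for it (e.g., a NumPy import suggests they are
    following the NumPy-based reference).
    """
    signals = {
        "numpy-reference": ["import numpy", "np.zeros"],
        "pure-python-reference": ["for ", "while "],
    }
    return {
        ref: sum(1.0 for pattern in patterns if pattern in source)
        for ref, patterns in signals.items()
    }

print(reference_signals("import numpy as np\nx = np.zeros(10)"))
# -> {'numpy-reference': 2.0, 'pure-python-reference': 0.0}
```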
This is a summary of some open questions we started brainstorming in one of our previous tech meetings, posted here to encourage discussion. All ideas are welcome :)