macros: simplify compile-fail tests #2120
Conversation
This PR touches files which potentially affect the outcome of the tests of an exercise. This will cause all students' solutions to affected exercises to be re-tested. If this PR does not affect the result of the tests (or, for example, adds an edge case that is not worth rerunning all tests for), please add the following to the merge-commit message, which will stop students' tests from re-running. Please copy-paste to avoid typos. For more information, refer to the documentation. If you are unsure whether to add the message or not, please ping
Ugh, wait. If the test module is included in
Also, the test runner tries to extract the body of a test to display it to users when a test fails. That wouldn't work (yet?) for these new doctests. I don't think that's a big problem; the previous snippets weren't great either, since they just called another function with a file name.
Force-pushed 1663b00 to 34727ef
I think this is true; it will technically enable students to cheat. And it would probably be possible to fix that, i.e. mark
Force-pushed 34727ef to 25db67f
Ok, I think I found a pretty good low-effort solution to the possibility of cheating. We can import the module in the actual tests file; that will cause the normal tests to fail to compile if users attempt to just not include the
I was experimenting with my solution locally and it didn't work. My solution was wrong, but the compile-fail tests didn't actually fail. I'm not 100% sure why that was the case, but I assume it's because I've set up a cargo workspace for my own exercise solutions. This moves the target folder somewhere else, but the compile-fail tests assume a specific target directory location.

Using doctests should be much more reliable, since it relies on Rust's built-in way of doing this. It's a little unconventional compared to the other exercises, but so were the previous compile-fail tests.

One downside is that users who solve the exercise locally will have to learn a new way of un-ignoring a test. I've tried to make this smooth with appropriate comments. The test runner should have no problems with this, since it uses `cargo test -- --include-ignored` to run all tests, which works for these new doctests just like for the normal ones.
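As a rough illustration of the mechanism (a minimal sketch, not the exercise's actual tests): rustdoc treats a `compile_fail` fenced block as a test that passes only when the snippet fails to compile, and an `ignore`d block is skipped unless explicitly requested.

```rust
/// This doctest passes only if the snippet does NOT compile:
///
/// ```compile_fail
/// let x = 5;
/// let y = &x;
/// *y += 1; // error: cannot assign through a shared reference
/// ```
///
/// This doctest is skipped by a plain `cargo test`; running
/// `cargo test -- --include-ignored` picks it up:
///
/// ```ignore
/// assert_eq!(2 + 2, 4);
/// ```
pub fn doc_tests_only() {}
```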
Force-pushed 25db67f to 240f7ee
```rust
    unused_imports,
    reason = "prevent user from deleting the mod declaration"
)]
use macros::compile_fail_tests;
```
Technically speaking, what's to prevent a user from exporting a different module as `compile_fail_tests` in `lib.rs`?
In general I think the idea is good, but I feel like it can always be cheated somehow...
Yes, that's true. I feel like it should be ok. It's not like users can win something unfairly by cheating. They're just cheating themselves out of learning.
I think there would be a way to make it completely bulletproof. We could turn `lib.rs` into a "test" file, meaning students cannot modify it. It includes the doctests and a `mod actual_solution` or whatever. `src/actual_solution.rs` would be the file users can modify.
But I don't want to make the exercise structure super confusing for normal users who wouldn't even think of cheating, if the risk is so low anyway.
What if we include a test in `tests/macros.rs` that reads the solution file and makes sure a normal `mod compile_fail_tests;` statement is present in the file? 😅 I guess it's possible to have a string literal with that content... The next step is to parse actual Rust syntax 😄
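That suggestion could be sketched roughly like this (hypothetical helper name; a naive line match which, as noted, a string literal or creative formatting could still defeat):

```rust
use std::fs;

/// Naive guard: succeed only if the given source file still contains a
/// plain `mod compile_fail_tests;` declaration on its own line.
/// (Hypothetical sketch, not the exercise's real test code.)
fn declares_compile_fail_mod(path: &str) -> bool {
    match fs::read_to_string(path) {
        Ok(source) => source
            .lines()
            .any(|line| line.trim() == "mod compile_fail_tests;"),
        // A missing or unreadable file also counts as a failure.
        Err(_) => false,
    }
}
```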
Maybe it's fine the way it is... 😆
It's not really a big deal; if someone cheats by creating another module, then more power to them.
I also agree that we should keep users editing `lib.rs`. I think cheating is better than having a whole new directory structure.
ellnix left a comment:
LGTM
[no important files changed]