I’m a CS student at the University of Guelph interested in building AI systems and understanding how they fail, especially in security, privacy, and adversarial settings.
| Security & Red Teaming | Privacy-Preserving ML | Software & AI Systems |
|---|---|---|
| Jailbreak evaluation | Federated learning | Legal AI systems |
| Adversarial attacks | Secure aggregation | Clinical AI systems |
| Prompt injection | Differential privacy | Document review systems |
| Multi-agent safety | Privacy-utility tradeoffs | Word plugin workflows |
- LLM Redteam Lab: Automated LLM red-teaming system for prompt injection and guardrail testing.
- Federated Poison Simulator: Simulation exploring poisoning attacks against federated learning aggregation.
- NPM Scanner: CLI vulnerability scanner for `package.json` dependencies using the OSV database.
- Private Data Vault: Local-first encrypted vault combining strong encryption with privacy concepts.
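The core of an OSV-backed dependency check is a single POST per package to the public OSV query endpoint. A minimal sketch of that lookup (function names here are hypothetical, not the scanner's actual API):

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"

def build_query(name: str, version: str, ecosystem: str = "npm") -> dict:
    # OSV query payload: package coordinates plus a concrete version
    return {"package": {"name": name, "ecosystem": ecosystem}, "version": version}

def query_osv(name: str, version: str) -> list:
    # POST the query; OSV responds with {"vulns": [...]}, or {} when clean
    data = json.dumps(build_query(name, version)).encode()
    req = urllib.request.Request(
        OSV_API, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])
```

A scanner then just iterates over the `dependencies` map in `package.json`, calling `query_osv` for each name/version pair and reporting any non-empty result.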
I also publish open-source CS notes in Markdown to make technical topics easier for fellow students to learn:
Format: Markdown with LaTeX math · Best viewed in Obsidian or GitHub
