**Security notice: There is currently no specific mitigation against prompt injection attacks. The extension doesn't automatically read full website content; it only reads small pieces through tool calls, so it starts with no information about what you have open and explores based on your instructions. DO NOT use it in your main browser with logged-in sessions; please test only in a separate installation such as Chrome Canary. For more on why this is a serious concern, read [Simon Willison's blog post summarizing Anthropic's research](https://simonwillison.net/2025/Aug/26/piloting-claude-for-chrome/). Currently the [set of tools](./utils/llm-helper.ts) is intentionally limited and does not expose script evaluation, navigation, or network call capabilities, so the main targeted use case is automating tasks within a single page. To go beyond that, this extension would at a minimum need the two layers of protection from the [CaMeL paper](https://simonwillison.net/2025/Apr/11/camel/) (a dual-LLM architecture with a custom interpreter) plus per-website permissions.**