I work full time and am not taking on any contract, freelance, or collaborative work. All of the development that I do outside of my full-time job is personal tinkering and research, either to be uploaded here as open source or to be kept private for personal use.
If you are here because someone is using SomeOddCodeGuy to apply for work or collaboration, or because you are already working with someone claiming to be SomeOddCodeGuy, you are being scammed. There are no exceptions to this.
If you were given an excuse as to why this message somehow doesn't apply to your situation, you are being scammed. Whoever you are talking to is not me and has nothing to do with me.
I'm a developer who has been exploring the world of LLMs as a hobby since 2023. My main focus is on locally run, offline LLMs, which I mostly use for even more hobby tinkering.
I'm quite passionate about the power of workflows with LLMs, and as a developer I have generally preferred manual, chat-style interaction with workflow-driven LLMs over handing a task off to an automated agent. With that said, Claude Code has won me over, and I now find myself using it far more. However, I don't "vibe code". I treat it like a junior developer: I handle all of the architecting, planning, and design up front via AI-assisted chat workflows and heavy use of Deep Research functionality, leaving very little creative expression to the agents.
Even with all of that up-front work (which generally takes a fair bit of time), I find that I get more value, and faster iterations, than folks who just vibe code and then give up in the final stretch because the bugs pile up and maintainability falls apart.
Wilmer is still a huge part of my daily LLM use, though. I use it primarily for chatbot workflows (decision trees and whatnot to better control the output quality), and also for quality gates when running small agent scripts. I've got a lot more plans for Wilmer, even as it gets up there in age.
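For those curious what I mean by a quality gate: the idea is just to have a model review an output against the task before accepting it, and regenerate with the critique if it fails. Below is a minimal sketch of that pattern, not Wilmer's actual implementation; `llm`, `gated_generate`, and the prompts here are hypothetical placeholders, where `llm` stands in for any function that sends a prompt to a model and returns its reply.

```python
from typing import Callable

def gated_generate(llm: Callable[[str], str], task: str, max_attempts: int = 3) -> str:
    """Generate an answer, then loop: review it, and regenerate from the critique if it fails."""
    draft = llm(task)
    for _ in range(max_attempts):
        verdict = llm(
            "You are a reviewer. Reply PASS if the answer below fully satisfies "
            f"the task; otherwise list what is wrong.\n\nTask: {task}\n\nAnswer: {draft}"
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        # The gate failed: feed the critique back in and try again.
        draft = llm(
            f"Task: {task}\n\nPrevious answer: {draft}\n\n"
            f"Reviewer feedback: {verdict}\n\nProduce a corrected answer."
        )
    return draft  # best effort after exhausting attempts
```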
I started Wilmer during the Llama 2 era based on the idea that open-weight models at the time were weak as generalists compared to the big proprietary models like ChatGPT; however, individual fine-tunes within scoped domains (like coding or medical) could often compete with those big models. My goal has always been to try to find a way, either through routing or workflows, to help my local models keep pace with the big APIs.
Obviously, modern open-weight models are strong enough that they don't need that help nearly as much, but that just means the same methods can push those models even further.
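To make the routing idea concrete, here's a minimal sketch of prompt routing, not Wilmer's actual code: classify the request into a domain, then dispatch it to a model fine-tuned for that domain. The URLs, model name, and categories are placeholder assumptions for two locally hosted OpenAI-compatible servers.

```python
import json
import urllib.request

# Placeholder endpoints: imagine a coding fine-tune on one port and a generalist on another.
ROUTES = {
    "coding": "http://localhost:5001/v1/chat/completions",
    "general": "http://localhost:5002/v1/chat/completions",
}

def chat(url: str, prompt: str, max_tokens: int = 512) -> str:
    """Send one user message to an OpenAI-compatible endpoint and return the reply."""
    body = json.dumps({
        "model": "local-model",  # placeholder name; some local servers ignore this field
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode()
    req = urllib.request.Request(url, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def route(user_prompt: str) -> str:
    # Step 1: ask a model to classify the request into a known domain.
    label = chat(
        ROUTES["general"],
        "Answer with one word, 'coding' or 'general': which domain does this "
        f"request belong to?\n\n{user_prompt}",
        max_tokens=5,
    ).strip().lower()
    # Step 2: send the real request to the specialist for that domain.
    return chat(ROUTES.get(label, ROUTES["general"]), user_prompt)

if __name__ == "__main__":
    print(route("Write a Python function that reverses a linked list."))
```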
I'm not a Python developer by trade; I picked it up to work on Wilmer, and I've been learning it ever since. Some of the mess in the codebases here is tech debt from fumbling along early on, before I understood the language better. In my day job, I'm a dev manager who mostly works with C# and web tech.
- This started as a Reddit post that I later turned into an article, since it seemed like something a lot of folks were interested in reading.
- M2 Ultra Mac Studio speed tests from freshly loaded models [GitHub Mirror]
- M2 Ultra Mac Studio speed tests utilizing KoboldCpp's context shifting [GitHub Mirror]
- M3 Ultra running Command-A 111b and Llama 3.1 405b [GitHub Mirror]
- M3 Ultra Deepseek V3 Run Speeds and Memory Costs [GitHub Mirror]
- M3 Ultra R1-0528 Run Speeds and Memory Costs + MLA difference [GitHub Mirror]
- M3 Ultra running Qwen3 235b, GPT-OSS-120b, GLM 4.5, and Deepseek V3.1
- M3 Ultra comparison of Q8_0 and MXFP4 GGUFs for GLM 4.6
- Comparison of M2 Max, M2 Ultra and RTX 4090 speeds [GitHub Mirror]
- Comparison of M2 Ultra and M3 Ultra Speeds [GitHub Mirror]