Vibeserver

Vibeserver is a mostly (well, occasionally) HTTP-compliant webserver that answers every request by invoking an LLM, which generates the response completely on the fly.

Only very basic scaffolding exists to slightly increase the chances of a parseable response – the rest is all left to the LLM's imagination, based on the request parameters (URL, hostname, headers including the HTTP referer, etc.)!
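
To make that concrete, here is a minimal sketch of the idea in Python – not the actual implementation; the prompt wording, the default model name, and the GET-only handler are all assumptions – using the standard-library HTTP server and the llm package's Python API:

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import llm  # https://llm.datasette.io/

    MODEL_NAME = os.environ.get("MODEL_NAME", "gpt-4o-mini")  # placeholder default
    PORT = int(os.environ.get("PORT", "3000"))

    class VibeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Hand the request parameters (path, hostname, headers) to the
            # model and let it imagine the rest.
            prompt = (
                "You are a web server. Reply with a plausible response body "
                "(and nothing else) for this HTTP request:\n"
                f"GET {self.path} HTTP/1.1\n"
                + "".join(f"{k}: {v}\n" for k, v in self.headers.items())
            )
            body = llm.get_model(MODEL_NAME).prompt(prompt).text().encode()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", PORT), VibeHandler).serve_forever()

The real scaffolding presumably does a bit more (content-type guessing, other HTTP methods), but request in, completion out is the whole trick.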

Demo

For a limited time, a demo instance is available on https://vibes.lxgr.net/. It's single-threaded and uses a free hosted LLM backend, so please go easy on it!

It's protected by HTTP basic authentication to provide some protection against crawlers etc., but more importantly to avoid confusion: the server will literally generate anything, including liability disclaimers (or a lack thereof), a permissive robots.txt file, a legal imprint, etc.

By entering the credentials below and visiting vibes.lxgr.net, or following any of the links further down, you acknowledge that anything hosted there exists for entertainment purposes only and is automatically generated by a process outside of my control. No contractual or legal agreements expressed there are binding in the real world. All data you provide will be routed to an external LLM inference provider and may be retained or published by that third party.

  • Username: vibesonly
  • Password: iconsent

Setup

  • Install uv, or otherwise install the Python packages llm and your preferred model adapter. I've successfully tested llm-mlx locally on a MacBook Pro and llm-openrouter as an API-based service.
    • For API-based models, you'll need to set up an API key.
    • Local models have to be installed per your plugin's instructions. For llm-mlx, running llm install llm-mlx and then llm mlx download-model <modelname> should do the trick. Non-thinking models work best for acceptable latency.
  • Configure your desired port number and model name via the PORT and MODEL_NAME environment variables (a quick smoke test is sketched below).
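
Before starting the server, a check along these lines confirms the configured model resolves and responds (a sketch; it assumes MODEL_NAME is already exported and uses the llm package's Python API):

    import os

    import llm

    # Resolve the configured model and make one tiny request.
    model = llm.get_model(os.environ["MODEL_NAME"])
    print(model.prompt("Reply with the single word: OK").text())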

Gallery

APIs

➜  ~ curl http://localhost:3000/api/myip
{
  "ip": "2<redacted>:1"
}%

Blog posts

https://vibes.lxgr.net/blog/2025/05/go-got-it-wrong-why-null-strings-are-essential.html

Hacker News

https://vibes.lxgr.net/hackernews/news?p=2

Guestbook

Leave a message!

https://vibes.lxgr.net/guestbook.php?snow=true&animate=true&color=pink

3D graphics

It knows GLSL!

https://vibes.lxgr.net/webgl/shader-demos/simple-rotating-triangle.html

Academia

https://vibes.lxgr.net/sigbovik/submissions2025

Anomalies

https://vibes.lxgr.net/scp-wiki/scp-3125

Feature roadmap

  • Image generation
  • Threads? (but it'll just eat through my token budget faster)
  • Persistence? (probably not)
  • Make it self-hosting (i.e. replace the Python script with an LLM prompt that generates it, then execute the result), or at least back as many methods as possible with https://github.com/awwaiid/gremllm/

Warnings

The output of this web server is inherently unpredictable. It may generate things you disagree with or do not want hosted on your website.

It will also serve all incoming requests, including those for robots.txt, and might happily invite crawlers that could quickly churn through a prepaid LLM API key's budget, or rack up high costs on a metered one.

Access control is accordingly advisable for several reasons.
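
For example, a gate along the following lines – a sketch of one possible approach, not the project's actual mechanism – rejects unauthenticated requests before a single token is spent (the credentials are placeholders, borrowed from the demo above):

    import base64
    from http.server import BaseHTTPRequestHandler

    # Pre-compute the expected Authorization header value.
    EXPECTED = "Basic " + base64.b64encode(b"vibesonly:iconsent").decode()

    class GatedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("Authorization") != EXPECTED:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="vibeserver"')
                self.end_headers()
                return  # no LLM call is made for unauthenticated requests
            ...  # authenticated: defer to the LLM-backed handler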

See also license.txt.
