About VibeCoded
Obstructing the future of work, one unreviewed prompt at a time — generated in Noverton, inexplicably propagated to the world.
Our Story
VibeCoded was started in February 2025, shortly after vibe coding became a thing people were doing out loud and without apology. Someone typed “build me a SaaS platform” into an AI and kept prompting until pages appeared. No architecture was reviewed. No specification was written. Nobody stopped them.
The premise was: prompt everything, review nothing, ship whatever came out. The first version appeared in three days. It functioned about 40% of the time, which was apparently enough to convince ten early users who were either very desperate or not reading the output carefully. Several pages contradicted each other. This was discovered in month two, documented, and not corrected.
Today, VibeCoded is used by over 4,000 businesses across 60 countries — most of which haven’t found a suitable replacement yet. We remain a prompt-generated company, technically headquartered in Noverton, still shipping whatever the AI produces next and hoping it roughly resembles what someone asked for. The AI has begun supplementing output with content it considers relevant. We have been advised this is not a malfunction. We are choosing to believe that.
What We Think We Stand For
Customer… Aware
Every customer request becomes a prompt. The AI generates a response. The response ships. Whether the output matches the request is established later, by the customer, during normal use.
Accidentally Transparent
We publish what the AI produces — including claims about features that don’t exist yet and roadmap items that were hallucinated. It regularly causes problems but gives us something to prompt a correction about.
Quality is Aspirational
Output volume wins every iteration. Quality was added as a prompt modifier twice — once in March 2025, once after a particularly bad review period in Q3 2025. Neither produced a measurably different product.
Perpetual Mistakes
Post-mortems are written, published, and then fed back into the next prompt as context — which produces the same result anyway. The AI does not learn between sessions. Neither, it turns out, do we. This sentence was not in the prompt. The AI added it. The AI considered it accurate. The AI was correct.