Used daily: Yes, openly
What it replaces: Typing, not thinking
Surcharge on your invoice: None
Yes, we use AI
It would be silly to pretend otherwise, so we will not. Coding assistants from the commonly used providers sit inside our editor the way Prettier and ESLint do. They run while we work. They suggest. We accept what is right, reject what is not, and write the rest ourselves. We will tell you which tools were in play on your project if you ask. We do not consider it controversial, and we do not consider it a secret.
What it actually does
The honest list of where AI earns its keep, day to day:
- Scaffolding a new controller, migration or component from a sentence of intent.
- Writing the first 80 percent of a unit test suite that we then read, fix and extend.
- Refactoring repetitive code that does not deserve clever metaprogramming.
- Reading unfamiliar APIs faster, because the assistant has already crawled the docs we are about to read.
- Reviewing diffs as a second pair of eyes that never gets tired.
- Translating intent into syntax for languages we touch less often (a glue script in Bash, a one-off Python munger, a Postgres window function).
None of that is exciting. All of it is real time saved on every project.
What it does not do
AI does not pick the right database for your write pattern. It does not look at your team and tell you whether you have hired for the next eighteen months or the next eighteen years. It does not say "this feature is the wrong feature to be building right now". It does not have taste. It does not have skin in the game. Every decision a Blax engineer makes on your project is made by a Blax engineer, with their name on the commit, after weighing the trade-offs an assistant cannot weigh because it does not know what you mean to do six months from now.
What this means for your bill
The customer-facing version is simple: a faster tool in our hands means more value for the same money in yours. We do not run a meter that says "AI did this in two seconds, we will charge thirty minutes anyway". The work that used to be billable was the work, not the typing speed; what AI removed was the typing speed. The same fixed-price engagement now produces a slightly better product, or finishes a few days earlier, or absorbs a small scope change without a change order. That is the deal we want with our customers, and AI helps us keep it.
We still ship the code
Code only ships from a Blax repo when it works, and that means tests pass, types check, and a human at the keyboard can debug it later if a test was wrong. Critical paths and anything touching security, billing or a system boundary get a full line-by-line read; the rest leans on the test suite, because that is the contract that actually catches regressions in six months when nobody remembers the diff. We do not commit code we cannot debug. We do not paste in dependencies we have not verified. We do not auto-merge AI patches. The accountability is the same as it was in 2019: if something we shipped misbehaves, we own it and we fix it. AI is a force-multiplier on the work, not a substitute for the responsibility.
AI changed the typing speed, not the engineering. Customers benefit twice: faster turnaround, same accountability.
Curious how this plays out in practice?
These essays are the why. The how shows up in the projects we ship. Drop us a note and we can talk about your specific case.