Technical Specification
For the operators who want to understand the technology before they commit.
01 / What KIT Is
There is a meaningful difference between an AI tool and an AI system. Most marketing tools are the former: one model, one prompt box, one output. You get what the model knows how to do. When the task changes, you adapt your prompt and hope.
KIT is the latter.
Twenty-four specialist agents. Each one has a defined role, a documented scope, and clear handoff boundaries. Brand voice, content strategy, copywriting, quality control, SEO, internal linking, editorial compliance, data collection, learning. The agents work in sequence. None of them operate outside their defined job. When one finishes, the next one takes over with the previous output as its input.
The practical result is a system that behaves less like a tool you operate and more like a team you brief. You tell it what you need. It works through the production chain and delivers compliant output. A 300-page hostel directory built with KIT takes under two weeks, start to finish, without a single brief being touched twice.
This page is the full technical account. If the pricing is what you want, that is on the KIT page.
02 / How It Works
Briefly: submit a brief, receive content. Less briefly: here is what happens in between.
The brief is a structured form that takes three to five minutes to complete. You specify content type, channel, format, and context (a seasonal push, a specific offer, an upcoming event). That brief is the only input KIT requires from you.
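As a minimal sketch, a structured brief like this might be modelled as plain data. The field names here are illustrative assumptions, not KIT's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative brief schema -- field names are assumptions, not KIT's real API.
@dataclass
class Brief:
    content_type: str              # e.g. "blog_post", "social_caption"
    channel: str                   # e.g. "website", "instagram"
    fmt: str                       # e.g. "1200_words", "short_caption"
    context: Optional[str] = None  # e.g. "seasonal push", "upcoming event"
    urgent: bool = False
    brand_compliance_flags: list = field(default_factory=list)

brief = Brief(
    content_type="blog_post",
    channel="website",
    fmt="1200_words",
    context="seasonal push",
)
```

The point of a structured brief rather than a free-text prompt is that every downstream decision can key off a known field.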
The orchestration layer reads the brief and decides which agents activate, in what order, and with what constraints. A short social caption follows a different route to a 1,200-word blog post. A brief marked urgent enters a fast queue. A brief that flags brand compliance requirements triggers additional review steps automatically, with no action needed from you.
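The routing logic described above can be sketched as a small planning function. The agent names and rules below are assumptions for illustration, not KIT internals:

```python
def plan_run(brief: dict) -> dict:
    """Illustrative orchestration: choose agents, order, and queue from a brief.
    Agent names and branching rules are assumptions, not KIT's actual logic."""
    # Base production chain; short formats skip the long-form strategy step.
    if brief["content_type"] == "social_caption":
        agents = ["copywriting", "brand_compliance", "quality_control"]
    else:
        agents = ["content_strategy", "copywriting",
                  "brand_compliance", "quality_control"]
    # Flagged compliance requirements add a review step automatically.
    if brief.get("brand_compliance_flags"):
        agents.append("editorial_compliance")
    queue = "fast" if brief.get("urgent") else "standard"
    return {"agents": agents, "queue": queue}

plan = plan_run({"content_type": "blog_post", "urgent": True,
                 "brand_compliance_flags": ["regulated_claims"]})
```

A short caption and a long post genuinely follow different routes, and nothing about that routing requires operator input beyond the brief.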
The active agents work in sequence. The content strategy agent defines the angle. The writing agent drafts against it. The brand compliance agent checks the output against your profile: your voice, your approved vocabulary, your property-specific details, the things you told KIT not to say. The quality control agent applies defined output standards.
The output passes through a final review gate. Pass: it appears in your dashboard, ready for your approval. Fail: the task re-routes, the failure is logged, and the relevant agent flags the issue. You see only the completed, compliant version. Failed attempts are not your problem to manage.
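The gate-and-retry behaviour can be sketched as a loop: agents run in sequence, the review gate either passes the output or logs the failure and re-runs. Function signatures here are assumptions:

```python
def run_with_gate(task, agents, review, max_attempts=3):
    """Illustrative final-review gate: a pass surfaces the output, a fail
    re-routes and logs. Signatures are assumed, not KIT's real interface."""
    failures = []
    for attempt in range(1, max_attempts + 1):
        output = task
        for agent in agents:          # agents run in sequence; each takes the
            output = agent(output)    # previous output as its input
        ok, reason = review(output)
        if ok:
            return output, failures   # the operator sees only this version
        failures.append({"attempt": attempt, "reason": reason})  # logged quietly
    raise RuntimeError(f"gate not passed after {max_attempts} attempts")

# Toy agents and a toy gate, just to show the control flow.
toy_agents = [str.strip, str.title]
def review(text):
    return (bool(text), "empty output")

final, failure_log = run_with_gate(" hello world ", toy_agents, review)
```

The failure log is the key detail: failed attempts are recorded for the learning cycle, but only the compliant version reaches the dashboard.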
Every run generates data. Quality scores, failure flags, which approaches got approved quickly and which needed revision. All of it feeds back into the system. We will come back to this in section 3.
03 / The Forge
Ask any agency what their learning process looks like and most of them will pause. They learn by losing clients and by gut feel. KIT's learning cycle is called The Forge, and unlike most agency processes, it is documented and runs on a schedule.
Once a month.
Agents review each other's output from the previous period. Quality scores are analysed across the whole batch. Content that scored above baseline is examined for what made it work. Those approaches are codified as exemplars: reference outputs that future agents read before starting similar work. Content that failed is examined for root cause, and those failure patterns are documented as anti-exemplars, meaning the system has a record of exactly what went wrong and a block on repeating it.
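The monthly sort into exemplars and anti-exemplars can be sketched as a simple pass over scored outputs. The baseline threshold and record fields are assumptions:

```python
def forge_cycle(batch, baseline=0.8):
    """Illustrative Forge pass: codify strong outputs as exemplars and
    failed ones as anti-exemplars. Threshold and fields are assumptions."""
    exemplars, anti_exemplars = [], []
    for item in batch:
        if item["score"] >= baseline:
            # A reference output future agents read before similar work.
            exemplars.append({"output": item["output"], "why": item["notes"]})
        elif item.get("failed"):
            # A documented failure pattern plus a block on repeating it.
            anti_exemplars.append({"root_cause": item["notes"], "blocked": True})
    return exemplars, anti_exemplars

exemplars, anti_exemplars = forge_cycle([
    {"output": "A", "score": 0.92, "notes": "strong seasonal angle"},
    {"output": "B", "score": 0.40, "failed": True, "notes": "off-brand vocabulary"},
])
```

Both lists are operational artefacts, not model weights, which is why the next paragraph matters.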
This is not model fine-tuning. No new AI model is trained. The underlying models stay the same. What changes is the operational layer: the instructions, the constraints, the examples, and the documented failure modes that guide every future run. Think of it as the difference between a new hire and one who has spent twelve months learning your exact brand, your specific guests, and which angles your audience actually responds to.
The improvement is measurable. Quality scores, revision rates, and content approval speed all shift over a twelve-month engagement. Clients who have been on KIT for a year are not running the same system they started with.
04 / Routing
Most routing decisions in AI tools are invisible. The tool picks a model, probably the one the developer defaulted to, and uses it for everything. You have no visibility into whether that was the right choice for your task.
KIT's routing layer is called the Quantum Simulation Engine, and the naming is deliberate.
In quantum mechanics, a system holds multiple possible states simultaneously until observation collapses it to one. The routing engine works on the same principle. For any given brief, multiple foundation models are evaluated before a single one is committed to. The brief is assessed against a portfolio of leading models and specialist models optimised for different task types. The selection is based on three factors: the task type and its quality requirements, the cost efficiency of the available options, and the performance history for comparable briefs.
A narrative blog post about a surf camp's best-wave season routes differently from a structured FAQ about hostel booking policies. The engine knows this, because the performance data tells it which model produces output that passes brand compliance faster for each task type.
The model that handles your brief is not the cheapest option. It is not the most expensive. It is the one most likely to produce output your brand compliance agent approves on the first pass.
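The three-factor selection can be sketched as a weighted score over candidate models. The weights, field names, and numbers below are invented for illustration; the point is that task fit and compliance history outweigh raw cost:

```python
def select_model(brief_type, candidates):
    """Illustrative routing score over task fit, cost efficiency, and
    performance history. Weights and fields are assumptions."""
    def score(m):
        task_fit = m["task_fit"].get(brief_type, 0.0)        # quality for this task type
        cost_eff = 1.0 / m["cost_per_1k_tokens"]             # cheaper scores higher
        history = m["first_pass_rate"].get(brief_type, 0.5)  # past compliance pass rate
        return 0.6 * task_fit + 0.1 * cost_eff + 0.3 * history
    return max(candidates, key=score)

best = select_model("structured_faq", [
    {"name": "model_a", "task_fit": {"structured_faq": 0.9},
     "cost_per_1k_tokens": 2.0, "first_pass_rate": {"structured_faq": 0.85}},
    {"name": "model_b", "task_fit": {"structured_faq": 0.6},
     "cost_per_1k_tokens": 0.5, "first_pass_rate": {"structured_faq": 0.55}},
])
```

In this toy example the cheaper model loses: its cost advantage does not offset weaker task fit and a worse first-pass history.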
05 / Comparison
Single-model systems are consistent in exactly the wrong way.
| | Single-model system | KIT |
|---|---|---|
| Output quality | Constrained by one model's strengths and weaknesses throughout | Routed to the optimal model per task type |
| Cost | Full retail pricing on every request, regardless of task complexity | Simpler tasks route to cost-efficient models; average cost per output falls |
| Reliability | Single point of failure. Provider outage stops your queue | Failover routing. Primary model unavailable: the task moves to the next best option |
| Speed | Serial queue. Complex tasks block simpler ones | Parallel execution where task dependencies allow. Short briefs do not wait behind long ones |
| Improvement over time | Static. Same model in month twelve as month one | The Forge improves operational performance every month |
The failover point deserves more weight than one table row gives it. Operators running time-sensitive campaigns cannot afford a queue that freezes because one model provider has a bad afternoon. KIT routes around the outage automatically.
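Failover routing reduces to a simple pattern: try providers in ranked order and move on when one is unavailable. The provider interface below is an assumption for the sketch:

```python
def route_with_failover(brief, providers):
    """Illustrative failover: try ranked providers in order and move to the
    next when one is unavailable. Provider API is assumed."""
    errors = []
    for provider in providers:          # ranked list: primary first
        try:
            return provider(brief)      # first healthy provider handles the task
        except ConnectionError as exc:  # outage: route around it automatically
            errors.append(str(exc))
    raise RuntimeError(f"all providers down: {errors}")

def primary_down(brief):
    raise ConnectionError("primary provider outage")

result = route_with_failover({"type": "caption"},
                             [primary_down, lambda b: "draft from backup"])
```

The queue never sees the outage; it only sees the draft.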
06 / Architecture
Four components. Understanding what each one does explains why KIT behaves differently from a single general-purpose AI.
The process definitions. Every task KIT can perform has a corresponding workflow: a structured specification defining the steps, sequence, inputs, outputs, quality criteria, and edge cases. Workflows are version-controlled, reviewed, and updated when The Forge identifies a better approach. No task runs without a governing workflow.
Twenty-four specialists. Each has a documented role and clear scope. An agent that encounters a situation its workflow does not cover stops and flags for human review rather than guessing. This matters. A general-purpose AI asked to do something outside its instructions will try anyway. A KIT agent will not. The error rate difference over thousands of runs is significant.
The execution layer. Agents make decisions. Tools run deterministic actions: a search, a data collection from a specified source, a content check against a defined word list, an output formatter. Probabilistic reasoning stays with agents. Deterministic execution stays with tools. The system is auditable: you can see exactly what each tool ran and what it returned.
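The auditability claim can be sketched as a thin wrapper: every tool call records its inputs and its return value. The tool names and log shape are assumptions:

```python
audit_log = []

def run_tool(name, fn, *args):
    """Illustrative deterministic tool wrapper: every call and its return
    value are recorded, so a run is auditable. Names are assumptions."""
    result = fn(*args)
    audit_log.append({"tool": name, "args": args, "result": result})
    return result

# A deterministic content check against a defined word list.
banned = {"cheap", "party hostel"}
def word_list_check(text):
    return [word for word in banned if word in text.lower()]

hits = run_tool("word_list_check", word_list_check, "A cheap bed near the beach")
```

Because the tool is deterministic, the log entry is a complete record: the same inputs will always produce the same result, so the audit trail can be replayed.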
The quality loop. Every output is scored on accuracy, completeness, and usability. High scores trigger review for promotion to exemplar status: reference outputs that agents consult before starting comparable work. Low scores trigger failure pattern documentation. The quality loop runs continuously, not just in The Forge cycle.
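The continuous quality loop can be sketched as a threshold check over the three scores. The thresholds and field names are assumptions:

```python
def quality_loop(scores, exemplar_threshold=0.85, failure_threshold=0.5):
    """Illustrative continuous scoring: high scorers go to exemplar review,
    low scorers get a failure-pattern record. Thresholds are assumptions."""
    overall = (scores["accuracy"] + scores["completeness"]
               + scores["usability"]) / 3
    if overall >= exemplar_threshold:
        return {"action": "review_for_exemplar", "score": overall}
    if overall < failure_threshold:
        return {"action": "document_failure_pattern", "score": overall}
    return {"action": "none", "score": overall}

decision = quality_loop({"accuracy": 0.9, "completeness": 0.9, "usability": 0.9})
```

This per-output check runs on every piece of content; The Forge then works over the accumulated records in its monthly batch.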
07 / Cost
A junior marketing hire covering social content, copywriting, and basic SEO costs between £2,000 and £2,800 per month in base salary. Add employer taxes, benefits, equipment, and management overhead and the true cost is closer to £2,800 to £3,600 per month. That person works approximately 160 hours per month, takes leave, gets sick, and spends the first three months finding their feet.
| | Junior marketing hire | KIT Starter | KIT Professional | KIT Agency |
|---|---|---|---|---|
| Monthly cost | £2,800+ (fully loaded) | £119/month | £279/month | £595/month |
| Setup | 4 to 8 weeks onboarding | 1 hour brand profile build | 1 hour brand profile build | 1 hour brand profile build |
| Output volume | Variable | Up to 20 pieces/month | Up to 60 pieces/month | High volume, multi-property |
| Languages | Typically 1 to 2 | 5 | 5 | 5 |
| Availability | 160 hours/month | Continuous | Continuous | Continuous |
| Improvement over time | Depends on the individual | Structural via The Forge | Structural via The Forge | Structural via The Forge |
| Cost per output piece | £140+ at 20 pieces/month | £5.95 | £4.65 | Variable |
KIT Starter runs roughly 23x cheaper than the equivalent human hire at equivalent output volume. That number is not a rounding error.
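The figures in the table can be checked directly from the monthly costs stated above:

```python
# Checking the cost-per-piece figures against the table above.
human_monthly = 2800           # GBP, fully loaded junior hire
starter_monthly = 119          # GBP, KIT Starter
pieces_per_month = 20

human_per_piece = human_monthly / pieces_per_month      # 140.0
starter_per_piece = starter_monthly / pieces_per_month  # 5.95
ratio = human_monthly / starter_monthly                 # about 23.5x
```

At equal volume the ratio is the same whether you compare monthly totals or per-piece costs.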
The comparison is not perfect, and saying so is worth doing. A skilled marketing professional brings relationships, strategic judgment, and creative initiative that KIT does not replicate. For the core production work, though, drafting and checking and formatting and scheduling, paying human rates for machine-speed tasks is the inefficiency KIT removes.
Clients on the Scale retainer (£2,800/month) receive KIT Professional as part of the package. The combined cost of a full managed service plus the AI system is still less than one mid-level marketing hire in most markets.
09 / Security
Short version: your content is yours, it does not train any model, and it is not visible to other clients.
Slightly longer version: KIT routes briefs to foundation model providers through enterprise-grade API connections. Your brand profile, property details, and content outputs are held in your account. Nothing is shared across client accounts. Nothing is retained beyond the active project lifecycle unless you request archiving. We do not use your brand voice configuration as training material.
API connections use standard encryption in transit. Account credentials and brand configuration are held to the same security standards as payment data. A data processing agreement is available on request.
On model providers: when a brief routes to a foundation model API, the provider receives the prompt and returns a response. That is the same mechanism as using any AI tool directly. KIT structures and constrains the prompt before it leaves your account. Identifying client information is not sent in prompts. The data retention policies of our model providers have been reviewed; we select providers whose enterprise agreements include appropriate data protection terms.
One point that comes up for larger operators: if your property is in a jurisdiction with specific data residency requirements, raise it before onboarding. We will confirm whether our current provider routing is compatible or whether it needs adjusting. Better to establish this in week one than discover a conflict after go-live.
10 / Fit
The honest version: KIT works for operators who are producing content regularly but doing it badly, or not doing it at all, because no one has the capacity.
If the choice between KIT and the managed service is not obvious, the AI Readiness Assessment takes ten minutes and returns a personalised recommendation. No sales call required.