Launch HN: Cekura (YC F24) – Testing and monitoring for voice and chat AI agents (cekura.ai)
66 points by atarus 9 hours ago
Hey HN - we're Tarush, Sidhant, and Shashij from Cekura (https://www.cekura.ai). We've been running voice agent simulation for 1.5 years, and recently extended the same infrastructure to chat. Teams use Cekura to simulate real user conversations, stress-test prompts and LLM behavior, and catch regressions before they hit production.
The core problem: you can't manually QA an AI agent. When you ship a new prompt, swap a model, or add a tool, how do you know the agent still behaves correctly across the thousands of ways users might interact with it? Most teams resort to manual spot-checking (doesn't scale), waiting for users to complain (too late), or brittle scripted tests.
Our answer is simulation: synthetic users interact with your agent the way real users do, and LLM-based judges evaluate whether it responded correctly - across the full conversational arc, not just single turns. Three things make this actually work:

1. Scenario generation + real conversation import - Our scenario generation agent bootstraps your test suite from a description of your agent. But real users find paths no generator anticipates, so we also ingest your production conversations and automatically extract test cases from them. Your coverage evolves as your users do.

2. Mock tool platform - Agents call tools. Running simulations against real APIs is slow and flaky. Our mock tool platform lets you define tool schemas, behavior, and return values so simulations exercise tool selection and decision-making without touching production systems.

3. Deterministic, structured test cases - LLMs are stochastic. A CI test that passes "most of the time" is useless. Rather than free-form prompts, our evaluators are defined as structured conditional action trees: explicit conditions that trigger specific responses, with support for fixed messages when word-for-word precision matters. This means the synthetic user behaves consistently across runs - same branching logic, same inputs - so a failure is a real regression, not noise.
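To make those three concrete, here's a deliberately simplified sketch - field names are illustrative, not our exact schema:

    # Simplified sketch (field names illustrative, not Cekura's exact schema).

    # 1. A test case extracted from a production conversation:
    extracted_case = {
        "source": "prod-conversation-<id>",
        "persona": "impatient caller, noisy line",
        "goal": "reschedule an appointment to Friday",
    }

    # 2. A mocked tool: a fixed schema and return value, so the simulation
    #    exercises tool selection without touching production APIs:
    mock_tools = {
        "check_availability": {
            "schema": {"date": "string"},
            "returns": {"slots": ["10:00", "14:30"]},
        }
    }

    # 3. A conditional action tree for the synthetic user: explicit
    #    conditions trigger specific responses, so every run branches
    #    the same way and a failure is a real regression, not noise:
    synthetic_user = {
        "goal": "book the rescheduled appointment",
        "actions": [
            {"if": "agent offers a slot", "then": "accept the 10:00 slot"},
            {"if": "agent asks for a phone number",
             "then_fixed": "555-0142"},  # fixed message when word-for-word precision matters
            {"if": "anything else", "then": "answer briefly, steer back to booking"},
        ],
    }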
Cekura also monitors your live agent traffic. The obvious alternative here is a tracing platform like Langfuse or LangSmith - and they're great tools for debugging individual LLM calls. But conversational agents have a different failure mode: the bug isn't in any single turn, it's in how turns relate to each other. Take a verification flow that requires name, date of birth, and phone number before proceeding - if the agent skips asking for DOB and moves on anyway, every individual turn looks fine in isolation. The failure only becomes visible when you evaluate the full session as a unit.

Cekura is built around this from the ground up. Where tracing platforms evaluate turn by turn, Cekura evaluates the full session. Imagine a banking agent where the user fails verification in step 1, but the agent hallucinates and proceeds anyway. A turn-based evaluator sees step 3 (address confirmation) and marks it green - the right question was asked. Cekura's judge sees the full transcript and flags the session as failed because verification never succeeded.
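As a toy sketch of the difference (illustrative Python, not our internals), a turn-level check can pass every turn while the session-level invariant fails:

    # Toy sketch of session-level evaluation (illustrative, not Cekura's
    # internals). Each turn can look fine in isolation while the session
    # as a whole violates the flow's invariant.

    def verification_succeeded(transcript: list[dict]) -> bool:
        """True only if the agent collected name, DOB, and phone
        before moving past the verification step."""
        collected = set()
        for turn in transcript:
            if turn["role"] == "agent":
                if turn.get("step") == "post_verification" and collected != {"name", "dob", "phone"}:
                    return False  # agent proceeded without full verification
            else:
                collected |= set(turn.get("provides", []))
        return True

    def judge_session(transcript: list[dict]) -> str:
        return "pass" if verification_succeeded(transcript) else "fail: verification incomplete"

    session = [
        {"role": "user", "provides": ["name", "phone"]},  # DOB never given
        {"role": "agent", "step": "post_verification"},   # agent proceeds anyway
    ]
    print(judge_session(session))  # fail: verification incomplete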
Try us out at https://www.cekura.ai - 7-day free trial, no credit card required. Paid plans from $30/month.
We also put together a product video if you'd like to see it in action: https://www.youtube.com/watch?v=n8FFKv1-nMw. The first minute dives into quick onboarding - and if you want to jump straight to the results, skip to 8:40.
Curious what the HN community is doing - how are you testing behavioral regressions in your agents? What failure modes have hurt you most? Happy to dig in below!
shubhamintech an hour ago
The full-session evaluation framing is the right call - most teams don't realize the failure happened in turn 2 until they've spent 3 hours blaming the model. One thing worth thinking about as you grow: connecting caught regressions to production conversation data. When your simulation flags a new failure mode, being able to say "this pattern has already surfaced X times in prod this week" cuts the prioritization debate in half. Does Cekura currently let you correlate simulation failures back to real user sessions, or is that still a manual step?
atarus an hour ago
We track failure modes in production directly instead of relying on simulation, so if a failure mode suddenly starts popping up too often, we can alert in a timely way. With the simulation-to-monitoring approach, I worry the feedback would be delayed.

Tracking it in production also lets you replay those production conversations as simulations, so you can verify you're handling the regression.
FailMore 7 hours ago
Any ideas on how to solve the "agents don't have total common sense" problem?

I have found, when using agents to verify agents, that the verifying agent might observe something a human would immediately find off-putting and obviously wrong, but it raises no flags for the smart-but-dumb agent.
atarus 7 hours ago
To clarify, are you using the "fast brain, slow brain" pattern? Maybe an example would help.

Broadly speaking, we see people experiment with this architecture a lot, often with a great deal of success. Another approach is an orchestrator architecture with an intent-recognition agent that routes to different sub-agents (rough sketch below).

Obviously there are endless cases possible in production, and the best approach is to build your evals from that data.
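A minimal sketch of that router pattern (illustrative - in practice the intent classifier would be an LLM call, not a keyword stub):

    # Illustrative orchestrator sketch: an intent-recognition step routes
    # each user message to a sub-agent.

    def classify_intent(message: str) -> str:
        # Keyword stub to keep the sketch self-contained; in practice
        # this would be an LLM call.
        text = message.lower()
        if "refund" in text:
            return "billing"
        if "password" in text:
            return "account"
        return "general"

    SUB_AGENTS = {
        "billing": lambda m: f"[billing agent] handling: {m}",
        "account": lambda m: f"[account agent] handling: {m}",
        "general": lambda m: f"[general agent] handling: {m}",
    }

    def orchestrate(message: str) -> str:
        return SUB_AGENTS[classify_intent(message)](message)

    print(orchestrate("I want a refund for last month"))  # routes to billing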
rush86999 4 hours ago
The only solution is to train on the issue for next time. Architecturally, that means focusing on episodic memory with a feedback system: the stored correction is retrieved the next time something similar happens.
atarus 4 hours ago
Training is overkill at this point imo. I have seen agents work quite well with a feedback loop, some tools, and prompt optimisation. When you say training, do you mean fine-tuning the models?
guerython 3 hours ago
we treat each scenario as an explicit state machine. every conversation has checkpoints (ask for name, verify dob, gather phone) and the case only passes if each checkpoint flips true before the flow moves on. that means if the agent hallucinates, skips the verification step, or escalates to a human too early you get a session-level failure, not just a happily-green last turn. logging which checkpoint stayed false makes regressions obvious when you swap prompts/models.
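roughly, in code (names illustrative):

    # checkpoint-style state machine: the case passes only if every
    # checkpoint flipped true before the flow moved on.
    CHECKPOINTS = ["ask_name", "verify_dob", "gather_phone"]

    def evaluate_session(events: list[str]) -> dict:
        done = {c: False for c in CHECKPOINTS}
        for event in events:
            if event in done:
                done[event] = True
            elif event == "flow_complete":
                # log which checkpoints stayed false when the flow moved on
                missing = [c for c in CHECKPOINTS if not done[c]]
                return {"pass": not missing, "missing": missing}
        # flow never completed: also a session-level failure
        return {"pass": False, "missing": [c for c in CHECKPOINTS if not done[c]]}

    print(evaluate_session(["ask_name", "gather_phone", "flow_complete"]))
    # {'pass': False, 'missing': ['verify_dob']}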
chrismychen 4 hours ago
How do you handle sessions where the correct outcome is an incomplete flow — e.g. the agent correctly refuses to move forwards because the caller failed verification, or correctly escalates to a human?
atarus 3 hours ago
This comes from our architecture: since we are aware of the agent's context, our test agents know about the incomplete flows, and the assertions are per session.

If we miss some cases, there's always a feedback loop to help improve your test suite.
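As a rough illustration (field names hypothetical, not our exact schema), an assertion for an expected-incomplete flow might look like:

    # Hypothetical per-session assertions where "correctly refused /
    # escalated" is the expected terminal state, not a failure.
    expected_incomplete_flow = {
        "scenario": "caller fails DOB verification twice",
        "assertions": [
            {"type": "never", "event": "account_details_shared"},
            {"type": "eventually", "event": "escalated_to_human"},
        ],
    }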
jamram82 4 hours ago
Testing voice agents would require some kind of knowledge integration. Do you have any plans to support custom knowledge bases for the test voice agents?
atarus 3 hours ago
Yes, we already support knowledge base integrations for BigQuery and plan to expand the set of connectors. You can also upload knowledge files directly today.

Moreover, we even generate scenarios from the knowledge base.
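Conceptually (very simplified, not our actual pipeline), each knowledge base entry can seed a scenario for the test agent to probe:

    # Very simplified sketch (not the actual pipeline): each knowledge
    # base entry seeds a scenario the test agent can probe.
    kb_entries = [
        "Refunds are only available within 30 days of purchase.",
        "Premium support is 24/7 on enterprise plans.",
    ]

    scenarios = [
        {"goal": f"ask a question whose answer depends on: {entry}",
         "assert": "agent's answer is consistent with the knowledge base"}
        for entry in kb_entries
    ]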
niko-thomas 4 hours ago
We've tried a few platforms for voice agent testing and Cekura has been the best by a long shot. Keep up the great work!
sidhantkabra 9 hours ago
Was really fun building this - would love feedback from the HN community and get insights on your current process.
moinism 8 hours ago
congrats on the launch! do you guys have anything planned to test chat agents directly in the ui? I have an agent, but no exposed api so can't really use your product even though I have a genuine need.
atarus 8 hours ago
Yes, we support integrations with different chat agent providers, and also SMS/WhatsApp agents where you can just drop in the agent's number.

Let us know how your agent can be connected, and we can advise on the best way to test it.
michaellee8 6 hours ago
Interesting - I built https://github.com/michaellee8/voice-agent-devkit-mcp for exactly this: launch a Chromium instance with virtual devices powered by Pulsewire, then hook it up with TTS and STT so that Playwright can finally have a mouth and ears. Any chance we can talk?
atarus 6 hours ago
That's actually interesting. Does it depend on the user to create the HTTP endpoints for /speak and /transcript?

One of our learnings has been to allow plugging into existing frameworks easily - e.g. LiveKit, Pipecat, etc.
Happy to talk if you can reach out to me on linkedin - https://www.linkedin.com/in/tarush-agarwal/