Show HN: Multimodal perception system for real-time conversation (raven.tavuslabs.org)
26 points by mert_gerdan 4 hours ago
I work on real-time voice/video AI at Tavus, and for the past few years I’ve mostly focused on how machines respond in a conversation.
One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts, throwing away a ton of signals that could be used downstream. Some existing emotion-understanding models try to classify those signals into small sets of arbitrary boxes, but they aren’t fast or rich enough to do this convincingly in real time.
So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals. The result is an agent that can "see" and "hear" you, and that you can interface with via an OpenAI-compatible tool schema in a live conversation.
It outputs short natural-language descriptions of what’s going on in the interaction - things like uncertainty building, sarcasm, disengagement, or even a shift in attention within a single turn of a convo.
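For concreteness, here's a minimal sketch of what wiring a perception stream into an agent via an OpenAI-compatible tool schema might look like. The tool name `get_perception`, its description, and the example output string are my illustrative assumptions, not the actual Raven/Tavus API:

```python
# Hypothetical sketch: exposing a perception stream to an agent as an
# OpenAI-style tool. The tool/field names here are assumptions.
from openai import OpenAI

client = OpenAI()

# Tool the agent can call mid-conversation to ask
# "what is the user's current visual/audio state?"
tools = [{
    "type": "function",
    "function": {
        "name": "get_perception",
        "description": "Return a short natural-language description of the "
                       "user's current visual/audio state in the call.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Walk me through the pricing again."}],
    tools=tools,
)

# If the model decides it needs perception context, it emits a tool call;
# the application would answer it with the perception system's latest
# description, e.g. "brow furrowed, speech slowing - uncertainty building".
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name)
```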
Some quick specs:
- Runs in real time, per conversation
- Processes ~15 fps video plus overlapping audio alongside the conversation (see the sketch after this list)
- Handles nuanced emotions, whispers vs. shouts
- Trained on synthetic + internal convo data
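To make the "~15 fps video + overlapping audio" spec concrete, here's a rough sketch of how the two streams could be windowed together. The window length, hop size, and sample rate are my assumptions, not the actual pipeline:

```python
# Rough sketch: windowing ~15 fps video with overlapping audio chunks.
# Frame rate, window length, hop size, and sample rate are assumptions.
from dataclasses import dataclass

VIDEO_FPS = 15          # ~15 video frames per second
AUDIO_SR = 16_000       # 16 kHz audio sample rate (assumption)
AUDIO_WIN_S = 1.0       # 1 s audio windows...
AUDIO_HOP_S = 0.5       # ...advanced every 0.5 s, so consecutive windows overlap 50%

@dataclass
class Window:
    t_start: float      # window start time in seconds
    frames: int         # video frames covered by this window
    samples: int        # audio samples covered by this window

def windows(duration_s: float):
    """Yield overlapping perception windows over a conversation."""
    t = 0.0
    while t + AUDIO_WIN_S <= duration_s:
        yield Window(
            t_start=t,
            frames=int(AUDIO_WIN_S * VIDEO_FPS),  # 15 frames per 1 s window
            samples=int(AUDIO_WIN_S * AUDIO_SR),  # 16,000 samples per window
        )
        t += AUDIO_HOP_S  # each window shares half its span with the previous one

for w in list(windows(3.0))[:3]:
    print(w)
```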
Happy to answer questions or go deeper on architecture/tradeoffs.
More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...
ycombiredd 7 minutes ago
Hmm... my first thought is: great, now not only will e.g. HR/screening/hiring hand off the reading/discerning tasks to an ML model, they'll also outsource the things that require any sort of emotional understanding (compassion, stress, anxiety, social awkwardness, etc.) to a model.
One part of me has a tendency to think "good, take some subjectivity away from a human with poor social skills," but another part of me is repulsed by the concept, because we've seen how otherwise capable humans defer to an LLM out of a notion of perceived "expertise" in the machine, or out of laziness (see the recent kerfuffles in the legal field over hallucinated citations, etc.).
Objective classification in CV is one thing, but subjective identification (psychology, pseudoscientific forensic sociology, etc.) via a multimodal model triggers a sort of danger warning in me as an initial reaction.
Neat work, though, from a technical standpoint.
Johnny_Bonk 9 minutes ago
Holy
ashishheda 34 minutes ago
Wonder how it works?
jesserowe 4 hours ago
the demo is wild... kudos