we've just upgraded btcframe ai's voice stack to eleven_flash_v2_5. the latency drop is huge: orders of magnitude faster than the openai tts pipeline we were running before. once the full model is deployed directly on-device, the inference path becomes near-zero hop. the ux jump is going to be wild.