Automatically detect the appropriate voice model to improve transcription in use cases where multiple speakers have different accents (e.g., US and UK English on the same line), similar to language detection in Watson Assistant.
Why is it useful?
Who would benefit from this IDEA?
Customers with use cases where multiple speakers from different geographies are interacting on the same line.
How should it work?
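One possible shape for this, as a rough sketch: run recognition against several candidate voice models and keep the transcript whose confidence score is highest. The selection helper below is hypothetical (not part of any Watson API); the per-model confidence scores stand in for the values a speech service would return alongside each transcript.

```python
# Hypothetical sketch: given transcripts produced by several candidate
# voice models, pick the model whose result has the highest confidence.
def pick_best_model(results):
    """results: dict mapping model name -> (transcript, confidence 0..1)."""
    best_model, (transcript, _confidence) = max(
        results.items(), key=lambda item: item[1][1]
    )
    return best_model, transcript

# Illustrative scores only; a real service would supply these values.
candidates = {
    "en-US_BroadbandModel": ("hello there", 0.91),
    "en-GB_BroadbandModel": ("hello there", 0.84),
}

model, transcript = pick_best_model(candidates)
# model -> "en-US_BroadbandModel"
```

In practice the detection could happen per utterance rather than per call, so that a US speaker and a UK speaker on the same line are each matched to the better-fitting model.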