IBM Watson™ Ideas


Improve API response structure for timestamps and word_confidence within STT SpeechRecognitionAlternative model

As part of the response from making a POST to the v1/recognize endpoint in the Speech to Text service, the user receives an array of "alternatives". Within each of these "alternatives" objects, there are two arrays called "word_confidence" and "timestamps". Below is an example of this piece of the response:

"alternatives": [
{
"transcript": "thunderstorms could produce",
    "confidence": 0.994,
    "word_confidence": [
    [
      "thunderstorms",
        1
    ],
[
      "could",
       1
     ],
      [
      "produce",
        1
      ],
   ],
   "timestamps": [
    [
"thunderstorms",
       1.49,
        2.32
     ],
      [
      "could",
        2.32,
       2.54
      ],
      [
      "produce",
        2.54,
        3.01
     ],
    ]
  }
]
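To illustrate the fragility of the current shape, here is a minimal sketch of client code consuming such a response; every field has to be picked out of the inner arrays by positional index (the variable names are illustrative):

```python
import json

# A trimmed-down v1/recognize response using the current array-of-arrays shape.
response = json.loads("""
{
  "alternatives": [
    {
      "transcript": "thunderstorms could produce",
      "confidence": 0.994,
      "word_confidence": [["thunderstorms", 1], ["could", 1], ["produce", 1]],
      "timestamps": [["thunderstorms", 1.49, 2.32], ["could", 2.32, 2.54], ["produce", 2.54, 3.01]]
    }
  ]
}
""")

alternative = response["alternatives"][0]

# Each inner array mixes a string with numbers, so the meaning of every
# element depends entirely on its position: [word, confidence] for
# word_confidence, and [word, start, end] for timestamps.
for word, confidence in alternative["word_confidence"]:
    print(f"{word}: {confidence}")

for word, start, end in alternative["timestamps"]:
    print(f"{word}: {start}s - {end}s")
```

Nothing in the data itself says which index holds the word and which holds a number; that knowledge lives only in the client code.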

The problem with this response is that both "word_confidence" and "timestamps" are arrays of arrays, even though each inner array assigns a specific meaning to each position: a word followed by its confidence, or a word followed by its start and end times. As is, this data would be better represented with "word_confidence" and "timestamps" as arrays of objects with named fields, like so:

"alternatives": [
{
"transcript": "thunderstorms could produce",
    "confidence": 0.994,
    "word_confidence": [
    {
      "thunderstorms",
        1
    },
{
      "could",
       1
     },
      {
      "produce",
        1
      },
   ],
   "timestamps": [
    {
"thunderstorms",
       1.49,
        2.32
     },
      {
      "could",
        2.32,
       2.54
      },
      {
      "produce",
        2.54,
        3.01
     },
    ]
  }
]
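With arrays of objects, the same client code can address each value by name rather than by position. A sketch of consuming the proposed shape (the key names "word", "confidence", "start_time", and "end_time" are hypothetical, not part of any existing API):

```python
import json

# The same trimmed-down response, reshaped as arrays of objects.
response = json.loads("""
{
  "alternatives": [
    {
      "transcript": "thunderstorms could produce",
      "confidence": 0.994,
      "word_confidence": [
        {"word": "thunderstorms", "confidence": 1},
        {"word": "could", "confidence": 1},
        {"word": "produce", "confidence": 1}
      ],
      "timestamps": [
        {"word": "thunderstorms", "start_time": 1.49, "end_time": 2.32},
        {"word": "could", "start_time": 2.32, "end_time": 2.54},
        {"word": "produce", "start_time": 2.54, "end_time": 3.01}
      ]
    }
  ]
}
""")

alternative = response["alternatives"][0]

# Named fields make the intent explicit, and new optional fields could be
# added later without breaking clients that rely on element positions.
for entry in alternative["timestamps"]:
    print(f"{entry['word']}: {entry['start_time']}s - {entry['end_time']}s")
```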

Additional motivation for the change is that the current structure is causing a problem with the API specification and, in turn, with efforts to automatically generate code and documentation from the API.

 

Currently, we specify the various Watson APIs using the OpenAPI version 2.0 specification. The way the response is actually structured, with a nested array of mixed types, cannot be properly described by that specification, resulting in manual work to tweak any code generated from the Speech to Text API spec.
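The limitation shows up as soon as one tries to write the schema: in OpenAPI 2.0, an array's "items" keyword describes a single schema applied to every element, so the current inner arrays can only be described as effectively untyped, whereas the proposed object form can be specified precisely. A sketch of the two schema fragments (the "_current"/"_proposed" property names and the object field names are illustrative):

```json
{
  "word_confidence_current": {
    "type": "array",
    "items": {
      "type": "array",
      "items": {}
    }
  },
  "word_confidence_proposed": {
    "type": "array",
    "items": {
      "type": "object",
      "properties": {
        "word": { "type": "string" },
        "confidence": { "type": "number" }
      }
    }
  }
}
```

Code generators can turn the second fragment into a typed model class; for the first, they can only produce something like a list of untyped lists.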

 

Overall, making the proposed change to the v1/recognize response in the Speech to Text service would not only make the structure more intuitive, but would also make the API easier to document and help further the effort to streamline API changes across all relevant tools.

  • Logan Patino Middaugh
  • Jan 31 2018