This is awesome! I'm really glad the Azure team is making these Cognitive Services available to developers and the general public. It is very impressive how far these capabilities have come, especially the new speaking style capabilities that allow users to better express emotion. I'm interested in seeing how we can apply these tools. Hirox mentioned video games. I also think about chat agents, virtual assistants, tools for the hearing impaired, and so on. Are there any posts or other information about how these Cognitive Speech Services technologies are currently being applied?
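For anyone curious what the speaking style feature looks like in practice, here is a minimal sketch of building the SSML payload that selects a style. It assumes a neural voice that supports styles (I'm using `en-US-AriaNeural` and the `cheerful` style as examples; check the voice gallery for what's currently supported):

```python
# Sketch: construct an SSML string with an mstts:express-as speaking
# style. The voice name and style below are assumptions for
# illustration; consult the Azure docs for supported combinations.

def build_ssml(text: str, voice: str = "en-US-AriaNeural",
               style: str = "cheerful") -> str:
    """Wrap text in SSML that requests a specific speaking style."""
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
        '</voice></speak>'
    )

ssml = build_ssml("I'm really glad these services are available!")
print(ssml)

# The resulting string would then be passed to the Speech SDK's
# SpeechSynthesizer via speak_ssml_async(); that call needs a
# subscription key and region, so it's omitted here.
```

The `mstts:express-as` element is what switches the voice into an emotional style, so varying the `style` argument is all it takes to change the delivery.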
For anyone who is interested, I am working on an application that uses the Speech SDK: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk?tabs=windows%2Cubuntu%2Cios-xcode%2Cmac-xcode%2Candroid-studio. The application is https://www.type-recorder.com.