Feb 10 2021 09:21 AM - last edited on Feb 10 2021 01:33 PM by TechCommunityAP
Looking at the available documentation, I still have questions about the limitations of a single speech model.
For example, how many hours of audio can be processed into one speech model, and how many sentences of text?
Feb 11 2021 09:20 AM
See the documentation here for the recommended amounts of training data: Prepare data for Custom Speech - Speech service - Azure Cognitive Services | Microsoft Docs
For audio training data, the limit is 20 hours. For text data, you can technically use more than the recommended amounts, but the gains are typically minimal, if any.
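If it helps, here is a minimal sketch (using only Python's standard library) of how you might total up the duration of a folder of WAV files before uploading, to check your dataset against the 20-hour audio training limit mentioned above. The folder name and helper function are hypothetical, not part of any Azure SDK:

```python
import wave
from pathlib import Path

# Per the Custom Speech documentation, audio training data is capped at 20 hours.
AUDIO_TRAINING_LIMIT_HOURS = 20

def total_audio_hours(folder):
    """Sum the duration (in hours) of all WAV files in `folder`.

    Hypothetical helper for pre-upload sanity checks; assumes
    uncompressed WAV files readable by the stdlib `wave` module.
    """
    total_seconds = 0.0
    for path in Path(folder).glob("*.wav"):
        with wave.open(str(path), "rb") as w:
            # frames / frame rate = duration of this file in seconds
            total_seconds += w.getnframes() / w.getframerate()
    return total_seconds / 3600.0

hours = total_audio_hours("training_audio")  # "training_audio" is a placeholder path
if hours > AUDIO_TRAINING_LIMIT_HOURS:
    print(f"Dataset is {hours:.1f} h; trim it below {AUDIO_TRAINING_LIMIT_HOURS} h before uploading.")
```

This only checks total duration; the documentation linked above also covers format requirements (sample rate, channels, etc.) that you should verify separately.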