News
On Wednesday, OpenAI released a new open source AI model called Whisper that recognizes and translates audio at a level that approaches human recognition ability. It can transcribe interviews ...
Last week, OpenAI released Whisper, an open-source deep learning model for speech recognition. OpenAI’s tests on Whisper show promising results in transcribing audio not only in English, but ...
In the past month, Whisper was downloaded over 4.2 million times from the open-source AI platform HuggingFace, making it the most popular speech recognition model on the site.
OpenAI has released Whisper, a robust speech recognition model that can understand and transcribe multiple languages. Speech recognition remains a challenging problem in AI and machine learning.
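Because the open-source release ships as a Python package, a transcription call is short. A minimal sketch, assuming the `openai-whisper` package and ffmpeg are installed; the file name `audio.mp3` is a placeholder:

```python
# pip install -U openai-whisper   (ffmpeg must also be available on the system)
import whisper

# Load one of the pretrained checkpoints; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe a local audio file. The spoken language is detected automatically;
# passing task="translate" would instead translate the speech into English.
result = model.transcribe("audio.mp3")
print(result["text"])
```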
aiOla, a leader in speech recognition technology, today announced the release of its new open-source AI model, Whisper-Medusa. The new model, based on a multi-head attention architecture ...
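The snippet only names the mechanism, so as orientation here is a minimal sketch of standard multi-head attention using PyTorch's built-in module; how Whisper-Medusa actually arranges its heads is not detailed above, and the sizes below are illustrative assumptions:

```python
import torch
import torch.nn as nn

# 8 attention heads over a 512-dimensional representation (illustrative sizes).
attn = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

# A batch of 2 sequences, each 100 frames long.
x = torch.randn(2, 100, 512)

# Self-attention: queries, keys, and values all come from the same sequence.
out, weights = attn(x, x, x)
print(out.shape)      # torch.Size([2, 100, 512])
print(weights.shape)  # torch.Size([2, 100, 100])
```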
To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September ...
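From the client side, a call to the hosted endpoint looks similar. A minimal sketch, assuming the current (v1+) `openai` Python client and the `whisper-1` model name from the API launch; the file name is a placeholder, and older client releases exposed the same endpoint as `openai.Audio.transcribe` instead:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a local audio file to the hosted Whisper endpoint for transcription.
with open("interview.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```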
Assembly AI claims its new Universal-1 model has 30% fewer hallucinations than Whisper - VentureBeat
AI-as-a-service provider Assembly AI has a new speech recognition model called Universal-1. Trained on more than 12.5 million hours of multilingual audio data, the company says it does well with ...
In addition, the Whisper model is available in five parameter sizes: 'tiny' (39 million), 'base' (74 million), 'small' (244 million), 'medium' (769 million), and 'large' (1.55 billion).