This is an experimental technology
Because this technology's specification has not stabilized, check the compatibility table for usage in various browsers. Also note that the syntax and behavior of an experimental technology are subject to change in future versions of browsers as the specification changes.
The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition). There are two components to this API:
- SpeechRecognition: the interface that provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognise. Grammar is defined using JSpeech Grammar Format (JSGF).
- SpeechSynthesis: a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesiser). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.

For more details on using these features, see Using the Web Speech API.
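As a quick orientation, the synthesis side can be sketched as follows. This is browser-only code; the speakText() helper is our own name, not part of the API, and the function simply returns null where the API is unavailable.

```javascript
// A minimal text-to-speech sketch, assuming a browser that implements
// speech synthesis. speakText() is a hypothetical helper, not part of
// the Web Speech API itself.
function speakText(text) {
  if (typeof SpeechSynthesisUtterance === "undefined") {
    return null; // Web Speech API not available in this environment
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = "en-US"; // a BCP 47 language tag
  utterance.rate = 1;       // speaking rate: 0.1 (slow) to 10 (fast)
  utterance.pitch = 1;      // 0 (low) to 2 (high)
  window.speechSynthesis.speak(utterance); // queue the utterance
  return utterance;
}

speakText("Hello from the Web Speech API");
```

In a supporting browser this queues the utterance on the device's default synthesiser; the returned SpeechSynthesisUtterance also fires events (start, end, error) you can listen to.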
Speech recognition

- SpeechRecognition: the controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
- SpeechRecognitionAlternative: represents a single word that has been recognised by the speech recognition service.
- SpeechRecognitionError: represents error messages from the recognition service.
- SpeechRecognitionEvent: the event object for the result and nomatch events, and contains all the data associated with an interim or final speech recognition result.
- SpeechGrammar: a container for the words or patterns of words that we want the recognition service to recognise.
- SpeechGrammarList: represents a list of SpeechGrammar objects.
- SpeechRecognitionResult: represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.
- SpeechRecognitionResultList: represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.

Speech synthesis

- SpeechSynthesis: the controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and so on.
- SpeechSynthesisErrorEvent: contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.
- SpeechSynthesisEvent: contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.
- SpeechSynthesisUtterance: represents a speech request: the content the speech service should read and information about how to read it (such as language, pitch and volume).
- SpeechSynthesisVoice: represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service, including information about language, name and URI.
- Window.speechSynthesis: specced out as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and therefore the entry point to speech synthesis functionality.

The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.
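The recognition side can be wired up along these lines. This is browser-only code (in Chrome the constructors carry the webkit prefix, as noted in the compatibility section); createRecognizer() is our own helper name, and the colour grammar is an illustrative JSGF example.

```javascript
// Sketch of setting up speech recognition with a JSGF grammar.
// createRecognizer() is a hypothetical helper; it returns null where
// the API is unavailable.
const colorGrammar =
  "#JSGF V1.0; grammar colors; public <color> = red | green | blue ;";

function createRecognizer() {
  const g = typeof window !== "undefined" ? window : {};
  const Recognition = g.SpeechRecognition || g.webkitSpeechRecognition;
  const GrammarList = g.SpeechGrammarList || g.webkitSpeechGrammarList;
  if (!Recognition) return null; // no speech recognition support

  const recognition = new Recognition();
  if (GrammarList) {
    const grammars = new GrammarList();
    grammars.addFromString(colorGrammar, 1); // grammar plus a weight
    recognition.grammars = grammars;
  }
  recognition.lang = "en-US";
  recognition.continuous = false;     // deliver a single result, then stop
  recognition.interimResults = false; // final results only

  recognition.onresult = (event) => {
    // event.results is a SpeechRecognitionResultList; each result holds
    // SpeechRecognitionAlternative objects with transcript and confidence.
    console.log("Heard:", event.results[0][0].transcript);
  };
  recognition.onnomatch = () => console.log("Speech not recognised");
  return recognition;
}

const recognizer = createRecognizer();
if (recognizer) recognizer.start(); // prompts for microphone access
```

Calling start() triggers the browser's microphone permission prompt; the result and nomatch handlers then fire as the recognition service processes the audio.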
Specification | Status | Comment |
---|---|---|
Web Speech API | Draft | Initial definition |
Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit) |
---|---|---|---|---|---|
Basic support | 33[1] | 49 (49)[2] | No support | No support | No support |
Feature | Android | Chrome | Firefox Mobile (Gecko) | Firefox OS | IE Phone | Opera Mobile | Safari Mobile |
---|---|---|---|---|---|---|---|
Basic support | ? | (Yes)[1] | ? | 2.5 | No support | No support | No support |
[1] Speech recognition in Chrome is supported via the prefixed webkitSpeechRecognition interface; you'll also need to serve your code through a web server for recognition to work. Speech synthesis is fully supported without prefixes.

[2] Speech recognition in Firefox can be enabled via the media.webspeech.recognition.enable flag in about:config; synthesis is switched on by default. Note that currently only the speech synthesis part is available in Firefox Desktop; the speech recognition part will be available once the required internal permissions are sorted out.

To use speech recognition in an app, you need to specify the following permissions in your manifest:
"permissions": {
  "audio-capture": { "description": "Audio capture" },
  "speech-recognition": { "description": "Speech recognition" }
}
You also need a privileged app, so you need to include this as well:
"type": "privileged"
Speech synthesis needs no permissions to be set.
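Given the prefixed and unprefixed forms above, support can be feature-detected along these lines (detectWebSpeechSupport is our own helper name, not part of the API):

```javascript
// Feature-detection sketch for the Web Speech API, covering both the
// unprefixed and the webkit-prefixed recognition constructors.
function detectWebSpeechSupport(globalObject) {
  return {
    recognition: Boolean(
      globalObject.SpeechRecognition || globalObject.webkitSpeechRecognition
    ),
    synthesis: Boolean(globalObject.speechSynthesis),
  };
}

// In an environment without the API, both flags are false.
const support = detectWebSpeechSupport(globalThis);
console.log(support);
```

Checking capabilities this way lets an app fall back gracefully (for example, offering text input) instead of failing on unsupported browsers.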
© 2005–2017 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API