This is an experimental technology
Because this technology's specification has not stabilized, check the compatibility table for usage in various browsers. Also note that the syntax and behavior of an experimental technology are subject to change in future versions of browsers as the specification changes.
The SpeechRecognition interface of the Web Speech API is the controller interface for the recognition service; it also handles the SpeechRecognitionEvent sent from the recognition service.
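Before the full reference below, here is a minimal sketch of obtaining a constructor and creating an instance. As the browser compatibility notes further down explain, Chrome currently exposes the interface with a webkit prefix (webkitSpeechRecognition), so a fallback is commonly used; the specific property values and messages here are illustrative assumptions, not part of this page's example.

// Sketch: use the unprefixed constructor where available, otherwise the
// webkit-prefixed one used by Chrome (see the compatibility notes below).
var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

if (SpeechRecognition) {
  var recognition = new SpeechRecognition();
  recognition.lang = 'en-US';         // illustrative; defaults to the HTML lang attribute if unset
  recognition.interimResults = false; // only deliver final results
  recognition.start();                // begin listening for speech
} else {
  console.log('Speech recognition is not supported in this browser.');
}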
SpeechRecognition.SpeechRecognition()
Creates a new SpeechRecognition object.

SpeechRecognition also inherits properties from its parent interface, EventTarget.
SpeechRecognition.grammars
Returns and sets a collection of SpeechGrammar objects that represent the grammars that will be understood by the current SpeechRecognition.

SpeechRecognition.lang
Returns and sets the language of the current SpeechRecognition. If not specified, this defaults to the HTML lang attribute value, or the user agent's language setting if that isn't set either.

SpeechRecognition.continuous
Controls whether continuous results are returned for each recognition, or only a single result. Defaults to single results (false).

SpeechRecognition.interimResults
Controls whether interim results should be returned (true) or not (false). Interim results are results that are not yet final (e.g. the SpeechRecognitionResult.isFinal property is false).

SpeechRecognition.maxAlternatives
Sets the maximum number of SpeechRecognitionAlternatives provided per result. The default value is 1.

SpeechRecognition.serviceURI
Specifies the location of the speech recognition service used by the current SpeechRecognition to handle the actual recognition. The default is the user agent's default speech service.

SpeechRecognition.onaudiostart
Fired when the user agent has started to capture audio.

SpeechRecognition.onaudioend
Fired when the user agent has finished capturing audio.

SpeechRecognition.onend
Fired when the speech recognition service has disconnected.

SpeechRecognition.onerror
Fired when a speech recognition error occurs.

SpeechRecognition.onnomatch
Fired when the speech recognition service returns a final result with no significant recognition. This may involve some degree of recognition, which doesn't meet or exceed the confidence threshold.

SpeechRecognition.onresult
Fired when the speech recognition service returns a result: a word or phrase has been positively recognised and this has been communicated back to the app.

SpeechRecognition.onsoundstart
Fired when any sound (recognisable speech or not) has been detected.

SpeechRecognition.onsoundend
Fired when any sound (recognisable speech or not) has stopped being detected.

SpeechRecognition.onspeechstart
Fired when sound that is recognised by the speech recognition service as speech has been detected.

SpeechRecognition.onspeechend
Fired when speech recognised by the speech recognition service has stopped being detected.

SpeechRecognition.onstart
Fired when the speech recognition service has begun listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition. (A sketch wiring several of these handlers together appears after the example code below.)

SpeechRecognition also inherits methods from its parent interface, EventTarget.
SpeechRecognition.abort()
Stops the speech recognition service from listening to incoming audio, and doesn't attempt to return a SpeechRecognitionResult.

SpeechRecognition.start()
Starts the speech recognition service listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition.

SpeechRecognition.stop()
Stops the speech recognition service from listening to incoming audio, and attempts to return a SpeechRecognitionResult using the audio captured so far.

In our simple Speech color changer example, we create a new SpeechRecognition
object instance using the SpeechRecognition()
constructor, create a new SpeechGrammarList
, and set it to be the grammar that will be recognised by the SpeechRecognition
instance using the SpeechRecognition.grammars
property.
After some other values have been defined, we set it so that the recognition service starts when a click event occurs (see SpeechRecognition.start()). When a result has been successfully recognised, the SpeechRecognition.onresult handler fires; we extract the color that was spoken from the event object, and then set the background color of the <html> element to that colour.
var grammar = '#JSGF V1.0; grammar colors; public <color> = aqua | azure | beige | bisque | black | blue | brown | chocolate | coral | crimson | cyan | fuchsia | ghostwhite | gold | goldenrod | gray | green | indigo | ivory | khaki | lavender | lime | linen | magenta | maroon | moccasin | navy | olive | orange | orchid | peru | pink | plum | purple | red | salmon | sienna | silver | snow | tan | teal | thistle | tomato | turquoise | violet | white | yellow ;';

var recognition = new SpeechRecognition();
var speechRecognitionList = new SpeechGrammarList();
speechRecognitionList.addFromString(grammar, 1); // add the grammar with a weight of 1
recognition.grammars = speechRecognitionList;
//recognition.continuous = false;
recognition.lang = 'en-US';
recognition.interimResults = false;
recognition.maxAlternatives = 1;

var diagnostic = document.querySelector('.output');
var bg = document.querySelector('html');

// Start recognition when the page is clicked
document.body.onclick = function() {
  recognition.start();
  console.log('Ready to receive a color command.');
}

// When a final result arrives, show the transcript and use it as the background color
recognition.onresult = function(event) {
  var color = event.results[0][0].transcript;
  diagnostic.textContent = 'Result received: ' + color;
  bg.style.backgroundColor = color;
}
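The example above only handles onresult. As a hedged sketch of how the other event handlers and the stop() method described earlier could be used with the same recognition instance (the messages and the reuse of the diagnostic element are illustrative assumptions, not part of the original example):

// Sketch only: extra handlers for the recognition instance created above.
recognition.onspeechend = function() {
  // Speech has stopped being detected, so stop listening and wait for the result
  recognition.stop();
}

recognition.onnomatch = function(event) {
  // A final result was returned, but nothing met the confidence threshold
  diagnostic.textContent = 'I did not recognise that color.';
}

recognition.onerror = function(event) {
  // event.error describes what went wrong (e.g. 'no-speech', 'not-allowed')
  diagnostic.textContent = 'Error occurred in recognition: ' + event.error;
}

recognition.onend = function() {
  // The service has disconnected; recognition can be started again on the next click
  console.log('Speech recognition service disconnected.');
}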
Specification | Status | Comment
---|---|---
Web Speech API, the definition of 'SpeechRecognition' in that specification | Draft |
Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit) |
---|---|---|---|---|---|
Basic support | 33 (webkit) [1] | No support [2] | No support | No support | No support |
continuous | 33 [1] | No support | No support | No support | No support |
Feature | Android | Chrome | Firefox Mobile (Gecko) | Firefox OS | IE Phone | Opera Mobile | Safari Mobile |
---|---|---|---|---|---|---|---|
Basic support | ? | (Yes)[1] | 44.0 (44) | 2.5 | No support | No support | No support |
continuous | ? | (Yes)[1] | ? | No support | No support | No support | No support |
[1] Speech recognition interfaces are currently prefixed in Chrome, so you'll need to prefix interface names appropriately, e.g. webkitSpeechRecognition; you'll also need to serve your code through a web server for recognition to work.

[2] Can be enabled via the media.webspeech.recognition.enable flag in about:config on mobile. Not implemented at all on Desktop Firefox (see bug 1248897).

To use speech recognition in an app, you need to specify the following permissions in your manifest:
"permissions": { "audio-capture" : { "description" : "Audio capture" }, "speech-recognition" : { "description" : "Speech recognition" } }
You also need a privileged app, so you need to include this as well:
"type": "privileged"
© 2005–2017 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition