• W3C Workshop: The Multilingual Web - Where Are We?
26-27 October 2010, ETSIT-UPM, Madrid, Spain
2:30 pm - CREATORS
"Multilingual Aspects in Speech and Multimodal Interfaces", Paolo Baggia (Loquendo).• Speech Synthesis Markup Language (SSML 1.1) is W3C Recommendation
07 September 2010
W3C has extended speech on the Web to an enormous new market by improving support for Asian languages and multilingual voice applications.
The Speech Synthesis Markup Language (SSML 1.1) Recommendation provides control over voice selection as well as speech characteristics such as pronunciation, volume, and pitch. SSML is part of W3C's Speech Interface Framework for building voice applications, which also includes the widely deployed VoiceXML.
Read more in the Loquendo press release, the W3C press release, and the W3C Member Testimonials. Learn more about voice browsing.
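To give a concrete flavor of the control SSML offers, here is a minimal sketch of an SSML 1.1 document that switches language mid-utterance and adjusts prosody and pronunciation; the voice name "SampleVoice" is purely illustrative and not taken from the announcement:

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- SSML 1.1 sketch: voice selection, prosody, and pronunciation.
       The voice name "SampleVoice" is illustrative only. -->
  <speak version="1.1"
         xmlns="http://www.w3.org/2001/10/synthesis"
         xml:lang="en-US">
    <!-- Multilingual support: request a Japanese-capable voice. -->
    <voice languages="ja">こんにちは。</voice>
    <voice name="SampleVoice">
      <!-- Speech characteristics: volume, pitch, and rate. -->
      <prosody volume="+6dB" pitch="+10%" rate="slow">
        Welcome to the multilingual Web.
      </prosody>
      <!-- Pronunciation control via an IPA phoneme string. -->
      <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>
    </voice>
  </speak>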
• Multimodal Architecture and Interfaces Draft Updated
21 September 2010
The Multimodal Interaction Working Group has published an updated Working Draft of Multimodal Architecture and Interfaces (MMI Architecture), which defines a general and flexible framework providing interoperability among modality-specific components from different vendors - for example, speech recognition from one vendor and handwriting recognition from another.
The main changes from the previous draft are the inclusion of state charts for modality components, the addition of a 'confidential' field to life-cycle events, and the removal of the 'media' field from life-cycle events (see the sketch after this item). A diff-marked version of this document is available.
Learn more about the W3C Multimodal Interaction Activity.
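For a rough sense of how the framework works, the interaction manager and modality components exchange XML life-cycle events. The sketch below shows a start request carrying the newly added 'confidential' field; the element casing, URIs, and the exact shape of the 'confidential' field are assumptions based on the MMI Architecture drafts, not normative syntax:

  <!-- Life-cycle event sketch: the interaction manager asks a modality
       component (e.g. a speech recognizer) to start processing.
       requestID correlates the eventual start response. -->
  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <mmi:startRequest source="IM-1"
                      target="speech-recognizer-1"
                      context="ctx-42"
                      requestID="req-001"
                      confidential="true">
      <!-- Illustrative component-specific payload. -->
      <mmi:data><grammar src="dates.grxml"/></mmi:data>
    </mmi:startRequest>
  </mmi:mmi>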
• Draft of Emotion Markup Language (EmotionML) 1.0 Published
29 July 2010
The Multimodal Interaction Working Group has published a Working Draft of Emotion Markup Language (EmotionML) 1.0. As the Web becomes ubiquitous, interactive, and multimodal, technology increasingly needs to deal with human factors, including emotions.
The present draft of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and grounding in science.
The language is conceived as a "plug-in" language suitable for use in three different areas: manual annotation of data; automatic recognition of emotion-related states from user behavior; and generation of emotion-related system behavior.
Learn more about the Multimodal Interaction Activity.
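As a hedged illustration of the manual-annotation use case, an EmotionML fragment might attach an emotion category and intensity to a piece of data. The namespace and vocabulary URI below follow later drafts of the specification, so the exact attribute names in this 2010 draft may differ:

  <!-- EmotionML sketch: manual annotation of data with an emotion
       category. Namespace and category-set URI are assumptions
       drawn from later drafts of the specification. -->
  <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
             category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <emotion>
      <!-- "Big six" category with an intensity-like value in [0,1]. -->
      <category name="happiness" value="0.7"/>
    </emotion>
  </emotionml>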