The Future of Advanced Dialogue Applications



  1. The Future of Advanced Dialogue Applications
     Simona Gandrabur, Enterprise NLU & Dialogue Team, Montréal

  2. What is the future niche for speech?
     • Beyond directed-prompt, single-slot dialogue:
       – User frustration: inefficiency, lack of coverage, lack of flexibility
       – Competition: smartphones
     • Beyond standard “how may I help you” call-routing:
       – NLU limited to classifying the entire utterance into a meaning bucket
       – Fall-back to directed-prompt follow-up questions
     • The future “niche” for speech/language: complex problem solving / deep NLU / multi-modality
       – What’s my first meeting with Peter next week?
       – I want to go to Montréal sometime tomorrow afternoon; I’d like to get there before 6pm.
       – Did that cheque for Bob Smith go through?
       – I want to go from <X> to <Y>, and stop at a pharmacy on my way.
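
To make the contrast concrete, here is a minimal Python sketch (not from the slides) of the difference between classifying the whole utterance into a meaning bucket and producing a structured interpretation of the Montréal request above. The RoutingResult/Interpretation classes and all field names are illustrative assumptions.

```python
# Illustrative only: contrasting the two NLU styles mentioned on the slide.
from dataclasses import dataclass, field


@dataclass
class RoutingResult:
    """'How may I help you'-style NLU: the whole utterance maps to one bucket."""
    intent: str            # e.g. "BOOK_TRAVEL"
    confidence: float


@dataclass
class Interpretation:
    """Deep NLU: a structured reading of the same utterance."""
    intent: str
    slots: dict = field(default_factory=dict)
    constraints: list = field(default_factory=list)   # e.g. temporal constraints


# "I want to go to Montréal sometime tomorrow afternoon; I'd like to get there before 6pm."
shallow = RoutingResult(intent="BOOK_TRAVEL", confidence=0.92)

deep = Interpretation(
    intent="BOOK_TRAVEL",
    slots={"destination": "Montréal", "departure_window": "tomorrow afternoon"},
    constraints=[("arrival_time", "<", "18:00")],
)
```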

  3. What is needed?
     • Use-case = Problem solving
       – dynamic call-flow
       – multi-turn decision logic
       – data-driven optimizations
     ⇒ Not even SCXML is enough
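
As a rough illustration of “dynamic call-flow” and “multi-turn decision logic”, here is a hedged Python sketch in which the next system action is recomputed from the dialogue state after every turn, rather than read off a fixed prompt graph. The function, slot, and action names are hypothetical.

```python
# Minimal sketch: the next dialogue move is a function of the evolving state.
def next_action(state: dict) -> str:
    """Pick the next system action from whatever is known so far."""
    missing = [slot for slot in ("destination", "departure_window") if slot not in state]
    if missing:
        return f"ask({missing[0]})"        # elicit the most useful missing slot
    if state.get("needs_confirmation"):
        return "confirm(itinerary)"
    return "execute(book_trip)"


# Multi-turn decision logic: the same policy is re-evaluated after each user turn.
state = {}
print(next_action(state))                          # ask(destination)
state["destination"] = "Montréal"
print(next_action(state))                          # ask(departure_window)
state["departure_window"] = "tomorrow afternoon"
print(next_action(state))                          # execute(book_trip)
```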

  4. What is needed?
     • Use-case = Deep NLU
       – ASR: SRGS grammars are insufficient; we need a generic “language model” (statistical / rule-based / combinations / other?)
       – NLU:
         • Let’s not standardize the underlying NLU mechanisms
         • Richer semantic representation, beyond slot-value pairs
         • Example: modifiers / quantifiers. Can we find a common generic representation?
         • Example: relations / frames
         • Standard ontologies / knowledge bases
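
A small sketch of what “richer than slot-value pairs” could look like for the earlier example “What’s my first meeting with Peter next week?”: flat slots drop the ordinal “first”, while a frame with roles and an explicit quantifier keeps it. The structure shown is an assumption for illustration, not a proposed standard representation.

```python
# "What's my first meeting with Peter next week?"

# Flat slot-value pairs: the ordinal quantifier "first" has nowhere to go.
flat_slots = {"intent": "QUERY_MEETING", "person": "Peter", "date": "next week"}

# Frame-style representation: roles (relations) plus an explicit quantifier
# that ranges over the set of matching meetings, not over any single slot.
frame = {
    "frame": "Meeting",
    "roles": {
        "attendee": {"type": "Person", "name": "Peter"},
        "time": {"type": "Interval", "value": "next_week"},
    },
    "quantifier": {"op": "argmin", "over": "start_time"},   # i.e. the *first* one
}
```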

  5. What is needed?
     • Use-case = Multimodality
       – EMMA: can’t wait for it to be a requirement!
       – Abstract semantics, independent of modality: work with common concepts and have modality-dependent rendering
       – Abstract functionality, independent of modality: can we define a limited, common, modality-independent set of I/O functionalities and let the rendering be modality-specific?
       – Multi-source input:
         • Speech / text / touch / history (conversation, application, user profile / data)
       – Multi-source confidence
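
One hedged reading of “multi-source confidence”: each input source scores the same hypothesis, and the scores are fused into a single estimate for the dialogue manager. The weighted log-linear fusion rule below is purely an illustrative assumption, not anything the slides prescribe.

```python
# Sketch: combine per-modality confidences for one hypothesis into one score.
import math


def fuse_confidence(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted geometric mean of the per-source confidences (assumed fusion rule)."""
    log_score = sum(weights[src] * math.log(max(p, 1e-6)) for src, p in scores.items())
    return math.exp(log_score / sum(weights[src] for src in scores))


# e.g. the user says "this one" while tapping an item the application recently showed
combined = fuse_confidence(
    scores={"speech": 0.62, "touch": 0.95, "history": 0.80},
    weights={"speech": 1.0, "touch": 1.5, "history": 0.5},
)
print(round(combined, 3))
```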

  6. What is needed?
     • Use-case = Complexity
       – Maximize reuse:
         • Standard reusable ontologies / knowledge bases
         • Standard reusable VXML libraries
         • Standard reusable grammar libraries
         • Etc.

  7. What is needed?
     • Use-case = R&D agility
       – Need to make it easier to convey vendor-specific information
