Users Need UC Choice in "Mobile App" Interfaces

28 Nov 2013

In a recent article in Speech Technology magazine, a number of industry experts were interviewed about the new, flexible interface needs of "multimodal" end users. UC-enabled speech has now become part of the multimodal approach to online "mobile apps" and "personal assistants" (like Apple's Siri), as well as an option for all forms of messaging, both between people and with online applications. This is particularly critical for BYOD users, who will be using a variety of smartphones and tablets, with different form factors and mobile operating systems, for all their mobile interactions with people and online applications.

I have long viewed "multimodal" unified communications as not only the choice of interface medium, but also the choice between synchronous and asynchronous connectivity. This important factor will increasingly come into play for mobile customer services in the form of "click-for-assistance" options within self-service mobile apps. (The most notable recent development here is Amazon's "Mayday" button, which gives users live assistance from video agents in learning to use the many features of its latest Kindle Fire HDX tablet.)

Because mobility implies dynamic user interface requirements based on individual circumstances (driving, in a meeting, in a noisy environment, walking, etc.), the many comments in the article reinforce the role of "multimodal" VUIs and GUIs that support the user's choice of visual, touch, and voice for input and visual and voice for output. Because consumers are increasingly using multimodal mobile devices for online self-service applications, it is time to accommodate the practical combination of speech input (faster and easier than typing) with visual informational output (faster and easier to scan than listening) whenever possible. When other modes are needed, however, switching to them must have no impact on the basic application process.
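As a minimal browser-side sketch of that speech-in, visual-out combination, consider the Web Speech API where the browser exposes it (often only under the webkit prefix); the element ids and the wording of the on-screen reply are hypothetical placeholders, not part of any particular product:

```typescript
// Minimal sketch: spoken input, visual output.
// Assumes a browser that exposes the Web Speech API (possibly prefixed).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.interimResults = false;

recognition.onresult = (event: any) => {
  // Take the user's spoken utterance as input...
  const utterance: string = event.results[0][0].transcript;
  // ...but answer on screen, which is usually faster to scan than listening.
  const panel = document.getElementById("result-panel")!;
  panel.textContent = `You asked: "${utterance}" — fetching details...`;
};

// Start listening only when the user chooses voice, e.g. by tapping a mic button.
document.getElementById("mic-button")!
  .addEventListener("click", () => recognition.start());
```

The point is that voice is offered as one input choice among several; if the user is in a meeting or a noisy environment, the same request can be typed or tapped instead.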

"Separation of Church and State"For Online Self-Service Applications - The flexibility for a Web-based, mobile self-service application to use VUIs for input and GUIs for output means that such applications have to move away from their old development silos, which assumed either all GUIs for desktops or all VUIs for telephones. What is implied is a new and separate layer of interface control by the end user, not the application itself. The application will get input and generate output through a standard data connection, but that original input will be converted from whatever medium the end user created it in, and the application response will be converted, likewise, to whatever medium the end user dynamically needs.

The Speech Technology magazine article suggests that new W3C standards will support this coming change with an "interaction manager" that enables the dynamic use of different UIs, depending on the context of the end user's device usage. Multimodal mobile devices will then be able to select which medium is used for input independently of which medium is used for output. In addition, as I discussed in a recent post, all end user needs for live assistance can be contextually and flexibly accessed in the user's choice of modes (text/voice message, IM chat, voice/video connection). Such flexibility will be supported by the adoption of the new real-time connectivity capabilities provided by WebRTC.
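For the live-assistance escalation path, a minimal WebRTC sketch might look like this; the sendToSignalingServer helper and the STUN server URL are hypothetical stand-ins, since WebRTC deliberately leaves signaling to the application:

```typescript
// Sketch: escalating from a self-service screen to a live voice/video connection via WebRTC.
async function startLiveAssistance(): Promise<void> {
  // Capture the user's chosen media: audio-only or audio plus video.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // The offer/answer exchange travels over the app's own signaling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer({ type: "assistance-request", sdp: offer.sdp });
}

// Hypothetical helper: in practice this might be a WebSocket message that also
// carries the user's current self-service context to the contact-center platform.
function sendToSignalingServer(message: object): void {
  /* ... */
}

// Typically wired to a "click-for-assistance" control in the self-service app:
// document.getElementById("assist-button")?.addEventListener("click", startLiveAssistance);
```

Because the connection is initiated from inside the self-service session, the assisting agent can be handed the user's current context rather than starting the conversation cold.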

This approach will also support the need for consistency by designing applications to be functionally independent of the medium used for inputs and outputs. This applies both to user control commands and to any form of informational content. Just as person-to-person communications have become multimodal at the contact initiator's end, independently of the recipient's, self-service applications must now support multimodal end users consistently and flexibly.
