The next major interface for controlling and operating smart devices and machines will be voice.
The big challenge is 'semantic speech recognition': the computer must understand the intention behind what you say. The chain of steps to be completed is:
- Speech -> Text
- Text -> Meaning
- Meaning -> Intention
- Intention -> Action
While algorithms are already very good at transforming speech to text, the steps 'Text -> Meaning' and 'Meaning -> Intention' draw only poorly on contextual clues and are therefore of limited usability.
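The chain above can be sketched in code. The following is a minimal, illustrative Python sketch; every function name and the keyword rules are assumptions, standing in for real speech-recognition and intent-classification models:

```python
# Minimal sketch of the Speech -> Text -> Meaning -> Intention -> Action chain.
# All names are illustrative; a real system would use a speech recognizer and
# a trained intent classifier instead of these stubs.

def speech_to_text(audio: bytes) -> str:
    # Stub: in practice, a speech-recognition model transcribes the audio.
    return "turn on the living room lights"

def text_to_intent(text: str) -> dict:
    # Stub for Text -> Meaning -> Intention: a trained classifier would map
    # the utterance (plus context) to an intent; here, simple keyword rules.
    text = text.lower()
    if "turn on" in text and "lights" in text:
        room = "living room" if "living room" in text else "unknown"
        return {"intent": "lights_on", "room": room}
    return {"intent": "unknown"}

# Intention -> Action: dispatch each recognized intent to a handler.
ACTIONS = {
    "lights_on": lambda slots: f"Switching on lights in the {slots['room']}",
    "unknown": lambda slots: "Sorry, I did not understand that",
}

def handle(audio: bytes) -> str:
    text = speech_to_text(audio)
    intent = text_to_intent(text)
    return ACTIONS[intent["intent"]](intent)

print(handle(b""))  # → Switching on lights in the living room
```

The weak links named above sit inside `text_to_intent`: keyword rules carry no context, which is exactly why richer contextual models are needed for the middle two steps.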
This means we must develop algorithms further so that they understand language and the intention behind it, and link that intention to computational action. I will develop a system that offers an unparalleled ability to train a personal algorithm on your intentions: just as your personal use of language is unique to you, your algorithm will learn to read the intentions behind what you say and put them into action.
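One simple way such personalization could work is to let the system learn user-specific corrections on top of a generic model. This is a hypothetical sketch under that assumption; the class and method names are invented for illustration:

```python
# Illustrative sketch of per-user intent personalization: the system stores
# phrase -> intent corrections learned from this user and consults them
# before falling back to a generic classifier (stubbed here).

class PersonalIntentModel:
    def __init__(self):
        self.overrides = {}  # phrases this user has explicitly corrected

    def correct(self, phrase: str, intent: str) -> None:
        # Called when the user corrects a misunderstood command.
        self.overrides[phrase.lower()] = intent

    def predict(self, phrase: str) -> str:
        phrase = phrase.lower()
        if phrase in self.overrides:
            return self.overrides[phrase]
        # In practice, fall back to a generic trained classifier here.
        return "unknown"

model = PersonalIntentModel()
model.correct("make it cosy", "dim_lights")
print(model.predict("Make it cosy"))  # → dim_lights
```

The design choice here is that personal corrections take priority over the shared model, so the system adapts to each user's idiosyncratic phrasing without retraining anything global.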
Devices with this functionality will be present in every home, car, building, and shop, and relevant in purchase, service, emergency, and nursing situations - not to mention the potential this technology offers to people with disabilities of all kinds. With the advent of smart speakers and exponentially growing user numbers, the market reinforces the need to push further in this area.