Language is hard-wired in human thinking
Bias is a problem: every voice AI interprets human language in a slightly different way. What is needed are shared rules for encoding voice commands so that AIs become interoperable.
The goal of whoelse.ai is to provide a standardized language for exchanging human language between different voice AI ecosystems in a shared format. This idea is based on Noam Chomsky's theory of universal grammar: “(..) By universal grammar I mean just that system of principles and structures that are prerequisites for the acquisition of language, and to which every language necessarily conforms.” (Chomsky, 1984)
We may be able to demonstrate that "who else?" questions exhibit the properties of such a rule in language. A similar proof of a language universal by Max Planck researchers won the Ig Nobel Prize, the "alternative" Nobel Prize, in 2015.
Is "Huh?" a universal word?
Is "Who else?" a universal grammar?
"Who else?" questions can apparently be used to formulate almost every kind of request. They are a linguistic shortcut for turning any kind of phrase into a question or call to action. The same holds for user interfaces: it is one of the most common calls to action across apps.
1 wake word for all voice AIs
We are developing a simplified language that serves as a namespace between voice AIs for trading speech commands (intents). This way, a voice AI can be localized and adapted to different use cases and input languages faster.
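To make the idea concrete, here is a minimal sketch of what such a shared intent record could look like. All field and function names are illustrative assumptions, not a published whoelse.ai schema: the point is only that the normalized action stays stable while the surface phrase is localized.

```python
from dataclasses import dataclass

# Hypothetical shared "intent" record that two voice AI ecosystems
# could exchange. Field names are assumptions for illustration.
@dataclass
class SharedIntent:
    phrase: str           # surface "who else?" phrase shown to users
    action: str           # normalized action name, e.g. "ride_share"
    language: str = "en"  # input language tag

def localize(intent: SharedIntent, phrase: str, language: str) -> SharedIntent:
    """Re-express the same intent with a phrase in another input language."""
    return SharedIntent(phrase=phrase, action=intent.action, language=language)

ride = SharedIntent(phrase="Who else needs a ride-share?", action="ride_share")
ride_de = localize(ride, "Wer braucht noch eine Mitfahrgelegenheit?", "de")
# The action identifier is unchanged, so either ecosystem can route it.
```

Because only the phrase changes per language, a new locale needs translations, not new intent logic.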
The usability of voice interfaces will also become increasingly important: how will users speak to AI?
Users remember on average only 3.7 brands. 80% of users report not knowing what they can ask a voice assistant about. Voice interface discoverability is expected to become a huge linguistic usability problem for consumers.
We can demonstrate that so-called "who else?" questions exhibit properties of Noam Chomsky's theory of hard-wired Universal Grammar (UG). It's a pattern every user demographic understands how to apply and intuitively responds to:
"Who else needs a ride-share?"
"Who else looks for a flatmate?"
"Who else orders food?"
By understanding how to paraphrase almost every kind of online service in a simplified language, we have likely also discovered a better invocation phrase for voice assistants.
Since September 2019, Amazon has run its own voice AI interoperability initiative. This strongly validates the general problem of voice AI. However, the Amazon consortium will not be able to solve the problem of naming arbitrary language requests. We believe independent marketplaces for storing and trading language in voice AI are therefore inevitable.
Regardless of naming, it will become a problem when two or more voice AIs listen to the same user in parallel. The AIs involved will have to agree on which voice assistant the user addressed, and which should treat the utterance as background noise.
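One simple way this agreement could work is for each assistant to report a confidence that it was the one addressed, with the highest-confidence assistant responding and the rest staying silent. The following sketch assumes such a confidence-voting scheme; the assistant names and the threshold are illustrative assumptions, not an existing protocol.

```python
# Hypothetical arbitration sketch: several voice AIs hear the same
# utterance; each reports a confidence that it was addressed. The
# highest-confidence assistant responds; the others treat the audio
# as background noise. Threshold and names are assumptions.

def arbitrate(confidences: dict[str, float], threshold: float = 0.5):
    """Return the assistant that should respond, or None if nobody is confident."""
    assistant, score = max(confidences.items(), key=lambda kv: kv[1])
    return assistant if score >= threshold else None

heard = {"assistant_a": 0.91, "assistant_b": 0.34, "assistant_c": 0.12}
print(arbitrate(heard))  # the clear winner responds; the others stay silent
```

A real protocol would also need to handle ties and near-misses, but the core requirement is the same: a shared rule that yields exactly one responder per utterance.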
Ready to see what we’re building?