Bridging the Gap Between Human Intentions and Machine Behaviors
Computer languages have always been the bridge between real-world requirements and software solutions. Until now, that bridge has been long, requiring specialists along its entire length just to get devices to do what you want them to do. As the world moves toward voice-based assistants, the current approaches show their limits. Challenges remain in four primary areas.
How to improve virtual assistant technology?
- Personalization (skills that match an individual user's way of thinking)
- Agility (the ability to locally sense, infer, and act)
- Engagement (discovery, continued use and creation of skills)
- Interoperability (rapidly supporting new devices, platforms and data)
The Synergy® Solution
Synergy® enables dialog (when beneficial) between users and the machine. Users simply describe the desired result in natural language, and Synergy's powerful back end rapidly translates those instructions into adaptive software code. Accordingly, developers of voice-enabled computing appliances that incorporate Synergy will no longer need to limit the ways users express what they want to achieve, while also reducing dependency on coders; the possibilities are endless.
What does Synergy® do?
“Essence is the attribute or set of attributes that makes an object or substance what it fundamentally is, and which it has by necessity, and without which it loses its identity.”
Plato (424/423 – 348/347 BC)
Synergy® interfaces human users with the computing services of the Essence® Platform, achieving computing success where other systems consistently fail. Essence® integrates easily with other systems without wrappers, handles structured or unstructured data, is massively scalable, and requires no metadata. Read this earlier blog post to learn more about the Essence® Project.
Synergy® provides a means to ask questions, make statements (which effectively adds data), and instruct Essence® to run commands against vast amounts of disparate data. It currently contains 262,000 unique collections of data (e.g. images, sounds, tables of synonyms, dictionaries and more). Synergy's methods for scaling and adapting in real time are truly game-changing.
Synergy® keyboard control demonstration
How does Synergy® work?
We will soon show Synergy® responding to user inputs with generated compound sentences. Rather than drawing on libraries of canned responses, it dynamically constructs complete sentences by matching stored, modifiable units of meaning to the situation.
Synergy® is designed to be the bridge between what the user wants to happen and what the software actually does. Minimally, this involves exposing all the data (nouns) that can be queried and/or modified (adjectives, in English today and eventually any human language) so the user can describe what they want to happen, whether typed, spoken, or assembled from buttons and menus into a chain of instructions.
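As an illustration of this noun/adjective pattern, here is a toy interpreter that turns a plain-language request into a chain of instructions. The vocabulary, data, and `interpret` function are all invented for this post; it is a minimal sketch of the pattern, not Synergy's implementation.

```python
# Hypothetical vocabulary: nouns name data collections, adjectives
# name modifiers that can be chained over them in order.
DATA = {
    "photos": ["beach.png", "dog.png", "sunset.png", "receipt.png"],
}

NOUNS = set(DATA)
ADJECTIVES = {
    "last": lambda items: items[-1:],   # keep only the most recent item
    "first": lambda items: items[:1],   # keep only the earliest item
}

def interpret(request: str):
    """Map a plain-language request onto a chain of instructions."""
    words = request.lower().split()
    noun = next(w for w in words if w in NOUNS)          # what data
    chain = [ADJECTIVES[w] for w in words if w in ADJECTIVES]  # how to modify it
    result = DATA[noun]
    for step in chain:
        result = step(result)
    return result

print(interpret("show the last photos"))  # -> ['receipt.png']
```

The point of the sketch is that the user's wording selects both the data and the operations; nothing here is a pre-written "skill" for that exact sentence.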
The Synergy® video referenced above ends with sharing the experience with another person via email. The generated email contains the screen snapshot and the actions/queries taken during the session. The recipient can then choose to process the experience, change it, and share it again. All of the user-specific nuances (ways of thinking) can be embedded within the text characters themselves, hidden from view. This is a powerful way of accommodating differences in thinking, speaking, and even languages when sharing experiences. [Watch more Synergy® videos on our website here]
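Embedding information invisibly within text characters is reminiscent of zero-width-character steganography, a well-known generic technique. The sketch below illustrates that technique only; it says nothing about Synergy's actual encoding, and the payload format is made up.

```python
# Encode arbitrary bytes as invisible zero-width characters appended to
# visible text: U+200B (zero width space) encodes a 0 bit,
# U+200C (zero width non-joiner) encodes a 1 bit.
ZERO, ONE = "\u200b", "\u200c"

def embed(visible: str, payload: str) -> str:
    bits = "".join(f"{b:08b}" for b in payload.encode("utf-8"))
    hidden = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return visible + hidden

def extract(text: str) -> str:
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

msg = embed("Check out this sunset!", "style=casual")
print(extract(msg))  # -> style=casual
# The payload renders as nothing at all, yet survives copy/paste:
print(len(msg) - len("Check out this sunset!"))  # -> 96 hidden characters
```

Because the hidden characters travel with the text, a recipient's software can recover the sender's preferences without changing what a human reader sees.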
How is Synergy® different from other approaches?
Voice interfaces and typed searches such as Google, Siri, Alexa, and Cortana have helped with certain kinds of activity, such as finding a web page with desired content or managing lifestyle tasks like calendar appointments. But they remain limited to a single task, with little or no feedback cycle with the user to refine it.
Knowledge bases, such as Wolfram Alpha/Language, take the interface paradigm a bit further, but they still require very specific language and ordering to issue a single task.
Synergy® leverages Semiotic Intelligence contained within the Essence® Platform. "Semiotics is the study of signs and symbols, in particular as they communicate things spoken and unspoken. Common examples of semiotics include traffic signs, emojis and emoticons used in electronic communication, and logos and brands used by international companies to sell us things—'brand loyalty,' they call it.
But signs are around us all the time: consider a set of paired faucets in a bathroom or kitchen. In your kitchen, the left side is almost certainly the hot water tap, the right is the cold. Twenty years ago or so, taps even had letters designating the temperature of the water—in English, H for hot and C for cold. But in Spanish it was C for hot (caliente) and F for cold (frío). Modern taps have no letter designations at all, or combine both in one tap—but even with one tap, the semiotic content of faucets is still tilt or turn left for hot water, and right for cold. The information about how to avoid being burned is a sign: you know which direction to tilt."
Read more of the article by Richard Nordquist on this topic.
Essence® translates all inputs (e.g. video, audio, text, natural and programming languages, etc.) into symbolic representations in real time. The many advantages of this approach stem from the ability to generate machine instructions that map directly to the translated inputs. Instead of waiting for programmers to write code in a specific language (which takes time and is inherently deterministic), Essence® bypasses that step by deriving meanings [what the user or system designer wants to happen], then translating them into machine instructions in real time. At MindAptiv, we like to refer to this as software becoming wantware.
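The "derive meaning, then generate instructions" step can be caricatured in a few lines. The intent schema and operation table below are hypothetical stand-ins, a sketch of the idea rather than Essence's actual pipeline.

```python
# Toy "wantware" step: derive a meaning from an utterance, then
# synthesize a callable for it on the fly instead of dispatching to a
# pre-written command handler. Vocabulary is invented for illustration.
OPERATIONS = {"total": sum, "largest": max, "smallest": min}

def derive_meaning(utterance: str) -> dict:
    """Very rough meaning extraction: find a known operation word."""
    for word in utterance.lower().split():
        if word in OPERATIONS:
            return {"op": word}
    raise ValueError("no recognized intent")

def synthesize(meaning: dict):
    """Translate the derived meaning into an executable instruction."""
    fn = OPERATIONS[meaning["op"]]
    return lambda data: fn(data)

intent = derive_meaning("what is the largest reading?")
run = synthesize(intent)
print(run([3, 9, 4]))  # -> 9
```

The generated callable exists only for the request that produced it, echoing the post's point that code can be created for a want and then discarded.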
- Personalization – Apps triggered by commands do not give individual users a personal assistant ready to respond to their desired outcomes. Current platforms are increasingly seen as a return to command-line approaches (reminiscent of Microsoft DOS) that require users to remember specific sequences of commands and words. Alexa, Siri, and Google do not support dialog with the user: if no clear match is found, the user is simply told "I don't have that Skill" (Alexa) or "I don't understand" (Siri). Synergy®, by contrast, translates desired outcomes from articulated language and unique thinking styles into stable expressions, which become code. This does not produce code libraries, because the code is generated and then deleted. Instead, Synergy® builds a database (personal or shared) of semantic meanings and idioms, which feed the code-generation process. Crucially, the generated code coexists with engineered solutions (i.e. skills created by coders).
- Agility – Sensors are becoming ubiquitous, and the size and amount of data increasingly require processing locally rather than in the cloud. The volume, variety, and veracity of that data demand local processing both to extract its relevance and to drive the machine learning that improves the accuracy of the insights extracted. More sophisticated devices should not have to wait for the cloud to respond to local needs, and should keep operating when Internet access is down. Synergy® is a natural language dialog system that maps all signal data (from sensors, repositories, and users) to semantic meanings (e.g. in English) that safely direct the behaviors the code produces.
- Engagement – According to a recent Recode article, 69 percent of Alexa "Skills" have zero or one customer review, and by week two the retention rate for Alexa and Google Assistant users averages only 3 percent. For comparison, Android and iOS apps have average retention rates of 13 percent and 11 percent, respectively, one week after first use. Synergy® provides a natural language dialog system and the ability to generate code on the fly, augmenting users with the communication skills they already have. The focus is on the software behavior and the experience the user seeks. This greatly reduces the discovery problem, which leads to user frustration and resources wasted on unused skills.
- Interoperability – Industry forecasts project that between 18 and 50 billion devices will be connected by 2020, by which point the IoT will be a multi-trillion-dollar market. The most successful platforms will enable simple, seamless, and secure operation across the largest number and variety of connected devices. Synergy® uses a revolutionary back end that generates platform-agnostic code and leverages powerful tools (e.g. a next-generation source code packager) to enable new levels of interoperability.
Other benefits and attributes of the Synergy® back end include the following:
- Allows for rapid and easy integration with devices and services without shutting down or restarting.
- Uses a revolutionary approach for protecting user data and devices from hackers and coding errors.
- Significantly improves speech recognition: Synergy® matches synonyms and beginnings of words, and does not require a fixed word order within a sentence. It can use phoneme matching to increase accuracy per word, per context, and per user history.
- Transforms all signal data (i.e. text, images, streamed and recorded video, audio, sensors, code, content on web pages and more) into a uniform data structure that maps directly to semantics (groupings of ideas).
- Provides a breakthrough approach to real time image processing for improved computer vision, object recognition, and video processing.
- Translates between programming languages and natural languages.
- Adapts to more or less compute resources for leveraging both the cloud and the edge (localized compute resources).
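The matching behavior described in the speech-recognition bullet above (synonyms, beginnings of words, no fixed word order) can be sketched generically. The synonym table and vocabulary below are invented for illustration and are not Synergy's actual matcher.

```python
# Order-free matching against a small command vocabulary, tolerant of
# synonyms and of words that are only partially spoken or typed.
SYNONYMS = {"picture": "photo", "pic": "photo", "grab": "take"}

def normalize(word, vocab):
    """Resolve one word via synonym lookup, exact match, or a
    unique prefix (e.g. 'shar' -> 'share')."""
    word = SYNONYMS.get(word, word)
    if word in vocab:
        return word
    hits = [v for v in vocab if v.startswith(word)]
    return hits[0] if len(hits) == 1 else None  # reject ambiguous prefixes

def match(sentence, vocab):
    """Collect every recognizable vocabulary word, ignoring word order."""
    found = {normalize(w, vocab) for w in sentence.lower().split()}
    found.discard(None)
    return found

vocab = {"take", "photo", "share"}
print(match("grab a pic and shar it", vocab))  # contains take, photo, share
```

Because matching is per word rather than per sentence template, "grab a pic" and "take a picture" resolve to the same instructions.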
Synergy® takes voice-based assistants to new levels by adding personalization, agility, engagement, and interoperability. Follow us on this blog, social media, and our website as we unveil more potentially world-changing solutions.
Get ready to 'create at the speed of thought®'
Ken Granville & Jake Kolb
Cofounders of MindAptiv®