Wantware is an approach that allows anyone to combine and remix popular algorithms (e.g. TensorFlow, Caffe, GPT-3, and many others) with one another and with our own generated machine instructions for parallel execution, all on the fly. The result is a superset of artificial intelligence algorithms: AI/ML and traditional algorithms run together in parallel and are easily altered in real time via natural-language dialog. This enables the creation of new and unique solutions to all manner of problem sets. The implications are enormous.
Our Unique Approach to AI and Everything Else in Computing
Our team believes that programming languages are insufficient for bridging the gap between human intentions and machine behaviors. So we created Essence Elements (formerly known as Elixir).
Like elements in the Periodic Table, Essence Elements carry instructions. Expressions of meaning and significance are built from Essence Elements’ base set of semantic units, which distill an idea into its most atomic pieces.
What Makes Essence® a Different Approach to AI/ML?
The current trend in machine learning, which is mostly neural-net based, is built on layers of data transforms that expand or contract the overall amount and precision of data. Essence technology (implemented in every application based on it) is a superset of this approach, since it can model and *directly* use existing neural net graphs, as well as classic algorithmic approaches and a variety of solutions that fall in between.
We can combine and adapt on the fly (no need to stop and restart to explore different combinations of algorithms, data, or intent) while auto-tuning and auto-scaling compute resources. That enables an enormous leap forward in performance, energy efficiency, and sheer opportunity to explore possibilities at much lower cost (no one writing the code, potentially reducing costs by up to 80% compared to legacy approaches).
Snippets of Essence Elements are Like Magic Potions – Just Pop the Top and Pour
A sample Essence Elements Snippet: JoSmxTryChoHa
The above Essence Elements Snippet translates into: ‘Given a collection of events, locate the desired event within the collection and extract its contents. Then, create a new cloned event from the extracted data’.
Change the sequence of the two- and three-character Essence Elements and the meaning of the snippet changes, as do the resultant behaviors. That’s why we compare it to DNA sequencing or modifying compounds using their fundamental elements. Simple changes in the ordering of the nucleobases in DNA can cause drastic changes in an organism’s biology. The same is true for reordering Essence Elements.
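The sequencing idea above can be sketched in a few lines of Python. The real Essence Elements vocabulary is not public, so the token meanings below are invented for illustration only; the point is that a snippet decomposes into 2- and 3-character units whose order determines the resulting behavior.

```python
# Illustrative sketch only: these element meanings are hypothetical stand-ins,
# not the actual Essence Elements vocabulary.
ELEMENT_MEANINGS = {
    "Jo":  "given a collection of events",
    "Smx": "locate the desired event",
    "Try": "attempt the operation",
    "Cho": "extract its contents",
    "Ha":  "create a new cloned event",
}

def tokenize(snippet, vocab):
    """Greedily split a snippet into known 2- and 3-character elements."""
    tokens, i = [], 0
    while i < len(snippet):
        for size in (3, 2):  # prefer longer matches
            piece = snippet[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"unknown element at position {i}: {snippet[i:]!r}")
    return tokens

def interpret(snippet):
    """Render a snippet as an ordered sequence of meanings."""
    return "; ".join(ELEMENT_MEANINGS[t] for t in tokenize(snippet, ELEMENT_MEANINGS))

print(interpret("JoSmxTryChoHa"))
print(interpret("JoChoSmxTryHa"))  # same elements, different order, different meaning
```

Reordering the same five elements yields a different interpretation, mirroring the DNA-sequencing analogy.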
No Need to Learn Essence Elements to Use Them
Stored Meaning Units (ordinary-language representations) are mapped to Essence Elements. The atomic nature of Essence Elements supports re-expressing Meaning Units in any language. This includes regional variants like British or Appalachian English, completely unrelated languages like Japanese, and even fictional or hypothetical tongues like Elvish, Klingon, or a language communicated by visitors from another galaxy.
- Essence Elements are remapped to any of the potential translations, including individualized slang and interpretation of ideas.
- Essence Elements are re-expressed to match an individual or group’s style of expressing ideas, in real-time.
Text is highly compressed and permuted: granular differences are not duplicated, but are stored using a highly efficient hashing approach that takes up little space.
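One common way to achieve this kind of deduplicated storage is content-addressed hashing; the minimal sketch below assumes that mechanism (the `MeaningStore` class and its behavior are illustrative, not MindAptiv’s actual implementation).

```python
import hashlib

class MeaningStore:
    """Store each distinct rendering of a meaning unit once, keyed by content hash."""
    def __init__(self):
        self.blobs = {}  # hash key -> text (deduplicated storage)

    def put(self, text):
        """Return a short content hash; identical text is never stored twice."""
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
        self.blobs.setdefault(key, text)
        return key

    def get(self, key):
        return self.blobs[key]

store = MeaningStore()
# The same idea rendered twice, plus one regional variant:
k1 = store.put("locate the desired event")
k2 = store.put("locate the desired event")  # duplicate: reuses k1's slot
k3 = store.put("find yon event ye seek")
assert k1 == k2 and len(store.blobs) == 2   # only two distinct renderings stored
```

Variants of a phrase cost only a hash key to reference, so granular differences take up little space.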
A Unified Four Part Process
We separate our approach into four phases: training, recognizing, synthesizing, and translating. Training imports, expands, and labels data to understand valid examples or candidates for further classification. Recognition passes incoming data, such as text, images, sound, shapes, or coding sequences, through an Essence formula to produce new information about it, such as classifying it or creating new classifications. Synthesis passes data in reverse through the same formula to attempt to create a ‘valid’ example. Translation is how we alter Essence formulas, Essence examples, or other ML data between domains.
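The recognize/synthesize symmetry described above can be sketched as a chain of invertible transforms: recognition runs data forward through the chain, and synthesis runs a code backward through the same chain. This is an assumed, simplified structure for illustration, not the actual Essence formula internals.

```python
# A "formula" here is a sequence of (forward, inverse) transform pairs.
formula = [
    (lambda x: x * 2, lambda y: y / 2),  # scale step
    (lambda x: x + 3, lambda y: y - 3),  # shift step
]

def recognize(value, formula):
    """Pass data forward through the formula to produce a code/classification."""
    for forward, _ in formula:
        value = forward(value)
    return value

def synthesize(code, formula):
    """Pass a code backward through the same formula to produce a candidate example."""
    for _, inverse in reversed(formula):
        code = inverse(code)
    return code

code = recognize(5, formula)         # 5 -> 10 -> 13
example = synthesize(code, formula)  # 13 -> 10 -> 5
assert example == 5
```

Running the same formula in both directions is what lets one structure serve both recognition and synthesis.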
Example 1: Enabling Breakthroughs in Machine Vision
Problem: Consider machine vision, which can be implemented with classic edge detection and region checking, or with a CNN, as Nvidia has been doing in its work on self-driving cars. There are many approaches to optimal vision, particularly considering the conditions (only at night? only moving objects? only red balls?), the computational power (FLOPS), bandwidth, and wattage available, and time (do new elements need training? how long if so? how fast can results be delivered?).
Solution: Essence technology functionally assembles transforms, permutes the operations between them to find the best trade-offs of speed, space, and power use, and allows editing them in real time. It generates code to recognize objects with ML (via CNN), apply real-time ML training (via FastGANN), employ traditional heuristics (light-scatter conjure), and apply polynomial regression/solving (re-sampling using quaternion log space).
This approach allows conversion between complicated ML models (pill identification, cat breeds, voices, grammar, etc.; see ‘model zoo’ or Kaggle examples), operators (smaller transforms such as edge detection or rescaling values onto a nonlinear curve), and all the connections that make up the network (how values flow from layer to layer).
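Why permuting operations matters can be shown with a toy cost model. The operator names and costs below are invented for illustration; the point is that ordering changes total work (e.g., a cheap downsample placed first shrinks the pixel count every later operator must touch).

```python
from itertools import permutations

# Hypothetical operators: (cost per pixel, pixel-count multiplier after the op).
OPS = {
    "downsample":  (1.0, 0.25),  # cheap, and quarters the pixel count
    "edge_detect": (9.0, 1.0),
    "denoise":     (25.0, 1.0),
}

def pipeline_cost(order, pixels):
    """Total operation count for running the operators in the given order."""
    total = 0.0
    for name in order:
        cost, scale = OPS[name]
        total += cost * pixels
        pixels *= scale  # later operators see the resized image
    return total

# Exhaustively permute the operator order to find the cheapest arrangement.
best = min(permutations(OPS), key=lambda order: pipeline_cost(order, 1_000_000))
print(best)  # a downsample-first ordering minimizes work for the costly operators
```

A real system would search a far larger space (and weigh speed, memory, and wattage together), but the permute-and-measure idea is the same.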
Example 2: Applied to Live Streaming of Video
Problem: High-quality, low-latency video compression is critical to meeting the growing demand for high-quality video content and other bandwidth-intensive services. Increasing demand for HD, 4K, and HDR streaming media negatively impacts network performance.
Solution: Our approach leverages multiple signal processing approaches (in parallel) to deliver paradigm-shifting performance for bandwidth-constrained OTT (over-the-top), OTA (over-the-air), cable networks and numerous remote analysis and content delivery applications.
- adapts to massive drops in network connectivity by simultaneously using upsampling, prediction, de-noising, and historical frames as references.
- does not ‘see’ detail without signal data, but can provide guesses, like puzzle pieces fitting empty slots, with tolerances that may help detect anomalies or raise warnings when data is otherwise unavailable.
- works with modern compression formats such as HEVC/H.265 and VP9 (WebM), as well as VP10, Daala, and Cisco’s Thor; improvements in codecs only enhance the offering.
- brings a self-tuning signal-processor to mitigate compression artifacts and provide rapid adjustments to real-time or archived signal streams.
- provides non-linear blending between previous frames and the ‘pixel second derivatives’ or ‘accelerated directions’ to deliver the most likely changes from frame to frame.
- uses the previous frames’ ‘final image values’ to calculate ‘rates of change’ and then predict the likely pixels.
- Consider modern televisions that take 24Hz or 30Hz cable or DVD signals and upsample them to 60Hz or 120Hz. In all known cases, the chip performs linear interpolation (mixing by percentage) between the current and previous frames; it isn’t cost-effective to provide enough memory to store all the HD or 4K frame buffers needed for higher-quality interpolation. Most incoming frames, even high-noise frames such as explosions or confetti that look blocky on modern video, use only 1/9th to 1/33rd of the original frame when translated to pattern-fields.
- can fill in missing areas of an image. The idea of video codecs using “delta compression” (i.e., don’t resend pixels if they haven’t changed, or are close enough to what they were for the past N frames) is widespread. Our approach of combining delta compression with known matches, spending a small bit cost to replace a ‘delta’ with a known example, seems absent from academic and corporate white papers.
- allows us to alter the signal for enhanced viewing, data mining, improved recognition when matching objects, and other repurposing of the signal. If the only signal processing being done is enhancing edges to better silhouette a visual, or adjusting for poor lighting conditions, the cost is generally N operations per pixel. The cost grows quadratically, since it is based on width times height for an image, or more for volumetrics.
- can automatically reduce processing costs by processing at lower resolutions and resampling to higher ones. The bigger gains are found in cases where the processing is not ‘simple’, such as a user’s desire to ‘de-noise, median filter, upsample, color remap, sharpen @ 2.5X, remove unicells, and soften’.
- The recipe above uses 7 techniques and, for an SD video image, requires 23 ‘passes’ in a standard Adobe or Autodesk pipeline, given resampling of buffers. Our approach often compresses most signal-processing ‘recipes’ into a single pass.
Consider the numbers, even for handling only 7 techniques:
- Standard pipeline (Adobe stack / Autodesk pipe): SD image ops are 7 reads + 7 writes = 14 operations per pixel.
- Essence pipeline (the recipe generally fits into a single pass): 1 combined read-write = 1x, or worst-case 2x, per pixel.
- The difference here is performance, battery/watts, and hardware efficiency rather than bandwidth, but it becomes a bottleneck if you need real-time processing and something takes 14 minutes instead of 1.4 minutes or 4.2 seconds. This example is a fairly common test case, but a user’s needs could make it far more extreme or far less. What matters is that the end user can do these things without the overhead of an extensive pipeline.
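The per-pixel arithmetic above can be checked with a back-of-envelope model. The assumption here is the one stated in the bullets: each pass in a standard pipeline reads and writes every pixel once, while a fused pass does one combined read and one write for the whole recipe (the 720x480 SD dimensions are an assumed example).

```python
# Simple memory-traffic model for a 7-technique recipe.
techniques = 7
multi_pass_ops_per_pixel = techniques * 2  # 7 reads + 7 writes = 14 ops/pixel
fused_ops_per_pixel = 2                    # worst case: 1 read + 1 write total

width, height = 720, 480                   # assumed SD frame
pixels = width * height

multi_pass_total = multi_pass_ops_per_pixel * pixels
fused_total = fused_ops_per_pixel * pixels
speedup = multi_pass_ops_per_pixel / fused_ops_per_pixel

print(multi_pass_ops_per_pixel, fused_ops_per_pixel, speedup)
# 14 ops/pixel vs 2 ops/pixel: a 7x cut in memory traffic even in the
# worst-case fused pass, and 14x against the best-case single read-write.
```

For memory-bound signal processing, that traffic ratio is what turns 14 minutes into roughly 1 to 2 minutes.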
The Transformations also include:
- Real-time compositing of objects extracted from multiple sources (e.g. files, IP locations, sensors, etc.) for product placement within live video streams
- Customization of objects per demographic or user
- Objects become transactional through a simple implementation, without expensive and complex backend technologies
Figure 1 – The Essence® Effect
The Takeaways on AI
So the primary takeaway is not that current approaches to AI don’t work. It is that we can now bring many different approaches together and modify, integrate, and accelerate them (with far fewer samples). AI becomes usable at practically every feature and function level. We call this Superset AI.
The ability to transform incoming signals into easier examples changes the problem. This is particularly true for hard corner cases, such as a blurry digit for ‘eight’ that might be a ‘six’ or ‘nine’. Our approach works with user recipes for transforming one or more signals into many features at higher or lower resolutions, making it well suited to rapidly iterating on new features, or better-recognized features, amongst hard-to-classify datasets.
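A minimal sketch of that recipe idea: run one input signal through a small chain of transforms to produce several feature views for a downstream classifier. The transforms, thresholds, and the 1-D toy signal below are all invented for illustration, not the actual Essence recipe machinery.

```python
# Toy transforms over a 1-D signal (a stand-in for image rows).
def upsample_2x(signal):
    """Nearest-neighbor upsample: repeat each sample twice."""
    return [v for v in signal for _ in range(2)]

def denoise(signal):
    """Toy noise gate: clamp small-magnitude values to zero."""
    return [v if abs(v) > 0.1 else 0.0 for v in signal]

def apply_recipe(signal, recipe):
    """Apply named transforms in order, keeping every intermediate view."""
    views = {"raw": signal}
    for name, transform in recipe:
        signal = transform(signal)
        views[name] = signal
    return views

# A "blurry digit" stand-in: weak noise around two strong strokes.
blurry_digit = [0.05, 0.9, 0.08, 0.85]
views = apply_recipe(blurry_digit, [("denoised", denoise), ("upsampled", upsample_2x)])
print(views["denoised"])   # noise removed, strokes kept
print(len(views["upsampled"]))
```

Each view is a candidate feature set at a different resolution or noise level, which is what makes iterating on hard-to-classify cases cheap.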
Note that this means we can complement existing frameworks such as Berkeley’s Caffe, Google’s TensorFlow, NYU’s Torch, and so on. This applies to all major distributors of machine learning, including Amazon, Baidu, Microsoft, and IBM. Similar to Nvidia’s CUDA-accelerated Caffe, which enhances a particular recipe with faster routines, Essence can also speed up the learning or recognition phases by iterating on the algorithms used to find machine-specific optimal combinations of instructions and bandwidth use.
At the same time, we can empower others who know nothing about coding or scripting. In most cases, the user doesn’t care about coding. The value of software is not the code, it’s the desired behaviors.
At MindAptiv, we’re excited about the near future and opportunities to share our vision and our work. Follow us on our website at https://mindaptiv.com and on social media.
Get ready to ‘create at the speed of thought®’