Wantware: A new approach for generating code from ordinary language

NLP vs. Meaning:

Natural Language Processing (NLP) has improved greatly in recent years, thanks to advances in, and adoption of, machine learning methods. The ability to recognize or synthesize text offers many possibilities, whether the text is written in a human, programming, or domain-specialized language. In all cases, languages express meaning through words, idioms, clauses, context, and other schemes.

NLP works with ‘expressions’ of meaning as text, never with meaning itself. The distinction is crucial: NLP’s main operation is judging whether expressions are similar or dissimilar. Which meanings are captured, and which nuances or opposites are stored, depends on its training text, not on the meaning itself.

Current machine learning, which is mostly neural-network-based, is built on layers of data transforms that expand or contract the overall amount and precision of the data.

Our Unique Approach to Software

Wantware is an approach that lets anyone combine and remix popular algorithms and frameworks (e.g., TensorFlow, Caffe, GPT-3, and many others) with one another and with our own generated machine instructions for parallel execution, all on the fly. The result is a superset of AI algorithms: AI/ML and traditional algorithms run together in parallel and can be altered in real time via natural-language dialog. This allows the creation of new and unique solutions to all manner of problems. The implications are enormous.

We can combine and adapt on the fly (no need to stop and restart to explore different combinations of algorithms, data, or intent), while auto-tuning and auto-scaling compute resources. That enables an enormous leap in performance, energy efficiency, and the sheer opportunity to explore possibilities at much lower cost: with no one writing the code, costs can potentially drop by up to 80% compared to legacy approaches.

Our meaning units, built from Essence Elements, are the actual ‘meaning’, not a reflection or simulacrum of it. We use NLP, we love NLP, and we are excited for NLP’s continuing advances, since it helps us make an initial guess at a user’s intent from their text expressions. However, we store ‘meaning’ itself, so we can work with it at precisely defined computing contexts and tolerances.

Meaning can and should be ‘expressed’ differently at different times for different audiences, interests, and needs. Expressions of meaning, via language, code, art, and other streams of symbols, are filters, or ‘level-of-detail operations’, on meaning itself.


Snippets of Essence Elements are Like Magic Potions – Just Pop the Top and Pour

At MindAptiv, we believe that programming languages are insufficient for filling the gap between human intentions and machine behaviors. So we created Essence Elements (formerly known as Elixir).

Like elements in the periodic table, Essence Elements carry instructions. Expressions of meaning and significance are built from Essence Elements’ base set of semantic units, which distill an idea down to its most atomic pieces.

A sample Essence Elements Snippet: JoSmxTryChoHa

The above Essence Elements Snippet translates into: ‘Given a collection of events, locate the desired event within the collection and extract its contents. Then, create a new cloned event from the extracted data’.

Change the sequence of the two- and three-character Essence Elements and the meaning of the snippet changes, along with the resulting behaviors. That’s why we compare it to DNA sequencing, or to modifying compounds by rearranging their fundamental elements. Simple changes in the ordering of the nucleobases in DNA can cause drastic changes in an organism’s biology. The same is true for reordering Essence Elements.
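As a rough illustration of how such a snippet might be decomposed (the token names and their glosses below are invented for this sketch; they are not the actual Essence Elements ontology), a parser could greedily match two- and three-character semantic units:

```python
# Hypothetical sketch: tokenizing an Essence Elements snippet and mapping
# each semantic unit to a behavior. Token names and meanings are invented
# for illustration only.

SEMANTIC_UNITS = {
    "Jo":  "join",    # work with a collection of events
    "Smx": "search",  # locate the desired event in the collection
    "Try": "attempt", # attempt the operation, tolerating failure
    "Cho": "choose",  # extract the chosen event's contents
    "Ha":  "hatch",   # create a new cloned event from the data
}

def tokenize(snippet: str) -> list[str]:
    """Greedily split a snippet into known 2- and 3-character units."""
    tokens, i = [], 0
    while i < len(snippet):
        for size in (3, 2):  # prefer longer units first
            unit = snippet[i:i + size]
            if unit in SEMANTIC_UNITS:
                tokens.append(unit)
                i += size
                break
        else:
            raise ValueError(f"Unknown unit at position {i}: {snippet[i:]}")
    return tokens

print([SEMANTIC_UNITS[t] for t in tokenize("JoSmxTryChoHa")])
# ['join', 'search', 'attempt', 'choose', 'hatch']
```

Swap two tokens and the operation sequence, and therefore the behavior, changes: the textual analogue of reordering nucleobases.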

No Need to Learn Essence Elements to Use Them

Stored Meaning Units (ordinary-language representations) are mapped to Essence Elements. The atomic nature of Essence Elements supports re-expressing Meaning Units in any language: regional variants like British or Appalachian English, completely unrelated languages like Japanese, or even fictional or hypothetical tongues like Elvish, Klingon, or a language communicated by visitors from another galaxy.

  • Essence Elements are remapped to any of the potential translations, including individualized slang and interpretation of ideas.
  • Essence Elements are re-expressed to match an individual or group’s style of expressing ideas, in real-time.

Highly compressed and permuted text takes up little space: granular differences are not duplicated, but are stored using a highly efficient hashing approach.
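A minimal sketch of that storage idea, assuming a simple content-addressed store (the actual hashing scheme is not public; this only illustrates why re-expressed text costs one small entry rather than a full copy):

```python
import hashlib

# Sketch: deduplicating expressions of a meaning unit via content hashing.
# Duplicate expressions collapse to a single stored blob; each variant
# costs only one hash entry.

class ExpressionStore:
    def __init__(self):
        self._blobs = {}       # hash -> expression text (stored once)
        self._by_meaning = {}  # meaning-unit id -> set of hashes

    def add(self, meaning_id: str, expression: str) -> str:
        digest = hashlib.sha256(expression.encode("utf-8")).hexdigest()
        self._blobs.setdefault(digest, expression)  # duplicates are free
        self._by_meaning.setdefault(meaning_id, set()).add(digest)
        return digest

store = ExpressionStore()
store.add("mu:greet", "Hello!")
store.add("mu:greet", "Hello!")          # exact duplicate: no new storage
store.add("mu:greet", "How do, y'all?")  # regional variant: one new entry
print(len(store._blobs))  # 2
```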

A Unified Four-Part Process

We separate our approach into four stages: training, recognizing, synthesizing, and translating. Training imports, expands, and labels data to understand valid examples or candidates for further classification. Recognition passes incoming data, such as text, images, sound, shapes, or coding sequences, through an Essence formula to produce new information about it, such as classifying it or creating new classifications. Synthesis passes data through the same formula in reverse to attempt to create a ‘valid’ example. Translation is how we move Essence formulas, Essence examples, or other ML data between domains.
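A toy sketch of the shape of that loop (the class, its methods, and the trivial per-class-mean ‘formula’ are invented here for illustration; real Essence formulas are far richer, and translation between domains is omitted):

```python
import statistics
from typing import Iterable

# Toy stand-in for the four-stage process described above. The "formula"
# is just a per-class mean; it only illustrates the forward/reverse
# symmetry between recognition and synthesis.

class ToyFormula:
    def __init__(self):
        self.means: dict[str, float] = {}

    def train(self, labeled: Iterable[tuple[str, float]]) -> None:
        """Training: import and label data to learn what 'valid' looks like."""
        by_label: dict[str, list[float]] = {}
        for label, value in labeled:
            by_label.setdefault(label, []).append(value)
        self.means = {k: statistics.mean(v) for k, v in by_label.items()}

    def recognize(self, value: float) -> str:
        """Recognition: pass data forward through the formula to classify it."""
        return min(self.means, key=lambda k: abs(self.means[k] - value))

    def synthesize(self, label: str) -> float:
        """Synthesis: run the formula in reverse to emit a 'valid' example."""
        return self.means[label]

f = ToyFormula()
f.train([("low", 1.0), ("low", 2.0), ("high", 9.0), ("high", 11.0)])
print(f.recognize(8.5))     # 'high'
print(f.synthesize("low"))  # 1.5
```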

Example 1: Enabling Breakthroughs in Machine Vision

Problem: Consider ‘machine vision’, which can be implemented with classic edge detection and region checking, or with a CNN, as Nvidia has done in its work on self-driving cars. There are many approaches to optimal vision, particularly considering the conditions (only at night? only moving objects? only red balls?), the compute budget (FLOPS, bandwidth, wattage), and time (do new elements need training? how long, if so? how fast can results be produced?).

Solution: Essence technology functionally assembles transforms, permutes the operations between them to find the best speed, space, and power trade-offs, and allows editing them in real time. It generates code to recognize objects with ML (via a CNN), apply real-time ML training (via FastGANN), employ traditional heuristics (light-scatter conjure), and apply polynomial regression and solving (re-sampling in quaternion log space).

This approach allows conversion between complicated ML models (pill identification, cat breeds, voices, grammar, etc.; see ‘model zoo’ or Kaggle examples), operators (smaller transforms such as edge detection or rescaling values onto a nonlinear curve), and all the connections that make up the network (how values flow from layer to layer).
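A hedged sketch of the ‘assemble, permute, and measure’ idea (the transforms are trivial NumPy stand-ins, and a real system would also have to confirm that a reordering preserves the intended semantics):

```python
import itertools
import time
import numpy as np

# Sketch: assemble small vision transforms, then permute their order and
# time each ordering to pick the fastest. The transforms are simple NumPy
# stand-ins, not actual Essence-generated machine code.

def denoise(img):  return np.clip(img, 0.05, 0.95)
def edges(img):    return np.abs(np.diff(img, axis=0, prepend=img[:1]))
def rescale(img):  return img ** 0.5  # map values onto a nonlinear curve

def run_pipeline(stages, img):
    for stage in stages:
        img = stage(img)
    return img

img = np.random.rand(480, 640).astype(np.float32)
best = None
for order in itertools.permutations([denoise, edges, rescale]):
    t0 = time.perf_counter()
    run_pipeline(order, img)
    elapsed = time.perf_counter() - t0
    if best is None or elapsed < best[0]:
        best = (elapsed, [f.__name__ for f in order])
print(best)  # e.g. (0.0041, ['rescale', 'denoise', 'edges'])
```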

Example 2: Applied to Live Streaming of Video

Problem: High-quality, low-latency video compression is critical to meeting the growing demand for high-quality video content and other bandwidth-intensive services. Increasing demand for HD, 4K, and HDR streaming media degrades network performance.

Solution: Our approach leverages multiple signal-processing approaches in parallel to deliver paradigm-shifting performance for bandwidth-constrained OTT (over-the-top), OTA (over-the-air), and cable networks, as well as numerous remote-analysis and content-delivery applications.

Bandwidth Optimization
  • adapts to massive drops in network connectivity by simultaneously using upsampling, prediction, de-noising, and historical frames as references.
  • does not ‘see’ detail without signal data, but can provide guesses, like puzzle pieces fitting empty slots, with tolerances that might help detect anomalies or indicate warnings when data is otherwise unavailable.
  • works with modern compression such as HEVC/H.265 and VP9 (WebM), as well as VP10, Daala, and Cisco’s next-generation Thor codec; improvements in codecs only enhance the offering.
  • brings a self-tuning signal-processor to mitigate compression artifacts and provide rapid adjustments to real-time or archived signal streams.
Latency Mitigation
  • provides non-linear blending between previous frames and the ‘pixel second derivatives’, or ‘accelerated directions’, to deliver the most likely changes from frame to frame (see the sketch after this list).
  • uses previous frames of ‘final image values’ to calculate ‘rates of change’ and then predict the likely pixels.
  • Consider modern televisions that interpolate 24 Hz or 30 Hz cable or DVD signals and upsample them to 60 Hz or 120 Hz. The chips used in all known cases do linear interpolation (mixing by percentage) between the current and previous frame, because it is not cost effective to provide the HD or 4K frame buffers needed for higher-quality interpolation. Most incoming frames, even high-noise frames such as explosions or confetti that look blocky on modern video, use only 1/9th to 1/33rd of the original frame when translated to pattern-fields.
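A minimal sketch of that prediction step, assuming three buffered frames (the real blending is non-linear and adaptive; this shows only the per-pixel second-derivative extrapolation):

```python
import numpy as np

# Sketch: predict the next frame by quadratic extrapolation from three
# buffered frames. velocity ~ first derivative, acceleration ~ second
# derivative, per pixel. Real systems blend this non-linearly with other cues.

def predict_next(f0: np.ndarray, f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """f0 is the oldest frame, f2 the newest; all same shape, values in [0, 1]."""
    velocity = f2 - f1                     # per-pixel rate of change
    acceleration = (f2 - f1) - (f1 - f0)   # per-pixel second derivative
    predicted = f2 + velocity + 0.5 * acceleration
    return np.clip(predicted, 0.0, 1.0)

frames = [np.random.rand(4, 4) for _ in range(3)]
print(predict_next(*frames).shape)  # (4, 4)
```

Extrapolating with the second derivative lets a predictor anticipate accelerating motion that linear interpolation between two frames cannot.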

Prediction

  • can fill in missing areas of an image. The idea of video codecs using ‘delta compression’ (don’t resend pixels if they haven’t changed, or are close enough to what they were for the past N frames) is widespread. Our approach combines delta compression with known matches, spending a small bit cost to replace a ‘delta’ with a known example, an idea that appears to be absent from academic and corporate white papers (sketched below).
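A sketch of that combination, with invented block size, tolerances, and dictionary (real codecs work on transform coefficients and motion-compensated blocks rather than raw pixel blocks):

```python
import numpy as np

# Sketch: delta compression plus a dictionary of known blocks. Unchanged
# blocks send nothing; blocks close to a known example send a short index;
# only truly novel blocks send raw pixels. Thresholds are invented.

BLOCK = 8
SKIP_TOL, MATCH_TOL = 0.01, 0.05

def encode_block(prev, cur, dictionary):
    if np.abs(cur - prev).mean() < SKIP_TOL:
        return ("skip",)               # delta compression: resend nothing
    for idx, known in enumerate(dictionary):
        if np.abs(cur - known).mean() < MATCH_TOL:
            return ("match", idx)      # small bit cost: dictionary index
    dictionary.append(cur.copy())
    return ("raw", cur)                # novel block: send pixels

dictionary = []
prev = np.zeros((BLOCK, BLOCK))
cur = np.zeros((BLOCK, BLOCK)); cur[2:4, 2:4] = 1.0
print(encode_block(prev, cur, dictionary)[0])  # 'raw' the first time
print(encode_block(prev, cur, dictionary)[0])  # 'match' once it is known
```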

Transformation

  • allows us to alter the signal for enhanced viewing, data mining, improved recognition when matching objects, and other repurposing of the signal. If the only processing being done is enhancing edges to better silhouette a visual, or adjusting for poor lighting conditions, the cost is generally N operations per pixel, and the total cost scales with width times height for an image (quadratic in linear resolution), or more for volumetric data.
  • can automatically reduce processing costs by processing at lower resolutions and resampling higher. The bigger gains come when the processing is not ‘simple’, such as a user asking to ‘de-noise, median filter, upsample, color remap, sharpen @ 2.5X, remove unicells, and soften’.
  • That recipe uses 7 techniques and, for an SD video image, requires 23 ‘passes’ in a standard Adobe or Autodesk pipeline once buffer resampling is counted. Our approach often compresses such signal-processing ‘recipes’ into a single pass.

Consider the numbers, even for handling only 7 techniques:

  • Standard pipeline (Adobe stack / Autodesk pipe): SD image ops of 7 reads + 7 writes = 14 memory operations per pixel.
  • Essence pipeline (the recipe generally fits into a single pass): 1 combined read-write = 1x, or worst case 2x, per pixel.
  • The difference here is performance, battery/watts, and hardware efficiency rather than bandwidth, but it becomes a bottleneck if you need real-time processing and something takes 14 minutes instead of 1.4 minutes or 4.2 seconds. This example is a fairly common test case, but a user’s needs could make it far more extreme or far milder. What matters is that the end user can choose to do these things without the overhead of an extensive pipeline, as the sketch below illustrates.
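A hedged sketch of where the per-pixel counts come from (the filters are trivial stand-ins, and a real fused kernel would be generated machine code rather than Python):

```python
import numpy as np

# Sketch: multi-pass vs. fused single-pass filtering. In the multi-pass
# style each stage reads and writes the full buffer (2 memory touches per
# pixel per stage); the fused version composes all stages per pixel, so
# the buffer is read once and written once.

def brighten(p): return p * 1.1
def gamma(p):    return p ** 0.9
def clamp(p):    return min(max(p, 0.0), 1.0)

STAGES = [brighten, gamma, clamp]

def multi_pass(img):
    for stage in STAGES:          # 1 read + 1 write per pixel, per stage
        img = np.vectorize(stage)(img)
    return img

def fused_pass(img):
    def combined(p):              # all stages per pixel: 1 read + 1 write
        for stage in STAGES:
            p = stage(p)
        return p
    return np.vectorize(combined)(img)

img = np.random.rand(8, 8)
assert np.allclose(multi_pass(img), fused_pass(img))  # same result
```

With 7 stages, the multi-pass style touches memory 14 times per pixel, while the fused version touches it twice at worst, which is where the 14x versus 1-2x figures above come from.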

The Transformations also include:

  • Real-time compositing of objects extracted from multiple sources (e.g. files, IP locations, sensors, etc.) for product placement within live video streams
  • Customization of objects per demographic or user
  • Objects become transactional through a simple implementation, without expensive and complex backend technologies

Figure 1 – The Essence® Effect


The Takeaways on AI

So the primary takeaway is not that current approaches to AI don’t work. It is that we can now bring many different approaches together, modify them, integrate and accelerate them (with far fewer samples). AI becomes usable at practically every feature and function level. We call this Superset AI.

The ability to transform incoming signals into easier examples changes the problem. This is particularly true for hard-to-solve corner cases, such as a blurry digit for ‘eight’ that might be a ‘six’ or a ‘nine’. Our approach works with user recipes for transforming one or more signals into many features at higher or lower resolutions, which makes it suited to rapidly iterating on new or better-recognized features in hard-to-classify datasets (see the sketch below).
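A rough sketch of such a recipe (the steps below are invented stand-ins for illustration, not actual Essence transforms):

```python
import numpy as np

# Sketch: a "recipe" that turns one signal into several features at
# different resolutions, to give a classifier easier examples of a
# hard case such as a blurry digit.

def downsample(img, factor):
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def recipe(img):
    return {
        "full":   img,
        "half":   downsample(img, 2),  # coarse shape, less noise
        "edges":  np.abs(np.diff(img, axis=1, prepend=img[:, :1])),
        "binary": (img > img.mean()).astype(np.float32),  # stroke mask
    }

blurry_digit = np.random.rand(28, 28).astype(np.float32)
features = recipe(blurry_digit)
print({k: v.shape for k, v in features.items()})
```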

Note that this means we can complement existing frameworks such as Berkeley’s Caffe, Google’s TensorFlow, NYU’s Torch, and so on. This applies to all major distributors of machine learning, including Amazon, Baidu, Microsoft, and IBM. Much as Nvidia’s CUDA Caffe enhances a particular recipe with faster routines, Essence can speed up the learning or recognition phases by iterating on the algorithms used, to find machine-specific optimal combinations of instructions and bandwidth use.

At the same time, we can empower others who know nothing about coding or scripting. In most cases, the user doesn’t care about coding. The value of software is not the code, it’s the desired behaviors.

At MindAptiv, we’re excited about the near future and opportunities to share our vision and our work. Follow us on our website at https://mindaptiv.com and on social media.

Get ready to ‘create at the speed of thought’®

The Technology Inside the Gem

Pivotal Innovation

Three solutions to choose from, for alignment with your business needs. Essence technology is designed to address legacy, current, and future computing needs.

Wantware

A new approach to computing that combines human-expressed wants with semantic units to generate highly efficient machine-level code in real time.

Essence Elements – e2

A new curated ontology of semantic units for mapping meaning to machine behaviors. e2 has been refined over 7 years to cover the fullness of what computing can do.

3 Essence Solutions

A new set of options for modernizing legacy systems, adding powerful software to existing platforms, and transcending legacy computing.

A Deeper Dive

As we discuss our technology with engineers, we will post answers to their questions on our website. We look forward to sharing more about our approach.

Past, Present, Future

Three solutions: improving current legacy code, enhancing current-generation platforms, and delivering the era of computing in which computers are much more like appliances.