A Triple Act Like No Other

Essence ushers in an entirely new era of software that adapts to the runtime environment and to inputs from developers and users by auto-generating dramatically more powerful code on demand. Over two decades of research and ten years of dedicated development have resulted in revolutionary capabilities.

The Essence Agent, Powers, and Skills form a unique system designed to provide a means to understand, tweak, edit, and create machine behaviors.

Agent

The Master Algorithm Generator

A system that regenerates code based on profiling, both to best manage computing resources and to let humans understand, tweak, edit, and create machine behaviors. The Agent is packaged with every wantware product powered by Essence.

Powers

The Code Packager

A system that supports all possible ways an Essence Agent can process native source code for any device, service, computing platform, or data source, along with all inputs and outputs. Once source code has been packaged as a Power, it can be called upon via a Skill for processing by the Agent.

Skills

The World Builder

The ways to access Powers and Agent-generated behaviors without needing to be a coder. Each Skill is made from Essence Elements and is presented to the user as an interaction method, or to other systems via platform-specific methods such as APIs packaged as Powers.

The Agent’s 10-stage Process

The Essence Agent uses a 10-stage process to automatically generate, adapt, and optimize computer-readable code for parallel processing in response to its evaluation of memory latency, the number of data fetches, the number of instruction cycles, the order of instructions in the instruction pipeline, cache size, the wattage consumed by each series of instructions, and the modes of the underlying chip and the chosen instruction set architecture.

Code Generator

The Agent regenerates CPU assembly or GPU code based on profiling, to trade off memory stalls against arithmetic operations, to best manage dependencies on previous results, and more.

The Agent generates, auto-tunes, auto-scales, and auto-syncs fully parallelized code to manage compute resources.
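
As an illustration only, not Essence's actual generator, the following Python sketch shows the general idea of profile-guided variant selection: two hand-written implementations of the same operation stand in for generated code templates, and the faster one for a given input size is chosen at runtime.

```python
import timeit

# Two candidate "code templates" for the same operation: summing squares.
# A real generator would emit CPU/GPU variants; this sketch just compares
# a memory-heavy list variant against a lower-memory generator variant.
def sum_squares_list(n):
    values = [i * i for i in range(n)]   # more memory traffic
    return sum(values)

def sum_squares_gen(n):
    return sum(i * i for i in range(n))  # less memory, more per-item work

def pick_variant(n, trials=5):
    """Profile both variants and return the faster one for this input size."""
    candidates = {"list": sum_squares_list, "gen": sum_squares_gen}
    timings = {
        name: min(timeit.repeat(lambda f=f: f(n), number=1, repeat=trials))
        for name, f in candidates.items()
    }
    best = min(timings, key=timings.get)
    return candidates[best], timings

fn, timings = pick_variant(100_000)
print(timings, "->", fn.__name__)
```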

Resource Balancer

Automates the tuning of controls for visual, audio, physics-simulation, or other sensory and/or calculation-based services.

Data & Code Transformer

Treats data and code like ingredients to be processed by a unified formula processor. A wide variety of signal transforms can be performed in parallel.
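
A minimal sketch of the "ingredients" idea, assuming hypothetical transforms (scale, offset, rectify) of our own invention: independent transforms are applied to the same signal in parallel using Python's standard process pool.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical "ingredients": a signal and a set of transforms to apply.
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 1000

def scale(xs):   return [2.0 * x for x in xs]
def offset(xs):  return [x + 1.0 for x in xs]
def rectify(xs): return [abs(x) for x in xs]

def apply_parallel(xs, transforms):
    """Run independent transforms over the same input on separate processes."""
    with ProcessPoolExecutor() as pool:
        futures = {t.__name__: pool.submit(t, xs) for t in transforms}
        return {name: f.result() for name, f in futures.items()}

if __name__ == "__main__":
    results = apply_parallel(signal, [scale, offset, rectify])
    print({name: out[:4] for name, out in results.items()})
```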

Profiler

The ability to track the amount and timing of work done, in order to make better scheduling estimates for similar future work and for code generation.

Real-time profiling costs almost nothing in performance cycles, yet accumulates effective intelligence for estimating future work costs, particularly as the amount of data varies.
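
The sketch below illustrates the kind of accumulation described here, under assumptions of our own: a hypothetical WorkProfiler records (input size, elapsed time) pairs and fits a simple least-squares line to estimate the cost of future work at new data sizes.

```python
import time

class WorkProfiler:
    """Accumulate (input_size, elapsed) samples and fit cost ~ a + b*size."""
    def __init__(self):
        self.samples = []  # (size, seconds)

    def measure(self, fn, data):
        start = time.perf_counter()
        result = fn(data)
        self.samples.append((len(data), time.perf_counter() - start))
        return result

    def estimate(self, size):
        # Ordinary least squares over the accumulated history.
        n = len(self.samples)
        if n < 2:
            return None
        sx = sum(s for s, _ in self.samples)
        sy = sum(t for _, t in self.samples)
        sxx = sum(s * s for s, _ in self.samples)
        sxy = sum(s * t for s, t in self.samples)
        denom = n * sxx - sx * sx
        if denom == 0:
            return sy / n
        b = (n * sxy - sx * sy) / denom
        a = (sy - b * sx) / n
        return a + b * size

profiler = WorkProfiler()
for size in (1_000, 10_000, 100_000):
    profiler.measure(sorted, list(range(size, 0, -1)))
print(f"predicted cost for 1M items: {profiler.estimate(1_000_000):.4f}s")
```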

Data & Code Bridger

The ability to process dynamically added native code (hardware drivers, browser engines, emulators, virtual machines, operating systems, APIs, media packages, network code, AI algorithms, and so on) and to remove native code when it is not needed.

This is a highly efficient way of managing code and reducing security vulnerabilities from static code (fixed targets for hackers).
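
As a loose analogy only, with Python modules standing in for native code (which would really go through a platform loader such as dlopen, and for which true unloading is platform-specific), this sketch shows the add-on-demand, remove-when-idle lifecycle:

```python
import importlib
import sys

def load_component(name):
    """Bring a component into memory only when it is needed."""
    return importlib.import_module(name)

def unload_component(name):
    """Drop our reference so the component can be reclaimed."""
    sys.modules.pop(name, None)

json_codec = load_component("json")       # added on demand
print(json_codec.dumps({"loaded": True}))
unload_component("json")                  # removed when no longer needed
```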

Introducing Morpheus

Because we expect that some will doubt our claims about what our tiny 26 KB Agent can achieve, we have provided details here about our patented code and data generation system, Morpheus.


Morpheus™ – Code and Data Transformer

Morpheus is a platform that allows Essence tools and applications to run on existing operating systems such as macOS, with Android, iOS, tvOS, watchOS, Linux, and Windows versions coming in 2021 and 2022.

Whether applying pre-existing code (packaged as Powers) or code generated by the Essence Agent, this Skill brings a powerful capability: efficiently applying transforms (e.g., AI/ML algorithms and more) to any signal or data in real time, without the need to write code.

Morpheus can transform existing programming jobs into tiny algorithmic units. Examples might include identifying a search pattern, processing a mathematical formula, seeking and reading a file, and reordering data.

Each of these algorithmic units can be expressed in different code templates that produce different machine instructions. These instructions can be bundled and profiled for timing, energy use, and resource use, and then separated to run on different processor cores based on scheduled access to changing data.

While some workloads, such as banking transaction processing, are intrinsically serial in nature, the latencies associated with the reading of caches, disk I/O, network packets, and other events can make it possible to split up the work for better performance in many cases.

Some tasks, such as image rendering, sound rendering, and shape generation, are inherently well suited to parallel processing. Other tasks, such as searching for data patterns, sorting, running mathematical formulas, making logical decisions over a container of data, running simulations, and synchronizing precisely timed changes among machines, can also be parallelized easily. This parallelization is accomplished using a transform that allows all cores to run simultaneously (or sleep when idle) by converting workloads into algorithmic units and scheduling their instructions across multiple processors, as sketched below.
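
To make the idea concrete, here is a small sketch, not Essence's implementation: a pattern search is split into chunk-sized "algorithmic units" that run simultaneously on separate cores via a process pool.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

def search_chunk(args):
    """One algorithmic unit: scan a slice of the data for matches."""
    chunk, offset, target = args
    return [offset + i for i, v in enumerate(chunk) if v == target]

def parallel_search(data, target, workers=4):
    size = max(1, len(data) // workers)
    units = [(data[i:i + size], i, target) for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(chain.from_iterable(pool.map(search_chunk, units)))

if __name__ == "__main__":
    data = list(range(10)) * 100_000
    print(len(parallel_search(data, 7)))  # 100000 matches
```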

While some tasks inherently have delays, stalls, or bottlenecks, the use of tiny algorithmic units maximizes performance through self-profiling and avoids the semaphore, mutex, and locking mechanisms that hurt performance in many other parallel systems. This approach requires all work, defined as any computational task expressed in the semantic units of Essence Elements, to be estimated for worst-, average-, and best-case duration and resource usage.

The methods used can be mappings between Essence Elements and instruction blocks, such as: “iterate all elements in Collection A; for each element, consider its value B; if it matches C, then increment counter D”.

In English, a phrase like “tell me who I know in Zaire” or “please show me anyone in my contacts who resides in Zaire” will map to: “iterate all people in Contacts, Facebook Friends, and LinkedIn Connections; for each person, if their residence is Zaire, add that person to the collection named ‘People of Zaire’; then display ‘People of Zaire’”.

While it is a simple example, it illustrates the expansion of each basic term into known resources, with a most-recently-stored value for each range (for instance, the last time we read the Facebook Friends list, it held 1,000 entries), iterating per person using the btree-iterate approach.

The code used is thus determined by data access, iteration over the data, and the operations on the data (here, comparing against “Zaire”, which might be a GPS-distance match, a name match, or any other method).
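
A hypothetical rendering of that expansion in Python, with names and data invented for illustration; note that the comparison step is itself a swappable unit:

```python
# Invented data sources standing in for Contacts, Facebook Friends,
# and LinkedIn Connections.
contacts = [{"name": "Ada", "residence": "Zaire"},
            {"name": "Lin", "residence": "Peru"}]
facebook_friends = [{"name": "Omar", "residence": "Zaire"}]
linkedin_connections = [{"name": "Kai", "residence": "Japan"}]

def residence_matches(person, place):
    # The comparison is a swappable unit: an exact name match here,
    # but it could be a GPS-distance test or fuzzy match instead.
    return person.get("residence") == place

# iterate -> compare -> collect -> display
people_of_zaire = [
    person
    for source in (contacts, facebook_friends, linkedin_connections)
    for person in source
    if residence_matches(person, "Zaire")
]
print("People of Zaire:", [p["name"] for p in people_of_zaire])
```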

These task histories form addressable records that can reside in RAM until the space is needed for something else. If RAM is needed, the data can be cached to disk, or dropped and recreated as needed.

These task histories store the combination of Semantic Units (the template of activity, such as iterate, compare, find, add) with DataCraft (which databases and which pieces of information are used by the semantic part) and Algorithmic Units.

Algorithmic Units identify which actual algorithms and data reformatting are needed and were selected, such as linear iteration over consecutive addresses (an array partition), incremental pointer dereferencing (a doubly linked list), a hash table, or a tree/graph format. These are generally governed by a top-level “code choice”, a mid-level “data format” (interleaved XYZXYZXYZ versus separated XXX YYY ZZZ), and low-level machine instructions (LD, LD, TST, JNE, etc.).
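
The record below is an illustrative guess at what one such task-history entry might hold; the field names are invented, not Essence's actual schema, but they span the three layers just described.

```python
from dataclasses import dataclass, field

@dataclass
class TaskHistory:
    semantic_units: list   # e.g. ["iterate", "compare", "add"]
    datacraft: dict        # which databases / fields were touched
    code_choice: str       # top level: which algorithm family
    data_format: str       # mid level: "XYZXYZXYZ" vs "XXX YYY ZZZ"
    instructions: list     # low level: e.g. ["LD", "LD", "TST", "JNE"]
    timings: list = field(default_factory=list)  # observed durations

record = TaskHistory(
    semantic_units=["iterate", "compare", "add"],
    datacraft={"source": "Contacts", "field": "residence"},
    code_choice="linear-scan",
    data_format="XXX YYY ZZZ",
    instructions=["LD", "LD", "TST", "JNE"],
)
record.timings.append(0.0031)  # one profiled run
print(record)
```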

As each task history grows in DataCraft and Algorithmic Unit entries, the probabilities of future choices shift based on the accuracy of past estimates and the number of optional choices that remain.

For some operations, such as square root, there are two single-instruction methods and four multi-instruction methods for approximating the value – only six choices for low precision and only two for high precision. This is a simple case, because little variation is possible beyond reordering where the calculation is issued in the task pipeline.
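
For intuition, here is a sketch of precision-dependent method selection, using Newton-Raphson refinement as a stand-in for the multi-instruction approximations (the actual instruction-level methods are not shown here):

```python
import math

def sqrt_newton(x, iterations):
    """Multi-instruction stand-in: Newton-Raphson refinement (assumes x > 0)."""
    guess = x / 2 or 1.0
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess

def choose_sqrt(precision):
    # Low precision tolerates a cheap approximation; high precision
    # falls back to the hardware/library implementation.
    if precision == "low":
        return lambda x: sqrt_newton(x, 3)
    return math.sqrt

for precision in ("low", "high"):
    f = choose_sqrt(precision)
    print(precision, f(2.0))
```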

Other cases can be far more complex and have many more expressible choices, which means it may take longer to reach a locally optimal state. Regardless of how much task history data exists, all current tasks are assigned priorities and sorted by resources. We use the classic and effective greedy approximation to this NP-complete packing task, best known from the knapsack problem (see the sketch below).
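
A minimal sketch of that greedy pass, with invented task names, priorities, and costs: tasks are sorted by priority per unit of resource cost, then packed until the budget is exhausted.

```python
tasks = [
    {"name": "render", "priority": 9, "cost": 5},
    {"name": "search", "priority": 7, "cost": 2},
    {"name": "sync",   "priority": 4, "cost": 4},
    {"name": "log",    "priority": 1, "cost": 1},
]

def greedy_schedule(tasks, budget):
    """Knapsack-style heuristic: best priority-per-cost first, pack until full."""
    scheduled, remaining = [], budget
    for task in sorted(tasks, key=lambda t: t["priority"] / t["cost"], reverse=True):
        if task["cost"] <= remaining:
            scheduled.append(task["name"])
            remaining -= task["cost"]
    return scheduled

print(greedy_schedule(tasks, budget=8))  # ['search', 'render', 'log']
```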

Any delays or missed durations are relayed to the user as required by semantic-unit scope (such as “tell me if late”, “ignore”, or “log”).

Notably, processor selection alters Algorithmic Unit selection, since different instructions may or may not be available and the accessible ranges of memory differ. This simply results in certain task-history scores being set to negative values to indicate “not applicable”.

This approach can layer several simulated-annealing solutions onto the N-tasks-using-P-processors-with-I-instructions-on-R-resources problem. It relies on computations being expressed through the Essence Agent.
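
The following is a generic simulated-annealing sketch for one such layer, assigning N tasks to P processors to minimize the busiest processor's load (the makespan); the task costs and cooling schedule are illustrative choices of ours, not Essence's.

```python
import math
import random

def makespan(assignment, costs, processors):
    """Load of the busiest processor under a given task-to-processor map."""
    loads = [0.0] * processors
    for task, proc in enumerate(assignment):
        loads[proc] += costs[task]
    return max(loads)

def anneal(costs, processors, steps=20_000, temp=1.0, cooling=0.9995):
    assignment = [random.randrange(processors) for _ in costs]
    current = makespan(assignment, costs, processors)
    best, best_span = list(assignment), current
    for _ in range(steps):
        task = random.randrange(len(costs))
        old_proc = assignment[task]
        assignment[task] = random.randrange(processors)  # propose a move
        candidate = makespan(assignment, costs, processors)
        # Always accept improvements; accept regressions with a probability
        # that shrinks as the temperature cools.
        if candidate <= current or random.random() < math.exp((current - candidate) / temp):
            current = candidate
            if candidate < best_span:
                best, best_span = list(assignment), candidate
        else:
            assignment[task] = old_proc  # revert the rejected move
        temp *= cooling
    return best, best_span

random.seed(0)
costs = [random.uniform(1, 10) for _ in range(40)]
assignment, span = anneal(costs, processors=4)
print(f"makespan: {span:.2f} (lower bound: {sum(costs) / 4:.2f})")
```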

The Essence Powers System

The Essence Powers System (EPS) has 64 Power Types [8 groups with 8 Powers each] that cover all possible ways an Essence Agent can process native source code for any device, service, computing platform (e.g., drivers, APIs, emulators, browser engines, virtual machines, operating systems, media types, network code, databases), or data source, along with all inputs and outputs.

Each Power has one or more Tech-Certifications, which include providing behavior (code) and answers (data), as well as a standardized means to test and verify that they do what they claim (trust).

The Agent combines Powers with its own Agent-generated machine instructions based on human intentions.

Any device, service, data or software can become the digital equivalent of a Lego® Brick. The Agent swaps Powers in and out of memory as needed.

The Essence Skills System

The Essence® Agent is a powerful way to unlock the potential of machines. Skills empower the individual user to control the Agent on an unprecedented level, without requiring the skills of a coder.

For Developers and non-Developers alike, Skills are the ways to access, and soon create, Powers and Agent-generated behaviors built from semantic units [Essence Elements], without needing to be a coder. Each Skill is presented to the user as an interaction method, and Skills can be turned on and off on a per-App/Aptiv basis.

Our enterprise and consumer products are made up of various Skills and Powers, such as natural language dialog (NLD) and natural language processing (NLP), object and facial recognition, code and data transformers, digital signal processors, upsamplers and downsamplers, compressors and encoders, code generators, and much more.

While other approaches combine code from repositories, our solutions combine both wantware (code generated from expressed intentions) and packaged code (APIs, drivers, emulators, browser engines, media packages and databases, VMs, and practically any other code), turning them into the digital equivalents of Lego® Bricks. Just plug them in and they work; hot-swap them in and out of memory for far more efficient memory use. Our approach makes selecting and combining machine behaviors exceptionally easy and efficient. Skills put Powers and Essence Agent capabilities in the hands of everyone.
