NEBULO

Why Nebulo

Sharing Machine Behaviors as Easily as Sharing a Photo

  • Unique Identifiers
  • Translatable Structures
  • The Guard
  • Object Relationships
  • Object Intention
  • Procedural Programming
  • User-level Meaning Expression
  • Parallelization
  • Object Inference
  • Object Translation
  • No-file/No-database Management
  • Our Sensory Pipeline

Details About Nebulo

Nebulo assigns a unique identifier to each unit of information. These identifiers can be 128-bit IDs that can be mapped and addressed across multiple devices.

These unique identifiers, which can also be called HashIDs, can enable the following (a sketch in code follows the list):

  • Can serve as a way of tagging a file or object so that it can be found quickly, without requiring any context for the search.
  • Can prevent duplication of data, since redundant copies can be stored once with multiple pointers.
  • Can improve search speed and reduce the bandwidth required to perform searches.
  • Can be combined with probabilistic fuzzy-data methods to compensate for spelling errors and support other score-matched searches.
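
To make the HashID idea concrete, here is a minimal C++ sketch, assuming an in-memory store keyed by a 128-bit ID; the names (HashID, Store) are illustrative, not Nebulo’s actual API:

```
// Minimal sketch of the HashID idea: a 128-bit ID keys a single stored copy,
// and callers hold lightweight IDs instead of duplicated data.
#include <cstdint>
#include <map>
#include <vector>

struct HashID {                      // 128-bit identifier
    uint64_t hi = 0, lo = 0;
    bool operator<(const HashID& o) const {
        return hi != o.hi ? hi < o.hi : lo < o.lo;
    }
};

class Store {
    std::map<HashID, std::vector<uint8_t>> blobs;  // one copy per unique ID
public:
    // Storing the same content under the same ID keeps a single copy;
    // redundant puts become no-ops, preventing duplication.
    void put(HashID id, std::vector<uint8_t> data) {
        blobs.emplace(id, std::move(data));
    }
    const std::vector<uint8_t>* find(HashID id) const {
        auto it = blobs.find(id);
        return it == blobs.end() ? nullptr : &it->second;
    }
};
```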

“Translatable Structures” let us reorder stored data and runtime-processed data streams (such as lossy compaction of 0…1 floats, or reordering AoS to SoA, as in padded XYZW, XYZW, XYZW arrays becoming XXX…, YYY…, ZZZ… arrays; a sketch of this reordering follows the list below). Here’s why this is such a powerful way to work with machine instructions, data, and code:

  • Reordering lets us map the same data groups to many different chunks of machine instructions and code, and gives us the ability to profile and regenerate the actual implementation, its algorithmic units (conventional or AI), and the final machine instructions on the fly. This takes processing of machine instructions, data, and code to new levels of efficiency.
  • Profile-and-regenerate allows us to self-optimize for the computer’s conditions, such as heavy hard-drive traffic from another process or low-battery mode toggling on and off. Smart optimization of computing resources produces desired outcomes such as efficient use of batteries, processors, networking, I/O devices, etc.
  • Our design requires Objects to have two patterns: every chunk of data has a Guard, which manages access/trust/security, and a means to Synchronize and Coalesce changes (reading the correct values at the correct time).
  • We refer to uniquely named entities/objects as a ‘Thing’. We use Meaning Coordinates, a meaning representation system (not a language), to build meaning units.
  • Meaning Coordinates can be stored on a ledger that has the benefits of Blockchain, with the additional attributes of a semantic/meaning-based system. The relevant coordinate here is Meaning Coordinate #97, whose mnemonic is ‘Dz’.
  • Dz are found via a unique ID (like a context-routed URL) and ‘time of access’. The Dz Meaning Coordinate is defined as: Thing – Named Entity, Existence, Presence, package of Possibilities in a Spacetime existence.
  • Presence describes a container for a Thing, an Existence, whether concrete or abstract.
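
As a concrete example of the reordering mentioned above, here is a minimal C++ sketch of an AoS-to-SoA translation; the types are illustrative, not Nebulo’s internal representation:

```
// One "translatable structure" reorder: the same position data regrouped
// from AoS (XYZW, XYZW, ...) to SoA (XXX..., YYY..., ZZZ...), which lets the
// same data group feed differently shaped machine code (e.g., SIMD kernels).
#include <vector>

struct Vec4 { float x, y, z, w; };            // padded XYZW record (AoS)

struct PositionsSoA { std::vector<float> x, y, z; };

PositionsSoA to_soa(const std::vector<Vec4>& aos) {
    PositionsSoA out;
    out.x.reserve(aos.size());
    out.y.reserve(aos.size());
    out.z.reserve(aos.size());
    for (const Vec4& v : aos) {               // drop padding, regroup by axis
        out.x.push_back(v.x);
        out.y.push_back(v.y);
        out.z.push_back(v.z);
    }
    return out;
}
```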

Presence means that its name can be referenced as an Existence.

The Guard is the part of a Dz (Thing) Meaning Coordinate that has a single representation which determines if any value associated with Dz can be read, written, or even loaded/decompressed/decrypted.

The Guard uses traditional prime-factor cryptographic keys with an ever-growing suite of algorithms to choose from, which can be combined and run in parallel (e.g., different encryption algorithms applied at the atom level and permuted). We believe this aspect of our design makes the security of generated machine instructions, fixed code, and data post-quantum ready.

The Guard does not use ‘time of access’ unless it has been altered. Via procedural programming logic (regular Essence Meaning Coordinates), the Guard supports many different ‘users’ with varying access types and conditions.

The Guard generally has little runtime overhead, as it is part of scheduling a ‘task’ or fulfilling a request inside a task. Unlike traditional Object protection, such as public/private C++ declarations or C89 symbolic-name scope-obscurity, it is designed to be changed in real-time, to support multiple contexts, and to avoid the race conditions, stalls, and slowdowns of traditional Actor message-passing queues.

Our Nebulo system for managing Meaning Coordinates internally operates as a distributed job system. All accesses/changes/synchronizations of values are handled via scheduling instead of with locks or traditional Actor Messages, eliminating the latency that locks introduce in other systems. Our Dz model separates the request to load/read/write into ‘who & when’ first, then injects an authorized task into the nearest/most-relevant job pool.
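
A hedged C++ sketch of that scheduling model, with all names (Guard, AccessReply, JobPool) invented for illustration: a request resolves ‘who & when’ through the Guard, and only authorized work enters the job pool:

```
// Instead of taking a lock, a request is checked by the Guard and either
// becomes a scheduled job or receives a "not allowed" / "try later" reply.
#include <cstdint>
#include <functional>
#include <queue>

enum class AccessReply { Allowed, NotAllowed, RetryLater };

struct Request { uint64_t who; uint64_t when; bool write; };

struct JobPool { std::queue<std::function<void()>> jobs; };

struct Guard {
    uint64_t owner = 0;
    AccessReply check(const Request& r) const {
        // Stand-in rule; the real Guard supports many users, access types,
        // and conditions via procedural Meaning Coordinate logic.
        if (r.who != owner && r.write) return AccessReply::NotAllowed;
        return AccessReply::Allowed;
    }
};

// 'who & when' are resolved first; only authorized work enters the pool.
AccessReply submit(Guard& g, JobPool& pool, Request r,
                   std::function<void()> task) {
    AccessReply reply = g.check(r);
    if (reply == AccessReply::Allowed) pool.jobs.push(std::move(task));
    return reply;
}
```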

The second piece of Dz is ‘Synchronization’, which handles returning the correct value for the time accessed, or making allowed changes.

  • ‘Sync‘ allows us to work with a Clone of Dz values in whatever job we’re completing, such as a map/reduce query, graphical render, or conversion.
  • ‘Clones‘ are only the Dz data needed, such as an object’s speed, age, and GPS coordinates, and are translated (copy-converted) to a processor’s local memory when possible, such as PTX VRAM for an Nvidia GPU, an x86 cache line for a CPU socket, etc.
  • ‘Sync‘ handles coalescing writes, such as a series of location changes.
  • Dz are structured as a hash-indexed, compressed tree of ‘Ideas’, which are our ‘flexible data structures‘. Each hash-indexed ‘Idea‘ that a Dz has follows the usual ‘has-a’ model of named properties.
  • Any Idea that exists can be referenced in an Aptiv.
  • Dz properties such as ‘physical-presence’, ‘bank-account’, ‘image-logo’, or ‘music-preferences’, have a memory address, generally in the same memory page, where we’ll find a compact b-tree of access-times and unique-data-locations.
  • Each node in that tree represents a time-recorded change, or a compressed spline of progressive changes, or deltas-only (depending on which is smallest for the structure and limits on too many deltas, etc.).
  • Nodes can be unique to that Dz, shared by all Dz in a small group, or inherited. This allows us to create large numbers of Dz entities, such as trees in a computer-generated forest, with only a few bytes per Dz initially.
  • Then, as unique or group modifications are made, those individual changes are saved but no other data is kept. This allows us to track which changes were made, by whom, when, etc., for debugging/understanding, as well as rewind/fast-forward across many different variations or ‘Dz forks‘.
  • Note that our ‘access-time’ values are 2D, meaning we have an actual ‘time’ value and a ‘variation’ or fork value in cases where the same Dz exists in different variations simultaneously, which is required to have things thinking about things and being able to modify them, see how they react, etc. (see the sketch after this list).
  • Our goal is to support distributed simulations and computation in a scenario where processors might fail, and tasks might be migrated to other processors in real-time.
  • Our ‘Object‘ model, for ‘Dz‘ things, separates access from synchronization-history.
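
A minimal C++ sketch of the 2D access-time idea described above, assuming a simple ordered map in place of the compact b-tree; names are illustrative:

```
// Each Dz property keeps an ordered history keyed by (time, variation/fork),
// so a read can return the correct value for any time within a given fork,
// which also enables rewind/fast-forward of 'Dz forks'.
#include <cstdint>
#include <map>
#include <optional>

struct AccessKey {                       // 'time' plus 'variation' (fork)
    uint64_t time; uint32_t variation;
    bool operator<(const AccessKey& o) const {
        return variation != o.variation ? variation < o.variation
                                        : time < o.time;
    }
};

struct PropertyTimeline {
    std::map<AccessKey, double> changes; // node = time-recorded change

    void remember(AccessKey k, double v) { changes[k] = v; }

    // Latest change at or before 'k' within the same fork, if any.
    std::optional<double> value_at(AccessKey k) const {
        auto it = changes.upper_bound(k);
        if (it == changes.begin()) return std::nullopt;
        --it;
        if (it->first.variation != k.variation) return std::nullopt;
        return it->second;
    }
};
```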

Relationships between objects can be traditional tuples, as in logic-programming languages like Prolog, such as ‘Abraham, Isaac, father-son’, or more complicated sets, where instead of creating a singular yes/no Essence Idea for ‘son’, you have a richer data structure with more fields, such as ‘genetic relation’.

In all cases, however, the important aspect is the ability to translate ‘son’, ‘genetic relation’, and other versions of ‘child’ between each other. This design choice has been central to everything we’ve built: it encourages users to reuse other people’s ideas, or to create their own unique ideas while still reusing other behaviors, or in this example, ‘inferences’.

Ontological Structure – “As-a” or ‘is-like’ ends up more important than ‘is-a’ relationships. You might think of this as multiple, simultaneous ontologies allowing ‘everyone to create/instruct and reuse/purchase behaviors/data from others’.

Storing Relationships – The staple of relationships, the ‘tuple’, is usually stored as three unique IDs in a ‘belief-type-box’. This can be queried in the usual SQL/Lisp-style approaches.
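
A small C++ sketch of such a tuple store, using strings where Nebulo would use unique IDs; ‘BeliefBox’ and the wildcard query are illustrative:

```
// Three-part relationship tuples with a simple pattern query, where an empty
// field acts as a wildcard (Prolog/SQL style).
#include <string>
#include <vector>

struct Tuple { std::string subject, object, relation; };

struct BeliefBox {
    std::vector<Tuple> tuples;

    // e.g. find({"", "", "father-son"}) lists all father-son pairs.
    std::vector<Tuple> find(const Tuple& pattern) const {
        std::vector<Tuple> out;
        for (const Tuple& t : tuples)
            if ((pattern.subject.empty()  || pattern.subject  == t.subject) &&
                (pattern.object.empty()   || pattern.object   == t.object) &&
                (pattern.relation.empty() || pattern.relation == t.relation))
                out.push_back(t);
        return out;
    }
};
// Usage: box.tuples.push_back({"Abraham", "Isaac", "father-son"});
```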

Change Awareness – The other part of relationships that we support is ‘awareness’ of change. Inside each ‘Idea‘, we allow any has-a property, such as age, weight, volume, density, etc., to be stored as a value, a behavior, or both, as in the case where ‘changing density’ needs to update weight.

Formula Graphs – Relationships can be built, as a formula graph of logic/math/processing operators (as Meaning Coordinates), to express interdependence inside of an ‘Idea’.

Hash Trees – Relationships can also be represented between instantiated Dz, so if a Screen Dz switches to an alert, it can notify its Movie-Player Dz to pause. That notification system is designed to be compact and efficient as a hash tree of observer Dz, the ‘information-unit’ that changed, and the context (access/time-variant). As described earlier, this is passed to the Dz’s Guard and then becomes a job.
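
A hedged C++ sketch of that notification path, using a hash map in place of the compact hash tree; all names are illustrative:

```
// Observers are registered per information-unit; a change becomes a
// notification job routed through each observer Dz's Guard.
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

using DzId = std::string;

struct Notifier {
    // information-unit that changed -> observer Dz to notify
    std::unordered_map<std::string, std::vector<DzId>> observers;
    std::function<void(const DzId&, const std::string&)> enqueue_job;

    void changed(const std::string& unit) {
        auto it = observers.find(unit);
        if (it == observers.end() || !enqueue_job) return;
        for (const DzId& dz : it->second)   // each notification becomes a job
            enqueue_job(dz, unit);
    }
};
// Usage: a Screen Dz's alert marks "screen-mode" changed, and a Movie-Player
// Dz registered on "screen-mode" receives a pause job via its Guard.
```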

Internally, everything is stored as a binary stream beginning and ending with Meaning Coordinates that annotate which ‘Idea’ it is and what format it is in. This allows us to carry several ‘Clones’ of the same data in memory pages (both Read-only and Read-write are supported).

Booting Essence – To boot the system and generate the Essence startup files, we have a “Generate_Origin” program, written in C, that fills in all the required ideas/data into a file used to run an Essence package (as an App, an OS, an Emulator, a VM, etc.).

Grok-Units – To express data as a user, we have a method to associate Natural Language terms (spoken via speech-recognition or typed) with known “verb, subject, object, modifier” clauses, which we call a ‘Grok-Unit®‘.

  • We handle misspellings, reduce/remap synonyms, map words to Dz and their Idea properties to find ‘valid’ combinations and weight them based on proximity of match, recent history, and other factors to sort most-likely matches first.
  • A user might type “show me a cloud, take a picture, add the words ‘have a nice day’, email it to my wife” and it will match all “Grok-Units” that fit each. The way this works is to dialog with the user, in the vein of ‘here’s what I’m understanding from your words’. The choices presented are always valid machine instructions.
  • No possibility of syntax errors, referencing invalid data, misused/mismatched types, etc. The user can select an interpretation and only choose among other ‘valid’ choices. It’s interactive, but it keeps the user experience focused on their words, their expression of what they need. A sketch of this matching-and-ranking step follows.
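
Below is a hedged C++ sketch of that matching-and-ranking step, with a toy scoring rule standing in for the real proximity/history weighting; all names are illustrative:

```
// Each Grok-Unit is a verb/subject/object/modifier template; user words are
// scored against known templates so only valid interpretations are offered,
// most-likely matches first. (Slots trimmed to verb/subject for brevity.)
#include <algorithm>
#include <string>
#include <vector>

struct GrokUnit { std::string verb, subject; };

struct Match { GrokUnit unit; int score; };

std::vector<Match> rank(const std::vector<std::string>& words,
                        const std::vector<GrokUnit>& known) {
    std::vector<Match> out;
    for (const GrokUnit& g : known) {
        int score = 0;                       // count matched template slots
        for (const std::string& w : words) {
            if (w == g.verb)    score += 2;  // toy weighting: verbs count more
            if (w == g.subject) score += 1;
        }
        if (score > 0) out.push_back({g, score});
    }
    std::sort(out.begin(), out.end(),
              [](const Match& a, const Match& b) { return a.score > b.score; });
    return out;                              // most-likely matches first
}
```
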
Under the hood, in Meaning Coordinates, we have a meaning model of computation and data representation that covers traditional procedural programming, so most programming paradigms are handled with known and reproducible behavior, such as if-then, loop, match-a-known-set, collection add/remove/find/sort-by, and create/destroy.

We have ‘Synergy®‘ to express rules/data as words and ‘Maven®‘ to edit them as a visual graph.

Order of events is visually segmented, as the generated Meaning Coordinates are meant to run as parallel as is possible/practical, and any dependencies are shown (change A, change B, *C is updated to be the average of A and B*, or “if A is an imaginary number, then C is updated to be an imaginary number too”, etc.).

This really helps for investigating behaviors and understanding why the computer did X. Side effects are the trickiest aspect of traditional coding, so we’ve made them an explicit, up-front atom of how we operate: because we must generate the machine instructions from the meaning-intent, we are able to determine dependency chains and side effects up front.

The unit of operation inside Essence is ‘work’, defined as a ‘Job’ (an array of Meaning Coordinates, or ‘Meanings’), a ‘Context‘ (data specific to that job/access/variant), and a ‘Situation‘, which handles a virtual-machine-style model of ‘Mind’ with a series of Topics (think of these like coding stack frames, but ordered like graph nodes so we can switch between them).
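
A minimal C++ sketch of this unit of work, assuming Meaning Coordinates are small integer codes; the field layouts are invented for illustration:

```
// A unit of 'work': a Job carries the Meanings, a Context carries
// job/access/variant-specific data, and a Situation holds the 'Mind' of
// Topics, ordered like graph nodes rather than a strict stack.
#include <cstdint>
#include <string>
#include <vector>

using MeaningCoordinate = uint8_t;           // one of 256 semantic units

struct Job     { std::vector<MeaningCoordinate> meanings; };
struct Context { uint64_t accessTime; uint32_t variant;
                 std::vector<uint8_t> data; };

struct Topic {                               // like a stack frame, but a node
    std::string subject, object, verb;
    std::vector<uint32_t> nextTopics;        // graph edges: switchable order
};

struct Situation { std::vector<Topic> topics; uint32_t currentTopic = 0; };

struct Work { Job job; Context context; Situation situation; };
```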

Thought Nodes – Each Topic has ‘thought nodes‘ that contain a Situation’s subject, object, verb, modifiers, etc.

Algorithm Selection – Machine instructions are generated from these by choosing an Algorithm variant, such as sort, based on matches for the ‘verb’ (as an operator, like multiply-accumulate, find-least, if (X), etc.). The ‘thought nodes‘ are simple Dz properties that refer to a piece of data, such as a single byte, a GiB buffer, or a sparse mapping of tuples.

Map/Reduce Operators – On actual computation (load/read/write/sync), the Dz’s has-a property is accessed. Facts are generally modeled as queries, returning a yes/no, a certainty percentage, or a list of elements, and are done with map/reduce operators in Essence Meaning Coordinates.

Essence aims to enable users to create, modify, and trade behaviors and data in a marketplace. It’s about making code behavior a commodity the way data (photos, videos, text, etc.) already is. That requires that any given ‘Idea’ can attempt to translate itself into another, which might go poorly without human feedback or where there are no shared examples. But it makes translation possible in real-time, unlike existing coding methods, which require engineering talent and refactoring.

Objects can be shared with all ‘Ideas’ as translatable data structures, and mapped to various ‘new worlds/open worlds’ as best they can fit. Thinking of this within the context of the metaverse idea of virtual reality, augmented reality, and mixed reality worlds, translatable data structures allow many metaverses to be efficiently constructed and interacted with.

Here we’ll describe how our system for expressing and storing meanings is used to form relationships using a semantic periodic chart for computing.

Meaning Coordinates

Meaning Coordinates (see the 256 semantic Meaning Coordinates here) are a way of representing meanings, and of implementing a method that acts upon them in real-time, for computing. The power of WantWare lies in the ability to map out the components of a machine in the form of its hardware assets, services, users, data, and their possible behaviors. Meaning Coordinates, like DNA, carry instructions. In the case of Meaning Coordinates, the instructions are specifically for computing meaning (describing what things are and how they behave).

Expressions of meaning and significance are built using a base set of semantic units, which distills nearly any way to express an idea down into its most atomic pieces.

Internally, Meaning Coordinates are defined with the traditional Is-a (identity/type), Has-a (possession), and As-a (specific translation) relationships, as well as Logical/Fuzzy/Math operators to investigate.

Here’s a sample Meaning Coordinates Snippet: JoSmxTryChoHa

The above Meaning Coordinates snippet translates into: ‘Given a collection of events, locate the desired event within the collection and extract its contents. Then, create a new cloned event from the extracted data’.

Change the sequence of the Meaning Coordinates and the meaning of the snippet changes, as do the resultant behaviors the machine will perform to accomplish what the snippet expresses. That’s why we compare it to DNA sequencing: simple changes in the ordering of the nucleobases in DNA can cause drastic changes in an organism’s biology, and the same is true for reordering Meaning Coordinates.
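
As an illustration of how a snippet like ‘JoSmxTryChoHa’ could be taken apart, here is a small C++ sketch that splits a sequence into its mnemonics; the splitting rule (each mnemonic starts with a capital letter) is an assumption for demonstration, not the documented encoding:

```
// Split a Meaning Coordinate sequence into mnemonics, assuming each
// mnemonic begins with an uppercase letter:
// "JoSmxTryChoHa" -> Jo, Smx, Try, Cho, Ha
#include <cctype>
#include <string>
#include <vector>

std::vector<std::string> split_mnemonics(const std::string& s) {
    std::vector<std::string> out;
    for (char c : s) {
        if (std::isupper(static_cast<unsigned char>(c)) || out.empty())
            out.emplace_back(1, c);          // a new mnemonic starts here
        else
            out.back() += c;                 // continue current mnemonic
    }
    return out;
}
```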

No Need to Learn Meaning Coordinates to Use Them

Stored Meaning Units, plain-English representations, are mapped to Meaning Coordinates. The atomic nature of Meaning Coordinates supports re-expressing Meaning Units in any language, human or machine. This includes regional variants like British or Appalachian English; completely unrelated languages like Japanese; or even fictional or hypothetical tongues like Elvish, Klingon, or a language communicated by visitors from another galaxy. Meaning Coordinates are:

  • remapped to any of the potential translations, including individualized slang and interpretation of ideas;
  • re-expressed to match an individual or group’s style of expressing ideas, in real-time;
  • highly compressed, so permuted text takes up little space; granular differences are not duplicated but use a highly efficient hashing approach.

Now let’s put the previously discussed features into the context of data storage and access.

The Nebulo approach to data management provides an opportunity to move away from the legacy approach of files and databases, with their fixed and limited formats for locating and referencing information. Moving data between systems is often possible, but typically with varying degrees of compromise, because there is generally no easy way to take both data and code behaviors from one system to another. Nebulo changes that by enabling both data and code behaviors to become like Lego bricks.

Aptiv Types

Data storage and access are separate from the data structure of the Dz (Thing) itself, for a variety of reasons. We have 8 types of Aptivs based on how we can scale data (lossy-signal, exact-record, rolling-queue history, belief-certainty, thoughts, etc.).

Granular security/access issues are handled by each Aptiv, so whether you can edit, read only, see the name only, or ‘nothing/no-awareness’ is determined from within each Box, not by a central (potentially hackable) system.

All information for each Dz is stored in these boxes, which are organized as a container-of-containers with fixed depth and size constraints.
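
A minimal C++ sketch of that arrangement, with the scale types taken from the text and everything else (Box, Access, field names) invented for illustration:

```
// Aptivs as a container-of-containers, where each Box decides a user's
// rights for itself rather than consulting a central authority.
#include <cstdint>
#include <string>
#include <vector>

enum class Scale { LossySignal, ExactRecord, RollingQueue, BeliefCertainty,
                   Thoughts /* ...8 types in total */ };

enum class Access { Edit, ReadOnly, NameOnly, NoAwareness };

struct Box {
    Scale scale;
    std::string name;
    std::vector<uint8_t> payload;
    uint64_t ownerId = 0;
    // Each Box answers for itself; a stand-in rule for per-Box policy.
    Access access_for(uint64_t userId) const {
        return userId == ownerId ? Access::Edit : Access::NameOnly;
    }
};

struct Aptiv { std::vector<Box> boxes; };   // fixed depth/size in practice
```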

In real-time Simulations and Videogames as well as offline Movie-CGI and CAD rendering, there are a variety of graphical primitives used, such as triangle meshes, voxel surflets, CSG shapes via SDF fields, Point Clouds as Splats, polynomial surfaces, etc.

How an object is stored, edited, animated, or modified by physics is often different than how it is rendered, with conversions between graphical primitives being common to handle collision detection differently than global illumination.

We handle translations between different primitives natively and with worst-case coverage to avoid seams, geometric-holes, surface inversions, and other boundary-representation conflicts.

Our primary goal is the ability to provide levels of detail from a single lit point to a highly detailed, procedurally enhanced model.

The formula-generation process keeps all data types as ‘potential fields’, which group data by level of detail. Imagine a wavelet or mipmap hierarchy for images, but replace the uniform structure with discontinuous blobs and directions. Extend that to 3D or 4D for real-world, moving object models, and we have a system that can scale from a fingertip to a valley to planets to galaxies and back down again.

The most critical design choice was not a specific rendering algorithm or 3D model format, but how we scale scenes from a few highly detailed objects to a scale where they are no longer directly visible yet could still contribute to the larger visual. We decided early on that scaling was the singular problem preventing ‘run-anywhere’ experiences, whether graphics are limited by hardware power, available memory/persistent storage, individual model complexity, scene complexity, or observer range. Granted, many scenarios will display poorly or obscure desired details, such as a vast crowd of unique faces being scaled down to a few upsampled pixels, but the scene will still display and remain interactive, or take longer but display at high quality.

So graphical Objects can be imported and exported as many traditional primitives, and can be rendered with many styles, resolutions, scene techniques, and future methods using sequences of Meaning Coordinates or packages of native code.

We currently manipulate and run 2D images of color, intensity, depth, and distance fields (pictures/movies/depth cameras), and 3D triangle meshes, voxels, point clouds, and SDFs (conjured scenes).

How Nebulo Compares to C++/Java

While there are parallels between Nebulo and Object-Oriented programming designs, we’d describe our system as Semantically Intelligent instead of object-oriented. The ‘description’ of what the user wants in a behavior and associated data is the fundamental model.

Idea (Jy)

  • Data structures are re-ordered, reformatted and translated into other Ideas. Ideas do not have the structure-alignment, static typing, and method model that typical OO Classes do.
  • Using Is-a and As-a, we can model multiple inheritance and dynamic conversions, but all function association is handled separately, akin to ‘protocols’ or ‘interfaces’ in many OO languages (see the sketch below).
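
A small C++ sketch contrasting ‘As-a’ translation with inheritance: one Idea is translated on demand into the shape another consumer expects; both Idea layouts are invented for illustration:

```
// 'As-a': a specific translation between Ideas, generated rather than
// inherited. No fixed class hierarchy ties the two layouts together.
#include <string>

struct PersonIdea  { std::string name; int ageYears; };
struct PatientIdea { std::string displayName; int ageMonths; };

PatientIdea as_patient(const PersonIdea& p) {
    return { p.name, p.ageYears * 12 };      // reformat fields, convert units
}
```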

Thing (Dz)

  • Thing design is not a Class-instantiated Object, since it is designed around access control via the Guard’s behavior/data. Unlike Objects, Things have their data structures split across various buffers and potentially cloned for multiple processor uses. Things do not have the usual Getter/Setter model for a single record, but a universal time/variation key used to solve for when the get/set can occur and whether it is allowed.
  • Things (Dz) are better related to OO ideas as a Collection of Objects, some owned ( unique ), some shared ( by group Thing ), and some default ( inherited ).
  • Things’ primary role is to handle if and when a load, read, write, or sync needs to occur.
  • Things can message other Things using the same Behavior calls they might use on themselves. The resolution is transparent, but it does invoke that Thing’s Guard and may return ‘not-allowed’, ‘not-now, try-in-8ms’, or other replies based on scheduling read/write access and estimates.

ID

  • All information, whether a scalar, vector, or large dynamic collection, has a unique ID, similar to a URL, and provides for level of detail access to any property or larger container.

Dz

  • All information is Guarded by a Thing (Dz). The owner, author, editor, frequent user, etc. can and likely are different Things (Dz).
  • The takeaway is that all information has a Thing (Dz) which guards access.

Behaviors

  • Actual Functions, which we call a ‘Behavior‘ if written in Meaning Coordinates as Meaning Units, and ‘Code‘ once the Meaning Coordinates have been mapped into Qcode (cacheable nuggets of auto-generated adaptive computer code) and subsequently into machine or emulated instructions. Behaviors use Context ( local data from one or more Things ) and Situation ( work-specific data, such as Subject/Object/Where/When/etc. ).

Aptivs

  • Packages or Modules in most OO models fit our notion of Aptivs, along with additional digital content such as media, big data tables, and localization. All data is stored in Aptivs, which are built of collections of those data elements. The rules for how those collections behave ( FILO stack, FIFO queue, arbitrary set/bag, sorted by number/lexical/spatial/etc. ) are customized for each need. It’s like having all similar data stored in Relational Database Tables, but the tables might be uniquely arranged/accessed, such as signals.

Dictionary

  • Essence has no fixed set of keywords, just a dictionary with a bunch of synonyms/mappings-to-meanings.

Grok-Units

  • Nebulo Behaviors accept 1 or more Grok-Units (units of meanings/beliefs), which are the input/output template for a Behavior that specifies one or more ‘to read’ or ‘to write’ Ideas (Jy).

Ideas + Behaviors @Startup

  • Nebulo starts with over 500 base Ideas and close to 1,000 initial Behaviors, many of which are art styles or procedural generators for tables.

Data & Resources State Rules

  • Collections separate Creation ( allocate memory, flag as uninitialized ) from Add ( init, make available ), and likewise Remove ( deactivate, make available for Creation ) from Destroy ( allow space to be reclaimed ).
  • Issues of what a symbol maps to, order of precedence, and range of results are always explicitly defined for any Behavior.
  • Nebulo doesn’t define a Null as in traditional OO languages. Instead, there are various ‘Not Available’, ‘Not Defined’, and ‘Not Logical’ states, which are paired with a change estimate in time or percent done, such as Not Available carrying a percent available and an estimated time to completion (see the sketch after this list).
  • Order and Constraints are explicit in every Behavior and every piece of Work done.
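
A hedged C++ sketch of that no-Null rule, assuming a tagged result type; the names and fields are illustrative:

```
// A read yields either a value or a typed non-value ( Not Available /
// Not Defined / Not Logical ), with Not Available carrying a percent done
// and a time estimate instead of a bare null.
#include <cstdint>
#include <variant>

struct NotAvailable { float percentDone; uint64_t etaMs; };
struct NotDefined   {};
struct NotLogical   {};

template <typename T>
using Result = std::variant<T, NotAvailable, NotDefined, NotLogical>;

Result<double> read_speed(bool loaded) {
    if (!loaded) return NotAvailable{0.4f, 8};  // 40% ready, retry in ~8 ms
    return 88.0;
}
```

A caller receiving NotAvailable{0.4f, 8} knows the value is 40% ready and can schedule a retry rather than blocking on a lock.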

Additional comparison points that elaborate on how Nebulo differs from Object-Oriented (OO) paradigms:

  1. No Global Variables: All values have multiple levels of discoverability/visibility/modification, along with a ‘when’ indicating when a reply will be available.
  2. Explicit Scheduling: Explicit Scheduling of when persistent values are changed prevents ambiguity on side-effects. Access time and a permutation number keeps parallelism valid.
  3. Persistent Values: Persistent values ‘considered’ as thoughts can pass through a variety of code ( topics/actions that may modify the value locally, as in Functional Programming paradigms ). Requiring an explicit ‘Remember’ for any variable change may seem cumbersome, but it is easy to automate and later analyze/understand in a parallel environment.
  4. Parallel Processor Distribution: Note that ‘parallel’ may be per CPU ( same die w/ shared caches or different socket w/ different caches ), per Processor ( CPU & 3 GPUs w/ Clones of same data synchronized ), or per Machine ( LAN, WAN, even slow exchanges like Cloud-sync/SMTP for exchanges, etc. ). Same principles apply as the latency of changes increases ( going from multi-core chip to WAN ).
  5. Type Inference: Type Inference is handled via assignment and reduction of use cases, implementing a Hindley–Milner type system.
  6. Concurrency: Regarding co-routines in concurrency, all Behaviors are executed in parallel, whether as a separate Cooperative Fiber, a Preemptive Thread, an OS Process, or a different Machine.
  7. Channelization: Regarding channels, each Thing (Dz) can have 1 or more MessageBoxes (Smv) for communicating with other Things, allowing user-level influence on how to handle responses to message overflow, unwanted messages, loss of response, etc. The protocol is a context clause or ‘GrokUnit‘ (Sx), which is one or more Ideas (Jy) expressed as the Grok Unit requires.
  8. Suspension and Resumption: All Behaviors can be suspended and resumed with the same state, if desired, like a Knuth coroutine, or can simply complete and be invoked again with fresh state, like a classic subroutine.
  9. Translatable Structures: Our Idea (Jy) translated-data-structure model allows explicit Is-a and As-a relationships, giving it a ‘manifest’ type system that could also be described as a ‘structural’ type system. However, the ‘As-a’ aspect of translation allows a virtual duck-typing or runtime-bound-typing approach, in the sense that some Ideas (Jy) can be adapted to provide the needed structure members on the fly or via cache.

Elevate Your Business with WantWare

Contact us to learn more about our plans to transform the cloud and beyond.