A breakthrough in big data processing

Nebulo® uses Translatable Data Structures to reorder stored data and runtime-processed data streams (for example, lossily compacting 0…1 floats, or reordering AoS to SoA, as in converting padded XYZw, XYZw, XYZw arrays to XXX, YYY, ZZZ, etc.).
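The two translations named above can be illustrated with a minimal sketch. The function names here are illustrative, not Nebulo®'s actual API:

```python
def aos_to_soa(points):
    """Reorder an array-of-structures [(x, y, z), ...] into a
    structure-of-arrays ([x...], [y...], [z...])."""
    xs, ys, zs = zip(*points)
    return list(xs), list(ys), list(zs)

def compact_unit_float(f):
    """Lossily compact a float in 0..1 into a single byte (256 levels)."""
    return round(f * 255)

def expand_unit_float(b):
    """Recover the approximate 0..1 float from its compacted byte."""
    return b / 255

# XYZ, XYZ, XYZ interleaved tuples become three contiguous arrays.
points = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
xs, ys, zs = aos_to_soa(points)
```

Either translation changes only the layout or precision of the data, not its meaning, which is what lets the same data group be handed to different code.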

Reordering lets us map the same data groups to many different chunks of code, and it lets us profile and regenerate the actual implementation, its algorithmic units (conventional or AI), and the final machine instructions on the fly.

Self-optimizing for real world conditions

Profile-and-regenerate allows Nebulo® to self-optimize for the computer's current conditions, such as heavy hard-drive traffic from another process, or a low-battery mode toggling on and off.
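An illustrative sketch of the idea, not Nebulo®'s actual mechanism: several interchangeable implementations of the same algorithmic unit are timed under the machine's current conditions, and the dispatch is regenerated to use whichever is currently fastest.

```python
import time

def sum_loop(data):
    # Candidate implementation A: explicit loop.
    total = 0.0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    # Candidate implementation B: built-in reduction.
    return sum(data)

CANDIDATES = [sum_loop, sum_builtin]

def profile_and_select(sample):
    """Time each candidate on a small sample and return the winner."""
    best, best_t = None, float("inf")
    for fn in CANDIDATES:
        t0 = time.perf_counter()
        fn(sample)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = fn, elapsed
    return best

# Re-running this periodically lets the active implementation track
# changing conditions (I/O pressure, power mode, and so on).
active = profile_and_select(list(range(10_000)))
```

The same pattern extends down to regenerating machine instructions; only the granularity of the candidates changes.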

The Nebulo® design of “Translatable Data Structures, Meaning Implementations, and Code Regeneration” enables us to mix and match code behavior from unrelated domains or even forks of the same project.

A new way to provide trusted security

Other approaches depend heavily on the operating system and external security solutions to keep data and systems safe. With Nebulo®, every chunk of data has a Guard, which manages access, trust, and security, and provides a means to Synchronize and Coalesce changes so that the correct values are read at the correct time.
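A hypothetical sketch of a per-chunk Guard (the class and method names are assumptions for illustration): every read and write goes through the Guard, which checks trust and coalesces pending writes before any reader sees the chunk.

```python
class Guard:
    """Mediates all access to one chunk of data."""

    def __init__(self, data, trusted):
        self._data = data
        self._trusted = set(trusted)
        self._pending = []          # writes awaiting coalescing

    def write(self, user, key, value):
        if user not in self._trusted:
            raise PermissionError(f"{user} may not write this chunk")
        self._pending.append((key, value))

    def read(self, user, key):
        if user not in self._trusted:
            raise PermissionError(f"{user} may not read this chunk")
        self._coalesce()            # readers always see synchronized values
        return self._data[key]

    def _coalesce(self):
        # Apply pending writes in order, then clear them.
        for key, value in self._pending:
            self._data[key] = value
        self._pending.clear()
```

Because access, trust, and synchronization live in one object per chunk, the data does not rely on the operating system for protection.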

Best solution at low connection speeds

Nebulo® uses traditional prime-factor cryptographic keys with an ever-growing suite of algorithms to choose from, which are combined and executed in parallel.
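The combine-and-run-in-parallel idea can be sketched with standard digest algorithms standing in for the actual suite (the suite contents and the XOR combiner here are illustrative assumptions, not Nebulo®'s cipher design):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Assumed example suite; the real suite is described as ever-growing.
SUITE = ["sha256", "sha3_256", "blake2s"]

def combined_digest(payload: bytes) -> bytes:
    """Run every algorithm in the suite in parallel and XOR the results
    into a single combined value."""
    def one(name):
        return hashlib.new(name, payload).digest()

    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(one, SUITE))

    out = bytearray(len(digests[0]))
    for d in digests:
        for i, b in enumerate(d):
            out[i] ^= b
    return bytes(out)
```

Growing the suite means appending a name to the list; the parallel execution and combination are unchanged.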

Nebulo® does not use ‘time of access’ unless the value has been altered. Via procedural programming logic, it supports many different ‘users’ with varying access types and conditions.

It generally has little runtime overhead, as it is part of scheduling a ‘task’ or fulfilling a request inside a task.

A new way to provide trusted security

Unlike traditional Object protection, such as C++ public/private declarations or C89 symbolic-name scope obscurity, Nebulo® is designed for tasks to be changed in real time, to support multiple contexts, and to avoid the race conditions, stalls, and slowdowns of traditional actor message-passing queues.

All accesses, changes, and synchronizations of values are handled via scheduling instead of with locks or traditional actor messages.

Our model first resolves a load/read/write request into ‘who & when’, then injects an authorized task into the nearest/most-relevant job pool.
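A minimal sketch of that flow, with illustrative names (the authorization rule and pool structure here are assumptions): the request is checked for ‘who’, its ‘when’ comes from queue position, and a scheduler drains the pool in order so accesses to the same value never race.

```python
from collections import deque

class JobPool:
    """A job pool drained in FIFO order by a scheduler."""

    def __init__(self):
        self._queue = deque()

    def inject(self, task):
        self._queue.append(task)

    def drain(self):
        # Tasks run one at a time, in arrival order: no locks needed.
        while self._queue:
            self._queue.popleft()()

store = {"score": 0}
pool = JobPool()

def request_write(user, key, value):
    # 'who': check the requester; 'when': ordering from queue position.
    if user != "trusted_user":      # assumed authorization rule
        raise PermissionError(user)
    pool.inject(lambda: store.__setitem__(key, value))

request_write("trusted_user", "score", 42)
pool.drain()
```

Because the write became an ordered task rather than a locked mutation, concurrent requests serialize naturally at the pool instead of contending on the data.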