Wantware Technical Q&As

Before providing questions and answers, we begin with some foundational ideas that apply to any scenario of ‘delivering software using wantware’:

  1. Essence Architecture Basics
  2. Sliding-Scale-of-Use/No-Lock-In-Export
  3. Level of Detail Assumptions

Please note that every answer here refers to parts of Essence that are (a) working well now, (b) under active development/bug-fixes, or (c) partially done with foundation support but yet to demo, such as an Android app (we aren’t past a minimal skeleton).

3 Powerful Essence Solutions

Essence Chameleon

A tool for ingesting and transforming text or code. Modify and export the transformed text or code, or run it as Essence®-powered software.

Essence Morpheus

A platform that allows Essence tools and applications to run on existing operating systems like MacOS, with Android, iOS, Linux and Windows versions coming soon.

Essence Noir

Run Essence as an appliance and create software without an operating system. Use Essence Chameleon and Morpheus to run software built for OSes.

Architecture Basics

We call our product ‘Essence’ and consider it a solution for software creation, maintenance, modification, and understanding. It is fundamentally a piece of software that transforms user input, whether sloppy natural language, GUI selections, or other ‘meaning-mappable’ expressions (such as programming code or VM instructions), and outputs a meaning-transformed result that either creates/updates digital files (Chameleon), runs as a software process on an existing OS (Morpheus), or runs directly as a unikernel for an appliance-like experience (Noir).  Essence can run as a command-line application, a multi-screen/multi-machine GUI (visualization/interface), or as a network service (serving HTML pages, XML replies, or compressed screen-sharing).

Essence-Chameleon: The simplest ‘result’ might be a series of text and media files & directories, if you use wantware to generate application code for your existing pipeline, such as exporting C# for .NET, JavaScript/CSS/HTML/GLSL for a webpage, or Swift/XUI/Xcode-project files for an iOS project. We call this case ‘Chameleon’ as it transforms ‘what you want’, what you specified as limits, and all the extra ‘unspecified things to fill in’, such as connections to OS services, APIs, & certificates.  While we’ve just described a software-application package result, it works well as a personal mind-map/notes export (I export daily to YAML, Apple Notes, & OneDrive as text files) or as entries in a database (I use it to track daily health data, with blood pressure, glucose, pulse, O2, etc. stored as a record).  Again, this result is simply input transformation plus ‘fill-in-missing-needs’ to produce application projects, text-file projects, or database entries.  At this point, you simply use today’s existing tools as-is, and wantware is a tool in your existing pipeline.

Essence-Morpheus: The easy ‘result’ runs your wantware input in realtime inside one or more processes (we run 1 on MacOS/iOS/Android but many on Windows/Linux to keep Essence-Powers, such as the WebKit browser, MySQL server, and other services, isolated). Since many wantware inputs can run together, whether isolated or sharing behavior/data, we call them Aptivs instead of Apps, and each is assigned a ‘channel’ to track all resources needed, whether memory space, bandwidth copies, or chip performance. We may scale resources up and down based on all needs (downsampling media, changing update rates on simulations, adjusting work ranges for all job pools, whether CPU threads, GPU workgroups, or other chip/task divisions). This approach abides by all OS requirements for scheduling and access to resources like USB hubs, media-codec drivers, or GPU/CPU write/executable memory.  Essence-Morpheus acts as a simulator and can be run within VMs or isolated containers such as Docker. This scenario lets us use “Essence-Powers”, a means to package existing software for use in wantware.  An E-Power maps a library, API, device commands, or other service into ‘meaning units’, such as a series of ‘templates’, like a ‘mad-lib’ for code/data, that can be used directly.  These E-Powers are built with existing DLL/dylib/etc. platform-specific code or platform-agnostic programming code (we have tons of C/C++ Powers & some Python/Rust/Lua/FORTH/Prolog/Lisp as tests so far). Using code in an E-Power requires compiling tools and a runtime Power (Python backend; Lua/Forth/Prolog interpreter) to cover memory-model assumptions and job management (pthreads? coroutines? etc.), all handled inside Essence.  The E-Power package handles versioning/changelog issues, license/rights, compression/encryption, and several hardening approaches which require unit tests as proofs (randomly selected X-inputs-give-Y-outputs).
We manage this through a set of 8 interface-families each having 8 domains for the types of claim/proof/report models.  There is nothing special about 64 categories of interface-claim/proof/report other than it’s a mandatory layer for connecting E-Powers into wantware.  We currently have nearly 1500 calls supported via E-Powers.
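The ‘unit tests as proofs’ idea above (randomly selected X-inputs-give-Y-outputs) can be sketched as a property check around a wrapped library call. This is a minimal illustration, not the actual E-Power API: the EPowerClaim class is invented, and zlib stands in for any library being packaged.

```python
import random
import zlib

class EPowerClaim:
    """Hypothetical sketch: a claim is a property that must hold for
    randomly synthesized inputs; passing N trials is the 'proof'."""
    def __init__(self, func, inverse, trials=100):
        self.func, self.inverse, self.trials = func, inverse, trials

    def prove(self, seed=0):
        rng = random.Random(seed)  # fixed seed -> reproducible proof run
        for _ in range(self.trials):
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 512)))
            # X-input must give Y-output: decompress(compress(x)) == x
            if self.inverse(self.func(data)) != data:
                return False
        return True

# Wrap an existing library (zlib here) as if packaging it for an E-Power.
claim = EPowerClaim(zlib.compress, zlib.decompress)
print(claim.prove())
```

A failed proof would mark the package as usable only in the ‘developer mode’ described later in this document.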

Essence-Noir: The streamlined ‘result’ runs wantware as a unikernel, without mode-switching, address translation, and the other performance/energy costs of modern kernels. Running natively in our unikernel uses a single scheduler per computer and manages resources directly (skipping typical OS gatekeepers and OS drivers) for efficiency and control.  This approach has no memory rings or page-access faults.  Security is handled entirely by Essence-Guards, which can be assigned to a single ‘record’ of data or to many boxes.  It’s as granular as the ‘name-locating’ system we use, allowing security to be isolated, tracked, and controlled for any collection of information. This scenario cannot use, or limits, many existing E-Powers, as its direct-to-the-metal model isn’t compatible with all of the existing OS issues (outside of emulation/containers); however, it also isn’t compatible with many of the existing security exploits, since it controls read, write, delete, and even knowing an Asset’s name (if you cannot know an Asset’s name, no wantware statement can reference it, and thus no access, metadata, or relationship can occur).

Each of these Essence products is at a different technology readiness level, all built from one source base.  The first is simply transformation of input data, the second runs one or more ‘App-Engines’ on your existing OS, and the last delivers an appliance approach to software generation.  Each has its pros and cons.

Sliding-Scale-Of-Use/No-Lock-In-Export

We designed wantware so you can try it without consequences, and use it while remaining able to freely migrate away at any time. Beyond ensuring ‘no lock-in’ doesn’t entrap clients with an investment, we believe it’s essential to attract others and to support changing needs.  Anything that can be imported, whether text, media, or recorded choices in a live session, can be exported back out.  While we can always dump everything made into an XML, JSON, or YAML database, or into coding languages within the limits of their runtime support, we can only export capabilities that exist elsewhere along with their limitations; for example, a SPIR-V shader won’t run on old GPUs or support more than the device’s register limit per workload/wavefront/etc.

Given we’re a small startup, our own technical work, documents, and data have always been at risk, and thus it’s always been important to be able to preserve and propagate what we’ve done into the future.  It is frustrating that code written in the 80s/90s won’t run anymore, sometimes not even in emulators, because it used particular accelerator add-in boards or drivers that no longer exist.  With no lock-in, what we have made and continue to make at MindAptiv has future value in all outcomes.

Thus, all the questions below about existing pipelines for front-ends, middleware, and back-ends can be answered immediately as ‘you can continue using everything you use today, along with some wantware benefits’.  Over time, you can try additional wantware ‘Aptivs’ in the pipeline, or try the OS-application (Morpheus) or appliance-application (Noir) editions built fully on wantware.

Level Of Detail Assumptions

Wantware isn’t a programming language or a natural language.  It’s an intermediate representation of ‘meaning’, as in ‘all the details that form what you want and the limits/requirements for it.’  For a single number, the meaning might be the actual value, upper and lower range limits, and the precision of its representation. In a domain such as sports, where a common number is a score or a tracked statistic, it might have a lower limit of 0 but no upper limit. In a domain such as chemistry, the number might have many requirements, such as enough precision that steps between extremely small or large values stay coherent.  In other domains, irrational versus rational representations are needed, which might force us to choose 128-bit integers instead of 32- or 64-bit IEEE floats. For a behavior, it might be ‘I want a list of names’ in simple cases, or ‘I want a 48-bit cuckoo hash table for UTF-8 strings with Unicode-normalized data and the Han charset, etc.’
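The ‘meaning of a number’ idea above can be sketched as a small record carrying the value plus its limits and precision. The NumberMeaning class and its fields are illustrative assumptions, not wantware’s actual representation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NumberMeaning:
    """Hypothetical sketch of a number's 'meaning': value plus limits/precision."""
    value: float
    lower: Optional[float] = None   # None = unbounded below
    upper: Optional[float] = None   # None = unbounded above
    step: Optional[float] = None    # required precision between representable values

    def admits(self, x):
        """Does x satisfy the limits/requirements of this meaning?"""
        if self.lower is not None and x < self.lower:
            return False
        if self.upper is not None and x > self.upper:
            return False
        if self.step is not None and round(x / self.step) * self.step != x:
            return False
        return True

# The sports example: a score has a lower limit of 0, no upper limit,
# and (here, as an added assumption) whole points only.
score = NumberMeaning(value=21, lower=0, step=1)
print(score.admits(3), score.admits(-1), score.admits(2.5))
```

A chemistry-domain meaning would instead tighten `step` (or swap the float for a wider representation) so small and large values stay coherent.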

The key idea is that we only work with ‘meanings’ we can represent in computing.  When a want is loosely specified, we’re able to choose different implementations for code snippets to link for a given chip, such as an AMD Threadripper, a custom RISC-V, or a proprietary Nvidia 3000x GPU, as well as different storage representations, offsets-to-location (reordering Array-of-Structures vs Structure-of-Arrays to reuse code), and compressions.  When it’s tightly specified, we’re limited to choosing within those limits, but at least we can monitor, query, and understand what is happening in our ‘behavior’.  Our goal was to automate many of the hard engineering tasks (which algorithm to implement, how much impact it has on existing data structures/pipelines, etc.) to reduce the cost of trying something new. We reduce friction, in some cases massively, but are still doing what engineers already do, with some parts automated, permuted, and optimized for desired outcomes.
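The Array-of-Structures vs Structure-of-Arrays reordering mentioned above can be sketched in a few lines: the same ‘meaning’ (a list of records) admits either layout, and because the reordering is lossless, a system can pick per chip without changing what the data means. The helper names here are invented for illustration:

```python
def to_soa(aos):
    """Reorder Array-of-Structures into Structure-of-Arrays."""
    if not aos:
        return {}
    return {key: [rec[key] for rec in aos] for key in aos[0]}

def to_aos(soa):
    """Reorder Structure-of-Arrays back into Array-of-Structures."""
    keys = list(soa)
    return [dict(zip(keys, vals)) for vals in zip(*soa.values())]

# One 'meaning', two layouts: a GPU might prefer the SoA form for
# coalesced access, while record-at-a-time code prefers AoS.
particles = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]
soa = to_soa(particles)
assert to_aos(soa) == particles  # lossless reordering, so code can be reused
print(soa)
```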

Behind the scenes, the wantware cycle is: ‘wants expressed by a user or file/stream’ are processed with a context that prioritizes ‘best guesses’ for the closest matches for each term and each ‘statement’, which might be a few terms or a long-winded compound sentence.  Terms are matched to ‘InfoSigns’, which are a means to locate a semantic resource, an ‘Asset’, whether code or data or both, in a particular timeline (variety/forks) and version (changelog iteration). InfoSigns determine what we are referencing and what we can change or do with that piece of Info.  All ‘security’, what we call an Essence-Guard, is embedded *with* its Assets. It’s part of a decrypt, decompress, access, change, store, re-compress, re-encrypt cycle. For many Assets, such as the pixels in a frame of your webcam, few of these apply; for others, such as your contact info, they all have extensive and understandable stages.  It’s up to the user and their wants.  The same approach governs ‘chronicling which InfoSigns are changed, by whom and when, along with a permission ledger’, or not storing any changes at all (as with a webcam’s data, which changes continuously).  These choices are up to the user, and always changeable if the user owns the ability to change them.

We store the words or GUI choices you chose, both to show them back to you in your own terms and to better guess future meanings from your past confirmations of meaning. However, we do not use your words or GUI choices for anything other than storing them as ‘your context’. We use the meanings you confirmed; for example, “increase the account by the deposit minus the standard fee” means “On Event, Store X as X + Y – Z where X = some reference, Y = some other reference, etc.”  That single meaning will likely have many additional statements that address ownership, concurrency, format, chip types allowed to run this, ordering, etc.  It is *only* the meaning statements that are mapped back to code, through a series of generate-live (short math/logic/branch needs), link-existing-snippets (long container find/remove operations), or bind-to-E-Powers (add a prolog/epilog to properly call and handle the results of a call to an external service) steps. We store ‘representable meanings’ for your wantware, which allows us to present it in great detail or highly summarized, depending on your level of interest.  We can re-express the same meanings in other contexts, i.e. with other people’s words for that context, including your own from years ago, to aid in knowing ‘what was I thinking??’.  There is value in cross-language support and in different programmers/wantware users being able to better understand what someone meant, but the core value is being able to fluidly remap such meaning back to code & data permutations to better fit a goal, such as faster, less space, minimal bandwidth, or least battery.
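A confirmed meaning statement like the deposit example above can be sketched as plain data that is re-expressible in anyone’s words and mappable back to code. The dictionary shape, slot names, and evaluate helper are all invented for illustration:

```python
# Hypothetical sketch: "On Event, Store X as X + Y - Z" held as data.
statement = {
    "on": "deposit-event",
    "store": "account",                             # X = some reference
    "as": ("account", "+", "deposit", "-", "fee"),  # X + Y - Z
}

def evaluate(stmt, refs):
    """Map the meaning statement back to arithmetic over named references."""
    x, op1, y, op2, z = stmt["as"]
    value = refs[x] + refs[y] if op1 == "+" else refs[x] - refs[y]
    value = value - refs[z] if op2 == "-" else value + refs[z]
    refs[stmt["store"]] = value
    return refs

refs = {"account": 100, "deposit": 50, "fee": 3}
print(evaluate(statement, refs)["account"])  # -> 147
```

The point is that only the structured meaning is executed; the original words (“standard fee”, etc.) stay stored as context and can be swapped for anyone’s terminology.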

Technical Questions & Answers

BUILD PIPELINE QUESTIONS

Q1: I am most concerned about security and vulnerability scanning that would take place during a normal software build/SDLC process. Generally, those things are hard requirements to operate in a commercial or government space. Can that be explained in any additional detail?

A1: If you only use wantware for generating source code or an application that gets dropped into your normal process, then everything proceeds as you’d expect.  If you wish to use Essence to run your products, you can incorporate many pipeline tools as E-Powers, or replace them in wantware for better transparency, control, and improvements for your particular needs.  This assumes you are running Essence in realtime to edit, whether alone or collaboratively, or just to provide it to clients with limited control.

Q2: If I think about a Jenkins build pipeline for example: I would pull source code from version control first, how is version control handled if this thing is just changing that underlying source whenever I say something?

A2: You can use existing version control systems for your wantware & data/media assets.  We do this internally for multi-pronged backups.  Note that the ‘wantware’ itself is not just plain-text words: for every term, or after every section of words (not a grammatical clause, just clusters of associated terms), we embed the ‘meanings affirmed by the user’ and the default contexts that fill in needed extra meanings (whether the visual-interface meanings you saw in our Scoreboard App, 3D simulation objects, data-type/database ones, etc.).  So what you’re checking out is usually ASCII-7, UTF-8, or UTF-16 text files which embed the ‘meaning mappings’ into the file.  The embedding can be hidden, such as inside the ASCII control codes (entries 0…31), the upper control bits of extended ASCII, or UTF user space. Or the embedding can be explicit, using an escape symbol and either phonetic symbols (PyDru), visual symbols (in HTML we could embed the glyphs as symbols with Base64 encoding, or use Markdown with pictures), or possibly user-defined abbreviations.  However, the explicit form is not expected as a common use case and is available primarily for MindAptiv validation/testing.
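The escape-symbol embedding described in A2 can be sketched as follows: each confirmed term carries a meaning ID after a control character, the file stays valid text, and a viewer can strip or extract the embeddings. The choice of U+001F, the ID format, and the helper names are all assumptions for illustration, not the actual encoding:

```python
ESC = "\x1f"  # ASCII Unit Separator, one of the control codes 0...31

def embed(terms):
    """terms: list of (word, meaning_id) -> one text line with hidden IDs."""
    return " ".join(f"{word}{ESC}{mid}" for word, mid in terms)

def strip_meanings(text):
    """What a plain-text editor effectively shows: just the words."""
    return " ".join(tok.split(ESC)[0] for tok in text.split(" "))

def extract(text):
    """Recover (word, meaning_id) pairs for the runtime."""
    return [tuple(tok.split(ESC)) for tok in text.split(" ")]

line = embed([("increase", "verb.add.v3"), ("account", "asset.acct.v1")])
print(strip_meanings(line))  # -> "increase account"
print(extract(line))
```

Because the result is still a text file, ordinary version control tools can diff and store it, which is the point being made above.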

Q3: Does this work with an industry standard like Git or GitHub?

A3: I use Git regularly for our core Agent/Origin source code, with a repo living on my LAN’s NAS. I have some E-Powers on GitHub for eventual 3rd-party use. Think of those as a simple SDK & working examples for exposing your existing software or network services to wantware. However, if you use Essence as your runtime, it can track changes directly, by who changed what & when, in multiple timelines (forks with ownership/rights) and versions.  We offer this for compression of delta-encoded changes as well as fast replay/undo when iterating. Ownership is important for collaboration, security, and successful concurrency scheduling, and forms a key aspect of our internal version control.  This approach, which is an Essence-Chronicle, doesn’t preclude any individual or organization from using GitHub or others. I would expect it, as our approach is peer-to-peer networked, whether on a fleet of your own machines or to others across the globe.  The encryption enforcing validity, and the ledger with proofs of validity claims, are provided as E-Powers and can be changed for each Asset and Owner as often as they like.
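The who-changed-what-and-when tracking with fast replay/undo described in A3 can be sketched as a log of invertible changes. The Chronicle class here is an invented illustration of the idea, not the Essence-Chronicle itself (which also covers timelines, rights, and encryption):

```python
class Chronicle:
    """Hypothetical sketch: each entry records who changed what and when,
    stored as an invertible operation so replay and undo are both cheap."""
    def __init__(self):
        self.doc, self.log = [], []

    def append(self, who, when, text):
        self.log.append((who, when, "append", text))
        self.doc.append(text)

    def undo(self):
        who, when, op, text = self.log.pop()
        if op == "append":
            self.doc.pop()  # the inverse of append is remove-last

    def blame(self):
        """Who contributed each surviving entry."""
        return [(who, text) for who, when, op, text in self.log]

c = Chronicle()
c.append("alice", 1, "store the score")
c.append("bob", 2, "cap it at a 0 minimum")
c.undo()                  # bob's change rolled back
print(c.doc, c.blame())
```

A real system would also delta-encode each entry against the prior state for compression, as the answer notes.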

Q4: How would a development team of more than 1 work in this type of environment without working on top of one another?

A4: As is the theme, both existing commercial text-file merge tools, such as Git/Mercurial/SVN or manual ‘diff’ & edit, and native wantware fork/merge work with multiple contributors.  I use different machines & IDs to edit the same section of code on different cloud or NAS copies: at a workstation in my lab (via text & direct wantware edit), on my laptop while in bed, or on my phone (via text edit) while anywhere else.  I either use text merge tools (Visual Studio Code, Xcode, Unix diff & text edit) or an in-progress native wantware tool (it shows side-by-side changes for 1 to 4 ‘conflict files’ and allows selecting the merge result). The key aspect is that multiple contributors to the same section now have two sources to work with for conflict detection, ordering, and resolution: the raw text input (what you said) and the resulting, user-approved meaning from the dialog (meaning symbols that are mapped to the original words but can be readily remapped to other words and statement templates; as before, think of rewriting a Mad Lib sentence).

In the collaboration space, I find realtime collaboration, as you might do with multiple folks editing a Google Doc or Spreadsheet, quite valuable.  The current state of the art is rather buggy in my experience, and showing the choices made (such as “why did that word disappear while we kept the other one?”) would be valuable.  We have an Essence-Power for resolving live conflicts and ordering changes which mostly follows the Conflict-Free Replicated Data Types (CRDT) approach.  It is not complete, but the architecture of InfoSigns makes it easier for us to handle editing as ‘meaning’ instead of plain text or spreadsheet cells, since we have more information about what is invalid/not possible than with raw data alone.  Again, realtime collaboration, whether on wantware behaviors, data records, or 3D objects in a VR scene, all uses the same path of having a stream of changes for a given version, timeline, and access guard (rights & encryption).
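The CRDT approach mentioned above can be sketched with its simplest member, a last-writer-wins register: merge is commutative, associative, and idempotent, so peers converge no matter the order changes arrive in. This is a textbook CRDT illustration, not MindAptiv’s E-Power:

```python
class LWWRegister:
    """Last-writer-wins register: a minimal Conflict-Free Replicated Data Type."""
    def __init__(self, value=None, stamp=(0, "")):
        self.value, self.stamp = value, stamp  # stamp = (time, actor-id)

    def set(self, value, time, actor):
        self.merge(LWWRegister(value, (time, actor)))

    def merge(self, other):
        # Higher (time, actor) wins; ties are broken deterministically by actor id,
        # so every peer makes the same choice without coordination.
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

a, b = LWWRegister(), LWWRegister()
a.set("score = 10", 1, "alice")
b.set("score = 12", 2, "bob")
a.merge(b)
b.merge(a)  # exchange in either order; both converge
print(a.value == b.value)
```

Editing as ‘meaning’ rather than raw text, as the answer notes, lets a merge additionally reject states that are invalid for the InfoSign, which plain-text CRDTs cannot do.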

Q5: How are variables declared and shared?

A5: We have lots of meanings for explicitly controlling access, storage location, persistence, records-of-change versus one-value-only, conflict resolution & protocols for Byzantine-Generals consensus, and chip restrictions.  However, if you just say you want a table of 1 billion numbers that go from 1 to 10, or just a single temperature value, we’ll take your request and map it to the best-guess meaning.  So the big array might end up using disk storage with a RAM page for the most recently used entries if we choose an x86 solution, and it might be cycled into VRAM memory-mapped tiles for an Apple M1 GPU.  We can show all the details or very few, pending the user’s interest.  The key differentiators that we ‘guess’ to fill in are:

    1. Persistence – do we track changes? do we compress every N milliseconds/hours/weeks? do we have a limit to how many changes we keep total?
    2. E-Guard – who can know the name, can read, write, or destroy (permanently-erase) this Info, the conditions/rules by which that can change (purchase? license?).  Usually we group all variables into the ‘Aptiv’ Guard for that user but it’s often up to the data’s volatility (again, consider live sensor like a camera or mic or web-feed versus a website password)
    3. Idea Limits – this handles whether a variable can be resampled to a different precision (bit representation for compression covering a range) or resolution (can we downsample this variable), whether it includes ‘many others’, such as an Image or a 3D X-ray scan, as well as propagation: if it’s a ‘Fact’, owned and possibly updated by others (such as a bank account or score), do we introduce the required delays when reading this value, versus a ‘Belief’, where your local/last version is fine to use and can be changed retroactively (rollback) if found to have diverged.  There is a lot to say on this aspect of Idea Limits, but the simple answer is that we force the variable into one of 8 types that determines how we handle where it lives, when we sync, what requirements occur to do so, and how it can be adjusted (computationally-clamp, spatially-resample, temporally-resample, use-until-sync’d (belief), sync-before-use (fact), local-operation-only (thought), predictive (chronology), etc.).
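The variable types named in item 3 can be sketched as an enum plus a read policy: a Fact pays the sync delay before use, while a Belief uses the local copy. The enum values follow the names in the text; the enum itself, and the read helper, are illustrative assumptions only:

```python
from enum import Enum

class IdeaLimit(Enum):
    """Hypothetical sketch of the 8 'Idea Limit' types named above
    (only the ones the text lists by name are shown)."""
    CLAMP = "computationally-clamp"
    SPATIAL = "spatially-resample"
    TEMPORAL = "temporally-resample"
    BELIEF = "use-until-sync'd"
    FACT = "sync-before-use"
    THOUGHT = "local-operation-only"
    CHRONOLOGY = "predictive"

def read(limit, local_value, sync):
    """A Fact must sync before use; a Belief uses the last-known local value."""
    if limit is IdeaLimit.FACT:
        return sync()       # e.g. fetch the authoritative bank balance
    return local_value      # e.g. the last webcam frame is fine to use

print(read(IdeaLimit.BELIEF, 42, lambda: 99))  # -> 42 (local copy)
print(read(IdeaLimit.FACT, 42, lambda: 99))    # -> 99 (synced value)
```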

Q6: How is the underlying “stuff” commented so if I lose a “developer” and bring in someone new they understand what they are looking at, and what is the software supposed to be doing?

A6: A key value proposition of wantware is to be able to present ‘what is happening’ as in what variables are loaded, read, written, when and by whom on which chip for which job in ‘your terms’.  Initially that’s simple English and over time that’s however you’ve trained it to map the same underlying ‘meanings’ in your terms.  E-Powers are unfortunately a Black Box of input-output services, but they do have lots of metadata that can be queried, logged, and put back into sentences.

Every system has its confusions and complexities and I’m NOT claiming our approach is anything but designed to help better present the level of detail you’re seeking to better understand.

Q7: Does the code generated have a design meant to leverage a containerized platform such as Openshift, Docker, and Kubernetes?

A7: If using the Essence-Morpheus scenario, we are able to run in VMs and Containers like any other multi-process application (or single proc on Mobile/MacOS).

Q8: Is there any leveraging of current open source binaries as part of these apps or does this just build everything from scratch?

A8: We have over 100 Powers that use open source libraries packaged as E-Powers (in varying stages of complete validation/claim proofs…so things like Curl, Brotli, and TensorFlow are supported but require the user to acknowledge they’re using them without finished proofs!…aka ‘developer mode’).  Pretty much any project on SourceForge, GitHub, or an old BBS can be converted into an E-Power (given its dependencies are met or emulated), including projects like Electron to run JavaScript/web services directly, MAME as an arcade or old-PC emulator, etc.

Q9: From a Maven Repository for example?

A9: For Apache’s build tool Maven, we can add an E-Power that handles the Maven build and all its internal rules directly. Basically, we use the source and wrap it with meanings, statement templates, rules, and claims/proofs (adding internally generated unit tests to randomly synthesize input & validate output, or skipping with a disclaimer/user opt-in!).  We could also directly read and write Maven build files in wantware, starting with ‘open file, read, and do this and this, etc.’  That makes it all transparent, and everything in Maven (iirc?) calls other tools via shell script, as we do regularly for building E-Powers.

Q10: If yes, how are these scannable by something like Sonatype Nexus software?

A10: For the purposes of Sonatype Nexus repo use and user distribution, we could write an E-Power that connects to Sonatype’s service and shares both ‘wantware’ as text with embedded meaning symbols and E-Powers as binary packages of interface/functionality/validation.  However, if you’re using Essence as the runtime, then discovery of other packages is provided P2P, which can come from known associates or commercial entities.  Essence ‘content’ can be embedded on webpages (as binaries), streamed via services (as binary-stream exchange protocols), or sent directly (machine-to-machine IPv6/IPv4 via UDP/ICMP or other protocols).

Q11: How would I reference these libraries?

A11: In the Essence-Chameleon approach, anything you make in wantware is exported into a coding project, so every library is used as-is. If you’re using Essence on an OS (E-Morpheus) or as an Appliance (E-Noir), then the idea of libraries only exists as E-Powers (meanings/templates assigned for use, validation added, metrics taken, etc.) or as cut’n’pasted or imported ‘wantware’ (described earlier as plain text with embedded meanings).

Q12: Can the build artifact utilize unit testing (such as JUnit or JaCoCo; I know these are meant for Java, I am wondering if any current COTS software can fill this gap)?

A12: Anything that can be run as a shell command, launched as an application (with arguments, an initialization file, or registry setup), or handled via network commands (an SSH run?) can be run via wantware, mapped into safe, user-friendly/user-designed statement templates for use.

Adding a specific Java Virtual Machine (JVM) as an E-Power has worked fine in the past via open source choices and a ‘display window’ for the machine to live within. Providing input/output for a Java runtime in other scenarios (web client) is equally feasible, although it carries all the limitations and isolation requirements that running any emulator does.  Wantware can only interface with things for which we’ve made connecting meanings/verb templates (think mad-libs).
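The shell-command mapping in A12 can be sketched as a ‘mad-lib’ statement template filled with slots and then executed. The template text, slot names, and run_template helper are invented for illustration; `echo` stands in for any COTS test tool invoked from the command line:

```python
import shlex
import subprocess

def run_template(template, **slots):
    """Fill a 'verb objectA objectB'-style statement template,
    then run it as a shell command with proper argument splitting."""
    cmd = template.format(**slots)
    return subprocess.run(shlex.split(cmd), capture_output=True, text=True)

# A wantware statement template wrapping an ordinary tool:
result = run_template("echo {message}", message="tests-passed")
print(result.stdout.strip())  # -> tests-passed
```

A JUnit run would follow the same shape, with the template naming the launcher binary and the slots carrying the class path and test selector.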

Q13: Can I run any code quality scanning against what is being generated?

A13: In the case of E-Chameleon, where it’s just code projects, of course.  In terms of the Essence runtimes, there are lots of profiling and runtime scanning tools that apply to the live experience.  However, this requires telling your wantware Aptiv ‘don’t permute this code’, meaning that it’s generated one time for whatever chips/resources it uses and can be scanned at a fixed location, just like regular code.   You can also export your “NO-PERMUTE” code as C99 for Valgrind or SPIR-V for Khronos validation (LunarG tools), etc.  It’s a choice between self-optimized/regenerated results and fixed results.

Q14: Security scanning (AppScan) at build time?

A14: Security scanning, particularly virus-scan apps, acts the same as the above code scanning in terms of options/limits. When we are mapping memory, injecting code live (again, much of it is linking snippets/register rebalancing for CPU/GPU work), and executing/profiling, we tend to be either ignored or heavily red-flagged by security scanners.  We do offer limited use of virtual memory, disk memory, device access, etc., based, as always, on user wants.

Q15: 508 compliance (Deque)? Which is going to be a government requirement.

A15: Essence is designed for inclusion and for the requirement to make software accessible to more people.  Our goal is to support as broad an audience as possible, from prioritizing text-legibility control, filtering for color-blind variations, and making everything ‘speakable’ (including wantware input and dialog responses, so everything can be handled without vision), to being able to specify rights/controls for any created wantware Asset, whether behavior/data described in words or other imported media types.

Q16: Load Testing (LoadRunner, MicroFocus)?

A16: I have not tried this load-testing application, but I would expect it works when we run Essence Morpheus with all the wantware Aptivs using that “NO-PERMUTE” mode, or perhaps even without it.  I’m not sure how their approach works (does it control our processes and inject stack-profiling code? does it do anything with GPU, video codecs, DMA, or USB controllers used for work?).  It would be fun to try, and I’d expect that if it can run video games, web browsers, or any emulator, it’ll run us fine using ‘FULL-PERMUTE’ wantware Aptivs too.

Q17: Regression Testing (Selenium)?

A17: We could put the Selenium IDE/API into an E-Power and wrap it with meaning/validation for use inside Essence if desired, or you can use it directly with wantware exported as C#/JS/PHP (E-Chameleon) for web-browser regression testing.  Given its purpose is to make certain your webapp runs in the popular browsers, I think we’d have less crossover here than with other questions, but it’s certainly useful when either (a) exporting webapps & testing (just use it as-is!) or (b) using Essence as a web server to test the client experience.

MIDDLEWARE QUESTIONS

Q18: What are the middleware requirements for this? Does it leverage a standard stack? Does this have to run in a specific environment?

A18: The middleware option is to convert it into a dynamic library (or link it statically by including it in dynamically relocated sections for BSS/code/text/etc.), or to write a tiny dynamic library that calls the network services, sends/receives device protocols, or uses another service model, including inter-process communication, whether messages, shared memory, or other IPC approaches.

The result must manually map what data types it can receive in binary format (which bits mean what & where), what functions or messages it can take, its ‘templates’, such as verb objectA objectB modifierC defaultModifierC, and a set of tests that can generate random inputs and validate the outputs. This part is done and shown in wantware using the interface map.  It might sound complicated, but it’s effectively a wrapper around existing solutions that offers a means to make ‘statements’, to convert data into what it knows so it can feed those statements, and to provide some proofs for claims/tests.

It is designed to work with ALL known software, no matter how small (ISA-ops for RISC-V) or large (enterprise or HPC-clusters).

BACKEND QUESTIONS

Q19: It is interesting watching a front end being built, but how is the back end of this software handled?

A19: Basically the identical way you saw the front end built, but with words describing what you want for a back end, i.e. “store each person’s name, address, favorite ice cream flavor, and like points in a record under the table ‘Client-Rewards’ in our Super-Scooper-Database.”

Q20: How would I implement this front end with a back end DB MYSQL server for example?

A20: Any existing database can have an E-Power wrapped around it for CRUD operations, data-schema limits, and how to answer the CAP-theorem priorities. Specifically, if you implement a database protocol, you have to implement the “LifeCycle” interface, which handles the Create/Erase, Open/Close, Update/Sync questions for creating/connecting, as well as the “Container” interface (Find, First/Last/Next/Prior, Add/Remove, Accumulate/Map, etc.) into wantware, which automatically adds existing meanings/templates as well as any proprietary or special operations.  The process requires resolving a claim/test for the BASE principle (NoSQL/BigTable style) or the ACID principle (SQL/typical RDBMS).  We have many E-Powers for databases, from simple INI/TOML file read/write to regular MySQL.  We have done some work with stored queries, shards, and other database concepts, but it’s all work-in-progress.
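The “Container” interface named in A20 can be sketched with the operations the text lists (Find, First/Last, Add/Remove, Accumulate). The method names follow the text; the list-backed implementation is an invented illustration, not the actual interface:

```python
class Container:
    """Hypothetical sketch of the 'Container' interface a database
    E-Power would implement; backed here by a plain list."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def remove(self, item):
        self._items.remove(item)

    def find(self, pred):
        """First item matching a predicate, or None."""
        return next((i for i in self._items if pred(i)), None)

    def first(self):
        return self._items[0] if self._items else None

    def last(self):
        return self._items[-1] if self._items else None

    def accumulate(self, fn, start):
        """Fold over all items, e.g. to total a column."""
        total = start
        for item in self._items:
            total = fn(total, item)
        return total

# Records like the Client-Rewards example from A19:
rewards = Container()
rewards.add({"name": "Ada", "likes": 5})
rewards.add({"name": "Bo", "likes": 9})
print(rewards.find(lambda r: r["likes"] > 8)["name"])
print(rewards.accumulate(lambda t, r: t + r["likes"], 0))
```

A real database E-Power would route these same calls to the engine’s query protocol instead of a list.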

Q21: How would I tie this app to that DB and identify fields?

A21: If the user asks to create or open a database, Essence generates a reply that guesses: “OK, you want me to open the Paradox database called SuperScooper located here, etc.” From there, it’s really just a dialog of the user directly instructing the DB or asking for answers (What are the ice cream flavors again? Show me all the top-level tables? How many people have over 1M likes?) that either runs a DB command or filters into a known answer.

Q22: Does this auto scale without the need for something like AWS Auto Scaling?

A22: We have an E-Power called ‘Maestro’ that auto-scales your wantware Aptivs in the ways it is able, based on the total machine load (what else is running really matters, especially if it’s Clang, LLDB, or Chrome decoding AV1) and whatever user priorities are known.  It works well but is limited to per-machine rather than per-cluster scaling, although it would be straightforward for each machine in a cluster of shared tasks to adjust itself, such as in a big database MapReduce, rendering an 8K photorealistic scene, or an N-body simulation for M cycles.  All wantware Aptivs are broken into jobs that are assigned to chips (CPU, GPU, special chips, etc.) and to machines, so adaptation is available by design.  However, we have done very little testing here, and there is presumably a good chunk of time needed to train and tune the choices for large clusters.
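Per-machine scaling of the kind A22 describes can be sketched as a feedback rule: shrink an Aptiv’s job pool when total machine load is high, grow it when there is headroom, weighted by known user priority. The function, its parameters, and the numbers are invented for illustration, not Maestro’s actual policy:

```python
def scale_jobs(current_jobs, machine_load, priority, max_jobs=16):
    """Hypothetical per-machine scaler.
    machine_load and priority are both in [0, 1]; returns the new job count."""
    headroom = max(0.0, 1.0 - machine_load)
    target = max(1, round(max_jobs * headroom * priority))
    # Step one job at a time toward the target to avoid thrashing the scheduler.
    if target > current_jobs:
        return current_jobs + 1
    if target < current_jobs:
        return current_jobs - 1
    return current_jobs

print(scale_jobs(8, machine_load=0.9, priority=1.0))  # busy box: shed a job
print(scale_jobs(8, machine_load=0.0, priority=1.0))  # idle box: add a job
```

Cluster-wide scaling, as the answer notes, could reuse the same rule on each machine of a shared-task fleet.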


Wantware for Humanity