Article Originally Posted by Futurist Speaker, Thomas Frey – futuristspeaker.com

I had great difficulty completing this column. This is partly due to the complex nature of the technology and partly because its implications may indeed be so far reaching that I’ll sound over-reaching in describing it.

Several companies may find what I’m describing to be rather disturbing. It’ll be disturbing because this technology is on the verge of undermining most, if not all, of their product development plans.

For two nights this week I was immersed in understanding the foundational shifts about to occur inside the software development industry. This work is all taking place inside a tiny company called Mindaptiv, located in the Innovation Pavilion in the Denver Tech Center, a hub of startup activity in Colorado.

With a core team of true believers on staff that filled the presentation room, the company’s CEO, Ken Granville, and chief technology visionary, Jake Kolb, took our team from the DaVinci Institute through a series of demonstrations and discussions to grasp the potential of what they are on the verge of unleashing.

Working from inside his secluded geek lab in Boston, Jake started this journey in 2011 by asking the basic question, “What if software didn’t have to be written?”

As most developers know, writing a thousand lines of new code can be a very painful process. So what if a computer could simply recognize objects, and you could just tell this JARVIS-like machine what you wanted it to do with them?

Over the past three years, that’s exactly what Jake and Ken have been building, a kind of “Ironman Room” of spatially capable objects that can be directed both verbally and through gestures with symphony-like precision. Even though they’re only partially there, it’s the kind of technology that would make Tony Stark proud.

Rest assured, I only know a few of the tricks this duo has up their sleeves, but we’re all about to become part of something much bigger than some new gadget we can all carry around in our pockets. No, this one is a game changer on steroids, and here’s why.

 

History Of Transformational Computer Technologies

Computer technology has gone through several fundamental shifts since computers were first invented.

  1. 1946 – ENIAC: The grandfather, where Digital Computers began
  2. 1964 – IBM 360: Start of the Mainframe Computing era
  3. 1974 – Altair 8800: Start of the Personal Computing era
  4. 1990 – Tim Berners-Lee: Beginning of the World Wide Web
  5. 2007 – iPhone 1: Start of the Mobile Computing era
  6. 2015 – Mindaptiv: Entering the Semantic Intelligence era

Admittedly this is a gross oversimplification of the biggest transformations in computers. I could have included many other significant shifts ranging from the introduction of Browsers, to Search Engines, to Open Source, to P2P, to Cloud Computing, and much more.

Without a doubt, all of these elements have contributed to the evolution of today’s highly nuanced improvements leading to today’s sophisticated computer technologies.

But on a zero-to-ten Richter Scale for rating the tectonic shifts of computing, Semantic Intelligence is drawing lines on parts of the chart that have never been written on before.

Semantic Intelligence Explained

We use our devices such as laptops, tablets, and phones to convey meaning. We talk on the phone; we write and read texts, emails, blogs, and news; we look at and send pictures and videos. We do this because these inputs and outputs symbolically represent objects with behaviors and attributes that make sense to us as humans.

We don’t see pixels; we see words that our mind converts into pictures. We don’t see all the tiny squares, circles, and rectangles on the screen, but rather what they represent. In video, we don’t see still images or individual frames. Instead, we see the fluid shifting of movement, as we would experience in real life.

Our brains are hardwired to detect objects and assign value and meaning.

To explain this more simply: humans don’t think like computers, and computers, until now, haven’t had the ability to understand humans. At least not easily.

Scientists working on this problem have identified a number of semantic gaps that have prevented this from happening:

  1. The semantic gap between different data sources – structured or unstructured
  2. The semantic gap between operational data and the human interpretation of that data
  3. The semantic gap between people communicating about a certain information concept

The Mindaptiv Approach to Closing these Gaps 

The Mindaptiv approach is to turn every object into a set of instructions using a system for automatic object detection. This involves dynamically down-sampling and up-sampling what the system sees.

In doing so, every object is converted into a description, and the file size of that description is orders of magnitude smaller than the data itself. This means that every server, laptop, tablet, and smartphone can easily be converted into a Semantically Intelligent device.

For example, a video is converted automatically and seamlessly from pixels into objects with attributes like size, shape, and color, with corresponding information about its time and space coordinates, just like our brains do.
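The size argument can be sketched in a few lines. Mindaptiv has not published its internal format, so everything below — the object name, the attribute set, the coordinate fields — is a hypothetical illustration of why a semantic description occupies far fewer bytes than the pixel region it stands in for:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical semantic description of one detected object: a few attributes
# plus space-time coordinates, in the spirit of the article's vase example.
@dataclass
class SemanticObject:
    shape: str    # e.g. "ellipse"
    color: str    # dominant color
    width: int    # size of the region it covers, in pixels
    height: int
    x: float      # spatial coordinates within the frame (0..1)
    y: float
    t: float      # time coordinate, seconds into the video

vase = SemanticObject(shape="ellipse", color="blue", width=120, height=300,
                      x=0.4, y=0.6, t=12.5)

# The description serializes to a short string...
description = json.dumps(asdict(vase)).encode()

# ...while the raw RGB pixels for the same 120x300 region need 3 bytes each.
raw_pixel_bytes = 120 * 300 * 3

print(len(description), "bytes of description vs", raw_pixel_bytes, "bytes of pixels")
```

Real object descriptions would need far more attributes than this toy, but the gap between a description and the pixels it replaces only widens as resolution grows.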

Unlike Artificial Intelligence (AI), which requires a supercomputer like Watson, Semantic Intelligence, with its diminutive file structures, takes far less processing power and bandwidth. For this reason, high-definition images and video can be stored, transmitted, and presented from semantic definitions at a fraction of the time and cost it would take to send the pixels.

Taking this a few steps further, Semantic Intelligence’s size and speed advantages mean we will be able to send a text in English and have Hindi, Arabic, or Mandarin come out the other side. Semantic Intelligence would deliver the text in the correct variant of Chinese, while translating between the vernacular of both the sender and the receiver.

When it comes to the Internet of Things, the flow of “intelligence” from one device to the next will be exponentially greater. And given a few learning cycles, our devices will finally learn to “think like us.”

The pieces I’ve explained so far are only what Ken refers to as, “a few shavings of ice off the iceberg of possibilities.” It’s not even close to being the tip.

 

Ken Granville (right) and the rest of the team at Mindaptiv

Describing the Capabilities

One of the first demos we saw was a side-by-side comparison built around a low-res photo, a JPEG under 50 KB. On the side showing the current state of the art, any zoom into the photo produced a highly pixelated image.

Using Mindaptiv technology to transmit a description rather than pixels, that same low-res image could be expanded to stadium size and still maintain its crispness.
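The resolution-independence claim is easiest to see by analogy with parametric graphics such as SVG: if what travels over the wire is a description rather than pixels, the receiver can rasterize it at any target size with no blockiness. This is a minimal sketch with a hypothetical circle description, not Mindaptiv's actual pipeline:

```python
def rasterize_circle(cx, cy, r, size):
    """Render a unit-square scene containing one circle into a size x size
    grid of 0/1 pixels. The edge is recomputed at the target resolution,
    so it stays crisp at any size -- nothing is scaled up from stored pixels."""
    step = 1.0 / size
    return [[1 if ((x + 0.5) * step - cx) ** 2 + ((y + 0.5) * step - cy) ** 2 <= r * r
             else 0
             for x in range(size)]
            for y in range(size)]

# The entire "transmission" is this tiny description (hypothetical format):
description = {"cx": 0.5, "cy": 0.5, "r": 0.25}

thumb = rasterize_circle(**description, size=32)    # icon-sized render
wall = rasterize_circle(**description, size=1024)   # wall-sized render, same crispness
```

The same few bytes of description drive both renders; only the rasterization budget changes on the receiving end.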

This was also demonstrated with several videos. Think about projecting a video the size of an aircraft hangar onto a massive wall and still maintaining perfect resolution, while transmitting the information in dramatically smaller file packets.

The second demo was designed to show how its object-capturing and object-manipulation features worked. In this presentation, a video feed of a vase showed how the vase could be selected and stripped away from the rest of its background. The vase was then placed onto a variety of different video backgrounds. In this example, the vase remained part of a live feed, so the vase itself could be repositioned, expanded, or turned sideways in real time.

Features like this will be very appealing to the special effects people in Hollywood and the gaming world.

Additional demos showed the difference in code once an object was reduced to a description. The number of lines of code dropped from thousands to dozens. Once the description file reached its receiving device, the code expanded back into its original multi-thousand-line form.
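Mindaptiv's actual contraction mechanism is unpublished, but the general shape of the idea — a compact description travels, and the receiver regenerates the verbose form — can be sketched with a toy generator. The spec fields and the generated statements below are entirely hypothetical:

```python
# Compact description sent over the wire: one object, its animated attributes,
# and a frame count (hypothetical format for illustration only).
spec = {"object": "vase", "frames": 500, "attrs": ["x", "y", "scale", "rotation"]}

def expand(spec):
    """Regenerate the verbose per-frame representation from the compact spec,
    one statement per attribute per frame."""
    lines = []
    for frame in range(spec["frames"]):
        for attr in spec["attrs"]:
            lines.append(f"set {spec['object']}.{attr} frame={frame}")
    return lines

verbose = expand(spec)
print(f"{len(str(spec))} characters of description expand to {len(verbose)} lines")
```

The point is not the generator itself but the asymmetry: as long as both ends share the expansion rules, only the short spec ever needs to be stored or transmitted.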

This contraction-expansion feature will have massive implications in everything from big data, to telecom, to Internet security, to new hardware designs.

 

Final Thoughts

Admittedly, what I’ve described so far is not enough to give you an accurate sense of what’s going on here. Even for those working on the technology, the true implications will take years to fully realize.

In my opinion, Mindaptiv is sitting on a powder keg waiting to explode.

Yes, there are still any number of things that can go wrong, and this may be far too disruptive for most computer companies to readily embrace. But from my vantage point, Mindaptiv will transform the business world more significantly than the invention of the computer itself.

This is a revolution. Over time, all devices will become Semantically Intelligent. As a second step, which may happen somewhat concurrently, AI will be layered over the top, with AI adding the thinking, reasoning, decision-making, diagnosing, even feeling elements to the equation. Think of the movie “Her,” only better.

With a Semantic Intelligence layer, AI will be faster, cheaper, and perform better than anything in existence today.

Yes, I may indeed have had one glass too many of the Mindaptiv Kool-Aid. But even if they don’t manage to carry the torch across the finish line, someone else will. And personally, I can’t wait.