Architecture of an Artificial Intelligence Operating System (AIOS)

At the lowest level, an artificial intelligence operating system (AIOS) has a ‘kernel’ that functions as a resource manager, allocating available hardware (memory, CPU, disk, etc.) to the tasks that request it. The hardware can be physical or virtualized, and may be a single computing node or a combination of nodes forming a cluster. A work scheduler coordinates with the resource manager to run applications. There is also a means to handle basic input and output (I/O). The first form of I/O allows the system to hear and speak, supported by a speech recognition and synthesis framework as well as a natural language processor (NLP). The second form of I/O uses machine vision and optical character recognition libraries, allowing the environment to see and interpret video streams, photos, and graphical images. Layered on top is a set of integrated services that supports both building and running smart applications. This includes one or more integrated development environments (IDEs) with source-level debuggers, machine learning algorithms, text analytics, a data mining workbench, a logic programming environment, a production rule system, a computational grid, a data grid, a message bus, and some ancillary components. The first artificial intelligence operating system available for commercial use in the United States is called cognition².
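The interplay between the resource manager and the work scheduler described above can be pictured with a minimal sketch. All class names, node names, and capacity figures below are invented for illustration; they are not part of any actual AIOS product.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    name: str
    mem_mb: int   # memory requested from the resource manager
    cpus: int     # CPU cores requested

@dataclass
class Node:
    name: str
    free_mem_mb: int
    free_cpus: int

    def can_fit(self, task: Task) -> bool:
        return task.mem_mb <= self.free_mem_mb and task.cpus <= self.free_cpus

class Scheduler:
    """Coordinates with the resource manager: first-fit placement of tasks on nodes."""
    def __init__(self, nodes: List[Node]):
        self.nodes = nodes

    def schedule(self, task: Task) -> Optional[str]:
        for node in self.nodes:
            if node.can_fit(task):
                node.free_mem_mb -= task.mem_mb   # reserve the hardware for this task
                node.free_cpus -= task.cpus
                return node.name
        return None  # no capacity anywhere in the cluster

# A two-node cluster; the nodes could equally be virtual machines.
cluster = [Node("node-a", 4096, 4), Node("node-b", 8192, 8)]
sched = Scheduler(cluster)
print(sched.schedule(Task("speech-recognizer", 2048, 2)))  # node-a
print(sched.schedule(Task("vision-pipeline", 4096, 4)))    # node-b
```

A production scheduler would add priorities, preemption, and fault tolerance, but the contract is the same: applications request resources, and the kernel-level manager decides where they run.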

For additional information please contact the nTeligence Corporation sales department via email, or by phone at 561-922-8054.

True Meaning™ – The Most Important AI Breakthrough in 25 Years

A July 2016 article entitled “Where machines could replace humans - and where they can’t (yet)”, written by McKinsey & Co. researchers Michael Chui et al., discusses the potential for automating work tasks performed by people across various sectors of our economy. In it they identify a crucial gap in current AI technology: “if machines were to develop an understanding of natural language on a par with median human performance, that is if computers gained the ability to recognize the concepts in everyday communication between people…”. They go on to state the impact that this single advancement would have on the automation of jobs. By their findings, the share of work tasks that could be automated within the retail sector would increase by 7%; in financial services and insurance the impact would be much greater, with an additional 23% of work tasks becoming automatable if this technology existed.

In a similar light, a piece written by Joanna Stern appeared in today’s Wall Street Journal. It discussed the improved version of Apple’s Siri intelligent assistant. Although some progress has been made in areas related to common consumer-focused usage, including seamless integration with a number of popular apps, there was still an obvious weakness in its “conversational abilities,” as evidenced by its technical design. As a point of reference, Norm Winarsky, Siri’s co-inventor, stated that the “hard part is recognizing the intent and context of the conversation”. This same technical limitation seems to hold true for Microsoft Cortana, Google Now, Amazon Alexa, and IBM Watson as well.

nTeligence is proud to announce that we have now achieved the key technological breakthrough described above, within bounded business, government, and military domains. We call it True Meaning™. True Meaning™ allows a machine to completely grasp the core concepts in human communications and fully understand the speaker’s intent from both a content and a contextual perspective. This applies to both spoken and written sentences and paragraphs of English-language text. The technology is currently only available to early adopters of our cognition² artificial intelligence operating system.

For additional information please contact our sales department via email, or by phone at 561-922-8054.


Why Algorithms Alone Are Not Enough

In 2006 Netflix offered a million-dollar prize to any individual or group that could improve its movie recommendation algorithm by more than 10%. After an entire year of effort, one team, Korbell (AT&T Research), won an interim progress prize for improving the Cinematch algorithm by 8.34%. This was achieved through the use of one hundred and seven (107) different algorithms used together in what is called an ‘ensemble’. Two of the algorithms, Singular Value Decomposition and Restricted Boltzmann Machines, performed well and were adopted to some extent by Netflix.
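In its simplest form, an ‘ensemble’ just combines the predictions of several models. The sketch below is a generic weighted blend; the three model scores and the weights are invented for illustration and are not Netflix’s actual blend.

```python
def ensemble_predict(predictions, weights):
    """Blend per-model predicted ratings for one (user, movie) pair.

    predictions: list of ratings from the individual models
    weights: blending weights, assumed here to sum to 1
    (in a real blend they would be learned on a held-out set)
    """
    return sum(p * w for p, w in zip(predictions, weights))

# Three hypothetical models score the same user/movie pair:
# an SVD-style model, an RBM-style model, and a neighborhood model.
model_scores = [3.8, 4.2, 4.0]
weights = [0.5, 0.3, 0.2]
print(ensemble_predict(model_scores, weights))  # 3.96
```

The winning Netflix entries blended over a hundred such component models, which is precisely what made them hard to run in production.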

It took another two years for a combined team led by Korbell to actually surpass the 10% improvement goal. But the technical approach used to finally win the contest proved too complex for Netflix to implement within a production computing environment. During the three-year duration of the contest, there were over twenty thousand entrants, including some of the finest mathematical minds in the world.

Google, a company whose core line of business is intrinsically dependent on algorithms, uses the knowledge of people to enhance the accuracy and completeness of its search results. As one example, information is taken from data repositories maintained by humans, such as Wikipedia, Freebase, and the CIA World Factbook, and then displayed alongside typical search results. In another instance, people manually review two sets of search results produced by an ambiguous query, such as “what does king hold”. The individual is then asked to select the result set that makes more sense. As Scott Huffman, an engineering director in charge of search quality at Google, states, “There has been a shift in our thinking, a part of our resources are now more human curated.”

At Facebook, although algorithms manage to a degree what users see as trending news topics, people also play a crucial role in the selection. In 2014 Facebook assembled a team of ‘news curators’ in New York and tasked them with identifying the most commonly discussed topics on the social network. As SC Moatti, a former Facebook product leader, stated, “Facebook’s news feed team needs a human touch because ranking based purely on algorithms would feel unnatural, the same way that robots do not appear quite human”. It is very interesting to note that, quite recently, accusations have been made that these same news editors were politically biased in the content they selected. This led to Facebook having to respond to an inquiry from the United States Senate.

What conclusion can you reach when weighing all of the facts stated above? The first realization one must make is that we have just about reached the practical limits of what algorithms can accomplish when used by themselves. Simply put, algorithms have been maxed out. If one takes the approach of employing ‘human helpers’, as Google and Facebook have done, then the expected outcome is somewhat better. But we are still burdened by the fact that the human mind itself is imperfect, and is susceptible to what are known as ‘cognitive biases’, which directly affect rational decision making.

What, then, is the best solution? To use algorithms in conjunction with software-based simulation of human-like thought processes, which can eliminate unwanted cognitive bias as well as other types of errors in human judgment. This is exactly what we are working on at nTeligence: systems that incorporate ‘hybrid models of cognitive behavior’, which leverage algorithms when needed, along with simulated human thought patterns, sans the biases and errors in judgment. This purely digital, integrated approach provides the best of both the art and the science of decision making.
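One way to picture such a hybrid is a statistical score refined by explicit production rules. Everything below is a toy sketch under invented assumptions: the scoring function, rule names, and thresholds are illustrative only, not nTeligence’s actual models.

```python
def algorithmic_score(features):
    """Stand-in for a statistical model producing a score in [0, 1]."""
    return min(1.0, sum(features.values()) / len(features))

def apply_rules(score, facts):
    """Production rules that refine or override the raw algorithmic score."""
    if facts.get("regulatory_block"):            # hard rule: always reject
        return "reject"
    if score > 0.7 and facts.get("verified_identity"):
        return "approve"                          # score and rules agree
    return "review"                               # fall through to further checks

facts = {"verified_identity": True}
features = {"income_ratio": 0.9, "history": 0.8}
decision = apply_rules(algorithmic_score(features), facts)
print(decision)  # approve
```

The point of the hybrid is that the rules are explicit and auditable, so they behave consistently, unlike a human reviewer subject to cognitive biases.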

For Additional Information Please Contact:

nTeligence Sales Dept.


voice: 561-922-8054


Where Does Cognition Need To Live?

Cognition comes from the Latin word “cognitio,” from the verb “cognoscere,” meaning “to get to know.” The definition can readily be expanded to also include learning and reasoning. When evaluating cognitive computing environments, the most important question is where cognition, or machine intelligence, should actually reside. The Bible says that God created man in His own image. Most likely the same will hold true when man creates intelligent machines. The very essence of human nature will lead to designs modeled after our own likeness. People will feel most comfortable interacting with machines that look, talk, move, and think just like we do. But what does this mean from a software architecture and design perspective, and what impact will it have on the long-term success, and product viability, of existing cognitive computing products and environments?

Think of the human body for a moment. Where are our eyes, ears, and mouth located? These core senses, as well as our ability to speak, are all positioned right alongside our brain. What this translates into, from a software engineering perspective, is that cognition needs to live at the very edge of the computer network, at the point where cameras, microphones, speakers, and a host of other sensors directly connect to laptops, desktops, and workstations.

Imagine for a moment the operation of a ‘self-driving car’. What would happen if the intelligence of this vehicle were located somewhere up in the cloud? It would mean that when an eighteen-wheeler jams on its brakes in front of the autonomous vehicle, a client application running under the hood would need to send a request containing real-time data to a server in the cloud, situated perhaps a thousand miles away. The vendor’s cognitive computing engine would have a machine vision algorithm running there that would first determine that there is no safety bar at the rear of the tractor trailer, and another that would determine how long it would take before impact. The round-trip request and response, without even factoring in the time it takes to perform these core computations, might take half a second. But in that time, moving at 60 miles per hour, you would have traveled forty-four feet, which may mean the difference between life and death.
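The forty-four-foot figure is simple arithmetic: distance equals speed times latency. A quick check:

```python
# Distance covered during a cloud round trip, at a given speed.
MPH_TO_FPS = 5280 / 3600  # 1 mph = 1.4666... feet per second

def distance_traveled_ft(speed_mph, latency_s):
    """Feet traveled while waiting on a round-trip response."""
    return speed_mph * MPH_TO_FPS * latency_s

print(distance_traveled_ft(60, 0.5))  # 44.0 feet, as in the text
```

At 60 mph the vehicle moves 88 feet every second, so even modest network latency translates into car lengths of uncontrolled travel.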

There are other business, medical, scientific, and military use cases that also expose the basic design flaw of putting ‘cognition’ up in the cloud. Think for a moment of a battlefield scenario, where a technologically advanced enemy has the ability to electronically jam internet protocol communications between virtual assistants or robo-advisors. They would simply be rendered useless. The ability of an intelligent machine to operate independently in the midst of battle is essential to victory. Then of course there is the proverbial ‘moon shot’ that every company dreams about. In the case of nTeligence Corporation, we envision a ‘Mars shot’. Our long-term goal is to accompany astronauts on a mission to the red planet. In that environment a signal from Earth takes between roughly three and twenty-two minutes to reach Mars, depending on the planets’ positions. If there were a ‘mission critical’ virtual assistant or robo-advisor running on the surface of the planet, this enormous time lag could render it worthless, even if it were just providing psychological counseling to the men and women there.
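The Mars delay follows directly from the speed of light and the Earth–Mars distance, which varies from about 54.6 million km at closest approach to about 401 million km at the planets’ farthest separation:

```python
# One-way light delay between Earth and Mars at its closest and farthest.
C_KM_S = 299_792.458   # speed of light, km/s
CLOSEST_KM = 54.6e6    # ~54.6 million km at closest approach
FARTHEST_KM = 401e6    # ~401 million km at maximum separation

for label, d_km in [("closest", CLOSEST_KM), ("farthest", FARTHEST_KM)]:
    minutes = d_km / C_KM_S / 60
    print(f"{label}: {minutes:.1f} minutes one way")
```

A round-trip question-and-answer would double these figures, which is why any assistant on the Martian surface must think locally rather than defer to a server on Earth.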

For Additional Information Please Contact Our Sales Department

Voice: 561-922-8054