Tuesday, January 10, 2017

Controlling AIDAP

This blog follows the creation of AIDAP. As mentioned before, AIDAP aims to be a way to move past present physics and animation technology for games, VFX and VR, and use a system based on AI (more on this in previous posts).

So an open system where interaction is more real is great, but there are downsides. If you are developing a game, you've just expanded the space you need to test from something limited to something potentially infinite. A crashing car isn't as fun if it destroys the lead character. The aim isn't always to be realistic; it is to be fun and reliable.

That control can be achieved using history data. AIDAP's decisions are based on data, so by manipulating that data we can ensure, for example, that a character always survives a wreck if the narrative needs it, or simply that the action looks cooler.

Here is the test from the last post: a collision where all the energy is transferred from one cube to the other:


So if we edit the data AIDAP uses to create this simulation (manually in this case, but it could be done through a UI at some point), we can change it so there is a little more action via some restitution:


Another edit creates a reaction that is completely counterintuitive to us in the real world, to demonstrate a little more how data can be used to control events. The cube goes up instead of just back:


This data manipulation can also be used to fine-tune things like facial expressions or body language to be more compelling.
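To make this concrete, here is a rough sketch in Python of the kind of edit described above (the field names and data layout are made up for illustration, not AIDAP's actual format): take the recorded collision history and transform the post-impact velocity, either to add a little bounce or to send the cube upward.

```python
# A minimal sketch (invented field names - not AIDAP's real data format)
# of editing recorded history so the replayed collision behaves differently.

def edit_after_impact(history, impact_frame, transform):
    """Apply a velocity transform to every recorded frame after the impact."""
    edited = []
    for rec in history:
        rec = dict(rec)
        if rec["frame"] > impact_frame:
            rec["velocity"] = transform(rec["velocity"])
        edited.append(rec)
    return edited

# A little more action: keep a fraction of the speed and send it back the other way.
bounce = lambda v: [-0.4 * v[0], v[1], v[2]]

# The counter-intuitive edit: put the motion on the vertical axis instead.
go_up = lambda v: [0.0, abs(v[0]), v[2]]

# Toy track for the struck cube: at rest, then moving off in +x after frame 42.
history = [
    {"frame": 41, "velocity": [0.0, 0.0, 0.0]},
    {"frame": 43, "velocity": [2.0, 0.0, 0.0]},
    {"frame": 44, "velocity": [2.0, 0.0, 0.0]},
]
print(edit_after_impact(history, impact_frame=42, transform=go_up))
```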

Wednesday, January 4, 2017

AIDAP’s first hit

Actually literally.

Below is a look at the Unity editor environment. Unity, in case you were not aware, is a game engine and editor for building games. A lot of the work of spinning up a 3D environment is done for you, and it has an API that allows you to control objects with code.

The first gif and the second look roughly the same. They depict a simple collision where one cube hits another of the same mass: when they collide, the energy of the first is transferred to the other and the hitter stops.


There is one significant difference. The one on top uses the physics engine included with Unity, PhysX. To make the cube move, it calculates force, acceleration, kinematics, etc. The one on the bottom uses AIDAP (the AI technology this blog is about). It uses past memory and cognition to intuitively determine what should happen when they collide. No physics calculations, just like your brain.

Mind you, this is a simple test application, a start. 

The hope is that where physics and animation techniques stop being practical, this technology will pick up where they leave off to make better content for games, animation and VR/AR, hopefully for less money as well.

So what’s the big deal? Here are the potential benefits:
  • Much of what you need for physics 'just works': no setup of colliders or other tasks.
  • Much of what is needed for animation could be made much simpler or better. Maybe even the elimination of rigs.
  • Using the DPD features already spelled out in this blog, game developers could create much more in depth and realistic environments.

Basically, it's a potentially disruptive idea: better stuff, cheaper and faster. Today a cube, tomorrow rigging sails in a VR environment.

A little more

For the AIDAP simulation, the Unity Rigidbody components are deleted to turn off physics. An AIDAP interface translates what it is told the motion should be into Unity's animation commands.

The data is gleaned by recording what happens in the physics simulation, storing that in a data file and reading it back when executing the AIDAP simulation. The system compiles this data into new skills that it continually adds to. Once a skill is learned by the system, it's learned. So while motion capture has to go back to actors again and again for new performances, once an AIDAP actor learns how to, say, open a drawer, it always can.
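As a rough illustration of that record-and-replay loop, here is a Python sketch (the real interface lives on the Unity side; the file and field names here are invented for illustration):

```python
import json

def record_frame(log, time, name, position):
    """Called once per frame while the physics-driven simulation runs."""
    log.append({"t": time, "object": name, "pos": list(position)})

def save_skill(log, path):
    """Store the observed motion so the skill never has to be re-captured."""
    with open(path, "w") as f:
        json.dump(log, f)

def replay(path):
    """Read the learned motion back and yield poses for the animation layer.

    In the real setup an AIDAP interface turns these poses into Unity
    animation commands; here we just yield them.
    """
    with open(path) as f:
        for rec in json.load(f):
            yield rec["t"], rec["object"], rec["pos"]

# Usage sketch: record during the physics run...
log = []
record_frame(log, 0.00, "cube_b", (0.0, 0.5, 0.0))
record_frame(log, 0.02, "cube_b", (0.1, 0.5, 0.0))
save_skill(log, "collision_skill.json")

# ...then later drive the no-physics scene from the stored data.
for t, name, pos in replay("collision_skill.json"):
    print(t, name, pos)
```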

This will not just drive how things work in the game environment; it will assist how DPD works as well. For instance, say a character in a VR game needs to reach inside a coat it's wearing to get at a gun in a holster and fire it. DPD can set up the reaction plan, but it needs this sort of physics to model things like where to pull on the coat, how much pressure to apply and how far to pull.

Interesting Unity-Physics stuff

The data to drive the simulation was created by tracking action in the physics simulation. This needed to be cleaned up, and a simpler subset made for dev/debugging.

When using Unity physics to generate data, I wanted a basic setup for the first simulation to eliminate control variances, so the cubes I created had no material. This resulted in the simulation not reacting as expected from a physics perspective: the cubes actually stuck together and kept moving after the collision.

I added a basic bounce material and they worked as expected, but when inspecting the resultant data there was an interesting change in motion. While ramping up to speed, the cube alternated between positive and negative acceleration rather than changing speed smoothly (linearly or otherwise). I think the data is run through some kind of differentiator, perhaps to give the appearance of restitution in the collision. There was also micro motion on the axes outside of the direction of force. Anyway, I filtered this out, and you can see the AIDAP collision is a clean, complete transfer of motion from one cube to the other. The restitution effect can be attained using a string of gradual procedures that create a smoothing effect, but that is for another time.
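For the curious, the cleanup was essentially a smoothing pass. A minimal version of that idea might look like this (Python, a simple moving average, purely illustrative and not the exact filter used):

```python
def moving_average(samples, window=5):
    """Smooth a 1-D velocity track to remove the intermittent
    positive/negative acceleration jitter seen in the raw physics data."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

# Example: a jittery ramp up to speed
raw = [0.0, 0.9, 0.7, 1.6, 1.3, 2.2, 1.9, 2.0, 2.0]
print(moving_average(raw))
```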

This will be the first in a series of simulations that will incrementally increase the capabilities.

As with all new code, it needs to be tightened up first, then on to the next challenge.

For more about AIDAP and DPD technology, please see the earlier tutorials from this blog.

Sunday, November 27, 2016

Introducing: AI Driven Animation and Physics

So the first application of DPD will be to make a framework (we'll call it AIDAP) that potentially could replace and improve upon physics engines, animation in games and computer graphics production.

I think it has the potential to take games and graphics where they've never been before. It could add more realism, possibly be more efficient than physics engines and lead to new games with characters and game objects that can do more. It could be an 'it' factor for VR.

I like this as a first application. There is no hardware involved. There are no compliance or regulatory issues like there could be in other businesses. It is games, not, for example, medicine, so there is a wide berth for error.

So, what could it do?

High-fidelity action

- Characters that can do things like put on or take off jackets and equipment, open complex doors, manipulate objects, rip and break stuff, and perform other complex tasks, all with simple instructions.

- Environments that react more realistically. The math to make the snow in Frozen was amazingly complex; this could be replaced with AIDAP. Things could really burn, fall, explode and deform, beyond the limits of what can be done now.

- VR: more realistic environments for VR. Working door latches, tents, old ship rigs, truly cutting into things.

- Being able to solve a problem in an environment in a novel way by arranging items. Like jumping a wall by moving boxes, an NPC moving things to get to a destination or a million other ideas.

- All-new game genres could be created that were unattainable before.

- Can be restrained to testable limits if necessary.

- Sports games - real activity, running, catching 

- Virtual characters of different intelligence levels from adversaries to pets.

Simplify control

Instead of creating a network of nodes for an animation, simple noun-verb descriptions direct the action, e.g. 'Open the door slowly.'
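To give a feel for what a noun-verb description could look like as data, here is a hypothetical sketch (the command structure and parsing are mine, for illustration only, not a real AIDAP API):

```python
from dataclasses import dataclass

@dataclass
class Command:
    verb: str            # what to do, e.g. "open"
    noun: str            # what to do it to, e.g. "door"
    modifier: str = ""   # optional adverb, e.g. "slowly"

def parse(text: str) -> Command:
    """Turn 'Open the door slowly' into a structured command."""
    words = [w.lower().strip(".") for w in text.split()]
    words = [w for w in words if w not in ("the", "a", "an")]
    verb, noun = words[0], words[1]
    modifier = words[2] if len(words) > 2 else ""
    return Command(verb, noun, modifier)

print(parse("Open the door slowly"))
# Command(verb='open', noun='door', modifier='slowly')
```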

So, in essence, better quality work for less money.

How will it do it?


So DPD is a framework for understanding, inferring and reacting to an environment (more in earlier posts here). When I began using game engines (primarily Unity) as a test environment for DPD, I noticed that the physics and animation were really not what I was hoping for. I ultimately want to use it to animate things like the above list.

To do this, first we will record classic physics in action. DPD learns by observing this. It can apply this knowledge to the environment and extrapolate from it. Then it uses this to both recreate the environment (replace the physics engine) and allow players and NPCs to act in it (replace the animation engine).

An important thing to note is that this is an "entertainment grade" version we are talking about. When you imagine, say, a ball being thrown, or try to catch one, you are not using physics equations to do the job. You are relying on experience, so it is an approximation. Same here.
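A toy way to picture "experience, not equations": predict a new outcome by finding the most similar remembered situation and reusing its result, rather than solving the dynamics. This Python sketch (the memory format is invented for illustration) is not AIDAP's actual mechanism, just the flavor of it:

```python
def predict_from_memory(memory, situation):
    """Pick the remembered outcome whose situation is most similar.

    'memory' is a list of (situation, outcome) pairs, where a situation
    is just a feature vector (e.g. relative speed, mass ratio).
    A toy nearest-neighbour lookup - an approximation from experience.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(memory, key=lambda m: distance(m[0], situation))
    return best[1]

memory = [
    ((1.0, 1.0), "hitter stops, target moves off at 1.0"),
    ((2.0, 1.0), "hitter stops, target moves off at 2.0"),
    ((1.0, 0.5), "both keep moving forward"),
]
print(predict_from_memory(memory, (1.8, 1.0)))
```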

Soon I will post very simple 'Hello World' examples and build from there. Ensuing blog posts will look at these more closely.

Thursday, March 17, 2016

DPD Summed up

True machine understanding and inference. Machine creativity. Inductive and deductive reasoning. Solving poorly defined problems in an open environment. DPD has these qualities and more.

It's a big boast. So to back it up, here is a simple series of videos that will explain how this is possible in a few short minutes.

Watch these in order for the best understanding; they build upon each other. Each is just a few minutes long. It is at a high level and you do not need a PhD to get it.

They cover theory, how DPD works, how unsupervised learning works, some simple examples and demos. FYI - you may need to adjust the sound on the videos depending on your browser.

Prefer reading instead? Just check out the previous blog posts below or the FAQ.


Part 1 - Background and Theory


Part 2 - How it works


Part 3 - Basic Examples


Part 4 - Unsupervised learning and Memory

Part 5 - Simple code demos


Part 6 - Applications

As always, you can learn more and get contact info from the faq here. Thanks for watching.

Friday, January 22, 2016

Intelligent problem solving in an open environment

The next demo for DPD is done. It is a task performed using 2D primitives. As the title implies, it displays intelligent action in an open environment, a fundamental definition of AI.

This demo illustrates the same principles (core code) that will be used in the character animation product (and other applications later).

The demo is of a sentient yellow element that controls the movement of itself and another yellow element (its "finger"). The sentience identifies a desirable target on the other side of a wall. It has a positive memory of approaching the target, so it decides to do so. Here the elements are labeled:

Objects in the Demo


The Sentience wants to get to the target. The software did not provide DPD with a goal or a solution in any way; the Sentience used DPD to independently find a goal, create a plan and execute it.

It decides on the goal of approaching the target because it remembers a positive experience. When it sees there is a fence and a gate in its way, it creates a plan to move the gate out of its way with its finger and then approach the target. Let's see the sentience in action:





So the sentience recognizes there is an obstacle, plans to remove it by moving the gate (the far ends of the fence are out of bounds for this demo; otherwise it might have decided to just go around), then moves through the entrance.

This is an example of an excitatory response in DPD, as opposed to the inhibitory response of the first demo. The sentience is excited by the prospect of going to the target and plans to do so. It also introduces features not used in the previous demo, such as element hierarchy, more use of memory for understanding/planning and coordinated multi-part reaction planning (using both itself and its finger to solve the same problem).

Here is the planning that the sentience ascribed to different elements, as seen in the Thought Process Visualizer:

The procedures are displayed here sorted by the element being acted on. Here the Sentience recognizes its environment (yellow nodes), notices the target (black nodes), then plans to move towards it (blue nodes):


It realizes there is an obstacle (sliding gate), so it decides to move it (black node):


It decides to use the Sentient Finger to move the gate. It does this while coordinating the action with its own motion. The black nodes prescribe the finger's progression of movements (four motions are needed to get to the correct point and push the gate):


There would be many more procedure nodes for, say, a hand picking a lock; however, the core process is the same. Another thing to note is that this process and the memory access procedures are perfect candidates for parallel processing on, say, a GPU, so something like character animation or robotic applications can use this in real time.
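For a sense of what those procedure nodes amount to structurally, here is a hypothetical sketch (the node fields and names are invented): a plan is an ordered set of small procedures attached to elements, which is why evaluating lots of them against memory in parallel, say on a GPU, is a natural fit.

```python
from dataclasses import dataclass, field

@dataclass
class ProcedureNode:
    element: str      # which element the procedure acts on
    action: str       # e.g. "move_to", "push"
    target: tuple     # where / what it acts toward
    children: list = field(default_factory=list)

# A rough stand-in for the gate plan shown above.
plan = ProcedureNode("sentience", "approach", ("target",), children=[
    ProcedureNode("sliding_gate", "displace", ("open_position",), children=[
        ProcedureNode("finger", "move_to", (1, 0)),
        ProcedureNode("finger", "move_to", (2, 0)),
        ProcedureNode("finger", "move_to", (2, 1)),
        ProcedureNode("finger", "push", ("sliding_gate",)),
    ]),
])

def flatten(node):
    """Walk the plan depth-first; each node is small and independent enough
    to be evaluated against memory in parallel."""
    yield node
    for child in node.children:
        yield from flatten(child)

for n in flatten(plan):
    print(n.element, n.action, n.target)
```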

Summary of features this demonstrates:
  • The use of limited or single-experience memory, rather than a reinforced data set (aka one-shot learning)
  • Creative understanding of a problem and determining a solution
  • Balancing possible solutions
  • Inductive reasoning (creation of the deductive framework as well)
  • Coordination of activity
  • Understanding and knowledge representation as a function of impact and geometry, which you can learn more about from the first post here or the DPD faq

This is a stopping point for a bit. First off, some of the physics simulation needed to be stubbed out because the framework is not hooked up to a proper game engine/physics engine, and it needs that. No sense reinventing that wheel, so I am evaluating what's out there. Also there is some tech debt to be dealt with. When building a core, it is usually best to nip that in the bud ASAP to avoid headaches later.

There may be one or two more 2D demos, then 3D ones will begin.

Hope you’re finding this interesting. I didn’t think I would get as many reads as I already have with this being so different from typical AI research. Thanks so much for reading. If you haven’t already, please look at the faq, which can be found here:




Thursday, December 31, 2015

The DNA is done

(To learn more about DPD, please read the faq here)

Nice milestone today. The original source code for DPD was more of a code laboratory than a product; it was basically slapping pieces of code together to understand the nature of DPD's methods and the challenges that had to be dealt with. It was done with little regard for separation of concerns, encapsulation, and other basic tenets of software architecture. But that was not its purpose either.

Once the code was done, the needs could be seen in relief, and I got to do what most software engineers never get to do: throw it out and start again, knowing what I'd do differently.

So after specifying the architecture and implementing it, the basic code, the ‘DNA’ of the DPD engine is in place and operating. There is still A LOT of work to get it to the first product (more in next post).

There is a simple 'Hello world' test world that is used to see it working as a baseline. It is a very simple scene. An element moves toward a sentient element. The sentient element uses the understanding and analysis of DPD to get out of its way. That's it. But this simple move is not programmed, and it is not the result of a massive pile of data and a neural network. DPD breaks down the simple situation and reacts. It is simple yet very powerful.
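A toy version of that inhibitory reaction, just to show the shape of it (Python, invented names; a sketch of the idea rather than the DPD engine itself): predict the incoming element's path from what has been observed, notice that it passes through your position, and pick a move that takes you off that path.

```python
def predict_path(observed_positions, steps=5):
    """Extrapolate a straight-line path from the last two observations."""
    (x0, y0), (x1, y1) = observed_positions[-2], observed_positions[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * i, y1 + dy * i) for i in range(1, steps + 1)]

def avoid(my_pos, incoming_positions):
    """If the predicted path passes through us, step aside (inhibitory response)."""
    path = predict_path(incoming_positions)
    if my_pos in path:
        x, y = my_pos
        return (x, y + 1)   # step off the predicted path
    return my_pos

# The moving element closes in along the x axis; the sentient element steps aside.
print(avoid((4, 0), [(0, 0), (1, 0), (2, 0)]))   # -> (4, 1)
```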


Here is a gif of the new console. You can see the action of the two elements in the upper window, DPD's thought process in the lower window (in the console you can click on the nodes to see what they mean), and the memory used on the right.

So now that the source base is ready, the road to a usable product can begin. It will start by making successively more sophisticated situations for DPD to analyze, eventually moving to 3D and GPU use.

For instance, the next test being developed now is to open a gate:


The two sentient fingers (yellow) push open the purple gate to get to the green target. This will demo the attainment of an excitation goal rather than an inhibiting one.

As simple as these demos look, they are modeling complex behavior, thought and, most importantly, understanding. A character doing something more complex, such as putting on a jacket in a game, is essentially the same.

Once a game physics package is selected to develop the system with, tests can move on to 3D and eventually complex motion, like a character opening a car door and getting into the driver's seat.

Thursday, October 29, 2015

Why don't we treat scientists the same as we treat rock stars?

(To learn more about DPD, read the faq here)

Do you know who the person in the picture at the podium is? 



His name is Norman Borlaug. Do you know who that is without looking it up? He's a Nobel laureate. He is a scientist who led the improvement of food production in third world countries, so much so that he is referred to as the 'savior of a billion lives'. That's a billion with a B. He's not just a scientist, he's a 'rock star' scientist.

Do you know who the guy in this next picture is?




Imagine if these two each walked into a stadium full of people. Even after introducing Borlaug and explaining his accomplishments, he would probably have the respect of the crowd, but he would simply not generate the same excitement and reaction from the crowd that Clooney would. Why is that?

We could chalk this up as a paradox of our society or a comment on its values, but it really is not. There's an interesting explanation. (Yes, it goes back to AI and how DPD implements AI; we will address that below.)

The question it poses is: 'Why does our society not have the same level of celebration for its top scientists and engineers that it does for rock stars and other celebrities?'

Possible reasons that come easily to mind are ‘there are a lot of stem people and not a lot of celebs’ or ‘science is hard and people are stupid’. But these are really not the reason.

The reason comes from how your brain works and how you make decisions. We like to think that we use logic primarily and sometimes logic alone to make our decisions. We seldom do. We almost always rely on emotion.

We use emotion when we buy a sporty new car or similar expensive purchases. Slap the word Pink on the side of a $3 pair of sweats and we’ll pay $70 for it. We hire people and make some of the biggest decisions of our lives not from logic but from emotion.

Fear drives us. We are afraid to miss the great sale. We vote not from logic, but from fear and hope. Emotion is why it is so effective to tell people what they want to hear. Many salesmen and marketing gurus of all types have made a living understanding this simple fact.

We all understand that we can emotionally connect with things, but we are not just connecting, we are actually deciding. It's not just a favorite TV show we must watch or a singer we spend $100 to see, all of our decisions rely on emotional response to a much higher degree than we are comfortable to admit.

It trickles into the tech world too, though we are loath to admit it. Just see any 'this tech stack is better than that tech stack' flame war. This is not the only example.

When we tell ourselves we are using logic, we may be trying to use logic, but that logic is based on belief, and those beliefs are usually based on emotional experiences.

The emotion we want of course is happiness or at least inner peace. What makes us happy we are drawn to. An important survival skill from our ancestors.

So we are slaves to our emotions. It's just part of being human. So when Borlaug walks into the stadium, we do respect him, but we have no emotional connection. That is the difference. It's not a result of 'poor values' or 'dumbing down'. It is because the people in the stadium have and always will rely much more on emotion than they ever will on logic, no matter their IQ level.

As a side note, when we teach kids about science we should make an effort to connect to the emotional system. Tell a story, make a connection, not just the facts. Tell how knowledge changes and saves lives, leads to happiness.

What does this tell us about designing an AI? DO NOT RELY ON LOGIC. Logic is a derivative of belief, which is a derivative of emotion. This is why two intelligent people can hold completely opposite views on a topic that they both think they arrived at purely through logic. DPD uses emotions to drive its decision making, just like us.

The emotional impact of an event drives how the system understands it and may react to it later. In fact in DPD these are called Impacts. It is what ultimately forms belief and logic. It is just as critical to DPD as it is to us. 

If you want to learn about DPD (perhaps get emotionally connected to it and learn something new :-) ), follow the link below: 


Learn more about DPD

Tuesday, October 20, 2015

Some of the theory and ideas behind DPD

The intro paper for DPD here explains how it works, but does not go into detail about the fundamental ideas behind it. Let’s do that.

Dynamic Procedural Decomposition (DPD) is an intelligence system based on decomposing the world that it is analyzing and then leveraging those decomposed components to understand it.

The first notion that should be understood is this idea:

All information requires the expenditure of energy to exist and the expenditure of energy creates information.

The idea behind this simple statement is that information is a description of the state of something, and to change the state of anything requires energy, thus new information.

Example: If an object moves from point a to point b, its state has changed. That state change can be described as:

S1 + dS = S2

where S1 represents the state the object started in (being at point a), dS is the change (the motion) and S2 is the final state (it is at point b).

There are two new pieces of information created as a result of this change: the change dS and the new location S2.
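As a tiny worked example (purely illustrative): a single move from point a to point b yields exactly the two new pieces of information named above, the change and the new state.

```python
# State change S1 + dS = S2: moving from point a to point b.
S1 = (1.0, 2.0)                        # starting state: the object is at point a
dS = (3.0, 4.0)                        # the change: the motion the energy expenditure produced
S2 = (S1[0] + dS[0], S1[1] + dS[1])    # the new state: the object is at point b

# The two new pieces of information created by the change:
print("change dS  =", dS)   # (3.0, 4.0)
print("new state S2 =", S2)  # (4.0, 6.0)
```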

There is no method of change that does not require the expenditure of energy. This holds true for all of Newtonian mechanics, including fluid and gas motion. It holds for thermodynamics, as well as for the transmission of electromagnetic energy. All the information we know about or have ever known exists in the physical world, so this idea holds for everything, even the 1s and 0s on storage devices.

So when energy expenditure occurs it creates information as it does so, and no information can exist unless energy is expended.

So how do we take advantage of this simple fact for DPD?

DPD uses the changes of a model to describe it, understand it, predict what will happen to it and react to its changes. To do this, DPD strategically reduces or decomposes these changes to common components. All information, as stated above, must exist in the physical world (even if it is bits on a hard disk), so we can use fundamental physical qualities to decompose this information. Examples of these components when modeling a physical world include existence, path, orientation, energy and others. These are called Capabilities.

Capabilities are the fundamental things an object must be capable of to undergo a physical change. For instance, path is a fundamental capability of motion because without a path, the object cannot move.

Capabilities needed for a change vary slightly depending on which physical change they are a component of (for instance object motion versus heat radiation have differing capability needs).

The general rule for capabilities is this:

A change needs all of its required capabilities in order to happen; if any one of them is not possible (the path is blocked, the energy is too low), the change cannot happen.

This fact is very convenient for creating a cognitive system that can understand and react to its environment. It creates a channel through which desired change in an open environment can be achieved automatically. Also, the decomposition supports contextual machine learning/understanding at this atomic level.
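A minimal sketch of that rule (Python; the capability sets follow the examples above, everything else is invented for illustration): a change is only possible when every capability it requires is available, so blocking any single one vetoes the change.

```python
REQUIRED = {
    # Capabilities a change needs, per the examples above (illustrative subset).
    "object_motion":  {"existence", "path", "orientation", "energy"},
    "heat_radiation": {"existence", "energy"},
}

def change_possible(change_type, available):
    """A change happens only if all of its required capabilities are available."""
    missing = REQUIRED[change_type] - available
    return len(missing) == 0, missing

# The cube can move: everything it needs is there.
print(change_possible("object_motion", {"existence", "path", "orientation", "energy"}))
# Block the path (a wall, say) and the motion becomes impossible.
print(change_possible("object_motion", {"existence", "orientation", "energy"}))
```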

So getting back to Energy:

One common capability all physical changes must have, per our statement above, is energy. This fact can be used strategically during the analysis of capabilities needed for change. This analysis can be used by an autonomous agent to analyze or create change as required.

DPD uses an internal model that represents what it is studying. This model is composed of objects that can undergo change. The most typical implementation, for say a robot or autonomous car, would be physical geometry that models its environment. Different models can be used to study molecules, provide network security, etc.

This observation and manipulation of changes in the system is the basis for how DPD understands and reacts to its environment. This strategic use of decomposition to a basic and common form, all the way back to how it transfers energy, is what separates DPD from other solutions and is the source of its potential.

These are just the underpinning ideas for DPD. If you would like to learn more about DPD, follow the links below.


Demo update: The 2.0 demo is being coded now. The first version was crude and served to confirm the potential of the ideas. The next demo is first a refactor of the 1.0 code, a little tech health, then will move on to a more challenging yet basic display of its potential. Please write if you would like to know more about the project.

Thursday, October 8, 2015

How to make a robot laugh

(To learn more about DPD read the faq here)

Can we make a robot laugh using Cognitive computing? Also, if we can make a robot laugh, can the robot make original jokes as well?

To understand how, consider this lame, yet fine for our purposes joke:

What do you call an Irishman that stays out all night? Paddy O’Furniture.

So, in the question part of the joke we recognized, then constructed, a mental model of a partying Irishman. In the second part, we saw the name, rapidly recognized an associated meaning (the name sounds like patio furniture), and got the joke.

But why is it funny to us? (Indulge me and agree that it was funny.)

There isn’t just recognition at play, but emotion as well. Let’s put recognition aside and consider emotion.

There are several psychological models of emotions. Often the models define emotions as fixed states. Happy is here, sad is there.

The emotion system used by DPD (emotions are vectors called Impacts) is different. It can be visualized using a 3D grid. Impacts have not just a position; they can have a velocity toward that position and an acceleration of that velocity.

Here’s a look at the Impacts in 3 space:



Basically, there are things in the past that affect you negatively or positively, things in the future that affect you the same way, and the Z axis. If an event or thing is negative on the Z axis, you do not understand or recognize it; the opposite if it is positive.

An important thing to understand at this point is that if the cognition or robot sees or hears something and that thing is unrecognized or not understood, the Z value is pushed down. As it becomes understood, the Z value goes up.

So, in this model, impacts/emotions are something that can move at different speeds. The motion is what causes the strength changes (they can have an inertia too, but that is for another post). 

Changes in the position and velocity of Impacts are how concern turns into alarm, or like into love, etc. So when you have a string of events with these varying emotional changes, you can get a sophisticated, complex emotional spectrum. It can also make subtle changes, say from something feeling a little abnormal to feeling creepy.
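Here is a hypothetical sketch of an Impact as described here (Python; the field names are mine, not the actual DPD structures): a point in that 3-space plus a velocity and an acceleration, so the emotion can move and speed up rather than sit at a fixed state.

```python
from dataclasses import dataclass

@dataclass
class Impact:
    pos:   list   # [past affect, future affect, recognition (Z)]
    vel:   list   # how fast the impact is moving through that space
    accel: list   # how quickly that motion itself is changing

    def step(self, dt=1.0):
        """Advance the impact: acceleration changes velocity, velocity changes position."""
        self.vel = [v + a * dt for v, a in zip(self.vel, self.accel)]
        self.pos = [p + v * dt for p, v in zip(self.pos, self.vel)]

# Something unrecognized (low Z) that is rapidly being understood: Z accelerates upward.
impact = Impact(pos=[0.0, 0.2, -1.0], vel=[0.0, 0.0, 0.0], accel=[0.0, 0.0, 0.8])
for _ in range(3):
    impact.step()
    print(impact.pos)
```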

But back to humor. Humor responses happen when the cognition rapidly recognizes something (moves up on the Z axis) and this is mixed with other factors, typically arousal not being negative or staying away from negative, so we have a happy ending. So when we hear 'Paddy O'Furniture' we infer and rapidly recognize that the name also means patio furniture, which stays outdoors, triggering the humor response.

Other examples of rapid recognition and humor:
  • When a joke is told and the situation is true, but how it's presented causes rapid recognition from a different perspective (men-versus-women jokes).
  • Why might we laugh when someone falls? Because for a moment our brain does not know if the situation is serious. It rapidly recognizes it isn't. When this happens we have a rapid positive acceleration along the Z (recognition) axis and our alarm falls off rapidly, resulting in a humor response.
  • Something behaving in a silly way goes from unrecognized to recognized rapidly once the inferences are complete.
  • When someone has an eccentricity or a peculiar personality trait and we all laugh and say 'That's Frank, haha', we are again recognizing rapidly, but maybe not as fast, so it's not as funny.

This could also be why we can hear the same song a thousand times and still like it, but jokes lose the recognition aspect, because we already know them, so we can only find them funny once or twice.

So sure, this is a basic formula to make jokes, but why do humans have a humor response when rapidly recognizing? The study of human evolution shows clearly that being social, and thus adapting well to a community, has been a critical survival trait. When humans evolved the ability to infer, the ones that got a humor response when rapidly recognizing were probably at a great advantage.

So with this formula, a robot with cognition can conjure its own jokes as well. For example, it can set up a situation that is recognizable (a guy walks into a bar); if it then changes some part of the model by describing something similar that can be rapidly recognized (or inferred, resulting in recognition) without negative arousal, this should provoke a positive Z-axis acceleration in the listener's Impact system, creating humor (FYI - at this point I can barely believe I am talking about being funny).
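Putting the pieces of that formula together, a toy humor trigger might look like this (Python, building on the hypothetical Impact sketch above; the threshold values are invented): fire the response when the Z (recognition) axis accelerates sharply upward while negative arousal stays away.

```python
def humor_response(z_accel, negative_arousal, z_threshold=0.5):
    """Rapid recognition (big positive Z acceleration) with no negative arousal
    produces the humor response; slow recognition or alarm does not."""
    return z_accel > z_threshold and negative_arousal <= 0.0

print(humor_response(z_accel=0.9, negative_arousal=0.0))  # the punchline lands -> True
print(humor_response(z_accel=0.1, negative_arousal=0.0))  # "That's Frank, haha" -> False (slower, less funny)
print(humor_response(z_accel=0.9, negative_arousal=0.6))  # someone actually got hurt -> False
```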

So with this, an automated cognition can be potentially the funniest and most charismatic entity we have ever known!

If you found this interesting, this blog is about a system that provides the AI that can do what we talked about here and more, including the inference and creativity. Learn more by reading about it here and letting us know what you think.

Learn more about DPD

Sunday, September 27, 2015

Abstraction is not as abstract as you think

(To learn more about DPD read the faq here)

This post is about abstraction, but it is also about how DPD works as an AI tech.

If I asked you to name a few abstract concepts for me, what would pop into your mind? Love? Sure. Childhood? Parenthood? Evil? Enthusiasm? Loyalty? All good.

Now, imagine explaining to someone else what these mean. You will inevitably use references to your life experiences, likely using examples directly or metaphorically. What the examples you use will all have in common is that they will be things that happened in the physical world.

You will use physical action occurring to physical objects (dynamics) and how this affected you or others emotionally. Evil? You think of moments you witnessed it and your emotional reaction. The physical action itself ultimately defines it.

Now try explaining these same ideas without any geometry or physics. At best this is extremely difficult and almost certainly incomplete.

Everything we understand, even abstract concepts, can be reduced or transferred to geometry and our emotional response to what happens with that geometry. This is maybe the most important thing to understand about DPD and why it works.

Everything we know and understand can be resolved back to geometry and physical action. By decomposing these to fundamental parts they are transferable and can be used to compare and analyze. This analysis leads to understanding. When a system can understand what has or will happen, one can call it intelligent.

The whole point of neurons in early evolutionary animals was almost certainly to survive in and analyze the 3d geometry of their world. I think the reason many of us find quantum mechanics so hard to understand is there is no easy correlation from the concepts it presents to the geometry we know and love.

This decomposition and building back up of reality is the heart of DPD. To understand how it works, there's a good intro paper right here.

This blog is a sort of journal of the effort to develop DPD. It also will chronicle the effort to make the 2.0 version of the demo software.

The 1.0 version of the software is pretty ugly. That's okay though. It was not supposed to win any beauty contests, just support the thesis that DPD is a viable idea. Now that its mission is done, 2.0 is being built and will be put up for others to see.

If you’re interested, read the FAQ at the link below, then take a look at the code. It will be sparse at first, but it should fill up quickly after that.