
Creating Sentient Artificial Intelligence

March 10th, 2015 | Posted by pftq in Ideas
Much of what people refer to as machine learning today is what's considered "weak AI," in that it is not actually thinking, hypothesizing, or behaving with a sense of self.  The latter is what some would call "strong AI," "artificial general intelligence (AGI)," or just plainly "artificial intelligence" (as opposed to "machine learning").  Below is an approach I've been ruminating on for how to create an intelligence that behaves like a person would in any circumstance.  It's something that I've loosely applied to my own projects, but I've not managed to fully explore it in the general sense due to time and resource constraints.  The term I've come to use to describe this approach is conative artificial intelligence, in that the AI is intended to behave more like a creature or child than anything mechanical or data-driven.  If one reflects on intelligence in biological life, it really doesn't make sense that a truly sentient artificial intelligence would necessarily be useful for big data or other work any more than a child or dog would be.  Somewhere along the line we've managed to water down the term AI to the point that it just means anything that helps automate the job of a data analyst.  It becomes almost impossible to talk about creating something that behaves like a thinking, autonomous living creature, many of which on Earth would never be at all useful for data-driven work but are nonetheless considered intelligent.  The topic is kept at a high level as more of a thought experiment.  As I lack the formal academic background in the field, my terminology may not always be correct, but I ask that you see past the words and understand the concepts I am trying to convey.  I also want to be clear that these are just my own thoughts on how I would approach the topic, not anything formally researched, studied, or experimented with.  
Despite what I've built with Tech Trader and what I want to do with Conatus, I've never formally studied or had any education in AI as a field; if anything, what I'm doing probably has more in common with video games I played and modded when growing up (Age of Empires, Black and White, etc).

To start, it's helpful to look at what the state of artificial intelligence is today and what's missing.  Right now, most of what you see in industry and in practice as machine learning is closer to optimization, in that the behavior is more formulaic and reactive than a person who postulates and reasons.  Much of it is just automated data science or analytics, like a glorified database or search engine that retrieves and classifies information in a more dynamic way but doesn't really "act" or "think" beyond the instructions it's given.  The most impressive AI feat to date (at the time of this post), IBM's Watson, is indisputably effective at relating text together but at the end of the day doesn't actually understand what those words mean other than how strongly they relate to other words.  Earlier feats achieved by A.I. are fundamentally the same in that the supposed A.I. is more an extremely efficient data processing algorithm with massive computational resources than something actually intelligent - in other words, a brute force approach.  One popular and widely applied algorithm as of this post is deep learning, which is the use of multiple machine learning algorithms layered on top of each other, each shaping the inputs received and feeding the refined data to a higher layer.  Yet even that, at the end of the day, is just a much more efficient way of finding patterns or doing repetitive tasks.  This doesn't get the system any closer to actually thinking or understanding anything.  It is still only taking inputs, reacting optimally to them, and spitting back out results.  What's out there right now is less of a brain and more akin to a muscle trained by repetition, like that of the hand or eye (aka muscle memory).  
The analogy only stands stronger when we think about how we teach ourselves in school; the last thing we want is for our students to learn through sheer repetition or rote memory (parroting the textbook rather than understanding the concepts or figuring things out through reason), yet we currently train our AIs that way.  What repetition is good for, however, is training our body to remember low-level skills and tasks so that we do not have to think.  What we've been building augments our senses and abilities, the same way a robot suit would, but the suit requires a user and is not itself intelligent.  We've built the eyes to see the data but not the mind to think about it.

The immediate response to this issue would be to add a sort of command layer that takes in the abstracted results from the machine learning algorithms and actually makes decisions on them.  That mirrors a bit how our own body works, in that our mind is never really focused on the lower-level functions of our body.  For example, if we want to run forward, we don't think about every step we take or every muscle we use; our body has been "trained" to do that automatically without our conscious effort, and that is most analogous to what we call machine learning algorithms.  The closest thing currently to such a layer would be reinforcement learning.  However, even if we add this extra layer, it's still conceptually only reacting to results and not thinking.  The easiest sanity check is to always tie it back to what you would think if it were a human being.  If we saw a human that only reacted constantly to the environment, the senses, without actually stepping back to ponder, experiment, or figure out what that person wants to do, we'd think that person was an idiot or very shortsighted.  It's like a person that only gets pushed around and never really makes decisions, never really thinks more than a few steps ahead or about things beyond the most immediate goals.  If left in a box with no external influences, that person would just freeze, lose all purpose, and die.

What's missing are two things: imagination and free will.  Both features are debated philosophically on whether they even exist.  Some say that these functions, as well as sentience/consciousness itself, may not be explicit parts of the mind but rather emergent properties of a complex system.  I agree that these are likely emergent properties, but I do not think they require a complex system.  My personal belief is that all these aspects of the mind (imagination, free will, sentience/consciousness) are actually the most basic, fundamental features even the smallest insect minds have, that sentience is the first thing to come when the right pieces come together and the last thing to go no matter how much you chip away afterward. Whatever your belief is, I think we can at least agree that a person appears to have this sense of self much more than a machine does.

The first feature, imagination, I define as the ability for the system to simulate (imagine), hypothesize, or postulate - to think ahead and plan rather than just react and return the most optimal result based on some set structure or objective function.  This is the most immediate and apparent difference between what machine learning algorithms do today and what an actual person does.  A person learns from both experience (experiential learning) as well as thought experiments; we don't need to experience being hit by a car to know it hurts.  Ironically, thought experiments and other forms of deductive reasoning are often looked down on in favor of more inductive, stats- and observation-based ways of thinking, and I suspect this bias is what leads to so much of the industry designing machine learning algorithms the same way (see Inductive vs Deductive Reasoning).  However, Einstein, Tesla, and many others were notorious for learning and understanding the world through sheer mental visualizations and thought experiments as opposed to trial and error.  To replicate this in a machine, I propose having the system be able to simulate the environment around it (or one it creates mentally, in the case of imagination).  These need not be perfect simulations of the real world, just like how a person's mental view of the world is very much subjective and only an approximation.  The reason I say simulations and not models is that I actually mean world simulations, not just finding variables and probabilities of outcomes (which many in the field unfortunately equate to simulation); that is, the simulation has to provide an actual walk-through experience that could substitute for an experience in the real world with all senses.  The A.I. would practically not be able to differentiate between the real world experience and one mentally simulated.  
It would run through the same pipes and carry similar sensory input data - like being able to taste the bread just by thinking about it.  The technology to build such simulations is already available across many industries such as gaming, film, and industrial engineering, usually in the form of a physics engine or something similar.  Like the human subconscious, these simulations would always be running in the background.  The order of simulations to run would depend on relevance to the situation at hand, with the most relevant simulations ranked at the top (like a priority queue).  The ranking for relevance here is done via a heuristic based on past experience.  This lends itself to usually finding the local optima before straying far enough to find something better.  What is interesting about this implementation is that it mimics both the potential and limitations of our own imagination.  Given time or computational power to do more simulations, the system would eventually find something more creative and deviant from the norm, while in a time crunch or shortage of resources, it would resort to something "off the top of its head," similar to what a person would do.  This also opens up the possibility of the system learning what it could have done as opposed to just the action it just took (i.e., regret), learning from what-if simulations (thought experiments) instead of just from experience, learning from observations of others alongside its own decisions, etc.  These not only allow the A.I. to learn much more quickly from few examples but also allow it to break away from being tied to only what it's seen before and to consider what could happen in the future as well.  In other words, the learning can now happen in a forward-looking manner rather than just backward.
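As a rough sketch of the priority-queue idea above: the `relevance` heuristic and `imagine` scheduler below are hypothetical stand-ins, assuming scenarios are simply tuples of features and that relevance is approximated by feature overlap with the current situation.

```python
import heapq

def relevance(scenario, situation):
    """Toy heuristic: score a candidate scenario by how many features it
    shares with the current situation (a crude proxy for past experience)."""
    return len(set(scenario) & set(situation))

def imagine(situation, scenarios, budget):
    """Run the most relevant what-if simulations first, up to a compute budget.
    A larger budget reaches the less obvious (more 'creative') scenarios."""
    # Max-heap by relevance: most relevant simulations pop first.
    queue = [(-relevance(s, situation), i, s) for i, s in enumerate(scenarios)]
    heapq.heapify(queue)
    explored = []
    while queue and len(explored) < budget:
        _, _, scenario = heapq.heappop(queue)
        explored.append(scenario)  # stand-in for actually running the simulation
    return explored

# In a time crunch (budget=1), only the "off the top of its head" option runs.
current = ("road", "car", "rain")
options = [("road", "car"), ("road", "car", "rain"), ("kitchen", "bread")]
print(imagine(current, options, budget=1))
```

Raising the budget explores progressively less relevant scenarios, mimicking how extra thinking time lets the mind stray from the local optimum.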

The second feature, free will, I define as the ability for the system to set and pursue its own goals.  Right now, even with deep learning, the system will always be striving to achieve some objective function set by the creator.  A person may start with a few simple objectives, such as staying alive, but most people will gradually come up with their own aspirations.  To address this, we can use variations on existing machine learning technologies.  We can take something like reinforcement learning, but rather than just learn to value things leading up to the main objective, we can allow for the main objective to change altogether if things leading up to the original objective build up enough value to supersede it.  Essentially it's a lifting of the constraint that nothing can ever exceed the value of the original goal.  What would this lead to in practice?  In an anecdotal example, we can picture a machine (or person) that at first prioritizes eating but learns over time to value the actions that allow it to stay fed and the people that helped or taught it those actions.  Over time, the satisfaction it receives in performing these other actions, which may include things like helping other people or building tools, may lead to other objectives being valued very highly, perhaps more highly than the original instinctive goals it began with (think of starving artists or selfless heroes).  What's interesting here is the implication behind an objective function that can change over time.  This means that the system will essentially learn for itself what to value and prioritize - whether that be certain experiences/states, objects, or even other people (as we discuss later, all these and goals themselves are technically the same abstraction).  Philosophically, this also means that it will learn its own morals and that we cannot necessarily force it to share ours, as that would inhibit its ability to learn and be autonomous.  
In other words, we cannot create a system that has free will and at the same time ask that it only serves our interests; the two paths are contradictory.
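The changing objective function might be sketched crudely as follows.  The function names (`update_values`, `current_goal`) and the numbers are purely illustrative; the point is only that the original goal holds no protected status once other states accumulate enough value.

```python
def update_values(values, trajectory, goal, lr=0.5):
    """Credit every state on a path that led to the goal with a share of the
    goal's value (a crude stand-in for reinforcement-style value backup)."""
    for state in trajectory:
        values[state] = values.get(state, 0.0) + lr * values[goal]
    return values

def current_goal(values):
    """The free-will twist: the goal is simply whichever state the system
    has come to value most -- the original objective is not privileged."""
    return max(values, key=values.get)

values = {"eat": 1.0}          # instinctive starting objective
goal = "eat"
# Repeatedly, "helping others" precedes getting fed, so its learned value grows.
for _ in range(3):
    values = update_values(values, ["help_others"], goal)
    goal = current_goal(values)  # the objective itself is allowed to change
print(goal)
```

After enough experience, the intermediate action's value exceeds the instinctive goal's, and the system's own objective shifts to it - the lifted constraint described above.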

These are the two main features I believe machine learning needs to truly be sentient - to become an actual artificial intelligence as opposed to just an automation, formula, or tool.  The interesting part about this overall discussion is that all the pieces I propose are already implementable with existing technologies and algorithms.  The key is putting it all together, which I detail below.  Keep in mind that what I'm proposing will be intelligent but not necessarily useful or applicable to business.  It is the same way a child or dog might be intelligent but not necessarily able to crunch spreadsheets, read, or obey commands; for whatever reason, we have equated these traits to artificial intelligence when there are plenty of biological intelligences that do not share them (not even humans if isolated from society, see Feral Children).  Hence the use of the word "sentient" as opposed to "sapient," though it's arguable that one really doesn't come without the other.

The AI I propose would be divided into three main components: the mind, the senses, and the subconscious.  Much of machine learning today is focused only on building the senses, again like a robot suit without the user.  What we do here is take that robot suit and add an actual mind as well as the subconscious that constantly plays in the back of our minds.

At the top, the mind would be responsible for actually making decisions; it is the control center, taking in information from the other two components.  The objectives, values, and memories, as well as initial values as previously described, would also go here, as they would be the criteria by which the mind makes decisions; abstractly, in code, these are all the same things just used in different ways (objectives are highly valued states, which come from what you've experienced or remember, etc.).  The closest existing technologies for this would be some variation of reinforcement learning combined with some variation of Bayesian Networks to generalize the state space / information, except there'd be heavy modification to allow for changing goals ("free will") and other things we discuss later.  The name "Bayesian Network" might not be accurate; it's just what I've found most similar to what I'm trying to implement, which is something abstract enough to store any experiences or profiles of "things" as nodes (at the low level, objects and experiences are the same) and the relationships between different nodes (event 1 has a strong tie/probability to event 2, father has a strong tie to son, etc).  Underneath that and feeding refined information into the mind would be the sensory inputs (our five senses: sight, hearing, etc).  The closest existing technologies for these would be Deep Learning or neural networks, which today are already being applied to sensory inputs like vision and sound.  Only the filtered results from each sensory input would actually make it to the mind, so that the amount of information the mind has to deal with decreases as we move through the system.  This is similar to our own bodies with the concept of muscle memory, in that we don't consciously micromanage every function in our body but instead take filtered information at a higher level that we can make sense of.  Our eyes see light, but our minds see the people in front of us.  
Our hands type on the keyboard, but we just think about the words we want to write.  The sensory inputs layer is essentially the piece that takes in information from the external world and abstracts it into something we can think about.  It is also the same component that allows the system to take actions and learn to perform certain actions better or worse.  In other words, it is the body.  In actual implementation, it would probably include not only the 5 senses but a general abstraction of all actions possible (if it were a robot, it'd include movement of each muscle, joint, along with the senses attributed to each).  Lastly, the subconscious is responsible for creating simulations and is essentially what the mind uses to hypothesize, imagine, or speculate.  It is constantly running simulations based on inputs from the environment or memories of the characteristics from past environments fetched from the mind (stored in a Bayesian Network or otherwise).  Similar to our own subconscious, only the most relevant and highest ranked simulation results would be fed back to the mind, or there would be too much to handle.  When the AI is active, this subconscious would be constantly thinking "what if" to the world around it.  When the AI is inactive, it would essentially be dreaming.  The closest technologies we have for the simulation piece here would be technologies we are already applying to games and industrial engineering for simulating the real world - physics simulations, game engines, etc.
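A minimal skeleton of the three-component arrangement might look like the sketch below.  The class names mirror the text, but the internals (a toy percept, two hard-coded simulated outcomes) are placeholders for the deep learning, physics-simulation, and reinforcement pieces described above.

```python
class Senses:
    """Sensory layer (the 'body'): abstracts raw input into filtered percepts,
    so the mind never deals with the raw data itself."""
    def perceive(self, raw):
        return {"object": raw.get("shape", "unknown")}  # toy abstraction

class Subconscious:
    """Simulation layer: runs what-ifs in the background and surfaces only
    the highest-ranked result to the mind."""
    def simulate(self, percept):
        candidates = [("approach", 0.8), ("ignore", 0.2)]  # toy simulated outcomes
        return max(candidates, key=lambda c: c[1])

class Mind:
    """Control center: decides given the percept and the best simulation
    surfaced by the subconscious."""
    def decide(self, percept, best_simulation):
        action, _ = best_simulation
        return action

# Wiring: senses abstract the world, the subconscious imagines, the mind decides.
senses, subconscious, mind = Senses(), Subconscious(), Mind()
percept = senses.perceive({"shape": "bread"})
action = mind.decide(percept, subconscious.simulate(percept))
print(action)
```

The value of the sketch is the information flow, not the stubbed internals: raw data narrows to percepts, percepts feed simulations, and only the filtered top of each layer reaches the decision-maker.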

Each of these individual components already exist or are being worked on today in some form.  The interesting part here is combining them.  It's the arrangement that matters, not the pieces themselves.  Put very crudely, a real-life attempt at implementing this would require a multi-disciplinary team combining expertise from film / vfx / video games for the simulation piece (essentially a game engine), traditional machine learning / data science for the sensors/body, reinforcement or gaming AI for the mind, and more theoretical researchers to figure out the details of the free-will component (the open-ended, changing objective function).  And of course, you'd need the person coordinating to understand all pieces well enough to actually join them together.

Some further miscellaneous key points below:

1.  Better description of what I mean by Bayesian Network, which again might not be the right term: At the high-level mind (specifically in the memory component), everything is "profiled" as an object/person/situation/state (abstractly they are all the same).  Even a goal is represented the same way; it is just a particular profile of highest value/priority.  This is similar to concepts in games, such as video games but also poker, where you might create a mental image of who or what you are interacting with even if you don't necessarily know them (the name is just one of many data points in that profile after all).  Each profile is a hierarchy of characteristics that can each be described in terms of the sensory inputs (our 5 senses) and can even contain references to other profiles (such as the relationship between two people or just the fact that a person has a face, with the face being its own profile as well).  The key idea here is that we depart from just having a value to a data point, as so much of machine learning does, and move more toward creating "characters" or objects linked by their relationships; it becomes a logical flow of information rather than a black box or splattering of characteristics (machine learning "features").  This has its pros and cons, but they are pros and cons similar to those of our own intelligence.  First, a lot of data that is not tied to things we interact with would be effectively thrown out.  The second is that things are thought of in terms of, or in relation to, what we know (again, for better or worse).  This is what would be stored in the memory structure discussed above (each node in the graph representing a profile pointing to and away from other profiles, with each profile also possibly containing nested profiles).  Perhaps there is even a profile for the AI itself (self-awareness?).  
Lastly, and perhaps most importantly, this makes the structure abstract and loose enough to represent any experience, sort of like second best to actually having the A.I. rewrite its own code (by providing it a code base abstract enough it wouldn't have to).
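A minimal sketch of such a profile structure, assuming each profile holds sensory characteristics, nested sub-profiles, and weighted links to other profiles (all names and numbers here are illustrative):

```python
class Profile:
    """A node in the memory graph: anything -- object, person, situation,
    goal -- is the same abstraction, described by sensory characteristics,
    nested sub-profiles, and weighted ties to other profiles."""
    def __init__(self, name, characteristics=None):
        self.name = name
        self.characteristics = characteristics or {}  # sensory-level features
        self.parts = {}   # nested profiles (a person has a face, the face has eyes...)
        self.links = {}   # relationships: other profile name -> strength of tie

    def add_part(self, profile):
        self.parts[profile.name] = profile

    def relate(self, other, strength):
        self.links[other.name] = strength

father = Profile("father", {"voice": "deep"})
son = Profile("son")
face = Profile("face", {"eyes": 2})
father.add_part(face)             # nested profile
father.relate(son, strength=0.9)  # strong tie between two profiles
# Even the AI itself could be a profile (a crude hook for self-awareness).
self_profile = Profile("me")
print(father.links["son"], father.parts["face"].characteristics["eyes"])
```

Because everything from a goal to a face is the same node type, the structure stays loose enough to represent whatever the system encounters, which is the flexibility the paragraph above is after.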

2.  At the high-level mind again, it shouldn't take many trials to "learn" or understand something.  This is where the comparison to traditional reinforcement learning algorithms like Q-Learning falls off sharply.  The analogy I frequently use is that you don't need to get hit by a car 20 times to know it hurts.  The imagination/subconscious component helps by doing a lot of learning mentally, but another important part is being able to recognize, perhaps even score, your knowledge on what you know and don't know based on experience, and then, more importantly, act with caution on what you don't know.  This is different from just learning through many trials because how wrong you are on just one experience can make you doubt all the knowledge you have relating to it, whereas in more traditional and statistical approaches to machine learning it would just be one anomalous data point that doesn't really move the curve.  If you drop a book a thousand times and just one time it doesn't fall, it doesn't matter anymore what happened the 999 other times.  You would no longer trust anything you know about books falling.
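The book-dropping point can be sketched as a belief that tracks its own trustworthiness.  The `Belief` class is a hypothetical illustration, not a proposal for how confidence scoring would actually be computed:

```python
class Belief:
    """Track not just an expected outcome but trust in it.  One clean
    contradiction collapses that trust entirely, unlike a statistical
    average where a single anomaly barely moves the curve."""
    def __init__(self, claim):
        self.claim = claim
        self.supporting = 0
        self.trusted = True

    def observe(self, outcome):
        if outcome == self.claim:
            self.supporting += 1
        else:
            self.trusted = False  # one failure poisons the whole belief

books_fall = Belief("falls")
for _ in range(999):
    books_fall.observe("falls")   # 999 drops behave as expected
books_fall.observe("floats")      # the one time the book doesn't fall
print(books_fall.supporting, books_fall.trusted)
```

An untrusted belief would then be exactly the kind of unknown the system should act cautiously on, rather than averaging the anomaly away.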

3.  Learning is not linear.  Knowing what you don't know helps a lot with this, but another main piece is storing what you know as a map of conditional probabilities rather than weights.  I refer to this as a Bayesian Network because it's the closest thing I can think of, but it may or may not be the right term.  The idea is that rather than store everything you learn as merely variables and expected values (like much of conventional machine learning), we store the variables and expected *outcomes*.  Then, for each expected outcome, learn how to value those instead.  These outcomes are the same as goals, states, and profiles, to allow abstraction of anything it learns or experiences.  What this ends up looking like is a two-part learning system where you not only learn what is most likely to happen (outcomes) but also learn, on top of that, which outcomes are good and bad (the expected value of each outcome).  This allows you to look at the world in many dimensions instead of just a scale from negative to positive.  So for example, if I see ice cream, I might eat it based on having enjoyed it previously.  This part is the same as storing expected values (the value of ice cream is greater than zero, so eat it).  However, if I see lead in the ice cream, the conventional expected value method alone might still choose to eat it, because we would just sum the expected values of ice cream and lead individually and potentially find the total still above zero (maybe the value of ice cream just narrowly outweighs the negative in lead, or maybe I've never experienced eating lead before so it's underweighted).  Using a conditional probability of the outcomes instead, though, we then have the ability to completely reverse our opinion on the combination of the two conditions; the probability of enjoying ice cream alone is highly likely, but the probability of enjoying ice cream and lead is zero.  On top of that, it might point to other potential outcomes, such as death.  
At the very least, it would identify that it's never actually seen ice cream and lead together before and know that what it might have chosen to do with just ice cream alone is no longer valid logically when lead is introduced (an unknown factor), whereas the more conventional approach would force a default value of some sort, such as zero or a negative if cautious, for anything it might not know about.  Coupled with being able to judge the confidence of what you know or don't know, you end up no longer having a 1-dimensional intelligence that only tries to value everything on a good/bad spectrum.  This is where we allow it to learn not only what is likely to happen in any situation but what outcomes to value - in other words, what goals to have, the free will component we mentioned previously.  And each of these outcomes can be fed right back into this map to deduce what further outcomes might occur, a chain of cause-and-effect sequences that lets the AI look many steps ahead rather than make single-value aggregate decisions (good or bad).  What's powerful here is that this experience is shared throughout the system, not only in the mind and decision-making component but also in the imagination/simulation component, to simulate what might occur in various situations.  You can start deducing what might occur if certain variables change by running the simulation without having actually experienced that situation.  You can look for a certain outcome by running it backwards and coming up with criteria you would need to keep an eye out for in the real world.  As you pursue certain goals, you can now keep an eye out both for criteria you expect as well as criteria that would be red flags.  Rather than assume the road is safe to drive on for the next 10 miles based on historical data, the AI would instead walk through all the expectations of what a safe road looks like and keep watching for any deviations from the norm.  
It's almost as if the AI now has its own internal search engine and knowledge base that it uses to learn how to make decisions, which aligns with everything we've been saying about how current machine learning is more along the lines of automated senses or body functions that are used by the intelligence and are not the intelligence itself.  The most important part is that we break the linearity, which makes it difficult to "prove" that it works mathematically, but it ends up walking through the same logical processes we would when we try to make decisions in real life.
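To make the ice cream example concrete, here is a toy two-part map with outcome probabilities stored separately from outcome values.  The probabilities and values are made up, and an unseen combination deliberately returns no default rather than a forced number:

```python
# Part 1: what outcomes follow a situation (conditional on the full combination).
outcome_probs = {
    ("ice_cream",): {"enjoy": 0.95, "sick": 0.05},
    ("ice_cream", "lead"): {"enjoy": 0.0, "poisoned": 1.0},  # combo fully reverses it
}
# Part 2: how each outcome is valued, learned separately from the probabilities.
outcome_values = {"enjoy": 1.0, "sick": -1.0, "poisoned": -100.0}

def evaluate(situation):
    """Value a situation via its expected outcomes, not by summing the
    standalone values of its ingredients."""
    probs = outcome_probs.get(tuple(sorted(situation)))
    if probs is None:
        return None  # never-seen combination: flag as unknown, act with caution
    return sum(p * outcome_values[o] for o, p in probs.items())

print(evaluate(["ice_cream"]))          # positive: eat it
print(evaluate(["lead", "ice_cream"]))  # the combination reverses the decision
print(evaluate(["ice_cream", "glass"])) # unseen combination: no forced default
```

A summed-expected-value approach would score `ice_cream + lead` as the two ingredient values added together; keying the outcome distribution on the whole combination is what lets one new factor invalidate everything known about the parts.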

4.  Experience is both real world and mental (simulated).  When the A.I. "dreams" or "imagines," the simulation component feeds the same kind of intermediary data into the mind the sensory inputs layer does with the real world.  The pipes and structure of the data that comes from the simulation component and the sensory input component are the same.  You could taste the bread just by thinking about it.  You could feel your skin prick just by imagining it.  If you spend your life in a dream, you really might never know another world exists outside it.
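A trivial sketch of the shared pipe: both the real and imagined percepts below carry the same structure, so the `mind` function (a stand-in for the decision layer) cannot tell which pipe the experience arrived through.  The field names are illustrative.

```python
def perceive_real(world_state):
    """Percept built from actual sensory input."""
    return {"sense": "taste", "value": world_state["bread"]}

def perceive_imagined(memory):
    """Percept built by the simulation component from memory -- same shape."""
    return {"sense": "taste", "value": memory["bread"]}

def mind(percept):
    # The mind only ever sees the percept, never its source.
    return percept["value"]

real = mind(perceive_real({"bread": "warm and sweet"}))
imagined = mind(perceive_imagined({"bread": "warm and sweet"}))
print(real == imagined)
```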

5.  The A.I. is not born intelligent.  It might have a few base assumptions and goals (instincts), but as discussed with free will, any beliefs or knowledge no matter how deeply ingrained can be changed based on experience (both real world and mental).  It seems obvious, but from all the people I've met in industry, there seems to be this weird assumption that you can just take the A.I. and start "using" it like a lawnmower.

6.  The A.I. does not try to "predict" anything.  The perception of A.I. has been so distorted to mean big data and predictive analytics that pretty much any discussion I try to have on the topic leads itself to "how does the A.I. predict this" or "how can A.I. know the future by only seeing the past."  It doesn't.  And it's a flaw, a logical fallacy, if you try.  The A.I. doesn't predict.  It is simply aware of its surroundings.  It's the difference between trying to "predict" what will be on the road vs actually watching the road when you're driving.

7.  It is not statistical-based and not necessarily computationally expensive.  Those that have seen my past works know I often design my algorithms in a way that they don't rely on heavy number crunching or complex stats/math.  Much of what we discuss here is about structure, arrangement, and design.  There is very little, if any, stats/math involved in this particular AI approach, and there would be little, if any, manual tuning of parameters/assumptions involved.  I never really understood how something can be AI and require knobs tweaked by a human operator anyway; seems like a contradiction.

One thing people will notice across all my writings is that I refer to fallacies and other logical constructs frequently.  That permeates through how I think about approaching A.I. as well.  It greatly frustrates me that so much effort in data and machine learning blatantly resorts to logical fallacies used essentially to "lie with statistics," and I believe it's important that if you want to create something truly intelligent, the design itself needs to be free from this from the ground up (personally I believe you need to be free from this to think clearly in anything).  You only need to scroll through any list of fallacies to start recognizing much of the reasoning used in the field on that list.  At the outset, there is the overall notion that the more data you have, the better your conclusion, which crosses a number of fallacies depending on how you approach it (most commonly faulty generalizations, argument from ignorance/incredulity).  Even in the most elementary statistics book, they teach you correlation is not causation.  The issue is not so much that we shouldn't use data, but we should never act as if the data is fact, as if the 1% outlier (or even something we've never seen before) can never happen.  The simulation/imagination component of what we discuss here exists exactly to counter this deficit, but many other machine learning techniques actually straight up have no backup plan for when something unexpected happens.  Anyone who's played any strategy games will know this kind of thinking will never fly, as the enemy will find your blind spot and exploit it, yet we have use cases in the real world just leaving these blind spots open because they're "statistically impossible."  
Then there is the ever more trendy "crowd-sourced intelligence," or anything really that relies on crowd majority, which is a textbook form of the ad populum fallacy; it doesn't take much to think of times when the majority has been wrong, yet we now have algorithms actually trumpeting the fact that they base their decisions on what the majority of the population believes.  Lastly, there seems to be this belief that understanding the parts leads to an understanding of the whole.  Whether it be in the design of algorithms or in the process of how to arrive at a conclusion, there is often this desire to break things down into sub-components, measure those, and then make a judgment on the overall outcome.  This is a fallacy of composition; it neglects the value of the arrangement (in other words and ironically - creativity).  You wouldn't judge the quality of a recipe by the ingredients alone, and I hope for your sake you would never try to understand a person merely by their traits.  The crux of the issue is that people seem to have forgotten that inductive reasoning is not the same as deductive (see Data != Fact), but because it is so much easier to believe in numbers and data, that style of reasoning has overshadowed the more abstract and qualitative form of logic, even though the former does not actually allow you to decisively prove anything.

Some Closing Thoughts...

One thing to realize at the end of this is that the AI I'm proposing really has no "use case" or benefit to humanity.  It is no different than simply having another creature come into existence or another person around.  It might be more intelligent or it might not be, but at the end of the day, you have no control over it because of the free will aspect.  A lot of people don't seem to fully comprehend this and keep suggesting to apply the idea to things like self-driving cars.  The problem is such an AI wouldn't necessarily take you to where you wanted to go; it would take you to where it wanted to go.  That's what free will is.  That's what real autonomy is.  Once you realize that, you see just how much AI and "fully autonomous" has been reduced to mere marketing terms for what is really just "fully automated."

The other interesting part about the construct I've written about with the mind, the senses, and the subconscious is that it somewhat mirrors the three components of the mind in psychology: conation, cognition, and affect.  The irony is that the concept of conation has actually been abandoned and is no longer used, as the field now considers conation to just be another side of cognition or information processing.  It is all the more fitting because many in the computer science field similarly seem to believe the mind is nothing more than efficient circuitry.  I've gotten questions from machine learning specialists asking why the concept of imagination is even important and what proof I have that the mind is anything more than just a data processing algorithm with a singular objective function.  Some have even tried to argue to me that free will is nothing more than a glitch in our biological programming or that it can't exist in a deterministic universe, which is fallacious when you see it's no different than infinite numbers being calculable from finite ones and life being created from inanimate matter.  Among the same crowd are those who try to claim that nothing is truly creative, that everything is just a re-arrangement of our experiences or what's been done before, although the same folks will then admit that no amount of rearrangement or iteration would lead to truly creative moments like the jump from Newtonian physics to Einstein's general relativity.  Others will insist that life itself is only about survival and reproduction, which is a non-starter for an AI that is meant to transcend its instinctive goals.  And then there are those who would argue the machine is just zeros and ones, which is a fallacy of composition; we are just atoms and electrons, but you wouldn't equate yourself to a rock.
At least for me, it's been extremely frustrating trying to find anyone working on AI who doesn't think this way, but it kind of makes sense why the field hasn't come anywhere close to a truly sentient artificial intelligence if the people working on it do not even acknowledge things like free will or imagination in the first place.  If anything, what I'm proposing is really not so much a new idea as it is a return to more traditional, qualitative ideas on sentience and intelligence, against the trend of seemingly everyone else becoming more data-driven and quantified.

Other thoughts... if I ever get myself to sit down long enough to write them up.
- Why the mind is effectively a much simpler component than the rest of the system... only receives processed information from other components, fewer variables to decide on, similar to how humans are only conscious of the most high-level functions that go on in the brain/body.
- How memory is stored... essentially some variation of a Bayesian network (again maybe wrong term, not sure) where relationships are constantly drawn between new variables received by the mind.  Each variable can be described by the five senses.  Many variables together form an experience and can be thrown back at the sensory components to simulate an experience.  Implies that models/groupings/abstractions we create mentally are emergent properties and not actually defined anywhere.
- Everything perceived by the high-level mind (not the senses) is effectively profiling.  Whether it's a person or object, we create a profile that represents it and then add more information to it as we learn.  This lends itself to "recognizing" things in the future, whereas very often traditional machine learning just buckets data.  This concept I believe is frequently used in gaming (profiling players, etc), which is why I sometimes cite my gaming background for inspiration, but lately I've moved away from that after finding out most game AIs just cheat (which then raises the question of what I'm building).
- At the high-level mind, what you know also has to be valued at a meta level (confidence level?) based on the experience of acting on that knowledge.  For example, if you know that books fall but suddenly you see a book float in the air, even if just once out of a thousand times of dropping it, you would suddenly doubt everything you know about books dropping.  You wouldn't just mark your knowledge down a notch (most machine learning just does some form of weighted average over experience).  The idea here is that it only takes one experience to change everything you know, not many, as in the more big data / statistics oriented approaches to AI.
- Variables and goals are the same.  Variables have values representing satisfaction.  Mind starts with a few initial variables and values, similar to instincts.
- Every system has its own memory/learning algorithms, similar to the concept of muscle memory.  Ties back to the mind only handling the most high level work.
- Self-awareness and sentience (emotions, happiness/pain, awareness of others) not actually addressed directly.  Am asserting here that these properties emerge from the system without being explicitly defined.  The system will behave like a sentient being.  How do we know it is not? How do we know we are sentient other than that we behave as such?
- This is not a system that will necessarily understand English, crunch numbers, or do anything business-worthy, but it will behave very closely to something alive, such as an animal or child, with the potential to outsmart us but otherwise born with a fairly blank slate.  Since when did having sentience or intelligence ever mean otherwise? Even a human child raised outside society would not pass as intelligent on the metrics we judge AI with (see Feral Children).
- On the irrational fear of an intelligent AI: it would be no different than a really intelligent person.  You wouldn't fear or make doomsday predictions about some human who might happen to be extremely smart.  Processing data faster also doesn't necessarily mean faster learning or getting exponentially smarter.  Life still happens one day at a time, and the real world is still bound by the same physics.  There are also many aspects of life where intelligence just doesn't matter.  I use driving analogies often, but it's applicable here too.  You can train yourself as a driver for a million years and not necessarily drive any better than someone who's driven for only one.
- Came across this essay on Black & White's AI system some years after I wrote mine.  It turns out this game actually got listed in the Guinness World Records for the most intelligent AI in games.  I think it's very interesting that there are similarities in the approach and in the separation of different learning algorithms to form different components of the same "creature."  If anything, it speaks to my influences and how I grew up, and it might be part of the reason I find more in common with those from a gaming background than those with a formal AI research background, many of whom have never played games competitively or had an interest in making games in the first place.
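To make the contrast in the knowledge-confidence bullet above concrete, here is a minimal sketch of my own (all names are hypothetical, not from any existing system): it compares the usual weighted-average style of updating a belief with the idea that a single contradicting experience collapses your confidence in a rule entirely, forcing it to be re-learned.

```python
class Belief:
    """Tracks confidence in a learned rule, e.g. 'dropped books fall'."""

    def __init__(self):
        self.trials = 0
        self.confirmations = 0
        self.confidence = 0.0

    def weighted_average_update(self, confirmed: bool) -> None:
        # Typical ML-style update: one outlier barely moves the estimate.
        self.trials += 1
        self.confirmations += int(confirmed)
        self.confidence = self.confirmations / self.trials

    def surprise_update(self, confirmed: bool) -> None:
        # Sketch of the idea above: any contradiction collapses
        # confidence to zero instead of nudging a running average.
        self.trials += 1
        if confirmed:
            self.confirmations += 1
            self.confidence = self.confirmations / self.trials
        else:
            self.confidence = 0.0  # one floating book changes everything


avg, surprise = Belief(), Belief()
for _ in range(999):
    avg.weighted_average_update(True)       # 999 books fall as expected
    surprise.surprise_update(True)
avg.weighted_average_update(False)          # confidence barely dented: ~0.999
surprise.surprise_update(False)             # confidence collapses to 0.0
```

The point of the sketch is only the asymmetry: after a thousand trials, the averaging learner still trusts the rule almost completely, while the surprise-driven one treats the rule as broken.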
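The profiling bullet above can also be sketched in a few lines (again, a toy of my own making, not a reference implementation): instead of bucketing data points, keep a named profile per entity, accumulate observed traits onto it, and "recognize" something later by how well its traits overlap a known profile.

```python
from collections import defaultdict


class ProfileMemory:
    """Accumulates traits per named entity and recognizes by overlap."""

    def __init__(self):
        self.profiles = defaultdict(set)  # name -> set of observed traits

    def observe(self, name, traits):
        # Add new information onto an existing profile as we learn.
        self.profiles[name].update(traits)

    def recognize(self, traits):
        # Return the best-matching known profile by trait overlap,
        # or None if nothing matches at all.
        traits = set(traits)
        best, score = None, 0
        for name, known in self.profiles.items():
            overlap = len(traits & known)
            if overlap > score:
                best, score = name, overlap
        return best


mem = ProfileMemory()
mem.observe("book", {"rectangular", "paper", "falls when dropped"})
mem.observe("ball", {"round", "rubber", "bounces"})
print(mem.recognize({"paper", "rectangular"}))  # -> book
```

The profile itself is just an accumulating record; the "recognition" falls out of comparing a new observation against everything remembered, rather than forcing it into a pre-defined bucket.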

