
Creating Sentient Artificial Intelligence

March 10th, 2015 | Posted by pftq in Ideas
Much of what people refer to as machine learning today is considered "weak AI", in that it is not actually thinking, hypothesizing, or behaving with a sense of self.  The terms "artificial intelligence", "artificial general intelligence (AGI)", and "strong AI", as opposed to just "machine learning", are more often reserved for systems that do.  Below is a potential approach I've been ruminating on for creating an actual intelligence that behaves like a person would in any circumstance.  It's something I've loosely applied to my own projects relating to artificial intelligence, such as Tech Trader and the Conatus A.I., but I've never managed to fully explore it in the general sense due to time and resource constraints.  The term I've come to use for this approach is conative artificial intelligence, in that the AI behaves more like a creature or child than anything mechanical or data-driven.  If one reflects on intelligence in biological life, it really doesn't make sense that a truly sentient artificial intelligence would necessarily be useful for big data or other white-collar work any more than a child or dog would be.  Somewhere along the line we've managed to water down the term AI to the point that it just means anything that helps automate the job of a data analyst.  It becomes almost impossible to talk about actually creating something that behaves like a thinking, autonomous living creature - many creatures on Earth would never be at all useful for data-driven work but are nonetheless considered intelligent.  The intelligence here, like most on Earth, is merely trying to live and survive, hence the tie to the word conatus in one of my projects based around this.  The topic is kept at a high level as more of a thought experiment.  As I lack a formal academic background in the field, my terminology may not always be correct, but I ask that you see past the words and understand the concepts I am trying to convey.

First, we should look at what the state of artificial intelligence is today and what's missing.  Right now, most of what you see in industry and in practice as machine learning is closer to optimization, in that the behavior is more formulaic and reactive than that of a person who postulates and reasons.  It is looking backwards and observing, not thinking forward and acting with intent.  It is purely inductive reasoning (quantitative, statistical, observation-based) and void of any deductive reasoning or true logic (qualitative, what-if scenarios, imagination).  It lacks sentience and what's essentially conation or conatus, the "will to live."  Much of machine learning out there is more or less just automated data science or analytics, like a glorified database or search engine that retrieves and classifies information in a more dynamic way but doesn't really "act" or "think" beyond the instructions it's given.  The most impressive AI feat to date, IBM's Watson, is indisputably effective at relating text together but at the end of the day doesn't actually understand what those words mean other than how strongly they relate to other words.  Earlier feats achieved by A.I. are fundamentally the same, in that the supposed A.I. is really an extremely efficient data processing algorithm with massive computational resources rather than something actually intelligent - in other words, a brute force approach.  One of the most popular and widely applicable algorithms as of this writing is deep learning, which is essentially the use of multiple machine learning algorithms layered on top of each other, each shaping the inputs received and feeding the refined data to a higher layer, but even that at the end of the day is effectively just a much more efficient way of finding patterns or doing repetitive tasks.  This doesn't get the system any closer to actually thinking about or understanding what it processes.  It is still only taking inputs, reacting optimally to them or filtering them down, and spitting back out results.  What's out there right now is less of a brain and more akin to a muscle trained by repetition, like that of the hand or eye (aka muscle memory).  The analogy only grows stronger when we think about how we teach ourselves in school; the last thing we want is for our students to learn through sheer repetition or rote memory (parroting the textbook rather than understanding the concepts or figuring things out through reason), yet we currently train our AIs that way.  What repetition is good for, however, is training our body to remember low-level skills and tasks so that we do not have to think.  What we've been building augments our senses and abilities, the same way a robot suit would, but the suit requires a user and is not itself intelligent.  We've built the eyes to see the data but not the mind to think about it.

The immediate response to this issue would be to add a sort of mind or command layer that takes in the abstracted results from the machine learning algorithms and actually makes decisions on them.  That mirrors somewhat how our own body works, in that our mind is never really focused on the lower-level functions of our body.  For example, if we want to run forward, we don't think about every step we take or every muscle we use; our body has been "trained" to do that automatically without our conscious effort, and that is most analogous to what we call machine learning algorithms.  However, even if we add this extra layer, it's still conceptually only reacting to results, not thinking.  The easiest sanity check is to always tie it back to what you would think if it were a human being.  If we saw a human that only reacted constantly to the environment, to the senses, without ever stepping back to ponder, experiment, or figure out what that person wants to do, we'd think that person was an idiot or very shortsighted.  It's like a person who only gets pushed around and never really makes decisions, never really thinks more than a few steps ahead or about things beyond the most immediate goals.  If left in a box with no external influences, that person would just freeze, lose all purpose, and die.

What's missing are two things: imagination and free will.  Whether either even exists is itself debated philosophically.  Some say that these functions, as well as sentience/consciousness itself, may not be explicit parts of the mind but rather emergent properties of a complex system.  I agree that these are likely emergent properties, but I do not think they require a complex system.  My personal belief is that all these aspects of the mind (imagination, free will, sentience/consciousness) are actually the most basic, fundamental features that even the smallest insect minds have - that sentience is the first thing to come when the right pieces come together and the last thing to go no matter how much you chip away afterward.  Whatever your belief is, I think we can at least agree that a person appears to have this sense of self much more than a machine does.

The first feature, imagination, I will define as the ability for the system to simulate (imagine), hypothesize, or postulate - to think ahead and plan rather than just react and return the most optimal result based on some set structure or objective function.  This is the most immediate and apparent difference between what machine learning algorithms do today and what an actual person does.  A person learns from both experience (experiential learning) and thought experiments; we don't need to experience getting hit by a car to know that it hurts.  Ironically, thought experiments and other forms of deductive reasoning are often looked down on in favor of more inductive, stats- and observation-based ways of thinking, and I suspect this bias is what leads so much of the industry to design machine learning algorithms the same way.  Yet Einstein, Tesla, and many others were notorious for learning and understanding the world through sheer mental visualization and thought experiments as opposed to trial and error.  To replicate this in a machine, I propose having the system be able to simulate the environment around it (or simulate one it creates mentally, in the case of imagination).  These need not be perfect simulations of the real world, just as a person's mental view of the world is very much subjective and only an approximation.  The reason I say simulations and not models is that I actually mean world simulations, not just finding variables and probabilities of outcomes (which many in the field unfortunately equate to simulation); that is, the simulation has to provide an actual walk-through experience that could substitute for an experience in the real world with all senses.  In fact, the A.I. would practically not be able to differentiate between a real-world experience and one mentally simulated.  It would run through the same pipes and carry similar sensory input data - like being able to taste the bread just by thinking about it.  The technology to build such simulations is already available across many industries such as gaming, film, and industrial engineering, usually in the form of a physics engine or something similar.  Like the human subconscious, these simulations would always be running in the background.  The order of simulations to run would depend on their relevance to the situation at hand, with the most relevant simulations ranked at the top (similar to a priority queue data structure).  The ranking for relevance here is done via a heuristic based on past experience.  This lends itself to usually finding the local optima before straying far enough to find something better.  What is interesting about this implementation is that it mimics both the potential and the limitations of our own imagination.  Given time or computational power to do more simulations, the system would eventually find something more creative and deviant from the norm, while in a time crunch or shortage of resources, it would resort to something "off the top of its head", similar to what a person would do.  This also opens up the possibility of the system learning what it could have done as opposed to just the action it took (i.e. regret), learning from what-if simulations (thought experiments) instead of just from experience, learning from observations of others alongside its own decisions, and so on.  It allows the A.I. to break away from being tied to only what it's seen before and lets it consider what could happen in the future as well.  In other words, the learning can now happen in a forward-looking manner rather than just a backward-looking one.
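To make the priority-queue idea concrete, here is a minimal sketch in Python.  The class name and the simulate_fn/relevance_fn callbacks are hypothetical scaffolding of my own, not an existing library; the point is only to show relevance-ranked simulations being consumed under a time or compute budget, so that the "off the top of its head" options surface first and the more deviant ones only appear when there is room to keep simulating.

```python
import heapq
import itertools

class Imagination:
    """Hypothetical sketch of the 'subconscious' simulation queue.

    simulate_fn: runs one imagined walk-through of a scenario (e.g. a call
                 into a physics/game engine) and returns its result.
    relevance_fn: heuristic score for how relevant a scenario is to the
                  situation at hand, learned from past experience.
    """

    def __init__(self, simulate_fn, relevance_fn):
        self.simulate = simulate_fn
        self.relevance = relevance_fn
        self._queue = []                      # max-priority queue via negated scores
        self._tiebreak = itertools.count()    # keeps heap entries comparable

    def consider(self, scenario):
        # Rank the scenario by relevance; the most relevant float to the top.
        score = self.relevance(scenario)
        heapq.heappush(self._queue, (-score, next(self._tiebreak), scenario))

    def run_background(self, budget):
        """Run as many simulations as the time/compute budget allows.

        A small budget only surfaces what is 'off the top of its head'
        (likely a local optimum); a larger budget eventually strays far
        enough from the norm to find something more creative.
        """
        results = []
        while self._queue and len(results) < budget:
            _, _, scenario = heapq.heappop(self._queue)
            results.append((scenario, self.simulate(scenario)))
        return results
```

A real system would keep refilling the queue from current percepts and memory, so that, like the subconscious described above, it never stops simulating in the background.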

The second feature, free will, I will define as the ability for the system to set and pursue its own goals.  Right now, even with deep learning, the system will always be striving to achieve some objective function set by the creator.  A person may start with a few simple objectives, such as staying alive, but most people will gradually come up with their own aspirations.  To address this, we can use variations on existing machine learning technologies.  We can take something like approximate Q-Learning, but rather than just learn to value things leading up to the main objective, we can allow the main objective to change altogether if the things leading up to the original objective build up enough value.  Essentially it's a lifting of the constraint that nothing can ever exceed the value of the original goal.  I've written before that free will in a deterministic construct is conceptually analogous to infinity amongst finite numbers, and that's what we're doing in code here at a very abstract level: turning the original bounded function into an effectively unbounded one.  What would this lead to in practice? As an anecdotal example, we can picture a machine (or person) that at first prioritizes eating but learns over time to value the actions that allow it to stay fed, as well as the people that helped it or taught it those actions.  Over time, the satisfaction it receives from performing these other actions, which may include things like helping other people or building tools, may lead to other objectives being valued very highly, perhaps more highly than the original instinctive goals it began with (think of starving artists or selfless heroes).  What's interesting here is the implication of an objective function that can change over time.  It means the system will essentially learn for itself what to value and prioritize - whether that be certain experiences/states, objects, or even other people (as we discuss later, all of these and goals themselves are technically the same abstraction).  Philosophically, this also means that it will learn its own morals and that we cannot necessarily force it to share ours, as that would inhibit its ability to learn and be autonomous.  In other words, we cannot create a system that has free will and at the same time ask that it only serve our interests; the two paths are contradictory.
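As a rough illustration of the lifted constraint (not the exact mechanism, and with names and a promotion rule that are my own assumptions), here is a sketch of a tabular Q-Learning agent whose goal is allowed to drift: if some intermediate state accumulates more learned value than the current goal, it becomes the new goal.

```python
from collections import defaultdict

class DriftingGoalAgent:
    """Q-Learning variant in which the objective itself can change over time."""

    def __init__(self, initial_goal, alpha=0.1, gamma=0.9):
        self.goal = initial_goal                 # starts as an 'instinct', e.g. "stay fed"
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)              # (state, action) -> learned value
        self.state_value = defaultdict(float)    # value each state accumulates in its own right
        self.state_value[initial_goal] = 1.0     # the instinctive goal's starting value

    def reward(self, state):
        # Reward is still defined by the current goal, but the goal can drift.
        return 1.0 if state == self.goal else 0.0

    def update(self, state, action, next_state, next_actions):
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        target = self.reward(next_state) + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

        # Track the satisfaction received in reaching this state itself,
        # not just its usefulness on the way to something else.
        self.state_value[next_state] = max(self.state_value[next_state], target)

        # The lifted constraint: a state that builds up more value than the
        # current goal simply becomes the new goal.
        if self.state_value[next_state] > self.state_value[self.goal]:
            self.goal = next_state
```

In the anecdote above, "helping the people who kept it fed" could eventually out-value "eating" itself, at which point the agent's own reward signal starts serving the new goal instead of the original instinct.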

These are the two main features I believe machine learning needs to truly be intelligent, sentient, and conative - to become an actual artificial intelligence as opposed to just an automation of data science or statistical analysis.  The interesting part about this overall discussion is that all the pieces I propose are already implementable with existing technologies and algorithms.  The key is putting it all together, which I detail below.  Keep in mind that what I'm proposing will be intelligent but not necessarily useful or applicable to business, in the same way that a child or dog might be intelligent but not necessarily able to crunch spreadsheets, read, or obey commands.  For whatever reason, we have equated these traits with artificial intelligence when there are plenty of biological intelligences that do not share them (not even humans if isolated from society - see Feral Children).  Hence the use of the word "sentient" as opposed to "sapient," though it's arguable that one really doesn't come without the other.  What I am proposing here is essentially an AI with the capacity for conatus, in that it will be autonomous and do whatever it takes to survive, essentially like an actual living being.


The sentient and conative AI I propose would be divided into three main components: the mind, the senses, and the subconscious.  Much of machine learning today is focused only on building the senses - again, like a robot suit without the user.  What we do here is take that robot suit and add an actual mind, along with the subconscious that constantly plays in the back of our minds.  At the top, the mind would be responsible for actually making decisions; it is the control center, taking in information from the other two components.  The objectives, values, and memories, as well as the initial values previously described, would also go here, as they are the criteria by which the mind makes decisions; abstractly these are all the same thing used in different ways (objectives are highly valued states, which come from what you've experienced or remember, and so on).  The closest existing technologies for this would be some variation of reinforcement learning (Q-Learning) combined with some variation of Bayesian Networks to generalize the state space, except with heavy modification to allow for changing goals ("free will") and other things we discuss later.  Underneath that, and feeding refined information into the mind, would be the sensory inputs (our five senses: sight, hearing, etc).  The closest existing technologies for these would be Deep Learning or neural networks, which today are already being applied to sensory inputs like vision and sound.  Only the filtered results from each sensory input would actually make it to the mind, so that the amount of information the mind has to deal with decreases as we move up through the system.  This is similar to our own bodies with the concept of muscle memory, in that we don't consciously micromanage every function in our body but instead take in filtered information at a higher level that we can make sense of.  Our eyes see light, but our minds see the people in front of us.  Our hands type on the keyboard, but we just think about the words we want to write.  The sensory input layer is essentially the piece that takes in information from the external world and abstracts it into something we can think about.  It is also the component that allows the system to take actions and learn to perform certain actions better or worse.  In other words, it is the body.  In an actual implementation, it would probably include not only the five senses but a general abstraction of all actions possible (if it were a robot, it'd include movement of each muscle and joint, along with the senses attributed to each).  Lastly, the subconscious is responsible for creating simulations and is essentially what the mind uses to hypothesize, imagine, or speculate.  It is constantly running simulations based on inputs from the environment or on memories of the characteristics of past environments fetched from the mind (stored in a Bayesian Network or otherwise).  Similar to our own subconscious, only the most relevant and highest-ranked simulation results would be fed back to the mind, or there would be too much to handle.  The closest technologies we have for the simulation piece are those we already apply in games and industrial engineering for simulating the real world.  Overall, each of these individual components already exists or is being worked on today in some form.  The interesting part here is combining them.  It's the arrangement that matters, not the pieces themselves.
Note, however, that when I reference existing technologies, it's always the closest analogue, not an exact match.  Much of what I write here comes from my own thoughts before I was aware of any AI or machine learning terminology and ideas; the names and comparisons came after, to provide context and familiarity to the reader.
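The wiring of the three components might look something like the structural sketch below.  Every interface here (abstract, top_results, decide, act) is a hypothetical placeholder of mine to show the arrangement, not a reference to any existing framework; the learning algorithms inside each component are the parts approximated above by reinforcement learning, deep networks, and simulation engines.

```python
class ConativeAgent:
    """Structural sketch: the mind decides, the senses abstract and act,
    the subconscious imagines in the background."""

    def __init__(self, senses, subconscious, mind):
        self.senses = senses              # e.g. deep networks over raw sight/sound, plus actuators
        self.subconscious = subconscious  # background world simulations
        self.mind = mind                  # values, goals, memory, and the actual decision-making

    def step(self, raw_input):
        # 1. The body abstracts raw data into something the mind can think about;
        #    the mind never sees the light, only the people in front of it.
        percepts = self.senses.abstract(raw_input)

        # 2. The subconscious keeps simulating and surfaces only its most
        #    relevant, highest-ranked results.
        imagined = self.subconscious.top_results(percepts, self.mind.memory)

        # 3. The mind decides from both real and imagined experience,
        #    then acts back out through the body.
        action = self.mind.decide(percepts, imagined)
        return self.senses.act(action)
```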

Perhaps more important than the structure is the philosophy behind how intelligence works.  After all, only naming the structure would lend itself to a fallacy of composition.  You cannot understand how a car works by just being told there is an engine and a set of wheels.  You cannot understand how to prepare a meal by just being told the ingredients.  And hence it is meaningless to discuss the components of intelligence without discussing what we want to do with them.  Some key points below:

1.  At the high-level mind (specifically in the memory component), everything is "profiled" as an object/person/situation/state (abstractly they are all the same).  Even a goal is represented the same way; it is just a particular profile of highest value/priority.  This is similar to concepts in games - video games but also poker - where you might create a mental image of who and what you are interacting with even if you don't necessarily know them (the name is just one of many data points in that profile, after all).  Each profile is a hierarchy of characteristics that can each be described in terms of the sensory inputs (our five senses) and can even contain references to other profiles (such as the relationship between two people, or just the fact that a person has a face, with the face being its own profile as well).  The key idea here is that we depart from simply attaching a value to a data point, as so much of machine learning does, and move toward creating "characters" or objects linked by their relationships; it becomes a logical flow of information rather than a black box or splattering of characteristics (machine learning "features").  This has its pros and cons, but they are pros and cons similar to those of our own intelligence.  For one, a lot of data that is not tied to things we interact with would effectively be thrown out.  Second, things are thought of in terms of, or in relation to, what we know (again, for better or worse).  This is what would be stored in the memory structure discussed above (each node in the graph representing a profile pointing to and away from other profiles, with each profile also possibly containing nested profiles).  Perhaps there is even a profile for the AI itself (self-awareness?).  Lastly, and perhaps most importantly, this makes the structure abstract and loose enough to represent any experience - second best to actually having the A.I. rewrite its own code (by providing it a code base abstract enough, it wouldn't have to).
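A minimal data-structure sketch of such a profile might look like the following.  The field names (traits, relations, value) are illustrative assumptions, chosen only to show a hierarchy of sensory-describable characteristics with references to other profiles, where a goal is just a profile of unusually high value.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Profile:
    """One node in the memory graph: a person, object, situation, or state."""
    name: Optional[str] = None                                    # just one data point among many
    traits: dict[str, object] = field(default_factory=dict)       # characteristics in terms of the senses
    relations: dict[str, Profile] = field(default_factory=dict)   # links to other profiles
    value: float = 0.0                                             # a goal is simply a highest-value profile

# A face is its own profile nested inside a person's profile; relationships
# between people are links between profiles; there may even be a profile
# for the AI itself.
face = Profile(traits={"shape": "oval", "eyes": "brown"})
alice = Profile(name="Alice", traits={"voice": "soft"}, relations={"face": face})
self_profile = Profile(name="self", relations={"friend": alice})
```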

2.  At the high-level mind again, it shouldn't take many trials to "learn" or understand something.  This is where the comparison to traditional reinforcement learning algorithms like Q-Learning falls off sharply.  The analogy I frequently use is that you don't need to get hit by a car 20 times to know it hurts.  The key is being able to recognize, perhaps even score, your knowledge of what you know and don't know based on experience, and then, more importantly, to act with caution on what you don't know.  This is different from just learning through many trials, because being wrong on just one experience can make you doubt all the knowledge you have relating to it, whereas in more traditional, statistical approaches to machine learning it would just be one anomalous data point that doesn't really move the curve.  If you drop a book a thousand times and just one time it doesn't fall, it doesn't matter anymore what happened the other 999 times.  You would no longer trust anything you know about books falling.
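A toy version of this "one contradiction collapses the belief" behavior, as opposed to a weighted average, might look like the sketch below; the class and method names are hypothetical.

```python
class Belief:
    """Confidence in a piece of knowledge, scored by experience of acting on it.

    Unlike a running average, a single contradiction does not nudge the score;
    it collapses trust in the belief entirely, so the system falls back to
    acting with caution until it relearns what is going on.
    """

    def __init__(self, description):
        self.description = description
        self.confirmations = 0
        self.contradicted = False

    def observe(self, outcome_matched_belief: bool):
        if outcome_matched_belief:
            self.confirmations += 1
        else:
            self.contradicted = True   # one floating book outweighs 999 falling ones

    def act_with_caution(self) -> bool:
        return self.contradicted or self.confirmations == 0


books_fall = Belief("dropped books fall")
for _ in range(999):
    books_fall.observe(True)
books_fall.observe(False)              # the one time the book floats
print(books_fall.act_with_caution())   # True: nothing about falling books is trusted anymore
```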

3.  Learning is not linear.  Knowing what you don't know helps a lot with this, but another main piece is storing what you know as a map of conditional probabilities rather than weights.  I refer to this as a Bayesian Network because it's the closest thing I can think of, but it may or may not be the right term.  The idea is that rather than store everything you learn as merely variables and expected values (like much of conventional machine learning), we store the variables and expected *outcomes*.  Then, for each expected outcome, we learn how to value it.  These outcomes are the same abstraction as goals, states, and profiles, which allows extremely abstract representation of anything the system learns or experiences.  What this ends up looking like is a two-part learning system where you not only learn what is most likely to happen (outcomes) but also learn, on top of that, which outcomes are good and bad (the expected value of each outcome).  This allows you to look at the world in many dimensions instead of just on a scale from negative to positive.  For example, if I see ice cream, I might eat it based on having enjoyed it previously.  This part is the same as storing expected values (the value of ice cream is greater than zero, so eat it).  However, if I see lead in the ice cream, the conventional expected-value method alone might still choose to eat it, because we would just sum the expected values of ice cream and lead and potentially find the total still above zero (maybe the value of ice cream just narrowly outweighs the negative of lead, or maybe I've never experienced eating lead before, so it's underweighted).  Using a conditional probability of the outcomes instead, we have the ability to completely reverse our opinion on the combination of the two conditions; the probability of enjoying ice cream alone is high, but the probability of enjoying ice cream and lead is zero.  On top of that, it might point to other potential outcomes, such as death.  At the very least, it would identify that it has never actually seen ice cream and lead together before and know that what it might have chosen to do with ice cream alone is no longer logically valid when lead (an unknown factor) is introduced, whereas the more conventional approach would force a default value of some sort, such as zero or a negative if cautious, for anything it might not know about.  Coupled with being able to judge the confidence of what you know or don't know, you no longer end up with a one-dimensional intelligence that only tries to value everything on a good/bad spectrum.  This is where we allow it to learn not only what is likely to happen in any situation but also which outcomes to value - in other words, what goals to have, the free will component we mentioned previously.  And each of these outcomes can be fed right back into this map to deduce what further outcomes might occur, a chain of cause-and-effect sequences that lets the AI look many steps ahead rather than make single-value aggregate decisions (good or bad).  What's powerful here is that this experience is shared throughout the system, not only in the mind and decision-making component but also in the imagination/simulation component, to simulate what might occur in various situations.  You can start deducing what might occur if certain variables change by running the simulation without having actually experienced that situation.  You can look for a certain outcome by running it backwards and coming up with criteria you would need to keep an eye out for in the real world.  As you pursue certain goals, you can now watch both for the criteria you expect and for criteria that would be red flags.  Rather than assume the road is safe to drive on for the next 10 miles based on historical data, the AI would instead walk through all the expectations of what a safe road looks like and keep watching for any deviations from the norm.  It's almost as if the AI now has its own internal search engine and knowledge base that it uses to make decisions, which aligns with everything we've been saying about how current machine learning is more along the lines of automated senses or body functions that are used by the intelligence and is not the intelligence itself.  The most important part is that we break the linearity, which makes it difficult to "prove" mathematically that it works, but it ends up walking through the same logical processes we would when we try to make decisions in real life.
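Here is a sketch of the two-part learning described above, with the ice cream example.  For each *combination* of conditions we store which outcomes followed, and separately we learn a value for each outcome; treating condition sets jointly is what lets "ice cream + lead" reverse the conclusion drawn from "ice cream" alone.  The class and its methods are my own illustrative naming, not an established algorithm.

```python
from collections import defaultdict

class OutcomeMap:
    """Two-part learning: likely outcomes per condition set, plus a learned
    value for each outcome."""

    def __init__(self):
        self.outcome_counts = defaultdict(lambda: defaultdict(int))  # conditions -> outcome -> count
        self.outcome_value = defaultdict(float)                      # learned value of each outcome

    def record(self, conditions, outcome, value=0.0):
        self.outcome_counts[frozenset(conditions)][outcome] += 1
        self.outcome_value[outcome] = value

    def likely_outcomes(self, conditions):
        key = frozenset(conditions)
        if key not in self.outcome_counts:
            return None   # never seen this combination: unknown territory, act with caution
        total = sum(self.outcome_counts[key].values())
        return {o: n / total for o, n in self.outcome_counts[key].items()}


m = OutcomeMap()
m.record({"ice cream"}, "enjoyment", value=+1.0)
print(m.likely_outcomes({"ice cream"}))          # {'enjoyment': 1.0} -> worth doing
print(m.likely_outcomes({"ice cream", "lead"}))  # None -> the old conclusion no longer applies
```

Because the outcomes are the same abstraction as states and goals, each predicted outcome could be fed back in as a new set of conditions to chain further outcomes, which is the many-steps-ahead walkthrough described above.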

4.  Experience is both real-world and mental (simulated).  When the A.I. "dreams" or "imagines," the simulation component feeds the same kind of intermediary data into the mind that the sensory input layer does with the real world.  The pipes and the structure of the data coming from the simulation component and the sensory input component are the same.  You could taste the bread just by thinking about it.  You could feel your skin prick just by imagining it.  If you spent your life in a dream, you really might never know another world exists outside it.
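In implementation terms, that just means both producers emit the same intermediary structure into the mind's input channel; a tiny sketch follows, with field names that are assumptions of mine.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Percept:
    """One unit of intermediary sensory data fed to the mind.  The real senses
    and the simulation component both emit this same structure, so the mind
    cannot tell, and does not need to tell, which world it came from."""
    sight: Optional[Any] = None
    sound: Optional[Any] = None
    taste: Optional[Any] = None
    touch: Optional[Any] = None
    smell: Optional[Any] = None

def from_senses(raw: dict) -> Percept:
    # Abstracted from the external world (the actual processing is elided).
    return Percept(sight=raw.get("camera"), sound=raw.get("microphone"))

def from_simulation(imagined: dict) -> Percept:
    # Produced by the subconscious: tasting the bread just by thinking about it.
    return Percept(taste=imagined.get("bread"), touch=imagined.get("skin_prick"))
```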

5.  The A.I. is not born intelligent.  It might have a few base assumptions and goals (instincts), but as discussed with free will, any beliefs or knowledge no matter how deeply ingrained can be changed based on experience (both real world and mental).  It seems obvious, but from all the people I've met in industry, there seems to be this weird assumption that you can just take the A.I. and start "using" it like a lawnmower.

6.  The A.I. does not try to "predict" anything.  It is simply trying to live, to survive.  The perception of A.I. has been so distorted to mean big data and predictive analytics that pretty much any discussion I try to have on the topic devolves into "how does the A.I. predict this" or "how can the A.I. know the future by only seeing the past."  It doesn't.  And it's a flaw, a logical fallacy, if you try.  The A.I. doesn't predict.  It is simply aware of its surroundings.  It's the difference between trying to "predict" what will be on the road and actually watching the road when you're driving.

One thing people will notice across all my writings is that I refer to fallacies and other logical constructs frequently.  That permeates how I think about approaching A.I. as well.  It greatly frustrates me that so much effort in data and machine learning blatantly resorts to logical fallacies, essentially used to "lie with statistics," and I believe that if you want to create something truly intelligent, the design itself needs to be free of this from the ground up (personally, I believe you need to be free of this to think clearly in anything, really, but that's for another discussion).  You only need to scroll through any list of fallacies to start recognizing much of the reasoning used in the field on that list.  At the outset, there is the overall notion that the more data you have, the better your conclusion, which crosses a number of fallacies depending on how you approach it (most commonly faulty generalization and argument from ignorance/incredulity).  Even the most elementary statistics book teaches that correlation is not causation.  The issue is not so much that we shouldn't use data, but that we should never act as if the data were fact, as if the 1% outlier (or even something we've never seen before) could never happen.  The simulation/imagination component discussed here exists exactly to counter this deficit, but many other machine learning techniques simply have no backup plan for when something unexpected happens.  Anyone who's played a strategy game knows this kind of thinking will never fly, as the enemy will find your blind spot and exploit it, yet we have use cases in the real world just leaving these blind spots open because they're "statistically impossible."  Then there is the ever more trendy "crowd-sourced intelligence," or anything that relies on crowd majority, which is a textbook ad populum fallacy; it doesn't take much to think of times when the majority has been wrong, yet we now have algorithms trumpeting the fact that they base their decisions on what the majority of the population believes.  Lastly, there seems to be this belief that understanding the parts leads to an understanding of the whole.  Whether in the design of algorithms or in the process of arriving at a conclusion, there is often this desire to break things down into sub-components, measure those, and then make a judgement on the overall outcome.  This is a fallacy of composition; it neglects the value of the arrangement (in other words, and ironically, creativity).  As mentioned before, you wouldn't judge the quality of a recipe by the ingredients alone, and I hope for your sake you would never try to understand a person merely by their traits.  The crux of the issue is that people seem to have forgotten that inductive reasoning is not the same as deductive, but because it is so much easier to believe in numbers and data, that style of reasoning has overshadowed the more abstract and qualitative form of logic, even though the former does not actually allow you to decisively prove anything.

More to come when I have time... see notes below on other areas I plan to write on if you're curious.

Some Closing Thoughts...

One thing to realize at the end of this is that the AI I'm proposing really has no "use case" or benefit to humanity.  It is no different than simply having another creature come into existence or another person around.  It might be more intelligent or it might not be, but at the end of the day, you have no control over it because of the free will aspect.  A lot of people don't seem to fully comprehend this and keep suggesting that I apply the idea to things like self-driving cars.  The problem is that such an AI wouldn't necessarily take you where you wanted to go; it would take you where it wanted to go.  That's what free will is.  That's what real autonomy is.  Once you realize that, you see just how much "AI" and "fully autonomous" have been reduced to mere marketing terms for what is really just "fully automated."  In a way, what I'm really proposing here is not just artificial intelligence (A.I.) but artificial life (A.L.).

The other interesting part about the entire construct I've written about - the mind, the senses, and the subconscious - is that it somewhat mirrors the three components of the mind in psychology: conation, cognition, and affect.  Funny enough, I was not aware of this before writing this post, and it took me a few years to settle on the name conative AI for the idea (see my post Conative AI), but it makes all the more sense when put in the perspective of existing theories of mind.  The true irony, of course, is that the concept of conation has been all but abandoned, as the field now considers conation to be just another side of cognition or information processing.  It is ironic and all the more fitting because one of the distinguishing characteristics of the AI I propose is that it actually separates information processing from decision-making, which is similarly unorthodox and counter to many in the computer science field who believe the mind is nothing more than efficient circuitry and chemical processes.  I've actually gotten questions from machine learning specialists asking why the concept of imagination is even important and what proof I have that the mind is anything more than just an efficient data processing algorithm.  It's why I keep using the analogy for Tech Trader that while conventional algos would crunch a thousand variables to cross the street, the AI here would just look down the street and see there is no car; then it is physically impossible to be hit.  It is akin to not having to stick your hand in a fire to learn not to do it; the irony, of course, is that the response I've gotten from machine learning scientists is that this just means the machine needs more data.  Some have even tried to argue to me that free will is nothing more than a glitch in our biological programming, or that nothing is truly creative, that everything is just a rearrangement of our experiences or what's been done before - although the same folks will then admit that no amount of rearrangement or iteration would lead to truly creative moments like the jump from Newtonian physics to Einstein's general relativity.  Others would say we cannot create what we don't fully understand (the mind), which is a form of the argument from ignorance fallacy (more specifically, argument from incredulity), claiming something is impossible because the proof or knowledge does not exist.  And then there are those who would argue the machine is just zeros and ones, which is sheer fallacy of composition; we are just atoms and electrons, but you wouldn't equate yourself to a rock.  Free will arising from a deterministic universe is no different than infinity arising from finite numbers or life arising from inanimate matter.  If the people working on it won't even acknowledge things like free will or imagination, then it's no wonder the field hasn't come anywhere close to a truly sentient artificial intelligence.  It's almost as if the true uniqueness of the approach here is that it is essentially a return to form for traditional, qualitative ideas in psychology and intelligence, against the trend of most everyone else becoming more data-driven and quantified.

More to come... when I can get myself to sit down long enough to write it.
- Why the mind is effectively a much simpler component than the rest of the system... only receives processed information from other components, fewer variables to decide on, similar to how humans are only conscious of the most high-level functions that go on in the brain/body.
- How memory is stored... essentially some variation of a Bayesian network where relationships are constantly drawn between new variables received by the mind.  Each variable can be described by the five senses.  Many variables together form an experience and can be thrown back at the sensory components to simulate an experience.  Implies that models/groupings/abstractions we create mentally are emergent properties and not actually defined anywhere.
- Everything perceived by the high-level mind (not the senses) is effectively profiling.  Whether it's a person or an object, we create a profile that represents it and then add more information to it as we learn.  This lends itself to "recognizing" things in the future, whereas traditional machine learning very often just buckets data.  This concept, I believe, is frequently used in gaming (profiling players, etc), which is why I sometimes cite my gaming background for inspiration, but lately I've moved away from that after finding out most game AIs just cheat (which then raises the question: what am I building, then?).
- At the high-level mind, what you know also has to be valued at a meta level (a confidence level?) based on the experience of acting on that knowledge.  For example, if you know that books fall but suddenly you see a book float in the air, even if just once out of a thousand drops, you would suddenly doubt everything you know about books dropping.  You wouldn't just mark your knowledge down a notch (most machine learning just does some form of weighted average over experience).  The idea here is that it only takes one experience to change everything you know, not many, unlike the more big data / statistics oriented approaches to AI.
- Variables and goals are the same.  Variables have values representing satisfaction.  Mind starts with a few initial variables and values, similar to instincts.
- Every system has its own memory/learning algorithms, similar to the concept of muscle memory.  Ties back to the mind only handling the most high level work.
- Self-awareness and sentience (emotions, happiness/pain, awareness of others) not actually addressed directly.  Am asserting here that these properties emerge from the system without being explicitly defined.  The system will behave like a sentient being.  How do we know it is not? How do we know we are sentient other than we behave like so?
- This is not a system that will necessarily understand English, crunch numbers, or do anything necessarily business worthy, but it will behave very closely to something alive, such as an animal or child, with the potential to outsmart us but otherwise born with a fairly empty slate.  Since when did having sentience or intelligence ever mean otherwise? Even a human child born without society would not pass as intelligent on the metrics we judge AI with (see Feral Children).
- Came across this essay on Black and White's AI system some time after I wrote mine.  It turns out this game actually got listed in the Guinness World Records for the most intelligent AI in games.  I think it's very interesting that there are similarities in the approach and in the separation of different learning algorithms to form different components of the same "creature."  If anything, it speaks to my influences and how I grew up, and it might be part of the reason I find more in common with those from a gaming background than those with a formal AI research background, many of whom have never played games competitively or had an interest in making games in the first place.

Responses

  1. Richard Preschern said,
    Apr-17-2016, 11:16pm

    Fascinating approach and thank you for clearly articulating your ideas and their applications here. I will closely follow your progress.

  2. AtakanTE said,
    May-30-2016, 11:18am

    In case you have not read yet: http://www.goodreads.com/book/show/24612233-the-master-algorithm

  3. Will said,
    Jul-14-2016, 01:16pm

    Great article, I'll be reading it again in the near future. Thank you!
