
Conatus - A Conative Approach to AI

April 25th, 2016 | Posted by pftq in Ideas
    For the longest time, I didn't really have a name for the approach to AI I proposed in my write-up Creating Sentient Artificial Intelligence.  Buzzwords like AI, machine learning, and even sentience are thrown around so loosely that, for a while, I actually avoided calling what I was doing AI, so as not to get lumped in with what is really just data science or statistical analysis done in an automated fashion.  Even now, when I explicitly point out that the approach I suggest would result in something that behaves more like a child or animal than anything to do with data processing or pattern recognition, the first feedback I get is to somehow apply it to anomaly detection, price optimization, or classification - in other words, big data.  Somewhere along the line, we have managed to water down the term AI to the point that it just means anything that helps automate the job of a data analyst.  It becomes almost impossible to talk about actually creating something that behaves like a thinking, autonomous living creature - many of which on Earth would never be useful for data-driven work but are nonetheless considered intelligent.

     One of my goals in the longer term is to take this approach to AI and apply it more generally - not in the sense of applying it to multiple industries, but of actually creating an almost living, self-preserving program that acts and explores on its own without being bound to any particular environment or niche (like Tech Trader, but not limited to the environment of the stock market).  It took me a long time, but I finally found a word that most closely describes the nature of this AI - conatus.  The word describes the instinctive need to exist and grow - essentially the "will to live."  It fits the AI I suggest above in that the AI is merely trying to do whatever it takes to survive.  It is not necessarily that smart or capable (at least when it comes to data, math, or other first-world problems), but much like an animal cast into an unknown environment is far better at survival than a human who has spent most of his life in a cubicle, the conative AI would be much better at adapting to a changing environment than an algorithm designed to optimize for value or accuracy.  It is the difference between approaching AI through inductive reasoning (quantitative, statistical, observational) and deductive reasoning (qualitative, what-if scenarios, imagination).  For whatever reason, we seem to have forgotten that inductive reasoning only describes how we gather information, not how we think about it - enough so that I've actually gotten questions from machine learning specialists asking why the concept of imagination is even important and what proof I have that the mind is anything more than an efficient data processing system.  It is akin to not having to stick your hand in a fire to learn not to do it; the irony, of course, is that the response I've gotten from machine learning scientists is that this just means the machine needs more data.

     The other play on words here is that the variation conative, or more specifically conation, describes a third component of the mind in psychology (alongside cognition and affect) covering autonomy and free will, both of which are also major themes in the approach to AI described above.  The idea that the AI may set its own goals and values over time, as well as the ability for the same AI to run fully autonomously with no human intervention, is perhaps an even more defining characteristic.  In a way, what I'm really proposing here is not just artificial intelligence (A.I.) but artificial life (A.L.).  The irony, of course, is that the concept of conation has actually been abandoned and is no longer used, as the field now considers conation to be just another side of cognition or information processing.  It is ironic and all the more fitting because one of the distinguishing characteristics of the AI I propose is that it actually separates information processing from decision-making, which is similarly unorthodox and counter to many in the computer science field who believe the mind is nothing more than efficient circuitry and chemical processes.  Some have even tried to argue to me that free will is nothing more than a glitch in our biological programming, or that nothing is truly creative - that everything is just a rearrangement of our experiences or of what's been done before - although the same folks will then admit that no amount of rearrangement or iteration would lead to truly creative leaps like the jump from Newtonian physics to Einstein's general relativity.  Others would say we cannot create what we don't fully understand (the mind), which is a form of the argument-from-ignorance fallacy (more specifically, argument from incredulity): claiming something is impossible because the proof or knowledge does not yet exist.
And then there are those who would argue the machine is just zeros and ones, which is a sheer fallacy of composition; we are just atoms and electrons, but you wouldn't equate yourself to a rock.  Free will arising from a deterministic universe is no different from infinity arising from finite numbers, or life arising from inanimate matter.  If the people working on it won't even acknowledge things like free will or imagination, it's no wonder the field hasn't come anywhere close to a truly sentient artificial intelligence.  It's almost as if the true uniqueness of the approach here is that it is essentially a return to form - a return to traditional, qualitative ideas in psychology and intelligence - against the trend of most everyone else becoming more data-driven and quantified.
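     As a loose illustration of that separation of information processing from decision-making, here is a minimal toy sketch (the class name, the energy/drive values, and the environment dictionary are all my own hypothetical illustrations, not anything from the original write-up): perception only summarizes the world and makes no choices, while the decision step is driven purely by the agent's own need to survive rather than by optimizing an external accuracy or value metric.

```python
class ConativeAgent:
    """Toy sketch: perception (information processing) is kept strictly
    separate from decision-making, which is driven only by a survival
    drive. Everything here is a hypothetical illustration."""

    def __init__(self, energy=10):
        self.energy = energy      # survival resource - a stand-in for the "will to live"
        self.goals = ["survive"]  # goals could grow or change over time

    def perceive(self, environment):
        # Information processing: summarize the world; no decisions made here.
        return {
            "food_nearby": environment.get("food", 0) > 0,
            "danger": environment.get("threat", False),
        }

    def decide(self, percept):
        # Decision-making: driven by self-preservation, not data optimization.
        if percept["danger"]:
            return "flee"
        if percept["food_nearby"] and self.energy < 15:
            return "eat"
        return "explore"  # no pressing need, so act autonomously

    def step(self, environment):
        # One perceive-decide-act cycle; acting always costs or gains energy.
        action = self.decide(self.perceive(environment))
        if action == "eat":
            self.energy += environment.get("food", 0)
        elif action == "flee":
            self.energy -= 2
        else:
            self.energy -= 1
        return action
```

The point of the sketch is only the structure: swapping in a different `perceive` (a new environment, new sensors) requires no change to `decide`, because the decision logic depends on the agent's internal drive rather than on the particulars of the data it processes.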

     Read more in-depth at Creating Sentient Artificial Intelligence...
