For those who started their careers in AI and left in disillusionment (Andrew Ng confessed to this, yet jumped back in), or for data scientists today, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to Apple’s Siri, whose ability to listen and respond to requests is adequate but more often frustrating. Or to IBM Watson’s win on Jeopardy as mere data loading and brute-force programming. From their perspective, the real value lies in the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
via Artificial Intelligence – What You Really Need to Know – Forbes.
I think this article makes an important point. A lot of old school AI people are unhappy with the direction that commercial AI has taken. Commercial AI has largely abandoned trying to create an artificial mind, and is instead focusing on where they can make computing systems more intelligent.
They are, of course, driven by the profit motive. They’ll take what results they can use now, and worry less about pie-in-the-sky aspirations. I think that’s totally reasonable for them. Getting upset that they aren’t working toward building a human-like mind is just misguided.
I also think it’s a strategy that can eventually lead to breakthroughs. Often, when you can’t figure out how to solve the big problem, focusing on smaller problems will shake things up enough that a solution to the big problem becomes apparent.

To be fair, many who believe that AI has gone in the wrong direction were very big dreamers. The idea that computers could think on the same level as humans was not really a goal with a monetary, social, or political point. It was a problem that existed for the sake of being solved. Over time, this changed into the business model of AI that we now have. This isn’t a bad thing, but it can feel like a betrayal to those who cared only for the problem itself.
I think you’re right. Of course, there are people still working on the dream version, and insights from psychology and neuroscience will probably eventually factor into it, but I think commercial AI will contribute as well before it’s over.
AI may be a subset of predictive analytics, but that’s only because predictive analytics is basically knowledge. After all, it’s about finding patterns in data, and isn’t that what all learning and knowledge is about? Even cases where we have rules may be nothing more than rule mining, and the rules themselves may actually be replaced by data mining.
Good point. Isn’t a rule, after all, just data, information?
Pretty much. A rule is a way of generating data and as such can be seen as data compression. I can list data or provide a rule to generate that data (possibly within a certain range of error). The data compression view also gives a nifty definition for rule quality; rules that are more compact (all else being equal) are to be preferred.
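The compression view above can be made concrete with a toy sketch (my own illustration, not from the thread): a short rule that regenerates a data set exactly has a far smaller description than the data it stands in for, which is exactly the sense in which a rule "compresses" data.

```python
# Toy illustration of "a rule is data compression": the list below can be
# stored verbatim, or regenerated from a short rule. The rule's description
# is much shorter than the data it generates.

def rule(n):
    """Hypothetical rule: the n-th term is 3*n + 1."""
    return 3 * n + 1

data = [1, 4, 7, 10, 13, 16, 19, 22, 25, 28]

# The rule reproduces the data exactly (zero error)...
assert data == [rule(n) for n in range(10)]

# ...and stating the rule takes fewer characters than listing the data,
# which is the "compact rules are preferred" criterion in miniature.
raw_size = len(str(data))          # characters needed to list the data
rule_size = len("3*n+1 for n<10")  # characters needed to state the rule
print(raw_size, rule_size)         # the rule is the shorter description
```

Real measures of rule quality (such as minimum description length) also charge the rule for any residual error it leaves unexplained, matching the "within a certain range of error" caveat above.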
It’s even possible that the two goals may merge. After all, if the current “useful” AI tools are useful to us, then maybe they would be useful to a human-like AI too. You could imagine that the human sense of smell is a bit like this – it’s a specialised functional tool that helps us to make decisions about our world.
Agreed. I think solving the small problems often leads to solutions to the big ones. Every solution to a basic information-processing problem gets us closer to human-like AI.
To me, it’s similar to flight. No airplane mimics how a bird flies, but we accomplish the goal of flying (while far exceeding what actual birds can do). Would building an artificial bird be an interesting accomplishment? Sure. Is it something Boeing and others should be interested in? Not particularly.