Let artificial intelligence evolve? Probably fruitless, possibly dangerous.

Michael Chorost has an article at Slate about artificial intelligence and the dangers it might present.  I find myself in complete agreement with the early portions of his piece, where he explains why an artificial intelligence (AI) would be unlikely to be dangerous in the way many fear.

To value something, an entity has to be able to feel something. More to the point, it has to be able to want something. To be a threat to humanity, an A.I. will have to be able to say, “I want to tile the Earth in solar panels.” And then, when confronted with resistance, it will have to be able to imagine counteractions and want to carry them out. In short, an A.I. will need to desire certain states and dislike others.

Today’s software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there’s no impetus to do anything. Today’s computers can’t even want to keep existing, let alone tile the world in solar panels.

This is similar to the arguments I’ve made, that humans, and all animal life, are essentially survival machines, or more precisely, gene propagation machines.  The ultimate sources of all of our actions are innate desires, instinctual impulses, “wants” in Chorost’s terminology, all evolved for their genetic success.  I think he’s completely right that, without those wants, AI isn’t going to be motivated to take over the world, wipe out humanity, or enact any of the other scenarios people often fear.

This isn’t to say that there’s no danger.  Of course there is.  But it’s the danger of giving poorly programmed systems too much unsupervised volition, a danger we already face and have, for the most part, handled with pretty good common sense so far.  Despite all the fears expressed about things like drone attacks, it’s worth remembering that drones are currently remotely controlled by human operators, and the military has, so far at least, shown little enthusiasm for letting machines make life-or-death decisions.

Yes, we’re going to increasingly allow things like self-driving cars to make those kinds of decisions, but only after an exhaustive period of testing, and only once we become confident that the machines will make the right decisions at least as often as humans do, and more likely far more often.

But Chorost’s article continues, and it goes to a place I can’t follow.  He seems to think that it would be a good thing to let AIs evolve.

To get a system that has sensations, you would have to let it recapitulate the evolutionary process in which sensations became valuable. That’d mean putting it in a complex environment that forces it to evolve. The environment should be lethally complex, so that it kills off ineffective systems and rewards effective ones. It could be inhabited by robots stuffed with sensors and manipulators, so they can sense threats and do things about them. And those robots would need to be able to make more of themselves, or at least call upon factories that make robots, bequeathing their successful strategies and mechanisms to their “children.”

…Now let’s say humans invent robots of this nature, and after successive generations they begin to have sensations. The instant an information-processing system has sensations, it can have moral intuitions. Initially they will be simple intuitions, on the order of “energy is good, no energy is bad.” Later might come intuitions such as reciprocity and an aversion to the harm of kin.

There is a major implied assumption here: that we are the inevitable result of evolution, or perhaps more broadly, that beings like us are the inevitable result.

Tree of Life. Image credit: Tim Vickers via Wikipedia

The first part of this assumption is that if we put entities in an environment with strong selection pressures, we’ll inevitably get entities of increasing intelligence.  But if we examine the evolutionary history of life, there is very little evidence for this.  Humans are an unusually intelligent species, but despite centuries of scientific evidence to the contrary, we still have a strong bias toward seeing ourselves, and our attributes, as the pinnacle of creation.

But if you look at the evolutionary history of Earth, it’s very difficult to see humans as inevitable.  Our success seems to be the result of two unusual attributes.  The first is a high degree of dexterity: an ability to manipulate the environment to our needs.  It’s an ability we share with a pretty limited number of species: other primates, and perhaps industrious social insects such as ants.

Image credit: Fred the Oyster via Wikipedia

The second is a hyper degree of intelligence, unmatched in the animal kingdom.  Great apes in general tend to be very intelligent, on a par with elephants, dolphins, crows, and cephalopods, but we humans are in a class all our own.  The thing is, if you look at the paleolithic record, humanity almost didn’t make it.  We’re a relatively minor branch of primates whose evolution was far from inevitable, and we were once only a natural disaster or two away from extinction.

Steven Pinker, in his book “How the Mind Works”, points out how unusual human intelligence is by noting an unusual attribute of another species: the elephant’s trunk.  There are very few species with such trunks.  An elephant might consider its trunk to be the pinnacle of evolution, but we, looking at it as just an unusual attribute, probably wouldn’t agree.  Pinker points out that the evolution of human-level intelligence is just as improbable as the evolution of trunks.  (A fact which doesn’t bode well for us finding intelligent extraterrestrial life anywhere near us.)

If sapient-level intelligence has a low probability, morality has an even lower one.  The implication that moral intuitions are inevitable strikes me as spectacularly misguided.  If we run an environment in the manner Chorost suggests, we can’t predict with any accuracy what might crawl out of it.  It might have the morality of a shark, an evolutionarily successful animal that has no problem eating its own siblings in the womb.

In summary, I think if we attempt to evolve AIs, chances are we aren’t going to get intelligence, but simply some other system that is very successful at surviving the environment we grow it in.  But if we do manage to get something intelligent, the idea that it will inevitably be moral seems dangerously naive.
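To make the point concrete, here’s a minimal, purely illustrative sketch of the kind of evolutionary loop under discussion (every name and number here is hypothetical, not anyone’s actual proposal).  Selection reliably produces populations that score well on whatever fitness function the environment imposes, in this case a toy notion of “survival”, and nothing in the loop pushes the result toward intelligence, let alone morality.

```python
import random

# Toy "environment": a genome survives to the extent it matches hidden hazards.
# This fitness function is the ONLY thing selection optimizes.
GENOME_LEN = 20
HAZARDS = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Count how many hazards this genome happens to handle."""
    return sum(1 for gene, hazard in zip(genome, HAZARDS) if gene == hazard)

def mutate(genome, rate=0.05):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # "Lethally complex": only the fittest half reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("Best survival score:", fitness(best), "out of", GENOME_LEN)
# The winner is very good at surviving THIS environment.
# Nothing here selects for intelligence or morality, and a real-world
# version would be no different in kind, only in complexity.
```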

It seems to me that such an intelligence could be as dangerous as the worst fears of AI that have been imagined.  These entities would essentially be a form of life, with their own survival agendas, agendas that might starkly clash with human interests.  It would be everything Nick Bostrom and others fear: an immensely powerful alien intelligence that might regard humanity as an obstacle to its ambitions.

Even if the result of the hyper-evolution environment isn’t intelligent entities, the resulting life forms could still be immensely dangerous.  For example, the shark-like entity mentioned above wouldn’t necessarily have to be intelligent to be dangerous, nor would something like a technological version of a hyper-virulent Ebola-type virus.  I think my attitude toward such an environment would be much like Elon Musk’s attitude toward AI in general: that building it could amount to “summoning the demon.”

The good news is that there’s no indication that we’ll need to do anything like this to get most of the desired benefits of AI.  We can have self-driving cars, robot maids, and many other benefits without the added complexity of evolved instincts.  In other words, we can keep our AIs as tools rather than as slaves that could turn on us.

22 thoughts on “Let artificial intelligence evolve? Probably fruitless, possibly dangerous.”

  1. I think you are right on many counts. Allowing AIs to evolve in a hostile environment is just as likely to produce Daleks as Buddhist monks. It’s most likely to produce something that’s neither intelligent nor moral, nor useful.
    Could we allow AIs to evolve, not in a hostile environment, but in a supportive one? Could we breed team-players who thrive by being helpful to each other and by using their skills to make the world a better place (i.e. a safer, more co-operative one)?
    Certainly the answer must be that nobody knows for sure.
    The safest solution would be to develop task-based intelligences that can help humans (or augmented humans) do jobs better. But I bet that someone’s going to try the evolution path at some point.
    A psychopathic shark-like robot might have its uses. If you wanted to build an army, that would be a good start, provided you had some means of stopping it from running out of control. That was historically a problem with human armies too, of course.


    1. “But I bet that someone’s going to try the evolution path at some point.”
      I think people already have. From what I recall, the results have been interesting, but innocuous, as we noted would probably be the case, although I don’t know if anyone has explicitly aimed for sapience in any of the results.
      https://en.wikipedia.org/wiki/Artificial_life

      “If you wanted to build an army, that would be a good start, provided you had some means of stopping it from running out of control.”
      The last part is the rub. I think a robot that wants to kill only precisely when we want it to kill would be superior to one that wants to kill but has to be restrained (and frustrated) until we desire it to kill. Although it might make for good sci-fi, it’s not clear to me why anyone would want the latter. As you note, it’s often been what we’ve had with human armies or various types of attack animals.


  2. Just read the Slate article. I hate to be cynical, but looking at human history, I don’t think it’s clear at all that intelligence automatically leads to morality. In fact, I’d say intelligence often undermines morality by providing us with the means to justify our immoral behaviors.


    1. I agree, although I think I’d say that intelligence allows us to be either more moral or more immoral. Intelligence allows us to better foresee the results of various courses of action and, depending on those results, make more moral decisions. But only if we’re moral. If we’re not, the same capabilities allow us to focus solely on our own selfish interests, perhaps in a way that best masks our immorality, often, as you note, even from ourselves.


  3. I think you all might be making too much of this ‘intelligence’ – danger can come from non-intelligent objective- and response-based behaviour. A system which has a programmed or evolved objective, and the means to carry out various tasks/effects, can be super dangerous precisely because it’s ‘dumb’. Intelligence, I think, isn’t some sort of ‘mystical’ (I know none of you are implying such – it’s my choice of word to convey a sort of mysterious or ‘reverence-worthy’ status) state of being. I think we might be quite disappointed to find out that intelligence is actually quite banal and unsurprising – just that /our/ intelligence is ‘different’ because it’s more complex, that’s all.
    I think, somewhat contrarily, that the behavioural ‘intelligence’ of the animal kingdom (if I’m to exclude us for a moment) is actually better than ours – as if our added complexity has somehow downgraded us (“Trop d’intelligence tue l’intelligence” – too much intelligence kills intelligence – one could say in French). The reason I’m improvising this opinion right here and now is that I’m thinking of Buddhism, Taoism (ok, particularly Zen which is a blend of both), and quite a few of the Hindu traditions, whose objective of the utter effacement of ‘self’ and ‘ego’ – if you really push that thought experiment to its limits on a social scale (“What if we were all enlightened Zen practitioners?”) – would result in a ‘society’ of ego-less beings just like the animal kingdom has right now. Most animals, if given enough space and not pushed into excessive proximity by human settlement, will ‘live and let live’ to a degree that is quite laudable. Even the Western world aspires to the banishment of ego, or at least paints egoism in a negative light. The dynamic ‘space-management’ of the animal kingdom is natural and done without intelligence as we see it – yet it happens on its own – and might look a little like Brownian motion of a sort, where each animal has a boundary around it and it ‘bounces off’ other animals’ boundaries as it moves through the world. I think artificial intelligence, if left to its own devices, might find its niche in this way too.
    But that’s just sleepy Saturday-morning babble (haven’t even had breakfast yet!)… what do you think? I’d love to know 🙂


    1. I agree to some extent, Tom, especially the notion that intelligence and self-awareness may turn out to be quite banal mechanisms. But on the subject of the “natural” world, aren’t there examples of animal predators that seem to enjoy killing and inflicting needless cruelty on their victims, and plenty of other species that practice unthinking cruelty? Predators like spiders terrify me because of their seemingly unthinking and ego-less ability to kill their prey. They look just like killer robots.

      Humans have a rare ability to suppress their own urges, instincts and self-interest for others, or for abstract ideas like the “greater good”. Nature, when left to its own devices, seems to me to be extraordinarily careless and wasteful of life.


    2. I agree with both of you that we shouldn’t elevate intelligence into some mystical magical sort of place. Tom, was there something in particular that was said that might have led you to think we were?

      But I have to agree with Steve on nature. It pays to remember that the average animal in the wild dies by predation, mostly when it’s too young to escape or defend itself, or when it’s sick, injured, or too old and weak to fight or flee anymore. Yes, animals will often leave you alone if you leave them alone, but a lot of that is because humans are the alpha of alpha predators, and most animals seem to instinctively understand it.

      Incidentally the alpha predator status largely comes about because of our intelligence. Our closest cousins, chimpanzees, don’t seem to enjoy the same status, principally because their ability to socially organize is far more limited.


  4. Predation will and does happen at all scales of ‘intelligence’ unfortunately – but it takes a particular kind of intelligence to amplify that predatory behaviour and especially enact that behaviour when it is not born of survival necessity (i.e. war of principle, not for resources).
    A particularly striking quotation from the book I’m reading:
    “There are times when men’s passions are much more trustworthy than their principles. Since opposed principles, or ideologies, are irreconcilable, wars fought over principle will be wars of mutual annihilation. But wars fought for simple greed will be far less destructive, because the aggressor will be careful not to destroy what he is fighting to capture. Reasonable – that is, human – men will always be capable of compromise, but men who have dehumanized themselves by becoming blind worshipers of an idea or an ideal are fanatics whose devotion to abstractions makes them the enemies of life.” (Alan Watts, pp 29-30, “The Way of Zen”)
    The animal kingdom doesn’t only leave us alone (and we’re definitely not always alpha – go alone into the savannah and you’re not alpha for one minute), it also leaves its own kind alone. Even a predator will abandon its quarry if its own well-being is too much at risk. SAP, you say ‘mostly when it’s too young… sick… etc.’ but that’s also the case for us (i.e. we are also at risk in those cases). I think the point you’re making is about the /nurture/ aspect of human behaviour, which you seem to have denied our animal cohabitants. I think it’s a mistake of perspective to totally remove that aspect from them. Yes, animals may abandon their sick or weak more often than humans do, but it is not systematic. And the fact that we (mostly) don’t is an example of the better outcomes of our higher intelligence (principled, or abstract-reasoning).
    Steve, you make a point about the apparent cruelty in the animal kingdom – I’m thinking in particular how sharks (or was it killer whales?) will fling still-struggling seals into the air to catch them and play with them. Many animals play with their still-twitching food (my cat left me a ‘gift’ of a still-live bat in front of my bedroom door once – the squeaking was what woke me up!). Often it is an almost mindless form of ‘training’ or ‘practice’. And I agree that the lack of consciousness present at those moments is unsettling/frightening. When I lived in the mountains of Neuchâtel, some nights I would hear blood-curdling screams from the forest that absolutely chilled me to the bone. Immediately I had a wishful thought ‘oh please put it out of its misery quickly!’. Mercy/Compassion is something we developed/evolved, certainly.

    BTW, I don’t think any of you have sanctified intelligence anywhere in particular so please don’t take it as an accusation – it was merely a manner of speech to say ‘make much of’. So returning to the dangerousness of Artificial Intelligence, and tying in why I thought of the ‘making much of’, I mean to say that I don’t think it’s so much the Intelligence that will be the problem but the dynamics/rules of interaction with that ‘intelligence’ – artificial or animal. Indeed, the example of ‘covering the earth in solar panels’ alludes to a highly /societal/ impact, which could only arise if our human society ceded to it – i.e. it cannot encroach without our retreat. Again, this ties into the ‘boundary’ dynamics I’d mentioned in the animal kingdom. If given the room, it (A.I.) will occupy it. And that’s why I don’t worry much about the dangers of A.I. – I place a lot of (undue?) faith in mankind’s own capacity to fight for its own space.


    1. Certainly those who wage war because of their deeply-held moral beliefs can be the most dangerous of all. Someone who believes that god or right is on their side can be unstoppable in their cruelty. I wonder if a similarly fanatical AI would wage war in its determination to cover the planet with solar panels?


    2. Certainly, we are the species that has symbolic thought, so it makes sense that we’d be the only one to fight war for symbolic reasons. Although the main symbolic reason for war, religion, reportedly only accounts for 6-7% of them. And I’m not sure I’d agree that such wars, on average, are necessarily any crueler than wars fought for other reasons, such as territory, loot, slaves, or mates. Of course, this is a complicated question, because ideology is almost always tangled up in large scale conflicts, and separating it out is difficult.

      Humans aren’t the only ones cruel in war. Jane Goodall observed chimpanzee groups at war with each other, and watched as the losers were brutally annihilated, sometimes with the victors in a skirmish drinking the blood of the still living loser while others tore skin and broke limbs. And ant colonies have been known to go to war and enslave the losing colonies.

      One human alone is definitely not an alpha predator, unless of course that human has the right tools. But it’s easy to take for granted that humans organized together are one of the most potent forces on the planet. We’re such a potent force that many other predators have evolved an aversion to us, likely a result of tens of thousands of years of anything aggressive toward us being selected out of the gene pool.

      Thanks for the clarification on intelligence. I don’t worry too much about AI either. The notion of evolving them is one of the few exceptions, and even then, I think the result is likely to be innocuous in most cases.


  5. SAP, your Jane Goodall example (which I’d also heard, but pushed into the oubliette of my mind, so disturbing an image it evokes) does shine a light on a flaw in Alan Watts’s quotation: the devastation of war – principled or ‘passionate’ (to use Watts’s word) – differs only with regard to the spoils (i.e. whether or not there will be any). Indeed, the cruelty shown to the loser most probably doesn’t differ greatly. Except that “Take no prisoners!” comes to mind, a war-cry which more probably emanates from a principled war as opposed to a passionate war. An aggressor having nothing to reproach the defendant for, other than the fact that they are defending what the aggressor wants, might be more inclined to keep prisoners. But that’s not very realistic (because there will always be someone to justify that ‘they’ aren’t like ‘us’ and thus dehumanize on both sides), as experience shows (I realize now I’ve just echoed exactly what you said – my apologies!).

    Steve, it is a curious concept, that of a fanatical A.I. I hadn’t considered it before, but I don’t suppose there’s any reason it couldn’t happen – after all, if it is to be a ‘true’ A.I. then it must be capable of ‘belief’ (here specifically false ‘knowledge’ – where true knowledge I would just call ‘knowledge’), whatever the degree. Then I muse: would it not be thus capable of believing us to be their creators/deities? Would they not fanatically worship humans and wage wars among themselves over their opposing beliefs regarding us? That makes for an intriguing sci-fi plot I wager – told from the perspective of despairing humanity watching its creations tear themselves apart and being helpless to do anything that won’t make matters worse! (that might be a fun means of arguing the parallel problem of the non-intervention of God).

    I’d like to bring up a slightly different question here though: If A.I. were to evolve – at a rate that allows for the emergence of ‘true’ A.I. within a human life-span (or two, tops) – then would it not be reasonable to presume that the evolution might continue at such a rate and thereby surpass human intelligence? If that’s the case, my question to you all is this: What would more evolved intelligence look like? Wouldn’t it be more peaceful? ‘Enlightened’ even? Is there a ceiling to intelligence? Does further-evolved intelligence necessarily imply ‘better’ (i.e. more good) intelligence?


    1. Tom, feel free to echo me anytime. It flatters my ego 🙂

      On AI regarding us as gods, have you ever read Charles Stross’s books “Saturn’s Children” and “Neptune’s Brood”? They posit a robotic civilization that regards its creators (humans) as gods. Humans have died out in the stories (too easy to have sex with robots instead of the reproductive variety), but the most religious among the robots strive to bring humans back so they can care for them.

      I don’t know that there’s any guarantee that a superior intelligence would be good as we understand it. It might be. But our sense of “good” is tangled up with our instincts as a social species. If evolved AIs didn’t evolve into their own version of a social species, I tend to think they might act like extremely intelligent reptiles, which might mean only being out for their own individual interests. Although being intelligent, they might be able to game theory out various cooperative scenarios, but assuming that would result in something similar to human morality could be wishful thinking.


  6. I think people often mistake AI for complex problem solving. A self-driving car or a chess-playing machine isn’t intelligent. They are simply very efficient at processing tons of data, using algorithms defined for them by humans to achieve a goal set for them by humans. If we let machines have their own desires, we will have a self-driving smart car that decides to go play tennis, or something else it’s absolutely not supposed or intended to do, when you want it to take you to work. And why would anyone want such a “smart” car? I have enough trouble with my kids who don’t do what they are supposed to do.


    1. Hey agrudzinsky. Good hearing from you!

      Artificial intelligence is definitely a slippery term. In common usage, it has a tendency to mean cutting edge computational technology, or sometimes what humans can do that computers can’t, yet.

      But, just out of curiosity, since you make a distinction between “intelligence” and the ability to do complex problem solving, what would you say intelligence is beyond that?

      Totally agree about not wanting my self-driving car to have its own independent desires. I want my technology to be a tool driven to fulfill its engineered purpose, not an entity with its own self-actualization impulses that then has to be convinced to do what I want it to.


      1. But, just out of curiosity, since you make a distinction between “intelligence” and the ability to do complex problem solving, what would you say intelligence is beyond that?

        Intelligence is in the eye of the beholder. “Intelligence”, perhaps, refers to the level of complexity. When a machine is complex enough that we do not understand how it makes “decisions” to do certain things, we call it “intelligent”. But when we understand how the machine’s actions are triggered, the impression of “intelligence” disappears. For instance, my smartphone may suddenly tell me: “Hey, if you want to be on time for that meeting, you’d better start now and, by the way, avoid that highway – there is an accident near exit 69”. That’s intelligent, right? How did it come up with such a timely and useful message? But, of course, the smartphone “knows” about the time and place of my next meeting from my Google Calendar. It also knows my current location from GPS and can calculate how long it takes to get to the meeting using Google Maps. It also knows about the traffic based on the information from thousands of smartphones on the road aggregated at the Google server. The smartphone does not just “decide” that this message would be useful to me. The smartphone knows nothing of being useful. It is programmed to do things that the designers of Google Now considered useful. So, if we don’t know all these things, the message appears intelligent. But if we do understand how things work, the impression of “intelligence” disappears.

        However complex the machine, if it exists, humans (at least, some) must understand how it works. Perhaps nobody understands it individually, but collectively there will be a group of experts whose knowledge covers all aspects of the machine. So, perhaps, existing machines will never be considered “intelligent”, and the term “intelligent” will always be reserved for some mysterious “next generation”. Of course, nobody has an idea what the next generation of machines will do. So, it’s quite appropriate. On the other hand, we might as well consider that AI already exists, because what I described in my example would certainly have blown my mind 20 years ago.

        Another thought. “Intelligence” assumes purpose. There are very complex natural systems with very complex behavior. But unless they do something that appears useful or purposeful to humans, they are never called “intelligent”. The term “intelligence” seems to be closely related to goal setting and decision making and, therefore, to the question of free will. Before we answer whether machines can be intelligent, we need to answer whether humans are intelligent themselves or are mere automatons. And there is no answer to this question. It’s a matter of philosophical worldview.


        1. Well said. I agree across the board.

          It does raise the interesting question of what our attitude toward intelligence will be as neuroscience progresses. (At least, unless substance dualism still somehow manages to emerge as a thing, or, per Penrose, the mind turns out to operate according to its own physics, which would amount to much the same thing.)


  7. I’m loving this discussion! Each response is helping me paint a clearer picture in my mind.
    [shameless plug: http://taomath.org/2016/04/two-modes-mind/ ] of course! Tying in Steve’s comment on your other post (https://selfawarepatterns.com/2014/02/27/artificial-intelligence-is-what-we-can-do-that-computers-cant-yet/) about Stupidity – I agree with him that error will be a decisive factor.
    “Intelligence is in the eye of the beholder.” – agrudzinsky
    Totally! After all, that is the main gist of most of the world’s questioning: “When will we know that True A.I. has occurred?” i.e. “When will we be unanimously convinced?”
    See that’s the thing: is such a determination anything that could ever be unanimous and indubitable? I think now that, with again Steve’s point on ‘fanaticism’, ‘True A.I.’ will probably need to include negative behaviour – i.e. taking the apophatic thinking perspective here, it is when we see ‘stupid’ (Not-intelligent), ‘irrational’ (not-rational) and ‘fanatical’ (not-free) behaviour – i.e. obstinate and determined, to the point of its own detriment, in addition to all the positive qualities it may have displayed earlier – that we might get ‘spooked’ (i.e. that leap over the uncanny valley) into realizing “Gosh, this thing’s alive!”
    Determined intelligence that cannot be reasoned with is very upsetting. One of my fears is death by something that won’t be reasoned with to stop – like a shark attack.
    Aww… you know what? Never mind. I can’t help but lose that sense by explaining it away (isn’t that agrudzinsky’s point after all?). I keep thinking of a scenario of interaction with an intelligent animal, substituting it with a sophisticated electromechanical equivalent and immediately realizing ‘yes, but we could program that behaviour into it.’ and the intelligence vanishes.

    Maybe /that’s/ the key – and again, coming back to your post SAP – that by evolution, we’re letting its behaviours arise without our intervention. I didn’t program my dog to not let go of the chew-toy, but dammit he won’t. That determination seems to be motivated by ‘personal factors’ hence why I so readily attribute intelligence to him. So too perhaps an A.I. that I know was not deliberately programmed by any human being wherever (but most importantly by me), which ‘suddenly’ (i.e. unexpectedly) displays a self-motivated behaviour which was not part of the original programming… aww hell, there it goes again! Why this time? Because of Google’s Deep Learning.

    This qualification for A.I. seems like Buddha-nature in the Zen traditions. Which would lead to the following statement: True A.I. already exists, we just refuse to recognize it because we refuse to see our own intelligence as being so simple. I don’t know if that’s a true statement, but it could be…


    1. “Maybe /that’s/ the key – and again, coming back to your post SAP – that by evolution, we’re letting its behaviours arise without our intervention.”

      Yes and no. It’s not hard for me to imagine an AI system that has behavior that we didn’t precisely program into it. I think the actual distinction is the existence of primal goals which we had no part in. An AI might find a path to a goal that we’d find surprising, but that seems different than the AI having goals that didn’t originate from us.

      On true AI already existing, I think it’s a valid point. Consider that a person suddenly transported from 1950 would likely regard modern systems as intelligent, at least until they gained some experience with them. Although I suspect as long as we can detect a difference in the degree of versatility between us and machines, we won’t consider the machines intelligent, even though you could say that they are intelligent, just not as intelligent (yet) as we are.


