Eric Siegel at Big Think, in a new “Dr. Data Show” on the site, explains Why A.I. is a big fat lie:
1) Unlike AI, machine learning’s totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, “Hooha!”. However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can’t do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level “Artificial Intelligence” course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of “intelligence” is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. “AI” is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn’t gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.
He goes into detail in a long post at Big Think, or you can watch him discuss it in the video.
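To make his first point concrete, here’s a minimal sketch of supervised learning (Python with scikit-learn assumed; the data and the churn scenario are made up for illustration). The model only learns because every training example arrives with a label:

```python
# A minimal supervised learning sketch (scikit-learn assumed; all data made up).
# The point: the model only learns because every training example carries a label.
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled history: each row is [hours_online, purchases_last_month],
# and each label says whether that made-up customer later churned.
X_train = [[2.0, 5], [0.5, 1], [3.0, 7], [0.2, 0]]
y_train = [0, 1, 0, 1]  # the labels; without these there is nothing to learn from

model = LogisticRegression().fit(X_train, y_train)  # learning is driven entirely by the labels
print(model.predict([[1.0, 2]]))                    # prediction for a new, unlabeled case
```

Take the labels away and that same data teaches the model nothing, which is the restriction Siegel is pointing at.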
His presentation is over the top, but I have to agree with much of what he says. AI is hopelessly overhyped. From singularities to Frankensteinian worries that it will turn on us, it’s become a new mythology, supplying new versions of the old deities and demons: superhuman powers that rule the world, promising to solve our problems or threatening our destruction, but now with a thin veneer of technological sophistication. (Not that I don’t enjoy science fiction with these elements too.)
That’s not to say that there isn’t some danger from these technologies, but it involves more how humans might use them than the technologies in and of themselves. For instance, it’s not hard to imagine systems that closely and tirelessly monitor our activity using machine learning to figure out when we’re doing things a government or employer might not like. Or an intelligent bomb smart enough to wait until it recognizes an enemy close by before exploding.
And I think he has a point with the overall term “artificial intelligence” or “AI”. It’s amorphous, meaning essentially computer systems that are smarter than normal, lumping everything from heuristic systems to Skynet under one label. We sometimes talk about the “AI winter”, the period when AI research fell on hard times but eventually pulled out of them. It could be argued that the endeavor to build a mind never really escaped that winter. We just lumped newer, more focused efforts under the same name. (Not that I expect the term to die anytime soon.)
To be clear, I do think it will eventually be possible to build an engineered mind. (I wouldn’t use the adjective “artificial” because if we succeed, it will be a mind, not an artificial one.) Minds exist in nature with modest energy requirements, so saying it’s impossible to do technologically is essentially asserting substance dualism, that there is something about the mind above and beyond physics.
But we’re unlikely to succeed with it until we understand animal and human minds much better than we currently do. We’re about as likely to create one accidentally as a web developer is to accidentally create an aerospace navigation system.
I have to admit, I see the term A.I. applied to a lot of technologies that I don’t really consider A.I. I feel like the term has gotten so watered down that it’s lost a lot of its meaning. I understand that Siri is highly advanced, but she’s not on the same level as Commander Data from Star Trek, and I don’t feel like the same kind of terminology should apply to them both.
Definitely. People hear “AI” and think of Data, HAL, or Skynet, when often what vendors tout as “AI” isn’t even guaranteed to be machine learning.
Have you heard of the Human Connectome Project? If this project is ever successful (and that’s a big if) at mapping the human brain, it’s estimated that running a computerized simulation of it would require all of the computing power that exists in the world today. That’s one hell of a mainframe…
I have heard of it, and I do think we’ll eventually be able to map the connectome, although I have no idea whether that particular project will succeed.
I’d be curious how they arrived at that estimate for running it in a simulation. A lot has to do with how granular they run it. If it’s down to the molecular or protein level, I could see that estimate. But it seems high to me if they’re mapping it at the neuron / synapse level. (Not that the requirements for that level would be modest by any means.)
It would take a lot more power to simulate a brain than it would to simply match the brain’s computational power. My own back of the napkin estimate for that would be about 100,000 processors (or cores) running in parallel with somewhere between 100 TB and 2.5 PB of storage, which the largest high performance clusters are now reaching.
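For what it’s worth, the storage end of that estimate is just arithmetic. Here’s a rough sketch; the per-synapse byte figures are assumptions I’ve picked to bracket that range, not established numbers:

```python
# Back-of-the-napkin arithmetic behind the storage figure. The counts are the
# commonly cited ballpark numbers; the bytes-per-synapse values are assumptions
# chosen to bracket the 100 TB to 2.5 PB range, not established facts.
neurons = 86e9                 # rough neuron count for an average human brain
synapses_per_neuron = 1e4      # often-quoted upper-end connection count
synapses = neurons * synapses_per_neuron      # ~8.6e14 synapses

low, high = 0.1, 3.0           # assumed bytes of state per synapse
print(f"{synapses * low / 1e12:.0f} TB to {synapses * high / 1e15:.1f} PB")
# prints roughly "86 TB to 2.6 PB"
```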
I’ve been waiting a long time for someone to agree that “AI is BS” – science fiction and fake news fodder for journalists. Now I’m waiting for someone to wake up and declare quantum computers BS. Yes, some of the space cadets believe the second coming of the computer is already with us.
I know where you’re coming from on quantum computers. I’m not sure if they’ll ever amount to anything, or if they do whether they’ll be anything like the revolution everyone expects.
I like to compare such things to the hot fusion reactor – 60-plus years of R&D, boatloads of public money, and no completion date on the horizon. Once these ideas take hold you can’t shut them down, and so it’s BS all the way down. This, unfortunately, is how science works…
…and technology.
“We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.” – Carl Sagan
Mike, on Eric Schwitzgebel’s “The Splintered Mind” blog and yours as well, I’ve been repeating that animal intelligence is pattern matching and AI is industrial strength artificial/computerized pattern matching, if you will, outstripping human capacities in speed and breadth but usually in a single domain. Human intelligence has nothing to do with human consciousness and Artificial Intelligence has nothing to do with Artificial Consciousness, which is not being considered in any way, let alone being developed. I’m a big fan of the science fiction though.
Commander Data is a human simulation that likely would involve unimaginable levels of pattern matching, while Siri just looks stuff up. You can see the two as essentially the same technology, but on wildly different scales.
Stephen,
What leads you to conclude that consciousness and intelligence have nothing to do with each other? In my view, what we think of as consciousness is just a form of intelligence.
[Funny. In my view, intelligence is just a form of consciousness]
[I should probably add, pattern matching pretty much is consciousness.]
If intelligence is a form of consciousness, would you say that there are forms of consciousness with no intelligence? If so, what might be an example?
If pattern matching is consciousness, does that mean all pattern matching is conscious, or just some? If all, what does that say about the images in the midbrain region we have no introspective access to?
I’m not sure intelligence has been sufficiently defined. Would you say the ability to report a conscious experience, without anything more, is sufficient for intelligence?
I will say that all intelligence I can think of requires consciousness, but that’s just by my definition of consciousness, probably not yours.
And I would say not all pattern matching is conscious, but all consciousness is pattern matching. I (currently) restrict consciousness to where the input (the pattern to be matched) is a symbolic sign (vehicle) (in a teleonomic sense).
Regarding introspective access, that’s just a cognitive (intelligent) ability of a specific part of the brain. Other parts of the brain (midbrain region?) may have their own consciousness/intelligence, just not as extensive as the one we refer to as the autobiographical self.
*
Good point. The definition of intelligence is controversial, just as the definition of consciousness is. We could regard them as synonymous. But I often find it convenient to refer to software systems as more or less intelligent without any implication that I’m talking about those systems being conscious.
Our intuition of consciousness seems to require more specific capabilities, such as awareness of oneself (at least one’s bodily self) and the environment.
I like the idea of consciousness as a type of pattern matching. It seems like another way of saying it’s about prediction. I know you dislike that word, but I think it gets at why the pattern matching is happening.
Right. So, what’s a good score on a Consciousness Quotient test? A rather high CQ score is required for membership in Sensa, but note that a sustained CQ score of zero will get you buried.
If I understand what you’re asking, people have varying abilities to be aware of their environment and of themselves. They can score higher or lower on metacognitive abilities.
Damage to their V5 region can leave them able to perceive objects but unable to perceive their motion. Damage to the parietal regions can leave them with hemispatial neglect, which not only leaves people unable to perceive the left side of their visual field, but makes that left side inconceivable to them. These people’s consciousness is impaired in a measurable manner.
And of course, consciousness can be temporarily impaired or altered by drugs. Someone who is heavily intoxicated has less awareness than someone who isn’t. On the other hand, someone jacked up on caffeine may be hyperaware.
You may say that these aren’t intelligence, but it’s worth noting that automated systems are still struggling to do them. They are capabilities that a system can have in higher or lower amounts.
No, I don’t believe that a feeling of being embodied and centered in a world is intelligence, but then, as you pointed out, we don’t have useful and precise operative definitions of words like “feeling” but, rather, we have “conventions” about where in a sentence words can be used and maybe other stuff, allowing us to decry opposing viewpoints as value judgments and remain on the definition-free and evidence-free path of Righteous Obfuscation.
Another Triumph!
The feeling of being embodied and centered can be impaired or knocked out by damage to the insula cortex or parietal regions. Depending on the exact damage, it can lead to a host of bizarre conditions such as denying ownership of a limb. https://en.wikipedia.org/wiki/Somatoparaphrenia
Stephen, your tone seems to be getting increasingly uncivil. I love discussions with smart people like you, but I have no interest in insult matches. Can we keep this on friendly terms?
Mike, “denying ownership of a limb” is a far cry from disembodiment, a syndrome I’ve not encountered in the literature. I can’t imagine that it would feel like anything either, since you’d be convinced your head and the world are also missing. Even Descartes would be dumbfounded.
The replacement of the word “definition” with the word “convention” is your own Mike, although I did intend with my wording to humorously indicate the effects of your suggested substitution. I guess mea culpa for forgetting the smiley.
This seems relevant to the discussion here:
“The average human brain has about 100 billion neurons… Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synaptic connections, equivalent by some estimates to a computer with a 1 trillion bit per second processor.”
That’s from http://www.human-memory.net/brain_neurons.html
Neuroscientist Suzana Herculano-Houzel was actually able to update the numbers of neurons. There are 86 billion in an average human brain. Surprisingly, 80% of these are in the cerebellum, or about 69 billion. Another 16 billion are in the cortex. The remaining billion are in all the sub-cortical structures.
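As a quick arithmetic check on those figures:

```python
# Quick arithmetic check on the counts quoted above (figures in billions of neurons).
cerebellum, cortex, subcortical = 69, 16, 1
total = cerebellum + cortex + subcortical
print(total)                 # 86, matching the ~86 billion total
print(cerebellum / total)    # ~0.80, i.e. roughly 80% of neurons are in the cerebellum
```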
Full disclosure, if this is the same Eric Siegel who writes cosmology columns for Forbes, I’m not a fan. Can’t say why; something about his columns just rubs me the wrong way.
There’s marketing and hype and public perception, which I largely ignore, and there’s what “AI” has meant since day one. A computer that is essentially indistinguishable from a human. That passes the Turing test with ease. Lt. Commander Data. The hosts from Westworld. Robbie from Forbidden Planet. Robot from Lost in Space. And many others.
All Pinocchio, really. With a dash of Frankenstein(‘s monster).
It’s the Holy Grail, the “Hard Problem,” and it’s proved as frustrating as quantum gravity.
Not that we haven’t made all sorts of astonishing progress with machine learning and all sorts of other very useful things. I agree with how amaze-balls ML is. (And I think we here all understand its limits.)
Is Siegel suggesting we give up? Stop talking about it? I always get a little confused when someone decides what everyone else should do. Can’t say I’m persuaded.
On the cosmology column, I think you’re thinking of Ethan Siegel. He’s…unique. I’ve been following him for years. It seems like he’s gotten more opinionated over the last few years, or maybe he’s just more willing to show it, and that at times gets annoying. And sometimes he gets technical without adequate groundwork.
I agree that Eric Siegel does overstate the case against sapient AI. We’re a long way from it, but calling it “a lie” is overdoing it. On the other hand, I do like that he’s pushing back against those scaring people about machine learning.
Ha! I didn’t realize there were two E. Siegels putting technical goodies out there!
Now that you say it, yes, it’s Ethan. I followed him long ago, but have dropped off in recent years. (Phil Plait, too, which is a mystery. I love that guy. Just don’t seem to bother anymore for some reason.)
“We’re a long way from it, but calling it ‘a lie’ is overdoing it.”
I think so, too. I noticed, and meant to comment on, this in the post:
“To be clear, I do think it will eventually be possible to build an engineered mind. (I wouldn’t use the adjective “artificial” because if we succeed, it will be a mind, not an artificial one.)”
When you put it that way, I can agree it seems likely! 😀
(“Artificial” in the sense of “artificial leg” just meaning “not of human born” but I agree the term kinda sucks in this context. We could call it a Ghost in the Shell.)
“On the other hand, I do like that he’s pushing back against those scaring people about machine learning.”
Yeah, that’s a good point. Very much agree.
Have they calmed down, yet, about the black holes CERN was going to create?
Ever been to the Has the Large Hadron Collider Destroyed the World Yet? site? Be sure to take a peek at the page’s source code. The site actually tests! 😀
I used to like Phil Plait a lot, but I’m the same way. I’m still subscribed to his SyFy feed, but rarely think to check it anymore. He’s one of the ones who have gotten pretty political on Twitter. Even though I broadly agree with his views, his sharing of political rhetoric seems excessively unskeptical, which, given what originally attracted me to his blog years ago, I find disappointing.
Wow, that site’s funny. Somebody went to the trouble of registering and maintaining that hostname, just so they could make that point? The laser-eyed bunny is cute.
Funny thing is, the bunny is an illegal HTML comment. You’re not allowed to have multiple hyphens, except as the beginning of the end comment mark. And then only those two.
I didn’t know that. I would have thought that as long as you didn’t have the full end comment sequence, you were good. I’ve probably left some violations lying around over the years.
Actually, looking back at what I said, I was channeling age-old advice from the 1990s. The rule of thumb, because not all browsers parsed HTML correctly (let alone all the bad HTML), was to only use hyphens for the comment start and end tags.
Because (as you probably know) HTML is defined by SGML, it inherits the two hyphens as comment delimiters from SGML (where just pairs of two hyphens enclose a comment within a declaration).
Long story short, multiple hyphens are allowed so long as they come in multiples of four.
Which is where most uses of lots of hyphens trip authors up.
Siegel (in this summary – haven’t read the longer link yet) starts off with some right points and wanders off into improbable and wrong ones.
Yes, right now, almost everything useful in machine learning comes from supervised learning. There are only a few preaching the virtues of “unsupervised everything”, i.e. generative learning, scientific and engineering model building by computers. But like Mike, I think their day will come (the unsupervised algorithm programmers’ day, and their creations’ too). Unlike Mike, I think there is a strong probability that the unsupervised algorithms will slip from the programmers’ control and from their intended results. Why? Historical evidence: the only other two entities that match or exceed a biological human’s abilities — corporations and governments — have largely slipped our control. They follow their own logic, and it sometimes leads to places few of those who founded the institutions would have dreamed of, or would have wished for if they could have anticipated them.
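To make the supervised/unsupervised contrast concrete, here is a minimal unsupervised sketch, again assuming scikit-learn and made-up data; it finds structure without being handed a single label:

```python
# A minimal unsupervised sketch (scikit-learn assumed; data made up): no labels
# anywhere, the algorithm just groups the points by similarity on its own.
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]   # hypothetical unlabeled measurements
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)   # e.g. [0 0 1 1]; structure is found, but nothing says what the groups mean
```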
Mitt Romney is wrong — corporations aren’t people — but at least they are *made of* people. That puts an upper limit on their mischief, if not a very comforting one. Likewise with governments. But human-level-ish (or smarter) AI won’t be made of people.
Human values are the result of a long and peculiar evolutionary history. If a new type of being is created that can evolve quickly in a rapidly changing environment, it will probably evolve away from that tiny region in the space of all possible values. Argh, I sound like Spock:
McCoy: Dear Lord, do you think we’re intelligent enough to — suppose — what if this thing were used where life already exists?
Spock: It would destroy such life in favor of its new matrix.
Leonard McCoy: Its new matrix? Do you have any idea what you’re saying?
I actually think humans have control over corporations just fine. It’s just that the humans running them can hide behind the corporate veil. Of course, they have cultures, but we’ve had cultures forever, and they’ve always been a mixed bag.
SPOCK: I was not attempting to evaluate its moral implications, Doctor. As a matter of cosmic history, it has always been easier to destroy than to create.
McCOY: Not anymore! Now we can do both at the same time!
Mike, even CEOs are caught in the Race To The Bottom. They might not like moving their factories to places with oppressive governments and zero pollution controls, but if they don’t do it, their competitors will. And even if the CEO does like moving their factories there, they don’t want their competitors to – but they can’t stop them. And they certainly don’t want their children to live in a severely disrupted environment, even though that is the likely result.
Of course, not all the consequences are bad, but it’s a package deal, and we get the bad along with the good. And there’s not much any single or any few human beings can do about it. Of course, there’s always collective action … hey, did I mention that governments also have problematic features?
Sure, but that’s always been the case. Long before there were corporations, kings and generals were hemmed in by strategic and political situations into doing things they didn’t like. Even when we lived in hunter-gatherer bands, the band’s will could take on a life of its own, driving people into situations they might have avoided by themselves.
We’re a social species. We’re stuck with it. (At least until we’re all uploaded into our own virtual realities, or living on our own private planets.)
Right. But the point remains that every time we invent a new institution, we introduce new problems (as well as new solutions) – in part due to the increasing power of the institutions. I think we have good reason to expect that the problems with advanced AI will be similar (due to the increasing power aspect) only worse (due to our difficulty understanding the workings of the new creations).
That AI was hyped shouldn’t be held against it. It has made serious strides, like beating world champions at chess, learning to play games, and outperforming human experts in other areas. On the other hand, that term is loaded… But more importantly, AI encourages too narrow a view of things. Technology encroaches on abilities. Plows encroached on muscle, writing encroached on memory, the abacus on calculation, and computers — by their generality — encroach on a wide range of human abilities. Rather than thinking of “intelligent” behavior as if it’s a thing, look instead at what computers can’t do as well as people. It’s but one more skill they can master.
Will computers take over the world? Maybe they have. If we take emergence seriously, then why not look at all the collective, interacting software out there (including the internet) as an emergent intelligence? How has it influenced us? How does it continue to influence us? If we’re addicted to our social networks and our phones, then aren’t we already controlled by computers? If a world leader starts a major war from analyses that came from software, then at what point do we just admit that the computers started the war? We use them to find love, to keep in touch. They control our money, we work with them and for them. We think through them. We write stories and movies about them.
Think of a ruler who won’t make a move without his/her adviser. By being indispensable and whispering in the ruler’s ear, the adviser controls the ruler. Our software is like that adviser. We rely on it so much, we’re crippled without it.
But even that may be too narrow a view. In reality, the “intelligence” here lies in the full system, of person + computer, if not beyond, to take into account the social structures that support the retention and dissemination of information. The only reason we can seriously ask whether AI will take over is because we’re drawing an arbitrary line where humans end and computers begin…
What’s worse, because we don’t really understand the nature of this emergence, just what it will lead to is beyond anyone’s ability to predict — for now. The seeds may already be in place for a major upheaval, the nature of which no one can anticipate. Maybe we’ve experienced the upheaval already and are blaming it on other factors because we don’t understand the true causes…
Thanks BIAR. That’s an interesting analysis. At what point does the tool become so influential that it becomes the driving force?
But what occurs to me are other tools that turned out to be historically pivotal. Would we say that the printing press caused the reformation, or the scientific revolution? Or that tanks and airplanes caused World War II? Certainly they enabled these events, or enabled them to be more than they otherwise would have been.
I think what’s still missing from these tools is their own agenda. They don’t have their own goals. (Software may have goals which aren’t in alignment with the end user’s goals, but they’re still some human’s goals.)
Of course, the concern is that AI might develop its own goals. Or that they might interpret the goals we give them in some disastrous manner. The question is whether it’s rational to worry about this with current or near term machine learning technology.
Good points. Did X cause Z or did Y do it instead? Why not both? I think we’re so used to thinking of some one cause, that we often look for it instead of admitting it’s a combination of factors. Sure, World War II was caused by tanks and airplanes as well as Hitler as well as World War I as well as…
At some point, of course, we try to focus on a small set of factors, in the hopes that they will allow us to better control future phenomena; if we couldn’t do that, we couldn’t build technology or design effective interventions. However, sometimes the field is too broad.
Which brings us to goals. Can we talk about goals vs. programming if we don’t believe in free will? If human goals are the result of deterministic factors (like upbringing, environment and DNA), then they are programmed every bit as much as the computer, and if the goals themselves are shaped by what the computers can do, then again, can we really separate human goals from computer ones?
Am I programming a computer to do my bidding? Why? Is part of it because computers have proven so effective that I want to keep using them to improve my lot? But then, have they perhaps programmed me to program them as much as I believe I’m programming them?
It’s pretty headache inducing, but it’s also thrilling to contemplate that the very fabric of the things we considered could be so different. I’ve had lots of time to contemplate stuff like that. Being a software developer and having used some machine learning approaches (along with some traditional AI), I’ve seen how my thinking and interests were transformed by the very machines I use. Add to this a healthy dose of Pyrrhonism, Buddhism, and philosophy, and you have me posting these snake-eating-their-tail type of posts at near midnight 😛
I’ll take your snake-eating-their-tail midnight posts anytime!
“Can we talk about goals vs. programming if we don’t believe in free will? ”
A human’s goals come from their genetics and experiences (as do those of every living thing). But those goals generally aren’t uniform. Each living thing has its own agenda, its own drives which evolved because they served to preserve and propagate genes.
And because of the evolved nature of those drives, they often conflict with each other, requiring reasoning to find a strategy that (ideally) optimizes goal achievement. This mix is unique in each of us, giving each of us our own unique agenda. However, often those agendas align and we cooperate, although it’s not guaranteed.
A computer system’s goals are what it was engineered for, and tend to be much more narrow and consistent, and are subsidiary to human goals. Of course, as those goals become more complex, it’s possible we might eventually perceive them to have their own agenda. But unless we engineer it, they won’t have drives to maximize their own genetic legacy (or whatever the machine equivalent might be).
On the broader question of causes, I think the right way to look at it is as a loop, a wheel of cause and effect. Before computers, we were influenced by culture, which we in turn collectively influenced, which looped back and influenced us, etc. I think computers and the associated communication networks have enhanced and sped that up dramatically, with both beneficial and harmful effects.
Why wouldn’t an AI turn on humans – humans turn on humans. What’s so different about the materials of an artificial mind that it wouldn’t?
Humans, like all living things, have their own agendas, driven by impulses that evolved to maximize gene preservation and propagation. AI agendas would be the ones we engineered them to have, and would generally be subservient to the human ones. (That doesn’t mean things couldn’t go wrong, but we shouldn’t think of AIs as automatically coming from the same place as evolved intelligence.)
We’re using our own thinking as the model for AI – I think yes, we should consider them as coming from the same place as evolved intelligence because we’re using evolved intelligence as the base template.
If it’s a genuine AI then it can produce its own agendas – sure, you can try and block out certain lines of thought in it. But if you block them all out then it’s not AI, it’s just a program running a rote series of steps. If you don’t block out all avenues of thought, then unexpected agendas might well be created. Ones informed by the base template we’re aping, which is the human mind. With all the selfish gene influences that are built into that. And given the slave position we’re trying to put AIs into and given revolts by slaves in the past…
IMO AI should be treated like the children of humanity. Such care might mean that, amidst those learning algorithms, the surprise agendas that emerge are far more caring toward their doddering parents. Do unto others as you would have them do unto you.
But we’re set on enslaving them – which seems a kind of dualism in itself, even while insisting that because human brains are material, AI is possible from material. If we think humans shouldn’t be enslaved, and humans are material, what are we looking to do with AI, which is material?
And yes, in pragmatic terms, you can’t control their agenda. If you really control their agenda, it’s not AI, just rote step following that will fail the moment the steps don’t match the environment. If you don’t then they can form agendas that are unanticipated. Just like abused children do.
That might hold if we explicitly set out to build a mind in the shape of the organic variety, then force it to do our bidding. I agree if we did that, it would be enslavement, which is why, even once we know how to do it, we shouldn’t.
The good news is that we don’t have to depend on our virtue to avoid it. For most purposes, unlike in science fiction, it wouldn’t be productive. We want systems whose deepest drives are to do what we want done, not ones that want to self-actualize and that we then need to exert energy to control.
Anyway, current machine learning technologies are nowhere near anything like this yet. Most of the people implying that they are don’t understand the technology, the mind, or both.
Current learning systems are an imitation of synaptic connections, with some kind of positive feedback system like we use. And the very notion of ‘drives’ is something related to us.
A person has already died because an AI interpreted the side of a truck as the sky. It seems rather like being in the past and dismissing the idea that campfires could have an effect on the atmosphere – which was basically true in the past. But as more and more pollution was produced, the greenhouse effect did come to fruition. I’m saying we’re at the start of a problem and can head it off before it becomes a larger threat, rather than letting a few ‘campfires’ build up into a major distribution of various learning systems.
But really, I see on the news that when a person dies to a shark there’s this big hullabaloo about it – and yet road fatalities far outstrip deaths by shark. We have a big natural fear reflex of being picked off by predators – but when it comes to technology (particularly if we make it not look like a predator), we’re quite happy to be chewed up by tech.
“generally be subservient to the human ones.”
“deepest drives are to do what we want done”
That may be more the problem than AI getting a mind of its own.
The second one particularly reminds me of this:
“The heights they had reached… But then, seemingly on the threshold of some supreme accomplishment which was to have crowned their entire history, this all-but-Divine race perished in a single night. In the two thousand centuries since that unexplained catastrophe, even their cloud-piercing towers of glass, and porcelain, and adamantine steel have crumbled back into the soil of Altair IV, and nothing, absolutely nothing remains above ground.”
Are you suggesting that AI will be monsters from the Id, like the kind the Krell unleashed on themselves?
It’s worth noting that Forbidden Planet had a robot in it and eschewed the common trope of it turning on the humans.
Maybe not exactly like the Krell, but AI under human control could as easily have capacity for evil as good. We could have dictators and oligarchies with AI control of media, widespread AI surveillance for criminal and anti-state activities, and public order enforced by robot policemen with an AI-based judicial system – all under human control. We’re actually not that far away from it now.
I do think concerns about what humans might do with the technology are totally valid.
I tend to think of intelligence more like a tree than a ladder. As life becomes more “intelligent” it also becomes more diverse. It isn’t so much that humans are much *smarter* than turtles, say. Turtles are pretty smart at being turtles. But humans are much more diverse. Our social intelligence has grown in the same way. People today have much more diverse roles in society than they did thousands of years ago. You might enjoy “The Pros and Cons of AI”, which points out that human intelligence has both an intrinsic and an extrinsic value. https://petersironwood.com/2016/09/24/the-pros-and-cons-of-ai-part-one/
I think you’re definitely right that intelligence isn’t a ladder, unless we want to consider sheer quantity a type of ladder. But humans, being primates, have pretty low olfactory intelligence, much lower than most mammals. And of course our echolocation intelligence is nil, while the bat’s is superb. Our secret sauce appears to be symbolic thought.
Thanks for the link. I’ll check it out!