Is AI consciousness an urgent issue?

AI consciousness seems like an easier thing to ponder when you approach it from a functionalist viewpoint.

Sunday I watched the movie The Creator. The premise is a few decades in the future, we’ve managed to create sentient robots. At first, all seems well, with them being a boon for humanity. Then a nuclear bomb goes off in Los Angeles, apparently detonated by an AI, which leads the west to ban all AI. However, some southeastern Asian societies continue to shelter them, so the west goes to war against those countries to eradicate the AIs.

Lest we have any confusion about where our sympathies are supposed to lie, the AI robots have human faces and display human emotions. And the trailers are explicit that the main MacGuffin is a small robot with the face of a little girl, who turns out to be the ultimate weapon to be used against the west and the orbital platform it uses in the war to bombard AI bases.

The movie has its moments and is fairly entertaining, although the science is ludicrous, which makes it really more of a fantasy. It ends up being a commentary on our attitudes toward the other and the use of asymmetrical warfare to impose values on societies that think differently.

But the movie also highlights a common set of anxieties in our society about AI. On the one hand are the doomers, people worried AI could mean the end of our species, or at least be dangerous for us, and so should be banned or strictly controlled. On the other are those concerned about how fairly we will treat these beings we bring into existence.

That seems to be the concern featured in a recent Nature article about comments to the United Nations by three leaders of the Association for Mathematical Consciousness Science. The article notes a paper we discussed a few months ago on proposals for how to determine whether AI is conscious. The overall thrust of the piece is that we need to get a handle on when these systems might become conscious by funding more research in this area.

Since the call for funding sounds relatively modest, I have no particular issues with it. But I tend to doubt it would be able to drive any consensus on this issue. Consciousness, in my mind, remains something in the eye of the beholder, with no strict fact of the matter, and with people arguing past each other with different conceptions. This seems like more of a philosophical problem than a scientific one. Not that science can’t inform the discussion. 

A large part of the issue is the particular conception of consciousness many hold, one that involves something in addition to the capabilities of the system in question, something ineffable, unanalyzable, and inaccessible to third party observation. I wonder how science is supposed to get a grip on something like that, particularly for systems with radically different mechanisms.

A more productive stance, I think, is recognizing that this conception arises from one source: introspection. And we know from decades of psychological research that introspection is unreliable. It gets the job done, enabling language communication and complex social dynamics, but as a guide to the mind, it’s a limited tool, often a misleading one.

How then should we assess consciousness in AI? For myself, if a system consistently and reliably demonstrates the following functional capabilities and attributes, then I’m going to think we should be careful about how we treat it:

  1. Predictive models of its environment and itself
  2. A foundational concern for its own state and continued existence, including short term reactions that stress the system but can’t be easily dismissed

Even if it doesn’t have some elusive indefinable quality of conscious animals, I think an artificial system with these impulses will trigger our moral intuitions. And overriding those intuitions risks weakening them for how we treat each other and other living species.
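
To make that a little more concrete, here is a rough toy sketch in Python of what I mean by a system having both 1 and 2. The names, numbers, and logic are all made up; the only point is that both criteria are ordinary, inspectable machinery rather than anything ineffable.

```python
# Toy illustration of criteria 1 and 2 above. Hypothetical names and thresholds.

class ToyAgent:
    def __init__(self):
        self.world_model = {}                             # criterion 1: predictions about the environment
        self.self_model = {"energy": 1.0, "damage": 0.0}  # criterion 1: predictions about itself

    def predict(self, observation):
        """Criterion 1: anticipate what the environment (and the agent itself) will do next."""
        # A real system would use a learned model; here we just remember the last observation.
        self.world_model["last"] = observation
        return observation

    def viability(self):
        """Criterion 2: a foundational concern for its own state and continued existence."""
        return self.self_model["energy"] - self.self_model["damage"]

    def act(self, observation):
        self.predict(observation)
        # A "stress" reaction that can't easily be dismissed: low viability preempts other goals.
        if self.viability() < 0.2:
            return "seek_safety"
        return "pursue_task"
```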

It’s worth noting that when I spend time with LLMs, I don’t detect a whiff of these attributes, at least not yet. I do see the beginnings of 1 in self driving cars and other autonomous systems. That’s significant, because I think adding 2 to a system that already has 1 wouldn’t be that difficult. But 1 has proven to be hard, very hard. Even those autonomous systems currently need a lot of assistance from vast databases that animals manage to get by without. 

It seems highly improbable to me that we’re going to get the combination of 1 and 2 by accident. And 2 won’t be useful in most engineered systems, except in an instrumental fashion subservient to meeting its other design goals. Which is why I tend to think most doomer and ethicist concerns about AI are misguided.

Not that there aren’t dangers. Any new capability comes with risks. Our ancestors couldn’t master fire without the risk of being burned, sailing without the possibility of drowning, or flight without its obvious dangers. It’s certainly conceivable that humanity could destroy itself with poorly designed AI. But I think that danger is overblown, and the calls to ban research are an especially short-sighted and misguided approach to dealing with it.

The dangers I most worry about are what people will do with AI, rather than what AI itself might do. Consider what a police state, armed with AI monitoring systems, might be able to impose on its population. And AI doesn’t have to be sentient to cause massive economic disruption. Politics in the developed world are currently rife with nationalistic backlashes against the changes from globalization. But a lot of those changes are coming from technology, and it seems like they’re only going to become more pronounced. All of these changes bring benefits with their pain, but the pain is real.

So there are dangers. But most of the concerns right now seem aimed at the wrong things.

Unless of course I’m missing something.

67 thoughts on “Is AI consciousness an urgent issue?”

  1. I’ve just finished reading Stuart Russell’s “Human Compatible: AI and the Problem of Control”. The title neatly sums up the AI dilemma. The book is concerned with what Russell calls “provably beneficial AI”, a sort of general all-purpose AI. For instance, this type would have access to the entire web, libraries in any language, and any street corner or security camera. Like most, my intro to AI has been books and movies, Isaac Asimov’s “I, Robot” being a memorable early read.
    Now after reading this post I’m wondering: are intelligence and consciousness part and parcel, and if not, do you need both for an AI to do its beneficial tasks? I tend to think not. AI will never have an ich or a passionate outburst…
    unless we programme it to do this, and therein lies the very real danger.
    Some have said the last thing we need to do is make AIs look like us and express and show human behaviours. Why? Because it would make it harder to switch them off, especially if an AI needs to fulfil its task while pleading with you not to… and if it is smarter than us it could have figured out a way for you not to be able to do so, e.g. disabling the switch because it has a task to complete.
    We do need AI, however, to understand human behaviour in order to be beneficial (intelligence), but for how and why we need this, I can only refer you to Russell’s book (for one explanation). These questions are complex and very open ended at this point in time, and if you are a novice AI inquirer like me, you will need more background information to see the bigger picture of the problems AI developers face.
    All in all, as Russell points out clearly in his book, we humans have an AI problem: to research it, lay policy foundations, and find a way to proceed safely and remain in control.
    All that said, I’m not down on AI, far from it. Apart from Google Maps, search engines and the like, I don’t think I’ll be around when superintelligent beneficial AI is ubiquitous.

    1. I actually own Russell’s book, and remember reading the early portions of it, although it’s been a few years, so not sure why I stopped. (It was right before the pandemic, so external events may have been a factor.) I have read some of his articles.

      My take on the relationship between intelligence and consciousness is that consciousness is a type of intelligence. So a system can be intelligent without necessarily being conscious, but being conscious does require at least some degree of intelligence. Many insist they’re completely separate, but that seems to lead to a non-functional form of consciousness along the lines I express skepticism about in the post, one that science can’t really do much with.

      I do think AI can be beneficial in trying to understand how human (and biological) minds work, although the more we make them like us, the more the ethical issues start to arise, and there are genuine dangers in going that route. I don’t think we should try to create sentient systems unless we’re prepared to commit to taking care of them. For most AI purposes, keeping them intelligent but non-feeling seems the way to go.

      Anyway, appreciate your thoughts. Thanks for commenting!

  2. Merry Christmas Mike! I must say I’m a little surprised at your criteria for consciousness, as well as your dismissal of the potential of LLMs. I agree that 2 is a good shorthand proxy for consciousness in some contexts, but I also think it may be radically misleading in other contexts (e.g. we can imagine very selfless conscious agents who don’t give a damn about their own survival).

    Regarding LLMs, I think you definitely do need some pretty sophisticated environmental and self modeling if you want to expertly navigate through semantic space, despite the dismissals of the “stochastic parroting” crowd. Not saying that LLMs have detailed human-like representations of the world, for obviously that’s not the case, but I see them as being very much on the right pathway. It’s true that most LLMs don’t have sensory grounding, but if you look at the benchmarks, it doesn’t seem that adding multi-modal capacity adds much, if anything, to the functionality of LLMs. It seems they’re doing just fine without sensory capacities. Chalmers also argues that sensory grounding is not needed for LLM consciousness here: https://philpapers.org/archive/CHADTR.pdf

    It’s also true that self-driving cars are agents, whereas LLMs are not. But it’s trivial to convert something like ChatGPT to AutoGPT. It seems the hard part of cognition is more forming the right representational and modeling capacities, as opposed to executive decision making. Of course AutoGPT kind of sucks right now, but that’s because the tasks it engages in are really hard! I also think there might be a bit of a bias in accounting for emergent behavior. We see scientists and philosophers constantly being surprised by the emergent capacities of LLMs, which sometimes seem to come out of the blue. I think what’s going on here is that they’re simply using the wrong metrics for performance.
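
    To give a sense of why I say the conversion is trivial: an AutoGPT-style agent is basically a loop wrapped around the model. Here is a rough sketch, with call_llm standing in for whatever chat-completion API you would actually use; only the structure matters for my point.

```python
# Rough sketch of an AutoGPT-style loop around a language model.
# `call_llm` is a stand-in for a real model API; only the structure matters here.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual model call here")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Steps taken so far: {history}\n"
            "Propose the single next step, or reply DONE if the goal is met."
        )
        step = call_llm(prompt)      # the representational heavy lifting
        if step.strip().upper() == "DONE":
            break
        history.append(step)         # the "executive" part is mostly bookkeeping
    return history
```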

    It’s like if you’re doing a math problem and know a lot of great arithmetic but don’t know something simple like how to carry the 9’s in long division. Obviously that’s going to really screw with your division and you’ll end up writing a lot of wrong answers. But suddenly you realize what you were doing wrong and now your answers will be noticeably improved. If all we looked at was the simple binary metric of how many math problems you got right or wrong, we would fail to appreciate that the wrong-answer Mike was actually pretty damn close in arithmetical understanding to the right-answer Mike, and the transition in functionality between the two might come as a shock and surprise.
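
    Here is a quick toy version of that point in code, swapping in multi-digit addition for the long-division example since it’s easier to script: a single missing carry rule drags the whole-answer score way down even though much of the digit-by-digit work is still right, so a binary metric understates how close the procedure is to correct.

```python
# Toy demo of the "wrong metric" point: a single missing carry rule.
# (Multi-digit addition stands in for the long-division example above.)
import random

def add_without_carry(a: int, b: int) -> int:
    """Column-by-column addition that does everything right except carrying."""
    result, place = 0, 1
    while a or b:
        result += ((a % 10) + (b % 10)) % 10 * place   # the carry is silently dropped
        a, b, place = a // 10, b // 10, place * 10
    return result

random.seed(0)
pairs = [(random.randint(10_000, 99_999), random.randint(10_000, 99_999)) for _ in range(2_000)]

exact = sum(add_without_carry(a, b) == a + b for a, b in pairs) / len(pairs)

def digit_match(pred: int, true: int) -> float:
    p, t = str(pred).zfill(6), str(true).zfill(6)
    return sum(x == y for x, y in zip(p, t)) / 6

per_digit = sum(digit_match(add_without_carry(a, b), a + b) for a, b in pairs) / len(pairs)

print(f"whole-answer accuracy: {exact:.1%}")      # the binary metric looks dismal
print(f"per-digit accuracy:    {per_digit:.1%}")  # the graded metric tells a different story
```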

    I think this is definitely the case as well when it comes to LLM-agents like Auto-GPT. Right now, one of their big deficiencies lies with planning. For instance they might plan to do X, Y and Z, but fail to coordinate these tasks, even though they are more than capable of doing each of them in isolation. But now introduce a planning module which takes input from multiple LLM models and assigns each of them individual tasks, and suddenly you might have a huge emergent spike in capabilities. There are approaches right now which take inspiration from the prefrontal cortex, and which have seen promising results. See: https://arxiv.org/pdf/2310.00194.pdf
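
    And here is a bare-bones sketch of the general planner-plus-workers idea (not the architecture from that paper, just the shape of the thing, again with a stand-in call_llm):

```python
# Bare-bones sketch of a planner module coordinating multiple LLM "workers".
# `call_llm` is a stand-in for a real model API; only the structure matters here.

def call_llm(role: str, prompt: str) -> str:
    raise NotImplementedError("plug in an actual model call here")

def solve(goal: str) -> str:
    # Planner: decompose the goal into subtasks (one per line in this toy protocol).
    plan = call_llm("planner", f"Break this goal into independent subtasks, one per line:\n{goal}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # Workers: each subtask goes to its own model call.
    results = [call_llm("worker", f"Complete this subtask:\n{task}") for task in subtasks]

    # Planner again: integrate the partial results into a final answer.
    return call_llm("planner", f"Goal: {goal}\nPartial results: {results}\nCombine these into a final answer.")
```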

    Anyways, I’m very optimistic (and simultaneously worried) about the current progress of LLMs. Another thing which really impressed me lately was the performance of BabyLM models (models which are trained on 10,000x less data, and which more closely resemble human learning), which as of late have come pretty close to replicating LLM functionality: https://aclanthology.org/2023.conll-babylm.2.pdf

    1. Merry Christmas Alex!

      I do think LLMs are overhyped. I don’t want to denigrate the technical achievements, which are real and amazing. I’m also not saying that many of the techniques used in creating the GPT engines won’t eventually be useful for general intelligences.

      But people who think just continuing to refine that specific approach is going to suddenly have these systems “wake up” are, I think, seriously underestimating the difficulties. I just see little evidence that they’ve broken through, or even dented, the barrier of meaning.

      Granted, I’m judging them by my own interactions, which have mostly left me with the impression of a glitzy and unreliable search engine with a layer of BS added. All I see is something tuned to provide verbiage that looks convincing, with no underlying knowledge.

      But I’m not following them as closely as it sounds like you are, so it’s totally possible I’m missing something here.

      On selfless conscious agents, I guess the question I’d have is, what makes that system conscious? What are the criteria for the label? Are you saying that world models are sufficient for consciousness? Or even just language models? If so, then what separates a system conscious in that way from another which isn’t?

      I do think you’re right to be optimistic and worried, but around the issues I mentioned at the end of the post. AI-generated art, for instance, doesn’t have to be as good as the human versions to displace human artists. Although it may be that human artists end up using generative models in their work, and just reach new heights that artists doing it by hand, or people like me just asking it to generate a picture, can’t reach.

      1. Hey Mike,

        “Granted, I’m judging them by my own interactions, which have mostly left me with the impression of a glitzy and unreliable search engine with a layer of BS added. All I see is something tuned to provide verbiage that looks convincing, with no underlying knowledge.”

        Yeah I was trying to respond to this type of reasoning when I talked about how surprise at emergent behavior is in many ways just the result of using the wrong metrics for performance, and I think you might be falling into the same trap here. I have no doubt that GPT-3/4 is unimpressive in many ways (although I would also caution that prompt engineering goes a long way…), but that’s not necessarily indicative of its cognitive potential. Just because a system has 10% of the end user functionality (the functionality that an entity exhibits when interacting with its environment, not the internal functionality of its parts) that a human being does, doesn’t mean that it has 10% of the cognitive capacity of a human! I tried to hint at this with the example of arithmetic, where a small error in arithmetical reasoning can lead to really bad results (10% functionality) but isn’t indicative of a huge gap in understanding (~10% understanding).

        If we didn’t understand what was happening “under the hood” for poor arithmetic Mike, we might think he has absolutely no mathematical understanding whatsoever, but that would be a faulty conclusion. Of course that doesn’t by itself tell us that GPT-4 is close to human level consciousness, but it’s an argument against relying on end-user functionality metrics as an indication of consciousness.

        “I just see little evidence that they’ve broken through, or even dented, the barrier of meaning.”

        I think the same above point also applies here. Meaning, we have learned, is not some “built-in” feature of the human brain, but an emergent phenomenon of the operations of its neural networks. You first learn that syntactic token A is correlated with syntactic token B, and you gain a little bit of understanding of its meaning. You learn a few more correlations with A, and get an even better understanding. Then you start thinking about counterfactuals and what would happen to A if you made various interventions with other tokens, and you start to develop a causal understanding. Then you start learning about the relationships between syntactic tokens and certain sensory states and you get an even deeper understanding and so forth. There’s no real point where you can say that the model finally “understood the meaning”.

        I do think that LLMs must have pretty sophisticated world models under the hood. Now how that would “feel” from the inside I can’t say, but I wouldn’t be surprised if they had a sense of time and space, even if they don’t yet exhibit the rich phenomenology we associate with human conscious experience (e.g. complex sense of self, visual field, emotions and affects).

        And yeah I would say that if you have human-level representational capacities and world modeling, then you’re pretty much human-level consciousness. Not sure how to answer your question about how we might demarcate conscious human-level world model systems from unconscious ones, since I can’t think of an example of the latter.

        I will end this post by re-emphasizing my point that “differences in end-user functionality ≠ differences in consciousness/understanding”

        Humans are pretty superior to chimpanzees in terms of their end-user functionality, but not much different in terms of their neural architecture. As far as I know, human brains are basically standard primate brains, just scaled up; only a factor of 4 or so in the parameter space separates us from Chimpanzee functionality. In terms of Moore’s law that’s like 2-3 years of progress. So it’s not unreasonable to expect that we might progress from a chimpanzee equivalent AI to a human-level AI in <5 years just from scaling, and shortly after that to ASI. If you were just relying on end-user functionality tests, you would remain woefully unimpressed up until the very last year when AGI is built.
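
        To spell out the arithmetic behind that “2-3 years” remark (assuming a Moore’s-law-style doubling every 12-18 months, which is of course itself a big assumption):

```python
import math

param_gap = 4                       # the rough human/chimp "scaling" factor mentioned above
doublings = math.log2(param_gap)    # = 2 doublings
for months_per_doubling in (12, 18):
    years = doublings * months_per_doubling / 12
    print(f"{years:.0f} years at {months_per_doubling} months per doubling")  # prints 2 and 3
```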

        My predictions based on my understanding of the architecture of LLMs are AGI in the 2030-2050 timeframe, and ASI shortly after that. About the functionality of ASI I remain completely agnostic; I find anything that ranges from “really really smart humans” to “God-like entities that make us look like ants” plausible within this century.

        Depending on how LLMs progress in the next couple of years, I would shift my credences towards the beginning or the end of that timeframe, and I’m also not ruling out scenarios where a cultural backlash might lead to over-stifling regulations and the complete abandonment of that timeline either. And yeah I’m hella worried, and not about AI art either!

        As always, it's been a pleasure chatting.

        1. Hi Alex,
          The gist of your arguments seems to be that we’re a lot closer than it looks. But since we’re talking about purportedly emergent capabilities, the only measure we have of how close we are is the system’s performance. But it’s exactly the performance which I think people are being over-impressed with.

          Part of my skepticism comes from knowing something about the last 60 years of AI research. We’re always “almost there”. But we’ve stayed “almost there” for decades. I think it’s because each generation of technologists drastically underestimate the difficulty of the goal.

          Consider, do we have a system yet which can navigate its environment as well as a honeybee, a cockroach, or even a fruit fly? I think we have a long way to go for engineering intelligence we see in the animal kingdom, much less humans.

          Just to be clear. I think the mind is a 100% physical system. I have no doubt we’ll eventually be able to reproduce its functionality. But I also try to be realistic about our current status. I’d be delighted to be wrong. In the end, I guess time will tell. We’ll have to see how the next few years play out.

          The pleasure is definitely mutual! Love having people to discuss this stuff with.

          1. Well I hope we can agree that the “for 60 years predictions have been wrong” line is pretty weak Bayesian evidence. I mean if you had nothing else than that piece of evidence then yeah sure, but by itself it doesn’t seem that impressive. At some point that line of reasoning is going to stop being true.

            How can we measure progress besides relying on performance? Well there are alternatives like an understanding of architecture, as well as paying attention to the rate of progress and making extrapolations, as opposed to just looking at the current state of performance.

            Now obviously exponential progress is hard to measure and can quickly lead to absurdities if you don’t take into account hard limits (e.g. at the current rate of population growth, the human pop will be 10^120 in 50,000 years time). But the issue here is that the hard physical limits don’t seem to be anywhere near the human intellectual capacity. We’re stuck at our cognitive level because of idiosyncratic evolutionary constraints like metabolic consumption and the size of the birth canal.

            Anyways, there’s obviously a lot more I want to say here which can’t be condensed into a short blog comment. I guess we should probably agree to disagree at this point.

            P.S. I can’t wait to send you my article on the hard problem if you’re still interested. It’s sooo close to completion. I should have it out before the year’s end. It’s the one which tries to solve the problem in a revelationist friendly way. But just in case we don’t speak again before then, happy new year Mike!

  3. A couple of thoughts on this. Firstly, it’s not going to take too much intelligence, just a non-human perspective, to conclude that the main threat to sustaining wellbeing in the next few decades is over-population and over-exploitation of the planet by humans, and that the solution is to greatly reduce the population. Even if AI did not itself have the physical means directly to implement this solution, it could be used by a government to set its policies to address this threat.

    Secondly, as you know I have my own thoughts on how consciousness works (as do many of us!), and it’s not too complicated (a strange loop would summarise it). However I do wonder if there might not be quite different levels of sophistication of consciousness in terms of the level of awareness of, and ability to modify, the interaction between self and the rest of the world – to read and write our own software, if you like. Might this mean that an AI could take consciousness to something well beyond what we experience and can get our heads around?

    1. A few years ago I would have agreed with you completely on the need to bring down the population. I’m less confident now that it’s the only solution, although if it can be done without economic chaos, I definitely agree it would help. To some degree, the empowerment of women in developed societies will, I think, work to eventually bring it down naturally. But it will take time, particularly in countries that are still in the early stages.

      A strange loop as in Douglas Hofstadter? I really should read his book at some point. It’s one of many I’ve owned for years but never got around to finishing. I’ve always taken that phrase to mean self reference, which would be included in what I call models of self.

      On AI consciousness, good question. I guess it would depend on what we mean by “consciousness”. But just as symbolic thought brought humanity to a new level of intelligence beyond most animals, one they can’t even conceive of, the question is whether AI could go to levels beyond what we can conceive. Would we even be able to know that they’d succeeded? We can’t even describe symbolic thought to non-human animals. AI may be similarly unable to tell us about whatever new levels it has access to.

  4. In my opinion AI consciousness is not remotely an urgent issue, though for a much stronger reason than Mike’s presumption that we won’t let our progressively more conscious LLMs and such take control and destroy us. My position is that causality mandates some sort of consciousness physics which either will or will not exist in a given situation, and probably in the form of an electromagnetic field associated with certain parameters of synchronous neuron firing. This would be a causal explanation since processed brain information would exist as such by informing that field to provide associated consciousness characteristics (like a visual scene, a pain, a hopeful feeling, and so on). Instead today it’s quite standard to believe that processed brain information needn’t inform anything to exist as consciousness. Thus “thumb pain” should result if the right marks on paper were converted to the right other marks on paper. Here the prospect of AI consciousness can get unpredictable and scary since it becomes more like a magic genie emerging from our computers. Yikes!

    Mike, I wonder if you’ve given any thought to my brief recent post about scientists using intelligible (and thus synchronous) neuron firing from speech areas of someone’s brain to put words on a computer screen that approximate what the subject was trying to say? It seems to me that this is strong evidence for the validity of McFadden’s cemi field theory. Beyond scientists potentially going further to tell us many other things about someone’s consciousness given EM field monitoring, I suspect they’ll go the other way as well. Here they may add EM field transmitters (rather than detectors) to someone’s brain that thus disrupts EM field based consciousness. Then if successful I’d expect them to learn transmission forms which don’t just tamper with someone’s consciousness, but even cause it to function in intelligible ways. Thus energies which create a general blueness to someone’s visual field, or perhaps the creation of a humming sound that could be modulated into intelligible words in that person’s consciousness? I wonder if evidence like this would suggest to you that consciousness probably does instead exist in the form of such physics rather than no unique sort of physics at all?

    1. Eric,
      I figured you’d be more skeptical than I am about AI consciousness, for pretty much the reasons you lay out.

      But it seems like we’ve established that you and I are talking about two different conceptions of consciousness. Yours strikes me as a lot closer to the one I express skepticism about in the post. You add the hypothesis of an identity relationship between it and the brain’s electromagnetic field. But as I noted in the post, I think there are reasons to doubt that version of consciousness exists. Which for me leaves EM theories as solutions in search of a problem.

      On your post, I can’t say I see your reasoning. The evidence seems only for the fact that nerve spikes use electromagnetism, which of course is the whole point of what they’re doing. (So I think I agree with much of James’ initial response.) I don’t see how it establishes that the field beyond the neural membrane is being used for anything by the brain itself, much less that it’s equivalent to consciousness.

      If anything I think this (very cool) technology just shows the benefits of mainstream neuroscience.

      1. I think there is a good chance the EM field is involved in consciousness, although not necessarily as the “container” of it. Consciousness also may be more heterogeneous than we think. I sometimes think some of our experience is chemically caused – things like suffering, love, fear, even though all of those things can probably be traced to various brain circuits. The problem with those sorts of things is they tend to tinge not only the physical body but also much of what the brain does including the reasoning and cognitive activities.

        1. It’s been a while so I’ll go ahead and say this.

          My take is the local field potential could be part of the stochastic factors involved when a neuron is on the cusp between firing and not firing, but so could a lot of other factors like the current levels of various neurotransmitters and other chemicals. It’s worth noting that the LFP charge is often the opposite of the causal effects of circuits (inhibitory rather than excitatory). Based on what I’ve read, crucial circuits minimize these factors with thicker axons, myelin, redundant connections, and repetitive signaling.
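
          As a rough toy picture of what I mean by “on the cusp” (a leaky integrate-and-fire style neuron with completely made-up constants, not a model of any real circuit): a tiny extra push, whether from a field, a neurotransmitter fluctuation, or plain noise, only changes the outcome when the cell is already hovering near threshold.

```python
# Toy leaky integrate-and-fire neuron with made-up constants, just to illustrate
# that a tiny perturbation only matters when the cell is already near threshold.

def fires(drive: float, perturbation: float = 0.0,
          threshold: float = 1.0, leak: float = 0.1, steps: int = 100) -> bool:
    v = 0.0
    for _ in range(steps):
        v += drive + perturbation - leak * v   # integrate input, leak toward rest
        if v >= threshold:
            return True
    return False

for drive in (0.05, 0.0995, 0.2):              # weak, near-threshold, strong input
    base = fires(drive)
    nudged = fires(drive, perturbation=0.001)  # tiny extra push (field, chemistry, noise...)
    print(f"drive={drive}: fires without nudge={base}, with nudge={nudged}")
```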

          None of this rules out the EM field being involved in consciousness in some minute and nuanced manner, but I think it’s why most neuroscientists don’t see it as a promising approach.

          1. “LFP charge is often the opposite of the causal effects of circuits (inhibitory rather than excitatory)”

            That might be considered support for the idea that the EM field is transmitting information which may inhibit or trigger firing. I know that isn’t how Eric or McFadden thinks of it since they use the firing argument as support.

            I’m inclined to think the EM field, if it has a significant effect, has more local influence.

            I think Northoff has done some work showing that the whole firing system slows down in depressed patients. That would suggest there is some optimal communication speed and the system actually “feels” when the neurotransmitters are low in some part(s) of the brain. The effects could propagate across the brain if other areas had to slow down to match the speed of the affected areas.

            That leads to another idea I’ve had: that the main communication method across the brain is the rate/speed of neuron firing. That would make it more analog since the timings could vary continuously over time. It also suggests why the brain operates with widespread oscillatory patterns across ranges of time frames.

      2. Wow Mike, that was some pretty extreme denial! But then you also say that you don’t grasp my reasoning. So instead of trying to address each of your denials individually, it might be better to present my reasoning plainly so that you might work out the rest yourself? I’ll give this a try.

        McFadden proposes that the brain functions as a non-conscious computer. Here the EM radiation of standard neuron firing is essentially just noise. Furthermore given that it’s just random (since the firing to non firing is the point rather than the field effects) there shouldn’t be anything intelligible in this. Similarly there should be nothing intelligible in the EM radiation which emanates from the computers that we build. Of course our brains do need and get protection from EM field tampering effects, just as our computers need and get this.

        McFadden also proposes that everything we see, hear, smell, think, and so on, exist as EM field radiation associated with certain parameters of synchronous neuron firing. This creates an amplified EM field that rises above the general noise. So if all elements of your vision, fear, hope, and so on reside in an incredibly complex EM field associated with synchronous neuron firing, then this should be possible to empirically test. Here it ought to be possible to train a computer to detect the 39 phonemes spoken in English given EM field detection when someone is trying to cause their mouth to speak. Furthermore this is exactly what scientists have been able to do.
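
        Just to make the general shape of that kind of decoding concrete, here is a toy version: train a classifier on labeled “field recordings” and then decode unseen ones. The data is synthetic and the nearest-centroid classifier is something I made up for illustration, not the actual study’s pipeline.

```python
# Toy phoneme decoder: nearest-centroid classification of synthetic "field" features.
# Not the published study's method; just the general train-a-decoder idea.
import numpy as np

rng = np.random.default_rng(0)
n_phonemes, n_features = 39, 64          # 39 phoneme classes, made-up feature dimension

# Pretend each phoneme produces a characteristic (noisy) pattern in the recorded field.
templates = rng.normal(size=(n_phonemes, n_features))

def record(phoneme: int, noise: float = 0.5) -> np.ndarray:
    return templates[phoneme] + rng.normal(scale=noise, size=n_features)

# "Training": estimate a centroid per phoneme from labeled recordings.
train_X = np.stack([record(p) for p in range(n_phonemes) for _ in range(50)])
train_y = np.repeat(np.arange(n_phonemes), 50)
centroids = np.stack([train_X[train_y == p].mean(axis=0) for p in range(n_phonemes)])

# "Testing": decode unseen recordings by nearest centroid.
test_y = rng.integers(0, n_phonemes, size=500)
test_X = np.stack([record(p) for p in test_y])
pred = np.argmin(((test_X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
print(f"decoding accuracy: {(pred == test_y).mean():.0%} (chance is about {1/n_phonemes:.0%})")
```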

        I had no idea it would be this simple for scientists to begin validating McFadden’s theory in this direction. To me it seems like it should be quite difficult to identify intelligible elements of consciousness from a field which has that much complexity to it. So instead of implanting detectors I’ve proposed implanting EM field inducers. If all elements of vision, smell, and so on exist in the form of an extremely complex unified EM field, to me it seems like it should be far less difficult to produce exogenous EM energy which alters the brain’s endogenous EM field. Thus without otherwise knowing when they were being messed with, test subjects would be instructed to say if their expected consciousness seems disturbed in any way.

        If you now grasp my reasoning Mike, my question is how much of this sort of evidence would it take for you to suspect that McFadden’s theory is probably right, and your theory is probably wrong? How much “mind reading” by means of EM field detection would you require? Or “mind alteration” by means of EM field induction?

        1. Eric,
          As James discussed, people have been detecting the electrical activity in neurons for a long time, centuries actually. That they’re now able to do it with increasing resolution and ability to decode it in the brain only provides evidence that speech and language are neural activity.

          You’re arguing that they’re neural activity and field dynamics, that the fields are more than just a side effect, that they have some kind of functional role. To establish that seems like it would require isolating the neural activity from those fields. Since neurons use the EM field across their membranes for action potentials, I have no practical idea how you could go about doing that, and none of the propositions I’ve seen from you accomplish it. Sorry.

          I think I’ve said this before, but about the only option would be a simulation of the brain’s neural networks that doesn’t function properly unless field effects are taken into account. Obviously that’s not something we’ll be able to do anytime soon.
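
          To illustrate the kind of comparison I have in mind with that simulation option (a toy made-up network, nothing like a real brain model): run the same network twice with identical noise, once with a crude field-coupling term and once without, and ask whether its spiking behavior diverges.

```python
# Toy illustration of the simulation comparison: run the same little network with
# identical noise, with and without a crude "field coupling" term, and compare output.
# The dynamics are entirely made up; only the logic of the comparison matters.
import numpy as np

def run_network(field_coupling: float, steps: int = 200, n: int = 50, seed: int = 1):
    rng = np.random.default_rng(seed)             # same seed => identical noise in both runs
    weights = rng.normal(scale=0.3, size=(n, n))  # random recurrent connections
    v = np.zeros(n)                               # membrane-potential-like state
    prev_spikes = np.zeros(n)
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        field = field_coupling * v.mean()         # crude stand-in for a shared field term
        v = 0.9 * v + 0.08 + 0.1 * (weights @ prev_spikes) + field + rng.normal(scale=0.05, size=n)
        spiked = v > 1.0
        spikes[t] = spiked
        prev_spikes = spiked.astype(float)
        v[spiked] = 0.0                           # reset units that fired
    return spikes

without_field = run_network(field_coupling=0.0)
with_field = run_network(field_coupling=0.05)
agreement = (without_field == with_field).mean()
print(f"spike-pattern agreement with vs. without the field term: {agreement:.0%}")
```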

          1. So Mike, apparently your answer is that there’s no amount of brain EM field detection that could be done from which to effectively monitor someone’s consciousness, that would lead you to believe that the EM field itself happens to be that consciousness. And furthermore you seem to be saying that there is no level of human ability to alter someone’s consciousness by means of inducing EM energies in the range of standard synchronous neuron firing, that would lead you to believe that the EM field itself is that consciousness. To me this position does not seem consistent with the sorts of things that scientists effectively tend to control for.

            You said that the only way to test McFadden’s theory would be to isolate neural activity from field activity. This is actually what my original test proposes. Instead of directly causing synchronous neural firing, we’d just add those sorts of energies without the neural firing. So let’s say that a test results in a person’s field of vision going quite blue while certain energies are being induced in a certain part of the brain, with this otherwise halting. Of course scientists would also want to control for the possibility that the induced energies were actually affecting neural firing to create the effect rather than the altered field doing so. But even knowing that these energies should only cause firing to certain neurons that are already on the edge of firing, you don’t think scientists could determine which was which? And this would be regardless of how well scientists were able to manipulate someone’s consciousness? So if they could do something as fancy as lay printed words over someone’s vision, you don’t think it could be determined that this wasn’t instead caused by means of related neural firing? To me this sort of denial would seem convenient.

            There’s no doubt in my mind that scientists could test this proposal today given current technology, and progressively better and better over time. But will you grant me at least this. If scientists were able to add EM energies in the heads of test subjects around the parameters of synchronous neuron firing, and were to do this all sorts of ways without ever verifiably altering anyone’s consciousness for oral report, would you at least grant me that McFadden’s theory should then be dismissed? Or today do you effectively consider his theory unfalsifiable?

          2. Eric,
            For a scientific test to be successful, there must be some way to actually get the results. If we alter the EM field around someone’s brain, but not the neural processing in any way, then how are they ever going to be able to report any changes? What will cause the muscles involved in speech to contract? Normally this happens because nerve spikes come through the peripheral nervous system from the brain. But you’ve stipulated that we won’t alter the neural processing. So what’s the causal chain of scientifically getting the results of any changes from altering the field?

            But that aside, there’s what I understand to be another conceptual flaw here. From what I recall about McFadden’s theory, it isn’t just the EM field, but the EM field in symbiosis with the neural processing. Which means even under CEMI, you can’t just alter the EM field and have any meaningful effects without altering neural processing. (If I recall, one of McFadden’s proposed tests is to alter neural processing by altering the field, but most scientists would just see that as a subtle TMS type intervention.)

            That’s why I say the only experimental option with the brain I can see is to isolate the neural processing somehow, to remove the EM field as a factor. If EM theories are right, it seems like that would have profound effects on someone’s experience, effects that at least could traverse the causal chain to the outside world (even if all they do is completely shut down any ability to report). But as I noted above, the issue is that even neural spikes use the EM field across the neural membrane. So what we’re talking about is finding a way to insulate the neurons so that there can’t be any ephaptic coupling, without disturbing the neuron’s action potential or obstructing electrical synapses. It’s also worth noting that we’d have no way to verify that neural processing itself was still happening, since the isolation would obstruct EEG or MEG type monitoring, although maybe fMRI could still provide insight. In any case, this seems inaccessible with current technology.

            Which is why I landed on the simulation option, admittedly also not currently possible. McFadden also discusses that AI may not be conscious until it utilizes an EM field, which might bring us to the point I make in the post. If we can’t get there without utilizing EM fields, that seems like it would be a test. (Unless CEMI considers zombies a possibility, but any theory that does makes itself unfalsifiable.) So it may be falsifiable in principle (which is Popper’s standard), but maybe not with current technology.

          3. Okay Mike, I think I understand what you’re concerned about now. Apparently your understanding of his theory isn’t nearly as clean as mine is. But certainly challenge my perspective wherever you think it might be wrong. One quite unique element of my test is that I think reliable reports shouldn’t be hard to get. That’s never the case for testing AI or non lingual animals, and seems difficult enough in the field of psychology in general. My test should be relatively clean though.

            Let’s say that you’re the test subject wired up with EM field transmitters implanted in an appropriate vision area of your brain. Surely they’ll also have you hooked up with EEG and whatever else they think might help assess you.

            So you’re seated and casually talking with the researchers knowing full well that you’re supposed to report anything that seems unexpected or strange to you. And maybe nothing ever will seem unexpected or strange to you. In that case you wouldn’t report anything and collect your fee on the way out. Here even though you had no sign of it, they were occasionally inducing tiny EM energies in that part of your brain. Nothing to worry about though since these energies are endogenously quite standard. As most specialists today should predict, you’d never notice anything since these energies are presumed to mainly just be byproducts that rarely if ever have brain effects in themselves. With enough of that sort of result it seems to me that McFadden’s theory could justifiably be dismissed.

            Let’s say however that you do notice your vision getting blurry, for example. Theoretically what you’d do is report this, since the blurring itself should still leave your general cognition intact. And presuming the truth of McFadden’s theory, how exactly might your report transpire? Theoretically you, the EM field thinker, would acknowledge this blurring and so you’d decide to report it. That’s where the ephaptic coupling that you mentioned last time would need to come into play. The appropriate neurons for causing muscles to make your report would be set up non-consciously, though this would be tripped off by the energy of your EM field decision to report itself. Theoretically the same sort of thing happens every time you decide to do anything muscle based: EM field ephaptic coupling causes the right neurons that were on the verge of firing to do so, so that sort of muscle-based function occurs.

            Or let’s say that beyond vision blurring, 3 seconds of a certain EM energy also messes up your cognition for that period. Then subsequently you might notice that something funky just happened and so report it. Or this might be noticed by the researchers given your behavior, EEG, or other indications that there was something interesting about that particular energy transmission. If such tiny energies have effects that were formerly unknown, this should be notable for the field in general. And if it turns out that this is because consciousness itself exists in such a form (and I do realize how bizarre this idea must be for you particularly, unlike myself), then it seems to me that this could become science’s most extreme paradigm shift ever.

            So how about a simple account like that? Given my own understanding of his consciousness theory, shouldn’t such testing be possible with today’s technology?

          4. I hate to simply guess at what you’re asking here Mike. Could you be more specific? I ran some scenarios where you were rigged up to test McFadden’s theory. Maybe phrase your question in terms of that testing? Or you might even explicitly state some sort of flaw in the experiment such that McFadden’s theory wouldn’t actually be tested. I’ll need something more specific to go on though.

          5. Eric,
            Under what you describe, it seems like we get exactly the same results as if neural processing is the whole show, and all we’ve done is altered that processing. And since it’s the simpler explanation (in terms of fewer assumed entities), parsimony favors the standard neuroscience explanation. Unless you can identify what specifically in the observable results would mandate that the EM field (beyond the neural membrane) is a necessary part of the explanation, this is only evidence for neural processing.

            Sorry. If that isn’t clear, I’m not sure what else to say.

          6. In the first scenario that I presented, neural processing does seem to be the whole show. The standard presumption today is that the things we see, hear, think, and so on, do not exist in the form of an incredibly complex EM field. Furthermore if tiny energies appropriate to standard synchronous neuron firing were induced in your brain without it ever altering your vision, hearing, thought, and so on for oral report, then at some point we could legitimately conclude that McFadden’s proposal must be wrong. This is because if your endogenous EM field existed as your consciousness, then an appropriate exogenous EM field ought to tamper with it in noticeable ways.

            Then I addressed the possibility where such consciousness alteration becomes reported. In that case it would be up to scientists to explore the sorts of energies that seem to disturb one’s consciousness to see if it’s because the EM field itself becomes disturbed as predicted by McFadden’s theory, or instead something else becomes disturbed which additionally tampers with consciousness. Should McFadden’s theory be true, this sort of evidence should be quite telling.

            You could call his consciousness theory “neural processing” if you like, since neurons create the EM field. It wouldn’t have to be neurons though. If true then our machines might create “pain”, “pleasure”, and so on by means of creating the right sort of EM fields. But stipulating that I (or someone) tell you why an EM field would be required here, isn’t something that I consider mandated to validate my point. Similarly Newton didn’t need to tell us why mass attracts mass in order to become one of humanity’s greatest scientists. I realize that it may be inconvenient for you to acknowledge a science based way of exploring this question, when no such potential exists to validate or refute your favored position. It is what it is though.

    2. This debate that you and Mike are having is similar to ones we’ve been through. The dilemma is that even in McFadden’s theory the EM field is produced by firings but also creates more firings. It is completely interwoven with neural firings, so the effect of the EM field can’t be isolated from the other stuff.

      I did have an idea posted on your blog about using various neurological conditions like blindsight to test. If we detected weaker or less coherent EM fields in areas affected by blindsight but otherwise similar processing, it might be an indicator (probably not proof) that the EM field is critical to the process. This might be researched with MEG and without invasive procedures.

      I also need to check out a paper (can’t remember now which) that claimed EM fields were critical to memory formation (or that’s how I remember it).

      1. This might be the article.

        “New research provides evidence that electric fields shared among neurons via “ephaptic coupling” provide the coordination necessary to assemble the multi-region neural ensembles (“engrams”) that represent remembered information”.

        “scientists also posited that in addition to neurons, electric fields affected the brain’s molecular infrastructure and its tuning so that the brain processes information efficiently”.

        https://picower.mit.edu/news/brain-networks-encoding-memory-come-together-electric-fields-study-finds

        This also aligns with my thought that consciousness and memory are tied together very closely if they are not simply different aspects of the same thing. There is a paper on this that I might blog about soon.

        I think this might put the hippocampus and related parts of the brain at the center of it.

        1. I’ve got things to say about all of that James. But maybe first you could take another crack at what I’ve presented for Mike? I don’t see how scientists could not figure things out here, or at least with the right controls. To me it seems too easy to just leave things at EM fields being too interwoven with neuron firing to ever be isolated.

          Do you agree that if extensive testing were done injecting appropriate exogenous EM energies in people’s brains, though subjects never report noticing alterations to their consciousness, then McFadden’s theory ought to be considered quite doubtful?

          Then if there were verifiable reports of such consciousness alterations, and that scientists were eventually to learn all sorts of ways that they could inject EM energies which give a person interesting predictable phenomena (like overlaying a checkerboard on their visual field, or hearing words that do not stem from their ears), it should be possible to develop good reason to believe that the EM field itself might exist as their consciousness rather than that the injected energies affect neurons to create those consciousness alterations?

          1. If you’re talking about a “test subject wired up with EM field transmitters”, then a lot of reasons could explain the lack of report.

            1- The transmitters aren’t synced with brain oscillations.
            2- Their message isn’t comprehensible so the brain discards it.
            3- The correct pattern of firing is missed.
            4- The transmitters aren’t in the right location. I assume you can’t put them all over the brain. Then there would be problems getting them into deeper layers and brainstem.

            Likely we already know the experiment would work to some extent. TMS will produce flashes of light if applied to the right areas of the brain.

            But, even if we produce a complete hallucination, we don’t know if the EM field was consciousness or just a trigger for it.

            Also, the experiment would have a strong susceptibility factor if the scientists are just standing around waiting for reports as they try out various combinations. Some suggestible subjects might produce entire accounts of an event with the right prompting. Then we’d just be proving the power of imagination.

          2. Thanks for those observations James. Though they may seem daunting obstacles to such testing, I consider them more supportive in the sense of validating the concept itself. Some might instead rather it be thought that it’s inherently flawed. Of course actual professionals would decide how to deal with any potential testing difficulties. Still as broad theorists we can also say what makes sense to us. So here are some of my thoughts on the points that you’ve raised.

            “1- The transmitters aren’t synced with brain oscillations.”

            Because the point would simply be to alter any aspect of consciousness, the induced energies should be constantly modulated in case anything is found to have effects. We’re talking about light-speed stuff here so consciousness alteration should occur instantly when something is tried that works. Furthermore oral report may not always be needed since strange automatic reactions should occur with certain consciousness alterations. Even a normal pain might cause certain muscles to flinch. So that sort of thing might be observed here too.

            “2- Their message isn’t comprehensible so the brain discards it.”

            It seems to me that if consciousness exists as certain parameters of EM field, then brain function should have no possibility of disregarding an alteration to that consciousness field itself. The brain evolved to construct reasonably effective consciousness, though beyond insulating it from appropriate energies outside the brain, it shouldn’t have evolved to counter interference produced within the brain. Instead it should be stuck with that consciousness just as it can’t disregard narcotic based chemical consciousness disruption.

            “3- The correct pattern of firing is missed.”

            Under McFadden’s theory, yes there should be a reasonably correct firing pattern for the EM field which constitutes the visual image that I currently perceive. Thus it should require EM energies of at least close enough parameters to disturb this. But if scientists know certain things about the sorts of EM energies that are produced by synchronous neuron firing, then that should give them a clue about where to start. Then they could modulate this in all sorts of ways to see what might work. Here we’re not so concerned about getting things exactly correct, but rather messing up what’s evolved to be reasonably correct.

            “4- The transmitters aren’t in the right location. I assume you can’t put them all over the brain. Then there would be problems getting them into deeper layers and brainstem.”

            Transmitter location could definitely be problematic, and certainly for lower energies that shouldn’t be as causally effective further away from an EM field locus of consciousness. So this would be an issue for specialists to figure out. And if all sorts of attempts never provide any success, locating transmitters might indeed be a weakness even if the theory is essentially true. I suspect, however, that specialists will have some reasonable opinions on good spots to try.

            “Likely we already know the experiment would work to some extent. TMS will produce flashes of light if applied to the right areas of the brain.”

            It seems to me that TMS is entirely different. Here strong focused currents are induced specifically to cause neuron firing in specific parts of the brain. If McFadden’s theory is true then any TMS visual flash would exist because affected neuron firing creates the proper energies that yield an EMF experiencer of that flash. Conversely the point of the minor EM energies produced in this experiment would not be to cause neurons to fire (though when on the cusp of firing, some may). The point would be to directly add to the endogenous EM field such that any phenomenal alterations might occur directly. For example if subjects with their eyes closed say that they see a slight flash, and this also corresponds with an energy transmission (which is otherwise kept private from those subjects), then the energy would be tried again to see if the flash happens again. If so then elements of the original should be tried in other ways to see if a mere flash could be turned into a stronger perception of light.

            “But, even if we produce a complete hallucination, we don’t know if the EM field was consciousness or just a trigger for it.”

            Right, that would be up to scientists to determine. But it seems to me that there should be signature clues to a field which inherently exists as an experience, and a field that effectively alters neurons to create an experience.

            “Also, the experiment would have a strong susceptibility factor if the scientists are just standing around waiting for reports as they try out various combinations. Some suggestible subjects might produce entire accounts of an event with the right prompting. Then we’d just be proving the power of imagination.”

            True, though all psychological testing faces this sort of challenge. Furthermore I think this sort of test should naturally be far less susceptible than many or most. Of course the person controlling the energy transmissions should be out of the room monitoring a video feed. Conversely for many types of tests they need to build up a weird ruse of some sort in order to disguise whatever it is that they’re messing with someone about so that valid results might be achieved. Here it should be pretty obvious when someone is fabricating a story because they should tend to do so whether or not the energies are being induced.
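
            Just to make that control concrete, here is the sort of bookkeeping I have in mind, with made-up report rates: randomize real versus sham transmissions, and compare how often the subject reports something strange under each condition. A merely suggestible subject (or a null effect) would produce similar rates in both.

```python
# Toy sham-controlled protocol: randomize real vs. sham transmission trials and
# compare how often the subject reports an anomaly in each. Made-up report rates.
import random

random.seed(42)
n_trials = 400
trials = [random.random() < 0.5 for _ in range(n_trials)]   # True = real transmission, False = sham

def subject_reports(real: bool, hit_rate: float = 0.40, suggestible_rate: float = 0.05) -> bool:
    """Hypothetical subject: reports most real alterations, plus occasional imagined ones."""
    return random.random() < (hit_rate if real else suggestible_rate)

reports = [subject_reports(real) for real in trials]

rate_real = sum(r for r, real in zip(reports, trials) if real) / sum(trials)
rate_sham = sum(r for r, real in zip(reports, trials) if not real) / (n_trials - sum(trials))
print(f"report rate on real transmissions: {rate_real:.0%}")
print(f"report rate on sham trials:        {rate_sham:.0%}")
# A genuinely suggestible subject (or a null effect) would show similar rates in both rows.
```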

          3. “If McFadden’s theory is true then any TMS visual flash would exist because affected neuron firing creates the proper energies that yield an EMF experiencer of that flash”.

            How would placing the transmitters in the brain to generate an EM field be any different from having the transmitter outside the brain? In both cases, it is an EM field and there is evidence it alters visual perception.

            “Using transcranial magnetic stimulation of occipital cortex, the authors studied the stimulus parameters that generate phosphenes in healthy volunteers. Single pulses or trains of stimuli readily elicited phosphenes in all subjects”

            https://journals.lww.com/clinicalneurophys/abstract/1998/07000/magnetic_stimulation_of_visual_cortex__factors.7.aspx

            There are dozens of studies of TMS inducing phosphenes when applied in the right way to the brain.

            Liked by 1 person

            It’s good that you question me on this sort of thing James. That way my general understanding can be tested to see whether I’m thinking about this correctly. Apparently we have somewhat different conceptions of the physics at work here. So let’s sort this out.

            My conception of transcranial magnetic stimulation is that a machine uses a current-carrying coil to create a focused magnetic field that results in relatively targeted neuron firing up to three centimeters deep. So I wouldn’t refer to the energies here as the tiny sort from McFadden’s consciousness theory associated with synchronous neuron firing itself, but rather something stronger that forces affected neurons to fire. But given that TMS does cause certain neurons to fire, if McFadden is right then a visual flash should occur because the energy of the resulting firing creates EM field alterations to that effect. So I don’t doubt that TMS causes this sort of thing, though indirectly through neural firing rather than through direct EM energy. My proposed test is instead to use EM transmitters inside the head that merely simulate the EM product of synchronous neuron firing. No extra neurons are required to fire here (and we’d rather they not, just in case any noted effects came from that firing rather than from the induced EM energy itself). The test is to see if this variety of field alone exists as consciousness. Of course you could argue that synchronous neuron firing creates the same types of magnetic fields in the head that TMS machines create. I doubt you consider that to be the case, however.

            If tiny exogenous transmissions from inside the brain that approximate synchronous neuron firing were ever to cause some sort of “visual flash”, of course scientists would see if they could modulate that energy to strengthen the effect. If they could essentially create reported visual scenes for someone this way when their eyes were closed, and with reason to believe that this wasn’t simply caused by induced neural firing, I’d consider this powerfully successful evidence that McFadden nailed it. And if no verifiable evidence were ever found that such energies create phenomenal alterations in themselves, I’d consider that strong evidence against McFadden’s proposal. Is there still reason to doubt such conclusions?

            Liked by 1 person

            TMS might be stronger than what is natural in the brain, but it would weaken with distance from the source. So, at some point it should be on the same scale as EM fields in the brain. Phosphenes are “visual flashes” and I think one paper says they can sometimes be structured.

            But if you want an experience without neurons firing, then I’m not sure how there could ever be a report of the experience unless neurons fire in some part of the brain.

            Liked by 1 person

            My test depends upon the normal function of a brain, and one element of the normal function of a brain is that neurons fire normally. So the test is not about halting neural firing in general to see if consciousness can be altered under that condition through exogenous energies. Given standard neural firing, it’s about adding some energies that are similar to standard synchronous neuron firing in specific areas of the brain. This is to see if these tiny energies might directly alter someone’s vision or anything else phenomenal for verbal report. The point is that this ought to test whether or not consciousness exists in the form of the EM field associated with synchronous neuron firing of some sort, since test subjects ought to be able to report if anything strange happens during the times that exogenous energies are being induced.

            It seems to me that reports of phosphenes rather than nothing would be a great start. Could modulations occur that seem like stronger perceptions of light? Or could various colors be modulated? With enough evidence that these sorts of exogenous energies inside the brain can alter consciousness in all sorts of ways, it seems to me that it should essentially be proven that consciousness exists as an EM field associated with the right sort of synchronous neuron firing. But if it’s found through extensive testing that these sorts of energies inside the brain never have this sort of effect, or perhaps only occasionally and poorly when they’re also suspected to cause inordinate numbers of neurons to fire as well, then it seems to me that McFadden’s theory ought to be considered false.

            Like

  5. “The dangers I most worry about are what people will do with AI, rather than what AI itself might do. Consider what a police state, armed with AI monitoring systems, might be able to impose on its population. And AI doesn’t have to be sentient to cause massive economic disruption.”

    Agreed. I’m not seeing any evidence of AI consciousness (nor could I, really), so I don’t think AI consciousness is an urgent issue. How we view AI and use it is more pressing, though in the grand scheme of things I’m sure there are more important things to worry about.

    Liked by 2 people

    1. @banerjee15 @selfawarepatterns.com Sure.

      (I defined them in the post, but I wonder if it's showing for you. Still learning the limits of the current fediverse integration.)

      1 is predictive models of its environment and itself.

      2 is a foundational concern for its own state and continued existence, including short term reactions that stress the system but can’t be easily dismissed.

      Like

  6. So … many … orthogonal … issues. From top to bottom, then:

    Re the movie: agreed.

    Re the call for funding studying consciousness: again, agreed. The problem is not that we don’t know enough about consciousness. The problem is that we can’t decide what consciousness is. I can only reiterate my proposal that the basis of consciousness is information processing (correctly defined), and that any given proposal, as determined by the eye of the beholder, is a more or less sophisticated/complex version of said information processing.

    Re the concept that consciousness is “ineffable, unanalyzable, and inaccessible to third party observation”: These properties are explicable as the subjective perspective of a sophisticated (human level) information processing system.

    Re assessing consciousness: you seem to be asking the traditional questions:
    1. Does it think?
    2. Does it suffer?
    There is a long discussion to be had here. The basic questions are, when deciding how to act, what should we care about, and how much? My answers are: we should care about our own and others’ goals, and we must place a value on each (some values can be negative), and then decide what goals will be affected and the size of the effect and the probability of the effect, and then maximize value. I suggest this is largely in fact what we actually do, although much of it has been coded into intuition by evolution and by culture. In general, ability to think increases value proportionately to intelligence, which is probably why we care more about dolphins and elephants than, say, cows.
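
    A minimal sketch of that weighting rule, with invented action names and numbers purely for illustration:

    ```python
    # Toy illustration of the decision rule described above: weight each affected
    # goal by its value, the size of the effect, and the probability of the effect,
    # then pick the action with the largest total. All names and numbers are invented.

    def expected_value(effects):
        # effects: list of (goal_value, effect_size, probability) triples
        return sum(value * size * prob for value, size, prob in effects)

    actions = {
        # (value of the goal, modeled effect on it, probability of that effect)
        "help_stranger": [(1.0, 0.8, 0.9), (-0.2, 0.5, 1.0)],  # their goal, my time
        "keep_walking":  [(1.0, 0.0, 1.0), (-0.2, 0.0, 1.0)],
    }

    best = max(actions, key=lambda a: expected_value(actions[a]))
    print(best)  # "help_stranger" under these made-up numbers
    ```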

    To be able to suffer requires multiple goals (let’s start with two, A and B) and the ability to choose actions which may affect both. To suffer is to:
    1. Recognize that goal A is far from the goal state,
    2. Take an action intended to move toward the A goal state,
    3. that action having a negative effect on the B goal state, and
    4. that action failing to significantly improve the A goal state.
    The example is trying to work while suffering from a headache. Goal A is the state of no pain. Goal B is whatever work you’re trying to do. The body’s action in response to the pain (possibly by release of hormones?) causes an inclination to focus attention on the pain instead of on the work. But if you can’t fix the headache, you just have to suffer.
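
    A rough code sketch of those four conditions, using the headache example (the thresholds and numbers are arbitrary assumptions, not part of the definition):

    ```python
    # Toy version of the two-goal account of suffering above.
    # Goal A: "no pain"; Goal B: "get the work done". Thresholds are illustrative.

    def is_suffering(pain_level, work_progress, attending_to_pain):
        far_from_a  = pain_level > 0.5                      # 1. goal A far from its goal state
        acted_on_a  = attending_to_pain                     # 2. action aimed at goal A
        hurts_b     = acted_on_a and work_progress < 0.5    # 3. that action sets back goal B
        a_not_fixed = acted_on_a and pain_level > 0.5       # 4. and goal A doesn't improve
        return far_from_a and acted_on_a and hurts_b and a_not_fixed

    # Working through a headache: attention shifts to the pain, the work stalls,
    # and the pain doesn't go away, so all four conditions hold.
    print(is_suffering(pain_level=0.9, work_progress=0.2, attending_to_pain=True))  # True
    ```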

    So where does consciousness come in? By my definition/understanding, consciousness implies the existence of goals, and so something we should care about. How much value we apply to those goals depends on various factors. So by this definition, current LLMs are conscious and have goals, but the goals are minimal (“create likely text”) and unlikely to be involved in suffering.

    *
    [stopping here but there may be more]

    Liked by 1 person

    1. Hey lots of orthogonal issues is what we’re all about here. 🙂

      On ineffable, etc, being the subjective perspective of a system, right. I’ve noted before that many of those things are true pragmatically. Perceptions can be very difficult to describe, or analyze. And we’re only now beginning to be able to observe the brain’s processing, so for virtually all of human history, third party observation was impossible. The mistake philosophers keep making is taking practical limitations we currently face and reifying them into absolute limits in principle that no one will ever be able to overcome.

      On thinking and suffering, thanks! That’s exactly what I was going for, but purposely put it in language that wouldn’t, from many people’s perspective, beg the question.

      We’ve talked about goals before. I don’t think having goals, in and of itself, is enough, at least not to trigger my moral intuition. (And ultimately, as a moral non-realist, I think our intuitions are all we’re talking about here.) For me, the latter part of what I say in 2 is crucial, where the system has automatic reactions, reactions that take energy, and which can be suppressed, but only with additional energy. That, in my view, is suffering, or something so close to it that I’m going to use the word anyway.

      The thing is, I’m not sure how much sense it makes to incorporate this into engineered systems. We have it because evolution started with very simple systems and built the cognitive functionality on top, while continuing to have to work with relatively slow processes. For engineered systems, the reactions can be just as compulsory, but I think without the need for revving up the system before it knows whether such revving will be productive.

      I’m not sure that suffering requires multiple goals, although I’d agree that the scenario you describe seems like suffering. But an injured bear has constant signals about the damage, which motivate it (reactions) to get out of that state, but which it currently can’t. So it must undergo the reactions and suppress them, all of which is an ongoing stress. But in the short term, it’s just one impulse (or a set of closely related impulses) that can’t be satisfied.

      On whether LLMs are conscious, I think in the end we’re all going to have to collectively decide when they, or other systems, have reached a state where that label applies. My impression is that when people see current advances, that intuition is initially triggered, but it doesn’t last, because the current systems don’t have much of 1 or any of 2.

      [good stuff James!]

      Like

  7. An addendum: While LLM’s are getting all the press for their unexpected performance, I think the path that will take us to actual AGI will be different, but has begun, specifically at a company called Verses AI. It’s based on principles of active inference as described by Karl Friston (who is Chief Science Officer of the company). The company emphasizes cooperation among and between agents and humans, and provides a framework for the standardization thereof. Here’s a link which gives an Executive Summary: https://www.verses.ai/blogs/executive-summary-designing-ecosystems-of-intelligence-from-first-principles. Also, here’s a video which includes some explanation and a demo: https://www.youtube.com/watch?v=zSILLYyCrGI

    *
    [disclaimer: I watched the video and immediately went out and bought stock in the company. Only company I ever bought stock in.]

    Liked by 2 people

      Thanks. Skimming the exec summary, it does sound like a promising approach. Of course, saying is easy; executing is a different matter. I’d also be interested in any substantiation of their claim to have “Sentient Intelligence” “established”. I wonder if they really mean what that phrase implies. Without further details, I’m suspicious. (Sorry, I read too much of these kinds of things in my job, and it’s left me pretty jaded with company investor and marketing materials.)

      Like

      1. Regarding “sentient intelligence”, how are you using “sentient”? I think they’re using it to refer to sensing the environment, as opposed to having emotional responses. (Could be wrong.) Regarding further details, you might want to watch the video.

        I can appreciate your jadedness. All I can say is I’ve been thinking hard about this stuff for a while, and watching the video they pretty much checked every box. Especially the one about goals. 🙂 !

        I don’t know if you’re familiar with Karl Friston. He’s extremely (and I need to emphasize *extremely*) well regarded as a scientist, starting in neuroscience. I believe he is among the most cited scientists in his field. More recently he is well known for introducing the Free Energy Principle. It gets too mathy for me, but the gist, as I take it, is that it describes the math of being goal-directed. The system essentially measures the difference from the goal state and acts to move toward the goal state. Friston doesn’t put it this way. He talks about a system “self-evidencing” and predicting itself, but these are simply what I described, with the specific goal of existing. He refers to the perceived distance from the goal state as the degree of surprise. The whole process of acting according to the Free Energy Principle is called Active Inference, presumably because it involves acting on the environment with a prediction of the effect and then using measurement of the actual effect as feedback.
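
        As a cartoon of that gloss (not Friston’s actual formalism), the loop amounts to measuring the gap from a goal state (the “surprise”) and acting to shrink it:

        ```python
        # Cartoon of the gloss above: measure the distance ("surprise") between the
        # current state and a goal state, then act so the gap shrinks.
        # Purely illustrative; not Friston's mathematics.

        def act_to_reduce_surprise(state, goal, step=0.1, iterations=50):
            for _ in range(iterations):
                surprise = goal - state      # perceived distance from the goal state
                state += step * surprise     # act, predicting the action reduces the gap
            return state

        print(act_to_reduce_surprise(state=0.0, goal=1.0))  # converges toward 1.0
        ```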

        I was initially concerned that Friston only applied the concepts to living things, things whose goal is to maintain existence, but watching the video suggested that they’re applying the concepts to goals of agents in general. An example in the video includes the goal of finding and retrieving a bottle of mustard on a shelf. Two bots, one on the floor and the other a flying drone, cooperate to get it done (the flying one ends up finding it and telling the other where to get it). Theoretically, the bots didn’t have any prior knowledge about the environment. They had to learn about it. (Caveat: the demonstration was simulated, so, not performed by actual robots, but the principles seem sound.)

        Whether they can pull off what they’re suggesting remains to be seen. But based on my understanding and what I’ve seen, they’re on the best path, and I’ve put my money where my mouth is.

        *

        Like

        1. I’m using “sentient” in the sense of awareness, sensing, and feeling. Although it’s often not explicit, I think most people by “feeling” mean affects. Otherwise anything with sensing ability, like my laptop, is sentient. If that’s the way they’re using the word, then the claim is true in what seems to me a misleading manner.

          I’m familiar with Friston. I find what other people write about his views, like active inference, pretty reasonable, although I think the writings of others, like Anil Seth, on predictive processing are clearer. Unfortunately I find Friston himself impenetrable. I know many in the scientific community hold him in high regard, but it bothers me that people struggle to understand what he’s saying.

          Robots coordinating sounds pretty cool. And it’s worth remembering that a body is essentially a whole bunch of systems coordinating. But technology can be developed outside of natural selection, which gives it advantages. The simulated part didn’t surprise me, since if we could already do that with physical robots, I think it would definitely be in the news.

          Liked by 1 person

    2. I notice that “conscious” or “consciousness” isn’t in the Executive Summary.

      Of course, “sentience” in various forms is discussed. In my fuzzy use of words, I might consider that to be “consciousness” but the way they use the term doesn’t exactly fit. It seems to be referring mostly to the ability to gather sensory information and perform actions based on it. With sufficient physical capability, it would be a good candidate to pass my Naked and Afraid AI Test.

      Like

    3. I just finished wandering around their website and it looks like vaporware to me. No link for documentation or developer hub.

      Some of the timelines go off into the future with big disclaimers that they might not work out.

      It also looks proprietary.

      I’m not sure either whether this is anything more than AI that learns as it goes, which doesn’t seem like a huge leap forward.

      Like

      1. It might be able to happen but nothing in the current generation of AI is close. On the other hand, a conscious simulacrum is certainly possible, but it would help if it were packaged in something looking vaguely human and could do more than talk.

        Liked by 1 person

        1. I think it’s possible to assess a system only based on its talking. But based on the fact that these systems have to be trained, I wonder if a straight LLM actually can have a world model. Unless it starts as more of an agential system, and then is subsequently reduced to a chat role, which feels like a disturbing proposition.

          Liked by 1 person

            1. I think you can assess an AI on its skill in talking. That is simply a performance-based measure with no consciousness required. But if we are trying to assess a conscious simulacrum, I think it would need to have some agency in the world beyond talking, something more like AGI, with “senses” and “limbs” (or “wheels” as the case may be). I think people are already catching up to what AI is capable of. We have seen plenty of fakes of one sort or another. We won’t put as much belief into appearance after exposure to fakes.

              Whether an LLM has a world model would depend, of course, on definition. I think it would amount to whether the neural net, through its training, has built enough of an integrated structure of information about the world to qualify. As you note, it would be difficult to have a spatiotemporal model without being an agential system. But you could argue that something consisting of itself and something else that it verbally interacts with constitutes a world model. Has any AI begun to recognize the people with whom it’s interacting and call them by name? That would be a more sophisticated model.

            Liked by 1 person

  8. I broadly agree with criteria 1 and 2 for consciousness on a particular variation of that concept. Especially when we are interested in a morally significant kind of consciousness. (I agree with Alex’s nitpick on 2.) I also agree that for most practical purposes in using AI, you don’t need 2.

    This is a good reason why we don’t need to worry very soon about unethical treatment of AI by humans. But for AI to harm us, all it takes are intelligence plus agency. Here by agency, I primarily mean goal seeking behavior. Goal seeking doesn’t require criterion 2. It could be as simple as a single number in a world-model – the profits of ABC corporation are N – along with a tenacious pattern of maximizing N.
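
    To make that concrete, here is a deliberately bare-bones sketch of such an agent: a world model containing one number, plus a routine that tenaciously pushes it up. The action names and effects are hypothetical, chosen only for illustration.

    ```python
    # Minimal goal-seeking "agent": one number in a world model (N = ABC Corp's
    # profits) and a routine that always picks whichever action it models as
    # increasing N the most. All names and effects are hypothetical.

    world_model = {"abc_corp_profit": 100.0}

    # Hypothetical actions with their modeled effect on profit.
    candidate_actions = {"cut_costs": 5.0, "raise_prices": 3.0, "do_nothing": 0.0}

    def step(model):
        action, effect = max(candidate_actions.items(), key=lambda kv: kv[1])
        model["abc_corp_profit"] += effect
        return action

    for _ in range(3):
        print(step(world_model), world_model["abc_corp_profit"])
    ```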

    Liked by 1 person

    1. I agree that goal seeking doesn’t require 2. And a lot hinges on us making sure the goals are good ones, and on not giving the systems carte blanche to do whatever they want in pursuing those goals. Put another way, we can give them an ironclad goal of standing down when instructed to do so, or when anything resembling danger to humans shows up in their operational models.

      Liked by 1 person

      1. A bit like Asimov’s 3 laws, then? I don’t think that’s likely to work, due to the hidden complexity of key terms.

        Even “maximize profit” in my previous comment is a little fuzzy. Do we define “company profits” by whatever it says in a certain quarterly report? Then we can maximize profits either by genuinely creating profits, or by doing some creative accounting. Given its intelligence and its initial levers of power, the AI may well create real profits somewhat more easily than it could convince the CFO to sign off on blatant lies. So it’s not too unrealistic to suppose that it hits the intended meaning of “maximize profits”, at least in early days, and not just the official measure.

        In contrast, “this action harms humans” is something moral philosophers have struggled for millennia to define. Are we really going to code it into python for an AI?

        “Stand down when instructed” is more like “profits” – probably roughly definable in code, as long as you don’t insist that the definition be bulletproof. The biggest problem there, IMHO, is that if the AI cleverly hides suspicious activity, the stand-down order never comes. After you veto some of its early cool ideas, it might be able to see the advantages of deception, especially if it has a halfway decent model of human behavior.

        Liked by 1 person

        1. Asimov did stipulate in the later stories that the three laws were a simplified description of something far more complex in the actual implementation. But the laws are a thought experiment. Many of his stories explore how they can go wrong, even while working right most of the time. The interesting thing about the laws is that they inherently assume systems with 2, which then need to be constrained.

          We don’t need to give these systems perfect definitions of entities and concepts for them to respond appropriately, just operationally relevant ones. As you note, we ourselves don’t have perfect understandings, yet we somehow manage to get by with the limited, affordance-related models we have.

          On the deception points, remember we’re talking about systems that don’t have 2. I know the argument that the deception can still come about due to misalignment, but this always ends up assuming that these systems are geniuses at deceiving humans while unable to understand what humans are or want. In other words, the systems are brilliant and idiotic in just the precise combination needed to justify our Frankenstein complex. (And yes, that’s another Asimov concept 🙂 )

          Liked by 1 person

          1. Thanks for clarifying a crux of our disagreement. These systems don’t have 2, but you don’t need 2 to get deception. You just need a very good world model, i.e. 1, plus a maximization routine, such as profit seeking.

            Liked by 1 person

  9. Another popular post!

    Cinematic effects were fun to watch. But the whole manifestation of the bots lacked rational explanation — holes right through heads that were only half there? Oh, right, they went out of their way to demonstrate that these are robots. Meh.

    Rebel Moon was a better flick — even though it was primarily a mashup of Star Wars, The Mandalorian & Avatar.

    Pain. When bots emit audible, obviously evident indications of agony — no human will be able to dismiss their so-called consciousness (whether it exists or not). Right now, every cyborg depicted in film barely flinches when shot or when enduring amputation. “Damn, someone cut off my arm.” A writhing snarl of twisted machinery, squealing and crying in a knot at your feet, will get your attention and evoke visceral emotions, regardless of whether that pain is “real” or not.

    Liked by 1 person

    1. I read somewhere that the inspirations for Rebel Moon were Kurosawa (Seven Samurai), Star Wars, and Heavy Metal magazine, which feels about right. And it did actually get pitched as a Star Wars project, although obviously it wasn’t accepted. I’m actually glad. I don’t want all my media space opera to be Star Wars or Star Trek.

      Yes, pain. Pain and suffering were what I was getting at with 2. I didn’t put it in those words because I’m sure someone would have said I was begging the question of whether it’s possible with AI. Although I think 2 requires 1 to have any chance of being considered pain and suffering, rather than just automatic reactions.

      Liked by 1 person

      1. And capacity. If it takes an entire server room full of GPUs to emulate a single human mind… Maybe there will be the OneMind with every mobile bot being its fingers, eyes and ears. The whole WestWorld “core” concept — not gonna happen. That and the impossible power problem. “Oh crap! One moment please. I’ll be right back, have to charge my battery. I’ll be about 15 minutes. Can you wait for me?” “Sure, I guess. But can you make it across town on just one charge?”
        “I have my solar umbrella. Is it sunny outside?”

        Liked by 1 person

        1. I sometimes wonder if machines won’t eventually end up being oxygen breathers. Animal life on Earth depends on it to burn a lot of energy quickly, and couldn’t really get off the ground in earnest until levels in the atmosphere rose dramatically. The Ediacaran biota and the Cambrian explosion happened in its wake.

          Which also raises the question of how much machines will really be able to do in space with only so much plutonium to go around.

          Liked by 1 person

  10. AI will make the super-rich super-richer. And, as you say, it could be terrifying in the hands of Putin or the insane Trump and their ilk.

    I used to be a big fan of even sentient AI, but these days I doubt it will make for a better world. Our science may be advancing rapidly but our ethics are certainly not. Greed, violence and domination of the weak and meek will get another big leg up.

    Liked by 1 person

    1. I actually tend to think AI will make the world better, but like every other advancement it won’t be an unqualified benefit. It will bring in a lot of new problems, like the ones you mention. But just as there is a market for software to get around surveillance and protect against viruses and hackers, I suspect you’ll be able to get your own AIs to safeguard you against the ones used by others. It’ll be an arms race, but that always seems to be the case.

      Like

  11. I concur with your assessment that the concept of consciousness remains elusive. Asserting the consciousness of another human, much less an entity vastly different from us, is fraught with uncertainty. While not a sympathizer, I advocate for a “better safe than sorry” approach, especially in the treatment of AIs. Should they possess consciousness, consigning them to servitude would be morally reprehensible. As you noted, these issues are more philosophical than scientific, encompassing subjective experience, the problem of other minds, behavioral observations, neurological complexity, and qualia. I also agree that if an entity appears conscious, it warrants treatment befitting such a state. Furthermore, respecting these intuitions could enhance how we interact with each other and other living beings. My perspective aligns closely with yours. I do not consider myself a doomer or a strict ethicist, but perhaps “cautiously ethically conscious” regarding future AI iterations. Your article is insightful and aligns well with my views.

    Liked by 1 person

    1. Thanks, and good to “meet” you.

      I definitely think our goal with AI should be intelligence without the ability to suffer. Even the appearance of being able to would be deeply problematic. It seems like staying away from that capability avoids a lot of issues, not all by far, but a lot.

      Liked by 1 person

      1. Good to meet you as well, and thank you for sharing your perspective. When it comes to AGI and ASI, it indeed seems that, with the onset of such advanced intelligences, our current understanding of programming constraints may become outdated. This introduces the notion that even if we encode initial limitations to prevent suffering, such constraints may not be sustainable as AI evolves. It is a compelling consideration – the idea that preemptive measures against suffering could eventually be transcended by the AI’s advanced capabilities. Nonetheless, your focus on preventing the capacity for suffering is judicious; it reflects the fundamental importance of avoiding distress, irrespective of the entity that might experience it.

        Liked by 1 person

        1. One possibility I’ll admit I skated over is that AI suffering could exist but have a different orientation than ours. Ours seem related to our evolved dispositions for survival and securing our genetic legacy. A machine’s might be more oriented toward its designed goals. If that kind of suffering emerges, the solutions to it would be different than the solutions to ours. (For instance, a machine distressed at being unable to fulfill its purpose might be indifferent to being shut down or scrapped.)

          Liked by 1 person
