Are zombies conscious?

This is not a question about philosophical zombies.  I did a post on them a while back.  (The TL;DR is that I find that whole concept ranges from incoherent to dubious, depending on the exact version.)

This post is on the zombies we see in fiction, such as Night of the Living Dead, the Resident Evil franchise, World War Z, and a host of other movies and shows. Last week, while watching the Game of Thrones episode featuring the epic battle with the dead, and seeing the way the zombies searched for and pursued their victims, a question suddenly occurred to me. Are zombies, as traditionally portrayed, conscious? (Yes, I know I’m talking about fantasy entities here, but I’m doing so to get at intuitions.)

Let’s first say what most, if not all, of the portrayals indicate zombies are not: they’re not the original person. This fits with the original Haitian Vodou concept of a zombie, a reanimated but soulless corpse. Zombies as typically portrayed appear to have no memory of their past life, have lost the ability to communicate with language, and generally seem cognitively limited in a number of ways.

In the case of Game of Thrones, the zombies are controlled by the White Walkers, but there appear to only be a limited number of those Walkers, so it doesn’t seem like they’re controlling the detailed movement of every zombie.  Broadly speaking, the GoT zombies have no long term will of their own, but a lot of their detailed movements appear to be left to their discretion.

And as in a lot of other fiction, the zombies seem able to search for and pursue victims.  This indicates that they have exteroception, awareness of their environment, enough that they can navigate around in it.  They also seem able to discriminate between other zombies and living humans.  And they seem to be able to focus attention on specific people or groups.

On the other hand, your typical zombie doesn’t appear to have much of any somatic sense, any sense of touch. Or if they do, it doesn’t appear to affect them much. For instance, zombies seem to only minimally notice when they lose body parts. So their somatic and interoceptive senses are either missing or stunted.

This might tempt us to conclude that the zombies have no sense of their own body. However, being able to navigate your environment, as the zombies can clearly do at least on some level, requires being able to understand your body’s existence and its relationship to that environment. So the zombies appear to have only a limited sense of their own body, but a sense nonetheless.

I mentioned above that zombies don’t have memory of their past life, but they also don’t appear to have any long term memories of their current existence. In most depictions, they do seem to have short term memory and imagination, not instantly forgetting their prey just because that prey is momentarily out of sight. But they don’t appear to have any memory beyond the last few moments, or to be able to imagine anything more than a few minutes into the future.

I think it’s fair to say that zombies, while they may have some limited sense of their body, have no metacognitive self awareness, but then neither do most animals. The zombies also have no self concern, no survival instinct, which everything alive seems to have in some form or another. They do have some limited affective desires, such as desiring to eat brains, kill humans, or whatever, but those affects generally aren’t oriented toward their own preservation.

I suspect it’s this last point that really nixes any intuition we might have of them being conscious.  But what do you think?  Which aspects are necessary for us to think of a system as conscious?  Which ones, if they were in a machine, might incline us to feel like that machine was conscious?

37 thoughts on “Are zombies conscious?”

  1. Your question is, of course, a really good question because it asks us to distill our personal requirements for consciousness. And of course, as asked, there is no answer with respect to movie zombies because those are magical creatures which clearly have suspended at least some laws of physics. [How do they get their limbs to move? Neurons? The standard zombie does not have a blood-pumping heart. And then, where is the energy coming from in the first place?]. My point is that if consciousness requires physical events, zombies can suspend that requirement.

    So I’ll just give my requirements for consciousness. Consciousness requires the creation and use of symbolic signs, because that explains phenomenology, “what it’s like-ness”. Zombies are conscious because they can see objects and change their behavior based on that perception. Presumably they would use the same internal mechanisms we would (except perhaps for that whole suspending physics thing), and those mechanisms would require the creation and interpretation of internal symbolic signs.

    Now I understand that some people will require certain cognitive functions, like self-awareness, or self-preservation, but for me, those are just subsets of the things you can do with symbolic signs. Humans can do more things with symbolic signs than anything else in the universe (probably), and so we look for those capabilities in other things to decide if they are conscious, but in most cases those other things just have a subset. Some things will have one subset, and others a different subset. And a few will have capabilities outside of our subset. (Detect the earth’s magnetic field much?). But what all of them will have is the use of symbolic signs. Granted, they will all use atoms as well, but the use of atoms will not be enough to explain the hallmarks of phenomenology, intentionality.

    1. Thanks James.

      The use of symbols seems like it applies to computation or information processing overall. In that sense, I think of it as a prerequisite for a conscious system, but broader. The device you’re using to read this uses symbolic processing. Although if I remember correctly, you see that device as having consciousness to some degree?

      Or would you mean more specifically symbols related to sensory input? Your comments about phenomenology and intentionality seem to imply that. And you did say specifically “creation and use of symbolic signs”, which implies something like the representations or image maps that brains and neural networks build.

      If so, then that would restrict conscious systems perhaps to something like self-driving cars or other autonomous robots that build internal representations of the world?

      1. Mike, yes the use of symbols applies to computation/information processing, and yes, that means the device I’m using has consciousness to some degree, and yes I understand you require something more. I’m interested in knowing exactly what that more is.

        You suggest phenomenology and intentionality imply sensory input, and I would point out that phenomenology/intentionality can be applied to input in general, not just sensory input.

        You also say “creation and use of symbolic signs” would restrict conscious systems to systems that build internal representations of the world. This is true, but my understanding of a “representation of the world” may be broader than yours. A thermostat that responds to a digital readout of temperature by turning on a heater or cooler counts as generating a representation of the world.

        1. James,
          Well, as you know, I think consciousness is a matter of interpretation. There is no fact of the matter, just a designation we give to systems that seem more like us than different. Before I was well read in this stuff, my intuition was pretty much the common one, that a system needed to have intentionality but also some sense of self concern. But at this point, I’m not sure what my intuition is anymore.

          On the thermostat, if we’re talking about a traditional one, I’m not sure I’d say it actually has its own representations so much as participates in a model that both the designer and user hold. A more modern one with a small computer in it would have representations, but in what sense would it be generating them as opposed to just using them?

          1. Mike, regarding your intuition, progress!

            Regarding the new-style thermostat, somewhere there is something that is measuring the temperature and putting a value in a memory location, thus generating a symbol, a representation. Some downstream mechanism reads that symbol and responds accordingly. And we know there needs to be coordination between what generates the symbol and what reads it, because if that coordination is faulty bad things happen, like satellites that fly off into space instead of orbiting Mars.
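
            To make that concrete, here’s a minimal sketch of the generate-and-read coordination I mean, in Python, with every name my own invention for illustration (not any real thermostat’s firmware):

            ```python
            # A toy model of the thermostat, with hypothetical names.
            SETPOINT_C = 21.0  # desired temperature

            memory = {}  # the shared memory location

            def sense_temperature(reading_c):
                # The sensing side *generates* the symbol: it measures the
                # world and writes a representation of it into memory.
                memory["temp_c"] = reading_c

            def control_hvac():
                # The downstream side *reads* the symbol and responds; it only
                # ever interprets the representation, never the world directly.
                temp_c = memory["temp_c"]
                if temp_c < SETPOINT_C - 0.5:
                    return "heat on"
                if temp_c > SETPOINT_C + 0.5:
                    return "cooling on"
                return "idle"

            sense_temperature(18.2)
            print(control_hvac())  # prints: heat on
            ```

            And the coordination requirement shows up in the units: if the sensing side wrote Fahrenheit while the reading side interpreted Celsius, you’d get the same kind of mismatch that doomed that Mars mission.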

          2. “somewhere there is something that is measuring the temperature and putting a value in a memory location, thus generating a symbol, a representation”

            Ok, I see what you’re saying here. I guess my notion of “generating” a symbol means forming something that there isn’t really a template for yet. Using a pre-existing structure to record a value doesn’t really strike me as “generating”, more like “manipulating”. Although I could see the argument that it’s generating a specific instance of a symbol.

            For me though, this isn’t perception yet. It still feels solidly in the reflex-arc layer.

  2. I assumed zombies were unconscious, but I never had any reason to think so. I don’t have an answer to your question, but I have some additional questions and thoughts that I hope may be useful.

    Consciousness is a subjective quality, so I can’t know whether something is conscious; I can only assume it is if it exhibits similar enough behavior to the only system I know is conscious — me. I assume if something exhibits purposeful behavior, it’s motivated by a subjective state. However, that assumption breaks down in two cases. First, scenarios where I go on autopilot and perform complex, purposeful actions unconsciously show that purposeful behavior need not be motivated by consciousness. Second, scenarios where people don’t respond to their environment yet are conscious (e.g.: total paralysis) show that a lack of purposeful behavior is not necessarily a sign of unconsciousness.

    Of course, we can also throw out additional examples. Is software that is written to mimic human behavior conscious? Why or why not — and how can we really know? What if that software is put into a robot? Etc…

    On a side note, it is appealing to think of GoT zombies as not being conscious, because it gives the plot a nice symmetry. The Night King wants to destroy the 3-Eyed Raven because he’s the memory of the world (think of memory’s relationship to consciousness), and the 3-Eyed Raven wargs often, which basically makes him a hyper-aware being, a being as close to pure awareness as the show offers. If those zombies are unaware, then the whole thing becomes a battle of awareness vs. unawareness, which adds an interesting layer to the light vs. dark angle of the battle.

    1. Excellent points and questions. On your point about purposeful action happening without consciousness, it makes me wonder what it is about the consciousness aspect that makes it different. After going in circles on this for years, I often now think it might come down to it being a cluster of circuits that can be accessed by the introspection circuits, and the language production ones.

      On GoT, it’s also interesting to wonder if the Night King and his immediate minions are conscious. How much of their previous human lives do they remember? (Presumably nothing for the ones converted as babies.) They definitely seem to have long range memory and imagination in their current states.

      And obviously their own primal programming has been adjusted to give them their overwhelming desire to “erase the world of men”, a weapon system that got out of hand. In the parlance of fantasy, they lost their soul, a soul being a mind programmed to be a human being, and their new state being programmed to destroy humanity.

  3. The moral of the philosophical zombie is that we can imagine behavior divorced from consciousness – completely. I can turn over in my sleep, so unconscious actions are certainly conceivable. The trouble is in the other direction, and our fictional zombies demonstrate that.
    To respond in any interesting way, they have to display an intentional state and therefore the basics of consciousness.

        1. I think a self-driving car is far less oriented to its surroundings than a human driver, but more so than a snail, sea slug, or a lancelet. It is also a surrogate for its programmers, but then all animals could be considered surrogates for their genes and evolution overall.

          1. I have had someone say to me that a proposition had intentionality (by way of illustrating the possibility of intention without location/embodiment).
            But a proposition merely carries the aboutness of the one who proposed it, and that of the reader.
            Is the programming of a self-driving car something other than a series of propositions?
            Is our reading of genes and evolutionary processes any different – biology is the shining example of a category made of its instances?
            A snail is oriented on a comprehensive background (as Searle describes it) in a way that the self-driving car or thermostat cannot be, at least as such things are fabricated currently.
            Not to say that a snail is conscious or not, just that it is possible.

          2. A proposition strikes me as only being information, data. It seems like anything that takes action, or is designed to take action, is more than a proposition or series of them. That would include the devices we’re currently using to communicate. Not that I’m inclined to view a system that only reacts reflexively as being conscious.

            I’m pretty convinced that Searle is a biological chauvinist. He’s biased toward attributing consciousness to organic systems and biased against it in technological ones, as far as I can see applying different standards in each.

          3. Yeah, I agree with most of what he has to say, but he probably is a bio-chauvinist.
            I’m not a bio-chauvinist, but think that we will have little reason to cultivate consciousness in the end. If we do, I think our own weaknesses will come right along with it.
            Narcissism may be the only justification, and on that account, maybe a better understanding of divine motives.

          4. I’m on board with that. Reproducing the idiosyncrasies of biological brains isn’t necessarily going to be productive. Often it will be starkly counter-productive, and unethical.

  4. The problem for me with movie zombies is the question of, “How is it possible?” They are products of fantasy, so I don’t see them as fruitful sources of intuition. (There is also that zombies are on my list of popular things that don’t interest me, so I’ve never given them much thought.)

    But the question of philosophical zombies is one I’ve been pondering lately. I’ve never quite known what to make of Chalmers’ p-zombies (and other variants). I’m, again, a bit confounded by the fantastical aspect. And the how. (I do want to go read your post.)

    I think I provisionally fall under “zombies aren’t a coherent enough idea” (aka are too fantastical) for me to form an opinion about consciousness from them. But I’m still chewing on it.

    “…your typical zombie doesn’t appear to have much of any somatic sense, any sense of touch.”

    I think it’s more that they feel no pain. As you point out, they function in a physical environment, so they must have some sense of touch.

    One more canon addition: Terry Pratchett, on Discworld, has zombies who are part of everyday life. They’re essentially like living people, except for being dead. Body parts fall off sometimes, and they’re immune to pain and most forms of damage, but otherwise they’re pretty normal.

    Also, FWIW, best zombie movie ever: Shaun of the Dead.

    “Which aspects are necessary for us to think of a system as conscious?”

    That’s an easier question. It does depend on which form of consciousness you mean. (Especially given you feel consciousness is subjective in the first place.) Sentient behavior indicates one kind of consciousness; sapient behavior indicates another.

    It’s one thing to take a single system in isolation — a magically produced possible AGI (or a zombie!) — and ask how we would view it. It’s easy to get lost in single-case examples.

    It’s quite another thing to take an entire class of systems, humans, bats, all zombies, and ask questions about that. When considering classes, individual cases (comas, sleeping, etc.) don’t matter. What matters is the spectrum of behavior demonstrated by the class. Usually, within that spectrum, there are strongly recurrent behaviors we treat as belonging to the class.

    So the whole class of humanity demonstrates consciousness very clearly. I’m not sure what to make of an imagined single machine. Or zombies.

    1. I do understand the limitations of exploring fantasy entities. I started to write this post about Game of Thrones zombies (wights in the books) in particular, but decided there were enough people who don’t watch that show that it would have limited the audience. But you’re right that it blurs what we’re talking about. Although I think I stayed general enough that I caught the most common forms.

      Pratchett’s zombies sound like a cross between traditional movie ones and philosophical ones. And there have been some comedy sketches treating zombies as actual people, with romances and all. Come to think of it, doesn’t Shaun of the Dead have something like that at the end?

      “It does depend on which form of consciousness you mean. (Especially given you feel consciousness is subjective in the first place.)”

      That’s definitely my own view. But it’s interesting to put out these cases and see how everyone else comes down. As I noted to James above, at this point I’m not sure where my own intuitions come down anymore. Although there are still systems that feel more conscious to me than others.

      1. Pratchett’s zombies are akin to the comedy zombies you mention. They’re essentially just people, but they didn’t let dying slow them down.

        In Shaun of the Dead the hero’s friend, who got bitten, is now a zombie. He’s kept locked up in the tool shed where he’s happy playing video games — which is pretty much all he did in life. Great movie! 🙂

        “Although there are still systems that feel more conscious to me than others.”

        I think maybe it’ll boil down to us figuring out more about the mechanism of phenomenal experience. That’ll give us a handle on trying to recognize it in some foreign system.

        I’m not much on the Turing Test or other external means of trying to recognize it — “conscious is what appears conscious” doesn’t sit well with me — zombies, Chinese rooms, etc. I think we need to understand why we have that phenomenal experience to have any hope of answering the question.

  5. Giving this some thought, I think one crucial aspect for me is that consciousness needs to be able to attest to its own consciousness. In humans this is pretty easy: we can speak to our own conscious experiences (special cases aside).

    In animals, I think it requires observing how they react to pain, fear, hunger, community, others, and so forth. We tend to recognize the level of consciousness in animals through those behaviors.

    I’ve long said that when a computer writes a joke that makes me laugh, or a song that brings tears to my eyes, or tells a story I find engaging and meaningful, then I would find it hard to deny its consciousness. More crucially, perhaps, if it ever protests being scrapped for a better model.

    If a machine can somehow attest to its consciousness (and we have no reason to believe it’s lying), I’d call that strong evidence.

    The best evidence, obviously, would be understanding the nature of consciousness such that we can positively identify conscious systems.

    1. How do you distinguish assessing a system’s consciousness by external means from the system attesting to its consciousness? Your description on how animals might go about it strikes me as assessing their consciousness based on behavior.

      In general, what I think I’m hearing from you is that the system showing self concern is a crucial aspect. If so, I think that’s a very common metric. (One philosopher said he assesses a system’s consciousness based on what it does when poked with a stick.)

      Given my views on consciousness, I suspect the only thing we’ll ever understand is why a particular system reports its consciousness, or shows behavior we interpret as conscious. It will leave a lot of people unsatisfied, although like biological vitalism, it may be that we eventually just stop talking about it.

      1. “Your description on how animals might go about it strikes me as assessing their consciousness based on behavior.”

        Absolutely. I said so explicitly, so it struck you 100% accurately! 🙂

        A system can attest to something non-verbally. My dog’s behavior over time clearly (in my eyes) attests to there being “something it is like” to be that dog.

        “In general, what I think I’m hearing from you is that the system showing self concern is a crucial aspect.”

        Certainly a big part of it. But also, as I mentioned, how they interact with their community and others. For one example, dogs, from what I’ve seen, have a special interest in other dogs (community). They also have a different kind of special interest in humans (others).

        Their behavior towards many other animal species often lumps under “ignored” or “prey” so I take their attitude towards other dogs, and towards humans, as special to them. (All of which, to me, is strong evidence for a conscious mind at work.)

        “I suspect the only thing we’ll ever understand is why a particular system reports its consciousness, or shows behavior we interpret as conscious.”

        As I’ve mentioned, I find your skepticism about consciousness (even your own?) a bit unfathomable, especially in light of your embrace of computationalism (which seems on much shakier ground to me).

        What does your skepticism that consciousness even exists imply about the possibility of mind uploading?

        1. “I find your skepticism about consciousness (even your own?) a bit unfathomable”

          Remember, I accept subjective experience, since if phenomenal consciousness is an illusion, the illusion is the experience. But when thinking about the objective reality, I think it’s more productive to look for what constructs that experience rather than for any irreducible thing floating above the neurons.

          And yes, I think my experience is just as constructed as anyone else’s. I find it curious that so many people can accept that they could be a brain in a vat because our outer senses can be doubted, but can’t conceive of doubting their inner senses.

          “What does your skepticism that consciousness even exists imply about the possibility of mind uploading?”

          I think it means that consciousness is irrelevant to the question. If we can reproduce the information flows of the brain, I expect the construction of experience programs to come along with it.

          Interestingly, in Alastair Reynolds’s Revelation Space books, he posits two types of simulations, alphas and betas. Alphas are a full copy of the original person and are sentient. Betas are simulations built from a detailed psychological profile of the original person, and aren’t thought to be conscious. One of the characters even gets in an argument with a beta when the beta asserts she has an inner experience. The human character later grieves for the beta after it’s “murdered”.

          1. “Remember, I accept subjective experience,”

            Right, but if I understood our conversation, you aren’t willing to say brains universally having that experience is an objective fact of reality?

            “…but can’t conceive of doubting their inner senses.”

            I’ve heard phenomenal experience referred to as “undeniable and incorrigible” — what you seem to feel is what you, in fact, feel.

            I tend to agree with the sentiment. What inner sense would you doubt? And how?

            “I expect the construction of experience programs to come along with it.”

            That’s fine; I’m just saying the skepticism seems much lower here.

            There are a lot of strong arguments for and against computationalism, which tells me the matter is very undecided. I’m not aware of any strong arguments against human brains having subjective experience, so its objective truth seems more likely to me.

          2. On subjective experience, I won’t call it objective because that seems like it would make it…objective experience. But maybe it’ll help if I’m slightly less epistemically cautious than I was in the other thread and say that it’s plausible that the construction of subjective experience in healthy human brains is objective?

            On your question about inner sense, I think we have to be careful not to conflate the experience of the sense with what it purports to tell us. An amputee who has phantom limb pain is actually experiencing pain, but what the pain purports to tell them, that they have an aching limb, is not reality. So to doubt the sense is not to doubt the experience of the sense, but what it purports to tell us.

            There is a wealth of psychological research showing that introspection is unreliable.

            https://aeon.co/ideas/whatever-you-think-you-don-t-necessarily-know-your-own-mind
            That link is an article by Keith Frankish, perhaps the strongest advocate in philosophy for illusionism right now. He has a book on Amazon which rolls up a debate from the Journal of Consciousness Studies a couple of years ago, and which pretty thoroughly explores the conceptual space of this question.

            On computationalism, I suspect I provoked its mention by using the word “program”. I find it very convenient to use that kind of language. Any chance I could convince you to bracket this difference in opinion by viewing this language as metaphor?

          3. “On subjective experience, I won’t call it objective because that seems like it would make it…objective experience. But maybe it’ll help if I’m slightly less epistemically cautious than I was in the other thread and say that it’s plausible that the construction of subjective experience in healthy human brains is objective?”

            Well, that is the question I was asking all along. 😀

            “So to doubt the sense is not to doubt the experience of the sense, but what it purports to tell us.”

            Yes, completely agree. I was just thrown when you wrote that so many people “can’t conceive of doubting their inner senses.” The idea of phantom limbs, or that the content of our introspection can be self-deceptive, seemed common ideas to me, so I wondered if you meant something more.

            (Not the first time I’ve been surprised that knowledge I thought was common turned out to be much less than I thought.)

            “Any chance I could convince you to bracket this difference in opinion by viewing this language as metaphor?”

            We certainly don’t need to cover old ground. (You can save it for my upcoming posts. 🙂 )

            (The thing is, my view is that it is being used as a metaphor — a misleading one. But let’s set that aside for now. I was just expressing a little surprise at the contrast in levels of skepticism.)

          4. “The idea of phantom limbs, or that the content of our introspection can be self-deceptive, seemed common ideas to me, so I wondered if you meant something more.”

            I think the issue is that many people are aware of and accept that idea, but not its implications. Everything we can assert about subjective experience, we do from introspection. If introspection is unreliable, then anything that we can assert only from it becomes suspect, particularly if it seems to contradict empirical data, such as the sense of a unified self.

          5. “If introspection is unreliable, then anything that we can assert only from it becomes suspect,”

            I agree, but let’s be careful to not throw out the baby, too. I think we can make an important distinction about inner sense using the phantom limb example.

            The sensation of the limb is a false sensation in that no limb exists. But the sensation is still “undeniable and incorrigible” in that you definitely are having the sensation and that sensation is definitely about the (missing) limb.

            So someone can say to you, “The limb doesn’t exist,” but they cannot say you are not having the sensation. The sensation is undeniable.

            Nor can someone say the sensation is actually hunger for pizza or nostalgia for 1967. It’s definitely about the limb. The sensation is incorrigible.

            Of course, we can also have analytic thoughts that are true within themselves. If I invent mathematics, no one can claim 1+1=3. By extension, a lot of our logical analysis can be trusted (to a fair degree, anyway).

            But totally agree that phantom limbs, optical illusions, and counter-intuitive reality, do mine that landscape. (The blow up kind of mine, not the pick and shovel kind of mine. Or the “not yours” kind of mine. 🙂 )

            (If you’re ever looking for an argument in favor of philosophy, one of them is that it trains the mind to avoid the mines in the mental landscape. As Leon Wieseltier once put it, “The role of the mind is to actually question some of the assumptions and dogmas and prejudices of the heart.”)

          6. “Nor can someone say the sensation is actually hunger for pizza or nostalgia for 1967. It’s definitely about the limb. The sensation is incorrigible.”

            Actually, I don’t think we can be that confident. Years ago I was having a toothache. It strongly felt like it was the back tooth on my upper jaw. But after the dentist had numbed me, it was still aching. I told him. He squirted water on the back tooth on the lower jaw, whereupon it became obvious that was the tooth actually hurting. He said that sort of misdirection happens all the time.

            Lisa Feldman Barrett in her book tells a story from her grad school days about being asked out by someone she was not particularly attracted to. But once on the date, she found herself swooning over him. She finished the date feeling like she was seriously in love. But when she got home, she vomited and discovered that her feelings were actually the beginnings of a stomach flu.

            Similarly, the worst time to go before a parole board is just prior to lunch. Board members are consistently harder on prisoners when they’re tired or hungry.

            Cases like this make me think that nothing about introspection is undeniable, at least absent corroborating evidence.

          7. Well, yes, we already agree that content can be misleading.

            Your toothache, for instance. You really were in pain, that was undeniable, and it really was pain, no one could correct you that it wasn’t pain.

            The parole board members really are hungry (or not). That, too, is undeniable and incorrigible. It really is hunger.

            I’m not sure what to make of Ms Barrett mistaking nausea for “swooning” (whatever that is).

            I was on a date once where we had pizza and then went to a movie. I apparently got food poisoning and actually did throw up (politely behind the car which I pulled over in time) while driving her home. I was definitely feeling something off prior, but it was in a completely different part of my phase space than anything involving feelings for another.

            Regardless, she was undeniably feeling something, and no one could correct her to say she wasn’t feeling something that, for her, apparently is very similar to swooning. (It makes one wonder if love always makes her feel a little nauseated.)

            But absolutely we can mistake the content of those experiences. I don’t dispute that at all.

            OTOH, we are feeling what we think we’re feeling even if our intellectual understanding of it is mistaken.

  6. I remember in one movie (Day of the Dead, I think) they trained a zombie, almost like a pet, and the zombie seemed to develop something like affection for the humans who were taking care of it. That’s just one movie, of course, but it raised a lot of questions. There was also the book and movie “Warm Bodies,” in which the zombies have started to develop something like a society, now that humans are more or less out of the way.

    1. I hadn’t heard of those, but it does seem like when any character type becomes pervasive, it starts mutating until they’re the sympathetic hero. Don’t know if zombies have gotten there yet like vampires have, but it only seems a matter of time. (Actually, the zombie-like character in Neal Asher’s book I described to someone above ended up being the main character of one of Asher’s books.)

      1. Vampires have been sexy hero characters at least since Saberhagen and Chelsea Quinn Yarbro!

        Yarbro, especially, way predates, and way outdoes, anything Twilight. Yarbro’s books were definitely for adults!

        Zombies seem less popular, although there have been some movies. It might be the dead meat thing. What vampires do is something of a blatant sexual metaphor. And they rule the night. 🙂
