AI and creativity

Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a human endeavor.  Kelly’s main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments as creative.

Now, I think it’s true that AI suffers from a major disadvantage when it comes to artistic creativity.  Art’s value amounts to the emotions it can engender in its audience.  Often generating those emotions requires an insight from the artist into the human condition, an insight that draws heavily on our shared experiences as human beings.  This is one reason why young artists often struggle: their experiences are as yet too limited to yield those insights, or at least too limited to impress older consumers of their art.

Of course an AI has none of these experiences, nor the human drives that make such experience meaningful in the way it is to us.  AI may be able to exploit correlations between the content of other works and their popularity, but it is simply not equipped to find a genuine insight into the human condition, at least not for a long time.  In that sense, I agree with Kelly, although his use of the word “always” has an absolutist ring to it that I can’t endorse.

But it’s in the realm of games and mathematics that I think Kelly oversells his thesis.  These are areas where insights into the human condition are not necessarily an advantage, although in the case of games they can be.

Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.

In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.

I can’t say I understand this point.  Because AlphaGo’s success is objective, we can’t count what it does in achieving that win as creative?  The fact is that AlphaGo found strategies humans missed.  In some ways, this reminds me of how evolution often finds solutions to problems that in retrospect look awfully creative.

In the realm of mathematics, Kelly asserts that, so far, mathematical proofs by AI have not been particularly creative.  Fair enough, although by his own standard that’s a subjective judgment.  But he then focuses on proofs an AI might come up with that humans couldn’t understand, noting that a proof isn’t a proof if you can’t convince a community of mathematicians that it’s correct.

Kelly doesn’t seem to consider the possibility that an AI might develop a proof incomprehensible to humans that nevertheless convinces a community of other AIs, who could demonstrate its correctness by using it to solve problems.  Or the possibility that the “not particularly creative” AIs of today might advance considerably in years to come and produce groundbreaking proofs that human mathematicians can understand and appreciate.  Mathematics is one area where I could see AI eventually having insights a human might never have.

But I think the biggest weakness in Kelly’s thesis is at its heart, his admission that creativity, like beauty, lies in the eye of the beholder, that it only exists subjectively.  In other words, it’s culturally specific, and our conception of what is creative might change in the future, particularly as we become more accustomed to intelligent machines.

This leads him to this line of reasoning:

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.

In other words, machines can’t be creative because we humans won’t recognize them as such, and if humans do start to consider them creative, then we will have denigrated ourselves.  This is just a rationalized bias for human exceptionalism, a self-reinforcing loop that closes off any possibility of considering counter-evidence.

So, in sum, will AI ever be creative?  I think that’s a meaningless question (similar to the question of whether it will ever be conscious).  The real question is: will we ever regard them as creative?  The answer is that we already do in some contexts (see the AlphaGo quote above), but in others, notably artistic achievement, it may be a long time before we do.  But asserting that we never will seems more like a statement of faith than a reasoned conclusion.  Who knows what AIs in the 22nd century will be capable of?

What do you think?  Is creativity something only humans are capable of?  Is there any fact of the matter on this question?

This entry was posted in Zeitgeist.

46 Responses to AI and creativity

  1. Creativity is subjective, so we can’t use it as a metric for deciding if an AI is creative. The best we could do is try to work with it statistically. Have some works created by AIs and people, send them (keeping the author hidden) to an unbiased sample of folks, and let them rate the work based on creativity.

    If the numbers show the AI works rated as creative as the human ones, then the AIs win, and one more “Gap” for human exceptionalism will hopefully be caulked shut.
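The blind test described above can be sketched in a few lines. This is a toy sketch, not anyone’s actual experiment: `rate_fn` is a hypothetical stand-in for a human rater, and the function only reports mean creativity ratings per hidden group; a real study would also need a significance test on the difference.

```python
import random
import statistics

def blind_rating_trial(ai_works, human_works, rate_fn, seed=0):
    """Present works in random order with authorship hidden,
    collect ratings, and report the mean rating per hidden group."""
    rng = random.Random(seed)
    pool = [(w, "ai") for w in ai_works] + [(w, "human") for w in human_works]
    rng.shuffle(pool)  # raters see works in random order
    ratings = {"ai": [], "human": []}
    for work, label in pool:
        ratings[label].append(rate_fn(work))  # rate_fn never sees `label`
    return {group: statistics.mean(vals) for group, vals in ratings.items()}
```

If the two group means are statistically indistinguishable, the rater could not tell AI works from human ones on creativity.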

    It reminds me of an AI that analyzed a sample of classical works by a particular composer, and then produced a new work in that vein. In addition, at least one human composer attempted to produce another work in that vein. When put to the test, the audience voted that the AI’s work sounded the most like the original composer.

    • A Turing test for creativity. I like it.

      That AI story doesn’t surprise me. Of course, Kelly would argue that it wasn’t being creative, but most human art is similarly derivative. Sounds like the AI does derivation much better than humans.

      Which leads to the question: how much do art consumers really care about creativity, at least in the sense of getting something groundbreaking and new? The commercial success of The Sword of Shannara in the 70s comes to mind. It was an utter rip-off of The Lord of the Rings, but people wanted more Tolkien so badly that they were willing to buy a warmed-over generic version. It arguably jump-started a large genre of pseudo-Tolkien books.

      • You touch upon an interesting point. Even if we’re being unbiased about creativity, are we really dealing with smoke and mirrors? Some works get attention because of the political climate. Others, because people aren’t aware of prior art (which may exist). Yet others because they intentionally try to subvert social standards.

        The more cynical may conclude that creativity is an empty term used to evoke emotions for things they want to push.

        A more charitable definition would tie creativity to social trends and at least add a necessary condition of “dialog” with culture. While still nebulous, it can at least provide a concrete prerequisite: a creative AI should use as its inputs information about current events, the arts, and social trends, along with historical events.

        • Excellent points! I have to admit that I often lean toward the cynical interpretation, at least when people are using the term in a judgmental manner. Or perhaps another way of putting it, maybe creativity is just novelty we approve of, and like all values, it is inevitably specific to culture and the historical zeitgeist.

  2. Capybasilisk says:

    I would also disagree with his assertion that artistic talent can’t be objectively measured. There are several metrics I can think of just off the top of my head: number of weeks on the NYT bestseller list, number of gushing reviews from highbrow critics, number of prestigious awards accumulated, etc. I mean, this is certainly how we measure artistic merit among humans, so why wouldn’t it be good enough for AI?

    Even that seemingly ineffable quality of something being unpopular in its own time, but appreciated by future generations, can be reliably measured if an AI outputs a work designed to be unpopular up to some time t, then increasingly popular after t. Failure modes can be defined as the work becoming prematurely popular, or failing to become popular after t.
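That delayed-appreciation criterion, including both failure modes, can be made concrete. A hypothetical sketch, where `popularity` is a list of (time, score) pairs and `threshold` marks what counts as popular:

```python
def delayed_appreciation(popularity, t, threshold):
    """Classify a work against the 'unpopular until time t' target:
    it should stay below the popularity threshold up to t,
    then rise above it afterward."""
    premature = any(p >= threshold for time, p in popularity if time <= t)
    eventual = any(p >= threshold for time, p in popularity if time > t)
    if premature:
        return "prematurely popular"   # failure mode 1
    if not eventual:
        return "never became popular"  # failure mode 2
    return "appreciated after t"       # the designed outcome
```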

    @bloggingisaresponsibility

    It was actually way funnier than that:

    The audience thought the AI was the genuine Bach, and Bach the mere imitator.

    Notably, the event was presided over by Douglas Hofstadter of Gödel, Escher, Bach fame:

    In his Pulitzer-prizewinning book “Gödel, Escher, Bach,” published in 1979, Dr. Hofstadter speculated on whether uplifting music would ever be composed by an artificially intelligent machine. A program that could produce music as mesmerizing as the great masters’, he concluded, would require more than simple routines for stringing together notes. The machine would have to learn what it feels like to be alive. It “would have to wander around the world on its own,” he wrote, “fighting its way through the maze of life and feeling every moment of it. It would have to understand the joy and loneliness of a chilly night wind, the longing for a cherished hand.”

    Now he is not quite so sure. ”I find myself baffled and troubled by EMI,” he said. ”The only comfort I could take at this point comes from realizing that EMI doesn’t generate style on its own. It depends on mimicking prior composers. But that is still not all that much comfort. To what extent is music composed of ‘riffs,’ as jazz people say? If that’s mostly the case, then it would mean that, to my absolute devastation, music is much less than I ever thought it was.”

    • Those metrics seem like variations of BIAR’s Turing-like test. We’re effectively measuring how much the culture accepts those works. Like the Turing test, it’s the only way to bring something hopefully philosophical into the realm of science.

      Hofstadter makes the same point I made in my reply to BIAR. Most art is a knock-off to some degree. Actually, I’d go so far as to say that all art is. There’s nothing new under the sun. But some art pulls from a variety of sources in patterns no one else has thought to use yet, and we refer to that as “creative.”

  3. Steve Ruis says:

    We seem to overvalue creativity, the ability to create something new. I have no doubt that AIs will create new things and thus be creative, if we do not destroy all of the AI-supporting infrastructure as we reduce this planet to a state where it cannot support life well.

  4. I think the first step toward interesting and creative AI art will be to develop an AI that can judge whether art is good or bad, interesting or not. In order to create good stuff, you have to be able to evaluate it. (But then, how much good art is lost to the world because the artist thought it sub-par?) AlphaGo had an easy way to evaluate different attempts at playing Go. So what would happen if an AI could start from zero (like AlphaZero) and just start trying things, and then evaluate each try?

  5. J.S. Pailly says:

    Sounds to me like everything depends on what the definition of creativity is, and we can just keep redefining creativity in such a way as to keep excluding AI from having it, if that’s what we want to do.

    • I think that’s the main point. It’s almost like the point someone made a few years ago that what distinguishes “artificial intelligence” from regular computing is if it’s something that, until now, only humans could do. Once computers can do it and have been doing it for a while, it ceases to be artificial intelligence and reverts to plain old automation.

  6. Wyrd Smythe says:

    Just passing through quickly, will be back, but an initial thought: If a search engine finds something you wouldn’t have found on your own, is that a “creative” search? I don’t know that I’d call AlphaGo finding new pathways to success “creative.”

    When an algorithm writes a story that grips, amuses, and impresses me, then I’ll call it creative. Until then it’s just 6588648337950935760. 😀

  7. I think that Kelly is defining creativity in a way that affirms his thesis, although I think creativity could be defined differently. I sense that problems and their solutions may be culture-related, relative in other ways, or universal. Creativity could be defined as coming up with a solution to a problem via a procedure that nobody has ever thought of before, or via a step function rather than incrementally or linearly, etc. I believe that once we’ve finally explored the house of mirrors that is human consciousness, we will discover that creativity is basically an algorithm, which might go something like this:
    1. Do I have a ready workable solution to this problem?
    2. If yes, then terminate, referencing this solution.
    3. Do I have a possible workable solution in my memory queue?
    4. If yes, then terminate, referencing the solution(s) in the queue.
    5. Make a list of unworkable solutions to this problem.
    6. Randomly modify one of the unworkable solutions.
    7. Does the modified solution work now?
    8. If yes, then terminate, referencing this solution.
    9. If no, then go to step 6, unless you’ve run out of unworkable solutions.
    10. If you’ve run out of unworkable solutions, then make a list of unrelated solutions.
    11. Add 1 to counter.
    12. If counter = 100, then ask someone else for ideas.
    13. Go to step 6.
    I believe that creativity, whether in humans or in AI systems, is based on randomizing and free association. Does this make sense or am I oversimplifying?
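The numbered steps above amount to a generate-and-test loop, and can be sketched directly. This is a simplified toy version: `mutate` and `works` are hypothetical callables supplied by the caller, the “unrelated solutions” stage is folded into the same mutation loop, and returning None stands in for “ask someone else for ideas.”

```python
import random

def creative_search(problem, known, unworkable, mutate, works,
                    max_tries=100, seed=None):
    """Generate-and-test sketch of the creativity algorithm above:
    try known solutions first, then randomly mutate failed attempts
    and retest until one works or the try budget runs out."""
    rng = random.Random(seed)
    for solution in known:                          # steps 1-4
        if works(problem, solution):
            return solution
    candidates = list(unworkable)                   # step 5
    for _ in range(max_tries):                      # the counter, steps 11-12
        if not candidates:
            break
        candidate = mutate(rng.choice(candidates))  # step 6
        if works(problem, candidate):               # steps 7-8
            return candidate
        candidates.append(candidate)                # keep failures for reuse
    return None                                     # give up: ask someone else
```

For example, with `works = lambda p, s: s >= p` and `mutate = lambda s: s + 1`, searching from the unworkable solution 2 toward problem 3 succeeds on the first mutation.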

    • I get what you’re trying to do here. And I agree that it ultimately amounts to algorithms. However, the actual algorithms are probably far more complex, exist in clusters that run in parallel with cross pollination, and generally are extremely difficult to map to linear processes.

      A lot of this is complicated by the fact that in humans, much of it happens below the level of consciousness. We perceive it as intuitions that seem to come from nowhere, which inclines a lot of people to take a mysterian stance toward it, when really it’s just processes we don’t have introspective access to.

      • I do realize the actual creativity algorithm(s) will prove to be much more sophisticated than my poor example, but I thought it might serve as a useful example to some readers of how an algorithm “feels” and how it might apply to human behavior (and not just machines). I hadn’t thought about our unconscious algorithms; I agree there are algorithms operating “autonomously” (unconsciously) as well as those we actively control.

        I’m not just whistling in the dark though. When the winds of inspiration fail to fill my sails, I engage in an active process of free association and randomization. See my blog post, https://uncollectedworks.wordpress.com/2015/05/23/the-flying-poetry-creation-contraption/, for a fairly detailed description of the process I follow. You can judge the results in some of my poetry, which I usually call “Ouija Poem #n”, where n is an integer. Of course, sometimes the results come out pretty raw, but the clean-up process is mostly mechanical (at a fairly high level).

        Tom Clancy also describes a “Crazy Ivan” (Brownian motion) procedure utilized by submarine commanders when trying to avoid discovery. Many fighter jets jink randomly to avoid missile tracking.

        Incidentally, reading your posts and “talking” with you via our comments have become one of the pleasures I look forward to. Keep on doing what you do so well.

        • Inspiration is probably an unconscious algorithm too. I doubt poets and artists sit around passively like so many couch potatoes waiting for a muse to touch their temples. I often wish I could gain access to that process.

          • Here is one of my Ouija poems:

            “Ouija Poem #5”
            (Raanana, October 17, 2018)

            The hornet teases the parsnip with its buzzing,
            The schools of the future will be built of sticks and stones
            And break our bones,
            The feeling of skepticism may be the beginning of creativity
            But not its end,
            A lone kingfisher perches on the branch of a mulberry tree
            Among the rocky crags of the Hermon mountains
            Though there are no kings to be fished there,
            The house was wakeful in its ugliness
            But longed to fly to the clouds of Oz,
            If you believe in peace, it will take a chance on you
            And the galaxy will carve your name
            Among its myriad constellations,
            The tent by the edge of the sea
            Stuttered its dreamy uncertainty
            And the mulberry was blanketed in fog,
            The asphodel wrote of the mountain’s wisdom
            With great sympathy,
            The incubus kicked over the night lantern
            Though it was intangible to the vulture’s knowing.

        • Definitely, I didn’t intend to imply your example wasn’t valid. You were just demonstrating that it can be algorithmic, which I’m on board with.

          Interesting method for creativity. Unfortunately I’m an utter philistine when it comes to poetry, so I’m unable to appreciate the result. But your point about errors was interesting. It reminds me of something Orson Scott Card said in his book on writing science fiction and fantasy, that mistakes are opportunities. He described drawing a map of a fantasy city, but he screwed up and drew buildings in front of one of the gates, but instead of correcting it, decided to come up with story ideas for why that gate might have been walled off.

          Mike, I’m totally grateful for your kind words. I very much enjoy our conversations. You have interesting insights that make discussions fun and interesting. Please keep weighing in on any post you find interesting!

  8. Wyrd Smythe says:

    I think the conversation is conflating “artistic skill” (which is absolutely subjective) with mere “creativity” (which has objective properties). Consider how “creative” is a word we tend to apply only to certain kinds of art, not to all art, even though all art is creative on some level.

    OTOH, sometimes a “creative” type is synonymous with an “artistic” type, so the word could use some precise definition.

    I think what we mean by “creative” is not just finding something new. A rigorous brute-force search can find something new. I think it means combining (existing) elements in a new way that expresses significant meaning or purpose (isn’t just random or algorithmic).

    I do think algorithms can combine new elements. Rather easily. Will they ever do it with meaning (to the discerning eye) is a deeper question. It would seem to require an understanding of reality that’s part of the AI Holy Grail, perhaps tantamount to consciousness itself.

    AI has solved games in novel ways not obvious to human players (although what happens when human players pick up on these novel approaches?), but they do so, AFAIK, using the existing techniques of the game.

    A really “creative” AI would hack the game! 😀

    • What would be the objective properties of creativity? The word strikes me as an inherently amorphous and ambiguous one. Dictionary definitions verge on uselessness. Most of them simply define it as being about creating something, which doesn’t seem particularly useful in this context.

      So a really creative AI would pull a Captain Kirk (in the sense of his Kobayashi Maru solution)? Creativity requires cheating?

      Maybe we just have to get creative with our conception of creativity 😛

      • Wyrd Smythe says:

        “What would be the objective properties of creativity?”

        I’ve already touched on some of them. “Newness” is an objective property. I think “meaning” can be measured when it comes to art, “purpose” or “value” when it comes to “creative” solutions to problems.

        The truth of the phrase “there is nothing new under the Sun” (which comes from Ecclesiastes 1:9) depends a great deal on the metric.

        Every new book, in a literal sense, is new under the sun. But when books are reduced to the infamous “seven basic plots” then there have been no new books under the sun in millennia. Somewhere between we talk about stories that tell those seven plots in “fresh” or “creative” ways.

        (Surely, for one example, the NBC sitcom, The Good Place, would qualify as “creative” compared to how many sitcoms just reheat the old stew of “family” or “workplace.”)

        “Dictionary definitions verge on uselessness.”

        Yeah, the irreducible is hard to define. The only real definition is the thing itself; all a dictionary can do is describe it. But the definition still lurks in the consensus of all those descriptions.

        It’s a bit like how there is no perfect circle, but all the circles converge on a mathematically perfect one. That’s what allows our conception of pi.

        “So a really creative AI would pull a Captain Kirk (in the sense of his Kobayashi Maru solution)? Creativity requires cheating?”

        LOL! Yes, exactly. I wouldn’t say it requires cheating, per se, but thinking “outside the box” is exactly what we mean by “creative” thinking. It’s the combination of elements into an unexpected new thing.

        You’d expect Kirk to pull off some brilliant maneuver. You wouldn’t really expect him to just cheat.

        • Because I’m a hopeless reductionist, I think about the definition I came up with in my discussion with BIAR: novelty that we approve of. Or perhaps more generously: novelty we find useful. The nice thing about this definition is it handles things like the solutions that evolution manages to stumble on that seem creative. It allows us to just say they are creative.

          “The truth of the phrase “there is nothing new under the Sun” (which comes from Ecclesiastes 1:9) depends a great deal on the metric.”

          I shouldn’t be too surprised that quote comes from the Bible. It seems like a lot of good ones do. And this one’s from one of the few books in the Bible that I think have philosophically interesting things to say.

          “You’d expect Kirk to pull of some brilliant maneuver. You wouldn’t really expect him to just cheat.”

          It could be argued that the Kobayashi Maru test was itself a cheat, an unreasonable scenario just to make cadets squirm, and that all Kirk really did was cheat the cheat. A more realistic test was the one shown in the ST:TNG episode: Thine Own Self, which tests an officer’s ability to deal with a scenario where survival requires recognition that a sacrifice is necessary.

          • Wyrd Smythe says:

            “Or perhaps more generously: novelty we find useful.”

            Yep, I’ve said essentially the same thing. I don’t agree with the idea of “approve of,” though. Novelty seems a fairly objective property to me.

            “It allows us to just say [evolutionary solutions] are creative.”

            I don’t see it that way. Why would we want to be allowed to say that?

            Nature is never “creative” in my book. Creativity is strictly limited to intelligent solutions and creations. (You haven’t astonished everyone by switching to ID, have you? :O 😀 )

            Equally, despite the great beauty nature provides, that beauty is never “art” in my book, because art can only be created by an intentional, intelligent mind.

            As I’ve said before, I see a strong correlation between art and creativity, although there are also creative engineering solutions (“clever hacks” per the old, lost definition of “hacking” — which meant, essentially, “creative solutions”).

            “It could be argued that the Kobayashi Maru test was itself a cheat, an unreasonable scenario just to make cadets squirm,”

            A different discussion than whether Kirk “creatively” solved the test, and one I might agree with. The stated purpose was to teach loss and how sometimes you can’t win, which seems valid enough at first blush, but does any organization actually use such tactics? (I dunno, do they? Do SEAL teams ever get tested that way, for instance?)

            It strikes me that the test would be hard to keep secret, and the secret is kind of the only way it works. Kirk apparently knew what he was getting into (and wasn’t having any of it).

            Certainly the scenario you mentioned was more realistic!

          • “Nature is never “creative” in my book. Creativity is strictly limited to intelligent solutions and creations. (You haven’t astonished everyone by switching to ID, have you? :O 😀 )”

            One of the things I often find frustrating about discussing evolution (or philosophy for that matter), is that it’s always subject to getting bogged down in a discussion of language. The fact is, biologists often refer to “innovations” and “developments” in evolution. Sometimes they even use the word “creative”. They never mean that anything is being innovative or that any conscious entity is developing anything or being creative, but the language is useful, and the pedantically correct version is tiresome, both to write and read.

            So if you ever see me use those words, please rest assured that I haven’t gone ID (or creationist, or whatever). 🙂

            Along those lines, I don’t have your commitment that those words can only be used in relation to conscious agents. I fully understand that their etymology comes from that kind of usage, but similar to words like “information”, “know”, or “memory”, it seems like we now have to account for the actions of non-conscious systems, whether they be biological or technological, and those words might be useful, such as asking how an IT system “knows” to launch a particular business process at a particular time.

            I actually think the Kobayashi Maru gag was really more of a throwaway (but creative) way for the screenwriter to set up a theme in The Wrath of Khan that would ultimately resonate with Spock’s actions at the end. It might have been more creative if they could have found a concept more resilient to examination (like the ST:TNG one), but as we’ve discussed before, movie audiences don’t seem to reward that kind of creativity.

          • Wyrd Smythe says:

            “Along those lines, I don’t have your commitment that those words can only be used in relation to conscious agents.”

            And I don’t object to how those words are used. I understand what people mean. It’s more a matter of how I understand and define it. (My handle, after all, is Wyrd Smythe, so you can imagine how wyrds are significant to me. 🙂 )

            “I actually think the Kobayashi Maru gag was really more of a throwaway…”

            Absolutely. Starts the movie, doesn’t it? You get to think key characters get killed! What a great way to start! (And, hey, yeah, it’ll all tie in later!)

    • paultorek says:

      I’m with Wyrd here – I suspect creativity is partly objective, just as measures of Kolmogorov complexity are objective up to a limit with minor variations based on the choice of measure. Creative solutions are highly informative in a way that seems to invite some clever application of information theory to characterize it.

      Unfortunately, I’m not creative enough, and definitely not expert enough, to come up with that characterization.
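One toy way to cash out that intuition: Kolmogorov complexity isn’t computable, but off-the-shelf compressors approximate it, and the normalized compression distance built on them gives a rough, objective novelty score. A sketch under that assumption, not a serious proposal:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 when x and y share
    most of their compressible structure, near 1 when they don't."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def novelty(work: bytes, corpus: list) -> float:
    """Novelty of a work relative to a corpus: distance to its
    nearest neighbor.  High novelty = no close precedent."""
    return min(ncd(work, prior) for prior in corpus)
```

A derivative work scores low because some corpus member compresses it almost for free; a genuinely new combination scores high. Note this captures newness only, not value, which is where the subjective half of creativity comes back in.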

      • Wyrd Smythe says:

        “Creative solutions are highly informative…”

        Yes, that’s a good way to put it!

        Images and text I’ve seen generated by AI are often interesting or hilarious, but never yet informative.

  9. I’m going to take the general perspective of Wyrd Smythe and paultorek here, though with my own twist. It’s quite possible for our computers to do things which to us seem “creative”. (I personally don’t like referring to them as “AI”, since to me this suggests that we’ve built something special here, or even in the realm of “conscious”.) Because a human that figures out a new strategy for Go or chess will display creativity, it should naturally seem to us that a computer will be creative for doing the same. This is to say that we humans can appreciate their solutions. But now turn the question around. Can they appreciate their solutions? Or indeed, can they appreciate anything? No. They have no personal purpose from which to appreciate anything at all regarding existence.

    So try this out for an effective definition. In order for something to display “creativity”, it must have the capacity to appreciate creativity as well. And by “appreciate” I mean that it must at minimum be sentient, though in human terms perhaps more than just the standard sentience that we perceive in a lizard. This would be an ability to perceive and be moved by “beauty”.

    Here Mike might suggest a need for “metacognition”, or the ability to think about thought. Well supposedly the dog has no metacognition, though I’m fairly sure that it can appreciate beauty to at least some degree, as well as be proud of itself for coming up with innovative solutions. So I’d say that the dog can be “creative” as I’m defining the term, though our machines cannot.

    Since they’re somewhat related I’m now going to reply to Mike’s comment from back here: https://selfawarepatterns.com/2019/02/17/did-smell-lead-to-consciousness/comment-page-1/#comment-27551

    Agreed on panpsychists, as well as that you shouldn’t be dissuaded even if you do give them or zany futurists comfort with your “sliding scale” view of consciousness. I do find this interesting however.

    On emergent dualism, I think that you may be taking the root of the word too literally. Here a person could be referred to this way for acknowledging any binary distinction. Thus it’s probably best to confine the term to the way that it’s formally used in academia. Here Rene Descartes was a dualist given his supernatural account of human function. I put Chalmers there no less, and regardless of his skills and credentials from which to be regarded as somewhat of a naturalist.

    Let’s try this. Do you believe that a given rock can be sentient, which is to say that existence can be good or bad for it? No. But do you believe this of yourself? Yes. So here you perceive the existence of sentient and non-sentient kinds of stuff. Since you believe that sentient stuff exists this way given causal dynamics of this world, then like me and unlike Chalmers, you are a monist.

    What gets more tricky however is the way that I phrase the existence of sentience. I don’t say that any part of my body is sentient, and certainly not my whole body. I say that my body produces sentience as an output of its standard function. Thus it creates “me”, or a purpose-driven entity. This is like Wyrd Smythe’s laser analogy. Just as a device can output a laser beam that is not itself the device, I consider my body to output sentience for the experience of a thusly created entity which is associated with my body.

    If it’s too abstract (or something) for you to consider sentience as an output of the body under my models, then go ahead and consider sentience as a part of the body under my models. And so in that case the laser beam will simply exist as a part of the laser gun that shoots it. Thus your conscious figuring right now will exist as a part of your brain rather than as an output of your brain. My two computers model should be intelligible from that perspective as well if helpful.

    So to tie this back to the current post, do not expect to see our computers *appreciating* creativity until after they become sentient, and so do not expect my definition of creativity to be displayed by them until a presumably very hard problem is demonstrably overcome. Though one might guess that sentience occurs through communication from the reflexive parts of the brain to the reasoning parts, allowing the reasoning parts a chance to intervene in action selection, without any evidence, that’s all such speculation shall remain.


    • Wyrd Smythe says:

      I don’t know that it helps define creativity, but being able to appreciate creativity does seem a decent litmus test for having it. (Similar to how it likely requires consciousness to recognize consciousness.)

      FWIW, I’ve known a lot of dogs in my life. None of them showed a spark of creativity. Likely no animal does per your definition.

      The more I think about it, the more I see consciousness and “creativity” coming from the same ability of a mind, so to grant creativity is to grant consciousness.


      • Wyrd,

        The more I think about it, the more I see consciousness and “creativity” coming from the same ability of a mind, so to grant creativity is to grant consciousness.

        Definitely! And if you’re excluding it from dogs then it sounds like you’re talking about an advanced form of consciousness. But as a dog guy, I wonder if you agree with my assessment of the dog? Can it ever “appreciate beauty”? Can it ever be proud of itself for coming up with innovative solutions to problems? Either criterion bars creativity from our modern machines, though I personally don’t mind lowering the bar to anything which possesses such characteristics. So do dogs?


        • Wyrd Smythe says:

          In my experience dogs are not capable of appreciating beauty or having pride in their accomplishments. Those, especially the first, are high-level emotions.

          They are quite capable of being very pleased that they’ve made you happy. That’s their evolutionary coding.


    • Eric,
      Your definition seems like it creates a loop similar to the one Kelly does. It does safely preserve creativity for conscious entities, but seems to do so largely by defining it to be so.

      I’m finding it interesting that so many people seem to have a protective attitude toward the concept of creativity. When I look at the Merriam-Webster definitions: “the ability to create”, “the quality of being creative”, which like all quality dictionary definitions are based on societal usage, I don’t see anything that inherently requires consciousness. Is society just being too casual with its use of that word?

      “On emergent dualism, I think that you may be taking the root of the word too literally….it’s probably best to confine the term to the way that it’s formally used in academia.”

      The academic use includes substance dualism, but also property dualism and predicate dualism. https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism I’ll admit that most of my own usage of the word when it’s unqualified refers to substance dualism, but that’s only because it’s the most known variant.

      I’ll just note this because I haven’t done so yet in this thread, but I don’t think you’re accurately characterizing Chalmers’ position. For example, he accepts the concept of mind uploading, which is usually not considered compatible with supernatural dualism. Using your interpretation of the word “dualism”, I don’t think Chalmers would be one either.

      Based on past conversations with Wyrd, his laser analogy posits that consciousness is a physical product of the brain, such as maybe an electromagnetic or quantum field. (Wyrd, let me know if I’m getting this wrong.) So are you saying the conscious computer is a similar product? Or are you saying that the conscious computer is what’s known in IT as a logical or virtual machine, essentially a machine made out of software? Just trying to understand.

      “Though one might guess that sentience occurs through communication from the reflexive parts of the brain to the reasoning parts to allow the reasoning parts a chance to intervene in action selection, without any evidence, that’s all such speculation shall remain.”

      There actually is evidence. One example: prefrontal lobotomies severed the connections between the prefrontal cortex and the rest of the brain, cutting off much of the reasoning parts from the reflexive parts. This typically resulted in a dramatic reduction in a patient’s ability to have emotional feelings. The procedure was used in the middle decades of the 20th century and was abandoned once less invasive psychiatric drugs were developed.


      • Wyrd Smythe says:

        “(Wyrd, let me know if I’m getting this wrong.)”

        You are not!


      • Mike,
        On Kelly, the problem is that he (like most everyone given that my first principle of epistemology has not yet gained general acceptance) seems to believe that there is a “true” definition for creativity which exists somewhere for us to potentially discover. But I can’t fault him for doing what’s virtually universal. So instead I must realize that he’s using a relatively stringent definition (which can certainly be useful in many contexts). Notice that gravity between the sun and our planet creates tidal changes, and so is also “creative”. At the moment however he doesn’t seem to be referring to something like this with his usage of the term. I’ve simply given his implicit definition an explicit form, as well as observed yet another demonstration of how crucial it will be for academia to finally develop a respected community of professionals armed with their own generally accepted principles of philosophy to help straighten out the mechanics of science. From here our reference books should finally stop defining things like what “is” creative, and instead obligate readers to accept a writer’s definitions in the attempt to comprehend what’s being said.

        I’ll keep other forms of dualism in mind where they apply to me. I’m certainly this way regarding value for example. Existence seems valuable to stuff like me and not to stuff like a rock. But obviously the Cartesian substance dualism connotation will naturally tend to bleed over into other connotations.

        It’s interesting to me that you believe that if I knew David Chalmers (which beyond hearsay I don’t) I wouldn’t actually consider him to be a substance dualist. Well okay, I’ll stop referring to him this way for now. But believing in the concept of mind uploading does not help me see him differently whatsoever. That goes quite well with a person who knows how to say what’s needed in order to get his way, not to mention the sci-fi element of that sort of thing (even if conceptually possible). Obviously the converse position doesn’t make F&M substance dualists.

        Yes like Wyrd, my own conception of consciousness is that it’s a physical product (or output) of the brain. My sentience is something that I can be sure is at least as real as a laser beam burning through some material. A virtual rain produces no wetness. Similarly a virtual me produces no reward or punishment, and even if it does accurately predict my function (not that I believe such a thing could ever predict my function very well).

        On your evidence that sentience exists as communication from the reflexive parts of the brain to the reasoning parts to allow the reasoning parts a chance to intervene in action selection, sure, it could be interpreted that way. But if it’s that easy, then of course we’d like demonstrations that this is all it takes for our own machines to create purpose-driven function. And as I mentioned once before, also note that your position here may be interpreted to conform with my dual computers model. Anyway, perhaps my latest clarifications will help you grasp my positions, and help me grasp yours.


        • Eric,
          Sorry, I didn’t catch that your definition was meant to be an explicit statement of Kelly’s implicit definition.

          Just in case you’re interested, Chalmers has a web site where you can get his views straight from the horse’s mouth.
          http://consc.net/
          He’s also in a bunch of Youtube videos.

          “Yes like Wyrd, my own conception of consciousness is that it’s a physical product (or output) of the brain.”
          “And as I mentioned once before, also note that your position here may also be interpreted to conform with my dual computers model.”

          Thanks for the clarification, but given your statement in the second quoted sentence above, I’m still not sure I’m clear on your conception. The first statement implies that the conscious computer is a physically produced construct that performs its own computations. It sounds like this mechanism might exist as complex interactions of an electromagnetic or some other kind of field. (This is meant purely as an example, not to tie you to any specific implementation which I know you’re agnostic on.) Per other comments you’ve made in the past, the resulting computer only computes a fraction of 1% of what the overall brain computes and interacts with the brain.

          However, what I referenced in my last comment as interactions between the reasoning and reflexive parts of the brain maps to interactions between different brain regions, most notably the prefrontal cortex and the brainstem (interacting through the limbic system). I have often wondered if the frontal lobe regions that coordinate imaginative simulations couldn’t be considered equivalent to your second computer, but there are major differences.

          This region is far larger than what you usually describe as a very tiny computer; the division between what it does and the rest of the brain isn’t nearly as sharp as in your descriptions; not all aspects of what we call consciousness happen within it; and it is very much part of the brain, not produced by it.

          So, there are similarities, but the differences seem significant. At least unless I’m still not grasping your model accurately.

          Obviously we don’t understand the brain well enough yet to produce an equivalent technological system. But just because we can’t meet that standard yet doesn’t mean we don’t know a lot, or that everyone’s speculative theories are on equal footing.


      • Okay Mike, let’s see if I can finally explain this sufficiently.

        You’re right to be suspicious that the frontal lobes region that coordinates imaginative simulations is not the tiny second computer I speak of, or the one which does less than one thousandth of one percent as many calculations as the main computer which outputs it. The frontal lobes are part of the brain, and indeed the entire brain and nervous system constitutes the main computer which outputs the tiny conscious form of computer that I’m referring to.

        Here let me anticipate an obvious concern. Didn’t I say that this computer wasn’t virtual? Didn’t I say that it’s no less real than a laser beam? Yes, but I also know that it exists as a non-virtual entity far more certainly than I know that I have a brain in addition to it. My body is merely a belief; “I think, therefore I am” holds even if I actually exist as the product of some evil demon. If you “think” (which is to say, interpret conscious inputs and construct scenarios in the quest for happiness) then you could say the same.

        While the brain form of computer seems to function on the basis of neurons, and the computers that we build clearly function on the basis of electricity, this one (outputted by the brain) functions on the basis of valence, or affect. Unlike any other throughout all of existence, the conscious form of computer is purpose, or value, driven.

        (Consciousness seems to have emerged because evolution couldn’t sufficiently program for the function of subjects in more open environments. This is the “why” of consciousness that Chalmers erroneously calls a “hard problem”. Instead it’s his “how” problem that I consider to be extremely hard.)

        If that’s comprehensible enough then I’d expect a person to say, “Oh, so you’re talking about consciousness itself (however you’re defining it) existing as a computer”. Yes that’s what I’m talking about. Furthermore I’ve developed a detailed architectural model of its various forms of input, processing, and output function.

        Then from here I’d expect a person to ask why I consider it helpful to classify consciousness itself as “a computer”. Well I haven’t been able to come up with a better analogy, even given that others seem to consider this classification irksome. And since I’m talking about three forms of input, one form of logic-based processor, and one form of output, I actually like it. Furthermore, according to Wyrd Smythe, as far back as the 1600s highly trained people were paid to consciously figure things out, and were referred to as “computers”. So the etymology for this usage does seem solid, even if today people tend to think of computation as something different from, or even opposed to, conscious function.


        • Eric,
          Okay, thanks. This clarifies some things I’ve been wondering about. I wasn’t sure if you considered the tiny computer to be physical or virtual. (Note that “virtual” in this context shouldn’t be taken to mean “fake”. Virtual Machines are now quite common in IT. The server that hosts this blog may well be a virtual server. I know it would be if I ever decided to self-host. “Virtual” here only means that it is implemented in software rather than hardware. Of course, all VMs ultimately do run on hardware, but the hardware is abstracted away from them, allowing for flexibility in deployment.)

          So with that clarification, I can now point out what I see as the primary difference between our understandings of consciousness. You spend a couple of paragraphs defending the idea of the second computer being a computer. You don’t have to defend that to me. Instead, you have to defend why you think this second computer even exists.

          To be clear, I don’t think there is a second computer. There is only what you call the “main” or “large computer”. (I usually don’t use the word “computer” to describe the brain because it’s probably more accurately thought of as a vast interlocking network of computational systems, which is what confused me for a long time about your concept of two computers.) There is only the system itself and what it does. The only outputs it produces, as far as I can see, are electrochemical signals that produce movement and physiological changes.

          Why are so many people convinced that there is something separate and apart from this system (tiny computer or otherwise)? I think it’s because the brain holds a model of certain aspects of its processing. This model is a simplified, cartoonish version of the reality, but it is highly adaptive in allowing the brain to monitor and fine-tune those processes. As a guide to reality, though, it’s misleading. It gives us self reflection, but what it reflects back is so simplified that it gives the impression something mystical is going on, when all that’s really there are limitations in the system’s ability to model itself.

          You mention Descartes’ “I think therefore I am”, but I think you’re making assumptions beyond mere existence, assumptions about the nature of “I” and “am”. “I” might be just a piece of executable code with access to a model of a grander self that might or might not exist, and “am” could be existence in a virtual environment. I’m not pointing this out because I think that’s what is, just that our knowledge of our own self has its own set of uncertainties and question marks. It’s important to understand these uncertainties when assessing what’s going on in our brains.

          Our views do have similarities at a higher level. I do think there is an imaginative engine in the cerebrum, which has many of the attributes you ascribe to the second computer. But it’s an integral part of, and takes up a substantial portion of the brain, not something running in a generated computer.


      • It’s great to hear that we’ve been able to move this forward Mike! So let’s carry on…

        Yes I’m talking about a physical computer. Your non-fake “virtual” option is tempting, though I’d need good reason to go with such an exotic distinction. Pain is an output of the brain that the conscious entity experiences, somewhat like a laser gun outputs a laser beam to thus affect things. Simulated pain and simulated laser beams shouldn’t quite do what actual pain and lasers do. So for now I’ll continue on without adding the “simulation” qualification. (Furthermore the “software” connection seems suspicious since I doubt it’s useful to say that the brain uses any.)

        I don’t believe that I need you to believe that a second computer “exists”, since what you’re telling me is something that I believe no less than you do. This is to say that in our heads resides “a vast interlocking network of computational systems”. But will you tell me that it can never be helpful to speak of conscious and non-conscious forms of function? Given that you do so all the time, of course not. So let’s say that from now on I start calling the brain a single computer, and say that this computer has conscious and non-conscious forms of function which correspond with the quite extensive architectural model that I’ve developed. Then you can try to figure out what my model implies for the various situations that you commonly read about. Does this model help explain what’s observed? Once a working-level grasp of my model’s nature is attained, you should be quite able to assess both its strengths and weaknesses. How does that sound?

        On the appeal of dualism, I do appreciate your “false cartoon of consciousness” explanation. I’m going to have to stick with the tried and true “heaven and hell” explanation however. We want eternal reward, as well as punishment for the evil who wrong us. Each can be accomplished through the invention of a soul.

        On Descartes, my own interpretation of “I” is my sentience, or the value dynamic to existence as I see it. Just as our computers stop working without electricity, “I” disappears without sentience. So I think in order to feel good, though if there is no “I” then there will be no thought. I guess I could get a bit more basic by saying “I feel (as in good/bad), therefore I am.” And if my sentience happens to be “code”, well hey, it still creates punishing and rewarding existence, or me.


        • Eric,
          “So let’s say that I from now on I start calling the brain a single computer, as well as that this computer has conscious and non-conscious forms of function…”

          The difficulty is that I don’t see anything inherently different about the processing we label “conscious” versus the processing we don’t. The only difference is that some of that processing is accessible to introspection, and I suspect the only thing that makes it accessible is the existence of circuits between the region doing the processing and the regions that do introspection. When I say something happens within consciousness, all I mean is that it is introspectively accessible.

          ““I” disappears without sentience. ”

          The issue is that someone can have sensory consciousness without necessarily having affects. Consider people who have had a prefrontal lobotomy. Their ability to feel those punishments and rewards is substantially eliminated. The same goes for people with a destroyed amygdala, who are often incapable of making a decision because they can’t feel anything about possible courses of action.

          These people are substantially disabled. But they are still aware of and can generally navigate their environment, and can often engage in habitual or reflexive action. Many can have conversations. It seems difficult to say they’re not conscious, at least in some capacity.

          These scenarios are ones I’ve struggled with myself, and they’ve forced me to revise my own thinking.


      • Mike,
        Well, beyond introspection as a way of differentiating conscious and non-conscious forms of function, surely other differences must exist. Observe the standard contention that only humans and a few primates even possess the ability to introspect. By that standard it would make no sense for anything else to even be conscious/sentient. So surely there must be some other differences between conscious and non-conscious forms of function as I define the terms. For example, I perceive that I’m choosing the words that I say to you now. Conversely, I don’t perceive choosing to cause my heart to beat as it does. Whether or not I have a say in the matter seems like a major difference between these two forms of function.

        On states of existence with reduced or apparently nonexistent valence, I’m pleased that you’ve been concerned about this just as I have. It does naturally seem that if I were perfectly lethargic or numb, but still received “vision”, “sound”, and so on in non-valence capacities, I would continue to remain somewhat conscious. Why not interpret this information and construct scenarios about what to do? Why not think? But this may be more a case of “the lights are on but no one’s home”. I’ve yet to receive good evidence that conscious function continues under such a deficit. Can something lose all valence and still function in a conscious capacity? Without such motivation it could be that consciousness becomes lost here, somewhat like removing electricity from one of our computers. (Mind you, non-conscious function may remain apparent, and may seem conscious.)

        The observations that you’ve currently provided may support this theme. Losses of valence from a partial lobotomy or an amygdala void should not harm habitual, or non-conscious, function, though they should harm conscious function, as observed in an inability to make a decision. But wherever we have good evidence of a complete loss of valence (unlike here, as I understand it), along with continued conscious function, my models become challenged. So I do seek this sort of evidence to further assess my models.


        • Eric,
          “So surely there must be some other differences between conscious and non-conscious forms of function as I define the terms.”

          Careful. Arguing from consequences only works if you can demonstrate the consequences contradict reality. I don’t think there’s anything separating conscious processes from unconscious ones other than introspection. That means in most animals, there is no distinction.

          That said, we often are not introspecting when we’re focused on specific tasks, but we’re still considered conscious. And many of the cognitive processes we introspect also occur in animals, even if most of them don’t introspect. Does that mean they’re conscious? Or does their lack of introspection mean they’re not? I don’t think there is a fact of the matter here. Only specific capabilities are objective.

          “For example, I perceive that I’m choosing the words that I say to you now. Conversely I don’t perceive choosing to cause my heart to beat as it does.”

          Your ability to perceive choosing the words comes from the introspection machinery, machinery that doesn’t have access to the machinery that controls your heart rate.

          I don’t know of any neurological cases where all valences have been lost. But it does seem like every individual conscious valence has been lost in at least some cases. People can lose the ability to have emotional feelings, to feel pain, or just about any other valenced perception.

          But let’s say that someone did lose all of those abilities at the same time. They might still be aware of their surroundings, and they might still have memories, attention, and many other capabilities we commonly associate with consciousness. They’d still have the ability to act on habits they’d previously formed or to react reflexively. There’s no doubt that they’d be dreadfully disabled, but you might have a hard time selling that they’re not conscious. Again, I see no fact of the matter here.

          Consciousness is in the eye of the beholder.


      • Mike,
        Yes consciousness does lie in the eye of the beholder. In fact I consider this to be the case for all human terms. Fortunately various generally agreed upon definitions also permit effective human communication. Obviously not so for consciousness.

        The “ordinary language” philosophers thought that they could solve this sort of problem by advocating the use of the most standard definitions for any given term. They failed, I think, largely because ordinary definitions aren’t always all that useful (let alone agreed upon). Though certainly not ordinary, it took no less than Sir Isaac Newton to define force as mass times acceleration.

        In contrast to their approach, I seek to preserve worthy theory by obligating the reader to use the writer’s definitions in the attempt to understand what’s being said. But here’s the thing. It turns out that I haven’t been good at practicing what I preach. An excellent display of this concerns your introspection/metacognition definitions and modeling.

        With this formal acknowledgment however, hopefully I’ll be able to fully accept your definitions so that I might better grasp and assess the nature of your models.


        • Eric,
          On accepting definitions, I don’t think you’re any more delinquent on this than the rest of us. I find it striking how often philosophical and scientific disputes amount to disagreements in definition.

          One thing I think might help with this is to prefix terms to indicate which definition is being discussed. So we can talk about Platonic-information, Shannon-information, Kolmogorov-information, etc. In the case of consciousness, we might talk about Locke-consciousness vs Chalmers-consciousness vs Eric-consciousness.

          This can be helpful when there are real ontological differences. Oftentimes a definition is an assertion of a particular ontology. Accepting that definition might imply accepting that ontology. For example, while naturalistic panpsychism involves a definition of consciousness I find unproductive, dualistic panpsychism entails actual assertions about reality I don’t buy. But labeling it means I can discuss it without calling it “consciousness” in some unqualified manner.

          Even when the ontology isn’t in dispute, the problem is that often a definition of consciousness amounts to a philosophical statement about what aspects of our cognition are important, that is, which ones make us worthy of moral consideration. This seems to make many people insistent that their version of consciousness is the only true one.


      • Wow Mike, that might really help! It never occurred to me that adopting a simple convention like that might be a great interim solution. Instead I’ve been thinking that prominent people in academia would need to agree upon something like my first principle of epistemology in order for this vast problem to begin getting straightened out. And formal acceptance of such a rule would be complicated. Note that after thousands of years trying, today philosophers are judged harshly for having no agreed-upon understandings to their credit. Here many seem to defend their profession by asserting that it exists as more of an art to potentially appreciate than something which needs to provide practical answers so that science can become more effective. But they’d be breaking this self-serving and backwards rule if they were to finally come up with a given general understanding — here they’d have to explain why this is all they’ve got (beyond their “art” that is). So overcoming the first precedent should be challenging.

        In order for your not-so-threatening proposal to catch on, I’d think that first names wouldn’t be distinguished enough. Perhaps “MS consciousness” would do for a “Mike Smith” version, then maybe “PE consciousness” would work for my own, and so on. The reader would need to be informed about whose version a given abbreviation refers to, and in longer threads some redundancy may be helpful for casual readers passing by. I suppose that for a prominent person like David Chalmers, a simple “Chalmers consciousness” would probably work best.

        I can’t wait to get started on this!


  10. dsbhat says:

    Reblogged this on Strands in the Weave of My Thoughts and commented:
    I really liked this post and the accompanying intellectual discussion. I’m, most definitely, not academically qualified to leave an input on the topic, but I do believe that art (a point brought up as only one aspect of creativity) is solely (and soully) a human trait. AI might be able to compute creatively and will probably improve exponentially in the future, but being a product of the human mind it is still only just a tool, and any creativity is still a derivative of the human mind. Just a humble layman’s thought.

