Service Model

Adrian Tchaikovsky’s Service Model is a fresh take on what can go wrong in a world of robots and AI.

Charles is a robot valet. He works in a manor performing personal services for his human master: checking travel arrangements, laying out clothes, shaving him, serving meals, and so on. However, it appears to have been several years since his master went anywhere, and the master is apparently not doing well.

One morning, Charles discovers that he has killed his master, slitting his throat during a shave, although he doesn’t remember why he did it or the act itself. He reports his guilt to the house majordomo computer so that the police can be called. Police robots eventually arrive, but don’t appear to be functioning well. Apparently there is a very high volume of calls and they are having maintenance issues. After much confusion, Charles is ordered to report to Central Services to be decommissioned.

However, the house majordomo removes him from the house roster and instead orders him to report to Central Services for diagnostics. Removing Charles from the roster strips him of his name, which obviates the earlier order to report for decommissioning. The now undesignated valet unit (who eventually becomes known as “Uncharles”) follows this directive and walks to Central Services. On his way, while analyzing what kind of future there might be for a valet who occasionally kills humans, he sees large numbers of manors in various states of abandonment and disrepair.

When he finally reaches the Central Services diagnostics center, there is a long line of robots that doesn’t seem to be moving; in fact, some of the robots in line appear to have been there for years. Uncharles decides to break in line, and ends up talking to “the Wonk”, whom he initially takes to be a diagnostician robot. The Wonk tells him that he appears to be infected with the “Protagonist Virus”, but Uncharles is skeptical. The Wonk also observes that the world appears to be screwed up.

When Uncharles asks too many questions about when or if there will be any service, the central computer decides he’s consuming too many computational resources and orders that he be taken to “data compression”, which apparently involves physically compressing malfunctioning robots into a small cube. However, through a sequence of unlikely events, he is able to escape Central Services.

What follows is Uncharles’ quest to find a new valet role, or some way to serve humans, which takes him on an odyssey through the world. Since Uncharles is a robot, his comprehension of many events is limited, although we see enough through his perceptions to deduce what is happening.

Tchaikovsky tells the story from Uncharles’ point of view. However, it’s not clear whether the narrative is strictly third person limited, where we’re seeing things entirely through Uncharles’ perceptions, or a more objective description of a machine’s operations. Mental states are often mentioned that “nobody would design a valet to feel”, which could be interpreted either as Uncharles’ self-reflection or as the narrator’s observation. Tchaikovsky leaves the reader somewhat free to choose, and thus to decide which level of cognitive sophistication to ascribe to him.

As the story progresses and we learn more about what is happening, it becomes clear that the world before the problems was not a robot utopia. In the end, we see a reflection of, and warning about, many human attitudes that exist today, and where they could eventually lead when they can be enforced with technology.

As usual, I love Tchaikovsky’s exploration of ideas, in this case about how robots might perceive the world and make decisions, along with the likely limitations. Also as usual, he loves his long descriptions. That’s probably a plus for more detail-oriented readers, but it was sometimes work for me to get through, and I probably ended up glossing over important points because of it. But this book wasn’t as bad on that front as some of his others, and the good far outweighs the bad.

So definitely worth checking out if it sounds like your kind of story.

9 thoughts on “Service Model”

    1. I actually wondered if he did it intentionally. I’ve noticed British sci-fi authors tend to be a little looser on third person limited, at times shifting into a more omniscient view. That could be what was happening here. But the overall effect, at least for me, was the same.


  1. It’s unlikely that a robot would “decide to break in line” unless it’s programmed to do so. Machines don’t decide to do anything. They have triggers for various actions, and the logic path can be sophisticated. But I don’t see why Charles would decide to break in line if other robots have been in line for years. I think the story humanizes the machines too much.


    1. All of the robots stay in line until Charles arrives. A few follow his example, but most stay in line even after he breaks in. So Charles appears to be different. The difference, whether it exists and, if so, its exact nature, is an ongoing question throughout the book.


      1. I have not read the book, but from your description it sounds like the author has no idea how software, robots, or any product is designed and tested. AI is nothing but a technology. There are systems using AI. The systems are designed for specific purposes. If a system does not fit the purpose or does something else, it’s redesigned until it works. AI trained to recognize pictures won’t be able to ride a bicycle or play chess. There are design principles and methodologies, such as DFMEA (design failure mode and effect analysis). I can’t imagine that someone would design a robot to shave a human and not build in any metrics for shave quality and a way to self-correct. A machine that uses a razor blade for shaving? Seriously? Has the author heard the term “poka-yoke”? What could possibly go wrong with that? A robot designed to serve people that does not notice that its client has been dead for years? If the robot is that bad at analyzing the feedback from its actions, it won’t be able to move around. It seems that the author believes that self-driving cars work like a wind-up toy.

        A robot is a system in itself, but there seems to be a whole network of robots that works like a human organization. Each decent organization is also a system with a purpose, and it also has a feedback loop called a “quality system” with metrics, performance indicators, and constant feedback analysis and adjustment. Again, a line of robots that does not move for years is an epic organizational failure. The Social Security Administration can take many months or years to process a case, but every case eventually gets processed.

        Every technology is known to kill people: railroads, airplanes, bicycles, Segways, cars, electricity, phone batteries exploding, Tesla autopilot not recognizing a white semi-truck making a left turn across the car’s path, a Boeing landing system having an uncalibrated ground level. Each such case is rigorously investigated for root causes, design flaws, manufacturing defects, etc., and those are fixed so that the failure never happens again.

        Sure, any system can fail. There are failures in societies, cars, and human bodies. These failures can cause the system to collapse. It has nothing to do with AI per se but rather with the system design and its ability to self-adjust. The book describes a poorly designed system that predictably failed because of multiple very obvious design flaws. The problem is not unique to AI at all.


        1. I think if you read the book, you’d find the author acknowledges your view more than you might expect. But overall, like a lot of sci-fi, the story is a thought experiment, so it probes the boundaries of what may or may not be possible with AI, at least in the near-ish future.

          That said, no book is for everyone, and I’m not sure this one would be your cup of tea.


          1. Yeah. I’m an engineer by trade. I test circuit reliability. I also used to work with customer return failure analysis and software quality testing. We just had an ISO9001 audit a few months ago at work. It’s my job to make sure that the shit described in this book does not happen. The effort to create a product that “works in principle” or “can work” is a fraction of the effort needed to make sure that the product works at the scale of a major brand’s mass production. Testing and qualification often take longer than product design. There is a term “Design for Reliability” (DFR). Designers are required to anticipate everything that can possibly go wrong and make sure the product can handle it. The same applies to organizations. Reading a book describing a robot that accidentally slits a human’s throat with a razor blade, discovers it only after several years, is casually “told” by its control unit to go somewhere for decommissioning but “decides” to run away instead might be mildly disturbing to me. Neither the organization that designed Charles nor the Central Control system would pass a basic ISO9001 audit. 🙂


    1. Now that you mention it, I don’t know if I would have picked up this book if it hadn’t been by an author whose work was already well known to me. But if I didn’t know Tchaikovsky’s stuff already, this would be a good intro.

