Is Artificial Intelligence a new stakeholder?

Alicia Vikander and Sonoya Mizuno in Ex Machina (2014)

Gregory Ronczewski is the Director of Product Design at the TeamFit platform - view his Skill Profile

Last week I was invited to a meeting organized by Ibbaka, a pricing software and consulting company with which TeamFit has a close relationship. The goal was to use the Ibbaka process and platform to look at the company from the outside. For me, it was a prelude, because soon Ibbaka will adjust its lens to look at TeamFit.

The critical question on the table was "value." What is the value proposition that Ibbaka offers? To understand the whole ecosystem, the first task is to identify the stakeholders. Pretty soon, on the whiteboard (too bad we did not have a blackboard and chalk), we had listed a number of roles that fit the bill. The thought of including AI (Artificial Intelligence) on this list came to me only later, and I did not share it with the rest of the team, but I am sharing it with you now. Clearly, the brain moves much slower than its synthetic counterpart. Did I just say that I am slow? I guess I did. So, should we include AI in the stakeholders' list? And if the answer is yes, what are the implications?

Obviously, we are not yet at the point where bots make business decisions, conduct negotiations, and sign contracts. But if we pause for a moment and consider how many of the choices we make are informed by the support of an AI, the idea starts to make sense. I saw an estimate prepared by a market research firm that by 2021 the number of installed virtual assistants will top 7.5 billion, more than the world's population. What are the chances that a robot delivers a service you are using? And if we look at the value generated by the offer, we have to consider that this value is artificially created. What does that mean? Should we even care that it is provided by a robot and not a human?

The second point Ibbaka's facilitator asked us to consider was the emotional and economic value drivers for each stakeholder. We started with the emotions, and this is where I think it gets fascinating. When we think about AI as a continuation of personal computing, the path shows without a doubt that the machine is superior at calculating, comparing, and all sorts of data analysis. Connecting economic value drivers with AI support therefore seems natural. What about the emotional value drivers? This is where humans still have the lead. There is, however, another twist. According to many studies of how our brain works, the majority of our decisions are based on emotion. Many of us will disagree; after all, we believe we are rational and do not fall into emotional traps. Unfortunately, the data suggests otherwise. Unless we get our "lizard brain" on board, we will not make a decision based on a cold evaluation of the offer alone. Of course, it is not only black-and-white: emotions and economics both play a role in how we evaluate any proposal. It is interesting, though, that AI may be responsible for influencing our thinking here.

We should also consider speed. The AI is fast, which is why it works so well. Although new developments make bots far more human-like, a lack of real empathy, in human terms, still characterizes the bot. Human emotions are far more complicated and take longer to process, but once that emotional processing is done, a truly memorable experience can be encoded, one that feels authentic and unique. That lack of authenticity is often blamed for why AI support is not widely adopted.

Let's go back to the meeting and the emotional and economic value drivers. The lists are checked against Maslow's Hierarchy of Needs: Basic Functionality, Security, Community, Self-Esteem and, at the top, Self-realization. To me, moving up the pyramid, the bottom layers can be informed quickly by a combination of big data and IoT information processed against the needs of a stakeholder, while the higher layers seem to demand human-only interaction. Let's look at a few aspects. We need to consider a two-way communication process, from humans to machines and from machines to humans. People will want to know why a computer comes up with a confident prediction, and the machine needs to "understand" what the human is asking for and why. Our language, or rather the way we formulate questions, is often confusing, and as with any dialogue, we need to learn how to communicate with the machine. We often hear comparisons between a brain and a computer, yet despite the fact that many of these AIs were built to resemble how (people thought) the brain works, they are actually very different. AI approaches a problem in a manner that humans would never think of; hence Artificial Intelligence, with an emphasis on Intelligence. Some algorithms are hard to explain in terms of human logic, and as a result, trust suffers.
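To make that last point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the features, the weights, and the offer values); it is not Ibbaka's or TeamFit's actual scoring logic. A simple linear score can report a per-feature breakdown, the "why" behind its answer, which is exactly what many more powerful models cannot do:

```python
# A toy linear "value score": each feature contributes weight * value.
# Features and weights are hypothetical, chosen only to mirror the
# economic/emotional value-driver idea discussed above.
WEIGHTS = {
    "price_fit": 0.5,   # hypothetical economic driver
    "security": 0.3,    # hypothetical Maslow-style driver
    "community": 0.2,   # hypothetical emotional driver
}

def score_offer(offer: dict) -> tuple[float, dict]:
    """Return a total score plus a per-feature breakdown a human can read."""
    contributions = {name: WEIGHTS[name] * offer[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score_offer({"price_fit": 0.9, "security": 0.6, "community": 0.4})
print(f"score = {total:.2f}")
for name, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {name:>10}: {value:+.2f}")  # the "why" behind the prediction
```

A black-box model would hand us only the final number. The breakdown is what lets a human argue with the prediction, and that is where trust is won or lost.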

Part of the Ibbaka process is based on comparison. Offers are compared with their alternatives. Stakeholders are compared with each other. Emotional and economic value drivers are compared. To advise on pricing, the full list of alternatives, as well as the end recipients of the service, needs to be considered. Comparison is a powerful action; it can change the way we understand things. Often, when the same object is presented on its own, small nuances go unnoticed, but with the help of an algorithm, a contrasting view generates a new picture, and based on that, we will often reconsider the action plan. I have to admit that comparison is one of the key elements TeamFit is working on in our Competency Model module. Visualizing multiple alternatives will certainly influence stakeholders' decisions, I think on both planes, the emotional and the economic. So again, AI will take a place at the negotiating table. It is a stakeholder, like it or not, and we need to learn how to interact with it.

There is an article in the MIT Technology Review on Duplex, Google's AI assistant. It is interesting to hear the different opinions regarding how it sounds. Some people are fine with an assistant that sounds like a human; others object. Google commented that Duplex will clearly introduce itself as an AI assistant, but is that enough? Does it matter? When a simple task is all I want to accomplish, say, reserving a table at a restaurant, do I care who I am speaking with? Or, if I know that this is a virtual person, perhaps I need to know how to address my request. That sounds like a new skill or behaviour. Right now, all of the people on TeamFit are actual people, we think. In the near future, some of them are likely to be AIs (and we may not even know it). How will the skill profiles of AIs differ from each other and from those of actual human beings?

Lil Miquela is a 3D-generated trendsetter with more than a million followers on Instagram. When you check the feed, some of the photos look very 3D, but some do not. The point here is that Instagram, or any social media channel for that matter, runs on emotions, so even though we complain that AI is not good at this stuff, we follow Lil anyway. I guess I will be adding AI to my list of stakeholders before the next meeting at Ibbaka.
