The Right Approach to AI Concierges
I’ve taken some time to figure out how best to continue writing about this topic and sharing our expanded work in AI for consumer brands. Between three separate AI agent projects and pondering how to chronicle them for our readers and clients, a good amount of time has passed since my initial reveal. As for which platform to use, my decision for the foreseeable future is to keep publishing on our blogging platform right here and rely on my LinkedIn account to extend distribution. The reason is simple: in contrast to the wide range of other social media channels, LinkedIn continues to provide the best opportunity to interact with clients, prospects, and colleagues because, well, it’s a professional network. And our C[IQ] blog is established; I’d be starting over on either Substack or Medium.
So, I pick up the conversation here with some comments about the right (and wrong) way to leverage the rise of AI to extend your consumer engagement and CRM strategy. In coming posts I may dive a bit deeper into why there are real challenges in simply adopting a large language model (LLM) from one of the AI Tech Titans and building on top of it, and why a more measured, particular way of building natural language agents pays off.
Restating the Opportunity
Today, C[IQ] is involved in the design and development of three AI “Service Agents” (let’s just refer to them as “service agents” from here on, for simplicity’s sake). As I wrote last month, while they share common architectural features, they are distinct in their service objectives:
An agent for concierge services; highly personalized, smart, 1:1 consumer relationships; always on 24x7, tirelessly able to assist with any product questions and all aspects of online commerce; continuously growing smarter about the customer’s needs; and (the best part): infinitely scalable.
An agent for assisting athletic training in a specific sport, to help a user train toward her peak potential while limiting the risk of injury; the agent continuously “learns” from every aspect of incoming data to recommend training adjustments where indicated, to avoid over-training or, worse, injury.
An agent for assisting citizen engagement with government, helping its users navigate government processes to ensure they can benefit from civic services in a legally compliant manner.
To be sure, of the three, the last is a bit removed from our wheelhouse of consumer marketing. Yet because the challenges of building and sustaining consumer relationships apply even in government bureaucracies (where citizens are really the government’s “consumers”), our client convinced us that we had real value to offer. In fact, we discovered that is truly the case, and that all government services should think “consumer-centric.”
The other two are in our wheelhouse, especially the first. The athletic training service agent is still very much about building and sustaining engaging relationships, and it falls into a sweet spot from my decades spent in sports apparel and footwear marketing.
In my last post you surely intuited that I have some reservations about the AI gold rush. I do, but they apply primarily to the big infrastructure plays. At the retail level, while there are right and wrong ways to go about it, I remain bullish, and am growing more so, on the opportunity to apply AI in delivering more sustainable (and profitable) consumer relationships. That is where my attention will be focused, although from time to time you might read me rant about the hype and pitfalls of some runaway AI-everything mentalities.
How To Do Retail AI Service Agents Right
That brings me back to the point this time: the best approach. Let’s start with the not-best approach.
The not-best approach is simply using an external LLM from one of the big AI Tech Titans (OpenAI, Google, Microsoft, etc.). Doing so means surrendering your consumer data to that LLM, as well as accepting the risk that unfavorable information about your brand, products, or services, or even a competitor’s offerings, may creep into the conversation.
What is required (and the right way to build service agents) is to not rely entirely on an LLM, but rather to build a knowledge base and then connect it to a small language model (also known as an efficient language model, or ELM) strictly for the purpose of facilitating natural language conversations.
You see, the real issue is not about relying on some enormous LLM to determine the right things to say. The right things to say should be restricted to the knowledge base you provide for the service agent to serve your brand. The language model is then used only to phrase those answers in natural, conversational language. With this approach, you are addressing the real issue: delivering successful personalized conversations with an agent rather than an army of humans.
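To make that division of labor concrete, here is a minimal sketch in Python. The passages, brand name, and model stub are illustrative assumptions, not any specific vendor’s API; the point is that the retrieved knowledge decides what may be said, while the (small) language model only decides how to say it.

```python
# Sketch: constrain the language model to brand-approved knowledge.
# Passage text, the brand name, and answer_with_small_model are
# illustrative placeholders, not a specific vendor's API.

def build_constrained_prompt(question: str, passages: list[str], brand: str) -> str:
    """Assemble a prompt that limits answers to retrieved brand content."""
    facts = "\n".join(f"- {p}" for p in passages)
    return (
        f"You are the {brand} service agent.\n"
        "Answer ONLY using the facts below. If the answer is not covered, "
        "say you will connect the customer with the care team.\n"
        "Never mention competitors or speculate about the brand.\n\n"
        f"Facts:\n{facts}\n\n"
        f"Customer question: {question}\n"
        "Answer in a friendly, conversational tone:"
    )

def answer_with_small_model(prompt: str) -> str:
    """Placeholder for a call to your self-hosted small language model."""
    raise NotImplementedError("wire this to your hosted model endpoint")

# Usage: the knowledge base decides WHAT is said; the model decides HOW.
passages = [
    "The TrailRunner 2 jacket is machine washable on cold; hang dry.",
    "Returns are free within 60 days with the original receipt.",
]
prompt = build_constrained_prompt("How do I wash my jacket?", passages, "Acme Outdoors")
```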
Consumer Satisfaction Lies in the Quality and Utility of the Conversation
The caliber of personalization matters far more to consumer satisfaction than the raw breadth of the underlying model; we well know it fosters loyalty and transforms your consumer’s brand engagement. So, this means using AI for the right aspect of the service agent, not simply as a panacea for the whole experience. Imagine how impactful your service agent can be if you can create a highly personalized shopping experience. Your service agent could suggest products to each consumer based on past purchases, browsing history, and even current viewing trends. It can even offer up-sells with recommended products, provide additional information on product care or product complements, and more. All of this, however, is driven by the extent of your internal knowledge base, not the extent of an LLM.
Similarly, you could use your service agent to bring up a consumer’s product usage history, or to provide personalized troubleshooting instructions based on past issues recorded in your CRM database. The agent can route the query to specialized support paths based on product type or issue severity, using the knowledge base to guide the consumer through a resolution.
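As a rough illustration of that routing idea, here is a sketch using made-up product types, severity levels, and CRM fields; your own categories and systems will certainly differ.

```python
# Sketch: route a support query using product type, issue severity,
# and the consumer's CRM history. Field names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SupportQuery:
    consumer_id: str
    product_type: str                                     # e.g. "footwear", "electronics"
    severity: str                                         # "low", "medium", "high"
    past_issues: list[str] = field(default_factory=list)  # pulled from the CRM

def route(query: SupportQuery) -> str:
    """Pick a specialized support path; escalate severe or repeat issues."""
    if query.severity == "high" or len(query.past_issues) >= 3:
        return "human_escalation"           # hand off to the care team
    if query.product_type == "electronics":
        return "kb_troubleshooting_flow"    # guided fix from the knowledge base
    return "kb_self_service_flow"           # standard knowledge-base answer

query = SupportQuery(
    consumer_id="C-1042",
    product_type="electronics",
    severity="medium",
    past_issues=["battery drain"],
)
print(route(query))  # -> kb_troubleshooting_flow
```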
This type of AI-powered conversational capability is well within reach. The key is the knowledge base, where you load in everything from your entire product catalog to the full contents of your website(s), FAQs, and help articles: literally any content you can digitize. Using training tools, your service agent, trained on (and only on) your knowledge base, will be able to generate responses that feel like an extension of your human customer care team. And as we’ll discuss in coming posts, from here your service agent can become increasingly interactive and autonomous in operation.
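As a simple illustration of what “loading the knowledge base” can look like, the sketch below chunks content from a few hypothetical sources and tags each passage with where it came from; a production system would layer embeddings and a vector store on top of something like this.

```python
# Sketch: fold catalog entries, FAQs, and web pages into one knowledge base.
# Source names and the chunk size are illustrative choices, not requirements.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a long document into smaller passages for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_knowledge_base(sources: dict[str, list[str]]) -> list[dict]:
    """Return passages tagged with their source (catalog, faq, web, ...)."""
    kb = []
    for source_name, documents in sources.items():
        for doc in documents:
            for passage in chunk(doc):
                kb.append({"source": source_name, "text": passage})
    return kb

kb = build_knowledge_base({
    "catalog": ["TrailRunner 2 jacket: waterproof shell, three colorways ..."],
    "faq": ["Returns are free within 60 days of purchase ..."],
    "web": ["Care guide: wash cold, hang dry, re-proof seasonally ..."],
})
print(len(kb), "passages indexed")
```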
Towards Autonomous Interaction
I do want to make a couple of comments here about agent autonomy. Complete automation could make the biggest difference to companies’ bottom lines. According to a recent article in Axios, full autonomy can “boost productivity gains from simply making existing human agents 10-20% more efficient to independently handling 70% of cases.”
So, giving more autonomy to AI service agents, that is, letting a service agent act on behalf of your consumer, suggests tantalizing benefits, along with ethical dilemmas we’re only beginning to ponder. This is an important consideration because, as AI evolves from merely conversing with us to actually doing things for us, its potential benefits and harms could multiply, and fast.
AI service assistants will become true “assistants” or “concierges” when they can plan and execute sequences of actions on behalf of a consumer, in line with the consumer’s expectations. Of course, working on a consumer’s behalf requires representing the consumer’s values and interests, as well as abiding by societal norms and standards. That might seem obvious, but it’s important to build into the design of these service agents; I speak from direct experience.
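One crude way to express that design principle in code: every step the agent plans on a consumer’s behalf passes through a policy check, and anything consequential (a purchase, a data share) waits for the consumer’s explicit confirmation. The action types and rules below are invented purely for illustration.

```python
# Sketch: an agent's planned actions gated by consumer-aligned policy.
# Action names and the consent rule are hypothetical design choices.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str        # e.g. "lookup", "add_to_cart", "purchase", "share_data"
    detail: str

REQUIRES_CONSENT = {"purchase", "share_data"}   # consequential steps

def execute_plan(plan: list[Action], consumer_approves: Callable[[Action], bool]) -> list[str]:
    """Run a planned sequence, pausing for consent on consequential steps."""
    log = []
    for action in plan:
        if action.kind in REQUIRES_CONSENT and not consumer_approves(action):
            log.append(f"skipped {action.kind}: consumer declined")
            continue
        log.append(f"executed {action.kind}: {action.detail}")
    return log

plan = [
    Action("lookup", "find replacement insoles for TrailRunner 2"),
    Action("add_to_cart", "insoles, size 9"),
    Action("purchase", "checkout with saved payment method"),
]
print(execute_plan(plan, consumer_approves=lambda action: False))
```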
Let me offer a simple example of a subtler risk. As AI service agents become more human-like and personalized, they will become more helpful by engendering an “emotive connection” with the consumer (e.g., “Hmm, it speaks my language and ‘gets’ me; I trust its recommendations”). However, these agents can also make people vulnerable to inappropriate influence. That introduces new issues around trust, privacy, and something called “anthropomorphizing AI.”
These are deeper issues for another post, another time. However, as we move toward interactive service agents, we need to bear in mind that the goals and objectives of the service agent must remain well aligned with the goals and objectives of the brand — your brand — that the service agent represents.