
Running Before You Can Walk: The Hidden Cost of Premature AI in Customer Experience.

  • Writer: Jude Temianka
  • Nov 11
  • 5 min read

Updated: Nov 12

The pressure on business leaders today is immense. The board demands a rapid return on investment (ROI) from digital transformation, and the siren song of Generative AI promises salvation—namely, slashing support costs by replacing human agents with scalable, 24/7 automation.


This often leads to a mindset of "automate everything now."


However, rushing AI into customer experience (CX), especially in high-context, high-risk environments like financial services, is the corporate equivalent of building a skyscraper on a sandcastle. It’s a strategic misstep that trades short-term efficiency for massive long-term risk and a devastating hidden cost.


The real goal of support transformation is not full automation; it is achieving Deflection Through Education and proving Capability Maturity.


[Image: a pair of headphones next to a laptop in a support centre.]


The True Cost of Context: Beyond the Token Count


When a finance executive or even a product manager calculates the cost of an AI-powered customer support flow, the discussion often starts and ends with basic token costs (input and output). But there's a critical flaw in this thinking: for complex, high-context situations, the cost quickly skyrockets due to context reabsorption (something I like to call "context creep").


Simple chatbots for basic FAQs (e.g., "What is the store opening time?") are cheap. However, when an AI system needs to handle a complicated, multi-stage interaction—such as a customer challenging an unexpected fee or reporting a suspected fraud event—the required context window explodes.


  • The model must hold the entire conversation history.

  • It must pull in the customer's transaction logs via API.

  • It must re-read all associated policy documents and compliance rules.
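
As a rough illustration of how that context accumulates, here is a minimal sketch in Python. The helper calls fetch_transactions and load_policy_documents are hypothetical stand-ins for your own transaction API and policy store, and the token estimate is deliberately crude; a real system would use the provider's tokenizer.

```python
# Minimal sketch of how the context for a single fraud-report turn gets assembled.
# fetch_transactions() and load_policy_documents() are hypothetical stand-ins for
# your own transaction API and policy store; the token estimate is deliberately crude.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic; use your provider's tokenizer in practice

def build_context(conversation_history: list[str],
                  customer_id: str,
                  fetch_transactions,
                  load_policy_documents) -> tuple[str, int]:
    """Assemble everything the model must 're-read' before answering one turn."""
    transactions = fetch_transactions(customer_id, months=6)            # pulled via API
    policies = load_policy_documents(["fraud_policy", "fee_disputes"])  # compliance rules

    context = "\n\n".join([
        "\n".join(conversation_history),  # the entire conversation so far
        "\n".join(transactions),          # six months of transaction history
        "\n".join(policies),              # associated policy and compliance documents
    ])
    return context, estimate_tokens(context)
```

In a typical stateless chat API, this whole bundle is re-sent on every turn, so a ten-turn investigation pays that input price ten times over. That is the creep.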


A simple query that was estimated to cost 1 million tokens can quickly become a 5 million token interaction, forcing the use of the latest, most powerful (and most expensive) Large Language Model (LLM) rather than a cheap, low-context nano- or mini-class model.


In these scenarios, the cost of AI becomes unpredictable, and the bottom line can soar, utterly destroying the promised ROI.


Here’s why:

  • Initial Report. Required context/action: the AI must categorise the intent and pull in the customer's identity and basic account status. Token impact: medium-to-high input.

  • Investigative Phase. Required context/action: the AI must call the Transaction API, pull 6 months of transaction history, cross-reference merchant details, and load the bank's fraud policy rules. Token impact: massive input, because all of this data is loaded into the context window for the AI to "read".

  • Resolution/Escalation. Required context/action: the AI must generate a legally compliant response, draft the next steps (e.g., a card freeze), and generate a clean ticket summary for a human agent. Token impact: high input (summary) plus high output (response).
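
To see how this can translate into money, here is a back-of-envelope sketch. The per-stage token counts and the per-million-token prices are illustrative assumptions for the sake of the example, not quoted rates from any provider.

```python
# Illustrative cost model for the three stages above.
# Token counts and prices are assumptions, not real vendor pricing.

PRICE_PER_M_INPUT = 10.00   # assumed USD per 1M input tokens on a frontier model
PRICE_PER_M_OUTPUT = 30.00  # assumed USD per 1M output tokens

stages = {
    #                      (input_tokens, output_tokens) per interaction
    "initial_report":      (8_000,    500),
    "investigative_phase": (60_000, 1_500),  # history + 6 months of transactions + policy docs
    "resolution":          (20_000, 2_000),  # compliant response + ticket summary
}

total_in = sum(i for i, _ in stages.values())
total_out = sum(o for _, o in stages.values())
cost = total_in / 1e6 * PRICE_PER_M_INPUT + total_out / 1e6 * PRICE_PER_M_OUTPUT

print(f"Tokens per interaction: {total_in + total_out:,}")
print(f"Cost per interaction:   ${cost:.2f}")
print(f"Cost at 100k interactions/month: ${cost * 100_000:,.0f}")
```

Even with these modest assumptions the "simple" flow lands around a dollar per interaction, and every extra turn of re-sent context pushes it higher.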


Another Core Mistake: Skipping Capability Maturity


Years ago, a well-known fintech attempted to address its support load by focusing on self-service. Its initial, smart move was deflection through education: a Help Hub of self-service articles to handle low-risk, process-driven queries (e.g., "How to reset your password", "How to activate a new card"). Human agents were reserved for high-sensitivity issues like declined payments or suspicious login reports.


However, later regulatory pressures, driven by continuous failures to meet compliance standards and to handle high-risk use cases like account closures appropriately, showed the danger of scaling without capability. This is the ultimate example of running before you can walk.


The other day, someone asked me: 

“What new capabilities do companies need to introduce to be AI support-ready?”


The truth is, it was a tricky question to answer because, in my experience, medium-to-large digital-first businesses often already have the talent: engineers, data scientists, technical product managers, service designers, technical writers, researchers, legal advisors, and industry-specific support specialists.


The gap isn't so much the people, but rather the capability maturity. What many companies lack is a mature framework to handle:


  • Regulated AI Build: The capability to build, test, and audit AI models that meet the high compliance standards required for data privacy and decision-making in financial services.

  • Data Synthesis without Bias: The knowledge of how to structure data collection. You need to segment and synthesise data with variance, not just volume, in mind: too much similar data creates bias (e.g., flagging all overseas income as high risk), while too much variance creates useless training material (see the sketch after this list).

  • Protocol & Guardrails: Establishing clear, secure protocols that isolate high-risk functions. As the Air Canada chatbot ruling demonstrated, businesses are legally liable for their AI's misinformation.
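
As a rough illustration of the "variance, not just volume" point above, here is a minimal stratified-sampling sketch. The segment key and the per-segment cap are illustrative assumptions; the idea is simply that no single customer profile should dominate the training set.

```python
import random
from collections import defaultdict

# Cap each segment's contribution so that one over-represented profile
# (e.g. overseas income) cannot dominate the training data and skew the model.

def stratified_sample(records: list[dict], segment_key: str, per_segment_cap: int) -> list[dict]:
    by_segment = defaultdict(list)
    for record in records:
        by_segment[record[segment_key]].append(record)

    sample = []
    for items in by_segment.values():
        random.shuffle(items)
        sample.extend(items[:per_segment_cap])  # equal-ish representation per segment
    return sample

# Hypothetical usage: at most 500 examples per income profile, however many exist in raw data.
# training_set = stratified_sample(raw_cases, segment_key="income_profile", per_segment_cap=500)
```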


To start the process of tackling this capability gap, your organisation can conduct a formal, cross-functional AI Maturity Assessment to identify blind spots in data governance, not just the technology stack. Many leading consulting and technology firms offer comprehensive, free online self-assessments (search for 'Contact Centre AI Maturity' or 'Enterprise AI Capability Assessment').

Concurrently, establish a high-risk use case inventory and map every decision point to a human-in-the-loop guardrail before deploying anything customer-facing. 
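
A minimal sketch of what such an inventory could look like in practice follows; the use cases, risk tiers, and guardrail names are illustrative assumptions, not a definitive taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Guardrail(Enum):
    FULL_AUTOMATION = "ai_only"          # low-risk, process-driven queries
    AI_DRAFT_HUMAN_APPROVE = "ai_draft"  # AI prepares, a human signs off
    HUMAN_ONLY = "human_only"            # AI may summarise, never decide or execute

@dataclass
class UseCase:
    name: str
    risk_tier: str        # e.g. "low", "high", "regulated"
    guardrail: Guardrail

# Illustrative inventory: every customer-facing decision point is mapped before launch.
INVENTORY = [
    UseCase("password_reset_guide", "low",       Guardrail.FULL_AUTOMATION),
    UseCase("fee_dispute",          "high",      Guardrail.AI_DRAFT_HUMAN_APPROVE),
    UseCase("fraud_report",         "regulated", Guardrail.HUMAN_ONLY),
    UseCase("account_closure",      "regulated", Guardrail.HUMAN_ONLY),
]

def guardrail_for(use_case_name: str) -> Guardrail:
    for uc in INVENTORY:
        if uc.name == use_case_name:
            return uc.guardrail
    return Guardrail.HUMAN_ONLY  # default to the safest route for anything unmapped
```

The design choice that matters is the default: anything not explicitly inventoried falls back to a human, not to automation.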



The Indispensable Human: When Risk Demands Empathy


To avoid this costly rush, leaders must internalise a simple rule: The human is the ultimate fail-safe and the only source of genuine relationship growth.


The following use cases, drawn from my professional experience across insurance (flight compensation), financial services, and mobility, cannot yet be entirely and safely offloaded to AI; not because the technology isn't clever, but because the stakes are too high and the emotional context is too critical.


  • Complexity & Variability. Why AI fails (for now): AI struggles when rules are broken, when cases need non-linear investigation, or when policy must be interpreted. Industry example and rationale: flight compensation, where complex multi-segment itineraries and edge-case document review arise; causality analysis is difficult to automate because a disruption may affect only one leg, requiring human-supported cause analysis and oversight.

  • Risk & Compliance. Why AI fails (for now): AI cannot assume legal liability or execute legally complex actions; human verification is the only regulatory fail-safe. Industry example and rationale: financial services, such as locking an account due to fraud or closing an account; security-sensitive scenarios require immediate human verification and execution of a legal step (the policy is published online, but an agent executes it).

  • Emotional Sensitivity. Why AI fails (for now): AI lacks genuine empathy, de-escalation skills, and intuition for critical events. Industry examples and rationale: mobility, such as roadside assistance or emergency support, which is time-critical and safety-sensitive and must route directly to trained human personnel, even when AI is used to trigger an incident report; and financial services, such as a suspicious login or fraud report.

  • Bias & Fraud Nuance. Why AI fails (for now): AI models are easily skewed by biased data; human intuition is still needed to handle exceptions and poor-quality input. Industry example and rationale: insurance, such as identity verification (passport/ID checks, naming anomalies) in hard-to-read and/or edge cases, which requires human review due to extensive document diversity, poor image quality, and regulatory risk, even when supported by AI confidence scoring.

The critical insight from these frameworks is that even tasks currently supported by AI-powered confidence scoring still require human review.


The human remains the final, critical check for uncertainty, risk, and emotional context.
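
As a sketch of what "AI-assisted but human-reviewed" can mean in code, here is a minimal confidence-gated router. The classify function is a hypothetical model call returning a label and a confidence score between 0 and 1, and the threshold and label names are illustrative assumptions.

```python
# Confidence-gated routing: the AI may assist, but uncertainty, high risk,
# or emotional context always falls through to a person.

HIGH_RISK_LABELS = {"fraud_report", "account_closure", "suspicious_login", "roadside_emergency"}
CONFIDENCE_FLOOR = 0.90  # illustrative; tune against audited outcomes, not vendor defaults

def route(message: str, classify) -> str:
    label, confidence = classify(message)  # hypothetical model call: (label, confidence 0..1)

    if label in HIGH_RISK_LABELS:
        return "human_agent"          # risk & compliance: AI never decides or executes
    if confidence < CONFIDENCE_FLOOR:
        return "human_review_queue"   # uncertainty: a person makes the call
    return "ai_assisted_reply"        # low-risk, high-confidence: AI drafts, humans audit
```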


The Platform for Relationship Growth


This may be controversial, but I believe forward-thinking leaders should view the support centre not as a cost centre, but as a platform for relationship growth, especially as access to support (and support content) becomes increasingly democratised across product and omnichannel experiences. AI's role should be to handle the predictable, transactional cases (like status tracking or automated confirmations), freeing human experts to focus on the high-context, high-value, high-empathy scenarios that build true loyalty and increase Customer Lifetime Value (CLV).


By strategically pacing your AI rollout, proving your capability maturity, and respecting the hidden costs of complexity, you stop running aimlessly and start walking with purpose. The result is a system that not only saves money, but fundamentally strengthens customer relationships—a far better metric for long-term success.



What’s the most complex, high-stakes support issue your team is currently being pressured to automate? Let’s talk about the missing capability maturity.



