Updated November 20, 2025
AI now sits across the digital touchpoints that shape brand perception, from chatbots to ad-campaign imagery. Before you scale your next AI use case, it's worth understanding where customer sentiment actually stands.
In June 2023, Marvel launched Secret Invasion with AI-generated opening credits. But it didn't go over well with the public. Artists criticized the move, calling it tone-deaf during a period of job insecurity for creatives. The studio said the choice fit the series' theme. However, the backlash drowned out that explanation and hijacked the launch-week conversation. Many people remain uneasy about brands using generative AI.
AI now powers customer service, media production, and analytics. However, not every customer is pleased with it. In fact, 33% of consumers in a Clutch survey said AI worsens their perception of a brand, while only 16% said it improves it.
This article explains how AI enhances the customer experience and where it crosses into territory that undermines trust and authenticity. You will see practical guardrails and examples, so your team can use AI without undercutting the brand you worked to build.
Most marketing teams now use AI across various workflows. A 2024 study by the Nuremberg Institute for Market Decisions found that 100% of the 600 marketers surveyed use AI in their activities, from asset creation to media optimization.
Customer service departments were among the first to adopt AI, as the efficiency gains were immediately apparent. Yet broad consumer polling still shows mixed feelings: 45% of U.S. adults dislike the AI chatbot experience. That level of resistance shows that efficiency alone does not guarantee a positive customer experience.
Content creation teams adopted AI into their daily work just as quickly. They use it to resize product photos, create ad variations, summarize reviews, and draft emails faster.
At the same time, Deloitte's 2025 Connected Consumer study reports that 53% of surveyed consumers already test or regularly use generative AI in daily life. People now encounter AI-generated copy, images, and videos well beyond brand campaigns. This makes AI feel more normal. It also means people notice when AI-generated content from a brand feels cheap or untrustworthy. To maintain trust, brands must be transparent when AI is involved and ensure that the output meets the same quality standards as human work.
Targeting strategies and business optimization also benefit from AI adoption. For example, AI tools can scan large sets of customer data to identify valuable audience groups based on past revenue or conversion rates. Teams can then adjust ad spend so that more budget flows to the ads that perform best. S&P Global reports that 60% of organizations investing in AI have now implemented generative AI.
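The spend-adjustment idea above can be reduced to simple arithmetic: split the budget in proportion to each ad's conversion efficiency. The sketch below is illustrative only; the function name, ad IDs, and numbers are assumptions, not the API of any ad platform.

```python
# Illustrative sketch: shift ad budget toward better-converting ads.
# All names and numbers here are hypothetical.

def reallocate_budget(total_budget: float,
                      conversions_per_dollar: dict[str, float]) -> dict[str, float]:
    """Split a budget across ads in proportion to each ad's conversion efficiency."""
    total_score = sum(conversions_per_dollar.values())
    return {
        ad: total_budget * score / total_score
        for ad, score in conversions_per_dollar.items()
    }

# ad_b converts twice as well as ad_a, so it receives roughly twice the budget.
plan = reallocate_budget(9000, {"ad_a": 0.01, "ad_b": 0.02})
print(plan)
```

A real system would add floors and caps (for example, never starving an ad of test traffic), but the proportional core stays the same.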
AI scales speed and consistency. It automates repetitive tasks and highlights key findings, allowing teams to focus on informed decision-making. Where it goes wrong is when speed beats judgment or when customers feel tricked.
Many customers see value in AI when it saves time or clarifies options. Resistance spikes when AI operates in the background or supplants human judgment in critical moments. Let's look at a few key points generating friction.
The fix, however, is not “less AI,” because AI has obvious benefits. Honest AI disclosure and better guardrails are what lower resistance.
AI pretending to be a person breaks trust. If a chatbot writes “Hi, I am Sarah from support,” and then fails to recognize basic context, the mismatch damages credibility. Customers expect clarity about who or what is responding and a quick handoff when the bot hits its limits.
AI-generated content, without oversight, also damages credibility.
Clutch’s September 2025 study found that 57% of consumers were unable to correctly identify AI-generated photos, despite 66% feeling confident beforehand. That insight cuts both ways. It means that AI-generated visuals often pass as real; yet, when customers later learn that a photo was synthetic and not disclosed, they might feel misled.
Overpersonalization is another risk area. It occurs when a brand uses excessive data, making messages feel intrusive rather than helpful. AI can make this more likely by presenting highly targeted offers to each individual. In one survey, 46% of consumers said tailored promotions feel "creepy."
Deepfakes are another problem. Bad actors can misuse AI tools to impersonate others in scams. For example, the CEO of WPP was targeted with a deepfake voice and video in 2024 to trick a colleague into a fake business deal. Brand safety teams should plan for risks arising from such fake content.
Even when AI supports a goal that customers agree with, the way you use it can still backfire. For example, Levi’s drew criticism in 2023 for its plan to expand representation by adding AI-generated models rather than booking more human models with diverse backgrounds. The company later stated that human models would remain central, yet the backlash had already begun.
The goal is simple. Use AI in the parts of your digital experience that represent your brand. Done well, this helps people get answers faster or compare options more easily. The following practices focus on using AI in ways that support your brand promise and protect trust.

Treat disclosure as part of your brand messaging. Customers don't want surprises when your assets are AI-generated, so use simple labels on AI chatbots and photos. Offer an opt-out where feasible, such as "Talk to a person" in a chatbot. Or provide a setting that lets users turn certain AI features off.
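Disclosure labels stay consistent when they are applied by one shared helper rather than added ad hoc per campaign. The sketch below shows the idea; the field names and label text are assumptions for illustration, not a standard.

```python
# Sketch of a consistent disclosure label for AI-assisted assets.
# The Asset fields and the label wording are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Asset:
    title: str
    ai_generated: bool

def caption(asset: Asset) -> str:
    """Append a plain-language disclosure to anything AI had a hand in."""
    label = " (created with AI)" if asset.ai_generated else ""
    return asset.title + label

print(caption(Asset("Summer campaign hero image", ai_generated=True)))
```

Centralizing the label means a policy change (new wording, new legal requirement) happens in one place instead of across every team's templates.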
Such transparency can be a brand asset. Dove publicly pledged not to use AI to represent real women in its advertising and paired that stance with a creative playbook. The brand's transparency regarding its AI use policy received positive media coverage.
Adopt and maintain a consistent AI disclosure policy. Always label synthetic assets and outline the review process across the organization. Also, share that policy publicly so it stands as a commitment rather than a case-by-case exception.
AI should support your team, not replace it. For example, in customer service, some tasks are still better handled by humans.
A mix of AI and human touch can be the answer to resolving customer pushback. The bot can answer simple questions and gather basic details. When a request is complex, the conversation can move to a person.
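The handoff rule described above can be made explicit in code: the bot keeps simple, confident conversations and escalates everything else. The intents, confidence threshold, and turn limit below are illustrative assumptions, not a production routing policy.

```python
# Minimal sketch of a bot-to-human handoff rule.
# Intents and thresholds are hypothetical examples.

SIMPLE_INTENTS = {"order_status", "store_hours", "reset_password"}

def route(intent: str, bot_confidence: float, failed_turns: int) -> str:
    """Decide whether the bot answers or a person takes over."""
    if intent not in SIMPLE_INTENTS:
        return "human"  # complex request: go straight to a person
    if bot_confidence < 0.7 or failed_turns >= 2:
        return "human"  # the bot is struggling: escalate
    return "bot"

print(route("order_status", 0.9, 0))      # simple, confident: bot handles it
print(route("billing_dispute", 0.9, 0))   # complex issue: a person takes over
```

The key design choice is that escalation is the default: the bot must qualify to keep the conversation, rather than the customer having to fight to reach a human.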
For creative work, treat AI output as a draft. But keep humans in charge of final edits and approvals.
Make quality checks part of production. Build a simple workflow for AI outputs that flags risky claims and legal issues. When your systems summarize or answer questions, test accuracy against a known source of truth and track error rates. If the model cites a data point, click through and confirm it. If the content references real people, verify the details. Internal checklists reduce the most common failure modes: flag risky claims, confirm cited data, and verify references to real people before anything ships.
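The claim-flagging step of such a workflow can be a small automated gate that routes drafts to human review. The patterns below are toy examples, not a real compliance rule set.

```python
import re

# Illustrative pre-publication check for AI-drafted copy.
# The risky-claim patterns are hypothetical examples only.

RISKY_CLAIM_PATTERNS = [
    r"\bguaranteed\b",
    r"\b100% (safe|effective)\b",
    r"\bclinically proven\b",
]

def flag_risky_claims(text: str) -> list[str]:
    """Return the risky-claim patterns found in a draft so a human can review them."""
    return [p for p in RISKY_CLAIM_PATTERNS
            if re.search(p, text, re.IGNORECASE)]

draft = "Our serum is clinically proven and guaranteed to work."
flags = flag_risky_claims(draft)
print(flags)  # two patterns match, so the draft is routed for human review
```

A gate like this does not replace legal review; it simply guarantees the obvious cases never skip it.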
You can also test how AI use affects perception across your ads. If suspected AI content around your ads drags down conversion, consider reducing spend or tightening standards for where your AI-generated content appears.
Customers want help, not surveillance. Keep personalization to a minimum and explain why someone sees a recommendation. Also, offer controls to turn features off.
Transparent data practices show what your brand stands for and go beyond basic compliance. Publish a plain-language note that answers key questions.
Additionally, review the terms of third-party vendors to confirm training rights and retention policies before handing over creative or customer data.
AI for branding should sound like your team at its best. Create a style guide that covers brand voice and banned phrases. Maintain a library of approved examples and make your voice rules accessible to outsourced agencies as well.
When possible, keep creative roles human-led. Use AI to explore variations, but let writers and designers make the final decisions. The Marvel and Levi's examples show what happens when AI becomes the headline instead of the helper. People sense when a brand uses AI as a shortcut for tasks that a human actually handles better.
AI will likely be part of every marketing stack in the future. Consumer surveys already indicate mainstream adoption, and enterprise adoption continues to grow. At the same time, broader polling shows rising concern: Pew Research Center reports that half of Americans now feel more concerned than excited about AI. Brands will thus have to balance AI's efficiency gains with careful use that does not erode customer trust.
Clutch’s September 2025 survey adds another lens. One-third of consumers say that AI negatively impacts their perception of a brand, while only 16% believe it has a positive effect. Where customers sense clarity and care, AI tools land as helpful. Where they sense shortcuts or spin, they push back.
There are a few practical steps companies can adopt right now to use AI without alienating consumers: disclose AI use clearly, keep humans in the review loop, collect only the data you need, and hold AI output to your brand's voice and quality standards.
When you implement practices like these, AI in branding enhances campaign speed and quality, rather than introducing risk. You can move faster and use your budget more efficiently without losing the human connection that earns loyalty. If your next campaign uses AI for branding, start with one question: Would a reasonable customer feel informed and respected by this choice? If the answer is yes, ship it. If not, revise until it is.
Seeking the ideal partner for AI branding? Compare and choose from a vetted list of top AI consultants listed on the Clutch directory. Clutch helps leaders find the right B2B partners with verified reviews across marketing and creative services.