A guide to creating an Intercom workflow to tag conversations for CSAT analysis

Written by Stevia Putri

Reviewed by Amogh Sarda

Last edited October 28, 2025

Expert Verified

Customer Satisfaction (CSAT) scores are great for a quick pulse check on how your customers feel. But let’s be honest, a happy emoji doesn't tell you the whole story. The real gold is buried in why a customer felt that way, and to get to it, you need to analyze their feedback. That all starts with good conversation tagging.

If you’ve ever tried manually tagging every conversation based on CSAT ratings, you know it’s a soul-crushing task. It takes forever, mistakes happen, and it simply doesn’t work once your support volume starts growing. Automation is really the only way forward if you want clean, consistent data you can actually use to make decisions.

This guide will walk you through setting up a standard Intercom workflow to handle CSAT tagging. But more importantly, we’ll get into the annoying limitations you’ll probably run into and show you a smarter, AI-powered way to get much deeper insights from your customer feedback.

What is an Intercom workflow to tag conversations for CSAT analysis?

An Intercom Workflow is basically a set of automated rules that runs inside the platform. It follows a simple "if this, then that" logic. A specific trigger, like a teammate closing a conversation, kicks off a series of actions you've set up in advance.

A visual representation of the Intercom workflow builder, illustrating how automated rules and actions are configured.

When it comes to customer feedback, the goal of an Intercom workflow to tag conversations for CSAT analysis is to automatically stick the right tags on conversations as soon as a customer leaves a rating.

For example, if a customer gives a 'Bad' rating, the workflow can slap a "CSAT-Negative" tag on it. This makes it super easy for you or a support manager to filter for all the negative interactions. You can then spot patterns, find coaching opportunities, or flag product bugs without having to read every single support ticket.

How to create a basic Intercom workflow to tag conversations for CSAT analysis

While Intercom gives you the tools to build this automation, it’s good to know the standard process so you understand what you’re working with. Here’s a quick look at how it's usually done.

Start with a trigger: A conversation is closed

The most logical place to start a CSAT workflow is with the 'Teammate changes conversation state' trigger. You’d set it to fire when the state changes to 'Closed'. This ensures the survey goes out right after the agent has hopefully solved the customer's problem.

Add the CSAT rating request

Next, you’ll use Intercom’s built-in 'Ask for conversation rating' action. This sends that little three-emoji ('Bad', 'Okay', 'Great') satisfaction survey to the customer in the chat messenger. It’s a simple way for them to give you feedback with just one click.

Use branches to apply tags

After the rating comes in, you use 'Branches' to split the workflow into different paths based on the customer’s answer. The logic here is pretty straightforward and helps you sort feedback on the fly:

  • Path 1: If the Conversation Rating is 'Bad', then add the tag "CSAT-Negative".

  • Path 2: If the Conversation Rating is 'Okay', then add the tag "CSAT-Neutral".

  • Path 3: If the Conversation Rating is 'Great', then add the tag "CSAT-Positive".

Once this is live, you’ll have a basic system for categorizing feedback based on which emoji a customer clicks. Simple enough, right?


```mermaid
graph TD
    A[Start: Conversation Closed] --> B[Send CSAT Survey]
    B --> C[Customer Responds]
    C --> D{Rating is 'Great'?}
    D -- Yes --> E[Apply 'CSAT-Positive' Tag]
    D -- No --> F{Rating is 'Okay'?}
    F -- Yes --> G[Apply 'CSAT-Neutral' Tag]
    F -- No --> H{Rating is 'Bad'?}
    H -- Yes --> I[Apply 'CSAT-Negative' Tag]
    E --> J[End]
    G --> J
    I --> J
```
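The branch logic above boils down to a simple lookup from rating to tag. As a rough illustration (using the tag names from this guide, not Intercom's internal code):

```python
# Map each CSAT rating to the tag the workflow's branches would apply.
# The tag names are the illustrative ones used in this guide.
RATING_TO_TAG = {
    "Bad": "CSAT-Negative",
    "Okay": "CSAT-Neutral",
    "Great": "CSAT-Positive",
}

def tag_for_rating(rating):
    """Return the tag for a rating, or None if the customer never rated."""
    return RATING_TO_TAG.get(rating)

print(tag_for_rating("Bad"))    # CSAT-Negative
print(tag_for_rating("Great"))  # CSAT-Positive
```

This is all a rule-based workflow can see: one of three values, nothing about what the customer actually wrote.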

Where the native Intercom CSAT tagging workflow falls short

Setting up this basic workflow is a nice first step, but teams often discover pretty quickly that it’s not enough to get real, meaningful insights. The problem is that these rule-based workflows are just too rigid to handle the messiness of actual customer conversations.

Rigid rules just don't get the nuance

Here’s the biggest headache: a simple emoji click doesn't have any context. A customer might tap the 'Great' emoji but then add, "The agent was fantastic, but I’m getting tired of having to contact support for this same issue every month."

Your standard Intercom workflow will tag this as "CSAT-Positive" and move on, completely missing the fact that you have a recurring problem and an annoyed customer. It can't read the text for sentiment or specific topics, so you’re left with a positive tag on a conversation that’s actually a red flag.

How the native workflow fails in common situations

As some frustrated users have pointed out in the Intercom Community, Intercom is designed to not send a CSAT survey if a conversation involves multiple teammates. This is an intentional feature to avoid spamming people, but it means your automated tagging workflow won't even run on those tickets. Suddenly, you have big gaps in your data, especially from your most complex support cases.
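Because of that gap, it's worth periodically auditing for closed conversations that never picked up a CSAT tag. A minimal sketch, assuming conversation objects shaped roughly like Intercom's Conversations API responses (nested `tags` list) and the illustrative tag names from this guide:

```python
# Find closed conversations carrying none of the CSAT tags,
# e.g. because the multi-teammate rule suppressed the survey.
CSAT_TAGS = {"CSAT-Negative", "CSAT-Neutral", "CSAT-Positive"}

def untagged_closed(conversations):
    """Return IDs of closed conversations with no CSAT tag applied."""
    missing = []
    for convo in conversations:
        if convo.get("state") != "closed":
            continue
        # Intercom nests tags as {"tags": {"tags": [{"name": ...}, ...]}}
        tags = {t["name"] for t in convo.get("tags", {}).get("tags", [])}
        if not tags & CSAT_TAGS:
            missing.append(convo["id"])
    return missing

sample = [
    {"id": "1", "state": "closed",
     "tags": {"tags": [{"name": "CSAT-Positive"}]}},
    {"id": "2", "state": "closed", "tags": {"tags": []}},  # survey never sent
    {"id": "3", "state": "open", "tags": {"tags": []}},
]
print(untagged_closed(sample))  # ['2']
```

In practice you'd feed this from a paginated pull of recent conversations; the point is simply to measure how big your data gap actually is.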

Complicated workarounds

To do anything slightly more advanced, you usually have to get creative with Conversation Data Attributes (CvDAs). Let's say you want to stop sending CSAT surveys for certain types of tickets, like spam or sales questions. You have to create a custom attribute, get your agents to manually set it on every single one of those conversations, and then build extra branches into your workflow to check for it.
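In code terms, the workaround amounts to checking a manually set custom attribute before deciding to survey. A rough sketch (the `skip_csat` attribute name is hypothetical; Intercom conversations do expose a `custom_attributes` object):

```python
# Gatekeeping logic the extra workflow branch has to replicate:
# skip the survey whenever an agent remembered to flag the conversation.
def should_send_csat(conversation):
    """True unless an agent manually set the hypothetical 'skip_csat' CvDA."""
    attrs = conversation.get("custom_attributes", {})
    return not attrs.get("skip_csat", False)

print(should_send_csat({"custom_attributes": {"skip_csat": True}}))  # False
print(should_send_csat({"custom_attributes": {}}))                   # True
```

Note the failure mode: if an agent forgets to set the attribute, the gate silently does nothing.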

This just adds more manual work for your agents and turns what should be a simple workflow into a tangled mess that’s a pain to maintain. This is exactly the kind of manual configuration that modern AI tools are built to handle automatically, without making you build a maze of rules.

A smarter approach: Using AI

Instead of getting tangled up in rigid rules and clunky workarounds, you can integrate an AI platform that plays nicely with your existing helpdesk. A tool like eesel AI connects directly to Intercom, adding powerful new capabilities without making you switch platforms.

An Intercom ticket view with an AI Copilot integrated into the sidebar, demonstrating a smarter approach to CSAT analysis.
An Intercom ticket view with an AI Copilot integrated into the sidebar, demonstrating a smarter approach to CSAT analysis.

Go beyond simple ratings with AI-powered triage

eesel AI's AI Triage doesn't just look at the emoji a customer clicked. It reads and understands the entire conversation, figuring out the true context, sentiment, and topics being discussed.

This means it can automatically apply much more useful tags like "sentiment:frustrated", "topic:billing-error", or "product-feedback:ui-suggestion". It can catch this stuff even if the customer left a 'Great' rating, making sure you never miss critical feedback hidden in their comments. You get the full picture, not just the emoji.
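To make the difference concrete, here's a deliberately simplified keyword heuristic, not eesel AI's actual model, showing why reading the comment text catches a red flag that an emoji-only rule misses:

```python
# Toy illustration only: a real system uses AI analysis, not keywords.
# The signal phrases below are purely illustrative.
NEGATIVE_SIGNALS = ["tired of", "every month", "still broken", "frustrat"]

def needs_review(rating, comment):
    """True if the conversation deserves a second look despite its rating."""
    text = comment.lower()
    return rating == "Bad" or any(sig in text for sig in NEGATIVE_SIGNALS)

comment = ("The agent was fantastic, but I'm getting tired of having to "
           "contact support for this same issue every month.")
print(needs_review("Great", comment))  # True: 'Great' rating, recurring issue
```

An emoji-only workflow returns "CSAT-Positive" for that conversation and stops; anything that reads the text flags it for follow-up.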

Get set up in minutes, not hours

You can forget about building complicated workflows with dozens of branches and custom attributes. With eesel AI, you just connect your Intercom account, and you're pretty much done. The AI starts learning from your past conversations right away to understand your specific support issues and brand voice. You can get valuable insights from day one without a huge setup project or pulling in developers.

Get actionable insights, not just data dumps

The analytics dashboard in eesel AI does more than just list tagged conversations. It surfaces trends and points out gaps in your knowledge base. For example, it might show you that 25% of your negative CSAT ratings are tied to one confusing feature that doesn't have a good help article. This gives you a clear, data-backed to-do list for making changes that will actually lower your ticket volume and make customers happier.

| Feature | Native Intercom Workflows | eesel AI |
| --- | --- | --- |
| Tagging logic | Rule-based (e.g., IF rating is X) | AI-based (analyzes full conversation text) |
| Setup complexity | High (requires branches, CvDAs) | Low (one-click integration) |
| Contextual nuance | Low (misses context in comments) | High (understands sentiment, topic, intent) |
| Multi-participant chats | Fails to send CSAT survey | Analyzes all conversations regardless |
| Reporting | Basic tag filtering | Actionable insights, knowledge gap analysis |

What does Intercom charge for workflows?

To build the workflows we've been talking about, you'll need access to Intercom's Workflow builder. That feature is only available on their Advanced (starting at $85 per seat, per month) and Expert plans.

On top of that, Intercom's own AI agent, Fin, has a per-resolution pricing model. As you can see on their pricing page, you'll be charged $0.99 for every resolution. This can lead to unpredictable costs that creep up as your support volume grows, making it tough to budget.
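To see how per-resolution pricing scales, a quick back-of-the-envelope calculation (the $0.99 rate is Intercom's published Fin pricing; the monthly volumes are hypothetical):

```python
# Monthly Fin cost at $0.99 per resolution, for a few hypothetical volumes.
PER_RESOLUTION = 0.99

for resolutions in (500, 2_000, 10_000):
    cost = resolutions * PER_RESOLUTION
    print(f"{resolutions:>6} resolutions -> ${cost:,.2f}/month")
```

At 10,000 resolutions a month, that's roughly $9,900 on top of seat costs, and the bill grows every time your AI agent does its job well.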

In contrast, platforms like eesel AI offer transparent, predictable pricing based on interactions, not resolutions. You get a clear, flat rate, so you're not punished for helping more customers.

Moving from basic tagging to deep analysis

Setting up an Intercom workflow to tag conversations for CSAT analysis is a decent starting point for organizing customer feedback. But the limits of rule-based automation mean you’re only scratching the surface. You're seeing the what, but you're completely missing the why.

To really understand the story behind each rating, you need a tool that can analyze the full conversation. By bringing in AI, you can move beyond simple, reactive tagging and start spotting trends, fixing root problems, and creating a customer experience that earns genuinely great feedback.

Unlock the real story behind your CSAT scores

Ready to move beyond just tracking emojis? See how eesel AI's seamless integration with Intercom can fully automate your CSAT analysis and give you the deep insights you need to make your customers happier.

Start a free trial or book a demo today.

Frequently asked questions

What is an Intercom workflow to tag conversations for CSAT analysis?

An Intercom workflow to tag conversations for CSAT analysis is an automated system that applies specific tags to customer support conversations based on the CSAT rating they provide. Its main purpose is to help categorize feedback, making it easier to identify patterns, improve support quality, and uncover product issues.

How do you set up a basic CSAT tagging workflow in Intercom?

The basic steps involve setting a trigger, usually when a conversation is closed, then using Intercom's "Ask for conversation rating" action. Finally, you create "Branches" that apply different tags (e.g., CSAT-Positive, CSAT-Negative) based on the customer's chosen emoji rating.

What are the main limitations of the native workflow?

Key limitations include a lack of contextual understanding from just an emoji, which misses nuance in customer comments. Additionally, these workflows often don't run on conversations involving multiple teammates, leading to data gaps, and require complex workarounds for advanced filtering.

How does AI improve on rule-based CSAT tagging?

AI enhances this process by analyzing the full conversation text to understand true sentiment, topics, and intent, not just the emoji. This allows for more granular and accurate tagging, uncovering hidden insights even in seemingly positive feedback, without complex rule-based setups.

What does it cost to build this in Intercom?

To build a native Intercom workflow to tag conversations for CSAT analysis, you typically need Intercom's Advanced or Expert plans, starting at $85 per seat per month. If using Intercom's AI agent, Fin, there's an additional charge of $0.99 for every resolution, which can lead to unpredictable costs.

Why is it important to look beyond the emoji rating?

Looking beyond the emoji is crucial because a simple rating lacks context; a "Great" rating might still come with critical product feedback in the comments. Analyzing the full conversation helps uncover the "why" behind the rating, identifying root causes, recurring issues, and specific areas for improvement.

Can the workflow miss some conversations entirely?

Yes, a standard Intercom workflow to tag conversations for CSAT analysis might fail to apply tags in conversations involving multiple teammates. This is an intentional feature in Intercom to prevent spamming customers with multiple surveys, but it creates significant data gaps, especially for complex support cases.

Article by Stevia Putri

Stevia Putri is a marketing generalist at eesel AI, where she helps turn powerful AI tools into stories that resonate. She’s driven by curiosity, clarity, and the human side of technology.