
So you’ve rolled out an AI agent like Intercom's Fin. That’s step one. Now comes the hard part: figuring out if it’s actually helping. You need to know how it’s affecting your costs, your team’s workload, and, most importantly, if your customers are happy.
Most AI platforms give you a dashboard packed with charts and percentages. At first glance, it all looks pretty impressive. But often, that data doesn't tell the full story, making it tough to calculate your actual return on investment (ROI) or see where things could be better. You're left asking yourself, "Are we saving money? Are customers getting what they need, or are they just getting frustrated?"
That's what this guide is for. We’re going to walk through Fin AI Reports in a straightforward way. We'll look at what the metrics mean, shine a light on the crucial details they leave out, and show you how to get a more complete and honest picture of how your AI is doing.
What are Intercom's Fin AI Reports?
Intercom Fin is one of the more popular AI agents out there for customer service. The idea is simple: it jumps into customer conversations to answer common questions automatically, which frees up your human agents to handle the trickier stuff.
To show you how it's performing, Intercom gives you a reporting suite called Fin AI Reports. It’s basically the AI's report card. The goal of these reports is to give support managers a quick look at Fin's performance, showing how it’s affecting resolution rates and how customers are reacting to it.
You can find these reports using a pre-built template inside your Intercom workspace. They center on a few key metrics to give you a snapshot of your AI's effectiveness. While it’s a decent starting point, a quick snapshot doesn’t always give you the whole picture.
The key metrics in Fin AI Reports (and what they really mean)
To get anything useful out of a report, you have to know what you’re looking at. Let's break down the main parts of the Fin AI Reports dashboard.
A screenshot of the Intercom CSAT dashboard, which is a key part of Fin AI Reports.
Resolution and deflection rates
These are usually the first numbers everyone looks at. They seem to directly measure whether the AI is doing its main job.
- Resolution Rate: This is the percentage of conversations Fin handles all by itself, from start to finish, without a human ever getting involved. Intercom tracks two kinds: "assumed" resolutions (where the customer just goes quiet after getting an answer) and "confirmed" resolutions (where the customer clicks a button to say their issue is solved).
- Deflection Rate: This one is a bit broader. It includes full resolutions plus any time a customer finds help without needing to chat. For instance, if Fin suggests a help article and the customer clicks it and then leaves, that counts as a deflection. (A quick way to compute both rates is sketched just after this list.)
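To make those two definitions concrete, here's a minimal sketch of how you might compute both rates from exported conversation records. The field names (`resolved_by_fin`, `deflected_to_article`, and so on) are illustrative placeholders, not Intercom's actual export schema:

```python
# Minimal sketch: computing resolution and deflection rates from
# exported conversation records. Field names are illustrative,
# not Intercom's actual export schema.
conversations = [
    {"resolved_by_fin": True,  "deflected_to_article": False},
    {"resolved_by_fin": False, "deflected_to_article": True},
    {"resolved_by_fin": False, "deflected_to_article": False},
    {"resolved_by_fin": False, "deflected_to_article": False},
]

total = len(conversations)
resolutions = sum(c["resolved_by_fin"] for c in conversations)
# Deflection counts full resolutions plus article-click exits.
deflections = resolutions + sum(c["deflected_to_article"] for c in conversations)

print(f"Resolution rate: {resolutions / total:.0%}")  # 25%
print(f"Deflection rate: {deflections / total:.0%}")  # 50%
```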
Customer experience (CX) and CSAT scores
Automation is only a win if your customers don't hate it. That’s where this feedback comes in.
Intercom has a Customer Experience (CX) Score that uses a simple 1-to-5 rating. After chatting with Fin, a customer might get a quick CSAT survey asking them to rate the experience. This gives you direct feedback on how people feel about talking to your AI, which is critical for making sure you aren't trading customer loyalty for a bit more efficiency.
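For reference, CSAT is conventionally reported as the share of "satisfied" responses (ratings of 4 or 5 on a 1-to-5 scale) rather than a raw average. Intercom may weight its CX Score differently, so treat this as the standard formula rather than Intercom's exact method:

```python
# Conventional CSAT: share of ratings that are 4 or 5 on a 1-5 scale.
ratings = [5, 4, 2, 5, 3, 4, 1, 5]

satisfied = sum(1 for r in ratings if r >= 4)
csat = satisfied / len(ratings) * 100
print(f"CSAT: {csat:.1f}%")  # CSAT: 62.5%
```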
Content performance and involvement rates
These metrics give you a peek behind the curtain at what your AI is up to.
- Involvement Rate: This number shows you what percentage of all conversations Fin participated in, even if it didn't solve the problem. It’s a good indicator of how often the AI is being triggered.
- Content Performance Table: This table shows you which of your help articles or knowledge docs Fin is using most often. It also tries to highlight which content leads to the most resolutions, helping you spot your best articles and find ones that might need an update. (A rough version of this tally is sketched after this list.)
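If you wanted to rebuild a rough version of that table yourself from raw conversation logs, a simple tally is enough. The record shape here is assumed for illustration, not an Intercom export format:

```python
from collections import Counter

# Each record: (article the AI cited, whether the conversation resolved).
# Record shape is assumed for illustration, not an Intercom export format.
events = [
    ("Reset your password", True),
    ("Reset your password", True),
    ("Billing FAQ", False),
    ("Billing FAQ", True),
    ("API rate limits", False),
]

uses = Counter(article for article, _ in events)
wins = Counter(article for article, resolved in events if resolved)

for article in uses:
    rate = wins[article] / uses[article]
    print(f"{article}: used {uses[article]}x, resolution rate {rate:.0%}")
```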
The hidden gaps: What your Fin AI Reports aren't telling you
Okay, here's the important bit. While the metrics above give you a basic overview, they leave out some really critical context. If you rely on them alone, you might get a skewed view of how your AI is actually doing.
You can't forecast performance before you go live
One of the biggest hurdles with Intercom Fin is that you only get performance data after the AI is already live and talking to your customers. There's no way to reliably predict its resolution rate or calculate potential ROI beforehand. It can feel like you're flipping a switch and just hoping for the best.
This "launch and learn" method is risky. If the AI doesn't perform well in its first few weeks, it can frustrate customers and erode trust. Imagine if you could run a full simulation on thousands of your past tickets to see exactly how an AI would perform before a single customer ever interacts with it. That would be a much safer way to get started.
You don't know why resolutions fail
The content performance table in Fin AI Reports is a great idea in theory, but it has a frustrating blind spot. It shows you what content the AI used, but it offers almost no insight into why a conversation failed and had to be passed to a human.
Was a key step missing from the article? Was the information outdated? Did the AI just misunderstand the question entirely? The report won't tell you. To figure it out, managers have to manually sift through conversation transcripts, which is a slow process that nobody has time for. This means the same knowledge gaps stick around, and your AI's performance can hit a ceiling.
Siloed and incomplete data
This might be the biggest gap of all. Fin AI Reports can only measure what Fin knows, and Fin only knows what you’ve stored inside Intercom, like your help articles.
But what about all the other places your team keeps important information? Most companies have knowledge spread out everywhere. Your in-depth technical guides might be in Confluence, your team playbooks in Google Docs, and your internal cheat sheets in Notion.
Fin can't see any of that. So, its reports are based on a small, incomplete fraction of your company's collective knowledge. This not only leads to lower resolution rates but also means your reports aren't reflecting the true state of your knowledge base.
Beyond Fin AI Reports: A better approach to AI reporting with eesel AI
These gaps aren't just small annoyances; they point to a reactive way of managing AI. A more effective approach is built on prediction, useful insights, and connected knowledge.
Simulate before you automate for predictable ROI
Instead of launching and hoping for the best, a tool like eesel AI lets you run tests in a powerful simulation mode. You can connect eesel AI to your helpdesk (including Intercom) and run it on thousands of your past tickets in a totally safe environment.
This gives you an accurate, data-backed forecast of your potential resolution rate and cost savings. It helps remove the guesswork from launching an AI, letting you build a business case on your own data before you go live.
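The forecast itself is straightforward arithmetic once a simulation produces a resolution rate. Here's a back-of-envelope version, with every input a placeholder you'd swap for your own numbers:

```python
# Back-of-envelope ROI forecast from a simulated resolution rate.
# All inputs are placeholders; substitute your own figures.
monthly_tickets = 5_000
simulated_resolution_rate = 0.45   # from running the AI over past tickets
cost_per_human_ticket = 6.00       # fully loaded agent cost per ticket, USD

automated = monthly_tickets * simulated_resolution_rate
monthly_savings = automated * cost_per_human_ticket
print(f"~{automated:.0f} tickets automated, ~${monthly_savings:,.0f}/month saved")
# ~2250 tickets automated, ~$13,500/month saved
```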
Get actionable insights that improve your knowledge base
While Fin's reports tell you what happened, eesel AI's analytics are designed to tell you what to do next. The dashboard doesn't just show you resolution rates; it actively hunts for gaps in your knowledge base.
It flags questions the AI couldn't answer, showing you exactly where your documentation is thin. Better yet, it can look at successful conversations handled by your human agents and draft new knowledge base articles based on their answers. This creates a feedback loop that helps your AI get smarter with every customer interaction.
Unify all your knowledge for a complete picture
The biggest shift is that eesel AI is built to connect with your entire knowledge ecosystem. You can plug it into all the places your team keeps information: Confluence, Google Docs, Notion, Slack, and many others.
By learning from all of your company's knowledge, the AI can provide much more accurate answers. And as a result, its reporting gives you a holistic view of your AI's performance and knowledge coverage. You're no longer trying to measure things with one hand tied behind your back.
Comparing pricing: Intercom Fin vs. eesel AI
Predictable costs are also a big deal. Intercom Fin usually charges $0.99 per resolution, on top of your regular Intercom subscription. This model can lead to bills that are hard to predict. If you have a busy month and your AI does really well, your costs go up. It can feel like you're being penalized for automating effectively.
The pricing model for eesel AI works differently. It's based on tiered plans with a set number of AI interactions per month. There are no per-resolution fees, so your costs stay predictable even as your AI handles more and more conversations. This lets you scale up your automation without worrying about surprise bills.
Feature | Intercom Fin | eesel AI
---|---|---
Pricing model | Per-resolution fee ($0.99) plus subscription | Tiered plans with a set number of interactions
Cost predictability | Low (scales with resolutions) | High (fixed monthly/annual cost)
Hidden fees | Costs can spike in high-volume months | No per-resolution fees
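To see how the two models diverge as volume grows, here's a quick comparison. The $0.99 per-resolution fee comes from Fin's pricing as described above; the flat tier price is a hypothetical placeholder, not eesel AI's actual plan pricing:

```python
# Per-resolution vs. flat-tier cost as automation volume grows.
# $0.99/resolution is Fin's fee as cited above; the flat tier price
# is a hypothetical placeholder, not eesel AI's actual pricing.
FIN_FEE_PER_RESOLUTION = 0.99
FLAT_TIER_PRICE = 800  # hypothetical monthly plan price, USD

for resolutions in (500, 1_000, 2_000, 5_000):
    fin_cost = resolutions * FIN_FEE_PER_RESOLUTION
    print(f"{resolutions:>5} resolutions: Fin ~${fin_cost:,.0f} vs flat ${FLAT_TIER_PRICE}")
```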
Move from basic metrics to actionable intelligence
Standard dashboards like Fin AI Reports are a fine place to start. They give you a basic pulse check on your AI. But they often lack the predictive power and deep, actionable insights needed to really improve your support and get the most out of your investment.
Success with AI isn't about having a fancy chart. It's about having a system that lets you test with confidence, shows you exactly where to improve, and works with all of your real-world, scattered knowledge. It’s about turning reporting from a passive glance in the rearview mirror into an active tool for getting better.
Get started with smarter AI reporting
Ready to see what your real automation potential is? You can get a good idea in the next ten minutes.
Sign up for a free eesel AI trial and run a simulation on your past tickets. You'll get a data-backed ROI forecast that shows you exactly what’s possible, all before you turn it on for a single customer.
Frequently asked questions
What do Fin AI Reports show?
Fin AI Reports provide a snapshot of your AI's performance by tracking metrics like resolution and deflection rates, customer experience scores, and content involvement. This gives you an initial idea of how Fin is engaging with and resolving customer inquiries.

What are the key metrics in Fin AI Reports?
Key metrics include Resolution Rate (how many issues Fin solves alone), Deflection Rate (how many customers find help without human interaction), and Customer Experience (CX) or CSAT Scores, which reflect customer satisfaction with AI interactions. Content Performance and Involvement Rates also show how the AI uses your knowledge base.

Can Fin AI Reports forecast performance before going live?
Unfortunately, Fin AI Reports do not offer a way to forecast performance before going live. The data becomes available only after the AI agent is actively engaging with customers, which means you typically have to launch and then learn from its live interactions.

Do Fin AI Reports explain why resolutions fail?
While Fin AI Reports show you what content the AI used, they generally lack insights into why a resolution failed. To understand the root cause of failures, managers often need to manually review conversation transcripts, which can be a time-consuming process.

Do Fin AI Reports cover knowledge stored outside Intercom?
No, Fin AI Reports are limited to the knowledge stored directly within Intercom, such as your help articles. If your company's important information is spread across other platforms like Confluence, Google Docs, or Notion, those sources are not included in Fin's data or reporting.

How is Intercom Fin priced?
Intercom Fin typically charges per resolution on top of your standard subscription, which can make costs unpredictable as your AI's success directly increases your bill. This model means higher automation can sometimes lead to unexpected expenses.