How to use Freshdesk Freddy AI to evaluate knowledge gaps for bots in 2025

Stevia Putri

Stanley Nicholas
Last edited October 28, 2025
Expert Verified

There’s nothing quite like the frustration of launching a new AI bot only to watch it fail. You’ve put in the time, connected it to your knowledge base, and hit the 'on' switch. But then you start hearing that it's giving weird, unhelpful, or just plain wrong answers. It’s the classic "garbage in, garbage out" scenario, and it’s a sure sign your AI doesn’t have the information it needs to be useful.
Finding and filling these knowledge gaps is the most important thing you can do to get your AI performing well. It’s what turns a bot that annoys customers into one that actually helps them.
This guide will walk you through exactly how to use the tools inside Freshdesk Freddy AI to evaluate knowledge gaps for bots. We’ll cover the native features Freshdesk offers, but we’ll also dig into a more advanced method that gets you much better results by simulating your AI’s performance against thousands of real customer questions.
What you'll need to get started
Before we jump in, let's quickly cover what you'll need to have handy. This whole process relies on having access to a few specific features within your Freshdesk setup.
- A Freshdesk account: You need to be on their Pro or Enterprise plan to get access to the Freddy AI Agent.
- The Freddy AI Agent add-on: This is a paid feature. Freshdesk’s pricing is based on "sessions," which you buy in packs. To give you an idea, it’s about $100 for 1,000 sessions.
- Admin access: You’ll have to be an administrator in your Freshdesk account to poke around in the settings and analytics for Freddy AI.
- An existing knowledge base: Your Freddy AI bot has to be connected to your Freshdesk solution articles. This gives it a starting point of knowledge for us to check.
How to find knowledge gaps with Freshdesk's tools
Freshworks has built some features to help you pinpoint where your Freddy AI bot might be coming up short. The whole thing really comes down to digging into its analytics and using the built-in self-testing tool to spot weaknesses.
Here’s how to do it.
1. Go to the Freddy AI Agent analytics dashboard
First up, you need to see how your bot is actually doing in the wild. Head over to your Freshdesk admin panel and find the analytics section for Freddy. This dashboard gives you the key stats that can point you toward knowledge gaps.
Keep a close eye on metrics like these:
- Resolution rate: How many chats is the bot actually closing on its own? If this number is low, that's a big red flag.
- Unhelpful responses: This tracks every time a user manually clicked to say the bot’s answer wasn't helpful.
- Unanswered questions: This shows you every query where the bot threw its hands up and had to escalate. These are your most obvious knowledge gaps, right there on a silver platter.
A screenshot of the Freddy AI Agent analytics dashboard in Freshdesk.
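If you'd rather track these numbers week over week than eyeball the dashboard, a tiny script can tally them from an exported conversation log. This is just a minimal sketch: the file name and the "outcome" column values are assumptions, not an official Freshdesk export format, so map them to whatever your actual data looks like.

```python
import csv
from collections import Counter

# Minimal sketch: compute the three headline bot metrics from an exported
# conversation log. The file name and the "outcome" column (with values
# like "resolved", "unhelpful", "unanswered") are assumptions -- adjust
# them to match your actual Freddy AI export.
def summarize_bot_outcomes(path="freddy_conversations.csv"):
    outcomes = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            outcomes[row["outcome"].strip().lower()] += 1

    total = sum(outcomes.values()) or 1
    print(f"Total bot conversations: {total}")
    print(f"Resolution rate:     {outcomes['resolved'] / total:.1%}")
    print(f"Unhelpful responses: {outcomes['unhelpful']} ({outcomes['unhelpful'] / total:.1%})")
    print(f"Unanswered queries:  {outcomes['unanswered']} ({outcomes['unanswered'] / total:.1%})")

if __name__ == "__main__":
    summarize_bot_outcomes()
```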
2. Test the bot with the "evaluate itself" feature
Freshdesk has a feature where Freddy can quiz itself on its own knowledge. You can find this in the Freddy AI Agent configuration settings. When you kick it off, Freddy cooks up a list of questions it thinks are relevant based on your help articles and then shows you how it would answer them.
It’s a decent way to catch some obvious holes. For example, if you have an article about your return policy but nothing on exchanges, the self-evaluation might ask about exchanges and fail, flagging a clear gap in your docs.
A screenshot of the Freddy AI assistant inside the Freshdesk ticketing interface.
3. Dig into "unhelpful" and "unanswered" replies
Honestly, your best feedback comes directly from your users. The analytics dashboard splits the bot’s failures into two useful buckets:
- Unhelpful responses: These are pure gold. A customer cared enough to tell you "this didn't help." Sift through these one by one. Was the answer technically right but just confusing? Or was it flat-out wrong? This tells you which articles need a good rewrite.
- Unanswered questions: This is where Freddy just gave up. This list is basically your to-do list for new knowledge base articles. If a dozen people asked how to track an order and the bot blanked, you know exactly what article to write next.
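To turn that unanswered list into a prioritized writing queue, you can count the most common themes with a quick script like the sketch below. It assumes you've exported the unanswered questions to a CSV with a "question" column, which is a placeholder rather than an official Freshdesk export, so rename things to suit your data.

```python
import csv
import re
from collections import Counter

# Rough sketch: surface the most common themes in the bot's unanswered
# questions so you know which articles to write first. Assumes a CSV
# export with a "question" column -- both the file name and column name
# are placeholders.
STOPWORDS = {"the", "a", "an", "to", "how", "do", "i", "my", "is", "can",
             "of", "for", "in", "on", "and", "it", "what", "where", "you"}

def top_unanswered_themes(path="unanswered_questions.csv", top_n=15):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            words = re.findall(r"[a-z']+", row["question"].lower())
            counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    for word, hits in counts.most_common(top_n):
        print(f"{hits:4d}  {word}")

if __name__ == "__main__":
    top_unanswered_themes()
```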
4. Manually add or update your solution articles
After you’ve found the gaps from the reports and self-tests, the last step is to actually fix them. This part is all manual. You’ll need to pop into your Freshdesk knowledge base and either write new solution articles or edit existing ones.
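If you end up writing a lot of articles, you can also push drafts in programmatically instead of creating each one by hand. The sketch below assumes Freshdesk's Solutions API endpoint for creating an article inside a folder; double-check the path, field names, and status codes against the current API docs, and swap in your own domain, API key, and folder ID (all placeholders here).

```python
import requests

# Hedged sketch: create a draft solution article so a writer can polish it
# before publishing. Assumes Freshdesk's Solutions API for creating articles
# in a folder -- verify the endpoint and fields against the current docs.
FRESHDESK_DOMAIN = "yourcompany.freshdesk.com"  # placeholder
API_KEY = "your_api_key"                        # placeholder, from your Freshdesk profile settings
FOLDER_ID = 123456                              # placeholder solutions folder ID

def create_draft_article(title: str, body_html: str) -> dict:
    url = f"https://{FRESHDESK_DOMAIN}/api/v2/solutions/folders/{FOLDER_ID}/articles"
    payload = {
        "title": title,
        "description": body_html,
        "status": 1,  # 1 = draft, 2 = published
    }
    resp = requests.post(url, json=payload, auth=(API_KEY, "X"), timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    article = create_draft_article(
        "How to exchange an item",
        "<p>Start from the Orders page, then choose Exchange...</p>",
    )
    print("Created draft article:", article.get("id"))
```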
A good practice is to create one new, focused article for each separate gap you find. And with that, you’ve completed Freddy’s native process for finding and fixing knowledge gaps.
Limitations of Freshdesk's native evaluation tools
Following the steps above will definitely lead to some improvements. But if you’re trying to automate a real chunk of your support tickets, you'll probably hit a wall with this approach pretty quickly.
- Simulated questions aren't real questions: Freddy testing itself is a bit like a student writing their own exam. It can only test what it already knows is in the curriculum. It can't mimic the messy, weird, and completely unexpected ways real customers ask for help. It won't use slang, misspell words, or ask about that brand-new bug that just popped up.
- It only looks at one knowledge source: Freddy is almost entirely stuck looking at your official Freshdesk knowledge base. But think about where the real answers live in your company. A lot of the time, they’re buried in a Google Doc, a Confluence page, or in the resolution notes of an old support ticket. Freddy can't see any of that, leaving huge blind spots.
- It’s reactive, not proactive: The whole process is based on waiting for the AI to fail in front of a customer (which gives you an "unanswered" response) or running manual tests. It doesn’t give you a true forecast of how your bot will perform across thousands of real-world scenarios before you let it talk to a single customer.
A better way to find knowledge gaps
While Freddy’s tools are a start, a much more effective solution is a platform built specifically for heavy-duty testing and deep knowledge integration. Instead of guessing, you can use a tool that shows you exactly how your AI will do and where it will stumble before you even go live.
Test against thousands of real, historical tickets
This is where a tool like eesel AI really shines. Instead of letting the AI make up its own questions, eesel AI’s powerful simulation mode runs it against thousands of your actual past Freshdesk tickets.
It takes real customer questions and shows you precisely how the AI would have answered. This gives you a startlingly accurate picture of your true automation potential and instantly shows you where your knowledge is weak. You're no longer guessing; you're using your own history as the ultimate cheat sheet.
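If you want a feel for what that test set would even look like, a good first step is pulling a sample of your historical tickets into one place. The sketch below uses Freshdesk's standard Tickets API to export ticket IDs and subjects to a CSV; the domain and API key are placeholders, and note that the list endpoint may only return recent tickets by default, so check the API docs for date filters if you need a longer history.

```python
import csv
import requests

# Minimal sketch: export historical ticket subjects from Freshdesk so you
# have a realistic question set to evaluate a bot against. Domain and API
# key are placeholders; pagination is capped to keep the example short.
FRESHDESK_DOMAIN = "yourcompany.freshdesk.com"
API_KEY = "your_api_key"

def export_ticket_subjects(path="historical_tickets.csv", max_pages=10):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ticket_id", "subject"])
        for page in range(1, max_pages + 1):
            resp = requests.get(
                f"https://{FRESHDESK_DOMAIN}/api/v2/tickets",
                params={"per_page": 100, "page": page},
                auth=(API_KEY, "X"),
                timeout=30,
            )
            resp.raise_for_status()
            tickets = resp.json()
            if not tickets:
                break
            for t in tickets:
                writer.writerow([t["id"], t["subject"]])

if __name__ == "__main__":
    export_ticket_subjects()
```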
Connect all your knowledge, wherever it lives
A lot of the time, a "knowledge gap" isn't a gap at all. The information is out there, it’s just not in the one place your AI is allowed to look.
eesel AI connects with over 100 sources, so you can plug it into everything, not just your help center. Pull in knowledge from Google Docs, Confluence, Notion, and even the text from past support tickets. This gives your AI access to every possible answer, which massively reduces the number of times it gets stumped.
Let the AI help you fill the gaps automatically
Fixing knowledge gaps shouldn't be a purely manual grind. After running a simulation, you'll get a clear report of questions the AI couldn't handle. With eesel AI, you can take it a step further.
The platform can automatically generate draft knowledge base articles from successful human resolutions. If an agent solves a tricky problem that the AI couldn't, eesel AI flags that conversation and whips up a draft article based on the agent's response. This helps you constantly improve your knowledge base with proven solutions, turning your team's hard work into something you can use again and again.
A screenshot of eesel AI's copilot drafting a reply inside Freshdesk.
Comparison: Freshdesk Freddy AI vs. eesel AI
| Feature | Freshdesk Freddy AI | eesel AI |
|---|---|---|
| Testing Method | Self-generates questions | Simulates on thousands of real, past tickets |
| Realism | Low (AI tests itself) | High (Based on actual customer issues) |
| Knowledge Sources | Primarily Freshdesk articles | Freshdesk, Confluence, Google Docs, past tickets & more |
| Gap Identification | Manual review of reports | Automated reports highlighting specific gaps & trends |
| Go-Live Confidence | Moderate | High (with accurate performance forecasts) |
Stop guessing, start simulating
At the end of the day, finding and fixing knowledge gaps is the only way to build an AI support bot that actually works. Freshdesk Freddy AI gives you some basic tools to evaluate knowledge gaps for bots, but its self-testing and siloed knowledge sources will only get you so far. You'll end up spending most of your time reacting to problems instead of getting ahead of them.
A more modern, effective approach is to simulate your AI's performance against your own historical data and bring all of your company's knowledge together. This is how you build an AI agent you can actually trust with your customers. It’s time to move beyond putting out fires and adopt a proactive, data-driven strategy.
Take the next step
Ready to see what your real automation potential is? Run a free, no-risk simulation on your historical tickets with eesel AI. You can be up and running in minutes and get an accurate forecast of how many tickets you can automate, along with a clear report on any knowledge gaps you need to fill.
Frequently asked questions
What do I need before I can start evaluating knowledge gaps with Freddy AI?
To get started, you'll need a Freshdesk Pro or Enterprise account with the Freddy AI Agent add-on enabled, which is a paid feature. You'll also need admin access within Freshdesk and an existing knowledge base connected to your Freddy AI bot.

How do I find knowledge gaps using Freshdesk's native tools?
The primary steps involve reviewing the Freddy AI Agent analytics dashboard for metrics like unhelpful responses and unanswered questions. You can also use the bot's "evaluate itself" feature and then manually update your solution articles based on the identified gaps.

What are the limitations of Freddy AI's built-in evaluation?
The main limitations are that the bot's self-generated questions don't fully mimic real customer queries, which makes the testing unrealistic. It also relies primarily on your Freshdesk knowledge base, often missing valuable information stored in other company documents.

How is a simulation-based approach different?
A simulation-based approach, like with eesel AI, tests your bot against thousands of real historical customer tickets, providing a much more accurate forecast of its performance. This method uncovers gaps proactively and can integrate knowledge from over 100 sources beyond just Freshdesk articles.

How do I keep improving my knowledge base over time?
For continuous improvement, regularly review your bot's performance metrics and user feedback to identify new or recurring gaps. Consider adopting tools that automatically generate draft articles from successful agent resolutions to continually enrich your knowledge base.

Is the Freddy AI Agent an extra cost?
Yes, the Freddy AI Agent add-on is a paid feature. Its pricing is based on "sessions" purchased in packs, for example, around $100 for 1,000 sessions.