How to deliver above-and-beyond support for every customer, every time

Andrea Lean, Senior Content Editor

29 April 2025

The quality of the conversations between you and your customers can be tracked by your internal quality score (IQS). Not sure what yours is? Model your internal QA process after our award-winning support team.

Make room, CSAT and NPS! There’s an equally important metric you should be tracking to make sure your customer service is truly 5 stars all-around.

Enter IQS: your internal quality score. This is an internal measurement of the quality of your team’s customer interactions.

You can determine your IQS through a customer service quality assurance (QA) process, in which you regularly review the conversations between your customers and your team against your own standards.

Setting quality standards to start tracking IQS

At Front, our QA process is driven by our core company values: transparency, collaboration, care, high standards, low ego, and speed. Our award-winning support team performs QA every week to make sure we’re maintaining high standards and continuously going above and beyond for our customers whenever they reach out for support.

We use an internal quality scorecard to review each conversation against criteria like tone and calculate our IQS. We’re not kidding around when it comes to meeting our high bar for service quality (or other support metrics for that matter), which is why we set our IQS OKR at 99% and report on it monthly and quarterly.

A behind-the-scenes look at Front’s QA process

We have an average of 1,200 conversations per week across our customer communication channels, and roughly 2% of our emails and chats undergo QA with our scorecard. Three support managers each review about six conversations a week, one from each of our 19 support agents.

Our support team has built an automated QA process using Front rules to select conversations that meet our criteria:

✅ Reached resolution

✅ Involved at least two replies

✅ Occurred within the last week

❌ Excludes unqualified conversations like feature requests
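For teams that want to reproduce this selection outside of Front rules, here’s a minimal Python sketch of the same eligibility check plus random sampling. The Conversation fields, the "feature-request" tag name, and the sample size are hypothetical stand-ins for illustration, not Front’s actual data model or API.

```python
import random
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Conversation:
    # Hypothetical fields for illustration; not Front's actual data model
    id: str
    resolved: bool
    reply_count: int
    closed_at: datetime
    tags: set[str] = field(default_factory=set)


def eligible_for_qa(convo: Conversation, now: datetime) -> bool:
    """Mirror the selection criteria above: reached resolution, at least
    two replies, closed within the last week, and not an unqualified
    conversation such as a feature request."""
    return (
        convo.resolved
        and convo.reply_count >= 2
        and now - convo.closed_at <= timedelta(days=7)
        and "feature-request" not in convo.tags
    )


def sample_for_review(conversations: list[Conversation], now: datetime, k: int = 6) -> list[Conversation]:
    """Randomly pick a handful of eligible conversations to flag for QA
    (k=6 echoes the roughly six reviews per manager per week)."""
    pool = [c for c in conversations if eligible_for_qa(c, now)]
    return random.sample(pool, min(k, len(pool)))
```

In our setup, a Front rule applies this selection and tagging automatically.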

Once the rule tags a conversation at random, it creates a Discussion between the manager and the support agent, where the conversation is scored against four attributes: completeness, accuracy, tone, and brevity.

⭐Completeness: Were all the questions answered and/or followed up with clarifying questions?

💯Accuracy: Was the answer correct and the right resources provided?

🤖Tone: Did the tone of the response align with the guidelines?

🩲Brevity: Was the answer clear and concise?

Here’s an example discussion that kicks off QA

Each attribute is scored from 1 to 5 according to this rubric: 

Completeness

1 - Incomplete

  • Significant questions were unanswered or ignored

  • No attempt to clarify ambiguous requests

  • Demonstrates lack of thoroughness

2 - Partially complete

  • Some questions were answered, but others were missed or addressed superficially

  • Limited attempts to clarify unclear requests

  • Follow-up is minimal or ineffective

3 - Moderately complete

  • Most questions were answered, but some minor points might have been overlooked

  • Adequate attempts to clarify, but some gaps remain

  • Generally addresses the core issues

4 - Mostly complete

  • All questions were answered, and most necessary clarifications were made

  • Proactive follow-up demonstrates attention to detail

  • Minor improvements could be made in thoroughness

5 - Fully complete

  • Every question was answered comprehensively, and all ambiguities were resolved with clear follow-up

  • Anticipates potential follow-up questions and addresses them proactively

  • Demonstrates exceptional thoroughness and attention to detail

Accuracy

1 - Highly inaccurate

  • Provided incorrect information that could lead to customer errors or frustration

  • Supplied irrelevant or harmful resources

  • Demonstrates a significant lack of product/service knowledge

2 - Mostly inaccurate

  • Provided some correct information, but significant errors or omissions were present

  • Resources provided are partially relevant or outdated

  • Requires significant correction

3 - Moderately accurate

  • Generally correct information, but minor inaccuracies or omissions were present

  • Resources provided are relevant, but may not be the optimal choice

  • Requires some correction

4 - Mostly accurate

  • Provided accurate and reliable information with minimal errors

  • Supplied relevant and helpful resources

  • Minor improvements could be made

5 - Fully accurate

  • Provided completely accurate and precise information

  • Supplied the best possible resources for the customer’s needs

  • Demonstrates expert-level knowledge and resourcefulness

Tone

1 - Highly inappropriate

  • Rude, dismissive, or unprofessional tone

  • Demonstrates lack of empathy or respect

  • Significantly violates company tone guidelines

2 - Mostly inappropriate

  • Tone is inconsistent or occasionally unprofessional

  • Lacks empathy or demonstrates impatience

  • Partially violates company tone guidelines

3 - Neutral/adequate

  • Tone is generally neutral but lacks warmth and enthusiasm

  • Meets basic professionalism standards

  • Meets basic company tone guidelines

4 - Mostly appropriate

  • Friendly, helpful, and empathetic tone

  • Demonstrates a positive and supportive attitude

  • Closely aligns with company tone guidelines

5 - Highly appropriate

  •  Exemplary tone that is consistently professional, empathetic, and enthusiastic

  • Builds rapport and fosters a positive customer experience

  • Perfectly aligns with and exemplifies company tone guidelines

Brevity

1 - Excessively wordy/unclear

  • Responses are overly long and difficult to understand

  • Contains irrelevant information or jargon

  • Lacks clarity and focus

2 - Long-winded with increased risk of confusion

  • Responses are longer than necessary and contain some unnecessary details

  • Clarity is compromised by excessive length

  • Requires significant condensing

3 - Moderately concise

  • Responses are generally concise but could be improved for clarity and brevity

  • Some unnecessary details might be present

  • Requires some condensing

4 - Mostly concise 

  • Responses are clear, concise, and to the point

  • Avoids unnecessary jargon or details

  • Minor improvements could be made

5 - Highly concise

  • Responses are exceptionally clear, concise, and efficient

  • Delivers information in the most direct and effective way possible

  • Demonstrates masterful communication skills

At the Tier 2 support level, the inquiries are more technical and responses can easily slip into wordiness. Adding some warmth to responses can go a long way for even the most complex topics.

Anthony Galleran, Technical Support Manager at Front

Averaging all of your scorecard ratings reveals your IQS, which helps you follow conversation quality trends and understand your service quality at a glance.
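To make that concrete, here’s a short Python sketch of the calculation. The Scorecard structure is a stand-in, and expressing the average as a percentage of the maximum rating is an assumption (consistent with percentage targets like the 99% OKR above) rather than Front’s exact formula.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Scorecard:
    # One reviewed conversation, each attribute rated 1-5 per the rubric above
    completeness: int
    accuracy: int
    tone: int
    brevity: int

    def ratings(self) -> list[int]:
        return [self.completeness, self.accuracy, self.tone, self.brevity]


def iqs(scorecards: list[Scorecard]) -> float:
    """Average every rating across all reviewed conversations, then express
    the result as a percentage of the maximum rating (5). The percentage
    conversion is an assumption, not Front's exact formula."""
    all_ratings = [r for card in scorecards for r in card.ratings()]
    return mean(all_ratings) / 5 * 100


# Example: two reviewed conversations from one week of QA
weekly_reviews = [
    Scorecard(completeness=5, accuracy=5, tone=4, brevity=5),
    Scorecard(completeness=5, accuracy=4, tone=5, brevity=5),
]
print(f"IQS: {iqs(weekly_reviews):.1f}%")  # IQS: 95.0%
```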

Manual vs. automated QA process

While a manual review process is quick and easy to stand up, it comes with both benefits and limitations.

Pros

  • Quick and easy to implement

  • No additional tool required

  • Simple way to kick off a QA process and begin monitoring service quality

Cons

  • Manual process

  • Limited view of your tickets

  • Incomplete assessment of agent performance

However, if you have enough ticket volume, it might be more helpful to automate the QA process. Many tools are available to automatically evaluate closed tickets, like Front’s Smart QA or MaestroQA integration. This provides a more comprehensive view of service quality that can inform performance management, systemic improvements, and strategic planning.

MaestroQA’s take on unlocking real insights with automated QA

Automating QA isn’t just about efficiency; it’s about surfacing insights that transform support strategy. Without full visibility into every interaction, teams risk making decisions based on incomplete data, missing trends that could improve agent performance, customer satisfaction, and operational workflows.

With automated QA, support leaders can:

  • Identify trends across all interactions instead of relying on a small sample

  • Pinpoint coaching opportunities and drive performance improvements

  • Uncover systemic issues faster to refine workflows, training, and CX strategy

  • Uncover root causes behind recurring issues, not just surface-level trends

A great example of this in action comes from Upwork’s Chatbot QA Story. Upwork’s chatbot was designed to handle customer inquiries at scale, but hallucinations and inaccurate responses were eroding customer trust. Before automation, their team manually reviewed just 1% of chatbot interactions—a time-consuming approach that left critical blind spots. With MaestroQA, they automated the initial QA process, reducing review time from 16 hours per week to seconds. But the real impact wasn’t just speed; it was what they could do next.

With visibility into 100% of chatbot interactions, Upwork’s team can identify systemic issues faster, analyze trends at scale, and drill into root causes to improve chatbot accuracy, refine escalation logic, and optimize customer journeys. Instead of reacting to problems after they surface, they’re proactively shaping a smarter, more reliable support experience.

When QA is automated with purpose, it becomes a strategic advantage, not just a process. The best teams use automation to uncover patterns and then apply human expertise to drive meaningful improvements.

💡 Learn how MaestroQA helps support leaders scale QA, uncover trends, and drive CX strategy.

Want step-by-step instructions on how to calculate your IQS? Download the guide to learn how to build your own customer service QA scorecard — template included!

