Why you should measure async interviews
Async interviews have become a fundamental tool for hiring teams that operate across multiple timezones, handle high candidate volume, or simply want a more efficient process. But implementing async interviews without measuring their effectiveness is like running marketing campaigns without analytics.
The advantage of async interviews over synchronous ones is that they automatically generate structured data: when the candidate responded, how long they took, how complete their answers were, and how they rated the experience. This data is gold for optimizing your hiring process.
Key Takeaway
Companies that measure and optimize their async interviews reduce time-to-hire by 35% and improve quality of hire by 28%, according to data from teams using Selenios in 2025.
The 4 fundamental metrics
1. Completion rate
Completion rate measures the percentage of candidates who finish the async interview after receiving the invitation. It's the most important metric because a process that candidates abandon is a broken process.
The factors that most affect completion are:
- Number of questions: more than 5 questions reduces completion by 20%
- Instruction clarity: candidates need to know exactly what to expect before starting
- Estimated time: indicating "10-15 minutes" vs. giving no estimate improves completion by 18%
- Flexible deadline: giving 48-72 hours works better than 24 hours
- Mobile experience: 45% of candidates complete async interviews on their phones
How to measure it
Divide the number of completed interviews by the number of invitations sent. Segment by candidate source, role, and seniority to identify where you're losing the most candidates. Selenios shows this metric in real time on its dashboard, with alerts when the rate drops below your threshold.
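The calculation above can be sketched in a few lines of Python. The record fields and sources here are illustrative assumptions, not the Selenios data schema; in practice you would pull these records from your ATS export or tracking tool.

```python
from collections import defaultdict

# Hypothetical invitation records (fields and values are illustrative):
invitations = [
    {"source": "LinkedIn", "completed": True},
    {"source": "LinkedIn", "completed": False},
    {"source": "Referral", "completed": True},
    {"source": "Job board", "completed": False},
]

def completion_rate_by_source(invitations):
    """Completed interviews / invitations sent, segmented by candidate source."""
    sent = defaultdict(int)
    done = defaultdict(int)
    for inv in invitations:
        sent[inv["source"]] += 1
        if inv["completed"]:
            done[inv["source"]] += 1
    return {src: done[src] / sent[src] for src in sent}

print(completion_rate_by_source(invitations))
# e.g. {'LinkedIn': 0.5, 'Referral': 1.0, 'Job board': 0.0}
```

The same grouping works for role or seniority: swap the `"source"` key for whichever segment you want to compare.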
2. Response quality
Not all completed interviews are equal. Response quality measures how relevant, complete, and deep candidates' answers are. This metric is crucial for evaluating whether your questions are well-designed.
A question that generates generic responses isn't serving its purpose. If every candidate gives the same answer to "what's your greatest strength," the problem is the question, not the candidates.
How to measure it
Response quality is evaluated across two dimensions:
- AI evaluation: language models analyze relevance, depth, and structure of the response. Selenios assigns a score from 1 to 10 automatically.
- Reviewer evaluation: hiring managers rate responses on a standardized scale. The correlation between the AI score and human scores indicates whether the model is well-calibrated.
The average quality benchmark is 6.5/10. Consistently low scores on a specific question suggest that the question needs reformulation.
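The AI-vs-reviewer calibration check described above boils down to a correlation between two lists of paired scores. A minimal sketch, with illustrative score values rather than real data:

```python
import math

# Paired scores for the same set of responses (illustrative values):
ai_scores    = [7.0, 5.5, 8.0, 4.0, 6.5]
human_scores = [6.5, 5.0, 8.5, 4.5, 6.0]

def pearson(xs, ys):
    """Pearson correlation between AI scores and reviewer scores.
    Values near 1.0 suggest the AI scoring is well calibrated;
    low values mean the model and your reviewers disagree."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(ai_scores, human_scores), 2))
```

If the correlation drops, review a sample of responses together with the hiring managers before trusting the automatic scores for screening decisions.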
3. Time to complete
Time to complete has two components: the time it takes the candidate to start the interview after receiving the invitation (latency), and the time it takes to complete all questions (duration).
Both are informative:
- High latency (more than 48 hours): may indicate low candidate interest or that the invitation email wasn't effective
- Very short duration: may indicate superficial responses or that the questions are too simple
- Excessive duration: may indicate confusing or overly complex questions
How to measure it
Track the timestamp of invitation sent, interview started, and final submission. Calculate the median (not the average, which a few extreme outliers can skew) for each metric. Segment by role: a senior developer may take longer on a technical question than a junior candidate, and that's expected.
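From those three timestamps, latency and duration are simple differences, and the standard library's `median` handles the aggregation. A sketch with hypothetical event records (the field names and ISO timestamps are assumptions):

```python
from datetime import datetime
from statistics import median

# Hypothetical per-candidate event timestamps (illustrative values):
events = [
    {"invited": "2025-03-01T09:00", "started": "2025-03-01T20:00", "submitted": "2025-03-01T20:14"},
    {"invited": "2025-03-01T09:00", "started": "2025-03-03T10:00", "submitted": "2025-03-03T10:25"},
    {"invited": "2025-03-02T12:00", "started": "2025-03-02T18:00", "submitted": "2025-03-02T18:11"},
]

def ts(s):
    return datetime.fromisoformat(s)

# Latency: invitation -> start, in hours. Duration: start -> submission, in minutes.
latencies = [(ts(e["started"]) - ts(e["invited"])).total_seconds() / 3600 for e in events]
durations = [(ts(e["submitted"]) - ts(e["started"])).total_seconds() / 60 for e in events]

print(f"median latency: {median(latencies):.1f} h")    # -> median latency: 11.0 h
print(f"median duration: {median(durations):.1f} min") # -> median duration: 14.0 min
```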
4. Candidate satisfaction
The candidate's experience in the async interview directly impacts your employer branding. A confusing or impersonal process can cause a candidate to reject an offer, even if they passed all stages.
How to measure it
Send a short survey (3-5 questions maximum) immediately after the interview is completed. Key questions are:
- NPS: "On a scale of 1 to 10, how likely are you to recommend this process to another professional?"
- Clarity: "Were the instructions clear?"
- Fairness: "Do you feel you were able to demonstrate your skills?"
- Preference: "Did you prefer this format to a live interview?"
The NPS benchmark for well-designed async interviews is 7.5/10. Below 6, there are serious experience problems.
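Since the article treats the NPS question as an average score on a 1-10 scale, checking it against the benchmarks above is a one-liner plus a threshold check. A sketch with illustrative survey scores:

```python
from statistics import mean

# Post-interview survey scores on a 1-10 scale (illustrative values):
nps_scores = [9, 8, 7, 10, 6, 8]

avg = mean(nps_scores)
print(f"average score: {avg:.1f}")
if avg < 6:
    print("serious experience problems: review instructions and question design")
elif avg < 7.5:
    print("below benchmark: look for friction points in the flow")
```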
Benchmarks by industry and seniority
Not all metrics have the same benchmarks. These are the ranges we see in Selenios data:
By seniority:
- Junior (0-2 years): completion 78%, average duration 12 min, NPS 7.8
- Mid (3-5 years): completion 74%, average duration 15 min, NPS 7.2
- Senior (6+ years): completion 65%, average duration 18 min, NPS 6.8
By role type:
- Technology: completion 70%, average quality 6.8/10
- Sales: completion 76%, average quality 6.2/10
- Operations: completion 73%, average quality 6.5/10
The trend is clear: senior candidates have lower completion because they have more options and less tolerance for processes they don't perceive as valuable. This reinforces the importance of designing an interview experience that respects their time.
How Selenios tracks these metrics
Selenios automatically captures all 4 fundamental metrics without additional configuration. Its analytics dashboard shows:
- Real-time view: completion, latency, and duration updated to the second
- Weekly trends: how your metrics evolve week over week
- Role comparison: identifies which positions have the best and worst experience
- Smart alerts: notifications when a metric drops below the configured threshold
- Improvement suggestions: data-driven recommendations to reformulate questions, adjust deadlines, or modify the flow
Strategies to improve each metric
Improving completion rate
- Reduce questions to 3-4 maximum
- Include an introductory video from the hiring manager (increases completion by 12%)
- Offer the option to respond via text or video
- Send a friendly reminder at the 24-hour mark
Improving response quality
- Use situational questions: "tell me about a time when..." instead of "what's your opinion on..."
- Provide role context before each question
- Allow candidates to re-record their response
Reducing time to complete
- Optimize the invitation email: clear subject line, visible CTA, estimated time
- Ensure the platform works without installations or registrations
- Offer full mobile compatibility
Improving satisfaction
- Personalize the experience with the candidate's name and role
- Include information about next steps upon completion
- Send an automatic thank-you with estimated response timeline
What metrics should I measure in async interviews?
The 4 fundamental metrics are: completion rate (percentage of candidates who finish the interview), response quality (evaluated by AI and human reviewers), time to complete (from receiving the link to submitting responses), and candidate satisfaction (measured with post-interview NPS). Each provides different insights for optimizing your process.
What is a good completion rate for async interviews?
The industry benchmark is 72%. Rates below 60% indicate experience problems: too many questions, confusing instructions, or lack of role context. Rates above 80% are excellent and indicate a well-designed process. Selenios shows this metric in real time and sends alerts when it drops.
How does Selenios improve async interview metrics?
Selenios automatically tracks completion, response time, and quality. Its conversational AI reduces candidate friction, increasing completion rate by 15% compared to one-way video platforms. It also generates comparative reports by role, seniority, and candidate source, with data-driven improvement suggestions.