
Balancing User Experience & Scalability in Telecaller Training

what i did

Foundational Research & Product Management

for

SquadStack

context


SquadStack is a telesales marketplace that connects businesses with trained tele-callers. For companies, the platform offers access to a reliable supply of callers to scale sales operations. For tele-callers, it provides consistent remote earning opportunities and professional development through structured training and performance support. I was part of the User Product Team, focused on improving the experience of supply-side users (tele-callers).


In this ecosystem, training was critical: it ensured that tele-callers could quickly ramp up on client requirements, deliver consistent quality, and maintain performance. At the time, training was conducted through Zoom calls led by a small group of trainers.


As the platform scaled, however, training became increasingly difficult to manage. Trainers had to manually schedule and conduct sessions for large batches, while tele-callers struggled to fit training sessions around their daily calling shifts. For many, this meant either missing parts of the training or being forced to choose between completing sessions and meeting work targets. This friction led to inefficiencies, low satisfaction, and uneven skill development across the tele-caller base.


AI-generated image capturing the struggles of a remote tele-caller training scene.

problem


SquadStack’s training programs had clear intent: upskilling tele-callers to improve performance and retention. The content itself was strong and trainers were deeply invested. But the delivery model was failing the learners.


  • Earning vs. learning conflict: Sessions often clashed with live calling hours, forcing tele-callers to pick between immediate income and long-term growth.

  • Lack of visibility into progress: There was no structured, productized way to track a tele-caller’s learning journey. Trainers relied on manual tracking, which was inconsistent and sometimes inaccurate.

  • Low impact & engagement: Trainings felt repetitive and fragmented — lessons and assessments were disconnected, retention was poor, and motivation steadily dropped.


Together, these issues made training feel like a burden rather than an enabler, hurting both tele-caller satisfaction and their long-term performance.



goals


The goal was to make training simpler, more engaging, and more effective for tele-callers, while also reducing inefficiencies for trainers.



For tele-callers (end users):

  • Balance earning with learning: Give tele-callers flexible access so growth doesn’t compete with income.

  • Make progress visible: Turn learning journeys into something trackable and motivating, not lost in manual spreadsheets.

  • Create stickiness: Design lessons and assessments as a seamless flow, making knowledge easier to retain and apply.

  • Boost engagement: Shift training from repetitive to interactive, so learners feel invested instead of fatigued.

  • Match (or beat) live training: Deliver a digital experience that’s just as effective and energizing as a trainer-led session.


For trainers (internal users):

  • Reduce repetition: Cut down on manual, one-off sessions so energy goes into refining content, not re-delivering it.

  • Enable scale: Replace ad-hoc tracking with a productized system that gives real-time visibility into every learner’s progress.

  • Reclaim focus: Free trainers to invest in what actually matters (building better training experiences) rather than chasing attendance and follow-ups.



success criteria


Primary Metrics:

  • Course completion rate improved (target: >75%).

  • Drop-off between lessons and tests reduced (<15%).

  • Assessment scores improved (+15% average).


Secondary Metric:

  • Trainer hours saved through automation (target: ~50% reduction).


Guardrails:

  • Hygiene & quality of lessons maintained across cohorts.

  • Call performance metrics (conversion, CSAT) maintained or improved after training.
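
To make these thresholds concrete, here is a minimal sketch of how pilot results could be checked against them. The target values come from the criteria above; the example numbers are the pilot results reported later in this case study, not live production data.

```python
# Hypothetical go/no-go check of pilot results against the success
# criteria above. Example numbers match the pilot results reported
# later in this case study; they are illustrative, not live data.

TARGETS = {
    "completion_rate": 0.75,      # primary: course completion > 75%
    "lesson_test_dropoff": 0.15,  # primary: lesson-to-test drop-off < 15%
    "score_uplift": 0.15,         # primary: +15% average assessment score
    "trainer_hours_saved": 0.50,  # secondary: ~50% reduction in trainer hours
}

def meets_targets(results: dict[str, float]) -> dict[str, bool]:
    """Flag each metric as pass/fail; drop-off must stay *below* its target."""
    return {
        metric: (results[metric] <= target
                 if metric == "lesson_test_dropoff"
                 else results[metric] >= target)
        for metric, target in TARGETS.items()
    }

print(meets_targets({
    "completion_rate": 0.84,      # pilots: 62% -> 84%
    "lesson_test_dropoff": 0.05,  # pilots: 40% -> ~5%
    "score_uplift": 0.15,         # pilots: +15% average scores
    "trainer_hours_saved": 0.40,  # pilots: ~40% reduction
}))
```

As the results section below also notes, the trainer-hours figure from the pilots (~40%) lands just under its ~50% target while every primary metric clears its threshold.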



approach


The only sustainable way forward was to automate and productize the training system so it could scale without losing quality. The goal was to bring training and assessments into a single flow that felt seamless for tele-callers, while also giving trainers the tools to maintain lesson hygiene and reduce repetition.


To explore this, we took two parallel tracks:

  • User research → Talked to tele-callers and trainers to map their pain points, understand their workflows, and identify what an ideal training experience would look like.

  • Feasibility study → Assessed whether to build an in-house system from scratch or integrate third-party tools, weighing speed, cost, and flexibility.


The north star of this approach was to design a scalable training experience that matched or exceeded the value of live trainer sessions, without forcing tele-callers to choose between work and learning.



process


To tackle the problem, we followed a phased research-driven process, grounding every step in user insights before moving toward solutions.


Phase 1: Understanding Users

We began with exploratory research to uncover the lived experiences of tele-callers and trainers.


Using contextual inquiry and observation, I attended two live training sessions to see how tele-callers navigated lessons and tests in real time within their remote environments. To complement this, I conducted in-depth interviews with 6 tele-callers and 2 trainers, focusing on their goals, frustrations, and workarounds. Since trainers had already voiced recurring challenges, I conducted fewer interviews here, mainly to validate and triangulate findings.


In parallel, I analyzed backend training data (attendance rates, completion patterns, and assessment performance) to establish benchmarks. This quantitative layer helped contextualize qualitative insights, showing not just what challenges users faced, but also how often and to what degree they occurred. It turned out the data was scattered across many Google Sheets, which made it very difficult to maintain and track.
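
For a sense of how such benchmarks can be pulled together once the sheets are exported, here is a minimal sketch in Python with pandas. The folder layout and column names (caller_id, lessons_assigned, lessons_completed, test_score) are assumptions made for illustration, not SquadStack's actual schema.

```python
# Minimal sketch: consolidate per-cohort training exports (one CSV per
# Google Sheet) and compute benchmark metrics. File and column names
# are hypothetical, for illustration only.
from pathlib import Path

import pandas as pd

def load_cohorts(export_dir: str) -> pd.DataFrame:
    """Stack every per-cohort CSV export into one DataFrame."""
    frames = []
    for csv_path in sorted(Path(export_dir).glob("cohort_*.csv")):
        frame = pd.read_csv(csv_path)
        frame["cohort"] = csv_path.stem  # remember the source sheet
        frames.append(frame)
    return pd.concat(frames, ignore_index=True)

def benchmark(df: pd.DataFrame) -> pd.DataFrame:
    """Per-cohort completion rate, lesson-to-test drop-off, and scores."""
    # A caller who completed lessons but has no test score "dropped off"
    # between the lessons and the assessment.
    df = df.assign(
        dropped=(df["lessons_completed"] > 0) & df["test_score"].isna()
    )
    out = df.groupby("cohort").agg(
        assigned=("lessons_assigned", "sum"),
        completed=("lessons_completed", "sum"),
        lesson_to_test_dropoff=("dropped", "mean"),
        avg_test_score=("test_score", "mean"),
    )
    out["completion_rate"] = out["completed"] / out["assigned"]
    return out

if __name__ == "__main__":
    cohorts = load_cohorts("training_exports")  # hypothetical folder
    print(benchmark(cohorts).round(2))
```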


Using these combined insights, I created journey maps for both tele-callers and trainers, visualizing where breakdowns occurred in the training lifecycle. This phase highlighted a fragmented experience: tele-callers had to switch contexts between lessons and tests, struggled with repetitive, unengaging sessions, and often sacrificed work hours to complete training. Trainers, meanwhile, faced the inefficiency of repeating content manually and lacked visibility into learner progress.

Phase 2: Analysis & Defining Needs

All qualitative data was coded using a thematic analysis approach, clustering patterns into user needs and pain points.


Example of my in-progress Dovetail board.

For tele-callers, the needs were clear:

  • A seamless way to learn and test without interruptions.

  • More engaging, less repetitive learning experiences.

  • Transparency into progress and next steps.

  • Flexibility to balance training with live calling shifts.


For trainers, the critical needs included:

  • Standardization and reusability of lessons to ensure content hygiene.

  • Visibility into learner performance across cohorts.

  • Reduced manual overhead in scheduling and conducting sessions.


These insights clarified the problem statement: training had to be productized in a way that created value for both learners and trainers while sustaining quality at scale.


Phase 3: Ideation — Exploring Build vs. Buy

With the problem reframed, we moved into solution exploration. The team weighed two pathways: building an internal training system from scratch versus integrating a third-party tool. I led the competitive scan and feasibility research, benchmarking available tools against criteria such as content interactivity, assessment integration, progress tracking, and ease of integration with our existing systems.
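
To make that comparison concrete, below is a minimal sketch of the kind of weighted scoring matrix such an evaluation can use. The criteria mirror the ones above, but the weights and scores are placeholder values for illustration, not our actual ratings.

```python
# Hypothetical weighted decision matrix for the build-vs-buy scan.
# Criteria mirror the ones above; weights and scores are illustrative only.

WEIGHTS = {
    "content_interactivity": 0.3,
    "assessment_integration": 0.3,
    "progress_tracking": 0.2,
    "ease_of_integration": 0.2,
}

# 1-5 scores per option (placeholder numbers, not our actual evaluation).
SCORES = {
    "in_house_build": {"content_interactivity": 4, "assessment_integration": 4,
                       "progress_tracking": 3, "ease_of_integration": 2},
    "EdApp":          {"content_interactivity": 5, "assessment_integration": 5,
                       "progress_tracking": 4, "ease_of_integration": 4},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Sum of criterion scores weighted by their importance."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

for option, scores in SCORES.items():
    print(f"{option}: {weighted_score(scores):.2f}")
```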


Through this process, EdApp emerged as the strongest candidate. It offered micro-lesson formats that reduced repetition, built-in assessments, dashboards for progress visibility, and a scalable structure that aligned well with our trainers’ needs.


The EdApp dashboard, one of the many reasons we chose this product.

Phase 4: Prototyping & Testing

We ran two pilot studies with EdApp, each with a different tele-caller cohort over two weeks. In these pilots, we combined quantitative metrics (completion rates, assessment scores, drop-off data) with qualitative feedback (usability, engagement, trainer effort) from both tele-callers and trainers.



Key findings included:

  • Tele-callers described lessons as “lighter” and “bite-sized,” making it easier to fit learning into their workday without compromising live calling hours.

  • Immediate follow-up tests reinforced concepts, boosting short-term retention.

  • Completion rates rose significantly when lessons and assessments were tightly integrated into one flow.

  • Trainers appreciated the reusability of modules, noting a clear reduction in repeated manual effort.

  • Progress dashboards were a major motivator. Tele-callers were excited to track their growth, compare performance with peers, and celebrate milestones.


These insights validated the feasibility of scaling EdApp across the platform.

Phase 5: Integration & Handoff

Once EdApp was selected, my role shifted to enabling adoption. We designed a handoff process where trainers were onboarded through a workshop, learning how to create and manage their lessons within the new system. On the technical side, I collaborated with the external EdApp team and our engineering team to ensure smooth integration into our ecosystem, and delivered a formal research readout with findings, success criteria, and integration requirements.
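
On the integration side, one recurring task in setups like this is pulling learner progress out of the training tool and into internal tracking. The sketch below shows the general shape of such a sync job; the endpoint URL, authentication scheme, and payload fields are invented placeholders for illustration, not EdApp's actual API.

```python
# Hypothetical progress-sync job: pull learner progress from the training
# tool's API and map it onto internal tele-caller records. The URL, token
# handling, and field names are invented placeholders, not EdApp's real API.
import os

import requests

API_BASE = "https://api.example-training-tool.com/v1"  # placeholder URL
TOKEN = os.environ["TRAINING_TOOL_TOKEN"]              # placeholder auth

def fetch_course_progress(course_id: str) -> list[dict]:
    """Fetch per-learner progress for one course (hypothetical endpoint)."""
    response = requests.get(
        f"{API_BASE}/courses/{course_id}/progress",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["learners"]  # assumed response shape

def to_internal_record(learner: dict) -> dict:
    """Map the tool's learner payload onto an internal schema (assumed fields)."""
    return {
        "telecaller_id": learner["external_id"],
        "lessons_completed": learner["completed_lessons"],
        "last_score": learner.get("latest_score"),
    }

if __name__ == "__main__":
    for learner in fetch_course_progress("sales-onboarding-101"):
        print(to_internal_record(learner))
```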


By the end of this phase, training had transformed from a manual, fragmented process into a streamlined, productized system that better served both trainers and tele-callers.


EdApp learner view


insights & impact


Alongside the primary metrics, we tracked our two guardrails: maintaining lesson hygiene and quality across cohorts, and ensuring call performance outcomes (conversion, CSAT) held steady or improved after training.


The pilots delivered clear wins:

  • Average assessment scores rose by 15%, reflecting stronger concept retention.

  • Completion rates jumped from 62% to 84% with integrated lessons and tests.

  • Drop-off between lessons and tests fell from 40% to ~5%, showing the new flow reduced friction.

  • Trainer effort on manual repetition decreased by ~40%, freeing capacity for content improvement, an impact likely to compound at scale.



These results didn’t just make training smoother; they laid the groundwork for business outcomes that mattered most. By improving retention, consistency, and motivation, the pilots ensured that tele-callers could learn without sacrificing work—and that training became an enabler of stronger conversion and satisfaction on calls, rather than a competing priority.



reflections & takeaways


A few key reflections:

  • Balancing third-party integrations is its own design challenge: This was my first time leading a project around a third-party tool, and it taught me that the tool itself isn’t the solution—the real work lies in ensuring it fits seamlessly into the workflows of all stakeholders. Finding that balance was a complex but invaluable exercise.

  • Empathy + evidence = stronger decisions: I realized how powerful it is to pair quantitative signals (completion rates, scores, drop-offs) with qualitative insights from tele-callers and trainers. Numbers proved what was happening, but stories explained why. Together, they made the case for change far more compelling.

  • Systems thinking creates long-term leverage: Beyond fixing immediate pain points, this project showed me how productizing workflows—like reducing trainer repetition—compounds impact. Freeing up human effort for higher-value work ultimately drove outcomes that scaled well past the pilot.



For me, this project underscored that great product management sits at the intersection of research rigor and strategic decision-making. Empathy uncovers what matters, structure sharpens focus, and trade-offs drive scalable solutions.

