Case Study · 2024–25

Designing for
Field Researchers

How I helped an AI-powered qualitative research platform onboard complex clients — from NGOs and research institutions to organizations conducting social impact research across India and beyond.

Role
Service Designer
Company
Dots by Ooloi Labs
Duration
6 Months
Focus
Service Design · Research Ops · AI Platform
MAD Storytelling Initiative — Dots Platform

Image: Dots by Ooloi Labs — getdots.in

A social enterprise building AI tools for people doing real work in the field

Dots by Ooloi Labs is a social enterprise building an AI-powered qualitative research platform for organizations working on public health, education, climate, and gender equity. Their users aren't typical SaaS customers — they're field researchers, NGO teams, and social scientists collecting data in complex, often resource-constrained environments across the Global South.

I joined as a Service Designer on the Strategy and Partnerships team, working directly with the two founders. My first major assignment was the Make A Difference (MAD) Storytelling Initiative — a project I owned end-to-end, from the first discovery call to a live platform with 2,000+ active users.

Onboarding clients who couldn't afford for the platform to get it wrong

The organizations using Dots weren't experimenting with a new tool for fun. MAD runs a volunteer network of 2,000+ young leaders across 50+ chapters in India, working with children in need of care and protection. They needed a platform to capture authentic volunteer stories at scale — qualitative data that could drive program decisions and strengthen their community of practice. A broken tagging feature or a platform crash mid-session didn't just mean a bug report. It meant lost data, delayed research, and eroded trust.

The challenge: configure Dots from scratch to match MAD's specific research workflow — role-based access for contributors, chapter leads, admins, and public viewers; AI-assisted annotation; WhatsApp chatbot integration; and meta-tagging by region and theme — and make sure everything worked before 2,000 volunteers touched it.

2,000 volunteers. 50+ chapters. 5,000 student beneficiaries. This wasn't a prototype — it was live infrastructure for real communities.
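
To make that configuration brief concrete, here is a minimal sketch of what a role and tagging setup along MAD's lines could look like. Everything in it is an illustrative assumption: the role names mirror the brief, but the permission flags and tag vocabularies are mine, not Dots' actual schema.

```typescript
// Illustrative sketch only: role names mirror MAD's structure, but the
// permission flags and tag vocabularies are assumptions, not the actual
// Dots configuration schema.

type RoleName = "contributor" | "chapterLead" | "admin" | "publicViewer";

interface RoleConfig {
  canSubmitStories: boolean; // capture new stories (e.g. via WhatsApp)
  canAnnotate: boolean;      // apply AI-assisted codes and tags
  canManageUsers: boolean;   // batch-register volunteers by chapter
  canViewPublished: boolean; // read-only access to published stories
}

// Role-based access mapped onto MAD's volunteer structure.
const roles: Record<RoleName, RoleConfig> = {
  contributor:  { canSubmitStories: true,  canAnnotate: false, canManageUsers: false, canViewPublished: true },
  chapterLead:  { canSubmitStories: true,  canAnnotate: true,  canManageUsers: false, canViewPublished: true },
  admin:        { canSubmitStories: true,  canAnnotate: true,  canManageUsers: true,  canViewPublished: true },
  publicViewer: { canSubmitStories: false, canAnnotate: false, canManageUsers: false, canViewPublished: true },
};

// Meta-tagging vocabulary: every story carries a region and one or more
// themes so insights can be sliced by geography and program area.
const metaTags = {
  region: ["north", "south", "east", "west"],                       // hypothetical buckets
  theme:  ["education", "care-and-protection", "youth-leadership"], // hypothetical themes
};
```

The point is the mapping: each platform role corresponds to a real position in MAD's volunteer network, which is exactly what the process below works through.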

From client call to live platform — how the work actually flowed

01
Discovery — understanding the research context
I ran discovery sessions with the MAD team alongside the founders — understanding their volunteer structure (contributors, chapter leads, admins), how stories were currently captured, and what insights they needed to surface. The goal wasn't just feature requirements — it was understanding their movement's logic so the platform could serve it.
02
Configuration — setting up the platform from scratch
With direct access to the staging environment, I configured the platform end-to-end for MAD: user roles and batch registration by chapter and location, meta-tagging by region and theme, AI annotation and coding setup, and collection page structure. Every decision mapped back to how MAD's volunteer network actually operated — not a generic template.
03
Stress testing — finding where things break
Before MAD's team saw the platform, I stress-tested every feature they'd use — tag selection, annotation tools, WhatsApp chatbot flows, user permissions, OAuth with Platform Commons; a sketch of that kind of permissions sweep follows these steps. I documented every bug and crash, worked with the product team to resolve them, and re-tested until the environment was stable. 2,000 volunteers were going to use this. There was no room for a broken experience at launch.
04
Iteration — closing the loop with clients
After MAD's team reviewed the staging environment, feedback came back and the cycle repeated — adjust, retest, present. This continued until there was full alignment between what MAD needed and what the platform delivered. I then handed off a fully validated environment to the product team for the production build, which rolled out to 2,000+ volunteers.
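
The permissions testing in step 03 is easiest to picture as an exhaustive role-by-action sweep. The sketch below is a hypothetical version: checkAccess stands in for however the staging environment answers "can this role perform this action?", and the expected matrix restates the role configuration above. None of this is a real Dots API.

```typescript
// A minimal permissions sweep: try every role against every action and
// compare the outcome to an expected access matrix. `checkAccess` is a
// caller-supplied probe standing in for the staging environment; it is
// not a real Dots API.

type Role = "contributor" | "chapterLead" | "admin" | "publicViewer";
type Action = "submitStory" | "annotate" | "manageUsers" | "viewPublished";

// Expected matrix, restating the role configuration agreed with the client.
const expected: Record<Role, Record<Action, boolean>> = {
  contributor:  { submitStory: true,  annotate: false, manageUsers: false, viewPublished: true },
  chapterLead:  { submitStory: true,  annotate: true,  manageUsers: false, viewPublished: true },
  admin:        { submitStory: true,  annotate: true,  manageUsers: true,  viewPublished: true },
  publicViewer: { submitStory: false, annotate: false, manageUsers: false, viewPublished: true },
};

async function sweepPermissions(
  checkAccess: (role: Role, action: Action) => Promise<boolean>,
): Promise<string[]> {
  const failures: string[] = [];
  for (const role of Object.keys(expected) as Role[]) {
    for (const action of Object.keys(expected[role]) as Action[]) {
      const actual = await checkAccess(role, action);
      if (actual !== expected[role][action]) {
        failures.push(`${role} / ${action}: expected ${expected[role][action]}, got ${actual}`);
      }
    }
  }
  return failures; // empty means the access matrix held everywhere
}
```

An empty failures list means the matrix held everywhere; anything else is the kind of finding that went into the bug documentation.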
MAD Platform — Story cards interface

The live MAD platform — story cards tagged by region, chapter and theme. Image: Dots by Ooloi Labs

Benchmarking the competitive landscape of qual research tools

Alongside the MAD project, I led a competitive analysis of the qualitative research tools market — Dovetail, NVivo, and MAXQDA. The goal wasn't just to list features. It was to understand why researchers preferred specific tools, what workflows they were designed around, and where Dots could meaningfully differentiate — particularly in AI-assisted coding and annotation for field teams.

I went deep into each platform's annotation capabilities, AI features, collaboration tools, and pricing models — building a framework the founders could use to prioritize the Dots product roadmap.
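
One way to picture such a framework: score each tool on the dimensions of the deep dive and weight the dimensions by how much they matter to field teams. The sketch below uses placeholder weights and scores, loosely consistent with the key finding that follows; they are not the actual analysis.

```typescript
// Sketch of a weighted comparison framework. The dimensions come from the
// deep dive (annotation, AI, collaboration, pricing); the weights and 1-5
// scores below are placeholders, not the actual findings.

type Dimension = "annotation" | "aiFeatures" | "collaboration" | "pricing";

// How much each dimension matters to Dots' field-team audience (sums to 1).
const weights: Record<Dimension, number> = {
  annotation: 0.3,
  aiFeatures: 0.3,
  collaboration: 0.25,
  pricing: 0.15,
};

// Placeholder scores, roughly in line with the key finding below.
const scores: Record<string, Record<Dimension, number>> = {
  Dovetail: { annotation: 4, aiFeatures: 3, collaboration: 5, pricing: 3 },
  NVivo:    { annotation: 5, aiFeatures: 3, collaboration: 2, pricing: 2 },
  MAXQDA:   { annotation: 5, aiFeatures: 3, collaboration: 3, pricing: 2 },
};

// Weighted total per tool: a single number to rank against when deciding
// which roadmap gaps are worth owning.
function weightedScore(tool: keyof typeof scores): number {
  return (Object.keys(weights) as Dimension[]).reduce(
    (sum, d) => sum + weights[d] * scores[tool][d],
    0,
  );
}
```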

Competitive matrix
Feature comparison
Key Finding
Dovetail wins on collaboration and speed. NVivo wins on depth and academic rigor. The gap Dots could own: AI-assisted coding that doesn't require a PhD to configure — designed for field teams, not just researchers.

The MAD platform goes live — 2,000+ volunteers, zero critical issues

The MAD Storytelling Initiative was my first assigned project, and I owned it from the first discovery call to production launch. The platform rolled out to over 2,000 volunteers across 50+ chapters — with role-based access, AI-assisted annotation, WhatsApp chatbot story capture, and regional meta-tagging all live and functioning as designed.

The WhatsApp chatbot — which interactively captures volunteer stories and generates structured data for analysis — was in final configuration at handoff, with the core platform already serving MAD's full volunteer base. No critical issues post-launch. The research workflow was preserved exactly as MAD's team needed it.
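
For a sense of what "structured data for analysis" means coming out of a chat flow, here is a plausible record shape. The field names and example values are my assumptions for illustration, not the chatbot's actual schema.

```typescript
// Hypothetical shape of one story record produced by the WhatsApp flow.
// Field names and values are illustrative, not the actual Dots schema.

interface StoryRecord {
  storyId: string;
  contributorId: string; // the volunteer who shared the story
  chapter: string;       // one of MAD's 50+ chapters
  region: string;        // meta-tag for geographic slicing
  themes: string[];      // meta-tags for program areas
  transcript: string;    // the narrative captured turn by turn in chat
  capturedAt: string;    // ISO timestamp of the chat session
}

// Each completed chat session yields one record that the platform's
// AI-assisted annotation and coding can then run over.
const example: StoryRecord = {
  storyId: "story-0001",
  contributorId: "vol-0042",
  chapter: "bengaluru",
  region: "south",
  themes: ["education"],
  transcript: "One of my students finished her first full book this month...",
  capturedAt: "2025-01-12T10:30:00Z",
};
```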

MAD volunteers at a community session

MAD volunteers at a community session — the people the platform was built to serve. Image: Make A Difference / Dots by Ooloi Labs

Six months at a social impact startup taught me things no classroom could

01
Design ops is design
The process of getting a product from concept to a real user's hands is a design problem. Configuring platforms, writing QA protocols, managing handoffs — these are just as much design decisions as any interface choice.
02
Context changes everything
Working with NGOs and field research teams taught me that the same feature means something completely different depending on who's using it. A tagging system for a PhD researcher is not the same as one for a field worker in rural India.
03
AI tools need human translation
The AI features in Dots were powerful — but only when someone had done the work of mapping the client's research logic onto the system. AI doesn't remove the need for service design. It makes it more important.
04
Speed and rigor aren't opposites
In a fast-moving startup, I learned to stress test faster, document smarter, and know which corners could be cut and which absolutely couldn't. That judgment is something you only develop by doing the work.