
Why research projects take longer than planned, and how to estimate them properly
Every research project timeline is optimistic until it isn't. Here is why the gap between plan and reality is so consistent, which phases carry the most risk, and how to build a timeline that holds up.
The gap between plan and reality
Ask any research project manager how many of their projects finished within the original timeline, and the honest answer is: fewer than clients remember, and fewer than the proposals suggested.
This is not a failure of project management. It is a structural feature of how research projects are planned. Timelines are typically built under optimistic assumptions: that recruitment will proceed as briefed, that clients will turn around feedback within 24 hours, that fieldwork will run without cancellations or no-shows. Each assumption is individually plausible. Collectively, they produce a plan that has almost no buffer for the delays that occur on most projects.
This article explains why research timelines slip, which phases carry the most risk, and how to build an estimate that survives contact with reality.
Why research timelines are systematically optimistic
The pattern is consistent enough across projects, methodologies, and agencies to suggest something structural rather than individual. Three forces drive it.
Proposals are written under competitive pressure. When a timeline is part of a pitch, the incentive is to give the client a number that looks fast. A realistic timeline with buffers built in looks slower than a competitor’s optimistic one, even if the competitor’s is unrealistic. The result is an industry-wide ratchet toward underestimation.
Dependencies are underweighted. Research projects are chains of dependent phases. Design cannot start until the brief is confirmed. Recruitment cannot start until the screener is approved. Analysis cannot start until fieldwork closes. Each dependency is a potential delay point, and delays compound rather than average out.
Recruitment is both the most variable phase and the hardest to estimate. It is where most projects lose time, and it is also where variance is highest. A B2B qualitative study targeting senior decision-makers in a niche sector can take twice as long to recruit as a consumer study with a broad target. But both get similar recruitment windows in the initial plan.
The five phases and where risk lives
A standard research project moves through five phases: design and approval, recruitment, fieldwork, analysis, and reporting. Each phase has a different risk profile.
Design and approval is driven almost entirely by client responsiveness. Writing the screener, discussion guide, or questionnaire typically takes a few days; the variable is how quickly the client reviews and approves. One round of feedback with a responsive client adds a day. Three rounds with a slow one add a week and a half. Multi-market projects add further complexity because materials often need to be adapted and approved market by market.
Recruitment is the phase most likely to run over. The causes vary: a niche target audience, lower-than-expected incidence rates, B2B access difficulties, high no-show rates. But the direction is always the same. Projects rarely recruit faster than planned; they frequently recruit slower. A realistic recruitment estimate for qualitative work includes a buffer of at least 20–40% over the base estimate, more for B2B or specialist audiences.
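As a rough sketch of that buffer rule (in Python, with illustrative multipliers; the B2B and specialist figures here are assumptions, not benchmarks):

```python
def buffered_recruitment_days(base_days, audience="consumer"):
    """Apply a recruitment buffer to a base estimate, returning a (low, high) window."""
    # The consumer range reflects the 20–40% guideline above; the B2B and
    # specialist ranges are assumed wider, since those audiences recruit slower.
    buffers = {
        "consumer": (1.2, 1.4),
        "b2b": (1.4, 1.8),
        "specialist": (1.5, 2.0),
    }
    low, high = buffers[audience]
    return round(base_days * low), round(base_days * high)

# A 10-day base estimate for a consumer audience becomes a 12–14 day window.
print(buffered_recruitment_days(10, "consumer"))  # (12, 14)
```

The point is not the exact multipliers but the shape: the buffer scales with the base estimate and widens with audience difficulty, so a quoted recruitment window should always be a range, not a single number.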
Fieldwork is more predictable for surveys, which run continuously once live, than for qualitative work, where each session is a scheduled event that can be cancelled or rescheduled. For qualitative projects, no-shows and late cancellations are routine. Building in reserve sessions from the start is more efficient than trying to reschedule them mid-field.
Analysis has a well-known property: it expands to fill available time. When the schedule is tight, analysis is compressed; when there is slack, it deepens. This means the analyst’s estimate of how long analysis will take is partly a reflection of how much time they think they have, not just how complex the data is. The practical implication is that analysis should be scoped against deliverable requirements rather than left open-ended.
Reporting is predictable in its base duration but variable because of client review rounds. Each round adds 2–4 working days in practice, not because the revisions themselves take that long but because aligning feedback, making changes, and getting sign-off involves calendar time. Projects with three client review rounds take materially longer than those with one, and this needs to be built into the timeline from the start.
The compounding problem
What makes timeline estimation particularly challenging is that delays compound. If recruitment runs two weeks over, fieldwork starts two weeks late. Analysis follows fieldwork, so it starts two weeks late too. By the time the project reaches reporting, a single recruitment delay has pushed the entire timeline back, even if every subsequent phase ran perfectly to plan.
```mermaid
flowchart TD
    A[Recruitment slips 2 weeks] --> B[Fieldwork starts 2 weeks late]
    B --> C[Analysis starts 2 weeks late]
    C --> D[Reporting starts 2 weeks late]
    D --> E[Project delivers 2 weeks late]
```

This is why a realistic timeline looks so much longer than an optimistic one. It is not just about adding buffer to each phase individually. It is about recognising that the system has low tolerance for delay at the front end.
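The same chain can be sketched as a simple sequential model: each phase starts when the previous one ends, so a delay injected anywhere arrives intact at delivery. Durations here are illustrative, in weeks:

```python
# Phases run strictly in sequence; durations are illustrative, in weeks.
phases = {"design": 2, "recruitment": 3, "fieldwork": 4, "analysis": 2, "reporting": 2}

def delivery_week(phases, delays=None):
    """Total elapsed weeks when each phase starts as the previous one ends."""
    delays = delays or {}
    return sum(duration + delays.get(name, 0) for name, duration in phases.items())

on_plan = delivery_week(phases)                      # 13 weeks
slipped = delivery_week(phases, {"recruitment": 2})  # 15 weeks
print(slipped - on_plan)  # the 2-week slip passes through untouched: prints 2
```

Because the phases form a single dependency chain, there is no averaging out: a slip in any phase shifts the delivery date by the full amount unless a later phase is deliberately compressed.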
The phases that carry the most downstream risk are the earliest ones: design approval and recruitment. A one-week delay in design approval flows through to every subsequent phase. This is why the most effective risk management in research projects happens before fieldwork starts, not during it.
Multi-market projects: the coordination multiplier
Every additional market multiplies coordination overhead in ways that are easy to underestimate. A single-market project has one set of materials to approve, one recruitment agency to manage, one fieldwork schedule to coordinate. A three-market project has at least three of each, plus alignment calls, translation requirements, and the reality that each market operates on its own timeline.
The relationship between market count and project duration is not linear, but it is consistent. Each additional market adds overhead to every phase: design, recruitment, fieldwork, analysis, and reporting. For markets in different time zones, the overhead is higher; for markets requiring local language materials, higher still.
The mistake is treating a three-market project as three single-market projects running in parallel with minimal overhead. In practice, the coordination work of aligning materials, managing multiple agencies, and integrating data from different markets adds time even when each individual market is running well.
What a realistic timeline looks like
Take a standard qualitative project: 15 in-depth interviews across two markets, with a presentation deliverable and two client review rounds. A realistic timeline typically runs 12–14 weeks. An optimistic plan might show 8–10. The difference is not padding; it is buffer for the events that occur on most projects of this type.
For a single-market online survey of 400 completions, standard consumer audience, report deliverable with one review round, 6–8 weeks is realistic. A fast execution under favourable conditions might reach 4–5 weeks. An estimate below 4 weeks for a full-service study is almost certainly optimistic.
For a mixed-methods project, quantitative phase followed by qualitative, the critical risk is the phase handover. If the quantitative phase slips, the qualitative phase starts late. If the qualitative sample is designed around findings from the quantitative phase, any delay in quantitative analysis flows directly into the qualitative design work. The safest approach is to begin qualitative recruitment during quantitative fieldwork, using interim data to refine the screener rather than waiting for the final quantitative output.
How to build a timeline that holds
A few principles that consistently produce more reliable estimates.
Use three scenarios, not one. An optimistic estimate, a realistic estimate, and a conservative estimate communicate something that a single figure cannot: the range of outcomes that are plausible given the project scope. The realistic estimate should be what you plan against. The conservative estimate is what you should be able to live with if things go wrong.
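One way to operationalise the three-scenario approach is to estimate each phase three times and sum each column into a project-level figure. The numbers below are hypothetical, for a single-market qualitative project, in working days:

```python
# Per-phase estimates in working days: (optimistic, realistic, conservative).
# All figures are hypothetical, for illustration only.
phase_estimates = {
    "design_approval": (3, 5, 10),
    "recruitment": (10, 14, 24),
    "fieldwork": (10, 12, 15),
    "analysis": (5, 8, 10),
    "reporting": (5, 8, 14),
}

def scenario_totals(estimates):
    """Sum each scenario column into a project-level total."""
    optimistic, realistic, conservative = (
        sum(col) for col in zip(*estimates.values())
    )
    return {
        "optimistic": optimistic,
        "realistic": realistic,
        "conservative": conservative,
    }

totals = scenario_totals(phase_estimates)
# Plan against the realistic figure; check you can live with the conservative one.
print(totals)  # {'optimistic': 33, 'realistic': 47, 'conservative': 73}
```

Summing per-phase optimistic figures understates risk (it assumes everything goes right at once), which is exactly why the realistic column, not the optimistic one, should be the planning baseline.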
Scope recruitment last, not first. Recruitment duration depends on audience difficulty, and audience difficulty is not always known at proposal stage. If you are proposing on a project without a confirmed screener, build in a wider range for recruitment duration and flag that it will be confirmed once the target audience is finalised.
Name the delay risks explicitly. Rather than building undisclosed buffer into each phase, it is more useful to name the specific risks that could cause delay and communicate them to clients. “Recruitment is estimated at 10–14 working days, based on a standard consumer audience. B2B or specialist audiences typically add 5–10 days.” This sets expectations without requiring you to defend a number that looks slow.
Lock the workshop date before fieldwork starts. If the deliverable includes a workshop or debrief session with multiple stakeholders, the date for that session should be confirmed before fieldwork begins. Stakeholder calendars are a constraint that cannot be managed in the final week of a project. The workshop date should drive the project schedule backwards, not be squeezed in at the end.
Build in transcription time for qualitative work. Transcription is consistently underestimated. A full day of qualitative interviews produces several hours of audio that needs to be transcribed before analysis can begin. AI-assisted transcription tools have compressed this significantly, but the time still needs to be in the schedule. Running transcription in parallel with live fieldwork, rather than batch-processing at the end, is the most efficient approach.
The honest conversation
The most useful thing a research team can do with a client at project kick-off is have an honest conversation about timeline risk. Not an apology for the schedule, but a clear explanation of where the risk lives, what would cause delays, and what the team is doing to mitigate them.
Clients who understand that recruitment is the highest-risk phase are better equipped to help with access and introductions. Clients who know that each review round adds 2–4 days are more motivated to consolidate feedback rather than sending it in instalments. Clients who see a range rather than a single date develop a more realistic relationship with the project schedule.
The research industry’s habit of presenting optimistic timelines as confident commitments does not serve anyone well. A realistic estimate, clearly explained, is more useful than an optimistic one that needs to be revised two weeks into fieldwork.
The research timeline estimator builds these realities in by default: phase-by-phase estimates across five methodologies, with the delay risks most likely to affect your specific project type.