
How to Conduct a Pilot Study for Your Dissertation

So your adviser told you that you must conduct a pilot study, but what is a pilot study? Think of a pilot study as a controlled trial run that can change your mind, your methods, and your materials before you commit to the full dissertation study.

A pilot study is not a “mini dissertation” whose purpose is to prove your hypotheses. If you treat it like a hypothesis test, you will almost always get misleading results because pilot samples are usually too small to produce stable p-values. The real value lies in practical and methodological considerations: feasibility, measurement performance, and early signals about whether your design is headed in a useful direction.

Field test versus pilot study versus “try it on a friend”

Different programs use different labels. The safest distinction to use in dissertation language is:

  • Field test: tests your procedures in the real setting (site logistics, scheduling, privacy constraints, human behavior in context).
  • Pilot study: often broader; may include preliminary analysis to estimate effect sizes, reliability, or feasibility.
  • Informal pretest: helpful for early instrument clarity (one-on-one feedback), but it is not a substitute for field conditions.

When a pilot study makes sense (and when it does not)

Conduct a pilot study when your dissertation has any of the following features:

  • Your procedures are new to your setting. You have a new intervention, training protocol, observational rubric, or workflow involving multiple people.
  • Your measures are uncertain. Your survey or assessment has not been used with your population, your interview protocol is new, or your coding scheme needs rehearsal.
  • Your assumptions are fragile. Your timeline, recruitment, retention, or access depends on gatekeepers, school calendars, shift work, clinical schedules, or privacy constraints.
  • Your analysis needs parameter estimates. You need a realistic estimate of variance, missingness, or intraclass correlation to plan the main study.

A pilot study is often unnecessary when your procedures and measures are already well established in your exact context, and your main risk is not feasibility but scope. In that case, a smaller logistics field test may be sufficient. A pilot study is a heavier lift, so it should earn its keep.

10 Steps to Conduct a Pilot Study

Step 1: Write the pilot purpose as a tight, non-hypothesis statement

A strong pilot purpose statement clearly states what you are testing and which decisions the pilot will inform. Keep it blunt.

Example structure you can adapt:

The pilot study will evaluate the feasibility of recruitment, retention, and protocol fidelity, and assess instrument performance and the preliminary variability of the primary outcome in the target setting. Pilot results will inform revisions to study procedures and support sample size planning for the main study.

That final clause matters. A pilot study should end with actionable decisions, not vague reassurance.

Step 2: Specify what “success” means before you collect data

A pilot study without decision thresholds becomes a storytelling contest. Predefine feasibility benchmarks and proceed based on evidence.

Typical benchmarks include:

  • Recruitment rate (number of eligible participants enrolling per week)
  • Retention rate (proportion who complete)
  • Protocol completion time
  • Missing data rates
  • Intervention fidelity
  • Participant burden indicators (dropouts, complaints, nonresponse)

Also define “stop rules.” If you encounter a privacy barrier, a safety issue, or a fidelity failure that cannot be resolved without a redesign, pause the pilot and revise it. That is not failure. That is the pilot doing its job.
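The benchmark-and-threshold idea can be sketched in a few lines. Everything here is illustrative: the metric names and threshold values are assumptions for the sketch, not figures from this article.

```python
# Hypothetical feasibility benchmarks. The metric names and threshold
# values below are illustrative assumptions, not prescribed standards.
BENCHMARKS = {
    "recruitment_per_week": 3.0,   # minimum eligible participants enrolling per week
    "retention_rate": 0.80,        # minimum proportion completing the protocol
    "missing_data_rate": 0.10,     # maximum tolerable missingness
}

def evaluate_pilot(observed):
    """Compare observed pilot metrics against the predefined benchmarks."""
    return {
        # Recruitment and retention are floors: higher is better.
        "recruitment_per_week": observed["recruitment_per_week"] >= BENCHMARKS["recruitment_per_week"],
        "retention_rate": observed["retention_rate"] >= BENCHMARKS["retention_rate"],
        # Missingness is a ceiling: lower is better.
        "missing_data_rate": observed["missing_data_rate"] <= BENCHMARKS["missing_data_rate"],
    }

observed = {"recruitment_per_week": 2.5, "retention_rate": 0.85, "missing_data_rate": 0.07}
print(evaluate_pilot(observed))
# → {'recruitment_per_week': False, 'retention_rate': True, 'missing_data_rate': True}
```

Writing the checks down this way forces the definition of “success” to exist before data collection, which is exactly the point of this step.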

Step 3: Choose a pilot design that matches your dissertation design

Design a pilot that resembles your main study closely enough to test your real assumptions, but keep it small enough that it does not become a second study.

Common pilot designs:

  • A single-arm pilot (everyone receives the same procedure) works well for feasibility and measurement testing.
  • A two-group pilot (comparison or randomized) is useful when you need early estimates of variance or when implementation differs across groups.
  • A crossover or within-subjects pilot can help when sample access is limited, and you want participants to serve as their own controls.
  • A qualitative pilot (interviews, focus groups, observations) is ideal for refining protocols, sharpening constructs, and testing whether your questions yield usable data.

A mixed-methods pilot can be powerful, but only if each component has a clear role. Avoid “mixed methods” as a label without a plan for integration. Pick the simplest design that answers your pilot questions.

Step 4: Decide on a pilot sample size that is defensible and realistic

Pilot sample sizes are not powered for definitive hypothesis tests. Instead, think in terms of feasibility and estimation. A practical approach is to choose a sample that can:

  • Expose operational problems (recruitment, scheduling, and missingness).
  • Provide stable enough descriptive information to plan the main study (variance estimates, reliability signals, fidelity estimates).

Common ranges in dissertation pilots look like this, with lots of variation by design:

  • Qualitative pilots often use a small set of interviews or one focus group to refine protocols.
  • Quantitative pilots often use a small number per group to check procedures and estimate variability.
  • Intervention pilots often focus on whether delivery and adherence work, not on proving outcomes.

The most defensible sentence you can write is: “The pilot sample was selected to evaluate feasibility benchmarks and to estimate key parameters needed for planning the main study, rather than to test hypotheses.”
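One concrete way a pilot variance estimate feeds main-study planning is the standard two-group sample-size approximation for a difference in means. The sigma and delta values below are hypothetical, and the defaults correspond to two-sided alpha = 0.05 with 80% power.

```python
from math import ceil

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Two-group sample size for a difference in means (normal approximation).

    sigma: pooled standard deviation estimated from the pilot (illustrative input)
    delta: smallest difference in means worth detecting
    Defaults correspond to two-sided alpha = 0.05 and 80% power.
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical example: pilot SD of 10 points, meaningful difference of 5 points
print(n_per_group(sigma=10, delta=5))  # → 63 participants per group
```

Because a pilot SD is itself a noisy estimate, many methodologists inflate it before plugging it in; treat the output as a planning anchor, not a final answer.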

Step 5: Lock your pilot materials and define what is allowed to change

Before you conduct a pilot study, label your materials as versioned documents: consent, recruitment script, instruments, interview guide, training script, fidelity checklist, and data collection workflow.

Then define “allowable changes.” For example:

  • You can reword confusing survey items, reorder questions, tighten an interview prompt, or adjust scheduling.
  • You cannot change your core construct definitions, your primary outcome, or your intervention dose without documenting a redesign decision.

A pilot study is a change engine, but it should change the right things for the right reasons.

Step 6: Align ethics and approvals with the pilot scope

A pilot study involving human participants still requires ethical oversight at most universities. Treat it as research, not rehearsal. Make sure your materials match reality:

  • Consent describes what you will collect, how long it takes, what is recorded, what is stored, who has access, and how withdrawal works.
  • Risks and mitigations are explicit, especially for sensitive topics, minors, workplaces, clinical settings, or power dynamics with gatekeepers.
  • If the pilot includes an intervention, define what you will do if a participant experiences distress or adverse effects.

A pilot study can create more risk than the main study if you “improvise” procedures. Avoid improvisation.

Step 7: Run the pilot with a real operations plan

A pilot should be executed with the same discipline you plan to use in the main study. Create a short operating protocol that includes:

  • Eligibility screening steps, contact sequence, session flow with timings, scripts, device or platform checks, and a plan for disruptions.
  • Train anyone who touches data. If you have research assistants, do a calibration session so everyone collects data the same way.
  • Test your data pipeline end-to-end. Do a dummy run that starts with recruitment and ends with a correctly named, securely stored dataset ready for analysis.
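A small piece of that dummy run can even be automated. The naming convention below (`study_site_YYYYMMDD_vN.csv`) is purely an assumed example; substitute whatever convention your own protocol defines.

```python
import re

# Assumed naming convention for stored pilot datasets:
#   study_site_YYYYMMDD_vN.csv
# This pattern is an illustrative example, not a prescribed standard.
PATTERN = re.compile(r"^[a-z]+_[a-z]+_\d{8}_v\d+\.csv$")

def check_filenames(names):
    """Return filenames that violate the convention, for a dummy-run audit."""
    return [n for n in names if not PATTERN.match(n)]

print(check_filenames(["pilot_north_20250301_v1.csv", "final data.csv"]))
# → ['final data.csv']
```

Catching a misnamed or loosely stored file at this stage is far cheaper than discovering it during main-study analysis.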

Step 8: Collect process data, not just outcome data

Most pilot failures are not “the effect was small.” Most pilot failures are “we could not run the study cleanly.” Collect process metrics alongside outcomes:

  • Enrollment and retention counts
  • Time per step
  • Number of reminders needed
  • Rates of incomplete responses
  • Reasons for withdrawal
  • Fidelity ratings and notes on setting constraints

If your study includes qualitative components, add a short debrief at the end:

  • What felt confusing, burdensome, or invasive?
  • What did participants interpret differently than you expected?
  • What was missing from your questions?

Step 9: Analyze the pilot data for learning, not victory laps

Your analysis plan should align with the pilot goals. Useful pilot analyses include:

  • Descriptive statistics for feasibility metrics (rates, timing, missingness).
  • Instrument diagnostics (item nonresponse, ceiling or floor effects, internal consistency signals when appropriate).
  • Preliminary estimates of variance and effect size direction to support planning, with careful language that these are exploratory.
  • Fidelity and adherence summaries if you have an intervention.
  • For qualitative data, a rapid coding pass to refine the codebook, improve prompt wording, and test whether themes map onto your constructs.

Avoid “statistical significance” as your headline. Pilot studies can easily produce false negatives and false positives. Focus on estimation, feasibility, and what must change.
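For instrument diagnostics, the internal-consistency signal mentioned above can be computed directly from raw item scores. This is a stdlib-only sketch of Cronbach's alpha, and the sample responses are made up for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (respondents aligned by index).

    Treat the result as a rough internal-consistency signal only;
    pilot samples are too small for a definitive reliability estimate.
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(sample_var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Made-up pilot responses: three items, five respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.9
```

In a write-up, report the value alongside item nonresponse and ceiling/floor patterns, and flag all of it as exploratory.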

Step 10: Make a go–revise–no-go decision and document changes

At the end, decide one of three things:

  • Go: feasibility benchmarks are met, and only minor edits are needed.
  • Revise: benchmarks are partly met, but specific fixes are required before scaling.
  • No-go: the design is not feasible in this setting without major changes to scope, access, measures, or procedures.

Then produce a short change log: what changed, why it changed, and which document version is now current.

This change log is one of the best pieces of evidence you can bring to a committee meeting, because it converts “trust me” into “here is what happened and what we fixed.”

How to write up the pilot study in your dissertation

A clean pilot section usually includes:

  • Purpose of the pilot and how it informed the main study.
  • Design, setting, participants, and procedures.
  • Feasibility benchmarks and results.
  • Instrument performance notes and protocol fidelity findings.
  • Revisions implemented for the main study.

Keep the tone operational. The pilot is evidence that your study is executable, ethical, and methodologically sound.

Common pilot study mistakes to avoid

  • Overclaiming results from a small sample. Treat findings as exploratory and planning-oriented.
  • Failing to predefine feasibility thresholds. Without thresholds, every outcome becomes a narrative.
  • Running a pilot that is too different from the main study. Similarity is what makes the lessons transferable.
  • Ignoring the data pipeline. A pilot is the place to catch naming issues, storage issues, and missingness patterns.
  • Treating “pilot data” as a secret stash for the dissertation results chapter. Use the pilot to refine and plan, not to sneak in conclusions.

A pilot study is one of the few research components that can be both humble and strategic. You run it to protect your main study from preventable chaos and to give your committee confidence that your dissertation is not just a good idea but a viable project.

Related Posts

Four Assumptions in Qualitative Research

Field Test or Pilot Study

Triangulation in Qualitative Research

Narrative Inquiry and Thematic Coding

Post and social media featured image by Yen Vu on Unsplash; embedded graphics © 2026, Compita Consulting, LLC