Meta-Analysis
R-based statistical engine for systematic reviews — forest plots, funnel plots, publication bias tests, subgroup analysis, and meta-regression, all without writing code.
Publication-ready forest plots. Rigorous statistics. Zero lines of R code.
Meta-analysis is the quantitative heart of a systematic review — the step where individual study results become pooled estimates. It's also where most researchers hit a wall: learning R, debugging metafor scripts, formatting plots for journal requirements. mapped's R-based statistical engine handles the computation while giving you full control over model specification and interpretation.
Statistical Models
mapped supports the full range of meta-analytic models:
Fixed-Effects Model
Assumes all studies estimate the same underlying effect, weighting each study by its inverse variance (Mantel-Haenszel and Peto methods are also available). Appropriate when:
- Studies share similar populations, interventions, and designs
- Heterogeneity is expected to be minimal
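The inverse-variance arithmetic behind fixed-effect pooling is compact enough to sketch. This is a minimal Python illustration of the math with made-up effect sizes, not mapped's implementation (which runs in R):

```python
import math

def fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooling: weight each study by 1/SE^2."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, se_pooled, ci

# Three hypothetical studies reporting mean differences with standard errors
pooled, se, (lo, hi) = fixed_effect([0.30, 0.50, 0.40], [0.10, 0.20, 0.15])
```

Note how the most precise study (SE = 0.10) dominates: its weight is four times that of the SE = 0.20 study.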
Random-Effects Model
Assumes the true effect varies across studies. mapped offers the DerSimonian-Laird, REML, and maximum-likelihood estimators of between-study variance, plus the Knapp-Hartung adjustment for confidence intervals. Appropriate when:
- Studies differ in populations, settings, or intervention details
- Heterogeneity is expected and meaningful
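The classic DerSimonian-Laird estimator illustrates how a random-effects model differs from a fixed-effect one: it estimates the between-study variance τ² from Cochran's Q and adds it to each study's variance before re-weighting. A hedged sketch with invented data (not mapped's R code):

```python
import math

def dersimonian_laird(effects, ses):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = [1 / se**2 for se in ses]
    k = len(effects)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Cochran's Q around the fixed-effect estimate
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # truncated at zero
    # Re-weight with between-study variance added to each study's variance
    w_star = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, se_pooled, tau2

pooled, se, tau2 = dersimonian_laird([0.10, 0.55, 0.40, 0.82],
                                     [0.12, 0.15, 0.10, 0.20])
```

Because τ² widens every study's variance, the random-effects pooled SE is larger than its fixed-effect counterpart, giving wider and more conservative confidence intervals.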
Mixed-Effects Model
Combines fixed moderator effects with random study-level effects. Used for meta-regression and subgroup analysis with continuous or categorical moderators.
Effect Measures
mapped supports all standard effect measures:
| Measure | Use case |
|---|---|
| Mean Difference (MD) | Continuous outcomes, same measurement scale |
| Standardized Mean Difference (SMD) | Continuous outcomes, different measurement scales |
| Risk Ratio (RR) | Binary outcomes, relative risk |
| Odds Ratio (OR) | Binary outcomes, especially case-control studies |
| Hazard Ratio (HR) | Time-to-event outcomes |
| Risk Difference (RD) | Binary outcomes, absolute difference |
| Correlation (r) | Association between continuous variables |
Select the appropriate measure for your data, and mapped handles the transformation, weighting, and pooling.
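As one concrete example of the transformation step, a (log) odds ratio and its standard error follow directly from a 2×2 table. The function and counts below are illustrative, not mapped's API:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its SE from a 2x2 table:
    a/b = events/non-events in the treatment arm, c/d in the control arm.
    Pooling is done on the log scale, then back-transformed with exp()."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's formula
    return log_or, se

# Hypothetical trial: 15/100 events in treatment, 25/100 in control
log_or, se = log_odds_ratio(15, 85, 25, 75)
odds_ratio = math.exp(log_or)
```

Ratio measures (OR, RR, HR) are pooled on the log scale because their sampling distributions are closer to normal there; the same principle applies to correlations, which are usually pooled after Fisher's z-transformation.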
Forest Plots
The signature visualization of any meta-analysis. mapped generates publication-ready forest plots that include:
- Individual study effect estimates with 95% confidence intervals
- Study weights (proportional to box size)
- Pooled effect estimate (diamond)
- Heterogeneity statistics (I², τ², Q statistic, p-value)
- Prediction interval (optional, for random-effects models)
- Subgroup subtotals (when subgroup analysis is performed)
Plots are fully customizable:
- Font size, colors, and layout
- Study ordering (by year, effect size, weight, or custom)
- Label formatting and axis ranges
- Export to PNG, SVG, PDF, or EPS for journal submission
Funnel Plots
Funnel plots visualize the relationship between study effect sizes and precision, helping detect publication bias:
- Standard funnel plot — effect vs. standard error
- Contour-enhanced funnel plot — overlays significance regions
- Trim-and-fill funnel plot — shows imputed missing studies
Asymmetry in the funnel plot suggests potential publication bias — a critical issue that must be addressed in your manuscript.
Publication Bias Tests
Beyond visual inspection, mapped runs formal statistical tests:
| Test | What it detects |
|---|---|
| Egger's regression | Small-study effects (standard method) |
| Begg's rank correlation | Correlation between effect size and variance |
| Trim-and-fill | Estimates number and effect of missing studies |
| Fail-safe N | How many null studies would be needed to nullify the effect |
| P-curve analysis | Whether the distribution of p-values suggests genuine effects |
Results are automatically reported with the interpretation needed for your manuscript.
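To make the most common of these concrete: Egger's test regresses the standardized effect (effect/SE) on precision (1/SE); an intercept far from zero signals small-study effects. A minimal sketch with invented data — the full test also computes the intercept's standard error and a t-statistic, which this illustration omits:

```python
def egger_intercept(effects, ses):
    """Egger's regression sketch: regress standardized effect (y/SE) on
    precision (1/SE) by ordinary least squares and return the intercept.
    A non-zero intercept suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precisions
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx)**2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx

# Hypothetical pattern: smaller studies (larger SE) show larger effects
b0 = egger_intercept([0.80, 0.60, 0.40, 0.35, 0.30],
                     [0.40, 0.25, 0.15, 0.12, 0.10])
```

Here the clearly positive intercept reflects the built-in small-study effect in the example data.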
Subgroup Analysis
Test whether the effect differs across predefined subgroups:
- Categorical moderators — study design, geographic region, risk of bias level, intervention type
- Between-group heterogeneity test — quantifies whether subgroup differences are statistically significant
- Subgroup forest plots — separate pooled estimates per group with a test for interaction
mapped ensures subgroup analyses are pre-specified (to avoid data dredging) and correctly interpreted.
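The between-group test works by pooling each subgroup separately, then asking whether the subgroup estimates themselves share one common effect. A fixed-effect sketch with invented data (function names and numbers are illustrative, not mapped's API):

```python
import math

def fe_pool(effects, ses):
    """Fixed-effect inverse-variance pooling: returns (estimate, SE)."""
    w = [1 / s**2 for s in ses]
    est = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    return est, math.sqrt(1 / sum(w))

def q_between(groups):
    """Between-group heterogeneity Q: pool each subgroup, then measure
    how far the subgroup estimates sit from their common pooled value.
    Compare against chi-square with (n_groups - 1) degrees of freedom."""
    pooled = [fe_pool(e, s) for e, s in groups]
    w = [1 / se**2 for _, se in pooled]
    overall = sum(wi * est for wi, (est, _) in zip(w, pooled)) / sum(w)
    return sum(wi * (est - overall)**2 for wi, (est, _) in zip(w, pooled))

groups = [
    ([0.20, 0.30, 0.25], [0.10, 0.12, 0.08]),  # e.g. randomized trials
    ([0.60, 0.70],       [0.15, 0.20]),        # e.g. observational studies
]
qb = q_between(groups)
```

With two subgroups there is one degree of freedom, so a Q above the chi-square cutoff of 3.84 indicates a significant subgroup difference at the 5% level.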
Meta-Regression
For continuous moderators or more complex relationships:
- Single-predictor models — does year of publication or sample size predict effect?
- Multi-predictor models — multiple moderators simultaneously
- Bubble plots — visualize the relationship between a moderator and effect size
- R² analog — proportion of heterogeneity explained by the moderator
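At its simplest, a single-predictor meta-regression is weighted least squares with inverse-variance weights. The sketch below fits a fixed-effect slope for a centered publication-year moderator using hypothetical data; a full mixed-effects fit would also estimate residual τ², which this illustration omits:

```python
def meta_regression(effects, ses, moderator):
    """Single-moderator meta-regression via weighted least squares
    (weights = 1/SE^2). Returns (intercept, slope)."""
    w = [1 / s**2 for s in ses]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, moderator)) / sw
    my = sum(wi * y for wi, y in zip(w, effects)) / sw
    sxx = sum(wi * (x - mx)**2 for wi, x in zip(w, moderator))
    sxy = sum(wi * (x - mx) * (y - my)
              for wi, x, y in zip(w, moderator, effects))
    slope = sxy / sxx
    return my - slope * mx, slope

# Does publication year (centered at a reference year) predict effect size?
intercept, slope = meta_regression(
    [0.90, 0.70, 0.50, 0.40],     # effect sizes
    [0.20, 0.15, 0.10, 0.12],     # standard errors
    [-6, -2, 1, 4])               # years relative to the reference year
```

A negative slope here would match the commonly observed pattern of effects shrinking in later, larger studies.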
Sensitivity Analysis
Test the robustness of your findings:
- Leave-one-out analysis — re-run the meta-analysis excluding each study in turn
- Influence diagnostics — identify studies with disproportionate impact on the pooled estimate
- Cumulative meta-analysis — see how the pooled estimate changes as studies are added chronologically
- Exclusion by quality — compare results when restricting to high-quality studies only
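Leave-one-out analysis is the simplest of these to picture: re-pool the data k times, omitting one study each time, and watch how the estimate moves. A fixed-effect sketch with invented data, where the fourth study is a deliberate outlier:

```python
def fe_pool(effects, ses):
    """Fixed-effect inverse-variance pooled estimate."""
    w = [1 / s**2 for s in ses]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

def leave_one_out(effects, ses):
    """Re-pool the meta-analysis k times, omitting one study each time."""
    results = []
    for i in range(len(effects)):
        e = effects[:i] + effects[i + 1:]
        s = ses[:i] + ses[i + 1:]
        results.append(fe_pool(e, s))
    return results

effects, ses = [0.30, 0.50, 0.40, 1.20], [0.10, 0.20, 0.15, 0.25]
loo = leave_one_out(effects, ses)
# A large shift when study i is dropped flags that study as influential
```

Dropping the outlying fourth study moves the pooled estimate noticeably downward, which is exactly the signal an influence diagnostic is designed to surface.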
Heterogeneity Assessment
mapped reports comprehensive heterogeneity statistics:
- Q statistic — tests the null hypothesis that all studies share one true effect
- I² statistic — percentage of variability due to true differences rather than chance
- τ² (tau-squared) — absolute measure of between-study variance
- Prediction interval — range where the true effect of a future study would likely fall
Interpretation guidelines are displayed alongside the statistics — no need to look up thresholds.
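The Q and I² statistics above are related by a one-line formula: I² is the share of Q that exceeds its chance expectation (its degrees of freedom). A minimal sketch with invented data, leaving τ² and the prediction interval to a full random-effects fit:

```python
def heterogeneity(effects, ses):
    """Cochran's Q around the fixed-effect estimate, and I^2 as the
    percentage of variability beyond what chance alone would produce."""
    w = [1 / s**2 for s in ses]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled)**2 for wi, y in zip(w, effects))
    df = len(effects) - 1                 # Q ~ chi-square(df) under homogeneity
    i2 = max(0.0, (q - df) / q) * 100     # truncated at 0%
    return q, i2

q, i2 = heterogeneity([0.10, 0.55, 0.40, 0.82], [0.12, 0.15, 0.10, 0.20])
```

By the usual rule of thumb, an I² around 75% would be described as substantial to considerable heterogeneity, favoring a random-effects model.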
Export and Reporting
All outputs are designed for direct manuscript insertion:
- Plots: PNG, SVG, PDF, EPS
- Tables: Word, Excel, CSV, LaTeX
- Statistical reports: formatted text blocks ready for Results sections
- Raw data: an equivalent R script, so you can verify or extend the analysis independently
Why This Step Matters
A meta-analysis is only as credible as its methodology. Incomplete heterogeneity assessment, missing publication bias tests, or poorly specified models are the most common reasons for major revisions. mapped runs all the standard tests, visualizations, and sensitivity analyses that peer reviewers expect — in a guided workflow that doesn't require statistical programming expertise.
Next step: With your analysis complete, learn about Collaboration → to manage your team throughout the process.