Effective March 1, 2025, the Journal of Marketing (JM) will adopt a new policy for reporting empirical results. The policy applies to new submissions and revisions under the new editorial team and is based on the recommendations for comprehensive reporting of statistical results laid out by McShane et al. (2024), as well as on similar guidance from other fields such as strategy, biology, and medicine.
In addition to the guidelines currently available on the journal’s submission guidelines page and on the data transparency policy page, JM will implement the following new reporting guidelines:
1. JM submissions should report actual p-values (to three decimal places) rather than threshold p-values in tables, in running text, and in hypothesis testing.
2. Submissions should refrain from adding asterisks that signify thresholds (e.g., *p < .05, **p < .01, ***p < .001) and should instead report the actual p-value in parentheses. For example, “the effect of X on Y is .20 (p = .047).” Following the convention in experimental research, p-values would be reported as, for example, “F(1, 200) = 5.27, p = .023.”
3. Parameter estimates in tables should include the standard errors.
4. Effect sizes need to be reported. This can be achieved by adding the corresponding information to results tables, figures, or the text. Please note the following:
a. Effect sizes are a means to demonstrate substantive significance and to complement the statistical evaluation of empirical findings. They help assess whether the findings are of a magnitude that matters to relevant stakeholders (e.g., managers, consumers, policy makers, other societal stakeholders).
b. Evaluating substantive significance should take into account the strength of association between focal measures and/or the impact size of one or more focal measures. JM does not prescribe the appropriate effect size measure; it is the authors’ responsibility to provide evidence for the substantive significance of their findings based on their knowledge of the topic. Researchers from different domains have used different measures. For example, in experimental research, Cohen’s d, r, eta-squared, and the odds ratio are among the most commonly used effect size metrics. In econometric research, elasticity, the standardized regression coefficient, and the unstandardized regression coefficient in combination with a predetermined change in the independent variable (one unit or one SD) are among the most commonly reported effect size metrics. The latter two are also commonly used in survey research.
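To make these reporting conventions concrete, the sketch below shows two illustrative helpers: one formats a p-value to three decimal places in the “p = .047” style used above, and one computes Cohen’s d with a pooled standard deviation, a common choice for two-group comparisons. The function names and the pooled-SD variant are our own assumptions for illustration, not JM requirements.

```python
from statistics import mean, stdev

def format_p(p):
    """Render a p-value to three decimals, dropping the leading zero
    (e.g., 0.047 -> 'p = .047'), matching the style in the examples above."""
    return "p = " + f"{p:.3f}".lstrip("0")

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation (illustrative helper).

    d = (mean1 - mean2) / pooled_sd, where pooled_sd combines the two
    sample variances weighted by their degrees of freedom."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical data for illustration only
treatment = [5.1, 4.8, 5.6, 5.3, 4.9]
control = [4.2, 4.5, 4.1, 4.7, 4.4]
print(format_p(0.047))                 # p = .047
print(round(cohens_d(treatment, control), 2))
```

Conventional benchmarks (e.g., d ≈ 0.2 small, 0.5 medium, 0.8 large) are a starting point, but as guideline 4b notes, what counts as substantively significant depends on the domain and the stakeholders involved.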
These guidelines are intended to help authors provide a more complete picture of their findings and demonstrate the potential impact of their research. Implementing Guidelines 1 and 2 reduces the incentives for p-hacking, a counterproductive behavior that has been documented across many fields of science (e.g., economics, biology, medicine). Reporting actual p-values is also more precise and facilitates meta-analyses. Guideline 3 gives readers and reviewers a better idea of the range of estimates they might reasonably expect in their own research. Finally, Guideline 4 aligns with JM’s mission to develop and disseminate substantive knowledge to relevant stakeholders, an impact that is hard to demonstrate on statistical grounds alone. All these guidelines are designed to strengthen JM’s mission of reporting robust, relevant knowledge. It is this mission that makes JM unique and impactful.
Reference
McShane, Blakeley B., Eric T. Bradlow, John G. Lynch Jr., and Robert J. Meyer (2024), “‘Statistical Significance’ and Statistical Reporting: Moving Beyond Binary,” Journal of Marketing, 88 (3), 1–19.