
Myths About Sexual Assault

There is no shortage of politicians, activists, or media personalities broadcasting inaccurate statistics about sexual assault to drive a political or ideological agenda. Yet, amidst the fervor of advocacy and political discourse, how often do we pause to scrutinize the validity of these numbers? Unsurprisingly, an evaluation of such statistics reveals several misconceptions, such as the claims that "1 in 5" female students are sexually assaulted while in college and that only 2-10% of rape allegations are false.

1 in 5 Are Sexually Assaulted??

The 1 in 5 statistic is derived from a 2007 survey funded by the Department of Justice, conducted on two campuses, and titled the Campus Sexual Assault (CSA) Study. The study consisted of an online voluntary response survey with a low response rate (42%) and a high likelihood of response bias (such surveys tend to attract participants with strong opinions on the topic). The CSA study also used broad, ambiguous definitions of sexual assault that encompassed a range of behaviors open to interpretation, such as "forced kissing," "events you think (but are not certain) happened," or answers of "don't know" about what happened, without requiring that the sexual contact was unwanted at the time. Additionally, the CSA study was never published in a peer-reviewed journal; it was made available to the public on the DOJ website and therefore was likely not peer-reviewed for accuracy. After it was posted, published critiques identified flaws in the methodology employed (e.g., Schow A. 2014). Even the authors of the CSA study caution against extrapolating the results nationwide and note that the findings reflect only the two campuses where the survey was conducted (Krebs C. et al. 2014). A special report by the Bureau of Justice Statistics found a very different rate of sexual assault among college females: 6.1 per 1,000. The authors suggest that the difference in outcome is mainly due to differences in methodology (Sinozich S. et al. 2014).
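To see how far apart these two estimates are, it helps to put them on the same scale. The short sketch below is a rough, illustrative calculation (not part of either study); keep in mind that the surveys differ in definitions, reference periods, and populations, so the figures are not strictly comparable:

```python
# Rough, illustrative comparison of the two published estimates on a common
# per-1,000-students scale. This is not part of either study; the surveys
# differ in definitions, reference periods, and populations, so the figures
# are not strictly comparable.

csa_rate = 1 / 5          # CSA study: "1 in 5" while in college
bjs_rate = 6.1 / 1000     # BJS special report: 6.1 per 1,000

csa_per_1000 = csa_rate * 1000    # 200 per 1,000
bjs_per_1000 = bjs_rate * 1000    # 6.1 per 1,000

print(f"CSA estimate: {csa_per_1000:.0f} per 1,000 students")
print(f"BJS estimate: {bjs_per_1000:.1f} per 1,000 students")
print(f"The CSA figure is roughly {csa_per_1000 / bjs_per_1000:.0f} times higher")
```

Even allowing for the methodological differences noted above, a gap of roughly thirtyfold illustrates why the design and source of a survey matter so much when its numbers are repeated as fact.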

Only 2 to 10% of Rape Allegations Are False??

Please see our evaluation of this here.

Statistical Misrepresentation

Many others have raised concerns about the misrepresentation of these statistics (e.g., Schow A. 2014, Kessler G. 2014, Fox J. et al. 2014, Factual Feminist 2015, FreedomToons 2019), and some have cleverly relayed their findings in short (~5-minute) videos, linked below for you to view:

Only 2% Of Rape Accusations Are False???

Sexual assault myths: Part 1 | FACTUAL FEMINIST

Sexual assault myths: Part 2 | FACTUAL FEMINIST

The frequency with which a statistic is reported does not determine its accuracy. As we and others have shown, studies can be misrepresented, leading people to believe something that is not true or not the whole truth. Below is a list of things to consider when statistics are thrown at you, to get a sense of whether they are accurate and reliable:

  1. Promoter Credibility: Consider the reputation and credibility of the person quoting the statistic. Do they have an agenda, or are they promoting an ideology? Ask for the source of the statistic or how they know what they are claiming to know.

  2. Peer Review: Determine if the study has undergone peer review, where experts in the field (who could identify any shortcomings) evaluate its methodology, findings, and conclusions before publication. Peer-reviewed studies tend to be more reliable.

  3. Author Credentials: Assess the qualifications and affiliations of the authors. Are they recognized experts in the field? Do they have relevant academic or professional credentials, including rigorous academic training (e.g., degrees in a hard science or math) that requires using the scientific method?

  4. Critiques or Commentary: Look for responses to the study from other experts. Commentary, critiques, and follow-up studies can provide valuable insights and help contextualize the original findings.

  5. Publication Credibility: Consider the reputation and credibility of the journal or publisher where the study is published. Reputable journals typically have rigorous editorial standards and credible editorial boards; examples include Science, The Lancet, and Proceedings of the National Academy of Sciences. Beware of "predatory journals," which are publications that exploit publishing for financial gain while disregarding academic rigor and quality standards; one example is the Scientific Institute For Advanced Training and Studies (Beall's List 2024). Also, some journals may lack academic rigor and quality peer review; one suggested example is Fat Studies (Soave R. 2018).    

  6. Funding Source: Examine who funded the research. Potential biases may exist if organizations with vested interests in specific outcomes fund the study. Transparency about funding sources is essential for evaluating potential conflicts of interest.

  7. Conflict of Interest Disclosure: Check the authors' disclosure statements for any conflicts of interest that could influence the research findings. Transparency about potential biases is essential for assessing the credibility of the study.

For a more detailed analysis of the research:

  1. Sample Size and Selection: Evaluate the sample size and how participants were selected. Larger, more diverse samples generally yield more reliable results. Randomized controlled trials (RCTs), where feasible, are often considered the gold standard of research design.

  2. Methodology: Scrutinize the research methods used, including experimental design, data collection procedures, and statistical analyses. Look for potential biases or flaws in the methodology that could affect the validity of the results; for example, surveys conducted with live interviewers are generally considered more accurate than online questionnaires.

  3. Replicability: Consider whether other researchers have replicated the study's findings. Reproducibility is a crucial aspect of research and lends credibility to the original findings.

  4. Conclusions Supported by Evidence: Assess whether the conclusions drawn by the authors are supported by the data presented in the study. Beware of overgeneralizations or unsupported claims.

Conclusion

The truth is that we don't know how many people have been sexually assaulted on campuses. Broadcasting inaccurate statistics will not produce safer campuses; it will heighten anxiety and could incentivize administrators to create biased procedures that punish innocent individuals, which is simply cruel and unacceptable in a just society that respects human rights. Focusing on preventing sexual assault through protective means (e.g., no alcohol on campus, security-monitored housing and event spaces, and sex education with unambiguous definitions of sexual assault and harassment) would undoubtedly be a more humane approach.

References