A recent article in The New York Times (“Many Psychology Findings Not as Strong as Claimed, Study Says” by Benedict Carey, Aug. 27, 2015), as well as a Science News article (“Psychology results evaporate upon further review: Surprising reports, findings with marginal statistical significance least likely to be reproduced, study concludes” by Bruce Bower, August 27, 2015), highlight the problem of replicability in behavior science research. Thomas Insel, in his p-Hacking Blog (November 14, 2014), addressed this problem for behavior research some time ago, and Brian Koehler of the International Society for Psychological and Social Approaches to Psychosis (ISPS) brought this issue to the attention of ISPS members.
My comment on this issue, which I am reproducing from the ISPS Listserv communication, is as follows:
“Besides what Thomas Insel (NIMH Director) rightly pointed out in his p-Hacking Blog (where one can find supporting evidence for nearly any idea if one reanalyzes the same data repeatedly from various angles to arrive at the desired p value to justify one’s hypothesis, notwithstanding the idea that “the existential reality” is multifaceted, whereby every unique human perspective or thought may find support and validity from that person’s unique experiential base if one explores that individual perspective exhaustively), the problem with replication in behavioral research stems from the inherent variability of human behaviors, to which one can add the difficulty of designing controlled studies with human beings at all.
Human behaviors, as we all know, are greatly influenced by personal observations, interpretations, intuitions, beliefs, thoughts, imaginations, judgments, and the like, and they vary greatly among individuals and subcultures. Any outside measure employed to observe us influences or changes our behavior, much as an instrument changes the behavior of the individual electrons it observes and predicts (à la Heisenberg’s uncertainty principle). We spend a great deal of our daily time reading and appreciating fictional writing of all sorts (novels, poems), watching movies, and appreciating the artistic work of others, and we are guided by our imaginations, internal ruminations involving autistic-like imagery, personal interpretations, faiths, political and social beliefs, values, etc., much of which cannot be subjected to criteria of objectivity beyond the fact that these things are “appealing” to us when we engage in them or share these experiences with others as appropriate. This is where the “soft science” element of some behavioral science studies may come from, but that should not make “soft science” less important than “hard science,” as both greatly influence our everyday conscious life.
Many so-called “soft science” studies may present “creative ideas” that make intuitive and appealing sense to others, have the potential to influence others positively, and may inspire others to follow up with future investigations or to apply the idea in their practice on an “evaluative” (pilot study) basis. These studies should have a legitimate place in professional journals, whether or not they are themselves replicable, on the basis of generating creative and useful ideas. There is also the potential for future studies to emerge from such initial creative thoughts. If one were to follow the criterion of reproducibility rigorously in conducting research and publication, it could severely limit creativity in research and innovative thinking, as it is often very difficult to conduct studies with human subjects except in very limited and artificially controlled environments. Even with very well designed experimental studies, free of the p-hacking issue, the results may still have limited application to real-life situations because of the gap between efficacy and effectiveness arising from the variability of human conditions.
We know that many behavior science researchers are under pressure to conduct experimentally designed studies (for which more funding is also available) and to get their studies accepted for publication by following the gold standard of the experimental study model. But this may limit their focus to a much narrower field of investigation (sometimes bordering on what may appear to be very “esoteric” topics with no immediate or long-term practical implications), or it may motivate researchers to use p-hacking to support an intuitive hypothesis and justify their investment in a research project idea. (I am not saying this is always the case!) Moreover, many of these published results, even with no p-hacking present, may end up having very limited practical value. My personal impression from going through some of the most esteemed behavior science journals is that, while I am often impressed by many extremely well designed studies, in which the rationale and design appear logical and are articulately presented, with comprehensive literature reviews, an extensive data collection process, and statistically clean, impressively presented data, they often end up making speculative hypotheses about underlying brain or psychological functioning, with conclusions that the findings are equivocal and that “more research needs to be done” in the area. Often the authors do not venture to suggest potential practical implications. Many of you may have the same feeling.
If an idea makes sense to others and is presented without a “well controlled and reproducible experimental study method,” but appears to have a “reasonable observational data base” (which may include case studies, behavioral observations by some objective criteria, and “client/subject survey data”), and is considered by “others” (the reviewers of a journal) to merit further investigation and potential follow-up application to real-life situations, with “desirable benefits” to others and society, and, furthermore, is considered to add to the collective creative thinking process and to stimulate others to undertake further investigative research, then it should be considered for review and possible publication by professional journals. One may note that the major psychological models of Freud, Piaget, and Kohlberg were initially developed from case studies and observations (supplemented with their authors’ own phenomenological experiences), and, when published, were intuitively appealing to others and inspired many to adopt the theories, or elements thereof, in their own clinical work, research, further writing, and investigations. Much early behavior science research, including Skinner’s learning theory as well as other learning paradigms, was identified through animal research in controlled experimental settings and then extrapolated to human behaviors; its application to human behaviors and research followed much later. We know that many works of science fiction, in writing and in film, have likewise generated creativity, research, and human inventions long after their initial publication.
The point I am making is that it is better to acknowledge that many behavior science studies may not meet the criterion of “reproducibility,” but their acceptance and validity should not be valued less. They should be judged more by whether the ideas presented make “intuitive” sense in the context of our present knowledge base, whether they offer “potential benefit” to people, and whether they are worth following up with further investigation, research, or possible implementation in practice, provided criteria for outcome assessment in real-life situations are identified. As we know, consumer surveys are now universally used to judge whether a given “product,” including a human service product, is useful to people. Using these criteria, it will be easy for others to evaluate the applicability and usefulness of a “novel” idea or approach.
If behavior science accepts which part of it is “hard science,” meeting the replicability criterion, and which part is “soft,” and both of these sciences, “hard” and “soft,” are viewed as valued pursuits of human knowledge, with access to funding and acceptance for publication in professional journals, we may have more productive and creative publications in the field, which may benefit us all through this sharing of ideas.
Just food for thought on this difficult and “complicated” issue, which is laden with the politics of scientific investigation!”