Bem discussed on Serious Inquiries Only

Automatic Transcript

I guess a blind spot a little bit, or that kind of self-deception that could be present in all kinds of areas, not just psi research. Oh, for sure. Okay. I mean, actually, that's a nice segue, because the paper that I want to talk about as an example of this is Bem's paper, and one of the reasons it's significant, that people still talk about it, is that it was one of the big catalysts for these broader discussions in psychology, social psychology in particular, about whether our standards are appropriate. So, yeah, p-hacking happens across disciplines, and it's something that, at least in psychology, we've been really actively talking about and trying to correct, especially over the past few years, in part because of that paper. The one you talked about, that was however many years ago? Yeah, 2011. Okay. Yes, so credit where credit's due, right?

I mean, so Bem's paper. This is a paper detailing ten experiments, and nine out of ten experiments have significant effects supporting psi. So right off the bat, for the power reasons we talked about a minute ago, there's reason to be suspicious of this, because given the power in Bem's studies, some people have estimated that even if, again, the effect were real, he should have only gotten like six out of ten.

Where do they get the effect size or the power that they're using to calculate that? Like, is that based on something he said? If we're just researching something that frankly isn't real, how do we have any knowledge of what the power of the effect is? So what you're assuming is that the effect sizes you actually observe in the studies are estimating the true effect size. You're sort of assuming that from the beginning, and if that's the case, say you have an effect size of 0.2, that tells you something meaningful about how often you should detect it in any given study. So when you're doing this analysis, or whoever's doing it, you're kind of granting for the sake of argument, like, okay, let's say you did detect something here. Yes. And then kind of using that to go reevaluate, on the whole, how many of these studies you should expect to see results in. Okay, that's precisely it. Yeah.

So right off the bat, there's something to suggest that there might be some selective reporting: either Bem left out some experiments, or maybe he did more analyses per study than he reported, or something. Right. And when you look at the actual effects that are reported across studies, there are some sort of weird inconsistencies. So for example, there are individual differences, or personality traits, that appear and disappear as things that predict accuracy. Extroversion shows up sometimes and not other times, and it's not clear whether Bem is just measuring different things across these studies or whether he's just neglecting to report them in some studies. That's one thing.

But another thing is, if you look at some of the specific hypotheses that he says he created, or set forth before analyzing the data, some of them don't make sense. So for example, let's look at experiment one in the study. In that study, participants are presented with images of two doors on a screen, left and right, and they're trying to guess the location of a target picture, and on different targets, I mean on different trials, the pictures have different types of imagery in them.
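To make the power argument from the exchange above concrete, here is a minimal sketch in Python. The effect size, sample size, and one-sided test are illustrative assumptions, not numbers quoted in the episode; the point is just how "he should have only gotten about six out of ten" can be computed once you take the observed effect size as an estimate of the true one.

```python
import numpy as np
from scipy import stats

d = 0.2      # assumed true effect size (Cohen's d); illustrative, not from the episode
n = 100      # assumed participants per study; also illustrative
alpha = 0.05

# Power of a one-sided, one-sample t-test: probability the test statistic
# exceeds the critical value when the true effect really is d.
df = n - 1
t_crit = stats.t.ppf(1 - alpha, df)
ncp = d * np.sqrt(n)                      # noncentrality parameter
power = 1 - stats.nct.cdf(t_crit, df, ncp)

# Under that per-study power, how many of 10 independent studies should come
# out significant, and how surprising would 9 or more of them be?
expected = 10 * power
p_nine_plus = 1 - stats.binom.cdf(8, 10, power)

print(f"power per study      ~ {power:.2f}")
print(f"expected significant ~ {expected:.1f} of 10")
print(f"P(9 or more of 10)   ~ {p_nine_plus:.3f}")
```

With these made-up numbers the per-study power comes out around 0.6, which matches the ballpark of the "about six out of ten" estimate mentioned above and is why nine significant results out of ten looks suspicious.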
So on some of them they're, like, neutral, on some of them they're romantic, and on some of them they're porn, which is fun. So he's got these erotic images, and he says he hypothesized, prior to data analysis, that accuracy on detecting the erotic pictures would be better than chance, but that on all the non-erotic pictures it would be at chance. Does anything stand out to you as strange about that prediction?
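For context on what "better than chance" means on this two-door task, here is a quick sketch of how a hit rate would be tested against the 50% guessing level. The trial and hit counts below are made up for illustration; they are not Bem's data, and this only shows the chance-level comparison, not the oddity the hosts are about to discuss.

```python
from scipy import stats

# Hypothetical counts for the erotic-image trials: NOT Bem's numbers,
# just an illustration of testing a hit rate against 50% chance.
n_trials = 36
n_hits = 22

result = stats.binomtest(n_hits, n_trials, p=0.5, alternative='greater')
print(f"hit rate = {n_hits / n_trials:.2f}, one-sided p = {result.pvalue:.3f}")
```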
