Hacker News

Some people find it helps them understand themselves better. And people usually want to understand people they're close to or work with.


>Some people find it helps them understand themselves better.

Some people have said the same of astrological signs. (EDIT: As per @simias' comment on confirmation bias. This is why "text blurb profile"-oriented reasoning is especially suspect. We already have ample evidence that such reasoning is as unreliable a guide to truth as the belief that your birthday determines your personality.)


MBTI results correlate moderately to strongly with Big 5 scores. Is Big 5 equivalent to astrology too?


These are answers to a standardized test. So they are probably teasing out some small kernel of truth about the distribution of (probably context-sensitive) human traits, at least insofar as those traits relate to the questions, but more dubiously insofar as people use English words like "intuitive" to label axes, or name one axis after two different English ideas, like Judging and Perceiving.

Honestly, the more "words" you add to summarize a pile of question answers, the further you get from "science" and the closer you get to astrology. If there is only one lesson I can convey here, it is that. The science is the 50-or-however-many-questions-dimensional questionnaire and the 4- or 5-dimensional summary of that full high-dimensional distribution. You project down from the questions to the summary, and then people using English "back-project", mostly just filling in poetic detail from anecdotal experience. The bigger that back projection grows, into paragraphs and chapters, the more things come to resemble astrology, or psychology by and for poets.
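To make that projection concrete, here is a toy sketch in Python/NumPy. Everything in it is an illustrative assumption, not any real instrument: made-up Likert answers, 50 questions, and plain SVD/PCA standing in for whatever scoring a real test uses. The point is only that the defensible object is the low-dimensional summary of the answers; everything layered on top in English is commentary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 respondents answering a 50-question Likert survey (1-5).
# Both the data and the dimensions are invented for illustration.
n_people, n_questions = 1000, 50
answers = rng.integers(1, 6, size=(n_people, n_questions)).astype(float)

# "The science" is the projection: center the answers and keep only the
# top few principal axes of variation (here 4, mimicking a 4-axis summary).
centered = answers - answers.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axes = vt[:4]               # 4 summary directions in question space
scores = centered @ axes.T  # each person's entire 4-number summary

print(scores.shape)  # → (1000, 4)
```

Anything beyond those four numbers per person, e.g. a paragraph-long "profile", is back-projection that the data itself does not supply.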

However, as mentioned a couple of times, these summaries may be telling you very little (not "probably says", as you write a few times, but "probably says nothing") about the 99% neurotypical people (yes, my estimate is hand-wavy, based on the distribution along just one axis from the article, which says the others are "similar"). The tests may only be useful for identifying outlying personality extremes, which is very much not how amateurs apply them. Amateurs overconclude, reading profiles and basically re-enacting the epistemological play of astrology (but backed by "science", so they feel less guilty; EDIT: and yes, I have known several such people in real life).

Incidentally, this also explains "correlations" with other things (like IQ or engineers or etc.), because linear statistics like correlation are notoriously dragged around by outliers. To whatever extent these correlations are real, they will (unadjusted for the individual-level noise already mentioned) have much weaker statistical power than you would naively expect. In effect, your sample size may be only 0.2% to a few percent of what you thought it was. So instead of 1000s of people you have 10s, with a thin population in each of the 16 categories. This would show up in the reproducibility of the correlation studies, but honestly that requires a more careful meta-analysis than I am prepared to offer for free. So these studies, too, may have a grain of truth, but perhaps only a grain, and that grain is probably much smaller than personality test advocates in this comment thread seem to understand.

At least one real-world problem with taking these measurements too seriously is, as others in this thread have mentioned, that these thinly evidenced divisions, even if real, can be as ripe for abuse as race or any other group designator. In-group favoritism [1] is a strongly replicated result in psychology. This is another reason why axes-only reports ("a% I, b% N, c% T, d% P") are better than classification, and why the Big 5 (which at least resists categories) is less harmful. Reporting intervals, a ± A, b ± B, etc., would be better still.

Have you, or anyone in this whole thread, ever heard of a p-value for your personality assignment, or error bars on your personality axes, before I mentioned them? (Yes, I know the problems with p-values. Those problems are not the point here.) No? Yet everyone has heard of same-person reproducibility problems. These are obviously deeply related issues.
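For what an error bar on one axis could even look like, here is a hypothetical bootstrap sketch (the 12-question axis, the 1-to-5 scale, and scoring-by-averaging are all invented for illustration; real instruments score differently):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy: one axis score is the mean of 12 question answers on a 1-5 scale.
answers = rng.integers(1, 6, size=12).astype(float)
point = answers.mean()

# Bootstrap: resample the questions with replacement and re-score,
# turning the single number into a distribution with an interval.
boots = np.array([
    rng.choice(answers, size=answers.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boots, [2.5, 97.5])

print(f"axis score {point:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

If that interval straddles the axis midpoint, the honest report is "indistinguishable from the middle", not a confident one-letter label.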

So, the problem is not that "MB is totally not science". It's that personality measurement in general is weak enough science that it is far more likely to be misapplied than correctly understood, especially by lay practitioners: HR departments, management, people on first dates, people in emotionally heated scenarios. It may have a real and even positive role in diagnosing truly outlying children in need of intervention, though "profiles" of all 16 super-extreme cases will probably sound bad and may be obvious without any testing.

[1] https://en.wikipedia.org/wiki/In-group_favoritism



