>related to a unique null derivative instead of multiple maxima
I think the word you're trying to use is "bimodal" and yes, that is one example where the author's reasoning fails. But it's not the only one.
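To make the bimodal pitfall concrete, here is a minimal sketch (with made-up numbers, purely illustrative): the mean of a two-cluster sample lands in the gap between the clusters, describing a value almost no observation actually takes.

```python
import statistics

# Hypothetical bimodal sample: two well-separated clusters,
# e.g. measurements from two distinct subpopulations.
group_a = [1.0, 1.1, 0.9, 1.0, 1.05]   # cluster around 1
group_b = [9.0, 9.1, 8.9, 9.0, 8.95]   # cluster around 9
sample = group_a + group_b

mean = statistics.mean(sample)
print(mean)  # 5.0 -- sits in the empty gap between the two modes
```

Any conclusion phrased as "the typical value is about 5" would be wrong for every individual in the sample, which is exactly the kind of failure a single summary statistic can hide.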
>I couldn’t find any papers or information on this.
You said you have no formal higher education in mathematics - how would you even go about finding (let alone understanding) papers? Regardless, just to be clear, this is not something you would learn from papers but from introductory textbooks and university courses. Everyone who has to deal with statistics in science needs to go through a whole lot of extra education exactly because there are many pitfalls like this.
>it is essential to check the underlying data distribution to avoid being misled by the information
That is another half-truth that everyone on the outside seems to agree on, but it is useless in practice. What do you do if the underlying data is not accessible? And what if you don't have the means to process it for every paper you read (which is what usually happens)? Then you have to rely on the actual tricks of the trade, which come naturally once you've worked with tons of statistics. There are lots of telltale signs that let you spot bad analyses just by looking at a plot or summary chart. Granted, you won't catch all of them, but it usually takes real malice and deep statistical competence on the author's side to cover these things up.