You are so right about trying to trust our sources. The Wall Street Journal article didn't pass my "test" described below because a sample of 26, out of the millions infected, is statistically immaterial.
I read a lot of articles in my efforts to understand the conditions, diseases, and treatments people ask questions about here on Connect. Some are in the realm of what you call "junk science". Others are pure self-promotion or product promotion. Still others are what I refer to as "interesting, but not ready for prime time", such as case studies and theories. Then there are the credible research studies that may support specific treatments or medications, or report that other methods may be just as effective or even better, or, most likely, conclude "there is evidence that this is effective in some cases, but more study is needed."
I have now begun, even with scientific and medical journals, to look first at the number of subjects, second at the number and qualifications of the researchers involved, and third for an independent peer review.
If the report is a "case study", the numbers can be small (10-100), especially for rare conditions. But for something widespread like arthritis, cholesterol drugs, or treatments for back pain, I look for a "research study" involving at least 1000 people.
If the author has professional, research or scientific credentials, I look for them to be affiliated with a respectable organization. And I look at the last page to see their potential conflicts of interest.
And I look for the peer review.
Finally, I use "Dr Google" to check for any hidden motives (websites where the authors are touting treatments, etc.).
Only if I believe an article is credible, includes enough subjects to be useful, and is not a product promotion by someone with a financial stake do I ever share the link or a summary. If someone on Connect asks about a study, treatment, or product, I read their link and apply the same criteria before I comment.
Sue
Excellent, thank you