There seems to be a common thread amongst sceptics out there that science is done via something that looks a little like the Council of Nicaea. That is to say, a committee of scientists decides what is "doctrine" and then instructs publishers what to print. This confuses the way the legal (or political) system works with the way science works.
Let's have a look at some typical tasks in a scientist's professional life:
1. Data collection. This can be the longest and hardest (and most boring) phase. This is where hours are spent over test-tubes, or, in my case, hours in the hot sun staring closely at rocks. Whilst you may be thinking about the end-game during this phase, the work is usually so routine that bias hardly enters into it (and if it does, it is because the method itself is biased, or you're just being sloppy). There will be mistakes, of course, but these tend to scatter around the true value, so they largely cancel out in the final analysis.
2. Hypothesis generation. I put this after data collection to bait some people, but in truth hypotheses are generated throughout the scientific process. The important thing is that, whilst conducting an experiment designed to test a particular hypothesis, you test only that hypothesis; other hypotheses must wait for other experiments. There is no harm in "hypothesis-driven research" – this is what science is. It is, however, quite different from biased research driven towards a pre-determined conclusion. Note the difference: a hypothesis is actually tested, whereas a pre-determined conclusion is circular.
3. Data analysis. Here come the statistics. You have the data, and you see patterns. Are they significant? This is a technical, statistical question that determines whether you can use your data (gathered in step 1) to test the hypothesis (from step 2); a minimal sketch of such a test follows this list. If there is no significant result, then your results offer no support for the hypothesis. THIS DOES NOT MEAN IT IS DISPROVEN. It is an absence of evidence, which, as the saying goes, is not evidence of absence. If, on the other hand, the result is statistically significant, you can compare it with your hypothesis. Now a hypothesis can be disproved – proposing that the sky looks blue and finding it to look green would be an example. Unfortunately the opposite does not apply: if your result agrees with your hypothesis, it lends support to it, but it does not prove it. It can never prove it, because of a quirk of inductive logic: no matter how many positive examples you show in support of a proposition, the set of possible examples is unbounded, so you can never rule out a counter-example emerging next. This is quite different from the deductive logic of mathematics, where 2 + 2 = 4 as a consequence of the system itself.
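To make the idea of a significance test concrete, here is a minimal, hypothetical sketch in Python. The data are invented, and the choice of an independent-samples t-test with a 0.05 threshold is just one common convention for illustration, not a claim about how any particular study is (or should be) analysed.

```python
# A generic sketch of step 3: compare two hypothetical samples with an
# independent-samples t-test and ask whether the observed difference is
# statistically significant. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical measurements, e.g. a control group and a treatment group.
control = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.5, scale=2.0, size=30)

result = stats.ttest_ind(control, treatment)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    # The difference is unlikely under the null hypothesis, so the data
    # lend support to the alternative hypothesis -- but do not prove it.
    print("Statistically significant: the result supports the hypothesis.")
else:
    # Absence of evidence, not evidence of absence: the hypothesis is
    # neither supported nor disproven by this data set.
    print("Not significant: no support from this data set.")
```

The point of the sketch is the logic of the last few lines: a significant result supports the hypothesis without proving it, and a non-significant one leaves the hypothesis untested rather than disproven.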
To make a long story short, the last juicy step is publication.
Now you run into trouble. You've done your experiment, and it supports your brilliant, earth-shattering hypothesis. Will anyone believe you?
To find out, you write up your method (and the back story: why you felt it worthy of research), your results, and a bit of discussion of what it all means. Then you send it for peer review. This is a blind process (well, semi-blind – sometimes people work out who the reviewers are) in which your reviewer doesn't know who wrote the paper and is asked to appraise the science, comment, and give an opinion on whether it is fit to publish. Most papers fail this test on the first pass, and the majority never make it to publication. What tends to define success is that the paper details a properly conducted line of research that takes previous work in the field into account. Failure in peer review doesn't mean there is a conspiracy against you – it usually means either that your paper is not relevant to the journal in question, or that you need to write up your science better. Without peer review, no such statement can be made about a paper with any certainty.
Also, consider that how the media treats science is not the same as the science itself. Media coverage of science is only "balanced" in so far as it "objectively" reports the outcomes of research and the opinions of researchers. So 90% of scientists might agree with a broad-based position, but it only takes one voice from the remaining 10% to "balance" a journalist's report – giving a 50/50 impression of what is really a 90/10 split. Note also the diversity of opinion within the 90% who agree – these people do not speak from a common mantra; they merely assent to certain generalisations.
So next time you see controversy about methods and "conspiracies" to promote one "side" of an argument over another, consider the above and consider that most scientists are too busy with the steps involved to also hold some sort of cabinet meeting on how to bend the entire scientific community. After all, that would be like herding cats.