I have been reading The Geek Manifesto. This, in combination with Bad Science and several other books, suggested to me that part of the problem with the reporting of science in the press is the absence of the metadata that surrounds a scientific paper, and of the context that anyone in the field would bring to it from their existing knowledge.
This could be fixed. I propose a rating system for science stories: a simple 1-5 scale on each of three elements (a rough code sketch follows the three lists).
Is the view or theory described in the article a reliable overview of a mainstream idea, or is it on the fringes?
1. Highly controversial, espoused by very few in the field
2. Controversial but with significant minority support
3. Neither widely disputed nor widely supported
4. Widespread support but not yet universal acceptance
5. The scientific consensus view
How near are we to being able to use this research?
1. This is a very early result; we are a long way from anything useful
2. This result moves us slightly ahead along an established line of inquiry
3. This result is relevant, but usable applications are unlikely in the short term
4. This result is relevant and likely to yield applications within five years
5. This result has immediate relevance and applicability
How important is this result?
1. If confirmed, it will fill in a minor area of uncertainty.
2. If confirmed, it will provide useful background.
3. If confirmed, it will form a solid foundation for further work.
4. If confirmed, it will mean a significant step forward.
5. If confirmed, it will be a genuine breakthrough.
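To make the scheme concrete, here is a minimal sketch of how the three scales might be encoded. The names (Acceptance, Applicability, Importance, StoryRating) are my own illustrative choices, not any existing standard:

```python
from dataclasses import dataclass
from enum import IntEnum

class Acceptance(IntEnum):
    """How mainstream is the view or theory?"""
    FRINGE = 1          # espoused by very few in the field
    MINORITY = 2        # controversial, with significant minority support
    OPEN_QUESTION = 3   # neither widely disputed nor widely supported
    WIDESPREAD = 4      # widespread but not yet universal acceptance
    CONSENSUS = 5       # the scientific consensus view

class Applicability(IntEnum):
    """How near are we to being able to use this research?"""
    VERY_EARLY = 1      # a long way from anything useful
    INCREMENTAL = 2     # slight progress along an established line of inquiry
    LONG_TERM = 3       # relevant, but usable applications are a way off
    NEAR_TERM = 4       # likely to yield applications within five years
    IMMEDIATE = 5       # immediately relevant and applicable

class Importance(IntEnum):
    """If confirmed, how important is the result?"""
    MINOR = 1           # fills in a minor area of uncertainty
    BACKGROUND = 2      # useful background
    FOUNDATION = 3      # a solid foundation for further work
    SIGNIFICANT = 4     # a significant step forward
    BREAKTHROUGH = 5    # a genuine breakthrough

@dataclass
class StoryRating:
    """One reader's or editor's rating of a single science story."""
    acceptance: Acceptance
    applicability: Applicability
    importance: Importance
```

Using IntEnum means each level keeps its numeric 1-5 value, so ratings can be compared and averaged, while the names document what each number actually means.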
There are a couple of ways this could be used. One is for science editors to rate the stories their writers publish. Another is for communities of geeks to build an aggregator where stories are listed by publication, date, headline and abstract alongside their ratings. Community rating would obviously require some sort of reputation management to prevent abuse, but Wikipedia has shown that this is a problem that can be solved with enough community input.
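And a minimal sketch, under the same caveat, of what an aggregator entry might look like. It reuses StoryRating from the previous sketch, and the flat averaging here stands in for the reputation-weighted scheme a real site would need:

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

@dataclass
class Story:
    """One entry in the aggregator, reusing StoryRating from above."""
    publication: str
    published: date
    headline: str
    abstract: str
    ratings: list[StoryRating] = field(default_factory=list)

    def mean_scores(self) -> dict[str, float]:
        """Average each scale across all submitted ratings.

        Assumes at least one rating has been submitted; a real
        aggregator would weight ratings by reputation rather than
        averaging them flatly.
        """
        return {
            "acceptance": mean(r.acceptance for r in self.ratings),
            "applicability": mean(r.applicability for r in self.ratings),
            "importance": mean(r.importance for r in self.ratings),
        }
```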