Science and Judgement

Feb 18

This is often why decision-making in bureaucracies is so slow. Many checklists have to be completed. Things have to be given scores. Scores have to be weighted and tallied. This official and meticulous-looking process then helps arrive at a well-documented decision that is externally justifiable.

In many cases, it is doubtful how much this removes human judgement, bias and emotion from the decision, and how much it simply makes things justifiable to others and disperses responsibility.

This is not necessarily a bad thing. Or a good thing. It is both.

Similarly, in science, it would be nice if we had a set of rules that immediately helped us sort science out. This would make the judgement of the worth of any particular bit of scientific evidence very easy, particularly in the clinical situation where huge amounts of science get generated.

Science, as it relates to health care, has two general forms:
1. Observational.
2. Comparative.

The former is where all the major developments occur: the basic science of how things work, and how chemistry, physics and biology can be harnessed in the field.

The latter, whilst not generally the source of the big developments, tries to answer the question of which treatment is best. This, conversely, is very difficult.

In the field of health care science, several attempts have been made to provide a system for making judgement on science. The most famous and common is of course the hierarchy of evidence.

Put aside for the moment that the hierarchy of evidence is actually a philosophical argument, does not satisfy its own requirements, and hence itself counts as anecdotal evidence.

In this, various forms of research or literature are ranked, from case reports at the lowest to reviews of randomised controlled trials at the highest. Probably first described in 1979, it has now taken centre stage in any discussion about evidence-based medicine (another term that is also losing favour).

Rather than judge each piece of scientific evidence individually, which takes considerable effort and thought, this now gave us a system for quickly sorting information, perhaps without thought, into strong, medium or weak.

A by-product of this is that it created a group of science that became the "winners". Unfortunately, when favour falls on any human endeavour, human nature kicks in, and you get more of that thing. If we decided to be compassionate and award $1 million to anyone suffering an amputation, I can guarantee the number of amputees would skyrocket.

This is called unintended consequences.

Since the randomised controlled trial is now the winner, guess what: we now get a lot more of them.

This has three problems:
1. Things not suitable for such trials are shoehorned into them.
2. They are given a weight that the quality of the trial regularly does not justify.
3. They are regularly quoted unquestioningly.

Not surprisingly, when you reward something, you get more of it, and sometimes the increase in volume can lead to a drop in quality.

However, the elephant in the room is that the hierarchy of evidence is itself basically anecdotal. It does not fulfil its own criteria. It is philosophical, just like this piece of writing.

Worse, like all systems that take thought and responsibility out of the judgement process, it leads to dumbed down decisions.

An anecdote can be a crucial piece of evidence. Or it can be rubbish. A lot depends on who it is coming from (I do have some hints for choosing who to listen to and who not, but that can wait).

A randomised controlled trial can be a crucial piece of evidence. Or it can be rubbish. A lot depends on how it is constructed, and whether the figures got fudged to get a result (publish or perish is a reality for academics, just like prep or perish may be in general dentistry).

So what should we do?

I personally recommend that you make a judgement on every piece of evidence that you come across. Never accept anything because of who it came from, how famous they are, where it sits on the hierarchy of evidence, or the publication it appears in.

(I'm even skeptical when my best dental friends recommend things. Sometimes they change their mind....)

And carefully read the statistics. Often the best answer is hidden there, ignored by the authors and the peer-review process.

Judge all evidence thoughtfully, and then compare it to your reality. Some things have many in vitro studies behind them but don't work, because the studies focus on something that is not clinically relevant.

In the end, despite all the evidence and studies, you will only use something if YOU see it works, and so do your patients. That, paradoxically, is the piece of anecdotal evidence that finally makes or breaks a treatment.


About the Author

Dr Lincoln Harris has been completely focused on excellence and quality from the beginning of his career as a dentist. He established the first private dental practice in Bargara, Harris Dental Boutique, in 2000. Since graduation he has trained extensively in aesthetic implant techniques and full mouth rehabilitation to attain immense skill and knowledge. With his vast dental knowledge, Dr Harris coaches and trains dentists from all over the world in complex aesthetic dentistry, surgical techniques and business management. Dr Harris is the founder of RIPE (Restorative Implant Practice Excellence: Full Protocol), an international forum of over 70,000 members worldwide, whose purpose is to share information and excellence in the dental industry. He has also lectured in multiple cities throughout Australia, North America, Asia, Singapore, the United Kingdom and Europe.
