Could you please define the term 'evidence-based practice'?
Doing a quick survey of PTs on this question (it'll only take a sec):
On a scale of 1-10, how important is evidence-based practice to you?
Thanks for your input.
Hi,
EBP would be a 10 if the practice were understood by both patients and PTs.
Otherwise it would be a 0.
Evidence-based practice is only as good as the research it is based on. Unfortunately there is not enough research (not an excuse, simply a fact), there is a lot of poor research, and there are a huge number of poor conclusions drawn from quite good data.
The concept is a good one for furthering the profession. However, at present it would not stand up to what we need in order to base any decisions on it. Perhaps its best use is to rule things out, rather than vice versa?
Defining your terms is a good way to begin any analysis. For the wider physio community, "evidence" is usually thought of as that which gives credence and repeatability to techniques and methods yielding verifiable results, and it is almost always gained through properly conducted research. Though not always: clinical information is used by every therapist to adjust techniques. Regional variations in patient populations throw some "standards" out of whack, and individuals place emphasis on methods that give better responses by virtue of individual sensitivity and strength. Were we to allow ourselves to be guided only by "evidence" as per the often-used definition, we would fail our own creative selves as well as our patients. What ought to guide us at least as much is what works for us and is both safe and repeatable. On your one-to-ten scale, I am equally likely to use both "evidence" and my own senses, so I'd score 5.
Here's a thought. If we based our practice solely on quantitative, well-researched EBP, we would miss new, innovative qualitative techniques that improve patient outcomes but have not yet had the quantitative analysis done to support them. All those patients who benefited from McKenzie-type techniques would not have had the advantage of that treatment for over 20 years, which is how long it has taken to "prove" it is useful.
Hi
Evidence-based practice is being widely used as a wedge to force therapists, insurers and associations to interact the way certain interest groups would like. I echo the comments to date. We risk throwing the baby out with the bath water if evidence-based practice continues to be used the way it currently is in Australia. Our profession is being hijacked by professional students, many of whom have had minimal patient-treatment experience outside their studies, yet these people now populate committees and enjoy excessive power within the profession, telling the rest of us how to treat our clients. Worse still, these all-powerful therapists misuse their power by setting up fee hierarchies to support themselves. Very experienced therapists who do not undertake years of university study will become poor cousins to the new breed.
Our profession will die, and patients will suffer. We need to be aware of improper science, but the current implementation is not the way to go. My score? 3/10.
MrPhysio
Hi,
I always search for EB reports when I use new techniques. However, I think the EB reports themselves are often not done scientifically enough...
So: great in theory, bad in practice. 3/10.
I would like to agree with MrPhysio on his comments.
Just because something doesn't have evidence to back it up doesn't mean that it doesn't work.
Just because a study says that something doesn't work doesn't mean that the study was a good one.
I think I read a chapter in Grieve's Modern Manual Therapy on "A Contemporary Approach to Manual Therapy". It was written by WA physios, I think - MrPhysio must be thinking of my mates here at Uni Sydney - and they seem to be on the case.
In terms of Australian Unis and research, the following seems to apply:
Curtin Uni (West Aus) - The "gold standard" masters course. Motor control, neurodynamics, L/S classification, anatomy, etc. Great graduates; I wish I could study there, and am waiting for a distance-education course to do my masters! Good research coming out of here. My first pick on quality.
Uni Queensland - Motor Control, Whiplash and Deep Neck Flexors, TrAbs, etc - good researchers, maybe a little too rigid in their focus but excellent work and research. Second best.
Uni SA - Clinical reasoning. Home of Maitland and Butler, Shacklock with neurodynamics, shoulders. All in all, my third pick.
Uni WA - Distance-education course with a great reputation. Fazey et al. I don't know of any research, but the graduates are high quality.
La Trobe (Victoria) - Don't hear much about these guys and don't know much about them at all. We had a Sports Physio masters graduate from there as a staff member; he seemed OK. I don't know of much research.
Uni Sydney - They seem to have the loudest voices here but produce research about other people's research. Probably good in CP and neuro but useless in MS. Research methods and statistics are their strongest point, but they often ask the wrong questions.
I hope my opinion is not too harsh, and that it is helpful when considering papers written by different Aus unis.
Bottom Line - If you think something works, find someone to help you prove it!
Quote: "Research Methods and Statistics is their strongest point but often ask wrong questions."
I could not agree more wholeheartedly, and this is not isolated. Physios are great at data collection, but all too often the wrong questions are asked and even more unusual conclusions are drawn.
If I remember correctly, the term is "operationalisation of the question". We did learn about this in my undergrad course at the University of Sydney; unfortunately, too often researchers are not practising what is being preached to the undergrads! Let's continue to look at what appears to work and then try to find out why. If you break things up into small RCTs, you often find that things in isolation are not statistically significant. However, without breaking them down to fit a homogeneous, double-blind RCT, many clinical approaches and applications still seem to have a positive effect or outcome. To me that would say that an RCT is not the correct measurement tool.
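To put rough numbers on that last point, here is a minimal Python sketch of the power problem (the effect size and sample sizes are purely illustrative, and it assumes the statsmodels package is available):

```python
# Minimal sketch of why small RCTs often fail to reach statistical
# significance: at a modest effect size, a small trial is badly
# underpowered, so "no significant difference" says very little.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.4   # assumed modest standardised mean difference (Cohen's d)
alpha = 0.05        # conventional two-sided significance level

# Power of a small trial with only 20 patients per arm
power_small = analysis.solve_power(effect_size=effect_size, nobs1=20, alpha=alpha)
print(f"Power with 20 per arm: {power_small:.2f}")      # roughly 0.23

# Patients per arm needed to reach the usual 80% power
n_needed = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.8)
print(f"Needed per arm for 80% power: {n_needed:.0f}")  # roughly 100
```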
How can you blind a therapist in an RCT?
The skill in physio is to use your assessment skills to determine the correct treatment.
RCTs are excellent for determining whether U/S vs sham U/S produces an effect, but how useful are they for determining whether a Gr3 mobilisation applied to everyone at a specific level works or not (if you do Gr3 mobes at all!)?
As an example, Barb Hungerford did her PhD proving that the sacrum nutates during the stork test but that, in people with SIJ pain, it "unlocks". But then you speak to Sydney Uni staff and they will ask how you knew the subjects had SIJ pain if you did not inject the SIJ with local anaesthetic, therefore the research is not strong!
We need to support those who are doing research in the right areas and not drag them down. They are the ones who are trying to improve our evidence base and drive the profession forward.
Unfortunately, not all people agree, and some would prefer to turn us into glorified exercise prescribers with general exercise programmes...
8/10 and have to disagree with the general thread.
We seem to forget the three components of Evidence Based Practice:
- The best available external evidence.
- Applicability to the patient or situation at hand.
- The clinician's experience and expertise.
Sackett et al. described this somewhere (I think around 1998, but I'll have to look it up).
So understanding when research findings do and don't apply is crucial, which means understanding the research and our patients' clinical findings at the same time. Only then will we be able to avoid the pigeon-holing scenario that compensable bodies might push for in an inappropriate situation.
Comments?
Hi Brad,
I just read this thread again and realised I never actually put down that I do believe evidence-based medicine (EBM), or practice, is ideal. I just don't think that the way it has been used against physios is good for the profession, given the problems with study design and the lack of quality research.
I went looking for Sackett's 1998 article and found it at the BMJ: 1998 August 1; 317(7154): 339-342, "Using research findings in clinical practice". www.pubmedcentral.gov/art...id=1113635
In the article, he summarises the five steps necessary when practising EBM. The method is good; I just don't think researchers apply the steps well.
Steps necessary in practising evidence-based medicine:
1. Convert the need for information into clinically relevant, answerable questions
2. Find, in the most efficient way, the best evidence with which to answer these questions (whether this evidence comes from clinical examination, laboratory tests, published research, or other sources)
3. Critically appraise the evidence for its validity (closeness to the truth) and usefulness (clinical applicability)
4. Integrate the appraisal with clinical expertise and apply the results to clinical practice
5. Evaluate your performance
Also, doing a systematic review on only a handful of studies is pointless. We need more research.
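To illustrate why (with made-up numbers), here is a back-of-the-envelope Python sketch of the fixed-effect inverse-variance pooling a meta-analysis rests on; with only a few small studies, the pooled confidence interval stays too wide to settle anything:

```python
# Minimal sketch of fixed-effect inverse-variance pooling, as used in
# meta-analysis (all numbers made up for illustration).
import math

# Hypothetical effect estimates (e.g. mean differences) and standard
# errors from three small studies
effects = [0.35, 0.10, 0.55]
ses = [0.30, 0.40, 0.35]

weights = [1 / se ** 2 for se in ses]   # w_i = 1 / SE_i^2
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# With so few small studies the CI stays wide and crosses zero, so the
# review cannot distinguish "works" from "does not work".
```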
Now I could bitch and moan about the lack of research and sit on my butt but my plans are to do research - to find out if clinical practice which *appears* to work stands up to scientific scrutiny.
My bottom line is that I don't want the reputation of physiotherapy to be dragged down by greedy, machine-loving, 4-patients-per-hour physios. That is why we need EBM. I have seen some amazing results from manual therapy - resolution of pain and conditions that had been there for decades. We all have those stories to tell. What we need is good evidence to back us up.
My challenge is: contribute to the body of work! I will someday (soon, I hope!)
3/10. Unfortunately this depends on the availability of resources, the skill of the therapists, workload, enthusiasm, etc. I remember attending a casemix lecture at the John Hunter Hospital, Newcastle, Australia, back in 1994. There I stood up and protested loudly against that type of research: based, yes, on statistical methods, but on only an average approach to common problems, administered by burnt-out healthcare professionals.
On "best available external evidence":
We all know that the "guru" therapists are outliers on the bell curve, even though they get the best results. Why then do we measure the average, or what the majority do? It is simply a dollars-and-cents issue: research the best average approach for the limited resources (people, equipment and finances) that a health system can provide, and you get a streamlined approach to the health dollar. Some call it "Best Practice"! But surely it is not the best treatment!
This is where we should acknowledge the very real difference between "best practice" and the best treatment.
Is it just me, or could the machine-happy, 4-or-more-patients-per-hour physio use the same argument put up by most in this thread to preserve his or her piece of the healthcare-fund pie? Can someone enlighten me with a better way to sieve out the bogus modalities, which do actually exist? Are we still adamant that the so-called techniques, despite their heterogeneity in practice between the gurus, actually physically modify the patients? If RCTs are asking the wrong questions, what is the right question to ask in terms of treatment outcomes, other than improvement in physical performance, pain intensity, perceived disability...? Is it patient satisfaction? I know a physio who achieves that with free coffee and muffins every visit.
Patient satisfaction would indeed be a very appropriate starting place. Perhaps also a measure of an individual's independence from the healthcare system, and/or a reduction in the dollar value that a particular group of patients drains annually from the healthcare budget. Often the preventative strategies are longitudinal case studies that take years to measure. Unfortunately, as with smoking, we will change the approach several times before a measurement can be made.
So there is the problem. We do not have the right answer yet on how to measure "success". Perhaps a combination of what is considered "success" to the patient (who pays the taxes) and also "success" for the budget in terms of viability.
But we should not continue with a system that is clearly flawed. Or, if we do, we should acknowledge the limitations of the model and not make incorrect statements about clinical effectiveness when they should be about clinical usefulness.
Hi Brad,
Just quickly, an example of a wrong question to ask is whether two (or more) therapists can assess how much the SIJ is moving in the stork test - via a 5-point rating scale or the like. A better question to ask is: "Is the left side symmetrical to the right?"
Quantification of results leads to better statistical analysis; however, it may be measuring something irrelevant.
I am not arguing that we should not do research or should not use evidence-based medicine. I am saying we should be doing research that backs us up as a profession.
I read about a Manips physio who recently completed their PhD on the origin and history of manipulative physiotherapy. Now, I think history is *very* important and useful, because things happen in cycles and we have much to learn from it. But given that we need more research supporting our profession, that physio (who is a respected clinician) might have helped it more by producing currently relevant work. Having said that, they are entitled to do their PhD on whatever they want.
I do stuff that works - not because someone told me how to, but because it makes sense, and I use my clinical reasoning to select which patients I use these things on. Physiobase's comments on the best average approach are true. What tends to happen is that people latch on to a finding and then apply it to everyone, with disappointment.
A good example of what I would consider a good research method is the work of Peter O'Sullivan, who has developed a classification system for chronic low back pain. By going through the process of identifying, classifying, testing inter-tester reliability and so on, I hope to see people use this system to research effective treatment strategies, streamline diagnosis, etc.
His work has opened my eyes and explained why some of my patients actually need to do fewer exercises (excessive compression - the active extension pattern). I had one lady who was the most diligent patient at core-stability and gym-based exercises and had had chronic pain for 4 years. Her problem was that her muscles were compressing her too much. She is now leading a pain-free life after only a few months of "learning" how to "relax" and be "sloppy" - that is, getting back to more normal tone.
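As an aside, here is roughly what checking the inter-tester reliability step mentioned above can look like - a minimal Python sketch with hypothetical classifications (pattern names loosely borrowed for illustration), assuming scikit-learn is available:

```python
# Minimal sketch: inter-tester reliability of a classification system.
# Two therapists independently classify the same 10 patients; Cohen's
# kappa corrects their raw agreement for agreement expected by chance.
# All labels below are hypothetical, for illustration only.
from sklearn.metrics import cohen_kappa_score

therapist_a = ["flexion", "active extension", "flexion", "multidirectional",
               "active extension", "flexion", "flexion", "active extension",
               "multidirectional", "flexion"]
therapist_b = ["flexion", "active extension", "active extension", "multidirectional",
               "active extension", "flexion", "flexion", "flexion",
               "multidirectional", "flexion"]

kappa = cohen_kappa_score(therapist_a, therapist_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, ~0 = chance level
```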
In summary, better research questions into more relevant topics to support our profession.
Am I making sense?
And here I was thinking that we were demonising EBM without understanding it. And I agree totally that the external evidence has more holes than not.