Review by Choo Li-Hsian
Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them
Author: David H. Freedman
Publisher: Little, Brown & Co
EVERY day, we are surrounded by “expert advice” from various media. So how do we pick out the good advice from the constant stream of flawed guidance?
In a sense, we often trust experts blindly because we are programmed to do so from a young age – first by our parents and teachers, and then by the authoritative voices in textbooks and the network news.
Brain-scan studies apparently show that we surrender our own judgment and forgo our own decisions when presented with “expert advice”.
David H. Freedman, author of Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them, has spent the past three years examining why expert pronouncements so often turn out to be exaggerated and misleading.
He provides several reasons. One of them is that scientists are not as good at making trustworthy measurements as we give them credit for.
Surveys revealed that fraud, careerism, suppression of data and lousy analysis, among other problems, are fairly rampant even among the most respected researchers and institutions.
As Freedman puts it: “It is not that they are mostly incompetents and cheats. Well, some of them are... (but) a bigger obstacle to reliable research though is that scientists often simply cannot get at the things they need to measure.”
He terms this the streetlight effect – a reference to a joke scientists love to tell. Late at night, a police officer finds a drunken man crawling around under a streetlight.
The man says he is looking for his wallet, which he probably dropped across the street. “Then why are you looking over here?” the police officer asks. “Because the light is better here,” explains the drunken man.
Freedman notes that “many and possibly most scientists spend their careers looking for answers where the light is better rather than where the truth is more likely to lie... it is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant.”
In many cases, scientists are stuck with surrogate measures in place of what they really want to quantify.
For example, economists cannot track the individual behaviour of billions of consumers and investors, so they rely on economic indicators and data extracts to form conclusions. A 1992 study by researchers at Harvard and the National Bureau of Economic Research examined papers from a range of economic journals. They discovered that none of them had conclusively proved anything.
John Ioannidis, a highly regarded “medical mathematician” from Greece’s University of Ioannina, examined the 45 most prominent studies published since 1990 in the top medical journals. He found that about one-third of them were ultimately refuted.
Scientific studies are also not always performed on the right subjects. Patient recruitment is a problem in medical studies. Researchers often end up enlisting those who do not represent the population in terms of health or lifestyle – students, the poor, drug abusers – as their subjects.
Many studies on human health are based on animal testing, but three-quarters of the drugs that prove safe and effective in animals end up failing in early human trials.
“Publication bias” is cited as the biggest culprit – that is, journals’ tendency to eagerly publish the small percentage of studies that produce exciting, surprising breakthrough results.
How can we counter all this? Freedman is not calling on us to discard experts and their findings. The key is to distinguish between expertise that is “more likely to be right” and expertise that is “less likely to be right”.
We need to ask: “What does better advice have in common?” or conversely “What does bad advice have in common?” Bad advice, according to Freedman, tends to be simplistic.
It tends to be definitive, universal and certain; it is advice we love to hear – for example, that chocolate is good for you.
The best advice tends to be less certain – it comes from those who say: “I think maybe this is true in certain situations for some people.” We should, therefore, avoid findings which shout “it’s exciting, it’s a breakthrough, it’s going to solve your problems.”
Instead, we should consider advice that embraces complexity and uncertainty. While this may go against our intuition, we have to accept that we live in a complex, messy and uncertain world. Experts who are more likely to steer us in the right direction are those who acknowledge this.
But here’s the million-dollar question: since Freedman is a kind of expert on experts, why should we trust him? Freedman concedes that you should not.
In fact, he even dedicates a whole chapter, entitled “Is This Book Wrong?”, to this subject. He emphasises that his purpose is not to give people answers but to provoke thinking, raise awareness and point out that there are real questions we should all be asking instead of passively accepting the status quo. In essence, we should all be smarter about how we pick our advice.