The December 1992 EPA study "Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders" (EPA/600/6-90/006F... commonly referred to as the EPA '93 study) is a crock of shit. Understanding how and why is complicated, and I am not a statistician or a science researcher... so some of the evidence I will provide is complicated, and some of it is simple.
For instance... a simple, understandable example of corruption is the fact that the results of the study were released before the research was completed.
In 1998, Federal Judge William Osteen - who had a history of siding with the government on tobacco issues - vacated the study. He declared it null and void after commenting extensively on the shoddy way it was conducted. His decision was 92 pages long. Here is an excerpt:
"In this case, EPA publicly committed to a conclusion before research had begun; excluded industry by violating the Act's procedural requirements; adjusted established procedure and scientific norms to validate the Agency's public conclusion, and aggressively utilized the Act's authority to disseminate findings to establish a de facto regulatory scheme intended to restrict Plaintiffs, products and to influence public opinion. In conducting the ETS Risk Assessment, disregarded information and made findings on selective information; did not disseminate significant epidemiologic information; deviated from its Risk Assessment Guidelines; failed to disclose important findings and reasoning; and left significant questions without answers. EPA's conduct left substantial holes in the administrative record. While so doing, produced limited evidence, then claimed the weight of the Agency's research evidence demonstrated ETS causes cancer. Gathering all relevant information, researching, and disseminating findings were subordinate to EPA's demonstrating ETS a Group A carcinogen."
The statistical manipulation is the hard part for me. My old man has a PhD in economics; I make my wife balance my checkbook. I will do my best... and by "do my best," I mean steal other people's explanations.
[SIZE=+1]Relative Risk[/SIZE]
Relative risk is determined by first establishing a baseline: an accounting of how common a disease (or condition) is in the general population. This baseline rate is assigned a Relative Risk (RR) of 1.0, meaning no increase or decrease in risk. An increase in risk results in a number larger than 1.0; a decrease results in a number lower than 1.0, which indicates a protective effect.
For instance, if a researcher wants to find out how coffee drinking affects foot fungus, he first has to find out how common foot fungus is in the general population. In this fictional example, let's say he determines that 20 out of every 1,000 people have foot fungus. That's the baseline, an RR of 1.0. If he discovers that 30 out of 1,000 coffee drinkers have foot fungus, he's found a fifty percent increase, which would be expressed as an RR of 1.50.
If he were to find the rate was 40 out of 1,000, it would give him a RR of 2.0.
He might find foot fungus was less common among coffee drinkers. A rate of 15 out of 1,000 would be expressed as a RR of 0.75, indicating that drinking coffee has a protective effect against foot fungus.
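If it helps to see that arithmetic spelled out, here is a minimal sketch in Python using the made-up foot-fungus numbers from above. None of this comes from any real study:
[code]
# Toy illustration of relative risk, using the fictional foot-fungus
# example above. RR = rate in the exposed group / baseline rate.

def relative_risk(exposed_cases, exposed_total, baseline_cases, baseline_total):
    exposed_rate = exposed_cases / exposed_total
    baseline_rate = baseline_cases / baseline_total
    return exposed_rate / baseline_rate

print(relative_risk(30, 1000, 20, 1000))  # 1.5  -> "50% increase"
print(relative_risk(40, 1000, 20, 1000))  # 2.0  -> "twice the risk"
print(relative_risk(15, 1000, 20, 1000))  # 0.75 -> "25% decrease" (protective)
[/code]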
The media usually reports RRs as percentages. An RR of 1.40 is usually reported as a 40% increase, while an RR of 0.90 is reported as a 10% decrease. (In theory, at least. In practice, RRs below 1.0 are seldom reported.)
As a rule of thumb, an RR of at least 2.0 is necessary to indicate a cause-and-effect relationship, and an RR of 3.0 is preferred.
"As a general rule of thumb, we are looking for a relative risk of 3 or more before accepting a paper for publication." - Marcia Angell, editor of the New England Journal of Medicine"
"My basic rule is if the relative risk isn't at least 3 or 4, forget it." - Robert Temple, director of drug evaluation at the Food and Drug Administration.
"Relative risks of less than 2 are considered small and are usually difficult to interpret. Such increases may be due to chance, statistical bias, or the effect of confounding factors that are sometimes not evident." - The National Cancer Institute
"An association is generally considered weak if the odds ratio [relative risk] is under 3.0 and particularly when it is under 2.0, as is the case in the relationship of ETS and lung cancer." - Dr. Kabat, IAQC epidemiologist
This requirement is ignored in almost all studies of ETS.
While it's important to know the RR, it's also very important to find the actual numbers. When dealing with the mass media, beware of the phrase "times more likely."
For instance, a news story may announce, "Banana eaters are four times more likely to get athlete's foot!" You find the study, read the abstract, and find the RR is, indeed, 4.0. But further digging may reveal that the risk went from 1.5 in 10,000 to 6 in 10,000. Technically, the risk is four times greater, but would you worry about a jump from 0.015% to 0.06%?
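Here is the same trap in code, using the invented banana numbers above. The point is that relative risk and absolute risk answer different questions:
[code]
# A big relative risk can sit on top of a tiny absolute risk.
# The banana numbers are the made-up ones from the example above.

baseline = 1.5 / 10_000   # 0.015% of non-banana-eaters get athlete's foot
exposed  = 6.0 / 10_000   # 0.06% of banana eaters do

relative_risk = exposed / baseline       # 4.0 -> "four times more likely!"
absolute_increase = exposed - baseline   # 0.00045 -> 4.5 extra cases per 10,000

print(f"RR: {relative_risk:.1f}")
print(f"Absolute increase: {absolute_increase:.3%}")
[/code]
A fourfold relative increase, but fewer than five extra cases per 10,000 people.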
[SIZE=+1]Confidence Intervals [/SIZE]
The confidence interval (CI) expresses the precision of the RR. It is given as a range of values considered consistent with the data, for instance 0.90 to 1.43.
The narrower the CI, the more precise the estimate. The CI can be narrowed in several ways, including using more accurate data and a larger sample size.
Confidence intervals are usually calculated at a 95% confidence level. Roughly speaking, this means the odds of the result occurring by chance are 5% or less.
This is one reason epidemiology is considered a crude science. (Imagine if your brakes failed 5% of the time.) The EPA, in its infamous 1993 SHS study, used a 90% CI, doubling the chance of a false positive (from 5% to 10%) to achieve its desired result.
The true RR could be any number within the CI. For instance, an RR of 1.15 with a CI of 0.95 to 1.43 could just as well be a finding of 1.25, a 25% increase; or 0.96, a 4% decrease; or 1.0, no correlation at all. Pay close attention to any study where the CI includes 1.0. (It does in virtually all ETS studies.) When the CI includes 1.0, the RR is not statistically significant.
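To see what dropping from a 95% to a 90% confidence level actually buys, here is a rough sketch using the standard large-sample formula for the CI of a relative risk (log(RR) +/- z * SE). The counts are invented purely to illustrate the mechanism; the only thing that changes between the two lines of output is the z value (1.96 for 95%, 1.645 for 90%):
[code]
# Sketch of how loosening the confidence level can flip a result into
# "statistical significance". Counts are invented for illustration only.

import math

a, n1 = 40, 1000   # cases / total in the exposed group (made-up)
c, n2 = 25, 1000   # cases / total in the unexposed group (made-up)

rr = (a / n1) / (c / n2)
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error of ln(RR)

for label, z in [("95% CI", 1.96), ("90% CI", 1.645)]:
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    verdict = "includes 1.0 -> not significant" if lo <= 1.0 <= hi \
        else "excludes 1.0 -> 'significant'"
    print(f"RR = {rr:.2f}, {label}: {lo:.2f} to {hi:.2f}  ({verdict})")
[/code]
With these made-up numbers, the identical data are not significant at the 95% level but "significant" at the 90% level. That is exactly the switch a 90% CI buys you.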
[SIZE=+1]Confounders [/SIZE]
On average, women live longer than men. Any study on longevity has to account for this fact. Gender here is a confounder, which is easy to remember because it can confound the results of a study. (Some studies use the term "confounding variable.") Any study of longevity (usually framed as a study of mortality) that doesn't take this confounder into account will be very inaccurate. For instance, when studying the longevity of smokers, it's important to adjust for the gender difference and for the percentage of men and women in the study.
Sound complicated? It gets worse. Poor people die sooner than rich people. Black people die sooner than white people, even when adjusting for the income confounder. People in some countries live longer than people in others. So if an impoverished black male smoker in Uruguay dies before reaching the median age, is it because of his income, race, gender, smoking, or nationality?
When studying the effects of tobacco exposure, either to the smoker or to those around him, confounders include age, allergies, nationality, race, medications, compliance with medications, education, gas heating and cooking, gender, socioeconomic status, exposure to other chemicals, occupation, use of alcohol, use of marijuana, consumption of saturated fat and other dietary considerations, family history of cancer and domestic radon exposure, to name a few.
When studying the effects of SHS on children, confounders include most of the above, plus breast feeding, crowding, day care and school attendance, maternal age, maternal symptoms of depression, parental allergies, parental respiratory symptoms, and prematurity.
A study that does not account for all of these factors is likely to be very inaccurate, and is probably worthless.
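To make the confounder problem concrete, here is a sketch with invented numbers showing how a single ignored confounder (age, in this toy example) can manufacture an association out of nothing:
[code]
# Made-up numbers: within each age group the exposed and unexposed rates
# are identical, yet the pooled (crude) comparison shows an apparent RR
# well above 1.0, purely because the exposed group skews older.

groups = {
    # age group: (exposed cases, exposed total, unexposed cases, unexposed total)
    "young": (2, 200, 8, 800),    # 1% rate in both arms
    "old":   (40, 800, 10, 200),  # 5% rate in both arms
}

ea = sum(g[0] for g in groups.values())   # total exposed cases
en = sum(g[1] for g in groups.values())   # total exposed subjects
ua = sum(g[2] for g in groups.values())   # total unexposed cases
un = sum(g[3] for g in groups.values())   # total unexposed subjects

crude_rr = (ea / en) / (ua / un)
print(f"Crude RR (age ignored): {crude_rr:.2f}")  # 2.33 -- looks like a real effect

for age, (a, n1, c, n2) in groups.items():
    print(f"  {age}: RR = {(a / n1) / (c / n2):.2f}")  # 1.00 in every stratum
[/code]
Within each age group the rates are identical, yet the pooled numbers show an RR of 2.33. That is the classic Simpson's paradox, and it is why an unadjusted confounder can sink a study.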
That is enough for a first post... everyone is just gonna glaze over it anyway.