BoatShoes;1502572 wrote:I can sympathize with this. However, I think the conservative thing to do is to treat the 90% CI as warranting strategic action. Given widespread unemployment and excess capacity in our economy, we could do worse than to mobilize real resources to convert to a greener economy based on the assertion that is very likely that human consumption of fossil fuels/emitting greenhouse gases, etc. is causing climate change that will have negative economic effects.
You are putting entirely too much faith in a 90% CI and misunderstanding what it actually tells you. What you are proposing is not conservative; it's destructive, and it places a lot of unwarranted faith in a bunch of lawyers as central planners.
The reason so many of these models are failing and proving inadequate is that "very likely" tremendously overstates reality. The reason you go with a higher CI in such complex systems is the very real probability that you are measuring unaccounted-for factors rather than an effect of the test variable.
A 99% CI means you are widening your margin for error to allow for the inherent issues with the models and data. That should be intuitive even for a layperson, and it's obvious if you've been following this story for the last 15-20 years.
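To put rough numbers on that, here is a minimal sketch of how the same estimate looks at different confidence levels. Nothing here is tied to any actual climate model; the 0.8 "effect" and 0.45 standard error are made up purely for illustration, using a plain normal-approximation interval.

```python
import numpy as np
from scipy import stats

# Illustrative only: width of a normal-approximation CI for one estimate
# at different confidence levels. The estimate and standard error are
# invented numbers, not anything from a real climate study.
sample_mean = 0.8   # hypothetical estimated effect
std_error = 0.45    # hypothetical standard error of that estimate

for level in (0.90, 0.95, 0.99):
    z = stats.norm.ppf(1 - (1 - level) / 2)   # critical value: ~1.64, ~1.96, ~2.58
    lo, hi = sample_mean - z * std_error, sample_mean + z * std_error
    print(f"{level:.0%} CI: ({lo:.2f}, {hi:.2f})  width = {2 * z * std_error:.2f}")
```

With those made-up numbers, the 90% interval excludes zero while the 99% interval does not. That is exactly the kind of gap between "very likely" and "we actually left room for error" that I'm talking about.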
A 90% CI is more accurately interpreted as evidence of a POSSIBLE relationship warranting further study. You should never put your money on that relationship actually existing; it is that tenuous. The "very likely" is specific to the model and data as represented, which is to say that if the model and data DON'T accurately represent reality, then the only thing very likely is that your model is junk. However, politics (and more broadly than Dem/Repub) comes into play and grossly misrepresents the "certainty" of a relationship.
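Here is the flip side as a toy simulation: how often pure noise clears a 90% bar versus a 99% bar. Again, this has nothing to do with any real climate data; the sample size and trial count are arbitrary, and it only illustrates the textbook false-positive rates.

```python
import numpy as np
from scipy import stats

# Toy simulation: how often does pure noise look like a "relationship"
# at the 90% level versus the 99% level? All numbers are made up;
# this only demonstrates false-positive rates, not any real analysis.
rng = np.random.default_rng(0)
n_trials, n_obs = 10_000, 30

for alpha in (0.10, 0.01):
    hits = 0
    for _ in range(n_trials):
        x = rng.normal(size=n_obs)       # "driver": pure noise
        y = rng.normal(size=n_obs)       # "response": unrelated pure noise
        r, p = stats.pearsonr(x, y)      # correlation and its p-value
        hits += p < alpha
    print(f"alpha = {alpha}: 'found' a relationship in {hits / n_trials:.1%} of trials")
```

Roughly one "relationship" in ten is noise at the 90% level even when everything else about the analysis is perfect, and real models are nowhere near perfect.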
Now what SHOULD happen, if there is truly a relationship, is that the models evolve and you get real statistical power at the 95% or 99% significance level. But that's not really been the direction. In fairness, computing power has been a genuinely limiting constraint. Another real problem is that we seem to be identifying NEW sources of NEW error rather than closing the gap on the existing error. What that means is this model here that was "very likely" at the 90% level has been invalidated, and now here's a NEW model that is "very likely" at the 90% level.
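Just to illustrate what "real power at 95% or 99%" would look like if the models and data actually improved, here is another toy calculation. The 0.3 effect size and the sample sizes are made up; it's a standard two-sided z-test power formula, nothing more.

```python
import numpy as np
from scipy import stats

# Illustrative power calculation for a two-sided z-test of a small real effect.
# The effect size and sample sizes are invented, purely to show the trade-off
# between stricter significance thresholds and better data.
effect = 0.3   # hypothetical true effect, in standard-deviation units

for n in (50, 200, 800):
    se = 1 / np.sqrt(n)                      # standard error shrinks with more data
    for alpha in (0.05, 0.01):
        z_crit = stats.norm.ppf(1 - alpha / 2)
        # probability the estimate clears the critical value given the true effect
        power = stats.norm.sf(z_crit - effect / se) + stats.norm.cdf(-z_crit - effect / se)
        print(f"n = {n:4d}  alpha = {alpha}: power = {power:.2f}")
```

That is the trajectory you'd expect if the relationship were real and the measurements were getting better: detection rates climbing at the stricter thresholds, not a new 90%-level model replacing the last one.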
Wash, rinse, and repeat. Then, being the rational and intelligent fellow you are, how many times are they going to get it wrong before you start to question the "strong scientific certainty" you've been led to believe exists?