Presidential Failure: How Pollsters' Errors Could Impact False Claims Cases

For those of us who want to use appropriate scientific evidence to fight fraud, the recent failure of pollsters in the election must be explained and examined.

Statistical analysis has been held to be a legitimate tool for determining both liability and the extent of damages in certain circumstances. A small sample of claims made in a hospice could show that false claims were made extensively for services lacking a demonstration of medical necessity, creating liability under the False Claims Act.

In any such case you need not examine every claim. If you have, say, 100,000 charges to Medicare, you can draw an appropriate sample, and if 48 percent of the sampled claims turn out to be upcoded or to lack medical necessity in similar ways, you may have a good case. See, for example, United States ex rel. Martin v. Life Care Ctrs. of Am., Inc., 2014 U.S. Dist. LEXIS 142660 (E.D. Tenn. Sept. 29, 2014).
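As a rough illustration (my own numbers, not figures from the Life Care case), here is a minimal sketch in Python of the extrapolation logic: estimate the deficiency rate from a hypothetical sample, put a standard confidence interval around it, and project that range onto the full universe of claims. The sample size and error count are assumptions chosen for the example.

```python
import math

# Hypothetical figures, for illustration only: a universe of 100,000
# Medicare charges and a random sample of 400 reviewed claims,
# 192 of which (48 percent) were found deficient.
population = 100_000
sample_size = 400
deficient_in_sample = 192

# Point estimate of the deficiency rate.
p_hat = deficient_in_sample / sample_size

# 95 percent confidence interval using the normal approximation
# (reasonable here because the sample is large and p_hat is not
# near 0 or 1).
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
low, high = p_hat - margin, p_hat + margin

# Project the interval onto the full population of claims.
print(f"Estimated deficiency rate: {p_hat:.1%} "
      f"(95% CI: {low:.1%} to {high:.1%})")
print(f"Projected deficient claims: {population * low:,.0f} "
      f"to {population * high:,.0f} of {population:,}")
```

Even at the low end of the interval, the projection points to tens of thousands of deficient claims, which is why a few points of sampling error need not sink such a case.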

Unfortunately, this science took a bit of a hit in recent weeks. Maybe you noticed that just about every professional pollster got the result wrong in the final week leading up to the election. Those with consciences are busy either making excuses or vowing to do better next time.

Making too much of this, without looking at the differences between election polling and the sampling of objective data, is a danger to those of us who want to use appropriate scientific methods to measure damages and expose liability. I have no doubt that persistent, well-documented polling failures will be used to harm our efforts.

Yes, examples of polling error abound. Nobody remembers 2012 as a year of polling disaster, but it was: most pollsters had Obama winning by a considerably smaller margin than the one he actually won by. Nobody remembers it because he won anyway, in line with the headline prediction.

This year most pollsters had Clinton ahead by 4 percent or so nationally. Her popular vote edge ended up being smaller, but this time the miss flipped the headline result: Donald Trump won the election. The pollsters therefore look much stupider for a mistake of about the same magnitude as in 2012.

I think there are many reasons to be skeptical of polling (a subset of statistical sampling) that do not necessarily affect the legitimate uses of statistical sampling.

Pollsters in an election season face little actual scrutiny. A few companies attempt to rate them, but what is the cost to a pollster of putting out an inaccurate nationwide poll of the electorate? Nothing; in fact, most pollsters get free publicity for doing just that. So why not keep putting numbers on the board? In a court case, by contrast, such issues are open to adversarial scrutiny.

In addition, polling human beings in an election is a more difficult prospect than dealing with, say, Medicare claims. People may, of course, lie to the pollster. The biggest difference, however, is the question of determining the sample: whom do you poll?

Turnout is the key difference that separates winning from losing in almost any campaign, and there is no real way to know in advance which registered voters will show up in any given election cycle. When sampling evidence for a court, by contrast, there is generally a defined and limited set of claims to review. A claim to be sampled does not depend on the mood of a human being or on whether that person simply does not feel like participating.
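The contrast shows up immediately if you try to write the sampling step down. For a claims review, the frame is simply a fixed list of known claims; for an election, the frame of people who will actually vote does not exist until election day. A minimal sketch, using an invented list of claim IDs:

```python
import random

# A defined, limited sampling frame: every claim ID is known up front,
# so a true random sample of the universe is straightforward to draw.
claim_ids = [f"CLM-{i:06d}" for i in range(100_000)]  # hypothetical IDs
reviewed = random.sample(claim_ids, k=400)            # claims to audit

# No equivalent exists for an electorate: the "frame" of people who
# will actually vote is unknown until election day, so pollsters must
# guess at it with likely-voter models.
print(f"Drew {len(reviewed)} claims for review, e.g. {reviewed[:3]}")
```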

Another difference is that a small error can change the entire result in an election; being off by 1 percent can indeed be the difference between winning and losing. In statistical sampling for a case, an error of that magnitude can be acknowledged without undermining the ultimate conclusion or materially changing what the parties may agree to.
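To make the asymmetry concrete, here is a small sketch, with invented numbers, of what the same 1-point error does to an election call versus a claims extrapolation:

```python
# Hypothetical illustration only; all figures invented.

# Election: the poll shows the candidate at 50.5 percent; the true
# figure is one point lower. The predicted winner is simply wrong.
polled, actual = 0.505, 0.495
print("Predicted to win:", polled > 0.5, "-> actually wins:", actual > 0.5)

# Claims review: an estimated 48 percent deficiency rate, off by the
# same one point in either direction, applied to 100,000 claims.
population = 100_000
for rate in (0.47, 0.48, 0.49):
    print(f"At {rate:.0%}: roughly {population * rate:,.0f} deficient claims")
```

The election call flips outright, while the claims projection merely shifts from roughly 47,000 to 49,000 deficient claims; the conclusion that there is massive liability survives.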

I hope we crack down on pollsters predicting the results of elections. Human beings can and do change their minds for many reasons, may not tell a pollster the truth, or may simply stay home after saying they will vote; there are too many variables for us to allow pollsters to gain such notoriety in close elections. We should accept that the election will be close, go vote, and not waste so much time trying to predict it.

Yet the very failures of polling in such a close election again reinforce the legitimacy of the overall science of statistical sampling. It works just fine on objective material.