McKinsey’s results on employer based healthcare: It’s all in how you look at it.

Date: June 28, 2011 | Shawn Herbig | News

McKinsey & Company recently released a report claiming that 30% of employers in the United States will drop their employee insurance coverage in 2014, when the Patient Protection and Affordable Care Act (aka Obamacare) takes full effect.  Before we get into the controversy surrounding these results (and they are indeed making waves across the research and government communities), let’s take a brief look at what the study indicated:

  • 30% of employers will definitely or probably stop offering employer-sponsored insurance (ESI) in the years after 2014.
  • Among employers with a high awareness of reform, this proportion increases to more than 50%, and upward of 60% will pursue some alternative to traditional ESI.
  • At least 30% of employers would gain economically from dropping coverage even if they completely compensated employees for the change through other benefit offerings or higher salaries.
  • Contrary to what many employers assume, more than 85% of employees would remain at their jobs even if their employer stopped offering ESI, although about 60% would expect increased compensation.

The point of contention is generally with the first figure mentioned above, namely that 30% of employers will drop employer-sponsored insurance.  It’s a figure that you see popping up all over the place, cited by the New York Times, National Public Radio, the Los Angeles Times, and the Wall Street Journal (to name a few), not to mention all the independent bloggers and journalists out there posting on the topic.  In response, McKinsey has released its methodology and has even created a separate email address to handle inquiries about the study.  The survey itself has also been released.

But why is this turning into such a controversy?  Simply put, it does not correspond with past figures citing attrition of ESI due to the Affordable Care Act.  Furthermore, it is being used as political fodder among the Republican presidential nominees to attack Obamacare.  Other research, conducted by the Mercer Group, the Rand Corporation, and the Urban Institute, has cited attrition projections that are much lower than McKinsey’s 30%.  What is more, it took McKinsey some time to release the methodology (the firm rejected requests at first, until it began to feel the pressure of every major news outlet clamoring for it).

It is easy to get caught up in the controversy surrounding this, but as a researcher, I am more curious about why the controversy exists in the first place, particularly the data underlying the results and the way in which they were collected.

To be forthright, the methodology of the research seems sound enough, and the questionnaire itself does not appear to be skewed in a way that would elicit particular responses.  But if that is the case, then why all the fuss?  The answer lies beneath the surface: it is a function of context and framing rather than accuracy.  And this, to some degree, is addressed by McKinsey in its methodology response.

McKinsey’s study was a study of perceptions, while the Urban Institute and the others used forecasting models to predict the impact the healthcare bill would have on ESI.  Given this distinction, it becomes clearer why such discrepancies exist between the various studies.

But why, you may ask, should these two approaches differ so drastically, and which one is more correct and reliable?  Well, for the second question, only time will tell and I’m not about to open that can of worms, but the first question can be answered pretty simply.

Perception studies, like the one McKinsey performed, capture responses at a single point in time and are influenced by the emotions surrounding the topic at that moment.  As we have seen, the emotions surrounding this topic are particularly contentious right now.  This is not to say that perception studies are so fraught with emotion that they cannot be trusted.

In this case, the employers were asked if they would continue their ESI based on a specific scenario.  Some 30% indicated that they likely would not.  When 2014 arrives, maybe all 30% will do exactly as they indicated, but more likely, some of the respondents will change their minds based on the final financial implications as well as new information at that time.

Forecasting models, on the other hand, are designed to take into account numerous scenarios based on what the healthcare bill may provide and the predicted responses to them.  They typically use regression modeling and past performance to predict the behavior of both those who indicated they would drop ESI and those who did not.
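The distinction can be made concrete with a toy sketch.  Everything below is hypothetical: the responses, the follow-through rates, and the weighting scheme are invented for illustration, not drawn from McKinsey’s survey or the forecasters’ actual models, which are far more elaborate.  The sketch only shows why a snapshot of stated intentions and a forecast that discounts those intentions by historical follow-through can land on different numbers:

```python
# Hypothetical illustration only; none of these figures come from the
# studies discussed above.
from collections import Counter

# A perception study is a snapshot: tally stated intentions at one moment.
responses = ["keep", "drop", "keep", "keep", "drop",
             "keep", "keep", "keep", "drop", "keep"]
share_saying_drop = Counter(responses)["drop"] / len(responses)

# A forecasting model instead weights each stated intention by how often
# similar intentions translated into action in the past (invented rates).
historical_follow_through = {"drop": 0.50, "keep": 0.05}
projected_drop_rate = sum(historical_follow_through[r]
                          for r in responses) / len(responses)

print(f"stated intention to drop ESI: {share_saying_drop:.1%}")
print(f"model-projected drop rate:    {projected_drop_rate:.1%}")
```

Here the stated-intention figure (30%) exceeds the projection (roughly 18.5%) simply because history suggests not everyone who says “drop” follows through, which is the same structural reason the survey and the forecasts can both be internally sound yet disagree.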

Perhaps emotions will die down and a larger percentage of employers will decide to stay with their ESI, or perhaps the forecasting models are underestimating the actual response come 2014 – time will provide that answer.  McKinsey’s study should not be discounted simply because its results differ from previous predictions.  And let’s not forget that the perceptions and opinions it measured are indeed those of the decision-makers themselves.  However, this is a perfect example of how people can be misled by statistics and figures that give the appearance of contradiction.  I’m not trying to argue which model is right and which is wrong; both are valid and serve a valuable purpose.  My point here is to shed some light on why these models differ.

Research is about providing answers and both models provide different parts of the answer.  If we allow our own emotions and preconceived notions to take control then we will lose this answer in the midst of controversy.
