Addressing Top Health Data Risk Management Mistakes

Clearwater Compliance CEO Bob Chaput on Critical Steps to Take
Healthcare organizations must take several important steps to improve their risk management programs, says security expert Bob Chaput.
"There are several actions [organizations] can take, but if you hold me to one - we have got to build security into the system development life cycle," says Chaput, CEO of Clearwater Compliance, who is speaking on risk management at the HIMSS18 conference in Las Vegas.
In an interview with Information Security Media Group (see transcript below), Chaput discusses:
- How covered entities and business associates can address some of the most common mistakes they make in their risk management programs;
- The most concerning emerging cyber threats and challenges in the sector;
- Risks and threats involving medical devices.
Chaput has nearly 40 years of combined healthcare and cybersecurity experience, managing complex projects for more than 500 clients, including large healthcare delivery networks, hospitals and health plans. He holds several professional and technical certifications, including Certified Information Systems Security Professional, HealthCare Information Security and Privacy Practitioner, Certified in Risk and Information Systems Control and Certified Information Privacy Professional/US.
MARIANNE KOLBASUK MCGEE: In its HIPAA enforcement activity, the Department of Health and Human Services' Office for Civil Rights has repeatedly spotlighted the lack of a comprehensive and timely enterprisewide risk analysis by covered entities and business associates. Why does this continue to be such a struggle for so many healthcare sector organizations?
BOB CHAPUT: There are a number of reasons. One of them is what I'll label as the "wrong view." That is to say, we're going through an evolution from this being a compliance matter to it being a security matter to, frankly, it becoming a patient harm issue. And patient harm is a very serious enterprise risk management issue. There are lots of organizations that just haven't made the move to view this as an enterprise risk management matter.
The second thing I'll cite is catching up. Healthcare, of course, is playing major catch-up with technology adoption. We've seen an explosion in the amount of healthcare data that's gone online over the last 10 years or so, and it's continuing. And if healthcare is playing catch-up in the adoption of technology, the gap is even greater when it comes to safeguarding that information.
The third thing is there is a fundamental misunderstanding as to what makes up a comprehensive and accurate risk assessment. The upshot of that is that organizations believe that a compliance gap assessment is sufficient. They believe that conducting certain technical testing, like pen testing and vulnerability scans and social engineering testing, is sufficient, and they're not.
The next issue is really key: It's a matter of a wrong skill set. Now increasingly, we're seeing a greater number of very smart security architects, security engineers, security operations people who are coming into healthcare. ... That's not the same as risk analysis and risk management people.
And I often draw an analogy of harking back to the old days of the Miller Analogies Test. Security operations people are to risk management as accountants are to financial analysts. Both are important roles; both require well-honed skills and knowledge and experience. But they're very different roles. The point is that being a good security person does not make you a good risk analyst.
And then the last point as to why this is happening is a risk analysis is hard. It is nearly impossible to perform this work without the right tools.
We see the purview of the Office for Civil Rights. It is appropriately looking at IT assets, EMRs, the patient accounting system, patient billing, pharmacy, etc., and also looking at our medical devices, and last but not least, looking at the internet of things, building management systems, etc. And if you think about a midsized integrated delivery network with perhaps 15 to 20 hospitals and clinics and ambulatory surgery centers, by the time you consider all the information assets that need to be risk analyzed, it's into the hundreds if not crossing into the thousands, so it's hard work to do.
KOLBASUK MCGEE: Are there common mistakes that you see covered entities and business associates making when it comes to their risk analysis, as well as their overall risk management programs and the steps that they can take to address these issues?
CHAPUT: Number one, the most common mistake, is submitting the wrong report. I alluded to this a moment ago. Organizations submit information that meets the nontechnical evaluation or the technical evaluation cited in the standards, rather than the risk analysis itself. It's critical to understand there is a difference. In fact, the HIPAA Security Rule calls for three separate assessments: the nontechnical evaluation, the technical evaluation and the risk analysis. I'm afraid that many consulting firms and a lot of the audit firms out there come to the table with a point of view that this is about going through some checklist, and that has led customers to make such submissions.
Point two: It's not asset-based. A moment ago I alluded to the three categories: traditional IT assets, biomedical devices and the internet of things. What's happening is that organizations are starting with a list of controls and answering questions as to whether those controls are in place, rather than starting with the Epic system, or a GE or Siemens system, and really getting into understanding the reasonably anticipated threats to that asset.
The third most common mistake: not comprehensive enough. That's our interpretation of what we see in the corrective action plans, 54 in total, 41 of them involving ePHI. Organizations are not including every asset in every line of business in every facility in every location in their risk analysis. It gets back to the matter I mentioned earlier: It's hard. It takes a lot to do that.
The next point is: They're not detailed enough. We have seen enough OCR investigation letter data requests and enough corrective action plans to infer that OCR is looking for risk analysis work to be performed at a very detailed level.
As an example, if we're risk analyzing the Epic system, we're going to peel the onion back and look for where all the PHI actually lives. It might be on an application server or a database server, snapshotted out to a storage area network and, good news, backed up in the evening. Information is downloaded to workstations on wheels, to laptops and to Android and iOS devices. Every one of those media types needs to be considered individually.
If an organization has 25,000 laptops, we don't risk analyze each one; we group the laptops together. Now think about a laptop, using that as an example. What's a reasonably anticipated threat? Well, one very common one is it may be stolen. What's a reasonably anticipated vulnerability? It may not have encryption. What I just described constitutes a risk.
Risk exists when one has an asset, a threat and a vulnerability. To paint the picture a little further: same laptop, same asset, same threat source (it may be stolen), but a new vulnerability: the organization doesn't have strong passwords. That's a second risk. Then take another laptop. Now I'm a home health clinician, and there's a new threat source: a shoulder surfer when I'm in Starbucks in between appointments. The vulnerability in that case is the lack of a privacy screen.
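The asset-threat-vulnerability triple Chaput describes can be sketched as a simple data model. This is an illustrative sketch, not a tool from the interview; the asset names and scenarios are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Risk:
    """A risk exists only when an asset, a threat source and a vulnerability coincide."""
    asset: str
    threat: str
    vulnerability: str

# The same asset group can yield several distinct risks as threat sources
# and vulnerabilities are enumerated, exactly as in the laptop examples above.
laptop_risks = [
    Risk("clinician laptop", "theft", "no disk encryption"),
    Risk("clinician laptop", "theft", "weak passwords"),
    Risk("clinician laptop", "shoulder surfing in public", "no privacy screen"),
]

for r in laptop_risks:
    print(f"{r.asset}: {r.threat} / {r.vulnerability}")
```

Note that the same asset and threat source appear twice: each distinct vulnerability produces its own risk entry, which is why an enterprisewide analysis grows so quickly.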
OCR is looking for that level of granularity when it comes to conducting these risk analyses. And another common mistake is that organizations are not following the guidance that's been put forth.
Some would assert that OCR has not provided clarity on what they're looking for, and I would beg to differ. Between the guidance published ... plus the 41 case studies I mentioned, and last but not least, the OCR audit protocol, it seems eminently clear to me that OCR has provided guidance as to how to do this.
We've worked with a number of organizations, large integrated delivery networks, and we've been brought into situations where vendor number one's risk analysis was rejected by OCR. Vendor two, rejected. Vendor three, rejected - all because they simply were not following the guidance.
Last but not least is the matter of documentation, rigor and indication of level of engagement. We see OCR looking for evidence of a vibrant ongoing program, not a flash in the pan that came about as a result of meaningful use attestation. Numerous cases have cited lack of policy, procedures, practices, documentation or risk analysis.
There are two cases at least where OCR has dinged the organization because of lack of management engagement. That's public domain information. One of them is Oregon Health & Science University and the other is the University of Mississippi Medical Center. It's clear to me what they're looking for, and yet organizations are making these mistakes over and over again.
Security by Design
KOLBASUK MCGEE: You mentioned a pretty long list of mistakes that they're making. Is there any one big step that they can take to correct some of these or many of these mistakes?
CHAPUT: There are probably several actions that organizations should take, but if you held me to one, this would be my response: We have got to think about building security into the system development lifecycle or the technology deployment lifecycle.
... In healthcare, we've deployed all sorts of great solutions, but we've done it without regard to security. We need to build security by design into the technology development lifecycle.
KOLBASUK MCGEE: Bob, what are some of the emerging threats and risks that trouble you the most?
CHAPUT: Well, we know that the landscape is dynamic, to say the least. There are latent vulnerabilities that appear and are discovered. There are threat sources that emerge. If I had to hone in on one particular area, I'm going to focus on this matter of patient harm and patient safety, and within that world we talk about biomedical devices.
We now have devices that are attached to our patients or implanted in them - defibrillators, pacemakers. The attachment might be a wireless IV infusion pump or an insulin pump. And then we have devices that are not necessarily attached to them, such as medical imaging devices. This whole category of biomedical devices represents the single biggest threat, because risk, after all, is about loss or harm. And if these devices are compromised, there could be direct, physical patient harm, if not a life-or-death matter.
Take a typical medical imaging device, a CT scanner, as an example. The attack might be the introduction of malware into the primary computing platform; modification of the configuration file could cause the device to operate wildly differently from the way it was designed. These devices are electromechanical devices. They have motors. Someone hacking into a medical imaging device, the CT scanner, and modifying the pattern of the motors in those devices, moving patients in and out of the radiation, could cause very serious physical harm.
Think about someone attacking one of these CT scanners and modifying the images. What if a tumor that was there is no longer there, because the image was modified, and there's a misdiagnosis? Or what if an aberration that appears to be a tumor ends up being treated when there's not really an issue there?
And then, of course, last but not least in terms of attacks is the matter of a distributed denial-of-service attack on one of these devices. I go back to the area of biomedical devices. I'm very concerned about confidentiality, integrity and availability of information. Where we as an industry may sustain the greatest harm is when it comes to our patients; that's top of the list for me in terms of top threats and top risks.
Risk Management Steps
KOLBASUK MCGEE: Are there any recommended steps that you think that healthcare entities should take to get a better handle on the risks and threats that they're dealing with when it comes to these medical devices?
CHAPUT: Well, it actually gets back to the theme of this conversation, which is about risk assessment and ultimately risk management. And of course, the common working definition of risk assessment is all about truly understanding what my exposures are, assessing the likelihood of each bad thing happening, assessing the impact were it to happen, and then prioritizing those risks and going after them.
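The working definition Chaput gives, assess likelihood, assess impact, then prioritize, maps onto a common qualitative scoring scheme. A minimal sketch in Python, assuming an illustrative 1-5 scale and hypothetical risk names (neither the scale nor the names comes from the interview):

```python
# Each risk gets a (likelihood, impact) pair on an assumed 1-5 scale;
# priority is their product, a common qualitative risk-rating approach.
risks = {
    "stolen unencrypted laptop": (4, 4),   # (likelihood, impact)
    "CT scanner malware":        (2, 5),
    "shoulder surfing":          (3, 2),
}

# Sort highest score first so remediation effort goes to the top exposures.
prioritized = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (likelihood, impact) in prioritized:
    print(f"{name}: score {likelihood * impact}")
```

The design choice here is the simplest possible: multiplying ordinal scores. Real programs often use a risk matrix or a quantitative method instead, but the flow, estimate likelihood and impact per risk, then rank, is the same one described above.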
So we see a lot of medical device manufacturers absolutely stepping up in this regard and really formalizing what they're doing from a risk analysis point of view. In fact, the National Cybersecurity Center of Excellence, which is part of NIST, has run a couple of projects, and actually completed one last year around risk analysis of wireless IV infusion pumps. ... So there's good, positive movement in this direction by the device manufacturers, as well as, of course, the healthcare delivery organizations themselves.