First, do no harm. Traditionally, this is the primary tenet by which doctors work – patients and their safety and well-being are most important.
But as more processes become reliant on behind-the-scenes technology, physicians’ ethics and considerations can fall by the wayside through no fault of their own.
Within the last year and a half, hackers have held medical records hostage, infected devices with malware, caused the shutdown of dozens of hospitals in one sweep, and even turned a hospital’s computers into junk.
While the fundamentals of cybersecurity do exist in the healthcare industry, making it stronger is proving complicated. Protecting critical medical systems can run into obstacles that no other industry faces – particularly, patient privacy and safety. And only too often it is a tradeoff between conflicting priorities.
“A clinical risk vs. a security risk – which is bigger?” said Axel Wirth, a healthcare solutions architect with Symantec. “Neither patients nor physicians have had past experience with these types of security-based risk decisions.”
Protecting patient privacy has always been important because of the sensitivity of the information kept in medical records. The implementation of the Health Insurance Portability and Accountability Act (HIPAA), which created stricter standards for who had access to a patient’s information, made that issue a priority in the early 2000s; later laws required wider use of electronic records, adding to the need for stronger security.
That change pushed traditional cybersecurity further up the priority list. Now doctors in solo practice and CEOs of major hospital systems alike also must worry about keeping intruders out of their information, their networks and their devices. And they must approach the issue from multiple directions:
-- Implantable or wearable medical devices such as pacemakers or insulin pumps that can be programmed via external interfaces;
-- Medical equipment such as MRIs, X-ray machines or patient monitors that are directly connected to hospital networks;
-- Record-keeping systems such as electronic records or mobile workstations.
Given that the average 500-bed hospital may have as many as 7,500 medical devices and connected pieces of equipment, the scope of the challenge becomes clear – especially since patients are involved.
“Data protection solutions are often ill-suited to protect human life,” said Beau Woods of the grassroots cybersecurity group I Am the Cavalry and a Cyber Safety Innovation Fellow with the Atlantic Council. “You don’t want to lock the doctor out (of hacked equipment) while a patient needs help.”
Until recently, healthcare providers hadn’t worried too much about hacks. “We always hear, ‘No one would ever go after a hospital – there’s no money in it,’” Woods said.
Now, however, they’re getting caught up in broad-scale attacks that hit many industries at once. For instance, the West Virginia hospital that had to scrap its computer system after the Petya ransomware attack earlier this year was on a list of affected businesses that included pharmaceutical giant Merck and the Danish shipper A.P. Moeller-Maersk.
“Everything we have seen so far with medical device hacks can be classified as collateral or incidental damage,” Wirth said. “(The target) fit the malware or the attacker’s profile – it’s not because they were looking for the medical device. It happened to be on the hospital network and it fit what the attacker was looking for.”
Along with a handful of lawmakers who have introduced legislation on the issue – among them Sen. Richard Blumenthal, D-Conn., and Sen. Mark Warner, D-Va. – the Food and Drug Administration’s Center for Devices and Radiological Health is taking the lead, urging device makers to regularly update and patch their products.
In late 2016, the agency published guidelines for manufacturers to guide them through post-market changes “throughout the total product life cycle,” Dr. Suzanne Schwartz, the associate director for science and strategic partnerships at the center, wrote in the FDA’s official blog in October. “This includes closely monitoring devices already on the market for cybersecurity issues.”
I Am the Cavalry last year proposed a Hippocratic Oath for Connected Medical Devices that would apply to the entire medical community, a takeoff on the original “First, do no harm” Hippocratic oath.
“There are certain things the market side needs to start asking for and demanding,” Woods said. “But part of it is a time-scale problem – (manufacturers) haven’t had time” to develop and implement the security features that would make devices safer. It can take at least five years for a device to pass FDA muster and reach market, where it can stay for as long as 20 years, he added.
“You have a group of devices that are very compact and very critical from a function standpoint,” Wirth said. “It’s very difficult to add new features after the fact. You’re restrained by the design of the device. … This is not a problem that can be remedied quickly. If you didn’t design in security from the get-go, it’s difficult to retrofit it.”
The FDA has decided that manufacturers who have figured out how to upgrade security on an existing device don’t have to repeat the entire regulatory approval process. “But I would still need to retest and recertify the device, which is still a burden. It’s not trivial,” Wirth said.
If you are interested in this topic, Axel Wirth will be speaking at the HIMSS18 (Healthcare Information and Management Systems Society) conference in March.