Software and connectivity underpin virtually all modern medical devices. Without careful implementation, software risks can offset the medical benefits those devices provide. Most patients and medical professionals view medical devices in strictly mechanical terms: devices are imagined as highly specialized technologies whose sole purpose is benefitting people. But these devices are built from commodity software and hardware using traditional software engineering techniques, and the residual software problems of the Web and the PC inevitably seep into the medical world along with them.
Medical device makers must work against a few ingrained biases to minimize the risk that software failure poses to patients.
No more security through obscurity. Medical devices include not only the implantable kinds (e.g., pacemakers, cochlear implants), but also telematics and wearable devices. Early generations of these devices could be understood only by highly trained and deeply experienced technical experts because they used custom hardware and ran hand-tuned code built exclusively for a specific device. When the materials to build, program and debug a device are expensive and rare, few people can misuse or abuse it. Today, however, devices often include commodity technologies like Bluetooth, WiFi and NFC, which allow for cheaper and faster development and a wider pool of developers who can help create the device. That commodity foundation also means many more people understand enough to misuse and abuse the devices.
The legacy of obscurity in medical devices means that developers still believe their source code, designs and other technical details are closed and hidden. But because devices are made from well-known parts and use well-known protocols, we can often observe enough behavior externally to infer much about how they work. The implementation details matter for device integrity, device trustworthiness, patient privacy and safety. Although rarity and obscurity may have provided some modest protection in the past, that protection is long gone and can no longer be assumed.
The right answer, in most industries, is to build security in: start with security considerations at the beginning of the lifecycle and continue them all the way through post-deployment. Medical devices demand the same treatment.
Tyranny of miniaturization. Smaller size brings diminished computational capacity, yet many secure protocols and security designs require significant computation when implemented correctly. We need modern cryptography in order to provide security assurances like confidentiality of medical data at rest and in transit. We need CPU cycles to calculate integrity checks on software and connections. These kinds of computation, however, drain batteries and make communications take longer to set up.
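To make that cost concrete, here is a minimal sketch in Python of the kind of software integrity check described above, computing an HMAC over a firmware image. The function name, key handling and tag source are illustrative assumptions; a real device would run equivalent logic in constrained embedded code. Every byte hashed here is CPU time and battery drawn from the device.

    import hashlib
    import hmac

    def firmware_is_intact(image: bytes, expected_tag: bytes, device_key: bytes) -> bool:
        """Recompute an HMAC-SHA256 tag over the firmware image and compare it
        to the tag stored at provisioning time (all names are hypothetical)."""
        computed = hmac.new(device_key, image, hashlib.sha256).digest()
        # compare_digest runs in constant time, avoiding a timing side channel
        return hmac.compare_digest(computed, expected_tag)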
Cryptographically protected communication is more complicated and comes with a number of error situations that could not occur otherwise. These errors complicate the devices and the software that interacts with them. As we shrink devices, they can perform duties they never could before, but these tiny form factors have limited computational horsepower and no reasonable user interface for handling security failures.
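As a sketch of those extra error paths, consider a fail-closed TLS connection in Python; the host name and status endpoint are hypothetical. The certificate-verification failure below is a whole class of error a plaintext link can never raise, and an embedded device must decide how to handle it with no user to ask.

    import socket
    import ssl

    def fetch_device_status(host: str, port: int = 443) -> bytes:
        # the default context verifies certificates and hostnames
        context = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    tls.sendall(b"GET /status HTTP/1.0\r\nHost: %s\r\n\r\n" % host.encode())
                    return tls.recv(4096)
        except ssl.SSLCertVerificationError:
            # fail closed: an untrusted certificate means no connection at all,
            # never a silent fallback to plaintext
            raise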
Updating is an attack vector. One of the great benefits of modern devices is the ability to update them after they are fielded. Whether a device is implanted or worn, we have safe and practical ways to change the software without changing the physical device. Any means of updating the software, however, is also a means of injecting new and malicious instructions into the device. Any device that listens for a trusted updater must be able to reject malicious ones. Devices will be connected to home WiFi, garden-variety personal computers and general-purpose smartphones, so one can't assume that the only systems trying to submit new software to a device are authorized systems. Software integrity issues are not new, but they are newly important to medical devices.
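One well-established way to reject malicious updaters is to accept only updates signed by a key the device trusts. A minimal sketch, assuming the Python cryptography package and an Ed25519 vendor key; the function and parameter names are illustrative:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def accept_update(image: bytes, signature: bytes, vendor_public_key: bytes) -> bool:
        """Install an update only if its signature verifies against the
        vendor public key provisioned into the device at manufacture."""
        public_key = Ed25519PublicKey.from_public_bytes(vendor_public_key)
        try:
            public_key.verify(signature, image)
            return True
        except InvalidSignature:
            # anyone who cannot prove possession of the vendor's signing key
            # is treated as a malicious updater and refused
            return False

Because only the public key lives on the device, extracting it from one compromised unit does not let an attacker sign updates for the rest of the fleet.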
Secure by default. To succeed as mHealth devices, medical devices need to be used correctly by non-technical people. In the name of usability, many security features are disabled, implemented badly or built with insecure defaults. Consider Bluetooth pairing passcodes, certificates and keys for secure communication, and password-protected medical data; this security data might need to be personalized on a per-user basis, a per-device basis, or both. Manufacturers often use flawed defaults like weak initial passwords, fail-open TLS communications or hard-coded pairing passcodes, supposedly to simplify the user's experience. While it requires more effort to make robust and secure devices, all the security solutions are well known. It is simply a matter of making a few pragmatic implementation choices when adapting devices to the medical domain.
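A pragmatic example of one such choice: generating a unique pairing code for each device during provisioning instead of shipping a hard-coded default. A sketch in Python, with hypothetical names, using the standard secrets module:

    import secrets
    import string

    def provision_pairing_code(length: int = 8) -> str:
        """Generate a random pairing code unique to each device at
        manufacture, rather than a default shared across the product line."""
        alphabet = string.ascii_uppercase + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

Printing that code on the device's label adds a manufacturing step, but it removes an entire class of default-credential attacks.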
Unique little snowflakes, just like everyone else. The more medical devices leverage commodity hardware, software and protocols, the more they can leverage tried and true security techniques. Medical devices have constraints that must be considered, but the secure solutions are known. Software security is a well-understood field, and that is good news for anyone looking to secure devices built on software.
Paco Hope is the Principal Consultant for the software and app security company Cigital.