Designing solutions for medical applications is growing more complex as responsibility for regulatory compliance shifts increasingly onto the designer. Taking advantage of FPGAs can help designers reduce their time to market.
The complexity of medical devices is ever increasing. Medical devices range from simple tools like the stethoscope to gene-sequencing machines and tele-operated surgical devices, and as devices become more complex, so do testing and risk assessment. The FDA's medical device recall database reports a 17% increase in recalls since 2009. A medical device recall is the most costly and damaging result of a missed defect or problem, and it can be devastating to a company's reputation. As the complexity of medical devices grows, so does the complexity of the hardware, the software, and the number of lines of code (Fig. 1). The FDA sets rules and guidelines to help ensure that quality products enter the market, but the burden of risk assessment falls on the product owner. Under pressure to get a device to market quickly, how can you manage the quality of a product while ensuring a well-thought-out risk assessment?
1. The FDA does not review code, but instead reviews the process used to develop code.
There is a potential solution to this problem, and stricter testing and review is not necessarily the answer. Ensuring the quality and reliability of a product requires a strong understanding and use of software engineering practices, even as a hardware designer. Software engineering generally refers to a regimented, procedural methodology for designing, developing, and testing code. While multiple process models for software engineering have emerged over the years, almost all of them describe specific phases and criteria that must be met before moving on in the life-cycle process. The following list describes practices from the software engineering process for engineers developing medical devices. As you will see, software engineering practices merge with hardware elements, given the continued growth of field-programmable gate arrays (FPGAs) in many medical designs.
Define a process: In the medical field, it's not enough to trust that a design process exists; the process needs to be reviewed regularly. Specifically, it's important to continually review how and when the process was followed with regard to a specific project or application. Many tools, such as NI Requirements Gateway and DOORS, exist to help reduce the often manual processes followed by medical companies today during peer reviews and traceability work. These tools help improve participation and accountability for some of the less pleasant tasks like code reviews, documentation, and unit testing. They also make it possible to bring new engineers into existing projects, since tasks and checklists are more clearly defined. Finally, they provide an audit trail for the FDA, which helps demonstrate that you've done your due diligence to ensure proper and safe operation.
Use a change management system: Developing medical software without a change management system is playing with fire—you are going to get burned. Change control systems are a critical component of any development process. As the name implies, they provide mechanisms for tracking and understanding when something in the application is modified, who modified it, and the potential implications of the modification. One of the most fundamental components of a change control system is source-code control.
Leverage FPGAs for critical components: FPGAs represent one of the most important tools for designers working on safety-critical applications. They make it possible for developers to define low-level, timing-accurate hardware behavior, which is paramount in control and safety applications. In addition, they can help remove some of the common causes of software failures in medical devices as follows:
Multitasking/Multithreading: Most modern devices must be able to handle multiple tasks at the same time (Fig. 2). Deadlock in single- or multiple-core programming can be very difficult to reproduce and debug, since the situation often relies on multiple processes and requires a specific and synchronized sequence of events to occur. Unit testing alone won’t catch most deadlock issues as they are usually uncovered by code reviews, adept system testers, or luck.
2. FPGAs can run completely independent processes, reducing the possibility of deadlock.
To understand why, you need an intuitive feel for the reasons behind deadlock. Imagine that you and I both want to make pasta for dinner. We both need a kitchen, ingredients, water, a pot, and a spoon. If we were to test our pasta-making ability in separate kitchens, after debugging the recipe, we should have no problems at all. However, the moment we try to share the same kitchen, problems can arise. Let’s say we arrive at about the same time, you grab the pot first, and I grab the spoon first. We will both just end up standing there waiting for the other to finish, but neither of us can even begin. This is deadlock, and it’s easy to see how it emerges outside of traditional, logical testing.
Now let’s look at the same issue, instead using an FPGA to implement the design. In this case, “processes” that are independent have their own physical circuitry within the FPGA, and as a result, there are no shared resources. On each clock tick, combinatorial logic latches in each circuit, and values are stored in separate registers. Deadlocking is reduced because neither process relies on the other’s resources. This makes it possible for a designer to put more trust in the results of simulation and unit testing, since unknowns like resource contention are minimized. Returning to our pasta analogy, this would be the equivalent of providing each of us with our own kitchen consisting only of the utensils that we would need to cook our meals. Once we know that we have everything necessary, no scheduling anomaly could pop up to stop the process.
Middleware: Often, when developing embedded software on a processor, teams are not able to implement every line of code from scratch. Instead, various tools are available to make the designer more productive; these range from simple drivers to network stacks, operating systems, and even code-generation tools. For all off-the-shelf software used in medical devices, the FDA mandates validating that the software stack works for each specific use case. It isn't necessary to validate that a fast Fourier transform (FFT) returns the correct answer for all possible inputs; rather, it's important to validate that it returns what you expect for all valid inputs according to your specifications.
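As a sketch of validating against a specification rather than against "all possible inputs," the following Python uses a naive reference DFT (standing in for an FFT component; the test vectors here are illustrative) and checks it only against inputs whose spectra are known analytically.

```python
import cmath

def dft(x):
    # Naive reference DFT, standing in for the FFT component under validation.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

# Validate against inputs the specification actually allows, with known answers:
# a DC signal concentrates all energy in bin 0...
dc = dft([1.0, 1.0, 1.0, 1.0])
assert abs(dc[0] - 4.0) < 1e-9 and all(abs(v) < 1e-9 for v in dc[1:])

# ...and a single sine tone lands in bin 1 of a 4-point transform.
tone = dft([0.0, 1.0, 0.0, -1.0])
assert abs(tone[1] - (-2j)) < 1e-9
```

The point is that the validation suite encodes your specification's valid inputs and expected outputs, not an exhaustive sweep of the input space.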
Software components that seem independent are not necessarily so. When using a software stack with an SPI driver and a real-time operating system (RTOS), all of these components must be validated together to have confidence in the FFT. If the FFT passes a valid output to the SPI driver but the SPI driver crashes, a problem obviously exists. Consequently, if the SPI driver is modified, the entire software stack must be validated again. This can become very cumbersome, and the delays can compound and cause schedules to slip.
FPGAs may be able to help in this example. An FPGA design still involves external IP (commonly called IP cores), and this IP needs to be validated just like software IP. However, once all of your use cases have been validated, you can have more confidence during the risk assessment review that the IP will behave as expected when integrated with other components. When using an FPGA, it's still necessary to acquire or generate an FFT IP core and validate its numerical correctness for your use case, just as with software. However, the risk of intermittent failure decreases drastically because the middleware has been removed. There is no longer an RTOS, and the SPI driver is its own IP core whose operation does not directly affect the FFT. Furthermore, if you modify the SPI driver implementation, there is no need to re-validate the unaffected areas of the system.
Buffer Overflow: Most of us know about buffer overflow through cryptic hacker exploits and subsequent Microsoft patches, but this is also a common error when developing embedded devices. Buffer overflow occurs when a program tries to store data past the end of the memory that is allocated for that storage, and it ends up overwriting some adjacent data that it shouldn’t. This can be a really nasty bug to diagnose, since the memory that was overwritten could be accessed at any time in the future, and it may or may not cause obvious errors.
One of the more common buffer overflows in embedded design results from high-speed communication of some sort, perhaps from a network, disk, or analog-to-digital converter (ADC). When these communications are interrupted for too long, their buffers can overflow, and this must be accounted for to avoid crashes. An FPGA can help by managing a circular or double-buffered transfer, offloading the burden from the processor. This is a common configuration, especially among high-speed ADCs.
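The idea of a hardware-managed transfer that flags overflow instead of silently corrupting neighboring memory can be sketched in Python (an illustrative model, not FPGA code; the class and field names are hypothetical):

```python
class RingBuffer:
    """Fixed-size circular buffer, modeling what an FPGA-managed transfer does in
    hardware: overflow is detected explicitly rather than overwriting adjacent data."""
    def __init__(self, size):
        self.buf = [0] * size
        self.head = self.tail = self.count = 0
        self.overflowed = False

    def push(self, sample):
        if self.count == len(self.buf):
            self.overflowed = True   # consumer stalled too long; flag it, don't corrupt
            return False
        self.buf[self.head] = sample
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        assert self.count > 0, "pop from empty buffer"
        sample = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return sample

rb = RingBuffer(4)
for s in range(4):
    rb.push(s)
assert not rb.push(99) and rb.overflowed           # producer outran the consumer
assert [rb.pop() for _ in range(4)] == [0, 1, 2, 3]  # stored data remains intact
```

In a processor-based design the equivalent logic competes for CPU time with everything else; implemented in FPGA fabric, it services the ADC on every clock regardless of what the processor is doing.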
Safety System: An FPGA serves as an ideal safety layer of protection when all of the patient-facing I/O is routed through the FPGA before it reaches the processor. In this case, additional safety logic can be added to the FPGA so that outputs can be placed in a known, safe state in the event of a software crash on the processor. When designing critical software and hardware, it's necessary to implement fail-safes that keep the device operating safely even when elements of the control hardware or software fail. Since FPGAs are a reliable component of the system, architectures that channel most of the I/O through the FPGA are ideal. By defining a safe state for all control outputs within the FPGA, you can create a control system with a high degree of immunity to hardware or software problems. To maintain all outputs in a safe state, the only requirements are that the FPGA and any output modules be functioning. The FPGA should implement a simple state machine in every loop that produces a critical output. At a minimum, the state machine should consist of a primary safe state and a state for normal operation. The primary safe state should be the default state for the state machine to ensure that the system boots into a safe state. Figure 3 shows an example application using the National Instruments FPGA-based RIO architecture.
3. Creating a safe state with the FPGA creates added reliability even when the processor and inputs fail.
All safe states should define a safe value or algorithm for each output. In the primary safe state, there’s no need to rely on inputs from other modules or the processor, while other safe states can use inputs as long as these are verified to be functioning correctly. It’s important to check all possible failure conditions in each iteration of an output loop, and if any have occurred, transition the state machine to a safe state in the next iteration. Possible failure conditions include an emergency safety input, which is most commonly a digital input that represents an emergency shut-off switch or other external failure detection mechanism; a control inputs valid check, which monitors the health of the inputs to the control algorithm; and watchdogs, which monitor the processor and operating system to ensure that the system has not become unresponsive.
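The state machine described above can be sketched as follows. This is a Python model of the logic the FPGA loop would implement, not FPGA code; the signal names (emergency stop, inputs-valid, watchdog) are taken from the failure conditions above, while the start command and safe value are assumptions for illustration.

```python
from enum import Enum

class State(Enum):
    SAFE = 0     # primary safe state: also the default, so the system boots safe
    NORMAL = 1   # normal operation

def next_state(state, estop, inputs_valid, watchdog_ok, start_cmd):
    # Checked every iteration: any failure condition forces the safe state.
    if estop or not inputs_valid or not watchdog_ok:
        return State.SAFE
    if state is State.SAFE and start_cmd:
        return State.NORMAL
    return state

def output_for(state, control_value, safe_value=0.0):
    # In SAFE, the output depends on nothing from the processor or input modules.
    return control_value if state is State.NORMAL else safe_value

s = State.SAFE                                                             # boots safe
s = next_state(s, estop=False, inputs_valid=True, watchdog_ok=True, start_cmd=True)
assert s is State.NORMAL
s = next_state(s, estop=False, inputs_valid=True, watchdog_ok=False, start_cmd=False)
assert s is State.SAFE and output_for(s, 5.0) == 0.0   # watchdog trip forces safe output
```

Because the transition check runs every iteration and the safe output ignores processor data, the outputs stay safe even if the processor hangs entirely.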
Many designers in the medical industry are domain experts rather than embedded experts. Hence, many designs that could leverage FPGAs don't. With high-level FPGA programming tools, this complex hardware component can actually shorten development time. One example is NI LabVIEW FPGA, which is well suited to FPGA programming because it represents parallelism and data flow with graphical programming. The combination of this system design software and FPGAs makes it easier to rapidly iterate on different concepts in hardware instead of producing custom ASICs. As a result, FPGAs can be used in medical devices both to improve reliability and safety and to reduce development time.
Separate prototypes from development: New projects, especially at small start-ups, often ignore development process and structure to deliver a prototype that demonstrates a proof of concept. For start-ups, this is often an important part of receiving additional funding, whereas at established companies it is often done to uncover potential patents or explore new research areas. However, one of the biggest follies to avoid is attempting to productize an application by simply polishing a prototype. Use the prototype to refine and define requirements and to estimate project timelines, but keep it separate from the development of the end-use application. Not doing so leads developers to overlook architectural and design considerations, which are mandated steps in any software engineering lifecycle.
Use prototypes to derive specifications and requirements: Ideally, all software projects would begin with complete requirements representing the exact same vision for all stakeholders. In reality, different stakeholders have varying expectations for the final product during the early part of the lifecycle. Prototypes serve as a way to align multiple parties around similar expectations for the final product, which mitigates last-minute changes and feature creep.
Manage and track requirements traceability: As noted earlier, requirements evolve and change throughout the development process. Understanding how these changes impact the entire application requires traceability from code to requirements. This can be achieved by asking developers to enumerate requirements as they cover them in the implementation. For a project manager, tools like Telelogic DOORS and Requisite Pro are designed to manage and track the relationships between requirements. They also integrate tightly with tools like NI Requirements Gateway to automatically trace these specifications down to the code level. In addition, they generate traceability matrices, which are expected as part of the documentation for the FDA. More often than not, design engineers at medical companies spend countless hours manually parsing documents to manage requirement numbering and document linkage. Software tools are available to simplify and automate the process, letting engineers focus on quality rather than pushing paper.
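The kind of bookkeeping such tools automate can be illustrated with a minimal sketch. The requirement IDs and test names below are hypothetical, and real tools extract these tags from source and documents rather than from a hand-written dictionary.

```python
# Hypothetical mapping: each test enumerates the requirement IDs it covers.
tagged_tests = {
    "test_alarm_latency": ["REQ-012", "REQ-044"],
    "test_safe_state_on_watchdog": ["REQ-044"],
}

def traceability_matrix(tests):
    # Invert the mapping: for each requirement, which tests demonstrate coverage?
    matrix = {}
    for test, reqs in tests.items():
        for req in reqs:
            matrix.setdefault(req, []).append(test)
    return matrix

m = traceability_matrix(tagged_tests)
assert m["REQ-012"] == ["test_alarm_latency"]
assert m["REQ-044"] == ["test_alarm_latency", "test_safe_state_on_watchdog"]
```

A requirement missing from the inverted matrix is immediately visible as an untested requirement, which is exactly the gap a manually maintained spreadsheet tends to hide.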
Creating a well-defined, documented process for designing a medical device is required to meet FDA standards, and following that process is key to reducing the risk of a recall. The good news is that off-the-shelf hardware and software tools are available to make this task easier. Medical devices will continue to become more complex, but the number of recalls and the time to market do not need to increase. By using off-the-shelf components like FPGAs and tools for better software engineering, it's possible to increase the quality and reliability of medical devices while remaining competitive.
Carlton Heard is a Product Engineer for Embedded Systems at National Instruments, with a focus on NI CompactRIO and NI Single-Board RIO. He joined NI in 2007 in the Engineering Leadership Program (ELP) and then transitioned to the Application Engineering Specialist group where he focused on machine vision, industrial robotics and FPGA applications. Heard holds a bachelor’s degree in aerospace and mechanical engineering from Oklahoma State University.