Victoria Trafka, President & Principal Engineer, Engineering & Quality Solutions Inc. | 03.13.20
The vast majority of orthopedic instruments are classified by the FDA as Class I devices, meaning manufacturers aren’t required to submit for clearance or approval before marketing these devices, and they aren’t required to apply formal design controls to the development process. But as technology and complexity in orthopedic products advance, some attitudes about applying these controls are changing.
Class II and Class III devices must be designed according to strict regulations in which manufacturers document all stages of the development process. This includes formal design planning, design meetings, determining product requirements, obtaining prescribed approvals, and conducting final device verification and validation. All of this effort results in a comprehensive documentation file called a design history file (DHF). It’s a long process and it’s part of the reason medical device development costs more and takes longer than most companies would like.
The process first provides a roadmap for the intended product and the development process, and later provides a history of what happened and why. Over the last five years or so, I’ve heard a good number of engineers and managers quickly respond, “That’s a Class I device, so we don’t need a DHF” when referring to orthopedic instruments. While this is absolutely true, there are many situations in the professional world when you go beyond the minimum requirements and do more because it makes sense, reduces risk, and improves the outcome. I believe this is the case for many orthopedic instruments and the application of formal design controls.
At the last ASTM spinal device standards meeting in November 2019, I saw a presentation by an FDA research team investigating intervertebral cage impact testing. These cages can fracture during insertion, which is performed with an instrument called an inserter. This device failure is common enough and also concerning enough that the FDA is devoting resources to determine why it’s happening and what can be done to reduce occurrence. It’s not yet clear whether the fractures are due to implants, inserters, user techniques, or something else. It’s probably a combination, and if those inserters are designed without any design controls, what assurances does a manufacturer or the FDA have that the inserters are functioning properly or were adequately designed to implant cages and not cause adverse effects?
Let’s assume a spine company has encountered a number of these failures with their cage system. They decide to look into the DHFs, and find there’s a wealth of information for the cages; on the other hand, when it comes to the inserter, there’s nothing but final specifications. The questions about the instrument begin:
- Who originally designed this?
- Did they make sure it would fit all of the implant sizes?
- What tests were performed on the inserter strength, function, and reliability?
- Did they consult with our surgeon experts to understand how it would be used?
- Why was the instrument interface changed after being on the market for two years?
Let’s look at what would’ve happened if the inserter had undergone design controls before being introduced to market.
First, a design plan was created, outlining from a high level the reason the instrument is needed, what it needs to do, and what resources were allocated to the design. This starting point got the team together and put them on the same page. There is now a record of the managers, engineers, surgeons, etc., who worked on the inserter design, and they’re the experts who could help resolve the current field problems.
Next, the design inputs were written and approved, outlining the details of the inserter function, what other devices it works with, who will be using it, the safety needs, etc. This document is critical because it’s where the end user’s needs and requirements are put into technical terms with specifics. In this case, the team determined all the important functions of the inserter and all sizes of implants with which it should work. They agreed on all the types and sizes of implants the instrument needs to securely hold and place without fail. The team also defined the amount of force the inserter should withstand without failure and established the maximum force the inserter should apply to the implants.
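To make the idea concrete, here is a minimal sketch of design inputs captured as objective, testable criteria rather than prose. The requirement IDs, sizes, and force limits below are hypothetical illustrations, not values from any real inserter DHF:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignInput:
    """One measurable design input for a (hypothetical) cage inserter."""
    req_id: str
    description: str
    acceptance: str  # objective, testable criterion

# Illustrative only -- real limits come from surgeon input and implant specs.
INSERTER_INPUTS = [
    DesignInput("DI-001", "Hold all cage footprints without slip",
                "Retains sizes 22-32 mm under 50 N axial pull"),
    DesignInput("DI-002", "Withstand impaction loads",
                "No yield below 400 N of repeated impact"),
    DesignInput("DI-003", "Limit force transmitted to the implant",
                "Peak force on cage does not exceed 300 N during insertion"),
]

for di in INSERTER_INPUTS:
    print(f"{di.req_id}: {di.description} -- {di.acceptance}")
```

The point is less the tooling than the habit: each input is phrased so a later test can pass or fail it unambiguously.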
After the design was complete, design verification confirmed that the prescribed technical elements had indeed been incorporated into the final design. The design inputs that were painstakingly created became critical here, as they provided measurable criteria to be met in testing. The development team made sure no requirements were overlooked and that none had drifted from the original specifications. Production units were analyzed and underwent a range of testing, from biocompatibility to strength to reliability, and were verified to fit and function as intended with the full range of implants.
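That traceability step can be sketched in a few lines: every design input must be covered by at least one passing verification test before release. The requirement IDs and test names here are invented for illustration, not taken from any actual file:

```python
# Hypothetical requirement IDs and verification results, for illustration only.
design_inputs = {"DI-001", "DI-002", "DI-003"}

verification_results = [
    {"req_id": "DI-001", "test": "VT-010 pull-off fixture", "passed": True},
    {"req_id": "DI-002", "test": "VT-011 impact bench",     "passed": True},
    {"req_id": "DI-003", "test": "VT-012 load-cell trace",  "passed": True},
]

covered = {r["req_id"] for r in verification_results}
missing = design_inputs - covered                      # inputs nobody tested
failed = [r for r in verification_results if not r["passed"]]

assert not missing, f"Untested design inputs: {missing}"
assert not failed, f"Failed verifications: {failed}"
print("All design inputs verified.")
```

Even a spreadsheet version of this check catches the two failure modes the paragraph describes: an overlooked requirement and a requirement that quietly drifted from the original specification.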
Finally, design validation was conducted to ensure the inserter functioned as intended in clinical use with a range of users. These validations entailed surgical mock-ups and a cadaver lab, as well as using actual implants, instruments, and typical surgical procedures. Maybe the design validation tests revealed when the inserter was used with the smaller implants, it tended to apply too much force or slip and damage the implant. Or maybe the tests went smoothly and revealed the inserter works exactly as needed for every size.
Now, back to reality: none of this was performed, because “It’s only an instrument, so design controls aren’t required” and “that takes a lot of unnecessary time and effort.” Returning to those cage failures in the field, it’s likely many of the items I just discussed (design inputs, verification analyses, validation testing, and so on) will have to be completed during the failure investigation precisely because there is no DHF. People will start thinking it would’ve been much better to do these things three years earlier, during development, because that history would be incredibly valuable now, and maybe design controls could’ve prevented these field failures. Thus, the exact reason the FDA implemented design controls is finally revealed.
Don’t misunderstand my viewpoint, however, and think I love creating stacks of documents. My approach is always one of compliance, practicality, and risk. If regulations require design controls, then they must be done. Where they are not required, but these activities add value (in the present or the future), I’m in favor of applying design controls. And finally, I always consider the risks of incorporating design controls or not. There is definitely business risk in delaying product launch, but in this case study, the risk of launching a device that could damage a spinal implant during surgery seems much greater.
Consider an alternate case of a standard orthopedic hex driver under development. This instrument probably doesn’t need a full DHF because the function is straightforward, the design is not complex, standard specifications exist for the geometry and materials, etc. For this instrument, the FDA regulations don’t require design controls, the risks of problems are fairly low, and all that documentation may not actually add value. I would probably recommend a limited design control process.
My colleague, Dawn Norman of MRC-X, a Memphis-based regulatory and quality consulting company with a focus on life sciences, sees this scenario fairly often and explains her approach: “While traditional design controls may seem overbearing for Class I devices, adding a ‘lite’ element of design controls can definitely add value to both the design of the product and to the company. Having a simple design input/output matrix for critical items and a basic risk analysis for Class I instruments can save a company a lot of headache by just documenting all the forward-thinking that they are already doing. Once in post-production and complaints are coming from the field, these simple measures will help complaint and CAPA investigations move much faster. Additionally, if the company is small and looking to be acquired, these types of design files will show the potential acquirer that you are serious about your design and provide an additional comfort level during due diligence activities.”
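A “lite” input/output matrix of the kind described above can be as plain as a three-column CSV. The rows below are hypothetical examples for a hex driver, not a template from MRC-X; the only logic worth automating is flagging any critical input that has no documented output:

```python
import csv
import io

# A 'lite' design input/output matrix for a Class I instrument, kept as CSV.
# Row contents are illustrative, not an actual company's file.
matrix_csv = """input_id,design_input,design_output
DI-01,Hex tip fits 3.5 mm screw recess,Drawing DRW-101 tip geometry
DI-02,Handle withstands 4 Nm torque without yield,Material spec + strength memo
DI-03,Survives repeated autoclave cycles,Surface finish and material callouts
"""

rows = list(csv.DictReader(io.StringIO(matrix_csv)))
gaps = [r["input_id"] for r in rows if not r["design_output"].strip()]
print(f"{len(rows)} critical inputs, {len(gaps)} without a documented output")
```

A one-page matrix like this is the “documenting the forward-thinking you’re already doing” Norman describes: cheap to maintain, and immediately useful when a complaint or CAPA investigation asks why the design is the way it is.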
I’ve applied this approach and seen it work many times in my career. I believe it’s time for the orthopedic industry to reconsider design controls for instrumentation.
Victoria Trafka, BSME & MSME, is president and principal engineer at Engineering & Quality Solutions Inc. Her passion is improving people’s mobility and quality of life with innovative medical devices. She has spent the majority of her career working within the orthopedic trauma and spine industry for a variety of companies from startups to global market leaders. While Trafka’s current focus is product development and quality systems, she has also held positions in manufacturing, quality, and project management. This varied experience helps her understand every step in the process from device idea to launch and long-term compliance, which results in a more robust product in less time.