The Double-Edged Scalpel: AI in Surgery Raises Safety Concerns

The integration of artificial intelligence (AI) into medical devices is rapidly transforming healthcare, promising advancements in diagnostics, treatment, and patient care. However, this technological leap is not without its risks. Recent reports and legal filings have brought to light a series of incidents involving an AI-enhanced surgical navigation system, raising serious questions about the safety and oversight of AI in medical applications.

In 2021, healthcare technology company Acclarent announced the addition of AI to its TruDi Navigation System, a device designed to assist surgeons during procedures such as sinusitis treatment. The system used a machine-learning algorithm to guide surgeons, with the aim of improving precision and efficiency. Concerns about the device's performance, however, predated the upgrade: before the AI integration, the U.S. Food and Drug Administration (FDA) had received seven reports of malfunctions and one report of patient injury associated with the TruDi system. After the AI was added, the FDA documented at least 100 additional reports of malfunctions and adverse events.

Between late 2021 and November 2025, at least ten individuals reportedly sustained injuries related to the TruDi system. Many of these incidents involved errors where the AI system allegedly misinformed surgeons about the precise location of surgical instruments within patients' bodies. These errors have resulted in a range of complications, including cerebrospinal fluid leakage, accidental skull punctures, and, in two instances, strokes caused by accidental injury to major arteries.

While FDA device reports are not intended to establish causality, the increasing number of incidents has prompted legal action. Two stroke victims have filed lawsuits in Texas alleging that the TruDi system's AI contributed to their injuries. The plaintiffs argue that the device was potentially safer before the integration of AI, suggesting that the new software modifications may have compromised its reliability.

When contacted about the FDA reports, Johnson & Johnson, which acquired Acclarent in 2024, directed inquiries to Integra LifeSciences, Acclarent’s current owner. Integra stated that the FDA reports merely indicate the use of TruDi in surgeries where adverse events occurred and that there is no credible evidence linking the system's AI technology to any injuries.

Despite these assurances, the broader integration of AI in medicine is drawing scrutiny. The FDA has authorized more than 1,350 medical devices utilizing AI, over double the number it had approved as of 2022. Numerous other AI-enhanced devices have been the subject of malfunction reports, including a heart monitor that allegedly missed abnormal heartbeats and an ultrasound device that reportedly misidentified fetal anatomy. A recent study by researchers at Johns Hopkins, Georgetown, and Yale universities found that 60 FDA-authorized AI medical devices were linked to 182 product recalls, with a notably high recall rate within the first year on the market.

Experts argue that the FDA is struggling to keep pace with the rapid advancement of AI in medical devices, exacerbated by staffing shortages within the agency. The rise of generative AI chatbots is also presenting new challenges, as physicians increasingly utilize these tools for tasks like note-taking, while patients may turn to chatbots for self-diagnosis, potentially leading to inaccurate or harmful outcomes.

The history of AI in medicine dates back to the mid-20th century, with the FDA's first AI-enhanced devices approved in 1995 for cervical cancer screening. Current AI technologies in medical devices often employ machine learning and deep learning algorithms to analyze data, enhance medical images, and assist in surgical procedures.

The lawsuits against Acclarent and Integra LifeSciences highlight specific incidents where the TruDi system allegedly malfunctioned. In one case, a surgeon reportedly failed to recognize a carotid artery injury during a sinus surgery, leading to a stroke for the patient. Another incident involved a surgeon experiencing a severe arterial bleed while using the system, with an Acclarent representative present during the procedure.

Legal filings suggest that Acclarent’s leadership was aware of potential safety issues with the TruDi system but prioritized the commercialization of the AI-enhanced version. Despite warnings from surgeons about unresolved issues, the company allegedly lowered safety standards to expedite the product's release.

The ongoing legal battles and regulatory scrutiny surrounding the TruDi system underscore the complexities and potential risks associated with integrating AI into critical medical applications. While AI holds immense promise for improving healthcare, ensuring patient safety through rigorous testing, transparent oversight, and accountability remains paramount.

Source:

reuters.com