
Security Compliant AI Adoption in the Healthcare Industry

By James Algra, NHS Lead, BlueFort Security

It’s widely accepted that AI will be as transformative as, or perhaps even more transformative than, the Industrial Revolution. It’s reshaping economies and our lives, and changing what’s humanly possible by automating repetitive tasks, boosting productivity, and enabling new discoveries.

The AI genie is well and truly out of the bottle. The challenge for individuals and industries alike is how to train it, manage it, use it, and, perhaps most difficult of all, secure it.

In my role as NHS Lead, I see firsthand how healthcare organisations are adopting AI. With the NHS facing record demand, turning to AI to help tackle challenges in healthcare, from operational efficiencies to patient experience, is an obvious choice.

Can AI Save the NHS?

Only a few weeks ago, the Department of Health and Social Care and NHS England published the results of a Microsoft 365 Copilot trial, which they claim demonstrates potential savings of 400,000 hours each month for NHS staff.

The pilot was the largest of its kind in healthcare globally, involving more than 30,000 NHS workers. It found that AI-powered administrative support could save NHS staff five weeks of time annually per person, enabling them to focus more on front-line patient care.

Given that the NHS loses countless hours to paperwork, duplicate data entry, and inefficient processes that keep healthcare professionals away from patient care, there’s a strong chance that AI will contribute massively to saving the NHS. It’s only one pilot, but the results are pretty impressive.

As with the NHS Copilot trial, we’re seeing healthcare organisations use AI tools to cut down on administrative tasks: transcribing notes from meetings, managing emails, streamlining the preparation of clinic letters, smart calendar management, and so on.

AI is also being used to analyse data and images, enhancing clinical decision-making. Earlier this year, Barts Health NHS Trust posted an article about its use of AI, citing how podiatrists are testing an AI tool that can identify diabetic foot disease, and referencing its respiratory team’s use of AI to scan NHS data to identify patients at risk of lung cancer.

Diagnosing the AI Challenge for Healthcare Organisations

In the last few weeks, it was announced that the Medicines and Healthcare products Regulatory Agency (MHRA) has launched the National Commission into the Regulation of AI in Healthcare. The commission will advise the MHRA on a new framework for the regulation of AI devices. Writing about the new commission, Dr Henrietta Hughes OBE, England’s Patient Safety Commissioner and a practising GP, makes the case that while she sees the benefits of AI in her own practice, it is a tool that comes with potential risks.

As part of the AI conversation, my super-smart colleague Josh Neame (who just happens to be BlueFort’s CTO) makes the point that, at its most simple, AI is just another application in an already overflowing cybersecurity toolkit. That in no way diminishes the challenges of using AI securely within healthcare, but it does help frame the necessary approach to tackling the issue. Josh asserts that the challenges and methods of securing and managing AI already exist: visibility and control. You can dig into this more in his new white paper: “AI in Cybersecurity: Opportunity, Risk and Reality”.

Mitigating the Threat of an AI Shadow IT Epidemic

Although its origins date back well before the late 1990s, the term ‘shadow IT’ really came to the fore with the widespread use of personal USB drives, mobile devices, and unapproved desktop application downloads in the office. The rise of cloud computing took things up a notch, and AI has accelerated the trend again. Unapproved tools like ChatGPT may boost productivity on work tasks, but they create significant security risks, including data leaks, compliance failures, and IP theft.

A recent study by Harmonic Security identified serious risks associated with shadow IT use, including:

  • 26.3% of all employee generative AI usage still goes through the free, consumer-facing version of ChatGPT.
  • Nearly 8% of employees (7.95%) are using at least one Chinese-based generative AI tool, raising geopolitical and data-sovereignty concerns.
  • Nearly 22% of all files uploaded and 4.37% of prompts contain sensitive content, including intellectual property and source code.

Protecting Sensitive Healthcare Data

When applied to healthcare, the severity of the shadow AI problem is clear to see: the potential for data-security breaches, non-compliance with regulations (AI systems processing personal health data must comply with UK GDPR and the Data Protection Act 2018), and, at its worst, risks to patient safety.

For security teams within healthcare organisations, the key challenge is not to stop staff using AI – they’ll certainly find a workaround (back to the shadow IT problem) – it’s to find a way that enables them to use these highly productive tools in a safe, compliant way.

Back to the visibility and control piece mentioned earlier:

  1. Visibility – deploy a discovery tool that identifies instances of unsanctioned AI usage within your organisation. In our experience, browser-based tools discover far more instances than tools that only see usage at the edge (a DLP tool, for example). When we road-tested our partner Harmonic Security’s tool, it discovered more than six hundred independent instances of AI usage, both sanctioned and unsanctioned, including third-party and SaaS apps.
  2. Control – within your chosen tool, set policies to prevent the exposure of your most critical data types (similar to your DLP tool), sanction your preferred AI apps, and ensure the tool you choose has a user-friendly interface which gently prompts the user away from an unsanctioned tool, to the authorised tool of your choice. It’s worth noting that some tools have the capability to copy the user’s original prompt to the authorised tool, making it super easy for them to do the right and secure thing.
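To make the control step concrete, here’s a minimal sketch of the kind of policy check such a tool might apply before a prompt leaves the browser: block sensitive content, and steer users from unsanctioned apps towards the authorised one. Everything here is illustrative – the sanctioned domain, the detection patterns, and the `check_prompt` function are hypothetical stand-ins, not the actual implementation of Harmonic Security’s tool or any other product.

```python
import re

# Hypothetical policy configuration: one sanctioned AI app,
# plus simple patterns for sensitive content. Real tools use far
# richer classifiers than these illustrative regexes.
SANCTIONED_APPS = {"copilot.example-trust.nhs.uk"}

SENSITIVE_PATTERNS = {
    # NHS numbers are 10 digits, often written in 3-3-4 groups
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def check_prompt(app_domain: str, prompt: str) -> dict:
    """Return a policy decision for a prompt bound for an AI app."""
    findings = [name for name, rx in SENSITIVE_PATTERNS.items()
                if rx.search(prompt)]
    if app_domain not in SANCTIONED_APPS:
        # Unsanctioned app: prompt the user towards the authorised tool
        # (some tools can carry the original prompt across for them).
        return {"action": "redirect",
                "to": next(iter(SANCTIONED_APPS)),
                "findings": findings}
    if findings:
        # Sanctioned app, but the prompt contains sensitive data types.
        return {"action": "block", "findings": findings}
    return {"action": "allow", "findings": []}
```

A prompt containing an NHS number sent to an unsanctioned app would be redirected with the sensitive finding recorded, while a clean prompt to the sanctioned app is allowed through.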

Keen To Understand More About Your Organisation’s AI Risk?

At BlueFort, we work with healthcare organisations to bring visibility and control to AI adoption, helping security teams understand where AI is being used, reduce risk, and apply the right controls without slowing innovation.

Our cybersecurity tools and best-practice frameworks ensure your data governance and protection stay strong while you unlock the full potential of AI.

Get in touch for your free, no-obligation risk assessment to receive clarity on your users’ exposure to both sanctioned and unsanctioned AI across your organisation. We’d love to hear from you!

Give me a call or drop me a line
