Modernising Security and Networking for Generative AI Applications

Generative AI (GenAI) has finally gone mainstream; it’s no longer just a buzzword. At BlueFort, we’ve had thoughtful and fascinating conversations with clients about what this shift means for their businesses. Many organisations are standing at a crucial juncture, ready to move AI from the experimental stage to full-scale production.

As IDC¹ observed, 2023 and 2024 were largely consumed by experiments and proofs of concept. Now, the focus is shifting. The next couple of years will be about aligning AI with business priorities and building the infrastructure to support its scale. One client summed it up perfectly: “For AI, 2023 was ‘Wow’, 2024 was ‘How?’, and 2025 is ‘Now!’”

What does your AI future hold in 2025?

Gartner predicted that by the end of 2025, at least 30% of generative AI projects will be abandoned after proof of concept.² When you consider shadow AI and other initiatives under the business’s radar, this estimate could be low.

Reasons for abandonment include:

  • Lack of a proven or agreed-upon business case
  • Security and data privacy concerns
  • Lack of AI-ready infrastructure that can deliver manageability, visibility, availability, and performance

So, what makes GenAI so different?

GenAI isn’t like other applications. Why? Because generative AI applications are data-driven to the extreme, often distributed, and connected via APIs. Their distributed nature and need for real-time (or near real-time) performance go beyond what traditional infrastructure and point security tools can handle. While many AI teams are turning to the cloud for on-demand scale and access to GPU and DPU processing power, streamlined networking between clouds, edge, and data centres is still essential. You’ll also need advanced network traffic optimisation and consistent security and data privacy policies for all AI workloads, no matter where they run.

Taking a platform approach to simplify and scale.

What’s the best way to meet GenAI’s unique demands? A cloud-based platform for your network and security requirements can simplify AI workload protection. At BlueFort, we recommend solutions like F5 Distributed Cloud, a managed SaaS platform that provides scalable, consistent security and networking for AI workloads—anywhere they operate.

BlueFort can help – step by step.

BlueFort partners with customers at every stage of the AI journey, providing tailored guidance to achieve secure and scalable AI solutions, irrespective of your current level of AI maturity.

Here’s how:

Step 1: Assessing Your AI Landscape – What’s Important to the Business

The first step is understanding what’s happening with AI in your business—both sanctioned and unsanctioned. Can you confidently perform an AI audit?

Raising visibility across all AI projects allows for:

  • Intentional AI investment decisions and alignment with business objectives.
  • A more complete risk and attack landscape assessment.
  • A better understanding of what infrastructure and solutions are needed to safely scale AI ambitions for near and long-term goals.

Step 2: Building Security from Day One

Many organisations discover late in the AI lifecycle that GenAI’s unique characteristics and requirements can break or bypass existing security controls designed for less complex, less pervasive workloads. BlueFort can help you understand your current exposure and expanded threat landscape, and plan for a secure AI future that covers your data centres, clouds, Kubernetes, and edge environments.

Key considerations for AI security include:

  • API Security: Generative AI heavily relies on API calls, which are often unmanaged and unsecured, creating exposed interfaces vulnerable to automated attacks. F5 Distributed Cloud can help you discover and secure APIs to protect AI inferencing and other processes.
  • Developer and DevOps Guardrails: Developers often resist security controls that create “friction”. With F5 Distributed Cloud, applications are deployed with the correct policies from day one—no extra steps required.
  • Real-Time Application Protection: F5 AI Gateway offers high-performance protection, inspecting prompts and responses in real-time to block attacks, prevent data leakage, and ensure critical outcomes are safeguarded.
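
To make the inspection idea concrete, here is a minimal, generic sketch of the pattern an AI gateway applies: screen inbound prompts against policy and redact sensitive-looking content from responses before they leave the boundary. It is illustrative only and does not represent how F5 AI Gateway is implemented; the patterns, function names, and call_model callback are hypothetical placeholders.

```python
# Illustrative only: a generic sketch of gateway-style prompt/response inspection.
# Not F5 AI Gateway code; patterns, names, and policies are hypothetical.
import re

BLOCKED_PROMPT_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),  # crude prompt-injection check
]
SENSITIVE_RESPONSE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                                          # card-number-like digit runs
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
]

def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    return not any(p.search(prompt) for p in BLOCKED_PROMPT_PATTERNS)

def redact_response(response: str) -> str:
    """Mask sensitive-looking content before the response leaves the gateway."""
    for pattern in SENSITIVE_RESPONSE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response

def handle_request(prompt: str, call_model) -> str:
    """Apply prompt policy, call the model, then sanitise the response."""
    if not inspect_prompt(prompt):
        return "Request blocked by policy."
    return redact_response(call_model(prompt))
```

A production gateway applies far richer policies (classifiers, data-loss rules, per-tenant controls), but the shape is the same: every prompt and response passes through an enforcement point.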

Step 3: Designing for Performance

Between the growing volume of data, distributed AI architectures, and the need for real-time responsiveness, managing performance is essential. BlueFort champions a platform approach, using F5 Distributed Cloud to address these needs.

Key considerations for network and AI performance include:

  • Secure connectivity: Multicloud networking, a feature of the F5 Distributed Cloud platform, can streamline AI deployments so NetOps and DevOps don’t have to worry about proprietary cloud network controls. You can also send traffic over F5’s global private network, avoiding the less secure, less predictable public internet for intra-app communications. Learn more about BlueFort and F5 for multicloud networking.
  • Optimisation at the app level: Not only does F5 AI Gateway prevent disruption from cyber-attacks, but it also allows for optimised traffic routing and rate limiting for local and third-party large language models (LLMs) to maintain availability and control costs. It also improves the user experience and reduces operational overhead through caching and the removal of duplicated tasks (see the sketch after this list).
  • Performance at the edge: It’s common to colocate AI workloads with data sources and/or local users to maximise performance. F5 Distributed Cloud App Stack, another feature of the platform, enables a consistent Kubernetes environment for applications to run in, including at the edge where environments may not be standardised. Distributed Cloud App Stack supports AI models at the edge with built-in GPU support, and simplifies AI inference app deployment across any number of edge sites with centralised workflows.
  • Support for AI factories: AI factories represent a massive investment in specialised storage, networking, and compute to support high-volume, high-performance AI. While this may seem extreme for your business today, the principles of operationalisation can still apply and lay a solid foundation for the future. BlueFort and F5 can help you securely scale to whatever degree is required, including for AI factories.
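
To illustrate the app-level traffic controls mentioned above, here is a generic Python sketch of rate limiting and caching around LLM calls. It is not F5 AI Gateway code; the limits, cache size, and call_llm placeholder are assumptions for illustration only.

```python
# Illustrative only: generic app-level traffic controls for LLM calls,
# combining a token-bucket rate limiter with caching of repeated prompts.
import time
from functools import lru_cache

class TokenBucket:
    """Simple token-bucket limiter: roughly `rate` requests per second on average."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate=2.0, capacity=10)   # hypothetical limit: ~2 LLM calls per second

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a local or third-party LLM endpoint."""
    return f"model response to: {prompt}"

@lru_cache(maxsize=1024)                        # reuse answers for identical prompts
def cached_completion(prompt: str) -> str:
    return call_llm(prompt)

def handle(prompt: str) -> str:
    """Enforce the rate limit, then serve from cache where possible."""
    if not limiter.allow():
        return "Rate limit exceeded; please retry shortly."
    return cached_completion(prompt)
```

Caching identical prompts and capping request rates are the two simplest levers for keeping LLM costs and latency predictable; a platform-level gateway applies the same ideas centrally rather than in each application.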

Step 4: Staying Ahead with Monitoring and Observability

The security and performance of your AI workloads depend on understanding and monitoring activity. Traditional tools often fall short, but BlueFort and F5 provide comprehensive, centralised visibility into your AI workloads to keep them secure and performing well.

Key considerations for visibility include:

  • Centralised, aggregated insights: With AI distributed and running anywhere, it can be tricky to get a clear view of its performance. The F5 AI Data Fabric aggregates insights across the Distributed Cloud, NGINX, and BIG-IP solution portfolios into the Distributed Cloud Console. Armed with these insights, admins can consume real-time reports, automate actions, and even power AI agents.
  • Shared intelligence: F5 AI Gateway allows for automated export of OpenTelemetry data to SIEM and SOAR applications for metrics and traceability.
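
As a concrete illustration of shared intelligence, the sketch below emits OpenTelemetry trace data over OTLP so a collector can forward it to SIEM/SOAR or other observability tooling. It uses the standard opentelemetry-sdk and opentelemetry-exporter-otlp Python packages rather than any F5-specific integration; the service name, collector endpoint, and span attributes are hypothetical.

```python
# Illustrative only: emitting OpenTelemetry traces over OTLP so downstream
# SIEM/SOAR or observability tooling can consume them. The endpoint, service
# name, and attributes below are hypothetical placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Point the exporter at a collector that forwards to your SIEM/SOAR pipeline.
provider = TracerProvider(resource=Resource.create({"service.name": "genai-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def answer(prompt: str) -> str:
    # Wrap each LLM interaction in a span so latency, outcomes, and policy
    # decisions are traceable end to end.
    with tracer.start_as_current_span("llm.request") as span:
        span.set_attribute("llm.prompt_chars", len(prompt))
        response = "model response"          # placeholder for the real model call
        span.set_attribute("llm.response_chars", len(response))
        return response
```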

Getting Started

If AI is part of your 2025 vision, now is the time to act. BlueFort and F5 Distributed Cloud can help you navigate the journey, from securing your first AI workload to scaling AI across your organisation.

Ready to take the next step? Let’s start the conversation.
📩 info@bluefort.com
📞 01252 917000

Dave Henderson, CEO, BlueFort Security Ltd


Sources:

  1. IDC, IDC FutureScape: The AI Pivot Towards Becoming an AI-Fuelled Business, Oct 2024
  2. Gartner, Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025, July 2024