The Business of AI in UK Defence and National Security

While the technical aspects of an AI system are important in Defence and National Security, understanding and addressing AI business considerations is an essential step to achieving operational impact.

Much has been written about the possibilities offered by the widespread adoption of AI in Defence and National Security. The war in Ukraine has offered myriad examples and use cases for speeding up the collection and processing of data to collapse the targeting and kill chain. The problem of getting inside an adversary's OODA (Observe, Orient, Decide, Act) Loop is perennial, and the prospect of using AI to make better, faster decisions is enticing. The headlines and rhetoric surrounding LLMs and Generative AI have further increased the enthusiasm to adopt AI-powered solutions, and not always for the better. Nevertheless, as we've previously argued, Generative AI only offers the illusion of a shortcut in Defence and National Security for various reasons, not least the absence of data, the maturity of the hardware systems and the complexity of the operating environment.

Over the last few years, we've observed how discussions around adopting AI almost invariably default to focusing on data, model types, and the architecture of the deployment methodology, in other words, the technical meat of the solution. But these aren't the hard problems to solve. Yes, they are challenging if you don't build and deploy AI products for a living, but our experience tells us that the harder problem is how to navigate the business of AI; get this sorted, and the technology will follow. No matter how exquisite and capable the AI product is, no matter how performant the model is, if the business incentives and enablers don't line up, then it will remain an untested and unproven technology.

As was noted in the recently published report, Developing AI capacity and expertise in UK defence, “too often AI is still treated as a novelty rather than as something that will soon be a core part of defence’s toolkit.” For the UK to harness AI’s potential for its national security, the entire Defence sector must immediately start getting to grips with the business of AI. If we can unite the UK’s currently disjointed AI landscape in Defence, we can create a wider tech ecosystem with vastly increased robustness against the inevitable uncertainty and challenges of the immediate and long-term future. 

Partner, Partner, Partner

SMEs working in Defence and National Security can find the sector something of a zero-sum game; for one to be successful, another must be less successful. Competition becomes more about who can write the most compelling and articulate bid than who has the most capable system. This, in turn, leads organisations to waste their efforts crafting a compelling narrative about their capability rather than actually delivering a working, performant system. The status quo is hard to navigate and difficult to disrupt for understandable reasons: incumbent suppliers try to iterate their existing products to eliminate competition.

With these factors in mind, SMEs can be sharp-elbowed in their approach and try to carve out their own niche against other SMEs. All that this results in, though, is a reduction in collaboration and adversarial attempts to stymie competitors’ efforts, creating a disjointed landscape to the overall detriment of the sector.

Our view is different. The types of problems that AI can help solve are invariably complex and require a range of skills. Mind Foundry is not an expert in building hardware; it's a time- and cost-intensive exercise requiring a completely different skillset from that needed to build and deploy AI products. Nor is Mind Foundry a systems integrator or an expert in a particular operational domain. The good news for us is that we don't have to be, because that expertise already exists. Through partnerships with the end-user community, the MoD, and the ecosystem of suppliers, the full weight of these combined skills can be brought to bear on these hard problems, culminating in a solution that is greater than the sum of its parts.

Whom we partner with is often not down to us; it depends greatly on the MoD, the supplier base, and how a partnership is incentivised. But if we don't start from a position of understanding the value of partnerships and seeking them out, then we will never reach that destination. The most valuable and fast-moving opportunities for us are those where the end-user community, the responsible officers, the hardware manufacturers, and the system integrators are all represented, engaged, and incentivised. By prioritising collaboration over unnecessary and unhelpful competition, companies can share these opportunities to improve overall solution capability and benefit the entire sector.

Following the publication of the AI Opportunities Action Plan, the message from the Government is clear: “By the end of the decade, having champions at the frontier of AI capabilities may be a critical pillar of our national and economic security.” We would argue that it is a question of when, not if, this becomes the case; AI is already an important differentiator on a global scale and will only become more so. With this in mind, every organisation engaging in UK Sovereign AI should be open to the potential for strategic collaboration to enhance capability. With all the partner building blocks in place, we can build a prototype or proof of concept in weeks; this isn't the time-consuming part of the process. What takes time is lining up the right stakeholders, suitably incentivised to engage, alongside a clearly identified problem.

Listen to the User, Not the Technologist

Given the above context, it can be easy to lose sight of the end goal of AI adoption; it is not to accelerate AI development or explore new technology. Rather, the priority is to make highly trained end-users more effective in their roles and to deliver critical insight at the speed of mission relevance. The goal is to accelerate the speed and precision of decision-making by introducing AI into the human decision-making loop. It’s a perfectly understandable approach to start with the technology and work forward towards a series of unspecified or general problems, but this can lead to attempts to boil the ocean or offer a single universal solution.  

On the other hand, empathy and early engagement with the end-users can result in a very different solution from the one being asked for. End-users and the teams they work in combine the power of two different functions that technology will likely never be able to replace. The first is what anthropologists call “social technologies”. These include the chain of command, human decision-making processes, and legal/policy requirements. The second is “collective intelligence”, a constant acquisition of the tricks of the trade passed down through the communities wrapped around the end-users.  

One challenge in ensuring that developed capability aligns with user needs is requirements documents. User Requirement Documents (URDs) and other references for capability development are often unwieldy and inflexible, not written with software in mind, and can be one step removed from the end-user community. What seems at first to be a key user priority when a URD is written can rapidly evolve, pivot or even be replaced as a result of further research and development.

Closing this gap between what the process asks for and what an operator actually needs produces insight that drives capability and product development in a very different, more responsive way. Any URD and its supporting process should allow adaptation and evolution based on user feedback as new requirements are discovered and existing requirements change. This is the only way to make certain that the solution is ultimately as performant and usable as it promised to be at the outset.

For example, a user requirement might be “The system should be able to classify every signature within the environment to enable faster acquisition of signals of interest”. Solving this problem results in a very different approach from the one required if the user actually needs to “identify anomalies in the data that enable me to classify them in real-time and to detect potential threats at the speed of mission relevance”. It’s the same macro problem (too much data, not enough insight), but addressing the first problem statement would mean taking a data-centric approach and the second a user-focused one. If the Defence and National Security enterprise is to adopt AI more widely, then products must be built with users at their heart rather than as an afterthought to a data science solution.

Show, Don't Tell

AI in action is very different to AI in theory. The latter can be expansive and idealistic, call on examples from other fields and sectors, and be general enough to be parked in the next generation or even the generation after next. AI in theory also sets such a high bar that early iterations of AI in action can seem underwhelming and unexciting. But when you can demonstrate AI in action, with models trained on complex data delivering insight at a pace unimaginable under current constraints, it can be transformative - it can bridge the gap between the theory of AI and the reality of where we are now. Expend every effort in getting to a place where you can show rather than tell; let the technology speak for itself rather than speaking for it.

UK Sovereign AI for a More Secure Future

Too much has been written about any number of possible futures that AI could support, yet not enough time and effort has been spent building them. By solving some of the problems of the business of AI rather than over-indexing on the technology itself, we will get closer to bringing some of the possibilities to life.  

The UK is a global leader in AI, and its defence sector should reflect that strength by embracing innovation in the wider technology ecosystem. By simplifying processes, sharing resources, and fostering partnerships, the Defence sector can turbocharge innovation, ensuring value gets to end-users more quickly. For us at Mind Foundry, the message is clear: the challenges are great, but the Mission is greater. Stay humble, innovate boldly, and ensure the end-user is at the centre, not the technology.


If you're interested in partnering with us, then get in touch here.
