Here are Mind Foundry's recent blogs, organised by topic and sector to help you find what you're interested in.
AI Insights from Insurance Leaders in 2024
At 2024’s Insurance Post AI Summit, Mind Foundry conducted a series of workshops with insurance leaders representing some of the top UK insurers to understand their strategies, successes, and challenges in building, deploying, and scaling AI within their business. We took the insights from these workshops and turned them into a comprehensive report called AI Insights Unveiled: UK Insurance Leaders Navigate the 2024 Regulatory Maze.
AI vs Fraud in the UK and Japan
Fraud is a constant and ever-changing threat in the global insurance industry. In partnership with Aioi Nissay Dowa Insurance, Aioi Nissay Dowa Europe, and the Aioi R&D Lab - Oxford, Mind Foundry has developed powerful AI solutions to combat it.
The Puzzle of Scaling AI in Insurance
AI is already integral to insurance and has been the driving force behind much of the sector's innovation. However, AI adoption is a challenging process that brings with it certain obstacles and risks that any insurer must confront to be successful.
The Impact of UK and EU AI Regulations on Insurers
Although AI adoption has accelerated in insurance in recent years, AI regulations worldwide are also becoming more defined. These regulations will have a material impact on how insurers procure, manage, and maintain AI across their business units. In this piece, we outline the UK and EU’s regulatory roadmaps and the questions insurers must ask themselves to implement and maintain compliance.
Balancing Innovation and Risk: How Insurance Leaders Can Manage AI
AI's potential to drastically enhance the insurance industry is extraordinary and undeniable. But just as AI's potential is beyond question, so are the significant risks it poses when mistakes are made in its use. Against this backdrop, AI governance has emerged as an essential strategic tool for insurance leaders to maximise profits, reduce losses, and gain resilience to the turbulence of an ever-changing industry.
The Consumer Duty and Its Implications for Insurers
On July 31st 2024, the Consumer Duty will come fully into effect for both open and closed products. With it come far-reaching implications for any organisation offering a financial service or product in the UK. This includes the insurance industry, and insurers must adjust their business to accommodate these changing rules. This blog outlines what's in the Consumer Duty and what it means for insurers and their processes.
Why AI Governance Matters in the Fight against Insurance Fraud
Fraud is one of the most pervasive threats in the insurance industry, and detecting it is a crucial element of any insurer's business. With its unrivalled data-processing power, AI has become essential to any insurer’s fraud detection software, helping them effectively detect, triage, and investigate damaging fraudulent claims. But just as any tool must be used properly to do its job, AI must be adopted with effective governance.
7 Steps for Scaling AI Governance in Insurance Pricing
The complexity of modern pricing models, coupled with the regulatory changes that are due to come into effect this year, means that insurers need to start implementing proper AI governance. Here are seven practical steps insurers can take to scale AI governance in pricing successfully and responsibly.
Why Insurance Pricing Needs AI Governance Now
Pricing is one of any insurer's most mature, competitive, and important functional areas, and AI plays a vital role in helping insurers offer faster, more optimised pricing quotes than their competitors. However, as AI regulations start to come into effect, it is essential that insurers implement AI governance to ensure their pricing models are fair and explainable.
Why Is Explainability Important in Insurance Pricing?
Explainability in insurance pricing models has evolved from a best practice into a crucial imperative for transparency and fairness. As tools for creating explainable AI systems become more accessible, insurers can begin to deploy responsible AI at scale, meeting regulatory requirements and gaining a competitive advantage.
How AI Can Help Insurers Tackle Fraud
Fraud is a persistent and constantly evolving threat to the insurance industry. Different counter-fraud practices are in play today, but they all have limitations preventing them from being truly effective. This piece highlights the potential of AI to add lasting value in fraud detection through human-AI collaboration.
5 Ways to Manage AI Responsibly in Insurance
AI is already having a transformative impact on the insurance industry, and this impact will only increase in the coming years. However, some AI adoption approaches will fail to have a tangible and sustainable impact on an organisation. This piece highlights five ways insurers can adopt and manage AI responsibly and effectively.
Insurance Data: The Challenge of Successful AI Adoption
Data is the lifeblood of the insurance industry, but many insurers are still failing to tap into its full potential. Here, we address why adopting AI in insurance is such a difficult challenge and what approaches insurers can take to overcome it.
Insurance and Generative AI: Seeing Past the Headlines
Following the emergence of ChatGPT, generative AI has become one of the most exciting technologies to appear in decades. However, although generative AI has the potential to impact the insurance sector significantly, numerous considerations and concerns must be addressed for that potential to become a reality.
All You Need to Know about the Aioi R&D Lab - Oxford
The lab is a joint venture between Mind Foundry and our partners Aioi Nissay Dowa Insurance and Aioi Nissay Dowa Europe, and this piece covers everything you need to know about it, from The Lab’s mission and its advisory board to what the partnership aims to achieve.
Understanding Risk in Insurance: From Cognitive Decline to Large Loss Accidents
The Aioi R&D Lab - Oxford was created to use AI with insurance data to help solve some of society’s most important problems. Here, we explore two recent projects that have come out of the Lab, using AI to help identify the factors that indicate cognitive decline and analyse, quantify, and understand driving risk to predict and prevent large loss accidents.
How Quantum Computing and LLMs Will Revolutionise Insurance
This piece details two areas of particular interest in the Aioi R&D Lab - Oxford. One explores how AI can potentially help advance quantum computing to revolutionise traffic management and disaster response. The other project is working to understand how large language models (LLMs) offer insurers the ability to efficiently process unstructured data for claims handling and fraud detection.
One year into its mission to solve global-scale societal issues through the creation of products and services powered by AI technologies, the Aioi R&D Lab - Oxford is breaking barriers in global collaboration, driven by a commitment to the responsible use of AI. In the past 12 months, the Lab has completed eight projects, showcasing how this joint venture is developing technology that will help solve the problems of today as well as the large-scale societal problems of the future.
In Defence and National Security, the scrutiny placed on AI systems and the bar for responsibility and accountability are heightened. It’s paramount that the outputs of every AI model used in the sector are well-aligned with strategic goals, operational priorities, and Western values. AI assurance aims to quantify model performance and rubber-stamp it for ongoing use, but in high-stakes applications like defence, measuring true AI performance and assuring it for real-world operations is more complex than it first appears.
Accelerating AI's Operational Impact
In defence and national security, the nature of problems and the environments in which they occur make operationalising AI hugely challenging. Advances in sensor technology have resulted in an explosion of data that needs to be processed for operators to extract the information within. AI has the potential to be a game-changer in helping overcome this issue and must navigate obstacles in three key areas for this to become a reality.
AI for Sonar: Cutting Through the Noise
In the maritime domain, making sense of increasing volumes of higher-fidelity sonar data has well-known and potentially game-changing advantages. AI promises huge opportunities to achieve this in place of traditional approaches, but several challenges in the space make this potential difficult to realise.
AI for Defence: More than Just an Innovation Opportunity
Information is lost in the defence sector because data isn't efficiently processed into human-interpretable insight. The scale of this problem means stakeholders in defence are looking to AI as a possible solution, but this will only become an operationalised reality if AI is understood as more than simply an opportunity to experiment and innovate.
Generative AI: The Illusion of a Shortcut
The rise of Generative AI, especially large language models and other foundation models, has caused huge excitement worldwide. Nevertheless, there are some problems that this technology is fundamentally ill-suited to solve. Here, we outline why this is the case and where generative AI can and can’t add value in a sector like Defence and Security.
Why AI Isn’t the Answer to Every Data Problem
The nature of AI makes it an attractive proposition for any organisation looking to solve problems where data is a key component. However, AI won’t automatically solve every data problem. Before adopting AI, it’s vital to establish the nature of your problems and the data involved and determine whether AI is actually the right solution.
Crossing The AI Deployment Gap
Despite significant investment, almost 50% of all AI projects never make it to the deployment stage. In this piece, we address this shortfall and the challenge of translating AI models that perform well pre-deployment into operationalised systems that add real value in high-stakes applications like Defence.
Humans vs AI: The Trust Paradox
AI and machines can often perform tasks far more effectively than humans. And yet, we still hold these technologies to a higher standard of trust than we do for each other. This article explores why this is the case and what approach we can take to ensure that humans and AI can work together to solve problems in high-stakes applications like Defence.
The Case for Infrastructure Condition Management
The condition of many of our built assets is approaching a crisis point as they age. Reversing this trend starts with a comprehensive understanding of the condition of our infrastructure. Gaining this understanding using only traditional inspection methods, though, won't deliver the necessary insight in time to take remedial action. Instead, we need scalable methodologies and technology that provide the required knowledge to take targeted action and maintain our built assets to last as long as we need them to.
Innovating Civil Infrastructure with Computer Vision
Managing civil infrastructure is a challenge that requires innovative new approaches and solutions to keep up with demands on time and resources. With computer vision, we can harness the power of AI to revolutionise how our built assets are inspected.
AI and the Ageing Infrastructure Problem
Global civil infrastructure is reaching a crisis point as it ages. Mind Foundry, Aioi Nissay Dowa Insurance, and the Aioi R&D Lab - Oxford are working together to build a solution that solves this problem with human-AI collaboration.
Addressing the Infrastructure Condition Crisis
Every individual, community, and society relies on built infrastructure to live, work, and move around. But as time progresses, populations grow, and towns and cities expand, we face the challenge of an infrastructure landscape that is both ageing and deteriorating and will eventually fail if left unaddressed. As this problem becomes more urgent, civil engineers are exploring how AI can be deployed to help solve it.
Making AI Explainable, Manageable, and Justified in the UK Public Sector
In 2022, Mind Foundry hosted a webinar called 'Defining Ambitions: The Future of AI in the Public Sector'. Delivered in partnership with GovNewsDirect, the session's objective was to explore the possibilities for AI innovation within public services and the importance of responsibility in a high-stakes application like this.
Let’s Use AI Responsibly to Discover Where to Place EV Charge Points
As more people switch to electric vehicles, the need for expanded charge point infrastructure is becoming more and more evident. To make this expansion more efficient and socially beneficial, Mind Foundry developed a tool that helps local authorities and charge point operators optimise the locations of their EV infrastructure.
71% Haven’t Read the UK’s Data Strategy. Here’s What They Missed
The UK’s National Data Strategy highlights the growing importance of data in our society and sets out how the Government aims to capitalise on its potential. And yet, the vast majority of respondents to our survey hadn’t read the strategy at all. This article provides a useful summary of the strategy that captures the key information.
AI In Government: Considerations for Ethics and Responsibility
Decisions made by governments and other public sector organisations affect many people's lives in profound ways every day. This article recaps a roundtable discussion about how, if ethics and responsibility are not considered when designing, building, and implementing an AI solution, unintended and far-reaching consequences may arise.
Designing Systems with AI in the Loop
An AI in the Loop approach can help mitigate some of AI’s inherent risks, but to deliver real impact in Defence & National Security, AI must also be designed for deployment.
The term “Human-in-the-loop” is often presented as an effective countermeasure to the concerns surrounding the unfettered use of AI. However, as AI becomes more ubiquitous in society, it's important that we understand why the term has been used, what its limitations are, and why it may be time to consider a new approach to de-risking AI's use in the most important use cases.
What is Continuously Learning AI?
In high-stakes applications, AI often struggles to maintain its performance levels. With certain techniques and approaches, however, we can build AI systems that can continuously learn and adapt on the job.
Green AI: The Environmental Impact and Carbon Cost of Innovation
AI has the potential to help us tackle the problems associated with climate change and global warming. This potential has helped fuel tremendous growth in AI projects throughout government and the public sector, but it also raises questions about the environmental impact of developing these solutions and the carbon cost of innovation.
AI Regulations around the World
AI regulations around the world are changing rapidly, and this will have a significant impact on how organisations, businesses, and states go about adopting AI. This piece describes the current global regulatory landscape and why it's important to understand.
How Do Machines Learn? Meta-learning as an Approach
Human-AI collaboration is fundamental to everything we do at Mind Foundry, so it's important that we understand how humans learn compared to machines. In this piece, we dive into the differences between the two learning processes and focus on a particular approach to machine learning called meta-learning.
AI has rapidly become integral to organising our lives, going about our jobs, and getting from place to place. In this article, we aim to shed some light on what this technology is and how it works, but also on where AI began, the pioneers that paved the way, and where the technology will end up taking us.
How Machine Learning Models Fail
The reliability of machine learning models is of critical importance as the adoption of AI accelerates across society. This blog focuses on model failure: how and why it happens, the value of model observability, monitoring, and governance, and, most importantly, how we can prevent it from happening in the first place.
What Makes a Machine Learning Model 'Trustworthy’?
Amidst the excitement around AI’s many potential benefits, there are also significant concerns about its impact on society and the associated risks. As AI models and their predictions play an increasingly pivotal role in our lives and society, we discuss what it means for a model to be ‘trustworthy’.
Much of the concern about the risks associated with AI, particularly generative AI and large language models (LLMs), hinges on transparency, interpretability, and explainability. We interviewed Professor Steve Roberts, Co-founder at Mind Foundry and Professor at the University of Oxford, and invited him to share with us how he explains the meanings of these three terms to his students.
The scale of AI development has increased exponentially in recent years, bringing a raft of opportunities and some very real concerns. Here, we discuss how, when adopted responsibly, AI can still have a real and positive impact on our society and our lives.
Approaching Ethical AI Design: An Insider’s Perspective
Embedding ethical design within global applications of AI is going to be one of the most challenging demands of the 21st century, yet it’s also one of the most important. As regulation evolves and machine capabilities improve, the humans in the driving seat of usage, research, implementation, and design will guide our collective capabilities towards a truly human-centric AI.
The AI Adoption Paradox: Can Cautious Adoption Reap Maximum Benefits?
Adopting AI has become a central priority for many organisations in every sector due to the technology's vast potential. However, adoption requires careful consideration, particularly around balancing the desire to adopt AI quickly and effectively with the need to mitigate the potential risks and do so ethically and responsibly.
AI Model Training: Why Continuous Improvement Matters in High-Stakes Applications
As its capabilities advance, AI will inevitably be applied to wider problem sets with more immediate and wide-ranging real-world impacts, bringing higher problem complexity and increased risk. In this piece, we discuss how, in high-stakes applications, improving the performance of these AI systems is no longer optional. It is a fundamental necessity.
5 Reasons Why You Can Better Understand Your Data with AI
Despite advances in AI, there is still a lack of understanding within organisations about the benefits that AI could offer them. In this blog, we've outlined five ways organisations can better understand and manage their data with the support of AI.
Women in Defence & National Security
Today, women are significantly underrepresented in Defence & National Security. This piece shares insights from some of the women at Mind Foundry about the challenges they've faced, their advice to others, and why correcting this imbalance can help us drive success.
Celebrating International Women in Engineering Day
To celebrate International Women in Engineering Day, this piece shares the stories of two of our own trailblazers of AI innovation who have been defying stereotypes, knocking down barriers, and inspiring everyone around them for as long as we can remember.
Academia to Industry: Going from Theory to Practice
We interviewed members of the Mind Foundry, Google, and Oxa teams to understand their experience of transitioning from academia to industry, sharing their motivations, challenges, and the advice they would give to those making or considering a similar move.
International Day of Women and Girls in Science
To celebrate International Day of Women and Girls in Science, we proudly showcase some of the extraordinary women at Mind Foundry who are making significant contributions to science and technology, sharing their journeys and the advice they would give to others.
How Mind Foundry’s Goals Framework Led to Outstanding Achievements
This piece shares the stories of three members of the Mind Foundry team and how they used our SMART Goals framework to take on some exciting and inspiring challenges outside of work.
Mind Foundry Named in ‘Ethical AI Startup Landscape’ by EAIGG
In 2022, Mind Foundry was included in the 'Ethical AI Startup Landscape' mapped by researchers at the Ethical AI Governance Group (EAIGG). The research was conducted to provide transparency on the ecosystem of companies working on ethical AI, and our inclusion reflects Mind Foundry's commitment to setting an example of how to adopt AI not just successfully but responsibly.
Mind Foundry Wins CogX Explainable AI Award for 2022
Mind Foundry is extremely proud to be named the winner of a prestigious CogX award in the “Best Innovation in Explainable AI” category for 2022. AI has long been marketed as something too complex for humans to understand. Mind Foundry is changing this mindset and developing AI solutions for high-stakes applications that everyone can understand and engage with, regardless of their technical knowledge.