Mind Foundry Included in the Ethical AI Startup Landscape
Mind Foundry | May 16, 2022
Mind Foundry is thrilled to have been included in the ‘Ethical AI Startup Landscape’, research mapped by the EAIGG (Ethical AI Governance Group), whose researchers have vetted nearly 150 companies working in ethical AI across the globe. The EAIGG conducts this research to bring transparency to the ecosystem of companies building ethical AI.
Mind Foundry was highlighted in both the ‘Targeted AI Solutions’ and ‘ModelOps, Monitoring & Observability’ categories. Both of these subsets of ethical AI aptly describe how Mind Foundry delivers responsible AI.
Mind Foundry’s AI Solutions
Mind Foundry creates responsible AI for high-stakes applications. We build targeted AI solutions for customers in sectors ranging from insurance and infrastructure to defence and national security. We emphasise the need for responsible AI across the development lifecycle of an AI system, including:
1) Use-case-specific risks: making sure our customers can succeed by fully understanding the benefits and risks of using AI for their particular business uses, and where AI should, and should not, be used.
2) Algorithmic design: favouring interpretable and explainable AI models, with data and model provenance, over black-box approaches. For example, in high-stakes applications it is not always appropriate to use neural networks, as doing so can leave the traceability and interpretability of outputs opaque both to users of the system and to unrepresented stakeholders, such as citizens.
3) Solution design: empowering the human to make the right decision, with UX design that highlights possible limitations of the system itself.
4) Post-deployment monitoring: ensuring our AI systems continue to work as intended through performance monitoring, covering predictive power, robustness, and resilience (a minimal sketch of one such check follows this list).
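Mind Foundry does not publish the internals of its monitoring stack, so the sketch below is purely illustrative: it shows one widely used drift check, a population stability index (PSI) computed in plain NumPy, of the kind a post-deployment monitor might run over model scores. The function name, thresholds, and synthetic score distributions are assumptions of ours, not Mind Foundry code.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Compare the distribution of a model score in production against the
    distribution seen at training time. Larger values suggest stronger drift."""
    # Bin edges come from the reference (training-time) data.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)

    # Convert to proportions, with a small floor to avoid log(0) and division by zero.
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)

    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Synthetic example: scores drift upwards after deployment.
rng = np.random.default_rng(0)
training_scores = rng.normal(0.40, 0.10, 10_000)    # seen during validation
production_scores = rng.normal(0.55, 0.12, 2_000)   # seen after deployment

psi = population_stability_index(training_scores, production_scores)
# Rule-of-thumb thresholds only: ~0.1 mild drift, ~0.25 severe drift.
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, trigger review or retraining")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={psi:.3f}: stable")
```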
Mind Foundry’s Approach
One of the fundamental aspects of our work is Continuous Metalearning, represented by our product Mind Foundry Motion, which spans the post-production lifecycle of our customers' models and ensures responsible AI governance across their entire portfolio.
The research, carried out as part of an Innovate UK Smart Grant, included understanding how AI systems can continuously improve and adapt to their surrounding environments, and meta-optimise their learning process through a combination of cutting-edge machine learning techniques and domain expert input.
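The article does not spell out how that research works in practice, so the following is a minimal sketch, under assumptions of our own, of the kind of "monitor, re-optimise, expert sign-off" loop that such continuous metalearning implies. Every name in it (CandidateModel, continuous_metalearning_step, toy_train) is hypothetical rather than part of Mind Foundry Motion.

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    params: dict
    validation_score: float

def retune(train_fn, param_grid, data):
    """Re-run a (toy) hyperparameter search and return the best candidate."""
    candidates = [train_fn(params, data) for params in param_grid]
    return max(candidates, key=lambda c: c.validation_score)

def continuous_metalearning_step(deployed, live_score, train_fn, param_grid, data,
                                 degradation=0.05, expert_approves=lambda c: True):
    """One pass of a monitor -> re-optimise -> domain-expert sign-off loop."""
    if deployed.validation_score - live_score < degradation:
        return deployed                       # still healthy: keep the deployed model
    challenger = retune(train_fn, param_grid, data)
    if challenger.validation_score > deployed.validation_score and expert_approves(challenger):
        return challenger                     # promote only with expert approval
    return deployed

# Toy usage with a fake training function.
def toy_train(params, data):
    return CandidateModel(params, validation_score=sum(data) * params["lr"])

grid = [{"lr": 0.1}, {"lr": 0.2}]
deployed = CandidateModel({"lr": 0.1}, validation_score=0.8)
deployed = continuous_metalearning_step(deployed, live_score=0.6, train_fn=toy_train,
                                        param_grid=grid, data=[1, 2, 3])
print(deployed)  # the better-scoring challenger is promoted after expert sign-off
```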
At its core, Mind Foundry Motion proposes a complete end-to-end framework for the operation of algorithms. By prioritising these techniques, we enable our customers to use AI that is resilient to adversarial attacks, such as data poisoning, and that can classify novel trends the system has never seen before.
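The post does not say how Motion recognises behaviour the system has never encountered, so as an illustrative assumption, here is one standard way to surface such inputs for human review: an out-of-distribution (novelty) detector, sketched with scikit-learn's IsolationForest on synthetic data. The data and the review step are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Features the deployed model was trained on: two clusters of "known" behaviour.
known = np.vstack([rng.normal(0, 1, size=(500, 2)),
                   rng.normal(5, 1, size=(500, 2))])

# Incoming production data: mostly familiar, plus a genuinely new pattern.
incoming = np.vstack([rng.normal(0, 1, size=(50, 2)),
                      rng.normal(-6, 0.5, size=(5, 2))])

detector = IsolationForest(contamination="auto", random_state=0).fit(known)
flags = detector.predict(incoming)   # -1 means "unlike anything seen in training"

novel = incoming[flags == -1]
print(f"{len(novel)} of {len(incoming)} incoming records flagged for human review")
```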
We hope that our approaches and philosophies for developing responsible AI continue to spread, and we are grateful for the essential research the EAIGG is carrying out to map this ecosystem.
Find out more about how we’re using responsible, explainable AI in high-stakes applications.