Amplify your impact with the world’s leading GenAI toolkit for compliance work


Taking the Lead with Innovation and Education

The EU AI Act has led the charge in setting out policies and controls for Gen-AI, and firms are being urged to keep pace and stay compliant.

Across the board though, regulators agree there are potentially acceptable uses of Gen-AI, including querying internal policies, summarizing research and obtaining contextually specific information from SEC filings and earnings transcripts. But there remain concerns about accuracy, privacy, bias, intellectual property, and possible exploitation by threat actors. 

Regulators are squarely putting the onus on firms to scrutinize their adoption of Gen-AI against these elements.1 Vendors truly invested in positively affecting risk outcomes must innovate and educate in equal measure.

Shield’s compliance with EU regulation

Shield already embraces the development expectations laid out in the landmark EU AI Act – the most comprehensive of all AI regulations to date.  

Our approach follows rigorous AI lifecycle standards at every stage of model development, an approach integral to any platform providing compliant software explicitly built to help meet regulatory obligations. At deployment, this is established through customer test sets for our models, front-end visualization tools, reports, customer model tuning, ongoing workshops, controlled releases with customer validation of model updates, and a partnership approach to understanding our AI, from its beginnings as research to how it ultimately interacts with your data.

Additionally, given the level of scrutiny already applied by model risk management (MRM) functions at financial institutions, Shield is conversant in the high standards expected of vendor AI and able to evidence the end-to-end explainability of the tools we have developed.

While we have found that approaches to governance vary by financial institution – statistical measurements, testing of outputs, even discussion of source code – meeting these expectations has meant documentation, transparency, and a willingness to have every element of our model development scrutinized.

Our exposure to these processes extends not only to MRM but to regulatory exams and audits, all of which Shield has passed or assisted firms in passing.  

Current AI stack compliance 

Shield’s current AI stack includes models derived from BERT, RoBERTa and DeBERTa. These models are used as classifiers, not in a generative mode. More specifically, Shield repurposes these language models by training them on specific data to surface results that satisfy specific use cases (for example, detecting the presence of secrecy language across eComms). This puts Shield’s models in the limited-risk category for AI technology, as outlined by the EU AI Act.
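The classifier pattern described above can be illustrated with a minimal sketch: a classification head (e.g. on a fine-tuned BERT-family encoder) produces per-label scores for each message, and alerts are surfaced when a risk label crosses a threshold. All names here (RISK labels, `surface_alerts`, the scores themselves) are hypothetical and illustrative, not Shield's actual API or model output.

```python
# Illustrative sketch only: turning classifier scores into surveillance
# alerts. The scores stand in for the softmax output of a fine-tuned
# BERT-style classification head; names are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    label: str
    score: float

def surface_alerts(scored, threshold=0.8):
    """Turn (message, {label: score}) pairs into alerts above a threshold."""
    alerts = []
    for message, scores in scored:
        for label, score in scores.items():
            # Only non-benign labels above the threshold raise an alert.
            if label != "benign" and score >= threshold:
                alerts.append(Alert(message, label, score))
    return alerts

# Scores as they might come from a classification head over eComms.
scored = [
    ("let's keep this off the record", {"secrecy_language": 0.93, "benign": 0.05}),
    ("lunch at noon?", {"benign": 0.99}),
]
alerts = surface_alerts(scored)
```

The key property of classification mode is visible here: the model emits bounded, auditable scores against fixed labels rather than free-form generated text, which is what supports the limited-risk framing.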
 
Given its exposure to firm communications data and the generation of regulatory-specific alerts across those communications, Shield’s AI potentially falls under the EU AI Act’s high-risk categorization. While there is currently no consensus view on this designation, Shield is comfortable with it, given its existing controls and documentation around model development and data handling, and its experience navigating model risk management functions.

Shield is confident it meets the EU AI Act standards as currently laid out and will continue to monitor the Act’s rollout to ensure ongoing compliance.

Shield AI innovation

Shield continues to expand its use of AI, both to broaden our risk coverage and to provide deeper insights. These capabilities have all been researched and developed to the same rigorous standards.

Our soon-to-be-announced Gen-AI offering includes advanced capabilities such as an extra layer of fortification for alerts on top of Shield behaviors and a conversational interface for querying the data within a case.

These new offerings leverage advanced Gen-AI models such as Claude, GPT-4 and Llama 3. Each offering will include a ‘model abstraction layer’ enabling the Shield solution to easily plug in new releases of Gen-AI models, or to accommodate different models according to parameters such as cost, deployment region, cloud vendor, compliance and other considerations.

This approach allows Shield to ensure it is only using broadly accepted models whose utility and functionality it can evidence, allowing for ongoing compatibility with prevailing regulatory views.  
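A model abstraction layer of the kind described can be sketched as follows: callers depend on a single interface, and a concrete Gen-AI backend is chosen by deployment parameters such as cost and region. This is a generic illustration of the pattern, not Shield's implementation; every class, field and model name is hypothetical.

```python
# Illustrative sketch of a 'model abstraction layer': one interface,
# multiple swappable backends, selected by deployment parameters.
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    name: str
    cost_per_1k_tokens: float
    region: str
    def complete(self, prompt: str) -> str: ...

@dataclass
class StubModel:
    name: str
    cost_per_1k_tokens: float
    region: str
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here.
        return f"[{self.name}] response to: {prompt}"

# Hypothetical registry of available backends.
REGISTRY = [
    StubModel("model-a", cost_per_1k_tokens=0.03, region="eu"),
    StubModel("model-b", cost_per_1k_tokens=0.01, region="us"),
]

def select_model(region: str, max_cost: float) -> ChatModel:
    """Pick the cheapest registered model satisfying the constraints."""
    candidates = [m for m in REGISTRY
                  if m.region == region and m.cost_per_1k_tokens <= max_cost]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

model = select_model(region="eu", max_cost=0.05)
reply = model.complete("Summarize this case")
```

Because callers only ever see the `ChatModel` interface, a new model release or a different cloud vendor becomes a registry change rather than an application change, which is what makes the swap-in/swap-out behavior described above possible.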
 
As part of the productization of these new offerings, and in keeping with Shield’s existing approach, the AI capabilities on the outlined roadmap will continue to evidence:

  1. Transparency and explainability features 
  2. Full Model Governance Framework Security  
  3. Legal review and guidance to accommodate all regulations  
  4. Cost and licensing considerations  

Shield’s existing and continued approach and controls aim to satisfy EU AI Act standards and all equivalent regulations.      

