Updated: 02/03/2024

Realize Security's use of Generative Artificial Intelligence (GAI)

Realize Security Ltd. is committed to safely leveraging the latest technology to bring value to our clients, reducing engagement scopes and the load on cyber security budgets. To this end, we use a number of in-house and commercial automated solutions. Increasingly, this category covers the myriad GAI tools on the market, such as OpenAI's ChatGPT.

Realize Security maintains a strict separation between client data and GAI systems. We do not subscribe to or use Integrated Development Environment (IDE) based solutions such as GitHub Copilot, and no client data is ever fed into systems such as ChatGPT.

ChatGPT and similar tools are used for research and limited proofs of concept. Data generated by these systems may be used as part of our projects, but it is generated using neutral prompts: prompts which describe a generic scenario or use case without containing any client-specific details. We also operate a "human in the loop" policy, meaning that all GAI-generated content is manually checked by a person for accuracy, safety and security.

Plans for the future

As with all companies, Realize Security is constantly evaluating how we may safely utilise this new technology. At present, we are investigating the use of open-source models such as Mistral AI's, and semi-open-source models such as Meta's LLaMA. The advantage of these models is that we can maintain them on-site, within our secure network and on hardware we control. This is the only way we can hope to provide security assurance to our clients in the face of murky privacy policy and EULA changes by the owners of closed-source models. As our capability matures, we will update this policy so that our clients understand how their data is being used. For now, it is enough to say that it is not.

Why publish this information?

Our clients trust us with incredibly sensitive data and access to mission-critical systems. As a result, we have resolved to be as open as practically possible about our use of new technologies. We all know that most companies are using GAI, but we have very little insight into the thought processes and safeguards behind their usage. Some of this usage will be officially sanctioned; much of it will not be. Some companies will have carried out risk assessments and generated protocols for staff to use sanctioned and centrally managed solutions; others will be using GAI de facto, through shadow IT and employees' personal subscriptions.

This policy aims to introduce some level of transparency to Realize Security's usage of GAI. It will also serve as a baseline for demonstrating our growing understanding of the risks associated with GAI usage and the possible controls to manage that risk.

Are we experts in GAI?

No, and we're suspicious of most of the self-reported 'experts' on LinkedIn whose involvement in GAI research began only with ChatGPT's emergence! We do, however, intend to take gradual steps to educate and familiarise ourselves with the technology in order to better serve our clients whilst maintaining data security and privacy.

Would you like to know more?

If you still have any questions or concerns, please feel free to reach out to the team.

Our Mission

To provide information security services, affordably and at scale, through innovative use of software development, automation and AI-driven solutions.


Realize Security Ltd. | Copyright 2024 – All Rights Reserved | Company Number: 12606876 | VAT No.: GB466083379