ChatGPT. Heard of it?
Advancements in Artificial Intelligence (AI) have been one of the hottest topics of 2023 so far, and they have certainly taken the business world by storm. The room is divided: some organizations want to steer clear of the security and privacy issues, while others don't want to miss out on adapting to this rapidly changing environment.
With the growing importance of AI in business decision-making and as your current vendors continue to add AI functionality into their software, it is essential for security teams to understand how to evaluate the risks associated with AI-powered companies.
Given the sheer amount of misinformation and conflation of topics out there, we wanted to share some of the mental model we have used to assess AI risk, both for our own product development and for our supply chain.
Read on for practical ways to assess AI-driven companies, including what questions to ask about AI and security.
First, find out how the company/vendor is using AI
We're stating the obvious, but finding out the extent of their use will determine how much deeper to dig. Try a question like this in your vendor risk assessment/evaluation process:
"Does your product or service use or rely on Artificial Intelligence (AI)? If yes, please explain how AI is incorporated into your product. Please provide links to any available resources and we may be in touch with further questions.”
If the answer is yes, here are some categories with additional questions to understand the risk you might be taking on:
1. Is the AI developed in-house or supplied by a third party?
If an additional party is involved, it's also important to understand the security and privacy practices of that vendor. Note that relying on an additional party is not inherently a bad thing; it can even be beneficial when the third party has real expertise in developing AI models, as opposed to a smaller company rolling its own AI.
2. Does the AI process Personal Data?
Personal data can introduce additional legal requirements and disclosures. Sometimes getting down to the specifics of the data exchanged can seem like a never-ending line of questioning, but vendors should be expected to share the nature and scope of the data being processed.
3. Is customer data used to train or fine-tune models?
You worked hard for your data and IP! If a third party is going to use that data to make their product and service better, you should at a minimum know it is happening, and ideally be able to opt out or control the nature of the training.
4. Is the AI used to push data subjects towards making a decision, or does it raise other psychological considerations?
I believe this will become one of the top considerations for using AI safely and ethically. If AI models are being developed and deployed to push people towards impactful decisions (what they buy, how they spend their time, what they value), those uses should not be taken lightly.
5. Will the system continue to function if the AI service is not available?
Slap ChatGPT on it and see what happens! There have been cases of downtime and unavailability of key commercial AI services from vendors like OpenAI. If a vendor is supporting a critical business process, it's important to consider how business continuity will be handled. In the case of Conveyor, our use of OpenAI always falls back to established search and machine learning workflows that have proven successful in the past.
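To make the fallback idea concrete, here is a minimal sketch of the pattern: try the AI vendor first, and if the call fails or times out, degrade gracefully to an established non-AI workflow. The function names and the simulated outage are hypothetical placeholders, not Conveyor's actual implementation.

```python
import logging

logger = logging.getLogger(__name__)


class AIServiceUnavailable(Exception):
    """Raised when the external AI vendor cannot be reached or times out."""


def generate_answer_with_ai(question: str) -> str:
    """Placeholder for a call to an external AI vendor's API.

    A real integration would make a network request with a short timeout;
    here we simply simulate an outage so the fallback path runs.
    """
    raise AIServiceUnavailable("AI vendor did not respond in time")


def generate_answer_with_search(question: str) -> str:
    """Placeholder for an established, non-AI workflow (e.g. keyword search)."""
    return f"Top search results for: {question!r}"


def answer_question(question: str) -> str:
    """Prefer the AI path, but keep working if the AI service is down."""
    try:
        return generate_answer_with_ai(question)
    except AIServiceUnavailable:
        logger.warning("AI service unavailable; falling back to search workflow")
        return generate_answer_with_search(question)


if __name__ == "__main__":
    print(answer_question("Does your product rely on AI?"))
```

However a vendor implements it, the thing to probe for is the same: whether a documented non-AI path keeps the critical workflow running when the AI dependency is unavailable.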
Want a full list of questions to consider?
At Conveyor, we have developed a wider risk model that we use when evaluating our own vendors, with examples of higher and lower risk for each dimension. You can download that risk assessment spreadsheet below.
Get your copy of Conveyor's Risk Assessment for AI Vendors worksheet