If your company has either built or purchased a large language model (LLM) that promises to automate away mundane tasks across departments, you might be thinking… “Could I leverage this to answer my security questionnaires?”

The use case is seemingly simple, and building software in-house to answer security questionnaires can be an attractive option for many teams, especially if they already have an LLM at their disposal.

One key reason is the desire to keep sensitive data secure; when you control the software, you control the data flow, reducing the risk of exposure. Additionally, there's often a perception that creating a custom solution could be more cost-effective in the long run, avoiding software subscription fees and reducing third-party dependencies. With the LLMs on the market today, it might also seem like a straightforward project, given the powerful technology available and the fact that your company might already have something in place for you to leverage.

So, should you do it? Well, you’ve come to the right place.

In this article, we’re breaking down the resources you would need to build a tool for generating answers to security questionnaires and the considerations our engineers here went through when building Conveyor’s AI security questionnaire automation software.

Getting the basics up and running for security questionnaire answer generation can be simple

It isn’t too difficult to feed your knowledge base to an LLM like GPT and set up a prompting system for it to produce answers based on this content. The basics are relatively easy to achieve, but things can get tricky when you begin to think about all of the features you need to boost your workflow, collaborate across teams, and ensure accuracy from the LLM.
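As a rough illustration of that baseline, a proof-of-concept might embed your knowledge base, retrieve the chunks most relevant to each question, and prompt an LLM with them. This is a minimal sketch, assuming the OpenAI Python client; the model names, prompt, and sample content are placeholders, not Conveyor's implementation.

```python
# Minimal retrieval-augmented answering sketch (illustrative only).
# Assumes the OpenAI Python client; model names and knowledge-base content are placeholders.
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

knowledge_base = [
    "We encrypt customer data at rest using AES-256.",
    "Access to production systems requires SSO and MFA.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

kb_vectors = embed(knowledge_base)

def answer(question, top_k=2):
    # Retrieve the knowledge-base chunks most similar to the question.
    q_vec = embed([question])[0]
    scores = kb_vectors @ q_vec
    context = [knowledge_base[i] for i in scores.argsort()[::-1][:top_k]]

    # Ask the model to answer using only the retrieved context.
    prompt = (
        "Answer the security questionnaire question using only this context.\n"
        "Context:\n- " + "\n- ".join(context) + f"\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("Is customer data encrypted at rest?"))
```

A sketch like this is roughly where the easy part ends; everything after it is workflow, accuracy, and maintenance.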

Like many organizations, we built our initial proof-of-concept quite quickly. It was able to produce some promising results, but it also helped us learn that there's so much more to consider! See the overview of decisions and considerations below and know that this list continues to grow weekly. 

While you could quickly spin up an MVP with a small team, even that small team might need a designer or project manager to think through the UI/UX of what you’re building. After the MVP, the ongoing work to test, deploy, update, and maintain the tool would require a full-time commitment from a team of engineers. That time commitment can be upwards of a year or more.

We chatted with our engineers to explore what exactly you should consider during this process and some of the challenges they faced when building Conveyor’s AI for answering security questionnaires.

Key decisions and considerations when using an LLM to answer security questionnaires

We don’t need to tell you that security questionnaires come in all shapes and sizes, from complicated Excel files to portal-based questionnaires. To build an internal software experience that will actually work for your team, you’ll need to decide:

  1. Which LLM will you use? 
  2. How will your knowledge base be set up? Will the AI read from your documents? Internal wiki? Company fact site? Or just question and answer pairs?
  3. How will you keep your knowledge base relatively low-maintenance and alert teams when updates are needed?
  4. Do you need to collaborate across multiple teams? How will notifications be handled?
  5. How will it handle different formats of questionnaires? Portals? Just figuring out the implications of import and export is a project in itself.
  6. How will sales submit questionnaires to the team to be completed? How will you track status?
  7. How will you ensure the AI is giving you the right answers? 
  8. How will you see sources?
  9. Do you have time set aside to test and grade the AI and make sure you’re staying up to date with the latest models?
  10. Do you want the AI to learn from past questionnaires?

Just remember, whatever you build needs to be able to handle most use cases smoothly or it just causes more headaches in the end. One big factor besides maintenance of your own build is experimenting with, testing, and grading the AI as it evolves.

A deeper look at 4 key areas to master when building a security questionnaire response tool

Configuring the AI processing pipeline to generate accurate answers is an ongoing effort

We spend a lot of time working on accuracy, improving both answer coverage (the number of answers the AI will generate) and precision (the number of accurate answers you don’t have to rewrite). This is the work that is going to be the hardest to do, as it takes a lot of trial and error. You prompt the LLM and it gets it wrong; now you have to investigate why that happened. This is an ongoing project where you will need someone dedicated to grading and refining the model to ensure the quality of the answers.
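To make that grading concrete, here is a hedged sketch of how coverage and precision might be computed from a set of human-reviewed answers; the record shape and field names are hypothetical.

```python
# Hypothetical grading pass over reviewed questionnaire answers.
# "generated" means the AI produced an answer; "accepted" means no rewrite was needed.
reviewed = [
    {"question": "Do you encrypt data at rest?", "generated": True, "accepted": True},
    {"question": "Do you support SAML SSO?", "generated": True, "accepted": False},
    {"question": "Describe your SDLC.", "generated": False, "accepted": False},
]

generated = [r for r in reviewed if r["generated"]]
coverage = len(generated) / len(reviewed)                            # share of questions the AI answered
precision = sum(r["accepted"] for r in generated) / len(generated)   # share you didn't rewrite

print(f"coverage={coverage:.0%} precision={precision:.0%}")
```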

Beyond the accuracy piece, AI is still new, so you’ll need a way in the short term to ensure that the AI is using the correct sources to generate the answers you need.

When an LLM gives you an answer, how do you know that answer is true? You have to do a lot of digging to make sure the sources line up with the answers. But if you build in a way to review the underlying sources the AI used to generate a given answer, it gives you confidence in the answers you’re receiving and speeds up your time to review.
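One lightweight way to enable that review is to return the retrieved sources alongside each generated answer. The structure below is purely illustrative; the source identifiers are placeholders.

```python
# Hypothetical shape for an answer that carries its supporting sources,
# so reviewers can check the evidence instead of trusting the model blindly.
from dataclasses import dataclass

@dataclass
class GeneratedAnswer:
    question: str
    answer: str
    sources: list[str]  # IDs or excerpts of the knowledge-base chunks used

ans = GeneratedAnswer(
    question="Is customer data encrypted at rest?",
    answer="Yes, data at rest is encrypted with AES-256.",
    sources=["policy-encryption.pdf#p3", "SOC 2 report, section CC6.1"],
)
for src in ans.sources:
    print("cited:", src)
```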

Lastly, how does the AI continue to learn from the content you’re feeding it? Think about all the times you’ve had to rewrite ChatGPT answers because they’re just not quite right. For anything you’re building, to get to a place where you don’t have to rewrite hundreds or thousands of answers, the AI has to learn from a continually updated set of information (such as external sites, documents, Q&A pairs, and even things like your tone and past edits you’ve made to questionnaires). You’ll have to configure connections to all of these sources as well as design and build a user experience that keeps this material up to date.
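Keeping that material up to date usually means tracking when each piece was last reviewed and nudging owners when it goes stale. A hypothetical check, with illustrative entries and a 180-day threshold chosen only for the example:

```python
# Hypothetical staleness check: flag knowledge-base entries not reviewed recently
# so the owning team can confirm or update them.
from datetime import date, timedelta

MAX_AGE = timedelta(days=180)

knowledge_base = [
    {"id": "encryption-at-rest", "owner": "security", "last_reviewed": date(2024, 1, 10)},
    {"id": "pen-test-cadence", "owner": "compliance", "last_reviewed": date(2023, 6, 2)},
]

stale = [e for e in knowledge_base if date.today() - e["last_reviewed"] > MAX_AGE]
for entry in stale:
    print(f"Ping {entry['owner']}: '{entry['id']}' needs review")
```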

Uploading all formats of security questionnaires is a challenge

Building software to answer security questionnaires is tricky when you break down all the steps to the process and think through each. Let’s take the very first step as an example: uploading a security questionnaire. 

These questionnaires come in a variety of custom formats, from challenging web portals to complex Excel files, Word documents, and PDFs, and each format has unique structures and requirements that you would have to build import and export functions to handle.

You’ll have to consider the following:

  • Should you build an importer for each format? Automating the parsing of different formats can significantly speed up your workflow, but it demands sophisticated AI capable of understanding the context and requirements of various question types, such as multiple choice versus comment fields. Handling Excel files involves managing drop-downs, multiple tabs, and varying patterns (see the sketch after this list), while the ability to export answers in the original customer format is crucial for usability and speed of workflows.
  • Should you build a browser extension for portals? Portal-based questionnaires add another layer of complexity with their dynamic nature, where answering one question can trigger additional ones. A browser extension might be necessary, but it must integrate seamlessly into your workflow by showing you source references, for example, while also allowing for editing directly in the extension and being able to save edits and new Q&As to the knowledge base.
  • What about one-off questions? Enabling teams to quickly search for answers to one-off questions enhances efficiency. As you develop this tool, these considerations are essential for creating a solution that truly meets the needs of your users.
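To give a sense of the Excel side alone, the sketch below walks every tab of a workbook and pulls out cells that look like questions. Real questionnaires need far more per-format logic (drop-downs, merged cells, answer and comment columns); the file name and heuristic here are purely illustrative.

```python
# Rough importer sketch for Excel questionnaires using openpyxl (illustrative only).
# Real files need per-format handling of drop-downs, merged cells, and answer columns.
from openpyxl import load_workbook

def extract_questions(path):
    wb = load_workbook(path, data_only=True)
    questions = []
    for sheet in wb.worksheets:  # questionnaires often span multiple tabs
        for row in sheet.iter_rows(values_only=True):
            for cell in row:
                # Naive heuristic: treat strings ending in "?" as questions.
                if isinstance(cell, str) and cell.strip().endswith("?"):
                    questions.append({"sheet": sheet.title, "question": cell.strip()})
    return questions

for q in extract_questions("vendor_questionnaire.xlsx"):
    print(f"[{q['sheet']}] {q['question']}")
```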

Collaboration across teams requires the right set of notifications

Building software to answer security questionnaires isn't just about managing different formats; it's really an exercise in collaboration across departments, so your team will also have to consider the needs of several different teams. Completing a questionnaire often requires input from multiple team members, like security, legal, technical presales, compliance, and more, each with their own expertise.

To make this work smoothly, your software needs features for tagging teammates, assigning questions, sending notifications, and managing comments. You’ll also need ways to make it easy for your front-line sales team to upload questionnaires into the queue for infosec or presales teams to answer. Integrating with collaboration tools like Slack or Teams can automate much of this process, but it requires building a custom, streamlined experience that fits within these platforms' limited UI options.
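As one small example of what that wiring involves, notifying a channel when a teammate is assigned a question could be as simple as posting to a Slack incoming webhook; the URL and message format below are placeholders, and a real integration would go well beyond this.

```python
# Hypothetical Slack notification when a question is assigned to a teammate.
# Uses a Slack incoming webhook; the URL here is a placeholder.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def notify_assignment(assignee, question, questionnaire):
    text = f"{assignee}, you've been assigned a question on '{questionnaire}': {question}"
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

notify_assignment("@security-team", "Describe your incident response process.", "Acme Corp DDQ")
```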

You'll also need to carefully manage permissions to ensure the right people have access to the right information. This includes deciding who can generate answers, download documents, and access sensitive reports. For example, should sales be able to work on questionnaires from beginning to end? Likely not, so you’ll have to consider what they can and can’t have access to.
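A rough sketch of that kind of role check, with roles and actions that are illustrative rather than prescriptive:

```python
# Hypothetical role-based permission check for questionnaire actions.
PERMISSIONS = {
    "sales":    {"submit_questionnaire", "view_status"},
    "presales": {"submit_questionnaire", "view_status", "generate_answers", "edit_answers"},
    "infosec":  {"generate_answers", "edit_answers", "download_documents", "approve_export"},
}

def can(role, action):
    return action in PERMISSIONS.get(role, set())

assert can("infosec", "download_documents")
assert not can("sales", "generate_answers")  # sales shouldn't complete questionnaires end to end
```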

Lastly, whatever you build should provide a clear way for sales teams to submit questionnaires and track their status in an automated way.

Getting the right data insights for your team and sales

You can only improve what you can measure. Because you’ll have to tie the time and effort spent building questionnaire-answering software back to the value it delivers, you’ll also have to design a dashboard that tracks key metrics and pulls in data from the relevant systems.

Performance metrics might include tracking the accuracy of your AI, the frequency of manual edits, the turnaround time for completing questionnaires, and the time spent per question. Quantifying the impact on your business, from both revenue and cost perspectives, helps justify the significant upfront and ongoing investment in design, product management, and engineering resources. Additionally, you will have to consider further integration of this tool with a CRM like Salesforce or HubSpot so you can attribute the work done on each questionnaire directly to revenue.
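As a sketch of how a few of those metrics might be rolled up from whatever completion log you keep, with a hypothetical record shape and sample numbers:

```python
# Hypothetical metrics rollup for completed questionnaires.
from datetime import date

completed = [
    {"questions": 120, "ai_answered": 100, "manually_edited": 15,
     "received": date(2024, 5, 1), "returned": date(2024, 5, 4)},
    {"questions": 80, "ai_answered": 70, "manually_edited": 20,
     "received": date(2024, 5, 6), "returned": date(2024, 5, 8)},
]

total_q = sum(c["questions"] for c in completed)
ai_q = sum(c["ai_answered"] for c in completed)
edited = sum(c["manually_edited"] for c in completed)
avg_turnaround = sum((c["returned"] - c["received"]).days for c in completed) / len(completed)

print(f"AI coverage: {ai_q / total_q:.0%}")
print(f"Manual edit rate: {edited / ai_q:.0%}")
print(f"Avg turnaround: {avg_turnaround:.1f} days")
```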

Build vs. buy? Time is your most valuable asset

As you think about building vs. buying your own AI security automation software, it’s important to remember that time is everything. In a world where AI is evolving rapidly, having a team that can dedicate time to building a tool for this will be the difference between a tool that adds value and one that quickly becomes a burden to maintain. With an off-the-shelf solution, there is already a dedicated team for that, while keeping an in-house team of engineers just to maintain your tool can be costly.

If you decide to build the product, you’ll need to calculate time spent, cost, quality of the output, and how well it solves the problem. If you don’t see a good ROI, it might be time to buy.

If you are curious about an off-the-shelf solution, check out more on Conveyor's genAI software for answering security questionnaires and more on why ConveyorAI leads the market in accuracy.

If you want a comprehensive list of questions to ask and our advice for how to tackle each consideration broken down by each category above, download the checklist below.

Download Conveyor's Build vs. Buy Checklist for Security Questionnaire Response