By Lee Mangold
Every few years a new technology comes along that promises to change the world. Some do, some don’t, but they all bring one thing: hype.
The recent commercialization of Large Language Models (LLMs) – most notably the products from OpenAI – has amplified that hype in a way we haven’t seen in decades. The technology seems to “just work.” Futurists, skeptics, and armchair technologists alike are lining up to be the next social media influencers on the topic of “AI” and what it means for the workforce – and even humanity.
Meanwhile, security professionals are largely left scratching their heads and wondering how much of this hype is fact versus fiction. Or, more importantly, how much of this do they need to care about today, and what can they do about it?
I’ve spent the last two months speaking with dozens of security professionals from around the world and across nearly every industry about their approaches to AI adoption in their organizations. And, of course, I’ve felt this personally as the CISO of Fortress. In all this research, I’ve identified four common patterns: (1) avoid all new AI, (2) ignore the problem and do nothing, (3) allow unmitigated AI software adoption, or the very rare (4) understand and adapt.
It would be easy to assume that the approach a company takes correlates with its industry, size, or other high-level characteristics. In practice, it doesn’t. Everyone, not just cybersecurity professionals, is trying to figure out what the new wave of AI means for their companies, their careers, and their lives. That confusion shows up across companies of all sizes and in all industries.
You’re not alone.
So where do we begin?
First, let’s scope this question to what we’re trying to accomplish in this article. Are you looking at AI from a third-party risk management (TPRM) perspective, a buyer/user perspective, or a technology development perspective?
This distinction matters because these are very different paths with different perspectives. TPRM is concerned with the supply chain risk of using an AI solution. The buyer is interested in whether an AI product addresses their needs, while the technology developer is interested in the deeper technical details of how a product is built and whether it will produce the right results for its users.
In this article, I’m going to talk specifically about the TPRM perspective. It’s important to understand that if your job is security, then you need to focus on those things that fall in your “lane” and not get distracted by the other perspectives (at least not yet).
Control the controllable
From a TPRM perspective, we need to look at the adoption of new AI technologies the same way we do any other technology: by controlling those things within your span of control.
1- Control the Inputs – You control the data you’re willing to send to any third party. That means determining the classification of the data, the contractual obligations, and, notably, the APIs and permissions you’re willing to grant to that third party.
2- Control the Outputs – You control what you do with the results. You decide whether the output is trusted enough to make decisions for you, and what the risk is of acting on both good and bad results. You also decide whether you want an agent to perform actions on your behalf.
3- Control the Agreement – You also decide whether the vendor’s terms are sufficient for you (and whether you trust them). These are the normal questions you should ask of any vendor, such as: Do you share my data? With whom? Do you retain my data? How do you manage privacy?
This seems like a very simplistic approach – that’s because it is! It’s also a very effective one.
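To make the first two controls concrete, here’s a minimal sketch of what an input/output gate might look like in practice. Everything in it is a hypothetical stand-in – the classification labels, the action allowlist, and the commented-out `send_to_vendor()` call are illustrative, not a real vendor API:

```python
# Illustrative sketch only: labels, allowlists, and send_to_vendor()
# are hypothetical stand-ins for your own policy and vendor integration.

# Control the Inputs: classifications your data policy approves for
# transmission to this third party.
ALLOWED_CLASSIFICATIONS = {"public", "internal"}

# Control the Outputs: actions you permit an agent to perform for you.
ALLOWED_AGENT_ACTIONS = {"create_ticket", "send_summary_email"}

def control_inputs(record: dict) -> bool:
    """Only data classified at an approved level may leave the org."""
    return record.get("classification") in ALLOWED_CLASSIFICATIONS

def control_outputs(requested_action: str) -> bool:
    """An agent may only perform pre-approved actions."""
    return requested_action in ALLOWED_AGENT_ACTIONS

record = {"classification": "restricted", "body": "customer PII ..."}
if control_inputs(record):
    pass  # send_to_vendor(record)  -- hypothetical third-party call
else:
    print("blocked: classification not approved for third-party use")
```

The point isn’t the ten lines of Python; it’s that both checks are driven by decisions you already own – your data classification policy and your list of permitted actions – not by anything the vendor controls.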
When I present this concept to peers, their first reaction is usually extreme skepticism. Questions inevitably come up like, “Should I allow my SIEM vendor to use AI on my logs?” While that’s a fantastic question, I refer them to their own data security policies with the counter-question, “What does your data classification policy say about that?” If you aren’t already doing this analysis on all your vendors, you’re missing a critical component of your TPRM program.
The questions usually then turn to the efficacy of the application itself, like “What about hallucinations and bias?” (the buyer perspective), or even “What if an agent does something wrong?” When these questions creep in, we have pushed TPRM to the back of our minds in favor of buyer or developer concerns. Bias and hallucinations are very important to understand – without question – but at this stage we are assessing product and vendor security, not how well the product works for the buyer.
This isn’t written to trivialize the buyer’s concerns by any means. However, if you fail to scope your questions and make progress, it’s down the rabbit hole we go…
What About AI Policy?
I would argue that, if your organization has a reasonably robust security and compliance program, you likely already have about 90% of the AI-relevant policies in place. Your existing policies around TPRM, data classification, information protection, identity & access management, and even your SDLC should cover even the nuances of most AI implementations. There may be room for some clarifications or additions of terminology, but you most likely have the majority of the new concerns already accounted for in policy today.
There are, of course, human-centric impacts that should also be covered in policy. Issues like the use of AI assistants for critical work, proper attribution, and the ethical use of AI are all important; they should be handled in the appropriate policies and are likely a concern for HR as well.
Warning: You’re not an AI expert (and that’s okay!)
I haven’t talked about AI reference architectures, machine learning, expert systems, neural networks, genetic algorithms, or any of the other deep technical details that are also under the umbrella of “AI.” I haven’t bored you with data-flow diagrams showing how MCPs work and the complexity they add. (Note: I apologize to those to whom I did present those!) Rather, I focused on how to get your hands around AI from a TPRM perspective – not how to become an AI architect.
In most technology areas, we have a tendency to enter into analysis paralysis by asking so many deep questions that we never make a decision. If you’re reading this, you’re most likely a security professional, not an AI expert or research scientist. While you certainly can wear multiple hats, it’s not necessarily your role to understand the deep technical details of how AI works to make a TPRM decision. You need to understand the risk in sufficient detail to make a decision, then move on to additional questions as needed.
You can’t stop progress
AI, as with any product you bring into your enterprise, requires you to “control the controllable.” You can’t control every possible eventuality, and attempting to do so never ends well. The way we conduct business will change drastically over the coming years, and we need to control the risks of new technologies and new vendors the best we can while also still ensuring that we, as business enablers, can continue to innovate and grow our organizations. This is why everything we do in security is risk management: Perfection is not the goal, understanding critical risk and responsibly acting on that risk is the goal.
About the author:
As the Chief Information Security Officer at Fortress Information Security, Dr. Mangold is responsible for the security of Fortress’ information systems and data, supporting all lines of business.