Artificial Intelligence (AI) became a hot topic (again) in 2023, resulting in many companies scrambling to add some form of AI to their products. Even companies that previously viewed AI as nothing more than marketing fluff are now jumping on the AI hype train.
As we approach the end of 2025, even with speculation of an impending AI bubble burst, no one wants to be left behind. AI FOMO (fear of missing out) is at an all-time high.
Has AI finally moved beyond the realm of novelty? Can it add real value to your product? Or is it still just another box to check to stay relevant in the market?
Let’s explore this with a focus on IT and Cybersecurity products.
AI vs traditional programming
In some ways, AI can be seen as an evolution of programming languages.
In the early days of computing, you had machine code. Writing software, or “programming” the computer to complete a specific task, required highly specialized knowledge. This evolved into low-level programming languages (Assembly), which added a layer of abstraction. These languages were a little easier to learn than machine code but still required very specialized knowledge.
Over time, more layers of abstraction were added, eventually leading to the high-level programming languages (C++/Java/etc.) that most people use today. High-level programming languages still require some specialized knowledge but are easier to learn and far more accessible compared to their predecessors.
With each iteration, programming moved further from a language only the machine and a select few humans could understand toward a language many humans can understand.
AI, specifically the Generative Pre-trained Transformer (GPT)/Large Language Model (LLM) variety that launched the current AI hype train, adds yet another layer of abstraction to the human-machine interface. It removes the requirement for specialized knowledge and allows interaction between the human and the machine to occur in the human’s native language.
This could be seen as the next evolution of high-level programming languages.
Is AI right for my product?
To answer this, there are several questions to ask yourself first. The answers may differ depending on what your product is and what it does. For our purposes, we’ll focus on IT and Cybersecurity software products.
Does it add value?
Any new feature added to any type of product must add value. If adding AI to the product does not increase the value proposition for the end user, what’s the point?
If you can’t answer this question confidently with a “yes”, do not add AI to the product, do not pass go, do not collect $200.
There are several ways AI can potentially add value, so let’s break this down even further.
- Speed: Does it significantly reduce the time it takes to complete a task (e.g., triaging an alert or writing a script)? If I still have to double-check every single output, I haven’t saved any time.
- Accessibility: Does it allow a junior employee to perform tasks that usually require a senior employee? This is the “abstraction” benefit I mentioned earlier.
- Insight: Can it find patterns that a human would miss? We are drowning in data. AI is good with large volumes of data. Humans are not.
If the AI feature is just a “chatbot” that acts like a glorified search bar, it probably isn’t adding enough value to justify the cost.
Is it the right tool for the job?
When people say “AI” right now, they almost always mean Generative AI (GenAI) or LLMs, but that is just one piece of the puzzle.
GenAI is great for creating content or explaining complex topics. It is not always the best tool for analyzing massive datasets or spotting trends.
If you are trying to detect network anomalies or catch a brute force attack, an LLM is likely the wrong choice. It is often too slow and too expensive for that kind of math.
This is where traditional Machine Learning (ML) shines. ML has been the backbone of cybersecurity products for years. It is excellent at crunching numbers to find the needle in the haystack.
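As a hedged illustration of the kind of lightweight statistical approach this covers (the host names, counts, and threshold below are invented for the sketch), even a plain z-score check over per-host event counts can surface an outlier with no LLM anywhere in the loop:

```python
from statistics import mean, stdev

def zscore_outliers(counts, threshold=3.0):
    """Flag hosts whose event count is a statistical outlier.

    counts: dict mapping host -> number of events in the window.
    Returns the hosts whose z-score exceeds the threshold.
    """
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all hosts look identical; nothing stands out
    return [host for host, n in counts.items() if (n - mu) / sigma > threshold]

# One noisy host among several quiet ones stands out immediately.
traffic = {"web-01": 120, "web-02": 118, "web-03": 125, "web-04": 9000}
print(zscore_outliers(traffic, threshold=1.0))
```

Real products use far more sophisticated models, but the point stands: this class of problem is arithmetic over large volumes of numbers, which is cheap, fast, and deterministic in a way a generative model is not.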
An LLM may add value in combination with other tools, for example as the orchestrator in an agentic workflow that decides which tool calls will accomplish the desired result, or as a summarizer of complex output from another tool.
Don’t force GenAI into a problem that ML (or even a simple regex script) can solve better. Use the right tool(s) for the job, not just the one that is currently trending on social media.
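To make the “simple regex script” point concrete, here is a hedged sketch of a failed-login counter; the log format and the threshold of 5 are made up for illustration, and nothing beyond the standard library is needed:

```python
import re
from collections import Counter

# Hypothetical auth-log format; adjust the pattern to match your own logs.
FAILED_LOGIN = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def brute_force_ips(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed login attempts."""
    hits = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            hits[match.group(1)] += 1
    return [ip for ip, count in hits.items() if count >= threshold]

log = [
    "Failed password for root from 203.0.113.7",
    "Accepted password for alice from 198.51.100.2",
] + ["Failed password for admin from 203.0.113.7"] * 5
print(brute_force_ips(log))  # ['203.0.113.7']
```

A dozen lines of deterministic code, running in microseconds at zero marginal cost, versus an LLM call per log line: for this job, the regex wins every time.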
Is it accurate?
This is the big one. In the world of GenAI, we call mistakes “hallucinations.” In the world of Cybersecurity, we call mistakes “incidents.”
If I ask an AI to generate an image of a man outstanding in his field, and it gives him an extra finger, nobody gets hurt. If I ask an AI to write a script to remove a user and it hallucinates a command that deletes everything from my Identity Provider, we have a major problem.
You have to ask if the AI model is reliable enough for the task at hand. If the user has to spend 20 minutes prompt engineering and fact-checking the AI to get a simple answer, the tool has failed. It needs to work, it needs to be efficient, and it needs to be right.
Is it transparent?
AI output can’t be a “black box.” In security, we need to know the why behind an answer. If AI presents a finding, it should cite the specifics that triggered the conclusion. If there’s no trail back to the source, how do we know we can trust it? Transparency is key. You must be able to “trust but verify” all output.
Is it secure?
We can’t talk about AI in the enterprise without talking about data privacy.
To get a good answer from AI, you usually have to feed it data. In our industry, that data is often sensitive. It might be proprietary code, customer PII, or some other form of sensitive data.
Where does that data go? Is it being used to train a public model? If you paste a strictly confidential file into a public AI tool, you might have just leaked your company secrets to the world.
If your product introduces AI, you must be able to guarantee that customer data stays with the customer. The moment data leaves the safety of the tenant, you introduce a new risk vector.
Is it cost effective?
Even if your chosen form of AI has checked all the other boxes so far, you must still ask: “Is it cost effective?” If adding AI to a product blows up your operating budget, it becomes a non-starter. If the same results can be achieved through cheaper traditional methods, what value is AI really adding to your product?
The Verdict
AI is an incredible tool. It has the potential to change how we interact with technology fundamentally, but it is not a magic wand.
Don’t just add AI because everyone else is doing it. Add it because it solves a specific problem. Add it because it makes your users’ lives easier. Add it because it makes them more secure.
If you can check those boxes, you are on the right track. If not, you are just adding noise.

Cybersecurity Product Leader and internationally recognized speaker at numerous Information Security conferences and events, with over 26 years of experience as a proven leader in Cybersecurity, Product Management, Threat Hunting, Incident Response, IT Management, IT Architecture, IT Operations, IT Engineering, and Messaging Systems, on a global scale and across a diverse set of industries in both the public and private sectors.
