How AI is transforming cybersecurity from static, rule-based defenses to behaviour-driven, proactive systems
How sophisticated phishing, malware and insider threats demand integrated tools
The non-negotiable role of human oversight, regulation and education in an AI-first world
In the AI-driven world, cybersecurity will no longer be just about blocking attacks after they happen. In this conversation, Zoho’s AI Security Head, Sujatha S Iyer, explains how organisations must move from static, rule-based defenses to behaviour-driven, AI-powered security that can detect anomalies early, secure data across tools and vendors, and build strong guardrails into systems from the start.
The discussion also looks at insider threats, phishing, agentic AI, and why human oversight, compliance and continuous education remain central in the age of AI.
Recently, a US cybersecurity agency listed three risk areas in AI cybersecurity: the cybersecurity of AI systems, AI-enabled cyber attacks, and AI-enabled cyber defense. Where do you think most organisations are failing today across these three aspects, and how do you tackle each of them?
Organisations today are far more privacy and security aware, with security moving from a checklist item to a core priority. This shift is largely driven by stricter regulations like GDPR, the California Consumer Privacy Act, and India’s Digital Personal Data Protection Act. The focus now is on building security into systems from day one.
However, many organisations still rely on outdated rule-based systems that are easy to bypass, especially in insider attacks where users stay just below defined thresholds or show subtle anomalies like unusual login times.
At the same time, threats have become more advanced. Modern malware and phishing are highly sophisticated and no longer depend on obvious signatures, which makes traditional detection less effective.
This is where AI plays a key role. By focusing on behaviour rather than static rules, AI can detect anomalies early, contain threats quickly, and prevent them from spreading. It shifts cybersecurity from reactive response to proactive prevention.
But, the hackers must also be employing AI in their tactics, right?
Phishing emails a decade back were poorly worded, with bad CSS and bad HTML. But today’s phishing emails are picture perfect. Smishing and spear phishing attacks are also very targeted. In fact, just like lead enrichment in sales, where you try to understand the person and the company before making a call, attackers are doing lead enrichment on victims before carrying out a targeted phishing attack.
So it becomes even more important that AI-powered attacks are tackled by AI-powered defense.
If you still rely on a phishing detection engine that only looks at rule-based systems, like whether the content has grammar errors or whether the sender domain is correct, that is too basic. Those are old phishing techniques. The real question is what happens when the user lands on the page.
For example, on a genuine site like amazon.in, the payment page would be amazon.in/payments and the product page would be amazon.in/products. The common feature is that the main parent domain stays the same, and the links point back to the same site. But with a phishing link, you may land on abc.com and then, when entering your credentials, get redirected to xyz.com. There are many outgoing links to other domains.
So it is important to have an ML engine or AI engine that looks at all of these parameters and then arrives at a consensus on how likely it is to be phishing. That way, you are not at the mercy of old content-based classifiers alone.
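The link-based signals described above can be sketched as a small scoring helper. This is a hypothetical illustration, not a real detection engine: a production system would combine many more features (content, redirect chains, certificate data) inside an ML model, and the function and field names here are invented for the example.

```python
from urllib.parse import urlparse

def domain_consistency_signals(landing_url, link_urls):
    """Score a page's links against the parent-domain consistency
    heuristic: on a genuine site most links stay on the same domain,
    while phishing pages often fan out to other domains."""
    parent = urlparse(landing_url).hostname or ""
    external = [u for u in link_urls
                if (urlparse(u).hostname or "") != parent]
    total = len(link_urls)
    return {
        "parent_domain": parent,
        "external_links": len(external),
        "external_ratio": len(external) / total if total else 0.0,
    }

# Self-directed links -> low external ratio (benign signal).
benign = domain_consistency_signals(
    "https://amazon.in",
    ["https://amazon.in/payments", "https://amazon.in/products"])

# Credentials form posting to a different domain -> suspicious signal.
suspect = domain_consistency_signals(
    "https://abc.com/login",
    ["https://xyz.com/collect", "https://abc.com/img.png"])
```

An engine would feed signals like these, alongside content and reputation features, into a classifier that produces the consensus score.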
Since hackers are also getting sophisticated with AI, what is the single biggest blind spot enterprises still have in defending against these AI-enabled attacks?
The biggest blind spot is that enterprises often look at things in silos.
Malware and phishing come under endpoint security. Your laptop is an endpoint, your mobile is an endpoint, your browser is an endpoint. Then you have insider detection and threat detection, which come under SIEM, where you have logs internally.
If you look at these as separate problems, you do not get the full picture.
Take an example. Suppose a hospital has a login page that administration and staff use every day to get onto the internet. Now suppose that page is hacked. Not everybody is digitally aware, so some users accidentally enter their credentials into the phishing page. That phishing page then downloads malware that is meant to infect all the laptops.
The first point of defense is browser security tools. The second point is endpoint security tools like malware detection and ransomware detection. They need to work in tandem, especially if the malware has spread and is trying to do data exfiltration by accessing patient portals.
That is where SIEM solutions come in. The logs show how many packets are being transferred, how much data is going out, and which socket is being used.
So throughout an attack, it is a group effort between browser security, endpoint management, and SIEM solutions. Enterprises need an amalgamation of different tools so that they have a 360-degree view of what is happening.
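The SIEM-side exfiltration check described above can be sketched as a simple log correlation. This is a toy example under assumed inputs (hypothetical host names and a per-host egress baseline); real SIEMs correlate far richer telemetry across sources.

```python
def flag_exfiltration(log_entries, baseline_bytes, factor=10):
    """Sum outbound bytes per host from network logs and flag hosts
    whose total egress exceeds factor x their normal baseline, a
    typical data-exfiltration signal.

    log_entries: iterable of (host, bytes_out) tuples.
    baseline_bytes: dict of host -> typical bytes out per window.
    """
    totals = {}
    for host, bytes_out in log_entries:
        totals[host] = totals.get(host, 0) + bytes_out
    return [host for host, total in totals.items()
            if total > factor * baseline_bytes.get(host, float("inf"))]

# Hypothetical log window: one host suddenly sends out ~1.6 MB
# against a 12 KB baseline, as if a patient portal were being dumped.
logs = [("ward-pc-1", 5_000), ("ward-pc-2", 900_000),
        ("ward-pc-2", 700_000), ("ward-pc-1", 4_000)]
baseline = {"ward-pc-1": 10_000, "ward-pc-2": 12_000}
flagged = flag_exfiltration(logs, baseline)
```

In practice the flagged host would be handed back to the endpoint layer for isolation, which is the "working in tandem" point made above.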
Suppose I am a new start-up and I have employed only the traditional cybersecurity measures. What threats am I exposed to compared to an AI start-up or a start-up that has employed AI-based cybersecurity measures?
If you are a data start-up or aggregator, your biggest risk is the data itself.
You face two main threats. External attacks like malware or ransomware aim to steal or lock your data at scale, which traditional systems struggle to contain. Internal attacks involve insiders misusing access, often without triggering obvious red flags.
AI-powered security helps by detecting threats early, isolating affected systems, and guiding response actions in real time. It also strengthens endpoint security and flags anomalies like unusual login times, even when credentials are valid.
The key shift is moving from reacting after damage is done to stopping attacks before they spread.
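The anomaly flagging mentioned above, such as unusual login times even with valid credentials, can be illustrated with a toy baseline model. This is a hypothetical sketch assuming a simple z-score over login hours; real behaviour analytics use many more signals (device, location, access patterns) and handle details like hour-of-day wraparound.

```python
from statistics import mean, stdev

def is_unusual_login_hour(history_hours, login_hour, threshold=2.0):
    """Flag a login whose hour of day deviates from the user's
    historical baseline by more than `threshold` standard deviations.

    Simplification: treats hours as plain numbers, ignoring the
    midnight wraparound a real model would account for."""
    if len(history_hours) < 2:
        return False  # not enough history to judge
    mu = mean(history_hours)
    sigma = stdev(history_hours) or 1e-9
    return abs(login_hour - mu) / sigma > threshold

# A user who normally logs in between 9 and 11 in the morning:
history = [9, 10, 9, 11, 10, 9, 10]
# a 3 a.m. login stands out even though the credentials are valid,
# while a 10 a.m. login does not.
```

The point is that the credential check passes in both cases; only the behavioural baseline separates them.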
How should companies think about securing third-party AI tools and vendors, given the rising supply chain dependencies?
AI is definitely giving a productivity boost, but it is also about building the right guardrails and permissions for data access.
Let us say you have an LLM with tool-calling facilities. Suppose you have a payroll agent. If you ask it queries, it will access payroll data and help draft an email.
But what sort of data should the agent be able to access? That guardrail has to be set very carefully. For example, if I ask the payroll system about my own salary or my basic pay, I should get an answer. But if I ask for the average salary of software engineers inside the company, that should not come through, because it means I am indirectly accessing everyone else’s data.
So data access permissions have to be set very precisely, especially with LLMs and third-party AI tools.
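The payroll-agent guardrail described above can be sketched as enforcement in the tool layer itself, so the decision never depends on the model's judgement. All names here (PAYROLL, get_salary, average_salary_by_role) are hypothetical, purely to illustrate the self-access versus aggregate-access distinction.

```python
PAYROLL = {
    "alice": {"basic_pay": 50_000, "role": "software engineer"},
    "bob":   {"basic_pay": 60_000, "role": "software engineer"},
}

class AccessDenied(Exception):
    pass

def get_salary(requesting_user, target_user):
    """Self-access only: a user may read their own record, and the
    check runs before any data could reach the LLM."""
    if requesting_user != target_user:
        raise AccessDenied("can only read your own payroll record")
    return PAYROLL[target_user]["basic_pay"]

def average_salary_by_role(requesting_user, role):
    """Aggregates indirectly expose everyone's data, so the tool
    refuses them outright for ordinary users."""
    raise AccessDenied("aggregate payroll queries are not permitted")
```

Putting the guardrail in the tool rather than the prompt means a cleverly worded query cannot talk the agent into leaking other employees' data.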
More importantly, if you are planning to use third-party tools or third-party LLMs, it is better to integrate them into your everyday workflow. For example, if you use a BI tool for dashboard creation, it is better if the third-party AI tool is natively integrated into the analytics tool itself.
Otherwise, the classic case of shadow AI happens, where people copy data from the analytics tool, paste it into a third-party LLM, get the result back, and paste the pivot table or chart into the analytics dashboard. The moment the data leaves your system, you do not know what is happening with it.
So the aim is to minimize those cases. If the LLM integration is natively there inside the tool, with permissions properly set, it becomes easier and safer. It also makes the job easier for employees, and they are more likely to use it because the context stays within the business software.
Along with that, education is also important. Employees need to be regularly educated about security, compliance, and data access.
Which part of AI cybersecurity is moving faster right now: attackers innovating faster, defensive tooling getting faster, or regulation getting faster?
It is a classic cat-and-mouse game between attackers and defenders. The attackers are getting better day by day. But one interesting thing is that the regulations also seem to be catching up well.
Five years ago, regulations were much more vague. Today, the regulations being framed are much more aware of the risks. One welcome change was when India’s DPDP proposal came up and companies were invited to give suggestions. That felt like the right step, because enterprises deal with compliance every day. It was also very consent-based, which was key.
So I think regulation is moving in an encouraging direction.
If you compare the cybersecurity measures employed in India with countries like the US or China, what is the difference in how they treat cybersecurity in organisations?
The difference really comes from digital maturity.
Countries like the US have generally been much more daring. AI adoption starts earlier because digital maturity itself is ahead in many places. If digital maturity is ahead, then AI maturity and eventually AI security measures are also more advanced.
In India too, there is no one-size-fits-all answer. There are many companies and many verticals. BFSI, for example, is one of the most digitized and regulated sectors. Its digital maturity is miles ahead compared to something like manufacturing. So naturally, its security measures and AI security measures are also going to be ahead.
So digital maturity is the key to how good AI adoption and AI maturity will be.
What do you think the future of cybersecurity or overall system security will look like in the age of AI? How fast is it going to evolve, and how drastically can the risk evolve at the same time?
Enterprises are moving toward agentic AI to connect multiple tools like CRM, help desk, and endpoint systems.
The key risk lies in the data access layer. Permissions and guardrails must be tightly controlled. While the system can pull context from multiple sources and draft responses, it should not take final actions on its own.
Human oversight is critical. The AI should assist by drafting outputs, but actions like sending emails should remain with a human to avoid errors or misuse.
From a hacker’s point of view, how would the individual reach the point where they are able to send emails on behalf of the organisation? What technology or vulnerabilities would they capitalize on?
Attackers can gain access through malware or by exploiting software vulnerabilities like remote code execution, or they can disrupt services through denial-of-service attacks. With tactics like constant IP rotation, traditional defenses such as rate limiting are often ineffective.
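Why IP rotation defeats simple rate limiting can be shown with a minimal sketch. This is a toy per-IP counter, not a real defense: when every request arrives from a fresh address, no single counter ever crosses the threshold, which is why production defenses add behavioural fingerprinting and global anomaly detection rather than relying on per-IP counts alone.

```python
from collections import defaultdict

class PerIpLimiter:
    """Naive fixed-window rate limiter keyed on source IP."""
    def __init__(self, limit=100):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, ip):
        self.counts[ip] += 1
        return self.counts[ip] <= self.limit

limiter = PerIpLimiter()

# 10,000 attack requests, each from a rotated (unique) IP:
# every single one slips under the per-IP limit.
blocked = sum(
    not limiter.allow(f"10.0.{i // 256}.{i % 256}")
    for i in range(10_000))
```

A single fixed source, by contrast, would be cut off after 100 requests, which is exactly the assumption rotation breaks.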
Unpatched systems make it easier for attackers to break in, gain root access, and manipulate systems. That is why regular patching is critical.
Defense needs to work across layers, including endpoint security, identity and access management, and network security. Even then, enterprises must maintain strong guardrails and stay prepared for worst-case scenarios.
How should boards and founders think about AI cybersecurity differently from the conventional cybersecurity strategy they used to employ? What perspective change needs to happen now?
Security needs to be built into the software development lifecycle from day one, often called “shift left” cybersecurity. This means documenting data usage, risks, and guardrails for every feature, including AI systems, through processes like a DPIA.
It cannot be an afterthought. Regular patching and strong security practices are essential because attackers only need one gap to succeed.
At the same time, employee awareness and education are critical to ensure these measures are consistently followed.
Overall, what should India’s perspective be towards cybersecurity systems in organisations? Should it be more regulation, more education, or inclusion of cybersecurity people in policy making?
India is one of the fastest-growing economies, and the digitisation rate is immense. We have access to very affordably priced internet compared to the rest of the world. Practically everyone has a smartphone and data access.
That means if people are not educated, they are vulnerable to most attacks. We are sitting on so much data, and we also have a huge population and high digitalisation, so we are very vulnerable if we do not tackle this well.
One thing is the constant education I was talking about in enterprises. But it should not stop with enterprises; it should propagate to the wider public. Cybersecurity awareness should not be limited to companies, because the entire population can fall prey to such attacks.
We see many cases of bank frauds, SMS frauds, and OTP frauds. One good thing is that banks are doing more awareness now. Every time you visit a branch, they make it a point to tell you not to entertain calls from unknown numbers and to rely only on authorized communication channels. At least in the banks I see, they are taking initiative.
So education is key to building resilience across the entire population.
On policy making, India has taken a very good step. In the DPDP framing, enterprises were involved and the draft was open for feedback for almost a year. That is a very good step toward governance and policy making. It is important that this collaboration between enterprises and government continues, so you get a good amalgamation of policy makers and practitioners who see things every day.
AI models built by foundational companies are advancing at an unprecedented pace. Some companies have recently released models that they say are capable enough to spot system vulnerabilities and go through them. If used by hackers, that could create havoc in the global economy or tech world. What is your perspective on that? Can these models really spot vulnerabilities that traditional cybersecurity methods may not see? And how do you fight against that?
Yes, the models are capable.
You can train models on security expertise. For example, you can train a model to detect remote code execution vulnerabilities in source code. If the model has enough training data on poorly written code and good code, it can help spot vulnerabilities very well.
The same model, if it goes into the hands of an attacker, can be used the same way. That is why every vulnerability you find should be treated as a zero-day attack. A zero-day is something new that has happened and you do not really know how to fix it immediately, so you put all your resources into mitigating the threat as early as possible.
Going forward, these models can also go into the hands of attackers. So if you find a security vulnerability or threat, treat it as a zero-day and close or mitigate it at the earliest. You cannot have a laid-back attitude and say, “This is probably a low vulnerability, nothing can possibly happen, I will set an SLA of three days.” That may have been possible a decade back or even three or four years back, but not now.
Attackers can chain multiple vulnerabilities. What seems like a low vulnerability can be chained with a medium vulnerability and then another medium vulnerability, and suddenly you have a beautifully crafted attack chain that gives the attacker good access to your system.
So it is important that no vulnerability is left unpatched. Treat it as the highest priority. Sometimes organisations prioritise adding more features or product code, and yes, that matters. But a single security mishap can easily put a business out of business. Security has to be the top priority. If there is any weak spot, treat it with the most serious approach, as a zero-day, and patch it as early as possible. That is the way forward.