AI definitely gives a productivity boost, but getting that boost safely is also about building the right guardrails and permissions for data access.
Say you have an LLM with tool-calling capabilities, for example a payroll agent: when you ask it a query, it accesses payroll data and can help draft an email.
But what sort of data should the agent be able to access? That guardrail has to be set very carefully. For example, if I ask the payroll system about my own salary or my basic pay, I should get an answer. But if I ask for the average salary of software engineers inside the company, that should not come through, because it means I am indirectly accessing everyone else’s data.
So data access permissions have to be set very precisely, especially with LLMs and third-party AI tools.
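A minimal sketch of that kind of guardrail, assuming the agent's tool calls are routed through a permission check; the data, names, and admin list here are purely illustrative:

```python
# Hypothetical guardrail for a payroll agent's tool layer.
# Policy from the discussion above: a user may read their own
# salary, but aggregate queries (e.g. average salary for a role)
# are denied, because the aggregate is derived from everyone
# else's records.

PAYROLL = {  # toy payroll data, keyed by employee id
    "alice": {"role": "software engineer", "salary": 120_000},
    "bob":   {"role": "software engineer", "salary": 130_000},
    "carol": {"role": "designer",          "salary": 110_000},
}

HR_ADMINS = {"hr_admin"}  # illustrative entitlement list

def get_own_salary(requester: str) -> int:
    """Allowed: a user reading only their own record."""
    return PAYROLL[requester]["salary"]

def average_salary(requester: str, role: str) -> float:
    """Denied unless the requester holds an HR entitlement,
    since the result leaks other employees' data indirectly."""
    if requester not in HR_ADMINS:
        raise PermissionError(
            f"{requester} may not run aggregate payroll queries"
        )
    salaries = [r["salary"] for r in PAYROLL.values() if r["role"] == role]
    return sum(salaries) / len(salaries)
```

So `get_own_salary("alice")` succeeds, while `average_salary("alice", "software engineer")` raises `PermissionError`; only an entitled HR user gets the aggregate. The key design point is that the check lives in the tool layer, not in the prompt, so the LLM cannot be talked out of it.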
Equally important: if you are planning to use third-party tools or third-party LLMs, integrate them into the everyday workflow. For example, if you use a BI tool for dashboard creation, the third-party AI tool should be natively integrated into the analytics tool itself.
Otherwise, the classic case of shadow AI happens, where people copy data from the analytics tool, paste it into a third-party LLM, get the result back, and paste the pivot table or chart into the analytics dashboard. The moment the data leaves your system, you do not know what is happening with it.
So the aim is to minimize those cases. If the LLM integration is natively there inside the tool, with permissions properly set, it becomes easier and safer. It also makes the job easier for employees, and they are more likely to use it because the context stays within the business software.
Along with that, education is also important. Employees need to be regularly educated about security, compliance, and data access.