Security Experts Advocate for AI Agent Management Mirroring Employee Protocols

AI Systems Require Employee-Level Security Protocols

Organizations should implement the same security controls for artificial intelligence agents as they do for human staff members, according to reports from cybersecurity experts. This approach includes comprehensive background checks, role-based access limitations, and continuous performance monitoring to mitigate potential risks.
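
To make the parallel concrete, the sketch below shows one way role-based access for an AI agent might look in practice. It is an illustrative Python fragment rather than a prescribed implementation; the agent identifier, role name, and permitted actions are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative only: an AI agent is onboarded like a new hire, with a vetted
# identity ("background check") and a least-privilege role instead of broad access.

@dataclass(frozen=True)
class Role:
    name: str
    allowed_actions: frozenset

@dataclass
class AgentAccount:
    agent_id: str
    vetted: bool      # vendor/model review completed before deployment
    role: Role

    def can(self, action: str) -> bool:
        """Permit an action only for vetted agents acting within their role."""
        return self.vetted and action in self.role.allowed_actions

support_role = Role("support_bot", frozenset({"read_tickets", "draft_reply"}))
agent = AgentAccount(agent_id="agent-001", vetted=True, role=support_role)

print(agent.can("draft_reply"))    # True: within the assigned role
print(agent.can("delete_ticket"))  # False: outside the role, denied by default
```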

Applying Human Resource Principles to AI Management

Security analysts suggest that companies should manage AI systems under the same protocols they apply to employees. “You should look at how you treat your humans and apply those same controls to the AI,” Meghan Maneval said in discussions with Infosecurity, according to the reports. “You probably do a background check before anyone is hired. Do the same thing with your AI agent.”

The report indicates that this methodology extends to access control, where AI agents should be bound by the same zero-trust frameworks that restrict how human employees move within organizational networks. Sources indicate that this might require secondary approval for AI-initiated actions, similar to the authorization processes applied to sensitive human operations.
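
As a rough illustration of such a secondary approval system, the sketch below blocks a set of sensitive, AI-initiated actions until a named human approver signs off. The action names, the SENSITIVE_ACTIONS set, and the execute_agent_action function are hypothetical examples, not part of any framework mentioned in the reports.

```python
# Illustrative only: a secondary-approval gate for AI-initiated actions,
# mirroring the sign-off a human employee would need for sensitive operations.

SENSITIVE_ACTIONS = {"transfer_funds", "delete_records", "email_customers"}

def execute_agent_action(action, payload, approved_by=None):
    """Run an agent-requested action only when policy permits it."""
    if action in SENSITIVE_ACTIONS and approved_by is None:
        raise PermissionError(f"'{action}' requires human approval before execution")
    # Placeholder for dispatch to the real downstream system.
    return {"action": action, "status": "executed", "approved_by": approved_by}

# Routine action: proceeds without extra sign-off.
execute_agent_action("summarize_ticket", {"ticket_id": 42})

# Sensitive action: only proceeds once a named approver has signed off.
execute_agent_action("email_customers", {"segment": "all"}, approved_by="j.doe")
```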

Comprehensive Monitoring Strategies for AI Performance

Experts advocate combining multiple monitoring techniques to address AI drift, which analysts describe as the gradual decline in AI model performance over time as real-world conditions change. “This is just like a store tracking four cartons of milk but never checking if they’re spoiled,” Maneval noted during her presentation at ISACA Europe 2025, according to the reports.

The documentation reveals that without proper thresholds, alerts, and usage monitoring, organizations risk maintaining AI systems that produce data that exists but lacks practical utility. Industry professionals emphasize that continuous assessment of real-world usage and output quality is essential for maintaining AI effectiveness.
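
One simple way to express the thresholds-and-alerts idea is a periodic quality check against an agreed floor, as in the assumed sketch below. The QUALITY_THRESHOLD value and the review scores are illustrative; real deployments would draw on whatever quality metrics the organization already collects.

```python
from statistics import mean

# Illustrative only: a threshold-based drift check that alerts when the average
# quality of recent, human-reviewed agent outputs falls below an agreed floor.

QUALITY_THRESHOLD = 0.85  # assumed acceptance floor from the organization's AI policy

def check_for_drift(recent_scores):
    """Return True and raise an alert if recent output quality has degraded."""
    average = mean(recent_scores)
    if average < QUALITY_THRESHOLD:
        print(f"ALERT: average output quality {average:.2f} is below {QUALITY_THRESHOLD}")
        return True
    return False

check_for_drift([0.93, 0.91, 0.88])  # healthy, no alert
check_for_drift([0.84, 0.79, 0.81])  # degraded, triggers an alert
```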

Structured AI Audit Framework

An ideal AI audit program should evaluate multiple dimensions of artificial intelligence systems, according to the emerging guidelines. Reports indicate that comprehensive audits should assess not only the underlying technology and training data but also the outputs generated by AI tools and the controls implemented around them.

AI algorithm audits should examine “the model’s fairness, accuracy and transparency,” while output audits need to identify potential issues such as “incorrect information, inappropriate suggestion or sensitive data leaks,” according to the documentation. Security analysts recommend evaluating security guardrails, access controls, and data leak protection mechanisms built around organizational AI tools.
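
An output audit of the kind described might, for instance, scan agent responses for common leak patterns before release. The sketch below is a minimal, assumption-laden example; the LEAK_PATTERNS names and regular expressions are illustrative placeholders rather than a recommended detection rule set.

```python
import re

# Illustrative only: an output audit pass that flags common leak patterns
# (an email address, a card-like number) before an agent response is released.

LEAK_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def audit_output(text):
    """Return the names of any leak patterns found in the agent's output."""
    return [name for name, pattern in LEAK_PATTERNS.items() if pattern.search(text)]

findings = audit_output("Contact jane.doe@example.com, card 4111 1111 1111 1111")
print(findings)  # ['email_address', 'card_like_number']
```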

Risk Assessment and Policy Development

Organizations must begin their AI security initiatives by examining existing risk tolerance policies, according to the reports. “The bottom line is that you have to start with what are you already doing, and what are you willing to accept and that turns into your policy statement, which you can then start to build controls,” Maneval stated.
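
To show how a policy statement might feed directly into controls, the sketch below encodes an assumed risk-tolerance statement as a small configuration object that a control function reads. The AI_POLICY fields and values are hypothetical and would differ with each organization's stated risk appetite.

```python
# Illustrative only: an assumed risk-tolerance statement encoded as a small
# policy object, which downstream controls read rather than hard-coding rules.

AI_POLICY = {
    "risk_appetite": "moderate",
    "allowed_data_classes": ["public", "internal"],
    "requires_human_review": ["customer_communications", "financial_actions"],
}

def control_allows_data_class(data_class):
    """A control derived directly from the written policy statement."""
    return data_class in AI_POLICY["allowed_data_classes"]

print(control_allows_data_class("internal"))      # True: within stated tolerance
print(control_allows_data_class("confidential"))  # False: outside stated tolerance
```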

Sources indicate that companies demonstrate varying approaches to AI risk, with some organizations strongly encouraging AI adoption while others maintain more conservative stances. Security professionals emphasize that identifying knowledge gaps and implementing appropriate verification processes should become primary focus areas for organizations integrating AI systems.

As artificial intelligence agents gain access to sensitive data, third-party systems, and decision-making authority, the approach of treating them as managed staff members rather than unmanaged assets becomes increasingly critical, according to industry analysis. “Auditing AI isn’t about calling someone out, it’s about learning how the system works so we can help do the right thing,” security experts concluded in the reports.

