Gartner: AI Agents to Cut Account Exploitation Time by 50% by 2027

By 2027, artificial intelligence (AI) agents will reduce the time it takes to exploit account vulnerabilities by 50%, according to a new report by Gartner, Inc.

“Account takeover (ATO) remains a persistent attack vector due to weak authentication credentials, such as passwords, which are frequently compromised through data breaches, phishing, social engineering, and malware,” said Jeremy D’Hoinne, VP Analyst at Gartner. “Cybercriminals use bots to automate large-scale login attempts across multiple services, capitalizing on password reuse across platforms.”

AI agents will further automate ATO attacks, facilitating advanced social engineering through deepfake voice manipulation and end-to-end credential abuse automation. In response, cybersecurity vendors are expected to develop solutions to detect, monitor, and classify AI-driven interactions across web, app, API, and voice channels.

“Security leaders must accelerate the shift to passwordless, phishing-resistant multi-factor authentication (MFA),” said Akif Khan, VP Analyst at Gartner. “For customer-facing applications where authentication choices exist, organizations should educate and incentivize users to transition from passwords to multidevice passkeys.”

The Growing Threat of AI-Powered Social Engineering

Beyond ATO, AI-enhanced social engineering tactics pose a major cybersecurity threat. Gartner forecasts that by 2028, 40% of social engineering attacks will target not just executives but the broader workforce. Attackers are now integrating deepfake audio and video techniques into their schemes, deceiving employees in real time through voice and video calls.

Although only a few high-profile cases have been publicly reported, such incidents have already resulted in substantial financial losses. Deepfake detection itself remains in its infancy, particularly for real-time person-to-person voice and video communications across platforms.

“Organizations must stay ahead of evolving threats by continuously updating their security procedures and workflows to counter AI-driven attacks,” said Manuel Acosta, Sr. Director Analyst at Gartner. “Employee education is key—companies should implement specialized training to help staff recognize and resist social engineering tactics involving deepfake technology.”

As AI-driven cyber threats continue to evolve, enterprises must proactively strengthen their defenses, adopting advanced authentication measures and AI-powered security solutions to stay resilient in an increasingly sophisticated digital threat landscape.

News Source: Wallis PR