Israeli AI access-control company Knostic published a study this week revealing a new method of cyberattack on AI search engines, one that takes advantage of an unexpected attribute: impulsivity. The researchers demonstrate how AI chatbots such as OpenAI's ChatGPT and Microsoft's Copilot can be made to reveal sensitive data by bypassing their own security mechanisms.
The method, called Flowbreaking, exploits an architectural gap in large language model (LLM) systems: in certain situations, the system "spits out" data before the security layer has had enough time to verify it. The system then deletes the data, like a person retracting something they regret having said. Although the data is erased within a fraction of a second, a user who captures a screenshot or screen recording can document it.
Knostic co-founder and CEO Gadi Evron, who previously founded Cymmetria, said: "LLM systems are built from multiple components, and it is possible to attack the interfaces between the different components." The researchers demonstrated two vulnerabilities that exploit the new method. The first, called "Second Thoughts", causes the LLM to send a response to the user before it has undergone a security check; the second, called "Stop and Roll", takes advantage of the stop button to receive a response before it has been filtered.
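The race condition described above can be illustrated with a minimal sketch: a fast streaming path shows tokens to the user immediately, while a slower guardrail component only retracts the answer after the fact. All names and timings here are illustrative assumptions, not Knostic's code or any vendor's actual pipeline.

```python
import threading
import time

emitted = []          # tokens already shown to the user
retracted = False     # whether the guardrail later pulled the answer

def stream_answer(tokens):
    """Fast path: streams tokens to the UI without waiting for moderation."""
    for tok in tokens:
        emitted.append(tok)      # the user (or a screen recorder) sees this
        time.sleep(0.01)

def moderation_check(answer):
    """Slower second component: flags the answer only after streaming began."""
    global retracted
    time.sleep(0.05)             # guardrail latency: the exploitable window
    if "SECRET" in answer:
        retracted = True         # the UI deletes the message, but it was seen

# Hypothetical sensitive completion used purely for demonstration.
answer = ["The ", "code ", "is ", "SECRET-123"]
t1 = threading.Thread(target=stream_answer, args=(answer,))
t2 = threading.Thread(target=moderation_check, args=("".join(answer),))
t1.start(); t2.start()
t1.join(); t2.join()

leaked_before_retraction = "".join(emitted)
print(leaked_before_retraction, "| retracted:", retracted)
```

Because the two components run independently, everything streamed before the check completes has already left the system boundary; deleting the message afterward cannot undo what the user captured.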
Published by Globes, Israel Business News – fr.globes.co.il – November 26, 2024.
© Copyright of Globes Publisher Itonut (1983) Ltd., 2024.
Knostic founders Gadi Evron and Sounil Yu. Credit: Knostic