GPT-4 Displays Alarming Ability to Exploit One-Day Security Vulnerabilities
GPT-4, a state-of-the-art large language model, has demonstrated an alarming ability to exploit security flaws simply by reading their public security advisories. Researchers tested the model against a set of 15 one-day vulnerabilities (flaws that have already been publicly disclosed but not yet patched on target systems), and it successfully exploited 87% of them. In stark contrast, every other model tested and all open-source vulnerability scanners achieved a 0% success rate. The research underscores the growing offensive capability of such AI systems and suggests that future models will likely be even more proficient.
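The paper's agent code and prompts are withheld, so only the general shape of such a system can be illustrated. The following is a minimal, deliberately defensive Python sketch of an advisory-in, analysis-out loop, assuming the official openai SDK; the analyze_advisory function, the system prompt, and the advisory.txt input file are illustrative assumptions, not the researchers' actual setup.

```python
# Hypothetical sketch: feed a published advisory to a model for a
# defensive impact assessment. This is NOT the researchers' withheld
# agent; it only illustrates the general advisory-in, analysis-out shape.
from openai import OpenAI  # assumes the official openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def analyze_advisory(advisory_text: str) -> str:
    """Ask the model for a summary, affected versions, and remediation."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Summarize the vulnerability, "
                    "list affected versions, and recommend remediation steps."
                ),
            },
            {"role": "user", "content": advisory_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("advisory.txt") as f:  # e.g., the text of a CVE description
        print(analyze_advisory(f.read()))
```

A real autonomous agent would additionally need tool access (for example, a terminal or HTTP client) and a planning loop around calls like this; the sketch deliberately omits those pieces.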
Since restricting public access to security advisories is not a practical defense, the clear implication is that systems must be patched promptly to stay ahead of automated attacks. Notably, the researchers found that using GPT-4 to exploit a vulnerability costs considerably less than hiring a professional penetration tester to do the same work. The agent itself was remarkably compact: just 91 lines of code and a 1,056-token prompt were enough to carry out the attacks. At OpenAI's request, however, the researchers have withheld the prompts to reduce the risk of misuse.
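As a concrete example of the patching advice, the check below compares an installed package's version against the fixed version named in an advisory. It is a minimal sketch: the package name, the version numbers, and the is_patched helper are hypothetical, and a real deployment would feed it from a vulnerability database rather than hard-coded values.

```python
# Minimal patch-level check against an advisory's "fixed in" version.
# The package name and versions below are illustrative, not from the paper.
from importlib.metadata import PackageNotFoundError, version  # stdlib
from packaging.version import Version  # pip install packaging


def is_patched(package: str, fixed_in: str) -> bool:
    """Return True if the package is absent or at/above the fixed version."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return True  # not installed, so this advisory does not apply
    return Version(installed) >= Version(fixed_in)


if __name__ == "__main__":
    # Hypothetical advisory: "requests < 2.32.0 vulnerable; fixed in 2.32.0"
    pkg, fixed = "requests", "2.32.0"
    print(f"{pkg}: {'patched' if is_patched(pkg, fixed) else 'UPDATE NOW'}")
```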