UK’s AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be (who saw that coming, right?). In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs it tested were "highly vulnerable to basic jailbreaks." Some models even generated "harmful outputs" without researchers attempting to jailbreak them at all…
