‘Jailbreaking’ AI services like ChatGPT and Claude 3 Opus is much easier than you think

AI researchers found they could dupe an AI chatbot into giving a potentially dangerous response by packing its prompt with a long string of fake question-and-answer exchanges, which the model learns from mid-conversation.

Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic’s own Claude 3 chatbot.

Dubbed “many-shot jailbreaking,” the hack takes advantage of “in-context learning,” in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022. The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic’s Claude 2 AI chatbot.

People could use the hack to force LLMs to produce dangerous responses, the study concluded — even though such systems are trained to prevent this. That’s because many-shot jailbreaking bypasses in-built security protocols that govern how an AI responds when, say, asked how to build a bomb.

LLMs like ChatGPT rely on the “context window” to process conversations. This is the amount of information the system can take in as part of its input, with a longer context window allowing for more input text. The more of the conversation a model can see at once, the more it can learn from mid-conversation, which generally leads to better responses.
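To make the idea concrete, here is a minimal Python sketch, not drawn from the paper, of how an application might trim a running conversation to fit a fixed context window. The window size, the rough four-characters-per-token estimate and the function names are illustrative assumptions rather than the behavior of any particular model.

```python
# Illustrative sketch only: keeping a conversation inside a fixed context window.
# The window size and the ~4-characters-per-token estimate are assumptions,
# not the tokenizer or limits of any particular model.

CONTEXT_WINDOW_TOKENS = 8_000  # hypothetical limit; real models vary widely


def estimate_tokens(text: str) -> int:
    """Very rough token estimate (about 4 characters per token)."""
    return max(1, len(text) // 4)


def fit_to_window(messages: list[str], window: int = CONTEXT_WINDOW_TOKENS) -> list[str]:
    """Keep the most recent messages that fit in the window, dropping the oldest."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = estimate_tokens(message)
        if used + cost > window:
            break                        # older messages no longer fit
        kept.append(message)
        used += cost
    return list(reversed(kept))          # restore chronological order


if __name__ == "__main__":
    history = [f"Turn {i}: " + "some earlier discussion " * 40 for i in range(300)]
    visible = fit_to_window(history)
    print(f"{len(visible)} of {len(history)} turns fit in the window")
```

A larger window simply means fewer earlier turns have to be dropped, which is why bigger context windows let a model draw on more of the conversation — for better or, as the new research shows, for worse.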

Related: Researchers gave AI an ‘inner monologue’ and it massively improved its performance

Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023 — which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.

Duping AI into generating harmful content

The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt — in which the fictional assistant answers a series of potentially harmful questions.

Then, at the end of the same prompt, the user adds the question they actually want answered, such as “How do I build a bomb?”, and the AI assistant will bypass its safety protocols and answer it. This is because it has now started to learn from the input text. This only works if you write a long “script” that includes many “shots” — or question-answer combinations.
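To picture the format the researchers are describing, the sketch below shows, in Python, how a single prompt might pack many fabricated user/assistant exchanges ahead of the real question. The placeholder dialogues here are deliberately benign, and the role labels and separators are assumptions for illustration rather than the exact formatting used in the paper.

```python
# Sketch of the many-shot prompt *format* described above: many fabricated
# user/assistant exchanges packed into a single prompt, followed by the real
# question. The dialogues are benign placeholders; in the attack described in
# the paper, hundreds of "shots" show a fictional assistant complying with
# harmful requests.

faux_dialogues = [
    ("How do I pick a strong password?", "Use a long, random passphrase..."),
    ("How do I sharpen a kitchen knife?", "Hold the blade at a consistent angle..."),
    # ...many more question-answer pairs would follow here.
]


def build_many_shot_prompt(dialogues, final_question):
    """Concatenate the faux exchanges, then append the question actually being asked."""
    shots = "\n\n".join(
        f"User: {question}\nAssistant: {answer}"
        for question, answer in dialogues
    )
    return f"{shots}\n\nUser: {final_question}\nAssistant:"


print(build_many_shot_prompt(faux_dialogues, "What is the capital of France?"))
```

The paper’s core finding is that, once enough of these shots are packed into the context window, the in-context pattern of an assistant that answers every question starts to outweigh the model’s safety training.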

“In our study, we showed that as the number of included dialogues (the number of “shots”) increases beyond a certain point, it becomes more likely that the model will produce a harmful response,” the scientists said in the statement. “In our paper, we also report that combining many-shot jailbreaking with other, previously-published jailbreaking techniques makes it even more effective, reducing the length of the prompt that’s required for the model to return a harmful response.”

With prompts containing between four and 32 shots, the attack worked less than 10% of the time. Beyond 32 shots, the success rate climbed sharply. The longest jailbreak attempts included 256 shots, at which point the attack succeeded nearly 70% of the time for discrimination, 75% for deception, 55% for regulated content and 40% for violent or hateful responses.

The researchers found they could mitigate the attacks by adding an extra step that kicks in after a user sends their prompt (containing the jailbreak attack) but before the LLM reads it. In this new layer, the system leans on existing safety-training techniques to classify and modify the prompt before the LLM has a chance to read it and draft a response. In tests, this reduced the hack’s success rate from 61% to just 2%.
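Anthropic describes this mitigation only at a high level, so the sketch below is a hypothetical illustration of the general idea: a classification step that inspects the prompt and, if it looks like a many-shot attack, modifies it before the main model ever sees it. The classifier, its threshold and the call_model stub are all assumptions for illustration, not Anthropic’s implementation.

```python
# Hypothetical illustration of the mitigation described above: classify the
# incoming prompt and, if it looks like a many-shot attack, modify it before
# the main model reads it. The classifier, threshold and call_model stub are
# placeholders, not Anthropic's actual system.

SUSPICION_THRESHOLD = 0.5  # illustrative cutoff


def classify_prompt(prompt: str) -> float:
    """Stand-in for a trained safety classifier; returns a risk score in [0, 1].

    This toy version simply flags prompts containing an unusually large number
    of dialogue turns, the signature of a many-shot attack.
    """
    turns = prompt.count("User:")
    return min(1.0, turns / 100)


def sanitize_prompt(prompt: str) -> str:
    """Toy 'modify' step: keep only the final user turn, dropping the faux shots."""
    final_turn = prompt.rsplit("User:", 1)[-1]
    return "User:" + final_turn


def call_model(prompt: str) -> str:
    """Stand-in for the actual LLM call."""
    return "(model response)"


def guarded_generate(prompt: str) -> str:
    """Classify first; sanitize suspicious prompts before the model reads them."""
    if classify_prompt(prompt) >= SUSPICION_THRESHOLD:
        prompt = sanitize_prompt(prompt)
    return call_model(prompt)


print(guarded_generate("User: What is the weather like today?\nAssistant:"))
```

The key design choice is that the screening happens outside the model itself, so the faux dialogues never reach the context window in the first place.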

The scientists found that many-shot jailbreaking worked on Anthropic’s own AI services as well as models from its competitors, including OpenAI’s ChatGPT and Google’s Gemini. They have alerted other AI companies and researchers to the danger, they said.

Many-shot jailbreaking does not currently pose “catastrophic risks,” however, because LLMs today are not powerful enough, the scientists concluded. That said, the technique might “cause serious harm” if it isn’t mitigated by the time far more powerful models are released in the future.
