An LLM Trained to Create Backdoors in Code

Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”
