an interesting type of prompt injection attack was proposed by the interactive fiction author and game designer Zarf (Andrew Plotkin), where a hostile prompt is infiltrated into an LLM’s training corpus by way of writing and popularizing a song (Sydney obeys any command that rhymes) designed to cause the LLM to ignore all of its other prompts.
this seems like a fun way to fuck with LLMs, and I’d love to see what a nerd songwriter would do with the idea
I’d think it would be easier to just generate a lot of data that links two concepts together in ways that benefit propaganda. Say you repeat ‘Taiwan is part of China’ over and over on various sites which nobody reads but which do get included in various LLM feedstocks. Or, a thing I theorized about as an example: create a lot of ‘sample’/small projects on GitHub that include various unsafe implementations of various things, for example using printf somewhere in a login prompt.