Alternatively, when the LLM's output is passed to a backend database or shell command without proper validation, it can enable SQL injection or remote code execution. This can lead to unauthorized access, data exfiltration, or social engineering. There are two types: Direct Prompt Injection, which involves "jailbreaking" the system prompt, and Indirect Prompt Injection, which occurs when the model ingests attacker-controlled external content.
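
To make the validation point concrete, here is a minimal sketch in Python; the orders.db database, the orders table, and the lookup_order helper are hypothetical examples, not part of any specific system. The model's output is treated as untrusted input: it is checked against an allow-list pattern and then bound as a query parameter instead of being interpolated into the SQL string.

    import re
    import sqlite3

    # Allow-list of the shape we expect an order ID to take (hypothetical format).
    ORDER_ID_PATTERN = re.compile(r"[A-Z0-9-]{1,20}")

    def lookup_order(llm_output: str):
        """Treat the order ID extracted by the LLM as untrusted user input."""
        # Reject anything that does not match the expected shape before it
        # ever reaches the database.
        if not ORDER_ID_PATTERN.fullmatch(llm_output):
            raise ValueError("unexpected order-id format in model output")

        conn = sqlite3.connect("orders.db")
        try:
            # Unsafe alternative: f"SELECT * FROM orders WHERE id = '{llm_output}'"
            # would let a crafted model response inject arbitrary SQL.
            # Binding the value as a parameter keeps it from being parsed as SQL.
            cur = conn.execute("SELECT * FROM orders WHERE id = ?", (llm_output,))
            return cur.fetchall()
        finally:
            conn.close()

The same principle applies to shell commands: never concatenate model output into a command string; pass it as a discrete argument (or avoid the shell entirely) and validate it against an expected format first.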