Adversarial prompting refers to the practice of deliberately giving a large language model (LLM) contradictory or confusing instructions ...