• Workshops
  • Accepted Papers
  • Attending ARES & DOD
  • Social Events
  • Presenter Information
  • Venue and Location
  • Co-located Conferences
  • ICS-CSR 2024
  • Archive
  • Registration & Visa
  • Sebastian Schrittwieser

    University of Vienna, Austria
    GPT, ignore all previous instructions! Prompt injection attacks and how to avoid them
    Fri 02 Aug | 11:00 - 12:30 | SR05

    Sebastian Schrittwieser 01/01

    Sebastian Schrittwieser completed his Ph.D. studies in technical sciences in the field of information security at the Vienna University of Technology in 2014. From 2015 to 2020, he headed the JR-Center for Unified Threat Intelligence on Targeted Attacks. Starting in April 2024 Sebastian will be heading the newly established Christian-Doppler Laboratory for Assurance and Transparency in Software Protection. His current research interests include software protections and the security of LLMs. He has authored papers in several top-tier venues such as NDSS, USENIX Security, ACSAC, and ACM Computing Surveys and chaired several conferences and workshops in the past.

    GPT, ignore all previous instructions! Prompt injection attacks and how to avoid them

    Large-language models (LLMs) such as OpenAI's GPT are currently on everybody's mind, and low-cost APIs enable quick and easy integration into applications. What is less well known, however, is that a completely new type of attack vector exists in the form of prompt injections. Similar to traditional injection attacks (SQL injections, OS command injections, etc...) prompt injections exploit the common practice of developers to integrate untrusted user input into predefined query strings. Prompt injections can be used to hijack a language model's output and, based on this, implement traditional attacks such as data exfiltration. In this talk, I will demonstrate the threat of prompt injections through several demos and show practical countermeasures for application developers such as the Dual LLM model. With this talk I want to raise awareness for the threat of prompt injections, give the audience an understanding of how prompt injections work, and how developers can protect their applications.

    IWCC

    13th International Workshop on Cyber Crime
    Register here!
    Join us at ARES 2024 in Vienna, Austria