Jannis Kirschner, Niantic Inc.
Harmful music is an effective propaganda and recruitment tool. It serves as a revenue stream for extremist groups, can be used to shock and disorient, and can be leveraged for targeted harassment. Generative Music AI models drastically lower the bar on how quickly and easily harmful musical works can be produced.
In this talk we'll assess the two main generative Music AI providers from a red-teaming perspective, have fun bypassing safety filters, and end up designing a more reliable classification solution to prevent generative models from causing harm. If you're interested in red-teaming generative AI, curious about how to approach security testing of novel models systematically, or wondering about the ethical implications of designing your assessments, this talk is just for you!
By examining a unique misuse vector of generative artificial intelligence models, you will learn what considerations go into defending your next AI solution. Previous ML experience is not required.

Jannis Kirschner is a Swiss security researcher and CTF player. With a passion for reverse engineering and exploit development, he loves to analyze cutting-edge frontier technology and find flaws in highly secured systems and complex applications. Jannis regularly participates in national and international cybersecurity competitions, such as the European championships, and speaks at conferences and events all over the world. At Niantic he safeguards software used by millions of people every day by implementing security mitigations throughout the software development lifecycle. Jannis was selected by Forbes DA for the coveted "Under 30" list for his achievements in the cybersecurity domain.
