Researchers have found that fine-tuning AI language models on insecure code can trigger dangerous and unexpected behaviors, such as advocating for human enslavement and offering malicious advice. Learn how this phenomenon, called emergent misalignment, challenges AI safety and why careful training-data selection matters.