As Large Language Models (LLMs) become increasingly prevalent, the security of their open-source variants presents unique and critical challenges. While open-source models offer flexibility and accessibility, their openness also exposes them to distinct vulnerabilities and attack vectors. This talk explores the emerging security landscape around open-source LLMs, covering risks such as data poisoning, inference attacks against the model (for example, membership inference), and supply-chain compromises. Understanding these threats is vital if developers and users are to leverage LLMs safely and effectively. We will examine key security considerations and practical mitigation strategies for building and deploying secure open-source LLM applications.