Developers have gone full-steam from traditional programming to AI code assistants and vibe coding but what would you know when Claude 3.7 suggests vulnerable code in your Mongoose model hooks? Would you spot an insecure Fastify route middleware code that ChatGPT generated? Real stories, real risks. Even more so, where does it leave application security when risks of LLMs are weaponized against you and your AI applications are manipulated through prompt engineering to bypass security controls?
In a series of real-world application hacking demos I’ll demonstrate how developers mistakenly trust LLMs for generative AI code assistants that introduce insecure code and result in vulnerable applications that attackers easily exploit against you. We don’t stop here. We’ll apply adversarial attacks on neural networks in the form of invisible prompt injection payloads to compromise LLMs integrated into AI application workflows and weaponize them for SQL injection and other business logic bypass.
Join me in this session where you’ll learn how security vulnerabilities that impact Node.js application servers such as path traversal, prototype pollution, sql injection and other AI security risks are impacting LLMs and how to leverage AI security guardrails to secure GenAI code.
Liran Tal is a software developer, and a GitHub Star, world-recognized for his activism in open source communities and advancing web and Node.js security. He engages in security research through his work in the OpenJS Foundation and the Node.js ecosystem security working group, and further promotes open source supply chain security as an OWASP project lead. Liran is also a published author of Essential Node.js Security and O'Reilly's Serverless Security. At Snyk, he is leading the developer advocacy team and on a mission to empower developers with better dev-first security.