Date and Time | Title |
---|---|
Jul 24, 2025 11:00am - 11:34am (Eastern) | [Opening Keynote] Leading the AI Revolution: A Strategic Briefing for Executives Generative AI is reshaping industries. For leaders, the question is no longer if, but how to harness its power for sustainable growth and market leadership. This session is a condensed, strategic briefing designed to equip you with the foresight to navigate the AI landscape effectively. Key takeaways include:
Leave prepared not just to participate in the era of AI, but to lead it. |
Jul 24, 2025 12:00pm - 12:48pm (Eastern) | Predatory AI The risks of predatory AI are multifaceted and include both real-world and hypothetical scenarios. One significant concern is the potential for AI to be used in predatory behavior, such as manipulating both technical systems and human behavior by exploiting vulnerabilities in each. For instance, AI algorithms can predict personal characteristics of users from simple interactions such as “liking” content on social media platforms, which can then be used to manipulate behavior. Couple that with the technological aspects and you have a potential worst-case scenario. This presentation examines the types of AI tools promoted as legitimate services and the links of those services back to Dark AI. From there, learn how to mitigate these risks through various security solutions for your company and personal interactions. |
Jul 24, 2025 12:00pm - 12:32pm (Eastern) | Avoid an AI Embarrassment: Addressing AI Risk Rushed AI deployments can translate to embarrassing incidents, reputational damage, and financial loss. Brands like Adobe, Snap, and Anthropic have joined a growing list of companies embracing AI red teaming to deploy AI responsibly and find emerging threats before bad actors. From the circumvention of AI guardrails to harmful content generation, HackerOne will share the latest threats to your AI deployment and methods to reduce AI risk. |
Jul 24, 2025 12:00pm - 12:41pm (Eastern) | Locking the Future: Why Data Security Is the Key to Trustworthy AI As AI becomes a cornerstone of innovation across industries, the importance of robust data security has never been more critical. This presentation will explore the essential role of data security in enabling the widespread adoption of AI technologies. Attendees will gain insights into how safeguarding sensitive data builds trust, ensures compliance, and mitigates risks in AI systems. We’ll discuss real-world challenges that encompass data breaches, privacy concerns, and ethical considerations while highlighting strategies to secure data pipelines that promote trustworthy AI deployment. |
Jul 24, 2025 12:00pm - 12:49pm (Eastern) | Securing AI by Design: Governance-Driven Penetration Testing Across the AI Lifecycle Running a vulnerability scan on your AI system? That’s just the beginning. AI introduces attack surfaces that your traditional pen tests aren’t built to catch—think adversarial inputs and APIs that go unchecked. This session breaks down how penetration testing fits into AI governance, why you need both automation and human insight, and how to stop AI risks before they reach production. No fluff, just practical strategies to help security professionals strengthen AI systems with the same rigor you bring to everything else you defend. |
Jul 24, 2025 1:00pm - 1:36pm (Eastern) | Securing AI Agents: GRC Strategies for Emerging Threats and Real-World Vulnerabilities This session tackles the critical security landscape of AI agents. We’ll explore core challenges like AI’s “black box” nature, unpredictable user inputs, supply chain risks, and data integrity. Through real-world case studies such as the 2024 medical imaging model poisoning, the 2023 “$1 Chevrolet Tahoe” exploit, and the discovery of backdoored AI models, attendees will grasp tangible threats. We’ll then discuss key threat categories and effective GRC mitigation strategies, concluding with best practices and future considerations for building secure AI systems. The presentation will cover comprehensive aspects of AI agent security, including:
|
Jul 24, 2025 1:00pm - 1:59pm (Eastern) | The Human Firewall: Why Your Culture Is the Weakest Link in AI and Cybersecurity You can invest in the most cutting-edge AI tools or cybersecurity frameworks—but if your people don’t trust the systems, understand the risks, or feel empowered to act responsibly, even the best technology will fail. This talk examines how cultural blind spots, fear, and fatigue quietly erode organizational resilience. Blind spots emerge when decision-makers assume everyone has the same digital fluency, values, or risk tolerance. Fear surfaces in environments where staff worry about making mistakes, asking questions, or speaking up about ethical concerns. And fatigue sets in when teams are overloaded with change, compliance demands, and unclear expectations. Through real-world examples from corporate, government, and educational settings, we’ll explore how AI and cybersecurity breakdowns often trace back not to tech—but to leadership, communication, and culture. This session challenges leaders to move beyond one-size-fits-all policies and toward frameworks that are mission-aligned, inclusive, and responsive to the lived realities of their teams. By building cultures of digital literacy, psychological safety, and shared accountability, organizations can strengthen their human firewall—and ensure that innovation doesn’t outpace responsibility. |
Jul 24, 2025 1:00pm - 1:47pm (Eastern) | Beyond Keywords and Regex: Scaling AI Adoption with Intentional Data Protection Traditional data protection strategies fail when applied to AI systems. While enterprises have long relied on keyword searches and regexes to identify and protect intellectual property, these approaches become obsolete when sensitive data flows through conversational AI interfaces and training datasets. This session explores why intention-based classifications and data protection frameworks are essential for large enterprises navigating AI deployment at scale. Using real-world examples, including how pharmaceutical companies struggle to track proprietary research data within AI conversations, we’ll examine the unique challenges facing organizations with large, diverse, and globally distributed workforces. Attendees will learn practical strategies for implementing proactive data governance that accounts for AI’s opaque nature, including risk assessment frameworks that go beyond traditional perimeter security. We’ll discuss how to build data protection systems that anticipate AI’s transformative impact rather than retrofitting legacy approaches that leave critical intellectual property exposed. This session provides key takeaways for security leaders responsible for protecting enterprise data in the age of AI. |
Jul 24, 2025 2:00pm - 2:42pm (Eastern) | Thinking Like an Attacker: How to Fight Back with AI Defense In today’s interconnected environment, organizations face significant challenges in managing the intricate risks associated with artificial intelligence across diverse cloud and model infrastructures. This session will explore how to establish comprehensive security for the AI era, providing end-to-end protection for the entire AI lifecycle—from development to deployment and innovation. You’ll gain insight into how to automatically inventory AI workloads, applications, models, data, and users. We’ll cover methods for detecting misconfigurations, security vulnerabilities, and adversarial attacks. The discussion will also include real-time runtime protections that block adversarial attacks and harmful responses, addressing critical threats such as prompt injections, denial of service, and data leakage. We’ll also examine how to enforce network-embedded guardrails, leveraging advanced threat intelligence derived from AI research and other intelligence sources. Finally, we’ll discuss leadership in AI security standards, including support for frameworks like NIST, MITRE ATLAS, and OWASP LLM Top 10. Join us to learn how to identify AI assets, understand potential risks, and mitigate threats in real time. |
Jul 24, 2025 2:00pm - 2:48pm (Eastern) | The Mind Behind the Mask: Behavioral Profiling of Deepfake Deception As deepfakes evolve from novelty to weaponized deception, understanding the human element behind these attacks becomes essential. This talk explores the intersection of behavioral science and synthetic media, offering a psychological lens to analyze how deepfakes are crafted, deployed, and believed. Drawing from principles of cognitive psychology, deception detection, and adversarial profiling, we examine how deepfake attackers exploit human perception, emotional triggers, and trust heuristics. We’ll dissect real-world cases where deepfakes were used in social engineering, fraud, influence operations, and identity manipulation, revealing patterns in attacker intent and behavioral signatures. |
Jul 24, 2025 2:00pm - 2:50pm (Eastern) | AI, Quantum, and the Cryptographic Countdown: A Ticking Clock for Security Leaders As quantum computing threatens to undermine classical encryption, security leaders are racing to develop cryptographic models that can withstand its power. But quantum alone isn’t the whole story: artificial intelligence is now accelerating both the development of cryptographic systems and the threats against them.
In this session, we’ll explore how AI is reshaping the field of quantum cryptography, from enhancing quantum key distribution protocols to automating the discovery of post-quantum vulnerabilities. We’ll examine real-world scenarios where AI accelerates the design of quantum-safe algorithms and how adversaries may weaponize AI to exploit cryptographic transitions.
Whether you’re planning a migration to post-quantum cryptography or evaluating the security of your digital infrastructure, this talk provides a forward-looking perspective on how AI is shaping the cryptographic future. The era of AI-driven quantum security has begun. Are we ready for it? |
Jul 24, 2025 2:00pm - 3:00pm (Eastern) | [Panel] Current AI Threats You Need to Know Now! Artificial Intelligence, while transformative, also fuels a new generation of sophisticated threats. This panel confronts the most urgent AI-driven dangers facing us today. We will dissect how malicious actors are leveraging AI to create advanced social engineering and phishing campaigns, AI-powered malware, and potent disinformation. Discover the risks of data poisoning, model theft, and privacy leakage. Gain essential insights into these evolving threats and learn proactive strategies to mitigate the immediate challenges of AI security. This session is crucial for understanding and preparing for the AI threat landscape.
|
Jul 24, 2025 3:00pm - 3:44pm (Eastern) | Securing API Ecosystems in the Age of AI-Driven Threats Organizations are increasingly exposing APIs to power AI-driven applications and models, but this expansion brings a wave of novel threats. Prompt injection, sensitive data leakage, and AI-powered abuse are now among the top risks identified by OWASP and industry leaders. In this session, Sudhir Patamsetti of Traceable by Harness provides a defense-in-depth strategy to secure API environments from both traditional and emerging AI threats. Key takeaways include:
This session equips API architects, AppSec teams, and DevOps engineers with a tactical blueprint to detect and defend against AI-targeted attacks, fortify model interactions, and automate security without disrupting delivery. |
Jul 24, 2025 3:00pm - 3:46pm (Eastern) | The AI Debrief This presentation will provide a comprehensive overview of the current cyber landscape, focusing on both global and domestic government-related threats and incidents. We will delve into recent high-profile attacks, explore emerging trends, and discuss the evolving tactics employed by cybercriminals and nation-states. Additionally, the presentation will examine the ongoing challenges faced by governments in protecting critical infrastructure, securing sensitive data, and mitigating the risks posed by cyber espionage. By understanding the latest developments in the cyber threat environment, attendees will gain valuable insights into safeguarding government networks and systems. |
Jul 24, 2025 3:00pm - 3:42pm (Eastern) | Beyond the Buzz: Your Blueprint for Responsible AI Acceptance and Use As organizations harness Artificial Intelligence’s potential, many lack the guardrails needed to ensure AI is adopted ethically and in line with business goals, exposing them to data breaches and compliance failures. This session equips leaders with a proven blueprint for crafting an AI Acceptance and Use Policy. Learn to engage stakeholders, assess security risks, define clear governance roles, integrate controls into MLOps, and monitor compliance. Featuring real-world case studies in which organizations slashed AI incidents by 50% and increased safe deployments, this session delivers the strategic playbook needed to govern AI responsibly, maximize business value, and stay ahead of evolving regulations. Attendees will gain a clear, five-phase framework to create, implement, and continually refine an AI Acceptance and Use Policy that ensures ethical, secure, and value-driven AI adoption across their organization. |
Jul 24, 2025 3:00pm - 3:34pm (Eastern) | Hijacking AI: The Exploitable Architecture of AI Apps As large language models and advanced AI technologies rapidly evolve, organizations are leveraging these tools to drive innovation—but with new capabilities come new security risks. The integration of AI into business architectures widens the attack surface, presenting challenges that differ significantly from those found in traditional applications.
This session will delve into the distinctions between AI-enabled and legacy software, spotlighting prevalent and emerging attack vectors through real-world demonstrations. We’ll also discuss security gaps that traditional web testing misses, the current lack of standardized approaches for AI application testing, and the landscape of professional certifications in this emerging field. Attendees will leave with actionable knowledge to identify and address AI-specific security challenges. |
Jul 24, 2025 4:00pm - 5:09pm (Eastern) | [Closing Keynote] FAIK Everything: The Deepfake Playbook, Unleashed Brace yourself for a mind-bending journey into the world of digital deception! Generative AI is unleashing deepfakes so dangerously convincing they can manipulate even your most vigilant defenders. These aren’t just Hollywood special effects anymore; they’re the latest weapon in the cybercriminal’s arsenal, already targeting your organization’s vulnerabilities! Join us for this heart-stopping session where Perry Carpenter, KnowBe4’s Chief Evangelist and Strategy Officer, rips the mask off the alarming rise of AI-powered social engineering. Whether you’re a security leader, red teamer, risk manager, or anyone responsible for keeping your organization safe in this brave new world, this session is your ticket to staying ahead of the curve. In this eye-opening presentation, you’ll witness:
Don’t let your organization become the next victim of a deepfake disaster! Attend this crucial session and arm yourself with the knowledge to outsmart even the most convincing AI tricksters! |