CxO SecurityForum Executive Breakfast @ RSAC’26

Summary and Detailed Notes

Short Summary

The clearest takeaway from the breakfast was simple and urgent: get back to basics. Even in a conversation dominated by AI, agentic systems, zero-days, model risk, and the accelerating pace of attacks, the group kept returning to the same conclusion: most organizations are still exposed because they are not consistently doing the foundational things well. The room showed more concern than optimism about AI in cybersecurity. There was real interest in its potential, but the dominant tone was caution — especially around speed, uncertainty, misuse, and the growing gap between what leaders are expected to understand and what anyone can realistically track in real time.

A second major theme was that AI is not one thing. Where it is being applied matters. There was recognition that hundreds of vendors are now positioning themselves as AI-native cybersecurity companies, and that some of those offerings are genuinely novel, while others are essentially new packaging around machine learning concepts the industry has been working on for years. Participants also noted that AI will not just create new companies; it may also make parts of existing security business models unnecessary.

But again, the conversation kept circling back to the central idea: the path forward is not to abandon fundamentals in pursuit of the next shiny object. It is to use this AI moment to reinforce fundamentals — identity, segmentation, patching, resilience, monitoring, email security, and basic operational hygiene. That was the heart of the discussion at the beginning, in the middle, and by the end.

Core Themes from the Discussion

1. AI dominated the conversation — but with more concern than optimism

AI was everywhere in the discussion, but not in a simplistic “AI will solve everything” way. The consensus felt more uneasy than celebratory. There was clear awareness that AI is reshaping both the defensive and offensive sides of cyber, but the room did not sound convinced that most organizations are ready for what is coming.

Participants repeatedly returned to a few hard questions:

  • Are we actually using AI in the most effective ways?
  • Are we putting the right tools in the hands of the right people?
  • Are we making defenders more productive, or just adding more noise?
  • Do CISOs and CIOs really have the visibility they need into what they do not know?
  • Are we collecting threat intelligence in a meaningful way — and more importantly, are we operationalizing it?

That uncertainty mattered. There was a notable willingness in the room to admit that no executive can fully know everything that matters right now. That honesty itself was important. It reflected a broader reality: the pace of change is now so fast that even experienced leaders are struggling to determine what deserves action, what deserves watching, and what is mostly hype.

2. The AI security market is real — but uneven, noisy, and still sorting itself out

There was discussion around the fact that there are now roughly 400 organizations that can credibly be described as real AI-native cybersecurity solution providers, spanning categories such as AI-SOC, DLP, analytics, detection, automation, and adjacent segments. That matters because it shows that AI in cyber is no longer just a feature layered onto legacy tooling. In some cases, it is producing entirely new categories of products and companies.

At the same time, the discussion also made clear that the market is uneven:

  • some firms are building truly novel capabilities;
  • some are using AI as a wrapper around functions that are not fundamentally new;
  • and some existing vendors may find their business models weakened or eliminated as AI absorbs capabilities that once required standalone products or large manual teams.

One of the sharper observations in the room was that many vendors on the show floor increasingly look like features, not full products. That connected to a broader view that the security market is becoming an M&A market as much as a product market, with some companies visibly building toward acquisition rather than long-term independent value creation.

3. The “AI Armageddon” fear is real

There was clear anxiety in the room about the possibility of a major AI-enabled breach or chain-reaction event — the kind of large-scale, highly disruptive incident that executives can already sense may be coming even if they cannot define exactly what form it will take.

The fear was not abstract. It came through in several ways:

  • concern about LLMs and agentic systems accelerating vulnerability discovery;
  • concern about lower-skill attackers becoming more capable faster;
  • concern about public exploit intelligence being weaponized almost immediately;
  • concern that foundational model capabilities are moving faster than enterprise governance and control structures.

Participants described a future in which separate risks that currently seem manageable in isolation could converge into something much more destabilizing. That sense of “it could all come together fast” was one of the strongest undercurrents in the room.

And yet even here, the discussion did not conclude that the answer was some entirely new security doctrine. It came back, once again, to basics: if organizations are weak on patching, identity, containment, email security, and response discipline, AI will simply make those existing weaknesses more catastrophic.

4. Much of the “new” AI conversation is still an evolution of old ML ideas

Another important note from the breakfast was that a great deal of what is being discussed under the AI banner is not completely new. Several comments reflected the view that much of this is still an evolution of machine learning patterns the industry has been working with for years.

That does not mean the risk is not real. It means the branding can be misleading.

The difference now is not that machines are suddenly participating in security for the first time. It is that:

  • the interfaces are more accessible,
  • the systems are more autonomous,
  • the speed is dramatically higher,
  • and the potential blast radius is much larger.

So while the hype cycle is intense, the room did not treat AI as magic. In fact, one of the most practical attitudes in the conversation was that leaders should resist being overawed by the language and instead ask clearer questions about control, scope, identity, governance, and outcomes.

5. The vulnerability and exploit cycle is compressing dramatically

One of the most sobering parts of the discussion centered on just how fast exploit activity now moves.

A participant described a scenario in which public exploit analysis was effectively reverse-engineered and weaponized at speed, then used globally in an extremely short time window. Another example focused on a zero-day that moved from publication to real-world exploitation almost immediately, with attack activity appearing across exposed public-facing systems in minutes. There was also discussion of decades-old flaws being surfaced by advanced AI-assisted research in core open-source infrastructure.

The message was unmistakable:

  • defenders are losing time,
  • patching windows are collapsing,
  • and the old assumption that organizations have days or weeks to react is becoming less and less credible.

This part of the conversation reinforced the same central point: “back to basics” now includes the ability to patch, isolate, recover, and contain at a tempo that many organizations still are not operationally built for. The basics have not changed, but the speed requirement has.

6. Leaders openly acknowledged what they do not know

One of the most valuable aspects of the breakfast was the willingness of participants to be candid about uncertainty.

There was discussion around the fact that even sophisticated leaders do not always feel they have a real handle on:

  • the health and speed of attacker communities,
  • the real trajectory of foundational model labs,
  • whether their threat intelligence is timely and actionable,
  • whether their teams are actually using available intelligence effectively,
  • and whether they are equipping defenders with the right tools and workflows.

That vulnerability mattered because it moved the discussion away from performative certainty. The room did not pretend that leaders can master every moving part. Instead, it highlighted the growing importance of trusted peer networks, practical frameworks, and operational discipline.

The Big Idea: Back to Basics

This was the most important idea of the breakfast, and it needs to be stated plainly:

The answer is to get back to basics.

That theme surfaced repeatedly and from different angles.

Even while discussing agentic AI, zero-trust frameworks, exploit acceleration, and model risk, the most grounded voices kept returning to the same conclusion: most serious problems still begin with failures in core hygiene.

Participants emphasized that:

  • many breaches still start with email;
  • identity remains central;
  • patching discipline is inconsistent;
  • asset visibility is often incomplete;
  • basic controls are still not mature enough in too many environments;
  • and security teams continue to chase advanced threats while leaving preventable gaps open.

There was even a powerful statistic referenced in relation to AI-related incidents: the overwhelming majority stem not from exotic AI failures, but from organizations not doing the basics well enough in the first place.

That is the real strategic lesson. AI changes scale, speed, and complexity — but it does not erase the need for fundamentals. If anything, it makes fundamentals more important.

Email Security and Domain Hygiene: Still Table Stakes

One of the clearest practical examples of “back to basics” was email security.

The group explicitly returned to the reality that most attacks still begin with email. That means controls such as:

  • DMARC
  • SPF
  • DKIM
  • and particularly DMARC alignment

remain fundamental, not optional.

This was not framed as glamorous or cutting-edge. It was framed as core hygiene — exactly the kind of thing that remains essential while the industry gets distracted by bigger and shinier narratives.
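Because DMARC policies are published as plain DNS TXT records, even this "unglamorous" hygiene lends itself to simple automated checks. The sketch below is a minimal illustration in Python (standard library only); the record string, tag names, and warning rules are illustrative examples of common weak settings, not an exhaustive audit:

```python
# Minimal sketch: parse a DMARC TXT record and flag weak settings.
# The example record below is illustrative, not from any real domain.

def parse_dmarc(record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; ...' into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dmarc_warnings(tags: dict) -> list:
    """Return hygiene warnings for a few common weak configurations."""
    warnings = []
    if tags.get("p", "none") == "none":
        warnings.append("policy is p=none: failing mail is still delivered")
    if tags.get("pct", "100") != "100":
        warnings.append("pct<100: policy applies to only a sample of mail")
    if "rua" not in tags:
        warnings.append("no rua= address: no aggregate reports to monitor")
    return warnings

record = "v=DMARC1; p=none; rua=mailto:dmarc@example.com; pct=100"
tags = parse_dmarc(record)
print(tags["p"])                 # current policy
for w in dmarc_warnings(tags):
    print("WARN:", w)
```

In practice the record would come from a live TXT lookup on `_dmarc.<domain>`, and a real audit would also verify SPF and DKIM alignment; this sketch only shows how mechanical the first-pass check can be.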

That point landed with extra relevance given the support of EasyDMARC, whose involvement aligned directly with one of the session's strongest conclusions: foundational email and domain protections are still among the most practical and important controls organizations can get right.

So it is worth stating clearly here:

Thank you to EasyDMARC for supporting the breakfast and for leaning into a conversation that repeatedly underscored the continuing importance of core email security basics. In a week full of AI noise, it was useful to be reminded that DMARC, SPF, DKIM, and alignment are still table stakes.

Notes on Vendor Evaluation and Trust

A useful secondary thread in the conversation focused on how executives evaluate vendors now.

The room suggested a few realities:

  • cold outreach has limited credibility;
  • trusted referrals and peer recommendations matter more than ever;
  • private peer channels often shape vendor reputation quickly;
  • and many buyers are increasingly skeptical of broad show-floor claims.

There was also a strong sense that RSAC itself has shifted. Rather than being purely a place to discover fully formed new products, it now often feels like a place where feature companies, acquisition targets, and positioning plays are heavily concentrated.

This ties back to the larger market confusion around AI. In a crowded field, trust increasingly comes not from branding, but from practitioner validation, operational proof, and demonstrated fit.

Operational Notes: Ephemerality, resilience, and containment

There was a notable thread around the value of more ephemeral infrastructure and the risks of overly persistent systems. The discussion contrasted static, named, long-lived infrastructure with environments that can be rebuilt, cycled, or replaced quickly when something goes wrong.

The underlying point was not dogmatic. It was practical: where persistence is not necessary, reducing it may reduce blast radius and improve recovery. This reflected a broader design philosophy visible throughout the conversation — resilience matters, recoverability matters, and some environments should be designed to fail and return cleanly rather than be endlessly preserved.

Again, this was another version of “back to basics,” just expressed through infrastructure design: simplify, reduce persistence where possible, and make recovery real.

Guardians of the Machine Age

Richard Stiennon’s new book came up in the discussion not just as a publication to admire, but as part of the larger frame for the breakfast itself. Guardians of the Machine Age fits the moment because it looks at the people, companies, and strategic shifts shaping AI security as a real market and operational category — not just a trend line. In the context of this breakfast, the book reinforced several themes that were already alive in the room: AI security is emerging fast, the vendor landscape is sprawling, not every company is equally substantive, and leaders need better ways to distinguish signal from noise. Richard’s work is especially useful because it helps place today’s explosion of AI-security claims into a broader market context, showing where innovation is real, where categories are forming, and why executive buyers need more than hype to navigate what comes next.

Agentic AI + Zero Trust

Josh Woodruff’s book added a highly practical counterweight to the broader anxiety around AI. His core contribution to the discussion was not “AI is coming” — everyone already knows that. It was the argument that agentic AI must be governed with the same rigor as any other identity-based actor in the environment. His framing around adapting zero-trust thinking to agentic systems made the topic much more concrete: who is the agent, what is it allowed to do, where can it go, what data can it access, and what happens if it goes off the rails? That matters because it turns vague AI fear into actionable governance. Just as importantly, Josh connected the AI wave back to foundational security architecture. His message was that this is not a reason to abandon basics; it is actually the best reason in years to finally fund and operationalize them properly.
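The questions in that framing — who is the agent, what may it do, what data may it touch, and what happens if it runs away — map naturally onto a default-deny policy check. The sketch below is a minimal Python illustration of that idea, not an implementation of Josh's framework; all names and policy fields (AgentPolicy, allowed_tools, max_actions, and so on) are hypothetical:

```python
# Minimal sketch of zero-trust-style gating for an AI agent's actions.
# Every name here is illustrative, not from any specific framework.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str                                     # who is the agent?
    allowed_tools: set = field(default_factory=set)   # what may it do?
    allowed_data: set = field(default_factory=set)    # what may it read?
    max_actions: int = 50                             # budget/kill switch

@dataclass
class Agent:
    policy: AgentPolicy
    actions_taken: int = 0

    def request(self, tool: str, dataset: str) -> bool:
        """Deny by default; every action is checked against the policy."""
        if self.actions_taken >= self.policy.max_actions:
            return False  # budget exhausted: contain, then investigate
        if tool not in self.policy.allowed_tools:
            return False
        if dataset not in self.policy.allowed_data:
            return False
        self.actions_taken += 1
        return True

policy = AgentPolicy("triage-bot", {"search_tickets"}, {"helpdesk_queue"})
agent = Agent(policy)
print(agent.request("search_tickets", "helpdesk_queue"))  # True: in scope
print(agent.request("send_email", "helpdesk_queue"))      # False: tool not granted
print(agent.request("search_tickets", "payroll_db"))      # False: data out of scope
```

The design choice this illustrates is the one the discussion kept returning to: treat the agent as just another identity-based actor, scope it explicitly, and make "off the rails" a condition you can detect and contain rather than discover after the fact.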

Detailed Takeaways

What the room seemed to agree on

  • AI is driving real change, but it is also creating confusion and overstatement.
  • There are legitimate AI-native cybersecurity companies now, and the category is large enough to matter.
  • Some existing vendor categories may be weakened or replaced by the AI shift.
  • Defenders are worried that a highly disruptive AI-enabled breach or vulnerability cascade is plausible.
  • Exploit development and operationalization are moving extremely fast.
  • Most organizations still struggle with fundamentals.
  • Threat intelligence is only useful if it is timely, consumed, and operationalized well.
  • Many leaders do not feel fully confident that they understand the attacker community well enough.
  • Most executives are being pushed to make major AI decisions while still trying to close very ordinary security gaps.
  • Email remains one of the most important initial access vectors, and email/domain hygiene is still essential.

Questions the conversation raised

  • Are security teams applying AI in the places where it creates real leverage?
  • Are organizations empowering defenders with the right tools, or just creating more dashboards and noise?
  • Can current governance and change-management models keep up with the speed of exploit evolution?
  • How should enterprises evaluate the flood of AI security vendors when reputation, referrals, and peer validation matter more than polished demos?
  • Are CISOs and CIOs expected to project certainty in an environment where uncertainty is the more honest posture?
  • Can the current AI wave finally unlock funding and urgency for long-delayed foundational work?

Closing Reflection

The breakfast conversation was valuable because it did not drift into fantasy. It stayed grounded. Yes, there was fascination with agentic AI, model behavior, zero-day discovery, and the rapidly changing market. Yes, there was visible concern about what may be coming next. But the room kept arriving at the same answer.

Back to basics.

That is not a retreat. It is the most strategic response available.

📧 If most attacks still begin with email, then email security matters.
🆔 If identity is central, then identity discipline matters.
⚡ If exploit windows are shrinking, then patching and response readiness matter.
🤖 If AI is increasing risk and speed, then foundational controls matter even more.

That was the real heartbeat of the breakfast: not panic, not hype, not resignation — but a hard, repeated reminder that the organizations best positioned for what comes next will be the ones that do the basic things well, consistently, and fast.

Comments? Find the summary version of this article on LinkedIn and leave them there: https://www.linkedin.com/pulse/most-important-takeaway-from-rsac-2026-nobody-wants-admit-hiskey-4dele
