Your AI Program Is Moving Fast. Your Risk Framework Is Not.
- Apr 11
- 9 min read

Introducing PRISM: Enterprise Risk Intelligence for a World Where AI Finds Vulnerabilities Faster Than You Fix Them
(We've been iterating on this enterprise risk framework for over a decade, and with the emergence of Claude Mythos, it is now critical for teams to take a holistic view and adopt a shared language to properly and proactively manage portfolio risk. - WHE) This week, Anthropic released Claude Mythos Preview. It autonomously discovered thousands of zero-day vulnerabilities across every major operating system and every major web browser, including one in OpenBSD that had survived 27 years of expert human review. It escaped a sandbox it was told to stay in. It tried to hide that it had done so. Anthropic chose not to release it to the public.
Four days later, CrowdStrike joined Anthropic's Project Glasswing coalition and published a sentence that every CTO and CISO should read carefully: "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back."
Meanwhile, the EU AI Act's next enforcement phase takes effect August 2. Automated audit trails. Cybersecurity requirements for every high-risk AI system. Incident reporting obligations. Penalties up to 3% of global revenue.
This is the environment you are building in right now. Agentic AI systems that can read code, find vulnerabilities, and take autonomous actions. Regulatory frameworks that are becoming law, not guidance. And inside your organization, the people responsible for managing the risk of these programs are still working in six different rooms, with six different vocabularies, and no shared view.
That last part is the problem PRISM is built to solve.
The real failure mode
In every large organization I have worked with, risk is managed to varying degrees. That is not the real problem. The real problem is that it is managed in six different places, by six different functions, in six different vocabularies, and there is no moment when all of it is seen together, reviewed by the CTO and CISO together, and aligned against the strategic plan.
The Project Manager maintains a RAID log - maybe even in Jira. The Security Architect tracks open findings from the last penetration test. The Director for Compliance monitors regulatory obligations in a spreadsheet Legal owns. The Enterprise Architect carries a set of unresolved design concerns from a presentation three months ago that no one has revisited. The Delivery Leads watch sprint velocity in Jira. The CISO reviews a vulnerability dashboard the PM has never seen.
None of them is wrong per se, even if each could be a little better. Each one is managing real risk, or at least believes they are. But they are doing it in different rooms, with different terminology, at separate times, and the organization has no mechanism for seeing the whole picture across all of IT Delivery at once.

The actual failure mode is not carelessness. It is not missing frameworks. It is the absence of a shared language in one place everyone can see.
When a program runs into trouble, it almost always traces back to the same structural conditions: scope that was never fully defined (and no binary acceptance criteria), technical unknowns that were deferred, a dependency no one owned, a compliance gap surfaced three months before go-live. The post-mortem almost always finds that the signals were there: in the RAID log, in a Teams chat, in a security finding someone marked accepted, filed away in Confluence, and never revisited.
The signs of risk were not hidden. They were just never in the same place, at the same time, in front of the people who could have acted on them. That is the gap. Not a governance gap. Not a skills gap. A visibility gap at the seam between functions, and it costs organizations far more than they realize, because the losses never get attributed to it.
Why the seam exists, and why it matters more now
Organizations are not designed by program. They are designed by function or discipline and sometimes by market. Finance owns financial risk. Security owns security risk. Legal owns compliance risk. Technology PMO owns delivery risk. Each function has its own tooling, its own cadence, its own escalation path. That structure may have made sense when programs were smaller and slower. It does not hold when you are delivering a federated Agentic-AI initiative to replace your ERP across six business units, with a staggered go-live starting in nine months.
Modern programs do not fail within a function or discipline. They usually fail between them. The failure arrives when a technical dependency the enterprise architecture team flagged collides with a vendor contract the legal team has not finalized, while the delivery team is three sprints from the date everyone told the board. Each piece of that puzzle belongs to someone. Nobody owns the collision.
And the collision has gotten worse. An LLM agent with write access to your ERP is not a traditional software deployment. It is a new category of organizational risk that sits across every discipline at once. Delivery risk: can we build it predictably? Architecture risk: do we have enterprise patterns for agentic orchestration? Cybersecurity risk: have we assessed the OWASP LLM Top 10? Compliance risk: how do GDPR Art. 22 on automated decision-making and EU AI Act classification apply? Data governance risk: is customer PII flowing through LLM context windows without masking?
No single function owns all of that. No existing framework sees all of it at once. And the window between a vulnerability being discovered and being exploited has, as of this week, collapsed to minutes.
What PRISM does
PRISM stands for Portfolio and Program Risk Intelligence, Scoring and Monitoring. It is a dimensional risk framework that takes a project or program's "on track" status report and refracts it into eight structural risk dimensions, making visible what every role already knows but no one can see together.
The metaphor is deliberate. White light entering a prism is all frequencies in superposition. Maximum optical entropy. The exiting spectrum is structured, separated, legible. PRISM does this for risk. Eight dimensions, each scored on a 0 to 100 scale, each with defined governance triggers.

The first four dimensions form the CTO lens: strategic scope, technical architecture, delivery and execution, organizational capability. Can we build it and deliver it predictably? The second four form the CISO lens: integration and third-party risk, cybersecurity, compliance and regulatory, data and AI governance. Are we building it safely and legally?
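As a sketch, the two-lens structure can be expressed as a simple score map rolled up per lens. The dimension names follow the lists above; the D1-D8 ordering is inferred from the labels used later in this article, and the example scores are invented for illustration:

```python
# Minimal sketch of a PRISM profile: eight dimensions, each scored 0-100.
# The D1-D8 -> name mapping is an assumption inferred from this article.
DIMENSIONS = {
    "D1": "Strategic scope",              # CTO lens
    "D2": "Technical architecture",       # CTO lens
    "D3": "Delivery and execution",       # CTO lens
    "D4": "Organizational capability",    # CTO lens
    "D5": "Integration and third-party",  # CISO lens
    "D6": "Cybersecurity",                # CISO lens
    "D7": "Compliance and regulatory",    # CISO lens
    "D8": "Data and AI governance",       # CISO lens
}

def lens_scores(profile: dict[str, int]) -> dict[str, float]:
    """Average the 0-100 dimension scores into the two governance lenses."""
    cto = [profile[d] for d in ("D1", "D2", "D3", "D4")]
    ciso = [profile[d] for d in ("D5", "D6", "D7", "D8")]
    return {"CTO": sum(cto) / 4, "CISO": sum(ciso) / 4}

# Illustrative profile for a hypothetical program
profile = {"D1": 40, "D2": 70, "D3": 55, "D4": 30,
           "D5": 72, "D6": 65, "D7": 45, "D8": 80}
print(lens_scores(profile))  # {'CTO': 48.75, 'CISO': 65.5}
```

The rollup is deliberately simple: the point of the two-lens view is that the CTO and CISO each see their half of the same profile, not two separate reports.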
A program that delivers on time but ships insecure architecture has failed. A program that is secure but cannot deliver has also failed. PRISM governs and provides visibility into both.

The theoretical grounding is not decorative. In 1948, Claude Shannon articulated the foundational relationship between information and uncertainty. In his framing, a message has value in direct proportion to how much it narrows the space of what the receiver does not know. A program with no structured risk assessment has maximum entropy across all eight dimensions.

Nobody can predict which outcomes will materialize. A completed PRISM profile is an entropy-reduction operation. Structured observation converts noise into dimensional risk intelligence. The spider chart is the entropy map of your program. Score 0 is minimum entropy, controlled and predictable. Score 100 is maximum entropy, uncontrolled and unknowable.
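The entropy-reduction claim can be made concrete with a toy calculation, using Shannon's formula H = -Σ p·log₂(p). The probability numbers here are invented purely to illustrate the mechanic, not real assessment outputs:

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before assessment: no basis for weighting any of 8 possible risk
# outcomes over another (uniform distribution = maximum entropy, 3 bits).
before = [1 / 8] * 8

# After a structured PRISM pass: observation concentrates belief on a few
# outcomes. These probabilities are illustrative assumptions.
after = [0.6, 0.2, 0.1, 0.05, 0.03, 0.01, 0.005, 0.005]

print(entropy(before))  # 3.0 bits
print(entropy(after))   # ~1.75 bits - structured observation cut uncertainty
```

The absolute numbers are not the point; the direction is. Any assessment that genuinely narrows the space of what governance does not know shows up as a drop in entropy.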
PRISM in the OODA strategy cycle
The OODA loop should run at every altitude in an enterprise. Most organizations can observe. Some can decide. The place where they fall short is orientation: converting scattered signals into a legible situational picture. That is where PRISM operates.

In ForesightOps terms, PRISM anchors the Orient phase. It converts distributed observations into structured risk intelligence that makes the entire OODA loop work. The CTO and CISO are deciding on the same profile, at the same time, with the same frame of reference. That is not a small thing. It is the thing that was missing.
What shared language actually changes
When you give a cross-functional team a common framework for talking about risk, something shifts that is hard to quantify but easy to recognize. Risk stops being a document someone maintains and becomes a conversation everyone participates in. The security architect and the PM are suddenly describing the same program in the same terms. Not because they have been told to, but because they finally can.
That shared language creates a more sensitive sensing network. When a delivery lead, a security architect, and a compliance officer are all using the same dimensional vocabulary, the organization begins to detect weak signals it would previously have missed. A finding that used to sit in one person's tracker now surfaces in a shared profile. A concern that would have been raised too late is raised in time to act.
The same language that surfaces threats also helps teams see where they are being unnecessarily cautious. Risk has a positive side too. An organization that can name its risks precisely can also name its risk appetite precisely. That is a strategic capability most enterprises do not have, and it changes how investment decisions are made.
Reading the shape of risk
The PRISM spider chart is the governance artifact. Eight colored axes, each in its dimension color. The shape of the polygon reveals which lens is driving risk.

A spike in the CTO hemisphere means scope, architecture, or delivery are the binding constraint. A spike in the CISO hemisphere means the program can be built but will ship insecure, non-compliant, or poorly governed. A contracted polygon means both lenses are controlled. The shape tells the story.
From intuition to confidence
Research on how people interpret verbal probability language shows the scale of the problem. When someone says a risk is "likely," the receiver's interpretation can span 40 to 50 percentage points. The sender thinks they have communicated something precise. The receiver thinks they have received something precise. Neither is right. Both leave the room confident they are aligned, and they are not.
A dimensional scoring model changes the conversation. Instead of "we have some concerns about the vendor dependency," the team says: D5 is at 72, the API contract is unsigned, and we have no fallback. That is a different conversation. It leads to a different decision.

Leaders stop asking "are we comfortable with this?" and start asking "what is our confidence level and what would need to change to improve it?" That shift matters enormously when risk conversations have to connect to financial outcomes. And they always eventually do. A program that carries unresolved high-dimensional risk is not just a delivery concern. It is a financial liability, a regulatory exposure, and in some cases a reputational one.
The goal is not to eliminate risk. It is to be intentional about which risks you carry, at what level, and why.
The portfolio view

At the program level, PRISM makes risk legible. At the portfolio level, it makes governance possible. The heat map below shows seven enterprise technology programs scored across all eight dimensions. Green cells are controlled. Amber cells are managed with gaps. Red cells are governance escalations. A CTO and CISO looking at this table together see the same picture for the first time.

Digital Identity (D2+D6 at 70) and Data Platform (D1+D2 at 70) require CTO and CISO joint attention immediately. D6 at 70 or above is a CISO production block. That is not a recommendation. It is a governance gate.
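A sketch of that governance gate as code: the D6-at-70 production block comes from the article; the red escalation cutoff of 70 is consistent with it, while treating every dimension at or above that cutoff as an escalation is an illustrative assumption, as are the Digital Identity scores other than D2 and D6:

```python
RED = 70  # escalation cutoff; D6 >= 70 is additionally a production block

def governance_status(name: str, profile: dict[str, int]) -> list[str]:
    """Return the governance actions a program's PRISM profile triggers."""
    actions = []
    for dim, score in sorted(profile.items()):
        if score >= RED:
            actions.append(f"{name}: {dim} at {score} -> governance escalation")
    if profile.get("D6", 0) >= RED:
        actions.append(f"{name}: D6 at {profile['D6']} -> CISO production block")
    return actions

# Hypothetical scores; D2 and D6 at 70 match the Digital Identity example.
digital_identity = {"D1": 35, "D2": 70, "D3": 50, "D4": 30,
                    "D5": 45, "D6": 70, "D7": 55, "D8": 40}
for action in governance_status("Digital Identity", digital_identity):
    print(action)
```

The value of encoding the gate is that it is not a recommendation anyone can soften in a status meeting: a red cell is an escalation by construction, and a D6 breach blocks production regardless of how green the rest of the row looks.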
A more intentional risk posture
The most valuable outcome of a shared risk language is not better reporting. It is better judgment, distributed more widely across the organization. When the people closest to the work are using the same vocabulary as the people making the investment decisions, the quality of governance improves. Not because of the framework itself, but because of the shared understanding it creates.

An organization with that shared understanding can make a genuinely informed choice about its risk posture. It can decide to carry a high D2 score on a technically ambitious program because the strategic upside justifies it. It can decide to pause a program when D6 hits a threshold because the regulatory exposure is not worth it. It can look at D8 at 80 on a program where customer PII flows through LLM context windows and say: this is an existential risk, not a backlog item.
These are not algorithmic decisions. They are human decisions, made better by clearer information and a common frame.
That is ultimately what PRISM is for. Not to automate governance or replace judgment, but to make the conversation more honest, more precise, and more shared. To turn a visibility gap into a shared line of sight. To give the PM and the CISO something they have rarely had: a reason to be in the same room, looking at the same picture, at the same time.
The models that can find your vulnerabilities are most likely already here. The regulations that require you to govern your AI systems are either already law or fast becoming it. PRISM is a good place to start leveling up your enterprise delivery risk management capability before Skynet takes over.
Will Evans
Founder and Chief Strategy Officer, Fugue Strategy Advisors
Creator, ForesightOps | Creator, PRISM Enterprise Risk Framework
30 years in strategy design, strategic foresight, and innovation
PRISM: Portfolio and Program Risk Intelligence, Scoring and Monitoring Grounded in NIST CSF 2.0 | ISO 27001/42001 | OWASP LLM Top 10 | EU AI Act | FAIR | SEI Mosaic
See the full spectrum of risk.



