The Unfolding: How AI's Seventy-Five-Year Pattern Reveals What Comes Next
- William Haas Evans
- Sep 10, 2025
- 18 min read
Updated: Dec 7, 2025

"The future is already here, it's just not evenly distributed." — William Gibson
[When my friend Hayden Trepeck asked me to moderate the "AI in Action" panel, bringing together experts in AI investment, strategy, implementation, and the future of work, I was excited by the opportunity to move beyond the hype and the fear. To create space for the kind of informed dialogue that is much needed, and sadly missing. To bridge the gap between those building AI and those living with its consequences. In that vein, I'm sharing a bit of historical context, starting with my personal journey as an early adopter of these emergent technologies forty years ago, and introducing some frameworks and perspectives which I hope some might find useful in framing the discussion. - Will]
Make Your Micro Think
I was thirteen in 1984. At a computer expo outside Boston. My best friend's dad brought us there one Saturday afternoon. Row after row of vendors showing the latest technology in micro-computers. Modems connecting at 300 baud. Dot matrix printers. Floppy disk drives. Big ones.
There, at one booth, I saw a book: Artificial Intelligence on the Commodore 64: Make Your Micro Think by Keith and Steven Brain.

Artificial Intelligence.
I didn't fully grasp what that meant, but it sounded like the future. The book promised to teach how to make my computer think. Build programs that could have conversations. Solve puzzles. Create worlds. I had $20 in my pocket.
I bought it.
I spent the next six months on and off typing hundreds of lines of BASIC code into my Commodore 64. DATA statements. IF-THEN logic. GOTO loops. Trying to teach the computer to understand a simple sentence. The book itself was honest about the limitations: "At this point, our computer has displayed only slightly more intelligence than a parrot."
That was generous.
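If you never typed in one of those listings, here is roughly what the book was teaching, sketched in modern Python rather than Commodore BASIC. The keywords and replies are my own stand-ins, not the Brains' actual code, but the logic, a lookup table plus a cascade of IF-THENs, is faithful to the era.

```python
# A minimal sketch of 1984-style "AI": keyword matching plus canned replies.
# The table below stands in for the book's BASIC DATA statements; the keywords
# and responses are illustrative stand-ins, not the Brains' original program.
import re

RESPONSES = {
    "hello": "GREETINGS, HUMAN.",
    "name": "I AM A COMMODORE 64. WHO ARE YOU?",
    "weather": "I HAVE NO WINDOWS. LITERALLY.",
}

def reply(sentence: str) -> str:
    words = re.findall(r"[a-z]+", sentence.lower())   # crude tokenization
    for keyword, canned in RESPONSES.items():          # the IF-THEN cascade
        if keyword in words:
            return canned
    return "I DO NOT UNDERSTAND."                      # the inevitable fallback

print(reply("Hello there"))              # -> GREETINGS, HUMAN.
print(reply("What is your name?"))       # -> I AM A COMMODORE 64. WHO ARE YOU?
print(reply("Explain quantum physics"))  # -> I DO NOT UNDERSTAND.
```

The parrot comparison was generous for a reason: there is no understanding here, only lookup.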
But I learned something that stayed with me for forty years. Making computers truly intelligent, or even appear intelligent, requires extraordinary effort and energy. Massive amounts of structured information. More computing power than a kid with 64 kilobytes of memory had any right to think about.
The ideas were there. Pattern matching. Natural language processing. Decision trees.
But the infrastructure wasn't. How did we get here?
Some History
In 1950, Alan Turing, who developed a statistical technique called sequential analysis that helped break the German Enigma machine and helped the Allies defeat the Nazis, asked a question that seemed simple: Can machines think?
He proposed a test. If you can't tell whether you're talking to a human or a machine, does the distinction matter?
Seventy-five years later, the Turing Test has moved from philosophical speculation to critical uncertainty as we grapple with the rapid diffusion of GenAI tools. But the path from Turing's question to ChatGPT's answer wasn't straight. It zigzagged through revelations and disappointments, summers and winters, breakthroughs and brick walls. I think understanding that pattern, not just the technology but the rhythm of how AI unfolded across our cultural landscape, matters if you want to navigate what comes next.
AI isn't just another tool. It's what economist Carlota Perez would call a technological revolution. Something that transforms not just how we work but how we organize society, where power sits, what we value.

Those transformations never happen smoothly.
Systems scientist Donella Meadows argued that complex systems resist change precisely where change matters most. At the level of paradigms. Mindsets. Goals. Taboos. Tacit assumptions. New AI technologies are forcing us to confront all these simultaneously.
First Summer: The Promissory Note
Summer, 1956. Dartmouth College. A group of researchers coined the term "artificial intelligence" and made wildly optimistic predictions. Herbert Simon declared that within a decade, machines would be chess champions, prove mathematical theorems, compose music. They believed intelligence was symbol manipulation. Human thinking could be reduced to rules and logic. Creating artificial minds was primarily an engineering challenge.
They weren't entirely wrong. Usefully wrong.
Early AI achieved remarkable things. Programs proved geometric theorems. Played checkers at expert level. Understood limited natural language. These weren't parlor tricks. They were genuine breakthroughs suggesting intelligence could be formalized. That the human mind was just sophisticated software running on biological hardware.
But they vastly underestimated the problem's complexity.
Common sense reasoning, the kind a five-year-old does (almost) effortlessly, proved impossibly difficult to simulate. Natural language was a maze of ambiguity and context that resisted every attempt to pin it down with rules. Vision, which humans do unconsciously, turned out to require computational resources nobody imagined. (Watch: "ChatGPT doesn't understand anything" with Michael Wooldridge.)
Herbert Simon himself recognized this through what he defined as "bounded rationality," which won him the Nobel Prize in Economics. Humans aren't optimization engines. We don't optimize. We "satisfice," that is, we find solutions good enough given limited time, information, and cognitive resources. Our intelligence isn't about exhaustive search through all possibilities.
It's about heuristics. Rules of thumb. Shortcuts. Pattern recognition developed through embodied experience. Simon saw that intelligence was computational, but he also understood it was constrained, contextual, pragmatic. Smart, yes. But always bounded.
The philosopher Hubert Dreyfus from Berkeley took this further.
In his 1972 book What Computers Can't Do, he argued that human intelligence isn't rule-following at all. Don McLean's "American Pie" topped the Billboard charts that year, and drawing on Martin Heidegger's notion of Dasein, Dreyfus argued intelligence requires being-in-the-world. Embodied existence. Lived experience. Practical engagement with reality. Good old boys drinking whiskey and rye.
You can't become an expert by learning rules. If you memorized the entire 2025 Official Rules of Major League Baseball, you could not pitch a shutout against the Red Sox. You become an expert by living through thousands of situations until your body knows what to do before your mind can articulate why. The master craftsman doesn't follow rules. She responds to the wood, the grain, the resistance of the material. That's skillful coping. Not computation.
Intelligence isn't in your head. It's in how your whole being responds to a world you're already part of. Computers process information about the world. Humans dwell in the world. What Heidegger called Dasein.
And you can't program being-in-the-world, though Zuck has blown close to $46 billion trying.
This first wave taught us something we still haven't fully internalized. Intelligence is harder than it looks. What appears simple to us, like recognizing a face, understanding the humor of Larry David, or making a cup of coffee, is computationally extraordinary. What appears hard to us, like playing chess or proving mathematical theorems, is comparatively easy for machines.
We got it backwards. We're still learning to think about intelligence differently because of it.
The First Winter: When the Money Ran Out
By the mid-1970s, the promises hadn't materialized.
Expert systems, programs encoded with specialist knowledge and complex rule sets, worked in narrow domains but couldn't handle exceptions and uncertainty. Couldn't adapt to new situations. Machine translation produced gibberish that became memes before Richard Dawkins coined the term. The British and American governments, frustrated with lack of progress and broken promises, cut funding dramatically.
This was AI's first winter of discontent.
The field contracted. Researchers quietly moved to other areas. "Artificial intelligence" became a term you avoided in grant applications if you wanted funding. The optimism of Dartmouth had curdled into something approaching existential dread. The recognition that maybe Dreyfus was right. Maybe this was fundamentally impossible. Maybe intelligence required something machines couldn't have.
What business leaders should have learned is that hype without delivery destroys credibility. It would take decades to rebuild trust.
Sound familiar?
Every technology wave includes a hype cycle. The question is always whether you're building through the trough or getting washed out with the tide. From a systems perspective, overshoot and collapse are natural responses when feedback loops are broken or ignored. When goals mismatch reality. When we mistake the map for the territory. Welcome to the desert of the real, as Morpheus says.
Second Summer: Expert Systems and Another Crash
AI roared back in the 1980s with new and more advanced expert systems. Programs that encoded human expertise as rules. Companies like Digital Equipment Corporation saved millions with systems that configured computer orders. Japan launched the ambitious Fifth Generation computing project. Betting billions on AI becoming the foundation of their economic future.
For a few years, it looked like AI had finally arrived. Companies created entire departments around expert systems. Specialized hardware called Lisp machines sold for hundreds of thousands of dollars. Venture capital poured in.
Our engineered future, once again, looked inevitable.

By 1987, the limitations of expert systems became painfully clear. Bon Jovi's "Livin' on a Prayer" had topped the charts that February; they were clairvoyant. Wall Street crashed on Black Monday, October 19, 1987.
Expert systems were brittle. Couldn't learn from experience. Couldn't handle ambiguity. Required constant manual updating by expensive human experts. They embodied what Simon called "bounded rationality" in the worst way. Computational power but no common sense. Formal rules but no intuitive grasp of context.
The specialized Lisp machines became obsolete almost overnight as micro-computers and PCs got faster and cheaper.

The market crashed. Companies that had bet heavily on AI went bankrupt.
AI winter returned, colder and longer than before.
The pattern matters. The technology worked in controlled environments but couldn't scale to the messiness of the real world. Every AI wave has followed this rhythm. Laboratory success doesn't guarantee market success. Implementation is always harder than invention. The gap between "it works in the demo" and "it works in production" swallows companies whole.
This is a systems problem. You can't change one component, for instance adding AI (by any definition of the term), without considering how it interacts with the existing system. People. Processes. Power. Culture. Incentives.
Late 1990s: My Own Encounter with the Second AI Winter
Fast forward to 1998. I got a position working for a 110-person technology firm called Curl Corporation at 400 Technology Square in Cambridge, Massachusetts. Initially a DARPA-funded startup spun out of MIT's Laboratory for Computer Science. Our vision was ambitious: the executable web. We even had Tim Berners-Lee on our board.
Curl was designing a programming language derived from LISP. Homoiconic, functional, elegant. It would unify everything fragmented about web development. HTML for markup. JavaScript for scripting. Java for computation. Flash for graphics. Why couldn't it all be one language? One seamless platform to rule them all?
The founders included brilliant MIT researchers. Steve Ward. Mat Hostetter. Dick Kranz. People who understood computer science at a deep level. And they were trying, once again, to build intelligent systems. Expert systems for financial applications. Dynamic agents that could reason about complex domains. Does this sound at all familiar? Right. Rich internet experiences that would make the web feel like desktop software delivered over a modem.
The Curl language was genuinely novel. The just-in-time compiler generated native machine code on the client. The development environment was written in Curl itself. A beautiful ouroboros we preferred to call "drinking our own champagne." We had a few partners and customers. Real use cases. Financial services firms deploying our technology.
But we were fighting the same battle the 1980s expert systems fought.
Adoption friction. Learning curves. The "good enough" problem. Existing tools were kludgy but familiar. Our solution was elegant but required everyone to learn something new, install a browser plugin, rethink their entire development workflow. We were asking the world to change paradigms when it was perfectly content tweaking parameters.

Then in March 2000, the NASDAQ peaked. The dot-com bubble began its collapse, and the music stopped.
By 2001, when we shipped our first commercial product, the money was gone. The bubble had popped. Startups were failing by the hundreds. Even brilliant technology couldn't survive without capital. I left Curl in 2000, right as the crash was beginning. The company limped on for a few more years before being acquired by a Japanese firm. The technology still exists today, maintained by SCSK Corporation, used primarily in Japan and Korea. But it never achieved the transformation envisioned.
What did I learn?
That technological elegance doesn't guarantee adoption. That timing matters as much as innovation. That you can have the right ideas and still fail because the ecosystem isn't ready. Something called the Overton Window. That AI winters don't just happen in AI. They happen whenever we overestimate how quickly ecosystems can adapt to new paradigms.
And I learned something else. Something important about what Alicia Juarrero would call context-free and context-sensitive constraints. You can't change the system by building better technology alone. You have to change the constraints that govern how the system operates. Or you have to wait until those constraints shift on their own. Sometimes, you have to shift those constraints by nudging Congress.
Sometimes the most innovative solution is too early. The system isn't ready. The leverage points haven't moved, or the incentives are wrong.
The Long Winter: Slow Progress, Hidden Work
For twenty-five years, basically from 1987 to 2012, AI became deeply unfashionable.
Researchers stopped using the term. They called their work machine learning. Neural networks. Statistical pattern recognition. Anything but artificial intelligence. Funding was scarce. The media moved on to other technological enthusiasms.
To the outside world, AI was dead.
But crucial work continued in the shadows. This is where the idea of leverage points becomes essential. While everyone focused on the visible failures, those shallow leverage points like funding levels and company outcomes, the patient researchers were working at deeper leverage points.
Changing the frame itself.
The internet was creating massive datasets. We were all contributors to the creation of data. We gave it up willingly. Billions of images. Trillions of words. This turned out to be the missing ingredient that earlier AI lacked. Moore's Law kept delivering exponential increases in computing power. GPUs, the graphics processing units originally designed for video gaming, turned out to be accidentally perfect for training neural networks in parallel. Serendipity.
Researchers refined neural network training methods. Backpropagation. Deep learning architectures with many layers.
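For the curious, here is a toy sketch of what "backpropagation" actually does: a tiny two-layer network learning XOR by nudging its weights along the gradient. It's a didactic example of my own, written in Python with NumPy, not anyone's research code; the 2012 breakthroughs did essentially this at vastly greater scale.

```python
# Toy illustration of backpropagation: a two-layer network learning XOR.
# A didactic sketch only, not production code or any lab's actual method.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network prediction

    # Backward pass: the chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```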
Quietly, less glamorous applications of machine learning started working and making money. Spam filters. Recommendation engines. Dating apps. Fraud detection. These didn't make headlines, but they proved machine learning's commercial value. They built the infrastructure for what came next.
This period teaches us something crucial about technological development. The most important work often happens when nobody's paying attention.
While the media had moved on, patient researchers were solving fundamental problems at deep leverage points. Not tweaking parameters but changing the fundamental architectures of how machines "learn." Those who kept working through the winter, who believed in the vision while trying and failing over and over again, positioned themselves perfectly for the next summer.
And they were learning to work within the constraints of real systems rather than against them. Using tools that already existed like GPUs, the internet, open-source frameworks. Building on infrastructure that was already deployed. Making the gentle slope from existing practice to new capability smooth enough that people could actually climb it.
The Breakthrough: ImageNet and the Deep Learning Revolution
September 30, 2012.
A team led by Geoffrey Hinton entered a computer vision competition called ImageNet with a deep neural network they called AlexNet. They didn't just win. They crushed the competition by a margin nobody thought possible. Cut the error rate nearly in half compared to traditional computer vision methods.
Within five years, the entire landscape transformed.
Computer vision surpassed human performance at image recognition. AlphaGo defeated world Go champion Lee Sedol at a game considered too complex for computers because of its vast possibility space. Deep learning revolutionized speech recognition. Machine translation. Drug discovery. Every major tech company pivoted to AI-first strategies.
That Commodore 64 I was programming in 1984? The phone in your pocket has literally a million times more computing power.
But notice what changed. Not the underlying theory, because neural networks dated back to the 1950s, to the perceptron work Frank Rosenblatt and others had pioneered. What changed was the convergence of three enablers. Massive data from the internet and social media age. Massive compute from GPU clusters. Better training techniques for deep architectures.
The ideas had been there for decades.
The infrastructure finally caught up.
Simon's bounded rationality was overcome not by making humans more rational, but by removing computational bounds. Giving machines enough data and processing power that statistical learning finally worked.
The business impact is profound.
Companies with massive data and compute resources gained enormous advantages. Google. Amazon. Microsoft. Facebook. Oracle. They became AI powerhouses almost overnight, not because they were smarter or more visionary, but because they had the resources AI required at scale. This creates the winner-take-most dynamics we're living with right now. Where a handful of companies control the infrastructure, the models, the weights, and the datasets everyone else depends on.
This is Carlota Perez's Installation Period in full force. Massive infrastructure investment. Concentration of resources. Market speculation and bubble dynamics. Winner-take-most markets. The air is turbulent and we can smell the frenzy as everybody changes their LinkedIn titles.
The GenAI Cambrian Explosion: LLMs and the Current AI Race
In 2017, Google researchers published a paper with the understated title "Attention Is All You Need," introducing the Transformer architecture.
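The core mechanism in that paper, scaled dot-product attention, is surprisingly compact. Here is a minimal NumPy sketch of the published formula, softmax(QK^T / sqrt(d)) V, with the multi-head machinery and everything else a real Transformer needs stripped away.

```python
# Minimal sketch of scaled dot-product attention from "Attention Is All You Need":
# every position mixes in information from every other position, weighted by similarity.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                  # weighted blend of the values

# Example: a "sentence" of 4 tokens, each an 8-dimensional vector (self-attention).
rng = np.random.default_rng(1)
tokens = rng.normal(size=(4, 8))
print(attention(tokens, tokens, tokens).shape)          # (4, 8): same shape, now context-aware
```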
Within five years, this architecture transformed AI again.
BERT from Google revolutionized language understanding in 2018. GPT-2 from OpenAI generated eerily human-like text in 2019. GPT-3 scaled to 175 billion parameters in 2020. Capable of writing coherent articles. Generating code. Reasoning from just a few examples.
Then in November 2022, OpenAI released ChatGPT to the public.
It reached 100 million users in two months. The fastest consumer technology adoption in history.
Suddenly AI wasn't just recognizing patterns. It was creating stuff. Writing articles that read naturally. Generating code that actually "worked." Producing images from text descriptions. Composing music. The line between tool and collaborator blurred in ways that made people genuinely uncomfortable.
By 2023, we had GPT-4, Claude, Gemini. Multimodal models that could see, read, analyze, create across different types of content seamlessly. The technology had gone from specialist tool to consumer infrastructure in what felt like an eyeblink.
And the race was on.
From Carlota Perez's perspective, we've been in the classic Installation Period frenzy since late 2022. Venture capital flooded in. Over $40 billion in AI investments in 2023 alone. Every company added "AI" to their pitch deck. Microsoft invested $13 billion in OpenAI. Google rushed out Bard, then Gemini. Anthropic raised billions. Inflection. Character.AI. Cohere. Mistral. Dozens of foundation model companies emerged, each promising the next breakthrough.
The stock market rewarded AI exposure. NVIDIA's market cap soared past $4.3 trillion on demand for AI chips, financial engineering of questionable repute notwithstanding.
But we're also seeing the classic signs of Installation Period excess.
Ninety percent of enterprise AI projects fail to reach production. Companies spend millions on AI initiatives without clear ROI. "AI washing" is rampant. Companies claiming AI capabilities they don't have. The Big Four consulting firms all became AI experts overnight. The media oscillates between breathless hype and existential doom, rarely finding nuanced middle ground. Regulators scramble to catch up. Proposing frameworks before they understand what they're regulating.
The economic impact has been ambiguous and uneven. Exactly as William Gibson predicted. A handful of companies and individuals have captured extraordinary value. The creators of AI models which are quickly becoming commodities. The owners of compute infrastructure. The holders of proprietary data.
Meanwhile, knowledge workers face uncertainty about their jobs. Creative professionals watch AI systems trained on their work compete with them. Communities without access to these tools fall further behind.
The media landscape has transformed in ways both exhilarating and deeply troubling.
AI can generate news articles, what some call "newslop," but it can also generate misinformation at scale. Deepfakes are trivial to create. Trust in what we see and read is eroding. The platforms that mediate our reality are deploying AI to maximize engagement, often at the cost of social cohesion.
We're discovering that AI doesn't just change what we can do. It changes what's real. What's true. What we can believe.
The Agentic Phase and the Turning Point We're In
We're now entering what vendors and hype-men are calling the Agentic Phase.
This is software that doesn't just respond to questions but acts autonomously to complete complex tasks. Systems that can schedule meetings by checking multiple calendars. Research topics across dozens of sources. Write comprehensive reports. Manage entire workflows without constant human supervision.
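In code terms, the difference between a chatbot and an agent is, roughly, a loop with tools in it. Here is a deliberately simplified Python sketch; call_model and run_tool are hypothetical stand-ins of my own (a scripted fake and a toy calendar), not any vendor's actual API, and real agent frameworks add planning, memory, and guardrails.

```python
# Deliberately simplified sketch of an agentic loop. call_model() and run_tool()
# are hypothetical stand-ins, not a real vendor API.

def call_model(history):
    """Fake 'LLM': asks for one calendar lookup, then declares the task done."""
    if any(msg["role"] == "tool" for msg in history):
        return {"final_answer": "Meeting booked for the first free slot."}
    return {"tool": "check_calendar", "args": {"day": "Tuesday"}}

def run_tool(name, args):
    """Toy tool dispatcher standing in for calendars, search, report writers..."""
    if name == "check_calendar":
        return f"{args['day']}: free from 2pm."
    return "unknown tool"

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):                      # bounded autonomy, not an open loop
        decision = call_model(history)
        if "final_answer" in decision:              # the model says the task is done
            return decision["final_answer"]
        observation = run_tool(decision["tool"], decision["args"])
        history.append({"role": "tool", "content": observation})  # feed the result back
    return "Stopped: step budget exhausted, handing back to a human."

print(run_agent("Schedule a meeting with the panelists"))
```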
Salesforce's Agentforce. Microsoft's Copilot Studio. OpenAI's GPT-4 with plugins. These represent AI moving from assistant to agent. From tool to workflow orchestrator and collaborator. This explains recent moves like OpenAI launching a consulting business to lock in workflows while also generating new cash flow from its new porn offering. I wish I were making this up.
We're seeing the emergence of specialized vertical AI for specific professions. Legal AI that understands case law. Medical AI that can interpret scans. Financial AI that can model risk. These aren't the artificial general intelligence the Dartmouth researchers dreamed of. But they're expert systems that actually work. Built on foundation language models with better language understanding. They've overcome much of the brittleness that doomed 1980s expert systems by learning from vast amounts of data rather than relying solely on hand-coded rules.
We're also seeing truly multimodal systems that seamlessly combine text, image, video, code, and data. We're seeing human-AI collaboration becoming the new normal. Not humans or AI, but humans with AI. Where the question isn't whether to use it but how to use it well. The number one song on the country charts last week was "Walk My Walk" by Breaking Rust. A completely AI-generated song. This is a harbinger of the challenging questions that lie ahead. Questions about creativity, intellectual property, originality, artistry.
From a strategic foresight perspective, we're at what Carlota Perez would call the Turning Point.

The infrastructure is built. The models exist and have rapidly been commodified. The bubble is inflating. Ninety percent of AI projects are failing not because the technology doesn't work but because organizations don't know what classes of problems to solve, or which business processes and constraints need to be completely reimagined.
What follows this turning point is the Deployment Period. Where AI becomes invisible infrastructure and the real transformation accelerates dramatically. Where the technology stops being novel and starts reshaping how societies function. Where every industry faces its "Uber moment." Where jobs don't disappear but fundamentally reconfigure. This is exactly when we'll regret not addressing AI from a policy and regulatory perspective 10 years earlier. This is when the consequences of unbridled and unregulated innovation start to show.
Critical Concerns: The Dark Side of the Installation Period
We need to be clear-eyed about the Installation Period we're in.
History shows these periods create winners and losers on scales that reshape communities. Creative destruction is still destruction. Perez's research across five technological revolutions reveals consistent patterns we're seeing again.
Economically, Installation Periods concentrate wealth.
The owners of infrastructure, in our case compute, data, and models, will capture extraordinary value. The gap between AI-augmented professionals and those without access widens rapidly. Small and mid-sized businesses will struggle to compete with giants that can deploy AI at scale. Labor's share of productivity gains shrinks as capital captures more.
We're complicit in the building of an economy where the returns will go to those who own the platforms, not those who work on them and tend to them.
Socially, Installation Periods fragment communities.
When the future is unevenly distributed, as Gibson observed, those with access pull away from those without. Many of our institutions, schools, governments, healthcare systems, were already struggling to adapt.
Trust erodes when you can't tell real from synthetic. When your neighbor's livelihood is threatened by automation while your own is augmented. The social fabric that depends on shared reality frays when reality itself becomes contested.
Politically, Installation Periods create power vacuums and dangerous instabilities.
The concentration of AI capabilities in a handful of companies and countries raises questions about sovereignty, security, control. Authoritarian regimes use AI for surveillance and social control at scales previously impossible. Democratic institutions built on deliberation and slow consensus struggle against AI-accelerated disinformation and manipulation.
The question of who controls AI, and to what ends, remains dangerously unresolved.
Culturally, we're being reshaped by our tools even as we shape them.
Marshall McLuhan understood this. We make our tools, and our tools make us. AI isn't neutral infrastructure. It embeds values, biases, worldviews in its training data and optimization functions. When AI mediates our writing, our research, our creative expression, our decision-making, it doesn't just augment us.
It changes what we can imagine. What we value. What we become.
The bounded rationality Herbert Simon identified in humans is now bounded differently. Extended or augmented in some dimensions. Constrained in others by the AI systems we increasingly depend on.
We're not just implementing new technology. We're deciding what intelligence means. What work means. What human flourishing means in an age where machines can do much of what we previously thought made us special.
These aren't technical questions.
They're questions about what kind of society we want to build. What values we want to embed in our systems. What futures we want to make possible.
The Unfolding Gestalt: What We're Really Watching
AI's development isn't linear. It's dialectical.
Each wave reveals new possibilities and new limitations. Each summer creates the conditions for the next winter. Each breakthrough shows us how much further we have to go.
The pattern across seventy-five years goes something like this.
In the 1950s through 1970s, we thought intelligence was symbol manipulation. We were wrong. But that wrongness was productive. It taught us what intelligence isn't.
In the 1980s through 2000s, we thought intelligence was statistical pattern recognition combined with expert rules. We were still wrong, but usefully so.
In the 2010s through 2020s, we're learning that intelligence emerges from scale. Massive data. Massive compute. Clever architectures that learn statistical patterns across vast corpora.
That's sorta true. But insufficient.
There's something more.
What's coming in the 2030s and beyond, we don't fully know yet. Probably something that combines computation with embodiment. Context. Intentionality. All the things that Heideggerian Hubert Dreyfus said mattered back in 1972. All the things complexity theorists like Donella Meadows and Ralph Stacey understood about how living systems actually work.
The things that make human intelligence more than just pattern matching on a massive scale.
The things that emerge from being in the world, not just processing information about it.
What's unfolding and diffusing across our cultural gestalt isn't just technological capability. It's a conversation that raises a lot of questions. A conversation about what intelligence is. What humans are. What work means. What we value. How we should organize society.
Shaping the Future: The Choice Before Us
AI technologies are incredibly powerful, but neither wise nor moral. That's our job. That's always been our job.
William Gibson was right. The future is already here. It's just not evenly distributed.
Our work now is to shape how it gets diffused. To ensure the Deployment Period that follows this chaotic Installation Period creates flourishing rather than just efficiency. Enables human potential rather than just extracting ever more value from ever more limited resources.
Most people don't understand what AI is. What it can do. What it can't do. What trade-offs we're making. What alternatives exist. They're being asked to accept a future they can't see clearly. To consent to transformations they don't fully comprehend.
That's not consent. That's just acquiescence.
We need AI Literacy desperately. Not programming skills. Not technical expertise. But the ability to understand and interpret AI's capabilities and limitations. To recognize when you're interacting with AI systems. To evaluate claims about what AI can do. To understand the trade-offs between automation and human judgment. To recognize bias and manipulation. To know when AI is appropriate and when it's dangerous.
We need futures literacy. The capacity to imagine different possible futures rather than accepting the one being sold to us. The ability to ask "what if?" and "why not?" rather than just "how?" or "how much?" The skill to distinguish between what's inevitable and what's chosen. The courage to challenge narratives about where technology is "naturally" heading.
Futures literacy means understanding that the future isn't singular. It's plural. There are many possible futures. Which one we get depends on choices we make now. We are planting the seeds of the future with the investments and choices we make today.
The "AI in Action" panel is meant to spur the conversations and stimulate an interest in collaborate learning. The conversation begins at Temple Beth El, but here it goes depends on all of us. I'll publish the complete panel structure, agenda, questions, and activities for those interested in learning more, and you can email me: will@fuguestrategy.com



