An invitation
You've been nominated.
You were nominated by someone already in the room. Practitioners, not analysts — trading thirty minutes per quarter for benchmarks, four proprietary indices, and an off-the-record read on how enterprises are actually deploying AI.
The data syllabus
Come prepared with.
Thirty minutes on how your enterprise is actually deploying AI. Before you start, gather:
Line items across foundation models, compute, data, governance, software engineering, and systems integrators (SIs).
What's in production, what's in pilot, what's on the bench, and what's out of scope.
Primary picks across the six core categories — and the alternatives you're evaluating.
Your read on each vendor's reliability, quality, and business yield.
The room
Who's already in.
Practitioners at $1B+ enterprises with 2,000+ employees, deploying beyond pilots, with budget authority. No vendors in the data set.
Membership benefits
What members receive.
Your AI spend, vendor stack, and adoption posture, benchmarked against your peer group.
Within 24 hours: Your investments, vendors, and governance posture compared to enterprises at your scale.
Each quarter: Aggregated findings across six core categories and fifteen emerging technologies.
End of quarter: Four proprietary indices.
Research scope
What we research.
Six core categories with vendor ratings and roadmap status, fifteen emerging technologies tracked quarterly, and four proprietary indices.
The network
How members connect.
Beyond the data, members trade tactics with peers facing the same enterprise AI questions.
Topic-specific working sessions throughout the quarter where peers share what's working, what isn't, and what they wish they knew earlier. Members-only. Practitioner-led.
Intimate gatherings of 16–20 senior leaders, hosted in major hubs as the network grows. Chatham House Rule. No vendors, no press, no keynotes.
Thirty minutes per quarter. Every response anonymized. The methodology comes from 30 years of enterprise benchmarking at The InfoPro and S&P Global Market Intelligence.