The blueprint for AI GTM adoption: A three-level framework from Clay and Hg
By Chris Ross, Hg & Everett Berry, Clay
Co-authored by Clay and Hg. In this article, “we” refers to Clay and Hg jointly; when a point is specific to one party, we name that party explicitly.
We believe most AI pilots fail because organizations are trying to run before they walk. They sprint straight into complex automations but their momentum sputters because they haven't built the right foundations to support their projects. At Hg, we've watched this pattern repeat across the industry.
We have a strong thesis that AI is becoming a critical value creation lever in go-to-market. After rigorous evaluation, Hg decided to partner with Clay with the aim of bringing tailored solutions to our portfolio companies to transform their GTM operations. We chose this specific partnership because the technology solved the data foundation and orchestration challenges that cause 95% of AI pilots to fail. Over the past few months, we've systematically deployed the platform across several portfolio companies, with another dozen currently in proof-of-concept stages, learning exactly what it takes to turn GTM AI promise into revenue reality.
This collaboration revealed a critical insight from our perspective: success requires more than just deploying technology. The portfolio companies driving successful POCs all paired the platform’s AI orchestration capabilities with deep GTM engineering expertise. Those trying to implement without clear use cases and RevOps knowledge struggled, while those with the right operational talent saw immediate impact. This observation led us to develop this framework together, codifying the patterns that consistently deliver results.
The outcomes validate our thesis. POCs are demonstrating stronger data quality versus their existing single-provider approaches, lower cost per qualified opportunity through intelligent lead enrichment, and faster enterprise sales cycles using AI-powered deep account research. These aren't lucky breaks; they're the predictable result of following the three-level progression we have built and outlined here.
What follows is the blueprint for AI integration that we have been deploying throughout our portfolio companies. The framework has been refined through each implementation and designed to turn GTM AI investment into sustainable competitive advantage. As AI integration scales across your organization, individual productivity gains compound into sustainable competitive advantages. But without building a solid foundation first, you'll waste time and resources on projects that never get fully realized.
The AI pilot problem: Why 95% fail
The statistics are sobering: 95% of AI pilots in SaaS companies are failing to deliver meaningful business impact. Most end up condemned to the long line of experimental R&D projects that never drive material business results.
Some people have taken that to mean that AI integrations are borderline useless. We know that’s not the case because we’ve seen how automation can totally transform how individuals and organizations do business. Rather, the failure of AI programs is due to the approach that many companies take. We see a lot of companies consistently jump straight into complex, black-box automation solutions without building the foundational understanding and processes needed for sustainable AI adoption.
On the other hand, the most successful SaaS companies we've studied take a dramatically different approach. They understand that AI transformation requires building organizational capacity before they try to automate things or equip people with AI platforms. The difference between AI tools that deliver real ROI versus expensive "AI theater" comes down to having good data hygiene, low technical debt, and the discipline to define success metrics before they try to embrace AI.
That’s easier said than done. For SaaS companies, the adoption challenge is compounded by fragmented data architecture across the dozen or more tools in typical GTM stacks. One bad data stream can break the entire pilot. Your teams are managing sophisticated MarTech and SalesTech systems with disconnected data, dealing with increasingly complex buyer journeys, and under pressure to demonstrate clear ROI on every technology investment. It can be a mess to untangle, but putting in the work to get clean systems now will have massive payoffs later. That’s a non-negotiable for scaling AI: You need a good handle on your data in order to get the most out of automation.
The following framework has been stress tested across dozens of SaaS organizations, from high-growth startups to enterprise companies. These are companies where AI tools have flourished, that have avoided being another data point in the 95% failure rate. Each level in the framework builds on the previous one, creating a systematic progression from individual experimentation to organizational competitive advantage.
Before you get started
Before beginning any level of AI implementation, you need to evaluate your organizational readiness across three critical dimensions:
Data Quality: Can your systems provide clean, accessible data for AI processing? Fragmented data across disconnected tools will limit any AI initiative's effectiveness. Most of the time, failures at the initial stages come from poor data foundations rather than AI tool limitations.
Technical Infrastructure: Do you have the engineering resources and platform capabilities to support AI workflows? This includes API management, data pipeline maintenance, and system integration capabilities that become essential at Level 2.
Organizational Readiness: Does your leadership team understand the investment timeline and change management requirements? Each level requires sustained commitment and often significant process changes.
Organizations that skip this assessment frequently fail at implementation, not due to AI limitations but because of foundational gaps that become apparent only after significant investment.
Level 1: Individual AI access & literacy building
Flooding your organization with AI tools and hoping for the best is exactly how pilots fail. You want to get your teams comfortable with AI tools by putting them directly into the hands of the right individual users and training them on how to use these tools for their specific use cases. Engineers, SDRs, marketers, and project managers are all going to use the same tool in different ways, so they need to be taught how to get the most out of different platforms based on what they need.
This isn’t just about individual efficiency gains either. The insights and workflows discovered in Level 1 become the foundation for a more comprehensive, system-wide deployment in Level 2.
Implementation Strategy
First things first: get ChatGPT, Claude, or another conversational AI tool into the hands of sales reps, SDRs, content marketers, and executives and train them. Almost everyone knows how to talk to an LLM at a basic level now, but there's a huge amount of value in providing hands-on educational sessions that go beyond basic AI awareness. Understanding the difference between a good prompt and a great prompt, including prompt engineering techniques, benefits the individual and sets you up for wider AI literacy across the entire org. The difference between a basic ChatGPT prompt and one that can handle complex inputs and logic also involves context management, error handling, and quality controls, all of which need to be taught to users from Day 1.
Effective prompts at this level include basic syntax, clear output formatting requirements, and validation steps. For example, a deal inspection prompt for a growth marketer needs to specify the exact fields to analyze, the risk categories to evaluate, and the format for recommendations. A lot of times people will just type in "analyze this call transcript," expect a perfect output, and get frustrated when it doesn’t match expectations. They might even decide AI “doesn’t work” because it’s giving them impractical or ineffective results. A lack of training at this level creates a certain amount of doubt around the value of a tool, which is hard to come back from as you scale AI in an organization. For users new to Clay, we help with this by giving people a "metaprompter" which builds structured prompts from simple instructions.
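To make the contrast concrete, here is a minimal sketch of what "structured" means in practice for the deal inspection example. The field names, risk categories, and output format below are illustrative assumptions for this article, not a prescribed schema or Clay's actual metaprompter:

```python
# Minimal sketch of a structured deal-inspection prompt builder.
# FIELDS, RISK_CATEGORIES, and the output format are assumptions
# chosen for illustration, not a recommended rubric.

FIELDS = ["deal stage", "close date", "stakeholders engaged", "last activity"]
RISK_CATEGORIES = ["single-threaded deal", "stalled momentum", "budget uncertainty"]

def build_deal_inspection_prompt(transcript: str) -> str:
    """Turn a bare 'analyze this call transcript' request into a
    structured prompt with explicit fields, risk categories, and
    an output format the rep can validate."""
    return "\n".join([
        "You are a sales deal inspector.",
        f"Analyze the call transcript below against these fields: {', '.join(FIELDS)}.",
        f"Flag any of these risk categories that apply: {', '.join(RISK_CATEGORIES)}.",
        "Output format:",
        "1. Summary (3 bullets max)",
        "2. Risks (category: evidence from transcript)",
        "3. Recommended next steps (owner and deadline for each)",
        "If the transcript lacks evidence for a field, say 'insufficient data' rather than guessing.",
        "--- TRANSCRIPT ---",
        transcript,
    ])

prompt = build_deal_inspection_prompt("Rep: ... Prospect: ...")
```

The last instruction line is the validation step in miniature: telling the model to admit "insufficient data" is what keeps outputs checkable instead of confidently wrong.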
Create systematic documentation processes that capture both successful workflows and failure modes. This documentation becomes critical input for Level 2 centralization decisions.
Teams are going to progress at different rates and find value in different ways, too, which is why documentation is vital at this first stage. Have individuals note where their successes and failures are happening and then use that to refine your integration. Our data shows that SDRs and content marketers typically see the fastest ROI at this level, often achieving 20-30% time savings on routine tasks within their first month of use. The employment platform Oyster found that sales reps were saving 40 hours every week thanks to Clay's automation tools, freeing up the equivalent of a full work week for their sales teams to spend on deal flow. Other teams will see faster or slower gains, but regardless of pace it's vital to keep track of where things are thriving or foundering.
Here’s where we see a lot of value generated at Level 1:
Deal Inspections: AI can analyze call transcripts to extract key insights, identify risk factors, and suggest next steps. This is particularly powerful for complex enterprise deals where reps are managing multiple stakeholders and decision criteria.
Content Personalization: Creating customized outreach messages based on prospect research. AI can synthesize information from LinkedIn profiles, company news, and industry trends to craft relevant, personalized messaging at scale.
Data Analysis: Processing CRM data to identify patterns and opportunities that might not be obvious from standard reports. This includes spotting trends in deal progression, identifying characteristics of high-value prospects, and uncovering pipeline risks.
Meeting Preparation: Synthesizing account history, recent interactions, and research into comprehensive briefing documents that help reps show up prepared and credible.
Email Drafting: Creating first-draft responses that reps can refine and personalize, dramatically reducing the time spent on routine communications while maintaining quality.
Find what’s ready for the next level
As individuals experiment with AI tools, look for workflows that show consistent results across multiple users. Those are the opportunity areas, the processes that can be standardized and scaled without losing effectiveness. The most successful Level 2 implementations emerge from Level 1 use cases that demonstrate clear value, repeatable processes, and scalable impact.
Document not just what works, but why it works: the specific prompt structures, data inputs, and validation steps that produce reliable results. This documentation becomes the blueprint for centralized deployment.
You also need to track user adoption rates across different roles to understand where AI is gaining traction, because that’s where you’ll find exactly where to focus your deployment. Do things like measure time savings on specific tasks for specific teams, and have them report back on where they’re seeing the biggest gains. For context, we’ve seen successful Level 1 implementations showing anywhere from a 15% to a 40% efficiency gain on targeted workflows. Monitor quality improvements in outputs, particularly in areas like email response rates and meeting preparation thoroughness.
You’re going to inevitably run into issues with attribution at this stage. For example, how do you measure the revenue impact of AI-generated insights that influenced a deal that closed six months later? The best way to solve this is by creating leading indicators like engagement rates on AI-enhanced outreach, meeting conversion rates from AI-prepared briefings, and deal progression velocity. These are proxy metrics, but they’ll help build business cases for Level 2 investment.
Where Level 1 can fail
Failures right out of the gate usually result from poor data quality and infrastructure. The golden rule of garbage in, garbage out applies here. There’s also the challenge of insufficient prompt engineering sophistication. Teams get excited about AI possibilities but create workflows that produce inconsistent outputs because the underlying data or prompts aren't production-ready. It can be easy to get discouraged and go back to manual processes if the tools aren’t getting the job done, and often that’s because there are issues with data and prompt syntax.
Level 2: Centralized deployment & standardization
Equipping individuals with AI and training them to use these tools in ways that work for them is how you get an organization comfortable with automation. But scaling AI is exponentially tougher, both because you're asking people to take a leap of faith and hand over manual processes, and because you're putting business goals in the hands of new technologies. What you learned in Level 1 alone isn't going to be enough.
Building on Level 1 Insights
Start by identifying which workflows you built in Level 1 are ready for the next step. There are a couple ways of doing this, but usually the most scalable workflows share common characteristics: they produce consistent results across different users, they can be standardized without losing effectiveness, and they address high-impact, high-frequency tasks.
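The three characteristics above can be turned into a simple screening heuristic. The sketch below assumes you tracked per-user time savings and run frequency during Level 1; the consistency threshold and frequency floor are illustrative numbers, not benchmarks from the article:

```python
# Toy heuristic for the "ready to centralize" check: a workflow
# qualifies when results are consistent across users and the task
# is high-frequency. The 0.25 spread threshold and 10 runs/week
# floor are assumptions for illustration.

from statistics import mean, pstdev

def ready_to_centralize(time_saved_by_user: list[float], runs_per_week: int) -> bool:
    """Low relative spread across users = standardizable;
    high frequency = worth the centralization investment."""
    if len(time_saved_by_user) < 3:  # need multiple users to judge consistency
        return False
    spread = pstdev(time_saved_by_user) / mean(time_saved_by_user)
    return spread < 0.25 and runs_per_week >= 10

# A workflow where four users save ~2 hours each, 25 runs/week: good candidate.
ok = ready_to_centralize([2.0, 2.2, 1.9, 2.1], runs_per_week=25)
# Wildly different results per user: not ready, even at high frequency.
no = ready_to_centralize([0.5, 3.0, 1.0], runs_per_week=25)
```

The point isn't the specific thresholds; it's that "ready for Level 2" should be a measurable gate, not a gut feel.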
Organizing and categorizing the best prompts and processes for different platforms—especially the ones that can orchestrate complex workflows—is vital. That’s also where tools like Clay become essential for sophisticated GTM operations. The standardization process involves converting individual prompt experiments into production-ready systems with proper error handling, quality controls, and integration capabilities.
Real, comprehensive organizational change is needed to get the full benefits of this second step in the AI scaling process. You'll need to hire or designate go-to-market engineering resources, establish AI governance and quality control processes, and create feedback loops between individual users and centralized operations teams.
Based on our observations across SaaS companies, the most effective organizational structure includes a dedicated GTM engineering function that sits between traditional RevOps and the broader engineering organization. This team needs both technical aptitude and deep understanding of go-to-market processes, which is exactly the combination that will be essential as you continue to build AI into your organization.
Key Focus Areas
There are dozens of directions that AI scaling can take in Level 2, but it’s most effective to identify the ones most valuable to your organization. Here are a few to consider:
Process Acceleration: Converting manual tasks that previously took hours into automated workflows that complete in minutes. The most successful implementations focus on workflows where the time savings are dramatic—typically 10x or greater improvements that justify the centralization investment.
Quality Standardization: Ensuring consistent output quality across all users and use cases. This requires developing prompt templates, approval workflows, and quality control mechanisms that maintain the effectiveness discovered in Level 1 while enabling organization-wide deployment.
Knowledge Capture: Systematically documenting and sharing best practices from Level 1 power users. This institutional knowledge becomes a competitive asset that compounds over time and provides the foundation for Level 3 capability development.
Workflow Integration: Connecting AI processes with existing CRM, marketing automation, and sales enablement tools. The integration work done at Level 2 creates the technical foundation that enables Level 3's more sophisticated multi-system workflows.
Implementation use cases
Because of how much GTM organizations rely on manual processes, there is an astounding set of potential use cases for AI. Here are a few to consider:
Automated Deal Scoring: Standardized prompts that analyze deal health across all opportunities in your CRM, providing consistent risk assessment and next-step recommendations that help managers focus their coaching efforts while creating data patterns that inform Level 3 predictive capabilities.
Lead Enrichment Workflows: Systematic data enhancement for all inbound prospects, combining firmographic data, technographic insights, and intent signals to create comprehensive prospect profiles automatically. The data quality improvements enable Level 3's advanced targeting and personalization capabilities.
Competitive Intelligence: Automated monitoring and analysis of competitor mentions, feature releases, and market positioning changes that feed directly into your product marketing and sales enablement processes while building the competitive data foundation for Level 3's dynamic positioning capabilities.
Pipeline Reporting: AI-generated insights and summaries for leadership reviews that go beyond standard CRM reports to identify trends, risks, and opportunities requiring executive attention, while establishing the analytical patterns that enable Level 3's predictive insights.
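To show what "standardized" deal scoring means once it moves from individual prompts into a centralized system, here is a minimal sketch. The CRM fields, weights, and thresholds are assumptions chosen for the example, not a recommended rubric:

```python
# Illustrative sketch of a centralized deal-scoring workflow (Level 2).
# The fields (activity recency, stakeholder count, champion) and the
# point deductions are placeholder assumptions, not a real rubric.

from dataclasses import dataclass

@dataclass
class Deal:
    name: str
    days_since_last_activity: int
    stakeholders_engaged: int
    has_champion: bool

def score_deal(deal: Deal) -> dict:
    """Apply one rubric to every opportunity so managers get
    consistent risk assessments instead of per-rep judgment calls."""
    risks = []
    score = 100
    if deal.days_since_last_activity > 14:
        score -= 30
        risks.append("stalled: no activity in 14+ days")
    if deal.stakeholders_engaged < 2:
        score -= 25
        risks.append("single-threaded")
    if not deal.has_champion:
        score -= 20
        risks.append("no internal champion")
    return {"deal": deal.name, "score": max(score, 0), "risks": risks}

scores = [score_deal(d) for d in [
    Deal("Acme renewal", days_since_last_activity=3, stakeholders_engaged=4, has_champion=True),
    Deal("Globex expansion", days_since_last_activity=21, stakeholders_engaged=1, has_champion=False),
]]
```

In production this logic would typically hand each flagged deal to an LLM step for the narrative next-step recommendation; the deterministic rubric is what makes the outputs comparable across the whole pipeline.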
Where Level 2 fails
We see organizations failing at Level 2 when they underestimate how complex it can be to integrate AI into production-ready systems or underinvest in change management. The technical implementation succeeds, sure, but user adoption fails because the workflows don't integrate smoothly with existing processes or because teams haven't been properly prepared for the transition from individual control to centralized systems. This is a step that requires both technological investment and a lot of personal nuance to succeed, so pay attention to both sides of the equation.
Level 3: Net new capabilities & GTM alpha
Deploying AI to individuals and your organization is all a way of setting up for entirely new capabilities that can only be achieved with those strong foundations. It’s where the return on your investment in AI comes in and gives you genuine, bankable competitive advantages. We call those edges GTM alpha, where you’re making plays on data that your competitors simply can’t match.
While Levels 1 and 2 focus on making existing processes faster and more efficient, Level 3 creates capabilities that didn't exist before AI. The key question becomes: "What could we do if we had perfect information and infinite processing power?" The answer depends entirely on the data quality and organizational capabilities built in the previous levels.
Preparing for success at this stage is more about creative problem solving and understanding exactly what AI is capable of. You’ve created the systems, now you get to take advantage of them. The most successful Level 3 implementations we've studied are delivering measurable, sustainable, and distinct competitive advantages:
Predictive Account Intelligence: Companies are using AI to identify expansion opportunities 3-6 months before they become obvious through traditional signals, analyzing usage patterns, support tickets, and stakeholder changes to predict when accounts are ready for upselling. Vanta uses Clay to track external signals that indicate sales opportunities and automates that research process. Now they’re able to do real-time outreach based on funding events, hiring signals, and compliance updates to beat their competitors.
Dynamic Competitive Positioning: Real-time analysis of competitor mentions, feature releases, and win/loss data automatically adjusts sales messaging and positioning for each opportunity. When a competitor announces a new feature, the system immediately identifies affected deals and suggests updated talk tracks.
Behavioral Revenue Attribution: Advanced systems track the complete customer journey from first touchpoint through expansion, using AI to identify which specific interactions and touchpoints most strongly correlate with revenue outcomes. Amplitude combined their own first-party data as well as third-party data with Clay to identify customers that were good candidates for cross-selling.
Contextual Deal Coaching: AI systems that maintain context across multiple interactions provide increasingly sophisticated coaching to sales reps, remembering previous challenges with similar prospects and suggesting personalized approaches based on what's worked historically with comparable accounts.
These sorts of plays are where AI integrations yield tangible results on your bottom line, and free up time for people to do their best work. Here’s how we categorize those benefits to give you an idea of how to grade your AI integration:
New Business: Competitive differentiation that helps you win more deals: unique plays built on proprietary data combinations or process innovations that competitors can't easily replicate.
Better Retention: Improved customer experience that results in customers staying longer and increasing spend with your business. These dramatically improved buyer and customer experiences give new meaning to speed and personalization at scale.
Process Efficiencies: Automating processes to decouple revenue growth from headcount growth. Think direct contribution to pipeline creation and deal closure through capabilities that identify opportunities and enable approaches at scale without proportional team expansion.
Where Level 3 fails
There are a lot of potential pitfalls for this final stage of the framework, but organizations often get tripped up by attempting to build competitive advantages while skipping the foundational work of Levels 1 and 2. Teams create sophisticated capabilities but can't prove business impact or justify continued investment because they lack the organizational maturity and data infrastructure necessary to support advanced AI applications.
The GTM Engineering Function: Your AI-First Organization
Why GTM Engineering Becomes Essential
As organizations progress through the three levels, the need for dedicated GTM engineering expertise becomes critical. Individual AI tools require minimal technical support, but centralized deployment and competitive advantage creation demand specialized skills that bridge traditional engineering and go-to-market operations.
AI systems require ongoing maintenance, optimization, and improvement that business users cannot handle alone. The integration complexity with existing business systems requires specialized technical knowledge. Organizations need dedicated expertise to stay ahead of the AI capability curve and to build the Level 3 capabilities that create sustainable competitive advantages.
Core Responsibilities and Skills
The most effective GTM engineers combine technical aptitude with AI/ML tools and APIs, deep understanding of go-to-market processes and metrics, and data analysis and workflow optimization expertise. They also need change management and user adoption skills, as they're often responsible for rolling out new capabilities across resistant or overwhelmed business teams.
Key responsibilities include workflow design and optimization for all three levels, developing and maintaining high-quality prompts for various use cases, managing integrations between AI tools and existing business systems, monitoring AI system performance and identifying improvement opportunities, and training and supporting business users as they adopt new AI-powered capabilities.
Career Paths and Hiring Strategies
The role of GTM engineer is still emerging, but we're seeing several successful hiring patterns. Some companies promote from within, taking their strongest RevOps professionals and investing in their technical skills development. Others hire from technical backgrounds and train them on GTM processes. The most successful approach often involves creating hybrid roles that combine traditional operations expertise with technical capability, exactly what's needed to execute this three-level framework successfully.
Implementation Roadmap: Your Path to AI-First GTM
There’s no one-size-fits-all calendar for scaling AI in your organization. Leaner companies can move towards data hygiene and implementation much more quickly than enterprise SaaS orgs that may be digging out of years of data debt. We put together this timeline to provide a general sense of how long each phase should take, but rather than paying attention to the schedule, it’s more important to finish each step before moving on to the next one so you have a strong foundation on which to build.
Level 1: Foundation Building (6 months)
Deploy individual AI tools across your organization, starting with the roles that showed highest engagement in your pilot programs. Conduct comprehensive training sessions that go beyond AI awareness to include sophisticated prompt engineering, result validation, and workflow documentation.
Begin systematically documenting successful use cases and power user behaviors—this documentation becomes the foundation for Level 2 scaling. Establish governance guidelines that balance experimentation with risk management while ensuring data quality standards that will support Level 2 and 3 implementations.
Level 2: Centralization (12 months)
Hire or designate go-to-market engineering resources based on your planned implementations' scale and complexity. Most enterprises need 12-18 months to reach Level 3 maturity with dedicated resources and strong leadership commitment.
Implement a centralized AI orchestration platform that can handle your most promising Level 1 workflows while building the technical foundation for Level 3 capabilities. Address integration challenges with existing MarTech and SalesTech stacks—budget $50,000-$200,000 annually for platform costs, engineering resources, and training investments.
Create quality control processes and measurement frameworks that can track both leading and lagging indicators of AI impact across all three levels.
Level 3: Innovation (18 months)
Design and implement net new capability workflows that create competitive advantages rather than efficiency gains, building on the infrastructure and organizational capabilities developed in Levels 1 and 2. Establish sophisticated measurement systems that include leading indicators, attribution models, and business case frameworks for continued investment.
Build organizational capabilities for continuous innovation and adaptation as AI technologies evolve, ensuring your competitive advantages remain sustainable.
Measuring impact
Building Attribution Models Across All Levels
Successful AI GTM implementations require measurement frameworks that evolve with each level. Level 1 focuses on efficiency metrics and user adoption. Level 2 adds process standardization and quality consistency measures. Level 3 requires sophisticated attribution models that can track competitive advantage creation and long-term business impact.
Track time savings on specific workflows, weighted by the hourly value of affected roles. Monitor quality improvements through metrics like email response rates, meeting conversion rates, and deal progression velocity. Develop attribution models that account for AI's influence on long-term outcomes, using leading indicators like engagement rates on AI-enhanced outreach and meeting conversion rates from AI-prepared briefings.
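The "time savings weighted by hourly value" metric above reduces to a simple calculation. The roles, hours, and hourly values below are placeholder numbers for illustration, not benchmarks from our portfolio data:

```python
# Sketch of the role-weighted time-savings metric described above.
# Every number here is a placeholder assumption, not a benchmark.

hours_saved_per_month = {"SDR": 20, "AE": 12, "Content marketer": 16}
hourly_value = {"SDR": 40, "AE": 90, "Content marketer": 55}  # fully loaded $/hour

# Weight each role's saved hours by what that time is worth.
weighted_savings = sum(
    hours_saved_per_month[role] * hourly_value[role]
    for role in hours_saved_per_month
)
# 20*40 + 12*90 + 16*55 = 2760 dollars/month of reclaimed capacity
```

Reporting the weighted figure rather than raw hours keeps leadership focused on value: 12 AE hours outweigh 20 SDR hours in this example, which changes where you invest next.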
Revenue Impact Measurement
Revenue impact measurement should include both direct pipeline contribution and velocity improvements across existing deals. Customer acquisition cost improvements often result from more efficient targeting and personalization capabilities developed through the three-level progression.
Build business case frameworks that help leadership understand the compound effects of AI investments and justify continued innovation funding even when individual ROI calculations prove challenging. The most sophisticated measurement approaches track how Level 1 efficiencies enable Level 2 standardization, which in turn creates the foundation for Level 3 competitive advantages.
Long-term Value Recognition
One of the most significant long-term impacts of systematic AI GTM adoption is the improvement in customer lifetime value through better targeting, onboarding, and expansion identification. The compound effect of these improvements can be substantial—small improvements in customer selection and onboarding experiences compound over years to create significant competitive advantages in unit economics.
Conclusion: The Future of AI-First GTM
The three-level framework provides a systematic approach to AI adoption that minimizes risk while maximizing long-term competitive advantage. Organizations that implement this approach systematically are creating sustainable competitive advantages that build up organizational capabilities one step at a time.
Rather than asking whether to adopt AI, forward-thinking GTM leaders are asking harder questions: Should we build these capabilities internally or buy them? How do we create competitive advantages that extend beyond the typical 12-18 month window before capabilities become commoditized? What predictions can we make about where this technology evolution leads in 2-3 years?
The companies that start building systematic AI capabilities now—with proper foundations, realistic timelines, and sophisticated measurement—will have significant advantages over those waiting for perfect solutions or clearer ROI calculations.
Your Next Steps
Start with an honest assessment of your current state against this framework. Most organizations discover they're not ready for Level 2 or 3 implementations due to data architecture or organizational readiness gaps. Address these fundamentals first.
Identify your highest-impact use cases at Level 1 and begin building organizational muscle around prompt engineering, result validation, and workflow documentation. The learning from these early implementations becomes the foundation for everything that follows.
Remember: each level builds on the previous one. Level 1 experiments inform Level 2 standardization, which enables Level 3 innovation. Organizations that try to skip levels or rush the progression typically fail to achieve sustainable competitive advantages.
The future of go-to-market belongs to organizations that can combine human insight with AI capability systematically. This framework provides the roadmap, but success requires commitment to building capabilities rather than just implementing tools.