Building the Agentic Enterprise, Part 6: Platform Decisions; Build, Buy, Assemble, or Extend

This is the sixth article in an 11-part series exploring what it takes to build an enterprise that runs on AI agents, not just AI tools. Each article examines a critical dimension of the journey and includes a "What It Takes" section with practical guidance for leaders navigating this transition.

---

The Platform Question

In Part 5, we established why orchestration is the critical coordination layer for multi-agent systems. The natural follow-up question is: where does that orchestration capability come from? More broadly, how should you acquire and compose the technology capabilities you need to build an agentic enterprise?

This is the platform decision, and it is more complex than the traditional build-versus-buy question because the agentic technology landscape is evolving faster than enterprise procurement cycles. The right platform strategy depends on where you are today, where you need to be, and how much flexibility you need to preserve along the way.

With the AI agents market projected to reach $52.6 billion by 2030 and Gartner forecasting that 40 percent of enterprise applications will embed task-specific agents by the end of 2026, this is not a decision you can defer indefinitely. But it is one you should make deliberately, because the choices you make now will shape your options for years.

Four Platform Strategies

Organizations approaching the agentic platform landscape have four primary strategies. Each has distinct advantages, limitations, and risk profiles.

Extend what you have. This is the lowest-friction starting point, and for many organizations it is the right first move. Every major enterprise platform vendor now ships agentic capabilities. Salesforce Agentforce, with over 8,000 customers, embeds agents across CRM workflows. Microsoft has added Agent 365, a centralized control plane for agent registry, access controls, and cross-ecosystem visibility. ServiceNow's AI Agent Orchestrator coordinates agents across ITSM, HR, and customer service, earning the top position in Gartner's 2025 Critical Capabilities for AI Agents. Workday launched Illuminate agents for HR case management and financial close. SAP offers Joule, and IBM Watsonx Orchestrate ships pre-integrated with more than 80 enterprise applications.

The advantage is speed to value. Your teams already know the platform. Your data is already there. Integration with existing workflows is straightforward. The limitation is scope. Agents built within a single vendor's ecosystem work well for workflows that live within that ecosystem. They struggle when work crosses vendor boundaries, which, as we discussed in Part 5, is where the most valuable orchestration happens.

Buy a purpose-built agent platform. Pure-play agent platforms offer capabilities designed specifically for building, deploying, and managing agents. Some target specific domains: Aisera and Kore.ai focus on employee support and contact center use cases. Others offer broader orchestration capabilities. The market is consolidating rapidly; ServiceNow's acquisition of Moveworks in March 2025 validated the pure-play model while also absorbing a major independent player.

Purpose-built platforms make sense when your agent workflows span multiple enterprise systems and no single vendor covers the full scope, when you need model-agnostic flexibility to swap between AI providers, or when your incumbent vendor's agent capabilities lag in the specific domain where you need them. The limitation is that you are adding another platform to your stack, with the associated integration, governance, and vendor management overhead.

Build your own. Building directly on foundation model APIs from providers like OpenAI, Anthropic, or Google gives you maximum control over architecture, behavior, and integration. Open frameworks like LangGraph and CrewAI provide building blocks that reduce the engineering effort compared to starting from scratch.

But building your own remains the most resource-intensive path. Production-grade agent systems typically require six to twelve months of development, and the ongoing maintenance burden is significant. Custom development is justified when the use case is a genuine competitive differentiator, when deep domain-specific knowledge cannot be captured by commercial platforms, or when no existing offering covers the workflow you need to automate. For most organizations, building should be the exception, not the default.

Assemble from best-of-breed components. This is the approach emerging as the practical default for enterprises with complex requirements. Over half of enterprises now prefer hybrid stacks that layer open protocols on top of vendor-managed orchestration. The pattern: use your ERP or CRM vendor for domain-specific agents, an open framework for custom orchestration logic, and open protocols for the connective tissue.
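The layering described above can be made concrete with a small sketch. Every product and layer assignment in this map is a hypothetical example of how an assembled stack might be composed, not a recommendation of specific vendors:

```python
# Illustrative map of an "assemble" stack. Each layer assignment and
# example below is hypothetical, not a product recommendation.
assembled_stack = {
    "domain_agents": {"layer": "vendor platform", "example": "CRM vendor's native agents"},
    "orchestration": {"layer": "open framework", "example": "LangGraph-style agent graph"},
    "agent_to_tool": {"layer": "open protocol", "example": "MCP"},
    "agent_to_agent": {"layer": "open protocol", "example": "A2A"},
}

for component, choice in assembled_stack.items():
    print(f"{component}: {choice['layer']} ({choice['example']})")
```

The point of writing it down this way is that each layer can be swapped independently: the open protocols at the bottom are what keep the vendor and framework choices above them reversible.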

The assemble model offers the best balance of speed, flexibility, and control. You get the domain expertise and operational maturity of established vendors where it matters, the customization of open frameworks where you need it, and the interoperability of open standards to prevent the whole architecture from calcifying around a single provider. The tradeoff is complexity. An assembled stack requires more architectural skill to design, more governance discipline to manage, and more integration effort to maintain than a single-vendor approach.

The Lock-in Problem Is Different This Time

Vendor lock-in is a familiar concern in enterprise technology. But agentic AI lock-in is more severe than traditional software lock-in because it compounds across multiple layers simultaneously.

With traditional software, lock-in is primarily about data formats and business logic. Migration is expensive but conceptually straightforward: extract data, rebuild logic, retrain users. With agentic systems, lock-in operates across at least four layers: the AI model (which shapes how agents reason), the orchestration logic (which defines how agents coordinate), the memory and context layer (which stores what agents have learned from operating in your environment), and the data connections (which determine what agents can access).

The critical insight is about accumulated operational context. If months of agent memory, workflow conventions, escalation patterns, and institutional knowledge live inside a vendor's proprietary layer, switching costs go far beyond code migration. Research suggests average switching costs exceed $315,000 per project, and only six percent of enterprises can change vendors without significant disruption.

This does not mean you should avoid platforms. It means you should make lock-in risk a first-order evaluation criterion, not an afterthought. Ask: where does the accumulated context live? Can it be exported? Are the orchestration patterns portable? Can you swap model providers without rewriting your agent logic?
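Those four questions map directly onto the four lock-in layers. As a rough sketch, they can be turned into a weighted risk score; the criteria names and weights here are illustrative assumptions for structuring a vendor conversation, not a standard rubric:

```python
# Illustrative lock-in risk score. Criteria and weights are assumptions,
# not a standard rubric. True means the vendor's answer reduces risk.
def lockin_risk(answers: dict) -> float:
    """Return a 0.0 (portable) to 1.0 (locked-in) risk score."""
    criteria = {
        "context_exportable": 0.35,       # can accumulated agent memory be exported?
        "orchestration_portable": 0.30,   # do workflows use open, portable patterns?
        "model_swappable": 0.20,          # can you change model providers without rewrites?
        "data_connections_standard": 0.15,  # are integrations standards-based?
    }
    # Each criterion that is NOT satisfied contributes its weight to the risk.
    risk = sum(weight for key, weight in criteria.items() if not answers.get(key, False))
    return round(risk, 2)

vendor = {
    "context_exportable": False,
    "orchestration_portable": True,
    "model_swappable": True,
    "data_connections_standard": True,
}
print(lockin_risk(vendor))  # 0.35: memory export is the remaining exposure
```

The weighting reflects the argument above: accumulated operational context is the heaviest factor, because it is the hardest layer to rebuild elsewhere.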

Open Standards and the Interoperability Imperative

The most effective lock-in mitigation strategy is adoption of open standards, and the agentic AI ecosystem is converging around two protocols that enterprise buyers should understand.

Model Context Protocol (MCP), introduced by Anthropic in late 2024, standardizes how agents connect to tools and data sources. Think of it as a universal adapter: rather than building custom integrations for every system an agent needs to access, you build one MCP connection and any MCP-compatible agent can use it. MCP now has over 97 million SDK downloads and more than 10,000 enterprise server implementations.
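On the wire, MCP is built on JSON-RPC 2.0. A minimal sketch of a tool invocation looks like the following; the method name "tools/call" comes from the MCP specification, while the tool name and arguments ("lookup_invoice", "invoice_id") are hypothetical:

```python
import json

# Sketch of an MCP tool invocation. MCP messages are JSON-RPC 2.0;
# "tools/call" is the spec's method name. The tool ("lookup_invoice")
# and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_invoice",                  # a tool exposed by some MCP server
        "arguments": {"invoice_id": "INV-1042"},   # hypothetical argument
    },
}

wire = json.dumps(request)
print(wire)
```

The "universal adapter" benefit follows from this shape: any agent that can emit this message format can call any tool any MCP server exposes, with no per-system integration code.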

Agent2Agent (A2A) protocol, introduced by Google in April 2025, standardizes how agents communicate with each other across platforms. While MCP handles the agent-to-tool relationship, A2A handles the agent-to-agent relationship, enabling agents built on different platforms to discover each other's capabilities, negotiate collaboration, and coordinate on shared tasks.
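Discovery in A2A works through an "agent card," a JSON document an agent publishes (conventionally at a well-known URL) describing what it can do. The sketch below follows the general shape of an A2A agent card; the agent, endpoint, and skill shown are entirely hypothetical:

```python
import json

# Sketch of an A2A agent card: the JSON document an agent publishes so
# other agents can discover its capabilities. The field layout follows
# the general A2A card shape; the agent and skill are hypothetical.
agent_card = {
    "name": "expense-approval-agent",
    "description": "Approves or escalates employee expense reports",
    "url": "https://agents.example.com/expense",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "approve-expense",
            "name": "Approve expense report",
            "description": "Validates policy compliance, then approves or escalates",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

This is what makes cross-platform coordination possible: an orchestrator built on one vendor's stack can fetch this card, see the "approve-expense" skill, and delegate to an agent built on a completely different stack.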

Both protocols are now governed by the Linux Foundation's Agentic AI Foundation, co-founded by OpenAI, Anthropic, Google, Microsoft, AWS, and Block. This governance structure matters because it signals that these are not single-vendor initiatives but industry-wide standards.

For enterprise buyers, the practical implication is clear: require MCP and A2A compatibility as baseline criteria in vendor evaluations. Eighty-seven percent of IT leaders now prioritize interoperability in their agentic AI purchasing decisions. Platforms that support these standards give you the flexibility to evolve your architecture as the market matures. Platforms that do not are asking you to bet that their proprietary approach will win, a bet that gets more expensive to reverse over time.

A Decision Framework for Platform Strategy

Rather than prescribing a single approach, here is a framework for matching your platform strategy to your organizational context.

Start with extend if your highest-value agent use cases live primarily within one vendor's ecosystem, your organization has limited AI engineering capacity, and speed to initial deployment matters more than architectural flexibility. This gets you running quickly with manageable risk.

Move to assemble when your agent workflows span multiple systems and vendors, when you need model-agnostic flexibility, or when the extend approach has reached its limits for your cross-functional use cases. The assemble model is where most large enterprises end up as their agentic ambitions mature.

Choose build selectively for use cases where the agent logic itself is a competitive differentiator, where deep domain expertise cannot be replicated by commercial platforms, or where you need capabilities that simply do not exist in the market yet. Ring-fence custom development to genuinely unique requirements and use commercial platforms for everything else.

Consider pure-play buy when you need specialized capabilities in a specific domain that your incumbent vendors do not cover, or when a purpose-built platform offers significantly better orchestration for your primary use cases. But evaluate carefully whether the pure-play vendor will remain independent or get acquired, and ensure you have contractual protections around data portability and API continuity.

Most organizations will use more than one strategy simultaneously. Your CRM agents might run on Salesforce Agentforce (extend), your cross-functional orchestration might use an open framework (assemble), and your most differentiated workflow might be custom-built (build). The key principle is that your platform strategy should be as composable as the architecture it supports. No single choice needs to be permanent, and the best strategies preserve optionality while delivering value today.
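The framework above can be codified as a simple decision sketch. The input flags and their precedence are illustrative assumptions meant to make the logic discussable, not a validated scoring model:

```python
# Illustrative codification of the four-strategy framework. The input
# flags and their ordering are assumptions for discussion, not a
# validated decision model.
def platform_strategy(use_case: dict) -> str:
    """Suggest a platform strategy for a single use case."""
    # Build only when the agent logic is itself a differentiator and
    # nothing in the market covers it.
    if use_case.get("competitive_differentiator") and use_case.get("no_market_offering"):
        return "build"
    # Cross-vendor workflows or model-agnostic requirements push toward
    # the assembled, best-of-breed stack.
    if use_case.get("spans_multiple_vendors") or use_case.get("needs_model_agnostic"):
        return "assemble"
    # A domain gap your incumbents do not cover favors a pure-play buy.
    if use_case.get("specialized_domain_gap"):
        return "buy"
    # Default: lowest-friction starting point.
    return "extend"

print(platform_strategy({"spans_multiple_vendors": True}))  # assemble
print(platform_strategy({}))                                # extend
```

Note that the function is evaluated per use case, which mirrors the point above: a portfolio of use cases naturally yields a mix of strategies, not one answer for the whole enterprise.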

What It Takes: Technical Infrastructure and Strategic Alignment

This article maps to two readiness dimensions: technical infrastructure and strategic alignment. Platform decisions sit at the intersection of what your technology can support and what your business needs to achieve.

Here is what readiness requires in practice:

Evaluate your current stack honestly. Before evaluating new platforms, understand what you already have. What agentic capabilities are your existing vendors shipping? How mature are they? What gaps do they leave? Many organizations discover that their current vendors cover 60 to 70 percent of their initial use cases, which changes the calculus significantly compared to starting from scratch.

Define your business outcomes before you evaluate platforms. The most common platform decision mistake is starting with technology capabilities rather than business requirements. Know which workflows you want to make agentic, what success looks like for each one, and what constraints (regulatory, budgetary, organizational) shape your options. Build your evaluation criteria before you start taking demos.

Assess your integration architecture. Platform decisions have downstream implications for every system agents need to touch. If your integration layer is fragile, adding an agent platform on top will amplify the fragility. If your integration layer is robust and standards-based, you have more platform options and more flexibility to evolve.

Factor in total cost of ownership, not just acquisition cost. Agent platforms have cost profiles that differ from traditional software. Token consumption, API call volumes, compute scaling, and ongoing training and customization all contribute to TCO. A platform with a low entry cost but high per-transaction pricing may be more expensive at scale than one with higher upfront investment but more predictable economics.
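A toy calculation makes the crossover point concrete. All prices and volumes below are hypothetical, chosen only to show how per-transaction pricing can dominate entry cost at scale:

```python
# Toy three-year TCO comparison: entry cost plus per-transaction pricing.
# All prices and volumes are hypothetical.
def three_year_tco(entry_cost: float, per_txn: float, txns_per_year: int) -> float:
    return entry_cost + per_txn * txns_per_year * 3

# Platform A: cheap to adopt, expensive per transaction.
low_entry = three_year_tco(entry_cost=50_000, per_txn=0.25, txns_per_year=2_000_000)
# Platform B: expensive to adopt, cheap per transaction.
high_entry = three_year_tco(entry_cost=400_000, per_txn=0.05, txns_per_year=2_000_000)

print(f"Platform A: {low_entry:,.0f}")   # 1,550,000
print(f"Platform B: {high_entry:,.0f}")  # 700,000
```

At this hypothetical volume, the platform that cost eight times more to adopt is less than half the price over three years, which is why TCO modeling has to start from realistic transaction volumes, not list prices.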

Plan for evolution, not just initial deployment. The agentic platform landscape will look different in 18 months. Your platform strategy should accommodate that change rather than betting everything on today's market configuration. This is the strongest argument for composability and open standards: they give you the architectural flexibility to adapt as both technology and your organization's needs evolve.

Up Next

In Part 7, we will turn to the data foundation. Agents are only as good as the data they can access and reason over, and data readiness is where most agentic initiatives stall. We will cover what data readiness means in the context of agentic AI, why it is the most common blocker to production deployment, and what organizations need to build to give their agents the information foundation they require.

Michael Fauscette

High-tech leader, board member, software industry analyst, author and podcast host. He is a thought leader and published author on emerging trends in business software, AI, generative AI, agentic AI, digital transformation, and customer experience. Michael is a Thinkers360 Top Voice 2023, 2024 and 2025, and Ambassador for Agentic AI, as well as a Top Ten Thought Leader in Agentic AI, Generative AI, AI Infrastructure, AI Ethics, AI Governance, AI Orchestration, CRM, Product Management, and Design.

Michael is the Founder, CEO & Chief Analyst at Arion Research, a global AI and cloud advisory firm; advisor to G2 and 180Ops, Board Chair at LocatorX; and board member and Fractional Chief Strategy Officer at SpotLogic. Formerly Michael was the Chief Research Officer at unicorn startup G2. Prior to G2, Michael led IDC’s worldwide enterprise software application research group for almost ten years. An ex-US Naval Officer, he held executive roles with 9 software companies including Autodesk and PeopleSoft; and 6 technology startups.

Books: “Building the Digital Workforce” - Sept 2025; “The Complete Agentic AI Readiness Assessment” - Dec 2025

Follow me:

@mfauscette.bsky.social

@mfauscette@techhub.social

www.twitter.com/mfauscette

www.linkedin.com/mfauscette

https://arionresearch.com