
The Two-Track Enterprise

Christoffer Silversparre Jørgensen is a cybersecurity architect based in Helsingør, Denmark, writing about the intersection of security architecture, platform engineering, and the AI-era enterprise.

Why a platform strategy isn't enough anymore, and what the second track actually looks like


A senior engineer is working from home on a Tuesday afternoon. She has a confidential project model open. Fifty gigabytes of structural design under NDA, the kind of thing the firm's reputation depends on keeping inside its own walls. She's stuck on a load calculation she knows an LLM could help her think through, if only she could ask it the right question with the right context.

Her options, today, are these. She can paste excerpts into ChatGPT and pretend the DPA covers it. She can use the corporate Copilot license, which is fine for drafting emails and useless for the actual model. She can wait until tomorrow and ask a colleague. Or she can do without.

Notice what's missing from that list: the platform her company spent five years building.

The Paved Road, the Internal Developer Platform, the Landing Zone factory, whatever it's called locally. That platform was built to make shipping cloud-resident applications fast, secure, and governed. It is, by every measure that mattered when it was scoped, a success. But it was never built for her. She isn't shipping an application. She's trying to do her job, on her laptop, on a Tuesday afternoon, with the tools the AI moment has put within reach but governance has not yet caught up to.

This essay is about that gap. It is about why the platform you built is now one of two delivery tracks your enterprise architecture function needs to run, rather than the whole story. And it is about the specific seams between those tracks, which most organizations will have to make explicit before the gap turns into something worse.

The frame that no longer fits

For most of the last decade, "platform engineering" has meant a particular thing. It meant building the substrate on which application teams ship software: vended subscriptions, infrastructure-as-code modules, CI/CD templates, security scanning gates, observability stacks, policy engines. The Netflix-inspired Paved Road. The Spotify-inspired golden paths. Whatever the local dialect, the shape is recognizable. A small platform team builds opinionated defaults. A large application population consumes them. Governance is enforced by making the right way the easy way.

This frame works. It is, arguably, the most successful pattern enterprise IT has produced in twenty years. The reason it works is that it solves a real problem (the friction between what developers want to build and what compliance, security, and operations need to be true) by automating the controls rather than gating them. Guardrails, not gates. Comply or explain.

But the frame has an unstated scope, and the scope is starting to bind.

The platform you built was scoped to applications. It assumed that the unit of governance is a service running in a cloud subscription, that the customer is a development team, that the artifacts are code and configuration, and that the failure mode is shadow IT, where a team goes off-platform to ship faster.

The AI moment has produced a different unit of governance, a different customer, different artifacts, and a different failure mode, all at once. The unit is now compute proximity to the user: endpoint, workgroup, cloud. The customer is every employee, not just developers. The artifacts are models, prompts, and inference traffic, not code and configuration. And the failure mode is shadow AI, where engineers paste NDA'd content into ChatGPT because the corporate alternative doesn't help them.

None of these are subsets of the original platform problem. Together, they are a peer to it. Treating them as a peer rather than as an annex to the platform changes how you have to think about the architecture function as a whole.

Two tracks, sharply defined

The first track is the one you already have. Call it the Paved Road if you like, or whatever you call yours. Its scope is cloud-resident applications and the data products that run on them. Its mental model is "I have an idea for a thing the company should be running. Vend me a place to run it, and I'll bring code." Its primary customer is the engineering organization. Its primary failure mode if neglected is unmanaged cloud spend and uncatalogued application sprawl.

The second track is the one most organizations are now improvising into existence under labels like "AI strategy" or "Modern Workplace" or "Copilot rollout." Done deliberately, it is the Modern AI Workplace. Its scope is employee-proximate compute and AI: laptops and the local models on them, workgroup-class compute (DGX Spark-class machines, or their equivalents, sitting in offices), the network fabric that connects employees to that compute, and the productivity AI tools they use day-to-day. And increasingly, the user of all of this isn't only the human employee. Agents running on the laptop, agents invoked from the workgroup Spark, agents chained across multiple services to complete a task on the employee's behalf, all of these are themselves consumers of the compute, the identity, and the data the track provides. "Employee-proximate" has to be read with that in mind. The track is built around where the work happens, not solely around who or what is doing it.

Its mental model is borrowed from a useful metaphor. Think of the AI a remote worker carries on their laptop as a scout: a small, fast model (in the 4B to 8B parameter range, something like Gemma or Phi or a quantized Llama variant) that handles real-time work locally. Drafting, autocomplete, summarization, search over local files. The scout works offline. It works on the train. It works without a stable connection. When the work demands more, the scout can call back to the heavy artillery: a workgroup-class machine in the office that runs a 70B-class model with the full project context loaded, or a cloud AI service for tasks that warrant it. The remote worker isn't cut off from intelligence. They carry the scout, and they call in the heavy artillery when the connection allows.

So the mental model becomes "I have a job to do. Give me a working laptop with the right scout on it, the right access to the heavy artillery when I need it, no matter where I'm sitting." Its primary customer is every employee. Its primary failure mode is shadow AI, and the cottage industry of unmanaged tools that fills the vacuum when the corporate offering doesn't meet the moment.

These tracks are not subsets of each other. They have different customers, different cadences, different ownership in any sensible org chart. The Paved Road typically lives in Platform Engineering. The Modern AI Workplace, if it exists at all, usually lives somewhere between the End User Computing team, the Identity team, and an AI Center of Excellence that nobody reports to cleanly.

What the two tracks share is governance fabric. They share principles. They share the comply-or-explain philosophy. The same identity system underwrites both. The same data classification scheme should drive policy in both. The same observability stack should ingest telemetry from both. If you build the Modern AI Workplace as a fully separate program with its own identity provider, its own classification taxonomy, its own logging pipeline, you've solved the immediate problem and created a much worse one in eighteen months when the two diverge.

The frame that fits, then, is not "platform plus annex." It is two tracks under one umbrella, sharing horizontal fabric.

The horizontal fabric

When you look hard at what the existing platform actually contains, most of it isn't really platform-specific. The identity system, the data governance capability, the observability stack, the policy engine, the vending pattern. These are enterprise capabilities that the platform happens to consume. They were built inside the platform program because that's the program that needed them first. They don't belong to the platform. The platform consumes them, and so does the second track, and so will any third track that comes along in five years.

There are five of these horizontal fabrics, and naming them as horizontal rather than as platform Building Blocks is the move that lets the umbrella frame hold.

Identity and access. This is the fabric that answers the question "is this person, device, or service allowed to do this thing right now?" Entra ID or its equivalent, federated to whatever device posture system you use, with privileged access management for the high-risk tier. The Paved Road consumes this for application identity: workload identity federation, RBAC on cloud resources, conditional access for admin paths. The Modern AI Workplace consumes the same fabric for device-to-device authorization (which laptop is allowed to reach which workgroup compute), endpoint AI model entitlement (who is allowed to run which model locally), workgroup compute access (which project teams can address which Spark), and increasingly model-to-model authentication. When the scout on a laptop calls back to the heavy artillery in the office, or an agent running locally invokes a tool exposed by a cloud service, those calls are themselves identities that have to be authorized, audited, and revocable. One identity system, multiple consumption patterns, and the model-to-model case is the one most organizations haven't started thinking about yet.
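
The model-to-model case deserves a concrete shape. Below is a minimal sketch, in Python, of what "agent calls are themselves identities" means in practice: every call from a scout or an agent chain carries a principal, is checked against an entitlement table, and leaves an audit record. All names here (the principal format, the entitlement pairs, the `Caller` fields) are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Caller:
    principal: str     # e.g. "agent:scout@laptop-4711" or "user:jdoe" (hypothetical format)
    device: str        # posture-checked device identity
    on_behalf_of: str  # the human the agent chain ultimately acts for

# Hypothetical entitlements: which principal classes may invoke which targets.
ENTITLEMENTS = {
    ("agent:scout", "spark:cph-proj-42"),
    ("user:jdoe", "spark:cph-proj-42"),
}

AUDIT_LOG: list[tuple[str, str, bool]] = []

def authorize(caller: Caller, target: str) -> bool:
    """Every model-to-model call is an identity: authorized, audited, revocable."""
    principal_class = caller.principal.split("@")[0]
    allowed = (principal_class, target) in ENTITLEMENTS
    AUDIT_LOG.append((caller.principal, target, allowed))  # audit even denials
    return allowed
```

Revocation in this sketch is just removing a pair from the entitlement table; the point is that the agent's access is a first-class, inspectable object rather than an implicit consequence of the human's session.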

Data governance and privacy. This is the fabric that answers "what is this data, who's allowed to handle it, and where is it allowed to go?" Classification, lineage, privacy enforcement. The Paved Road consumes this for data assets in cloud storage and databases, the usual kind of thing you'd expect a data catalog to track. The Modern AI Workplace consumes the same classification metadata for a different purpose: it uses it to route AI workloads to the appropriate compute tier. A document classified as Restricted is not allowed to leave the laptop, period. An Internal document can flow to a workgroup Spark. A Public document can hit a cloud LLM. The classification scheme is the same in both cases. The decision it informs is different.
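
The "classification routes compute" decision is simple enough to sketch. Assuming the four labels named above and a strict ordering of tiers by data exposure, the routing logic clamps a requested tier to the most permissive tier the label allows. The mapping below is illustrative, not a real Purview policy.

```python
from enum import Enum

class Tier(Enum):
    ENDPOINT = "endpoint"    # local scout model on the laptop
    WORKGROUP = "workgroup"  # office Spark-class machine
    CLOUD = "cloud"          # cloud AI service

# Hypothetical ceiling per label: the most permissive tier the data may reach.
CEILING = {
    "Restricted":   Tier.ENDPOINT,   # not allowed to leave the laptop, period
    "Confidential": Tier.WORKGROUP,
    "Internal":     Tier.WORKGROUP,
    "Public":       Tier.CLOUD,
}

_ORDER = [Tier.ENDPOINT, Tier.WORKGROUP, Tier.CLOUD]  # increasing exposure

def route(classification: str, requested: Tier) -> Tier:
    """Clamp the requested tier to what the document's label permits."""
    ceiling = CEILING[classification]
    if _ORDER.index(requested) <= _ORDER.index(ceiling):
        return requested
    return ceiling
```

The essential property is that the label vocabulary is owned by the data governance fabric and only consumed here; the routing table is the Modern AI Workplace's, the labels are not.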

Observability and detection. This is the fabric that answers "what is happening across our estate right now, and would we know if something were going wrong?" Splunk, Sentinel, whatever your stack is. Security telemetry to one place, application telemetry to another. The Paved Road has been feeding this for years; sign-in logs, audit logs, application traces, all routed to the right indexes. The Modern AI Workplace adds new event sources (endpoint inference logs, workgroup compute audit trails, network ACL hits from whatever ZTNA fabric you settle on) but they flow into the same indexes with the same retention policies. The fabric doesn't change. The producers do.
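
"The fabric doesn't change, the producers do" can be made concrete with a normalization step: each new producer wraps its events in the shared envelope and is routed to an existing index under the existing retention policy. Source names, index names, and the envelope fields below are all assumptions for illustration.

```python
# Hypothetical producer-to-index routing: new AI-era sources land in the
# same indexes, under the same retention, as the cloud-era sources.
INDEX_FOR_SOURCE = {
    "cloud.signin":        "security",
    "cloud.app_trace":     "apps",
    "endpoint.inference":  "security",   # new producer, existing index
    "workgroup.audit":     "security",   # new producer, existing index
    "ztna.acl":            "security",   # new producer, existing index
}

RETENTION_DAYS = {"security": 365, "apps": 90}  # assumed policy, per index

def normalize(source: str, raw: dict) -> dict:
    """Wrap a producer-specific event in the shared envelope."""
    index = INDEX_FOR_SOURCE[source]
    return {
        "index": index,
        "source": source,
        "retention_days": RETENTION_DAYS[index],
        "event": raw,
    }
```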

Policy and compliance. This is the fabric that answers "what are we required to do, what are we forbidden from doing, and how do we prove either?" Azure Policy enforces against cloud resources. Intune CSPs enforce against endpoints. Network ACLs enforce against connectivity. The implementations are different, and the teams operating each enforcement point are different. The governance pattern, though, is identical: policy as code, comply-or-explain exceptions, a ServiceNow-backed exception process, a designated architect-of-the-gate role. Both tracks live under the same governance discipline, even though the levers each track pulls are different.
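
The comply-or-explain pattern has a small, precise core that is the same regardless of which enforcement point implements it: a resource either satisfies the policy, carries a recorded and unexpired exception, or is in violation. The field names below are invented for the sketch; the exception record stands in for whatever your ServiceNow-backed process produces.

```python
from datetime import date

def evaluate(resource: dict, policy: dict, exceptions: list[dict]) -> str:
    """Comply-or-explain: 'comply', 'explain' (documented exception), or 'violation'."""
    key, required = policy["key"], policy["required"]
    if resource.get(key) == required:
        return "comply"
    for ex in exceptions:
        if (ex["resource"] == resource["id"]
                and ex["policy"] == policy["id"]
                and ex["expires"] >= date.today()):
            return "explain"  # time-boxed, recorded, attributable
    return "violation"
```

Azure Policy, Intune CSPs, and network ACLs all reduce to this loop; what differs is only where `resource` comes from and who operates the enforcement point.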

Vending and lifecycle. This is the fabric that answers "how do we provision a new thing in a way that's compliant from the moment it exists?" It's the one most organizations underrate, because the unglamorous work of automating the first day of a resource's life is what determines whether everything that follows is governed or not. The Paved Road has a Subscription Vending Machine that dispenses cloud landing zones in twenty minutes, fully tagged, identity-bound, policy-assigned, ready for a development team to start using. The Modern AI Workplace needs an endpoint vending machine (laptop plus AI scout bundle plus ZTNA profile, dispensed via Intune) and a workgroup vending machine (Spark project tenancy with ACLs and data lifecycle policy). All three use the same pattern: a request comes in, an automated pipeline does the provisioning, identity gets bound, policy gets assigned, an audit trail starts. If you have built one of these vending machines, you can build the other two. But you have to decide to build them, and you have to decide who owns them.
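
The claim that all three vending machines share one pattern can be shown directly: one provisioning function, parameterized by kind, that binds identity, assigns policy, and starts the audit trail at creation. Everything here (field names, tag keys, policy names) is a sketch of the pattern, not a real pipeline.

```python
import uuid
from datetime import datetime, timezone

def vend(kind: str, request: dict) -> dict:
    """Provision a thing that is compliant from the moment it exists.

    kind is one of "subscription", "endpoint", "workgroup" (illustrative).
    """
    return {
        "id": f"{kind}-{uuid.uuid4().hex[:8]}",
        "kind": kind,
        "owner": request["owner"],                      # identity bound at birth
        "tags": {"cost_center": request["cost_center"]},
        "policies": ["baseline", f"{kind}-baseline"],   # assigned, never opt-in
        "audit": [("created", datetime.now(timezone.utc).isoformat())],
    }
```

A subscription vend would go on to call cloud APIs, an endpoint vend would hand off to Intune, a workgroup vend would configure ACLs on the Spark; the governed skeleton is identical.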

These five fabrics are the umbrella. They are described once, owned by named teams, consumed by both tracks. Anything you find that lives in only one track is a candidate to be promoted upward. Anything you find duplicated between the tracks is a sign the umbrella isn't doing its job.

The substrate beneath the substrate

There is a layer underneath the fabrics that I want to acknowledge directly, because if the essay stopped at "five horizontal fabrics" it would describe the architecture as if it floats. It doesn't.

The fabrics are not principles. They are operated services. The identity provider is software running on infrastructure, with operators on call when authentication goes down. The Splunk indexers have capacity planning problems and storage costs. The ZTNA gateways have hardware refresh cycles and patch windows. The vending pipelines are themselves applications that have to be deployed, monitored, and maintained by someone whose pager goes off when they break. None of this is incidental to the architecture. It is the architecture, in the sense that the architecture only exists when the substrate beneath it works.

This is the layer that the cloud-only narrative of the last decade has been quietly allowed to forget. The Modern AI Workplace track makes that forgetting unsustainable, because workgroup compute and endpoint NPUs and the network fabric that connects them are all physical things in physical buildings. A DGX Spark in a Copenhagen office is not a cloud abstraction. But it isn't a traditional server either. It's a desktop-form-factor device that draws roughly what a gaming PC draws, plugs into a normal outlet, and sits on or near a desk. It needs network, lifecycle planning, identity binding, an audit trail, and a person who knows what to do when the GPU degrades. What it doesn't need is a cabinet or a controlled facility, which is precisely what makes it harder to govern than infrastructure that demanded those things. The hardware is unobtrusive enough that none of the existing handling categories quite fit. The work of figuring out which disciplines apply to it, and who owns each one, is the work the AI moment has put back on the table for people who had stopped being asked.

I'll write more about this in a follow-up piece, because it deserves more space than a section. The short version, for the purposes of this frame, is that the umbrella has a foundation, and the foundation is operated by people whose contribution to the AI conversation has been systematically undervalued because the dominant narrative pretended infrastructure was a cloud bill. It isn't, and the second track will make that obvious in a way the first track was allowed to obscure.

A picture of the whole thing

The principles never move. The fabrics are described once and consumed by both tracks. The tracks describe their own delivery mechanisms but only insofar as they're track-specific. The seams are explicit, named, and owned. And the substrate underneath is acknowledged as the thing that makes any of it real.

The seams

The umbrella frame works only if the seams between the tracks are explicit and owned. Most "two-track strategy" documents fail here, because they describe the tracks in isolation and never specify the contracts between them.

A seam is the place where two tracks have to agree on something for either of them to function. A contract is the document, written down somewhere, that records what they've agreed. "We will represent classification labels in this format, refresh them at this cadence, and treat them as authoritative" is what a contract sounds like in practice. It's mundane. It's also what separates an architecture that holds together from one that quietly fragments while everyone insists it's working.

Three seams, in my experience so far, are the ones that matter.

The first seam is classification flowing across. When the data governance fabric classifies a document (Public, Internal, Confidential, Restricted) that classification has to be readable by the Modern AI Workplace's tier-routing logic, so that the routing logic can refuse to send a Restricted document to an endpoint LLM and require workgroup-tier processing instead. The Paved Road produces this classification (Purview, sensitivity labels, whatever your stack uses). The Modern AI Workplace consumes it as a routing input. The contract between the two would say something like: classification labels are stable strings drawn from a defined vocabulary, exposed via a documented API the routing logic can call, refreshed within seconds when a label changes, and never redefined per track. If a track invents its own classification scheme, the umbrella has failed.
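
What "a contract" means at this seam can be sketched as a validator on the consuming side: the Modern AI Workplace accepts only labels drawn from the closed vocabulary, within the agreed freshness bound, and rejects anything a track might have invented locally. The payload shape and the sixty-second staleness figure are assumptions for the sketch.

```python
# Closed vocabulary owned by the data governance fabric; never redefined per track.
VOCABULARY = frozenset({"Public", "Internal", "Confidential", "Restricted"})

MAX_STALENESS_SECONDS = 60  # assumed contract figure, not a standard

def validate_label_payload(payload: dict) -> dict:
    """Enforce the classification contract at the consuming track's boundary."""
    label = payload["label"]
    if label not in VOCABULARY:
        raise ValueError(f"unknown classification label: {label!r}")
    if payload.get("staleness_seconds", 0) > MAX_STALENESS_SECONDS:
        raise ValueError("label staleness exceeds contract")
    return payload
```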

The second seam is governance instrumentation extending to non-cloud compute. A workgroup Spark sitting in your Copenhagen office needs the same governance treatment as an Azure Kubernetes cluster: identity binding, policy assignment, observability hookup, lifecycle management. But it isn't in Azure. The Paved Road's Terraform patterns don't apply directly. The contract here is that the patterns (declarative configuration, policy as code, vended provisioning, audit trail) extend to workgroup compute even though the implementations differ. The workgroup Spark is governed by the Paved Road's instrumentation patterns even though it is operationally owned by the Modern AI Workplace track. Without this seam, you end up with a Spark in the office that nobody's actually responsible for, configured by whoever set it up, with no audit trail and no lifecycle plan.
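
Extending the declarative pattern to non-cloud compute looks, at its simplest, like a desired-state record for the Spark plus a drift check, the same shape Terraform gives you for cloud resources. There is no real provider for DGX-class hardware implied here; every name below is illustrative.

```python
# Desired state for a workgroup Spark, in the Paved Road's declarative idiom.
SPARK_DESIRED_STATE = {
    "device": "spark-cph-01",
    "identity": "spark-cph-01@corp",   # bound to the identity fabric
    "policies": ["baseline", "workgroup-ai"],
    "telemetry": {"index": "security", "sources": ["audit", "inference"]},
    "lifecycle": {"refresh_months": 36, "owner": "team:modern-workplace"},
}

def drift(actual: dict, desired: dict) -> list[str]:
    """Report the keys where the running device diverges from declared state."""
    return [k for k in desired if actual.get(k) != desired[k]]
```

The drift report is what turns "a Spark somebody set up" into a governed asset: divergence is detected against a declared truth rather than discovered in an incident.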

The third seam is model and prompt provenance as a shared service. A fine-tuned Llama variant might run on a Spark, an Azure OpenAI deployment, or, eventually, with quantization, be shipped down to laptops as a scout. The model itself, with its training data, evaluation results, and prompt lineage, is a data product with provenance. It cannot belong to one track. If the Paved Road's LLMOps function tracks prompts and models for cloud-tier inference, and the Modern AI Workplace tracks them separately for endpoint and workgroup tiers, you have two registries, two truths, and a guarantee of drift. The contract is that model and prompt provenance is a horizontal capability. Neither track owns it alone, but both depend on it.
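
One registry, many deployments is the whole of this seam, and it fits in a few lines: every tier's deployment resolves back to the same provenance record, so there is exactly one truth about what a model is, wherever it runs. Record fields and identifiers below are made up for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRecord:
    name: str
    base: str                       # e.g. "llama-3-70b" (illustrative)
    training_data_ref: str          # pointer into the data catalog
    eval_ref: str                   # pointer to evaluation results
    prompt_lineage: tuple[str, ...]

REGISTRY: dict[str, ModelRecord] = {}
DEPLOYMENTS: dict[str, str] = {}    # deployment id -> registered model name

def register(record: ModelRecord) -> None:
    REGISTRY[record.name] = record

def deploy(model: str, tier: str, target: str) -> str:
    """Every tier's deployment points back at the same registry entry."""
    if model not in REGISTRY:
        raise KeyError(f"unregistered model: {model}")
    dep_id = f"{tier}:{target}:{model}"
    DEPLOYMENTS[dep_id] = model
    return dep_id
```

Two registries would make `DEPLOYMENTS` resolve to different records per tier; that divergence is the drift the contract exists to forbid.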

There is probably a fourth seam, which I'm less confident about and which I'd like to test against other people's experience. It's the portability seam: the requirement that data and models crossing track boundaries use open formats and standard interfaces, so that an organization isn't locked into a particular track's vendor choices when the technology shifts. This matters more in the AI domain than it did for cloud applications, because the model and inference runtime landscape is moving fast and last year's defaults are this year's lock-in. Operationalizing this is hard, and I haven't seen anyone do it well yet.

The honest part

I should name what's hard about this, because the architecture diagram makes it look cleaner than it is.

The first hard thing is organizational. The umbrella frame requires a role that owns the umbrella. Someone whose job description includes the five fabrics and the seams between tracks. In most enterprises, that role doesn't exist cleanly. It's the Principal Enterprise Architect on a good day, the CTO's office on a better one, and nobody on a typical day. Without that ownership, the two tracks will drift apart in eighteen months because each track's day-to-day pressures will pull them apart and there will be nobody whose job is to enforce coherence. If you draw the umbrella on a whiteboard and there is no name attached to it, you have not solved the problem. You have documented it.

The second hard thing is political. The Modern AI Workplace track, in most organizations today, doesn't have a sponsor. The Paved Road usually has Platform Engineering or a CIO/CTO sponsor. The Modern AI Workplace has aspirations and Copilot licenses. Standing it up as a peer to the platform, which is what the umbrella frame requires, means convincing leadership that AI in the workplace is not a feature of the existing platform but a parallel program with its own roadmap, its own budget, and its own governance commitments. That conversation is harder if the existing platform isn't fully anchored either, which it often isn't. There is no clean answer to this. The best I can offer is: name the gap honestly, and don't pretend the umbrella exists when it doesn't.

The third hard thing is technical and unresolved. The ZTNA fabric question (what actually carries identity-bound traffic between laptops, workgroup compute, and cloud) is open. Cisco Secure Access, Entra Private Access, NetBird (an open-source WireGuard-based mesh with a self-hostable control plane that addresses a lot of the data-sovereignty objections SaaS-only ZTNA faces), something else. Each has trade-offs and none is obviously right for every organization. Endpoint AI model governance (which models employees can run locally, how those models are updated, how their behavior is observable) is a problem nobody has solved well at enterprise scale yet. Workgroup compute capacity planning, when the unit cost is six figures and the demand pattern is project-driven, is a finance question masquerading as an architecture question. I don't have clean answers to any of these, and I'm suspicious of anyone who claims to.

The frame doesn't solve these problems. It does something narrower but more useful: it puts them in the right place. The ZTNA fabric is an open decision at the umbrella level, not a per-track decision, because both tracks consume the answer. Endpoint AI model governance is part of the policy fabric, not the Modern AI Workplace track in isolation. Workgroup compute capacity is a vending and lifecycle problem inheriting Paved Road patterns. The frame turns scattered "things that are hard" into named architectural decisions with owners.

What this is for

I am not arguing that every organization needs to formally split into two tracks tomorrow. I am arguing that the second track exists whether you've named it or not, and that the cost of leaving it unnamed is paid in the form of shadow AI, scattered governance, divergent identity flows, and an architecture function that loses coherence faster than it can publish documents.

If you are an enterprise architect looking at an existing platform program and wondering why the AI conversation doesn't fit into it, this is the frame I'd suggest trying. If you are a security architect, which is where I'm writing from, the frame matters because the controls you care about (identity, data classification, observability, policy) are exactly the horizontal fabric that has to stretch across both tracks. If those controls only enforce against the platform, they don't actually enforce. They enforce against half the compute estate and miss the other half.

If you've worked through this in your own organization, I'd genuinely like to hear how. The question I'm most interested in isn't whether the second track exists. It does. The interesting question is who in the org chart ends up owning it, and how the seams get specified in practice. The architectural framing is the easy part. The naming of an owner, and the standing up of the governance discipline that makes the umbrella real, is the part where most organizations are still improvising.

The next piece in this series goes underneath the umbrella, into the substrate the fabrics depend on. It's called The Substrate Beneath the Substrate: Why AI Re-Physicalizes the Platform, and it argues that the AI moment has quietly done something the platform-engineering discipline wasn't ready for: it has put physical hardware back into the architecture conversation, and made the work of the people who keep that hardware alive central again. After that I'll work through the five fabrics in turn, the ZTNA decision specifically, and how the security architect role itself is shifting under our feet. For now, this is the frame.


