HPC & AI Infrastructure

Where Network Performance
Is the Product

The hyperscale and HPC/AI infrastructure market has fundamentally changed what great optical network management looks like, and much of the industry has not yet caught up to what that means in practice.

The Stakes Are Different

A circuit that slips three weeks in a traditional enterprise environment is a manageable inconvenience. A circuit that slips three weeks in an HPC or AI environment can represent tens of millions of dollars in idle infrastructure and broken customer commitments that are extremely difficult and expensive to recover from.

When GPU clusters worth hundreds of millions of dollars sit idle because a circuit has not turned up on schedule, the ability to drive connectivity delivery with both technical authority and commercial precision is not optional. It is everything.

$100M+

GPU cluster investments at stake

Zero

Margin for error in delivery

Microsecond Sensitivity

AI training and inference workloads demand network performance at latencies and volumes that would have seemed implausible even ten years ago.

Purpose-Built for HPC & AI

I understand the specific interconnection architectures that HPC and AI workloads require.

Low-Latency Spine Designs

Purpose-built network architectures optimized for the microsecond-sensitive requirements of GPU cluster workloads.

Redundant Diverse Paths

True physical diversity validated at every splice point, ensuring the resilience that AI training workloads demand.

Cross-Connect Density

Meet-me room architecture designed for the scale of interconnection that hyperscale environments require.

Compressed Timelines

Deep carrier relationships and technical authority that compress delivery timelines when every day of delay costs millions.

The Complete Commercial & Operational Cycle

I have worked through every stage of the HPC/AI connectivity lifecycle, from initial carrier engagement through ongoing network management.

01

Carrier Build-In Programs

Coordinating with carriers on new fiber builds into target facilities, negotiating build terms, and managing construction timelines.

02

Street Meet Coordination

Managing the complex logistics of fiber path construction from carrier POPs to datacenter entrance facilities.

03

Provisioning & Turn-Up

Driving circuit activation through acceptance testing with the technical depth to challenge carrier claims and compress schedules.

04

Operational Handover

Ensuring seamless transition to operations teams with complete documentation and validated performance metrics.

05

Capacity Augmentation

Proactive planning for growth with models that predict augmentation needs before utilization thresholds become emergencies.
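The kind of utilization-trend model this step describes can be sketched in a few lines. This is a hypothetical illustration, not the actual model used: the 70% augmentation threshold, the 90-day carrier lead time, and the `months_until_threshold` helper are all illustrative assumptions, and a real forecast would account for seasonality and step changes rather than a simple linear trend.

```python
# Hypothetical sketch: project when a circuit's peak utilization will cross
# an augmentation threshold, so new capacity can be ordered inside the
# carrier's lead time instead of after an emergency.

def months_until_threshold(samples, threshold=0.70):
    """Fit a linear trend to monthly peak-utilization samples (0.0-1.0)
    and return the estimated months until the threshold is crossed,
    or None if utilization is flat or declining."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Ordinary least-squares slope and intercept.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # no growth trend, so no augmentation forecast
    intercept = mean_y - slope * mean_x
    # Solve intercept + slope * t = threshold, relative to the latest sample.
    t_cross = (threshold - intercept) / slope
    return max(0.0, t_cross - (n - 1))

# Illustrative history: peak utilization climbing roughly 4 points per month.
history = [0.42, 0.45, 0.50, 0.53, 0.58]
eta = months_until_threshold(history)
print(f"~{eta:.1f} months until 70% augmentation threshold")
# With an assumed 90-day carrier lead time, this circuit should be
# in the augment queue now, not when the threshold alarm fires.
```

The point of even a model this simple is that it converts a utilization graph into a procurement date, which is what turns capacity planning from reactive to proactive.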

Understanding the Evolving Landscape

The growth of GPU cluster infrastructure across colocation, hyperscale, and edge environments is creating unprecedented demand for high-capacity, low-latency, diverse fiber connectivity in markets and buildings where that connectivity previously did not exist, or not at sufficient scale.

That demand is creating both enormous opportunity and significant execution risk for the organizations trying to deliver it. I have spent my career developing exactly the carrier relationships, the technical depth, and the project management discipline needed to navigate that environment successfully.

Looking Forward

"I am energized by the challenge of doing it at even greater scale going forward."

Ready to Connect?

Open to conversations about senior network infrastructure, carrier strategy, and connectivity leadership roles in HPC, AI infrastructure, and high-growth datacenter environments.

Get in Touch