Why We Started Hoplynk

Most people don’t think about “networks.” They open a laptop, click a Wi-Fi name, and expect life to work. When it doesn’t, they hop to another Wi-Fi network, maybe toggle a hotspot, swear a little, and try again. The burden sits on the human. Pick the right network. Pray it works. Guess again when it fails. Keep juggling.

In the military we have a concept for this: the PACE plan (Primary, Alternate, Contingency, Emergency). It’s how you design communications so that when the primary path dies, the message still gets through. On your laptop’s network drop-down, you’re staring at your PACE plan. The problem is, you’re the one doing the planning and execution in real time. The plan is to switch to another network; the problem is you don’t know whether it will even work when you do.

I’ve lived that problem more times than I can count.

“We’re on our own base, so why can’t we get online?”

While supporting early efforts in the Ukraine War, I watched units struggle to stand up reliable connectivity, even on a U.S. base we’d had for decades. We weren’t in some far-flung valley; we were in a place that should have been easy. And still, the same pattern: latency spikes, dead spots, ad-hoc fixes, and a parade of “try this other option.” After twenty years of uncontested network dominance in Iraq and Afghanistan, we were still fighting the same reliability demons.

The lesson hit hard: it’s not a hardware shortage, it’s an architecture and orchestration problem. We have links. We don’t have a brain that treats them like one resilient system. The solutions that sorta-kinda do this are pricey and assume you’re an IT networking professional. The work is hard enough on its own, and the talent to do it is even harder to find when you need it.

The road warrior phase (and all the dropped calls)

After I left the Marines, I worked everywhere but an office: airport gates, hotels, cafés, back seats of cars. If you’ve ever tried to lead a critical video call with three bars of hotel Wi-Fi, you know the feeling. One too many dropped calls turned irritation into anger. Why am I still the one babysitting networks? Why can’t the connection adapt around me the way cruise control adapts to traffic? What else should I buy in addition to Starlink?

Deadhorse, Alaska

The moment that broke it open for me happened north of nowhere. My vehicle died outside Deadhorse, just about twenty-two miles from the Arctic Ocean. I had thousands of dollars of comms gear: Starlink, an Iridium sat phone, an Inmarsat sat phone, and a cell phone. I had options—and no way to get a call out.

Each device was married to a single network. Each one demanded I decide, minute to minute, which path might work. I didn’t want to be a switchboard operator in the snow; I wanted a system that chose for me and got the call out by any means necessary. Primary. Alternate. Contingency. Emergency. Automatically.

Staring at that pile of hardware, the idea stopped being abstract. We didn’t need another “faster” link. We needed something to drive all of them.

From interviews to a company

Back in California, I met Althea and Charlie in a Stanford course called Hacking for Defense. We started doing what the class pushes you to do: shut up and listen. We talked to hundreds of people—operators, IT leads, first responders, field engineers, folks running drones and robots and remote sites. The stories rhymed:

  • The links exist, but you’re forced to pick one.
  • Failover happens after failure.
  • Tools flood you with charts when what you really want is for the connection to just…work.

By the time we finished those interviews, we were convinced the world needed Hoplynk. So we made the decision to build it.

What we’re building (in plain English)

Hoplynk is the continuously learning, continuously improving autopilot for connectivity. We watch every available path (cellular, fiber, radio, Wi-Fi, satcom) and decide, moment to moment, how to get your traffic through with the least drama. Sometimes that means bonding multiple links. Sometimes it’s steering around a noisy channel. Sometimes it’s preemptively jumping before a link collapses so you never feel the bump. You don’t toggle anything; you don’t think about it. You connect once, and it does the math for you, every second.
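
To make that concrete, here’s a minimal sketch of the kind of decision loop we mean, written in Python. This is an illustration of the idea, not Hoplynk’s actual engine: the link names, the health thresholds, and the pick_path heuristic are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One available path and its most recent health measurements."""
    name: str             # e.g. "hotel-wifi", "cellular", "starlink"
    role: str             # PACE role: "primary", "alternate", "contingency", "emergency"
    loss: float = 0.0     # recent packet loss, 0.0 to 1.0
    rtt_ms: float = 50.0  # recent round-trip time in milliseconds

    def healthy(self) -> bool:
        # Usable if loss and latency are within (made-up) tolerances.
        return self.loss < 0.05 and self.rtt_ms < 500

PACE_ORDER = {"primary": 0, "alternate": 1, "contingency": 2, "emergency": 3}

def pick_path(links: list[Link]) -> Link | None:
    # Walk the PACE plan in order and take the first healthy link,
    # rather than waiting for the current one to fail outright.
    for link in sorted(links, key=lambda l: PACE_ORDER[l.role]):
        if link.healthy():
            return link
    return None  # nothing usable: report the outage instead of hiding it

# Hypothetical measurements; in a real system these come from live probes.
links = [
    Link("hotel-wifi", "primary", loss=0.12, rtt_ms=220),   # lossy: skipped
    Link("cellular", "alternate", loss=0.01, rtt_ms=80),    # healthy: chosen
    Link("starlink", "contingency", loss=0.00, rtt_ms=45),
    Link("iridium", "emergency", loss=0.02, rtt_ms=700),    # too slow anyway
]

chosen = pick_path(links)
print(chosen.name if chosen else "no usable path")  # -> cellular
```

The real version runs this check continuously, bonds links instead of just switching between them, and moves traffic before the failure reaches you. But the shape is the same: measure every path, rank it against the plan, send the next packet down the best one.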

This isn’t about dashboards for the sake of dashboards. It’s about making the next packet a good one.

What we believe

  • Operator last. The network should do the work; the user should not.
  • Field first. If it doesn’t survive a dusty truck bed, a stormy roof, or a jamming-prone edge, we didn’t build it right.
  • Simple beats clever. Deterministic, observable decisions win over magic tricks you can’t explain.
  • Tell the truth early. When it breaks, we say so, fix it, and publish what we learned.

Why now

Everything critical is moving to the edge: autonomy, sensors, tele-ops, disaster response, remote work that isn’t “remote” so much as “moving.” The old model of picking one network and praying doesn’t scale. Resilience has to be the default, not the upgrade. PACE has to be automatic: handled by software, not run as a checklist in your head.

An open invitation

We started Hoplynk because I never want to watch a mission stall, or a stranded driver fail to place a call, while a human guesses which network might work this minute. If that resonates with you, if uptime under pressure is your job, sign up. We’ll bring a kit, plug it in, and see how it handles your worst day.