Launch HN: Constellation Space (YC W26) – AI for satellite mission assurance (constellation-io.com)
23 points by kmajid 4 hours ago | 4 comments
Hi HN! We're Kamran, Raaid, Laith, and Omeed from Constellation Space. We built an AI system that predicts satellite link failures before they happen. Here's a video walkthrough: https://www.youtube.com/watch?v=069V9fADAtM.

Between us, we've spent years working on satellite operations at SpaceX, Blue Origin, and NASA. At SpaceX, we managed constellation health for Starlink. At Blue, we worked on next-gen test infra for New Glenn. At NASA, we dealt with deep space communications. The same problem kept coming up: by the time you notice a link is degrading, you've often already lost data.

The core issue is that satellite RF links are affected by dozens of interacting variables. A satellite passes overhead, and you need to predict whether the link will hold for the next few minutes. That depends on: the orbital geometry (elevation angle changes constantly), tropospheric attenuation (humidity affects signal loss via ITU-R P.676), rain fade (calculated via ITU-R P.618 - rain rates in mm/hr translate directly to dB of loss at Ka-band and above), ionospheric scintillation (we track the KP index from magnetometer networks), and network congestion on top of all that.
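
To give a feel for the physics side, here's a toy sketch of two of those terms (illustrative only - the rain coefficients below are placeholders, not the actual frequency- and polarization-dependent ITU tables):

    import math

    def free_space_path_loss_db(distance_km, freq_ghz):
        # Standard FSPL in dB for a given slant range (km) and carrier frequency (GHz).
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    def rain_attenuation_db(rain_rate_mm_hr, effective_path_km, k=0.1, alpha=1.1):
        # Specific attenuation gamma = k * R^alpha (dB/km) times an effective path
        # length; k and alpha here are placeholder Ka-band-ish values.
        return k * rain_rate_mm_hr ** alpha * effective_path_km

    # e.g. a 1,000 km slant range at 20 GHz through 10 mm/hr rain over a 4 km
    # effective rain path:
    loss_db = free_space_path_loss_db(1000, 20) + rain_attenuation_db(10, 4)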

The traditional approach is reactive. Operators watch dashboards, and when SNR drops below a threshold, they manually reroute traffic or switch to a backup link. With 10,000 satellites in orbit today and 70,000+ projected by 2030, this doesn't scale. Our system ingests telemetry at around 100,000 messages per second from satellites, ground stations, weather radar, IoT humidity sensors, and space weather monitors. We run physics-based models in real time - the full link budget equations, ITU atmospheric standards, orbital propagation - to compute what should be happening. Then we layer ML models on top, trained on billions of data points from actual multi-orbit operations.
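
The physics-first, ML-for-the-residual structure looks roughly like this (toy data and a stand-in model, not our actual features or pipeline):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    elevation_deg = rng.uniform(10, 90, 500)
    rain_mm_hr = rng.exponential(2.0, 500)

    # What the link budget says SNR "should" be (toy physics baseline).
    physics_snr = 20 * np.sin(np.radians(elevation_deg)) - 0.5 * rain_mm_hr
    # Observed SNR includes effects the simple physics model misses.
    observed_snr = physics_snr - 0.1 * rain_mm_hr ** 1.3 + rng.normal(0, 0.3, 500)

    # The ML layer learns only the residual between observation and physics.
    X = np.column_stack([elevation_deg, rain_mm_hr])
    residual_model = Ridge().fit(X, observed_snr - physics_snr)
    predicted_snr = physics_snr + residual_model.predict(X)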

The ML piece is where it gets interesting. We use federated learning because constellation operators (understandably) don't want to share raw telemetry. Each constellation trains local models on their own data, and we aggregate only the high-level patterns. This gives us transfer learning across different orbit types and frequency bands - learnings from LEO Ka-band links help optimize MEO or GEO operations. We can predict most link failures 3-5 minutes out with >90% accuracy, which gives enough time to reroute traffic before data loss. The system is fully containerized (Docker/Kubernetes) and deploys on-premise for air-gapped environments, on GovCloud (AWS GovCloud, Azure Government), or standard commercial clouds.
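
The aggregation step itself is standard FedAvg-style weighted averaging (sketch below with made-up numbers; the hard part is the orchestration across operator security boundaries, more on that further down):

    import numpy as np

    def aggregate(local_weights, n_samples):
        # Average per-operator model weights, weighted by local dataset size,
        # so raw telemetry never leaves each operator's boundary.
        total = sum(n_samples)
        return sum(w * (n / total) for w, n in zip(local_weights, n_samples))

    # e.g. three constellations contributing models trained on different
    # amounts of local data:
    global_weights = aggregate(
        [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)],
        [1000, 5000, 2000],
    )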

Right now we're testing with defense and commercial partners. The dashboard shows real-time link health, forecasts at 60/180/300 seconds out, and root cause analysis (is this rain fade? satellite setting below horizon? congestion?). We expose everything via API - telemetry ingestion, predictions, topology snapshots, even an LLM chat endpoint for natural language troubleshooting.
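
A hypothetical example of what pulling a forecast looks like (endpoint and field names here are illustrative, not our published schema):

    import requests

    resp = requests.get(
        "https://api.example.com/v1/links/sat-42/gs-oslo/forecast",
        params={"horizons": "60,180,300"},
        headers={"Authorization": "Bearer <token>"},
        timeout=10,
    )
    for f in resp.json()["forecasts"]:
        print(f["horizon_s"], f["predicted_snr_db"], f["root_cause"])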

The hard parts we're still working on: prediction accuracy degrades for longer time horizons (beyond 5 minutes gets dicey), we need more labeled failure data for rare edge cases, and the federated learning setup requires careful orchestration across different operators' security boundaries. We'd love feedback from anyone who's worked on satellite ops, RF link modeling, or time-series prediction at scale. What are we missing? What would make this actually useful in a production NOC environment?

Happy to answer any technical questions!





Pretty intriguing demo video. How do you ensure your telemetry ingestion holds up operationally? That will be a daunting task. The output will only be as good as your telemetry; any delay or break in the data and everything is bound to break.

Great point - telemetry reliability is the biggest hurdle for any mission-critical system. We address the "garbage in, garbage out" risk by prioritizing freshness (our pipeline treats latency as a failure). Concretely, we use:

- a "leaky" buffer strategy: if data is too old to be actionable for a 3-minute forecast, we drop it so the models aren't lagging behind the physical reality of the link (rough sketch below);

- graceful degradation: when telemetry is delayed or broken, the system automatically falls back to physics-only models, i.e. orbital propagation and ITU standards; and

- edge validation: we validate and normalize data at the ingestion point; if a stream becomes corrupted or "noisy," the system flags that specific sensor as unreliable and adjusts the prediction confidence scores in real time.
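
Rough sketch of the staleness check in the leaky buffer (simplified to a single function; the real pipeline is a streaming system, and the cutoff is illustrative):

    import time

    MAX_AGE_S = 30  # illustrative cutoff; older data can't inform a 3-minute forecast

    def ingest(message, now=None):
        # Drop telemetry that is too stale to act on, rather than letting it
        # lag the models behind the physical state of the link.
        now = time.time() if now is None else now
        if now - message["timestamp"] > MAX_AGE_S:
            return None  # "leaky": stale data is discarded
        return message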


Are you raising?

Very cool company! Are y’all hiring?


