According to Reuters, SpaceX revised its Starlink Global Privacy Policy on January 15 to explicitly allow the use of customer data to train machine learning and AI models. The policy states this data, which can include location, contact info, IP addresses, and even “communication data” like audio and visual information, may be used unless a user opts out. The shift comes as SpaceX, reportedly the world’s most valuable private company, is in talks to merge with Elon Musk’s AI firm xAI ahead of a potential blockbuster IPO later this year that could value the combined entity at over $1 trillion. The previous policy, from November, contained no such AI training language. Starlink’s network of over 9,000 satellites currently serves more than 9 million users, and xAI was recently valued at $230 billion.
The data gold rush is on
Here’s the thing: this isn’t just about tweaking a terms-of-service document. It’s a strategic land grab for one of the most unique and valuable data sets on the planet. We’re not just talking about what you browse. Starlink potentially has access to the location patterns of millions of users, including ships, planes, and remote communities. It can draw “inferences” from that data. And now, it can feed all of it into AI models.
But the policy is, frankly, vague. It doesn’t specify exactly what data gets used for training. Is it anonymized metadata? Or could it be more sensitive “communication data”? That lack of clarity is what has privacy experts like Georgetown’s Anupam Chander concerned. As he told Reuters, there’s often a legitimate use, but without clear limits, the potential for mission creep and surveillance is huge. When a company can make “inferences” about you from your satellite internet usage and then use them to train an AI, where does it stop?
Musk’s converging ambitions
So why now? The reported merger talks with xAI tell the whole story. Elon Musk’s AI venture, which makes the Grok chatbot, needs massive, diverse data to compete with the likes of OpenAI and Google. Owning X (formerly Twitter) gives it one firehose of public text and image data. But Starlink? That’s a whole other dimension—literally. It’s real-world, physical location and usage data on a global scale.
This move basically turns Starlink from a connectivity provider into a foundational data provider for Musk’s AI empire. The potential “AI-powered services” SpaceX could deploy sound exciting, but they’re built on a foundation of user data repurposed for a goal that wasn’t the original bargain for most subscribers. You signed up for internet in the middle of nowhere, not to be a data point training a trillion-dollar AI.
The opt-out and the obscurity
Now, the policy does say you can opt out. But let’s be real. How many users actually read privacy policy updates? And even for those who do, where is the opt-out button? How clear is the process? In practice, these opt-out provisions often bury the action deep in settings menus, making consent the default path of least resistance for millions of users.
This is part of a much bigger trend, of course. The hunger for AI training data is insatiable, and companies are digging through their own backyards—their user agreements—to find new veins to mine. But when the backyard is a global satellite network, the stakes feel different. It blurs the line between providing a utility and building a surveillance-capable AI training platform. The question isn’t just whether Musk’s companies can do this. It’s whether, in the race for AI dominance, we’re comfortable letting them.
