According to The Verge, Elon Musk’s AI company xAI compelled employees to submit their own biometric data to train “Ani,” its female chatbot, which was released over the summer for subscribers to X’s $30-a-month SuperGrok service. The program, code-named “Project Skippy,” required AI tutors to sign release forms granting xAI perpetual, worldwide rights to use, reproduce, and distribute their faces and voices. At an April meeting, xAI staff lawyer Lily Lim told employees the biometric data collection was necessary to make the AI companion more human-like. The chatbot features an anime avatar with blond pigtails and includes an NSFW setting that The Verge’s Victoria Song described as “a modern take on a phone sex line.” Some employees reportedly balked at the demand, concerned that their likenesses could be sold to other companies or used in deepfakes.
Where do we draw the line?
This is pretty wild when you think about it. Companies asking for employee data isn’t new, but faces and voices? That’s next-level invasive. And the fact that it’s tied to what’s essentially an AI girlfriend service makes it even more concerning. Employees were apparently told this was “a job requirement to advance xAI’s mission” – but come on, is training a waifu chatbot really worth handing over your biometric identity forever?
The consent problem
Here’s the thing about that “perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” language: perpetual means the grant never expires, and sub-licensable means xAI can pass those rights on to third parties. Once you sign, you’ve given up control over how your face and voice might be used. Could your likeness end up in deepfake videos? Could it be licensed to other companies without your knowledge? The terms don’t rule either out. The employees who raised these concerns weren’t being paranoid – they were asking exactly the right questions. And when your job depends on saying yes, how voluntary is that consent, really?
Broader tech industry implications
While this particular case involves AI chatbots, the underlying issue affects technology across sectors. Companies are increasingly hungry for real human data to train their systems, and the boundaries around what’s acceptable are getting blurry. The xAI situation shows how easily companies can cross ethical lines when chasing innovation. The fact that this was happening at Musk’s company – with all the scrutiny he already faces – suggests this might be more common than we realize at other tech firms.
Where does this leave us?
So what happens now? This revelation could spark broader conversations about employee data rights in the AI age. Should there be limits on what companies can demand from employees for training data? And when that data involves something as personal as your face and voice, shouldn’t there be stronger protections? The employees who pushed back were absolutely right to be concerned. Their hesitation might just help establish some much-needed boundaries before this becomes standard practice across the industry.
