No, Putting Data Centers in Space Won't Make Me Less Mad When You Use "AI"
Dip discusses the intertwining of the space economy with the AI economy. They think both are shit.
Google is looking to build data centers on satellites in orbit for AI, as part of Project Suncatcher. These satellites would be solar-powered and positioned to stay in near-constant sunlight. The engineering sticking points include the limited speed of current wireless communication between satellites and how close together the satellites would need to fly—closer than any currently in orbit.
This whole project hinges on Google's TPUs being able to withstand five years of space radiation, and on the potential for satellite costs to drop to ranges comparable with terrestrial data centers by the mid-2030s.
This news came out shortly before Elon Musk merged SpaceX and xAI, pointing to “data centers in space” as part of the rationale. As people organize against AI data centers being built in and near their communities, outer space is being positioned as a good alternative for everyone involved.
I’m a recovering techno-optimist, so I feel assured in saying that we don’t need AI. There are lots of reasons to feel this way, but let’s start with the concrete: the environment is being harmed by AI, and it will continue to get worse (yes, this is true for everything, and no, that doesn’t make it okay to add fuel to the planetary fire). AI will likely override any “clean” gains made in the energy sector, as data centers turn to dirty and nonrenewable power to meet capacity needs. This will be felt by the communities whose grids the centers are connected to, as power companies have already been shown to pass those costs onto them. Alongside that, the climate effects of this dirty energy are well known, and they’re not good. That’s not even to mention the poisoning of the air and of whatever water isn’t consumed outright, as shown in the case of xAI.
AI doesn’t just fuck with Black communities in the US, though; it’s built off of the exploitation of the diaspora (and other communities). Behind the black box of Big Tech’s platforms like AI, there’s an army of people–like the workers in Kenya–who have to moderate horrible content by brute force. They manually trawl content in a kind of digital sweatshop, with all of the implied precarity and unhealthy conditions that framing brings to mind.
Thankfully, people are fighting back, using the tools at their disposal to assert themselves and make their stand. I hope that those breaks in the oppressive scripts can widen and engulf all of this bullshit.
Regardless, one of the things that can make this fight difficult—beyond the kind of anti-AI “common sense” that can come from being directly exploited, poisoned, or taxed—is straightforward: AI has had years of useful PR. I avoid saying “good,” since that word carries a moral valence that sounds synonymous with “positive.” That’s not what I intend to convey.
“Useful” here just means consistent and unavoidable. Many popular science fiction stories, from Do Androids Dream of Electric Sheep to Murderbot to Cyberpunk showcase the future of AI as a foregone conclusion, whether it is unfettered or regulated, embodied in an android or embedded in the cloud. Turns out that society has opted for the saddest possible version of that reality.
Photo by NASA on Unsplash
This might be why it’s been stickier than the NFT (fool’s) gold rush of the early 2020s. Even though there is a difference between generative AI and general AI, the lack of clarity on this distinction allows generative AI based on Large Language Models[1] to be conveniently conflated with the general AI of sci-fi.
Ironies of tech oligarchs seeming to miss the point aside, there is something to be said for the ways that technological “development” is used to enrich the already rich at the expense of the non-rich—I’m thinking alongside George Jackson’s and David Graeber’s discussions of the 99% vs the 1%. Placating images[2] are the best “return-on-investment” that someone might get from AI. There isn’t much benefit to it if you’re a 99%er, unless you see being plagiarized, exploited, and/or enfeebled as beneficial.
Many of these issues seem to be “solved” by Project Suncatcher and copycat projects like Musk's. If it works (and it’s a big if), many of the communities dealing with the “knock-on effects” of AI infrastructure might be placated. Putting the data centers on satellites in orbit not only gets them out of a movement’s local community, but out of every local community! That probably sounds like a good deal. A bit utopian, but utopianism itself isn’t an issue.
As with any utopian ideal, the problem comes from trying to bring those ideas to life, or, to say it another way, to conform life to those ideas. The political economy of the space industry is stacked against anything resembling egalitarianism. On one hand, spaceflight is becoming more accessible through falling launch costs,[3] meaning there is a high likelihood of the space industry expanding, along with the expropriation of labor and life needed for its growth. On the other, these interventions are being spearheaded by capitalists, who, if history is any indication, obviously don’t have social or ecological interests in mind.
We are likely to see the movement of people into and out of space to facilitate this shift. This implies both exploitation and resistance. The organizing questions in the context of going into space will have more in common with those posed by oil rigs, or even seafaring, than with most terrestrial work; at the very least, living and working on the front lines will be much more enmeshed. For the terrestrial arms of this industry, from the manufacturing floors to the command centers, there needs to be organizing to try to prevent the worst excesses of the supercharged “company town” potential that the space industry represents.
Now, this is not me advocating for a “red” space program. Nor am I shooting for a more “benign” space capitalism. I want to hold two truths, acknowledging their tensions:
- There is a high likelihood that the space industry, especially in low-orbit, will become a critical site of value extraction in the relatively near future, and;
- We should probably treat space colonization with a similar quantity (if not quality) of skepticism that we give to nuclear technology[4] and non-space forms of colonization.[5]
This is all to say: I fully understand that fighting against AI and space colonization is fighting an uphill battle, but I can’t get behind letting these developments happen unfettered.
To tie things back to the “useful PR” idea from earlier, talking about “new” technologies from a critical standpoint is difficult. It is easy to come off as opposing what is new simply because it is new, which is often understood as treating any ills the new thing may visit on society as exceptional. This is where conservative critiques of technology often come from, paired with a nostalgic desire to “return” to a past that never was.
The truth, though, is that nothing comes from a vacuum. Given this, my critique comes from an understanding of the importance of preserving life and lifeways while acknowledging the occurrence of change, alongside an understanding that changes are rooted in power. Any impacts of a given technology will be in conversation with the impacts of other technologies, framed by the social structures under which they all came into being.
Given this, it should be made clear that many of the issues with AI or space colonization aren’t “new” (even if some aspects are unique), which can be demobilizing as people acclimate to certain kinds of immiseration–especially if they are not on the receiving end. Even when people question things like the exploitation that goes into making smartphones, that questioning is rarely paired with serious effort to organize an end to that exploitation and/or to seriously divest from the overreliance on that technology.
Critically, tying space colonization to this AI boom is an example of “shifting the burden”; the aesthetic of resolving the issue is deployed without the actual content that could uproot it. Tweaking the system of exploitation, where–in the best case scenario–a different group of people are experiencing the harms of this technology, is not the same as no one getting harmed by this technology.
I can’t help but question Google “storming the heavens,” especially for the sake of something as comically and clearly harmful as AI. Responding to this project, and to the twin encroachments of AI and space colonization in general, is critical. Even in this stage of transition, lives are already being put at risk. Given the demands that these technologies make, there is a real potential for widespread harm and for levels of extraction that the capitalist-settlers of generations past could only dream of. We can’t tinker around the edges with this; the embeddedness of the systems animating AI and space colonization calls for digging in the dirt and pulling those roots out.
This could take many forms, like the flashes of resistance we’ve seen with data labelers and communities railing against data centers. In general, though, the critical mission is to bolster those efforts by linking them to other struggles animated by the same root causes: racism, ecocide, capitalism, patriarchy, and (neo)colonialism. Addressing these issues in a general way[6]—that foregrounds autonomy, solidarity, and direct stewardship of the resources and levers of land, labor, and life—is critical.
To make a long story short, the folks in Dune were headed in the right direction when they waged their Butlerian Jihad—at least in their extreme disdain for computers. We’d just have to make sure we don’t go down the path of millennia-long theocratic fascism.
All of the AI we’ve been talking about thus far, i.e., ChatGPT and Grok, among others. ↩︎
The first four theses are especially relevant here. ↩︎
The hardest and most expensive part of getting up into space is… actually getting into space, from the ground. SpaceX’s intervention of reusable rockets has significantly lowered launch costs. ↩︎
Some similarities are the scale of investment needed and engineering challenges, intimate and historical relationships with militarism, and wide-reaching and negative societal impacts, especially for Black and non-Black Indigenous populations. ↩︎
This is not to say that the dispossession of indigenous and “foreign” lands and people is “the same” as mining rocks, but that it could be more similar than many would like to admit, especially if it were to happen under the current paradigm. We should problematize the desires and motivations to extract resources from space, asking questions like “Why do we need all of these minerals?” and “What is the human and otherwise biotic cost of this constant expansion?” ↩︎
Meaning that we stand against every specific manifestation of the causes along with the causes themselves. That is to say, we should, for example, be against racism in a way that we are willing to not tolerate racist actions, concepts, or systems, fighting against them so as to eradicate them. ↩︎
