
Job title of the future: Space debris engineer

26 June 2024 at 05:00

Stijn Lemmens has a cleanup job like few others. A senior space debris mitigation analyst at the European Space Agency (ESA), Lemmens works on counteracting space pollution by collaborating with spacecraft designers and the wider industry to create missions less likely to clutter the orbital environment. 

Although significant attention has been devoted to launching spacecraft into space, the idea of what to do with their remains has been largely ignored. Many previous missions did not have an exit strategy. Instead of being pushed into orbits where they could reenter Earth’s atmosphere and burn up, satellites were simply left in orbit at the ends of their lives, creating debris that must be monitored and, if possible, maneuvered around to avoid a collision. “For the last 60 years, we’ve been using [space] as if it were an infinite resource,” Lemmens says. “But particularly in the last 10 years, it has become rather clear that this is not the case.” 

Engineering the ins and outs: Step one in reducing orbital clutter—or, colloquially, space trash—is designing spacecraft that safely leave space when their missions are complete. “I thought naïvely, as a student, ‘How hard can that be?’” says Lemmens. The answer turned out to be more complicated than he expected. 

At ESA, he works with scientists and engineers on specific missions to devise good approaches. Some incorporate propulsion that works reliably even decades after launch; others involve designing systems that can move spacecraft to keep them from colliding with other satellites and with space debris. They also work on plans to get the remains through the atmosphere without large risks to aviation and infrastructure.

Standardizing space: Earth’s atmosphere exerts a drag on satellites that will eventually pull them out of orbit. National and international guidelines recommend that satellites lower their altitude at the end of their operational lives so that this drag can pull them back in to reenter and burn up sooner. Previously the goal was for reentry to take 25 years at most; Lemmens and his peers now suggest five years or less, a time frame that would have to be taken into account from the start of mission planning and design. 
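For a rough sense of why the altitude at which a satellite is abandoned matters so much, consider a toy model of drag-driven orbital decay, sketched below in Python. The exponential atmosphere, the ballistic coefficient, and the starting altitudes are all illustrative assumptions, not ESA figures; real mitigation analyses use far more detailed atmosphere and spacecraft models.

    import math

    MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6    # Earth's radius, m

    def density(alt_m):
        # Crude exponential atmosphere pinned at ~200 km (assumed values)
        return 2.5e-10 * math.exp(-(alt_m - 200e3) / 50e3)   # kg/m^3

    def decay_years(alt_km, cd_area_over_mass=0.022, reentry_km=120):
        """March a circular orbit down under drag, one day per step."""
        a = R_EARTH + alt_km * 1e3
        t, dt = 0.0, 86400.0
        while a > R_EARTH + reentry_km * 1e3:
            # King-Hele approximation for circular-orbit decay under drag
            a -= math.sqrt(MU * a) * density(a - R_EARTH) * cd_area_over_mass * dt
            t += dt
            if t > 500 * 365.25 * 86400:   # give up past 500 years
                return math.inf
        return t / (365.25 * 86400)

    for alt in (400, 550, 700):
        print(f"abandoned at {alt} km: ~{decay_years(alt):.0f} years to reentry")

The steep exponential in the density profile is the point: a satellite parked a few hundred kilometers lower at the end of its mission can fall back in years rather than decades or centuries, which is why end-of-life lowering maneuvers are central to the guidelines.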

Explaining the need for this change in policy can feel a bit like preaching, Lemmens says, and it’s his least favorite part of the job. It’s a challenge, he says, to persuade people not to think of the vastness of space as “an infinite amount of orbits.” Without change, the amount of space debris may create a serious problem in the coming decades, cluttering orbits and increasing the number of collisions.  

Shaping the future: Lemmens says his wish is for his job to become unnecessary in the future, but with around 11,500 satellites and over 35,000 debris objects being tracked, and more launches planned, that seems unlikely to happen. 

Researchers are looking into more drastic changes to the way space missions are run. We might one day, for instance, be able to dismantle satellites and find ways to recycle their components in orbit. Such an approach isn’t likely to be used anytime soon, Lemmens says. But he is encouraged that more spacecraft designers are thinking about sustainability: “Ideally, this becomes the normal in the sense that this becomes a standard engineering practice that you just think of when you’re designing your spacecraft.”


Astronomers are enlisting AI to prepare for a data downpour

20 May 2024 at 05:00

In deserts across Australia and South Africa, astronomers are planting forests of metallic detectors that will together scour the cosmos for radio signals. When it boots up in five years or so, the Square Kilometer Array Observatory will look for new information about the universe’s first stars and the different stages of galactic evolution. 

But after syncing hundreds of thousands of dishes and antennas, astronomers will quickly face a new challenge: combing through some 300 petabytes of cosmological data a year—enough to fill a million laptops. 

It’s a problem that will be repeated in other places over the coming decade. As astronomers construct giant cameras to image the entire sky and launch infrared telescopes to hunt for distant planets, they will collect data on unprecedented scales. 

“We really are not ready for that, and we should all be freaking out,” says Cecilia Garraffo, a computational astrophysicist at the Harvard-Smithsonian Center for Astrophysics. “When you have too much data and you don’t have the technology to process it, it’s like having no data.”

In preparation for the information deluge, astronomers are turning to AI for assistance, optimizing algorithms to pick out patterns in large and notoriously finicky data sets. Some are now working to establish institutes dedicated to marrying the fields of computer science and astronomy—and grappling with the terms of the new partnership.

In November 2022, Garraffo set up AstroAI as a pilot program at the Center for Astrophysics. Since then, she has put together an interdisciplinary team of over 50 members that has planned dozens of projects focusing on deep questions like how the universe began and whether we’re alone in it. Over the past few years, several similar coalitions have followed Garraffo’s lead and are now vying for funding to scale up to large institutions.

Garraffo recognized the potential utility of AI models while bouncing between career stints in astronomy, physics, and computer science. Along the way, she also picked up on a major stumbling block for past collaboration efforts: the language barrier. Often, astronomers and computer scientists struggle to join forces because they use different words to describe similar concepts. Garraffo is no stranger to translation issues, having struggled to navigate an English-only school growing up in Argentina. Drawing from that experience, she has worked to put people from both communities under one roof so they can identify common goals and find a way to communicate. 

Astronomers had already been using AI models for years, mainly to classify known objects such as supernovas in telescope data. This kind of image recognition will become increasingly vital when the Vera C. Rubin Observatory opens its eyes next year and the number of annual supernova detections quickly jumps from hundreds to millions. But the new wave of AI applications extends far beyond matching games. Algorithms have recently been optimized to perform “unsupervised clustering,” in which they pick out patterns in data without being told what specifically to look for. This opens the door to models that point astronomers toward effects and relationships they aren’t currently aware of. For the first time, these computational tools give astronomers the means of “systematically searching for the unknown,” Garraffo says. In January, AstroAI researchers used this method to catalogue over 14,000 detections from x-ray sources, which are otherwise difficult to categorize.
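To make the idea concrete, here is a minimal sketch in Python of what an unsupervised clustering pass over a source catalogue might look like, using scikit-learn's DBSCAN. The feature columns, thresholds, and random data are invented for illustration; this is not the AstroAI pipeline itself.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Stand-in catalogue: each row is one detection, each column a measured
    # property (say, brightness, spectral hardness, variability). Real work
    # would load these from an x-ray source catalogue.
    features = rng.normal(size=(14000, 3))

    X = StandardScaler().fit_transform(features)   # put features on one scale
    labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(X)

    # Label -1 means "fits no cluster": in a survey, those outliers are
    # exactly where systematically searching for the unknown would begin.
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print("clusters:", n_clusters, "| outliers flagged:", int((labels == -1).sum()))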

Another way AI is proving fruitful is by sniffing out the chemical composition of the skies on alien planets. Astronomers use telescopes to analyze the starlight that passes through planets’ atmospheres and gets soaked up at certain wavelengths by different molecules. To make sense of the leftover light spectrum, astronomers typically compare it with fake spectra they generate based on a handful of molecules they’re interested in finding—things like water and carbon dioxide. Exoplanet researchers dream of expanding their search to hundreds or thousands of compounds that could indicate life on the planet below, but it currently takes a few weeks to look for just four or five compounds. This bottleneck will become progressively more troublesome as the number of exoplanet detections rises from dozens to thousands, as is expected to happen thanks to the newly deployed James Webb Space Telescope and the European Space Agency’s Ariel Space Telescope, slated to launch in 2029. 
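A hedged sketch of that traditional workflow: generate a synthetic spectrum for each candidate mix of molecules and keep the combination that best matches the observation. The Gaussian-band "forward model" below is a stand-in for real radiative-transfer codes, and every number in it is an assumption for illustration; the brute-force search over combinations is also why the approach slows to a crawl as the molecule list grows.

    import itertools
    import numpy as np

    wavelengths = np.linspace(0.6, 5.0, 200)   # microns (illustrative grid)
    BANDS = {"H2O": 1.4, "CO2": 4.3, "CH4": 3.3, "CO": 4.6}  # toy band centers

    def model_spectrum(molecules):
        """Stand-in forward model: one Gaussian absorption band per molecule."""
        depth = np.full_like(wavelengths, 0.01)   # baseline transit depth
        for m in molecules:
            depth += 0.001 * np.exp(-((wavelengths - BANDS[m]) / 0.15) ** 2)
        return depth

    # Fake "observed" spectrum: water plus carbon dioxide, plus noise
    sigma = 1e-4
    observed = model_spectrum(("H2O", "CO2"))
    observed += np.random.default_rng(1).normal(0, sigma, wavelengths.size)

    # Try every combination of candidate molecules; keep the best chi-square
    candidates = (combo for r in range(1, len(BANDS) + 1)
                  for combo in itertools.combinations(BANDS, r))
    best = min(candidates,
               key=lambda c: np.sum(((observed - model_spectrum(c)) / sigma) ** 2))
    print("best-fitting composition:", best)   # expect ('H2O', 'CO2')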

Processing all those observations is “going to take us forever,” says Mercedes López-Morales, an astronomer at the Center for Astrophysics who studies exoplanet atmospheres. “Things like AstroAI are showing up at the right time, just before these faucets of data are coming toward us.”

Last year López-Morales teamed up with Mayeul Aubin, then an undergraduate intern at AstroAI, to build a machine-learning model that could more efficiently extract molecular composition from spectral data. In two months, their team built a model that could scour thousands of exoplanet spectra for the signatures of five different molecules in 31 seconds, a feat that won them the top prize in the European Space Agency’s Ariel Data Challenge. The researchers hope to train a model to look for hundreds of additional molecules, boosting their odds of finding signs of life on faraway planets. 
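One way such a model could work, sketched under heavy assumptions: train a small neural network on simulated spectra so that, once trained, reading molecular abundances off thousands of new spectra becomes a single fast forward pass. The toy linear "simulator" and the network below are placeholders, not the prize-winning model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    n_train, n_bins, n_molecules = 5000, 52, 5

    # Synthetic training set: random abundances mapped to spectra through a
    # toy linear "simulator" (a stand-in for a physics-based forward model).
    A = rng.random((n_molecules, n_bins))
    abundances = rng.random((n_train, n_molecules))
    spectra = abundances @ A + rng.normal(0, 0.01, (n_train, n_bins))

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
    model.fit(spectra, abundances)

    # Inference on a batch of "new" spectra takes milliseconds, which is
    # where the speedup over weeks of per-spectrum fitting comes from.
    new_spectra = rng.random((1000, n_molecules)) @ A
    print(model.predict(new_spectra).shape)    # -> (1000, 5) abundances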

AstroAI collaborations have also given rise to realistic simulations of black holes and maps of how dark matter is distributed throughout the universe. Garraffo aims to eventually build a large language model similar to ChatGPT that’s trained on astronomy data and can answer questions about observations and parse the literature for supporting evidence. 

“There’s this huge new playground to explore,” says Daniela Huppenkothen, an astronomer and data scientist at the Netherlands Institute for Space Research. “We can use [AI] to tackle problems we couldn’t tackle before because they’re too computationally expensive.” 

However, incorporating AI into the astronomy workflow comes with its own host of trade-offs, as Huppenkothen outlined in a recent preprint. The AI models, while efficient, often operate in ways scientists don’t fully understand. This opacity makes them complicated to debug and makes it difficult to identify how they may be introducing biases. Like all forms of generative AI, these models are prone to hallucinating relationships that don’t exist, and they report their conclusions with an unfounded air of confidence. 

“It’s important to critically look at what these models do and where they fail,” Huppenkothen says. “Otherwise, we’ll say something about how the universe works and it’s not actually true.”

Researchers are working to incorporate error bars into algorithm responses to account for the new uncertainties. Some suggest that the tools could warrant an added layer of vetting to the current publication and peer-review processes. “As humans, we’re sort of naturally inclined to believe the machine,” says Viviana Acquaviva, an astrophysicist and data scientist at the City University of New York who recently published a textbook on machine-learning applications in astronomy. “We need to be very clear in presenting results that are often not clearly explicable while being very honest in how we represent capabilities.”
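As a concrete illustration of one common recipe for those error bars (not necessarily the one any particular group uses): train an ensemble of models on bootstrap resamples of the data and report the spread of their predictions as the uncertainty. Everything below is toy data.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, (500, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 500)   # noisy toy relation

    # Train 20 copies of the model, each on a bootstrap resample
    ensemble = []
    for seed in range(20):
        idx = rng.integers(0, len(X), len(X))
        ensemble.append(GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx]))

    # The spread across members becomes the error bar on a new prediction
    x_new = np.array([[1.0]])
    preds = np.array([m.predict(x_new)[0] for m in ensemble])
    print(f"prediction: {preds.mean():.2f} +/- {preds.std():.2f}")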

Researchers are cognizant of the ethical ramifications of introducing AI, even in as seemingly harmless a context as astronomy. For instance, these new AI tools may perpetuate existing inequalities in the field if only select institutions have access to the computational resources to run them. And if astronomers recycle existing AI models that companies have trained for other purposes, they also “inherit a lot of the ethical and environmental issues inherent in those models already,” Huppenkothen says.

Garraffo is working to get ahead of these concerns. AstroAI models are all open source and freely available, and the group offers to help adapt them to different astronomy applications. She has also partnered with Harvard’s Berkman Klein Center for Internet & Society to formally train the team in AI ethics and learn best practices for avoiding biases. 

Scientists are still unpacking all the ways the arrival of AI may affect the field of astronomy. If AI models manage to come up with fundamentally new ideas and point scientists toward new avenues of study, it will forever change the role of the astronomer in deciphering the universe. But even if it remains only an optimization tool, AI is set to become a mainstay in the arsenal of cosmic inquiry. 

“It’s going to change the game,” Garraffo says. “We can’t do this on our own anymore.” 

Zack Savitsky is a freelance science journalist who covers physics and astronomy. 

Inside the quest to map the universe with mysterious bursts of radio energy

1 May 2024 at 05:00

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all. 

Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now. 

This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about. 

“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst. 

Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult, because most of that work must happen in the few milliseconds before the burst passes.

But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time. 

Where’s the matter?

In 1998, astronomers counted up the weight of all of the identified matter in the universe and got a puzzling result. 

We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons—the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But the astronomers managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.” 

All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand. 

Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it. 

But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.

That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.

Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its spectral lines were very smeared and distorted, unusual characteristics for a radio pulse that was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe. 

Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.

“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars—spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.

This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY

There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth. 

Astronomers couldn’t even point to the direction that the first 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by an enormous single-dish radio telescope at the Parkes Observatory (now called the Murriyang) in New South Wales, which is great at picking up incoming radio waves but can pinpoint FRBs only to an area of the sky as large as Earth’s full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.

Then, in 2015, one particular radio wave flashed—and then flashed again. Over the course of two months of observation from the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating FRB ever observed (a mystery in its own right), and the repetition gave researchers the opportunity they needed to home in on where the radio waves had begun.

In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says. 

That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometer Array, an international collaboration to build the largest radio telescope arrays in the world. 

The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs. 

Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons. 

And over the course of the next five years, that’s exactly what they did—with astonishing success.

Building a pipeline

To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance to detect a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view. 

ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.  
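The arithmetic behind that trade-off is short: an interferometer's angular resolution is roughly the observing wavelength divided by the separation between dishes, so spreading dishes out buys precision. A quick sketch, with illustrative numbers rather than ASKAP's exact specifications:

    import math

    C = 3.0e8                       # speed of light, m/s
    FREQ = 1.4e9                    # a typical FRB survey frequency, Hz
    wavelength = C / FREQ           # ~0.21 m

    # Diffraction limit: resolution ~ wavelength / baseline
    for baseline_m in (12, 6000):   # one dish's width vs. a ~6 km array spread
        arcsec = math.degrees(wavelength / baseline_m) * 3600
        print(f"baseline {baseline_m:>5} m -> ~{arcsec:,.0f} arcsec resolution")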

The dishes weren’t formally in use yet, but Bannister had an idea. He took them and jerry-rigged a “fly’s eye” telescope, pointing the dishes at different parts of the sky to maximize its ability to spot something that might flash anywhere. 

“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.” 

When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin. 
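In software terms, this is a ring buffer plus a trigger: keep overwriting a rolling window of samples, and the moment a detection fires, freeze a copy before it is gone. A simplified Python sketch of the pattern follows; the buffer depth and threshold are invented, and the real system captures raw voltage data at vastly higher rates.

    import random
    from collections import deque

    BUFFER_SAMPLES = 4096                    # illustrative buffer depth
    buffer = deque(maxlen=BUFFER_SAMPLES)    # old samples fall off the back
    snapshots = []

    def looks_like_a_burst(sample):
        """Stand-in trigger: flag anything far above the noise floor."""
        return abs(sample) > 5.0

    def ingest(sample):
        buffer.append(sample)                # normal path: overwrite forever
        if looks_like_a_burst(sample):
            # Reach "upstream": copy the window before it gets overwritten
            snapshots.append(list(buffer))

    random.seed(0)
    for _ in range(100_000):
        ingest(random.gauss(0.0, 1.0))       # a long stretch of pure noise
    ingest(9.9)                              # a simulated burst arrives
    print(f"captured {len(snapshots)} snapshot(s) of the data around the burst")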

The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB. 

These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO

Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through. 
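The bookkeeping behind redshift is compact enough to show: a spectral feature emitted at a known rest wavelength arrives stretched, and the stretch factor minus one is the redshift. A small sketch with illustrative numbers, using the oxygen emission lines that appear later in this story:

    REST_OII_NM = 372.7   # [O II] emission doublet rest wavelength, nm

    def redshift(observed_nm, rest_nm=REST_OII_NM):
        return observed_nm / rest_nm - 1

    # A doublet observed near 745 nm implies z ~ 1: the light has been
    # stretched to twice its emitted wavelength over its ~8-billion-year trip.
    print(f"z = {redshift(745.4):.2f}")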

In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons. 

The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.
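The dispersion measure rests on a simple, well-established relation: lower radio frequencies arrive later, with a lag of roughly 4.15 milliseconds times the dispersion measure (in parsecs per cubic centimeter) times the difference of the inverse-squared frequencies (in gigahertz). A sketch, with a made-up lag for the burst:

    K_DM = 4.15   # dispersion constant, ms GHz^2 cm^3 / pc (approximate)

    def delay_ms(dm, f_low_ghz, f_high_ghz):
        """Arrival lag of the low band behind the high band."""
        return K_DM * dm * (f_low_ghz**-2 - f_high_ghz**-2)

    def dm_from_delay(lag_ms, f_low_ghz, f_high_ghz):
        """Invert the sweep: infer the electron column from a measured lag."""
        return lag_ms / (K_DM * (f_low_ghz**-2 - f_high_ghz**-2))

    # A burst whose 1.2 GHz arrival trails its 1.4 GHz arrival by ~400 ms
    # implies an electron column typical of an extragalactic source:
    print(f"DM ~ {dm_from_delay(400.0, 1.2, 1.4):.0f} pc/cm^3")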

As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology. 

Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one. 

“I remember thinking that I know something about the universe that no one else knows.”

The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data email from the European Southern Observatory and began to scroll through the spectrum data. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, symbolized by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.” 

With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.

“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.” 

Mapping the cosmic web

The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.

“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.” 

Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.” 

That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-degree one. Masui likened it to a mirror that can be focused on thousands of different places simultaneously. 

Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.

The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME

Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade. 

And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. If you can imagine stuffing a million people simultaneously watching uncompressed YouTube videos into a box the size of a fridge, that’s probably the easiest way to visualize the data handling capabilities of this new processor, called a field-programmable gate array, which Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.

With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.” 

Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely. 

“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”

Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures. 

Anna Kramer is a science and climate journalist based in Washington, D.C.

The great commercial takeover of low Earth orbit

17 April 2024 at 04:00

Washington, DC, was hot and humid on June 23, 1993, but no one was sweating more than Daniel Goldin, the administrator of NASA. Standing outside the House chamber, he watched nervously as votes registered on the electronic tally board. The space station wasn’t going to make it. The United States had spent more than $11 billion on it by then, with thousands of pounds of paperwork to show for it—but zero pounds of flight hardware. Whether there would ever be a station came down, now, to a cancellation vote on the House floor.

Politically, the space station was something of a wayward orphan. It was a nine-year-old Reagan administration initiative, expanded by George H.W. Bush as the centerpiece of a would-be return to the moon and an attempt to reach Mars. When voters replaced Bush with Bill Clinton, Goldin persuaded the new president to keep the station by pitching it as a post-Soviet reconstruction effort. The Russians were great at building stations, which would save NASA a fortune in R&D. In turn, NASA’s funding would keep Russian rocket scientists employed—and less likely to freelance for hostile foreign powers. Still, dissatisfaction with NASA was a bipartisan affair: everyone seemed to agree that the agency was bloated and ossified. Representative Tim Roemer, a Democrat from Indiana, wanted to make some big changes, and he introduced an amendment to the NASA authorization bill to kill the station once and for all.

Goldin had made more than 100 phone calls in the day and a half before the vote, hoping to sway lawmakers to endorse the station, which he saw as critical for studying biomedicine, electronics, materials engineering, and the human body in a completely alien environment: microgravity. Things down to the molecular level behave profoundly differently in space, and flying experiments a week at a time on the shuttle wasn’t enough to learn much. Real research required a permanent presence in space, and that meant a space station. 

Supporters of the space station had gone into the vote expecting to win. Not by much—20 votes, maybe. But the longer the vote went on, the closer it got. Each side began cheering as it pulled ahead. The 110 new members of Congress, none of whom had ever before cast a vote involving the station, revealed themselves to be less reliable than expected. 

Finally, the tally reached 215–215, with one vote remaining: Representative John Lewis of Georgia, a civil rights legend. As Lewis walked down the hall toward the legislative chamber, Goldin’s legislative aide, Jeff Lawrence, told the administrator to say something—anything—to win him over. As Lewis walked by, Goldin had only one second, maybe two, and the best he could get out was a raw, honest, “Congressman Lewis, the future of the space program depends on you.” He added: “The nation is counting on you. How will you vote?”

Lewis smiled as he walked by. He said, “I ain’t telling you.”

The station, later named the International Space Station, survived by his single vote, 216–215. Five years later, Russia launched the first module from Kazakhstan, and since November 2000, not a single day has elapsed without a human being in space.

NASA designed the International Space Station to fly for 20 years. It has lasted six years longer than that, though it is showing its age, and NASA is currently studying how to safely destroy the space laboratory by around 2030. This will involve a “deorbit vehicle” docking with the ISS, which is the size of a football field (including end zones), and firing thrusters so that the station, which circles the Earth at five miles per second, slams down squarely in the middle of the Pacific Ocean, avoiding land, injury, and the loss of human life.

As the scorched remains of the station sink to the bottom of the sea, however, the story of America in low Earth orbit (LEO) will continue. The ISS never really became what some had hoped: a launching point for an expanding human presence in the solar system. But it did enable fundamental research on materials and medicine, and it helped us start to understand how space affects the human body. To build on that work, NASA has partnered with private companies to develop new, commercial space stations for research, manufacturing, and tourism. If they are successful, these companies will bring about a new era of space exploration: private rockets flying to private destinations. They will also demonstrate a new model in which NASA builds infrastructure and the private sector takes it from there, freeing the agency to explore deeper and deeper into space, where the process can be repeated. They’re already planning to do it around the moon. One day, Mars could follow.


From the dawn of the space age, space stations were envisioned as essential to leaving Earth. 

In 1952, Wernher von Braun, the primary architect of the American space program, called them “as inevitable as the rising of the sun” and said they’d be integral to any sustainable exploration program, mitigating cost and complexity. Indeed, he proposed building a space station before a moon or Mars program, so that expeditions would have a logistical way station for resupply and refueling. 

“Going into the 1960s, there’s a lot of consensus and momentum around the idea that space is going to be a three-step process,” says historian David Hitt, coauthor of Homesteading Space: The Skylab Story. Step one, he told me, is transportation. You’ve got to leave Earth somehow, which means developing the infrastructure to build human-safe rockets and launching them. Step two is habitation. You need a place to live once you are in space—for its own sake as a science laboratory, and also as a logistical waypoint between Earth and other celestial objects. “Once you have transportation and habitation,” he says, “you can take your next step, which is exploration.”

The mindset changed after the Soviet Union beat the United States to orbit, first with its Sputnik I satellite in 1957 and again when cosmonaut Yuri Gagarin became the first man in space in 1961. President John F. Kennedy committed the nation to landing a man on the moon and returning him safely to Earth “before this decade is out.” It was an outrageously ambitious goal, given that NASA had only managed to launch a human to space three weeks earlier. “It required moving quickly, and the way you do that is to take the three-step plan and get rid of step two,” Hitt told me. “As it turned out, if you skip the habitation stage, it works—the US got to the moon, but did so in a way that did not lay the groundwork for the long-term sustainability of the program.”

“Even going back to the Mercury program, the goal was always the moon. Skylab is the first time that space itself became the destination.”

David Hitt, historian

We are still working on that. Months after the final Apollo moon mission, NASA launched the first American space station, Skylab. Adapted from the second stage of a Saturn V moon rocket, it was enormous: 99 feet (30 meters) long and by far the heaviest spacecraft ever launched. NASA would eventually launch three missions of three astronauts each to the station, where they would perform more than a hundred experiments.

“In a very real way, Skylab was the first American space mission,” Hitt says. “Before Skylab, we were flying moon missions—even going back to the Mercury program, the goal was always the moon. Skylab is the first time that space itself became the destination.” Its goals were foundational to what would later come. “The big thing that Skylab taught us is that human beings can, in fact, live and work long durations in a space environment. If we’re serious about going to Mars, you [may] spend way longer in space than you’re going to spend on the Martian surface.”

Skylab remains the only space station built and launched solely by the United States. In 1986, the Soviet Union launched the first module of Mir, a modular space station built like Lego blocks, one segment at a time. Because NASA had discontinued the Saturn V rocket, the agency necessarily adopted the same modular station model, eventually partnering with Russia and other countries to build the ISS. Today it shares the skies with Tiangong, China’s permanent space station, the first module of which launched in 2021. None of these stations have acted as moon or Mars way stations in the von Braun mold; to satisfy that requirement, NASA is developing a future station called Gateway that is intended to orbit the moon. Its first module could launch next year.

Although they never became transportation hubs, each space station has advanced the critical cause of learning what long stretches of space do to the human body. (Russian cosmonaut Valeri Polyakov, who flew on Mir, holds the all-time record for continuous spaceflight, with 437 days.) Researchers still have a relative paucity of knowledge about how the body responds to space. On Earth, we have the collective experience of more than 100 billion human beings across 300,000 years, and still much about the human body remains a mystery. Why do we yawn? What should we eat? Fewer than a thousand people in 63 years have ever been to space. Studies of what long-duration spaceflight does to the body can only occur on permanent space stations. 

“During the shuttle program, we were studying the effects of just a shorter-duration spaceflight—a couple weeks—on the human body,” Steven Platts, chief scientist of NASA’s Human Research Program, told me. Among the problems was “orthostatic intolerance,” which is the body’s inability to regulate blood pressure. It affected about a quarter of crew members who returned from space. Once NASA and Russia launched the ISS and spaceflight durations increased from weeks to months, that number leaped to 80%. “We spent a lot of time trying to tease out that mechanism. And we eventually came up with countermeasures so that that risk is now considered closed,” he says.

Other challenges include spaceflight-associated neuro-ocular syndrome, which is a change in the structure and function of the eye, something researchers identified about 10 years ago. “We didn’t really see it with the shuttle, but as we started doing more and more station missions, we saw it,” Platts says. They have also identified small, structural changes in the brain but have yet to figure out what that means in the long term: “That’s a relatively new risk that we didn’t know about before the space station.”

Overall, he says, the ability of the human body to regulate its function in space is “amazing.” His group is working on about 30 risks to humans posed by space exploration, which it classifies in a color-coding scheme. Green issues are well controlled. Yellow risks are of moderate concern, and red ones must be solved before missions are possible. “Right now, for low Earth orbit there are no red. Everything is yellow and green. We understand it pretty well and we can deal with it. But as we get to lunar, we see more yellow and some red, and as we get to Mars, we see more red yet,” Platts says. “There are things that we know right now are a problem, and we’re working hard to try and figure them out, either from a research standpoint or an engineering standpoint.”

Some problems can only be studied as we venture farther into space—the long-term effects of Mars dust on the human body, for example. Others, such as the unanticipated development of psychiatric disorders, can be studied closer to home.

NASA and other institutions are currently studying all this on the ISS and will need to continue such research long beyond the space station’s retirement—one reason why it is imperative that someone else launch a successor space station, and soon. To that end, just as it did with SpaceX from 2006 through 2011, the agency has seeded several companies with small investments, promising to lease space on emergent space stations. And right now, the soonest likely to launch is being led out of a sprawling former Fry’s Electronics retail store in a shopping center complex in Texas.


I met Michael Baine, the chief technology officer of Axiom Space, on a gray, drizzly January morning at the entrance to its Space Station Development Facility in Houston. Baine began his career at NASA Johnson Space Center just down the road, where he worked on everything from the shuttle and station to experimental lunar landers. Later, he left the agency to join Intuitive Machines as its chief of engineering. In February, that company’s Nova-C spacecraft, Odysseus, became the first US spacecraft to land successfully on the moon since the end of the Apollo program in 1972, making Intuitive Machines the first private company to land successfully on a celestial object beyond Earth. Baine has worked at Axiom Space since 2016. The startup’s long-term goal is to build the first private commercial space station. It has successfully organized and managed three private missions to the International Space Station, in large part to study firsthand how humans work and live in space, so that they might design a more user-friendly product.  

Axiom is not the only company interested in launching private space stations. Most notably, Blue Origin announced in 2021 that in partnership with the aerospace outfit Sierra Space, it would build Orbital Reef, a “mixed-use business park” capable of supporting up to 10 people simultaneously in low Earth orbit. In January, Sierra Space successfully stress-tested a one-third-scale test article of its habitat module, with the intention of launching a station into orbit on a Blue Origin New Glenn rocket in 2027. Other companies, such as Lockheed Martin, have made moves into the market, though their progress is less clear.

Axiom plans to build its own orbital facility much differently, Baine told me as we entered the facility. Above us, large, low-fidelity models of spacecraft hung from the ceiling, including the X-38 (an experimental emergency return vehicle for space station crew) and Zvezda, the Russian module of the ISS, which today is plagued by age-induced stress fractures and consequent leaks. Crew vehicles no longer dock with it.

Michael Baine, the chief technology officer of Axiom Space, began his career at NASA Johnson Space Center.
ANTHONY RATHBUN

“It’s very difficult to build a full, self-sustaining space station and launch it in one shot,” Baine said as we walked past an open-concept cube farm beneath the models, where about 500 men and women are designing a space station to replace Zvezda and the rest of the ISS. “What you want to do is assemble it in space in a piecemeal fashion. The easiest way to do that is to start with something that is already there.”

That “something” is the International Space Station itself. In 2026, Baine expects to launch Axiom Hab One, a cylindrical module with crew quarters and manufacturing capabilities that will plug into an open port on the ISS. Later, Axiom plans to launch Hab Two, expanding habitation, scientific, and manufacturing services. Then it hopes to launch a research and manufacturing facility, complete with a spacious, fully glassed cupola to give Axiom astronauts and visitors on the station access to a complete view of planet Earth, as well as the length of the station. Finally, the company intends to launch a “power thermal module” with massive solar panels, expanded life support capabilities, and payload capacity. 

“We wanted to turn over the keys to the shuttle, the station—all that—to the private sector.”

Lori Garver, former deputy administrator of NASA

Each new segment is designed to plug into the preceding Axiom segment. This isn’t aspirational; there is a hard deadline in effect. Unless the ISS gets a new lease on life, everything must be launched and assembled by 2030. Once NASA officially declares the ISS mission completed, the Lego-like Axiom Station will detach from the ISS as its own integrated and fully self-sustaining space station. Afterward, the deorbit vehicle will do its job and push the ISS into the ocean.

“It’s a big risk reduction for us to be able to use ISS as a staging point to build up our capability one element at a time,” Baine explains. That plan also offers a huge commercial advantage. There is already a robust, global user base of companies and researchers sending projects to the ISS. “In order to court those users to migrate to a commercial solution, it just becomes easier if you’re already at a location where they’re at,” he says. Everything from technical interfaces to the way Axiom Station will handle the outgassing of materials will be compatible with existing ISS hardware: “We have to meet the same standards that NASA does.”

The Axiom Station Earth Observatory module will allow astronauts a 360-degree view of their surroundings.
ANTHONY RATHBUN

A lot of people are betting that there are fortunes to be made in LEO, and because of that, the US taxpayer is not paying for Axiom Station. Though NASA intends to eventually rent space on Hab One, and has already awarded tens of millions of dollars to kick off early development, the commercial station is being built by hundreds of millions of private dollars. The cultivation of commercial research and manufacturing is ongoing, which was NASA’s aim going all the way back to Dan Goldin’s tenure as administrator. 

“We wanted to turn over the keys to the shuttle, the station—all that—to the private sector,” says Lori Garver, a former deputy administrator of NASA and author of Escaping Gravity. “Dan believed if we could hand over low-Earth-orbit infrastructure, NASA could go farther into space, and I really bought into that.” Garver would later pioneer the commercial spaceflight model that led SpaceX and other companies to take over launch services, saving the agency tens of billions of dollars while simultaneously speeding launch cadence—the same model that led to Axiom’s space station work.

“After launching the first module in 1998, we announced that space was open for business,” Garver told me. The first person to reach out was Fisk Johnson, of S.C. Johnson & Son. He wanted to work with NASA to develop a bioreactor to help create new pharmaceuticals for liver disease in a microgravity environment. “I worked with him for probably three years at NASA,” Garver says. “Unfortunately, their flight mission was Columbia, and we lost the experiment in the tragedy.”

In the decades to follow, commercial research and development would increase, with limitations. NASA, Russia, and the other partner nations did not design the ISS specifically as a large-scale research and manufacturing facility, and one reason no company has elected to simply buy the station outright is that refurbishing it would be more complex and expensive than either building a new station, as Axiom has elected to do, or renting space on a modern successor. 

As we came upon a stunning, full-scale mock-up of Hab One at the far end of the building, I asked Baine if starting with the technical solutions already developed by NASA—the way environmental systems work, for example—makes Axiom Station easier from an engineering perspective.

""
A mock-up of an Axiom station module interior.
ANTHONY RATHBUN

“You would think so,” he replied, “but these are very demanding standards, and they require a lot of attention to detail.” The voluminous testing and analyses to prove that you meet the requirements necessary to interface with ISS generate a lot of work, “but you end up with a structure or a component that is extremely reliable. The chances that a failure could propagate to a loss of crew is very, very remote.”

Only when I saw the mock-up did I realize the immensity of the spacecraft. It is 15 feet (4.6 meters) at its widest and 36 feet long. Once docked with the ISS, Hab One, which weighs 30 metric tons on Earth and can support four astronauts, will be the longest element on the station. 

“It is a spaceship-in-the-bottle problem. You basically have to feed all your systems through a 50-inch hatch.”

Michael Baine, chief technology officer, Axiom Space

Here at the Space Station Development Facility, the entire mock-up is made of CNC-machined wood. But the module is much further along than the existence of a “mock-up stage” would suggest. Its pressure vessel (that is, its primary shell, which holds air and maintains an Earth-like pressure environment in the vacuum of space) and its hatches are essentially completed and will soon be shipped from Italy by the same contractor that built many modules of the ISS. Baine walked me through a partitioned facility where Axiom Station’s avionics, propulsion, life support systems, communications, and other subsystems are well into development. Befitting the former Fry’s Electronics building in which we stood, there was a home-brew element to the systems, many of which were strewn across tables—an elaborate web of wires, tubes, circuit boards, and chips. The station will run on Linux.

Axiom built the mock-up to solve an almost comically fundamental challenge that any project such as this faces: turning the pressure shell and the myriad subsystems and components into a human-safe spacefaring vehicle. You can’t just drill holes in the pressure shell, any more than you can punch a hole in a balloon and expect it to keep its shape. Axiom must build the module inside and around it. “It is a spaceship-in-the-bottle problem,” Baine said. “You basically have to feed all your systems through a 50-inch hatch and integrate them into the element.” He calls it one of the hardest problems in the business, because it’s about more than assembling systems inside a pressure shell in Houston—it’s also about making the station user friendly for servicing in orbit, if ever a technical issue arises.

Axiom’s R&D facility is housed in a sprawling former Fry’s Electronics retail store in a shopping center complex.
A mock-up of Axiom’s Habitat One (Hab One), which will include crew quarters and manufacturing capabilities.

Today, tourism and research are probably the best-known uses of private spaceflight. But Axiom has other functions in mind for the station, including serving as a destination for countries that have yet to get involved in sending humans to space. Last year, the company announced the Axiom Space Access Program, which Tejpaul Bhatia, the company’s chief revenue officer, described as a “space program in a box” for countries around the world. Axiom says the program is still evolving but describes it as a pathway to participation in space. Azerbaijan was the first country to sign on.

But one of the most promising business prospects for the immediate future is manufacturing. Low Earth orbit is an especially good environment for making things in three areas: pharmaceuticals, metallurgy, and optics. Microgravity eliminates a number of physical phenomena that can interfere with sensitive steps in manufacturing processes, yielding more consistent material properties and structures. Axiom and Blue Origin are betting that modern space stations built around the insights gleaned from decades of ISS experimentation (but freed of its 1980s and 1990s technology) will pay dividends. 

As part of its push to encourage companies to develop their own space stations, NASA has committed to leasing space on those that meet the agency’s stringent human-spaceflight requirements. Just as with a major shopping center, an “anchor tenant” can offer financial stability and attract more tenants. To help this along, a US national laboratory based in Melbourne, Florida, is specifically funding and supporting non-aerospace companies that might benefit from microgravity research.


Biomedicine in particular has yielded perhaps the strongest results with the nearest-term impact, best represented by LambdaVision, a company established in 2009 by molecular biologists Nicole Wagner and Robert Birge. What makes it the most compelling glimpse of LEO’s promise is that LambdaVision was not founded as an aerospace company. Rather, Wagner and Birge were building a traditional, Earth-based company atop their research on a protein called bacteriorhodopsin and its potential to restore neural function. BR is a “proton pump,” which is just what it sounds like: it pumps protons from one side of a cell membrane to the other.

They focused on the problems of retinitis pigmentosa and macular degeneration. In a healthy eye, photoreceptor cells—rods and cones—take in light and convert it into a signal that goes to bipolar and ganglion cells, and then to the optic nerve. In both diseases, the rods and cones start to die, and once they are gone, there is nothing to take in light and turn it into a signal that can be sent to the brain. Retinitis pigmentosa, which afflicts 1.5 million people around the world, begins by affecting peripheral vision and encroaches inward, leading to severe tunnel vision before causing complete blindness. Macular degeneration works the opposite way, first affecting central vision and then spreading outward. About 30 million people around the world suffer from it. Treatments exist for both diseases, but even the best can only slow their progression. In the end, blindness wins, and once it does, there is no treatment.

Wagner, Birge, and their team at LambdaVision had an idea for something that might help: a simple, flexible implant, about as big as the circle stamped out by a hole punch and the thickness of a piece of construction paper, that could replace the damaged light-sensing cells and restore full vision. In principle, physicians could install the patch in the back of the eye, the same way they treat detached retinas, so it would not even require specialized training.

The problem was making this artificial retina. The implant requires using a scaffold—essentially a tightly woven porous material similar to gauze—and binding a polymer to it. Atop that, the researchers begin applying alternating layers of BR protein and polymers. With enough layers, the protein can absorb enough light and pump protons—hydrogen ions, specifically—toward the bipolar and ganglion cells, which take it from there, restoring vision in high definition. 

To apply the layers, scientists float the scaffold on the surface of a solution, move it from one beaker to the next, and repeat the process for each new layer. The problem is that fluid solutions are never perfect—things float, they sink, they settle, they form sediment, they evaporate, there is convection, there are surface-tension variations—and every variation and imperfection can lead to a flawed layer.

Nicole Wagner is cofounder of LambdaVision, a biotech startup that is working on making artificial retinas in low Earth orbit.
JULIE BIDWELL

If an implant requires 200 layers, an imperfection at layer 50 compounds massively by the end. The process is simply inefficient and rife with irregular protein deposition. Early trials showed that this irregularity degraded the artificial retina’s performance.
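
To make “compounds massively” concrete: if each deposition step independently carries some small chance of producing a flawed layer, the odds of a defect-free stack shrink exponentially with the layer count. The Python sketch below is a back-of-the-envelope model; the per-layer defect rates in it are hypothetical illustrations, not LambdaVision process data.

```python
# Back-of-the-envelope model of how per-layer imperfections compound
# across a many-layer film. Defect rates are hypothetical, chosen only
# to illustrate the exponential effect described in the article.

def defect_free_yield(per_layer_defect_rate: float, layers: int) -> float:
    """Probability that every layer deposits cleanly, assuming each
    layer fails independently of the others."""
    return (1.0 - per_layer_defect_rate) ** layers

LAYERS = 200  # the article's example implant

for label, rate in [("Earth, 2% defects per layer (hypothetical)", 0.02),
                    ("Microgravity, 0.1% per layer (hypothetical)", 0.001)]:
    print(f"{label}: {defect_free_yield(rate, LAYERS):.1%} of stacks defect-free")
# At 2% per layer, only ~1.8% of 200-layer stacks come out clean;
# at 0.1% per layer, ~81.9% do. Small per-layer gains transform yield.
```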

It was the sort of thing LambdaVision was hoping to work through as part of MassChallenge, a business incubation program in Boston. Wagner was working in the business accelerator’s co-working space one day in 2017. It had a “Google-y” feel, she felt, with an open-concept office and smart people all around, and she was at the desk they’d assigned her when somebody dropped by to say that the International Space Station National Laboratory was holding a lunch presentation down the hall, and there was free pizza.

Why not, Wagner thought. It would be pretty cool to hear people from NASA talk about the moon and Mars. When she got there, though, it turned out that it wasn’t that sort of presentation at all. Instead, representatives from CASIS—the Center for the Advancement of Science in Space, a nonprofit that operates the ISS National Lab—gave a talk on how they are using microgravity to help people on Earth. 

The US segment of the International Space Station, like Los Alamos, Oak Ridge, and Brookhaven, is a national laboratory dedicated to scientific and technological research. The office simply has a better view. About half the science conducted on the US segment is managed by the ISS National Laboratory out of Florida, with the remainder overseen by NASA. This division of resources allows for a wide range of scientific investigations on the station. Where NASA’s research typically focuses on exploration, space technology, and fundamental science to support future deep-space missions, the ISS National Laboratory aims to develop a sustainable low-Earth-orbit economy, encompassing fields like materials science, biology, pharmaceutical research, and technology development.

Research being conducted on the station touches on metallurgy and fiber optics. Alloys like nitinol (nickel-titanium) can withstand huge temperature swings and are superelastic, with extraordinary potential for medical devices, aerospace, and robotics. Think artificial muscles. The problem is that nitinol is extremely hard to make on Earth because materials settle out and heat can get distributed unevenly during manufacturing, which yields an unreliable product. The same liabilities degrade the quality of fiber optics manufactured on Earth. 

The solution to both is to go to space: in microgravity, heat distributes more uniformly and sedimentation does not occur. Crystallization, the process of forming and growing crystals, is consistent across long distances with minimal degradation (meaning pristine fiber-optic signals even over very long runs of fiber). More broadly, space-based crystallography has applications in almost every field of electronics and biomedicine.

As Wagner learned, researchers have found immediate gains on the space station in everything from the development of more effective vaccines (gravity on Earth interferes with the interaction of antigens and adjuvants) to higher-grade drug formulations and nanoparticle suspensions. One such drug, made by Taiho Pharmaceutical, is used to treat muscular dystrophy and has reached final-stage trials.

“They were talking at that time about things like bioprinting on orbit, and future missions they were planning,” Wagner told me. “It hit me immediately that we could do this—actually leverage microgravity to manufacture an artificial retina. I never envisioned doing anything in space—I didn’t know how to get there, or how it worked. Before that moment, it all sounded like science fiction.” 

After the meeting, she immediately called her team. “There’s a prize that I think we can win,” she said. It was the CASIS-Boeing Technology in Space Prize, which funds research that might benefit from space-station access. “We’re gonna do it.” 

Her team was immediately skeptical. In truth, she had her doubts as well. She was running a small startup. How were they going to build a small, automated science laboratory, put it on the International Space Station, and communicate with it from the ground? How would they afford that? She pulled up a web browser and typed in “raspberry pi communication with space station.” She thought: What am I getting into?

LambdaVision’s artificial retina can be manufactured inside a small box, without need of astronaut intervention.

“It was my super-naïve vision of what space was at the time,” she told me. The proper term that now described her company, she soon learned, was “space adjacent”: a business that is not specifically in the aerospace industry but could benefit from—even work better by—leaving planet Earth. 

She was relieved when she found out that LambdaVision didn’t have to develop its own mission control and space infrastructure. It already existed, and there were partner companies that specialized in serving space-adjacent businesses. Her company linked up with Space Tango, which focuses on health and technology products built in space, to develop its hardware. They managed to condense their open-beaker system into an automated experiment the size of a shoebox. And she was right about one thing: they did win the prize.

The team flew its first mission at the end of 2018, and it showed promising results. In the years since, the company has secured additional funding and flown a total of nine times to the ISS, most recently launching on January 30. With each mission, they have gradually improved their manufacturing hardware, system automation and imaging, and orbital processes. “We’re seeing much more evenly coated films in microgravity, and we’ve overcome other challenges we see in a gravity environment,” Wagner says. “There’s much less waste.”

The system works autonomously, without need of astronaut intervention. Essentially, the team assembles it in a small box, astronauts plug it into power on the ISS, and when it has manufactured the sheets of artificial retinas, an astronaut unplugs it and ships it back to Earth. 

“At first, we just wanted to demonstrate that it’s feasible to do this in space,” says Wagner. “We don’t worry about that now—we are thinking hard now about scaling the system up. To support our early clinical trials, we don’t need millions of artificial retinas. We need hundreds, maybe thousands, to start. And that gives us time to determine how we are going to scale that up as we transition from the ISS—a public space station—to private, commercial space stations in low Earth orbit.”

So far, LambdaVision has performed small-animal studies in rats and advanced to large-animal studies in pigs, successfully installing the implants and demonstrating their tolerability. The company is continuing preclinical development to support clinical trials—doing such things as testing the artificial retinas for efficacy and safety—with a goal of beginning human trials as soon as early 2027.

“When I think about doing it in space and talking about cost and efficiency, I don’t think about it any differently than if somebody said, ‘Hey I’m gonna go do this in China’ or ‘I’m gonna go do this in California,’” Wagner says. “A space station is actually closer. It’s only 250 miles in the sky, versus 3,000 miles to California.”


If LambdaVision is successful, that alone would practically justify the vote cast by John Lewis 31 years ago. It is hard to think of an achievement more profound than curing blindness for millions. But even more than delivering such sweeping and life-changing results, one of the most significant accomplishments of the ISS might be proving that such results can even be achieved in the first place.

So far, no major medicines born on the space station have been brought to market. No mass-produced technologies have yet emerged from low Earth orbit. Research has been iterative, and in-space manufacturing remains in the early stages. But according to Ariel Ekblaw, CEO of the Aurelia Institute, a nonprofit space research center dedicated to working on “critical path” infrastructure for space architectures, NASA’s groundwork for the ISS has made a next generation of more product-focused work possible. 

“Maybe Dan Goldin was ahead of his time in thinking that such work was going to be achieved within the time span of humanity’s first-ever truly large-scale international space station,” she told me, “and what we see now is not just basic science, but entities like biotech companies actually taking what we learned from NASA and the National Lab over the last 20-plus years, and envisioning putting mass-produced products or mass-produced infrastructure in space.”

""
A mock-up of NASA’s Habitation and Logistics Outpost (HALO) module, the first component of a planned moon-orbiting Gateway station.
JAMES BLAIR/NASA

If indeed the handoff of low Earth orbit from NASA-led to commercial operations succeeds, it would be a promising glimpse of the future of the lunar economy. There, as in LEO, NASA is methodically building infrastructure and solving fundamental problems of exploration. The moon-orbiting Gateway station—a NASA-led international effort—is deep into development, with the Habitation and Logistics Outpost (HALO) module set to launch as early as next year. That station will serve as the “second step” of a sustainable moon strategy that was excised from the Apollo program 60 years ago. From there, NASA hopes to cultivate a presence on the lunar surface.

If the LEO model holds, the agency could one day transfer moon-base operations to the private sector and turn to Mars. There might be a lot of money to be made simply in harvesting water on the moon, to say nothing of rare earth elements that lend themselves to manufacturing as well.

One of the harshest constraints on progress in space has been, ironically, space. “Right now, on a good day, only 11 people fit in orbit on ISS and Tiangong,” says Ekblaw. The age of private space stations is going to be fundamentally transformative if only because there will be more room for dedicated researchers.

Axiom’s goal is to double its infrastructure in space every five years. This means doubling the number of people in orbit, the number of hosted payloads, and the amount of manufacturing it can support.
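
Some quick arithmetic shows how fast that compounds: capacity after t years is the starting capacity times 2^(t/5). The sketch below is purely illustrative—the starting figure of four people, one Hab One’s crew capacity, is an assumption for the example, not a published Axiom projection.

```python
# Illustrative compounding of a "double every five years" goal.
# The starting value (4 people, one Hab One's crew capacity) is an
# assumption for illustration, not a published Axiom projection.

def capacity(start: float, years: float, doubling_period: float = 5.0) -> float:
    """Capacity after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for years in (5, 10, 15, 20):
    print(f"Year {years:>2}: {capacity(4, years):>3.0f} people in orbit")
# Prints 8, 16, 32, 64 -- four crew become 64 within two decades.
```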

“Within two to three years, I could send a graduate student to space with Axiom,” Ekblaw says. “It requires a little creative fundraising, but I think that that is opening up a realm of possibility.” In the past, she explains, a doctoral researcher would be unbelievably fortunate to have research fly as part of a single flight mission. Today, however, researchers even in a master’s program can fly experiments repeatedly because of the increased opportunities afforded by commercial spaceflight. In the future, rather than relying on career NASA astronauts—who have myriad responsibilities in orbit and spend a good amount of time as guinea pigs themselves—scientists could go up personally to run their own research projects in greater depth.

“And that,” she says, “is a future that is very, very near.”

David W. Brown is a writer based in New Orleans. His next book, The Outside Cats, is about a team of polar explorers and his expedition with them to Antarctica. It will be published by Mariner Books. 
