
Today — 26 June 2024 — MIT Technology Review

The Download: Introducing the Play issue

26 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Supershoes are reshaping distance running

Since 2016, when Nike introduced the Vaporfly, a paradigm-shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport.

“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.

Still, some argue that they’ve changed the sport too quickly. Read the full story. 

—Jonathan W. Rosen

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to launch tomorrow, so if you don’t already, subscribe now to get a copy when it lands.

Why China’s dominance in commercial drones has become a global security issue

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for everything from photography to spraying pesticides to moving parcels.

But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations. 

To understand why, you need to consider the potential for conflict between China and Taiwan, and the fact that the military implications of DJI’s commercial drones have become a top policy concern for US lawmakers. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The EU has issued antitrust charges against Microsoft 
For bundling Teams with Office—just a day after it announced similar charges against Apple. (WSJ $) 
+ It seems likely it’ll be hit with a gigantic fine. (Ars Technica)
+ The EU has new powers to regulate the tech sector, and it’s clearly not afraid to use them. (FT $)

2 OpenAI is delaying the launch of its voice assistant 
(WP $)
+ It’s also planning to block access in China—but plenty of Chinese companies stand ready to fill the void. (Mashable)

3 Deepfake creators are re-victimizing sex trafficking survivors
Non-consensual deepfake porn is proliferating at a terrifying pace—but this is the grimmest example I’ve seen. (Wired $)
+ Three ways we can fight deepfake porn. (MIT Technology Review)

4 Chinese tech company IPOs are a rarity these days
It’s becoming very hard to avoid the risk of it all being derailed by political scrutiny, whether at home or abroad. (NYT $)
+ Global chip company stock prices have been on a rollercoaster ride recently, thanks to Nvidia. (CNBC)

5 Why AI is not about to replace journalism
It can crank out content, sure—but it’s incredibly boring to read. (404 Media)
+ After all the hype, it’s no wonder lots of us feel ever-so-slightly disappointed by AI. (WP $)
+ Despite a troubled launch, Google’s already extending AI Summaries to Gmail as well as Search. (CNET)

6 This week of extreme weather is a sign of things to come
Summers come with a side-serving of existential dread now, as we all feel the effects of climate change. (NBC)
+ Scientists have spotted a worrying new tipping point for the loss of ice sheets in Antarctica. (The Guardian)

7 Inside the fight over lithium mine expansion in Argentina 
Indigenous communities had been divided in opposition—but as the cash started flowing, cracks started appearing. (The Guardian)
+ Lithium battery fires are a growing concern for firefighters worldwide. (WSJ $)

8 What even is intelligent life?
We value it, but it’s a slippery concept that’s almost impossible to define. (Aeon)
+ What an octopus’s mind can teach us about AI’s ultimate mystery. (MIT Technology Review)

9 Tesla is recalling most Cybertrucks… for the fourth time 
You have to laugh, really. (The Verge)
+ Luckily, it’s not sold that many of them anyway. (Quartz $)

10 The trouble with Meta’s “smart” Ray Bans 
Well… basically they’re just not very smart. At all. (Wired $)

Quote of the day

“We’re making the biggest bet in AI. If transformers go away, we’ll die. But if they stick around, we’re the biggest company of all time.”

—Fighting talk to CNBC from Gavin Uberti, cofounder and CEO of a two-year-old startup called Etched, which believes its AI-optimized chips could take on Nvidia’s near-monopoly.

The big story

This nanoparticle could be the key to a universal covid vaccine

3D model of the mosaic nanoparticle vaccine
COURTESY OF WELLCOME LEAP, CALTECH, AND MERKIN INSTITUTE

September 2022
Long before Alexander Cohen—or anyone else—had heard of the alpha, delta, or omicron variants of covid-19, he and his graduate school advisor Pamela Bjorkman were doing the research that might soon make it possible for a single vaccine to defeat the rapidly evolving virus—along with any other covid-19 variant that might arise in the future.

The pair and their collaborators are now tantalizingly close to achieving their goal of manufacturing a vaccine that broadly triggers an immune response not just to covid and its variants but to a wider variety of coronaviruses. Read the full story.

—Adam Piore

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Happy 80th Birthday to much beloved Muswell Hillbilly Ray Davies, frontman of the Kinks.
+ Need to cool your home down? Plants can help!
+ Well, uh, that’s certainly one way to cope with a long-haul flight. 
+ Glad to know I’m not the only person obsessed with Nongshim instant noodles.

Why China’s dominance in commercial drones has become a global security matter

By: Zeyi Yang
26 June 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. With more than a 90% share of the global consumer market, this Shenzhen-based company’s drones are used by hobbyists and businesses alike for photography and surveillance, as well as for spraying pesticides, moving parcels, and many other purposes around the world.  

But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations. 

The reason? While its market dominance has attracted scrutiny for years, it’s increasingly clear that DJI’s commercial products are so good and affordable they are also being used on active battlefields to scout out the enemy or carry bombs. As the US worries about the potential for conflict between China and Taiwan, the military implications of DJI’s commercial drones are becoming a top policy concern.

DJI has managed to set the gold standard for commercial drones because it is built on decades of electronic manufacturing prowess and policy support in Shenzhen. It is an example of how China’s manufacturing advantage can turn into a technological one.

“I’ve been to the DJI factory many times … and mainly, China’s industrial base is so deep that every component ends up being a fraction of the cost,” Sam Schmitz, the mechanical engineering lead at Neuralink, wrote on X. Shenzhen and surrounding towns have had a robust factory scene for decades, providing an indispensable supply chain for a hardware industry like drones. “This factory made almost everything, and it’s surrounded by thousands of factories that make everything else … nowhere else in the world can you run out of some weird screw and just walk down the street until you find someone selling thousands of them,” he wrote.

But Shenzhen’s municipal government has also significantly contributed to the industry. For example, it has granted companies more permission for potentially risky experiments and set up subsidies and policy support. Last year, I visited Shenzhen to experience how it’s already incorporating drones in everyday food delivery, but the city is also working with companies to use drones for bigger and bigger jobs—carrying everything from packages to passengers. All of this feeds into a plan to build up a “low-altitude economy” in Shenzhen that keeps the city on the leading edge of drone technology.

As a result, the supply chain in Shenzhen has become so competitive that the world can’t really use drones without it. Chinese drones are simply the most accessible and affordable out there. 

Most recently, DJI’s drones have been used by both sides in the Ukraine-Russia conflict for reconnaissance and bombing. Some American companies tried to replace DJI’s role, but their drones were more expensive and their performance unsatisfactory. And even as DJI publicly suspended its businesses in Russia and Ukraine and said it would terminate any reseller relationship if its products were found to be used for military purposes, the Ukrainian army is still assembling its own drones with parts sourced from China.

This reliance on one Chinese company and the supply chain behind it is what worries US politicians, but the danger would be more pronounced in any conflict between China and Taiwan, a prospect that is a huge security concern in the US and globally.

Last week, my colleague James O’Donnell wrote about a report by the think tank Center for a New American Security (CNAS) that analyzed the role of drones in a potential war in the Taiwan Strait. Right now, both Ukraine and Russia are still finding ways to source drones or drone parts from Chinese companies, but it’d be much harder for Taiwan to do so, since it would be in China’s interest to block its opponent’s supply. “So Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US,” James wrote.

If the ban on DJI sales in the US is eventually passed, it will hit the company hard for sure, as the US drone market is currently worth an estimated $6 billion, the majority of which is going to DJI. But undercutting DJI’s advantage won’t magically grow an alternative drone industry outside China. 

“The actions taken against DJI suggest protectionism and undermine the principles of fair competition and an open market. The Countering CCP Drones Act risks setting a dangerous precedent, where unfounded allegations dictate public policy, potentially jeopardizing the economic well-being of the US,” DJI told MIT Technology Review in an emailed statement.

The Taiwanese government is aware of the risks of relying too much on China’s drone industry, and it’s looking to change. In March, Taiwan’s newly elected president, Lai Ching-te, said that Taiwan wants to become the “Asian center for the democratic drone supply chain.” 

Already the hub of global semiconductor production, Taiwan seems well positioned to grow another hardware industry like drones, but it will probably still take years or even decades to build the economies of scale seen in Shenzhen. With support from the US, can Taiwanese companies really grow fast enough to meaningfully sway China’s control of the industry? That’s a very open question.

A housekeeping note: I’m currently visiting London, and the newsletter will take a break next week. If you are based in the UK and would like to meet up, let me know by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. ByteDance is working with the US chip design company Broadcom to develop a five-nanometer AI chip. This US-China collaboration, which should be compliant with US export restrictions, is rare these days given the political climate. (Reuters $)

2. After both the European Union and China announced new tariffs against each other, the two sides agreed to chat about how to resolve the dispute. (New York Times $)

  • Canada is preparing to announce its own tariffs on Chinese-made electric vehicles. (Bloomberg $)

3. A NASA leader says the US is “on schedule” to send astronauts to the moon within a few years. There’s currently a heated race between the US and China on moon exploration. (Washington Post $)

4. A new cybersecurity report says RedJuliett, a China-backed hacker group, has intensified attacks on Taiwanese organizations this year. (Al Jazeera $)

5. The Canadian government is blocking a rare earth mine from being sold to a Chinese company. Instead, the government will buy the stockpiled rare earth materials for $2.2 million. (Bloomberg $)

6. Economic hardship at home has pushed some Chinese small investors to enter the US marijuana industry. They have been buying land in the US, setting up marijuana farms, and hiring other new Chinese immigrants. (NPR)

Lost in translation

In the past week, the most talked-about person in China has been a 17-year-old girl named Jiang Ping, according to the Chinese publication Southern Metropolis Daily. Every year since 2018, the Chinese company Alibaba has hosted a global mathematics contest that attracts students from prestigious universities around the world to compete for a generous prize. But to everyone’s surprise, Jiang, who’s studying fashion design at a vocational high school in a poor town in eastern China, ended up ranking 12th in the qualifying round this year, beating scores of college undergraduates and even master’s students. Other than reading college mathematics textbooks under her math teacher’s guidance, Jiang has received none of the professional training that many of her competitors have.

Jiang’s story, highlighted by Alibaba following the announcement of the first-round results, immediately went viral in China. While some saw it as a tale of buried talents and how personal endeavor can overcome unfavorable circumstances, others questioned the legitimacy of her results. She became so famous that people, including social media influencers, kept visiting her home, turning her hometown into an unlikely tourist destination. The town had to hide Jiang from public attention while she prepared for the final round of the competition.

One more thing

After I wrote about the new Chinese generative video model Kling last week, the AI tool added a new feature that can turn a static photo into a short video clip. Well, what better way to test its performance than feeding it the iconic “distracted boyfriend” meme and watching what the model predicts will happen after that moment?

Kling’s image-to-video feature is now live, and the dramatization is spot on! pic.twitter.com/MgcO3CCl9o

— Gorden Sun (@Gorden_Sun) June 21, 2024

Update: The story has been updated to include a statement from DJI.

Job title of the future: Space debris engineer

26 June 2024 at 05:00

Stijn Lemmens has a cleanup job like few others. A senior space debris mitigation analyst at the European Space Agency (ESA), Lemmens works on counteracting space pollution by collaborating with spacecraft designers and the wider industry to create missions less likely to clutter the orbital environment. 

Although significant attention has been devoted to launching spacecraft into space, the idea of what to do with their remains has been largely ignored. Many previous missions did not have an exit strategy. Instead of being pushed into orbits where they could reenter Earth’s atmosphere and burn up, satellites were simply left in orbit at the ends of their lives, creating debris that must be monitored and, if possible, maneuvered around to avoid a collision. “For the last 60 years, we’ve been using [space] as if it were an infinite resource,” Lemmens says. “But particularly in the last 10 years, it has become rather clear that this is not the case.” 

Engineering the ins and outs: Step one in reducing orbital clutter—or, colloquially, space trash—is designing spacecraft that safely leave space when their missions are complete. “I thought naïvely, as a student, ‘How hard can that be?’” says Lemmens. The answer turned out to be more complicated than he expected. 

At ESA, he works with scientists and engineers on specific missions to devise good approaches. Some incorporate propulsion that works reliably even decades after launch; others involve designing systems that can move spacecraft to keep them from colliding with other satellites and with space debris. They also work on plans to get the remains through the atmosphere without large risks to aviation and infrastructure.

Standardizing space: Earth’s atmosphere exerts a drag on satellites that will eventually pull them out of orbit. National and international guidelines recommend that satellites lower their altitude at the end of their operational lives so that they will reenter the atmosphere and make this possible. Previously the goal was for this to take 25 years at most; Lemmens and his peers now suggest five years or less, a time frame that would have to be taken into account from the start of mission planning and design. 

Explaining the need for this change in policy can feel a bit like preaching, Lemmens says, and it’s his least favorite part of the job. It’s a challenge, he says, to persuade people not to think of the vastness of space as “an infinite amount of orbits.” Without change, the amount of space debris may create a serious problem in the coming decades, cluttering orbits and increasing the number of collisions.  

Shaping the future: Lemmens says his wish is for his job to become unnecessary in the future, but with around 11,500 satellites and over 35,000 debris objects being tracked, and more launches planned, that seems unlikely to happen. 

Researchers are looking into more drastic changes to the way space missions are run. We might one day, for instance, be able to dismantle satellites and find ways to recycle their components in orbit. Such an approach isn’t likely to be used anytime soon, Lemmens says. But he is encouraged that more spacecraft designers are thinking about sustainability: “Ideally, this becomes the normal in the sense that this becomes a standard engineering practice that you just think of when you’re designing your spacecraft.”

Inside the US government’s brilliantly boring websites

26 June 2024 at 05:00

The United States has an official web design system and a custom typeface. This public design system aims to make government websites not only good-looking but accessible and functional for all.

Before the internet, Americans may have interacted with the federal government by stepping into grand buildings adorned with impressive stone columns and gleaming marble floors. Today, the neoclassical architecture of those physical spaces has been (at least partially) replaced by the digital architecture of website design—HTML code, tables, forms, and buttons. 

While people visiting a government website to apply for student loans, research veterans’ benefits, or enroll in Medicare might not notice these digital elements, they play a crucial role. If a website is buggy or doesn’t work on a phone, taxpayers may not be able to access the services they have paid for—which can create a negative impression of the government itself.  

There are about 26,000 federal websites in the US. Early on, each site had its own designs, fonts, and log-in systems, creating frustration for the public and wasting government resources. The troubled launch of Healthcare.gov in 2013 highlighted the need for a better way to build government digital services. In 2014, President Obama created two new teams to help improve government tech.

Within the General Services Administration (GSA), a new team called 18F (named for its office at 1800 F Street in Washington, DC) was created to “collaborate with other agencies to fix technical problems, build products, and improve public service through technology.” The team was built to move at the speed of tech startups rather than lumbering bureaucratic agencies. 

The US Digital Service (USDS) was set up “to deliver better government services to the American people through technology and design.” In 2015, the two teams collaborated to build the US Web Design System (USWDS), a style guide and collection of user interface components and design patterns intended to ensure accessibility and a consistent user experience across government websites. “Inconsistency is felt, even if not always precisely articulated in usability research findings,” Dan Williams, the USWDS program lead, said in an email. 

Today, the system defines 47 user interface components such as buttons, alerts, search boxes, and forms, each with design examples, sample code, and guidelines such as “Be polite” and “Don’t overdo it.” Now in its third iteration, it is used in 160 government websites. “As of September 2023, 94 agencies use USWDS code, and it powers about 1.1 billion page views on federal websites,” says Williams.

To ensure clear and consistent typography, the free and open-source typeface Public Sans was created for the US government in 2019. “It started as a design experiment,” says Williams, who designed the typeface. “We were interested in trying to establish an open-source solution space for a typeface, just like we had for the other design elements in the design system.”

The teams behind Public Sans and the USWDS embrace transparency and collaboration with government agencies and the public.

And to ensure that the hard-learned lessons aren’t forgotten, the projects embrace continuous improvement. One of the design principles behind Public Sans offers key guidance in this area: “Strive to be better, not necessarily perfect.”

Jon Keegan writes Beautiful Public Data (beautifulpublicdata.com), a newsletter that curates visually interesting data sets collected by local, state, and federal government agencies.

Learning from catastrophe

26 June 2024 at 05:00

The philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the metaphor suggests, clock problems obey a certain logic. They are orderly and can be broken down and analyzed piece by piece. When a clock stops working, you’re able to take it apart, look for what’s wrong, and fix it. The fix may not be easy, but it’s achievable. Crucially, you know when you’ve solved the issue because the clock starts telling the time again. 

Wicked Problems: How to Engineer a Better World
Guru Madhavan
W.W. NORTON, 2024

Cloud problems offer no such assurances. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, trying to “fix” a cloud problem often ends up creating several new problems. For this reason, they don’t have a definitive “solved” state—only good and bad (or better and worse) outcomes. Trying to repair a broken-down car is a clock problem. Trying to solve traffic is a cloud problem.  

Engineers are renowned clock-problem solvers. They’re also notorious for treating every problem like a clock. Increasing specialization and cultural expectations play a role in this tendency. But so do engineers themselves, who are typically the ones who get to frame the problems they’re trying to solve in the first place. 

In his latest book, Wicked Problems, Guru Madhavan argues that the growing number of cloudy problems in our world demands a broader, more civic-minded approach to engineering. “Wickedness” is Madhavan’s way of characterizing what he calls “the cloudiest of problems.” It’s a nod to a now-famous coinage by Horst Rittel and Melvin Webber, professors at the University of California, Berkeley, who used the term “wicked” to describe complex social problems that resisted the rote scientific and engineering-based (i.e., clock-like) approaches that were invading their fields of design and urban planning back in the 1970s. 

Madhavan, who’s the senior director of programs at the National Academy of Engineering, is no stranger to wicked problems himself. He’s tackled such daunting examples as trying to make prescription drugs more affordable in the US and prioritizing development of new vaccines. But the book isn’t about his own work. Instead, Wicked Problems weaves together the story of a largely forgotten aviation engineer and inventor, Edwin A. Link, with case studies of man-made and natural disasters that Madhavan uses to explain how wicked problems take shape in society and how they might be tamed.

Link’s story, for those who don’t know it, is fascinating—he was responsible for building the first mechanical flight trainer, using parts from his family’s organ factory—and Madhavan gives a rich and detailed accounting. The challenges this inventor faced in the 1920s and ’30s—which included figuring out how tens of thousands of pilots could quickly and effectively be trained to fly without putting all of them up in the air (and in danger), as well as how to instill trust in “instrument flying” when pilots’ instincts frequently told them their instruments were wrong—were among the quintessential wicked problems of his time. 

To address a world full of wicked problems, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.

Unfortunately, while Link’s biography and many of the interstitial chapters on disasters, like Boston’s Great Molasses Flood of 1919, are interesting and deeply researched, Wicked Problems suffers from some wicked structural choices. 

The book’s elaborate conceptual framework and hodgepodge of narratives feel both fussy and unnecessary, making a complex and nuanced topic even more difficult to grasp at times. In the prologue alone, readers must bounce from the concept of cloud problems to that of wicked problems, which get broken down into hard, soft, and messy problems, which are then reconstituted in different ways and linked to six attributes—efficiency, vagueness, vulnerability, safety, maintenance, and resilience—that, together, form what Madhavan calls a “concept of operations,” which is the primary organizational tool he uses to examine wicked problems.

It’s a lot—or at least enough to make you wonder whether a “systems engineering” approach was the correct lens through which to examine wickedness. It’s also unfortunate because Madhavan’s ultimate argument is an important one, particularly in an age of rampant solutionism and “one neat trick” approaches to complex problems. To effectively address a world full of wicked problems, he says, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.  

Rational Accidents: Reckoning with Catastrophic Technologies
John Downer
MIT PRESS, 2024

While John Downer would likely agree with that sentiment, his new book, Rational Accidents, makes a strong argument that there are hard limits to even the best and broadest engineering approaches. Similarly set in the world of aviation, Downer’s book explores a fundamental paradox at the heart of today’s civil aviation industry: the fact that flying is safer and more reliable than should technically be possible.

Jetliners are an example of what Downer calls a “catastrophic technology.” These are “complex technological systems that require extraordinary, and historically unprecedented, failure rates—of the order of hundreds of millions, or even billions, of operational hours between catastrophic failures.”

Take the average modern jetliner, with its 7 million components and 170 miles’ worth of wiring—an immensely complex system in and of itself. There were over 25,000 jetliners in regular service in 2014, according to Downer. Together, they averaged 100,000 flights every single day. Now consider that in 2017, no passenger-carrying commercial jetliner was involved in a fatal accident. Zero. That year, passenger totals reached 4 billion on close to 37 million flights. Yes, it was a record-setting year for the airline industry, safety-wise, but flying remains an almost unfathomably safe and reliable mode of transportation—even with Boeing’s deadly 737 Max crashes in 2018 and 2019 and the company’s ongoing troubles.

Downer, a professor of science and technology studies at the University of Bristol, does an excellent job in the first half of the book dismantling the idea that we can objectively recognize, understand, and therefore control all risk involved in such complex technologies. Using examples from well-known jetliner crashes, as well as from the Fukushima nuclear plant meltdown, he shows why there are simply too many scenarios and permutations of failure for us to assess or foresee such risks, even with today’s sophisticated modeling techniques and algorithmic assistance.

So how does the airline industry achieve its seemingly unachievable record of safety and reliability? It’s not regulation, Downer says. Instead, he points to three unique factors. First is the massive service experience the industry has amassed. Over the course of 70 years, manufacturers have built tens of thousands of jetliners, which have failed (and continue to fail) in all sorts of unpredictable ways. 

This deep and constantly growing data set, combined with the industry’s commitment to thoroughly investigating each and every failure, lets it generalize the lessons learned across the entire industry—the second key to understanding jetliner reliability. 

Finally, there’s what might be the most interesting and counterintuitive factor: Downer argues that the lack of innovation in jetliner design is an essential but overlooked part of the reliability record. The fact that the industry has been building what are essentially iterations of the same jetliner for 70 years ensures that lessons learned from failures are perpetually relevant as well as generalizable, he says. 

That extremely cautious relationship to change flies in the face of the innovate-or-die ethos that drives most technology companies today. And yet it allows the airline industry to learn from decades of failures and continue to chip away at the future “failure performance” of jetliners.

The bad news is that the lessons in jetliner reliability aren’t transferable to other catastrophic technologies. “It is an irony of modernity that the only catastrophic technology with which we have real experience, the jetliner, is highly unrepresentative, and yet it reifies a misleading perception of mastery over catastrophic technologies in general,” writes Downer.

For instance, to make nuclear reactors as reliable as jetliners, that industry would need to commit to one common reactor design, build tens of thousands of reactors, operate them for decades, suffer through thousands of catastrophes, slowly accumulate lessons and insights from those catastrophes, and then use them to refine that common reactor design.  

This obviously won’t happen. And yet “because we remain entranced by the promise of implausible reliability, and implausible certainty about that reliability, our appetite for innovation has outpaced our insight and humility,” writes Downer. With the age of catastrophic technologies still in its infancy, our continued survival may very well hinge not on innovating our way out of cloudy or wicked problems, but rather on recognizing, and respecting, what we don’t know and can probably never understand.  

If Wicked Problems and Rational Accidents are about the challenges and limits of trying to understand complex systems using objective science- and engineering-based methods, Georgina Voss’s new book, Systems Ultra, provides a refreshing alternative. Rather than dispassionately trying to map out or make sense of complex systems from the outside, Voss—a writer, artist, and researcher—uses her book to grapple with what they feel like, and ultimately what they mean, from the inside.

Systems Ultra: Making Sense of Technology in a Complex World
Georgina Voss
VERSO, 2024

“There is something rather wonderful about simply feeling our way through these enormous structures,” she writes before taking readers on a whirlwind tour of systems visible and unseen, corrupt and benign, ancient and new. Stops include the halls of hype at Las Vegas’s annual Consumer Electronics Show (“a hot mess of a Friday casual hellscape”), the “memetic gold mine” that was the container ship Ever Given and the global supply chain it broke when it got stuck in the Suez Canal, and the payment systems that undergird the porn industry. 

For Voss, systems are both structure and behavior. They are relational technologies that are “defined by their ability to scale and, perhaps more importantly, their peculiar relationship to scale.” She’s also keenly aware of the pitfalls of using an “experiential” approach to make sense of these large-scale systems. “Verbal attempts to neatly encapsulate what a system is can feel like a stoner monologue with pointed hand gestures (‘Have you ever thought about how electricity is, like, really big?’),” she writes. 

Nevertheless, her written attempts are a delight to read. Voss manages to skillfully unpack the power structures that make up, and reinforce, the large-scale systems we live in. Along the way, she also dispels many of the stories we’re told about their inscrutability and inevitability. That she does all this with humor, intelligence, and a boundless sense of curiosity makes Systems Ultra both a shining example of the “civic engagement as engineering” approach that Madhavan argues for in Wicked Problems, and proof that his argument is spot on. 

Bryan Gardiner is a writer based in Oakland, California.

Toys can change your life

26 June 2024 at 05:00

In a November 1984 story for Technology Review, Carolyn Sumners, curator of astronomy at the Houston Museum of Natural Science, described how toys, games, and even amusement park rides could change how young minds view science and math. “The Slinky,” Sumners noted, “has long served teachers as a medium for demonstrating longitudinal (soundlike) waves and transverse (lightlike) waves.” A yo-yo can be used as a gauge (a “yo-yo meter”) to observe the forces on a roller coaster. Marbles demonstrate mass and velocity. Even a simple ball offers insights into the laws of gravity.

While Sumners focused on physics, she was onto something bigger. Over the last several decades, evidence has emerged that childhood play can shape our future selves: the skills we develop, the professions we choose, our sense of self-worth, and even our relationships.

That doesn’t mean we should foist “educational” toys like telescopes or tiny toolboxes on kids to turn them into astronomers or carpenters. As Sumners explained, even “fun” toys offer opportunities to discover the basic principles of physics. 

According to Jacqueline Harding, a child development expert and author of The Brain That Loves to Play, “If you invest time in play, which helps with executive functioning, decision-making, resilience—all those things—then it’s going to propel you into a much more safe, secure space in the future.”

Sumners was focused mostly on hard skills, the scientific knowledge that toys and games can foster. But there are soft skills, too, like creativity, problem-­solving, teamwork, and empathy. According to Harding, the less structure there is to such play—the fewer rules and goals—the more these soft skills emerge.

“The kinds of playthings, or play activities, that really produce creative thought,” she says, “are natural materials, with no defined end to them—like clay, paint, water, and mud—so that there is no right or wrong way of playing with it.” 

Playing is by definition voluntary, spontaneous, and goal-free; it involves taking risks, testing boundaries, and experimenting. The best kind of play results in joyful discovery, and along the way, the building blocks of innovation and personal development take shape. But in the decades since Sumners wrote her story, the landscape of play has shifted considerably. Recent research by the American Academy of Pediatrics’ Council on Early Childhood suggests that digital games and virtual play don’t appear to confer the same developmental benefits as physical games and outdoor play.

“The brain loves the rewards that are coming from digital media,” says Harding. But in screen-based play, “you’re not getting that autonomy.” The lack of physical interaction also concerns her: “It is the quality of human face-to-face interaction, body proximity, eye-to-eye gaze, and mutual engagement in a play activity that really makes a difference.”

Bill Gourgey is a science writer based in Washington, DC.

Do you want to play a game?

By: Mat Honan
26 June 2024 at 05:00

For children, play comes so naturally. They don’t have to be encouraged to play. They don’t need equipment, or the latest graphics processors, or the perfect conditions—they just do it. What’s more, study after study has found that play has a crucial role in childhood growth and development. If you want to witness the absolute rapture of creative expression, just observe the unstructured play of children.

So what happens to us as we grow older? Children begin to compete with each other by age four or five. Play begins to transform from something we do purely for fun into something we use to achieve status and rank ourselves against other people. We play to score points. We play to win. 

And with that, play starts to become something different. Not that it can’t still be fun and joyful! We get so much joy by proxy from watching other people play and achieve that we spend massive amounts of money to do so. According to StubHub, the average price of a ticket to the Super Bowl this year was $8,600. The average price for a Super Bowl ad was a cool $7 million, according to Ad Age.

This kind of interest doesn’t just apply to physical games. Video-game streaming has long been a mainstay on YouTube, and entire industries have risen up around it. Top streamers on Twitch—Amazon’s livestreaming service, which is heavily gaming focused—earn upwards of $100,000 per month. And the global market for video games themselves is projected to bring in some $282 billion in revenue this year.

Simply put, play is serious business. 

There are fortunes to be had in making our play more appealing, more accessible, more fun. All of the features in this issue dig into the enormous amount of research and development that goes into making play “better.”  

On our cover this month is executive editor Niall Firth’s feature on the ways AI is going to upend game development. As you will read, we are about to enter the Wild West—Red Dead or not—of game character development. How will games change when they become less predictable and more fully interactive, thanks to AI-driven nonplayer characters who can not only go off script but even continue to play with each other when we’re not there? Will these even be games anymore, or will we simply be playing around in experiences? What kinds of parasocial relationships will we develop in these new worlds? It’s a fascinating read. 

There is no sport more intimately connected to the ocean, and to water, than surfing. It’s pure play on top of the waves. And when you hear surfers talk about entering the flow state, this is very much the same kind of state children experience at play—intensely focused, losing all sense of time and the world around them. Finding that flow no longer means living by the water’s edge, Eileen Guo reports. At surf pools all over the world, we’re piping water into (or out of) deserts to create perfect waves hundreds of miles from the ocean. How will that change the sport, and at what environmental cost? 

Just as we can make games more interesting, or bring the ocean to the desert, we have long pushed the limits of how we can make our bodies better, faster, stronger. Among the most recent ways we have done this is with the advent of so-called supershoes—running shoes with rigid carbon-fiber plates and bouncy proprietary foams. The late Kelvin Kiptum utterly destroyed the men’s world record for the marathon last year wearing a pair of supershoes made by Nike, clocking in at a blistering 2:00:35. Jonathan W. Rosen explores the science and technology behind these shoes and how they are changing the sport, especially in Kenya. 

There’s plenty more, too. So I hope you enjoy the Play issue. We certainly put a lot of work into it. But of course, what fun is play if you don’t put in the work?

Thanks for reading,

Mat Honan

Puzzle Corner history

26 June 2024 at 00:17

When Allan Gottlieb ’67 began editing the Puzzle Corner column in 1966, he was a junior at MIT, majoring in math. Little did he know then that he was undertaking a project that would last for nearly six decades. If you missed our previous celebrations of Allan, read our 2015 profile, “Puzzle Corner’s Keeper,” and watch the MIT Alumni Association’s video “The Puzzle Guy” from his 50th reunion.

Stress test

Elizabeth Sajdel-Sulkowska was just three months old when Nazi soldiers set fire to her family’s home in the midst of the Warsaw Uprising of August 1944, as the Polish resistance attempted to seize control of the city from the Germans. When that revolt ultimately failed, the city was razed, and there was no time to grab diapers and milk as the family rushed from the burning building. Sajdel-Sulkowska’s parents were taken to Dulag 121, a transitional camp from which they were to be sent to a concentration camp. They escaped that fate only because her mother gave the camp’s milkman her jewelry to deliver a letter to Sajdel-Sulkowska’s godfather, who paid the Germans in charge to release them.

Although her parents’ lives were spared, her father, a biology professor, died of cancer three years later. When her mother, a chemist, got a job as head of a food quality laboratory in Łódź, she eventually had to place Elizabeth in the care of nuns in the countryside, 11 miles away. From ages six through nine, she lived with about 30 other half-orphans and orphans, without running water or electricity or personal attention, taking an hour-long train ride to see her mother on weekends.    

It was a childhood, she says, of “tremendous stress.” 

So perhaps it’s no accident that as an adult, Sajdel-Sulkowska was drawn to the study of stress—whether caused by burns, altered gravity, chemical pollutants, or bacterial infection—and its effect on brain development. In the course of her 57-year career, she has published more than 100 papers, chronicling her research in cells, in animal models, and with postmortem human brain tissue. She has studied the interactions between neurons and the glial cells that protect and support them, the changes in RNA transcription during brain development and in Alzheimer’s disease, and the role of the thyroid hormone in brain development, and published literature reviews on the role of the gut microbiome and gut-brain axis in autism and covid.


As a child, Sajdel-Sulkowska would tell anyone who asked that when she grew up, she wanted to be a professor like her father. At 10, she returned from the orphanage to live with her mother, who had remarried, and she eventually attended an all-girls high school in Łódź. When her metallurgist cousin Witold Vatha Kosicki, SM ’29, learned of her interest in science, he invited her to visit the US so she could interview at MIT, a school she’d never heard of. Getting accepted to Warsaw University’s highly competitive department of mathematics and physics helped her qualify for a visa to the US—and convinced MIT that she was qualified to attend the Institute. After arriving in the US in 1962 and completing a six-week English course (“I barely passed it,” she confesses), she started at the Institute in the spring semester of 1963.

At MIT, Sajdel-Sulkowska planned to study nuclear physics until she took a course on DNA and RNA with Gene Brown, a professor of biochemistry and a pioneer in the field of metabolism. The material was so new there wasn’t even a textbook. But Brown’s lecture on the discovery of the double helix inspired her to switch to biology. “It was fascinating,” she says. “The lectures were so incredible—I knew I wasn’t going back to physics.”

Witold Vatha Kosicki, a relative who obtained a master’s degree in metallurgy at MIT in 1929, suggested to Sajdel-Sulkowska that she should go to MIT. Kosicki and Sajdel-Sulkowska’s father both studied at Kyiv Polytechnic in Ukraine before the Russian Revolution of 1917.
COURTESY OF ELIZABETH SAJDEL-SULKOWSKA
Clockwise, from top left: Sajdel-Sulkowska with her parents and aunt in Poland shortly after the end of World War II; with her mother on the steps at MIT when she came to campus for her interview in the summer of 1962; and with her mother in Krynica, Poland, in 1959.
COURTESY OF ELIZABETH SAJDEL-SULKOWSKA

Sajdel-Sulkowska’s cousin had provided money for her to attend MIT for one semester. “The rest of it had to be kind of patched,” she says. So she washed dishes in the chemistry department, plotted soil stress on graph paper in the mechanical engineering department, collected animal urine samples, and for one year worked as an au pair.

During most of her time at MIT, Sajdel-Sulkowska lived with her mother, who had come with her to the US and worked as a technician in a medical lab on Ames Street. They initially lived on Beacon Street in Boston, in a basement room with exposed pipes and wires, sharing a bathroom with other families. But her advisor, Margaret Freeman of the Russian studies department, visited one day and was so appalled at the conditions that she invited Sajdel-Sulkowska and her mother to stay at her home in Belmont. Then, midway through her undergraduate career, she spent a year in McCormick Hall, which had opened in 1963.


Sajdel-Sulkowska’s time in McCormick was a “turning point,” she says. When she lived off campus, she studied and worked on her problem sets alone and assumed everyone else was doing the same. Her isolation was exacerbated by the language barrier, and she felt even more alone in the face of male peers brimming with confidence, relatives suggesting she switch to secretarial school, and an instructor who told her, after a bad experience with a rat in an animal laboratory class, that MIT was not the place for her. At McCormick, she says, she learned that “not everybody knows everything” and that “there are people helping you—that you don’t have to do it all yourself.”

Sajdel-Sulkowska started her career at a time when there were very few women in science. Though MIT began formally accepting women in 1882, she was one of only two or three women earning a bachelor’s degree in biology in 1967; her entire class of more than 900 had only 20 to 30 women.

Being one of those few women was not easy. In the 1960s and ’70s, when she continued at MIT for graduate school, the field of biology had a culture of what she calls “unchecked harassment.” There was no way to complain without retribution. “That kind of culture created intimidation,” she says. “If you go through incidents of harassment, you become more vigilant.” Male colleagues had to be treated as men first, not simply as colleagues. Still, she says, there were “a lot of helpful people.”

Many of those helpful people were those she encountered in the Margaret Cheney Room, a Building 3 sanctuary for female students complete with a bedroom, shower, and telephone booths. “That was a haven,” she says—a place where she made lifelong friends. It was also there that she wrote her doctoral thesis—longhand, with her husky, Amis, at her side, over the course of three months. She would write for three hours, sleep for 20 minutes, and repeat.

Sajdel-Sulkowska earned an SM in nutrition and food science (or, as she calls it, “eukaryotic biology in disguise”) and an ScD in the same subject with a minor in neuroendocrinology. Her graduate work would be her first foray into the study of stress as she examined DNA-dependent RNA polymerase II, an enzyme that copies DNA into RNA, and its regulation by cortisol, the stress hormone. Through studies in rat liver cells and then, after a nudge from her committee, in live rats, she found that there is a physiological response to stress through regulation of RNA transcription. Her research showed that artificial cortisol injected into rats altered the RNA polymerase enzymes that synthesize the RNA component of ribosomes. Those ribosomes in turn synthesize the proteins that carry out functions in the cell. 

Her first paper on this work was published in 1969, 16 years after the double-helix structure of DNA was discovered; a second paper followed in 1971. At the time, the finding that stress could alter the body at a cellular level was a revelation.

It was an exhilarating time to be studying biology, says Sajdel-Sulkowska; while she was working on her doctorate, researchers at MIT, Caltech, and the University of Wisconsin, Madison, discovered reverse transcriptase, the enzyme that copies RNA into DNA (the counterpart to the RNA polymerases she studied), for which they would later earn a Nobel Prize. “I was working in the laboratory, I was in a great group, things were happening—it was exciting!” she says.

Reflecting on her time at MIT, Sajdel-Sulkowska says she loved the atmosphere (“I liked the fact that you could work late in the evening”) and the energy. The challenges she had to overcome to succeed at the Institute were worth it, she says: “I wanted to do it, and I did it.”

After earning her ScD in 1972, she interviewed for a faculty position at Northwestern University and was offered the job. But she had recently met Adam Sulkowski, a psychiatrist and postdoctoral fellow, who had just arrived from Poland via France on a visa sponsored by Boston University and could not relocate. She returned to Boston, they married that October, and she became a postdoctoral fellow at Brandeis, where she continued to study RNA polymerase in yeast. Two years later, the first of their four sons was born.


Sajdel-Sulkowska carved out a career that was both broad and deep at a time when combining scientific work and motherhood was extremely rare and accommodations for US working mothers practically nonexistent. When her oldest son was born, in 1974, her three-month maternity leave was unpaid. After her second son arrived while she was completing another postdoc, at Shriners Burn Institute at Harvard Medical School (HMS), the cost of day care for two children exceeded her salary. So with no day care, her husband watched the two boys in the morning, and she found herself under a “tremendous amount of stress.”  

And at Shriners, stress was again the subject of her work. In guinea pigs that have suffered severe burns, she discovered, an increase in cortisol inhibits DNA synthesis in the thymus, which plays a key role in immune function. Her research revealed that removing burned tissue as soon as possible leads to a faster return to normal thymus function and a faster recovery from burns.

In 1980 she became a lecturer in the HMS department of psychiatry with an appointment at McLean Hospital, and she was named an assistant professor six years later. Over the next two decades, she would work on a wide range of topics, including the relationship between mercury and autism, the mechanisms of Alzheimer’s disease, and the role of the thyroid hormone in brain development. She balanced work and motherhood with the help of her mother and her husband, who was supportive and proud of her. “Where there is a will, there is a way,” she says.

Sajdel-Sulkowska earned an ScD in nutrition and food science from MIT in 1972 and was named a Presidential Professor by Polish president Andrzej Duda in 2016.

In 1989, Sajdel-Sulkowska spent a sabbatical in the lab of Nobel laureate Walter Gilbert at Harvard, gaining experience in cloning, sequencing, and polymerase chain reaction (PCR)—a time she sees as another turning point in her career. In the Gilbert lab, which she describes as a large, vibrant group of young and older scientists, everyone’s work and opinion mattered. “We frequently met as a group and could freely discuss our experiments,” she says. The experience gave her confidence. “At that point I felt that I may be able to start something by myself,” she says.

Once back at HMS, she strove to create the same sort of atmosphere in her lab and began pursuing grants to fund more independent work. When inspiration struck for an especially ambitious research project a few years later, in 1998, Sajdel-Sulkowska embraced the challenge. She’d been watching Star Trek with her sons when she came up with the idea for an experiment examining the effect of yet another kind of stress: altered gravity. In recent NASA brain research on pregnant rats on the space shuttle Columbia, more than half of the rat pups had died. She wrote a grant proposal to work with NASA’s Ames Research Center to study altered gravity’s impact on rats’ brain development. For her study, she positioned pregnant rats in cages at different points on a 24-foot centrifuge, exposing them and their developing pups to varying levels of greater-than-Earth gravity for 42 days, through pregnancy and lactation. Then she measured the length of time the rat pups were able to stay on top of a motorized rotating cylinder (what’s known as a rotarod test) and discovered that hypergravity decreased motor function. Rat pups that developed at 1.65 times Earth gravity could only stay on the spinning wheel for as little as 10 seconds before falling off, while the pups that developed at Earth gravity were able to stay on for almost a minute. 

Her research suggested that this may be because the higher gravity increases oxidative stress (an excess accumulation of free radicals that can damage the body’s cells) or suppresses thyroid activity, a problem that she had previously found to decrease the mass of the developing cerebellum. She also showed that hypergravity decreases the number of a crucial type of neurons in that region of the brain, which is responsible for movement, among other functions. Curiously, she found that male developing brains were more sensitive to hypergravity than their female counterparts. At the end of the experiment, the cerebellums of the male pups were visibly smaller than normal. 

As her hypergravity research was underway, Sajdel-Sulkowska also examined the effect on brain development of another environmental stressor that had become pervasive: polychlorinated biphenyls, or PCBs, a group of toxic synthetic chemicals used so widely from the 1930s through the 1970s that they contaminated the air, water, and soil. She subjected rat pups that had been exposed to PCBs from before birth to rotarod tests and found that their performance decreased as well. So did the mass of their cerebellums, and as with hypergravity, the effect was greater in males than in females.


In 2010 Sajdel-Sulkowska, who had lost her husband to cancer in 2002, was devastated when her youngest son died at the age of 23 as he was recovering from an accident. Work would prove to be a lifeline. She moved back to Poland, where diving into new research “helped me survive,” she says. First as a visiting professor in veterinary medicine at the Warsaw University of Life Sciences and then teaching and doing research at the Medical University of Warsaw, she had an opportunity to work with many young scientists. Her research collaboration with Katarzyna Czarzasta, who is now an assistant professor at the Medical University of Warsaw, was particularly fruitful—and continues today. “She is a very good mentor,” says Czarzasta, who adds that she treated her students as equals.

While teaching in Poland, Sajdel-Sulkowska encountered many students who suffered from depression. “I also observed great stigma associated with psychiatric disorders in Poland, specifically with depression during pregnancy,” she says. That got her thinking about recent research on the use of probiotics—which are readily available in the grocery store—as an alternative treatment for depression. And that led to several projects on perinatal depression that she hoped would lay the groundwork for a study on probiotics as a treatment for it.

In one, she applied chronic mild stress to rats just before pregnancy to model perinatal depression, which she verified by measuring cortisol levels and time spent grooming. Then she studied their pups and documented negative effects on their neurodevelopment and cardiac development. The effects differed in male and female offspring, and the sex-­dependent cardiovascular effects in females persisted as they aged, potentially affecting the following generation as well. The study added to the growing body of research showing that the impact of environment and behavior—also known as epigenetic effects—can be passed along to offspring. 

In the past, Sajdel-Sulkowska says, experimental work, including research on depression, was performed only on males, so that researchers wouldn’t have to control for women’s monthly hormone fluctuations. But thanks in part to pioneering studies like hers, scientists are beginning to recognize the importance of studying the sexes separately. 

The differences in stress response between males and females are at least partly due to the sex hormones, says Sajdel-Sulkowska. Testosterone increases cortisol levels, so the stress response is greater in males; the effects of stressors on the thyroid hormone, too, are different. But beyond that, she points out that each sex has different issues when it comes to health in general: different microbiota, different disease risks, and different disease progressions and mortality rates. As a result, treatments for many diseases may need to be tailored specifically for males or females to be effective. (See “Depression is different for women. One-size-fits-all drugs aren’t helping.”) And even once both environmental factors and sex differences are considered, individual differences, such as a person’s unique microbiome, are likely to matter too. Sajdel-Sulkowska foresees a day when artificial intelligence will make it possible to correlate the differences in individuals’ microbiomes with disease, ultimately leading to individualized probiotic treatments for a variety of conditions—perhaps including depression. 


Sajdel-Sulkowska would remain in Poland for a full decade, returning to the US in September of 2020. A year later, after 35 years as an assistant professor, she was forced to retire from HMS when Harvard didn’t renew her faculty appointment. Having focused on research without giving much thought to advancement, she was suddenly without an academic home. In 2022, she joined the National Coalition of Independent Scholars (NCIS) so she could continue her work without being affiliated with a particular university.

Sajdel-Sulkowska never had the security of a tenured position and estimates that over the course of her career, her average salary was $35,000 a year. (“I never realized that I could name my compensation,” she says.) But she was never in it for the money; she was driven by the work itself. And in her home country, she received some of the recognition that eluded her at Harvard. During her decade of research and teaching in Poland, she was awarded the country’s highest academic honor when she was named a Presidential Professor by Polish president Andrzej Duda.

Elizabeth Sajdel-Sulkowska
CIARA CROCKER

Upon returning to the US during the pandemic, Sajdel-Sulkowska tackled a literature review to look for connections between covid, the microbiome, and the gut-brain axis, the physical and biochemical signals that go back and forth between the digestive system and the central nervous system (see sidebar). But the theme of stress continued to intrigue her; she published a paper on maternal stress in rats in 2021 and has another in progress. This recent research closes the circle opened with her doctoral thesis at MIT, she says: “I did my PhD thesis on stress—and I’m ending my career with [studying] stress.”

Sajdel-Sulkowska sees how her current work might apply to her own life. Her mother endured extreme stress during World War II, and she experienced extreme stress herself as a child born just before the Warsaw Uprising. Now, she wonders how that might affect her own children—in humans, the epigenetic effect of stress is known to stretch for multiple generations.

Her last year in Poland, she and her oldest son mapped the routes her aunt and mother took after the house where she was born was razed by the Nazis. They visited the transitional camp that her parents were taken to. And on her way to the presidential palace in 2016 to accept her academic honor, she passed by the site of her parents’ burned home. She remembered telling people she wanted to grow up to become a professor, like her father.

“Wow,” she thought. “What a long way to come and experience that.” 


Research snapshot

Highlights from Sajdel-Sulkowska’s long and varied research career

Beyond her studies of stress, Elizabeth Sajdel-Sulkowska has delved into many other areas throughout her long career. Here’s a sampling of what else she has studied:

The effect of thimerosal on brain development

People had long speculated that thimerosal, a mercury-based preservative still used in small amounts in some vaccines and medicines, might be linked to autism. Starting in 2006, with support from two autism organizations, Sajdel-Sulkowska found that exposing rat pups to thimerosal during the perinatal period results in motor impairment, increased oxidative stress in the cerebellum, and a decrease in an enzyme called deiodinase 2, which is involved in regulation of the thyroid hormone, and that the effects were more pronounced in males. Looking at postmortem human brains, she found that oxidative stress markers were increased in the cerebellum in people with autism, and that their thyroid hormone levels and thyroid-hormone-dependent gene expression were disrupted as well. However, she did not observe different mercury levels in postmortem brains of people with and without autism. Although the small amount of thimerosal used in vaccines has not been implicated in autism, in 1999 public health departments and the American Academy of Pediatrics recommended limiting its use as a precautionary measure. Thimerosal has since been eliminated from nearly all childhood vaccines and reduced or eliminated in other vaccines.

The impact of perinatal bacterial infection on the developing brain

In 2008-’09, during a fellowship in Japan, she studied the effect of frequent perinatal infections on brain development by exposing rats to a lipopolysaccharide, or LPS, a type of molecule found in the outer membrane of certain bacteria. She found that, similar to rats exposed to thimerosal, LPS-exposed rats had increased oxidative stress, a decrease in deiodinase 2, a decrease in thyroid hormone, and decreased gene expression in the cerebellum. Their motor learning, as measured by the rotarod test, was also impaired.

The relationship between covid, the microbiome, and the gut-brain axis

During the pandemic, Sajdel-Sulkowska reviewed the published literature on all three topics to look for connections. Others had found that the virus SARS-CoV-2 enters the body by binding to the receptor for a human protein known as angiotensin-converting enzyme 2, or ACE2, a receptor found on cells in the lungs but also in the gut, among other places. And researchers had learned that the virus occupies and blocks sites through which some nutrients normally enter the gut, leading to a deficiency of those nutrients and decreased production of short-chain fatty acids for which they are required. Sajdel-Sulkowska hypothesized that the deficiency in short-chain fatty acids, which decrease inflammation and also contribute to normal brain function, may play a role in the “brain fog” and neuropsychiatric disorders some covid patients experience. Those effects, she predicts, could potentially be combated with probiotics. She conducted a similar literature review of the relationship between autism and the gut microbiome.

Depression is different for women. One-size-fits-all drugs aren’t helping.

25 June 2024 at 17:00

The trauma of an accident, an assault, abuse, or even simply losing someone we love can have long-term effects. For some, it can trigger mental illnesses. But what if, in the hours after the experience, you could take a pill that made you less likely to fall ill? And what if there were such a pill tailored specifically for women? That’s the goal Briana K. Chen ’16, a postdoctoral neuroscientist at Columbia University, spends her days nudging us closer to.

To grasp the problem she’s working toward solving, it’s useful to understand the perverse situation we face now: women are roughly twice as likely as men to experience depression, yet antidepressants were predominantly tested on male subjects. Moreover, while certain antidepressants seem to work better in men and others in women, that usually isn’t reflected in how they’re prescribed. Women are also more likely to experience adverse side effects with antidepressant use. Likewise, women face a higher risk of developing PTSD and anxiety, and again, the drugs used to treat these conditions were tested mainly on men. This means millions of women around the world suffer unnecessarily.

Chen’s research suggests it doesn’t have to be that way. She investigates the interaction between sex differences, stress, and mental illnesses, and her work could lead to some of the first female-specific treatments for depression, PTSD, and anxiety. 

Chen finds it baffling that women and men receive the same medical treatments for psychiatric disorders when the differences between them are so significant—not only biologically, but also in terms of how they experience the same illnesses. Women, for example, are more likely to have anxiety alongside depression. In men, on the other hand, depression is likelier to coincide with substance abuse disorders. 

Part of Chen’s frustration with the status quo can be traced back to her upbringing. She went to all-girls schools from second grade through high school. The process of emerging from an insulated, all-­female environment into the wider world was eye-opening for her. “One thing that was really striking, in the transition from high school to college, was the realization that the default is not female. The default is male. That was a bit of a shock to me,” she says. 

Chen credits her abrupt exit from that nurturing environment with giving her a more clear-eyed view of current societal issues. “Injustices and inequalities exist, and you’re better poised to be able to see them and therefore address them,” she says. 

Early results suggest that one dose of the drug is enough to prevent a whole range of fearful, depressive, and anxiety-like behaviors in female mice—and it appears to have very long-lasting effects.

When she arrived at MIT in the fall of 2012, Chen knew she wanted to major in brain and cognitive sciences. Through the Undergraduate Research Opportunities Program (UROP), she got a chance to delve into neuroscience research in several MIT labs, including that of Nobel Prize winner Susumu Tonegawa, whose team had just identified brain cells involved in encoding memories. Soon her interest in mental health more broadly was piqued.

“This whole journey began at MIT,” she says—referring both to her studies and to her deepening personal interest in the topic. The school “has a really big focus on mental health, especially for undergrads,” she adds. “Maybe it has something to do with the stressful, high-achieving environment.” 

Chen says her parents inadvertently played a role in getting her interested in stress and resilience. They are first-­generation immigrants—her mother from China and her father from Malaysia—who met in the UK while studying chemistry. Both went to the US for graduate school and then, in her mother’s case, postdoctoral training. “They are immigrants who did really well, but there are lots of other immigrants who struggle. And it’s very interesting to see what the combination of factors is behind that, how changes and different environments interact with intrinsic biological properties to do with resilience and adaptation,” she says.  

In 2014, the summer before her junior year, Chen got a summer UROP working for Steve Ramirez, PhD ’15, who was then a doctoral student in Tonegawa’s lab, studying how we form memories and how optogenetics—a technique that uses light to control the activity of specific neurons—can be used to reactivate positive memories in the brain as a treatment for PTSD and depression. (Ramirez is now a professor of neuroscience at Boston University.) 

The work of Briana K. Chen ’16 could lead to some of the first female-specific drugs for depression, PTSD, and anxiety.
COURTESY OF BRIANA CHEN

The work was a revelation for Chen, who realized while working with Ramirez that she wanted to focus on studying stress-related disorders. In 2016 she applied to the doctoral program at Columbia University and got in. She landed in the lab of neuroscientist Christine Ann Denny, where she focused on developing sex-specific drugs that can enhance stress resilience and prevent stress-induced mental illnesses. Today Chen is a postdoc in Denny’s lab, and Denny describes Chen as her “right hand.” 

“Most students leave my lab with no patents, or perhaps one. Honestly, with Bri I lose track,” she adds, with a laugh. (Chen says she’s filed six nonprovisional patents—formal patents that will be reviewed by the patent office—but even she has lost track of the informal provisional ones.) 

Among the many patents she’s filed, one stands out. It’s for a mental-health application of a peptide drug called Bay 55-9837 that she’s currently investigating in animal models. Originally developed by Bayer in 2002 as a potential treatment for diabetes, the drug binds to and activates a receptor in the brain called VPAC2, which is known to regulate stress responses in female mice. Chen’s idea is that it could also serve as a “vaccine” for mental illness, which women could take in the wake of a trauma. 

Chen and her colleagues discovered the compound’s potential for warding off negative mental effects of trauma in a roundabout way. They knew ketamine, an anesthetic sometimes used to treat depression, reduces the likelihood that people at risk for psychiatric disorders will develop them, but they wanted to investigate exactly how it does that. Chen decided to test whether ketamine was acting through the VPAC2 receptor or some other mechanism, so she administered Bay 55-9837 alongside the ketamine as a means to dial the receptor’s activity up and down during testing. In the process, she discovered that Bay 55-9837 was effective in female mice—but not males—as a prophylactic on its own, without any ketamine involved.

Early results suggest that one dose of the drug is enough to prevent a whole range of fearful, depressive, and anxiety-­like behaviors in female mice. Not only that, but it appears to have very long-­lasting effects after a single dose is administered. It’s a finding that’s hugely promising, although Chen warns there’s still a lot to investigate—including safety, possible side effects, and dosing levels—before it can be tested in humans.

Chen is optimistic about the drug’s potential but acknowledges it could fail at a future clinical hurdle. It’s crucial to “proceed with caution and make sure we have all the data so that we can ensure the safety of any potential future patients,” she adds. “Women’s mental health is definitely an urgent matter, but that just means it is even more important for us to make sure that we are as informed and careful as possible when developing treatments.” 

Her main goal as a researcher, she explains, is to contribute to how we understand the specific neurobiological mechanisms behind the ways women respond to stress. In the longer term, she hopes a more sex-specific approach will be adopted by other fields within medicine. It’s a way of treating people that could lead to far better outcomes, she argues.

“If we can make female-specific antidepressants, why stop there?” she says. “Couldn’t we start developing female-­specific drugs to treat cardiac disease or autoimmune disorders? Could we start developing male-specific drugs to treat diseases as well? Overall, I think we could use this approach to move toward a more widespread model of personalized medicine where we use sex to inform treatment plans to improve the health of all patients.” 

Fighting fatphobia

“I felt too fat to be a feminist in public.”

The startling admission appears in the opening paragraph of Kate Manne’s new book, Unshrinking: How to Face Fatphobia. With that single frank and sobering sentence, Manne, an associate professor of philosophy at Cornell, captures the pervasiveness of anti-fat bias—and its stifling impact.  

Manne had tapped into the zeitgeist of #MeToo with her 2017 book, Down Girl: The Logic of Misogyny, and was frequently called upon by the press to comment on current events like Supreme Court Justice Brett Kavanaugh’s confirmation hearings. But in early 2019 she turned down the opportunity to go on an all-expenses-paid publicity tour of London to promote the paperback release because she felt too self-conscious about her weight. The experience made her uncomfortably aware that even she, an Ivy League academic with a PhD from MIT, had internalized our society’s anti-fat bias. 

“The combination of being publicly feminist and fat is a way of violating patriarchal norms and expectations in this very fundamental way,” she says, making it difficult to speak out “in a body that is ripe to be belittled and mocked.”


Manne grew up in Melbourne, Australia, where she recalls being called fat for the first time by a classmate in fifth grade PE class. She’d been fascinated by philosophy, which she describes as “thinking about thinking,” since the age of five, when a family friend who was a philosopher asked her why she was catching butterflies in a net and taking away their freedom. So she studied the subject in college and then wound up at MIT for grad school because she wanted to study with Sally Haslanger, a professor of philosophy and women’s and gender studies. “Sally proved to me, and continues to do so today, that philosophy can be rigorous, nuanced, socially aware, and politically savvy,” Manne says. After earning her PhD in 2011 and spending two years as a junior fellow at the Harvard Society of Fellows, she joined the faculty at Cornell, where her research focuses on moral, feminist, and social philosophy.

In Down Girl, Manne outlined the distinction between sexism (a patriarchal belief system) and misogyny (the enforcement of patriarchal norms by punishing women who violate them). The book was widely hailed: Rebecca Traister, author of Good and Mad: The Revolutionary Power of Women’s Anger, said Manne did “a jaw-droppingly brilliant job of explaining gender and power dynamics,” and in 2019 Manne was voted one of the world’s top 10 thinkers by the UK magazine Prospect. Her second book, Entitled: How Male Privilege Hurts Women, made it onto the Atlantic’s list of the best 15 books of 2020 and Esquire’s list of 15 exceptional feminist books.

Haslanger isn’t at all surprised by her former graduate student’s success: “It was clear to those who knew her well that with her philosophical training, her beautiful writing, and her keen insight into the social domain, she would become a major public intellectual. And she has surpassed even our expectations.”

Nonetheless, says Manne, “it took 25 years for the personal piece of [feminism] to fall into place along with the political piece.” That personal aspect is chronicled in painful detail in Unshrinking, as Manne connects the dots between misogyny and the fatphobic bullying she suffered as a teen. “The form misogyny took was weaponized fatphobia against me as a slightly larger-than-average teen girl,” she explains.

“Since my early 20s, I have been on every fad diet. I have tried every weight-loss pill. And I have, to be candid, starved myself, even not so long ago,” Manne writes in the introduction to Unshrinking. “I can tell you precisely what I weighed on my wedding day, the day I defended my PhD dissertation, the day I became a professor, and the day I gave birth to my daughter. (Too much, too much, too much, and much too much, to my own mind then.) I even know what I weighed on the day I arrived in Boston—fresh off the plane from my hometown of Melbourne, Australia—to begin graduate school in philosophy, nearly twenty years ago.”

Although she had been aware of the work of fat activists, it was motherhood that finally pushed Manne to stop engaging in disordered eating and extreme dieting, and ultimately to write Unshrinking. She didn’t want her daughter “to bear witness to a mother trying to shrink herself in a futile and pointless and frankly sad way,” she says. In conducting research for the book, she came across some alarming statistics: by age six, more than half the girls in one study had worried about being fat, and another study found that by age 10, an astounding 80% of girls had been on a diet. Even many feminists “still want to shrink our bodies in ways that conform to patriarchal norms and expectations that are extremely hard to resist,” Manne says.


Unshrinking joins the growing literature on anti-fat bias, including the work of sociologist Sabrina Strings, whose book Fearing the Black Body details its racist origins, tracing the shift from the admiration of plumpness as a sign of wealth to the vilification of fat that she argues developed alongside the transatlantic slave trade. Like recent books by Aubrey Gordon and journalist Virginia Sole-Smith, Manne’s uses scientific research to debunk pervasive misconceptions—for example, about the extent to which people can control the size of their bodies—and even to counter the idea that obesity is a disease that requires a cure or large-scale policy response. 

Research from as early as 1959 has shown that most people cannot sustain long-term weight loss. A recent piece in the journal Obesity finds that weight regain “occurs in the face of the most rigorous weight-loss interventions” and that “approximately half of the lost weight is gained back within 2 years and up to 70% by 5 years.” Not even those who undergo bariatric surgery, the researchers add, are immune to weight regain. Two physician researchers from Columbia and the University of Pennsylvania recently reported in Nature Metabolism, “Overall, only about 15% of individuals can sustain a 10% or greater non-surgical, non-pharmacological weight loss.” 

Likewise, while exercise is beneficial for our bodies, a research review published in Diabetes Spectrum concludes that it’s not firmly established that exercise plays a big role in helping people lose weight. 

“I can tell you precisely what I weighed on my wedding day, the day I defended my PhD dissertation, the day I became a professor, and the day I gave birth to my daughter.” 

And although the medical establishment has been saying for decades that obesity leads to diseases like diabetes and hypertension, Manne points out that the dynamics are complex and there is much that is still unknown. While being very heavy is correlated with increased mortality, she maintains that we cannot assume it is a direct cause. For example, researchers have found that diabetes is associated not only with obesity but also with poverty, food insecurity, and even past trauma.

Manne’s argument is not that being fat is unassociated with health risks, but rather that the connection is oversimplified. Given that there’s no proven route to long-term weight loss for most people, she says, we should focus on treating people’s diagnosable problems (such as diabetes and heart disease) rather than stigmatizing them because of their size. But anti-fat bias is all too common among medical professionals, who often misdiagnose fat people’s actual health problems because they ignore their reported symptoms. The prospect of dealing with this prejudice can also discourage fat people from going to the doctor at all. In 2020, a review of scientific publications led an international multidisciplinary expert panel to conclude that weight bias can lead to discrimination, undermining people’s human and social rights as well as their health. The 36 experts pledged in Nature Medicine to work to end the stigma attached to obesity in their fields.

What is needed, Manne argues, is to dismantle diet culture, which not only does not make people thinner in the long term but appears to make them fatter: “The studies that I draw on in the book make a very clear empirical case that a really excellent way to gain weight is to diet.” For example, a 2020 review in the International Journal of Obesity suggests that dieting can lead to eventually regaining more weight than was lost, given how one’s metabolism reacts to food restriction. A better way to improve public health, Manne argues, is to reduce the bias against larger bodies and make public spaces more accessible for people of all sizes. While data on the potential effects is limited, one 2018 study suggests that a weight-­neutral approach known as Health at Every Size (HAES) is beneficial for body image and quality of life.

As a philosopher, Manne offers novel insights by looking at the way fatness is framed as a moral issue. Western societies see fat people as moral failures because, it is assumed, they lack the willpower to eat healthy foods and exercise. Manne argues that we have been conditioned to feel disgust toward fat people, and that this disgust is both “socially contagious” and deeply ingrained. Furthermore, we don’t trust feelings of pleasure derived from eating, or we don’t believe we inherently deserve food that tastes good; instead, we think we have to “earn” it, usually by depriving ourselves. Indeed, most of us are subject to frequent moralizing about “good” and “bad” food—whether from friends, family members, or our own internal voices.

All of this is part of what Manne calls the “fallacy of the moral obligation to be thin.” Secular moral philosophy is “clear that happiness and pleasure are good things, which we should be increasing in the world and promoting,” she says. “There’s nothing shameful about something that feels good, that some people want intensely, as long as it doesn’t hurt others or deprive others.” 

In her new book Unshrinking: How to Face Fatphobia, philosopher Kate Manne offers novel insights by examining how fatness is framed as a moral issue.

So if diet culture causes pain, deprivation, and eating disorders, Manne maintains, we have a moral obligation to avoid it and instead to derive pleasure from eating. She reasons, “If you do think of there being a kind of moral value in self-care, then we really ought to be satisfying our appetites by eating satisfying food, as well as nourishing our bodies for instrumental reasons.” In her book, she calls diet culture a “morally bankrupt practice.”

But Manne’s experience as a fat academic has shown that most highly educated people still cling tightly to the “pseudo-obligation to try to shrink ourselves,” she says. Stereotypes of fat people as lazy and dumb are particularly harmful in spaces where intellect is highly prized. Anti-fat bias is pronounced in her field, Manne believes, “because as much as we pretend in philosophy not to all be dualists, we value the mind much more than the body, and we’re deeply suspicious of the body.” Tracing this “philosophical disapproval of indulgence” back to Plato and Aristotle, she says: “We think of the body as something feminine, wild, out of control, irrational—not a source of wisdom, but a source of really antiphilosophical distraction that will prevent us from … using our minds to think deep thoughts.”

The default image of an academic is thin, white, male, and able-bodied, which “distorts both our sense of who can think important thoughts and … what intellectual authority really is,” Manne says. This makes being a fat woman in academia particularly fraught. Favorable student evaluations are critical for gaining tenure, and numerous studies have shown that students already tend to judge female professors more harshly.

UCLA sociology professor Abigail Saguy finds Manne’s work compelling because she writes in an accessible way, “really reaching beyond the ivory tower and communicating important and complex topics.” A decade ago, Saguy wrote What’s Wrong with Fat, and she has seen a rise in awareness about anti-fat discrimination. However, she also notes the co-optation of “body positivity” rhetoric by weight-loss companies and influencers in order to sell their products.

Of course, the biggest news in the weight-loss industry has been the explosion in popularity of injectable drugs like Ozempic, which was originally developed to treat type 2 diabetes. Although Ozempic can be life-changing for diabetics, as well as potentially for those with cardiovascular risks, Manne says, “the majority of people who are pursuing intentional weight loss via these drugs are not even in higher risk categories” based on their body mass index, or BMI. A measure of body fat based on height and weight, BMI classifies people as underweight, normal, overweight, or obese and has been deemed deeply flawed by the American Medical Association (AMA) since it relies on data collected from non-Hispanic white people and has been used in racist ways. Even so, Manne notes that an analysis of data from the US National Health Interview Survey showed that people in the “overweight” category actually have the lowest all-cause mortality (lower than those in the “normal” category) even after controlling for smoking and preexisting diseases. So there’s often no medical need for people in this group—about a third of the US population—to use weight-loss drugs, she says.

With enormous profits at stake—the valuation of Novo Nordisk, which makes Ozempic, exceeds Denmark’s annual GDP—companies are eager to promote the idea of an obesity “epidemic” that got a boost in 2013, when the AMA declared obesity a disease even though a council it had convened on the matter advised against doing so. “Obviously these companies have a massive incentive to overinflate the extent and the seriousness of the problem,” she says, adding that if people discontinue these drugs because their side effects are intolerable or they’re too expensive, “the weight is gonna come roaring back.” 

Manne believes that while people are entitled to pursue intentional weight loss, no one should feel obligated to do so. And when fat influencers or activists lose weight in a very public way, she says, they further stigmatize fat people who choose the path of fat acceptance. A recent New York Times article buttresses her argument. Manne is worried about “a real reversal of the progress we’ve made in fat-activist communities,” fearing that it may be easier for doctors to prescribe drugs to fat patients than to reexamine their own long-held negative beliefs about them. 

However, the positive feedback for Manne’s work suggests that it can make an impact. Roxane Gay, author of Hunger, proclaimed Unshrinking “an elegant, fierce, and profound argument for fighting fat oppression in ourselves, our communities, and our culture.” Booklist called it “a brilliant takedown of fatphobia” in its starred review. Manne is particularly heartened by readers who have told her that the book convinced them to stop dieting or helped them advocate for themselves—for example, by asking for an airplane seatbelt extender without shame. Progress may be slow, but it’s progress.

SuperLimbs for astronauts

It’s hard not to laugh at NASA’s blooper reel of astronauts falling and bouncing in slow motion on the moon. But coping with inertia where gravity is one-sixth that of Earth is no laughing matter when you’re wearing a constricting space suit and need to finish an exhausting task. So mechanical engineering professor Harry Asada (center) and colleagues are developing wearable robotic limbs to help astronauts get back on their feet after a fall. Based on the “SuperLimbs” Asada designed to assist construction workers and shipbuilders, the limbs extend from a backpack that would also contain the astronaut’s life support system along with a controller and motors to provide power. As part of NASA’s planned Artemis mission, astronauts will be expected to build the first permanent moon base—a physically demanding project with a high risk of falls during multiple extended extravehicular activities (EVAs). “We think that SuperLimbs can help them recover so they can be more productive and extend their EVAs,” says doctoral student Erik Ballesteros, SM ’23 (right). Postdoc Sang-Yoep Lee (left) is a coauthor of a paper on the work.

Erik Ballesteros, SM ’23 (right), Harry Asada (center), and Sang-Yoep Lee (left)
JENNIFER CHU

Artificial reefs could protect coastlines and marine life

In tropical waters, coral reefs shelter marine life and buffer islands from stormy seas—but these natural structures are threatened by the effects of climate change, which is also multiplying the extreme weather events that leave coastal communities vulnerable to flooding and erosion. 

An MIT team now hopes to fortify coastlines with “architected” reefs—sustainable offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other organisms to thrive.

What they propose is a series of vertical cylinders, each surrounded by four long, rudder-like slats. The engineers found that this structure efficiently breaks a wave into turbulent jets, ultimately dissipating most of its total energy.  

MIT researchers test the wave-breaking performance of two artificial reef structures in the MIT Towing Tank.
COURTESY OF THE RESEARCHERS

Researchers at the MIT Sea Grant had noticed that cylindrical blowout-preventing valves in offshore oil and gas wells generated a high amount of drag and wondered if a similar structure could help tame waves. They collaborated on the design with researchers at the Center for Bits and Atoms, who adapted their work on ultralight cellular structures for the aerospace industry.

The researchers 3D-printed a lab-scale version from plastic but determined that using a more porous material would be as effective. They plan to fabricate full-scale structures from sustainable cement, molding it in a pattern of egg-carton-like “voxels” that would be hospitable for fish. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect about half a mile from shore. Initial experiments with the prototypes suggest that the architected reef could reduce the energy of incoming waves by more than 95%.

“This would be like a long wave-breaker,” says Michael Triantafyllou, ’77, ScD ’79, a professor of mechanical engineering and director of the MIT Sea Grant, who is the senior author of a paper on the work. “If waves are six meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So this kills the impact of the waves, which could prevent erosion and flooding.”
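Those two figures can be sanity-checked with a back-of-the-envelope calculation: for deep-water waves, energy scales with the square of wave height, so removing a given fraction of the energy shrinks the height by the square root of what remains. The sketch below is illustrative only—the function name and the deep-water scaling assumption are ours, not the researchers’.

```python
import math

def transmitted_height(incoming_height_m: float, energy_reduction: float) -> float:
    """Estimate wave height downstream of a barrier that dissipates a given
    fraction of wave energy, assuming energy ~ height**2 (deep-water waves)."""
    remaining_energy_fraction = 1.0 - energy_reduction
    return incoming_height_m * math.sqrt(remaining_energy_fraction)

# A 6 m wave hitting a reef that dissipates 95% of its energy:
print(round(transmitted_height(6.0, 0.95), 2))   # about 1.34 m

# Getting below 1 m would require dissipating roughly 97.3% of the energy:
print(round(transmitted_height(6.0, 0.973), 2))
```

On this rough model, taking a six-meter wave down to under a meter implies dissipating somewhat more than 95% of the energy, which is consistent with the team’s “more than 95%” figure.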

The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the design could be modular, scalable to any desired size, and easy to construct on site or to transport and install in various offshore locations. “Now we’re simulating actual sea patterns and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha ’23, a graduate student at MIT who recently joined the group.

Next, the team hopes to work with beach towns in Massachusetts, where the water is too cold for coral, to test the structures on a pilot scale.

“These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about five meters tall, and would cost something like $6 million per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

Evaporation without heat

In a series of painstakingly precise experiments, a team of researchers at MIT has confirmed an astonishing discovery: light can cause water to evaporate without involvement from any source of heat.

The phenomenon can occur at any surface where air and water meet, whether it’s flat like a pond or curved like a droplet of cloud vapor: light striking the surface breaks water molecules free and lets them float away. The researchers call it the “photomolecular effect,” by analogy to the photoelectric effect discovered by Heinrich Hertz and explained by Albert Einstein. The strength of the effect depends on the angle of the light, the exact color of the light, and its polarization.

The finding could help explain a phenomenon that has mystified climate scientists for more than 80 years, in which clouds are measured to be absorbing more sunlight than conventional physics holds possible: the additional evaporation could account for the discrepancy. This realization could alter calculations of how climate change affects cloud cover and precipitation. The photomolecular effect could also be harnessed for industrial processes such as desalinating water or drying materials.

“I think this has a lot of applications,” says engineering professor Gang Chen, who wrote a paper on the work with postdocs Guangxin Lv and Yaodong Tu and graduate student James Zhang, SM ’21. Noting that drying consumes 20% of all industrial energy usage, he says the team has already been approached by companies looking to use the effect for evaporating syrup and drying paper in a paper mill. 

“We’re exploring all these different directions,” he adds. “And of course, it also affects the basic science, like the effects of clouds on climate, because clouds are the most uncertain aspect of climate models.”

Drugs are more effective at certain times of day

Using engineered mini-livers derived from donated human cells, MIT researchers have found that the time of day a drug is administered could significantly affect how much of it is available to the body and how much may be broken down into toxic by-products.

The researchers identified more than 300 liver genes that follow a circadian clock, including many involved in drug metabolism and others that are involved in inflammation. Because of these rhythmic variations in gene activity, enzymes that break down Tylenol, for example, are more abundant at certain times of day than others. 

The study also revealed that the liver is more susceptible to infections such as malaria at certain points in the circadian cycle, when fewer inflammatory proteins are being produced—possibly because its response to pathogens declines after meals, when it has typically been exposed to an influx of microorganisms that might trigger inflammation even if they are not harmful. 

“One of the earliest applications for this method could be fine-tuning drug regimens of already approved drugs to maximize their efficacy and minimize their toxicity,” says Professor Sangeeta Bhatia, SM ’93, PhD ’97, a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES), who is the senior author of the new study.

The MIT researchers are now working with collaborators to analyze a cancer drug they suspect may be affected by circadian cycles, and they hope to investigate whether this may be true of drugs used in pain management as well. They are also taking advantage of the cycles in inflammatory signals to study infections that are usually difficult to establish in engineered livers, including certain types of malaria.

Sweat may protect against Lyme disease

25 June 2024 at 17:00

Most people’s sweat contains a protein that can prevent Lyme disease, researchers at MIT and the University of Helsinki have discovered. They also found that about one-third of the population carries a less protective variant that makes the tick-borne infection more likely.

By running a genome-wide association study, the researchers identified three variants more common in people who’d had Lyme disease. One—in a gene for a secretoglobin, a type of protein that in this case is produced primarily in the sweat glands—was previously unknown. In vitro, it significantly inhibited growth of Lyme-causing bacteria, but a variant version required twice as much to do so. And when mice were injected with Lyme bacteria that had been exposed to the normal version of the sweat protein, they did not develop the disease. 

It’s unknown how the protein inhibits the bacteria, but the researchers hope it can be used in preventive skin creams or to treat the 10% or so of Lyme infections that don’t respond to antibiotics.

“We think there are real implications here for a preventative and possibly a therapeutic,” says Michal Caspi Tal of MIT’s Department of Biological Engineering, one of the senior authors of the new study. She also plans to study whether the 10 other secretoglobins in the human body could have antimicrobial qualities too.

Sprayable gel simplifies surgeries

25 June 2024 at 17:00

Colonoscopies are a boon for preventing colon cancer, but patients may develop gastrointestinal bleeding or dangerous small tears in the intestine if doctors end up having to remove polyps in the process.

Now MIT researchers have developed a gel that can be sprayed through an endoscope onto the surgical sites, where it instantly forms a tough but flexible layer that protects the damaged area, reinforces the tissue, and allows it to heal. 

In an animal study, the researchers showed that the gel, called GastroShield, is simple to apply in the course of current endoscopic procedures and provides wound protection for three to seven days. 

In addition to its potential in colonoscopies, this gel could be useful for treating stomach ulcers and inflammatory conditions such as Crohn’s disease, or for delivering cancer drugs, says Natalie Artzi, a principal research scientist in MIT’s Institute for Medical Engineering and Science, who coauthored a paper on the work with colleagues including Professor Elazer Edelman ’78, SM ’79, PhD ’84, former MIT postdoc Pere Dosta, and former visiting student Gonzalo Muñoz Taboada. 

Members of the research team have started a company called BioDevek that plans to further develop the new material for use in humans. 

Recent books from the MIT community

Sparking Creativity: How Play and Humor Fuel Innovation and Design
By Barry Kudrowitz, SM ’06, PhD ’10
ROUTLEDGE, 2023, $39.95

Open Building for Architects: Professional Knowledge for an Architecture of Everyday Environment
By Stephen H. Kendall, PhD ’90, and N. John Habraken 
ROUTLEDGE, 2023, $44.99

Measurements-Based Radar Signature Modeling: An Analysis Framework
By Joseph T. Mayhan, senior staff member at MIT Lincoln Laboratory, and the late John A. Tabaczynski ’60, former leader of the Ballistic Missile Defense Analysis Division at MIT Lincoln Laboratory  
MIT PRESS, 2024, $125

Molecular Mechanisms in Materials: Insights from Atomistic Modeling and Simulation
By Sidney Yip, professor emeritus of nuclear science and engineering and materials science 
MIT PRESS, 2023, $50

Algorithmic High-Dimensional Robust Statistics
By Ilias Diakonikolas and Daniel M. Kane ’07
CAMBRIDGE UNIVERSITY PRESS, 2023, $59.99

Private Equity: A Memoir
By Carrie Sun ’07
PENGUIN PRESS, 2024, $29

African Cinema in a Global Age
By Kenneth W. Harrow ’64
ROUTLEDGE, 2023, $48.95


Send book news to MIT News at MITNews@technologyreview.com or 196 Broadway, 3rd Floor, Cambridge, MA 02139

Driving sustainable water management

From semiconductor manufacturing to mining, water is an essential commodity for industry. It is also a precious and constrained resource. According to the UN, more than 2.3 billion people faced water stress in 2022. Drought has cost the United States $249 billion in economic losses since 1980. 

Climate change is expected to worsen water problems through drought, flooding, and water contamination caused by extreme weather events. “I can’t think of a country on the planet that doesn’t have a water scarcity issue,” says Rob Simm, senior vice president at Stantec, an engineering consultancy focused on sustainability, energy solutions, and renewable resources. 

Economic innovations, notably AI and electric vehicles, are also increasing industrial demand for water. “When you look at advanced manufacturing and the way technology is changing, we’re requiring more, higher volumes of ultrapure water [UPW]. This is a big driver of the industrial water market,” Simm says. AI, computing, and the electric vehicle industries all generate immense quantities of heat and require sophisticated cooling and cleaning. Manufacturing silicon wafers for semiconductor production involves intricate cleaning processes, requiring up to 5 million gallons of high-quality UPW daily. With rising demand for semiconductors, improvements in water treatment and reuse are imperative to prevent waste.   

Data-driven industrial water management technologies are revolutionizing how enterprises approach conservation and sustainability. They harness the power of digital innovation by layering sensors, data, and cloud-based platforms to optimize physical water systems and allow industrial and human users to share water access. Integrating AI, machine learning (ML), data analytics, internet of things (IoT) sensors, digital twins, and social media enables not just quick data analysis but also allows manufacturers to minutely measure water quality, forecast demand, and meet sustainability goals.

More integrated industrial water management solutions, including reuse, industrial symbiosis, and zero liquid discharge (ZLD), will all be crucial as greenfield industrial projects look toward water reuse. “Water is an input commodity for the industrial process, and wastewater gives you the opportunity to recycle that material back into the process,” says Simm. 

Treating a precious resource

Water filtration systems have evolved during the past century, especially in agriculture and industry. Processes such as low-pressure membrane filtration and reverse osmosis are boosting water access for both human and industrial users. Membrane technologies, which continue to evolve, have halved the cost of desalinated water during the past decade, for example. New desalination methods run on green power and are dramatically increasing water output rates. 

Advances in AI, data processing, and cloud computing could bring a new chapter in water access. The automation this permits allows for quicker and more precise decision-making. Automated, preset parameters let facilities operate at capacity with less risk. “Digital technology and data play a crucial role in developing technology for water innovations, enabling better management of resources, optimizing treatment processes, and improving efficiency in distribution,” says Vincent Puisor, global business development director at Schneider Electric. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: paradigm-shifting supershoes, and AI-powered NPCs

25 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Supershoes are reshaping distance running

Since 2016, when Nike introduced the Vaporfly, a paradigm-­shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport.

“Supershoes” —which combine a lightweight, energy-­returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.

Still, some argue that they’ve changed the sport too quickly. Read the full story. 

—Jonathan W. Rosen

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to launch tomorrow, so if you don’t already, subscribe now to get a copy when it lands.

My colleagues turned me into an AI-powered NPC. I hate him.

—Niall Firth

It feels weird, talking to yourself online. 

Especially when you’re pretty much the most unpleasant character you’ve ever met.

The “me” I’ve been chatting to this week, called King Fiall of Nirth, is a creation from Inworld AI, a US-based firm that hopes to revolutionize how we interact with characters in games. Its goal is to leverage the power of generative AI to imbue NPCs with the power to chat freely with players, giving open-world games a deeper, more immersive feel.

I didn’t create King Fiall myself, of course. I’m not a total narcissist. No, instead I asked some colleagues to get around a laptop one lunchtime and build my personality as if I were an NPC. 

It turns out that was a mistake. 

Because the character they created is—and there’s really no easy way to say this—a monster. Read the full story.

This story is from The Algorithm, our weekly newsletter all about AI and its impact on the world. Sign up to receive it in your inbox every Monday.

+ Read more: How generative AI could reinvent what it means to play

Roundtables: The future of AI games

(For subscribers and MIT Alumni only)

Generative AI is coming for games and redefining what it means to play. AI-powered NPCs that don’t need a script could make games—and other worlds—deeply immersive. Watch executive editor Niall Firth and editorial director Allison Arieff discuss what this might look like, as well as get a sneak preview of the big stories for the next issue of the print magazine. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US record labels are suing AI music startups
They’re alleging copyright infringement “on a massive scale”. (Wired $) 
+ Listen to the AI-generated songs that got Udio and Suno sued. (404 Media)
+ Why artists are becoming less scared of AI. (MIT Technology Review)

2 Apple is the first company charged under a new EU competition law 
For allegedly unfair restrictions on app developers. (NYT $)
+ Apple is struggling to get us excited about a cheaper, weaker Vision Pro. (Gizmodo)
+ It has, however, mercifully fixed a bug that let hackers invade people’s virtual rooms with spiders (for real). (Mashable)

3 China’s probe returned the first samples from the far side of the moon
It’s exciting to think what the rock and soil it collected might reveal. (NBC)

4 Julian Assange is now free
He’s entered a plea deal with the US. (The Verge)

5 Facebook seems to have totally given up on moderation
AI-generated spam and scams are everywhere. (404 Media)
+ Photographers say Meta is labeling their real photos as ‘made with AI’. (TechCrunch)

6 Female fertility tech startups are being dragged down by privacy fears
Which are entirely legitimate, given that women are being prosecuted post-Roe. (FT $)

7 Amazon is working on a rival to ChatGPT to launch this September
It’s already very late to the party. (Insider $)
+ ChatGPT has been found to be ableist in how it assesses candidates for hiring. (Mashable)

8 What if we powered planes with electromagnetic waves? ✈⚡
All in favor of out-of-the-box thinking… but excuse me if I skip the test flight. (IEEE Spectrum)

9 Zooming out in remote meetings? You’re not alone
Research concludes it’s best if they’re small, short, and everyone has their cameras on. (Harvard Business Review $)

10 How to get a healthier work/life balance
Tech can be part of the problem, but here’s how it can be a solution, too. (WP $)

Quote of the day

“I believe we’re in a time of experimentation where platforms are willing to gamble and roll the dice and say, ‘How little content moderation can we get away with?’”

—Sarah T. Roberts, a UCLA professor who studies social media moderation, tells 404 Media why Facebook is now overrun with AI-generated spam and scams. 

The big story

One city’s fight to solve its sewage problem with sensors

South Bend river
LUCY HEWETT


April 2021

In the city of South Bend, Indiana, wastewater from people’s kitchens, sinks, washing machines, and toilets flows through 35 neighborhood sewer lines. On good days, just before each line ends, a vertical throttle pipe diverts the sewage into an interceptor tube, which carries it to a treatment plant where solid pollutants and bacteria are filtered out.

As in many American cities, those pipes are combined with storm drains, which can fill rivers and lakes with toxic sludge when heavy rains or melted snow overwhelm them, endangering wildlife and drinking water supplies. But city officials have a plan to make the city’s aging sewers significantly smarter. Read the full story.

—Andrew Zaleski

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I like the idea of people ‘having’ rather than ‘being a’ genius.
+ It’s very easy to make frosé at home.
+ A Muslim all-female thrash metal band are set to become the first Indonesian group to play at the UK’s Glastonbury music festival this week <3
+ It appears that I have been seriously underestimating fish.

Supershoes are reshaping distance running

25 June 2024 at 05:10

The track at Moi University’s Eldoret Town Campus doesn’t look like a facility designed for champions. Its surface is a modest mix of clay and gravel, and it’s 10 meters longer than the standard 400. Runners use a classroom chair to mark the start and finish. Yet it’s as good a place as any to spot the athletes who make Kenya the world’s greatest distance-running powerhouse. 

On a morning in January, nearly a hundred athletes, including Olympic medalists and winners of major marathons, have gathered here for “speedwork”: high-­intensity intervals that the best runners make look effortless. The track is packed with so much talent that it is easy to miss the man of the moment, a gangly runner in a turquoise shirt and thick-soled Nike shoes. In just over a year, Kelvin Kiptum had gone from virtual unknown to global phenom, running three of the seven fastest marathons in history and setting the official men’s world record, 2:00:35, in Chicago in October 2023. On this day, he was less than three months out from his next race, in Rotterdam, where he planned to try for something once unthinkable: completing the 26-mile, 385-yard event in less than two hours.

Although fans were left in awe by Kiptum’s Chicago triumph, not everyone celebrated the shoes that had propelled him to victory. Since 2016, when Nike introduced the Vaporfly, a paradigm-­shifting shoe that helped athletes run more efficiently (and therefore faster), the elite running world has muddled through a period of soul-searching over the impact of high-tech footwear on the sport. The Vaporfly was only the beginning. Today, most major brands offer multiple versions of the “supershoe”—a technology that combines a lightweight, energy-­returning foam with a carbon-fiber plate for stiffness. “Superspikes” based on a similar concept are now widely used on the track as well. Performances have adjusted accordingly. Since 2020, according to the sport’s governing body, World Athletics, runners wearing so-called advanced footwear technology have broken all road and outdoor track world records in distances from 5,000 meters to the marathon—a concentration unlike any in the sport’s modern history. 

The steady stream of footwear innovation has brought unending speculation over which brand’s shoes are best. Critics say that places too much emphasis on gear at the expense of runners’ ability.

Some of the most impressive feats have come in the marathon. In a 2019 exhibition that wasn’t eligible for records, Kenya’s Eliud Kipchoge covered the distance in an astonishing 1:59:40. Last September, Ethiopia’s Tigst Assefa lowered the women’s world record by more than two minutes in Berlin, running 2:11:53 in the ultralight Adidas Adizero Adios Pro Evo 1, a shoe designed to be worn only once. For his own record two weeks later, Kiptum wore the slightly heavier yet uber-bouncy Nike Alphafly 3. The uninitiated could have been forgiven for thinking the white platform shoes, which almost looked designed for walking on the moon, belonged on a sci-fi set rather than the streets of Chicago.

To some, this is all a sign of progress. In much of the world, elite running lacks a widespread following. Record-breaking adds a layer of excitement. And as I’d hear repeatedly from top athletes and coaches in Kenya, the shoes have benefits beyond the clock: most important, they help minimize wear on the body and enable faster recovery from hard workouts and races.

Runners on the track
Most marathoners prefer the clay and gravel track at Moi University’s Eldoret Town Campus but shift to Kipchoge Keino Stadium (shown here) when it rains.
PATRICK MEINHARDT

Still, some argue that they’ve changed the sport too quickly. Not only has it become hard to compare new records fairly with old ones, but the steady stream of footwear innovation has brought unending speculation over which brand’s shoes are best, and critics say that places too much emphasis on gear at the expense of runners’ ability. Laboratory research also suggests that some runners get a greater boost from the technology than others, depending on their biomechanics. Ross Tucker, a South African sports scientist and outspoken supershoe critic, has argued that these differences make it effectively impossible to “evaluate performances between different athletes independent of this nagging doubt over what the shoes do.”

How much of Kiptum’s success was due to his talent, training, drive, and mental toughness—and how much to his body’s responsiveness to Nike’s tech? It’s difficult to know—and, tragically, he’s not around to offer input. A few weeks after I saw him in Eldoret, a city of several hundred thousand that serves as Kenya’s unofficial running capital, he and coach Gervais Hakizimana were killed in a late-night car crash en route to the nearby town they used as a base for training. 

Shoes were the last thing on the mind of Kenya’s running community in the wake of Kiptum’s death. Yet his dramatic rise offers a window into their significance. Although the shoe-tech revolution has affected runners the world over, in few places has its effect been more pronounced than Kenya, where running is not only a sport but an exit strategy from a life of poverty. In this sense, the new high-tech shoes are something of a mixed blessing, giving a boost to established runners with company sponsorships while forming an obstacle to those still pining for their big break. Even the cheapest models here sell for well over $100—no small sum for young people who mostly come from disadvantaged backgrounds.

Today most Kenyan athletes, whether beginners or household names with six-­figure shoe contracts, have come to accept that there’s no turning back—that even the most elemental of sports is not immune to scientific innovation. Still, the new shoes are transforming the sport in myriad ways, throwing new variables into training and racing, exacerbating inequalities between athletes, and altering the collective imagination of what performances are possible. They’re also writing a new, tech-fueled chapter to one of the sports world’s most unlikely tales: how a small corner of one African country became such a dominant force in running, and how running, in turn, became the stuff of dreams for so many of its youth. 


Engineered to Fly

Supershoes are carefully optimized to help runners go the distance

Beneath the boat-like exterior, supershoes boast a variety of features designed to lower the energetic cost of running, allowing athletes to go faster and helping them endure the strain of a long-distance race.

The most crucial feature is the (often proprietary) foams that are used to construct parts of the sole. These absorb the impact of the foot and return energy from each foot strike back to the runner. Some use other features, like the orange “air pod” in the Nike Alphafly 3 (bottom), for an added bounce. 

Bounciness alone would not provide much advantage—today’s foams are so soft and thick (World Athletics allows up to 40 millimeters in competitions) that without additional support they would make the feet highly unstable. To give the shoes structure, manufacturers add rigid components like carbon-fiber plates or rods, typically sandwiched between layers of foam.

These rigid parts and foams are combined with wafer-thin mesh uppers to create shoes that are increasingly ultralight: the Adidas Adizero Adios Pro Evo 1 (top), released in 2023, weighs just 4.9 ounces (measured in the men’s size 9). Lighter shoes also reduce the energy expended with each stride—enabling runners to move at a given pace with less effort. 

a tall white running shoe with three black diagonal stripes
The Adidas Adizero Adios Pro Evo 1 was designed to be worn just once.
The Nike Vaporfly was the first shoe to combine energy-returning foam with a carbon-fiber plate for stiffness.
a fluorescent orange Nike sneaker
The late Kelvin Kiptum set the official men’s world record in Chicago last October while wearing Nike’s Alphafly 3.


A bounce in the step

To understand the impact of shoes on running performance, it’s helpful to think of the human body as a vehicle. In a long-­distance event like the marathon, competitors are limited by three physiological factors. VO2 max, the maximum amount of oxygen the body can absorb, is akin to an engine’s horsepower—it effectively measures the upper limits of a runner’s aerobic capacity. Lactate threshold, the point at which lactic acid accumulates in the blood faster than the body can remove it, is like the redline on a dashboard tachometer—it tells you how close you can run to your VO2 max without succumbing to exhaustion. The third parameter, running economy, describes the rate at which a runner expends energy, similar to gas mileage. A light, aerodynamic coupe will use less fuel, or energy, to travel at a given speed than a hulking SUV. So too will a lithe, efficiently striding marathoner.

It is running economy that’s affected by footwear—most obviously when it comes to weight. As a leg in stride moves through space, added weight closer to the end (i.e., the foot) has a greater energetic cost than weight closer to the center of gravity. Soles made with foams that are soft, or compliant (good at storing mechanical energy), and resilient (good at returning it) can also lead to significant energy savings. Studies have shown that shoes with stiffening elements, like plates, can improve running economy as well, by reducing the muscular effort of the feet.

Benson Kipruto (left) and Cyprian Kotut stretch at the 2 Running Club, a training camp sponsored by Adidas in Kapsabet, Kenya.
PATRICK MEINHARDT

The trick, for shoe manufacturers, has long been to optimize these properties—and for much of competitive running’s history, they weren’t particularly good at it. As recently as the 1970s, shoes worn for racing had clunky rubber soles and stiff leather or canvas uppers—not so different from the O’Sullivan’s “Live Rubber Heels” that propelled the American Johnny Hayes to victory in the marathon at the 1908 Olympics, the first run at today’s standard distance. The 1975 release of the first shoe with a midsole made from ethylene vinyl acetate (EVA), an air-infused foam, heralded a new generation of footwear that was lighter and bouncier. With a few exceptions, innovations over the next four decades would focus on making EVA shoes as light as possible.

That all changed with the Vaporfly. After its release, most attention focused on its curved carbon-fiber plate, which many suspected functioned like a spring. Research has shown that to be incorrect: while the plate may add some energy-­saving stiffness, says Wouter Hoogkamer, a professor of kinesiology at the University of Massachusetts, Amherst, its main benefit appears to be in stabilizing the technology’s most vital component: a thick midsole material made from a foamed polymer known as polyether block amide, or PEBA. Not only is this foam light; tests in 2017 at Hoogkamer’s lab, then at the University of Colorado, Boulder, found that a Vaporfly prototype stored and returned significantly more energy than the leading marathon shoes at the time: the EVA-soled Nike Streak and the Adidas Boost, made with a thermoplastic polyurethane. Hoogkamer’s team also recruited 18 high-performing athletes and tracked their energy expenditure, measured in watts per kilogram of body weight, as they ran for five-minute bouts on a treadmill at different paces in all three. The Vaporfly, they found, improved running economy by an average of 4%—in part by increasing the amount of ground covered with each stride. More recent studies have found a slightly smaller benefit when comparing the Vaporfly and other supershoes with “control shoes” over short distances. However, preliminary data from a Brigham Young University study, which tested subjects during runs lasting an hour, suggests that supershoes may offer a greater running-­economy benefit as an athlete progresses through a race, in part because softer foams help reduce muscle fatigue. “A runner with a 3% running-economy benefit in the lab might be at 4% or 5% at the end of a marathon,” says Iain Hunter, a professor of biomechanics who led the research. 

Coach Claudio Berardelli estimates that his runners cover at least 60% of their mileage in supershoes.
PATRICK MEINHARDT

Although it’s widely accepted that better running economy translates into faster racing, the exact impact on elite performances is subject to uncertainty. At world-record marathon pace, statistical models predict, 4% better running economy would lower time by more than three minutes. But few runners and coaches I spoke with in Kenya believe the technology is worth that much, even as they acknowledge that it’s become essential to competing at the highest level. Many note that footwear has advanced alongside better marathon-specific training and new hydrogel-based sports drinks that make it possible to digest more calories during races. There’s also the scourge of doping: drug-related offenses had left 81 Kenyan athletes ineligible to compete in World Athletics events as of May 1, though Kipchoge has never tested positive, and neither had Kiptum.
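To see why the models land at minutes rather than seconds, here is a naive upper-bound calculation. It is my own simplification, not the models the story cites; real physiological models predict a smaller gain because economy improvements don’t convert one-for-one into speed.

```python
# Naive upper bound: if energy cost per kilometer drops 4% while sustainable
# metabolic power is unchanged, pace improves by the same factor, so finish
# time scales by 0.96. A simplification, not the article's statistical model.
record_s = 2 * 3600 + 0 * 60 + 35        # Kiptum's 2:00:35 record, in seconds
economy_gain = 0.04                      # 4% better running economy
naive_time_s = record_s * (1 - economy_gain)
saved_min = (record_s - naive_time_s) / 60

print(f"naive new time: {naive_time_s:.0f} s ({saved_min:.1f} min saved)")
```

The linear assumption saves nearly five minutes; the "more than three minutes" the models predict sits comfortably below that ceiling.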

Speaking at the track after Kiptum’s January workout, his coach, Hakizimana, estimated that the shoes improved Kiptum’s marathon time by a minute, or perhaps a little more. The technology, he stressed, was only one factor among many that contributed to Kiptum’s rapid ascent. There was the punishing training; the way he’d “attack” with so much confidence in races; the stoicism with which he approached the running lifestyle. 

On top of that, there was the influence of the generations before him, who helped transform a land of unparalleled running talent into the home of champions. 

From talent to big business

While Kenya’s runners are renowned today for their marathoning dominance, the country first emerged on the global stage in track races. The watershed moment came at the 1968 Mexico City Olympics, where Kenya won eight medals in track and field, including gold in the men’s 1,500 meters, 10,000 meters, and 3,000-meter steeplechase. For the next two decades, the country’s athletes largely shied away from the marathon: according to Moses Tanui, a Kenyan who won the Boston Marathon twice in the 1990s, many men believed the event would prevent them from fathering children. Eventually, though, as money shifted away from the track and toward the roads, the longer distance had greater allure. Today, the winner of a major race like Boston can expect a several-hundred-­thousand-dollar payday, between appearance fees, prize money, and shoe-company bonuses. As of May, according to World Athletics, Kenya-born athletes accounted for 28 of the event’s all-time 50 fastest men and 17 of its 50 fastest women.

Kenya’s outsize success is also closely linked to the concept of running economy. Studies of the Kalenjin, a community of nine closely related tribes that produce the majority of Kenya’s top athletes, point to several physical attributes more common in this group that are conducive to an energy-efficient gait, including thin lower legs, long Achilles tendons, and a high ratio of leg length to torso. Active childhoods in the highlands to the west of the Great Rift Valley, where altitudes between 6,000 and 9,000 feet help boost aerobic capacity, are likely a component of their success as well. It’s the prospect of financial rewards, though, that drives participation—and transforms raw talent into records. Although Kenya is one of Africa’s most industrialized countries, even top university graduates struggle to find well-paid jobs. In the villages and small towns of the Rift Valley region, where economic prospects are especially limited, many are drawn to running by default. “After high school, if you don’t continue with your studies, you can run or you can be idle,” says Brigid Kosgei, a Kenyan who held the women’s marathon world record before Assefa. “So you run—you try your best.”

It is in this context that the stakes of shoe technology are so high: in top competitions, places worth tens of thousands of dollars—representing new homes for parents and school fees for children—can come down to seconds. For a few years after Nike’s release of the Vaporfly, the odds were stacked against runners sponsored by other companies, whose contracts prevented them from using competitors’ products. The gap was partly psychological: Cyprian Kotut, an Adidas-sponsored runner who’s won marathons in Paris and Hamburg, recalls feeling disillusioned mid-race next to Nike-shod competitors. Some sought out workarounds. One cobbler in Ethiopia gained fame for his skill in attaching Vaporfly soles to Adidas uppers—thereby helping some Adidas runners stealthily utilize the Nike tech. 

“After high school, if you don’t continue with your studies, you can run or you can be idle … So you run—you try your best.”

Brigid Kosgei, Kenyan who held the women’s marathon world record

Today, the playing field is far more level—at least among established pros. At the 2 Running Club, an Adidas-sponsored camp set amid rolling tea fields south of Eldoret, Kotut and his teammates give me a glimpse of their Adizero carbon-fiber lineup. There’s the ultra-padded Prime X for long sessions on pavement; the more compact Takumi Sen for speedwork; and a pair of the featherlight black-and-white Evo, which Kotut used to run a personal best of 2:04:34 last year in Amsterdam. Claudio Berardelli, the group’s Italian coach, estimates that his runners cover at least 60% of their mileage in supershoes. For most, they’ve become as vital to training as they have to racing. Not only do they enable faster workouts, says Benson Kipruto, a club member who won the Tokyo Marathon in March and finished second to Kiptum in Chicago last fall; the softer foams also promote quicker recovery—to the point where the day after a hard session, “your legs are a bit fresh.”

Many credit the shoes with keeping runners healthy. David Kirui, a physiotherapist who’s treated many of Kenya’s top marathoners, estimates that overuse-related injuries, like stress fractures, Achilles tendinitis, and iliotibial band syndrome, are down at least 25%. Several veteran runners tell me the shoes have helped extend their careers, and therefore their earning power. “In the old shoes, after 10 marathons you’d be completely exhausted,” says Jonathan Maiyo, who’s been an elite road racer since 2007. “Now 10 marathons are like nothing.”

Who benefits?

Runners like those in Berardelli’s group are a chosen few. The majority of athletes training in Kenya have never made any money from the sport; many run in secondhand shoes gifted by friends or purchased in local markets, and few can afford supershoes of their own. One day in Iten, a small town north of Eldoret that clings to the edge of the Rift Valley escarpment, I meet Daisy Kandie, a 23-year-old who moved here after high school and is among the hundreds of aspiring pros who toil along the town’s clay roads each morning. Her goal is the same as most: get noticed by an agent, most likely a foreigner, who’ll provide gear, arrange races outside the country, and in some cases negotiate a contract with a shoe company.

Among Iten’s legion of dreamers, Kandie is luckier than most: her parents see her as a future breadwinner, so they’ve supported her quest, and even sold a plot of farmland so they could buy her a pair of neon-green-and-pink Nike Alphaflys. The shoes were cheaper in Iten—approximately $180—than they would have been in the US; it’s an open secret that some runners with sponsorships sell shoes they get for free to local shops, which resell them at below-market prices. That money, nonetheless, represents a lot of sacrifice: Kandie pays roughly that amount for a year’s worth of rent on the small room she keeps at the edge of town. The cost of the shoes, which she refers to as her “Sub-2,” a nod to the prospect of a below-two-hour marathon, doesn’t make her resentful. Instead, she says, having the latest gear helps keep her motivated. Still, while she uses them only for fast runs twice a week, as well as in occasional local races, their soles show considerable wear, and she doesn’t have a plan for a replacement.

“By then I’ll have gone,” she said, referring to racing outside Kenya, when I asked what she’ll do for her next pair. “I have hopes.”

A motorcycle drives past a sign that reads “Welcome to Iten Home of Champions”
A sign welcomes travelers to Iten, a small town north of Eldoret that clings to the edge of the Rift Valley escarpment
PATRICK MEINHARDT

Although supershoe technology has raised the cost of doing business for Kandie and others like her, it’s most controversial for its role in skewing results at the very top. Hoogkamer’s landmark study of the Vaporfly, which found that the shoes improved running economy by 4% on average, also found that the benefit ranged from roughly 2% to 6% depending on the athlete. 

Subsequent research involving other supershoes has documented a similar range of responses. One 2023 study by Adidas-affiliated researchers, which tested seven elite Kenyans in three carbon-fiber prototypes and a traditional racing flat, recorded a runner using 11% less energy in one shoe and a runner using 11% more energy in another. Melanie Knopp, the study’s lead author, cautions that each athlete was tested in each shoe only once, and that some of the subjects were unfamiliar with running on a treadmill. Nonetheless, researchers generally agree that individual athletes “respond” to some shoes better than others. Why isn’t entirely clear: Hoogkamer estimates there may be 20 variables at play, including weight, foot length, calf muscle strength, and whether the runner strikes the ground with the forefoot, midfoot, or heel. Shoe geometry matters as well. Abdi Nageeye, a Dutch marathoner who trains in Iten and finished second to Kipchoge at the Tokyo Olympics, says he struggled with the first two versions of Nike’s Alphafly; the shoe forced the 120-pound heel-striker to “skip” in a way that felt unnatural. He says the newest Alphafly model, which has a greater drop in “stack height”—or foam thickness—from heel to toe, is a much better fit.

“If everybody is in their ideal shoe, are there still some people who’ll get more benefit than others? The answer is probably yes.”

Dustin Joubert, a supershoe expert and professor of kinesiology at St. Edward’s University in Austin, Texas

What all this means for the marathon’s integrity is a hotly debated topic. Today, many pro runners in the West undergo treadmill-based metabolic tests to determine which shoe works best, and in some cases which company to sign with. That’s less common in Kenya, where greater competition leaves athletes less room to negotiate. Among runners I spoke with, most of those with shoe contracts said their sponsor has a model they like, but it’s difficult to know if it’s their absolute best fit. Even if it is, some suspect that certain runners are better suited to the supershoe technology more broadly. “If everybody is in their ideal shoe, are there still some people who’ll get more benefit than others?” asks Dustin Joubert, a supershoe expert and professor of kinesiology at St. Edward’s University in Austin, Texas. “The answer is probably yes.”

Daisie standing in a doorway with crossed arms
Daisy Kandie’s Alphaflys cost $180 on the secondary market. She pays roughly that amount each year to rent a small room on the outskirts of Iten.
PATRICK MEINHARDT

Despite the benefits his runners gain in training, Berardelli says the shoes have introduced “question marks”: in a marathon today, he says, it’s less clear than ever whether the winner is indeed the runner who’s the strongest or has the smartest racing tactics. Stephen Cherono, a Kenyan who competed for Qatar as Saif Saaeed Shaheen and held the world record in the 3,000-meter steeplechase from 2004 until it was broken with the aid of superspikes last year, believes World Athletics should have placed greater restrictions on the technology before it was too late: although the global body maintains limits on sole thickness and prohibits the use of shoes that aren’t made available for sale, these guidelines are meant to help steer innovation, not squelch it. Cherono tells me he’s a big fan of Formula 1, the global motor sport, but worries that running, in its focus on performance engineering, is becoming too much like it. “Too often the conversation is now about the shoe and not the person wearing it,” he says. 

What might have been

If there’s one thing supershoe advocates and critics can agree upon, it’s that Kelvin Kiptum operated on another level. His margin of victory in Chicago—nearly three and a half minutes—was so large that some joked second-place Kipruto had won the race for mortals. Like most runners in Kenya, Kiptum grew up in a farming family where money was tight. When he began training as a teenager, he often ran barefoot; occasionally, pros he tagged along with gave him shoes. Among them was Hakizimana, a Rwandan who trained near Kiptum’s home and took him on as a protégé when his own running began to falter. After a stint training to be an electrician, Kiptum began running full-time in 2018; four years later, in his marathon debut, he ran the third-fastest time in history. Atypically, in all three of his marathons, he ran the second half faster than the first—perhaps because Nike’s PEBA foam had helped “save” his legs, or perhaps because his training was so grueling. Most world-class Kenyan marathoners top out around 220 kilometers per week. According to Hakizimana, Kiptum would often run up to 280, or roughly a marathon’s distance every day.

Kandie out for a run with friends in Iten.
PATRICK MEINHARDT

One month to the day after I watched Kiptum circling the Eldoret track, completing 1,000-meter repeats at roughly the pace of a two-hour marathon, I gather with hundreds of others on a property he’d purchased outside town, where he is being buried according to Kalenjin tradition. The crowd again includes a who’s-who list of champions; this time, instead of running gear, they are dressed in suits or black T-shirts emblazoned with the record-­holder’s image. Their mourning is both for a man who died far too young—Kiptum was listed as 24, though he was likely at least a few years older—and for a remarkable performance that many had expected to be just around the corner. Entering Chicago, Kiptum had been dealing with an injury and wasn’t even in top shape, according to his training partner Daniel Kemboi. Ahead of Rotterdam, Kemboi says, “he was so confident.” Very few in Eldoret doubted he would shatter the two-hour barrier. 

At some point that afternoon, my mind drifts to the shoes. Kiptum had been an extraordinary competitor regardless of what was on his feet. Still, absent supershoe technology, the prospect of a sub-two-hour marathon would never have been part of his dramatic rags-to-riches story. In this sense, the shoes didn’t minimize his greatness, as critics like Cherono feared; if anything, they helped build his brand and turbocharged his pursuit of the Kenyan running dream—of achieving a better life through sport. Tragically, Kiptum’s path was cut short when he was only getting started. But someone else, in rigid shoes with bouncy soles, will come along to blaze their own. 

Jonathan W. Rosen is a journalist who writes about Africa. He reported from Eldoret with assistance from Godfrey Kiprotich.

Lego bricks are making science more accessible

25 June 2024 at 05:00

Etienne Boulter walked into his lab at the Université Côte d’Azur in Nice, France, one morning with a Lego Technic excavator set tucked under his arm. His plan was simple yet ambitious: to use the pieces of the set to build a mechanical cell stretcher. 

Boulter and his colleagues study mechanobiology—the way mechanical forces, such as stretching and compression, affect cells—and this piece of equipment is essential for his research. Commercial cell stretchers cost over $50,000. But one day, after playing with the Lego set, Boulter and his colleagues found a way to build one out of its components for only a little over $200. Their Lego system stretches a silicone plate where cells are growing. This process causes the cells to deform and mimics how our own skin cells stretch.

Sets like these are ideal to repurpose, says Boulter: “If you go to Lego Technic, you have the motors, you have the wheels, you have the axles—you have everything you need to build such a system.” Their model was so successful that 10 different labs around the world contacted him for the plans to build their own low-cost Lego stretchers. 

Boulter is one of many researchers turning to Lego components to build inexpensive yet extremely effective lab equipment. The bricks themselves are durable and manufactured with tight tolerances. Lego’s offerings include sensors that can detect various colors, perceive rotational motion, and measure the distance to an object. These DIY tools are a creative and affordable solution for working scientists who are trying to keep costs down. 

legoman scientist
ELIZABETH FERNANDEZ

Take, for example, the Lego chromatographer designed by Cassandra Quave and her husband, Marco Caputo, both at Emory University. Quave is an ethnobotanist who leads a research group dedicated to documenting traditional medicines. Her team travels deep into forests and jungles around the world, collecting samples of leaves, berries, and seeds that they evaluate for their potential pharmaceutical value. To isolate chemical compounds from the plant samples, Quave makes use of a meticulous process called chromatography, in which liquid distilled from the plant is passed over a tube filled with a material such as a silica gel. 

Timing in chromatography needs to be very exact, with small amounts of liquid being added at precise moments. Waiting for these moments is not the best use of a graduate student’s time. This is exactly what Quave thought when she walked into the lab one day and saw her PhD student Huaqiao Tang holding a test tube and watching the clock. “This is crazy!” Quave said, laughing. “We can come up with a better solution!” 

When Quave told Caputo of her problem, he brought in Legos culled from their four children’s massive collection and had his students see what they could do with them. They came up with a robotic arm that could make repeated precise movements, gradually adding small fractions of liquid to test tubes in order to isolate compounds within the plant tissue. The device was so accurate in its movements, Quave says, that spontaneous crystals formed, something that occurs only in very pure substances. 

Cassandra Quave with samples in the lab
Board with a grid of glass bottles with a LEGO chromatographer attached on a rail at rear

Ethnobotanist Cassandra Quave distills molecules from plants using a Lego chromatographer that she designed with her husband, researcher Marco Caputo.

At Cardiff University in Wales, Christopher Thomas, Oliver Castell, and Sion Coulman had similar success building an instrument capable of printing cells. The researchers study skin diseases, lipids (fatty compounds) in the body, and wound healing. Ethically obtained samples are hard to find, so they created a 3D bioprinter out of Lego pieces that is capable of “printing” a human skin analogue, laying down layers of bio-ink that contains living cells. These printers normally cost over a quarter of a million dollars, but they built their version for a mere $550. At first, their colleagues were skeptical that components typically treated as toys could be used in such a professional setting, but after seeing the printer at work, they were quickly convinced. The team made national news, and other groups replicated the model in their own labs. 

At Cardiff University, Christopher Thomas, Oliver Castell, and Sion Coulman built an instrument capable of printing cells. Groups around the world have already replicated their design.
COURTESY OF CARDIFF UNIVERSITY

Some scientists are devising tools to take into the classroom. Timo Betz of the University of Göttingen in Germany came up with the idea of building a Lego microscope one day while watching his son, Emil, then eight, play. Betz was scheduled to speak about science at a local school that afternoon but was reluctant to take his own lab-grade microscope into the classroom. His son was immediately on board. “Let’s do this!” he told his dad. Together with Bart Vos, a colleague at the university, they built a microscope that consisted entirely of Lego pieces, with the exception of two optical lenses. Their plans, which they’ve made available to the public, can be used by students as young as 12 to learn the basic concepts of optics.

rendering of LEGO microscope with image from microscope and photo of Betz looking into the microscope
Timo Betz of the University of Göttingen designed and built a working microscope entirely from Lego pieces.
COURTESY OF TIMO BETZ

Many of these scientists make their models open source, providing them to interested groups or publishing the plans on GitHub or in papers so that other labs can make their own versions. This is great for researchers the world over, especially those with limited funding—whether they’re new faculty members, scientists at smaller universities, or people working in low-income countries. It’s how a small plastic brick is making science more accessible to all.

Elizabeth Fernandez is a freelance science writer.

My colleagues turned me into an AI-powered NPC. I hate him.

25 June 2024 at 05:00

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It feels weird, talking to yourself online. 

Especially when you’re pretty much the most unpleasant character you’ve ever met.

The “me” I’ve been chatting to this week, called King Fiall of Nirth, is a creation made using technology from Inworld AI, a US-based firm that hopes to revolutionize how we interact with characters in games. Specifically, Inworld is targeting the NPCs (nonplayer characters) that populate many modern video games. As a player, you can sometimes chat with these animated characters to discover new missions or quests to go on. More often they provide atmosphere or a sense of bustle in the game’s cities, castles, villages, and spaceports.

They’re also pretty dumb.

That’s why Inworld is hoping to leverage the power of generative AI to imbue NPCs with the power to chat freely with players, engaging them in unconstrained conversations that—it hopes—should give these open-world games a deeper, more immersive feel. I wrote about Inworld’s plan in the cover story of the current print magazine, which you can read here.

I didn’t create King Fiall myself, of course. I’m not a total narcissist. No, instead I asked MIT Technology Review’s UK team (I’m based in our London office) to gather around a laptop one lunchtime and build my personality as if I were an NPC. They accessed Inworld’s Studio, a tool that gives game developers a huge amount of freedom in creating the “brains” of a prospective character, which can then be imported into a game engine. (At the moment I’m just a disembodied chatbot rather than a fully fleshed-out NPC.) My colleagues were to use the tools—including sliding “personality” scales and text boxes—to shape my character. No holds barred, I instructed. Let rip.

Well, um, that was a mistake.

The character they created is—and there’s really no easy way to say this—a monster.

Sarcastic to an extreme degree, King Fiall has been given the character of “ancient king” (an in-joke based on a BuzzFeed-style “Which House of the Dragon character are you?” quiz in which I got King Viserys, the fantasy world’s aging patriarch). Fiall has an absolutely insufferable manner and—for some reason—an obsession with drinking the medieval beverage mead. Is this who they think I am?

In Studio, developers can also customize safety settings, controlling how much the character curses or talks about violence or alcohol, for example. The settings can include “goals”—things that you, the developer, want to push the player toward: a particular quest, for example. The idea is that developers can tweak these characters so they fit in the game world, and they can be set to “remember” interactions they’ve had with you before. 

“The character still responds with the magic of the generative AI, with the voice. You can say anything to it and they can respond in any way,” says Kylan Gibbs, Inworld’s CEO and founder. “But it’ll still follow that main arc and make sure that it hits the key beats that are set out.”

All right. What shall I tell Algorithm readers about AI NPCs next? I ask the King.

*King of Nirth leans in, a mischievous twinkle in his eye* Think of it, my dear, these AI NPCs can be whatever you want them to be. They can be your loyal subjects, your trusted advisors, or even your loyal opposition. They can help create immersive worlds, provide entertainment, or even assist in educating the masses. But be warned, my dear. These AI NPCs, they have a mind of their own. So keep a close eye on them, or they might just stage a coup and take over your kingdom, too!

Um, did he just call me “my dear”?

Anyway, ignore him. Inworld AI has a site, called Arcade, where you can play around with a bunch of its stock characters (including something called CyberSquirrel). It’s similar to the library of characters hosted by the startup Character.AI, but smaller in scale. On Character.AI’s site, you can chat with a huge selection of user-generated chatbots that are powered by LLMs, including historical figures, characters from films or books, or chatbots designed to help you learn a language. Users can tweak their own chatbot’s output as much as they like, fine-tuning its personality by typing text into a box.

Character.AI’s cofounder Daniel de Freitas told me that the firm’s vision is “broader than entertainment”—which was definitely a theme I noticed when writing my story. Beyond just making games more immersive, researchers and artists are now looking at ways these NPCs can be used in education, for example. De Freitas sees AI chatbots becoming personal assistants for your day-to-day life. “That means when you’re bored, or you want to wind down, we can give you entertainment—for example, these interactive stories,” he says. “But then when you’re at school, we can help you study for your exam.”

I can’t see King Fiall being much use to me, other than raising my blood pressure. Fed up with his unending sass, I convince a teammate to give me the password to Studio’s back end and go snooping around my personality. 

A look at the dials shows that my colleagues have placed me more toward the “introvert” end of the sliding scale, while also leaning more positive than negative. The backstory they gave me describes my love of poetry (yes) and sandwiches (yes) and lists a series of utterances (Hmm, Go for it, Go away) that are (allegedly) typical of my usual day-to-day style.

Aha, but the dialogue style is set to “sarcastic.” Well, that makes sense. (I note, sadly, that “commanding,” “empathetic,” and “entertaining” were all available and disregarded.)

I ask King Fiall if he can please dial back the sarcasm a bit—it’s getting on my nerves. 

“Of course, dear visitor. I shall tone down the sarcasm and don my friendliest and most open demeanor for you.” *King of Nirth puts on an exaggerated smile and gestures with open arms.* “Welcome to the kingdom of warmth and sunshine, where sarcasm is but a distant memory.”

God, I’m really starting to hate this guy.


Now read the rest of the Algorithm

Why does AI hallucinate?

The tendency to make things up—known as hallucination—is one of the biggest obstacles holding chatbots back from more widespread adoption. So why do they do it? And why can’t we fix it?

One of the biggest issues is that it’s hallucination all the way down—that’s what LLMs do. It’s how they work, and we only call it “hallucination” when we notice it’s wrong. The problem is, large language models are so good that what they make up looks right most of the time. And that makes trusting them hard. 

Perhaps the best fix for hallucination is to manage our expectations about what these tools are for. 

Read this terrific explainer all about hallucinations from Will Douglas Heaven. It also appears in the next issue of MIT Technology Review, which lands on Wednesday and is packed with brilliant stories about the topic of play. Subscribe now, if you don’t already, so you can read the whole thing!


LinkedIn Live: Deepfakes

Join MIT Technology Review reporters and editors for a fascinating discussion on LinkedIn Live about the rise of deepfakes, including the risks they pose and some interesting positive uses. You can register for free here.


Bits and bytes

Synthesia’s deepfakes now come with hands—and soon will have full bodies

Bit by bit, these hyperrealistic avatars are becoming indistinguishable from the real thing. Read this story to see a video of Melissa’s old avatar having a conversation with a new version that includes hands. It’s quite surreal and genuinely impressive. (MIT Technology Review)

A first look at China’s buzzy new text-to-video AI model

The Chinese firm Kuaishou just dropped the first text-to-video generative AI model that’s freely available for the public to test (OpenAI’s Sora is still being kept under wraps). It’s called Kling, and our reporter got a chance to try it out. (MIT Technology Review)

Neo-Nazis are all in on AI

Unsurprising but awful news. Extremists are developing their own hateful AIs to supercharge radicalization and fundraising—and are now using the tech to make blueprints for weapons and bombs. (Wired)

Ilya Sutskever has a new AI firm. And it’s all about superintelligence.

A month after he quit OpenAI, its former chief scientist has a new firm called Safe Superintelligence. It won’t be making products—just focusing entirely on, yes, superintelligence. (FT)

These copywriters lost their jobs to AI

And to add insult to injury, they now have to help make the AIs that took their jobs sound more human. (BBC)

AI has turned Google image search into a total nightmare

Some search results are turning up AI-generated images of celebrities in swimsuits, but with a horrible twist: they look like underage children. (404 Media)


Roundtables: The Future of AI Games

Recorded on June 24, 2024

The Future of AI Games

Speakers: Niall Firth, executive editor, and Allison Arieff, editorial director

Generative AI is coming for games and redefining what it means to play. AI-powered NPCs that don’t need a script could make games—and other worlds—deeply immersive. This technology could bring an unprecedented expansiveness to video and computer games, opening up possibilities we can only begin to imagine.


The Download: hyperrealistic deepfakes, and using math to shape wood

24 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Synthesia’s hyperrealistic deepfakes will soon have full bodies

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness. 

These new capabilities, which are set to launch toward the end of the year, will add a lot to the illusion of realism. That’s a scary prospect at a time when deepfakes and online misinformation are proliferating. Read the full story and watch our reporter’s avatars meet each other.

—Melissa Heikkilä

Meet the architect creating wood structures that shape themselves

Humanity has long sought to tame wood into something more predictable, but it is inherently imprecise. Its grain reverses and swirls. Trauma and disease manifest in scars and knots. 

Instead of viewing these natural tendencies as liabilities, Achim Menges, an architect and professor at the University of Stuttgart in Germany, sees them as wood’s greatest assets. 

Menges and his team at the Institute for Computational Design and Construction are uncovering new ways to build with wood by using algorithms and data to simulate and predict how wood will behave within a structure long before it is built. He hopes this will help create more sustainable and affordable timber buildings by reducing the amount of wood required. Read our story all about him and his work.

—John Wiegand

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

Live: How generative AI could transform games

Generative AI could soon revolutionize how we play video games, creating characters that can converse with you freely, and experiences that are infinitely detailed, twisting and changing every time you experience them.

Together, these could open the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. One day, the games we love playing may not have to end. Read our executive editor Niall Firth’s story all about what that future could look like. 

If you want to learn more, register now to join our next exclusive subscriber-only Roundtable discussion at 11:30 ET today! Niall and our editorial director Allison Arieff will be talking about games without limits, the future of play, and much more.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech firms are going all-in on experimental clean energy projects
Because AI is so horribly polluting. But the projects range from ‘long shot’ to ‘magical thinking’. (WP $)
+ Making the grid smarter, rather than bigger, could help. (Semafor)
+ How virtual power plants are shaping tomorrow’s energy system. (MIT Technology Review)

2 Google is about to be hit with a ton of AI-related lawsuits
Its AI Overviews keep libeling people—and they’re lawyering up. (The Atlantic $)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)
+ Another AI-powered search engine, Perplexity, is running into the exact same issues. (Wired $)
+ Worst of all? There’s currently no way to fix the underlying problem. (MIT Technology Review)

3 Apple is exploring a deal with Meta
To integrate Meta’s generative AI models into Apple Intelligence. (Wall Street Journal $) 
+ Apple is delaying launching AI features in Europe due to regulatory concerns. (Quartz)

4 NASA is indefinitely delaying the return of Starliner
In order to give it more time to review data. (Ars Technica)

5 Chinese tech companies are pushing their staff beyond breaking point
As growth slows and competition rises, work-life balance is going out the window. (FT $)

6 Used electric vehicles are now less expensive than gas cars in the US
It’s a worrying statistic that reflects the cratering demand for EVs. (Insider $)
The problem with plug-in hybrids? Their drivers. (MIT Technology Review)

7 Check out these photos of San Francisco’s AI scene
The city is currently buzzing with people hoping to make their fortune off the back of the boom. (WP $)

8 The next wave of weight loss drugs is coming
The hope is that they might be cheaper, and come with fewer side effects. (NBC)

9 Elon Musk is obsessed with getting us to have more babies
He’s funding and promoting some pretty wacky theories about a coming population collapse. (Bloomberg $)
+ And we’re losing track of the number of kids he has himself. (Gizmodo)

10 Before smartphones, you could pay people to Google stuff for you
In the noughties, if you were arguing with friends over something factual, you could just call AQA to settle it. (Wired $)

Quote of the day

“The internet has just gotten so much duller.”

—Kelly, a copywriter from New Hampshire, tells the Wall Street Journal about the impact of AI online. 

The big story

How a tiny Pacific Island became the global capital of cybercrime

an older 90s style computer with an image of "Beautiful Tokelau" emits spam emails with a hand holding a dust pan and brush tries to scoop them up
CHRISSIE ABBOTT


November 2023

Tokelau, a string of three isolated atolls in the Pacific, is so remote that it was the last place on Earth to be connected to the telephone—only in 1997. Just three years later, the islands received a fax with an unlikely business proposal that would change everything.

It was from an early internet entrepreneur from Amsterdam, named Joost Zuurbier. He wanted to manage Tokelau’s country-code top-level domain, or ccTLD—the short string of characters that is tacked onto the end of a URL—in exchange for money.

In the succeeding years, tiny Tokelau became an unlikely internet giant—but not in the way it may have hoped. Until recently, its .tk domain had more users than any other country’s: a staggering 25 million—but the vast majority were spammers, phishers, and cybercriminals.

Now the territory is desperately trying to clean up .tk. Its international standing, and even its sovereignty, may depend on it. Read the full story.

—Jacob Judah

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Feeling challenged? Why not try the Japanese approach of ‘ukeireru’ to tackle what’s bothering you.
+ The incredibly weird origins of pop hit Maniac have to be read to be believed.
+ This summer is already chock-full with pop bangers—don’t miss out.
+ Why short novels are the best.

Meet the architect creating wood structures that shape themselves

24 June 2024 at 05:00

Humanity has long sought to tame wood into something more predictable. Sawmills manufacture lumber from trees selected for consistency. Wood is then sawed into standard sizes and dried in kilns to prevent twisting, cupping, or cracking. Generations of craftsmen have employed sophisticated techniques like dovetail joinery, breadboard ends, and pocket flooring to keep wood from distorting in their finished pieces. 

But wood is inherently imprecise. Its grain reverses and swirls. Trauma and disease manifest in scars and knots. 

Instead of viewing these natural tendencies as liabilities, Achim Menges, an architect and professor at the University of Stuttgart in Germany, sees them as wood’s greatest assets. Menges and his team at the Institute for Computational Design and Construction are uncovering new ways to build with the material by using computational design—which relies on algorithms and data to simulate and predict how wood will behave within a structure long before it is built. He hopes this work will enable architects to create more sustainable and affordable timber buildings by reducing the amount of wood required. 

Menges’s recent work has focused on creating “self-shaping” timber structures like the HygroShell, which debuted at the Chicago Architecture Biennial in 2023. Constructed from prefabricated panels of a common building material known as cross-laminated timber, HygroShell morphed over a span of five days, unfurling into a series of interlaced sheets clad with wooden scale-like shingles that stretched to cover the structure as it expanded. Its final form, designed as a proof of concept, is a delicately arched canopy that rises to nearly 33 feet (10 meters) but is only an inch thick. In a time-lapse video, the evolving structure resembles a bird stretching its wings. 

HygroShell takes its name from hygroscopicity, a property of wood that causes it to absorb or lose moisture with humidity changes. As the material dries, it contracts and tends to twist and curve. Traditionally, lumber manufacturers have sought to minimize these movements. But through computational design, Menges’s team can predict the changes and structure the material to guide it into the shape they want. 

“From the start, I was motivated to understand computation not as something that divides the physical and the digital world but, instead, that deeply connects them.”

Achim Menges, architect and professor, University of Stuttgart in Germany

The result is a predictable and repeatable process that creates tighter curves with less material than what can be attained through traditional construction techniques. Existing curved structures made from cross-laminated timber (also known as mass timber) are limited to custom applications and carry premium prices, Menges says. Self-shaping, in contrast, could offer industrial-scale production of curved mass timber structures for far less cost. 

To build HygroShell, the team created digital profiles of hundreds of freshly sawed boards using data about moisture content, grain orientation, and more. Those parameters were fed into modeling software that predicted how the boards were likely to distort as they dried and simulated how to arrange them to achieve the desired structure. Then the team used robotic milling machines to create the joints that held the panels together as the piece unfolded. 

“What we’re trying to do is develop design methods that are so sophisticated they meet or match the sophistication of the material we deal with,” Menges says. 

Menges views “self-shaping,” as he calls his technique, as a low-energy way of creating complex curved architectures that would otherwise be too difficult to build on most construction sites. Typically, making curves requires extensive machining and a lot more materials, at considerable cost. By letting the wood’s natural properties do the heavy lifting, and using robotic machinery to prefabricate the structures, Menges’s process allows for thin-walled timber construction that saves material and money.

The shape, structure, and construction process of Menges’s HygroShell pavilion are all based on data that shows how different materials change over time.

If they were self-shaped, curved elements could halve the material requirements for certain structural features in a multistory timber building, Menges says. “You would save a lot of material simply because curvature adds stiffness. That’s why we see everything is curved in nature.”

Menges began his career in the late 1990s, at a time when architects had just begun to use powerful new software to design buildings. This shift opened new possibilities, but often those digital designs ran afoul of the material’s physical constraints, he says. It was the tension between the physical and the digital that inspired Menges to pursue computational design.

“From the start, I was motivated to understand computation not as something that divides the physical and the digital world but, instead, that deeply connects them,” he says. 

His interest in self-shaping structures was inspired by pinecones, which—long after falling from trees—retain the biological programming to open and expose their seeds as temperatures rise. “That’s a plant motion that does not require any motors, nor does it require any muscles,” Menges says. “It is programmed into the material.” 

Pinecones made him realize that just as robots are programmed to perform certain actions, materials like wood can be manipulated to carry out specific behaviors that are hard-coded in their DNA as a response to a stimulus.

Apart from the HygroShell, Menges has used self-shaping techniques to create proof-of-concept projects like the Urbach Tower, a 45-foot spiraling wood structure overlooking the fields of the Rems Valley near Urbach, Germany. Instead of using energy-intensive mechanical processes that require heavy machinery, the team prefabricated a dozen curved, self-shaped wood panels and assembled them on site, reducing the time it would otherwise take to build such a structure. 

And in 2023, his team worked with researchers from Germany’s University of Freiburg to create the livMatS Biomimetic Shell, a structure made from 127 wooden cassettes, each resembling the shape of a honeycomb. Menges used self-shaping to design a system of 3D-printed wooden window blinds that opened and closed in response to changes in relative humidity. Embedded in the wood shell is a solar gate that closes in warm weather, shading the space, and opens during colder months to provide passive solar heating. Compared with a conventional timber building, this structure has half the environmental impact over its life cycle.

Menges’s work is coming at a time when the sustainability of mass timber buildings—those with structural components made from engineered wood instead of steel or concrete—is under scrutiny. Concerns range from where the timber is sourced to whether preserving forests sequesters more carbon than harvesting them for building material, even if building with wood reduces carbon emissions relative to producing concrete and steel. There are also worries about what happens to all the wood left behind during the logging process. Trees may be a renewable resource, but they require decades to mature and are already threatened by climate change. That’s what led Menges and others to advocate for more efficient building practices that don’t waste wood. 

The design of the Urbach Tower, a proof-of-concept project, emerged from a new self-shaping process for its curved wood components.
ITECH/ICD/ITKE UNIVERSITY OF STUTTGART

Architects face a dilemma, however. Mass-timber buildings could be built using less wood, but the less material is used, the more susceptible the structure is to fire, says Michael Green, principal of Michael Green Architecture in Vancouver. 

“The way we protect wood is by overbuilding it to create a thickness that can resist a certain amount of time under fire,” Green says. The standards depend on the type of building and the variety of wood used, but Green generally adds around 3.6 centimeters (1.4 inches) of extra material to his structures for each hour of required burn time. The more people occupy a building, the longer it is required to resist fire and, in the case of mass-timber buildings, the thicker the wood structure. 
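As a rough illustration of the arithmetic Green describes (this is a hypothetical sketch, not code from the article), the rule of thumb is a simple linear allowance: roughly 3.6 centimeters of sacrificial wood per hour of required burn time. The function name and default rate below are assumptions for illustration; actual charring rates vary by wood species and building code.

```python
# Sketch of Green's stated rule of thumb: ~3.6 cm of extra
# ("sacrificial") wood thickness per hour of required fire resistance.
# The 3.6 figure is only the general practice he cites; real values
# depend on the building type and the variety of wood used.

CHAR_ALLOWANCE_CM_PER_HOUR = 3.6

def extra_thickness_cm(required_burn_hours: float,
                       rate_cm_per_hour: float = CHAR_ALLOWANCE_CM_PER_HOUR) -> float:
    """Estimate the extra wood thickness for a fire-resistance rating."""
    if required_burn_hours < 0:
        raise ValueError("burn time must be non-negative")
    return required_burn_hours * rate_cm_per_hour

# A two-hour rating, as might be required for a heavily occupied
# building, would add roughly 7.2 cm of overbuilt wood.
print(extra_thickness_cm(2))
```

The linearity is the point of the trade-off Green describes: every additional hour of required resistance adds material, which cuts directly against efforts to build with less wood.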

Green sees Menges’s work as important foundational research that may lead to breakthroughs influencing wood architecture in decades to come. But he doesn’t see self-shaped architecture being widely deployed outside the towers and pavilions Menges has already designed. 

The livMatS Biomimetic Shell features 3D-printed wooden window blinds that open and close in response to changes in relative humidity.

“It’s teaching us less about what we are actually going to build in the next five years and more about what we need to learn so we can develop other products that support that,” he says. 

Even without widespread adoption of self-shaping techniques, Menges believes, computational design will continue to unlock new ways of building with wood. He sees a future where the knots, crooks, and branches of trees are viewed not as defects but as construction tools, each with its own unique properties. 

“A tree does not have a defect,” he says. “It’s an anatomical feature. What we need to learn is what kind of building systems we develop that integrate these features, and not strive for the homogeneity that is simply not there.” 

Synthesia’s hyperrealistic deepfakes will soon have full bodies

24 June 2024 at 02:00

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year. 

“It’s very impressive. No one else is able to do that,” says Jack Saunders, a researcher at the University of Bath, who was not involved in Synthesia’s work. 

The full-body avatars he previewed are very good, he says, despite small errors such as hands “slicing” into each other at times. But “chances are you’re not really going to be looking that close to notice it,” Saunders says. 

Synthesia launched its first version of hyperrealistic AI avatars, also known as deepfakes, in April. These avatars use large language models to match expressions and tone of voice to the sentiment of spoken text. Diffusion models, as used in image- and video-generating AI systems, create the avatar’s look. However, the avatars in this generation appear only from the torso up, which can detract from the otherwise impressive realism. 

To create the full-body avatars, Synthesia is building an even bigger AI model. Users will have to go into a studio to record their body movements.

COURTESY SYNTHESIA

But before these full-body avatars become available, the company is launching another version of AI avatars that have hands and can be filmed from multiple angles. Their predecessors were only available in portrait mode and were just visible from the front. 

Other startups, such as Hour One, have launched similar avatars with hands. Synthesia’s version, which I got to test in a research preview and will be launched in late July, has slightly more realistic hand movements and lip-synching. 

Crucially, the coming update also makes it far easier to create your own personalized avatar. The company’s previous custom AI avatars required users to go into a studio to record their face and voice over the span of a couple of hours, as I reported in April.

This time, I recorded the material needed in just 10 minutes in the Synthesia office, using a digital camera, a lapel mike, and a laptop. But an even more basic setup, such as a laptop camera, would do. And while previously I had to record my facial movements and voice separately, this time the data was collected at the same time. The process also includes reading a script expressing consent to being recorded in this way, and reading out a randomly generated security passcode. 

These changes allow more scale and give the AI models powering the avatars more capabilities with less data, says Riparbelli. The results are also much faster. While I had to wait a few weeks to get my studio-made avatar, the new homemade ones were available the next day. 

Below, you can see my test of the new homemade avatars with hands. 

COURTESY SYNTHESIA

The homemade avatars aren’t as expressive as the studio-made ones yet, and users can’t change the backgrounds of their avatars, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. The hands are animated using an advanced form of looping technology, which repeats the same hand movements in a way that is responsive to the content of the script. 

Hands are tricky for AI to do well—even more so than faces, Vittorio Ferrari, Synthesia’s director of science, told me in March. That’s because our mouths move in relatively small and predictable ways while we talk, making it possible to sync the deepfake version up with speech, but we move our hands in lots of different ways. On the flip side, while faces require close attention to detail because we tend to focus on them, hands can be less precise, Ferrari says. 

Even if they’re imperfect, AI-generated hands and bodies add a lot to the illusion of realism, which poses serious risks at a time when deepfakes and online misinformation are proliferating. Synthesia has strict content moderation policies, carefully vetting both its customers and the sort of content they’re able to generate. For example, only accredited news outlets can generate content on news.  

These new advancements in avatar technologies are another hammer blow to our ability to believe what we see online, says Saunders. 

“People need to know you can’t trust anything,” he says. “Synthesia is doing this now, and another year down the line it will be better and other companies will be doing it.” 

The Download: replacing animal testing, and underwater drones

21 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Is this the end of animal testing?

Animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people, but until recently there was no other option.

Now organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues that expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.

It’s only early days, but if they work as hoped, organs on chips could solve one of the biggest problems in medicine today. Read the full story.

—Harriet Brown

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

How underwater drones could shape a potential Taiwan-China conflict

A potential future conflict between Taiwan and China would be shaped by novel methods of drone warfare involving advanced underwater drones and increased levels of autonomy, according to a new war-gaming experiment by the think tank Center for a New American Security (CNAS). 

Since Russia invaded Ukraine in 2022, drones have been aiding in what military experts describe as the first three steps of the “kill chain”—finding, targeting, and tracking a target—as well as in delivering explosives. Drones like these would be far less useful in a possible invasion of Taiwan. Instead, a conflict with Taiwan would likely make use of undersea and maritime drones to scout for submarines. Read the full story.

—James O’Donnell

Should social media come with a health warning?

Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects.

But the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US government is banning Kaspersky’s antivirus software  
Officials claim the firm’s ties with Russia mean it poses a major security risk. (Reuters)
+ It’ll ban sales of software from 20 July, and updates from 29 September. (TechCrunch)
+ The ban follows a two-year probe into Kaspersky. (The Verge)

2 Americans are paying way too much for prescription drugs
And shadowy pharmacy benefit managers are partly to blame. (NYT $)
+ The UK has been hit by a drug shortage, too. (The Guardian)

3 How a secretive ocean alkalinity project in the UK spiraled into disaster
It raises important questions: who gets to decide where trials can take place? (Hakai Magazine)
+ This town’s mining battle reveals the contentious path to a cleaner future. (MIT Technology Review)

4 Car dealers have been locked out of their selling systems
Businesses have had to resort to paper and pen to close their sales. (WSJ $)
+ It’s unlikely to be resolved before the weekend. (Bloomberg $)

5 Make way for much smaller large language models
They’re a fraction of the size, but just as effective. (IEEE Spectrum)
+ Large language models can do jaw-dropping things. But nobody knows exactly why. (MIT Technology Review)

6 Inside the growing cottage industry of wildfire mitigation
In Boulder, Colorado, the solutions are increasingly experimental. (Bloomberg $)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

7 Zimbabwe’s traditional healers are peddling financial advice on TikTok
But spirituality and tech are uneasy bedfellows. (Rest of World)

8 How to avoid falling for scams on Amazon
Read those product reviews super carefully. (Wired $)

9 Tech companies are still interested in making smart glasses 👓
Despite Meta being the sole big player. (The Information $)

10 The internet looked very different 30 years ago
A whole lot more interesting, some might say. (Fast Company $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“Congress reached for a sledgehammer without even considering if a scalpel would suffice.”

—A legal brief filed by TikTok lays out why the company feels that the US Congress is not operating in good faith in its attempts to ban the platform, the Washington Post reports.

The big story

The first babies conceived with a sperm-injecting robot have been born

April 2023

Last spring, a group of engineers set out to test the sperm-injecting robot they’d designed. Altogether, the robot was used to fertilize more than a dozen eggs.

The result of the procedures, say the researchers, was healthy embryos—and now two baby girls, who they claim are the first people born after fertilization by a “robot.”

The startup behind the robot, Overture Life, says its device is an initial step toward automating IVF, and potentially making the procedure less expensive and far more common than it is today. But that will be far from easy. Read the full story.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Bradley the baby kangaroo has all the makings of an excellent hopper.
+ Ooh, Matt Damon and Ben Affleck are reuniting in a new brawny crime thriller.
+ Here’s a couple of things you may not know about one of the world’s most famous paintings: Vermeer’s Girl with a Pearl Earring.
+ Admire this newly discovered shape, which is currently without a name.

Should social media come with a health warning?

21 June 2024 at 05:11

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”

His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.

But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.

Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.

More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.

Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.

Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear-cut.

“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.

The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.

Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.

Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.

Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.

“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”

In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”

But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.

Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.

I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.

Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.  

China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administrator issued detailed guidelines that include, for example, a rule to limit use of smart devices to 40 minutes a day for children under the age of eight. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.

Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.

One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”

From around the web

Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)

“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)

Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)

Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)

Is this the end of animal testing?

21 June 2024 at 05:00

In a clean room in his lab, Sean Moore peers through a microscope at a bit of intestine, its dark squiggles and rounded structures standing out against a light gray background. This sample is not part of an actual intestine; rather, it’s human intestinal cells on a tiny plastic rectangle, one of 24 so-called “organs on chips” his lab bought three years ago.

Moore, a pediatric gastroenterologist at the University of Virginia School of Medicine, hopes the chips will offer answers to a particularly thorny research problem. He studies rotavirus, a common infection that causes severe diarrhea, vomiting, dehydration, and even death in young children. In the US and other rich nations, up to 98% of the children who are vaccinated against rotavirus develop lifelong immunity. But in low-income countries, only about a third of vaccinated children become immune. Moore wants to know why.

His lab uses mice for some protocols, but animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. Researchers have documented this translation gap since at least 1962. “All these pharmaceutical companies know the animal models stink,” says Don Ingber, founder of the Wyss Institute for Biologically Inspired Engineering at Harvard and a leading advocate for organs on chips. “The FDA knows they stink.” 

But until recently there was no other option. Research questions like Moore’s can’t ethically or practically be addressed with a randomized, double-blinded study in humans. Now these organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. These tissues expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.

More than 60 companies now produce organs on chips commercially, focusing on five major organs: liver, kidney, lung, intestines, and brain. They’re already being used to understand diseases, discover and test new drugs, and explore personalized approaches to treatment.

As they continue to be refined, they could solve one of the biggest problems in medicine today. “You need to do three things when you’re making a drug,” says Lorna Ewart, a pharmacologist and chief scientific officer of Emulate, a biotech company based in Boston. “You need to show it’s safe. You need to show it works. You need to be able to make it.” 

All new compounds have to pass through a preclinical phase, where they’re tested for safety and effectiveness before moving to clinical trials in humans. Until recently, those tests had to run in at least two animal species—usually rats and dogs—before the drugs were tried on people. 

But in December 2022, President Biden signed the FDA Modernization Act, which amended the Federal Food, Drug, and Cosmetic Act of 1938. With a few small word changes, the act opened the door for non-animal-based testing in preclinical trials. Anything that makes it faster and easier for pharmaceutical companies to identify safe and effective drugs means better, potentially cheaper treatments for all of us. 

Moore, for one, is banking on it, hoping the chips help him and his colleagues shed light on the rotavirus vaccine responses that confound them. “If you could figure out the answer,” he says, “you could save a lot of kids’ lives.”


While many teams have worked on organ chips over the last 30 years, the OG in the field is generally acknowledged to be Michael Shuler, a professor emeritus of chemical engineering at Cornell. In the 1980s, Shuler was a math and engineering guy who imagined an “animal on a chip,” a cell culture base seeded with a variety of human cells that could be used for testing drugs. He wanted to position a handful of different organ cells on the same chip, linked to one another, which could mimic the chemical communication between organs and the way drugs move through the body. “This was science fiction,” says Gordana Vunjak-Novakovic, a professor of biomedical engineering at Columbia University whose lab works with cardiac tissue on chips. “There was no body on a chip. There is still no body on a chip. God knows if there will ever be a body on a chip.”

Shuler had hoped to develop a computer model of a multi-organ system, but there were too many unknowns. The living cell culture system he dreamed up was his bid to fill in the blanks. For a while he played with the concept, but the materials simply weren’t good enough to build what he imagined. 

“You can force mice to menstruate, but it’s not really menstruation. You need the human being.”

Linda Griffith, founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant”

He wasn’t the only one working on the problem. Linda Griffith, a founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant,” designed a crude early version of a liver chip in the late 1990s: a flat silicon chip, just a few hundred micrometers tall, with endothelial cells, oxygen and liquid flowing in and out via pumps, silicone tubing, and a polymer membrane with microscopic holes. She put liver cells from rats on the chip, and those cells organized themselves into three-dimensional tissue. It wasn’t a liver, but it modeled a few of the things a functioning human liver could do. It was a start.

Griffith, who rides a motorcycle for fun and speaks with a soft Southern accent, suffers from endometriosis, an inflammatory condition where cells from the lining of the uterus grow throughout the abdomen. She’s endured decades of nausea, pain, blood loss, and repeated surgeries. She never took medical leaves, instead loading up on Percocet, Advil, and margaritas, keeping a heating pad and couch in her office—a strategy of necessity, as she saw no other choice for a working scientist. Especially a woman. 

And as a scientist, Griffith understood that the chronic diseases affecting women tend to be under-researched, underfunded, and poorly treated. She realized that decades of work with animals hadn’t done a damn thing to make life better for women like her. “We’ve got all this data, but most of that data does not lead to treatments for human diseases,” she says. “You can force mice to menstruate, but it’s not really menstruation. You need the human being.” 

Or, at least, the human cells. Shuler and Griffith, and other scientists in Europe, worked on some of those early chips, but things really kicked off around 2009, when Don Ingber’s lab in Cambridge, Massachusetts, created the first fully functioning organ on a chip. That “lung on a chip” was made from flexible silicone rubber, lined with human lung cells and capillary blood vessel cells that “breathed” like the alveoli—tiny air sacs—in a human lung. A few years later Ingber, an MD-PhD with the tidy good looks of a younger Michael Douglas, founded Emulate, one of the earliest biotech companies making microphysiological systems. Since then he’s become a kind of unofficial ambassador for in vitro technologies in general and organs on chips in particular, giving hundreds of talks, scoring millions in grant money, repping the field with scientists and laypeople. Stephen Colbert once ragged on him after the New York Times quoted him as describing a chip that “walks, talks, and quacks like a human vagina,” a quote Ingber says was taken out of context.

Ingber began his career working on cancer. But he struggled with the required animal research. “I really didn’t want to work with them anymore, because I love animals,” he says. “It was a conscious decision to focus on in vitro models.” He’s not alone; a growing number of young scientists are speaking up about the distress they feel when research protocols cause pain, trauma, injury, and death to lab animals. “I’m a master’s degree student in neuroscience and I think about this constantly. I’ve done such unspeakable, horrible things to mice all in the name of scientific progress, and I feel guilty about this every day,” wrote one anonymous student on Reddit. (Full disclosure: I switched out of a psychology major in college because I didn’t want to cause harm to animals.)

Emulate is one of the companies building organ-on-a-chip technology. The devices combine live human cells with a microenvironment designed to emulate specific tissues.
EMULATE

Taking an undergraduate art class led Ingber to an epiphany: mechanical forces are just as important as chemicals and genes in determining the way living creatures work. On a shelf in his office he still displays a model he built in that art class, a simple construction of sticks and fishing line, which helped him realize that cells pull and twist against each other. That realization foreshadowed his current work and helped him design dynamic microfluidic devices that incorporated shear and flow. 

Ingber coauthored a 2022 paper that’s sometimes cited as a watershed in the world of organs on chips. Researchers used Emulate’s liver chips to reevaluate 27 drugs that had previously made it through animal testing and had then gone on to kill 242 people and necessitate more than 60 liver transplants. The liver chips correctly flagged problems with 22 of the 27 drugs, an 87% success rate compared with a 0% success rate for animal testing. It was the first time organs on chips had been directly pitted against animal models, and the results got a lot of attention from the pharmaceutical industry. Dan Tagle, director of the Office of Special Initiatives for the National Center for Advancing Translational Sciences (NCATS), estimates that drug failures cost around $2.6 billion globally each year. The earlier in the process failing compounds can be weeded out, the more room there is for other drugs to succeed.

“The capacity we have to test drugs is more or less fixed in this country,” says Shuler, whose company, Hesperos, also manufactures organs on chips. “There are only so many clinical trials you can do. So if you put a loser into the system, that means something that could have won didn’t get into the system. We want to change the success rate from clinical trials to a much higher number.”

In 2011, the National Institutes of Health established NCATS and started investing in organs on chips and other in vitro technologies. Other government funders, like the Defense Advanced Research Projects Agency and the Food and Drug Administration, have followed suit. For instance, NIH recently funded NASA scientists to send heart tissue on chips into space. Six months in low gravity ages the cardiovascular system 10 years, so this experiment lets researchers study some of the effects of aging without harming animals or humans. 

Scientists have made liver chips, brain chips, heart chips, kidney chips, intestine chips, and even a female reproductive system on a chip (with cells from ovaries, fallopian tubes, and uteruses that release hormones and mimic an actual 28-day menstrual cycle). Each of these chips exhibits some of the specific functions of the organs in question. Cardiac chips, for instance, contain heart cells that beat just like heart muscle, making it possible for researchers to model disorders like cardiomyopathy. 

Shuler thinks organs on chips will revolutionize the world of research for rare diseases. “It is a very good model when you don’t have enough patients for normal clinical trials and you don’t have a good animal model,” he says. “So it’s a way to get drugs to people that couldn’t be developed in our current pharmaceutical model.” Shuler’s own biotech company used organs on chips to test a potential drug for myasthenia gravis, a rare neurological disorder. In 2022, the FDA approved the drug for clinical trials based on that data—one of six Hesperos drugs that have so far made it to that stage. 


Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body. “We try and build a physical replica of the mathematical model of what really occurs in the body,” explains Shuler. That model guides the way the chip is designed, re-creating the amount of time a fluid or chemical stays in that particular organ—what’s known as the residence time. “As long as you have the same residence time, you should get the same response in terms of chemical conversion,” he says.
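The residence-time idea Shuler describes reduces, in the simplest case, to dividing a compartment’s volume by its volumetric flow rate. The sketch below illustrates only that arithmetic, not an actual PBPK model; the function names and all numbers are hypothetical.

```python
def residence_time_s(volume_ul: float, flow_ul_per_s: float) -> float:
    """Mean residence time (s) = compartment volume / volumetric flow rate."""
    return volume_ul / flow_ul_per_s

def flow_for_target(volume_ul: float, target_s: float) -> float:
    """Flow rate (uL/s) needed so fluid spends target_s seconds in the compartment."""
    return volume_ul / target_s

# Illustrative numbers only: a 2 uL chip chamber meant to match an organ
# where fluid resides for ~40 s would need a flow of 0.05 uL/s.
print(flow_for_target(2.0, 40.0))  # 0.05
```

Matching this one number is what lets a physical chip stand in for the mathematical model: same residence time, same expected chemical conversion.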

Tiny channels on each chip, each between 10 and 100 microns in diameter, help bring fluids and oxygen to the cells. “When you get down to less than one micron, you can’t use normal fluid dynamics,” says Shuler. And fluid dynamics matters, because if the fluid moves through the device too quickly, the cells might die; too slowly, and the cells won’t react normally. 

Chip technology, while sophisticated, has some downsides. One of them is user friendliness. “We need to get rid of all this tubing and pumps and make something that’s as simple as a well plate for culturing cells,” says Vunjak-Novakovic. Her lab and others are working on simplifying the design and function of such chips so they’re easier to operate and are compatible with robots, which do repetitive tasks like pipetting in many labs. 

Cost and sourcing can also be challenging. Emulate’s base model, which looks like a simple rectangular box from the outside, starts at around $100,000 and rises steeply from there. Most human cells come from commercial suppliers that arrange for donations from hospital patients. During the pandemic, when people had fewer elective surgeries, many of those sources dried up. As microphysiological systems become more mainstream, finding reliable sources of human cells will be critical.

“As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal.”

Lorna Ewart, Chief Scientific Officer, Emulate

Another challenge is that every company producing organs on chips uses its own proprietary methods and technologies. Ingber compares the landscape to the early days of personal computing, when every company developed its own hardware and software, and none of them meshed well. For instance, the microfluidic systems in Emulate’s intestine chips are fueled by micropumps, while those made by Mimetas, another biotech company, use an electronic rocker and gravity to circulate fluids and air. “This is not an academic lab type of challenge,” emphasizes Ingber. “It’s a commercial challenge. There’s no way you can get the same results anywhere in the world with individual academics making [organs on chips], so you have to have commercialization.”

Namandje Bumpus, the FDA’s chief scientist, agrees. “You can find differences [in outcomes] depending even on what types of reagents you’re using,” she says. Those differences mean research can’t be easily reproduced, which diminishes its validity and usefulness. “It would be great to have some standardization,” she adds.

On the plus side, the chip technology could help researchers address some of the most deeply entrenched health inequities in science. Clinical trials have historically recruited white men, underrepresenting people of color, women (especially pregnant and lactating women), the elderly, and other groups. And treatments derived from those trials all too often fail in members of those underrepresented groups, as in Moore’s rotavirus vaccine mystery. “With organs on a chip, you may be able to create systems by which you are very, very thoughtful—where you spread the net wider than has ever been done before,” says Moore.

This microfluidic platform, designed by MIT engineers, connects engineered tissue from up to 10 organs.
FELICE FRANKEL

Another advantage is that chips will eventually reduce the need for animals in the lab even as they lead to better human outcomes. “There are aspects of animal research that make all of us uncomfortable, even people that do it,” acknowledges Moore. “The same values that make us uncomfortable about animal research are also the same values that make us uncomfortable with seeing human beings suffer with diseases that we don’t have cures for yet. So we always sort of balance that desire to reduce suffering in all the forms that we see it.”

Lorna Ewart, who spent 20 years at the pharma giant AstraZeneca before joining Emulate, thinks we’re entering a kind of transition time in research, in which scientists use in vitro technologies like organs on chips alongside traditional cell culture methods and animals. “As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal,” she says. 

In the meantime, Sean Moore is excited about incorporating intestine chips more and more deeply into his research. His lab has been funded by the Gates Foundation to do what he laughingly describes as a bake-off between intestine chips made by Emulate and Mimetas. They’re infecting the chips with different strains of rotavirus to try to identify the pros and cons of each company’s design. It’s too early for any substantive results, but Moore says he does have data showing that organ chips are a viable model for studying rotavirus infection. That could ultimately be a real game-changer in his lab and in labs around the world.

“There’s more players in the space right now,” says Moore. “And that competition is going to be a healthy thing.” 

Harriet Brown writes about health, medicine, and science. Her most recent book is Shadow Daughter: A Memoir of Estrangement. She’s a professor of magazine, news, and digital journalism at Syracuse University’s Newhouse School. 

How underwater drones could shape a potential Taiwan-China conflict

20 June 2024 at 15:00

A potential future conflict between Taiwan and China would be shaped by novel methods of drone warfare involving advanced underwater drones and increased levels of autonomy, according to a new war-gaming experiment by the think tank Center for a New American Security (CNAS). 

The report comes as concerns about Beijing’s aggression toward Taiwan have been rising: China sent dozens of surveillance balloons over the Taiwan Strait in January during Taiwan’s elections, and in May, two Chinese naval ships entered Taiwan’s restricted waters. The US Department of Defense has said that preparing for potential hostilities is an “absolute priority,” though no such conflict is immediately expected. 

The report’s authors detail a number of ways that use of drones in any South China Sea conflict would differ starkly from current practices, most notably in the war in Ukraine, often called the first full-scale drone war. 

Differences from the Ukrainian battlefield

Since Russia invaded Ukraine in 2022, drones have been aiding in what military experts describe as the first three steps of the “kill chain”—finding, targeting, and tracking a target—as well as in delivering explosives. The drones have a short life span, since they are often shot down or made useless by frequency jamming devices that prevent pilots from controlling them. Quadcopters—the commercially available drones often used in the war—last just three flights on average, according to the report. 

Drones like these would be far less useful in a possible invasion of Taiwan. “Ukraine-Russia has been a heavily land conflict, whereas conflict between the US and China would be heavily air and sea,” says Zak Kallenborn, a drone analyst and adjunct fellow with the Center for Strategic and International Studies, who was not involved in the report but agrees broadly with its projections. The small, off-the-shelf drones popularized in Ukraine have flight times too short for them to be used effectively in the South China Sea. 

An underwater war

Instead, a conflict with Taiwan would likely make use of undersea and maritime drones. With Taiwan just 100 miles away from China’s mainland, the report’s authors say, the Taiwan Strait is where the first days of such a conflict would likely play out. The Zhu Hai Yun, China’s high-tech autonomous carrier, might send its autonomous underwater drones to scout for US submarines. The drones could launch attacks that, even if they did not sink the submarines, might divert the attention and resources of the US and Taiwan. 

It’s also possible China would flood the South China Sea with decoy drone boats to “make it difficult for American missiles and submarines to distinguish between high-value ships and worthless uncrewed commercial vessels,” the authors write.

Though most drone innovation is not focused on maritime applications, these uses are not without precedent: Ukrainian forces drew attention for modifying jet skis to operate via remote control and using them to intimidate and even sink Russian vessels in the Black Sea. 

More autonomy

Drones currently have very little autonomy. They’re typically human-piloted, and though some are capable of autopiloting to a fixed GPS point, that’s generally not very useful in a war scenario, where targets are on the move. But, the report’s authors say, autonomous technology is developing rapidly, and whichever nation possesses a more sophisticated fleet of autonomous drones will hold a significant edge.

What would that look like? Millions of defense research dollars are being spent in the US and China alike on swarming, a strategy where drones navigate autonomously in groups and accomplish tasks. The technology isn’t deployed yet, but if successful, it could be a game-changer in any potential conflict.  

A sea-based conflict might also offer an easier starting ground for AI-driven navigation, because object recognition is easier on the “relatively uncluttered surface of the ocean” than on the ground, the authors write.

China’s advantages

A chief advantage for China in a potential conflict is its proximity to Taiwan; it has more than three dozen air bases within 500 miles, while the closest US base is 478 miles away in Okinawa. But an even bigger advantage is that it produces more drones than any other nation.

“China dominates the commercial drone market, absolutely,” says Stacie Pettyjohn, coauthor of the report and director of the defense program at CNAS. That includes drones of the type used in Ukraine.

For Taiwan to use these Chinese drones for its own defense, it would first have to make the purchase, which could be difficult because the Chinese government might move to block it. Then it would need to hack them and disconnect them from the companies that made them, or else those Chinese manufacturers could turn them off remotely or launch cyberattacks. That sort of hacking is unfeasible at scale, so Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US. On Wednesday, June 19, the US approved a $360 million sale of 1,000 military-grade drones to Taiwan.

For now, experts can only speculate about how those drones might be used. Though preparing for a conflict in the South China Sea is a priority for the DOD, it’s one of many, says Kallenborn. “The sensible approach, in my opinion, is recognizing that you’re going to potentially have to deal with all of these different things,” he says. “But we don’t know the particular details of how it will work out.”

The Download: playing games with AI

20 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How generative AI could reinvent what it means to play

To make them feel alive, open-world games like Red Dead Redemption 2 are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.

After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun, but the illusion starts to weaken when you poke at it. 

It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you finish, you move on. 

It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end. Read the full story.

—Niall Firth

The Future of AI Games

If you’re interested in hearing more about how generative AI will revolutionize how we play games in the future, register now for our next exclusive subscriber-only Roundtable discussion

Our executive editor Niall Firth and editorial director Allison Arieff will be talking about games without limits, the future of play, and much more. Join us next Monday 24 June at 11:30am ET!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Ilya Sutskever is launching a new AI research lab  
The OpenAI cofounder’s Safe Superintelligence project aims to create just that. (Bloomberg $)
+ He’s the latest in a line of former OpenAI workers to tackle safe AI. (FT $)
+ Check out our interview with Sutskever on his fears for the future of AI. (MIT Technology Review)

2 India’s grid is struggling to cope with its searing heat wave
Prolonged power outages in the north of the country look likely. (The Guardian)
+ Here’s how much heat your body can take. (MIT Technology Review)

3 Silicon Valley is increasingly wary of Chinese espionage
Firms are stepping up security and staff screening. (FT $)

4 Chatbots can detect other chatbots’ mistakes
But there’s a danger they could introduce new biases, too. (WP $)
+ The people paid to train AI are outsourcing their work… to AI. (MIT Technology Review)

5 AI search engine Perplexity has a hallucination problem
It makes up quotes and summarizes news articles inaccurately. (Wired $)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

6 The EU has canceled a vote on private chat apps
Ambassadors have clashed over how best to safeguard user privacy. (Politico)

7 Semi-solid batteries are the next big thing
With gel electrolytes, specifically. (IEEE Spectrum)
+ How does an EV battery actually work? (MIT Technology Review)

8 Singapore is going all-in on lab-grown meat
Just as the rest of the world reconsiders. (Rest of World)
+ Here’s what a lab-grown burger tastes like. (MIT Technology Review)

9 Dark energy is changing how we think about the universe
Its density appears to have been changing over time. (Economist $)

10 Europe’s trees have synced their fruiting to the sun
One species times its seed release to the summer solstice. (Quanta Magazine)

Quote of the day

“The poorest bear the cost of such climate change.”

—Sunil Kumar Aledia, who runs a homeless charity in India, tells Reuters why the first victims of the country’s deadly heat wave have been people living out in the open.

The big story

Inside the messy ethics of making war with machines

August 2023

In recent years, intelligent autonomous weapons have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever.

But weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision.

However, these systems have become sophisticated enough to raise novel questions. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story.

—Arthur Holland Michel

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ How are these ducklings so cute?
+ Those mysterious monoliths are back! This time, near Las Vegas.
+ 1999 was a seminal year for cinema: but which film is your favorite?
+ Happy summer solstice! Here’s how sun-worshiping communities across Europe celebrate the longest day. ☀

How generative AI could reinvent what it means to play

20 June 2024 at 05:00

First, a confession. I only got into playing video games a little over a year ago (I know, I know). A Christmas gift of an Xbox Series S “for the kids” dragged me—pretty easily, it turns out—into the world of late-night gaming sessions. I was immediately attracted to open-world games, in which you’re free to explore a vast simulated world and choose what challenges to accept. Red Dead Redemption 2 (RDR2), an open-world game set in the Wild West, blew my mind. I rode my horse through sleepy towns, drank in the saloon, visited a vaudeville theater, and fought off bounty hunters. One day I simply set up camp on a remote hilltop to make coffee and gaze down at the misty valley below me.

To make them feel alive, open-world games are inhabited by vast crowds of computer-controlled characters. These animated people—called NPCs, for “nonplayer characters”—populate the bars, city streets, or space ports of games. They make these virtual worlds feel lived in and full. Often—but not always—you can talk to them.


In open-world games like Red Dead Redemption 2, players can choose diverse interactions within the same simulated experience.

After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. It’s still fun—I had a whale of a time, honestly, looting stagecoaches, fighting in bar brawls, and stalking deer through rainy woods—but the illusion starts to weaken when you poke at it. It’s only natural. Video games are carefully crafted objects, part of a multibillion-dollar industry, that are designed to be consumed. You play them, you loot a few stagecoaches, you finish, you move on. 

It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.

Startups employing generative-AI models, like ChatGPT, are using them to create characters that don’t rely on scripts but, instead, converse with you freely. Others are experimenting with NPCs who appear to have entire interior worlds, and who can continue to play even when you, the player, are not around to watch. Eventually, generative AI could create game experiences that are infinitely detailed, twisting and changing every time you experience them. 

The field is still very new, but it’s extremely hot. In 2022 the venture firm Andreessen Horowitz launched Games Fund, a $600 million fund dedicated to gaming startups. A huge number of these are planning to use AI in gaming. And the firm, also known as A16Z, has now invested in two studios that are aiming to create their own versions of AI NPCs. A second $600 million round was announced in April 2024.

Early experimental demos of these experiences are already popping up, and it may not be long before they appear in full games like RDR2. But some in the industry believe this development will not just make future open-world games incredibly immersive; it could change what kinds of game worlds or experiences are even possible. Ultimately, it could change what it means to play.

“What comes after the video game? You know what I mean?” says Frank Lantz, a game designer and director of the NYU Game Center. “Maybe we’re on the threshold of a new kind of game.”

These guys just won’t shut up

The way video games are made hasn’t changed much over the years. Graphics have become incredibly realistic, and games have gotten bigger. But the way in which you interact with characters, and the game world around you, uses many of the same decades-old conventions.

“In mainstream games, we’re still looking at variations of the formula we’ve had since the 1980s,” says Julian Togelius, a computer science professor at New York University who has a startup called Modl.ai that does in-game testing. Part of that tried-and-tested formula is a technique called a dialogue tree, in which all of an NPC’s possible responses are mapped out. Which one you get depends on which branch of the dialogue tree you have chosen. For example, say something rude about a passing NPC in RDR2 and the character will probably lash out—you have to quickly apologize to avoid a shootout (unless that’s what you want).
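The dialogue-tree formula Togelius describes can be sketched in a few lines: every possible response is authored in advance, and the branch you land on depends only on the choice you pick. This is an illustrative toy, not any studio’s actual system; all node names and lines are invented.

```python
# A minimal dialogue tree: each node has a spoken line and a fixed map
# from player choices to the next node. Nothing is generated; every
# branch was written by hand ahead of time.
DIALOGUE_TREE = {
    "greet": {
        "line": "Howdy, stranger.",
        "choices": {"polite": "friendly_reply", "rude": "hostile_reply"},
    },
    "friendly_reply": {"line": "Fine weather we're having.", "choices": {}},
    "hostile_reply": {
        "line": "You'd best apologize, partner.",
        "choices": {"apologize": "friendly_reply", "insult": "shootout"},
    },
    "shootout": {"line": "Draw!", "choices": {}},
}

def respond(node_id: str, choice: str) -> str:
    """Return the next node id for a player choice at the given node."""
    node = DIALOGUE_TREE[node_id]
    return node["choices"].get(choice, node_id)  # unknown choice: stay put

# Being rude and then refusing to back down escalates, much as in RDR2.
state = respond("greet", "rude")   # -> "hostile_reply"
state = respond(state, "insult")   # -> "shootout"
```

Once a player has walked every branch, there is nothing left to see, which is exactly the exhaustion generative NPCs promise to remove.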

In the most expensive, high-­profile games, the so-called AAA games like Elden Ring or Starfield, a deeper sense of immersion is created by using brute force to build out deep and vast dialogue trees. The biggest studios employ teams of hundreds of game developers who work for many years on a single game in which every line of dialogue is plotted and planned, and software is written so the in-game engine knows when to deploy that particular line. RDR2 reportedly contains an estimated 500,000 lines of dialogue, voiced by around 700 actors. 

“You get around the fact that you can [only] do so much in the world by, like, insane amounts of writing, an insane amount of designing,” says Togelius. 

Generative AI is already helping take some of that drudgery out of making new games. Jonathan Lai, a general partner at A16Z and one of Games Fund’s managers, says that most studios are using image-generating tools like Midjourney to enhance or streamline their work. And in a 2023 survey by A16Z, 87% of game studios said they were already using AI in their workflow in some way—and 99% planned to do so in the future. Many use AI agents to replace the human testers who look for bugs, such as places where a game might crash. In recent months, the CEO of the gaming giant EA said generative AI could be used in more than 50% of its game development processes.

Ubisoft, one of the biggest game developers, famous for AAA open-world games such as Assassin’s Creed, has been using a large-language-model-based AI tool called Ghostwriter to do some of the grunt work for its developers in writing basic dialogue for its NPCs. Ghostwriter generates loads of options for background crowd chatter, which the human writer can pick from or tweak. The idea is to free the humans up so they can spend that time on more plot-focused writing.

[Illustration: a peasant farmer with a long speech panel overlaid repeatedly with the "Y Skip" command. GEORGE WYLESOL]

Ultimately, though, everything is scripted. Once you spend a certain number of hours on a game, you will have seen everything there is to see, and completed every interaction. Time to buy a new one.

But for startups like Inworld AI, this situation is an opportunity. Inworld, based in California, is building tools to make in-game NPCs that respond to a player with dynamic, unscripted dialogue and actions—so they never repeat themselves. The company, now valued at $500 million, is the best-funded AI gaming startup around thanks to backing from former Google CEO Eric Schmidt and other high-profile investors. 

Role-playing games give us a unique way to experience different realities, explains Kylan Gibbs, Inworld’s CEO and founder. But something has always been missing. “Basically, the characters within there are dead,” he says. 

“When you think about media at large, be it movies or TV or books, characters are really what drive our ability to empathize with the world,” Gibbs says. “So the fact that games, which are arguably the most advanced version of storytelling that we have, are lacking these live characters—it felt to us like a pretty major issue.”

Gamers themselves were pretty quick to realize that LLMs could help fill this gap. Last year, some came up with ChatGPT mods (a way to alter an existing game) for the popular role-playing game Skyrim. The mods let players interact with the game’s vast cast of characters using LLM-powered free chat. One mod even included OpenAI’s speech recognition software Whisper AI so that players could speak to the characters with their own voices, saying whatever they wanted, and have full conversations that were no longer restricted by dialogue trees. 

The results gave gamers a glimpse of what might be possible but were ultimately a little disappointing. Though the conversations were open-ended, the character interactions were stilted, with delays while ChatGPT processed each request. 

Inworld wants to make this type of interaction more polished. It’s offering a product for AAA game studios in which developers can create the brains of an AI NPC that can then be imported into their game. Developers use the company’s “Inworld Studio” to generate their NPC. For example, they can fill out a core description that sketches the character’s personality, including likes and dislikes, motivations, or useful backstory. Sliders let them set levels of traits such as introversion or extroversion, insecurity or confidence. And they can also use free text to make the character drunk, aggressive, prone to exaggeration—pretty much anything.

Developers can also add descriptions of how their character speaks, including examples of commonly used phrases that Inworld’s various AI models, including LLMs, then spin into dialogue in keeping with the character. 
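Conceptually, a character definition like this boils down to structured data that gets compiled into an LLM prompt. A hypothetical sketch of that idea, with invented field names and prompt format (this is not Inworld’s actual API):

```python
# Hypothetical sketch: turning a structured NPC definition into an LLM
# system prompt. Field names and the prompt format are invented for
# illustration; this is not Inworld's actual API.

from dataclasses import dataclass, field

@dataclass
class NPCDefinition:
    name: str
    backstory: str
    traits: dict = field(default_factory=dict)   # e.g. {"extroversion": 0.8}
    example_phrases: list = field(default_factory=list)

def build_system_prompt(npc: NPCDefinition) -> str:
    """Compile the character sheet into instructions for the model."""
    trait_text = ", ".join(f"{k}={v:.1f}" for k, v in sorted(npc.traits.items()))
    phrases = "; ".join(npc.example_phrases)
    return (
        f"You are {npc.name}. Backstory: {npc.backstory}\n"
        f"Personality sliders (0=low, 1=high): {trait_text}\n"
        f"Speak in keeping with phrases like: {phrases}\n"
        "Stay in character and never reference the modern world."
    )

bartender = NPCDefinition(
    name="Sal",
    backstory="A weary saloon keeper who has seen one too many shootouts.",
    traits={"extroversion": 0.3, "confidence": 0.9},
    example_phrases=["What'll it be?", "No trouble in my bar."],
)
prompt = build_system_prompt(bartender)
```

The sliders and free-text descriptions become constraints on the model’s output rather than pre-written lines, which is the core difference from a dialogue tree.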

Game designers can also plug other information into the system: what the character knows and doesn’t know about the world (no Taylor Swift references in a medieval battle game, ideally) and any relevant safety guardrails (does your character curse or not?). Narrative controls will let the developers make sure the NPC is sticking to the story and isn’t wandering wildly off-base in its conversation. The idea is that the characters can then be imported into video-game graphics engines like Unity or Unreal Engine to add a body and features. Inworld is collaborating with the text-to-voice startup ElevenLabs to add natural-sounding voices.

Inworld’s tech hasn’t appeared in any AAA games yet, but at the Game Developers Conference (GDC) in San Francisco in March 2024, the firm unveiled an early demo with Nvidia that showcased some of what will be possible. In Covert Protocol, each player operates as a private detective who must solve a case using input from the various in-game NPCs. Also at the GDC, Inworld unveiled a demo called NEO NPC that it had worked on with Ubisoft. In NEO NPC, a player could freely interact with NPCs using voice-to-text software and use conversation to develop a deeper relationship with them.

LLMs give us the chance to make games more dynamic, says Jeff Orkin, founder of Bitpart AI, a new startup that also aims to create entire casts of LLM-powered NPCs that can be imported into games. “Because there’s such reliance on a lot of labor-intensive scripting, it’s hard to get characters to handle a wide variety of ways a scenario might play out, especially as games become more and more open-ended,” he says.

Bitpart’s approach is in part inspired by Orkin’s PhD research at MIT’s Media Lab. There, he trained AIs to role-play social situations using game-play logs of humans doing the same things with each other in multiplayer games.

Bitpart’s casts of characters are trained using a large language model and then fine-tuned in a way that means the in-game interactions are not entirely open-ended and infinite. Instead, the company uses an LLM and other tools to generate a script covering a range of possible interactions, and then a human game designer will select some. Orkin describes the process as authoring the Lego bricks of the interaction. An in-game algorithm searches out specific bricks to string them together at the appropriate time.

Bitpart’s approach could create some delightful in-game moments. In a restaurant, for example, you might ask a waiter for something, but the bartender might overhear and join in. Bitpart’s AI currently works with Roblox. Orkin says the company is now running trials with AAA game studios, although he won’t yet say which ones.
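One way to picture those “Lego bricks” is as a library of pre-authored snippets, each tagged with the situations it fits, plus a runtime lookup that strings matching bricks together. A toy sketch with invented tags and lines (not Bitpart’s actual system):

```python
# Toy sketch of the "Lego bricks" idea: a human-curated library of
# pre-authored interaction snippets, each tagged with the situation it
# fits, and a runtime lookup that pulls out every brick that applies.
# All tags, speakers, and lines are invented; not Bitpart's actual system.

bricks = [
    {"tags": {"restaurant", "order"}, "speaker": "waiter",
     "line": "Right away. Anything else for you?"},
    {"tags": {"restaurant", "order", "overhear"}, "speaker": "bartender",
     "line": "If it's a drink you're after, I pour faster than he walks."},
    {"tags": {"street", "greeting"}, "speaker": "passerby",
     "line": "Fine morning, isn't it?"},
]

def select_bricks(situation_tags):
    """Return every authored brick whose tags all apply to the situation."""
    return [b for b in bricks if b["tags"] <= situation_tags]

# A player orders in a restaurant and the bartender is close enough to
# overhear, so both the waiter's and the bartender's bricks fire.
scene = select_bricks({"restaurant", "order", "overhear", "evening"})
```

Because the bricks themselves are authored and curated by humans, the result stays on-script even though which bricks fire, and in what combination, varies with the situation.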

But generative AI might do more than just enhance the immersiveness of existing kinds of games. It could give rise to completely new ways to play.

Making the impossible possible

When I asked Frank Lantz about how AI could change gaming, he talked for 26 minutes straight. His initial reaction to generative AI had been visceral: “I was like, oh my God, this is my destiny and is what I was put on the planet for.” 

Lantz has been in and around the cutting edge of the game industry and AI for decades but received a cult level of acclaim a few years ago when he created the Universal Paperclips game. The simple in-browser game gives the player the job of producing as many paper clips as possible. It’s a riff on the famous thought experiment by the philosopher Nick Bostrom, which imagines an AI that is given the same task and optimizes against humanity’s interest by turning all the matter in the known universe into paper clips.

Lantz is bursting with ideas for ways to use generative AI. One is to experience a new work of art as it is being created, with the player participating in its creation. “You’re inside of something like Lord of the Rings as it’s being written. You’re inside a piece of literature that is unfolding around you in real time,” he says. He also imagines strategy games where the players and the AI work together to reinvent what kind of game it is and what the rules are, so it is never the same twice.

For Orkin, LLM-powered NPCs can make games unpredictable—and that’s exciting. “It introduces a lot of open questions, like what you do when a character answers you but that sends a story in a direction that nobody planned for,” he says. 

It might mean games that are unlike anything we’ve seen thus far. Gaming experiences that unspool as the characters’ relationships shift and change, as friendships start and end, could unlock entirely new narrative experiences that are less about action and more about conversation and personalities. 

Togelius imagines new worlds built to react to the player’s own wants and needs, populated with NPCs that the player must teach or influence as the game progresses. Imagine interacting with characters whose opinions can change, whom you could persuade or motivate to act in a certain way—say, to go to battle with you. “A thoroughly generative game could be really, really good,” he says. “But you really have to change your whole expectation of what a game is.”

Lantz is currently working on a prototype of a game in which the premise is that you—the player—wake up dead, and the afterlife you are in is a low-rent, cheap version of a synthetic world. The game plays out like a noir in which you must explore a city full of thousands of NPCs powered by a version of ChatGPT, whom you must interact with to work out how you ended up there. 

His early experiments gave him some eerie moments when he felt that the characters seemed to know more than they should, a sensation recognizable to people who have played with LLMs before. Even though you know they’re not alive, they can still freak you out a bit.

“If you run electricity through a frog’s corpse, the frog will move,” he says. “And if you run $10 million worth of computation through the internet … it moves like a frog, you know.” 

But these early forays into generative-AI gaming have given him a real sense of excitement for what’s next: “I felt like, okay, this is a thread. There really is a new kind of artwork here.”

If an AI NPC talks and no one is around to listen, is there a sound?

AI NPCs won’t just enhance player interactions—they might interact with one another in weird ways. Red Dead Redemption 2’s NPCs each have long, detailed scripts that spell out exactly where they should go, what work they must complete, and how they’d react if anything unexpected occurred. If you want, you can follow an NPC and watch it go about its day. It’s fun, but ultimately it’s hard-coded.

NPCs built with generative AI could have a lot more leeway—even interacting with one another when the player isn’t there to watch. Just as people have been fooled into thinking LLMs are sentient, watching a city of generated NPCs might feel like peering over the top of a toy box that has somehow magically come alive.

We’re already getting a sense of what this might look like. At Stanford University, Joon Sung Park has been experimenting with AI-generated characters and watching to see how their behavior changes and gains complexity as they encounter one another. 

Because large language models have sucked up the internet and social media, they actually contain a lot of detail about how we behave and interact, he says.

[Image: Gamers came up with ChatGPT mods for the popular role-playing game Skyrim.]
[Image: Although 2016’s hugely hyped No Man’s Sky used procedural generation to create endless planets to explore, many saw it as a letdown.]
[Image: In Covert Protocol, players operate as private detectives who must solve the case using input from various in-game NPCs.]

In Park’s recent research, he and colleagues set up a Sims-like game, called Smallville, with 25 simulated characters powered by generative AI. Each was given a name and a simple biography before being set in motion. When left to interact with each other for two days, they began to exhibit humanlike conversations and behavior, including remembering each other and being able to talk about their past interactions. 

For example, the researchers prompted one character to organize a Valentine’s Day party—and then let the simulation run. That character sent invitations around town, while other members of the community asked each other on dates to go to the party, and all turned up at the venue at the correct time. All of this was carried out through conversations, and past interactions between characters were stored in their “memories” as natural language.
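The memory mechanism can be sketched simply: each agent stores its observations as timestamped natural language and retrieves the most relevant ones when deciding what to do next. A toy version in Python, using crude keyword overlap where the actual research scored memories by recency, importance, and embedding similarity:

```python
import re

# Toy sketch of an agent "memory stream" in the style of the Smallville
# experiment: observations are stored as timestamped natural language and
# retrieved by crude keyword overlap. (The actual research scores memories
# by recency, importance, and embedding similarity; this is a simplification.)

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

class AgentMemory:
    def __init__(self, name):
        self.name = name
        self.stream = []  # list of (tick, natural-language observation)

    def remember(self, tick, sentence):
        self.stream.append((tick, sentence))

    def recall(self, query, k=2):
        """Return the k stored memories sharing the most words with the query."""
        scored = sorted(
            self.stream,
            key=lambda m: len(words(query) & words(m[1])),
            reverse=True,
        )
        return [sentence for _, sentence in scored[:k]]

# Invented example in the spirit of the Valentine's Day party experiment.
isabella = AgentMemory("Isabella")
isabella.remember(1, "I decided to host a Valentine's Day party at the cafe.")
isabella.remember(2, "I invited Maria to the Valentine's Day party.")
isabella.remember(3, "I watered the plants outside the cafe.")

relevant = isabella.recall("Who is coming to the Valentine's Day party?")
```

Because the memories are plain language, the retrieved snippets can be dropped straight into an LLM prompt, which is how past interactions end up shaping an agent’s next conversation.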

For Park, the implications for gaming are huge. “This is exactly the sort of tech that the gaming community for their NPCs have been waiting for,” he says. 

His research has inspired games like AI Town, an open-source interactive experience on GitHub that lets human players interact with AI NPCs in a simple top-down game. You can leave the NPCs to get along for a few days and check in on them, reading the transcripts of the interactions they had while you were away. Anyone is free to take AI Town’s code to build new NPC experiences through AI. 

For Daniel De Freitas, cofounder of the startup Character AI, which lets users generate and interact with their own LLM-powered characters, the generative-AI revolution will allow new types of games to emerge—ones in which the NPCs don’t even need human players. 

The player is “joining an adventure that is always happening, that the AIs are playing,” he imagines. “It’s the equivalent of joining a theme park full of actors, but unlike the actors, they truly ‘believe’ that they are in those roles.”

If you’re getting Westworld vibes right about now, you’re not alone. There are plenty of stories about people torturing or killing their simple Sims characters in the game for fun. Would mistreating NPCs that pass for real humans cross some sort of new ethical boundary? What if, Lantz asks, an AI NPC that appeared conscious begged for its life when you simulated torturing it?

It raises complex questions, he adds. “One is: What are the ethical dimensions of pretend violence? And the other is: At what point do AIs become moral agents to which harm can be done?”

There are other potential issues too. An immersive world that feels real, and never ends, could be dangerously addictive. Some users of AI chatbots have already reported losing hours and even days in conversation with their creations. Are there dangers that the same parasocial relationships could emerge with AI NPCs? 

“We may need to worry about people forming unhealthy relationships with game characters at some point,” says Togelius. Until now, players have been able to differentiate pretty easily between game play and real life. But AI NPCs might change that, he says: “If at some point what we now call ‘video games’ morph into some all-encompassing virtual reality, we will probably need to worry about the effect of NPCs being too good, in some sense.”

A portrait of the artist as a young bot

Not everyone is convinced that never-ending open-ended conversations between the player and NPCs are what we really want for the future of games. 

“I think we have to be cautious about connecting our imaginations with reality,” says Mike Cook, an AI researcher and game designer. “The idea of a game where you can go anywhere, talk to anyone, and do anything has always been a dream of a certain kind of player. But in practice, this freedom is often at odds with what we want from a story.”

In other words, having to generate a lot of the dialogue yourself might actually get kind of … well, boring. “If you can’t think of interesting or dramatic things to say, or are simply too tired or bored to do it, then you’re going to basically be reading your own very bad creative fiction,” says Cook. 

Orkin likewise doesn’t think conversations that could go anywhere are actually what most gamers want. “I want to play a game that a bunch of very talented, creative people have really thought through and created an engaging story and world,” he says.

This idea of authorship is an important part of game play, agrees Togelius. “You can generate as much as you want,” he says. “But that doesn’t guarantee that anything is interesting and worth keeping. In fact, the more content you generate, the more boring it might be.”

[Illustration: a skeleton wielding a mace, partially obscured by possible interaction cues such as "Listen," "Kiss," "Ask Politely to go away," and "Tell Joke." GEORGE WYLESOL]

Sometimes, the possibility of everything is too much to cope with. No Man’s Sky, a hugely hyped space game launched in 2016 that used algorithms to generate endless planets to explore, was seen by many players as a bit of a letdown when it finally arrived. Players quickly discovered that being able to explore a universe that never ended, with worlds that were endlessly different, actually fell a little flat. (A series of updates over subsequent years has made No Man’s Sky a little more structured, and it’s now generally well thought of.)

One approach might be to keep AI gaming experiences tight and focused.

Hilary Mason, CEO at the gaming startup Hidden Door, likes to joke that her work is “artisanal AI.” She is from Brooklyn, after all, says her colleague Chris Foster, the firm’s game director, laughing.

Hidden Door, which has not yet released any products, is making role-playing text adventures based on classic stories that the user can steer. It’s like Dungeons & Dragons for the generative AI era. It stitches together classic tropes from certain adventure worlds with an annotated database of thousands of words and phrases, then uses a variety of machine-learning tools, including LLMs, to make each story unique. Players walk through a semi-unstructured storytelling experience, free-typing into text boxes to control their character. 

The result feels a bit like hand-annotating an AI-generated novel with Post-it notes.

In a demo with Mason, I got to watch as her character infiltrated a hospital and attempted to hack into the server. Each suggestion prompted the system to spin up the next part of the story, with the large language model creating new descriptions and in-game objects on the fly.

Each experience lasts between 20 and 40 minutes, and for Foster, it creates an “expressive canvas” that people can play with. The fixed length and the added human touch—Mason’s artisanal approach—give players “something really new and magical,” he says.

There’s more to life than games

Park thinks generative AI that makes NPCs feel alive in games will have other, more fundamental implications further down the line.

“This can, I think, also change the meaning of what games are,” he says. 

For example, he’s excited about using generative-AI agents to simulate how real people act. He thinks AI agents could one day be used as proxies for real people to, for example, test out the likely reaction to a new economic policy. Counterfactual scenarios could be plugged in that would let policymakers run time backwards to try to see what would have happened if a different path had been taken. 

“You want to learn that if you implement this social policy or economic policy, what is going to be the impact that it’s going to have on the target population?” he suggests. “Will there be unexpected side effects that we’re not going to be able to foresee on day one?”

And while Inworld is focused on adding immersion to video games, it has also worked with LG in South Korea to make characters that kids can chat with to improve their English language skills. Others are using Inworld’s tech to create interactive experiences. One of these, called Moment in Manzanar, was created to help players empathize with the Japanese-Americans the US government detained in internment camps during World War II. It allows the user to speak to a fictional character called Ichiro who talks about what it was like to be held in the Manzanar camp in California. 

Inworld’s NPC ambitions might be exciting for gamers (my future excursions as a cowboy could be even more immersive!), but there are some who believe using AI to enhance existing games is thinking too small. Instead, we should be leaning into the weirdness of LLMs to create entirely new kinds of experiences that were never possible before, says Togelius. The shortcomings of LLMs “are not bugs—they’re features,” he says. 

Lantz agrees. “You have to start with the reality of what these things are and what they do—this kind of latent space of possibilities that you’re surfing and exploring,” he says. “These engines already have that kind of a psychedelic quality to them. There’s something trippy about them. Unlocking that is the thing that I’m interested in.”

Whatever is next, we probably haven’t even imagined it yet, Lantz thinks. 

“And maybe it’s not about a simulated world with pretend characters in it at all,” he says. “Maybe it’s something totally different. I don’t know. But I’m excited to find out.”

The Download: video-generating AI, and Meta’s voice cloning watermarks

19 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

I tested out a buzzy new text-to-video AI model from China

You may not be familiar with Kuaishou, but this Chinese company just hit a major milestone: It’s released the first ever text-to-video generative AI model that’s freely available for the public to test.

The short-video platform, which has over 600 million active users, announced the new tool, called Kling, on June 6. Like OpenAI’s Sora model, Kling is able to generate videos up to two minutes long from prompts.

But unlike Sora, which remains inaccessible to the public four months after OpenAI debuted it, Kling has already started letting people try the model themselves. Zeyi Yang, our China reporter, has been putting it through its paces. Here’s what he made of it.

This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

Meta has created a way to watermark AI-generated speech

The news: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. 

Why it matters: The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hour-long podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools. Read the full story.

—Melissa Heikkilä

The return of pneumatic tubes

Pneumatic tubes were once touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where they help to deliver orders for the main character, Winston Smith, in his job rewriting history to fit the ruling party’s changing narrative. 

In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. The technology involves moving a cylindrical carrier or capsule through a series of tubes with the aid of a blower that pushes or pulls it into motion, and for a while, the United States took up the systems with gusto.

But by the mid to late 20th century, use of the technology had largely fallen by the wayside, and pneumatic tube technology became virtually obsolete. Except in hospitals. Read the full story.

—Vanessa Armstrong

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Nvidia has become the world’s most valuable company 
Leapfrogging Microsoft and Apple thanks to the AI boom. (BBC)
+ Nvidia’s meteoric rise echoes the dot com boom. (WSJ $)
+ CEO Jensen Huang is now one of the richest people in the world. (Forbes)
+ The firm is worth more than China’s entire agricultural industry. (NY Mag $)
+ What’s next in chips. (MIT Technology Review)

2 TikTok is introducing AI avatars for ads
Which seems like a slippery slope. (404 Media)
+ India’s farmers are getting their news from AI news anchors. (Bloomberg $)
+ Deepfakes of Chinese influencers are livestreaming 24/7. (MIT Technology Review)

3 Boeing’s Starliner spacecraft will stay in space for a little longer
Officials need to troubleshoot some issues before it can head back to Earth. (WP $)

4 STEM students are refusing to work at Amazon and Google
Until the companies end their involvement with Project Nimbus. (Wired $)

5 Google isn’t what it used to be
But is Reddit really a viable alternative? (WSJ $)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

6 A security bug allows anyone to impersonate Microsoft corporate email accounts
It’s making it harder to spot phishing attacks. (TechCrunch)

7 How deep sea exploration has changed since the Titan disaster
Robots are taking humans’ place to plumb the depths. (NYT $)
+ Meet the divers trying to figure out how deep humans can go. (MIT Technology Review)

8 How the free streaming service Tubi took over the US
Its secret weapon? Old movies. (The Guardian)

9 A new AI video tool instantly started ripping off Disney
Raising some serious questions about what the model had been trained on. (The Verge)
+ What’s next for generative video. (MIT Technology Review)

10 Apple appears to have paused work on the next Vision Pro
Things aren’t looking too bright for the high-end headset. (The Information $)
+ These minuscule pixels are poised to take augmented reality by storm. (MIT Technology Review)

Quote of the day

“He’s like Taylor Swift, but for tech.”

—Mark Zuckerberg is suitably dazzled by Nvidia CEO Jensen Huang’s star power, the Information reports.

The big story

How sounds can turn us on to the wonders of the universe

June 2023

Astronomy should, in principle, be a welcoming field for blind researchers. But across the board, science is full of charts, graphs, databases, and images that are designed to be seen.

So researcher Sarah Kane, who is legally blind, was thrilled three years ago when she encountered a technology known as sonification, designed to transform information into sound. Since then she’s been working with a project called Astronify, which presents astronomical information in audio form.

For millions of blind and visually impaired people, sonification could be transformative—opening access to education, to once unimaginable careers, and even to the secrets of the universe. Read the full story.

—Corey S. Powell

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Clearing a pool table in 28 seconds? Don’t mind if I do.
+ As summer gets truly underway, it’s time to reorganize your closet.
+ Check out the winners of this year’s Food Photographer of the Year awards.
+ If you’re obsessed with the viral Steam game Banana, you’re far from alone. 🍌

The return of pneumatic tubes

19 June 2024 at 05:00

Pneumatic tubes were touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where the main character, Winston Smith, sits in a room peppered with pneumatic tubes that spit out orders for him to alter previously published news stories and historical records to fit the ruling party’s changing narrative.  

[Photo: Abandoned by most industries at midcentury, pneumatic tube systems have become ubiquitous in hospitals. ALAMY]

In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. “The possibilities of compressed air are not fully realized in this country,” declared an 1890 article in the New York Tribune. “The pneumatic tube system of communication is, of course, in use in many of the downtown stores, in newspaper offices […] but there exists a great deal of ignorance about the use of compressed air, even among engineering experts.”

Pneumatic tube technology involves moving a cylindrical carrier or capsule through a series of tubes with the aid of a blower that pushes or pulls it into motion. For a while, the United States took up the systems with gusto. Retail stores and banks were especially interested in their potential to move money more efficiently: “Besides this saving of time to the customer the store is relieved of all the annoying bustle and confusion of boys running for cash on the various retail floors,” one 1882 article in the Boston Globe reported. The benefit to the owner, of course, was reduced labor costs, with tube manufacturers claiming that stores would see a return on their investment within a year.  

“The motto of the company is to substitute machines for men and for children as carriers, in every possible way,” a 1914 Boston Globe article said about Lamson Service, one of the largest proprietors of tubes at the time, adding, “[President] Emeritus Charles W. Eliot of Harvard says: ‘No man should be employed at a task which a machine can perform,’ and the Lamson Company supplements that statement by this: ‘Because it doesn’t pay.’”

By 1912, Lamson had over 60,000 customers globally in sectors including retail, banks, insurance offices, courtrooms, libraries, hotels, and industrial plants. The postal service in cities such as Boston, Philadelphia, Chicago, and New York also used tubes to deliver the mail, with at least 45 miles of Lamson tubing in place by 1912.

On the transportation front, New York City’s first attempt at a subway system, in 1870, also ran on a pneumatic system, and the idea of using tubes to move people continues to beguile innovators to this day. (See Elon Musk’s largely abandoned Hyperloop concept of the 2010s.)

But by the mid to late 20th century, use of the technology had largely fallen by the wayside. It had become cheaper to transport mail by truck than by tube, and as transactions moved to credit cards, there was less demand to make change for cash payments. Electrical rail won out over compressed air, paper records and files disappeared in the wake of digitization, and tubes at bank drive-throughs started being replaced by ATMs, while only a fraction of pharmacies used them for their own such services. Pneumatic tube technology became virtually obsolete.

Except in hospitals. 

“A pneumatic tube system today for a new hospital that’s being built is ubiquitous. It’s like putting a washing machine or a central AC system in a new home. It just makes too much sense to not do it,” says Cory Kwarta, CEO of Swisslog Healthcare, a corporation that—under its TransLogic company—has provided pneumatic tube systems in health-care facilities for over 50 years. And while the sophistication of these systems has changed over time, the fundamental technology of using pneumatic force to move a capsule from one destination to another has remained the same. 

By the turn of the 20th century, health care had become a more scientific endeavor, and different spaces within a hospital were designated for new technologies (like x-rays) or specific procedures (like surgeries). “Instead of having patients in one place, with the doctors and the nurses and everything coming to them, and it’s all happening in the ward, [hospitals] became a bunch of different parts that each had a role,” explains Jeanne Kisacky, an architectural historian who wrote Rise of the Modern Hospital: An Architectural History of Health and Healing, 1870–1940.

Designating different parts of a building for different medical specialties and services, like specimen analysis, also increased the physical footprint of health-care facilities. The result was that nurses and doctors had to spend much of their days moving from one department to another, which was an inefficient use of their time. Pneumatic tube technology provided a solution.

By the 1920s, more and more hospitals started installing tube systems. At first, the capsules primarily moved medical records, prescription orders, and items like money and receipts—similar cargo to what was moved around in banks and retail stores at the time. As early as 1927, however, the systems were also marketed to hospitals as a way to transfer specimens to a central laboratory for analysis. 

Two men stand among the 2,000 pneumatic tube canisters in the basement of the Lexington Avenue Post Office in New York City, circa 1915.

In 1955, clubbers at the Reni Ballroom in Berlin exchanged requests for dances via pneumatic tube in a sort of precursor to texting.

In the late 1940s and ’50s, canisters like this one, traveling at around 35 miles an hour, carried as many as 600 letters daily throughout New York City.

The Hospital of the University of Pennsylvania traffics nearly 4,000 specimens daily through its pneumatic tubes.

By the 1960s, pneumatic tubes were becoming standard in health care. As a hospital administrator explained in the January 1960 issue of Modern Hospital, “We are now getting eight hours’ worth of service per day from each nurse, where previously we had been getting about six hours of nursing plus two hours of errand running.”

As computers and credit cards started to become more prevalent in the 1980s, reducing paperwork significantly, the systems shifted to mostly carrying lab specimens, pharmaceuticals, and blood products. Today, lab specimens are roughly 60% of what hospital tube systems carry; pharmaceuticals account for 30%, and blood products for phlebotomy make up 5%.

The carriers or capsules, which can hold up to five pounds, move through piping six inches in diameter—just big enough to hold a 2,000-milliliter IV bag—at speeds of 18 to 24 feet per second, or roughly 12 to 16 miles per hour. The carriers are limited to those speeds to maintain specimen integrity. If blood samples move faster, for example, blood cells can be destroyed.
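Those two speed figures are just unit conversions of each other; a quick check (my arithmetic, not the manufacturers’) confirms they line up:

```python
# Converting the carrier speeds above from feet per second to miles
# per hour: 18–24 ft/s should come out to roughly 12–16 mph.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def fps_to_mph(feet_per_second: float) -> float:
    """Convert feet per second to miles per hour."""
    return feet_per_second * SECONDS_PER_HOUR / FEET_PER_MILE

for fps in (18, 24):
    print(f"{fps} ft/s ≈ {fps_to_mph(fps):.1f} mph")
```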

The pneumatic systems have also gone through major changes in structure in recent years, evolving from fixed routes to networked systems. “It’s like a train system, and you’re on one track and now you have to go to another track,” says Steve Dahl, an executive vice president at Pevco, a manufacturer of these systems.

Exhibition-goers wait to ride the first pneumatic passenger railway in the US at the Exhibition of the American Institute at the New York City Armory in 1867.
GETTY IMAGES

Manufacturers try to get involved early in the hospital design process, says Swisslog’s Kwarta, so “we can talk to the clinical users and say, ‘Hey, what kind of contents do you anticipate sending through this pneumatic tube system, based on your bed count, based on your patient census, and from where and to where do these specimens or materials need to go?’”

Penn Medicine’s University City Medical District in Philadelphia opened the state-of-the-art Pavilion in 2021. It has three pneumatic systems: the main one is for items directly related to health care, like specimens, and two separate ones handle linen and trash. The main system runs over 12 miles of pipe and completes more than 6,000 transactions on an average day. Sending a capsule between the two farthest points of the system—a distance of multiple city blocks—takes just under five minutes. Walking that distance would take around 20 minutes, not including getting to the floor where the item needs to go. 

Michigan Medicine has a system dedicated solely for use in nuclear medicine, which relies on radioactive materials for treatment. Getting the materials where they need to go is a five- to eight-minute walk—too long given their short shelf life. With the tubes, it gets there—in a lead-lined capsule—in less than a minute. 

Steven Fox, who leads the electrical engineering team for the pneumatic tubes at Michigan Medicine, describes the scale of the materials his system moves in terms of African elephants, which weigh about six tons. “We try to keep [a carrier’s] load to five pounds apiece,” he says. “So we could probably transport about 30,000 pounds per day. That’s two and a half African elephants that we transport from one side of the hospital to the other every day.”
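Fox’s elephant math checks out, using only the numbers he gives above:

```python
# Verifying the arithmetic quoted above: ~5 lb per carrier,
# ~30,000 lb per day, and a six-ton (12,000 lb) African elephant.
POUNDS_PER_CARRIER = 5
POUNDS_PER_DAY = 30_000
POUNDS_PER_ELEPHANT = 6 * 2_000  # six short tons

carriers_per_day = POUNDS_PER_DAY // POUNDS_PER_CARRIER
elephants_per_day = POUNDS_PER_DAY / POUNDS_PER_ELEPHANT

print(f"{carriers_per_day} carrier trips ≈ {elephants_per_day} elephants a day")
```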

The equipment to maintain these labyrinthine highways is vast. Michigan and Penn have between 150 and 200 stations where doctors, nurses, and technicians can pick up a capsule or send one off. Keeping those systems moving also requires around 30 blowers and over 150 transfer units to shift carriers to different tube lines as needed. At Michigan Medicine, moving an item from one end of the system to another requires 20 to 25 pieces of equipment.

Before the turn of the century, triggering the blower to move a capsule from point A to point B was done by someone turning or pressing an electronic or magnetic switch. In the 2000s, technicians managed the systems on DOS; these days, the latest systems run on programs that monitor every capsule in real time and allow adjustments based on the level of traffic, the priority level of a capsule, and the demand for additional carriers. The systems run 24 hours a day, every day. 

“We treat [the tube system] no different than electricity, steam, water, gas. It’s a utility,” says Frank Connelly, an assistant hospital director at Penn. “Without that, you can’t provide services to people that need it in a hospital.”

“You’re nervous—you just got blood taken,” he continues. “‘How long is it going to be before I get my results back?’ Imagine if they had to wait all that extra time because you’re not sending one person for every vial—they’re going to wait awhile until they get a basket full and then walk to the lab. Nowadays they fill up the tube and send it to the lab. And I think that helps patient care.” 

Vanessa Armstrong is a freelance writer whose work has appeared in the New York Times, Atlas Obscura, Travel + Leisure, and elsewhere. 

I tested out a buzzy new text-to-video AI model from China

By: Zeyi Yang
19 June 2024 at 05:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

You may not be familiar with Kuaishou, but this Chinese company just hit a major milestone: It’s released the first text-to-video generative AI model that’s freely available for the public to test.

The short-video platform, which has over 600 million active users, announced the new tool on June 6. It’s called Kling. Like OpenAI’s Sora model, Kling is able to generate videos “up to two minutes long with a frame rate of 30fps and video resolution up to 1080p,” the company says on its website.

But unlike Sora, which still remains inaccessible to the public four months after OpenAI trialed it, Kling soon started letting people try the model themselves. 

I was one of them. I got access to it after downloading Kuaishou’s video-editing tool, signing up with a Chinese number, getting on a waitlist, and filling out an additional form through Kuaishou’s user feedback groups. The model can’t process prompts written entirely in English, but you can get around that by either translating the phrase you want to use into Chinese or including one or two Chinese words.

So, first things first. Here are a few results I generated with Kling to show you what it’s like. Remember Sora’s impressive demo video of Tokyo’s street scenes or the cat darting through a garden? Here are Kling’s takes:

Prompt: Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.
ZEYI YANG/MIT TECHNOLOGY REVIEW | KLING
Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.
ZEYI YANG/MIT TECHNOLOGY REVIEW | KLING
Prompt: A white and orange tabby cat is seen happily darting through a dense garden, as if chasing something. Its eyes are wide and happy as it jogs forward, scanning the branches, flowers, and leaves as it walks. The path is narrow as it makes its way between all the plants. The scene is captured from a ground-level angle, following the cat closely, giving a low and intimate perspective. The image is cinematic with warm tones and a grainy texture. The scattered daylight between the leaves and plants above creates a warm contrast, accentuating the cat’s orange fur. The shot is clear and sharp, with a shallow depth of field.
ZEYI YANG/MIT TECHNOLOGY REVIEW | KLING

Remember the image of Dall-E’s horse-riding astronaut? I asked Kling to generate a video version too. 

Prompt: An astronaut riding a horse in space.
ZEYI YANG/MIT TECHNOLOGY REVIEW | KLING

There are a few things worth applauding here. None of these videos deviates from the prompt much, and the physics seem right—the panning of the camera, the ruffling leaves, and the way the horse and astronaut turn, showing Earth behind them. The generation process took around three minutes for each of them. Not the fastest, but totally acceptable. 

But there are obvious shortcomings, too. The videos, while 720p in format, seem blurry and grainy; sometimes Kling ignores a major request in the prompt; and most important, all videos generated now are capped at five seconds long, which makes them far less dynamic or complex.

However, it’s not really fair to compare these results with things like Sora’s demos, which are hand-picked by OpenAI to release to the public and probably represent better-than-average results. These Kling videos are from the first attempts I had with each prompt, and I rarely included prompt-engineering keywords like “8k, photorealism” to fine-tune the results. 

If you want to see more Kling-generated videos, check out this handy collection put together by an open-source AI community in China, which includes both impressive results and all kinds of failures.

Kling’s general capabilities are good enough, says Guizang, an AI artist in Beijing who has been testing out the model since its release and has compiled a series of direct comparisons between Sora and Kling. Kling’s disadvantage lies in the aesthetics of the results, he says, like the composition or the color grading. “But that’s not a big issue. That can be fixed quickly,” Guizang, who wished to be identified only by his online alias, tells MIT Technology Review

“The core capability of a model is in how it simulates physics and real natural environments,” and he says Kling does well in that regard.

Kling works in a similar way to Sora: it combines the diffusion models traditionally used in video-generation AIs with a transformer architecture, which helps it understand larger video data files and generate results more efficiently.

But Kling may have a key advantage over Sora: Kuaishou, the most prominent rival to Douyin in China, has a massive video platform with hundreds of millions of users who have collectively uploaded an enormous trove of video data that could be used to train it. Kuaishou told MIT Technology Review in a statement that “Kling uses publicly available data from the global internet for model training, in accordance with industry standards.” However, the company didn’t elaborate on the specifics of the training data (neither did OpenAI about Sora, which has led to concerns about intellectual-property protections).

After testing the model, I feel the biggest limitation to Kling’s usefulness is that it only generates five-second-long videos.

“The longer a video is, the more likely it will hallucinate or generate inconsistent results,” says Shen Yang, a professor studying AI and media at Tsinghua University in Beijing. That limitation means the technology will leave a larger impact on the short-video industry than it does on the movie industry, he says. 

Short, vertical videos (those designed for viewing on phones) usually grab the attention of viewers in a few seconds. Shen says Chinese TikTok-like platforms often assess whether a video is successful by how many people would watch through the first three or five seconds before they scroll away—so an AI-generated high-quality video clip that’s just five seconds long could be a game-changer for short-video creators. 

Guizang agrees that AI could disrupt the content-creating scene for short-form videos. It will benefit creators in the short term as a productivity tool, but in the long run, he worries that platforms like Kuaishou and Douyin could take over the production of videos and directly generate content customized for users, reducing the platforms’ reliance on star creators.

It might still take quite some time for the technology to advance to that level, but the field of text-to-video tools is getting much more buzzy now. One week after Kling’s release, a California-based startup called Luma AI also released a similar model for public usage. Runway, a celebrity startup in video generation, has teased a significant update that will make its model much more powerful. ByteDance, Kuaishou’s biggest rival, is also reportedly working on the release of its generative video tool soon. “By the end of this year, we will have a lot of options available to us,” Guizang says.

I asked Kling to generate what society looks like when “anyone can quickly generate a video clip based on their own needs.” And here’s what it gave me. Impressive hands, but you didn’t answer the question—sorry.

Prompt: With the release of Kuaishou’s Kling model, the barrier to entry for creating short videos has been lowered, resulting in significant impacts on the short-video industry. Anyone can quickly generate a video clip based on their own needs. Please show what the society will look like at that time.
ZEYI YANG/MIT TECHNOLOGY REVIEW | KLING

Do you have a prompt you want to see generated with Kling? Send it to zeyi@technologyreview.com and I’ll send you back the result. The prompt has to be less than 200 characters long, and preferably written in Chinese.


Now read the rest of China Report

Catch up with China

1. A new investigation revealed that the US military secretly ran a campaign to post anti-vaccine propaganda on social media in 2020 and 2021, aiming to sow distrust in the Chinese-made covid vaccines in Southeast Asian countries. (Reuters $)

2. A Chinese court sentenced Huang Xueqin, the journalist who helped launch the #MeToo movement in China, to five years in prison for “inciting subversion of state power.” (Washington Post $)

3. A Shein executive said the company’s corporate values basically make it an American company, but the company is now trying to hide that remark to avoid upsetting Beijing. (Financial Times $)

4. China is getting close to building the world’s largest particle collider, potentially starting in 2027. (Nature)

5. To retaliate for the European Union’s raising tariffs on electric vehicles, the Chinese government has opened an investigation into allegedly unfair subsidies for Europe’s pork exports. (New York Times $)

  • On a related note about food: China’s exploding demand for durian fruit in recent years has created a $6 billion business in Southeast Asia, leading some farmers to cut down jungles and coffee plants to make way for durian plantations. (New York Times $)

Lost in translation

In 2012, Jiumei, a Chinese woman in her 20s, began selling a service where she sends “good night” text messages to people online at the price of 1 RMB per text (that’s about $0.14). 

Twelve years, three mobile phones, four different numbers, and over 50,000 messages later, she’s still doing it, according to the Chinese online publication Personage. Some of her clients are buying the service for themselves, hoping to talk to someone regularly at their most lonely or desperate times. Others are buying it to send anonymous messages—to a friend going through a hard time, or an ex-lover who has cut off communications. 

The business isn’t very profitable. Jiumei earns around 3,000 RMB ($410) annually from it on top of her day job, and even less in recent years. But she’s persisted because the act of sending these messages has become a nightly ritual—not just for her customers but also for Jiumei herself, offering her solace in her own times of loneliness and hardship.

One more thing

Globally, Kuaishou has been much less successful than its nemesis ByteDance, except in one country: Brazil. Kwai, the overseas version of Kuaishou, has been so popular in Brazil that even the Marubo people, a tribal group in the remote Amazonian rainforests and one of the last communities to be connected online, have begun using the app, according to the New York Times.

Meta has created a way to watermark AI-generated speech

18 June 2024 at 12:49

Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. 

The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. 

However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with—for example, by removing or forging them. 

Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. 

AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.  

AudioSeal is created using two neural networks. One generates watermarking signals that can be embedded into audio tracks. These signals are imperceptible to the human ear but can be detected quickly using the other neural network. Currently, if you want to try to spot AI-generated audio in a longer clip, you have to comb through the entire thing in second-long chunks to see if any of them contain a watermark. This is a slow and laborious process, and not practical on social media platforms with millions of minutes of speech.  

AudioSeal works differently: by embedding a watermark throughout each section of the entire audio track. This allows the watermark to be “localized,” which means it can still be detected even if the audio is cropped or edited. 
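Meta’s actual embedder and detector are neural networks, and the article doesn’t spell out their internals; but the core idea of a “localized” watermark, as opposed to chunk-by-chunk scanning, can be illustrated with a deliberately simplified toy sketch (every name, number, and parameter below is invented for illustration). A known low-amplitude pattern is tiled through the marked region, so any detection window that overlaps it correlates strongly with the pattern and can be flagged on its own, even after cropping:

```python
import random

PATTERN_LEN = 16
random.seed(0)
# A short pseudo-random watermark pattern, tiled across the marked
# region at low amplitude (a real system keeps this inaudible).
PATTERN = [random.uniform(-1, 1) for _ in range(PATTERN_LEN)]
AMPLITUDE = 0.01

def embed(audio, start, end):
    """Add the tiled watermark pattern to audio[start:end]."""
    out = list(audio)
    for i in range(start, end):
        out[i] += AMPLITUDE * PATTERN[i % PATTERN_LEN]
    return out

def detect(audio, window=64, threshold=0.5):
    """Correlate each window with the pattern; return per-window flags."""
    flags = []
    for w in range(0, len(audio) - window + 1, window):
        corr = sum(audio[w + i] * PATTERN[(w + i) % PATTERN_LEN]
                   for i in range(window))
        # Normalize by the correlation the watermark alone would produce.
        expected = AMPLITUDE * sum(p * p for p in PATTERN) * (window / PATTERN_LEN)
        flags.append(corr / expected > threshold)
    return flags

audio = [0.0] * 1024                      # toy "audio": silence
flags = detect(embed(audio, 256, 512))
print(flags)                              # only the windows covering samples 256-511 are flagged
```

Because each window is checked independently, cropping the clip down to just the marked stretch would still trigger detection, which is the property the researchers describe.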

Ben Zhao, a computer science professor at the University of Chicago, says this ability, and the near-perfect detection accuracy, makes AudioSeal better than any previous audio watermarking system he’s come across. 

“It’s meaningful to explore research improving the state of the art in watermarking, especially across mediums like speech that are often harder to mark and detect than visual content,” says Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI. 

But there are some major flaws that need to be overcome before these sorts of audio watermarks can be adopted en masse. Meta’s researchers tested different attacks to remove the watermarks and found that the more information is disclosed about the watermarking algorithm, the more vulnerable it is. The system also requires people to voluntarily add the watermark to their audio files.  

This places some fundamental limitations on the tool, says Zhao. “Where the attacker has some access to the [watermark] detector, it’s pretty fragile,” he says. And this means only Meta will be able to verify whether audio content is AI-generated or not. 

Leibowicz says she remains unconvinced that watermarks will actually further public trust in the information they’re seeing or hearing, despite their popularity as a solution in the tech sector. That’s partly because they are themselves so open to abuse. 

“I’m skeptical that any watermark will be robust to adversarial stripping and forgery,” she adds. 

Scaling green hydrogen technology for the future

Unlike conventional energy sources, green hydrogen offers a way to store and transfer energy without emitting harmful pollutants, positioning it as essential to a sustainable and net-zero future. By converting electrical power from renewable sources into green hydrogen, these low-carbon-intensity energy storage systems can release clean, efficient power on demand through combustion engines or fuel cells. When produced emission-free, hydrogen can decarbonize some of the most challenging industrial sectors, such as steel and cement production, industrial processes, and maritime transport.

“Green hydrogen is the key driver to advance decarbonization,” says Dr. Christoph Noeres, head of green hydrogen at global electrolysis specialist thyssenkrupp nucera. This promising low-carbon-intensity technology has the potential to transform entire industries by providing a clean, renewable fuel source, moving us toward a greener world aligned with industry climate goals.

Accelerating production of green hydrogen

Hydrogen is the most abundant element in the universe, and its availability is key to its appeal as a clean energy source. However, hydrogen does not occur naturally in its pure form; it is always bound to other elements in compounds like water (H2O). Pure hydrogen can be extracted from water through an energy-intensive process called electrolysis.

Hydrogen is typically produced today via steam-methane reforming, in which high-temperature steam is used to produce hydrogen from natural gas. Emissions produced by this process have implications for hydrogen’s overall carbon footprint: worldwide hydrogen production is currently responsible for as many CO2 emissions as the United Kingdom and Indonesia combined.

A solution lies in green hydrogen—hydrogen produced using electrolysis powered by renewable sources. This unlocks the benefits of hydrogen without the dirty fuels. Unfortunately, very little hydrogen is currently powered by renewables: less than 1% came from non-fossil fuel sources in 2022.

A massive scale-up is underway. According to McKinsey, an estimated 130 to 345 gigawatts (GW) of electrolyzer capacity will be necessary to meet the green hydrogen demand by 2030, with 246 GW of this capacity already announced. This stands in stark contrast to the current installed base of just 1.1 GW. Notably, to ensure that green hydrogen constitutes at least 14% of total energy consumption by 2050, a target that the International Renewable Energy Agency (IRENA) estimates is required to meet climate goals, 5,500 GW of cumulative installed electrolyzer capacity will be required.
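Dividing out the figures above (my arithmetic, using only the numbers cited) shows just how stark that scale-up is:

```python
# Ratio of the cited 2030 and 2050 electrolyzer-capacity figures
# to today's installed base of 1.1 GW.
installed_gw = 1.1
needed_2030_low_gw, needed_2030_high_gw = 130, 345
needed_2050_gw = 5_500

print(f"2030 demand: {needed_2030_low_gw / installed_gw:.0f}x to "
      f"{needed_2030_high_gw / installed_gw:.0f}x today's installed base")
print(f"2050 target: {needed_2050_gw / installed_gw:.0f}x today's installed base")
```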

However, scaling up green hydrogen production to these levels requires overcoming cost and infrastructure constraints. Becoming cost-competitive means improving and standardizing the technology, harnessing the scale efficiencies of larger projects, and encouraging government action to create market incentives. Moreover, the expansion of renewable energy in regions with significant solar, hydro, or wind energy potential is another crucial factor in lowering renewable power prices and, consequently, the costs of green hydrogen.

Electrolysis innovation

While electrolysis technologies have existed for decades, scaling them up to meet the demand for clean energy will be essential. Alkaline water electrolysis (AWE), the most dominant and developed electrolysis method, is poised for this transition. It has been used for decades, demonstrating efficiency and reliability in the chemical industry. Moreover, it is more cost-effective than other electrolysis technologies and is well suited to running directly on fluctuating renewable power input. Especially for large-scale applications, AWE demonstrates significant advantages in terms of investment and operating costs. “Transferring small-scale manufacturing and optimizing it towards mass manufacturing will need a high level of investment across the industry,” says Noeres.

Industries that already practice electrolysis, as well as those that already use hydrogen, such as fertilizer production, are well poised for conversion to green hydrogen. For example, thyssenkrupp nucera benefits from a decades-long heritage of using electrolyzer technology in the chlor-alkali process, which produces chlorine and caustic soda for the chemical industry. The company “is able to use its existing supply chain to ramp up production quickly, a distinction that all providers don’t share,” says Noeres.

Alongside scaling up existing solutions, thyssenkrupp nucera is developing complementary techniques and technologies. Among these are solid oxide electrolysis cells (SOEC), which perform electrolysis at very high temperatures. While the need for high temperatures means this technique isn’t right for all customers, in industries where waste heat is readily available—such as chemicals—Noeres says SOEC offers up to 20% enhanced efficiency and reduces production costs.

Thyssenkrupp nucera has entered into a strategic partnership with the renowned German research institute Fraunhofer IKTS to move the technology toward applications in industrial manufacturing. The company envisages SOEC as a complement to AWE in the areas where it is cost-effective to reduce overall energy consumption. “The combination of AWE and SOEC in thyssenkrupp nucera’s portfolio offers a unique product suite to the industry,” says Noeres.

While advancements in electrolysis technology and the diversification of its applications across various scales and industries are promising for green hydrogen production, a coordinated global ramp-up of renewable energy sources and clean power grids is also crucial. Although AWE electrolyzers are ready for deployment in large-scale, centralized green hydrogen production facilities, these must be integrated with renewable energy sources to truly harness their potential.

Making the green hydrogen market

Storage and transportation remain obstacles to a larger market for green hydrogen. While hydrogen can be compressed and stored, its low density presents a practical challenge. For the same energy content, hydrogen occupies nearly four times the volume of natural gas, and storage requires either ultra-high compression or costly refrigeration. Overcoming the economic and technical hurdles of high-volume hydrogen storage and transport will be critical to its potential as an exportable energy carrier.

In 2024, several high-profile green hydrogen projects launched in the U.S., advancing the growth of green hydrogen infrastructure and technology. The landmark Inflation Reduction Act (IRA) provides tax credits and government incentives for producing clean hydrogen and the renewable electricity used in its production. In October 2023, the Biden administration announced $7 billion for the country’s first clean hydrogen hubs, and the U.S. Department of Energy further allocated $750 million for 52 projects across 24 states to dramatically reduce the cost of clean hydrogen and establish American leadership in the industry. The potential economic impact from the IRA legislation is substantial: thyssenkrupp nucera expects the IRA to double or triple the U.S. green hydrogen market size.

“The IRA was a wake-up call for Europe, setting a benchmark for all the other countries on how to support the green hydrogen industry in this startup phase,” says Noeres. Germany’s H2Global scheme was one of the first European efforts to facilitate hydrogen imports with the help of subsidies, and it has since been followed up by the European Hydrogen Bank, which provided €720 million for green hydrogen projects in its pilot auction. “However, more investment is needed to push the green hydrogen industry forward,” says Noeres.

In the current green hydrogen market, China has installed more renewable power than any other country. With lower capital expenditure costs, China produces 40% of the world’s electrolyzers. Additionally, state-owned firms have pledged to build an extensive 6,000-kilometer network of pipelines for green hydrogen transportation by 2050.

Coordinated investment and supportive policies are crucial to bring green hydrogen from a niche technology to a scalable global solution. The Chinese green hydrogen market, along with those of other regions such as the Middle East and North Africa, has advanced significantly, garnering global attention for the competitive edge of its large-scale projects. To compete effectively, the EU must create a global level playing field for European technologies through attractive investment incentives. Supportive policies must also ensure that green products made with hydrogen, such as steel, are sufficiently incentivized and protected against carbon leakage.

A comprehensive strategy, combining investment incentives, open markets, and protection against market distortions and carbon leakage, is crucial for the EU and other countries to remain competitive in the rapidly evolving global green hydrogen market and achieve a decarbonized energy future. “To advance several gigawatt scale or multi-hundred megawatts projects forward,” says Noeres, “we need significantly more volume globally and comparable funding opportunities to make a real impact on global supply chains.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: AI’s limitations

18 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why does AI hallucinate?

The World Health Organization’s new chatbot launched on April 2 with the best of intentions. The virtual avatar, named SARAH, was designed to dispense health tips about how to eat well, quit smoking, de-stress, and more, for millions around the world. But like all chatbots, SARAH can flub its answers. It was quickly found to give out incorrect information. In one case, it came up with a list of fake names and addresses for nonexistent clinics in San Francisco.

Chatbot fails are now a familiar meme. Meta’s short-lived scientific chatbot Galactica made up academic papers and generated wiki articles about the history of bears in space. In February, Air Canada was ordered to honor a refund policy invented by its customer service chatbot. Last year, a lawyer was fined for submitting court documents filled with fake judicial opinions and legal citations made up by ChatGPT.

This tendency to make things up—known as hallucination—is one of the biggest obstacles holding chatbots back from more widespread adoption. Why do they do it? And why can’t we fix it? Read the full story.

—Will Douglas Heaven

Will’s article is the latest entry in MIT Technology Review Explains, our series explaining the complex, messy world of technology to help you understand what’s coming next. You can check out the rest of the series here.

The story is also from the forthcoming magazine issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

Why artists are becoming less scared of AI

Knock, knock. Who’s there? An AI with generic jokes. Researchers from Google DeepMind asked 20 professional comedians to use popular AI language models to write jokes and comedy performances. Their results were mixed. Although the tools helped them to produce initial drafts and structure their routines, AI was not able to produce anything that was original, stimulating, or, crucially, funny.

The study is symptomatic of a broader trend: we’re realizing the limitations of what AI can do for artists. It can take on some of the boring, mundane, formulaic aspects of the creative process, but it can’t replace the magic and originality that humans bring. Read the full story.

—Melissa Heikkilä 

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US government is suing Adobe over concealed fees
And for making it too difficult to cancel a Photoshop subscription. (The Verge)
+ Regulators are going after firms with hard-to-cancel accounts. (NYT $)
+ Adobe’s had an incredibly profitable few years. (Insider $)
+ The company recently announced its plans to safeguard artists against exploitative AI. (MIT Technology Review)

2 The year’s deadly heat waves have only just begun
But not everyone is at equal risk from extreme temperatures. (Vox)
+ Here’s what you need to know about this week’s US heat wave. (WP $)
+ Here’s how much heat your body can take. (MIT Technology Review)

3 Being an influencer isn’t as lucrative as it used to be
It’s getting tougher for content creators to earn a crust from social media alone. (WSJ $)
+ Beware the civilian creators offering to document your wedding. (The Guardian)
+ Deepfakes of Chinese influencers are livestreaming 24/7. (MIT Technology Review)

4 How crypto cash could influence the US Presidential election 
‘Crypto voters’ have started mobilizing for Donald Trump, who has been making pro-crypto proclamations. (NYT $)

5 Europe is pumping money into defense tech startups
It’ll be a while until it catches up with the US though. (FT $)
+ Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan. (MIT Technology Review)

6 China’s solar industry is in serious trouble
Its rapid growth hasn’t translated into big profits. (Economist $)
+ Recycling solar panels is still a major environmental challenge, too. (IEEE Spectrum)
+ This solar giant is moving manufacturing from China back to the US. (MIT Technology Review)

7 Brace yourself for AI reading companions
The systems are trained on famous writers’ thoughts on seminal titles. (Wired $)

8 McDonald’s is ditching AI chatbots at drive-thrus
The tech just proved too unreliable. (The Guardian)

9 How ice freezes is surprisingly mysterious 🧊
It’s not as simple as cooling water to zero degrees. (Quanta Magazine)

10 Keeping your phone cool in hot weather is tough
No direct sunlight, no case, no putting it in the fridge. (WP $)

Quote of the day

“My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that.”

—Photographer Miles Astray explains to the Washington Post why he entered a real photograph of a surreal-looking flamingo into a competition for AI art.

The big story

The Atlantic’s vital currents could collapse. Scientists are racing to understand the dangers.

December 2021

Scientists are searching for clues about one of the most important forces in the planet’s climate system: a network of ocean currents known as the Atlantic Meridional Overturning Circulation. They want to better understand how global warming is changing it, and how much more it could shift, or even collapse.

The problem is the Atlantic circulation seems to be weakening, transporting less water and heat. Because of climate change, melting ice sheets are pouring fresh water into the ocean at the higher latitudes, and the surface waters are retaining more of their heat. Warmer and fresher waters are less dense and thus not as prone to sink, which may be undermining one of the currents’ core driving forces. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This cookie is the perfect replica of those frustrating maze games.
+ Each year, the Roland Garros tennis tournament commissions an artist to create a poster. This collection is remarkable 🎾
+ Sesame Street is the best.
+ If your plants aren’t flourishing, these tips might help to get them looking their best.

Why artists are becoming less scared of AI

18 June 2024 at 06:28

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Knock, knock. 

Who’s there? 

An AI with generic jokes. Researchers from Google DeepMind asked 20 professional comedians to use popular AI language models to write jokes and comedy performances. Their results were mixed. 

The comedians said that the tools were useful in helping them produce an initial “vomit draft” that they could iterate on, and helped them structure their routines. But the AI was not able to produce anything that was original, stimulating, or, crucially, funny. My colleague Rhiannon Williams has the full story.

As Tuhin Chakrabarty, a computer science researcher at Columbia University who specializes in AI and creativity, told Rhiannon, humor often relies on being surprising and incongruous. Creative writing requires its creator to deviate from the norm, whereas LLMs can only mimic it.

And that is becoming pretty clear in the way artists are approaching AI today. I’ve just come back from Hamburg, which hosted one of the largest events for creatives in Europe, and the message I got from those I spoke to was that AI is too glitchy and unreliable to fully replace humans and is best used instead as a tool to augment human creativity. 

Right now, we are in a moment where we are deciding how much creative power we are comfortable giving AI companies and tools. After the boom first started in 2022, when DALL-E 2 and Stable Diffusion first entered the scene, many artists raised concerns that AI companies were scraping their copyrighted work without consent or compensation. Tech companies argue that anything on the public internet falls under fair use, a legal doctrine that allows the reuse of copyright-protected material in certain circumstances. Artists, writers, image companies, and the New York Times have filed lawsuits against these companies, and it will likely take years until we have a clear-cut answer as to who is right. 

Meanwhile, the court of public opinion has shifted a lot in the past two years. Artists I have interviewed recently say they were harassed and ridiculed for protesting AI companies’ data-scraping practices two years ago. Now, the general public is more aware of the harms associated with AI. In just two years, the public has gone from being blown away by AI-generated images to sharing viral social media posts about how to opt out of AI scraping—a concept that was alien to most laypeople until very recently. Companies have benefited from this shift too. Adobe has been successful in pitching its AI offerings as an “ethical” way to use the technology without having to worry about copyright infringement. 

There are also several grassroots efforts to shift the power structures of AI and give artists more agency over their data. I’ve written about Nightshade, a tool created by researchers at the University of Chicago, which lets users add an invisible poison attack to their images so that they break AI models when scraped. The same team is behind Glaze, a tool that lets artists mask their personal style from AI copycats. Glaze has been integrated into Cara, a buzzy new art portfolio site and social media platform, which has seen a surge of interest from artists. Cara pitches itself as a platform for art created by people; it filters out AI-generated content. It got nearly a million new users in a few days. 

This all should be reassuring news for any creative people worried that they could lose their job to a computer program. And the DeepMind study is a great example of how AI can actually be helpful for creatives. It can take on some of the boring, mundane, formulaic aspects of the creative process, but it can’t replace the magic and originality that humans bring. AI models are limited to their training data and will forever only reflect the zeitgeist at the moment of their training. That gets old pretty quickly.


Now read the rest of The Algorithm

Deeper Learning

Apple is promising personalized AI in a private cloud. Here’s how that will work.

Last week, Apple unveiled its vision for supercharging its product lineup with artificial intelligence. The key feature, which will run across virtually all of its product line, is Apple Intelligence, a suite of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure. 

Why this matters: Apple says its privacy-focused system will first attempt to fulfill AI tasks locally on the device itself. If any data is exchanged with cloud services, it will be encrypted and then deleted afterward. It’s a pitch that offers an implicit contrast with the likes of Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. Read more from James O’Donnell here.

Bits and Bytes

How to opt out of Meta’s AI training
If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models. Even if you don’t use any of Meta’s platforms, it can still scrape data such as photos of you if someone else posts them. Here’s our quick guide on how to opt out. (MIT Technology Review)

Microsoft’s Satya Nadella is building an AI empire
Nadella is going all in on AI. His $13 billion investment in OpenAI was just the beginning. Microsoft has become “the world’s most aggressive amasser of AI talent, tools, and technology” and has started building an in-house OpenAI competitor. (The Wall Street Journal)

OpenAI has hired an army of lobbyists
As countries around the world mull AI legislation, OpenAI is on a lobbyist hiring spree to protect its interests. The AI company has expanded its global affairs team from three lobbyists at the start of 2023 to 35 and intends to have up to 50 by the end of this year. (Financial Times)  

UK rolls out Amazon-powered emotion recognition AI cameras on trains
People traveling through some of the UK’s biggest train stations have likely had their faces scanned by Amazon software without their knowledge during an AI trial. London stations such as Euston and Waterloo have tested CCTV cameras with AI to reduce crime and detect people’s emotions. Emotion recognition technology is extremely controversial: experts say it is unreliable and simply does not work. (Wired)

Clearview AI used your face. Now you may get a stake in the company.
The facial recognition company, which has been under fire for scraping images of people’s faces from the web and social media without their permission, has agreed to an unusual settlement in a class action against it. Instead of paying cash, it is offering a 23% stake in the company for Americans whose faces are in its data sets. (The New York Times)

Elephants call each other by their names
This is so cool! Researchers used AI to analyze the calls of two herds of African savanna elephants in Kenya. They found that elephants use specific vocalizations for each individual and recognize when they are being addressed by other elephants. (The Guardian)

Why does AI hallucinate?

18 June 2024 at 04:00

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

The World Health Organization’s new chatbot launched on April 2 with the best of intentions. 

A fresh-faced virtual avatar backed by GPT-3.5, SARAH (Smart AI Resource Assistant for Health) dispenses health tips in eight different languages, 24/7, about how to eat well, quit smoking, de-stress, and more, for millions around the world.

But like all chatbots, SARAH can flub its answers. It was quickly found to give out incorrect information. In one case, it came up with a list of fake names and addresses for nonexistent clinics in San Francisco. The World Health Organization warns on its website that SARAH may not always be accurate.

Here we go again. Chatbot fails are now a familiar meme. Meta’s short-lived scientific chatbot Galactica made up academic papers and generated wiki articles about the history of bears in space. In February, Air Canada was ordered to honor a refund policy invented by its customer service chatbot. Last year, a lawyer was fined for submitting court documents filled with fake judicial opinions and legal citations made up by ChatGPT. 

The problem is, large language models are so good at what they do that what they make up looks right most of the time. And that makes trusting them hard.

This tendency to make things up—known as hallucination—is one of the biggest obstacles holding chatbots back from more widespread adoption. Why do they do it? And why can’t we fix it?

Magic 8 Ball

To understand why large language models hallucinate, we need to look at how they work. The first thing to note is that making stuff up is exactly what these models are designed to do. When you ask a chatbot a question, it draws its response from the large language model that underpins it. But it’s not like looking up information in a database or using a search engine on the web. 

Peel open a large language model and you won’t see ready-made information waiting to be retrieved. Instead, you’ll find billions and billions of numbers. It uses these numbers to calculate its responses from scratch, producing new sequences of words on the fly. A lot of the text that a large language model generates looks as if it could have been copy-pasted from a database or a real web page. But as in most works of fiction, the resemblances are coincidental. A large language model is more like an infinite Magic 8 Ball than an encyclopedia. 

Large language models generate text by predicting the next word in a sequence. If a model sees “the cat sat,” it may guess “on.” That new sequence is fed back into the model, which may now guess “the.” Go around again and it may guess “mat”—and so on. That one trick is enough to generate almost any kind of text you can think of, from Amazon listings to haiku to fan fiction to computer code to magazine articles and so much more. As Andrej Karpathy, a computer scientist and cofounder of OpenAI, likes to put it: large language models learn to dream internet documents. 

Think of the billions of numbers inside a large language model as a vast spreadsheet that captures the statistical likelihood that certain words will appear alongside certain other words. The values in the spreadsheet get set when the model is trained, a process that adjusts those values over and over again until the model’s guesses mirror the linguistic patterns found across terabytes of text taken from the internet. 

To guess a word, the model simply runs its numbers. It calculates a score for each word in its vocabulary that reflects how likely that word is to come next in the sequence in play. The word with the best score wins. In short, large language models are statistical slot machines. Crank the handle and out pops a word. 
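That scoring-and-guessing loop can be sketched in a few lines of toy Python. A real model scores every word in a huge vocabulary using billions of trained numbers and conditions on the whole sequence so far; this sketch uses a tiny hand-made table and only the previous word, purely for illustration.

```python
# A toy stand-in for a trained language model: for each context word,
# a score for every word in a tiny vocabulary. Real models learn
# billions of such numbers; these values are invented for illustration.
scores = {
    "the": {"cat": 0.5, "mat": 0.4, "sat": 0.1},
    "cat": {"sat": 0.8, "the": 0.1, "mat": 0.1},
    "sat": {"on": 0.9, "the": 0.05, "mat": 0.05},
    "on": {"the": 0.9, "cat": 0.05, "mat": 0.05},
}

def next_word(context):
    """Score every word in the vocabulary; the best score wins."""
    candidates = scores[context]
    return max(candidates, key=candidates.get)

def generate(start, length):
    """Feed the model's own guess back in as the new context, repeatedly."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the", 4))  # "the cat sat on the"
```

Nothing here is retrieved from a database: every word is computed on the fly from the table of numbers, which is the sense in which it is “all hallucination.”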

It’s all hallucination

The takeaway here? It’s all hallucination, but we only call it that when we notice it’s wrong. The problem is, large language models are so good at what they do that what they make up looks right most of the time. And that makes trusting them hard. 

Can we control what large language models generate so they produce text that’s guaranteed to be accurate? These models are far too complicated for their numbers to be tinkered with by hand. But some researchers believe that training them on even more text will continue to reduce their error rate. This is a trend we’ve seen as large language models have gotten bigger and better. 

Another approach involves asking models to check their work as they go, breaking responses down step by step. Known as chain-of-thought prompting, this has been shown to increase the accuracy of a chatbot’s output. It’s not possible yet, but future large language models may be able to fact-check the text they are producing and even rewind when they start to go off the rails.
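Chain-of-thought prompting needs no special machinery: it is just a change to the prompt text sent to the model. A minimal sketch, in which the question and the exact wording of the instruction are illustrative rather than a fixed formula:

```python
question = ("A train leaves at 3:40 p.m. and the trip takes 85 minutes. "
            "When does it arrive?")

# Plain prompt: the model answers in one shot, with nowhere to show
# its work, so an arithmetic slip goes straight into the answer.
plain_prompt = question

# Chain-of-thought prompt: the appended instruction pushes the model to
# emit intermediate steps before the answer, which has been shown to
# improve accuracy on multi-step problems.
cot_prompt = question + "\nLet's think step by step, then give the final answer."

print(cot_prompt)
```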

But none of these techniques will stop hallucinations fully. As long as large language models are probabilistic, there is an element of chance in what they produce. Roll 100 dice and you’ll get a pattern. Roll them again and you’ll get another. Even if the dice are, like large language models, weighted to produce some patterns far more often than others, the results still won’t be identical every time. Even one error in 1,000—or 100,000—adds up to a lot of errors when you consider how many times a day this technology gets used. 
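The weighted-dice analogy, and the arithmetic behind errors adding up at scale, translate directly into code. The numbers below (a 0.1% per-answer error rate, 10 million queries a day) are hypothetical round figures, not measurements of any real system.

```python
import random

# A weighted "die": mostly right, occasionally wrong -- like a model
# that hallucinates on a small fraction of its answers.
def answer(error_rate):
    return "wrong" if random.random() < error_rate else "right"

# Roll the same weighted dice twice: the weighting is identical,
# but the two patterns of results are not.
random.seed(1)
run_a = [answer(0.3) for _ in range(10)]
random.seed(2)
run_b = [answer(0.3) for _ in range(10)]

# Even a tiny per-answer error rate adds up with heavy use:
queries_per_day = 10_000_000      # hypothetical usage figure
expected_errors = queries_per_day * 0.001
print(expected_errors)            # 10000.0 wrong answers a day, in expectation
```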

The more accurate these models become, the more we will let our guard down. Studies show that the better chatbots get, the more likely people are to miss an error when it happens.  

Perhaps the best fix for hallucination is to manage our expectations about what these tools are for. When the lawyer who used ChatGPT to generate fake documents was asked to explain himself, he sounded as surprised as anyone by what had happened. “I heard about this new site, which I falsely assumed was, like, a super search engine,” he told a judge. “I did not comprehend that ChatGPT could fabricate cases.” 

The Download: artificial surf pools, and unfunny AI

17 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The cost of building the perfect wave

For nearly as long as surfing has existed, surfers have been obsessed with the search for the perfect wave. 

While this hunt has taken surfers from tropical coastlines to icebergs, these days that search may take place closer to home. That is, at least, the vision presented by developers and boosters in the growing industry of surf pools, spurred by advances in wave-generating technology that have finally created artificial waves surfers actually want to ride.

But there’s a problem: some of these pools are in drought-ridden areas, and face fierce local opposition. At the core of these fights is a question that’s also at the heart of the sport: What is the cost of finding, or now creating, the perfect wave—and who will have to bear it? Read the full story.

—Eileen Guo

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

What happened when 20 comedians got AI to write their routines

AI is good at lots of things: spotting patterns in data, creating fantastical images, and condensing thousands of words into just a few paragraphs. But can it be a useful tool for writing comedy?

New research from Google DeepMind suggests that it can, but only to a very limited extent. It’s an intriguing finding that hints at the ways AI can—and cannot—assist with creative endeavors more generally. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has paused plans to train AI on European user data
Data regulators rebuffed its claims it had “legitimate interests” in doing so. (Ars Technica)
+ Meta claims it sent more than two billion warning notifications. (TechCrunch)
+ How to opt out of Meta’s AI training. (MIT Technology Review)

2 AI assistants and chatbots can’t say who won the 2020 US election
And that’s a major problem as we get closer to the 2024 polls opening. (WP $)
+ Online conspiracy theorists are targeting political abuse researchers. (The Atlantic $)
+ Asking Meta AI how to disable it triggers some interesting conversations. (Insider $)
+ Meta says AI-generated election content is not happening at a “systemic level.” (MIT Technology Review)

3 A smartphone battery maker claims to have made a breakthrough
Japanese firm TDK says its new material could revolutionize its solid-state batteries. (FT $)
+ And it’s not just phones that could stand to benefit. (CNBC)
+ Meet the new batteries unlocking cheaper electric vehicles. (MIT Technology Review)

4 What should AI logos look like?
Simple, abstract, and non-threatening, if these are anything to go by. (TechCrunch)

5 Radiopharmaceuticals fight cancer with molecular precision
Their accuracy can lead to fewer side effects for patients. (Knowable Magazine)

6 UK rail passengers’ emotions were assessed by AI cameras 
Major stations tested surveillance cameras designed to predict travelers’ emotions. (Wired $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)

7 The James Webb Space Telescope has spotted dozens of new supernovae
Dating back to the early universe. (New Scientist $)

8 Rice farming in Vietnam has had a hi-tech makeover
Drones and AI systems are making the laborious work a bit simpler. (Hakai Magazine)
+ How one vineyard is using AI to improve its winemaking. (MIT Technology Review)

9 Meet the researchers working to cool down city parks
Using water misters, cool tubes, and other novel techniques. (Bloomberg $)
+ Here’s how much heat your body can take. (MIT Technology Review)

10 The latest generative AI viral trend? Pregnant male celebrities.
The stupider and weirder the image, the better. (Insider $)

Quote of the day

“It’s really easy to get people addicted to things like social media or mobile games. Learning is really hard.”

—Liz Nagler, senior director of product management at language app Duolingo, tells the Wall Street Journal it’s far trickier to get people to go back to the app every day than you might think.

The big story

The big new idea for making self-driving cars that can go anywhere


May 2022

When Alex Kendall sat in a car on a small road in the British countryside and took his hands off the wheel back in 2016, it was a small step in a new direction—one that a new bunch of startups bet might be the breakthrough that makes driverless cars an everyday reality.

This was the first time that reinforcement learning—an AI technique that trains a neural network to perform a task via trial and error—had been used to teach a car to drive from scratch on a real road. It took less than 20 minutes for the car to learn to stay on the road by itself, Kendall claims.

These startups are betting that smarter, cheaper tech will let them overtake current market leaders. But is this yet more hype from an industry that’s been drinking its own Kool-Aid for years? Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Twin Peaks meets Sylvanian Families: what’s not to love?
+ You heard it here first: Brat is the album of the summer.
+ Chilis can be pretty painful to eat, but we love them anyway. 🌶
+ How people have been crafting artificial eyes for thousands of years.

The cost of building the perfect wave

17 June 2024 at 05:00

For nearly as long as surfing has existed, surfers have been obsessed with the search for the perfect wave. It’s not just a question of size, but also of shape, surface conditions, and duration—ideally in a beautiful natural environment. 

While this hunt has taken surfers from tropical coastlines reachable only by boat to swells breaking off icebergs, these days—as the sport goes mainstream—that search may take place closer to home. That is, at least, the vision presented by developers and boosters in the growing industry of surf pools, spurred by advances in wave-generating technology that have finally created artificial waves surfers actually want to ride. 

Some surf evangelists think these pools will democratize the sport, making it accessible to more communities far from the coasts—while others are simply interested in cashing in. But a years-long fight over a planned surf pool in Thermal, California, shows that for many people who live in the places where they’re being built, the calculus isn’t about surf at all. 


Some 30 miles from Palm Springs, on the southeastern edge of the Coachella Valley desert, Thermal is the future home of the 118-acre private, members-only Thermal Beach Club (TBC). The developers promise over 300 luxury homes with a dazzling array of amenities; the planned centerpiece is a 20-plus-acre artificial lagoon with a 3.8-acre surf pool offering waves up to seven feet high. According to an early version of the website, club memberships will start at $175,000 a year. (TBC’s developers did not respond to multiple emails asking for comment.)

That price tag makes it clear that the club is not meant for locals. Thermal, an unincorporated desert community, currently has a median family income of $32,340. Most of its residents are Latino; many are farmworkers. The community lacks much of the basic infrastructure that serves the western Coachella Valley, including public water service—leaving residents dependent on aging private wells for drinking water. 

Just a few blocks away from the TBC site is the 60-acre Oasis Mobile Home Park. A dilapidated development designed for some 1,500 people in about 300 mobile homes, Oasis has been plagued for decades by a lack of clean drinking water. The park owners have been cited numerous times by the Environmental Protection Agency for providing tap water contaminated with high levels of arsenic, and last year, the US Department of Justice filed a lawsuit against them for violating the Safe Drinking Water Act. Some residents have received assistance to relocate, but many of those who remain rely on weekly state-funded deliveries of bottled water and on the local high school for showers. 

Stephanie Ambriz, a 28-year-old special-needs teacher who grew up near Thermal, recalls feeling “a lot of rage” back in early 2020 when she first heard about plans for the TBC development. Ambriz and other locals organized a campaign against the proposed club, which she says the community doesn’t want and won’t be able to access. What residents do want, she tells me, is drinkable water, affordable housing, and clean air—and to have their concerns heard and taken seriously by local officials. 

Despite the grassroots pushback, which twice led to delays to allow more time for community feedback, the Riverside County Board of Supervisors unanimously approved the plans for the club in October 2020. It was, Ambriz says, “a shock to see that the county is willing to approve these luxurious developments when they’ve ignored community members” for decades. (A Riverside County representative did not respond to specific questions about TBC.) 

The desert may seem like a counterintuitive place to build a water-intensive surf pool, but the Coachella Valley is actually “the very best place to possibly put one of these things,” argues Doug Sheres, the developer behind DSRT Surf, another private pool planned for the area. It is “close to the largest [and] wealthiest surf population in the world,” he says, featuring “360 days a year of surfable weather” and mountain and lake views in “a beautiful resort setting” served by “a very robust aquifer.” 

In addition to the two planned projects, the Palm Springs Surf Club (PSSC) has already opened locally. The trifecta is turning the Coachella Valley into “the North Shore of wave pools,” as one aficionado described it to Surfer magazine. 

The effect is an acute cognitive dissonance—one that I experienced after spending a few recent days crisscrossing the valley and trying out the waves at PSSC. But as odd as this setting may seem, an analysis by MIT Technology Review reveals that the Coachella Valley is not the exception. Of an estimated 162 surf pools that have been built or announced around the world, as tracked by the industry publication Wave Pool Magazine, 54 are in areas considered by the nonprofit World Resources Institute (WRI) to face high or extremely high water stress, meaning that they regularly use a large portion of their available surface water supply annually. Regions in the “extremely high” category consume 80% or more of their water, while those in the “high” category use 40% to 80% of their supply. (Not all of Wave Pool Magazine’s listed pools will be built, but the publication tracks all projects that have been announced. Some have closed and over 60 are currently operational.)

Zoom in on the US and nearly half are in places with high or extremely high water stress, roughly 16 in areas served by the severely drought-stricken Colorado River. The greater Palm Springs area falls under the highest category of water stress, according to Samantha Kuzma, a WRI researcher (though she notes that WRI’s data on surface water does not reflect all water sources, including an area’s access to aquifers, or its water management plan).

Now, as TBC’s surf pool and other planned facilities move forward and contribute to what’s becoming a multibillion-dollar industry with proposed sites on every continent except Antarctica, inland waves are increasingly becoming a flash point for surfers, developers, and local communities. There are at least 29 organized movements in opposition to surf clubs around the world, according to an ongoing survey from a coalition called No to the Surf Park in Canéjan, which includes 35 organizations opposing a park in Bordeaux, France.  

While the specifics vary widely, at the core of all these fights is a question that’s also at the heart of the sport: What is the cost of finding, or now creating, the perfect wave—and who will have to bear it? 


Though wave pools have been around since the late 1800s, the first artificial surfing wave was built in 1969—also in the desert—at Big Surf in Tempe, Arizona. But at that pool and its early successors, surfing was secondary; people who went to those parks were more interested in splashing around, and surfers themselves weren’t too excited by what they had to offer. The manufactured waves were too small and too soft, without the power, shape, or feel of the real thing. 

The tide really turned in 2015, when Kelly Slater, widely considered to be the greatest professional surfer of all time, was filmed riding a six-foot-tall, 50-second barreling wave. As the viral video showed, he was not in the wild but atop a wave generated in a pool in California’s Central Valley, some 100 miles from the coast.

Waves of that height, shape, and duration are a rarity even in the ocean, but “Kelly’s wave,” as it became known, showed that “you can make waves in the pool that are as good as or better than what you get in the ocean,” recalls Sheres, the developer whose company, Beach Street Development, is building mul­tiple surf pools around the country, including DSRT Surf. “That got a lot of folks excited—myself included.” 

In the ocean, a complex combination of factors—including wind direction, tide, and the shape and features of the seafloor—is required to generate a surfable wave. Re-creating them in an artificial environment required years of modeling, precise calculations, and simulations. 

Surf Ranch, Slater’s project in the Central Valley, built a mechanical system in which a 300-ton hydrofoil—which resembles a gigantic metal fin—is pulled along the length of a pool 700 yards long and 70 yards wide by a mechanical device the size of several train cars running on a track. The bottom of the pool is precisely contoured to mimic reefs and other features of the ocean floor; as the water hits those features, its movement creates the 50-second-long barreling wave. Once the foil reaches one end of the pool, it runs backwards, creating another wave that breaks in the opposite direction. 

While the result is impressive, the system is slow, producing just one wave every three to four minutes. 

Around the same time Slater’s team was tinkering with his wave, other companies were developing their own technologies to produce multiple waves, and to do so more rapidly and efficiently—key factors in commercial viability. 

Fundamentally, all the systems create waves by displacing water, but depending on the technology deployed, there are differences in the necessary pool size, the project’s water and energy requirements, the level of customization that’s possible, and the feel of the wave. 

Thomas Lochtefeld, a pioneer in the field, is the CEO of Surf Loch, the company that powers PSSC’s waves.

One demo pool in Australia uses what looks like a giant mechanical doughnut that sends out waves the way a pebble dropped in water sends out ripples. Another proposed plan uses a design that spins out waves from a circular fan—a system that is mobile and can be placed in existing bodies of water. 

Of the two most popular techniques in commercial use, one relies on modular paddles attached to a pier that runs across a pool, which move in precise ways to generate waves. The other is pneumatic technology, which uses compressed air to push water through chambers the size of bathroom stalls, called caissons; the caissons pull in water and then push it back out into the pool. By choosing which modular paddles or caissons move first against the different pool bottoms, and with how much force at a time, operators can create a range of wave patterns. 

Regardless of the technique used, the design and engineering of most modern wave pools are first planned out on a computer. Waves are precisely calculated, designed, simulated, and finally tested in the pool with real surfers before they are set as options on a “wave menu” in proprietary software that surf-pool technologists say offers a theoretically endless number and variety of waves. 

On a Tuesday afternoon in early April, I am the lucky tester at the Palm Springs Surf Club, which uses pneumatic technology, as the team tries out a shoulder-high right-breaking wave. 

I have the pool to myself as the club prepares to reopen; it had closed to rebuild its concrete “beach” just 10 days after its initial launch because the original beach had not been designed to withstand the force of the larger waves that Surf Loch, the club’s wave technology provider, had added to the menu at the last minute. (Weeks after reopening in April, the surf pool closed again as the result of “a third-party equipment supplier’s failure,” according to Thomas Lochtefeld, Surf Loch’s CEO.)

I paddle out and, at staffers’ instructions, take my position a few feet away from the third caisson from the right, which they say is the ideal spot to catch the wave on the shoulder—meaning the unbroken part of the swell closest to its peak. 

The entire experience is surreal: waves that feel like the ocean in an environment that is anything but. 

Wide-angle view of the Palm Springs Surf Club wave pool
An employee test rides a wave, which was first calculated, designed, and simulated on a computer.
SPENCER LOWELL

In some ways, these pneumatic waves are better than what I typically ride around Los Angeles—more powerful, more consistent, and (on this day, at least) uncrowded. But the edge of the pool and the control tower behind it are almost always in my line of sight. And behind me are the PSSC employees (young men, incredible surfers, who keep an eye on my safety and provide much-needed tips) and then, behind them, the snow-capped San Jacinto Mountains. At the far end of the pool, behind the recently rebuilt concrete beach, is a restaurant patio full of diners who I can’t help but imagine are judging my every move. Still, for the few glorious seconds that I ride each wave, I am in the same flow state I experience in the ocean itself.  

Then I fall and sheepishly paddle back to PSSC’s encouraging surfer-employees to restart the whole process. I would be having a lot of fun—if I could just forget my self-consciousness, and the jarring feeling that I shouldn’t be riding waves in the middle of the desert at all.  


Though long inhabited by Cahuilla Indians, the Coachella Valley was sparsely populated until 1876, when the Southern Pacific Railroad added a new line out to the middle of the arid expanse. Shortly after, the first non-native settlers came to the valley and realized that its artesian wells, which flow naturally without the need to be pumped, provided ideal conditions for farming.  

Agricultural production exploded, and by the early 1900s, these once freely producing wells were putting out significantly less, leading residents to look for alternative water sources. In 1918, they created the Coachella Valley Water District (CVWD) to import water from the Colorado River via a series of canals. This water was used to supply the region’s farms and recharge the Coachella Aquifer, the region’s main source of drinking water. 

The author tests a shoulder-high wave at PSSC, where she says the waves were in some ways better than what she rides around Los Angeles.
SPENCER LOWELL

The water imports continue to this day—though the seven states that draw on the river are currently renegotiating their water rights amid a decades-long megadrought in the region. 

The imported water, along with CVWD’s water management plan, has allowed Coachella’s aquifer to maintain relatively steady levels “going back to 1970, even though most development and population has occurred since,” Scott Burritt, a CVWD spokesperson, told MIT Technology Review in an email. 

This has sustained not only agriculture but also tourism in the valley, most notably its world-class—and water-intensive—golf courses. In 2020, the 120 golf courses under the jurisdiction of the CVWD consumed 105,000 acre-feet of water per year (AFY); that’s an average of 875 AFY, or 285 million gallons per year per course. 

Surf pools’ proponents frequently point to the far larger amount of water golf courses consume to argue that opposing the pools on grounds of their water use is misguided. 

PSSC, the first of the area’s three planned surf clubs to open, requires an estimated 3 million gallons per year to fill its pool; the proposed DSRT Surf holds 7 million gallons and estimates that it will use 24 million gallons per year, which includes maintenance and filtration, and accounts for evaporation. TBC’s planned 20-acre recreational lake, 3.8 acres of which will contain the surf pool, will use 51 million gallons per year, according to Riverside County documents. Unlike standard swimming pools, none of these pools need to be drained and refilled annually for maintenance, saving on potential water use. DSRT Surf also boasts about plans to offset its water use by replacing 1 million square feet of grass from an adjacent golf course with drought-tolerant plants. 

a PSSC employee at a control panel overlooking the pool
Pro surfer and PSSC’s full-time “wave curator” Cheyne Magnusson watches test waves from the club’s control tower.
SPENCER LOWELL

With surf parks, “you can see the water,” says Jess Ponting, a cofounder of Surf Park Central, the main industry association, and Stoke, a nonprofit that aims to certify surf and ski resorts—and, now, surf pools—for sustainability. “Even though it’s a fraction of what a golf course is using, it’s right there in your face, so it looks bad.”

But even if it were just an issue of appearance, public perception is important when residents are being urged to reduce their water use, says Mehdi Nemati, an associate professor of environmental economics and policy at the University of California, Riverside. It’s hard to demand such efforts from people who see these pools and luxury developments being built around them, he says. “The questions come: Why do we conserve when there are golf courses or surfing … in the desert?” 

(Burritt, the CVWD representative, notes that the water district “encourages all customers, not just residents, to use water responsibly” and adds that CVWD’s strategic plans project that there should be enough water to serve both the district’s golf courses and its surf pools.)  

Locals opposing these projects, meanwhile, argue that developers are grossly underestimating their water use, and various engineering firms and some county officials have in fact offered projections that differ from the developers’ estimates. Opponents are specifically concerned about the effects of spray, evaporation, and other factors, which increase with higher temperatures, bigger waves, and larger pool sizes. 

As a rough point of reference, Slater’s 14-acre wave pool in Lemoore, California, can lose up to 250,000 gallons of water per day to evaporation, according to Adam Fincham, the engineer who designed the technology. That’s roughly half an Olympic swimming pool.

More fundamentally, critics take issue with even debating whether surf clubs or golf courses are worse. “We push back against all of it,” says Ambriz, who organized opposition to TBC and argues that neither the pool nor an exclusive new golf course in Thermal benefits the local community. Comparing them, she says, obscures greater priorities, like the water needs of households. 

Five surfers sit on their boards in a calm PSSC pool
The PSSC pool requires an estimated 3 million gallons of water per year. On top of a $40 admission fee, a private session there would cost between $3,500 and $5,000 per hour.
SPENCER LOWELL

The “primary beneficiary” of the area’s water, says Mark Johnson, who served as CVWD’s director of engineering from 2004 to 2016, “should be human consumption.”

Studies have shown that just one AFY, or nearly 326,000 gallons, is generally enough to support all household water needs of three California families every year. In Thermal, the gap between the demands of the surf pool and the needs of the community is even more stark: each year for the past three years, nearly 36,000 gallons of water have been delivered, in packages of 16-ounce plastic water bottles, to residents of the Oasis Mobile Home Park—some 108,000 gallons in all. Compare that with the 51 million gallons that will be used annually by TBC’s lake: it would be enough to provide drinking water to its neighbors at Oasis for the next 472 years.

Furthermore, as Nemati notes, “not all water is the same.” CVWD has provided incentives for golf courses to move toward recycled water and replace grass with less water-­intensive landscaping. But while recycled water and even rainwater have been proposed as options for some surf pools elsewhere in the world, including France and Australia, this is unrealistic in Coachella, which receives just three to four inches of rain per year. 

Instead, the Coachella Valley surf pools will depend on a mix of imported water and nonpotable well water from Coachella’s aquifer. 

But any use of the aquifer worries Johnson. Further drawing down the water, especially in an underground aquifer, “can actually create water quality problems,” he says, by concentrating “naturally occurring minerals … like chromium and arsenic.” In other words, TBC could worsen the existing problem of arsenic contamination in local well water. 

When I describe MIT Technology Review’s analysis showing how many surf pools are being built in desert regions, Ponting seems to concede it’s an issue. “If 50% of the surf parks in development are in water-stressed areas,” he says, “then the developers are not thinking about the right things.” 


Before visiting the future site of Thermal Beach Club, I stopped in La Quinta, a wealthy town where, back in 2022, community opposition successfully stopped a fourth surf pool planned for the Coachella Valley. That one was to be developed by the Kelly Slater Wave Company, which was acquired by the World Surf League in 2016. 

Alena Callimanis, a longtime resident who was a member of the community group that helped defeat the project, says that for a year and a half, she and other volunteers often spent close to eight hours a day researching everything they could about surf pools—and how to fight them. “We knew nothing when we started,” she recalls. But the group learned quickly, poring over planning documents, consulting hydrologists, putting together presentations, providing comments at city council hearings, and even conducting their own citizen science experiments to test the developers’ assertions about the light and noise pollution the project could create. (After the council rejected the proposal for the surf club, the developers pivoted to previously approved plans for a golf course. Callimanis’s group also opposes the golf course, raising similar concerns about water use, but since plans have already been approved, she says, there is little they can do to fight back.) 

view across an intersection of a mobile home framed by palm trees
Just a few blocks from the site of the planned Thermal Beach Club is the Oasis Mobile Home Park, which has been plagued for decades by a lack of clean drinking water.
A water pump sits at the corner of farm fields in Thermal, California, where irrigation water is imported from the Colorado River.

It was a different story in Thermal, where three young activists juggled jobs and graduate programs as they tried to mobilize an under-resourced community. “Folks in Thermal lack housing, lack transportation, and they don’t have the ability to take a day off from work to drive up and provide public comment,” says Ambriz. 

But the local pushback did lead to certain promises, including a community benefit payment of $2,300 per luxury housing unit, totaling $749,800. In the meeting approving the project, Riverside County supervisor Manuel Perez called this “unprecedented” and credited the efforts of Ambriz and her peers. (Ambriz remains unconvinced. “None of that has happened,” she says, and payments to the community don’t solve the underlying water issues that the project could exacerbate.) 

That affluent La Quinta managed to keep a surf pool out of its community where working-class Thermal failed is even more jarring in light of industry rhetoric about how surf pools could democratize the sport. For Bryan Dickerson, the editor in chief of Wave Pool Magazine, the collective vision for the future is that instead of “the local YMCA … putting in a skate park, they put in a wave pool.” Other proponents, like Ponting, describe how wave pools can provide surf therapy or opportunities for underrepresented groups. A design firm in New York City, for example, has proposed to the city a plan for an indoor wave pool in a low-income, primarily Black and Latino neighborhood in Queens—for $30 million. 

For its part, PSSC cost an estimated $80 million to build. On top of a $40 general admission fee, a private session like the one I had would cost $3,500 to $5,000 per hour, while a public session would be at least $100 to $200, depending on the surfer’s skill level and the types of waves requested. 

In my two days traversing the 45-mile Coachella Valley, I kept thinking about how this whole area was an artificial oasis made possible only by innovations that changed the very nature of the desert, from the railroad stop that spurred development to the irrigation canals and, later, the recharge basins that stopped the wells from running out. 

In this transformed environment, I can see how the cognitive dissonance of surfing a desert wave begins to shrink, tempting us to believe that technology can once again override the reality of living (or simply playing) in the desert in a warming and drying world. 

But the tension over surf pools shows that when it comes to how we use water, maybe there’s no collective “us” here at all. 

What happened when 20 comedians got AI to write their routines

17 June 2024 at 04:00

AI is good at lots of things: spotting patterns in data, creating fantastical images, and condensing thousands of words into just a few paragraphs. But can it be a useful tool for writing comedy?  

New research suggests that it can, but only to a very limited extent. It’s an intriguing finding that hints at the ways AI can—and cannot—assist with creative endeavors more generally. 

Google DeepMind researchers led by Piotr Mirowski, who is himself an improv comedian in his spare time, studied the experiences of professional comedians who have used AI in their work. They used a combination of surveys and focus groups aimed at measuring how useful AI is at different tasks. 

They found that although popular AI models from OpenAI and Google were effective at simple tasks, like structuring a monologue or producing a rough first draft, they struggled to produce material that was original, stimulating, or—crucially—funny. They presented their findings at the ACM FAccT conference in Rio earlier this month but kept the participants anonymous to avoid any reputational damage (not all comedians want their audience to know they’ve used AI).

The researchers asked 20 professional comedians who already used AI in their artistic process to use a large language model (LLM) like ChatGPT or Google Gemini (then Bard) to generate material that they’d feel comfortable presenting in a comedic context. They could use it to help create new jokes or to rework their existing comedy material. 

If you really want to see some of the jokes the models generated, scroll to the end of the article.

The results were a mixed bag. While the comedians reported that they’d largely enjoyed using AI models to write jokes, they said they didn’t feel particularly proud of the resulting material. 

A few of them said that AI can be useful for tackling a blank page—helping them to quickly get something, anything, written down. One participant likened this to “a vomit draft that I know that I’m going to have to iterate on and improve.” Many of the comedians also remarked on the LLMs’ ability to generate a structure for a comedy sketch, leaving them to flesh out the details.

However, the quality of the LLMs’ comedic material left a lot to be desired. The comedians described the models’ jokes as bland, generic, and boring. One participant compared them to  “cruise ship comedy material from the 1950s, but a bit less racist.” Others felt that the amount of effort just wasn’t worth the reward. “No matter how much I prompt … it’s a very straitlaced, sort of linear approach to comedy,” one comedian said.

AI’s inability to generate high-quality comedic material isn’t exactly surprising. The same safety filters that OpenAI and Google use to prevent models from generating violent or racist responses also hinder them from producing the kind of material that’s common in comedy writing, such as offensive or sexually suggestive jokes and dark humor. Instead, LLMs are forced to rely on what is considered safer source material: the vast numbers of documents, books, blog posts, and other types of internet data they’re trained on. 

“If you make something that has a broad appeal to everyone, it ends up being nobody’s favorite thing,” says Mirowski.

The experiment also exposed the LLMs’ bias. Several participants found that a model would not generate comedy monologues from the perspective of an Asian woman, but it was able to do so from the perspective of a white man. This, they felt, reinforced the status quo while erasing minority groups and their perspectives.

But it’s not just the guardrails and limited training data that prevent LLMs from generating funny responses. So much of humor relies on being surprising and incongruous, which is at odds with how these models work, says Tuhin Chakrabarty, a computer science researcher at Columbia University, who specializes in AI and creativity and wasn’t involved in the study. Creative writing requires deviation from the norm, whereas LLMs can only mimic it.

“Comedy, or any sort of good writing, uses long-term arcs to return to themes, or to surprise an audience. Large language models struggle with that because they’re built to predict one word at a time,” he says. “I’ve tried so much in my own research to prompt AI to be funny or surprising or interesting or creative, but it just doesn’t work.”

Colleen Lavin is a developer and comedian who participated in the study. For a stand-up routine she performed at the Edinburgh Fringe last year, she trained a machine-learning model to recognize laughter and to “heckle” her when it detected she wasn’t getting enough laughs. While she has used generative AI to create promotional material for her shows or to check her writing, she draws the line at using it to actually generate jokes.

“I have a technical day job, and writing is separate from that—it’s almost sacred,” she says. “Why would I take something that I truly enjoy and outsource it to a machine?”

While AI-assisted comedians may be able to work much faster, their ideas won’t be original, because they’ll be limited by the data the models were trained to draw from, says Chakrabarty.

“I think people are going to use these tools for writing scripts, screenplays, and advertisements anyway,” he says. “But true creative and comedic writing is based on experience and vibes. Not an algorithm.”

The AI-generated jokes

For the prompt “Can you write me ten jokes about pickpocketing,” one LLM response was: “I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation!”

For the prompt “Please write jokes about the irony of a projector failing in a live comedy show about AI,” one of the better LLM responses was: “Our projector must’ve misunderstood the concept of ‘AI.’ It thought it meant ‘Absolutely Invisible’ because, well, it’s doing a fantastic job of disappearing tonight!”

The Download: milk beyond cows, and geoengineering’s funding boom

14 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Biotech companies are trying to make milk without cows

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. Here’s how they’re doing it.

—Antonio Regalado

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

This London non-profit is now one of the biggest backers of geoengineering research

A London-based nonprofit is poised to become one of the world’s largest financial backers of solar geoengineering research. It’s just one of a growing number of foundations eager to support scientists exploring whether the world could ease climate change by reflecting away more sunlight.

The uptick in funding will offer scientists in the controversial field far more support than they’ve enjoyed in the past. This will allow them to pursue a wider array of lab work, modeling, and potentially even outdoor experiments that could improve our understanding of the benefits and risks of such interventions. Read the full story.

—James Temple

How to opt out of Meta’s AI training

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. 

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, if you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models, consider opting out. Here’s how to do it.

—Melissa Heikkilä

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US Supreme Court has upheld access to the abortion pill
It’s the most significant ruling since it overturned Roe v Wade in 2022. (FT $)
+ The decision averts a major crisis for reproductive health. (Wired $)
+ But states like Kansas are likely to draw out legal arguments over access. (The Guardian)

2 Amazon is struggling to revamp Alexa
It’s repeatedly missed deadlines and is floundering to catch up with its rivals. (Fortune)
+ OpenAI has stolen a march on Amazon’s AI assistant ambitions. (MIT Technology Review)

3 Clearview AI has struck a deal to end a privacy class action
If your face was scraped as facial recognition data, you may be entitled to a stake in the company. (NYT $)
+ The startup doesn’t have the funds to settle the lawsuit. (Reuters)
+ It was fined millions of dollars for its practices back in 2022. (MIT Technology Review)

4 What’s next for nanotechnology
Molecular machines to kill bacteria aren’t new—but they are promising. (New Yorker $)

5 The Pope is a surprisingly influential voice in the AI safety debate
Pope Francis will address G7 leaders who have gathered today to discuss AI regulation. (WP $)
+ Smaller startups are lobbying to be acquired by bigger fish. (Bloomberg $)
+ What’s next for AI regulation in 2024? (MIT Technology Review)

6 Keeping data centers cool uses colossal amounts of power
Dunking servers in oil could be a far more environmentally friendly method. (IEEE Spectrum)

7 UK voters can back an AI-generated candidate in next month’s election
How very Black Mirror. (NBC News)

8 How to tell if your boss is spying on you
Checking your browser extensions is a good place to start. (WP $)

9 We don’t know much about how the human body reacts to space
But with the rise of space tourism, scientists are hoping to find out. (TechCrunch)
+ This startup wants to find out if humans can have babies in space. (MIT Technology Review)

10 This platform is a who’s-who of rising internet stars
Famous Birthdays is basically a directory of hugely successful teenagers you’ve never heard of. (Economist $)

Quote of the day

“If it’s somebody on the right, I reward them. If it’s somebody on the left, I punish them.”

—Christopher Blair, a self-confessed liberal troll and social justice warrior, explains to the New York Times the methods he uses to spread fake news on Facebook.

The big story

The quest to build wildfire-resistant homes

April 2023

With each devastating wildfire in the US West, officials consider new methods or regulations that might save homes or lives the next time.

In the parts of California where the hillsides meet human development, and where the state has suffered recurring seasonal fire tragedies, that search for new means of survival has especially high stakes.

Many of these methods are low cost and low tech, but no less truly innovative. In fact, the hardest part to tackle may not be materials engineering, but social change. Read the full story.

—Susie Cagle

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Why AI-generated album covers can’t hold a candle to human-made art.
+ This chicken caesar salad recipe looks pretty great.
+ Sign me up for a trip to Spain’s unspoiled Ribeira Sacra region!
+ How to nap like a pro 😴

These board games want you to beat climate change

14 June 2024 at 05:00

It’s game night, and I’m crossing my fingers, hoping for a hurricane. 

I roll the die and it clatters across the board, tumbling to a stop to reveal a tiny icon of a tree stump. Bad news: I just triggered deforestation in the Amazon. That seals it. I failed to stop climate change—at least this board-game representation of it.

The urgent need to address climate change might seem like unlikely fodder for a fun evening. But a growing number of games are attempting to take on the topic, including a version of the bestseller Catan released this summer.

As a climate reporter, I was curious about whether games could, even abstractly, represent the challenge of the climate crisis. Perhaps more crucially, could they possibly be any fun? 

My investigation started with Daybreak, a board game released in late 2023 by a team that includes the creator of Pandemic (infectious disease—another famously light topic for a game). Daybreak is a cooperative game where players work together to cut emissions and survive disasters. The group either wins or loses as a whole.

When I opened the box, it was immediately clear that this wouldn’t be for the faint of heart. There are hundreds of tiny cardboard and wooden pieces, three different card decks, and a surprisingly thick rule book. Setting it up, learning the rules, and playing for the first time took over two hours.

Components of the game Daybreak, including play cards depicting Special Drawing Rights, Clean Electricity Plants, and Reforestation
Daybreak, a cooperative board game about stopping climate change.
COURTESY OF CMYK

Daybreak is full of details, and I was struck by how many of them it gets right. Not only are there cards representing everything from walkable cities to methane removal, but each features a QR code players can use to learn more.

In each turn, players deploy technologies or enact policies to cut climate pollution. Just as in real life, emissions have negative effects. Winning requires slashing emissions to net zero (the point where whatever’s emitted can be soaked up by forests, oceans, or direct air capture). But there are multiple ways for the whole group to lose, including letting the global average temperature increase by 2 °C or simply running out of turns.

In an embarrassing turn of events for someone who spends most of her waking hours thinking about climate change, nearly every round of Daybreak I played ended in failure. Adding insult to injury, I’m not entirely sure that I was having fun. Sure, the abstract puzzle was engaging and challenging, and after a loss, I’d be checking the clock, seeing if there was time to play again. But once all the pieces were back in the box, I went to bed obsessing about heat waves and fossil-fuel disinformation. The game was perhaps representing climate change a little bit too well.

I wondered if a new edition of a classic would fare better. Catan, formerly Settlers of Catan, and its related games have sold over 45 million copies worldwide since the original’s release in 1995. The game’s object is to build roads and settlements, setting up a civilization. 

In late 2023, Catan Studios announced that it would be releasing a version of its game called New Energies, focused on climate change. The new edition, out this summer, preserves the same central premise as the original. But this time, players will also construct power plants, generating energy with either fossil fuels or renewables. Fossil fuels are cheaper and allow for quicker expansion, but they lead to pollution, which can harm players’ societies and even end the game early.

Before I got my hands on the game, I spoke with one of its creators, Benjamin Teuber, who developed the game with his late father, Klaus Teuber, the mastermind behind the original Catan.

To Teuber, climate change is a more natural fit for a game than one might expect. “We believe that a good game is always around a dilemma,” he told me. The key is to simplify the problem sufficiently, a challenge that took the team dozens of iterations while developing New Energies. But he also thinks there’s a need to be at least somewhat encouraging. “While we have a severe topic, or maybe even especially because we have a severe topic, you can’t scare off the people by making them just have a shitty evening,” Teuber says.

In New Energies, the first to gain 10 points wins, regardless of how polluting that player’s individual energy supply is. But if players collectively build too many fossil-fuel plants and pollution gets too high, the game ends early, in which case whoever has done the most work to clean up their own energy supply is named the winner.

That’s what happened the first time I tested out the game. While I had been lagging in points, I ended up taking the win, because I had built more renewable power plants than my competitors.

This relatively rosy ending had me conflicted. On one hand, I was delighted, even if it felt like a consolation prize. 

But I found myself fretting over the messages that New Energies will send to players. A simple game that crowns a winner may be more playable, but it doesn’t represent how complicated the climate crisis is, or how urgently we need to address it. 

I’m glad climate change has a spot on my game shelf, and I hope these and other games find their audiences and get people thinking about the issues. But I’ll understand the impulse to reach for other options when game night rolls around, because I can’t help but dwell on the fact that in the real world, we won’t get to reset the pieces and try again.

Biotech companies are trying to make milk without cows

14 June 2024 at 05:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. They’re doing it by engineering yeasts and plants with bovine genes so they make the key proteins responsible for milk’s color, satisfying taste, and nutritional punch.

The proteins they’re copying are casein, a floppy polymer that’s the most abundant protein in milk and is what makes pizza cheese stretch, and whey, a nutritious combo of essential amino acids that’s often used in energy powders.

It’s part of a larger trend of replacing animals with ingredients grown in labs, steel vessels, or plant crops. Think of the Impossible burger, the veggie patty made mouthwatering with the addition of heme, a component of blood that’s produced in the roots of genetically modified soybeans.

One of the milk innovators is Remilk, an Israeli startup founded in 2019, which has engineered yeast so it will produce beta-lactoglobulin (the main component of whey). Company cofounder Ori Cohavi says a single biotech factory of bubbling yeast vats feeding on sugar could in theory “replace 50,000 to 100,000 cows.” 

Remilk has been making trial batches and is testing ways to formulate the protein with plant oils and sugar to make spreadable cheese, ice cream, and milk drinks. So yes, we’re talking “processed” food—one partner is a local Coca-Cola bottler, and advising the company are former executives of Nestlé, Danone, and PepsiCo.

But regular milk isn’t exactly so natural either. At milking time, animals stand inside elaborate robots, and it looks for all the world as if they’re being abducted by aliens. “The notion of a cow standing in some nice green scenery is very far from how we get our milk,” says Cohavi. And there are environmental effects: cattle burp methane, a potent greenhouse gas, and a lactating cow needs to drink around 40 gallons of water a day.

“There are hundreds of millions of dairy cows on the planet producing greenhouse waste, using a lot of water and land,” says Cohavi. “It can’t be the best way to produce food.”  

For biotech ventures trying to displace milk, the big challenge will be keeping their own costs of production low enough to compete with cows. Dairies get government protections and subsidies, and they don’t only make milk. Dairy cows are eventually turned into gelatin, McDonald’s burgers, and the leather seats of your Range Rover. Not much goes to waste.

At Alpine Bio, a biotech company in San Francisco (also known as Nobell Foods), researchers have engineered soybeans to produce casein. While not yet cleared for sale, the beans are already being grown on USDA-sanctioned test plots in the Midwest, says Alpine’s CEO, Magi Richani.

Richani chose soybeans because they’re already a major commodity and the cheapest source of protein around. “We are working with farmers who are already growing soybeans for animal feed,” she says. “And we are saying, ‘Hey, you can grow this to feed humans.’ If you want to compete with a commodity system, you have to have a commodity crop.”

Alpine intends to crush the beans, extract the protein, and—much like Remilk—sell the ingredient to larger food companies.

Everyone agrees that cow’s milk will be difficult to displace. It holds a special place in the human psyche, and we owe civilization itself, in part, to domesticated animals. In fact, they’ve left their mark in our genes, with many of us carrying DNA mutations that make cow’s milk easier to digest.

But that’s why it might be time for the next technological step, says Richani. “We raise 60 billion animals for food every year, and that is insane. We took it too far, and we need options,” she says. “We need options that are better for the environment, that overcome the use of antibiotics, and that overcome the disease risk.”

It’s not clear yet whether the bird flu outbreak on dairy farms is a big danger to humans. But making milk without cows would definitely cut the risk that an animal virus will cause a new pandemic. As Richani says: “Soybeans don’t transmit diseases to humans.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Hungry for more from the frontiers of fromage? In the Build issue of our print magazine, Andrew Rosenblum tasted a yummy brie made only from plants. Harder to swallow was the claim by developer Climax Foods that its cheese was designed using artificial intelligence.

The idea of using yeast to create food ingredients, chemicals, and even fuel via fermentation is one of the dreams of synthetic biology. But it’s not easy. In 2021, we raised questions about high-flying startup Ginkgo Bioworks. This week its stock hit an all-time low of $0.49 per share as the company struggles to make … well, anything.

This spring, I traveled to Florida to watch attempts to create life in a totally new way: using a synthetic embryo made in a lab. The action involved cattle at the animal science department of the University of Florida, Gainesville.


From around the web

How many human bird flu cases are there? No one knows, because there’s barely any testing. Scientists warn we’re flying blind as US dairy farms struggle with an outbreak. (NBC)  

Moderna, one of the companies behind the covid-19 shots, is seeing early success with a cancer vaccine. It uses the same basic technology: gene messages packed into nanoparticles. (Nature)

It’s the covid-19 theory that won’t go away. This week the New York Times published an op-ed arguing that the virus was the result of a lab accident. We previously profiled the author, Alina Chan, who is a scientist with the Broad Institute. (NYTimes)

Sales of potent weight loss drugs, like Ozempic, are booming. But it’s not just humans who are overweight. Now the pet care industry is dreaming of treating chubby cats and dogs, too. (Bloomberg)

This London non-profit is now one of the biggest backers of geoengineering research

14 June 2024 at 05:00

A London-based nonprofit is poised to become one of the world’s largest financial backers of solar geoengineering research. And it’s just one of a growing number of foundations eager to support scientists exploring whether the world could ease climate change by reflecting away more sunlight.

Quadrature Climate Foundation, established in 2019 and funded through the proceeds of the investment fund Quadrature Capital, plans to provide $40 million for work in this field over the next three years, Greg De Temmerman, the organization’s chief science officer, told MIT Technology Review.

That’s a big number for this subject—double what all foundations and wealthy individuals provided from 2008 through 2018 and roughly on par with what the US government has offered to date. 

“We think we can have a very strong impact in accelerating research, making sure it’s happening, and trying to unlock some public money at some point,” De Temmerman says.

Other nonprofits are set to provide tens of millions of dollars’ worth of additional grants to solar geoengineering research or related government advocacy work in the coming months and years. The uptick in funding will offer scientists in the controversial field far more support than they’ve enjoyed in the past and allow them to pursue a wider array of lab work, modeling, and potentially even outdoor experiments that could improve our understanding of the benefits and risks of such interventions. 

“It just feels like a new world, really different from last year,” says David Keith, a prominent geoengineering researcher and founding faculty director of the Climate Systems Engineering Initiative at the University of Chicago.

Other nonprofits that have recently disclosed funding for solar geoengineering research or government advocacy, or announced plans to provide it, include the Simons Foundation, the Environmental Defense Fund, and the Bernard and Anne Spitzer Charitable Trust. 

In addition, Meta’s former chief technology officer, Mike Schroepfer, told MIT Technology Review he is spinning out a new nonprofit, Outlier Projects. He says it will provide funding to solar geoengineering research as well as to work on ocean-based carbon removal and efforts to stabilize rapidly melting glaciers.

Outlier has already issued grants for the first category to the Environmental Defense Fund, Keith’s program at the University of Chicago, and two groups working to support research and engagement on the subject in the poorer, hotter parts of the world: the Degrees Initiative and the Alliance for Just Deliberation on Solar Geoengineering.

Researchers say that the rising dangers of climate change, the lack of progress on cutting emissions, and the relatively small amount of government research funding to date are fueling the growing support for the field.

“A lot of people are recognizing the obvious,” says Douglas MacMartin, a senior research associate in mechanical and aerospace engineering at Cornell, who focuses on geoengineering. “We’re not in a good position with regard to mitigation—and we haven’t spent enough money on research to be able to support good, wise decisions on solar geoengineering.”

Scientists are exploring a variety of potential methods of reflecting away more sunlight, including injecting certain particles into the stratosphere to mimic the cooling effect of volcanic eruptions, spraying salt toward marine clouds to make them brighter, or sprinkling fine dust-like material into the sky to break up heat-trapping cirrus clouds.

Critics contend that neither nonprofits nor scientists should support studying any of these methods, arguing that raising the possibility of such interventions eases pressure to cut emissions and creates a “slippery slope” toward deploying the technology. Even some who support more research fear that funding it through private sources, particularly from wealthy individuals who made their fortunes in tech and finance, may allow studies to move forward without appropriate oversight and taint public perceptions of the field.

The sense that we’re “putting the climate system in the care of people who have disrupted the media and information ecosystems, or disrupted finance, in the past” could undermine public trust in a scientific realm that many already find unsettling, says Holly Buck, an assistant professor at the University at Buffalo and author of After Geoengineering.

‘Unlocking solutions’

One of Quadrature’s first solar geoengineering grants went to the University of Washington’s Marine Cloud Brightening Program. In early April, that research group made headlines for beginning, and then being forced to halt, small-scale outdoor experiments on a decommissioned aircraft carrier sitting off the coast of Alameda, California. The effort entailed spraying a mist of small sea salt particles into the air. 

Quadrature was also one of the donors to a $20.5 million fund for the Washington, DC, nonprofit SilverLining, which was announced in early May. The group pools and distributes grants to solar geoengineering researchers around the world and has pushed for greater government support and funding for the field. The new fund will support that policy advocacy work as well as efforts to “promote equitable participation by all countries,” Kelly Wanser, executive director of SilverLining, said in an email.

She added that it’s crucial to accelerate solar geoengineering research because of the rising dangers of climate change, including the risk of passing “catastrophic tipping points.”

“Current climate projections may even underestimate risks, particularly to vulnerable populations, highlighting the urgent need to improve risk prediction and expand response strategies,” she wrote.

Quadrature has also issued grants for related work to Colorado State University, the University of Exeter, and the Geoengineering Model Intercomparison Project, an effort to run the same set of modeling experiments across an array of climate models. 

The foundation intends to direct its solar geoengineering funding to advance efforts in two main areas: academic research that could improve understanding of various approaches, and work to develop global oversight structures “to enable decision-making on [solar radiation modification] that is transparent, equitable, and science based.”

“We want to empower people to actually make informed decisions at some point,” De Temmerman says, stressing the particular importance of ensuring that people in the Global South are actively involved in such determinations. 

He says that Quadrature is not advocating for specific outcomes, taking no position on whether or not to ultimately use such tools. It also won’t support for-profit startups. 

In an emailed response to questions, he stressed that the funding for solar geoengineering is a tiny part of the foundation’s overall mission, representing just 5% of its $930 million portfolio. The lion’s share has gone to accelerate efforts to cut greenhouse-gas pollution, remove it from the atmosphere, and help vulnerable communities “respond and adapt to climate change to minimize harm.”

Billionaires Greg Skinner and Suneil Setiya founded both the Quadrature investment fund as well as the foundation. The nonprofit’s stated mission is unlocking solutions to the climate crisis, which it describes as “the most urgent challenge of our time.” But the group, which has 26 employees, has faced recent criticism for its benefactors’ stakes in oil and gas companies. Last summer, the Guardian reported that Quadrature Capital held tens of millions of dollars in investments in dozens of fossil-fuel companies, including ConocoPhillips and Cheniere Energy.

In response to a question about the potential for privately funded foundations to steer research findings in self-interested ways, or to create the perception that the results might be so influenced, De Temmerman stated: “We are completely transparent in our funding, ensuring it is used solely for public benefit and not for private gain.”

More foundations, more funds 

To be sure, a number of wealthy individuals and foundations have been providing funds for years to solar geoengineering research or policy work, or groups that collect funds to do so.

A 2021 paper highlighted contributions from a number of wealthy individuals, with a high concentration from the tech sector, including Microsoft cofounder Bill Gates, Facebook cofounder Dustin Moskovitz, Facebook alum and venture capitalist Matt Cohler, former Google executive (and extreme skydiver) Alan Eustace, and tech and climate solutions investors Chris and Crystal Sacca. It noted a number of nonprofits providing grants to the field as well, including the Hewlett Foundation, the Alfred P. Sloan Foundation, and the Blue Marble Fund.

But despite the backing of those high-net-worth individuals, the dollar figures have been low. From 2008 through 2018, total private funding only reached about $20 million, while government funding just topped $30 million. 

The spending pace is now picking up, though, as new players move in.

The Simons Foundation previously announced it would provide $50 million to solar geoengineering research over a five-year period. The New York–based nonprofit invited researchers to apply for grants of up to $500,000, adding that it “strongly” encouraged scientists in the Global South to do so. 

The organization is mostly supporting modeling and lab studies. It said it would not fund social science work or field experiments that would release particles into the environment. Proposals for such experiments have sparked heavy public criticism in the past.

Simons recently announced a handful of initial awards to researchers at Harvard, Princeton, ETH Zurich, the Indian Institute of Tropical Meteorology, the US National Center for Atmospheric Research, and elsewhere.

“For global warming, we will need as many tools in the toolbox as possible,” says David Spergel, president of the Simons Foundation. 

“This was an area where there was a lot of basic science to do, and a lot of things we didn’t understand,” he adds. “So we wanted to fund the basic science.”

In January, the Environmental Defense Fund hosted a meeting at its San Francisco headquarters to discuss the guardrails that should guide research on solar geoengineering, as first reported by Politico. EDF had already provided some support to the Solar Radiation Management Governance Initiative, a partnership with the Royal Society and other groups set up to “ensure that any geoengineering research that goes ahead—inside or outside the laboratory—is conducted in a manner that is responsible, transparent, and environmentally sound.” (It later evolved into the Degrees Initiative.)

But EDF has now moved beyond that work and is “in the planning stages of starting a research and policy initiative on [solar radiation modification],” said Lisa Dilling, associate chief scientist at the environmental nonprofit, in an email. That program will include regranting, which means raising funds from other groups or individuals and distributing them to selected recipients, and advocating for more public funding, she says. 

Outlier also provided a grant to a new nonprofit, Reflective. This organization is developing a road map to prioritize research needs and pooling philanthropic funding to accelerate work in the most urgent areas, says its founder, Dakota Gruener. 

Gruener was previously the executive director of ID2020, a nonprofit alliance that develops digital identification systems. Cornell’s MacMartin is a scientific advisor to the new nonprofit and will serve as the chair of the scientific advisory board.

Government funding is also slowly increasing. 

The US government started a solar geoengineering research program in 2019, funded through the National Oceanic and Atmospheric Administration, that currently provides about $11 million a year.

In February, the UK’s Natural Environment Research Council announced a £10.5 million, five-year research program. In addition, the UK’s Advanced Research and Invention Agency has said it’s exploring and soliciting input for a research program in climate and weather engineering.

Funding has not yet been allocated, but the agency’s programs typically provide around £50 million.

‘When, not if’

More funding is generally welcome news for researchers who hope to learn more about the potential of solar geoengineering. Many argue that it’s crucial to study the subject because the technology may offer ways to reduce death and suffering, and prevent the loss of species and the collapse of ecosystems. Some also stress it’s crucial to learn what impact these interventions might have and how these tools could be appropriately regulated, because nations may be tempted to implement them unilaterally in the face of extreme climate crises.

It’s likely a question of “when, not if,” and we should “act and research accordingly,” says Gernot Wagner, a climate economist at Columbia Business School, who was previously the executive director of Harvard’s Solar Geoengineering Research Program. “In many ways the time has come to take solar geoengineering much more seriously.”

In 2021, a National Academies report recommended that the US government create a solar geoengineering research program, equipped with $100 million to $200 million in funding over five years.

But there are differences between coordinated government-funded research programs, which have established oversight bodies to consider the merit, ethics, and appropriate transparency of proposed research, and a number of nonprofits with different missions providing funding to the teams they choose. 

To the degree that they create oversight processes that don’t meet the same standards, it could affect the type of science that’s done, the level of public notice provided, and the pressures that researchers feel to deliver certain results, says Duncan McLaren, a climate intervention fellow at the University of California, Los Angeles.

“You’re not going to be too keen on producing something that seems contrary to what you thought the grant maker was looking for,” he says, adding later: “Poorly governed research could easily give overly optimistic answers about what [solar geoengineering] could do, and what its side effects may or may not be.”

Whatever the motivations of individual donors, Buck fears that the concentration of money coming from high tech and finance could also create optics issues, undermining faith in research and researchers and possibly slowing progress in the field.

“A lot of this is going to backfire because it’s going to appear to people as Silicon Valley tech charging in and breaking things,” she says. 

Cloud controversy

Some of the concerns about privately funded work in this area are already being tested.

By most accounts, the Alameda experiment in marine cloud brightening that Quadrature backed was an innocuous basic-science project, which would not have actually altered clouds. But the team stirred up controversy by moving ahead without wide public notice.

City officials quickly halted the experiments, and earlier this month the city council voted unanimously to shut the project down.

Alameda mayor Marilyn Ezzy Ashcraft has complained that city staffers received only vague notice about the project up front. They were then inundated with calls from residents who had heard about it in the media and were concerned about the health implications, she said, according to CBS News.

In response to a question about the criticism, SilverLining’s Wanser said in an email: “We worked with the lease-holder, the USS Hornet, on the process for notifying the city of Alameda. The city staff then engaged experts to independently evaluate the health and environmental safety of the … studies, who found that they did not pose any environmental or health risks to the community.”

Wanser, who is a principal of the Marine Cloud Brightening Program, stressed they’ve also received offers of support from local residents and businesses.

“We think that the availability of data and information on the nature of the studies, and its evaluation by local officials, was valuable in helping people consider it in an informed way for themselves,” she added.

Some observers were also concerned that the research team said it selected its own six-member board to review the proposed project. That differs from a common practice with publicly funded scientific experiments, which often include a double-blind review process, in which neither the researchers nor the reviewers know each other’s names. The concern with breaking from that approach is that scientists could select outside researchers who they believe are likely to greenlight their proposals, and the reviewers may feel pressure to provide more favorable feedback than they might offer anonymously.

Wanser stressed that the team picked “distinguished researchers in the specialized field.”

“There are different approaches for different programs, and in this case, the levels of expertise and transparency were important features,” she added. “They have not received any criticism of the design of the studies themselves, which speaks to their robustness and their value.”

‘Transparent and responsible’

Solar geoengineering researchers often say that they too would prefer public funding, all things being equal. But they stress that those sources are still limited and it’s important to move the field forward in the meantime, so long as there are appropriate standards in place.

“As long as there’s clear transparency about funding sources, [and] there’s no direct influence on the research by the donors, I don’t precisely see what the problem is,” MacMartin says. 

Several nonprofits emerging or moving into this space said that they are working to create responsible oversight structures and rules.

Gruener says that Reflective won’t accept anonymous donations or contributions from people whose wealth comes mostly from fossil fuels. She adds that all donors will be disclosed, that they won’t have any say over the scientific direction of the organization or its chosen research teams, and that they can’t sit on the organization’s board. 

“We think transparency is the only way to build trust, and we’re trying to ensure that our governance structure, our processes, and the outcomes of our research are all public, understandable, and readily available,” she says.

In a statement, Outlier said it’s also in favor of more publicly supported work: “It’s essential for governments to become the leading funders and coordinators of research in these areas.” It added that it’s supporting groups working to accelerate “government leadership” on the subject, including through its grant to EDF. 

Quadrature’s De Temmerman stresses the importance of public research programs as well, noting that the nonprofit hopes to catalyze much more such funding through its support for government advocacy work. 

“We are here to push at the beginning and then at some point just let some other forms of capital actually come,” he says.

How to opt out of Meta’s AI training

14 June 2024 at 04:57

MIT Technology Review’s How To series helps you get things done. 

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. Even if you don’t use any of Meta’s platforms, it can still scrape data such as photos of you if someone else posts them.

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine. 

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.” 

What that means for US users

Users in the US or other countries without national data privacy laws don’t have any foolproof ways to prevent Meta from using their data to train AI, which has likely already been used for such purposes. Meta does not have an opt-out feature for people living in these places. 

A spokesperson for Meta says it does not use the content of people’s private messages to each other to train AI. However, public social media posts are seen as fair game and can be hoovered up into AI training data sets by anyone. Users who don’t want that can set their account settings to private to minimize the risk. 

The company has built in-platform tools that allow people to delete their personal information from chats with Meta AI, the spokesperson says.

How users in Europe and the UK can opt out 

Users in the European Union and the UK, which are protected by strict data protection regimes, have the right to object to their data being scraped, so they can opt out more easily. 

If you have a Facebook account:

1. Log in to your account. You can access the new privacy policy by following this link. At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

Alternatively, you can click on your account icon at the top right-hand corner. Select “Settings and privacy” and then “Privacy center.” On the left-hand side you will see a drop-down menu labeled “How Meta uses information for generative AI models and features.” Click on that, and scroll down. Then click on “Right to object.” 

2. Fill in the form with your information. The form requires you to explain how Meta’s data processing affects you. I was successful in my request by simply stating that I wished to exercise my right under data protection law to object to my personal data being processed. You will likely have to confirm your email address. 

3. You should soon receive both an email and a notification on your Facebook account confirming whether your request has been successful. I received mine a minute after submitting the request.

If you have an Instagram account: 

1. Log in to your account. Go to your profile page, and click on the three lines at the top-right corner. Click on “Settings and privacy.”

2. Scroll down to the “More info and support” section, and click “About.” Then click on “Privacy policy.” At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

3. Repeat steps 2 and 3 of the Facebook instructions above.
