
Firefly is building fast and breaking things on path to a reusable rocket

1 July 2024 at 20:29
A test version of Firefly's Miranda engine fires up on a test stand in Briggs, Texas. (credit: Firefly Aerospace)

BRIGGS, Texas—The new medium-lift rocket under development by Firefly Aerospace and Northrop Grumman will eventually incorporate a recoverable booster that will return to its launch site in Virginia for reuse.

Firefly has previously suggested rocket reuse is on the roadmap for the new rocket—known, for now, only as the Medium Launch Vehicle (MLV)—but officials revealed new details of the plan during a recent visit by Ars to Firefly's rocket factory in rural Central Texas.

"Northrop and Firefly have a similar perspective and that is, for that class of rocket, reusability is a requirement for a bunch of reasons," said Bill Weber, Firefly's CEO. "Economically, it becomes an advantage because we don't have to go build additional floor space... Similarly, the pricing structure for customers starts to get super competitive, which we absolutely love, and we'll be right in the middle of."


Runway’s latest AI video generator brings giant cotton candy monsters to life

18 June 2024 at 17:41
Screen capture of a Runway Gen-3 Alpha video generated with the prompt "A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them." (credit: Runway)

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that's still under development, but it appears to create video of similar quality to OpenAI's Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts, with results ranging from realistic humans to surrealistic monsters stomping across the countryside.

Unlike Runway's previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora's full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it's highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway's improvement in visual fidelity over the past year is difficult to ignore.

