OpenAI launches GPT-4o mini, which will replace GPT-3.5 in ChatGPT

A glowing OpenAI logo on a blue background. (credit: Benj Edwards)

On Thursday, OpenAI announced the launch of GPT-4o mini, a new, smaller version of its latest GPT-4o AI language model that will replace GPT-3.5 Turbo in ChatGPT, CNBC and Bloomberg report. It will be available today for free users and those with ChatGPT Plus or Team subscriptions and will come to ChatGPT Enterprise next week.

GPT-4o mini will reportedly be multimodal like its big brother (which launched in May), with image inputs currently enabled in the API. OpenAI says that in the future, GPT-4o mini will be able to interpret images, text, and audio, and will also be able to generate images.
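
As a rough illustration of what "image inputs in the API" looks like in practice, here is a minimal sketch using OpenAI's Python SDK. The prompt and image URL are placeholders, and the snippet simply follows the SDK's standard chat-completions pattern with the announced "gpt-4o-mini" model ID; it is not code from OpenAI's announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to describe an image supplied by URL (placeholder URL).
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```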

GPT-4o mini supports 128K tokens of input context and has a knowledge cutoff of October 2023. It's also very inexpensive as an API product, costing 60 percent less than GPT-3.5 Turbo at 15 cents per million input tokens and 60 cents per million output tokens. Tokens are fragments of data that AI language models use to process information.
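
To put those per-token prices in concrete terms, here is a back-of-the-envelope cost estimate based only on the figures quoted above; the token counts are invented example values, and an actual bill depends on how the tokenizer splits your text.

```python
# Prices quoted above: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PRICE_PER_M = 0.15   # USD per million input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in US dollars."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt that produces a 500-token reply.
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000600
```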

Trump allies want to “Make America First in AI” with sweeping executive order

Former US President Donald Trump during a campaign event at Trump National Doral Golf Club in Miami, Florida, US, on Tuesday, July 9, 2024. (credit: Getty Images)

Allies of former President Donald Trump have drafted a sweeping AI executive order that aims to boost military technology and reduce regulations on AI development, The Washington Post reported. The plan, which includes a section titled "Make America First in AI," signals a dramatic potential shift in AI policy if Trump returns to the White House in 2025.

The draft order, obtained by the Post, outlines a series of "Manhattan Projects" to advance military AI capabilities. It calls for an immediate review of what it terms "unnecessary and burdensome regulations" on AI development. The approach marks a contrast to the Biden administration's executive order from last October, which imposed new safety testing requirements on advanced AI systems.

The proposed order suggests creating "industry-led" agencies to evaluate AI models and safeguard systems from foreign threats. This approach would likely benefit tech companies already collaborating with the Pentagon on AI projects, such as Palantir, Anduril, and Scale AI. Executives from these firms have reportedly expressed support for Trump.

Former OpenAI researcher’s new company will teach you how to build an LLM

File photo of children in a classroom listening to a robot. (credit: Getty Images)

On Tuesday, former OpenAI researcher Andrej Karpathy announced the formation of a new AI learning platform called Eureka Labs. The venture aims to create an "AI native" educational experience, with its first offering focused on teaching students how to build their own large language model (LLM).

"It's still early days but I wanted to announce the company so that I can build publicly instead of keeping a secret that isn't," Karpathy wrote on X.

While the idea of using AI in education isn't particularly new, Karpathy hopes to pair expert-designed course materials with an AI-powered teaching assistant based on an LLM, aiming to provide personalized guidance at scale. The combination is intended to make high-quality education accessible to a global audience.

Microsoft CTO Kevin Scott thinks LLM “scaling laws” will hold despite criticism

Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023, in Dana Point, California. (credit: Getty Images)

During an interview with Sequoia Capital's Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) "scaling laws" will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

"Despite what other people think, we're not at diminishing marginal returns on scale-up," Scott said. "And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them."

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs.
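
For readers who want the formal statement: the 2020 work in question (Kaplan et al., "Scaling Laws for Neural Language Models") fits test loss to power laws in parameter count N, dataset size D, and compute C. The form below is a standard summary of that result from memory, not something quoted in this article; N_c, D_c, and C_c are fitted constants, and the fitted exponents are on the order of 0.05 to 0.1.

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```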

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

Illustration of a robot with many arms. (credit: Getty Images)

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical AI system that can perform novel tasks like a human without specialized training—is currently the company's primary goal. The pursuit of technology that could replace humans at most intellectual work drives much of the enduring hype around the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO's public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

First “Miss AI” contest sparks ire for pushing unrealistic beauty standards

An AI-generated image of "Miss AI" award winner "Kenza Layli" (left) and an unidentified AI-generated woman beside her. (credit: Kenza.Layli / Instagram)

An influencer platform called Fanvue recently announced the results of its first "Miss AI" pageant, which sought to judge AI-generated social media influencers and also doubled as a convenient publicity stunt. The "winner" is a fictional Instagram influencer from Morocco named Kenza Layli with more than 200,000 followers, but the pageant is already attracting criticism from women in the AI space.

"Yet another stepping stone on the road to objectifying women with AI," Hugging Face AI researcher Dr. Sasha Luccioni told Ars Technica. "As a woman working in this field, I'm unsurprised but disappointed."

Instances of AI-generated Instagram influencers have reportedly been on the rise since freely available image synthesis tools like Stable Diffusion have made it easy to generate an unlimited quantity of provocative images of women. And techniques like Dreambooth allow fine-tuning an AI model on a specific subject (including an AI-generated one) to place it in different settings.

Intuit’s AI gamble: Mass layoff of 1,800 paired with hiring spree

Signage for financial software company Intuit at the company's headquarters in the Silicon Valley town of Mountain View, California, August 24, 2016. (credit: Getty Images)

On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

"As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime," wrote Goodarzi in a blog post on Intuit's website. "This is truly an extraordinary time—AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist."

The CEO says Intuit is in a position of strength and that the layoffs are not about cutting costs; rather, they will allow the company to "allocate additional investments to our most critical areas to support our customers and drive growth." With the new hires, the company expects its overall headcount to grow in its 2025 fiscal year.

OpenAI board shake-up: Microsoft out, Apple backs away amid AI partnership scrutiny

The OpenAI logo superimposed over a Microsoft logo background. (credit: Benj Edwards / OpenAI / Microsoft)

Microsoft has withdrawn from its non-voting observer role on OpenAI's board, while Apple has opted not to take a similar position, Axios and the Financial Times report. The ChatGPT maker plans to update its business partners and investors through regular meetings instead of board representation. The development comes as regulators in the EU and US increase their scrutiny of Big Tech's investments in AI startups due to concerns about stifling competition.

Axios reports that on Tuesday, Microsoft's deputy general counsel, Keith Dolliver, sent a letter to OpenAI stating that the tech giant's board role was "no longer necessary" given the "significant progress" made by the newly formed board. Microsoft accepted a non-voting position on OpenAI's board in November following the ouster and reinstatement of OpenAI CEO Sam Altman.

Last week, Bloomberg reported that Apple's Phil Schiller, who leads the App Store and Apple Events, might join OpenAI's board in an observer role as part of an AI deal. However, the Financial Times now reports that Apple will not take up such a position, citing a person with direct knowledge of the matter. Apple did not immediately respond to our request for comment.

Why 1994’s Lair of Squid was the weirdest pack-in game of all time

Artist's impression of a squid jumping forth from an HP 200LX. (credit: Aurich Lawson / HP)

In 1994, Hewlett-Packard released a miracle machine: the HP 200LX pocket-size PC. In the depths of the device, among the MS-DOS productivity apps built into its fixed memory, there lurked a first-person maze game called Lair of Squid. Intrigued by the game, we tracked down its author, Andy Gryc, and probed into the title's mysterious undersea origins.

"If you ask my family, they’ll confirm that I’ve been obsessed with squid for a long time," Gryc told Ars Technica. "It’s admittedly very goofy—and that’s my fault—although I was inspired by Doom, which had come out relatively recently."

In Lair of Squid, you're trapped in an underwater labyrinth, seeking a way out while avoiding squid roaming the corridors. A collision with any cephalopod results in death. To progress through each stage and ascend to the surface, you locate the exit and provide a hidden, scrambled code word. The password is initially displayed as asterisks, with letters revealed as you encounter them within the maze.
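
The password-reveal mechanic is easy to illustrate. The sketch below is a rough Python reimplementation based only on the description above; it is not Gryc's original code (which targeted MS-DOS on the 200LX), and the code word "NAUTILUS" is a made-up example.

```python
import random

def scramble(word: str) -> str:
    """Return the letters of the exit code word in random order."""
    letters = list(word)
    random.shuffle(letters)
    return "".join(letters)

def mask(word: str, found: set) -> str:
    """Show asterisks for letters the player hasn't encountered yet."""
    return "".join(ch if ch in found else "*" for ch in word)

code_word = "NAUTILUS"        # hypothetical code word for one maze level
found_letters = set()

print("Scrambled code:", scramble(code_word))
for letter in ["U", "S", "N"]:  # letters found while exploring the maze
    found_letters.add(letter)
    print("Password so far:", mask(code_word, found_letters))
```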
