YouTube tries convincing record labels to license music for AI song generator

Man using phone in front of YouTube logo (credit: Chris Ratcliffe/Bloomberg via Getty)

YouTube is in talks with record labels to license their songs for artificial intelligence tools that clone popular artists’ music, hoping to win over a skeptical industry with upfront payments.

The Google-owned video site needs labels’ content to legally train AI song generators, as it prepares to launch new tools this year, according to three people familiar with the matter.

The company has recently offered lump sums of cash to the major labels—Sony, Warner, and Universal—to try to convince more artists to allow their music to be used in training AI software, according to several people briefed on the talks.

Read 18 remaining paragraphs | Comments

Researchers upend AI status quo by eliminating matrix multiplication in LLMs

Illustration of a brain inside of a light bulb (credit: Getty Images)

Researchers claim to have developed a new way to run AI language models more efficiently by eliminating matrix multiplication from the process. This fundamentally redesigns neural network operations that are currently accelerated by GPU chips. The findings, detailed in a recent preprint paper from researchers at the University of California Santa Cruz, UC Davis, LuxiTech, and Soochow University, could have deep implications for the environmental impact and operational costs of AI systems.

Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. That ability momentarily made Nvidia the most valuable company in the world last week; the company currently holds an estimated 98 percent market share for data center GPUs, which are commonly used to power AI systems like ChatGPT and Google Gemini.

In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
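
The excerpt doesn't spell out how the researchers avoid multiplication, but the paper constrains weights to the ternary values -1, 0, and +1, which collapses each multiply-accumulate of a matmul into plain additions and subtractions. Here's a toy NumPy sketch of that core idea (an illustration of the principle, not the authors' implementation):

    import numpy as np

    def matmul_free(x, w_ternary):
        # With weights restricted to {-1, 0, +1}, each output column is just
        # a sum of some inputs minus a sum of others: no multiplications.
        out = np.zeros((x.shape[0], w_ternary.shape[1]))
        for j in range(w_ternary.shape[1]):
            out[:, j] = (x[:, w_ternary[:, j] == 1].sum(axis=1)
                         - x[:, w_ternary[:, j] == -1].sum(axis=1))
        return out

    x = np.random.randn(2, 4)
    w = np.random.choice([-1, 0, 1], size=(4, 3))
    assert np.allclose(matmul_free(x, w), x @ w)  # matches a real matmul

The assert confirms the add/subtract version reproduces a true matmul whenever the weights are ternary; the paper's contribution is making models trained under that constraint competitive with full-precision LLMs.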

Read 13 remaining paragraphs | Comments

OpenAI’s ChatGPT for Mac is now available to all users

The app lets you invoke ChatGPT from anywhere in the system with a keyboard shortcut, Spotlight-style. (credit: Samuel Axon)

OpenAI's official ChatGPT app for macOS is now available to all users for the first time, provided they're running macOS Sonoma or later.

It had previously been rolling out gradually to subscribers of ChatGPT's paid Plus plan.

The ChatGPT Mac app mostly acts as a desktop window version of the web app, allowing you to carry on back-and-forth prompt-and-response conversations. You can select between the GPT-3.5, GPT-4, and GPT-4o models. It also supports the more specialized GPTs available in the web version, including the DALL-E image generator and custom GPTs.

Read 7 remaining paragraphs | Comments

Taking a closer look at AI’s supposed energy apocalypse

Someone just asked what it would look like if their girlfriend was a Smurf. Better add another rack of servers! (credit: Getty Images)

Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI's "insatiable" demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous "some [people]" in reporting that "some worry whether there will be enough electricity to meet [the power demands] from any source."

Digging into the best available numbers and projections, though, it's hard to see AI's current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn't conflate AI energy usage with the larger and largely pre-existing energy usage of "data centers" as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post's recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet "data centers" as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

Read 22 remaining paragraphs | Comments

Political deepfakes are the most popular way to misuse AI

(credit: Arkadiusz Warguła via Getty)

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video, and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

The most common goal of actors misusing generative AI was to shape or influence public opinion, the analysis, conducted with the search group’s research and development unit Jigsaw, found. That accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.

Read 13 remaining paragraphs | Comments

Music industry giants allege mass copyright violation by AI firms

Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson's music. (credit: Getty Images)

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (i.e., "a dubstep song about Linus Torvalds").

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

Read 6 remaining paragraphs | Comments

Apple Intelligence and other features won’t launch in the EU this year

Features like Image Playground won't arrive in Europe at the same time as other regions. (credit: Apple)

Three major features in iOS 18 and macOS Sequoia will not be available to European users this fall, Apple says. They include iPhone screen mirroring on the Mac, SharePlay screen sharing, and the entire Apple Intelligence suite of generative AI features.

In a statement sent to the Financial Times, The Verge, and others, Apple says this decision is related to the European Union's Digital Markets Act (DMA). Here's the full statement, which was attributed to Apple spokesperson Fred Sainz:

Two weeks ago, Apple unveiled hundreds of new features that we are excited to bring to our users around the world. We are highly motivated to make these technologies accessible to all users. However, due to the regulatory uncertainties brought about by the Digital Markets Act (DMA), we do not believe that we will be able to roll out three of these features — iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence — to our EU users this year.

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

It is unclear from Apple's statement precisely which aspects of the DMA may have led to this decision. It could be that Apple is concerned that it would be required to give competitors like Microsoft or Google access to user data collected for Apple Intelligence features and beyond, but we're not sure.

Read 2 remaining paragraphs | Comments

Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

The Anthropic Claude 3 logo, jazzed up by Benj Edwards (credit: Anthropic / Benj Edwards)

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000-token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that displays generated content, such as documents or code, in a dedicated window.

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).

Read 17 remaining paragraphs | Comments

Researchers describe how to tell if ChatGPT is confabulating

(credit: Aurich Lawson | Getty Images)

It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that's indistinguishable from when they get things right. There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require an extrapolation from facts that the LLM isn't capable of performing; or some aspect of the LLM's training might have incentivized a falsehood.

But perhaps the simplest explanation is that an LLM doesn't recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on them for everything from college essays to job applications. Now, researchers from the University of Oxford say they've found a relatively simple way to determine when LLMs appear to be confabulating that works with all popular models and across a broad range of subjects. And, in doing so, they develop evidence that most of the alternative facts LLMs provide are a product of confabulation.
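
The excerpt doesn't detail the method, but the Oxford group's published approach measures what it calls semantic entropy: ask the model the same question several times, cluster the sampled answers by meaning, and treat high disagreement as a sign of confabulation. A toy sketch of that idea (the real paper clusters answers by bidirectional entailment; naive string normalization stands in for that here):

    import math
    from collections import Counter

    def answer_entropy(sampled_answers):
        # Group answers that "mean the same thing" (toy proxy: normalized
        # text), then compute entropy over the groups. Consistent answers
        # give low entropy; scattered, improvised answers give high entropy.
        groups = Counter(a.strip().lower() for a in sampled_answers)
        n = len(sampled_answers)
        return -sum((c / n) * math.log2(c / n) for c in groups.values())

    print(answer_entropy(["Paris", "paris", " Paris "]))  # 0.0: consistent
    print(answer_entropy(["Paris", "Lyon", "Nice"]))      # ~1.58: suspect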

Read 14 remaining paragraphs | Comments

Ex-OpenAI star Sutskever shoots for superintelligent AI with new company

Ilya Sutskever gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023. (credit: Getty Images)

On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building "superintelligence," which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," wrote Sutskever on X. "We will do it through revolutionary breakthroughs produced by a small cracked team."

Sutskever was a founding member of OpenAI and formerly served as the company's chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company's new website.

Read 8 remaining paragraphs | Comments

Runway’s latest AI video generator brings giant cotton candy monsters to life

Screen capture of a Runway Gen-3 Alpha video generated with the prompt "A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them." (credit: Runway)

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that's still under development, but it appears to create video of similar quality to OpenAI's Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts that range from realistic humans to surrealistic monsters stomping the countryside.

Unlike Runway's previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora's full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it's highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway's improvement in visual fidelity over the past year is difficult to ignore.

Read 20 remaining paragraphs | Comments

Windows 11 24H2 is released to the public but only on Copilot+ PCs (for now)

(credit: Microsoft)

For the vast majority of compatible PCs, Microsoft’s Windows 11 24H2 update still isn’t officially available as anything other than a preview (a revised version of the update is available to Windows Insiders again after briefly being pulled early last week). But Microsoft and most of the other big PC companies are releasing their first wave of Copilot+ PCs with Snapdragon X-series chips in them today, and those PCs are all shipping with the 24H2 update already installed.

For now, this means a bifurcated Windows 11 install base: one (the vast majority) that’s still mostly on version 23H2 and one (a tiny, Arm-powered minority) that’s running 24H2.

Although Microsoft hasn’t been specific about its release plans for Windows 11 24H2 to the wider user base, most PCs should still start getting the update later this fall. The Copilot+ parts won’t run on those current PCs, but they’ll still get new features and benefit from Microsoft’s work on the operating system’s underpinnings.

Read 4 remaining paragraphs | Comments

SoftBank plans to cancel out angry customer voices using AI

A man is angry and screaming while talking on a smartphone (credit: Getty Images / Benj Edwards)

Japanese telecommunications giant SoftBank recently announced that it has been developing "emotion-canceling" technology powered by AI that will alter the voices of angry customers to sound calmer during phone calls with customer service representatives. The project aims to reduce the psychological burden on operators suffering from harassment and has been in development for three years. SoftBank plans to launch it by March 2026, but the idea is receiving mixed reactions online.

According to a report from the Japanese news site The Asahi Shimbun, SoftBank's project relies on an AI model to alter the tone and pitch of a customer's voice in real-time during a phone call. SoftBank's developers, led by employee Toshiyuki Nakatani, trained the system using a dataset of over 10,000 voice samples, which were performed by 10 Japanese actors expressing more than 100 phrases with various emotions, including yelling and accusatory tones.
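
SoftBank hasn't published how its model transforms voices, so treat the following as a rough illustration of just one ingredient the report mentions (pitch alteration), done offline with librosa on a placeholder audio file rather than in real time:

    import librosa
    import soundfile as sf

    # Load a (hypothetical) recording of an angry caller and lower its pitch
    # by two semitones, a crude stand-in for "softening" an agitated voice.
    y, sr = librosa.load("angry_caller.wav", sr=None)
    softened = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)
    sf.write("softened_caller.wav", softened, sr)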

Voice cloning and synthesis technology has made massive strides in the past three years. We've previously covered technology from Microsoft that can clone a voice with a three-second audio sample and audio-processing technology from Adobe that cleans up audio by re-synthesizing a person's voice, so SoftBank's technology is well within the realm of plausibility.

Read 11 remaining paragraphs | Comments

Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

A large Tesla logo (credit: Getty Images | SOPA Images)

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk's diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and "xAI's use of Tesla's data to develop xAI's own software/hardware, all without compensation to Tesla," the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk's equity stake in xAI to Tesla.

"Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not," the lawsuit says.

Read 11 remaining paragraphs | Comments

Microsoft delays Recall again, won’t debut it with new Copilot+ PCs after all

Recall is part of Microsoft's Copilot+ PC program. (credit: Microsoft)

Microsoft will be delaying its controversial Recall feature again, according to an updated blog post by Windows and Devices VP Pavan Davuluri. And when the feature does return "in the coming weeks," Davuluri writes, it will be as a preview available to PCs in the Windows Insider Program, the same public testing and validation pipeline that all other Windows features usually go through before being released to the general populace.

Recall is a new Windows 11 AI feature that will be available on PCs that meet the company's requirements for its "Copilot+ PC" program. Copilot+ PCs need at least 16GB of RAM, 256GB of storage, and a neural processing unit (NPU) capable of at least 40 trillion operations per second (TOPS). The first (and for a few months, only) PCs that will meet this requirement are all using Qualcomm's Snapdragon X Plus and X Elite Arm chips, with compatible Intel and AMD processors following later this year. Copilot+ PCs ship with other generative AI features, too, but Recall's widely publicized security problems have sucked most of the oxygen out of the room so far.

The Windows Insider preview of Recall will still require a PC that meets the Copilot+ requirements, though third-party scripts may be able to turn on Recall for PCs without the necessary hardware. We'll know more when Recall makes its reappearance.

Read 7 remaining paragraphs | Comments

This photo got 3rd in an AI art contest—then its human photographer came forward

To be fair, I wouldn't put it past an AI model to forget the flamingo's head. (credit: Miles Astray)

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to "honor photography as an art form," with a panel of experienced judges who work with photos at The New York Times, Christie's, and Getty Images, among others. The contest rules sought to segregate AI images into their own category as a way to separate out the work of increasingly impressive image generators from "those who use the camera as their artistic medium," as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they "reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files." Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.

Read 9 remaining paragraphs | Comments

Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

The OpenAI and Apple logos together (credit: OpenAI / Apple / Benj Edwards)

On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices as compensation enough.

"Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments."

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

Read 7 remaining paragraphs | Comments

Cop busted for unauthorized use of Clearview AI facial recognition resigns

(credit: Francesco Carta fotografo | Moment)

An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear "misuse" of Clearview AI's controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as "the world's largest facial recognition network." The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.

Read 16 remaining paragraphs | Comments

Wyoming mayoral candidate wants to govern by AI bot

Digital chatbot icon on a futuristic tech background (credit: dakuq via Getty)

Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.

“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.

But whether VIC—and Victor—will be allowed to run at all is still an open question.

Read 20 remaining paragraphs | Comments

Turkish student creates custom AI device for cheating university exam, gets arrested

A photo illustration of what a shirt-button camera could look like. (credit: Aurich Lawson | Getty Images)

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, report Reuters and The Daily Mail.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

Read 5 remaining paragraphs | Comments

New Stable Diffusion 3 release excels at AI-generated body horror

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

Read 10 remaining paragraphs | Comments

One of the major sellers of detailed driver behavioral data is shutting down

Interior of a car with different aspects of it highlighted, as if by a camera or AI (credit: Getty Images)

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize telematics, the rich data regularly sent from cars back to their manufacturers. The New York Times' Kashmir Hill reported a concrete example: drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.
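
Based on the data points described in that reporting, a single trip entry in such a report presumably looks something like the following (field names are illustrative, not Verisk's actual schema):

    # Hypothetical shape of one trip record from a driver behavior report.
    trip_record = {
        "trip_start": "2024-03-01T08:14:00",
        "trip_end": "2024-03-01T08:41:00",
        "distance_miles": 12.6,
        "hard_braking_events": 2,        # sudden decelerations
        "hard_acceleration_events": 1,   # sudden speed-ups
    }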

Read 4 remaining paragraphs | Comments

Elon Musk drops claims that OpenAI abandoned mission

(credit: JC Olivera / Stringer | WireImage)

While Musk has spent much of today loudly criticizing the Apple/OpenAI deal, he has also sought to drop his lawsuit against OpenAI, a court filing showed.

In the filing, Musk's lawyer, Morgan Chu, notified the Superior Court of California in San Francisco of Musk's request for dismissal of his entire complaint without prejudice.

There are currently no further details as to why Musk decided to drop the suit.

Read 9 remaining paragraphs | Comments

Elon Musk is livid about new OpenAI/Apple deal

(credit: Anadolu / Contributor | Anadolu)

Elon Musk is so opposed to Apple's plan to integrate OpenAI's ChatGPT with device operating systems that he's seemingly spreading misconceptions while heavily criticizing the partnership.

On X (formerly Twitter), Musk has been criticizing alleged privacy and security risks since the plan was announced Monday at Apple's annual Worldwide Developers Conference.

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies," Musk posted on X. "That is an unacceptable security violation." In another post responding to Apple CEO Tim Cook, Musk wrote, "Don't want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies."

Read 24 remaining paragraphs | Comments

Apple and OpenAI currently have the most misunderstood partnership in tech

He isn't using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered "Apple Intelligence" during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we've seen confusion on social media about why Apple didn't develop a cutting-edge GPT-4-like chatbot internally. Despite Apple's year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple's lack of innovation.

"This is really strange. Surely Apple could train a very good competing LLM if they wanted? They've had a year," wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misconceptions about it—saying things like, "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!"

Read 19 remaining paragraphs | Comments

Adobe to update vague AI terms after users threaten to cancel subscriptions

(credit: bennymarty | iStock Editorial / Getty Images Plus)

Adobe has promised to update its terms of service to make it "abundantly clear" that the company will "never" train generative AI on creators' content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms to access Adobe apps, disrupting access to creatives' projects unless they immediately accepted them.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a "yearly subscription comes with a significant discount."

Read 25 remaining paragraphs | Comments

AI trained on photos from kids’ entire childhood without their consent

(credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.

Read 34 remaining paragraphs | Comments

Apple integrates ChatGPT into Siri, iOS, and macOS

The AIs are learning to cooperate! Siri talks to ChatGPT. (credit: Apple)

Reports of Apple signing a deal with OpenAI are true: ChatGPT is coming to your Apple gear.

First up is Siri, which can tap into ChatGPT to answer voice questions. If Siri thinks ChatGPT can help answer your question, you'll get a pop-up permission box asking if you want to send your question to the chatbot. The response will come back in a window indicating that the information came from an outside source. This is the same way Siri treats a search engine (namely, Google), so it will be interesting to see exactly where Siri draws the line between ChatGPT and a search engine. In Apple's lone example, there was a "help" intent, with the input asking to "help me plan a five-course meal" given certain ingredient limitations. That sort of ultra-specific request is something you can't do with a traditional search engine.

Siri can also send photos to ChatGPT. In Apple's example, the user snapped a picture of a wooden deck and asked Siri about decorating options. It sounds like the standard generative AI summary features will be here, too, with Apple SVP of Software Engineering Craig Federighi mentioning that "you can also ask questions about your documents, presentations, or PDFs."

Read 3 remaining paragraphs | Comments

Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

(credit: Apple)

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year, with a beta test for developers this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."

Read 10 remaining paragraphs | Comments

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

Apple Senior VP of Software Engineering Craig Federighi announces "Private Cloud Compute" at WWDC 2024. (credit: Apple)

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new "Apple Intelligence" system it's integrating into its products will use a new "Private Cloud Compute" to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

"You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud," Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls "a brand new standard for privacy and AI" is achieved through on-device processing. Federighi said "many" of Apple's generative AI models can run entirely on devices powered by A17+ or M-series chips, eliminating the risk of sending your personal data to a remote server.

Read 4 remaining paragraphs | Comments

iOS 18 adds Apple Intelligence, customizations, and makes Android SMS nicer

Hands manipulating the Control Center on an iPhone (credit: Apple)

The biggest feature in iOS 18, the one that affects the most people, was a single item in a comma-stuffed sentence by Apple software boss Craig Federighi: "Support for RCS."

As we noted when Apple announced its support for "RCS Universal Profile," a kind of minimum viable cross-device rich messaging, iPhone users getting RCS means SMS chains with Android users "will be slightly less awful." SMS messages will soon have read receipts, higher-quality media sending, and typing indicators, along with better security. And RCS messages can go over Wi-Fi when you don't have a cellular signal. Apple is certainly downplaying a major cross-platform compatibility upgrade, but it's a notable quality-of-life boost.

Apple Intelligence, the new Siri, and the iPhone

iOS 18 is one of the major beneficiaries of Apple's AI rollout, dubbed "Apple Intelligence." Apple Intelligence promises to help iPhone users create and understand language and images, with the proper context from your phone's apps: photos, calendar, email, messages, and more.

Read 10 remaining paragraphs | Comments

Microsoft pulls release preview build of Windows 11 24H2 after Recall controversy

The Recall feature provides a timeline of screenshots and a searchable database of text, thoroughly tracking everything about a person's PC usage. (credit: Microsoft)

On Friday, Microsoft announced major changes to its upcoming Recall feature after overwhelming criticism from security researchers, the press, and its users. Microsoft is turning Recall off by default when users set up PCs that are compatible with the feature, and it's adding additional authentication and encryption that will make it harder to access another user's Recall data on the same PC.

It's likely not a coincidence that Microsoft also quietly pulled the build of the Windows 11 24H2 update that it had been testing in its Release Preview channel for Windows Insiders. It's not unheard of for Microsoft to stop distributing a beta build of Windows after releasing it, but the Release Preview channel is typically the last stop for a Windows update before a wider release.

Microsoft hasn't provided a specific rationale for pulling the update; the blog post says the pause is "temporary" and the rollout will be resumed "in the coming weeks." Windows Insider Senior Program Manager Brandon LeBlanc posted on social media that the team was "working to get it rolling out again shortly."

Read 4 remaining paragraphs | Comments

Report: New “Apple Intelligence” AI features will be opt-in by default

(credit: Apple)

Apple's Worldwide Developers Conference kicks off on Monday, and per usual, the company is expected to detail most of the big new features in this year's updates to iOS, iPadOS, macOS, and all of Apple's other operating systems.

The general consensus is that Apple plans to use this year's updates to integrate generative AI into its products for the first time. Bloomberg's Mark Gurman has a few implementation details that show how Apple's approach will differ somewhat from Microsoft's or Google's.

Gurman says that the "Apple Intelligence" features will include an OpenAI-powered chatbot, but it will otherwise focus on "features with broad appeal" rather than "whiz-bang technology like image and video generation." These include summaries for webpages, meetings, and missed notifications; a revamped version of Siri that can control apps in a more granular way; Voice Memos transcription; image enhancement features in the Photos app; suggested replies to text messages; automated sorting of emails; and the ability to "create custom emoji characters on the fly that represent phrases or words as they're being typed."

Read 4 remaining paragraphs | Comments

Tesla may be in trouble, but other EVs are selling just fine

Generic electric car charging on a city street (credit: Getty Images/3alexd)

Have electric vehicles been overhyped? A casual observer might have come to that conclusion after almost a year of stories in the media about EVs languishing on lots and letters to the White House asking for a national electrification mandate to be watered down or rolled back. EVs were even a pain point during last year's auto worker industrial action. But a look at the sales data paints a different picture, one where Tesla's outsize role in the market has had a distorting effect.

"EVs are the future. Our numbers bear that out. Current challenges will be overcome by the industry and government, and EVs will regain momentum and will ultimately dominate the automotive market," said Martin Cardell, head of global mobility solutions at consultancy firm EY.

Public perception hasn't been helped by recent memories of supply shortages and pandemic price gouging, but the chorus of concerns about EV sales became noticeably louder toward the end of last year and the beginning of 2024. EV sales in 2023 grew by 47 percent year on year, but the first three months of this year failed to show such massive growth. In fact, sales in Q1 2024 were up only 2.6 percent over the same period in 2023.

Read 9 remaining paragraphs | Comments

Outcry from big AI firms over California AI “kill switch” bill

A finger poised over an electrical switch (credit: Hajohoos via Getty)

Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework including creating a “kill switch” to turn off their powerful AI models, in a growing battle over regulatory control of the cutting-edge technology.

The California Legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI start-ups, OpenAI, Anthropic, and Cohere, as well as large language models run by Big Tech companies such as Meta.

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

Read 25 remaining paragraphs | Comments

Meta uses “dark patterns” to thwart AI opt-outs in EU, complaint says

(credit: Boris Zhitkov | Moment)

The European Center for Digital Rights, known as Noyb, has filed complaints in 11 European countries to halt Meta's plan to start training vague new AI technologies on European Union-based Facebook and Instagram users' personal posts and pictures.

Meta's AI training data will also be collected from third parties, from users' interactions with Meta's generative AI features, and from their activity on pages, the company has said. Additionally, Meta plans to collect information about people who aren't on Facebook or Instagram but are featured in users' posts or photos. The only exception from AI training is made for private messages sent between "friends and family," which will not be processed, Meta's blog said, but private messages sent to businesses and Meta are fair game. And any data collected for AI training could be shared with third parties.

"Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future 'artificial intelligence technology,'" Noyb alleged in a press release.

Read 41 remaining paragraphs | Comments

US agencies to probe AI dominance of Nvidia, Microsoft, and OpenAI

A large Nvidia logo at a conference hall (credit: Getty Images | NurPhoto)

The US Justice Department and Federal Trade Commission reportedly plan investigations into whether Nvidia, Microsoft, and OpenAI are snuffing out competition in artificial intelligence technology.

The agencies struck a deal on how to divide up the investigations, The New York Times reported yesterday. Under this deal, the Justice Department will take the lead role in investigating Nvidia's behavior while the FTC will take the lead in investigating Microsoft and OpenAI.

The agencies' agreement "allows them to proceed with antitrust investigations into the dominant roles that Microsoft, OpenAI, and Nvidia play in the artificial intelligence industry, in the strongest sign of how regulatory scrutiny into the powerful technology has escalated," the NYT wrote.

Read 15 remaining paragraphs | Comments

DuckDuckGo offers “anonymous” access to AI chatbots through new service

DuckDuckGo's AI Chat promotional image (credit: DuckDuckGo)

On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can readily output inaccurate information, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.
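
DuckDuckGo hasn't released its relay code, but the anonymization it describes follows a familiar proxy pattern: the intermediary forwards only the chat text to the model provider, signed with its own server-side key, so the provider never sees the user's IP address or identifying headers. A minimal sketch of that pattern (the endpoint URL and field names are hypothetical):

    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    UPSTREAM_URL = "https://api.example-llm.com/v1/chat"  # placeholder provider
    PROVIDER_KEY = "server-side-secret"  # the proxy's key, never the user's

    @app.post("/proxy/chat")
    def proxy_chat():
        # Forward only the conversation text; drop the client's IP, cookies,
        # user agent, and any other identifying headers.
        payload = {"messages": request.get_json()["messages"]}
        upstream = requests.post(
            UPSTREAM_URL,
            json=payload,
            headers={"Authorization": f"Bearer {PROVIDER_KEY}"},
            timeout=60,
        )
        return jsonify(upstream.json())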

Read 6 remaining paragraphs | Comments

Can a technology called RAG keep AI models from making stuff up?

(credit: Aurich Lawson | Getty Images)

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a statistical gap-filling phenomenon AI language models produce when they are tasked with reproducing knowledge that wasn’t present in the training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.
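
In outline, RAG bolts a search step onto generation: relevant passages are retrieved from a trusted document store and prepended to the prompt, so the model answers from real text instead of filling gaps statistically. A minimal sketch (toy word-overlap scoring stands in for the embedding-based retrieval a production system would use):

    import re

    documents = [
        "Refund requests must be submitted within 90 days of purchase.",
        "Support chat is available around the clock, every day of the year.",
    ]

    def retrieve(query, docs, k=1):
        # Toy relevance score: count words shared by the query and document.
        words = lambda s: set(re.findall(r"\w+", s.lower()))
        return sorted(docs, key=lambda d: len(words(query) & words(d)),
                      reverse=True)[:k]

    def build_prompt(query):
        context = "\n".join(retrieve(query, documents))
        return ("Answer using ONLY the context below.\n\n"
                f"Context:\n{context}\n\n"
                f"Question: {query}")

    # The retrieved policy text rides along in front of the question, so the
    # model can quote a real policy instead of inventing one.
    print(build_prompt("How many days do I have to request a refund?"))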

Read 30 remaining paragraphs | Comments

Top news app caught sharing “entirely false” AI-generated news

(credit: gmast3r | iStock / Getty Images Plus)

After the most downloaded local news app in the US, NewsBreak, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online to reassure troubled citizens that the story was "entirely false," Reuters reported.

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the cops' Facebook post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.

Read 26 remaining paragraphs | Comments
