Yesterday — 4 July 2024

Tool preventing AI mimicry cracked; artists wonder what’s next

4 July 2024 at 07:35

(credit: Aurich Lawson | Getty Images)

For many artists, it's a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists' styles. But they don't provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists' brands and replace them in the market.
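As a rough illustration of how this class of cloaking defense works, the sketch below computes a small, pixel-bounded perturbation that pushes an image's features away from their original representation in a pretrained encoder, so the change is hard for humans to see but disruptive to models trained on the result. This is a minimal toy under stated assumptions, not Glaze's actual algorithm; the encoder choice, loss, and parameters are illustrative.

```python
# Toy sketch of style cloaking: add a small, bounded perturbation that
# shifts an image's features in a pretrained encoder. This is NOT Glaze's
# actual algorithm; the model, loss, and budget are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def cloak(image_path, eps=4 / 255, steps=50, lr=1e-2):
    img = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    original = encoder(img).detach()          # features of the clean image
    delta = torch.zeros_like(img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = encoder((img + delta).clamp(0, 1))
        # Maximize distance from the original features (minimize its negative)
        loss = -torch.nn.functional.mse_loss(feats, original)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # keep the change imperceptible
    return (img + delta).detach().clamp(0, 1).squeeze(0)
```

The small `eps` bound is what keeps the perturbation invisible to people; real tools like Glaze additionally shape the perturbation so it survives resizing and compression, which this toy does not attempt.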

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI-generated images that imitated the famous photographer's style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it's not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove that AI models are referencing their works. In this largely lawless world, every image an artist uploads risks being scraped into training data, potentially watering down demand for their own work each time they promote new pieces online.


Before yesterday

AI trains on kids’ photos even when parents use strict privacy settings

2 July 2024 at 15:37

(credit: Aitor Diago | Moment)

Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now she has released a second report flagging 190 photos of children from all of Australia’s states and territories, including Indigenous children who may be particularly vulnerable to harm.

These photos are linked in the dataset "without the knowledge or consent of the children or their families." They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han's report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.
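Because LAION-5B distributes only metadata (image URLs and captions, published as parquet shards) rather than the images themselves, checks like Han's amount to scanning those URL lists. A minimal sketch of that kind of lookup follows; the shard filename and domain are hypothetical placeholders, and the uppercase URL/TEXT column names reflect how LAION publishes its metadata.

```python
# Sketch: scan a LAION-5B metadata shard (parquet of image URLs + captions)
# for links pointing at a given site. The filename and domain below are
# hypothetical placeholders for illustration only.
import pandas as pd

shard = pd.read_parquet("laion5b-metadata-part-00000.parquet",
                        columns=["URL", "TEXT"])
hits = shard[shard["URL"].str.contains("example-school-photos.au",
                                       na=False, regex=False)]
print(f"{len(hits)} linked images from this domain")
print(hits.head())
```

This also shows why the URLs themselves can leak identifying details: whatever names or locations a site put in its image paths travel with the dataset, even though the dataset never stored the photos.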

