
AI as Self-Erasure

By: misterbee
24 June 2024 at 23:21
Humanity's will to disappear is being installed in the omni-operating system. I was at a small dinner a few weeks ago in Grand Rapids, Michigan. Seated next to me was a man who related that his daughter had just gotten married. As the day approached, he had wanted to say some words at the reception, as is fitting for the father of the bride. It can be hard to come up with the right words for such an occasion, and he wanted to make a good showing. He said he gave a few prompts to ChatGPT, facts about her life, and sure enough it came back with a pretty good wedding toast.

The Unknown Toll Of The AI Takeover

20 June 2024 at 18:16
As artificial intelligence guzzles water supplies and jacks up consumers' electricity rates, why isn't anyone tracking the resources being consumed?

In early May, Google announced it would be adding artificial intelligence to its search engine. When the new feature rolled out, AI Overviews began adding summaries to the top of search results, whether you wanted them or not, and they came at an invisible cost. Investigative journalist Lois Parshley explores this topic for The Lever. Archive.org link.

ChatGPT is bullshit

13 June 2024 at 15:09
Using bullshit as a term of art (as defined by Harry G. Frankfurt), ChatGPT and its LLM cohort can best be described as bullshit machines.

Abstract: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

And this bullshit might just be 'good enough' to give bosses even more leverage against workers.

How AI reduces the world to stereotypes

8 June 2024 at 10:42
"Bias occurs in many algorithms and AI systems β€” from sexist and racist search results to facial recognition systems that perform worse on Black faces. Generative AI systems are no different. In an analysis of more than 5,000 AI images, Bloomberg found that images associated with higher-paying job titles featured people with lighter skin tones, and that results for most professional roles were male-dominated. A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too." CW: stereotyping of peoples, nations, cuisines, and more

This October 2023 article by Victoria Turk was shared at a library instruction conference I attended over the last couple of days.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Image: Microsoft CEO Satya Nadella, with superimposed text: "Security"

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

Via Security Boulevard.
