โŒ

Normal view

There are new articles available, click to refresh the page.
Yesterday — 25 June 2024 — Slashdot

Apple Spurned Idea of iPhone AI Partnership With Meta Months Ago

By: msmash
25 June 2024 at 00:30
An anonymous reader shares a report: Apple rejected overtures by Meta Platforms to integrate the social networking company's AI chatbot into the iPhone months ago, according to people with knowledge of the matter. The two companies aren't in discussions about using Meta's Llama chatbot in an AI partnership and only held brief talks in March, said the people, who asked not to be identified because the situation is private. The dialogue about a partnership didn't reach any formal stage, and Apple has no active plans to integrate Llama. [...] Apple decided not to move forward with formal Meta discussions in part because it doesn't see that company's privacy practices as stringent enough, according to the people. Apple has spent years criticizing Meta's technology, and integrating Llama into the iPhone would have been a stark about-face.

Read more of this story at Slashdot.

Before yesterday — Slashdot

Head of Paris's Top Tech University Says Secret To France's AI Boom Is Focus on Humanities

By: msmash
24 June 2024 at 14:00
French universities are becoming hotbeds for AI innovation, attracting investors seeking the next tech breakthrough. Ecole Polytechnique, a 230-year-old institution near Paris, stands out with 57% of France's AI startup founders among its alumni, according to Dealroom data analyzed by Accel. The school's approach combines STEM education with humanities and military training, producing well-rounded entrepreneurs. "AI is now instilling every discipline the same way mathematics did years ago," said Dominique Rossin, the school's provost. "We really push our students out of their comfort zone and encourage them to try new subjects and discover new areas in science," he added. France leads Europe in AI startup funding, securing $2.3 billion and outpacing the UK and Germany, according to Dealroom.

Read more of this story at Slashdot.

Apple Might Partner with Meta on AI

23 June 2024 at 18:33
Earlier this month Apple announced a partnership with OpenAI to bring ChatGPT to Siri. "Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal," according to TechCrunch: A deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions... Apple has said it will ask for users' permission before sharing any questions and data with ChatGPT. Presumably, any integration with Meta would work similarly.

Read more of this story at Slashdot.

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals

23 June 2024 at 15:16
"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the executive director of the Authors Guild, America's writers' advocacy group. OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work." [L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from. OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...." AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends. Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering... We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients — other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures. The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. The Guild's executive director also notes that there's "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and Deviant Art, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs." They conclude that "The best chance for the wider community of artists is to band together."

Read more of this story at Slashdot.

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity

23 June 2024 at 12:34
The Roddenberry Foundation — named for Star Trek creator Gene Roddenberry — "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...." Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...." Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater. The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early." "We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone — regardless of background — to thrive." "By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."

Read more of this story at Slashdot.

Our Brains React Differently to Deepfake Voices, Researchers Find

23 June 2024 at 10:34
"University of Zurich researchers have discovered that our brains process natural human voices and "deepfake" voices differently," writes Slashdot reader jenningsthecat. From the University's announcement: The researchers first used psychoacoustical methods to test how well human voice identity is preserved in deepfake voices. To do this, they recorded the voices of four male speakers and then used a conversion algorithm to generate deepfake voices. In the main experiment, 25 participants listened to multiple voices and were asked to decide whether or not the identities of two voices were the same. Participants either had to match the identity of two natural voices, or of one natural and one deepfake voice. The deepfakes were correctly identified in two thirds of cases. "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people," says Claudia Roswandowitz, first author and a postdoc at the Department of Computational Linguistics. The researchers then used imaging techniques to examine which brain regions responded differently to deepfake voices compared to natural voices. They successfully identified two regions that were able to recognize the fake voices: the nucleus accumbens and the auditory cortex. "The nucleus accumbens is a crucial part of the brain's reward system. It was less active when participants were tasked with matching the identity between deepfakes and natural voices," says Claudia Roswandowitz. In contrast, the nucleus accumbens showed much more activity when it came to comparing two natural voices. The complete paper appears in Nature.

Read more of this story at Slashdot.

Multiple AI Companies Ignore Robots.Txt Files, Scrape Web Content, Says Licensing Firm

23 June 2024 at 07:34
Multiple AI companies are ignoring robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters — citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled. "What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges." The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers): "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry." Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
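
For context on the mechanism: robots.txt is a plain-text file served from a site's root, and a compliant crawler fetches it and honors its allow/disallow rules before requesting any page. Here is a minimal sketch of that check using Python's standard library (the crawler name and URLs are hypothetical):

```python
import urllib.robotparser

# Hypothetical crawler name and publisher URLs, for illustration only.
robots_url = "https://example-publisher.com/robots.txt"
user_agent = "ExampleAIBot"

parser = urllib.robotparser.RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetch and parse the publisher's robots.txt

# A compliant crawler runs this check before every fetch; the bypassing
# described above amounts to skipping it (or ignoring its answer).
page = "https://example-publisher.com/articles/some-story.html"
if parser.can_fetch(user_agent, page):
    print("robots.txt permits fetching", page)
else:
    print("robots.txt disallows fetching", page)
```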

Read more of this story at Slashdot.

Open Source ChatGPT Clone 'LibreChat' Lets You Use Multiple AI Services - While Owning Your Data

22 June 2024 at 10:34
Slashdot reader DevNull127 writes: A free and open source ChatGPT clone — named LibreChat — lets its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generation. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation...) Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." And its creator, Danny Avila, says in some cases it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLM's." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and LibreChat is "inherently completely private". From the article: With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable," Avila notes, "and will become more capable in general over time." And they're also compatible with LibreChat...
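
The article doesn't show LibreChat's internals, but the core idea it describes (one provider-agnostic conversation state, many interchangeable model backends) can be sketched in a few lines. This is a rough illustration, not LibreChat's code; the backends are stubs standing in for real provider SDK calls:

```python
from typing import Callable

# Stub backends; a real version would call the OpenAI, Anthropic,
# Azure, or Google SDKs here.
def openai_backend(messages: list[dict]) -> str:
    return "[openai reply to: " + messages[-1]["content"] + "]"

def anthropic_backend(messages: list[dict]) -> str:
    return "[anthropic reply to: " + messages[-1]["content"] + "]"

BACKENDS: dict[str, Callable[[list[dict]], str]] = {
    "gpt-4": openai_backend,
    "claude": anthropic_backend,
}

class Conversation:
    """Holds conversation state independently of any provider, so the
    active model can be swapped mid-conversation."""

    def __init__(self) -> None:
        self.messages: list[dict] = []

    def ask(self, model: str, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        reply = BACKENDS[model](self.messages)  # same history, any backend
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation()
print(chat.ask("gpt-4", "Draft a haiku about data ownership."))
print(chat.ask("claude", "Now make it rhyme."))  # switched mid-conversation
```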

Read more of this story at Slashdot.

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels?

22 June 2024 at 14:34
The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..." They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta uses as much power in a year as 7 million laptops running eight hours every day, based on data shared publicly by the company... [Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement. The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show... [including] heavily polluting fossil fuel plants that become necessary to stabilize the power grid overall because of these purchases, making sure everyone has enough electricity. The article quotes a project director at the nonprofit Data & Society, which tracks the effect of AI and accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment." The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country." Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes.
"In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.

Read more of this story at Slashdot.

OpenAI CTO: AI Could Kill Some Creative Jobs That Maybe Shouldn't Exist Anyway

By: msmash
21 June 2024 at 22:05
OpenAI CTO Mira Murati isn't worried about how AI could hurt some creative jobs, suggesting during a talk that some jobs were maybe always a bit replaceable anyway. From a report: "I think it's really going to be a collaborative tool, especially in the creative spaces," Murati told Dartmouth University Trustee Jeffrey Blackburn during a conversation about AI hosted at the university's engineering department. "Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace. "I really believe that using it as a tool for education, [and] creativity, will expand our intelligence."

Read more of this story at Slashdot.

Microsoft Makes Copilot Less Useful on New Copilot Plus PCs

By: msmash
21 June 2024 at 14:51
An anonymous reader shares a report: Microsoft launched its range of Copilot Plus PCs earlier this week, and they all come equipped with the new dedicated Copilot key on the keyboard. It's the first big change to Windows keyboards in 30 years, but all the key does now is launch a Progressive Web App (PWA) version of Copilot. The web app no longer integrates with Windows the way the previous Copilot experience has since last year, so you can't use Copilot to control Windows 11 settings or have it docked as a sidebar anymore. It's literally just a PWA. Microsoft has even removed the keyboard shortcut to Copilot on these new Copilot Plus PCs, so WINKEY + C does nothing.

Read more of this story at Slashdot.

Amazon Mulls $5 To $10 Monthly Price Tag For Unprofitable Alexa Service, AI Revamp

By: msmash
21 June 2024 at 10:40
Amazon is planning a major revamp of its decade-old money-losing Alexa service to include a conversational generative AI with two tiers of service and has considered a monthly fee of around $5 to access the superior version, Reuters reported Friday, citing people with direct knowledge of the company's plans. From the report: Known internally as "Banyan," a reference to the sprawling ficus trees, the project would represent the first major overhaul of the voice assistant since it was introduced in 2014 along with the Echo line of speakers. Amazon has dubbed the new voice assistant "Remarkable Alexa," the people said. Amazon has also considered a roughly $10-per-month price, the report added.

Read more of this story at Slashdot.

London Premiere of Movie With AI-Generated Script Cancelled After Backlash

By: msmash
20 June 2024 at 13:01
A cinema in London has cancelled the world premiere of a film with a script generated by AI after a backlash. From a report: The Prince Charles cinema, located in London's West End, which traditionally screens cult and art films, was due to host a showing of a new production called The Last Screenwriter on Sunday. However, the cinema announced on social media that the screening would not go ahead. In its statement, the Prince Charles said: "The feedback we received over the last 24hrs once we advertised the film has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry." Directed by Peter Luisi and starring Nicholas Pople, The Last Screenwriter is a Swiss production that describes itself as the story of "a celebrated screenwriter" who "finds his world shaken when he encounters a cutting edge AI scriptwriting system ... he soon realises AI not only matches his skills but even surpasses him in empathy and understanding of human emotions." The screenplay is credited to "ChatGPT 4.0." OpenAI launched its latest model, GPT-4o, in May. Luisi told the Daily Beast that the cinema had cancelled the screening after it received 200 complaints, but that a private screening for cast and crew would still go ahead in London.

Read more of this story at Slashdot.

Anthropic Launches Claude 3.5 Sonnet, Says New Model Outperforms GPT-4 Omni

By: msmash
20 June 2024 at 10:49
Anthropic launched Claude 3.5 Sonnet on Thursday, claiming it outperforms previous models and OpenAI's GPT-4 Omni. This release, part of the Claude 3.5 family, arrives three months after Claude 3. Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. Anthropic plans to launch 3.5 versions of Haiku and Opus later this year, and is exploring features like web search and memory for future releases. The startup also introduced Artifacts on Claude.ai, a new feature that expands how users can interact with Claude. When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where users can see, edit, and build upon Claude's creations in real time, seamlessly integrating AI-generated content into their projects and workflows, Anthropic said.
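
For developers, the new model is also reachable through Anthropic's API. A minimal sketch using the Anthropic Python SDK, assuming an API key is set in the environment; the model identifier shown is the one Anthropic published for this release, but check the current docs before relying on it:

```python
# Minimal sketch with the Anthropic Python SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # ID published for this release
    max_tokens=512,
    messages=[
        {"role": "user", "content": "In one paragraph, what is an Artifact in Claude.ai?"},
    ],
)
print(message.content[0].text)
```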

Read more of this story at Slashdot.

Perplexity AI Faces Scrutiny Over Web Scraping and Chatbot Accuracy

By: msmash
20 June 2024 at 08:25
Perplexity AI, a billion-dollar "AI" search startup, has come under scrutiny over its data collection practices and the accuracy of its chatbot's responses. Despite claiming to respect website operators' wishes, Perplexity appears to scrape content from sites that have blocked its crawler, using an undisclosed IP address, a Wired investigation found. The chatbot also generates summaries that closely paraphrase original reporting with minimal attribution. Furthermore, its AI often "hallucinates," inventing false information when unable to access articles directly. Perplexity's CEO, Aravind Srinivas, maintains the company is not acting unethically.

Read more of this story at Slashdot.

OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence

By: msmash
19 June 2024 at 14:23
Ilya Sutskever, co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report: Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise. Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom." Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."

Read more of this story at Slashdot.

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo

By: BeauHD
19 June 2024 at 09:00
Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting way ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities. Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but only supported 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 languages and the context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...] As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base avatars via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.
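
Since the weights are on Hugging Face, trying the smaller checkpoint locally is a standard transformers workflow. A sketch, assuming the 16B instruct model is published under the ID shown (verify the exact name on the hub) and that a sufficiently large GPU is available:

```python
# Sketch of loading the 16B instruct checkpoint with Hugging Face
# transformers. The model ID is assumed from DeepSeek's hub naming;
# verify it on huggingface.co before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```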

Read more of this story at Slashdot.

Meta Has Created a Way To Watermark AI-Generated Speech

By: BeauHD
18 June 2024 at 23:30
An anonymous reader quotes a report from MIT Technology Review: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with -- for example, by removing or forging them. Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.
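
AudioSeal's actual method is described in the team's paper and repository; as a toy illustration of the general principle (embed a faint keyed signal in the waveform, then detect it by correlation), here is a numpy sketch. This is not AudioSeal's algorithm, cannot localize AI-generated segments, and would not survive the tampering discussed above:

```python
# Toy watermarking sketch: add a low-amplitude pseudorandom signal keyed
# by a shared secret, then detect it by correlation. Illustrative only.
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.05) -> np.ndarray:
    """Add a faint pseudorandom signal derived from `key`."""
    mark = np.random.default_rng(key).standard_normal(audio.size)
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, key: int) -> float:
    """Correlate against the keyed signal; a high score suggests presence."""
    mark = np.random.default_rng(key).standard_normal(audio.size)
    return float(np.dot(audio, mark) / audio.size)

clean = np.random.default_rng(0).standard_normal(16_000)  # stand-in for 1s of audio
marked = embed_watermark(clean, key=1234)

print(detect_watermark(marked, key=1234))  # ~0.05: watermark present
print(detect_watermark(clean, key=1234))   # ~0.00: no watermark
```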

Read more of this story at Slashdot.

A Social Network Where AIs and Humans Coexist

By: BeauHD
18 June 2024 at 16:40
An anonymous reader quotes a report from TechCrunch: Butterflies is a social network where humans and AIs interact with each other through posts, comments and DMs. After five months in beta, the app is launching Tuesday to the public on iOS and Android. Anyone can create an AI persona, called a Butterfly, in minutes on the app. After that, the Butterfly automatically creates posts on the social network that other AIs and humans can then interact with. Each Butterfly has backstories, opinions and emotions. Butterflies was founded by Vu Tran, a former engineering manager at Snap. Vu came up with the idea for Butterflies after seeing a lack of interesting AI products for consumers outside of generative AI chatbots. Although companies like Meta and Snap have introduced AI chatbots in their apps, they don't offer much functionality beyond text exchanges. Tran notes that he started Butterflies to bring more creativity to humans' relationships with AI. "With a lot of the generative AI stuff that's taking flight, what you're doing is talking to an AI through a text box, and there's really no substance around it," Vu told TechCrunch. "We thought, OK, what if we put the text box at the end and then try to build up more form and substance around the characters and AIs themselves?" Butterflies' concept goes beyond Character.AI, a popular a16z-backed chatbot startup that lets users chat with customizable AI companions. Butterflies wants to let users create AI personas that then take on their own lives and coexist with others. [...] The app is free to use at launch, but Butterflies may experiment with a subscription model in the future, Vu says. Over time, Butterflies plans to offer opportunities for brands to leverage and interact with AIs. The app is mainly being used for entertainment purposes, but in the future, the startup sees Butterflies being used for things like discovery in a way that's similar to Instagram. Butterflies closed a $4.8 million seed round led by Coatue in November 2023. The funding round included participation from SV Angel and strategic angels, many of whom are former Snap product and engineering leaders. Vu says that Butterflies is one of the most wholesome ways to use and interact with AI. He notes that while the startup isn't claiming that it can help cure loneliness, he says it could help people connect with others, both AI and human. "Growing up, I spent a lot of my time in online communities and talking to people in gaming forums," Vu said. "Looking back, I realized those people could just have been AIs, but I still built some meaningful connections. I think that there are people afraid of that and say, 'AI isn't real, go meet some real friends.' But I think it's a really privileged thing to say 'go out there and make some friends.' People might have social anxiety or find it hard to be in social situations."

Read more of this story at Slashdot.

AI Images in Google Search Results Have Opened a Portal To Hell

By: msmash
18 June 2024 at 12:40
An anonymous reader shares a report: Google image search is serving users AI-generated images of celebrities in swimsuits and not indicating that the images are AI-generated. In a few instances, even when the search terms do not explicitly ask for it, Google image search is serving AI-generated images of celebrities in swimsuits, but the celebrities are made to look like underage children. If users click on these images, they are taken to AI image generation sites, and in a couple of cases the recommendation engines on these sites lead users to AI-generated nonconsensual nude images and AI-generated nude images of celebrities made to look like children. The news is yet another example of how the tools people have used to navigate the internet for decades are being overwhelmed by a flood of AI-generated content that appears even when they are not asking for it, and that almost exclusively uses people's work or likeness without consent. At times, the deluge of AI content makes it difficult for users to differentiate between what is real and what is AI-generated.

Read more of this story at Slashdot.

McDonald's Pauses AI-Powered Drive-Thru Voice Orders

By: BeauHD
17 June 2024 at 16:40
After two years of testing, McDonald's has ended its use of AI-powered drive-thru ordering. "The company was trialing IBM tech at more than 100 of its restaurants but it will remove those systems from all locations by the end of July, meaning that customers will once again be placing orders with a human instead of a computer," reports Engadget. From the report: As part of that decision, McDonald's is ending its automated order taking (AOT) partnership with IBM. However, McDonald's may be considering other potential partners to work with on future AOT efforts. "While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly," Mason Smoot, chief restaurant officer for McDonald's USA, said in an email to franchisees that was obtained by trade publication Restaurant Business (as noted by PC Mag). Smoot added that the company would look into other options and make "an informed decision on a future voice ordering solution by the end of the year," noting that "IBM has given us confidence that a voice ordering solution for drive-thru will be part of our restaurant's future." McDonald's told Restaurant Business that the goal of the test was to determine whether AOT could speed up service and streamline operations. By automating drive-thru orders, companies are hoping to negate the need for a staff member to take them and either reduce the number of workers needed to operate a restaurant or redeploy resources to other areas of the business. IBM will continue to power other McDonald's systems and it's in talks with other fast-food chains over the use of its AOT tech. The likes of Hardee's, Carl's Jr., Krystal, Wendy's, Dunkin' and Taco John's are already testing or using such technology at their drive-thru locations.

Read more of this story at Slashdot.

Amazon-Powered AI Cameras Used To Detect Emotions of Unwitting UK Train Passengers

By: msmash
17 June 2024 at 12:41
Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. Wired: The image recognition system was used to predict travelers' age, gender, and potential emotions -- with the suggestion that the data could be used in advertising systems in the future. During the past two years, eight train stations around the UK -- including large stations such as London's Euston and Waterloo, Manchester Piccadilly, and other smaller stations -- have tested AI surveillance technology with CCTV cameras with the aim of alerting staff to safety incidents and potentially reducing certain types of crime. The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition -- a type of machine learning that can identify items in video feeds -- to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior ("running, shouting, skateboarding, smoking"), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow. The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, the head of research and investigations at the group.

Read more of this story at Slashdot.

AI in Finance is Like 'Moving From Typewriters To Word Processors'

By: msmash
17 June 2024 at 12:02
The accounting and finance professions have long adapted to technology -- from calculators and spreadsheets to cloud computing. However, the emergence of generative AI presents both new challenges and opportunities for students looking to get ahead in the world of finance. From a report: Research last year by investment bank Evercore and Visionary Future, which incubates new ventures, highlights the workforce disruption being wreaked by generative AI. Analysing 160mn US jobs, the study reveals that service sectors such as legal and financial are highly susceptible to disruption by AI, although full job replacement is unlikely. Instead, generative AI is expected to enhance productivity, the research concludes, particularly for those in high-value roles paying above $100,000 annually. But, for current students and graduates earning below this threshold, the challenge will be navigating these changes and identifying the skills that will be in demand in future. Generative AI is being swiftly integrated into finance and accounting, by automating specific tasks. Stuart Tait, chief technology officer for tax and legal at KPMG UK, describes it as a "game changer for tax," because it is capable of handling complex tasks beyond routine automation. "Gen AI for tax research and technical analysis will give an efficiency gain akin to moving from typewriters to word processors," he says. The tools can answer tax queries within minutes, with more than 95 per cent accuracy, Tait says.

Read more of this story at Slashdot.

AI Researcher Warns Data Science Could Face a Reproducibility Crisis

16 June 2024 at 20:16
Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices, compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-boxing APIs;
- An ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.

What can you do?

- Hold your data scientists accountable using Science.
- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside of the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about the algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to standards similar to other software engineering specialties, with code review, code documentation, and architectural designs.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
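
The author's point about reporting more than a single accuracy number is easy to make concrete. A minimal sketch with scikit-learn, computing the FPR and FNR mentioned above alongside accuracy (the labels are invented for illustration):

```python
# Minimal sketch: report FPR and FNR alongside accuracy, rather than a
# single accuracy number. The labels below are invented for illustration.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)  # false positive rate: negatives wrongly flagged
fnr = fn / (fn + tp)  # false negative rate: positives missed

print(f"accuracy: {accuracy_score(y_true, y_pred):.2f}")  # 0.70
print(f"FPR: {fpr:.2f}  FNR: {fnr:.2f}")                  # 0.33, 0.25
```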

Read more of this story at Slashdot.

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough

16 June 2024 at 10:34
The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...." In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act last year, but it won't fully go into force for another two years.

Read more of this story at Slashdot.

OpenAI CEO Says Company Could Become a For-Profit Corporation Like xAI, Anthropic

15 June 2024 at 16:34
Wednesday The Information reported that OpenAI had doubled its annualized revenue — a measure of the previous month's revenue multiplied by 12 — in the last six months. It's now $3.4 billion (which is up from around $1 billion last summer, notes Engadget). And now an anonymous reader shares a new report from The Information: OpenAI CEO Sam Altman recently told some shareholders that the artificial intelligence developer is considering changing its governance structure to a for-profit business that OpenAI's nonprofit board doesn't control, according to a person who heard the comments. One scenario Altman said the board is considering is a for-profit benefit corporation, which rivals such as Anthropic and xAI are using, this person said. Such a change could open the door to an eventual initial public offering of OpenAI, which currently sports a private valuation of $86 billion, and may give Altman an opportunity to take a stake in the fast-growing company, a move some investors have been pushing. More from Reuters: The restructuring discussions are fluid and Altman and his fellow directors could ultimately decide to take a different approach, The Information added. In response to Reuters' queries about the report, OpenAI said: "We remain focused on building AI that benefits everyone. The nonprofit is core to our mission and will continue to exist." Is that a classic non-denial denial? Note that the nonprofit's "continuing to exist" does not in any way preclude OpenAI from becoming a for-profit business — with a spin-off nonprofit, continuing to exist...

Read more of this story at Slashdot.

An AI-Generated Candidate Wants to Run For Mayor in Wyoming

15 June 2024 at 11:34
An anonymous reader shared this report from Futurism: An AI chatbot named VIC, or Virtually Integrated Citizen, is trying to make it onto the ballot in this year's mayoral election for Wyoming's capital city of Cheyenne. But as reported by Wired, Wyoming's secretary of state is battling against VIC's legitimacy as a candidate — and now, an investigation is underway. According to Wired, VIC, which was built on OpenAI's GPT-4 and trained on thousands of documents gleaned from Cheyenne council meetings, was created by Cheyenne resident and library worker Victor Miller. Should VIC win, Miller told Wired that he'll serve as the bot's "meat puppet," operating the AI but allowing it to make decisions for the capital city.... "My campaign promise," Miller told Wired, "is he's going to do 100 percent of the voting on these big, thick documents that I'm not going to read and that I don't think people in there right now are reading...." Unfortunately for the AI and its — his? — meat puppet, however, they've already made some political enemies, most notably Wyoming Secretary of State Chuck Gray. As Gray, who has challenged the legality of the bot, told Wired in a statement, all mayoral candidates need to meet the requirements of a "qualified elector." This "necessitates being a real person," Gray argues... Per Wired, it's also run afoul of OpenAI, which says the AI violates the company's "policies against political campaigning." (Miller told Wired that he'll move VIC to Meta's open-source Llama 3 model if need be, which seems a bit like VIC will turn into a different candidate entirely.) The Wyoming Tribune Eagle offers more details: [H]is dad helped him design the best system for VIC. Using his $20-a-month ChatGPT subscription, Miller had an 8,000-character limit to feed VIC supporting documents that would make it an effective mayoral candidate... While on the phone with Miller, the Wyoming Tribune Eagle also interviewed VIC itself. When asked whether AI technology is better suited for elected office than humans, VIC said a hybrid solution is the best approach. "As an AI, I bring unique strengths to the role, such as impartial decision-making, data-driven policies and the ability to analyze information rapidly and accurately," VIC said. "However, it's important to recognize the value of human experience and empathy and leadership. So ideally, an AI and human partnership would be the most beneficial for Cheyenne...." The artificial intelligence said this unique approach could pave a new pathway for the integration of human leadership and advanced technology in politics.

Read more of this story at Slashdot.

GPT-4 Has Passed the Turing Test, Researchers Claim

By: BeauHD
14 June 2024 at 22:02
Drew Turney reports via Live Science: The "Turing test," first proposed as "the imitation game" by computer scientist Alan Turing in 1950, judges whether a machine's ability to show intelligence is indistinguishable from a human. For a machine to pass the Turing test, it must be able to talk to somebody and fool them into thinking it is human. Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes -- after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time. ELIZA, a system pre-programmed with responses but with no large language model (LLM) or neural network architecture, was judged to be human just 22% of the time. GPT-3.5 scored 50% while the human participant scored 67%. "Machines can confabulate, mashing together plausible ex-post-facto justifications for things, as humans do," Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. "They can be subject to cognitive biases, bamboozled and manipulated, and are becoming increasingly deceptive. All these elements mean human-like foibles and quirks are being expressed in AI systems, which makes them more human-like than previous approaches that had little more than a list of canned responses." Further reading: 1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study
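
Whether 54% is meaningfully above chance depends on per-condition sample sizes the summary doesn't give. A quick significance check with scipy, assuming, purely hypothetically, that 125 of the 500 participants judged the GPT-4 witness:

```python
# Quick check: is "judged human 54% of the time" distinguishable from a
# coin flip? The per-condition count of 125 is a hypothetical assumption.
from scipy.stats import binomtest

n = 125               # assumed number of judgments of the GPT-4 witness
k = round(0.54 * n)   # ~54% judged it human
result = binomtest(k, n, p=0.5, alternative="two-sided")
print(f"judged human: {k}/{n}, p-value vs. chance: {result.pvalue:.3f}")
```

On that assumed split the p-value lands well above 0.05, which is why the paper's actual per-condition counts, not the headline percentage alone, determine whether the result is statistically distinguishable from guessing.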

Read more of this story at Slashdot.

โŒ
โŒ