
Will a US Supreme Court Ruling Put Net Neutrality at Risk?

Today the Wall Street Journal reported that restoring net neutrality to America is "on shakier legal footing after a Supreme Court decision on Friday shifted power away from federal agencies." "It's hard to overstate the impact that this ruling could have on the regulatory landscape in the United States going forward," said Leah Malone, a lawyer at Simpson Thacher & Bartlett. "This could really bind U.S. agencies in their efforts to write new rules." Now that [the "Chevron deference"] is gone, the Federal Communications Commission is expected to have a harder time reviving net neutrality, a set of policies barring internet-service providers from assigning priority to certain web traffic... The Federal Communications Commission reclassified internet providers as public utilities under the Communications Act. There are pending court cases challenging the FCC's reinterpretation of that 1934 law, and the demise of Chevron deference heightens the odds of the agency losing in court, some legal experts said. "Chevron's thumb on the scale in favor of the agencies was crucial to their chances of success," said Geoffrey Manne, president of the International Center for Law and Economics. "Now that that's gone, their claims are significantly weaker." Other federal agencies could also be affected, according to the article. The ruling could make it harder for America's Environmental Protection Agency to crack down on power-plant pollution, and the Federal Trade Commission could face more trouble in court defending its recent ban on noncompete agreements. Lawyer Daniel Jarcho tells the Journal that the Court's decision "will unquestionably lead to more litigation challenging federal agency actions, and more losses for federal agencies." On Friday a White House press secretary issued a statement calling the court's decision "deeply troubling," and arguing that the court had "decided in the favor of special interests."

Read more of this story at Slashdot.

Threads Expands Fediverse Beta, Letting Users See Replies (and Likes) on Other Fediverse Sites like Mastodon

An anonymous Slashdot reader shared this report from the Verge: Threads will now let people like and see replies to their Threads posts that appear on other federated social media platforms, the company announced on Tuesday. Previously, if you made a post on Threads that was syndicated to another platform like Mastodon, you wouldn't be able to see responses to that post while still inside Threads. That meant you'd have to bounce back and forth between the platforms to stay up-to-date on replies... [I]n a screenshot, Meta notes that you can't reply to replies "yet," so it sounds like that feature will arrive in the future. "Threads is Meta's first app built to be compatible with the fediverse..." according to a Meta blog post. "Our vision is that people using other fediverse-compatible servers will be able to follow and interact with people on Threads without having a Threads profile, and vice versa, connecting communities..." [If you turn on "sharing"...] "Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation." And this week Instagram/Threads top executive Adam Mosseri posted that Threads is "also expanding the availability of the fediverse beta experience to more than 100 countries, and hope to roll it out everywhere soon."

Read more of this story at Slashdot.

Could We Lower The Carbon Footprint of Data Centers By Launching Them Into Space?

The Wall Street Journal reports that a European initiative studying the feasibility of data centers in space "has found that the project could be economically viable" while also reducing the data centers' carbon footprint. According to coordinator Thales Alenia Space, the project "could also generate a return on investment of several billion euros between now and 2050." The study, dubbed Ascend, short for Advanced Space Cloud for European Net zero emission and Data sovereignty, was funded by the European Union and sought to compare the environmental impacts of space-based and Earth-based data centers, the company said. Moving forward, the company plans to consolidate and optimize its results. Space data centers would be powered by solar energy outside the Earth's atmosphere, aiming to contribute to the European Union's goal of achieving carbon neutrality by 2050, the project coordinator said... Space data centers wouldn't require water to cool them, the company said. The 16-month study came to a "very encouraging" conclusion, project manager Damien Dumestier told CNBC. With some caveats... The facilities that the study explored launching into space would orbit at an altitude of around 1,400 kilometers (869.9 miles), about three times the altitude of the International Space Station. Dumestier explained that ASCEND would aim to deploy 13 space data center building blocks with a total capacity of 10 megawatts in 2036, in order to achieve the starting point for cloud service commercialization... The study found that, in order to significantly reduce CO2 emissions, a new type of launcher that is 10 times less emissive would need to be developed. ArianeGroup, one of the 12 companies participating in the study, is working to speed up the development of such reusable and eco-friendly launchers. The target is to have the first eco-launcher ready by 2035 and then to allow for 15 years of deployment in order to build up the huge capacity required to make the project feasible, said Dumestier... Michael Winterson, managing director of the European Data Centre Association, acknowledges that a space data center would benefit from increased solar-power efficiency without the interruption of weather patterns, but the center would require significant amounts of rocket fuel to keep it in orbit. Winterson estimates that even a small 1 megawatt center in low Earth orbit would need around 280,000 kilograms of rocket fuel per year at a cost of around $140 million in 2030, a calculation based on a significant decrease in launch costs which has yet to take place. "There will be specialist services that will be suited to this idea, but it will in no way be a market replacement," said Winterson. "Applications that might be well served would be very specific, such as military/surveillance, broadcasting, telecommunications and financial trading services. All other services would not competitively run from space," he added in emailed comments. [Merima Dzanic, head of strategy and operations at the Danish Data Center Industry Association] also signaled some skepticism around security risks, noting, "Space is being increasingly politicised and weaponized amongst the different countries. So obviously, there is a security implications on what type of data you send out there." It's not the only study looking at the potential of orbital data centers, notes CNBC. "Microsoft, which has previously trialed the use of a subsea data center that was positioned 117 feet deep on the seafloor, is collaborating with companies such as Loft Orbital to explore the challenges in executing AI and computing in space." The article also points out that the total global electricity consumption from data centers could exceed 1,000 terawatt-hours in 2026. "That's roughly equivalent to the electricity consumption of Japan, according to the International Energy Agency."

Read more of this story at Slashdot.

Get Ready For Nuclear Clocks

Long-time Slashdot reader jrronimo says JILA physicist Jun Ye's group "has made a breakthrough towards the next stage of precision timekeeping." From their paper recently published to arXiv: Optical atomic clocks use electronic energy levels to precisely keep track of time. A clock based on nuclear energy levels promises a next-generation platform for precision metrology and fundamental physics studies.... These results mark the start of nuclear-based solid-state optical clocks and demonstrate the first comparison of nuclear and atomic clocks for fundamental physics studies. This work represents a confluence of precision metrology, ultrafast strong field physics, nuclear physics, and fundamental physics.

Read more of this story at Slashdot.

Citing 'Crisis' in Local Reporting, Associated Press Creates Sister Organization to Seek Grants

Founded in 1846, the not-for-profit Associated Press distributes its news stories to other news outlets. But are free online sites putting those outlets at risk? This week the Associated Press wrote that a "crisis" in local and state news reporting "shows little signs of abating," and that it's now setting up "a sister organization that will seek to raise money" for those outlets. The organization, which will have a board of directors independent of the AP, will solicit philanthropic spending to boost this news coverage, both within the AP and through outside organizations, the news outlet said Tuesday. "We feel we have to lean in at this point, not pull back," said Daisy Veerasingham, the AP's president and CEO. "But the supporting mechanism, the local newspaper market that used to support this, can't afford to do that anymore." Veerasingham said she's been encouraged by preliminary talks with some funders who have expressed concern about the state of local journalism... The local news industry has collapsed over the past two decades, with the number of journalists working in newspapers dropping from 75,000 to 31,000 in 2022, according to Northwestern University. More than half of the nation's counties have no local news outlets or only one. The AP's CEO offered this succinct summary of their goal: "We want to add new products and services to help the industry."

Read more of this story at Slashdot.

90 Workers Given a Choice: Relocate Across the US, or Leave the Company

"The outdoor-apparel brand Patagonia has given 90 U.S. employees a choice," reports Business Insider: "tell the company by Friday that you're willing to relocate or leave your job." [Alternate URL here.] The employees all work in customer services, known at Patagonia as the customer-experience, or CX, team, and have been allowed to work remotely to field calls and inquiries. These workers received a text and email Tuesday morning about an "important" meeting... Two company executives, Amy Velligan and Bruce Old, told staff in a 15-minute video meeting that the team would be moving to a new "hub" model. CX employees are now expected to live within 60 miles of one of seven "hubs" β€” Atlanta; Salt Lake City; Reno, Nevada; Dallas; Austin; Chicago; or Pittsburgh. Workers were offered $4,000 toward relocation costs and extra paid time off. Those willing to relocate were told to do so by September 30. If CX staff are not willing to live near a hub city, they must leave the company. They were given 72 hours, until Friday, to confirm their decision... Access to company laptops and phones was shut off later that day until employees either agreed to relocate or said they wanted the severance, one affected CX worker said... Both employees who spoke to Business Insider believed this was because Patagonia didn't want to handle the increased demands of employees in states with higher costs of living. "We've been asking for raises for a long time, and they keep telling us that your wage is based on a Reno cost of living and where you choose to live is on you." According to the article, "The company hopes to bring staff together at the hubs at least once every six weeks for in-person training, company gatherings, or 'Activism Hours'." A company spokesperson described the changes as "crucial for us to build a vibrant team culture," and said there were workers who had been complaining about feeling disconnected. Though there may be another motive: "The reality is that our CX team has been running at 200% to 300% overstaffed for much of this year," she added. "While we hoped to reach the needed staffing levels through attrition, those numbers were very low, and retention remained high." One affected worker told Business Insider that the company's proposal "was very factual. If you don't live in these seven metro areas, you either need to move there or give us your stuff and hit the brick. If we don't respond by Friday, they will assume that we have chosen the severance package and we'll start that process." One worker added that the severance package they received was generous... Thanks to Slashdot reader NoWayNoShapeNoForm for sharing the article.

Read more of this story at Slashdot.

Lego Bricks Made From Meteorite Dust 3D Printed by Europe's Space Agency

Lego teamed up with the European Space Agency to make Lego pieces from actual meteorite dust, writes Engadget. "It's a proof of concept to show how astronauts could use moondust to build lunar structures." Consider the sheer amount of energy and money required to haul up building materials from Earth to the Moon. It would be a game changer to, instead, build everything from pre-existing lunar materials. There's a layer of rock and mineral deposits at the surface of the Moon, which is called lunar regolith... However, there isn't much lunar regolith here on Earth for folks to experiment with, so ESA scientists made their own by grinding up a really old meteorite. [4.5 billion years old, according to Lego's site; it was discovered in Africa in 2000.] The dust from this meteorite was turned into a mixture that was used to 3D print the Lego pieces. Voila. Moon bricks. They click together just like regular Lego bricks, though they only come in one color (space gray, obviously). "The result is amazing," says ESA Science Officer Aidan Cowley on the Lego site (though "the bricks may look a little rougher than usual. Importantly the clutch power still works, enabling us to play and test our designs.") "Nobody has built a structure on the Moon," Cowley said in an ESA statement. "So it was great to have the flexibility to try out all kinds of designs and building techniques with our space bricks." And the bricks will also be "helping to inspire the next generation of space engineers," according to the ESA's announcement, since they'll be on display in select Lego stores in the U.S., Canada, the U.K., Spain, France, Germany, the Netherlands, and Australia through September 20th.

Read more of this story at Slashdot.

Linux Foundation Announces Intent to Form LF Decentralized Trust

This week the Linux Foundation announced a new organization for decentralized systems and technologies, with an aim of "fostering innovation and collaboration" in both their development and deployment. It will build on existing Linux Foundation blockchain and digital identity projects, according to the announcement, while supporting "a rapidly growing decentralized technology landscape." To foster this broader ecosystem, LF Decentralized Trust will encompass the growing portfolio of Hyperledger projects and host new open source software, communities, standards, and specifications that are critical to the macro shift toward decentralized systems of distributed trust.... LF Decentralized Trust's expanded project and member ecosystem will be essential both to emerging tokenized asset classes and networks and to modernizing the core infrastructure for finance, trade, government, healthcare, and more. LF Decentralized Trust will serve as a neutral home for the open development of a broad range of ledger, identity, security, interoperability, scale, implementation, and related technologies... LF Decentralized Trust will also include new directed funding models that will drive strategic investments by members into individual projects and project resources. "With LF Decentralized Trust, we're expanding our commitment to open source innovation by embracing a wider array of decentralized technologies," said Jim Zemlin, Executive Director of the Linux Foundation. "This new, elevated foundation will enable the community to build a more robust ecosystem that drives forward transparency, security, and efficiency in global infrastructure." "After eight years of advancing the development of blockchain, decentralized identity and related technologies via the Hyperledger community, the time has come to broaden our effort and impact," said Daniela Barbosa, General Manager, Blockchain and Identity, the Linux Foundation. "Ledgers and ledger technologies are but one component of the decentralized systems that will underpin a digital-first global economy. LF Decentralized Trust is where we will gather and grow an expanded community and portfolio of technologies to deliver the transparency, reliability, security and efficiency needed to successfully upgrade critical systems around the world." The announcement includes quotes of support from numerous companies including Oracle, Siemens, Visa, Accenture, Citi, and Hitachi. Some highlights: "The formation of the LF Decentralized Trust reflects the growing demand for open source resources that are critical to the management and functionality of decentralized systems." - CEO of Digital Asset "The adoption of decentralized infrastructure is at an inflection point, reflecting the increasing demand from both enterprises and consumers for more secure and transparent digital transactions. As the industry leader for onchain data, blockchain abstraction, and interoperability, we're excited to see the formation of the LF Decentralized Trust and to expand our collaboration with leading financial institutions on advancing tokenized assets and the onchain economy at large." - CMO at Chainlink Labs "As a founding member of the Hyperledger Foundation, and given our unique position in the financial markets, we recognize the vast potential for open-source innovation and decentralized technologies when it comes to reducing risk, increasing resiliency and improving security. The expansion of Hyperledger Foundation into LF Decentralized Trust represents an exciting opportunity to continue expanding these groundbreaking technologies." - a managing director at DTCC

Read more of this story at Slashdot.

Colorado's Universal Basic Income Experiment Gets Surprising Results

In November of 2022, "More than 800 people were selected to participate in the Denver Basic Income Project," reports the Colorado Sun, "while they were living on the streets, in shelters, on friends' couches or in vehicles." One group received $1,000 a month, according to the article, while a second group received $6,500 in the first month, and then $500 for the next 11 months. (And a "control" group received $50 a month.) Amazingly, about 45% of participants in all three groups "were living in a house or apartment that they rented or owned" by the study's 10-month check-in point, according to the research. The number of nights spent in shelters among participants in the first and second groups decreased by half. And participants in those two groups reported an increase in full-time work, while the control group reported decreased full-time employment. The project also saved tax dollars, according to the report. Researchers tallied an estimated $589,214 in savings on public services, including ambulance rides, visits to hospital emergency departments, jail stays and shelter nights... The study, which began in November 2022 with payments to the first group of participants, has been extended for an additional eight months, until September, and organizers are attempting to raise money to extend it further.

Read more of this story at Slashdot.

Japan Achieves 402 Tb/s Data Rate - Using Current Fiber Technology

Tom's Hardware reports that Japan's National Institute of Information and Communications Technology (working with the Aston Institute of Photonic Technologies and Nokia Bell Labs) set a 402 terabits per second data transfer record over commercially available optical fiber cables. The NICT and its partners were able to transmit signals through 1,505 channels over 50 km (about 31 miles) of optical fiber cable for this experiment. It used six types of amplifiers and an optical gain equalizer that taps into 37 THz of previously unused bandwidth to enable the 402 Tb/s transfer speed. One of the amplifiers this was demonstrated with is a thulium-based doped fiber amplifier, which uses C-band or C+L-band systems. Additionally, semiconductor optical amplifiers and Raman amplifiers were used, which achieved a 256 Tb/s data rate through almost 20 THz. Other amplifiers were also used for this exercise, providing a cumulative bandwidth of 25 THz for up to a 119 Tb/s data rate. As a result, the maximum achieved data rate surpassed the previous record by over 25 percent while increasing transmission bandwidth by 35 percent. "This is achievable with currently available technology used by internet service providers..." the article points out. "With 'beyond 5G' potential speeds achievable through commercially available cables, it will likely further a new generation of internet services."
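A quick back-of-the-envelope reading of those numbers (my own arithmetic, not from the article, using only the figures reported above):

    # Rough sanity check on the reported record (figures as quoted above).
    total_rate_tbps = 402    # aggregate data rate, terabits per second
    bandwidth_thz = 37       # optical bandwidth exploited, THz
    channels = 1505          # wavelength channels used

    # Spectral efficiency: bits carried per second per hertz of bandwidth
    # (Tb/s divided by THz gives bit/s/Hz directly, since both are 1e12).
    print(f"~{total_rate_tbps / bandwidth_thz:.1f} bit/s per Hz")  # ~10.9

    # Average rate per wavelength channel, in gigabits per second.
    print(f"~{total_rate_tbps / channels * 1000:.0f} Gb/s per channel")  # ~267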

Read more of this story at Slashdot.

Are 'Immortal Stars' Feasting on Dark Matter in the Milky Way's Core?

"Stars very close to the center of our galaxy could be fueled by dark matter in perpetuity," writes Gizmodo, "according to a team of astronomers who recently studied the distant light sources." The group of stars, known as S-cluster stars, is just three light-years from the center of the Milky Way (for reference, we are about 26,000 light-years from the center of our galaxy, which hosts a supermassive black hole at its core). The stars are surprisingly young for their galactic neighborhood, yet they don't look like stars that simply migrated to this part of the Milky Way after forming in another location... As reported by Space.com, the research team posits that these weird stars may be accreting dark matter, which they then use as fuel to keep burning. Since models estimate there is plenty of dark matter near the galaxy's core, the stars are "forever young," as study lead author Isabelle John, an astrophysicist at the Kavli Institute for Particle Astrophysics and Cosmology told Space.com. Effectively, the stars have a long, long way to go before they start running low on fuel. The team's paper is currently hosted on the preprint server arXiv, meaning it has not yet gone through the process of peer review. Dark matter is only "seen" through its effects on other objects, the article points out β€” leading to lots of theories as to where it's actually located. "Earlier this year, a different team of researchers proposed that neutron stars β€” extremely dense stellar remnants β€” could actually be a source of dark matter. Last July, yet another team suggested that the Webb Telescope had detected stars that were powered by dark matter."

Read more of this story at Slashdot.

Amazon Labor Union, Airplane Hub Workers Ally with Teamsters Organizing Workers Nationwide

Two prominent unions are teaming up to challenge Amazon, reports the New York Times, "after years of organizing Amazon workers and pressuring the company to bargain over wages and working conditions." Members of the Amazon Labor Union "overwhelmingly chose to affiliate with the 1.3-million-member International Brotherhood of Teamsters" in a vote last Monday. While the Amazon Labor Union (or ALU) is the only union formally representing Amazon warehouse workers anywhere in America after an election in 2022, "it has yet to begin bargaining with Amazon, which continues to contest the election outcome." Leaders of both unions said the affiliation agreement would put them in a better position to challenge Amazon and would provide the Amazon Labor Union with more money and staff support... The Teamsters are ramping up their efforts to organize Amazon workers nationwide. The union voted to create an Amazon division in 2021, and Teamsters President Sean O'Brien was elected that year partly on a platform of making inroads at the company. The Teamsters told the ALU that they had allocated $8 million to support organizing at Amazon, according to ALU President Christian Smalls, and that the larger union was prepared to tap its more than $300 million strike and defense fund to aid in the effort... The Teamsters also recently reached an affiliation agreement with workers organizing at Amazon's largest airplane hub in the United States, a Kentucky facility known as KCVG. Experts have said unionizing KCVG could give workers substantial leverage because Amazon relies heavily on the hub to meet its one- and two-day shipping goals. Their agreement with the Teamsters says the Amazon Labor Union will also "lend its expertise to assist in organizing other Amazon facilities" across America, according to the article.

Read more of this story at Slashdot.

Slashdot Asks: What Do You Remember About the Web in 1994?

"The Short Happy Reign of the CD-ROM" was just one article in a Fast Company series called 1994 Week. As the week rolled along they also re-visited Yahoo, Netscape, and how the U.S. Congress "forced the videogame industry to grow up." But another article argues that it's in web pages from 1994 that "you can start to see in those weird, formative years some surprising signs of what the web would be, and what it could be." It's hard to say precisely when the tipping point was. Many point to September '93, when AOL users first flooded Usenet. But the web entered a new phase the following year. According to an MIT study, at the start of 1994, there were just 623 web servers. By year's end, it was estimated there were at least 10,000, hosting new sites including Yahoo!, the White House, the Library of Congress, Snopes, the BBC, sex.com, and something called The Amazing FishCam. The number of servers globally was doubling every two months. No one had seen growth quite like that before. According to a press release announcing the start of the World Wide Web Foundation that October, this network of pages "was widely considered to be the fastest-growing network phenomenon of all time." As the year began, Web pages were by and large personal and intimate, made by research institutions, communities, or individuals, not companies or brands. Many pages embodied the spirit, or extended the presence, of newsgroups on Usenet, or "User's Net." (Snopes and the Internet Movie Database, which landed on the Web in 1993, began as crowd-sourced projects on Usenet.) But a number of big companies, including Microsoft, Sun, Apple, IBM, and Wells Fargo, established their first modest Web outposts in 1994, a hint of the shopping malls and content farms and slop factories and strip mines to come. 1994 also marked the start of banner ads and online transactions (a CD, pizzas), and the birth of spam and phishing... [B]ack in '94, the salesmen and oilmen and land-grabbers and developers had barely arrived. In the calm before the storm, the Web was still weird, unruly, unpredictable, and fascinating to look at and get lost in. People around the world weren't just writing and illustrating these pages, they were coding and designing them. For the most part, the design was non-design. With a few eye-popping exceptions, formatting and layout choices were simple, haphazard, personal, and β€” in contrast to most of today's web β€” irrepressibly charming. There were no table layouts yet; cascading style sheets, though first proposed in October 1994 by Norwegian programmer HΓ₯kon Wium Lie, wouldn't arrive until December 1996... The highways and megalopolises would come later, courtesy of some of the world's biggest corporations and increasingly peopled by bots, but in 1994 the internet was still intimate, made by and for individuals... Soon, many people would add "under construction" signs to their Web pages, like a friendly request to pardon our dust. It was a reminder that someone was working on it β€” another indication of the craft and care that was going into this never-ending quilt of knowledge. The article includes screenshots of Netscape in action from browser-emulating site OldWeb.Today (albeit without using a 14.4 kbps modems). "Look in and think about how and why this web grew the way it did, and what could have been. Or try to imagine what life was like when the web wasn't worldwide yet, and no one knew what it really was." 
Slashdot reader tedlistens calls it "a trip down memory lane," offering "some telling glimpses of the future, and some lessons for it too." The article revisits 1994 sites like Global Network Navigator, Time-Warner's Pathfinder, and Wired's online site HotWired as well as 30-year-old versions of the home pages for Wells Fargo and Microsoft. What did they miss? Share your own memories in the comments. What do you remember about the web in 1994?

Read more of this story at Slashdot.

Amazon Retaliated After Employee Walkout Over Return-to-Office Policy, Says NLRB

America's National Labor Relations Board "has filed a complaint against Amazon..." reports the Verge, "that alleges the company 'unlawfully disciplined and terminated an employee' after they assisted in organizing walkouts last May in protest of Amazon's new return-to-work [three days per week] directives, issued early last year." [T]housands of Amazon employees signed petitions against the new mandate and staged a walkout several months later. Despite the protests and pushback, according to a report by Insider, in a meeting in early August 2023, Amazon CEO Andy Jassy reaffirmed the company's commitment to employees returning to the office for the majority of the week. The NLRB complaint alleges Amazon "interrogated" employees about the walkout using its internal Chime system. The employee was first put on a performance improvement plan by Amazon following their organizing efforts for the walkout and later "offered a severance payment of nine weeks' salary if the employee signed a severance agreement and global release in exchange for their resignation." According to the NLRB's lawyers, all of that was because the employee engaged in organizing, and the retaliation was intended to discourage "...protected, concerted activities...." The NLRB's general counsel is seeking several different forms of remediation from Amazon, including reimbursement for the employee's "financial harms and search-for-work and work related expenses," a letter of apology, and a "Notice to Employees" that must be physically posted at the company's facilities across the country, distributed electronically, and read by an Amazon rep at a recorded videoconference. Amazon says its actions were entirely unrelated to the worker's activism against its return-to-office policies. An Amazon spokesperson told the Verge that instead, the employee "consistently underperformed over a period of nearly a year and repeatedly failed to deliver on projects she was assigned. Despite extensive support and coaching, the former employee was unable to improve her performance and chose to leave the company."

Read more of this story at Slashdot.

Framework Laptop 13 is Getting a Drop-In RISC-V Mainboard Option

An anonymous reader shared this report from the OMG Ubuntu blog: Those of you who own a Framework Laptop 13 (consider me jealous, btw) or are considering buying one in the near future may be interested to know that a RISC-V motherboard option is in the works. DeepComputing, the company behind the recently-announced Ubuntu RISC-V laptop, is working with Framework Computer Inc, the company behind the popular, modular, and Linux-friendly Framework laptops, on a RISC-V mainboard. This is a new announcement; the component itself is in early development, and there's no tentative price tag or pre-order date pencilled in... [T]he Framework RISC-V mainboard will use soldered memory and non-upgradeable eMMC storage (though it can boot from microSD cards). It will 'drop into' any Framework Laptop 13 chassis (or Cooler Master Mainboard Case), per Framework's modular ethos... Framework mentions DeepComputing is "working closely with the teams at Canonical and Red Hat to ensure Linux support is solid through Ubuntu and Fedora", which is great news, and cements Canonical's seriousness about supporting Ubuntu on RISC-V. "We want to be clear that in this generation, it is focused primarily on enabling developers, tinkerers, and hobbyists to start testing and creating on RISC-V," says Framework's announcement. "The peripheral set and performance aren't yet competitive with our Intel and AMD-powered Framework Laptop Mainboards." They're calling the Mainboard "a huge milestone both for expanding the breadth of the Framework ecosystem and for making RISC-V more accessible than ever... DeepComputing is demoing an early prototype of this Mainboard in a Framework Laptop 13 at the RISC-V Summit Europe next week, and we'll be sharing more as this program progresses." And their announcement included these additional updates: "Just like we did for Framework Laptop 16 last week, today we're sharing open source CAD for the Framework Laptop 13 shell, enabling development of skins, cases, and accessories." "We now have Framework Laptop 13 Factory Seconds systems available with British English and German keyboards, making entering the ecosystem more affordable than ever." "We're eager to continue growing a new Consumer Electronics industry that is grounded in open access, repairability, and customization at every level."

Read more of this story at Slashdot.

Why Washington's Mount Rainier Still Makes Volcanologists Worry

It's been 1,000 years since there was a significant volcanic eruption from Mount Rainier, CNN reminds readers. It's a full 60 miles from Tacoma, Washington, and 90 miles from Seattle. Yet "more than Hawaii's bubbling lava fields or Yellowstone's sprawling supervolcano, it's Mount Rainier that has many U.S. volcanologists worried." "Mount Rainier keeps me up at night because it poses such a great threat to the surrounding communities," said Jess Phoenix, a volcanologist and ambassador for the Union of Concerned Scientists, on an episode of CNN's series "Violent Earth With Liev Schreiber." The sleeping giant's destructive potential lies not with fiery flows of lava, which, in the event of an eruption, would be unlikely to extend more than a few miles beyond the boundary of Mount Rainier National Park in the Pacific Northwest. And the majority of volcanic ash would likely dissipate downwind to the east, away from population centers, according to the US Geological Survey. Instead, many scientists fear the prospect of a lahar: a swiftly moving slurry of water and volcanic rock originating from ice or snow rapidly melted by an eruption, which picks up debris as it flows through valleys and drainage channels. "The thing that makes Mount Rainier tough is that it is so tall, and it's covered with ice and snow, and so if there is any kind of eruptive activity, hot stuff ... will melt the cold stuff and a lot of water will start coming down," said Seth Moran, a research seismologist at USGS Cascades Volcano Observatory in Vancouver, Washington. "And there are tens, if not hundreds of thousands of people who live in areas that potentially could be impacted by a large lahar, and it could happen quite quickly." The deadliest lahar in recent memory was in November 1985, when Colombia's Nevado del Ruiz volcano erupted. Just a couple of hours after the eruption started, a river of mud, rocks, lava and icy water swept over the town of Armero, killing over 23,000 people in a matter of minutes... Bradley Pitcher, a volcanologist and lecturer in Earth and environmental sciences at Columbia University, said in an episode of CNN's "Violent Earth" that Mount Rainier has about eight times the amount of glaciers and snow as Nevado del Ruiz had when it erupted. "There's the potential to have a much more catastrophic mudflow...." Lahars typically occur during volcanic eruptions but also can be caused by landslides and earthquakes. Geologists have found evidence that at least 11 large lahars from Mount Rainier have reached into the surrounding area, known as the Puget Lowlands, in the past 6,000 years, Moran said. Two major U.S. cities, Tacoma and South Seattle, "are built on 100-foot-thick (30.5-meter) ancient mudflows from eruptions of Mount Rainier," the volcanologist said on CNN's "Violent Earth" series. CNN's article adds that the US Geological Survey set up a lahar detection system at Mount Rainier in 1998, "which since 2017 has been upgraded and expanded. About 20 sites on the volcano's slopes and the two paths identified as most at risk of a lahar now feature broadband seismometers that transmit real-time data and other sensors including trip wires, infrasound sensors, web cameras and GPS receivers."

Read more of this story at Slashdot.

Apple Might Partner with Meta on AI

Earlier this month Apple announced a partnership with OpenAI to bring ChatGPT to Siri. "Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks over a similar deal," according to TechCrunch: A deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions... Apple has said it will ask for users' permission before sharing any questions and data with ChatGPT. Presumably, any integration with Meta would work similarly.

Read more of this story at Slashdot.

Michigan Lawmakers Advance Bill Requiring All Public High Schools To At Least Offer CS

Michigan's House of Representatives passed a bill requiring all the state's public high schools to offer a computer science course by the start of the 2027-28 school year. (The bill now goes to the Senate, according to a report from Chalkbeat Detroit.) Long-time Slashdot reader theodp writes: Michigan is also removing the requirement for CS teacher endorsements in 2026, paving the way for CS courses to be taught in 2027 by teachers who have "demonstrated strong computer science skills" but do not hold a CS endorsement. Michigan's easing of CS teaching requirements comes in the same year that New York State will begin requiring credentials for all CS teachers. With lobbyist Julia Wynn from the tech giant-backed nonprofit Code.org sitting at her side, Michigan State Rep. Carol Glanville introduced the CS bill (HB5649) to the House in May (hearing video, 16:20). "This is not a graduation requirement," Glanville emphasized in her testimony. Code.org's Wynn called the bill "an important first step" (after all, Code.org's goal is "to require all students to take CS to earn a HS diploma"), noting that Code.org has also been closely collaborating with Michigan's Education department "on the language and the Bill since inception." Wynn went on to inform lawmakers that "even just attending a high school that offers computer science delivers concrete employment and earnings benefits for students," citing a recent Brookings Institution article that also noted "30 states have adopted a key part of Code.org Advocacy Coalition's policy recommendations, which require all high schools to offer CS coursework, while eight states (and counting) have gone a step further in requiring all students to take CS as a high school graduation requirement." Minutes from the hearing report that other parties submitting cards in support of HB 5649 included Amazon (a $3+ million Code.org Platinum Supporter) and AWS (a Code.org In-Kind Supporter), as well as the College Board (which offers the AP CS A and CSP exams) and TechNet (which notes its "teams at the federal and state levels advocate with policymakers on behalf of our member companies").

Read more of this story at Slashdot.

Longtime Linux Wireless Developer Passes Away. RIP Larry Finger

Slashdot reader unixbhaskar shared this report from Phoronix: Larry Finger, who has contributed to the Linux kernel since 2005 and has seen more than 1,500 kernel patches upstreamed into the mainline Linux kernel, has sadly passed away. His wife shared the news of Larry Finger's passing this weekend on the linux-wireless mailing list in a brief statement. Reactions are being shared around the internet. LWN writes: The LWN Kernel Source Database shows that Finger contributed to 94 releases in the (Git era) kernel history, starting with 2.6.16 (1,464 commits in total). He will be missed... Thanks in part to his contributions, Linux wireless hardware support has come a long way over the past two decades. Larry was a frequent contributor to the Linux Wireless and Linux Kernel mailing lists. (Here's a 2006 discussion he had about Git with Linus Torvalds.) Larry also answered 54 Linux questions on Quora, and in 2005 wrote three articles for Linux Journal. And Larry's GitHub profile shows 122 contributions to open source projects just in 2024. In Reddit's Linux forum, one commenter wrote, "He was 84 years old and was still writing code. What a legend. May he rest in peace."

Read more of this story at Slashdot.

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals

"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the executive director of the Authors Guild, America's writers' advocacy group. OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work." [L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from. OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...." AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends. Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering... We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients: other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures. The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. The Guild's executive director also notes that there's "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and DeviantArt, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs." They conclude that "The best chance for the wider community of artists is to band together."

Read more of this story at Slashdot.

Tuesday SpaceX Launches a NOAA Satellite to Improve Weather Forecasts for Earth and Space

On Tuesday a SpaceX Falcon Heavy rocket will launch a special satellite: a state-of-the-art weather-watcher from America's National Oceanic and Atmospheric Administration. It will complete a series of four GOES-R satellite launches that began in 2016. Space.com drills down into how these satellites have changed weather forecasts: More than seven years later, with three of the four satellites in the series orbiting the Earth, scientists and researchers say they are pleased with the results and how the advanced technology has been a game changer. "I think it has really lived up to its hype in thunderstorm forecasting. Meteorologists can see the convection evolve in near real-time and this gives them enhanced insight on storm development and severity, making for better warnings," John Cintineo, a researcher from NOAA's National Severe Storms Laboratory, told Space.com in an email. "Not only does the GOES-R series provide observations where radar coverage is lacking, but it often provides a robust signal before radar, such as when a storm is strengthening or weakening. I'm sure there have been many other improvements in forecasts and environmental monitoring over the last decade, but this is where I have most clearly seen improvement," Cintineo said. In addition to helping predict severe thunderstorms, each satellite has collected images and data on heavy rain events that could trigger flooding, detected low clouds and fog as they form, and made significant improvements to forecasts and services used during hurricane season. "GOES provides our hurricane forecasters with faster, more accurate and detailed data that is critical for estimating a storm's intensity, including cloud top cooling, convective structures, specific features of a hurricane's eye, upper-level wind speeds, and lightning activity," Ken Graham, director of NOAA's National Weather Service, told Space.com in an email. Instruments such as the Advanced Baseline Imager have three times the spectral channels, four times the image quality, and five times the imaging speed of the previous GOES satellites. The Geostationary Lightning Mapper, the first instrument of its kind in orbit, flies on the GOES-R series and allows scientists to view lightning 24/7, both strikes that make contact with the ground and those from cloud to cloud. "GOES-U and the GOES-R series of satellites provides scientists and forecasters weather surveillance of the entire western hemisphere, at unprecedented spatial and temporal scales," Cintineo said. "Data from these satellites are helping researchers develop new tools and methods to address problems such as lightning prediction, sea-spray identification (sea-spray is dangerous for mariners), severe weather warnings, and accurate cloud motion estimation. The instruments from GOES-R also help improve forecasts from global and regional numerical weather models, through improved data assimilation." The final satellite, launching Tuesday, includes a new sensor, the Compact Coronagraph, "that will monitor weather outside of Earth's atmosphere, keeping an eye on what space weather events are happening that could impact our planet," according to the article. "It will be the first near real time operational coronagraph that we have access to," Rob Steenburgh, a space scientist at NOAA's Space Weather Prediction Center, told Space.com on the phone. "That's a huge leap for us because up until now, we've always depended on a research coronagraph instrument on a spacecraft that was launched quite a long time ago."

Read more of this story at Slashdot.

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity

The Roddenberry Foundation, named for Star Trek creator Gene Roddenberry, "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...." Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...." Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater. The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early." "We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone, regardless of background, to thrive." "By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."

Read more of this story at Slashdot.

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

Automated license plate readers "pose risks to public safety," argues the EFF, "that may outweigh the crimes they are attempting to address in the first place." When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials... Unlike location data a person shares with, say, the GPS-based navigation app Waze, ALPRs collect and store this information without consent, and there is very little a person can do to have this information purged from these systems... Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month noting that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial." That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023. Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs... If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage... The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity... But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. The article concludes that public safety agencies should "collect only the data they need for actual criminal investigations. They must never store more data than they adequately protect within their limited resources - or they must keep the public safe from data breaches by not collecting the data at all."

Read more of this story at Slashdot.

Our Brains React Differently to Deepfake Voices, Researchers Find

"University of Zurich researchers have discovered that our brains process natural human voices and "deepfake" voices differently," writes Slashdot reader jenningsthecat. From the University's announcement: The researchers first used psychoacoustical methods to test how well human voice identity is preserved in deepfake voices. To do this, they recorded the voices of four male speakers and then used a conversion algorithm to generate deepfake voices. In the main experiment, 25 participants listened to multiple voices and were asked to decide whether or not the identities of two voices were the same. Participants either had to match the identity of two natural voices, or of one natural and one deepfake voice. The deepfakes were correctly identified in two thirds of cases. "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people," says Claudia Roswandowitz, first author and a postdoc at the Department of Computational Linguistics. The researchers then used imaging techniques to examine which brain regions responded differently to deepfake voices compared to natural voices. They successfully identified two regions that were able to recognize the fake voices: the nucleus accumbens and the auditory cortex. "The nucleus accumbens is a crucial part of the brain's reward system. It was less active when participants were tasked with matching the identity between deepfakes and natural voices," says Claudia Roswandowitz. In contrast, the nucleus accumbens showed much more activity when it came to comparing two natural voices. The complete paper appears in Nature.

Read more of this story at Slashdot.

Multiple AI Companies Ignore Robots.Txt Files, Scrape Web Content, Says Licensing Firm

Multiple AI companies are ignoring robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters, citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled. "What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges." The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers): "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry." Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
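For context, robots.txt is a plain-text file served from a site's root that lists which crawlers (identified by user-agent token) may fetch which paths, and compliance is entirely voluntary, which is exactly the gap TollBit describes. Below is a minimal sketch of the check a well-behaved crawler performs, using Python's standard library; the example site and rules are hypothetical, and "GPTBot" is OpenAI's published crawler token:

    from urllib.robotparser import RobotFileParser

    # Hypothetical publisher rules: block one AI crawler everywhere,
    # let everyone else crawl anything except /drafts/.
    rules = """
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Disallow: /drafts/
    """

    parser = RobotFileParser()
    parser.parse(rules.splitlines())

    # A compliant crawler asks before fetching; nothing enforces the answer.
    print(parser.can_fetch("GPTBot", "https://example.com/articles/1"))        # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
    print(parser.can_fetch("SomeOtherBot", "https://example.com/drafts/x"))    # False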

Read more of this story at Slashdot.

America's Used EV Price Crash Keeps Getting Deeper

Long-time Slashdot reader schwit1 shares CNBC's report on the U.S. car market: Back in February, used electric vehicle prices dipped below used gasoline-powered vehicle prices for the first time ever, and the pricing cliff keeps getting steeper as car buyers reject any "premium" tag formerly associated with EVs. The decline has been dramatic over the past year. In June 2023, average used EV prices were over 25% higher than used gas car prices, but by May, used EVs were on average 8% lower than the average price for a used gasoline-powered car in the U.S. In dollar terms, the gap widened from $265 in February to $2,657 in May, according to an analysis of 2.2 million one- to five-year-old used cars conducted by iSeeCars. Over the past year, gasoline-powered used vehicle prices have declined between 3% and 7%, while electric vehicle prices have decreased between 30% and 39%. "It's clear used car shoppers will no longer pay a premium for electric vehicles," iSeeCars executive analyst Karl Brauer stated in an iSeeCars report published last week. Electric power is now a detractor in the consumer's mind, with EVs "less desirable" and therefore less valuable than traditional cars, he said. The article notes there's been a price war among EV manufacturers β€” and that newer EV models might be more attractive due to "longer ranges and improved battery life with temperature control for charging." But CNBC also notes a silver lining. "As more EVs enter the used market at lower prices, the EV market does become available to a wider market of potential first-time EV owners."

Read more of this story at Slashdot.

Launch of Chinese-French Satellite Scattered Debris Over Populated Area

"A Chinese launch of the joint Sino-French SVOM mission to study Gamma-ray bursts early Saturday saw toxic rocket debris fall over a populated area..." writes Space News: SVOM is a collaboration between the China National Space Administration (CNSA) and France's Centre national d'Γ©tudes spatiales (CNES). The mission will look for high-energy electromagnetic radiation from these events in the X-ray and gamma-ray ranges using two French and two Chinese-developed science payloads... Studying gamma-ray bursts, thought to be caused by the death of massive stars or collisions between stars, could provide answers to key questions in astrophysics. This includes the death of stars and the creation of black holes. However the launch of SVOM also created an explosion of its own closer to home.A video posted on Chinese social media site Sina Weibo appears to show a rocket booster falling on a populated area with people running for cover. The booster fell to Earth near Guiding County, Qiandongnan Prefecture in Guizhou province, according to another post... A number of comments on the video noted the danger posed by the hypergolic propellant from the Long March rocket... The Long March 2C uses a toxic, hypergolic mix of nitrogen tetroxide and unsymmetrical dimethylhydrazine (UDMH). Reddish-brown gas or smoke from the booster could be indicative of nitrogen tetroxide, while a yellowish gas could be caused by hydrazine fuel mixing with air. Contact with either remaining fuel or oxidizer from the rocket stage could be very harmful to individuals. "Falling rocket debris is a common issue with China's launches from its three inland launch sites..." the article points out. "Authorities are understood to issue warnings and evacuation notices for areas calculated to be at risk from launch debris, reducing the risk of injuries.

Read more of this story at Slashdot.

Open Source ChatGPT Clone 'LibreChat' Lets You Use Multiple AI Services - While Owning Your Data

Slashdot reader DevNull127 writes: A free and open source ChatGPT clone β€” named LibreChat β€” lets its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others β€” both open-source and closed-source β€” and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google β€” as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generations. (LibreChat also offers a database that tracks "conversation state" β€” making it possible to switch to a different AI model in mid-conversation...) Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." And its creator, Danny Avila, says in some cases it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLM's." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 β€” and LibreChat is "inherently completely private". From the article: With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable" Avila notes, "and will become more capable in general over time." And they're also compatible with LibreChat...
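
The "unified interface" idea is easiest to see in code. Below is a hypothetical sketch of the pattern such an app implies, not LibreChat's actual source: each provider sits behind one small interface, and the conversation state lives outside any single provider, which is what makes switching models mid-conversation possible.

```python
from dataclasses import dataclass, field
from typing import Protocol

# Hypothetical sketch of a provider-agnostic chat layer (not LibreChat's
# actual code). Any backend that satisfies ChatProvider can be swapped in.
class ChatProvider(Protocol):
    def complete(self, messages: list[dict]) -> str: ...

@dataclass
class Conversation:
    # State is stored here, not inside a provider, so the model
    # can change between turns without losing history.
    messages: list[dict] = field(default_factory=list)

    def ask(self, provider: ChatProvider, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        reply = provider.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```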

Read more of this story at Slashdot.

Walmart Announces Electronic Shelf Labels They Can Change Remotely

Walmart "became the latest retailer to announce it's replacing the price stickers in its aisles with electronic shelf labels," reports NPR. "The new labels allow employees to change prices as often as every ten seconds." "If it's hot outside, we can raise the price of water and ice cream. If there's something that's close to the expiration date, we can lower the price β€” that's the good news," said Phil Lempert, a grocery industry analyst... The ability to easily change prices wasn't mentioned in Walmart's announcement that 2,300 stores will have the digitized shelf labels by 2026. Daniela Boscan, who participated in Walmart's pilot of the labels in Texas, said the label's key benefits are "increased productivity and reduced walking time," plus quicker restocking of shelves... As higher wages make labor more expensive, retailers big and small can benefit from the increased productivity that digitized shelf labels enable, said Santiago Gallino, a professor specializing in retail management at the University of Pennsylvania's Wharton School. "The bottom line, at least when I talk to retailers, is the calculation of the amount of labor that they're going to save by incorporating this. And in that sense, I don't think that this is something that only large corporations like Walmart or Target can benefit from," Gallino said. "I think that smaller chains can also see the potential benefit of it." Indeed, Walmart's announcement calls the tech "a win" for both customers and their workers, arguing that updating prices with a mobile app means "reducing the need to walk around the store to change paper tags by hand and giving us more time to support customers in the store." Professor Gallino tells NPR he doesn't think Walmart will suddenly change prices β€” though he does think Walmart will use it to keep their offline and online prices identical. The article also points out you can already find electronic shelf labels at other major grocers inlcuding Amazon Fresh stores and Whole Foods β€” and that digitized shelf labels "are even more common in stores across Europe." Another feature of electronic shelf labels is their product descriptions. [Grocery analyst] Lempert notes that barcodes on the new labels can provide useful details other than the price. "They can actually be used where you take your mobile device and you scan it and it can give you more information about the product β€” whether it's the sourcing of the product, whether it's gluten free, whether it's keto friendly. That's really the promise of what these shelf tags can do," Lempert said. Thanks to long-time Slashdot reader loveandpeace for sharing the article.

Read more of this story at Slashdot.

Data Dump of Patient Records Possible After UK Hospital Breach

An anonymous reader shared this report from the Associated Press: An investigation into a ransomware attack earlier this month on London hospitals by the Russian group Qilin could take weeks to complete, the country's state-run National Health Service said Friday, as concerns grow over a reported data dump of patient records. Hundreds of operations and appointments are still being canceled more than two weeks after the June 3 attack on NHS provider Synnovis, which provides pathology services primarily in southeast London... NHS England said Friday that it has been "made aware" that data connected to the attack have been published online. According to the BBC, Qilin shared almost 400GB of data, including patient names, dates of birth and descriptions of blood tests, on their darknet site and Telegram channel... According to Saturday's edition of the Guardian newspaper, records covering 300 million patient interactions, including the results of blood tests for HIV and cancer, were stolen during the attack. A website and helpline have been set up for patients affected.

Read more of this story at Slashdot.

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification

In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally-safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so glibc) β€” a fundamental component of that Red Hat In-Vehicle Operating System. From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meet or exceed stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens the confidence in Linux as a viable platform of choice for safety related automotive applications of the future... By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety certified Linux-based operating system capable of concurrently supporting multiple safety and non-safety related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts. "Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..." Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."
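
The certified artifact here is the C math library itself (glibc's libm.so), the shared object behind routines like sin() and exp() that application code calls constantly. On a glibc-based Linux system you can exercise that exact library directly; a minimal sketch from Python via ctypes, assuming libm is present in the standard location:

```python
import ctypes
import ctypes.util

# Load the system math library (libm.so) directly -- the glibc component
# Red Hat had certified. Assumes a Linux system with glibc installed.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the real C signature of double sin(double); ctypes defaults to int.
libm.sin.restype = ctypes.c_double
libm.sin.argtypes = [ctypes.c_double]

print(libm.sin(1.5707963267948966))  # ~1.0, i.e. sin(pi/2)
```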

Read more of this story at Slashdot.

Linux Foundation's 'Open Source Security Foundation' Launches New Threat Intelligence Mailing List

The Linux Foundation's "Open Source Security Foundation" (or OpenSSF) is a cross-industry forum to "secure the development, maintenance, and consumption of the open source software". And now the OpenSSF has launched a new mailing list "which aims to monitor the threat landscape of open-source project vulnerabilities," reports I Programmer, "in order to provide real time alerts to anyone subscribed." The Record explains its origins: OpenSSF General Manager Omkhar Arasaratnam said that at a recent open source event, members of the community ran a tabletop exercise where they simulated a security incident involving the discovery of a zero-day vulnerability. They worked their way through the open source ecosystem β€” from cloud providers to maintainers to end users β€” clearly defining how the discovery of a vulnerability would be dealt with from top to bottom. But one of the places where they found a gap is in the dissemination of information widely. "What we lack within the open source community is a place in which we can convene to distribute indicators of compromise (IOCs) and tactics, techniques and procedures (TTPs) in a way that will allow the community to identify threats when our packages are under attack," Arasaratnam said... "[W]e're going to be standing up a mailing list for which we can share this information throughout the community and there can be discussion of things that are being seen. And that's one of the ways that we're responding to this gap that we saw...." The Siren mailing list will encourage public discussions on security flaws, concepts, and practices in the open source community with individuals who are not typically engaged in traditional upstream communication channels... Members of the Siren email list will get real-time updates about emerging threats that may be relevant to their projects... OpenSSF has created a signup page for those interested and urged others to share the email list with other open source community members... OpenSSF ecosystem strategist Christopher Robinson (also security communications director for Intel) told the site he expects government agencies and security researchers to be involved in the effort. And he issued this joint statement with OpenSSF ecosystem strategist Bennett Pursell: By leveraging the collective knowledge and expertise of the open source community and other security experts, the OpenSSF Siren empowers projects of all sizes to bolster their cybersecurity defenses and increase their overall awareness of malicious activities. Whether you're a developer, maintainer, or security enthusiast, your participation is vital in safeguarding the integrity of open source software. In less than a month, the mailing list has already grown to over 800 members...

Read more of this story at Slashdot.

Microsoft Admits No Guarantee of Sovereignty For UK Policing Data

An anonymous reader shared this report from Computer Weekly: Microsoft has admitted to Scottish policing bodies that it cannot guarantee the sovereignty of UK policing data hosted on its hyperscale public cloud infrastructure, despite its systems being deployed throughout the criminal justice sector. According to correspondence released by the Scottish Police Authority (SPA) under freedom of information (FOI) rules, Microsoft is unable to guarantee that data uploaded to a key Police Scotland IT system β€” the Digital Evidence Sharing Capability (DESC) β€” will remain in the UK as required by law. While the correspondence has not been released in full, the disclosure reveals that data hosted in Microsoft's hyperscale public cloud infrastructure is regularly transferred and processed overseas; that the data processing agreement in place for the DESC did not cover UK-specific data protection requirements; and that while the company has the ability to make technical changes to ensure data protection compliance, it is only making these changes for DESC partners and not other policing bodies because "no one else had asked". The correspondence also contains acknowledgements from Microsoft that international data transfers are inherent to its public cloud architecture. As a result, the issues identified with the Scottish Police will equally apply to all UK government users, many of whom face similar regulatory limitations on the offshoring of data. The recipient of the FOI disclosures, Owen Sayers β€” an independent security consultant and enterprise architect with over 20 years' experience in delivering national policing systems β€” concluded it is now clear that UK policing data has been travelling overseas and "the statements from Microsoft make clear that they 100% cannot comply with UK data protection law".

Read more of this story at Slashdot.

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels?

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..." They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America β€” "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry there may not be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta uses as much power annually as 7 million laptops running eight hours every day, based on data shared publicly by the company... [Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement. The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show: heavily polluting plants become necessary to stabilize the overall power grid because of these purchases, to make sure everyone has enough electricity. The article quotes a project director at the nonprofit Data & Society, which tracks the effect of AI and accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment." The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country." Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes.
"In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.

Read more of this story at Slashdot.

Systemd 256.1 Addresses Complaint That 'systemd-tmpfiles' Could Unexpectedly Delete Your /home Directory

"A good portion of my home directory got deleted," complained a bug report for systemd filed last week. It requested an update to a flag for the systemd-tmpfiles tool which cleans up files and directories: "a huge warning next to --purge. This option is dangerous, so it should be made clear that it's dangerous." The Register explains: As long as five years ago, systemd-tmpfiles had moved on past managing only temporary files β€” as its name might suggest to the unwary. Now it manages all sorts of files created on the fly ... such as things like users' home directories. If you invoke the systemd-tmpfiles --purge command without specifying that very important config file which tells it which files to handle, version 256 will merrily purge your entire home directory. The bug report first drew a cool response from systemd developer Luca Boccassi of Microsoft: So an option that is literally documented as saying "all files and directories created by a tmpfiles.d/ entry will be deleted", that you knew nothing about, sounded like a "good idea"? Did you even go and look what tmpfiles.d entries you had beforehand? Maybe don't just run random commands that you know nothing about, while ignoring what the documentation tells you? Just a thought eh But the report then triggered "much discussion," reports Phoronix. Some excerpts: Lennart Poettering: "I think we should fail --purge if no config file is specified on the command line. I see no world where an invocation without one would make sense, and it would have caught the problem here." Red Hat open source developer Zbigniew JΓ„(TM)drzejewski-Szmek: "We need to rethink how --purge works. The principle of not ever destroying user data is paramount. There can be commands which do remove user data, but they need to be minimized and guarded." Systemd contributor Betonhaus: "Having a function that declares irreplaceable files β€” such as the contents of a home directory β€” to be temporary files that can be easily purged, is at best poor user interfacing design and at worst a severe design flaw." But in the end, Phoronix writes, systemd-tmpfiles behavior "is now improved upon." "Merged Wednesday was this patch that now makes systemd-tmpfiles accept a configuration file when running purge. That way the user must knowingly supply the configuration file(s) to which files they would ultimately like removed. The documentation has also been improved upon to make the behavior more clear." Thanks to long-time Slashdot reader slack_justyb for sharing the news.

Read more of this story at Slashdot.

Gilead's Twice-Yearly Shot to Prevent HIV Succeeds in Late-Stage Trial

An anonymous reader shared this report from CNBC: Gilead's experimental twice-yearly medicine to prevent HIV was 100% effective in a late-stage trial, the company said Thursday. None of the roughly 2,000 women in the trial who received the lenacapavir shot had contracted HIV by an interim analysis, prompting the independent data monitoring committee to recommend Gilead unblind the Phase 3 trial and offer the treatment to everyone in the study. Other participants had received standard daily pills. The company expects to share more data by early next year, the article adds, and if those results are positive, the company could bring its drug to the market as soon as late 2025. (By Friday the company's stock price had risen nearly 12%.) There are already other HIV-prevention options, the article points out, but they're taken by "only a little more than one-third of people in the U.S. who could benefit... according to data from the Centers for Disease Control and Prevention." Part of the problem? "Daily pills dominate the market, but drugmakers are now focusing on developing longer-acting shots... Health policymakers and advocates hope longer-acting options could reach people who can't or don't want to take a daily pill and better prevent the spread of a virus that caused about 1 million new infections globally in 2022."

Read more of this story at Slashdot.

Dark Matter Found? New Study Furthers Stephen Hawking's Predictions About 'Primordial' Black Holes

Where is dark matter, the invisible mass that must exist to bind galaxies together? Stephen Hawking postulated it could be hiding in "primordial" black holes formed during the big bang, writes CNN. "Now, a new study by researchers with the Massachusetts Institute of Technology has brought the theory back into the spotlight, revealing what these primordial black holes were made of and potentially discovering an entirely new type of exotic black hole in the process." Other recent studies have confirmed the validity of Hawking's hypothesis, but the work of [MIT graduate student Elba] Alonso-Monsalve and [study co-author David] Kaiser, a professor of physics and the Germeshausen Professor of the History of Science at MIT, goes one step further and looks into exactly what happened when primordial black holes first formed. The study, published June 6 in the journal Physical Review Letters, reveals that these black holes must have appeared in the first quintillionth of a second of the big bang: "That is really early, and a lot earlier than the moment when protons and neutrons, the particles everything is made of, were formed," Alonso-Monsalve said... "You cannot find quarks and gluons alone and free in the universe now, because it is too cold," Alonso-Monsalve added. "But early in the big bang, when it was very hot, they could be found alone and free. So the primordial black holes formed by absorbing free quarks and gluons." Such a formation would make them fundamentally different from the astrophysical black holes that scientists normally observe in the universe, which are the result of collapsing stars. Also, a primordial black hole would be much smaller β€” only the mass of an asteroid, on average, condensed into the volume of a single atom. But if a sufficient number of these primordial black holes did not evaporate in the early big bang and survived to this day, they could account for all or most dark matter. During the making of the primordial black holes, another type of previously unseen black hole must have formed as a kind of byproduct, according to the study. These would have been even smaller β€” just the mass of a rhino, condensed into less than the volume of a single proton... "It's inevitable that these even smaller black holes would have also formed, as a byproduct (of primordial black holes' formation)," Alonso-Monsalve said, "but they would not be around today anymore, as they would have evaporated already." However, if they were still around just ten millionths of a second into the big bang, when protons and neutrons formed, they could have left observable signatures by altering the balance between the two particle types. Professor Kaiser told CNN the next generation of gravitational wave detectors "could catch a glimpse of the small-mass black holes β€” an exotic state of matter that was an unexpected byproduct of the more mundane black holes that could explain dark matter today." Nico Cappelluti, an assistant professor in the physics department of the University of Miami (who was not involved with the study), confirmed to CNN that "This work is an interesting, viable option for explaining the elusive dark matter."
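
Those size comparisons are easy to sanity-check with the Schwarzschild radius, r_s = 2GM/c^2. A back-of-the-envelope computation (the masses below are illustrative assumptions for "an asteroid" and "a rhino," not figures from the paper):

```python
# Sanity check of the size comparisons using the Schwarzschild radius
# r_s = 2GM/c^2. The masses are illustrative assumptions, not values
# taken from the MIT paper.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1e17))    # ~1.5e-10 m: about the size of an atom
print(schwarzschild_radius(2.5e3))   # ~3.7e-24 m: far smaller than a proton (~1e-15 m)
```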

Read more of this story at Slashdot.

ASUS Releases Firmware Update for Critical Remote Authentication Bypass Affecting Seven Routers

A report from BleepingComputer notes that ASUS "has released a new firmware update that addresses a vulnerability impacting seven router models that allows remote attackers to log in to devices." But there's more bad news: Taiwan's CERT has also informed the public about CVE-2024-3912 in a post yesterday: a critical (9.8) arbitrary firmware upload vulnerability allowing unauthenticated, remote attackers to execute system commands on the device. The flaw impacts multiple ASUS router models, but not all will be getting security updates because they have reached end-of-life (EoL). Finally, ASUS announced an update to Download Master, a utility used on ASUS routers that enables users to manage and download files directly to a connected USB storage device via torrent, HTTP, or FTP. The newly released Download Master version 3.1.0.114 addresses five medium- to high-severity issues concerning arbitrary file upload, OS command injection, buffer overflow, reflected XSS, and stored XSS problems.

Read more of this story at Slashdot.

Researchers Devise Photosynthesis-Based Energy Source With Negative Carbon Emissions

Researchers have devised a way to extract energy from the photosynthesis process of algae, according to an announcement from Concordia University. Suspended in a specialized solution, the algae forms part of a "micro photosynthetic power cell" that can actually generate enough energy to power low-power devices like Internet of Things (IoT) sensors. "Photosynthesis produces oxygen and electrons. Our model traps the electrons, which allows us to generate electricity," [says Kirankumar Kuruvinashetti, PhD '20, now a Mitacs postdoctoral associate at the University of Calgary.] "So more than being a zero-emission technology, it's a negative carbon emission technology: it absorbs carbon dioxide from the atmosphere and gives you a current. Its only byproduct is water." [...] Muthukumaran Packirisamy, professor in the Department of Mechanical, Industrial and Aerospace Engineering and the paper's corresponding author, admits the system is not yet able to compete in power generation with others like photovoltaic cells. The maximum possible terminal voltage of a single micro photosynthetic power cell is only 1.0V. But he believes that, with enough research and development, including artificial intelligence-assisted integration technologies, this technology has the potential to be a viable, affordable and clean power source in the future. It also offers significant manufacturing advantages over other systems, he says. "Our system does not use any of the hazardous gases or microfibres needed for the silicon fabrication technology that photovoltaic cells rely on. Furthermore, disposing of silicon computer chips is not easy. We use biocompatible polymers, so the whole system is easily decomposable and very cheap to manufacture." In the paper the researchers also described it as a "microbial fuel cell"...
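
That 1.0V ceiling means a single cell cannot drive ordinary electronics on its own; cells would have to be stacked in series. A trivial sizing sketch (the 3.3V target is our assumption of a typical low-power IoT supply rail, not a figure from the researchers):

```python
import math

# How many 1.0 V micro photosynthetic cells in series to reach a common
# 3.3 V IoT supply rail? The 3.3 V target is an assumption; the 1.0 V
# maximum terminal voltage comes from the announcement.
cell_voltage_v = 1.0
target_rail_v = 3.3

cells_in_series = math.ceil(target_rail_v / cell_voltage_v)
print(cells_in_series, "cells in series")  # 4
```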

Read more of this story at Slashdot.

America's Defense Department Ran a Secret Disinfo Campaign Online Against China's Covid Vaccine

"At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China's growing influence in the Philippines..." reports Reuters. "It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found." Reuters interviewed "more than two dozen current and former U.S officials, military contractors, social media analysts and academic researchers," and also reviewed posts on social media, technical data and documents about "a set of fake social media accounts used by the U.S. military" β€” some active for more than five years. Friday they reported the results of their investigation: Through phony internet accounts meant to impersonate Filipinos, the military's propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines β€” China's Sinovac inoculation. Reuters identified at least 300 accounts on X, formerly Twitter, that matched descriptions shared by former U.S. military officials familiar with the Philippines operation. Almost all were created in the summer of 2020 and centered on the slogan #Chinaangvirus β€” Tagalog for China is the virus. "COVID came from China and the VACCINE also came from China, don't trust China!" one typical tweet from July 2020 read in Tagalog. The words were next to a photo of a syringe beside a Chinese flag and a soaring chart of infections. Another post read: "From China β€” PPE, Face Mask, Vaccine: FAKE. But the Coronavirus is real." After Reuters asked X about the accounts, the social media company removed the profiles, determining they were part of a coordinated bot campaign based on activity patterns and internal data. The U.S. military's anti-vax effort began in the spring of 2020 and expanded beyond Southeast Asia before it was terminated in mid-2021, Reuters determined. Tailoring the propaganda campaign to local audiences across Central Asia and the Middle East, the Pentagon used a combination of fake social media accounts on multiple platforms to spread fear of China's vaccines among Muslims at a time when the virus was killing tens of thousands of people each day. A key part of the strategy: amplify the disputed contention that, because vaccines sometimes contain pork gelatin, China's shots could be considered forbidden under Islamic law... A senior Defense Department official acknowledged the U.S. military engaged in secret propaganda to disparage China's vaccine in the developing world, but the official declined to provide details. A Pentagon spokeswoman... also noted that China had started a "disinformation campaign to falsely blame the United States for the spread of COVID-19." A senior U.S. military officer directly involved in the campaign told Reuters that "We didn't do a good job sharing vaccines with partners. So what was left to us was to throw shade on China's." At least six senior State Department officials for the region objected, according to the article. But in 2019 U.S. Defense Secretary Mark Esper signed "a secret order" that "elevated the Pentagon's competition with China and Russia to the priority of active combat, enabling commanders to sidestep the StateDepartment when conducting psyops against those adversaries." [A senior defense official] said the Pentagon has rescinded parts of Esper's 2019 order that allowed military commanders to bypass the approval of U.S. 
ambassadors when waging psychological operations. The rules now mandate that military commanders work closely with U.S. diplomats in the country where they seek to have an impact. The policy also restricts psychological operations aimed at "broad population messaging," such as those used to promote vaccine hesitancy during COVID... Nevertheless, the Pentagon's clandestine propaganda efforts are set to continue. In an unclassified strategy document last year, top Pentagon generals wrote that the U.S. military could undermine adversaries such as China and Russia using "disinformation spread across social media, false narratives disguised as news, and similar subversive activities [to] weaken societal trust by undermining the foundations of government." And in February, the contractor that worked on the anti-vax campaign β€” General Dynamics IT β€” won a $493 million contract. Its mission: to continue providing clandestine influence services for the military.

Read more of this story at Slashdot.

ASUS Promises Support Overhaul After YouTube Investigators Allege Dishonesty

ASUS has suddenly agreed "to overhaul its customer support and warranty systems," writes the hardware review site Gamers Nexus β€” after a three-video series on its YouTube channel documented bad and "potentially illegal" handling of customer warranties for the channel's 2.2 million viewers. The Verge highlights ASUS's biggest change: If you've ever been denied a warranty repair or charged for a service that was unnecessary or should've been free, Asus wants to hear from you at a new email address. It claims those disputes will be processed by Asus' own staff rather than outsourced customer support agents.... The company is also apologizing today for previous experiences you might have had with repairs. "We're very sorry to anyone who has had a negative experience with our service team. We appreciate your feedback and giving us a chance to make amends." It started five weeks ago when Gamers Nexus requested service for a joystick problem, according to a May 10 video. First they'd received a response wrongly telling them their damage was out of warranty β€” which also meant Asus could add a $20 shipping charge for the requested repair. "Somehow that turned into ASUS saying the LCD needs to be replaced, even though the joystick is covered under their repair policies," the investigators say in the video. [They also note this response didn't even address their original joystick problem β€” "only that thing that they had decided to find" β€” and that ASUS later made an out-of-the-blue reference to "liquid damage."] The repair would ultimately cost $191.47, with ASUS mentioning that otherwise "the unit will be sent back un-repaired and may be disassembled." ASUS gave them four days to respond, with some legalese adding that an out-of-warranty repair fee is non-refundable, yet still "does not guarantee that repairs can be made." Even when ASUS later agreed to do a free "partial" repair (providing the requested in-warranty service), the video's investigators still received another email warning of "pending service cancellation" and return of the unit unless they spoke to "Invoice Quotation Support" immediately. The video-makers stood firm, and the in-warranty repair was later performed free β€” but they still concluded that "It felt like ASUS tried to scam us." ASUS's response was documented in a second video, with ASUS claiming it had merely been sending a list of "available" repairs (and promising that in the future ASUS would stop automatically including costs for the unrequested repair of "cosmetic imperfections" β€” and that they'd also change their automatic emails.) Gamers Nexus eventually created a fourth, hour-long video confronting various company officials at Computex β€” which finally led to them publishing a list of ASUS's promised improvements on Friday. Some highlights: ASUS promises it's "created a Task Force team to retroactively go back through a long history of customer surveys that were negative to try and fix the issues." (The third video from Gamers Nexus warned ASUS was already on the government's radar over its handling of warranty issues.) ASUS also announced their repairs centers were no longer allowed to claim "customer-induced damage" (which Gamers Nexus believes "will remove some of the financial incentive to fail devices" to speed up workloads). ASUS is creating a new U.S. support center allowing customers to choose either a refurbished board or a longer repair. 
Gamers Nexus says they already have devices at ASUS repair centers β€” under pseudonyms β€” and that they "plan to continue sampling them over the next 6-12 months so we can ensure these are permanent improvements." And there's one final improvement, according to Gamers Nexus. "After over a year of refusing to acknowledge the microSD card reader failures on the ROG Ally [handheld gaming console], ASUS will be posting a formal statement next week about the defect."

Read more of this story at Slashdot.

AI Researcher Warns Data Science Could Face a Reproducibility Crisis

Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...? Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices, compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- An ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.

What can you do? Hold your data scientists accountable using Science:

- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside of the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about the algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to similar standards as other software engineering specialties, with code review, code documentation, and architectural designs.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
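
For a concrete picture of the "minimum" the author asks for, here is a short sketch of an exploratory data analysis in pandas (the file name and "label" column are hypothetical placeholders):

```python
import pandas as pd

# Minimal exploratory data analysis before any modeling decisions.
# "training_data.csv" and the "label" column are hypothetical.
df = pd.read_csv("training_data.csv")

print(df.shape)           # how much data is there, really?
print(df.dtypes)          # are the column types what we expect?
print(df.isna().mean())   # fraction of missing values per column
print(df.describe())      # ranges, means, and outlier hints for numeric features
print(df["label"].value_counts(normalize=True))  # class balance informs metric choice
```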

Read more of this story at Slashdot.

FCC Approves Mysterious SpaceX Device: Is It for the Starlink Mini Dish?

"SpaceX has received FCC clearance to operate a mysterious 'wireless module' device," PC Magazine reported earlier this week, speculating that the device "might be a new Starlink router." On Tuesday, the FCC issued an equipment authorization for the device, which uses the 2.4GHz and 5GHz Wi-Fi radio bands. A document in SpaceX's filing also says it features antennas along with Wi-Fi chips apparently from MediaTek. Another document calls the device by the codename "UTW-231," and defines it as a "wireless router" supporting IEEE 802.11b/g/n/ax for Wi-Fi 6 speeds up to 1,300Mbps. But perhaps the most interesting part is an image SpaceX attached, which suggests the router is relatively small and can fit in a person's open hand.... SpaceX CEO Elon Musk has said the "Starlink mini" dish is slated to arrive later this year and that it's small enough to fit in a backpack... On Wednesday, PCMag also spotted the official Starlink.com site referencing the name "Mini" in a specification page for the satellite internet system. Today saw some interesting speculation on the unoffical "Starlink Hardware" blog (written by Noah Clarke, who has a degree in electronics). Clarke guesses the product "will be aimed at portable use cases, such as camping, RV's, vans, hiking... designed to be easy to store, transport, and deploy". But he also notes Starlink updated their app today, with a new shopping page showing what he believes the upcoming product will look like. ("Very similar to the Standard dish, just smaller. It has a similar shape, and even a kickstand.") If you go into developer mode and play around with the Mini network settings, you notice something interesting. There is no separate router. Devices are connected to the dish itself... I'm guessing that, in order to make the Mini as portable as possible, Starlink decided it was best to simplify the system and limit the number of components. There are more Wifi details that have been revealed, and that is mesh compatibility. For those of you that might be interested in using the Mini at home, or for larger events where you need additional Wifi coverage, the Mini's built-in router will be compatible with Starlink mesh. You'll be able to wirelessly pair another Starlink router to the Mini.

Read more of this story at Slashdot.

'Blue Screen of Death' Comes To Linux

In 2016, Phoronix remembered how the early days of Linux kernel mode-setting (KMS) had brought hopes for improved error messages. And one long-awaited feature was error messages for "Direct Rendering Manager" (or DRM) drivers β€” something analogous to the "Blue Screen of Death" Windows gives for critical errors. Now Linux 6.10 is introducing a new DRM panic handler infrastructure enabling messages when a panic occurs, Phoronix reports today. "This is especially important for those building a kernel without VT/FBCON support, where viewing the kernel panic message isn't otherwise easily available." With Linux 6.10 the initial DRM Panic code has landed as well as wiring up the DRM/KMS driver support for the SimpleDRM, MGAG200, IMX, and AST drivers. There is work underway on extending DRM Panic support to other drivers that we'll likely see over the coming kernel cycles for more widespread support... On Linux 6.10+ with platforms having the DRM Panic driver support, this "Blue Screen of Death" functionality can be tested via a route such as echo c > /proc/sysrq-trigger. The article links to a picture shared on Mastodon by Red Hat engineer Javier Martinez Canillas of the error message being generated on a BeaglePlay single board computer. Phoronix also points out that some operating systems have even considered QR codes for kernel error messages...

Read more of this story at Slashdot.

Which Way is the EV Market Headed? And Does the US Lag the World?

Wednesday the annual electric vehicle outlook report was released by market researcher BloombergNEF. And the analyst wrote that "Our long-term outlook for EVs remains bright," according to the Los Angeles Times: In 2023, EVs made up 18% of global passenger-vehicle sales. By 2030, according to the report, 45% will be EVs. That number jumps to 73% by 2040 β€” still short of what the world needs to reach net zero emissions in transportation, the firm says, but enough to achieve major reductions in climate-changing carbon emissions... [D]ifferent countries are moving at different speeds and with different levels of commitment. Today, "China, India and France are still showing signs of healthy growth, but the latest data from Germany, Italy and the U.S. is more concerning," BloombergNEF said. Global EV sales "are set to rise from 13.9 million in 2023 to over 30 million in 2027," despite the lagging U.S. [The article points out later that "For the first quarter in China, EV sales were up 37%, according to BloombergNEF. In India, it's 39%, and in France, 20%. The U.S. was a laggard, up just 4%."] Whatever the geography, consumer concerns about price, driving range, battery lifespan, and unreliable public charging continue to dampen many buyers' enthusiasm for EVs. BloombergNEF's findings are echoed by consulting firm McKinsey and the AAA motor club, in recent forecasts of their own. But EV prices are coming down, range is improving, and large numbers of public chargers are being installed, all of which could revive sales growth. Consumers around the planet are warming to the idea of buying an electric car, but they're moving slowly. According to McKinsey, 14% of 30,000 global survey respondents in 2021 said their next vehicle would be an EV. This year, it's 18%. In the U.S. it's a different story, where consumer interest in an EV purchase declined to 18% this year, according to AAA's survey, down from 23% in 2023. And nearly two-thirds reported they were unlikely to buy an EV next time they buy a car. Interest in hybrids is on the rise. One in three said they were likely to buy a hybrid, a vehicle that adds a small battery to an internal combustion engine to improve fuel efficiency. That's bad news for pure EV sales, at least in the immediate future, said Greg Brannon, head of automotive research at AAA. Early adopters already have their EVs, he said, while mainstream buyers remain skeptical. The article does note that major automakers "are losing billions of dollars in their EV division," with several cutting the EV goals for the U.S. (Though Hyundai and Kia are not.) And then there's this... A global survey conducted by consulting firm McKinsey, also released Wednesday, included this shocker: 29% of EV owners told McKinsey they plan to replace the EV they bought with a gasoline or diesel car, a figure that jumps to 38% for U.S. EV owners. Phillip Kampshoff, who leads McKinsey's Center for Future Mobility in the Americas, said he'd seen EV sales as "a one way street. Once you buy, you're hooked on an EV. But that's not what the data shows...." But the article points out that both BloombergNEF and McKinsey still remained bullish that adoption will increase in the future.

Read more of this story at Slashdot.

53 LA County Public Health Workers Fall for Phishing Email. 200,000 People May Be Affected

The Los Angeles Times reports that "The personal information of more than 200,000 people in Los Angeles County was potentially exposed after a hacker used a phishing email to steal the login credentials of 53 public health employees, the county announced Friday." Details that were possibly accessed in the February data breach include the first and last names, dates of birth, diagnoses, prescription information, medical record numbers, health insurance information, Social Security numbers and other financial information of Department of Public Health clients, employees and other individuals. "Affected individuals may have been impacted differently and not all of the elements listed were present for each individual," the agency said in a news release... The data breach happened between Feb. 19 and 20 when employees received a phishing email, which tries to trick recipients into providing important information such as passwords and login credentials. The employees clicked on a link in the body of the email, thinking they were accessing a legitimate message, according to the agency... The county is offering free identity monitoring through Kroll, a financial and risk advisory firm, to those affected by the breach. Individuals whose medical records were potentially accessed by the hacker should review them with their doctor to ensure the content is accurate and hasn't been changed. Officials say people should also review the Explanation of Benefits statement they receive from their insurance company to make sure they recognize all the services that have been billed. Individuals can also request credit reports and review them for any inaccuracies. From the official statement by the county's Public Health department: Upon discovery of the phishing attack, Public Health disabled the impacted e-mail accounts, reset and re-imaged the user's device(s), blocked websites that were identified as part of the phishing campaign and quarantined all suspicious incoming e-mails. Additionally, awareness notifications were distributed to all workforce members to remind them to be vigilant when reviewing e-mails, especially those including links or attachments. Law enforcement was notified upon discovery of the phishing attack, and they investigated the incident.

Read more of this story at Slashdot.

Flesh-Eating Bacteria That Can Kill in Two Days Spreads in Japan

Bloomberg reports: A disease caused by a rare "flesh-eating bacteria" that can kill people within 48 hours is spreading in Japan after the country relaxed Covid-era restrictions. Cases of streptococcal toxic shock syndrome (STSS) reached 977 this year by June 2, higher than the record 941 cases reported for all of last year, according to the National Institute of Infectious Diseases, which has been tracking incidences of the disease since 1999. Group A Streptococcus (GAS) typically causes swelling and sore throat in children known as "strep throat," but some types of the bacteria can lead to symptoms developing rapidly, including limb pain and swelling, fever, low blood pressure, that can be followed by necrosis, breathing problems, organ failure and death. People over 50 are more prone to the disease. "Most of the deaths happen within 48 hours," said Ken Kikuchi, a professor in infectious diseases at Tokyo Women's Medical University. "As soon as a patient notices swelling in foot in the morning, it can expand to the knee by noon, and they can die within 48 hours...." At the current rate of infections, the number of cases in Japan could reach 2,500 this year, with a "terrifying" mortality rate of 30%, Kikuchi said.

Read more of this story at Slashdot.

Wine Staging 9.11 Released With a Patch For a 17-Year-Old Bug

Building off Friday's release of Wine 9.11, the development team has now also released Wine Staging 9.11 with some 428 patches, reports Phoronix founder Michael Larabel: Catching my interest was a patch for Bug 7955. That right away catches my attention since the latest Wine bug reports are at a bug ticket number over 56,000.... Yep, Bug 7955 dates back 17 years to April 2007. The #7955 bug report concerns the S-Hoai Windows client displaying an application exception when clicking the "File" or "Projects" menu. S-Hoai is a Windows application used in Germany by architects and building engineers/contractors for managing estimates and billing according to German laws.

Read more of this story at Slashdot.
