
AI slop, begone! The viral musical virtuosos bringing brains and brilliance back to social media

11 February 2026 at 05:38

Whether making microtonal pop or playing Renaissance instruments with sheep bones, a crop of bold artists are making genuinely strange music go mainstream – but are they at the mercy of the algorithm?

Chloë Sobek is a Melbourne musician who plays the violone, a Renaissance precursor to the double bass. But instead of playing it in the traditional manner, she puts wobbling bits of cardboard between its strings or uses a sheep’s bone as a bow, and these weird interventions have become catnip for Instagram’s algorithm, getting her tens of thousands – sometimes hundreds of thousands – of views for each of her self-made performance videos. “Despite how it might appear, I’m a reasonably shy person,” she says.

When Laurie Anderson’s robo-minimalist masterwork O Superman hit No 2 in the UK charts in 1981, thanks to incessant airplay on John Peel’s radio show, it was a signal of a media outlet’s power to propel experimental music into the mainstream. That’s now happening again as prepared-instrument players such as Sobek, plus experimental pianists, microtonal singers and numerous other boundary-pushing solo performers, are routinely breaking out of underground circles thanks to videos – generally self-recorded at home – going viral on TikTok and Instagram.


© Photograph: Sandra Ebert


EU says TikTok needs to drop "addictive design"

6 February 2026 at 09:31

Brussels has warned TikTok that its endlessly scrolling feeds may breach Europe’s new content rules, as regulators press ahead with efforts to rein in the social effects of big online platforms.

In preliminary findings issued on Friday, the European Commission said it believed the group had failed to adequately assess and mitigate the risks posed by addictive design features that could harm users’ physical and mental wellbeing, particularly children and other vulnerable groups.

The warning marks one of the most advanced tests yet of the EU’s Digital Services Act, which requires large online platforms to identify and curb systemic risks linked to their products.


© NurPhoto / Contributor | NurPhoto

TikTok’s privacy update mentions immigration status. Here’s why.

30 January 2026 at 06:48

In 2026, could any five words be more chilling than “We’re changing our privacy terms”?

The timing could not have been worse for TikTok US when it sent millions of US users a mandatory privacy pop-up on January 22. The message forced users to accept updated terms if they wanted to keep using the app. Buried in that update was language about collecting “citizenship or immigration status.”

Specifically, TikTok said:

“Information You Provide may include sensitive personal information, as defined under applicable state privacy laws, such as information from users under the relevant age threshold, information you disclose in survey responses or in your user content about your racial or ethnic origin, national origin, religious beliefs, mental or physical health diagnosis, sexual life or sexual orientation, status as transgender or nonbinary, citizenship or immigration status, or financial information.”

The internet reacted badly. TikTok users took to social media, with some suggesting that TikTok was building a database of immigration status, and others pledging to delete their accounts. It didn’t help that TikTok’s US operation became a US-owned company on the same day, with Senator Ed Markey (D-Mass.) criticizing what he sees as a lack of transparency around the deal.

A legal requirement

In this case, things may be less sinister than you’d think. The language is not new—it first appeared around August 2024. And TikTok is not asking users to provide their immigration status directly.

Instead, the disclosure covers sensitive information that users might voluntarily share in videos, surveys, or interactions with AI features.

The change appears to be driven largely by California’s AB-947, signed in October 2023. The law added immigration status to the state’s definition of sensitive personal information, placing it under stricter protections. Companies are required to disclose how they process sensitive personal information, even if they do not actively seek it out.

Other social media companies, including Meta, do not explicitly mention immigration status in their privacy policies. According to TechCrunch, that difference likely reflects how specific their disclosure language is—not a meaningful difference in what data is actually collected.

One meaningful change in TikTok’s updated policy does concern location tracking. Previous versions stated that TikTok did not collect GPS data from US users. The new policy says it may collect precise location data, depending on user settings. Users can reportedly opt out of this tracking.

Read the whole board, not just one square

So, does this mean TikTok—or any social media company—deserves our trust? That’s a harder question.

There are still red flags. In April, TikTok quietly removed a commitment to notify users before sharing data with law enforcement. According to Forbes, the company has also declined to say whether it shares, or would share, user data with agencies such as the Department of Homeland Security (DHS) or Immigration and Customs Enforcement (ICE).

That uncertainty is the real issue. Social media companies are notorious for collecting vast amounts of user data, and for being vague about how it may be used later. Outrage over a particularly explicit disclosure is understandable, but the privacy problem runs much deeper than a single policy update from one company.

People have reason to worry unless platforms explicitly commit to not collecting or inferring sensitive data—and explicitly commit to not sharing it with government agencies. And even then, skepticism is healthy. These companies have a long history of changing policies quietly when it suits them.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

TikTok users “absolutely justified” in fearing MAGA makeover, experts say

27 January 2026 at 18:17

TikTok wants users to believe that blocked uploads of anti-ICE videos and direct messages mentioning Jeffrey Epstein are due to technical errors—not to the platform shifting to censor content critical of Donald Trump after he hand-picked the US owners who took over the app last week.

However, experts say that TikTok users' censorship fears are justified, whether the bugs are to blame or not.

Ioana Literat, an associate professor of technology, media, and learning at Teachers College, Columbia University, has studied TikTok's politics since the app first shot to popularity in the US in 2018. She told Ars that "users' fears are absolutely justified" and explained why the "bugs" explanation is "insufficient."


© Aurich Lawson | Getty Images

“IG is a drug”: Internal messages may doom Meta at social media addiction trial

27 January 2026 at 13:07

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they've never had to convince a jury that they aren't liable for harming kids.

This week, the first high-profile lawsuit—considered a "bellwether" case that could set meaningful precedent for the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleges triggered depression, anxiety, self-harm, and suicidality.


TikTok narrowly avoids a US ban by spinning up a new American joint venture

27 January 2026 at 06:09

TikTok may have found a way to stay online in the US. On Friday, the company announced TikTok USDS Joint Venture LLC, a joint venture backed largely by US investors, in a deal valued at about $14 billion that allows it to continue operating in the country.

This is the culmination of a long-running fight between TikTok and US authorities. In 2019, the Committee on Foreign Investment in the United States (CFIUS) flagged ByteDance’s 2017 acquisition of Musical.ly as a national security risk, on the basis that the Chinese owner’s links to the state would put US users’ data at risk.

In his first term, President Trump issued an executive order demanding that ByteDance sell the business or face a ban. That order was blocked by courts, and President Biden later replaced it with a broader review process in 2021.

In April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), which Biden signed into law. That set a January 19, 2025 deadline for ByteDance to divest its business or face a nationwide ban. With no deal finalized, TikTok voluntarily went dark for about 12 hours on January 18, 2025. Trump later issued executive orders extending the deadline, culminating in a September 2025 agreement that led to the joint venture.

Three managing investors each hold 15% of the new business: database giant Oracle (which previously vied to acquire TikTok when ByteDance was first told to divest), technology-focused investment group Silver Lake, and the United Arab Emirates-backed AI (Artificial Intelligence) investment company MGX.

Other investors include the family office of tech entrepreneur Michael Dell, as well as Vastmere Strategic Investments, Alpha Wave Partners, Revolution, Merritt Way, and Via Nova.

Original owner ByteDance retains 19.9% of the business, and according to an internal memo released before the deal was officially announced, 30% of the company will be owned by affiliates of existing ByteDance investors. That’s in spite of the fact that PAFACA mandated a complete severance of TikTok in the US from its Chinese ownership.

A focus on security

The company is eager to promote data security for its users. With that in mind, Oracle takes the role of “trusted security partner” for data protection and compliance auditing under the deal.

Oracle is also expected to store US user data in its cloud environment. The program will reportedly align with security frameworks including the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Other TikTok-owned apps such as CapCut and Lemon8 will also fall under the joint venture’s security umbrella.

Canada’s TikTok tension

It’s been a busy month for ByteDance, with other developments north of the border. Last week, Canada’s Federal Court overturned a November 2024 governmental order to shut down TikTok’s Canadian business on national security grounds. The decision gives Industry Minister Mélanie Joly time to review the case.

Why this matters

TikTok’s new US joint venture lowers the risk of direct foreign access to American user data, but it doesn’t erase all of the concerns that put the app in regulators’ crosshairs in the first place. ByteDance still retains an economic stake, the recommendation algorithm remains largely opaque, and oversight depends on audits and enforcement rather than hard technical separation.

In other words, this deal reduces exposure, but it doesn’t make TikTok a risk-free platform. For users, that means the same common-sense rules still apply: be thoughtful about what you share and remember that regulatory approval isn’t the same as total data safety.



Canada Marks Data Privacy Week 2026 as Commissioner Pushes for Privacy by Design

27 January 2026 at 03:18


As Data Privacy Week 2026 gets underway from January 26 to 30, Canada’s Privacy Commissioner Philippe Dufresne has renewed calls for stronger data protection practices, modern privacy laws, and a privacy-first approach to emerging technologies such as artificial intelligence.

In a statement marking Data Privacy Week 2026, Dufresne said data has become one of the most valuable resources of the 21st century, making responsible data management essential for both individuals and organizations. “Data is one of the most important resources of the 21st century and managing it well is essential for ensuring that individuals and organizations can confidently reap the benefits of a digital society,” he said.

The Office of the Privacy Commissioner (OPC) has chosen privacy by design as its theme this year, highlighting the need for organizations to embed privacy into their programs, products, and services from the outset. According to Dufresne, this proactive approach can help organizations innovate responsibly, reduce risks, build for the future, and earn public trust.

Data Privacy Week 2026: Privacy by Design Takes Centre Stage

Speaking on the growing integration of technology into everyday life, Dufresne said Data Privacy Week 2026 is a timely opportunity to underline the importance of data protection. With personal data being collected, used, and shared at unprecedented levels, privacy is no longer a secondary concern. “Prioritizing privacy by design is my Office’s theme for Data Privacy Week this year, which highlights the benefits to organizations of taking a proactive approach to protect the personal information that is in their care,” he said. The OPC is also offering guidance for individuals on how to safeguard their personal information in a digital world, while providing organizations with resources to support privacy-first programs, policies, and services. These include principles to encourage responsible innovation, especially in the use of generative AI technologies.

Real-World Cases Show Why Privacy Matters

In parallel with Data Privacy Week 2026, Dufresne used a recent appearance before Parliament to point to concrete cases that show how privacy failures can cause serious and lasting harm. He referenced investigations into the non-consensual sharing of intimate images involving Aylo, the operator of Pornhub, and the 23andMe data breach, which exposed highly sensitive personal information of 7 million customers, including more than 300,000 Canadians.

His office’s joint investigation into TikTok also highlighted the need to protect children’s privacy online. The probe not only resulted in a report but also led TikTok to improve its privacy practices in the interests of its users, particularly minors. Dufresne also confirmed an expanded investigation into X and its Grok chatbot, focusing on the emerging use of AI to create deepfakes, which he said presents significant risks to Canadians.

“These are some of many examples that demonstrate the importance of privacy for current and future generations,” he told lawmakers, adding that prioritizing privacy is also a strategic and competitive asset for organizations.

Modernizing Canada’s Privacy Laws

A central theme of Data Privacy Week 2026 in Canada is the need to modernize privacy legislation. Dufresne said existing laws must be updated to protect Canadians in a data-driven world while giving businesses clear and practical rules. He voiced support for proposed changes under Bill C-15, the Budget 2025 Implementation Act, which would amend the Personal Information Protection and Electronic Documents Act (PIPEDA) to introduce a right to data mobility. This would allow individuals to request that their personal information be transferred to another organization, subject to regulations and safeguards.

“A right to data mobility would give Canadians greater control of their personal information by allowing them to make decisions about who they want their information shared with,” he said, adding that it would also make it easier for people to switch service providers and support innovation and competition.

Under the proposed amendments, organizations would be required to disclose personal information to designated organizations upon request, provided both are subject to a data-mobility framework. The federal government would also gain authority to set regulations covering safeguards, interoperability standards, and exceptions. Given the scope of these changes, Dufresne said it will be important for his office to be consulted as the regulations are developed.

A Call to Act During Data Privacy Week 2026

Looking ahead, Dufresne framed Data Privacy Week 2026 as both a moment of reflection and a call to action. “Let us work together to create a safer digital future for all, where privacy is everyone’s priority,” he said. He invited Canadians to take part in Data Privacy Week 2026 by joining the conversation online, engaging with content from the OPC’s LinkedIn account, and using the hashtag #DPW2026 to connect with others committed to advancing privacy in Canada and globally. As digital technologies continue to reshape daily life, the message from Canada’s Privacy Commissioner is clear: privacy is not just a legal requirement, but a foundation for trust, innovation, and long-term economic growth.

Get paid to scroll TikTok? The data trade behind Freecash ads

26 January 2026 at 09:28

Loyal readers and other privacy-conscious people will be familiar with the expression, “If it sounds too good to be true, it probably is.”

Getting paid handsomely to scroll social media definitely falls into that category. It sounds like an easy side hustle, which usually means there’s a catch.

In January 2026, an app called Freecash shot up to the number two spot on Apple’s free iOS chart in the US, helped along by TikTok ads that look a lot like job offers from TikTok itself. The ads promised up to $35 an hour to watch your “For You” page. According to reporting, the ads didn’t promote Freecash by name. Instead, they showed a young woman expressing excitement about seemingly being “hired by TikTok” to watch videos for money.

Freecash landing page

The landing pages featured TikTok and Freecash logos and invited users to “get paid to scroll” and “cash out instantly,” implying a simple exchange of time for money.

Those claims were misleading enough that TikTok said the ads violated its rules on financial misrepresentation and removed some of them.

Once you install the app, the promised TikTok paycheck vanishes. Instead, Freecash routes you to a rotating roster of mobile games—titles like Monopoly Go and Disney Solitaire—and offers cash rewards for completing time‑limited in‑game challenges. Payouts range from a single cent for a few minutes of daily play up to triple‑digit amounts if you reach high levels within a fixed period.

The whole setup is designed not to reward scrolling, as it claims, but to funnel you into games where you are likely to spend money or watch paid advertisements.

Freecash’s parent company, Berlin‑based Almedia, openly describes the platform as a way to match mobile game developers with users who are likely to install and spend. The company’s CEO has spoken publicly about using past spending data to steer users toward the genres where they’re most “valuable” to advertisers. 

Our concern, beyond the bait-and-switch, is the privacy issue. Freecash’s privacy policy allows the automatic collection of highly sensitive information, including data about race, religion, sex life, sexual orientation, health, and biometrics. Each additional mobile game you install to chase rewards adds its own privacy policy, tracking, and telemetry. Together, they greatly increase how much behavioral data these companies can harvest about a user.

Experts warn that data brokers already trade lists of people likely to be more susceptible to scams or compulsive online behavior—profiles that apps like this can help refine.

We’ve previously reported on data brokers that used games and apps to build massive databases, only to later suffer breaches exposing all that data.

When asked about the ads, Freecash said the most misleading TikTok promotions were created by third-party affiliates, not by the company itself. That is plausible, because Freecash runs an affiliate payout program for people who promote the app online. The company also promised to review and tighten its monitoring of partners.

For experienced users, the pattern should feel familiar: eye‑catching promises of easy money, a bait‑and‑switch into something that takes more time and effort than advertised, and a business model that suddenly makes sense when you realize your attention and data are the real products.

How to stay private

Free cash? Apparently, there is no such thing.

If you’re curious how intrusive schemes like this can be, consider using a separate email address created specifically for testing. Avoid sharing real personal details. Many users report that once they sign up, marketing emails quickly pile up.

Some of these schemes also appeal to people who are younger or under financial pressure, offering tiny payouts while generating far more value for advertisers and app developers.

So, what can you do?

  • Gather information about the company you’re about to give your data to. Talk to friends and relatives about your plans. Shared common sense often helps make the right decisions.
  • Create a separate account if you want to test a service. Use a dedicated email address and avoid sharing real personal details.
  • Limit information you provide online to what makes sense for the purpose. Does a game publisher need your Social Security Number? I don’t think so.
  • Be cautious about app installs that are framed as required to make the money initially promised, and review permissions carefully.
  • Use an up-to-date real-time anti-malware solution on all your devices.

Work from the premise that free money does not exist. Try to work out the business model of those offering it, and then decide.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Poland Calls for EU Investigation of TikTok Over AI-Generated Disinformation Campaign

31 December 2025 at 02:40


Poland's Ministry of Digital Affairs submitted a formal request to the European Commission this week, demanding an investigation of TikTok for allegedly failing to moderate a large-scale disinformation campaign that used AI-generated content to urge Poland to exit the European Union. The authorities claimed the platform violated its obligations as a Very Large Online Platform under the Digital Services Act.

Secretary of State Dariusz Standerski warned the synthetic audiovisual materials pose threats to public order, information security, and the integrity of democratic processes in Poland and across the European Union.

Some of the videos observed contain young women advocating for "Polexit," likely targeted at younger audiences. European analytics collective Res Futura found one such TikTok account, "Prawilne Polki," which published content showing women dressed in T-shirts bearing Polish flags and patriotic symbols.

AI-generated "Polexit" videos (Source: Res Futura X account)

The video character said: "I want Polexit because I want freedom of choice, even if it will be more expensive. I don't remember Poland before the European Union, but I feel it was more Polish then." (machine translated)

The disclosed content published in the Polish-language segment of TikTok exhibits characteristics of a "coordinated disinformation campaign," with the nature of narratives, distribution methods, and use of synthetic materials indicating TikTok failed to implement adequate mechanisms for moderating AI-generated content or ensure effective transparency measures regarding material origins, Standerski said.

Four-Point Action Request

Poland's formal request to Executive Vice President for Tech Sovereignty, Security and Democracy Henna Virkkunen proposes the European Commission initiate investigative proceedings concerning suspected breaches of Digital Services Act provisions relating to systemic risk management and content moderation.

The ministry demands TikTok submit a detailed report on the scale and nature of disclosed content, its reach, and actions taken to remove it and prevent further dissemination. Poland also requests the Commission consider applying interim measures aimed at limiting continued spread of AI-generated content encouraging Polish EU withdrawal.

The fourth request asks for coordination with Poland's Digital Services Coordinator UKE and notification of relevant national authorities regarding proceedings outcomes.

Letter sent by Secretary of State Dariusz Standerski to the EU Commission. (Source: X)

Systemic Risk Management Failures

Available information suggests TikTok has not implemented adequate mechanisms for moderating AI-generated content, Standerski said. The platform's alleged failure to ensure effective transparency measures regarding synthetic material origins undermines Digital Services Act objectives concerning disinformation prevention and user protection.

The scale of this phenomenon, its potential consequences for political stability, and the use of generative technologies to undermine democratic foundations require immediate response from European Union institutions, the letter stressed.

As a Very Large Online Platform under DSA regulations, TikTok faces enhanced obligations including systemic risk assessments, independent audits, and transparency reporting. The platform must identify and mitigate risks relating to dissemination of illegal content and negative effects on civic discourse and electoral processes.

Growing Concerns Over AI-Generated Disinformation

The Polish complaint represents one of the first formal DSA enforcement requests specifically targeting AI-generated disinformation campaigns on major social media platforms. The case highlights growing concerns among EU member states about synthetic media being weaponized to manipulate public opinion and undermine democratic institutions.

The Digital Services Act, which came into full effect in February 2024, grants the European Commission powers to investigate very large platforms and impose fines up to 6% of global annual revenue for violations. The law requires platforms to assess and mitigate systemic risks including manipulation of services affecting democratic processes and public security.
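The 6% cap translates directly into dollar figures. A minimal sketch of the arithmetic, assuming a purely hypothetical revenue number (the function name and the $20 billion figure below are illustrative, not TikTok's actual revenue):

```python
# The DSA caps fines for very large platforms at 6% of global annual revenue.
DSA_MAX_FINE_RATE = 0.06

def max_dsa_fine(global_annual_revenue: float) -> float:
    """Return the maximum possible DSA fine for a given global annual revenue."""
    return global_annual_revenue * DSA_MAX_FINE_RATE

# Hypothetical example: a platform earning $20 billion a year could face
# a fine of up to $1.2 billion.
print(max_dsa_fine(20e9))  # 1200000000.0
```

Even at the low end, the cap gives regulators leverage measured in hundreds of millions of dollars for any platform of TikTok's scale.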

TikTok was already under scrutiny from the EU Commission over the Digital Services Act. In February 2024, the Commission opened a formal investigation into the social media giant for suspected DSA violations in areas linked to the protection of minors, advertising transparency, data access for researchers, and risk management of addictive design and harmful content.

Australian Social Media Ban Takes Effect as Kids Scramble for Alternatives

9 December 2025 at 16:10


Australia’s world-first social media ban for children under age 16 takes effect on December 10, leaving kids scrambling for alternatives and the Australian government with the daunting task of enforcing the ambitious ban. What is the Australian social media ban, who and what services does it cover, and what steps can affected children take? We’ll cover all that, plus the compliance and enforcement challenges facing both social media companies and the Australian government – and the move toward similar bans in other parts of the world.

Australian Social Media Ban Supported by Most – But Not All

In September 2024, Prime Minister Anthony Albanese announced that his government would introduce legislation to set a minimum age requirement for social media, citing concerns about social media’s effect on children’s mental health. The amendment to the Online Safety Act 2021 passed in November 2024 with the overwhelming support of the Australian Parliament. Public support has been broad, too – even as most parents say they don’t plan to fully enforce the ban with their children. The law already faces a legal challenge from The Digital Freedom Project, and the Australian Financial Review reported that Reddit may file a challenge too. Services affected by the ban – which proponents call a social media “delay” – include the following 10 services:
  • Facebook
  • Instagram
  • Kick
  • Reddit
  • Snapchat
  • Threads
  • TikTok
  • Twitch
  • X
  • YouTube
Those services must take steps by Wednesday to remove accounts held by users under 16 in Australia and prevent children from registering new accounts. Many services began to comply before the December 10 implementation date, although X had not yet communicated its policy to the government as of December 9, according to The Guardian. Companies that fail to comply with the ban face fines of up to AUD $49.5 million; there are no penalties for parents or children.

Opposition From a Wide Range of Groups - And Efforts Elsewhere

Opposition to the law has come from a range of groups, including those concerned about the privacy issues resulting from age verification processes such as facial recognition and assessment technology or use of government IDs. Others have said the ban could force children toward darker, less regulated platforms, and one group noted that children often reach out for mental health help on social media.

Amnesty International also opposed the ban. The international human rights group called the ban “an ineffective quick fix that’s out of step with the realities of a generation that lives both on and offline.” Amnesty said strong regulation and safeguards would be a better solution. “The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” Amnesty said. “Robust safeguards are needed to ensure social media platforms stop exposing users to harms through their relentless pursuit of user engagement and exploitation of people’s personal data.”

“Many young people will no doubt find ways to avoid the restrictions,” the group added. “A ban simply means they will continue to be exposed to the same harms but in secret, leaving them at even greater risk.”

Even the prestigious medical journal The Lancet suggested that a ban may be too blunt an instrument and that 16-year-olds will still face the same harmful content and risks. Jasmine Fardouly of the University of Sydney School of Psychology noted in a Lancet commentary that “Further government regulations and support for parents and children are needed to help make social media safe for all users while preserving its benefits.”

Still, despite the chorus of concerns, the idea of a social media ban for children is catching on in other places, including the EU and Malaysia.

Australian Children Seek Alternatives as Compliance Challenges Loom

The Australian social media ban leaves open a range of options for under-16 users, among them Yope, Lemon8, Pinterest, Discord, WhatsApp, Messenger, iMessage, Signal, and platforms that have been sources of controversy such as Telegram and 4chan. Users have exchanged phone numbers with friends and other users, and many have downloaded their personal data from apps where they’ll be losing access, including photos, videos, posts, comments, interactions and platform profile data. Many have also investigated VPNs as a possible way around the ban, but a VPN is unlikely to work with an existing account that has already been identified as an underage Australian account.

In the meantime, social media services face the daunting task of trying to confirm the age of account holders, a process that even Albanese has acknowledged “won’t be 100 per cent perfect.” There have already been reports of visual age checks failing, and a government-funded report released in August conceded the process will be imperfect. The government has published substantial guidance to help social media companies comply with the law, but it will no doubt take time to determine what “reasonable steps” look like in practice. Companies will have to navigate compliance guidance like the following passage:

“Providers may choose to offer the option to end-users to provide government-issued identification or use the services of an accredited provider. However, if a provider wants to employ an age assurance method that requires the collection of government-issued identification, then the provider must always offer a reasonable alternative that doesn’t require the collection of government-issued identification. A provider can never require an end-user to give government-issued identification as the sole method of age assurance and must always give end-users an alternative choice if one of the age assurance options is to use government-issued identification. A provider also cannot implement an age assurance system which requires end-users to use the services of an accredited provider without providing the end-user with other choices.”

Affiliates Flock to ‘Soulless’ Scam Gambling Machine

28 August 2025 at 13:21

Last month, KrebsOnSecurity tracked the sudden emergence of hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. We’ve since learned that these scam gambling sites have proliferated thanks to a new Russian affiliate program called “Gambler Panel” that bills itself as a “soulless project that is made for profit.”

A machine-translated version of Gambler Panel’s affiliate website.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular athletes or social media personalities. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

The gaming sites ask visitors to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. However, when users try to cash out any “winnings” the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed.

Those who deposit cryptocurrency funds are soon pressed into more wagering and making additional deposits. And — shocker alert — all players eventually lose everything they’ve invested in the platform.

The number of scam gambling or “scambling” sites has skyrocketed in the past month, and now we know why: The sites all pull their gaming content and detailed strategies for fleecing players straight from the playbook created by Gambler Panel, a Russian-language affiliate program that promises affiliates up to 70 percent of the profits.

Gambler Panel’s website gambler-panel[.]com links to a helpful wiki that explains the scam from cradle to grave, offering affiliates advice on how best to entice visitors, keep them gambling, and extract maximum profits from each victim.

“We have a completely self-written from scratch FAKE CASINO engine that has no competitors,” Gambler Panel’s wiki enthuses. “Carefully thought-out casino design in every pixel, a lot of audits, surveys of real people and test traffic floods were conducted, which allowed us to create something that has no doubts about the legitimacy and trustworthiness even for an inveterate gambling addict with many years of experience.”

Gambler Panel explains that the one and only goal of affiliates is to drive traffic to these scambling sites by any and all means possible.

A machine-translated portion of Gambler Panel’s singular instruction for affiliates: Drive traffic to these scambling sites by any means available.

“Unlike white gambling affiliates, we accept absolutely any type of traffic, regardless of origin, the only limitation is the CIS countries,” the wiki continued, referring to a common prohibition against scamming people in Russia and former Soviet republics in the Commonwealth of Independent States.

The program’s website claims it has more than 20,000 affiliates, who earn a minimum of $10 for each verification deposit. Interested new affiliates must first get approval from the group’s Telegram channel, which currently has around 2,500 active users.

The Gambler Panel channel is replete with images of affiliate panels showing the daily revenue of top affiliates, scantily-clad young women promoting the Gambler logo, and fast cars that top affiliates claimed they bought with their earnings.

A machine-translated version of the wiki for the affiliate program Gambler Panel.

The apparent popularity of this scambling niche is a consequence of the program’s ease of use and detailed instructions for successfully reproducing virtually every facet of the scam. Indeed, much of the tutorial focuses on advice and ready-made templates to help even novice affiliates drive traffic via social media websites, particularly on Instagram and TikTok.

Gambler Panel also walks affiliates through a range of possible responses to questions from users who are trying to withdraw funds from the platform. This section, titled “Rules for working in Live chat,” urges scammers to respond quickly to user requests (1-7 minutes), and includes numerous strategies for keeping the conversation professional and the user on the platform as long as possible.

A machine-translated version of the Gambler Panel’s instructions on managing chat support conversations with users.

The connection between Gambler Panel and the explosion in the number of scambling websites was made by a 17-year-old developer who operates multiple Discord servers that have been flooded lately with misleading ads for these sites.

The researcher, who asked to be identified only by the nickname “Thereallo,” said Gambler Panel has built a scalable business product for other criminals.

“The wiki is kinda like a ‘how to scam 101’ for criminals written with the clarity you would expect from a legitimate company,” Thereallo said. “It’s clean, has step by step guides, and treats their scam platform like a real product. You could swap out the content, and it could be any documentation for startups.”

“They’ve minimized their own risk — spreading the links on Discord / Facebook / YT Shorts, etc. — and outsourced it to a hungry affiliate network, just like a franchise,” Thereallo wrote in response to questions.

“A centralized platform that can serve over 1,200 domains with a shared user base, IP tracking, and a custom API is not at all a trivial thing to build,” Thereallo said. “It’s a scalable system designed to be a resilient foundation for thousands of disposable scam sites.”

The security firm Silent Push has compiled a list of the latest domains associated with the Gambler Panel, available here (.csv).
