
Crisis-hit firm behind vital NHS services faces uncertain future

Auditors say financial woes at tech firm Atos could hinder ability of its UK arm to continue as a going concern

The British arm of Atos, the French technology company that is a vital supplier to the NHS and UK government departments, is facing a “material uncertainty” over its ability to continue as a going concern, auditors have warned.

In the latest accounts for its UK holding company covering 2022, the company’s auditor, Grant Thornton, said financial problems facing its parent company in France could limit the UK arm’s ability to access cash and continue as a going concern.

© Photograph: Nick Moore/Alamy


Silicon Valley wants unfettered control of the tech market. That’s why it’s cosying up to Trump | Evgeny Morozov

Spooked by Biden’s wealth tax, big tech venture capitalists are showing their progressive credentials were only ever skin deep

Hardly a week passes without another billionaire endorsing Donald Trump. With Joe Biden proposing a 25% tax on those with assets over $100m (£80m), this is no shock. The real twist? The pro-Trump multimillionaire club now includes a growing number of venture capitalists. Unlike hedge funders or private equity barons, venture capitalists have traditionally held progressive credentials. They’ve styled themselves as the heroes of innovation, and the Democrats have done more to polish their progressive image than anyone else. So why are they now cosying up to Trump?

Venture capitalists and Democrats long shared a mutual belief in techno-solutionism – the idea that markets, enhanced by digital technology, could achieve social goods where government policy had failed. Over the past two decades, we’ve been living in the ruins of this utopia. We were promised that social media could topple dictators, that crypto could tackle poverty, and that AI could cure cancer. But the progressive credentials of venture capitalists were only ever skin deep, and now that Biden has adopted a tougher stance on Silicon Valley, VCs are more than happy to support Trump’s Republicans.

Evgeny Morozov is the author of several books on technology and politics. His latest podcast, A Sense of Rebellion, is available now

Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

© Photograph: Aerial Archives/Alamy


UK needs system for recording AI misuse and malfunctions, thinktank says

Centre for Long-Term Resilience calls on next government to log incidents to mitigate risks

The UK needs a system for recording misuse and malfunctions in artificial intelligence or ministers risk being unaware of alarming incidents involving the technology, according to a report.

The next government should create a system for logging incidents involving AI in public services and should consider building a central hub for collating AI-related episodes across the UK, said the Centre for Long-Term Resilience (CLTR), a thinktank.

© Photograph: Jonathan Raa/NurPhoto/REX/Shutterstock


Driving sustainable water management

From semiconductor manufacturing to mining, water is an essential commodity for industry. It is also a precious and constrained resource. According to the UN, more than 2.3 billion people faced water stress in 2022. Drought has cost the United States $249 billion in economic losses since 1980. 

Climate change is expected to worsen water problems through drought, flooding, and water contamination caused by extreme weather events. “I can’t think of a country on the planet that doesn’t have a water scarcity issue,” says Rob Simm, senior vice president at Stantec, an engineering consultancy focused on sustainability, energy solutions, and renewable resources. 

Technological innovations, notably AI and electric vehicles, are also increasing industrial demand for water. “When you look at advanced manufacturing and the way technology is changing, we’re requiring more, higher volumes of ultrapure water [UPW]. This is a big driver of the industrial water market,” Simm says. AI, computing, and the electric vehicle industries all generate immense quantities of heat and require sophisticated cooling and cleaning. Manufacturing silicon wafers for semiconductor production involves intricate cleaning processes, requiring up to 5 million gallons of high-quality UPW daily. With rising demand for semiconductors, improvements in water treatment and reuse are imperative to prevent waste.

Data-driven industrial water management technologies are revolutionizing how enterprises approach conservation and sustainability. By layering sensors, data, and cloud-based platforms over physical water systems, they optimize those systems and let industrial and human users share access to water. Integrating AI, machine learning (ML), data analytics, internet of things (IoT) sensors, digital twins, and social media enables not just rapid data analysis but also lets manufacturers measure water quality minutely, forecast demand, and meet sustainability goals.
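As a toy illustration of the demand-forecasting idea described above, the sketch below predicts the next hour's water demand from recent flow-meter readings using a simple moving average. The readings, units, and window size are invented for the example; real systems would use far richer models.

```python
def moving_average_forecast(readings, window=3):
    """Forecast the next reading as the mean of the last `window` values."""
    if len(readings) < window:
        raise ValueError("not enough readings for the chosen window")
    recent = readings[-window:]
    return sum(recent) / len(recent)

# Hypothetical hourly flow-meter readings (m^3/h) from a plant sensor.
hourly_flow = [118.0, 121.5, 119.0, 124.0, 126.5, 125.0]
forecast = moving_average_forecast(hourly_flow)
print(round(forecast, 2))  # mean of the last three readings
```

A production forecaster would account for seasonality and weather, but even this minimal baseline shows how sensor streams turn into an actionable demand estimate.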

More integrated industrial water management solutions, including reuse, industrial symbiosis, and zero liquid discharge (ZLD), will all be crucial as greenfield industrial projects look toward water reuse. “Water is an input commodity for the industrial process, and wastewater gives you the opportunity to recycle that material back into the process,” says Simm. 

Treating a precious resource

Water filtration systems have evolved over the past century, especially in agriculture and industry. Processes such as low-pressure membrane filtration and reverse osmosis are expanding water access for both human and industrial users. Membrane technologies, which continue to evolve, have halved the cost of desalinated water over the past decade, for example, and new desalination methods run on green power and are dramatically increasing water output.

Advances in AI, data processing, and cloud computing could open a new chapter in water access. The automation they enable allows quicker, more precise decision-making, and automated preset parameters let facilities operate at capacity with less risk. “Digital technology and data play a crucial role in developing technology for water innovations, enabling better management of resources, optimizing treatment processes, and improving efficiency in distribution,” says Vincent Puisor, global business development director at Schneider Electric.
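To make the idea of automated, preset operating parameters concrete, here is a minimal sketch of a limit check a facility might run on incoming sensor readings. The parameter names and bands are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Limit:
    name: str
    low: float
    high: float

# Hypothetical preset operating bands for a treatment stage.
LIMITS = [
    Limit("turbidity_ntu", 0.0, 0.3),
    Limit("ph", 6.5, 8.5),
]

def check(readings: dict) -> list:
    """Return the names of parameters outside their preset band."""
    return [limit.name for limit in LIMITS
            if not (limit.low <= readings[limit.name] <= limit.high)]

alarms = check({"turbidity_ntu": 0.45, "ph": 7.2})
print(alarms)  # ['turbidity_ntu']
```

In a real plant this logic would sit inside a SCADA or PLC layer and trigger automated responses rather than just a printed list.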

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Battle lines redrawn as Argentina’s lithium mines ramp up to meet electric car demand

Mining companies accused of colonial ‘divide and rule’ tactics in their pursuit of the precious metal that lies under the country’s salt flats

• Harriet Barber in the Salinas Grandes, Argentina. Photographs by John Owens

In the vast white desert of the Salinas Grandes, Antonio Calpanchay, 45, lifts his axe and slices the ground. He has worked this land since he was 12, chopping and collecting salt, replenishing it for the seasons ahead and teaching his children to do the same.

“All of our aboriginal community works here, even the elders,” he says, sheltering his weathered face from the sun. “We always have. It is our livelihood.”

© Photograph: John Owens/The Guardian


‘It’s been hell’: injured Amazon workers turn to GoFundMe to pay bills

Amazon pledged to create ‘Earth’s safest place to work’. Three warehouse workers speak about their experiences

Amazon workers left unable to work by injuries on the job have resorted to online fundraising campaigns to pay their bills as they fight for compensation and disability benefits.

Three current employees, injured while working in the technology giant’s warehouses, described a “bureaucratic, terrible process” while they sought financial support. One was rendered homeless.

© Photograph: Lucas Jackson/Reuters


Claude 3.5 suggests AI’s looming ubiquity could be a good thing

In this week’s newsletter: If you don’t like chatbots popping up everywhere, get ready to be peeved. But the latest version of Anthropic’s model shows AI is becoming more useful – and, crucially, more affordable

The frontier of AI just got pushed a little further forward. On Friday, Anthropic, the AI lab set up by a team of disgruntled OpenAI staffers, released the latest version of its Claude LLM. From Bloomberg:

The company said Thursday that the new model – the technology that underpins its popular chatbot Claude – is twice as fast as its most powerful previous version. Anthropic said in its evaluations, the model outperforms leading competitors like OpenAI on several key intelligence capabilities, such as coding and text-based reasoning.

It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone.

As part of our commitment to safety and transparency, we’ve engaged with external experts to test and refine the safety mechanisms within this latest model. We recently provided Claude 3.5 Sonnet to the UK’s Artificial Intelligence Safety Institute (UK AISI) for pre-deployment safety evaluation. The UK AISI completed tests of 3.5 Sonnet and shared their results with the US AI Safety Institute (US AISI) as part of a Memorandum of Understanding, made possible by the partnership between the US and UK AISIs announced earlier this year.

© Photograph: Canadian Press/REX/Shutterstock


Social Media Warning Labels, Should You Store Passwords in Your Web Browser?

In this episode of the Shared Security Podcast, the team debates the Surgeon General’s recent call for social media warning labels and explores the pros and cons. Scott discusses whether passwords should be stored in web browsers, potentially sparking strong opinions. The hosts also provide an update on Microsoft’s delayed release of Copilot+ PCs […]

The post Social Media Warning Labels, Should You Store Passwords in Your Web Browser? appeared first on Shared Security Podcast.


Beware! Deepfakes of Mukesh Ambani and Virat Kohli Used to Promote Betting Apps


A new deepfake investment scam has emerged online, misusing prominent Indian figures such as Asia's richest person, Mukesh Ambani, and former Indian national cricket captain Virat Kohli. The videos falsely depict the billionaire and the cricket star endorsing betting apps, luring unsuspecting viewers into potential scams. Using advanced deepfake techniques, they manipulate the men's appearances and voices to make it seem they are endorsing the apps, a deceptive tactic that exploits the trust and influence these figures hold.

The Strange Case of Deepfake Scams

This deepfake investment scam also targets well-known TV journalists, manipulating footage to create a false impression of authenticity. These altered videos imply endorsements from reputable sources, exploiting public trust for illicit gain.

In one video being widely circulated online (https://www.facebook.com/watch/?v=2401849440205008), Ambani is falsely quoted as saying, “Our honest app has already helped thousands of people in India earn money. There is a 95% chance of winning here.” Meanwhile, Kohli is shown endorsing the app, stating, “Aviator is an investment game where you can make huge profits. For example, if you have 500 Rupees, that will be enough because when the airplane flies your stake will automatically multiply by the number that the airplane reaches. Your investment can multiply 10 times. I personally recommended this app.” Both appear to discuss the game and promise high returns, claiming minimal investments can lead to significant profits. Such false promises prey on the aspirations of viewers seeking easy financial gains, ultimately leading to losses for the many who fall victim.

The Cyber Express has investigated these Aviator game scams and found that most of these apps have been banned from platforms like the Google Play Store and Apple App Store over their deceptive practices. Despite this, scammers continue to circulate them through alternate channels, using deepfake videos to lend them an air of legitimacy.

The Aviator Game Scams Leveraging Deepfake Technology 

Similar incidents involving other public figures have also come to light, including cricket legend Sachin Tendulkar. Fake videos were created to deceive the public, and Tendulkar himself spoke out against such misuse of technology. In one deepfake video, Tendulkar is depicted talking about his daughter Sara playing a particular game and is falsely quoted as saying, “I am surprised how easy it is to earn well these days.”

[Image: Sachin Tendulkar deepfake scam (Source: X)]

Tendulkar later posted a tweet exposing the scam behind the videos: “These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social Media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and deepfakes.”

Previously, the Indian media company The Quint decoded another instance of deepfake videos, involving Mukesh Ambani’s son, Anant Ambani, and Virat Kohli promoting gaming apps in viral clips circulating on social media. Discrepancies in lip-sync and mechanical movements in Ambani’s video suggested a potential deepfake.

[Image: Anant Ambani deepfake (Source: The Quint)]

Investigation revealed that Ambani’s original video related to the launch of an animal rescue program, while Kohli’s was traced back to a discussion of religious harmony, debunking the claim that either video promoted gaming applications. Across all these cases, one app heavily promoted by social media pages and deepfake videos recurs: the Aviator game.
Aviator, an online casino game developed by Spribe, has become one of the most controversial games on the internet. Its “easy money” pitch has repeatedly proved too good to be true. Inside the game, players fly planes to earn money, influencing outcomes through their actions, a feature unusual in online gaming. The game includes bonus rounds and mini-games and is accessible on desktop, mobile, and tablet platforms to reach a broad audience.

Despite its popularity, however, Aviator has gained notoriety for misleading promises and unfair practices. Users have reported massive financial losses after investing in what turned out to be a fraudulent scheme, and reviews and user experiences describe consistent patterns of manipulation and rigged outcomes designed to benefit the operators at the expense of trusting players. The deepfake videos of celebrities endorsing the app raise further questions about its authenticity and the intent behind this aggressive marketing strategy.

The proliferation of deepfake videos exploiting the reputations of public figures like Mukesh Ambani and Virat Kohli highlights the urgent need for stringent measures against digital deception. For consumers, vigilance and skepticism are essential in navigating an increasingly complex technological era rife with potential scams and misinformation.

Former Cisco CEO: Nvidia's AI Dominance Mirrors Cisco's Internet Boom, But Market Dynamics Differ

Nvidia has become the most valuable listed company in the US, riding a wave of AI revolution that brings back memories of an earlier one this century. The last time a big provider of computing infrastructure was the most valuable US company was in March 2000, when networking-equipment maker Cisco took that spot at the height of the dot-com boom.

Former Cisco CEO John Chambers, who led the company through that boom, said the implications of AI are larger than those of the internet and cloud computing combined, but the dynamics differ. “The implications in terms of the size of the market opportunity is that of the internet and cloud computing combined,” he told the WSJ. “The speed of change is different, the size of the market is different, the stage when the most valuable company was reached is different.”

The story adds: Chambers said [Nvidia CEO] Huang was working from a different playbook than Cisco but was facing some similar challenges. Nvidia has a dominant market share, much like Cisco did with its products as the internet grew, and is also fending off rising competition. Also like Nvidia, Cisco benefited from investments before the industry became profitable. “We were absolutely in the right spot at the right time, and we knew it, and we went for it,” Chambers said.

Read more of this story at Slashdot.

How Blockchain Technology Can Help Safeguard Data and Strengthen Cybersecurity


By Mohan Subrahmanya, Country Leader, Insight Enterprises

In an era beset by data breaches and mounting cyber threats, blockchain technology is emerging as a key tool for strengthening cybersecurity and protecting data. It offers a decentralized, secure way of recording critical data, bringing benefits to many sectors through a sound framework for secure transactions and data integrity.

Understanding Blockchain Technology

At its core, blockchain is a decentralized ledger that records transactions across a network of computers, ensuring that data remains transparent, secure, and immutable. Each block in the blockchain contains a timestamp, transaction data, and a cryptographic hash of the previous block, creating a chain of records that is nearly impossible to alter.

The technology's rapid growth is fueled by the need to simplify business processes, increase transparency, improve traceability, and cut costs. According to ReportLinker, the global blockchain market was expected to grow from $1.2 billion in 2018 to $23.3 billion in 2023, a compound annual growth rate of roughly 80%.
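The chain-of-hashes structure described above can be sketched in a few lines. This is an illustrative toy, not a real blockchain implementation; the field names are invented for the example.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """Create a block holding a timestamp, data, and the previous block's hash."""
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain.
genesis = make_block("genesis", "0" * 64)
block2 = make_block("tx: A pays B", block_hash(genesis))
block3 = make_block("tx: B pays C", block_hash(block2))

# Each block commits to the one before it: altering the data in
# 'genesis' changes its hash and breaks the link stored in block2.
assert block2["prev_hash"] == block_hash(genesis)
assert block3["prev_hash"] == block_hash(block2)
```

Because every block embeds the previous block's hash, rewriting any historical record invalidates all subsequent links, which is why tampering is detectable without trusting any single node.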

Key Components of Blockchain That Ensure Data Security

Blockchain technology enhances data security by ensuring that data, once recorded, cannot be altered or deleted without network consensus, thus maintaining integrity.

Decentralization is one of its key features. Unlike traditional centralized databases, blockchain operates on a distributed network. This structure removes the single point of failure and makes it much harder for malicious actors to compromise the entire system. By distributing data across multiple nodes, blockchain eliminates the vulnerabilities of centralized servers.

Cryptographic hash functions also play a crucial role in blockchain security. These mathematical algorithms generate a unique identifier for each block, making it virtually impossible to alter recorded data without detection. Any attempted alteration is visible on the chain, which both ensures data integrity and provides a reliable mechanism for detecting and preventing fraud.

Blockchain also employs consensus mechanisms such as proof of work (PoW) and proof of stake (PoS) to validate transactions and keep the network consistent. By admitting only authentic transactions to the chain, these mechanisms prevent double spending and other fraudulent practices. Digital signatures, which use a private key to sign transactions, add a further layer of security: only authorized individuals can initiate or modify entries, while anyone with the public key can verify a transaction's authenticity.
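As a minimal sketch of the proof-of-work idea mentioned above: a miner searches for a nonce that makes the block's hash meet a difficulty target, here approximated as a required number of leading zero hex digits. The payload and difficulty are invented for illustration.

```python
import hashlib

def mine(block_bytes: bytes, difficulty: int = 3) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes work (about 16**3 tries on average here);
# verifying it afterwards takes a single hash. That asymmetry is what
# makes rewriting history computationally expensive.
nonce = mine(b"block payload")
digest = hashlib.sha256(b"block payload" + str(nonce).encode()).hexdigest()
assert digest.startswith("000")
```

Real networks express the target as a full 256-bit threshold and adjust difficulty dynamically, but the search-then-cheap-verify pattern is the same.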

Applications Across Sectors

Blockchain could have a significant impact on cybersecurity across sectors, and many organizations, recognizing its business benefits, are increasingly adopting it, from manufacturing and healthcare to supply chains and beyond.

Financial services, for instance, can benefit from blockchain's ability to secure transactions, reduce fraud, and improve transparency. Healthcare can use it to securely store patient records and share them only with authorized personnel, ensuring confidentiality and accuracy. In manufacturing, blockchain is primarily used to move and manage digital assets and physical goods, enhancing transparency and traceability. Supply chain management can use it to keep a transparent, immutable record of product provenance, preventing counterfeiting and ensuring authenticity. Government services can use blockchain to make public records, voting systems, and identity management more secure and efficient.

Key Challenges and Considerations

Despite its many benefits, blockchain faces real challenges. Scalability is a major concern: as transaction volume grows, the chain can become slow and costly to maintain. Consensus mechanisms such as PoW also demand significant computational power, resulting in considerable energy consumption. Regulatory uncertainty is another issue, as an evolving legal landscape can hold back widespread adoption.

Addressing these challenges is crucial for blockchain's continued growth. Global efforts are under way to build more scalable systems and more efficient consensus methods, and regulatory frameworks are evolving to offer clearer guidance for implementing the technology.

Growth of Blockchain Technology in India

India is seeing a strong increase in blockchain adoption across many sectors, driven by government-backed projects and initiatives such as the National Blockchain Framework, which aim to improve transparency, security, and efficiency. The technology's potential to enhance data integrity and operational efficiency aligns well with India's digital transformation goals, making blockchain a key component of the nation's technological advancement.

Blockchain has proved a game-changer for data security and a strong support for cybersecurity: because it is decentralized, immutable, and transparent, it provides robust protection against cyber threats. If the challenges of scale and regulatory uncertainty can be overcome, distributed ledger technology could become a cornerstone of secure digital infrastructure, driving innovation across sectors. The more organizations study its potential applications, the more blockchain will change the face of data security.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is their opinion and is not intended to malign any religion, ethnic group, club, organization, company, or individual.

Cop busted for unauthorized use of Clearview AI facial recognition resigns


(Credit: Francesco Carta fotografo | Moment)

An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear "misuse" of Clearview AI's controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as "the world's largest facial recognition network." The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.


AI-Powered Transformation: Optimizing B2B SaaS for Efficiency and Growth (Without Sacrificing Your Team)

The fear of AI replacing human jobs in B2B SaaS is a myth. AI excels at automating repetitive tasks, allowing your team to focus on strategic initiatives.

The post AI-Powered Transformation: Optimizing B2B SaaS for Efficiency and Growth (Without Sacrificing Your Team) appeared first on Security Boulevard.

Ticketmaster Data Breach and Rising Work from Home Scams

In episode 333 of the Shared Security Podcast, Tom and Scott discuss a recent massive data breach at Ticketmaster involving the data of 560 million customers, the blame game between Ticketmaster and third-party provider Snowflake, and the implications for both companies. Additionally, they discuss Live Nation’s ongoing monopoly investigation. In the ‘Aware Much’ segment, the […]

The post Ticketmaster Data Breach and Rising Work from Home Scams appeared first on Shared Security Podcast.


Securing Operational Technology: The Foundation of Modern Industrial Operations in META Region


In the field of business operations in the META region, operational technology (OT) acts as a backbone, facilitating system maintenance, control, and optimization. From factories to energy projects, OT systems play an important role in increasing efficiency, ensuring safety, and maintaining reliability. However, with the increasing interconnectivity between OT and the Internet of Things (IoT), as well as the growing threat landscape, securing operational technology environments has never been more crucial.

Understanding Operational Technology

OT encompasses the hardware and software used to monitor and control physical devices and processes within industrial operations, in sectors such as manufacturing, energy, transportation, and utilities. It comprises two main categories: Internet of Things (IoT) devices, which add networking capabilities to traditional OT systems, and Industrial Control Systems (ICS), specialized systems dedicated to monitoring and controlling industrial processes.
Key functions of OT include:
  • Driving innovation, improving productivity, ensuring safety and reliability, and maintaining critical infrastructure.
  • Enhancing efficiency by automating and optimizing processes, minimizing downtime, reducing waste, and maximizing output.
  • Ensuring safety by monitoring environmental conditions, detecting abnormalities, and triggering automated responses to prevent accidents.
  • Providing reliable performance in harsh environments to prevent financial losses and risks to public safety.
  • Maintaining product quality and consistency by monitoring and adjusting production processes.
  • Enabling data-driven decision-making by generating insights into operations.
  • Managing critical infrastructure such as energy grids, water treatment plants, and transportation networks.

Differentiating OT from IT

While operational technology shares similarities with information technology (IT), it differs in several key respects. IT focuses on managing an organization's digital information, while OT controls the highly specialized systems that keep critical processes running: Supervisory Control and Data Acquisition (SCADA) systems, Programmable Logic Controllers (PLCs), sensors, and actuators, among others.

OT is not limited to manufacturing; it is also found in warehouses and everyday outdoor settings such as parking lots and highways. Examples include ATMs and other kiosks, connected buses, trains and service fleets, weather stations, and electric vehicle charging systems. The key difference is that IT centers on an organization's front-end informational activities, while OT focuses on its back-end production.

The merging of the two, known as IT/OT convergence, aims to enhance efficiency, safety, and security in industrial operations, yet it also introduces cybersecurity challenges as OT systems become more interconnected with IT networks.

IoT and OT Cybersecurity Forecast for META in 2024

Cybersecurity stands as a paramount concern for executives across OT sectors in the META region. As the region witnesses a surge in cyber threats, organizations are increasingly investing in cybersecurity services and solutions to safeguard critical infrastructure and sensitive data. Modernization and optimization top the cyber-investment priorities for 2024, according to the Middle East findings of PwC's Digital Trust Insights 2024 report: more than half of respondents (53%) chose optimizing existing technologies and investments in order to identify those with the highest potential to create value, while 43% selected technology modernization, including cyber infrastructure. The year 2024 is poised to bring new challenges and advances in IoT and OT security that could shape the cybersecurity landscape in the META region.
Geopolitical Threats and APT Activity
With geopolitical tensions shaping the cybersecurity landscape, the META region is anticipated to witness heightened levels of Advanced Persistent Threat (APT) activity. Critical infrastructure, including shipping, power, and communications, will remain prime targets for cyber adversaries seeking to disrupt operations and undermine stability.
Escalating Costs of Cyber Attacks
The cost of cyberattacks is expected to escalate further in 2024, driven by an increase in ransom demands. Recent years have seen a significant rise in ransomware attacks globally, with cybercriminals targeting sectors such as healthcare and manufacturing. As ransom demands soar, organizations in the META region must bolster their cybersecurity defenses to mitigate financial and operational risks.
Heightened Threats to IoT and OT Deployments
Cyber threats targeting IoT and OT deployments are poised to intensify, posing significant risks to critical infrastructure and industrial systems. Health and safety departments, Industrial Control Systems (ICS), and IoT networks will remain prime targets for cyber adversaries, necessitating proactive cybersecurity measures to mitigate potential threats.
Focus on Network and Device Vulnerabilities
Cybercriminals will continue to exploit network and device vulnerabilities, highlighting the importance of robust patching and vulnerability scanning practices. Government infrastructures, finance, and retail sectors are particularly vulnerable to phishing attacks, underscoring the need for enhanced cybersecurity measures and employee awareness training.
Keeping a Lookout for AI
With AI coming to the fore and large language models helping cybercriminals with everything from drafting phishing emails to AI-based robo-calling, the surge in AI needs to be watched closely, and better regulation will be the need of the hour. On the defense front, many vendors are also pushing the limits of GenAI, testing what's possible. It could be some time before we see broad-scale use of "defence GPTs." In the meantime, the three most promising areas for using GenAI in cyber defence are threat detection and analysis; cyber risk and incident reporting; and adaptive controls tailored to an organization's threat profile, technologies, and business objectives.
Emphasis on Supply Chain Security
In 2024, supply chain vetting and internal security methods will become mainstream, as organizations strive to fortify their defenses against supply chain attacks. With compliance orders shifting from voluntary to mandatory, enterprises will be required to align with cybersecurity standards such as IEC 62443 to mitigate supply chain risks effectively.
Rise of Cyber Threat Intelligence
The year 2024 is poised to witness a surge in cyber threat intelligence investments, as organizations seek to enhance their threat detection and response capabilities. With C-level management increasingly involved in cybersecurity decision-making, enterprises will prioritize cyber threat intelligence feeds to bolster their security posture and safeguard critical infrastructure.
Expansion of Attack Surfaces
As digital transformation accelerates across sectors, the OT attack surface is expected to expand, providing cyber adversaries with new opportunities to exploit vulnerabilities. Industries such as manufacturing and healthcare must exercise caution and diligence in navigating the complexities of digital transformation to mitigate emerging cyber threats effectively.

Structuring a Secure OT Network

Despite its critical importance, OT faces significant vulnerabilities, particularly concerning cybersecurity. As OT systems become increasingly interconnected with IT networks and the IoT, they become more exposed to cyber threats. Moreover, the inability to shut down OT systems for maintenance or upgrades poses challenges in implementing security measures effectively. With the steady adoption of IoT and personal connected devices, a more than four-fold year-over-year increase in IoT malware attacks has been reported in the Middle East region alone. This highlights the persistence of cybercriminals and their ability to adapt to evolving conditions when launching IoT malware attacks. They are targeting legacy vulnerabilities, with 34 of the 39 most popular IoT exploits directed at vulnerabilities that have existed for over three years. Manufacturing has been the biggest target of these attacks, followed by oil & gas, power grids, and maritime.

Securing Operational Technology with a 4-Phase Approach

To address these challenges, organizations must adopt a proactive approach to building secure OT environments. This involves implementing comprehensive security measures and adhering to industry best practices. A four-phase approach can guide organizations in building a secure OT network:
  1. Assess: Conduct an assessment to evaluate the current OT environment against industry standards and identify risks and vulnerabilities.
  2. Design: Develop a comprehensive design considering elements such as network segmentation, vendor security, and defense-in-depth strategies.
  3. Implement: Implement changes into the OT network while ensuring interoperability and compatibility with existing systems.
  4. Monitor and Respond: Establish mechanisms for detection and response to security incidents, enabling a dedicated security team to contain and eradicate threats effectively.
In addition to the four-phase approach, organizations can implement other security best practices, including access control, patch management, incident response planning, physical security measures, employee training, and vendor security assessments. By adopting a holistic approach to OT security and implementing robust security measures, organizations can mitigate cyber threats, protect critical infrastructure, and maintain the integrity and reliability of their operational systems. In an era of evolving cyber threats, securing Operational Technology is paramount to safeguarding industrial operations and ensuring the resilience of modern societies.
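As a minimal illustration of the "Assess" phase described above, the following Python sketch checks an inventory of OT devices against a list of known-vulnerable firmware versions. All device names, vendors, versions, and advisory IDs here are invented for illustration; a real assessment would draw on a maintained vulnerability feed and a full asset inventory.

```python
# Hypothetical sketch of the "Assess" phase: flag OT devices whose
# firmware appears on a known-vulnerable list. All data is illustrative.

# (vendor, firmware_version) -> advisory id (invented examples)
KNOWN_VULNERABLE = {
    ("plc-vendor-a", "2.1.0"): "ADV-0001",
    ("rtu-vendor-b", "1.4.2"): "ADV-0002",
}

def assess(inventory):
    """Return (device_name, advisory_id) findings for vulnerable firmware."""
    findings = []
    for device in inventory:
        key = (device["vendor"], device["firmware"])
        if key in KNOWN_VULNERABLE:
            findings.append((device["name"], KNOWN_VULNERABLE[key]))
    return findings

inventory = [
    {"name": "plc-01", "vendor": "plc-vendor-a", "firmware": "2.1.0"},
    {"name": "rtu-07", "vendor": "rtu-vendor-b", "firmware": "1.5.0"},
]

print(assess(inventory))  # flags plc-01 only
```

The output of a pass like this feeds directly into the "Design" phase: each finding identifies a device that needs segmentation, compensating controls, or a patch window.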

Blockchain Tech Firms Breached? DFINITY & Cryptonary User Data Allegedly Leaked

Data Breaches at DFINITY and Cryptonary

A threat actor (TA) has posted databases belonging to two prominent companies utilizing blockchain technology, The DFINITY Foundation and Cryptonary, on the Russian-language forum Exploit. The databases, if genuine, contain sensitive information of hundreds of thousands of users, allegedly exposing them to significant security risks. The threat actor's post on Exploit detailed the alleged data breaches at DFINITY and Cryptonary.

Details of Alleged Data Breaches at DFINITY and Cryptonary

For The DFINITY Foundation, the threat actor claimed to have over 246,000 user records with information fields including:
  • Email Address
  • First Name
  • Last Name
  • Birthday
  • Member Rating
  • Opt-in Time and IP
  • Confirm Time and IP
  • Latitude and Longitude
  • Timezone, GMT offset, DST offset
  • Country Code, Region
  • Last Changed Date
  • Leid, EUID
  • Notes
For Cryptonary, the post advertised 103,000 user records containing:
  • Email
  • First Name
  • Last Name
  • Organization
  • Title
  • Phone Number
  • Address
  • City, State/Region, Country, Zip Code
  • Historic Number of Orders
  • Average Order Value
  • User Topics
The prices quoted for these datasets were $9,500 for DFINITY's data and $3,500 for Cryptonary's data.

The DFINITY Foundation is a Swiss-based not-for-profit organization known for its innovative approach to blockchain technology. It operates a web-speed, internet-scale public platform that enables smart contracts to serve interactive web content directly into browsers. This platform supports the development of decentralized applications (dapps), decentralized finance (DeFi) projects, open internet services, and enterprise systems capable of operating at hyper-scale. Cryptonary, for its part, is a leading platform in the crypto tools and research space, providing insights and analysis to help users navigate the complexities of the cryptocurrency market and capitalize on emerging opportunities.

When The Cyber Express Team accessed the official website of The DFINITY Foundation, they found a message warning visitors about phishing scams on third-party job boards. The message read: "Recently, we've seen a marked increase in phishing scams on third-party job boards — where an individual impersonating a DFINITY team member persuades job-seekers to send confidential information and/or payment. As good practice, please continue to be vigilant regarding fraudulent messages or fake accounts impersonating DFINITY employees. If you need to confirm the legitimacy of a position, please reach out to recruiting@dfinity.org." While this message serves as a caution regarding phishing scams, it is unclear whether it hints at a broader security issue or is merely a general warning. Both the DFINITY website and the Cryptonary website appeared fully functional, with no evident signs of compromise.
The Cyber Express Team reached out to officials at both companies for verification of the breach claims. As of the time of writing, however, no official response had been received, leaving the authenticity of the threat actor's claims unverified. Until either company issues an official statement, it remains unclear whether the website notice reflects an ongoing attack or is simply a general caution.

Implication of Cyberattack on Blockchain Technology

If the claims of the data breaches are proven true, the implications could be far-reaching for both The DFINITY Foundation and Cryptonary. The exposure of sensitive user data could lead to:
  • Identity Theft and Fraud: Users whose personal information has been compromised could become victims of identity theft and fraud, leading to financial and personal repercussions.
  • Reputational Damage: Both companies could suffer significant reputational harm. Trust is a critical component in the blockchain and cryptocurrency sectors, and a data breach could erode user confidence in their platforms.
  • Legal and Regulatory Consequences: Depending on the jurisdictions affected, both companies might face legal actions and regulatory fines for failing to protect user data adequately.
  • Operational Disruptions: Addressing the breach and enhancing security measures could divert resources and attention from other business operations, impacting overall performance and growth.
While the claims remain unverified, the potential consequences highlight the importance of vigilance and proactive security strategies. The Cyber Express Team will continue to monitor the situation and provide updates as more information becomes available.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Police Want to Treat Your Data Privacy Like Garbage. The Courts Shouldn't Let Them.

Imagine this: You lost your phone, or had it stolen. Would you be comfortable with a police officer who picked it up rummaging through the phone's contents without any authorization or oversight, thinking you had abandoned it? We'll hazard a guess: hell no, and for good reason.

Our cell phones and similar digital devices open a window into our entire lives, from messages we send in confidence to friends and family, to intimate photographs, to financial records, to comprehensive information about our movements, habits, and beliefs. Some of this information is intensely private in its own right; in combination, it can disclose virtually everything about a modern cell phone user.

If it seems like common sense that law enforcement shouldn't have unfettered access to this information whenever it finds a phone left unattended, you'll be troubled by an argument that government lawyers are advancing in a pending case before the Ninth Circuit Court of Appeals, United States v. Hunt. In Hunt, the government claims it does not need a warrant to search a phone that it deems to have been abandoned by its owner because, in ditching the phone, the owner loses any reasonable expectation of privacy in all its contents. As a basis for this claim, the government cites an exception to the Fourth Amendment's warrant requirement that applies to searches of abandoned property. But that rule was developed years ago in the context of property that is categorically different, and much less revealing, than the reams of diverse and highly sensitive information that law enforcement can access by searching our digital devices.

The Supreme Court has cautioned against uncritically extending pre-digital doctrines to modern technologies, like cell phones, that gather in one place so many of the privacies of life.
In a friend-of-the-court brief in Hunt, the ACLU and our coalition partners urge the Ninth Circuit to heed this call, and hold that even if the physical device may properly be considered abandoned, the myriad records that reside on a cell phone remain subject to full constitutional protection. Police should have to get a warrant before searching the data on a phone they find separated from its owner.

Cases about abandoned property are a poor fit for digital-age privacy

As the Supreme Court recognized more than 10 years ago, when the storage capacity of the median cell phone was a great deal less than it is today, advances in digital technology threaten to erode our privacy against government intrusion if courts apply to the troves of information on a cell phone the same rule they would use to analyze a search of a cigarette pack. In a case called Riley v. California, the Supreme Court held that even though police may warrantlessly search items in a suspect's pockets during arrest to avoid the destruction of evidence or identify danger to the arresting officers, a warrantless inspection of the information on an arrestee's phone went too far. Why? Because phones, "[w]ith all they may contain and all they may reveal," are different.

Here too, the information on a cell phone is qualitatively and quantitatively unlike the items that underpin precedents permitting warrantless searches of abandoned property.
The most recent of those precedents was decided in 1988, long before cell phones became a "pervasive and insistent part of daily life." In case you're keeping score, 1988 was the year Motorola debuted its first "bag phone," an early transportable telephone the size of a briefcase that needed to be lugged around with a separate battery and transceiver. In that case, the Supreme Court held that people lose their legal privacy in items, like curbside trash, that they knowingly and voluntarily leave out for any member of the public to see. But when you fail to reclaim a lost or abandoned phone, do you knowingly and voluntarily renounce all of your data, too? Our brief argues that the Ninth Circuit should not use the same reasoning that has historically applied to garbage left out for collection and items discarded in a hotel wastepaper basket after check-out to impute to a cell phone's owner an intent to give up all the revealing information on their device, just because it was left behind.

Cell phones contain vast amounts of diverse and revealing information, unlike other categories of objects

The immense storage capacity of modern cell phones allows people to carry in their palm a volume and variety of private information that is genuinely unprecedented in cases concerning searches of abandoned property. Our cell phones provide access to information comparable in quantity and breadth to what police might glean from a thorough search of a house.
Unlike a house, though, a cell phone is relatively easy to lose. You carry it with you almost all the time. It can fall between seat cushions or slip out of a loose pocket. You might leave it at the check-out desk after making a purchase or forget it on the bus as you hasten to make your stop. Even if you eventually give up looking for the device, thereby "abandoning" it, this doesn't evince any subjective intent to relinquish to whoever might pick it up all the information the phone can store or access through the internet.

Cloud backups mean that the data on a phone often isn't lost even when the device goes missing

An additional reason that the privacy of the information on a cell phone shouldn't hinge on a person's ongoing possession of their device is that you can still access and control much of the data on your phone independently of the device itself. While modern cell phones store extraordinary and growing amounts of data locally, a lot of this information resides also on remote servers: think of the untold messages, contacts, notes, and images you may have backed up on iCloud or its equivalents. If you have access to a computer or tablet, all this information remains yours to view, edit, and delete whether or not your phone is handy. Trade in your cell phone, and you can seamlessly download this information onto a new device, reviewing voicemail messages and carrying on existing conversations in text without interruption. In this sense, a cell phone is more properly analogized to a house key than a house, something we use to access vast amounts of information that's largely stored elsewhere. It would be absurd to suggest that a person intends to open up their house for unrestrained searches by police whenever they drop their house key.
Yet this is essentially the position the government in the Hunt case argued, successfully, in the trial court: Because the defendant discarded his phone, any piece of information stored on that phone was fair game, regardless of whether it was backed up.

The Ninth Circuit has an opportunity in Hunt to correct the trial court's error and clarify that the rule governing police searches of the information on a lost or abandoned cell phone does not defy common-sense intuitions about what information we mean to give up when we lose track of our devices. The information on your cell phone is highly private and revealing. If the police want authority to review it, the Constitution requires of them something simple: get a warrant.

Google Eats Rocks, a Win for A.I. Interpretability and Safety Vibe Check

“Pass me the nontoxic glue and a couple of rocks, because it’s time to whip up a meal with Google’s new A.I. Overviews.”

© Photo Illustration by The New York Times; Photo: Enter89/Getty Images (rocks); T Kimura/Getty Images (plate)

ScarJo vs. ChatGPT, Neuralink’s First Patient Opens Up, and Microsoft’s A.I. PCs

“Did you ever think we would have a literal Avenger fighting back against the relentless march of A.I.? Because that’s sort of what this story is about.”

© Photo Illustration by The New York Times; Photo: Evan Agostini/Invision, via Associated Press

States Dust Off Obscure Anti-Mask Laws to Target Pro-Palestine Protesters

Arcane laws banning people from wearing masks in public are now being used to target people who wear face coverings while peacefully protesting Israel's war in Gaza. That's a big problem.

In the 1940s and 50s, many U.S. states passed anti-mask laws as a response to the Ku Klux Klan, whose members often hid their identities as they terrorized their victims. These laws were not enacted to protect those victims, but because political leaders wanted to defend segregation as part of a "modern South" and felt that the Klan's violent racism was making them look bad.

Now these laws are being used across the country to try to clamp down on disfavored groups and movements, raising questions about selective prosecution. Just this month, Ohio Attorney General Dave Yost sent a letter to the state's 14 public universities alerting them that protesters could be charged with a felony under the state's little-used anti-mask law, which carries penalties of between six and 18 months in prison. An Ohio legal expert, Rob Barnhart, observed that he'd never heard of the state's law being applied previously, even to bank robbers wearing masks.
While Yost framed his letter as "proactive guidance," Barnhart countered that "I find it really hard to believe that this is some public service announcement to students to be aware of a 70-year-old law that nobody uses."
Ohio officials aren't the only ones who seem to be selectively enforcing anti-mask laws against student protesters. Administrators at the University of North Carolina have warned protesters that wearing masks violates the state's anti-mask law and "runs counter to our campus norms and is a violation of UNC policy." Students arrested during a protest at the University of Florida were charged with, among other things, wearing masks in public. At the University of Texas at Austin, Gov. Greg Abbott and university officials called in state troopers to violently break up pro-Palestinian protests after the school rescinded permission for a rally on the grounds that protesters had a "declared intent to violate our policies and rules." One of the rules the administrators cited was a university ban on wearing face masks "to obstruct law enforcement."

At a time when both public and private actors are increasingly turning to invasive surveillance technologies to identify protesters, mask-wearing is an important way for us to safeguard our right to speak out on issues of public concern. While the ACLU has raised concerns about how anti-mask laws have been wielded for decades, we are especially worried about the risk they pose to our constitutional freedoms in the digital age.

In particular, the emergence of face recognition technology has changed what it means to appear in public.
Increasingly omnipresent cameras and corrosive technology products such as Clearview AI allow police to easily identify people. So, too, can private parties. The push to normalize face recognition by security agencies threatens to turn our faces into the functional equivalent of license plates. Anti-mask laws are in effect a requirement to display those "plates" anytime one is in public. Humans are not cars.

Of course, mask-wearing is not just about privacy: it can also be an expressive act, a religious practice, a political statement, or a public-health measure. The ACLU has chronicled the mask-wearing debate for years. As recently as 2019, anti-mask laws were used against Occupy Wall Street protesters, anti-racism protesters, and police violence protesters. The coronavirus temporarily scrambled the mask-wearing debate and made a mask both a protective and a political act.

Today, one question that remains is whether and how the authorities distinguish between those who are wearing a mask to protect their identities and those who are wearing one to protect themselves against disease. That ambiguity opens up even more space for discretionary and selective enforcement.
In North Carolina, the state Senate is currently considering an anti-protest bill that would remove the exception for wearing a mask for health purposes altogether, and would add a sentencing enhancement for committing a crime while wearing a mask.

For those speaking out in support of the Palestinian people, being recognized in a crowd can have extreme consequences for their personal and professional security. During the Gaza protests, pro-Israel activists and organizations have posted the faces and personal information of pro-Palestine activists to intimidate them, get them fired, or otherwise shame them for their views. These doxing attempts have intensified, with viral videos showing counterprotesters demanding that pro-Palestinian protesters remove their masks at rallies. Professionally, employers have terminated workers for their comments about Israel and Palestine, and CEOs have demanded universities give them the names of protesters in order to blacklist them from jobs.

While wearing a mask can make it harder to identify a person, it's important for protesters to know that it's not always effective. Masks haven't stopped the Chinese government or Google, for example, from identifying protesters and taking action against them.
Technologies that can be used to identify masked protesters range from Bluetooth and WiFi signals, to historical cell phone location data, to constitutionally dubious devices called IMSI catchers, which pretend to be a cell tower and ping nearby phones, prompting phones to reply with an identifying ping of their own. We may also see the development of video analytics technologies that use gait recognition or body-proportion measurements. During Covid, face recognition also got much better at identifying people wearing partial face masks.

Protecting people's freedom to wear masks can have consequences. It can make it harder to identify people who commit crimes, whether they are bank robbers, muggers, or the members of the "violent mob" that attacked a peaceful protest encampment at UCLA. Like all freedoms, the freedom to wear a mask can be abused. But that does not justify taking that freedom away from those protesting peacefully, especially in today's surveillance environment.

Anti-mask laws, undoubtedly, have a significant chilling effect on some protesters' willingness to show up for causes they believe in. The bravery of those who do show up to support a highly controversial cause in the current surveillance landscape is admirable, but Americans shouldn't have to be brave to exercise their right to protest.
Until privacy protections catch up with technology, officials and policymakers should do all they can to make it possible for less-brave people to show up and protest. That includes refusing to use anti-mask laws to target peaceful protesters.

Coalition to Calexico: Think Twice About Reapproving Border Surveillance Tower Next to a Public Park

Update May 15, 2024: The letter has been updated to include support from the Southern Border Communities Coalition. It was re-sent to the Calexico City Council. 

On the southwest side of Calexico, a border town in California's Imperial Valley, a surveillance tower casts a shadow over a baseball field and a residential neighborhood. In 2000, the Immigration and Naturalization Service (the precursor to the Department of Homeland Security (DHS)) leased the corner of Nosotros Park from the city for $1 a year for the tower. But now the lease has expired, and DHS component Customs & Border Protection (CBP) would like the city to re-up the deal.

[Map of Nosotros Park showing the location of the tower]

But times—and technology—have changed. CBP’s new strategy calls for adopting powerful artificial intelligence technology to not only control the towers, but to scan, track and categorize everything they see.  

Now, privacy and social justice advocates including the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition have joined EFF in sending the city council a letter urging them to not sign the lease and either spike the project or renegotiate it to ensure that civil liberties and human rights are protected.  

The groups write:

The Remote Video Surveillance System (RVSS) tower at Nosotros Park was installed in the early 2000s when video technology was fairly limited and the feeds required real-time monitoring by human personnel. That is not how these cameras will operate under CBP's new AI strategy. Instead, these towers will be controlled by algorithms that will autonomously detect, identify, track and classify objects of interest. This means that everything that falls under the gaze of the cameras will be scanned and categorized. To an extent, the AI will autonomously decide what to monitor and recommend when Border Patrol officers should be dispatched. While a human being may be able to tell the difference between children playing games or residents getting ready for work, AI is prone to mistakes and difficult to hold accountable. 

In an era where the public has grave concerns on the impact of unchecked technology on youth and communities of color, we do not believe enough scrutiny and skepticism has been applied to this agreement and CBP's proposal. For example, the item contains very little in terms of describing what kinds of data will be collected, how long it will be stored, and what measures will be taken to mitigate the potential threats to privacy and human rights. 

The letter also notes that CBP’s tower programs have repeatedly failed to achieve the promised outcomes. In fact, the DHS Inspector General found that the early 2000s program “yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts.”

The groups are calling for Calexico to press pause on the lease agreement until CBP can answer a list of questions about the impact of the surveillance tower on privacy and human rights. Should the city council insist on going forward, they should at least require regular briefings on any new technologies connected to the tower and the ability to cancel the lease on much shorter notice than the 365 days currently spelled out in the proposed contract.  

Biden Announces $3.3 Billion Microsoft AI Center at Trump’s Failed Foxconn Site

The president’s visit to Wisconsin celebrated the investment by Microsoft in a center to be built on the site of a failed Foxconn project negotiated by his predecessor.


President Biden at the Intel campus in Chandler, Ariz., in March. His “Investing in America” agenda has focused on bringing billions of private-sector dollars into manufacturing and industries such as clean energy and artificial intelligence.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal mode of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border, a region whose communities are not only exposed to a tremendous amount of constant monitoring, but which also serves as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect wifi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022. 

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and have multiple levels of privacy, but for the most part change regularly (this is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard to track for a device that hasn’t paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close to each other so a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.
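To illustrate why the address type matters for trackability, here is a minimal sketch (ours, not anything from the NOTUS report or from the vendor) that classifies a BLE random address by its two most significant bits, as defined in the Bluetooth Core Specification. Note one assumption: whether an address is public or random at all is signalled separately in the advertising packet header, so this helper presumes the address is already known to be random-type.

```python
def classify_random_ble_address(addr: str) -> str:
    """Classify a BLE *random* device address by its two most significant bits.

    Assumes the address is already known to be random-type; public vs. random
    is signalled by a separate bit in the advertising header, not by these bits.
    """
    top_byte = int(addr.split(":")[0], 16)  # most significant byte is printed first
    msb2 = top_byte >> 6  # two most significant bits of the 48-bit address
    return {
        0b11: "static random (stable until reboot; easiest random type to track)",
        0b01: "resolvable private (rotates periodically, often ~15 minutes)",
        0b00: "non-resolvable private (rotates; hardest to track)",
    }.get(msb2, "reserved")

# A static random address keeps the same value across advertisements,
# which is what makes longer-term correlation feasible.
print(classify_random_ble_address("C4:3A:22:18:0B:9E"))  # static random
print(classify_random_ble_address("5F:12:34:56:78:9A"))  # resolvable private
```

A device that only ever emits resolvable private addresses is much harder for a roadside sensor to follow than one broadcasting a static or public address, which is why correlation across multiple sightings matters so much.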

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers, another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border. ALPRs are well understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSS are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSS and failed to obtain special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that make you a target for tracking. This is one in the many ways surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

Watch out for tech support scams lurking in sponsored search results

This blog post was written based on research carried out by Jérôme Segura.

A campaign using sponsored search results is targeting home users and taking them to tech support scams.

Sponsored search results are the ones that are listed at the top of search results and are labelled “Sponsored”. They’re often ads that are taken out by brands who want to get people to click through to their website. In the case of malicious sponsored ads, scammers tend to outbid the brands in order to be listed as the first search result.

The criminals that buy the ads will go as far as displaying the official brand’s website within the ad snippet, making it hard for an unsuspecting visitor to notice a difference.

Who, for example, would be able to spot that the below ad for CNN is not legitimate? You’d have to click on the three dots (next to where we added the “malicious ad” label) and look at the advertiser information to see that it’s not the legitimate owner of the brand.

fake CNN sponsored ad

Only then does it become apparent that the real advertiser is not CNN, but instead a company called Yojoy Network Technology Co., Limited.

Google Ads Transparency Center entry for Yojoy Network Technology

Below, you can see another fake advertisement by the same advertiser, this time impersonating Amazon.

Another fake ad by Yojoy impersonating Amazon

In our example, the scammers failed to use the correct CNN or Amazon icons, but in other cases (like another recent discovery by Jérôme Segura), scammers have even used the correct icon.

fake ad for Wall Street Journal

The systems of the people who click one of these links are likely assessed to determine the most profitable follow-up (using a method called fingerprinting). For systems running Windows, we found visitors are redirected to tech support scam websites such as this one.

Typical Fake Microsoft alert page with popups, prompts all telling the visitor to call 1-844-476-5780 (tech support scammers)


You undoubtedly know the type. Endless pop-ups, soundbites, and prompts telling the visitor that they should urgently call the displayed number to free their system of alleged malware.

These tech support scammers will impersonate legitimate software companies (e.g., Microsoft) and charge their victims hundreds or even thousands of dollars for completely bogus malware removal.
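The fingerprinting step mentioned earlier can be sketched in a few lines. This is a hypothetical reconstruction of the routing logic for illustration only (the category names and checks are our invention, not the scammers’ actual code); real campaigns inspect far more than the User-Agent string:

```python
# Hypothetical sketch of the server-side "fingerprinting" step: the ad's
# landing page inspects the visitor's User-Agent and routes each platform
# to whatever follow-up is most profitable. All names are invented.
def pick_redirect(user_agent: str) -> str:
    ua = user_agent.lower()
    if "windows" in ua:
        # Windows users get the fake "Microsoft alert" tech support page
        return "tech-support-scam"
    if "android" in ua or "iphone" in ua:
        # mobile users are funneled to a different monetization scheme
        return "mobile-scam-or-survey"
    # researchers, crawlers, and unprofitable targets see a harmless page,
    # which also makes the campaign harder to investigate
    return "decoy-page"

print(pick_redirect("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # tech-support-scam
```

Serving a harmless decoy to anything that doesn’t look like a profitable victim is a big part of why these campaigns survive ad review.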

Getting help if you have been scammed

Getting scammed is one of the worst feelings to experience. You may feel violated, and angry at yourself for letting your guard down. Perhaps you are even shocked and scared and don’t really know what to do next. The following tips will hopefully provide you with some guidance.

If you’ve already let the scammers in

  • Revoke any remote access the scammer has (if you are unsure, restart your computer). That should cut the remote session and kick them out of your computer.
  • Scan your computer for malware. The miscreants may have installed password stealers or other Trojans to capture your keystrokes. Use a program such as Malwarebytes to quickly identify and remove threats.
  • Change all your passwords. (Windows password, email, banking, etc.)

If you’ve already paid

  • Contact your financial institution/credit card company to reverse the charges and keep an eye out for future unwanted charges.
  • If you gave them personal information such as date of birth, Social Security Number, full address, name, and maiden name, you may want to look at some form of identity theft protection.

Reporting the scam

File a report

Shut down their remote software account

  • TeamViewer: write down the TeamViewer ID (9-digit code) and send it to TeamViewer’s support. They can later use the information you provide to block people/companies.
  • LogMeIn: Report abuse

Spread the word

You can raise awareness by letting your friends, family, and other acquaintances know what happened to you. Although sharing your experience of falling victim to these scams may be embarrassing, educating other people will help someone caught in a similar situation and deter further scam attempts.



Police Say a Simple Warning Will Prevent Face Recognition Wrongful Arrests. That's Just Not True.

Face recognition technology in the hands of police is dangerous. Police departments across the country frequently use the technology to try to identify images of unknown suspects by comparing them to large photo databases, but it often fails to generate a correct match. And numerous studies have shown that face recognition technology misidentifies Black people and other people of color at higher rates than white people. To date, there have been at least seven wrongful arrests we know of in the United States due to police reliance on incorrect face recognition results, and those are just the known cases. In nearly every one of those instances, the person wrongfully arrested was Black.

Supporters of police using face recognition technology often portray these failures as unfortunate mistakes that are unlikely to recur. Yet they keep coming. Last year, six Detroit police officers showed up at the doorstep of an eight-months-pregnant woman and wrongfully arrested her in front of her children for a carjacking that she could not plausibly have committed. A month later, the prosecutor dismissed the case against her.

Police departments should be doing everything in their power to avoid wrongful arrests, which can turn people’s lives upside down and result in loss of work, inability to care for children, and other harmful consequences. So, what’s behind these repeated failures? As the ACLU explained in a recent submission to the federal government, there are multiple ways in which police use of face recognition technology goes wrong. Perhaps most glaring is that the most widely adopted police policy designed to avoid false arrests in this context simply does not work. Records from the wrongful arrest cases demonstrate why.

It has become standard practice among police departments and companies making this technology to warn officers that a result from a face recognition search does not constitute a positive identification of a suspect, and that additional investigation is necessary to develop the probable cause needed to obtain an arrest warrant. For example, the International Association of Chiefs of Police cautions that a face recognition search result is “a strong clue, and nothing more, which must then be corroborated against other facts and investigative findings before a person can be determined to be the subject whose identity is being sought.” The Detroit Police Department’s face recognition technology policy adopted in September 2019 similarly states that a face recognition search result is only “an investigative lead and IS NOT TO BE CONSIDERED A POSITIVE IDENTIFICATION OF ANY SUBJECT. Any possible connection or involvement of any subject to the investigation must be determined through further investigation and investigative resources.”

Police departments across the country, from Los Angeles County to the Indiana State Police to the U.S. Department of Homeland Security, provide similar warnings. However ubiquitous, these warnings have failed to prevent harm.

We’ve seen police treat the face recognition result as a positive identification, ignoring or not understanding the warnings that face recognition technology is simply not reliable enough to provide a positive identification.

In Louisiana, for example, police relied solely on an incorrect face recognition search result from Clearview AI as purported probable cause for an arrest warrant. The officers did this even though the law enforcement agency signed a contract with the face recognition company acknowledging officers “must conduct further research in order to verify identities or other data generated by the [Clearview] system.” That overreliance led to Randal Quran Reid, a Georgia resident who had never even been to Louisiana, being wrongfully arrested for a crime he couldn’t have committed and held for nearly a week in jail.

In an Indiana investigation, police similarly obtained an arrest warrant based only upon an assertion that the detective “viewed the footage and utilized the Clearview AI software to positively identify the female suspect.” No additional confirmatory investigation was conducted.

But even when police do conduct additional investigative steps, those steps often exacerbate and compound the unreliability of face recognition searches. This is a particular problem when police move directly from a face recognition result to a witness identification procedure, such as a photographic lineup.

Face recognition technology is designed to generate a list of faces that are similar to the suspect’s image, but often will not actually be a match. When police think they have a match, they frequently ask a witness who saw the suspect to view a photo lineup consisting of the image derived from the face recognition search, plus five “filler” photos of other people. Photo lineups have long been known to carry a high risk of misidentification. The addition of face recognition-generated images only makes it worse. Because the face recognition-generated image is likely to appear more similar to the suspect than the filler photos, there is a heightened chance that a witness will mistakenly choose that image out of the lineup, even though it is not a true match.

This problem has contributed to known cases of wrongful arrests, including the arrests of Porcha Woodruff, Michael Oliver, and Robert Williams by Detroit police (the ACLU represents Mr. Williams in a wrongful arrest lawsuit). In these cases, police obtained an arrest warrant based solely on the combination of a false match from face recognition technology and a false identification from a witness viewing a photo lineup that was constructed around the face recognition lead and five filler photos. Each of the witnesses chose the face recognition-derived false match, instead of deciding that the suspect did not, in fact, appear in the lineup.

A lawsuit filed earlier this year in Texas alleges that a similar series of failures led to the wrongful arrest of Harvey Eugene Murphy Jr. by Houston police. And in New Jersey, police wrongfully arrested Nijeer Parks in 2019 after face recognition technology incorrectly flagged him as a likely match to a shoplifting suspect. An officer who had seen the suspect (before he fled) viewed the face recognition result and said he thought it matched his memory of the suspect’s face.

After the Detroit Police Department’s third wrongful arrest from face recognition technology became public last year, Detroit’s chief of police acknowledged the problem of erroneous face recognition results tainting subsequent witness identifications. He explained that by moving straight from face recognition result to lineup, “it is possible to taint the photo lineup by presenting a person who looks most like the suspect” but is not in fact the suspect. The department’s policy, merely telling police that they should conduct “further investigation,” had not stopped police from engaging in this bad practice.

Because police have repeatedly proved unable or unwilling to follow face recognition searches with adequate independent investigation, police access to the technology must be strictly curtailed, and the best way to do this is through strong bans. More than 20 jurisdictions across the country, from Boston to Pittsburgh to San Francisco, have done just that, barring police from using this dangerous technology.

Boilerplate warnings have proven ineffective. Whether these warnings fail because of human cognitive bias toward trusting computer outputs, poor police training, incentives to quickly close cases, implicit racism, lack of consequences, the fallibility of witness identifications, or other factors, we don’t know. But if the experience of known wrongful arrests teaches us anything, it is that such warnings are woefully inadequate to protect against abuse.

Virtual Reality and the 'Virtual Wall'

When EFF set out to map surveillance technology along the U.S.-Mexico border, we weren't exactly sure how to do it. We started with public records—procurement documents, environmental assessments, and the like—which allowed us to find the GPS coordinates of scores of towers. During a series of in-person trips, we were able to find even more. Yet virtual reality ended up being one of the key tools in not only discovering surveillance at the border, but also in educating people about Customs & Border Protection's so-called "virtual wall" through VR tours.

EFF Director of Investigations Dave Maass recently gave a lightning talk at University of Nevada, Reno's annual XR Meetup explaining how virtual reality, perhaps ironically, has allowed us to better understand the reality of border surveillance.


"Infrastructures of Control": Q&A with the Geographers Behind University of Arizona's Border Surveillance Photo Exhibition

Guided by EFF's map of Customs & Border Protection surveillance towers, University of Arizona geographers Colter Thomas and Dugan Meyer have been methodically traversing the U.S.-Mexico border and photographing the infrastructure that comprises the so-called "virtual wall."

An armored vehicle next to a surveillance tower along the Rio Grande River

Anduril Sentry tower beside the Rio Grande River. Photo by Colter Thomas (CC BY-NC-ND 4.0)

From April 12-26, their outdoor exhibition "Infrastructures of Control" will be on display on the University of Arizona campus in Tucson, featuring more than 30 photographs of surveillance technology, a replica surveillance tower, and a blown-up map based on EFF's data.

Locals can join the researchers and EFF staff for an opening night tour at 5pm on April 12, followed by an EFF Speakeasy/Meetup. There will also be a panel discussion at 5pm on April 19, moderated by journalist Yael Grauer, co-author of EFF's Street-Level Surveillance hub. It will feature a variety of experts on the border, including Isaac Esposto (No More Deaths), Dora Rodriguez (Salvavision), Pedro De Velasco (Kino Border Initiative), Todd Miller (The Border Chronicle), and Daniel Torres (Daniel Torres Reports).

In the meantime, we chatted with Colter and Dugan about what their project means to them.

MAASS: Tell us what you hope people will take away from this project?

MEYER: We think of our work as a way for us to contribute to a broader movement for border justice that has been alive and well in the U.S.-Mexico borderlands for decades. Using photography, mapping, and other forms of research, we are trying to make the constantly expanding infrastructure of U.S. border policing and surveillance more visible to public audiences everywhere. Our hope is that doing so will prompt more expansive and critical discussions about the extent to which these infrastructures are reshaping the social and environmental landscapes throughout this region and beyond.

THOMAS: The diversity of landscapes that make up the borderlands can make it hard to see how these parts fit together, but the common thread of surveillance is an ominous sign for the future and we hope that the work we make can encourage people from different places and experiences to find common cause for looking critically at these infrastructures and what they mean for the future of the borderlands.

A surveillance tower in a valley.

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: So much is written about border surveillance by researchers working off documents, without seeing these towers first hand. How did your real-world exploration affect your understanding of border technology?

THOMAS: Personally I’m left with more questions than answers when doing this fieldwork. We have driven along the border from the Gulf of Mexico to the Pacific, and it is surprising just how much variation there is within this broad system of U.S. border security. It can sometimes seem like there isn’t just one border at all, but instead a patchwork of infrastructural parts—technologies, architecture, policy, etc.—that only looks cohesive from a distance.

A surveillance tower on a hill

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: That makes me think of Trevor Paglen, an artist known for his work documenting surveillance programs. He often talks about the invisibility of surveillance technology. Is that also what you encountered?

MEYER: The scale and scope of U.S. border policing is dizzying, and much of how this system functions is hidden from view. But we think many viewers of this exhibition might be surprised—as we were when we started doing this work—just how much of this infrastructure is hidden in plain sight, integrated into daily life in communities of all kinds.

This is one of the classic characteristics of infrastructure: when it is working as intended, it often seems to recede into the background of life, taken for granted as though it always existed and couldn’t be otherwise. But these systems, from surveillance programs to the border itself, require tremendous amounts of labor and resources to function, and when you look closely, it is much easier to see the waste and brutality that are their real legacy. As Colter and I do this kind of looking, I often think about a line from the late David Graeber, who wrote that “the ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently.”

THOMAS: Like Dugan said, infrastructure rarely draws direct attention. As artists and researchers, then, our challenge has been to find a way to disrupt this banality visually, to literally reframe the material landscapes of surveillance in ways that sort of pull this infrastructure back into focus. We aren’t trying to make this infrastructure beautiful, but we are trying to present it in a way that people will look at it more closely. I think this is also what makes Paglen’s work so powerful—it aims for something more than simply documenting or archiving a subject that has thus far escaped scrutiny. Like Paglen, we are trying to present our audiences with images that demand attention, and to contextualize those images in ways that open up opportunities and spaces for viewers to act collectively with their attention. For us, this means collaborating with a range of other people and organizations—like the EFF—to invite viewers into critical conversations that are already happening about what these technologies and infrastructures mean for ourselves and our neighbors, wherever they are coming from.

How to Protect Consumer Privacy and Free Speech

pTechnology is a necessity of modern life. People of all ages rely on it for everything from accessing information and connecting with others, to paying for goods, using transportation, getting work done, and speaking out about issues of the day. Without adequate privacy protections, technology can be co-opted to surveil us online and intrude on our private lives–not only by the government, but also by businesses–with grave consequences for our rights./p pThere is sometimes a misconception that shielding our personal information from this kind of misuse will violate the First Amendment rights of corporations who stand to profit from collecting, analyzing, and sharing that information. But we don’t have to sacrifice robust privacy protection to uphold anyone’s right to free speech. In fact, when done right, strong privacy protections reinforce speech rights. They create spaces where people have the confidence to exercise their First Amendment rights to candidly communicate with friends, seek out advice and community, indulge curiosity, and anonymously speak or access information./p pAt the same time, simply calling something a “privacy law” doesn’t make it so. Take the California Age Appropriate Design Code Act (CAADCA), a law currently under review by the Ninth Circuit in iNetChoice v. Bonta/i. As the ACLU and the ACLU of Northern California argued in a a href=https://www.aclu.org/cases/netchoice-llc-v-bonta?document=Amici-Curiae-Brief-of-the-ACLU-%26-ACLU-of-Northern-Californiafriend-of-the-court brief/a, this law improperly included content restrictions on online speech and is unconstitutional. Laws can and should be crafted to protect both privacy and free speech rights. It is critical that legislatures and courts get the balance right when it comes to a law that implicates our ability to control our personal information and to speak and access content online./p pConsumer privacy matters. 
With disturbing frequency, businesses use technology to siphon hoards of personal information from us – learning things about our health, our family situation, our financial status, our location, our age, and even our beliefs. Not only can they paint intimate portraits of our lives but, armed with this information, they can raise or lower prices depending on our demographics, make discriminatory predictions about health outcomes, improperly deny housing or jobs, hike insurance rates, and flood people of color and low-income people with ads for predatory loans.

All this nefarious behavior holds serious consequences for our financial stability, our health, our quality of life, and our civil rights, including our First Amendment rights. Better consumer privacy gives advocates, activists, whistleblowers, dissidents, authors, artists, and others the confidence to speak out. Only when people are free from the fear that what they’re doing online is being monitored and shared can they feel free to enjoy the full extent of their rights to read, investigate, discuss, and be inspired by whatever they want.

Yet in recent years, tech companies have argued that consumer privacy protections limit their First Amendment rights to collect, use, and share people’s personal information. These arguments are often faulty.
Just because someone buys a product or signs up for a service, that doesn’t give the company providing that good or service the First Amendment right to share or use the personal information they collect from that person however they want.

To the contrary, laws that require data minimization and high privacy settings by default are good policy and can easily pass First Amendment muster. Arguments to the contrary not only misunderstand the First Amendment; they’d actually weaken its protections.

Laws that suppress protected speech in order to stop children from accessing certain types of content often hurt speech and privacy rights for all. That’s why First Amendment challenges to laws that limit what we can see online typically succeed. The Supreme Court has made it clear time and again that the government cannot regulate speech solely to stop children from seeing ideas or images that a legislative body believes to be unsuitable.
Nor can it limit adults’ access to speech in the name of shielding children from certain content.

The CAADCA
is unconstitutional for these reasons, despite the legislature’s understandable concerns about the privacy, wellbeing, and safety of children. The law was drafted so broadly that it actually would have hurt children. It could have prevented young people and adults from accessing things like online mental health resources; support communities related to school shootings and suicide prevention; and reporting about war, the climate crisis, and gun violence. It also could interfere with students’ attempts to express political or religious speech, or to provide and receive personal messages about deaths in the family, rejection from a college, or a breakup. Paradoxically, the law would expose everyone to greater privacy risks by encouraging companies to gather and analyze user data for age estimation purposes.

While we believe that the CAADCA burdens free speech and should be struck down, it is important that the court not issue a ruling that forecloses a path that other privacy laws could take to protect privacy without violating the First Amendment. We need both privacy and free speech, especially in the digital age.

Communities Should Reject Surveillance Products Whose Makers Won't Allow Them to be Independently Evaluated

American communities are being confronted by a lot of new police technology these days, much of which involves surveillance or otherwise raises the question: “Are we as a community comfortable with our police deploying this new technology?” A critical question when addressing such concerns is: “Does it even work, and if so, how well?” It’s hard for communities, their political leaders, and their police departments to know what to buy if they don’t know what works and to what degree.

One thing I’ve learned from following new law enforcement technology for over 20 years is that there is an awful lot of snake oil out there. When a new capability arrives on the scene – whether it’s face recognition, emotion recognition, video analytics, or “big data” pattern analysis – some companies will rush to promote the technology long before it is good enough for deployment, which sometimes never happens. That may be even more true today in the age of artificial intelligence.
“AI” is a term that often amounts to no more than trendy marketing jargon.

Given all this, communities and city councils should not
adopt new technology that has not been subject to testing and evaluation by an independent, disinterested party. That’s true for all types of technology, but doubly so for technologies that have the potential to change the balance of power between the government and the governed, like surveillance equipment. After all, there’s no reason to get wrapped up in big debates about privacy, security, and government power if the tech doesn’t even work.

One example of a company refusing to allow independent review of its product is the license plate recognition company Flock, which is pushing those surveillance devices into many American communities and tying them into a centralized national network. (We wrote more about this company in a 2022 white paper.) Flock has steadfastly refused to allow the independent security technology reporting and testing outlet IPVM to obtain one of its license plate readers for testing, though IPVM has tested all of Flock’s major competitors.
That doesn’t stop Flock from boasting that “Flock Safety technology is best-in-class, consistently performing above other vendors.” Claims like these ring hollow when the company doesn’t appear to have enough confidence in its product to let IPVM test it.

Communities considering installing Flock cameras should take note. That is especially the case when errors by Flock and other companies’ license plate readers can lead to innocent drivers finding themselves with their hands behind their heads, facing jittery police pointing guns at them. Such errors can also expose police departments and cities to lawsuits.

Even worse is when a company pretends that its product has been subject to independent review when it hasn’t. The metal detector company Evolv, which sells – wait for it – AI metal detectors, submitted its technology to testing by a supposedly independent lab operated by the University of Southern Mississippi, and publicly touted the results of the tests. But IPVM and the BBC reported that the lab, the National Center for Spectator Sports Safety and Security (NCS4), had colluded with Evolv to manipulate the report and hide negative findings about the effectiveness of the company’s product. Like Flock, Evolv refuses to allow IPVM to obtain one of its units for testing. (We wrote about Evolv and its product here.)

One of the reasons these companies can prevent a tough, independent reviewer such as IPVM from obtaining their equipment is their subscription and/or cloud-based architecture. “Most companies in the industry still operate on the more traditional model of having open systems,” IPVM Government Research Director Conor Healy told me. “But there’s a rise in demand for cloud-based surveillance, where people can store things in the cloud, access them on their phone, see the cameras.
Cloud-based surveillance by definition involves central control by the company that’s providing the cloud services.” Cloud-based architectures can worsen the privacy risks created by a surveillance system. Another consequence of that centralized control is that it increases a company’s ability to decide who can carry out an independent review.

We’re living in an era where a lot of new technology is emerging, with many companies trying to be the first to put it on the market. As Healy told me, “We see a lot of claims of AI, all the time. At this point, almost every product I see out there that gets launched has some component of AI.” But like other technologies before them, these products often come in highly immature, premature, inaccurate, or outright deceptive forms, relying on little more than the use of “AI” as a buzzword.

It’s vital for independent reviewers to contribute to our ongoing local and national conversations about new surveillance and other police technologies. It’s unclear why a company that has faith in its product would attempt to block independent review, and buyers deserve to know which companies do.

A Virtual Reality Tour of Surveillance Tech at the Border: A Conversation with Dave Maass of the Electronic Frontier Foundation

This interview is crossposted from The Markup, a nonprofit news organization that investigates technology and its impact on society.

By: Monique O. Madan, Investigative Reporter at The Markup

After reading my daily news stories amid his declining health, my grandfather made it a habit of traveling the world—all from his desk and wheelchair. When I went on trips, he always had strong opinions and recommendations for me, as if he’d already been there. “I've traveled to hundreds of countries," he would tell me. "It's called Google Earth. Today, I’m going to Armenia.” My Abuelo’s passion for teleporting via Google Street View has always been one of my fondest memories and has never left me. 

So naturally, when I found out that Dave Maass of the Electronic Frontier Foundation gave virtual reality tours of surveillance technology along the U.S.–Mexico border, I had to make it happen. I cover technology at the intersection of immigration, criminal justice, social justice and government accountability, and Maass’ tour aligns with my work as I investigate border surveillance. 

My journey began in a small, quiet conference room at the Homestead Cybrarium, a hybrid virtual public library where I checked out virtual reality gear. The moment I slid the headset onto my face and the tour started, I was transported to a beach in San Diego. An hour and a half later, I had traveled across 1,500 miles’ worth of towns and deserts and ended up in Brownsville, Texas.

During that time, we looked at surveillance technology in 27 different cities on both sides of the border. The tech I saw included autonomous towers, aerostat blimps, sky towers, automated license plate readers, and border checkpoints.

After the excursion, I talked with Maass, a former journalist, about the experience. Our conversation has been edited for brevity and clarity.

Monique O. Madan: You began by dropping me in San Diego, California, and it was intense. Tell me why you chose the location to start this experience.

Dave Maass: So I typically start the tour in San Diego for two reasons. One is because it is the westernmost part of the border, so it's a natural place to start. But more importantly, it is such a stark contrast to be able to jump from one side to the other, from the San Diego side to the Tijuana side.

When you're in San Diego, you're in this very militarized park that's totally empty, with patrol vehicles and this very fierce-looking wall and a giant surveillance tower over your head. You can really get a sense of the scale.

And once you're used to that, I jump you to the other side of the wall. You're able to suddenly see how it's party time in Tijuana, how they painted the wall, and how there are restaurants and food stands and people playing on the beach and there are all these Instagram moments.

A surveillance tower overlooks the border fence

Credit: Electronic Frontier Foundation

Yet on the other side is the American militarized border, you know, essentially spying on everybody who's just going about their lives on the Mexican side.

It also serves as a way to show the power of VR. If there were no wall, you could walk that in a minute. But because of the border wall, you've got to go all the way to the border crossing, and then come all the way back. And we're talking, potentially, hours for you to be able to go that distance. 

Madan: I felt like I was in two different places, but it was really the same place, just feet away from each other. We saw remote video surveillance systems, relocatable ones. We saw integrated fixed towers, autonomous surveillance towers, sky towers, aerostat radar systems, and then covert automated license plate readers. How do you get the average person to digest what all these things really mean?

Maass: Me and some colleagues at EFF, we were looking at how we could use virtual reality to help people understand surveillance. We came up with a very basic game called “Spot the Surveillance,” where you could put on a headset and it puts you in one location with a 360-degree camera view. We took a photo of a corner in San Francisco that already had a lot of surveillance, but we also Photoshopped in other pieces of surveillance. The idea was for people to look around and try to find the surveillance.

When they found one, it would ping, and it would tell you what the technology could do. And we found that that helped people learn to look around their environment for these technologies, to understand it. So it gave people a better idea of how we exist in the environment differently than if they were shown a picture or a PowerPoint presentation that was like, “This is what a license plate reader looks like. This is what a drone looks like.”

That is why when we're on the southern border tour, there are certain places where I don't point the technology out to you. I ask you to look around and see if you can find it yourself.

Sometimes I start with one that’s overhead, because people are looking around. They’re pointing to a radio tower, pointing to something else. It takes them a while before they actually look up in the sky and see there’s this giant spy blimp over their head. But, yeah, one of the other ones is these license plate readers that are hidden in traffic cones. People don’t notice them there because traffic cones are so ubiquitous along highways and streets that they don’t actually think about it.

Madan: People have the impression that surveillance ops are only in militarized settings. Can you talk to me about whether that’s true?

Maass: Certainly there are towers in the middle of the desert. Certainly there are towers that are in remote or rural areas. But there are just so many that are in urban areas, from big cities to small towns.

Rather than just a close-up picture of a tower, once you actually see one and you're able to look at where the cameras are pointed, you start to see things like towers that are able to look into people's back windows, and towers that are able to look into people's backyards, and whole communities that are going to have glimpses over their neighborhood all the time.

But so rarely in the conversation is the impact on the communities that live on both the U.S. and Mexican side of the border, and who are just there all the time trying to get by and have, you know, the normal dream of prospering and raising a family.

Madan: What does this mean from a privacy, human rights, and civil liberties standpoint? 

Maass: There’s not a lot of transparency around issues of technology. That is one of the major flaws, both for human rights and civil liberties, but it's also a flaw for those who believe that technology is going to address whatever amorphous problem they've identified or failed to identify with border security and migration. So it's hard to know when this is being abused and how.

But what we can say is that as [the government] is applying more artificial intelligence to its camera system, it's able to document the pattern of life of people who live along the border.

It may be capturing people and learning where they work and where they're worshiping or who they are associated with. So you can imagine that if you are somebody who lives in that community and if you're living in that community your whole life, the government may have, by the time you're 31 years old, your entire driving history on file that somebody can access at any time, with who knows what safeguards are in place.

But beyond all that, it really normalizes surveillance for a whole community.

There are a lot of psychological studies out there about how surveillance can affect people over time, affect their behavior, and affect their perceptions of a society. That's one of the other things I worry about: What kind of psychological trauma is surveillance causing for these communities over the long term, in ways that may not be immediately perceptible?

Madan: One of the most interesting uses of experiencing this tour via the VR technology was being able to pause and observe every single detail at the border checkpoint.

Maass: Most people are just rolling through, and so you don't get to notice all of the different elements of a checkpoint. But because the Google Street View car went through, we can roll through it at our leisure and point out different things. I have a series of checkpoints that I go through with people, show them this is where the license plate reader is, this is where the scanner truck is, here's the first surveillance camera, here's the second surveillance camera. We can see the body-worn camera on this particular officer. Here's where people are searched. Here's where they're detained. Here's where their car is rolled through an X-ray machine.

Madan: So your team has been mapping border surveillance for a while. Tell us about that and how it fits into this experience.

Maass: We started mapping out the towers in 2022, but we had started researching and building a database of at least the amount of surveillance towers by district in 2019. 

I don't think it was apparent to anyone until we started mapping these out, how concentrated towers are in populated areas. Maybe if you were in one of those populated areas, you knew about it, or maybe you didn't.

In the long haul, it may start to tell a little bit more about border policy in general and whether any of these are having any kind of impact, and maybe we start to learn more about apprehensions and other kinds of data that we can connect to.

Madan: If someone wanted to take a tour like this, if they wanted to hop on in VR and visit a few of these places, how can they do that? 

Maass: So if they have a VR headset, a Meta Quest 2 or newer, the Wander app is what you’re going to use. You can just go into the app and position yourself somewhere on the border. Jump around a little bit, maybe it will be like five feet, and you can start seeing a surveillance tower.

If you don’t have a headset and want to do it in your browser, you can go to EFF’s map and click on a tower. You’ll see a Street View link when you scroll down. Or you can use those tower coordinates and then go to your VR headset and try to find it.

Madan: What are your thoughts about the Meta Quest headset—which grew out of Palmer Luckey’s Oculus—given that Luckey also founded the company that made one of the towers on the tour?

Maass: There’s certainly some irony about using a technology that was championed by Palmer Luckey to shine light on another technology championed by Palmer Luckey. That's not the only tech irony, of course: Wander [the app used for the tour] also depends on using products from Google and Meta, both of whom continue to contribute to the rise of surveillance in society, to investigate surveillance.

Madan: What's your biggest takeaway as the person giving this tour?

Maass: I am a researcher and educator, and an activist and communicator. To me, this is one of the most impactful ways that I can reach people and give them a meaningful experience about the border. 

I think that when people are consuming information about the border, they're just getting little snippets from a little particular area. You know, it's always a little place that they're getting a little sliver of what's going on. 

But when we're able to do this with VR, I'm able to take them everywhere. I'm able to take them to both sides of the border. We're able to see a whole lot, and they're able to come away by the end of it, feeling like they were there. Like your brain starts filling in the blanks. People get this experience that they wouldn't be able to get any other way.

Being able to linger over these spaces on my own time showed me just how much surveillance is truly embedded in people's daily lives. When I left the library, I found myself inspecting traffic cones for license plate readers. 

As I continue to investigate border surveillance, this experience really showed me just how educational these tools can be for academics, researchers, and journalists.

Thanks for reading,
Monique
Investigative Reporter
The Markup

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
