Developer platform Docker Hub suspends services in Russia
therecord.media

The U.S. service Docker Hub, widely used for developing software, has suspended its operations in Russia without giving advance notice to local users, according to media reports.

Russian users lost access to Docker Hub repositories on Thursday and couldn’t access the service even through virtual private networks (VPNs), reported Russian news website Kommersant.

Developers use the cloud-based platform to store, share and manage their container images — digital packages that include everything needed to run an application.
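
For readers unfamiliar with that workflow, here is a minimal, illustrative sketch of the typical Docker Hub commands (the image and repository names are placeholders, not details from the article):

    # Pull a public image from Docker Hub, the default registry:
    docker pull nginx:latest

    # Package an application and its dependencies into an image:
    docker build -t myuser/myapp:1.0 .

    # Sign in and push the image to a Docker Hub repository:
    docker login
    docker push myuser/myapp:1.0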

Docker Hub stated in a message displayed to those trying to access the platform from Russia that it is blocking services in Cuba, Iran, North Korea, Sudan, Syria and Russian-annexed Crimea to “adhere to U.S. export control rules.” Russia itself wasn’t included in the message.

At the time of publication, the platform’s operator, Docker Inc., had not responded to a request for comment.

Russian legal expert Maria Udodova told Kommersant that the blocking could be linked to a rule proposed by the Department of Commerce in January to protect cloud services from foreign cyberthreats to national security. Recorded Future News couldn’t verify this claim.

In interviews with Russian media, several local tech businesses complained that because of the blocking they can no longer upload projects to the repository or retrieve work saved there. They said that Docker Hub was popular among Russian companies involved in cybersecurity.

Following the service suspension, Russian developers took to the Docker Hub forum and Reddit to voice their complaints.

“It’s not me who invaded Ukraine, it’s not millions of developers and software engineers either, but we have to suffer the consequences. Thanks a lot, Docker!” one user said on Reddit.

“Please consider keeping Docker Hub available for Russians — they’re oppressed by their own government they didn’t choose. The regime will have access to any technology anyway, and have resources to keep their infrastructure running,” another user wrote on the Docker community forum.

Industry experts told Kommersant that the Docker Hub restrictions could deal a blow to tech businesses, which now have to find an alternative quickly. That is not easy, since similar services, including GitHub, suspended some of their offerings in Russia after it invaded Ukraine.
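
One common stopgap – shown here as an illustrative sketch, not something the article describes – is to point the Docker daemon at an alternative registry mirror; "mirror.example.com" is a placeholder:

    # Configure an alternative registry mirror for the Docker daemon:
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "registry-mirrors": ["https://mirror.example.com"]
    }
    EOF
    sudo systemctl restart docker

    # Images can also be pulled from another registry explicitly, using a
    # fully qualified name instead of the Docker Hub default (the exact
    # path depends on the registry's layout):
    docker pull mirror.example.com/library/nginx:latest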

In 2022, Docker said in a statement that the company “stands with Ukraine” and will not do business with Russian and Belarusian businesses or accept payments from these locations during the war.

The company also said that it removed the ability to purchase and renew Docker subscriptions from Russia and Belarus.

Slow exits

The fact that Docker Hub was still generally available in Russia until this week, despite the company’s previous statements, isn’t unusual.

With the start of the war in Ukraine two years ago, many Western tech firms announced that they would quit the Russian market or suspend selling their products there — either for moral reasons or due to economic sanctions imposed on Russia by the EU or the U.S.

Big tech companies that served many clients in Russia didn’t exit the market immediately. Microsoft, for example, announced only last August that it would stop renewing licenses for its products for Russian companies and would not process payments via wire transfer to local bank accounts.

In March, Russians received a notification from Microsoft saying that it would suspend access to its cloud services for local users as a result of European sanctions imposed on Russia after its invasion of Ukraine.

Earlier in January, Czech antivirus developer Avast suspended selling its software in Russia. In the initial months of the war, the company announced that it would stop renewing licenses for its products for Russian and Belarusian users.

More than 600,000 routers knocked out in October by Chalubo malware
therecord.media

A strain of malware named Chalubo wrecked over 600,000 routers for small offices and homes in the U.S. last year.

In a new report from Lumen Technologies’ Black Lotus Labs, researchers described a “destructive” incident between October 25-27 in which hundreds of thousands of routers made by Sagemcom and ActionTec were rendered permanently inoperable.

Chalubo was first discovered in 2018 by researchers from Sophos, which said it was used to infect devices and add them to powerful botnets that could perform distributed denial of service (DDoS) attacks.

Black Lotus Labs did not name the internet service provider (ISP) that deployed the routers but Reuters said an analysis of news coverage indicated it was likely Arkansas-based Windstream, which did not respond to requests for comment.

Further research revealed that the routers were destroyed by a firmware update sent out to the devices that had already been compromised by Chalubo.

“At this time, we do not have an overlap between this activity and any known nation-state activity clusters,” the researchers explained. “We assess with high confidence that the malicious firmware update was a deliberate act intended to cause an outage, and though we expected to see a number of router make and models affected across the internet, this event was confined to the single ISP’s autonomous system number (ASN).”

A survey of complaints on internet forums and outage detectors revealed that most people were complaining about issues with router models Sagemcom F5380, ActionTec T3200s and ActionTec T3260s.

Users who contacted ActionTec’s support center were told the entire router would need to be replaced. To check whether those models were the only ones affected, the researchers used the internet scanning tool Censys and found that between October 27 and October 28, the number of IP addresses connected to ActionTec devices dropped by 179,000, while the number associated with Sagemcom devices fell by 480,000.

Lumen researchers noted that the Chalubo malware family continues to be active and found that more than 330,000 IP addresses communicated with tools connected to the malware, indicating that “while the Chalubo malware was used in this destructive attack, it was not written specifically for destructive actions.”

'Rural or underserved communities'

The researchers do not know what exploit was used to gain initial access to compromised devices. They could not find vulnerabilities for the specific models impacted, “suggesting the threat actor likely either abused weak credentials or exploited an exposed administrative interface.”

“We suspect the threat actors behind this event chose a commodity malware family to obfuscate attribution, instead of using a custom-developed toolkit,” they said.

The researchers noted that “a sizeable portion of this Internet Service Provider’s service area covers rural or underserved communities,” potentially making recovery more difficult.

The outage affected “places where residents may have lost access to emergency services, farming concerns may have lost critical information from remote monitoring of crops during the harvest, and health care providers cut off from telehealth or patients’ records,” they said.

Chalubo is a sophisticated malware family whose creators went to great lengths to conceal it. The malicious code removes all of its files and renames itself after something already present on the device.

All of the communication with command and control (C2) servers is encrypted — which Lumen said contributed to the lack of previous research on the malware.

There has been significant law enforcement focus this week on malware that affects routers. International law enforcement agencies announced Thursday that they took several of the most influential malware families offline in the “largest ever operation against botnets.”

The FBI and international partners dismantled another massive botnet on Wednesday that infected more than 19 million IP addresses across 200 countries and was used for years to conceal cybercrime.

Opinion: Ottawa wants the power to create secret backdoors in our networks to allow for surveillance
www.theglobeandmail.com

Uncompromised encryption is the backbone of cybersecurity. And yet Bill C-26 would allow the federal government to secretly order telcos to undermine that encryption – which would make us more vulnerable to malicious threats.

Kate Robertson is a senior research associate and Ron Deibert is director at the University of Toronto’s Citizen Lab.

A federal cybersecurity bill, slated to advance through Parliament soon, contains secretive, encryption-breaking powers that the government has been loath to talk about. And they threaten the online security of everyone in Canada.

Bill C-26 empowers government officials to secretly order telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. This could include requiring telcos to alter the 5G encryption standards that protect mobile communications to facilitate government surveillance.

The government’s decision to push the proposed law forward without amending it to remove this encryption-breaking capability has set off alarm bells that these new powers are a feature, not a bug.

There are already many insecurities in today’s networks, reaching down to the infrastructure layers of communication technology. Signalling System No. 7, a protocol suite developed in 1975 to route phone calls, has become a major source of insecurity for cellphones. In 2017, the CBC demonstrated that hackers needed only a Canadian MP’s cell number to intercept his movements, text messages and phone calls. Little has changed since: a 2023 Citizen Lab report details pervasive vulnerabilities at the heart of the world’s mobile networks.

So it makes no sense that the Canadian government would itself seek the ability to create more holes, rather than patching them. Yet it is pushing for potential new powers that would infect next-generation cybersecurity tools with old diseases.

It’s not as if the government wasn’t warned. Citizen Lab researchers presented the 2023 report’s findings in parliamentary hearings on Bill C-26, and leaders and experts in civil society and in Canada’s telecommunications industry warned that the bill must be narrowed to prevent its broad powers to compel technical changes from being used to compromise the “confidentiality, integrity, or availability” of telecommunication services. And yet, while government MPs maintained that their intent is not to expand surveillance capabilities, MPs pushed the bill out of committee without this critical amendment last month. In doing so, the government has set itself up to be the sole arbiter of when, and on what conditions, Canadians deserve security for their most confidential communications – personal, business, religious, or otherwise.

The new powers would only expose network users in Canada – up to and including the country’s most senior officials – to more malicious threats to their privacy and security. Encryption of 5G technology safeguards a web of connection points surrounding mobile communications, and protects users from man-in-the-middle attacks that intercept their text and voice communications or location data. The law would also affect cloud-connected smart devices like cars, home CCTV cameras, and pacemakers, and satellite-based services like Starlink – all of which could be compromised by any new vulnerabilities.

Unfortunately, history is rife with government backdoors exposing individuals to deep levels of cyber-insecurity. Backdoors can be exploited by law enforcement, criminals and foreign rivals alike. For this reason, past heads of the CIA, the NSA and the U.S. Department of Homeland Security, as well as Britain’s Government Communications Headquarters (GCHQ) and MI5, all oppose measures that would weaken encryption. Interception equipment relied upon by governments has also often been shown to have significant security weaknesses.

The bill’s new spy powers also reveal incoherence in the government’s cybersecurity strategy. In 2022, Canada announced it would be blocking telecom equipment from Huawei and ZTE, citing the “cascading economic and security impacts” that a supply-chain breach would engender. The government cited concerns that the Chinese firms might be “compelled to comply with extrajudicial directions from foreign governments.” And yet, Bill C-26 would quietly provide Canada with the same authority that it publicly condemned. If the bill passes as-is, all telecom providers in Canada would be compellable through secret orders to weaken encryption or network equipment. It doesn’t just contradict Canada’s own pro-encryption policy and expert guidance – authoritarian governments abroad would also be able to point to Canada’s law to justify their own repressive security legislation.

Now, more than ever, there is no such thing as a safe backdoor. GCHQ reports that the threat from commercial hacking firms will be “transformational on the cyber landscape,” and that cyber mercenaries wield capabilities rivalling those of state cyber agencies. If the Canadian government compels telcos to undermine security features to accommodate surveillance, it will pave the way for cyberespionage firms and other adversaries to find more ways into people’s communications. A shortcut that provides a narrow advantage for the few at the expense of us all is no way to secure our complex digital ecosystem.

Against this threat landscape, a pivot is crucial. Canada needs cybersecurity laws that explicitly recognize that uncompromised encryption is the backbone of cybersecurity, and it must be mandated and protected by all means possible.

Mastercard's Controversial Digital ID Rollout in Africa
reclaimthenet.org

Critics question the motives behind pushing controversial digital ID schemes in economically disadvantaged regions.

One wouldn’t have pegged Mastercard as the kind of corporation that is “driving sustainable social impact” and caring about remote communities around the world struggling to meet basic needs.

Nevertheless, here we are – or at least that’s how the global payment services behemoth advertises its push to proliferate the use of a scheme called Community Pass.

The purpose of Community Pass is to provide a digital ID and wallet contained in a “smart card.” Launched four years ago, the program – which Mastercard says is interoperable and works offline, in addition to being based on digital ID – targets “underserved communities” and currently has 3.5 million users, with plans to grow that number to 30 million by 2027.

According to a map on Mastercard’s site, the program is either being piloted or has already been rolled out in India, Ethiopia, Uganda, Kenya, Tanzania, Mozambique, and Mauritania, while the latest announcement is a partnership with the African Development Bank Group in an initiative dubbed Mobilizing Access to the Digital Economy (MADE).

The plan is to enroll 100 million people and businesses in Africa in digital ID programs over ten years, thereby giving them access to government and “humanitarian” services.

As for Community Pass itself, it aims to incorporate 15 million users on the continent over the next five years. This is Mastercard’s part of the deal, whereas the African Development Bank Group said it would invest $300 million to make sure MADE happens.

Given how controversial digital ID schemes are, and how much pushback they encounter in developed countries, it’s hard to shake off the impression that such initiatives are pushed so aggressively in economically disadvantaged areas and communities precisely because little opposition is expected.

But MADE is presented as almost a “humanitarian” service itself – there, apparently, solely to make life better, in particular for farmers and women, and improve things like connectivity, financial services, employment rates, etc.

The news about Mastercard’s latest partnership and initiative came from a US-Africa business forum organized by the US Chamber of Commerce.

The EU is on the Brink of Making “Hate Speech” a Serious Crime
reclaimthenet.org

The EU’s European Commission (EC) appears to be preparing to include “hate speech” among the list of most serious criminal offenses and regulate its investigation and prosecution across the bloc.

Whether this type of proposal is cropping up now because of the upcoming EU elections or if the initiative has legs will become obvious in time, but for now, the plans are supported by several EC commissioners.

The idea stems from the European Citizens’ Panel on Tackling Hatred in Society, one of several such panels (ECPs) established to help EC President Ursula von der Leyen deliver on her (campaign?) promise of ushering in a democracy in the EU that is “fit for the future.”

That could mean anything, and the vagueness by no means stops there: “hate speech” itself, despite the gravity of the proposals to classify it as a serious crime, is not even well defined, observers are warning.

Despite that, the recommendations contained in a report produced by the panel have been backed by the EC’s Vice-President for Values and Transparency, Vera Jourova, as well as its Vice President for Democracy and Demography, Dubravka Suica.

According to Jourova, the panel’s recommendations on how to deal with “hate speech” are “clear and ambitious” – although, as noted, a clear definition of that type of speech is still lacking.

This is the wording the report went for: any speech that is “incompatible with the values of human dignity, freedom, democracy, the rule of law, and respect of human rights” should be considered “hate speech.”

Critics of this take issue with going for, in essence, subjective, not to mention vague expressions like “values of human dignity” considering that even in Europe, speech can still be lawful even if individuals or groups perceive it as offensive or upsetting.

Since some hate speech is already illegal in the EU, the panel wants the offense given a new definition; the goal, the report reads, is to “ensure that all forms of hate speech are uniformly recognized and penalized, reinforcing our commitment to a more inclusive and respectful society.”

If the EU decides to add hate speech to its list of crimes, the panel’s report added, this will allow for the protection of marginalized communities, and “uphold human dignity.”

Noteworthy is that the effort seems coordinated, even as far as the wording goes, as media reports note that the recommendation “adopts exactly the same terminology as an EC proposal that was recently endorsed by the European Parliament to extend the list of EU-wide crimes to include ‘hate speech’.”

Ireland Submits "Online Safety Code" for EU Assessment - Pushes Censorship, Digital ID
reclaimthenet.org

Critics argue the Code could undermine privacy while aiming to curb harmful online content.

Ireland’s media regulator (Coimisiún na Meán) has updated the Online Safety Code (part of the Online Safety Framework, a mechanism of the Online Safety and Media Regulation Act), and submitted it to the European Commission for assessment.

Considered by opponents as a censorship law that also imposes age verification or estimation (phrased as “age assurance”), the Code aims to establish binding rules for video platforms with EU headquarters located in Ireland.

It is expected that the European Commission will announce its position within 3 to 4 months, after which the rules will be finalized and put into effect, the regulator said.

Once greenlit by Brussels, the final version of the Code will impose obligations on platforms to ban uploading or sharing videos of what is considered to be cyberbullying, promoting self-harm or suicide, and promoting eating or feeding disorders.

But the list is much longer and includes content deemed to be inciting hatred or violence, terrorism, child sex abuse material, racism, and xenophobia.

Even though the new rules will inevitably grant a wide remit to censor video content as belonging to any of these many categories, and even though children are unavoidably cited as the primary concern, the Irish press reports that not everyone is satisfied with how far the new Code goes.

One such critic is a group called the Hope and Courage Collective (H&CC), whose stated purpose is to “fight against far-right hate.” H&CC is worried that the Code will not be able to “keep elections safe” nor protect communities “targeted by hate.”

But what it will do, according to the media regulator’s statement, is to use “age assurance” as a way to prevent children from viewing inappropriate content, and do so via age verification measures.

The age verification controversy, however, doesn’t stem from the (even if only declared) intent behind it, but from the question of how it is supposed to be implemented – and how that implementation can stop short of undermining the privacy, and therefore the security, of all users of a platform.

Still, the Irish regulator is satisfied that its new code, along with the EU’s Digital Services Act and Terrorist Content Online Regulation, will give it “a strong suite of tools to improve people’s lives online.”

Financial Surveillance? PayPal Plots Ad Network Built off Your Purchase History and Shopping Habits

PayPal has announced that it is creating an ad platform “powered” by the data the payment service giant has from millions of both customers and merchants – specifically, from their transaction information.

The data harvesting here will be on by default, but PayPal users (Venmo is included in the scheme) will be able to opt out of what some critics refer to as yet another example of “financial surveillance.” The scale involved is massive: in the first quarter of this year alone, the company processed 6.5 billion transactions for 427 million customers.

Sellers are promised that they will, thanks to the new platform, achieve better sales of products and services, while customers are told to expect the ads targeting them to show more “relevant” products.

A press release revealed that to bolster this side of its business, PayPal has appointed two executives – Mark Grether, formerly Uber Advertising VP and general manager, and John Anderson, who was previously head of product and payments at the fintech firm Plaid.

In this way, PayPal is joining others who are turning to using customer data to monetize targeted advertising. In the company’s industry, Visa and JPMorgan Chase have been making similar moves, while big retailers “share” this type of data with Big Tech.

The PayPal scheme is based on shopping habits and purchase information that allows advertisers to pinpoint their campaigns, and Grether explained that the company “knows” who is making purchases on the internet and where and that this data can be “leveraged.”

He also told the Wall Street Journal that customers who use PayPal cards in physical stores will become sources of the same type of data.

Beyond that, however, few details are known at this time about the exact type of data that will be “fed” into the new ad platform.

A company spokesperson, Taylor Watson, offered only vague responses to this query, stating that there are no “definitive answers” at this “early stage” of the platform’s creation, but was sure to offer boilerplate assurances of transparency and privacy protections:

“Alongside the advertising business, PayPal will build transparent, easy-to-use privacy controls,” the spokesperson said.

Alternative Media Giants Sue The Censorship Industrial Complex
reclaimthenet.org

The suit alleges a coordinated effort to suppress dissenting voices on platforms like NaturalNews.com and Brighteon.com, claiming substantial economic and reputational harm.

In a new lawsuit, Webseed and Brighteon Media have accused multiple US government agencies and prominent tech companies of orchestrating a vast censorship operation aimed at suppressing dissenting viewpoints, particularly concerning COVID-19. The plaintiffs manage websites like NaturalNews.com and Brighteon.com, which have been at the center of controversy for their alternative health information and criticism of government policies.

We obtained a copy of the lawsuit for you here.

The defendants include the Department of State, the Global Engagement Center (GEC), the Department of Defense (DOD), the Department of Homeland Security (DHS), and tech giants such as Meta Platforms (formerly Facebook), Google, and X. Additionally, organizations like NewsGuard Technologies, the Institute for Strategic Dialogue (ISD), and the Global Disinformation Index (GDI) are implicated for their roles in creating and using tools to label and suppress what they consider misinformation.

Allegations of Censorship and Anti-Competitive Practices:

The lawsuit claims that these government entities and tech companies conspired to develop and promote censorship tools to suppress the speech of Webseed and Brighteon Media, among others. “The Government was the primary source of misinformation during the pandemic, and the Government censored dissidents and critics to hide that fact,” states Stanford University Professor J. Bhattacharya in support of the plaintiffs’ claims.

The plaintiffs argue that the government’s efforts were part of a broader strategy to silence voices that did not align with official narratives on COVID-19 and other issues. They assert that these actions were driven by an “anti-competitive animus” aimed at eliminating alternative viewpoints from the digital public square.

According to the complaint, the plaintiffs have suffered substantial economic harm, estimating losses between $25 million and $50 million due to reduced visibility and ad revenue from their platforms. They also claim significant reputational damage as a result of being labeled as purveyors of misinformation.

The complaint details how the GEC and other agencies allegedly funded and promoted tools developed by NewsGuard, ISD, and GDI to blacklist and demonetize websites like NaturalNews.com. These tools, which include blacklists and so-called “nutrition labels,” were then utilized by tech companies to censor content on their platforms. The plaintiffs argue that this collaboration between government agencies and private tech companies constitutes an unconstitutional suppression of free speech.

A Broader Pattern of Censorship:

The lawsuit references other high-profile cases, such as Missouri v. Biden, to illustrate a pattern of government overreach into the digital information space. It highlights how these efforts have extended beyond foreign disinformation to target domestic voices that challenge prevailing government narratives.

Webseed and Brighteon Media are seeking both monetary damages and injunctive relief to prevent further censorship. They contend that the government’s actions violate the First Amendment and call for an end to the use of these censorship tools.

As the case progresses, it promises to shine a light on the complex interplay between government agencies, tech companies, and the tools used to control the flow of information in the digital age. The outcome could have significant implications for the future of free speech and the regulation of online content.

European Council Approves “Rapid Response Teams” To Combat “Disinformation”
reclaimthenet.org

Critics warn of potential censorship as Ireland considers early adoption of the EU's controversial "disinformation" crackdown.

The EU has announced a guiding framework that will make it possible to set up what the bloc calls “Hybrid Rapid Response Teams” which will be “drawing on relevant sectoral national and EU civilian and military expertise.”

These teams will be created and then deployed to counter “disinformation” throughout the 27 member countries – but also to what Brussels calls partner countries. And Ireland might become an “early adopter.”

For a country to apply, it will first need to determine that it is under attack by means of “hybrid threats and campaigns” and then ask the EU to help counter those by dispatching a “rapid response team.”

The EU explains the need for these teams as a response to a “deteriorating security environment, increasing disinformation, cyber attacks, attacks on critical infrastructure, and election interference by malign actors” – and even something the organization refers to as “instrumentalized migration.”

The framework comes out of the EU Hybrid Toolbox, which itself stems from the bloc’s Strategic Compass for Security and Defense.

Mere days after the EU made the announcement last week, news out of Ireland said that the Department of Foreign Affairs welcomed the development, stating that they will “now begin on operationalizing Ireland’s participation in this important initiative.”

The department explained what it sees as threats – there’s inevitably “disinformation,” along with cyber attacks, attacks on critical infrastructure, as well as “economic coercion.”

Ireland’s authorities appear to be particularly pleased with the EU announcement given that the country doesn’t have a centralized body that would fight such a disparate range of threats, real or construed.

The announcement about the “reaction teams” came from the Council of the EU, and was the next day “welcomed” by the European Commission, which repeated the points the original statement made about a myriad of threats.

The Hybrid Rapid Response Teams which have now been greenlit with the framework are seen as a key instrument in countering those threats.

Other than saying that the EU Hybrid Toolbox relies on “relevant civilian and military expertise,” the two EU press releases are short on detail about the composition of the future teams that will be sent on “short-term” missions.

However, the releases did reveal that “rapid deployment to partner countries” will be made possible through the Emergency Response Coordination Center (ERCC), which will serve as the scheme’s operational hub.

Global Elections Face Growing Censorship Threat: The Rise of "Prebunking"
reclaimthenet.org

Censorship proponents now turn to "prebunking," a controversial strategy aimed at curbing so-called disinformation before it reaches the public eye.

The feverish search for the next “disinformation” silver bullet continues as several elections are being held worldwide.

Censorship enthusiasts, who habitually use the terms “dis/misinformation” to go after lawful online speech that happens to not suit their political or ideological agenda, now feel that debunking has failed them.

(That can be yet another euphemism for censorship – when “debunking” political speech means removing information those directly or indirectly in control of platforms don’t like.)

Enter “prebunking” – and regardless of how risky this is, especially when applied in a democracy, those who support the method are not swayed even by the possibility that it may not work.

Prebunking is the distinctly dystopian notion that audiences and social media users can be “programmed” (proponents use the term “inoculated”) to reject information as untrustworthy.

To achieve that, speech must be discredited and suppressed as “misinformation” (via warnings from censors) before, not after, it is seen by people.

“A radical playbook” is what some legacy media reports call this, at the same time implicitly justifying it as a necessity in a year that has been systematically hyped up as particularly dangerous because of elections taking place around the globe.

The Washington Post disturbingly sums up prebunking as exposing people to “weakened doses of misinformation paired with explanations (…) aimed at helping the public develop ‘mental antibodies’.”

This type of manipulation is supposed to steer the “unwashed masses” toward making the right (aka, desired by the “prebunkers”) conclusions, as they decide who to vote for.

Even as this is seen by opponents as a threat to democracy, it is being adopted widely – “from Arizona to Taiwan (with the EU in between)” – under the pretext of actually protecting democracy.

Where there are governments and censorship these days, there’s inevitably Big Tech, and Google and Meta are mentioned as particularly involved in carrying out prebunking campaigns, notably in the EU.

Apparently Google will not be developing Americans’ “mental antibodies” ahead of the US vote in November – that might prove too controversial, at least at this point in time.

The risk-reward ratio here is also unappealing.

“There aren’t really any actual field experiments showing that it (prebunking) can change people’s behavior in an enduring way,” said Cornell University psychology professor Gordon Pennycook.

Simplex Server Docker Installation Guide (SMP / XFTP)
forum.hackliberty.org

Hello, today I am bringing you an updated (June 01, 2024) guide on how to install Simplex SMP and XFTP servers using docker compose. This guide assumes you already have docker and docker compose installed – and it also moves XFTP off the default port of 443 due to reverse proxy conflicts.
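
Based only on the summary above, here is a minimal sketch of what such a compose file might look like. The image names, ports, volume paths, and environment variables are assumptions drawn from SimpleX's public Docker images, not the guide's actual configuration – consult the linked guide before deploying:

    # docker-compose.yml -- illustrative sketch, not the guide's actual file
    services:
      smp-server:
        image: simplexchat/smp-server:latest    # assumed image name
        ports:
          - "5223:5223"                         # SMP default port
        volumes:
          - ./smp/config:/etc/opt/simplex
          - ./smp/logs:/var/opt/simplex
        environment:
          - ADDR=smp.example.com                # placeholder server address
        restart: unless-stopped
      xftp-server:
        image: simplexchat/xftp-server:latest   # assumed image name
        ports:
          - "5443:443"                          # host port 5443 avoids the 443 reverse-proxy conflict
        volumes:
          - ./xftp/config:/etc/opt/simplex-xftp
          - ./xftp/files:/srv/xftp
        environment:
          - ADDR=xftp.example.com               # placeholder server address
          - QUOTA=10gb                          # assumed storage quota variable
        restart: unless-stopped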

Biden’s Bold Move to Combat AI Abuse Stirs Privacy and Free Speech Fears
reclaimthenet.org

On-device surveillance proposals stir debate over privacy rights and the future of digital content monitoring.

The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to scan and monitor user activities on devices, analyze what art they’re creating and block the creation of certain kinds of content. Once this technology of on-device monitoring becomes normalized, this level of scrutiny could extend beyond the initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.

The initiative builds on previous efforts, such as the voluntary commitments secured by the Biden administration from major technology companies like Amazon, Google, Meta, and Microsoft to implement safeguards on new AI systems. Despite these measures, the administration acknowledges the need for legislative action to enforce these safeguards comprehensively.

The administration’s proposals raise significant questions about privacy and the potential for mission creep. The call for on-device surveillance to detect and prevent the sharing of non-consensual sexual images means that personal photos and content would be subject to continuous monitoring and analysis. People have been able to create and edit images on their devices with Photoshop and other tools for decades, but the recent evolution of AI is now being used to justify calls for surveillance of the content people create.

This could set a precedent for more extensive and intrusive forms of digital content scanning, leading to broader applications beyond the original intent.

European Council Approves the AI Act — a Law Accused of Legalizing Biometric Mass Surveillance
reclaimthenet.org

The EU’s European Council has followed the European Parliament (EP) in approving the AI Act – which opponents say is a way for the bloc to legalize biometric mass surveillance.

More than that, the EU is touting the legislation as first of its kind in the world, and seems hopeful it will serve as a standard for AI regulation elsewhere around the globe.

The Council announced that the law is “groundbreaking” and takes a “risk-based” approach, meaning that EU authorities get to grade the level of risk AI poses to society and then impose rules of varying severity, with penalties including monetary fines for companies deemed to be infringing the act.

What this “granular” approach to “risk level” looks like is revealed in the categories: the EU chooses to consider cognitive behavioral manipulation “unacceptable,” while AI use in education and facial recognition is “high risk.” “Limited risk” applies to chatbots.

And developers will be under obligation to register in order to have the “risk” assessed before their apps become available to users in the EU.

The AI Act’s ambition, according to the EU, is to promote the development and uptake of, as well as investment in, systems that it considers “safe and trustworthy,” targeting both the private and public sectors for this type of regulation.

A press release said that the law “provides exemptions such as for systems used exclusively for military and defense as well as for research purposes.”

After the act is formally published, it will come into effect across the 27 member countries within three weeks.

Back in March, when the European Parliament approved the act, one of its members, Patrick Breyer of the German Pirate Party, slammed the preceding trilogue negotiations as “intransparent.”

But what was clear, according to the lawyer and privacy advocate, is that participants from the EP, who initially said they wanted a ban on real-time biometric mass surveillance in public spaces, did a 180 and in the end agreed to “legitimize” it through the AI Act’s provisions.

Breyer said that identification relying on CCTV footage is prone to errors that can have serious consequences – but that “none of these dystopian technologies will be off limits for EU governments” (thanks to the new law).

“As important as it is to regulate AI technology, defending our democracy against being turned into a high-tech surveillance state is not negotiable for us Pirates,” Breyer wrote at the time.

"Upload Moderation" - The EU's Latest Name For Messaging Surveillance
reclaimthenet.org

Leaked EU plans reveal the growing plot to ban private messaging.

EU governments might soon endorse the highly controversial Child Sexual Abuse Regulation (CSAR), known colloquially as “chat control,” based on a new proposal by Belgium’s Minister of the Interior. According to a leak obtained by Pirate Party MEP and shadow rapporteur Patrick Breyer, this could happen as early as June.

The proposal mandates that users of communication apps must agree to have all images and videos they send automatically scanned and potentially reported to the EU and police.

This agreement would be obtained through terms and conditions or pop-up messages. To facilitate this, secure end-to-end encrypted messenger services would need to implement monitoring backdoors, effectively causing a ban on private messaging. The Belgian proposal frames this as “upload moderation,” claiming it differs from “client-side scanning.” Users who refuse to consent would still be able to send text messages but would be barred from sharing images and videos.

The scanning technology, employing artificial intelligence, is intended to detect known child sexual abuse material (CSAM) and flag new images and videos deemed suspicious. The proposal excludes the previously suggested scanning of text messages for grooming signs and does not address audio communication scanning, which has never been implemented.

The proposal, first introduced on 8 May, has surprisingly gained support from several governments that were initially critical. It will be revisited on 24 May, and EU interior ministers are set to meet immediately following the European elections to potentially approve the legislation.

Patrick Breyer, a staunch opponent of chat control, expressed serious concerns. “The leaked Belgian proposal means that the essence of the EU Commission’s extreme and unprecedented initial chat control proposal would be implemented unchanged,” he warns. “Using messenger services purely for texting is not an option in the 21st century. And removing excesses that aren’t being used in practice anyway is a sham.”

Breyer emphasizes the threat to digital privacy, stating, “Millions of private chats and private photos of innocent citizens are to be searched using unreliable technology and then leaked without the affected chat users being even remotely connected to child sexual abuse – this would destroy our digital privacy of correspondence. Our nude photos and family photos would end up with strangers in whose hands they do not belong and with whom they are not safe.”

He also points out the risk to encryption, noting that “client-side scanning would undermine previously secure end-to-end encryption to turn our smartphones into spies – this would destroy secure encryption.”

Breyer is alarmed by the shifting stance of previously critical EU governments, which he fears could break the blocking minority and push the proposal forward. He criticizes the lack of a legal opinion from the Council on this fundamental rights issue. “If the EU governments really do go into the trilogue negotiations with this radical position of indiscriminate chat control scanning, experience shows that the Parliament risks gradually abandoning its initial position behind closed doors and agreeing to bad and dangerous compromises that put our online security at risk,” he asserts.

Lawmakers Push for the Censorship of "Harmful Content," "Disinformation" in Latest Section 230 Reform Push
reclaimthenet.org

Threatening to sunset Section 230 unless platforms agree to wider censorship demands.

Section 230 of the Communications Decency Act (CDA), an online liability shield that prevents online apps, websites, and services from being held civilly liable for content posted by their users if they act in “good faith” to moderate content, provided the foundation for most of today’s popular platforms to grow without being sued out of existence. But as these platforms have grown, Section 230 has become a political football that lawmakers use in attempts to influence how platforms editorialize and moderate content: pro-censorship factions threaten reforms that would force platforms to censor more aggressively, while pro-free-speech factions push reforms that would reduce Big Tech’s power to censor lawful speech.

And during a Communications and Technology Subcommittee hearing yesterday, lawmakers discussed a radical new Section 230 proposal that would sunset the law and create a new solution that “ensures safety and accountability for past and future harm.”

We obtained a copy of the draft bill to sunset Section 230 for you here.

In a memo for the hearing, lawmakers acknowledged that their true intention is “not to have Section 230 actually sunset” but to “encourage” technology companies to work with Congress on Section 230 reform and noted that they intend to focus on the role Section 230 plays in shaping how Big Tech addresses “harmful content, misinformation, and hate speech” — three broad, subjective categories of legal speech that are often used to justify censorship of disfavored opinions.

And during the hearing, several lawmakers signaled that they want to use this latest piece of Section 230 legislation to force social media platforms to censor a wider range of content, including content that they deem to be harmful or misinformation.

Rep. Doris Matsui (D-CA) acknowledged that Section 230 “allowed the internet to flourish in its early days” but complained that it serves as “a haven for harmful content, disinformation, and online harassment.”

She added: “The role of Section 230 needs immediate scrutiny, because as it exists today, it is just not working.”

Rep. John Joyce (R-PA) suggested Section 230 reforms are necessary to protect children — a talking point that’s often used to erode free speech and privacy for everyone.

“We need to make sure that they [children] are not interacting with harmful or inappropriate content,” Rep. John Joyce (R-PA) said. “And Section 230 is only exacerbating this problem. We here in Congress need to find a solution to this problem that Section 230 poses.”

Rep. Tony Cárdenas (D-CA) complained that platforms aren’t doing enough to combat “outrageous and harmful content” and “harmful mis-and-dis-information”:

“While I wish we could better depend on American companies to help combat these issues, the reality is that outrageous and harmful content helps drive their profit margins. That’s the online platforms.

I’ll also highlight, as I have in previous hearings, that the problem of harmful mis-and-dis-information online is even worse for users who speak Spanish and other languages outside of English as a result of platforms not making adequate investments to protect them.”

Rep. Debbie Dingell (D-MI) also signaled an intent to use Section 230 reform to target “false information” and claimed that Section 230 has allowed platforms to “evade accountability for what occurs on their platforms.”

When pushing for reform, Rep. Buddy Carter (R-GA) framed Section 230 as “part of the problem” because “it’s kind of set a free for all on the Internet.”

While several lawmakers were in favor of Section 230 reforms that pressure platforms to moderate more aggressively, one of the witnesses, Kate Tummarello, the Executive Director at the advocacy organization Engine, did warn that these efforts could lead to censorship.

“It’s not that the platforms would be held liable for the speech,” Tummarello said. “It’s that the platforms could very easily be pressured into removing speech people don’t like.”

You can watch the full hearing here.

New X Policy Forces Earners To Verify Their Government ID With Israeli Verification Company
reclaimthenet.org

New policy raises privacy concerns amidst X's commitment to free speech.

X, formerly Twitter, is now mandating the use of a government ID-based account verification system for users who earn revenue on the platform – whether from advertising or from paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions. Users who opt to receive payouts on the platform will have to undergo a verification process with the company.

This initiative aims to curb impersonation and fraud and to improve user support, yet it also raises profound questions about privacy and free speech: X markets itself as a free speech platform, and free speech and anonymity often go hand in hand. This is especially true in countries where speech can get citizens jailed or worse.

“We’re making changes to our Creator Subscriptions and Ads Revenue Share programs to further promote authenticity and fight fraud on the platform. Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024,” the update to X’s verification page now reads.

This shift towards online digital ID verification is part of a broader trend across the political sphere, where the drive for identification often conflicts with the desire for privacy and anonymous speech. By linking online identities to government-issued IDs, platforms like X may stifle expression, as users become wary of speaking freely when their real identities are known.

This policy shift signals a move towards more accurate but also more intrusive forms of user identification. Although intended to enhance security, these practices risk undermining the very essence of free speech by making users feel constantly monitored and raise fears that, in the near future, all speech on major platforms will have to be linked to a government-issued ID.

Anonymity has long been a cornerstone of free speech, allowing individuals to express controversial, dissenting, or unpopular opinions without fear of retribution. Throughout history, anonymous speech has been a critical tool for activists, whistleblowers, and ordinary citizens alike. It enables people to criticize their governments, expose corruption, and share personal experiences without risking their safety or livelihoods.

Governments around the world have been pushing for an end to online anonymity over the last year, and X’s new policy change is a step towards this agenda.

Over the last year, a slew of child safety bills has emerged, ostensibly aimed at protecting the youngest internet users. However, beneath the surface of these well-intentioned initiatives lies a more insidious agenda: the push for widespread online ID verification.

X owner Elon Musk has commented in support of these bills, as recently as last week.

While this new X change applies only to users looking to claim a cut of the advertising revenue that X makes from their posts, and is not yet enforced for all users, it is a large step towards normalizing online digital ID verification.

Say Goodbye to Cloud Anonymity? New US Regulations Demand User Identification
reclaimthenet.org

New rules aim to combat malicious actors but risk compromising the privacy of everyday users.

The US Department of Commerce is seeking to end the right of users of cloud services to remain anonymous.

The proposal first emerged in January, documents show, detailing new rules (under the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities) for Infrastructure as a Service (IaaS) providers, including Know Your Customer (KYC) requirements of the kind normally imposed on banks and financial institutions.

But now, the US government is citing concerns over “malicious foreign actors” and their usage of these services as a reason to effectively end anonymity on the cloud, including when only signing up for a trial.

Another new proposal in the notice is to cut off access to US cloud services for persons designated as “foreign adversaries.”

As is often the case, although the justification for such measures is a foreign threat, US citizens inevitably, given the nature of the infrastructure in question, get caught up as well. And, once again, to address a problem caused by a few users, everyone will be denied the right to anonymity.

That, it appears, would be any government’s dream these days. The industry itself, especially the biggest players like Amazon, can implement the identification requirement with ease, at the same time gaining a valuable new source of personal data.

The only losers here appear to be users of IaaS platforms, who will have to give tech giants yet another way of accessing their sensitive personal information – and risk losing it through leaks.

Meanwhile, the actual malicious actors will hardly give up those services – leaked personal data can be bought and sold illegally, including by the very people the proposal says it is targeting.

Until now, providers of cloud services felt no need to implement a KYC regime, instead allowing people to become users, or try their products, simply by providing an email, and a valid credit card in case they signed up for a plan.

As for what the proposal considers to be an IaaS, the list is long and includes services providing processing, storage, networks, content delivery networks (CDNs), virtual private servers (VPSs), proxies, domain name resolution services, and more.

Online Speech Protections For Everyone Are In Danger
act.eff.org

Some members of Congress want to delete Section 230, the key law underpinning free speech online. Even though this law has protected millions of Americans’ right to speak out and organize for decades, the House is now debating a proposal to “sunset” the law after 18 months.

Section 230 reflects values that most Americans agree with: you’re responsible for your own speech online, but, with narrow exceptions, not the speech of other people. This law protects every internet user and website host, from large platforms down to the smallest blogs. If Congress eliminates Section 230, we’ll all be less free to create art and speak out online.

Section 230 says that online services and individual users can’t be sued over the speech of other users, whether that speech appears in a comment section, a social media post, or a forwarded email. Without Section 230, both small platforms and Big Tech would be far more likely to remove our speech out of fear that it offends someone enough to file a lawsuit.

Section 230 also protects content moderators who take actions against their site’s worst or most abusive users. Sunsetting Section 230 will let powerful people or companies constantly second-guess those decisions with lawsuits—it will be a field day for the worst-behaved people online.

The sponsors of this bill, Reps. Cathy McMorris Rodgers (R-WA) and Frank Pallone (D-NJ), claim that if it passes, it will get Big Tech to come to the table and negotiate a new set of rules for online speech. Here’s what the bill’s supporters don’t get: everyday users don’t want Big Tech in Washington, working with politicians to rewrite internet speech law. That would be a disaster for us all.

We need your help to tell all U.S. Senators and Representatives to oppose this bill, and to vote no if it comes to the floor.

Elon Musk Seemingly Supports NY "Child Safety" Bill for Digital ID and Limiting "Addictive" Feeds

Elon Musk stopped just short of explicitly endorsing two New York state online child safety bills, even though, for the proposals to work, platforms would have to implement age verification and digital ID before people could access them.

The X owner’s reaction to a post about Meta and Google reportedly spending more than a million dollars lobbying against the bills read, “In sharp contrast, X supports child safety bills.”

It remains unclear whether Musk was expressing support for these particular bills – New York Senate Bills S7694 and S3281 – or for legislative efforts in general to make the internet safer for minors. Another possibility is that he was simply not missing a chance to criticize the competition.


Either way, two problems keep cropping up with such efforts in various jurisdictions: very often, the proposed laws are far broader than their stated purpose, using the protection of children as the main talking point to shut down any opposition.

And, as in this case, they call for some form of age verification, which is only achievable by identifying everyone who visits a site or uses a platform, undermining online anonymity and curbing free speech.

A press source who criticized Google and Meta over their lobbying effort, speaking on condition of anonymity, said that most of the bills’ provisions are “reasonable.”

On the reasonable side is Bill S7694’s intention to amend general business law to ensure that minors do not encounter “addictive” feeds on the social media platforms they use.

This would be achieved by showing chronological rather than algorithmically manipulated feeds to those established to be minors.

Another provision would limit the time these users can spend on the sites during the night, framed as a health benefit.
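As a rough illustration of how those two S7694 provisions could look in practice, here is a minimal Python sketch. The bill specifies outcomes, not implementations, so the function names, the created_at attribute, and the midnight-to-6-a.m. window are all assumptions made for the example.

    # Illustrative sketch only; the bill describes outcomes, not code.
    # The night window (midnight to 6 a.m.) and all names are assumptions.

    from datetime import datetime

    def build_feed(posts, user_is_minor, ranking_fn):
        """Minors get a reverse-chronological feed; everyone else gets
        the algorithmically ranked one."""
        if user_is_minor:
            return sorted(posts, key=lambda p: p.created_at, reverse=True)
        return sorted(posts, key=ranking_fn, reverse=True)

    def night_access_allowed(user_is_minor, now=None,
                             night_start_hour=0, night_end_hour=6):
        """Restrict minors' access during night hours."""
        now = now or datetime.now()
        if user_is_minor and night_start_hour <= now.hour < night_end_hour:
            return False
        return True

Of course, any real platform would need a reliable signal for user_is_minor in the first place, which is exactly the age verification problem discussed below.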

Bill S3281 deals with child data privacy, seeking to ban the harvesting of this data (and subsequent targeted advertising), as well as requiring “data controllers to assess the impact of its products on children for review by the Bureau of Internet and Technology.”

But the elephant in the room remains: how are platforms supposed to know a user’s actual age?

This is where age verification comes in: the bills speak about using “commercially reasonable methods” to make sure a user is not a minor, and age verification through digital ID is also demanded to achieve “verifiable parental consent.”

Digital ID Laws Pass in Australian Parliament As Government Allocates Millions for Online Digital ID Implementation
reclaimthenet.org Digital ID Laws Pass in Australian Parliament As Government Allocates Millions for Online Digital ID Implementation

The move aims to streamline government services through digital IDs, sparking debate over security and potential political misuse.

Digital ID Laws Pass in Australian Parliament As Government Allocates Millions for Online Digital ID Implementation

The Australian Digital ID Law (Digital ID Bill 2024), which had already passed the Senate, was adopted by Australia’s House of Representatives in an 87-56 vote.

Australia is joining the EU and several other countries that seek to phase out physical IDs and replace them with digital schemes that pool the most sensitive personal information into massive, centralized databases.

Opponents consider this a security and privacy catastrophe in the making, with many purely political (ab)uses possible down the road.

In Australia, the goal is to link government services, health insurance, taxes, and more. To do this, the government will spend just shy of $197 million launching the scheme.

MPs from the parties that voted against the bill – the Liberal-National Opposition – said that their constituents were worried about privacy, their freedoms in general, and government intervention.

Once again, arguments such as “convenience” – clearly a lopsided trade-off considering the gravity of these concerns – are offered to assuage them, and the point is made that participation is not mandatory.

At least not yet, and not explicitly.

Liberal Senator Alex Antic touched on precisely this point – an example being that the bill allows people to open bank accounts without digital IDs “by going to the nearest branch.”

But physical bank branches are now closing at a rapid rate, Antic remarked.

Even more taxpayer money is being spent in Australia to shore up the Online Safety Act and the eSafety program.

The censorship effort, which, like so many others, claims that its purpose is merely to “protect the children,” is in reality set up to hunt down whatever the government decides qualifies as “harmful content.” The federal budget now earmarks millions for several projects, including a $6.5 million pilot meant to produce an online age verification method, referred to as “age assurance technology.”

Meanwhile, “emerging online threats” will get a share from the total of $43.2 million set aside in the budget’s communications package.

The eSafety Commissioner’s office will get $1.4 million over the next two years.

Telegram Founder Reveals US Government’s Alleged Covert Maneuvers to Backdoor the App