mirror of https://github.com/privacyguides/privacyguides.org.git synced 2025-09-10 20:18:53 +00:00

Compare commits


11 Commits

Author SHA1 Message Date

- 815eb65d10 refactor: Move videos to Ghost (#3115), 2025-09-10 02:21:13 -05:00
- 30f05ff291 docs: Link to TikTok profile (#3095), 2025-09-09 22:04:49 -05:00
  Signed-off-by: fria <fria@privacyguides.org>
  Signed-off-by: Freddy <freddy@privacyguides.org>
- Em: 7e5ec73759 update(blog): Clarify definition in Chat Control article (#3113), 2025-09-08 19:45:57 -05:00
  Co-authored-by: Jonah Aragon <jonah@privacyguides.org>
- Em: 32d84e9a42 update(blog)!: Chat Control Must Be Stopped, Act Now! (#3111), 2025-09-08 12:45:29 -05:00
  Signed-off-by: Jonah Aragon <jonah@privacyguides.org>
- 19947442a6 fix(blog): Escape quotes in article title, 2025-09-03 15:41:37 -05:00
- Em: e55eb0986b update(blog)!: Spotting the Red (and Green) Flags (#3101), 2025-09-03 14:24:12 -05:00
  Signed-off-by: Jonah Aragon <jonah@privacyguides.org>
- e5500a11da update(video): Update sources for latest 2, 2025-08-29 17:38:27 -05:00
  Co-Authored-By: Jordan Warne <contact@jordanwarne.net>
- Daniel Gray: 0c4f98e7fb fix(strings.en.env): Replace single quote env var with double quote, 2025-08-29 16:14:10 +09:30
  Changed the env var to use double quotes around the value and single quotes in the HTML attributes to avoid issues caused by words containing apostrophes within the text.
- ac96552200 update(video)!: Privacy is Power, 2025-08-28 22:49:39 -05:00
  Co-Authored-By: Jordan Warne <contact@jordanwarne.net>
- 1a7eb59fee update(video)!: Age Verification is a Privacy Nightmare, 2025-08-28 21:21:32 -05:00
  Co-Authored-By: Jordan Warne <contact@jordanwarne.net>
- Em: 575818a637 update(blog)!: Privacy Washing Is a Dirty Business (#3098), 2025-08-20 12:01:23 -05:00
  Signed-off-by: Jonah Aragon <jonah@privacyguides.org>
45 changed files with 999 additions and 800 deletions


@@ -1,6 +1,5 @@
 :1337 {
     reverse_proxy /articles/* http://127.0.0.1:8001
-    reverse_proxy /videos/* http://127.0.0.1:8002
     reverse_proxy /en/* http://127.0.0.1:8000
     redir / /en/
 }
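With the `/videos/*` route removed, the resulting development Caddyfile would read as follows (a sketch reconstructed from the hunk above, not an authoritative copy of the file):

```caddyfile
:1337 {
	# Proxy the blog and the main site to their local mkdocs dev servers.
	reverse_proxy /articles/* http://127.0.0.1:8001
	reverse_proxy /en/* http://127.0.0.1:8000
	# Redirect the bare root to the English main site.
	redir / /en/
}
```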


@@ -19,10 +19,6 @@
     "8001": {
       "label": "Articles",
       "onAutoForward": "silent"
-    },
-    "8002": {
-      "label": "Videos",
-      "onAutoForward": "silent"
     }
   },
   "otherPortsAttributes": {
@@ -52,20 +48,6 @@
         "group": "Live server"
       }
     },
-    {
-      "label": "Videos",
-      "type": "shell",
-      "command": "mkdocs serve --config-file=mkdocs.videos.yml --dev-addr=localhost:8002",
-      "group": "test",
-      "runOptions": {
-        "runOn": "folderOpen"
-      },
-      "presentation": {
-        "reveal": "always",
-        "panel": "dedicated",
-        "group": "Live server"
-      }
-    },
     {
       "label": "Main",
       "type": "shell",


@@ -101,24 +101,13 @@ jobs:
       continue-on-error: true
       privileged: ${{ fromJSON(needs.metadata.outputs.privileged) }}
-  build_videos:
-    if: ${{ contains(github.event.pull_request.labels.*.name, 'ci:build videos') }}
-    needs: [submodule, metadata]
-    uses: ./.github/workflows/build-videos.yml
-    with:
-      ref: ${{github.event.pull_request.head.ref}}
-      repo: ${{github.event.pull_request.head.repo.full_name}}
-      continue-on-error: true
-      privileged: ${{ fromJSON(needs.metadata.outputs.privileged) }}
   combine_build:
-    needs: [build_english, build_i18n, build_blog, build_videos]
+    needs: [build_english, build_i18n, build_blog]
     if: |
       (always() && !cancelled() && !failure()) &&
       needs.build_english.result == 'success' &&
       (needs.build_i18n.result == 'success' || needs.build_i18n.result == 'skipped') &&
-      (needs.build_blog.result == 'success' || needs.build_blog.result == 'skipped') &&
-      (needs.build_videos.result == 'success' || needs.build_videos.result == 'skipped')
+      (needs.build_blog.result == 'success' || needs.build_blog.result == 'skipped')
     runs-on: ubuntu-latest
     steps:
@@ -140,5 +129,5 @@ jobs:
   cleanup:
     if: ${{ always() }}
-    needs: [build_english, build_i18n, build_blog, build_videos]
+    needs: [build_english, build_i18n, build_blog]
     uses: privacyguides/.github/.github/workflows/cleanup.yml@main
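The `needs`/`if` pattern in the `combine_build` job above can be illustrated with a minimal standalone workflow (hypothetical job and label names, a sketch rather than a copy of this repository's configuration):

```yaml
# Sketch: an optional job gated by a PR label, plus a downstream job that
# tolerates the optional job being skipped but still blocks on failures.
name: Example
on: pull_request
jobs:
  optional:
    if: ${{ contains(github.event.pull_request.labels.*.name, 'ci:optional') }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "optional work"
  combine:
    needs: [optional]
    # always() keeps this job eligible even when `optional` was skipped;
    # the result check then rejects real failures and cancellations.
    if: |
      (always() && !cancelled() && !failure()) &&
      (needs.optional.result == 'success' || needs.optional.result == 'skipped')
    runs-on: ubuntu-latest
    steps:
      - run: echo "combining builds"
```

Without `always()`, a skipped dependency would cause the downstream job to be skipped too, which is why the status-check functions and the per-job `result` checks are combined.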


@@ -1,116 +0,0 @@
name: 🛠️ Build Videos

on:
  workflow_call:
    inputs:
      ref:
        required: true
        type: string
      repo:
        required: true
        type: string
      context:
        type: string
        default: deploy-preview
      continue-on-error:
        type: boolean
        default: true
      privileged:
        type: boolean
        default: true

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    continue-on-error: ${{ inputs.continue-on-error }}
    permissions:
      contents: read
    steps:
      - name: Add GitHub Token to Environment
        run: |
          echo "GH_TOKEN=${{ secrets.GITHUB_TOKEN }}" >> "$GITHUB_ENV"
      - name: Download Repository
        uses: actions/checkout@v4
        with:
          repository: ${{ inputs.repo }}
          ref: ${{ inputs.ref }}
          persist-credentials: "false"
          fetch-depth: 0
      - name: Download Submodules
        uses: actions/download-artifact@v4
        with:
          pattern: repo-*
          path: modules
      - name: Move mkdocs-material-insiders to mkdocs-material
        if: inputs.privileged
        run: |
          rmdir modules/mkdocs-material
          mv modules/repo-mkdocs-material-insiders modules/mkdocs-material
      - name: Move brand submodule to theme/assets/brand
        run: |
          rmdir theme/assets/brand
          mv modules/repo-brand theme/assets/brand
      - name: Install Python (pipenv)
        if: inputs.privileged
        uses: actions/setup-python@v5
        with:
          cache: "pipenv"
      - name: Install Python (no pipenv)
        if: ${{ !inputs.privileged }}
        uses: actions/setup-python@v5
      - name: Install Python Dependencies
        if: inputs.privileged
        run: |
          pip install pipenv
          pipenv install
          sudo apt install pngquant
      - name: Install Python Dependencies (Unprivileged)
        if: ${{ !inputs.privileged }}
        run: |
          pip install mkdocs-material mkdocs-rss-plugin mkdocs-glightbox mkdocs-macros-plugin
          sudo apt install pngquant
      - name: Set base navigation URLs for production build
        if: inputs.context == 'production'
        run: |
          {
            echo "MAIN_SITE_BASE_URL=https://www.privacyguides.org/en/"
            echo "MAIN_SITE_ABOUT_URL=https://www.privacyguides.org/en/about/"
            echo "MAIN_SITE_RECOMMENDATIONS_URL=https://www.privacyguides.org/en/tools/"
            echo "MAIN_SITE_KNOWLEDGE_BASE_URL=https://www.privacyguides.org/en/basics/why-privacy-matters/"
            echo "ARTICLES_SITE_BASE_URL=https://www.privacyguides.org/articles/"
            echo "VIDEOS_SITE_BASE_URL=https://www.privacyguides.org/videos/"
          } >> "$GITHUB_ENV"
      - name: Build Website (Privileged)
        if: inputs.privileged
        run: |
          pipenv run mkdocs build --config-file mkdocs.videos.yml
      - name: Build Website (Unprivileged)
        if: ${{ !inputs.privileged }}
        run: |
          BUILD_INSIDERS=false mkdocs build --config-file mkdocs.videos.yml
      - name: Package Website
        run: |
          tar -czf site-build-videos.tar.gz site
      - name: Upload Site
        uses: actions/upload-artifact@v4
        with:
          name: site-build-videos.tar.gz
          path: site-build-videos.tar.gz
          retention-days: 1


@@ -27,7 +27,6 @@ on:
       - "main"
     paths:
       - "blog/**"
-      - "videos/**"
 concurrency:
   group: release-deployment
@@ -61,19 +60,8 @@ jobs:
       continue-on-error: false
       context: production
-  build_videos:
-    needs: submodule
-    permissions:
-      contents: read
-    uses: ./.github/workflows/build-videos.yml
-    with:
-      repo: ${{ github.repository }}
-      ref: ${{ github.ref }}
-      continue-on-error: false
-      context: production
   deploy:
-    needs: [build_blog, build_videos]
+    needs: [build_blog]
     uses: privacyguides/webserver/.github/workflows/deploy-garage.yml@main
     with:
       environment: production
@@ -83,5 +71,5 @@ jobs:
   cleanup:
     if: ${{ always() }}
-    needs: [build_blog, build_videos]
+    needs: [build_blog]
     uses: privacyguides/.github/.github/workflows/cleanup.yml@main


@@ -77,17 +77,6 @@ jobs:
       continue-on-error: false
       context: production
-  build_videos:
-    needs: submodule
-    permissions:
-      contents: read
-    uses: ./.github/workflows/build-videos.yml
-    with:
-      repo: ${{ github.repository }}
-      ref: ${{ github.ref }}
-      continue-on-error: false
-      context: production
   release:
     name: Create release notes
     needs: build
@@ -109,7 +98,7 @@ jobs:
       makeLatest: true
   deploy:
-    needs: [build, build_blog, build_videos]
+    needs: [build, build_blog]
     uses: privacyguides/webserver/.github/workflows/deploy-all.yml@main
     secrets:
       NETLIFY_TOKEN: ${{ secrets.NETLIFY_TOKEN }}
@@ -126,5 +115,5 @@ jobs:
   cleanup:
     if: ${{ always() }}
-    needs: [build, build_blog, build_videos]
+    needs: [build, build_blog]
    uses: privacyguides/.github/.github/workflows/cleanup.yml@main


@@ -15,6 +15,7 @@ Jonah Aragon <jonah@privacyguides.org> <jonah@triplebit.net>
 Jonah Aragon <jonah@privacyguides.org> <jonah@privacytools.io>
 Jonah Aragon <jonah@privacyguides.org> <github@aragon.science>
 Jordan Warne <jordan@privacyguides.org> <jw@omg.lol>
+Jordan Warne <jordan@privacyguides.org> <contact@jordanwarne.net>
 Justin Ehrenhofer <justin.ehrenhofer@gmail.com> <12520755+SamsungGalaxyPlayer@users.noreply.github.com>
 Mare Polaris <ph00lt0@privacyguides.org> <15004290+ph00lt0@users.noreply.github.com>
 Niek de Wilde <niek@privacyguides.org> <github.ef27z@simplelogin.com>


@@ -569,3 +569,4 @@ allowlisted
 MyMonero
 Monero-LWS
 OkCupid
+Anom

(11 binary image files added; contents not shown. Sizes: 182 KiB, 173 KiB, 214 KiB, 95 KiB, 317 KiB, 115 KiB, 108 KiB, 118 KiB, 68 KiB, 238 KiB, 286 KiB.)


@@ -0,0 +1,308 @@
---
date:
  created: 2025-09-08T18:00:00Z
categories:
  - News
authors:
  - em
description:
  Chat Control is back to undermine everyone's privacy. There's an important deadline this Friday on September 12th. We must act now to stop it!
schema_type: ReportageNewsArticle
preview:
  cover: blog/assets/images/chat-control-must-be-stopped/chatcontrol-cover.webp
---
# Chat Control Must Be Stopped, Act Now!
![Filtered photo of a protest with a protestor holding a sign in the foreground. The background is a red monochrome and the sign is in turquoise. The sign says "You won't make me live this 1984 sh*t".](../assets/images/chat-control-must-be-stopped/chatcontrol-cover.webp)
<small aria-hidden="true">Illustration: Em / Privacy Guides | Photo: Ramaz Bluashvili / Pexels</small>
If you've heard of [Chat Control](the-future-of-privacy.md) already, bad news: **it's back**. If you haven't, this is a pressing issue you should urgently learn more about if you value privacy, democracy, and human rights. This is happening **this week**, and **we must act to stop it right now**.<!-- more -->
Take a minute to visualize this: every morning, you wake up to a police officer entering your home to inspect it and staying with you all day long.
The agent checks your bathroom, your medicine cabinet, your bedroom, your closets, your drawers, your fridge, and takes photos and notes to document everything. Then, this report is uploaded to the police's cloud. It's "[for a good cause](encryption-is-not-a-crime.md)" you know, it's to make sure you aren't hiding any child sexual abuse material under your bed.
Every morning. Even if you're naked in bed. Even while you're having a call with your doctor or your lover. Even when you're on a date. Even while you're working and discussing your client's confidential information with their attorney. This police officer is there, listening to you and reporting on everything you do.
This is the in-person equivalent of Chat Control, a piece of legislation that would mandate **all** services to scan **all** private digital communications of **everyone** residing in the European Union.
This is an Orwellian nightmare.
## Act now!
This is happening **this week**. European governments will be finalizing their positions on the regulation proposal on **Friday, September 12th, 2025**.
- ==If you are not located in Europe==: Keep reading, this will affect you too.
- If you are still unconvinced: Keep reading, we discuss Chat Control in [more detail](#why-is-this-bad) below.
- If you are located in Europe: You must **act now** to stop it.
<div class="admonition question" markdown>
<p class="admonition-title">How to stop this? Contact your MEPs before September 12th</p>
Use this [**website**](https://fightchatcontrol.eu/) to easily contact your government representatives before September 12th, and tell them they should **oppose Chat Control**. Even if your country already opposes Chat Control, contact your representatives to tell them you are relieved they oppose it, and that you support this decision to protect human rights. This will help reinforce their position.
But if your country *supports* Chat Control, or is *undecided*, **it is vital that you contact your representatives before this deadline**. To support your point, you can share this article with them or one of the many great [resources](#resources-to-learn-more-and-fight-for-human-rights) listed at the end.
At the time of this writing, the list of countries to contact is:
| **Supporting (15)** | | **Undecided (6)** |
| ---------------------------------- | ----------------------------------- | -------------------- |
| :triangular_flag_on_post: Bulgaria | :triangular_flag_on_post: Latvia | :warning: Estonia |
| :triangular_flag_on_post: Croatia | :triangular_flag_on_post: Lithuania | :warning: Germany |
| :triangular_flag_on_post: Cyprus | :triangular_flag_on_post: Malta | :warning: Greece |
| :triangular_flag_on_post: Denmark | :triangular_flag_on_post: Portugal | :warning: Luxembourg |
| :triangular_flag_on_post: France | :triangular_flag_on_post: Slovakia | :warning: Romania |
| :triangular_flag_on_post: Hungary | :triangular_flag_on_post: Spain | :warning: Slovenia |
| :triangular_flag_on_post: Ireland | :triangular_flag_on_post: Sweden | |
| :triangular_flag_on_post: Italy | | |
</div>
![A map of countries part of the European Union. Countries opposing Chat Control are represented in green, countries undecided in blue, and countries in favor are in red. Below there is text saying "Act now! www.chatcontrol.eu".](../assets/images/chat-control-must-be-stopped/chatcontrol-map-chatcontroleu-20250903.webp)
<small aria-hidden="true">Image: Patrick Breyer / [chatcontrol.eu](https://www.chatcontrol.eu)</small>
## What is Chat Control?
"Chat Control" refers to a series of legislative proposals that would make it mandatory for *all* service providers (text messaging, email, social media, cloud storage, hosting services, etc.) to scan *all* communications and *all* files (including end-to-end encrypted ones), in order to supposedly detect whatever the government deems "abusive material."
The push for Chat Control started in 2021 with the approval of a [derogation](https://www.patrick-breyer.de/en/chatcontrol-european-parliament-approves-mass-surveillance-of-private-communications/) to the ePrivacy Directive by the European Parliament. This derogation escalated to a second proposal for *mandatory* scanning a year later, which was [rejected](https://fortune.com/europe/2023/10/26/eu-chat-control-csam-encryption-privacy-european-commission-parliament-johansson-breyer-zarzalejos-ernst/) in 2023. Nevertheless, lawmakers and lobbyists determined to undermine our safety and civil liberties are bringing it back again two years later, **literally trying to wear you down**.
We cannot let authoritarians wear us down until we lose all our privacy rights. Our privacy rights are fundamental to so many other human rights, to civil liberties, to public safety, and to functioning democracies.
Chat Control undermines all of this.
Cryptography professor and cybersecurity expert Matthew Green described the 2022 proposal document for Chat Control as "[**the most terrifying thing I've ever seen**](https://fortune.com/2022/05/12/europe-phone-surveillance-crackdown-child-sexual-abuse-material-sparks-outrage-among-cybersecurity-experts-privacy-activists/)".
And terrifying, it is.
The [most recent proposal for Chat Control](https://tuta.com/blog/chat-control-criticism) comes from the EU Council's Danish presidency, pushing for a regulation misleadingly called the **Child Sexual Abuse Regulation** (CSAR). Despite its seemingly caring name, this regulation will **not** help fight child abuse, and will likely even worsen it, negatively impacting what is already being done to fight child abuse (more on this in the [next section](#would-this-protect-the-children)).
The CSAR proposal (which *is* the latest iteration of Chat Control) could be implemented as early as *next month*, if we do not stop it.
**The problem is this: Chat Control will not work, it is unreliable, it will escalate in scope, and it will endanger everyone (including the children).**
Even if you are not in Europe, know that Chat Control will affect everyone inside *and* outside of Europe one way or another. Regardless of where you are, you should be concerned and pay attention, and there are things you can do to fight back. This is important.
![Still image from a video showing an illustration of three cellphones being scanned by a red light, with lines leading to a law enforcement icon.](../assets/images/chat-control-must-be-stopped/chatcontrol-stopscanningme-video.webp)
<small aria-hidden="true">Still image from [video](https://stopscanningme.eu/video/csar-explainer.mp4): Stop Scanning Me / EDRi</small>
## Why is this bad?
The idea that it's possible to somehow [magically protect](encryption-is-not-a-crime.md/#magical-backdoor-only-for-the-good-guys-is-a-complete-fantasy) information properly while giving access to unquestionably well-intended law enforcement comes from extreme naivety, a lack of information, or plain dishonesty.
This proposal would effectively break any end-to-end encryption protections, and potentially expose all your files and communications to not only law enforcement, but eventually also to criminals of all sorts (with the data breaches, data leaks, and corruption that will inevitably follow).
Here's a summary of some dangers this regulation would create if approved:
- **Breaking end-to-end encryption**: Removing crucial protections for all sensitive files and communications of vulnerable populations, victims, whistleblowers, journalists, activists, and everyone else.
- **Mission creep**: Once this mass surveillance system is in place, authorities can decide to add more criteria such as searching all communications for references to drug use, protest attendances, political dissidence, or even [negative comments](https://www.lemonde.fr/en/international/article/2025/03/22/how-a-french-researcher-being-refused-entry-to-the-us-turned-into-a-diplomatic-mess_6739415_4.html) about a leader. Europol (the EU law enforcement agency) has already called for [expanding the program](https://www.youtube.com/watch?v=L933xDcSS3o&t=2016s).
![A cartoon illustration explaining that chat control is planning to monitor all chats, emails, and messenger conversations, and use artificial intelligence to automatically report flagged content to the police.](../assets/images/chat-control-must-be-stopped/chatcontrol-LornaSchutte-chatcontroleu-1.webp)
<small aria-hidden="true">Image: Lorna Schütte / [chatcontrol.eu](https://www.chatcontrol.eu)</small>
- **Criminal attacks**: Each time a backdoor exists, it doesn't take long for criminals to find access and steal our information. This could include criminals finding access to each service independently, or to the entire database authorities would keep. A database that would be filled with material tagged as sexually explicit text or photos of children. This could even *create* new Child Sexual Abuse Material (CSAM) for criminals. For example, consenting teenagers innocently sexting together could have their photos collected in this database after being wrongly flagged by the automated system. Then, criminals could steal their intimate photos from government databases.
- **False positives**: With a mass surveillance system this large, especially one with no transparency and little oversight, false positives are inevitable. Despite marketing promises from the [organizations lobbying government officials](https://www.patrick-breyer.de/en/chat-control-eu-ombudsman-criticises-revolving-door-between-europol-and-chat-control-tech-lobbyist-thorn/), we all know AI technologies regularly misfire and cannot be relied on for anything of such importance. Loving parents could get flagged as pedophiles just for innocently uploading a photo of their child in the bathtub to their *private* cloud. Teenagers exploring their sexuality consensually with each other could get tagged as sexual predators (a label that might stick with them decades later). The police could receive reports on breastfeeding mothers. The list is endless.
![A cartoon illustration summarizing why chat control is dangerous.](../assets/images/chat-control-must-be-stopped/chatcontrol-LornaSchutte-chatcontroleu-3.webp)
<small aria-hidden="true">Image: Lorna Schütte / [chatcontrol.eu](https://www.chatcontrol.eu)</small>
- **Overwhelming resources**: The inevitable false positives will completely overwhelm the agencies responsible for investigating flagged material. This will cost them precious time they will not have to investigate *actual* abuse cases. Organizations fighting child sexual abuse are already overwhelmed and lack resources to prosecute real criminals.
- **Hurting victims**: Such a system of mass surveillance could prevent victims of child sexual abuse (and other crimes) from reaching out for help. Knowing that all their communications would be scanned, they would lose all confidentiality when reporting crimes. The evidence they share could even be flagged by Chat Control, as if they were the perpetrator rather than the victim. Sadly, many will likely decide it's safer not to report at all.
- **Self-censorship**: With Chat Control in place, not only might victims censor themselves and stop reaching out for help, but everyone else might as well. When people know they are being observed, they feel less free to be themselves and to share openly. This is doubly true for anyone who is part of a marginalized group, such as [LGBTQ+ people](importance-of-privacy-for-the-queer-community.md), or anyone who is being victimized or at risk of victimization.
![A cartoon illustration explaining how chat control does not protect the victims and might silence them due to loss of confidentiality.](../assets/images/chat-control-must-be-stopped/chatcontrol-LornaSchutte-chatcontroleu-2.webp)
<small aria-hidden="true">Image: Lorna Schütte / [chatcontrol.eu](https://www.chatcontrol.eu)</small>
- **Undermining democracy**: This surveillance system would allow governments to spy on opposition. Chat logs from opposing candidates, activists, and journalists could all be accessed by authorities in order to silence opponents or blackmail candidates. Even if you trust your government to not do this now, this doesn't mean it could not be used in this way by the next government. We have all seen how fast the political landscape can change.
- **Violating the GDPR (and other laws)**: The General Data Protection Regulation (GDPR) offers wonderful protections to Europeans. Sadly, Chat Control would make a complete farce of it. The Right to Erasure (right to delete) could be reduced to ashes by Chat Control, including for any highly sensitive information wrongly caught in the CSAR net. Moreover, it would [violate Article 7 and Article 8](https://tuta.com/blog/chat-control-criticism) of the EU Charter of Fundamental Rights.
Protecting the children is only the excuse used in hope of convincing a misinformed public. **Chat Control is authoritarian mass surveillance.**
Authorities understand well how important protecting communication and information is. This is why they included an exemption to protect *their own* communications, but not yours.
## Would this protect the children?
No.
This cannot be stressed enough: **This regulation would not protect the children, it would *harm* the children**, and everyone else too, worldwide. Claiming otherwise is either naivety or misinformation.
Last year, the civil and human rights association European Digital Rights (EDRi) put together a [joint statement from 48 organizations](https://edri.org/our-work/joint-statement-on-the-future-of-the-csa-regulation/) for children's protection, digital rights, and human rights, demanding that the European Parliament invest instead in proven strategies to fight child abuse. This appeal to reason does not seem to have been heard by most EU Member States.
There are many things we can do as a society to increase protections for children and fight abusers and criminals, but Chat Control is far from it all. Protection of the children is clearly only an excuse here, and a very misleading one.
![A popular No Yes meme, with the face replaced with the European Commission logo. In the No-part is: "Invest in: social workers, help for victims, support hotlines, prevention, education, targeted police work, IT-security", and in the Yes-part below is: "Buy Chat Control filter technology that doesn't solve the problem".](../assets/images/chat-control-must-be-stopped/chatcontrol-stopscanningme-meme-4.webp)
<small aria-hidden="true">[Image](https://stopscanningme.eu/en/organise-now.html): Stop Scanning Me / EDRi</small>
### Mislabelling children as criminals
First, this automated system is flawed in many ways, and the false-positive rate would likely be high. But let's imagine that, magically, the system could flag CSAM at an accuracy rate of 99%. This still means 1% of reports would be false. Scaled to the European Union's population of approximately 450 million people, exchanging likely billions of messages and files every day, this still means millions could be falsely tagged as sexual predators, with all the [consequences](https://www.republik.ch/2022/12/08/die-dunklen-schatten-der-chatkontrolle) this implies.
Worse, the Swiss federal police reported that about 80% of all automated reports they receive are [false positives](https://www.patrick-breyer.de/en/posts/chat-control/#WhatYouCanDo). This means that, in reality, the error rate is likely far higher than 1%, and actually closer to an **80% error rate**. And of the approximately 20% of genuine reports, in Germany, over 40% of the investigations initiated [targeted children](https://www.polizei-beratung.de/aktuelles/detailansicht/straftat-verbreitung-kinderpornografie-pks-2022/) themselves.
Sometimes, flagged content is simply teenagers innocently sexting each other consensually. Not only would they be wrongly tagged as criminals under Chat Control, but they'd be triggering an investigation that would expose their intimate photos to others.
Even in a magical world where Chat Control AI is 99% accurate, it would still wrongly tag and **expose sensitive data from millions of children**. In reality, no AI system is even remotely close to this accuracy level, and proprietary algorithms are usually opaque black boxes impossible to audit transparently. The number of children Chat Control would harm, and likely traumatize for life, would be disastrous.
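The base-rate arithmetic behind these claims can be sketched with a quick calculation. The message volume and prevalence figures below are illustrative assumptions, not official statistics:

```python
# Back-of-the-envelope base-rate arithmetic for mass scanning.
# All inputs are illustrative assumptions for EU-wide traffic.

messages_per_day = 1_000_000_000   # assumed scanned messages/files per day
prevalence = 1 / 1_000_000         # assumed fraction that is actually abusive
accuracy = 0.99                    # the "magical" 99% accurate classifier

# Flags on genuinely abusive content vs. flags on innocent content.
true_positives = messages_per_day * prevalence * accuracy
false_positives = messages_per_day * (1 - prevalence) * (1 - accuracy)

share_false = false_positives / (false_positives + true_positives)

print(f"True positives/day:  {true_positives:,.0f}")
print(f"False positives/day: {false_positives:,.0f}")
print(f"Share of flags that are false: {share_false:.2%}")
```

Under these assumptions, even an implausibly accurate classifier produces roughly ten thousand false flags for every genuine one, because abusive content is such a tiny fraction of all traffic. This is the same base-rate effect behind the high false-positive rates reported for today's far less accurate systems.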
### Exposing children's sensitive and sexual information
Any content that could be deemed suspicious or explicit by the system, accurately or not, would be flagged and reported.
When this content is reported, it will likely be uploaded to a database for human review. This means that if a teenager was sending an intimate photo of themselves to another consenting teenager, they could be flagged as sharing CSAM, even if it's their own photo. Then, their photo would be sent to the police for review. Information that should very much have stayed protected and private between these two teenagers is now exposed to strangers. This is wrong, and dangerous.
Even innocuous communications such as daily conversations, teenagers chatting with each other, parents reporting information about their child to a [doctor](https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html), and therapists talking with their patients, could all inadvertently expose children's sensitive information. This is information that should have remained *private*, and would now be uploaded to a police database, likely [stored there forever](https://www.iccl.ie/news/an-garda-siochana-unlawfully-retains-files-on-innocent-people-who-it-has-already-cleared-of-producing-or-sharing-of-child-sex-abuse-material/) with little recourse to remove it.
The more we collect sensitive information about children (photos, faces, locations, identifications, medical information, private chats, experiences, etc.), the more we risk exposing children to harm. This includes systems used by authorities and governments. Even if everyone with legitimate access to this data is miraculously 100% exemplary and incorruptible citizens, the databases and scanning systems will still be vulnerable to attacks from criminals and hostile governments alike.
The only way to protect children's information properly is to **1) not collect it**, and **2) use end-to-end encryption to protect it** when we cannot avoid collecting it. Spying on everyone and every child is the opposite of that.
### Authorities' databases will be attacked
It's impossible to perfectly secure information online. There is a lot we can do to improve security (much more than is done now), but data breaches will happen.
If governments mandate a backdoor to have access to all our online communication and stored files, it's inevitable that at least some criminals will eventually get access to it as well. This is even truer if this system is closed-source, [privatized](https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/), and isn't subjected to frequent independent audits with strong accountability.
Once a vulnerability is found by criminals, they will have the same access as authorities have to our data. With Chat Control, this means pretty much all our data.
In addition, Chat Control could facilitate the proliferation of even more spyware and [stalkerware](https://stopstalkerware.org/) on the market, thriving on the vulnerabilities found in such a powerful system. This would allow *anyone* to purchase access to spy on *anyone*, including databases of identified children. It could give direct backdoor access to pedophiles. How could *this* be helping to protect the children?
### The danger is inside
Even if the idea of online strangers accessing children's sensitive data is terrifying, the worst danger is often much closer.
Sadly, we already know that the [vast majority](https://content.c3p.ca/pdfs/C3P_SurvivorsSurveyFullReport2017.pdf) of child sexual abuse is perpetrated by adults close to the child, not strangers, and that two-thirds of CSAM images appear to have been [produced at home](https://theconversation.com/new-research-shows-parents-are-major-producers-of-child-sexual-abuse-material-153722). Chat Control would do nothing to fight this. In fact, it could facilitate it.
Child abuse is an incredibly important topic to discuss and to fight against as a society. Utilizing this issue as an excuse to pass a surveillance law that would endanger everyone, including the victims, is despicable.
When children are living with the abuser, the only escape is outside the home, and sometimes this means *online*. Abusers often use spying technologies to control and restrict access to help for their victims. If we make mass surveillance mandatory and normalized, this risks aggravating the stalkerware problem by obligating providers to implement backdoors in their systems. We would effectively be helping abusers at home to restrict access to help for their victims, including victims of CSAM. This is completely unacceptable.
### How to actually help the children
Despite the politicization of this issue to manipulate public opinion into accepting mass surveillance, there are actually *proven* solutions to help protect the children, online and offline.
First, governments should [listen](https://mogis.info/static/media/uploads/eu-libe-mogis-hahne-07032023_en.pdf) to [organizations already doing the work](https://edri.org/our-work/most-criticised-eu-law-of-all-time/). Most are understaffed and under-resourced to properly support the victims and prosecute the criminals. Thousands more reports every day would not help them do any effective work. More capacity to conduct *targeted* investigations and arrest criminals, and more capacity to create safe spaces to support victims and witnesses, will help.
Privacy should be the default, for everyone.
If all our services used end-to-end encryption when possible, and implemented proper security and privacy features and practices, this would effectively help to protect children as well. Abusers and criminals are looking for leaked and stolen data all the time. When a cloud photo storage service gets hacked, your photos are up for grabs online, including the photos of your children. When parents upload photos of their children and their address online, and this data gets exposed (leaked, breached, AI-scraped, etc.), it becomes accessible to criminals.
**Better privacy protections also mean better protections for the children.**
Children themselves should receive better education on how their data is used online and how to protect it. Additionally, it is vital to provide better education on what behaviors aren't normal coming from an adult, and how to reach out for help when it happens. Children should have access to safe and confidential resources to report abuse, whether it's happening outside or inside their home.
Parents should be careful when sharing information about their children. And when they have to, they should benefit from complete confidentiality, knowing their communication is fully end-to-end encrypted and not shared with anyone else.
There is so much we can do to better protect children online; surveillance is the opposite of it all.
## How would this affect me?
If this regulation is approved on **October 14th, 2025** (the date for the final vote), the consequences would be devastating for everyone, even outside the European Union.
We have seen how platforms implemented better privacy practices and features after the GDPR became effective in 2018, features that often benefited people worldwide. This could have the same effect in reverse.
Every platform potentially handling data of people located in the EU would be subjected to the law. Platforms would be obligated to scan all communications and all files of (at least) data subjects located in the EU, even data currently protected with end-to-end encryption. This would affect popular apps and services like Signal, Tuta, Proton, WhatsApp, Telegram, and many more.
### Outside of Europe
This would not only affect Europeans' data, but also the data of anyone outside the EU communicating with someone located within it, because end-to-end encryption can only work if **both** ends are protected.
If Chat Control gets approved and applied, it will become very difficult to communicate with anyone located in the EU while keeping strong protections for your data. Many people might just accept the surveillance passively, and as a result lose their rights and their protections, and compromise their democratic processes. Over time, this will likely lead down a slippery slope towards dystopian authoritarianism.
Outside of Europe, you could expect to see services removing some privacy-protective features, downgrading encryption, blocking European countries that are subjected to the law, or moving outside of Europe entirely. If location-based scanning is too complicated for an application to handle, some companies might just decide it's simpler to scan communications for all users, worldwide.
Additionally, Five Eyes countries (Australia, Canada, New Zealand, the United Kingdom, and the United States) have already [expressed support](https://www.youtube.com/watch?v=L933xDcSS3o&t=2163s) for Chat Control, and might be keen to try the same at home, if this gets approved and tested in Europe first.
### Inside of Europe
Without using tools that would now be deemed illegal, you would lose any protections currently granted by end-to-end encryption. It would become impossible for you to send an email, a text message, or a photo without being observed by your government, and potentially also by criminals and foreign governments, following the inevitable data breaches.
You would have to constantly self-censor to avoid triggering the system and getting reported to the authorities. At first, you would probably just have to stop sending nudes, sexting, or sending photos of naked children in the bathtub or playing at the beach. Then, this would escalate to never mentioning drugs, or anything that could sound like drugs, even as a joke. Later, you might have to stop texting about going to a protest, and stop organizing protests online. Further down the line, you might even have to self-censor to make sure you are not saying anything negative about a leader, or even a [foreign politician](https://www.reuters.com/world/us/trump-administration-resuming-student-visa-appointments-state-dept-official-says-2025-06-18/). This isn't hypothetical: this sort of [oppressive surveillance](https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent) already exists in some countries.
Many services you currently rely on right now would simply shut down, or move away from Europe entirely. Businesses might also move outside of Europe if they worry about protecting their proprietary information. This could cause massive layoffs, while organizations move to jurisdictions where they are allowed to keep their data protected and unobserved.
Finally, even if this doesn't affect you personally, or you don't believe it will, [**this isn't just about you**](the-privacy-of-others.md).
The data of vulnerable people would be exposed and their safety put at risk. Victims might decide to stop reaching out for help or reporting crimes. Sources requiring anonymity might decide the risk isn't worth reporting valuable information to journalists. Opponents of governments in power could be silenced. Every democracy in the European Union would suffer greatly from it.
Chat Control is completely antithetical to the values the European Union has been presenting to the world in recent years.
![The popular Red Dress meme, with the offended woman overlaid with the words "Fundamental Rights", the whistling man the words "European Commission", and woman wearing the red dress the words "Scanning private messages and controlling how citizens use the internet".](../assets/images/chat-control-must-be-stopped/chatcontrol-stopscanningme-meme-2.webp)
<small aria-hidden="true">[Image](https://stopscanningme.eu/en/organise-now.html): Stop Scanning Me / EDRi</small>
## What can I do about it?
Even if the landscape seems dismal, **the battle isn't over**. There are many things you can do, right now, to fight against this authoritarian dystopia.
### For Europeans, specifically
- Contact your country representatives **TODAY**. Contact them before this Friday, September 12th, 2025. The group Fight Chat Control has put together an [**easy tool**](https://fightchatcontrol.eu/#contact-tool) making this quick with only a few clicks.
- After September 12th, the battle isn't over. Although governments will finalize their positions on that day, the final vote happens on **October 14th, 2025**. If you missed the September 12th deadline, keep contacting your representatives anyway.
- Tell your family and friends to contact their representatives as well, talk about it, make noise.
### For Everyone, including Europeans
- Talk about Chat Control on social media often, especially this week. Make noise online. Use the hashtags #ChatControl and #StopScanningMe to help others learn more about the opposition movement.
- Share informative [videos and memes](#resources-to-learn-more-and-fight-for-human-rights) about Chat Control. Spread the word in various forms.
- Contact your European friends in impacted countries and tell them to contact their representatives NOW.
- Even outside the EU, you can contact your own representatives as well, to let them know regulations like Chat Control are horrible for human rights, and you hope your country will never fall for such repressive laws. Tell your political representatives that privacy rights are important to you. **Your voice matters.**
We need your help to fight this. For democracy, for privacy, and for all other human rights, we cannot afford to lose this battle.
![Screenshot of the Fight Chat Control website in a browser.](../assets/images/chat-control-must-be-stopped/chatcontrol-fightchatcontrol-website.webp)
<small aria-hidden="true">Screenshot: [fightchatcontrol.eu](https://fightchatcontrol.eu/)</small>
## Resources to learn more, and fight for human rights
### Videos about Chat Control
- [**Stop Scanning Me**: Short video that summarizes perfectly the issues with Chat Control](https://stopscanningme.eu/video/csar-explainer.mp4)
- [**Stop Scanning Me**: German-language version of the same short video](https://www.patrick-breyer.de/posts/chat-control/)
- [**Louis Rossmann**: Video discussing why privacy matters, and the impact of Chat Control from a perspective outside of Europe](https://www.youtube.com/watch?v=3NyUgv6dpJc)
- [**Shaping Opinion**: Excellent interview with Chat Control expert Patrick Breyer (recommended)](https://www.youtube.com/watch?v=L933xDcSS3o)
- [**Patrick Breyer**: PeerTube channel with numerous videos related to Chat Control (German & English)](https://peertube.european-pirates.eu/c/patrick_breyer_mep_channel)
### Memes about Chat Control
- [**Stop Scanning Me**: Memes, banners, and other graphics](https://stopscanningme.eu/en/organise-now.html)
- [**Patrick Breyer**: Memes, explainers, maps, and other graphics](https://www.patrick-breyer.de/posts/chat-control/#WhatYouCanDo)
### Websites with more information
- [**Fight Chat Control** (Contact your representatives here **TODAY**!)](https://fightchatcontrol.eu/)
- [**Stop Scanning Me** (from EDRi)](https://stopscanningme.eu)
- [**Patrick Breyer** (expert and former Member of the European Parliament)](https://www.patrick-breyer.de/posts/chat-control/)
- [**European Crypto Initiative**](https://eu.ci/eu-chat-control-regulation/)
- [Follow **Fight Chat Control** on Mastodon for updates](https://mastodon.social/@chatcontrol)
<div class="admonition warning" markdown>
<p class="admonition-title">Important Note: If you are reading this article after September 12th</p>
Regardless of the outcome on Friday, the fight isn't over after September 12th. The next deadline will be the **final vote on October 14th, 2025**.
If you've missed September 12th, make sure to contact your representatives **right now** to tell them to **oppose Chat Control** on October 14th.
</div>
Update (9/8): Added clarification about what Chat Control is for readers unfamiliar with it.
---
date:
created: 2025-08-20T17:00:00Z
categories:
- Opinion
authors:
- em
description:
Privacy washing is a widely used deceptive strategy. Learning to detect it better is an important skill to develop to help us to respond to it and report it.
schema_type: Opinion
preview:
cover: blog/assets/images/privacy-washing-is-a-dirty-business/washing-cover.webp
---
# Privacy Washing Is a Dirty Business
![Filtered photo of a sticker on a metallic surface with graffiti. The sticker has the sentence "We respect your privacy!" written on it, and the whole sentence is barred is a red line over it.](../assets/images/privacy-washing-is-a-dirty-business/washing-cover.webp)
<small aria-hidden="true">Photo: Marija Zaric / Unsplash</small>
Perhaps you haven't heard the term *privacy washing* before. Nonetheless, it's likely that you have already been exposed to this scheme in the wild. Regrettably, privacy washing is a widespread deceptive strategy.<!-- more -->
## What is privacy washing
Similarly to whitewashing (concealing unwanted truths to improve a reputation) and greenwashing (deceptively presenting a product as environmentally friendly for marketing purposes), privacy washing misleadingly, or fraudulently, presents a product, service, or organization as being responsible and trustworthy with data protection, when it isn't.
<div class="admonition quote inline end" markdown>
<p class="admonition-title">Your privacy is&ast; important to us. <small aria-hidden="true">&ast;not!</small></p></div>
The term has been used for over a decade already. It's saddening to see that not only is this [not a new problem](https://dataethics.eu/privacy-washing/), but it has only gotten worse through the years.
With the acceleration of data collection, the accumulation of data breaches, and the erosion of customers' trust, companies have an increased need to reassure users in order to gain their business.
Despite consumers' rights and expectations, implementing proper data protection takes time, expertise, and money. Even if the long-term benefits are colossal, the time invested often doesn't translate into direct *short-term* profits, the main objective for most businesses. On the other hand, collecting more data to sell to third parties often *does* translate into short-term profits.
For these reasons, many companies quickly realize the need for *advertising* better privacy, but aren't necessarily willing to invest what it takes to make these claims true.
Enter privacy washing: <span class="pullquote-source">"Your privacy is&ast; important to us." <small aria-hidden="true">&ast;not!</small></span>
Privacy washing comes with a selection of washer cycles, from malicious trap to deceptive snake oil to perhaps the most common wash: plain negligence.
## Negligence, incompetence, or malevolence
In some other contexts, intentions might matter more. But when it comes to privacy washing, the result is often the same regardless of intent: personal data from users, customers, employees, patients, or even children being leaked and exploited in all sorts of ways.
Whether false claims come from negligence in failing to verify that data protections are properly implemented, incompetence in evaluating whether they are, or malice in trying to trick users into using a service that is actually detrimental to their privacy, harm is done, and sometimes permanently so.
Nonetheless, understanding the different types of privacy washing can help us to evaluate how to detect it, respond to it, and report it.
### Negligence and greed
> *They know what they are doing, but they care more about money*
The most common occurrence of privacy washing likely comes from negligence and greed. One of the biggest drivers for this is that the current market incentivizes it.
Today's software industry is largely inflated by venture capitalist funding, which creates expectations for a substantial return on investment. This funding model often encourages startups to quickly build an app following the [minimum viable product](https://en.wikipedia.org/wiki/Minimum_viable_product) principles, grow its user base as fast as possible, increase its value, and then sell it off for profits.
The problem is, this model is antithetical to implementing good privacy, security, and legal practices from the start. Data privacy cannot only be an afterthought. It must be implemented from the start, before users' data even gets collected.
Many startups fail to see how being thorough with data privacy will benefit them in the long term, and view privacy and security requirements only as a burden slowing down their growth. This mindset can result in perceiving privacy as a simple marketing asset, something businesses talk to users about for reassurance, but without putting any real effort into it beneath the surface.
<div class="admonition quote inline end" markdown>
<p class="admonition-title">Perhaps moving fast and breaking things wasn't such a good idea after all.</p></div>
Outside of privacy, this common startup mindset of playing fast and loose with customers and their safety frequently has **devastating** consequences. One recent and tragic example comes from OceanGate's Titan deep-sea submersible that [infamously imploded](https://globalnews.ca/news/11318623/titan-sub-report-oceangate-culture-critically-flawed/) during an exploration, killing its five passengers in an instant.
The final report blamed a problematic safety culture at OceanGate that was “critically flawed and at the core of these failures were glaring disparities between their written safety protocols and their actual practices.”
<span class="pullquote-source">Perhaps [moving fast and breaking things](move-fast-and-break-things.md) wasn't such a good idea after all.</span>
Alas, similar "glaring disparities" between policies and practices are widespread in the tech industry. While maybe not as dramatic and spectacular as an imploding submersible, [data leaks can also literally kill people](privacy-means-safety.md).
**Data privacy is the "passenger safety protocol" for software**, and it should never be trivialized.
Privacy isn't just "risk management", it is a human right. Analogous to safety protocols, organizations are responsible for ensuring their data protection policies are being followed, and are accurately describing their current practices. Anything less is negligence, at best.
Unfortunately, users (like passengers) often have very few ways to verify false claims about allegedly privacy-respectful features and policies. But this burden should never be on them in the first place.
### Incompetence and willful ignorance
> *They don't know what they are doing, or they just don't want to know*
Partly related to negligence is plain incompetence and willful ignorance. Some organizations might be well-intentioned initially, but either lack the internal expertise to implement proper privacy practices, or conveniently decide not to spend much time researching what their data protection responsibilities are.
For example, most businesses have by now heard of the requirement to present a privacy policy to their users, customers, and even web visitors. Deplorably, in a failed attempt to fulfill this legal obligation, many simply copy someone else's privacy policy and paste it on their own website. Not only is this very unlikely to be compliant with applicable privacy regulations, but it also possibly infringes *copyright* laws.
Do not simply copy-paste another organization's privacy policy and claim it as your own!
It's important to remember that legal requirements for policies aren't the end goal here. **The true requirements are the data protection *practices*.**
The policies *must* accurately describe what the *practices* are in reality. Because no two organizations have the exact same internal practices and third-party vendors, no two organizations should have the exact same privacy policy.
**Copy-paste privacy policies aren't compliance, they're deception.**
A privacy policy that isn't accurately describing an organization's practices is a form of privacy washing. Sadly, a quite commonly used one, like some quick light-wash cycle.
It's worth noting that creating a privacy policy with generative AI will lead to the exact same problems of accuracy and potential infringement of both privacy and copyright laws. This is *not* a smart "shortcut" to try.
While lack of understanding of policies and legal requirements is only one example of how incompetence can become a form of privacy washing, there are infinitely more ways this can happen.
As soon as data is collected by an organization (or by the third-party software it uses), there are almost certainly legal obligations to protect this data, to restrict its collection and retention, and to inform data subjects.
Organizations that do not take this responsibility seriously, or blissfully decide to remain unaware of it, while presenting an empty privacy policy, are effectively doing privacy washing.
Implementing protections and limiting collection cannot be an afterthought. Once data is leaked, there is often nothing that can be done to truly delete it from the wild. The damage caused by leaked data can be tragic and permanent.
Organizations must take this responsibility much more seriously.
### Malevolence and fraud
> *They lie, and they want your data*
Greed and ignorance are common causes of privacy washing, but they can quickly escalate to fraud and ambush.
It's worth noting that a large amount of negligence or incompetence can be indistinguishable from malice, but there are organizations that deliberately lie to users to exploit them, or to trick them into unwillingly revealing sensitive information.
#### Anom, the secret FBI operation
Perhaps one of the most infamous examples of this is the Anom honeypot. Anom was an encrypted phone company promising privacy and security that was in fact part of an undercover operation staged by the American Federal Bureau of Investigation (FBI), [Operation Trojan Shield](https://en.wikipedia.org/wiki/Operation_Trojan_Shield).
Investigative journalist Joseph Cox [reported](https://www.vice.com/en/article/inside-anom-video-operation-trojan-shield-ironside/) in 2021 that Anom advertised their products to criminal groups, then secretly sent a copy of every message on the device to the FBI. It was so secret, even Anom developers didn't know about the operation. They were told their customers were corporations.
A screenshot [shared](https://www.vice.com/en/article/operation-trojan-shield-anom-fbi-secret-phone-network/) by Motherboard shows an Anom slogan: "Anom, Enforce your right to privacy". It's hard to tell how many non-criminal persons (if any) might have accidentally been caught in this FBI net. Although this specific operation seems to have narrowly targeted criminals, who knows whether a similar operation might cast a wider net, inadvertently catching many innocent privacy-conscious users in its path.
#### Navigating VPN providers can be a minefield
Using a [trustworthy](https://www.privacyguides.org/en/vpn/) Virtual Private Network (VPN) service is a good strategy to improve your privacy online. That being said, evaluating trustworthiness is critical here. Using a VPN is only a transfer of trust, from your Internet Service Provider (ISP) to your VPN provider. Your VPN provider will still know your true IP address and location, and *could* technically see all your online activity while using the service, if they decided to look.
[Different VPN services are not equal](https://www.privacyguides.org/videos/2024/12/12/do-you-need-a-vpn/); unfortunately, snake oil products and traps are everywhere in this market. As with anything, do not assume that whoever screams the loudest is the most trustworthy. Loudness here only means more investment in advertising.
For example, take the interesting case of [Kape Technologies](https://en.wikipedia.org/wiki/Kape_Technologies), a billionaire-run company formerly known as Crossrider. This corporation has now acquired four different VPN services: ExpressVPN, CyberGhost, Private Internet Access, and Zenmate. This isn't that suspicious in itself, but Kape Technologies has also [acquired](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/) a number of VPN *review* websites, suspiciously always ranking its own VPN services at the top. This is a blatant conflict of interest, to say the least.
Sadly, on the VPN market — [estimated](https://www.grandviewresearch.com/industry-analysis/virtual-private-network-market) at $41.33 billion USD in 2022 — what is called a ["review" is often just *advertising*](the-trouble-with-vpn-and-privacy-review-sites.md).
Moreover, many free VPN providers [break their privacy promises](https://iapp.org/news/a/privacy-violations-by-free-vpn-service-providers) regarding users' data. In 2013, Facebook [bought](https://gizmodo.com/do-not-i-repeat-do-not-download-onavo-facebook-s-vam-1822937825) the free VPN provider Onavo, and included it in a Facebook feature deceptively labeled "Protect". As is now standard behavior for Facebook, the social media juggernaut actually collected and analyzed the data from Onavo users. This allowed Facebook to monitor the online habits of its users even when they weren't using the Facebook app. This is very much the opposite of data privacy, and of any implied promises to "Protect".
Then there's the case of Hotspot Shield VPN, accused in 2017 of [breaking](https://www.zdnet.com/article/privacy-group-accuses-hotspot-shield-of-snooping-on-web-traffic/) its privacy promises by the Center for Democracy & Technology, a digital rights nonprofit organization. While promising "anonymous browsing", Hotspot Shield allegedly deployed persistent cookies and used more than five different third-party tracking libraries. The parent company AnchorFree denied the accusations, but even *if* it wasn't the case for AnchorFree, how tempting would it be for a business with an ad-based revenue model to utilize the valuable data it collects for more of this revenue? And indeed, many free VPN services do [monetize](https://thebestvpn.com/how-free-vpns-sell-your-data/) users' data.
Worst of all are the *fake*, free VPN services. Like stepping on a landmine, criminals are [luring users](https://www.techradar.com/pro/criminals-are-using-a-dangerous-fake-free-vpn-to-spread-malware-via-github-heres-how-to-stay-safe) looking for a free VPN service and tricking them into downloading malware on their devices. While this goes beyond privacy washing, it's still a piece of software actively harming users and deceptively gaining their trust with the false promise of better privacy. Wherever privacy washing is being normalized by greedy or lazy organizations, criminals like this flourish.
#### Using compliance to appear legitimate
Another fraudulent case of privacy washing is organizations using false claims related to privacy law compliance to appear more legitimate.
Earlier this year, the digital rights organization Electronic Frontier Foundation (EFF) [called](https://www.eff.org/deeplinks/2025/01/eff-state-ags-time-investigate-crisis-pregnancy-centers) for an investigation into deceptive anti-abortion militant organizations (also called "[fake clinics](https://www.plannedparenthood.org/blog/what-are-crisis-pregnancy-centers)") in eight different US states.
These fake clinics were claiming to be bound by the Health Insurance Portability and Accountability Act (HIPAA) in order to appear like genuine health organizations. HIPAA is an American federal privacy law that was established in 1996 to protect sensitive health information in the United States.
Not only are many of these fake clinics **not** complying with HIPAA, but they collect extremely sensitive information without being bound by HIPAA in the first place, because they *aren't* licensed healthcare providers. Worse, some have [leaked this data](https://jessica.substack.com/p/exclusive-health-data-breach-at-americas) in all sorts of ways.
Thanks to the EFF's work, some of those fake clinics have now [quietly removed](https://www.eff.org/deeplinks/2025/08/fake-clinics-quietly-edit-their-websites-after-being-called-out-hipaa-claims) misleading language from their websites. But sadly, this small victory doesn't make these organizations any more trustworthy, it only slightly reduces the extent of their privacy washing.
### Deception and privacy-masquerading
> *They talk privacy, but their words are empty*
Perhaps the most obvious and pernicious examples of privacy washing are organizations that are clearly building products and features harming people's privacy, while using deceptive, pro-privacy language to disguise themselves as privacy-respectful organizations. There are likely more occurrences of this than there are characters in this article's text.
Buzzwords like "military-grade encryption", "privacy-enhancing", and the reassuring classic "we never share your data with anyone" get thrown around like candies falling off a privacy-preserving piñata.
But **words are meaningless when they are deceitful**, and these candies quickly turn bitter once we learn the truth.
#### Google, the advertising company
An infamous recent example of this is Google, which [pushed](https://proton.me/blog/privacy-washing-2023) a new Chrome feature for targeted advertising in 2023 and dared to call it "Enhanced Ad Privacy."
This [enabled by default](https://www.eff.org/deeplinks/2023/09/how-turn-googles-privacy-sandbox-ad-tracking-and-why-you-should) technology allows Google to target users with ads customized around their browsing history. It's really difficult to see where the "privacy" is supposed to be here, even when squinting very hard.
Of course, Google, an advertising company, has long mastered the art of misleading language around data privacy to reassure its valuable natural resource, the user.
<div class="admonition quote inline end" markdown>
<p class="admonition-title">Google continued to collect personally identifiable user data from their extensive server-side tracking network.</p></div>
Everyone is likely familiar with Chrome's infamously deceptive "Incognito mode". In reality, being "Incognito" stopped at your own device, where browsing history would not be kept, while <span class="pullquote-source">Google continued to collect personally identifiable user data from their extensive server-side tracking network.</span> Understandably, disgruntled users filed a [class action lawsuit](https://www.theverge.com/2023/8/7/23823878/google-privacy-tracking-incognito-mode-lawsuit-summary-judgment-denied) to get reparations for this deception. In 2023, Google agreed [to settle](https://www.bbc.co.uk/news/business-67838384) this $5 billion lawsuit.
Despite claims of "privacy" in their advertising to users, Google, like many other big tech giants, has in reality spent millions [lobbying against](https://www.politico.com/news/2021/10/22/google-kids-privacy-protections-tech-giants-516834) better privacy protections for years.
#### World App, the biometric data collector
Similarly, Sam Altman's World project loves to throw privacy-preserving language around to reassure prospective users and investors. But despite all its claims, data protection authorities around the world have been [investigating, fining, and even banning](sam-altman-wants-your-eyeball.md/#privacy-legislators-arent-on-board) its operations.
The World App (developed by the World project) is an "everything app" providing users with a unique identifier called a World ID. This World ID, which grants various perks and accesses while using the World App, is earned by providing biometric data to the organization, in the form of an iris scan.
Providing an iris scan to a for-profit corporation with little oversight will rightfully scare away many potential users. This is why the company has evidently invested heavily in branding itself as a "privacy-preserving" technology, claims that are [questionable](sam-altman-wants-your-eyeball.md/#how-privacy-preserving-is-it) to say the least.
Despite catchy declarations such as "privacy by default and by design approach", the World project has accumulated an impressive history of privacy violations, and has multiplied contradictory and misleading statements in its own documentation.
There are some stains that even a powerful, billionaire-backed, privacy wash just cannot clean off.
#### Flo, sharing your period data with Facebook
In 2019, the Wall Street Journal [reported](https://therecord.media/meta-flo-trial-period-tracking-data-sharing) that the period tracking application Flo had been sharing sensitive health data with Facebook (Meta), despite its promises of privacy.
The app, developed by Flo Health, repeatedly reassured users that the very sensitive information they shared with the app would remain private and would not be shared with any third parties without explicit consent.
Despite this pledge, the Flo app did share sensitive personal data with third parties, via the software development kits incorporated into the app.
This extreme negligence (or malevolence) has likely harmed some users in unbelievable ways. Considering the state of abortion rights in the United States at the moment, it's not an exaggeration to say this data leak could [severely endanger](privacy-means-safety.md/#healthcare-seekers) the Flo app's users, even putting them at risk of imprisonment.
In response, users have filed several [class action lawsuits](https://www.hipaajournal.com/jury-trial-meta-flo-health-consumer-privacy/) against Flo Health, Facebook, Google, AppsFlyer, and Flurry.
Trivializing health data privacy while promising confidentiality to gain users' trust should never be normalized. This is a very serious infringement of users' rights.
## Remain skeptical, revoke your trust when needed
Regardless of the promises made to safeguard our personal data, sad as it is to say, we can never let our guard down.
Privacy washing isn't a trend that is about to fade away; it's quite likely that it will even worsen in the years to come. We must prepare accordingly.
The only way to improve our safety (and our privacy) is to remain vigilant at all times, and grant our trust only sparingly. We also need to stay prepared to revoke this trust at any time, when we learn new information that justifies it.
Always remain skeptical when you encounter privacy policies that seem suspiciously generic; official-looking badges on websites advertising unsupported claims of "GDPR compliance"; reviews that lack supporting evidence and are of doubtful independence; and overuse of buzzwords like "military-grade encryption", "privacy-enhancing", "fully encrypted", and (more recently) "AI-powered".
It's not easy to navigate the perilous waters of supposedly privacy-respectful software. And it's even worse in an age where AI-spawned websites and articles can create the illusion of trustworthiness with only a few clicks and prompts.
Learning [how to spot the red flags, and the green(ish) flags](red-and-green-privacy-flags.md), to protect ourselves from the deceptive manipulation of privacy washing, is an important skill to develop in order to make better-informed choices.

---
date:
created: 2025-09-03T19:30:00Z
categories:
- Tutorials
authors:
- em
description:
Being able to distinguish facts from marketing lies is an essential skill in today's world. Despite all the privacy washing, there are clues we can look for to help.
schema_type: AnalysisNewsArticle
preview:
cover: blog/assets/images/red-and-green-privacy-flags/dontcare-cover.webp
---
# &ldquo;We [Don't] Care About Your Privacy&rdquo;
![Filtered photo of a metal container left on the street, with on it the painted sentence "We've updated our privacy policy." with three faded happy face icons around it. On and around the container are icons of hidden red flags.](../assets/images/red-and-green-privacy-flags/dontcare-cover.webp)
<small aria-hidden="true">Illustration: Em / Privacy Guides | Photo: Lilartsy / Unsplash</small>
They all claim "Your privacy is important to us." How can we know if that's true? With privacy washing being normalized by big tech and startups alike, it becomes increasingly difficult to evaluate who we can trust with our personal data. Fortunately, there are red (and green) flags we can look for to help us.<!-- more -->
If you haven't heard this term before, [privacy washing](privacy-washing-is-a-dirty-business.md) is the practice of misleadingly, or fraudulently, presenting a product, service, or organization as being trustworthy for data privacy, when in fact it isn't.
Privacy washing isn't a new trend, but it has become more prominent in recent years as a strategy to gain trust from progressively more suspicious prospective customers. Unless politicians and regulators start getting much more serious and severe about protecting our privacy rights, this trend is likely to only get worse.
In this article, we will examine common indicators of privacy washing, and the "red" and "green" flags we should look for to make better-informed decisions and avoid deception.
## Spotting the red flags
<div class="admonition quote inline end" markdown>
<p class="admonition-title">Marketing claims can be separated from facts by an abysmally large pit of lies</p></div>
It's important to keep in mind that it's not the most visible product that's necessarily the best. More visibility only means more marketing. <span class="pullquote-source">Marketing claims can be separated from facts by an abysmally large pit of lies</span>.
Being able to distinguish between facts and marketing lies is an important skill to develop, doubly so on the internet. After all, it's difficult to find a single surface of the internet that isn't covered with ads, whether in plain sight or lurking in the shadows, disguised as innocent comments and enthusiastic reviews.
So what can we do about it?
There are some signs that should be considered when evaluating a product to determine its trustworthiness. It's unfair this burden falls on us, but sadly, until we get better regulations and institutions to protect us, we will have to protect ourselves.
It's also important to remember that evaluating trustworthiness isn't binary, and isn't permanent. There is always at least some risk, no matter how low, and trust should always be revoked when new information justifies it.
<div class="admonition info" markdown>
<p class="admonition-title">Examine flags collectively, and in context</p>
It's important to note that each red flag isn't necessarily a sign of untrustworthiness on its own (and the same is true for green flags, in reverse). But the more red flags you spot, the more suspicious you should get.
Taken into account *together*, these warning signs can help us estimate when it's probably reasonably safe to trust (low risk), when we should revoke our trust, or when we should refrain from trusting a product or organization entirely (high risk).
</div>
### :triangular_flag_on_post: Conflict of interest
A conflict of interest is one of the biggest red flags to look for. It comes in many shapes: sponsorships, affiliate links, parent companies, donations, employment, personal relationships, and so on and so forth.
#### Content sponsorships and affiliate links
Online influencers and educators regularly receive offers to "monetize their audience with ease" if they agree to overtly or subtly advertise products within their content. If this isn't explicitly presented as advertising, then there is obviously a strong conflict of interest. The same is true for affiliate links, where creators receive a sum of money each time a visitor clicks on a link or purchases a product through it.
It's understandable that content creators are seeking sources of revenue to continue doing their work. This isn't an easy job. But a trustworthy content creator should always **disclose** any potential conflicts of interest related to their content, and present paid advertising explicitly as paid advertising.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Before trusting content online, try to examine what the sources of revenue are for this content. Look for affiliate links and sponsorships, and try to evaluate if what you find might have influenced the impartiality of the content.
</div>
#### Parent companies
This one is harder to examine, but is extremely important. In today's corporate landscape, it's not rare to find conglomerates of corporations with a trail of ownership so long it's sometimes impossible to find the head. Nevertheless, investigating which company owns which is fundamental to detect conflicts of interest.
For example, the corporation [Kape Technologies](https://en.wikipedia.org/wiki/Teddy_Sagi#Kape_Technologies) owns both VPN providers (ExpressVPN, CyberGhost, Private Internet Access, and Zenmate) and websites publishing [*VPN reviews*](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/). Suspiciously, its own VPN providers always get ranked at the top on its own review websites. Even if there were no explicit directive for the websites to do this, which review publisher would dare to give a negative ranking to a product owned by its parent company, the one keeping it alive? This is a direct and obvious conflict of interest.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Look at the *Terms of Service* and *Privacy Policy* (or *Privacy Notice*) for declarations related to a parent company. This is often stated there. You can also examine an organization's *About* page, Wikipedia page, or even the official government corporate registries to find out if anyone else owns an organization.
</div>
#### Donations, event sponsorships, and other revenues
When money is involved, there is always a potential for conflict of interest. If an organization receives a substantial donation, grant, or loan from another, it will be difficult to remain impartial about it. Few would dare to talk negatively about a large donor.
This isn't necessarily a red flag in every situation, of course. For example, a receiving organization could be in a position where the donor's values are aligned with its own, or where impartiality isn't required. Nevertheless, it's something important to consider.
In 2016, developer and activist Aral Balkan [wrote](https://ar.al/notes/why-im-not-speaking-at-cpdp/) about how he refused an invitation to speak at a panel on Surveillance Capitalism at the [Computers, Privacy, & Data Protection Conference](http://www.cpdpconferences.org) (CPDP). The conference had accepted sponsorship from an organization completely antithetical to its stated values: [Palantir](https://www.independent.co.uk/news/world/americas/us-politics/trump-doge-palantir-data-immigration-b2761096.html).
Balkan wrote: "The sponsorship of privacy and human rights conferences by corporations that erode our privacy and human rights is a clear conflict of interests that we must challenge."
<div class="admonition quote inline end" markdown>
<p class="admonition-title">How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?</p></div>
This is a great example of how sponsors can severely compromise not only the impartiality of an organization, but also its credibility and its values. How could the talks being put forward at such a conference be selected without bias? <span class="pullquote-source">How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?</span>
It's worth noting that this year's CPDP 2025 sponsors [included](https://www.cpdpconferences.org/sponsors-partners) Google, Microsoft, TikTok, and Uber.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Examine who sponsors events and who donates to organizations. Try to evaluate if an organization or event received money from sources that could be in contradiction with its values. Does this compromise its credibility? If a sponsor or donor has conflicting values, what benefit would there be for the sponsor supporting this event or organization?
</div>
#### Employment and relationships
Finally, another important type of conflict of interest to keep in mind is the relationship between the individuals producing content and the companies or products they are reporting on.
For example, if a content creator is working or previously worked for an organization, and the content requires impartiality, this is a potential conflict of interest that should be openly disclosed.
The same can be true if this person is in a professional or personal relationship with people involved with the product. This can be difficult to detect, of course, and is not categorically a sign of bias, but it's worth paying attention to in our evaluations.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Look for disclaimers related to conflict of interest. Research the history of an organization to gain a better understanding of the people involved. Wikipedia can be a valuable resource for this.
</div>
### :triangular_flag_on_post: Checkbox compliance and copy-paste policies
Regrettably, many organizations have no intention whatsoever of genuinely implementing privacy-respectful practices, and are simply trying to get rid of these "pesky privacy regulation requirements" as cheaply and quickly as possible.
They treat privacy law compliance like an annoying checklist of tasks. They think they can complete this list by doing the bare *cosmetic* minimum, so that it will all *look* compliant (of course, it is not).
A good clue that this mindset is at work in an organization is a very generic privacy policy and terms of service, often simply copy-pasted from another website or AI-generated (which is much the same thing).
Not only is this *extremely unlikely* to truly fulfill the requirements for privacy compliance, it also almost certainly infringes on *copyright* laws.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
If you find few details in a privacy policy that are specific to the organization, try copying one of its paragraphs or a long sentence into a search engine (using quotation marks around it to find exact matches). This will help detect whether other websites are using the same policy.
Some might be using legitimate templates of course, but even legally usable policy templates need to be customized heavily to be compliant. Sadly, many organizations simply copy-paste material from others without permission, or use generative AI tools that do the same.
If the whole policy is copied without customization, it's very unlikely to describe anything true.
</div>
### :triangular_flag_on_post: Meaningless privacy compliance badges
Many businesses and startups have started to proudly display privacy law "[compliance badges](https://www.shutterstock.com/search/compliance-badge)" on their websites, to reassure potential clients and customers.
While it can indeed be reassuring at first glance to see "GDPR Compliant!", "CCPA Privacy Approved", and other deceitful designs, there is no central authority systematically verifying these claims. At this time, anyone could decide to claim they are "GDPR Compliant" and adorn their website with a pretty badge.
Moreover, if the claim isn't true, it is of course fraudulent and likely breaks many laws. But some businesses bet on the assumption that no one will verify or report it, or that data protection authorities simply have better things to do.
While most privacy regulations adopt principles similar to the European General Data Protection Regulation (GDPR) [principle of accountability](https://commission.europa.eu/law/law-topic/data-protection/rules-business-and-organisations/obligations/how-can-i-demonstrate-my-organisation-compliant-gdpr_en) (where organizations are responsible for compliance and for demonstrating compliance), organizations' assertions are rarely challenged or audited. Because most of the time there isn't anyone verifying compliance unless there's an individual complaint, organizations have grown increasingly fearless with false claims of compliance.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Never trust a claim of privacy compliance at face value, especially if it comes in the shape of a pretty website badge.
Examine organizations' privacy policies, contact them and ask questions, look for independent reviews, investigate to see if an organization has been reported before. Never trust a first-party source to tell you how great and compliant the first-party is.
</div>
### :triangular_flag_on_post: Fake reviews
Fake reviews are a growing problem on the internet. And this was only aggravated by the arrival of generative AI. There are so many review websites that are simply advertising in disguise. Some fake reviews are [generated by AI](https://apnews.com/article/fake-online-reviews-generative-ai-40f5000346b1894a778434ba295a0496), some are paid for or [influenced by sponsorships and affiliate links](the-trouble-with-vpn-and-privacy-review-sites.md), some are in [conflict of interest](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/) from parent companies, and many are biased in other ways. Trusting an online review today feels like trying to find the single strand of true grass through an enormous plastic haystack.
Genuine reviews are (were?) usually a good way to get a second opinion while shopping online and offline. Fake reviews pollute this verification mechanism by duping us into believing something comes from an independent third party, when it doesn't.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Train yourself to spot fake reviews. There are [many signs](https://www.bbb.org/all/spot-a-scam/how-to-spot-a-fake-review) that can help with this, such as language that suspiciously uses the full, correct product and feature brand names every time, reviewers who published an unnatural quantity of reviews in a short period of time, excessively positive reviews, negative reviews talking about how great this *other* brand is, etc. Make sure to look for potential conflicts of interest as well.
</div>
### :triangular_flag_on_post: Fake AI-generated content
Sadly, the internet has been infected by a new plague in recent years: AI-generated content. This was mentioned before, but truly deserves its own red flag.
Besides AI-generated reviews, it's important to know that there are now also many articles, social media posts, and even entire websites that are completely AI-generated, and doubly fake. This affliction makes it even harder for readers to find genuine sources of reliable information online. [Learning to recognize this fake content](https://www.cnn.com/interactive/2023/07/business/detect-ai-text-human-writing/) is now an internet survival skill.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
If you find a blog where the same author publishes 5 articles every single day, be suspicious. Look for publication dates; if they are inhumanly close to each other, this can be a sign of AI-generated content.
When reading an article, AI-generated text will often use very generic sentences; you will rarely find the colorful writing style that is unique to an author. AI writing is generally bland, with no personality shining through. You might also notice the writing feels circular. It will seem like it's not really saying anything specific, except for that one thing, which is repeated over and over.
</div>
### :triangular_flag_on_post: Excessive self-references
When writing an article, review, or a product description, writers often use text links to add sources of information to support their statements, or to provide additional resources to readers.
When **all** the text links in an article point to the same source, you should grow suspicious. If all the seemingly external links only lead to material created by the original source, this can give the impression of independent supporting evidence, when in fact there isn't any.
Of course, organizations will sometimes refer back to their own material to share more of what they did with you (we certainly do!), but if an article or review *only* uses self-references, and those references in turn only use self-references, this could be a red flag.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Even if you do not click on links, at least hover over them to see where they lead. Usually, trustworthy sources will have at least a few links pointing to *external* third-party websites. A diversity of supporting resources is important when conducting impartial research, and should be demonstrated whenever relevant.
</div>
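For a quick automated version of this check, a short script can list which domains a page links to, making it easy to see whether an article cites any external sources at all. This is a rough sketch using only the Python standard library; the sample HTML and domain below are placeholders, not a real site.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def link_domains(html, page_domain):
    """Split the domains an article links to into self-references and external sources."""
    parser = LinkCollector()
    parser.feed(html)
    # Keep only absolute links (relative links are self-references by definition)
    domains = {urlparse(h).netloc for h in parser.hrefs if urlparse(h).netloc}
    internal = {d for d in domains if d.endswith(page_domain)}
    return internal, domains - internal

# Placeholder page: one self-reference, one external link
html = '<a href="https://example.com/post">self</a> <a href="https://archive.org/x">ext</a>'
internal, external = link_domains(html, "example.com")
print("self-references:", internal)
print("external sources:", external)
```

If `external` comes back empty for a long review or article, that is exactly the self-reference pattern described above.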
### :triangular_flag_on_post: Deceptive designs
Deceptive design can be difficult to spot. Sometimes it's obvious, like a cookie banner with a ridiculously small <small>"reject all"</small> button, or an opt-out option hidden under twenty layers of menu.
Most of the time, however, deceptive design is well-planned to psychologically manipulate us into picking the option most favorable to the company, at the expense of our privacy. The Office of the Privacy Commissioner of Canada has produced an informative [web page](https://www.priv.gc.ca/en/privacy-topics/technology/online-privacy-tracking-cookies/online-privacy/deceptive-design/gd_dd-ind/) to help us better recognize deceptive design.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Favor tools and services that are built for privacy from the ground up, and that always default to privacy first. Train yourself to spot deceptive patterns, and be persistent in choosing the most privacy-protective option.
Don't be afraid to [say no](you-can-say-no.md), to reject options and products, and to also report them when deceptive design becomes fraudulent or infringes privacy laws.
</div>
### :triangular_flag_on_post: Buzzword language
Be suspicious of buzzword language, especially when it becomes excessive or lacks any supporting evidence. **Remember that buzzwords aren't a promise, but only marketing to get your attention.** These words don't mean anything on their own.
Expressions like "military-grade encryption" are usually designed to inspire trust, but there is [no such thing](https://www.howtogeek.com/445096/what-does-military-grade-encryption-mean/), and it grants no better privacy. Most military organizations likely use industry-standard encryption built from solid and tested cryptographic algorithms, like any trustworthy organization and privacy-preserving tool does.
Newer promises like "AI-powered" are completely empty, if not *scary*. Thankfully, many "AI-powered" apps aren't really AI-powered, and this is a good thing because "AI" is more often [a danger to your privacy](https://www.sciencenewstoday.org/the-dark-side-of-ai-bias-surveillance-and-control), and not an enhancement at all.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Remain skeptical of expressions like "privacy-enhancing", "privacy-first approach", "fully encrypted", or "fully compliant" when these claims aren't supported with evidence. "Fully encrypted" means nothing if the encryption algorithm is weak, or if the company has access to your encryption keys.
When you see claims of "military-grade encryption", ask which cryptographic algorithms are used, and how encryption is implemented. Look for evidence and detailed information on technological claims. Never accept vague promises as facts.
</div>
### :triangular_flag_on_post: Unverifiable and unrealistic promises
Along the same lines, many businesses will be happy to promise you the moon. But then, they become reluctant to explain how they will get you the moon, how they will manage to give the moon to multiple customers at once, and what will happen to the planet once they've transported the moon away from its orbit to bring it back to you on Earth... Maybe getting the moon isn't such a good promise after all.
<div class="admonition quote inline end" markdown>
<p class="admonition-title">companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves</p></div>
Similarly, <span class="pullquote-source">companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves</span>.
No software product is 100% secure and/or 100% private. Promises like this are unrealistic, and (fortunately for those companies) often also *unverifiable*. But an unverifiable claim shouldn't default to a trustworthy claim, quite the opposite. Trust must be earned. If a product cannot demonstrate how their claims are true, then we must remain skeptical.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
As with buzzwords and compliance claims, never trust at face value. If there is no way for you to verify a claim, remain skeptical and aware that the promise could be empty.
Be especially suspicious of organizations repeating exaggerated guarantees such as "100% secure". Organizations that are knowledgeable about security and privacy will usually refrain from such binary statements, and tend to talk about risk reduction in nuanced terms like "more secure" or "more private".
</div>
### :triangular_flag_on_post: Flawed or absent process for data deletion
Examining an organization's processes for data deletion can reveal a lot about their privacy practices and expertise. Organizations that are knowledgeable about privacy rights will usually be prepared to respond to data deletion requests, and will already have a process in place, one that [doesn't require providing more information](queer-dating-apps-beware-who-you-trust.md/#they-can-make-deleting-data-difficult) than they already have.
Be especially worried if:
- [ ] You don't find any mentions of data deletion in their privacy policy.
- [ ] From your account's settings or app, you cannot find any option to delete your account and data.
- [ ] The account and data deletion process uses vague terms that make it unclear if your data will be truly deleted.
- [ ] You cannot find an email address to contact a privacy officer in their privacy policy.
- [ ] The email listed in their privacy policy isn't an address dedicated to privacy.
- [ ] You emailed the address listed but didn't get any reply after two weeks.
- [ ] Their deletion process requires filling out a form that demands more information than they already have on you, or uses a privacy-invasive third party like Google Forms.
- [ ] They argue with you when you ask for legitimate deletion.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
If this isn't already explicitly explained in their policies (or if you do not trust their description), find the privacy contact for an organization and email them *before* using their products or services, to ask about their data deletion practices.
Ask in advance which information will be required from you in order to delete your data. Also ask if they keep any data afterward, and (if they do) what data they keep. Once data is shared, it could be much harder to deal with. It's best to verify data deletion processes *before* trusting an organization with our data.
</div>
### :triangular_flag_on_post: False reassurances
The goal of privacy washing is to reassure worried clients, consumers, users, patients, and investors into using the organization's products or services. But making us *feel* more secure doesn't always mean that we are.
#### Privacy theaters
You might have heard the term "security theater" already, but there's also "[privacy theater](https://slate.com/technology/2021/12/facebook-twitter-big-tech-privacy-sham.html)". Many large tech organizations have mastered this art for decades now. In response to criticisms about their dubious privacy practices, companies like Facebook and Google love to add seemingly "privacy-preserving" options to their software's settings, to give people the impression it's possible to use their products while preserving their privacy. But alas, it is not.
Unfortunately, no matter how much you "harden" your Facebook or Google account for privacy, these corporations will keep tracking everything you do on and off their platforms. Yes, enabling these options *might* very slightly reduce exposure for *some* of your data (and you should enable them if you cannot leave these platforms). However, Facebook and Google will still collect enough data on you to make them billions in profits each year, otherwise they wouldn't implement these options at all.
#### Misleading protections
The same can be said for applications that have built a reputation on a supposedly privacy-first approach like [Telegram](https://cybersecuritycue.com/telegram-data-sharing-after-ceo-arrest/) and [WhatsApp](https://insidetelecom.com/whatsapp-security-risk-alert-over-privacy-concerns/). In fact, the protections these apps offer are only partial, often poorly explained to users, and the apps still collect a large amount of data and/or metadata.
#### When deletion doesn't mean deletion
In other cases, false reassurance comes in the form of supposedly deleted data that isn't truly deleted. In 2019, Global News [reported](https://globalnews.ca/news/5463630/amazon-alexa-keeps-data-deleted-privacy/) on Amazon's Alexa virtual assistant speaker that didn't always delete voice-recorded data as promised. Google was also found [guilty](https://www.cnet.com/tech/services-and-software/google-oops-did-not-delete-street-view-data-as-promised/) of this, even after receiving an order from the UK's Information Commissioner's Office.
This can also happen with cloud storage services that display an option to "delete" a file, when in fact the file is [simply hidden](https://www.consumersearch.com/technology/cloud-storage-privacy-concerns-learn-permanently-delete-data) from the interface, while remaining available in a bin directory or from version control.
How many unaware organizations might have inadvertently (or maliciously) kept deleted data by misusing their storage service and version control system? Of course, if a copy of the data is kept in backups or a versioning system, then it's **not** fully deleted, and doesn't legally fulfill a data deletion requirement.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Do not simply trust a "privacy" or "opt-out" option. Look at the overall practices of an organization to establish trust. Privacy features have no value at all if we cannot trust the organization that implemented them.
Investigate an organization's history of data breaches and how they responded to them. Has the organization been repeatedly fined by data protection authorities? Do not hesitate to ask the organization's privacy officer questions about their practices. And look for independent reviews of the organization.
</div>
### :triangular_flag_on_post: New and untested technologies
Many software startups brag about how revolutionary their NewTechnology™ is. Some even dare to brag about a "unique" and "game-changing" novel encryption algorithm. You should not feel excited by this; you should feel *terrified*.
For example, any startup serious about security and privacy will know that **you should never be ["rolling your own crypto"](https://www.infosecinstitute.com/resources/cryptography/the-dangers-of-rolling-your-own-encryption/)**.
Cryptography is a complex discipline, and developing a robust encryption algorithm takes a lot of time and transparent testing, usually with the help of an entire community of experts. Some beginners might think they've had the idea of the century, but until their algorithm has been rigorously tested by hundreds of experts, this is an unfounded claim.
The reason most software uses the same few cryptographic algorithms for encryption, and usually follows strict protocols to implement them, is that this isn't an easy task, and the slightest mistake could render the encryption completely useless. The same can be true for other types of technology as well.
Novel technologies might sound more exciting, but *proven* and *tested* technologies are usually much more reliable when it comes to privacy, and especially when it comes to encryption.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
If a company brags about its new technology, investigate what information they have made available about it. Look for a document called a *White Paper*, which should describe in technical details how the technology works.
If the code is open source, look at the project's page to see how many people have worked on it, who is involved, for how long, and so on.
More importantly, look for independent audits from trustworthy experts. Read the reports and verify if the organization's claims are supported by professionals in the field.
</div>
### :triangular_flag_on_post: Criticism from experts
<div class="admonition quote inline end" markdown>
<p class="admonition-title">if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag</p></div>
No matter how much an organization or product claims to be "privacy-first", <span class="pullquote-source">if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag</span>.
If a company has been [criticized by privacy commissioners](sam-altman-wants-your-eyeball.md/#privacy-legislators-arent-on-board), data protection authorities, privacy professionals, and consumer associations, especially if this has happened repeatedly, you should be *very* suspicious.
Sometimes, criticized corporations will use misleading language like "we are currently working with the commissioner". This *isn't* a good sign.
The marketing department will try to spin any authority audits into something that sounds favorable to the corporation, but this is only privacy washing. They would not be "working with" the privacy commissioner if they hadn't been forced to in the first place. And **they wouldn't have been forced to if they truly had privacy-respectful practices**.
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>
Use a search engine to look for related news using keywords such as the company's name with "data breach", "fined", or "privacy".
Check the product's or corporation's Wikipedia page; sometimes previous incidents and controversies are referenced there. Follow trustworthy sources of privacy and security news to stay informed about reported data leaks and experts raising the alarm.
</div>
## Looking for the green(ish) flags
Now that we have discussed some red flags to help us know when we should be careful, let's examine the signs that *can* be indicators of trustworthiness.
As with red flags, green flags should always be taken in context and considered together. One, or even a few, green (or greenish) flags aren't on their own a guarantee that an organization is trustworthy. Always remain vigilant, and be ready to revoke your trust at any time if new information warrants it.
### :custom-green-flag: Independent reviews
Independent reviews from trustworthy sources can be a valuable resource to help determine whether a product is reliable. This is never a guarantee, of course: humans (even experts) can also make mistakes (fewer than AI, but still) and aren't immune to lies.
However, an impartial review conducted by an expert in the field has the benefit of someone who has likely spent many hours investigating the topic, something you might understandably not always have the time to do yourself. But be careful to first evaluate whether this is a genuine, unbiased assessment, or simply marketing content disguised as one.
### :custom-green-flag: Independent audits
Similarly, independent audits from credible organizations are very useful to assess a product's claims. Make sure the company conducting the audit is reputable, impartial, and that you can find a copy of the audit's report they produced, ideally from a source that *isn't* the audited company's website (for example, the auditing organization might [provide](https://cure53.de/#publications) access to it transparently).
### :custom-green-flag: Transparency
Transparency goes a long way toward earning trust, and publicly available source code goes a long way toward transparency. If a piece of software publishes its code for anyone to see, this already provides a significant level of transparency above any proprietary code.
Open source code is never a guarantee of security and privacy, but it makes it much easier to verify an organization's assertions. This is almost impossible when code is proprietary: because no one outside the organization can examine the code, the organization must be trusted entirely at its word. Favor products with code that is transparently available whenever possible.
### :custom-green-flag: Verifiable claims
If you can easily verify an organization's claims, this is a good sign. For example, if privacy practices are explicitly detailed in policies (and match the observed behaviors), if source code is open and easy to inspect, if independent audits have confirmed the organization's claims, and if the organization is consistent with its privacy practices (in private as much as in public), this all helps to establish trust.
### :custom-green-flag: Well-defined policies
Trustworthy organizations should always have well-defined, unique, and easy-to-read privacy policies and terms of service. The conditions within them should also be fair. **You shouldn't have to sell your soul to 1442 marketing partners just to use a service or visit a website.**
Read an organization's privacy policy (or privacy notice), and make sure it includes:
- [x] Language unique to this organization (no copy-paste policy).
- [x] Disclosure of any parent companies owning this organization (if any).
- [x] A dedicated email address to contact for privacy-related questions and requests.
- [x] Detailed information on what data is collected for each activity. For example, the data collected when you use an app or are employed by an organization shouldn't be bundled together indistinctly with the data collected when you simply visit the website.
- [x] Clear limits on data retention periods (when the data will be automatically deleted).
- [x] Clear description of the process to follow in order to delete, access, or correct your personal data.
- [x] A list of third-party vendors used by the organization to process your information.
- [x] Evidence of accountability. The organization should demonstrate accountability for the data it collects, and shouldn't simply transfer this responsibility to the processors it uses.
### :custom-green-flag: Availability
Verify availability. Who will you contact if a problem arises with your account, software, or data? Will you be ignored by an AI chatbot just repeating what you've already read on the company's website? Will you be able to reach out to a competent human?
If you contact an organization at its listed privacy-dedicated email address to ask a question and receive a thoughtful, non-AI-generated reply within a couple of weeks, this can be a good sign. If you can easily find a privacy officer's email address, the company's phone number, and the location where the organization is based, these can also be encouraging signs.
### :custom-green-flag: Clear funding model
If a *free* service is provided by a *for-profit* corporation, you should investigate further. The old adage that if you do not pay for a product, you are the product is sadly often true in tech, and doubly so for Big Tech.
Before using a new service, try to find what the funding model is. Maybe it's a free service run by volunteers? Maybe they have a paid tier for businesses, but remain free for individual users? Maybe they survive and thrive on donations? Or maybe everyone does pay for it (with money, not data).
If the service is free and you can't find any details on how it's financed, this could be a red flag that your data is being monetized. But if the funding model is transparent, fair, and ethical, this *can* be a green flag.
### :custom-green-flag: Reputation history
Some errors are forgivable, but others are too big to let go. Look at an organization's track record to help evaluate its reputation over time. Check whether there were any security or privacy incidents or expert criticism, and how the organization responded.
If you find an organization that has always stuck to its values (integrity), is still run by the same core people in recent years (stability), seems to have a generally good reputation with others (reputability), and had few (or no) incidents in the past (reliability), this *can* be a green flag.
### :custom-green-flag: Expert advice
Seek expert advice before using a new product or service. Look online for reliable and independent sources of [recommendations](https://www.privacyguides.org/en/tools/) (like Privacy Guides!), and read thoroughly to determine whether the description fits your privacy needs. No tool is perfect at protecting your privacy, but experts will warn you about a tool's limitations and downsides.
There's also added value in community consensus. If a piece of software is repeatedly recommended by multiple experts (not websites or influencers, *experts*), then this *can* be a green flag that this tool or service is generally trusted by the community (at this point in time).
## Take a stand for better privacy
Trying to evaluate who is worthy of our trust and who isn't is an increasingly difficult task. While this burden shouldn't fall on us, there are unfortunately too few institutional protections we can rely on at the moment.
Until our governments finally prioritize the protection of human rights and privacy rights over corporate interests, we will have to protect ourselves. But this isn't limited to self-protection: our individual choices also matter collectively.
Each time we dig in to thoroughly investigate a malicious organization and expose its privacy washing, we contribute to improving safety for everyone around us.
Each time we report a business infringing privacy laws, talk publicly about our bad experiences trying to get our data deleted, and, more importantly, refuse to participate in services and products that aren't worthy of our trust, we help to improve data privacy for everyone over time.
Being vigilant and reporting bad practices is taking a stand for better privacy. We must all take a stand for better privacy, and expose privacy washing each time we spot it.


@@ -182,6 +182,7 @@ However, Privacy Guides *does* have social media accounts on a wide variety of p
- [:simple-reddit: Reddit](https://reddit.com/r/PrivacyGuides)
- [:simple-x: X (Twitter)](https://x.com/privacy_guides)
- [:simple-youtube: YouTube](https://youtube.com/@privacyguides)
- [:simple-tiktok: TikTok](https://www.tiktok.com/@privacyguides)
</div>


@@ -1,5 +1,5 @@
ANALYTICS_FEEDBACK_NEGATIVE_NAME="This page could be improved"
ANALYTICS_FEEDBACK_NEGATIVE_NOTE='Thanks for your feedback! If you want to let us know more, please leave a post on our <a href="https://discuss.privacyguides.net/c/site-development/7" target="_blank" rel="noopener">forum</a>.'
ANALYTICS_FEEDBACK_NEGATIVE_NOTE="Thanks for your feedback! If you want to let us know more, please leave a post on our <a href='https://discuss.privacyguides.net/c/site-development/7' target='_blank' rel='noopener'>forum</a>."
ANALYTICS_FEEDBACK_POSITIVE_NAME="This page was helpful"
ANALYTICS_FEEDBACK_POSITIVE_NOTE="Thanks for your feedback!"
ANALYTICS_FEEDBACK_TITLE="Was this page helpful?"


@@ -27,6 +27,7 @@ site_description: "Privacy Guides is the most popular & trustworthy non-profit p
edit_uri_template: blob/main/blog/{path}?plain=1
extra:
scope: /
privacy_guides:
footer:
intro:
@@ -202,6 +203,9 @@ markdown_extensions:
pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
options:
custom_icons:
- theme/icons
tables: {}
footnotes: {}
toc:


@@ -1,220 +0,0 @@
# Copyright (c) 2022-2024 Jonah Aragon <jonah@triplebit.net>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
docs_dir: "videos"
site_url: "https://www.privacyguides.org/videos/"
site_dir: "site/videos"
site_name: Privacy Guides
site_description: "This is our home for the latest video content from the Privacy Guides team. Privacy Guides is the most popular & trustworthy non-profit privacy resource to find privacy tools and learn about protecting your digital life."
edit_uri_template: blob/main/videos/{path}?plain=1
extra:
privacy_guides:
footer:
intro:
!ENV [
FOOTER_INTRO,
"Privacy Guides is a non-profit, socially motivated website that provides information for protecting your data security and privacy.",
]
note:
!ENV [
FOOTER_NOTE,
"We do not make money from recommending certain products, and we do not use affiliate links.",
]
copyright:
author:
!ENV [FOOTER_COPYRIGHT_AUTHOR, "Privacy Guides and contributors."]
date: !ENV [FOOTER_COPYRIGHT_DATE, "2019-2025"]
license:
- fontawesome/brands/creative-commons
- fontawesome/brands/creative-commons-by
- fontawesome/brands/creative-commons-sa
homepage: !ENV [MAIN_SITE_BASE_URL, "https://www.privacyguides.org/en/"]
generator: false
context: !ENV [BUILD_CONTEXT, "production"]
offline: !ENV [BUILD_OFFLINE, false]
deploy: !ENV DEPLOY_ID
social:
- icon: simple/mastodon
link: https://mastodon.neat.computer/@privacyguides
name: !ENV [SOCIAL_MASTODON, "Mastodon"]
- icon: simple/peertube
link: https://neat.tube/c/privacyguides
name: !ENV [SOCIAL_PEERTUBE, "PeerTube"]
- icon: simple/matrix
link: https://matrix.to/#/#privacyguides:matrix.org
name: !ENV [SOCIAL_MATRIX, "Matrix"]
- icon: simple/discourse
link: https://discuss.privacyguides.net/
name: !ENV [SOCIAL_FORUM, "Forum"]
- icon: simple/github
link: https://github.com/privacyguides
name: !ENV [SOCIAL_GITHUB, "GitHub"]
- icon: simple/torbrowser
link: http://www.xoe4vn5uwdztif6goazfbmogh6wh5jc4up35bqdflu6bkdc5cas5vjqd.onion/posts/
name: !ENV [SOCIAL_TOR_SITE, "Hidden service"]
repo_url:
!ENV [BUILD_REPO_URL, "https://github.com/privacyguides/privacyguides.org"]
repo_name: ""
theme:
name: material
language: en
custom_dir: theme
font:
text: Public Sans
code: DM Mono
palette:
- media: "(prefers-color-scheme)"
scheme: default
accent: deep purple
toggle:
icon: material/brightness-auto
name: !ENV [THEME_DARK, "Switch to dark mode"]
- media: "(prefers-color-scheme: dark)"
scheme: slate
accent: amber
toggle:
icon: material/brightness-2
name: !ENV [THEME_LIGHT, "Switch to light mode"]
- media: "(prefers-color-scheme: light)"
scheme: default
accent: deep purple
toggle:
icon: material/brightness-5
name: !ENV [THEME_AUTO, "Switch to system theme"]
favicon: assets/brand/logos/png/favicon-32x32.png
icon:
repo: simple/github
features:
- announce.dismiss
- navigation.tracking
- navigation.tabs
- navigation.path
- navigation.indexes
- navigation.footer
- content.action.edit
- content.tabs.link
- content.tooltips
- search.highlight
extra_css:
- assets/stylesheets/extra.css?v=20250723
watch:
- theme
- includes
plugins:
blog:
blog_dir: .
blog_toc: true
post_url_format: "{date}/{file}"
post_excerpt_max_authors: 0
authors_profiles: false
categories: false
rss:
match_path: posts/.*
abstract_chars_count: -1
date_from_meta:
as_creation: date.created
as_update: date.updated
categories:
- categories
- tags
glightbox: {}
tags: {}
search: {}
privacy:
enabled: !ENV [BUILD_PRIVACY, true]
offline:
enabled: !ENV [BUILD_OFFLINE, false]
group:
enabled: !ENV [BUILD_INSIDERS, true]
plugins:
macros: {}
meta: {}
optimize:
enabled: !ENV [OPTIMIZE, PRODUCTION, NETLIFY, false]
typeset: {}
social:
cards: !ENV [CARDS, true]
cards_dir: assets/img/social
cards_layout_dir: theme/layouts
cards_layout: page
markdown_extensions:
admonition: {}
pymdownx.details: {}
pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
pymdownx.tabbed:
alternate_style: true
pymdownx.arithmatex:
generic: true
pymdownx.critic: {}
pymdownx.caret: {}
pymdownx.keys: {}
pymdownx.mark: {}
pymdownx.tilde: {}
pymdownx.snippets:
auto_append:
- !ENV [BUILD_ABBREVIATIONS, "includes/abbreviations.en.txt"]
pymdownx.tasklist:
custom_checkbox: true
attr_list: {}
def_list: {}
md_in_html: {}
meta: {}
abbr: {}
pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
tables: {}
footnotes: {}
toc:
toc_depth: 4
nav:
- !ENV [NAV_HOME, "Home"]: !ENV [MAIN_SITE_BASE_URL, "/en/"]
- !ENV [NAV_KNOWLEDGE_BASE, "Knowledge Base"]:
!ENV [MAIN_SITE_KNOWLEDGE_BASE_URL, "/en/basics/why-privacy-matters/"]
- !ENV [NAV_RECOMMENDATIONS, "Recommendations"]:
!ENV [MAIN_SITE_RECOMMENDATIONS_URL, "/en/tools/"]
- !ENV [NAV_BLOG, "Articles"]: !ENV [ARTICLES_SITE_BASE_URL, "/articles/"]
- !ENV [NAV_VIDEOS, "Videos"]:
- index.md
- playlists.md
- !ENV [NAV_FORUM, "Forum"]: "https://discuss.privacyguides.net/"
- !ENV [NAV_WIKI, "Wiki"]:
!ENV [
NAV_WIKI_LINK,
"https://discuss.privacyguides.net/c/community-wiki/9411/none",
]
- !ENV [NAV_ABOUT, "About"]: !ENV [MAIN_SITE_ABOUT_URL, "/en/about/"]
validation:
nav:
not_found: info


@@ -32,6 +32,7 @@ edit_uri_template:
!ENV [BUILD_EDIT_URI_TEMPLATE, "blob/main/docs/{path}?plain=1"]
extra:
scope: /
generator: false
context: !ENV [BUILD_CONTEXT, "production"]
offline: !ENV [BUILD_OFFLINE, false]


@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" viewBox="0 0 36 36" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xml:space="preserve" xmlns:serif="http://www.serif.com/" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2;">
<path d="M13,34C13,34 13,36 11,36C9,36 9,34 9,34L9,2C9,2 9,0 11,0C13,0 13,2 13,2L13,34Z" style="fill:rgb(102,117,127);fill-rule:nonzero;"/>
<path d="M11,4C11,1.8 12.636,0.75 14.636,1.667L31.363,9.334C33.363,10.251 33.363,11.751 31.363,12.667L14.636,20.334C12.636,21.25 11,20.2 11,18L11,4Z" style="fill:rgb(69,221,46);fill-rule:nonzero;"/>
</svg>



@@ -44,7 +44,11 @@
<link rel="next" href="{{ page.next_page.url | url }}">
{% endif %}
{% if config.extra.alternate is iterable %}
{% if page.is_homepage %}
<link rel="alternate" href="https://www.privacyguides.org/" hreflang="x-default">
{% else %}
<link rel="alternate" href="{{ "https://www.privacyguides.org/" ~ "en" ~ "/" ~ page.url }}" hreflang="x-default">
{% endif %}
{% for alt in config.extra.alternate %}
<link rel="alternate" href="{{ "https://www.privacyguides.org/" ~ alt.lang ~ "/" ~ page.url }}" hreflang="{{ alt.lang | d(lang.t('language')) }}">
{% endfor %}
@@ -84,6 +88,7 @@
{% endif %}
{% if config.extra.context == "production" %}
<link href="https://www.privacyguides.org/webmentions/receive/" rel="webmention">
<meta http-equiv="onion-location" content="{{ page.canonical_url | replace("https://www.privacyguides.org", "http://www.xoe4vn5uwdztif6goazfbmogh6wh5jc4up35bqdflu6bkdc5cas5vjqd.onion") }}" />
{% endif %}
{% if page and page.meta and page.meta.schema %}


@@ -1 +0,0 @@
../blog/.authors.yml


@@ -1,14 +0,0 @@
---
description: >-
This is our home for the latest video content from the Privacy Guides team. Be sure you are subscribed to find out about our latest uploads, and share these videos with your family and friends if you find them helpful!
template: video.html
hide:
- footer
---
# Latest Videos
This is our home for the latest video content from the Privacy Guides team. Be sure you are subscribed to find out about our latest uploads, and share these videos with your family and friends if you find them helpful!
[:simple-youtube: Subscribe on YouTube](https://www.youtube.com/@privacyguides){ .md-button .md-button--primary }
[:simple-peertube: Subscribe on PeerTube](https://neat.tube/c/privacyguides){ .md-button .md-button--primary }


@@ -1,3 +0,0 @@
# Playlists
<!-- material/tags -->


@@ -1,5 +0,0 @@
template: video-post.html
hide:
- toc
social:
cards_layout: video


@@ -1,31 +0,0 @@
---
title: 5 Easy Steps to Protect Yourself Online
date:
created: 2025-02-14T17:00:00Z
authors:
- jordan
description: Worried about hackers and data breaches? You're not alone. In this video we outline 5 simple yet crucial steps you can take today to dramatically improve your online security and protect your personal information.
readtime: 8
thumbnail: https://neat.tube/lazy-static/previews/59e10e27-2bc4-4cd4-8cb7-605b101baf4e.jpg
embed: https://neat.tube/videos/embed/059b71a5-a1aa-44d5-b410-14a69e3082da
peertube: https://neat.tube/w/1GaeNH2GyUark4kNXCcL6Q
youtube: https://www.youtube.com/watch?v=x5bKUA2sVFM
links:
- Password Managers: https://www.privacyguides.org/en/passwords/
- Multifactor Authentication: https://www.privacyguides.org/en/multi-factor-authentication/
- Desktop Browsers: https://www.privacyguides.org/en/desktop-browsers/
- Browser Extensions: https://www.privacyguides.org/en/browser-extensions/
- Recommendation Criteria: https://www.privacyguides.org/en/about/criteria/
---
Worried about hackers and data breaches? You're not alone. In this video we outline 5 simple yet crucial steps you can take today to dramatically improve your online security and protect your personal information.
## Sources
- The biggest data breaches in 2024: <https://techcrunch.com/2024/10/14/2024-in-data-breaches-1-billion-stolen-records-and-rising/>
- Bitwarden Password Strength Tester: <https://bitwarden.com/password-strength/>
- Proton Pass Showcase video: <https://www.youtube.com/watch?v=4YLxjvxHMYo>
- Bitwarden Showcase video: <https://www.youtube.com/watch?v=3w0sJlgKfWQ>
- Google Incognito Lawsuit: <https://apnews.com/article/google-incognito-mode-tracking-lawsuit-settlement-8b30c9397f678bc4c546ab84191f7a9d>
- Google ad for GIMP was malicious: <https://www.bleepingcomputer.com/news/security/google-ad-for-gimporg-served-info-stealing-malware-via-lookalike-site/>
- Cops were allowed to force a suspect to use thumbprint to unlock phone, says court: <https://9to5mac.com/2024/04/19/thumbprint-to-unlock-phone-kaw/>


@@ -1,56 +0,0 @@
---
title: |
How the NSA Tried to Backdoor Every Phone
date:
created: 2025-07-03T20:00:00Z
authors:
- jordan
tags:
- The History and Future of the Encryption Wars
description: |
In a world where everything is highly interconnected, it's easy to feel overwhelmed. Intentionally separating different parts of your life can help you with managing stress, with keeping your data secure, and with staying private online.
readtime: 9
thumbnail: https://neat.tube/lazy-static/previews/a5ca14cb-0317-4f93-8ac9-ba3257b38b2f.jpg
embed: https://neat.tube/videos/embed/qFThArEaKeHtDu78i27F29
peertube: https://neat.tube/w/qFThArEaKeHtDu78i27F29
youtube: https://www.youtube.com/watch?v=dDD37mSA0iw
---
Imagine a world where every conversation you had, every message you sent, every piece of data on your phone, was an open book to the government. In 1993, the NSA tried to do just that by inserting a tiny chip into everyone's phone. This is the second part of our series on the Crypto Wars, a monumental moment in history that shaped digital privacy forever.
## Sources
0:38 <https://en.wikipedia.org/wiki/Crypto_Wars>
0:52 <https://www.youtube.com/watch?v=2SWjIPwm954>
0:58 <https://www.youtube.com/watch?v=TLfZ5Sb1EGw>
1:02 <https://www.youtube.com/watch?v=bgTxFLHcuKc>
1:06 <https://www.youtube.com/watch?v=cmGkYHtV0Mo>
1:15 <https://www.youtube.com/watch?v=j9Qp7AWIylk>
3:43 <https://www.youtube.com/watch?v=ulWwPwm1jck>
3:46 <https://www.youtube.com/watch?v=hhxX8tSHnl0>
4:00 <https://www.youtube.com/watch?v=9vM0oIEhMag>
4:07 <https://www.youtube.com/watch?v=y36Z9dt5AWI>
4:27 <https://www.nytimes.com/1994/06/12/magazine/battle-of-the-clipper-chip.html>
5:18 <https://archive.epic.org/crypto/clipper/foia/att3600_2_9_93.html>
5:49 <https://www.youtube.com/watch?v=re9nG81Vft8>
6:27 <https://dl.acm.org/doi/10.1145/191177.191193>
7:14 <https://www.legislation.gov.uk/ukpga/2023/50/contents>
7:15 <https://www.homeaffairs.gov.au/help-and-support/how-to-engage-us/consultations/the-assistance-and-access-bill-2018>
7:35 <https://www.youtube.com/watch?v=bKIR8JLJOlc>
7:37 <https://www.youtube.com/watch?v=I5WjTTi67BE>


@@ -1,60 +0,0 @@
---
title: |
Compartmentalize Your Life (and Your Privacy)
date:
created: 2025-06-05T20:00:00Z
authors:
- jordan
tags:
- Pride Month
description: |
In a world where everything is highly interconnected, it's easy to feel overwhelmed. Intentionally separating different parts of your life can help you with managing stress, with keeping your data secure, and with staying private online.
readtime: 8
thumbnail: https://neat.tube/lazy-static/previews/d2f0d1c3-8ea8-4c50-9a1b-9534f61874a7.jpg
embed: https://neat.tube/videos/embed/k1EeZGt9886WJQMDz2L9pB
peertube: https://neat.tube/w/k1EeZGt9886WJQMDz2L9pB
youtube: https://www.youtube.com/watch?v=Lhep-1Ynf4g
links:
- The Importance of Data Privacy For The Queer Community: https://www.privacyguides.org/articles/2025/06/03/importance-of-privacy-for-the-queer-community/#seeking-health-information
- Why Privacy Matters: https://www.privacyguides.org/en/basics/why-privacy-matters/
- Threat Modeling: https://www.privacyguides.org/en/basics/threat-modeling/
---
In a world where everything is highly interconnected, it's easy to feel overwhelmed. Compartmentalization can be a good strategy and important privacy technique: Intentionally separating different parts of your life can help you manage stress, keep your data secure, and stay private online.
## Sources
- <https://www.intomore.com/culture/identity/nearly-one-in-five-lgbtq-americans-are-still-in-the-closet-says-new-study/>
- <https://www.forbes.com/sites/daveywinder/2025/03/27/iphone-users-warned-as-dating-apps-leak-sensitive-photos-messages/>
- <https://en.wikipedia.org/wiki/Capital_punishment_for_homosexuality>
## Additional resources
### Helplines
- [Mindline Trans+ (UK)](https://www.mindinsomerset.org.uk/our-services/adult-one-to-one-support/mindline-trans/): A confidential emotional, mental health support helpline for people who identify as Trans, Agender, Gender Fluid or Non-Binary.
- [Trans Lifeline Hotline (US and Canada)](https://translifeline.org/hotline/): Trans peer support over the phone.
- [Suicide & Crisis Helpline (US and Canada)](https://988lifeline.org/): General support 24/7 phone number 988.
- [Suicide & Crisis Helpline (International)](https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines): List of suicide crisis lines around the world.
### Supportive organizations
- [Egale (Canada, International)](https://egale.ca/asylum/): Resources for LGBTQ+ asylum and immigration requests from outside and inside Canada.
- [SOS Homophobie (France)](https://www.sos-homophobie.org/international-content): Non-profit, volunteer-run organization committed to combatting hate-motivated violence and discrimination against LGBTI people.
- [The Trevor Project (US)](https://www.thetrevorproject.org/): Suicide prevention and crisis intervention non-profit organization for LGBTQ+ young people.
- [Trans Rescue (International)](https://transrescue.org/): Organization assisting trans and queer individuals in relocating from dangerous areas to safer places.
- [Twenty10 (Australia)](https://twenty10.org.au/): Sydney-based organization providing a broad range of free support programs to the LGBTIQA+ community.
### International advocacy
- [Amnesty International](https://www.amnesty.org/en/what-we-do/discrimination/lgbti-rights/): Human rights organization running campaigns to protect and uphold the rights of LGBTI people globally.
- [Human Rights Watch](https://www.hrw.org/topic/lgbt-rights): Human rights non-profit who documents and exposes abuses based on sexual orientation and gender identity worldwide, and advocate for better protective laws and policies.


@@ -1,24 +0,0 @@
---
title: Do you need a VPN?
date:
created: 2024-12-12T20:00:00Z
authors:
- jordan
description: Commercial Virtual Private Networks are a way of extending the end of your network to exit somewhere else in the world. This can have substantial privacy benefits, but not all VPNs are created equal.
readtime: 6
thumbnail: https://neat.tube/lazy-static/previews/3f9c497c-d0f1-4fe8-8ac5-539f0f5e40ed.jpg
embed: https://neat.tube/videos/embed/2e4e81e8-f59e-4eab-be4d-8464a4a83328
peertube: https://neat.tube/w/6HDQH1wnTACKFHh2u1CRQ5
youtube: https://www.youtube.com/watch?v=XByp-F8FXtg
links:
- VPN Recommendations: https://www.privacyguides.org/en/vpn/
- VPN Overview: https://www.privacyguides.org/en/basics/vpn-overview/
---
Commercial Virtual Private Networks are a way of extending the end of your network to exit somewhere else in the world. This can have substantial privacy benefits, but not all VPNs are created equal. More information about VPNs can be found on our website: <https://www.privacyguides.org/en/basics/vpn-overview/>
## Sources
- VPN Sponsorship Study: <https://www.cs.umd.edu/~dml/papers/vpn-ads-oakland22.pdf>
- VPNs Questionable Security Practices: <https://cdt.org/wp-content/uploads/2017/08/FTC-CDT-VPN-complaint-8-7-17.pdf>
- VPN Relationship Map: <https://kumu.io/Windscribe/vpn-relationships>


@@ -1,24 +0,0 @@
---
title: |
Think Privacy Is Dead? You're Wrong.
date:
created: 2025-04-17T20:00:00Z
authors:
- jordan
description: |
Privacy isnt dead, in fact its growing. In this video, we explore common arguments against protecting your privacy and why they're not only wrong but dangerous.
readtime: 5
thumbnail: https://neat.tube/lazy-static/previews/ebdd1d98-7136-4f5d-9a9e-449004ce47d1.jpg
embed: https://neat.tube/videos/embed/sSx1yyXESXhvZh1E3VTwtG
peertube: https://neat.tube/w/sSx1yyXESXhvZh1E3VTwtG
youtube: https://www.youtube.com/watch?v=Ni2_BN_9xAY
links:
- Why Privacy Matters: https://www.privacyguides.org/en/basics/why-privacy-matters/
- posts/5-easy-steps-to-protect-yourself-online.md
---
Privacy isnt dead, in fact its growing. In this video, we explore common arguments against protecting your privacy and why they're not only wrong but dangerous.
## Sources
- <https://www.wired.com/story/google-app-gmail-chrome-data/>
- <https://www.cloudflare.com/learning/privacy/what-is-the-gdpr/>


@@ -1,26 +0,0 @@
---
title: |
Is Your Data Really Safe? Understanding Encryption
date:
created: 2025-04-03T20:00:00Z
authors:
- jordan
description: |
Encryption is a cornerstone of security on the modern internet, in this video we dive deep into how it works and explain why it's so important.
readtime: 7
thumbnail: https://neat.tube/lazy-static/previews/f23bff89-bc84-46b7-ac0b-7e72a9c3ad7d.jpg
embed: https://neat.tube/videos/embed/6gASFPMvy7EBwTiM3XetEZ
peertube: https://neat.tube/w/6gASFPMvy7EBwTiM3XetEZ
youtube: https://www.youtube.com/watch?v=0uQVzK8QWsw
links:
- Privacy Means Safety<br><small>by Em on March 5, 2025</small>: https://www.privacyguides.org/articles/2025/03/25/privacy-means-safety/
- Why Privacy Matters: https://www.privacyguides.org/en/basics/why-privacy-matters/
---
Encryption is a cornerstone of security on the modern internet, in this video we dive deep into how it works and explain why it's so important. This is especially crucial as many governments around the world are pushing to ban encryption and breach our fundamental right to privacy.
## Sources
- <https://www.bbc.co.uk/news/articles/cgj54eq4vejo>
- <https://www.eff.org/deeplinks/2025/02/uks-demands-apple-break-encryption-emergency-us-all>
- <https://www.eff.org/deeplinks/2019/12/fancy-new-terms-same-old-backdoors-encryption-debate-2019>
- <https://www.amnesty.org/en/latest/news/2025/02/https-www-amnesty-org-en-latest-news-2025-02-uk-encryption-order-threatens-global-privacy-rights/>
@@ -1,31 +0,0 @@
---
title: It's time to stop using SMS, here's why!
date:
created: 2025-01-24T20:00:00Z
authors:
- jordan
description: Text messaging has been a staple of communication for decades, but it's time to move on. In this video, we'll explore why SMS is an outdated and insecure technology and discuss better alternatives.
readtime: 7
thumbnail: https://neat.tube/lazy-static/previews/f3b63055-e1b3-4691-8687-4a838738141b.jpg
embed: https://neat.tube/videos/embed/7887f661-541c-4bff-9f69-2b7dd81622ca
peertube: https://neat.tube/w/fTfKp1tatNnGTtfP3SwbXu
youtube: https://www.youtube.com/watch?v=B9BWXvn-rB4s
links:
- Instant Messengers: https://www.privacyguides.org/en/real-time-communication/
---
Text messaging has been a staple of communication for decades, but it's time to move on. In this video, we'll explore why SMS is an outdated and insecure technology and discuss better alternatives.
## Sources
- <https://www.ivpn.net/blog/collection-of-user-data-by-isps-and-telecom-providers-and-sharing-with-third-parties>
- <https://www.t-mobile.com/privacy-center/education/analytics-reporting>
- <https://arstechnica.com/tech-policy/2021/03/t-mobile-will-tell-advertisers-how-you-use-the-web-starting-next-month/>
- <https://docs.fcc.gov/public/attachments/DOC-402213A1.pdf>
- <https://www.abc.net.au/news/2024-12-05/united-states-allege-china-behind-salt-typhoon-telecoms-hack/104687712>
- <https://mashable.com/article/salt-typhoon-telcom-hack-explainer-us-china>
- <https://docs.fcc.gov/public/attachments/DOC-408013A1.pdf>
- <https://www.bleepingcomputer.com/news/security/reddit-announces-security-breach-after-hackers-bypassed-staffs-2fa/>
- <https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2019/02/ProofofIdentity2019_WebSpreads.pdf> (Page 12)
- <https://www.gsma.com/solutions-and-impact/technologies/networks/rcs/>
- <https://www.gsma.com/newsroom/article/rcs-nowin-ios-a-new-chapter-for-mobile-messaging/>
@@ -1,23 +0,0 @@
---
title: |
Recall Is Back, But You Still Shouldnt Use It
date:
created: 2025-05-22T22:00:00Z
authors:
- jordan
description: |
Microsoft is rolling out its controversial Recall feature to Windows users with Copilot+ PCs. However, many privacy and security concerns remain, even after its reworking.
readtime: 6
thumbnail: https://neat.tube/lazy-static/previews/54ba6b19-122f-47b6-8f48-8ed651748fd6.jpg
embed: https://neat.tube/videos/embed/pAAK8bzb6saZfQSqZLW7eU
peertube: https://neat.tube/w/pAAK8bzb6saZfQSqZLW7eU
youtube: https://www.youtube.com/watch?v=AzLsJ-4_fhU
---
Microsoft is rolling out its controversial Recall feature to Windows users with Copilot+ PCs. However, many privacy and security concerns remain, even after its reworking.
## Sources
- Introducing Copilot+ PC's Full Keynote - Microsoft: <https://www.youtube.com/watch?v=aZbHd4suAnQ>
- Introducing Windows 11 - Microsoft: <https://www.youtube.com/watch?v=Uh9643c2P6k>
- Introducing Copilot - Microsoft: <https://www.youtube.com/watch?v=5rEZGSFgZVY>
- Introducing a new Copilot key for Windows 11 PCs - Microsoft: <https://www.youtube.com/watch?v=S1R08Qx6Fvs>
@@ -1,16 +0,0 @@
---
title: |
Secureblue: Is This the Most Secure Linux Distro?
date:
created: 2025-07-25T18:00:00Z
authors:
- jordan
description: |
Today, were exploring Secureblue, a project aimed at addressing security concerns in traditional Linux distributions by significantly hardening existing components and systems.
readtime: 19
thumbnail: https://neat.tube/lazy-static/previews/708bf564-963b-4c3f-bb0e-5e2424353509.jpg
embed: https://neat.tube/videos/embed/4YA5XTiVbAdYv7nRsCroxn
peertube: https://neat.tube/w/4YA5XTiVbAdYv7nRsCroxn
youtube: https://www.youtube.com/watch?v=hmKQyeyOd54
---
Today, were exploring [**Secureblue**](https://www.privacyguides.org/en/desktop/#secureblue), a project aimed at addressing security concerns in traditional Linux distributions by significantly hardening existing components and systems.
@@ -1,27 +0,0 @@
---
title: |
Stop Confusing Privacy, Anonymity, and Security
date:
created: 2025-03-14T02:00:00Z
authors:
- jordan
description: |
Are you mixing up privacy, security, and anonymity? Don't worry, it's more common than you might think! In this week's video we break down each term, so you can make educated decisions on what tools are best for you.
readtime: 7
thumbnail: https://neat.tube/lazy-static/previews/35388c84-1dc5-4e09-867e-0badf6ea75fa.jpg
embed: https://neat.tube/videos/embed/1f5361c6-2230-4466-9390-659e0a0692ad
peertube: https://neat.tube/w/4SmJxn7Q2XRp7ZGDCxvNUV
youtube: https://www.youtube.com/watch?v=RRt08MvK4tE
links:
- Common Threats: https://www.privacyguides.org/en/basics/common-threats/#security-and-privacy
- Recommended Tools: https://www.privacyguides.org/en/tools/
- Why Privacy Matters: https://www.privacyguides.org/en/basics/why-privacy-matters/
- VPN Overview: https://www.privacyguides.org/en/basics/vpn-overview/
- Do You Need a VPN?: https://www.privacyguides.org/videos/2024/12/12/do-you-need-a-vpn/
---
Are you mixing up privacy, security, and anonymity? Don't worry, it's more common than you might think! In this week's video we break down each term, so you can make educated decisions on what [privacy tools](https://www.privacyguides.org/en/tools/) are best for you.
## Sources
- <https://apnews.com/article/google-incognito-mode-tracking-lawsuit-settlement-8b30c9397f678bc4c546ab84191f7a9d>
- <https://cloud.google.com/blog/products/chrome-enterprise/chromeos-the-platform-designed-for-zero-trust-security>
@@ -1,40 +0,0 @@
---
title: |
When Code Became a Weapon
date:
created: 2025-05-08T20:00:00Z
authors:
- jordan
description: |
During the Cold War, the US government tried to stop the export of strong cryptography. In this video, we'll dive into the history, explaining how these export controls came about and why they were eventually overturned.
tags:
- The History and Future of the Encryption Wars
readtime: 10
thumbnail: https://neat.tube/lazy-static/previews/64ffa267-44f4-4780-b283-a620bf856934.jpg
embed: https://neat.tube/videos/embed/8Yrh3JVFbS3ekG8i2JGzjN
peertube: https://neat.tube/w/8Yrh3JVFbS3ekG8i2JGzjN
youtube: https://youtu.be/DtPKBngQcEQ
links:
- Encryption Software: https://www.privacyguides.org/en/encryption/#openpgp
---
During the Cold War, the US government tried to stop the export of strong cryptography. In this video, we'll dive into the history, explaining how these export controls came about and why they were eventually overturned. The ability to use strong encryption wasnt a given; it has been continually fought for throughout history.
## Sources
- <https://hiddenheroes.netguru.com/philip-zimmermann>
- <https://dubois.com/pgp-case/>
- <https://www.philzimmermann.com/EN/background/index.html>
- <https://www.philzimmermann.com/EN/bibliography/index.html>
- <https://www.philzimmermann.com/multimedia/NPR%20Morning%20Edition%2012%20Jan%201996%20-%20Justice%20Dept%20drops%20Zimmermann%20case.m4a>
- [158,962,555,217,826,360,000 (Enigma Machine) - Numberphile](https://www.youtube.com/watch?v=G2_Q9FoD-oQ&pp=ygUSbnVtYmVycGhpbGUgZW5pZ21h)
- [Enigma Code](https://www.youtube.com/watch?v=LU2s28-tN08&pp=ygUbZW5pZ21hIG1hY2hpbmUgZGlzY292ZXJ5IHVr)
- [Our History](https://www.youtube.com/watch?v=tIDb-rVvHgQ&pp=ygUSb3VyIGhpc3RvcnkgbnNhIHl0)
- [The cold war, Checkpoint Charlie](https://www.youtube.com/watch?v=-pUmfKX3C04&pp=ygUSY2hlY2twb2ludCBjaGFybGll)
- [Ordinary Life in the USSR 1961](https://www.youtube.com/watch?v=ExHCAjRsZhA&pp=ygUYbGlmZSBpbiB0aGUgdXNzciBmb290YWdl)
- [USA: WASHINGTON: ANTI-NUCLEAR PROTESTS](https://www.youtube.com/watch?v=3SbC3EHS04I&pp=ygUZYW50aSBudWtlIHByb3Rlc3QgMTk5MCBhcNIHCQmGCQGHKiGM7w%3D%3D)
- [DEF CON 11 - Phil Zimmerman - A Conversation with Phil Zimmermann](https://www.youtube.com/watch?v=4ww8AAkWFhM&pp=ygUTcGhpbCB6aW1tZXJtYW5uIHBncA%3D%3D)
- [The Screen Savers - Phil Zimmerman, creator of Pretty Good Privacy (PGP) Interview](https://www.youtube.com/watch?v=cZD36L3BXXs&pp=ygUdcGhpbCB6aW1tZXJtYW5uIHNjcmVlbiBzYXZlcnM%3D)
- [Creator of PGP, Phil Zimmermann Talks At Bitcoin Wednesday](https://www.youtube.com/watch?v=M8z0Nx8svC4&pp=ygUXcGhpbCB6aW1tZXJtYW5uIGJpdGNvaW4%3D)
- [Life On The Internet: Networking (1996 Usenet Documentary)](https://www.youtube.com/watch?v=jNme5DlNaZY&pp=ygUbbGlmZSBvbiB0aGUgaW50ZXJuZXIgdXNlbmV0)
- [Snooping is in the nature of govts king of encryption Phil Zimmermann](https://www.youtube.com/watch?v=1eYZ8v_R9jI&pp=ygUdcGhpbCB6aW1tZXJtYW5uIHNjcmVlbiBzYXZlcnM%3D)
- <https://www.eff.org/cases/bernstein-v-us-dept-justice>
@@ -1,22 +0,0 @@
---
title: |
Anonymity for Everyone: Why You Need Tor
date:
created: 2025-03-02T18:00:00Z
authors:
- jordan
description: Tor is an invaluable tool for bypassing censorship and browsing privately. In this week's video, we dive into the details and explain how it works. Plus, we cover some things you should avoid when using Tor to make sure you maintain your anonymity.
readtime: 7
thumbnail: https://neat.tube/lazy-static/previews/c47cf1e6-c0ba-4d80-82fb-fde27e1569c5.jpg
embed: https://neat.tube/videos/embed/725431de-407d-4d36-a4a0-f01e169e0cad
peertube: https://neat.tube/w/f7QkKGe5TJaPi6Y4S61Uoi
youtube: https://www.youtube.com/watch?v=R7vECGYUhyg
links:
- Tor Overview: https://www.privacyguides.org/en/advanced/tor-overview/
- Tor Browser: https://www.privacyguides.org/en/tor/
---
Tor is an invaluable tool for bypassing censorship and browsing privately. In this week's video, we dive into the details and explain how it works. Plus, we cover some things you should avoid when using Tor to make sure you maintain your anonymity.
## Sources
- Tor support documentation: <https://support.torproject.org/>