Mirror of https://github.com/privacyguides/privacyguides.org.git
Synced 2025-09-05 09:38:52 +00:00

Compare commits: tiktok-pro...sumbitted- (9 commits)

- 6e7ea3cb52
- a84f16fdc4
- 19947442a6
- e55eb0986b
- e5500a11da
- 0c4f98e7fb
- ac96552200
- 1a7eb59fee
- 575818a637
.mailmap
1
.mailmap
@@ -15,6 +15,7 @@ Jonah Aragon <jonah@privacyguides.org> <jonah@triplebit.net>
|
||||
Jonah Aragon <jonah@privacyguides.org> <jonah@privacytools.io>
|
||||
Jonah Aragon <jonah@privacyguides.org> <github@aragon.science>
|
||||
Jordan Warne <jordan@privacyguides.org> <jw@omg.lol>
|
||||
Jordan Warne <jordan@privacyguides.org> <contact@jordanwarne.net>
|
||||
Justin Ehrenhofer <justin.ehrenhofer@gmail.com> <12520755+SamsungGalaxyPlayer@users.noreply.github.com>
|
||||
Mare Polaris <ph00lt0@privacyguides.org> <15004290+ph00lt0@users.noreply.github.com>
|
||||
Niek de Wilde <niek@privacyguides.org> <github.ef27z@simplelogin.com>
|
||||
|

.vscode/ltex.dictionary.en-US.txt (vendored, 1 line changed)

@@ -569,3 +569,4 @@ allowlisted
MyMonero
Monero-LWS
OkCupid
Anom

Binary file not shown (new image, 238 KiB).

Binary file not shown (new image, 286 KiB).

blog/posts/privacy-washing-is-a-dirty-business.md (new file, 216 lines)

---
date:
  created: 2025-08-20T17:00:00Z
categories:
  - Opinion
authors:
  - em
description:
  Privacy washing is a widely used deceptive strategy. Learning to detect it is an important skill to develop so we can respond to it and report it.
schema_type: Opinion
preview:
  cover: blog/assets/images/privacy-washing-is-a-dirty-business/washing-cover.webp
---

# Privacy Washing Is a Dirty Business

![Article cover showing a stack of quarters in front of a blurred out laundromat with a yellow and a blue laundry machine](../assets/images/privacy-washing-is-a-dirty-business/washing-cover.webp)

<small aria-hidden="true">Photo: Marija Zaric / Unsplash</small>

Perhaps you haven't heard the term *privacy washing* before. Nonetheless, it's likely that you have already been exposed to this scheme in the wild. Regrettably, privacy washing is a widespread deceptive strategy.<!-- more -->

## What is privacy washing?

Similar to whitewashing (concealing unwanted truths to improve a reputation) and greenwashing (deceptively presenting a product as environmentally friendly for marketing purposes), privacy washing misleadingly, or fraudulently, presents a product, service, or organization as being responsible and trustworthy with data protection, when it isn't.

<div class="admonition quote inline end" markdown>
<p class="admonition-title">Your privacy is* important to us. <small aria-hidden="true">*not!</small></p></div>

The term has been used for over a decade already. It's saddening to see that not only is this [not a new problem](https://dataethics.eu/privacy-washing/), but it has only gotten worse over the years.

With the acceleration of data collection, the accumulation of data breaches, and the erosion of customers' trust, companies increasingly need to reassure users in order to win their business.

Despite consumers' rights and expectations, implementing proper data protection takes time, expertise, and money. Even if the long-term benefits are colossal, the time invested often doesn't translate into direct *short-term* profits, the main objective for most businesses. On the other hand, collecting more data to sell it to third parties often *does* translate into short-term profits.

For these reasons, many companies quickly realize the need to *advertise* better privacy, but aren't necessarily willing to invest what it takes to make these claims true.

Enter privacy washing: <span class="pullquote-source">"Your privacy is* important to us." <small aria-hidden="true">*not!</small></span>

Privacy washing comes with a selection of washer cycles, from malicious trap to deceptive snake oil to perhaps the most common wash: plain negligence.

## Negligence, incompetence, or malevolence

In some other contexts, intentions might matter more. But when it comes to privacy washing, the result is often the same regardless of intent: personal data from users, customers, employees, patients, or even children gets leaked and exploited in all sorts of ways.

Whether false claims come from negligence in failing to verify that data protections are properly implemented, incompetence in evaluating whether they are, or malice in tricking users into using a service that is actually detrimental to their privacy, harm is done, and sometimes permanently so.

Nonetheless, understanding the different types of privacy washing can help us detect it, respond to it, and report it.

### Negligence and greed

> *They know what they are doing, but they care more about money*

The most common occurrence of privacy washing likely comes from negligence and greed. One of the biggest drivers for this is that the current market incentivizes it.

Today's software industry is largely inflated by venture capital funding, which creates expectations for a substantial return on investment. This funding model often encourages startups to quickly build an app following the [minimum viable product](https://en.wikipedia.org/wiki/Minimum_viable_product) principles, grow its user base as fast as possible, increase its value, and then sell it off for profit.

The problem is, this model is antithetical to implementing good privacy, security, and legal practices from the start. Data privacy cannot be an afterthought. It must be implemented from the start, before users' data even gets collected.

Many startups fail to see how being thorough with data privacy will benefit them in the long term, and view privacy and security requirements only as a burden slowing down their growth. This mindset can result in perceiving privacy as a simple marketing asset, something businesses talk to users about for reassurance, but without putting any real effort into it beneath the surface.

<div class="admonition quote inline end" markdown>
<p class="admonition-title">Perhaps moving fast and breaking things wasn't such a good idea after all.</p></div>

Outside of privacy, this common startup mindset of playing fast and loose with customers and their safety frequently has **devastating** consequences. One recent and tragic example comes from OceanGate's Titan deep-sea submersible, which [infamously imploded](https://globalnews.ca/news/11318623/titan-sub-report-oceangate-culture-critically-flawed/) during an exploration, killing its five passengers in an instant.

The final report blamed a problematic safety culture at OceanGate that was “critically flawed and at the core of these failures were glaring disparities between their written safety protocols and their actual practices.”

<span class="pullquote-source">Perhaps [moving fast and breaking things](move-fast-and-break-things.md) wasn't such a good idea after all.</span>

Alas, similar "glaring disparities" between policies and practices are widespread in the tech industry. While maybe not as dramatic and spectacular as an imploding submersible, [data leaks can also literally kill people](privacy-means-safety.md).

**Data privacy is the "passenger safety protocol" for software**, and it should never be trivialized.

Privacy isn't just "risk management"; it is a human right. As with safety protocols, organizations are responsible for ensuring their data protection policies are followed and accurately describe their current practices. Anything less is negligence, at best.

Unfortunately, users (like passengers) often have very few ways to verify claims about allegedly privacy-respectful features and policies. But this burden should never be on them in the first place.

### Incompetence and willful ignorance

> *They don't know what they are doing, or they just don't want to know*

Partly related to negligence is plain incompetence and willful ignorance. Some organizations might be well-intentioned initially, but either lack the internal expertise to implement proper privacy practices, or conveniently decide not to spend much time researching what their data protection responsibilities are.

For example, most businesses have heard by now of the requirement to present a privacy policy to their users, customers, and even web visitors. Deplorably, in a failed attempt to fulfill this legal obligation, many simply copy someone else's privacy policy and paste it on their own website. Not only is this very unlikely to comply with applicable privacy regulations, it also possibly infringes *copyright* laws.

Do not simply copy-paste another organization's privacy policy and claim it as your own!

It's important to remember that legal requirements for policies aren't the end goal here. **The true requirements are the data protection *practices*.**

The policies *must* accurately describe what the *practices* are in reality. Because no two organizations have the exact same internal practices and third-party vendors, no two organizations should have the exact same privacy policy.

**Copy-paste privacy policies aren't compliance, they're deception.**

A privacy policy that doesn't accurately describe an organization's practices is a form of privacy washing. Sadly, it's quite a commonly used one, like some quick light-wash cycle.

It's worth noting that, these days, creating a privacy policy using generative AI leads to the exact same problems of accuracy and potential infringement of both privacy and copyright laws. This is *not* a smart "shortcut" to try.

While misunderstanding policies and legal requirements is only one example of how incompetence can become a form of privacy washing, there are infinitely more ways this can happen.

As soon as data is collected by an organization (or by the third-party software it uses), there are almost certainly legal obligations to protect this data, to restrict its collection and retention, and to inform data subjects.

Organizations that do not take this responsibility seriously, or blissfully decide to remain unaware of it, while presenting an empty privacy policy, are effectively doing privacy washing.

Implementing protections and limiting collection cannot be an afterthought. Once data is leaked, there is often nothing that can be done to truly delete it from the wild. The damage caused by leaked data can be tragic and permanent.

Organizations must take this responsibility much more seriously.

### Malevolence and fraud

> *They lie, and they want your data*

Greed and ignorance are common causes of privacy washing, but they can quickly escalate into fraud and ambush.

It's worth noting that enough negligence or incompetence can be indistinguishable from malice, but there are also organizations that deliberately lie to users to exploit them, or to trick them into unwillingly revealing sensitive information.

#### Anom, the secret FBI operation

Perhaps one of the most infamous examples of this is the Anom honeypot. Anom was an encrypted phone company promising privacy and security, but it was in fact part of an undercover operation staged by the American Federal Bureau of Investigation (FBI), [Operation Trojan Shield](https://en.wikipedia.org/wiki/Operation_Trojan_Shield).

Investigative journalist Joseph Cox [reported](https://www.vice.com/en/article/inside-anom-video-operation-trojan-shield-ironside/) in 2021 that Anom advertised its products to criminal groups, then secretly sent a copy of every message on the device to the FBI. It was so secret, even Anom developers didn't know about the operation. They were told their customers were corporations.

A screenshot [shared](https://www.vice.com/en/article/operation-trojan-shield-anom-fbi-secret-phone-network/) by Motherboard shows an Anom slogan: "Anom, Enforce your right to privacy". It's hard to tell how many non-criminal persons (if any) might have accidentally been caught in this FBI net. Although this specific operation seems to have narrowly targeted criminals, who knows whether a similar operation might cast a wider net, inadvertently catching many innocent privacy-conscious users in its path.

#### Navigating VPN providers can be a minefield

Using a [trustworthy](https://www.privacyguides.org/en/vpn/) Virtual Private Network (VPN) service is a good strategy to improve your privacy online. That being said, evaluating trustworthiness is critical here. Using a VPN is only a transfer of trust, from your Internet Service Provider (ISP) to your VPN provider. Your VPN provider will still know your true IP address and location, and *could* technically see all your online activity while using the service, if they decided to look.

[Different VPN services are not equal](https://www.privacyguides.org/videos/2024/12/12/do-you-need-a-vpn/); unfortunately, snake oil products and traps are everywhere in this market. As with anything, do not assume that whoever screams the loudest is the most trustworthy. Loudness here only means more investment in advertising.

For example, take the interesting case of [Kape Technologies](https://en.wikipedia.org/wiki/Kape_Technologies), a billionaire-run company formerly known as Crossrider. This corporation has now acquired four different VPN services: ExpressVPN, CyberGhost, Private Internet Access, and Zenmate. This isn't that suspicious in itself, but Kape Technologies has also [acquired](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/) a number of VPN *review* websites, which suspiciously always rank its own VPN services at the top. This is a blatant conflict of interest, to say the least.

Sadly, on the VPN market — [estimated](https://www.grandviewresearch.com/industry-analysis/virtual-private-network-market) at $41.33 billion USD in 2022 — what is called a ["review" is often just *advertising*](the-trouble-with-vpn-and-privacy-review-sites.md).

Moreover, many free VPN providers [break their privacy promises](https://iapp.org/news/a/privacy-violations-by-free-vpn-service-providers) regarding users' data. In 2013, Facebook [bought](https://gizmodo.com/do-not-i-repeat-do-not-download-onavo-facebook-s-vam-1822937825) the free VPN provider Onavo, and included it in a Facebook feature deceptively labeled "Protect". As is now standard behavior for Facebook, the social media juggernaut actually collected and analyzed the data from Onavo users. This allowed Facebook to monitor the online habits of its users even when they weren't using the Facebook app. This is very much the opposite of data privacy, and of any implied promise to "Protect".

Then there's the case of Hotspot Shield VPN, accused in 2017 by the Center for Democracy & Technology, a digital rights nonprofit organization, of [breaking](https://www.zdnet.com/article/privacy-group-accuses-hotspot-shield-of-snooping-on-web-traffic/) its privacy promises. While promising "anonymous browsing", Hotspot Shield allegedly deployed persistent cookies and used more than five different third-party tracking libraries. The parent company AnchorFree denied the accusations, but even *if* they were unfounded, how tempting would it be for a business with an ad-based revenue model to use the valuable data it collects to generate more of that revenue? And indeed, many free VPN services do [monetize](https://thebestvpn.com/how-free-vpns-sell-your-data/) users' data.

Worst of all are the *fake* free VPN services. Criminals are [luring users](https://www.techradar.com/pro/criminals-are-using-a-dangerous-fake-free-vpn-to-spread-malware-via-github-heres-how-to-stay-safe) looking for a free VPN service and tricking them into downloading malware onto their devices, like landmines waiting to be stepped on. While this goes beyond privacy washing, it's still a piece of software actively harming users and deceptively gaining their trust with the false promise of better privacy. Wherever privacy washing is normalized by greedy or lazy organizations, criminals like this flourish.

#### Using compliance to appear legitimate

Another fraudulent form of privacy washing is organizations using false claims of privacy law compliance to appear more legitimate.

Earlier this year, the digital rights organization Electronic Frontier Foundation (EFF) [called](https://www.eff.org/deeplinks/2025/01/eff-state-ags-time-investigate-crisis-pregnancy-centers) for an investigation into deceptive anti-abortion militant organizations (also called "[fake clinics](https://www.plannedparenthood.org/blog/what-are-crisis-pregnancy-centers)") in eight different US states.

These fake clinics were claiming to be bound by the Health Insurance Portability and Accountability Act (HIPAA) in order to appear like genuine health organizations. HIPAA is an American federal privacy law that was established in 1996 to protect sensitive health information in the United States.

Not only are many of these fake clinics **not** complying with HIPAA, but they collect extremely sensitive information without being bound by HIPAA in the first place, because they *aren't* licensed healthcare providers. Worse, some have [leaked this data](https://jessica.substack.com/p/exclusive-health-data-breach-at-americas) in all sorts of ways.

Thanks to the EFF's work, some of those fake clinics have now [quietly removed](https://www.eff.org/deeplinks/2025/08/fake-clinics-quietly-edit-their-websites-after-being-called-out-hipaa-claims) misleading language from their websites. But sadly, this small victory doesn't make these organizations any more trustworthy; it only slightly reduces the extent of their privacy washing.

### Deception and privacy-masquerading

> *They talk privacy, but their words are empty*

Perhaps the most obvious and pernicious examples of privacy washing are organizations that are clearly building products and features harming people's privacy, while using deceptive, pro-privacy language to disguise themselves as privacy-respectful organizations. There are likely more occurrences of this than there are characters in this article's text.

Buzzwords like "military-grade encryption", "privacy-enhancing", and the reassuring classic "we never share your data with anyone" get thrown around like candies falling off a privacy-preserving piñata.

But **words are meaningless when they are deceitful**, and these candies quickly turn bitter once we learn the truth.

#### Google, the advertising company

An infamous recent example of this is Google, which [pushed](https://proton.me/blog/privacy-washing-2023) a new Chrome feature for targeted advertising in 2023 and dared to call it "Enhanced Ad Privacy".

This technology, [enabled by default](https://www.eff.org/deeplinks/2023/09/how-turn-googles-privacy-sandbox-ad-tracking-and-why-you-should), allows Google to target users with ads customized around their browsing history. It's really difficult to see where the "privacy" is supposed to be here, even when squinting very hard.

Of course, Google, an advertising company, has long mastered the art of misleading language around data privacy to reassure its valuable natural resource, the user.

<div class="admonition quote inline end" markdown>
<p class="admonition-title">Google continued to collect personally identifiable user data from their extensive server-side tracking network.</p></div>

Everyone is likely familiar with Chrome's infamously deceptive "Incognito mode". In reality, going "Incognito" stopped at your own device, where browsing history wasn't kept, while <span class="pullquote-source">Google continued to collect personally identifiable user data from their extensive server-side tracking network.</span> Understandably, disgruntled users filed a [class action lawsuit](https://www.theverge.com/2023/8/7/23823878/google-privacy-tracking-incognito-mode-lawsuit-summary-judgment-denied) seeking reparations for this deception. In 2023, Google agreed [to settle](https://www.bbc.co.uk/news/business-67838384) this $5 billion lawsuit.

Despite claims of "privacy" in their advertising to users, Google, like many other big tech giants, has in reality spent millions [lobbying against](https://www.politico.com/news/2021/10/22/google-kids-privacy-protections-tech-giants-516834) better privacy protections for years.

#### World App, the biometric data collector

Similarly, Sam Altman's World project loves to throw privacy-preserving language around to reassure prospective users and investors. But despite all its claims, data protection authorities around the world have been [investigating, fining, and even banning](sam-altman-wants-your-eyeball.md/#privacy-legislators-arent-on-board) its operations.

The World App (developed by the World project) is an "everything app" providing users with a unique identifier called a World ID. This World ID, which grants various perks and accesses while using the World App, is earned by providing biometric data to the organization, in the form of an iris scan.

Providing an iris scan to a for-profit corporation with little oversight will rightfully scare away many potential users. This is why the company has evidently invested heavily in branding itself as a "privacy-preserving" technology, a claim that is [questionable](sam-altman-wants-your-eyeball.md/#how-privacy-preserving-is-it) to say the least.

Despite catchy declarations such as a "privacy by default and by design approach", the World project has accumulated an impressive history of privacy violations, and keeps piling up contradictory and misleading statements in its own documentation.

There are some stains that even a powerful, billionaire-backed privacy wash just cannot clean off.

#### Flo, sharing your period data with Facebook

In 2019, the Wall Street Journal [reported](https://therecord.media/meta-flo-trial-period-tracking-data-sharing) that the period tracking application Flo had been sharing sensitive health data with Facebook (Meta), despite its promises of privacy.

The app, developed by Flo Health, repeatedly reassured users that the very sensitive information they shared with the app would remain private and would not be shared with any third parties without explicit consent.

Despite this pledge, the Flo app did share sensitive personal data with third parties, via the software development kits incorporated into the app.

This extreme negligence (or malevolence) has likely harmed some users in unbelievable ways. Considering the state of abortion rights in the United States at the moment, it's not an exaggeration to say this data leak could [severely endanger](privacy-means-safety.md/#healthcare-seekers) Flo's users, including putting them at risk of imprisonment.

In response, users have filed several [class action lawsuits](https://www.hipaajournal.com/jury-trial-meta-flo-health-consumer-privacy/) against Flo Health, Facebook, Google, AppsFlyer, and Flurry.

Promising confidentiality to gain users' trust while trivializing health data privacy should never be normalized. This is a very serious infringement of users' rights.

## Remain skeptical, revoke your trust when needed

Regardless of the promises made to safeguard our personal data, it's sad to say, but we can never let our guard down.

Privacy washing isn't a trend that is about to fade away; it's quite likely that it will even worsen in the years to come. We must prepare accordingly.

The only way to improve our safety (and our privacy) is to remain vigilant at all times, and grant our trust only sparingly. We also need to stay prepared to revoke this trust at any time, when we learn new information that justifies it.

Always remain skeptical when you encounter privacy policies that seem suspiciously generic; official-looking badges on websites advertising unsupported claims of "GDPR compliance"; reviews that lack supporting evidence and are of doubtful independence; and overuse of buzzwords like "military-grade encryption", "privacy-enhancing", "fully encrypted", and (more recently) "AI-powered".

It's not easy to navigate the perilous waters of supposedly privacy-respectful software. And it's even worse in an age where AI-spawned websites and articles can create the illusion of trustworthiness with only a few clicks and prompts.

Learning [how to spot the red flags, and the green(ish) flags](red-and-green-privacy-flags.md), to protect ourselves from the deceptive manipulation of privacy washing is an important skill to develop to make better-informed choices.

blog/posts/red-and-green-privacy-flags.md (new file, 448 lines)

---
date:
  created: 2025-09-03T19:30:00Z
categories:
  - Tutorials
authors:
  - em
description:
  Being able to distinguish facts from marketing lies is an essential skill in today's world. Despite all the privacy washing, there are clues we can look for to help.
schema_type: AnalysisNewsArticle
preview:
  cover: blog/assets/images/red-and-green-privacy-flags/dontcare-cover.webp
---

# “We [Don't] Care About Your Privacy”

![Article cover showing a book cover page with the title "We Care About Your Privacy" where the words "Care About" have been crossed in red and replaced by the hand-written words "Don't Care About"](../assets/images/red-and-green-privacy-flags/dontcare-cover.webp)

<small aria-hidden="true">Illustration: Em / Privacy Guides | Photo: Lilartsy / Unsplash</small>

They all claim "Your privacy is important to us." How can we know if that's true? With privacy washing being normalized by big tech and startups alike, it becomes increasingly difficult to evaluate who we can trust with our personal data. Fortunately, there are red (and green) flags we can look for to help us.<!-- more -->

If you haven't heard this term before, [privacy washing](privacy-washing-is-a-dirty-business.md) is the practice of misleadingly, or fraudulently, presenting a product, service, or organization as trustworthy for data privacy, when in fact it isn't.

Privacy washing isn't a new trend, but it has become more prominent in recent years as a strategy to gain trust from increasingly suspicious prospective customers. Unless politicians and regulators start getting much more serious and severe about protecting our privacy rights, this trend is likely to only get worse.

In this article, we will examine common indicators of privacy washing, and the "red" and "green" flags we should look for to make better-informed decisions and avoid deception.

## Spotting the red flags

<div class="admonition quote inline end" markdown>
<p class="admonition-title">Marketing claims can be separated from facts by an abysmally large pit of lies</p></div>

It's important to keep in mind that the most visible product isn't necessarily the best. More visibility only means more marketing. <span class="pullquote-source">Marketing claims can be separated from facts by an abysmally large pit of lies</span>.

Being able to distinguish between facts and marketing lies is an important skill to develop, doubly so on the internet. After all, it's difficult to find a single surface of the internet that isn't covered with ads, whether in plain sight or lurking in the shadows, disguised as innocent comments and enthusiastic reviews.

So what can we do about it?

There are some signs that should be considered when evaluating a product to determine its trustworthiness. It's unfair that this burden falls on us, but sadly, until we get better regulations and institutions to protect us, we will have to protect ourselves.

It's also important to remember that evaluating trustworthiness isn't binary, and isn't permanent. There is always at least some risk, no matter how low, and trust should always be revoked when new information justifies it.

<div class="admonition info" markdown>
<p class="admonition-title">Examine flags collectively, and in context</p>

It's important to note that each red flag isn't necessarily a sign of untrustworthiness on its own (and the same is true for green flags, in reverse). But the more red flags you spot, the more suspicious you should get.

Taken *together*, these warning signs can help us estimate when it's probably reasonably safe to trust (low risk), when we should revoke our trust, or when we should refrain from trusting a product or organization entirely (high risk).

</div>

### :triangular_flag_on_post: Conflict of interest

Conflict of interest is one of the biggest red flags to look for. It comes in many shapes: sponsorships, affiliate links, parent companies, donations, employment, personal relationships, and so on and so forth.

#### Content sponsorships and affiliate links

Online influencers and educators regularly receive offers to "monetize their audience with ease" if they agree to overtly or subtly advertise products within their content. If this isn't explicitly presented as advertising, then there is obviously a strong conflict of interest. The same is true for affiliate links, where creators receive a sum of money each time a visitor clicks on a link or purchases a product through it.

It's understandable that content creators are seeking sources of revenue to continue doing their work. This isn't an easy job. But a trustworthy content creator should always **disclose** any potential conflicts of interest related to their content, and present paid advertising explicitly as paid advertising.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Before trusting content online, try to examine what the sources of revenue are for this content. Look for affiliate links and sponsorships, and try to evaluate whether what you find might have influenced the impartiality of the content.

</div>

#### Parent companies

This one is harder to examine, but is extremely important. In today's corporate landscape, it's not rare to find conglomerates of corporations with a trail of ownership so long it's sometimes impossible to find the head. Nevertheless, investigating which company owns which is fundamental to detecting conflicts of interest.

For example, the corporation [Kape Technologies](https://en.wikipedia.org/wiki/Teddy_Sagi#Kape_Technologies) owns both VPN providers (ExpressVPN, CyberGhost, Private Internet Access, and Zenmate) and websites publishing [*VPN reviews*](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/). Suspiciously, its own VPN providers always get ranked at the top on its own review websites. Even if there were no explicit directive for the websites to do this, which review publisher would dare to give a negative ranking to a product owned by its parent company, the one keeping it alive? This is a direct and obvious conflict of interest.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Look at the *Terms of Service* and *Privacy Policy* (or *Privacy Notice*) for declarations related to a parent company. This is often stated there. You can also examine an organization's *About* page, Wikipedia page, or even official government corporate registries to find out if anyone else owns an organization.

</div>

#### Donations, event sponsorships, and other revenues

When money is involved, there is always a potential for conflict of interest. If an organization receives a substantial donation, grant, or loan from another, it will be difficult to remain impartial about it. Few would dare to talk negatively about a large donor.

This isn't necessarily a red flag in every situation, of course. For example, a receiving organization could be in a position where the donor's values are aligned, or where impartiality isn't required. Nevertheless, it's something important to consider.

In 2016, developer and activist Aral Balkan [wrote](https://ar.al/notes/why-im-not-speaking-at-cpdp/) about how he refused an invitation to speak on a panel on Surveillance Capitalism at the [Computers, Privacy, & Data Protection Conference](http://www.cpdpconferences.org) (CPDP). The conference had accepted sponsorship from an organization completely antithetical to its stated values: [Palantir](https://www.independent.co.uk/news/world/americas/us-politics/trump-doge-palantir-data-immigration-b2761096.html).

Balkan wrote: "The sponsorship of privacy and human rights conferences by corporations that erode our privacy and human rights is a clear conflict of interests that we must challenge."

<div class="admonition quote inline end" markdown>
<p class="admonition-title">How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?</p></div>

This is a great example of how sponsors can severely compromise not only the impartiality of an organization, but also its credibility and its values. How could the talks being put forward at such a conference be selected without bias? <span class="pullquote-source">How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?</span>

It's worth noting that this year's CPDP 2025 sponsors [included](https://www.cpdpconferences.org/sponsors-partners) Google, Microsoft, TikTok, and Uber.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Examine who sponsors events and who donates to organizations. Try to evaluate whether an organization or event received money from sources that could be in contradiction with its values. Does this compromise its credibility? If a sponsor or donor has conflicting values, what benefit would there be for the sponsor in supporting this event or organization?

</div>

#### Employment and relationships

Finally, another important type of conflict of interest to keep in mind is the relationship between the individuals producing the content and the companies or products they are reporting on.

For example, if a content creator works or previously worked for an organization, and the content requires impartiality, this is a potential conflict of interest that should be openly disclosed.

The same can be true if this person is in a professional or personal relationship with people involved with the product. This can be difficult to detect, of course, and is not categorically a sign of bias, but it's worth paying attention to in our evaluations.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Look for disclaimers related to conflicts of interest. Research the history of an organization to gain a better understanding of the people involved. Wikipedia can be a valuable resource for this.

</div>

### :triangular_flag_on_post: Checkbox compliance and copy-paste policies

Regrettably, many organizations have no intention whatsoever of genuinely implementing privacy-respectful practices, and are simply trying to get rid of these "pesky privacy regulation requirements" as cheaply and quickly as possible.

They treat privacy law compliance like a tedious list of annoying tasks. They think they can complete this list by doing the bare *cosmetic* minimum, so that it will all *look* compliant (of course, it is not).

A good clue that this mindset is at work in an organization is when it uses a very generic privacy policy and terms of service, policies that are often simply copy-pasted from another website or AI-generated (which is kind of the same thing).

Not only is this *extremely unlikely* to truly fulfill the requirements for privacy compliance, but it also almost certainly infringes on *copyright* laws.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

If you find few details in a privacy policy that are specific to the organization, try copying one of its paragraphs or a long sentence into a search engine (using quotation marks around it to find exact matches, as sketched after this box). This will help you detect whether other websites are using the same policy.

Some might be using legitimate templates of course, but even legally usable policy templates need to be customized heavily to be compliant. Sadly, many simply copy-paste material from other organizations without permission, or use generative AI tools that do the same.

If the whole policy is copied without customization, it's very unlikely to describe anything true.

</div>
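
As a rough way to automate this spot check, the minimal sketch below builds an exact-phrase search URL from a suspect sentence. The DuckDuckGo endpoint and the sample sentence are only examples; any search engine that supports quoted phrases would work just as well.

```python
from urllib.parse import quote_plus

def exact_phrase_search_url(sentence: str) -> str:
    """Build a search URL that looks for the sentence as an exact phrase."""
    # Wrapping the sentence in quotation marks asks the engine for exact matches,
    # which helps reveal other websites using the same copy-pasted policy text.
    query = f'"{sentence.strip()}"'
    return "https://duckduckgo.com/?q=" + quote_plus(query)

# Example (hypothetical) sentence lifted from a suspiciously generic policy:
suspect = "We value your privacy and are committed to protecting your personal information."
print(exact_phrase_search_url(suspect))
```

If the same paragraph shows up on many unrelated websites, the policy was likely copied rather than written to describe the organization's actual practices.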

### :triangular_flag_on_post: Meaningless privacy compliance badges

Many businesses and startups have started to proudly display privacy law "[compliance badges](https://www.shutterstock.com/search/compliance-badge)" on their websites, to reassure potential clients and customers.

While it can indeed be reassuring at first glance to see "GDPR Compliant!", "CCPA Privacy Approved", and other deceitful designs, there is no central authority verifying this systematically. At this time, anyone could decide to claim they are "GDPR Compliant" and adorn their website with a pretty badge.

Moreover, if this claim isn't true, it is of course fraudulent and likely breaks many laws. But some businesses bet on the assumption that no one will verify or report it, or that data protection authorities simply have better things to do.

While most privacy regulations adopt principles similar to the European General Data Protection Regulation (GDPR) [principle of accountability](https://commission.europa.eu/law/law-topic/data-protection/rules-business-and-organisations/obligations/how-can-i-demonstrate-my-organisation-compliant-gdpr_en) (where organizations are responsible for compliance and for demonstrating compliance), organizations' assertions are rarely challenged or audited. Because most of the time no one verifies compliance unless there's an individual complaint, organizations have grown increasingly fearless with false claims of compliance.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Never trust a claim of privacy compliance at face value, especially if it comes in the shape of a pretty website badge.

Examine organizations' privacy policies, contact them and ask questions, look for independent reviews, and investigate whether an organization has been reported before. Never trust a first-party source to tell you how great and compliant the first party is.

</div>

### :triangular_flag_on_post: Fake reviews

Fake reviews are a growing problem on the internet, and this was only aggravated by the arrival of generative AI. There are many review websites that are simply advertising in disguise. Some fake reviews are [generated by AI](https://apnews.com/article/fake-online-reviews-generative-ai-40f5000346b1894a778434ba295a0496), some are paid for or [influenced by sponsorships and affiliate links](the-trouble-with-vpn-and-privacy-review-sites.md), some have a [conflict of interest](https://cyberinsider.com/kape-technologies-owns-expressvpn-cyberghost-pia-zenmate-vpn-review-sites/) due to parent companies, and many are biased in other ways. Trusting an online review today feels like trying to find the single strand of true grass in an enormous plastic haystack.

Genuine reviews are (were?) usually a good way to get a second opinion while shopping online and offline. Fake reviews pollute this verification mechanism by duping us into believing something comes from an independent third party, when it doesn't.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Train yourself to spot fake reviews. There are [many signs](https://www.bbb.org/all/spot-a-scam/how-to-spot-a-fake-review) that can help with this, such as language that suspiciously uses the full, correct product and feature names every time, reviewers who published an unnatural quantity of reviews in a short period of time, excessively positive reviews, negative reviews talking about how great this *other* brand is, etc. Make sure to look for potential conflicts of interest as well.

</div>

### :triangular_flag_on_post: Fake AI-generated content

Sadly, the internet has been infected by a new plague in recent years: AI-generated content. This was mentioned before, but it truly deserves its own red flag.

Besides AI-generated reviews, it's important to know there are also now multiple articles, social media posts, and even entire websites that are completely AI-generated, and doubly fake. This affliction makes it even harder for readers to find genuine sources of reliable information online. [Learning to recognize this fake content](https://www.cnn.com/interactive/2023/07/business/detect-ai-text-human-writing/) is now an internet survival skill.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

If you find a blog that publishes five articles per day from the same author, be suspicious. Look for publication dates, and if they are inhumanly close to each other, this can be a sign of AI-generated content.

When reading an article, AI-generated text will often use very generic sentences; you will rarely find the colorful writing style that is unique to an author. AI writing is generally bland, with no personality shining through. You might also notice the writing feels circular. It will seem like it's not really saying anything specific, except for that one thing that is repeated over and over.

</div>

### :triangular_flag_on_post: Excessive self-references

When writing an article, review, or product description, writers often use text links to add sources of information to support their statements, or to provide additional resources to readers.

When **all** the text links in an article point to the same source, you should grow suspicious. If all the seemingly external links only lead to material created by the original source, this can give the impression of independent supporting evidence, when in fact there isn't any.

Of course, organizations will sometimes refer back to their own material to share more of what they did with you (we certainly do!), but if an article or review *only* uses self-references, and those references also only use self-references, this could be a red flag.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Even if you do not click on links, at least hover over them to see where they lead (a small scripted version of this check is sketched after this box). Usually, trustworthy sources will have at least a few links pointing to *external* third-party websites. A diversity of supporting resources is important when conducting impartial research, and should be demonstrated wherever relevant.

</div>
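
As a minimal sketch of this kind of link check, the following standard-library Python snippet counts how many links on a page point back to the page's own domain versus elsewhere. The function names and the example URL are illustrative only; a ratio close to 1.0 means almost every link is a self-reference.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def self_reference_ratio(url: str) -> float:
    """Return the fraction of links that point back to the page's own domain."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    own_domain = urlparse(url).netloc
    absolute = [urljoin(url, link) for link in parser.links]
    http_links = [l for l in absolute if urlparse(l).scheme in ("http", "https")]
    if not http_links:
        return 0.0
    internal = sum(1 for l in http_links if urlparse(l).netloc == own_domain)
    return internal / len(http_links)

print(self_reference_ratio("https://example.com/"))
```

This only measures where links point, not their quality; treat it as one more signal alongside the other flags described here, not a verdict.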

### :triangular_flag_on_post: Deceptive designs

Deceptive design can be difficult to spot. Sometimes it's obvious, like a cookie banner with a ridiculously small <small>"reject all"</small> button, or an opt-out option hidden under twenty layers of menu.

Most of the time, however, deceptive design is well-planned to psychologically manipulate us into picking the option most favorable to the company, at the expense of our privacy. The Office of the Privacy Commissioner of Canada has produced an informative [web page](https://www.priv.gc.ca/en/privacy-topics/technology/online-privacy-tracking-cookies/online-privacy/deceptive-design/gd_dd-ind/) to help us better recognize deceptive design.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Favor tools and services that are built for privacy from the ground up, and that always default to privacy first. Train yourself to spot deceptive patterns, and be persistent in choosing the most privacy-protective option.

Don't be afraid to [say no](you-can-say-no.md), to reject options and products, and to report them when deceptive design becomes fraudulent or infringes privacy laws.

</div>

### :triangular_flag_on_post: Buzzword language

Be suspicious of buzzword language, especially when it becomes excessive or lacks any supporting evidence. **Remember that buzzwords aren't a promise, but only marketing to get your attention.** These words don't mean anything on their own.

Expressions like "military-grade encryption" are usually designed to inspire trust, but there is [no such thing](https://www.howtogeek.com/445096/what-does-military-grade-encryption-mean/), and the phrase doesn't grant better privacy. Most military organizations likely use industry-standard encryption based on solid, well-tested cryptographic algorithms, like any trustworthy organization and privacy-preserving tool does.

Newer promises like "AI-powered" are completely empty, if not *scary*. Thankfully, many "AI-powered" apps aren't really AI-powered, and this is a good thing, because "AI" is more often [a danger to your privacy](https://www.sciencenewstoday.org/the-dark-side-of-ai-bias-surveillance-and-control) than an enhancement.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Remain skeptical of expressions like "privacy-enhancing", "privacy-first approach", "fully encrypted", or "fully compliant" when these claims aren't supported with evidence. Fully encrypted means nothing if the encryption algorithm is weak, or if the company has access to your encryption keys.

When you see claims of "military-grade encryption", ask which cryptographic algorithms are used, and how encryption is implemented (one small verifiable check is sketched after this box). Look for evidence and detailed information backing technological claims. Never accept vague promises as facts.

</div>
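
One of the few encryption claims an outsider can partially verify is the transport layer. The minimal sketch below (the host name is just an example) reports which TLS version and cipher suite a public endpoint actually negotiates; it says nothing about an app's internal or end-to-end encryption, but it can reveal a gap between marketing language and what is actually deployed.

```python
import socket
import ssl

def inspect_tls(hostname: str, port: int = 443) -> None:
    """Print the TLS version and cipher suite a server actually negotiates."""
    context = ssl.create_default_context()  # system CAs, modern protocol defaults
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            version = tls.version()              # e.g. 'TLSv1.3'
            cipher, _proto, bits = tls.cipher()  # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
            print(f"{hostname}: {version}, cipher {cipher} ({bits}-bit)")

inspect_tls("example.com")
```

For anything beyond transport encryption, you still have to rely on documentation, independent audits, and the other signals described in this article.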

### :triangular_flag_on_post: Unverifiable and unrealistic promises

Along the same lines, many businesses will be happy to promise you the moon. But then they become reluctant to explain how they will get you the moon, how they will manage to give the moon to multiple customers at once, and what will happen to the planet once they've transported the moon away from its orbit to bring it back to you on Earth... Maybe getting the moon isn't such a good promise after all.

<div class="admonition quote inline end" markdown>
<p class="admonition-title">companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves</p></div>

Similarly, <span class="pullquote-source">companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves</span>.

No software product is 100% secure and/or 100% private. Promises like this are unrealistic, and (fortunately for those companies) often also *unverifiable*. But an unverifiable claim shouldn't default to a trustworthy claim, quite the opposite. Trust must be earned. If a product cannot demonstrate how its claims are true, then we must remain skeptical.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

As with buzzwords and compliance claims, never trust at face value. If there is no way for you to verify a claim, remain skeptical and aware that this promise could be empty.

Be especially suspicious of organizations repeating exaggerated guarantees such as "100% secure". Organizations that are knowledgeable about security and privacy will usually refrain from such binary statements, and tend to talk about risk reduction with nuanced terms like "more secure" or "more private".

</div>

### :triangular_flag_on_post: Flawed or absent process for data deletion

Examining an organization's processes for data deletion can reveal a lot about their privacy practices and expertise. Organizations that are knowledgeable about privacy rights will usually be prepared to respond to data deletion requests, and will already have a process in place, one that [doesn't require providing more information](queer-dating-apps-beware-who-you-trust.md/#they-can-make-deleting-data-difficult) than they already have.

Be especially worried if:

- [ ] You don't find any mention of data deletion in their privacy policy.

- [ ] From your account's settings or app, you cannot find any option to delete your account and data.

- [ ] The account and data deletion process uses vague terms that make it unclear whether your data will be truly deleted.

- [ ] You cannot find an email address to contact a privacy officer in their privacy policy.

- [ ] The email listed in their privacy policy isn't an address dedicated to privacy.

- [ ] You emailed the address listed but didn't get any reply after two weeks.

- [ ] Their deletion process requires filling out a form demanding more information than they already have on you, or uses a privacy-invasive third party like Google Forms.

- [ ] They argue with you when you ask for legitimate deletion.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

If this isn't already explicitly explained in their policies (or if you do not trust their description), find the privacy contact for an organization and email them *before* using their products or services, to ask about their data deletion practices.

Ask in advance which information will be required from you in order to delete your data. Also ask if they keep any data afterward, and (if they do) what data they keep. Once data is shared, this can be much harder to deal with. It's best to verify data deletion processes *before* trusting an organization with our data.

</div>

### :triangular_flag_on_post: False reassurances

The goal of privacy washing is to reassure worried clients, consumers, users, patients, and investors into using the organization's products or services. But making us *feel* more secure doesn't always mean that we are.

#### Privacy theaters

You might have heard the term "security theater" already, but there's also "[privacy theater](https://slate.com/technology/2021/12/facebook-twitter-big-tech-privacy-sham.html)". Many large tech organizations have mastered this art for decades now. In response to criticism of their dubious privacy practices, companies like Facebook and Google love to add seemingly "privacy-preserving" options to their software's settings, to give people the impression it's possible to use their products while preserving their privacy. But alas, it is not.

Unfortunately, no matter how much you "harden" your Facebook or Google account for privacy, these corporations will keep tracking everything you do on and off their platforms. Yes, enabling these options *might* very slightly reduce exposure for *some* of your data (and you should enable them if you cannot leave these platforms). However, Facebook and Google will still collect enough data on you to make billions in profits each year, otherwise they wouldn't implement these options at all.

#### Misleading protections

The same can be said for applications that have built a reputation on a supposedly privacy-first approach, like [Telegram](https://cybersecuritycue.com/telegram-data-sharing-after-ceo-arrest/) and [WhatsApp](https://insidetelecom.com/whatsapp-security-risk-alert-over-privacy-concerns/). In fact, the protections these apps offer are only partial, often poorly explained to users, and the apps still collect a large amount of data and/or metadata.

#### When deletion doesn't mean deletion

In other cases, false reassurance comes in the form of supposedly deleted data that isn't truly deleted. In 2019, Global News [reported](https://globalnews.ca/news/5463630/amazon-alexa-keeps-data-deleted-privacy/) that Amazon's Alexa virtual assistant speaker didn't always delete voice-recorded data as promised. Google was also found [guilty](https://www.cnet.com/tech/services-and-software/google-oops-did-not-delete-street-view-data-as-promised/) of this, even after receiving an order from the UK's Information Commissioner's Office.

This can also happen with cloud storage services that display an option to "delete" a file, when in fact the file is [simply hidden](https://www.consumersearch.com/technology/cloud-storage-privacy-concerns-learn-permanently-delete-data) from the interface, while remaining available in a bin directory or through version control.

How many unaware organizations might have inadvertently (or maliciously) kept deleted data by misusing their storage service and version control system? Of course, if a copy of the data is kept in backups or a versioning system, then it's **not** fully deleted, and doesn't legally fulfill a data deletion requirement.
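
Version control illustrates this well: deleting a file in a later commit doesn't remove it from history. The minimal sketch below (paths and function names are illustrative) uses standard git commands to list files that were deleted at some point yet remain recoverable from a repository's history.

```python
import subprocess

def deleted_but_recoverable(repo_path: str) -> list[tuple[str, str]]:
    """List (commit, path) pairs for files deleted from a git repo's history.

    Each file is still recoverable with `git show <commit>^:<path>`,
    which is exactly why "deleting" data this way is not real deletion.
    """
    output = subprocess.run(
        ["git", "-C", repo_path, "log", "--diff-filter=D",
         "--name-only", "--pretty=format:commit %H"],
        capture_output=True, text=True, check=True,
    ).stdout

    results, current_commit = [], None
    for line in output.splitlines():
        if line.startswith("commit "):
            current_commit = line.split()[1]
        elif line.strip():
            results.append((current_commit, line.strip()))
    return results

for commit, path in deleted_but_recoverable("."):
    print(f"{path} was deleted in {commit} but still exists in {commit}^")
```

The same reasoning applies to backups and object-storage versioning: if any copy survives, the data was hidden, not deleted.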

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Do not simply trust a "privacy" or "opt-out" option. Look at the overall practices of an organization to establish trust. Privacy features have no value at all if we cannot trust the organization that implemented them.

Investigate an organization's history of data breaches and how it responded to them. Was this organization repeatedly fined by data protection authorities? Do not hesitate to ask an organization's privacy officer questions about their practices. And look for independent reviews of the organization.

</div>
### :triangular_flag_on_post: New and untested technologies
|
||||
|
||||
Many software startups brag about how revolutionary their NewTechnology™ is. Some even dare to brag about a "unique" and "game-changing" novel encryption algorithm. You should not feel excited by this, you should feel *terrified*.
|
||||
|
||||
For example, any startups serious about security and privacy will know that **you should never be ["rolling your own crypto"](https://www.infosecinstitute.com/resources/cryptography/the-dangers-of-rolling-your-own-encryption/)**.
|
||||
|
||||
Cryptography is a complex discipline, and developing a robust encryption algorithm takes a lot of time and transparent testing to achieve. Usually, it is achieved with the help of an entire community of experts. Some beginners might think they had the idea of the century, but until their algorithm has been rigorously tested by hundreds of experts, this is an unfounded claim.
|
||||
|
||||
The reason most software use the same few cryptographic algorithms for encryption, and usually follow strict protocols to implement them, is because this isn't an easy task to do, and the slightest mistake could render this encryption completely useless. The same can be true for other types of technology as well.
|
||||
|
||||
Novel technologies might sound more exciting, but *proven* and *tested* technologies are usually much more reliable when it comes to privacy, and especially when it comes to encryption.
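
By contrast, using proven cryptography correctly takes only a few lines. Here is a minimal Python sketch using the widely audited `cryptography` package's Fernet recipe for authenticated encryption (the message and key handling are purely illustrative, not a complete key-management scheme):

```python
# Minimal sketch: authenticated encryption with a vetted library,
# instead of rolling your own algorithm.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, store and manage this key securely
f = Fernet(key)

token = f.encrypt(b"meet me at noon")  # encrypts and authenticates the message
print(f.decrypt(token))  # b'meet me at noon'; any tampering raises InvalidToken
```

Vetted recipes like this one are what "not rolling your own crypto" looks like in practice.
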
<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

If a company brags about its new technology, investigate what information it has made available about it. Look for a document called a *white paper*, which should describe in technical detail how the technology works.

If the code is open source, look at the project's page and see how many people have worked on it, who is involved, for how long, and so on.

More importantly, look for independent audits from trustworthy experts. Read the reports and verify whether the organization's claims are supported by professionals in the field.

</div>

### :triangular_flag_on_post: Criticism from experts

<div class="admonition quote inline end" markdown>
<p class="admonition-title">if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag</p></div>

No matter how much an organization or product claims to be "privacy-first", <span class="pullquote-source">if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag</span>.

If a company has been [criticized by privacy commissioners](sam-altman-wants-your-eyeball.md/#privacy-legislators-arent-on-board), data protection authorities, privacy professionals, and consumer associations, especially if this has happened repeatedly, you should be *very* suspicious.

Sometimes, criticized corporations will use misleading language like "we are currently working with the commissioner". This *isn't* a good sign.

The marketing department will try to spin any authority audits into something that sounds favorable to the corporation, but this is only privacy washing. They would not be "working with" the privacy commissioner if they hadn't been forced to in the first place. And **they wouldn't have been forced to if they truly had privacy-respectful practices**.

<div class="admonition success" markdown>
<p class="admonition-title">What to do?</p>

Use a search engine to look for related news using keywords such as the company's name with "data breach", "fined", or "privacy".

Check the product's or corporation's Wikipedia page; previous incidents and controversies are sometimes listed there. Follow trustworthy sources of privacy and security news to stay informed about reported data leaks and experts raising the alarm.

</div>

## Looking for the green(ish) flags

Now that we have discussed some red flags to help us know when we should be careful, let's examine the signs that *can* be indicators of trustworthiness.

As with red flags, green flags should always be taken in context and considered together. One, or even a few, green flags (or greenish flags) aren't on their own a guarantee that an organization is trustworthy. Always remain vigilant, and be ready to revoke your trust at any time if new information warrants it.

### :custom-green-flag: Independent reviews

Independent reviews from trustworthy sources can be a valuable resource to help determine if a product is reliable. This is never a guarantee, of course: humans (even experts) can also make mistakes (fewer than AI, but still) and aren't immune to lies.

However, an impartial review conducted by an expert in the field has the benefit of someone who has likely put many hours into investigating the topic, something you might understandably not always have the time to do yourself. But be careful to first evaluate whether it is a genuinely unbiased assessment, or simply marketing content disguised as one.

### :custom-green-flag: Independent audits

Similarly, independent audits from credible organizations are very useful to assess a product's claims. Make sure the company conducting the audit is reputable and impartial, and that you can find a copy of the audit report it produced, ideally from a source that *isn't* the audited company's website (for example, the auditing organization might [provide](https://cure53.de/#publications) access to it transparently).

### :custom-green-flag: Transparency

Transparency goes a long way toward earning trust, and publicly available source code goes a long way toward transparency. If a piece of software publishes its code for anyone to see, this is already a significant level of transparency above any proprietary code.

Open source code is never a guarantee of security and privacy, but it makes it much easier to verify an organization's assertions. This is almost impossible to do when code is proprietary: because no one outside the organization can examine the code, the organization must be taken entirely at its word. Favor products with transparently available code whenever possible.

### :custom-green-flag: Verifiable claims

If you can easily verify an organization's claims, this is a good sign. For example, if privacy practices are explicitly detailed in policies (and match the observed behaviors), if source code is open and easy to inspect, if independent audits have confirmed the organization's claims, and if the organization is consistent with its privacy practices (in private as much as in public), this all helps to establish trust.

### :custom-green-flag: Well-defined policies

Trustworthy organizations should always have well-defined, unique, and easy-to-read privacy policies and terms of service. The conditions within them should also be fair. **You shouldn't have to sell your soul to 1442 marketing partners just to use a service or visit a website.**

Read an organization's privacy policy (or privacy notice), and make sure it includes:

- [x] Language unique to this organization (no copy-paste policy).

- [x] Disclosure of any parent companies owning this organization (if any).

- [x] A dedicated email address to contact for privacy-related questions and requests.

- [x] Detailed information on what data is collected for each activity. For example, the data collected when you use an app or are employed by an organization shouldn't be bundled together indistinctly with the data collected when you simply visit the website.

- [x] Clear limits on data retention periods (when the data will be automatically deleted).

- [x] A clear description of the process to follow in order to delete, access, or correct your personal data.

- [x] A list of third-party vendors used by the organization to process your information.

- [x] Evidence of accountability. The organization should demonstrate accountability for the data it collects, and shouldn't just transfer this responsibility to the processors it uses.

### :custom-green-flag: Availability

Verify availability. Who will you contact if a problem arises with your account, software, or data? Will you be ignored by an AI chatbot just repeating what you've already read on the company's website? Will you be able to reach a competent human?

If you contact an organization at the listed privacy-dedicated email address to ask a question, and receive a thoughtful, non-AI-generated reply within a couple of weeks, this can be a good sign. If you can easily find a privacy officer email address, a company phone number, and the location where the organization is based, these are also encouraging signs.

### :custom-green-flag: Clear funding model

If a *free* service is provided by a *for-profit* corporation, you should investigate further. The old adage that if you do not pay for a product, you *are* the product is sadly often true in tech, and doubly so for big tech.

Before using a new service, try to find out what the funding model is. Maybe it's a free service run by volunteers? Maybe there's a paid tier for businesses while it remains free for individual users? Maybe it survives and thrives on donations? Or maybe everyone does pay for it (with money, not data).

If the service is free and you can't really find any details on how it's financed, this could be a red flag that your data might be monetized. But if the funding model is transparent, fair, and ethical, this *can* be a green flag.

### :custom-green-flag: Reputation history

Some errors are forgivable, but others are too big to let go. Look at an organization's track record to help evaluate its reputation over time. Check whether there have been any security or privacy incidents, or expert criticisms, and how the organization responded to them.

If you find an organization that has always stuck to its values (integrity), has been run by the same core people in recent years (stability), seems to have a generally good reputation with others (reputability), and has had few (or no) incidents in the past (reliability), this *can* be a green flag.

### :custom-green-flag: Expert advice

Seek expert advice before using a new product or service. Look online for reliable and independent sources of [recommendations](https://www.privacyguides.org/en/tools/) (like Privacy Guides!), and read thoroughly to determine if the description fits your privacy needs. No tool is perfect at protecting your privacy, but experts will warn you about a tool's limitations and downsides.

There's also added value in community consensus. If a piece of software is repeatedly recommended by multiple experts (not websites or influencers, *experts*), then this *can* be a green flag that the tool or service is generally trusted by the community (at this point in time).

## Take a stand for better privacy

Trying to evaluate who is worthy of our trust and who isn't is an increasingly difficult task. While this burden shouldn't fall on us, there are unfortunately too few institutional protections we can rely on at the moment.

Until our governments finally prioritize the protection of human rights and privacy rights over corporate interests, we will have to protect ourselves. But this isn't limited to self-protection: our individual choices also matter collectively.

Each time we dig in to thoroughly investigate a malicious organization and expose its privacy washing, we contribute to improving safety for everyone around us.

Each time we report a business infringing privacy laws, talk publicly about our bad experiences getting our data deleted, and, more importantly, refuse to participate in services and products that aren't worthy of our trust, we help to improve data privacy for everyone over time.

Being vigilant and reporting bad practices is taking a stand for better privacy. We must all take a stand for better privacy, and expose privacy washing each time we spot it.

61
blog/posts/the-fight-for-privacy-after-death.md
Normal file
@@ -0,0 +1,61 @@
---
date:
created: 2025-09-04T20:00:00Z
categories:
- Opinion
authors:
- aprilfools
description: In 2020, London police failed to save two sisters in life, then violated their privacy in death. This is a call to arms for posthumous privacy rights.
schema_type: OpinionNewsArticle
---
# Ghosts in the Machine: The Fight for Privacy After Death

In the early hours of 6 June 2020, Nicole Smallman and her sister Bibaa had just finished celebrating Bibaa’s birthday with friends in a park in London. Alone and in the dark, they were [fatally and repeatedly stabbed](https://en.wikipedia.org/wiki/Murders_of_Bibaa_Henry_and_Nicole_Smallman) 36 times.

But the police didn’t just fail them in life – they failed them in death too. PC Deniz Jaffer and PC Jamie Lewis, both of the Metropolitan Police, [took selfies](https://www.theguardian.com/uk-news/2021/dec/06/two-met-police-officers-jailed-photos-murdered-sisters-deniz-jaffer-jamie-lewis-nicole-smallman-bibaa-henry) with the dead bodies of the victims, posting them on a WhatsApp group. And no privacy laws prevented them from doing so.

This horrific case is just one in the murky, often sinister realm of posthumous privacy. In the UK, Europe, and across the world, privacy protections for the dead are at best a rarity – and at worst, a deep moral and societal failing that we cannot and must not accept.

Let’s take a step back. The case of the Smallmans starkly draws attention to the denial in death of guarantees to the living. Reading this blog, you are no doubt aware that the UK and Europe have firm privacy protections in *The General Data Protection Regulation* (GDPR) and Article 8 of the *European Convention on Human Rights* (ECHR). But the picture elsewhere is less clear, with a challenging patchwork of laws and regional statutes the only protection for those in the US and much of the rest of the world. And once you die? Almost universally, these protections [immediately cease](https://gdpr-info.eu/recitals/no-27/).

Here the problem begins. This abrupt collapse in privacy rights leaves the deceased and their families, like the Smallman family, newly vulnerable – and at a time when they are already utterly broken.

In the absence of law comes the pursuit of it, against a backdrop of flagrant privacy violations. What this pursuit means, in practical terms, is that two primary categories of posthumous privacy dominate legal debate: the medical, where the law has intervened tentatively, and the digital, where it simply hasn’t kept up.

Medical protections are tentative because of piecemeal development. Typically involving legal workarounds, they offer rare precedent for what might happen to your digital ghosts now and in the future, with the only clear trend being a reluctance to protect.

That said, the US is one country that has taken measures to protect the medical privacy of the dead. The *Health Insurance Portability and Accountability Act* (HIPAA) dictates that 50 years of protection must be given to your personally identifiable medical information after you die. Except there’s a catch. State laws also apply, and state laws differ. In Colorado, Louisiana, and many other states, HIPAA’s efficacy is severely challenged by laws dictating the mandatory release of information regarded as public – including autopsy reports and even [your genetic information](http://dx.doi.org.ezp.lib.cam.ac.uk/10.1177/1073110516654124).

In lieu of any protections, surviving relatives in Europe have found some success claiming that their own Article 8 rights – that ECHR right to privacy – have been violated through disclosures or inspections related to their deceased.

In one case, Leyla Polat, an Austrian national, suffered the awful death of her son just two days after birth following a cerebral hemorrhage. The family refused a post-mortem examination, wanting to bury their child in accordance with Muslim beliefs; but doctors insisted it take place, covertly removing his internal organs and filling the hollows with cotton wool. When this was discovered during the funeral rites, the boy had to be buried elsewhere, and without ceremony. After several court cases and appeals, the European Court of Human Rights [found](https://hudoc.echr.coe.int/rum#%7B%22itemid%22:%5B%22002-13361%22%5D%7D) that Leyla’s Article 8 and 9 rights had been violated.

As an aside – Stalin’s grandson [tried the same Article 8 route](https://hudoc.echr.coe.int/eng#%7B%22itemid%22:%5B%22001-150568%22%5D%7D) in relation to reputational attacks on his grandfather, reflecting attempts to apply the workaround more widely.

It’s not that there hasn’t been some progress. The fundamental problem is that protections – already sparse – are only as good as their material and geographic scopes, their interactions with other laws, and how they are interpreted in a court. Nowhere is this more apparent than in the case of the Smallman sisters. Judge Mark Lucraft KC [found](https://www.judiciary.uk/wp-content/uploads/2022/07/R-v-Jaffer-Lewis-sentencing-061221.pdf) that PCs Jaffer and Lewis, in taking selfies with the murdered victims, had:

> *“…wholly disregarded the privacy of the two victims of horrific violence and their families for what can only have been some cheap thrill, kudos, a kick or some form of bragging right by taking images and then passing them to others.”*

Yet this acknowledgement of privacy violation is precisely just that. The crime the officers committed was misconduct in public office; they were not convicted on the basis of privacy law. That sense of progress – that we might be beginning to recognize the importance of posthumous privacy – has all but gone out of the window.

That does not leave your digital privacy in a good place. Whatever little protection you may be able to tease out for your medical privacy far, far exceeds the control you have over your virtual ghosts. And with AI just about everywhere, the prospects for your data after death are terrifying.

We’ve already established that data protections for the living – such as GDPR – expire at death. The simple reality is that dying places your data at the mercy of large technology corporations – and their dubious afterlife tools.

Even if you trust such tools to dispose of or act on your data, there is a disconnect between demand and take-up. A [study of UK nationals](https://www.tandfonline.com/doi/full/10.1080/13600869.2025.2506164#abstract) found that a majority of those who wanted their data deleted at death were unaware of the tools, with large tech companies unwilling to share any details on their uptake. Reassuring stuff.

But the reality is, you shouldn’t. You’ll recall that [deletion doesn’t usually mean deletion](https://www.privacyguides.org/en/basics/account-deletion/) – and after death, even GDPR can’t force big tech to delete the data of those lucky enough to have benefited from it. Account deleted or not, our ghosts will all be stuck in the machine.

Recent reports have acknowledged dire possibilities. Almost worldwide, you can [legally train AI models on the data of a deceased person](https://www.reuters.com/article/world/data-of-the-dead-virtual-immortality-exposes-holes-in-privacy-laws-idUSKBN21Z0NE/) and recreate them in digital form – all without their prior consent. Organizations exist purely to scour your social media profiles and activity for this exact purpose. Your ghost could be used to generate engagement against your will, disclosing what you tried to hide.

You may ask: why should the law care? Why indeed, when it deems we [cannot be harmed](https://doi.org/10.1093/acprof:oso/9780199607860.003.0003) after death. To argue thus is to miss the point. A lack of privacy after death harms the living, often in ways others cannot see. The effect of [post-mortem anxiety](https://www.tandfonline.com/doi/full/10.1080/17577632.2024.2438395#d1e120) is a real one that deeply troubles individuals wishing to keep a part of themselves hidden from public – or even family – view, whether it be an [illicit affair](https://www.cardozoaelj.com/wp-content/uploads/2011/02/Edwards-Galleyed-FINAL.pdf) or whatever else. Revelation at the point of death can be just as harmful to those still alive.

There is cause for optimism. Article 85 of the *French Data Protection Act* allows you to include [legally enforceable demands concerning your personal data](https://www.cnil.fr/fr/la-loi-informatique-et-libertes#article85) in your will. This is truly a landmark piece of legislation by the French that indicates what the global direction of travel should be, and what we should ultimately demand: protections for the dead, by the dead.

But even more urgently, we must demand that governments across the world introduce even the most basic legal framework for post-mortem privacy that protects you, your family, and your community from egregious harm.

The Smallmans deserved dignity – and so does everyone else in death. The law must catch up.

---

*This article hasn’t even begun to scratch the surface of the complexity of post-mortem privacy, and there are innumerable relevant cases and laws that simply wouldn’t fit. If the topic has caught your interest, and you’d like to dig in more, [this white paper](https://doi.org/10.1016/j.clsr.2022.105737) by Uta Kohl is a good starting point.*

@@ -1,5 +1,5 @@
ANALYTICS_FEEDBACK_NEGATIVE_NAME="This page could be improved"
ANALYTICS_FEEDBACK_NEGATIVE_NOTE='Thanks for your feedback! If you want to let us know more, please leave a post on our <a href="https://discuss.privacyguides.net/c/site-development/7" target="_blank" rel="noopener">forum</a>.'
ANALYTICS_FEEDBACK_NEGATIVE_NOTE="Thanks for your feedback! If you want to let us know more, please leave a post on our <a href='https://discuss.privacyguides.net/c/site-development/7' target='_blank' rel='noopener'>forum</a>."
ANALYTICS_FEEDBACK_POSITIVE_NAME="This page was helpful"
ANALYTICS_FEEDBACK_POSITIVE_NOTE="Thanks for your feedback!"
ANALYTICS_FEEDBACK_TITLE="Was this page helpful?"

@@ -202,6 +202,9 @@ markdown_extensions:
pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
options:
custom_icons:
- theme/icons
tables: {}
footnotes: {}
toc:

@@ -206,6 +206,7 @@ nav:
- !ENV [NAV_BLOG, "Articles"]: !ENV [ARTICLES_SITE_BASE_URL, "/articles/"]
- !ENV [NAV_VIDEOS, "Videos"]:
- index.md
- "This Week in Privacy": https://discuss.privacyguides.net/c/announcements/livestreams/9414
- playlists.md
- !ENV [NAV_FORUM, "Forum"]: "https://discuss.privacyguides.net/"
- !ENV [NAV_WIKI, "Wiki"]:

6
theme/icons/custom/green-flag.svg
Normal file
@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" viewBox="0 0 36 36" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xml:space="preserve" xmlns:serif="http://www.serif.com/" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2;">
<path d="M13,34C13,34 13,36 11,36C9,36 9,34 9,34L9,2C9,2 9,0 11,0C13,0 13,2 13,2L13,34Z" style="fill:rgb(102,117,127);fill-rule:nonzero;"/>
<path d="M11,4C11,1.8 12.636,0.75 14.636,1.667L31.363,9.334C33.363,10.251 33.363,11.751 31.363,12.667L14.636,20.334C12.636,21.25 11,20.2 11,18L11,4Z" style="fill:rgb(69,221,46);fill-rule:nonzero;"/>
</svg>
After Width: | Height: | Size: 797 B |
66
videos/posts/age-verification-is-a-privacy-nightmare.md
Normal file
@@ -0,0 +1,66 @@
---
title: |
Age Verification is a Privacy Nightmare...
date:
created: 2025-08-15T20:00:00Z
authors:
- jordan
description: |
Age verification laws and propositions forcing platforms to restrict content accessed by children and teens have been multiplying in recent years. The problem is, implementing such measures necessarily requires identifying each user accessing this content, one way or another. This is bad news for your privacy.
readtime: 11
thumbnail: https://neat.tube/lazy-static/previews/90d80b0a-48a9-4c8f-b4c3-74866afa3c49.jpg
embed: https://neat.tube/videos/embed/aR4toTWJpcBZamUdQQpGRu
peertube: https://neat.tube/w/aR4toTWJpcBZamUdQQpGRu
youtube: https://www.youtube.com/watch?v=dczrLhSKO_A
---

## Sources

- 0:05 <https://www.gov.uk/government/collections/online-safety-act>
- 0:08 <https://www.conseil-etat.fr/Pages-internationales/english/news/pornographic-websites-the-order-requiring-user-age-verification-is-maintained>
- 0:11 <https://www.infrastructure.gov.au/media-communications/internet/online-safety/social-media-minimum-age>
- 0:19 <https://help.withpersona.com/articles/1DfUf0PhctXPDY8s1sbOaI/>
- 0:46 <https://www.eff.org/deeplinks/2025/01/impact-age-verification-measures-goes-beyond-porn-sites>
- 0:57 <https://www.reddit.com/r/AlJazeera/>
- 1:16 <https://www.woodhullfoundation.org/fact-checked/online-age-verification-is-not-the-same-as-flashing-your-id-at-a-liquor-store/>
- 1:34 <https://duckduckgo.com/?t=ffab&q=on+device+age+verification&ia=web>
- 1:46 <https://www.idnow.io/>
- 1:46 <https://withpersona.com/>
- 2:11 <https://www.grandviewresearch.com/horizon/outlook/identity-verification-market/north-america>
- 2:28 <https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition>
- 3:08 <https://techcrunch.com/2025/07/26/dating-safety-app-tea-breached-exposing-72000-user-images/>
- 3:20 <https://www.404media.co/id-verification-service-for-tiktok-uber-x-exposed-driver-licenses-au10tix/>
- 3:27 <https://www.forbes.com/sites/paultassi/2025/07/31/the-uks-internet-age-verification-is-being-bypassed-by-death-stranding-2-garrys-mod/>
- 3:30 <https://www.youtube.com/watch?v=BfyF3ZUYtyQ>
- 3:33 <https://cybernews.com/security/developer-protests-uk-age-gating-with-mock-mp-ids/>
- 3:56 <https://www.theverge.com/news/650493/discord-age-verification-face-id-scan-experiment>
- 4:09 <https://www.gov.uk/government/collections/online-safety-act>
- 4:26 <https://edition.cnn.com/2025/08/13/tech/youtube-ai-age-verification>
- 4:27 <https://www.nme.com/news/music/youll-now-have-to-verify-your-age-to-access-certain-content-on-spotify-3881717>
- 4:28 <https://arstechnica.com/tech-policy/2025/07/reddit-starts-verifying-ages-of-uk-users-to-comply-with-child-safety-law/>
- 4:36 <https://web.archive.org/web/20250120205725/https://www.thegreenlivingforum.net/forum/viewtopic.php?f=2&t=114519>
- 4:42 <https://action.freespeechcoalition.com/age-verification-resources/state-avs-laws/>
- 4:51 <https://www.parl.ca/legisinfo/en/bill/44-1/s-210>
- 4:53 <https://digital-strategy.ec.europa.eu/en/funding/call-tenders-development-consultancy-and-support-age-verification-solution>
- 5:22 <https://www.infrastructure.gov.au/media-communications/internet/online-safety/social-media-minimum-age>
- 5:54 <https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer>
- 6:08 <https://www.404media.co/uk-users-need-to-post-selfie-or-photo-id-to-view-reddits-r-israelcrimes-r-ukrainewarfootage/>
- 6:16 <https://www.reddit.com/r/IsraelCrimes/>
- 6:18 <https://www.reddit.com/r/lgbt/comments/1m8ipus/how_is_this_okay_reddit_seems_to_be_classifying/>
- 6:20 <https://www.reddit.com/r/AlJazeera/>
- 6:34 <https://en.wikipedia.org/wiki/Criminalization_of_homosexuality>
- 6:50 <https://www.cbc.ca/news/world/facebook-clarifies-breastfeeding-pics-ok-updates-rules-1.2997124>
- 6:56 <https://www.plannedparenthood.org/learn/teens>
- 7:36 <https://www.reddit.com/r/Twitter/comments/1mo4lmn/i_made_a_chrome_extension_to_bypass_age/>
- 7:37 <https://x.com/DanySterkhov/status/1948665431633404170>
- 7:43 <https://arstechnica.com/tech-policy/2025/07/vpn-use-soars-in-uk-after-age-verification-laws-go-into-effect/>
- 7:48 <https://play.google.com/store/apps/details?id=ch.protonvpn.android>
- 8:00 <https://cybernews.com/security/developer-protests-uk-age-gating-with-mock-mp-ids/>
- 8:12 <https://www.youtube.com/watch?v=3vXZzRCc8WA>
- 8:33 <https://withpersona.com/>
- 8:42 <https://www.edps.europa.eu/data-protection/data-protection/glossary/d_en>
- 8:50 <https://www.forbes.com/sites/daveywinder/2025/08/10/google-data-breach---august-8-email-warnings-now-confirmed/>
- 8:52 <https://www.bleepingcomputer.com/news/security/manpower-staffing-agency-discloses-data-breach-after-attack-claimed-by-ransomhub/>
- 8:55 <https://techcrunch.com/2025/01/15/powerschool-data-breach-victims-say-hackers-stole-all-historical-student-and-teacher-data/>
- 9:33 <https://www.privacyguides.org/articles/2025/05/06/age-verification-wants-your-face/>
- 10:03 <https://www.youtube.com/watch?v=uSliYzklo1w>

37
videos/posts/privacy-is-power.md
Normal file
@@ -0,0 +1,37 @@
---
title: |
Privacy is Power. And You're Giving Yours Away.
date:
created: 2025-08-29T01:00:00Z
authors:
- jordan
description: |
Privacy isn't about hiding secrets - it's about power. In this video, we explain why thinking you "have nothing to hide" is a dangerous misconception, especially in our ever-connected digital age. Taking back your privacy is easier than you might think!
readtime: 4
thumbnail: https://neat.tube/lazy-static/previews/c2bb2266-f508-4cb6-993c-c458585cb230.jpg
embed: https://neat.tube/videos/embed/vVECH95JDrM4pQf8vP612a
peertube: https://neat.tube/w/vVECH95JDrM4pQf8vP612a
youtube: https://www.youtube.com/watch?v=fPYsIJeN5WE
---

## Sources

- 0:01 <https://www.forbes.com/sites/dereksaul/2023/10/25/meta-earnings-record-profits-sales-as-ads-stay-robust-during-zuckerbergs-year-of-efficiency/>
- 0:03 <https://apnews.com/article/google-alphabet-earnings-artificial-intelligence-antitrust-30a75937bfbd9a4dfcee91cd4594cd59>
- 0:05 <https://www.wired.com/story/openai-valuation-500-billion-skepticism/>
- 1:53 <https://en.wikipedia.org/wiki/General_Data_Protection_Regulation>
- 2:07 <https://duckduckgo.com/?q=privacy+definition&ia=web>
- 2:18 <https://www.euronews.com/>
- 2:22 <https://edition.cnn.com/>
- 2:25 <https://www.france24.com/en/europe/>
- 2:28 <https://www.db.com/index?language_id=3>
- 2:39 <https://apnews.com/hub/europe>
- 3:07 <https://myaccount.google.com/>
- 3:41 <https://www.privacyguides.org/en/>
- 3:55 <https://www.zdnet.com/article/not-just-youtube-google-is-using-ai-to-guess-your-age-based-on-your-activity-everywhere/>
- 3:56 <https://www.bleepingcomputer.com/news/security/google-to-verify-all-android-devs-to-protect-users-from-malware/>
- 3:57 <https://apnews.com/article/age-verification-kids-social-media-privacy-speech-1cf99c96ab6b461cf7612d312e111e79>
- 3:59 <https://www.wired.com/story/europe-break-encryption-leaked-document-csa-law/>
- 4:01 <https://www.forbes.com/sites/larsdaniel/2024/12/19/eus-chat-control-the-end-of-private-messaging-as-we-know-it/>
- 4:02 <https://www.abc.net.au/news/2024-11-28/social-media-age-ban-passes-parliament/104647138>
- 4:04 <https://www.bleepingcomputer.com/news/security/malicious-android-apps-with-19m-installs-removed-from-google-play/>