
style!: Make minor changes across recommendation pages for consistency (#2972)

- Grammar, Style, and Wording Changes
  - Remove commas where pauses or breaks in reading may not be needed
  - Reduce instances of comma splices by replacing commas with
    semicolons or em dashes where appropriate
  - Spell out abbreviations like E2EE for the first instance of the term
    on the page, then use the abbreviation for the subsequent instances
  - Add line breaks in card descriptions for a cleaner look,
    particularly for mentions of an accompanying blog review of a tool
  - Move more technical information from recommendation blurb to the
    description under the card
  - Format quotations from audits as block quotes
  - Standardize syntax for tooltips
  - Arrange download links according to the widely used order
    - Mobile app stores > alternative distribution methods (e.g.,
      GitHub) > developer-owned or -operated F-Droid repositories >>
      desktop platforms > Flathub >> browsers >> web
  - Shorten Chrome extension links
  - Standardize icon for web download links
  - For "Repository" buttons, embed direct links to each project's Readme
    to differentiate them from "Source Code" links (see the syntax example
    after this list)
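
For illustration, the standardized syntax looks like the following excerpt
from the diff below: the "Repository" button links to the project's Readme,
while the "Source Code" tooltip link points at the repository root.

```markdown
[:octicons-repo-16: Repository](https://github.com/LostRuins/koboldcpp#readme){ .md-button .md-button--primary }
[:octicons-code-16:](https://github.com/LostRuins/koboldcpp){ .card-link title="Source Code" }
```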

- Other Changes
  - Add GitHub releases link for IVPN
  - Mention CryptPad's official public instance and add link to list of
    public instances
  - Replace current link to Miniflux docs with a direct link to the end
    user docs
  - Update version of Newsboat documentation link
  - Update and reword guidance on finding the YouTube channel code for an
    RSS feed (see the URL pattern after this list)
  - Remove the F-Droid download link for Stingle since the F-Droid
    repository is not owned by the developer or dev team
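
For reference, the channel code mentioned above is the `channel_id` value in
YouTube's RSS feed URL, which generally follows this pattern (the placeholder
is illustrative):

```
https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>
```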

Signed-off-by: Jonah Aragon <jonah@privacyguides.org>
Signed-off-by: fria <fria@privacyguides.org>
Signed-off-by: Daniel Gray <dngray@privacyguides.org>
Author: redoomed1
Date: 2025-05-15 11:40:42 +00:00
Committed by: Daniel Gray
Parent: 353d1fed48
Commit: 06d2f0e3e1
32 changed files with 214 additions and 186 deletions

@@ -43,7 +43,7 @@ To run AI locally, you need both an AI model and an AI client.
There are many permissively licensed models available to download. [Hugging Face](https://huggingface.co/models) is a platform that lets you browse, research, and download models in common formats like [GGUF](https://huggingface.co/docs/hub/en/gguf). Companies that provide good open-weights models include big names like Mistral, Meta, Microsoft, and Google. However, there are also many community models and [fine-tuned](https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)) models available. As mentioned above, quantized models offer the best balance between model quality and performance for those using consumer-grade hardware.
-To help you choose a model that fits your needs, you can look at leaderboards and benchmarks. The most widely-used leaderboard is the community-driven [LM Arena](https://lmarena.ai). Additionally, the [OpenLLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) focuses on the performance of open-weights models on common benchmarks like [MMLU-Pro](https://arxiv.org/abs/2406.01574). There are also specialized benchmarks which measure factors like [emotional intelligence](https://eqbench.com), ["uncensored general intelligence"](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard), and [many others](https://www.nebuly.com/blog/llm-leaderboards).
+To help you choose a model that fits your needs, you can look at leaderboards and benchmarks. The most widely-used leaderboard is the community-driven [LM Arena](https://lmarena.ai). Additionally, the [OpenLLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) focuses on the performance of open-weights models on common benchmarks like [MMLU-Pro](https://arxiv.org/abs/2406.01574). There are also specialized benchmarks which measure factors like [emotional intelligence](https://eqbench.com), ["uncensored general intelligence"](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard), and [many others](https://nebuly.com/blog/llm-leaderboards).
## AI Chat Clients
@@ -66,7 +66,7 @@ To help you choose a model that fits your needs, you can look at leaderboards an
In addition to supporting a large range of text models, Kobold.cpp also supports image generators such as [Stable Diffusion](https://stability.ai/stable-image) and automatic speech recognition tools such as [Whisper](https://github.com/ggerganov/whisper.cpp).
-[:octicons-home-16: Homepage](https://github.com/LostRuins/koboldcpp){ .md-button .md-button--primary }
+[:octicons-repo-16: Repository](https://github.com/LostRuins/koboldcpp#readme){ .md-button .md-button--primary }
[:octicons-info-16:](https://github.com/LostRuins/koboldcpp/wiki){ .card-link title="Documentation" }
[:octicons-code-16:](https://github.com/LostRuins/koboldcpp){ .card-link title="Source Code" }
[:octicons-lock-16:](https://github.com/LostRuins/koboldcpp/blob/2f3597c29abea8b6da28f21e714b6b24a5aca79b/SECURITY.md){ .card-link title="Security Policy" }
@@ -123,14 +123,14 @@ Ollama simplifies the process of setting up a local AI chat by downloading the A
<div class="admonition recommendation" markdown>
-![Llamafile Logo](assets/img/ai-chat/llamafile.png){align=right}
+![Llamafile Logo](assets/img/ai-chat/llamafile.webp){align=right}
**Llamafile** is a lightweight, single-file executable that allows users to run LLMs locally on their own computers without any setup involved. It is [backed by Mozilla](https://hacks.mozilla.org/2023/11/introducing-llamafile) and available on Linux, macOS, and Windows.
Llamafile also supports LLaVA. However, it doesn't support speech recognition or image generation.
-[:octicons-home-16: Homepage](https://github.com/Mozilla-Ocho/llamafile){ .md-button .md-button--primary }
-[:octicons-info-16:](https://github.com/Mozilla-Ocho/llamafile#llamafile){ .card-link title="Documentation" }
+[:octicons-repo-16: Repository](https://github.com/Mozilla-Ocho/llamafile#readme){ .md-button .md-button--primary }
+[:octicons-info-16:](https://github.com/Mozilla-Ocho/llamafile#quickstart){ .card-link title="Documentation" }
[:octicons-code-16:](https://github.com/Mozilla-Ocho/llamafile){ .card-link title="Source Code" }
[:octicons-lock-16:](https://github.com/Mozilla-Ocho/llamafile#security){ .card-link title="Security Policy" }