mirror of https://github.com/privacyguides/i18n.git synced 2025-06-18 16:54:21 +00:00

New Crowdin translations by GitHub Action

This commit is contained in:
Crowdin Bot
2024-11-27 00:47:07 +00:00
parent dfc0d27923
commit bd1bc8691c


@ -2,7 +2,7 @@
meta_title: Recommended AI Chat: Private ChatGPT Alternatives - Privacy Guides
title: AI Chat
icon: material/assistant
description: Unlike OpenAI's ChatGPT and its Big Tech competitors, these AI tools run locally so your data never leaves your desktop device.
cover: ai-chatbots.webp
---
@ -10,23 +10,23 @@ cover: ai-chatbots.webp
- [:material-server-network: Service Providers](basics/common-threats.md#privacy-from-service-providers){ .pg-teal }
- [:material-account-cash: Surveillance Capitalism](basics/common-threats.md#surveillance-as-a-business-model){ .pg-brown }
- [:material-close-outline: Censorship](basics/common-threats.md#avoiding-censorship){ .pg-blue-gray }
Since the release of ChatGPT in 2022, interactions with Large Language Models (LLMs) have become increasingly common. LLMs can help us write better, understand unfamiliar subjects, or answer a wide range of questions. They can statistically predict the next word based on a vast amount of data scraped from the web.
## Privacy Concerns About LLMs
Data used to train AI models, however, includes a massive amount of publicly available data scraped from the web, which can include sensitive information like names and addresses. Cloud-based AI software often [collects your inputs](https://openai.com/policies/row-privacy-policy), meaning your chats are not private from them. This practice also introduces a risk of data breaches. Furthermore, there is a real possibility that an LLM will leak your private chat information in future conversations with other users.
If you are concerned about these practices, you can either refuse to use AI, or use [truly open-source models](https://proton.me/blog/how-to-build-privacy-first-ai) which publicly release and allow you to inspect their training datasets. One such model is [OLMoE](https://allenai.org/blog/olmoe-an-open-small-and-state-of-the-art-mixture-of-experts-model-c258432d0514) made by [Ai2](https://allenai.org/open-data).
Alternatively, you can run AI models locally so that your data never leaves your device and is therefore never shared with third parties. As such, local models are a more private and secure alternative to cloud-based solutions and allow you to share sensitive information with the AI model without worry.
## AI Models

### Hardware for Local AI Models
Local models are also fairly accessible. It's possible to run smaller models at lower speeds on as little as 8GB of RAM. Using more powerful hardware such as a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory offers the best experience.
LLMs can usually be differentiated by the number of parameters, which can vary from 1.3B to 405B for open-source models available to end users. For example, models below 6.7B parameters are only good for basic tasks like text summaries, while models between 7B and 13B are a great compromise between quality and speed. Models with advanced reasoning capabilities are generally around 70B.
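As a rough illustration of why parameter count drives hardware requirements: the memory needed just to hold a model's weights is approximately the parameter count multiplied by the bytes per weight (2 bytes for FP16, half a byte for 4-bit quantized models). The sketch below is a hypothetical helper (the function name is ours, not from any library) and ignores runtime overhead such as the KV cache and activations, so treat its numbers as lower bounds.

```python
def estimate_model_memory_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Rough memory estimate for loading an LLM's weights.

    Rule of thumb: memory ~= parameter count x bytes per weight.
    Common precisions: 16 bits (FP16), or 8/4 bits for quantized models.
    Runtime overhead (KV cache, activations) is NOT included.
    """
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8)
    return bytes_total / 1e9  # convert bytes to gigabytes

# A 7B model quantized to 4 bits fits in roughly 3.5 GB of memory,
# which is why it can run on a machine with 8GB of RAM...
print(f"7B @ 4-bit:   ~{estimate_model_memory_gb(7, 4):.1f} GB")

# ...while a 70B model at FP16 needs around 140 GB, far beyond
# consumer hardware unless it is heavily quantized or split across GPUs.
print(f"70B @ 16-bit: ~{estimate_model_memory_gb(70, 16):.1f} GB")
```

This back-of-the-envelope arithmetic also explains why quantization matters so much for local use: dropping from 16-bit to 4-bit weights cuts the memory footprint by a factor of four at a modest cost in output quality.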