Analysis of Privacy Risks in Generative AI and the Use of Secure Systems

Introduction

Recent developments in generative artificial intelligence show a serious conflict between the usefulness of large language models (LLMs) and the need to protect personal information.

Main Body

The leak of personal data in LLMs is mainly caused by the use of huge datasets collected from the internet during training. Evidence shows that models like Google Gemini and ChatGPT can reproduce exact contact details, such as phone numbers and addresses, even when that data was meant to be private. Although developers have added safety filters, researchers emphasize that these are often bypassed through clever prompting. Furthermore, it is currently very difficult to remove specific personal data from a trained model, which makes it hard to comply with privacy laws such as the GDPR.

To address these problems, Meta has introduced 'Incognito Chat' in WhatsApp using a 'Private Processing' system. This technology uses Trusted Execution Environments (TEEs) to ensure that AI processing happens in a secure cloud where the company cannot see the user's messages. This is different from other 'incognito' modes that still save logs for several days. However, this change creates a new risk: a lack of accountability. Legal experts assert that if there are no logs, it may be impossible to investigate cases where AI causes serious harm or illegal activity.

At the same time, new AI assistants like Poppy rely on combining data from calendars, emails, and locations to help users. While these services claim they do not save data, the industry is gradually moving toward 'on-device processing.' This means the AI works directly on the user's phone or computer to reduce the risks of storing sensitive data in the cloud.

Conclusion

The AI industry is currently moving toward more secure and temporary processing methods to reduce the constant risk of data leaks and unauthorized storage.

Learning

πŸ’‘ The 'B2 Jump': Moving from Basic to Precise

At an A2 level, you describe things simply. To reach B2, you need to stop using "general" words and start using specific verbs and connectors that show a logical relationship between ideas.
πŸš€ Power-Up 1: Replacing 'Say' and 'Think'

In the text, the author doesn't just say "experts say." They use high-level alternatives. Look at the difference:

  • A2 (Basic): "Legal experts say that there are risks."
  • B2 (Professional): "Legal experts assert that there are risks."

Why this matters: Assert implies a strong, confident statement based on a position of authority. Using verbs like emphasize or assert instead of say instantly makes you sound more fluent and academic.

πŸš€ Power-Up 2: The Logic of "While"

Notice this sentence: "While these services claim they do not save data, the industry is gradually moving toward on-device processing."

The B2 Trick: Use "While..." at the start of a sentence to create a contrast.

  • A2 Style: "They say they don't save data. But the industry is changing." (Two short, choppy sentences).
  • B2 Style: "While [Point A], [Point B]." (One sophisticated, flowing sentence).

πŸš€ Power-Up 3: Precise Adverbs for Trend Analysis

B2 students describe how something happens, not just that it happens.

  • The phrase: "...gradually moving toward..."
  • Analysis: Instead of saying "The industry is changing," adding gradually tells us the speed and nature of the change. It transforms a simple fact into a detailed observation.

Quick Reference Guide for your next writing:

| Instead of... | Try using...            | Context                        |
|---------------|-------------------------|--------------------------------|
| Say / Think   | Assert / Emphasize      | When giving a strong opinion   |
| But           | While / However         | When contrasting two facts     |
| Changing      | Gradually moving toward | When describing a slow process |

Vocabulary Learning

conflict (n.)
A serious disagreement or clash between two things.
Example: The conflict between the usefulness of large language models and the need to protect personal information is a major issue.

datasets (n.)
Large collections of data used for analysis or training.
Example: The leak of personal data in LLMs is mainly caused by the use of huge datasets collected from the internet.

bypass (v.)
To avoid or get around something, especially a rule or barrier.
Example: Developers added safety filters, but researchers say these are often bypassed through clever prompting.

accountability (n.)
The obligation to explain actions and accept responsibility for them.
Example: A lack of accountability may make it impossible to investigate cases where AI causes serious harm.

temporary (adj.)
Lasting for only a limited period of time.
Example: The industry is moving toward on-device processing, which is a more secure and temporary method.

unauthorized (adj.)
Not permitted or approved by authority.
Example: The new AI assistants aim to reduce the risk of unauthorized storage of sensitive data in the cloud.

processing (n.)
The act of handling or manipulating data to produce a result.
Example: Trusted Execution Environments ensure that AI processing happens in a secure cloud.

incognito (adj.)
Hidden or disguised; not revealing one's identity.
Example: Meta's Incognito Chat uses a Private Processing system to keep user messages hidden from the company.