TechLetters ☕️ AI security, development, and safety dismantlement. Misinformation tops Global Risks. Europol wants weak communication security. DeepSeek is the new cool.
Security
Security of agentic AI via additional inference-time compute? Security of LLMs is not a solved problem. Ensuring that agentic models reliably browse the web, send emails, or upload code is akin to ensuring self-driving cars drive without accidents: an agent sending a wrong email or creating security vulnerabilities could have significant real-world consequences. Additionally, LLM agents face unique challenges from adversaries, who may manipulate the inputs these agents encounter while browsing, reading files, or analyzing images. OpenAI's paper argues that giving reasoning models more inference-time compute can be traded for improved adversarial robustness. https://cdn.openai.com/papers/trading-inference-time-compute-for-adversarial-robustness-20250121_1.pdf
World Economic Forum Global Risks Report 2025 is quite interesting this time. State-based armed conflict and misinformation/disinformation top the global risks, while AI (2%) and cyberattack risk (2%) are not among the top concerns. Over the immediate horizon, however, mis/disinformation and cyberattacks do top the list, and this is not changing. The impact of both is likewise viewed as severe. More experienced respondents (over 30 years old) are consistent in these views.
Technology Policy
Europol chief against encrypted communication. "Technology giants must do more to co-operate with law enforcement on encryption or they risk threatening European democracy, according to the head of Europol". Does she consider end-to-end encryption incompatible with democracy?
US President Donald Trump has annulled Biden’s artificial intelligence (AI) executive order. The one “To reduce the risks that artificial intelligence poses to consumers, workers and national security.” The order, which required AI developers to report safety testing results and established federal oversight standards, has been removed from the White House website. This marks a shift toward deregulation and a significant change in federal AI policy, prioritizing fewer government constraints. It is a signal of an AI policy revamp.
The United States is accelerating the race in AI development with an initiative ("Stargate") that combines economic, technological, and strategic goals. With an investment of up to $500 billion over four years, the project will begin with an immediate allocation of $100 billion. Its objectives include building advanced AI infrastructure, addressing resource bottlenecks, and enhancing national security. Key priorities include expanding data centers, which are crucial for training advanced AI models and competing effectively with global rivals. To meet the energy demands of AI, the initiative will also involve constructing power generation facilities. It is estimated that around 100,000 jobs will be created in the near future. This initiative aligns with the United States’ goals for industrial redevelopment and reaffirms its position as a leader in shaping the future of artificial intelligence.
US vs China in AI. China's DeepSeek has developed a low-cost AI model comparable to OpenAI's GPT-4, including advanced "chain of thought" reasoning. It can run locally on standard computers or modern smartphones without internet access, ensuring complete privacy by eliminating data transfers to servers: a true "private AI." While the most powerful models still require substantial resources, this progress reduces dependence on centralized "AI factories." China's rapid AI advancements, possibly spurred by U.S. sanctions, are reshaping the field. Affordable, powerful tools accessible to the masses challenge traditional business models, as they require less funding and infrastructure. Meanwhile, in the U.S., Trump dismantled Biden-era "secure AI" regulations, introducing a deregulated policy led by a Special Advisor for AI and Cryptocurrency. The broader goals remain unclear, but the shift underscores a new, unpredictable direction for global AI development. Oh, and here’s DeepSeek’s censorship analysis. https://huggingface.co/blog/leonardlin/chinese-llm-censorship-analysis
Other
Co-founder of French cryptocurrency/blockchain company Ledger, D. Balland, was kidnapped in France. The kidnappers reportedly sent a severed finger as part of a huge ransom demand. Police located and freed the victim, and the hunt for the kidnappers is on.
2025 just started. You can now use a GPT-4o-grade multimodal large language model for vision and speech tasks on a smartphone...? "surpasses GPT-4o, Gemini 1.5 Pro, and Claude 3.5 Sonnet for single image understanding"?!
Why are Community Notes superior to fact-checking?
- Community Notes uses crowdsourced input, ensuring diverse perspectives and reducing the influence of single entities.
- The system is superior to traditional fact-checking because it provides detailed context rather than simple true or false labels, allowing users to make informed decisions without feeling dictated to.
- Community Notes avoids bias through its crowdsourced model, which relies on contributions and ratings from a wide range of users with diverse perspectives. The algorithm ensures only notes agreed upon by users with historically differing views are surfaced, effectively filtering out polarized or one-sided content.
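The "agreed upon by users with historically differing views" idea in the last bullet can be sketched as a tiny matrix-factorization model. This is an illustrative toy, not the production Community Notes scorer: the data, hyperparameters, and regularization weights below are assumptions chosen to show the mechanism. Each rating is modeled as a global mean plus a user intercept, a note intercept, and a user-viewpoint times note-viewpoint term; a note whose helpfulness survives after the viewpoint term explains the partisan split ends up with a high intercept and gets surfaced.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_notes = 6, 3
# Toy ratings: 1 = helpful, 0 = not helpful, nan = no rating.
# Note 0 is rated helpful by BOTH camps; notes 1 and 2 split along camp lines.
ratings = np.array([
    [1, 1, np.nan],   # users 0-2: one "side"
    [1, 1, 0],
    [1, np.nan, 0],
    [1, 0, 1],        # users 3-5: the other "side"
    [1, 0, 1],
    [np.nan, 0, 1],
])

# Model: rating[u, n] ~ mu + b_u[u] + b_n[n] + f_u[u] * f_n[n]
mu = 0.0
b_u = np.zeros(n_users)           # user intercepts
b_n = np.zeros(n_notes)           # note intercepts ("bridged helpfulness")
f_u = rng.normal(0, 0.1, n_users) # latent user viewpoint
f_n = rng.normal(0, 0.1, n_notes) # latent note viewpoint alignment

lr, reg = 0.05, 0.03
obs = [(u, n, ratings[u, n]) for u in range(n_users)
       for n in range(n_notes) if not np.isnan(ratings[u, n])]

for _ in range(2000):  # plain SGD on squared error
    for u, n, r in obs:
        err = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
        mu += lr * err
        b_u[u] += lr * (err - reg * b_u[u])
        b_n[n] += lr * (err - 5 * reg * b_n[n])  # intercepts shrunk harder
        f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                          f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))

# The cross-camp note keeps a clearly higher intercept than the split notes,
# because the split notes' ratings are explained away by the viewpoint term.
print(np.round(b_n, 2))
```

The key design choice mirrored here: note intercepts are regularized harder than the viewpoint factors, so a note only earns a high intercept when its helpfulness cannot be explained by appealing to one side.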