TechLetters #136 Reviewing for security and privacy. ChatGPT for cybercrime. Privacy Sandbox, AI, sending e-mails a problem in 2023.
Security
Sending military emails is a tricky business, plagued with significant complexity. US and Dutch military personnel frequently make typos: instead of ".mil" or ".nl", they type ".ml" (Mali's top-level domain). Actual mail is leaking (personal data, passports, procurement information…).
ChatGPT for purely illegal stuff, with no ethical restrictions. A new Large Language Model (LLM/AI) built for cybercriminals. It facilitates phishing and more, greatly improving their work, and it will not refuse to answer the way ChatGPT or Bard do.
The U.S. adds more spyware vendors, Cytrox and Intellexa, to its sanctions entity list: "trafficking in cyber exploits used to gain access to information systems, thereby threatening the privacy and security of individuals and organizations worldwide".
The US cyber strategy implementation plan foresees more operations, in multiple areas: state cyber operations, cybercriminals. The goal is also to “Defeat Ransomware” by targeting the WHOLE ecosystem, including forums.
Leaked keys are a big problem. In May it was disclosed that MSI source code leaked. The leak included an Intel OEM private key used to sign firmware. This may have enormous consequences for security. To put it simply, bad software may be recognised as “legitimate” software. “Approximately 20% of firmware updates contain at least two or three known (previously disclosed) vulnerabilities, according to Binarly Platform data (based on a study of enterprise-grade vendors). But when we are talking about such data breaches with leaked keys, they usually impact the ecosystem far longer than other vulnerabilities. Due to the complexity of the supply chain, revocation of such a key can take years, or be impossible.”
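Why does one leaked signing key matter so much? A minimal sketch below illustrates the point. It is not how firmware signing actually works (real vendors use asymmetric RSA/ECDSA keys); HMAC-SHA256 stands in here just to show that whoever holds the signing key can produce firmware the device must accept as "legitimate". All names and values are hypothetical.

```python
# Illustrative sketch only: HMAC stands in for real asymmetric firmware
# signing. The point: possession of the signing key IS the trust anchor.
import hashlib
import hmac

SIGNING_KEY = b"oem-private-key"  # hypothetical stand-in for a leaked OEM key

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Produce a signature over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def device_accepts(image: bytes, signature: bytes, key: bytes) -> bool:
    """Device-side check: recompute and compare in constant time."""
    return hmac.compare_digest(sign_firmware(image, key), signature)

# Vendor signs a legitimate update; the device accepts it.
legit = b"vendor firmware v1.2"
sig = sign_firmware(legit, SIGNING_KEY)
assert device_accepts(legit, sig, SIGNING_KEY)

# An attacker holding the leaked key signs arbitrary code,
# and the device cannot tell the difference.
malware = b"malicious implant"
forged = sign_firmware(malware, SIGNING_KEY)
assert device_accepts(malware, forged, SIGNING_KEY)
```

This is also why revocation is so hard: the verifying side (here, the `key` baked into `device_accepts`) is burned into shipped devices across the supply chain.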
How to review for security or privacy? That's a big challenge in the industry. Input from Chrome Security: "not practical for security engineers to scrutinize every change". "It’s our job to identify and articulate security risks, and advocate for better approaches, but sometimes another concern dominates. If deviations from our advice are well justified we shouldn’t feel ignored". Some of it is helpful in reviewing for privacy. I developed a methodology in 2016 which I (!) still use today (with improvements). It often allowed me to identify the relevant privacy risks, and never let me miss an important point (see below).
Privacy
Reviewing for privacy. Some of my contributions about 'privacy review lessons learned'. Today I would expand on that in some places.
Google/Chrome is launching Privacy Sandbox (privacy-improving ad infrastructure) in the Chrome web browser. They're testing various configuration/consent approaches. You can manually enable it if you're curious. It's safe to say that I'm among the best informed/trained people in this space when it comes to privacy, so let's dive in. The Topics API analyses your web browsing history and 'offers' a website some interests based on it. It was vetted for privacy (independently), and it may still leak some data, assuming that you browse one website very often and it observes the data gradually. The interesting thing is Protected Audience. It aims to be an on-device 'matching' of ads with user interests. The goal is an isolated environment where information isn't leaking and nobody is able to 'collect' any data. This is very different from cookies. It will be interesting to watch, but the design is fragile. The immediate issue: the identified user interests ARE NOT displayed for me to inspect in Chrome, and the data is removed after 30 days, so I cannot audit past usage either. That is, I'd say, almost certainly non-compliant with GDPR. For now a lot of the privacy guarantees are relaxed, not tight or strong. Keep that in mind. I enabled it as I'm genuinely curious about the system that is replacing the privacy-invasive uses of current ad technology like Real-Time Bidding. Study of privacy was part of my PhD!
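To make the Topics API mechanics concrete, here is a toy model of the selection logic described in the public explainer (weekly epochs, top topics per epoch, a ~5% chance of returning a random topic for plausible deniability). This is a simplification with a made-up taxonomy, not Chrome's implementation; the gradual-leak concern above comes from a site observing one topic per epoch, week after week.

```python
# Toy model of Topics-API-style topic selection. Parameters (top-5 per
# epoch, 5% random topic) follow the public explainer; taxonomy is made up.
import random
from collections import Counter

TAXONOMY = ["news", "sports", "travel", "fitness", "music", "cooking"]

def top_topics(epoch_history: list[str], k: int = 5) -> list[str]:
    """Per epoch: the k most frequent topics inferred from visited sites."""
    return [t for t, _ in Counter(epoch_history).most_common(k)]

def topic_for_caller(epoch_history: list[str], rng: random.Random) -> str:
    """What one calling site observes for one epoch: usually a genuine
    top topic, but 5% of the time a uniformly random one."""
    if rng.random() < 0.05:
        return rng.choice(TAXONOMY)
    return rng.choice(top_topics(epoch_history))

week = ["news", "news", "sports", "travel", "news", "sports"]
print(top_topics(week))  # → ['news', 'sports', 'travel']
print(topic_for_caller(week, random.Random(0)))  # one of the taxonomy topics
```

A site that can call this every epoch slowly accumulates a profile, which is exactly the gradual-observation leak noted above.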
A sensor that monitors the charge level of a car battery. It turned out it also sent the car's location (GPS) and other information to the manufacturer, in China.
Technology Policy
Companies commit to AI “responsibility” and security. Large firms signed up to White House rules for "responsible" AI use, including AI security: "it is vital that the model weights be released only when intended and when security risks are considered".
Other
In case you feel it's worth it to forward this content further: