TechLetters #40 - Designing for privacy is hard, privacy PR is a challenge - some companies would perhaps benefit from advice; Automotive cybersecurity consortium; yescrypt; Quantum errors
Editorial
Designing for privacy is hard. Building a strong security and privacy posture is even harder if one wants to build a public image on it. This matters for strategic communication and PR. Companies that want a consistent message must understand the technologies and new developments (including the ones they introduce themselves), but also be competent at anticipating where public opinion will go. We have seen this recently in how Google is managing the Privacy Sandbox, and how Apple is managing its proposal to fingerprint photos in order to identify illegal content.
This is hard.
It requires years of accumulated competence, and the right people and skills.
Security
Fingerprinting software versions: Windows, antivirus, and more. (link)
Automotive cybersecurity group. Companies form a consortium to create the future rules of the road in automotive cybersecurity. Methods, but also (threat) information sharing. We shall see what comes out of it, and in what ways future cars will end up hacked (and they will!). “In a connected car, parts such as the engine, motors and brakes are electronically controlled. Data on their operational status is sent over the internet. If there is a security hole in the software that manages the data, it could be intercepted or the car itself seized by an outside operator.” (link)
Politically motivated cyberattacks. The recent cyberattacks on Iranian infrastructure were politically motivated. The source was domestic (perhaps the opposition?), not foreign. They targeted, among others, the railways. Politicians’ data was leaked, and data was destroyed (a wiper was in use). (report, news cover)
Yescrypt. Debian 11 is released with yescrypt as the new default for hashing system passwords.
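Yescrypt hashes carry the “$y$” prefix in /etc/shadow. A minimal sketch of producing and verifying one via Python’s crypt module, assuming a system whose crypt(3) comes from yescrypt-enabled libxcrypt (as shipped with Debian 11); the salt string below is purely illustrative:

```python
import crypt  # stdlib wrapper around the system's crypt(3)

# Assumes yescrypt-enabled libxcrypt (e.g. Debian 11). The "$y$" prefix
# selects yescrypt; "j9T" is the common default parameter string; the
# salt is illustrative only.
setting = "$y$j9T$abcdefghijklmnopqrstuv"
hashed = crypt.crypt("hunter2", setting)
print(hashed)  # the full "$y$..." string, as stored in /etc/shadow

# Verification reuses the stored hash itself as the setting string.
assert crypt.crypt("hunter2", hashed) == hashed
```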
Privacy
Firefox enhances tracker controls. Mozilla’s Firefox introduced new ways of clearing cookies and other tracking material.
More about Apple’s photo fingerprinting technology. The tech is meant to detect illegal image content, and now we know a bit more from the disclosed document. One assumption is that (yet-undisclosed) “security researchers/auditors” will be able to inspect the system to check that it provides what it claims. Additionally, the guarantees in the threat model apparently rest on requiring “bodies subject to the jurisdiction of a minimum two independent governments” in order to insert images into the scanned set, plus an undisclosed process in which Apple reviewers play a role. So in sum we have a rough oversight/governance model based on two governments plus Apple employees. In general, the policy and governance model is not clear. What is the oversight of a system running at a potential one-billion-user scale? The safety margin for the false-positive rate is said to be one in a trillion. Details of how this number is reached are not disclosed.
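Apple has not disclosed the derivation, but figures like one in a trillion typically come from thresholding: an account is flagged only after some number of independent per-image matches, which pushes the per-account false-positive rate far below the per-image one. A minimal sketch of that arithmetic; every number below (photo count, per-image rate, threshold) is a hypothetical assumption, not a disclosed Apple parameter:

```python
from math import comb

def prob_false_flag(n, t, p, max_terms=500):
    """P(at least t of n independent per-image checks false-match).

    Sums the binomial tail term by term; with tiny p the terms decay
    quickly, so a few hundred terms are plenty.
    """
    q = 1.0 - p
    term = comb(n, t) * (p ** t) * (q ** (n - t))  # P(X = t)
    total = 0.0
    for k in range(t, min(n, t + max_terms)):
        total += term
        term *= (n - k) / (k + 1) * (p / q)  # P(X = k+1) from P(X = k)
    return total

# All hypothetical: 10,000 photos in an account, a one-in-a-million
# per-image false-match rate, and a 30-match flagging threshold.
print(prob_false_flag(n=10_000, t=30, p=1e-6))
```

Under these toy assumptions the per-account rate lands many orders of magnitude below one in a trillion; the point is only that a threshold, not per-image accuracy alone, is what makes such claims arithmetically possible.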
Other
Error-corrected quantum computing. Are you hearing about the next quantum computer being built or tested? Great, but keep in mind that they are all small toys, essentially useless today. A working computation process needs logical qubits, and this means quantum error correction. Now Google’s team has demonstrated some elements of error correction. So progress, a bit. (papers: 1, 2)
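The core idea is easy to see in a classical toy analogue of the repetition code used in such experiments: encode one logical bit into several physical bits, decode by majority vote, and the logical error rate falls rapidly as the code distance grows. A minimal sketch (a classical simulation only, nothing quantum about it; the 5% physical error rate is an arbitrary choice):

```python
import random

# Classical toy analogue of the bit-flip repetition code: one logical
# bit lives in n physical bits; each physical bit flips independently
# with probability p; majority vote decodes. Logical errors are
# suppressed as the code distance n grows, provided p is small enough.
def logical_error_rate(n_physical, p, trials=100_000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_physical))
        if flips > n_physical // 2:  # majority corrupted: decoding fails
            failures += 1
    return failures / trials

for distance in (1, 3, 5, 7):
    print(distance, logical_error_rate(distance, p=0.05))
```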
Medical AI easily learns to detect racial identity. “We demonstrate that medical AI systems can easily learn to recognise racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate. We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalises to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice” (pdf)