We are at a historic moment where countries are introducing regulations on content distributed online and via social media. In Europe, one such regulation is the Digital Services Act (DSA), which is still in its early phases; we have yet to see its full impact and enforcement in action.
Content moderation is a necessity today: with billions of people communicating in real time come risks of harm and abuse. However, one should bear in mind that moderation may bring controversy. Someone, for example, may call it out as "censorship" when the reasons for removing or modifying content are political or a matter of opinion. It is well known how subjective views can be, and views that seem objective may only be so to those who regard them as such. But how to ground and anchor these fundamentals is not the point of this analysis.
The new Australian proposal to combat disinformation and misinformation is another interesting one. According to it:
Misinformation is false, misleading, or deceptive content that has the potential to cause serious harm, such as a threat to public health or damage to infrastructure.
Disinformation, on the other hand, is false information (misinformation) disseminated deliberately, with the intent to mislead.
Like the European Digital Services Act (and likely future implementations across the continent), this proposal is still based on an outdated approach, in which the role of truthful information and its potential use in propaganda is not sufficiently considered. This is a point I emphasize in my book Propaganda, and one acknowledged in textbooks on the subject, as well as by the US military.
Returning to the Australian law, serious harm is defined to include: threats to the electoral process or to public health, the spread of hatred against social groups, and damage to critical infrastructure or the economy.
Excluded from these provisions are professional news agencies, satire, and scientific or artistic works. So, to circumvent the law, would it suffice to set up a university, say a Polytechnic Institute of Propaganda Distribution?
As in Europe, digital platforms would be required to moderate content according to codes they have developed and submitted to the regulator, which can request changes if necessary. Platforms are not obliged to block content or user accounts, as long as these are not linked to the spread of disinformation. Heavy penalties are foreseen for non-compliance.
Laws must keep up with social, political, and technological developments. Today, digital platforms can be a source of, or a space for, subversion, and even a breeding ground for riots and similar activity. However, it is worth keeping in mind the values of freedom of speech and expression. Parts of Europe, particularly those that endured decades of communism, should be especially mindful of this as they adopt such laws. The experience of censorship and oppression should serve as a reminder of the dangers of over-regulating public discourse, even in the name of combating disinformation.
—
An additional question. This Substack is not funded. Should you want to help or support my work, consider simply ordering my books. Aside from that, I am considering offering a support option on this Substack. Would you be open to contributing financially to my work, either on an ongoing basis or for some gated content (like this week's)? Feel free to let me know: me@lukaszolejnik.com
And if you need consulting (or similar) work, or are otherwise aware of possible engagements, feel free to let me know as well. Thanks!