Social Media

We, as a society, have decided to cede our data to machines running dangerous algorithms. This is not necessarily done with malicious intent. However, once we allow these algorithms to dictate the flow of information among humans, we will inevitably be misled in ways that are far more difficult to address. The sad reality is that, on average, we have been subliminally influenced by modern information in negative ways - by its source, its presentation, and its content.

Just thirty years ago, history and news were published in books and newspapers. If an individual wanted to publish alternative sequences of events or interpretations of the world to a large audience, they would have had to go through the bureaucracy of publishing. Today, any individual can, with little prior research, publish and propagate information to a massive audience with incredible ease. Furthermore, governments can similarly push propaganda to a target nationality, ethnicity, or other arbitrary categorisation. Trolls can fabricate information and incite mass hysteria within hours. Needless to say, the pipeline for this process is social media.

It seems that much of the discussion around the perils of social media highlights the impact of malicious human interaction. However, the aforementioned problem of misinformation on social media is greatly exacerbated when artificial intelligence becomes involved. The scale at which machines can synthesise information, and present it to humans with parameters that skew it, is many times greater than the scale at which humans can synthesise and spread information.

The language model GPT-3 can generate poetry that is indistinguishable from human-written poetry. I hypothesise that such a model could also generate political rhetoric that is indistinguishable from human rhetoric. Such a model could generate alternative sequences of historical events, current events, interpretations of events, and politics, thereby reinforcing fallacies. A botnet of accounts under the control of some model could effectively sway popular belief. In such an environment, I posit that everybody is vulnerable to accepting synthesised media as truth. If hyperrealistic media is published to compete with media captured from reality, and is optimised to be more sensational and thus more viral than the real media, then it becomes impossible to determine which media to absorb or believe on today’s social media platforms. Under the control of artificial intelligence, fallacies can become robust.

Anecdotally, I see how illiteracy is a major catalyst for social media misinformation. I am not referring to the inability to read; I am referring to the inability to analyse information published on the internet. Beyond the superficial content of a post, this means ignorance of metadata such as the account used to post, that account’s behaviour, or perhaps cryptographic signatures. It also means the inability to corroborate information, a skill that is neglected throughout primary and secondary education here in the United States. I see these forms of illiteracy in older generations, and I can understand why: they grew up in the age of bureaucratic publishing, where a 160-character statement could not imprint upon the minds of millions of people within hours. Skills to analyse data are paramount for modern-day literacy.

Regardless, there is a fundamental problem with social media that will handicap our efforts to combat misinformation in modern societies. We do not have control over our data - neither its egress, nor its ingress. The systems that dictate how information leaves our fingers and passes through our retinas are almost entirely opaque to us. Can we inspect the exact machine learning model that collates our data? Is information, along with its metadata, immutable? Do we have any guarantees that our information will not be misused and/or misconstrued by malicious actors running the social media platforms? For all of the major social media platforms, the answer to each of these questions is a resounding no. And to emphasise the last point in particular: social media platforms are dictatorships. There is little to no transparency regarding internal decisions, and private companies can largely take whatever arbitrary actions they wish regarding how they run their platforms.

I propose a solution. We should take ownership of our data and publish our statements onto platforms that we individually create and completely control. This blog is an example of such a platform. If we wish to absorb information, then we can set up a distributed “social media” of sorts, where we declaratively retrieve data (such as with RSS) at our own discretion.

Our trust should lie in the system that has never failed us: mathematics. With cryptography, we can easily sign our information digitally, so that people know exactly who published a statement, and with quantifiable certainty. By signing each other’s messages, we can show support for those statements. By signing each other’s cryptographic keys, we can raise the certainty that an “account” is legitimate. For much of the audience with an information technology background, these are well-established communication protocols; the concept of signing each other’s keys, for instance, is known as a “web of trust.” However, such practices are rare, especially outside of the computer science community. That is why, if you are GnuPG-capable, I plead that we at least communicate with signing enabled, or potentially even with encryption enabled.

In order for these systems to be transparent and verifiable, the source code needs to be freely available, and the programs need to be reproducible from that source code. This absolute transparency is the only way to trust a piece of software, and it is one of numerous reasons why I encourage free and open source software (FOSS) so vehemently.
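To make these ideas concrete, here is a minimal sketch of the declarative retrieval half of the proposal: pulling the items of an RSS feed on demand, using only the Python standard library. The feed URL is a placeholder of my own; any RSS 2.0 feed would do.

```python
# Declarative retrieval sketch: fetch an RSS 2.0 feed and list its items.
# Uses only the standard library; the URL below is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # hypothetical feed

def fetch_items(url):
    """Yield (title, link) pairs for every item in an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

for title, link in fetch_items(FEED_URL):
    print(title, "->", link)
```

And here is a similar sketch of the signing plea, driving GnuPG from Python. It assumes the gpg binary is installed and that a default signing key already exists in the local keyring; the message text is, of course, a placeholder.

```python
# Signing sketch: clearsign a statement with GnuPG, then verify it.
# Assumes gpg is installed and a default secret key is in the keyring.
import subprocess

def clearsign(message):
    """Wrap a statement in a human-readable GnuPG signature block."""
    result = subprocess.run(
        ["gpg", "--clearsign"],
        input=message.encode(),
        capture_output=True,
        check=True,  # raise if gpg cannot sign (e.g. no secret key)
    )
    return result.stdout.decode()

def verify(signed_message):
    """Check a clearsigned statement against the local keyring."""
    result = subprocess.run(
        ["gpg", "--verify"],
        input=signed_message.encode(),
        capture_output=True,
    )
    return result.returncode == 0  # zero exit status means a good signature

if __name__ == "__main__":
    signed = clearsign("I stand behind this statement.")
    print(signed)
    print("signature valid:", verify(signed))
```

Signing another person’s key - the “web of trust” mentioned above - works analogously, with gpg --sign-key followed by the key’s fingerprint.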

I quit social media three years ago, and my life is better for it. I encourage everybody to think critically about social media.