Social Media

We, as a society, have decided to cede our data to robots implemented with haphazard and inhuman algorithms. This is not the fault of the programmers per se. However, when we allow these algorithms to dictate the flow of information between humans, we will be misled.

I would like to think that I can process information in a purely scientific way; however, I cannot escape human biases. The sad reality is that we, on average, have been subliminally influenced by modern information in negative ways - by its source, its presentation, and its content.

Just thirty years ago, news and history were published in books and newspapers - if somebody wanted to publish alternative sequences of events or interpretations of the world to a large audience, they would need to go through the bureaucracy of publishing. Today, any individual with little prior research can publish and propagate information to a massive audience. Governments can push propaganda to a target nationality, ethnicity, or other arbitrary categorization with incredible ease. Trolls can fabricate information and incite mass hysteria within hours. Needless to say, the pipeline for this process is social media.

It seems that much of the discussion around the perils of social media highlights the impact of malicious human interaction. However, the aforementioned problem of misinformation on social media is greatly exacerbated when artificial intelligence becomes involved. The scale at which information can be synthesized and presented to humans, with parameters that skew the information, is many times greater than the scale at which humans alone can spread it.

The language model GPT-3 can generate poetry that is indistinguishable from human-written poetry. I hypothesize that such a model could generate political rhetoric that is indistinguishable from human rhetoric. Such a model could generate alternate sequences of historical and current events, interpretations of those events, and political narratives, and thereby reinforce a fallacy. A botnet under the control of a language model can assemble a cohesive network of social media accounts and effectively sway popular belief - a phenomenon that both you and I will succumb to if we swim in the ocean of social media for too long. After all, if a belief exists that reinforces our prior experiences, why wouldn’t we eventually embrace that belief if it is perpetuated for long enough? Under the control of artificial intelligence, fallacies can become robust.

Anecdotally, I see how illiteracy is a major catalyst for social media misinformation. I am not referring to the inability to read; rather, I am referring to the inability to analyze internet information. Beyond the literal content of a post, I mean the surrounding metadata, such as the account used to post and that account’s behavior, or higher-level signals such as the domain name or evidence of unauthorized access. I am also referring to the inability to corroborate information. These skills are neglected at many stages of education here in the United States. I see these forms of illiteracy in older generations, and I can understand why: they grew up in the age of bureaucratic publishing, when a 160-character statement could not imprint itself upon the minds of millions of people within hours. These skills are absolutely necessary for modern-day literacy.

Regardless, there is a fundamental problem with social media that will handicap our efforts to combat misinformation in modern societies: we do not have control over data - neither its egress, nor its ingress. The systems that dictate how information leaves our fingers and passes through our retinas are almost entirely opaque to us. Can we inspect the exact machine learning model that collates our data? Is information, along with its metadata, immutable? Do we have any guarantees that our information will not be used and/or misconstrued by malicious actors running the social media platform? For all of the major social media platforms, the answer to each of these questions is a resounding no. And to emphasize the last point in particular: social media platforms are dictatorships. There is little to no transparency regarding internal decisions, and a company can take whatever arbitrary actions it wishes.

I propose a solution. We should take ownership of our data and publish our statements on platforms that we individually create and completely control. This blog is an example of such a platform. If we wish to absorb information, then we can set up a distributed “social media” of sorts, in which we declaratively retrieve data (such as with RSS) at our own discretion; a small sketch of this pull model appears below.

Our trust should lie in the system that has never failed us: mathematics. With cryptography, we can easily digitally sign our information so that people know exactly who published a statement, with quantifiable certainty. By signing each other’s messages, we can show support for those statements. By signing each other’s cryptographic keys, we can raise the certainty that an “account” is legitimate. These ideas are not new; the concept of signing each other’s keys, for instance, is known as a “web of trust.” However, these practices are rare, especially outside of the computer science community. That is why, if you are GnuPG-capable, I plead that we at least communicate with signing enabled, or even with encryption enabled; a sketch of the underlying signing operation also appears below.

For these systems to be transparent and verifiable, the source code needs to be freely available, and the programs need to be reproducible from that source code. This absolute transparency is the only way to trust a piece of software, and it is the reason why I encourage free and open source software (FOSS) so heavily.
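To make the pull model concrete, here is a minimal sketch in Python using the feedparser library; the feed URL is a hypothetical placeholder for any blog you choose to follow.

```python
# A minimal sketch of pull-based reading over RSS, using the
# feedparser library (pip install feedparser).
import feedparser

# Hypothetical feed URL; substitute the blogs you actually follow.
feed = feedparser.parse("https://example.com/feed.xml")

for entry in feed.entries:
    # Each reader decides what to retrieve and how to display it;
    # no opaque ranking algorithm sits between author and audience.
    print(entry.title, entry.link)
```

And to make the signing idea concrete, here is a minimal sketch of signing and verifying a statement with Ed25519, using Python’s cryptography library. GnuPG performs the equivalent operations over files and email; this is only an illustration of the underlying mathematics, not a complete protocol.

```python
# A minimal sketch of digitally signing a published statement with
# Ed25519 (pip install cryptography). Key distribution and the web
# of trust are out of scope here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author generates a key pair once and publishes the public half.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The author signs a statement before publishing it.
statement = "I published this post on my own platform.".encode("utf-8")
signature = private_key.sign(statement)

# Any reader holding the public key can check that the statement
# really came from the key's owner and was not altered in transit.
try:
    public_key.verify(signature, statement)
    print("Signature valid: the statement is authentic.")
except InvalidSignature:
    print("Signature invalid: the statement was forged or altered.")
```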

I quit social media three years ago, and my life is better for it. I encourage everybody to think critically about social media.