
The ability to use a mobile device is almost a must-have skill in today's world. Even middle-aged and older people try to learn these devices to keep up with technology and avoid being left behind by society (Weber, 2024). As mobile devices have evolved, users are no longer limited to posting text and images; they can upload videos, homemade animations, and more to streaming platforms. Paired with increasingly convenient software, users are gradually acquiring the skills to film, produce, and upload content on their own. Have you ever uploaded what you see and hear to social media or a video platform? Even if you haven't, you have surely seen videos and text posted by individuals on online platforms. While they bring us freedom of speech and information, have you ever suspected that they contain false, low-quality information? Is this information polluting our internet environment?
Introduction
Before going further into our topic, some basic definitions need to be provided. The first is self-media: a term of Chinese origin describing the process by which individuals express personal opinions and create content through social media (Liu, 2019). Its formats include text, images, and video, and it is most active on platforms such as YouTube, TikTok, and Instagram. The emergence of this form of media has liberated the creative power of ordinary citizens — publishing content on public platforms is no longer the prerogative of traditional media.
This power to publish and receive information is regarded as freedom of information, whose main components are democratic governance and transparency (Banisar, 2006). However, under this freedom, human desires give rise to a great deal of information pollution, such as hate speech, fake news, and misleading content, which hinders the efficient transmission of legitimate information. Politically motivated disinformation and 'network propaganda' bring especially serious harms and challenges; they affect not only mainstream media but also marginal websites (Flew, 2021). There is no doubt that the age of self-publishing lets us experience a larger world: high-speed information transfer allows us to know in real time what is happening on the other side of the globe, even though we are thousands of kilometres apart. Yet while the phones in people's hands create freedom, some users will inevitably spread misinformation, and some will even exploit these tools to satisfy their vanity and make a profit. This is when regulation by platforms and the relevant authorities becomes essential.
Case Study

I'm sure you're no stranger to the hashtag #MeToo, which grew out of an anti-sexual-harassment campaign popularised by Alyssa Milano. What began as a small hashtag grew into a global movement exposing sexual harassment, gender inequality, and abuses of power in the workplace and beyond. The campaign drew worldwide attention and mass participation, and a number of socially influential celebrities were brought into disrepute. It is clear that these platforms give self-media enough freedom to speak up, while #MeToo allowed people to break their avoidance of, and silence on, sensitive topics. The freedom of information that self-media brings can unite many marginalised voices against more powerful forces, and the resulting social impact and change can be unprecedented. This reminds me of an old Chinese saying: 'Water can carry a boat, but it can also capsize it.' Freedom of information on self-media platforms makes transmission more efficient and timely, and its impact will keep growing as similar experiences accumulate.
#MeToo showed me that freedom of information has exploded with the help of self-media, but many people have also been hurt by self-media accounts that lack social responsibility. On Douyin, China's version of TikTok, there was a self-media account called 'DongBeiYuJie'. She vlogged the daily life of rural Northeast China, and her brash personality and image as a hard-working ordinary farmer amassed more than 19 million followers within a few years. However, she used that influence to peddle fake and shoddy products, and after being challenged she committed further offences, including assaulting and confining people involved. Law enforcement eventually found that her daily-life videos were staged: the 'home' she filmed in was a rented location where she did not live, and even the neighbours were part of the production. In her videos she repeatedly insisted that this was her real home and her real daily life, in order to attract public attention and profit from it. The incident drew the attention of official media, and the account was severely penalised and banned. Her fans were the biggest victims: they not only suffered financially, but their enthusiasm and trust were also doused with cold water. Many netizens claimed they no longer believed information on the internet after this. This shows how important and urgent it is to manage and monitor such accounts — a matter not only of the survival of free speech, but also of the interests of citizens and society.
From a critical point of view, both events have advantages and disadvantages. The #MeToo movement also saw a great deal of fake news mixed in for profit and attention, causing unnecessary social harm. In the case of 'DongBeiYuJie', although her videos were a performance, they also portrayed many people's real lives, which is why they resonated with so many fans. So while we enjoy the freedom that self-media brings, we must also think critically about the deception and danger it can involve. At the same time, regulation and responsibility cannot be lacking.
Regulation and Responsibility
In framing this title, we were already analysing self-media critically. Self-media is now an unstoppable trend, but to reduce information pollution, many platforms have had to strengthen their rules and penalties, and several countries have introduced restrictions on speech for this purpose. Yet the freedom of speech that people expect now sits in tension with these limits, creating an apparent contradiction between freedom and regulation.
Before discussing regulation, who do you think bears the most blame: the platforms, the creators, or the government? It is probably not a matter of picking one of the three, as each must be accountable for its own part of the self-media process. First are the creators, who are not only the ones uploading videos but also part of the audience. They must take responsibility for what they say and do, and avoid delivering false or misleading statements to the public. Platform managers also have an obligation to monitor content and detect harmful speech and behaviour in advance. Finally, given the strong social influence of self-media, the government must take on the responsibility of guarding the bottom line of speech to maintain basic social order.
Often the hardest part is defining which speech counts as information pollution that needs to be intercepted and controlled. Many regions have adopted policies suited to their own circumstances. China introduced regulations in 2017 requiring self-media platforms and creators to register with the government and comply with content restrictions; individuals with large followings must register under their real names and disclose them. Owing to its different social system, China also uses a firewall to limit the overseas content its citizens can view, with apps such as Instagram and X blocked. Such government regulation has attracted much scepticism for greatly restricting freedom of speech — and even so, some self-media accounts with small followings still violate the rules. Even without much influence, they can still harm viewers.
Some have suggested a globally harmonised regulatory system, but it seems to me that citizens of different countries pursue different levels of freedom of speech, and countries differ in what speech they will accept. Donald Trump promoted near-absolute freedom of speech in the US after winning power in 2016, but information pollution such as disinformation and fake news became a major, increasingly rampant problem afterwards (Flew, 2021). For self-media creators in the US, Section 230 of the Communications Decency Act grants platforms immunity: instead of the platform being held liable for the publication of illegal content, creators are held directly accountable. This has bred distrust among parts of the public, who believe such a rule lets platforms relax their controls and allows misleading information to circulate. These regulatory measures have polarised public attitudes: besides those who have lost trust in the platforms, many others believe this approach avoids disrupting freedom of expression and innovation on the internet, and that self-media creators can thus provide more diverse content with a greater variety of perspectives.
By this point it is clear that self-media regulation is a major challenge for countries around the world: some rely on strict controls, others on transparency and accountability (e.g. Europe and the US). Different regions face different moral questions, starting with who gets to define what content is harmful or illegal. Censorship may prevent harassment and harm to vulnerable groups, but it can also block minority voices and political dissent that might be valuable. The ethical issue is not a choice between censorship and freedom of expression, but the need for regulators to strike a fair balance: transparent and safe, yet inclusive and open. Perfect regulation, however, is difficult to achieve.
In The Future: Balancing Freedom and Regulation
Manual review is very limited and difficult in the face of vast amounts of self-media content, so AI assistance relieves much of the regulatory pressure. At present, however, it can only identify relatively basic content, such as politically sensitive speech, violence, and pornography, and misjudgements and omissions are common. AI nonetheless remains a powerful tool for the future; with further iteration and learning, it will make more accurate regulatory judgements. Self-regulation within self-media communities is also worth attention: communities can be viewed as multiple blocks, and block-level management allows users within a community to monitor one another and stop misinformation before it spreads.
In addition, schools could expand education on freedom of speech and information pollution in the future. Such education would not only sensitise students to information pollution at an early age but also enhance their media literacy, so that they can recognise misinformation themselves and avoid creating misleading content of their own. All self-publishers need to realise that freedom of expression means sharing content within the rules, not using it to act recklessly or engage in illegal activities.
Finally, beyond describing the freedom of information and the pollution that come with self-media, this post hopes that readers, having understood its advantages, disadvantages, and regulatory difficulties, will reflect on their own behaviour: be mindful of what they post in the future and look critically at the self-media content already on the web. Avoiding both causing and suffering from information pollution is what we can do for now.
Reference List
Weber, B. (2024, June 20). Embracing technology: Surging smartphone adoption among the elderly. QliqSOFT. https://www.qliqsoft.com/blog/embracing-technology-surging-smartphone-adoption-among-the-elderly
Liu, J. (2019, November 3). Self-media: A new worldwide revolution. Medium. https://medium.com/@jliu2468/self-media-a-new-worldwide-revolution-8a8fb47b2090
Banisar, D. (2006). Freedom of information around the world 2006: A global survey of access to government information laws. Privacy International.
Flew, T. (2021). Disinformation and fake news. In Regulating platforms (pp. 86–91). Polity.