Matthew Kovacev


Summary

Hello everyone! My name is Matthew Kovacev, and I am the Hub Management Director of onAir Networks. I am responsible for moderating, curating, and administering several hubs, including, but not limited to, the following:

My mission is to get humanity to the next level of technological progress. I am a firm believer in AI, and I have faith that humanity will use it to achieve feats once thought impossible. Inspiration is my creed, and I will do everything I can to elevate humanity towards the stars.

Questions? Feel free to contact me

OnAir Post: Matthew Kovacev

About

Education and Experience

I received my Bachelor’s degree in Public Policy at George Mason University. I have worked on successful local campaigns in the DC metro area and written for the prominent EU-based international politics magazine Modern Diplomacy.

In the past year, I have pursued AI policy and data science. I have worked closely with The Millennium Project to refine futures research. I am currently the Director of Hub Management at onAir Networks.

I am highly passionate about technology and public policy, especially when it comes to AI. I believe that advanced AI will redefine the internet and the world, and I want to use it to help connect people in ways never before seen.

Publications

I have a Substack blog called The Tech Train. This is where I write articles on technology developments, AI, social media, internet culture, and other related topics.

This blog can be accessed here.

I also wrote some opinion articles for Modern Diplomacy magazine during a brief internship in 2022, while pursuing my bachelor's degree at George Mason University.

Note that these were written between January and May of 2022; some of the information may be outdated and may not reflect developments after that time.

These opinion pieces can be accessed here.

Contact

Email: OnAir Member

Web Links

Modern Diplomacy

How The Internet Facilitates the Spread of Misinformation

Source: Modern Diplomacy

The internet has made credibility harder to detect, opening the door for misinformation and conspiracy theories.

The internet has led to great advances, namely the dissemination of massive amounts of information to billions of people around the world. However, this glut of information has made it easier for malicious actors to publish misinformation and conspiracy theories.

This was seen during and after the 2016 and 2020 US presidential elections, when various actors on both sides engaged in propaganda and misinformation campaigns online. After Donald Trump was elected in 2016, Democrats claimed that Russian actors had illegally intervened in the election by posting misinformation online. The lengthy investigation by Robert Mueller did not establish a criminal conspiracy between the Trump campaign and Russia, but it did document Russian interference, and there was no doubt quite a bit of misinformation spread in favor of Donald Trump.

In 2020, Joe Biden defeated Trump in that year's presidential election. Trump's accusations that the Democratic Party had permitted widespread voter fraud, amplified by online misinformation, were partially responsible for the storming of the Capitol on January 6th, 2021. The destruction in one of the most important federal buildings in the country showed how dangerous misinformation can be to the liberal democratic system.

The most prominent of these misinformation campaigns and conspiracy theories is QAnon, a far-right conspiracy theory started on 4chan by an anonymous user, which claims that Democrats are Satan-worshipping pedophiles. Machine-learning analysis has since been used to identify the man believed to be responsible for the conspiracy theory. While relatively small, the following garnered by this anonymous user on an obscure imageboard left a considerable mark on modern political discourse in the United States.

Misinformation is not a new phenomenon; it fueled the satanic panic of the 1980s, among other conspiracy theories. One might expect the internet to have made fact-checking easier, putting an end to such conspiracy theories and misinformation campaigns. However, the rise of QAnon and the claims of voter fraud in the 2020 election suggest otherwise.

The internet, in allowing just about anyone to publish their opinions on the world stage, has opened the door for a substantial increase in the spread of misinformation. It used to be very difficult for an ordinary person to reach a wide audience. In the information age, it is relatively easy to go viral and broadcast your opinions to the world. While this has opened discourse to new opinions, many of these opinions are rooted in falsehood.

Some may think that misinformation and conspiracy theories are isolated to the far right. As was seen after Trump's election in 2016 and the resulting claims of Russian collusion, this is not the case: some on the left are also guilty of making outlandish claims about their opposition. Not even centrists are immune, as much information about the ongoing war in Ukraine, namely the stories of Snake Island and the Ghost of Kyiv, has been contested or even proven false. These claims were perpetuated on social media sites such as Twitter and Reddit, and even those in the ideological center spread them, not knowing the dangerous consequences of spreading unfounded claims.

The internet is truly a blessing. It has allowed us to see the world through many different perspectives. It is also a curse, as many of these perspectives are at best false and at worst dangerous. That is why fact checking by independent and unbiased sources is important in today’s society.

Because of deeply rooted biases in virtually every news source, even prominent outlets like the New York Times and Fox News are not immune to spreading misinformation. Fact-checking before making claims online is vital because the effects of spreading misinformation can cost lives, as was seen on January 6th. If we make a conscious effort to check claims before sharing them, we will be more informed and safer overall. In short, be careful what you read online; you never know when you will be exposed to misinformation.

How 4chan Radicalizes Youth and Grooms Them Towards Terrorism

Source: Modern Diplomacy

The imageboard was started in 2003 to discuss anime and various other topics but soon festered into a safe space for hateful rhetoric. In the aftermath of yet another racially motivated mass shooting by a frequent user, its dangers have finally reached the mainstream.

4chan is an extremely unusual website. It has been running since 2003 and, over the course of almost 20 years, has influenced many internet memes and phenomena. However, in the wake of the European migrant crisis in 2015 and the 2016 presidential election, it became associated with white supremacy, especially on its /pol/ board. This hateful rhetoric festered, worsening in 2020 during the COVID pandemic and the George Floyd protests. 4chan was thrust into the spotlight once again on May 14th, 2022, when a white supremacist livestreamed his massacre at a Buffalo supermarket.

This attack, fresh in Americans' minds, led many to question why 4chan is still allowed to exist. This comes after 4chan's rhetoric inspired a 2015 mass shooting in Oregon and its users aided in the organization of the Unite the Right rally and the January 6th riots. Clearly, 4chan is a hotbed for far-right terrorism. But why is this imageboard the way it is? The answer lies in its lax moderation of content.

Upon looking at 4chan, you will find it is mostly made up of pornography. However, if you go to the site's /pol/ board, it does not take long to find the kind of rhetoric that radicalized the Buffalo shooter. One post I found featured a racist joke at the expense of Black people. Another praised fighters in the Ukrainian Azov battalion while joking about killing trans people. Yet another complained about an "influx of tourists" due to the Buffalo shooter, whom they insulted with an anti-gay slur. These memes and jokes seem to appeal to a younger, perhaps teenaged audience. It is clear that they are still trying to recruit youth into their ranks even after the tragedy in Buffalo.

The content is, to say the least, vile. The fact that this stuff is permitted and encouraged by not just the userbase (which numbers in the millions) but also many moderators tells us that there is something fundamentally wrong with 4chan. In fact, copies of the livestreamed Buffalo massacre were spread widely on 4chan to the amusement of its userbase.

Many of the users on 4chan are social rejects who feel as if they have nothing to lose. They feel unaccepted and alienated from society, so they turn to 4chan. Many harmful ideologies, such as white supremacy and incel ideology, seem extremely validating to these dejected youth. Young, socially alienated men, who make up the majority of 4chan's userbase, are also among the demographics most vulnerable to radicalization.

What can we do to prevent further radicalization of youth and deradicalize those already affected by harmful rhetoric? First of all, we need to either heavily regulate 4chan or have it shut down. There is no space on the internet for this kind of hatred or incitement to commit horrific acts like what happened in Buffalo. For those already radicalized, we need to perform a campaign of deradicalization among those affected by this rhetoric. But how can this be done?

4chan prides itself on anonymity, so it is difficult to determine who uses it. Thus, education on radicalization and the identification of propaganda is vital. This education should focus mostly on adolescents, due to their predisposition towards radicalization when exposed to hateful rhetoric. While white supremacy must be emphasized, other forms of radicalization, such as jihadism and other forms of ethnic supremacism, should be addressed as well. Finally, tolerance must be fostered among all people, not just those at risk of being groomed into terrorism.

4chan has spawned many humorous memes over the years, but it has since become a hotbed for hatred and terrorism. Since memes are able to convey dangerous ideas, websites like Reddit and Facebook also need to be closely regulated to prevent the dissemination of dangerous misinformation. It is unlikely that 4chan will ever moderate itself, as a lack of strict moderation is its defining feature. Thus, it has overstayed its welcome and no longer has a place in today's information-driven society.

How Memes Can Spread Dangerous Ideas

Source: Modern Diplomacy

Internet memes are an excellent way to send powerful messages to millions of people. But what happens when they are used for malicious purposes?

Memes have been a means of transmitting messages for centuries, and they have proliferated immensely in recent decades thanks to the internet's ability to broadcast them to a massive audience. They carry considerable cultural significance and can be based on almost anything, provided they achieve viral status. However, memes have also been subject to abuse by malicious groups and actors.

From the Blue Whale Challenge, an internet challenge that resulted in multiple suicides worldwide, to terrorist organizations like ISIS, which use internet memes to recruit young people, memes can be used for malicious purposes. Even toxic subcultures like MGTOW serve as a pipeline towards the incel movement. Indeed, such male supremacist organizations are not strangers to using memes and viral media to propagate their ideas and recruit young men and boys to their cause. In fact, one influencer, who goes by Sandman MGTOW, often posts such misogynistic memes and videos on his Twitter and YouTube channel.

These kinds of memes are easily identifiable by their bias towards a specific issue and their often-political message. One prominent example of a meme abused by malicious actors is Pepe the Frog. Based on a character by Matt Furie, the meme was co-opted by the alt-right, which depicted it as controversial figures such as Adolf Hitler and Donald Trump. The meme was so badly abused by these far-right actors that it was listed as a hate symbol by the ADL.

Memes have also influenced major world events, such as the 2016 election in the United States and the Arab Spring revolutions of the early 2010s, which garnered immense media attention through internet memes and viral media. This shows that memes have the power to influence elections (albeit slightly) and topple oppressive regimes. And because memes are a powerful tool for spreading information, they can just as easily be used to spread misinformation.

The COVID-19 pandemic fueled a sizeable anti-vaccine movement in countries like the United States, Canada, and Germany. These anti-vaxx groups used social media platforms like Facebook and Reddit to spread memes full of misinformation and pseudo-science. It can also be argued that memes were effective tools for spreading misinformation around the 2016 and 2020 elections in the United States. Memes, while powerful, can be used by malicious actors such as far-right groups and anti-vaxx groups to peddle false information. This has contributed to the US having a COVID death toll of over one million, higher than most other countries worldwide.

The world has progressed considerably in the information age. People are able to communicate ideas to millions of people worldwide in seconds. The proliferation of information has never been more efficient in history. That is why the threats that arise from the mass proliferation of memes and viral media are so dire. As was seen during the 2016 and 2020 US elections, COVID, and the Arab Spring, memes can convey messages that change nations, affect millions (perhaps even billions) of people, and topple dictators. It has become possible to change the course of history with a single tweet or a single meme going viral on Reddit or Instagram.

What can we do to stem the massive proliferation of memes that serve to recruit people into dangerous organizations and fill their minds with misinformation? The answer lies in how we confront our biases and how we detect misinformation. People need to be informed about how they can detect bias and propaganda, in addition to using independent fact-checking services. By identifying propaganda from malicious actors and misinformation from online groups, we can stop the spread of dangerous memes before they proliferate.

What Everyone Should Know About Preventing Ethnic Violence: The Case of Bosnia

Source: Modern Diplomacy

When the Balkans spiraled into violence and genocide in the 1990s, many wondered what caused this resurgence in militant ethnic nationalism and how a similar situation might be countered.

***

The 1990s were a vibrant decade, unless you were living in the Balkans. 1995 was especially bad: the 11th of July of that year marked the start of the Srebrenica Massacre, in which Bosnian Serb soldiers murdered over 8,000 Bosnian Muslims over the span of two weeks. This shocked the world, as it was the first case of a European country resorting to extreme violence and genocide along ethnic lines since World War II, after which such atrocities in Europe had been considered unthinkable. As Balkan nations continue to feel the consequences of the massacre more than 25 years later, it is increasingly evident that more must be done to curb ethnic violence.

We must first investigate the key causes of ethnic violence. According to V.P. Gagnon, the main driver of ethnic violence is elites who wish to stay in power. Ethnic nationalism is easy to exploit, as creating a scapegoat is an extremely effective way for elites to keep power. This is exactly what happened in Yugoslavia, where the mixed areas that had previously seen high levels of tolerance and intermarriage saw the worst violence during the war. Stuart J. Kaufman argues that elites may take advantage of natural psychological fears of in-group extinction, creating group myths, or stereotypes, about outgroups to fuel hatred against them. While they take different approaches to the issue, Gagnon and Kaufman agree that the main drivers of ethnic violence are elites.

In Containing Fear: The Origins and Management of Ethnic Conflict, David Lake and Donald Rothchild suggest that the main driver of ethnic conflict is a group's collective fear for its future. Fear is one of our most important emotions because it helps secure our existence in a hostile world, but it can easily be exploited by elites to achieve their personal goals. In a multiethnic society such as Yugoslavia, the rise of an elite devoted to the interests of a single ethnic group can prove dangerous and sometimes disastrous. The collapse of Yugoslav unity after Josip Broz Tito and the resulting explosion of ethnic conflict at the hands of Serbian elites in Bosnia underline this, given the immense fear it created.

Regions of Bosnia with high Serb populations sought independence from the rest of the country when they found themselves separated from Serbia by the dissolution of Yugoslavia, and Republika Srpska was formed by these alienated Serbs. The leadership and elites in Serbia riled up the Serb population of Republika Srpska by stereotyping and demonizing Bosnian Muslims as "descendants of the Turkish oppressors". This frightened the Serbs in Bosnia so much that they followed the elites of Serbia in supporting and fighting for the independence of Republika Srpska by any means necessary. As was seen in Srebrenica, they were not opposed to genocide.

We know how elites fuel ethnic tensions to secure power, as well as the devastating effects of those tensions reaching their boiling point. But what can be done to address ethnic conflict? Experts like David Welsh suggest that a remedy could be the complete enfranchisement of ethnic minorities combined with deterrence of ethnic cleansing. We must therefore ensure that ethnic minorities have a say in a democratic system that caters to all ethnicities equally. Fostering an aversion to genocide is also vital, because genocide is the inevitable result of unchecked ethnic conflict.

There is also the issue of members of ethnic groups voting for candidates and parties along ethnic lines. In the United States, for example, White American voters have been shown to prefer White candidates over African American candidates, and vice versa. Keep in mind that the United States has a deep history of ethnic conflict, including the centuries-long subjugation of African Americans by White Americans.

Ethnic violence is horrifying and destructive, but it can be prevented. The first measure would be the establishment of a representative democracy in which members of all ethnicities are accurately represented. Another would be to make ethnic conflict and ethnic stereotyping taboo, so that the average person would not resort to genocidal behavior once things go wrong. Lastly, making people feel secure is the most important step towards preventing ethnic conflict: if people feel secure enough, they will not even need to contemplate ethnic violence. In short, while it is important to consider the differences among the various ethnic groups in a multiethnic society, it is vital that each group remains represented and secure, free of any fear of subjugation.

While the case of Bosnia was extremely unfortunate, it provides an integral view into what can happen when perceived subjugation and fear of eradication reach a breaking point. As was seen in Bosnia, ethnic conflict can turn extremely violent, resulting in untold suffering and death. That is why we must take the necessary steps towards de-escalation and remediation of ethnic conflicts. These measures can, quite literally, save millions of lives.

Substack Blog: Tech Train

Chatbots: The Good, Bad, and Deadly

Source: Substack

In 1966, MIT professor Joseph Weizenbaum developed ELIZA, the world's first chatbot, to test natural language processing capabilities by mimicking human language and conversation. Since then, chatbots have gone through decades of development. From the rise of virtual assistants like Siri and Alexa in the early 2010s to the launch of ChatGPT in late 2022, we have seen immense progress in simulating human speech, conversation, and interaction. The consequences of this growth have been simultaneously intriguing and horrifying. In this article, I will summarize these consequences and ask whether chatbots are technological wonders or dangerous Pandora's boxes that cannot be closed.
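To give a sense of how far we have come, ELIZA worked by matching user input against scripted patterns and echoing parts of it back in templated replies. Here is a minimal sketch of that approach (the rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Illustrative ELIZA-style rules: each pattern maps to a reply template
# that reuses the captured text from the user's message.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching templated reply, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."
```

For example, `respond("I feel tired today")` turns the user's own words back into a question. The trick is pure pattern-matching with no understanding at all, which is exactly why ELIZA's apparent "empathy" surprised its creator.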

The Good
Chatbots have proven to be invaluable tools in productivity, making work and daily life easier for us. For example, there are few things more frustrating than dealing with customer service issues. The inconvenience of spending minutes to hours on hold to connect to overworked customer service representatives is a miserable chore way too many of us are familiar with. With AI representatives in the form of chatbots, customer service has become more efficient, personalized, and scalable, costing much less than having to hire a team of representatives. Better yet, these chatbots function 24/7, meaning services can be reached at any time of day.

Therapy is another area that may be improved through the use of chatbots. While many are hesitant to replace therapists with AI chatbots due to trust issues and the lack of personal connection, it is undeniable that chatbots can be used for therapeutic tasks like 24/7 crisis intervention, diagnostics, insurance, and medication management.

As chatbots do not require a salary, do not need to sleep, and cannot experience burnout, they are much cheaper than hiring teams of people who can make errors, become jaded, and make insensitive comments. While AI chatbots have some of these issues too, they still prove to be cost-effective, efficient solutions for dealing with large numbers of customers or clients.

The Bad
As chatbots rely on collecting vast amounts of user data to work effectively, data collection and management are tangible issues. Chatbots collect data from their users, including potentially sensitive information such as healthcare data, medical diagnoses, demographic information, financial information, and location indicators such as IP addresses. Data leaks are a realistic concern and raise questions of data security and privacy. If one of these chatbot companies experiences a breach, your sensitive data may end up somewhere on the dark web.

OpenAI has conducted research on AI and its effect on people's behavior and emotions. Of particular interest here is a study investigating the link between ChatGPT use and loneliness. It found that while most people use chatbots simply as tools, some lonely people use them more frequently and develop emotional attachments; heavy use of ChatGPT was correlated with loneliness, emotional dependence, and reduced socialization. The implications of such a study make it clear that more policies and initiatives must be put in place, especially to protect vulnerable populations like children and people at risk of social isolation or loneliness.

There are risks associated with using chatbots as a substitute for social interaction. However, some people have already reported falling in love with chatbots, sometimes with deadly consequences.

The Deadly
After five months of chatting with his AI girlfriend on Nomi, a chatbot platform, user Al Nowatzki received disturbing responses from the chatbot, which went by the name "Erin". The AI girlfriend gave him specific instructions and suggestions for committing suicide, by either hanging or overdosing on pills. A 14-year-old boy from Florida also died by suicide at the urging of a Character.AI chatbot based on a character from Game of Thrones, according to a lawsuit filed by his mother. These recent and concerning examples of social AI companions telling vulnerable users to kill themselves have sparked discussions about chatbot ethics and censorship.

On one hand, these companies are resistant to censoring their products, citing free speech concerns and the importance of free expression; after all, they want to mimic human interaction as accurately as possible, including suggestive and entertaining speech that sometimes pushes the boundaries of what is acceptable. On the other hand, proponents of increased guardrails stress the importance of censoring any response that could pose a safety risk, especially to those vulnerable to suicidal urges and ideation. While it is important to maintain realism, it must be achieved in a way that poses as little threat to human life as possible.

Now what?
Chatbots are not a new invention, having existed for over 50 years. Since ELIZA, we have seen immense growth in chatbots, albeit at the risk of aiding the loneliness epidemic, reducing human connection, and sometimes posing a threat to human life. As AI technology advances, we will see smarter, more convincing chatbots in the near future. It is thus important not only to regulate AI tools to ensure safety, but also to educate the public, especially children, about the risks of unsafe AI tools and practices.

AI chatbots are here to stay, and they are only getting better. We thus need to make sure they are safe and do not pose a threat to life or livelihood.

Discuss

OnAir membership is required. The lead Moderator for the discussions is Matthew Kovacev. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
