To start, I want this post to be a discussion starter, not a conversation ender. It’s a post not to express judgment but to share the tiny bit of context I’ve gathered, to ask questions, and to have conversations with others. Further, while obvious to say, it feels important to note that I want my thoughts to evolve on this wide-ranging topic; this post is merely a snapshot in time based on the information I have. I am not an expert. I’m a US citizen and a tech worker trying to make sense of what’s going on. You might have relevant information at your fingertips; please share. While I would much rather be listening to a podcast, reading a book about Emily Dickinson’s love letter poems, or binge-watching a series I’ve already watched before on Netflix for comfort, I am truly forcing myself to write this because I feel compelled to document these thoughts during what feels like a pivotal moment in time. I grew up blogging as a way to process the world, and this time is no different.
I aim to be as concise as possible, which inherently means I am going to oversimplify. To compensate, I’m linking as much as I can to sources I have read in case you want to go deeper in a particular area. I also expect that I’ll return to update this post in the coming weeks, so I’ll try to note at the very bottom what’s been edited. Finally, there will be spelling and grammatical mistakes as I have been writing for hours, want to publish sooner rather than later, and am too exhausted to review right now. Feel free to point them out in the comments (or call me, as my lovely mother loves to do).
Let’s set the foundation
TLDR: A brief explanation of the closed vs. open web and an introduction to the concept of “revenge effects”
As many of you know, I work for Automattic, the parent company of WordPress.com, Tumblr, and more. Compared to the knowledge I have now, I knew very little about the concepts of open source and the open web when I joined the company. To this day, I still feel as though I’ve only scratched the surface. To quickly get on the same page, I’m going to quote Wikipedia (open platform) in explaining open vs closed systems on the internet:
“A closed platform, walled garden, or closed ecosystem is a software system wherein the carrier or service provider has control over applications, content, and media, and restricts convenient access to non-approved applications or content. This is in contrast to an open platform, wherein consumers generally have unrestricted access to applications and content.” – From the Wikipedia entry on closed platforms.
Facebook is a closed platform where one has extremely limited information about what’s being done behind the scenes. WordPress is an open platform where every line of code can be seen, used, altered, and even redistributed based on the four freedoms. I share this just to set the scene a bit, not to lecture :). As mentioned, this is still an area I continue to dig deeper into and learn more about. I hope others do too as technology increasingly underpins more parts of our world.
Finally, I want to throw out a term called “revenge effects” from a great book, Edward Tenner’s Why Things Bite Back, that I got for a few bucks at a used bookstore in San Diego. Again, I’ll lean on another resource to summarize what someone could read an entire book about:
“Tenner coined this term to describe the ways in which technologies can solve one problem while creating additional worse problems, new types of problems, or shifting the harm elsewhere. In short, they bite back.” – From fs.blog’s post on “When Technology Takes Revenge”.
What does the internet look like today?
TLDR: The internet is far from a monolith and is increasingly getting divided into different flavors and combinations depending on openness, country, protocol, etc. Increasingly, there’s a push and desire for more private, closed communication, and the echo chambers once enabled/encouraged on large platforms seem to have set the foundation for the smaller yet growing “cozy web”.
If you live in the US and have traveled to Europe in the last few years, you’ve probably noticed far more privacy notices popping up. This is in large part due to what’s called the GDPR, and it leads to a different experience on the internet where it’s suddenly, in theory, much more obvious what’s being tracked and what options you have for choosing what you allow.
Getting a bit more granular in a different direction, if you are in Germany, there are laws to prevent hate speech that are stricter than what you’d find in the US. Some Twitter users have previously latched onto this by switching their profile location to Germany to escape pro-Nazi content. To share another example, if you were to live in China, you’d find a very different internet experience, which has been turned into the shorthand “the Great Firewall of China”.
These flavors of the internet go much deeper than country, but I find it’s sometimes easier to start thinking about this on a country level. This fantastic piece on “The Extended Internet Universe” expands on some of these more common forms and I highly recommend reading it. To simplify, I am going to home in on “the cozy web” as I believe it’s more relevant to what’s happening today. If you want a more condensed read, here’s a neat follow-up to the previously mentioned article from another person who both illustrated and summarized the idea:
“We create tiny underground burrows of Slack channels, Whatsapp groups, Discord chats, and Telegram streams that offer shelter and respite from the aggressively public nature of Facebook, Twitter, and every recruiter looking to connect on LinkedIn…The cozy web works on “(human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams”, hopefully one day evolving “from cut-and-paste to a personal blockchain of context-permissioned, addressable, searchable, interlinked clips” as Venkat puts it.” – From The Dark Forest and the Cozy Web post.
In the last year, I’ve personally found myself pulled into the cozy web, spending more time on Telegram, joining Signal, and somehow racking up 10+ Slack communities, including some just for friends. I’ve grown used to seeing screenshots of Tumblr posts shared on Twitter before being screenshotted again for Reddit and arriving to me in a Slack DM. It’s mind-boggling and strange when you break it down. Part of me loves the idea of content that’s so good it traverses our various systems one screenshot and link drop at a time.
Companies like Facebook have noticed this pivot to private conversations and have in turn doubled down recently on their private group chats/spaces:
“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” Zuckerberg says. “This is the future I hope we will help bring about.”
This shift to private communication makes content moderation and battling misinformation in many ways even harder to do. This quote from a Sway podcast interview between Kara Swisher, the podcast host, and Alex Stamos, former Facebook chief security officer and current director of the Stanford Internet Observatory, helps shed light on one of the “revenge effects” of this move:
“So one of the other decisions that Facebook made that ended up being poorly timed with QAnon and some other stuff was after the kind of fake news scandals of 2016. Facebook moved more and more to pushing content that was created by people on the platform itself. So pushing people less to links, which was kind of the core of the fake news crisis— was people leaving Facebook to go to websites that had crazy, crazy stuff. And that really pushed content that was coming from groups. And so lots of people moved into private groups on Facebook. And in those private groups, QAnon got big.” – Alex Stamos on the podcast, Sway.
This move by big closed platforms like Facebook to improve and prioritize private messaging, alongside the more cultural push to the “cozy web” by individuals, is part of what’s created the echo chambers we see today. For the last decade or so, though, echo chambers have largely been created by the larger platforms themselves, since they ultimately lead to greater engagement from users on the platform, which leads to greater ad revenue (a gross oversimplification). In the last few years, though, we’re starting to see what seems to be an organic rise in people seeking out separate, unregulated echo chambers in the form of the “cozy web”.
To quote the same Sway Podcast episode, here’s Alex Stamos again:
“And one of the little things that leaked out was, there was apparently a report that said that of people who joined extremist groups, something like 60% of the people who joined that joined it because of the recommendation algorithm.”
This is a wild and disturbing stat, and Facebook isn’t alone here. This is a well-documented phenomenon on YouTube too and actually seems to be a fairly big pillar of their overall business:
“An employee decided to create a new YouTube “vertical,” a category that the company uses to group its mountain of video footage. This person gathered together videos under an imagined vertical for the “alt-right,” the political ensemble loosely tied to Trump. Based on engagement, the hypothetical alt-right category sat with music, sports and gaming as the most popular channels at YouTube, an attempt to show how critical these videos were to YouTube’s business.” – from The Verge article on “How extremism came to thrive on YouTube“.
Echo chambers started on large, closed platforms are now bleeding into individualized, smaller-scale echo chambers in the form of the cozy web. Connections made thanks to YouTube or Facebook recommendations can continue to thrive even after these groups are thrown off those platforms. “The Secret Internet of Terfs” article from The Atlantic does a deep dive on one example of this happening after TERFs (trans-exclusionary radical feminists) were banned from Reddit and made their own platform, with somewhat positive results:
“So, the banning approach. The research about what happens when toxic groups are removed from Reddit is limited, but encouraging. Hate speech across the site went down after a purge of such communities in 2015, which made the site more usable for a more diverse group of people. A recent study of the new off-Reddit platforms for r/The_Donald and r/Incels found that the number of people who use those sites is substantially smaller than the number of people who participated in their respective subreddits, and growth is much slower. Without Reddit, these extremists struggle to recruit...“There are substantial increases in how toxic they became once they left the platform,” Manoel Ribeiro, a researcher at the Swiss Federal Institute of Technology and one of the study’s authors, told me. “There seems to be a trade-off.”
This reflects a philosophy many of these platforms have: they will try not to limit freedom of speech but will take steps to limit “freedom of reach”. In some cases, when things go too far, that means banning people from the platform, which removes both. In other cases, it might mean removing the ability to comment on YouTube videos (the most recent big example being the ban on comments on videos involving minors, to curb pedophiles communicating with each other in the comments), removing the ability to retweet a tweet (a tactic previously seen on President Trump’s Twitter), or adding links to more reputable sources, as various platforms have done. It’s questionable how well these have worked in the grand scheme of things, though. Here’s a research article on “The Effects of Journalistic Fact-checking on Factual Beliefs and Candidate Favorability” whose abstract I quote below:
“We present results of two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.”
Before ending this section, I want to briefly note that it’s easy to throw around the term “content moderation” without thinking about the very real human toll, in the form of PTSD, on those who take on the heavy task of reviewing this content.
How did this impact what’s happened in the last week?
TLDR: Removing groups like QAnon and the Proud Boys from larger social media platforms didn’t stop them from organizing via the “cozy web” and taking action this week. In reaction to his role, President Trump was removed from various platforms, Parler grew and was then removed from app stores due to poor content moderation, and questions remain about the impact all of this will have.
“As Facebook and Twitter began to crack down on groups like QAnon and the Proud Boys over the summer, they slowly migrated to other sites that allowed them to openly call for violence. Renee DiResta, a researcher at the Stanford Internet Observatory who studies online movements, said the violence Wednesday was the result of online movements operating in closed social media networks where people believed the claims of voter fraud and of the election being stolen from Mr. Trump.” – New York Times piece on “Mob Attack, Incited by Trump, Delays Election Certification”
While removing groups from larger closed platforms seems to reduce hate speech on those platforms, it merely shifts where that hate speech goes and seems to concentrate it. I regularly say at work that I’d rather have 20-30 really passionate people working on something with me than 200-300 mostly disengaged people. What happened over the last week is a reflection of this concentration after larger platforms took action. In light of President Trump’s role, more severe action is now being taken, with various platforms banning him outright; Axios is covering the full list here. The President’s son is even preparing for a possible ban by asking people to sign up on his personal site to stay connected.
Meanwhile, President Trump himself released a statement saying he is exploring possibly creating his own platform if he can’t find one that fits his needs. For the two years since its founding, Parler has been the social network of choice for many conservatives, as it pitches itself as a free speech platform. Swiftly, though, in the last few days, Parler has been removed from the Google Play store, removed from the Apple App Store, and kicked off AWS (Amazon’s hosting service). I promise I listen to podcasts other than Sway, but Kara Swisher had a very timely interview with Parler’s CEO this week as the riots were going down. Once more, I highly recommend listening to it if you have the time. It cuts right to the core of many of the big issues we are grappling with now, issues that won’t go away anytime soon and will likely continue to have real-world consequences. As John Matze, the Parler CEO, says in the interview:
“Whether or not it’s Parler, it’s Twitter, it’s Facebook, it’s Google, it’s Telegram, WhatsApp, whatever it might be, you can’t stop people and change their opinions by force by censoring them. They’ll just go somewhere else and do it. So as long as it is legal, it’s allowed.”
In the interview, he mentions an influx of 3 million new users by 2pm EST (for context, their user base seems to have been around 12 million before), so one can only imagine how many more users flooded in as events continued to unfold, particularly after the Trump ban. My personal opinion is that this flood of new users, combined with further well-documented calls for violence on January 17th, is what led to the ban from the app stores and AWS. As Amazon shared in their letter to Parler, and as you can hear in the interview with the CEO, the current moderation tools are subpar and unsustainable: they use a “jury of your peers” system in which 5 people are selected to review a reported post and 4 of the 5 must agree to remove it. Combine this with an influx of new users, and the platforms took quick action to curb further adoption through their tools.
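To make that moderation bottleneck concrete, here is a minimal sketch of how a “jury of your peers” check like the one described in the interview could work, assuming the simple 4-of-5 threshold mentioned above. The function names and structure are my own illustration, not Parler’s actual code:

```python
import random

# Toy model of a "jury of your peers" moderation check: 5 randomly selected
# users review a reported post, and it is removed only if at least 4 of the
# 5 vote to remove it. All names here are hypothetical.
JURY_SIZE = 5
VOTES_NEEDED_TO_REMOVE = 4

def select_jury(eligible_users, size=JURY_SIZE):
    """Pick a random panel of reviewers from the pool of eligible users."""
    return random.sample(eligible_users, size)

def should_remove(votes):
    """Return True if enough jurors voted 'remove' for the reported post."""
    return votes.count("remove") >= VOTES_NEEDED_TO_REMOVE

# Pick a hypothetical jury from a pool of 100 users.
jurors = select_jury([f"user_{i}" for i in range(100)])
print(jurors)

# Example: 3 of 5 jurors vote to remove -> the post stays up.
print(should_remove(["remove", "remove", "remove", "keep", "keep"]))    # False
# 4 of 5 jurors vote to remove -> the post comes down.
print(should_remove(["remove", "remove", "remove", "remove", "keep"]))  # True
```

Even in this toy version, it’s easy to see why the approach struggles under load: every reported post needs five human reviews before anything happens, so a sudden influx of users multiplies the backlog rather than the moderation capacity.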
At this point, I’d be remiss if I didn’t mention that there’s not a real free speech argument to be had here, as the First Amendment doesn’t restrict private companies taking action on their own platforms. I say this not to excuse or limit debates around what kind of power these platforms should have, though, as I do have concerns there. This quote from a Newsweek article highlighting a reaction from Kate Ruane, a Senior Legislative Counsel for the ACLU, nicely sums up some of the concerns about the precedent this sets:
“President Trump can turn to his press team or Fox News to communicate with the public, but others – like many Black, Brown, and LGBTQ activists who have been censored by social media companies – will not have that luxury. It is our hope that these companies will apply their rules transparently to everyone.”
(Side note: Much of this reminds me of Facebook’s “real name” policy and the impact it has had on countless LGBTQ+ people, but that veers out of scope for this post.)
Based on the various tweets I’ve seen from more conservative-leaning folks, the app stores’ banning of Parler feels outrageous at first glance, like a blatant attempt at sabotage. As a counterpoint, I want to remind people that both the Tumblr and WordPress apps have come into conflict with the Apple App Store. For Tumblr, this was related to issues with adult content; for WordPress, it was due to a strange situation around accepting purchases through the app. This is a common tactic these companies use to get people into compliance, and it goes back to the limits on “freedom of reach” mentioned above.
Ultimately, complaints of censorship cut across the political spectrum, from “free the nipple” campaigns to what we see now with President Trump.
Where are we headed?
TLDR: More questions than answers, but momentum seems to be growing toward these private, “cozy web” spaces, particularly as more coordinated action taken by social media companies potentially increases mistrust. Section 230 is coming under fire across the political spectrum with little cohesion behind what to do. Finally, the “paywall for quality content” reality might push more people to retreat to the “cozy web” after being locked out of mainstream news.
I’m afraid I have more questions than answers. What about other problematic, to put it nicely, world leaders on these platforms? What will be the revenge effect of these actions? What’s the role of open source, where anyone can create whatever they want, if closed platforms begin to crack down even more? Will we move from echo chambers created on large platforms, to concentrated echo chambers in the cozy web, to larger, less regulated, and more organized echo chambers (i.e., if Trump creates his own platform)? What is the government going to do about this? How will this change the conversation around free speech? Will this drive more hate speech and more people underground to places that are even harder to regulate and penetrate? Will social media platforms like Parler, focused on free speech, or Telepath, focused on “enforcing kindness”, grow more? Will the growth of other platforms force the current ones to evolve, or will the same problems witnessed on larger platforms be recreated as the smaller platforms grow? Will we all just end up in “neighborhoods” on the internet, similar to the Nextdoor model, where we aren’t bothered by national or international news? What’s healthier from a mental health and community perspective?
Section 230 of the Communications Decency Act has been growing in the public consciousness, as well as my own, over the last few years. I vaguely remember it coming up around the 2016 election with Facebook’s role in Russian interference, but I was focused on other related issues at the time. For those who don’t have a clue what I’m talking about, I think this video covers things nicely.
Long story short, Section 230 was put in place in the early days of the internet to help “jumpstart this new industry” without regulation, litigation, etc., as the video states. It offers platforms both legal immunity and the flexibility to come up with whatever moderation tactics they see fit. Now, though, Section 230 has become a rallying cry across the political spectrum, with everyone from Elizabeth Warren to Donald Trump seeking ways to change it. I’ll note for transparency that Automattic joined a coalition at the end of the year “dedicated to working with Congress to promote the benefits of Section 230”.
I bring up Section 230 because it directly impacts the discussion around content moderation, and it’s the most obvious talking point right now within the government. When Section 230 was written, the internet was hardly in use, and much has changed since then. In many ways, this entire debate reminds me of arguments around the Second Amendment, written when the concept of an assault rifle was unimaginable. As mentioned at the very beginning of this post (thanks for sticking with me if you have), I am not here to express judgment but to start a conversation. There are MANY hot takes on Section 230. One tweet in particular stood out to me as something worth keeping in mind.
Right now, it seems we’ll see an increase in coordinated action across platforms to help secure them. Quoting Alex Stamos once more, from the podcast episode that at this rate you should just go listen to since I’ve quoted it at you multiple times:
“Facebook has people working in this area. I would guess Twitter and Google have people working on these things. There’s a couple of things going on. One, the companies care about their own platform. So if you report something to Twitter, Twitter will take care of it on themselves. The problem is that, just like with QAnon, no disinformation stays on one platform anymore. So if you see it on Twitter, it’s going to be on Facebook. If you see it on Facebook, it might be on TikTok. It might be on Reddit. It might be on Pinterest. And one of the interesting things that has happened is you have three companies — Google, Twitter, Facebook — that have really invested in this area. And then you have hundreds of other companies that either can’t or won’t. And so one of the things that we’re doing is if we find something on a big platform, we try to find it everywhere else and report it to the companies that don’t have the capabilities to build big teams. The second thing we’re trying to do is we’re trying to bring transparency to this. So if you report it to Twitter, they’re just going to take care of it, and then you’ll never hear about it, right? If you report to Facebook, they’ll quietly take care of it. They’re not going to announce it. And after this is all done, we’d like to have these four institutions write a report of this is the kind of disinformation that’s happened during the election, and these are the policy responses.”
Here are two groups working on this kind of coordination and transparency: the Election Integrity Partnership and the Stanford Internet Observatory. Speaking personally, I love the transparency, and I love that we saw it from Twitter with their ban on President Trump. Taken altogether, though, I’m left wondering whether a revenge effect of these coordinated cross-platform efforts will be to increase mistrust rather than decrease it, driving people into the arms of the “cozy web”/“dark forest” of the internet, where open source might play an increasing role. After all, with open source the entire goal is that anyone can do whatever they’d like, including with WordPress, which was used to create dangerous fake websites in 2016. This isn’t the open web we imagined, or at least not the one I did.
Add to this the “paywall for quality content” reality we’re already living in. I took a history-of-the-internet style course where one of the internet’s early creators talked about how this might be the future: those who can pay get ad-free or higher-quality content, while those who can’t either suffer through ads to get content or opt for likely lower-quality content that’s free. Right now, there’s tons of creativity happening on the internet in this area, with everything from “freemium” models, to using “coins” to unlock specific content, to Patreon and Substack style approaches where creators get paid directly, but so much comes back to needing the resources to pay. Simply put, not everyone has them. What happens when access to better researched and sourced information sits behind a paywall? While there’s no doubt news companies need to make money, this leaves the door open for rumors and, worse, conspiracy theories to run amok. One could argue that this is no different from the problems faced by newspapers, but I’d argue it’s easier and cheaper to create a website now than it would have been to start your own newspaper. All of this might aid the push to the “cozy web”, with more people feeling locked out of mainstream news without the option (or, even worse, the desire) to pay the fee.
TLDR:
I grew up with a private blog that I still write on from time to time. I think there are times when private Slack channels can serve a positive role (for example, we have a private LGBTQ+ group at work). In the last year, I’ve joined Telegram and Signal to chat with friends. In the last year, I left social media entirely, saying goodbye to Instagram in June, although I still toy with returning. In the last year, I signed up for 3 different Substack newsletters, all of which I’ve loved following for various reasons. Last month, I finally caved and bought a damn NYT subscription. All of this is influencing me. Rather than doomscrolling through usually mindless information, I can feel a delusion creeping up that I’m taking in “higher quality content” from cozy web spaces where I trust people to vet what they share. I don’t know what’s better. All I know is that this fleeing is happening, it’s happening in my own backyard, and I want to talk about it.
If you made it this far and want to keep chatting, here are some questions for you:
- Have you joined any “cozy web” like spaces recently?
- What concerns do you have?
- What gives you hope?
- How do you keep on top of news/discussions surrounding these topics?
- What would you like me to read on these topics to help expand my understanding?