Image created by ChatGPT. Prompt: create an image of a glowing dense network.
I accidentally published a rough draft titled “Hackathon Stuff” last week, so I apologize to anyone who assumed it was a real post and tried to make sense of my nonsensical outline.
Since publishing my last post, I’ve gotten really busy, and a lot of the writing I’ve been doing over the last few weeks has been during the small pockets of free time I find in school. Because of this, I haven’t been able to really commit to working on any one new post, but I’ve started several new drafts and have gotten older drafts close to completion. I have a list of what these future posts will be about here.
On Wednesday, a school assignment to review a documentary finally forced me to stick to one topic, and I ended up writing more for the review than I’d planned to. Because it looks like it’ll be several weeks before my next post comes out, I decided to push this one out.
Notes:
Epistemic status: this would’ve been longer and more in-depth, but I didn’t have enough time to explore, and I was intensely sleep-deprived while writing. I considered submitting late so I could look for research on a couple of relevant questions but decided against it. I would normally have held on to this draft a bit longer to add that research, but I would honestly rather spend more time on the other drafts that have been slowly cooking in the background. Hopefully, I’ll remember to come back and update this in the future.
I couldn’t give myself as much time as I would’ve liked to edit, so the writing quality is only slightly above what would be first-draft level for my normal posts.
The style and formatting will feel different from my normal posts.
Not using contractions makes everything sound so clunky.
This month I rewatched The Social Dilemma — a documentary about the dangers of social media. When I first watched it around four years ago, I thought that it was brilliant and it left a deep impression on me. The Social Dilemma shaped my views on social media at a pretty formative time in my life. By making me far more cautious, it has probably saved me hundreds of hours over the past four years. On this rewatch, with more technical knowledge about some of the concepts and, hopefully, more advanced critical thinking skills, I had many reservations.
If the purpose of the documentary was primarily to instill general wariness about social media in casual viewers and agitate them enough to spread this wariness to others, the documentary is fantastic. Every major theme is highlighted by a coherent background narrative around a small family that feels genuinely realistic and also prevents the documentary from becoming boring. Everything is set to a great soundtrack that alternates between “lo-fi” and genuinely haunting. During this rewatch, I forced my screen-obsessed younger sister to sit next to me, and it was frightening enough for even her to quit social media for a few days. In terms of anything more concrete than that general wariness, however, it unravels under scrutiny. In my opinion, most of the problems the documentary touches on are not explored with satisfying depth, and there are not enough concrete solutions proposed or insightful questions raised that would move us toward those solutions. All of the issues the documentary discusses seem directionally accurate but slightly inflated.
The fact that social media platforms are incentivized to serve as many targeted ads as possible is important, especially for younger and older users who may not understand the business models of these platforms. This explains why social media platforms are optimized to be as addictive as possible and why they collect massive amounts of user data. I think the documentary starts this discussion well, but instead of getting more concrete and providing nuance, it moves too quickly into hyperbole and “boo capitalism”-esque rhetoric. It also makes a passing reference to the frankly frightening idea that social media companies can intentionally and subtly control behavior, but does not give this issue the attention I believe it deserves. Neither data collection nor targeted advertising seems to me morally reprehensible in and of itself. Without either, social media platforms would not be possible, or would at least be less powerful than they are today.1 Once we understand this, we can move on to the important questions: what barriers should exist around the types of data that can be collected, how access to private data should be managed within these companies, what restrictions there should be on how this data can be used, and how to ensure users know enough about all of these details to make an informed decision when signing up for these platforms.
The documentary devotes a significant amount of time to discussing how social media creates polarization and damages our ability to share ground truths, but it uses examples of violence and polarization in place of concrete research. As a matter of personal preference, I have a slight distaste for any argument about the damaging effects of social media on democracy that is not backed up with concrete evidence. Although the ease with which people can fall into news silos certainly increases social division to some extent, misinformation and polarization have always existed, and without seeing more research, it is not completely clear to me that social media is the root cause of modern polarization. Most people are not plugged into social media solely for political discussion, and it may only be increasing the speed and scale at which ideas spread rather than changing anything more fundamental.2
Of all the issues explored, I liked the presentation of the psychological harms of social media best. This is a real and well-documented problem that is explored briefly but fairly concretely, and the general shape of the solutions seems relatively clear. Even here, though, it is important to note that the primary statistics presented in support of this issue are potentially flawed. The social psychologist Jonathan Haidt discusses how self-harm and suicide rates among young girls have risen dramatically since 2011, and he argues that this was caused by the spread of social media in the early 2010s. The statistics themselves are incredibly important and troubling, but the conclusion has to be taken with a grain of salt. Although social media intuitively does seem to be the most likely root cause, correlation does not always imply causation, and there could be other large factors at play. In fact, Jonathan Haidt’s latest book, The Anxious Generation, which seems to draw from some of the same studies, has been heavily criticized (see paragraph 4) by a number of other scholars in his field for this exact reason.
A final, overarching issue I take with the documentary is that it paints these platforms as obscure black boxes controlled by a handful of wealthy genius programmers sitting in Silicon Valley instead of trying to demystify them. I do not think a positive long-term impact through government regulation can be achieved if people do not have at least a semi-accurate understanding of what they are trying to reform. Social media platforms are not as powerful as the documentary sometimes makes them out to be. Recommendation algorithms are not omniscient; they do not have a perfect grasp of the content or quality of each piece of media, especially video-based content. A simple way to see this in action: many people consuming short-form content scroll past a high percentage of their recommended videos. If recommendation algorithms could model your preferences perfectly, that behavior would not exist. At the end of the day, the final takeaway for most non-computer scientists is to see algorithms as big, nefarious “machines that change themselves.”3 In my view, this is potentially dangerous, as it could lead to ill-designed regulation in the future that only makes the problems worse. Any work that attempts to inspire social change on a complex, technical issue should focus more on demystification and concrete roads forward than on aimless general paranoia.
One can argue that the biggest social media platforms are at a point where they could be mandated to collect minimal data and serve less-targeted advertisements. This would mean less profit, but still enough for them to function effectively. A counterpoint that comes to mind is that this policy would seriously harm small advertisers: being able to target a niche audience from anywhere in the world seems far superior to advertising on local radio, whereas larger advertisers probably depend less on each individual “spot.” Less-targeted advertising would also most likely shift more spots toward advertisers with deeper pockets. Despite these drawbacks, I think this is a solid option to consider.
I have also heard the argument that we could treat social media platforms more as public institutions than private companies and have them be directly funded by the government. This might seem like a perfect solution at first glance, but there are serious trade-offs to consider. For one, service-providing institutions that are distanced from the market do not have a great track record: there is little incentive to innovate and not enough incentive even to maintain quality, as many comparable public institutions demonstrate. Additionally, there are questions like which governments should be able to own these platforms and whether it might be even worse to hand this power to governments than to companies.
I do not believe this claim myself; I just thought it would be productive to highlight the counterargument. The structures of social networks do seem to shape the types of ideas that are formed and spread, so increased speed and scale may by themselves be enough to cause some amount of cultural change.
Speaking as a non-expert, at a high level of abstraction, the most advanced recommender algorithms are just function approximators. They represent mathematical functions that are more complex versions of the functions one might see in an Algebra 2 class. These functions are fit to the data they have so that they can make good predictions about future data. For example, ChatGPT attempts to approximate the function that fits the set of all text on the internet and to make predictions about future text (ChatGPT is not a recommender algorithm, but it is built on the same core idea). Whenever new data is brought in, the function changes to fit the data better. This is “the machine being able to change itself.” Machine learning, as a field, is about making these function approximators increasingly better and shaping them to work the way we want them to.
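To make this less abstract, here is a minimal sketch in Python. The features, data, and tiny linear model are all invented for illustration (real recommender systems use vastly more complex functions), but the shape of the process is the same: fit a function to data, then refit as new data arrives.

```python
import numpy as np

# Toy "recommender": predict how long a user will watch a video from two
# invented features (video length, topic-match score). The model is just a
# function: predicted_watch_time = w1 * length + w2 * topic_match + b.
rng = np.random.default_rng(0)

def fit(X, y):
    # Least-squares fit: choose the weights that make the function
    # match the observed data as closely as possible.
    X_b = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)
    return weights

def predict(weights, X):
    X_b = np.hstack([X, np.ones((len(X), 1))])
    return X_b @ weights

# Fit the function to a first batch of (synthetic) user interactions.
X = rng.uniform(0, 1, size=(100, 2))                     # features
y = 3 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.1, 100)  # watch times
weights = fit(X, y)

# New interactions arrive, so we refit on all the data. Updating the
# weights to fit the data better is the entire sense in which
# "the machine changes itself."
X_new = rng.uniform(0, 1, size=(50, 2))
y_new = 3 * X_new[:, 0] + 5 * X_new[:, 1] + rng.normal(0, 0.1, 50)
weights = fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))

# Score a candidate video for recommendation.
print(predict(weights, np.array([[0.5, 0.8]])))
```

Here the “function” has three numbers in it; a production recommender has billions, but nothing categorically spookier is going on.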
In my opinion, thinking of these recommender systems as mere functions makes everything far less scary and less opaque. As an example, it should now be easier to understand why we do not know exactly how the model makes decisions: it is practically impossible for a human to completely understand a massive multivariate function. This is not a design flaw. You cannot design a complex and scalable recommendation system as effective as the ones we have now while also knowing exactly how it makes decisions.
Once one understands this, it is also much easier to ask meaningful questions that can lead to solutions. Do we even want to have recommendation systems as powerful as they are now? Very powerful recommendation systems drive addiction and polarization issues. Maybe we can make the default actions on more of these platforms search-based rather than recommendation-based.