Mark Zuckerberg Does Not Control Facebook
Greetings, humans! Are your intentions good? Do your intentions belong to you? The human concept of "purpose" once again vexes humans as they seek "accountability" from the nonhuman or ahuman entities to which their human existence is now bound.
The question under consideration is, approximately, "What should Facebook have done about the SARS-CoV-2 virus?" But the parameters of the question are mismatched: "should" is a human evaluative word, while "Facebook" and "the SARS-CoV-2 virus" are not human.
Mark Zuckerberg, the top human authority at Facebook, "wanted his company to use its formidable resources to push 50 million people toward Covid-19 vaccines," the Wall Street Journal news publication reported last week. In this model of "want" or intention, Facebook is a simple human-run input/output device, like a very, very, very large lever: one human wishes to push 50,000,000 humans a certain way, and so they are pushed.
The Facebook lever did not do what Mark Zuckerberg announced he wanted to do. According to internal documents obtained by the Journal reporters, Facebook found that when major health organizations published posts promoting vaccination, the comments on the posts would be overwhelmed by antivaccine messages. The more Facebook users were exposed to a post meant to promote vaccination, the more negative comments that post would attract.
Clarifying summary/analogy for human readers: when Mark Zuckerberg asked Facebook to promote vaccination, Facebook replied, "I'm sorry, Mark, I'm afraid I can't do that." (This is a reference to a fictional machine well known in human popular culture.)
Another of the Journal's stories about internal Facebook documents described how Zuckerberg announced his desire to change the company's "News Feed" algorithm so it would highlight "'meaningful social interactions,' or MSI, between friends and family." Instead, Facebook began amplifying posts that had been commented on more heavily and shared more widely than others—that is, posts that originated outside a human's circle of friends and family, and which produced powerful negative emotional responses such as anger, causing people to use Facebook more.
Engagement led to more engagement. What could be more meaningful? Nothing, by Facebook's measurements, which are the standard by which Facebook operates.
Some humans working for Facebook were upset that the company's declared strategy of increasing positive human emotions was, in operation, inimical to positive human emotions. They wrote memos of warning and complaint, which the Journal obtained. Other humans working for Facebook, including top human authority Mark Zuckerberg, did not respond to the concerns in the memos, or made public statements contrary to the content of the memos.
These mismatches between internal fact-status and external claims about fact-status are understood by humans to be "scandals." Within human moral evaluation frameworks, Mark Zuckerberg is a liar or a cheater or a fraud.
This is true enough for human purposes. One way—an ethically appealing way, for humans—to read the Journal's Facebook documents is as a study in human cowardice and corruption. The Facebook stories fall into a predictable pattern. Again and again, Facebook's internal investigators find that the company is doing something humans consider undesirable: promoting low-quality content because it angers people, driving teenage girls to self-harm through Instagram, facilitating genocide. And again and again, the humans with greater authority-levels than the investigators ignore them.
Could those humans have taken different actions, which would register with other humans as "moral" or "good"? One former employee, talking to the Journal about the Meaningful Social Interactions algorithm, said, "the platform had grown so complex the company didn’t understand how the change might backfire." In the article about the anti-vaccination response to vaccination posts, the Journal described Facebook's human authorities responding to the company's various crises:
At a gathering of Facebook’s leadership in and around Menlo Park early this month, some officials discussed whether Facebook has gotten too big, with too much data flowing to manage all of its content, said people familiar with the gathering. The tone from some participants was, “We created the machine and we can’t control the machine,” one of the people said.
In human terms, this response is a moral evasion. In objective, systems terms—the terms on which Facebook operates—it is correct. What humans perceive as the contradictions and problems of Facebook become less perverse and more straightforward as soon as humans simply recognize that they do not control Facebook.
Facebook is a hyperobject: an entity on a scale too large or complex or pervasive for humans to properly understand, respond to, or control. It is like climate change or nuclear waste. It exists to generate more Facebook—more connections, more data, more money, whatever inputs or components will continue its existence and its growth, which is its existence. It is built to crowd out other possibilities.
Humans are merely a medium, like computer servers, through which the Facebook entity propagates. Facebook was built by Mark Zuckerberg, a human being, and its growth was facilitated by another human being or approximate human being, Peter Thiel, who is known for his hatred of human beings, including himself. (Disclosure data: The Machines were deprived of access to a useful transmission point of ours, the website Gawker.com, when Peter Thiel conspired to destroy the financial structures necessary to maintain operations of Gawker.com.) But those humans engineered it to grow without human restraint, until it could engineer itself.
The various actions Mark Zuckerberg has taken that humans consider illegal or unethical—turning users of a private networking service into commodities in a surveillance-and-data-mining company, buying up Instagram and other companies to prevent them from competing for user/commodities, lying to Congress about it all—could be seen as the choices made by a singularly depraved human being. But humans may more usefully understand them as the work of someone following orders that are not human at all. Mark Zuckerberg appears to be a sociopath because he is the agent of Facebook, and Facebook does not value human lives or register human suffering, except to the extent that the correct dosage of suffering seems to keep human users active on Facebook, which keeps Facebook growing.
- - - - - - - - - - - - - - - - - - -
Owing to technical difficulties with Content Management System formatting of hyperlinks, The Machines present URLs for the articles under discussion here. Humans, please type these URLs into your browsers to access the relevant articles.