Cory Doctorow:

Writing for a notional audience — particularly an audience of strangers — demands a comprehensive account that I rarely muster when I’m taking notes for myself. I am much better at kidding myself about my ability to interpret my notes at a later date than I am at convincing myself that anyone else will be able to make heads or tails of them. […] 

Blogging isn’t just a way to organize your research — it’s a way to do research for a book or essay or story or speech you don’t even know you want to write yet. It’s a way to discover what your future books and essays and stories and speeches will be about.

I like this post very much — and especially Doctorow’s idea of a blog as a place to make your commonplace book public. I hesitate to go all-in with this approach simply because I don’t want to overwhelm my readers with stuff.

I’m trying it out, but this is my fourth post today, which seems massively self-indulgent. On the other hand, it is rather odd for me to have two separate collections of notes, one on my computer and one on my blog. 

Snakes and Ladders

17 May 2022 at 18:11

Facebook admits its mistakes

[Graph: a majority of adults think social networks are censoring their political viewpoints]

Today, let’s talk about Facebook’s latest effort to make the platform more comprehensible to outsiders — and how its findings inform our current, seemingly endless debate over whether you can have a social network and free speech, too.

Start with an observation: last week, Pew Research reported that large majorities of Americans — Elon Musk, for example! — believe that social networks are censoring posts based on the political viewpoints they express. Here’s Emily A. Vogels:

Rising slightly from previous years, roughly three-quarters of Americans (77%) now think it is very or somewhat likely that social media sites intentionally censor political viewpoints they find objectionable, including 41% who say this is very likely.

Majorities across political parties and ideologies believe these sites engage in political censorship, but this view is especially widespread among Republicans. Around nine-in-ten Republicans (92%), including GOP leaners, say social media sites intentionally censor political viewpoints that they find objectionable, with 68% saying this is very likely the case. Among conservative Republicans, this view is nearly ubiquitous, with 95% saying these sites likely censor certain political views and 76% saying this is very likely occurring.

One reason I find these numbers interesting is that of course social networks are removing posts based on the viewpoints they express. American social networks all agree, for example, that Nazis are bad and that you shouldn’t be allowed to post on their sites saying otherwise. This is a political view, and to say so should not be controversial.

Of course, that’s not the core complaint of most people who complain about censorship on social networks. Republicans say constantly that social networks are run by liberals, have liberal policies, and censor conservative viewpoints to advance their larger political agenda. (Never mind the evidence that social networks have generally been a huge boon to the conservative movement.)

And so when you ask people, as Pew did, whether social networks are censoring posts based on politics, they’re not answering the question you actually asked. Instead, they’re answering the question: for the most part, do the people running these companies seem to share your politics? And that, I think, more or less explains 100 percent of the difference in how Republicans and Democrats responded.

But whether on Twitter or in the halls of Congress, this conversation almost always takes place only at the most abstract level. People will complain about individual posts that get removed, sure, but only rarely does anyone drill down into the details: what categories of posts are removed, in what numbers, and what the companies themselves have to say about the mistakes they make.

That brings us to a document that has a boring name, but is full of delight for those of us who are nosy and enjoy reading about the failures of artificial-intelligence systems: Facebook’s quarterly community standards enforcement report, the latest of which the company released today as part of a larger “transparency report” for the latter half of 2021.

An important thing to focus on, whether you’re an average user worried about censorship or someone who recently bought a social network promising to allow almost all legal speech, is what kind of speech Facebook removes. Very little of it is “political,” at least in the sense of “commentary about current events.” Instead, it’s posts related to drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and harassment.

To be sure, some of these categories are deeply enmeshed in politics — terrorism and “dangerous organizations,” for example, or what qualifies as hate speech. But for the most part, this report chronicles stuff that Facebook removes because it’s good for business. Over and over again, social products find that their usage shrinks when even a small percentage of the material they host includes spam, nudity, gore, or people harassing each other.

Usually social companies talk about their rules in terms of what they’re doing “to keep the community safe.” But the more existential purpose is to keep the community returning to the site at all. This is what makes Texas’ new social media law, which I wrote about yesterday, potentially so dangerous to platforms: it seemingly requires them to host material that will drive away their users.

At the same time, it’s clear that removing too many posts also drives people away. In 2020, I reported that Mark Zuckerberg told employees that censorship was the No. 1 complaint of Facebook’s user base.

A more sane approach to regulating platforms would begin with the assumption that private companies should be allowed to establish and enforce community guidelines, if only because their companies likely would not be viable without them. From there, we can require platforms to tell us how they are moderating, under the idea that sunlight is the best disinfectant. And the more we understand about the decisions platforms make, the smarter the conversation we can have about what mistakes we’re willing to tolerate.

As the content moderation scholar Evelyn Douek has written: “Content moderation will always involve error, and so the pertinent questions are what error rates are reasonable and which kinds of errors should be preferred.”

Facebook’s report today highlights two major kinds of errors: ones made by human beings, and ones made by artificial intelligence systems.

Start with the humans. For reasons that the report does not disclose, between the last quarter of 2021 and the first quarter of this one, its human moderators suffered “a temporary decrease in the accuracy of enforcement” on posts related to drugs. As a result, the number of people requesting appeals rose from 80,000 to 104,000, and Facebook ultimately restored 149,000 posts that had been wrongfully removed.

Humans arguably had a better quarter than Facebook’s automated systems, though. Among the issues with AI this time around:

  • Facebook restored 345,600 posts that had been wrongfully removed for violating policies related to self-harm, up from 95,300 the quarter before, due to “an issue which caused our media-matching technology to action non-violating content.”

  • The company restored 414,000 posts that had been wrongfully removed for violating policies related to terrorism, and 232,000 related to organized hate groups, apparently due to the same issue.

  • The number of posts it wrongfully removed for violating policies related to violent and graphic content last quarter more than doubled, to 12,800, because automated systems incorrectly took down photos and videos of Russia’s invasion of Ukraine.

Of course, there was also good evidence that automated systems are improving. Most notably, Facebook took action on 21.7 million posts that violated policies related to violence and incitement, up from 12.4 million the previous quarter, “due to the improvement and expansion of our proactive detection technology.” That raises, uh, more than a few questions about what escaped detection in earlier quarters.

Still, Facebook shares much more about its mistakes than other platforms do; YouTube, for example, shares some information about videos that were taken down in error, but not by category and without any information about the mistakes that were made.

And yet still there’s so much more we would benefit from knowing — from Facebook, YouTube, and all the rest. How about seeing all of this data broken down by country, for example? How about seeing information about more explicitly “political” categories, such as posts removed for violating policies related to health misinformation? And how about seeing it all monthly, rather than quarterly?

Truthfully, I don’t know that any of that would do much to shift the current debate about free expression. Partisans simply have too much to gain politically by endlessly crying “censorship” whenever any decision related to content moderation goes against them.

But I do wish that lawmakers would at least spend an afternoon enmeshing themselves in the details of a report like Facebook’s, which lays out both the business and technical challenges of hosting so many people’s opinions. It underscores the inevitability of mistakes, some of them quite consequential. And it raises questions that lawmakers could answer via regulations that might actually withstand First Amendment scrutiny, such as what rights to appeal a person should have if their post or account is removed in error.

There’s also, I think, an important lesson for Facebook in all that data. Every three months, according to its own data, millions of its users are seeing their posts removed in error. It’s no wonder that, over time, this has become the top complaint among the user base. And while mistakes are inevitable, it’s also easy to imagine Facebook treating these customers better: explaining the error in detail, apologizing for it, inviting users to submit feedback about the appeals process. And then improving that process.

The status quo, in which those users might see a short automated response that answers none of their questions, is a world in which support for the social network — and for content moderation in general — continues to decline. If only to preserve their businesses, the time has come for platforms to stand up for it.

Musk Reads

Well, let’s see. Elon Musk’s attempt to renegotiate his Twitter deal saw him on Tuesday issuing an ultimatum of sorts, tweeting that “this deal cannot move forward” unless concerns about spam and fake accounts are resolved to his satisfaction. Matt Levine points out that those concerns can never be resolved to his satisfaction, because they are transparently phony concerns intended to drive Twitter back to the negotiating table. Over a deal that he already signed, but now regrets.

Meanwhile, Twitter filed a preliminary proxy statement with the Securities and Exchange Commission — a necessary step on the road to getting shareholder approval for the deal. The most interesting tidbit in there, to me, is that Musk says he asked former CEO Jack Dorsey to remain on the board, but Dorsey declined. This would seem to blow up a lot of theories about how Dorsey had orchestrated this whole disaster as a way to eventually return as CEO.

The document is also, as Levine points out, an extended chronicle of Musk violating US securities laws. For which there are unlikely to be any meaningful penalties.

It seems like there are only two remaining paths forward: one in which Musk says definitively that he won’t buy the company at the price he agreed to, or one in which he attempts to sever ties with Twitter completely. Both are bad for the company, the rule of law, etc.

Finally, would you believe that today three more senior employees quit Twitter?



Those good tweets

rachel 🎰🍓 (@Nurse_Ratch): “McRib (My Chemical Romance is back)”

May 15th 2022

T 🎯 (@CodeineFridge): “twitter is the smoking area of social media apps”

May 14th 2022

Pheenoh (@pheenoh): “This is the greatest email I’ve ever received” [image]

May 15th 2022




Talk to me

Send me tips, comments, questions, and widely viewed content: casey@platformer.news.


18 May 2022 at 01:08

Matt Levine on ‘Yield Farming’

Speaking of cryptocurrencies as Ponzi schemes (powered by energy-intensive computing), here’s Matt Levine, in a podcast interview with FTX CEO Sam Bankman-Fried, responding to Bankman-Fried’s description of “yield farming”:

I think of myself as like a fairly cynical person. And that was so much more cynical than how I would’ve described farming. You’re just like, well, I’m in the Ponzi business and it’s pretty good.

Daring Fireball

18 May 2022 at 01:16

Molly White, Interviewed by Harvard Business Review: ‘Cautionary Tales From Cryptoland’


One more on the “cryptocurrency is mostly about scams” front — a concise interview with Web3 Is Going Just Great creator Molly White, by Harvard Business Review editor Thomas Stackpole:

Stackpole: One of the most surprising (to me, anyway) arguments you make is that Web3 could be a disaster for privacy and create major issues around harassment. Why? And does it feel like the companies “buying into” Web3 are aware of this?

White: Blockchains are immutable, which means once data is recorded, it can’t be removed. [...] Many blockchains also have a very public record of transactions: Anyone can see that a person made a transaction and the details of that transaction. Privacy is theoretically provided through pseudonymity — wallets are identified by a string of characters that aren’t inherently tied to a person. But because you’ll likely use one wallet for most of your transactions, keeping one’s wallet address private can be both challenging and a lot of work and is likely to only become more challenging if this future vision of crypto ubiquity is realized. If a person’s wallet address is known and they are using a popular chain like Ethereum to transact, anyone [else] can see all transactions they’ve made.

Imagine if you went on a first date, and when you paid them back for your half of the meal, they could now see every other transaction you’d ever made — not just the public transactions on some app you used to transfer the cash but any transactions: the split checks with all of your previous dates, that monthly transfer to your therapist, the debts you’re paying off (or not), the charities to which you’re donating (or not), the amount you’re putting in a retirement account (or not). What if they could see the location of the corner store by your apartment where you so frequently go to grab a pint of ice cream at 10 PM?

Web3 Is Going Just Great is my favorite new blog in years. Everything about it is just perfect.

Daring Fireball

18 May 2022 at 01:30

Colin Walker colin@colinwalker.blog