Twitter: pay to play?

Twitter's Q3 financial results, tweet impressions and user growth were better than expected, giving some slight optimism that things may be turning a corner.

They were no doubt helped (spiked?) by the US presidential campaign and debates, but new avenues such as the NFL live stream deal seem to have played their part.

However, as I wrote last time, the network's abuse problem is widely viewed as the source of many of its problems, so it was reassuring to see the latest shareholder letter include the following:

For the past few months our team has been working hard to build the most important safety features and updating our safety policies to give people more control over their Twitter experience. Next month, we will be sharing meaningful updates to our safety policy, our product, and enforcement strategy.

Money for tweeting, ads for free

Before the results and letter were published I wrote this in a private forum:

One of the ideas that has been floating around for ages is that of a freemium model for Twitter. So, the question is: would you pay to use Twitter?

Is Twitter close enough to being a utility that it could justify a monthly fee? Would there need to be an ad-free tier for paying users or would providing some kind of premium functionality be a better option? Does Twitter provide sufficient value to its users?

The answer depends on who you are and on why, and how much, you use it.

From a business perspective, Twitter would need to work out whether the fees sufficiently offset (or bettered) what it could charge for advertising. If the paid tier were ad-free then the potential "eyes on" would be reduced, meaning Twitter might have to charge less for advertising.

But I also wondered how charging for Twitter might affect the abuse problem, a question echoed by Greg Pinelo.

Obviously, his implication was that you would be charging everyone and that the fee introduces an element of friction that may deter some trolls from signing up, but it could go further.

Currently, accounts can be completely anonymous and trolls can get away with all sorts without fearing repercussions beyond Twitter suspending their account. So what? Trolls will just create a new one, again perfectly anonymous, and carry on like nothing happened.

If people paid to use it, however, you would have an actual, real-world ID linked to each account via the user's payment details. Would this act as a deterrent for some?

In addition, if a known troll has had their account suspended for behavioural issues, it becomes harder for them to create new accounts if they try to re-use the same payment details. From this point of view the money becomes less important than the data.
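To make the idea concrete, here is a minimal sketch of how payment-detail re-use could be detected. Everything here is hypothetical and invented for illustration; it does not reflect any actual Twitter system, and a real implementation would involve far more careful handling of payment data. The trick is that the service need only store an opaque, salted hash of the details, not the details themselves:

```python
import hashlib

# Illustrative only: a per-deployment secret salt, not a real key.
SALT = b"example-salt"

def payment_fingerprint(card_number: str, name: str) -> str:
    """Derive an opaque fingerprint from payment details."""
    data = SALT + card_number.encode() + name.lower().encode()
    return hashlib.sha256(data).hexdigest()

# Fingerprints of accounts suspended for abuse.
suspended_fingerprints = set()

def suspend(card_number: str, name: str) -> None:
    """Record the payment fingerprint of a suspended account."""
    suspended_fingerprints.add(payment_fingerprint(card_number, name))

def signup_allowed(card_number: str, name: str) -> bool:
    """Reject sign-ups re-using payment details from a suspended account."""
    return payment_fingerprint(card_number, name) not in suspended_fingerprints

suspend("4111111111111111", "A Troll")
print(signup_allowed("4111111111111111", "A Troll"))       # False
print(signup_allowed("5500000000000004", "Someone Else"))  # True
```

This is the sense in which "the money becomes less important than the data": the fingerprint, not the fee, is what ties a new account back to a banned one.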

There's a but

When presented with the option of paying for a service or getting a slightly worse experience for free, most people will opt for free.

Getting people to make the jump is hard.

The free experience has to be good enough to keep people coming back and good enough is going to be okay for the majority. With the option of going elsewhere (also for free) you can't try to force people to pay by making your entry tier frustrating or inconvenient.

In the private forum mentioned earlier, I ran a poll alongside my thoughts. Relatively small percentages of those who responded wanted an ad-free experience or premium features for their money but 55% chose the option "I don't use it/love it enough to justify paying."

By enforcing payment, not only might you keep out the trolls but potentially also a large proportion of the casual users who would never consider parting with their hard-earned cash, no matter how nominal the fee.


So it appears that Twitter is stuck with its current model and will have to rely on its new safety policy and enforcement strategy to reduce the abuse on the network or risk halving its user base.

The policy will have to be a lot more transparent on exactly what qualifies as abuse or harassment and what the punishments are for any infractions.

But a policy is worth nothing without adequate, consistent, and again, transparent enforcement: say what you'll do, do what you say, and let everyone know WHY!


The wake up call Twitter needs?

With it emerging that both Disney and Salesforce pulled out of acquiring the company because of trolls, online bullying and concerns over corporate image, is Twitter finally going to get the message about its abuse problem?

One of the biggest criticisms is that the company doesn't do enough to proactively combat abuse on the network, instead just reacting to high profile incidents like those involving Leslie Jones and Milo Yiannopoulos to demonstrate that it is taking action.

When reporting abuse directed at someone else, users have had their complaints dismissed because they concern a third party. Others reveal that blatantly abusive behaviour is deemed not to contravene Twitter's idea of acceptable use.

No wonder people become disillusioned and close their accounts.

It is one thing to advocate free speech but another entirely not to act when the ideal of free speech is flouted.

Too late?

For some it will already be too late: the horse has bolted, and any action taken by Twitter may now appear a cynical response to something hurting the bottom line.

But this doesn't mean the network shouldn't act.

If no deal is on the table then Twitter has to be its own saviour; direction and discovery are only part of the solution.

A change in strategy to become "the people's news network" may attract extra users but only by creating a fair and safe environment will they be encouraged to sign up and stick around.

It is a shame that only something of this nature may cause Twitter to rethink its approach but past inaction does not have to remain the template for the future.


Owning your words

A discussion earlier got me thinking about Twitter now allowing people to request verification of their accounts.

I wasn't going to submit an application as there are more well known Colin Walkers out there - from footballer and manager to cellist - but hey, nothing ventured, nothing gained, and I am verifying that I am me!

Why verify?

Verification was originally intended to prevent confusion and to stop people passing themselves off as others. It is a defence mechanism designed to ensure you are talking to, or about, the right person.

But it has another side to it in that it ensures the person talking is who they say they are.

Twitter's verification guidelines advise that:

If the account belongs to a person, the name reflects the real or stage name of the person.

It is not a real names policy per se but does act as an approximation of identity.

Verify to protect

Jason Calacanis posted a mock message on behalf of Jack Dorsey, Twitter's CEO, in which he sets out a scenario where verification is open to all - effectively as an identity mechanism - and those not verified would have their tweets blurred out by default, only visible if we chose to view them.

The intention is for everyone to be accountable for what they post and, by virtue of verification, identifiable. If a troll starts posting abuse so what? You can't see it anyway.

Does this go too far? Would it ever really sit well with Twitter's users? How many would quit the service over having their identity held to ransom in this way, being forced to verify their account or have their tweets obscured?

Part of the joy of Twitter is its openness and freedom, the ability to see tweets from complete strangers and to become involved in the conversations of others.

Jason also writes:

"We are going to still allow anonymity on Twitter, because we all know that some voices need to be heard without revealing their identity. From political dissidents to parody accounts, anonymity has a place on the service"

Forgive me if I've missed something, but if unverified accounts were blurred out by default then that "place" becomes a ghetto for second-class Twitter citizens whose voices are actually silenced until we deign to hear them.

Hardly a welcoming act for those users who are potentially the most vulnerable.

Tweet quality

Twitter has launched new ways to control your experience including a quality filter to remove "lower-quality content, like duplicate Tweets or content that appears to be automated" from the feed.

An interesting proposition.

Twitter advises that it uses a number of quality signals such as account origin and behavior. It would be good to know exactly what this involves.

Are individual accounts graded and would this be included in account origin? Would known troll accounts have their visibility downgraded based on their behaviour?

Could the number of blocks or reported tweets a user receives be an account quality signal with those repeatedly penalised being hidden?
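As a toy illustration of that speculation, a quality score could weigh exactly those signals. The signal names, weights and threshold below are all invented for the sake of the example; Twitter has not disclosed how its filter actually works:

```python
# Hypothetical account-quality heuristic; not Twitter's actual filter.

def quality_score(account: dict) -> float:
    """Lower score = lower quality. Penalise blocks, upheld reports
    and behaviour that appears automated."""
    score = 1.0
    score -= 0.05 * account.get("blocks_received", 0)
    score -= 0.10 * account.get("reports_upheld", 0)
    if account.get("looks_automated", False):
        score -= 0.5
    return max(score, 0.0)

HIDE_THRESHOLD = 0.4  # arbitrary cut-off for this sketch

def should_hide(account: dict) -> bool:
    """Hide an account's tweets once its score falls below the threshold."""
    return quality_score(account) < HIDE_THRESHOLD

normal = {"blocks_received": 1}
troll = {"blocks_received": 8, "reports_upheld": 4}
print(should_hide(normal))  # False
print(should_hide(troll))   # True
```

The appeal of a graded score over a binary verified/unverified split is that repeat offenders sink gradually, while ordinary users with the odd block remain untouched.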

It is arguably a better system than a blanket hiding of all tweets by unverified users.


It occurred to me that, since an algorithm is already in place to decide automatically which tweets should be hidden, could that same algorithm not be used as the basis for further action?

By establishing patterns and consistent behaviour could it not be used to identify potential problem accounts?

Twitter finally has a live tool at its disposal but needs to demonstrate it is fully committed to solving its abuse problem.
