
I’m taking on the task of building an automation system for some of my online social engagement. Since I am not such a Very Important Person (yet :), the absolute worst that can happen is that my robot re-shares something racist or sexist or *-ist from one of my friends/followers, something I would never endorse myself. Bad, but I can at least un-share it or reply with an “I’m sorry folks, my robot and I need to talk” statement. But this leads to an interesting question:

What does it mean to entrust a digital system with responsibility for my online persona?

It’s not really that bizarre a question to ask. We already grant immense control over our online profiles to the social primaries (e.g. Facebook, Twitter, Google+). For most people, a trending app asking to “post to your timeline” is reason enough to grant it full access to act on behalf of their profile, though it shouldn’t be. Every time you want to play Candy Crush or FarmVille, you are telling King and Zynga that it’s okay for them to say whatever they want, as if they were you, to the people in your network.
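
To make that concrete, here is roughly what such a grant looks like under OAuth 2.0, sketched with requests-oauthlib. The client ID, endpoint, and scope names below are all hypothetical, but the shape is the same on every network: one broad “write” scope is all it takes for an app to act as you.

    from requests_oauthlib import OAuth2Session

    # Everything below is illustrative: hypothetical client ID, endpoint, and scopes.
    oauth = OAuth2Session(
        client_id="HYPOTHETICAL_CLIENT_ID",
        redirect_uri="https://example.com/callback",
        # "profile.write" is the catch-all permission games ask for: it lets
        # the app post to your network as if the post came from you.
        scope=["profile.read", "profile.write"],
    )
    authorization_url, state = oauth.authorization_url(
        "https://social.example.com/oauth/authorize"
    )
    print(authorization_url)  # the consent screen most people click straight through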

The more of a public figure you are, the higher the risk. Consider that Zynga is not at all incentivized to post bad or politically incorrect content to your network on your behalf. That’s not the problem. The problem is when (not if) the company behind a game gets hacked, as happened to Zynga in 2011. It happens all the time. It has probably happened to you, and you stand to lose more than just face.

So what is the first thing to get right about automating social media?

Trust and security are the first priorities, even before defining how the system works. Automation rules are great, except when the activities they automate violate norms of trust and responsibility that a human would catch in a heartbeat. There is no point to automation that doesn’t work properly, and no point to social media automation that isn’t trustworthy.
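
One way to encode that priority is a single guard that every automated action must pass before anything goes out under my name. The sketch below is a minimal illustration, with all names, keywords, and quotas as placeholders of my own choosing: the rule engine proposes, the guard disposes.

    from dataclasses import dataclass, field

    @dataclass
    class TrustGuard:
        """Final checkpoint between an automation rule and the network."""
        excluded_keywords: set
        daily_quota: int
        sent_today: int = 0
        audit_log: list = field(default_factory=list)

        def approve(self, text):
            # Refuse anything containing an excluded keyword, then enforce the cap.
            if any(word in text.lower() for word in self.excluded_keywords):
                self.audit_log.append(f"BLOCKED (keyword): {text!r}")
                return False
            if self.sent_today >= self.daily_quota:
                self.audit_log.append(f"BLOCKED (quota): {text!r}")
                return False
            self.sent_today += 1
            self.audit_log.append(f"APPROVED: {text!r}")
            return True

    guard = TrustGuard(excluded_keywords={"slur"}, daily_quota=5)
    print(guard.approve("Great read on home automation!"))  # True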

For me, at least in the initial phases of planning what this system will look like, trust (not just “security”) will be a theme in all areas of design. It will be a question I ask early and often, of every algorithm I design and every line of code I write. Speaking of algorithms, an early example of these rules goes something like this (Python-flavored pseudo-code; the helper functions and keyword lists are hypothetical):

    def auto_follow(candidates):
        # Candidates: new followers, curated lists, search results, known queries.
        for account in candidates:
            if (contains_any(account.description, GOLDEN_KEYWORDS)
                    and account.follower_count >= 2 * my_follower_count()
                    and posts_matching(account, GOLDEN_KEYWORDS, max_age_days=14) >= 3):
                follow(account)  # follow only when all three trust conditions hold

    def auto_share(candidates):
        # Candidates: curated lists and known queries.
        for account in candidates:
            for share in recent_shares(account):
                if (contains_any(share.text, GOLDEN_KEYWORDS)
                        and highlighted_by_followers_in_common(share)
                        and not contains_any(share.text, EXCLUDED_KEYWORDS)
                        and share not in reshare_buffer               # not already queued
                        and not personal_quota_reached()              # daily personal cap
                        and not contributor_quota_reached(account)):  # per-contributor cap
                    reshare_buffer.add(share)
                    email_notification(share)  # heads-up before the reshare goes out
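
To sanity-check the auto-follow rule end to end, here is a self-contained toy run. Every name and number below is invented for illustration; only the three conditions come from the rule above.

    from dataclasses import dataclass

    GOLDEN_KEYWORDS = {"automation", "open source"}

    @dataclass
    class Account:
        handle: str
        description: str
        follower_count: int
        recent_matching_posts: int  # posts with golden keywords, last 2 weeks

    def should_follow(account, my_followers):
        # The same three conditions as the auto-follow rule above.
        return (any(k in account.description.lower() for k in GOLDEN_KEYWORDS)
                and account.follower_count >= 2 * my_followers
                and account.recent_matching_posts >= 3)

    candidates = [
        Account("@botwrangler", "writing about automation and ethics", 5000, 4),
        Account("@quietuser", "photos of my cat", 120, 0),
    ]
    for acct in candidates:
        print(acct.handle, should_follow(acct, my_followers=1000))
    # -> @botwrangler True / @quietuser False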