
Lies, Damn Lies, and Twitter Bots

Bear with me -- this is going to get twisted.

I've been paying quite a bit of attention to the use of deception as a tactical method, from real-world griefing to deception as a means of protecting privacy. I'm particularly interested in the political uses of technology-enabled deception -- uses that I suspect are likely to become more prevalent in the near future.

Two of my rules for constructing useful and interesting scenarios are to (a) think about what happens when seemingly disparate changes smash together, and (b) imagine how new developments might be misused. In both cases, the goal is to uncover something unexpected, but (upon reflection) disturbingly plausible. I'd like to lay out for you the chain of connections that leads me to believe we're on the verge of something big.

  • Twitter bots: One study suggests that nearly half of the accounts following corporate Twitter feeds are actually bots -- simple programs that mimic a human user, sending out messages, responding to keywords, and the like. Bots can be set up to retweet each other, and could potentially drive up the visibility of particular hashtags and links in Twitter search results.

  • Algorithmic trading: Increasingly, financial market trade decisions are executed by software, based on programmatic rules; such rules can include responding to news feeds. Some algorithmic trading systems have branched out beyond mainstream news sources, and have connected up to Twitter feeds.

  • High-frequency trading: A special form of algorithmic trading, high-frequency trading involves rapidly buying and re-selling shares, with positions held for as little as a few seconds. Because of the speed of execution, high-frequency trading is even more dependent on breaking news feeds (and, likely, Twitter) than ordinary algorithmic trading.

  • The Flash Crash: In May of 2010, the US Dow Jones Industrial Average dropped nearly 1,000 points (about 9% -- the biggest one-day point decline in DJIA history at the time), only to recover within minutes. The apparent cause? "Order flow toxicity": a large seller exhausts available buyers, triggering a cascade of selling by intermediaries -- particularly high-frequency algorithmic traders.

  • United deja-vu stock crash: In September of 2008, Google News posted as current a six-year-old article about United Airlines filing for bankruptcy; as a result, the value of UAL stock dropped by 75%, but recovered as the error was spotted.

  • Media hacking: Here's where this starts to get good. It's surprisingly easy to spread a piece of juicy misinformation, in part due to the speed of digital media, in part due to the need for news services to fill 24 hours of broadcast time, and in part due to the related need for news services to be first to break a story. Pranksters have had a field day spreading rumors, and activist groups such as The Yes Men have built a cottage industry out of making political statements through hoaxes.

    But this is taking on a more sobering form. According to The Verge:

    ...the hackers who planted fake news stories on Reuters's website earlier this month weren't doing it for fun. Reuters was caught in the middle of an "intensifying conflict in cyberspace between supporters and opponents of Syrian President Bashar al-Assad," in the words of one of its reporters, as hackers attempted to co-opt the news agency's credibility in order to support government forces in the Syrian national conflict. [...]

    To Americans and anyone accustomed to a free press, it should have been easy to spot the one-sided propaganda in the middle of less histrionic material. But the hackers tried to pass their message off as news. The fake posts were written in a plain, straightforward, newsman-like style, with appropriate headlines ("Riad Al-Asaad: Syrian Free Army pulls back tactically from Aleppo") accompanied by appropriate photos.

    The goal wasn't to draw attention to an otherwise ignored issue, or simply for the lulz; the hack was done to sow confusion and to poison the information stream.


    Okay, the pieces should be falling into place at this point. Algorithmic trading, particularly high-frequency trading, is extremely vulnerable to disruption; as it becomes more deeply connected to rapid news inputs from Twitter, the potential increases for misinformation flows to trigger flash crashes and stock price drops. But financial systems aren't going to respond to a single tweet -- they're going to pay attention to "legitimate" news feeds and to sudden bursts of tweets about a particular (relevant) subject.
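As a toy illustration of the kind of keyword-triggered rule described above -- a system scanning incoming headlines or tweets for words tied to a watched ticker -- here is a minimal sketch. All tickers, keyword lists, and function names are invented for illustration; real systems add natural-language processing, source weighting, and risk checks.

```python
# Hypothetical keyword-trigger rule: scan a headline for a watched ticker
# plus loaded words, and emit a crude trading signal. Purely illustrative.

NEGATIVE = {"bankruptcy", "recall", "lawsuit", "fraud"}
POSITIVE = {"acquisition", "upgrade", "record profit"}

def signal_from_headline(headline: str, watched_ticker: str) -> str:
    """Return 'SELL', 'BUY', or 'HOLD' for a single headline."""
    text = headline.lower()
    if watched_ticker.lower() not in text:
        return "HOLD"  # ignore headlines about other companies
    if any(word in text for word in NEGATIVE):
        return "SELL"
    if any(word in text for word in POSITIVE):
        return "BUY"
    return "HOLD"

print(signal_from_headline("UAL files for bankruptcy", "UAL"))  # SELL
```

The point of the sketch is how little it takes: a six-year-old bankruptcy story, or a burst of fabricated tweets containing the right keywords, trips exactly the same branch as genuine news.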

    A black hat hacker could, with ease, create a network of Twitter bots set to retweet each other on command, send @messages to important information hubs (a few of which would retweet the stories further), and drive up the visibility of certain hashtags and keywords. Deployed against the right target, with the right message, at the right time, such a network could trigger sudden swings in the value of targeted shares. The drop in value need not last long; trading systems that know the stories to be false could swiftly snap up the briefly-undervalued stock. Conversely, the attack could be designed to cripple a particular company or stock market, or even to distract journalists from another story.

    Similarly, a Twitter bot network retweeting and spreading misinformation could trigger a media firestorm if the target were a politician. Even if the misinformation were corrected within the hour, its spread would be impossible to fully contain. Could something like this even swing an election?
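The amplification mechanism sketched above -- a bot network inflating mention counts past whatever threshold a naive trend detector uses -- fits in a few lines. Every number and name below is invented; a real trend-detection pipeline is far more elaborate, but the structural weakness (counting mentions without checking who produced them) is the same.

```python
# Toy simulation of bot-driven amplification: each bot reposts one seed
# message a few times, and a naive detector that only counts mentions
# per window mistakes the burst for organic news.

def simulate_burst(n_bots: int, retweets_per_bot: int) -> int:
    """Return total mention count: the seed tweet plus every bot's reposts."""
    mentions = 1  # the seed tweet
    for _ in range(n_bots):
        mentions += retweets_per_bot
    return mentions

def naive_burst_detector(mentions: int, baseline: int = 5, factor: int = 10) -> bool:
    """Flag a 'trend' when mentions exceed factor x baseline -- no authenticity check."""
    return mentions > baseline * factor

count = simulate_burst(n_bots=200, retweets_per_bot=3)
print(count, naive_burst_detector(count))  # 601 True
```

Two hundred cheap bots are enough to clear a threshold tuned for organic chatter, which is why mention volume alone is such a poor proxy for legitimacy.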

  • Comments

    Sounds like a new, interesting component in a hostile corporate takeover: drive down the valuation, then buy enough (temporarily cheaper) stock for your intended purpose.

    The central problem, as I see it, is the unwarranted trust people place in information that appears on the internet. If you approached a stranger in the street, presented them with a printout of a Perl script, and asked how much they trusted it, they'd just stare blankly -- and yet they happily accept information generated by a script when making financial or political decisions.

    Sometimes the information is worth listening to - Twitter bots already provide earthquake and tsunami early warning, because seismic waves travel more slowly than the surge of Twitter messages saying "OMG! Earthquake #LA".
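That early-warning trick can be sketched as a simple sliding-window spike detector. The class name, keyword, and thresholds below are illustrative, not any real service's API:

```python
# Hedged sketch: count keyword-matching messages in a short sliding time
# window and raise an alert when the rate crosses a threshold.
from collections import deque

class SpikeDetector:
    def __init__(self, window_seconds: float = 10.0, threshold: int = 20):
        self.window = window_seconds
        self.threshold = threshold
        self.times = deque()  # timestamps of keyword-matching messages

    def observe(self, timestamp: float, text: str) -> bool:
        """Record a message; return True when the windowed count hits the threshold."""
        if "earthquake" in text.lower():
            self.times.append(timestamp)
        # Evict matches that have slid out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.threshold

det = SpikeDetector(window_seconds=10.0, threshold=3)
hits = [det.observe(t, "OMG! Earthquake #LA") for t in (0.0, 1.0, 2.0)]
print(hits)  # [False, False, True]
```

Of course, the same detector fires just as readily on three bots as on three frightened Angelenos -- which is exactly the vulnerability the original post describes.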

    It's a question of trust. People have always listened to gossip, and many a fortune has been lost and made based on a rumour about a company. Even in the days of the telegraph people were using the new technology to game the system. The use of the internet to disseminate these rumours is nothing new, it's just far bigger and faster than anything preceding it.

    A scarier threat is a blended attack that uses various malware, meme manipulation (as you describe), and small-scale physical attacks to push infrastructure past its breaking point.

    A sophisticated attacker could coordinate a number of small scale "events" (think vandalism), and then amplify their effect by using a network of blog and twitter bots to steer and augment Internet traffic about those events.

    Then additional small-scale physical events could be coordinated to reinforce the belief that fear and protective measures are justified.

    Various malware could then easily be deployed on web sites that discuss these events in order to collect information about individuals that are interested so that their identities and systems can be leveraged to augment the attack.

    This kind of mechanism could be used to create large scale panic about food supplies, financial systems, various products or companies, modes of transport, or any other facilities that are used on a large scale.

    By itself, this is interesting and troublesome, but with a small extra push it could lead to nearly catastrophic results. All complex systems exhibit self-organized criticality to some extent, and most have a tipping point past which any additional stress can cause them to fail catastrophically.

    If an attacker studies the system they want to attack in order to identify these weak points and then targets the above mechanism to push that system into a failure mode, then they can cause the system to fail catastrophically.

    This is a kind of blended, indirect attack where people are "hacked" and used as a human bot-net to mount a large-scale denial-of-service attack on a critical resource or piece of infrastructure. The worst part is that once you've manipulated the information stream so that people distrust the resource you are attacking, its failure under the added stress completely legitimizes the fears and distrust that were initially fabricated. At that point, anyone attempting to explain that what really happened was social engineering and media manipulation would be dismissed as a crank.

    Think about it... anybody claiming that some catastrophic failure of important infrastructure was really caused by somebody playing around with misinformation on the Internet is pitching what sounds like a conspiracy theory. No amount of proof, no matter how legitimate the data source, will stand up to the real-life experiences of millions of people who were ultimately victimized by the failure.

    ----

    If that bends your mind, then think about this:

    If the mechanisms are already in place for this kind of thing to be done intentionally, then there is also nothing really stopping it from happening unintentionally.

    Wetware is built to find patterns, and it is especially good at finding non-existent patterns in random data when it is under stress. If the right combination of bots and real events occurs then there is a pretty good chance it could trigger a very large event.

    All of our systems are increasingly automated, and there are no doubt countless as-yet unknown vulnerabilities built into our wired world. All that is required is for an attacker to discover one of these vulnerabilities and exploit it. And even if no such attacker exists (though I think it likely one does, somewhere), we are still left waiting to discover one of these vulnerabilities in a very unpleasant way.

    We may have already seen this kind of unintended event on a very small scale when Orson Welles put War Of The Worlds on the radio...

This weblog is licensed under a Creative Commons License.