There’s a great poster over at Despair Inc. that reads,
“It could be that the purpose of your life is only to serve as a warning to others.”
In the security field we strive to keep our employers and clients out of that category. However, reality is such that we often learn best from our mistakes and those of others. As any parent can attest, even the best warning about the potential danger involved in a childish act of stupidity doesn’t come close to the educational impact of falling, or watching one’s friend fall, flat on their face.
Last week I wrote about a security breach at Twitter that resulted from a poor security design. The kindest thing I can say is that Twitter managed to ignore more than thirty years of security knowledge and made a design error that I would expect a junior security consultant to pick up in a matter of minutes.
Don’t get me wrong — I’m a huge fan of Twitter. The basic concept behind their service isn’t new, but their timing, marketing and some of their technical decisions are brilliant. But, as much as it pains me to say this about any company, they are making the same critical mistake that has plagued many startups in the Internet space: They obviously lack competent security expertise.
I’m sure that they mean well, and I’m sure Twitter has some very talented developers that really want to do the right thing. I’m sure that they have considered some aspects of security. But they need more. They need a security pro sitting around the development table. They need to critically examine every aspect of their system from a security perspective. And they desperately need a good security risk assessment.
Take, for example, my experience with Twitter last week. On Tuesday they announced the ability to send updates via SMS to Rogers phones. I found out because my phone suddenly started getting SMS messages. I replied with “off” and it stopped. Wednesday the exact same thing happened again. “Off” worked, and I logged in via the web to make sure it was really turned off.
Thursday morning it was back with a vengeance. I was driving to the office and a flood of messages began. Having worked on an SMS project, I knew that mobile phone companies require systems that use SMS to honour the ‘stop’ command. As soon as a mobile phone subscriber sends ‘stop’ the service provider is supposed to reply with an acknowledgement and not send any further messages. So I replied with ‘stop’. Twitter sent an acknowledgement, but messages continued to flood in. At first I assumed there must be a queue somewhere, but an hour later I was still being flooded with so many messages that my phone was almost useless.
I logged into Twitter and tried to turn off the SMS updates. But the system gave me an error and continued to show the updates as ‘on’. Next I tried to delete the phone. Given that the Twitter ‘Devices’ page displayed my mobile phone number, that should have been easy. But in response to the ‘delete’ button Twitter replied that there was no valid device to delete.
I opened a support case and, while waiting, found that the ‘sleep’ function still worked. I temporarily got the messages under control by telling Twitter that I sleep 23 hours per day. About 10 hours into the incident, I received a reply from Twitter support indicating that they couldn’t resolve the issue and had escalated it. Some time after that they managed to delete my phone from the system.
From a security perspective, a few things went wrong. First and foremost, the system is clearly not designed to gracefully handle database inconsistencies. I don’t know how Twitter’s database works. Presumably it’s large and complex due to the sheer volume of data it handles. But if the system can display your telephone number yet not delete it, something is very wrong.
In a perfect world, databases maintain internal consistency. But we don’t live in a perfect world, and all sorts of strange things can happen in a database. From a security perspective (as well as an operational one), we need to accept this fact and design for it.
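To make that concrete, here is a minimal sketch of what “designing for it” could look like. The schema, table names, and function are hypothetical, not Twitter’s actual system; the point is that the removal routine keys off the value the user can actually see (the phone number) and clears the notification flag even if the device record is missing or inconsistent, so the user’s goal is met regardless of the state of the database.

```python
import sqlite3

def remove_phone(conn: sqlite3.Connection, user_id: int, phone_number: str) -> None:
    """Stop SMS delivery to this user, tolerating inconsistent records.

    Hypothetical schema: a 'devices' table mapping users to phone numbers,
    and a 'user_settings' table with an sms_updates flag.
    """
    cur = conn.cursor()

    # Delete by the phone number the user sees on the page, not by an
    # internal device ID that may be stale or orphaned.
    cur.execute(
        "DELETE FROM devices WHERE user_id = ? AND phone_number = ?",
        (user_id, phone_number),
    )

    # Clear the notification flag even if no device row existed, so that
    # messages stop regardless of how the inconsistency arose.
    cur.execute(
        "UPDATE user_settings SET sms_updates = 0 WHERE user_id = ?",
        (user_id,),
    )
    conn.commit()
```

The design choice is simply that the operation’s success is defined by the user’s outcome (no more messages), not by whether a particular internal record happened to exist.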
When it comes to any type of communications system, we must recognize that system failures do occur. For example, radio systems often have timers to shut down the transmitter in the event that a person, computer, or stuck microphone attempts to transmit for a long period of time. When designing an SMS gateway, we similarly need to recognize that database issues or queuing problems could result in a large quantity of undesired messages being sent to a mobile phone. To protect both the organization and the user, the system should be designed to tolerate these failures gracefully. And when the user sends ‘stop’, the system must ensure that the messages do indeed stop. A sketch of that kind of fail-safe follows.
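The sketch below illustrates the two safeguards described above; the names, thresholds, and delivery function are my own illustration, not anything Twitter or a carrier actually uses. An opt-out list is checked before every send, and a per-number rate cap acts like the radio transmitter timer: if something upstream misbehaves and floods a single phone, the gateway stops sending.

```python
import time
from collections import defaultdict, deque

OPTED_OUT = set()                  # numbers that have sent 'stop' or 'off'
MAX_PER_HOUR = 60                  # illustrative cap, not a carrier rule
recent_sends = defaultdict(deque)  # number -> timestamps of recent sends

def handle_inbound(number: str, body: str) -> None:
    """Process an inbound SMS; 'stop' must take effect immediately."""
    if body.strip().lower() in ("stop", "off"):
        OPTED_OUT.add(number)
        deliver(number, "You will receive no further messages.")

def send_update(number: str, text: str) -> bool:
    """Send an update only if the user has not opted out and the
    per-number rate cap has not been tripped."""
    if number in OPTED_OUT:
        return False

    window = recent_sends[number]
    now = time.time()
    while window and now - window[0] > 3600:
        window.popleft()

    if len(window) >= MAX_PER_HOUR:
        # Fail safe: a database or queuing fault upstream is the likely
        # cause of this volume, so stop sending rather than flood the user.
        return False

    window.append(now)
    deliver(number, text)
    return True

def deliver(number: str, text: str) -> None:
    # Placeholder for the actual carrier or SMS-gateway call.
    print(f"SMS to {number}: {text}")
```

Either check on its own would have limited the damage in the incident described above; together they ensure that ‘stop’ means stop and that no single fault can render a subscriber’s phone useless.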
Then there’s the helpdesk issue. Twitter is a free service, and we all understand that free services can’t always provide immediate technical support. But Twitter doesn’t give the user any way to indicate the severity of an issue. A ten-hour response time to most support requests is fine – but when Twitter is malfunctioning and slamming a user with SMS messages, it is woefully inadequate.
Part of a security risk assessment involves asking difficult questions about internal and external threats. It requires considering what can go wrong and determining the potential consequences. It involves exploring scenarios like, “What happens if one of our executives’ email accounts is hacked?” and “What could cause the system to go berserk and start flooding users with messages?”
Good security is about much more than checking a user’s password. It’s about achieving a holistic understanding of the system’s confidentiality, integrity and availability properties. It’s about understanding what can go wrong and how to design and operate the system to minimize the risk. And ultimately it is about protecting the organization’s bottom line.
If Twitter wants to avoid serving as a warning to others, they need to start taking security much more seriously. They need to find about $50,000 in their budget for a proper risk assessment. Then they need to start incorporating security requirements into their software development lifecycle. Investors may be desperate for a good startup these days, but they understand that security breaches, especially those that reveal questionable security competencies, are bad for business. And in the fickle world of social media, they can be fatal.