Urgh... if I don't post something soon Justin is going to bump me off the front page... but I guess since I have now handed in my second ethics assignment I can spare a few minutes... before moving on to the Network and Security Administration one (sigh)
I read this rant by Marcus Ranum on the train this morning, a very enjoyable read and eerily familiar... Though I do find his contention that "The failures I am describing are failures of hope" a little unfair; I can hardly be to blame for all information security problems... Sorry, it's late and I have just finished an ethics assignment.
More on security disasters once my brain stops feeling quite like unset papier mâché.
Now this is just great.
My favourites:
Ignorance is Bliss Maxim: The confidence that people have in security is inversely proportional to how much they know about it.
Arrogance Maxim: The ease of defeating a security device or system is proportional to how confident/arrogant the designer, manufacturer, or user is about it, and to how often they use words like “impossible” or “tamper-proof”.
An interesting article over at the New York Times (original site here) on 'self-destructing' messages (really 'self-encrypting and throw away the key').
Researchers at the University of Washington have developed a system to help control how long user data is available 'in the cloud'. Recognising that end users have little control over where their data in the cloud is stored, or even who has access to it, the Vanish system is designed to help users control how long anyone (themselves included) can access the data.
The system works by encrypting the data with a key that is split into pieces and scattered throughout a peer-to-peer network.
Now encryption is nothing new, but the difference with Vanish is that neither the sender nor the recipient holds the key in the long term.
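To make the idea a bit more concrete, here's a minimal Python sketch of the concept only; the real Vanish, as I understand it, uses proper threshold secret sharing and scatters the shares across a BitTorrent-style DHT, whereas this toy version just XOR-splits the key so that every share is needed:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split `key` into XOR shares; ALL shares are needed to rebuild it.
    (Vanish proper uses threshold sharing, so only a subset must survive.)"""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share makes the XOR of all shares equal the key
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

# The message itself would be encrypted with `key` (e.g. AES) and only the
# shares pushed out to the peer-to-peer network under random indices. Once
# the network churns and the shares expire, nobody - sender or recipient
# included - can rebuild the key, so the ciphertext is effectively gone.
key = secrets.token_bytes(32)           # throwaway data-encryption key
shares = split_key(key, n_shares=10)    # these would be scattered into the DHT
assert recover_key(shares) == key       # only works while every share survives
```

The point is simply that the key lives nowhere permanent: lose the shares and the data is unreadable, no deletion required.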
While an interesting piece of research with some long term potential, this certainly doesn't eliminate the dangers of storing your data in the cloud.
Just ask Twitter, which learned the hard way a couple of weeks ago that someone determined (or bored!) enough will find the weakest link and exploit it.
Information Security is often described as 'asymmetrical warfare', a battle in which the good guys have to find and plug all the holes and the bad guys only have to find one poorly defended point...
It was recently reported that police in Queensland would start 'wardriving' around select Queensland towns as part of a public service to educate residents and small businesses on the dangers of running unsecured wireless networks.
This is not the first time this has happened: back in 2006 the Douglas County, Colorado Sheriff's Office started doing the same thing. I couldn't find anything on their website about how well it went, or whether they're still doing it, so I have emailed them to ask if they're continuing the practice.
I have mixed feelings about this one. On one hand, education is part of law enforcement, just as educating the users in your business can assist in securing your network and your data. On the other hand, I imagine there is normally enough 'real crime' to keep generally overworked police busy, and I doubt the general public even want to hear the message. Public service announcements about drinking and driving, smoking and speeding haven't exactly slashed any of those three, and they will kill you!
Manufacturers providing home wireless routers that force a password change during install and have security (encryption) turned on by default thereafter would probably do more good than the police and public service announcements. The average home user doesn't want to think about computer security; they just want their new toy to work, just like their TV and DVD player did when they plugged them in.
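Something like the following pseudo-firmware logic is all I'm really asking for; this is a made-up sketch, not any actual vendor's setup wizard, just the policy of refusing to finish first-boot setup until the default admin password has been changed and WPA2 is switched on:

```python
# Hypothetical first-boot routine for a home router; the names and rules are my
# own invention, purely to illustrate the 'secure by default' idea.
DEFAULT_ADMIN_PASSWORD = "admin"

def first_boot_setup(new_admin_password: str, wifi_passphrase: str) -> dict:
    if new_admin_password == DEFAULT_ADMIN_PASSWORD or len(new_admin_password) < 8:
        raise ValueError("Choose a new admin password (8+ characters) before setup can finish")
    if len(wifi_passphrase) < 8:
        raise ValueError("WPA2-PSK needs a passphrase of at least 8 characters")
    return {
        "admin_password": new_admin_password,
        "wireless": {"encryption": "WPA2-PSK", "passphrase": wifi_passphrase},
    }

config = first_boot_setup("s0mething-better", "a decent wifi passphrase")
```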
Despite society getting more tech-savvy, your average consumer doesn't want to have to get a degree in computer science or an MCSE to set up a printer. They have a hard enough time moving from Windows XP to Windows Vista.
All in all, provided it doesn't take away from more important policing, I think more education is a good thing, and at least they're trying something different up north.
Hopefully I'll hear back from the Douglas County Sheriff's office and find out if they had much success....
I saw a few articles recently (to which I would post a link, but I can't find them again...) about how download limits are bad for security. The basic point being made was that developers can't be trusted to deliver secure software, so a plethora of security updates is inevitable. People subject to download limits may (or probably would) choose to spend their precious allowance on things they perceive as far more valuable to themselves than a patch for Windows or Acrobat.
The sudden interest seems to have come on the back of US ISPs such as Time Warner Cable looking at charging customers by the byte, which has led to a consumer advocacy group asking Congress to investigate whether charging by the byte is 'price gouging'.
While it may be new for the US, this type of download limitation and additional charges for exceeding set caps is nothing new here in Australia or many other parts of the world.
But how could this affect security?
I was told a story by a South African Microsoft employee about the way ISPs divided up download limits in the Republic. As far as I recall, there was basically a generous allowance for sites hosted within South Africa, and a much smaller allowance for sites based overseas. As Microsoft did not have a Windows Update server in South Africa, this led to people being unwilling to update Windows and burn up their precious overseas download limit. A partial solution was for another Microsoft employee to set up a private WSUS server within South Africa and advise people to connect to his server to obtain the frequent updates.
While there are obvious potential security issues with that solution, it is perhaps the lesser of two evils compared to not patching at all.
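For what it's worth, pointing a Windows client at a local WSUS server is normally done through Group Policy, but the client-side targeting boils down to a couple of registry values; roughly this (the server name is a placeholder, not the actual server from the story):

```python
# Windows-only sketch (run as administrator): set the policy values that tell
# the Windows Update agent to fetch patches from an internal WSUS server.
import winreg

WU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
AU_KEY = WU_KEY + r"\AU"
WSUS_URL = "http://wsus.example.local:8530"   # placeholder WSUS address

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, WU_KEY) as key:
    winreg.SetValueEx(key, "WUServer", 0, winreg.REG_SZ, WSUS_URL)
    winreg.SetValueEx(key, "WUStatusServer", 0, winreg.REG_SZ, WSUS_URL)

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY) as key:
    winreg.SetValueEx(key, "UseWUServer", 0, winreg.REG_DWORD, 1)  # use the server above
```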
But do 'regular' users really pay all that much attention to their download caps? All sorts of applications rely heavily on internet access to be able to download updates, from Windows and Adobe Acrobat to iTunes and anti-virus products. Would someone really disable their AV updates to save download allowance?
Speaking to a few non-IT friends, the prevailing opinion is that it's not something they even think about, and I imagine that is the common view. I suspect it would take being heavily slugged with extra charges for exceeding your allowance before most people even think about their download limits - although I have heard of people using 3G tethered internet connections on global roaming being unhappily surprised with huge bills for unknowingly downloading patches and updates automatically while travelling.
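A quick back-of-the-envelope calculation (every number below is an assumption of mine, not a measurement) shows roughly what routine patching might chew out of a modest cap:

```python
# Rough guess at monthly update traffic versus a small download cap.
monthly_cap_gb = 5.0                    # assumed small home/3G plan
assumed_updates_mb = {
    "Windows patches": 300,
    "Anti-virus definitions": 150,      # roughly 5 MB a day
    "Adobe, iTunes and friends": 100,
}

total_mb = sum(assumed_updates_mb.values())
share_of_cap = total_mb / (monthly_cap_gb * 1024)
print(f"~{total_mb} MB of updates = {share_of_cap:.0%} of a {monthly_cap_gb:.0f} GB cap")
# ~550 MB of updates = 11% of a 5 GB cap
```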
At this point it seems like much ado about nothing, and the introduction of download limits in the US will hardly lead to a new age of poorly secured, unpatched systems. The bigger problem is the underlying operating systems and applications that are built with security as an afterthought (if it is thought of at all); the constant downloading of updates and patches is simply a symptom.
I've been reading a few pros and cons recently about password masking. Traditionally it is one of those unquestionable security commandments: "Thou shalt mask passwords". But is it always necessary?
Why do we mask passwords? What's the benefit?
To stop the password being exposed to third parties. The password is a shared secret between the system and the authorised user, so letting others see it in plain text is a no-no. While this is true, are all passwords created equal? The PIN you use with your ATM card in a public place is at much higher risk of being seen by an unauthorised third party than the webmail or network login you type in the privacy of your office or cubicle.
"But we don't all sit in an office or in a cozy cubicle!" you say, cursing the designer of the open plan office. Very true.
Another benefit may be that users 'feel' more secure that their password is being kept 'secret' by not being displayed on the screen. Despite the original purpose being to mask the password from a 'shoulder surfing' colleague, it has come to be something users expect today and, like the padlock in the corner of the browser, a 'symbol of security'.
The downside is, of course, that users cannot see what they're typing, so when they are denied a login they're not sure if they've forgotten their password or are simply mistyping the correct one. There is also the argument that password masking leads to poorer security because users choose 'easier' passwords (i.e. less complex ones) to reduce the chances of mistyping a complex password. This argument assumes that an unmasked password field would lead to more complex passwords because the chances of mistyping are reduced. Personally, I think most poor, simple passwords are chosen for ease of remembering rather than the odds of mistyping.
I noticed recently that Apple had implemented a 'half-way' solution on the iPhone (and this may have been around for a while, I'm just stating where I saw it) in that as each character is typed it appears in plain text briefly before becoming an asterisk. This has the benefit of reducing mistypes (not uncommon with the iPhone on-screen keyboard!) while also making shoulder surfing a little harder by forcing the 'surfer' to pay attention to each keystroke and never showing the password as a whole. I think this is an interesting solution that has potential for the sorts of passwords most likely to be used in a 'private' setting (like your office or home), but not, of course, for PINs and the like.
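Just to illustrate the behaviour, here's a toy terminal version of that 'half-way' masking (it simulates the keystrokes from a string rather than reading real input, and the timing is a guess on my part):

```python
import time

def masked_echo(password: str, reveal_seconds: float = 0.5) -> None:
    """Show only the most recently typed character; bullets hide the rest."""
    typed = ""
    for ch in password:
        typed += ch
        print("\r" + "•" * (len(typed) - 1) + ch, end="", flush=True)
        time.sleep(reveal_seconds)      # the brief window a shoulder surfer gets
    print("\r" + "•" * len(typed))      # once entry is done, fully masked

masked_echo("s3cr3t!")
```

The password is never shown as a whole, so a casual onlooker has to catch every single keystroke, which is the same trade-off Apple made.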