
Saturday, November 30, 2013

Android holed again, JAY Z and “Magna Carta”, Tumblr and HTTPS – 60 Sec Security [VIDEO]

Over 170,000 people are part of the Sophos community on Facebook. Why not join us on Facebook to find out about the latest security threats?

Hi fellow Twitter user! Follow our team of security experts on Twitter for the latest news about internet security threats.

Already using Google+? Find us on Google+ for the latest security news.

How did rapper JAY Z take the concept of Magna Carta to a whole new level?

Watch this week's 60 Second Security and find out!

Can't view the video on this page? Watch directly from YouTube. Can't hear the audio? Click on the Captions icon for closed captions.

Google's Android operating system has another security hole. Same story as before: you can tamper with other people's digitally-signed packages and Android won't notice.

Rapper JAY Z's latest album release, "Magna Carta", was preceded by a custom Android app that had some privacy boffins up in arms.

Tumblr managed to forget the S in HTTPS in a recent release of its iOS app. The social networking company is "tremendously sorry."

(If you enjoyed this video, you'll find plenty more on the SophosLabs YouTube channel.)

http://twitter.com/duckblog

Tags: 60 Sec Security, 60 Second Security, 60 Seconds, 60SS, Android, APK, app, carter, Code signing, data breach, Data Collection, EPIC, Exploit, extra field, Google, https, ios, Jay Z, master keys, Privacy, sniffing, Social Networking, Spam, Tumblr, vulnerability


View the original article here

Friday, November 29, 2013

Jay-Z’s ‘Magna Carta’ mobile app is too snoopy, privacy advocates complain

Jay-Z. Image courtesy of Shutterstock.

Why does Jay-Z want to know who we're talking to?

Because that's the type of information demanded by an app he released earlier this month to promote and distribute his latest album over Samsung devices.

In fact, the galaxy of permissions required by this busybody little app "verges on parody," the Electronic Privacy Information Center (EPIC) said in a complaint it filed this week with the Federal Trade Commission (FTC).

The Magna Carta App, used to promote the album, "Jay-Z Magna Carta Holy Grail", was launched 4 July on Samsung Galaxy Nexus devices in advance of the record release.

EPIC wants the FTC to stop Samsung from distributing the app until its privacy concerns are addressed and the app falls in line with the Consumer Privacy Bill of Rights [PDF].

The app requires these permissions:

- To modify or delete the contents of phone USB storage.
- To prevent the phone from sleeping and view all running apps.
- To access your precise (GPS) and approximate (network-based) location.
- To read your phone status and identity (i.e. who you're talking to on voice calls).
- To run at startup.
- To test access to protected storage.
- To receive data from the internet, view Wi-Fi connections, and view network connections.
- To control your phone's vibration.
- To find accounts on the device - in other words, to gather email addresses and social media usernames connected to the phone.

The app not only wants to know who you call, it also demands your Twitter or Facebook login so it can post on your behalf, presumably so it can create "social buzz," EPIC says.

Beyond that, people who downloaded the Magna Carta app have been forced to post a canned Facebook or Twitter message to hype the album for each song's lyrics they wanted to check out - a process that "encouraged users to flood their friends with unwanted advertising" and forced users to act as "mandatory marketing tools" to access the lyrics, EPIC says.

Users were suitably appalled. One actually paused for an entire 6 seconds.

And then, well, he or she went ahead and downloaded it.

Others are in mourning for the loss of lifespan the app sucked up.

One user's comment:

"I downloaded it, opened it, noticed the obscene amount of personal data they wanted, closed it again and uninstalled. I'd like that minute and a half of my life back please."

Observers are, naturally, assuming that Jay-Z has undertaken advanced surveillance as a hobby.

From Jon Pareles, writing for the New York Times:

"If Jay-Z wants to know about my phone calls and e-mail accounts, why doesn't he join the National Security Agency?"

Pareles is particularly irked, given lyrics from at least one Jay-Z song - "Somewhere in America" - that seem, confusingly enough, to be anti-NSA:

"Feds still lurking"

"They see I'm still putting work in..."

As Pareles points out, now Jay-Z is lurking, in our phones.

Jay-Z, if you're listening, which it seems like you are, then please, call off your Samsung colleagues.

We've got enough eavesdropping going on without you adding to the snooping.

Follow @LisaVaas

Follow @NakedSecurity

Image of Jay-Z courtesy of Shutterstock.



Thursday, November 28, 2013

Facebook, the early years: handing out a master password like candy

Mark Zuckerberg. Image courtesy of Kobby Dagan / Shutterstock.

You are not paranoid about surveillance - at least, not as far as Facebook is concerned.

It appears that Facebook founder Mark Zuckerberg and his minions, in the early days, had a master password with which they could sign in to any user account and poke at whatever data we entrusted to the site.

The Guardian gleaned this from Zuckerberg's former speechwriter, Katherine Losse.

Losse told the media outlet that users should be guarded with their private data on the site - a timely warning, given the launch of Facebook's social search tool, Graph Search.

Losse - aka Facebook employee No. 51 - joined the company in 2005 as a customer support staffer and worked her way up to being Zuckerberg's ghostwriter. She left in 2010 and, according to the Guardian, is now regarded as a rogue former employee by Facebook itself.

In 2012, she released a book, The Boy Kings, about those early years.

Recent revelations about the US National Security Agency's (NSA's) voraciously hungry appetite for surveillance may have left many users of social networking sites fretting about the government sucking up our private data, but Facebook has been privy to that data - including our passwords - from its infancy, Losse told the Guardian.

As The Guardian's Siraj Datoo points out, that's a little scary, given that plenty of users likely have never changed their passwords since they first signed up.

To make matters worse, many people commit security blasphemy by using the same password on multiple sites.

To make matters spontaneously combust in worse-osity, Losse wrote in "The Boy Kings" that in its early years, Facebook passed out the master password like candy, without vetting any of the support staffers.

Here's an excerpt from the book, courtesy of coverage from CNet's Jennifer Van Grove:

"Jake introduced us to the hanky application through which users' e-mails to Facebook flowed. Once we learned how the software worked, Jake taught us, without batting an eyelid, the master password by which we could log in as any Facebook user and access all their messages and data... I experienced a brief moment of stunned disbelief: They just hand over the password with no background check to make sure I am not a crazed stalker?"

As Losse told The Guardian, social networking users tend to assume they're the only ones who can access the information they input, but at most companies, it's probably not true, given that "at least some of the staff need to have access to user accounts in order to do their jobs."

She said:

"There has to be a way for the staff to manage and repair user account issues, and for this reason user data within most startups, especially when they are young, is never completely locked up from company staff."

At any rate, Facebook doesn't hand out a master password anymore, it says.

Nowadays, the company told CNet, employees don't have password access:

"An audit by the Irish Data Protection Commission included a detailed review of the level of access to user data that employees have at Facebook and found that we have an appropriate framework in place. Facebook employees do not have access to users' passwords."

It is, of course, preferable that we have as clear a picture as possible of what companies do with our personal data, so this history of early data yahooism is welcome.

Facebook silhouette. Image courtesy of Shutterstock.

If it helps Losse to sell more books by tying it in to concern about PRISM-like surveillance, that's OK, as far as I'm concerned.

The more light we shed on these formerly murky matters, the better.

Facebook from its start could watch us, listen to us, and, probably, make fun of us and our soppy, trivial and/or really embarrassing posts and data.

Now it can't, it assures us.

If that helps to ease your compulsive surveillance suspicions, paralyzing fear of electronic privacy violation, or even, to borrow the Joy of Tech's formal diagnosis, PRISM Anxiety Disorder, all the better.

Thank you, Ms. Losse, for letting us know.

Follow @LisaVaas

Follow @NakedSecurity

Images of Facebook silhouette and Mark Zuckerberg courtesy of Kobby Dagan / Shutterstock.com.



Wednesday, November 27, 2013

College student gets a year in the slammer for keylogging student accounts to rig election

Vote for me. Image courtesy of Shutterstock.

On the last day of the four-day 2012 election for student council, computer techs noticed that something was a bit off with one of the university's computers.

Viewing it remotely, they noted that whoever was using the computer cast vote after vote.

The techs then watched the mysteriously multi-voting user log into the account of a university official. There, he read an email from a student who complained that the system was preventing her from voting.

There were actually quite a few students who couldn't vote in that election, because their login details had been electronically pickpocketed by the same young man who was running for student council president.

That man, former California State University San Marcos student Matthew Weaver, has been found guilty of using keyloggers to steal nearly 750 student passwords, many of which he then used to log in to others' accounts and to then fraudulently cast votes for himself and four of his fraternity brothers.

On Monday Weaver was sentenced in federal court to a year in prison, according to the U-T San Diego.

Authorities said that Weaver installed keyloggers on 19 school computers, managed to steal credentials for a whopping 745 students, and cast ballots from the accounts of 630 of them.

When campus police tracked him down, Weaver was sitting at a school computer with the keyloggers.

Weaver, now 22, was a third-year business student when he cooked up the scheme to rig the March 2012 election. He started months in advance, with police finding a PowerPoint presentation he'd created earlier in the year.

The presentation proposed running for president along with four of his frat brothers, who would run as vice presidents.

Weaver's presentation noted that his intended position came with an $8,000 stipend, while the vice presidents each stood to get a $7,000 stipend, for a total of $36,000.

Police also found traces of Weaver's research into the matter, including computer searches on such phrases as "how to rig an election" and "jail time for keylogger".

Then, a month prior to the election, Weaver bought three keyloggers, in the form of small electronic devices, to surreptitiously record keystrokes.

Keyboard. Image courtesy of Shutterstock.

Because landing in jail for a brief stint obviously wasn't enough to convince him that wire fraud isn't a wise course, he and a friend cooked up a plan that was even worse: to attempt to deflect the blame, they created fake Facebook pages using the names of real students, posted fake conversations, and tried to make it look like the students had framed him.

Those manufactured conversations were sent to reporters at a few media outlets, but none fell for it, the U-T San Diego reports.

Weaver pleaded guilty to three federal charges, including wire fraud and unauthorized access to a computer.

As the judge pointed out to the U-T San Diego, Weaver jumped from the frying pan into the fire - or, in the judge's own words, he was "on fire" for the crime, and then he went and poured gasoline on it to try to cover it up.

Don't try this at home, kids. Don't play with matches, and don't mess with keyloggers.

Follow @LisaVaas

Follow @NakedSecurity

Image of Vote For Me and keyboard courtesy of Shutterstock.



Tuesday, November 26, 2013

SSCC 113 – Another Android hole, Tumblr forgets encryption, Nintendo under attack [PODCAST]

News, opinion, advice and research: Chet and Duck (Chester Wisniewski and Paul Ducklin) bring you their unique and entertaining combination of all four in their regular quarter-hour programme.

Chester's been on the road, so this episode of the Chet Chat is a couple of days late for logistical reasons.

We apologise for that, but Chet and Duck think it's no less interesting for the delay!

In fact, this week's main story - the two-in-a-row exploits against Android code verification - intrigued your presenters so much that they resolved to link up and record this show, come what may.

And so, here it is: SSCC Episode #113.

(You can keep up with our podcasts via RSS or iTunes, and catch up on previous Chet Chats and other Sophos podcasts by browsing our podcast archive.)

The news wires have been buzzing with the "master keys" attack, and the "extra field" attack, both of which let you create Android Package files (APKs) that show one set of content to Google's cryptographic verification, and another to the installer.

Chet and Duck explain what happened, come up with some ideas that would have avoided the problem in the first place, explain what to do about it, and wonder how long before the fixes are on your handset.
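The "master keys" trick boils down to an APK (which is just a ZIP file) containing two entries with the same name, so the cryptographic verifier can hash one copy while the installer extracts the other. As a minimal illustrative sketch (in Python; the function name and approach are mine, not Google's actual fix), spotting that ambiguity is straightforward:

```python
import zipfile
from collections import Counter

def find_duplicate_entries(apk) -> list:
    """Return entry names that appear more than once in an APK (ZIP).

    Duplicate names are the ambiguity behind the 'master keys' bug:
    the verifier can check one copy while the installer uses another.
    """
    with zipfile.ZipFile(apk) as zf:
        counts = Counter(info.filename for info in zf.infolist())
    return [name for name, n in counts.items() if n > 1]
```

A package containing any duplicate names is, at best, malformed - rejecting it outright avoids ever having to decide which copy "wins".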

From Android to iOS, where Tumblr published a version of its app that somehow managed to leave out the part that encrypts your PII before sending it over the internet.

Chet wonders how the average user is supposed to spot that sort of bug.

Nintendo got pounded by crackers who mounted a month-long password guessing attack.

The crooks only got hold of 24,000 passwords as a result (only!), and it looks as though those successes were largely down to using dictionaries of usernames and passwords from earlier hacks.

What to do? Federated identity? Password managers? A slimmer digital lifestyle?

Chet and Duck discuss the pros and cons of various ways to address the problem of password re-use.

And Chet's going to be at BlackHat 2013, and at DEF CON, so be sure to look him up in Vegas and say, "Hi."

Duck won't be there in body but you will find him present in mind and spirit, as he's putting together a special #sophospuzzle for the occasion.

The puzzle will go up on Naked Security, so everyone can have a go, but BlackHatters can enter at Sophos's booth at the trade show and win a secret prize!

(It's a cool secret prize, which Duck lets slip in the podcast, and Chester bemoans being ineligible to win.)

Don't forget: for a regular Chet Chat fix, follow us via RSS or on iTunes.

Follow @NakedSecurity

Follow @duckblog

Tags: #sophospuzzle, Android, Blackhat, DEF CON, Defcon, Encryption, extra field, firesheep, Google, hole, https, master keys, Nintendo, password reuse, Tumblr



Monday, November 25, 2013

Ruby + OpenSSL && sprintf() == 2009-style Man-in-the-Middle?

Four years ago, a security researcher known as Moxie Marlinspike presented a paper at the Black Hat conference in Las Vegas in which he outlined a number of attacks against SSL.

As you no doubt know, SSL (secure sockets layer) and its modern incarnation TLS (transport layer security) constitute the S in HTTPS, and form the basis of online web security.

HTTPS, or HTTP-over-SSL, has two main advantages over vanilla HTTP:

- Confidentiality. The connection is strongly encrypted, and should be incomprehensible to anyone other than the legitimate recipient.
- Integrity. The client can verify that a server very likely does belong to the organisation it claims.
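In code, both properties come down to using a TLS library with verification switched on. Here's a minimal Python sketch (the helper name `open_verified_tls` is mine, purely for illustration, not from the article):

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection with full verification enabled.

    The default context negotiates encryption (confidentiality) and
    validates the certificate chain, and check_hostname ensures the
    certificate actually names `host` (integrity of the server's identity).
    """
    ctx = ssl.create_default_context()
    assert ctx.check_hostname                     # name must match the cert
    assert ctx.verify_mode == ssl.CERT_REQUIRED   # chain must validate
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)
```

Bugs like Tumblr's come from skipping this layer entirely; bugs like the one below come from getting the name-matching step wrong.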

Any bug or flaw that might interfere with these aspects of SSL is of great interest, because it could punch holes in your online safety and security.

Back in 2009, one of Marlinspike's anti-SSL tricks was to supply a digital signature that looked as though it was issued by company X, but in fact, came from company Y.

He did this by sneaking a [NUL] character (a byte with the numeric value of zero) into the name on the digital signature, something like this:

example.org[NUL].domain.test

Now, SSL certificates are usually signed by a certifying authority (CA) that vouches for the company that owns the domain named in the certificate, i.e. domain.test in the example above.

The CA is supposed to contact someone official at domain.test and verify that the company really wants to issue a certificate for the server example.org[NUL].domain.test.

And the owner of domain.test can deviously, if with apparent honesty, say, "Yes!"

Note that the CA will decompose the servername from right-to-left, in order to ensure that it uses the most significant parts of the name to work out whom to deal with when researching the veracity of the certificate.

In our example, this ensures that the CA correctly contacts domain.test, not example.org.

Later, at run-time, web clients are supposed to check that the name of the server to which they are connecting matches the name on the signed SSL certificate.

Because they need to check the entire server name, it doesn't really matter whether they check from right-to-left or left-to-right.

But Marlinspike noticed that the checking code in some of these web clients used an old-fashioned C function such as strcmp() to do a left-to-right match.

And strcmp() treats the special character [NUL] as denoting the end of a string.

So, with strcmp()-type comparison, the following two strings match perfectly:

example.org[NUL].domain.test
example.org

That's because strcmp() doesn't bother checking past the [NUL] in the first string.

This means you now have a way to let domain.test mint certificates that look as though they belong to example.org.

This, in turn, means that the folks at domain.test can divert example.org's secure network traffic without generating a browser warning, and can thus pull off a Man-in-the-Middle attack.

This throws confidentiality and integrity right out of the window.

The fix is easy: don't stop processing server or domain names at the first [NUL] character: process the entire string, every time.
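The difference is easy to demonstrate. Here's a small Python sketch (the helper names are mine, emulating the C-style behaviour, not taken from any real TLS stack) contrasting a strcmp()-style comparison, which stops at the first [NUL], with a whole-string comparison:

```python
def strcmp_style_equal(a: str, b: str) -> bool:
    """Emulate strcmp(): everything after the first NUL byte is invisible."""
    return a.split("\x00", 1)[0] == b.split("\x00", 1)[0]

def whole_string_equal(a: str, b: str) -> bool:
    """Compare every character; an embedded NUL can never be skipped."""
    return a == b

forged = "example.org\x00.domain.test"  # the name the CA actually signed

# Vulnerable check: the forged name "matches" example.org
assert strcmp_style_equal(forged, "example.org")

# Correct check: no match, so no Man-in-the-Middle
assert not whole_string_equal(forged, "example.org")
```

Better still, reject any certificate name containing a NUL byte outright, since no legitimate hostname contains one.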

Sadly, however, it looks as though the programming language Ruby neglected to implement such a fix until last week.

So, if you have web-facing code written in Ruby, and you support SSL (which you do, right?), be sure to patch as soon as you can, to avoid falling victim to this flaw!

Follow @duckblog



Saturday, November 23, 2013

Opera breached, has code cert stolen, possibly spreads malware - advice on what to do

Filed Under: Featured, Malware

Norwegian-based Opera, makers of one of the most popular browsers outside the Big Four, has announced a scary-sounding network intrusion.

The official story is still somewhat unclear.

But here are the relevant paragraphs from Opera's official mea culpa document:

On June 19th we uncovered, halted and contained a targeted attack on our internal network infrastructure. Our systems have been cleaned and there is no evidence of any user data being compromised. We are working with the relevant authorities to investigate its source and any potential further extent. We will let you know if there are any developments.

The current evidence suggests a limited impact. The attackers were able to obtain at least one old and expired Opera code signing certificate, which they have used to sign some malware. This has allowed them to distribute malicious software which incorrectly appears to have been published by Opera Software, or appears to be the Opera browser.

It is possible that a few thousand Windows users, who were using Opera between 01.00 and 01.36 UTC on June 19th, may automatically have received and installed the malicious software. To be on the safe side, we will roll out a new version of Opera which will use a new code signing certificate.

The title of the article is Security breach stopped, but that doesn't sound quite right to me.

The conclusions I reached, based on the announcement above, were:

- The network was breached.
- A code-signing key was stolen.
- Malware has been signed with it and circulated.
- At least one infected file was posted on an Opera server.
- That file may have been downloaded and installed by Opera itself.
- Cleanup and remediation has now been done at Opera.

That sounds a bit more like Security breach not stopped to me.

How else could a signed-and-infected file have been automatically downloaded by an already-installed instance of Opera?

Anyway, wouldn't Opera's auto-update have failed or produced a warning due to the expired certificate?

Until Opera has worked out the answer to these questions, Opera users probably want to assume the worst.

The good news is that the malware involved is widely detected by anti-virus tools, and the period of possible exposure via Opera itself was at most 36 minutes.

According to Opera, Sophos products block the offending file as Mal/Zbot-FG.

So, if you are an Opera for Windows user:

- Download a fresh copy of the latest version (since the buggy download appears to be a thing of the past).
- Make sure your anti-virus is up to date.
- If you can spare the time, do an on-demand ("scan now") check of your computer.

If we find out more detail about whether malware was distributed by existing Opera installations or not, we'll let you know.

Sophos can help with an emergency cleanup of your Windows PC.

You can use the standalone Sophos Virus Removal Tool to detect and clean malware. This tool can be used alongside your existing anti-virus. (Free download, no registration required.)

You can download a fully-functioning evaluation version of Sophos EndUser Protection for Windows and use it for malware detection, prevention and clean-up. (Free download, registration required.)

Or you can use the Sophos Bootable Anti-Virus utility. SBAV requires you to download a Windows program to create and then use a bootable CD or USB key, so some technical expertise is recommended. The advantage of SBAV is that it is immune to malware already on your PC, as it runs from a self-contained Linux-based operating system. (Free download, no registration required.)

Follow @duckblog



Friday, November 22, 2013

Google adds (some) malware and phishing info to Transparency Report

World wide web. Image courtesy of Shutterstock.

Google has expanded its Transparency Report data to include stats from its 'Safe Browsing' system, which keeps tabs on where malware and phishing sites are hosted.

The data is a little short on definition, but it does give some interesting insights into which hosting providers are doing the worst job of keeping their IP space clean.

The twice-yearly Transparency Report has traditionally covered more politically-sensitive topics - which countries are blocking access to Google services, and who's been asking Google to provide data on their users (or "product"), or to take stuff down that might be found offensive for some reason, or in breach of copyright.

Some of this stuff is interesting in itself, not least when it very nearly names-and-shames dodgy political and judicial figures trying to abuse their authority and silence their critics.

There's also quite a big question mark hanging over just how "transparent" it all is, in the light of the whole PRISM brouhaha.

For the most part it seems fairly detailed and fine-grained though, or at least gives the impression of trying to be, as far as "the man" will let them, with some of the data even provided as spreadsheets for proper looking at by proper science-y types.

The new data is based on the Safe Browsing programme, which combines scanning by Google and reports from the wider web world to keep tabs on where the bad stuff is at; browsers use the data to filter search results, to protect their users from potential malware and phishing.

It's a little less detailed; much of it consists of little graphs showing trends of malware and phishing spotted over time. Some of it is rather hard to find much value in, with data for related topics covering wildly different time periods and thus hard to compare.

Some of the graphs seem more useful, but may not be; an apparently clear, if somewhat loose, correlation between the number of malware sites and phishing sites picked up at any given time may imply a definite link between the two activities, but could also simply be showing how hard the Google scanning crew were working that week.

The one graph which does seem clear is the contrast between "attack" and "compromised" sites - i.e., sites deliberately set up to get you, versus legitimate sites that have been taken over by the bad guys. The graph shows actual attack sites on the increase recently, but still barely registering - it seems the compromised sites outnumber them massively, and always have.

Again, there is, of course, room for some sampling bias here - it's quite possible that the attack sites are better at hiding from Google, and of course they have no legit owners or admins to spot the compromise and report it.

Some numbers are available for these graphs, but they require some mouse skills to hover over the exact spot you're interested in.

The real detail is on the "Malware Dashboard" page though. This breaks down the sites recorded by the Safe Browsing scheme by Autonomous System (AS - basically an ISP or other large-ish body responsible for a subsection of the internet).

Google malware dashboard

It provides a rather undramatic world map highlighting which geographic regions are especially malware-ridden (nowhere's that much worse than anywhere else, it turns out), but then also breaks down the data by AS, including details of how many threats have been spotted in each.

The clear leader recently, using the default three-month view, is one called "Webair Internet Development", a US-based ISP on which Google has found 43% of sites checked have been malicious.

Looking at a sample of the domains they host seems to confirm some old stereotypes - it seems to be remarkably popular with gambling, pharmacy and porn sites, with domain names like "top3casino", "247-pharmacy" and "seemyass" jumping out of the list.

This impression is reversed by checking into the next two in the list though, American Access Integrated Technologies and Spain's True Records; both are listed as hosting 40% bad sites, but both are apparently hosting a random selection of legit-sounding domains (although, of course, there seems to be a fair amount of porn in both).

Again we come back to sampling error though.

The Webair listing says 43%, but as you may have spotted, that's 43% of sites checked. In the period covered, Google has only actually looked at 2% of the sites hosted there. So, it all comes down to how good the Safe Browsing team are at deciding which sites to check.

If they're super hot and have pinpointed all the bad stuff in the whole AS with just a few misses, we've got 43% of 2%, aka 0.86% - not such bad guys after all.

On the other hand, if they're really terrible and have foolishly started their scanning with the handful of clean sites on a seriously malware-riddled section, it could be as high as 98.86% danger.
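Those two figures fall straight out of the sampling arithmetic: only the scanned slice is known, and everything unscanned could be either all clean or all bad. A quick sketch of the bounds (the function is purely illustrative):

```python
def malicious_fraction_bounds(scanned_frac: float, bad_frac_of_scanned: float):
    """Bound the malicious share of an entire AS from a partial scan.

    Best case: every unscanned site is clean, so only the confirmed-bad
    sites count. Worst case: every unscanned site is also malicious.
    """
    confirmed_bad = scanned_frac * bad_frac_of_scanned
    best = confirmed_bad
    worst = confirmed_bad + (1.0 - scanned_frac)
    return best, worst

# Google scanned 2% of the AS and found 43% of that sample malicious
best, worst = malicious_fraction_bounds(0.02, 0.43)
# best is roughly 0.0086 (0.86%); worst is roughly 0.9886 (98.86%)
```

With only 2% scanned, the bounds are so wide that the headline "43% malicious" figure tells you almost nothing on its own.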

That's the problem with stats, really - and we're not even considering whether the results of the Safe Browsing checks could be in error.

Looking at the longer term, by turning the dial up to the maximum 1 year, the top five are all in the 80s and 90s, apart from number 1 which, rather intriguingly, is listed as "unknown" - they know it's the biggest, but can't say why.

All of this top five also list the percentage of the total AS scanned as "unknown". Not much for those real science-y people to play with here, unfortunately.

So what's the use of it all?

Well, the actual data on whether or not your site is listed is made available to site admins, which is helpful, but there's nothing new here. The main value of this new regular report, it would seem, is to highlight potentially dodgy providers.

So, if you're running a website and your provider comes high up in one of these lists, get in touch with them. Ask them, hey, what's up, are you some sort of haven for crooks, or just incompetent?

If they really are dirty, you might just get them to clean up their act. If not, you'll at least be helping keep them on their toes.

And if you've somehow got your mum's flower arranging club website registered with a Russian 'bulletproof' provider, then maybe this should give you fair warning it's time to move it on.

Follow @VirusBtn
Follow @NakedSecurity

Images of world wide web courtesy of Shutterstock.



Canadian cop claims he didn't know cyber-stalking was illegal

A Canadian police officer who pleaded guilty to planting spyware on his wife's BlackBerry has been sentenced to demotion, after two years' paid suspension.

According to local news sources, a mitigating factor in his sentencing was that he didn't know that planting the cyber bug was a crime.

The unnamed (for legal reasons) officer, from the police force of Sault Ste. Marie, Ontario, was apparently drinking heavily at the time, and suspected his wife of cheating on him with a close friend.

His defense counsel argued that both the jealousy angle and the lack of clarity around such spyware should weigh in the officer's favour - an opinion apparently supported by the eventual decision to grant a conditional discharge and place him on probation for twelve months last year, and now to sentence him to demotion to second class constable for at least two years.

The spyware he planted could apparently harvest chat and SMS data as well as monitor GPS location information, with the information gathered posted to a remote site and accessible from anywhere in the world.

The officer admitted to buying the spyware online, under his own name and with a credit card, from a US website advertising the tool as suitable for snooping on spouses suspected of infidelity.

The case was one of the first brought under new laws covering digital surveillance, as the judge at his original hearing last year pointed out. The case was treated as a gentle introduction to the new laws, with future offenders warned that they would not be treated so lightly.

It does seem a rather delicate slap on the wrist, especially for a police officer, who should be expected to be more up to speed than most people on what is permissible behaviour and what is, in fact, a crime.

The case highlights the difficulties surrounding "greyware" - "potentially unwanted applications" (aka PUA) - which most quality security products will alert on if asked to, but whose developers claim they are servicing a legitimate need.

The PUA issue has been around for quite a few years in the PC world, but is now becoming a particular problem in the mobile space, where this kind of snoopware is especially effective thanks to GPS location data and the intimate info many mobile users share by SMS, instant messaging and social networking apps.

With people only just starting to realise the need for security software on their mobiles to help spot stuff like this - as well as for simpler security practices such as screen locks and not letting strangers fiddle with your phone - this sort of story should help drive the point home.

And from the other side, it should make things clear to people thinking about using this kind of tool to snoop on their friends and neighbours: it's not just uncool, in many jurisdictions it's a crime.

Follow @VirusBtn
Follow @NakedSecurity

Images of man spying and GPS map courtesy of Shutterstock.



Wednesday, November 20, 2013

Facebook leak, Canadian spam, Opera breach - 60 Sec Security [VIDEO]

It's Saturday, and that means 60 Second Security, where we aim to touch on some of the more thought-provoking security topics of the past week in just one minute of video.

Why not give this week's video a go? [Higher resolution available directly from YouTube. Click the Captions icon for closed captions.]

Facebook suffers a data leakage crisis where information uploaded by X about Y may be downloadable by Z.
Canada is the last G8 country to go for anti-spam legislation. Only it just got delayed again. Might be ready by 2014. Or 2017.
A Korean graphic designer created an "anti-surveillance" font. It doesn't work, but, hey, it's the thought that counts.
And Opera wrote up a "Security attack stopped" incident. Except it was more like "Security attack not stopped."

(If you enjoyed this video, you'll find plenty more on the SophosLabs YouTube channel.)

http://twitter.com/duckblog

Tags: 60 Sec Security, 60 Second Security, 60 Seconds, 60SS, anti-spam, anti-surveillance, breach, browser, Canada, certificate, Code signing, data breach, Facebook, font, korean, leak, legislation, Malware, opera, PRISM, Spam, typeface, typography, zxx



Tuesday, November 19, 2013

Thieves pounce on one of a sheriff's office's last, unencrypted laptops

The sheriff's office in King County, Washington - the county that takes in Seattle - was in the process of adding encryption software this past spring, and as of March had done so on 60% of all its computers.

Wouldn't you know it? The laptop that got stolen from a detective's truck, unfortunately, was in the 40%.

According to KOMO News, the laptop and a personal hard drive, both full of case files, were stolen from the backseat of an undercover detective's pickup truck in March.

KOMO News reports that the case files contained personal information about thousands of crime victims, suspects, witnesses and even police officers, including sensitive data such as Social Security and driver's license numbers.

Last week, the office sent out 2,300 letters to all those who might now be vulnerable to identity theft.

Detective Sergeant Katie Larson said that the months-long delay in notifying those affected was due to the fact that the office needed time to figure out whom to notify:

"It's not something you can just press a button and it all pops up for you... Somebody had to go through and read everything and cull out all of that information."

(Actually, I'm pretty sure there are things called "data backups" that enable you to press a button and have things pop back up for you.)

The sheriff's office said this wasn't the first time they've lost data, but this was the worst data loss yet.

It raises the question, yet again, of why anyone would ever leave an unencrypted laptop containing highly sensitive information sitting around in a car.

Sheriff's office officials said that the detective hadn't followed policy and could now face discipline.

If it's any consolation, King County sheriff's office, you're in stellar company, joined by the likes of NASA itself.

But somehow, I don't think that will console those who got a data breach notification letter and now have to deal with the potential of identity theft...

Follow @LisaVaas

Follow @NakedSecurity

Image of sheriff's badge by unknown artist, labelled for commercial reuse under Creative Commons.



Facebook leaks are a lot leakier than Facebook is letting on

Remember last week, when Naked Security et al. told you that Facebook leaked email addresses and phone numbers for 6 million users, but that it was really kind of a modest leak, given that it's a billion-user service?

OK, scratch the "modest" part.

The researchers who originally found out that Facebook is actually creating secret dossiers for users are now saying the numbers don't quite match up.

The number of affected users Facebook noted in a posting on its security blog is far less than what they themselves found, and Facebook is also "hoarding non-user contact information - seen when it was also shared and exposed in the leak," writes ZDNet's Violet Blue.

The bug involved the exposure of contact details when using the Download Your Information (DYI) tool to access data history records, which resulted in access to an address book with contacts users hadn't provided to Facebook.

What that means is that even if you don't share details of your own personal information with Facebook, Facebook may well have gotten them through other people in your network who've let Facebook access their contact lists.

Facebook accidentally combined these "shadow" profiles with users' own Facebook profiles and then blurted both data sets out to people who used the DYI tool and who had some connection to the people whose data was breached.

It's understandable why Facebook users are steamed.

Facebook has gotten information you didn't choose to share, has retained it, and has inadvertently left it open for unauthorized access since at least 2012.

Some users, in fact, complained in comments that the bug persisted even after Facebook reportedly fixed it, according to Violet Blue.

Packet Storm reported on Wednesday that its researchers, who had prior test data verifying the leak, were able to compare what they knew was being leaked with what Facebook reported to its users.

Packet Storm claims that Facebook didn't come clean about all the data involved.

From its posting:

"We compared Facebook email notification data to our test case data. In one case, they stated 1 additional email address was disclosed, though 4 pieces of data were actually disclosed. For another individual, they only told him about 3 out of 7 pieces of data disclosed. It would seem clear that they did not enumerate through the datasets to get an accurate total of the disclosure...

"Facebook claimed that information went unreported because they could not confirm it belonged to a given user. Facebook used its own discretion when notifying users of what data was disclosed, but there was apparently no discretion used by the 'bug' when it compiled your data. It does not appear that they will take any extra steps at this point to explain the real magnitude of the exposure and we suspect the numbers are much higher."

Not only is the extent of exposed data likely to expand, Packet Storm says, but the number of people affected is much higher than 6 million, given that Facebook has only contacted its users.

Here's how Facebook replied when Packet Storm asked about contacting non-users about the breach:

"We asked Facebook if they enumerated the information in hopes that their reporting had a bug but we were told that they only notified users if the leaked information mapped to their name.

"We asked Facebook what this means for non-Facebook-users who had their information also disclosed. The answer was simple - they were not contacted and the information was not reported. Facebook felt that if they attempted to contact non-users, it would lead to more information disclosure."

That's a "weak, circular" argument, Packet Storm complains.

To better protect users' contact and personal information, the researchers suggest that Facebook can simply adopt this suggested flow:

1. When a person uploads someone's contact information, Facebook should automatically correlate it to what they have shared on their profile (and obviously only suggest them as a friend if their settings allow it). If their settings do not allow it, they should treat it as a user not in Facebook (see #2). If the information uploaded includes data specific to an individual who does not already have that data included in their profile, Facebook should provide a notification along the lines of:

"You are attempting to add data about John Smith that he has not shared with Facebook. How do you want to handle this situation?"

Two options are provided:

A) "Ask John Smith's permission to add this information"

B) "Discard additional information"

If they choose option A, John Smith is notified by Facebook the next time he logs in and gets to decide what he wants to do with HIS data. Seems simple enough.

2. When a person uploads someone's contact information and it does not correlate to any Facebook user, they should be able to use it for the Invitation feature with the caveat that Facebook automatically deletes all data within 1 week. The invite to the person can say "this link will expire in 1 week", which it should anyways. When an individual uses the invitation link to sign up, THEY will decide what information to share with Facebook.
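
Packet Storm's two-step flow boils down to a simple decision procedure. Here is a minimal Python sketch of that logic - all the names and data structures are hypothetical, purely to illustrate the suggestion, not anything Facebook actually runs:

```python
# Hypothetical sketch of Packet Storm's suggested contact-handling flow.
from datetime import datetime, timedelta

# Toy stand-ins for real storage: one registered user who has shared
# only an email address, and a table of pending (expiring) invites.
registered_users = {"john@example.com": {"shared_fields": {"email"}}}
pending_invites = {}  # contact address -> invite expiry time

def handle_uploaded_contact(contact_email, uploaded_fields):
    """Decide what to do with contact data that someone ELSE uploaded."""
    user = registered_users.get(contact_email)
    if user is None:
        # Step 2: unknown contact -> usable only for an invitation that
        # expires (and whose data is deleted) after one week.
        pending_invites[contact_email] = datetime.now() + timedelta(weeks=1)
        return "invite_sent_expires_in_1_week"
    new_fields = set(uploaded_fields) - user["shared_fields"]
    if new_fields:
        # Step 1: data the user never shared -> ask their permission or
        # discard it; never silently merge it into a shadow profile.
        return "ask_permission_or_discard"
    return "correlate_with_existing_profile"
```

In this model, data a contact never shared is either discarded or gated on their explicit permission, and unknown contacts are only ever stored long enough for a one-week invite - which is exactly the shadow-profile behaviour the researchers want eliminated.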

That does seem simple enough, but Facebook hadn't responded to the suggestion at the time of writing.

While we wait for Facebook to (maybe) fix a situation that seems far more widespread than originally reported, we can help each other out by immediately removing our imported contacts, to keep everybody's personal data out of this swamp.

If you haven't done so already, you can easily remove uploaded contacts here.

Follow @LisaVaas

Follow @NakedSecurity



Monday, November 18, 2013

Anatomy of a browser trick - you've heard of "clickjacking", now meet "keyjacking"...

An Italian security researcher has rediscovered a trick known as user interface redressing.

He's used the concept to detail some potentially risky behaviour in some versions of Internet Explorer on Windows 7 and 8.

As that's a fairly common combination, and because the trick is worth pondering for anyone who likes to be thoughtful about computer security, here's what Rosario Valotta came up with last week.

If you've ever been confused by the term UI redress, you aren't alone. To keep it clear, imagine it written as UI re-dress. It means that you put a new layer of clothes over an interface object as a sneaky way of changing its appearance, not that you right the wrongs that were done to it (the usual sense of "redress" when written as an unhyphenated word).

You may remember clickjacking, where your cursor is placed over a clickable button, such as a Facebook Like, that is itself placed over an innocent-looking image.

Then the button is made transparent, so that the image "re-dresses" the button and you think you are clicking on the image.

Valotta's trick is keyjacking, which is like clickjacking but with the re-dressing done the other way around.

You initiate a download window, which, at least under Internet Explorer 8 on Windows 7, produces a Run|Save|Cancel dialog.

You cover up the dialog with a window that looks like a CAPTCHA with R as the first character you need to type in.

Then you remove focus from the foreground window so that if the user does innocently press R, it is fed into the underlying dialog, not into the fake CAPTCHA window.

In IE 8 on Windows 7, that tricks you into choosing the Run option, so the downloaded file is launched automatically, apparently with your official blessing.

In clickjacking, you click on a button that is opaque to your mouse (so it accepts and processes your click), but transparent to your eyes. In keyjacking, you type a character into a window that is opaque to your eyes, but transparent to your keyboard (so it passes your keystroke through to a hidden window underneath).
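
The focus trick is the whole game: keystrokes are delivered to whichever window holds focus, not the one the user is looking at. A toy Python model (plain Python, not browser code) of that dispatch logic:

```python
# Toy model of keyjacking's core idea: keyboard input goes to the focused
# window, regardless of which window is actually visible to the user.

class Window:
    def __init__(self, name, visible):
        self.name = name
        self.visible = visible
        self.received = []          # keystrokes this window has accepted

    def send_key(self, key):
        self.received.append(key)

# The re-dressed UI: a fake CAPTCHA the user sees, over a hidden dialog.
fake_captcha = Window("fake CAPTCHA", visible=True)
run_dialog = Window("Run/Save/Cancel dialog", visible=False)

# The attacker's script quietly moves focus to the hidden pop-under...
focused = run_dialog

# ...so when the victim "types the CAPTCHA", the R lands in the dialog.
focused.send_key("R")

assert run_dialog.received == ["R"]   # the hidden dialog got the keystroke
assert fake_captcha.received == []    # the visible window got nothing
```

In a real browser, "moving focus" is done with JavaScript window manipulation, as in Valotta's demo, but the mismatch between what is visible and what has focus is the entire vulnerability.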

Here's what is supposed to happen in Valotta's demo, starting with the launch page:

If you click the button to launch the demo, it opens a window containing an invisible IFRAME that's populated, using JavaScript, with an EXE file:

Pushing an EXE file into the IFRAME initiates a file download and causes a double popup, the first to denote the start of the download, and the second to ask you whether you'd like to Run, Save or Cancel:

But you can't see any of this, because the window responsible for the download is a pop-under window, re-dressed on top with a window that appears to be asking for input, but isn't:

(In the on-line demo, the field into which you are supposed to enter the CAPTCHA text is actually an animated GIF containing a flashing cursor, for added realism. The CAPTCHA in the demo starts with E, which stands for Esegui, the equivalent of Run on Valotta's Italian-language version of Windows.)

In theory, then, the CAPTCHA acts as a realistic and innocent-looking subterfuge that sneakily tricks you into signalling Run to a dialog you can't see.

In practice, in my tests using a default installation of IE 8 on Windows 7 Enterprise, IE automatically averted the danger by blocking the download with a yellow security bar:

To initiate the download, you have to click on the security bar in the offending window, select the Download File... option from the dropdown menu that appears, and only then click Run or type R:

Since the security bar is out of sight, there doesn't appear to be an easy way to trick you into following that sequence of steps.

And if you're a Firefox user, like me, the subterfuge is immediately obvious, at least with Valotta's demo.

The hidden window doesn't pop up underneath, and both the IFRAME border and the download dialog are clearly visible by default:

(The u are not character string visible in the background is partially-obscured text from the fake CAPTCHA window shown above.)

Valotta says the trick does work under IE 9 and 10 on Windows 7, and IE 10 on Windows 8, so his discussion is nevertheless worth studying, especially if you design web applications for a living.

[NB. Please see Valotta's comment below pointing out my error in an earlier version of this article. Apologies for the misunderstanding.]

It's a timely reminder, in today's web-based, AJAX-heavy world, that what you see in your browser may not be precisely what you get, and that JavaScript's window focus and transparency features are ripe for visual abuse.

Some of the things you can do for additional security include:

Turn on as much of your browser's real-time protection (e.g. popup blockers and protected mode) as you can tolerate in order to reduce the risk of unwanted browser windows.
Use a web filtering product on your computer or as part of your network gateway in order to block access to suspicious URLs and files.
Ensure that your on-access (real-time) virus scanner is turned on in order to stop dangerous downloads from launching, even if they are successfully downloaded.

Follow @duckblog

Image of laptop keyboard courtesy of Shutterstock.



Saturday, November 16, 2013

Facebook pays $20K for easily exploitable flaw that could have led to account hijackings

Facebook has paid out $20,000 for a serious bug that could have allowed an attacker to hijack anyone's account with ease, with no user interaction on the part of the victim.

Jack Whitten, the UK-based application-security engineer (by day) and security researcher (by night) who discovered the flaw, said in a post mortem on Wednesday that he reported the hole to Facebook on 23 May and that it was fixed by 28 May.

The exploit was enabled by manipulating the way that Facebook handles updates to mobile phones via SMS.

As it is, Whitten explains, Facebook gives users the option of linking their mobile numbers with their accounts.

Users then can receive updates via SMS and can also login using their phone number rather than their email address.

Whitten found that when sending the letter F to Facebook's SMS shortcode - which is 32665 in the UK - Facebook returned an 8-character verification code.

After submitting the code into the activation box and fiddling with the profile_id form element, Facebook sent Whitten back a _user value that was different from the profile_id that Whitten modified.

Whitten says that trying the exploit might have led to having to reauthorize after submitting the request, but he could do that with his own password instead of trying to guess at his target's password.

After that point, Facebook was sending its SMS confirmations to the attacker's phone. From there, Whitten said, an intruder could initiate a password reset request on the targeted user's account and get the code back, again via SMS.

After a reset code is sent via SMS, the account is hijacked, Whitten wrote:

We enter this code into the form, choose a new password, and we're done. The account is ours.

Facebook closed the security hole by no longer accepting the profile_id parameter from users.
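
The underlying lesson generalises beyond this one bug: never let the client tell the server whose account an action applies to. Here's a hedged sketch of that class of fix, with hypothetical names (this is not Facebook's actual code):

```python
# Hypothetical sketch of the fix's general shape: the identity an action
# applies to must come from the authenticated session, never from a
# user-editable form field like profile_id.

def link_phone_to_account(session, form):
    # WRONG (pre-fix behaviour): trusting a tamperable form field.
    #   target = form["profile_id"]
    # RIGHT: act only on the identity the server itself authenticated.
    target = session["user_id"]
    return {"linked_account": target}

session = {"user_id": 1001}                          # the logged-in attacker
tampered_form = {"profile_id": 9999, "code": "ABC"}  # victim's id, spoofed
result = link_phone_to_account(session, tampered_form)
assert result["linked_account"] == 1001  # tampering has no effect
```

With the server deriving the target account from the session, fiddling with profile_id accomplishes nothing - which is effectively what "no longer accepting the profile_id parameter" means.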

This could have been a valuable flaw were it to fall into the hands of attackers who might have used it to steal personal data or send out spam.

As it is, one commenter on Whitten's post who obviously didn't understand the "it's now fixed" part of the story made the bug's value clear with his or her eagerness to figure out how to exploit it:

khalil0777 • a day ago

someone explain me how to exploit it i am realyy need it i wait your helps friends :/

:/ oh well, khalil0777, looks like you're too late for that party.

I'd say better luck next time, but perhaps instead I'll save my good wishes for Mr. Whitten.

May he enjoy his $20,000.

It was well-earned, and it's a bargain for Facebook even were the reward to be doubled, considering the grief that could have been caused by such an easy exploit.

Follow @LisaVaas
Follow @NakedSecurity

Images of money and thumbs up courtesy of Shutterstock.



Friday, November 15, 2013

Microsoft ready to cough up (potentially big!) bounty bucks for bugs

On Wednesday, Microsoft announced that it's now going to pony up with bounties that can reach $100,000 for vulnerabilities that can crack Windows, starting with the upcoming preview version of Windows 8.1, due to be released later this month.

But that's not all. Researchers who go beyond reporting novel exploits by sending in a whitepaper to describe "effective, practical, and robust" mitigation for qualifying exploits can get up to an additional $50,000 - or what Microsoft has dubbed the "BlueHat Bonus for Defense".

Facebook, Google, Mozilla and Twitter have all offered bounties for some time, but those have ranged from a few hundred to several thousand dollars.

In contrast, Microsoft's bounties are downright lavish.

Plus, they pertain specifically to research on products still in beta.

Its bug bounty program for Internet Explorer 11 Preview, which will pay out up to $11,000 for unique exploits, runs between 26 June and 26 July 2013, so Microsoft is urging researchers to get hopping on preparing those reports.

Microsoft senior security strategist Katie Moussouris said in a blog post that rewarding researchers earlier in the game is better for all:

"[Many organizations] don’t offer bounties for software in beta, so some researchers would hold onto vulnerabilities until the code is released to manufacturing. Learning about these vulnerabilities earlier is always better for us and for our customers."

Maybe it's late to the bug bounty game, but given the generous rewards and the focus on finding bugs early while products are still in beta, there's a greatness to Microsoft's lateness.

Follow @LisaVaas
Follow @NakedSecurity

Image of magnifying glass courtesy of Shutterstock.



Wednesday, November 13, 2013

Hey board directors, help your companies fight cybercrime - and yes, it matters

Boardrooms need to "wake up" to the danger of cybercrime, according to a recent report.

The UK's ICSA, commissioned by the government's Department for Business, Innovation and Skills (BIS), issued the guidance document on how boards can better understand and cope with the threats posed to businesses by malware, hacking, cyber espionage and other digital dangers.

Now, in security circles, "ICSA" generally refers to a leading security testing and certification body, formerly known as NCSA. Or, in some specialist cases, the International Chinese Statistical Association (for some reason, founded in San Francisco and registered as a non-profit in Delaware).

But no, the ICSA we're talking about here is the Institute of Chartered Secretaries and Administrators.

Their report didn't get much attention when it first appeared a few weeks ago. In fact, I didn't spot it until the press release was picked up by, of all places, an Isle of Man-based news site.

So, I hear you ask, what's the rumpus? A bunch of people moan about their bosses' ignorance, and no-one really listens. Big deal.

Two things though. First, these are not the people who do the typing and answer the phones. Important and delightful as those secretaries are, these are corporate secretaries, a whole different thing.

Corporate secretary is a high-power position, basically sitting between the board of directors and the company at large, ensuring the board gets the information it needs from the company, and the company acts on the board's decisions.

The ICSA is the body representing the most experienced and highly-qualified corporate secretaries in the UK, and rightly refers to itself as "a recognised authority on corporate governance and compliance". So, if they say boards are paying too little attention to cyber issues, you can be pretty sure they're right.

Second, their report (PDF) provides some pretty good advice. It gives a clear, simple breakdown of the dangers businesses might face, stressing the need to weigh up the risks specific to a given organisation and the importance of focusing on resilience in the face of attack:

The cyber threats facing businesses and their supply chains cannot be prevented through investment in technology alone. It requires comprehensive risk assessment processes to identify and prioritise the protection of critical information assets.

It puts particular emphasis on the problem extending to all parts of a company:

Internal functions such as HR, finance, legal and marketing may not appreciate the extent to which critical information is at risk, nor realise the potential impact of a cyber attack on their organisation. ...Day-to-day control of cyber risks should not be left to the IT department.

Few companies can survive these days without some sort of internet presence, and even the smallest are likely to be making ever more use of information technology.

For most, all this is still a relatively new side of doing business, and it changes and evolves at a bewildering pace. This exposes firms to a whole new world of risk, which many staff - especially in senior roles - have minimal understanding of.

Board positions tend to be very senior roles indeed, so members might not be in touch with the fast-moving world of cyber security.

They also tend to be filled from a limited set of backgrounds, mainly financial, sales, marketing and legal areas with limited uptake of people from more technical departments. But their input and backing is vital to ensure cyber security is given the proper emphasis at every level.

It seems that board members need all the help and advice they can get when it comes to shoring up their firms against digital dangers.

So, if you're a board member, read the guidance, and act on it. If you're working for a board which isn't helping with your cyber security needs, try subtly pointing them towards this kind of advice - it just might sink in.

Follow @VirusBTN
Follow @NakedSecurity

Images of hand shadow, boardroom and blindfolded man on computer courtesy of Shutterstock.



Tuesday, November 12, 2013

The day I caught an ATM card catcher

Ever found a card catcher in your local cash machine? A few years ago I did, and they're surprisingly easy to dismantle - but in my case, a little more thought should have been applied to the possible consequences.

On a summer evening at my local pub, I ran out of cash and headed over the road with a friend to use the ATM. All regular stuff.

The machine ate my card. Initially cursing my luck and thinking there was just an issue with my card, I explained the problem to my friend and pawed lamely at the card slot on the machine.

But fiddling with the machine made me notice something. The facing around the card slot had moved a little. I picked at it and managed to drag out the attached magnetic tape that was holding my card. A card catcher.

This is a very simple device that retains your card so that someone else can retrieve it. There are quite a few variations of the theme but they all feature a tape of sorts that slips into the machine and catches the card, plus a front piece that sits in front of the usual ATM card slot.

Feeling as though I’d vandalised a cash machine, I noticed that the ATM was no longer in service. So, something had at least been triggered. A couple of women who had been hanging around us at the machine had now magically vanished - I realised afterwards that they were probably waiting to pick up the card, once I'd decided I wasn't going to get it back.

Sometimes a card catcher is set up with a small camera on the ATM to record the PIN and then, having snatched the card, others are able to access the associated bank account.

Card catchers and skimmers have a related purpose but work differently. A skimmer reads your card on its way into the regular card slot and stores the data for later. A card catcher pretends to retain your card so you assume the bank has confiscated it. Skimmers let crooks make a digital counterfeit of your card later. Card catchers let crooks get hold of your actual card (including the chip, if it has one, and the security code printed on the back).

So, card catcher in hand, and not really thinking about the fact I was holding a tool for committing crime or that someone might want it back, I took it with me back to the pub. We marvelled over the sticky panel and the now rather bent and mangled loop that I had crumpled getting my card out.

I called the bank, of course, and told them what had happened. A very kind phone operator sounded suitably horrified but had no idea what it was that I had been describing. She did however flag my account for any suspicious activity - so I guess that was a start.

I also called the local police station. They told me to throw it away. Nothing more or less than that.

I took this at face value and worked on destroying the device as much as possible before throwing it in the bin behind the bar at the pub.

I wondered if things had changed in recent years so contacted my bank and local police station to see what their advice would be now. Luckily, it's very different. My bank told me:

In the evening, call the police and let them know if you have found a card catcher. Call the local number, or if the situation appears to be threatening to you, then call 999 [the UK emergency services number]. If you find one during the day, go into the branch of the bank and let them know.

The bank also alerted me to warning messages that are either on the ATM itself or on its screen. But when I went to check, I found the messages are in among the ads for different bank services so they're not immediately obvious. And that's if you even read the messages before you put your card in - I can't say I do.

Round at the local police station, a nice desk officer told me that the advice for the public is to be wary of groups of people around an ATM, look closely at the cash machine and check if it looks shabby.

If you find a device on a cash machine, call the local police immediately and they will send someone to retrieve it.

The best advice was to use a cash machine that is inside a bank branch where possible - there’s less of a chance that something can be stuck on an ATM when it’s inside the building.

Banks are also redesigning their ATMs so that it is harder to attach a card catcher. You may have seen curved areas where you put your card in, leaving only a small area on the lip of the card slot. This makes it a bit harder to stick something to the front of the machine.

More recently a man serving time in a jail in Romania for taking part in ATM fraud decided to put his knowledge to a different use and designed a revolving system for putting cards into cash machines so that it is harder for others to commit ATM fraud.

It’s probably best not to do as I did - carrying a tool for catching cards is not a good idea and making off with it in front of people who might want it back is also not so wise.

If you can’t get your card out again, there’s not much you can do other than call your bank and cancel your card, as well as letting the local police know so they can take the apparatus from the machine.

Of course, prevention is always best so at least being wary and taking a closer look at a cash machine before you put your card in is a smart option.

These days there’s usually another machine not too far away and it’s usually worth the leg work to avoid having to cancel your card.

Follow @jemimah_knight

Follow @NakedSecurity

Here's a demonstration of some of the things to look out for when you're using a cash machine, courtesy of Naked Security's friends at the Queensland Police Service in Australia. This video focuses on skimmers, but the same advice applies to card catchers too. [Note: 000 is the Australian emergency services number.]



View the original article here

Anatomy of a cryptoglitch - Apple's iOS hotspot passphrases crackable in 50 seconds

Over 170,000 people are part of the Sophos community on Facebook. Why not join us on Facebook to find out about the latest security threats.

Hi fellow Twitter user! Follow our team of security experts on Twitter for the latest news about internet security threats.

Already using Google+? Find us on Google+ for the latest security news.

A posse of computer scientists at the University of Erlangen in Germany has published a well-worth-reading paper about Wi-Fi security on Apple's iOS.

In the hope that you'll end up reading the original paper, in order to give Messrs Metz, Freiling and Kurtz, the authors, the click-respect they deserve, I'll tell you briefly what they found.

Apple iDevices with 3G support can be used as Wi-Fi access points.

Many users turn this feature on and off while they're out-and-about, for example to help friends and colleagues jump online at the coffee shop.

For this reason, iOS calls the feature "Personal Hotspot."

Hotspots are meant to be easy to use, so Apple included a feature that lets you automatically generate a WPA passphrase that you can read out to your friends and that they can type in easily.

The passphrase, of course, is also supposed to be reasonably secure, so that the guys sitting at the next table can't crack it and then decrypt all your network traffic, possibly even as you sit there and work.

Apple's iOS passphrase generator therefore creates a pronounceable string of up to six characters, and combines it with a four digit number for the sake of variety.

If we reasonably but naively assume that 1% of all six-character strings are suitably pronounceable, that gives a choice of 0.01 x 26^6 x 10,000 passwords, or about 30 billion (US billions, i.e. 30 x 10^9).

And if we reasonably but naively assume that a half-decent laptop can test 3000 WPA keys per second against a sniffed Wi-Fi session, that means about 120 days to complete an exhaustive check of all possible passwords.
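The back-of-envelope arithmetic above is easy to reproduce. This sketch uses the article's own assumptions (1% pronounceability, 3000 WPA keys/second on a laptop CPU), which are estimates rather than measured values:

```python
# Back-of-envelope estimate of the naive passphrase keyspace and crack time.
PRONOUNCEABLE_FRACTION = 0.01   # assumed: 1% of six-letter strings sound like words
SIX_LETTER_STRINGS = 26 ** 6    # all lowercase six-character strings
DIGIT_SUFFIXES = 10 ** 4        # the four appended digits
KEYS_PER_SECOND = 3000          # assumed laptop-CPU WPA cracking rate

keyspace = PRONOUNCEABLE_FRACTION * SIX_LETTER_STRINGS * DIGIT_SUFFIXES
days = keyspace / KEYS_PER_SECOND / 86400   # 86,400 seconds in a day

print(f"keyspace ~ {keyspace:.1e}")            # roughly 3.1e10, i.e. about 30 billion
print(f"exhaustive search ~ {days:.0f} days")  # around 120 days
```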

Note: to recover WPA passwords an attacker needs to be sniffing packets when you first connect, and to capture the so-called authentication handshake at the start of the session. At a coffee shop, you should assume an adversary will acquire your authentication packets.

Kurtz, Freiling and Metz [KFM] didn't just assume: like good researchers, they set out to investigate.

First, they clicked the iOS option a few times to generate some Personal Hotspot passphrases.

The pronounceable strings looked like words, so they wrote them down and searched on the internet to see whether those words appeared together in a downloadable list.

Bingo! (Or, more accurately, Scrabble!)

They found a word list, extracted from an open source Scrabble game, consisting of 52,000 words that always seemed to include the ones generated by iOS.

That meant just 52,000 words x 10,000 digit combinations, for a grand total of 520 million possible passphrases.

That would take about 48 hours to crack on a laptop CPU at 3000 WPA keys/second, or about eight hours (six times faster) using a modest graphics card.

Note: WPA password cracking requires you to salt and hash each potential passphrase into a 256-bit master key using the PBKDF2 algorithm. This is deliberately designed to be slow, requiring approximately 16,000 iterations of the SHA-1 hash function for each passphrase you try. Nevertheless, SHA-1 calculations can be sped up dramatically using Graphics Processing Units (GPUs).
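The key stretching described above is standard WPA/WPA2-PSK derivation, and you can reproduce it with Python's standard library. The passphrase and SSID below are made-up example values:

```python
import hashlib

# WPA/WPA2-PSK derives a 256-bit Pairwise Master Key from the passphrase
# using PBKDF2-HMAC-SHA1, salted with the network's SSID, at 4096 iterations.
# The 32-byte output needs two 20-byte PBKDF2 blocks, and each HMAC-SHA1 call
# costs two SHA-1 invocations: 2 x 4096 x 2 = 16,384 SHA-1 calls per guess,
# which is the "approximately 16,000 iterations" figure above.
def wpa_master_key(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha1",
        passphrase.encode("utf-8"),
        ssid.encode("utf-8"),
        4096,   # iteration count fixed by the WPA standard
        32,     # 256-bit key
    )

# Example values only - not a real network.
pmk = wpa_master_key("head7510", "MyPersonalHotspot")
print(pmk.hex())
```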

Guessing that Apple wasn't using the Scrabble game data in its passphrase generator, KFM decided to see if they could narrow down their 52,000-word dictionary.

Disassembling the passphrase generator code in iOS, they found that it worked like this:

1. Feed a pseudorandom non-word into the spelling checker and see what comes back.
2. Append four pseudorandom digits.

The code instructs the spelling checker to restrict its choices to words of four to six letters in length.

Now consider that when you feed a pseudorandom string into a spelling checker, you won't get a pseudorandom result.

There isn't a straightforward and consistent mapping between the set of possible unpronounceable words and all known words, at least in the English language.

So KFM wrote their own implementation of the passphrase generation code and ran it 100 million times with pseudorandom input.

Only 1842 different words came back, with some of them very much more likely than others.

Ouch!

Now, only 18 million possible passphrases remained!
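You can see how this sort of collapse happens with a toy simulation. Here difflib.get_close_matches stands in for the spelling checker, and the word list is invented; Apple's real code and dictionary are not reproduced, but the skew effect is the same in kind:

```python
import difflib
import random
import string
from collections import Counter

random.seed(1)  # make the run repeatable

# Invented stand-in word list; iOS's real candidate list held ~52,000 words.
WORDS = ["apple", "apply", "brick", "bring", "crane", "crank",
         "dread", "dream", "flame", "flake", "gloss", "glove"]

def passphrase_word(rng):
    # Mimic the iOS approach: generate a pseudorandom non-word, then let a
    # spelling checker "correct" it to the closest real word.
    junk = "".join(rng.choice(string.ascii_lowercase) for _ in range(5))
    # cutoff=0.0 guarantees at least one match comes back.
    return difflib.get_close_matches(junk, WORDS, n=1, cutoff=0.0)[0]

counts = Counter(passphrase_word(random) for _ in range(5000))
print(counts.most_common())  # typically skewed: some words appear far more often
```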

KFM then tried a four-GPU rig of slightly more powerful graphics cards - something many attackers would have access to - and found that they needed just 50 seconds to churn through all possible hotspot passphrases and thus to guarantee a crack.

If you're like me, you'll prefer to use your own Personal Hotspot even when password-protected free Wi-Fi is available, on the grounds that other people who already know the WPA password can intercept your handshake and then read all your traffic in real time.

That makes the passphrase choice for your Personal Hotspot as important as the passphrase you choose for your fixed-line Wi-Fi router at home.

Unfortunately, while Apple's automatic passphrase generator for iOS may give the impression of "pronounceable randomness," it actually gives a false sense of security because it is far too predictable.

Note: Ironically, if iOS generated passcodes of only seven digits (for 10 million possible passcodes), you might consider it safer, if no more secure, since at least there would be no false sense of security. The limitation would be self-documenting.

The lessons we can learn from this are:

- Algorithms which look cryptographically reasonable from a few sample runs may turn out to be completely flawed.
- Community cryptographic testing and peer review are vitally important, so avoid proprietary algorithms if you can.
- Spelling checkers aren't supposed to be pseudorandom generators.
- Anyone who knows your WPA key and is around when you connect to your network can decrypt your traffic in real time.
- Anyone who is around when you connect and can sniff your traffic can attempt to crack the password and decrypt your traffic later.
- Choose your own passphrase, and make it a good one, when using iOS's Personal Hotspot.

We recently made a short video on the topic of personal Wi-Fi security.

We included a section giving you some practical and visual advice on how to choose and remember decent WPA Wi-Fi passphrases [click the Captions icon during playback for closed captions]:

Enjoy the video, and be careful out there!

Follow @duckblog

Tags: Apple, crack, dictionary, Freiling, hotspot, ios, Kurtz, Metz, passphrase, password, Wi-fi



Sunday, November 10, 2013

Who is SophosLabs: Numaan Huq, Threat Researcher


SophosLabs is at the center of Sophos. It's the place where highly skilled experts in the field work round the clock to build protection from the latest threats.

But who works there?

In the first of this series, we're talking to Numaan Huq, Senior Threat Researcher from SophosLabs Vancouver.

I am originally from Dhaka, Bangladesh. I moved to Canada for school back in 2000.

I have a BSc ('04) and MSc ('09) in Computer Science from the University of Victoria (UVic) in Victoria, BC. My focus in my senior years and in grad school was on networks. My MSc project thesis entitled "Performance Analysis of Cascaded Policing System" was on network traffic shaping.

I've been an avid fan of fantasy novels since grade 7. I listen to whatever music fits my mood and my fancy - I don’t have a fascination for any particular genre.

I'm a big fan of soccer and Formula 1 racing. I’m currently harboring a dream of going mountaineering and am targeting a climb of Mount Rainier (elevation 14,410 feet) in Washington State next year. But first I need to whip myself into shape.

I can't live without my internet-enabled phone.

My friends say I cook tasty food, but then maybe they're just happy getting a free meal. They sing songs about my deeds so I invite them again.

I did an internship where I worked on programming Voice over IP (VoIP) phones. Part of my responsibility included pen-testing the VoIP phones to see if we could get root access to the device.

When I was looking for employment, the job at Sophos seemed like a natural extension of my experience. It also helped that it was in the same city I was living in!

My specialty within the Labs is on APTs (advanced persistent threats), web threats and vulnerabilities. I am co-author of a paper titled "Trapping unknown malware in a context web" which has been accepted to the VB2013 conference in Berlin this October.

Currently I'm conducting research on malware that targets point of sale (PoS) systems.

I am also the SophosLabs contact for Microsoft's MAPP and I coordinate and contribute to SophosLabs' Patch Tuesday processing. My other interests include OS architecture and encryption algorithms.

The best thing about my work is definitely the team. We have a set of very talented and hardworking people here in Canada who truly believe that we can do things better, while at the same time making a difference. That attitude rubs off on you quickly, and it motivates me.

I've had two super memorable moments in SophosLabs:

- When I wrote my first ever virus detection and disinfection, W32/Ngvck-U, back in August 2007. I worked very hard in analyzing the virus, figuring out how the infection routine worked and then writing and re-writing the disinfection to meet the Labs standard. I felt very proud that day and it solidified my self-confidence.
- When I was asked to figure out the "attack graph" of an APT for a very important customer. All I had was a mess of packed, seemingly disjointed files, out of which I needed to build a complete picture. It took almost a week of intense reversing to try to connect the dots and decipher where the APT had come from, how it penetrated and propagated, and finally what it attempted to mine. Again, it was a superbly satisfying exercise.

To me, the biggest threat in the next few years is APTs. Though it is a term much loved by the media, in reality it is a serious threat which will only mature. The threat vectors will get more complicated because operating systems and their ecosystems are becoming more secure and more dynamic all at once. This forces the creation of more complicated and innovative methods of exploitation.

Social engineering is as old as malware. There is a popular saying in SophosLabs: "The biggest vulnerability lies between the chair and the computer."

The business of mass-produced malware generated using crime kits is in its infancy and I predict it will become more sophisticated and user-friendly, leading to an exponential increase in the volume of malware. I think new business models will emerge for malware, driven mostly by the latest technology trends of that period.

When giving security advice, I tend to tell people to run Macs. It's not like OSX is bullet-proof, but their market share is small so they don't make a lucrative target for malware authors. This is why we see so little malware for OSX.

If you're tech-savvy, go ahead and run Linux.

My recommendations: Encrypt, encrypt, encrypt everything! Most modern operating systems have simple options to enable encryption. And always protect your mobile device with a strong password.

For security reading, I recommend Naked Security of course! And Virus Bulletin has a link to most of the popular computer security blogs.

If you're trying to break into the security field - read lots. This is a dynamic place and following blogs and forums is a great way to get your head around the security space. Visit a couple of conferences to 'meet and greet' people in the industry.

A popular free conference is Security B-Sides and most major conferences have student registration prices. Attend a "dojo" or training session at one of these conferences; they are a bit pricey but extremely helpful.

In the future, I would definitely like to stay within the realms of computer security. In ten years I might transition from researcher to a management role but as they say, "Yesterday is history. Tomorrow is a mystery. Today is a gift."

Want to know more about SophosLabs?

Check out our YouTube playlist, or read more here.

Follow @SophosLabs
Follow @NakedSecurity



Facebook issues data breach notification - may have leaked your email and phone number


Facebook just published a data breach notification on its security blog.

You might not immediately notice that from the title of the article, which announces itself as an "Important Message from Facebook's White Hat Program."

But the social networking giant is, indeed, reporting a data leakage problem.

The silver lining is that the quantity of data wrongly disclosed due to Facebook's bug seems to be modest, at least by the standards of a billion-user service.

The cloud (bad pun intended) is that Facebook's systems made the fault possible in the first place.

Facebook, understandably, isn't giving the gory details of the bug and how it could have been exploited, which makes the big picture hard to see.

What it is saying is this:

We recently received a report to our White Hat program regarding a bug that may have allowed some of a person’s contact information (email or phone number) to be accessed by people who either had some contact information about that person or some connection to them.

So let me tell you what I think the story is all about.

Bear with me, please: I'm going to take a while to set the stage first.

Imagine that Charlie Smith - one of thousands of people with that name - is on Facebook.

He's chosen to tell Facebook his email address, chazza@example.org, but not much more. He hasn't shared where he lives, the name of his employer or his phone number.

Alice joins up and decides to let Facebook at her contact lists. (Facebook squeezes you pretty hard to try to persuade you to upload as much as possible about your web of friends, for reasons that will become obvious in a moment.)

She knows a Charlie Smith; her Charles has a phone number of +1.500.555.5000, and an email address of chazza@example.org.

Facebook can now cross-match the email address and suggest that she might want to try to hook up with Charlie.

Chances are, of all the C. Smiths on Facebook, this is the one she knows.

She sends a Friend Request; it was the right Charlie, and he accepts it.

So far, so good.

Later, Bob comes along.

His contact list, which he yields up to the Facebook empire, identifies a chazza@example.org, known as Charlie Smith, currently living in Someplace, Pennsylvania, and working for the Acme Pointed Stick company.

Facebook likewise puts Bob in touch with Charlie, and thus indirectly with Alice, and the three of them end up as Facebook friends.

Alice is happy; Bob is happy; and, since he agreed to the Friend Requests, we assume Charlie is happy too.

Easy as A-B-C.

Of course, Facebook is the happiest of all, because it now knows (or can make a staggeringly likely guess at) a bunch of personal information about Charlie that he himself chose not to reveal.

Of course, as more people share more information about their contacts, and implicitly confirm the identity of those contacts through the Facebook friendships they forge, Facebook builds up an ever more detailed picture of everyone.

Welcome to the wonderful world of data mining.
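The aggregation described above boils down to merging contact records keyed on a shared identifier such as an email address. A minimal sketch, using the article's hypothetical Alice, Bob and Charlie (all values invented):

```python
# Merge contact-list uploads keyed by email address - the way a social
# network can assemble a profile from fields its subject never supplied.
uploads = [
    # From Alice's contact list:
    {"email": "chazza@example.org", "name": "Charlie Smith",
     "phone": "+1.500.555.5000"},
    # From Bob's contact list:
    {"email": "chazza@example.org", "name": "Charlie Smith",
     "city": "Someplace, PA", "employer": "Acme Pointed Stick"},
]

profiles = {}
for record in uploads:
    profile = profiles.setdefault(record["email"], {})
    for field, value in record.items():
        profile.setdefault(field, value)  # keep the first value seen per field

# Charlie told the network only his name and email, yet the merged profile
# now also holds his phone number, city and employer.
print(profiles["chazza@example.org"])
```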

You don't have to like this sort of thing, but there's not a lot you can do about it.

Even staying away from sites like Facebook, or "resigning" from them if you're already on, might not help very much.

After all, in our hypothetical example above, Charlie Smith only gave his name and email address; his address, employer and phone number were provided by other people, presumably with their informed consent.

Note: Alice and Bob may not have thought through the consequences of letting Facebook at their contact lists, but it was their choice to populate their contact databases with the sort of detail they did, and their choice to let Facebook at that data.

What Facebook seems to be admitting to, in Friday's breach notification message, is that it was careless with the aggregated data accumulated from contact list uploads.

The problem, says Facebook, lay in its Download Your Information (DYI) feature, which exists so you can suck down everything you've previously entrusted to the social networking giant.

Ironically, DYI itself is an important security component of Facebook, because it helps to deal with two serious concerns about cloud-style services:

- DYI improves availability, because it allows you to make your own off-site backup of everything you've stored on Facebook.
- DYI improves transparency, because it acts as a record of everything you've uploaded to Facebook over the years.

But there was a bug in DYI, of the data leakage/unauthorised disclosure sort.

Apparently, DYI was capable of letting you download more than you'd uploaded in the first place.

Using our example above, Bob might have ended up receiving Alice's contact data about Charlie, as well as his own, when he hit the DYI button.

In other words, Bob wouldn't just get back Charlie's address and workplace, which is what he himself uploaded, but might also have ended up with Charlie's phone number, courtesy of Alice.
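In other words, the bug amounts to a missing provenance filter: an export should return only the fields a given user uploaded, not the merged profile. This is a hypothetical model of that class of flaw; none of it reflects Facebook's actual code:

```python
# Hypothetical model of the DYI flaw. Contact fields are stored with their
# uploader recorded, but the buggy export ignores that provenance.
stored = [
    # (uploader, contact email, field, value)
    ("alice", "chazza@example.org", "phone", "+1.500.555.5000"),
    ("bob",   "chazza@example.org", "city",  "Someplace, PA"),
]

def dyi_buggy(user):
    # Bug: returns every known field for any contact the user ever uploaded,
    # including fields contributed by other people.
    emails = {email for uploader, email, _, _ in stored if uploader == user}
    return [(e, f, v) for _, e, f, v in stored if e in emails]

def dyi_fixed(user):
    # Fix: return only the fields this user uploaded themselves.
    return [(e, f, v) for uploader, e, f, v in stored if uploader == user]

print(dyi_buggy("bob"))  # also leaks the phone number Alice uploaded
print(dyi_fixed("bob"))  # only Bob's own city entry
```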

That's not good at all.

It's especially bad for Charlie, who not only didn't open up his phone number to his Facebook friends, but chose not to upload it in the first place.

Facebook chose to release its statement about this breach on Friday evening, which has already raised the eyebrows of former Naked Security denizen Graham Cluley.

Friday nights, he argues, are the traditional time for burying the sort of announcements you make of necessity rather than by choice.

You can see why Facebook might want this to be a weekend story: there's a chance that it might cause some companies to rethink their "Facebook at Work" strategies, and go back to the old days where Facebook was blocked outright.

That would put a dent in Facebook's daytime traffic, for sure.

After all, if someone shares their contact list while they're at work, they might end up sharing a whole lot more, about many more people, than they really intended.

And Facebook just admitted that, somewhere in its cloud, was a bug that prevented it from taking proper care of that data.

Facebook turned off DYI once the bug was disclosed, fixed it, turned DYI back on again, and published its data breach notification.

Even if you take a cynical view of the timing and the title of the notification, I think you should be happy about some aspects of this cautionary tale:

Respect to the finder of the bug for disclosing it responsibly to Facebook so it could be fixed, even though he'd probably have got a lot more publicity if he'd told the world first.Thanks to Facebook for having a bug bounty programme so that the finder gets some sort of reward for doing the right thing.Well done to Facebook for taking the bug report seriously and fixing the problem.Congratulations to those jurisdictions that have passed strong data breach notification laws, so that this sort of problem can't just be swept under the carpet.Huzzah to those of you who take the stance of not sharing contact lists with social networking sites, on the principle that "if you don't share it, they can't lose it."

Follow @duckblog

