
Friday, June 20, 2014

Feds swoop in, snatch mobile phone tracking records away from ACLU


The American Civil Liberties Union (ACLU) filed a run-of-the-mill public records request about cell phone surveillance with a local police department in Florida.

The US Marshals Service last week reacted by swooping in and snatching those records out from under the ACLU's nose, just hours before ACLU lawyers were due to review them.

After seizing the surveillance records, the US Marshals moved them 320 miles away, meaning the ACLU wouldn't be able to learn how, and how extensively, police use snooping devices.

The ACLU promptly filed an emergency motion to get local police to disclose the records, which detailed how police had used a stingray to track nearby phones to a suspect’s apartment without getting a warrant.

A Florida judge last Tuesday granted the ACLU's emergency motion.

A stingray is a surveillance device that sends powerful signals to trick cell phones - including those of innocent bystanders - into transmitting their locations and their IDs.

The ACLU called the records grab an "extraordinary attempt to keep information from the public".

Even a former judge and a former United States magistrate judge told Ars Technica that they found the US Marshals' action "weird" and "out of line".

Former US magistrate judge Brian Owsley had this to say:

This one is particularly disturbing given the federal government's role in coming in and taking all of these records that were at issue in a state open government act.

In order to spirit away the records, the ACLU explains, the US Marshals waved a wand over Sarasota police detective Michael Jackson and transmogrified him - and the records - into their own property:

The Sarasota Police set up an appointment for us to inspect the applications and orders, as required by Florida law. But a few hours before that appointment, an assistant city attorney sent an email cancelling the meeting on the basis that the US Marshals Service was claiming the records as their own and instructing the local cops not to release them. Their explanation: the Marshals Service had deputized the local officer, and therefore the records were actually the property of the federal government.

The ACLU called the Marshals' actions highly irregular:

The Sarasota detective created the applications, brought them to court, and retained the applications and orders in his files. Merely giving [the detective] a second title ('Special Deputy US Marshal') does not change these facts. But regardless, once the Sarasota Police Department received our records request, state law required them to hold onto the records for at least 30 days, to give us an opportunity to go to court and seek an order for release of the documents.

Last week, Ars Technica reported how use of the stingray in a Tallahassee, Florida, rape case only came out once testimony from a local police officer was unsealed.

The detective had told the court that he would only testify about how the stingray was used if his testimony was not made public.

That's because, the assistant attorney general told the court, the police were under a non-disclosure agreement (NDA).

Late last Tuesday, the judge ordered unsealing of the entire transcript of the suppression hearing.

The ACLU published the portion that, it says, the government tried to keep secret.

The ACLU says the released information "confirms key information about the invasiveness of stingray technology", including that:

Stingrays "emulate a cellphone tower" and "force" cell phones to register their location and identifying information with the stingray instead of with real cell towers in the area.Stingrays can track cell phones whenever the phones are turned on, not just when they are making or receiving calls.Stingrays force cell phones in range to transmit information back "at full signal, consuming battery faster."When in use, stingrays are "evaluating all the [cell phone] handsets in the area" in order to search for the suspect’s phone. That means that large numbers of innocent bystanders' location and phone information is captured.In this case, police used two versions of the stingray - one mounted on a police vehicle, and the other carried by hand. Police drove through the area using the vehicle-based device until they found the apartment complex in which the target phone was located, and then they walked around with the handheld device and stood "at every door and every window in that complex" until they figured out which apartment the phone was located in. In other words, police were lurking outside people's windows and sending powerful electronic signals into their private homes in order to collect information from within.The Tallahassee detective testifying in the hearing estimated that, between spring of 2007 and August of 2010, the Tallahassee Police had used stingrays "200 or more times."

I agree with a commenter on Ars's coverage, CQLanik, who noted that if a local police department can't let the public know about the shady methods used to gather its evidence, then those methods shouldn't be legal:

People have a right to face their accuser, and that right is being taken away by the use of secret evidence gathering.

What do you think?

Follow @LisaVaas

Follow @NakedSecurity

Image of Statue of Liberty courtesy of Shutterstock.



Thursday, June 19, 2014

Patch Tuesday for June 2014 - 7 bulletins, 3 RCEs, 2 critical, and 1 funky sort of hole


The elevator pitch for this month's Microsoft Patch Tuesday is as follows:

Seven bulletins. Three remote code execution (RCE) holes, of which two are deemed Critical. Patches apply to Windows, Internet Explorer (IE), Office, Live Meeting and Lync. All supported versions of IE get patches. All Windows versions, including Server Core and RT, get at least one Critical RCE patch. All patched systems need a reboot.

Even more briefly: you'll need to patch and reboot every Windows system on your network.

OK, except for your Windows XP computers.

But why not reboot them all in solidarity, anyway?

Some of them might not come back up, and then you'll have an excuse to tell your boss that you can't put off updating them any more.

One of the bulletins, number seven, addresses a security hole of a type you don't see announced very often in Microsoft bulletins: Tampering.

You're probably used to seeing vulnerability tags like RCE (remote code execution), EoP (elevation of privilege, where a regular user can get unauthorised administrative or system powers), DoS (denial of service, where an outsider can crash software that you rely on), and Information Disclosure (where data that should stay private can be accessed without authorisation).

If you've listened to our Understanding Vulnerabilities podcast, you'll know that RCE bugs usually get the most attention, because they offer a break-and-enter path to attackers who are outside your network.


But the other sorts of vulnerability can be combined with RCE into a much more dangerous cocktail.

For example, a Disclosure bug might allow crooks to steal authentication data that makes it much easier for them to pull off an RCE; a cunningly timed DoS might knock out intrusion detection software that would otherwise trigger an alert; and an EoP might add system administrator powers to a user-level compromise.

Here's an analogy: a Disclosure bug tells a crook where you live and when you won't be home; the RCE lets him pick your front door lock and get inside; the DoS means he knows how to turn off your burglar alarm; and the EoP gets him into your safe as well, once he's in the house.

Tampering is another sort of security hole that may help crooks, either by allowing them to initiate their attack more easily, or by making things worse for you once they have broken in.

Very loosely, tampering means that you can make a security-related change that should raise an alarm, but doesn't.

For example, you might be able to add malware to someone else's digitally signed software and have the system still accept it as trusted.

You might be able to make your own digital certificate, for example for a fake web page, but pass it off as someone else's.

Or you might be able to tamper with a protected configuration file, thus altering the settings and behaviour of software such as a web server, without being noticed.

One well-known example of a tampering exploit is last year's MasterKey malware for Android, which bypassed Google's Android Package (APK) cryptographic verifier, making the malware look legitimate.

This didn't just allow the malware to get the blessing of Google's compulsory install-time security check, but also allowed the crooks to put the blame on an innocent vendor, whose digitally signed package they started with.

Another famous tampering exploit is the announcement by security researchers in 2008 that they had succeeded in creating a fake Certification Authority web certificate by finding a collision in the MD5 hashing algorithm.

Their home-made certificate appeared to have been signed by one of the top-level "root authorities" that almost every browser trusts by default, and would have allowed them to sign apparently-trusted certificates for any website they liked.

Don't use MD5 in any new project. We knew it was cryptographically flawed before 2008, but the abovementioned certificate crack made it quite clear that it was dangerously unsafe in real life, not just in the lab.
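To make the advice concrete, here is a minimal sketch, assuming nothing beyond Python's standard hashlib, of computing a file digest with SHA-256 rather than MD5; the file name is purely illustrative.

```python
import hashlib

def file_digest(path, algorithm="sha256"):
    # Hash the file in fixed-size chunks so memory use stays flat,
    # even for very large files.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

print(file_digest("installer.exe"))    # SHA-256: no practical collisions known
# file_digest("installer.exe", "md5")  # MD5: collisions are practical - avoid
```

The point of the certificate attack above is precisely that two different inputs can share one MD5 digest, so a signature over the digest no longer pins down the content.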

We can't yet say exactly what form this latest Windows tampering vulnerability takes, but it affects Windows 7; 8 and 8.1; Server 2008 R2 (not Itanium, and not Server Core); and all supported flavours of Server 2012, including Server Core.

Watch this space: we'll tell you more after we've spoken officially to Microsoft on Patch Tuesday itself.

The final item of interest about the June 2014 Patch Tuesday is that the update to IE fixes a security hole known as CVE-2014-1770.

Technically, this became a zero-day in IE 8 when it was disclosed by HP's Zero Day Initiative during May 2014, after Microsoft hadn't managed to come up with a fix for six months. (More precisely, after 180 days.)

The discoverer of the bug, who sold it to HP for an undisclosed sum, was careful to point out that all that was published last month was an advisory, not a proof of concept; indeed, he said that "it won't be easy [to] reproduce the vulnerability based on the advisory alone."

Even after you have uncovered a vulnerability, there is almost always a lot of work (and sometimes it proves as good as impossible) to weaponise the vulnerability by actually coming up with a way to exploit it.

According to Microsoft, writing on its Security Response Center blog, no in-the-wild exploit using CVE-2014-1770 was ever seen, and thankfully the issue becomes moot on 10 June 2014, when the latest IE patches come out.

As we said at the outset: you'll need to patch and reboot every Windows system on your network this month.

Except XP, but that's another can of worms altogether.

Have a happy Tuesday!

Follow @duckblog



Wednesday, June 18, 2014

Ransom-taking iPhone hackers busted by Russian authorities


The mystery of the ransom messages from "Oleg Pliss," and the iDevice locking attack that popped up in Australia and the US last month, appears to have been solved.

Authorities in Russia said they detained two criminals behind ransom attacks on Apple users that locked their devices remotely and demanded payment to unlock them.

I say "seems to have been solved" because Russian police said the hackers were responsible for the same scam on users in Russia, without mentioning victims in other countries.

The two Russian hackers - a 23-year-old and a 17-year-old from Moscow - reportedly confessed to scamming users into giving away their Apple IDs and using the Find My iPhone feature to lock the devices until the victims paid a ransom of up to $100 USD.

According to The Sydney Morning Herald, Russian media reported the pair of hackers were caught on CCTV when they withdrew victims' payments from an ATM.

Russia's Ministry of Internal Affairs stated on its website that agents searched the hackers' apartments and seized computers, phones, SIM cards and "literature" on hacking.

Russian authorities said the hackers used "two well-known schemes" to perpetrate their attacks, which affected Apple users in Russia.

It seems the two hackers tricked Apple users into giving away their Apple IDs with a phishing scam that asked them to sign up for an online video service that required their Apple IDs.

If a hacker gets hold of your Apple ID, they can create an iCloud account which they can then use to lock your iPhone, iPad, iPod or Mac remotely.

The Sydney Morning Herald reports that victims who had locked their phones with a passcode could simply enter it, change their iCloud password and avoid having to pay a ransom.

Users who didn't set passcodes were less fortunate and had to resort to wiping their devices and restoring them from backups.

If you've been hacked by 'Oleg Pliss' then we recommend you follow the advice in our earlier article Apple ransomware strikes Australia.

In the security industry we call cyber attacks that take over your computer and demand payment "ransomware".

The most famous ransomware is the notorious CryptoLocker, which authorities recently knocked out by taking over the cybercriminals' command and control servers.

Only recently, however, have crooks figured out how to turn the success of ransomware for PCs into a lucrative racket on mobile devices.

Technically, since the "Oleg Pliss" hackers didn't drop any malware onto the devices of their victims, the iDevice-locking attack isn't a real example of ransomware, but it has the same devious purpose - to extort victims for money.

It's a much different story for Android, which is more susceptible to mobile malware.

A file-encrypting ransomware for Android called Simplelocker was recently discovered, and another kind of ransomware known as a "police locker" has hit Android users who download an infected file claiming to be a video player.

As a security precaution, you should make sure you lock your phone with a secure passcode.

Your Apple ID is the key to your iDevices, so make sure you hold onto it tight (don't use your Apple ID for a suspicious media-download website, for example).

You should also make sure your iDevices are up to date with the latest iOS software version to stay safe from known exploits.

For Android users, we also recommend using an anti-virus such as Sophos Antivirus and Security, our free app for smartphones and tablets.

For more information on keeping your phones and tablets safe take a look at our 10 tips for securing your smartphone.

Follow @JohnZorabedian
Follow @NakedSecurity

Image of locked iPhone courtesy of Shutterstock.

Tags: apple ID, hacking, iCloud, ios, iPhone, Ministry of Internal Affairs, oleg pliss, passcode, phishing, ransomware, russia



Tuesday, June 17, 2014

Facebook stupidity leads to largest gang bust in NYC history


Kids can be street-smart and Facebook-stupid, to paraphrase how Vice News put it.

Police love that naive, completely misplaced trust in the supposed anonymity of social media postings.

In fact, it was a long trail of quite helpful Facebook postings about crimes that led New York City police to what authorities are calling "the largest gang takedown in New York City's history".

After a 4-year-long investigation by the New York Police Department (NYPD), 103 gang members were indicted on Wednesday, thanks mostly to the evidence teenagers left on their Facebook profiles.

Five hundred NYPD officers descended on two housing projects in the NYC neighborhood of West Harlem Wednesday morning to arrest 40 of those who were indicted.

Police told reporters that 23 more alleged gang members are still being sought, while the rest were apprehended prior to the Wednesday bust.

Most of those arrested are between 15 and 20 years old, while some were as old as 30.

Prosecutors say the boys and men belong to three gangs: the two allied gangs of Make It Happen Boys and Money Avenue, and their rivals, 3 Staccs.

The gangs have waged war over the past four years, with the carnage now resulting in accusations of two homicides, 19 non-fatal shootings and about 50 other shooting incidents, according to a press release put out by Manhattan District Attorney Cyrus R. Vance, Jr.

According to the indictments (which can be read here and here), the gang members fought tooth and nail to control their territory - the two housing projects are only a block away from each other - and to climb the gangster hierarchy via shootings, stabbings, slashings, assaults, gang assaults, robberies, revenge shootings, and murders.

They were also busy chronicling it all, leaving behind hundreds of Facebook updates and direct messages, mobile phone videos, and calls made from Rikers Correctional Facility to plot the deaths of rival gang members.

They used postings to publicise and claim credit for - and to rub their enemies' noses in - their crimes, prosecutors say.

One of the gangs' victims - 18-year-old Tayshana "Chicken" Murphy - was a promising basketball star. Her father has said that she was being recruited by several colleges.

Ms. Murphy was gunned down in her building in September 2011. One of the gang members allegedly bragged about it on Facebook.

A second victim, Walter "Recc" Sumter, who owned the gun used to kill Ms. Murphy, was murdered that December in apparent retaliation.

Prosecutors say that two days after the death of Ms. Murphy, alleged gang member Davon "Hef" Golbourne wrote to a 3 Staccs rival that they had "fried the chicken."

The rival, Brian "Pumpa" Rivera, replied "NOW IMAAA KILL YUHH."

In fact, investigators pored over more than 40,000 phone calls between gang members already in jail and those on the outside, hundreds of hours of surveillance video, and "more than a million social media pages," Vance said in his statement.

According to Vice News, the word "Facebook" shows up 162 times in one of the indictments and 171 in the second.

Rev. Vernon Williams, a Harlem pastor who has spent years trying to curb youth violence in the neighborhood and who personally knows many of the indicted teens, told Vice News that they're not the brightest bulbs on the tree when it comes to social media:

They are Facebook dummies.

Because the stuff that they were saying, that was gonna come back to bite them, especially admitting participating in crimes, admitting getting the weapons that were gonna be used in crimes, and then calling someone in a state prison and giving them a report of what they did.

But while the kids were undeniably stupid about Facebook, Williams also criticised law enforcement for letting this battle rage for so long instead of stepping in earlier:

The indictment is almost 200 pages long and I would say 75-80 percent of [one of the indictments] is Facebook posts and similar activity.

The DA's office was helped by the accused. All [the police] did was watch and document it. I don't know what took them so long, but once they had enough, they scooped them up.

That is a very good question. Why did police need four years to round these guys up when they had alleged criminals posting about it on social media?

Stupidity about social media is a gift to investigators. One would hope that the gift gets turned into protection for the community as fast as practicable.

Follow @LisaVaas

Follow @NakedSecurity

Image of Facebook gun courtesy of Shutterstock.



Monday, June 16, 2014

Gameover and CryptoLocker revisited - the important lessons we can learn


We recently wrote about an international takedown operation, spearheaded by US law enforcement, against the Gameover and CryptoLocker malware.

That led to a resurgence of interest in our earlier articles about these threats.

So we thought it would be handy to revisit the lessons that this sort of crimeware can teach us.

If we're honest, Gameover is the more serious threat to worry about.

It's a bot, or zombie, meaning that its function is to hand covert remote control of your computer over to cybercriminals.

They can go after your online banking credentials (and the Gameover gang did, to the tune of some $100m in the US alone), but they can also read your mail, mess with your social networking accounts, record your voice, turn on your webcam, and more.

In fact, the crooks can do pretty much anything they like, not least because Gameover, like most zombie malware, includes a general-purpose "download, install and launch yet more malware" function.


In other words, finding out you've had Gameover for the past month is like realising you forgot to hang up the phone and your boss has been listening in to the last 30 minutes of garrulous tittle-tattle you've been having with your chums.

You can't be sure just how badly things might end up, but you know it's not going to be good.

And one way that Gameover ended for many victims was with a CryptoLocker attack.

That's because the crooks used the Gameover botnet to infect selected victims with the CryptoLocker ransomware, which promptly called home, downloaded a disk-scrambling encryption key, and locked up their data.

Want it back? That'll be $300.

For the most part, as far as we can see, victims who paid up did get their data back, and word quickly spread that the crooks were (if you will pardon the oxymoron) men of their word, with the result that business boomed.

Fellow Naked Security writer Chester Wisniewski, who speaks at a lot of conferences and seminars, even met people who shrugged and admitted that they'd handed over $300 to the crooks because it was less hassle than restoring from backup, and they'd heard that the crooks would probably honour the payment.

Honour, indeed!

So CryptoLocker ended up as better-known and more feared than Gameover, even though, for many people, Gameover was actually the cause of their CryptoLocker trouble.

You can see why CryptoLocker captured the imagination more than Gameover: CryptoLocker is one of those in-your-face, "so near but so far" threats.

If you get hit, your computer still works, your files are still there, and you can even open them up.

But if you do, you will find they consist of the digital equivalent of shredded cabbage.

Worse still, CryptoLocker doesn't limit itself to scrambling files on your hard disk.

Any drives, shares and folders that you can find with Explorer are visible to the malware, and if it has write access to any of those places, the data stored there is shredded cabbage, too.

USB drives, secondary hard disks, network shares, perhaps even your cloud storage, if you have software loaded that makes it appear as a directory tree on your computer: all of these can end up ruined after a visit from CryptoLocker.

If your user account has Administrator privileges, or worse still, System Administrator privileges, you might end up spreading the ruination far and wide through your organisation.

At worst, a single user who is infected could leave all his work colleagues affected, even those who don't use Windows and couldn't get infected themselves, even if they tried.

Here are four suggestions that you can try yourself, and recommend to your friends and family.

• Don't rely on reactive virus scanning.

Reactively scanning your computer once a week, or once a month, cannot, by definition, prevent malware. It's a handy way of getting a "second opinion" about what's on your computer, but make sure you also use a proactive anti-virus program with an on-access or real-time scanner for both files and web pages. Real-time protection steps in before infection happens, so it doesn't just detect malware and malicious websites, it blocks them, too.

• Do consider email and web filtering.

Most businesses perform some sort of web or email filtering, to protect both the data and the staff in the organisation. If you have children to look after at home, or are the IT geek in a shared house, you might want to do the same sort of thing at home. (Sophos's UTM Home Edition is our full-featured business product, totally free for non-commercial use at home. It even includes 12 Sophos Anti-Virus for Windows licences for your desktops and laptops.)

Blocking suspicious websites needn't be about censoriousness or being a judgmental Big Brother. Instead, think of it as something you do because you're a concerned parent, or because you're watching your buddies' backs.

• Don't make your normal user account into an Administrator.

Privileged accounts can "reach out" much further and more destructively than standard accounts, both on your own hard disk and across the network. Malware that runs as administrator can do much more damage, and be much harder to get rid of, than malware running as a regular user.

For example, on Windows 8.1, you need to have at least one Administrator account, or else you wouldn't be able to look after your computer. But you can create a second account to use for your day-to-day work and make that account into a Standard user.

• Do make time for regular, off-line backups.

Even cloud backups can be considered "off-line," as long as you don't keep your cloud storage mounted as if it were a local disk, where it can be accessed all the time, by any program. Also, consider using backup software that can keep multiple versions (revisions) of regularly-changing files such as documents and spreadsheets, so that if you ruin a file without realising it, you don't end up with a backup that is equally ruined.

If you use the cloud for backup, we nevertheless recommend taking regular physical copies, for example onto removable USB disks, that you can keep somewhere physically secure, such as a safe-deposit box. Don't risk losing everything if you lose your computer together with your cloud storage password, or if your cloud provider goes bust (or gets shut down).

Encrypting your backups as you save them to removable disks or before you upload them to the cloud is also wise. That way they are shredded cabbage to everyone else.
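If you script your backups, encrypting before you copy is only a few lines of work. Here is a minimal sketch, assuming the third-party Python "cryptography" package and purely illustrative file names; it is only the encrypt-before-copy step, not a complete backup tool.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe - NOT alongside the backup.
key = Fernet.generate_key()
fernet = Fernet(key)

# Read the backup archive and write an encrypted copy for the USB disk or cloud.
with open("backup-2014-06.tar", "rb") as f:
    plaintext = f.read()

with open("backup-2014-06.tar.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))
```

Fernet bundles AES encryption with an integrity check, so a tampered backup fails loudly when you try to decrypt it instead of silently restoring garbage.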

The operation against Gameover and CryptoLocker by law enforcement is most welcome, and should be applauded.

But the mopping-up part of the operation is down to us.

The criminal business empires that have grown up around botnets like Gameover would rapidly fall apart if we kept our computers clean in the first place.

Kill-a-zombie today!

Follow @duckblog

Image of Killer Zombie Robot courtesy of Shutterstock.




Sunday, June 15, 2014

Google to flag 'right to be forgotten' censored search results


Google may be forced to forget about you, but it just might stick a flag on the search results it's reluctantly expunged.

According to The Guardian, the search giant plans to put an alert at the bottom of every page where it's been compelled to remove links in the wake of the recent, landmark "right to be forgotten" court ruling.

Last month, at the command of the EU's Court of Justice, Google reluctantly put out a "forget me" form to enable European Union citizens to request that it remove links that include their name and that are deemed "inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed."

By the end of the first day, 12,000 Europeans had submitted the form.

As of last week, that number had hit 41,000 requests, at the rate of about 10,000 per day.

According to the Financial Times, those familiar with the search results removal process say that the takedown requests are coming in from across the EU, with a particularly high proportion coming from Germany and the UK.

The requests reportedly include one from a man who tried to kill his family and wanted a link to a news article about it taken down.

Other requests have come in from a politician with a murky past and a convicted paedophile, the Guardian reports.

Google chief executive Larry Page has said that nearly a third of the 41,000 requests received related to a fraud or scam, one-fifth concerned serious crime, and 12% were connected to child pornography arrests.

The Guardian says that Google plans to flag censored search results much like it alerts users to takedown requests over copyright infringing material.

When links have been removed from a list of search results, Google provides a notification at the bottom of that page and links to a separate page at chillingeffects.org, an archive of cease-and-desist notices meant to protect lawful online activity from legal threats.

On the site, each listing displays the name of the complainant, the title of the copyrighted content and a list of allegedly infringing URLs. The site at the link given above, for example, lists 640 URLs that allegedly infringe on Walt Disney's "Maleficent" film.

Google considers the enforced expunging to be censorship, and it's got some heavyweights on its side.

Wikipedia founder Jimmy Wales has condemned the ruling, telling Tech Crunch in an interview over the weekend that it was a "terrible danger" that could make it more difficult to make "real progress on privacy issues."

Wales is one member of a seven-person advisory committee set up by Google to issue recommendations about where the boundaries of the public interest lie in the requests.

Wales told Tech Crunch that in spite of the tens of thousands of people eager to have their pasts erased from search results, the ruling simply amounts to censorship of knowledge, packaged in "incoherent legislation":

In the case of truthful, non-defamatory information obtained legally, I think there is no possibility of any defensible 'right' to censor what other people are saying.

We have a typical situation where incompetent politicians have written well-meaning but incoherent legislation without due consideration for human rights and technical matters.

I've asked Google if it will begin placing notifications on pages where it has removed links due to "right to be forgotten" requests. I'll update the story if any comment is forthcoming.

Follow @LisaVaas

Follow @NakedSecurity

Images from Shutterstock and Creative Commons.



Saturday, June 14, 2014

Kim Dotcom offers $5M (£3M) for whistleblower help


Avast! Kim Dotcom, alleged King o' the Pirates, be offerin' a $5 million (£3 million) bounty t' any of ye bilge rats who can blow yer whistle sweet enough t' skewer his extradition!

As followers of this summer-blockbuster-esque saga will recall, Dotcom's Megaupload file-sharing empire was shut down in 2012, leaving him fighting extradition to the USA to face charges of racketeering, money laundering and copyright theft - charges with potential jail terms of 20 years.

Now, Dotcom's offering mega-bucks to anybody who can help him prove his long-argued contention that Hollywood studios illegally set the US authorities on him, aided and abetted by the country's close ally, New Zealand.

Dotcom tweeted that he has few options as he fights one of the biggest copyright infringement cases ever brought:

My case is unfair:
I was declined discovery
I didn't get my own data back
I need Whistleblowers
I am offering USD $5M

In his tweet, Dotcom included a link to a Torrent Freak article about how the bounty will go to "anyone prepared to reveal behind-the-scenes wrongdoing and corruption."

About a year ago, Dotcom was supposed to have gotten back some of his seized property. A judge also granted him the right to see all of the evidence against him before, rather than after, extradition.

In April, Hollywood came after him again, as six mammoth movie studios filed suit over what they say is the former file-sharing site's mind-numbingly massive copyright infringement.

Now, after several delays, a Supreme Court hearing on Dotcom's extradition is set to begin in Auckland on 7 July 2014.

And just as his assets were about to be released in New Zealand and Hong Kong, Hollywood sought to get Dotcom's assets re-frozen.

Now, Dotcom is hoping to fight back by getting some dirt on his legal enemies.

Here's what he says in the Torrent Freak article:

Let me be clear, we are asking for information that proves unlawful or corrupt conduct by the US government, the New Zealand government, spy agencies, law enforcement and Hollywood.

...and he suggests taking any such dirt to a newspaper that's done quite a lot of dirt-handling in the past year, with all its Edward Snowden-fueled whistleblowing:

I have been in touch with the Guardian editor and he has kindly retweeted my offer and told me that he hopes that someone will reply to that offer.

...preferably by using a new whistleblower tool released by The Guardian last week.

Dotcom also recommends that whistleblowers take even more caution in covering their tracks by using the whistleblower tool on an internet cafe computer, using a memory stick, instead of doing it from work, from home, or via a personal computer or phone.

Then again, he says, you could just buy a disposable laptop or netbook and destroy it when you're done.

I guess that disposable computers are a reasonable investment, really, for anybody who stands a chance of earning $5 million for helping out a famous, and infamous, alleged pirate.

Follow @LisaVaas
Follow @NakedSecurity

Image of whistle courtesy of Shutterstock.



Friday, June 13, 2014

"Turing Test" allegedly defeated - is it time to welcome your robot overlords?


I'm sure you have heard of, and indeed at some time faced up to and solved, a CAPTCHA.

All over the web, you'll see people telling you CAPTCHA is a pun on "capture," since it's meant to catch out automated software, but actually stands for Completely Automated Turing Test for Telling Computers and Humans Apart.

That's nonsense, of course, or else the acronym would be CATTTCHA, which would be a perfectly good play on words itself.

CAPTCHA is better expanded as Completely Automated Procedure for Telling Computers and Humans Apart.

Briefly put, a CAPTCHA falls a long way short of a real Turing Test, which sets much higher human-like behavioural standards on computers that attempt it.

The Turing Test, as you can probably guess, is named after British computer pioneer Alan Turing.

Turing proposed his now-famous test in a seminal paper published in 1950, entitled Computing Machinery and Intelligence.

The test was presented as a way of answering the question, "Can machines think?"

To bypass the complexity of defining "thinking," and of deciding through philosophical argument that an entity was engaging in it, Turing proposed a practicable systematic alternative in the form of a test.

He based it on an imaginary contest called The Imitation Game.

A man and a woman are sitting in separate rooms, each in front of a teleprinter, so they can't be seen or their tone of voice heard.

One of them is denoted by X and the other by Y; a questioner gets to interrogate them, directing each question at either X or Y.

That means he can group all of X's answers together, and all of Y's answers together; at the end, he has to work out who's who.

But here's the tricky part: the man must convince the questioner he's the woman, and so must the woman. (You could do it the other way around, but one person is being themselves, and the other is trying to imitate someone they aren't.)

The idea is that if the questioner can tell them apart, the man hasn't played a convincing enough role.

Since the woman's job is to convince the questioner that she is, indeed, female, thus exposing the man as a fraud, her best approach is to be as truthful and accurate as possible.

She is effectively on the questioner's side, so misleading him won't help.

It sounds like a parlour game - it might even have been a 1940s parlour game - but once you think about the sort of tactics the man would need to adopt, you can see where Turing was going.

Replace the man in the game with a computer, and see if the questioner can distinguish the computer from the woman. (Or from a man. This time the differentiation is not gender based: it's computer versus human.)

Turing's suggestion was that if you can't tell the computer from the human, then you have as good as answered the question, "Can computers think?" with the word, "Yes."

In other words, given the right sort of questions, the human participant would have to perform what we call "thinking" in order to answer.

So, if a computer could give sufficiently human-like answers, you'd have to concede it was "thinking," too.

Clearly, to pass a proper Turing test, a computer program would need a much broader set of skills than it would need to read a typical CAPTCHA.

Make no mistake: programming the sort of software that can read modern CAPTCHAs is a serious challenge in its own right.

You might even decide to refer to a computer that could do it as "clever," but it still wouldn't be thinking.

Interestingly, in the paper in which he introduced the Imitation Game, Turing estimated that by the year 2000, computers would be able to survive his eponymous test for five minutes at least 30% of the time.

Generally speaking, the longer the questioning goes on, the more likely the questioner will tell the human and the computer apart, as he has more opportunity to catch the computer out. So the longer a computer can last, the more we should accept that it is "thinking."

Furthermore, Turing guessed that his fin de siècle test-beating computers would need about 128MB (1Gbit) of memory to do the job.

He was a trifle optimistic, but nevertheless surprisingly close.

It actually took until 07 June 2014 for a serious claim to surface that a computer, or more precisely a program, had passed a Turing Test.

It happened in a contest organised by the University of Reading in England, and the "thinking software" was called Eugene Goostman.

Just how seriously the world of computer science will take the claim remains to be seen: Reading University's machine intelligence experts are no strangers to controversy.

Indeed, the spokesman in Reading's latest press release is none other than Professor Kevin Warwick, a media-savvy cyberneticist who promotes himself as the man who "became the world's first Cyborg in a ground breaking set of scientific experiments."

And University of Reading research fellow Mark Gasson proudly announced, in 2010, that he was the first human to infect himself with a computer virus.

What Gasson actually did, as far as we can see, is to inject himself with an RFID chip containing executable code that could, in theory, be considered an exploit against a vulnerable RFID reader, if Gasson were to find (or build) a vulnerable RFID reader to match his "infected" chip.

The Eugene Goostman software was developed in Saint Petersburg, Russia, by a team including Vladimir Veselov, an American born in Russia, and Eugene Demchenko, a Russian born in Ukraine.

This year's competition took place, fittingly if slightly sadly, on the 60th anniversary of Turing's death.

Eugene, reports the University of Reading, tricked 33% of the judges into thinking he was human in a series of five-minute conversations.

Fans of TV sci-fi shows will enjoy the fact that one of the judges was Robert Llewellyn, the actor who played the intelligent robot Kryten in the cult comedy series Red Dwarf.

Will 07 June 2014 become, as one of my Naked Security colleagues joked (at least, I assume he was joking), the day we first welcomed our robot overlords?

I'm saying, "No."

One trick the programmers used was to make Eugene a 13-year-old boy, which almost certainly gave them much more leeway for "believable mistakes" than if they had simulated a person of adult age.

As Veselov pointed out:

Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality.

As Turing Tests go, this one feels a bit more like getting a learner's permit for a moped than qualifying for your unrestricted car licence.

Eugene has a few years to go before he can do that.

So Naked Security's message to our new robot overlord is, "Stop showing off on the internet and go and tidy your bedroom!"

That's what it told me to say, anyway.

Follow @duckblog



Thursday, June 12, 2014

Mobile malware, Gameover, CryptoLocker, and SSL/TLS holes - 60 Sec Security [VIDEO]


• How long has mobile malware been around?

• Is it really game over for Gameover and CryptoLocker?

• Which cryptographic security libraries need patching?

Find all the answers in this week's 60 Sec Security - 07 June 2014.

Can't view the video on this page? Watch directly from YouTube.

Follow @duckblog

Tags: 60 Sec Security, 60 Second Security, 60 Seconds, 60SS, Android, cabir, caribe, cryptolocker, doj, FBI, gameover, gnutls, heartbleed, Mobile, openssl, Patch, ransomware, rce, simplelocker, Symbian, takedown



Wednesday, June 11, 2014

Finding the hidden zombie in your network: Statistical approach to unraveling computer botnets

How do you detect a "botnet" -- a network of computers infected with malware (so-called zombies) that allows a third party to take control of those machines? The answer may lie in a statistical tool first published in 1966 and brought into the digital age by researchers writing this month in the International Journal of Electronic Security and Digital Forensics.

Millions of computers across the globe are infected with malware, despite the best efforts of public awareness campaigns about phishing attacks and antivirus software. Much of the infection is directed towards allowing a third party to take control of a given machine, or indeed a network of machines, and exploiting them, unbeknownst to the legitimate users, in malicious and criminal activity. Security and software companies do monitor internet activity, and there have been many well-publicized successes in destroying such botnets. However, malware writers are always developing new tools and techniques that allow them to infect unprotected computers and rebuild botnets.

Botnets are widely used in organized crime to attempt breaches of security systems by mounting distributed denial of service (DDoS) attacks, among other techniques, on corporate, banking and government systems. Such attacks can open up "backdoors" into a private computer network that let the botnet controller access proprietary and other sensitive information, passwords or even voting systems. Botnets have also been used for simply malicious purposes, to force websites and other services offline, occasionally as an act of protest or rebellion.

Now, R. Anitha and colleagues at PSG College of Technology, Coimbatore, India, have turned to a statistical tool known as the hidden semi-Markov model (HsMM) to help them develop monitoring software that can detect the telltale signs of botnet activity on a computer and so disable the offending malware. In probability theory and statistics, a Markov process is one in which someone can predict the next state of a process based on its current state, without knowing the full history of the process. An example from gambling: if you have one chip now and the odds of winning or losing the next bet are even, then we can predict, without knowing how many chips you had earlier, that after the next bet you will have either none or two.

A hidden Markov model would thus include variables of which the observer has no sight, but whose effects can be inferred, allowing a prediction. Predicting whether it rained on a given day, based only on whether a fair-weather walker was out that day and without a weather report for their area, involves a hidden Markov process. A hidden semi-Markov model involves a process of this sort, but one where the time elapsed in the current state affects the prediction. For example, one might predict the rainfall pattern based on how long it is since our fair-weather walker last ventured out.

The team has applied the statistical logic of the hidden semi-Markov model to forecast the characteristics of internet activity on a given computer suspected of being a "zombie computer" in a botnet based on management information base (MIB) variables. These variables are the components used to control the flow of data packets in and out of the computer via the internet protocol. Their approach can model the "normal" behavior and then highlight botnet activity as being a deviation from the normal without the specific variables that are altered by the malware being in plain sight.
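The fit-normal-then-flag-deviations idea is easy to illustrate even without the full HsMM machinery. Here is a minimal, deliberately simplified sketch in Python: it learns a Gaussian baseline from "normal" per-minute traffic counters (the numbers are invented) and flags windows that stray too far, which is the same detection principle the researchers apply to MIB variables, not their actual model.

```python
import statistics

# Per-minute outbound packet counts observed during known-clean operation
# (hypothetical training data standing in for MIB counters).
normal_window = [120, 132, 118, 125, 130, 127, 122, 129]

mu = statistics.mean(normal_window)
sigma = statistics.stdev(normal_window)

def looks_like_bot(observed, threshold=4.0):
    # Flag a window whose z-score distance from "normal" is too large.
    return abs(observed - mu) / sigma > threshold

print(looks_like_bot(126))  # False: within normal variation
print(looks_like_bot(950))  # True: a burst typical of spam or C&C traffic
```

The real model is far richer - it tracks hidden states and how long the system dwells in each - but the payoff is the same: malware betrays itself by bending traffic statistics away from the learned baseline.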

The team points out that botnet and malware developers have recently focused on web-based (HTTP) activity, which is easier to disguise among the myriad packets of data moving to and fro across a network and in and out of a particular computer. Their tests on a small zombie computer network show that the hidden semi-Markov model they have developed, as a lightweight and real-time detection system, can see through this disguise easily. If implemented widely, such a system could lock down this kind of botnet very quickly and slow the assimilation of zombie computers by criminals and others with malicious intent.



Tuesday, June 10, 2014

Quantum dots provide complete control of photons

By emitting photons from a quantum dot at the top of a micropyramid, researchers at Linköping University are creating a polarized light source for such things as energy-saving computer screens and wiretap-proof communications.

Polarized light -- where all the light waves oscillate on the same plane -- forms the foundation for technology such as LCD displays in computers and TV sets, and advanced quantum encryption. Usually it is created by passing unpolarized light through a filter that blocks the unwanted light waves. At least half of the light emitted, and thereby an equal amount of energy, is lost in the process.

A better method is to emit light that is polarized right at the source. This can be achieved with quantum dots -- crystals of semiconductive material so small that they produce quantum mechanical phenomena. But until now, they have only achieved polarization that is either entirely too weak or hard to control.

A semiconductive materials research group led by Professor Per Olof Holtz is now presenting an alternative method, in which asymmetrical quantum dots of an indium-containing nitride material are formed at the top of microscopic six-sided pyramids. With these, they have succeeded in creating light with a high degree of linear polarization, on average 84%. The results are being published in the Nature periodical Light: Science & Applications.
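The article doesn't spell out what "degree of linear polarization" means; the standard textbook definition, given here for reference, compares the strongest and weakest intensities seen through a rotating polarizer:

```latex
P = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}
```

A perfectly linearly polarized source gives P = 1, unpolarized light gives P = 0, and the 84% average reported here corresponds to P = 0.84.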

"We're demonstrating a new way to generate polarized light directly, with a predetermined polarization vector and with a degree of polarization substantially higher than with the methods previously launched," Professor Holtz says.

In the experiments, the quantum dots used emit violet light with a wavelength of 415 nm, but the photons can in principle take on any colour within the visible spectrum, by varying the amount of the metal indium.

"Our theoretical calculations point to the fact that an increased amount of indium in the quantum dots further improves the degree of polarization," says reader Fredrik Karlsson, one of the authors of the article.

The micropyramid is constructed through crystalline growth, atom layer by atom layer, of the semiconductive material gallium nitride. A couple of nanothin layers where the metal indium is also included are laid on top of this. From the asymmetrical quantum dot thus formed at the top, light particles are emitted with a well-defined wavelength.

The results of the research are opening up possibilities, for example for more energy-efficient polarized light-emitting diodes in the light sources of LCD screens. As the quantum dots can also emit one photon at a time, this is a very promising technology for quantum encryption, a growing technique for wiretap-proof communications.



Monday, June 9, 2014

Storage system for 'big data' dramatically speeds access to information

As computers enter ever more areas of our daily lives, the amount of data they produce has grown enormously. But for this "big data" to be useful it must first be analyzed, meaning it needs to be stored in such a way that it can be accessed quickly when required.

Previously, any data that needed to be accessed in a hurry would be stored in a computer's main memory, or dynamic random access memory (DRAM) -- but the size of the datasets now being produced makes this impossible.

So instead, information tends to be stored on multiple hard disks on a number of machines across an Ethernet network. However, this storage architecture considerably increases the time it takes to access the information, according to Sang-Woo Jun, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

"Storing data over a network is slow because there is a significant additional time delay in managing data access across multiple machines in both software and hardware," Jun says. "And if the data does not fit in DRAM, you have to go to secondary storage -- hard disks, possibly connected over a network -- which is very slow indeed."

Now Jun, fellow CSAIL graduate student Ming Liu, and Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science, have developed a storage system for big-data analytics that can dramatically speed up the time it takes to access information.

The system, which will be presented in February at the International Symposium on Field-Programmable Gate Arrays in Monterey, Calif., is based on a network of flash storage devices.

Flash storage systems perform better than other technologies at tasks that involve finding random pieces of information within a large dataset. Flash can typically be accessed randomly in microseconds, compared to the "seek time" of hard disks, which is typically four to 12 milliseconds when accessing data from unpredictable locations on demand.
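The gap is easy to quantify with a back-of-the-envelope sketch; the exact figures below are assumptions chosen from the ranges quoted above.

```python
# Random reads per second are roughly the reciprocal of the access time.
disk_seek_s = 8e-3       # ~8 ms per random read (mid-range of 4-12 ms)
flash_access_s = 100e-6  # ~100 microseconds per random read (assumed)

print(f"disk:  ~{1 / disk_seek_s:,.0f} random reads/sec")    # ~125
print(f"flash: ~{1 / flash_access_s:,.0f} random reads/sec") # ~10,000
```

That difference of roughly two orders of magnitude is why random-access-heavy analytics favour flash over spinning disks.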

Flash systems also are nonvolatile, meaning they do not lose any of the information they hold if the computer is switched off.

In the storage system, known as BlueDBM -- or Blue Database Machine -- each flash device is connected to a field-programmable gate array (FPGA) chip to create an individual node. The FPGAs are used not only to control the flash device, but are also capable of performing processing operations on the data itself, Jun says.

"This means we can do some processing close to where the data is [being stored], so we don't always have to move all of the data to the machine to work on it," he says.

What's more, FPGA chips can be linked together using a high-performance serial network, which has a very low latency, or time delay, meaning information from any of the nodes can be accessed within a few nanoseconds. "So if we connect all of our machines using this network, it means any node can access data from any other node with very little performance degradation, [and] it will feel as if the remote data were sitting here locally," Jun says.

Using multiple nodes allows the team to get the same bandwidth and performance from their storage network as far more expensive machines, he adds.

The team has already built a four-node prototype network. However, this was built using 5-year-old parts, and as a result is quite slow.

So they are now building a much faster 16-node prototype network, in which each node will operate at 3 gigabytes per second. The network will have a capacity of 16 to 32 terabytes.

Using the new hardware, Liu is also building a database system designed for use in big-data analytics. The system will use the FPGA chips to perform computation on the data as it is accessed by the host computer, to speed up the process of analyzing the information, Liu says.

"If we're fast enough, if we add the right number of nodes to give us enough bandwidth, we can analyze high-volume scientific data at around 30 frames per second, allowing us to answer user queries at very low latencies, making the system seem real-time," he says. "That would give us an interactive database."

As an example of the type of information the system could be used on, the team has been working with data from a simulation of the universe generated by researchers at the University of Washington. The simulation contains data on all the particles in the universe, across different points in time.

"Scientists need to query this rather enormous dataset to track which particles are interacting with which other particles, but running those kind of queries is time-consuming," Jun says. "We hope to provide a real-time interface that scientists can use to look at the information more easily."


View the original article here

Sunday, June 8, 2014

Privacy compliance for big data systems automated: Search engine code is moving target that eludes manual audits

Web services companies, such as Facebook, Google and Microsoft, all make promises about how they will use personal information they gather. But ensuring that millions of lines of code in their systems operate in ways consistent with privacy promises is labor-intensive and difficult. A team from Carnegie Mellon University and Microsoft Research, however, has shown these compliance checks can be automated.

The researchers developed a prototype automated system that is now running on the data analytics pipeline of Bing, Microsoft's search engine. According to Saikat Guha, a researcher at Microsoft, it is the first time automated privacy-compliance analysis has been applied to the production code of an Internet-scale system, and it reflects Microsoft's commitment to building the technology needed to safeguard customers' privacy.

Employing a new, lawyer-friendly language to specify privacy policies and using a data inventory to annotate existing programs, the researchers showed that a team of just five people could manage a daily compliance check on millions of lines of code written by several thousand developers.

They presented their research findings at the 35th IEEE Symposium on Security & Privacy, May 18-21, in San Jose, Calif.

"Companies in the United States have a legal obligation to declare how they use personal information they gather and it's also good business to establish a bond of trust with customers," said Anupam Datta, associate professor of computer science and electrical and computer engineering. "But these systems are constantly evolving and their scale can be daunting. The manual methods typically used for checking compliance are labor intensive, yet too often fail to catch all violations of policy."

"Tens of millions of lines of code are already in the pipeline," noted Shayak Sen, a Ph.D. student in computer science who interned at Microsoft Research India and the lead student author on the study. "And during our implementation on Bing, we found that more than 20 percent of the code was changing on a daily basis." At these large scales, automated methods offer the best hope of verifying compliance.

"One reason that gaps exist between policies set by a company's privacy team and the code written by software developers is that the two groups don't speak the same language," Datta said. Lawyers and privacy champions typically have little experience in programming and developers attempting to translate policies into code can get tripped up by ambiguities in the language of the privacy policies.

So the researchers developed a language -- Legalease -- that could be easily learned and used by privacy advocates. It employs allow-deny rules with exceptions, a structure that is found in many privacy policies and laws, such as the Health Insurance Portability and Accountability Act (HIPAA), and is expressive enough to capture the real policies of an industrial-scale system such as Bing.
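
The paper defines Legalease precisely; as a rough illustration of the allow-deny-with-exceptions structure described here, a toy evaluator might look like the sketch below. The policy encoding and attribute names are invented for this example and are not the real Legalease syntax.

```python
# Toy evaluator for nested allow/deny rules with exceptions, in the
# spirit of the Legalease structure described above. A rule is
# (kind, attrs, exceptions): it fires when all attrs are present in
# the use, and its exceptions are rules of the opposite polarity.

def permits(rule, use):
    """Decide whether `use` (a set of attribute labels) is allowed."""
    kind, attrs, exceptions = rule
    if kind == "DENY":
        if not attrs.issubset(use):
            return True   # deny clause doesn't apply, so forbids nothing
        # denied, unless an ALLOW exception carves the use back out
        return any(permits(ex, use) for ex in exceptions)
    # kind == "ALLOW"
    if not attrs.issubset(use):
        return False      # allow clause doesn't cover this use
    # allowed, unless a DENY exception takes it away again
    return all(permits(ex, use) for ex in exceptions)


# "DENY use of IPAddress for Advertising, EXCEPT ALLOW if anonymized."
policy = ("DENY", {"IPAddress", "Advertising"},
          [("ALLOW", {"IPAddress", "Advertising", "Anonymized"}, [])])

print(permits(policy, {"IPAddress", "Advertising"}))                # False
print(permits(policy, {"IPAddress", "Advertising", "Anonymized"}))  # True
print(permits(policy, {"IPAddress", "Sharing"}))                    # True (rule doesn't apply)
```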

In preliminary usability testing, a dozen Microsoft employees were given a one-page document explaining Legalease and spent an average of under 5 minutes studying it. They then took an average of less than 15 minutes to encode nine Bing policy clauses regarding how user information can be used. "They were able to perform this task with a high degree of accuracy, which is encouraging," Sen said.

But encoding privacy policies correctly means little if it cannot be applied to large codebases written by large teams of programmers. To solve this problem, the researchers leveraged Grok, a data inventory that annotates existing programs written in the languages typically employed by MapReduce-like systems, such as those used by Bing and Google for their backend data analytics over user data.

Grok performs this automated annotation by combining information from different sources with varying levels of confidence. For instance, automated pattern-matching to column names can be performed across an entire database, but with low confidence, while annotations by developers have high confidence, but low coverage.
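
A minimal sketch of that confidence-ranked merging follows; the source names and scores are invented for illustration, and Grok's real sources and weighting are described in the paper.

```python
# Hypothetical sketch: several sources propose a label for the same
# data column, each with a different confidence, and the highest-
# confidence proposal wins.

CONFIDENCE = {
    "developer_annotation": 3,   # high confidence, low coverage
    "dataflow_propagation": 2,   # invented here: labels inferred from data flow
    "column_name_pattern": 1,    # automated pattern match, low confidence
}

def merge_annotations(candidates):
    """candidates: list of (source, label); return the winning label."""
    source, label = max(candidates, key=lambda c: CONFIDENCE[c[0]])
    return label

print(merge_annotations([
    ("column_name_pattern", "IPAddress"),
    ("developer_annotation", "UniqueId"),
]))  # -> "UniqueId": the developer annotation outranks the pattern match
```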

Grok had been developed by Microsoft Research and deployed by Bing the previous year for the express purpose of automating privacy-compliance checking, but writing policies for Grok was cumbersome.

"Legalease was the final piece of the automated privacy compliance jigsaw puzzle," Guha said. "Developed over Sen's internship and subsequent collaboration with CMU, Legalease bridged privacy teams with Grok, and through Grok, with the developers."

Datta said automating the process of compliance checks could push the industry to adopt stronger privacy protection policies.

"Sometimes, companies want to make their policies stronger, but hesitate because they are not sure they can ensure compliance in these large systems," he explained, noting that online privacy policy compliance is enforced in the United States by the Federal Trade Commission.

The research team included Sriram K. Rajamani of Microsoft Research in Bangalore, India; Janice Tsai of Microsoft Research in Redmond; and Jeannette Wing, corporate vice president of Microsoft Research and former head of CMU's Computer Science Department.

This research was supported, in part, by the Air Force Office of Scientific Research and the National Science Foundation.


View the original article here

Saturday, June 7, 2014

Security and privacy? Now they can go hand in hand

Online identification and authentication keep transactions secure on the Internet, but they also have implications for your privacy. Disclosing more personal information than needed when, say, you log in to your bank's website may simplify the bank's security at the cost of your privacy. Now, thanks to research by the EU-funded project Attribute-based Credentials for Trust (ABC4Trust), there is a new approach that keeps systems secure and protects your identity.

The ABC4Trust research team is piloting this technology with young people, often thought to be less careful about their online security. But "that's not the case," says Prof. Dr. Kai Rannenberg, coordinator of the ABC4Trust project. "The participants were very interested in learning which personal data they reveal and how they can control this. The university students especially feel that attribute-based credentials (ABCs) can help them manage their e-identities and enable them to use Internet services in a privacy-preserving way."

For example, at Norrtullskolan secondary school in Söderhamn, Sweden, pupils can access counselling services online. However, until recently the pupils couldn't access these services using a pseudonym -- they had to identify themselves by name so the school could check whether they were allowed to use them.

But in the ABC4Trust pilot scheme, each child is issued with a 'deck' of digital certificates that validate information like their enrollment status, their date of birth and so on. This allows the school pupils to enjoy both privacy and security. Instead of having to reveal their whole identity when using the counselling service they can simply use one of the certificates in their deck that pseudonymously verifies they are enrolled at the school.
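
Real attribute-based credentials (e.g. IBM's Idemix and Microsoft's U-Prove, both used in ABC4Trust) prove statements about attributes cryptographically, in zero knowledge. The toy below models only the selective-disclosure interface -- which fields leave the pupil's device -- with invented attribute names, not the underlying cryptography.

```python
# Interface-level sketch of selective disclosure. The cryptographic
# machinery that makes this unforgeable and unlinkable is omitted.

class Credential:
    def __init__(self, attributes):
        self._attributes = attributes   # the full identity stays local

    def present(self, requested):
        """Reveal only the requested attributes, nothing else."""
        return {k: self._attributes[k] for k in requested}

pupil = Credential({"name": "Alice", "date_of_birth": "2000-04-01",
                    "enrolled_at": "Norrtullskolan", "pseudonym": "pupil-7f3a"})

# The counselling service learns enrollment and a pseudonym -- not the name.
print(pupil.present(["enrolled_at", "pseudonym"]))
```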

Another pilot, developed at the Computer Technology Institute and Press "Diophantus" and trialled at the University of Patras, Greece, allows students to give anonymous feedback on their courses and lecturers, while ensuring that only registered students can take part in the polls.

Prof. Rannenberg says, "Our user studies showed that the school children, parents and the university students are happy that they are giving away less of their private information when they access the services and leave feedback. The respective authorities are also happy with the pilots and the feedback; in the not too distant future we expect more European public services and other organisations to switch to Privacy-ABCs."

Users want Privacy, Organisations want Security

According to recent research by the market research organisation Ovum, 68% of us in the EU would like to opt out of having our personal data tracked. In a speech in May, Commissioner Neelie Kroes stressed that it is essential for EU business "to show the citizen that going online is not just convenient, but trustworthy… With resilient and secure networks and systems I think we can build that trust."

ABC4Trust is a €13.05 million project, with €8.85 million funded by the European Union's Seventh Framework Programme (FP7). The international and multidisciplinary ABC4Trust consortium is led by Johann Wolfgang Goethe-Universität Frankfurt am Main, Germany, and is composed of 11 partners from 7 countries. ABC4Trust started in November 2010 and will run for 4¼ years.


View the original article here

Friday, June 6, 2014

Privacy and vulnerability issues: Could decentralized networks help save democracy?

Democratic movements can flourish online, but just as easily get censored. A group of researchers is developing solutions to the vulnerabilities and privacy problems with using big social media platforms like Facebook and Twitter.

Turkey's Prime Minister Recep Tayyip Erdogan disrupted communications between his opponents when he shut down Twitter during the run-up to the country's recent election. But in doing so, he provided yet more proof of how flawed social web activism can be. Whether the lessons of Turkey are heeded could have serious consequences for democracy.

Social networks such as Twitter and Facebook have enabled unprecedented levels of communication and have even received credit for at least one major democratic revolution. There's just one problem: because of their monolithic nature, these centralized networks expose users to snooping and interference of the kind Erdogan caused, says Sonja Buchegger, Associate Professor of Computer Science at KTH Royal Institute of Technology.

A single, large-scale platform provides an easier target for anyone who wants to interfere with online political activity, says Buchegger. "But, if Twitter were decentralized, and you had users cooperating and communicating directly, that wouldn't have been possible to disrupt.

"Decentralization allows for greater freedom of expression.

The good news is that there could be a computer science answer to the problem. Buchegger is leading a group of scientists at KTH who are creating building blocks that developers could use to launch decentralized, distributed networks, which would not only be difficult to interfere with, but would also protect people from government snooping.

"The internet itself is not centralized -- it would be hard to shut down," Buchegger says. "It was built as a robust, decentralized tool to communicate; and we can do the same for other services that are now centralized, like social networks."

Whether the demand for such networks would go mainstream any time soon is hard to tell. Buchegger notes that it is difficult for most people to wrap their head around the notion that their personal information is exposed on web-based email and social platforms.

"The whole privacy issue online is very young, and the population is not used to thinking in this way," she says. "Offline, we know how to protect our privacy; we know who can overhear us; we see who is in the room with us and we know whether we can trust those people; but online we haven't really grasped who the audience is and how that changes over time."

Buchegger's research is focused on the privacy issues of distributed peer-to-peer (P2P) networks, that is, the underlying infrastructure for a decentralized system in which people could store their data beyond the reach of data miners or government surveillance.

"We are developing these little building blocks: this is how you do passwords in a distributed environment; this is how you do search in a privacy-preserving decentralized environment; this is how you make news feeds; this is how you control access," she says. "Then you can put the building blocks together and build a new communications system -- that's the idea."

For example, encryption tools are being tested that could provide users with "fine grain" control over their privacy. One could use encryption keys to decide specifically who can access or view a given piece of content. "You wouldn't have to worry about all the people you don't want to access it because the default is that access is denied," she says.
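
A minimal sketch of that default-deny pattern follows, assuming a simple wrapped-key scheme: content is encrypted once, and only chosen recipients receive the content key, wrapped under their own key. It uses the Python `cryptography` package for illustration; KTH's actual building blocks are not reproduced here.

```python
# Default-deny sharing via wrapped keys (illustrative only).
from cryptography.fernet import Fernet

recipient_keys = {"alice": Fernet.generate_key(), "bob": Fernet.generate_key()}

content_key = Fernet.generate_key()
ciphertext = Fernet(content_key).encrypt(b"meet at the square at noon")

# Grant access to Alice only: wrap the content key with her key.
grants = {"alice": Fernet(recipient_keys["alice"]).encrypt(content_key)}

# Alice can unwrap the content key and read the post...
alice_key = Fernet(recipient_keys["alice"]).decrypt(grants["alice"])
print(Fernet(alice_key).decrypt(ciphertext))

# ...Bob was never granted a wrapped key, so by default he reads nothing.
print("bob" in grants)  # False
```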

The research into privacy tools cuts right to one of the major weaknesses of centralized networks -- they rely on centralized data centers for storage, thus exposing millions of people's personal information to prying eyes.

Buchegger says that as far as promoting democracy goes, distributed networks could outshine so-called "Facebook revolutions," encouraging more widespread activism, particularly for those whose only connection to the web is with a phone.

"This is a way of developing the idea of a commons, in which more people get together and organize and share resources," she says. "A decentralized network would also be a sort of commons because you could imagine how people with large servers could store encrypted data for others. It could enable access to resources for those who cannot store so much on their phone."

While distributed networks offer potential for greater communication and more effective organizing, Buchegger is quick to point out that technology is not a quick fix for promoting democracy. Ultimately political action depends on people assembling in the non-virtual world. "There is a danger that you think that just because you repost something on Facebook or Twitter that you are doing activism, but it's not actually doing something.

"Networks can reach more people and be used to organize physical activism, but they're not a substitute for activism."


View the original article here

Thursday, June 5, 2014

Computer security: Reducing risks of malware infections

Installing computer security software, updating applications regularly and making sure not to open emails from unknown senders are just a few examples of ways to reduce the risk of infection by malicious software, or "malware." However, even the most security-conscious users are open to attack through unknown vulnerabilities, and even the best security mechanisms can be circumvented as a result of poor user choices.

"The reality is that successful malware attacks depend on both technological and human factors," says Professor Jos? Fernandez. "Although there has been significant research on the technical aspects, there has been much less on human behaviour and how it affects malware and defence measures. As a result, no one at the present time can really say how important these factors are. For example, are users who are older and less computer-savvy more open to infection?" It is therefore necessary to take a closer look at the impact that both technological and human factors have on the success or failure of protective mechanisms.

To answer this type of question, Prof. Fernandez and his team drew inspiration from the clinical-trial method to design the first study of its kind applied to computer security. In a fashion similar to medical studies that evaluate the effectiveness of a particular treatment, their experiment was aimed at assessing the performance of anti-virus software and the likelihood that participants' computers would become infected with malware. The four-month study involved 50 subjects who agreed to use laptops that were instrumented to monitor possible infections and gather data on user behaviour. "Analyzing the data allowed us not only to identify which users were most at risk, based on their characteristics and behaviour, but also to measure the effectiveness of various protective measures," says Polytechnique student Fanny Lalonde Lévesque, who is writing her master's thesis on this project.

This pilot study provided some very interesting results on the effectiveness of computer defences and the risk factors for infection. For example, 38% of the users' computers were exposed to malware and 20% were infected, despite the fact that all were protected by the same anti-virus product, which was updated regularly. With regard to the users themselves, there did not seem to be any significant difference in exposure rates between men and women. In addition, the most technically sophisticated users turned out to be the group most at risk. This result may seem counter-intuitive, as it contradicts the opinion of some computer experts who argue that people should have a kind of "Internet license" before going online.

"The results of this study provide some intriguing insights. Are these 'expert' users at higher risk because of a false sense of security, or because they are naturally curious and therefore more risk-tolerant? Further research is needed to understand the causes of this phenomenon, so that we can better educate and raise awareness among users," says Professor Fernandez.

In the future, this type of study will help provide scientific data to support decision-making on security management, education, regulation and even computer security insurance. A second phase, which will involve hundreds of users over a period of several months, is already being prepared.

The initial results of this experiment were presented at the ACM Conference on Computer and Communications Security (CCS), which took place in November 2013 in Berlin, Germany.


View the original article here

Wednesday, June 4, 2014

NSA pursues quantum technology

In this month's issue of Physics World, Jon Cartwright explains how the revelation that the US National Security Agency (NSA) is developing quantum computers has renewed interest in the technology and sparked debate over just how far ahead of the world's major labs the agency might be.

In 2006 the NSA openly announced a partnership with two US institutions to develop quantum computers. However, according to documents leaked by whistle-blower Edward Snowden, and published last month by the Washington Post, the NSA also wishes to develop the technology so that it is capable of breaking modern Internet security.

The $79.7m project, dubbed "Penetrating Hard Targets," could be made possible by the extraordinary potential of quantum computers to factorize large numbers in a short space of time, quickly deciphering encryption keys that are used to protect sensitive information.

For the NSA, this could mean deciphering banking transactions, private messages and government files; however, many physicists are not surprised and believe this is exactly the type of technology that the NSA is expected to develop.

Raymond Laflamme, a leading quantum information theorist at the University of Waterloo in Canada, said, "If you put my level of surprise on a scale from zero to 10, where 10 is very, very surprised, my answer would be zero."

For many other physicists the news has confirmed the need to stay ahead of the game and develop more sophisticated encryption techniques, some of which also take advantage of quantum phenomena.

Quantum key distribution (QKD) is one such technique, which guarantees the security of an encryption key based on fundamental aspects of quantum mechanics, whereby the process of trying to measure or access an encryption key made from various quantum states will automatically destroy it.
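
A toy simulation can illustrate that measurement-disturbance idea. The sketch below is a BB84-flavoured simplification with classical stand-ins for qubits, not a real protocol implementation: an interceptor who must guess the measurement basis raises the error rate on the sifted key to roughly 25%, which the legitimate parties can detect.

```python
# Why eavesdropping on a quantum key is detectable (toy model).
import random

def measure(bit, prep_basis, meas_basis):
    # A matching basis reads the bit faithfully; a mismatch yields noise.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(n=20000, eavesdrop=False):
    errors = sifted = 0
    for _ in range(n):
        bit, a_basis = random.randint(0, 1), random.choice("XZ")
        flying_bit, flying_basis = bit, a_basis
        if eavesdrop:                      # Eve measures, then re-sends
            e_basis = random.choice("XZ")
            flying_bit = measure(flying_bit, flying_basis, e_basis)
            flying_basis = e_basis
        b_basis = random.choice("XZ")
        received = measure(flying_bit, flying_basis, b_basis)
        if a_basis == b_basis:             # keep matching-basis rounds only
            sifted += 1
            errors += received != bit
    return errors / sifted

print(f"undisturbed error rate: {error_rate():.1%}")                  # ~0%
print(f"intercepted error rate: {error_rate(eavesdrop=True):.1%}")    # ~25%
```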

The latest leaked documents, however, also reveal that the NSA is attempting to exploit practical loopholes in QKD under a programme known as "Owning the Net."

Cartwright concludes that quantum computers are still expected to be many years away, with the control of qubits -- the packets of information that quantum computers would process -- a major sticking point for physicists; however, the extent to which the NSA has developed the technology remains largely unknown.

Also in this issue of Physics World, and online today, 31 January, Matin Durrani, editor of the magazine, provides further details of the UK's £270m investment in quantum technology that was announced by the chancellor, George Osborne, in last year's Autumn Statement.

The initiative, which will begin in 2015, will focus on areas such as chip-scale atomic clocks for improved GPS communication, quantum-enabled sensors, quantum communication and quantum computing, while some £4m will go on equipment for the new Advanced Metrology Laboratory being built at the National Physical Laboratory.

The quantum-physics initiative, which has involved careful behind-the-scenes negotiations between the UK physics community, government and industry, was formally put to Osborne last year by a group of physicists led by Professor Sir Peter Knight from Imperial College London.

Jon Cartwright's analysis of NSA developments will be freely available on physicsworld.com from Thursday 6 February 2014.


View the original article here

Tuesday, June 3, 2014

'Surveillance minimization' needed to restore trust

Surveillance minimization -- where surveillance is the exception, not the rule -- could help rebuild public trust following revelations about the collection of personal data, according to a law academic from the University of East Anglia.

Dr Paul Bernal, whose research covers privacy, surveillance and human rights, says the role of government surveillance and of surveillance by commercial groups and others must be reconsidered.

He suggests surveillance minimization as a way forward and will present the idea today at the Computers, Privacy and Data Protection international conference taking place in Brussels, Belgium. The proposal comes after US President Barack Obama announced curbs last week on the use of bulk data collected by US intelligence agencies, including the National Security Agency (NSA). His speech followed widespread anger after leaks revealed the full extent of US surveillance operations, including the mass collection of electronic data from the communications of private individuals and spying on foreign leaders.

"Surveillance minimization is a simple concept and uses one of the overriding principles of data protection, the idea of data minimization, and applies it to communications surveillance," said Dr Bernal, who is currently writing a book on internet privacy and data protection. "The potential impact upon individuals from surveillance by commercial organizations can be significant, and as the NSA's PRISM program in particular demonstrated there are inextricable links between the commercial and the governmental.

"Surveillance minimization requires surveillance to be targeted rather than universal, controlled and warranted at the point of data gathering rather than of data access, and performed for the minimum necessary time on the minimum necessary people. Surveillance minimization could play a part in rebuilding the trust that is vital in this field -- and in the construction of a more 'privacy-friendly' internet -- one where surveillance is the exception, not the rule."

Dr Bernal argues the debate and discussion around the issues has been "miscast" and the common understanding -- that there is a balance to be found between the individual right to privacy and the collective right to security -- significantly misses the point.

"Communications surveillance, and internet surveillance in particular, has become a topic of much discussion in recent years," he said. "The information released, revealing at least some of the true extent and nature of communications surveillance being carried out by the NSA and others, has come as a surprise to many and contributed to an atmosphere of confusion and of distrust in a field where trust is of the utmost importance.

"Surveillance impacts upon more than just individual privacy, but upon a wide range of human rights, from freedom of expression and freedom of association and assembly to protection from discrimination. The impact is not just on individuals but on communities and other groups, and casting the debate as one of individual versus collective rights is misleading, inappropriately downplaying the significance of the impact of surveillance. The nature of this impact needs to be understood better if a more appropriate balance is to be found between people's rights and the duties of states to provide security for their citizens. Consequently, a new understanding of the balance between the relevant competing rights, needs and imperatives has to be established."


View the original article here

Monday, June 2, 2014

Quantum physics could make secure, single-use computer memories possible

Computer security systems may one day get a boost from quantum physics, as a result of recent research from the National Institute of Standards and Technology (NIST). Computer scientist Yi-Kai Liu has devised a way to make a security device that has proved notoriously difficult to build -- a "one-shot" memory unit, whose contents can be read only a single time.

The research, which Liu is presenting at this week's Innovations in Theoretical Computer Science conference, shows in theory how the laws of quantum physics could allow for the construction of such memory devices. One-shot memories would have a wide range of possible applications such as protecting the transfer of large sums of money electronically. A one-shot memory might contain two authorization codes: one that credits the recipient's bank account and one that credits the sender's bank account, in case the transfer is canceled. Crucially, the memory could only be read once, so only one of the codes can be retrieved, and hence, only one of the two actions can be performed -- not both.

"When an adversary has physical control of a device -- such as a stolen cell phone -- software defenses alone aren't enough; we need to use tamper-resistant hardware to provide security," Liu says. "Moreover, to protect critical systems, we don't want to rely too much on complex defenses that might still get hacked. It's better if we can rely on fundamental laws of nature, which are unassailable."

Unfortunately, there is no fundamental solution to the problem of building tamper-resistant chips, at least not using classical physics alone. So scientists have tried involving quantum mechanics as well, because information that is encoded into a quantum system behaves differently from a classical system.

Liu is exploring one approach that stores data using quantum bits, or "qubits," which use quantum properties such as magnetic spin to represent digital information. Using a technique called "conjugate coding," two secret messages -- such as separate authorization codes -- can be encoded into the same string of qubits, so that a user can retrieve either one of the two messages. But as the qubits can only be read once, the user cannot retrieve both.
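
A toy model of that readout, with classical stand-ins for qubits, might look like the sketch below (it deliberately ignores the entanglement subtlety discussed next): committing to one measurement basis recovers one message and reduces the other to noise.

```python
# Toy model of conjugate coding: two messages share one "qubit"
# string, each readable in a different basis, and the reader must
# commit to a single basis. Not Liu's actual construction.
import random

def encode(msg_z, msg_x):
    """Interleave two messages: even positions hold msg_z bits in the
    Z basis, odd positions hold msg_x bits in the X basis."""
    qubits = []
    for bz, bx in zip(msg_z, msg_x):
        qubits.append((bz, "Z"))
        qubits.append((bx, "X"))
    return qubits

def read_once(qubits, basis):
    # Measurement destroys the state: wrong-basis qubits give noise.
    return [bit if prep == basis else random.randint(0, 1)
            for bit, prep in qubits]

qubits = encode([1, 0, 1, 1], [0, 0, 1, 0])
readout = read_once(qubits, "Z")
print(readout[0::2])  # msg_z recovered exactly
print(readout[1::2])  # msg_x positions are now random junk
```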

The risk in this approach stems from a more subtle quantum phenomenon: "entanglement," where two particles can affect each other even when separated by great distances. If an adversary is able to use entanglement, he can retrieve both messages at once, breaking the security of the scheme.

However, Liu has observed that in certain kinds of physical systems, it is very difficult to create and use entanglement, and shows in his paper that this obstacle turns out to be an advantage: Liu presents a mathematical proof that if an adversary is unable to use entanglement in his attack, that adversary will never be able to retrieve both messages from the qubits. Hence, if the right physical systems are used, the conjugate coding method is secure after all.

"It's fascinating how entanglement -- and the lack thereof -- is the key to making this work," Liu says. "From a practical point of view, these quantum devices would be more expensive to fabricate, but they would provide a higher level of security. Right now, this is still basic research. But there's been a lot of progress in this area, so I'm optimistic that this will lead to useful technologies in the real world."


View the original article here

Sunday, June 1, 2014

Future industry: No chance for industrial pirates with highly secure networks

In the future, production facilities will be able to communicate and interact with one another, and machinery will often be remote-serviced. But no company boss wants to run the risk of opening the door to industrial espionage and sabotage with unsecure networks. A new development offers a particularly high level of security. Researchers are presenting the system at the embedded world trade fair from 25 through 27 February in Nuremberg.

Though it looks like something straight out of a science-fiction film, it will soon become a reality in the production halls of the future: products along the production lines will know where they are, which steps they have already completed, and what they still need to become a finished product. Production facilities will coordinate their work steps and exchange information with one another. There will be no need for technicians to set foot in the production halls for servicing, with machinery inspections carried out remotely instead. In a word: products and plants will be intelligent. This is also referred to as "Industry 4.0" -- meaning industry of the fourth generation, following mechanization, electrification and digitization.

There's one sticking point, though. Facilities will use a data network to communicate with one another, and even the products themselves will have to "log in." Human beings will use this network connection to control and monitor production, too -- to keep an eye on plant operation even if they don't happen to be in the production hall. On top of this, there will be remote maintenance and remote software updates. For all these functions, one thing is indispensable: secure access that keeps industrial pirates and saboteurs out. Certainly, businesses can use a normal Internet connection for this form of data traffic, securing it through a "Virtual Private Network," or VPN for short. "But there's something many people don't know: there are VPNs and there are VPNs -- and not every VPN access is secure," explains Bartol Filipovic, division director at the Fraunhofer Institute for Applied and Integrated Security (AISEC) in Garching, Germany.

That is why researchers have come up with a router that offers secure VPN access. Authorization and firewall functionalities provide additional access protection. The necessary security protocols can also be integrated directly in the industrial customer's plants and machinery. "The system is a software kit. We've already developed the basic components, and we can tailor them to fit the customer's specific requirements," Filipovic points out. The process takes around four weeks to complete. The researchers integrate simple systems at the same time, such as sensors in the pharmaceuticals industry that report filling levels or mixing ratios -- these, too, should not forward their information to unauthorized parties.

Physical protection: film sounds an alarm

On the one hand, the system protects companies from spies trying to hack their way into the network from off-site locations. On the other hand, it also outwits data thieves trying to coax secrets out of routers and circuit boards on location. A special film affixed to security-relevant casings immediately reports any attempts to unscrew the protective covering to access security-relevant data. Developed at AISEC, the film is affixed to the router casing, or directly onto the circuit boards -- the board containing key control elements such as microcontrollers, chips, diodes and other security-critical processing units -- and sealed shut at multiple points. If the router is switched off, all of the software it contains is stored in encrypted form. If it is in operation, though, it needs the decrypted program code. Each decryption key is a function of the properties of the protective film. And if these properties are changed -- by tearing open or drilling into the film to reach the circuit boards, for instance -- the film detects the attack in a few milliseconds and responds immediately: it deletes all of its unencrypted, security-relevant data.
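
A sketch of that key-from-film idea, in the spirit of a physical unclonable function: the "measurement" bytes below are invented stand-ins for the film's electrical characteristics, and the hash-based derivation is an assumption for illustration, not AISEC's actual scheme.

```python
# Deriving the decryption key from the intact film's properties:
# tamper with the film and the key can no longer be reconstructed.
import base64
import hashlib
from cryptography.fernet import Fernet

def key_from_film(measurement: bytes) -> bytes:
    digest = hashlib.sha256(measurement).digest()
    return base64.urlsafe_b64encode(digest)   # Fernet expects url-safe b64

intact = b"film-capacitance-profile-v1"        # hypothetical sensor readout
firmware = Fernet(key_from_film(intact)).encrypt(b"router control software")

# Normal boot: the film is untouched, so the same key is re-derived.
print(Fernet(key_from_film(intact)).decrypt(firmware))

# After drilling or peeling, the measurement changes and the key is gone.
tampered = b"film-capacitance-profile-DAMAGED"
try:
    Fernet(key_from_film(tampered)).decrypt(firmware)
except Exception as err:
    print("decryption fails:", type(err).__name__)   # InvalidToken
```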

Unauthorized intruders cannot get to the software. Data deletion is no problem for the business, however: all a company has to do is reinstall the software and affix a new protective film. "Combining software and film gives us an ideal security level," Filipovic says, "and the events of 2013 very clearly taught us just how important that can be." Secure communication software and hardware are fundamental to the evolution of production toward digitization and Industry 4.0; and protection against espionage, sabotage and product piracy is crucial to innovation and a strong competitive position.


View the original article here