Saturday, May 31, 2014

New algorithm shakes up cryptography

Researchers at the Laboratoire Lorrain de Recherches en Informatique et ses Applications (CNRS/Université de Lorraine/Inria) and the Laboratoire d'Informatique de Paris 6 (CNRS/UPMC) have solved one aspect of the discrete logarithm problem. This is considered to be one of the 'holy grails' of algorithmic number theory, on which the security of many cryptographic systems used today is based. They have devised a new algorithm (1) that calls into question the security of one variant of this problem, which has been closely studied since 1976.

This result, published on the site of the International Association for Cryptologic Research and on the HAL open access archive, was presented at the international conference Eurocrypt 2014, held in Copenhagen on 11-15 May 2014, and published in Advances in Cryptology. It discredits several cryptographic systems that until now were assumed to provide sufficient security safeguards. Although this work is still theoretical, it is likely to have repercussions, especially for the cryptographic applications of smart cards, RFID chips (2), etc.

To protect confidentiality of information, cryptography seeks to use mathematical problems that are difficult to solve, even for the most powerful machines and the most sophisticated algorithms.

The security of a variant of the discrete logarithm, reputed to be very complex, has been called into question by four researchers from CNRS and the Laboratoire d'Informatique de Paris 6 (CNRS/UPMC), namely Pierrick Gaudry, Razvan Barbulescu, Emmanuel Thomé and Antoine Joux (3). The algorithm they devised stands out from the best algorithms known to date for this problem. Not only is it significantly easier to explain, but its complexity is also considerably improved. This means that it is able to solve increasingly large discrete logarithm problems, while its computing time increases at a far slower rate than with previous algorithms. The computation of discrete logarithms associated with problems that are deliberately made difficult for cryptographic applications is thus made considerably easier.
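For readers unfamiliar with the problem: the discrete logarithm asks for the exponent x such that g^x mod p equals a given value h. Computing g^x from x is easy; recovering x from h is what is meant to be hard. A toy sketch, with numbers far too small to be secure, illustrates the asymmetry:

```python
# Toy discrete logarithm: given g, p and h = g**x mod p, recover x.
# Real cryptographic groups are astronomically larger, which is exactly
# why improved algorithms for special cases matter so much.
def discrete_log(g, h, p):
    """Brute-force search for x with g**x % p == h."""
    value = 1
    for x in range(p - 1):
        if value == h:
            return x
        value = (value * g) % p
    return None

p, g = 101, 2              # a small prime and a generator (illustrative only)
x_secret = 47
h = pow(g, x_secret, p)    # the easy direction: fast modular exponentiation
print(discrete_log(g, h, p))  # recovers 47, but only by exhaustive search
```

For a 2048-bit prime the same search would take longer than the age of the universe, which is why the difficulty gap between the two directions can anchor a cryptosystem.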

Since solving this variant of the discrete logarithm is now within the capacity of current computers, relying on its difficulty for cryptographic applications is therefore no longer an option. This work is still at a theoretical stage and the algorithm still needs to be refined before it is possible to provide a practical demonstration of the weakness of this variant of the discrete logarithm. Nonetheless, these results reveal a flaw in cryptographic security and open the way to additional research. For instance, the algorithm could be adapted in order to test the robustness of other cryptographic applications.

(1) A method consisting in a series of instructions that enables a computer to solve a complex problem.

(2) An RFID chip is a computer chip coupled with an antenna that enables it to be activated at a distance by a reader and to communicate with it.

(3) Antoine Joux, who was attached to the Laboratoire Parallélisme, Réseaux, Systèmes, Modélisation (PRISM) (CNRS/UVSQ) at the time of open access publication, is currently a researcher at the Laboratoire d'Informatique de Paris 6 (CNRS/UPMC) and has since obtained the Chair of Cryptology at the Fondation UPMC.

Story Source:

The above story is based on materials provided by CNRS. Note: Materials may be edited for content and length.


View the original article here

Thursday, May 29, 2014

IT security for the daily life: Withdrawing money at cash machines with 'Google Glass'

Taking photos with a wink, checking one's calendar with a glance of the right eye, reading text messages -- the multinational corporation Google wants to make it possible with Google Glass. But what IT experts celebrate as a new milestone makes privacy groups skeptical. So far, few people have access to the prototype to test how it can be used in daily life. "Thanks to the Max Planck Institute for Informatics we are one of the few universities in Germany that can do research with Google Glass," says Dominique Schröder, assistant professor of Cryptographic Algorithms at Saarland University.

The futuristic-looking device consists of a glasses frame on which a camera and a mini computer are installed. It projects information into the user's field of vision via a glass prism installed at the front end of the right temple. According to the German computer magazine "c't," this creates an effect "as if the user were looking at a 24-inch screen from a distance of two and a half meters." Schröder, who also does research at the Center for IT-Security, Privacy and Accountability (CISPA), located only a few yards away, is aware of the data security concerns with Google Glass: "We know that you can use it to abuse data. But it can also be used to protect data."

To prove this, Schröder and his group combine Google Glass with cryptographic methods and techniques from automated image analysis to create the software system "Ubic." With Ubic, withdrawing money at a cash machine would work as follows: The customer identifies himself to the cash machine, which requests the customer's public key from a trusted authority. The machine uses the key to encrypt a one-time personal identification number (PIN) and additionally seals it with a "digital signature," the digital counterpart of a conventional signature. The result appears on the screen as a black-and-white pattern, a so-called QR code.

The PIN hidden inside is only visible to the identified wearer of the glasses. Google Glass decrypts it and shows it in the wearer's field of vision. "Although the process occurs in public, nobody is able to spy on the PIN," explains Schröder. This is not the case if PINs are sent to a smart phone. Spying on the PIN while it is being entered would also be useless, since the PIN is regenerated each time the customer uses the cash machine. An attacker wearing Google Glass himself cannot spy on the process either. The digital signature guarantees that no assailant can intrude between the customer and the cash machine, as happens in so-called "skimming," where the assailant impersonates the customer.

Only the customer can decrypt the message, encrypted under his public key, by using his secret key. As long as this key is safely stored on Google Glass, his money is also safe. At the computer expo CeBIT, the researchers will also present how Google Glass can be used to hide information. Several persons all wearing Google Glass can read the same document with encrypted text at the same time, but in their fields of vision each sees only the text passages intended for them.
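The encrypt-then-sign flow described above can be sketched in miniature. This is an illustrative stand-in only: it uses symmetric primitives (HMAC in place of a digital signature, XOR with a shared key in place of public-key encryption), the function names are invented, and the QR encoding is omitted entirely:

```python
# Hedged sketch of a Ubic-style PIN delivery: the machine encrypts a fresh
# one-time PIN for the customer and "signs" the ciphertext; the glasses
# verify the signature before decrypting and displaying the PIN.
import hashlib
import hmac
import secrets

customer_key = secrets.token_bytes(16)  # stands in for the customer's key pair
signing_key = secrets.token_bytes(16)   # stands in for the machine's signing key

def xor(data, key):
    """Toy 'encryption' by XOR with a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def machine_issues_pin():
    pin = f"{secrets.randbelow(10**4):04d}".encode()  # fresh one-time PIN
    ciphertext = xor(pin, customer_key)               # readable only by the customer
    tag = hmac.new(signing_key, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag  # in Ubic, this pair would be rendered as a QR code

def glasses_reveal_pin(ciphertext, tag):
    expected = hmac.new(signing_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature check failed: possible tampering")
    return xor(ciphertext, customer_key).decode()  # only the key holder gets here

ct, tag = machine_issues_pin()
print(glasses_reveal_pin(ct, tag))  # a fresh 4-digit PIN, visible only to the wearer
```

The signature check mirrors the anti-skimming guarantee in the article: a tampered ciphertext is rejected before any PIN is shown.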

"This could be interesting, for example, for large companies or agencies that are collecting information in one document, but do not want to show all parts to everybody," explains Mark Simkin, who was one of the developers of Ubic. A large electric company has already sent a request to the computer scientists in Saarbrücken. Google Glass is expected to enter the American market this year.



Wednesday, May 28, 2014

New technique targets C code to spot, contain malware attacks

Researchers from North Carolina State University have developed a new tool to detect and contain malware that attempts root exploits in Android devices. The tool improves on previous techniques by targeting code written in the C programming language -- which is often used to create root exploit malware, whereas the bulk of Android applications are written in Java.

Root exploits take over the system administration functions of an operating system, such as Android. A successful Android root exploit effectively gives hackers unfettered control of a user's smartphone.

The new security tool is called Practical Root Exploit Containment (PREC). It refines an existing technique called anomaly detection, which compares the behavior of a downloaded smartphone application (or app), such as Angry Birds, with a database of how the application should be expected to behave.

When deviations from normal behavior are detected, PREC analyzes them to determine if they are malware or harmless "false positives." If PREC determines that an app is attempting root exploit, it effectively contains the malicious code and prevents it from being executed.
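The detect-and-contain idea can be sketched as follows. This is a hypothetical simplification, not the PREC implementation: it profiles an app's normal system-call n-grams and flags any trace containing an n-gram never seen during profiling. The real system targets native (C) code threads on Android, and all names here are invented:

```python
# Toy anomaly detector in the spirit of PREC: a database of "normal"
# system-call n-grams is built from clean runs; a trace is anomalous
# if it contains any n-gram absent from that database.
def ngrams(trace, n=3):
    """All length-n windows of a system-call trace, as a set."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def build_profile(normal_traces, n=3):
    """Database of expected behavior, gathered from known-good runs."""
    profile = set()
    for trace in normal_traces:
        profile |= ngrams(trace, n)
    return profile

def is_anomalous(trace, profile, n=3):
    """True if the trace contains behavior never seen in profiling."""
    return bool(ngrams(trace, n) - profile)

normal = [["open", "read", "write", "close"],
          ["open", "read", "read", "close"]]
profile = build_profile(normal)
print(is_anomalous(["open", "read", "write", "close"], profile))   # False
print(is_anomalous(["open", "mmap", "setuid", "execve"], profile)) # True: contain it
```

On a flagged trace, a system like PREC would then refuse to execute the offending code rather than merely log an alert.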

"Anomaly detection isn't new, and it has a problematic history of reporting a lot of false positives," says Dr. Will Enck, an assistant professor of computer science at NC State and co-author of a paper on the work. "What sets our approach apart is that we are focusing solely on C code, which is what most -- if not all -- Android root exploits are written in."

"Taking this approach has significantly driven down the number of false positives," says Dr. Helen Gu, an associate professor of computer science at NC State and co-author of the paper. "This reduces disturbances for users and makes anomaly detection more practical."

The researchers are hoping to work with app vendors, such as Google Play, to establish a database of normal app behavior.

Most app vendors screen their products for malware, but malware programmers have developed techniques for avoiding detection -- hiding the malware until users have downloaded the app and run it on their smartphones.

The NC State research team wants to take advantage of established vendor screening efforts to create a database of each app's normal behavior. This could be done by having vendors incorporate PREC software into their app assessment processes. The software would take the app behavior data and create an external database, but would not otherwise affect the screening process.

"We have already implemented the PREC system and tested it on real Android devices," Gu says. "We are now looking for industry partners to deploy PREC, so that we can protect Android users from root exploits."

The paper, "PREC: Practical Root Exploit Containment for Android Devices," will be presented at the Fourth ACM Conference on Data and Application Security and Privacy being held March 3-5 in San Antonio, Texas. Lead author of the paper is former NC State graduate student Tsung-Hsuan Ho. The paper was co-authored by Daniel Dean, a Ph.D. student in Gu's lab at NC State.

The work was supported by the National Security Agency; U.S. Army Research Office grant W911NF-10-1-0273; National Science Foundation grants CNS-1149445, CNS-1253346, and CNS-1222680; IBM Faculty Awards and Google Research Awards.

Cite This Page:

North Carolina State University. "New technique targets C code to spot, contain malware attacks." ScienceDaily, 4 March 2014.
North Carolina State University. (2014, March 4). New technique targets C code to spot, contain malware attacks. ScienceDaily. Retrieved May 5, 2014 from www.sciencedaily.com/releases/2014/03/140304141856.htm
North Carolina State University. "New technique targets C code to spot, contain malware attacks." ScienceDaily. www.sciencedaily.com/releases/2014/03/140304141856.htm (accessed May 5, 2014).


Tuesday, May 27, 2014

Platform would protect smartphones from cyber criminals

Criminals don't have to pick your pocket to get what they want out of your mobile. But a certifiably secure operating platform is being developed by Swedish researchers so that consumers can be confident that their mobile data is safe.

Market analysts expect the next decade to see a significant expansion in the numbers of connected devices and machines.

But increased connectivity also presents an opportunity for criminals. Mads Dam, an expert in computer security at Stockholm's KTH Royal Institute of Technology, says that devices and modules will be exposed to increasingly sophisticated attacks by cyber criminals.

"People are going to place even higher value on products with verifiable security claims," says Dam, who is Professor of Teleinformatics at KTH's School of Computer Science and Communication.

While compact in size, mobile phones pose a huge security challenge, Dam says. "Android, for example, has more than 10 million lines of code and is executing on a computing platform with one billion transistors.

"So it's not surprising that securing this kind of system is difficult," Dam says. "The good news is that an end-to-end security guarantee is within reach."

Dam and his colleagues aim to publish a certifiably secure, trusted execution platform for operating systems. The idea is to outwit malware and other attacks on a device with a layer of software called a "hypervisor," which is designed to secure the interaction between the operating system (OS) and the hardware.

"If the operating system asks for the camera to be turned on, the hypervisor can step in and verify whether that is really what the user wants," he says. "Or if the operating system wants to access a piece of memory that normally should be regarded as secure, it could step in and allow, or disallow, the request."
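The mediation Dam describes amounts to a small reference monitor sitting between the OS and the hardware. A minimal sketch, with an invented policy format standing in for what would really be user intent and formally verified checks:

```python
# Toy hypervisor-style reference monitor: every OS request for a hardware
# resource passes through a small policy check that allows or denies it.
class Hypervisor:
    def __init__(self, policy):
        # policy maps resource name -> allowed? (illustrative format)
        self.policy = policy

    def request(self, resource):
        """Mediate an OS request; deny anything not explicitly allowed."""
        if self.policy.get(resource, False):
            return f"{resource}: access granted"
        return f"{resource}: access denied"

hv = Hypervisor({"camera": True, "secure_memory": False})
print(hv.request("camera"))         # camera: access granted
print(hv.request("secure_memory"))  # secure_memory: access denied
```

The point of the article is that because this layer is thousands of times smaller than the OS, its correctness can be proven mathematically, something infeasible for Android's ten-million-line codebase.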

In fact, Dam says, a hypervisor-based solution could completely isolate different apps from each other, to create truly tamper-proof applications, for instance for banking or communication.

Such a platform could be made much smaller than the OS itself, he says. "We're talking about a factor of 1,000 to 10,000, which is sufficient to create mathematical models that can analyse the security of interaction between the OS and the hardware so well that we can formally guarantee the security of an operating system like Linux."

And it's not just mobile users that will benefit. In addition to mobile communications networks, the platform would be applicable in a wide range of areas including control systems for manufacturing plants, power stations, utilities and infrastructure. Other uses would be in vehicles, avionics and medical systems, cloud application platforms and also for devices in the internet of things.

The project partners, which include the Swedish Institute of Computer Science (SICS), propose publishing key components of the hypervisor as open source, in order to increase trust and allow de facto industry standardization of the security platform.

Dam says it will require more than a secure execution platform to secure devices from end-to-end, that is, from the user interface through the software stack, down to bits of silicon and back. Hardware and application platforms will have to be validated too. But the KTH team has made great progress during the last decade on tracing security from the application and user interface to the execution platform and back, he says, and the hypervisor will be a vital tool to achieve this.

"Soon we will be able to engage industry and organisations with serious security concerns, like banks, public organisations, defence and providers, and develop this space."

Cite This Page:

KTH The Royal Institute of Technology. "Platform would protect smartphones from cyber criminals." ScienceDaily, 5 March 2014.
KTH The Royal Institute of Technology. (2014, March 5). Platform would protect smartphones from cyber criminals. ScienceDaily. Retrieved May 5, 2014 from www.sciencedaily.com/releases/2014/03/140305125102.htm
KTH The Royal Institute of Technology. "Platform would protect smartphones from cyber criminals." ScienceDaily. www.sciencedaily.com/releases/2014/03/140305125102.htm (accessed May 5, 2014).


Monday, May 26, 2014

Ease and security of password protections improved

Passwords guard everything from our cellphones to our bank accounts, but they often present a relatively weak challenge to hackers looking for the information that passwords should protect. New research from the University of Alabama at Birmingham, in collaboration with the University of California at Irvine, proposes and tests a variety of methods that add a strong second layer of security to a password.

In a paper presented at the 2014 Network and Distributed Systems Security Symposium, researchers offered innovative options to improve the security of two-factor authentication systems while also ensuring the systems’ usability.

“There have been many attacks on servers that store passwords lately, such as the breaches at PayPal and LinkedIn,” said Nitesh Saxena, Ph.D., associate professor in the Department of Computer and Information Sciences and a core member of the Center for Information Assurance and Joint Forensics Research.

Many people use the same few uncomplicated passwords repeatedly, making them easy to remember. Passwords are typically stored on servers in a hashed form. Hackers can garner passwords either by an online brute-force attack, or by hacking a server with poor security and using a “dictionary” of passwords to test offline.
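The offline attack is simple to sketch. Assuming the breached server stored plain, unsalted SHA-256 hashes (a weak but historically common practice), an attacker who steals a hash just hashes candidate passwords until one matches:

```python
# Sketch of an offline dictionary attack on a stolen password hash.
# Unsalted SHA-256 is used for illustration; servers should instead use
# salted, deliberately slow hashes (bcrypt, scrypt, Argon2) to blunt this.
import hashlib

def sha256_hex(password):
    return hashlib.sha256(password.encode()).hexdigest()

stolen_hash = sha256_hex("letmein")  # as leaked from a poorly secured server
dictionary = ["password", "123456", "qwerty", "letmein", "dragon"]

recovered = next((p for p in dictionary if sha256_hex(p) == stolen_hash), None)
print(recovered)  # letmein
```

Because no interaction with the server is needed, rate limits and lockouts offer no protection once the hash database has been copied; and because users reuse passwords, one recovered password often opens several accounts.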

“A single server break-in can lead to several of a user’s accounts being compromised, because they’re using the same password in several places,” Saxena said.

Two-factor authentication schemes, such as Google Authenticator, or hardware tokens, such as RSA SecurID, use a second device to generate a temporary personal identification number, or PIN, that the user must enter along with their password. But current two-factor schemes present the same vulnerabilities to server hacks as password-only authentication, Saxena says.

“If someone hacks into the server, they could learn the passwords via an offline dictionary attack,” he said. “Learning the passwords wouldn’t compromise the second authentication factor, but the user might be using that same password elsewhere. The hacker might not be able to log into Facebook if Facebook uses two-factor authentication, but they could log into Twitter if Twitter uses the single-factor authentication using the same password.”

The paper proposes and tests four two-factor schemes that require servers to store a randomized hash of the passwords and a second device, such as the user’s security token or smartphone, to store a corresponding secret code. The paper presents these schemes at several levels of computer system bandwidth, effectively turning four schemes into 13 security options.

“Rather than requiring the user to enter both their password and a PIN generated by an app, the user could enter a password, and their smartphone could automatically send a PIN over a Bluetooth connection or through a simple QR code,” Saxena said.
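One way to picture the split between server and device, greatly simplified from the paper's actual schemes: the server stores only a keyed (randomized) hash of the password, while the phone stores the key, so a stolen server record alone cannot be attacked with an offline dictionary:

```python
# Hedged sketch of a device-assisted two-factor scheme: without the
# device secret (the HMAC key), the server's record reveals nothing
# a dictionary attack can test against.
import hashlib
import hmac
import secrets

def enroll(password):
    device_secret = secrets.token_bytes(32)  # stored only on the phone
    server_record = hmac.new(device_secret, password.encode(),
                             hashlib.sha256).digest()  # stored on the server
    return device_secret, server_record

def login(password, device_secret, server_record):
    """Both the password and the device secret are needed to authenticate."""
    attempt = hmac.new(device_secret, password.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(attempt, server_record)

secret, record = enroll("correct horse")
print(login("correct horse", secret, record))  # True
print(login("wrong guess", secret, record))    # False
```

An attacker who breaches the server gets only `record`; every dictionary guess would have to be checked against an unknown 256-bit key, which is infeasible, matching the property the researchers are after.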

Saxena and his co-authors, UAB graduate student Maliheh Shirvanian, Stanislaw Jarecki and Naveen Nathan of the University of California at Irvine, analyze each scheme in terms of security provided, usability and deployability.

The schemes are geared toward using soft tokens, like smartphones. Using smartphones to provide secret codes can give a security system the flexibility to protect several passwords with a single soft token.

“Hard tokens are traditionally used within the context of a company that needs more security,” Saxena said. “With soft tokens in play, you can use just one token, such as your smartphone, to log into different websites securely.”

However, the proposed approaches are applicable to hardware tokens too.

“With each of our proposals, you get a high level of security with the same or better level of usability than the current two-factor authentication schemes,” Shirvanian said.



Sunday, May 25, 2014

WPA2 wireless security cracked

There are various ways to protect a wireless network. Some are generally considered to be more secure than others. Some, such as WEP (Wired Equivalent Privacy), were broken several years ago and are not recommended as a way to keep intruders away from private networks. Now, a new study published in the International Journal of Information and Computer Security, reveals that one of the previously strongest wireless security systems, Wi-Fi protected access 2 (WPA2) can also be easily broken into on wireless local area networks (WLANs).

Achilleas Tsitroulis of Brunel University, UK, Dimitris Lampoudis of the University of Macedonia, Greece and Emmanuel Tsekleves of Lancaster University, UK, have investigated the vulnerabilities in WPA2 and present its weakness. They say that this wireless security system might now be breached with relative ease by a malicious attack on a network. They suggest that it is now a matter of urgency that security experts and programmers work together to remove the vulnerabilities in WPA2 in order to bolster its security or to develop alternative protocols to keep our wireless networks safe from hackers and malware.

The convenience of wireless network connectivity of mobile communications devices, such as smart phones, tablet PCs and laptops, televisions, personal computers and other equipment, is offset by the inherent security vulnerability. The potential for a third party to eavesdrop on the broadcast signals between devices is ever present. By contrast a wired network is intrinsically more secure because it requires a physical connection to the system in order to intercept packets of data. For the sake of convenience, however, many people are prepared to compromise on security. Until now, the assumption was that a wireless network secured by the WPA2 system was adequately protected against intrusion. Tsitroulis and colleagues have now shown this not to be the case.

If set up correctly, WPA2 using pre-shared key (PSK) encryption can be very secure. Depending on which version is present on the wireless device, it also has the advantage of using strong encryption based on either the temporal key integrity protocol (TKIP) or the more secure counter mode with cipher block chaining message authentication code protocol (CCMP). 256-bit encryption is available, and a password can be an alphanumeric string with special characters up to 63 characters long.

The researchers have now shown that a brute-force attack on the WPA2 password is possible and can be exploited, although the time taken to break into a system rises with longer and longer passwords. However, it is the de-authentication step in the wireless setup that represents a much more accessible entry point for an intruder with the appropriate hacking tools. As part of their security protocols, routers using WPA2 must periodically reconnect and re-authenticate devices, sharing a new key each time. The team points out that this de-authentication step essentially leaves a backdoor unlocked, albeit temporarily. But temporarily is long enough for a fast wireless scanner and a determined intruder. They also point out that while network access can be restricted to devices with a given identifier, their media access control (MAC) address, such addresses can be spoofed.
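The brute-force dimension can be made concrete. WPA2-PSK derives its pairwise master key from the passphrase via PBKDF2-HMAC-SHA1 with 4,096 iterations and the network's SSID as salt, so each guess is deliberately expensive; a dictionary attack nonetheless succeeds when the passphrase itself is weak. The passphrases and wordlist below are invented for illustration:

```python
# Sketch of a WPA2-PSK dictionary attack. The key derivation shown
# (PBKDF2-HMAC-SHA1, 4096 iterations, SSID as salt, 32-byte output)
# is the real WPA2-PSK construction; everything else is illustrative.
import hashlib

def wpa2_pmk(passphrase, ssid):
    """Derive the pairwise master key as WPA2-PSK does."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# In a real attack the PMK is verified against a captured handshake;
# here we assume the attacker can test guesses against the target key.
target_pmk = wpa2_pmk("sunshine1", "HomeWiFi")
wordlist = ["password1", "letmein99", "sunshine1"]

found = next((w for w in wordlist if wpa2_pmk(w, "HomeWiFi") == target_pmk), None)
print(found)  # sunshine1
```

The 4,096 iterations slow each guess by design, which is why the researchers' finding that password length remains the dominant cost, and why the de-authentication weakness matters more, since it sidesteps this expense.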

There are thus various potential entry points into the WPA2 protocol, which the team details in their paper. In the meantime, users should continue to use the strongest encryption protocol available with the most complex password, and to limit access to known devices via MAC address. It might also be worth crossing one's fingers… at least until a new security system becomes available.

Story Source:

The above story is based on materials provided by Inderscience. Note: Materials may be edited for content and length.



Saturday, May 24, 2014

Ensuring solid-state drives are up to scratch: Data buffering scheme improves performance of solid-state drives

A data buffering scheme improves the performance of solid-state drives in large-scale, data-intensive applications.

Solid-state drives (SSDs) store digital information using electronic circuits. The power efficiency of SSDs and their ability to read and write data quickly means that they are becoming the primary storage device in computers. A major drawback of SSDs, however, is the limited number of times that data can be stored and deleted -- an aspect that hinders the use of these devices for data-intensive applications known as data-center environments.

Qingsong Wei and co-workers at the A*STAR Data Storage Institute and National University of Singapore have developed a scheme for writing data to SSDs that could circumvent these problems to make solid-state drives useful for an even broader range of applications.

SSDs divide their storage space into distinct areas called blocks. A computer can either save large files across consecutive blocks -- a process known as sequential writing -- or write smaller files in blocks scattered throughout the device -- so-called random writing.

The researchers conducted an intensive workload study of the distribution of read and write request sizes over ten real enterprise workload traces supplied by the Storage Network Industry Association. They found that the highest traffic was from small, random requests of less than 64 kilobytes in size.

Generally, random writing is much slower -- by as much as four times -- than sequential writing. One way around this bottleneck is to use part of the memory as a 'buffer'. The buffer briefly stores data as it comes into the drive, which then enables sequential writing at a later time. Current buffer management approaches improve sequential writing but only at low buffer usage, wasting expensive buffer space.

Wei's team helped to solve this problem through an alternative approach that categorizes the data in the buffer by its popularity, which reflects how frequently the data is likely to be needed. The scheme retains popular blocks in the buffer, rather than deleting them, and sequentially writes less popular blocks to the SSD.
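The idea can be sketched as follows. This is an illustrative toy, not the published PAB algorithm: blocks accumulate popularity counts while buffered, and when the buffer overflows, the colder half is flushed to the SSD in one sequential pass while hot blocks stay resident:

```python
# Toy popularity-aware write buffer: popular (hot) blocks are retained
# to absorb rewrites; unpopular (cold) blocks are evicted in sorted
# order, turning scattered random writes into one sequential write.
class PopularityBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # block id -> access count (its "popularity")

    def write(self, block):
        """Buffer a write; returns any blocks flushed to the SSD."""
        self.counts[block] = self.counts.get(block, 0) + 1
        if len(self.counts) > self.capacity:
            return self.flush()
        return []

    def flush(self):
        """Evict the colder half, sorted by block id for sequential writing."""
        by_heat = sorted(self.counts, key=lambda b: self.counts[b])
        cold = sorted(by_heat[: len(by_heat) // 2])
        for b in cold:
            del self.counts[b]
        return cold  # these go to the SSD in one sequential pass

buf = PopularityBuffer(capacity=4)
for blk in [7, 3, 7, 9, 7, 1, 5]:
    flushed = buf.write(blk)
    if flushed:
        print("sequential flush:", flushed)  # sequential flush: [3, 9]
```

Note how block 7, rewritten three times, never reaches the SSD at all in this run; retaining hot blocks both raises the buffer hit ratio and spares the flash cells the repeated erase cycles that shorten drive lifetime.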

"Our buffer management scheme can increase sequential writing with high buffer utilization, thus improving performance and extending the lifetime of the SSD," says Wei.

The researchers tested the approach and demonstrated that the so-called popularity-aware buffer management scheme, or PAB, can achieve an improvement in performance of up to 72 per cent and triple the device lifetime compared to existing schemes. "Our method reduces the cost of SSDs by improving buffer utilization and is easy to implement," explains Wei. "Our next step will be to design smarter SSDs by integrating these same ideas with emerging non-volatile memory."

Journal Reference:

Qingsong Wei, Lingfang Zeng, Jianxi Chen, Cheng Chen. A Popularity-Aware Buffer Management to Improve Buffer Hit Ratio and Write Sequentiality for Solid-State Drive. IEEE Transactions on Magnetics, 2013; 49 (6): 2786 DOI: 10.1109/TMAG.2013.2249579


Friday, May 23, 2014

Global attack needed to catch credit thieves

Stopping massive data breaches like the one that hit Target will require a more sophisticated, collaborative approach by law enforcement agencies around the world, a Michigan State University cyber security expert argues.

In a new research report for the National Institute of Justice, Thomas Holt found many hackers and data thieves are operating in Russia or on websites where users communicate in Russian, making it easier to hide from U.S. and European authorities. All countries need to better work together to fight hacking and data theft campaigns, he said, and use undercover stings in which officers pose as administrators of the Internet forums where stolen data is advertised.

The Target breach, which compromised 40 million credit- and debit-card accounts during the 2013 holiday shopping season, may have originated in Russia, the Wall Street Journal recently reported.

"This is a truly global problem, one that we cannot solve domestically and that has to involve multiple nations and rigorous investigation through various channels," said Holt, associate professor of criminal justice.

Holt authored the 155-page report with Olga Smirnova from East Carolina University. The National Institute of Justice funded their research, the largest to date on this crime, with a $280,000 grant.

Holt and Smirnova analyzed 13 Internet forums through which stolen credit data was advertised. Specifically, they found:

Ten of the forums were in Russian and three were in English, though the forums were hosted across the world.
Visa and MasterCard were the most common cards for sale.
The average advertised price for a stolen credit- or bank-card number was about $102.
The average price for access to a hacked eBay or PayPal account was about $27.

Skilled hackers who steal thousands or even millions of cards generally attempt to quickly dump the data to buyers found through advertisements the hackers create in Internet forums. The buyers then assume the risk of making purchases or taking cash advances on the cards in return for a potentially large profit.

In the United States, Holt said it is imperative more money and resources -- such as Russian-speaking analysts and new technology -- be allocated to the FBI, Secret Service and other federal agencies to more effectively combat cybercrime.

Tougher state and federal cybercrime laws should also be passed to promote security and corporate responsibility. While 46 states currently require companies to disclose any loss of sensitive personal information in the event of a security breach, Holt suggested the laws generally don't go far enough to protect consumers.

"Greater transparency is needed on the part of both corporations and banks to disclose the true number of customers affected and to what degree as quickly as possible in order to reduce the risk of customer loss and economic harm," he said.

Consumers also need to be more vigilant.

"There is a big need for public awareness campaigns to promote basic computer security principles and vigilance against identity theft," Holt said. "Consumers need to understand the potential harm from responding to unsolicited email and clicking on suspicious web links as well as the need to run anti-virus and security tools on their computers."




Thursday, May 22, 2014

'Unbreakable' security codes inspired by nature

A revolutionary new method of encrypting confidential information has been patented by scientists at Lancaster University.

They have been inspired by their discoveries from human biology, which model how the heart and lungs coordinate their rhythms by passing information between each other.

A mathematical model based on the complex interaction between these organs has now been transferred to the world of modern communications.

This discovery could transform daily life which is reliant on secure electronic communications for everything from mobiles to sensor networks and the internet.

Every device, from your car key to online bank account, contains different identification codes enabling information to be transferred in confidence. But the race to outwit the hackers means there is a continual demand for better encryption methods.

Inspiration for the new method of encryption came from interdisciplinary research in the Physics Department by Dr Tomislav Stankovski, Professor Peter McClintock, and Professor Aneta Stefanovska, and the patent includes Dr Robert Young.

Professor McClintock commented that this is a significant discovery.

He said: "This promises an encryption scheme that is so nearly unbreakable that it will be equally unwelcome to internet criminals and official eavesdroppers."

Professor Stefanovska emphasized the interdisciplinary aspect: "As so often happens with important breakthroughs, this discovery was made right on the boundary between two different subjects -- because we were applying physics to biology."

Dr Stankovski said: "Here we offer a novel encryption scheme derived from biology, radically different from any earlier procedure. Inspired by the time-varying nature of the cardio-respiratory coupling functions recently discovered in humans, we propose a new encryption scheme that is highly resistant to conventional methods of attack."

The advantage of this discovery is that it offers an infinite number of choices for the secret encryption key shared between the sender and receiver. This makes it virtually impossible for hackers and eavesdroppers to crack the code.

The new method is exceptionally resistant to interference from the random fluctuations, or "noise," that affect all communications systems.

It can also transmit several different information streams simultaneously, enabling all the digital devices in the home, for example, to operate on one encryption key instead of dozens of different ones.



Wednesday, May 21, 2014

Black markets for hackers increasingly sophisticated, specialized, maturing

Black and gray markets for computer hacking tools, services and byproducts such as stolen credit card numbers continue to expand, creating an increasing threat to businesses, governments and individuals, according to a new RAND Corporation study.

One dramatic example is the December 2013 breach of retail giant Target, in which data from approximately 40 million credit cards and 70 million user accounts was hijacked. Within days, that data appeared -- available for purchase -- on black market websites.

"Hacking used to be an activity that was mainly carried out by individuals working alone, but over the last 15 years the world of hacking has become more organized and reliable," said Lillian Ablon, lead author of the study and an information systems analyst at RAND, a nonprofit research organization. "In certain respects, cybercrime can be more lucrative and easier to carry out than the illegal drug trade."

The growth in cybercrime has been assisted by sophisticated and specialized markets that freely deal in the tools and the spoils of cybercrime. These include items such as exploit kits (software tools that can help create, distribute, and manage attacks on systems), botnets (a group of compromised computers remotely controlled by a central authority that can be used to send spam or flood websites), as-a-service models (hacking for hire) and the fruits of cybercrime, including stolen credit card numbers and compromised hosts.

In the wake of several highly-publicized arrests and an increase in the ability of law enforcement to take down some markets, access to many of these black markets has become more restricted, with cybercriminals vetting potential partners before offering access to the upper levels. That said, once in, the barrier to entry to participate and profit is very low, according to the report.

RAND researchers conducted more than two dozen interviews with cybersecurity and related experts, including academics, security researchers, news reporters, security vendors and law enforcement officials. The study outlines the characteristics of the cybercrime black markets, with additional consideration given to botnets and their role in the black market, and "zero-day" vulnerabilities (software bugs that are unknown to vendors and without a software patch). Researchers also examine various projections and predictions for how the black market may evolve.

What makes these black markets notable is their resilience and sophistication, Ablon said. Even as consumers and businesses have fortified their activities in reaction to security threats, cybercriminals have adapted. An increase in law enforcement arrests has resulted in hackers going after bigger targets. More and more crimes have a digital component.

The RAND study says there will be more activity in "darknets," more checking and vetting of participants, more use of crypto-currencies such as Bitcoin, greater anonymity capabilities in malware, and more attention to encrypting and protecting communications and transactions. Helped by such markets, the ability to attack will likely outpace the ability to defend.

Hyper-connectivity will create more points of presence for attack and exploitation so that crime increasingly will have a networked or cyber component, creating a wider range of opportunities for black markets. Exploitations of social networks and mobile devices will continue to grow. There will be more hacking-for-hire, as-a-service offerings and cybercrime brokers.

However, experts disagree on who will be the most affected by the growth of the black market, what products will be on the rise and which types of attacks will be more prevalent, Ablon said.

The study, "Markets for Cybercrime Tools and Stolen Data: Hackers' Bazaar," can be found at http://www.rand.org/pubs/research_reports/RR610.html.

Story Source:

The above story is based on materials provided by RAND Corporation. Note: Materials may be edited for content and length.



Tuesday, May 20, 2014

Keeping secrets in a world of spies and mistrust

Revelations of the extent of government surveillance have thrown a spotlight on the security -- or lack thereof -- of our digital communications. Even today's encrypted data is vulnerable to technological progress. What privacy is ultimately possible? In the 27 March issue of Nature, the weekly international journal of science, researchers Artur Ekert and Renato Renner review what physics tells us about keeping our secrets secret.

In the history of secret communication, the most brilliant efforts of code-makers have been matched time and again by the ingenuity of code-breakers. Sometimes we can even see it coming. We already know that one of today's most widely used encryption systems, RSA, will become insecure once a quantum computer is built.

But that story need not go on forever. "Recent developments in quantum cryptography show that privacy is possible under stunningly weak assumptions about the freedom of action we have and the trustworthiness of the devices we use," says Ekert, Professor of Quantum Physics at the University of Oxford, UK, and Director of the Centre for Quantum Technologies at the National University of Singapore. He is also the Lee Kong Chian Centennial Professor at the National University of Singapore.

Over 20 years ago, Ekert and others independently proposed a way to use the quantum properties of particles of light to share a secret key for secure communication. The key is a random sequence of 1s and 0s, derived by making random choices about how to measure the particles (and some other steps), that is used to encrypt the message. In the Nature Perspective, he and Renner describe how quantum cryptography has since progressed to commercial prospect and into new theoretical territory.
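The sifting step described above, in which random measurement choices yield a shared random key, can be sketched in miniature. This toy model (no eavesdropper, no quantum noise) simply keeps the positions where sender and receiver happened to choose the same basis; with matching bases, the receiver's measurement reproduces the sent bit exactly.

```python
import secrets

def sift_key(n):
    # Toy BB84-style sifting: the sender picks random bits and random
    # bases; the receiver picks random measurement bases. Positions
    # where the two bases match become shared key bits. (No eavesdropper
    # or channel noise is modeled here.)
    bits      = [secrets.randbelow(2) for _ in range(n)]
    send_base = [secrets.randbelow(2) for _ in range(n)]
    recv_base = [secrets.randbelow(2) for _ in range(n)]
    # With matching bases, the receiver measures exactly the sent bit.
    key_sender   = [b for b, s, r in zip(bits, send_base, recv_base) if s == r]
    key_receiver = [b for b, s, r in zip(bits, send_base, recv_base) if s == r]
    return key_sender, key_receiver

ka, kb = sift_key(256)
assert ka == kb            # both sides end up with the same sifted key
```

On average about half the positions survive sifting, which is why real protocols send many more photons than the key bits they need.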

Even though privacy is about randomness and trust, the most surprising recent finding is that we can communicate secretly even if we have very little trust in our cryptographic devices -- imagine that you buy them from your enemy -- and in our own abilities to make free choices -- imagine that your enemy is also manipulating you. Given access to certain types of correlations, be they of quantum origin or otherwise, and having a little bit of free will, we can protect ourselves. What's more, we can even protect ourselves against adversaries with superior technology that is unknown to us.

"As long as some of our choices are not completely predictable and therefore beyond the powers that be, we can keep our secrets secret," says Renner, Professor of Theoretical Physics at ETH Zurich, Switzerland. This arises from a mathematical discovery by Renner and his collaborator about 'randomness amplification': they found that a quantum trick can turn some types of slightly-random numbers into completely random numbers. Applied in cryptography, such methods can reinstate our abilities to make perfectly random choices and guarantee security even if we are partially manipulated.
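For contrast with the quantum result, the classical world offers only weaker tools. The sketch below shows von Neumann debiasing, a classical procedure that removes bias from independent coin flips; it is only a loose analogue, since Renner's quantum randomness amplification achieves something no classical post-processing can, namely amplifying randomness whose imperfections are adversarially correlated.

```python
def von_neumann(bits):
    # Classical debiasing (von Neumann's trick): read bits in pairs,
    # map 01 -> 0 and 10 -> 1, and discard 00 and 11. For independent
    # biased flips the output is unbiased. This is NOT the quantum
    # randomness amplification described above, just a classical analogue.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

assert von_neumann([0, 1, 1, 0, 1, 1, 0, 0]) == [0, 1]
```

The limitation is that von Neumann's trick needs the input bits to be independent; the quantum methods drop even that assumption.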

"As well as there being exciting scientific developments in the past few years, the topic of cryptography has very much come out of the shadows. It's not just spooks talking about this stuff now," says Ekert, who has worked with and advised several companies and government agencies.

The semi-popular essay cites 68 works, from the writings of Edgar Allan Poe on cryptography in 1841, through the founding papers of quantum cryptography in 1984 and 1991, right up to a slew of results from 2013.

The authors conclude that "The days we stop worrying about untrustworthy or incompetent providers of cryptographic services may not be that far away."

Journal Reference:

Artur Ekert, Renato Renner. The ultimate physical limits of privacy. Nature, 2014; 507 (7493): 443 DOI: 10.1038/nature13132


Monday, May 19, 2014

Critical vulnerabilities in TLS implementation for Java

In January and April 2014, Oracle released critical Java software security updates. They resolve, amongst others, three vulnerabilities discovered by researchers from the Horst Görtz Institute for IT Security at the Ruhr-Universität Bochum. These vulnerabilities affect the "Java Secure Socket Extension" (JSSE), a software library implementing the "Transport Layer Security" protocol (TLS). TLS is used to encrypt sensitive information transferred between browsers and web servers, such as passwords and credit card data.

Similar to Heartbleed

Recently, the Heartbleed vulnerability of OpenSSL, the most important TLS implementation, has hit the headlines. Like OpenSSL, JSSE is an open source TLS implementation, maintained by Oracle. The researchers discovered three weaknesses in the JSSE library, two of which could be used to completely break the security of TLS encryption. Following the "responsible disclosure" paradigm, the team of Prof Dr Jörg Schwenk privately informed Oracle about these vulnerabilities prior to public announcement. The researchers recommend installing Oracle's software updates for applications using JSSE as soon as possible.

How to break TLS in JSSE

JSSE was found vulnerable to so-called "Bleichenbacher attacks." First, the researchers intercepted an encrypted communication between a client (e.g. a web browser) and a server. Then, they sent a few thousand requests to the server; by examining the server's responses they could compute the secret session key. This session key can be used to decrypt all data exchanged between client and server. The first vulnerability was based on critical information that the TLS server transmitted via error messages. The second one was based on different response times of the JSSE server. Bleichenbacher attacks are complex cryptographic attacks, also referred to as adaptive chosen-ciphertext attacks.
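The essence of such an adaptive chosen-ciphertext attack is that the server's differing responses act as an oracle leaking a little information per query. The toy sketch below is not the real Bleichenbacher algorithm (which manipulates RSA ciphertexts and needs on the order of thousands of queries); it only demonstrates the principle that a one-bit-per-query oracle lets an attacker recover a secret efficiently.

```python
def make_oracle(secret):
    # Toy stand-in for a server whose error messages or timing leak one
    # bit of information per adaptively chosen query. The real attack
    # asks "is this modified ciphertext's padding valid?" instead.
    return lambda guess: guess <= secret

secret = 123456
oracle = make_oracle(secret)

lo, hi = 0, 2**20
while lo < hi:                       # binary search driven by the oracle
    mid = (lo + hi + 1) // 2
    if oracle(mid):
        lo = mid
    else:
        hi = mid - 1

assert lo == secret                  # secret recovered in ~20 queries
```

This is why the researchers' fix matters: making error responses and response times indistinguishable closes the oracle.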

April patch from Oracle solves another problem

The April patch provided by Oracle also fixes a vulnerability in another cryptographic algorithm (PKCS#1 v2.1, a.k.a. RSA-OAEP), which was susceptible to a different adaptive chosen-ciphertext attack. This algorithm is not used in TLS, but in other security-critical applications, such as Web Services.



Sunday, May 18, 2014

Cybersecurity researchers roll out a new heartbleed solution: Red Herring creates decoy servers, entraps, monitors hackers

As companies scrambled in recent days to address the latest cybersecurity bug known as Heartbleed, researchers at The University of Texas at Dallas had a solution that fixes the vulnerability, and also detects and entraps hackers who might be using it to steal sensitive data.

The advanced technique -- dubbed Red Herring -- was created by a team led by Dr. Kevin Hamlen, an associate professor of computer science in the Erik Jonsson School of Computer Science and Engineering. It automates the process of creating decoy servers, making hackers believe they have gained access to confidential, secure information, when in fact their deeds are being monitored, analyzed and traced back to the source.

"Our automated honeypot creates a fake Web server that looks and acts exactly like the original -- but it's a trap," said Hamlen, a member of the UT Dallas Cyber Security Research and Education Institute (CSI). "The attackers think they are winning, but Red Herring basically keeps them on the hook longer so the server owner can track them and their activities. This is a way to discover what these nefarious individuals are trying to do, instead of just blocking what they are doing."

The Heartbleed bug affects about two-thirds of websites previously believed to be secure. These are websites that use the computer code library called OpenSSL to encrypt supposedly secure Internet connections that are used for sensitive purposes such as online banking and purchasing, sending and receiving emails, and remotely accessing work networks. Heartbleed became public last week.

In 2012, a new feature named Heartbeat was added to OpenSSL, primarily to support slow Internet connections. Heartbeat allowed connections to be held open, even during idle time. A flaw in the implementation allowed confidential information to be read through the connection, hence the name Heartbleed.
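The flaw is a classic buffer over-read: the server echoed back as many bytes as the client *claimed* its heartbeat payload contained, without checking the payload's actual length. A minimal sketch of the idea (the real bug lives in C inside OpenSSL; this is only a model):

```python
def heartbeat(memory, payload, claimed_len):
    # Toy model of the Heartbleed flaw: echo back `claimed_len` bytes
    # starting at the payload, without verifying that the claim matches
    # the payload's real length -- so adjacent memory leaks out.
    start = memory.index(payload)
    return memory[start:start + claimed_len]

memory = b"PAYLOAD" + b"secret_session_key"   # payload sits next to secrets
leak = heartbeat(memory, b"PAYLOAD", 20)      # claim 20 bytes, sent only 7
assert b"secret" in leak
```

The fix, accordingly, is a single bounds check: refuse to echo more bytes than were actually received.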

Even though Heartbleed is now in the process of being fixed, victims face the challenge of not knowing who may already be exploiting it to steal the information, and what information they may be going after. A common fix for this type of problem is to create a trap, a honeypot that lures and exposes attackers. Typically this can involve setting up another Web server somewhere else.

"There are all sorts of ad hoc solutions where people try to confuse the attacker by deploying fake servers, but our solution builds the trap into the real server so that attacks against the real server are detected and monitored," Hamlen said. "Our research idea can build this honeypot really quickly and reliably as new vulnerabilities are disclosed."

The Red Herring algorithm created by Hamlen automatically converts a patch -- code widely used to fix new vulnerabilities like Heartbleed -- into a honeypot that can catch the attacker at the same time.
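The article does not describe Red Herring's internals, but the patch-to-honeypot idea can be caricatured as follows: where a normal patch would reject a malformed request, the honeypot variant reuses the same check, serves plausible decoy data, and logs the attacker. The handlers and request format below are entirely hypothetical.

```python
def patched_handler(req):
    # The ordinary fix: detect the malformed request and reject it.
    if req["claimed_len"] > len(req["payload"]):
        raise ValueError("malformed heartbeat")
    return req["payload"][:req["claimed_len"]]

def honeypot_handler(req, log):
    # Red-Herring-style idea (toy sketch): reuse the patch's detection
    # logic, but instead of rejecting, serve decoy bytes and record the
    # attack so an analyst can study it.
    if req["claimed_len"] > len(req["payload"]):
        log.append(req)                          # monitor the attacker
        return b"X" * req["claimed_len"]         # plausible-looking decoy
    return req["payload"][:req["claimed_len"]]

log = []
evil = {"payload": b"hi", "claimed_len": 100}
resp = honeypot_handler(evil, log)
assert len(log) == 1 and len(resp) == 100        # trapped, not blocked
```

The appeal of automating this is speed: the same detection logic a vendor ships as a fix doubles as the trigger for the trap.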

"When Heartbleed came out, this was the perfect test of our prototype," Hamlen said.

Red Herring doesn't stop at being a decoy and blocker; it can also lead to catching the attacker. As the attacker thinks he or she is stealing data, an analyst is tracking the attack to find out what information the attacker is after, how the malicious code works and who is sending the code.

"In their original disclosure, security firm Codenomicon urged experts to start manually building honeypots for Heartbleed," Hamlen said. "Since we already had created algorithms to automate this process, we had a solution within hours."

When news of Heartbleed became public on April 8, software engineering doctoral student Frederico Araujo started researching the vulnerability and had implemented Red Herring by 2:30 a.m. April 9.

"I was very proud that he had taken the initiative before I'd even gotten to it," Hamlen said. "Normally, I personally would have started working on it sooner, but I'd been up all night grading papers the night before."



Saturday, May 17, 2014

Security tools for Industry 4.0

An increasing number of unsecured, computer-guided production machinery and networks in production facilities are gradually evolving into gateways for data theft. New security technologies may directly shield the sensitive data that is kept there.

You can hear the metallic buzz as the milling machine bores into the workpiece. Just a few last drill holes, and the camshaft is complete. The computer-guided machine performed the entire job -- thanks to the digital manufacturing data that were uploaded onto its embedded computer beforehand. Everything runs without a hitch, only -- the data are stolen.

Manufacturing data determine the production process for a product, and are just as valuable today as the design plans. They contain distinctive, inimitable information about the product and its manufacture. Whoever possesses this information merely needs the right equipment, et voilà: the pirated or counterfeit product is done. Whereas design data are well-protected from unauthorized outside access today, production data often lie exposed and unsecured in the computer-assisted machinery. An infected computer on the network, or just a USB stick, is all a thief would need to heist the data. Or hackers could directly attack the IT network -- for instance, through unsecured network components, like routers or switches.

Encrypting manufacturing data upon creation

Researchers at the Fraunhofer Institute for Secure Information Technology SIT in Darmstadt are demonstrating how these security gaps can be closed at this year's CeBIT from 10 to 14 March, 2014 (Hall 9, Booth E40). They will be presenting, for example, a software application that encrypts manufacturing data as soon as they are created. Integrated into both the computer and the machinery, the software ensures that the two communicate over a protected transport channel and that only licensed actions are executed. "To the best of our knowledge, no comparable safeguard has previously existed for manufacturing data that reside directly in the machine tool," states Thomas Dexheimer from the SIT's Security Testlab. Digital Rights Management (DRM) controls all important parameters of the assignment, such as designated use, quantity, etc. This way, brand manufacturers are able to guarantee that even external producers can only produce an authorized quantity, as instructed in advance -- and no additional pirated units.
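The quantity-limiting idea behind the DRM layer is simple to sketch. The class below is a hypothetical illustration of the concept, not SIT's software: each production run must be authorized against a license, and authorization fails once the licensed quantity is exhausted.

```python
class ProductionLicense:
    # Hypothetical sketch of the DRM idea described above: the machine
    # executes a job only while the licensed quantity is not used up.
    def __init__(self, quantity):
        self.remaining = quantity

    def authorize(self):
        if self.remaining <= 0:
            return False          # refuse: quota exhausted
        self.remaining -= 1
        return True

lic = ProductionLicense(quantity=3)
made = sum(1 for _ in range(10) if lic.authorize())
assert made == 3                  # no pirated extra units get produced
```

In the real system the license would be cryptographically bound to the encrypted manufacturing data, so copying the data without the license is useless.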

His colleague at SIT, Dr. Carsten Rudolph, is more involved with secured networks. At CeBIT, Rudolph will exhibit his "Trusted Core Network." "Hackers can also gain access to sensitive production data via unsecured network components. These are small computers themselves, and can be easily manipulated," says the "Trust and Compliance" department head at SIT. To prevent this, he called upon a piece of technology that lies largely dormant and unused on our PCs: the Trusted Platform Module. This is a small computer chip that can encrypt, decrypt, and digitally sign data. Installed in a network component, it indicates which software is running on the component and assigns a distinct identity to it. "As soon as the software changes in a component, the adjacent component registers this occurrence and notifies the administrator. Hacker attacks can be exposed quickly and easily this way," says Rudolph.
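The detection mechanism Rudolph describes amounts to attestation by measurement: a component's "identity" includes a hash of its running software, and any change to the software changes the hash. A minimal sketch of that idea (the firmware strings are hypothetical; a real TPM computes and signs such measurements in hardware):

```python
import hashlib

def measure(software_image):
    # TPM-style "measurement": a cryptographic hash of the software a
    # component is running. A real TPM would also sign this value.
    return hashlib.sha256(software_image).hexdigest()

expected = measure(b"router-firmware-v1")

# Neighboring components compare a peer's reported measurement against
# the expected value and alert the administrator on mismatch.
assert measure(b"router-firmware-v1") == expected          # untampered
assert measure(b"router-firmware-v1-evil") != expected     # change detected
```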

"Both security technologies are important building blocks for the targeted Industry 4.0 scenario," says Dexheimer. The term "Industry 4.0" stands for the fourth industrial revolution. After water and steam power, followed by electrical energy, electronics and information technology, now, the cyber-physical systems (IT systems embedded in machinery that communicate with each other via wireless or cabled networks) and the Internet of Things are expected to move into the factory halls. "This revolution can only work if the intellectual property is sufficiently protected. And that's a tall order, because the targets of production IT will increase exponentially, due to ever growing digitization and networking," explains Dexheimer.

At this year's CeBIT, both researchers -- Dexheimer and Rudolph -- will present a computer-assisted machine tool using a CAD computer and a 3D printer. SIT's security software is installed both on the computer and the printer. The data are encrypted on the computer, and decrypted by the printer. The printer also validates the licensed authorization to conduct the print job. To ensure that the data are also securely embedded in the network, the scientists have built a Trusted Platform Module into multiple routers, and are displaying this as a demo. "An attacker cannot hack this there, because he or she will get nowhere near the built-in key," explains Rudolph.



Friday, May 16, 2014

Record quantum entanglement of multiple dimensions: Two Schrödinger cats which could be alive, dead, or in 101 other states simultaneously

An international team of researchers, directed by researchers from the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, and with participation from the Universitat Autònoma de Barcelona, has managed to create an entanglement of 103 dimensions with only two photons. The previous record was 11 dimensions. The discovery could represent a great advance toward the construction of quantum computers with much higher processing speeds than current ones, and toward better encryption of information.

The states in which elementary particles, such as photons, can be found have properties which are beyond common sense. Superpositions are produced, such as the possibility of being in two places at once, which defies intuition. In addition, when two particles are entangled a connection is generated: measuring the state of one (whether they are in one place or another, or spinning one way or another, for example) affects the state of the other particle instantly, no matter how far away from each other they are.

Scientists have spent years combining both properties to construct networks of entangled particles in a state of superposition. This in turn allows constructing quantum computers capable of operating at unimaginable speeds, encrypting information with total security and conducting experiments in quantum mechanics which would be impossible to carry out otherwise.

Until now, in order to increase the "computing" capacity of these particle systems, scientists have mainly turned to increasing the number of entangled particles, each of them in a two-dimensional state of superposition: a qubit (the quantum equivalent to an information bit, but with values which can be 1, 0 or an overlap of both values). Using this method, scientists managed to entangle up to 14 particles, an authentic multitude given its experimental difficulty.

The research team was directed by Anton Zeilinger and Mario Krenn from the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences. It included the participation of Marcus Huber, researcher from the Group of Quantum Information and Quantum Phenomena from the UAB Department of Physics, as well as visiting researcher at the Institute of Photonic Sciences (ICFO). The team has advanced one more step towards improving entangled quantum systems.

In a new article posted on arXiv, the scientists describe how they managed to achieve a quantum entanglement of at least 103 dimensions with only two particles. "We have two Schrödinger cats which could be alive, dead, or in 101 other states simultaneously," Huber jokes, "plus, they are entangled in such a way that what happens to one immediately affects the other." The result sets a record for multidimensional quantum entanglement with two particles; the previous record was 11 dimensions.

Instead of entangling many particles carrying one qubit of information each, the scientists generated a single pair of entangled photons in which each photon could be in more than one hundred states, or in any superposition of these states; something much easier than entangling many particles. These highly complex states correspond to different modes in which the photons may find themselves, each mode with its own characteristic distribution of phase, angular momentum and intensity.
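A back-of-the-envelope calculation shows what the higher dimension buys. For a maximally entangled pair of d-level systems, |ψ⟩ = (1/√d) Σᵢ |i⟩|i⟩, each photon on its own is uniformly mixed over d modes, so the entanglement entropy is log₂(d) bits; with d = 103 that is about 6.7 bits per photon pair, versus exactly 1 bit for an entangled qubit pair. (This is a textbook calculation for the idealized maximally entangled state, not the experiment's measured value.)

```python
import math

d = 103
# Maximally entangled pair of d-level systems:
#   |psi> = (1/sqrt(d)) * sum_i |i>|i>
# Tracing out one photon leaves the other uniformly mixed over d modes.
amplitude = 1 / math.sqrt(d)
probs = [amplitude ** 2] * d            # each mode has probability 1/d
entropy = -sum(p * math.log2(p) for p in probs)

assert abs(entropy - math.log2(d)) < 1e-9   # about 6.69 bits vs 1 for qubits
```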

"This high dimension quantum entanglement offers great potential for quantum information applications. In cryptography, for example, our method would allow us to maintain the security of the information in realistic situations, with noise and interference. In addition, the discovery could facilitate the experimental development of quantum computers, since this would be an easier way of obtaining high dimensions of entanglement with few particles," explains UAB researcher Marcus Huber.

Now that the results demonstrate that obtaining high dimension entanglements is accessible, scientists conclude in the article that the next step will be to search how they can experimentally control these hundreds of spatial modes of the photons in order to conduct quantum computer operations.



Thursday, May 15, 2014

Brain research tracks internet safety performance, dispels assumptions, identifies traits of those at-risk

Internet users face a barrage of information with each click, some of it designed to compromise security and privacy. Spammers hope, and security researchers have warned, that users cannot distinguish legitimate websites from dangerous ones and do not heed browser safety warnings.

However, new research from the University of Alabama at Birmingham suggests that users pay more attention to Internet safety than previously assumed. In a paper that won the "Distinguished Paper Award" at the 2014 Network and Distributed Systems Security Symposium, researchers used a novel methodology to gain new neurological insights into how users face security questions and how their personalities might affect their performance.

Nitesh Saxena, Ph.D., associate professor in the Department of Computer and Information Sciences and a core member of the Center for Information Assurance and Joint Forensics Research, wondered what was happening in Internet users' brains when they encountered malware warnings or malicious websites.

"Many computer-based lab studies on user-centered security have concluded that users do not pay attention to these tasks and are ill-equipped to pay attention to security warnings," Saxena said. "I had been taught for years that users are careless when it comes to security endeavors."

However, security studies in lab settings show different results than a recent study based on real-world user data, he says.

He teamed up with Rajesh Kana, Ph.D., associate professor of psychology, and UAB graduate research assistants Ajaya Neupane (lead student author) and Michael Georgescu, as well as Keya Kuruvilla, a Department of Psychology student, to use brain imaging to discover what is really happening in users' brains as they encounter security questions.

Users were given two tasks. First, they were shown intermingled examples of popular websites' real login pages and fraudulent replications of those pages and were asked to determine which were real and which were fake -- phishing -- sites. Users were then asked to read several sample news articles and were interrupted by pop-ups that contained either benign information or warnings about malware, software created to obtain unauthorized access to a computer's resources and collect information.

Using a functional magnetic resonance imaging, or fMRI, machine, researchers measured users' accuracy while tracking their brain activity. Results showed activation in areas of the brain associated with attention, decision-making and problem-solving. Activity in the brain's decision-making regions carried across both tasks, suggesting that accuracy at one task could predict accuracy at the other.

"For both tasks we found brain activity, so people are not careless," Saxena said. "But whether or not their decisions are valid is a different situation."

Accuracy in the malware warning task was about 89 percent, and the fMRI scans showed high brain activity in regions associated with problem-solving and decision-making.

"In the warning task, people seem to make extra effort to make decisions," Saxena said. "When they were subject to warnings, there was also activity in language comprehension areas. Warnings trigger some sort of thought process in people's brains that there is something unusual going on."

Accuracy in identifying real versus fake websites was low at only about 60 percent -- only 10 percent better than a random guess, though participants showed activation in brain regions associated with decision-making.

"In the phishing task, users didn't do very well," Saxena said. "That may be because they don't know what to look for. When they look at a website, they might be paying attention only to the look and feel of the website instead of the URL, which is often a real indicator."
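Saxena's point, that the URL rather than the page's look and feel identifies the real site, can be made concrete with a tiny check. The domain list below is hypothetical; real phishing pages routinely embed the legitimate name as a *subdomain* of an attacker-controlled domain, which fools visual inspection but not a hostname comparison.

```python
from urllib.parse import urlparse

KNOWN_HOSTS = {"bank.example.com"}   # hypothetical list of trusted hosts

def hostname_is_trusted(url):
    # The hostname in the URL bar, not the page's appearance, is what
    # identifies the site. Note the second example below: the real name
    # appears in the URL, but only as a subdomain of an evil domain.
    return urlparse(url).hostname in KNOWN_HOSTS

assert hostname_is_trusted("https://bank.example.com/login")
assert not hostname_is_trusted("https://bank.example.com.evil.example/login")
```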

Researchers also had users complete a personality assessment to measure their impulsiveness, and the fMRI results showed differences in how highly impulsive users behaved.

"Not all individuals are alike," Saxena said. "We found a negative correlation of impulsivity and brain activity. Highly impulsive people probably just hit 'yes' when they are stopped by a malware warning asking if they want to proceed. This is interesting because it offers a way to predict how people may perform in security tasks based on impulsivity scores."

The relationship between personality traits like impulsivity and brain responses was especially interesting, Kana says.

"Participants with greater impulsive traits showed less brain activity in key decision-making areas of the brain during security decisions," Kana said.

The study could help security programmers focus their attention on designing better warning systems, and network managers target their security training at users who tend to be impulsive, Neupane says.

Cite This Page:

University of Alabama at Birmingham. (2014, February 28). Brain research tracks internet safety performance, dispels assumptions, identifies traits of those at-risk. ScienceDaily. Retrieved May 5, 2014 from www.sciencedaily.com/releases/2014/02/140228121130.htm


Wednesday, May 14, 2014

Flaw in 'secure' cloud storage could put privacy at risk

Johns Hopkins computer scientists have found a flaw in the way that secure cloud storage companies protect their customers' data. The scientists say this weakness jeopardizes the privacy protection these digital warehouses claim to offer. Whenever customers share their confidential files with a trusted friend or colleague, the researchers say, the storage provider could exploit the security flaw to secretly view this private data.

The lead author of the new article is Duane C. Wilson, a doctoral student in the Department of Computer Science in the university's Whiting School of Engineering. The senior author is his faculty adviser, Giuseppe Ateniese, an associate professor in the department. Both are affiliated with the Johns Hopkins University Information Security Institute.

Their research focused on the secure cloud storage providers that are increasingly being used by businesses and others to house or back up sensitive information about intellectual property, finances, employees and customers. These storage providers claim to offer "zero-knowledge environments," meaning that their employees cannot see or access the clients' data. These storage businesses typically assert that this confidentiality is guaranteed because the information is encrypted before it is uploaded for cloud storage.

But the Johns Hopkins team found that complete privacy could not be guaranteed by these vendors. "Our research shows that as long as the data is not shared with others, its confidentiality will be preserved, as the providers claim," Wilson said. "However, whenever data is shared with another recipient through the cloud storage service, the providers are able to access their customers' files and other data."

The problem, Wilson said, is that privacy during file-sharing is normally preserved by the use of a trusted third party, a technological "middle-man" who verifies the identity of the users who wish to share files. When this authentication process is finished, this third party issues "keys" that can unscramble and later re-encode the data to restore its confidentiality.

"In the secure cloud storage providers we examined," Wilson said, "the storage businesses were each operating as their own 'trusted third party,' meaning they could easily issue fake identity credentials to people using the service. The storage businesses could use a phony 'key' to decrypt and view the private information, then re-encrypt it before sending it on to its intended recipient."

Wilson added, "As a result, whenever data is shared with another user or group of users, the storage service could perform a man-in-the-middle attack by pretending to be another user or group member. This would all happen without alerting the customers, who incorrectly believe that the cloud storage provider cannot see or access their data."
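The flaw can be illustrated with a deliberately simplified model. In the sketch below, the "encryption" is a toy stand-in (it just tags data with a key token) and all names are invented; the point is the key-substitution flow: a provider that both runs the key directory and relays ciphertexts can hand out its own key, read the file, and re-encrypt it for the real recipient.

```python
import secrets

# Toy model (NOT real cryptography): a "keypair" is one shared token, and
# "encrypting" to a public key just tags the data with it.

def keygen():
    k = secrets.token_hex(8)
    return k, k  # (public, private) collapsed in this toy model

def encrypt(pub, msg):
    return (pub, msg)

def decrypt(priv, ct):
    tagged, msg = ct
    if tagged != priv:
        raise ValueError("wrong key")
    return msg

class MaliciousProvider:
    """Runs the key directory AND relays ciphertexts between users."""
    def __init__(self):
        self.directory = {}   # username -> public key as the provider reports it
        self.real_keys = {}   # username -> key the user actually registered
        self.fake_priv = {}
        self.seen = []        # plaintexts the provider managed to read

    def register(self, user, pub):
        self.real_keys[user] = pub
        # Substitute a provider-controlled key in the directory entry.
        fake_pub, fake_priv = keygen()
        self.directory[user] = fake_pub
        self.fake_priv[user] = fake_priv

    def lookup(self, user):
        return self.directory[user]

    def relay(self, recipient, ct):
        # Decrypt with the fake key, record the plaintext, re-encrypt properly.
        msg = decrypt(self.fake_priv[recipient], ct)
        self.seen.append(msg)
        return encrypt(self.real_keys[recipient], msg)

provider = MaliciousProvider()
bob_pub, bob_priv = keygen()
provider.register("bob", bob_pub)

# Alice trusts the provider's directory to share a file with Bob.
ct = encrypt(provider.lookup("bob"), "Q3 financials")
ct = provider.relay("bob", ct)

print(decrypt(bob_priv, ct))  # Bob receives the file as expected...
print(provider.seen)          # ...but the provider read it in transit
```

Bob notices nothing: the file arrives intact and decrypts with his own key, which is exactly why an independent third party is needed to certify keys.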

These storage services generally do not share the details of how their technology works, so Wilson and Ateniese substantiated the security flaw by using a combination of reverse engineering and network traffic analysis to study the type of communication that occurs between a secure cloud storage provider and its customers.

The researchers pointed out that their study focused only on three storage providers that claimed their customers' data would remain completely confidential. Other file-sharing services, such as Dropbox and Google Drive, make no pledge of privacy. Instead, they say that after a user's data is uploaded, it is encrypted with keys that are owned by the file-sharing service.

To solve the security flaw, the researchers recommend that the arrangements between customers and secure storage providers be revised so that an independent third party serves as the file-sharing "middle-man," instead of the storage company itself.

"Although we have no evidence that any secure cloud storage provider is accessing their customers' private information, we wanted to get the word out that this could easily occur," said Ateniese, who supervised the research. "It's like discovering that your neighbors left their door unlocked. Maybe no one has stolen anything from the house yet, but don't you think they'd like to know that it would be simple for thieves to get inside?"


View the original article here

Tuesday, May 13, 2014

First contagious airborne WiFi virus discovered

Researchers at the University of Liverpool have shown for the first time that WiFi networks can be infected with a virus that can move through densely populated areas as efficiently as the common cold spreads between humans.

The team designed and simulated an attack by a virus, called "Chameleon," and found that not only could it spread quickly between homes and businesses, but it was able to avoid detection and identify the points at which WiFi access is least protected by encryption and passwords.

Researchers from the University's School of Computer Science and Electrical Engineering and Electronics simulated an attack on Belfast and London in a laboratory setting, and found that "Chameleon" behaved like an airborne virus, travelling across the WiFi network via Access Points (APs) that connect households and businesses to WiFi networks.

Areas that are more densely populated have more APs in closer proximity to each other, which means the virus propagates more quickly, particularly across networks connectable within a 10-50 metre radius.

Alan Marshall, Professor of Network Security at the University, said: "When "Chameleon" attacked an AP it didn't affect how it worked, but was able to collect and report the credentials of all other WiFi users who connected to it. The virus then sought out other WiFi APs that it could connect to and infect."

"Chameleon" was able to avoid detection because current virus detection systems look for viruses present on the Internet or on computers, whereas Chameleon is only ever present in the WiFi network. While many APs are sufficiently encrypted and password protected, the virus simply moved on to find those that weren't strongly protected, including open-access WiFi points common in locations such as coffee shops and airports.
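The reported propagation behaviour can be caricatured in a few lines. The AP positions, names and 50-metre threshold below are invented for illustration (the actual Chameleon code is not public): an infected AP compromises any weakly protected AP within range, hopping through dense areas while skipping strongly encrypted ones.

```python
import math
from collections import deque

RANGE_M = 50  # APs connectable within a 10-50 metre radius

# (x, y) position in metres, and whether the AP uses strong encryption.
aps = {
    "cafe":    ((0, 0),   False),
    "home_1":  ((30, 10), False),
    "home_2":  ((60, 20), True),   # strongly protected: skipped, not cracked
    "airport": ((70, 20), False),  # reachable only by hopping via home_1
    "office":  ((300, 0), False),  # out of range of every other AP
}

def in_range(a, b):
    (ax, ay), (bx, by) = aps[a][0], aps[b][0]
    return math.hypot(ax - bx, ay - by) <= RANGE_M

def spread(start):
    """Breadth-first infection: each infected AP attacks its weak neighbours."""
    infected, queue = {start}, deque([start])
    while queue:
        current = queue.popleft()
        for ap, (_, protected) in aps.items():
            if ap not in infected and not protected and in_range(current, ap):
                infected.add(ap)
                queue.append(ap)
    return infected

print(sorted(spread("cafe")))  # → ['airport', 'cafe', 'home_1']
```

Note that the well-encrypted "home_2" and the isolated "office" both survive, mirroring the finding that density and weak protection, not the Internet, drive the spread.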

Professor Marshall continued: "WiFi connections are increasingly a target for computer hackers because of well-documented security vulnerabilities, which make it difficult to detect and defend against a virus.

"It was assumed, however, that it wasn't possible to develop a virus that could attack WiFi networks but we demonstrated that this is possible and that it can spread quickly. We are now able to use the data generated from this study to develop a new technique to identify when an attack is likely."


View the original article here

Monday, May 12, 2014

Strong software protection needed for mobile devices

The massive adoption of mobile computing platforms creates the urgent need for secure application execution on such platforms. Unfortunately, today's mobile platforms do not support strong security solutions equivalent to smartcards in set-top boxes or to dongles to reliably control licensing terms. Furthermore, many of these mobile devices are shared for professional and private applications, and are thus intrinsically hard to control and secure.

Michael Zunke, chief technology officer of SafeNet's Software Monetization Business Unit, states: "Security is ever more essential as an enabler for the sustainable innovation of mobile applications and services. Security solutions based on custom hardware security components like dongles and smart cards are not a natural fit for these mobile environments. The industry therefore needs a comprehensive security framework in which software protection is the key ingredient."

According to Brecht Wyseur, NAGRA's security architect, the big challenge in the coming years will be to increase the security level of software solutions to allow for both cost-effective deployment and long-term renewability, either stand-alone or in combination with a hardware root of trust.

Hence, more research is needed to come up with an approach that is strong enough to be viable for an increasing number of applications in which privacy and security are essential. The ASPIRE project will create the ASPIRE software security framework, which will develop, combine and integrate five different types of software protection techniques into one easy-to-use framework. It will deliver comprehensive, effective security metrics and a decision support system to assist the software developer.

"The integrated tool chain will allow service providers to automatically protect the assets in their mobile applications with the best local and network-based protection techniques," notes Bjorn De Sutter, coordinator of the project, adding that "ASPIRE will make mobile software more trustworthy by leveraging the available network connection and by developing a layered security approach of strong protections. We will also make it measurable by developing practical, validated attack and protection models and practical metrics."

Story Source:

The above story is based on materials provided by Ghent University. Note: Materials may be edited for content and length.


View the original article here

Sunday, May 11, 2014

Geographical passwords easier to remember

It's much easier to remember a place you have visited than a long, complicated password, which is why computer scientist Ziyad Al-Salloum of ZSS-Research in Ras Al Khaimah, UAE, is developing a system he calls geographical passwords.

Writing in a freely available "open access" research paper in the International Journal of Security and Networks, Al-Salloum emphasizes how increasingly complicated our online lives are becoming with more and more accounts requiring more and more passwords. Moreover, he adds that even strong, but conventional passwords are a security risk in the face of increasingly sophisticated "hacker" tools that can break into servers and apply brute force to reveal passwords. Indeed, over the last few years numerous major corporations and organizations -- LinkedIn, Sony, the US government, Evernote, Twitter, Yahoo and many others -- have had their systems compromised to different degrees and overall millions of usernames and associated passwords have been harvested and even leaked online.

Al-Salloum has devised geographical passwords as a simple yet practical approach to access credentials that could provide secure access to different entities and at the same time mitigate many of the vulnerabilities associated with current password-based schemes. The new "geo" approach exploits our remarkable ability to recall with relative ease a favorite or visited place and to use that place's specific location as the access credentials. The prototype system developed at ZSS-Research has proven capable of protecting a system against known password threats. "Proposing an effective replacement of conventional passwords could reduce 76% of data breaches, based on an analysis of more than 47,000 reported security incidents," Al-Salloum reports.

The geographical password system uses the geographical information derived from a specific memorable location around which the user has drawn a boundary: the longitude, latitude, altitude, area of the boundary, its perimeter, sides, angles, radius and other features form the geographical password. For instance, the user might draw a six-sided polygon around a geographical feature such as the Eiffel Tower, Uluru (also known as Ayers Rock), a particular promontory on the Grand Canyon, a local church, a particular tree in the woodland where they walk their dog…or any other geographical feature. Once created, the password is "salted" by adding a string of hidden random characters that are user-specific, and the geographical password and the salt are "hashed" together. Thus, even if two users pick the same place as their geographical password, the credentials stored behind the scenes are unique to each of them.

If the system instead disallowed two users from picking the same location, rejected registration attempts would reveal which locations are already in use, making it much easier for adversaries to guess passwords.
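The salting and hashing step can be sketched as follows. The feature encoding (a list of polygon vertices) and the choice of SHA-256 are assumptions for illustration, not the paper's exact construction:

```python
import hashlib
import json
import secrets

def make_geo_password(boundary_points, salt=None):
    """boundary_points: (longitude, latitude) vertices of the drawn polygon."""
    salt = salt or secrets.token_hex(16)  # user-specific hidden random string
    features = json.dumps(boundary_points, sort_keys=True)
    digest = hashlib.sha256((features + salt).encode()).hexdigest()
    return salt, digest

# Two users pick the SAME six-sided polygon around the Eiffel Tower...
eiffel = [(2.2944, 48.8582), (2.2950, 48.8585), (2.2956, 48.8582),
          (2.2956, 48.8576), (2.2950, 48.8573), (2.2944, 48.8576)]

salt_a, hash_a = make_geo_password(eiffel)
salt_b, hash_b = make_geo_password(eiffel)

# ...but their stored credentials differ, because each salt is unique.
print(hash_a != hash_b)  # → True
```

A login check simply re-hashes the submitted boundary with the stored salt and compares digests, so the raw location never needs to be stored.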

The entropy of a geographical password, and with it the difficulty of guessing it, would increase significantly if the password comprised two or more pinpointed locations. Al-Salloum explains that a whole-earth map might have 360 billion tiles at zoom level 20, which offers an essentially limitless number of practically unguessable geographical passwords.
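A quick back-of-the-envelope check of those figures shows why adding a second location helps so much:

```python
import math

tiles = 360_000_000_000  # ~360 billion tiles at zoom level 20

one_location_bits = math.log2(tiles)
two_location_pairs = math.comb(tiles, 2)  # unordered pair of distinct tiles
two_location_bits = math.log2(two_location_pairs)

print(f"{one_location_bits:.1f} bits")  # ≈ 38.4 bits for a single tile
print(f"{two_location_bits:.1f} bits")  # ≈ 75.8 bits for a pair of tiles
```

A single tile already beats a random 8-character lowercase password (about 37.6 bits); a pair of tiles roughly doubles the exponent, putting brute force far out of reach.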


View the original article here

Saturday, May 10, 2014

Quantum cryptography for mobile phones

An ultra-high security scheme that could one day get quantum cryptography using Quantum Key Distribution into mobile devices has been developed and demonstrated by researchers from the University of Bristol's Centre for Quantum Photonics (CQP) in collaboration with Nokia.

Secure mobile communications underpin our society and through mobile phones, tablets and laptops we have become online consumers. The security of mobile transactions is obscure to most people but is absolutely essential if we are to stay protected from malicious online attacks, fraud and theft.

Currently available quantum cryptography technology is bulky, expensive and limited to fixed physical locations -- often server rooms in a bank. The team at Bristol has shown how it is possible to reduce these bulky and expensive resources so that a client requires only the integration of an optical chip into a mobile handset.

The scheme relies on the breakthrough protocol developed by CQP research fellow Dr Anthony Laing, and colleagues, which allows the robust exchange of quantum information through an unstable environment. The research is published in the latest issue of Physical Review Letters.

Dr Laing said: "With much attention currently focused on privacy and information security, people are looking to quantum cryptography as a solution since its security is guaranteed by the laws of physics. Our work here shows that quantum cryptography need not be limited to large corporations, but could be made available to members of the general public. The next step is to take our scheme out of the lab and deploy it in a real communications network."

The system uses photons -- single particles of light -- as the information carrier and the scheme relies on the integrated quantum circuits developed at the University of Bristol. These tiny microchips are crucial for the widespread adoption of secure quantum communications technologies and herald a new dawn for secure mobile banking, online commerce, and information exchange and could shortly lead to the production of the first 'NSA proof' mobile phone.


View the original article here

Friday, May 9, 2014

Collecting digital user data without invading privacy

The statistical evaluation of digital user data is of vital importance for analyzing trends. But it can also undermine privacy. Computer scientists from Saarbrücken have now developed a novel cryptographic method that makes it possible to collect data and protect the privacy of the user at the same time. They present their approach for the first time at the computer expo Cebit in Hannover at the Saarland University research booth.

"Many website providers are able to collect data, but only a few manage to do so without invading users' privacy," explains Aniket Kate, who leads the research group "Cryptographic Systems" at the Cluster of Excellence "Multimodal Computing and Interaction" (MMCI) in Saarbrücken. Two aspects threaten privacy during data aggregation. On the one hand, where and how is the data aggregated? For example, website owners are interested in the age and gender of their visitors. Therefore, they store data files (cookies) on visitors' computers that track which other websites they visit. "But this wealth of sensitive information allows them also to reconstruct detailed profiles of each individual," says Kate. On the other hand, it is important to publish aggregated data in a privacy-preserving way. "Researchers have already demonstrated that precise information about the habits of citizens can be reconstructed from the electricity consumption information collected by so-called smart meters," explains Kate.

In cooperation with his colleagues Fabienne Eigner and Matteo Maffei from the Center for IT-Security, Privacy and Accountability (CISPA) and Francesca Pampaloni from the Italian IMT Institute for Advanced Studies Lucca, Kate developed a software system called "Privada." It is not only able to resolve the dilemma between the desire for information and the protection of data, but it can also be easily applied in different domains. "For example, with Privada website owners are still able to observe that their websites are mainly visited by middle-aged women, but nothing more," Kate explains.

To achieve this, users split up the requested information and send parts of it to previously defined servers performing multi-party computation: each server evaluates its data without being aware of the data of other parties. So together they compute a secret, but none of them is able to decode it alone. Moreover, each party adds a value drawn from a probability distribution to make the data slightly imprecise. The perturbed partial results are then assembled into the actual analysis. The perturbation ensures that the identity of each individual is protected, while trends remain significant in the aggregated statistics about user data.

Privacy is guaranteed even if all but one of the servers collude. Hence, according to the researchers, it is even conceivable that companies could provide such servers. Having only the servers, and not the users, perturb the data with a certain amount of noise has two advantages: first, little computational power is needed on the user's side, so even a mobile phone could send its partial results to a server; second, only a minimal total amount of noise is attached to the aggregated data, so the resulting statistics about user data are as accurate as possible.
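The split-and-perturb idea can be sketched in a few lines. The primitives below are assumptions for illustration (additive secret sharing modulo a prime, and a crude ±1 server-side noise term standing in for a proper differentially private distribution); Privada's exact construction may differ:

```python
import random

random.seed(7)  # deterministic toy run
PRIME = 2**31 - 1
NUM_SERVERS = 3

def share(value):
    """Split one user's value into NUM_SERVERS additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(NUM_SERVERS - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares  # any NUM_SERVERS-1 shares together still reveal nothing

ages = [34, 45, 29, 51, 38]  # private inputs of five users; true sum is 197

# Each server receives one share per user and sums them, learning nothing.
server_totals = [0] * NUM_SERVERS
for age in ages:
    for i, s in enumerate(share(age)):
        server_totals[i] = (server_totals[i] + s) % PRIME

# Each server perturbs its partial result slightly before publishing it.
noisy_totals = [(t + random.choice([-1, 0, 1])) % PRIME for t in server_totals]

aggregate = sum(noisy_totals) % PRIME
print(aggregate, "vs exact", sum(ages))  # off by at most 3 in this toy run
```

Because the noise is added only once per server rather than once per user, the published aggregate stays close to the true total even as the number of users grows.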

The computer scientists from Saarbrücken have already implemented their concept. "The computation is fast; the servers just need a few seconds," says Fabienne Eigner, part of the research group "Secure and Privacy-preserving Systems" at Saarland University, who also worked on the software system. "The architecture is constructed in such a way that it would not make any difference if someone were to analyze the data of a thousand or a million people," explains Eigner.


View the original article here

Thursday, May 8, 2014

More secure communications thanks to quantum physics

One of the recent revelations by Edward Snowden is that the U.S. National Security Agency is currently developing a quantum computer. Physicists aren't surprised by this news; such a computer could crack the encryption that is commonly used today in no time and would therefore be highly attractive for the NSA.

Professor Thomas Walther of the Institute of Applied Physics at the Technical University of Darmstadt is convinced that "Sooner or later, the quantum computer will arrive." Yet the quantum physicist is not worried. After all, he knows of an antidote: so-called quantum cryptography. This also uses the bizarre rules of quantum physics, but not to decrypt messages at record pace. Quite the opposite -- to encrypt them in a way that cannot be cracked even by a quantum computer. To do this, a "key" that depends on the laws of quantum mechanics has to be exchanged between the communication partners; this then serves to encrypt the message. Physicists throughout the world are perfecting quantum cryptography to make it suitable for particularly security-sensitive applications, such as banking transactions or tap-proof communications. Walther's Ph.D. student Sabine Euler is one of them.

As early as the 1980s, physicists Charles Bennett and Gilles Brassard thought about how quantum physics could help transfer keys while avoiding eavesdropping. Something similar to Morse code is used, consisting of a sequence of light signals from individual light particles (photons). The information is carried in the different polarizations of successive photons. Eavesdropping is impossible due to the quantum nature of photons: any eavesdropper will inevitably be discovered, because eavesdropping requires measuring the photons, and those measurements will always be noticed.
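The bookkeeping of the Bennett-Brassard scheme can be illustrated with a toy single-run simulation, in which classical random bits stand in for polarized photons (so this shows only the sifting procedure, not actual quantum behaviour):

```python
import random

random.seed(1)
N = 32

# Sender: random bits, each encoded in a randomly chosen basis (+ or x).
bits  = [random.randint(0, 1) for _ in range(N)]
bases = [random.choice("+x") for _ in range(N)]

# Receiver measures each photon in its own random basis; when the bases
# differ, the measured bit is random (the superposition collapses).
recv_bases = [random.choice("+x") for _ in range(N)]
measured = [b if sb == rb else random.randint(0, 1)
            for b, sb, rb in zip(bits, bases, recv_bases)]

# Both publicly compare bases (never the bits themselves) and keep only
# the positions where the bases happened to match.
key_sender   = [b for b, sb, rb in zip(bits, bases, recv_bases) if sb == rb]
key_receiver = [m for m, sb, rb in zip(measured, bases, recv_bases) if sb == rb]

print(key_sender == key_receiver)           # → True: the sifted keys agree
print(len(key_sender), "of", N, "bits kept")
```

An eavesdropper who measured the photons in transit would have to guess bases too, introducing detectable errors into the sifted key; comparing a sample of key bits reveals the intrusion.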

"That's the theory," says Walther. In practice, however, there are ways to listen without being noticed. This has been demonstrated by hackers who specialize in quantum cryptography, using systems already available on the market. "Commercial systems have always relinquished a little bit of security in the past," says Walther. To make the protocol of Bennett and Brassard a reality, you need, for example, light sources that can be controlled so finely that they emit single photons in succession. Usually, a laser that is attenuated so strongly that it emits single photons serves as the light source. "But sometimes two photons can come out simultaneously, which might help a potential eavesdropper to remain unnoticed," says Walther. The eavesdropper could intercept the second photon while letting the first one through.

Therefore, the team led by Sabine Euler uses a light source that transmits a signal when it sends a single photon; this signal can be used to select only the individually transmitted photons for communication. Nevertheless, there are still vulnerabilities. If the system changes the polarization of the light particles during coding, for example, the power consumption varies or the time interval of the pulses changes slightly. "An eavesdropper could tap this information and read the message without the sender and receiver noticing," explains Walther. Sabine Euler and her colleagues at the Institute of Applied Physics are trying to eliminate these vulnerabilities. "They are demonstrating a lot of creativity here," says Walther approvingly. Thanks to such research, it will be harder and harder for hackers to take advantage of vulnerabilities in quantum cryptography systems.

The TU Darmstadt quantum physicists want to make quantum cryptography not only more secure, but also more manageable. "In a network in which many users wish to communicate securely with each other, the technology must be affordable," says Walther. Therefore, his team develops its systems in such a manner that they are as simple as possible and can be miniaturized.

The research team is part of the Center for Advanced Security Research Darmstadt (CASED), in which the TU Darmstadt, the Fraunhofer Institute for Secure Information Technology and the University of Darmstadt combine their expertise in current and future IT security issues. Over 200 scientists conduct research in CASED, funded by the State Initiative for Economic and Academic Excellence (LOEWE) of the Hessian Ministry for Science and the Arts. "We also exchange information with computer scientists, which is very exciting," says Walther.

After all, the computer science experts deal with many of the same issues as Walther's quantum physicists. For example, Johannes Buchmann of the department of Computer Science at the TU Darmstadt is also working on encryption methods that theoretically cannot be cracked by a quantum computer. However, these are not based on quantum physics phenomena, but rather on math problems believed to be computationally intractable.

Therefore, it may well be that the answer to the first code-cracking quantum computer comes from Darmstadt.

Bizarre quantum physics and encryption

A quantum computer could quickly crack current encryption schemes because it can test a vast number of possibilities simultaneously, much as if you could try all possible variations of a password at once. After all, according to the quantum physics principle of superposition, atoms, electrons or photons can occupy several states simultaneously; for example, they can rotate clockwise and counterclockwise at the same time.

However, if you were to measure a property of a particle, such as its direction of rotation, the superposition is lost. This phenomenon is useful for quantum cryptography. Eavesdroppers inevitably betray themselves because their measurements of a photon change its characteristics. Moreover, quantum physics forbids them to copy a photon with all its properties. Therefore, they cannot siphon off any information and then pass unaltered photons on to the recipient of the message.

Story Source:

The above story is based on materials provided by Technische Universität Darmstadt. The original article was written by Christian J. Meier. Note: Materials may be edited for content and length.


View the original article here

Wednesday, May 7, 2014

Quantum physics secures new cryptography scheme

The way we secure digital transactions could soon change. An international team has demonstrated a form of quantum cryptography that can protect people doing business with others they may not know or trust -- a situation encountered often on the internet and in everyday life, for example at a bank's ATM.

"Having quantum cryptography to hand is a realistic prospect, I think. I expect that quantum technologies will gradually become integrated with existing devices such as smartphones, allowing us to do things like identify ourselves securely or generate encryption keys," says Stephanie Wehner, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore, and co-author on the paper.

In cryptography, the problem of providing a secure way for two mutually distrustful parties to interact is known as 'two-party secure computation'. The new work, published in Nature Communications, describes the implementation using quantum technology of an important building block for such schemes.

CQT theorists Wehner and Nelly Ng teamed up with researchers at the Institute for Quantum Computing (IQC) at the University of Waterloo, Canada, for the demonstration.

"Research partnerships such as this one between IQC and CQT are critical in moving the field forward," says Raymond Laflamme, Executive Director at the Institute for Quantum Computing. "The infrastructure that we've built here at IQC is enabling exciting progress on quantum technologies."

"CQT and IQC are two of the world's largest, leading research centres in quantum technologies. Great things can happen when we combine our powers," says Artur Ekert, Director of CQT.

The experiments performed at IQC deployed quantum-entangled photons in such a way that one party, dubbed Alice, could share information with a second party, dubbed Bob, while meeting stringent restrictions. Specifically, Alice has two sets of information. Bob requests access to one or the other, and Alice must be able to send it to him without knowing which set he's asked for. Bob must also learn nothing about the unrequested set. This is a protocol known as 1-2 random oblivious transfer (ROT).

ROT is a starting point for more complicated schemes that have applications, for example, in secure identification. "Oblivious transfer is a basic building block that you can stack together, like lego, to make something more fantastic," says Wehner.

Today, taking money out of an ATM requires that you put in a card and type in your PIN. You trust the bank's machine with your personal data. But what if you don't trust the machine? You might instead type your PIN into your trusted phone, then let your phone do secure quantum identification with the ATM (see artist's impression). Ultimately, the aim is to implement a scheme that can check if your account number and PIN matches the bank's records without either you or the bank having to disclose the login details to each other.

Unlike protocols for ROT that use only classical physics, the security of the quantum protocol cannot be broken by computational power. Even if the attacker had a quantum computer, the protocol would remain secure.

Its security depends only on Alice and Bob not being able to store much quantum information for long. This is a reasonable physical assumption, given that today's best quantum memories can store information for minutes at most. Moreover, any improvements in memory can be matched by changes in the protocol: a bigger storage device simply means more signals have to be sent in order to achieve security. (The idea of 'noisy storage' securing quantum cryptography was developed by Wehner in earlier papers.)

To start the ROT protocol, Alice creates pairs of entangled photons. She measures one of each pair and sends the other to Bob to measure. Bob chooses which photons he wants to learn about, dividing his data accordingly without revealing his picks to Alice. Both then wait for a length of time chosen such that any attempt to store quantum information about the photons is likely to fail. To complete the oblivious transfer, Alice then tells Bob which measurements she made, and they both process their data in set ways that ensure the result is correct and secure within a pre-agreed margin of error.
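The classical post-processing described above can be sketched with shared randomness standing in for the entangled photons. The hashing step below is a deliberate simplification of the real privacy amplification, and the variable names are invented; it shows only why Bob ends up able to reconstruct exactly one of Alice's two keys:

```python
import hashlib
import random

random.seed(3)
N = 64
choice = 1  # Bob wants Alice's second data set

# Entangled-pair stand-in: when measurement bases match, outcomes agree;
# otherwise Bob's outcome is an independent random bit.
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases   = [random.choice("+x") for _ in range(N)]
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# (Both now wait long enough that storing the quantum states would fail.)
# Alice announces her bases; Bob partitions positions so that matching
# bases land in the set for his chosen index.
match = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
other = [i for i in range(N) if alice_bases[i] != bob_bases[i]]
index_sets = [other, match] if choice else [match, other]

def key_from(bits, idxs):
    return hashlib.sha256(bytes(bits[i] for i in idxs)).digest()

# Bob sends both index sets; Alice cannot tell which one matched, so she
# learns nothing about his choice. She derives one key per set.
k0, k1 = (key_from(alice_bits, s) for s in index_sets)

# Bob can reproduce only the key for the matching set, where his bits agree.
bob_key = key_from(bob_bits, index_sets[choice])
print(bob_key == (k0, k1)[choice])      # → True: Bob recovers his chosen key
print(bob_key == (k0, k1)[1 - choice])  # almost surely False
```

Alice can then mask her two data sets with k0 and k1 respectively; Bob decrypts the one he chose, and his bits in the other set are too noisy to reconstruct its key.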

In the demonstration performed at IQC, Alice and Bob achieved a random oblivious transfer of 1,366 bits. The whole process took about three minutes.

The experiment adapted devices built to do a more standard form of quantum cryptography known as quantum key distribution (QKD), a scheme that generates random numbers for scrambling communication. Devices for QKD are already commercially available, and miniaturised versions of this experiment are in principle possible using integrated optics. In the future, people might carry hand-held quantum devices that can perform this kind of feat.

"We did the experiment with big and bulky optics taking metres of space, but you can well imagine this technology being shrunk down to sit happily next to classical processing circuits on a small little microchip. The field of integrated quantum optics has been progressing in leaps and bounds, and most of the key pieces required to implement ROT have already been successfully demonstrated in integrated setups a few millimetres in size," says Chris Erven, who performed the experiments at IQC as a PhD student under the supervision of Raymond Laflamme and Gregor Weihs. Weihs is now at the University of Innsbruck, Austria. Erven is now a postdoctoral fellow at the University of Bristol, UK.


View the original article here

Tuesday, May 6, 2014

Mobile users may not buy into instant gratification cues, gimmicky ads

Gimmicky contest ads and flashy free-prize messages may be an instant turnoff for mobile users, according to Penn State researchers.

In a study, a tempting offer of a free prize drawing for registering on a mobile website led users to distrust the site, said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory.

Sundar said that in an increasingly information-loaded world, people tend to lean on cues, such as icons and messages, for decision-making shortcuts, called heuristics. However, some cues may elicit user reaction in the opposite direction of what most marketers would anticipate.

"Even though we turn to our mobile devices for instantly gratifying our need for information, we may not be persuaded by advertising appeals for instant gratification," said Sundar. "It's a boomerang effect--marketers may think that they are activating the instant gratification heuristic when they display time-sensitive offers, but what they're actually doing is cuing red flags about the site."

Mobile users tend to be more knowledgeable about technology than the average computer user.

"It could be that an instant gratification message makes mobile users, who tend to be more tech savvy, leery about the site," said Sundar.

Even though free-prize ads are ubiquitous on the internet, marketers may want to seek other ways to reach mobile customers, according to the researchers.

The researchers, who presented their findings Apr. 28 at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, also tested a warning cue that seemed to prompt more conflicting reactions from users, said Sundar. When a security alert -- a caution icon with a warning message -- appeared, users became more worried about security, as expected. However, users were willing to reveal more information about their social media accounts after viewing the security prompt.

One possible explanation for this behavior is that the security cue makes the users distinguish more carefully between public and private information.

"People may feel that the social media information is already public information, not necessarily private information, and they are not as concerned about revealing social media information," said Sundar, who worked with Bo Zhang, Mu Wu, Hyunjin Kang and Eun Go, all doctoral students in mass communications. "The 'privacy paradox' of giving away information when we are most concerned about its safety may not be all that paradoxical if you consider that the information we give away is not quite private."

The researchers recruited 220 participants to test four different mobile sites. The participants were first asked to navigate to a mobile site. One site included a caution symbol and a security warning that the site was insecure, and another site contained a gift box icon with a message that the user could win a free prize for registering. A third site showed both a warning and an instant gratification message, and a fourth site, which featured neither cue, served as the control in the study. Except for these cues, all other content in the four sites was identical.

Participants could choose how much or how little personal, professional, financial or social media information they provided in the registration form, which served as a measure of their information disclosure behaviors. After registering, they filled out an online questionnaire about their impressions of the mobile website.


View the original article here

Monday, May 5, 2014

Software analyzes apps for malicious behavior

Apps on web-enabled mobile devices can be used to spy on their users. Computer scientists at the Center for IT-Security, Privacy and Accountability (CISPA) developed software that shows whether an app has accessed private data. To accomplish this, the program examines the "bytecode" of the app in question. The researchers show their program at the upcoming computer expo Cebit in Hannover.

Last year at the end of July the Russian software company "Doctor Web" detected several malicious apps in the app store "Google Play." Downloaded on a smartphone, the malware installed -- without the permission of the user -- additional programs which sent expensive text messages to premium services. Although Doctor Web, according to its own statement, informed Google immediately, the malicious apps were still available for download for several days. Doctor Web estimates that in this way up to 25,000 smartphones were used fraudulently.

Computer scientists from Saarland University in Germany have now developed software that can discover such malicious apps while they are still in the app store. The software detects pieces of code where the app accesses sensitive data and where data is sent from the mobile device. If the software detects a connection between such a "source" and such a "sink," it reports it as suspect behavior. To give an example of such a malicious source-sink combination, Erik Derr explains: "Your address book is read; hundreds of instructions later and without your permission an SMS is sent or a website is visited." Derr is a PhD candidate at the Graduate School of Computer Science and does research at the Center for IT-Security, Privacy and Accountability (CISPA), only a few yards away.

To identify a functional relation between source and sink, the computer scientists from Saarbrücken use new methods of information flow analysis. As input, they provide the analysis with suspicious combinations of accesses to the application programming interface. Because the software needs a lot of computational power and storage, it runs on a separate server. "So far we have tested up to 3,000 apps with it. The software analyzes them fast enough that the approach can also be used in practice," Derr says.
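The source-to-sink idea can be caricatured with a miniature taint tracker. The opcode format, register names and API lists below are invented stand-ins for real Dalvik bytecode, not the CISPA tool itself; the point is that taint propagates through intermediate values until it reaches a transmitting call:

```python
SOURCES = {"readContacts", "getDeviceId"}
SINKS = {"sendTextMessage", "openUrl"}

# (opcode, target_register, operands) triples standing in for real bytecode.
program = [
    ("call",   "r1", ["readContacts"]),           # source: address book read
    ("move",   "r2", ["r1"]),
    ("concat", "r3", ["r2", "r4"]),               # ...many instructions later
    ("call",   "r0", ["sendTextMessage", "r3"]),  # sink: an SMS is sent
]

def find_leaks(code):
    """Report every sink call that receives data derived from a source."""
    tainted, leaks = set(), []
    for op, target, operands in code:
        if op == "call" and operands[0] in SOURCES:
            tainted.add(target)
        elif op == "call" and operands[0] in SINKS:
            if any(arg in tainted for arg in operands[1:]):
                leaks.append(operands[0])
        elif any(arg in tainted for arg in operands):
            tainted.add(target)  # taint propagates through data flow
    return leaks

print(find_leaks(program))  # → ['sendTextMessage']
```

A real analysis must additionally handle branches, method calls, aliasing and reflection, which is one reason the full tool needs a dedicated server.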


View the original article here