Norway officially accuses China of stealing military secrets

Threat actors in China have stolen confidential information from Norwegian companies that is now being used in Chinese military technology, says Lieutenant General Morten Haga Lunde, head of Norwegian intelligence.

General Lt Morten Haga Lunde, head of the Norwegian intelligence

General Lt Morten Haga Lunde, head of the Norwegian intelligence agency E-tjenesten, has gone on the record to accuse China of involvement in cyber-espionage activities in the country, stating that threat actors in China have stolen confidential information from Norwegian companies which is now being used in Chinese military technology. The companies hit and the technology stolen were not revealed. This is believed to be the first time this NATO government has unequivocally accused China – though with attribution in hacking being so difficult, "threat actors in China" could be taken to mean non-government Chinese hackers working with government sanction.

Norwegian Broadcasting (NRK) reported Lunde, presenting the Norwegian defence department's annual threat assessment, Fokus 2016, as saying that both Russian and Chinese intelligence services posed digital threats to Norway. He also noted that the threat from these nations applies not only to Norwegian companies, but is a global challenge.

Robert Aranjelovic, director of security strategy for EMEA at Blue Coat Systems, said that while some may consider Norway an unlikely target due to its distance from China and its relatively small size, its strategic location and role as a leading oil producer make it a valuable intelligence target ahead of the anticipated resource boom expected to follow the melting of the Arctic ice shelf. Norway also plays a disproportionately large role in the global defence industry, with extremely valuable weapons-related intellectual property, and, primarily, it is a NATO country. The general secretary of NATO, Jens Stoltenberg, is also Norwegian.

He added, “What’s important here is that it’s not just the boogie man out there; this is an official government statement saying that this is a real threat.”

Snorre Fagerland, senior principal security researcher at Blue Coat’s Oslo-based research team, backed up the government’s assertions and explained that his team has observed activities of threat actors with a profile consistent with those cited by Lunde, supported by what he described as overwhelming circumstantial evidence: time zones, geolocated IP addresses, source code, the language used (including on the machines employed), and the type of malware – including new malware reusing elements of past attacks – all point to China.

He told SC: “Threat actors presumed to be coming out of China have been really active for quite a long time, but we are still seeing significant threat activity from these actors. We have seen high threat activity globally against various countries. Just today I have been looking at activity against Japan. Their methodology seems to match what you can expect from presumed Chinese threat actors. It’s really, really hard to pinpoint any activity in this space as coming from government, and particularly in China because there is an underground which sympathises with government policy and so you get a grey area which might be government driven or sponsored, but you don’t know to what extent.

“Certainly the targets here would be of most interest to government associated interests, but it could also be a case of privateers trying to get hold of information that they hope to sell.”

Aranjelovic added that what could also be noted with state-sanctioned actors was, “some of the sophistication of some of the attacks, for instance developing a really sophisticated piece of malware, the amount of time and resource that’s gone into it. They indicate it’s probably not some kid in a basement doing this.”

Fagerland continued, “We are developing profiles of specific groups that we see originating in different areas. So we are tracking groups – and in China there is a multitude.  Some are probably related to semi-commercial threat actors, security companies, some related to private underground hackers and some are unknown, certainly capable, but we don’t have enough information to pin them to any specific organisation.”

Aranjelovic adds: “Part of the game being played is the obfuscation of attribution. To make it very shady where the attack came from geographically, but more importantly, any involvement from the state itself – there is even more effort and a lot of intermediaries to mask all of that.”

According to Fagerland: “It’s usually easier to follow the attack down to an individual than it is to an organisation. Chinese hackers have traditionally been negligent or indifferent to operational security. They can hack and then blog about it – more prevalent five years back. Now they are getting better, but they still make mistakes, like when they were more careless, and some of the attribution comes from this. You can go back and look at their history, the technology, what they have done before, and even though they are quite good now, they will often do things they did back then. Language, not only in the source code, the machines that host the malware, the code itself, where did it come from. Some of the code has been adapted from earlier known stuff, and Chinese tend to favour their own stuff and typically don’t go out and fetch US open source, they get Chinese open source – probably because of language. No single factor is being considered; it’s the big picture, from which we create a confidence score – but rarely is it 100 percent confidence.”

Regarding defence against such attacks, Aranjelovic advises organisations to improve their incident response capabilities with the help of advanced threat protection and forensics technologies, including cooperation and information sharing with others on threat intelligence. Fagerland noted that, as a nation-state, “you can sit on the nodes where the traffic flows through, and monitor or sometimes inject things into that traffic with a great deal of access. When it comes to other nations you don’t have that kind of access, so in those cases spear-phishing attacks tend to be quite efficient.”

Aranjelovic reiterated that the older barrier approach had been superseded and that capable actors at the top end were unlikely to be kept out. A dedicated state player with a vested interest in infiltrating your organisation is likely to get in. Therefore any company holding information whose loss would hurt it needs to protect against that loss. So it’s important to quickly identify when there has been a breach and establish what data has been compromised, who did it and how they got in, so you can close off those avenues.

Fagerland adds, “It’s not just what you own, but who you know. If you have powerful companies as customers or clients then you also become a target, to be used as a stepping stone to breach them. Lawyers’ offices are prime targets because they serve many of these powerful companies, and when you get a PDF from your lawyer you tend to click on it.” (Spoofed, targeted phishing emails with malware are widely used by the presumed Chinese actors.)

An upcoming Blue Coat Research report provides detailed observations of a group – believed to be situated in the region – which has been engaging in an extensive campaign of cyberattacks against a wide range of Japanese companies and organisations since mid-2012. The report will provide detailed examples of some of the techniques used. It also shows consistency with what has been happening in Europe.

Researchers confirm cases of ransomware encryption jumping devices via cloud apps

Netskope’s report explained how cloud apps give ransomware a means to spread its encryption to secondary victims without needing to be downloaded again.

The rapidly expanding use of cloud applications is not only spreading malware faster than ever, but also propagating the effects of ransomware encryption from one user to another — even when the malware itself is not actually downloaded by the secondary victim, according to a new report.

The February 2016 Worldwide Cloud Report from cloud access security broker Netskope noted a “handful” of instances in which ransomware encrypted a user device’s files as well as copies of those files saved to the sync folder of a popular cloud storage application. Subsequently, secondary users who also automatically synced to that very same folder had their device’s files encrypted as well.

In an interview, Netskope Chief Marketing Officer Jamie Barnett confirmed this was the first time her company had detected this encryption phenomenon in the wild. Such scenarios have until now been largely hypothetical, but Barnett is not shocked to see a real-life case. “It was a blinding flash of the obvious for us,” she said.

It was not reported how affected companies handled the cloud-based spread of encrypted files. However, Barnett noted that in its own controlled recreation of the ransomware attack, Netskope determined that once the malware was quarantined and killed, the remediation spread across the cloud as well.

In an analysis of its own client base comprising hundreds of companies (most mid-to-large-size), Netskope determined that between Oct. 1 and Dec. 31, 2015, 4.1 percent of businesses used at least one IT department-approved cloud app that had malware embedded within it.

“While this may not seem like a large number,” the report said, “consider the fact that sanctioned apps represent less than five percent of an enterprise’s total cloud app footprint.” In other words, the cloud’s “fan-out” effect of spreading malware is exacerbated further by employees’ use of cloud-based business apps that IT departments did not officially vet and approve.

Indeed, Netskope’s latest research shows that globally, enterprises have on average 917 business-related cloud apps in use, the vast majority of which are unsanctioned. This is the highest number ever observed by the company and reflects a 21 percent jump over the previous quarter. Barnett attributed many of these unsanctioned apps to the proliferation of individual apps designed to streamline and simplify the tasks of corporate departments such as HR, finance and marketing.

The report also addressed efforts on the part of companies that must comply with the European Union’s General Data Protection Regulation (GDPR). Netskope warned it would be an “uphill battle” for the companies it researched, with only about 40 percent of their cloud apps ensuring that users’ data will not be shared with third parties.

Lines drawn in iPhone backdoor case; Apple gets backup

The lines are being drawn in the fight over whether a court order should compel Apple to help the FBI unlock the iPhone of a San Bernardino terrorist who killed 14 people. And, while court proceedings haven’t moved much, both sides have been busy trying the case in the court of public opinion this week.

Both sides claim their actions are in the name of security. James Comey, director of the FBI, has framed the agency’s argument in the name of national security and following every potential lead in its attempt to root out terrorists.

“Maybe the phone holds the clue to finding more terrorists. Maybe it doesn’t,” Comey wrote in a blog post. “But we can’t look the survivors in the eye, or ourselves in the mirror, if we don’t follow this lead.”

Comey also tried to assert that the FBI is not “trying to set a precedent or send any kind of message” with the court order.

“The particular legal issue is actually quite narrow. The relief we seek is limited and its value increasingly obsolete because the technology continues to evolve,” Comey wrote. “We simply want the chance, with a search warrant, to try to guess the terrorist’s passcode without the phone essentially self-destructing and without it taking a decade to guess correctly. That’s it.”

Unfortunately for Comey, that message has been undercut by a confidential National Security Council “decision memo” published this week by Bloomberg News. While the FBI has said publicly that it does not want to legislate backdoors, the memo reportedly described how government agencies could develop encryption workarounds, including estimating additional budgets and identifying laws that may need to be changed.

Separately, despite the FBI continuing to claim this case is only about the one iPhone used by Syed Farook, it has been reported that the U.S. Department of Justice has about 12 cases around the country in which it is attempting to gain access to locked iPhones in other criminal cases.

Apple digs in and gets support

Apple officially filed its motion to overturn the court order that would force it to create an iPhone backdoor to aid the FBI, and it contested the idea that this case is not just about one phone as the FBI claims. The motion stated that “the government knows those statements are not true … If this order is permitted to stand, it will only be a matter of days before some other prosecutor, in some other important case, before some other judge, seeks a similar order using this case as precedent.”

In the motion, Apple claimed this case could have far-reaching effects beyond the FBI compelling Apple to create an iPhone backdoor.

“If it succeeds here against Apple, there is no reason why the government could not deploy its new authority to compel other innocent and unrelated third-parties to do its bidding in the name of law enforcement,” Apple wrote, describing ways the government could manipulate pharmaceutical companies or journalists in similar ways. “Indeed, under the government’s formulation, any party whose assistance is deemed ‘necessary’ by the government falls within the ambit of the All Writs Act.”

Apple CEO Tim Cook followed a similar line of logic in his first public interview on the topic. Cook repeated a number of times in the interview with ABC News that in his view this case “is not about one phone; it is about the future.

“Some things are hard and some things are right and some things are both. This is one of those things,” Cook said. “Think about this — it is, in our view, the software equivalent of cancer — is this something that should be created? Technology can do many things, but there are many things technology should never be allowed to do. And, the way you don’t allow it is to not create it.”

Where the FBI framed the case as a matter of national security, Cook framed it as a matter of personal security and said someone’s smartphone often has more personal information on it than can be found in their house, including banking information and the location of their children.

Cook argued what many infosec experts have argued in the past — that “there’s no such thing as a backdoor for the good guys; the bad guys will find it too.” Cook even noted that the government isn’t necessarily the best place for a master key or iPhone backdoor to be held, as evidenced by the millions who have had their information stolen in breaches of federal agencies like OPM.

Public polls on the topic showed the country is divided, with a slim majority currently supporting the FBI. A recent poll from the Pew Research Center found that 51% of those surveyed thought Apple should help the FBI unlock the phone, 38% said Apple shouldn’t, and 11% were undecided.

Cook said he understood why people felt that way, but also said that the more people learn about why Apple has taken the stance it has, the more people are siding with Apple.

“What I’ve seen is people understand what is at stake here and increasing numbers support us,” Cook said in the interview with ABC. “I have gotten thousands of emails since this occurred and the largest single category of people are from the military. These are men and women who fight for our freedom and our liberty, and they want us to stand up and be counted on this issue for them.”

High-profile support in this case has been somewhat mixed. Michael Hayden, former director of both the NSA and CIA who has been a vocal advocate of strong encryption, said that he opposes this effort.

“Jim [Comey] would like a back door available to American law enforcement in all devices globally,” Hayden said in an interview. “And, frankly, I think on balance that actually harms American safety and security, even though it might make Jim’s job a bit easier in some specific circumstances.”

Microsoft founder and former CEO Bill Gates said this case could set a bad precedent and said there should be a balance “between safeguards against government power and security.”

Microsoft itself has come out in support of Apple. Brad Smith, Microsoft president and chief legal officer, said the company would file an amicus brief next week, which is a filing that allows parties not directly involved in the case to weigh in. Twitter has also reportedly planned a similar filing in support of Apple, as have Google and Facebook who had previously stated support.

Ultimately, this looks to be a long and drawn out fight. And Tim Cook has said that Apple is prepared to take this case all the way to the Supreme Court if need be.

Next Steps

Learn more about the FBI’s continued efforts to bypass encryption.

Learn why metadata means the FBI’s “going dark” argument doesn’t work.

Learn about an open letter urging President Obama to resist mandating backdoors.

Microsoft EMET vulnerability turns tool against itself

When Microsoft upgraded its Enhanced Mitigation Experience Toolkit, or EMET, earlier this month, the software giant touted the fact that version 5.5 added support for Windows 10, as well as various other improvements and mitigations. But this week, researchers reported a key vulnerability in earlier versions of Microsoft EMET, which allowed attackers to turn the free antimalware tool against itself.

The vulnerability gives attackers an easy way to use “a portion of code within EMET that is responsible for unloading EMET” to disable EMET entirely, according to a new report from Abdulellah Alsaheel and Raghav Pande, security researchers at FireEye Inc., based in Milpitas, Calif.

According to the FireEye report, Microsoft EMET “adds security mitigations to user-mode programs beyond those built in to the operating system.” By running “inside ‘protected’ programs as a Dynamic Link Library (DLL),” EMET makes exploitation of some memory-related exploits more difficult.

The researchers worked with Microsoft on a patch for EMET, issued earlier this month, but the vulnerability can be exploited in currently supported versions older than EMET 5.5 — 5.0, 5.1 and 5.2 — as well as in all older, unsupported versions. Microsoft described the mitigation of this vulnerability in its update as “EAF/EAF+ pseudo-mitigation performance improvements.” Export Address Table Filtering (EAF) protects against attacks that attempt to read DLL export tables.

Microsoft EMET was never intended as a complete solution to malware, but rather as a way of putting higher barriers in the way of malware writers. According to Microsoft, EMET is intended to “detect and block exploitation techniques that are commonly used to exploit memory corruption vulnerabilities.”

“EMET anticipates the most common actions and techniques adversaries might use in compromising a computer, and helps protect by diverting, terminating, blocking, and invalidating those actions and techniques,” Microsoft wrote. And EMET is able to protect against some zero-day vulnerabilities by making them harder to exploit.

However, “if an attacker can bypass EMET with significantly less work, then it defeats EMET’s purpose of increasing the cost of exploit development,” the FireEye researchers wrote. They also described a fairly simple exploit of the vulnerability that takes advantage of the portion of code in EMET, which unloads EMET after it has determined a piece of software is safe.

Vulnerabilities and exploits that either bypass or disable EMET have been seen in both research and real-world attacks across several versions, including in 2014, when Bromium found a way to bypass EMET 4.1.

Just how bad was 2015 for cybercrime?

This week saw the release of new research reports about cybercrime in the past year, and the news isn’t great.

First, according to the IBM X-Force Threat Intelligence Report 2016, attackers appear to be getting more organized and sophisticated, with the single biggest reason for the escalation being “the increasing involvement and investment of full-blown criminal organizations in digital crime, and the resulting increase in numbers of well-orchestrated operations.”

“These gangs operate much like businesses, leveraging connections, employing collaboration and deploying teams for different tasks,” according to the IBM X-Force report.

Meanwhile, the 2016 Dell Security Annual Threat Report reported four key findings from its research in 2015, starting with the continuing evolution of exploit kits “to stay one step ahead of security systems, with greater speed, heightened stealth and novel shapeshifting abilities.” No. 2: Web traffic “encryption continued to surge, leading to under-the-radar hacks affecting at least 900 million users in 2015.”

Dell also reported Android malware continued to grow throughout the year, with increases in Android ransomware attacks, improvements in detection evasion by malware writers and financial apps being a particularly appealing target for attackers.

Finally, attacks were way up in 2015, compared with 2014, according to Dell’s research. “Malware attacks nearly doubled to 8.19 billion; popular malware families continued to morph from season to season and differed across geographic regions,” the report claimed.

Also this week, Amsterdam-based security firm Gemalto released findings from its Breach Level Index. The vendor reported there were 1,673 data breaches globally, leading to 707 million data records being compromised last year.

“In 2014, consumers may have been concerned about having their credit card numbers stolen, but there are built-in protections to limit the financial risks,” said John Hart, vice president and CTO at Gemalto. “However, in 2015, criminals shifted to attacks on personal information and identity theft, which are much harder to remediate once they are stolen.”

Gemalto also reported government sector breaches, which accounted for 43% of compromised data records, were “up 476% from 2014 due to several very large data breaches in the United States and Turkey,” and those breaches comprised 16% of all data breaches.

The healthcare sector was also hit hard in 2015, with 19% of all records compromised and 23% of all data breaches. Meanwhile, the Gemalto report also claimed “the retail sector saw a major drop (93%) in the number of stolen data records, compared to the same period last year, accounting for just 6% of stolen records and 10% of the total number of breaches in 2015.”

In other news:

  • Google and a group of global mobile telecommunications operators this week jointly announced a mobile industry initiative to accelerate the availability of Rich Communications Services. RCS is a more feature-rich specification for messaging than SMS, and Google plans to add RCS messaging to its Android mobile operating system. RCS delivers features such as group chat, photo sharing and read receipts to mobile messaging applications, similar to those offered in “over the top” messaging applications available through services like Skype, Facebook and others that bypass mobile firms’ text messaging services.
  • The proposed Dell-EMC deal rolls on, as a waiting period required by U.S. antitrust legislation expired this week. Dell’s acquisition of EMC is still subject to regulatory approval in the European Union and China, as well as approval by EMC and VMware shareholders. The deal is expected to close later this year. Questions still remain over how the deal will impact the information security business of both firms. EMC purchased RSA Security in 2006, and Dell has expanded its security portfolio in recent years as well.
  • Donna Seymour, embattled CIO for the Office of Personnel Management, announced her retirement just two days before she was scheduled to testify — again — before the House Committee on Oversight and Government Reform, according to a report from USA Today. Seymour testified last summer before the House Oversight Committee hearing on the OPM breach.

Next Steps

Find out how Microsoft’s Device Guard can help protect Windows 10 from malware.

Learn how Windows 10 addresses some long-standing Windows vulnerabilities.

Read about how to watch out for vulnerabilities in Linux.

Get into RSA 2016 free, meet our experts, hear great talks!

Are you going to be in San Francisco next week, at the start of March 2016?

If so, and you aren’t already planning to drop in at the Moscone Center for this year’s RSA conference, why not get in for free, on us?

You can use the code XSSOPHOS16 (X-ray Sierra SOPHOS Sixteen) to register for a free expo pass, which will let you into the exhibition hall…

…where you can hear some great talks, right on the Sophos booth (N3101).

We take a slightly different approach from many vendors at trade shows: we don’t juggle, ride unicycles, or use actors with scripted presentations.

We send top researchers to present on our booth, and they give regular, informative and entertaining conference-quality talks that last long enough to tell you what you need to know, but not so long that we use up time you’d rather spend enjoying the rest of the show.

Better yet, you can stay after each presentation to talk face-to-face to our researchers, so it’s not like a conference session where you have to ask your questions publicly from the floor, for everyone to hear, and where you need to clear out of the room quickly to make way for the next speaker.

Talks run all day, every day, and topics include:

And best of all, there are free socks! (While supplies last. Be warned: they’re popular.)

And these aren’t just any socks, these are Ultimate Wardrobe Edition socks, as featured in the Sophos Store. (Yes, you can buy your own if you aren’t going to be at RSA.)

Sign up for your free pass now!

By the way, if you’re across the Atlantic in Europe and you aren’t going to RSA, why not check out the free Security SOS webinars that we’re running from 14-18 March 2016 for your chance to learn from our experts?

Apple will unbrick iPhones bricked by “1970” bug

Earlier this month, iPhone fans and detractors alike were abuzz on technical forums over what seemed to be a rather tricky bug in iOS.

According to the rumours, setting the date on your iPhone to 01 January 1970 would activate a time bomb in the device…

…so that when you next restarted it, it would freeze during bootup.

Even a complete firmware reinstall wouldn’t help, because the fresh copy of iOS would have the same boot-time bug, and a firmware reset doesn’t update the time.

So your phone would continue freezing every time you restarted, even after the firmware “fix.”

You can imagine what some self-styled security researchers took to doing in Apple Stores, just so they could tell their buddies:

Holy s***! It worked. iPad Pro out of commission.

Others, it seems, decided that a rumour like this was bound to be nonsense, and set out to prove it to themselves with a confidence that was as impressive as it was misguided.

All of this raises the questions, “Why 1970? And why New Year’s Day? Is it anything to do with flared trousers?”

We can’t speak to the sartorial question, but we can explain the significance of 1970.

On Unix-derived systems like iOS, the system time is stored as a non-negative integer (that’s fancy talk for “zero and up”) that is the number of seconds since 1970-01-01T00:00:00Z, which is exactly (OK, almost exactly*) when the decade of the 70s started at Greenwich in London.

In other words, midnight on New Year’s Day, 1970, is represented by the number zero in what’s known as the Unix epoch, and timestamps go up from there.

    Unix time   Conventional date and time
-------------   -----------------------------------------------
            0 = 1970-01-01T00:00:00Z, start of the epoch
           60 = One minute past midnight
       86,400 = Midnight the next day (02 Jan 1970)
  946,684,800 = Start of the current millennium (01 Jan 2000)
1,000,000,000 = 01:46:40 on Sunday, 09 Sep 2001
1,234,567,890 = 23:31:30 on Friday, 13 Feb 2009
1,414,213,562 = 05:06:02 on Saturday, 25 Oct 2014 (√2 x 1 billion)
2,147,483,647 = 03:14:07 on Tuesday, 19 Jan 2038 (doomsday time for signed 32-bit integers)
3,141,592,653 = 00:37:33 on Sunday, 21 July 2069 (π x 1 billion)
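The mapping in the table is easy to check for yourself; for example, in Python:

```python
from datetime import datetime, timezone

# Convert a Unix timestamp (seconds since 1970-01-01T00:00:00Z) to a UTC datetime.
def unix_to_utc(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(unix_to_utc(0))              # 1970-01-01 00:00:00+00:00, start of the epoch
print(unix_to_utc(86_400))         # midnight the next day, 02 Jan 1970
print(unix_to_utc(2_147_483_647))  # 2038-01-19 03:14:07+00:00, the 32-bit doomsday
```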

Bugs involving Unix times of zero have happened before, not least because zero often has a special meaning in computer programs.

As well as meaning “the start of the 1970s,” it’s often used to denote a range of different situations, such as “FALSE” (meaning that an error happened), or “not-yet-used” (meaning something that hasn’t been changed from its default), or “nothing-out-of-the-ordinary” (meaning that an error did not happen).

When one value inside a program can be interpreted in different ways, problems can easily arise, especially when you think that in C programs and Unix shell scripts, zero sometimes means “it worked just fine” and sometimes means “it broke really badly.”
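To make the ambiguity concrete, here is a toy sketch in Python standing in for the C and shell conventions described above (`risky_operation` is a hypothetical stand-in, not real code from any of these systems):

```python
from datetime import datetime, timezone

def risky_operation():
    # C/shell convention: return 0 on success, non-zero on error.
    return 0

status = risky_operation()
print(status == 0)    # True: by exit-code convention, "it worked just fine"
print(bool(status))   # False: by truthiness convention, it reads like failure

# And the very same 0, read as a timestamp, is a real moment in time:
print(datetime.fromtimestamp(0, tz=timezone.utc))  # the 1970 epoch
```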

Combine that with a timestamp, which will never be zero unless someone deliberately sets the clock way back in time – something that might never happen during testing – and you have a recipe for a bug that is a zero day, literally and figuratively.

The good news is that Apple will be fixing this problem in the next release of iOS, version 9.3, so that this bricking trick will no longer work.

Apple’s HT205248 article suggests that the bug isn’t limited to a time of exactly zero, but to any time that’s within six months of zero. Presumably, if you keep your bricked iPhone charged up for the next six months so the clock’s still ticking, it will heal itself once the device thinks it’s mid-1970 again.

Better yet, if you have a device that was already bricked by the 1970 bug, you can register for Apple’s Beta program, reflash the latest pre-release firmware version (iOS 9.3 Beta 4), and when you restart…

…your iDevice should have a new lease on life.

[*] Times denoted by Z-for-Zulu are UTC, an absolute measure of time that is not exactly the same as Greenwich Time, a mean solar time. They differ by up to one second, but that’s a story for another timezone.


Apple responds in iPhone unlocking case: US seeks “dangerous” powers

Apple filed a motion in a California court yesterday, asking the judge to throw out the order compelling Apple to assist the FBI in unlocking an encrypted iPhone, and calling the US government’s demands a “dangerous” overreach of its constitutional powers.

Apple’s motion comes after a district court judge ordered Apple last week to create special software that would allow the FBI to pull data from an iPhone belonging to Syed Rizwan Farook – one of the shooters in the December terrorist attack in San Bernardino, California.

The company had until today (26 February) to respond to the court order.

Apple has been using the court of public opinion to argue its case for more than a week – saying that unlocking the iPhone would require Apple to create a backdoor to defeat its own security.

Tim Cook, Apple’s CEO, said in a note published on the company’s website that Apple would not comply with the court’s order.

To do so would put millions of Apple customers at risk, and undermine security features designed to protect iPhone users from hackers and government surveillance, Cook said in his letter and in media interviews.

In its legal motion to vacate the judge’s order, Apple contends that the case is not merely about a single iPhone, but rather the government’s grab for power that would violate the constitution, set a dangerous precedent, and go against the will of Congress.

Apple’s motion to vacate is a 36-page document that lays out a multi-faceted argument, including an explanation of the technical issues involved, the legal precedents, and a detailed unraveling of what Apple calls the government’s flawed understanding of the law.

Ultimately, this case hinges on the court’s interpretation of a 1789 law called the All Writs Act, which gives courts the authority to issue writs (orders) “necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law.”

The All Writs Act does not give the government the authority to force Apple into creating code that does not exist in order to do the government’s bidding, the company says.

The iPhone in question in this case, an iPhone 5c running a recent version of the Apple iOS operating system, is locked with a passcode and the only person who knows the passcode – Farook – is dead.

The FBI wants Apple to create a new version of iOS that would allow it to “brute force” the passcode, using software to make millions of guesses at possible passcode combinations in a matter of seconds until finding the right combination to unlock the device.
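For a sense of scale, here is an illustrative Python sketch of the search space such a brute-force attack would walk – the FBI’s actual tooling is not public, so this only enumerates candidates:

```python
from itertools import product

# Enumerate every numeric passcode of a given length -- the space a
# brute-force attack would have to search in the worst case.
def passcode_space(length, digits="0123456789"):
    return ("".join(p) for p in product(digits, repeat=length))

four_digit = list(passcode_space(4))
print(len(four_digit))                # 10000 candidates
print(four_digit[0], four_digit[-1])  # 0000 9999

# A six-digit passcode multiplies the work a hundredfold:
print(10 ** 6)                        # 1000000 candidates
```

The numbers explain why the FBI also wants iOS’s guess-rate limits and auto-wipe-after-ten-failures feature disabled: without them, even the six-digit space falls quickly to software-driven guessing.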

From a technical perspective, Apple argued in another court case that unlocking an iPhone running recent versions of iOS (iOS 8 or higher) is “impossible,” because Apple does not store the passcode or the unique ID used to create a key to encrypt the device.

Now Apple concedes that unlocking the iPhone is possible, but to do so would require Apple to create special software to bypass the iPhone’s security, taking engineers and other Apple staff weeks to accomplish.

Creating the software would open a Pandora’s box, Apple says.

Apple would need to take exceptional measures to protect all knowledge of the backdoor from getting out and being exploited by criminals and foreign governments.

This backdoor is “too dangerous to build,” Apple says.

Creating a backdoor to the terrorist’s iPhone should not even have been necessary, Apple says.

Had the FBI consulted Apple first, the company could have provided technical assistance to retrieve a backup of all the data on Farook’s device from his iCloud account.

Instead, by resetting Farook’s iCloud password, the FBI lost the opportunity to get a backup of the data by connecting to a known Wi-Fi network.

From a legal perspective, Apple argues that Congress has passed a law – the Communications Assistance for Law Enforcement Act (CALEA) – that excuses companies like Apple from aiding the government in cases where it does not have a copy of the encryption key.

If the court follows the government’s interpretation of the All Writs Act to compel Apple to create a backdoor in this case, it would set a dangerous precedent.

Apple said that not only would that mean the government could demand assistance in thousands of cases, most not involving terrorism, it could also demand that Apple develop other kinds of software to track suspects, such as creating code to remotely turn on a device’s microphone or camera.

In the end, Apple says these issues should not be decided by a judge behind closed doors, but with a robust, public debate.

You can read Apple’s motion in full here.

Image of Apple logo courtesy of Anton Watman /

Read more
Hospitals vulnerable to cyber attacks on just about everything

They entered the hospital and moved from floor to floor, dropping malware-laced USB thumb drives where staffers might tend to pick them up.

Before they entered the facility, the security researchers at Independent Security Evaluators had disguised the drives, labeling them with the hospital’s logo.

Within 24 hours, the infection had spread as hospital employees plugged the booby-trapped drives into nursing stations, which obediently called in to request malware from the researchers’ server.

In this case, the infection was benign: an emulation of malware that can download and install itself off a USB stick, take control of the targeted system, and grant control to a remote adversary.

If it had been a malicious attack, an attacker could have used that network foothold to attack critical medicine dispensary equipment, potentially leading to a patient being given the wrong medicine.

The danger of people plugging in rigged USB sticks is nothing new. But it was only one of a dizzying array of attacks the team launched in a two-year project aimed at dissecting hospital security.

The researchers have documented their findings in a paper titled Securing Hospitals.


The team, led by healthcare head Geoff Gentry, examined 12 healthcare facilities, two healthcare data centers, a pair of live medical devices, and a couple of web apps open to remote attacks.

Safety was paramount, the team stressed: All of the attacks were carried out with the permission and supervision of authorized hospital personnel, performed on non-critical systems or on decommissioned or non-connected medical devices.

Also, in most cases, every step up to the final one – the actual manipulation of a medical device, medicine dispensary, or health record – was performed online, with that final step taken offline to ensure no accidental injury or harm to a patient.

One of the attacks, against a medical device, started with targeting an externally facing web server at one of the hospitals.

By exploiting server vulnerabilities, the researchers gained control of the web server, thereby getting a foothold into the internal network, from which they ran scans until they found vulnerable patient monitors.

Using an authentication bypass attack, they forced the monitor to emit false alarms, had it display the wrong vital signs, and disabled the monitor’s alarm altogether: tampering that could potentially lead to a patient’s death or serious injury.

The same methods could be deployed against all medical devices, the team said in the paper:

This attack would have been possible against all medical devices … likely preventing assistance and resulting in the death or serious injury [to] patients.

The attack scenario is harrowing: diligently executed, such an attack could put many human lives at stake – and extrapolating the problem to other hospitals is even more worrisome.

As far as the team knows, to date, there’s been no comprehensive attack model that shows how patients are most likely to be targeted in a cyber attack.

If you flip to page 28 of their report, you’ll see the model the researchers came up with after two years of attacks.

The so-called Patient Health Attack Model visualizes the primary attack surfaces as those that directly affect a patient’s health. For example, active medical devices that can be hacked to deliver a lethal dose of medicine, such as an insulin pump, or a heart defibrillator that could be modified or disabled so it can’t deliver electrical current to save a patient in distress.

There are far more primary attack surfaces, as Independent Security Evaluators enumerated, including:

  • Medical records. Removing somebody’s allergy to penicillin, for example, could injure them if a doctor administers the antibiotic.
  • Work orders. For example, altering an instruction to deliver morphine to Patient A instead of Patient B could have catastrophic consequences.
  • Medicine. Hospitals are vulnerable to malicious actors losing or destroying medicine, altering inventory so a healthcare worker administers the wrong medication, or sending the wrong medicine to the wrong patient.
  • Surgery. Orders are vulnerable to being altered, which could result, for example, in the wrong leg being amputated or organs being removed from the wrong patient. Surgery schedules can be altered. Medical records can be changed so that the wrong blood type is transfused into a patient, X-rays are switched, or an anesthesiologist gets the wrong weight, height or age for a patient.
  • Blood, organs and other biological material. Attack surfaces include the climate control systems necessary for storage of these crucial materials.

The list goes on: an attack could disable lighting in the surgery room. Or an attacker might set off a fire alarm or sprinklers, disrupting the calm, controlled environment necessary for optimal surgical precision. Clinicians could be misinformed by compromised monitoring devices or distracted by false alarms triggered in the building.

With regard to electronic health records (EHR), one platform proved vulnerable to a variety of cross-site scripting (XSS) attacks – attacks so well known and so common that they appear on the OWASP top 10 list of web application vulnerabilities.

The readily exploitable XSS attacks the team identified allowed for the modification of administrator settings, the addition of users, and the direct manipulation of patient records.

They also found it possible to deliver a payload that, when executed by a nonprivileged nurse or physician account, would escalate their privileges to that of an administrator account.
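To illustrate the class of bug – a generic sketch, not the vulnerable EHR platform’s actual code – a stored-XSS payload slips through when user input is echoed into a page unescaped, and is neutralized by routine output encoding:

```python
import html

# A classic stored-XSS payload: if this string is written into a page
# verbatim, the browser executes it with the viewing user's privileges.
# (escalatePrivileges is a hypothetical attacker function, for illustration.)
payload = '<script>escalatePrivileges()</script>'

unsafe = f"<div>Patient note: {payload}</div>"             # vulnerable
safe = f"<div>Patient note: {html.escape(payload)}</div>"  # rendered as text

print(safe)
# <div>Patient note: &lt;script&gt;escalatePrivileges()&lt;/script&gt;</div>
```

That one-line `html.escape` call is the whole defence the OWASP guidance asks for at output time, which is what makes findings like these “readily exploitable”: the fix is well understood and cheap.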

The researchers called the handling of funding arguably the most detrimental issue in hospital security. It’s not that the money isn’t there; it’s that cyber security isn’t taken seriously enough, they said:

The issues aren’t so much that hospitals do not have the funds, but that they are directed in a way that security is not a priority. This needs to change in order to protect patient health.

Other issues include wasting funds on low-priority security items; security understaffing and lack of training; lack of defined, implemented, and/or auditable policy; and reliance on legacy systems, among many, many other areas of concern.

From the paper:

The findings show an industry in turmoil: lack of executive support; insufficient talent; improper implementations of technology; outdated understanding of adversaries; lack of leadership, and a misguided reliance upon compliance.

[It] illustrates our greatest fear: patient health remains extremely vulnerable.

One overarching finding of our research is that the industry focuses almost exclusively on the protection of patient health records, and rarely addresses threats to or the protection of patient health from a cyber threat perspective.

Researcher Ted Harrington summed it up for The Register:

We found egregious business shortcomings in every hospital, including insufficient funding, insufficient staffing, insufficient training, lack of policy, lack of network awareness, and many more.

These vulnerabilities are a result of systemic business failures.

The paper includes advice on remediating the vulnerabilities the team uncovered.

Image of hospital equipment courtesy of Shutterstock.

Read more
Computers can tell if you’re bored

A new study from a body-language expert at Brighton and Sussex Medical School (BSMS) in the UK shows that a computer can tell if you’re bored by how much you twitch while reading something on screen, tracking your tiny, involuntary movements to…

HEY! Are you even listening?!?

Dr. Harry Witchel, Discipline Leader in Physiology, has got you pegged, oh ye of wandering attention.

According to a release the school put out on Wednesday, we send out “rapt engagement” vibes by more or less freezing solid, like a slack-jawed kid in front of a TV screen when SpongeBob SquarePants is on.



Our study showed that when someone is really highly engaged in what they’re doing, they suppress these tiny involuntary movements. It’s the same as when a small child, who is normally constantly on the go, stares gaping at cartoons on the television without moving a muscle.

The school thinks the discovery could have an impact on the development of artificial intelligence (AI).

One example: AI online tutoring programs could discern when they’re boring students silly and could adapt to a given viewer’s level of interest in an attempt to re-engage them.

Another possible application: teaching companion robots how to better gauge somebody’s state of mind.


Being able to ‘read’ a person’s interest in a computer program could bring real benefits to future digital learning, making it a much more two-way process. Further ahead it could help us create more empathetic companion robots, which may sound very ‘sci fi’ but [which] are becoming a realistic possibility within our lifetimes.

That would have come in handy for Pepper, the emotion-reading, joke-telling AI customer service robot who didn’t know enough to take cover when a drunk beat it up.

Or, come to think of it, maybe hitchBOT, the smiling, privacy-invading, hitchhiking robot might have been saved from its barbaric dismantling if it could have read micro-movements?

Then again, probably not. Smashing and kicking seem more like macro-movements.

BSMS suggests another possible use of micro-movement tracking: movie directors or game makers could use the technology to read, moment-by-moment, whether the events on the screen are interesting.

While viewers can be asked subjectively what they liked or disliked, a non-verbal technology would be able to detect emotions or mental states that people either forget or prefer not to mention.

The BSMS study included 27 participants who faced a range of 3-minute stimuli on a computer, from fascinating games to tedious readings from banking regulation.

They were given a handheld trackball to minimize instrumental movements, such as those we make when we move a mouse.

Then, their movements were quantified over the three minutes with the use of video motion tracking.
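As a minimal sketch of what “quantifying movement” might look like – our assumption, since the study’s actual motion-tracking pipeline is more sophisticated – one crude proxy is the total frame-to-frame displacement of a tracked point:

```python
import math

# Total path length of a tracked point across video frames -- one crude
# proxy for how much a viewer fidgets while watching the screen.
def total_movement(positions):
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

fidgety = [(0, 0), (3, 4), (0, 0), (3, 4)]   # constant small shifts
engaged = [(0, 0), (0, 1), (0, 1), (0, 2)]   # nearly motionless

print(total_movement(fidgety))   # 15.0
print(total_movement(engaged))   # 2.0
```

Comparing such totals between conditions is all it takes to express a result like “42% less movement during the less boring reading.”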

In two comparable reading tasks, BSMS says, the less boring reading resulted in a significant – 42% – reduction in movement.

This certainly isn’t the first computer user micro-tracking experiment.

Back in late 2013, Facebook mulled silently tracking users’ cursor movements to see which ads we like best.

Google’s yet another company interested in our micro-wiggling.

In 2014, it was experimenting with swapping text CAPTCHAs for our human quiveriness when we click a mouse.

Readers, how much do you value your micro-movement privacy? Would you kiss it goodbye if it meant no more snoring your way through online content?

Let us know below!

Image of Bored man at laptop courtesy of

Read more