Fraudsters eat for free as Deliveroo accounts hit by mystery breach

Food delivery network Deliveroo has suffered a mysterious security breach that has left dozens of UK users picking up large bills for food they never ordered.

News of the problem was revealed by the BBC’s Watchdog TV show, which said it had received “scores of complaints” of rogue transactions appearing on viewers’ accounts during the last month.

In one example, £240 ($300) was debited from a customer in Reading for food delivered 30 miles away in London. In another, Southampton University students were billed a total of £440 for food and alcohol delivered in Leicester (120 miles away) and London (60 miles away).

These were organised fraudsters with big appetites, in the latter incident taking delivery of four curries, six naan breads, a kebab, three grilled chickens, four pizzas, five cheesecakes, garlic bread and a liver-killing eight bottles of vodka.

The first these customers knew of the orders was when they received notification by email and through the Deliveroo smartphone app, by which time it was already too late to stop them.

To its credit, once informed of the fraudulent transactions, Deliveroo refunded the money promptly, although that could still have taken up to 10 days.

The unsettling question is how Deliveroo’s customers were breached in the first place.

Deliveroo has blamed the breach on cybercriminals getting hold of login details “stolen from another service unrelated to our company in a major data breach”.

This is called “credential stuffing” and involves attackers trying logins stolen from one website on lots of others to see if account holders have reused passwords across services.

So far, the company has offered no evidence to back up this claim. On the assumption that it is true, Deliveroo users in the habit of re-using passwords should change theirs immediately as a precaution.
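One practical defence against credential stuffing is to check passwords against a corpus of known-breached passwords before accepting them. The sketch below is a hypothetical, minimal version of the k-anonymity range check used by services such as Have I Been Pwned; the “breach corpus” here is a tiny local stand-in, and in a real query only the first five hex characters of the SHA-1 hash would ever leave your system.

```python
import hashlib

# Tiny stand-in for a real breach corpus (hypothetical entries).
# A production check would query a range API with the 5-char prefix only.
BREACHED = {"password", "letmein", "deliveroo123"}
BREACHED_SUFFIXES = {}
for pw in BREACHED:
    h = hashlib.sha1(pw.encode()).hexdigest().upper()
    BREACHED_SUFFIXES.setdefault(h[:5], set()).add(h[5:])

def seen_in_breach(password):
    """Return True if the password's SHA-1 hash appears in the breach corpus."""
    h = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = h[:5], h[5:]
    return suffix in BREACHED_SUFFIXES.get(prefix, set())
```

Anyone re-using a password that trips a check like this should treat it as burned and change it on every service where it appears.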

Harder to explain is the ease with which fraudsters were able to run up unusually large bills for food delivered significant distances from registered addresses.

Deliveroo says it uses “anomaly detection” to spot this sort of deviation from normal behaviour but clearly something went wrong with this or it wasn’t applied widely enough.

The criminals were also able to get the food delivered to public buildings rather than home addresses, another red flag that should have raised suspicions.
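Anomaly detection of the sort Deliveroo describes can be as simple as scoring each order against a customer’s history and holding anything that trips several red flags for manual review. The sketch below is purely illustrative; the signals and thresholds (delivery distance, spend versus average, non-residential address) are assumptions, not Deliveroo’s actual rules.

```python
def order_risk(order, profile):
    """Score an order against a customer's profile; higher = more suspicious."""
    score = 0
    # Delivery far from the registered address (distance in miles).
    if order["distance_miles"] > 20:
        score += 2
    # Bill far above the customer's typical spend.
    if order["total"] > 3 * profile["avg_total"]:
        score += 2
    # Delivery to a public building rather than a home address.
    if order["address_type"] != "residential":
        score += 1
    return score

def should_hold(order, profile, threshold=3):
    """Hold the order for a verification step if enough red flags stack up."""
    return order_risk(order, profile) >= threshold
```

A £240 order delivered 30 miles from the registered address would trip both the distance and spend signals and be held for verification.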

This was made easier by the company not asking customers to enter a Card Verification Value 2 (CVV2) code when making orders, a card security check designed to ensure that someone ordering something online has physical possession of the card used to pay for it.

The company said it has started asking customers to verify their identity when changing addresses.

Ideally, Deliveroo should give a more detailed account of what went wrong and not fall back on the “security by obscurity” approach often used by UK companies after security incidents.

If lessons can be learned then customers should be able to learn them too.

–News collected and synced by Info Security Solution Kolkata,

Read more
The malicious iPhone video with a silver lining

Anyone here old enough to remember MS-DOS?

In those days, memory protection meant putting the lid back on your computer properly, process separation meant having two computers, and the term “sneakernet” was a tautology.

Code and data were gloriously undistinguished to the point that deliberately interleaving machine instructions and data variables in your programs was perfectly normal.

Indeed, many programs started with a JMP instruction that caused the CPU to hop forwards in memory, skipping over things like error messages, menu screens and other data tables, to land in the executable part.

The unused memory starting at the end of the executable part was used for temporary storage needed while the program ran.

In fact, when your application loaded, its so-called uninitialised variables ended up “initialised”, often rather interestingly, with whatever was left over in memory from the previous program.

(Programs were always full-blown “applications” back then. The newfangled diminutive “app” didn’t exist, which is ironic when you consider that modern apps are thousands of times, sometimes even millions of times, larger than old-school applications.)

If anything went wrong with your MS-DOS program – a buffer overflow, for example, or a corrupted return pointer, or just a wrongly directed jump caused by some other sort of bug – then the results were almost always catastrophic, at least for your data.

When a crash involved a memory access that went wrong, the destination address would often be the RAM in your video card.

This gave wild results, because every even byte denoted the character to display, and every odd byte denoted the colour combination to use, turning the screen into absurdly abstract art.

And these catastrophes weren’t just occasional annoyances: a busy user might expect to reboot several times a day.

There were no arguments back then about whether you should leave desktop PCs turned on overnight.

First, computers used a lot of power in those days, so you saved serious money by turning them off; second, the chance of it running correctly through the night was pretty low; and third, you’d reboot in the morning anyway, just to ensure that you had a fresh start.

Malicious video problems

It was with all of this in my mind that I read a recent story on 9to5mac with a dramatic headline: There’s another malicious link floating around that will cause any iOS device to freeze.

Simply put, it’s a video that somehow consumes sufficiently many resources, or perhaps even triggers what might turn out to be a potentially dangerous vulnerability…

…that you end up with an entirely unresponsive device.

The video eats so much of your iPhone’s lunch, in fact, that you have to reboot by holding the power button for a few seconds to access the iOS shutdown slider so you can restart.

Things can get so bad that the power button alone isn’t enough – after all, the shutdown slider is itself a software control.

If your device is frozen so solid that you can’t even slide the shutdown button, you can do a force restart by holding the power and home buttons down at the same time for 10 seconds. (On an iPhone 7, use power and volume down.)

The bottom line

All said, this does constitute a security risk, even if only a Denial of Service (DoS) where someone crashes your phone by enticing you to a booby-trapped video.

At any rate, there will probably be some sort of security fix in a forthcoming iOS update.

However, we think that the bottom line of this story is good news…

…when you think how far we have come in the past 20 or 30 years.

We’ve evolved from the crashtastic ecosystems of MS-DOS and early Macs to a world in which a video that doesn’t play properly is considered cause for security concern, and where an unexpected reboot is rightly written up as something malicious.

If that’s not a sliver of good security news for the Black Friday weekend, we don’t know what is.


Read more
‘Compromised’ laptop implicated in US Navy breach of 130,000 records

The personal details of more than 130,000 former and currently serving sailors in the US Navy have been “accessed by unknown individuals”, the Department of the Navy said on Thursday.

Details including names and social security numbers have been compromised, the department added.

The leak happened after the laptop of a contractor working for Hewlett Packard Enterprise was “compromised”, said the department.

Little more is known about the breach, and the Navy reassured sailors that it is “in the early stages of investigating” the breach and is “working quickly to identify and take care of those affected by this breach”.

The department also said it was taking the sensible step of “reviewing credit monitoring service options for affected sailors”.

In the meantime, we’d add some further advice if you think you’re one of the sailors whose details might have been compromised:

  • Keep an eye on your bank and credit card statements for dodgy transactions.
  • Be particularly wary of emails, texts or messages on other platforms asking you to click a link and log in to “confirm your account details” or hand over other personal information.
  • Do take up the Navy Department’s offer of credit monitoring services, which will keep an eye on anyone trying to open accounts using your name or social security number.

It seems at the moment that “there is no evidence to suggest misuse of the information that was compromised”, but there’s no harm in following our advice.

Vice-admiral Robert Burke, chief of naval personnel, moved to reassure sailors, saying: “The Navy takes this incident extremely seriously – this is a matter of trust for our sailors.”


Read more
Facebook ‘quietly developing censorship tool’ for China

You can just imagine the seething frustration at Facebook’s commanding heights: what will it take for us to get back into China, with its 721,000,000+ internet users?

Mark Zuckerberg learning Mandarin, visiting the Great Wall, ostentatiously leaving the Chinese president’s book on governance in sight during a visit by the nation’s internet tsar? None of it’s worked! Time to play our final hand – censorship!

That’s one take on the events that might have led to today’s New York Times expose: it seems Facebook has tasked its development teams with “quietly develop[ing] software to suppress posts from appearing in people’s news feeds in specific geographic areas”.

As “current and former Facebook employees” told the Times, Facebook wouldn’t do the suppression themselves, nor need to. Rather:

It would offer the software to enable a third party – in this case, most likely a partner Chinese company – to monitor popular stories and topics that bubble up as users share them across the social network… Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.

This is a step beyond the censorship Facebook has already agreed to perform on behalf of governments such as Turkey, Russia and Pakistan. In those cases, Facebook agreed to remove posts that had already “gone live”. If this software were in use, offending posts could be halted before they ever appeared in a local user’s news feed.

As the Times notes, if Facebook ever did return to China, many observers expect it to happen alongside a local partner who could manage the sensitive local politics – especially the censorship rules that have made it impossible for Google and Twitter to operate there.

Facebook’s putative censorship software might make it easier to gain China’s approval for such a partnership. It would certainly fit with Mark Zuckerberg’s earlier statements to employees that:

It’s better for Facebook to be a part of enabling conversation, even if it’s not yet the full conversation.

And Facebook wouldn’t be alone among western companies in agreeing to Chinese censorship. According to Fortune, LinkedIn and Microsoft’s Bing search engine already have.

However, as The Verge reported, once such a tool were introduced:

Facebook would likely face pressure from other autocratic regimes to enable its use in their own countries. It is not impossible that the United States would be one of those countries.

In his Times report, Mike Isaac states that some Facebook employees left the company to protest this censorship project. After posting his story, he tweeted that “it was post-election result that scared some sources into discussing this tool, for fear of a hostile US admin accessing it”.

What does Facebook say?

We have long said that we are interested in China, and are spending time understanding and learning more about the country. However, we have not made any decision on our approach to China. Our focus right now is on helping Chinese businesses and developers expand to new markets outside China by using our ad platform.

While Facebook continues to play its cards close to its chest, it’s looking increasingly like the cat’s out of the bag. If so, it might not be long before other governments start demanding Facebook’s new toy. It could happen before you can say “fake news”!


Read more
Google secures five-year access to health data of 1.6m people

Artificial intelligence firm DeepMind and a London hospital trust, the Royal Free London NHS Foundation Trust, have signed a five-year deal to develop a clinical app called Streams. The deal extends the already controversial partnership between the London-based startup, which was bought by Google in 2014, and the healthcare trust.

The Streams app is for healthcare professionals. According to the Financial Times, it will trigger mobile alerts when a patient’s vital signs or blood results become abnormal so that a doctor can intervene quickly and prevent the problem escalating.

The trust said that Streams has, thus far, been using algorithms to detect acute kidney injury, and added that it would

alert doctors to [a] patient in need “within seconds”, rather than hours [and] free up doctors from paperwork, creating more than half a million hours of extra direct care

The aim is to use Streams as a diagnostic support tool for a far wider range of illness, including sepsis and organ failure.

OK, so that’s the what. Now for the controversial bit: the how…

The app quite obviously relies on access to patient data.

A story in New Scientist earlier this year raised concerns that the partnership had given DeepMind access to “a wide range of healthcare data on the 1.6 million patients … from the last five years”, and noted that the data will be stored in the UK by a third party and that DeepMind is obliged to delete its copy of the data when the agreement expires.

In a follow-up story published this week, New Scientist revealed that the UK’s Information Commissioner’s Office began investigating the data-sharing agreement following its revelations. A statement from the office says that it is “working to ensure that the project complies with the Data Protection Act”.

But is that enough?

Privacy campaigners have raised concerns that medical records are being collected on a massive scale without the explicit consent of patients. Phil Booth, coordinator of medConfidential, queried the value of the app:

Our concern is that Google gets data on every patient who has attended the hospital in the last five years and they’re getting a monthly report of data … [but] because the patient history is up to a month old, [it] makes the entire process unreliable and makes the fog of unhelpful data potentially even worse.

Academics have also raised concerns. Speaking to the Financial Times, Julia Powles, a lawyer who specializes in technology law and policy from the University of Cambridge, highlighted that:

We do not know – and have no power to find out – what Google and DeepMind are really doing with NHS patient data, nor the extent of Royal Free’s meaningful control over what DeepMind is doing.

Give Google a chance?

When Natasha Loder put that question to Powles on Twitter, Powles’s answer got to the heart of it: the issue is not what Google is trying to achieve, but the fact that it is Google doing it.

Doing it right

I have no issues with technologies being used to improve patient outcomes … provided the right people are doing it, for the right reasons and that it’s done in the right way.

Here we have Google creating an app that really needs real-time data to be useful. Surely it could put patients at risk if the data are not up to the minute when you’re talking about things like organ failure and sepsis. Won’t the doctor need to know what’s been happening with the patient in the last weeks, days, hours and even minutes?

On my second point, Google is not doing the work for profit. Mustafa Suleyman, head of DeepMind Health and DeepMind’s co-founder, told the FT:

We get a modest service fee to supply the software. Ultimately, we could get reimbursed [by the NHS] for improved outcomes.

So you have to ask why. For access to data? To gain a foothold in health analytics? To test possibilities? To build a proof of concept it can sell in the future?

I suspect all of those are near the truth.

Does Google really need to be given this data at all? Wouldn’t it have been a lot safer if the NHS Trust had trialled the app on Google’s behalf, keeping the data safely in-house? After all, if you wanted to test-drive a piece of technology, wouldn’t you ask for the technology to test rather than hand over your data?

Or is this something that can only be accessed as a service, in other words, where data need to sit on the service provider’s machines? If that’s the case, we need to seriously look at how organizations access cloud-based third-party services that require a local copy of data. If we don’t, we risk finding copies of patient, student, citizen and other very personal data here, there and everywhere in the future.


Read more
Data breach hits MSG: Rangers, Knicks, Rockettes fans hacked

The venues host hundreds of thousands of people annually.

Madison Square Garden Company (MSG) reported payment card information was stolen from potentially hundreds of thousands of customers who attended shows or sporting events at the organization’s five major venues during the last year.

MSG reported it had been told by several financial institutions that a pattern of fraudulent activity had been spotted taking place in its point of sale (POS) system, and a subsequent investigation by MSG and an outside security firm discovered unauthorized personnel had been accessing POS data from Nov. 9, 2015 to Oct. 24, 2016. The food and merchandise retail POS systems affected were located at Madison Square Garden, the Theater at Madison Square Garden, Radio City Music Hall, Beacon Theater, and Chicago Theater. Information involved included credit card numbers, cardholder names, expiration dates and internal verification codes, but MSG said not all cards used during this period were affected.

“Findings from the investigation show external unauthorized access to MSG’s payment processing system and the installation of a program that looked for payment card data as that data was being routed through the system for authorization,” MSG said in a written statement.
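RAM scrapers of this kind typically sweep memory for card-number-length digit runs and keep only those that pass the Luhn checksum, the same check defenders can use when scanning systems for exposed card data. A minimal sketch of the check (the sample numbers in the test are standard test numbers, not real cards):

```python
def luhn_valid(number):
    """Luhn checksum: double every second digit from the right (subtracting 9
    if the result exceeds 9), sum everything, and check divisibility by 10."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```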

This attack is reminiscent of the data breaches that hit retailers several years ago, a technique that has recently fallen out of favor as cybercriminals switched over to ransomware and to targeting other types of large organizations.

“Madison Square Garden’s breach may be common in that we’ve seen it before, but it’s not common in that we haven’t seen much of it lately. In fact this breach bears a strong resemblance to the high-profile POS RAM scraping hacks we saw so much of in 2014 (Target, Home Depot, Neiman Marcus),” Casey Ellis, CEO and founder of Bugcrowd, told SC Media in an email.

MSG said the malware has been removed from its system and that the company continues to work with an outside security firm to mitigate the damage.

The venues impacted host the NHL Rangers, NBA Knicks, Radio City Music Hall Rockettes and top-flight musical acts that attract hundreds of thousands of visitors per year. MSG has not released any figures on how many people were impacted nor what type of malware was involved.

“It’s critical to properly segment these networks, actively monitor them for breach indicators, and always assume that these systems have been breached,” Richard Henderson, global security strategist at Absolute Software, said to SC Media in an email.


Read more
Stop wasting time making the wrong passwords stronger

Most of the energy spent on making passwords stronger is wasted, according to researchers at Microsoft Research, and has no effect on security.

The reason, say Microsoft’s researchers in a recent paper, is because there are two vast “don’t care” regions where energy spent on strengthening passwords is simply wasted.

The chasm

The first “don’t care” region is an online-offline chasm. The chasm represents the gap between the number of guesses a password might have to withstand in an online attack and how many it might face in an offline attack (you can read more about it in my article Do we really need strong passwords?).

To withstand a determined online attack using a website’s login screen your password might have to withstand 1 million guesses. To survive an offline attack by an attacker with specialist hardware, direct access to the password database and plenty of time the figure is eight orders of magnitude greater: 100 trillion guesses.

If passwords sit between these two thresholds then they’re more than good enough to withstand an online attack, but not good enough to handle an offline attack.

Any effort to strengthen passwords in the chasm that falls short of pushing them out of it is therefore wasted.
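To see where a password lands relative to the chasm, compare its guess space (alphabet size raised to the power of its length) against the two thresholds. A back-of-the-envelope sketch, assuming a roughly 72-symbol keyboard alphabet for “strong” passwords:

```python
ONLINE_THRESHOLD = 10**6     # guesses a determined online attack might make
OFFLINE_THRESHOLD = 10**14   # offline attack: 100 trillion guesses

def guess_space(length, alphabet_size):
    """Worst-case guesses needed to exhaust random passwords of this shape."""
    return alphabet_size ** length

# 8 random lowercase letters: about 2 x 10**11 guesses - inside the chasm,
# strong enough online but doomed offline.
weak = guess_space(8, 26)

# 14 random characters from ~72 symbols: about 10**26 guesses - comfortably
# clears the offline threshold.
strong = guess_space(14, 72)
```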

The saturation threshold

The second “don’t care” region is the threshold at which an attacker stops trying to crack passwords because they’ve already thoroughly compromised the system they’re attacking.

…for an enterprise network a compromised account almost certainly has snowballing effects … The first credential gives initial access to the network, the second, third and fourth solidify the beachhead, but the benefit brought by each additional credential decreases steadily.

So an attacker doesn’t need to crack all of a system’s passwords: in fact they can probably leave most of them untouched.

The point of saturation varies from one network or system to another but the researchers set themselves an upper bound for the saturation point at just 10% of passwords, with the caveat that “saturation likely occurs at much lower values”.

Efforts to strengthen the passwords above the saturation point yield little if any additional security.

Focusing where it matters

On any given system a huge number of passwords are likely to sit in one of the two “don’t care” regions.

If you’re an end user you’ll never know how your passwords are stored or which side of the saturation point they sit, so you should shoot for the strongest passwords you can muster.

If you’re a system administrator charged with keeping your network safe and you don’t have infinite time and resources, the “don’t care” regions can help shape your approach to passwords.

…many policy and education mechanisms are unfocused, in the sense that they cannot be targeted at the specific part of the cumulative distribution where they make most difference (and away from the “don’t care” region where they make none).

How then should you make sure that your efforts to strengthen users’ passwords actually make a difference?

Don’t waste time on composition policies

Perhaps the least popular approach is password composition policies.

These are sets of rules such as “your password should be at least eight characters long and contain at least one uppercase letter, one number and one special character”. They’re popular because the rules are easy to check and they increase the entropy of your password (which can be important but isn’t the same thing as password strength).

However, the case against these rules is compelling: they’re annoying (to everyone, even people choosing really strong passwords); they measure something that isn’t password strength; and they restrict the pool of possible passwords (the “password space”), which is a helping hand to password crackers.

Microsoft Research has come up with another reason to ditch those policies, which is that even if they do help to make passwords stronger, they fall into the “don’t care” region where it makes no difference:

…the evidence strongly suggests that none of the password composition policies in common use or seriously proposed can help … enterprises that impose stringent password composition policies on their users suffer the same fate as those that do not

Do block common passwords

Instead of using password composition policies organisations should simply stop users from choosing anything that might appear on SplashData’s annual worst password lists.

Attackers know what the most popular passwords are and any attacker worth their salt will be sure to try them first.

Password blocklists work just where you want them to: below the saturation point for online guessing. Sure, blocklists can be annoying, but they only annoy people choosing poor passwords.

Microsoft and Twitter are both mentioned as sites that use blocklists of hundreds of passwords, but the authors suggest going much further and blocking not just the worst few hundred, but the worst million passwords.
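A blocklist check is only a few lines of code: load the worst-N list into a set, normalise case, and reject any candidate it contains. The list below is a tiny hypothetical stand-in for the real million-entry file:

```python
# Stand-in for a file of the worst million passwords, one per line.
WORST_PASSWORDS = {"123456", "password", "qwerty", "football", "letmein"}

def acceptable(candidate):
    """Reject any candidate that appears on the blocklist, ignoring case."""
    return candidate.lower() not in WORST_PASSWORDS
```

Because the check only fires on listed passwords, it annoys precisely the users who are about to pick a password an attacker would try first.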

They also suggest that you might use zxcvbn on your website, a password strength meter that actually tries to measure password strength.

Throttle passwords

Limiting the number of times a user can try a wrong password can reduce the vulnerability of passwords below the saturation threshold. Attacks against rate-limited interfaces take a long time and attackers have to be far more circumspect about the guesses they make.

If you’re in any doubt about just how inconvenient rate-limiting can be, just ask the FBI.

The best bang-for-buck guesses for attackers are the most common passwords, so password blocklists and throttling make a potent combination:

Together with password blocklisting … throttling may almost completely shut down generic online guessing attacks.

NIST (the National Institute of Standards and Technology) now recommends that users be allowed no more than 100 consecutive incorrect guesses in any 30-day period.
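A NIST-style throttle just counts failed attempts per account inside a sliding window and refuses further guesses once the limit is hit. A minimal in-memory sketch (a real deployment would persist this state and would likely throttle per source IP as well):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30 * 24 * 3600   # NIST's 30-day window...
MAX_FAILURES = 100                # ...and its 100-guess limit

_failures = defaultdict(deque)

def allow_attempt(user, now=None):
    """True if this account is still under the failure limit for the window."""
    now = time.time() if now is None else now
    q = _failures[user]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()               # drop failures that have aged out
    return len(q) < MAX_FAILURES

def record_failure(user, now=None):
    _failures[user].append(time.time() if now is None else now)
```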

Note that whilst sysadmins looking to shepherd flocks of dodgy passwords can feel good about blocklists and throttling, it’s not an excuse for individuals to back off on their password discipline. Recent research showed that if attackers (or more likely their software) target you personally then even the NIST limit of 100 guesses might not be enough to keep you safe.

Enforce two-factor authentication

The paper is tightly focused on passwords and doesn’t cover things like 2FA (two-factor authentication) so I’m going to give it an honorable mention.

Two-factor authentication forces users to provide two pieces of information – typically their password and a code provided by a token, an SMS message or an app.

It protects systems from attackers with stolen passwords, because passwords aren’t enough by themselves to gain access, and it makes guessing passwords online very hard indeed.

Store passwords correctly

Throttling and blocklists are great for fending off online attacks but if a hacker makes off with your password database they can’t help. After a password database has been stolen the password hashes stored inside it are at the mercy of whatever time and hardware the attacker can afford.

How the stolen passwords have been stored makes a huge difference to how big the chasm is.

Passwords should be stored as hashes that have been salted and stretched (for an exhaustive examination of why read How to store your users’ passwords safely).

“Stretching” means repeating the salting and hashing process over and over, typically thousands and thousands of times, in an effort to make password hashing much more computationally expensive.

Moore’s law sees to it that the hardware used for password cracking is always getting faster. Stretching gives system administrators an easy way to keep up – as computers get faster they can simply increase the number of salting and hashing iterations passwords are passed through before being stored.

The upper limit on the number of iterations is determined by what users will stand because they have to wait for their passwords to pass through the salt, hash, stretch process to be authenticated.
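Salting and stretching are both built into PBKDF2, which ships in most standard libraries. A sketch using Python’s hashlib (the iteration count is an assumption; tune it so one hash takes roughly the 10ms the researchers treat as tolerable on your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salt, hash and stretch a password; returns everything needed to verify."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password, salt, iterations, digest):
    """Recompute the stretched hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Storing the iteration count alongside each hash lets administrators raise it for new passwords as hardware speeds up, without breaking old records.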

The slower the hash the longer that both users and password crackers have to wait:

If 10ms is a tolerable delay an attacker with access to 1000 GPUs can compute a total of … 10¹² guesses in four months. Directing this effort at 100 accounts would mean that each would have to withstand a minimum of T₁ = 10¹⁰ guesses. Since these are conservative assumptions, it appears challenging to decrease [the chasm] below this point.

10¹⁰ guesses reduces the online-offline “don’t care” region considerably but it still leaves us four orders of magnitude adrift of the chasm’s leading edge. But what about other ways of storing passwords?

Administrators can eliminate the online-offline chasm completely by removing the possibility of stolen hash databases, and one way to do that is by using an HSM (Hardware Security Module). An attacker who steals the password database without the HSM has nothing more than a useless list of Message Authentication Codes.
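The HSM approach replaces plain hashes with keyed MACs: the secret key never leaves the hardware, so the stolen database alone is useless. A software sketch of the idea (here the key is an in-memory stand-in for what would live inside the HSM):

```python
import hashlib
import hmac
import os

# Stand-in only: in a real deployment this key is generated inside the HSM
# and all MAC operations happen there, so the key is never in server memory.
HSM_KEY = os.urandom(32)

def mac_password(password, salt):
    """Keyed MAC of the salted password; worthless to anyone without the key."""
    return hmac.new(HSM_KEY, salt + password.encode(), hashlib.sha256).digest()

def verify_mac(password, salt, stored):
    return hmac.compare_digest(mac_password(password, salt), stored)
```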

What it all means

The conclusions of the research have the world’s sysadmins in mind. If your job involves looking after users’ passwords and your time is limited then its conclusions can help you focus your energy where it matters – on actually improving security.

If you’re an end-user however, you can’t relax. You’ll never know how your passwords are stored or whether yours sits above or below the saturation point. The measures that sites use to defeat online guessing may be more obvious to you but you’ll still have no control over them, aside from adopting 2FA if it’s available.

Make sure that every password you choose is unique and strong enough to withstand an offline guessing attack. Make each password a random collection of at least 14 letters, numbers and wacky characters and (if you don’t have a photographic memory) use a password manager to keep them safe.


Read more
DHS hiring puts into question the cybersecurity skills shortage

The cybersecurity skills shortage has been discussed in many different ways over recent years, but a successful hiring event held by the Department of Homeland Security has some wondering whether that event was a sign of optimism or an outlier.

The Department of Homeland Security (DHS) held a two-day hiring event “aimed at filling mission-critical positions to protect our Nation’s cyberspace” in July. According to a new blog post, that event garnered “over 14,000 applicants and over 2,000 walk-ins” and culminated with more than 800 candidate interviews and “close to 150 tentative job offers.”

Angela Bailey, chief human capital officer for the DHS, said in a blog post that the DHS “set out to dispel certain myths regarding cybersecurity hiring,” including the ideas that there is a cybersecurity skills shortage and that organizations cannot hire people “on the spot.”

“While not all of them were qualified, we continue to this day to hire from the wealth of talent made available as a result of our hiring event,” Bailey wrote. “We demonstrated that by having our hiring managers, HR specialists, and personnel security specialists together, we were able to make about 150 job offers within two days. Close to 430 job offers have been made in total, with an original goal of filling around 350 positions.”

Gunter Ollmann, CSO for Vectra Networks, said although the event “was pitched under the banner of cybersecurity it is not clear what types of jobs were actually being filled,” and some positions sounded more “like IT roles with an impact on cybersecurity, rather than cybersecurity specific or even experienced infosec roles.”

“Everyone with a newly minted computer science degree is being encouraged to get in to cybersecurity, as the lack of candidates is driving up salaries,” Ollmann told SearchSecurity. “Government jobs have always been popular with recent graduates that managed to scrape through their education, but would unlikely appear on the radar as interns for larger commercial organizations or research-led businesses.”

Chris Sullivan, CISO and CTO for Core Security, agreed that the DHS event may not be indicative of the state of the cybersecurity skills shortage.

“It looks like DHS executed well and had a successful event but we shouldn’t interpret that as a sign that cyber-defender resource problems are over. In fact, every CISO that I speak to has not seen any easing in the availability or cost of experienced resources,” Sullivan said. “In addition, the medium to long term solution requires both formal and on the job training — college curriculum is coming but much of it remains immature. We need resources to train the trainers.”

Derek Manky, global security strategist at Fortinet, warned against reading too much into a few hundred filled positions compared with the potentially hundreds of thousands of cybersecurity jobs left unfilled.

“The DHS numbers are relatively small compared with the overall number of unfilled positions,” Manky said. “Part of the solution is to build better technology that requires less human capital to be effective and can evolve to meet shifts in the threat landscape. Additionally, the market needs to better define what skills a cybersecurity professional should hold and use these definitions to focus on efforts that can engage and develop a new generation of cybersecurity talent.”

Rob Sadowski, director of marketing at RSA, the Security Division of EMC, said this event might be cause for optimism regarding the cybersecurity skills shortage.

“The experience that DHS shared is encouraging because it shows a groundswell of interest in cybersecurity careers. This interest and enthusiasm needs to continue across the public and private sector if we are to address the still significant gap in cybersecurity talent that is required in today’s advanced threat world,” Sadowski told SearchSecurity before hedging his bet. “The talent pool in an area such as DC, where many individuals have strong backgrounds in defense or intelligence, security clearances, and public sector agency experience contributes significantly towards building a pool of qualified cybersecurity candidates that may not be present in other parts of the country or the world.”

Bailey attributed some of the success of the DHS event to proper planning and preparation.

“Before the event, we carefully evaluated the security clearance requirements for the open positions. We identified many positions that could be performed fully with a ‘Secret’ rather than a ‘Top Secret’ clearance to broaden our potential applicant pool,” Bailey wrote. “We knew that all too often the security process is where we’ve lost excellent candidates. By beginning the paperwork at the hiring event, we eliminated one of the more daunting steps and helped the candidates become more invested in the process.”

Bailey noted the most important advice in hiring was to not let bureaucracy get in the way.

“The most important lesson learned from our experience is the value of acting collaboratively, quickly, and decisively. My best advice is to just do it,” Bailey wrote. “Don’t spend your precious time deliberating over potential barriers or complications; stop asking Congress for yet another hiring authority or new personnel system, instead capitalize on the existing rules, regulations and hiring authorities available today.”

Sadowski said rapid action is a cornerstone of an effective security program, but noted not all organizations may have that option.

“It’s great that DHS has the luxury to act decisively in hiring, especially from what they saw as a large, qualified pool,” Sadowski said. “However, many private sector organizations may not have this freedom, where qualified potential hires may require significant commitment, investment, and training so that they understand how security impacts that particular business, and how to best leverage the technology that is in place.”

Next Steps

Learn more about how the cybersecurity skills shortage can be fixed.

Find out how to live with the cybersecurity skills shortage.

Get info on why the skills shortage is delaying the adoption of new technology.

Digital Guardian for Data Loss Prevention: Product overview

Digital Guardian, which was known as Verdasys until 2014, offers several data loss prevention products. Originally focused on technologies for stopping data loss from insider threats, Digital Guardian has expanded its DLP product lineup to address external threats as well.

The company’s original product is Digital Guardian for Data Loss Prevention (DLP), an endpoint DLP product. In addition, Digital Guardian acquired Code Green Networks in October 2015, adding Code Green Networks’ TrueDLP suite of products — Network Data Loss Prevention, Cloud Data Loss Prevention and Discovery Data Loss Prevention — to its lineup. When used together, Digital Guardian’s four DLP products address security for data in use, data in transit and data at rest, as well as sensitive data in the cloud.

Digital Guardian for Data Loss Prevention

The Digital Guardian for Data Loss Prevention product provides context-aware inspection of all data at rest and data in use on Windows, Mac OS X and Linux-based desktops and laptops. It also monitors and controls removable devices, such as USB flash drives and removable media attached to protected endpoints, ensuring that only authorized removable devices are used and that only the appropriate files may be copied or moved to them. Digital Guardian for Data Loss Prevention also lets security managers set policies that can block or automatically encrypt sensitive data depending on the situation — such as attaching a file to an email or uploading it to a cloud service.

One of the key features included in Digital Guardian for Data Loss Prevention is automated data classification; the product is designed to tag and classify data upon installation, sorting personally identifiable information, healthcare data, PCI DSS-regulated data and more. In addition, Digital Guardian’s DLP software can cover up to 250,000 employees with a single management server.
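Digital Guardian’s classification engine is proprietary, but the basic idea of pattern-based tagging can be sketched as follows. The patterns and tag names below are purely illustrative; real products layer checksums, keyword proximity, machine learning and registered-document fingerprinting on top of simple matching:

```python
# Illustrative sketch only -- not Digital Guardian's actual implementation.
import re

# Hypothetical detectors for the three data classes named above.
PATTERNS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN format
    "pci": re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like
    "phi": re.compile(r"\b(?:diagnosis|patient id)\b", re.I),
}

def classify(text):
    """Return the set of data-class tags whose patterns match the text."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}

print(sorted(classify("Patient ID 8812, SSN 123-45-6789")))  # ['phi', 'pii']
```

In a DLP product these tags would then drive policy: files tagged at installation time can be blocked, encrypted or audited without rescanning their contents on every access.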

Digital Guardian Network Data Loss Prevention

The Digital Guardian Network Data Loss Prevention product monitors three types of communications for sensitive data in packet content: email traffic, HTTP/HTTPS/FTP traffic and all other packets. The onboard Message Transfer Agent examines email messages for content, source, destination, attachments and subject before they leave the organization. HTTP/HTTPS/FTP monitoring uses a web proxy acting as an ICAP client to communicate with the Network DLP appliance’s ICAP service, enabling Network DLP to inspect all outbound sessions for those protocols. Packet monitoring ensures that all outbound data packets, regardless of network protocol or destination port, are inspected before leaving the organization.
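The MTA-style email check can be sketched with Python’s standard `email` module. This is not Digital Guardian’s actual logic — the domain, blocked terms and pass/fail rule are hypothetical — but it shows the shape of inspecting a message’s destination, subject and body before release:

```python
# Illustrative sketch of pre-release email inspection; the policy is made up.
from email import message_from_string

INTERNAL_DOMAIN = "example.com"                       # hypothetical org domain
BLOCKED_TERMS = ("confidential", "internal only")     # hypothetical policy

def release_ok(raw_message):
    """Return True if the message may leave the organization."""
    msg = message_from_string(raw_message)
    headers = " ".join(filter(None, (msg["To"], msg["Subject"])))
    text = (headers + " " + msg.get_payload()).lower()
    # Only external delivery of policy-flagged content is blocked.
    external = INTERNAL_DOMAIN not in (msg["To"] or "")
    return not (external and any(term in text for term in BLOCKED_TERMS))
```

A real appliance would of course also decode attachments, follow MIME structure and consult the full classification engine rather than a fixed term list.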

Digital Guardian Cloud Data Protection

The Digital Guardian Cloud Data Protection product provides monitoring and control for all data exchanges with cloud-based resources involving desktops and laptops as well as iOS and Android mobile devices. Supported cloud services include Accellion, Box, Citrix ShareFile and Egnyte. The cloud DLP product scans all files uploaded to cloud storage for confidential or regulated data and remediates it based on policies.

Digital Guardian for Data Discovery

Digital Guardian for Data Discovery performs network and local scans of at-rest files to identify sensitive information found in servers and other data center assets. It also offers an agent that can be used to scan desktops, laptops and servers at remote offices. Once sensitive data is detected, Discovery DLP can handle the file containing that data based on policy. Common responses include deleting a file, moving a file to a vault — optionally leaving a notification in place of the relocated file — generating an alert or triggering a custom script.
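The discover-and-remediate flow described above — find sensitive data at rest, then vault the file and leave a notification in its place, or raise an alert — can be sketched like this. The scan pattern and policy names are hypothetical, not Digital Guardian’s own:

```python
# Illustrative sketch of discovery DLP remediation; paths/policy are made up.
import os
import re
import shutil

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. US SSN format

def discover(scan_root, vault_dir, policy="vault"):
    """Scan at-rest files; vault them (leaving a stub) or just alert."""
    os.makedirs(vault_dir, exist_ok=True)
    for dirpath, _, filenames in os.walk(scan_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                if not SENSITIVE.search(f.read()):
                    continue
            if policy == "vault":
                shutil.move(path, os.path.join(vault_dir, name))
                # Leave a notification in place of the relocated file.
                with open(path, "w") as stub:
                    stub.write("File moved to secure vault by DLP policy.\n")
            else:
                print(f"ALERT: sensitive data found in {path}")
```

The vault directory must sit outside the scanned tree, or the relocated files would simply be rediscovered on the next pass.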

Summary

Digital Guardian DLP products cover several enterprise IT areas, including endpoint devices, networks and cloud services. The DLP suite also comes with a data discovery component that’s designed to help companies identify and audit potentially unsecured data within the IT environment. The suite covers data in use on endpoint devices, data in transit on networks, and data at rest, as well as cloud and mobile data. Digital Guardian’s products are designed to meet the needs of large enterprises as well as small and medium-sized businesses.

Customers can access the Digital Guardian Support Portal for 24/7 technical support, FAQs, tutorials and other information. Digital Guardian also offers free product trials. Companies interested in Digital Guardian for Data Loss Prevention and other DLP products should contact the vendor for pricing and licensing information.

Next Steps

Part one of this series looks at the basics of data loss prevention products

Part two examines the business case for DLP products

Part three explores usage scenarios for DLP products

Part four focuses on procuring DLP products

Part five offers insight on selecting the right DLP product

Part six compares the best DLP products on the market

This was last published in November 2016

Fake online news still rattling cages, from Facebook to Google to China

Post-election, the ripples from fake online news continue to rock boats, from Google to Facebook to China and beyond.

The way to tackle the problem, as far as China’s concerned, seems to be to track down those who post fake news and rumors, and then “reward and punish” them – whatever that means.

According to Reuters, Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.

Ren Xianling, second in command at the Cyberspace Administration of China (CAC), said that the country should begin using identification systems to track down people who post false news and rumors.

It’s one more step on the road to a more restricted internet: one that China’s already walking and one that extends even beyond its infamous Great Firewall of censorship.

Earlier this month, the country adopted a controversial cybersecurity law, set to go into effect in June 2017, that has companies fearing that they’ll have to surrender intellectual property or open backdoors in their products in order to operate in China.

Meanwhile, over at Facebook, employees have reportedly gone rogue, forming an unofficial task force to study fake news.

According to BuzzFeed, the renegades have already disagreed with CEO Mark Zuckerberg, who called it “a pretty crazy idea” to think that fake news on Facebook influenced the election’s outcome.

He’s since dialed it back, saying that this is an issue that Facebook has “always taken seriously”.

Over the weekend, Zuck took to his personal Facebook page to post seven projects launched to tweak the site and polish the algorithms that pushed fiction to the top of Trending, where it’s been masquerading as real news.

They are:

  • Stronger detection in the systems that spot misinformation, before users have to flag it themselves.
  • Much easier user reporting.
  • Third-party verification by fact-checking organizations.
  • Possible warnings on stories flagged by those fact-checkers or the Facebook community.
  • Raising the bar for what stories appear in “related articles” in the News Feed.
  • Cutting off the money flow. “A lot of misinformation is driven by financially motivated spam. We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection,” Zuckerberg said.
  • More input from news professionals, to better understand their fact-checking systems.

As the media has been covering in minute detail post-election, it’s been suggested that such fake news swayed voters, who shocked much of the world by voting for Donald Trump in the US presidential election.

If we bounce on over to Google, another heavyweight in the news dissemination machinery, we find that it’s reportedly planning to remove its “In the news” section from the top of desktop search results in coming weeks.

Google got dragged into the fake news mess last week, when its search engine was prominently displaying a bogus report about Donald Trump having won the popular vote.

One of the top results for the In the news section when visitors searched for “final election count” was a blog, 70 News, that falsely claimed Trump had won the popular vote by a margin of almost 700,000.

He didn’t. As of Tuesday, votes were still being counted, but Hillary Clinton’s lead of 1.7 million votes was still growing.

Business Insider spoke to a source familiar with Google’s plans who said that it will replace the In the news section with a carousel of top stories, similar to what it now features on mobile.

The plan was in the works for some time before the 70 News piece got featured.

The removal of the word “news” will, hopefully, help visitors distinguish between Google’s human-vetted Google News product and the results of its Google Search engine, which don’t get assessed on the basis of whether they’re true or not – just whether they’re newsy.

However, Google has made clear that it’s not interested in serving up nonsense. Last week, Google CEO Sundar Pichai had this to say on the matter:

From our perspective, there should just be no situation where fake news gets distributed, so we are all for doing better here.

To put some bite into that bark, Google said it would starve out fake-news sites, banning them from its ad network and all that revenue. Facebook did the same.

In his post, Zuckerberg stressed that this is complex stuff, technically and philosophically. Facebook doesn’t want to suppress people’s voices, so that means it errs on the side of letting people share what they want whenever possible. The more people share, the more the ad revenue flows, and it doesn’t matter to ad revenue what people share, be it divine inspiration or drivel.

But over at Princeton University, four college students last week showed that as far as the technical part of the equation goes, it might not be quite that hard after all.

The Washington Post reports that the four spent 36 hours at a hackathon, coming out the other end with a rudimentary tool to block fake news sites.

They’re busy with class work and a little overwhelmed by the outpouring of interest. Want to take their Chrome extension for a spin? Here you go: they open-sourced it.

As the fake-news saga keeps spinning, bear in mind that we can influence this, too. If we see something that we consider fake and comment on it, that’s a +1 as far as the algorithms are concerned.

Did you share it with friends so you can all laugh at how dumb the post was? That’s another +1. All your friends who chimed in? +1, +1, +1, +1. Instead, just ignore it; starve fake news until it shrivels out of our feeds.
