MFA in the USA

What prevents a democratic republic like the United States of America from devolving into a dictatorship?  What stops the President from seizing control of the country?  What limits the power of Congress and stems the possibility of corrupt and unjust laws?  The answer to these questions is a simple one and known by every child in every social studies class across America – a system of checks and balances.  All the power and all the responsibility is not vested in any single branch of government.  Responsibility is divided and power is shared.  This simple, yet ingenious approach to government has preserved the sanctity and security of our nation for more than 240 years.  This concept of checks and balances has also proven its value in other segments of life and business, including the principles of IT security.

Checks and balances permeate almost every aspect of a sound IT security program.  The practice of this concept is known by many different names – separation of duties, layered perimeter defenses, third-party auditing, and most recently multi-factor authentication.  The latter (multi-factor authentication, or MFA) has become particularly relevant in the last several months and has spurred many debates over the hows and whys of identity and access management.  As such, there is tremendous value in exploring its significance as a check in the computer authentication process and understanding what it does and does not do to protect a user’s identity and system access.

At its core, MFA is built on the principle of “something you know” and “something you have”.  The “something you know” is fairly straightforward.  You know your username and your password.  The “something you have” can be a little trickier.  Sometimes it is a physical token you use, such as a key card or a USB drive you insert into your computer.  Other times it is a piece of software generating a code on your smartphone or a text message you receive from an authenticating system.  The end goal of this authentication process is to separate the two items.  The “something you have” is separate from the “something you know”.  It is out-of-band and not easily intercepted by someone or something attempting to compromise the authentication process.  In a modern world filled with cyber criminals lurking around every corner armed with phishing attacks and social engineering tricks and treats, protecting user identities has become a full-time job, and the most trusted tool in the trade has become multi-factor authentication.
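
For readers who want to see the mechanics, below is a minimal Python sketch of the time-based one-time password (TOTP) algorithm (RFC 6238) that many authenticator apps use to generate the “something you have” code.  The shared secret shown is a made-up example, not a real credential.

```python
# Minimal TOTP sketch (RFC 6238): derive a 6-digit code from a shared
# secret and the current 30-second time step. Example secret only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret; prints a 6-digit code
```

Because the server and the authenticator app share the secret but the code itself changes every 30 seconds, a phished password alone is not enough to log in.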

The title of “most trusted tool” for MFA is frankly quite accurate and far from a literary exaggeration.  Once an optional security feature left to IT security aficionados and the truly paranoid, MFA has, over the last year, become a standard authentication mechanism for numerous businesses, online retailers and service providers.  This tremendous growth in use has been fueled by the fear of identity theft and financial loss associated with email phishing schemes and online hacking.  Multi-factor authentication has provided some much needed peace of mind as a second layer of protection for users fearing compromise because it prevents access to systems and websites even if a user’s password has been successfully stolen or intercepted by a cybercriminal.  Even if the “something you know” has been stolen, the “something you have” still protects your account.

As users have become more comfortable with and accustomed to MFA, a new question has arisen that deserves our attention.  Users are now asking, “If my password is now protected by multi-factor authentication, then why do I need to worry about following all of these strong password requirements?”  Those requirements typically include longer, randomized passphrases composed of case-sensitive letters, numbers and symbols.  The answer to this question is also quite simple.  Multi-factor authentication is not perfect.  As a process, it can be broken, sidestepped, or even experience outages.  In just the last week, PayPal announced that it had corrected a flaw in its two-factor authentication mechanism that allowed the secondary security layer to be bypassed altogether.  Apple, in the last 72 hours, announced an emergency security update that addressed, among other issues, a flaw in its authentication process that would allow for remote access to and jailbreaking of iOS devices.  These are only two examples among many because, at the end of the day, we are dealing with technology written and maintained by humans, and humans make mistakes.

Remember that at its core, MFA is an extra layer of protection for the authentication process.  It is not a replacement for strong passwords, but rather an addition to them.  It is part of a checks and balances system that has evolved in the world of strong authentication, and in this system, just as we discussed in the introduction of this article, power and responsibility are both divided and shared, but never exclusive.  IT security defenses, like the defenses used throughout the history of humanity, are most effective when they are layered.

This article began with the example of a historically validated and somewhat lofty core principle of democratic society.  Allow me to end it with some of the sage advice I received from my grandmother over and over throughout my formative years.  Don’t put all of your eggs in one basket.  Do not assume that just because one of your layers of defense is strong, the others are suddenly less important.  You need both checks and balances.  The responsibility for secure authentication is both divided among and shared by the multiple factors in use.  Every factor needs to be strong and reliable to ensure the safety of the user involved and the system being accessed.  Given the prolific growth of cybercrime in the world, now is not the time to cut corners and sacrifice security for expediency.  Now is the time to strengthen your walls, to deepen your moats, and to raise your drawbridges.  The cyber criminals are coming, but you don’t have to let them in.

Lions and Tigers and Passwords and Hoaxes, oh my!

Many of you may have seen a great deal of bluster in the mainstream media and general interest IT circles over the last few days concerning the possible breach and release of tens of millions of Google, Yahoo, and Microsoft credentials.  This breach was attributed to a Russian hacker after a huge, low-cost dump of credentials flooded the black market.  I have personally seen multiple emails and alerts floating around the Internet from “experts” spreading large quantities of FUD (Fear, Uncertainty and Doubt), claiming that passwords should be rotated immediately, not only for Google, Yahoo, and Microsoft, but also for any other systems that might use the same or similar credentials.  Fortunately, professionals in the IT security community saw through this hoax fairly quickly and never raised the red flag.  The data dump in question quickly proved to be more than 98% dummy data.  Even on the black market, too good to be true usually means it is not what it appears to be.

So what should be the takeaways and lessons learned from this type of event?  We can certainly learn a great deal from these types of false alarms.  Here are a few of my thoughts and suggestions:

  • Don’t overreact – Wait for the IT security professionals and the vendors in question to weigh in before assuming that all is lost. Google, Yahoo and Microsoft were quick to verify the data was false and confirm that a breach had not occurred.  Though I am never against periodically rotating passwords, sometimes these hoaxes are designed to fuel a mass password change panic, which the bad guys then exploit through phishing attacks and other credential harvesting techniques.
  • Don’t focus only on passwords – Consider utilizing multi-factor authentication for web mail and social media accounts. Twitter, LinkedIn, Google, Yahoo, Microsoft and others all support free multi-factor authentication mechanisms as a protection against the theft of usernames and passwords.  Multi-factor authentication basically means that in order to sign into a service, either via your PC or your mobile device, you must have something you know (your username and password) and something you have (your smartphone text message or token).  This type of protection can buy you the time you need to investigate alerts while knowing your credentials are safe from misuse.
  • Lessen the impact of lost credentials – Always use separate passwords for different services and accounts. In the event a credential is lost or compromised, you are only exposed for that one service or resource.  I fully realize this strategy creates some overhead in managing lots of usernames and passwords, but fortunately there are many great password management tools on the market today to help remedy this problem.  I am personally a fan of tools like 1Password and LastPass.  (For the do-it-yourselfers, see the short sketch after this list.)
  • Have good resources on standby to help – IT security is an ever-evolving, specialized field. Make sure your IT services team has expertise on staff and is ready to help.  Consider finding trusted sources you can follow via an RSS feed or Twitter to know what is really going on in the world of IT security so that you can better differentiate between the hoaxes and the real threats.
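
As promised above, here is a minimal Python sketch of generating a unique, random password per service using only the standard library.  The service names are illustrative.

```python
# Generate one strong, distinct password per service so a single breach
# exposes only a single account. Uses the cryptographically secure
# secrets module from the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def new_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for service in ("webmail", "banking", "social"):  # example service names
    print(service, new_password())
```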

SkyNet is born? – Microsoft Windows 10 and Data Privacy

The time has come to have the Microsoft / Windows 10 discussion.  For those of you who follow one or several of the myriad tech news sources available online, I don’t need to say anything else.  You know exactly where this article is going.  For anyone else who hasn’t stumbled across any of the headlines of the last several months, the discussion in question is about data collection, forced upgrades, and control.  Microsoft has chosen a path with their implementation of Windows 10 that crosses a line, or frankly several lines, in terms of user privacy and user choice, and I believe it is time for me to weigh in and help move this conversation forward.

I readily admit that nothing I am about to share or discuss is particularly new or innovative.  These Windows 10 concerns have existed since the beta releases and have been thoroughly covered in the tech and IT security media.  My motivation is simply the fact that I have finally reached my personal boiling point.  I was asked this week by colleagues in my office why I had not written about these issues or raised an electronic red flag.  Sadly, the most honest answer I could give then and share now is that I was avoiding the conversation because: A) it hasn’t really affected me personally as an OS X user, and B) I don’t honestly know what the solution would or could be to this problem.  That said, I do not think this conversation can be avoided any longer, and it is time to speak up.

Before we get into examining why I felt the need to avoid this conversation, let’s take a moment to frame the issues with Microsoft and Windows 10, and the best starting point is Microsoft’s new approach to user data collection.  With the release of Windows 10, Microsoft has defined certain data collection points that they believe are important, if not necessary, to providing the best user experience possible.  In a blog post from September 2015, Terry Myerson, Microsoft’s Windows chief, attempted to justify the data being collected by defining three core areas where collection was deemed beneficial, if not necessary: data used for safety and reliability, user personalization data, and advertising data.  According to Myerson, this data greatly enhances the user experience and is transmitted, collected and stored in a safe and responsible manner by the team at Microsoft.  Many in the world of tech and IT security are openly questioning these claims and are quick to point out the difficulties experienced when attempting to stop or block these data collection processes.

To provide a little perspective, a colleague of mine has the following statement taped to his office door:

Microsoft’s service agreement for Windows 10 is 12,000 words in length.  Here’s one excerpt from Microsoft’s Terms of Use that you may not have read:

“We will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.”

To better understand the pervasiveness of Microsoft’s data collection strategy, you only need to look at the Windows 10 achievement milestones Microsoft is bragging about and sharing with the world.  The Hacker News, an IT security news and blogging site, deftly outlined the following stats shared by Microsoft to start the new year:

  • People spent over 11 billion hours on Windows 10 in December 2015.
  • More than 44.5 billion minutes were spent in Microsoft Edge across Windows 10 devices in December alone.
  • Windows 10 users have asked Cortana over 2.5 billion questions since launch.
  • About 30 percent more Bing search queries per Windows 10 device compared to prior versions of Windows.
  • Over 82 billion photographs were viewed in the Windows 10 Photo application.
  • Gamers spent more than 4 billion hours playing PC games on the Windows 10 OS.
  • Gamers streamed more than 6.6 million hours of Xbox One games to Windows 10 PCs.

Microsoft is clearly sharing these statistics to tout how successful the Windows 10 rollout has been and how well received the product is with end users, but these statistics are also a brazen admission of how deeply Microsoft is monitoring its user base and exactly how much data it is collecting about the Windows 10 population.  Just break these statistics down.  Microsoft is cataloging overall usage hours by end users, specific application usage hours, Cortana requests, Bing queries, photo and video content usage, and cross-platform communications.  As a potential end user, you should be both afraid and appalled by these statistics.

Another frightening data collection area that should be considered is Microsoft’s new approach to whole disk or device encryption.  Device encryption is a new, free service available for all Microsoft devices with the necessary supporting chipsets and hardware.  For those of you in the corporate world familiar with Microsoft’s professional BitLocker offering, the underlying technology is the same across all platforms.  However, unlike Pro and Enterprise users, the Home/free device encryption solution Microsoft is now providing across the board lacks the options available to BitLocker deployments when it comes to how the encryption key is handled.  To make a long story short, if you are using the free or Home solution, Microsoft is collecting and storing your encryption key on their servers and associating it with your Microsoft account.  They did not ask.  They simply did this because they determined it was best for the end user and his/her overall experience.  If you have BitLocker in an enterprise environment, you do have other options for storing and managing encryption keys, but even with that process, if the wrong boxes are checked, the result can be keys being submitted to a Microsoft repository.  Ponder that fact for just a moment.  If or when Microsoft’s server resources get compromised, a huge portion of the world’s end users will have their private encryption keys published and available for public consumption.
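
For the curious, here is a hedged sketch of checking what key protectors exist on a BitLocker-protected drive.  It assumes a Windows machine with the built-in manage-bde tool, run from an elevated prompt; note that whether a copy of the recovery key has been escrowed to your Microsoft account is visible at account.microsoft.com, not in this output.

```python
# List the BitLocker key protectors on drive C: (Windows only; requires
# an elevated prompt). A "Numerical Password" entry is the recovery key.
import subprocess

result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)
```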

So how did Microsoft, and by extension we as the end user public, get to this point?  The answer is system updates.  Microsoft writes them.  End users need them to fix OS and application problems.  IT security professionals, myself included, constantly preach that critical and security-related patching is vital to stay ahead of the cyber crime curve.  So Microsoft leveraged this delivery mechanism to start sending out “critical” updates to users to prompt, then highly encourage, then all but force an upgrade to Windows 10.  Microsoft used similar updates to open communications paths and allow for new data collection points.  Filtering these updates is very difficult for the average, non-technical Windows user, and the more technical user has started seeing features break and options become unavailable if patches are not applied.  Microsoft basically took advantage of a captive audience and began to build their “OS utopia” one update at a time.

While we are speaking of a captive audience and the Microsoft update process, let’s take a moment to look at this week’s announcement surrounding support for Internet Explorer.  Microsoft has announced that as of January 12, 2016, all versions of Internet Explorer prior to IE 11 will cease to be supported and will no longer receive security updates; only IE 11 and Microsoft Edge will continue to be patched.  Though there are some exceptions for embedded versions of Windows, this basically means that IE 7, 8, 9, and 10 will no longer be patched.  Along with these versions of IE, Microsoft also quietly indicated that Windows 8 as an operating system will no longer be supported.  On its face, this announcement is not an evil act.  It is important for organizations and individuals to update and upgrade software to the latest version, especially an application as vulnerable to attack as a web browser.  But let us be clear.  This was not an altruistic act by Microsoft to move users to a safer and more secure platform.  It was a targeted act that moves users to the most current and most pervasively monitored version of an application, and it also encourages an upgrade path to Windows 10.  There are very practical implications to this move by Microsoft.  Many organizations and individuals rely upon legacy web applications that simply do not support new versions of IE.  Others simply do not have the time and resources to update and retrain.  There is the real potential for a security vacuum with the lack of patches for legacy versions of Internet Explorer.

I began this article with an admission that I have honestly been avoiding this conversation for a couple of reasons.  First of all, I am primarily an OS X user and these problems don’t directly affect me.  OK.  I admit that is a bit of a cop-out.  I still own several Windows devices, as do my children and, of course, many of my customers.  But in truth, as I sit and type here on my MacBook Air, I do not personally fear many of the intrusions I have outlined to this point, and at some level, that fact kept my boiling point in check.  That said, I have experienced some of the pains I have detailed in this article, especially in the support and configuration of devices for my teenage boys.  These issues do exist in the real world and need to be addressed, but that fact also leads to the second reason why I have avoided this conversation.  How do we solve or begin to solve this problem?

At the heart of this problem is the most commonly used operating system on the planet – Windows.  Though far, far from perfect, Apple OS X and the many flavors of Linux available throughout the world do not generally suffer the same number of privacy concerns that Windows 10 does.  In all honesty, there are many ways you can share your private information with the good people of Apple, but those options can be fairly easily controlled and disabled by the end user.  So, is the solution to press the world to go out and buy Macs?  I don’t think so.  For many, this is a cost-prohibitive scenario.  There is a sunk cost to hardware already purchased.  There is a learning curve.  So is the solution a custom distribution of Linux that can run on already purchased hardware?  Maybe, but even that option is difficult and unlikely to gain any traction.  Once again, there is a learning curve, and a populace that simply lacks the skills and resources to transition away from Windows.  Sadly, at the end of the day, we are discussing a market that Microsoft has dominated for more than 20 years.  We are navigating on a boat that simply turns too slowly.

So what is the answer, and is there a solution?  I freely admit that I do not know for sure.  But I do have hope.  I have hope for the simple reason that we still have a voice.  We can still complain about the level of intrusion Microsoft is making into the lives and actions of its end users.  We can share these concerns with the masses, with the press, and with the legislators who have such a keen desire to tout the need for both security and privacy.  We can choose to save our money and invest in better software and hardware whenever possible.  We can collaborate as a community on tweaks and fixes and filters for Windows 10 that can curb the loss of data.  Frankly, we can become the community of IT users and professionals that we have always pined for – a group of people concerned for the common good and willing to work together and share information to make the cyber ecosystem a safer and more reliable place to work and play.  It is not easy and it will not be quick, but the effort is well worth it.

The 12 Steps of Good Vulnerability Management

Step #1 – Admit That You Have a Problem

Many IT professionals live in a world of denial. Assumptions are made about the security of systems, and risks are often ignored. These stances are not taken out of ignorance or irresponsibility, but are instead often pragmatic decisions based on the resources available and the number of hours in a day. IT managers are frequently forced to hope that the diligence that went into the deployment and configuration of network equipment and servers three years ago will continue to protect that equipment today. Unfortunately, that is often far from reality.

The first and most important step for all IT professionals is to recognize and admit that vulnerabilities are real and that they are a problem to be tackled consistently and systematically. Attack vectors change, software evolves, and firmware gets revised. Very little if any part of information technology is static. Recognizing this fact is key. Once you admit that problems exist, solutions become a possibility.

Step #2 – Define and Understand Your Boundaries

Once you recognize that there is a vulnerability problem to be tackled, the next step is to start to define the battle and the related battlefield. Certain questions must be answered initially. What tools do I currently have at my disposal (vulnerability scanners, logs, discovery tools, monitoring tools, asset management information, etc.)? How many locations and subnets do I need to evaluate? What impact will this work have on my network? What are my potential maintenance windows?

The answers to these questions will help to define your next steps including what needs to be purchased, who needs to be called, and how quickly you can dig in and start working the problem. Rome was not built in a day, so remember that patience is key. The development of a strong plan is the foundation for success.
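
One simple way to start is to capture the answers to these questions as data that your later scans can be driven from.  The sketch below is illustrative only; every subnet, window and tool name is an assumption.

```python
# A hypothetical scan scope definition: subnets, locations, tooling and
# maintenance windows captured in one place for the steps that follow.
SCAN_SCOPE = {
    "subnets": ["10.0.1.0/24", "10.0.2.0/24", "192.168.10.0/24"],
    "locations": {
        "HQ": ["10.0.1.0/24", "10.0.2.0/24"],
        "Branch": ["192.168.10.0/24"],
    },
    "tools": ["nmap", "vulnerability scanner", "asset inventory export"],
    "maintenance_windows": {
        "HQ": "Sat 22:00-04:00",
        "Branch": "Sun 01:00-05:00",
    },
}
```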

Step #3 – Know Thyself

Knowing thyself means a lot of things to a lot of people, but in terms of vulnerability management it means understanding how many devices you have on your network, where those devices are located and what function they serve. This process usually begins with a discovery scan across all subnets. In a perfect world (and I know all IT shops are generally utopic!), this type of scan validates all of the existing asset management inventories and there are no surprises. In reality, a good discovery scan can identify lost or forgotten components, expose unauthorized devices, backfill or create asset inventory lists, and provide a strong starting point for vulnerability remediation.
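
As a concrete example, here is a minimal discovery sketch built on nmap’s ping sweep.  It assumes nmap is installed and that you are authorized to scan the subnet in question (the subnet shown is an example).

```python
# Discovery scan sketch: ping sweep a subnet with nmap (-sn) and parse
# the grepable output (-oG -) for hosts that responded.
import subprocess

def discover(subnet: str) -> list:
    out = subprocess.run(
        ["nmap", "-sn", "-oG", "-", subnet],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split()[1] for line in out.splitlines()
            if line.startswith("Host:") and "Status: Up" in line]

for ip in discover("10.0.1.0/24"):  # example subnet
    print("live host:", ip)         # compare against your asset inventory
```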

Step #4 – Address the Obvious Problems First

Most IT professionals do not need to perform extensive testing or run numerous scans to identify the “problem children” on their network. Every organization has certain servers and network devices that are averse to patching or downtime or both. Build a plan, schedule the necessary downtime and patch these devices. There is no need to wait for a vulnerability assessment to know that these machines will need to be addressed. Plus, the cleaner the initial vulnerability assessment, the faster remediation can begin.

Also, remember Step #3. Review your discovery scan and target any anomalies. All unknown or unexpected devices should be investigated and all unnecessary and unused machines should be decommissioned.

Step #5 – Assess Your Situation

At the heart of every good vulnerability management strategy is a thorough vulnerability assessment utilizing an established and exhaustive scanning tool. Several important decisions go into a strong initial vulnerability assessment. Select a reputable scanning tool with a mature vulnerability signature database; this will limit false positives and ensure valuable initial scan results. Using the results of your initial discovery scan, target all assets on your network. Do not make assumptions as to which devices should or should not be scanned. Scan them all. Target a maintenance window that will allow for as much potential downtime as possible; this will allow for a more thorough and intrusive scan of all nodes without impacting business functionality. Finally, be patient. A thorough scan takes time and monitoring. Be aware of your maintenance window and be prepared to pause your scan to ensure production is not affected.
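
For illustration only, the sketch below runs nmap’s “vuln” script category against the hosts found in Step #3.  This is not a substitute for a dedicated scanner with a mature signature database, and the targets shown are assumptions.

```python
# A basic assessment pass: version detection (-sV) plus the "vuln"
# NSE script category, with results written to a file per host.
import subprocess

targets = ["10.0.1.5", "10.0.1.6"]  # hypothetical hosts from discovery
for host in targets:
    subprocess.run(
        ["nmap", "-sV", "--script", "vuln", "-oN", f"vulnscan_{host}.txt", host],
        check=False,  # keep scanning even if one host errors out
    )
```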

Step #6 – Remediation is Fundamental

This particular step is quite possibly the most important and the most easily forgotten step in good vulnerability management. A strong vulnerability assessment is only as good as the time and effort put into the remediation of the assessment’s findings. Far too often, organizations diligently scan their networks only to set aside the resulting report and never fix any problems. Scanning becomes a compliance checkbox effort while the remediation work falls to the bottom of the task list.

Review your vulnerability scan results. Build a remediation plan starting with your most vulnerable and critical systems. Then work the plan. Realize and accept the fact that not all findings can be remediated quickly and some findings may find their way onto the next scheduled scan. That’s OK. Be methodical, and eventually those reports will be smaller and smaller and your network will be more and more secure.
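
A remediation plan can be as simple as a sorted list.  The sketch below orders findings by severity and asset criticality; the field names and sample findings are assumptions about what your scanner exports.

```python
# Order findings into a remediation plan: highest CVSS score on the
# most critical assets first. Sample data is entirely made up.
findings = [
    {"host": "db01",  "cvss": 9.8, "criticality": 3, "issue": "example-finding-1"},
    {"host": "kiosk", "cvss": 9.8, "criticality": 1, "issue": "example-finding-2"},
    {"host": "web01", "cvss": 6.5, "criticality": 3, "issue": "example-finding-3"},
]

plan = sorted(findings, key=lambda f: (f["cvss"], f["criticality"]), reverse=True)
for rank, f in enumerate(plan, 1):
    print(f"{rank}. {f['host']}: {f['issue']} (CVSS {f['cvss']})")
```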

Step #7 – Reassess Your Situation Every Few Months

Simply completing a vulnerability scan and successfully remediating all of the related findings is not the same thing as reaching the end of the vulnerability management rainbow. You are not done. The clock starts all over again. Like its close cousin patch management, vulnerability management is a continuous process and, as such, requires a consistent methodology. Develop a set of quarterly procedures including discovery scans, vulnerability assessments and remediation tasks. Such a strategy will shorten vulnerability windows and give you a bit more peace of mind from interval to interval.

Step #8 – Develop Better Habits

As has been stated throughout this list, a lack of patches and up-to-date firmware is often a root cause of vulnerabilities on systems. IT professionals the world over have the best of intentions when it comes to the development and implementation of a patch management strategy. Unfortunately, project schedules and real world challenges interfere with those strategies, leading to interruptions if not the complete abandonment of system patching. No good comes from this.

IT professionals should make patch management and, as an extension, vulnerability management methodical and habitual. Time should be built into work schedules and project plans to ensure these critical tasks are complete. Resources should be dedicated to remediation. Both planning and execution are necessary to ensure all systems are as hardened and as defensible as possible in the event of a cyber-threat.

Step #9 – Increase Your Frequency

Vulnerability management and the plans and processes associated with it should evolve over time. As remediation strategies become more effective, each follow-up vulnerability scan report should be smaller and more manageable. Once those reports become more manageable, the time between scans can shrink. The more frequently an IT shop scans for and remediates vulnerabilities, the less time that shop and its associated organization spends vulnerable to potential threats. Less vulnerability is always a good thing.

A good strategy for scanning intervals is to attempt to shrink from quarterly to monthly, and then from monthly to weekly. Most professionals would agree that an attack window of seven days is much more palatable than an attack window of 90 days.

Step #10 – Make It Automatic

The inevitable challenge that comes with more frequent vulnerability assessments is having the time and resources to perform the scans. Fortunately, most of the leading scanning tools and vulnerability management solutions have automation mechanisms to help solve this problem. A good tool should allow an administrator to schedule scans as needed and to route the results of those scans to an email box or file share automatically. A good tool should also generate alerts based on critical findings or system conflicts associated with the scanning process. This level of automation should free up administrators and allow for more frequent, burden-free scans, which in turn provide valuable insight and smaller vulnerability windows.
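
As a rough sketch of what that automation looks like under the hood, the script below runs a scan and mails the results; in practice you would let your scanning tool’s scheduler do this, or trigger the script from cron or Task Scheduler.  Every subnet, address and server name here is an assumption.

```python
# Scheduled-scan sketch: run an nmap vuln scan and email the report.
# Trigger from cron, e.g.:  0 2 * * 1  /usr/bin/python3 weekly_scan.py
import smtplib
import subprocess
from email.message import EmailMessage

report = subprocess.run(
    ["nmap", "-sV", "--script", "vuln", "10.0.1.0/24"],  # example subnet
    capture_output=True, text=True, check=False,
).stdout

msg = EmailMessage()
msg["Subject"] = "Weekly vulnerability scan results"
msg["From"] = "scanner@example.com"      # hypothetical addresses
msg["To"] = "secops@example.com"
msg.set_content(report)

with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical SMTP relay
    smtp.send_message(msg)
```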

Step #11 – Learn to Follow the Trends

Aside from the immediate goal of identifying and eliminating network and device vulnerabilities, a strong vulnerability management methodology also provides invaluable insight into the function and effectiveness of an organization’s IT security practice. By tracking the vulnerabilities and threats identified in the scanning process in relation to the remediation process designed to eliminate those threats, an IT security practice can demonstrate its effectiveness and the organization’s overall security posture over time.

Many of the more robust vulnerability management solutions on the market today can track remediation successes over time and provide reports and graphs demonstrating the effectiveness of the vulnerability management methodology. This is a valuable tool for most IT security practices because it validates all of the efforts exerted to keep an organization safe and it also provides a financial justification for resources acquired and monies spent.
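
Even without a commercial solution, the underlying trend report is easy to picture.  The sketch below counts open findings per severity for each scan date; the sample data is made up.

```python
# Trend sketch: open findings per severity, per scan date. A shrinking
# total from interval to interval shows remediation is working.
from collections import Counter

scans = {
    "2016-01-15": ["critical", "critical", "high", "medium", "medium", "low"],
    "2016-04-15": ["critical", "high", "medium", "low"],
    "2016-07-15": ["high", "low"],
}

for date in sorted(scans):
    counts = Counter(scans[date])
    print(date, dict(counts), "total:", sum(counts.values()))
```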

Step #12 – Make Continuous Progress

This final step assumes that scanning and remediation are moving along smoothly and vulnerability windows have been shrunk as small as possible. Many vulnerability management solutions and threat intelligence platforms now support Layer 7 continuous monitoring of networks for potential vulnerabilities and threats. This is accomplished through passive packet inspection and traffic pattern recognition. Such a solution is the logical next step in vulnerability management: knowing about a problem as it occurs.
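
To give a toy sense of the passive approach, here is a hedged sketch using the scapy packet library (a third-party install): it watches live traffic and flags cleartext protocols that a vulnerability program would want to eliminate.  Real continuous-monitoring products do far deeper Layer 7 inspection; the interface name here is an assumption, and packet capture privileges are required.

```python
# Passive monitoring sketch: sniff traffic and flag connections to
# well-known cleartext service ports. Requires scapy and capture rights.
from scapy.all import IP, TCP, sniff

CLEARTEXT_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP"}

def flag_cleartext(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        name = CLEARTEXT_PORTS.get(pkt[TCP].dport)
        if name:
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: cleartext {name}")

sniff(iface="eth0", prn=flag_cleartext, store=False)  # example interface
```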

That being said, perhaps assumptions should not be made. Maybe an IT shop’s vulnerability management methodology does not need to be perfect. Continuous monitoring has value regardless of the state of your vulnerability management strategy. Knowing you have a problem is truly half the battle. But remember that knowing you have a problem is not the same thing as solving it. That takes a plan and that takes proper execution.

Good Luck! Go fight the good fight against bugs and vulnerabilities!

My Thoughts – Sony pulls ‘The Interview’ after 9/11 terror threat

The link below is just one of dozens of articles concerning the Sony breach and the subsequent pulling of “The Interview” from movie theaters around the country.  I, like many of you, am both angered and frustrated by this entire situation, from Sony’s response to the conjecture of retaliatory attacks by the US government against North Korea.

First and foremost, this entire situation is an example of cyber-bullying targeted at the US Constitution and its freedom of expression, as well as the very nature of capitalism in a free market society.  Every American should be outraged that the acts of one nation state could influence what appears in an American theater.  It really is that simple.  Corporate America is bowing to the whim of a violent dictator.  We are setting a very dangerous precedent by allowing this to happen.

Secondly, Sony is clearly not guiltless in this situation either.  As in most instances of bullying, the victim was not prepared for conflict.  Sony found itself cornered on the playground with its IT pants pulled down around its ankles due to a complete and utter disregard for proper cyber defenses.  Other corporations desperately need to take notice and prepare themselves.  There are plenty of bullies on the playground of our world’s economic stage, and the environment is ripe for a wave of similar extortion attempts and cyber attacks.

Finally, retaliation in the forms being bandied about via public media outlets is not the answer.  There are no real value-added cyber targets in North Korea, and the attack itself was clearly outsourced to players located in other locations throughout the world.  Retaliation and retribution need to come in the form of real world controls.  This is not a tit-for-tat situation.  At the end of the day, the American infrastructure is under attack, either physically or economically, and that kind of threat should be handled in a serious manner and at the highest levels of government.  As citizens, we have a right and a responsibility to demand this of our elected officials.  Do not be lulled into thinking this is just about a silly movie and the bruised egos of the Hollywood elite.

https://nakedsecurity.sophos.com/2014/12/18/sony-pulls-the-interview-after-terror-threat-sued-by-staff-over-privacy-violations