My Thoughts – Sony pulls ‘The Interview’ after 9/11 terror threat

This is just one link to one of dozens of articles concerning the Sony breach and the subsequent pulling of “The Interview” from movie theaters around the country.  I, like many of you, am both angered and frustrated by this entire situation, from Sony’s response to the conjecture of retaliatory attacks by the US government against North Korea.

First and foremost, this entire situation is an example of cyber-bullying targeted at the US Constitution and its freedom of expression as well as the very nature of capitalism in a free market society.  Every American should be outraged that the acts of one nation state could influence what appears at an American theater.  It really is that simple.  Corporate America is bowing to the whim of a violent dictator.  We are setting a very dangerous precedent by allowing this to happen.

Secondly, Sony is clearly not guiltless in this situation either.  Like most instances of bullying, Sony was not prepared for conflict.  They found themselves cornered on the playground with their IT pants pulled down around their ankles due to a complete and utter disregard for proper cyber defenses.  Other corporations desperately need to take notice and prepare themselves.  There are plenty of bullies on the playground of our world’s economic stage and the environment is ripe for a wave of similar extortion attempts and cyber attacks.

Finally, retaliation in the forms being bandied about via public media outlets is not the answer.  There are no real value-added cyber targets in North Korea, and the attack itself was clearly outsourced to players located in other locations throughout the world.  Retaliation and retribution need to come in the form of real-world controls.  This is not a tit-for-tat situation.  At the end of the day, the American infrastructure is under attack, either physically or economically, and that kind of threat should be handled in a serious manner and at the highest levels of government.  As citizens, we have a right and a responsibility to demand this of our elected officials.  Do not be lulled into thinking this is just about a silly movie and the bruised egos of the Hollywood elite.

Active Shooter Incident Response Planning

In light of the tragic shooting in Portland, Oregon today, I thought it appropriate to post some relevant content concerning active shooter scenarios.  I recently prepared much of this content as a mechanism to help others plan and prepare for such an incident.  Incident response planning is a tremendously important process for all organizations.  Planning is a proven first step to mitigating the impact of these tragic events.  Most of this content was collected from three sources:

Department of Homeland Security – Active Shooter Preparedness:

Federal Bureau of Investigation – Law Enforcement Bulletin – Addressing the Problem of the Active Shooter By Katherine W. Schweit, J.D.:

New York City Police Department – Active Shooter Recommendations and Analysis for Risk Mitigation:

The Department of Homeland Security (DHS) defines an active shooter as “an individual actively engaged in killing or attempting to kill people in a confined and populated area.” In its definition, DHS further notes that, “in most cases, active shooters use firearm(s) and there is no pattern or method to their selection of victims.” This is the scenario each business, organization and individual should consider and prepare for accordingly.

Relevant Statistics

The following are valuable statistics to consider when developing an active shooter response plan. These statistics are provided by the Federal Bureau of Investigation and the New York City Police Department:

  • The average active-shooter incident lasts 12 minutes. 37% last less than 5 minutes.
  • In 98% of incidents, the offender is a single shooter.
  • 97% of all offenders are male.
  • In 40% of incidents, the offender commits suicide.
  • The median age of an offender is 35 years old. However, this median conceals a more complicated distribution. The distribution of ages is bimodal, with an initial peak for shootings at schools by 15-19 year olds, and a second peak in non-school facilities by 35-44 year olds.
  • The average number of deaths in an incident is 3.1. The average number of wounded individuals is 3.9.
  • Active-shooter incidents often occur in small- and medium-sized communities where police departments are limited by budget constraints and small workforces.
  • 43% of the time, the attack is over before the police arrive at the scene.
  • When law enforcement arrives while the shooting is underway, the shooter often stops as soon as he/she hears or sees law enforcement.
  • 24% of all incidents occur in an open commercial space. 11% occur in an office building.

The FBI has provided a list of relevant points to consider regarding active shooters. These points are very important when developing training material for employees and, specifically, human resources personnel. Key considerations concerning active shooters include:

1) There is no one demographic profile of an active shooter.

2) Many active shooters display observable pre-attack behaviors, which, if recognized, can lead to the disruption of the planned attack.

3) The pathway to targeted violence typically involves an unresolved real or perceived grievance and an ideation of a violent resolution that eventually moves from thought to research, planning, and preparation.

4) A thorough threat assessment typically necessitates a holistic review of an individual of concern, including historical, clinical, and contextual factors.

5) Human bystanders generally represent the greatest opportunity for the detection and recognition of an active shooter prior to his or her attack.

6) Concerning active shooters, a person who makes a threat is rarely the same as the person who poses a threat.

7) Successful threat management of a person of concern often involves long-term caretaking and coordination between law enforcement, mental health care, and social services.

8) Exclusionary interventions (e.g., expulsion, termination) do not necessarily represent the end of threat-management efforts.

9) While not every active shooter can be identified and thwarted prior to attacking, many potential active shooters who appear to be on a trajectory toward violence can be stopped.

10) The FBI’s Behavioral Analysis Unit is available to assist state and local agencies in the assessment and management of threatening persons and communications.

Preparedness Recommendations

The following content has been sourced from the Department of Homeland Security and the New York City Police Department and is considered a best practices approach to preparing for an active shooter incident.

Develop Procedures:

  • Conduct a realistic security assessment to determine the facility’s vulnerability to an active shooter attack.
  • Identify multiple evacuation routes and practice evacuations under varying conditions; post evacuation routes in conspicuous locations throughout the facility; ensure that evacuation routes account for individuals with special needs and disabilities.
  • Designate shelter locations with thick walls, solid doors with locks, minimal interior windows, first-aid emergency kits, communication devices, and duress alarms.
  • Designate a point-of-contact with knowledge of the facility’s security procedures and floor plan to liaise with police and other emergency agencies in the event of an attack.
  • Incorporate an active shooter drill into the organization’s emergency preparedness procedures.
  • Limit access to blueprints, floor plans, and other documents containing sensitive security information, but make sure these documents are available to law enforcement responding to an incident.
  • Establish a central command station for building security.

Implement Systems:

  • Put in place credential-based access control systems that provide accurate attendance reporting, limit unauthorized entry, and do not impede emergency egress.
  • Put in place closed-circuit television systems that provide domain awareness of the entire facility and its perimeter; ensure that video feeds are viewable from a central command station.
  • Put in place communications infrastructure that allows for facility-wide, real-time messaging.
  • Put in place elevator systems that may be controlled or locked down from a central command station.

Train Employees and Building Occupants:

  • Evacuate if at all possible. Building occupants should evacuate the facility if safe to do so; evacuees should leave behind their belongings, visualize their entire escape route before beginning to move, and avoid using elevators or escalators.
  • If evacuation is not possible, then hide. Building occupants should hide in a secure area (preferably a designated shelter location), lock the door, blockade the door with heavy furniture, cover all windows, turn off all lights, silence any electronic devices, lie on the floor, and remain silent.
  • Take action as a last resort. If neither evacuating the facility nor seeking shelter is possible, building occupants should attempt to disrupt and/or incapacitate the active shooter by throwing objects, using aggressive force, and yelling.
  • Employees and building occupants should be trained to call 911 as soon as it is safe to do so.
  • Make sure employees and building occupants are trained on how to respond to law enforcement when they arrive on scene: follow all official instructions, remain calm, keep hands empty and visible at all times, and avoid making sudden or alarming movements.


The content of this document should provide the foundation for decision making when developing policies, plans and procedures to deal with an active shooter scenario. When questions arise during the development of these documents, do not hesitate to reach out to local, state, and federal law enforcement for guidance and clarification.

The Next Evolution of the Triad

A good friend and colleague Michael Burgess, CISSP, sent me the following message this morning:

“I’ve been doing some research and thought you may benefit from it (if you haven’t already run across it). Some have begun adding an addition to a well-known acronym and a core principle in information security.  I think it is picking up steam and with good reason.


Accountability, as in the process of tracing, or being able to trace, activities to a responsible source… I think it is a good addition given experiences and how often accountability is needed, or would have been helpful.”

I think Mr. Burgess and the growing movement to expand the traditional triad are spot on.  Accountability is an important principle in IT Security and is closely tied to the principles of data integrity, confidentiality and availability.  It speaks to the responsibilities of data stewards and data owners and the need for security analysts to capture activities and report on anomalous behavior.
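To make the principle concrete, here is a minimal sketch of accountability in practice (my own illustration, not from any standard): every security-relevant action is captured as a record that can be traced back to a responsible source.

```python
import datetime

# A toy in-memory audit log; a real system would write to tamper-evident,
# centralized storage that analysts can query and alert on.
AUDIT_LOG = []

def record_action(user, action, resource):
    """Append a traceable audit entry: who did what, to what, and when."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,          # the accountable party
        "action": action,      # what was done
        "resource": resource,  # what it was done to
    }
    AUDIT_LOG.append(entry)
    return entry

def actions_by(user):
    """Trace all recorded activity back to a given source."""
    return [e for e in AUDIT_LOG if e["user"] == user]
```

The point of the sketch is the traceability: given any entry, an analyst can answer "who did this?", which is exactly the capability the confidentiality-integrity-availability triad does not explicitly demand.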

Kudos to Michael for bringing this idea forward and continuing the conversation to make our profession stronger.


Does the POODLE Vulnerability really have Teeth?

Many of you have seen press coverage or the many online updates involving the POODLE vulnerability. After the fallout surrounding the HeartBleed vulnerability, websites and web application vendors are not taking any chances and have saturated mailboxes and web banners with alerts for their potential users. I sincerely appreciate this diligence, but it can lead to some confusion over the risks facing customers and application owners.

Let me start by saying there is a significant difference between HeartBleed and POODLE. HeartBleed is based on a flaw found in a version of OpenSSL that was extremely popular for web servers hosting some of the most frequented sites on the web including national banks and the world’s largest online retailer. HeartBleed affected millions of online customers and resulted in the loss of tens of thousands of hours in IT resources to validate and upgrade web servers around the world.

Pardon the pun, but POODLE is a completely different animal. POODLE is based on a flaw within SSLv3. SSLv3 is a security protocol dating back more than 18 years, and this particular vulnerability manipulates the padding added to a block-cipher-encrypted record when the plaintext is too short to fill the cipher’s block size. Based on its age, SSLv3 is rarely used on webpages today. It has been largely replaced by one of several versions of TLS (Transport Layer Security). Consider these facts:

  • SSLv3 was originally released in 1996. TLS 1.0 (Transport Layer Security) was released in 1999 as an upgrade to SSLv3. The latest released version of TLS is 1.2 which became available in August 2008.
  • SSLv3 only accounts for approximately 0.3% of all HTTPS Internet connections.
  • Of the Alexa Top Million Domains of the Internet, only 0.42% have some reliance on SSLv3, and that is typically tied to a subdomain.
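The padding weakness mentioned above comes down to how little of the padding SSLv3 actually verifies. A simplified sketch (illustrative only, not the real record-processing code) contrasts the two checks:

```python
def sslv3_padding_valid(block):
    # SSLv3 checks only the final byte, the padding length; the padding
    # bytes themselves may hold any value. POODLE exploits this by
    # substituting ciphertext blocks and observing whether the result
    # happens to decrypt to an acceptable length byte.
    pad_len = block[-1]
    return pad_len < len(block)

def tls_padding_valid(block):
    # TLS tightened the rule: every padding byte must equal the length
    # byte, which is why the attack first has to downgrade a connection
    # from TLS back to SSLv3.
    pad_len = block[-1]
    return pad_len < len(block) and all(b == pad_len for b in block[-(pad_len + 1):])
```

A block with arbitrary garbage in its padding bytes sails through the SSLv3 check but fails the TLS check, and that one-byte oracle is what the attacker leans on.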

Clearly, the threat footprint for POODLE pales in comparison to HeartBleed. That does not mean we should not take steps to alleviate the threat. Most of the websites that still leverage SSLv3 are moving away from it and toward TLS. Internet Explorer 6.0 is the only major browser still in production that does not support TLS 1.0 or higher, making it the last hurdle for those still forced to utilize it. In fact, most web browsers are moving to disable support for SSLv3.

  • Firefox Version 34, slated for release on November 25, will disable SSLv3 by default.
  • Microsoft has announced its plan to disable support for SSLv3 in Internet Explorer and all of its online services over the next few months.
  • Microsoft has also released a FixIt tool that allows users to disable SSLv3 support in any of the currently supported versions of Internet Explorer.
  • Google Chrome and Firefox both currently support SCSV (Signaling Cipher Suite Value) which is a TLS Fallback mechanism to prevent protocol downgrade attacks such as POODLE.
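Application developers don’t have to wait on browser vendors either; a client can simply refuse to negotiate SSLv3 itself. A sketch using Python’s standard ssl module (assuming a Python build that exposes the OP_NO_SSLv3 option; recent builds disable SSLv3 by default, but being explicit documents the intent):

```python
import ssl

# Build a client context that refuses to negotiate SSLv3, closing the
# protocol-downgrade path that POODLE depends on.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.options |= ssl.OP_NO_SSLv3

# PROTOCOL_TLS_CLIENT already enables hostname checking and certificate
# verification; restating them here makes the policy explicit.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# The context can then be handed to any stdlib client, for example:
# urllib.request.urlopen("https://example.com", context=context)
```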

As an IT Security professional, I am always thrilled to see the world at large take threats and vulnerabilities seriously. But I do become concerned when the media overreacts to a threat or begins to paint all vulnerabilities and incidents with the same broad brush. By doing so, we either become hyper-sensitive to every threat, large or small, or we become completely desensitized to all threats, leaving us more vulnerable to criminal activity. At the end of the day, I hope we can reach a balance where each incident is dealt with appropriately and given the weight it deserves.

The Pros and Cons of NFC and Apple Pay – is it safe to leave your wallet at home?

With the official launch of Apple Pay today, I wanted to take a moment and talk about the potential pros and cons of using an eWallet in general (Google Wallet or Square, for example) and Apple Pay specifically.

Let’s start with the fundamentals. Google Wallet, Square, Apple Pay and most other eWallet options rely on NFC to transmit and receive data. NFC stands for near field communication and is a proximity-based network technology built into many popular smartphones, including the Samsung Galaxy S5 and the iPhone 6. NFC operates in a range of approximately 3 – 10 cm and over a frequency of 13.56 MHz. The basic procedure for a physical purchase transaction using NFC via a smartphone goes something like this:

– Customer selects a payment option on his/her phone
– Customer holds the phone an inch or two from the retailer’s payment terminal
– A few beeps occur and like magic, the transaction goes through.

To the consumer, this type of payment option seems like a great deal. No cards were swiped. The phone remained in their possession. They didn’t even have to reach for their wallet. Unfortunately, from a fraud perspective, a few things are left to be desired.

Let’s start by discussing the pitfalls of traditional NFC eWallets like Google Wallet. Even though a physical card is not swiped during the transaction, Google Wallet and most others in this traditional category still transmit cardholder data in that brief NFC network session. By cardholder data, I am referring to those all-important 16 digits on your credit or debit card as well as some other personally identifiable information necessary to complete the transaction. This means that your card info is still vulnerable to the same types of malware within the retailer’s network that caused breaches like those reported at Target, Kmart and others.
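To see why a raw 16-digit PAN in a network session or in memory is so dangerous, consider how easily malware can pick card numbers out of a byte stream: a regular expression plus the public Luhn checksum is enough to separate real card numbers from random digit runs. A simplified sketch of that detection logic (illustrative only; real scrapers handle more formats and separators):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by all major card brands."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(data: str):
    """Scan a blob of text for 16-digit runs that pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{16}\b", data) if luhn_valid(m)]
```

Anywhere an unencrypted PAN appears, even for nanoseconds, logic this simple can harvest it; that is the core weakness tokenization is designed to remove.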

Another issue that arises in this scenario is that the same card information we are afraid to hand over to the retailer is now living on our smartphones. Without proper security (locked screens via passwords or biometrics, phone encryption, remote kill switches), a lost or stolen phone becomes as dangerous and critical as a lost or stolen wallet. At the end of the day, I believe that traditional NFC-based eWallets are not any worse than traditional card-based transactions, but they are also not significantly better.

Now let’s take a moment and explore some of the differences found in Apple’s eWallet solution, Apple Pay. Apple Pay is a tokenized solution meaning that a randomly generated token is passed via the NFC transmission in the place of actual cardholder data. This is possible through Apple’s partnerships with First Data and other banking institutions and merchants. This tokenization process eliminates some of the risk incurred by traversing a retailer’s network that might be infected with malware. The cardholder data is not present in the transmission for the malware to capture.
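The tokenization concept itself is simple enough to sketch. The following toy token vault is my own drastic simplification (the real First Data/card-network infrastructure involves issuers, network token services, and hardware security modules), but it shows the core idea: a random surrogate stands in for the PAN, so the merchant side never sees card data.

```python
import secrets

class TokenVault:
    """Toy token vault mapping real card numbers to random surrogates.

    Illustrative only: a real payment tokenization service lives behind
    the card network, not on the merchant's or customer's device.
    """

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # The merchant, its terminals, and its network only ever see
        # this random token; it is worthless outside the vault.
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault operator can map a token back to the PAN.
        return self._vault[token]
```

Malware sitting on the merchant’s network that captures the token has captured nothing it can reuse, which is precisely the risk reduction described above.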

Also, no cardholder data or transaction data is captured or stored by Apple. Nothing is retained on an Apple server. All card information is stored locally in the iPhone 6 on a separate, heavily encrypted memory partition called Secure Element. Apple doesn’t even know what you bought. Apple Watch will have its own Secure Element storage for implementations of Apple Pay via pairing with the iPhone 4s/5/5s.

Even though Secure Element is present and in use, important information is still located on the iPhone, which could still be lost or stolen. Apple Pay can support two-factor authentication for all financial transactions by requiring the use of Touch ID to trigger the payment process. Touch ID is the biometric reader integrated in the home button of all iPhone 5s/6/6 Plus devices. This means your finger would need to be present for a lost or stolen device to have any value for a purchase. Taking that one step further, the use of Apple Pay on lost or stolen iPhones can be remotely disabled using the “Find My iPhone” application.

Let me be clear by saying that all of these differences found in Apple’s implementation do not make it a perfect solution. Though no cardholder data is stored, Apple Pay does require a connection/association to an iTunes/AppleID account, and these accounts have been proven vulnerable over the past several months. Also, Touch ID, as a second factor of authentication, has been proven vulnerable to hacking since its release. Finally, because of its large user base, Apple in general and Apple Pay specifically will have a huge target on its back at launch. Criminals, white hat hackers, and black hat hackers alike will attempt to find and exploit its weaknesses.

No technical solution deployed for payments will ever be perfect. There are too many factors in play and as humans, we will make mistakes. That said, I do believe there is significant value in a tokenized financial transaction solution, specifically when a second factor of authentication can be brought to bear. I will personally be testing Apple Pay at my earliest convenience and I will report back on my success or failure.

Target is proof nothing really changes…

Originally Posted on January 31, 2014:

In the ever-changing world of complex IT security, nothing really changes. That seems like a paradoxical statement. There are always new, complex, brilliantly constructed attacks against computer systems. Almost every company and individual on the planet has a backlog of patches to be applied to their computers, from the operating systems to their browsers of choice to the applications that fuel our businesses, with most involving new compromises or vulnerabilities that have been discovered or exploited. Like I said, it is ever-changing and complex in the world of IT security. But like I said, nothing ever really changes. Let me explain.

Let’s start by deconstructing what we know about the Target breach. It is the most prevalent example on everyone’s mind at the moment. We have all heard the stories of the complexity of new memory-targeting point-of-sale customized malware that scrapes card numbers from memory in flight during the nanoseconds when the data is unprotected by strong encryption. We have also heard about creative exfiltration techniques involving compromised data center resources and network port manipulation. Questions are flying around the industry surrounding well-known network and computer management software and whether or not a previously unknown vulnerability was exploited to gain entry to the CDE (cardholder data environment). Yet, with all of this speculation – with all of the complexity of the attack being bandied about in the press, the truth of the matter is this attack began like almost every other one that preceded it. Someone was socially engineered.

When it comes to the specifics of the Target breach, the best money is on a spear phishing attack against a vendor of Target’s, which would have exposed privileged user credentials to get the party started for the perpetrators. So, with everything going on with this breach and all of the scary tech being discussed, it all really comes down to an email and a careless user. Nothing really changes.

So how should the IT security community be responding to this fact? Based on the number of cold calls and mass market emails in my inbox, software and appliance vendors have a solution to fix this problem. That solution ranges from better logging to stronger A/V to whitelisting to network monitoring to hardware-based encryption. Don’t get me wrong. These are all great technologies and have a place in a strong, multi-layered approach to IT security. But don’t lose sight of that initial email that started this ball rolling. Don’t let that email fall off your remediation plan. In fact, in my humble opinion, you need to move it to the top of the list.

End user awareness training is one of those areas in which most everyone sees value, but very few take seriously. It’s tough to plan and implement and it often falls well outside of the comfort zone of most IT professionals. It also often lacks those hardened, quantitative metrics to which IT personnel love to cling. At the end of the day, it involves working with people, and people are hard. Computers are easy. Wombat and PhishMe and others are starting to make progress in the tools/services space to address this problem, but they face an uphill battle. Those line items are often the first to get cut when security budgets get tight. “There is always next year” becomes the battle cry.

I think it is time to stand up and fight the good fight. We are a community of professionals willing to work together to make things better and therefore safer. We have to learn to focus on the right fruit to harvest because, unfortunately, it is not always going to be the low hanging variety. Send out some educational emails. Teach a seminar or lunch and learn. Set up a table in the cafeteria. Do whatever it takes. In a complex, ever-changing world of IT Security, we cannot let ourselves be defeated by a well-placed email.

Encryption – Safe and Secure or Bad Guy Beacon?

Originally Posted on August 3, 2013:

I readily admit that I am way off schedule when it comes to the timeliness of my blog posts, especially concerning Edward Snowden, the NSA and domestic spying. That being said, I still want to take a moment and discuss a couple of open issues remaining from my last post on this subject. Specifically, I want to talk about techniques and options being employed or discussed to avoid the prying eyes of the NSA or other government entities, and whether or not those techniques and options are viable or worthwhile.

Any discussion on how to hide or mask activity on the Internet begins and ends with encryption. People all over the country are clamoring to install and configure some form of encryption software or hardware to protect emails, web browsing history, file transfers, and any other form of communication over the wire. Sign-ups for and usage of VPN solutions like ProXPN have skyrocketed over the last few months, with growth patterns in the hundreds and thousands of percentage points. HTTPS site conversions are taking place at an astounding rate. Encrypted email solutions are flying off the virtual shelves as individuals start to fill their key rings and expand their circles of trust. Yet, with all of this activity, I think it is important that we start with a very important and yet somewhat basic question – Is encryption necessary or even worthwhile?

For those of you hoping to avoid the intrusions of the federal government or trying to “stay off the radar”, I believe the obvious answer is a resounding “no”. First of all, the federal government has openly admitted that their justification for capturing and analyzing Internet traffic takes encryption into account. Specifically, federal officials have admitted that encrypted Internet traffic, by its very nature, is considered suspicious and warrants investigation. So the use of encryption in almost any form places you squarely on the radar of those you are trying to avoid. Secondly, the federal government is one of the few entities on the planet with the resources to brute-force encryption keys through the use of large computer clusters and emerging quantum computing techniques. The federal government also has the ability to go after corporately controlled private keys via the courts to decrypt certain traffic streams. Given these resources, you may have no good option to protect your data even if you were willing to paint that large “encrypted” target on your back.

So does this mean that you should give up on encryption or other techniques to protect your data? The answer to that question is also a resounding “no”. Encryption has its place and its value, but it should be used in a specific, targeted manner. Consider where encryption does have inherent value – protection against most private entities, financial fraud, identity fraud, corporate espionage, etc. The following are a few of the areas where I believe encryption is vital:

• Personal financial transactions – use certified and verified sites that employ strong encryption for banking and other financial transactions.
• Corporate network access – use IPSEC VPN’s to access corporate network segments and resources, especially from public places and via wireless connections.
• Sensitive correspondence – use public/private key email solutions like PGP to protect sensitive emails and text messages, especially communications involving PII or PHI.
• Smartphones – deploy passcodes, or better yet passphrases, on your smartphone and encrypt the data stores when possible to protect against lost or stolen devices. Far too much personal information, including passwords and credentials, exists on our smartphones today.

These are just a few of the security controls I believe every person should consider deploying in their day to day lives, but they will not necessarily protect you from a motivated and financially powerful nation state. At this point, when it comes to the federal government and the NSA, I am not sure we have any great options to stem the tide of potentially pilfered domestic information. I believe the conversation itself is our best tool. Speak out. Talk to your Congressman. Write. Blog. Continue to help the general populace understand what is at stake. Only when the public is informed, educated and motivated will real change take place. Until then, good luck.

The Passing of Seth Vidal

Originally Posted on July 10, 2013:

I have lost a dear friend from my time at Emory & Henry College and the open source community has lost a brilliant Computer Scientist.  Seth Vidal was tragically killed in a hit-and-run accident riding his bicycle in the Durham, NC area.  I am sharing some articles that shed light on Seth’s life and his contributions to the world at large:


Also, this is a great YouTube video from PathLessPedaled in which Seth contributes to a bike tour of the Durham area:


PRISM, the NSA, and other Mysteries of the Universe

Originally Posted on July 1, 2013:

Several weeks ago, Edward Snowden “introduced” the world to the domestic spying techniques of the NSA and the now household term PRISM.  I hope that you read the word “introduced” with the appropriate sarcastic air quotes attached because I think all of us at some level or another understood and continue to understand that the US government, among other national entities, takes several liberties with our personal data.  At the end of the day, without some level of encryption or tactical diversion, everything we send out on a wire to the rest of the world via a computer is exposed and potentially archived for posterity.  That being said, the media crush surrounding Snowden and the NSA has started several interesting conversations worth exploring.

Let’s start with everyone’s new favorite word “PRISM”.  Over the past several weeks, Steve Gibson and other blogging, tweeting, and podcasting security experts have tackled this word and its potential meaning and have come up with a highly plausible explanation.  The general consensus is PRISM is not any form of acronym, but instead a real-world description of the activity taking place.  In the real world, prisms split light and display that light in its many forms.  Also in the real world, the vast majority of all Internet traffic travels through fiber optic cable across the backbones of the world’s Tier 1, 2, and 3 Internet providers, or ISPs.  Fiber optic networks literally carry light signals from point A to point B that can be converted at the router level back to that Internet traffic we all love and upon which we so heavily rely.  The NSA’s PRISM program is most probably splitting that fiber optic light at the ISP level, allowing them to collect, analyze, and store in near real-time all of the country’s Internet traffic flowing through that particular ISP at that particular time.

Many people, after reading this, would ask the question “But I thought Snowden said the NSA was getting information directly from Google or Facebook or Yahoo?”  Fair question, but think about the impact of a program like PRISM and how it would enable attempts to gather sensitive information from those companies.  As I stated, the NSA most probably uses PRISM to split and monitor Internet traffic at the ISP level.  In the world today there are only 12 – 15 major Tier 1 ISPs and the majority of those are located in the United States.  As far back as 2004, 2006, and 2007, stories came out on the Internet and in limited national media outlets about secret rooms controlled by the NSA at Tier 1 ISPs like AT&T WorldNet.  These are not just conspiracy theory web blog entries.  In at least one case, we have legal depositions from technical personnel at AT&T describing one of the rooms in question as it existed at their major Internet POP (point of presence) in San Francisco.  By targeting these ISP facilities immediately upstream of companies like Facebook or Google, the NSA can build a fairly accurate image of the type of traffic hitting the servers at these companies in question.  Even the encrypted traffic generated by these companies and the users utilizing their services creates patterns that can be interpreted and data mined.  One well-executed FISA warrant at the ISP level seems ever so much more effective than thousands of challenged warrants submitted to a Google or Facebook.

The next question you are asking, or at least should be asking, is “But these are FISA warrants…how can they be used against US citizens?”  That is a good and important question with a very simple, yet complicated, answer.  FISA stands for the Foreign Intelligence Surveillance Act, a law intended to empower the US government against terrorists and others who seek to do harm to our country.  No one really likes terrorists, and no one likes people harming our country, so that was not a difficult law to pass.  As I mentioned before, there are only a small number of Tier 1 ISPs in the world today, and the majority are in the United States, so if I am the NSA and I want to target terrorists using the Internet, those US-based ISPs are a great place to start.  See, I told you it was a simple answer.  Now, here’s the complicated part: all or most of the Internet traffic of US citizens is also carried by those same ISPs.

One interesting tidbit of information that arose in the Snowden story arc is the supposed fact that the NSA has a facility somewhere in Utah that houses 5 zettabytes of data on Internet traffic and behavior.  Now, I have been an IT professional for nearly 16 years, and I had to go look up what a zettabyte of data actually is.  Most of you are reading this article on a computer or smart device that stores data on media measured in gigabytes.  A decent smartphone stores between 32 and 64 gigabytes of data.  Most laptops have hard drives storing 500 to 750 gigabytes of data.  The next step up in storage terms is the terabyte.  A terabyte, in simplified terms, is approximately 1000 gigabytes.  Some of the newest computers on the market now come with hard drives measuring 1 or 2 terabytes.  Large corporate SANs (storage area networks) are measured in terabytes.  The next step up is the petabyte.  A petabyte is approximately 1000 terabytes.  Petabytes are ridiculously large storage units.  Large Internet caching engines and centralized backup facilities measure their storage in petabytes.  This is where my general knowledge and usage of storage terms ends.  I regularly use the terms gigabyte and terabyte, and even occasionally have a reason to throw out petabyte from time to time, but the next two terms are not part of my day-to-day vocabulary.  After petabyte comes the exabyte.  An exabyte is approximately 1000 petabytes.  After exabyte we finally arrive at the zettabyte.  A zettabyte is approximately 1000 exabytes.  As an aside, I find it quite funny that as I type this in MS Word on my MacBook Air, Word knows how to properly spell gigabyte and petabyte and exabyte, but has apparently never seen zettabyte.  Strangely, I feel a little better about myself.
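The ladder of units above is just successive factors of roughly 1000, which a few lines of Python can spell out as a quick sanity check (using decimal prefixes; binary prefixes would use 1024 instead):

```python
# Each storage unit in the ladder is roughly 1000x the one before it.
units = ["gigabyte", "terabyte", "petabyte", "exabyte", "zettabyte"]

gigabytes = 1
for unit in units:
    print(f"1 {unit} = {gigabytes:,} gigabytes")
    gigabytes *= 1000
```

Running this prints the whole ladder, ending with 1 zettabyte = 1,000,000,000,000 gigabytes.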

Let’s get back to the NSA and that supposed facility in Utah housing 5 zettabytes of data.  I need to help put zettabytes in perspective.  As I mentioned in the last paragraph, some of the latest computers to hit the market tout hard drives of 1 to 2 terabytes in size.  That’s a big computer.  It would hold roughly 1,000,000 to 2,000,000 of your nicest pictures and hundreds of movies.  A zettabyte is approximately 1 billion terabytes.  That’s a 1 followed by nine zeros.  The NSA therefore has roughly 5,000,000,000 terabytes of storage in Utah, housing Internet history from many major ISPs.  And the NSA not only houses this data but has also devised a way to successfully analyze and data mine all of it.  That is rather impressive.  Dwayne Melancon, CTO of Tripwire, posted on his blog that, if nothing else, this activity by the NSA can be seen as one of the first major successes of Big Data analytics, and I would have to agree.  What they have accomplished is the Holy Grail of data warehousing and forensics.  Unfortunately, because of the nature of the program, this technology will most likely never make its way into the private sector, at least not intentionally.
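The arithmetic above is easy to double-check.  A short sketch, again assuming decimal (power-of-ten) units:

```python
# Back-of-envelope check of the figures above, in decimal units.
TB = 10**12   # bytes in one terabyte
ZB = 10**21   # bytes in one zettabyte

print(ZB // TB)             # terabytes per zettabyte: 1,000,000,000
print(5 * ZB // TB)         # 5 zettabytes expressed in terabytes: 5,000,000,000

# At 2 terabytes per large consumer drive, that's 2.5 billion drives.
print(5 * ZB // (2 * TB))   # 2,500,000,000
```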

Now that we understand what a zettabyte is and how much data is involved, let’s go back to the complicated half of the question surrounding the NSA’s use of a FISA warrant.  The warrant is legitimate in that the NSA is targeting foreign intelligence and, from what we can tell, is succeeding in collecting and analyzing it.  Unfortunately, they are also collecting a tremendous amount of domestic intelligence, and we are forced to take their word for it when they say that data is not being used against US citizens.

Trust is at the heart of the Snowden story and is the reason so many people are so upset to learn our government has all of this data.  Do we trust our government not to use this data against its citizens?  Do we trust our government to protect this data?  Whom do we trust?  I tend not to get too upset about all of this because I learned a long time ago not to trust the Internet or anything I place on the wire.  I use the Internet with the mindset that I have nothing to lose because I try not to expose anything worth losing.

I have spent far too long in this post explaining and trying to understand the nature of PRISM, so I am going to pause here and come back in my next entry with a little more insight, including my take on how you can better protect yourself online and why it shouldn’t really matter.

QAT in 3 Steps – Answering the Questions that Matter…

Originally Posted on April 28, 2013:

One of my new job responsibilities in my current role at work is to help guide the development of a QAT group within our Systems Development Department.  QAT, for those of you not indoctrinated, is Quality Assurance Testing.  For our company, this is somewhat of a new concept.  In the past, developers, analysts, and project managers have all accepted, or been handed, the responsibility of testing and promoting new applications and features.  That testing was both technical and feature based as well as UAT (User Acceptance Testing) focused.  Those same developers and analysts, and in some cases project managers, have also tackled the long-term responsibility of supporting the application and its features.  That support, in many cases, did not have an expiration date, nor did it have real business hours.  Along with support also came documentation.  Team members often took their own notes or wrote their own support procedures, generally for their own use, so those documents were frequently biased toward a particular skill set or set of experiences in the lifecycle of the application.  After years of frustration and angst, and based on a desire for consistency and objectivity, we as a department made the conscious decision to dedicate resources to these functions.  Now, we have the daunting task of actually figuring out how to do it.

After a few long conversations with teammates and colleagues, I wrote on the whiteboard 3 questions that I believe are at the heart of this new QAT department.  I believe these questions are fundamental, specific, necessary, and ordered according to their importance to the process of Quality Assurance Testing:

1) How do we properly and consistently test applications, systems, and features?

2) How do we methodically and consistently document systems and support scenarios?

3) How do we consistently supply effective support from all team members?

Since I have already admitted to a necessary order of operations for these questions, let’s begin with question #1 – How do we properly and consistently test applications, systems, and features?  This has to be the first question you answer, for a couple of key reasons.  First, technical testing is both obvious and, unfortunately, often overlooked in the development of sound and effective systems and applications.  Bug testing and fixing by technically sound and objective teammates is essential to overcome the natural bias of the creative process.  Our development team, like most, is filled with extremely talented and committed developers and analysts, but even the best talent in the world cannot overcome the limitations of reviewing and critiquing one’s own work.  An objective set of eyes, trained to understand the development process, is the most effective way to identify potential issues and flaws in an application and correct them before the application or system goes into production.

The second key reason this question must be tackled first is the role and input of the user community.  It is not very hard in any project, especially larger and more complex ones, to lose sight of the needs and desires of the user community.  All projects should be driven by a specifically defined set of requirements, but even articulate and specific requirements can be interpreted in different ways.  Developers can, from time to time, find themselves writing and creating a product that meets, to the letter, the requirements they were given, while the spirit of those requirements is lost in the development process.  A well-defined QAT team should always have the needs and necessary functionality of the user population in mind, as well as a well-developed process to test the realities of that philosophy.  Consistent and effective UAT processes should be maintained and deployed by an objective group of resources.  If this is accomplished, at the end of the day, you know you have a system or application that will be accepted, used, and appreciated by your target audience.
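To make question #1 concrete, here is a minimal sketch of the kind of repeatable, objective check a QAT tester might write against a build.  The `calculate_invoice_total` function and its requirement (an 8% tax, rounded to the cent) are hypothetical, invented purely for illustration:

```python
# System under test (hypothetical): sum line items and apply 8% tax,
# rounded to the cent, per the written requirement.
def calculate_invoice_total(line_items, tax_rate=0.08):
    subtotal = sum(line_items)
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_matches_requirement():
    # The tester verifies the written requirement, not the developer's intent.
    assert calculate_invoice_total([10.00, 5.50]) == 16.74
    assert calculate_invoice_total([]) == 0.0  # edge case: empty invoice

test_invoice_total_matches_requirement()
print("all checks passed")
```

The point is that the test encodes the written requirement itself, so an objective tester can run the same check against every build rather than relying on the developer’s own interpretation of the spec.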

This brings us to the second of the three questions we must answer – How do we methodically and consistently document systems and support scenarios?  This question logically follows the first.  Once a functional and user-accepted system or application is developed and tested, all aspects of that product must be documented to ensure it can be properly maintained and deployed as intended.  In a perfect world, once an application is developed, tested, and found to be functional and complete, the development resource will be assigned to another project and move on.  Once they have engaged in another assignment, knowledge of the last assignment begins to slip away more quickly than any of us want to admit.  Hence the need to work with QAT to hand off that knowledge while it is fresh and accurate.  That handoff should be complete and consistent for every project.  QAT teams must develop processes to ensure they learn everything they need to know during documentation, so that long-term support and maintenance is possible without tying up development resources unnecessarily.

A second aspect of proper documentation is a consideration of the many forms in which it must be written and the many audiences it must address.  Good system or application documentation takes into account user populations, tier 1 helpdesk support, tier 2 QAT support, and tier 3 development support.  The documentation must take into account the skill set of each audience and the ways each audience may use it.  Users may simply want FAQ-type answers to everyday questions.  Helpdesk associates will want the same thing, augmented by more technical resources addressing common technical issues.  QAT will want significant technical details, including testing results and deep-dive functional parameters.  Finally, development resources will want exhaustive details, including development comments, coding methodologies, and other information to ensure they can support future rewrites or system integration work.  There is never a guarantee that the original developer will be available to perform future work on the system.

These various levels of documentation serve another purpose beyond meeting the needs of their defined audiences.  By providing consistent and accurate documentation at every level of system or application interaction, you can reduce the need to escalate support issues to higher-cost support personnel.  The more end users can solve their own problems, the fewer calls you generate for the helpdesk.  The better informed the helpdesk, the fewer escalations to QAT.  The better armed the QAT team, the fewer interruptions created in Development.  Teams can focus on what they do best without constantly supporting trivial issues.

This brings us to the final question in our trinity of QAT processes – How do we consistently supply effective support from all team members?  As we noted in our contemplation of questions #1 and #2, sound testing procedures and good documentation should reduce the overall number of support issues, especially those reaching beyond the helpdesk to QAT and Development.  That being said, no system is perfect, and a strong subset of support resources will be necessary in any successful QAT team.  There are several key points in any effective support structure, including:

  • Communication – Understanding the hows, whys, and how-oftens of support communication is critical.  The QAT team should define the various communication channels, how they will be utilized, and how frequently, in the course of any support issue.
  • Documentation – As we learned when considering system and application documentation, consistent and accurate information is critical in the support of any system.  Support ticket information should be maintained and stored centrally.  Problem resolution information should be recorded and shared with everyone involved.  This allows the team to learn from its mistakes and reduce the fix time for similar problems in the future, if not eliminate those problems altogether.
  • Cross-training – Not everyone can be an expert in every system or application, but everyone can be taught the basics and made aware of the power users and best technical resources.  Any time you can reduce the time it takes to find an answer, you free that time for other, more productive work.

I fully recognize that answering these three questions is not a simple task, and I also realize it is often difficult to keep the answering process in the right order.  Support issues arise before documentation is complete or distributed.  Documentation must often be started before testing is done.  I view the order of the question-answering process as both a best-practice scenario and a method to track system milestones.  Development work is not complete until testing is done and signed off.  Testing is not complete until all relevant information is gathered and documented.  Documentation is not complete until all users and support personnel are informed and trained.  And support…well, support is simply never complete until an application or system is decommissioned.  If we focus on and strive to meet these milestones, I think we will lighten our overall burden and function within an IT system lifecycle we can all live with.