Wednesday 23 August 2017

Entropy & Passwords


I've been putting off writing this for a while, mostly because I hoped it would be obvious to people in the security world. However, a few things happened lately and I thought I would finally bite the bullet and type this up. Most of this comes directly from the NIST Electronic Authentication Guideline, which is well worth a read.



Claude Shannon - By DobriZheglov (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

In 1948 a gentleman called Claude Shannon wrote a paper titled "A Mathematical Theory of Communication", which gave birth to a new field of mathematics called Information Theory. It has had a profound impact on how we define and use information.

Within Prediction and Entropy of Printed English, Shannon states that:
 The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy  is the average number of binary digits required per letter of the original language.
In thermodynamics entropy is described as:
a measure of disorder in the universe, or of the availability of the energy in a system to do work.
And similarly, within Information Theory, Shannon defined entropy as the uncertainty of symbols chosen from a given set of symbols, based on a priori probabilities, i.e. the probability that a given variable (X) takes a particular value.

This type of entropy is used to define how difficult it is to guess a particular password. The higher the entropy, the harder it is to guess what the password is.

Why does this matter? Because we're currently in an arms race when it comes to passwords, and customers are definitely suffering because of it. In a battle where entropy is the winning factor, humans are always going to lose. Humans aren't designed to be random; we're geared to see patterns. We're really good at patterns, so good in fact that we often see patterns which aren't there.

Like most things which become a problem, it all started swimmingly... Back in the 80s and 90s, cracking password hashes was a time-consuming business. Many hours I spent tuning my word lists and running those passwd and shadow files through Alec Muffett's Crack (I am sure that can be taken in many different ways, so let's just keep moving on). If you had good password security it was possible to be secure without being too onerous on the user. Today things have definitely changed: with GPUs and hashcat, billions of hashes can be computed in seconds.

What does this mean for users? Password hell. We keep pushing for passwords with higher entropy. It's got to the point where we now have applications dedicated to generating passwords for users and keeping track of them, because for passwords to be secure they really need to be completely random. By the way, I'm not saying you shouldn't use these, you definitely should, but we still need a password to protect the other passwords so...

So what's the solution? 

So before we go further, let's do some tests, and before we can do that we need to define how we're calculating entropy. If we have a set of b possible values (e.g. the letters of the alphabet) and a password of length l, then the number of possible passwords is b^l. For example, using the alphabet and a length of 6 characters we get 26^6 = 308915776. Entropy, however, is normally expressed in bits, so taking log2(308915776) gives us roughly 28. To abstract that out into a formula we have:

H = log2(b^l) = l * log2(b)
Where b is the cardinality of the set which the value can be chosen from.
Where l is the length of the password
Where H is the entropy in bits of the password.
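
To make that concrete, here's a minimal sketch of the calculation in Python (the function name is my own):

```python
import math

def entropy_bits(cardinality: int, length: int) -> float:
    """H = log2(b^l) = l * log2(b): the entropy in bits of a password of
    `length` characters drawn uniformly at random from a set of
    `cardinality` symbols."""
    return length * math.log2(cardinality)

# The worked example above: 6 characters drawn from the 26-letter alphabet.
print(entropy_bits(26, 6))  # ~28.2 bits (26^6 = 308,915,776 possibilities)
```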

There are 95 printable ASCII characters, so let's look at how the entropy grows as password length increases:
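
A quick loop over those 95 characters (using the formula above) gives a feel for the numbers:

```python
import math

PRINTABLE_ASCII = 95  # size of the printable ASCII character set

for length in range(4, 25, 4):
    bits = length * math.log2(PRINTABLE_ASCII)
    print(f"{length:2d} characters -> {bits:6.1f} bits")

# 4 -> ~26.3 bits, 8 -> ~52.6 bits, 16 -> ~105.1 bits, 24 -> ~157.7 bits
```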


This really shouldn't be surprising, right? As the length increases uniformly, the entropy increases along with it. This describes the entropy when there is an equal chance of selecting any member of the set, but humans don't select their characters randomly. Typically passwords are selected based on the language the person speaks; if she speaks English then in most cases she will pick passwords based on English words.

As Shannon pointed out, the English language has a set of properties which significantly reduce its entropy: capital letters tend to be used at the beginning of a word rather than in the middle of it, and certain pairings of letters are far more likely than others (i before e except after c, u next to q, etc.). These conventions and properties of a language reduce its entropy. This is compounded by the fact that when users are forced by password policy, they tend to substitute i's for 1's, s's with $'s etc., rather than sprinkling these throughout the password.
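
To see why those predictable substitutions buy so little, here's a rough sketch (the substitution map is illustrative, not exhaustive): applying every combination of the classic swaps to a dictionary word only multiplies an attacker's search space by a small constant.

```python
from itertools import product

# A few of the classic swaps that attackers already know to try.
SUBS = {"a": ["a", "@"], "e": ["e", "3"], "i": ["i", "1"],
        "o": ["o", "0"], "s": ["s", "$"]}

def variants(word: str):
    """Yield every leet-speak variant of `word` under SUBS."""
    choices = [SUBS.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

print(len(list(variants("password"))))  # 16 variants -- only 4 extra bits
```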

All this means that the entropy of real-life passwords isn't anywhere near as strong as a random selection. The excellent NIST Electronic Authentication Guideline has devised a "scoring" system to help determine a more realistic entropy based on human behavior. It's defined as follows (I've sketched the scoring in code after the list):
• the entropy of the first character is taken to be 4 bits; 
• the entropy of the next 7 characters are 2 bits per character; this is roughly consistent with Shannon’s estimate that “when statistical effects extending over not more than 8 letters are considered the entropy is roughly 2.3 bits per character;” 
• for the 9th through the 20th character the entropy is taken to be 1.5 bits per character; 
• for characters 21 and above the entropy is taken to be 1 bit per character; 
• A “bonus” of 6 bits of entropy is assigned for a composition rule that requires both upper case and non-alphabetic characters. This forces the use of these characters, but in many cases these characters will occur only at the beginning or the end of the password, and it reduces the total search space somewhat, so the benefit is probably modest and nearly independent of the length of the password; 
• A bonus of up to 6 bits of entropy is added for an extensive dictionary check. If the attacker knows the dictionary, he can avoid testing those passwords, and will in any event, be able to guess much of the dictionary, which will, however, be the most likely selected passwords in the absence of a dictionary rule. 
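Out of interest, here's that scoring scheme sketched in code. This is a simplified reading of the rules above; in particular the dictionary-check bonus is applied as a flat 6 bits rather than the "up to 6 bits" the guideline describes.

```python
def nist_entropy(length: int, composition_rule: bool = False,
                 dictionary_check: bool = False) -> float:
    """Rough NIST SP 800-63 Appendix A style estimate for a user-chosen password."""
    bits = 0.0
    for position in range(1, length + 1):
        if position == 1:
            bits += 4.0      # first character
        elif position <= 8:
            bits += 2.0      # characters 2 through 8
        elif position <= 20:
            bits += 1.5      # characters 9 through 20
        else:
            bits += 1.0      # characters 21 and above
    if composition_rule:
        bits += 6.0          # upper case and non-alphabetic characters required
    if dictionary_check:
        bits += 6.0          # simplification: flat bonus for the dictionary check
    return bits

print(nist_entropy(8))                              # 18.0 bits
print(nist_entropy(8, composition_rule=True,
                   dictionary_check=True))          # 30.0 bits
```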
Using that scheme, they determined the following results:

The results are interesting: while enforcing complexity and testing passwords against dictionaries of known common passwords does indeed increase the entropy for shorter passwords, as the size of the password increases the dominating factor once again becomes the length of the password.

So, in a rather convoluted manner, we're back to the original proposition: what's the best practice when it comes to passwords?

Well, the solution really is bigger passwords. The problem is that when people think passwords, they think exactly that: a pass word, of "speak 'friend' and enter" fame. But this reduces the scope for entropy significantly, because 80% of words fall between 2 and 7 letters long.


Looking at the bell curve, after about 14 characters the extra entropy from dictionary and complexity rules starts to become redundant, but that only leaves around 2% of English words we can use. On the bright side we could start looking at Germanic words, which fare a lot better, but it's probably not going to be a great user experience trying to enforce that.

So how do we get users to use more secure passwords without making them memorize gibberish? It's really simple: we start the long process of changing how people think of passwords and get them to start using passphrases instead.

What's easier to remember? "Plato told us Thales fell down the well.", which at 40 characters is well above our sweet spot, or "rb97P)eE4.Y3"? They both have similar entropy, but one is far easier to remember than the other.
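
As a rough back-of-the-envelope check (using the formulas from earlier; the exact figures depend on which model you prefer), the 12 random printable characters come out near 79 bits, while the 40-character passphrase scores around 62 bits under the NIST human-selection scheme: the same ballpark, but only one of them is memorable.

```python
import math

# 12 truly random characters drawn from the 95 printable ASCII characters
random_password_bits = 12 * math.log2(95)              # ~78.8 bits

# A 40-character passphrase scored with the NIST scheme above:
# 4 + 7*2 + 12*1.5 + 20*1, plus the 6-bit composition bonus
passphrase_bits = 4 + 7 * 2 + 12 * 1.5 + 20 * 1 + 6    # 62.0 bits

print(round(random_password_bits, 1), passphrase_bits)
```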

This is a very long-winded way of saying something very obvious, but the next time you're building something that asks for a password, maybe give a passphrase as the example rather than a password. It's going to take a while to move people away from thinking in terms of single words; for a long time software has conditioned people to think of passwords that way.

It's common even today to see systems which restrict the use of spaces or put 12-20 character limits on passwords, and it's a tremendous disservice to the industry to do this. There really isn't any need for it anyway: these passwords are going to be hashed, so it's not like the hashes are going to get any bigger.

This was written at the request of a friend; I would just like to apologize to them for how long it's taken me to actually get around to writing it. Hopefully you'll forgive the delay =)


Thursday 3 August 2017

Trust. No. One. - Zero Trust in Networking.

It's funny how quickly things change. Especially so when it comes to technology and the internet. In some ways security reflects this world of constant change. In fact, within certain aspects of security these types of changes can happen daily. Exploits, vulnerabilities and patches can happen on extremely condensed time frames. 
We trust certain sets of programs and services. Hackers constantly attack those services, so the realms of secure and non-secure intersect and switch constantly. As Alex Stamos discussed in his keynote at Black Hat this year, this is the sexy part of security, and it really is sexy. It's a game of chess, where security researchers and hackers test their skills. The days of easy exploits are mostly behind us, and the exploits that arise these days are staggering in their ingenuity. Understanding how these are made is fascinating and well worth exploring.
I am not going to be discussing the sexy world of 0day. Today I would like to talk about something unsexy but something which needs to be addressed, and that is networking. The assertion I'm going to make is that most networking models we use are simply hopelessly out of date; how we use the internet has changed, but our network models haven't.
Let's look at conventional wisdom when it comes to networking design. If we look at the diagram above, we can see the castle-like design of networks. Each circle represents a layer within the network which has been segmented from the other layers. 
This model is based on physical security, where you have a DMZ followed by areas of increasing security until somewhere in the middle there is the keep, which represents the most secure layer. This type of design makes sense as it gives you a fall-back plan: in order to get into the most secure area you must first infiltrate all the other areas. 
The problem is that networks really don't work like that anymore, and realistically never did. Typical companies have 3 layers: a DMZ, a middle layer where all the users and internal servers are kept, and finally a PCI segment. But these segments aren't so much staunch walls built for a robust defense; they tend to have gaping holes in them: VPNs and special firewall rules to allow traffic from different machines, satellite offices which need access, public VPNs to allow users to access the internal servers while working away from the office. Restricted segments tend not to have their own VPNs and normally have firewall rules which allow select machines within the internal network to access them. And within the last few years, SaaS services and cloud deployments increasingly host mission-critical infrastructure and services. 
Networks aren't stable; they grow and evolve. Firewall rules get added but don't always get removed, and they're typically run by ops rather than the security teams, so there is a nebulous arrangement of duties around keeping these perimeters secure. Couple that with infrastructure which has been in place for many years and significant configuration drift could have taken place. This is compounded if companies have been through multiple mergers etc. 
The nature of attacks has changed too; the old approach of attacking the DMZ directly to try and gain access to the network is now very rare. Hackers start by attacking the users themselves with spear phishing attacks and the like, using the law of large numbers to find an entrance. Once they have compromised a machine within the internal network, they leverage the trust of that network to scan for other machines they can attack and spread to. As they sink deeper into the internal networks they find more machines which have access to other segments of the network, until eventually all areas have been breached. 
These types of hacks are common and represent the shortcomings of this type of network architecture. Once a hacker has pierced one of these segments he has gained that level of trust and access to all the machines within that segment. So how do we combat this? And how can we combat it in a manner which doesn't mean a massive overhaul of networking infrastructure?
The answer appears to be simple but brings with it many interesting implications: trust no one. Stop caring about where a machine is within the network, and assume every packet of communication is malicious until proven otherwise.
Sounds interesting, doesn't it? While this has been discussed before, it's really Google's excellent BeyondCorp which has started to bring it into the mainstream, to the point where companies are starting to be built around it. With the continued adoption of the cloud and SaaS, zero-trust networking is becoming more and more compelling. 
So how does this work? There is no single standard way of implementing a Zero Trust network, but we can go over one idea of how such a network could be implemented in theory, loosely based on the BeyondCorp paper (a rough code sketch of the trust-server step follows the list below). 
The basic premise is built on 2 key assertions:
  1. Verification of the User or Process attempting to connect
  2. Verification of the Machine the User or Process is using for the attempt.
In practice, the flow looks something like this:
  • Each service machine generates a certificate which is signed by a trust server.
  • Each service generates a certificate which is signed by the trust server.
  • Each client machine generates a certificate which is signed by a trust server. 
  • When a user enters the network he generates a certificate which is signed by a trust server.
  • When a connection to a service is attempted, the agent on the machine presents the public keys of both the user and the machine to the trust server.
  • The trust server verifies that the user is allowed access to the service and that the keys check out; a short-term certificate is created and signed (typically valid for 10-30 minutes). The public key is encrypted with the machine's public key and the user's public key and sent back to the agent.
  • The machine then makes a connection attempt to the service using the short-term certificate. The service reaches out to the trust server with the certificate; if the certificate is valid, the trust server sends back the key encrypted with the public keys of the service and the machine it's running on.
  • Then the connection is established.
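As a very rough illustration of the trust-server step, here's a toy sketch in Python. This is not how BeyondCorp implements it: the class, names and ACL shape are invented for the example, and an HMAC-signed, short-lived grant stands in for the short-term certificate exchange described above.

```python
import hashlib
import hmac
import json
import time

class TrustServer:
    """Toy trust server: verifies a user/machine pair and mints a
    short-lived, signed access grant for a single service."""

    def __init__(self, secret: bytes, acl: dict):
        self._secret = secret   # signing key known only to the trust server
        self._acl = acl         # service -> set of users allowed to reach it

    def issue_grant(self, user: str, machine: str, service: str,
                    ttl_seconds: int = 600) -> str:
        # Both identities must be presented, and the user must be allowed
        # to reach this particular service.
        if user not in self._acl.get(service, set()):
            raise PermissionError(f"{user} may not access {service}")
        # Mint a short-lived grant (10 minutes by default) bound to the
        # user, the machine and the service it was requested for.
        claims = {"user": user, "machine": machine, "service": service,
                  "expires": int(time.time()) + ttl_seconds}
        payload = json.dumps(claims, sort_keys=True)
        signature = hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + signature

    def verify_grant(self, grant: str, service: str) -> bool:
        # The service hands the grant back to the trust server for validation.
        payload, _, signature = grant.rpartition(".")
        expected = hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return False
        claims = json.loads(payload)
        return claims["service"] == service and claims["expires"] > time.time()

# Usage: the agent asks for a grant, the service checks it before answering.
ts = TrustServer(secret=b"rotate-me", acl={"payroll": {"alice"}})
grant = ts.issue_grant(user="alice", machine="laptop-42", service="payroll")
print(ts.verify_grant(grant, "payroll"))  # True
```

In a real deployment the grant would be a certificate bound to the user's and machine's key pairs, as in the list above, rather than a token signed with a shared secret.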
Let's look at what this approach gives us within certain scenarios.
  1. The user's credentials are compromised. Compromised credentials alone are not enough to give access to any service; you need both the machine's certificate and the user's to access anything. When the compromise is discovered, simply revoking the user's certificate is enough to stop all network access to all services within the network. 
  2. The machine has been compromised. Access to the machine's certificate would not be enough either; you also need a compromised user certificate. When the compromise is discovered, simply revoking the machine's certificate is enough to isolate it from all services.
  3. The certificate for the connection is compromised. These connection certificates are short-lived and can only be used once, which means you would need to compromise one and use it within a very short time frame. This reduces the risk significantly.
  4. The service has been compromised. If there is a flaw in the service software and the attacker somehow obtains both the client certificate and the machine certificate, this would allow them access to that machine, but with only the service machine's certificate they cannot continue to spread within the network.
  5. With this paradigm, the network connection is no longer tied to the networking layer but to the security layer, meaning you can restrict which machines and users have access to a particular service. If a user isn't allowed to use a service, simply refuse to generate the certificate. 
Zero-Trust Networking gives a great deal of flexibility and a much needed rethink of how we manage security at the networking layer. While firewalls will always have their place, they're still static entities with hard-to-maintain sets of rules, and the need for them to be physical boxes comes from a time when managing that amount of data needed dedicated hardware.  
But machines are now more than capable of handling the networking traffic they're going to be generating, and with this approach we're effectively putting a dynamic firewall around every endpoint and service, all the while bringing networking connections more in line with security models based on a user and their roles.
Networking is on the edge of change with software-defined networking and Zero Trust; it's going to be an interesting time. Maybe networking isn't so unsexy after all.