Author: dennis

  • Blocking High Bounce Rate Email Accounts

    Your email account with us may be affected by our improved anti-spam systems.  If it is, you need to know what to do about it to minimize the inconvenience.

    Snowshoe spamming has recently increased dramatically.  The practice gets its name because the load of sending, say, a million spam emails is spread across thousands of compromised computers, the way a snowshoe spreads weight across snow.  Each machine sends a relatively small number of emails, which makes detection harder, and the machines stay under the radar so their owners remain unaware.

    Our first line of defense is blocking the IP addresses of compromised machines.  If you get a message beginning “Blacklist Reject” when you try to send an email, it means the IP address your computer is using is on a list.  Either your machine has been compromised, or your local service provider has recently assigned you an address previously used by a machine which was.  What you need to do is check for a compromise or change your IP address.  A phone call to your service provider may be necessary.  Other than providing advice, we can’t help with this.
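
    If you would rather check than guess, blacklists of this kind can be queried over DNS.  Here is a minimal sketch in Python, assuming the Spamhaus ZEN list (the zone name and the 127.0.0.2 test address are DNSBL conventions, but treat this as an illustration, not our exact procedure):

      import socket

      def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
          """Query a DNS blacklist: reverse the IP's octets, prepend them to
          the list's zone, and look it up.  An answer means the IP is listed."""
          reversed_ip = ".".join(reversed(ip.split(".")))
          try:
              socket.gethostbyname(f"{reversed_ip}.{zone}")
              return True
          except socket.gaierror:
              return False

      print(is_listed("127.0.0.2"))  # a test address most DNSBLs list on purpose

    Note that some lists refuse queries arriving through large public DNS resolvers, so run a check like this from a machine using your provider’s own resolver.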

    The second line of defense is based on tracking the bounce rate of every email account.  A higher than normal bounce rate is the most reliable indicator of spamming.  We track bounces as a percentage of all sent emails, and if that percentage is too high there is about a 99% probability (not an exaggeration) of some sort of compromise.  Normal bounce rates are under 1%, if people are paying attention.  The block threshold we use is 3%.
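
    The arithmetic behind the check is simple.  A sketch in Python, using the thresholds quoted above (the counts are made up):

      sent, bounced = 2400, 31            # hypothetical counts for one account
      rate = 100.0 * bounced / sent       # bounce rate as a percentage of sent mail

      if rate >= 3.0:                     # our block threshold
          print(f"{rate:.1f}% - blocked pending review")
      elif rate >= 1.0:                   # above what attentive senders see
          print(f"{rate:.1f}% - worth investigating")
      else:
          print(f"{rate:.1f}% - normal")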

    You need to receive and read your bounce messages to avoid being blocked innocently.  If your email client (Outlook, Windows Mail, Thunderbird) sets the return address to one you do not check, or if you discard emails from “Mailer-Daemon”, you won’t have a clue when there is a problem.  If your account gets blocked, the bounce message will include a link to the mail server where you can remove the block.

    A recent security study found that 37% of computers with Internet connections are being operated by remote users.  That is, they are under the control of criminals.  These people often sell working user logins, passwords included.  With the new system, we have so far identified more than 5,000 computers which were logging in to our servers with valid passwords and sending spam.

    Please understand that it is essential to protect the integrity and reputation of our mail servers, even if that means occasionally causing inconvenience.  The purpose of the unblock webpage is to mitigate that inconvenience.  In a sense, you should be glad if you get inconvenienced in this way: it is an early warning that you have a serious problem.  A compromise can escalate into identity theft or worse.

    The good news is that we actively pay attention to keeping your email safe and secure.


  • DMARC

    The handwriting is on the wall.  Sending email using your own domain name is about to get more complicated, but also more reliable – if you take the right steps.  If you don’t, you will find that more and more of your email fails to get delivered, filtered out as spam.  The reason is DMARC.

    The acronym stands for “Domain-based Message Authentication, Reporting & Conformance”.  In practice, it means giving recipient email servers much more information about which mail legitimately comes from your domain.  You can get the details from the DMARC website, but basically it’s a much more reliable way of separating legitimate email from mail sent by scam operators.  It’s important to you because it is gaining traction with all the large email service providers.

    DMARC expands on two older email authentication techniques, SPF and DKIM.  SPF stands for “Sender Policy Framework”.  It gives recipient mail servers some clues about where your email should come from.  It enumerates the email servers which send your email and (among many other things) lets you specify what to do with email not from those servers.
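
    For illustration, an SPF policy is published as a DNS TXT record on your domain.  A minimal hypothetical example (the address and included domain are placeholders):

      example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 include:_spf.example.net -all"

    Here “mx” authorizes the domain’s own mail servers, “ip4” authorizes one specific address, “include” pulls in another provider’s list, and “-all” says to reject everything else (“~all” asks for a soft fail instead).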

    DKIM is the technique of signing outbound emails with a key value which the recipient server can independently verify as belonging to your domain.  Both have been in use for many years and are routinely considered when evaluating whether an email is spam or not.  Both suffer from the shortcomings of the way email works and is used.  They help with, but come nowhere close to solving, the spam problem.
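
    To sketch the DKIM side: the public half of the signing key is published in DNS under a “selector”, and each outbound message carries a signature header which the recipient verifies against it.  With a hypothetical selector and domain, and truncated values:

      mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AQAB"

      DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=mail;
          h=from:to:subject:date; bh=...; b=...

    The “d=” and “s=” fields tell the recipient where to find the key, “h=” lists the signed headers, and “bh=” and “b=” carry the body hash and the signature itself.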

    For example, if you were to put in place an SPF record which says that all email from you originates from a specific email server, about 1/3 of your email would bounce.  Roughly that much email is handled in one way or another by forwarders and there is no acceptable way to trace a specific email back to the source.  DKIM was invented to address the shortcomings of SPF, but has shortcomings of its own.   When you factor in forwarders, auto responders, list servers, catch-all email addresses, spammer tactics and counter measures, what you find is that the number of special cases is huge.

    Efforts to retrofit the system with standards and methods which solve the problems have generally met with resistance, low acceptance and sparse implementation.  People want their email to “just work”, in any way they can imagine, without having to understand anything about it and without having to deal with spam, and it had better be reliable and fast as well.  Accommodating complex and conflicting demands has created a complex and conflicting environment.

    Thousands of email servers are misconfigured, compounding the problem.  That includes mail servers at many large companies, government agencies, service providers and especially at universities.  Email was designed for an environment very different from what the Internet has become.  It’s reasonable to call the entire system, as it exists now, a mess.

    What is different about DMARC is that many large service providers are finally willing to step on some toes.  The threat from phishing scams, large networks of compromised computers, espionage and criminal enterprises has become too great to ignore.  Among the service providers to implement and enforce DMARC policies are PayPal, Yahoo, AOL, Google, Microsoft, Hotmail, Comcast, Facebook and Twitter.  Some 80,000 domains are protected with DMARC policies.  Enforcement has meant breaking certain kinds of email use.  For example, you can no longer set the From address to [someaddress]@aol.com on an email which will be sent from a non-AOL mail server.  It will bounce when sent anywhere which considers DMARC.  Although somewhat apologetic about it, AOL is now enforcing DMARC policies.  AOL is just one of many, and this example is just one of many things DMARC will change.

    We are often asked what can be done to prevent spammers from hijacking email addresses.  It’s a good question, because anti-spam measures are turning more and more to reputation-based metrics.  Spammers routinely forge From, Reply-To and return addresses; we mask how common this is by refusing returns of emails not sent by our servers.  Because there are so many uninformed email users and mail server operators, these forgeries can and do damage reputations.  DMARC nearly eliminates this problem.

    DMARC is a golden opportunity for reputation-based spam filtering.  Its presence allows immediate and unequivocal rejection of a lot of spam.  Since its presence on a particular domain helps establish “not spam”, what is the effect of its absence?  In our spam filtering process the vast majority of spam is easily identified, but that still leaves a huge amount for evaluation.  As DMARC becomes more widely used, its absence becomes a clearer indication that any given email is from an unreliable source and is spam.

    The bottom line is that deploying DMARC on your domain is something you need to get done.  And as time passes it will get more important.
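
    For reference, a DMARC policy is just one more DNS TXT record, published at _dmarc under your domain.  A hypothetical starting point (the report address is a placeholder):

      _dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

    The “p=” tag is the policy for mail that fails authentication (none, quarantine or reject) and “rua=” is where aggregate reports are sent.  A common approach is to start with “p=none” to collect reports, then tighten the policy once you know all your legitimate sources are covered.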

    We are available to answer questions and help you get this done.

  • Heartbleed Bug Details

    We are responsible for more than 30,000 user logins on computers which had been susceptible to the Heartbleed bug.  As you may imagine, that means we have been watching very carefully for signs of trouble.  So far, we have found no evidence of any compromise.  That does not mean complacency is acceptable.

    Media reports have in some cases flatly stated that this bug allows an attacker to compromise any server with the bug, that is, to gain access to the server at the administrator level.  This is almost total nonsense.  As you will see from what follows, the problem is entirely limited to data flows.  It’s true that if an administrator happened to log in while an attacker was running an exploit, there is a slight chance that his login name and password were captured.  On our systems I can say unequivocally that this never happened.  We use a two-factor encryption system.  The second factor was exposed to the bug, but the first was never in jeopardy.

    As an aside, this bug has created a much needed uproar in the security community.  I checked the online banking system I use.  I found that while it was never susceptible to the Heartbleed bug, it was susceptible to man-in-the-middle attacks.  That is nearly as serious!  I suggest you check any secure sites you need to use, and complain if they fail.  Here is a way to check:

    Website Security Test

    Just put the subject website name in the box.

    Very few of our users have followed our advice and changed their passwords.  If you have not yet changed yours, please do so.  It’s true that media reports have been overblown and the probability that your account has been affected is low.  Nonetheless, this is NOT something to ignore.

    We have more information now about this bug, how it works, what it would take to compromise a server and what it would take to compromise any individual account.  It’s not nearly as bad as first reported. 

    It’s a buffer over-read vulnerability.  When a program starts up, it allocates memory to hold what it needs to do its work.  In the case of OpenSSL (the software with the bug), as it services requests it allocates and frees memory to temporarily hold the information flowing in and out.  When a request has completed, the pointer to this memory is discarded.  The problem is that whatever was in that area is still there.  The bug allows an attacker to request and get what was left over from the last request which used that memory area.

    Memory is allocated on what’s called the heap, from the bottom up.  When OpenSSL starts up, the first thing it does is load public and private keys into the heap.  This means the private keys are in the lowest part of memory.  Following is a diagram of subsequent memory allocations:

    [Diagram not reproduced: request buffers allocated above the key material, and the heartbeat packet with its “length” field in the first few bytes.]

    The client machine is allowed to check with the server periodically to see if the encrypted connection is still intact.  This is called a heartbeat.  The client sends the server a test packet and the server sends it back as verification.  The bug is that the server fails to sanity-check the packet size provided in the first few bytes of the request, the “length” field in the diagram.  It returns a memory area as large as whatever length the client claimed.  If that is bigger than what was actually sent, material left over from earlier requests is sent back too.  If that happens to contain something sensitive, we have a serious problem.
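
    To make the mechanism concrete, here is a minimal simulation in Python.  This is not OpenSSL’s actual code, just the shape of the mistake: a reused buffer, and a reply sized by the client’s claimed length instead of the bytes actually received.

      buffer = bytearray(64)                 # one reused request buffer ("the heap")

      def heartbeat(payload: bytes, claimed_len: int) -> bytes:
          buffer[:len(payload)] = payload    # copy in only what was actually sent
          # BUG: no check that claimed_len <= len(payload).
          # The fix is simply: claimed_len = min(claimed_len, len(payload))
          return bytes(buffer[:claimed_len])

      heartbeat(b"user=joe pass=hunter2", 21)   # an earlier, honest request
      leak = heartbeat(b"hi", 40)               # dishonest: sent 2 bytes, claimed 40
      print(leak)                               # b'hi' plus leftovers of the old request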

    Initially it was reported that the attacker could gain access to the keys to the kingdom, the server private encryption keys.  Security experts, with an abundance of caution, are unwilling to flatly state that it is impossible for an attacker to get these keys, but that appears to be the case.  The keys are located in the lowest portion of memory which is never freed and reused.  Using an assortment of scenarios, literally hundreds of millions of attempts have been made to get the keys.  None has yet succeeded.

    UPDATE: Although the process is difficult, it has now been proven that it was possible to obtain the server keys.

    Again, the likelihood that your passwords have been compromised by this particular bug is very low.  Our servers were updated and new security certificates issued within 3 hours of the announcement. 

    However, recent security studies have looked at the source of the botnet (robot network) problem.  A botnet is a group of compromised computers being controlled remotely by someone other than the owner.  They are used in concert or individually to attack other computers.  The studies have provided good evidence that about 37% of computers connected to the Internet with broadband connections are participating in botnet activity.  This means there is roughly a 1 in 3 chance that your computer is one of them.  If it is, you can bet your passwords are out there.  If you have been complacent about security, you really need to check for this.

  • The Need for Speed

    When a Facebook post gets several million likes and is backed by a WordPress post on one of our servers, we handle some serious traffic.  Maintaining the fast response we strive for can be a challenge.

    We continually search for ways to increase server performance for our busiest sites.  For the sake of reliability and stability, most of our servers run CentOS 6, but CentOS has become a source of frustration.  It’s obsolete.  CentOS 7 (based on Red Hat Enterprise Linux 7) is in the works and it will be a big relief when it finally appears.  When I read that RHEL 7 will be based on Fedora 19, I decided it was time to try Fedora as a server.  Fedora lists “First” as one of its core values and tries to implement the most recent innovations with releases twice per year.

    Our latest server is running Fedora 20.  I can hardly believe the performance gain.  To test, we moved 5 very busy sites onto this server.  A load average is the average number of processes running or waiting to run, sampled over the last 1, 5 and 15 minutes.  It correlates directly with web site performance, the speed perceived by a site visitor.  At quiet times on the previous server I was seeing loads of 1 and 2.  At the same times on this server, the load is 0.0, sometimes spiking up to 0.2.  At busy times I was seeing loads of 2 and 3, sometimes spiking to 5 and 7.  On this server it’s 0.2 and 0.3 with spikes up to 0.5.  On top of that, spikes disappear much more rapidly.  Where I had been seeing spikes fade away in 5 or 10 seconds, now I am seeing them fade in 2 or 3 seconds.  That’s huge.  It means not only that site visitors are getting page loads 10 times (or more) faster, but that the fastest performance is being seen by 5 times as many visitors.
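
    If you want to watch these numbers on your own machine, the three figures reported by the uptime command are the 1-, 5- and 15-minute load averages, and Python exposes the same values.  A quick sketch (Unix-like systems only):

      import os

      # The same three numbers `uptime` prints: 1-, 5- and 15-minute load averages.
      one, five, fifteen = os.getloadavg()
      print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")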

    The gains came from a long list of improvements.  Before getting into what they are, a disclaimer.  To make this understandable to people who aren’t techno nerds, I’m oversimplifying a bit. 

    Where loads are coming from can get really complicated with so many things going on at the same time in a server.  It matters what those things are, because they dynamically interact with each other.  For example, under certain conditions we see loads more than double when the visitor hit rate doubles; sometimes the growth is quadratic.  Under other conditions the load increases by less than double, closer to logarithmic.  It depends on the server software, the site application software and on the interaction between them.  Having said that, here’s the list:

    • The Linux kernel v3.13 – recent optimizations have a substantial effect.
    • The network stack and drivers have been improved.
    • Web server software: Apache 2.4.9 or Nginx 1.5.13 – Nginx is a bit faster, but when certain features are essential, Apache is the better choice.  Apache 2.4 outperforms 2.2.
    • XFS file system – this handles disk reads and writes.  Under load it’s twice as fast as EXT4, and disk operations are the most common performance bottleneck.
    • MariaDB (replacing MySQL 5.5) – this version is said to be 2 to 3 times faster in the most common use cases.
    • PHP 5.5.10 – by being less tolerant of errors and changing some semantics, it often yields performance gains of 20 to 40%.
    • Zend Opcache – saves the step of compiling scripts into executable code by holding the compiled code in memory.  It also saves a disk read.
    • PHP-FPM – keeps the PHP binary up and running so that the web server process can pass it script names to execute and get back the results (see the sketch after this list).
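
    To illustrate the last two items, here is a minimal Nginx configuration sketch for handing PHP scripts to a resident PHP-FPM pool.  The paths and socket name are assumptions; adjust them for your own layout.

      # Inside a server block: pass .php requests to a PHP-FPM pool listening
      # on a local Unix socket, instead of starting a PHP process per request.
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass unix:/var/run/php-fpm/www.sock;
      }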

    We are now offering virtual private servers set up this way.  We are also offering the opportunity to run sites in a shared VPS set up this way.   You can find it in our ordering system.  This is a strong value for sites too busy to run on a cPanel shared hosting server.

    I was asked, “Where’s Varnish?”  Varnish is a page cache which resides in memory.  It saves a disk access when a static file (an html page, an image, etc.) has been read recently.  This is useful on proxy servers with a server farm behind them.  In other situations it’s counterproductive.  Modern operating systems try to take full advantage of available memory.  All memory not in immediate use is allocated to disk cache buffers.  When a program needs more memory, it is taken from the least recently used (LRU) memory pages.  In other words, the operating system is already doing what Varnish does, and much more efficiently in terms of the whole system.  Adding Varnish is much more likely to increase disk reads than reduce them.

    You will find many pages on the Internet recommending Varnish incorrectly.  Something like 80% of the information and recommendations about getting the most out of your web site or server is dead wrong, too incomplete to be useful, or inapplicable to the most common environments.  People like to write about the wonderful new things they’ve learned, but often fail to realize they don’t know enough about the bigger picture.  Be careful what you believe.

  • The Heartbleed Bug

    Today, a serious and pervasive threat to security on the Internet was revealed: the so-called Heartbleed Bug. In my opinion, everyone who regularly uses a password on a “secure” Internet connection should have at least a rudimentary grasp of the problem. A web site has been set up to describe it in detail: Heartbleed.com.

    The short version of the problem is that an encryption vulnerability was found.  Under certain circumstances, a third party can decrypt your session with a secured web site or impersonate a secured web site.  First the attacker must obtain the encryption keys from a secure site and this is what the bug allows him to do.  Once the attacker has the keys, if he can get access to what is flowing back and forth between you and the site, he can read it.  That includes passwords, credit card information, all of it.

    News reports have given the impression that with stolen keys, an attacker can walk right into a server and get whatever they want.  This is wrong.  If a server administrator logged in remotely while his session was being read, the attacker could then log in with the same credentials.  This is quite different and not at all likely.  Most servers have constraints on where administrators can be when they log in.  It would set off alarms.

    The likelihood that information has already been stolen from you is low.  Normally we don’t see security bugs exploited until they are well known.  This problem was first discovered last week and was announced publicly today.   The delay was to allow time to get fixes in place.  Our servers have been updated and certificates replaced.  We are no longer vulnerable to this threat.

    What is IMPORTANT is that any secure sites you interact with have been updated.  If they do not post a notice, you should ask before logging in.  I just attempted to find out if my bank was aware of the problem.  I was unable to get an answer.  Hopefully the people who manage the web site have taken care of it, but the only safe assumption is that they have not.  I’m not going to use web banking until I can get an answer and you shouldn’t either.

  • NSO Transparency Report

    On January 27th, the United States Department of Justice announced new rules regarding the disclosure of National Security Orders.  This included National Security Letters (NSLs) received by a company.   The DOJ and Director of National Intelligence (DNI) now allow a company to disclose the number of letters and orders it has received as a single number in bands of 250.  The first band would be 0-249.  It continues to be illegal to make this disclosure as an exact number.

    At Deerfield Hosting, we believe that disclosing the exact number of orders a company has received poses no threat to national security.

    Many people assume that tech companies receive and comply with large numbers of such orders daily.  However, the real numbers are probably quite small.  For example, Apple has just reported a band of 0-249.

    In keeping with the new rules, our report is this:

    • National Security Orders Received: 0 – 249
    • Total Accounts Affected: 0 – 249

    We believe that the new rules are a step in the right direction, but do not go far enough.  We also believe that orders of these types are a violation of due process and are unconstitutional.  It is our policy to refuse to comply with such orders unless issued by a court and not merely by the DOJ.

    Moreover, if you wish to know of any orders we have received which pertain to any accounts you may have with us, simply ask us.  We are quite prepared to break the law and give you an answer.