Magento Auto Scaling with Varnish on Amazon AWS

Stefan Wieczorek — December 31, 2013

In this blog post, we at MGT-Commerce want to explain how to auto scale Magento together with Varnish Cache on the Amazon AWS Cloud. Many of our customers run heavy marketing activities such as newsletters, Groupon deals, or TV campaigns.

These campaigns are expensive, and they become even more expensive if you can't sell your products because of an overloaded server.

A good marketing campaign means much more traffic than on a normal day.

It's not uncommon to see hundreds or even thousands of customers within the first 10-15 minutes.

You have heard about instant scalability as one of the many benefits of the AWS Cloud but haven't made the leap to an auto scaling deployment yet.

What are you waiting for?

What is Auto Scaling?

Auto Scaling is a web service from Amazon that enables you to automatically launch or terminate servers based on user-defined policies, health status checks, and schedules.
With Auto Scaling you can ensure that the number of servers you are using increases seamlessly during demand spikes to maintain performance and decreases automatically during demand lulls to minimize costs.

Let us look at an example of how Auto Scaling works. Suppose you have a Magento store that runs on a single server. The single server performs well when you have regular traffic. However, occasionally the traffic to your Magento store increases up to five times the normal load.
When that happens, you need additional server resources to handle the traffic; otherwise your store goes down.

What happens in this example if you use Auto Scaling?
First, you define the conditions that indicate increasing traffic to your Magento store and tell Auto Scaling to launch more servers whenever those conditions are met. Second, you define another set of conditions that indicate decreasing traffic and tell Auto Scaling to terminate a server when those conditions are met.

Architecture Requirements

Flexible Hosting Platform

Amazon AWS operates a global infrastructure that is secure, reliable, scalable, and flexible.
With Amazon AWS, we have access to the same reliable, secure technology platform used to power Amazon.com’s global web properties.

Multi Server Environment

For Auto Scaling Magento we need a multi server environment so that we can add and remove servers behind the load balancer without any downtime.
It's not possible to scale a single server automatically. A single server can only scale vertically.
To add more resources (CPU, RAM) you need to stop and restart the server, which, depending on the load, can take up to several minutes.

Multi Server Architecture Diagram

[Diagram: AWS multi-server environment]

Share Nothing?

For Auto Scaling we need a "Share Nothing" architecture, which means the web servers themselves hold no local state: the Varnish cache, the database, the network file system (NFS), and the session and cache storage are all centralized and shared between the web servers.
With this architecture we are able to add and remove web servers behind the load balancer at any time.

Problems to solve

1) Sync of the latest source code

Since we launch new instances from an image (AMI), we need to sync the Magento source code before we can put the instance into service.
We have developed a small shell script which rsyncs the source files from the admin server before we start nginx.
Another solution could be to put the source code into an S3 bucket and download it right after the instance starts.
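
A minimal sketch of such a boot-time sync script is shown below. The admin server hostname, the document root, and the excluded directories are placeholders, not values from our actual setup.

    #!/bin/bash
    # Boot-time sync sketch: pull the latest Magento code from the admin
    # server before the web server starts. Hostname and paths are placeholders.
    set -e

    ADMIN_HOST="admin.internal.example.com"   # hypothetical admin server
    DOCROOT="/var/www/magento"                # hypothetical Magento docroot

    # Pull the latest source code; skip per-instance cache and session files.
    rsync -az --delete \
      --exclude 'var/cache/' \
      --exclude 'var/session/' \
      "${ADMIN_HOST}:${DOCROOT}/" "${DOCROOT}/"

    # Only start serving traffic once the code base is in place.
    service nginx start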

2) Varnish Cache Auto Scaling

To get Varnish working in a multi server environment you need to define all backends (instances) in the config file.
With Auto Scaling you don't know in advance which instances, or how many of them, are running.
Because of that you need an automatic way to write the config file based on the instances behind the load balancer.
We thought the easiest solution would be to define the load balancer (ELB) as the backend.
Unfortunately this does not work because Varnish needs a static IP address as a backend and the ELB does not have one; the load balancer has several private IPs and scales automatically itself.
We solved the problem with a PHP script which updates the configuration file with all active instances from the load balancer. For this we use the AWS SDK for PHP.
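
Our script uses the AWS SDK for PHP, but to illustrate the idea, here is a rough shell sketch of the same steps using the AWS CLI. The load balancer name, the include file path, the backend port, and the Varnish 3 director syntax are assumptions about the setup.

    #!/bin/bash
    # Illustration only: rebuild the Varnish backend list from the instances
    # currently in service behind the ELB (names and paths are placeholders).
    set -e

    ELB_NAME="magento-web"                    # hypothetical load balancer name
    VCL_BACKENDS="/etc/varnish/backends.vcl"  # hypothetical include file

    # 1) Ask the load balancer which instances are currently in service.
    INSTANCE_IDS=$(aws elb describe-instance-health \
      --load-balancer-name "$ELB_NAME" \
      --query 'InstanceStates[?State==`InService`].InstanceId' \
      --output text)

    # 2) Resolve their private IPs.
    PRIVATE_IPS=$(aws ec2 describe-instances --instance-ids $INSTANCE_IDS \
      --query 'Reservations[].Instances[].PrivateIpAddress' --output text)

    # 3) Rewrite the backend definitions and a round-robin director (Varnish 3 VCL).
    {
      i=0
      for ip in $PRIVATE_IPS; do
        i=$((i + 1))
        echo "backend web${i} { .host = \"${ip}\"; .port = \"80\"; }"
      done
      echo "director web_director round-robin {"
      for n in $(seq 1 $i); do
        echo "  { .backend = web${n}; }"
      done
      echo "}"
    } > "$VCL_BACKENDS"

    # 4) Load the new VCL without restarting Varnish (assumes the init
    #    script supports a reload).
    service varnish reload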

3) Auto Scaling Code Deployment

With Auto Scaling you may have 1, 2, 3, or even 100 active instances running.
The problem is that you need the private IP of each instance to deploy a new version.
The easiest way to get all instances is to use the Amazon API.
We launch all instances with an environment tag to identify them later.
With the help of the environment tag we can simply filter the instances for the deployment.
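
A minimal sketch of this lookup with the AWS CLI is shown below; the tag key and value, the deployment user, and the document root are placeholders for illustration.

    #!/bin/bash
    # Deployment sketch: find all running instances with a given environment tag
    # and push the new release to each of them (tag and paths are placeholders).
    set -e

    PRIVATE_IPS=$(aws ec2 describe-instances \
      --filters "Name=tag:Environment,Values=production" \
                "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].PrivateIpAddress' \
      --output text)

    # Push the new release to every active web server.
    for ip in $PRIVATE_IPS; do
      rsync -az --delete /var/www/magento/ "deploy@${ip}:/var/www/magento/"
    done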

Let’s create an Auto Scaling Group

Before we can start creating an Auto Scaling Group, you need to log in to your AWS account.
At the bottom left you will find the menu "Auto Scaling".

To create an Auto Scaling Group, we first need to create a template, called a launch configuration, that the group will use when it launches instances for you.

Step 1: Create launch configuration

[Screenshot: Create launch configuration]

First, define a template that your Auto Scaling group will use to launch instances.
You can change your group’s launch configuration at any time.

First you need to select the AMI which you want to use for starting new instances.
After that you select the instance type before you enter a name for the launch configuration.
In the third step you select the security group(s), which contain the firewall rules that control traffic to your instances.
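
For reference, the same launch configuration can also be created from the command line with the AWS CLI; the name, AMI ID, instance type, key pair, and security group below are placeholders rather than values from our setup.

    # Create the launch configuration via the AWS CLI (all values are placeholders).
    aws autoscaling create-launch-configuration \
      --launch-configuration-name magento-web-lc \
      --image-id ami-12345678 \
      --instance-type c3.large \
      --key-name my-keypair \
      --security-groups sg-12345678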

Step 2: Create Auto Scaling Group

[Screenshot: Create Auto Scaling Group]

Under Auto Scaling -> Launch Configurations you have a list of all created launch configurations.
Select one to create an Auto Scaling Group.

In the form below you enter the group name, the group size and the Availability Zone(s) where AWS will start new instances.

[Screenshot: Auto Scaling Group form]
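
The equivalent AWS CLI call looks roughly like this; the group name, sizes, Availability Zones, and load balancer name are placeholders.

    # Create the Auto Scaling Group via the AWS CLI (all values are placeholders).
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name magento-web-asg \
      --launch-configuration-name magento-web-lc \
      --min-size 2 \
      --max-size 10 \
      --desired-capacity 2 \
      --availability-zones eu-west-1a eu-west-1b \
      --load-balancer-names magento-web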

Step 3: Create Scaling Policies

A scaling policy is a set of instructions for adjusting the size of your group in response to an Amazon CloudWatch alarm that you assign to it.
In each policy, you can choose to add or remove a specific number of instances or a percentage of the existing group size.
When the alarm triggers, it will execute the policy and adjust the size of your group accordingly.

In the example below, we have configured 2 policies and 2 alarms which trigger these policies.

Scale up Policy:

The first policy is the scale-up policy, which automatically adds 2 instances when our Auto Scaling Group reaches the alarm threshold of CPUUtilization >= 60% for 60 seconds.

Scale down Policy:

The second policy is the scale-down policy, which removes one instance when the average CPUUtilization of the Auto Scaling Group is at or below 25%.

[Screenshot: Auto Scaling policies]
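
As an illustration, the two policies and their CloudWatch alarms could be created with the AWS CLI roughly as follows. The thresholds match the example above; the group, policy, and alarm names, the cooldowns, and the 15-minute scale-down window (see the lessons learned below) are placeholders or assumptions.

    # Scale-up policy: add 2 instances at once (group/policy names are placeholders).
    UP_ARN=$(aws autoscaling put-scaling-policy \
      --auto-scaling-group-name magento-web-asg \
      --policy-name scale-up \
      --adjustment-type ChangeInCapacity \
      --scaling-adjustment 2 \
      --cooldown 300 \
      --query PolicyARN --output text)

    # Alarm: average CPUUtilization >= 60% for 60 seconds triggers the scale-up policy.
    aws cloudwatch put-metric-alarm \
      --alarm-name magento-web-cpu-high \
      --namespace AWS/EC2 \
      --metric-name CPUUtilization \
      --dimensions Name=AutoScalingGroupName,Value=magento-web-asg \
      --statistic Average \
      --comparison-operator GreaterThanOrEqualToThreshold \
      --threshold 60 \
      --period 60 \
      --evaluation-periods 1 \
      --alarm-actions "$UP_ARN"

    # Scale-down policy: remove one instance at a time.
    DOWN_ARN=$(aws autoscaling put-scaling-policy \
      --auto-scaling-group-name magento-web-asg \
      --policy-name scale-down \
      --adjustment-type ChangeInCapacity \
      --scaling-adjustment=-1 \
      --cooldown 300 \
      --query PolicyARN --output text)

    # Alarm: average CPUUtilization <= 25% over 15 minutes triggers the scale-down policy.
    aws cloudwatch put-metric-alarm \
      --alarm-name magento-web-cpu-low \
      --namespace AWS/EC2 \
      --metric-name CPUUtilization \
      --dimensions Name=AutoScalingGroupName,Value=magento-web-asg \
      --statistic Average \
      --comparison-operator LessThanOrEqualToThreshold \
      --threshold 25 \
      --period 300 \
      --evaluation-periods 3 \
      --alarm-actions "$DOWN_ARN"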

Lessons Learned

Scale up early

We learned to scale up early, at an average of >= 55% CPU Utilization over a period of 60-120 seconds, because it takes up to 2-3 minutes until the new instances are in service.
Booting the instance takes about 1 minute. After that we sync the source code from the admin server before we start the web server (nginx).
Afterwards the load balancer can start with its health checks. Depending on your health check configuration it can take up to 30 seconds to get the instance into service.
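
As a rough example, an ELB health check that brings an instance into service after about 30 seconds (three successful checks at a 10-second interval) could be configured like this; the load balancer name and the check URL are assumptions.

    # Health check timing determines how quickly a new instance goes into service:
    # 3 successful checks x 10-second interval = roughly 30 seconds.
    aws elb configure-health-check \
      --load-balancer-name magento-web \
      --health-check Target=HTTP:80/health_check.php,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3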

Scale down slowly

The time required for a metric to stay below the scale-down threshold should be longer than the time required to scale up.
We scale down one instance at a time when the average CPU Utilization is at or below 25% over a period of 15 minutes.

An emerging e-commerce business needs an auto scaling cloud deployment and the best Magento hosting there is.

MGT-Commerce is your ideal partner for managed hosting on AWS.

Contact us now, and we will assist you in finding the best solutions for your business.
