Always Networks is now Serverless!

Fri 21 December 2018

Powered by AWS Cloud Computing

This blog has now been moved to AWS using entirely serverless technologies, meaning both reduced cost and better performance.

To do this, a number of technologies have been employed to automate the deployment. As there is really only one Always Networks site, much of this was more effort than strictly necessary - it would probably have been easier to spin some of this up using the web interface. In fact, for the first dev version of this site that's exactly what I did.

However, I opted to use CloudFormation and try and automate this stuff as much as possible - it's in the spirit of how I usually do things, and it was a very good learning experience. I've used CloudFormation before, but typically I've just edited and added to other people's scripts - this was my first opportunity to write my own from scratch.

So, why did I do this?

  • Reduced Costs
    Always Networks currently hosts sites for a number of our clients. We are not primarily a web hosting company, but it is a service we offer as an add-on. This web hosting is far more expensive than using S3.
  • Increased Resiliency
    The web hosting we offer at Always Networks is a single server. Putting this into AWS makes it much more resilient.
  • Decreased Overhead
    No more server patching and maintenance! There is no operating system any more, AWS take care of that. Using *aaS simplifies management and reduces overhead.
  • To Learn
    This took a lot of my time, and as it was not for a client it was unbillable. But I learned a lot doing it.

Below are some details of how I approached this and the technologies I used.

The Site

The original Always Networks site was built using WordPress. That is obviously not serverless, so I needed an alternative. After researching several options, I decided on Pelican, a static site generator. For the static "business" parts of the site I could just use HTML, but for the blog I needed something that could turn the dynamic content into static pages.

There were a number of reasons I ended up at this decision:

  • It is Python based - Python is my primary language of choice for scripting and where I am most comfortable.
  • It has a nice plugin architecture, making it easily extendable.
  • It is open source, currently actively maintained, and seems to have an active community - making it easier to find help using Google!
  • It supports themes, making it easy to customise my site.
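
As an illustration of how simple the content format is, a Pelican blog post is just a plain text file with a small metadata header. This example is hypothetical, not a real post from this site:

```markdown
Title: Moving to Serverless
Date: 2018-12-21
Category: AWS
Tags: aws, serverless

The post body is written in Markdown (or reStructuredText), and Pelican
renders it into static HTML using the site's theme templates.
```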


I opted to use the Bootstrap framework to build my site. I've used this before for a few client projects so I'm familiar with it. It provides a good baseline for CSS, and a responsive framework so that the site is mobile friendly too.

I also wrote my own Pelican theme. I was quite happy overall with the look of Always Networks when it was on Wordpress, and didn't want the site looking too different. By creating my own theme I could make the site look exactly how I wanted it - it was also a good opportunity to refresh my HTML and CSS skills.


CloudFormation

CloudFormation is used to deploy this site. Because of this, I can stand up a dev or test site in a matter of minutes, allowing me to develop and try out new features on the site.

The CloudFormation script takes four inputs:

  1. Domain Name
    This is the domain name of the zone hosted in Route 53. This is where the subdomain will be added.
  2. Full Domain Name
    This is the full domain name of the site itself.
  3. ACM Certificate ARN
    More on ACM below, but the certificate has to pre-exist (a wildcard certificate in this site's case).
  4. Pelican Config File
    I have a number of config files for Pelican - one for the production site, one for dev, etc. - all with slightly different parameters. This parameter tells CodeBuild which settings to use.
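
For illustration, the four inputs might be declared like this at the top of the template. This is a sketch - the parameter names and defaults are illustrative, not necessarily those of the live stack:

```yaml
Parameters:
  DomainName:
    Type: String
    Description: The Route 53 hosted zone the subdomain is added to
  FullDomainName:
    Type: String
    Description: The full domain name of the site
  AcmCertificateArn:
    Type: String
    Description: ARN of the pre-existing ACM certificate
  PelicanConfigFile:
    Type: String
    Default: pelicanconf.py
    Description: Which Pelican settings file CodeBuild should use
```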

Rather than logging into the console each time, I update the CloudFormation script using AWS CLI:

aws cloudformation update-stack --stack-name dev-site --template-body file://cloudformation.yaml --capabilities CAPABILITY_IAM


S3

This site is served from an S3 bucket. All of the static files are hosted there. It's really cheap for storage and data transfer, making it an economical way of serving up the files.
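
In CloudFormation, a static website bucket only needs a few lines. This is a sketch - the resource name matches the `${WebsiteBucket}` reference in the build job below, but the document names are illustrative:

```yaml
WebsiteBucket:
  Type: AWS::S3::Bucket
  Properties:
    WebsiteConfiguration:
      IndexDocument: index.html
      ErrorDocument: 404.html
```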


CloudFront

The S3 bucket is fronted by CloudFront, to provide a Content Delivery Network (CDN), ensuring fast page load times around the world. The CloudFront distribution has two origins - one is the S3 bucket for the main site, and the other is the API Gateway - more on that below.

When I update the website, I invalidate the CloudFront cache manually, by running this:

aws cloudfront create-invalidation --distribution-id xxxxxxxxxxx --paths '/*'
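
The same invalidation could be scripted with boto3 instead of the CLI. This is a sketch - the distribution ID is a placeholder, and the helper function is my own illustration, not part of the AWS SDK:

```python
import time


def invalidation_params(distribution_id, paths=("/*",)):
    """Build the arguments for CloudFront's create_invalidation call.

    CallerReference must be unique per request, so a timestamp is used here.
    """
    return {
        "DistributionId": distribution_id,
        "InvalidationBatch": {
            "Paths": {"Quantity": len(paths), "Items": list(paths)},
            "CallerReference": str(time.time()),
        },
    }


# With credentials configured, the call itself would be:
#   boto3.client("cloudfront").create_invalidation(**invalidation_params("EXXXXXXXXXXXXX"))
```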


CodeCommit

CodeCommit is a git repository service, and it is where I store the website source code. I also store it in GitLab, which holds the master copy. The CodeCommit repositories are specific to the environment - each instance of this site has its own. For this reason I treat these repositories as disposable and keep my master code in GitLab. This is managed using git remote add and pushing to specific remotes, e.g. git push dev.

The CodeCommit Repository has a trigger configured, which means that every time I push to it it calls a lambda function.
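
In the CloudFormation template, that trigger looks something like this. This is a sketch - the resource names and customData value are illustrative (the Lambda also needs a permission allowing CodeCommit to invoke it, omitted here for brevity):

```yaml
WebsiteCode:
  Type: AWS::CodeCommit::Repository
  Properties:
    RepositoryName: dev-site
    Triggers:
      - Name: BuildOnPush
        DestinationArn: !GetAtt DeploymentLambda.Arn
        CustomData: dev-site-build
        Events:
          - updateReference
```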

Deployment Lambda

This specific lambda function is the one that the CodeCommit trigger calls. It has the following code:

        ZipFile: |
          import json
          import boto3

          client = boto3.client('codebuild')

          def lambda_handler(event, context):
            print("Received event: " + json.dumps(event, indent=2))
            # The CodeBuild project name is passed in via the trigger's customData
            response = client.start_build(
              projectName=event['Records'][0]['customData']
            )
            return "Build triggered"

What this does is launch a CodeBuild project whose name is identified by "customData", which is passed in from the CodeCommit trigger.

This Lambda function is needed because CodeCommit cannot trigger CodeBuild directly. The "proper" way of doing this would be to use CodePipeline, but that has additional costs and I don't really need any of its functionality.
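
To show what the function receives, here is a trimmed sketch of a CodeCommit trigger event. Only the field the deployment function actually uses is shown, and the account ID and customData value are hypothetical:

```python
# Trimmed sketch of the event a CodeCommit trigger delivers to Lambda;
# the real payload carries more fields (commit references, repository name, etc.)
sample_event = {
    "Records": [
        {
            "eventSourceARN": "arn:aws:codecommit:eu-west-1:123456789012:dev-site",
            "customData": "dev-site-build",  # hypothetical CodeBuild project name
        }
    ]
}


def project_name(event):
    """Extract the CodeBuild project name set as customData on the trigger."""
    return event["Records"][0]["customData"]


print(project_name(sample_event))
```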


CodeBuild

This is the bit that I think is pretty cool. Pelican takes a bunch of templates and a bunch of pages and blog posts, and generates static HTML from them. Each time you publish a blog post, you need to run the Pelican commands again to regenerate the HTML, then upload that HTML to the S3 bucket. This means you need to be on a PC with a bunch of dependencies installed, and it is a fairly involved process.

The way I have things configured, a push to the CodeCommit repository now calls the deployment Lambda, which then calls a CodeBuild job. The CodeBuild job launches, installs the requirements it needs, runs Pelican to build the site, then syncs the output to the S3 bucket. This means that from a git push the entire site is redeployed! Here is the build job:

        Location: !GetAtt [WebsiteCode, CloneUrlHttp]
        Type: CODECOMMIT
        BuildSpec: !Sub |
          version: 0.1
          phases:
            install:
              commands:
                - echo Entered the install phase...
                - pip install -r requirements.txt
            build:
              commands:
                - echo Build started on `date`
                - pelican content -o output -s ${PelicanConfigFile}
                - aws s3 sync output s3://${WebsiteBucket}
          cache:
            paths:
              - '/root/.cache/pip/**/*'

AWS Certificate Manager (ACM)

Free SSL certificates! What more can I say.

Unfortunately I couldn't build this into the CloudFormation script, as it requires a manual verification step to verify domain ownership. That's why I just pass the ARN as a parameter into the CloudFormation script. I used a wildcard certificate, so I can deploy this site on any subdomain of the domain using the same certificate.


Lambda

There are a few Lambda functions I use for this site - mainly for the Tools section (so far, DNS and Whois, try them out!). I intend to add to this as and when I have the time. There is another for the contact form.

When the lambda functions are initially created in CloudFormation, they just have placeholder code - something like print("here's a function!"). I then update the code for each function individually. This is because I use zip packages for my lambda functions, to include the packages they are dependent upon.

I use a little script to update the functions. Each function is stored on my laptop (and in a private repo on GitLab) in a folder matching the function name in CloudFormation. The CloudFormation script deploys the lambda function as {stackname}-{functionname}. I use this script:

cd $1
zip -X -r ../$1.zip *
cd ..
aws lambda update-function-code --function-name "www-site-$1" --zip-file "fileb://$1.zip"
rm $1.zip

When I pass a folder name to this script (without the trailing /), it zips up the folder and updates the function code for www-site-{foldername}.

API Gateway

The longest section of the CloudFormation script is the API Gateway. There is a resource for every Lambda function, to make them usable by the website. The website makes AJAX calls to the API Gateway (via CloudFront, with caching disabled by setting the TTLs to 0).

One thing to note here - when you update the API Gateway and update the CloudFormation stack, the API Gateway is not redeployed automatically. I have another script I run manually to create a new API Gateway deployment:

aws apigateway create-deployment --rest-api-id xxxxxxxxxxxx --stage-name api

IAM and CloudWatch

Obviously supporting all of the above are a bunch of IAM roles and policies, and CloudWatch for logging.

The cost of running this is now pennies per month. The most expensive component by far is Route 53 - which is $0.50 per hosted zone plus some usage charges.

There is a noticeable performance improvement in page load times when visiting my site. If I'm noticing that here in the UK, when the previous site was UK hosted anyway, then I expect it will be even more noticeable for people visiting from other parts of the world.

In the future I want to further enhance the site by adding a comment system (until then, feel free to reach out to me on Twitter, links below), and adding more and more tools to the tools section.
