Git Branching

Here’s a basic example of branching from the command line, for reference. This sequence of commands:

  1. switches the user to the intended “source branch”,
  2. creates a new local branch (and checks it out at the same time), and
  3. pushes the new branch to the origin.

In this project, we’re using version numbers as the branch names.
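
A minimal sketch of those three steps (the branch names here are placeholders; in this project the new branch would be a version number like 1.2.0):

git checkout main         # 1. switch to the intended source branch
git checkout -b 1.2.0     # 2. create the new branch and check it out
git push -u origin 1.2.0  # 3. push the new branch to the origin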

Upload to S3 with expiring link

Upload a file to S3, and create a temporary link for it.

#!/bin/bash
# Upload a file to S3 and print a temporary (presigned) URL for it.
# Usage: <local-file> <s3-destination> [expiration-seconds]
if [ -z "$2" ]; then
    echo "Need at least two arguments:  local source and s3 destination (e.g. file.txt s3://my-bucket/dir)"
    echo "3rd (optional) argument is expiration in seconds."
    exit 1
fi

# Default the expiration to one hour if not supplied.
exp=${3:-3600}

aws s3 cp "$1" "$2/$1"
aws s3 presign "$2/$1" --expires-in "$exp"

echo "expires in $exp seconds"
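
For example, sharing file.txt for two hours (the script name s3-temp-link.sh is hypothetical):

./s3-temp-link.sh file.txt s3://my-bucket/dir 7200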

WordPress in AWS – the Real World

This article outlines one way to wire up WordPress AT SCALE. In the real world, there can be separate and distinct development, staging and production environments. So how should we move information between them, and how should this system work?

Here we take the AWS CodeDeploy/WordPress example to the next level.  Standard disclaimer – there are many approaches to this, so adjust to suit your needs. There is room for improvement, so feel free to comment!

In our scenario we have three environments:

  • Development environments – The dev team handles theme development as well as configuration management, so these environments live on the developers’ local machines.
  • Staging – Any functional changes are tested here before being “elevated” to Production. The development team manages the push of code to this environment.
  • Production – Content authors make changes directly to a Production instance. Once functional changes are tested in Staging, the Production Manager can deploy them here. Content approval may be implemented by means of a plugin if desired.

Our overarching challenge is that WordPress stores its content in two ways: In its database and on disk. We need to handle deployment of the WordPress PHP code, themes and any plugins in use. We also want the ability to see all content in any environment.

Overall Architecture


The site runs on multiple EC2 instances which are situated behind an Application Load Balancer (ALB). CloudFront provides caching services, and a web application firewall (WAF) is attached. As is typical in these scenarios, DNS is set up so that mysite.com and www.mysite.com point to CloudFront, and origin.mysite.com points to the ALB.

Source Control

This is an important place to start. We have the ENTIRE WordPress code base checked into git, using AWS CodeCommit. You might also consider using GitHub due to its unique CodeDeploy integration features.

This means that the dev team is responsible for updating WordPress and plugins.  They are updated in a dev environment, checked into the code repository, and deployed to production.  In addition, activation of plugins must be scripted, for when new instances come online (see The WordPress Command Line, below).

AWS CodeDeploy

The code in our source control is deployed using AWS CodeDeploy. This allows us to deploy any changes into a somewhat vanilla AMI. I say “somewhat” because an instance running CodeDeploy requires two things: An agent and a role with specific permissions.

We’ve set up a CodeDeploy Application and a Deployment Group for each environment. In this case we’ll name them my-wordpress, dg-staging and dg-production.

Store configuration files in S3

Our deployment process contains some code that downloads the appropriate config file, depending on the environment. CodeDeploy makes the deployment group name available for our use.

#copy the appropriate config file from s3
if [ "$DEPLOYMENT_GROUP_NAME" == "dg-staging" ]; then
    CONFIG_LOCATION=s3://my-bucket/my-wordpress/staging
elif [ "$DEPLOYMENT_GROUP_NAME" == "dg-production" ]; then
    CONFIG_LOCATION=s3://my-bucket/my-wordpress/production
else
    echo "Unrecognized deployment group name: $DEPLOYMENT_GROUP_NAME.  Bailing out."
    exit 1
fi
aws s3 cp "$CONFIG_LOCATION/wp-config.php" /var/www/html/

Make sure the role used by your servers has read permission on the S3 bucket.
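
As a sketch, that permission could be granted like this (the role and policy names here are placeholders):

aws iam put-role-policy \
    --role-name my-wordpress-instance-role \
    --policy-name s3-config-read \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/my-wordpress/*"
        }]
    }'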

The WordPress Command Line

Any WordPress plugins or scripts must be added or activated using the wp-cli, because we must consider new instances which are being added (or replacing terminated instances). The wp-cli is installed using CodeDeploy’s BeforeInstall hook, and enables us to make changes to WordPress using the AfterInstall hook.
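
A minimal sketch of that BeforeInstall step, using the standard wp-cli phar install:

#!/bin/bash
# BeforeInstall hook: install wp-cli if it isn't already on the instance.
if ! command -v wp >/dev/null 2>&1; then
    curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    chmod +x wp-cli.phar
    mv wp-cli.phar /usr/local/bin/wp
fi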

Offload WordPress media files

In a single-instance install, WordPress stores uploaded files locally, but we want to be able to see all authored content in all environments! In order to avoid having to move content between instances we use a plugin which transparently uses S3 to store media files. Here’s what that install looks like in the deployment script:

#Additional libraries for WP Offload S3 Lite
wp plugin install amazon-web-services --activate --allow-root
#WP Offload S3 Lite
wp plugin install amazon-s3-and-cloudfront --activate --allow-root

Database backup and movement

AWS simplifies the task of creating a high availability database by making RDS available to us. We use separate databases for each environment. We’d like to implement two additional features:

  • Some sort of periodic database backup. Although RDS is unlikely to fail, backups also let us roll the database back to a point in time if necessary.
  • A way to “push” the production database to staging or the development environments. If content managers are working through the Production server, we’ll have to do this to keep the environments in sync (a sketch follows this list).
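
Here is one way the push might look, assuming MySQL-compatible RDS instances (all hostnames and database names below are placeholders):

# Dump the production database and load it into staging.
mysqldump -h prod-db.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p wordpress > wp-prod.sql
mysql -h staging-db.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p wordpress < wp-prod.sql
# Then rewrite the site URLs for the target environment, e.g. with wp-cli
# (run from the WordPress root):
wp search-replace 'https://www.mysite.com' 'https://staging.mysite.com' --allow-root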

Protecting the Origin from public access using AWS WAF

In our scenario, we needed a way to make the origin server inaccessible to the general public, while allowing content editors direct access to it. A Web Application Firewall (AWS WAF) is attached to the ALB. The WAF is set up so that ONE OF THE FOLLOWING rules must be satisfied for entry. All other rules are denied:

  1. For editors: There is a rule to allow internal company IPs. This can be modified as needed.
  2. For CloudFront: A verified-origin rule identifies these requests, based on a header that accompanies all requests. Below is an example of using curl to add the header, simulating a CloudFront request.
  3. Any other special exceptions.

The second of these rules can be tested as follows (from an EXTERNAL IP):

curl -sLIXGET -H 'x-origin-verify: SecretCode9873498274' https://origin.mysite.com

On the CloudFront side, the same x-origin-verify header is configured as an origin custom header, so every request CloudFront forwards to the ALB carries it.

Plugins

The following plugins are used to support various aspects of scalability.

  1. C3 Cloudfront Cache Controller – invalidates CloudFront as needed, when content is edited.
  2. ManageWP – this tool does a LOT, but is used in our case for convenience when copying the production database to dev to achieve consistency.
  3. WP Offload S3 Lite – WordPress media files (like images) are stored in S3, so they can be accessed from all instances.
  4. WP BASIC Auth – to protect the dev environment from general visibility.

AWS Credential Manager

In my work as a consultant, I’ve found the need to switch frequently between different sets of credentials. I’ve integrated some of the following methods into a .NET based credential management tool, located on github. This way I can switch my default credential in about 10 seconds, without looking anything up. Naturally all the source code is out there.

The README file pretty well documents the app. There are some improvements that could be made, but it’s my intention to keep this app pretty simple. I’ve posted more detail on credential management here.

The app also demonstrates how you can use credential management in your .NET app, if that’s your intention.

Let me know if you find it useful, or feel free to post questions.

Creating a Certificate Signing Request (CSR)

For a recent project we needed a signed certificate for an HTTPS server, and to import into the AWS Certificate Manager. This article is for you if you need to submit a CSR to an administrator or a signing authority (like DigiCert), who will use their CA to issue your signed certificate.

First you’ll need to create a private key, and then create your CSR from that, adding the necessary information.

As it turns out, there is a single OpenSSL command that will do all of that. The private key it creates is also suitable for import to the AWS Certificate Manager.

Start with the following information on hand:

  • Country Code
  • State/Province
  • City
  • Company Name (no abbreviations)
  • Fully Qualified Domain Name (FQDN), e.g. intown.biz, or www.intown.biz. You can only use one FQDN, but you can use a wildcard, e.g. *.intown.biz.
  • An administrator email address

Issue the following command:

openssl req -new -nodes -keyout intown-biz.key.pem -out intown-biz.csr.pem

Sample results:

Generating a 2048 bit RSA private key
..............+++
...............+++
writing new private key to 'intown-biz.key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Maryland
Locality Name (eg, city) []:Bethesda
Organization Name (eg, company) [Internet Widgits Pty Ltd]:The In Town Company
Organizational Unit Name (eg, section) []:consulting
Common Name (e.g. server FQDN or YOUR name) []:intown.biz
Email Address []:administrator@intown.biz

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Note that I skipped the optional parts, including the password.

This command is really doing three things:

  1. Creating a private key (and calling it intown-biz.key.pem)
  2. Prompting you for the information that goes into the signing request.
  3. Creating the certificate signing request (and calling it intown-biz.csr.pem)

Double check your work

Run this command to view the decoded value of your CSR.

openssl req -in intown-biz.csr.pem -noout -text

Submit the CSR

The CSR goes to your administrator, or a signing authority. The cool thing about the CSR is that the authority can issue your signed public key (or “certificate”) from it without you ever handing over the actual private key.

Hold on to the private key! That’s the only copy you get. Make sure to follow your company’s policy for managing private keys.

When the signing request is processed, you should receive:

  1. The public certificate or “certificate body” which you can give out
  2. An intermediate CA or “Certificate Chain” which helps the server to locate the signing CA.

At this point you’re pretty much done with the CSR. With the public cert, private key and intermediate CA in hand, you’re good to go!

If you’re using the AWS Certificate Manager, simply paste the text of these into the appropriate boxes.
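
Or import from the CLI with something like this (file names are placeholders for the files you received, plus your original private key):

aws acm import-certificate \
    --certificate fileb://intown-biz.crt.pem \
    --private-key fileb://intown-biz.key.pem \
    --certificate-chain fileb://intown-biz-chain.pem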

Use SSL Client Authentication with node.js and Express

For a recent engagement, I was asked to set up client authentication for a node.js/express app. With this type of authentication, an SSL certificate is required to access the site.

To get the most out of this article, download and run the accompanying demonstration app. It contains scripts to create the necessary certificates, similar to what you see here. These scripts and this article use openssl, which can be found in Linux or the Git for Windows bash shell.

For these purposes you don’t need to understand everything about certs, but here are some basics:

Create a signing key (CA)

Everything starts with a signing key, which can be created with a command similar to this:

openssl req -new -x509 -days 365 -keyout ca-key.pem -out ca-crt.pem

The resulting cert and key can then be used to sign other certificates. Creating a self-signed cert is similar.
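
For instance, a self-signed server cert (no separate CA) can be created in one step, something like:

openssl req -new -x509 -days 365 -nodes -keyout server-key.pem -out server-crt.pem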

Create a signed certificate

Creating a signed certificate is a multi-step process. First you’ll need a signing key, as generated above.

export CERT=server
export CA_NAME=ca

#generating private key for server
openssl genrsa -out $CERT-key.pem 4096

#generate a signing request
openssl req -new -sha256 -config $CERT.cnf -key $CERT-key.pem -out $CERT-csr.pem

#perform the signing
openssl x509 -req -days 365 -in $CERT-csr.pem -CA $CA_NAME-crt.pem -CAkey $CA_NAME-key.pem -CAcreateserial -out $CERT-crt.pem

The initial command (genrsa) generates the private key. The third command’s final product, server-crt.pem, is used along with the private key to run HTTPS in Express.

Set up https

To start, your express app must be running https, so you’ll need to have (or generate) a public and private key for that. Here is an example of how to set that up:

var express = require('express');
var fs = require('fs');
var https = require('https');
var app = express();
var options = {
    key: fs.readFileSync('certs/server-key.pem'),
    cert: fs.readFileSync('certs/server-crt.pem'),
};
app.use(function (req, res) {
    // Respond to every request; the response is complete, so we don't call next().
    res.writeHead(200);
    res.end("hello world\n");
});
var listener = https.createServer(options, app).listen(4433, function () {
    console.log('Express HTTPS server listening on port ' + listener.address().port);
});

Test https

$ node app
Express HTTPS server listening on port 4433

Test using curl in another window. Note that we use -k to bypass certificate validation.

$ curl https://127.0.0.1:4433 -k
hello world

Set up client authentication

Once you have HTTPS cooperating, you can add client auth. requestCert will require the client (e.g. curl, browser) to send a cert. rejectUnauthorized=false allows us to handle validation ourselves.

Notice the new ca entry in the options below: that’s the client auth certificate (or CA). If it was created as a CA, then ANY certificate which is signed by this one will validate.

var options = {
    key: fs.readFileSync('certs/server-key.pem'),
    cert: fs.readFileSync('certs/server-crt.pem'),
    ca: fs.readFileSync('certs/ca.pem'), // client auth ca OR cert
    requestCert: true,                   // new
    rejectUnauthorized: false            // new
};

Here’s how we handle the validation:

app.use(function (req, res, next) {
   if (!req.client.authorized) {
       return res.status(401).send('User is not authorized');
   }
   // examine the cert itself, and even validate based on that!
   var cert = req.socket.getPeerCertificate();
   if (cert.subject) {
       console.log(cert.subject.CN);
   }
   next();
});

Test client authentication

# Access the site using the client certificate created above.
curl -v -s -k --key certs/client1-key.pem --cert certs/client1-crt.pem https://localhost:4433

Test client authentication with a browser

  1. Add the client pfx file to your certificate store (one way to create the pfx from the PEM files is sketched after this list). If you’re using a newly created CA, you might need to add its pfx as well. If you’re not presented with a dialog box in step 3, this is likely the problem.
  2. Browse to https://localhost:4433.
  3. You should be presented with a dialog box. Choose the client certificate.
  4. Note that if you change the certificate, you will need to completely close Chrome (and possibly need to kill the task) in order to see the dialog again.
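
A sketch of bundling the client certificate and key into a pfx (you’ll be prompted for an export password):

openssl pkcs12 -export -in client1-crt.pem -inkey client1-key.pem -out client1.pfx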

Certificates CAN be revoked. This is not covered in this article, but see the appropriate section here.

Other articles you might like:
HTTPS Authorized Certs with Node.js
How To Create an SSH CA to Validate Hosts and Clients with Ubuntu

Connect to an AWS CodeCommit git repository

Let’s review how to connect to an AWS CodeCommit git repository. As new git users quickly find out, each implementation of git (github, bitbucket, CodeCommit etc) has a slightly different way of authenticating the user. These steps only apply to the AWS implementation, and will apply to either Linux or Windows.

There are two ways developers can connect with git: ssh and https. With CodeCommit I have always used ssh, because it is simpler to set up.

There are AWS cli commands for high-level tasks (such as create repository, or list all repositories), but developers normally interact with CodeCommit through a standard git client.

Prerequisites

You’ll need:

  • Your own AWS account. This is because AWS bills according to the number of users who access repositories. Currently the first 5 users are free, and after that the charge is $1/user/month (pricing).
  • A git client. For Windows users, I recommend Git for Windows. This gives you a command line with everything that git offers, so there’s no additional layer to obscure its functionality. Best of all, you get a free Bash shell. So goodbye putty, hello ssh (for starters). If you are on Windows and do NOT use this client, sorry but these instructions probably do not apply to you.

Create SSH Key

The first step is to create an ssh key which will be used ONLY to authenticate us to CodeCommit. Open a bash shell and use the ssh-keygen command. Name the key as appropriate:

Michael@Mendelson MINGW64 /c/dev/aws/credential-manager (master)
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/Michael/.ssh/id_rsa): codecommit
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in codecommit.
Your public key has been saved in codecommit.pub.
The key fingerprint is:
SHA256:OWyerhZPzJp5ozI/mF7PsHjU09Dq1hcYrBlcdgE+XEA Michael@Mendelson
The key's randomart image is:
+---[RSA 2048]----+
|         .Eoo.   |
|         oo..    |
|       . =+.     |
|       .+.+.     |
|       +SB o     |
|      ooXoo .    |
|     +oOoo   .   |
|    =oB** . .    |
|   .o*=*+. .     |
+----[SHA256]-----+

The command above creates two files: codecommit.pub (public key) and codecommit (private key). I recommend moving them both to your ~/.ssh/ directory.
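
For example:

mv codecommit codecommit.pub ~/.ssh/
chmod 600 ~/.ssh/codecommit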

Add Public Key to AWS

Log into AWS and go to the IAM (Identity & Access Management) service. Go to the list of users, and select your user. Find the Security Credentials tab. At the bottom you’ll see SSH keys for AWS CodeCommit. Click the upload button and paste in the SSH public key. When you click OK, take note of the SSH Key ID.
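
If you prefer the CLI, the same upload can be done with something like this (the user name is a placeholder; run it from the directory containing the key):

aws iam upload-ssh-public-key \
    --user-name Michael \
    --ssh-public-key-body file://codecommit.pub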

Get the Repository Url

While you’re in the AWS console, get your connection URL.  In CodeCommit, navigate to your repository; the SSH clone URL is shown in the connection details.

Set up Git

There are a couple of ways to make this work. I’ll start with the way I prefer.

SSH config file

Find your .ssh directory (linux: ~/.ssh, windows: C:\Users\Username\.ssh) and find a file called config. If it doesn’t exist, create it.

Add the following lines, substituting your SSH Key ID in the User field, and your private key file name in the IdentityFile field. Also note that the Host name specifies the AWS region where the repo was created:

Host git-codecommit.us-east-1.amazonaws.com
  User APKAJTDPQ6HZKPCBOCXQ
  IdentityFile ~/.ssh/codecommit

You should now be able to access your repository as follows (substitute your ssh string):

git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/repo-name

Now let’s say you have repos in multiple AWS accounts. You can use the config file to set up an alias for each:

Host codecommit-myrepo
    Hostname git-codecommit.us-east-1.amazonaws.com
    User APKAJTDPQ6HZKPCBOCXQ
    IdentityFile ~/.ssh/codecommit
Host codecommit-client
    Hostname git-codecommit.us-east-1.amazonaws.com
    User APKAI4I45LFFKYK4T4A 
    IdentityFile ~/.ssh/codecommit-client

Then access your repositories like this:

git clone ssh://codecommit-myrepo/v1/repos/repo-name

On the URL

With this method, you don’t need to bother with the config file – just add your SSH Key ID to the url string, and your key goes in your .ssh directory. It should be picked up automatically.

git clone ssh://APKAJTDPQ6HZKPCBOCXQ@git-codecommit.us-east-1.amazonaws.com/v1/repos/repo-name

Troubleshooting

Here’s a list of things to check, if you have everything set up, but you’re getting permission denied.

  • Test your connection as follows:
Michael@Mendelson MINGW64 /c/dev/aws
$ ssh git-codecommit.us-east-1.amazonaws.com
You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported.
Connection to git-codecommit.us-east-1.amazonaws.com closed by remote host.
Connection to git-codecommit.us-east-1.amazonaws.com closed.

If this test does not work, you can see ssh debugging information by modifying your config file to look like this:

Host git-codecommit.us-east-1.amazonaws.com
  User APKAJTDPQ6HZKPCBOCXQ
  IdentityFile ~/.ssh/codecommit
  LogLevel DEBUG3

This gives you a ridiculous amount of information. Use DEBUG2 or DEBUG1 if you prefer.


node.js deep dive: Passing config variables through jade

Working with a MEAN stack knockoff, I needed to pass variables configured in the server environment through to client-side javascript. In doing so I had to tangle with nconf and jade.

Here was my solution.

My config looks like this:

{
  "NODE_ENV": "development",
  ...
  "ui_varables": {
     "var1": "one",
     "var2": "two"
  }
}

First I had to make sure that the necessary config variables were being passed. MEAN uses the node nconf package, and by default is set up to limit which variables get passed from the environment. I had to remedy that:

config/config.js:

original:

nconf.argv()
  .env(['PORT', 'NODE_ENV', 'FORCE_DB_SYNC'] ) // Load only these environment variables
  .defaults({
  store: {
    NODE_ENV: 'development'
  }
});

after modifications:

nconf.argv()
  .env('__') // Load ALL environment variables
  // double-underscore replaces : as a way to denote hierarchy
  .defaults({
  store: {
    NODE_ENV: 'development'
  }
});

Now I can set my variables like this:

export ui_varables__var1=first-value
export ui_varables__var2=second-value

Note: I reset the “hierarchy indicator” to “__” (double underscore) because its default was “:”, which makes variables more difficult to set from bash.

Now the jade part:
Next, the values need to be rendered so that javascript can pick them up on the client side. A straightforward way is to write these values into the index file. Because this is a one-page app (angular), this page is always loaded first. Ideally this would be a javascript include file (just to keep things clean), but this is good enough for a demo.

app/controllers/index.js:

'use strict';

/**
* Module dependencies.
*/
var _ = require('lodash');
var config = require('../../config/config');

exports.render = function(req, res) {
  res.render('index', {
    user: req.user ? JSON.stringify(req.user) : "null",
    //new lines follow:
    config_defaults : {
       ui_defaults: JSON.stringify(config.ui_varables).replace(/<\//g, '<\\/')  //NOTE: the replace is xss prevention
    }
  });
};

app/views/index.jade:

extends layouts/default

block content
  section(ui-view)
    script(type="text/javascript").
      window.user = !{user};
      // new line here
      defaults = !{config_defaults.ui_defaults};

In my rendered html, this gives me a nice little script:

<script type="text/javascript">
 window.user = null;
 defaults = {"var1":"first-value","var2":"second-value"};
</script>

From this point it’s easy for angular to utilize the code.

Linux Cheat Sheet – Users

Create User

useradd --create-home -c "Account to run the something app" something

Eek, reverse it

userdel -r something

SSH Instance Access

Set up an ssh key, and add it to the current user’s authorized keys.

cd
ssh-keygen -f company
# set it up with no password
cat company.pub >> .ssh/authorized_keys
# make sure to save off the public and private keys.  One way: 
aws s3 cp company s3://my-bucket/company
aws s3 cp company.pub s3://my-bucket/company.pub

Location for sudo privs

/etc/sudoers.d/cloud-init
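
For instance, a drop-in granting the something user passwordless sudo might look like this (a sketch; adjust to your own policy):

echo "something ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/something
chmod 440 /etc/sudoers.d/something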

Also helpful: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html

Linux Cheat Sheet

One purpose of this blog is to have a convenient place for me and others to look up repeatable processes. In this case, it’s mostly about me. But you are welcome.

Minimal explanation here – there are good explanations of these steps elsewhere on the web.

User-related commands: see the Users cheat sheet above.

Mount Disk

Situation: You spin up a Linux instance with an additional drive, which you would like to access as /data.

sudo -i    # become root, if necessary
lsblk      # identify the device name, if necessary
mkfs -t ext4 /dev/xvdb    # WARNING: formats the volume, erasing any data
mkdir /data
mount /dev/xvdb /data
cp /etc/fstab /etc/fstab.orig
echo "/dev/xvdb   /data   ext4  defaults,nofail 0 2" >> /etc/fstab
mount -a   # test

Basic Run Script

If your Linux distribution uses Upstart, put something like this into /etc/init/myapp.conf.

description "A Bamboo remote agent daemon job file for Upstart"
author "Your name"
start on runlevel [2345]
stop on shutdown
setuid bamboo
setgid bamboo
 
pre-start script
  logger "[`date`] Bamboo remote agent starting"
  exec /etc/init.d/bamboo-agent.sh start
end script
 
pre-stop script
  exec /etc/init.d/bamboo-agent.sh stop
  logger "[`date`] Bamboo remote agent stopping"
end script
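
With the job file in place (named myapp.conf above), the job can be controlled like this:

sudo start myapp    # start the job
sudo status myapp   # check on it
sudo stop myapp     # stop it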
