DIY node.js server on Amazon EC2

I’m involved with a project where our Ruby/Rails developer dropped out, so I decided to take on the job using node.js (rather than learn Rails). We started out on dotCloud, but it was too flaky from day to day and our demo was coming up. For hosting, Amazon’s EC2 was the obvious candidate, but I’d have to set up and provision the entire server from scratch. This is that story 🙂

Here’s what we’ll do

  • choose a Linux image
  • create a HelloWorld node.js server
  • use git to push code changes to the server
  • automatically restart node after pushing with git
  • set up node to run long term using supervisor

Set Up a New EC2 Instance

launch a new Ubuntu instance

First things first, login to your AWS console and launch a new Ubuntu Linux image for your new EC2 server. Select the Community AMIs tab and search for this one:


I chose Ubuntu over other Linux distributions because more of what I needed was already available via the standard package manager (redis, couchdb, etc…). At this point, I usually assign an elastic IP address to my new instance before proceeding with ssh.

update your new system

Once your new instance is up and running, login and update the system. The upgrade might take some time.

$ sudo apt-get update
$ sudo apt-get -y upgrade

install the rcconf service utility

This will make it easy to manage which services start at boot

$ sudo apt-get install rcconf

install some build tools including git

$ sudo apt-get install build-essential
$ sudo apt-get install libssl-dev
$ sudo apt-get install git-core

libssl-dev is needed to build node with crypto support

build node

You can view the official node.js installation instructions or just follow what I did.

$ wget
$ tar xzf node-latest.tar.gz
$ cd node-v0.4.7
$ ./configure --prefix=/usr
$ make
$ sudo make install

I used --prefix=/usr to install node on the existing PATH. make install can take quite a while… go brew some espresso.

install the node package manager npm:

Get the latest npm from github and install.

$ cd ~
$ git clone
$ cd npm
$ sudo make install

get some really good node packages 🙂

$ cd ~
$ npm install connect redis connect-redis jade express express-resource futures emailjs

install a web server

$ sudo apt-get install nginx

Edit the nginx default configuration file replacing the root (/) location section

$ sudo vi /etc/nginx/sites-enabled/default

Use a proxy_pass entry. This will forward your requests to your node server:

location / {
    proxy_pass http://127.0.0.1:8124;
}

Restart nginx

$ sudo /etc/init.d/nginx restart

Hello New Server!

create a directory for your node server

$ mkdir ~/www
$ cd ~/www

create a HelloWorld server

$ cat > server.js

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');

Start the server

$ node server.js

test your new server

Test in a browser by navigating to http://your.static.ip/

Shutdown your new server before continuing…

Bring in Source Control!

Now we get to the good part, where we leverage git to deploy new server code

create a remote repository for our node project

Create a bare repository outside the www folder

$ mkdir ~/repo
$ cd ~/repo
$ git init --bare

create a git hook

Create a post-receive hook that will copy over new code after it’s been pushed to the repository

$ cat > hooks/post-receive

#!/bin/sh
GIT_WORK_TREE=/home/ubuntu/www git checkout -f

$ chmod +x hooks/post-receive

add this remote repository to your LOCAL repository:

On your local development machine, setup a repository for your new server

$ mkdir helloworld
$ cd helloworld
$ git init
$ git remote add ec2 ssh://ubuntu@your.static.ip/home/ubuntu/repo

create a local HelloWorld node server

$ cat > server.js

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');

add and commit it to your local repository

$ git add server.js
$ git commit -m 'first'

push our local code changes to the remote repository:

$ git push ec2 +master:refs/heads/master

you only need to specify +master:refs/heads/master the first time — after that, git push ec2 master will do
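The whole bare-repo/hook dance can be rehearsed locally before touching the server. This sketch (throwaway temp paths rather than the server’s ~/repo and ~/www; the explicit master branch and GIT_WORK_TREE in the hook are there so checkout works from a bare repo) plays both sides of the push:

```shell
# Stand-alone rehearsal of the bare-repo + post-receive deploy pattern.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/www" "$tmp/local"

# "server" side: bare repo with a hook that checks pushed code out into www
git -c init.defaultBranch=master init -q --bare "$tmp/repo"
cat > "$tmp/repo/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$tmp/www git checkout -f master
EOF
chmod +x "$tmp/repo/hooks/post-receive"

# "local machine" side: commit a file and push it to the remote
cd "$tmp/local"
git -c init.defaultBranch=master init -q
echo "hello from git" > server.js
git add server.js
git -c user.email=demo@example.com -c user.name=Demo commit -q -m 'first'
git remote add ec2 "$tmp/repo"
git push -q ec2 +master:refs/heads/master

# the hook has now checked the pushed file out into www
cat "$tmp/www/server.js"
```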

test your new server

Test in a browser by navigating to http://your.static.ip/

Have Your Node Server Run Forever

dotCloud uses supervisor to keep node servers up across reboots and crashes. We’ll do the same. So, back on your EC2 instance:

$ sudo apt-get install python-setuptools
$ sudo easy_install supervisor

install it as a service

$ curl > supervisord
$ chmod +x supervisord
$ sudo mv supervisord /etc/init.d/supervisord

set the service to start on boot

check (with the space bar) the supervisord service in the services list using rcconf

$ sudo rcconf

create a configuration file

$ sudo echo_supervisord_conf > supervisord.conf
$ sudo mv supervisord.conf /etc/supervisord.conf

edit the new configuration file

$ sudo vi /etc/supervisord.conf

change the socket permissions under [unix_http_server]

chmod=0777                 ; socket file mode (default 0700)

set the user that supervisord runs as to be ‘ubuntu’

This is under the [supervisord] section:

user=ubuntu
add a [program] section that describes your node server

Note that we can set the NODE_ENV variable here.

[program:node]
command=node server.js
directory=/home/ubuntu/www
environment=NODE_ENV="production"

reload supervisord

$ supervisorctl reload

You can also restart the service

$ /etc/init.d/supervisord restart

restart your node server when changes are pushed

In the post-receive git hook we created earlier, append a final command that has supervisord restart the node program

$ vi ~/repo/hooks/post-receive

#!/bin/sh
GIT_WORK_TREE=/home/ubuntu/www git checkout -f
sudo supervisorctl restart node
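One gotcha: for sudo supervisorctl to run non-interactively from the hook, the ubuntu user needs passwordless sudo for that command. A sketch of a sudoers entry — the path to supervisorctl is an assumption (check it with which supervisorctl), and edits like this should go through visudo:

```text
# illustrative /etc/sudoers.d entry; the supervisorctl path is assumed
ubuntu ALL=(ALL) NOPASSWD: /usr/local/bin/supervisorctl
```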

That’s It 🙂

Now you can work on a local development machine, push your changes to the server using git, and sit back and relax while the post-receive hook tells supervisor to restart your node server. Refresh the browser and throw the confetti!

73 comments on “DIY node.js server on Amazon EC2”

  1. That’s the bit that stood out for me. It looks like nginx is acting as a proxy forwarder. Requests to port 80 seem to be forwarded to localhost:8120 (someone correct me if I’m wrong)

    I’ve recently built an amazon micro node.js server in a similar fashion, but have used this article: to configure IPTABLES. My express app runs on port 3000 and all requests to 80 are autoforwarded to 3000.

    Still there are some steps in this article that have taught me something. I like the install as service bit and the setting of an environment variable to production.

    With respect to running as a service though… what happens if the app crashes? Does it autorestart or should I still be using a nodemon / spark2 style option?

    • nginx is definitely overkill, I’m only using it as a reverse proxy. I’m working on a haproxy version now and will post it when I’m done. Also I heard that nginx doesn’t handle websockets well…

      Using supervisor is the best part. This is what dotCloud is using. When your server boots or your node server crashes, supervisor will start it back up… I only use nodemon for local development.

      • I was watching Pedro Teixeira’s nodetuts, where he tells us about different sorts of node tools. The up-to-the-minute recommendation* seems to be that we use Cluster** for allowing node to leverage the power of multiple cores.

        I wonder if there’s a way to set Cluster to work with Supervisor… or maybe it already works like supervisor. I’ve not fully researched either side of the equation yet, but there might be something there.

        * (Pedro answered the question about 4 days ago, but I only saw it just now).

        • Cluster and Supervisor, yes… that’s a good research project. If cluster’s ‘resuscitation’ works on crashed node workers, then supervisor won’t see it. But supervisor can still be configured to kick-off your cluster on reboot…

          • Cluster is nice, but left over the weekend or overnight it occasionally produced hundreds or thousands of reboots. Time for another research project to fix that one…

      • Calling nginx “overkill” is an engineering fallacy. Exactly *how* do you think it is overkill? You are wrong anyway but I would just like to know.

        • just a guess, but nginx (or apache) probably has a lot more features than haproxy — a reverse proxy is all I need for the purposes of the most simple (but useful) example for this recipe… I’m still trying to get my head around the best ways to use varnish, haproxy, nginx/apache and node servers together for a larger deployment. If you have any tips, I’d love to know!

      • Yeah I had some trouble with NowJS (websockets) and Nginx so I’m just listening on port 80 now. Will let you know if I find a solution. Otherwise great recipe, very tasty!

      • Actually I disagree, Nginx has some really nice features.

        For example, serving cached content, gzip compression, and ssl handling.

        Better to use node.js to handle your apis and dynamic content, and use nginx for web-zone stuff (serving lolcats).

  2. I’ve been waiting for something as detailed as this for Node on EC2. I only had 2 snafus:

    1) git push ec2 +master:refs/heads/master ….. Permission denied (publickey). fatal: The remote end hung up unexpectedly
    2) supervisorctl reload …… error: , [Errno 2] No such file or directory: file: line: 1

  3. Hi, node noob here. Why are you using nginx again? Could you quickly explain why it is needed/not needed in this setup? All I’m interested in is running node (but the git part is of course also very neat!). Cheers.

    • You’ll likely want a front-end in production for caching, load-balancing, serving static files (so node.js doesn’t have to be configured to do it)… or all three. Instead of nginx, you could have substituted varnish or haproxy… I just used nginx as one example of a front-end reverse proxy.
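For example, a sketch of that split in an nginx server block (paths and port here are illustrative, not from the recipe above):

```text
server {
    listen 80;

    # nginx serves static assets directly...
    location /static/ {
        root /home/ubuntu/www/public;
    }

    # ...and proxies everything else to the node server
    location / {
        proxy_pass http://127.0.0.1:8124;
    }
}
```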

  4. Upstart would be a much nicer solution to babysitting the node process. Bear in mind however that you will probably have to change over to systemd at some point.

  5. Great post! Thanks for the trailblazing. I ran into some confusing problems… I built from the Ubuntu Maverick image you recommended. I installed what I needed plus some, but *not* nginx. When I ran the HelloWorld server it started, but I could not connect to it — I was able to ssh in to the instance no problem (using both the assigned external domain name and an attached elastic IP). I battled this for hours thinking it was ec2 firewalling port 80… finally started poking around with netstat and noticed that ssh was listening on 0.0.0.0, not 127.0.0.1. I had two entries showing with ‘route -n’: one destination was the instance’s internal ec2 IP, the other was 0.0.0.0. So I edited the server.js file and changed the listen() to use the “0.0.0.0” IP — et voilà, I was then able to connect via http from the browser. And finally it strikes me as I go back and re-read your recipe that 127.0.0.1 is where you have nginx forwarding to!! Let this be a lesson to anyone who skips part of an installation instruction without reading the part that’s being skipped 🙂

    • wow… sorry to hear all that! It *is* tricky… I myself had to do it a few times, and again to verify for this blog post. I’ve extended it to support other domains by adding ‘server’ blocks to my nginx conf, forwarding them to other node servers on different ports… =D

  6. Just a heads up, as I used the latest micro ubuntu instance…

    1. my ubuntu instance only had yum installed, and I was unable to install apt-get, so I had to hunt down the alternate package names. No big deal, but it may confuse some folks. I may blog the differences at some point.
    2. the user for that instance was ec2-user; just a substitution wherever I saw “ubuntu”.
    3. In order to allow the git push to the remote repo, on my mac, I executed “ssh-add -K /path/to/pem.pem”… this added my creds and allowed a seamless push.
    4. I edited nginx.conf directly to add the proxy location.

    • Thanks for that! Yes… I took the easy route with Ubuntu. The packages I needed (beyond the ones mentioned here) were already available.

    • Clint, can you please explain in detail how to do this? I am having the same problem. My instance hangs up when I’m trying to push my code from a local machine.

  7. Thanks for the post! If you’re forwarding all requests to the node server, how do you set up nginx to serve the static files? I’m trying to do something like serve a static file and have the client connect to the node server, but I couldn’t get it to work, so I guess I’ll just skip nginx for the moment…

    • I didn’t have to serve static files from nginx yet, but it seems this would be a very common scenario — should be docs about this. Probably a better idea than having node/express do it…

  8. Thank you soooooo much! this tutorial is amazingly detailed. I am a beginning linux user on centOS and I was able to follow along…

    Also, if anyone runs into a “file not in directory” error, it’s because git redirects you on:

    curl > supervisord

    so just go to that url, copy all the text and paste it into the file if the curl turns up empty or with html in it…

    also, you can check boxes in rcconf by hitting the space bar.

    Thanks again!

  9. for me supervisor didn’t work with error:

    $ sudo supervisorctl start node
    node: ERROR (no such file)

    until I changed “command=node server.js” to “command=/usr/local/bin/node server.js”

    i.e. full path to node.js

  10. Thanks! Definitely useful. I was clueless about a couple things until I used your post for reference. Haven’t done the Git part yet, but I did complete my proxy with nginx and your method of starting up node with supervisord with multiple node instances.

  11. Thanks! Your post made it easy for me to complete my project. I took it one step further, using nginx as a reverse domain proxy, also with multiple nodejs instances under supervisor. Again, great article. Can’t wait to read more!

    • Cool! Glad to know it still works… there’s a more up-to-date AMI available, but I’m still using supervisor and nginx with my projects. To push code, I’m using git and rsync now instead of pushing to a server repo and using commit hooks. That recipe is here:

      There’s also a way to use git and rsync while maintaining timestamps on files (so rsync doesn’t always copy every tracked file) — a blog post on that is coming shortly =)

  12. Be sure that you open up inbound ports 22 and 80 in your security group in EC2. I couldn’t figure out why it wasn’t working, and then I realized my EC2 firewall was blocking the requests. Yes, I’m a newb.

    For other newbs, how you do this is by logging into EC2, clicking “Security Groups” under “NETWORK & SECURITY” (on the left), selecting the group you used/created for your ubuntu instance, clicking the “Inbound” tab (at the bottom) and making a new “Custom TCP rule” with the port “22” (click Add Rule), then making another “Custom TCP rule” with the port “80” (click Add Rule), and then clicking “Apply Rule Changes”. There may be another way; this is just how I did it…

  13. Hi Jason,

    Thank you very much for writing this tutorial. I really appreciate the ability to use AWS as my node.js server. But I cannot push my git straight to AWS. Instead I have to push my code to Github first, then, using the CLI on AWS, I have to clone the code onto the machine.

    I think this has something to do with the public key setup. If you could help me on this, it would be appreciated.

  14. If you are unable to access your instance through browser (i.e. “http://ec2-xxx-xxx-xxx-xx….com) even after you’ve configured nginx and node, then there may be an issue with your ‘Security Groups’ on AWS. You have to go to your AWS web console -> EC2 -> Security Groups and make sure that your EC2 instance has a rule for HTTP access. Thought it may help somebody out there.
