We've been building a simple site with Yesod and docker. But there's
no point in building a site if you don't know how to deploy it. In this
final part of our tutorial we're going to:
- package our compiled binary into a docker container;
- set up a docker container to serve our site over https; and
- deploy everything to Amazon EC2.
Here's a reminder of the directory layout we've built up over the series:

parent_directory/
    site/
        ...
    database/
        dev_env
    webserver/
    binary/
    dev.yml
Compiling
If you've been following along from part two then you'll already have a site to compile. A simple stack build command will do it and tell you where it's dropped the binary.
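If you lose track of that path, an easy way to find it again (an aside of mine, not a step from the series) is to ask stack directly; the binary sits in the bin/ subdirectory of whatever this prints:

stack path --local-install-root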
The binary isn't all you'll need to deploy, though. Way back in part two, you created an empty binary directory under your project root. It's time to put some things in there. First, copy the binary you just created into the binary directory. Now copy your site's config and static directories in there too. Then create a new file called prod and enter the following, making sensible substitutions for all the supersecret entries:

APPROOT=http://localhost
HOST=0.0.0.0
PGHOST=database
PGUSER=supersecret
PGDATABASE=supersecret
PGPASS=supersecret
ADMIN_NAME=supersecret
ADMIN_PASSWORD=supersecret
We'll use these values as environment variables in the docker container we're about to build. The APPROOT and HOST values get our application serving and make it reachable by the webserver container we'll build shortly, which will act as a reverse proxy and serve it to the world. The PGHOST value is set to database so the binary will find the postgres docker container that docker compose sets up. The ADMIN_NAME and ADMIN_PASSWORD values will be familiar if you've read part four, and I'll assume the PGUSER and similar ones are obvious.
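To give a sense of how the binary side consumes these, here's a minimal sketch using only base's System.Environment. It's illustrative rather than the scaffolded site's actual startup code, and mkConnStr is a name I've invented for the example:

-- Illustrative only: build a libpq-style connection string from the
-- PG* variables above, falling back to local defaults when one is unset.
import Data.Maybe (fromMaybe)
import System.Environment (lookupEnv)

mkConnStr :: IO String
mkConnStr = do
  host <- fromMaybe "localhost" <$> lookupEnv "PGHOST"
  user <- fromMaybe "postgres"  <$> lookupEnv "PGUSER"
  db   <- fromMaybe "postgres"  <$> lookupEnv "PGDATABASE"
  pass <- fromMaybe ""          <$> lookupEnv "PGPASS"
  pure (concat
    [ "host=", host, " user=", user
    , " dbname=", db, " password=", pass ])

The Yesod scaffolding has its own settings machinery for this; the point is simply that the container supplies the values and the binary just reads its environment.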
The last thing we need is the Dockerfile to bring this all together. Create a new file called Dockerfile in your binary directory. It'll be very simple and very short.

FROM haskell:7.10.3
MAINTAINER Your Name
ENV REFRESHED_AT 2016-08-05
RUN ["apt-get", "-y", "update"]
RUN ["apt-get", "-y", "install", "libpq-dev"]
That's it. That's all you need to run your binary and interact with your database. Don't worry that you can't see your prod file referenced in there; we'll pick that up with docker compose later.
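If you want to confirm the image builds cleanly before compose enters the picture, a plain docker build from the parent directory does it (the image tag here is just my own choice):

docker build -t site-binary ./binary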
The webserver

Docker lets us set up a webserver reliably just by writing two files. No messing around with installation and setup. First, create a new Dockerfile in your webserver directory that should look like this:

FROM nginx
MAINTAINER Garry Cairns
ENV REFRESHED_AT 2016-04-10
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
# load nginx conf as root
COPY ./site.conf /etc/nginx/conf.d/default.conf
COPY ./fullchain.pem /etc/nginx/ssl.cert
COPY ./privkey.pem /etc/nginx/ssl.dkey
# make a directory for the api volume
RUN ["mkdir", "/opt/server"]
#start the server
CMD ["nginx", "-g", "daemon off;"]
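Those two ln -sf lines are what make nginx's logs visible through docker's own tooling: once the site is running under the compose file we'll write below, something like this will show the access and error logs:

docker-compose --file=prod.yml logs webserver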
You'll notice we're copying some files from our current directory into the container, and that some of them are key files. Do not ever upload a docker image to docker hub with key files in it. I've warned you. For now, just create the site.conf file.

upstream yesod {
    server binary:3000;
}

server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.name;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 default_server ssl;
    server_name yourdomain.name;

    ssl_certificate /etc/nginx/ssl.cert;
    ssl_certificate_key /etc/nginx/ssl.dkey;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://yesod;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }

    location ~ ^/(media|static)/ {
        root /opt/server;
    }
}
We'll unpack that a little. If you follow the curly braces and indentation you'll see three distinct blocks: an upstream one and two server blocks. The upstream block takes advantage of the same docker compose networking feature we saw when we told our binary that the database host was database. The name database corresponds to the database container definition we're going to put in our compose yaml file, just as the binary in server binary:3000; corresponds to the binary definition we'll put there. This is what connects our webserver to our application.

The first server block listens for requests on port 80 (http). The return 301 https://$server_name$request_uri; line forwards every request it receives to the corresponding page served over https, using the http 301 response (there's a concrete example of this below). As good citizens of the web we're going to serve our whole site over https.

The final server block serves the https pages. First we get it listening on port 443 (https) and set up our key locations. I know those keys still don't exist anywhere, but we'll come to that. The location block after that forwards traffic on to our Yesod application. Finally, a second location block serves requests to /static/* or /media/* from Nginx directly, since these will be static files and thus better served by Nginx.
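To make the redirect from the first block concrete: once the site is live, a plain-http request should come back with a 301 pointing at the https version, along these lines (the domain is a placeholder and the exact headers will vary):

curl -I http://yourdomain.name/
HTTP/1.1 301 Moved Permanently
Location: https://yourdomain.name/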
Our production compose file

We've been using a very minimal compose setup until now. In fact it hasn't been composing anything. That's about to change. Create a new file in your project root called prod.yml. It should look like the following.

version: "2.0"
services:
  database:
    image: postgres
    env_file: ./database/prod
  binary:
    build: ./binary
    command: /opt/server/your_binary_name
    env_file: ./binary/prod
    links:
      - database
    tty: false
    volumes:
      - /etc/ssl/certs/:/etc/ssl/certs/
      - ./binary:/opt/server/
    working_dir: /opt/server
  webserver:
    build: webserver
    ports:
      - "80:80"
      - "443:443"
    links:
      - binary
    volumes_from:
      - binary
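Since yaml indentation is easy to get subtly wrong, it's worth knowing that compose can check the file for you; this prints the fully resolved configuration, or an error if something is off:

docker-compose --file=prod.yml config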
I won't regurgitate compose's excellent documentation. Instead we'll focus on what each section here is achieving for us. Starting with database, you'll see we're using an image declaration instead of the build declaration in the other parts of the file. That's because we're just using the official postgres docker image without any customization of the machine it builds, meaning no Dockerfile.

We are, however, providing environment variables for it. The substitutions you use for the supersecret values here should tie in with the relevant PG-prefixed variables in the binary directory's environment variable file. Thus the prod file in your database directory should look a bit like this:

POSTGRES_USER=supersecret
POSTGRES_DB=supersecret
POSTGRES_PASSWORD=supersecret
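Once the stack is up, one way to check that the two prod files really do agree is to open a psql session against the database container; this spins up a throwaway container on the same network (the service and credential names mirror the compose file):

docker-compose --file=prod.yml run --rm database psql -h database -U supersecret supersecret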
The binary section has the most going on in it, but even that is quite simple. First it builds out the Dockerfile we specified earlier. It supplies the relevant command to run once the container is up and running; in this case that's running our binary. We tell the container in which env_file it can find its environment variables. We specify the containers to which it links, in this case our database. We set tty to false to show we don't want or need an interactive terminal environment for the container. We specify two volumes: locations on our hard drive we want to share into the container. We're mapping our own ssl certificates into the container, and also telling the container to pick up the contents of our binary directory and place them in /opt/server/ in its own filesystem. Finally, we set the container's working_dir to the directory in which our code now lives.

The webserver section is similar to the binary one. We build the container specified in the relevant Dockerfile, link to any relevant containers and pull in volumes. The links and volumes come from the binary container, to let nginx pick up the reverse proxy and serve static files directly, respectively. The only additional thing we're doing is mapping the container's ports 80 and 443 to the base system's ports in order to open our site to outside browsers.
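When you eventually bring the stack up, a quick way to confirm those port mappings took effect is compose's ps command, which lists each container's state and published ports:

docker-compose --file=prod.yml ps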
Deployment

We're going to deploy our web application on Amazon EC2. If you haven't got an account you can sign up for the free tier; what we're doing will run on that just fine. But remember the free usage runs out after a year. You've been warned.

Once your account is set up, spin up an Ubuntu instance. The instructions here are for Ubuntu, but they shouldn't be too hard to adapt if you prefer other operating systems.
Accessing your machine
You'll need two types of access to your EC2 instance: SSH for you to control the application we're building, and HTTP for your visitors. Create a security group through the AWS EC2 dashboard and call it webapp or something similar. Open ports 22, 80 and 443, then assign your instance to this security group (you can, and should, further restrict access to port 22 to specific IPs, but I'll leave that up to you). Create a key pair and name it appropriately. You'll need it to ssh into your site.

You'll also need to assign an Elastic IP to your instance, again through the EC2 dashboard. Once you've done so you can hook up a domain name you own to your IP to let visitors access your site. Choosing a service to buy and manage domain names is beyond the scope of this tutorial, but there are plenty of good services out there.
With all those things done you'll now be able to ssh into your instance. The command to do so on a Linux machine is:
ssh -i ./path/to/your/private_key.pem ubuntu@public_dns_of_your_elastic_ip.amazonaws.com
Once you're in, create a directory, which I'll call parent_directory, to house the docker-compose directory setup we've been working with.

Installing software

We're using docker, so you won't need to install very much software on your EC2 instance. Just install docker and docker compose using the instructions we used to install them on your development machine. You'll need one additional piece of software called certbot. Follow the appropriate instructions for Nginx plus the operating system you've used for EC2. The instructions here assume Ubuntu 14.04.
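As an illustration, requesting the initial certificates with certbot's standalone authenticator looks something like this; note that standalone needs port 80 free, so run it before the webserver container is up (your domain goes in place of the placeholder):

sudo certbot certonly --standalone -d yourdomain.name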
Production configuration

You're now ready to deploy code to your site with a simple scp command. The steps we'll go through for deployment are:

- compile the latest binary of your web application;
- scp the relevant files to your EC2 instance;
- put your SSL certificates where your webserver container can see them; and
- (re)start the application.
We're going to deploy our prod.yml file and our binary, database and webserver directories. The command you'll use is:

scp -i ./path/to/your/private_key.pem -r /path/to/your/parent_directory/binary/ ubuntu@public_dns_of_your_elastic_ip.amazonaws.com:/home/ubuntu/parent_directory
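That copies the binary directory. For instance, sending the compose file itself looks like this (same key and address placeholders; no -r needed for a single file):

scp -i ./path/to/your/private_key.pem /path/to/your/parent_directory/prod.yml ubuntu@public_dns_of_your_elastic_ip.amazonaws.com:/home/ubuntu/parent_directory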
Repeat that pattern for all the bits we want to deploy. Now ssh into your EC2 instance. We're almost done. Right now your webserver setup has a problem: docker's going to look for the fullchain.pem and privkey.pem files and find nothing. We need to resolve that. When you ran the certbot command to create those files, they were dropped somewhere like /etc/letsencrypt/live/mydomain.name/. Copy them into your webserver directory so they'll get picked up by the Dockerfile.

Finally, cd to the parent_directory and run docker-compose --file=prod.yml up -d to launch your site. Any time you want to deploy updates, you should only need to scp the new binary directory; then docker-compose --file=prod.yml stop followed by docker-compose --file=prod.yml up -d will restart with your new application.
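A quick check that everything came up, before reaching for a browser, is to request the front page over https and look for a successful response (again, substitute your own domain):

curl -I https://yourdomain.name/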
Visit your site

Congratulations; it's a web application! You should now be able to visit your website. I hope this tutorial has been useful to you.