Enable yourself with docker

After the commit comes the question: how to deliver it? How to deliver with no hassle, how to do it continuously, how to stay agile and, the most important question, how not to lose the word soft from software, which should stand for easy to adjust. Docker, and here is how.

use case

Two teams, creating two services, not microservices, microservices are for fancy pants. Service a) Payroll, service b) Account. Both services combined make a system called “Kids Payroll”.

The keyword here is combined. Services are developed separately, but delivered together. How to do that? Docker alone will not do, as my intention isn’t to put everything in one container, that is just pure madness. But Docker Compose will:

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Now look at that! Multi-container is exactly what I was looking for: each service gets its own container, and I can glue them together into one application. Let’s do this!

What is the plan

Create two ASP.NET Core services and put them behind an nginx reverse proxy. Use a Docker Compose YAML file to turn them into one Docker application.

I created two Web API projects, Payroll and Account, with Docker support enabled.
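If you prefer the command line, roughly the same starting point can be scaffolded like this. This is only a sketch: the project names are mine, and the Dockerfiles plus the compose project actually came from Visual Studio's Docker support option.

dotnet new webapi -o Payroll      # Payroll Web API project
dotnet new webapi -o Account      # Account Web API project
dotnet new sln -n KidsPayroll     # solution that ties them together
dotnet sln add Payroll/Payroll.csproj Account/Account.csproj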

How does the YAML file look out of the box?

version: '3'

services:
  payroll:
    image: payroll
    build:
      context: .
      dockerfile: Payroll/Dockerfile

  account:
    image: account
    build:
      context: .
      dockerfile: Account/Dockerfile
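Even without touching the file, this is already runnable; assuming docker-compose.yml sits next to the solution, one command builds both images and starts the containers:

# build the payroll and account images and start both containers in the background
docker-compose up -d --build

# list the running containers of this compose application
docker-compose ps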

Neat, so far I didn’t even interact with it, Visual Studio does all this magic out of the box, nice. But I want some fancy routing and SSL offload, and in general I feel better if a reverse proxy stands in front of the services.

nginx

I altered the YAML file to add an nginx service:

version: '3'

services:
  nginx:
    image: itcorplv/repo:nginx
    ports:
      - 443:443
      - 80:80
    build:
      context: ./nginx
      dockerfile: Dockerfile
    depends_on:
      - payroll
      - account
    restart: always
  payroll:
    image: itcorplv/repo:payroll
    build:
      context: .
      dockerfile: Payroll/Dockerfile
    restart: always
  account:
    image: itcorplv/repo:account
    build:
      context: .
      dockerfile: Account/Dockerfile
    restart: always
  • New service nginx
  • nginx depends_on the account and payroll services, so it starts after them and can reach them
  • fancy image names, as I created a repository at Docker Hub. The image name has to be username/repository:tag
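Before building anything, the merged file can be sanity-checked; docker-compose will complain about indentation mistakes or duplicate keys and print the resolved configuration:

# validate docker-compose.yml and print the effective configuration
docker-compose config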

nginx Dockerfile

FROM xqdocker/ubuntu-nginx
COPY nginx.conf /etc/nginx/nginx.conf

RUN mkdir -p /var/www/account/html \
    && mkdir -p /var/www/payroll/html

COPY index.account.html /var/www/account/html/index.html
COPY index.payroll.html /var/www/payroll/html/index.html

COPY account /etc/nginx/sites-available/account
COPY payroll /etc/nginx/sites-available/payroll

RUN ln -s /etc/nginx/sites-available/account /etc/nginx/sites-enabled/ \
    && ln -s /etc/nginx/sites-available/payroll /etc/nginx/sites-enabled/
   
RUN openssl genrsa -out /opt/kidpayroll.key 2048 \
    && openssl req -new -x509 -key /opt/kidpayroll.key -out /opt/kidpayroll.cert -days 3650 -subj /CN=itcorpz.lv/CN=*.itcorpz.lv

Copying the nginx configuration, creating home folders for the account and payroll services, and placing a default index.html in each. The two COPY instructions for account and payroll place the sub-domain configurations into nginx’s sites-available folder, and the ln -s lines enable them. And to make it secure, a self-signed certificate is created. In production this can and has to be replaced with Let’s Encrypt.
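A cheap way to catch typos in the copied configuration is to build the nginx image and let nginx validate its own config inside it; the image name follows the compose file above:

# build only the nginx service image
docker-compose build nginx

# run nginx's configuration test inside the freshly built image
docker run --rm itcorplv/repo:nginx nginx -t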

nginx Account service configuration

server {
    listen 80;
    listen [::]:80;

    root /var/www/account/html;
    index index.html index.htm index.nginx-debian.html;

    server_name account.itcorpz.lv;

    location / {
            try_files $uri $uri/ =404;
    }
}

server {
    listen 443;
    server_name account.itcorpz.lv;

    ssl on;
    ssl_certificate /opt/kidpayroll.cert;
    ssl_certificate_key /opt/kidpayroll.key;

    resolver 127.0.0.11 valid=15s;
    set $account_host http://account;

    location / {
        proxy_set_header   Host $host:443;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto https;
        proxy_pass $account_host:5000;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_read_timeout  1200s;
    }
}
  • The first server block just serves index.html when the URL is http://account.itcorpz.lv
  • The resolver directive tells nginx how to resolve the variable $account_host: whenever $account_host is used in proxy_pass, it is looked up through 127.0.0.11 (Docker’s embedded DNS server), resolving http://account, and the result stays valid for 15 seconds.
  • The location block of the second server block does the actual forwarding to the account service, which listens on port 5000; where exactly it lives is up to the resolver to find out.

A similar configuration exists for the payroll service.

Edit the hosts file on the Windows machine

your.ip.goes.here           itcorpz.lv
your.ip.goes.here           account.itcorpz.lv
your.ip.goes.here           payroll.itcorpz.lv

Let’s test the result
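A couple of curl calls make a decent smoke test; -k is needed because the certificate is self-signed, and /api/values is the default controller of the Web API template, adjust if yours differs:

# plain HTTP serves the static index.html
curl http://account.itcorpz.lv

# HTTPS is proxied through nginx to the containers; -k skips certificate validation
curl -k https://account.itcorpz.lv/api/values
curl -k https://payroll.itcorpz.lv/api/values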

Why resolver

A definition of a cluster: “A group of similar things or people positioned or occurring closely together”. Yes, it is possible to scale services within one application, so now I can use cluster tricks and load balance the services. And that is why there is a resolver: to locate newly spawned service instances and round-robin the requests to them.
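For example, scaling the account service to three containers is one flag away (older docker-compose versions use the separate scale command instead); nginx keeps resolving the account name through the embedded DNS and the requests are spread across the replicas:

# run three instances of the account service behind the same nginx
docker-compose up -d --scale account=3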

And a small cluster is alive!

How to deliver

This is the best part: the no-hassle delivery in four steps

  1. docker-compose build – on build machine
  2. docker-compose push – on build machine
  3. docker-compose pull – on target machine
  4. docker-compose up -d – on target machine

Only the parts that have changed will be:

  • pushed to the container repository;
  • pulled from container repository;
  • re-deployed;
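Put together, the whole delivery looks something like this; docker-compose push needs a prior docker login to the registry, Docker Hub in my case:

# on the build machine
docker login
docker-compose build
docker-compose push

# on the target machine
docker-compose pull
docker-compose up -d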

And this is only the beginning of what is possible with Docker: put the same Docker application inside Docker Swarm and feel the power.
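The same version '3' file can be handed to a swarm almost unchanged, something along these lines; stack deploy ignores the build sections and works from the pushed images:

# turn the target machine into a single-node swarm
docker swarm init

# deploy the compose file as a stack called kidspayroll
docker stack deploy -c docker-compose.yml kidspayroll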

Source code of the experiment is on GitHub.
