I work in a large enterprise, and for the past two years I have been working on a microservice-based project, using Octopus Deploy to deploy our packages to different kinds of on-premise environments. A few weeks ago we started our transition to the cloud (AWS), and that is where the fun begins.
AWS was completely new to me, and I actually had no idea how to deploy my application from our servers to "THE CLOUD", but that never stopped me.
I knew that my colleagues from other departments were successfully doing it, so of course the first thing on my mind was to copy their approach and make some improvements. Soon I found some projects and started to dig in.
What I found was not what I was looking for. Just to start an AWS instance there were 180 lines of PowerShell (filtering out the latest source AMI, tagging, naming, configuring, etc.), and don't get me started on the deployments, backups, and so on. Those scripts were doing their job, and doing it well, but at that moment I knew: NOPE, I will try to do it differently.
Actually, I was pretty lucky: every year we have this conference, DevTernity, and I got to participate in a workshop dedicated to infrastructure as code. There we worked with Terraform, and about 10% of the workshop was about this cool tool called Packer (https://www.packer.io/) – we were building simple Linux images.
“Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer is able to use tools like Chef or Puppet to install software onto the image.”
I somehow knew that I would use Packer. Somehow.
LET'S START BUILDING
Until this moment I had only created Linux AMIs, not Windows ones – that is something different. Because Packer is great, there are detailed instructions on what you need to do to build a Windows AMI.
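For reference, a minimal legacy-JSON Packer template for a Windows AMI looks roughly like this. This is a sketch, not my exact template: the region, instance type, AMI name filter, and file names are my assumptions, and the WinRM bootstrap script (`setup_winrm.ps1`) is the kind of user-data script Packer's own Windows guide walks you through.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "instance_type": "t2.medium",
      "source_ami_filter": {
        "filters": {
          "name": "Windows_Server-2016-English-Full-Base-*"
        },
        "owners": ["amazon"],
        "most_recent": true
      },
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "user_data_file": "setup_winrm.ps1",
      "ami_name": "hop-windows-{{timestamp}}"
    }
  ]
}
```

The `source_ami_filter` block is what replaces all the hand-rolled "find the latest source AMI" scripting: Packer resolves the newest matching Amazon-owned image on every run.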
After creating a few AWS AMIs, I started to look for a tool or a way to pull in a few dependencies (Octopus tools, the Octopus Tentacle, etc.), and it didn't take a lot of digging – Chocolatey. This tool is great because it is simple and can download and install the latest packages with a single command, and to my surprise, it had all my dependencies. Great, now I no longer need to worry about managing installs and updates.
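As a sketch, the Chocolatey bootstrap in a provisioning script can be as small as this. The install one-liner is Chocolatey's own; the package ids `octopusdeploy.tentacle` and `octopustools` are the ones I know of – check the Chocolatey gallery for the packages and versions you actually need.

```powershell
# Install Chocolatey itself (official bootstrap from chocolatey.org)
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Pull in the Octopus dependencies; -y answers all prompts automatically
choco install octopusdeploy.tentacle -y   # the Tentacle agent
choco install octopustools -y             # octo.exe, the Octopus CLI
```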
Now comes the fun part – how to deploy? At first it sounds simple enough – just deploy – but here comes the trickiest part:
First – I need to register the Tentacle with the Octopus Deploy server.
To do that, I need to tell the server where the instance is – an IP address won't work, so I need its DNS name, and the DNS name is always changing (every new instance gets its own). But can I get the DNS name from inside the EC2 instance? Yes I can, and it is actually very, very simple (if you know where to look). Every EC2 instance has its own metadata, and you can use it. Link to AWS documentation
F.Y.I.: If you register the Tentacle without specifying --publicHostname, it will register with your EC2 instance name, and the Octopus Deploy server won't be able to connect.
$webClient = New-Object System.Net.WebClient
$dns = $webClient.DownloadString('http://169.254.169.254/latest/meta-data/public-hostname')
I use this PowerShell script to save the DNS name into a variable, and now my Octopus Deploy server will always know where my instance is.
.\Tentacle.exe register-with --instance "Tentacle" --server "https://octopus.server" --apiKey "$Env:OAPI" --role "H2O" --environment "HOP-AWS-INSTALL" --publicHostname "$dns" --force --comms-style TentaclePassive --console
Second – deploy the packages (this is the easy part).
When the Tentacle is registered, I can use the Octopus tools to call the server and say: "Hey, please deploy the latest release for this channel to the specific environment that I just registered."
octo deploy-release --project "HOP Deploy Dev" --channel "AWS Install HoP" --version latest --deployto "HOP-AWS-INSTALL" --server "https://octopus.server/" --apiKey "$Env:OAPI" --waitfordeployment
"--waitfordeployment" is very important – it makes octo wait for the deployment and not move forward before all steps have finished successfully.
Third – deregister (cleanup)
.\Tentacle.exe deregister-from --instance "Tentacle" --server "https://octopus.server" --apiKey "$Env:OAPI" --multiple --console
.\Tentacle.exe delete-instance --instance "Tentacle" --console
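The three steps above can be sketched as one provisioning script. The Tentacle install path is the default one on my machines, and all names (roles, environments, server URL) are from my setup – adjust to yours.

```powershell
# Fail the Packer build immediately if any step errors out
$ErrorActionPreference = 'Stop'

# 1. Find our own public DNS name via the EC2 instance metadata endpoint
$webClient = New-Object System.Net.WebClient
$dns = $webClient.DownloadString('http://169.254.169.254/latest/meta-data/public-hostname')

# 2. Register the Tentacle with the Octopus Deploy server
Set-Location 'C:\Program Files\Octopus Deploy\Tentacle'
.\Tentacle.exe register-with --instance "Tentacle" --server "https://octopus.server" `
    --apiKey "$Env:OAPI" --role "H2O" --environment "HOP-AWS-INSTALL" `
    --publicHostname "$dns" --force --comms-style TentaclePassive --console

# 3. Ask the server to deploy the latest release and wait for it to finish
octo deploy-release --project "HOP Deploy Dev" --channel "AWS Install HoP" `
    --version latest --deployto "HOP-AWS-INSTALL" --server "https://octopus.server/" `
    --apiKey "$Env:OAPI" --waitfordeployment

# 4. Clean up: deregister from the server and delete the local Tentacle instance
.\Tentacle.exe deregister-from --instance "Tentacle" --server "https://octopus.server" `
    --apiKey "$Env:OAPI" --multiple --console
.\Tentacle.exe delete-instance --instance "Tentacle" --console
```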
Now, let's put this all together and run it from Packer – it should all work, right? WRONG. And this is the best part: just because you completed all the steps successfully by hand doesn't mean Packer will get the same result.
I encountered a few problems I wish someone had told me about (they are obvious now):
- You won't install the Tentacle successfully, because your Packer user can't generate certificates. The workaround? The best way I found is to generate the certificate locally (on your own PC, for example), tell Packer to copy it to the AMI it is building, and use it to register:
.\Tentacle.exe import-certificate --instance "Tentacle" -f c:\cert.txt --console
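To get that certificate in the first place, running `Tentacle.exe new-certificate --export-file c:\cert.txt` on your local machine should do it – check your Tentacle version's command help, as I'm quoting the flag from memory. Copying the file into the image is then one extra entry in the template's provisioners list, using Packer's standard file provisioner:

```json
{
  "provisioners": [
    {
      "type": "file",
      "source": "cert.txt",
      "destination": "c:\\cert.txt"
    }
  ]
}
```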
- Now you can deploy your release? Nope. Why? Because when Packer builds your image, it creates a new AWS security group with only one port open (so it can communicate). To fix this, create a new security group just for building, open two TCP ports – 10933 for the Tentacle and 5985 for Packer's WinRM connection – and specify it in the Packer template. That's it, now you can deploy.
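In the `amazon-ebs` builder, that pre-created group is wired in with a single setting, which stops Packer from creating its own temporary group (the group id below is of course a placeholder):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "security_group_id": "sg-0123456789abcdef0"
    }
  ]
}
```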
Now all should work just fine.
Of course, all this is great, but if it's not automated, it sucks. We use TeamCity as our main build server, and of course I used it to automate my image building. Here is how it looks:
- We check out the Packer template
- We fill it with API keys, usernames, etc.
- We pass it to Packer (which is installed on the build agents)
- Packer connects to AWS
- It filters out the latest Windows source AMI and creates a new EC2 instance from it
- It installs Chocolatey, which pulls in the latest packages
- Chocolatey finishes its "business"
- We register our Tentacle and call the Octopus Deploy server with a deployment request
- The Octopus Deploy server has a specific project with a defined process for each channel and starts the deployment
- When the deployment is finished, we clean up the environment
- Packer finishes EC2 provisioning, stops the EC2 instance, and creates a new AMI
- If no errors were encountered, Packer terminates the source EC2 instance, key pairs, etc., and we have a new, fresh AMI to use
If one of your PowerShell scripts ends with an error, Packer will treat it as a critical error and terminate the build.
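That behaviour is worth leaning on: make your scripts fail loudly rather than silently. A small pattern I find useful for this (a sketch, not a rule – what Packer actually reacts to is the script's non-zero exit code):

```powershell
# Turn PowerShell's non-terminating errors into terminating ones,
# so nothing slips through half-finished
$ErrorActionPreference = 'Stop'

try {
    # ... provisioning work goes here ...
}
catch {
    Write-Error $_
    exit 1   # non-zero exit code tells Packer the provisioner failed
}
```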
Just because an existing solution is there does not mean you must use it – first try to find a better one. That is why, instead of 180 lines of hardcore PowerShell just to start an instance, I have a 65-line template (including brackets) with 4 small PowerShell scripts that do it all – create, provision, deploy, destroy, etc. A simple, clean, and maintainable way to create your AMIs.
As always, I will improve my solution as time passes, but for now – keep it simple and fun.
Link to my github where you can find my template and ps scripts.