Orchestrating DreamFactory with docker-compose and a Load Balancer
As a follow-up to my introductory article about DreamFactory + Docker and DreamFactory's article "Scaling DreamFactory with Docker" on how to manually run DreamFactory containers in a load-balancer pool, I'm taking the chance to show you how I implemented DreamFactory Docker containers with docker-compose as an orchestrator. (You could just as well use another orchestrator like Marathon/Mesos, AWS ECS, Swarm, and so on.)
A good starting point is the docker-compose.yml from their df-docker repo. In fact, it's perfectly fine to use it as is if you only want to orchestrate DreamFactory and its dependencies Redis and MySQL, which will give you three running containers.
If your only other requirement is to connect it to an "external" MySQL and/or Redis that doesn't live in a container, you can simply remove the "mysql" and "redis" services from their docker-compose.yml and provide the credentials in the environment attribute of the "web" service.
Note: In any scenario you absolutely want to set the APP_KEY as an environment variable. Otherwise you will end up with mostly broken containers.
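One simple way to generate a suitable 32-character key, assuming openssl is available on your host:

```shell
# 24 random bytes base64-encode to exactly 32 characters (no padding),
# which matches the 32-character APP_KEY the compose file expects
APP_KEY=$(openssl rand -base64 24)
echo "APP_KEY=${APP_KEY}"
```

Paste the result into the APP_KEY entry of your docker-compose.yml, and keep it stable across deployments so existing sessions and encrypted values remain valid.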
However, if you need to orchestrate more containers than DreamFactory alone, my approach might be of interest to you, especially when you are running more web-serving containers than just DF.
DreamFactory's article suggests using the tutum/haproxy image as the load balancer, which is perfectly fine. I chose jwilder/nginx-proxy for that purpose. There are in fact a few load-balancer-specific aspects why you might want to consider doing the same, but both will work just fine.
Now, let me show you a boiled-down version of my docker-compose.yml that brings things together:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge
    ports:
      - 80:80
  dreamfactory:
    image: dreamfactorysoftware/df-docker
    depends_on:
      - "nginx-proxy"
    network_mode: bridge
    environment:
      VIRTUAL_HOST: df.acme.com
      DB_HOST: mysql.acme.com
      DB_USERNAME: df_admin
      DB_PASSWORD: s3cret
      DB_DATABASE: df2
      REDIS_HOST: cache.acme.com
      REDIS_DATABASE: 0
      APP_KEY: UseAny32CharactersLongStringHere
      ALLOW_FOREVER_SESSIONS: true
      JWT_TTL: 86400
  # add your other services in the same fashion
So what did I do here? Most importantly, I set the VIRTUAL_HOST variable, which tells nginx-proxy which containers it should proxy_pass requests for a given hostname to.
When scaling up, it will automatically recognize multiple containers with the same VIRTUAL_HOST as a pool and load-balance across them!
The DB_* and REDIS_* variables point the dreamfactory container to "external" services, but you can also specify linked services there, as the df-docker/docker-compose.yml shows.
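For the linked-service variant, a sketch could look like the following. This is an assumption-laden example, not the df-docker file itself: the MySQL image tag and all credentials are placeholders you would replace with your own.

```yaml
# Sketch: containerized MySQL linked into the dreamfactory service.
# Image tag and credentials below are illustrative placeholders.
services:
  mysql:
    image: mysql:5.7
    network_mode: bridge
    environment:
      MYSQL_ROOT_PASSWORD: s3cret_root
      MYSQL_DATABASE: df2
      MYSQL_USER: df_admin
      MYSQL_PASSWORD: s3cret
  dreamfactory:
    image: dreamfactorysoftware/df-docker
    network_mode: bridge
    links:
      - mysql
    environment:
      DB_HOST: mysql   # the link makes the service name resolvable in-container
      DB_USERNAME: df_admin
      DB_PASSWORD: s3cret
      DB_DATABASE: df2
```

With the link in place, DB_HOST simply refers to the service name instead of an external hostname.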
The ALLOW_FOREVER_SESSIONS and JWT_TTL variables are not required, but they adjust DreamFactory's session handling accordingly. See their wiki if you are curious.
For configuring nginx-proxy, such as adding SSL certificates, see its README.md on GitHub.
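As a quick taste of what that configuration looks like: nginx-proxy picks up certificates from a directory mounted at /etc/nginx/certs, named after the VIRTUAL_HOST. The host path below is an assumption; adjust it to wherever you keep your certificates.

```yaml
# Sketch: nginx-proxy with TLS enabled.
# /srv/certs is a placeholder host directory containing
# df.acme.com.crt and df.acme.com.key for VIRTUAL_HOST df.acme.com.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    network_mode: bridge
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/certs:/etc/nginx/certs:ro
    ports:
      - 80:80
      - 443:443
```

See the nginx-proxy README for the full set of options, such as redirecting HTTP to HTTPS.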
(PS: Jason Wilder, if you ever read this, kudos to you! Awesome idea and Docker image! )
Now let the magic happen and bring up your platform:
docker-compose up -d
# yes, that was already it, no need to "docker run" and link containers manually!
So far, so boring. "There is no scaling in here," you might say. And you are right, so let's scale things up:
docker-compose scale dreamfactory=10
# yes, that was again already it; now check "docker ps" to verify the containers are up
You have now scaled your REST API up to be served by 10 individual and immutable DreamFactory containers! You can scale them down again the same way.
(Your host system's CPU and memory are a limitation of course, as we aren't talking about something like Mesos, Kubernetes, or Swarm here.)
If you want to actually see the requests being load-balanced, check DF's "Config" page and look at the "Host" listed there. Then refresh the page a couple of times and you should see alternating container IDs.