This post is the first of a series I'm planning about an infrastructure for microservices.
First of all, a few concepts:
- According to its website, Mesos is a distributed systems kernel. Mesos abstracts the machine resources of a cluster to build a "supercomputer", exposing them as a single resource pool.
- Marathon, on the other hand, is a framework that runs on top of Mesos. Using Marathon you will be able to deploy containers and applications into a Mesos cluster.
- Zookeeper is an open-source server that provides highly reliable distributed coordination. In simple terms, Zookeeper will hold our cluster together, handling coordination and leader election, and Mesos will use it to operate.
- Nginx is a web server that can also be used as a reverse proxy.
- Nixy is an application written in Go that automatically configures Nginx for the applications deployed on Marathon/Mesos.
Architecture:
In words:
- In total, there will be 17 machines.
- Five machines will form our "master pool"; each of them will run a Mesos master, a Zookeeper node, and a Marathon instance.
- Nine machines will form our slave pool. Each of these will run only a Mesos slave, which will register itself with the Zookeeper cluster.
- Three machines will form our proxy pool. They will be in charge of service discovery and load balancing for our applications, and will run only Nginx and Nixy.
- The slave pool is where your applications will run; the master pool coordinates their execution in the slave pool.
- This architecture is highly scalable: you can have as many slave machines as you want without changing any configuration. Just add more slave machines and your resource pool will grow.
- Each of the three Nixy instances will register itself with every Marathon instance for service discovery.
- The main load balancer can be a DNS round-robin, another Nginx, maybe an HAProxy, etc. It is up to you to decide (see the sketch after this list for the Nginx option).
- Every machine in this tutorial runs CentOS 7.
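For example, if you choose Nginx as the main load balancer, a minimal sketch (inside the http block of its configuration) could look like this; the server names are taken from the table below, so adjust them to your environment:
upstream proxy_pool {
    # the three proxy machines running Nginx + Nixy
    server proxy-1;
    server proxy-2;
    server proxy-3;
}
server {
    listen 80;
    location / {
        proxy_pass http://proxy_pool;
    }
}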
Machines
Number | Pool | Hostname |
---|---|---|
#1 | Slave | slave-1 |
#2 | Slave | slave-2 |
#3 | Slave | slave-3 |
#4 | Slave | slave-4 |
#5 | Slave | slave-5 |
#6 | Slave | slave-6 |
#7 | Slave | slave-7 |
#8 | Slave | slave-8 |
#9 | Slave | slave-9 |
#10 | Master | master-1 |
#11 | Master | master-2 |
#12 | Master | master-3 |
#13 | Master | master-4 |
#14 | Master | master-5 |
#15 | Proxy | proxy-1 |
#16 | Proxy | proxy-2 |
#17 | Proxy | proxy-3 |
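One assumption throughout this tutorial is that these hostnames resolve on every machine, either through your DNS or through /etc/hosts entries. If you go the /etc/hosts route, the fragment below shows the idea; the IP addresses are placeholders that you would replace with your own:
# /etc/hosts fragment (placeholder IPs)
10.0.0.11 master-1
10.0.0.12 master-2
10.0.0.13 master-3
10.0.0.14 master-4
10.0.0.15 master-5
10.0.0.21 slave-1
10.0.0.31 proxy-1
# ...and so on for the remaining slaves and proxies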
Configuring the master pool
The following commands should be executed in every master machine.
Installing the repository
sudo rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
Installing Zookeeper, Marathon and Mesos in the master pool
As mentioned before, we will install Mesos, Zookeeper, and Marathon on each master machine.
sudo yum -y install mesos marathon mesosphere-zookeeper
Configuring Zookeeper
Set /etc/zookeeper/conf/myid to the id of the current master.
Example for master-1:
echo 1 | sudo tee /etc/zookeeper/conf/myid
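If you would rather derive the id from the hostname than type it on each machine, something like this works with the master-N naming used here (just a sketch; adapt it if your hostnames differ):
# master-1 -> 1, master-2 -> 2, ...
echo "$(hostname | grep -o '[0-9]*$')" | sudo tee /etc/zookeeper/conf/myid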
Configure /etc/zookeeper/conf/zoo.cfg, listing every machine our cluster will have (2888 is the peer connection port and 3888 the leader-election port):
server.1=master-1:2888:3888
server.2=master-2:2888:3888
server.3=master-3:2888:3888
server.4=master-4:2888:3888
server.5=master-5:2888:3888
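For reference, a complete minimal zoo.cfg might look like the following. The dataDir, clientPort, and timing values are the usual Zookeeper defaults, which the mesosphere-zookeeper package should already ship, so only the server.N lines really need to be added:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=master-1:2888:3888
server.2=master-2:2888:3888
server.3=master-3:2888:3888
server.4=master-4:2888:3888
server.5=master-5:2888:3888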
Configuring Mesos
Now you need to edit /etc/mesos/zk, setting the Zookeeper URL of your cluster:
echo zk://master-1:2181,master-2:2181,master-3:2181,master-4:2181,master-5:2181/mesos | sudo tee /etc/mesos/zk
Since we have five master machines, the Mesos quorum will be three (a strict majority of the five masters):
echo 3 | sudo tee /etc/mesos-master/quorum
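If you are driving all five masters from a single workstation, a small loop saves some repetition. This is only a sketch: it assumes SSH access and passwordless sudo on the masters:
ZK="zk://master-1:2181,master-2:2181,master-3:2181,master-4:2181,master-5:2181/mesos"
for host in master-1 master-2 master-3 master-4 master-5; do
    ssh "$host" "echo '$ZK' | sudo tee /etc/mesos/zk; echo 3 | sudo tee /etc/mesos-master/quorum"
done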
Disabling Mesos slave on the master nodes
Since we have a separate slave pool, there is no need to run a Mesos slave alongside our Mesos master instances, so let's disable it:
sudo systemctl stop mesos-slave.service
sudo systemctl disable mesos-slave.service
Configuring Marathon
Add the following lines to /etc/sysconfig/marathon:
MARATHON_EVENT_SUBSCRIBER=http_callback
MARATHON_TASK_LAUNCH_TIMEOUT=600000
MARATHON_TASK_LOST_EXPUNGE_GC=60000
MARATHON_TASK_LOST_EXPUNGE_INITIAL_DELAY=60000
MARATHON_TASK_LOST_EXPUNGE_INTERVAL=60000
Depending on the task you want to run on Marathon, it can take quite a while to start, so we increased the task launch timeout to 10 minutes. We are also enabling the http callback feature, which Nixy relies on for service discovery: it registers itself with Marathon and updates Nginx whenever a service becomes healthy or unhealthy. Finally, we decreased the intervals at which Marathon expunges lost tasks, which can happen from time to time.
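Once Nixy is running (we will set it up later in this post), you can confirm that the callback registration worked by asking Marathon for its subscribers; the /v2/eventSubscriptions endpoint is available when http_callback is enabled:
curl http://master-1:8080/v2/eventSubscriptions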
Restarting Mesos, Marathon and Zookeeper
sudo systemctl restart zookeeper
sudo systemctl restart mesos-master
sudo systemctl restart marathon
That's all for the master pool.
Configuring the slave pool
The following commands should be executed in every slave machine.
Installing the repository
sudo rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
Installing Mesos
sudo yum -y install mesos
Configuring Mesos
Just like on the master nodes, you need to edit /etc/mesos/zk, setting the Zookeeper URL of your master cluster:
echo zk://master-1:2181,master-2:2181,master-3:2181,master-4:2181,master-5:2181/mesos | sudo tee /etc/mesos/zk
Disabling Mesos master in the slave nodes
Since the mesos package is the same, we need to disable the master instance on each slave machine, so that only Mesos slave instances are running:
sudo systemctl stop mesos-master.service
sudo systemctl disable mesos-master.service
Restarting Mesos
sudo systemctl restart mesos-slave
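To double-check that the slave came up and found the masters, you can query its HTTP endpoint on the default slave port 5051 (on newer Mesos versions the path is /state instead of /state.json) and look for the elected master in the output:
curl -s http://localhost:5051/state.json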
Validating our cluster
If everything worked as expected, you should be able to access Mesos using:
http://master-1:5050/
The cluster will elect one Mesos master as the leader. If master-1 is not the leader, it will redirect you to the current leader.
You should be able to see your pool's resources, such as how much memory and CPU you have and how many slaves are registered.
Also, you can validate if Marathon is up, accessing:
http://master-1:8080/
You can access any Marathon instance in your master cluster.
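If you prefer the command line over the browser, both services can be probed directly. Marathon answers /ping with "pong", and the Mesos state endpoint reports the elected leader (on newer Mesos versions the path is /master/state):
curl http://master-1:8080/ping
curl -s http://master-1:5050/state.json | grep -o '"leader":"[^"]*"'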
Configuring the proxy pool
The following commands should be executed in every proxy machine.
Installing Nginx
First, add the EPEL repository, which provides the Nginx package:
sudo yum -y install epel-release
Then, install Nginx:
sudo yum -y install nginx
After that, enable it to start when your machine starts:
sudo systemctl enable nginx
Finally, you start it:
sudo systemctl start nginx
Installing Nixy
Get the latest Nixy .tar.gz from its GitHub releases page. At the time of writing, the latest version is 0.6.0.
wget https://github.com/martensson/nixy/releases/download/v0.6.0/nixy_0.6.0_linux_amd64.tar.gz
After downloading, uncompress it:
tar -xvf nixy_0.6.0_linux_amd64.tar.gz
And then move it to /opt/nixy:
mv nixy_0.6.0_linux_amd64/ /opt/nixy/
Configuring Nixy
Edit your /opt/nixy/nixy.toml to:
port = "8000"
marathon = ["http://master-1:8080", "http://master-2:8080", "http://master-3:8080", "http://master-4:8080", "http://master-5:8080"]
nginx_config = "/etc/nginx/nginx.conf"
nginx_template = "/opt/nixy/nginx.tmpl"
nginx_cmd = "nginx"
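Before wiring Nixy into systemd (next section), it can be handy to run it once in the foreground and watch it render the Nginx configuration; stop it with Ctrl+C when you are done:
/opt/nixy/nixy -f /opt/nixy/nixy.toml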
After that, edit your /opt/nixy/nginx.tmpl:
worker_processes auto;
pid /run/nginx.pid;
events {
    use epoll;
    worker_connections 2048;
    multi_accept on;
}
http {
    add_header X-Proxy {{ .Xproxy }} always;
    access_log off;
    error_log /var/log/nginx/error.log warn;
    server_tokens off;
    client_max_body_size 128m;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_redirect off;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    # time out settings
    proxy_send_timeout 120;
    proxy_read_timeout 120;
    send_timeout 120;
    keepalive_timeout 10;
    {{- range $id, $app := .Apps}}
    upstream {{index $app.Hosts 0}} {
        {{- range $app.Tasks}}
        server {{ . }};
        {{- end}}
    }
    {{- end}}
    server {
        listen 80;
        server_name app.org;
        location / {
            return 503;
        }
        {{- range $id, $app := .Apps}}
        location /{{index $app.Hosts 0}}/ {
            proxy_set_header HOST $host;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_connect_timeout 30;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_pass http://{{index $app.Hosts 0}};
        }
        {{- end}}
    }
}
This is your Nginx template. If you deploy an app named "hello-world" on Marathon, Nixy will reload your Nginx and you will be able to access it at http://proxy-N/hello-world/, where N is 1, 2, or 3.
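Deploying applications is the subject of the next post, but as a quick taste, a minimal "hello-world" definition posted to Marathon's /v2/apps endpoint could look roughly like this; the cmd is a throwaway Python static file server, and $PORT0 is the port Marathon assigns to the task:
curl -X POST http://master-1:8080/v2/apps \
    -H 'Content-Type: application/json' \
    -d '{
        "id": "hello-world",
        "cmd": "python -m SimpleHTTPServer $PORT0",
        "cpus": 0.1,
        "mem": 64,
        "instances": 2
    }'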
It probably goes without saying, but thanks to Nginx's graceful reload, there is no downtime while Nixy is reloading it.
Configure Nixy to start as a service
Since Nixy ships without a start/stop script, we will create our own.
Add the following lines to /etc/systemd/system/nixy.service:
[Unit]
Description=Nixy Service
After=nginx.service
[Service]
Type=simple
ExecStart=/bin/sh -c '/opt/nixy/nixy -f /opt/nixy/nixy.toml > /var/log/nixy.log 2>&1'
Restart=always
[Install]
WantedBy=multi-user.target
Then, enable it to start when the machine boots:
sudo systemctl enable nixy
Last but not least, start Nixy (make sure both Nginx and Marathon are running first):
sudo systemctl start nixy
Validating
You can check whether Nixy is running by looking at its status:
sudo systemctl status nixy
Or its log:
tail -f /var/log/nixy.log
Or even its URL:
curl http://localhost:8000/v1/health
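And once an application is deployed (such as the hello-world example from earlier), you can exercise the whole chain through the proxy:
curl http://proxy-1/hello-world/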
Other posts about REST and Microservices
Here are some other posts I've written that may be useful to you:
Deploying and running Docker containers on Marathon
The basic you should know before building your REST api
Building microservices using Undertow, CDI and JAX-RS
Finalizing
Now you are able to deploy your applications on Marathon. An application is exposed at http://proxy-N/{marathon-app-id} as soon as its deployment finishes. Don't know how to use Marathon yet? Don't worry: in the next post we will talk about how to deploy your applications with it.
I hope this post was helpful to you.
Questions? Please, leave a comment!