Getting A Cluster Started
Now, we have everything in place to get a consul cluster up and running quickly. The process is relatively simple.
On a server that contains the bootstrap configuration file (server1 in our case), use su to change to the consul user briefly. We can then call consul and pass in the bootstrap directory as an argument:
su consul
consul agent -config-dir /etc/consul.d/bootstrap
The service should start up and occupy the terminal window. In bootstrap mode, this server will self-elect as leader, creating a basis for forming the cluster.
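If you want to confirm that the bootstrap server has elected itself leader, Consul's HTTP API exposes the current leader on the agent's default port 8500. A minimal sketch, assuming default ports (run from a second terminal on server1):

```shell
# Query the local agent for the current raft leader. An empty response
# means no leader has been elected yet; an unreachable agent behaves
# the same way here because of the `|| true` guard.
leader=$(curl -s http://localhost:8500/v1/status/leader || true)

if [ -n "$leader" ]; then
  echo "current leader: $leader"
else
  echo "no leader yet (or agent not reachable)"
fi
```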
On your other consul servers, as root, start the consul service that we just created with the upstart script by typing:

start consul
These servers will connect to the bootstrapped server, completing the cluster. At this point, we have a cluster of three servers, two of which are operating normally, and one of which is in bootstrap mode, meaning that it can make executive decisions without consulting the other servers.
This is not what we want. We want each of the servers on equal footing. Now that the cluster is created, we can shut down the bootstrapped consul instance and then re-enter the cluster as a normal server.
To do this, press CTRL-C in the bootstrapped server's terminal.
Now, exit back into your root session and start the consul service like you did with the rest of the servers:
exit
start consul
This will cause the previously-bootstrapped server to rejoin the cluster as a normal member without elevated privileges, bringing the cluster into its final state.
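To confirm that all three servers are now ordinary raft peers, you can ask the HTTP API for the peer set from any server. A sketch, assuming default ports and the example addresses from this guide (the sample response below is illustrative, not captured from a live cluster):

```shell
# In practice you would fetch the live peer set from any server:
#   peers=$(curl -s http://localhost:8500/v1/status/peers)
# Sample of what a healthy three-server cluster would return (8300 is
# the server RPC port, distinct from the 8301 gossip port):
peers='["192.0.2.1:8300","192.0.2.2:8300","192.0.2.3:8300"]'

# Count entries by splitting the JSON array on commas.
peer_count=$(printf '%s\n' "$peers" | tr ',' '\n' | wc -l)
echo "raft peers: $peer_count"
```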
Now that the cluster is fully operational, client machines can connect. On the client machine, start the consul service as root, just as you did on the servers:

start consul
The client will join the cluster in client mode. You can see the members of the cluster (servers and clients) by running consul members on any of the machines:
consul members

Node     Address          Status  Type    Build  Protocol
server3  192.0.2.3:8301   alive   server  0.3.0  2
server2  192.0.2.2:8301   alive   server  0.3.0  2
server1  192.0.2.1:8301   alive   server  0.3.0  2
agent1   192.0.2.50:8301  alive   client  0.3.0  2
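If you want to script against this output, the columns are stable enough for awk. A minimal sketch that counts healthy servers, using the sample rows above (in practice you would pipe the live consul members output in instead):

```shell
# Sample `consul members` data rows (taken from the output above).
members='server3  192.0.2.3:8301   alive  server  0.3.0  2
server2  192.0.2.2:8301   alive  server  0.3.0  2
server1  192.0.2.1:8301   alive  server  0.3.0  2
agent1   192.0.2.50:8301  alive  client  0.3.0  2'

# Column 3 is Status and column 4 is Type; keep rows for alive servers.
alive_servers=$(printf '%s\n' "$members" | awk '$3 == "alive" && $4 == "server"' | wc -l)
echo "alive servers: $alive_servers"
```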
Connecting to the Web UI
We have configured our client machine to host a web interface to the cluster. However, this is served only on the loopback interface, meaning that it is not accessible through the machine's public address.
To get access to the web UI, we will create an SSH tunnel to the client machine that holds the UI files. Consul serves the HTTP interface on port 8500. We will tunnel our local port 8500 to the client machine’s port 8500. On your local computer, type:
ssh -N -f -L 8500:localhost:8500 firstname.lastname@example.org
This will connect to the remote machine, create a tunnel between our local port and the remote port and then put the connection into the background.
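Before switching to the browser, you can sanity-check that the forwarding works from the command line. A quick sketch (run on your local machine, assuming the tunnel above is already running in the background):

```shell
# Request Consul through the forwarded local port; -o discards the
# response body and -w prints only the HTTP status code. curl reports
# 000 when nothing is listening on the local end of the tunnel.
code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8500/ || true)
echo "HTTP status through tunnel: ${code:-000}"
```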
In your local web browser, you can now access the consul web interface by navigating to:

http://localhost:8500
This will bring up the default web UI page.
You can use this interface to check the health of your servers and get an overview of your services and infrastructure.
When you are finished using the web UI, you can close the SSH tunnel. Find the process's PID by using the ps command and grepping for the port number we forwarded:
ps aux | grep 8500

1001   5275  0.0  0.0  43900  1108  ?      Ss  12:03  0:00  ssh -N -f -L 8500:localhost:8500 email@example.com
1001   5309  0.0  0.0  13644   948  pts/7  S+  12:12  0:00  grep --colour=auto 8500
The PID we need is the second field on the line that contains the tunneling command (5275 in the output above). We can then pass this number to the kill command to close the tunnel:

kill 5275
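As an alternative to the ps/grep/kill sequence, systems with pkill (from the pgrep/procps family) can match the tunnel's command line directly. A sketch, assuming the exact forwarding spec used above:

```shell
# -f matches the full command line, so only the ssh process carrying
# our tunnel is targeted. The port is kept in a variable so this
# script's own command line never contains the literal pattern.
port=8500
if pkill -f "ssh -N -f -L ${port}:localhost:${port}"; then
  tunnel_status="closed"
else
  tunnel_status="no matching tunnel found"
fi
echo "tunnel: $tunnel_status"
```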