RabbitMQ installation on Ubuntu 12.04 LTS

From MyWiki

Revision as of 16:04, 20 April 2014 by Admin (Talk | contribs)

Debian

$  wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
$  sudo apt-key add rabbitmq-signing-key-public.asc 


Add the following line to /etc/apt/sources.list:

deb http://www.rabbitmq.com/debian/ testing main
#  apt-get update
#  apt-cache showpkg rabbitmq-server
#  apt-get install rabbitmq-server


Ubuntu 12.04 LTS

References:
[1] RabbitMQ and Erlang and Ubuntu (12.04) Oh My!
[2] Installing on Debian / Ubuntu
[3] Clustering Guide
[4] Configuration
[5] LDAP Plugin


Clone VM from Ubuntu template

Rename the server

Edit /etc/iptables.active.rules

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
#
# allow loopback
-A INPUT -s 127.0.0.1 -d 127.0.0.1 -i lo+ -j ACCEPT
#
# allow DNS queries
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
#
# allow RabbitMQ
# For a cluster of nodes, they must be open to each other on 25672 (inter-node), 4369 (epmd) and 5672 (AMQP).
# For any servers that want to use the message queue, only 5672 is required.
-A INPUT -p tcp -m tcp --dport 5672 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.8.116.42 --dport 25672 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.8.116.43 --dport 25672 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.8.116.42 --dport 4369 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.8.116.43 --dport 4369 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 15672 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
#
# allow NTP
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
#
# allow SSH in
-A INPUT -p tcp -m tcp --dport 22 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
#
# allow monitoring.production.smartbox.com in
-A INPUT -p tcp -m tcp -s 10.10.0.29 --dport 10050 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.10.0.38 --dport 10050 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.10.0.39 --dport 10050 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT
#
# allow already established connections
-A INPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# allow ICMP
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
#
# Log everything that's blocked
# -A INPUT -j LOG --log-prefix "rejected: "
COMMIT

/etc/hosts (both nodes must be able to resolve each other's hostnames, via DNS or extra entries here):

127.0.0.1	localhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

/etc/hostname

mqnode-01

Set up TCP/IP
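The interface configuration itself isn't recorded here; on Ubuntu 12.04 a static address would typically go into /etc/network/interfaces, roughly like this (the addresses below are placeholders, not values from this environment):

```
auto eth0
iface eth0 inet static
    address 10.8.116.42
    netmask 255.255.255.0
    gateway 10.8.116.1
    dns-nameservers 10.8.116.1
```

Follow with `ifdown eth0 && ifup eth0` (or a reboot) to apply.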

Add the following line to /etc/apt/sources.list:

deb http://packages.erlang-solutions.com/debian precise contrib

Now get the PGP key and run apt-get update:

  wget http://binaries.erlang-solutions.com/debian/erlang_solutions.asc
  apt-key add erlang_solutions.asc 
  apt-get update

Install Erlang:

  apt-get install erlang
  apt-get install erlang-nox

Get the latest RabbitMQ:

  wget -P /tmp http://www.rabbitmq.com/releases/rabbitmq-server/v3.3.0/rabbitmq-server_3.3.0-1_all.deb
  dpkg -i /tmp/rabbitmq-server_3.3.0-1_all.deb

Stop the server and rename the node:

  service rabbitmq-server stop
  vi /etc/rabbitmq/rabbitmq-env.conf

add:

NODENAME=mq@mqnode-01

Start RabbitMQ again and check the status:

  service rabbitmq-server start

root@mqnode-01:~# rabbitmqctl status
Status of node 'mq@mqnode-01' ...
[{pid,1994},
 {running_applications,[{rabbit,"RabbitMQ","3.3.0"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {xmerl,"XML parser","1.3.6"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,34581144},
          {connection_procs,2632},
          {queue_procs,5264},
          {plugins,0},
          {other_proc,13301880},
          {mnesia,58688},
          {mgmt_db,0},
          {msg_index,33584},
          {other_ets,769384},
          {binary,13680},
          {code,16367375},
          {atom,594537},
          {other_system,3434120}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,416132300},
 {disk_free_limit,50000000},
 {disk_free,1316016128},
 {file_descriptors,[{total_limit,924},
                    {total_used,3},
                    {sockets_limit,829},
                    {sockets_used,1}]},
 {processes,[{limit,1048576},{used,124}]},
 {run_queue,0},
 {uptime,465}]
...done.


Clustering

Do all the above on the second node (of course, keeping the hostname different).

The end result should produce the below status:

root@mqnode-02:~# rabbitmqctl status
Status of node 'mq@mqnode-02' ...
[{pid,9709},
 {running_applications,[{rabbit,"RabbitMQ","3.3.0"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {xmerl,"XML parser","1.3.6"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,34835080},
          {connection_procs,2632},
          {queue_procs,5264},
          {plugins,0},
          {other_proc,13567696},
          {mnesia,58688},
          {mgmt_db,0},
          {msg_index,21576},
          {other_ets,766920},
          {binary,14928},
          {code,16367375},
          {atom,594537},
          {other_system,3435464}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,416132300},
 {disk_free_limit,50000000},
 {disk_free,1194078208},
 {file_descriptors,[{total_limit,924},
                    {total_used,3},
                    {sockets_limit,829},
                    {sockets_used,1}]},
 {processes,[{limit,1048576},{used,124}]},
 {run_queue,0},
 {uptime,11}]
...done.

Make sure that the Erlang cookie is the same on both nodes: copy the content of /var/lib/rabbitmq/.erlang.cookie from node #1 to node #2 and check that it works.

root@mqnode-02:~# service rabbitmq-server stop
root@mqnode-02:~# vi /var/lib/rabbitmq/.erlang.cookie 
root@mqnode-02:~# service rabbitmq-server start
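An alternative to pasting the cookie by hand is copying it over SSH. The paths are the package defaults; the ownership and mode commands assume the stock rabbitmq user (a sketch, not a verified transcript):

```shell
# on mqnode-02: stop the server before touching the cookie
service rabbitmq-server stop

# on mqnode-01: push the cookie to node #2
scp /var/lib/rabbitmq/.erlang.cookie root@mqnode-02:/var/lib/rabbitmq/.erlang.cookie

# on mqnode-02: restore ownership/permissions and start again
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
service rabbitmq-server start
```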

root@mqnode-01:~# rabbitmqctl -n mq@mqnode-02 status
Status of node 'mq@mqnode-02' ...
[{pid,9358},
 {running_applications,[{rabbit,"RabbitMQ","3.3.0"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {xmerl,"XML parser","1.3.6"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,34560416},
          {connection_procs,2632},
          {queue_procs,5264},
          {plugins,0},
          {other_proc,13303992},
          {mnesia,58248},
          {mgmt_db,0},
          {msg_index,23808},
          {other_ets,762920},
          {binary,13648},
          {code,16361039},
          {atom,594537},
          {other_system,3434328}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,416132300},
 {disk_free_limit,50000000},
 {disk_free,1194278912},
 {file_descriptors,[{total_limit,924},
                    {total_used,3},
                    {sockets_limit,829},
                    {sockets_used,1}]},
 {processes,[{limit,1048576},{used,124}]},
 {run_queue,0},
 {uptime,83}]
...done.

On the second node, stop the RabbitMQ application:

root@mqnode-02:~# rabbitmqctl stop_app
Stopping node 'mq@mqnode-02' ...
...done.

Join the cluster with node #1:

root@mqnode-02:~# rabbitmqctl join_cluster mq@mqnode-01
Clustering node 'mq@mqnode-02' with 'mq@mqnode-01' ...
...done.


root@mqnode-02:~# rabbitmqctl cluster_status
Cluster status of node 'mq@mqnode-02' ...
[{nodes,[{disc,['mq@mqnode-01','mq@mqnode-02']}]}]
...done.

Check cluster status from node #1:

root@mqnode-01:~# rabbitmqctl cluster_status
Cluster status of node 'mq@mqnode-01' ...
[{nodes,[{disc,['mq@mqnode-01','mq@mqnode-02']}]},
 {running_nodes,['mq@mqnode-01']},
 {cluster_name,<<"mq@mqnode-01.sandbox.local">>},
 {partitions,[]}]
...done.

Start RabbitMQ application on node #2 again and check:

root@mqnode-02:~# rabbitmqctl start_app
Starting node 'mq@mqnode-02' ...
...done.
root@mqnode-02:~# rabbitmqctl cluster_status
Cluster status of node 'mq@mqnode-02' ...
[{nodes,[{disc,['mq@mqnode-01','mq@mqnode-02']}]},
 {running_nodes,['mq@mqnode-01','mq@mqnode-02']},
 {cluster_name,<<"mq@mqnode-01.sandbox.local">>},
 {partitions,[]}]
...done.

And from node #1 again:

root@mqnode-01:~# rabbitmqctl cluster_status
Cluster status of node 'mq@mqnode-01' ...
[{nodes,[{disc,['mq@mqnode-01','mq@mqnode-02']}]},
 {running_nodes,['mq@mqnode-02','mq@mqnode-01']},
 {cluster_name,<<"mq@mqnode-01.sandbox.local">>},
 {partitions,[]}]
...done.

Both nodes of the cluster are now up and running.
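Note that clustering by itself only replicates metadata (users, vhosts, exchanges, queue definitions); queue contents live on a single node unless mirroring is configured. If mirrored queues are wanted, set an HA policy. This example mirrors every queue to all nodes, which is an assumption about the desired setup; adjust the pattern and ha-mode to taste:

```shell
# mirror all queues ("^" matches every queue name) across every node in the cluster
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
```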


Management

Enable rabbitmq_management plugin:

root@mqnode-01:~# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.

root@mqnode-01:~# service rabbitmq-server restart

root@mqnode-01:~# netstat -anp | grep 15672
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      3222/beam       

Add the following to /etc/iptables.active.rules (if it hasn’t been added already):

-A INPUT -p tcp -m tcp --dport 15672 --tcp-flags SYN,RST,ACK,ACK SYN -j ACCEPT

Reload the rules:

root@mqnode-01:~# iptables-restore < /etc/iptables.active.rules

Enable the same plugin on node #2 and add the iptables rule there as well; otherwise node #2 statistics won’t show up in the Web UI.

As of RabbitMQ 3.3.0 the default guest user can only log in from localhost, so we need to create a new administrator user:

root@mqnode-01:~# rabbitmqctl list_users
Listing users ...
guest	[administrator]
...done.
root@mqnode-01:~# rabbitmqctl add_user alex changepwd
Creating user "alex" ...
...done.
root@mqnode-01:~# rabbitmqctl set_user_tags alex administrator
Setting tags for user "alex" to [administrator] ...
...done.
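The administrator tag only covers the management plugin; before alex can actually publish or consume, the user also needs permissions on a vhost (here the default vhost "/"):

```shell
# grant configure/write/read on all resources in the default vhost
rabbitmqctl set_permissions -p / alex ".*" ".*" ".*"
```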

Check that the user list is now the same on node #2:

root@mqnode-02:~# rabbitmqctl list_users
Listing users ...
alex	[administrator]
guest	[administrator]
...done.

Now log in to the management Web UI and enjoy the view: http://mqnode-01.sandbox.local:15672/

Add more space to /var, because that’s where the queues are going to live:

root@mqnode-01:~# lvextend -L+2G /dev/sysvg/varlv 
root@mqnode-01:~# resize2fs /dev/sysvg/varlv 

Do the same on node #2.

To be on the safe side, delete the guest user.
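User changes replicate across the cluster, so running this on either node is enough:

```shell
# remove the default guest account
rabbitmqctl delete_user guest
```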


TODO: add log rotation details and set it up properly.
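The Debian package installs a logrotate snippet at /etc/logrotate.d/rabbitmq-server; until the proper setup is documented here, something along these lines should be close (the retention and frequency values are guesses, not the packaged defaults):

```
/var/log/rabbitmq/*.log {
        weekly
        missingok
        rotate 20
        compress
        notifempty
        sharedscripts
        postrotate
                rabbitmqctl -q rotate_logs > /dev/null
        endscript
}
```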


LDAP (Active Directory) authentication

By default there is no configuration file, which can be confirmed in /var/log/rabbitmq/mq@mqnode-01.log:

=INFO REPORT==== 5-Apr-2014::23:56:26 ===
Starting RabbitMQ 3.3.0 on Erlang R16B03-1
Copyright (C) 2007-2013 GoPivotal, Inc.
Licensed under the MPL.  See http://www.rabbitmq.com/

=INFO REPORT==== 5-Apr-2014::23:56:26 ===
node           : mq@mqnode-01
home dir       : /var/lib/rabbitmq
config file(s) : (none)
cookie hash    : IPcFWy3U5jdINhKoM34KhA==
log            : /var/log/rabbitmq/mq@mqnode-01.log
sasl log       : /var/log/rabbitmq/mq@mqnode-01-sasl.log
database dir   : /var/lib/rabbitmq/mnesia/mq@mqnode-01
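To get LDAP authentication going, enable the plugin (rabbitmq-plugins enable rabbitmq_auth_backend_ldap, then restart) and create /etc/rabbitmq/rabbitmq.config. A minimal sketch for Active Directory follows; the server names and the UPN domain are placeholders, not values confirmed for this environment:

```
[
  {rabbit, [
    %% try LDAP first, fall back to the internal user database
    {auth_backends, [rabbit_auth_backend_ldap, rabbit_auth_backend_internal]}
  ]},
  {rabbitmq_auth_backend_ldap, [
    {servers,         ["dc1.sandbox.local", "dc2.sandbox.local"]},
    {user_dn_pattern, "${username}@sandbox.local"},
    {use_ssl,         false},
    {port,            389},
    {log,             true}
  ]}
].
```

Restart RabbitMQ afterwards and verify in the log that the config file is now picked up.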