Archive for June, 2014

Imagine a scenario where the ESB is configured to forward requests to a backend service. The client sends a request to the ESB, the ESB forwards it to the backend service, the backend service sends a response back to the ESB, and the ESB forwards that response to the client.

When the ESB forwards the request to the backend service, it creates a TCP connection with the backend server. Below is the Wireshark TCP stream filter output for a single TCP stream.

TCP packets exchanged for a single request/response


You can see multiple TCP packets being exchanged:
SYN
SYN ACK
ACK
#other ACKs
FIN ACK
FIN ACK
ACK

There are 6 additional TCP packets beyond the actual data for a single TCP connection. When a client sends multiple requests to the same proxy, the ESB has to repeat this handshake and teardown over and over; every time, 6 more TCP packets are wasted. Keep-Alive is the way to avoid that. When Keep-Alive is on, the ESB does not create a new TCP connection for every request-response exchange; instead it reuses the same connection to pass data to and from the backend. The idea is to use a single persistent connection for multiple requests/responses.
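The effect is easy to observe outside the ESB as well. The sketch below (plain Python 3, not WSO2-specific) spins up a throwaway local HTTP server and sends two requests over one persistent connection:

```python
# A minimal sketch of HTTP keep-alive: two requests travel over the same
# TCP connection to a throwaway local server.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # HTTP/1.1 defaults to Keep-Alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OkHandler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(2):                   # both requests reuse the same socket
    conn.request("GET", "/")
    resp = conn.getresponse()
    statuses.append(resp.status)
    resp.read()                      # drain the body so the socket can be reused
conn.close()
server.shutdown()
print(statuses)                      # [200, 200] over a single TCP connection
```

Only one SYN/FIN exchange happens here, no matter how many requests are issued on the connection.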

The image below clearly shows the difference in how the ESB communicates with the backend when Keep-Alive is turned off and on.


Difference when Keep-Alive is turned on and off

Disable Keep-Alive

By default, Keep-Alive is enabled in the ESB. However, there may be scenarios where the backend service does not support keep-alive. In that case we have to switch off Keep-Alive as below:

<property name="NO_KEEPALIVE" value="true" scope="axis2"/>

The above property does not disable Keep-Alive for every mediation. If you want to disable Keep-Alive globally, add the property below to the repository/conf/passthru-http.properties file:

http.connection.disable.keepalive=true
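For context, here is how the per-mediation property shown earlier might sit inside a proxy service. This is only a sketch; the proxy name and endpoint address are made up for illustration:

```xml
<proxy name="SampleProxy" transports="http https">
    <target>
        <inSequence>
            <!-- Disable Keep-Alive for this mediation only -->
            <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="http://backend.example.com/service"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
```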

References

http://en.wikipedia.org/wiki/HTTP_persistent_connection
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html

The WSO2 truststore contains the certificates of the third parties trusted by a WSO2 Carbon server. By default, the truststore ships with some certificates such as GoDaddy, VeriSign etc. You can view the existing certificates as follows:

List existing certificates
keytool -list -v -keystore CARBON_HOME/repository/resources/security/client-truststore.jks

Below is a sample output of the listed certificate details.

Alias name: verisignclass3g3ca
Creation date: Mar 13, 2009
Entry type: trustedCertEntry

Owner: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Serial number: 9b7e0649a33e62b9d5ee90487129ef57
Valid from: Fri Oct 01 06:00:00 IST 1999 until: Thu Jul 17 05:29:59 IST 2036
Certificate fingerprints:
	 MD5:  CD:68:B6:A7:C7:C4:CE:75:E0:1D:4F:57:44:61:92:09
	 SHA1: 13:2D:0D:45:53:4B:69:97:CD:B2:D5:C3:39:E2:55:76:60:9B:5C:C6
	 Signature algorithm name: SHA1withRSA
	 Version: 1


*******************************************
*******************************************

Alias name: godaddyclass2ca
Creation date: Mar 13, 2009
Entry type: trustedCertEntry

Owner: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Issuer: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Serial number: 0
Valid from: Tue Jun 29 23:06:20 IST 2004 until: Thu Jun 29 22:36:20 IST 2034
Certificate fingerprints:
	 MD5:  91:DE:06:25:AB:DA:FD:32:17:0C:BB:25:17:2A:84:67
	 SHA1: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
	 Signature algorithm name: SHA1withRSA
	 Version: 3

Extensions: 

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D2 C4 B0 D2 91 D4 4C 11   71 B3 61 CB 3D A1 FE DD  ......L.q.a.=...
0010: A8 6A D4 E3                                        .j..
]
]

Add a CA certificate you trust

Sometimes you may want your Carbon server to trust a certificate from a third party. In that case, you have to add that certificate to the Carbon truststore.

 keytool -import -alias udara.com  -file udara.com.crt -keystore CARBON_HOME/repository/resources/security/client-truststore.jks

Please enter “yes” when you are prompted with “Trust this certificate? [no]:”.

If the certificate is imported successfully, you will see the output “Certificate was added to keystore” at the end.

keytool -import -alias udara   -file certificate.crt -keystore client-truststore.jks 
Enter keystore password:  
Owner: EMAILADDRESS=udaraliyanage@gmail.com, CN=udara.com, OU=section, O=Udara Company, L=Wadduwa, ST=Western, C=LK
Issuer: EMAILADDRESS=udaraliyanage@gmail.com, CN=udara.com, OU=section, O=Udara Company, L=Wadduwa, ST=Western, C=LK
Serial number: f486cce7e716f5a2
Valid from: Sat Jun 14 19:26:33 IST 2014 until: Sun Jun 14 19:26:33 IST 2015
Certificate fingerprints:
	 MD5:  DC:A2:CE:72:91:4B:66:12:2B:D0:C9:70:A8:54:3B:45
	 SHA1: B1:09:CF:D8:1E:43:ED:B5:34:7B:75:F8:D8:A8:6A:4F:BC:CB:AD:CB
	 Signature algorithm name: SHA256withRSA
	 Version: 3

Extensions: 

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [

KeyIdentifier [
0000: 71 5F 14 CB A0 DC 4D A5   8E 1E A2 5C B4 E2 6F 7F  q_....M....\..o.
0010: 82 C8 C8 7E                                        ....
]

]

Trust this certificate? [no]:  yes         
Certificate was added to keystore
Verify the certificate is added
keytool -list -v -keystore CARBON_HOME/repository/resources/security/client-truststore.jks | grep udara.com

 

Search with the alias you provided when importing the certificate. You should see the details of the added certificate.

udara@udara-ThinkPad-T530:~/projects/support/keys$ keytool -list -keystore client-truststore.jks | grep -i udara
Enter keystore password:  wso2carbon
udara, Jun 14, 2014, trustedCertEntry,
Create a private key

Please note that you will be prompted to enter a passphrase; remember it, as you will need it later.

sudo openssl genrsa -des3 -out udara.com.key 1024

The generated private key is written to udara.com.key.

Create a certificate signing request
sudo openssl req -new -key udara.com.key -out udara.com.csr

You will be prompted for the passphrase and other details needed to create the certificate. Enter the same passphrase you entered in the previous step.

root@udara-ThinkPad-T530: sudo openssl req -new -key udara.com.key -out udara.com.csr
Enter pass phrase for udara.com.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:LK
State or Province Name (full name) [Some-State]:Western
Locality Name (eg, city) []:COlombo
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Udara Pvt Ltd
Organizational Unit Name (eg, section) []:stratos
Common Name (e.g. server FQDN or YOUR name) []:udara.com
Email Address []:udaraliyanage@gmail.com
Remove the passphrase (Optional)

This step is optional. If the passphrase is not removed, you will have to provide it every time Nginx is started or restarted.

cp udara.com.key udara.com.key.back
sudo openssl rsa -in udara.com.key.back -out udara.com.key

udara.com.key now contains the private key with the passphrase removed.

Self sign the certificate
sudo openssl x509 -req -days 365 -in udara.com.csr -signkey udara.com.key -out udara.com.crt
Install the keys to Nginx

Create a directory for ssl

	sudo mkdir /etc/nginx/ssl

Copy the private key and the signed certificate to the ssl directory.

sudo cp udara.com.crt /etc/nginx/ssl/udara.com.crt
sudo cp udara.com.key /etc/nginx/ssl/udara.com.key
Configure certificates to Nginx
server {
        listen 443;
        server_name udara.com;

        root /usr/share/nginx/www;
        index index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/udara.com.crt;
        ssl_certificate_key /etc/nginx/ssl/udara.com.key; 
}
Restart Nginx in order to apply the changes
sudo service nginx restart
Test the configurations

Point your browser to https://udara.com. You will see a warning box like the one below, since your browser does not trust your self-signed certificate. Proceed by clicking “I understand the risks”.

Firefox SSL warning

 

Debug the SSL certificate from the command line

You can view the certificate from the command line as below.

openssl s_client -connect udara.com:443
CONNECTED(00000003)
depth=0 C = US, ST = CA, L = Mountain View, O = WSO2, CN = localhost
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 C = US, ST = CA, L = Mountain View, O = WSO2, CN = localhost
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
   i:/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoM
BFdTTzIxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAy
MTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UEAwwJbG9jYWxob3N0
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTousMzO
M4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe
0hseUdN5HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXn
RS4HrKGJTzxaCcU7OQIDAQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcN
AQEFBQADgYEAW5wPR7cr1LAdq+IrR44iQlRG5ITCZXY9hI0PygLP2rHANh+PYfTm
xbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJRO4d1DeGHT/YnIjs9JogR
Kv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
issuer=/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
---
No client certificate CA names sent
---
SSL handshake has read 1100 bytes and written 443 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 061F79D65FD224EDFFC5130BEE77EE37183F1C6AB943315B1B00C64BE6C64DB9
    Session-ID-ctx: 
    Master-Key: 84E05FFF76FF291E0A8FB08981D1CD86407E93B0A1DEC6CD115ACCCFD4514ACC139BCE33D51E73E50F65860A10FAD8CE
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 90 8e 1c dd 0e 56 c5 73-1c 7e 2f dd 21 7a c9 0b   .....V.s.~/.!z..
    0010 - 69 19 e9 7f af b3 74 1d-c1 fc 13 ab 9c c5 15 aa   i.....t.........
    0020 - 8b 15 9d ae 12 0c 1b 4b-97 0a 07 9a 1e 5d 0c cc   .......K.....]..
    0030 - 4c ba 1e 43 09 34 06 55-e9 15 9c be e8 30 94 c4   L..C.4.U.....0..
    0040 - 8d 58 65 4c 19 91 85 09-a7 a5 12 99 03 e5 7c ca   .XeL..........|.
    0050 - 8f c5 cd 71 69 3f 44 76-64 fa 59 ea a5 4e 24 40   ...qi?Dvd.Y..N$@
    0060 - e2 ef 71 11 6d 5a b3 5c-e2 94 4c 79 49 59 2b 1f   ..q.mZ.\..LyIY+.
    0070 - 07 3d e3 a9 6a a1 8c eb-71 c7 30 35 4c 73 59 80   .=..j...q.05LsY.
    0080 - 74 84 25 b5 b7 cc 17 81-10 01 f3 32 c9 44 3e 19   t.%........2.D>.
    0090 - 93 52 13 65 36 4a 13 65-a4 ff 92 a3 fd a6 3e 95   .R.e6J.e......>.

    Start Time: 1402859008
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)

 

 

I am using a simple HTTP server written in Python, which runs on the port given as a command-line argument. These servers act as upstream servers for this test. Three servers are started
on ports 8080, 8081 and 8082. Each server logs its port number when a request is received. Logs are written to the log file at var/log/loadtest.log, so by looking at the log file we can identify how Nginx distributes incoming requests among the three upstream servers.

The diagram below shows how Nginx and the upstream servers are arranged.

Load balancing with Nginx


Below is the code for the simple HTTP server. It is a modification of [1].

#!/usr/bin/python

#backend.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys
import logging

logging.basicConfig(filename='var/log/loadtest.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

#This class handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

	#Handler for the GET requests
	def do_GET(self):
		logging.debug("Request received for server on : %s " % PORT_NUMBER)
		self.send_response(200)
		self.send_header('Content-type','text/html')
		self.end_headers()
		# Send the html message
		self.wfile.write("Hello World: %s" % PORT_NUMBER)
		return

try:
	#Create a web server and define the handler to manage the
	#incoming request
	PORT_NUMBER = int(sys.argv[1])
	server = HTTPServer(('', PORT_NUMBER), myHandler)
	print 'Started httpserver on port %s '  %  sys.argv[1]
	#Wait forever for incoming http requests
	server.serve_forever()

except KeyboardInterrupt:
	print '^C received, shutting down the web server'
	server.socket.close()

Let’s start the servers on ports 8080, 8081 and 8082.

nohup python backend.py 8080 &
nohup python backend.py 8081 &
nohup python backend.py 8082 &

Check if the servers are running on the specified ports.

netstat -tulpn | grep 808
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      454/python
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN      455/python
tcp        0      0 0.0.0.0:8082            0.0.0.0:*               LISTEN      457/python

Configure Nginx as a load balancer for the above upstream servers

Create a configuration file /etc/nginx/udara.com.conf with the content below. The servers started above are configured as upstream servers.

upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

server {
           listen 80;
           server_name udara.com;
           location / {
                        proxy_pass http://udara.com;
           }
}

Pick a client to send requests. You can use JMeter or any other tool; however, I wrote a very simple shell script that sends a given number of requests to Nginx.

#!/bin/bash
c=1
count=$1
echo $count
while [ $c -le $count ]
do
     curl http://udara.com/
     (( c++ ))
done
Round robin load balancing
upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

Let’s issue 9 requests.

./requester.sh 9

Logs written to the var/log/loadtest.log log file:

06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082

Requests are distributed evenly among all three servers in round-robin fashion.
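The rotation itself is trivial to model; a short Python sketch of the cycling Nginx performs over the three upstreams:

```python
# Round-robin rotation over the three upstream ports, as a plain sketch.
from itertools import cycle

upstreams = cycle(["8080", "8081", "8082"])
order = [next(upstreams) for _ in range(9)]   # nine requests
print(order)   # ['8080', '8081', '8082', '8080', '8081', '8082', '8080', '8081', '8082']
```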

Session stickiness

Requests from the same client will always be forwarded to the same server. The first three octets of the client's IPv4 address (or the entire IPv6 address) are used as the hashing key to determine which server the request is forwarded to. If the selected server is unavailable, the request is forwarded to another server.

upstream udara.com {
	ip_hash;
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

All the requests are forwarded to the server running on 8082.

06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
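The /24-based stickiness can be illustrated with a toy hash. To be clear, this is not nginx's actual hash function; it only shows why clients in the same /24 network always land on the same upstream:

```python
# Toy illustration of ip_hash-style stickiness: the key is the first three
# octets of the client IPv4 address, so clients in the same /24 always map
# to the same upstream. Not nginx's real hash function.
import hashlib

upstreams = ["udara.com:8080", "udara.com:8081", "udara.com:8082"]

def pick_upstream(client_ip):
    key = ".".join(client_ip.split(".")[:3])          # e.g. "192.168.1"
    digest = hashlib.md5(key.encode()).digest()
    return upstreams[digest[0] % len(upstreams)]

same = pick_upstream("192.168.1.10") == pick_upstream("192.168.1.200")
print(same)   # True: same /24, same upstream
```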
Weighted load balancing

By default, Nginx distributes requests equally among all upstream servers. This is fine when all the upstream servers have the same capacity to serve requests. But in some scenarios certain upstream servers have more resources while others have fewer, so more requests should be forwarded to the high-capacity servers and fewer requests to the low-capacity ones. Nginx provides the ability to specify a weight for each server; specify weights proportional to the capacities of the servers.

upstream udara.com {
 server udara.com:8080 weight=4; #server1
 server udara.com:8081 weight=3; #server2
 server udara.com:8082 weight=1; #server3
}

The above configuration says server1 has four times the capacity of server3, and server2 has three times the capacity of server3. So for every 8 requests, 4 should be forwarded to server1, 3 to server2 and one to server3.
The logs below show that requests are distributed according to the specified weights.

06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8082
06/15/2014 12:01:36 PM Request received for server on : 8080
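Notice in the logs that the picks are interleaved rather than sent in bursts. Nginx achieves this with a "smooth" weighted round-robin; a simplified reimplementation (a sketch, not nginx's actual source) reproduces the 4:3:1 split:

```python
# Simplified "smooth" weighted round-robin: every server earns its weight
# each round, the highest current score wins and pays back the total, so a
# 4:3:1 weighting comes out interleaved. A sketch, not nginx's source.
from collections import Counter

def smooth_wrr(servers, n):
    """servers: list of (name, weight); returns the pick order for n requests."""
    current = {name: 0 for name, _ in servers}
    total = sum(weight for _, weight in servers)
    order = []
    for _ in range(n):
        for name, weight in servers:
            current[name] += weight          # everyone earns its weight
        best = max(servers, key=lambda s: current[s[0]])[0]
        current[best] -= total               # the winner pays the total
        order.append(best)
    return order

order = smooth_wrr([("8080", 4), ("8081", 3), ("8082", 1)], 8)
print(order)                 # interleaved, not 8080 8080 8080 8080 8081 ...
print(Counter(order))        # 8080 four times, 8081 three times, 8082 once
```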
Mark a server as unavailable

“down” is used to tell Nginx that an upstream is not available. This is useful when we know that a server is down for some reason or is undergoing maintenance. Nginx will not forward requests to servers marked as down.

upstream udara.com {
        server udara.com:8080 weight=4;
        server udara.com:8081 weight=3 down;
        server udara.com:8082 weight=1;
}

 

06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8082
06/15/2014 12:10:54 PM Request received for server on : 8080

No requests were forwarded to the server running on port 8081.

High availability / Backup

When an upstream server node is marked as backup, Nginx will forward requests to it only when the primary servers are unavailable.

upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081 ; #server2
        server udara.com:8082  backup; #server3
}

Requests will be sent only to server1 and server2; no requests will be sent to server3, since it is the backup node.

06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081

Stop the servers running on 8080 and 8081 so only the server on 8082 is running.
Requests are now sent to the backup node.

06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
Multiple backup nodes.
upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081  backup; #server2
        server udara.com:8082  backup; #server3
}

Requests are directed only to server1 as long as server1 is available.

06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080

When server1 is stopped, requests are forwarded to both server2 and server3.

06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082

[1] https://github.com/tanzilli/playground/blob/master/python/httpserver/example1.py

[2] http://nginx.org/en/docs/http/load_balancing.html

Extract the private key and certificate.
keytool -importkeystore -srckeystore wso2carbon.jks -destkeystore wso2.p12 -srcstoretype jks  -deststoretype pkcs12 -alias wso2carbon
openssl pkcs12 -in wso2.p12 -out wso2.pem
Extract only the certificate.
openssl pkcs12 -in wso2.p12 -nokeys -clcerts -out wso2.crt
Extract the private key.
openssl pkcs12 -in wso2.p12 -nocerts -out wso2.key
Remove the passphrase from the private key.

The private key is encrypted with a passphrase for security. However, if you use this private key to configure SSL for a server (Apache or Nginx), you will have to provide the passphrase every time you start or restart the server, which is a burden. So let’s remove the passphrase from the private key.

openssl rsa -in wso2.key -out wso2.key

Now the above private key and certificate can be used to configure SSL in Apache and Nginx.

Nginx SSL configuration

server{

 listen 443 ssl;
 server_name wso2.as.com;

 ssl_certificate /etc/nginx/ssl/wso2.crt;
 ssl_certificate_key /etc/nginx/ssl/wso2.key;
}

Apache2 SSL configuration

SSLCertificateFile /path/to/wso2.crt
SSLCertificateKeyFile /path/to/wso2.key

References:

http://stackoverflow.com/questions/652916/converting-a-java-keystore-into-pem-format
http://www.networking4all.com/en/support/ssl+certificates/manuals/microsoft/all+windows+servers/export+private+key+or+certificate/
https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-nginx-for-ubuntu-14-04

WSO2 ESB – Switch to NIO transport

Posted: June 9, 2014 in axis2

By default, WSO2 ESB ships with the PassThrough transport. However, if you want to switch to the old NIO transport, the steps below provide guidance.

Remove/Comment the default PassThrough transport receivers.

Locate the below PassThrough transport receivers in axis2.xml and remove or comment them.

 <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
        <parameter name="port" locked="false">8280</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
    </transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
        <parameter name="port" locked="false">8243</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="SSLVerifyClient">require</parameter>
            supports optional|require or defaults to none -->
    </transportReceiver>
Remove/Comment the default PassThrough transport senders

Locate the below PassThrough transport senders in axis2.xml and remove or comment them.

<transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender">
        <parameter name="non-blocking" locked="false">true</parameter>
    </transportSender>

<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
        <parameter name="non-blocking" locked="false">true</parameter>
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="HostnameVerifier">DefaultAndLocalhost</parameter>-->
            <!--supports Strict|AllowAll|DefaultAndLocalhost or the default if none specified -->
    </transportSender>
Uncomment/Add HTTP NIO transport receivers

Locate the below NIO transport receivers in axis2.xml and uncomment them.

<transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
        <parameter name="port" locked="false">8280</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <!--<parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> -->
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
        <!--parameter name="disableRestServiceDispatching" locked="false">true</parameter-->
    </transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener">
        <parameter name="port" locked="false">8243</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
        <!--<parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
        <parameter name="disableRestServiceDispatching" locked="false">true</parameter>-->
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="SSLVerifyClient">require</parameter>
            supports optional|require or defaults to none -->
    </transportReceiver>
Uncomment/Add HTTP NIO transport senders

Locate the below NIO transport senders in axis2.xml and uncomment them.

 <transportSender name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSender">
 <parameter name="non-blocking" locked="false">true</parameter>
 </transportSender>
 <transportSender name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLSender">
 <parameter name="non-blocking" locked="false">true</parameter>
 <parameter name="keystore" locked="false">
 <KeyStore>
 <Location>repository/resources/security/wso2carbon.jks</Location>
 <Type>JKS</Type>
 <Password>wso2carbon</Password>
 <KeyPassword>wso2carbon</KeyPassword>
 </KeyStore>
 </parameter>
 <parameter name="truststore" locked="false">
 <TrustStore>
 <Location>repository/resources/security/client-truststore.jks</Location>
 <Type>JKS</Type>
 <Password>wso2carbon</Password>
 </TrustStore>
 </parameter>
 <!--<parameter name="HostnameVerifier">DefaultAndLocalhost</parameter>-->
 <!--supports Strict|AllowAll|DefaultAndLocalhost or the default if none specified -->
 </transportSender>

 

Deployment synchronization (depsync) is the WSO2 process of syncing deployment artifacts across a product cluster. The goal of depsync is to synchronize artifacts (proxies, APIs, webapps etc.) across all the nodes when a user uploads or updates an artifact. Without depsync, when an artifact is updated by the user, it would have to be added to the other servers manually. Currently, depsync is carried out with an SVN repository: when a user updates an artifact, the manager node commits the changes to the central SVN repository and informs the worker nodes that there is an artifact update. Then the worker nodes do an SVN update from the repository.

This article explains an alternative way of achieving the same goal as depsync. This method eliminates the overhead of maintaining a separate SVN server; instead it uses the rsync tool, which is pre-installed on most Unix systems.

rsync is a file-transfer utility for Unix systems. The rsync algorithm is smart enough to transfer only the differences between files. rsync can be configured to use rsh or ssh as the transport.

Prerequisites

incron is a utility that watches for file-system changes and triggers user-defined commands when a file-system event occurs.
Install incron if you don’t already have it installed:

	sudo apt-get install incron
	
Configure Deployment synchronization

1) Add host entries of all worker nodes

vi /etc/hosts
192.168.1.1 worker1 worker1.wso2.com
192.168.1.2 worker2 worker2.wso2.com
192.168.1.3 worker3 worker3.wso2.com

2) Create SSH keys on the management node.

ssh-keygen -t rsa

3) Copy the public key to the worker nodes so you can SSH to them without providing a password each time.

ssh-copy-id -i ~/.ssh/id_rsa.pub worker1.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker2.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker3.wso2.com

4) Create a script file /opt/scripts/push_artifacts.sh with the content below.

The script assumes your management server pack is located at /home/ubuntu/manager/, while the worker packs are at /home/ubuntu/worker on every worker node.

#!/bin/bash
# push_artifacts.sh - Push artifact changes to the worker nodes.

master_artifact_path=/home/ubuntu/manager/wso2esb4.6.0/repository/deployment/
worker_artifact_path=/home/ubuntu/worker/wso2esb4.6.0/repository/deployment/

worker_nodes=(worker1 worker2 worker3)

while [ -d /tmp/.rsync.lock ]
do
  echo -e "[WARNING] Another rsync is in progress, waiting..."
  sleep 2
done

mkdir /tmp/.rsync.lock

if [ $? -ne 0 ]; then
echo "[ERROR] : can not create rsync lock";
exit 1
else
echo "[INFO] : created rsync lock";
fi

for i in ${worker_nodes[@]}; do

echo "===== Beginning artifact sync for $i ====="

rsync -avzx --delete -e ssh $master_artifact_path ubuntu@$i:$worker_artifact_path

if [ $? -ne 0 ]; then
echo "[ERROR] : rsync failed for $i";
exit 1
fi

echo "===== Completed rsync for $i =====";
done

rm -rf /tmp/.rsync.lock
echo "[SUCCESS] : Artifact synchronization completed successfully"

The above script sends the artifact changes to all the worker nodes.

5) Trigger the push_artifacts.sh script when an artifact is added, modified or removed.

Execute the command below to configure incron.

incrontab -e

Add the text below into the prompt opened by the above step.

/home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server IN_MODIFY,IN_CREATE,IN_DELETE sh /opt/scripts/push_artifacts.sh

The above entry tells incron to watch for file changes (edits, creations and deletions) under /home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server and to trigger the push_artifacts.sh script whenever such a change occurs. Simply put, incron executes push_artifacts.sh (the script created in step 4) whenever an ESB artifact changes. Thus, whenever an artifact changes on the master node, the changes are synced to all the worker nodes, which is exactly the goal of deployment synchronization.
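If incron is unavailable, the same watch-and-trigger loop can be approximated by polling. A rough Python sketch (the watched path and the command mirror the incron entry above):

```python
# Rough polling alternative to incron: snapshot a directory tree's mtimes
# and run the sync script whenever anything is created, modified or deleted.
import os
import subprocess
import time

WATCH_DIR = "/home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server"
SYNC_CMD = ["sh", "/opt/scripts/push_artifacts.sh"]

def snapshot(path):
    """Map every file under path to its modification time."""
    state = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                state[full] = os.stat(full).st_mtime
            except FileNotFoundError:
                pass                      # file vanished mid-walk
    return state

def watch(path=WATCH_DIR, interval=2):
    last = snapshot(path)
    while True:
        time.sleep(interval)
        current = snapshot(path)
        if current != last:               # any create/modify/delete
            subprocess.call(SYNC_CMD)
            last = current
```

Polling is coarser than inotify-based incron (changes are only noticed once per interval), which is why incron is preferred here.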

Advantages over SVN-based deployment synchronization
  • No SVN repository is needed.

There is no overhead of running an SVN server.

  • Carbon servers do not need to be clustered to have deployment synchronization

If you are using SVN-based or registry-based deployment synchronization, you need to cluster the Carbon servers. This method does not require clustering.

  • Can support multiple manager nodes

An SVN-based depsync system is limited to a single manager node, because a node may crash due to the SVN commit conflicts that occur when multiple managers commit artifact updates concurrently (SVN does not support concurrent commits). That issue does not apply here. However, the syncing script would need to be updated to synchronize artifacts among the manager nodes as well.

  • No configurations needed on any of the worker nodes.

Practically, in a real deployment there are one or two (at most) management nodes and many worker nodes. Since configuration is done only on the management node, new worker nodes can be added without any configuration on the worker side. You only need to add the hostname of the new worker node to the artifact update script created in step 4.

  • Take backups of the artifacts.

rsync can be configured to back up artifacts to another backup location.

Disadvantages over SVN-based deployment synchronization
  • New nodes need to be added manually.

When a new worker node is started, it must be manually added to the script.

  • Artifact path is hard coded in the script.

The Carbon server must be placed under /home/ubuntu/wso2 (the path specified in the script). If the Carbon server pack is moved to another location, the script must also be updated.

Note: This method is not the recommended way of doing deployment synchronization.