New Relic is a popular performance monitoring system that provides real-time analytics such as response times, memory and CPU usage, thread counts, and web page load times. You can even profile an application remotely from the New Relic dashboard.

This article explains how to integrate the New Relic performance monitoring Java agent with WSO2 Carbon products.

Tested platform: Java 8, WSO2 ESB 5.0.0, Mac OS Sierra 10.12.3

1) Sign up on the New Relic website.
You will receive a license key once you subscribe.

2) Download and extract the New Relic Java agent zip file as below. It contains:
i) the New Relic agent jar file (newrelic.jar)
ii) the newrelic.yml configuration file

wget -N https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip
unzip -q newrelic-java.zip

3) Copy newrelic.jar and newrelic.yml into a new directory, $CARBON_HOME/newrelicAgent:

mkdir $CARBON_HOME/newrelicAgent
cp newrelic.jar $CARBON_HOME/newrelicAgent
cp newrelic.yml $CARBON_HOME/newrelicAgent

4) Set the New Relic license key in newrelic.yml.
Locate the line license_key: '<%= license_key %>' and replace the placeholder with the license key you received in step 1.

license_key: 'e5620kj287aee4ou7613c2ku7d56k12387bd5jyb'

5) Add the Java agent to $CARBON_HOME/bin/wso2server.sh as below:

-javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

A sample section looks like this:

while [ "$status" = "$START_EXIT_STATUS" ]
do
    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    $JVM_MEM_OPTS \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
    $JAVA_OPTS \
    -javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

6) Start the server: sh $CARBON_HOME/bin/wso2server.sh

At startup you will see the following entries in the carbon log file:

Mar 26, 2017 13:08:58 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Loading configuration file "/Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelicAgent/./newrelic.yml"
Mar 26, 2017 13:08:59 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Writing to log file: /Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelic/logs/newrelic_agent.log

7) Perform some operations, such as accessing the management console or invoking APIs. Then log in to the New Relic dashboard, where you will find statistics about your Carbon product.


Beware of the following error

When I tried the same with WSO2 API Manager 2.1.0 I encountered the error below at server startup. Post [2] suggests that it is caused by an issue with the temp directory. The root cause is that the WSO2 startup script deletes TMP_DIR, which leaves New Relic unable to write to the temp directory. The fix is to delete the contents of TMP_DIR instead of the whole directory. To do this, change $CARBON_HOME/bin/wso2server.sh as below: comment out the existing TMP_DIR deletion and modify it to remove only the directory's contents.

TMP_DIR="$CARBON_HOME"/tmp
#if [ -d "$TMP_DIR" ]; then
#rm -rf "$TMP_DIR"
#fi

if [ -d "$TMP_DIR" ]; then
    rm -rf "$TMP_DIR"/*
fi

Error bootstrapping New Relic agent: java.lang.RuntimeException: java.io.IOException: No such file or directory
java.lang.RuntimeException: java.io.IOException: No such file or directory
    at com.newrelic.bootstrap.BootstrapLoader.load(BootstrapLoader.java:122)
    at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:110)
    at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:79)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
    at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
Caused by: java.io.IOException: No such file or directory

References

[1] http://lasanthatechlog.blogspot.com/2015/06/integrating-wso2-products-with-new-relic.html

[2] https://discuss.newrelic.com/t/error-bootstrapping-new-relic-agent-in-hadoop-mapreduce-job/23763

In my earlier post, I wrote about how to filter a JSON payload using Groovy scripts in the WSO2 ESB Script Mediator. This post is its XML counterpart.

In case you did not read my earlier post: the Script Mediator of WSO2 ESB is used to invoke functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.

In this example, the payload is an XML document with the details of a set of employees. We are going to filter out
the older employees (age > 30) from this list. Using Groovy, I found it easier to remove the young employees and keep the older employees
in the payload.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Here is the payload before the script mediator.

<employees>
  <employee>
    <age>25</age>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
  </employee>
  <employee>
    <age>45</age>
    <firstName>Anna</firstName>
    <lastName>Smith</lastName>
  </employee>
  <employee>
    <age>35</age>
    <firstName>Peter</firstName>
    <lastName>Jones</lastName>
  </employee>
</employees>

 

Now let's write the script mediator, which filters out employees younger than 30 years.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />
<script language="groovy">
import groovy.util.XmlSlurper;
import groovy.xml.MarkupBuilder;
import groovy.xml.StreamingMarkupBuilder;

def payload = mc.getPayloadXML();
def rootNode = new XmlSlurper().parseText(payload);
rootNode.children().findAll{it.age.text().toInteger() &lt; 30 }.replaceNode {};

mc.setPayloadXML(groovy.xml.XmlUtil.serialize(rootNode));
</script>

 

First I fetch the payload using getPayloadXML(), provided by Synapse. Then I parse the payload as XML using parseText() of the XmlSlurper class.
Next I use findAll to select the employees whose age is less than 30 and remove them with replaceNode {}. Finally I serialize the object and set it on the Synapse message context as the new payload.
So the new payload consists of only the older employees, as below:

<employees>
  <employee>
    <age>45</age>
    <firstName>Anna</firstName>
    <lastName>Smith</lastName>
  </employee>
  <employee>
    <age>35</age>
    <firstName>Peter</firstName>
    <lastName>Jones</lastName>
  </employee>
</employees>

The Script Mediator is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.
This post contains a sample in the Groovy scripting language with which you can perform collection operations easily.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Let's say that your current payload consists of a set of employees, represented as below.

{
  "employees": [
    {
      "firstName": "John";,
      "lastName": "Doe",
      "age":25
    },
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age":45
    },
    {
      "firstName": "Peter",
      "lastName":"Jones",
      "age":35
    }
  ]
}

Now you want to filter out the set of older (age > 30) employees in order to apply a new insurance policy.
Let's see how you can achieve this with the WSO2 ESB Script Mediator and a Groovy script.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />

<script language="groovy">
 import groovy.json.*;
 def payload = mc.getProperty("payload");
 def empList = new JsonSlurper().parseText(payload.toString());
 empList.employees = empList.employees.findAll{ it.age > 30 }
 mc.setPayloadJSON(JsonOutput.toJson(empList));
</script>

First I set the property "payload" to store the message payload before the script mediator.
Then, within the script mediator, I fetch its content using mc.getProperty(). Next I parse the payload
with JsonSlurper, which converts the JSON payload string to a Groovy object. After that I
use the Groovy function findAll() to filter the employees with the closure age > 30. Finally I convert the Groovy object
back to a JSON string with toJson() and set the filtered employees as the new payload.

So, after going through the script mediator, the payload is changed as below and contains only the older employees.

{
  "employees": [
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age": 45
    },
    {
     "firstName": "Peter",
      "lastName": "Jones",
      "age": 35
    }
  ]
}

Logstash, Elasticsearch and Kibana, a.k.a. the ELK stack, is a widely used log
analysis tool set. This how-to guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself. But I am a Docker fan,
so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this image does not come
with a Logstash input for receiving Beats events, so I added the Logstash configuration below and created my own
Docker image, udaraliyanage/elk. You can either use my Docker image or add the configuration below to the default image.

input {
  beats {
    type => beats
    port => 7000
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 7000 (the input section) and forward the logs to Elasticsearch, which is running on port 9200
of the Docker container.

Now start the Docker container:
docker run -d -p 7000:7000 -p 5601:5601 udaraliyanage/elklog4

port 7000 => Logstash (Beats input)
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the Filebeat deb file from [1] and install it:
dpkg -i filebeat_1.2.3_amd64.deb

* Create a Filebeat configuration file /etc/carbon_beats.yml with the following content.

Make sure to provide the correct wso2carbon.log file location in the paths section. You can list multiple Carbon log files as well
if you are running multiple Carbon servers on your machine.

filebeat:
  prospectors:
    -
      paths:
        - /opt/wso2as-5.3.0/repository/logs/wso2carbon.log
      input_type: log
      document_type: appserver_log
output:
  logstash:
    hosts: ["localhost:7000"]
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

* Now start the Carbon server: ./bin/wso2server.sh start

# View logs from Kibana by visiting http://localhost:5601

[1] https://www.elastic.co/products/beats/filebeat
[2] https://hub.docker.com/r/sebp/elk/

I assume that you know that the Logstash, Elasticsearch and Kibana stack, a.k.a. ELK, is a widely used log analysis tool set. This how-to guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself. But I am a Docker fan, so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this image does not come with a Logstash input for log4j events, so I added the Logstash configuration below and created my own Docker image, udaraliyanage/elk. You can either use my Docker image or add the configuration below to the default image.

input {
  log4j {
    mode => server
    host => "0.0.0.0"
    port => 6000
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 6000 (the input section) and forward the logs to Elasticsearch, which is running on port 9200
of the Docker container.

Now start the Docker container:
`docker run -d -p 6000:6000 -p 5601:5601 udaraliyanage/elklog4j`

port 6000 => Logstash
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the Logstash jsonevent-layout dependency jar from [3] and place it in $CARBON_HOME/repository/components/lib.
The SocketAppender configured below serializes each log event and streams it to
a remote log4j host, in our case Logstash running on port 6000.

* Add the following log4j appender configuration to the Carbon server by editing the $CARBON_HOME/repository/conf/log4j.properties file:

log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY,tcp

log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.tcp.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m%n
log4j.appender.tcp.layout.TenantPattern=%U%@%D[%T]
log4j.appender.tcp.Port=6000
log4j.appender.tcp.RemoteHost=localhost
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.threshold=DEBUG
log4j.appender.tcp.Application=myCarbonApp

RemoteHost => the Logstash server to which we want to publish events; localhost (port 6000) in our case.
Application => the name of the application that publishes the logs. It is useful for whoever views the logs in Kibana, so they can tell which server a particular log entry came from.

* Now start the Carbon server: ./bin/wso2server.sh start

# View logs from Kibana by visiting http://localhost:5601

[1] https://hub.docker.com/r/sebp/elk/
[2] https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
[3] http://mvnrepository.com/artifact/net.logstash.log4j/jsonevent-layout/1.7

This article demonstrates how to build a sample REST service with WSO2 MSF4J (Microservices Framework for Java).

Step 1: Build the product

git clone https://github.com/wso2/msf4j.git
cd msf4j
mvn clean install

Step 2: Create sample micro service project

mvn archetype:generate \
-DarchetypeGroupId=org.wso2.msf4j \
-DarchetypeArtifactId=msf4j-microservice \
-DarchetypeVersion=1.0.0-SNAPSHOT \
-DgroupId=org.example -DartifactId=Customer-Service \
-Dversion=0.1-SNAPSHOT \
-Dpackage=org.example.service \
-DserviceClass=CustomerService

Once the project is created, it will generate the following source code structure. CustomerService.java is the service class generated for you.

Sample service sources generated
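
For reference, the generated project layout looks roughly like this (a sketch; the exact files depend on the archetype version):

Customer-Service
├── pom.xml
└── src/main/java/org/example/service
    ├── Application.java      (main class that starts the MSF4J microservices runner)
    └── CustomerService.java  (the generated service class)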

Step 3: Create a sample JSON service

Open CustomerService.java in your IDE and replace the generated sample methods with the following method. The method "getCustomer" exposes a GET resource which returns a simple Customer object.

@Path("/customer")
public class CustomerService {

@GET
@Path("/")
@Produces({"application/json", "text/xml"})
public Response getCustomer() {
&nbsp;return Response.status(Response.Status.OK).entity(new Customer("udara", "wso2")).build();
}

private class Customer {
String name;
String company;

public Customer(String name, String company) {
this.name = name;
this.company = company;
}
}
}

Step 4: Run Application.java using your IDE

2016-02-10 12:17:41 INFO  MicroservicesRegistry:76 – Added microservice: org.example.service.HelloService@6aa8ceb6
2016-02-10 12:17:41 INFO  NettyListener:56 – Starting Netty Http Transport Listener
2016-02-10 12:17:42 INFO  NettyListener:80 – Netty Listener starting on port 8080
2016-02-10 12:17:42 INFO  MicroservicesRunner:122 – Microservices server started in 436ms

Step 5: Invoke the microservice we just implemented

$ curl -X GET http://localhost:8080/customer/ | python -m json.tool
{
    "company": "wso2",
    "name": "sampath"
}

Please note that "customer" is the path given to the CustomerService class.

References

http://blog.afkham.org/2016/02/writing-your-first-java-microservices.html

This is a Python code snippet I wrote to automate API creation in WSO2 API Manager. WSO2 API Manager exposes the Publisher API, through which we can perform APIM-related tasks.

This Python client first logs in to APIM and checks whether there is already an API with the same name; if not, it creates one. The API has two resources, each with the Unlimited throttling tier:

  1. PUT /cart
  2. POST /checkout

In addition, the API has a failover endpoint configuration: one production endpoint and one failover endpoint. When the API is invoked, it first tries to reach the production endpoint; if the production endpoint is not available, it tries the failover endpoint.

Once the API is created, this code publishes it so that users can subscribe to the API.

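The snippet below was lifted from inside a plugin class, so it relies on a few imports and a LogFactory provided by the surrounding framework. A minimal sketch of the setup it expects is shown here; the standalone logger is an assumption, used in place of the framework's LogFactory.

import json
import logging
import urlparse  # Python 2; on Python 3 use urllib.parse instead

import requests

# stand-in for LogFactory().get_log(__name__) when running the snippet standalone
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
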
        log = LogFactory().get_log(__name__)
        log.info("Starting api creation plugin...")

        apim_domain = "localhost"
        apim_username = "admin"
        apim_password = "admin"

        endpoint = "http://api.openweathermap.org/data/2.5/weather?q=London"

        url =  'https://%s:9443/publisher/site/blocks/' %  apim_domain

        login_url = urlparse.urljoin(url, 'user/login/ajax/login.jag')
        payload = {'action': 'login', 'username': apim_username, 'password': apim_password}

        log.info("Logging in to API Manager %s " % login_url)
        resp = requests.post(login_url, data=payload, verify=False)
        log.info("APIM login response %s" % resp)
        cookie = resp.cookies

        swagger = {'paths': {'/cart': {'put': {'x-auth-type': 'None',
                                               'x-throttling-tier': 'Unlimited',
                                               'responses': {'200': {}}}}, '/checkout': {'post': {
            'parameters': [{
                               'schema': {'type': 'object'},
                               'description': 'Request Body',
                               'name': 'Payload',
                               'required': 'false',
                               'in': 'body',
                               }],
            'responses': {'200': {}},
            'x-auth-type': 'None',
            'x-throttling-tier': 'Unlimited',
            }}}}

        swagger_json = json.dumps(swagger)

        api_url = urlparse.urljoin(url,'item-add/ajax/add.jag')

        endpoint_conf = \
            {'production_endpoints': {'url': 'http://ws.cdyne.com/phoneverify/phoneverify.asmx',
                                      'config': 'null'},
             'production_failovers': [{'url': 'http://failover_domain:30000/StorefrontDemo/api/customer'
                                          , 'config': 'null'}], 'endpoint_type': 'failover'}

        endpoint_conf['production_endpoints']['url']= endpoint
        endpoint_json = json.dumps(endpoint_conf)

        payload = {
            'action': 'addAPI',
            'name': 'Storefront',
            'context': 'storefront',
            'version': 'v1',
            'visibility': 'public',
            'endpointType': 'nonsecured',
            'tiersCollection': 'Unlimited',
            'http_checked': 'http',
            'https_checked': 'https',
            # the original used index 0 for both resources, which makes the second
            # dict entry silently overwrite the first; index the second resource as 1
            # (parameter naming assumed from the 0-indexed fields above)
            'resourceCount': 1,
            'resourceMethod-0': 'PUT',
            'resourceMethodAuthType-0': 'None',
            'resourceMethodThrottlingTier-0': 'Unlimited',
            'uriTemplate-0': 'cart',
            'resourceMethod-1': 'POST',
            'resourceMethodAuthType-1': 'None',
            'resourceMethodThrottlingTier-1': 'Unlimited',
            'uriTemplate-1': 'checkout',
            }

        payload['endpoint_config']=endpoint_json
        payload['swagger'] = swagger_json

        exist_payload = {
        'action':'isAPINameExist',
        'apiName':'Storefront'
        }

        #check if API with the same name already exist
        resp = requests.post(api_url, data = exist_payload, verify = False, cookies = cookie)
        api_exist = ('true' == json.loads(resp.text)['exist'])
        log.info("API already exists: %s " % api_exist)

        if not api_exist:            
            log.info("Creating API WebbAppAPI %s " % api_url)
            resp = requests.post(api_url, data=payload, verify=False, cookies=cookie)
            log.info("APIM api creation response %s" % resp)

            publish_url = urlparse.urljoin(url, 'life-cycles/ajax/life-cycles.jag')
            payload = {
            'action':'updateStatus',
            'name':'Storefront',
            'version':'v1',
            'provider':'admin',
            'status':'PUBLISHED',
            'publishToGateway':'true',
            'requireResubscription':'false'
            }
            log.info("Publishing API WebbAppAPI %s " % publish_url)
            resp = requests.post(publish_url, data=payload, verify=False, cookies=cookie)
            log.info("APIM api publishing response %s" % resp)

        log.info("*****************API creation plugin completed *****************")
	

Below is the created API

Storefront API

Python Flask enable CORS


Install flask-cors plugin

pip install -U flask-cors

Python Flask code to enable CORS for all resources:

from flask import Flask
from OpenSSL import SSL

import os

from flask_cors import CORS

context = SSL.Context(SSL.SSLv23_METHOD)
cer = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.crt')
key = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.key')

app = Flask(__name__)
# allow all origins for every resource and send Access-Control-Allow-Credentials
CORS(app, resources={r"/*": {"origins": "*"}}, supports_credentials=True)

@app.route('/', methods=['POST', 'GET'])
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    context = (cer, key)
    app.run(host='0.0.0.0', port=5000, debug=True, ssl_context=context)

supports_credentials will cause Flask to set the "Access-Control-Allow-Credentials" header to true.

Access-Control-Allow-Credentials: true
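
As a quick way to see these headers, you can simulate a browser preflight with the requests library. This is only a sketch; the URL and Origin value are assumptions, and verify=False is needed because the server above uses a self-signed certificate.

import requests

# send an OPTIONS request the way a browser preflight would
resp = requests.options(
    'https://localhost:5000/',
    headers={'Origin': 'http://example.com',
             'Access-Control-Request-Method': 'POST'},
    verify=False,
)
for name in ('Access-Control-Allow-Origin', 'Access-Control-Allow-Credentials'):
    print(name, '=>', resp.headers.get(name))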

Please have a look at the screenshot below of a CORS preflight OPTIONS request.

CORS preflight OPTIONS request

Python Flask API in HTTPS


Before starting a server with SSL, you need to create a private key and a certificate. I will create a self-signed certificate for this tutorial.
The command below will ask for information about your certificate. Among the fields, 'common name' is the most important: it should be the domain name of the server you are running. The command outputs two files:

1) udara.com.key -> the private key for my domain
2) udara.com.crt -> the self-signed certificate

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout udara.com.key -out udara.com.crt

Below is the Flask code snippet to start your Flask API over HTTPS.

from flask import Flask
from OpenSSL import SSL

import os

context = SSL.Context(SSL.SSLv23_METHOD)
cer = os.path.join(os.path.dirname(__file__), 'resources/udara.com.crt')
key = os.path.join(os.path.dirname(__file__), 'resources/udara.com.key')

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    context = (cer, key)
    app.run( host='0.0.0.0', port=5000, debug = True, ssl_context=context)

When you run the above code, it will show the output below. Note that it is running over HTTPS.

* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
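
To quickly check that the endpoint really is served over TLS, you can call it with the requests library. This is just a sketch; verify=False is used only because the certificate above is self-signed, and the URL is taken from the run output.

import requests

# call the HTTPS endpoint started above; skip certificate verification
# because we generated a self-signed certificate
resp = requests.get('https://localhost:5000/', verify=False)
print(resp.status_code, resp.text)  # expect: 200 Hello World!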

* Add an A record
import boto3

client = boto3.client('route53', aws_access_key_id="AWS_KEY", aws_secret_access_key="AWS_SEC_KEY")
hostedZoneId = 'HOSTED_ZONE_ID'

# placeholder values for this example
domain = 'www.example.com'   # the record name to create
aws_region = 'US'            # "US", "EU" or "AP"; used to pick the geolocation below
ip = '123.123.123.123'


if aws_region == "US":
    #US is my default region. So cont_code is blank
    cont_code = {}
elif aws_region == "EU":
    cont_code = {'ContinentCode':'EU'}
elif aws_region == "AP":
    cont_code = {'ContinentCode':'AS'}

response = client.change_resource_record_sets(
    HostedZoneId = hostedZoneId,
    ChangeBatch={
        'Comment': 'comment',
        'Changes': [
            {
                'Action': 'CREATE',
                'ResourceRecordSet': {
                    'Name': domain,
                    'Type': 'A',
                    'SetIdentifier': 'my_a_record',
                    'GeoLocation': cont_code,
                    'TTL': 60,
                    'ResourceRecords': [
                        {
                            'Value': ip
                        },
                        ],
                    }
            },
            ]
    }
)


print("DNS record status %s "  % response['ChangeInfo']['Status'])
print("DNS record response code %s " % response['ResponseMetadata']['HTTPStatusCode'])

* Delete an A record

When deleting the A record you only have to change the action to DELETE, as in the sketch below.

'Action': 'DELETE'
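
For completeness, here is a minimal sketch of the delete call. It assumes the same client, hostedZoneId, domain, cont_code and ip as above; Route53 only deletes a record set whose fields match the one that was created.

response = client.change_resource_record_sets(
    HostedZoneId=hostedZoneId,
    ChangeBatch={
        'Comment': 'remove the record created above',
        'Changes': [
            {
                'Action': 'DELETE',  # only the action changes
                'ResourceRecordSet': {
                    'Name': domain,
                    'Type': 'A',
                    'SetIdentifier': 'my_a_record',
                    'GeoLocation': cont_code,
                    'TTL': 60,
                    'ResourceRecords': [{'Value': ip}],
                },
            },
        ],
    },
)

print("DNS record status %s " % response['ChangeInfo']['Status'])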