In my earlier post, I explained how to filter a JSON payload using Groovy scripts in the WSO2 ESB Script Mediator. This post is its XML counterpart.

If you did not read my earlier post: the Script Mediator of WSO2 ESB is used to invoke functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.

In this example, the payload is an XML document containing the details of a set of employees. We are going to filter out the old employees (age > 30) from this list. With Groovy, I found it easier to remove the young employees and keep the old employees in the payload.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Here is the payload before the script mediator.

<employees>
  <employee>
    <age>25</age>
    <firstName>John</firstName>
    <lastName>Doe</lastName>
  </employee>
  <employee>
    <age>45</age>
    <firstName>Anna</firstName>
    <lastName>Smith</lastName>
  </employee>
  <employee>
    <age>35</age>
    <firstName>Peter</firstName>
    <lastName>Jones</lastName>
  </employee>
</employees>

Now let's write the script mediator, which filters out employees younger than 30 years.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />
<script language="groovy">
import groovy.util.XmlSlurper;
import groovy.xml.MarkupBuilder;
import groovy.xml.StreamingMarkupBuilder;

def payload = mc.getPayloadXML();
def rootNode = new XmlSlurper().parseText(payload);
rootNode.children().findAll{it.age.text().toInteger() &lt; 30 }.replaceNode {};

mc.setPayloadXML(groovy.xml.XmlUtil.serialize(rootNode));
</script>

First I fetch the payload using getPayloadXML() provided by Synapse, and parse it with parseText() of the XmlSlurper class. Then I use findAll() to find the employees whose age is less than 30 and remove them with replaceNode {}. Finally I serialize the object and set it on the Synapse message context as the new payload.

So the new payload consists of only the old employees, as below:

<employees>
  <employee>
    <age>45</age>
    <firstName>Anna</firstName>
    <lastName>Smith</lastName>
  </employee>
  <employee>
    <age>35</age>
    <firstName>Peter</firstName>
    <lastName>Jones</lastName>
  </employee>
</employees>
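
If you want to experiment with this filtering logic outside the ESB, here is a minimal standalone Groovy sketch of the same idea (plain Groovy only; the Synapse mc message context is not involved):

import groovy.util.XmlSlurper
import groovy.xml.XmlUtil

def payload = '''
<employees>
  <employee><age>25</age><firstName>John</firstName><lastName>Doe</lastName></employee>
  <employee><age>45</age><firstName>Anna</firstName><lastName>Smith</lastName></employee>
</employees>'''

def rootNode = new XmlSlurper().parseText(payload)
// remove every employee younger than 30
rootNode.children().findAll { it.age.text().toInteger() < 30 }.replaceNode {}

// XmlUtil.serialize re-exports the document, applying the deferred replaceNode modifications
println XmlUtil.serialize(rootNode)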

The Script Mediator is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.
This post contains a sample in the Groovy scripting language with which you can perform collection operations easily.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Let’s say that your current payload consists of a set of employees represented as below.

{
  "employees": [
    {
      "firstName": "John";,
      "lastName": "Doe",
      "age":25
    },
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age":45
    },
    {
      "firstName": "Peter",
      "lastName":"Jones",
      "age":35
    }
  ]
}

Now you want to filter out the set of old (age > 30) employees to apply a new insurance policy.
Let’s see how you can achieve this task with the WSO2 ESB Script Mediator using a Groovy script.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />

<script language="groovy">
  import groovy.json.*;

  def payload = mc.getProperty("payload");
  def empList = new JsonSlurper().parseText(payload.toString());
  empList.employees = empList.employees.findAll { it.age > 30 };
  mc.setPayloadJSON(JsonOutput.toJson(empList));
</script>

First I set the property "payload" to store the message payload before the script mediator.
Then, within the script mediator, I fetch its content using mc.getProperty(). Next I parse the payload
with JsonSlurper, which converts the JSON payload string into a Groovy object, a Map in this case. After that I can
use the Groovy function findAll() to filter the employees with the closure age > 30. Finally I convert the Groovy object
back to a JSON string with JsonOutput.toJson() and set the filtered employees as the new payload.

So payload will be changed as below, to consist only old employees after going through the script mediator.

{
  "employees": [
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age": 45
    },
    {
     "firstName": "Peter",
      "lastName": "Jones",
      "age": 35
    }
  ]
}
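
As before, you can try the JSON filtering logic outside the ESB with plain Groovy (a standalone sketch; the Synapse mc message context is not involved):

import groovy.json.JsonSlurper
import groovy.json.JsonOutput

def payload = '''{"employees":[
  {"firstName":"John","lastName":"Doe","age":25},
  {"firstName":"Anna","lastName":"Smith","age":45}
]}'''

def empList = new JsonSlurper().parseText(payload)  // a Map with an "employees" list
empList.employees = empList.employees.findAll { it.age > 30 }
println JsonOutput.prettyPrint(JsonOutput.toJson(empList))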

You probably know that the Logstash, Elasticsearch and Kibana triple, a.k.a. ELK, is a widely used log
analysis tool set. This howto guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself, but I am a Docker fan,
so I use a preconfigured Docker image. Most people use the sebp/elk Docker image [2]. By default this image does not come
with a Logstash receiver for Beats events, so I added the Logstash configuration below to receive Beats events and created my own
Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

input {
  beats {
    type => "beats"
    port => 7000
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 7000 for Beats events (input section) and forward the logs to Elasticsearch, which is running on port 9200 of the Docker container.

Now start the Docker container:
docker run -d -p 7000:7000 -p 5601:5601 udaraliyanage/elklog4

port 7000 => Logstash (Beats input)
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the filebeat deb file from [1] and install it:
dpkg -i filebeat_1.2.3_amd64.deb

* Create a filebeat configuration file /etc/carbon_beats.yml with the following content.

Please make sure to provide the correct wso2carbon.log file location in the paths section. You can list multiple Carbon log files as well
if you are running multiple Carbon servers on your machine.

filebeat:
  prospectors:
    -
      paths:
        - /opt/wso2as-5.3.0/repository/logs/wso2carbon.log
      input_type: log
      document_type: appserver_log
output:
  logstash:
    hosts: ["localhost:7000"]
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

* Now start the Carbon server: `./bin/wso2server.sh start`
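
* Start filebeat so it begins shipping the Carbon logs. A minimal way to run it in the foreground with the configuration above (assuming the filebeat 1.x CLI flags) is:

filebeat -e -c /etc/carbon_beats.yml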

# View logs from Kibana by visiting http://localhost:5601
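
Before opening Kibana, you can quickly check that events actually reached Elasticsearch. Assuming the Logstash output above writes to the default logstash-* indices:

curl 'http://localhost:9200/logstash-*/_search?pretty&size=1'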

[1] https://www.elastic.co/products/beats/filebeat
[2] https://hub.docker.com/r/sebp/elk/

I assume you know that the Logstash, Elasticsearch and Kibana stack, a.k.a. ELK, is a widely used log analysis tool set. This howto guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself, but I am a Docker fan, so I use a preconfigured Docker image. Most people use the sebp/elk Docker image [1]. By default this image does not come with a Logstash receiver for log4j events [2], so I added the Logstash configuration below to receive log4j events and created my own Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

input {
  log4j {
    mode => "server"
    host => "0.0.0.0"
    port => 6000
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 6000 for log4j events (input section) and forward the logs to Elasticsearch, which is running on port 9200 of the Docker container.

Now start the Docker container:
`docker run -d -p 6000:6000 -p 5601:5601 udaraliyanage/elklog4j`

port 6000 => Logstash
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the Logstash JSON event layout dependency jar from [3] and place it in $CARBON_HOME/repository/components/lib.
The log4j SocketAppender we configure below serializes log events and streams them to
a remote log4j host, in our case Logstash listening on port 6000.

* Add the following log4j appender configuration to the Carbon server by editing the $CARBON_HOME/repository/conf/log4j.properties file:

log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY,tcp

log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.tcp.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m%n
log4j.appender.tcp.layout.TenantPattern=%U%@%D[%T]
log4j.appender.tcp.Port=6000
log4j.appender.tcp.RemoteHost=localhost
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.threshold=DEBUG
log4j.appender.tcp.Application=myCarbonApp

RemoteHost => the Logstash server to which we want to publish events; localhost in our case (the port is set separately to 6000).
Application => the name of the application which publishes the logs. It is useful for whoever views the logs in Kibana, so that they can tell from which server a particular log was received.
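
Before starting the Carbon server, you can smoke-test the Logstash log4j input with a few lines of plain log4j 1.x (a hypothetical minimal client; the class name and message are mine, and it assumes log4j 1.2 on the classpath):

import org.apache.log4j.Logger;
import org.apache.log4j.net.SocketAppender;

public class LogstashSmokeTest {
    public static void main(String[] args) throws InterruptedException {
        // Streams serialized log4j events to the Logstash log4j input
        SocketAppender appender = new SocketAppender("localhost", 6000);
        Logger root = Logger.getRootLogger();
        root.addAppender(appender);
        root.info("hello from LogstashSmokeTest");
        Thread.sleep(2000); // give the appender time to connect and flush
        appender.close();
    }
}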

* Now start the Carbon server: `./bin/wso2server.sh start`

# View logs from Kibana by visiting http://localhost:5601

[1] https://hub.docker.com/r/sebp/elk/
[2] https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
[3] http://mvnrepository.com/artifact/net.logstash.log4j/jsonevent-layout/1.7

This article demonstrates how to build a sample REST service using WSO2 MSF4J (Microservices Framework for Java).

Step 1: Build the product

git clone https://github.com/wso2/msf4j.git
cd msf4j
mvn clean install

Step 2: Create sample micro service project

mvn archetype:generate \
-DarchetypeGroupId=org.wso2.msf4j \
-DarchetypeArtifactId=msf4j-microservice \
-DarchetypeVersion=1.0.0-SNAPSHOT \
-DgroupId=org.example -DartifactId=Customer-Service \
-Dversion=0.1-SNAPSHOT \
-Dpackage=org.example.service \
-DserviceClass=CustomerService

Once the project is created, it will generate the following source code structure. CustomerService.java is the service file generated for you.

Sample service sources generated

Step 3: Create a sample JSON service

Open CustomerService.java in your IDE and replace the generated sample methods with the following method. The method getCustomer exposes a GET resource which returns a simple Customer object.

@Path("/customer")
public class CustomerService {

@GET
@Path("/")
@Produces({"application/json", "text/xml"})
public Response getCustomer() {
&nbsp;return Response.status(Response.Status.OK).entity(new Customer("udara", "wso2")).build();
}

private class Customer {
String name;
String company;

public Customer(String name, String company) {
this.name = name;
this.company = company;
}
}
}

Step 4: Run Application.java using your IDE

2016-02-10 12:17:41 INFO  MicroservicesRegistry:76 - Added microservice: org.example.service.CustomerService@6aa8ceb6
2016-02-10 12:17:41 INFO  NettyListener:56 - Starting Netty Http Transport Listener
2016-02-10 12:17:42 INFO  NettyListener:80 - Netty Listener starting on port 8080
2016-02-10 12:17:42 INFO  MicroservicesRunner:122 - Microservices server started in 436ms
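
If you prefer not to use an IDE, the generated Application class is roughly the following (a sketch assuming the archetype defaults; MicroservicesRunner is MSF4J's embedded runner):

import org.wso2.msf4j.MicroservicesRunner;

public class Application {
    public static void main(String[] args) {
        // Deploys CustomerService on the default port 8080 and starts the Netty listener
        new MicroservicesRunner().deploy(new CustomerService()).start();
    }
}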

Step 5: Invoke the microservice we just implemented

$ curl -X GET http://localhost:8080/customer/ | python -m json.tool
{
    "company": "wso2",
    "name": "udara"
}

Please note that "customer" is the path given to the CustomerService class.

References

http://blog.afkham.org/2016/02/writing-your-first-java-microservices.html

This is a Python code snippet I wrote to automate API creation in WSO2 API Manager. WSO2 API Manager exposes an API, the Publisher API, using which we can perform APIM-related tasks.

This Python client first logs in to APIM and checks whether there is already an API with the same name; if not, it creates one. The API has two resources, each with the Unlimited throttling tier:

  1. PUT /cart
  2. POST /checkout

In addition, the API has a failover endpoint configuration: one production endpoint and one failover endpoint. When the API is invoked, it first tries to reach the production endpoint; if the production endpoint is not available, it tries the failover endpoint.

Once the API is created, this code publishes the newly created API so users can subscribe to it.

import json
import logging
import urlparse  # Python 2; on Python 3 use urllib.parse

import requests

# plain logging is used here in place of the plugin framework's LogFactory
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

log.info("Starting api creation plugin...")

apim_domain = "localhost"
apim_username = "admin"
apim_password = "admin"

endpoint = "http://api.openweathermap.org/data/2.5/weather?q=London"

url = 'https://%s:9443/publisher/site/blocks/' % apim_domain

login_url = urlparse.urljoin(url, 'user/login/ajax/login.jag')
payload = {'action': 'login', 'username': apim_username, 'password': apim_password}

log.info("Logging in to API Manager %s" % login_url)
resp = requests.post(login_url, data=payload, verify=False)
log.info("APIM login response %s" % resp)
cookie = resp.cookies

swagger = {'paths': {
    '/cart': {'put': {
        'x-auth-type': 'None',
        'x-throttling-tier': 'Unlimited',
        'responses': {'200': {}},
    }},
    '/checkout': {'post': {
        'parameters': [{
            'schema': {'type': 'object'},
            'description': 'Request Body',
            'name': 'Payload',
            'required': 'false',
            'in': 'body',
        }],
        'responses': {'200': {}},
        'x-auth-type': 'None',
        'x-throttling-tier': 'Unlimited',
    }},
}}

swagger_json = json.dumps(swagger)

api_url = urlparse.urljoin(url, 'item-add/ajax/add.jag')

# failover endpoint configuration: one production endpoint plus one failover
endpoint_conf = {
    'endpoint_type': 'failover',
    'production_endpoints': {'url': endpoint, 'config': 'null'},
    'production_failovers': [{'url': 'http://failover_domain:30000/StorefrontDemo/api/customer',
                              'config': 'null'}],
}
endpoint_json = json.dumps(endpoint_conf)

payload = {
    'action': 'addAPI',
    'name': 'Storefront',
    'context': 'storefront',
    'version': 'v1',
    'visibility': 'public',
    'endpointType': 'nonsecured',
    'tiersCollection': 'Unlimited',
    'http_checked': 'http',
    'https_checked': 'https',
    'resourceCount': 2,  # two resources, at indices 0 and 1
    'resourceMethod-0': 'PUT',
    'resourceMethodAuthType-0': 'None',
    'resourceMethodThrottlingTier-0': 'Unlimited',
    'uriTemplate-0': 'cart',
    'resourceMethod-1': 'POST',
    'resourceMethodAuthType-1': 'None',
    'resourceMethodThrottlingTier-1': 'Unlimited',
    'uriTemplate-1': 'checkout',
    'endpoint_config': endpoint_json,
    'swagger': swagger_json,
}

exist_payload = {
    'action': 'isAPINameExist',
    'apiName': 'Storefront',
}

# check whether an API with the same name already exists
resp = requests.post(api_url, data=exist_payload, verify=False, cookies=cookie)
api_exist = ('true' == json.loads(resp.text)['exist'])
log.info("API already exists: %s" % api_exist)

if not api_exist:
    log.info("Creating API Storefront %s" % api_url)
    resp = requests.post(api_url, data=payload, verify=False, cookies=cookie)
    log.info("APIM api creation response %s" % resp)

    publish_url = urlparse.urljoin(url, 'life-cycles/ajax/life-cycles.jag')
    payload = {
        'action': 'updateStatus',
        'name': 'Storefront',
        'version': 'v1',
        'provider': 'admin',
        'status': 'PUBLISHED',
        'publishToGateway': 'true',
        'requireResubscription': 'false',
    }
    log.info("Publishing API Storefront %s" % publish_url)
    resp = requests.post(publish_url, data=payload, verify=False, cookies=cookie)
    log.info("APIM api publishing response %s" % resp)

log.info("***************** API creation plugin completed *****************")

Below is the created API.

Storefront API
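
Once the API is published, you can sanity-check it through the gateway. A minimal check (assuming the default gateway port 8280 and the context and version used above; the resources use auth type None, so no access token is needed):

curl -v -X PUT http://localhost:8280/storefront/v1/cart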

Python Flask enable CORS


Install the flask-cors plugin:

pip install -U flask-cors

Here is Python Flask code to enable CORS for all resources:

import os

from flask import Flask
from flask_cors import CORS

cer = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.crt')
key = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.key')

app = Flask(__name__)
# allow all origins and credentials for the root resource
CORS(app, resources={r"/": {"origins": "*", "supports_credentials": True}})


@app.route('/', methods=['POST', 'GET'])
def hello_world():
    return 'Hello World!'


if __name__ == '__main__':
    context = (cer, key)  # SSL certificate/key pair for HTTPS
    app.run(host='0.0.0.0', port=5000, debug=True, ssl_context=context)

supports_credentials will cause Flask to send the "Access-Control-Allow-Credentials" header with the value true:

Access-Control-Allow-Credentials: true

Have a look at the screenshot below of a CORS preflight OPTIONS request.

CORS preflight OPTIONS request
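
You can also reproduce the preflight request from the command line. A minimal example (assuming the app above is running with HTTPS on port 5000; -k skips verification of the self-signed certificate):

curl -ik -X OPTIONS https://localhost:5000/ -H "Origin: http://example.com" -H "Access-Control-Request-Method: POST"

The response should include the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers.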