
New Relic is a popular performance monitoring system which provides real-time analytics such as performance, memory usage, CPU usage, threads, web page response time, etc. You can even profile applications remotely using the New Relic dashboard.

This article explains how to integrate the New Relic performance monitoring Java agent with WSO2 Carbon products.

Tested platform: Java 8, WSO2 ESB 5.0.0, macOS Sierra 10.12.3

1) Sign up on the New Relic website.
You will receive a license key once you subscribe.

2) Download and extract the New Relic agent zip file as shown below. It contains:
i) the New Relic agent JAR file (newrelic.jar)
ii) the newrelic.yml configuration file

wget -N https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip
unzip -q newrelic-java.zip

3) Copy newrelic.jar and newrelic.yml into a new directory under $CARBON_HOME:

mkdir $CARBON_HOME/newrelicAgent
cp newrelic.jar $CARBON_HOME/newrelicAgent
cp newrelic.yml $CARBON_HOME/newrelicAgent

4) Set the New Relic license key in newrelic.yml.
Locate the line license_key: '<%= license_key %>' and replace the placeholder with the license key you received in step 1.

license_key: 'e5620kj287aee4ou7613c2ku7d56k12387bd5jyb'

5) Add the Java agent to $CARBON_HOME/bin/wso2server.sh as shown below.

-javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

A sample section looks like this:

while [ "$status" = "$START_EXIT_STATUS" ]
do
    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    $JVM_MEM_OPTS \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
    $JAVA_OPTS \
    -javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

6) Start the server: sh $CARBON_HOME/bin/wso2server.sh

At startup you will see the following entries in the Carbon log file:

Mar 26, 2017 13:08:58 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Loading configuration file "/Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelicAgent/./newrelic.yml"
Mar 26, 2017 13:08:59 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Writing to log file: /Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelic/logs/newrelic_agent.log
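
To confirm that the agent is actually running and reporting, you can follow its own log file and check the JVM arguments. The paths below are taken from the startup output above; adjust them to your installation.

# Follow the New Relic agent's own log (location printed at startup)
tail -f $CARBON_HOME/newrelic/logs/newrelic_agent.log

# Confirm that the -javaagent flag was picked up by the running JVM
ps aux | grep newrelic.jar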

7) Perform some operations, such as accessing the management console or invoking APIs. Then log in to the New Relic dashboard, where you will find statistics about your Carbon product.


Beware of the following error.

When I tried the same with WSO2 API Manager 2.1.0, I encountered the error below at server startup. Post [2] suggests that it is due to an issue with the temp directory. The root cause is that the WSO2 startup script deletes TMP_DIR at startup, which leaves New Relic unable to write to the temp directory. The fix is to delete the contents of TMP_DIR instead of deleting the whole directory. So you will have to change $CARBON_HOME/bin/wso2server.sh as below: comment out the TMP_DIR deletion and modify it to remove only the folder contents.

TMP_DIR="$CARBON_HOME"/tmp
#if [ -d "$TMP_DIR" ]; then
#rm -rf "$TMP_DIR"
#fi

if [ -d "$TMP_DIR" ]; then
rm -rf "$TMP_DIR"/*
fi
Error bootstrapping New Relic agent: java.lang.RuntimeException: java.io.IOException: No such file or directory
java.lang.RuntimeException: java.io.IOException: No such file or directory
    at com.newrelic.bootstrap.BootstrapLoader.load(BootstrapLoader.java:122)
    at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:110)
    at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:79)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
    at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
Caused by: java.io.IOException: No such file or directory

References

[1] http://lasanthatechlog.blogspot.com/2015/06/integrating-wso2-products-with-new-relic.html

[2] https://discuss.newrelic.com/t/error-bootstrapping-new-relic-agent-in-hadoop-mapreduce-job/23763

The Script Mediator is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.
This post contains a sample in the Groovy scripting language with which you can perform collection operations easily.

Prerequisites:
Download the Groovy all-in-one dependency JAR (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.
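
For reference, one way to fetch that JAR is shown below. The URL assumes the artifact is published on Maven Central under the usual org.codehaus.groovy coordinates, so verify the exact version and path before relying on it.

wget -P $ESB_HOME/repository/lib \
  https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-all/2.2.0-beta-1/groovy-all-2.2.0-beta-1.jar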

Let's say that your current payload consists of a set of employees, represented as below.

{
  "employees": [
    {
      "firstName": "John";,
      "lastName": "Doe",
      "age":25
    },
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age":45
    },
    {
      "firstName": "Peter",
      "lastName":"Jones",
      "age":35
    }
  ]
}

Now you want to filter out the set of older (age > 30) employees in order to apply a new insurance policy.
Let's see how you can achieve this with the WSO2 ESB Script Mediator using a Groovy script.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />

<script language="groovy">
 import groovy.json.*;
 def payload = mc.getProperty("payload");
 def empList = new JsonSlurper().parseText(payload.toString());
 empList.employees = empList.employees.findAll { it.age > 30 }
 mc.setPayloadJSON(JsonOutput.toJson(empList));
</script>

First I set the property "payload" to store the message payload before the script mediator.
Then, within the script mediator, I fetch its content using mc.getProperty() and parse the payload
with JsonSlurper, which converts the JSON payload string into a Groovy object, a List in this case. Thereafter I can
use the Groovy function findAll() to filter employees using the closure age > 30. Finally I convert the Groovy object
back to a JSON string with toJson() and set the filtered employees as the payload.

So, after going through the script mediator, the payload is changed as below and contains only the older employees.

{
  "employees": [
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age": 45
    },
    {
     "firstName": "Peter",
      "lastName": "Jones",
      "age": 35
    }
  ]
}
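
If you wire the sequence above into a proxy service or API, you can test the filtering end to end with curl. The proxy name EmployeeFilterProxy and the default HTTP port 8280 below are placeholders; substitute the values from your own setup.

curl -s -X POST http://localhost:8280/services/EmployeeFilterProxy \
  -H "Content-Type: application/json" \
  -d '{"employees":[{"firstName":"John","lastName":"Doe","age":25},{"firstName":"Anna","lastName":"Smith","age":45},{"firstName":"Peter","lastName":"Jones","age":35}]}'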

I assume you know that the Logstash, Elasticsearch and Kibana stack, a.k.a. ELK, is a widely used log analysis tool set. This how-to guide explains how to publish logs of WSO2 Carbon servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK. But I am a Docker fan, so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this Docker image does not come with a Logstash receiver for log4j events, so I added the Logstash configuration below to receive log4j events and created my own Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

input {
  log4j {
    mode => server
    host => "0.0.0.0"
    port => 6000
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 6000 (input section) and forward the logs to Elasticsearch, which runs on port 9200 inside the Docker container.

Now start the Docker container as
`docker run -d -p 6000:6000 -p 5601:5601 udaraliyanage/elklog4j`

port 6000 => Logstash
port 5601 => Kibana
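
A quick sanity check that the container came up and both ports are reachable (image and port mappings as above):

docker ps                        # the ELK container should be listed as Up
nc -vz localhost 6000            # Logstash log4j input should accept connections
curl -sI http://localhost:5601   # Kibana should answer on port 5601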

# Setup Carbon Server to publish logs to Logstash

* Download the Logstash JSON event layout dependency JAR from [3] and place it in $CARBON_HOME/repository/components/lib.
This converts the log events to a binary format and streams them to a remote log4j host, in our case Logstash running on port 6000.

* Add the following log4j appender configuration to the Carbon server by editing the $CARBON_HOME/repository/conf/log4j.properties file.

log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY,tcp

log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.tcp.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m%n
log4j.appender.tcp.layout.TenantPattern=%U%@%D[%T]
log4j.appender.tcp.Port=6000
log4j.appender.tcp.RemoteHost=localhost
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.threshold=DEBUG
log4j.appender.tcp.Application=myCarbonApp

RemoteHost => the Logstash server we want to publish events to; localhost (port 6000) in our case.
Application => the name of the application which publishes the logs. It is useful for anyone viewing logs in Kibana, so that they can tell which server a particular log entry came from.

* Now start the Carbon server: `./bin/wso2server.sh start`
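
Once the server is up, you can verify that the log4j SocketAppender opened a connection to Logstash. The command below assumes the Carbon server and the Docker host are the same machine:

netstat -an | grep 6000   # look for an ESTABLISHED connection to port 6000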

# View the logs in Kibana by visiting http://localhost:5601

[1] https://hub.docker.com/r/sebp/elk/
[2] https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
[3] http://mvnrepository.com/artifact/net.logstash.log4j/jsonevent-layout/1.7

The SSL ciphers supported by a WSO2 Carbon server are the ciphers supported by its internal Tomcat server. However, you may sometimes want to customize the ciphers that your server should support. For instance, Tomcat supports export-grade ciphers, which make your server vulnerable to the recent FREAK attack. Let's see how you can define the ciphers.

  • How to view the supported ciphers

1) Download TestSSLServer.jar from http://www.bolet.org/TestSSLServer/TestSSLServer.jar

2) Start the WSO2 server

3) List the supported ciphers:
java -jar TestSSLServer.jar localhost 9443

Supported cipher suites (ORDER IS NOT SIGNIFICANT):
TLSv1.0
RSA_WITH_RC4_128_MD5
RSA_WITH_RC4_128_SHA
RSA_WITH_3DES_EDE_CBC_SHA
DHE_RSA_WITH_3DES_EDE_CBC_SHA
RSA_WITH_AES_128_CBC_SHA
DHE_RSA_WITH_AES_128_CBC_SHA
RSA_WITH_AES_256_CBC_SHA
DHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_RC4_128_SHA
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
(TLSv1.1: idem)
TLSv1.2
RSA_WITH_RC4_128_MD5
RSA_WITH_RC4_128_SHA
RSA_WITH_3DES_EDE_CBC_SHA
DHE_RSA_WITH_3DES_EDE_CBC_SHA
RSA_WITH_AES_128_CBC_SHA
DHE_RSA_WITH_AES_128_CBC_SHA
RSA_WITH_AES_256_CBC_SHA
DHE_RSA_WITH_AES_256_CBC_SHA
RSA_WITH_AES_128_CBC_SHA256
RSA_WITH_AES_256_CBC_SHA256
DHE_RSA_WITH_AES_128_CBC_SHA256
DHE_RSA_WITH_AES_256_CBC_SHA256
TLS_ECDHE_RSA_WITH_RC4_128_SHA
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
———————-
Server certificate(s):
6bf8e136eb36d4a56ea05c7ae4b9a45b63bf975d: CN=localhost, O=WSO2, L=Mountain View, ST=CA, C=US
———————-

  • Configure the preferred ciphers

1) Open [CARBON_HOME]/repository/conf/tomcat/catalina-server.xml and find the Connector configuration corresponding to SSL/TLS. Most probably this is the connector with port 9443.

2) Add an attribute called ciphers containing the allowed ciphers as a comma-separated list.

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
port="9443"
bindOnInit="false"
sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
ciphers="SSL_RSA_WITH_RC4_128_MD5"

Here I have added just one cipher for simplicity.

3) List the supported ciphers now
java -jar TestSSLServer.jar localhost 9443

Supported versions: TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
TLSv1.0
RSA_WITH_RC4_128_MD5
(TLSv1.1: idem)
(TLSv1.2: idem)
———————-
Server certificate(s):
6bf8e136eb36d4a56ea05c7ae4b9a45b63bf975d: CN=localhost, O=WSO2, L=Mountain View, ST=CA, C=US
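
As an alternative to TestSSLServer, you can probe individual ciphers with openssl s_client, provided your local OpenSSL build still supports them. RC4-MD5 is the OpenSSL name for SSL_RSA_WITH_RC4_128_MD5, so with the configuration above the first handshake should succeed and the second should fail:

openssl s_client -connect localhost:9443 -cipher RC4-MD5 < /dev/null     # allowed, handshake completes
openssl s_client -connect localhost:9443 -cipher AES128-SHA < /dev/null  # not in the list, handshake fails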

 

Reference: http://blog.facilelogin.com/2014/10/poodle-attack-and-disabling-ssl-v3-in.html

Deployment synchronization (depsync) in WSO2 is the process of syncing deployment artifacts across a product cluster. The goal of depsync is to synchronize artifacts (proxies, APIs, webapps, etc.) across all the nodes when a user uploads or updates an artifact. Without depsync, when an artifact is updated by the user, it has to be added to the other servers manually. Currently depsync is carried out with an SVN repository: when a user updates an artifact, the manager node commits the changes to the central SVN repository and informs the worker nodes that there is an artifact update. The worker nodes then perform an SVN update from the repository.

This article explains an alternative way of achieving the same goal as depsync. This method eliminates the overhead of maintaining a separate SVN server; instead it uses the rsync tool, which is pre-installed on most Unix systems.

rsync is a file transfer utility for Unix systems. The rsync algorithm is smart enough to transfer only the differences between files. rsync can be configured to use rsh or ssh as the transport.

Prerequisites

incron is a utility that watches for file system changes and triggers user-defined commands when a file system change event occurs.
Install incron if you don't already have it installed:

	sudo apt-get install incron
	
Configure Deployment synchronization

1) Add host entries for all worker nodes

vi /etc/hosts
192.168.1.1 worker1 worker1.wso2.com
192.168.1.2 worker2 worker2.wso2.com
192.168.1.3 worker3 worker3.wso2.com

2) Create SSH keys on the management node.

ssh-keygen -t rsa

3) Copy the public key to the worker nodes so you can SSH to the worker nodes without providing a password each time.

ssh-copy-id -i ~/.ssh/id_rsa.pub worker1.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker2.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker3.wso2.com
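
You can confirm that passwordless login works before continuing; the ubuntu user is assumed here, matching the rsync script below.

ssh ubuntu@worker1.wso2.com hostname   # should print the worker's hostname without asking for a password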

4) Create a script file /opt/scripts/push_artifacts.sh with the below content

The script assumes your management server pack is located at /home/ubuntu/manager/, whereas the worker packs are at /home/ubuntu/worker on every worker node.

#!/bin/bash
# push_artifacts.sh - Push artifact changes to the worker nodes.

master_artifact_path=/home/ubuntu/manager/wso2esb4.6.0/repository/deployment/
worker_artifact_path=/home/ubuntu/worker/wso2esb4.6.0/repository/deployment/

worker_nodes=(worker1 worker2 worker3)

while [ -d /tmp/.rsync.lock ]
do
  echo -e "[WARNING] Another rsync is in progress, waiting..."
  sleep 2
done

mkdir /tmp/.rsync.lock

if [ $? -ne 0 ]; then
  echo "[ERROR] : can not create rsync lock"
  exit 1
else
  echo "[INFO] : created rsync lock"
fi

for i in "${worker_nodes[@]}"; do

  echo "===== Beginning artifact sync for $i ====="

  rsync -avzx --delete -e ssh "$master_artifact_path" ubuntu@"$i":"$worker_artifact_path"

  if [ $? -ne 0 ]; then
    echo "[ERROR] : rsync failed for $i"
    exit 1
  fi

  echo "===== Completed rsync for $i ====="
done

rm -rf /tmp/.rsync.lock
echo "[SUCCESS] : Artifact synchronization completed successfully"

The above script will send the artifact changes to all the worker nodes.
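
Before wiring the script into incron, it is worth running it once by hand and checking that the artifacts actually arrive on a worker (paths as assumed above):

sh /opt/scripts/push_artifacts.sh
ssh ubuntu@worker1 ls /home/ubuntu/worker/wso2esb4.6.0/repository/deployment/server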

5) Trigger the push_artifacts.sh script when an artifact is added, modified or removed.

Execute the below command to configure incron.

incrontab -e

Add the below line in the editor opened by the above step.

/home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server IN_MODIFY,IN_CREATE,IN_DELETE sh /opt/scripts/push_artifacts.sh

The above line tells incron to watch for file changes (file edits, creations and deletions) in the directory /home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server and to trigger the push_artifacts.sh script whenever such a change occurs. Simply put, incron will execute push_artifacts.sh (the script created in step 4) whenever an artifact of the ESB changes. Thus, in case of any artifact change on the master node, all the changes are synced to all the worker nodes, which is exactly the goal of deployment synchronization.
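
You can confirm that the watch was registered with:

incrontab -l   # should print the deployment/server entry added above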

Advantages over SVN-based deployment synchronization
  • No SVN repository is needed.

There is no overhead of running an SVN server.

  • Carbon servers do not need to be clustered for deployment synchronization

If you are using SVN-based or registry-based deployment synchronization, you need to cluster the Carbon servers. This method does not require clustering.

  • Can support multiple manager nodes

An SVN-based depsync system is limited to a single manager node, because a node can crash due to SVN commit conflicts when multiple managers commit artifact updates concurrently (SVN does not support concurrent commits). That issue is not applicable here. However, the syncing script should be updated to synchronize artifacts among the manager nodes as well.

  • No configurations needed on any of the worker nodes.

Practically, in a real deployment there are one or two (at most) management nodes and many worker nodes. Since configurations are done only on the management node, new worker nodes can be added without doing any configuration on the worker node side. You only need to add the hostname of the new worker node to the artifact update script created in step 4.

  • Take backups of the artifacts

rsync can be configured to back up artifacts to another backup location.

Disadvantages over SVN based deployment synchronization
  • New nodes need to be added manually.

When a new worker node is started, it has to be added manually to the script.

  • Artifact path is hard coded in the script.

The Carbon server should be placed under /home/ubuntu/wso2 (the path specified in the script). If the Carbon server pack is moved to another location, the script also has to be updated.

Note: This method is not one of the recommended ways of doing deployment synchronization.

Use case 

Adding a persistent volume is very much like adding a virtual hard drive to your instance. In Amazon EC2 it means adding an EBS storage device to the instance; please refer to [1] for more details. The persistent volume capability comes in handy when you want to store your content in a separate place and keep the data available even after the instance is terminated or deleted. MySQL and MongoDB are examples where you may need this capability.

Below are the steps you need to perform to enable persistent volume mapping capability.

 Cartridge Definition

You need to add a configuration similar to the one below to the cartridge definition. Please note that persistence is an optional configuration which the user specifies if they want additional volumes to store their content.
"persistence": {
    "isRequired": true,
    "volume": [
        {
            "device": "/dev/sdc",
            "mappingPath": "/home/ubuntu/sdc",
            "size": "10",
            "removeOnTermination": "false"
        }
    ]
}
Here I say I want one extra volume to store my content. The Linux device of the volume should be /dev/sdc, which will be mounted to the directory /home/ubuntu/sdc. The capacity of the volume should be 10 GB. The removeOnTermination parameter specifies what should happen to the created volume after the instance is terminated: if removeOnTermination is false, the volume and its data continue to exist even after the instance is terminated, so the data is not deleted.
Subscribe with persistent volumes
Just adding this to the cartridge definition does not provide volumes for the instances; we also have to specify that we require volumes at the time of subscribing.

Subscribing via CLI

You have to add the below parameters when subscribing via the Stratos CLI:
subscribe-cartridge <other-parameters> -pv <PERSISTENCE-VOLUME> -v <VOLUME-SIZE> -t <REMOVE-ON-TERMINATION>
PERSISTENCE-VOLUME = true/false; true if you need the feature
VOLUME-SIZE = size (in gigabytes) of the storage needed
REMOVE-ON-TERMINATION = true/false; whether to delete the volume when the instance is terminated
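
For example, a subscription that asks for a 10 GB volume which survives instance termination might look like the following; the cartridge name and the other parameters are placeholders for your own values.

subscribe-cartridge mysql <other-parameters> -pv true -v 10 -t false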
Subscribing via Stratos UI
If you are subscribing via the Stratos UI, there is a tick box labelled "Require Persistent Storage mappings" on the "Subscribe to Cartridge" page. Tick it and provide the required details.
[Screenshot: Subscribe with persistent mapping]

Multiple persistent volumes

Stratos provides the capability to specify multiple persistent volumes in case you require more than one additional volume.
"persistence": {
    "isRequired": true,
    "volume": [
        {
            "device": "/dev/sdc",
            "mappingPath": "/home/ubuntu/sdc",
            "size": "10",
            "removeOnTermination": "false"
        },
        {
            "device": "/dev/sdf",
            "mappingPath": "/home/ubuntu/sdf",
            "size": "20",
            "removeOnTermination": "false"
        }
    ]
}
Note that you should not specify the same mappingPath for multiple volumes. If you do, the path will be mapped to one of the volumes, which cannot be predicted, and the other volumes will not be mapped to a directory.

What happens behind the scenes

If you subscribe with persistent mappings, Stratos will:
1) Create the volumes.
2) Format them and create a writable file system (ext3).
3) Mount the volumes to the directory specified as mappingPath.
Please note that formatting and creating a file system happens only the first time. When the volume is later attached to another instance (after the first instance is terminated), there is no need to create a file system, since one already exists.
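
Roughly, what Stratos does on the instance is equivalent to the manual steps below, shown only for illustration and using the device and mapping path from the example definition:

sudo mkfs.ext3 /dev/sdc              # only on the very first attach, when no file system exists yet
sudo mkdir -p /home/ubuntu/sdc
sudo mount /dev/sdc /home/ubuntu/sdc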

How to verify that the volumes are created

Log in to the created instance via SSH and execute the command "df -h". You will see an output similar to the one below.
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        10G  7.1G   97G  3% /home/ubuntu/sdc
/dev/sda1       4000G  40G   320G  10% /home
Limitations

In most IaaSes, such as EC2, one volume can be attached to only one instance. So if your deployment policy has a maxInstance count greater than 1, you might encounter problems when Stratos tries to attach the volume to the second instance.
This has been tested only on EC2; it needs to be tested on other IaaSes as well.

Improvements

Currently the size parameter is mandatory at the time of subscription. Ideally, the size specified in the cartridge definition should be treated as the default when no size is given at the time of subscription.

By definition, WSO2 App Factory is a multi-tenant, elastic and self-service Enterprise DevOps platform that enables multiple project teams to collaboratively create, run and manage enterprise applications. Oh! Kind of confusing? Yes, as with most definitions, only a few will grasp what App Factory means from a first look. To explain it in simpler words, WSO2 App Factory is a Platform as a Service (PaaS) which manages enterprise application development from the cradle of the application to the grave. (Still confusing? The figure below illustrates the move from traditional on-premise software to cloud-based services. You can see Platform as a Service in the third column.)

[Figure: PaaS illustration]

Unless it is a university assignment or test, every real-world application has to go through several phases until it is ready to go live. Applications have to be designed, developed and sent to QA for testing. QA then has to test them rigorously before approving them for production. Then comes the bug fixing and stabilization phase. When the software is ready, it gets deployed. Finally, when the application has completed its job, it needs to be retired.

Organizations have to use a number of tools in each of the above phases. For instance, developers may be using SVN for creating code repositories, Maven or Ant for building the projects, JIRA for ticket tracking and various other tools for finding bugs in the application. The above tools are independent of each other, which means organizations have to put considerable effort into deploying them. If you are a developer, QA manager, system administrator, DevOps engineer or any other stakeholder involved in application development, there is no doubt that you have endured the pain of the above, and you might be wondering "Is there one single tool which does the work of all of the above tools?". WSO2 App Factory does exactly that. By using App Factory you gain all the support for your application development, all under one roof.

The individual building blocks of App Factory are illustrated in the diagram below.

[Figure: WSO2 App Factory topology]

Diagram 1 depicts the components of App Factory. The management portal, which is the main interaction point with the system, is at the center. Source code management, issue trackers and other features are accessible via the portal. When a developer creates an app via the management portal, they are provided with a space in the repository, a space in the build environment, a project in the issue tracker and so on. You clone the repository you are provided into your development machine, then develop the application with your favourite programming IDE and commit. WSO2 is planning to roll out a browser-based IDE in the future to make the complete lifecycle run on the cloud. The application you are developing is continuously built in the cloud using your build tool. If automatic build is enabled, the build process is triggered automatically when you commit. If auto deploy is enabled, the app is deployed in the development cloud automatically after the build. After development is complete, the apps are promoted to the test cloud. This promotion retires the apps from the development cloud and deploys them in the test cloud. The QA department will test them and promote them to the production or staging cloud if the tests pass, or demote them back to the development cloud if they fail. The ultimate step is to send the apps to the app store, enabling users to discover them. The most interesting thing is that all the above tasks can be executed using a single tool via a single management portal.

[Figure: WSO2 App Factory application lifecycle]

 

Features of App Factory

    1. Self-provisioning of the workspace and resources such as code repository, issue tracking, build configuration and bug finding tools, etc.
    2. Support for a variety of application types:

    ◦ Web applications
    ◦ PHP
    ◦ Jaxrs
    ◦ Jaxws
    ◦ Jaggery
    ◦ WSO2 ESB
    ◦ WSO2 BPEL
    ◦ WSO2 Data services

  3. Gather the developers, QAs and DevOps of the organization into the application workspace
  4. Automate continuous builds, continuous tests and development activities
  5. One-click solutions for branching and versioning
  6. Deploy applications into the WSO2 rich middleware stack
  7. No need to change your way of doing things

    ◦ App Factory can be configured to integrate with your existing software development life cycle.
    ◦ Integrate with your existing users via LDAP or Microsoft Active Directory.

[Figure: WSO2 App Factory integrated applications]

Yes, WSO2 App Factory is customizable. For instance, organizations are not required to use the tools that App Factory supports; they can plug in a tool of their preference. It is a matter of integrating another tool. Different organizations have different workflows, and App Factory can still be configured to suit their own workflows.

In summary, WSO2 App Factory is a cloud-enabled DevOps PaaS for the enterprise which manages the entire life cycle of an application. It accelerates application development, giving enterprises a competitive advantage in the cloud.

Enough talking; help yourself by visiting the live App Factory preview. It is free and open source.

This article is just a bird's eye view of WSO2 App Factory. Visit its home page to broaden your knowledge. A good short video about the product is shown below: