Posts Tagged ‘python’

This is a Python code snippet I wrote to automate API creation in WSO2 API Manager. WSO2 API Manager exposes an API of its own, the Publisher API, through which we can perform APIM-related tasks.

This Python client first logs in to APIM and checks whether there is already an API with the same name; if not, it creates one. The API has two resources, each with the Unlimited throttling tier:

  1. PUT /cart
  2. POST /checkout

In addition, the API has a failover endpoint configuration: one production endpoint and one failover endpoint. When the API is invoked, it first tries to reach the production endpoint; if the production endpoint is not available, it tries the failover endpoint.

Once the API is created, this code publishes the newly created API so users can subscribe to it.

# requests, json and urlparse (Python 2) are required;
# LogFactory comes from the plugin framework this snippet runs in.
import json
import requests
import urlparse

log = LogFactory().get_log(__name__)
log.info("Starting api creation plugin...")

apim_domain = "localhost"
apim_username = "admin"
apim_password = "admin"

endpoint = "http://api.openweathermap.org/data/2.5/weather?q=London"

url = 'https://%s:9443/publisher/site/blocks/' % apim_domain

login_url = urlparse.urljoin(url, 'user/login/ajax/login.jag')
payload = {'action': 'login', 'username': apim_username, 'password': apim_password}

log.info("Logging into APIManager %s " % login_url)
resp = requests.post(login_url, data=payload, verify=False)
log.info("APIM login response %s" % resp)
cookie = resp.cookies

swagger = {'paths': {
    '/cart': {'put': {
        'x-auth-type': 'None',
        'x-throttling-tier': 'Unlimited',
        'responses': {'200': {}}}},
    '/checkout': {'post': {
        'parameters': [{
            'schema': {'type': 'object'},
            'description': 'Request Body',
            'name': 'Payload',
            'required': 'false',
            'in': 'body',
        }],
        'responses': {'200': {}},
        'x-auth-type': 'None',
        'x-throttling-tier': 'Unlimited',
    }}}}

swagger_json = json.dumps(swagger)

api_url = urlparse.urljoin(url, 'item-add/ajax/add.jag')

endpoint_conf = {
    'endpoint_type': 'failover',
    'production_endpoints': {'url': endpoint, 'config': 'null'},
    'production_failovers': [{'url': 'http://failover_domain:30000/StorefrontDemo/api/customer',
                              'config': 'null'}]}
endpoint_json = json.dumps(endpoint_conf)

payload = {
    'action': 'addAPI',
    'name': 'Storefront',
    'context': 'storefront',
    'version': 'v1',
    'visibility': 'public',
    'endpointType': 'nonsecured',
    'tiersCollection': 'Unlimited',
    'http_checked': 'http',
    'https_checked': 'https',
    # Two resources: note the distinct -0/-1 suffixes. Repeating the same
    # key in a dict literal would silently drop the first resource.
    'resourceCount': 2,
    'resourceMethod-0': 'PUT',
    'resourceMethodAuthType-0': 'None',
    'resourceMethodThrottlingTier-0': 'Unlimited',
    'uriTemplate-0': 'cart',
    'resourceMethod-1': 'POST',
    'resourceMethodAuthType-1': 'None',
    'resourceMethodThrottlingTier-1': 'Unlimited',
    'uriTemplate-1': 'checkout',
    }

payload['endpoint_config'] = endpoint_json
payload['swagger'] = swagger_json

exist_payload = {
    'action': 'isAPINameExist',
    'apiName': 'Storefront'
    }

# Check if an API with the same name already exists
resp = requests.post(api_url, data=exist_payload, verify=False, cookies=cookie)
api_exist = ('true' == json.loads(resp.text)['exist'])
log.info("API already exists: %s " % api_exist)

if not api_exist:
    log.info("Creating API Storefront %s " % api_url)
    resp = requests.post(api_url, data=payload, verify=False, cookies=cookie)
    log.info("APIM api creation response %s" % resp)

    publish_url = urlparse.urljoin(url, 'life-cycles/ajax/life-cycles.jag')
    payload = {
        'action': 'updateStatus',
        'name': 'Storefront',
        'version': 'v1',
        'provider': 'admin',
        'status': 'PUBLISHED',
        'publishToGateway': 'true',
        'requireResubscription': 'false'
        }
    log.info("Publishing API Storefront %s " % publish_url)
    resp = requests.post(publish_url, data=payload, verify=False, cookies=cookie)
    log.info("APIM api publishing response %s" % resp)

log.info("*****************API creation plugin completed *****************")
	

Below is the created API

Storefront API

Python Flask enable CORS

Posted: October 24, 2015 in Uncategorized

Install the flask-cors plugin

pip install -U flask-cors

Python Flask code to enable CORS for all resources

from flask import Flask
from flask_cors import CORS

import os

cer = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.crt')
key = os.path.join(os.path.dirname(__file__), 'resources/exported-pem.key')

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*"}}, supports_credentials=True)

@app.route('/', methods=['POST', 'GET'])
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    context = (cer, key)
    app.run(host='0.0.0.0', port=5000, debug=True, ssl_context=context)

supports_credentials causes Flask to send the “Access-Control-Allow-Credentials” header set to true.

Access-Control-Allow-Credentials: true
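To see what such a preflight response looks like without a browser, here is a small stdlib-only sketch that mimics the headers flask-cors adds. The handler below is my own stand-in, not flask-cors itself.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class CorsPreflightHandler(BaseHTTPRequestHandler):
    """A stand-in that answers preflights the way flask-cors does."""

    def do_OPTIONS(self):
        self.send_response(200)
        # Headers added when supports_credentials is enabled
        self.send_header('Access-Control-Allow-Origin',
                         self.headers.get('Origin', '*'))
        self.send_header('Access-Control-Allow-Credentials', 'true')
        self.send_header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS')
        self.send_header('Content-Length', '0')
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(('127.0.0.1', 0), CorsPreflightHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send a preflight the way a browser would
conn = http.client.HTTPConnection('127.0.0.1', server.server_port)
conn.request('OPTIONS', '/', headers={'Origin': 'http://example.com',
                                      'Access-Control-Request-Method': 'POST'})
resp = conn.getresponse()
print(resp.getheader('Access-Control-Allow-Credentials'))  # -> true
server.shutdown()
```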

Have a look at the screenshot below of a CORS preflight OPTIONS request.

CORS preflight OPTIONS request

Python Flask API in HTTPS

Posted: October 21, 2015 in Uncategorized

Before starting a server with SSL, you need to create a private key and a certificate. I will create a self-signed certificate for this tutorial.
The command below will ask for information regarding your certificate. Among them, ‘Common Name’ is the most important: it should be the domain name of the server you are running. This will output two files,

1) udara.com.key –> private key for my domain
2) udara.com.crt –> self-signed certificate

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout udara.com.key -out udara.com.crt

Below is the Flask code snippet to start your Flask API over HTTPS

from flask import Flask

import os

cer = os.path.join(os.path.dirname(__file__), 'resources/udara.com.crt')
key = os.path.join(os.path.dirname(__file__), 'resources/udara.com.key')

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    # Pass the certificate and key as the SSL context
    context = (cer, key)
    app.run(host='0.0.0.0', port=5000, debug=True, ssl_context=context)

When you run the above code, it will show the output below. Note that it is serving HTTPS.

* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
  • Add A record
import boto3

client = boto3.client('route53',
                      aws_access_key_id="AWS_KEY",
                      aws_secret_access_key="AWS_SEC_KEY")
hosted_zone_id = 'HOSTED_ZONE_ID'

domain = 'DOMAIN_NAME'  # the record name to create
ip = '123.123.123.123'
aws_region = "US"       # one of "US", "EU", "AP"

if aws_region == "US":
    # US is my default region, so cont_code is blank
    cont_code = {}
elif aws_region == "EU":
    cont_code = {'ContinentCode': 'EU'}
elif aws_region == "AP":
    cont_code = {'ContinentCode': 'AS'}

response = client.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={
        'Comment': 'comment',
        'Changes': [
            {
                'Action': 'CREATE',
                'ResourceRecordSet': {
                    'Name': domain,
                    'Type': 'A',
                    'SetIdentifier': 'my_a_record',
                    'GeoLocation': cont_code,
                    'TTL': 60,
                    'ResourceRecords': [
                        {'Value': ip},
                    ],
                }
            },
        ]
    }
)

print("DNS record status %s" % response['ChangeInfo']['Status'])
print("DNS record response code %s" % response['ResponseMetadata']['HTTPStatusCode'])
  • Delete A record

When deleting the A record, you only have to change the action to DELETE

'Action': 'DELETE'
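Since the CREATE and DELETE change batches are otherwise identical, a small helper can build both. This is a sketch; the helper name and defaults are my own. Note that for DELETE, the record set must match the existing record exactly.

```python
def change_batch(action, domain, ip, set_identifier='my_a_record', ttl=60, geo=None):
    """Build the ChangeBatch dict for change_resource_record_sets.

    action is 'CREATE' or 'DELETE'; for DELETE the record set must match
    the existing record exactly (name, type, TTL, value, identifier).
    """
    record = {
        'Name': domain,
        'Type': 'A',
        'SetIdentifier': set_identifier,
        'TTL': ttl,
        'ResourceRecords': [{'Value': ip}],
    }
    if geo is not None:
        record['GeoLocation'] = geo
    return {'Comment': 'comment',
            'Changes': [{'Action': action, 'ResourceRecordSet': record}]}

create = change_batch('CREATE', 'udara.com', '123.123.123.123')
delete = change_batch('DELETE', 'udara.com', '123.123.123.123')
print(create['Changes'][0]['Action'], delete['Changes'][0]['Action'])  # -> CREATE DELETE
```

The resulting dict is what you would pass as the ChangeBatch argument to client.change_resource_record_sets.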

 

I am using a simple HTTP server written in Python which runs on the port given by the command-line argument. These servers act as the upstream servers for this test. Three servers are started
on ports 8080, 8081 and 8082. Each server logs its port number when a request is received. Logs are written to the log file located at var/log/loadtest.log, so by looking at the log file we can identify how Nginx distributes incoming requests among the three upstream servers.

The diagram below shows how Nginx and the upstream servers are arranged.

Load balancing with Nginx

Below is the code for the simple HTTP server. This is a modification of [1].

#!/usr/bin/python

#backend.py
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import sys
import logging

logging.basicConfig(filename='var/log/loadtest.log', level=logging.DEBUG,
                    format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

#This class handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

	#Handler for GET requests
	def do_GET(self):
		logging.debug("Request received for server on : %s " % PORT_NUMBER)
		self.send_response(200)
		self.send_header('Content-type', 'text/html')
		self.end_headers()
		# Send the html message
		self.wfile.write("Hello World: %s" % PORT_NUMBER)
		return

try:
	#Create a web server on the port given as the first command-line
	#argument and attach the handler
	PORT_NUMBER = int(sys.argv[1])
	server = HTTPServer(('', PORT_NUMBER), myHandler)
	print 'Started httpserver on port %s' % sys.argv[1]
	#Wait forever for incoming http requests
	server.serve_forever()

except KeyboardInterrupt:
	print '^C received, shutting down the web server'
	server.socket.close()

Let’s start the servers on ports 8080, 8081 and 8082.

nohup python backend.py 8080 &
nohup python backend.py 8081 &
nohup python backend.py 8082 &

Check if the servers are running on the specified ports.

netstat -tulpn | grep 808
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      454/python
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN      455/python
tcp        0      0 0.0.0.0:8082            0.0.0.0:*               LISTEN      457/python

* Configure Nginx as a load balancer for the above upstream servers.

Create a configuration file /etc/nginx/udara.com.conf with the content below. The servers started above are configured as upstream servers.

upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

server {
           listen 80;
           server_name udara.com;
           location / {
                        proxy_pass http://udara.com;
           }
}

* Pick a client to send requests. You can use JMeter or any other tool. However, I wrote a very simple shell script which sends a given number of requests to Nginx.

#!/bin/bash
c=1
count=$1
echo $count
while [ $c -le $count ]
do
     curl http://udara.com/
     (( c++ ))
done
Round robin load balancing
upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

Let’s issue 9 requests.

./requester.sh 9

Logs written to the var/log/loadtest.log log file.

06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082

Requests are distributed evenly among all three servers in round-robin fashion.
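The default rotation can be modelled in a couple of lines with itertools.cycle, matching the log pattern above:

```python
import itertools

servers = ['udara.com:8080', 'udara.com:8081', 'udara.com:8082']
picker = itertools.cycle(servers)  # endless 8080 -> 8081 -> 8082 -> 8080 ...

# 9 requests -> three full passes over the server list
chosen = [next(picker) for _ in range(9)]
print(chosen == servers * 3)  # -> True
```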

Session stickiness

Requests from the same client will always be forwarded to the same server. The first three octets of the client’s IPv4 address, or the entire IPv6 address, are used as the hashing key to determine which server the request is forwarded to. In case the selected server is unavailable, the request is forwarded to another server.

upstream udara.com {
        ip_hash;
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

All the requests are forwarded to the server running on 8082.

06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
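The selection rule can be sketched in a few lines of Python. The hashing details below are my own illustration, not Nginx’s actual hash function; the point is that the key is the first three octets, so all clients in the same /24 land on the same server.

```python
import hashlib

servers = ['udara.com:8080', 'udara.com:8081', 'udara.com:8082']

def pick_server(client_ip):
    # Key on the first three octets of the IPv4 address
    key = '.'.join(client_ip.split('.')[:3])
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Two clients in the same /24 hash to the same upstream
print(pick_server('203.0.113.10') == pick_server('203.0.113.99'))  # -> True
```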
Weighted load balancing

By default, Nginx distributes requests equally among all upstream servers. This is fine when all the upstream servers have the same capacity to serve requests. But there are scenarios where some upstream servers have more resources than others, so more requests should be forwarded to the high-capacity servers and fewer requests to the low-capacity ones. Nginx provides the ability to specify a weight for every server; specify weights proportional to the capacity of the servers.

upstream udara.com {
 server udara.com:8080 weight=4; #server1
 server udara.com:8081 weight=3; #server2
 server udara.com:8082 weight=1; #server3
}

The above configuration says server1’s capacity is four times that of server3, and server2 has three times the capacity of server3. So for every 8 requests, 4 should be forwarded to server1, 3 to server2 and one to server3.
The logs below show that requests are distributed according to the specified weights.

06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8082
06/15/2014 12:01:36 PM Request received for server on : 8080
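Weighted selection can be sketched as each server appearing in the rotation once per unit of weight. Nginx actually uses a smoother interleaving algorithm, but the per-cycle totals are the same.

```python
import itertools

weights = {'server1': 4, 'server2': 3, 'server3': 1}

# Each server appears in the rotation once per unit of weight
rotation = [name for name, weight in weights.items() for _ in range(weight)]
picker = itertools.cycle(rotation)

# One full cycle of 8 requests: 4 to server1, 3 to server2, 1 to server3
chosen = [next(picker) for _ in range(8)]
print({name: chosen.count(name) for name in weights})
```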
 Mark a server as unavailable

“down” is used to tell Nginx that an upstream server is not available. This is useful when we know that the server is down for some reason or is undergoing maintenance. Nginx will not forward requests to servers marked as down.

upstream udara.com {
        server udara.com:8080 weight=4;
        server udara.com:8081 weight=3 down;
        server udara.com:8082 weight=1;
}

 

06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8082
06/15/2014 12:10:54 PM Request received for server on : 8080

No requests were forwarded to the server running on port 8081.
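The effect of down can be sketched as simply filtering those servers out of the weighted rotation. This is an illustrative model, not Nginx’s implementation.

```python
# The upstream list with weights; True marks a server configured as "down"
servers = [('udara.com:8080', 4, False),
           ('udara.com:8081', 3, True),   # marked down
           ('udara.com:8082', 1, False)]

# Build the weighted rotation, skipping anything marked down
rotation = [name for name, weight, down in servers
            if not down
            for _ in range(weight)]

print('udara.com:8081' in rotation)      # -> False
print(rotation.count('udara.com:8080'))  # -> 4
```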

High availability / Backup

When an upstream server node is marked as backup, Nginx forwards requests to it only when the primary servers are unavailable.

upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081 ; #server2
        server udara.com:8082  backup; #server3
}

Requests will be sent only to server1 and server2. No requests will be sent to server3 since it is the backup node.

06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081

Stop the servers running on 8080 and 8081 so that only the server on 8082 is running.
Requests are now sent to the backup node.

06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
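The backup rule can be sketched as: try the primary pool first, and fall back to the backup pool only when no primary is reachable (again an illustrative model, not Nginx’s code).

```python
primaries = ['server1', 'server2']
backups = ['server3']

def pick(available):
    """Return a server to use, given the set of currently reachable servers."""
    live_primaries = [s for s in primaries if s in available]
    # Backups are considered only when every primary is down
    pool = live_primaries if live_primaries else [s for s in backups if s in available]
    return pool[0] if pool else None

print(pick({'server1', 'server2', 'server3'}))  # primaries healthy -> server1
print(pick({'server3'}))                        # all primaries down -> server3
```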
Multiple backup nodes.
upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081  backup; #server2
        server udara.com:8082  backup; #server3
}

Requests are directed only to server1 as long as server1 is available.

06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080

When server1 is stopped, requests are forwarded to both server2 and server3.

06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082

[1] https://github.com/tanzilli/playground/blob/master/python/httpserver/example1.py

[2] http://nginx.org/en/docs/http/load_balancing.html