Testing Flask subdomain routing locally

I’m working on a project in which each customer gets a subdomain for their personal area, and thanks to Flask it’s just a matter of using the “subdomain” parameter in the routing config. The tricky part is how to set up Flask and the /etc/hosts file in order to make it work on a local development machine.
So the first step is to map the loopback address to a custom domain name, and do the same for a custom subdomain name: localwebsite and peter.localwebsite

Of course you can choose any name you like; the only important thing is that the main “fake domain” must match in the “fake subdomain” mapping! (in the case above, “localwebsite”)
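Assuming the names above, the two /etc/hosts entries would both point at the loopback address (a sketch; adjust to whatever names you picked):

```
127.0.0.1    localwebsite
127.0.0.1    peter.localwebsite
```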

Then in Flask you have to specify the server name and its port (required!):

app.config['SERVER_NAME'] = 'localwebsite:8080'

In the above scenario I’m mapping the previously defined “fake domain” and specifying the port on which the Flask server is running (which in my development settings is 8080, and would be 5000 if not specifically defined).
Finally we can register the routes:

@app.route('/', subdomain='<customer>')
def customer_subdomain(customer):
    return 'Customer is: {}'.format(customer)

and if we call peter.localwebsite:8080 it will return “Customer is: peter” as a response! (If we point to localwebsite:8080 the default home page view will be used as expected)

Dynamic (and crazy) Python class runtime definition using built-in type() function

I just realized that, thanks to the dynamic nature of Python, we can create absurd class names at runtime… even a “?” class!
As everybody knows the following code raises a SyntaxError:

class ?What:

but… what if we create it dynamically using the built-in function type()?
The main use of type() is to get the type of an object like:

class Foo:
    pass

f = Foo()

type(f) # -> <class '__main__.Foo'>

But the function can also be used to create a class at runtime by passing: a string representing the class name, a tuple containing the superclass(es) to inherit from, and a dictionary containing class attributes.
The previously defined class can be dynamically defined in this way:

type('Foo', (object, ), {})
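The third argument (the attributes dictionary) is where class members go; here’s a minimal sketch (the Person, name and greet identifiers are made up for illustration):

```python
# build a class at runtime: name, bases tuple, attributes dict
Person = type('Person', (object,), {
    'name': 'Anna',  # a class attribute
    'greet': lambda self: 'Hello, {}!'.format(self.name),  # a method
})

p = Person()
print(p.greet())  # -> Hello, Anna!
```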

…the crazy thing is that, since we provide the name as a string, we can dynamically create class names which should be otherwise impossible to define in a classic static way. Example:

question_mark_type = type('?', (object, ), {})
question_mark_instance = question_mark_type()
type(question_mark_instance) # -> <class '__main__.?'>

We have defined a “?” class! :D
Of course you should avoid such an abomination, but this is a cool Python feature, since it allows magic things to happen. In fact, I realized this while testing dynamic database introspection using SQLAlchemy.
I created tables with names containing invalid chars like “!table”, “$table”, “#table” and so on (which are allowed in some databases), and I was expecting the ORM automapping to fail, since those names can’t be valid class names… but clearly SQLAlchemy makes use of type() to create dynamic model classes, so it’s possible to map bad table names to working Python classes… really cool!

Regular Expressions in Python: how to match english and non english letters

Ok, this is a quick (and I hope super-helpful) tip on how to match foreign-language letters like (ö, è…) in a Python regex.
As everybody knows, matching letters is just a matter of using [a-z] or \w (the latter will also match digits and underscores!) but unfortunately letters with “decorations” are not matched by these selectors. If you want to match them, you have to use Unicode ranges (something like [\u00D8-\u00F6]), but Python can automatically match all the Unicode variants by simply passing the flag re.UNICODE to compile(). So this:

re.compile(r'[^\W_]', re.IGNORECASE | re.UNICODE)

will match any English and non-English letter (and, strictly speaking, digits too).
But let me explain… \w matches letters, digits and underscores; \W (note it’s uppercase), by contrast, matches everything except letters, digits and underscores; so [^\W_] will match whatever \w matches minus the underscore (thanks to the negation ^ and the explicit _).
Bear in mind: the flag re.UNICODE as reported in python docs :

“Makes several escapes like \w, \b, \s and \d dependent on the Unicode character database”

A stupid demonstration:

# -*- coding: utf-8 -*-
import re

ENGLISH_CHARS = re.compile(r'[^\W_]', re.IGNORECASE)
ALL_CHARS = re.compile(r'[^\W_]', re.IGNORECASE | re.UNICODE)

assert len(ENGLISH_CHARS.findall('_àÖÎ_')) == 0
assert len(ALL_CHARS.findall('_àÖÎ_')) == 3

ps: not all languages have implemented the Unicode flag; JavaScript, for example, hadn’t at the time… I love Python :)

Webucator has published a video based on this post, and as explained in the video, this is no longer required in Python 3, since strings (and therefore str regex patterns) are Unicode by default! Check out the video
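For the record, the Python 3 equivalent needs no flag at all, since str patterns follow the Unicode character database by default:

```python
import re

# in Python 3, \w (and therefore \W) is Unicode-aware by default for str patterns
ALL_CHARS = re.compile(r'[^\W_]', re.IGNORECASE)
print(len(ALL_CHARS.findall('_àÖÎ_')))  # -> 3
```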

High level URL manipulation using native Python API

While developing my new project I needed to manipulate a URL in order to change its query-string.
Basically my goal was to provide a parameter with a default value if not already defined, and to add another new one.
The modules I used are urlparse and urllib, and in a few lines of code I achieved my goal in a high-level programming fashion (I mean, without regexes or low-level “hacks”).
So let’s start… the first step is to parse the URL string using urlsplit:

from urlparse import urlsplit

url_data = urlsplit(url_string)

Supposing url_string is a string holding a valid URL like “http://www.mysite.com/path/?a=1&b=2”, urlsplit will return a SplitResult object, which is a named tuple.
A named tuple is a subclass of tuple: it behaves like one, but offers a way to initialize it using pre-defined keyword arguments and to refer to them later. For example, it’s possible to create a named tuple called “CreditCard” in this way:

from collections import namedtuple

CreditCard = namedtuple('CreditCard', 
'number, secure_code, expire, owner')

and use it in this way:

card = CreditCard(number=1234567890,
                  secure_code=123,
                  expire='05/2025',
                  owner='Peter Parker')
print '{}\'s card number is: {} and ' \
      'has this secure code: {}'.format(
          card.owner, card.number, card.secure_code)
One cool feature of named tuples is that you can update one of the fields without having to recreate the object yourself, using the method _replace (it returns a new tuple with the updated value… remember that tuples are IMMUTABLE objects!).
So to change the owner of the previously defined credit card you will do:

card = card._replace(owner='Bruce Wayne')

(to be honest I don’t know why they decided to mark this helpful method as “protected” using the underscore prefix… but this doesn’t really matter)

Ok… you should get it now. Let’s get back to SplitResult… the tuple has the following properties (they are all string objects):

  • scheme (http, https…)
  • netloc (www.mysite.com)
  • path (/path/)
  • query (the query-string)
  • fragment (what comes after the “#” sign)

So, what I needed was to manipulate the query-string, but once parsed out from the original URL it’s just a raw string; to avoid messing with string manipulation I used parse_qs, which returns a Python dictionary:

from urlparse import parse_qs 

qs_data = parse_qs(url_data.query)

A dictionary is very handy in order to manipulate query-string parameters, so now all I have to do is something like:

if 'target_parameter' not in qs_data:
    qs_data['target_parameter'] = ['tp1']
qs_data['extra_parameter'] = ['ex1']

You may be wondering about values being assigned as lists instead of simple strings; well, this is because parse_qs returns a dictionary whose values are sequences, since a parameter can be supplied with multiple values (i.e. “?cat=ACTION&cat=HORROR&cat=COMEDY”).
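A quick illustration of that behavior (shown with the Python 3 import path; in Python 2 it’s `from urlparse import parse_qs`):

```python
from urllib.parse import parse_qs

# each parameter maps to a list of all the values it was supplied with
print(parse_qs('cat=ACTION&cat=HORROR&cat=COMEDY'))
# -> {'cat': ['ACTION', 'HORROR', 'COMEDY']}
```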

Now that the query string data has been updated all I have to do is to serialize it back to a simple string and update the original SplitResult.

from urllib import urlencode

url_data = url_data._replace(query=urlencode(qs_data, True))

The second argument passed to urlencode (True) tells the function that we are passing sequences as values, so it will handle them accordingly.
The new modified URL can now be retrieved by calling geturl():

url_data.geturl()
To sum up, this is the full code:

from urllib import urlencode
from urlparse import urlsplit, parse_qs

# parse the original URL string
url_data = urlsplit(url_string)

# parse the original query-string
qs_data = parse_qs(url_data.query)

# manipulate the query-string
if 'target_parameter' not in qs_data:
    qs_data['target_parameter'] = ['tp1']
qs_data['extra_parameter'] = ['ex1']

# get the URL with the modified query-string
new_url = url_data._replace(query=urlencode(qs_data, True)).geturl()
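For reference, the same flow in Python 3, where these functions moved into urllib.parse (a self-contained sketch using the example URL from above):

```python
from urllib.parse import urlsplit, parse_qs, urlencode

url_string = 'http://www.mysite.com/path/?a=1&b=2'

# parse the URL and its query-string
url_data = urlsplit(url_string)
qs_data = parse_qs(url_data.query)

# manipulate the query-string
if 'target_parameter' not in qs_data:
    qs_data['target_parameter'] = ['tp1']
qs_data['extra_parameter'] = ['ex1']

# doseq=True tells urlencode that the dict values are sequences
new_url = url_data._replace(query=urlencode(qs_data, doseq=True)).geturl()
print(new_url)
```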

That’s all folks! If you enjoyed this post don’t forget to share it using the buttons below ;)

Python: reading numbers from JSON without loss of precision using Decimal class for data processing

In the project I’m working on, I’m using an external API which returns a JSON response containing conversion rates for currencies. Since I’m dealing with currencies and prices, the precision of numbers plays an important role in calculating values in the application. The good thing about JSON, despite its name being an acronym for “JavaScript Object Notation”, is that it’s a cross-language format, so it’s not limited to the capabilities of a specific language like JavaScript, and numbers in JSON may have a higher precision than a JS float!
This is a quote from wikipedia about JSON numbers (emphasis is mine):

Number — a signed decimal number that may contain a fractional part and may use exponential E notation. JSON does not allow non-numbers like NaN, nor does it make any distinction between integer and floating-point. (Even though JavaScript uses a double-precision floating-point format for all its numeric values, other languages implementing JSON may encode numbers differently)

By default Python’s json module will load decimal numbers as float, so if we have a JSON like:

{ "number": 1.00000000000000000001 }

the default conversion into python will be {u'number': 1.0} if we just write the following code:

import json

json.loads(json_string)
But fortunately it’s dead simple to load numbers in JSON using the decimal module; there is no need to write custom decoders as I saw on the web, it’s just a matter of specifying the Decimal class for float parsing in the loads() function in this way:

import json
from decimal import Decimal

json.loads(json_string, parse_float=Decimal)

In this way the loaded python object will be:
{u'number': Decimal('1.00000000000000000001')}
And we will be able to perform precise arithmetic computations!
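A self-contained check of the difference:

```python
import json
from decimal import Decimal

raw = '{"number": 1.00000000000000000001}'

# default float parsing loses the trailing digits…
assert json.loads(raw)['number'] == 1.0

# …while parse_float=Decimal keeps the full precision
assert json.loads(raw, parse_float=Decimal)['number'] == Decimal('1.00000000000000000001')
```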
It’s also possible to use Decimal even for integer numbers, by specifying parse_int:

json.loads(json_string, parse_float=Decimal, parse_int=Decimal)
Additional reading from official Python docs: