Mini-tip about Django templates

Django template priority order

  • First, the directories listed in the TEMPLATE_DIRS setting in the project's settings.py
  • Second, the templates directory inside each installed app
  • Finally, the app itself that uses the template

In addition, templates used by the admin GUI or any other Django core module can be overridden in cascade. For example:

  • /usr/share/pyshared/django/contrib/admin/templates/admin/change_list_results.html
  • <app_dir>/templates/admin/change_list_results.html
  • <app_dir>/templates/admin/<lowercase_app_name>/change_list_results.html
  • <app_dir>/templates/admin/<lowercase_app_name>/<lowercase_model_name>/change_list_results.html

Each of these templates is overridden by the next one in the list. That is, you can override the template all the way down to a single model of a specific application.
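As a minimal sketch of the first case (the project layout and paths here are hypothetical), the filesystem loader is pointed at a project-level templates directory from settings.py:

```python
import os

# settings.py -- hypothetical project layout
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

# Checked first by the template loaders, before any app templates dir
TEMPLATE_DIRS = (
    os.path.join(PROJECT_ROOT, "templates"),
)
```
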

Paramiko example

I have the pleasure of presenting a tip from the past: Paramiko.

import os
import paramiko
hostname="vps.doc.com"
username="admin"
password="password"
port=22
remotepath="/tmp/test"

ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(hostname, port=port, username=username, password=password)
sftp = ssh.open_sftp()

remote_file = sftp.file(remotepath, "r")
remote_file.set_pipelined(True)
file_lines = remote_file.read()
remote_file.close()

sftp.open(remotepath, "w").write(file_lines)
sftp.close()
ssh.close()

Django Installation & Configuration using Nginx+FastCGI+PostgreSQL

Django box

Configuring PostgreSQL

  1. Install PostgreSQL
  2. Create a django user

    sudo -u postgres createuser -P django_user

    You can also alter the user attributes as follows:

    sudo su -
      passwd postgres
      su postgres
      psql template1
      ALTER USER django_user WITH ENCRYPTED PASSWORD 'mypassword';
      
  3. Create the Django project database:
    sudo -u postgres psql template1
    CREATE DATABASE django_db OWNER django_user ENCODING 'UTF8';
  4. Grant access to the database in pg_hba.conf:
    local   django_db        django_user                      md5
    

Configuring access to PostgreSQL

  1. Install psycopg2 Python package.
  2. Verify the installation

    You should be all set now, but let’s verify this right away. Open the
    shell and run the following instructions inside the python shell (start
    off with the python command):

    >>> import django
    >>> print django.VERSION
    (0, 97, 'pre')
    >>> import psycopg2
    >>> psycopg2.apilevel
    '2.0'
  3. Configure Django project settings (settings.py on the project directory):
    DATABASE_ENGINE = 'postgresql_psycopg2'           # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
    DATABASE_NAME = 'django_db'             # Or path to database file if using sqlite3.
    DATABASE_USER = 'django_user'             # Not used with sqlite3.
    DATABASE_PASSWORD = 'XXXXXX'         # Not used with sqlite3.
    DATABASE_HOST = 'localhost'             # Set to empty string for localhost. Not used with sqlite3.
    DATABASE_PORT = '5432'             # Set to empty string for default. Not used with sqlite3.
  4. Install Django Python package.
  5. Run syncdb from the Django project directory:
    python manage.py syncdb

Running the Django project as FastCGI

  1. Install Django and Flup Python packages.
  2. You also need to start the Django FastCGI server (from the project folder):
    python manage.py runfcgi host=127.0.0.1 port=8000 --settings=settings

    If you need to add something to pythonpath:

    python manage.py runfcgi host=127.0.0.1 port=8000 --settings=settings --pythonpath=/a/path/to/somewhere/

Configuring Nginx to serve the Django FastCGI service

  1. Configure Nginx:
    server {
        listen   80;
        #server_name  localhost;

        location / {
            root     html;
            rewrite  ^ https://172.2.30.31$request_uri  redirect;
        }

        access_log  /var/log/nginx/localhost.access.log;
    }

    server {
        listen   443;
        #server_name  localhost;
        access_log  /var/log/nginx/localhost-ssl.access.log;

        ssl_prefer_server_ciphers  on;
        ssl                  on;
        ssl_certificate      /etc/nginx/ssl/all_com.crt;
        ssl_certificate_key  /etc/nginx/ssl/all_com.key;
        ssl_session_timeout  5m;
        #ssl_protocols  SSLv2 SSLv3 TLSv1;
        #ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

        location / {
            # host and port of the FastCGI server
            fastcgi_pass 127.0.0.1:8000;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
        }

        #error_page  404  /404.html;
        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /var/www/nginx-default;
        }
    }
  2. Access http://<django-server>/.

Debian package guide: Latest Python policy

For the last couple of days, I have been recycling my knowledge about Debian Python packaging. Debian 6.0 is about to be released, and we'll need some effort to adapt many of our own packages from etch to squeeze.

I've been following the Debian-Python mailing list for about a year, so I know about the several problems, changes and improvements that have occurred during this period.

As a brief summary, many things have changed: the default Python interpreter for Debian 6.0, the backend frameworks used to build packages (CDBS with python-distutils.mk, python-central or python-support) …

All these changes have been discussed on the Debian Wiki and have been formalized as the new Python Policy. This policy is already available at http://www.debian.org/doc/packaging-manuals/python-policy/.

The cmd module: building a Command Line Interpreter in Python

I've been searching for documentation about building a Command Line Interpreter (CLI) for some time. My requirements were:

  • I needed a command history.
  • I needed TAB auto-completion.
  • … and an easy framework.

Contrary to what I expected, I didn't find much information about this on the Internet. So after some time searching, I got lucky and finally found some references to the cmd Python module (oh, what a surprise!).

The next lines show a simple example of how it works:

import cmd

class HelloWorld(cmd.Cmd):
    """Simple command processor example."""

    def do_greet(self, person):
        if person:
            print "hi,", person
        else:
            print 'hi'

    def help_greet(self):
        print '\n'.join([ 'greet [person]',
                           'Greet the named person',
                           ])

    def do_EOF(self, line):
        return True

if __name__ == '__main__':
    HelloWorld().cmdloop()

… and this is an example of use:

$ python cmd_do_help.py
(Cmd) help greet
greet [person]
Greet the named person
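Since one of my requirements was TAB auto-completion, here is a sketch of how cmd supports it: define a complete_<command> method, which readline calls when TAB is pressed (the KNOWN_PEOPLE list is made up for the example):

```python
import cmd

class Greeter(cmd.Cmd):
    """Command processor with TAB completion for the greet command."""
    KNOWN_PEOPLE = ["alice", "bob", "carol"]

    def do_greet(self, person):
        print("hi, " + person if person else "hi")

    def complete_greet(self, text, line, begidx, endidx):
        # Called by readline when TAB is pressed after "greet"
        return [p for p in self.KNOWN_PEOPLE if p.startswith(text)]

    def do_EOF(self, line):
        return True

print(Greeter().complete_greet("a", "greet a", 6, 7))  # ['alice']
```
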

One extra reference:

cmd2 is an extension of cmd. It adds several features for command-prompt tools:

  • Searchable command history (commands: “hi”, “li”, “run”)
  • Load commands from file, save to file, edit commands in file
  • Multi-line commands
  • Case-insensitive commands
  • Special-character shortcut commands (beyond cmd’s “@” and “!”)
  • Settable environment parameters
  • Parsing commands with flags
  • > (filename), >> (filename) redirect output to file
  • < (filename) gets input from file
  • bare >, >>, < redirect to/from paste buffer
  • accepts abbreviated commands when unambiguous
  • py enters interactive Python console
  • test apps against sample session transcript (see example/example.py)

Defining function args as a list of arguments in Python

(via Saltycrane)

Python offers a way to define function args as a tuple. The syntax is similar to C: we use *args to refer to the tuple of arguments passed in the function invocation.

def test_args(*args):
    for arg in args:
        print "another arg:", arg  

test_args(1, "two", 3)

Results:

another arg: 1
another arg: two
another arg: 3

Using *args when calling a function

This special syntax can be used not only in function definitions, but also when calling a function.

def test_args_call(arg1, arg2, arg3):
    print "arg1:", arg1
    print "arg2:", arg2
    print "arg3:", arg3

args = ("two", 3)
test_args_call(1, *args)

Results:

arg1: 1
arg2: two
arg3: 3
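Both uses of the star compose naturally. As a quick sketch, a variadic function can itself be called with an unpacked sequence:

```python
def multiply_all(*args):
    # Multiply every positional argument together
    result = 1
    for arg in args:
        result = result * arg
    return result

numbers = (2, 3, 4)
print(multiply_all(*numbers))  # 24
```
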


It is not the same …

Usually, we don't notice small but relevant differences in the code we are reviewing. For example, the next two classes are apparently equivalent:

  class A:
    l = []
    def __init__(self):
      ...

  class B:
    def __init__(self):
      self.l = []
      ...

But these two classes really differ in their behavior:

  >>> a = A()
  >>> a.l.append(1)
  >>> a2 = A()
  >>> a2.l.append(2)
  >>> print a.l
  [1, 2]
  >>> b = B()
  >>> b.l.append(1)
  >>> b2 = B()
  >>> b2.l.append(2)
  >>> print b.l
  [1]

Because l is defined in the class body, class A shares the l variable between all of its instances.
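The difference is easy to verify; here is the same session as a minimal runnable script:

```python
class A:
    l = []              # class attribute: one list shared by every instance

class B:
    def __init__(self):
        self.l = []     # instance attribute: a fresh list per object

a, a2 = A(), A()
a.l.append(1)
a2.l.append(2)
print(a.l)   # [1, 2] -- both appends hit the same shared list

b, b2 = B(), B()
b.l.append(1)
b2.l.append(2)
print(b.l)   # [1]
```
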

Chardet: encoding auto detect for Python

Last week, I was fighting against the hordes of character encodings. My new task in the job task pool is to develop a friendly web app to manage configuration files of cluster apps. Oh, great idea! Why didn't I think of it before? (irony). I only need four things: one parser for each conffile syntax, a stable and secure way to get/save remote files, support for an uncontrolled variety of character encodings, … and a fancy GUI. As you can guess, the task is not exactly a small app, but this post only refers to a very useful Python lib (chardet) I discovered a few days ago. This lib can auto-detect the encoding of a file very reliably. I suggest you visit the chardet homepage to see some clear examples.

Using it in a couple of lines of code:

import chardet

# Read the file as raw bytes so chardet can inspect them
raw = open('channels_info', 'rb').read()

# For example: 'ISO-8859-15'
encoding = chardet.detect(raw)['encoding']
text = raw.decode(encoding)
print text

Object factory using the Python import mechanism

Today, something related to Python programming … These snippets were rescued from a forgotten repository on my old Inspiron 6000.

To put it plainly, this post is about an app (its purpose is not relevant) whose internal handler classes can be set at configuration time. This idea is nice when you want to minimize hookups in your app. For example, it is an easy solution when, in a hypothetical future, a third-party developer wants to extend your app with a new handler: that developer only has to build the handler as an external module and put it on the Python path.

First step: I build my factory class. These factory objects create DataSource objects:

class DataSourceFactory(object):

    def create(self, handlerClassname, object_uid=None, key_value=None,
               handler_options={}):
        ...
        modulename, classname = handlerClassname.rsplit('.', 1)
        module = __import__(modulename, {}, {}, [classname])
        handler_class = getattr(module, classname)
        ds_handler = handler_class()
        for k, v in handler_options.items():
            setattr(ds_handler, k, eval(v))

        ds = DataSource(ds_handler)
        ds.key_value = key_value
        ds.object_uid = object_uid

        return ds

Note two things in the previous code:

  • The create function receives a string with the class name of the handler.
  • I use getattr and __import__ (object reflection) to instantiate the handler from the class name received as a parameter.

The class name, in my app, is set in the app configuration file. This file is a standard Python config file:

[origin]
datasource_handler='syncldap.datasources.LdapDataSourceHandler'
key_value='sid'
object_uid='sid'
...

These confs are loaded into the app using RawConfigParser:

def create_Synchronizer(self, config_filename):
    # RawConfigParser does not interpolate attribute values
    cfg = self.ConfigParser.RawConfigParser()
    cfg.readfp(file(config_filename))

    # DataSourceFactory
    data_source_factory = self.datasources.DataSourceFactory()

    # Load the class name of the origin handler
    origin_data_source_handler_classname = \
        eval(cfg.get('origin', 'datasource_handler'))
    # For example: 'syncldap.datasources.LdapDataSourceHandler'

    # Load origin options
    origin_handler_options = dict(cfg.items('opt:origin_handler'))
    origin_key_value = eval(cfg.get('origin', 'key_value'))
    origin_object_uid = eval(cfg.get('origin', 'object_uid'))

    # Create the origin source
    origin_source = data_source_factory.create(
        origin_data_source_handler_classname,
        origin_object_uid, origin_key_value, origin_handler_options)
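The same load-a-class-by-name trick can be sketched in a self-contained way. Here collections.OrderedDict stands in for a real handler class, and importlib is used as the modern wrapper around __import__:

```python
import importlib

def load_class(dotted_name):
    # "package.module.ClassName" -> the class object, via the import mechanism
    modulename, classname = dotted_name.rsplit('.', 1)
    module = importlib.import_module(modulename)
    return getattr(module, classname)

# The dotted name could come straight from a configuration file
handler_class = load_class('collections.OrderedDict')
handler = handler_class()
print(type(handler).__name__)  # OrderedDict
```
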

ClusterSSH: A GTK parallel SSH tool

For some time now, I've been using an amazing admin tool named ClusterSSH (aka cssh).

With this tool (packages available at least for Debian-like GNU/Linux distributions), we can interact simultaneously with a cluster of servers. This is very useful when you are performing occasional tasks on similar servers (for example, Tomcat cluster nodes, …) and you want to execute the same instructions on all of them.

cssh

My cssh config file (~/.csshrc) looks like the default settings:

auto_quit=yes
command=
comms=ssh
console_position=
extra_cluster_file=~/.clusters      # <- the only setting changed from the defaults
history_height=10
history_width=40
key_addhost=Control-Shift-plus
key_clientname=Alt-n
key_history=Alt-h
key_paste=Control-v
key_quit=Control-q
key_retilehosts=Alt-r
max_host_menu_items=30
method=ssh
mouse_paste=Button-2
rsh_args=
screen_reserve_bottom=60
screen_reserve_left=0
screen_reserve_right=0
screen_reserve_top=0
show_history=0
ssh=/usr/bin/ssh
ssh_args= -x -o ConnectTimeout=10
telnet_args=
terminal=/usr/bin/xterm
terminal_allow_send_events=-xrm '*.VT100.allowSendEvents:true'
terminal_args=
# terminal_bg_style=dark
terminal_colorize=1
terminal_decoration_height=10
terminal_decoration_width=8
terminal_font=6x13
terminal_reserve_bottom=0
terminal_reserve_left=5
terminal_reserve_right=0
terminal_reserve_top=5
terminal_size=80x24
terminal_title_opt=-T
title=CSSH
unmap_on_redraw=no
use_hotkeys=yes
window_tiling=yes
window_tiling_direction=right

The ~/.clusters file is where the concrete clusters are defined (see the man page):

# home cluster
c-home tor@192.168.1.10 pablo@192.168.1.11

# promox-10.40.140
promox-10.40.140 10.40.140.17 10.40.140.18 10.40.140.19 10.40.140.33 10.40.140.17 10.40.140.18 10.40.140.33

# kvm-10.41.120
kvm-10.41.120 10.41.120.17 10.41.120.18

When I want to work with the c-home cluster, I execute cssh as follows:

# cssh c-home

In addition, I have written a tiny Python script that automates the generation of cluster lines. It is based on ICMP queries executed in parallel, which is handy when your servers are deployed in a big VLAN or there are a lot of them. In these cases, we can run the script to find the servers.

# ./cssh-clusterline-generator.py -L 200 -H 250 -d mot -n 10.40.140 >> ~/.clusters

# mot-10.40.140-range-10-150
mot-10.40.140-range-10-150 10.40.140.17 10.40.140.19 10.40.140.32 10.40.140.37

Finally, … the script:

import os
from threading import Thread
from optparse import OptionParser

class Thread_(Thread):
    def __init__(self, ip):
        Thread.__init__(self)
        self.ip = ip
        self.status = -1

    def run(self):
        # One ICMP echo request; status 0 means the host answered
        self.status = os.system("ping -c 1 %s > /dev/null" % self.ip)

threads_ = []
ips = ""

parser = OptionParser()
parser.add_option("-n", "--net", dest="network", default="10.121.55",
                  help="Class C network", metavar="NETWORK")
parser.add_option("-L", "--lowrange", dest="lowrange", default="1",
                  help="Low range", metavar="LOW")
parser.add_option("-H", "--highrange", dest="highrange", default="254",
                  help="High range", metavar="HIGH")
parser.add_option("-d", "--deploy", dest="deploy", default="Net",
                  help="Deploy name", metavar="DEPLOY")
parser.add_option("-v", "--verbose", dest="verbose",
                  default=False, action="store_true",
                  help="Verbose mode")

(options, args) = parser.parse_args()

low_range = int(options.lowrange)
high_range = int(options.highrange)
net = options.network
deploy_id = options.deploy
verbose = options.verbose

# Ping the whole range in parallel, one thread per address
for i in range(low_range, high_range + 1):
    ip = net + "." + str(i)
    h = Thread_(ip)
    threads_.append(h)
    h.start()

count = 0
for h in threads_:
    h.join()
    if h.status == 0:
        count = count + 1
        if verbose:
            print "Looking for host %s ... FOUND" % h.ip
        ips += h.ip + " "
    elif verbose:
        print "Looking for host %s ... not found" % h.ip

if verbose:
    print "Finished. %s hosts found" % count

print ""
print "# " + deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range)
line = deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range) + " " + ips
print line