Month: September 2010
Facebook gives a nod to the 503
I didn't think it was fair to let today, September 23rd 2010, go by without mentioning the little dedication Facebook made to this blog. For roughly 20 minutes, the Facebook website greeted its visitors with the following error message:
Service Unavailable – DNS failure
The server is temporarily unable to service your request. Please try again later.
Reference #11.dc7e7a5c.1285272379.79da1
From my humble couch at home, I just want to say... thanks for the reference :-P.
On the Facebook developers site you could follow the evolution of the incident:
Current Status: API Latency Issues
We are currently experiencing latency issues with the API, and we are actively investigating. We will provide an update when either the issue is resolved or we have an ETA for resolution.
... but that was not the only place where the state of the outage was followed very closely. The curious, and perhaps even slightly painful, detail is that the downtime of this social network (Facebook) was widely followed by its own users on the other big social network, the microblogging one: Twitter.
Chardet: encoding auto detect for Python
Last week I was fighting against the hordes of character encodings. My new task in the job task pool is to develop a friendly web app to manage the configuration files of cluster apps. Oh, great idea! Why didn't I think of it before? (irony). I only need four things: a parser for each conffile syntax, a stable and secure way to get/save remote files, handling of an uncontrolled variety of character encodings... and a fancy GUI. As you can guess, the task is not exactly a small app, but this post is only about a very useful Python lib (chardet) that I discovered a few days ago. This lib can auto-detect the encoding of a file quite reliably. I suggest you visit the chardet homepage to see some clear examples.
Using it in a couple of lines of code:
import io
import chardet

# Read the file as raw bytes so chardet can inspect them
s = io.open('channels_info', 'rb')
r = s.read()
s.close()

# For example: fileencoding="iso-8859-15"
fe = chardet.detect(r)['encoding']

# Decode the raw content with the detected encoding
text = r.decode(fe, 'replace')
print text
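For big files it may be worth using chardet's incremental detector, chardet.universaldetector.UniversalDetector, which can be fed the file chunk by chunk and stops as soon as it is confident. A minimal sketch, reusing the same example file name as above (check the chardet docs for the exact behaviour of your version):
from chardet.universaldetector import UniversalDetector

# Feed the detector line by line and stop early when it is done
detector = UniversalDetector()
f = open('channels_info', 'rb')
for line in f:
    detector.feed(line)
    if detector.done:
        break
f.close()
detector.close()
print detector.result['encoding']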
Object factory using the Python import mechanism
Today, something related to Python programming... These snippets were rescued from a forgotten repository on my old Inspiron 6000.
Speaking plainly, this post is about an app (its purpose is not relevant here) whose internal handler classes can be set at configuration time. This idea is nice when you want to minimize hard-wired hookups in your app. For example, it is an easy solution when, in a hypothetical future, a third developer wants to extend your app with a new handler. In that scenario, the developer only has to build the handler as an external module and make it available on the Python path.
First step: I build my Factory class. These factory objects create DataSource objects:
class DataSourceFactory (object):

    def create (self, handlerClassname, object_uid=None, key_value=None, \
                handler_options={}):
        ...
        modulename, classname = handlerClassname.rsplit('.', 1)
        module = __import__(modulename, {}, {}, classname)
        handler_class = getattr (module, classname)
        ds_handler = handler_class()
        for k, v in handler_options.items():
            setattr(ds_handler, k, eval(v))
        ds = DataSource(ds_handler)
        ds.key_value = key_value
        ds.object_uid = object_uid
        return ds
Note two things in the previous code:
- The create function receives a string with the classname of the handler.
- I use getattr and __import__ (object reflection) to instantiate the handler class named in that parameter; a sketch of such an external handler module follows below.
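Just to illustrate the extension point, this is roughly what an external handler module written by a third developer could look like. The module path mycompany/datasources.py, the class name and its attributes are purely hypothetical; the only contract the factory imposes is that the class can be instantiated with no arguments and that its attributes can be filled in via setattr() from the config options:
# Hypothetical external module: mycompany/datasources.py

class CsvDataSourceHandler(object):

    def __init__(self):
        # Attributes set later by DataSourceFactory via setattr()
        self.path = None
        self.delimiter = ';'

    def read(self):
        # Hypothetical method: return the records of the data source
        f = open(self.path)
        try:
            return [line.rstrip('\n').split(self.delimiter) for line in f]
        finally:
            f.close()
With that module on the Python path, datasource_handler in the config only needs to point at 'mycompany.datasources.CsvDataSourceHandler'.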
The classname, in my app, is set in the app configuration file. This file uses the standard Python config file syntax:
[origin]
datasource_handler='syncldap.datasources.LdapDataSourceHandler'
key_value='sid'
object_uid='sid'
...
These configs are loaded into the app using RawConfigParser:
def create_Synchronizer(self, config_filename):
    # RawConfigParser does not interpolate attribute values
    cfg = self.ConfigParser.RawConfigParser()
    cfg.readfp(file(config_filename))

    # DataSourceFactory
    data_source_factory = self.datasources.DataSourceFactory()

    # Load class name of origin handler
    origin_data_source_handler_classname = \
        eval (cfg.get('origin','datasource_handler'))
    # For example: 'syncldap.datasources.LdapDataSourceHandler'

    # Load origin options
    origin_handler_options = dict (cfg.items('opt:origin_handler'))
    origin_key_value = eval \
        (cfg.get('origin','key_value'))
    origin_object_uid = eval \
        (cfg.get('origin','object_uid'))

    # Creating origin source
    origin_source = \
        data_source_factory.create(origin_data_source_handler_classname, \
                                   origin_object_uid, origin_key_value, \
                                   origin_handler_options)
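The DataSource class itself does not appear in these snippets; judging only from how the factory uses it, a minimal stub would look something like this (a sketch, not the real class):
class DataSource(object):

    def __init__(self, handler):
        # The reflected handler instance built by the factory
        self.handler = handler
        # Filled in afterwards by DataSourceFactory.create()
        self.key_value = None
        self.object_uid = None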
ClusterSSH: A GTK parallel SSH tool
For some time now I have been using an amazing admin tool named ClusterSSH (aka cssh).
With this tool (packages are available at least for Debian-like GNU/Linux distributions), we
can interact with a whole cluster of servers simultaneously. This is very useful
when you are doing occasional tasks on similar servers
(for example, the nodes of a Tomcat cluster) and you want to execute the same instructions
on all of them.
My cssh config file (~/.csshrc) looks very much like the default settings:
auto_quit=yes
command=
comms=ssh
console_position=
extra_cluster_file=~/.clusters <<<<<<<<<
history_height=10
history_width=40
key_addhost=Control-Shift-plus
key_clientname=Alt-n
key_history=Alt-h
key_paste=Control-v
key_quit=Control-q
key_retilehosts=Alt-r
max_host_menu_items=30
method=ssh
mouse_paste=Button-2
rsh_args=
screen_reserve_bottom=60
screen_reserve_left=0
screen_reserve_right=0
screen_reserve_top=0
show_history=0
ssh=/usr/bin/ssh
ssh_args= -x -o ConnectTimeout=10
telnet_args=
terminal=/usr/bin/xterm
terminal_allow_send_events=-xrm '*.VT100.allowSendEvents:true'
terminal_args=
# terminal_bg_style=dark
terminal_colorize=1
terminal_decoration_height=10
terminal_decoration_width=8
terminal_font=6x13
terminal_reserve_bottom=0
terminal_reserve_left=5
terminal_reserve_right=0
terminal_reserve_top=5
terminal_size=80x24
terminal_title_opt=-T
title=CSSH
unmap_on_redraw=no
use_hotkeys=yes
window_tiling=yes
window_tiling_direction=right
The ~/.clusters file is where the concrete clusters are defined (see the man page):
# home cluster
c-home tor@192.168.1.10 pablo@192.168.1.11

# promox-10.40.140
promox-10.40.140 10.40.140.17 10.40.140.18 10.40.140.19 10.40.140.33

# kvm-10.41.120
kvm-10.41.120 10.41.120.17 10.41.120.18
When I want to work with the c-home cluster, I execute cssh as follows:
# cssh c-home
In addition, I have written a tiny Python script that automates the generation of the cluster lines. It is based on ICMP queries executed in parallel. This is handy when your servers are deployed in a big VLAN or when there are a lot of them. In those cases, we can run the script to find the servers:
# ./cssh-clusterline-generator.py -L 200 -H 250 -d mot -n 10.40.140 >> ~/.clusters
# mot-10.40.140-range-10-150
mot-10.40.140-range-10-150 10.40.140.17 10.40.140.19 10.40.140.32 10.40.140.37
Finally, … the script:
import os
from threading import Thread
from optparse import OptionParser


class Thread_(Thread):

    def __init__(self, ip):
        Thread.__init__(self)
        self.ip = ip
        self.status = -1

    def run(self):
        # One ICMP echo request per host; exit status 0 means it answered
        res = os.system("ping -c 1 %s > /dev/null" % self.ip)
        self.status = res


threads_ = []
ips = ""

parser = OptionParser()
parser.add_option("-n", "--net", dest="network", default="10.121.55",
                  help="Class C Network", metavar="NETWORK")
parser.add_option("-L", "--lowrange", dest="lowrange", default="1",
                  help="Low range", metavar="LOW")
parser.add_option("-H", "--highrange", dest="highrange", default="254",
                  help="High range", metavar="HIGH")
parser.add_option("-d", "--deploy", dest="deploy", default="Net",
                  help="Deploy name", metavar="DEPLOY")
parser.add_option("-v", "--verbose", dest="verbose",
                  default=False, action="store_true",
                  help="Verbose mode")
(options, args) = parser.parse_args()

low_range = int(options.lowrange)
high_range = int(options.highrange)
net = options.network
deploy_id = options.deploy
verbose = options.verbose

# Launch one ping thread per address in the range
for i in range(low_range, high_range + 1):
    ip = net + "." + str(i)
    h = Thread_(ip)
    threads_.append(h)
    h.start()

# Collect the results and keep only the hosts that answered
count = 0
for h in threads_:
    res_str = "Not found"
    h.join()
    if h.status == 0:
        count = count + 1
        res_str = "FOUND"
        ips += h.ip + " "
    if verbose:
        print "Looking for host %s ... %s" % (h.ip, res_str)

if verbose:
    print "Finished. %s hosts found" % count

# Print the cluster line ready to be appended to ~/.clusters
print ""
print "# " + deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range)
line = deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range) + " " + ips
print line