Building RPM from Git with Koji

If you have followed my other articles about Koji, you should have a fully working setup by now. However, it's not very handy to only build local SRPMs.
Fortunately, Koji can also build RPMs from spec files and Makefiles that it fetches from a Git repository or another SCM. Read on to learn how to get that going.

In /etc/kojid/kojid.conf:

; using any other command instead of "make sources". Example showing "fedpkg sources"
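
A hedged sketch of the relevant kojid.conf entry. The allowed_scms option takes host:path patterns with optional extra fields (use_common, then an alternative source command); the hostname and path below are placeholders:

```
allowed_scms=git.example.com:/repos/*:no:fedpkg,sources
```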

Read more

Encrypting an HDD as a folder on Linux with LUKS

If you want to encrypt and secure your personal confidential data on Linux, here is how to do it.

The following method explains how to encrypt a hard disk or partition and mount it as a folder anywhere in your filesystem. There are other possibilities as well, such as using a file as an encrypted container or encrypting your whole system partition.
We will be using dm-crypt + LUKS (Linux Unified Key Setup-on-disk-format), which is a block-device-level encryption scheme, just like TrueCrypt.

First you need to install some dependencies:

# you need EPEL repo installed for this
yum install cryptsetup-luks pv
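
With the dependencies in place, the core steps can be sketched like this. This is a hedged sketch, assuming /dev/sdb1 is the partition to encrypt and /mnt/secure the mount point (both placeholders); luksFormat destroys any existing data on the partition:

```shell
# format the partition as a LUKS container (prompts for a passphrase, DESTROYS existing data)
cryptsetup luksFormat /dev/sdb1

# open the container under a name of your choice; it appears as /dev/mapper/securedata
cryptsetup luksOpen /dev/sdb1 securedata

# create a filesystem inside the container and mount it as a folder
mkfs.ext4 /dev/mapper/securedata
mkdir -p /mnt/secure
mount /dev/mapper/securedata /mnt/secure
```

Unmount with umount /mnt/secure followed by cryptsetup luksClose securedata.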

Read more

Playing with YUM API

I was migrating from one server to another, and because I love Python, I try to involve it in each and every script I write.
So I saved all of my installed packages in a text file called installed.txt, and using the yum Python API, I managed to install all the needed packages as easily as this:
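
A minimal sketch of what that can look like, assuming installed.txt holds one package name per line. The yum module is only available in the system Python on RHEL/CentOS, so treat this as a hedged sketch rather than copy-paste ready:

```python
import yum  # yum's Python API; available in the system Python on RHEL/CentOS

yb = yum.YumBase()

# queue every package listed in installed.txt
with open('installed.txt') as f:
    for line in f:
        pkg = line.strip()
        if pkg:
            yb.install(name=pkg)

# resolve dependencies and run the transaction (download + install)
yb.resolveDeps()
yb.processTransaction()
```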

Read more

Setting up a Simple Upstart Service

Services are important when you want to monitor a job or keep it running on your server.
Upstart makes it heavenly easy to add a new service and takes care of controlling it.
Here is an example of how to set up a very simple service and control it afterwards:

# Ubuntu / Debian
cd /etc/init
touch mysimpleservice.conf
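
A hedged sketch of what mysimpleservice.conf could contain; the exec path is a placeholder for whatever job you want to keep running:

```
description "my simple service"

# start on normal runlevels, stop on halt/single-user/reboot
start on runlevel [2345]
stop on runlevel [016]

# restart the job automatically if it dies
respawn

exec /usr/local/bin/mysimpleservice
```

Afterwards you control it with start mysimpleservice, stop mysimpleservice and status mysimpleservice.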

Read more

Compiling your own Kernel for Debian and CentOS (or the like)

For various reasons you might need to (re)compile your own kernel, for instance if the kernel installed by your distribution does not support a certain feature you need.
I recently discovered that the kernels OVH provides for their servers do NOT support loadable modules. Even the "original" kernels they provide don't. So you need to compile your own kernel to use things like cryptsetup (dm-crypt).

Fortunately, compiling your own kernel is easy if you know how to do it.

Necessary packages

# RHEL / Fedora / CentOS
yum groupinstall "Development Tools"
yum install ncurses ncurses-devel

# Ubuntu / Debian
apt-get install lzma kernel-package debhelper fakeroot build-essential libtool automake make gcc ncurses ncurses-dev
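
With the packages installed, the build itself can be sketched like this. Hedged: the kernel version is a placeholder, and the loadable-module support mentioned above is toggled under "Enable loadable module support" in menuconfig:

```shell
cd /usr/local/src
# fetch and unpack the kernel sources (version is a placeholder)
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.x.y.tar.xz
tar xf linux-3.x.y.tar.xz
cd linux-3.x.y

# configure the kernel; optionally start from the running kernel's config:
# cp /boot/config-$(uname -r) .config
make menuconfig

# RHEL / Fedora / CentOS: build and install directly
make -j"$(nproc)"
make modules_install
make install

# Ubuntu / Debian alternative: build installable .deb packages instead
# fakeroot make-kpkg --initrd kernel_image kernel_headers
```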

Read more

iptables rules for NAT with FTP active / passive connections

If you have an FTP server running behind a server that acts as the gateway or firewall, here are the rules to enable full NAT for active and passive connections.

# general rules for forwarding traffic between external interface tap0 and internal interface eth0
iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE
iptables -A FORWARD -i tap0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tap0 -j ACCEPT

# NAT for active/passive FTP. would be your internal ftp server
iptables -t nat -A PREROUTING  -p tcp  --dport 20 -j DNAT --to
iptables -t nat -A PREROUTING  -p tcp  --dport 21 -j DNAT --to
iptables -t nat -A PREROUTING  -p tcp  --dport 1024:65535 -j DNAT --to
iptables -A FORWARD -s -p tcp --sport 20 -j ACCEPT
iptables -A FORWARD -s -p tcp --sport 21 -j ACCEPT
iptables -A FORWARD -s -p tcp --sport 1024:65535 -j ACCEPT

# allowing active/passive FTP
iptables -A OUTPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED,NEW -j ACCEPT
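
One hedged addition: for the RELATED matches above to recognize FTP data connections, the kernel's FTP connection-tracking helpers usually need to be loaded as well:

```shell
# FTP conntrack/NAT helper modules (names vary slightly between kernel versions,
# e.g. ip_conntrack_ftp / ip_nat_ftp on older kernels)
modprobe nf_conntrack_ftp
modprobe nf_nat_ftp
```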

Read more

Spacewalk vs. Katello

When managing a lot of systems (virtual or physical), it makes sense to centralize the package management. It also saves you a lot of time.

Spacewalk does exactly that for RPM-based systems like CentOS, Fedora or SLES. It's the community, open-source version of the Red Hat Network Satellite product (RHN). It brings you a lot of nice features, like:

  • Systems inventory with hardware and software info (DMI)
  • Centralized package management. Installing / Updating software on systems (single/grouped/batch)
  • Errata overview for systems (security/bugfixes/enhancements)
  • Kickstart / Provision systems
  • Audit
  • Basic config file distribution (better done with Puppet/Chef)
  • Basic monitoring (better done with Munin/Graphite/Ganglia…)

Read more

ERROR: [ipv6_set_default_route] Given IPv6 default gateway ‘fe80::1’ is link-local

ERROR: [ipv6_set_default_route] Given IPv6 default gateway ‘fe80::1’ is link-local, but no scope or gateway device is specified

If you encounter this error on CentOS 6, then remove IPV6_DEFAULTGW=fe80::1 from /etc/sysconfig/network-scripts/ifcfg-eth0 and do a

service network restart

Your IPv6 networking should still be working, but the error should be gone.
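
The fix can also be scripted. A sketch, assuming the gateway line appears exactly as above:

```shell
# drop the link-local default gateway line (keeps a .bak backup), then restart networking
sed -i.bak '/^IPV6_DEFAULTGW=fe80::1/d' /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
```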

Installing Hadoop-LZO compression module RPMs

Recently I wrote about compiling and installing LZO support for Hadoop.

But now I found RPMs for this from Cloudera. Strangely, they aren't mentioned anywhere but here: "Installing-and-Using-Impala". So it's much simpler to install now.

# this is for RHEL/CentOS . For Debian, Ubuntu, SLES/SUSE see
cd /etc/yum.repos.d/ && wget
yum install hadoop-lzo-cdh4 hadoop-lzo-cdh4-mr1

It's really as simple as that! It installs LZO for MapReduce and for Hadoop to /usr/lib/hadoop/lib/ and /usr/lib/hadoop-0.20-mapreduce/lib, and also to the native/ paths.

Check my older blog post for the necessary configuration settings that are left to do. Just skip the compilation part.

Compiling and installing Hadoop-LZO compression support module

If you want to benefit from splittable LZO compression in Hadoop, you have to build it yourself. Due to licensing reasons, the module isn't shipped with Apache Hadoop or Cloudera's distribution.

LZO compression is significantly faster than the other compression codecs in Hadoop, such as Snappy or GZip.

The original work is located at however there are two improved forks that are kept in sync with it. We will be using Todd Lipcon's fork; he works at Cloudera, so his fork is the closest to the CDH4 stack.

The reason I am writing this short article is that all the installation articles for hadoop-lzo I found were not as short as they could be. So here we go:

yum install lzo-devel ant java-1.7.0-openjdk-devel gcc
cd /usr/local/src
git clone git://
cd hadoop-lzo
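
From there the build is typically done with ant. Hedged: the JAVA_HOME path is a placeholder for wherever your JDK landed, and compile-native tar is the build target the hadoop-lzo fork provides:

```shell
# point the build at the JDK installed above (path is a placeholder)
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export CFLAGS=-m64 CXXFLAGS=-m64

# build the Java library plus the native bindings; results land under build/
ant compile-native tar
```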

Read more