Wednesday, November 9, 2016

Heron master branch and Java topologies with Maven

Heron uses Bazel as its build tool, so the jar files it creates are not installed into Maven automatically. If you are trying to develop a topology against the latest Heron source code, using the Heron API jars from the online Maven repositories may not be an option. In that case you can build Heron and install the jars manually into the local Maven repository to get your project building.

The important jar for the project is heron-storm.jar. After building Heron, this jar can be found under HERON_SRC/bazel-genfiles/heron/storm/src/java.

Now let's install this jar manually into the local Maven repository.
cd HERON_SRC/bazel-genfiles/heron/storm/src/java
mvn install:install-file -DcreateChecksum=true -Dpackaging=jar -Dfile=heron-storm.jar -DgroupId=com.twitter.heron -DartifactId=heron-storm -Dversion=VERSION

You can give a placeholder version such as 0.14.2-SNAPSHOT when installing the jar file.

Now we can include this jar in our topology as a Maven dependency.
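For example, assuming the placeholder version 0.14.2-SNAPSHOT used above, the dependency entry in the topology's pom.xml would look something like this (a sketch; adjust the version to whatever you passed to -Dversion):

```xml
<dependency>
  <groupId>com.twitter.heron</groupId>
  <artifactId>heron-storm</artifactId>
  <version>0.14.2-SNAPSHOT</version>
</dependency>
```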

Tuesday, November 8, 2016

Twitter Heron libunwind issues on Redhat/CentOS

If your Twitter Heron build fails on Redhat/CentOS with the following error, here is how to fix it.

First, here is the error I got while trying to build the Heron 0.14.2 source code.

config.status: executing libtool commands
configure: error: in `/tmp/gperftools.69skq/gperftools-2.4':
configure: error: No frame pointers and no libunwind. The compilation will fail
See `config.log' for more details
Now let's see how to fix this.

First we need to install libunwind. In my case I didn't have permission to install libunwind on this cluster using the yum package manager, so I installed it into my home directory from source.
$ wget
$ tar -xvf libunwind-1.1.tar.gz
$ cd libunwind-1.1
$ ./configure --prefix=$HOME
$ make
$ make install

Now let's change the gperftools build in Heron.
vi third_party/gperftools/BUILD

Change the configure command in the genrule section so that it points to the libunwind installation in the home directory (here /N/u/skamburu):

cmd = "\n".join([
        "export INSTALL_DIR=$$(pwd)/$(@D)",
        "export TMP_DIR=$$(mktemp -d -t gperftools.XXXXX)",
        "mkdir -p $$TMP_DIR",
        "cp -R $(SRCS) $$TMP_DIR",
        "cd $$TMP_DIR",
        "tar xfz " + package_file,
        "cd " + package_dir,
        "./configure --prefix=$$INSTALL_DIR --enable-shared=no LIBS=\"-Wl,--rpath -Wl,/N/u/skamburu/lib -L/N/u/skamburu/lib\" CPPFLAGS=-I/N/u/skamburu/include LDFLAGS=-L/N/u/skamburu/lib",
        "make install",
        "rm -rf $$TMP_DIR",
Now you should be good to compile Heron.

Tuesday, November 1, 2016

Deploying Heron topologies using SLURM

In this post we will explore how to deploy a Heron streaming topology on a HPC cluster using the SLURM scheduler.

First let's install Heron. Some of the core parts of Heron are written in C++, so it is important to get the correct binary for the OS of your HPC cluster. If there are no suitable Heron binaries for your HPC environment, you can always build Heron from source. The Heron documentation discusses how to compile from source in different environments in great detail.

Let's say we have a Heron build that works in the environment. Now let's install it. HPC environments usually have a shared file system such as NFS, and you can use a shared location to install Heron.

Install Heron

We need to install the Heron client and tools packages. The Heron client provides all the functionality required to run a topology; Heron tools provides facilities such as the UI for viewing topologies.

In this setup, a deploy folder in the home directory is used to install Heron. The home directory is shared across the cluster. Note that we are using the binaries built from source.

cd /N/u/skamburu/deploy
mkdir heron
sh ./  --prefix=/N/u/skamburu/deploy/heron
sh ./  --prefix=/N/u/skamburu/deploy/heron

You can add the heron bin directory to the PATH environment variable.

export PATH=$PATH:/N/u/skamburu/deploy/heron/bin
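A quick sanity check that the bin directory actually made it onto the PATH (the prefix here is the install location from above; substitute your own):

```shell
# Install prefix from the setup above; adjust to your own location.
HERON_PREFIX=/N/u/skamburu/deploy/heron
export PATH=$PATH:$HERON_PREFIX/bin
# Confirm the bin directory is now on the PATH.
echo "$PATH" | grep -q "$HERON_PREFIX/bin" && echo "heron bin is on the PATH"
```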

Run Topology

Now let's run an example topology shipped with Heron using the SLURM scheduler.

cd /N/u/skamburu/deploy/heron/bin
./heron submit slurm /N/u/skamburu/deploy/heron/heron/examples/heron-examples.jar com.twitter.heron.examples.MultiSpoutExclamationTopology example
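The general shape of this command is heron submit &lt;cluster&gt; &lt;topology-jar&gt; &lt;main-class&gt; &lt;topology-name&gt;. Broken out with variables for clarity (all values are the ones from the example above; the sketch only prints the command, since actually running it needs a real cluster):

```shell
# Pieces of the heron submit invocation; paths/names are from the example above.
HERON_BIN=/N/u/skamburu/deploy/heron/bin
CLUSTER=slurm                       # scheduler/cluster to submit to
TOPOLOGY_JAR=/N/u/skamburu/deploy/heron/heron/examples/heron-examples.jar
MAIN_CLASS=com.twitter.heron.examples.MultiSpoutExclamationTopology
TOPOLOGY_NAME=example               # name shown in the tracker/UI
# Print the full command rather than executing it.
echo "$HERON_BIN/heron submit $CLUSTER $TOPOLOGY_JAR $MAIN_CLASS $TOPOLOGY_NAME"
```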

Heron UI

After running the example topology, let's start the Heron Tracker and UI. Before starting the tracker, make sure to change the tracker configuration to point to the slurm state location.

vi /N/u/skamburu/deploy/heron/herontools/conf/heron_tracker.yaml

    type: "file"
    name: "local"
    rootpath: "~/.herondata/repository/state/slurm"
    tunnelhost: "localhost"
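For context, these keys live under the statemgrs list in heron_tracker.yaml; the full entry looks roughly like this (a sketch; the rootpath matches the slurm state location used above):

```yaml
statemgrs:
  -
    type: "file"
    name: "local"
    rootpath: "~/.herondata/repository/state/slurm"
    tunnelhost: "localhost"
```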

Now let's start the tracker and UI.

cd /N/u/skamburu/deploy/heron/bin
./heron-tracker &
./heron-ui &

This will start the Heron UI on port 8889. Since this is an HPC cluster, the ports are usually blocked by the firewall, so we forward them to the local machine in order to view the UI from the desktop.

ssh -i ~/.ssh/id_rsa -L 8889:localhost:8889 user@cluster

Now we can view the UI in the browser through the forwarded port.


Error handling

The Heron job is submitted to the SLURM scheduler using a bash script. This script provides only minimal configuration for SLURM; you can modify it to suit your environment. For example, on one cluster we had to specify the SLURM partition in the script, so we added it. Here is an example of the slurm script.
vi /N/u/skamburu/deploy/heron/heron/conf/slurm/

#!/usr/bin/env bash

# arg1: the heron executable
# arg2: arguments to executable

#SBATCH --ntasks-per-node=1
#SBATCH --time=00:30:00
#SBATCH --partition=delta

module load python

ONE=1

for i in $(seq 1 $SLURM_NNODES); do
    index=`expr $i - $ONE`
    echo "Exec" $1 $index ${@:3}
    srun -lN1 -n1 --nodes=1 --relative=$index $1 $index ${@:2} &
done

echo $SLURM_JOB_ID >


If your job gets cancelled before you kill it with the heron kill command, you may have to delete some files manually before submitting it again. Usually you can delete the files under the topology's directory in ~/.herondata:

rm -rf ~/.herondata/topologies/slurm/skamburu/example

Killing the topology

To kill the topology, use the heron kill command with the same cluster and topology name used at submit time.
cd /N/u/skamburu/deploy/heron/bin
./heron kill slurm example