
ANNOUNCE: ParBASH 0.1 release - parallel processing in BASH

From: Milenko Petrovic
Subject: ANNOUNCE: ParBASH 0.1 release - parallel processing in BASH
Date: Mon, 20 Jul 2009 07:45:08 -0700 (PDT)
User-agent: G2/1.0


I'd like to announce the release of version 0.1 of ParBASH. Using
ParBASH, you can write bash scripts that are automatically
parallelized on SMP, multicore, and distributed systems using Apache
Hadoop.
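For intuition on what automatic parallelization saves you, here is a
plain-bash sketch of the manual alternative: splitting an input across
workers and merging the results. This is illustrative only (file names
and sample data are hypothetical); ParBASH's goal is to derive this
kind of plan, and its Hadoop equivalent, from an ordinary pipeline.

```shell
# Manual data-parallel grep: split input, run one worker per chunk, merge.
rm -f /tmp/chunk.*
printf '%s\n' apple Obama banana Obama > /tmp/input.txt

split -l 2 /tmp/input.txt /tmp/chunk.          # two 2-line chunks

for c in /tmp/chunk.*; do
  grep Obama "$c" > "$c.out" &                 # one background worker per chunk
done
wait                                           # barrier: all workers done

cat /tmp/chunk.*.out                           # merge partial results
```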

Here is an example script that finds the top 10 referenced domains for
Barack Obama pages on Wikipedia, run on Amazon EC2:


cat hdfs:/wikipedia-out/* | grep Obama | \
perl -ne 'while (/<link type="external" href="([^"]+)">/g) { print "$1\n"; }' | \
perl -ne 'if (/http:\/\/([^\/]+)(\/|$)/) { print "$1\n"; }' | \
perl -ne '
  if (/([^\.]\.)+([^\.]+\.[a-zA-Z]{2,3}\.[^\.]+)$/) { print "$2\n"; }
  elsif (/([^\.]+\.[a-zA-Z]{2,3}\.[^\.]+)$/) { print "$1\n"; }
  elsif (/([^\.]\.)*([^\.]+\.[^\.]+)$/) { print "$2\n"; }' | \
sort | uniq -c > hdfs:/out

The how and why of wiki.sh and ParBASH:

Source code and more examples:

If you would like to try compiling the code and playing around with
it, please contact me; I can help you get started.

