You're reading from Instant Pentaho Data Integration Kitchen

Published in Jul 2013 by Packt | ISBN-13: 9781849696906 | 1st Edition

Author: Sergio Ramazzina
Sergio Ramazzina is an experienced software architect/trainer with more than 25 years of experience in the IT field. He has worked on a large number of projects for banks and major Italian companies and has designed complex enterprise solutions in Java, JavaEE, and Ruby. He started using Pentaho products from the very beginning in late 2003. He gained thorough experience by deploying Pentaho as an open source BI solution, standalone or deeply integrated in other applications as the analytical engine of choice. In 2009, due to his experience in the Java/JavaEE world and appreciation for the open source world and its main ideas, he began participating actively as a contributor to some of the Pentaho projects such as JPivot, Saiku, CDF, and CDA and rose to the Pentaho Active Contributor level. At that time, he started participating as a BI architect and Pentaho expert on a wide range of projects where open source BI and Pentaho were the main players. In late 2010, he founded Serasoft, a young Italian consulting firm that specializes in delivering high value open source Business Intelligence solutions. With the team in Serasoft, he shared his passion and experience in designing and delivering highly innovative enterprise solutions to help users make their work more effective. In July 2013, he published his first book, Instant Pentaho Data Integration Kitchen, with Packt Publishing. He is also passionate about skiing, tennis, and photography, and he loves his young daughter, Camilla, very much. You can follow him on Twitter at @sramazzina. You can also look at his profile on LinkedIn at http://it.linkedin.com/in/sramazzina/.
Scheduling PDI jobs and transformations (Intermediate)


ETL processes are batch processes: you launch them and collect the results at the end, with no user interaction during execution. ETL processes can often take quite some time to run, either because they work with a lot of data or because they implement complex processing rules. It is therefore a good idea to schedule them to run when the system load is low. This last recipe guides you through scheduling PDI jobs and transformations on Unix/Linux and Windows. As usual, everything applies to both jobs and transformations; the only difference is the name of the script file to be scheduled.

Getting ready

To get ready for this recipe, you need to check that the JAVA_HOME environment variable is set properly and then configure your environment variables so that the Kitchen script can start from anywhere without specifying the complete path to your PDI home directory. For details about these checks, refer to the recipe Executing PDI jobs from a filesystem (Simple).
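As a minimal sketch of those checks (the messages and fallback wording below are just examples, not part of the PDI distribution), you can run:

```shell
# Sanity checks before scheduling anything with cron
if [ -n "$JAVA_HOME" ]; then
    java_status="JAVA_HOME=$JAVA_HOME"
else
    java_status="JAVA_HOME is not set -- Kitchen will not start"
fi
echo "$java_status"

# Verify that the Kitchen script is reachable without its full path
if command -v kitchen.sh >/dev/null 2>&1; then
    echo "kitchen.sh found on PATH"
else
    echo "kitchen.sh not on PATH -- add your PDI home directory to PATH"
fi
```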

How to do it...

To schedule ETL processes on Linux, use the steps that follow:

  1. On Linux, ETL processes are normally scheduled with the cron scheduler. Adding a schedule is very easy; understanding the syntax of cron schedules takes a little more care.

  2. To add a new schedule, you need to edit the crontab file; to do this, type the following command from the Linux command line:

    crontab -e
    
  3. Depending on the distribution, you can also use graphical tools to do the same, but, as I always suggest, it is better to learn the command-line approach, which gives you complete freedom and portability.

  4. To schedule a process, you need to specify the time at which the command will run. To do this, there is a particular syntax that lets you specify the minute, the hour, the day of the month, the month, and the day of the week. The meaning of each field is as follows:

    • Minute: The minute of the hour (0-59)

    • Hour: The hour of the day (0-23)

    • Month day: The day of the month (1-31)

    • Month: The month of the year (1-12)

    • Weekday: The day of the week (0-6, 0 = Sunday)

  5. You can follow some simple rules to specify ranges, multiple values, or wildcards for any of the preceding fields, summarized as follows:

    • Define a range by giving the start and end values of the interval, separated by a hyphen (-)

    • Define multiple values by separating the single values with a comma

    • Use an asterisk (*) to match any value for a field

  6. So, let's suppose that we want to schedule a job in the <book_samples>/sample1 directory called export-job.kjb to execute at 20 and 50 minutes past every hour, every day except on weekends. To do this, you need to add a new schedule by typing the following line:

    20,50 * * * 1-5 <pdi_home>/kitchen.sh -file <book_samples>/sample1/export-job.kjb
  7. After you have typed the line, save and quit the crontab editor; if the default editor is vi, press the Esc key, type :wq, and press the Enter key. The schedule becomes available immediately. As you can see, you need to specify the complete path both to the PDI scripts and to the export-job.kjb job. Of course, scheduling a Pan script is just a matter of changing the script name.

  8. Further references about how to schedule tasks with the cron scheduler can be found on Wikipedia at http://en.wikipedia.org/wiki/Cron.
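To recap, here are a few more crontab entries built with the preceding rules; the command is a placeholder to be replaced with your own Kitchen or Pan invocation:

```
# minute  hour  day-of-month  month  day-of-week  command
0         2     *             *      *            <command>   # every day at 02:00
0,30      *     *             *      *            <command>   # every half hour
0         6     1             *      *            <command>   # at 06:00 on the 1st of each month
0         22    *             *      0,6          <command>   # at 22:00 on Saturday and Sunday
```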

To schedule ETL processes in Windows from Task Scheduler, use the steps that follow:

  1. Go to Control Panel | Administrative Tools | Task Scheduler.

  2. Select Create a Basic Task from the menu on the right.

  3. The Create Basic Task Wizard dialog will be displayed. Enter the task name (for example, Example PDI job). Click on Next.

  4. Select the schedule you want to apply. You can either select the schedule based on the calendar or on an event. Let us suppose our schedule is based on the calendar and we want our program to be scheduled daily. Select the Daily radio button and click on Next.

  5. Insert the time you want the task to start and its recurrence interval (Recur every). We want, for example, the task to run every day at 4:00 A.M. Click on Next.

  6. Select the action you want to accomplish; in our case, we select Start a program. Click on Next.

  7. In the Program/script field, insert the full path and the name of the script you want to start (Kitchen.bat or Pan.bat). In the Add arguments field, insert the usual information you pass along to the Kitchen or Pan script to start your job or transformation properly. Click on Next.

  8. At the end, a summary dialog will open showing you all the parameters you inserted to completely define your schedule.

  9. Click on Finish.

  10. Your job will now appear in the list of waiting jobs.

To schedule ETL processes in Windows using the command line, use the following steps:

  1. The at command works much like cron, so it is very easy to schedule our job with Kitchen or our transformation with Pan.

  2. Let's suppose that we want to schedule a job in the <book_samples>\sample1 directory called export-job.kjb to be executed every day at 8:00 A.M., with the exception of weekends. To do this, you need to add a new schedule by typing the following command:

    at 8:00 /every:M,T,W,Th,F <pdi_home>\Kitchen.bat /file:<book_samples>\sample1\export-job.kjb
    
  3. After you type the command, the schedule immediately becomes available. As you can see, in this case, you need to specify the complete path to the PDI scripts and to the export-job.kjb job.
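As a side note, on recent versions of Windows the at command has been deprecated in favor of schtasks. An equivalent schedule, using the same placeholder paths, would look roughly like this:

```
schtasks /create /tn "PDI export job" /sc weekly /d MON,TUE,WED,THU,FRI /st 08:00 /tr "<pdi_home>\Kitchen.bat /file:<book_samples>\sample1\export-job.kjb"
```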

There's more...

For Linux/Mac users, an interesting point is that whenever we execute a process through a cron schedule, we can get into trouble with our environment variable settings. Let's see a little trick that solves this easily and without any pain.

Understanding crontab malfunctions

For Linux/Mac users, a crontab schedule for our application may appear not to work properly. The reason for this is that cron passes only a minimal set of environment variables to our application. To see this for yourself, you can add a dummy job that writes the environment variables to a file, as in the following example:

* * * * * env > /tmp/env.log

To work around this, a first, simple practice is to set all of the environment variables your script needs inside the script itself. If you are going to launch an application, wrap it in a little script that sets all of the required environment variables.
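Such a wrapper could be sketched as follows; every path here is a placeholder, so adjust JAVA_HOME, the PDI home directory, and the job path to your installation:

```shell
#!/bin/sh
# run-export-job.sh -- wrapper that sets the environment cron does not
# provide, then launches Kitchen. All paths below are examples.
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/default-java}"
export PATH="$JAVA_HOME/bin:$PATH"

PDI_HOME="${PDI_HOME:-/opt/pdi}"
JOB="$HOME/book_samples/sample1/export-job.kjb"

# Launch Kitchen only if it is actually there, logging output to a file
if [ -x "$PDI_HOME/kitchen.sh" ]; then
    "$PDI_HOME/kitchen.sh" -file "$JOB" >> /tmp/export-job.log 2>&1
else
    echo "kitchen.sh not found in $PDI_HOME" >&2
fi
```

The crontab entry then shrinks to a single call to the wrapper, for example 20,50 * * * 1-5 /path/to/run-export-job.sh.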

In case you don't want to redefine all of your environment variables, another option is to add the following in the crontab entry before your command:

. $HOME/.profile

An example of this is as follows:

0 5 * * * . $HOME/.profile; /path/to/command/to/run

This is a simple way to make every environment variable defined for your user available to the scheduled command, without having to respecify them in your script one by one.
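A tiny demonstration of the same mechanism follows; it sources a throwaway profile-like file so that its variables become visible to the commands after it, exactly as ". $HOME/.profile" does in the crontab entry above. The variable name and value are just examples:

```shell
# Create a throwaway "profile" that exports one variable
profile=$(mktemp)
echo 'export PDI_HOME=/opt/pdi' > "$profile"

# ". file" sources it into the current shell, like ". $HOME/.profile"
. "$profile"
echo "PDI_HOME=$PDI_HOME"    # prints PDI_HOME=/opt/pdi

rm -f "$profile"
```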
