
How-To Tutorials - Application Development

357 Articles

Python Multimedia: Video Format Conversion, Manipulations and Effects

Packt
10 Dec 2010
11 min read
Python Multimedia

Learn how to develop multimedia applications using Python with this practical, step-by-step guide:

Use the Python Imaging Library for digital image processing
Create exciting 2D cartoon characters using the Pyglet multimedia framework
Create GUI-based audio and video players using the QT Phonon framework
Get to grips with a primer on the GStreamer multimedia framework and use this API for audio and video processing

Installation prerequisites

We will use the Python bindings of the GStreamer multimedia framework to process video data. See Python Multimedia: Working with Audios for instructions on installing GStreamer and the other dependencies. For video processing, we will be using several GStreamer plugins not introduced earlier. Make sure that these plugins are available in your GStreamer installation by running the gst-inspect-0.10 command from the console (gst-inspect-0.10.exe for Windows XP users). Otherwise, you will need to install these plugins or use an alternative if one is available. The following is a list of the additional plugins we will use in this article:

autoconvert: Determines an appropriate converter based on the capabilities. It will be used extensively throughout this article.
autovideosink: Automatically selects a video sink to display a streaming video.
ffmpegcolorspace: Transforms the color space into a color space format that can be displayed by the video sink.
capsfilter: The capabilities filter, used to restrict the type of media data passing downstream. It is discussed extensively in this article.
textoverlay: Overlays a text string on the streaming video.
timeoverlay: Adds a timestamp on top of the video buffer.
clockoverlay: Puts the current clock time on the streaming video.
videobalance: Used to adjust the brightness, contrast, and saturation of the images. It is used in the Video manipulations and effects section.
videobox: Crops the video frames by a specified number of pixels. It is used in the Cropping section.
ffmux_mp4: Provides a muxer element for MP4 video muxing.
ffenc_mpeg4: Encodes data into MPEG4 format.
ffenc_png: Encodes data into PNG format.

Playing a video

Earlier, we saw how to play audio. Like audio, a video can be streamed in different ways. The simplest of these methods is to use the playbin plugin. Another method is to go by the basics, where we create a conventional pipeline and create and link the required pipeline elements. If we only want to play the 'video' track of a video file, then the latter technique is very similar to the one illustrated for audio playback. However, almost always, one would like to hear the audio track for the video being streamed, and there is additional work involved to accomplish this. In a representative GStreamer pipeline for video playback, the decodebin uses an appropriate decoder to decode the media data from the source element. Depending on the type of data (audio or video), it is then further streamed to the audio or video processing elements through the queue elements. The two queue elements, queue1 and queue2, act as media data buffers for the audio and video data respectively. When the queue elements are added and linked in the pipeline, the thread creation within the pipeline is handled internally by GStreamer.
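Before writing any code, you can prototype this pipeline with the gst-launch-0.10 tool. The following command is not from the original article, the input path is a placeholder, and the exact chain may need adjusting for your file's container format:

    gst-launch-0.10 filesrc location=/path/to/input.avi ! decodebin name=dec \
        dec. ! queue ! autoconvert ! video/x-raw-yuv ! ffmpegcolorspace ! autovideosink \
        dec. ! queue ! audioconvert ! autoaudiosink

The dec. references attach each branch to the dynamic pads of the named decodebin, which is exactly what the decodebin_pad_added method will do programmatically later in this section.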
Time for action – video player!

Let's write a simple video player utility. Here we will not use the playbin plugin; the use of playbin will be illustrated in a later sub-section. We will develop this utility by constructing a GStreamer pipeline. The key here is to use the queue element as a data buffer. The audio and video data needs to be directed so that it 'flows' through the audio or video processing sections of the pipeline respectively. Download the file PlayingVideo.py from the Packt website. The file has the source code for this video player utility. The following code gives an overview of the VideoPlayer class and its methods:

    import time
    import thread
    import gobject
    import pygst
    pygst.require("0.10")
    import gst
    import os

    class VideoPlayer:
        def __init__(self):
            pass
        def constructPipeline(self):
            pass
        def connectSignals(self):
            pass
        def decodebin_pad_added(self, decodebin, pad):
            pass
        def play(self):
            pass
        def message_handler(self, bus, message):
            pass

    # Run the program
    player = VideoPlayer()
    thread.start_new_thread(player.play, ())
    gobject.threads_init()
    evt_loop = gobject.MainLoop()
    evt_loop.run()

As you can see, the overall structure of the code and the main program execution code remain the same as in the audio processing examples. The thread module is used to create a new thread for playing the video. The method VideoPlayer.play is run on this thread. gobject.threads_init() is an initialization function for facilitating the use of Python threading within the gobject modules. The main event loop for executing this program is created using gobject, and this loop is started by the call evt_loop.run().

Instead of the thread module, you can make use of the threading module as well. The code to use it will be something like:

    import threading
    threading.Thread(target=player.play).start()

You will need to replace the line thread.start_new_thread(player.play, ()) in the earlier code snippet with line 2 illustrated in this note's snippet. Try it yourself!

Now let's discuss a few of the important methods, starting with self.constructPipeline:

    1 def constructPipeline(self):
    2     # Create the pipeline instance
    3     self.player = gst.Pipeline()
    4
    5     # Define pipeline elements
    6     self.filesrc = gst.element_factory_make("filesrc")
    7     self.filesrc.set_property("location",
    8         self.inFileLocation)
    9     self.decodebin = gst.element_factory_make("decodebin")
    10
    11     # audioconvert for audio processing pipeline
    12     self.audioconvert = gst.element_factory_make(
    13         "audioconvert")
    14     # Autoconvert element for video processing
    15     self.autoconvert = gst.element_factory_make(
    16         "autoconvert")
    17     self.audiosink = gst.element_factory_make(
    18         "autoaudiosink")
    19
    20     self.videosink = gst.element_factory_make(
    21         "autovideosink")
    22
    23     # As a precaution, add a video capability filter
    24     # in the video processing pipeline.
    25     videocap = gst.Caps("video/x-raw-yuv")
    26     self.filter = gst.element_factory_make("capsfilter")
    27     self.filter.set_property("caps", videocap)
    28     # Converts the video from one colorspace to another
    29     self.colorSpace = gst.element_factory_make(
    30         "ffmpegcolorspace")
    31
    32     self.videoQueue = gst.element_factory_make("queue")
    33     self.audioQueue = gst.element_factory_make("queue")
    34
    35     # Add elements to the pipeline
    36     self.player.add(self.filesrc,
    37                     self.decodebin,
    38                     self.autoconvert,
    39                     self.audioconvert,
    40                     self.videoQueue,
    41                     self.audioQueue,
    42                     self.filter,
    43                     self.colorSpace,
    44                     self.audiosink,
    45                     self.videosink)
    46
    47     # Link elements in the pipeline.
    48     gst.element_link_many(self.filesrc, self.decodebin)
    49
    50     gst.element_link_many(self.videoQueue, self.autoconvert,
    51                           self.filter, self.colorSpace,
    52                           self.videosink)
    53
    54     gst.element_link_many(self.audioQueue, self.audioconvert,
    55                           self.audiosink)

We have used several of the elements defined in this method in various audio processing applications. First, the pipeline object, self.player, is created. The self.filesrc element specifies the input video file. This element is connected to a decodebin. On line 15, the autoconvert element is created. It is a GStreamer bin that automatically selects a converter based on the capabilities (caps). It translates the decoded data coming out of the decodebin into a format playable by the video device. Note that before reaching the video sink, this data travels through a capsfilter and the ffmpegcolorspace converter. The capsfilter element is defined on line 26. It is a filter that restricts the allowed capabilities, that is, the type of media data that will pass through it. In this case, the videocap object defined on line 25 instructs the filter to allow only video/x-raw-yuv capabilities.

The ffmpegcolorspace is a plugin that has the ability to convert video frames to a different color space format. At this time, it is necessary to explain what a color space is. A variety of colors can be created by the use of basic colors. Such colors form what we call a color space. A common example is the RGB color space, where a range of colors can be created using a combination of red, green, and blue. A color space conversion is a representation of a video frame or an image from one color space in another, done in such a way that the converted video frame or image is a close representation of the original. The video can be streamed even without using the combination of the capsfilter and ffmpegcolorspace; however, the video may appear distorted, so it is recommended to use the capsfilter and the ffmpegcolorspace converter. Try linking the autoconvert element directly to the autovideosink to see if it makes any difference.

Notice that we have created two sinks, one for the audio output and the other for the video. The two queue elements are created on lines 32 and 33. As mentioned earlier, these act as media data buffers and are used to send the data to the audio and video processing portions of the GStreamer pipeline. The code block on lines 35-45 adds all the required elements to the pipeline. Next, the various elements in the pipeline are linked. As we already know, the decodebin is a plugin that determines the right type of decoder to use. This element uses dynamic pads. While developing audio processing utilities, we connected the pad-added signal from the decodebin to the method decodebin_pad_added. We will do the same thing here; however, the contents of this method will be different. We will discuss that later. On lines 50-52, the video processing portion of the pipeline is linked. The self.videoQueue receives the video data from the decodebin. It is linked to the autoconvert element discussed earlier. The capsfilter allows only video/x-raw-yuv data to stream further. The capsfilter is linked to the ffmpegcolorspace element, which converts the data into a different color space. Finally, the data is streamed to the videosink, which, in this case, is an autovideosink element. This enables the 'viewing' of the input video. Now we will review the decodebin_pad_added method.
    1 def decodebin_pad_added(self, decodebin, pad):
    2     compatible_pad = None
    3     caps = pad.get_caps()
    4     name = caps[0].get_name()
    5     print "\n cap name is = %s" % name
    6     if name[:5] == 'video':
    7         compatible_pad = (
    8             self.videoQueue.get_compatible_pad(pad, caps) )
    9     elif name[:5] == 'audio':
    10         compatible_pad = (
    11             self.audioQueue.get_compatible_pad(pad, caps) )
    12
    13     if compatible_pad:
    14         pad.link(compatible_pad)

This method captures the pad-added signal, emitted when the decodebin creates a dynamic pad. Here the media data can represent either audio or video data. Thus, when a dynamic pad is created on the decodebin, we must check what caps this pad has. The get_name method of the caps object returns the type of media data handled. For example, the name can be of the form video/x-raw-rgb for video data or audio/x-raw-int for audio data. We just check the first five characters to see whether it is a video or audio media type. This is done by the code block on lines 4-11 in the code snippet. The decodebin pad with the video media type is linked with the compatible pad on the self.videoQueue element. Similarly, the pad with audio caps is linked with the one on self.audioQueue.

Review the rest of the code from PlayingVideo.py. Make sure you specify an appropriate video file path for the variable self.inFileLocation and then run this program from the command prompt as:

    $ python PlayingVideo.py

This should open a GUI window where the video will be streamed. The audio output will be synchronized with the playing video.

What just happened?

We created a command-line video player utility. We learned how to create a GStreamer pipeline that can play synchronized audio and video streams. It explained how the queue element can be used to process the audio and video data in a pipeline. In this example, the use of GStreamer plugins such as capsfilter and ffmpegcolorspace was illustrated. The knowledge gained in this section will be applied in the upcoming sections of this article.

Playing video using 'playbin'

The goal of the previous section was to introduce you to the fundamental method of processing input video streams. We will use that method one way or another in future discussions. If video playback is all you want, then the simplest way to accomplish it is by means of the playbin plugin. The video can be played just by replacing the VideoPlayer.constructPipeline method in the file PlayingVideo.py with the following code. Here, self.player is a playbin element. The uri property of playbin is set to the input video file path.

    def constructPipeline(self):
        self.player = gst.element_factory_make("playbin")
        self.player.set_property("uri",
                                 "file:///" + self.inFileLocation)
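For reference, a complete, self-contained playbin player can be as small as the following sketch. This is not from the original article; the input path is a placeholder, and the class name is illustrative:

    import thread
    import gobject
    import pygst
    pygst.require("0.10")
    import gst

    class PlaybinPlayer:
        def __init__(self, inFileLocation):
            self.player = gst.element_factory_make("playbin")
            self.player.set_property("uri",
                                     "file:///" + inFileLocation)
            # Watch the pipeline bus so we can stop on end-of-stream or error.
            bus = self.player.get_bus()
            bus.add_signal_watch()
            bus.connect("message", self.message_handler)

        def play(self):
            self.player.set_state(gst.STATE_PLAYING)

        def message_handler(self, bus, message):
            if message.type in (gst.MESSAGE_EOS, gst.MESSAGE_ERROR):
                self.player.set_state(gst.STATE_NULL)
                evt_loop.quit()

    # Placeholder path; point this at a real video file.
    player = PlaybinPlayer("/path/to/video.avi")
    thread.start_new_thread(player.play, ())
    gobject.threads_init()
    evt_loop = gobject.MainLoop()
    evt_loop.run()

Because playbin builds the decode, convert, and sink elements internally, all the pipeline construction from the earlier listing collapses into the two lines in __init__.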


Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationships are hard to maintain even for a mid-sized PHP/MySQL application, particularly when multiple levels of relationships are involved, because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping', or ORM, to handle database relationships with ease. In CakePHP, relations between the database tables are defined through associations, a way to represent database table relationships inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use their wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity: no need to write complex SQL queries with multiple JOINs anymore!

In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn:

How to figure out association types from database table relations
How to define different types of associations in CakePHP models
How to utilize the associations for fetching related model data
How to relate associated data while saving

There are basically three types of relationships that can take place between database tables:

one-to-one
one-to-many
many-to-many

The first two of them are simple, as they don't require any additional table to relate the tables in the relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations.

Defining a One-To-Many Relationship in Models

To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books, where the relation between authors and books is one-to-many. This means an author can have multiple books but a book belongs to only one author (which is rather absurd, as in a real-life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly.

Time for Action: Defining a One-To-Many Relation

1. Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like, but rename the cake folder to relationship.
2. Configure the database in the new Cake installation.
3. Execute the following SQL statements in the database to create a table named authors:

    CREATE TABLE `authors` (
      `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
      `name` varchar(127) NOT NULL,
      `email` varchar(127) NOT NULL,
      `website` varchar(127) NOT NULL
    );

4. Create a books table in our database by executing the following SQL commands:

    CREATE TABLE `books` (
      `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
      `isbn` varchar(13) NOT NULL,
      `title` varchar(64) NOT NULL,
      `description` text NOT NULL,
      `author_id` int(11) NOT NULL
    );

5. Create the Author model using the following code (/app/models/author.php):

    <?php
    class Author extends AppModel {
        var $name = 'Author';
        var $hasMany = 'Book';
    }
    ?>

6. Use the following code to create the Book model (/app/models/book.php):

    <?php
    class Book extends AppModel {
        var $name = 'Book';
        var $belongsTo = 'Author';
    }
    ?>

7. Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

    <?php
    class AuthorsController extends AppController {
        var $name = 'Authors';
        var $scaffold;
    }
    ?>

8. Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

    <?php
    class BooksController extends AppController {
        var $name = 'Books';
        var $scaffold;
    }
    ?>

9. Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We created two tables, authors and books, for storing author and book information. A foreign key named author_id was added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key is the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named: as an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model; as a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model. In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations work.
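To see what these associations buy us beyond scaffolding, a controller action along the following lines retrieves an author together with all of that author's books in one call. This is a sketch, not from the original excerpt; the action name is hypothetical, and it assumes the tables above contain some test data:

    <?php
    // Hypothetical action inside AuthorsController.
    function view($id) {
        // Because of the hasMany association, find() returns the author row
        // together with an array of the associated book rows automatically.
        $author = $this->Author->find('first', array(
            'conditions' => array('Author.id' => $id)
        ));
        // $author['Author'] holds the author's fields;
        // $author['Book'] holds one entry per related book.
        $this->set('author', $author);
    }
    ?>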


Tiered Application Architecture with Docker Compose, Part 3

Darwin Corn
08 Aug 2016
6 min read
This is the third part in a series that introduces you to basic web application containerization and deployment principles. If you're new to the topic, I suggest reading Part 1 and Part 2. In this post, I attempt to take the training wheels off and focus on using Docker Compose. Speaking of training wheels, I rode my bike with training wheels until I was six or seven. So in the interest of full disclosure, I have to admit that to a certain degree I'm still riding the containerization wave with my training wheels on. That's not to say I'm not fully using container technology. Before transitioning to the cloud, I had a private registry running on a Git server that my build scripts pushed to and pulled from to automate deployments. Now, we deploy and maintain containers in much the same way as I've detailed in the first two parts in this series, and I take advantage of the built-in registry covered in Part 2. Either way, for our use case, a multi-tiered application architecture was just overkill. Adding to that, when we were still doing contract work, Docker was just getting 1.6 off the ground. Now that I'm working on a couple of projects where this will be a necessity, I'm thankful that Docker has expanded their offerings to include tools like Compose, Machine, and Swarm. This post will provide a brief overview of a multi-tiered application setup with Docker Compose, so look for future posts to deal with the latter two. Of course, you can just hold out for a mature Kitematic and do it all in a GUI, but you probably won't be reading this post if that applies to you.

All three of these Docker extensions are relatively new, and so the entirety of this post is subject to a huge disclaimer: even Docker hasn't fully developed these extensions to be production-ready for large or intricate deployments. If you're looking to do that, you're best off holding out for my post on alternative deployment options like CoreOS and Kubernetes. But that's beyond the scope of what we're looking at here, so let's get started.

First, you need to install the binary. Since this is Part 3, I'm going to assume that you have the Docker Engine already installed somewhere. If you're on Mac or Windows, the Docker Toolbox you used to install it also contained an option to install Compose. I'm going to assume your daily driver is a Linux box, so these instructions are for Linux. Fortunately, the installation should just be a couple of commands: curling it from the web and making it executable:

    # curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    # chmod +x /usr/local/bin/docker-compose
    # docker-compose -v

That last command should output version info if you've installed it correctly. For some reason, the linked installation doc thinks you can run that chmod as a regular user. I'm not sure of any distro that lets regular users write to /usr/local/bin, so I ran them both as root. Docker has its own security issues that are beyond the scope of this series, but I suggest reading about them if you're using this in production. My lazy way around it is to run every Docker-related command elevated, and I'm sure someone will let me have it for that in the comments. It seems like a better policy than making /usr/local/bin writeable by anyone other than root. Now that you have Compose installed, let's look at how to use it to coordinate and deploy a layered application.
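Before we touch the real project, here is the general shape of a compose file for a three-tier application, written in the version-1 YAML syntax that Compose 1.6 accepts. This is an illustrative sketch only: the service names and the postgres image tag are placeholders, and the actual docker-compose.yml in the repo we clone below differs in its details.

    # Databases first, then the backend, then the frontend.
    postgres:
      image: postgres:9.5
      env_file: .env
    backend:
      build: ./taiga-back
      links:
        - postgres      # containers talk over links; nothing published to the host
      env_file: .env
    frontend:
      build: ./taiga-front
      links:
        - backend
      ports:
        - "8080:80"     # the only port published to the host machine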
I'm abandoning my sample music player of the previous two posts in favor of something that has already separated its functionality, namely the Taiga project. If you're not familiar with it, it's a slick, flat JIRA-killer, and the best part is that it's open source with a thorough installation guide. I've done the heavy lifting, so all you have to do is clone the docker-taiga repo into wherever you keep your source code and get to Composin':

    $ git clone https://github.com/ndarwincorn/docker-taiga.git
    $ cd docker-taiga

You'll notice a few things. In the root of the app, there's an .env file where you can set all the environment variables in one place. Next, there are two folders with taiga- prefixes. They correspond to the layers of the application, from the Angular frontend to the websocket and backend Django server. Each contains a Dockerfile for building the container, as well as relevant configuration files. There's also a docker-entrypoint-initdb.d folder that contains a shell script that creates the Taiga database when the postgres container is built. Having covered container creation in Part 1, I'm more concerned with the YAML file in the root of the application, docker-compose.yml. This file coordinates the container/image creation for the application, and a full reference can be found on Docker's website. Long story short, the compose YAML file gives the containers a creation order (databases, backend/websocket, frontend) and links them together, so that ports exposed in each container don't need to be published to the host machine. So, from the root of the application, let's run the following and see what happens:

    # docker-compose up

Provided there are no errors, you should be able to navigate to localhost:8080 and see your new Taiga deployment! You should be able to log in with the admin user and the password 123123. Of course, there's much more to do: configure automated e-mails, link it to your GitHub organization, configure TLS. I'll leave that as an exercise for you. For now, enjoy your brand-new layered project management application. Of course, if you're deploying such an application for an organization, you don't want all your eggs in one basket. The next two parts in the series will deal with leveraging Docker tools and alternatives to deploy the application in a clustered, high-availability setup.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.


Validating and Using the Model Data

Packt
07 May 2013
14 min read
Declarative validation

It's easy to set up declarative validation for an entity object to validate the data that is passed through the metadata file. Declarative validation is validation added to an attribute or an entity object to fulfill a particular business rule. It is called declarative because we don't write any code to achieve it; all the business validations are achieved declaratively. The entity object holds the business rules that are defined to fulfill specific business needs, such as a range check for an attribute value, or a check that the attribute value provided by the user is a valid value from a defined list. The rules are incorporated to maintain a standard way of validating the data.

Knowing the lifecycle of an entity object

It is important to know the lifecycle of an entity object before looking at the validation that is applied to it. The lifecycle proceeds as follows. When a new row is created using an entity object, the status of the entity is set to NEW. When an entity is initialized with some values, the status changes from NEW to INITIALIZED. At this time, the entity is marked invalid, or dirty; this means that the state of the entity differs from the value that was last checked against the database. The status of an entity changes to UNMODIFIED, and the entity is marked valid, after the validation rules are applied and the row is committed to the database. When the value of an unmodified entity is changed, the status changes to MODIFIED and the entity is marked dirty again. The modified entity returns to the UNMODIFIED state when it is saved to the database. When an entity is removed from the database, the status changes to DELETED. When the change is committed, the status changes to DEAD.

Types of validation

Validation rules are applied to an entity to make sure that only valid values are committed to the database and to prevent any invalid data from being saved. In ADF, we use validation rules on the entity object to make sure the row is valid at all times. There are three types of validation rules that can be set for entity objects:

Entity-level validation
Attribute-level validation
Transaction-level validation

Entity-level validation

As we know, an entity represents a row in the database table. Entity-level validation is a business rule added to the database row as a whole. There are two declarative validators that are available only for entity-level validation: Collection and UniqueKey. In the book's diagram, an entity-level validation is applied to a single row in the EMP table, with the validated row highlighted in bold.

Attribute-level validation

Attribute-level validations are applied to individual attributes. Business logic mostly involves specific validations that compare different attribute values or restrict an attribute to a specific range; these kinds of checks are done in attribute-level validation. Some of the declarative validators available in ADF are Compare, Length, and Range. The Precision and Mandatory attribute validations are added by default to the attributes from the column definition in the underlying database table; for these, we can only set the display message.
There can be any number of validations defined on a single attribute or on multiple attributes in an entity. For example, Empno can have a validation that is different from the validation defined for Ename, and the validation for the Job attribute can differ from that for the Sal attribute. Similarly, we can define validations for the other attributes in the entity object.

Transaction-level validation

Transaction-level validations are performed after all entity-level validations are completed. If you want to add any kind of validation at the end of the process, you can defer the validation to the transaction level to ensure that it is performed only once.

Built-in declarative validators

ADF Business Components includes some built-in validators to support and apply validations for entity objects. The Business Rules section of the Overview tab for the EmpEO.xml file lists all the validations for the EmpEO entity; in the book's screenshot, no entity-level validators are defined yet, and some attribute-level validations are listed in the Attributes folder.

Collection validator

A Collection validator is available only for entity-level validation. We use it to perform operations such as average, min, max, count, and sum over a collection of rows. A Collection validator is comparable to a GROUP BY operation in an SQL query combined with a validation: aggregate functions such as count, sum, min, and max are applied to validate the entity row. The validator can operate against a literal value, an expression, a query result, and so on. You must have an association accessor to add a collection validation.

Time for action – adding a collection validator for the DeptEO file

Now we will add a Collection validator to DeptEO.xml to implement a count validation rule. Imagine a business rule that says that the number of employees added to department number 10 should be more than five. In this case, you will have a count operation on the employees added to department number 10 and show a message if the count is less than five for that department. We will break this action into the following three parts:

Adding a declarative validation: in this case, the rule that the number of employees added to the department should be greater than five
Specifying the execution rule: in our case, the validation should fire only for department number 10
Displaying the error message: we have to show an error message to the user stating that the number of employees added to the department is less than five

Adding the validation

Following are the steps to add the validation:

1. Go to the Business Rules section of DeptEO.xml. You will find the Business Rules section in the Overview tab.
2. Select Entity Validators and click on the + button. You may also right-click on the Entity Validators folder and then select New Validator to add a validator.
3. Select Collection as the Rule Type and move on to the Rule Definition tab.
4. In this section, select Count for the Operation field. Accessor is the association accessor that gets added through a composition association relationship; only composition association accessors will be listed in the Accessor drop-down menu. Select the accessor for EmpEO listed in the dropdown, with Empno as the value for Attribute.
In order to create a composition association accessor, you will have to create an association between DeptEO.xml and EmpEO.xml based on the Deptno attribute, with a cardinality of 1 to *. The Composition Association option has to be selected to enable a composition relationship between the two entities.

5. The value of the Operator option should be set to Greater Than. Compare With should be a literal value, and the value 5 can be entered in the Enter Literal Value section below.

Specifying the execution rule

Following are the steps to specify the execution rule:

1. To set the execution rule, move to the Validation Execution tab.
2. In the Conditional Execution section, add Deptno = '10' as the value for Conditional Execution Expression.
3. In the Triggering Attribute section, select the Execute only if one of the Selected Attributes has been changed checkbox.
4. Move the Empno attribute to the Selected Attributes list. This will make sure that the validation fires only if the Empno attribute is changed.

Displaying the error message

Following are the steps to display the error message:

1. Go to the Failure Handling section and select the Error option for Validation Failure Severity.
2. In the Failure Message section, enter the following text: Please enter more than 5 Employees

You can add a message stored in a resource bundle to Failure Message by clicking on the magnifying glass icon.

What just happened?

We have added a collection validation for our DeptEO.xml object. Every time a new employee is added to the department, the validation rule fires, as we have selected Empno as our triggering attribute. The rule is also validated against the condition we provided, to check whether the department number is 10. If the department number is 10, the count for that department is calculated. When the user is ready to commit the data to the database, the rule is validated to check whether the count is greater than five. If the number of employees added is less than five, the error message is displayed to the user. When we add a collection validator, the entity's XML file gets updated with the appropriate entries. The following entries get added for the aforementioned validation:

    <validation:CollectionValidationBean
        Name="EmpEO_Rule_0"
        ResId="com.empdirectory.model.entity.EmpEO_Rule_0"
        OnAttribute="Empno"
        OperandType="LITERAL"
        Inverse="false"
        CompareType="GREATERTHAN"
        CompareValue="5"
        Operation="count">
      <validation:OnCondition>
        <![CDATA[Deptno = '10']]>
      </validation:OnCondition>
    </validation:CollectionValidationBean>

    <ResourceBundle>
      <PropertiesBundle
          PropertiesFile="com.empdirectory.model.ModelBundle"/>
    </ResourceBundle>

The error message that was added in the Failure Handling section is automatically added to the resource bundle.

The Compare validator

The Compare validator is used to compare the current attribute value with other values. The attribute value can be compared against a literal value, query result, expression, view object attribute, and so on. The supported operators are equal, not-equal, less-than, greater-than, less-than or equal to, and greater-than or equal to.

The Key Exists validator

This validator is used to check whether a key value exists for an entity object. The key value can be a primary key, foreign key, or alternate key. The Key Exists validator looks for the key in the entity cache first, and only if the key is not found there is the value determined from the database. For this reason, the Key Exists validator is considered to give better performance.
For example, when an employee is assigned to department number 50, the Key Exists validator can make sure that deptNo 50 already exists in the DEPT table.

The Length validator

This validator is used to check the string length of an attribute value. The comparison is based on the character or byte length.

The List validator

This validator is used to validate an attribute against a list. The operators included in this validation are In and NotIn; these two operators let the rule check whether an attribute value is, or is not, in a list.

The Method validator

Sometimes, we would like to add our own validation with some extra logic coded in a Java class file. For this purpose, ADF provides a declarative validator that maps the validation rule to a method in the entity implementation class. The implementation class is generated in the Java section of the entity object. We need to create and select a method to handle the method validation. The method is named validateXXX() and returns a Boolean value.

The Range validator

This validator is used to add a rule that validates a range for the attribute value. The operators included are Between and NotBetween. The range has a minimum and a maximum value that can be entered for the attribute.

The Regular Expression validator

For example, let us consider a validation rule to check whether the e-mail ID provided by the user is in the correct format. For e-mail validation, we have some common rules, such as the following:

The e-mail ID should start with a string and contain an @ character
The e-mail ID's last character cannot be the dot (.) character
Two @ characters are not allowed within an e-mail ID

For this purpose, ADF provides a declarative Regular Expression validator. We can use a regex pattern to check the value of the attribute. The e-mail address and US phone number patterns are provided by default:

    Email: [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
    Phone Number (US): [0-9]{3}-?[0-9]{3}-?[0-9]{4}

You should select the required pattern and then click on the Use Pattern button to use it. Matches and NotMatches are the two operators included with this validator.

The Script validator

If we want to include an expression to validate a business rule, the Script validator is the best choice. ADF supports Groovy expressions to provide script validation for an attribute.

The UniqueKey validator

This validator is available only for entity-level validation. To check for uniqueness in a record, we use this validator. If we have a primary key defined for the entity object, the Uniqueness Check Definition section will list the primary keys defined to check for uniqueness. If we have to perform a uniqueness check against any attribute other than the primary key attributes, we will have to create an alternate key for the entity object.

Time for action – creating an alternate key for DeptEO

Currently, the DeptEO.xml file has Deptno as the primary key. We will add a business validation that states that there should be no way to create a duplicate of a department name that already exists. The following steps show how to create an alternate key:

1. Go to the General section of the DeptEO.xml file and expand the Alternate Keys section. Alternate keys are keys that are not part of the primary key.
2. Click on the little + icon to add a new alternate key.
3. Move the Dname attribute from the Available list to the Selected list and click on the OK button.
What just happened?

We created an alternate key against the Dname attribute in preparation for a uniqueness-check validation on the department name. When an alternate key is added to an entity object, the AltKey attribute is listed in the Alternate Keys section of the General tab. In the DeptEO.xml file, you will find the following code added for the alternate key definition:

    <Key Name="AltKey" AltKey="true">
      <DesignTime>
        <Attr Name="_isUnique" Value="true"/>
        <Attr Name="_DBObjectName" Value="HR.DEPT"/>
      </DesignTime>
      <AttrArray Name="Attributes">
        <Item Value="com.empdirectory.model.entity.DeptEO.Dname"/>
      </AttrArray>
    </Key>

Have a go hero – compare the attributes

Now that we have learned about validations in ADF, it's time for you to create your own validations for the EmpEO and DeptEO entity objects. Add validations for the following business scenarios (for the scripted rules, see the Groovy sketch after this list):

1. Continue with the creation of the uniqueness check for the department name in the DeptEO.xml file.
2. The salary of an employee should not be greater than 1000. Display the following message otherwise: Please enter Salary less than 1000.
3. Display the message "invalid date" if the employee's hire date is after 10-10-2001.
4. The length of the value entered for Dname in DeptEO.xml should not be greater than 10 characters.
5. The location of a department can only be NEWYORK, CALIFORNIA, or CHICAGO.
6. The department name should always be entered in uppercase. If the user enters a value in lowercase, display a message.
7. The salary of an employee with the MANAGER job role should be between 800 and 1000. Display an error message if the value is not in this range.
8. The employee name should always start with an uppercase letter and should end with any character other than special characters such as :, ;, and _.

After creating all the validations, check the code and tags generated in the entity's XML file for each of them.
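For the rules that are most naturally expressed as expressions, such as the manager salary range, a Script validator with a Groovy expression of roughly the following shape is one way to start. This sketch is not from the original article, and the attribute names (Job, Sal) are assumed to match the EmpEO entity used here:

    // Groovy expression for a Script validator (sketch).
    // A true result means the row passes; false raises the failure message.
    Job != 'MANAGER' || (Sal >= 800 && Sal <= 1000)

In an ADF Groovy validation expression, entity attributes are referenced by name, and the Boolean result of the expression decides whether the failure message is shown.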


Working with WebStart and the Browser Plugin

Packt
06 Feb 2015
12 min read
In this article by Alex Kasko, Stanislav Kobylyanskiy, and Alexey Mironchenko, authors of the book OpenJDK Cookbook, we will cover the following topics:

Building the IcedTea browser plugin on Linux
Using the IcedTea Java WebStart implementation on Linux
Preparing the IcedTea Java WebStart implementation for Mac OS X
Preparing the IcedTea Java WebStart implementation for Windows

Introduction

For a long time, for end users, the Java applets technology was the face of the whole Java world. For a lot of non-developers, the word Java itself is a synonym for the Java browser plugin that allows running Java applets inside web browsers. The Java WebStart technology is similar to the Java browser plugin, but it runs remotely loaded Java applications as separate applications outside of web browsers.

The OpenJDK open source project contains implementations of neither the browser plugin nor the WebStart technology. The Oracle Java distribution, otherwise matching OpenJDK codebases closely, provides its own closed-source implementation of these technologies. The IcedTea-Web project contains free and open source implementations of the browser plugin and WebStart technologies. The IcedTea-Web browser plugin supports only GNU/Linux operating systems, while the WebStart implementation is cross-platform.

While the IcedTea implementation of WebStart is well-tested and production-ready, it has numerous incompatibilities with the Oracle WebStart implementation. These differences can be seen as corner cases; some of them are:

Different behavior when parsing not-well-formed JNLP descriptor files: the Oracle implementation is generally more lenient with malformed descriptors.
Differences in JAR (re)downloading and caching behavior: the Oracle implementation uses caching more aggressively.
Differences in sound support, due to differences in sound support between Oracle Java and IcedTea on Linux. Linux historically has multiple different sound providers (ALSA, PulseAudio, and so on), and IcedTea has wider support for different providers, which can lead to sound misconfiguration.

The IcedTea-Web browser plugin (as it is built on WebStart) has these incompatibilities too. On top of them, it can have more incompatibilities related to browser integration. User interface forms and general browser-related operations, such as access from/to JavaScript code, should work fine with both implementations. But historically, the browser plugin was widely used for security-critical applications such as online bank clients. Such applications usually require security facilities from browsers, such as access to certificate stores or hardware crypto-devices, and these can differ from browser to browser, depending on the OS (a facility may, for example, be supported only on Windows), browser version, Java version, and so on. Because of that, many real-world applications can have problems running in the IcedTea-Web browser plugin on Linux.

Both WebStart and the browser plugin are built on the idea of downloading (possibly untrusted) code from remote locations, and proper privilege checking and sandboxed execution of that code is a notoriously complex task. Security issues reported in the Oracle browser plugin (the most widely known are the issues from the year 2012) are also fixed separately in IcedTea-Web.

Building the IcedTea browser plugin on Linux

The IcedTea-Web project is not inherently cross-platform; it is developed on Linux and for Linux, and so it can be built quite easily on popular Linux distributions.
The two main parts of it (stored in corresponding directories in the source code repository) are netx and plugin. NetX is a pure Java implementation of the WebStart technology; we will look at it more thoroughly in the following recipes of this article. Plugin is an implementation of the browser plugin using the NPAPI plugin architecture, which is supported by multiple browsers. Plugin is written partly in Java and partly in native code (C++), and it officially supports only Linux-based operating systems.

There exists an opinion about NPAPI that this architecture is dated, overcomplicated, and insecure, and that modern web browsers have enough built-in capabilities to not require external plugins; browsers have gradually reduced support for NPAPI. Despite that, at the time of writing this book, the IcedTea-Web browser plugin worked on all major Linux browsers (Firefox and derivatives, Chromium and derivatives, and Konqueror).

We will build the IcedTea-Web browser plugin from sources using Ubuntu 12.04 LTS amd64.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 installation with the Firefox web browser installed.

How to do it...

The following procedure will help you to build the IcedTea-Web browser plugin:

1. Install the prepackaged binaries of OpenJDK 7:

    sudo apt-get install openjdk-7-jdk

2. Install the GCC toolchain and build dependencies:

    sudo apt-get build-dep openjdk-7

3. Install the specific dependency for the browser plugin:

    sudo apt-get install firefox-dev

4. Download and decompress the IcedTea-Web source code tarball:

    wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
    tar xzvf icedtea-web-1.4.2.tar.gz

5. Run the configure script to set up the build environment:

    ./configure

6. Run the build process:

    make

7. Install the newly built plugin into the /usr/local directory:

    sudo make install

8. Configure the Firefox web browser to use the newly built plugin library:

    mkdir ~/.mozilla/plugins
    cd ~/.mozilla/plugins
    ln -s /usr/local/IcedTeaPlugin.so libjavaplugin.so

9. Check whether the IcedTea-Web plugin has appeared under Tools | Add-ons | Plugins.
10. Open the http://java.com/en/download/installed.jsp web page to verify that the browser plugin works.

How it works...

The IcedTea browser plugin requires the IcedTea Java implementation to compile successfully. The prepackaged OpenJDK 7 binaries in Ubuntu 12.04 are based on IcedTea, so we installed them first. The plugin uses the GNU Autoconf build system that is common among free software tools. The xulrunner-dev package is required for access to the NPAPI headers. The built plugin may be installed into Firefox for the current user only, without requiring administrator privileges. For that, we created a symbolic link to our plugin in the place where Firefox expects to find the libjavaplugin.so plugin library.

There's more...

The plugin can also be installed into other browsers with NPAPI support, but the installation instructions can differ between browsers and Linux distributions. As the NPAPI architecture does not depend on the operating system, in theory a plugin can be built for non-Linux operating systems, but currently no such ports are planned.

Using the IcedTea Java WebStart implementation on Linux

On the Java platform, the JVM needs to perform the class load process for each class it wants to use. This process is opaque to the JVM, and the actual bytecode for loaded classes may come from one of many sources.
For example, this mechanism allows the Java applet classes to be loaded from a remote server into the Java process inside the web browser. Remote class loading may also be used to run remotely loaded Java applications in standalone mode, without integration with the web browser. This technique is called Java WebStart and was developed under Java Specification Request (JSR) number 56. To run a Java application remotely, WebStart requires an application descriptor file written using the Java Network Launching Protocol (JNLP) syntax. This file is used to define the remote server from which to load the application, along with some meta-information. A WebStart application may be launched from a web page by clicking on a JNLP link, or without the web browser using a JNLP file obtained beforehand. In either case, the running application is completely separate from the web browser, but it uses a sandboxed security model similar to that of Java applets.

The OpenJDK project does not contain a WebStart implementation; the Oracle Java distribution provides its own closed-source WebStart implementation. An open source WebStart implementation exists as part of the IcedTea-Web project. It was initially based on the NETwork eXecute (NetX) project. Contrary to the applet technology, WebStart does not require any web browser integration. This allowed developers to implement the NetX module in pure Java, without native code. For integration with Linux-based operating systems, IcedTea-Web implements the javaws command as a shell script that launches the netx.jar file with the proper arguments. In this recipe, we will build the NetX module from the official IcedTea-Web source tarball.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed.

How to do it...

The following procedure will help you to build the NetX module:

1. Install the prepackaged binaries of OpenJDK 7:

    sudo apt-get install openjdk-7-jdk

2. Install the GCC toolchain and build dependencies:

    sudo apt-get build-dep openjdk-7

3. Download and decompress the IcedTea-Web source code tarball:

    wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
    tar xzvf icedtea-web-1.4.2.tar.gz

4. Run the configure script to set up the build environment, excluding the browser plugin from the build:

    ./configure --disable-plugin

5. Run the build process:

    make

6. Install the newly built module into the /usr/local directory:

    sudo make install

7. Run the WebStart application example from the Java tutorial:

    javaws http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp

How it works...

The javaws shell script is installed into the /usr/local/* directory. When launched with a path or a link to a JNLP file, javaws launches the netx.jar file, adding it to the boot classpath (for security reasons) and providing the JNLP link as an argument.

Preparing the IcedTea Java WebStart implementation for Mac OS X

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Mac OS X. IcedTea-Web provides the javaws launcher implementation only for Linux-based operating systems. In this recipe, we will create a simple implementation of a WebStart launcher script for Mac OS X.

Getting ready

For this recipe, we will need Mac OS X Lion with Java 7 (the prebuilt OpenJDK or the Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using the instructions from the previous recipe.
How to do it...

The following procedure will help you to run WebStart applications on Mac OS X:

1. Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.
2. Test that this application can be run from the terminal using netx.jar:

    java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp

3. Create the wslauncher.sh bash script with the following contents:

    #!/bin/bash
    if [ "x$JAVA_HOME" = "x" ] ; then
        JAVA="$( which java 2>/dev/null )"
    else
        JAVA="$JAVA_HOME"/bin/java
    fi
    if [ "x$JAVA" = "x" ] ; then
        echo "Java executable not found"
        exit 1
    fi
    if [ "x$1" = "x" ] ; then
        echo "Please provide JNLP file as first argument"
        exit 1
    fi
    $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1

4. Mark the launcher script as executable:

    chmod 755 wslauncher.sh

5. Run the application using the launcher script:

    ./wslauncher.sh dynamictree_webstart.jnlp

How it works...

The netx.jar file contains a Java application that can read JNLP files and download and run the classes described in the JNLP. But for security reasons, netx.jar cannot be launched directly as an application (using the java -jar netx.jar syntax). Instead, netx.jar is added to the privileged boot classpath and is run by specifying the main class directly. This allows downloaded applications to run in sandbox mode. The wslauncher.sh script tries to find the Java executable file using the PATH and JAVA_HOME environment variables and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.sh script provides a basic solution for running WebStart applications from the terminal. To integrate netx.jar into your operating system environment properly (to be able to launch WebStart apps using JNLP links from the web browser), a native launcher or a custom platform scripting solution may be used. Such solutions lie beyond the scope of this book.

Preparing the IcedTea Java WebStart implementation for Windows

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Windows; we have already used it on Linux and Mac OS X in the previous recipes in this article. In this recipe, we will create a simple implementation of a WebStart launcher script for Windows.

Getting ready

For this recipe, we will need a version of Windows running with Java 7 (the prebuilt OpenJDK or the Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using the instructions from the previous recipe in this article.

How to do it...

The following procedure will help you to run WebStart applications on Windows:

1. Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.
2. Test that this application can be run from the terminal using netx.jar:

    java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp

3. Create a wslauncher.bat batch script. The source text repeats the Mac OS X shell script at this point; a minimal batch equivalent matching the behavior described in the How it works... section below could look like this (a sketch, not the book's exact script):

    @echo off
    rem Use JAVA_HOME when set; otherwise fall back to java from PATH.
    if "%JAVA_HOME%" == "" (
        set JAVA=java
    ) else (
        set JAVA=%JAVA_HOME%\bin\java
    )
    if "%~1" == "" (
        echo Please provide JNLP file as first argument
        exit /b 1
    )
    "%JAVA%" -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot %1

4. Run the application using the launcher script:

    wslauncher.bat dynamictree_webstart.jnlp

How it works...

The netx.jar module must be added to the boot classpath, as it cannot be run directly for security reasons. The wslauncher.bat script tries to find the Java executable using the JAVA_HOME environment variable and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.bat script may be registered as the default application for opening JNLP files. This will allow you to run WebStart applications from the web browser. But the current script will show the batch window for a short period of time before launching the application. It also does not support looking up Java executables in the Windows Registry. A more advanced script without these problems may be written using Visual Basic Script (or any other native scripting solution) or as a native executable launcher. Such solutions lie beyond the scope of this book.

Summary

In this article, we covered the configuration and installation of the WebStart and browser plugin components, which are the biggest parts of the IcedTea project.

Working with a Liferay User / User Group / Organization

Packt
04 Jun 2015
23 min read
In this article by Piotr Filipowicz and Katarzyna Ziółkowska, authors of the book Liferay 6.x Portal Enterprise Intranets Cookbook, we will cover the basic functionalities that allow us to manage the structure and users of the intranet. In this article, we will cover the following topics:

- Managing an organization structure
- Creating a new user group
- Adding a new user
- Assigning users to organizations
- Assigning users to user groups
- Exporting users

(For more resources related to this topic, see here.)

The first step in creating an intranet, beyond answering the question of who the users will be, is to determine its structure. The structure of the intranet is often a derivative of the organizational structure of the company or institution. Liferay Portal CMS provides several tools that allow mapping of a company's structure in the system. The hierarchy is built by organizations that match functional or localization departments of the company. Each organization represents one department or localization and assembles users who represent employees of these departments. However, sometimes there are other groups of employees in the company. These groups exist beyond the company's organizational structure, and can be reflected in the system by the User Groups functionality.

Managing an organization structure

Building an organizational structure in Liferay resembles the process of managing folders on a computer drive. An organization may have its suborganizations and, except for a top-level organization, it can at the same time be a suborganization of another one. This folder-like mechanism allows you to create a tree structure of organizations.

Let's imagine that we are obliged to create an intranet for a software development company. The company's headquarters is located in London. There are also two other offices, in Liverpool and Glasgow. The company is divided into finance, marketing, sales, IT, human resources, and legal departments. Employees from Glasgow and Liverpool belong to the IT department.

How to do it…

In order to create the structure described previously, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Add button.
3. Choose the type of organization you want to create (in our example, it will be a regular organization called software development company, but it is also possible to choose a location).
4. Provide a name for the top-level organization.
5. Choose the parent organization (if a top-level organization is created, this must be skipped).
6. Click on the Save button:
7. Click on the Change button and upload a file with a graphic representation of your company (for example, a logo).
8. Use the right column menu to navigate to the data sections you want to fill in with the information.
9. Click on the Save button.
10. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit Software Development Company header).
11. Click on the Actions button, located near the name of the newly created organization.
12. Choose the Add Regular Organization option.
13. Provide a name for the organization (in our example, it is IT).
14. Click on the Save button.
15. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit IT header).
16. Click on the Actions button, located near the name of the newly created organization (in our case, it is IT).
17. Choose the Add Location option.
18. Provide a name for the organization (for instance, IT Liverpool).
19. Provide a country.
20. Provide a region (if available).
21. Click on the Save button.

How it works…

Let's take a look at what we did throughout the previous recipe. In steps 1 through 6, we created a new top-level organization called software development company. With steps 7 through 9, we defined a set of attributes of the newly created organization. Starting from step 11, we created suborganizations: a standard organization (IT) and its location (IT Liverpool).

Creating an organization

There are two types of organizations: regular organizations and locations. The regular organization provides the possibility to create a multilevel structure, each unit of which can have parent organizations and suborganizations (there is one exception: the top-level organization cannot have any parent organizations). The location is a special kind of organization that allows us to provide some additional data, such as country and region. However, it does not enable us to create suborganizations. When creating the tree of organizations, it is possible to combine regular organizations and locations, where, for instance, the top-level organization will be a regular organization, and both locations and regular organizations will be used as child organizations. When creating a new organization, it is very important to choose the organization type wisely, because it is the only organization parameter that cannot be modified later.

As described previously, organizations can be arranged in a tree structure. The position of the organization in a tree is determined by the parent organization parameter, which is set when creating a new organization or when editing an existing one. If the parent organization is not set, a top-level organization is always created. There are two ways of creating a suborganization. It is possible to add a new organization by using the Add button and choosing a parent organization manually. The other way is to go to a specific organization's action menu and choose the Add Regular Organization action. When creating a new organization using this option, the parent organization parameter will be set automatically.

Setting attributes

Just like its real-world counterpart, every organization in Liferay has a set of attributes that are grouped and can be modified through the organization profile form. This form is available after clicking on the Edit button from the organization's action list (see the There's more… section). All the available attributes are divided into the following groups:

The ORGANIZATION INFORMATION group, which contains the following sections:

- The Details section, which allows us to change the organization name, parent organization, country, or region (available for locations only). The name of the organization is the only required organization parameter. It is used by the search mechanism to search for organizations. It is also a part of the URL address of the organization's sites.
- The Organization Sites section, which allows us to enable the private and public pages of the organization's website.
- The Categorization section, which provides tags and categories. They can be assigned to an organization.

IDENTIFICATION, which groups the Addresses, Phone Numbers, Additional Email Addresses, Websites, and Services sections.
MISCELLANEOUS, which consists of:

- The Comments section, which allows us to manage an organization's comments
- The Reminder Queries section, in which reminder queries for different languages can be set
- The Custom Fields section, which provides a tool to manage values of custom attributes defined for the organization

Customizing an organization's functionalities

Liferay provides the possibility to customize an organization's functionality. In the portal.properties file located in the portal-impl/src folder, there is a section called Organizations. All these settings can be overridden in the portal-ext.properties file. We mentioned that a top-level organization cannot have any parent organizations. If we look deeper into the portal settings, we can dig out the following properties:

organizations.rootable[regular-organization]=true
organizations.rootable[location]=false

These properties determine which type of organization can be created as a root organization. In many cases, users want to add a new organization type. To achieve this goal, it is necessary to set a few properties that describe the new type:

organizations.types=regular-organization,location,my-organization
organizations.rootable[my-organization]=false
organizations.children.types[my-organization]=location
organizations.country.enabled[my-organization]=false
organizations.country.required[my-organization]=false

The first property defines the list of available types. The second one denies the possibility to create an organization of this type as a root. The next one specifies the list of types that we can create as children; in our case, this is only the location type. The last two properties turn off the country list in the creation process. This option is useful when the location is not important.

Another interesting feature is the ability to customize an organization's profile form. It is possible to indicate which sections are available on the creation form and which are available on the modification form. The following properties control this feature:

organizations.form.add.main=details,organization-site
organizations.form.add.identification=
organizations.form.add.miscellaneous=

organizations.form.update.main=details,organization-site,categorization
organizations.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,services
organizations.form.update.miscellaneous=comments,reminder-queries,custom-fields

There's more…

It is also possible to modify an existing organization and its attributes and to manage its members using the actions available in the organization's Actions menu. There are several possible actions that can be performed on an organization:

- The Edit action allows us to modify the attributes of an organization.
- The Manage Site action redirects the user to the Site Settings section in Control Panel and allows us to manage the organization's public and private sites (if the organization site has already been created).
- The Assign Organization Roles action allows us to set organization roles for members of an organization.
- The Assign Users action allows us to assign users already existing in the Liferay database to the specific organization.
- The Add User action allows us to create a new user, who will be automatically assigned to this specific organization.
- The Add Regular Organization action enables us to create a new child regular organization (the current organization will be automatically set as the parent organization of the new one).
- The Add Location action enables us to create a new location (the current organization will be automatically set as the parent organization of the new one).
- The Delete action allows us to remove an organization. When removing an organization, all pages with portlets and content are also removed. An organization cannot be removed if there are suborganizations or users assigned to it.

In order to edit an organization, assign or add users, create a new suborganization (regular organization or location), or delete an organization, perform the following steps:

1. Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations.
2. Click on the Actions button, located near the name of the organization you want to modify.
3. Click on the name of the chosen action.

Creating a new user group

Sometimes, in addition to the hierarchy within the company, there are other groups of people linked by common interests or occupations, such as people working on a specific project, people occupying the same post, and so on. Such groups in Liferay are represented by user groups. This functionality is similar to LDAP user groups, where it is possible to set group permissions. One user can be assigned to many user groups.

How to do it…

In order to create a new user group, follow these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | User Groups.
2. Click on the Add button.
3. Provide the Name (required) and Description of the user group.
4. Leave the default values in the User Group Site section.
5. Click on the Save button.

How it works…

The user groups functionality allows us to create a collection of users and provide them with a public and/or private site, which contains a bunch of tools for collaboration. Unlike the organization, the user group cannot be used to produce a multilevel structure. It enables us to create non-hierarchical groups of users, which can be used by other functionalities. For example, a user group can be used as an additional information targeting tool for the announcements portlet, which presents short messages sent by authorized users (the announcements portlet allows us to direct a message to all users from a specific organization or user group). It is also possible to set permissions on a user group and decide which actions can be performed by which roles within this particular user group. It is worth noting that user groups can assemble users who are already members of organizations. This mechanism is often used when, aside from the company's organizational structure, there exist other groups of people who need a common place to store data or exchange information.

There's more…

It is also possible to modify an existing user group and its attributes and to manage its members using the actions available in the user group's Actions menu. There are several possible actions that can be performed on a user group.
They are as follows:

- The Edit action allows us to modify the attributes of a user group
- The Permissions action allows us to decide which roles can assign members of this user group, delete the user group, manage announcements, set permissions, and update or view the user group
- The Manage Site Pages action redirects the user to the site settings section in Control Panel and allows us to manage the user group's public and private sites
- The Go to the Site's Public Pages action opens the user group's public pages in a new window (if any public pages of the user group site have been created)
- The Go to the Site's Private Pages action opens the user group's private pages in a new window (if any private pages of the user group site have been created)
- The Assign Members action allows us to assign users already existing in the Liferay database to this specific user group
- The Delete action allows us to delete a user group

A user group cannot be removed if there are users assigned to it.

In order to edit a user group, set permissions, assign members, manage site pages, or delete a user group, perform these steps:

1. Go to Admin | Control panel | Users | User Groups.
2. Click on the Actions button, located near the name of the user group you want to modify:
3. Click on the name of the chosen action.

Adding a new user

Each system is created for users. Liferay Portal CMS provides a few different ways of adding users to the system, which can be enabled or disabled depending on the requirements. The first way is to allow users to create their own accounts via the Create Account form. This functionality allows all users who can enter the site containing the form to register and gain access to the designated content of the website. In this case, the system automatically assigns the default user account parameters, which indicate the range of activities that may be carried out by them in the system. The second solution (which we present in this recipe) is to reserve user account creation for the administrators, who will decide what parameters should be assigned to each account.

How to do it…

To add a new user, you need to follow these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations.
2. Click on the Add button.
3. Choose the User option.
4. Fill in the form by providing the user's details in the Email Address (Required), Title, First Name (Required), Middle Name, Last Name, Suffix, Birthday, and Job Title fields (if the Autogenerated User Screen Names option in the Portal Settings | Users section is disabled, the screen name field will be available):
5. Click on the Save button:
6. Using the right column menu, navigate to the data sections you want to fill in with the information.
7. Click on the Save button.

How it works…

In steps 1 through 5, we created a new user. With steps 6 and 7, we defined a set of attributes of the newly created user. This user is active and can already perform activities according to their memberships and roles. To understand all the mechanisms that influence the user's possible behavior in the system, we have to take a deeper look at these attributes.

User as a member of organizations, user groups, and sites

The first and most important thing to know about users is that they can be members of organizations, user groups, and sites. The range of activities performed by users within each organization, user group, or site they belong to is determined by the roles assigned to them. All the roles must be assigned for each user of an organization and site individually.
This means it is possible, for instance, to make a user the administrator of one organization and only a power user of another.

User attributes

Each user in Liferay has a set of attributes that are grouped and can be modified through the user profile form. This form is available after clicking on the Edit button from the user's actions list (see the There's more… section). All the available attributes are divided into the following groups:

USER INFORMATION, which contains the following sections:

- The Details section enables us to provide basic user information, such as Screen Name, Email Address, Title, First Name, Middle Name, Last Name, Suffix, Birthday, Job Title, and Avatar
- The Password section allows us to set a new password or force a user to change their current password
- The Organizations section enables us to choose the organizations of which the user is a member
- The Sites section enables us to choose the sites of which the user is a member
- The User Groups section enables us to choose the user groups of which the user is a member
- The Roles tab allows us to assign user roles
- The Personal Site section controls the user's personal public and private sites
- The Categorization section provides tags and categories, which can be assigned to a user

IDENTIFICATION allows us to set additional user information, such as Addresses, Phone Numbers, Additional Email Addresses, Websites, Instant Messenger, Social Network, SMS, and OpenID.

MISCELLANEOUS, which contains the following sections:

- The Announcements section allows us to set the delivery options for alerts and announcements
- The Display Settings section covers the Language, Time Zone, and Greeting text options
- The Comments section allows us to manage the user's comments
- The Custom Fields section provides a tool to manage values of custom attributes defined for the user

User site

As mentioned earlier, each user in Liferay may have access to different kinds of sites: organization sites, user group sites, and standalone sites. In addition to these, however, users may also have their own public and private sites, which can be managed by them. The user's public and private sites can be reached from the user's menu located on the dockbar (the My Profile and My Dashboard links). It is also possible to enter these sites using their addresses, which are /web/username/home and /user/username/home, respectively.

Customizing users

Liferay gives us a whole bunch of settings in portal.properties under the Users section. If you want to override some of the properties, put them into the portal-ext.properties file. It is possible to deny deleting a user by setting the following property:

users.delete=false

As in the case of organizations, there is a functionality that lets us customize sections on the creation or modification form:

users.form.add.main=details,organizations,personal-site
users.form.add.identification=
users.form.add.miscellaneous=

users.form.update.main=details,password,organizations,sites,user-groups,roles,personal-site,categorization
users.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,instant-messenger,social-network,sms,open-id
users.form.update.miscellaneous=announcements,display-settings,comments,custom-fields

There are many other properties, but we will not discuss all of them. In portal.properties, located in the portal-impl/src folder, under the Users section, it is possible to find all the settings, and every line is documented by a comment.
There's more…

Each user in the system can be active or inactive. An active user can log into their user account and use all the resources available to them within their roles and memberships. An inactive user cannot enter their account or access places and perform activities that are reserved for authorized and authenticated users only. It is worth noticing that active users cannot be deleted. In order to remove a user from Liferay, you need to deactivate them first.

To deactivate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the active user you want to deactivate.
4. Click on the Actions button located near the name of the user.
5. Click on the Deactivate button.
6. Confirm this action by clicking on the Ok button.

To activate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the inactive user you want to activate.
4. Click on the Actions button located near the name of the user.
5. Click on the Activate button.

Sometimes, when using the system, users report some irregularities or get a little confused and require assistance. You need to look at the page through the user's eyes. Liferay provides a very useful functionality that allows authorized users to impersonate another user. In order to use this functionality, perform these steps:

1. Log in as an administrator and go to Control Panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the user.
3. Click on the Impersonate user button.

See also

For more information on managing users, refer to the Exporting users recipe from this article.

Assigning users to organizations

There are several ways a user can be assigned to an organization. It can be done by editing the user account that has already been created (see the User attributes section in the Adding a new user recipe) or by using the Assign Users action from the organization's actions menu. In this recipe, we will show you how to assign a user to an organization using the option available in the organization's actions menu.

Getting ready

To go through this recipe, you will need an organization and a user (refer to the Managing an organization structure and Adding a new user recipes from this article).

How to do it…

In order to assign a user to an organization from the organization menu, follow these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the organization to which you want to assign the user.
3. Choose the Assign Users option.
4. Click on the Available tab.
5. Mark a user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

Each user in Liferay can be assigned to as many regular organizations as required and to exactly one location. When a user is assigned to an organization, they appear on the list of users of the organization. They become a member of the organization and gain access to the organization's public and private pages according to the assigned roles and permissions. As was shown in the previous recipe, when editing the list of assigned users in the organization menu, it is possible to assign multiple users. It is worth noting that, by default, an administrator can only assign the users of the organizations and suborganizations that she or he can manage.
To allow any administrator of an organization to be able to assign any user to that organization, set the following property in the portal-ext.properties file:

organizations.assignment.strict=false

In many cases, when our organizations have a tree structure, it is not necessary that a member of a child organization also has access to the ancestral ones. To disable this behavior, set the following property:

organizations.membership.strict=true

See also

- For information on how to create user accounts, refer to the Adding a new user recipe from this article
- For information on assigning users to user groups, refer to the Assigning users to a user group recipe from this article

Assigning users to a user group

In addition to being a member of an organization, each user can be a member of one or more user groups. As a member of a user group, a user can benefit from access to the user group's sites or other information directed exclusively to its members, for instance, messages sent by the Announcements portlet. A user becomes a member of the group when they are assigned to it. This assignment can be done by editing the user account that has already been created (see the User attributes description in the Adding a new user recipe) or by using the Assign Members action from the User Groups actions menu. In this recipe, we will show you how to assign a user to a user group using the option available in the User Groups actions menu.

Getting ready

To step through this recipe, first you have to create a user group and a user (see the Creating a new user group and Adding a new user recipes).

How to do it…

In order to assign a user to a user group from the User Groups menu, perform these steps:

1. Log in as an administrator and go to Admin | Control panel | Users | User Groups.
2. Click on the Actions button located near the name of the user group to which you want to assign the user.
3. Click on the Assign Members button.
4. Click on the Available tab.
5. Mark a user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

As was shown in this recipe, one or more users can be assigned to a user group by editing the list of assigned users in the user group menu. Each user assigned to a user group becomes a member of this group and gains access to the user group's public and private pages according to the assigned roles and permissions.

See also

- For information on how to create user accounts, refer to the Adding a new user recipe from this article
- For information about assigning users to organizations, refer to the Assigning users to organizations recipe from this article

Exporting users

Liferay Portal CMS provides a simple export mechanism, which allows us to export a list of all the users stored in the database or a list of all the users from a specific organization to a file.

How to do it…

In order to export the list of all users from the database to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Export Users button.

In order to export the list of all users from a specific organization to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the All Organizations tab.
3. Click on the name of the organization whose users are to be exported.
4. Click on the Export Users button.

How it works…

As mentioned previously, Liferay allows us to export users from a particular organization to a .csv file.
The .csv file contains a list of user names and corresponding e-mail addresses. It is also possible to export all the users by clicking on the Export Users button located on the All Users tab. You will find this tab by going to Admin | Control panel | Users | Users and Organizations. See also For information on how to create user accounts, refer to the Adding a new user recipe from this article For information on how to assign users to organizations, refer to the Assigning users to organizations recipe from this article Summary In this article, you have learnt how to manage an organization structure by creating users and assigning them to organizations and user groups. You have also learnt how to export users using Liferay's export mechanism. Resources for Article: Further resources on this subject: Cache replication [article] Portlet [article] Liferay, its Installation and setup [article]

Web Services in Microsoft Azure

Packt
29 Nov 2010
8 min read
A web service is not one single entity and consists of three distinct parts:

- An endpoint, which is the URL (and related information) where client applications will find our service
- A host environment, which in our case will be Azure
- A service class, which is the code that implements the methods called by the client application

A web service endpoint is more than just a URL. An endpoint also includes:

- The bindings, or communication and security protocols
- The contract (or promise) that certain methods exist, how these methods should be called, and what the data will look like when returned

A simple way to remember the components of an endpoint is A/B/C, that is, address/bindings/contract.

Web services can fill many roles in our Azure applications: from serving as a simple way to place messages into a queue, to being a complete replacement for a data access layer in a web application (also known as a Service Oriented Architecture or SOA). In Azure, web services serve as HTTP/HTTPS endpoints, which can be accessed by any application that supports REST, regardless of language or operating system.

The intrinsic web services libraries in .NET are called Windows Communication Foundation (WCF). As WCF is designed specifically for programming web services, it's referred to as a service-oriented programming model. We are not limited to using WCF libraries in Azure development, but we expect it to be a popular choice for constructing web services, being part of the .NET framework. A complete introduction to WCF can be found at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx.

When adding WCF services to an Azure web role, we can either create a separate web role instance, or add the web services to an existing web role. Using separate instances allows us to scale the web services independently of the web forms, but multiple instances increase our operating costs. Separate instances also allow us to use different technologies for each Azure instance; for example, the web form may be written in PHP and hosted on Apache, while the web services may be written in Java and hosted using Tomcat. Using the same instance helps keep our costs much lower, but in that case we have to scale both the web forms and the web services together. Depending on our application's architecture, this may not be desirable.

Securing WCF

Stored data is only as secure as the application used to access it. The Internet is stateless, and REST has no sense of security, so security information must be passed as part of the data in each request. If the credentials are not encrypted, then all requests should be forced to use HTTPS. If we control the consuming client applications, we can also control the encryption of the user credentials. Otherwise, our only choice may be to use clear text credentials via HTTPS.

For an application with a wide or uncontrolled distribution (like most commercial applications want to be), or if we are to support a number of home-brewed applications, the authorization information must be unique to the user. Part of the behind-the-services code should check whether the user making the request can be authenticated, and whether the user is authorized to perform the action. This adds additional coding overhead, but it's easier to plan for this up front.

There are a number of ways to secure web services: from using HTTPS and passing credentials with each request, to using authentication tokens in each request.
As it happens, using authentication tokens is part of the AppFabric Access Control, and we'll look more into the security for WCF when we dive deeper into Access Control.

Jupiter Motors web service

In our corporate portal for Jupiter Motors, we included a design for a client application, which our delivery personnel will use to update the status of an order and to decide which customers will accept delivery of their vehicle. For accounting and insurance reasons, the order status needs to be updated immediately after a customer accepts their vehicle. To do so, the client application will call a web service to update the order status as soon as the Accepted button is clicked. Our WCF service is interconnected with other parts of our Jupiter Motors application, so we won't see it completely in action until it all comes together. In the meantime, it will seem like we're developing blind. In reality, all the components would probably be developed and tested simultaneously.

Creating a new WCF service web role

When creating a web service, we have a choice to add the web service to an existing web role or create a new web role. This helps us deploy and maintain our website application separately from our web services. And in order for us to scale the web role independently from the worker role, we'll create our web service in a role separate from our web application.

Creating a new WCF service web role is very simple: Visual Studio will do the "hard work" for us and allow us to start coding our services. First, open the JupiterMotors project. Create the new web role by right-clicking on the Roles folder in our project, choosing Add, and then selecting the New Web Role Project… option. When we do this, we will be asked what type of web role we want to create. We will choose a WCF Service Web Role, call it JupiterMotorsWCFRole, and click on the Add button.

Because different services must have unique names in our project, a good naming convention to use is the project name concatenated with the type of role. This makes the different roles and instances easily discernible and complies with the unique naming requirement.

This is where Visual Studio does its magic. It creates the new role in the cloud project, creates a new web role for our WCF web services, and creates some template code for us. The template service created is called "Service1". You will see both a Service1.svc file and an IService1.vb file. Also, a web.config file (as we would expect to see in any web role) is created in the web role and is already wired up for our Service1 web service. All of the generated code is very helpful if you are learning WCF web services. This is what we should see once Visual Studio finishes creating the new project:

We are going to start afresh with our own services, so we can delete Service1.svc and IService1.vb. Also, in the web.config file, the following boilerplate code can be deleted (we'll add our own code as needed):

<system.serviceModel>
  <services>
    <service name="JupiterMotorsWCFRole.Service1"
             behaviorConfiguration="JupiterMotorsWCFRole.Service1Behavior">
      <!-- Service Endpoints -->
      <endpoint address="" binding="basicHttpBinding"
                contract="JupiterMotorsWCFRole.IService1">
        <!--
          Upon deployment, the following identity element should be removed
          or replaced to reflect the identity under which the deployed
          service runs. If removed, WCF will infer an appropriate identity
          automatically.
        -->
        <identity>
          <dns value="localhost"/>
        </identity>
      </endpoint>
      <endpoint address="mex" binding="mexHttpBinding"
                contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="JupiterMotorsWCFRole.Service1Behavior">
        <!-- To avoid disclosing metadata information, set the value below
             to false and remove the metadata endpoint above before
             deployment -->
        <serviceMetadata httpGetEnabled="true"/>
        <!-- To receive exception details in faults for debugging purposes,
             set the value below to true. Set to false before deployment to
             avoid disclosing exception information -->
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

Let's now add a WCF service to the JupiterMotorsWCFRole project. To do so, right-click on the project, then Add, and select the New Item... option. We now choose a WCF service and will name it ERPService.svc:

Just like the generated code when we created the web role, the ERPService.svc and IERPService.vb files were created for us, and these are now wired into the web.config file. There is some generated code in the ERPService.svc and IERPService.vb files, but we will replace this with our own code in the next section. When we create a web service, the actual service class is created with the name we specify. Additionally, an interface class is automatically created. We can specify the name for the class; however, being an interface class, it will always have its name beginning with the letter I. This is a special type of interface class, called a service contract. The service contract provides a description of what methods and return types are available in our web service.
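To make the idea of a service contract concrete, the following is a minimal sketch of what such an interface can look like in VB.NET. The UpdateOrderStatus method is a hypothetical example chosen to match the Jupiter Motors scenario, not the book's actual contract, which is built in the next section:

' A minimal sketch of a WCF service contract in VB.NET.
' UpdateOrderStatus is a hypothetical method used only for illustration.
Imports System.ServiceModel

<ServiceContract()>
Public Interface IERPService

    ' The contract promises that this method exists, how it is called,
    ' and what it returns.
    <OperationContract()>
    Function UpdateOrderStatus(ByVal orderId As Integer,
                               ByVal newStatus As String) As Boolean

End Interface

The attributes are what turn an ordinary interface into an endpoint contract: ServiceContract marks the interface as describing a service, and OperationContract exposes an individual method to clients.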

Entering People Information

Packt
24 Jun 2015
9 min read
In this article by Pravin Ingawale, author of the book Oracle E-Business Suite R12.x HRMS – A Functionality Guide, we will learn about entering a person's information in Oracle HRMS. We will understand the hiring process in Oracle; this, actually, is part of the Oracle I-Recruitment module in Oracle apps. Then we will see how to create an employee in Core HR. We will learn the concept of person types and defining person types, and we will also learn about entering information for an employee, including additional information. Let's see how to create an employee in core HR.

(For more resources related to this topic, see here.)

Creating an employee

An employee is the most important entity in an organization. Before creating an employee, the HR officer must know the date from which the employee will be active in the organization. In Oracle terminology, you can call it the employee's hire date. Apart from this, the HR officer must know basic details of the employee such as first name, last name, date of birth, and so on.

Navigate to US HRMS Manager | People | Enter and Maintain. This is the basic form, called People in Oracle HRMS, which is used to create an employee in the application. As you can see in the form, there is a field named Last, which is marked in yellow. This indicates that the field is mandatory for creating an employee record. First, you need to set the effective date on the form. You can set this by clicking on the icon, as shown in the following screenshot:

You need to enter the mandatory field data along with additional data. The following screenshot shows the data entered:

Once you enter the required data, you need to specify the action for the entered record. The action we have selected is Create Employment. The Create Employment action will create an employee in the application. There are other actions such as Create Applicant, which is used to create an applicant for I-Recruitment. The Create Placement action is used to create a contingent worker in your enterprise. Once you select this action, it will prompt you to enter the person type of this employee, as in the following screenshot. Select the Person Type as Employee and save the record. We will see the concept of person types in the next section.

Once you select the employee person type and then save the record, the system will automatically generate the employee number for the person. In our case, the system has generated the employee number 10160. So now, we have created an employee in the application.

Concept of person types

In any organization, you need to identify different types of people. Here, you can say that you need to group different types of people. There are basically three types of people you capture in an HRMS system. They are as follows:

- Employees: These include current employees and past employees. Past employees are those who were part of your enterprise earlier and are no longer active in the system. You can call them terminated or ex-employees.
- Applicants: If you are using I-Recruitment, applicants can be created.
- External people: Contact is a special category of the external type. Contacts are associated with an employee or an applicant. For example, there might be a need to record the name, address, and phone number of an emergency contact for each employee in your organization. There might also be a need to keep information on dependents of an employee for medical insurance purposes or for some payments in payroll processing.

Using person types

There are predefined person types in Oracle HRMS.
You can add more person types as per your requirements. You can also change the names of existing person types when you install the system. Let's take an example for your understanding. Your organization has employees. There might be employees of different types; you might have regular employees and employees who are contractors in your organization. Hence, you can categorize the employees in your organization into two types:

- Regular employees
- Consultants

The reason for creating these categories is to easily identify the employee type and store different types of information for each category. Similarly, if you are using I-Recruitment, then you will have candidates. Hence, you can categorize candidates into two types. One will be an internal candidate and the other an external candidate. Internal candidates will be employees within your organization who can apply for an opening within your organization. An external candidate is an applicant who does not work for your organization but is applying for a position that is open in your company.

Defining person types

In an earlier section, you learned the concept of person types, and now you will learn how to define person types in the system. Navigate to US HRMS Manager | Other Definitions | Person Types.

In the preceding screenshot, you can see four fields, that is, User Name, System Name, Active, and Default flag. There are eight person types recognized by the system and identified by a system name. For each system name, there are predefined usernames. A username can be changed as per your needs. Exactly one username per system name must be marked as the default. While creating an employee, the person type that is marked by the default flag will be set by default.

To change a username for a person type, delete the contents of the User Name field and type the name you'd prefer to keep. To add a new username to a person type system name:

1. Select New Record from the Edit menu.
2. Enter a unique username and select the system name you want to use.

Deactivating person types

You cannot delete person types, but you can deactivate them by unchecking the Active checkbox.

Entering personal and additional information

Until now, you have learned how to create an employee by entering basic details such as title, gender, and date of birth. In addition to this, you can enter some other information for an employee. As you can see on the People form, there are various tabs such as Employment, Office Details, Background, and so on. Each tab has some fields that can store information. For example, in our case, we have stored the e-mail address of the employee in the Office Details tab. Whenever you enter any data for an employee and then click on the Save button, it will give you two options as shown in the following screenshot:

You have to select one of the options to save the data. The differences between the two options are explained with an example. Let's say you have hired a new employee as of 01-Jan-2014. Hence, a new record will be created in the application with the start date as 01-Jan-2014. This is called the effective start date of the record. There is no end date for this record, so Oracle gives it a default end date, which is 31-Dec-4712. This is called the effective end date of the record. Now, in our case, Oracle has created a single record with the start date and end date as 01-Jan-2014 and 31-Dec-4712, respectively.
When we try to enter additional data for this record (in our case, a phone number), Oracle will prompt you to select the Correction or Update option. This is called the date-track option. If you select the correction mode, then Oracle will update the existing record in the application. Now, if you date track to, say, 01-Aug-2014, then enter the phone number and select the update mode, Oracle will end-date the historical record with the new date minus one and create a new record with the start date 01-Aug-2014 and the phone number that you have entered. Thus, the historical data will be preserved and a new record will be created with the start date 01-Aug-2014 and the new phone number.

The following tabular representation will help you understand Correction mode better:

Employee Number | Last Name  | Effective Start Date | Effective End Date | Phone Number
10160           | Test010114 | 01-Jan-2014          | 31-Dec-4712        | +0099999999

Now, if you change the phone number as of 01-Aug-2014 in Update mode (with the date tracked to 01-Aug-2014), then the records will be as follows:

Employee Number | Last Name  | Effective Start Date | Effective End Date | Phone Number
10160           | Test010114 | 01-Jan-2014          | 31-Jul-2014        | +0099999999
10160           | Test010114 | 01-Aug-2014          | 31-Dec-4712        | +0088888888

Thus, in update mode, you can see that the historical data is intact. If HR wants to view some historical data, then the HR employee can easily view this data. Everything associated with Oracle HRMS is date-tracked. Every characteristic of the organization, person, position, salary, and benefits is tightly date-tracked. This concept is very important in Oracle and is used in almost all the forms in which you store employee-related information. Thus, you have learned about the date-tracking concept in Oracle APPS.

There are some additional fields that can be configured as per your requirements. Additional personal data can be stored in these fields. These are called descriptive flexfields (DFFs) in Oracle. We created a personal DFF to store data about Years of Industry Experience and whether an employee is Oracle Certified or not. This data can be stored in the People form DFF as marked in the following screenshot:

When you click on the box, it will open a new form as shown in the following screenshot. Here, you can enter the additional data. This is called the Additional Personal Details DFF. It is stored with the personal data and is normally referred to as the People form DFF.

We have created a Special Information Type (SIT) to store information on the languages known by an employee. This data will have two attributes, namely, the language known and the fluency. This can be entered by navigating to US HRMS Manager | People | Enter and Maintain | Special Info. Click on the Details section. This will open a new form to enter the required details. Each record in the SIT is date-tracked. You can enter the start date and the end date. Thus, we have seen the DFF, in which you store additional person data, and the KFF, where you enter the SIT data.

Summary

In this article, you have learned about creating a new employee, entering employee data, and entering additional data using DFFs and KFFs. You have also learned the concept of person types.

Resources for Article:

Further resources on this subject:

- Knowing the prebuilt marketing, sales, and service organizations [article]
- Oracle E-Business Suite with Desktop Integration [article]
- Oracle Integration and Consolidation Products [article]

Your first FuelPHP application in 7 easy steps

Packt
04 Mar 2015
12 min read
In this article by Sébastien Drouyer, author of the book FuelPHP Application Development Blueprints, we will see that FuelPHP is an open source PHP framework using the latest technologies. Its large community regularly creates and improves packages and extensions, and the framework's core is constantly evolving. As a result, FuelPHP is a very complete solution for developing web applications.

(For more resources related to this topic, see here.)

In this article, we will also see how easy it is for developers to create their first website using the PHP oil utility.

The target application

Suppose you are a zoo manager and you want to keep track of the monkeys you are looking after. For each monkey, you want to save:

- Its name
- If it is still in the zoo
- Its height
- A description input where you can enter custom information

You want a very simple interface with five major features. You want to be able to:

- Create new monkeys
- Edit existing ones
- List all monkeys
- View a detailed file for each monkey
- Delete monkeys

These five major features, very common in computer applications, are the basic Create, Read, Update, and Delete (CRUD) operations.

Installing the environment

The FuelPHP framework needs the following three components:

- Webserver: The most common solution is Apache
- PHP interpreter: Version 5.3 or above
- Database: We will use the most popular one, MySQL

The installation and configuration procedures of these components will depend on the operating system you use. We will provide here some directions to get you started in case you are not used to installing your development environment. Please note though that these are very generic guidelines. Feel free to search the web for more information, as there are countless resources on the topic.

Windows

A complete and very popular solution is to install WAMP. This will install Apache, MySQL and PHP, in other words everything you need to get started. It can be accessed at the following URL: http://www.wampserver.com/en/

Mac

PHP and Apache are generally installed on the latest version of the OS, so you just have to install MySQL. To do that, you are recommended to read the official documentation: http://dev.mysql.com/doc/refman/5.1/en/macosx-installation.html

A very convenient solution for those of you who have the least system administration skills is to install MAMP, the equivalent of WAMP but for the Mac operating system. It can be downloaded through the following URL: http://www.mamp.info/en/downloads/

Ubuntu

As this is the most popular Linux distribution, we will limit our instructions to Ubuntu. You can install a complete environment by executing the following command lines:

# Apache, MySQL, PHP
sudo apt-get install lamp-server^

# PHPMyAdmin allows you to handle the administration of MySQL DB
sudo apt-get install phpmyadmin

# Curl is useful for doing web requests
sudo apt-get install curl libcurl3 libcurl3-dev php5-curl

# Enabling the rewrite module as it is needed by FuelPHP
sudo a2enmod rewrite

# Restarting Apache to apply the new configuration
sudo service apache2 restart

Getting the FuelPHP framework

There are four common ways to download FuelPHP:

- Downloading and unzipping the compressed package which can be found on the FuelPHP website.
- Executing the FuelPHP quick command-line installer (see the sketch after this list).
- Downloading and installing FuelPHP using Composer (see the sketch after this list).
- Cloning the FuelPHP GitHub repository. It is a little bit more complicated but allows you to select exactly the version (or even the commit) you want to install.
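For reference, the second and third methods usually boil down to a couple of commands. The following is only a sketch: the installer URL and the fuel/fuel package name are the ones documented by the FuelPHP project, and my_project is just a placeholder name:

# Method 2: quick command-line installer (installs the oil helper, then creates a project)
curl get.fuelphp.com/oil | sh
oil create my_project

# Method 3: Composer (requires Composer to be installed)
composer create-project fuel/fuel my_project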
The easiest way is to download and unzip the compressed package located at: http://fuelphp.com/files/download/28

You can get more information about this step in Chapter 1 of FuelPHP Application Development Blueprints, which can be accessed freely. It is also well-documented on the website installation instructions page: http://fuelphp.com/docs/installation/instructions.html

Installation directory and Apache configuration

Now that you know how to install FuelPHP in a given directory, we will explain where to install it and how to configure Apache.

The simplest way

The simplest way is to install FuelPHP in the root folder of your web server (generally the /var/www directory on Linux systems). If you install fuel in the DIR directory inside the root folder (/var/www/DIR), you will be able to access your project at the following URL: http://localhost/DIR/public/

However, be warned that fuel has not been implemented to support this, and if you publish your project this way on the production server, it will introduce security issues you will have to handle. In such cases, you are recommended to use the second way explained in the section below, although, for instance, if you plan to use a shared host to publish your project, you might not have the choice. A complete and up-to-date documentation about this issue can be found on the Fuel installation instructions page: http://fuelphp.com/docs/installation/instructions.html

By setting up a virtual host

Another way is to create a virtual host to access your application. You will need a *nix environment and a little bit more Apache and system administration skills, but the benefit is that it is more secure and you will be able to choose your working directory. You will need to change two files:

- Your Apache virtual host file(s), in order to link a virtual host to your application
- Your system host file, in order to redirect the wanted URL to your virtual host

In both cases, the files' location will be very dependent on your operating system and the server environment you are using, so you will have to figure out their location yourself (if you are using a common configuration, you won't have any problem finding instructions on the web).

In the following example, we will set up your system to call your application when requesting the my.app URL on your local environment. Let's first edit the virtual host file(s); add the following code at the end:

<VirtualHost *:80>
    ServerName my.app
    DocumentRoot YOUR_APP_PATH/public
    SetEnv FUEL_ENV "development"
    <Directory YOUR_APP_PATH/public>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Then, open your system host file and add the following line at the end:

127.0.0.1 my.app

Depending on your environment, you might need to restart Apache after that. You can now access your website at the following URL: http://my.app/

Checking that everything works

Whether you used a virtual host or not, the following should now appear when accessing your website:

Congratulations! You have just successfully installed the FuelPHP framework. The welcome page shows some recommended directions to continue your project.

Database configuration

As we will store our monkeys in a MySQL database, it is time to configure FuelPHP to use our local database.
If you open fuel/app/config/db.php, all you will see is an empty array, but this configuration file is merged with fuel/app/config/ENV/db.php, ENV being the current Fuel environment, which in this case is development. You should therefore open fuel/app/config/development/db.php:

<?php
//...
return array(
    'default' => array(
        'connection' => array(
            'dsn'      => 'mysql:host=localhost;dbname=fuel_dev',
            'username' => 'root',
            'password' => 'root',
        ),
    ),
);

You should adapt this array to your local configuration, particularly the database name (currently set to fuel_dev), the username, and the password. You must create your project's database manually.

Scaffolding

Now that the database configuration is set, we will be able to generate a scaffold. We will use the generate feature of the oil utility for that. Open the command-line utility and go to your website root directory. To generate a scaffold for a new model, you will need to enter the following line:

php oil generate scaffold/crud MODEL ATTR_1:TYPE_1 ATTR_2:TYPE_2 ...

Where:

- MODEL is the model name
- ATTR_1, ATTR_2… are the model's attribute names
- TYPE_1, TYPE_2… are each attribute's type

In our case, it should be:

php oil generate scaffold/crud monkey name:string still_here:bool height:float description:text

Here we are telling oil to generate a scaffold for the monkey model with the following attributes:

- name: The name of the monkey. Its type is string and the associated MySQL column type will be VARCHAR(255).
- still_here: Whether or not the monkey is still in the facility. Its type is boolean and the associated MySQL column type will be TINYINT(1).
- height: Height of the monkey. Its type is float and its associated MySQL column type will be FLOAT.
- description: Description of the monkey. Its type is text and its associated MySQL column type will be TEXT.

You can do much more using the oil generate feature, such as generating models, controllers, migrations, tasks, packages, and so on. We will see some of these in the FuelPHP Application Development Blueprints book and you are also recommended to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/generate.html

When you press Enter, you will see the following lines appear:

Creating migration: APPPATH/migrations/001_create_monkeys.php
Creating model: APPPATH/classes/model/monkey.php
Creating controller: APPPATH/classes/controller/monkey.php
Creating view: APPPATH/views/monkey/index.php
Creating view: APPPATH/views/monkey/view.php
Creating view: APPPATH/views/monkey/create.php
Creating view: APPPATH/views/monkey/edit.php
Creating view: APPPATH/views/monkey/_form.php
Creating view: APPPATH/views/template.php

Where APPPATH is your website directory/fuel/app. Oil has generated nine files for us:

- A migration file, containing all the necessary information to create the model's associated table
- The model
- A controller
- Five view files and a template file

More explanation about these files and how they interact with each other can be accessed in Chapter 1 of the FuelPHP Application Development Blueprints book, freely available. For those of you who are not yet familiar with MVC and HMVC frameworks, don't worry; the chapter contains an introduction to the most important concepts.

Migrating

One of the generated files was APPPATH/migrations/001_create_monkeys.php. It is a migration file and contains the required information to create our monkey table. Notice the name is structured as VER_NAME, where VER is the version number and NAME is the name of the migration.
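If you open the generated file, it will look roughly like the following sketch. Treat this as an illustration rather than an exact copy; the column definitions may differ slightly from what your version of oil actually generated:

<?php

namespace Fuel\Migrations;

// A sketch of APPPATH/migrations/001_create_monkeys.php:
// up() creates the monkeys table and down() reverts the migration.
class Create_monkeys
{
    public function up()
    {
        \DBUtil::create_table('monkeys', array(
            'id' => array('constraint' => 11, 'type' => 'int', 'auto_increment' => true),
            'name' => array('constraint' => 255, 'type' => 'varchar'),
            'still_here' => array('type' => 'bool'),
            'height' => array('type' => 'float'),
            'description' => array('type' => 'text'),
            'created_at' => array('constraint' => 11, 'type' => 'int', 'null' => true),
            'updated_at' => array('constraint' => 11, 'type' => 'int', 'null' => true),
        ), array('id'));
    }

    public function down()
    {
        \DBUtil::drop_table('monkeys');
    }
}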
If you execute the following command line:

php oil refine migrate

All migration files that have not yet been executed will be run, from the oldest version to the latest (001, 002, 003, and so on). Once all files are executed, oil will display the latest version number.

Once executed, if you take a look at your database, you will observe that not one but two tables have been created:

monkeys: As expected, a table has been created to handle your monkeys. Notice that the table name is the plural version of the word we typed when generating the scaffold; this transformation is done internally using the Inflector::pluralize method. The table contains the specified columns (name, still_here, height, and description), the id column, but also created_at and updated_at. These columns respectively store the time an object was created and updated, and are added by default each time you generate your models. It is possible to omit them with the --no-timestamp argument.

migration: This other table was created automatically. It keeps track of the migrations that have been executed. If you look at its content, you will see that it already contains one row: the migration you just executed. You can notice that the row does not only indicate the name of the migration, but also a type and a name. This is because migration files can be placed in many locations, such as modules or packages.

The oil utility allows you to do much more. Don't hesitate to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/intro.html

Or, again, read FuelPHP Application Development Blueprints' Chapter 1, which is available for free.

Using your application

Now that we have generated the code and migrated the database, our application is ready to be used. Request the following URL:

If you created a virtual host: http://my.app/monkey
Otherwise (don't forget to replace DIR): http://localhost/DIR/public/monkey

As you can see, this web page is intended to display the list of all monkeys, but since none have been added yet, the list is empty. So let's add a new monkey by clicking on the Add new Monkey button. The following web page should appear:

You can enter your monkey's information here. The form is certainly not perfect (for instance, the Still here field uses a standard input although a checkbox would be more appropriate), but it is a great start. All we will have to do is refine the code a little. Once you have added several monkeys, you can again take a look at the listing page:

Again, this is a great start, though we might want to refine it. Each item in the list has three associated actions: View, Edit, and Delete. Let's first click on View:

Again a great start, though we will refine this web page too. You can return to the listing by clicking on Back, or edit the monkey by clicking on Edit. Whether accessed from the listing page or the view page, Edit will display the same form as when creating a new monkey, except that the form will be prefilled, of course. Finally, if you click on Delete, a confirmation box will appear to prevent any accidental clicks.

Want to learn more? Don't hesitate to check out FuelPHP Application Development Blueprints' Chapter 1, which is freely available on Packt Publishing's website. In this chapter, you will find a more thorough introduction to FuelPHP and we will show how to improve this first application.
You are also recommended to explore the FuelPHP website, which contains a lot of useful information and excellent documentation: http://www.fuelphp.com

There is much more to discover about this wonderful framework.

Summary

In this article we learned how to install the FuelPHP environment, where to place the installation directory, and how to configure Apache and the database before scaffolding and using a first application.

Resources for Article:

Further resources on this subject:
PHP Magic Features [Article]
FuelPHP [Article]
Building a To-do List with Ajax [Article]

Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI (Business Intelligence)?

In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the services and processes the business has defined, they are captured by BAM, and reports and views are updated in real time. Where necessary, these updated reports are delivered to users.

This delivery to users can take several forms. The best known is the dashboard on users' desktops that will automatically update without any need for the user to refresh the screen. There are also other means to deliver reports to the end user, including sending them via a text message or an email.

Traditional reporting tools such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can be used to meet some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Event Driven Architecture

Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers, such as a stock-out event, or they may be more complex triggers, such as the calculations needed to realize that a stock-out will occur in three days. An Event Driven Architecture will often take a number of simple events and then combine them, through a complex event processing sequence, to generate complex events that could not have been raised without the aggregation of several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly, it may be used to monitor the overall state of processes in the business. For example, it may be used to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or other time periods). Secondly, it may be used to track Key Performance Indicators, or KPIs, in real time. For example, it may be used to provide a continuously updating dashboard showing a seller the current total value of all the seller's auctions, tracked against an expected target.

In the first case, we are interested in how business processes are progressing and are using BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly, BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action.

In the second case, our interest is in some aggregate number, such as our total liabilities should we win all the auctions we are bidding on.
This requires us to aggregate results from many events, possibly performing some kind of calculation on them, to provide a single KPI that gives the business an indication of how things are going. BAM allows us to continuously update this number in real time on a dashboard without the need for continual polling. It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual when a threshold is breached. In both cases, the reports delivered can be customized based on the individual receiving the report.

BAM architecture

It may seem odd to have a section on architecture in the middle of an article about how to use BAM effectively, but key to the successful utilization of BAM is an understanding of how its different tiers relate to each other.

Logical view

The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disk). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates, through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts are delivered through the alert delivery service.

Physical view

To better understand the physical architecture of BAM, we have divided this section into four parts. Let us discuss these in detail.

Capture

This logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats or for other data sources, Oracle Data Integrator (ODI is a separate product from the SOA Suite) is recommended by Oracle. Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through ESB or BPEL. At the data capture level, we need to think about the data items that we can provide to feed the reports and alerts we want to generate. We must consider the sources of that data and the best way to load it into BAM.

Store

Once the data is captured, it is stored in a normalized form in the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC, and all updates to that process state will just update that single data item rather than creating multiple data items.

Process

Reports are run on user demand. Once a report is run, it will update the user's screen in real time. Where multiple users are accessing the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real time, the report engine continuously monitors them for any changes that need to be made to currently active reports. When changes are detected that impact active reports, the appropriate report is updated in memory and the updates are sent to the user's screen.
In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. It monitors data in the ADC to see if registered thresholds on values have been exceeded or if certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this.

Deliver

Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. Second, reports are sent out as a result of events being triggered by the event processing engine. In the latter case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option for these event-generated reports is to invoke a web service to take some sort of automated action.

Closing the loop

While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides real-time monitoring very well, but it also provides the facility to invoke other services to respond to undesirable events such as stock-outs. The ability to invoke external services is crucial to the concept of a closed-loop control environment where, as a result of monitoring, we are able to reach back into the processes and either alter their execution or start new ones. For example, when a stock-out or low-stock event is raised, the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory.

Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock-out, instead of requesting our supplier to provide more stock, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another.

BAM platform anomaly

In the 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of the SOA Suite, it does not run on a JEE application server, and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems.

User interface

Development in Oracle BAM is done through a web-based user interface. This user interface gives access to four different applications that allow you to interact with different parts of BAM. These are:

Active Viewer, for giving access to reports; this relates to the deliver stage for user-requested reports.
Active Studio, for building reports; this relates to the process stage for creating reports.
Architect, for setting up both inbound and outbound events. Data elements are defined here as data sources. Alerts are also configured here. This covers the setup, acquire, and store stages as well as the deliver stage for alerts.
Administrator, for managing users and roles as well as defining the types of message sources.

We will not examine the applications individually but will take a task-focused look at how to use them as part of providing some specific reports.
Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process, shown in the following figure:

An auction is started and then bids are placed until the time runs out, at which point the auction is completed. This is modelled in BPEL. The process has three distinct states:

Started
Bid received
Completed

We are interested in the number of auctions in each state as well as the total value of auctions in progress. To build the dashboard, we need to follow these steps:

Define our data within the Active Data Cache
Create sensors in BPEL and map them to data in the ADC
Create suitable reports
Run the reports

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally, BAM will report against aggregations of objects, but reports can also drill down into individual data objects.

Before defining our data objects, let's group them into an Auction folder so they are easy to find. To do this, we use the BAM Architect application and select Data Objects, which gives us the following screen:

We select Create subfolder to create the folder and name it Auction. We then select Create folder to actually create the folder, and we get a confirmation message telling us that the folder was created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. If we don't select the Auction folder now, we can pick it later when filling in the details of the data object. We need to give our object a unique name within the folder and can optionally provide tip text that explains what the object does when the mouse is moved over it in object listings.

Having named our object, we can now create the data fields by selecting Add a field. When adding fields, we need to provide a name and type, as well as indicating whether they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate whether a field should be public ("available for display") and what tool tip text, if any, it should have. Once all the data fields have been defined, we click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen stating that the object has been created.

Grouping data into hierarchies

When creating a data object, it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill-down to occur in views.
Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important, as we do not expect to populate all, or even most, of the fields in a data object at one moment in time. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them.

In our auction example, there will be just one auction object for every auction. However, there will be at least two, and usually more, messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The following table shows how the three messages populate different parts of the data object:

Message          | Auction ID | State    | Highest bid | Reserve  | Expires  | Seller   | Highest bidder
Auction Started  | Inserted   | Inserted | Inserted    | Inserted | Inserted | Inserted |
Bid Received     |            | Updated  | Updated     |          |          |          | Updated
Auction Finished |            | Updated  |             |          |          |          |

Summary

In this article we have explored how Business Activity Monitoring differs from, and is complementary to, more traditional Business Intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM can allow the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment.

NetBeans IDE 7: Building an EJB Application

Packt
01 Jun 2011
10 min read
NetBeans IDE 7 Cookbook

Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Enterprise Java Beans (EJB) is a framework of server-side components that encapsulates business logic. These components adhere to strict specifications on how they should behave. This ensures that vendors who wish to implement EJB-compliant code must follow conventions, protocols, and classes, ensuring portability. The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer.

If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

For our EJB application to run, we will need an application server. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in. Some of the capabilities supported by EJB and enforced by application servers are:

Remote access
Transactions
Security
Scalability

NetBeans 6.9, or higher, supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development. NetBeans makes it easy to develop an EJB application and deploy it on different application servers without the need to over-configure and mess with different configuration files. It's as easy as a right-click on the project node.

Creating an EJB project

In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, you can download it from http://download.netbeans.org. There are two application servers in this installation package, Apache Tomcat and GlassFish, and either one can be chosen, but at least one is necessary. In this recipe, we will use the GlassFish version that comes with the NetBeans 7.0 installation package.

How to do it...

Let's create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N.
In the New Project window, on the Categories side, choose Java Web, and on the Projects side, select Web Application, then click Next.
In Name and Location, under Project Name, enter EJBApplication.
Tick the Use Dedicated Folder for Storing Libraries option box.
Now either type the folder path or select one by clicking on Browse.
After choosing the folder, we can proceed by clicking Next.
In Server and Settings, under Server, choose GlassFish Server 3.1.
Tick Enable Contexts and Dependency Injection.
Leave the other values at their defaults and click Finish.

The new project structure is created.

How it works...

NetBeans creates a complete file structure for our project. It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor. The deployment descriptor filename specific to the GlassFish web server is glassfish-web.xml.

Adding JPA support

The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. Within JPA, a query language is provided that supports the developers in abstracting the underlying database.
With the release of JPA 2.0, many areas were improved, such as:

Domain Modeling
EntityManager
Query interfaces
The JPA query language, and others

We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html.

NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA. In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project.

Getting ready

We will use GlassFish Server in this recipe, since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. Another source of an installed Java DB is the JDK installation directory. It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe.

How to do it...

Right-click on the EJBApplication node and select New Entity Classes from Database....
In Database Tables: under Data Source, select jdbc/sample and let the IDE initialize Java DB. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next.
In Entity Classes: leave all the fields with their default values, only entering entities under Package, and click Finish.

How it works...

NetBeans imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package. Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself. The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java:

@Entity
@Table(name = "MANUFACTURER")
@NamedQueries({
    @NamedQuery(name = "Manufacturer.findAll",
        query = "SELECT m FROM Manufacturer m"),
    @NamedQuery(name = "Manufacturer.findByManufacturerId",
        query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"),

The @Entity annotation defines that this class, Manufacturer.java, is an entity, and the @Table annotation, with its name parameter, points out the table in the database where the information is stored. The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored. There can be as many @NamedQueries as the developer feels necessary. One of the NamedQueries we are using in our example is named Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to:

SELECT m FROM Manufacturer m

On top of that, NetBeans implements the equals, hashCode, and toString methods. These are very useful if the entities need to be used straightaway with collections such as HashMap. Below is the NetBeans-generated code for both the hashCode and equals methods:

@Override
public int hashCode() {
    int hash = 0;
    hash += (manufacturerId != null ?
        manufacturerId.hashCode() : 0);
    return hash;
}

@Override
public boolean equals(Object object) {
    // TODO: Warning - this method won't work in the case the id fields are not set
    if (!(object instanceof Manufacturer)) {
        return false;
    }
    Manufacturer other = (Manufacturer) object;
    if ((this.manufacturerId == null && other.manufacturerId != null)
            || (this.manufacturerId != null
                && !this.manufacturerId.equals(other.manufacturerId))) {
        return false;
    }
    return true;
}

NetBeans also creates a persistence.xml file and provides a visual editor, simplifying the management of different persistence units (in case our project needs to use more than one), thereby making it possible to manage persistence.xml without even touching the XML code. A persistence unit, defined in persistence.xml, is the configuration file in JPA; it is placed under Configuration Files when the NetBeans view is in Projects mode. This file defines the data source and the name of the persistence unit. In our example:

<persistence-unit name="EJBApplicationPU" transaction-type="JTA">
    <jta-data-source>jdbc/sample</jta-data-source>
    <properties/>
</persistence-unit>

Our persistence unit name is EJBApplicationPU, using jdbc/sample as the data source. To add more PUs, click on the Add button placed in the uppermost right corner of the persistence visual editor. This is an example of adding another PU to our project:

Creating a Stateless Session Bean

A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client. Stateless Session Beans do not maintain state. This means that when a client invokes a method on a Stateless bean, the bean is ready to be reused by another client. The information stored in the bean is generally discarded when the client stops accessing the bean. This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client.

It is not in the scope of this recipe to learn how Stateless Beans work in detail. If you wish to learn more, please visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes it through a servlet, and prints this information on a page that is created on-the-fly by our servlet.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, please visit http://download.netbeans.org. We will use GlassFish Server in this recipe, since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. It is possible to follow the steps in this recipe without the previous code, but for better understanding we will continue to build on top of the previous recipes' source code.

How to do it...

Right-click on the EJBApplication node and select New and Session Bean....
For Name and Location: name the EJB ManufacturerEJB, under Package enter beans, leave Session Type as Stateless, leave Create Interface with nothing marked, and click Finish.
Here are the steps for us to create business methods:

Open ManufacturerEJB and inside the class body, enter:

@PersistenceUnit
EntityManagerFactory emf;

public List findAll() {
    return emf.createEntityManager()
              .createNamedQuery("Manufacturer.findAll")
              .getResultList();
}

Press Ctrl+Shift+I to resolve the following imports:

java.util.List;
javax.persistence.EntityManagerFactory;
javax.persistence.PersistenceUnit;

Creating the servlet:

Right-click on the EJBApplication node and select New and Servlet....
For Name and Location: name the servlet ManufacturerServlet and under Package enter servlets. Leave all the other fields with their default values and click Next.
For Configure Servlet Deployment: leave all the default values and click Finish.

With the ManufacturerServlet open:

After the class declaration and before the processRequest method, add:

@EJB
ManufacturerEJB manufacturerEJB;

Then inside the processRequest method, on the first line after the try statement, add:

List<Manufacturer> l = manufacturerEJB.findAll();

Remove the /* TODO output your page here comment and the matching */.

And finally replace:

out.println("<h1>Servlet ManufacturerServlet at " + request.getContextPath() + "</h1>");

With:

for (int i = 0; i < 10; i++)
    out.println("<b>City</b> " + l.get(i).getCity() + ", <b>State</b> " + l.get(i).getState() + "<br>");

Resolve all the import errors and save the file.

How it works...

To execute the code produced in this recipe, right-click on the EJBApplication node and select Run. When the browser launches, append /ManufacturerServlet to the end of the URL and hit Enter. Our application will return city and state names.

One of the coolest features in Java EE 6 is that the use of web.xml can be avoided by annotating the servlet. The following code does exactly that:

@WebServlet(name = "ManufacturerServlet", urlPatterns = {"/ManufacturerServlet"})

Since we are working with Java EE 6, our Stateless bean does not need the daunting work of creating interfaces; the @Stateless annotation takes care of that, making it easier to develop EJBs. We then add the persistence unit, represented by the EntityManagerFactory and injected by the @PersistenceUnit annotation. Finally, we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.

Working with Complex Associations using CakePHP

Packt
28 Oct 2009
6 min read
Defining a Many-To-Many Relationship in Models

In the previous article in this series on Working with Simple Associations using CakePHP, we assumed that a book can have only one author. But in a real-life scenario, a book may also have more than one author. In that case, the relation between authors and books is many-to-many. We are now going to see how to define associations for a many-to-many relation. We will modify the existing code-base that we were working on in the previous article to set up the associations needed to represent a many-to-many relation.

Time for Action: Defining a Many-To-Many Relation

Empty the database tables:

TRUNCATE TABLE `authors`;
TRUNCATE TABLE `books`;

Remove the author_id field from the books table:

ALTER TABLE `books` DROP `author_id`;

Create a new table, authors_books:

CREATE TABLE `authors_books` (
    `author_id` INT NOT NULL,
    `book_id` INT NOT NULL
);

Modify the Author (/app/models/author.php) model:

<?php
class Author extends AppModel
{
    var $name = 'Author';
    var $hasAndBelongsToMany = 'Book';
}
?>

Modify the Book (/app/models/book.php) model:

<?php
class Book extends AppModel
{
    var $name = 'Book';
    var $hasAndBelongsToMany = 'Author';
}
?>

Modify the AuthorsController (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

Modify the BooksController (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

Now, visit the following URLs and add some test data into the system: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We first emptied the database and then dropped the field author_id from the books table. Then we added a new join table, authors_books, that will be used to establish a many-to-many relation between authors and books. The following diagram shows how a join table relates two tables in a many-to-many relation:

In a many-to-many relation, one record of either table can be related to multiple records of the other table. To establish this link, a join table is used; a join table contains two fields to hold the primary keys of both of the records in relation. CakePHP has certain conventions for naming a join table: join tables should be named after the tables in relation, in alphabetical order, with underscores in between. The join table between the authors and books tables should be named authors_books, not books_authors. Also, by Cake convention, the foreign keys used in the join table must be the underscored, singular name of the models in relation, suffixed with _id.

After creating the join table, we defined associations in the models, so that our models also know about the new relationship that they have. We added hasAndBelongsToMany (HABTM) associations in both of the models. HABTM is a special type of association used to define a many-to-many relation in models. Both models have HABTM associations to define the many-to-many relationship from both ends. After defining the associations in the models, we created two controllers for these two models and put scaffolding in them to see the association working.

We could also use an array to set up the HABTM association in the models.
The following code segment shows how to use an array for setting up an HABTM association between authors and books in the Author model:

var $hasAndBelongsToMany = array(
    'Book' => array(
        'className'             => 'Book',
        'joinTable'             => 'authors_books',
        'foreignKey'            => 'author_id',
        'associationForeignKey' => 'book_id'
    )
);

As with simple relationships, we can override default association characteristics by adding or modifying key/value pairs in the associative array. The foreignKey key/value pair holds the name of the foreign key found in the current model; the default is the underscored, singular name of the current model suffixed with _id. The associationForeignKey key/value pair holds the foreign key name found in the corresponding table of the other model; the default is the underscored, singular name of the associated model suffixed with _id. We can also have conditions, fields, and order key/value pairs to customize the relationship in more detail.

Retrieving Related Model Data in a Many-To-Many Relation

Like one-to-one and one-to-many relations, once the associations are defined, CakePHP will automatically fetch the related data in a many-to-many relation.

Time for Action: Retrieving Related Model Data

Take out scaffolding from both of the controllers: AuthorsController (/app/controllers/authors_controller.php) and BooksController (/app/controllers/books_controller.php).

Add an index() action inside the AuthorsController (/app/controllers/authors_controller.php), like the following:

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    function index() {
        $this->Author->recursive = 1;
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>

Create a view file for the /authors/index action (/app/views/authors/index.ctp):

<?php foreach($authors as $author): ?>
<h2><?php echo $author['Author']['name'] ?></h2>
<hr />
<h3>Book(s):</h3>
<ul>
<?php foreach($author['Book'] as $book): ?>
<li><?php echo $book['title'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>

Write the following code inside the BooksController (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    function index() {
        $this->Book->recursive = 1;
        $books = $this->Book->find('all');
        $this->set('books', $books);
    }
}
?>

Create a view file for the action /books/index (/app/views/books/index.ctp):

<?php foreach($books as $book): ?>
<h2><?php echo $book['Book']['title'] ?></h2>
<hr />
<h3>Author(s):</h3>
<ul>
<?php foreach($book['Author'] as $author): ?>
<li><?php echo $author['name'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>

Now, visit the following URLs:
http://localhost/relationship/authors/
http://localhost/relationship/books/

What Just Happened?

In both of the controllers, we first set the value of the $recursive attribute to 1 and then called the respective models' find('all') functions. These find('all') operations therefore return all associated model data directly related to the respective models. The returned results of the find('all') calls are then passed to the corresponding view files. In the view files, we looped through the returned results and printed out the models and their related data.

In the BooksController, the data returned by find('all') is stored in a variable $books. This find('all') returns an array of books, and every element of that array contains information about one book and its related authors:

Array
(
    [0] => Array
        (
            [Book] => Array
                (
                    [id] => 1
                    [title] => Book Title
                    ...
                )
            [Author] => Array
                (
                    [0] => Array
                        (
                            [id] => 1
                            [name] => Author Name
                            ...
                        )
                    [1] => Array
                        (
                            [id] => 3
                            ...
                        )
                )
        )
    ...
)

The same goes for the Author model: the returned data is an array of authors. Every element of that array contains two arrays, one holding the author information and the other containing an array of books related to that author. These arrays are very much like what we got from a find('all') call in the case of the hasMany association.
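As a complement to the retrieval examples above, the following is a minimal sketch of how data can be saved through a HABTM association; the action name, the book title, and the author IDs are assumptions for illustration:

<?php
class BooksController extends AppController {
    var $name = 'Books';

    // Hypothetical action: create a book and link it to two existing
    // authors (IDs 1 and 3) through the HABTM association in one save().
    function add() {
        $data = array(
            'Book'   => array('title' => 'A Co-Authored Book'),
            // CakePHP's HABTM save format: a list of related Author IDs
            'Author' => array('Author' => array(1, 3))
        );
        $this->Book->create();
        // save() also inserts the linking rows into authors_books
        $this->Book->save($data);
    }
}
?>

After calling such an action, the authors_books join table will contain one row per linked author.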

Customizing and Extending the ASP.NET MVC Framework

Packt
12 Oct 2009
5 min read
(For more resources on .NET, see here.)

Creating a control

When building applications, you probably also build controls. Controls are re-usable components that contain functionality which can be re-used in different locations. In ASP.NET Webforms, a control is much like an ASP.NET web page. You can add existing web server controls and markup to a custom control and define properties and methods for it. When, for example, a button on the control is clicked, the page is posted back to the server, which performs the actions required by the control.

The ASP.NET MVC framework does not support ViewState and postbacks, and therefore cannot handle events that occur in the control. In ASP.NET MVC, controls are mainly re-usable portions of a view, called partial views, which can be used to display static HTML and generated content based on ViewData received from a controller.

In this topic, we will create a control to display employee details. We will start by creating a new ASP.NET MVC application using File | New | Project... in Visual Studio, and selecting ASP.NET MVC Application under Visual C# - Web. First of all, we will create a new Employee class inside the Models folder. The code for this Employee class is:

public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}

On the home page of our web application, we will list all of our employees. In order to do this, modify the Index action method of the HomeController to pass a list of employees to the view in the ViewData dictionary. Here's an example that creates a list of two employees and passes it to the view:

public ActionResult Index()
{
    ViewData["Title"] = "Home Page";
    ViewData["Message"] = "Our employees welcome you to our site!";

    List<Employee> employees = new List<Employee> {
        new Employee {
            FirstName = "Maarten",
            LastName = "Balliauw",
            Email = "maarten@maartenballiauw.be",
            Department = "Development"
        },
        new Employee {
            FirstName = "John",
            LastName = "Kimble",
            Email = "john@example.com",
            Department = "Development"
        }
    };

    return View(employees);
}

The corresponding view, Index.aspx in the Views | Home folder of our ASP.NET MVC application, should be modified to accept a List<Employee> as its model. To do this, edit the code-behind file Index.aspx.cs and modify its contents as follows:

using System.Collections.Generic;
using System.Web.Mvc;
using ControlExample.Models;

namespace ControlExample.Views.Home
{
    public partial class Index : ViewPage<List<Employee>>
    {
    }
}

In the Index.aspx view, we can now use this list of employees. Because we will display details of more than one employee somewhere else in our ASP.NET MVC web application, let's make this a partial view. Right-click the Views | Shared folder, click on Add | New Item... and select the MVC View User Control item template under Visual C# | Web | MVC. Name the partial view DisplayEmployee.ascx.

The ASP.NET MVC framework provides the flexibility to use a strongly-typed version of the ViewUserControl class, just as the ViewPage class does. The key difference between ViewUserControl and ViewUserControl<T> is that with the latter, the type of the view data is explicitly passed in, whereas the non-generic version will contain only a dictionary of objects. Because the DisplayEmployee.ascx partial view will be used to render items of the type Employee, we can modify the
DisplayEmployee.ascx code-behind file (DisplayEmployee.ascx.cs) and make it strongly-typed:

using ControlExample.Models;

namespace ControlExample.Views.Shared
{
    public partial class DisplayEmployee : System.Web.Mvc.ViewUserControl<Employee>
    {
    }
}

In the view markup of our partial view, the model can now be easily referenced. Just as with a regular ViewPage, the ViewUserControl will have a ViewData property containing a Model property of the type Employee. Add the following code to DisplayEmployee.ascx:

<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="DisplayEmployee.ascx.cs"
    Inherits="ControlExample.Views.Shared.DisplayEmployee" %>
<%= Html.Encode(Model.LastName) %>, <%= Html.Encode(Model.FirstName) %><br/>
<em><%= Html.Encode(Model.Department) %></em>

The control can now be used in any view or control in the application. In the Views | Home | Index.aspx view, use the Model property (which is a List<Employee>) and render the control that we have just created for each employee:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="ControlExample.Views.Home.Index" %>
<asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>Here are our employees:</p>
    <ul>
    <% foreach (var employee in Model) { %>
        <li>
            <% Html.RenderPartial("DisplayEmployee", employee); %>
        </li>
    <% } %>
    </ul>
</asp:Content>

In case the control's ViewData type is equal to the view page's ViewData type, another method of rendering can also be used. This method is similar to ASP.NET Webforms controls, and allows you to specify a control as a tag. Optionally, a ViewDataKey can be specified. The control will then fetch its data from the ViewData dictionary entry having this key:

<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="...." />

For example, if the ViewData contains a key emp that is filled with an Employee instance, the user control could be rendered using the following markup:

<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="emp" />

After running the ASP.NET MVC web application, the result will appear as shown in the following screenshot:

Asterisk Gateway Interface Scripting with PHP

Packt
16 Oct 2009
4 min read
PHP-CLI vs PHP-CGI

Most Linux distributions include both versions of PHP when installed, especially if you are using a modern distribution such as CentOS or Mandriva. When writing AGI scripts with PHP, it is imperative that you use PHP-CLI, and not PHP-CGI. Why is this so important? The main issue is that PHP-CLI and PHP-CGI handle their STDIN (standard input) slightly differently, which makes the reading of channel variables via PHP-CGI slightly more problematic.

The php.ini configuration file

The PHP interpreter includes a configuration file that defines a set of defaults for the interpreter. For your scripts to work in an efficient manner, the following must be set, either via the php.ini file or by your PHP script:

ob_implicit_flush(false);
set_time_limit(5);
error_log = filename;
error_reporting(0);

The above directives perform the following:

ob_implicit_flush(false); Sets your PHP output buffering behavior, to make sure that output from your AGI script to Asterisk is not held in a buffer, which would make it take longer to execute.

set_time_limit(5); Sets a time limit on your AGI scripts to verify that they don't run beyond a reasonable execution time; there is no rule of thumb for the actual value, as it is highly dependent on your implementation. Depending on your system and applications, your maximum time limit may be set to any value; however, we suggest that you verify your scripts are able to work within a maximum limit of 30 seconds.

error_log = filename; Excellent for debugging purposes; always creates a log file.

error_reporting(0); Disables error reporting to the error_log; change the value to enable different logging levels. Check the PHP website for additional information about this.

AGI script permissions

All AGI scripts must be located in the directory /var/lib/asterisk/agi-bin, which is Asterisk's default directory for AGI scripts. All AGI scripts should have the execute permission, and should be owned by the user running Asterisk. If you are unfamiliar with these, consult your system administrator for additional information.

The structure of a PHP based AGI script

Every PHP based AGI script takes the following form:

#!/usr/bin/php -q
<?
$stdin  = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Operational Code starts here */
..
..
..
?>

Upon execution, Asterisk transmits a set of information to our AGI script via STDIN. Handling of that input is best performed in the following manner:

#!/usr/bin/php -q
<?
$stdin  = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Handling execution input from Asterisk */
while (!feof($stdin)) {
    $temp = fgets($stdin);
    $temp = str_replace("\n", "", $temp);
    if ($temp == "") {
        break;
    }
    $s = explode(":", $temp);
    $agivar[$s[0]] = trim($s[1]);
}

/* Operational Code starts here */
..
..
..
?>

Once we have handled our inbound information from the Asterisk server, we can start our actual operational flow.

Communication between Asterisk and AGI

The communication between Asterisk and an AGI script is performed via STDIN and STDOUT (standard output). Let's examine the following diagram:

In the above diagram, ASC refers to our AGI script, while AST refers to Asterisk itself. As you can see from the diagram, the entire flow is fairly simple: a set of simple I/O queries and responses carried over the STDIN/STDOUT data streams.

Let's now examine a slightly more complicated example:
Let's now examine a slightly more complicated example: The above figure shows an example that includes two new elements in our AGI logic—access to a database, and to information provided via a web service. For example, the above image illustrates something that may be used as a connection between the telephony world and a dating service. This leads to an immediate conclusion that just as AGI is capable of connecting to almost any type of information source, depending solely on the implementation of the AGI script and not on Asterisk, Asterisk is capable of interfacing with almost any type of information source via out-of-band facilities. Enough of talking! Let's write our first AGI script.

A Primer to AGI: Asterisk Gateway Interface

Packt
16 Oct 2009
2 min read
How does AGI work

Let's examine the following diagram:

As the previous diagram illustrates, an AGI script communicates with Asterisk via two standard data streams: STDIN (Standard Input) and STDOUT (Standard Output). From the AGI script's point of view, any input coming in from Asterisk is considered STDIN, while output to Asterisk is considered STDOUT.

The idea of using STDIN/STDOUT data streams with applications isn't a new one, even if you're a junior-level programmer. Think of it as handling any input from Asterisk with a read directive and outputting to Asterisk with a print or echo directive. When thinking about it in such a simplistic manner, it is clear that AGI scripts can be written in any scripting or programming language, ranging from BASH scripting, through PERL/PHP scripting, to writing C/C++ programs that perform the same task.

Let's now examine how an AGI script is invoked from within the Asterisk dialplan:

exten => _X.,1,AGI(some_script_name.agi,param1,param2,param3)

As you can see, the invocation is similar to the invocation of any other Asterisk dialplan application. However, there is one major difference between a regular dialplan application and an AGI script: the resources an AGI script consumes. While an internal application consumes a well-known set of resources from Asterisk, an AGI script simply hands over control to an external process. Thus, the resources required to execute the external AGI script are unknown, while at the same time Asterisk consumes the resources needed to manage the execution of the AGI script.

Ok, so BASH isn't much of a resource hog, but what about Java? This means that the choice of programming language for your AGI scripts is important. Choosing the wrong programming language can often lead to slow systems and, in most cases, non-operational systems. While one may argue that the underlying programming language has a direct impact on the performance of your AGI application, it is imperative to learn the impact of each. To be more exact, it's not the language itself, but the technology of the programming language runtime that matters. The following table tries to distinguish between three programming language families and their applicability to AGI development.