
How-To Tutorials - Application Development

357 Articles

User Input Validation in Tapestry 5

Packt
16 Oct 2009
9 min read
Adding Validation to Components

The Start page of the web application Celebrity Collector has a login form that expects the user to enter values into its two fields. But what if the user didn't enter anything and still clicked on the Log In button? Currently, the application decides that the credentials are wrong, redirects the user to the Registration page, and invites them to register. This logic makes some sense, but it isn't the best course of action, as the button might have been pressed by mistake. The two fields, User Name and Password, are actually mandatory, and if no value is entered into them, it should be treated as an error. All we need to do is add a required validator to every field, as shown in the following code:

<tr>
  <td>
    <t:label t:for="userName">Label for the first text box</t:label>:
  </td>
  <td>
    <input type="text" t:id="userName" t:type="TextField"
           t:label="User Name" t:validate="required"/>
  </td>
</tr>
<tr>
  <td>
    <t:label t:for="password">The second label</t:label>:
  </td>
  <td>
    <input type="text" t:id="password" t:label="Password"
           t:type="PasswordField" t:validate="required"/>
  </td>
</tr>

Just one additional attribute for each component. Let's see how this works now. Run the application, leave both fields empty, and click on the Log In button. Here is what you should see:

Both fields, including their labels, are now clearly marked as being in error. We even have a graphical marker next to the problematic fields. However, one thing is missing: a clear explanation of what exactly went wrong. To display such a message, one more component needs to be added to the page. Modify the page template as shown here:

<t:form t:id="loginForm">
  <t:errors/>
  <table>

The Errors component is very simple, but one important thing to remember is that it should be placed inside the Form component, which in turn surrounds the validated components. Let's run the application again and try to submit an empty form. Now the result should look like this:

This kind of feedback doesn't leave any room for doubt, does it?

If you see that the error messages are pushed far to the left, it means that an error in the default.css file that comes with the Tapestry distribution still hasn't been fixed. To override the faulty style, define it in our application's styles.css file like this:

DIV.t-error LI {
  margin-left: 20px;
}

Do not forget to make the stylesheet available to the page.

I hope you will agree that the effort needed to get user input validated is close to zero. But let's see what Tapestry has done in response to it:

Every form component has a ValidationTracker object associated with it. It is provided automatically; we do not need to worry about it. Basically, the ValidationTracker is the place where any validation problems, if they happen, are recorded.

As soon as we use the t:validate attribute for a component in the form, Tapestry assigns one or more validators to that component; their number and type depend on the value of the t:validate attribute (more about this later).

As soon as a validator decides that the value entered into the associated component is not valid, it records an error in the ValidationTracker. Again, this happens automatically.

If there are any errors recorded in the ValidationTracker, Tapestry will redisplay the form, decorating the fields with erroneous input, and their labels, appropriately.
If there is an Errors component in the form, it will automatically display error messages for all the errors in the ValidationTracker. The error messages for standard validators are provided by Tapestry, while the name of the component mentioned in each message is taken from its label.

A lot of very useful functionality comes with the framework and works for us "out of the box", without any configuration or setup! Tapestry comes with a set of validators that should be sufficient for most needs. Let's have a more detailed look at how to use them.

Validators

The following validators come with the current distribution of Tapestry 5:

Required: checks that the value of the validated component is not null or an empty string.
MinLength: checks that the string (the value of the validated component) is not shorter than the specified length. You will see how to pass the length parameter to this validator shortly.
MaxLength: the same as above, but checks that the string is not too long.
Min: ensures that the numeric value of the validated component is not less than the specified value, passed to the validator as a parameter.
Max: as above, but ensures that the value does not exceed the specified limit.
Regexp: checks that the string value matches the specified pattern.

We can use several validators for one component. Let's see how all this works together. First of all, let's add another component to the Registration page template:

<tr>
  <td><t:label t:for="age"/>:</td>
  <td><input type="text" t:type="textfield" t:id="age"/></td>
</tr>

Also, add the corresponding property, age, of type double, to the Registration page class. It could well be an int, but I want to show that the Min and Max validators work with fractional numbers too. Besides, someone might decide to enter their age as 23.4567. That would be weird, but not against the law.

Finally, add an Errors component to the form on the Registration page, so that we can see error messages:

<t:form t:id="registrationForm">
  <t:errors/>
  <table>

Now we can test all the available validators on one page. Let's specify the validation rules first:

Both User Name and Password are required. They should also be no shorter than three characters and no longer than eight characters.
Age is required, and it should be no less than five (change this number if you've got a prodigy in your family) and no more than 120 (as a larger value would probably be a mistake).
Email address is not required, but if entered, it should match a common pattern.

Here are the changes to the Registration page template that implement the specified validation rules:

<td>
  <input type="text" t:type="textfield" t:id="userName"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="passwordfield" t:id="password"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="textfield" t:id="age"
         t:validate="required,min=5,max=120"/>
</td>
...
<input type="text" t:type="textfield" t:id="email" t:validate="regexp"/>

As you can see, it is very easy to pass a parameter to a validator, like min=5 or maxlength=8. But where do we specify a pattern for the Regexp validator? The answer is: in the message catalog. Let's add the following line to the app.properties file:

email-regexp=^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+.)+([a-zA-Z0-9]{2,4})+$

This will serve as the regular expression for all Regexp validators applied to components with the ID email throughout the application. Run the application, go to the Registration page, and try to submit the empty form.
Here is what you should see:

This looks all right, but the message for the age could be more sensible, something like "You are too young! You should be at least 5 years old." We'll deal with this later. For now, enter a very long user name, only two characters for the password, and an age that is above the upper limit, and see how the messages change:

Again, this looks good, except for the message about age. Next, enter some valid values for User Name, Password, and Age. Then click on the check box to subscribe to the newsletter. In the text box for email, enter some invalid value and click on Submit. Here is the result:

Yes! The validation worked properly, but the error message is absolutely unacceptable. Let's deal with this, but first make sure that any valid email address will pass the validation.

Providing Custom Error Messages

We can provide custom messages for validators in the application's (or page's) message catalog. For such messages we use keys that are made up of the validated component's ID, the name of the validator, and the "message" postfix. Here is an example of what we could add to the app.properties file to change the error messages for the Min and Max validators of the age component, as well as the message used for the email validation:

email-regexp-message=Email address is not valid.
age-min-message=You are too young! You should be at least 5 years old.
age-max-message=People do not live that long!

Better still, instead of hard-coding the required minimum age into the message, we could insert into the message the parameter that was passed to the Min validator (following the rules of java.text.Format), like this:

age-min-message=You are too young! You should be at least %s years old.

If you run the application now and submit an invalid value for age, the error message will be much better. You might want to make sure that the other error messages have changed too.

We can now successfully validate values entered into separate fields, but what if the validity of the input depends on how two or more different values relate to each other? For example, on the Registration page we want the two versions of the password to be the same, and if they are not, this should be treated as invalid input and reported appropriately. Before dealing with this problem, however, we need to look more thoroughly at the different events generated by the Form component.
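As a rough preview of that approach (a sketch only: the field and component names here are hypothetical, and the exact details vary between Tapestry 5.x releases; in 5.0 the form fires a validateForm event rather than validate), a cross-field check can be written as an event handler in the Registration page class:

import org.apache.tapestry5.annotations.InjectComponent;
import org.apache.tapestry5.annotations.Property;
import org.apache.tapestry5.corelib.components.Form;
import org.apache.tapestry5.corelib.components.PasswordField;

public class Registration {

    @Property
    private String password;    // bound to the first password field

    @Property
    private String password2;   // bound to the confirmation field (hypothetical name)

    @InjectComponent
    private Form registrationForm;

    @InjectComponent("password2")
    private PasswordField password2Field;  // used to attach the error to a specific field

    // Called by Tapestry while the form with t:id="registrationForm" validates.
    // In Tapestry 5.0 the method would be named onValidateFormFromRegistrationForm().
    void onValidateFromRegistrationForm() {
        if (password != null && !password.equals(password2)) {
            registrationForm.recordError(password2Field, "The two passwords do not match.");
        }
    }
}

Any error recorded this way ends up in the same ValidationTracker, so the Errors component and the field decorations behave exactly as they do for the built-in validators.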


Financial Management with Microsoft Dynamics AX 2012 R3

Packt
11 Feb 2015
4 min read
In this article by Mohamed Aamer, author of Microsoft Dynamics AX 2012 R3 Financial Management, we will see that the core foundation of Enterprise Resource Planning (ERP) is financial management; it is vital to understand the financial characteristics of Microsoft Dynamics AX 2012 R3 from a practical perspective, grounded in how the application actually works. This article covers the following topics:

Understanding financial management aspects in Microsoft Dynamics AX 2012 R3
Covering the business rationale, basic setups, and configuration
Real-life business requirements and their solutions
Implementation tips and tricks, along with the key points to consider during analysis, design, deployment, and operation

(For more resources related to this topic, see here.)

The Microsoft Dynamics AX 2012 R3 Financial Management book covers the main characteristics of the general ledger and its integration with the other subledgers (Accounts payable, Accounts receivable, fixed assets, cash and bank management, and inventory). It also covers the core features of main accounts, account categorization and its controls, the opening balance process and concept, and the closing procedure. It then discusses subledger functionality (Accounts payable, Accounts receivable, fixed assets, cash and bank management, cash flow management, and inventory) in more detail by walking through the master data, controls, and transactions and their effects on the general ledger.

It also explores financial reporting, which is one of the cornerstones of an implementation. The main principles of reporting are the reliability of business information and the ability to get the right information at the right time to the right person. Reports that analyze ERP data in an expressive way represent the output of the ERP implementation; they are considered the cream of the implementation, the next level of value that solution stakeholders should aim for. This ultimate outcome results from building all reports on a single point of information.

Planning reporting needs for ERP

The Microsoft Dynamics AX implementation team should challenge management's reporting needs in the analysis phase of the implementation, with a particular focus on exploring the data required to build the reports. These data requirements should then be cross-checked against the actual data entry activities that end users will perform, to ensure that business users will get vital information from the reports. The reporting levels are as follows:

Operational management
Middle management
Top management

Understanding the information technology value chain

The model of a management information system is most applicable to the Information Technology (IT) manager or Chief Information Officer (CIO) of a business. Business owners likely don't care as much about the specifics, as long as these aspects of the solution deliver the required results. The following are the basic layers of the value chain:

Database management
Business processes
Business intelligence
Frontend

Understanding Microsoft Dynamics AX information source blocks

This section explores the information sources that ultimately determine the strategic value of Business Intelligence (BI) reporting and analytics. These are divided into three blocks.
Detailed transactions block
Business Intelligence block
Executive decisions block

Discovering Microsoft Dynamics AX reporting

The reporting options offered by Microsoft Dynamics AX are:

Inquiry forms
SQL Server Reporting Services (SSRS) reports
The original transaction
The Original document function
Audit trail

Reporting currency

Companies report their transactions in a specific currency, known as the accounting currency or local currency. It is normal to post transactions in a different currency; such an amount is translated to the home currency using the current exchange rate.

Autoreports

The Autoreport wizard is a user-friendly tool. The end user can easily generate a report starting from any form in Microsoft Dynamics AX. This wizard helps the user create a report based on the information in the form and save the report.

Summary

In this article, we covered financial reporting, from planning to the consideration of reporting levels. We covered important points that affect reporting quality by considering the reporting value chain, which consists of infrastructure, database management, business processes, business intelligence, and the frontend. We also discussed the information source blocks, which consist of the detailed transactions block, the business intelligence block, and the executive decisions block. Finally, we looked at the reporting possibilities in Microsoft Dynamics AX, such as inquiry forms and SSRS reports, and the autoreport capabilities in Microsoft Dynamics AX 2012 R3.

Further resources on this subject:

Customization in Microsoft Dynamics CRM
Getting Started with Microsoft Dynamics CRM 2013 Marketing
SOP Module Setup in Microsoft Dynamics GP


Moving Your Application to Production

Packt
16 Dec 2013
9 min read
(For more resources related to this topic, see here.)

We will start by examining the Sencha Cmd compiler.

Compiling with Sencha Cmd

This section focuses on using Sencha Cmd to compile our Ext JS 4 application for deployment within a Web Archive (WAR) file. The goal of the compilation process is to create a single JavaScript file that contains all of the code needed for the application, including all the Ext JS 4 dependencies.

The index.html file that was created during the application skeleton generation is structured as follows:

<!DOCTYPE HTML>
<html>
<head>
    <meta charset="UTF-8">
    <title>TTT</title>
    <!-- <x-compile> -->
        <!-- <x-bootstrap> -->
            <link rel="stylesheet" href="bootstrap.css">
            <script src="ext/ext-dev.js"></script>
            <script src="bootstrap.js"></script>
        <!-- </x-bootstrap> -->
        <script src="app.js"></script>
    <!-- </x-compile> -->
</head>
<body></body>
</html>

The open and close tags of the x-compile directive enclose the part of the index.html file on which the Sencha Cmd compiler will operate. The only declarations that should be contained in this block are script tags. The compiler processes all of the scripts within the x-compile directive, searching for dependencies based on the Ext.define, requires, or uses directives.

An exception to this is the ext-dev.js file. This file is considered a "bootstrap" file for the framework and is not processed in the same way. The compiler ignores the files in the x-bootstrap block, and their declarations are removed from the final compiler-generated page.

The first step in the compilation process is to examine and parse all the JavaScript source code and analyze any dependencies. To do this, the compiler needs to identify all the source folders in the application. Our application has two source folders: the Ext JS 4 sources in webapp/ext/src and the 3T application sources in webapp/app. These folder locations are specified using the -sdk and -classpath arguments of the compile command:

sencha -sdk {path-to-sdk} compile -classpath={app-sources-folder} page -yui -in {index-page-to-compile} -out {output-file-location}

For our 3T application the compile command is as follows:

sencha -sdk ext compile -classpath=app page -yui -in index.html -out build/index.html

This command performs the following actions:

The Sencha Cmd compiler examines all the folders specified by the -classpath argument. The -sdk directory is automatically included for scanning.
The page command then includes all of the script tags in index.html that are contained in the x-compile block.
After identifying the content of the app directory and the index.html page, the compiler analyzes the JavaScript code and determines what is ultimately needed for inclusion in a single JavaScript file representing the application.
A modified version of the original index.html file is written to build/index.html.
All of the JavaScript files needed by the new index.html file are concatenated, compressed using the YUI Compressor, and written to the build/all-classes.js file.

The sencha compile command must be executed from within the webapp directory, which is the root of the application and the directory containing the index.html file. All the arguments supplied to the sencha compile command can then be relative to the webapp directory.

Open a command prompt (or a terminal window on a Mac) and navigate to the webapp directory of the 3T project.
Executing the sencha compile command as shown earlier in this section will result in the following output:

Opening the webapp/build folder in NetBeans should now show the two newly generated files: index.html and all-classes.js. The all-classes.js file contains all the required Ext JS 4 classes in addition to all the 3T application classes. Attempting to open this file in NetBeans will result in the warning "The file seems to be too large to open safely...", but you can open the file in a text editor to see the concatenated and minified content.

Opening the build/index.html page in NetBeans will display the following screenshot:

You can now open the build/index.html file in the browser after running the application, but the result may surprise you. The layout that is presented will depend on the browser, but regardless, you will see that the CSS styling is missing. The CSS files required by our application need to be moved outside the <!-- <x-compile> --> directive. But where are the styles coming from? It is now time to delve briefly into Ext JS 4 themes and the bootstrap.css file.

Ext JS 4 theming

Ext JS 4 themes leverage Syntactically Awesome StyleSheets (SASS) and Compass (http://compass-style.org/) to enable the use of variables and mixins in stylesheets. Almost all of the styles for Ext JS 4 components can be customized, including colors, fonts, borders, and backgrounds, by simply changing the SASS variables. SASS is an extension of CSS that allows you to keep large stylesheets well organized; a very good overview and reference can be found at http://sass-lang.com/documentation/file.SASS_REFERENCE.html.

Theming an Ext JS 4 application using Compass and SASS is beyond the scope of this book. Sencha Cmd allows easy integration with these technologies to build SASS projects; however, the SASS language and syntax is a steep learning curve in its own right. Ext JS 4 theming is very powerful, and minor changes to the existing themes can quickly change the appearance of your application. You can find out more about Ext JS 4 theming at http://docs.sencha.com/extjs/4.2.2/#!/guide/theming.

The bootstrap.css file was created with the default theme definition during the generation of the application skeleton. The content of the bootstrap.css file is as follows:

@import 'ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css';

This file imports the ext-theme-classic-all.css stylesheet, which is the default "classic" Ext JS theme. All of the available themes can be found in the ext/packages directory of the Ext JS 4 SDK. Changing to a different theme is as simple as changing the bootstrap.css import. Switching to the neptune theme would require the following bootstrap.css definition:

@import 'ext/packages/ext-theme-neptune/build/resources/ext-theme-neptune-all.css';

This modification will change the appearance of the application to the Ext JS "neptune" theme, as shown in the following screenshot:

We will change the bootstrap.css file definition to use the gray theme:

@import 'ext/packages/ext-theme-gray/build/resources/ext-theme-gray-all.css';

This will result in the following appearance:

You may experiment with different themes, but note that not all of the themes may be as complete as the classic theme; minor changes may be required to fully utilize the styling for some components. We will keep the gray theme for our index.html page.
This will allow us to differentiate the (original) index.html page from the new ones that will be created in the following section using the classic theme.

Compiling for production use

Until now we have only worked with the Sencha Cmd-generated index.html file. We will now create a new index-dev.html file for our development environment. The development file will be a copy of the index.html file without the bootstrap.css file. We will reference the default classic theme in the index-dev.html file as follows:

<!DOCTYPE HTML>
<html>
<head>
    <meta charset="UTF-8">
    <title>TTT</title>
    <link rel="stylesheet" href="ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css">
    <link rel="stylesheet" href="resources/styles.css">
    <!-- <x-compile> -->
        <!-- <x-bootstrap> -->
            <script src="ext/ext-dev.js"></script>
            <script src="bootstrap.js"></script>
        <!-- </x-bootstrap> -->
        <script src="app.js"></script>
    <!-- </x-compile> -->
</head>
<body></body>
</html>

Note that we have moved the stylesheet definitions out of the <!-- <x-compile> --> directive. If you are using the downloaded source code for the book, you will have the resources/styles.css file and the resources directory structure available. The stylesheet and associated images in the resources directory contain the 3T logos and icons. We recommend you download the full source code now for completeness.

We can now modify the Sencha Cmd compile command to use the index-dev.html file and output the generated file to index-prod.html in the webapp directory:

sencha -sdk ext compile -classpath=app page -yui -in index-dev.html -out index-prod.html

This command will generate the index-prod.html file and the all-classes.js file in the webapp directory, as shown in the following screenshot:

The index-prod.html file references the stylesheets directly and uses the single compiled and minified all-classes.js file. You can now run the application and browse the index-prod.html file, as shown in the following screenshot:

You should notice a significant increase in the speed with which the logon window is displayed, as all the JavaScript classes are loaded from the single all-classes.js file. The index-prod.html file will be used by developers to test the compiled all-classes.js file.

Accessing the individual pages will now allow us to differentiate between the environments (in the book, each page description is accompanied by a screenshot of the logon window as displayed in the browser):

The index.html page was generated by Sencha Cmd and has been configured to use the gray theme in bootstrap.css. This page is no longer needed for development; use index-dev.html instead. You can access this page at http://localhost:8080/index.html

The index-dev.html page uses the classic theme stylesheet included outside the <!-- <x-compile> --> directive. Use this file for application development. Ext JS 4 will dynamically load source files as required. You can access this page at http://localhost:8080/index-dev.html

The index-prod.html page is dynamically generated by the Sencha Cmd compile command. It uses the all-in-one compiled all-classes.js JavaScript file with the classic theme stylesheet. You can access this page at http://localhost:8080/index-prod.html


Configuring Distributed Rails Applications with Chef: Part 1

Rahmal Conda
31 Oct 2014
4 min read
Since the advent of Rails (and, by extension, Ruby's rise with it), in the period between 2005 and 2010, Rails went from a niche web application framework to the center of a robust web application platform. To do this it needed more than Ruby and a few complementary gems. Anyone who has ever tried to deploy a Rails application into a production environment knows that Rails doesn't run in a vacuum. Rails still needs a web server in front of it to help manage requests, like Apache or Nginx. Oops, you'll need Unicorn or Passenger too. Almost all Rails apps are backed by some sort of data persistence layer, usually a relational database, and more and more it's a NoSQL DB like MongoDB. Depending on the application, you're probably going to deploy a caching strategy at some point: Memcached, Redis, the list goes on. What about background jobs? You'll need another server instance for that too, and not just one either. High-availability systems need to be redundant. And if you're lucky enough to get a lot of traffic, you'll need a way to scale all of this.

Why Chef?

Chances are that you're managing all of this manually. Don't feel bad, everyone starts out that way. But as you grow, how do you manage all of this without going insane? Most Rails developers start off with Capistrano, which is a great choice. Capistrano is a remote server automation tool, used most often as a deployment tool for Rails. For the most part it's a great solution for managing the multiple servers that make up your Rails stack. It's only when your architecture reaches a certain size that I'd recommend choosing Chef over Capistrano. But really, there's no reason to choose one over the other, since they actually work pretty well together and they are both similar when it comes to deployment. Where Chef excels, however, is when you need to provision multiple servers with different roles and changing software stacks. This is what I'm going to focus on in this post. But let's introduce Chef first.

What is Chef anyway?

Basically, Chef is a Ruby-based configuration management engine. It is a software configuration management tool, used for provisioning servers for certain roles within a platform stack and deploying applications to those servers. It is used to automate server configuration and integration into your infrastructure. You define your infrastructure in configuration files written in Chef's Ruby DSL, and Chef takes care of setting up individual machines and linking them together.

Chef server

You set up one of your server instances (virtual or otherwise) as the server, and all your other instances are clients that communicate with the Chef "server" via REST over HTTPS. The server is an application that stores cookbooks for your nodes.

Recipes and cookbooks

Recipes are files that contain sets of instructions written in Chef's Ruby DSL. These instructions perform some kind of procedure, usually installing software and configuring some service. Recipes are bundled together, along with configuration file templates, resources, and helper scripts, as cookbooks. Cookbooks generally correspond to a specific server configuration. For instance, a Postgres cookbook might contain a recipe for the Postgres server, a recipe for the Postgres client, maybe PostGIS, and some configuration files describing how the DB instance should be provisioned.

Chef Solo

For stacks that don't necessarily need a full Chef server setup, but still use cookbooks to set up Rails and DB servers, there's Chef Solo.
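Recipes look the same whether they are served from a Chef server or run with Chef Solo (described next). As a rough sketch of the Postgres example above (the resource names, paths, and template file are hypothetical, not taken from any particular community cookbook), a recipe is plain Ruby built from Chef resources:

# cookbooks/postgresql/recipes/server.rb
# Install the server package and keep the service enabled and running.
package "postgresql-server"

service "postgresql" do
  action [:enable, :start]
end

# Render postgresql.conf from an ERB template shipped in the cookbook's
# templates/ directory, restarting the service whenever the file changes.
template "/var/lib/pgsql/data/postgresql.conf" do
  source   "postgresql.conf.erb"
  owner    "postgres"
  group    "postgres"
  mode     "0600"
  notifies :restart, "service[postgresql]"
end

The cookbook would also carry the postgresql.conf.erb template and any attributes it needs; a node's run list (or a Chef Solo run) then simply includes recipe[postgresql::server].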
Chef Solo is a local standalone Chef application that can be used to remotely deploy servers and applications.

Wait, where is the code?

In Part 2 of this post I'm going to walk you through setting up a Rails application with Chef Solo, and then I'll expand to show a full Chef server configuration management setup. While Chef can be used for many different application stacks, I'm going to focus on Rails configuration and deployment, provisioning and deploying the entire stack. See you next time!

About the Author

Rahmal Conda is a software development professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company, and after that he knew it was the life he had been looking for, so he moved his family out west. Since then he's made a name for himself in the social space at some high-profile Silicon Valley startups. Right now he's one of the co-founders and the Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.


How to Build a Koa Web Application - Part 2

Christoffer Hallas
08 Feb 2015
5 min read
In Part 1 of this series, we got everything in place for our Koa app using Jade and Mongel. In this post, we will cover Jade templates and how to implement the listing and viewing of pages. Please note that this series requires Node.js version 0.11+.

Jade templates

Rendering HTML is always an important part of any web application. Luckily, when using Node.js there are many great choices, and for this article we've chosen Jade. Keep in mind, though, that we will only touch on a tiny fraction of Jade's functionality. Let's create our first Jade template. Create a file called create.jade and put in the following:

doctype html
html(lang='en')
  head
    title Create Page
  body
    h1 Create Page
    form(method='POST', action='/create')
      input(type='text', name='title', placeholder='Title')
      input(type='text', name='contents', placeholder='Contents')
      input(type='submit')

For all the Jade questions you have that we won't answer in this series, I refer you to the excellent official Jade website at http://jade-lang.com. If you add the statement app.listen(3000); to the end of index.js, you should be able to run the program from your terminal using the following command and visit http://localhost:3000 in your browser:

$ node --harmony index.js

The --harmony flag just tells the node program that we need support for generators in our program.

Listing and viewing pages

Now that we can create a page in our MongoDB database, it is time to actually list and view these pages. For this purpose we need to add another middleware to our index.js file, after the first middleware:

app.use(function* () {
  if (this.method != 'GET') {
    this.status = 405;
    this.body = 'Method Not Allowed';
    return
  }
  …
});

As you can probably tell, this new middleware is very similar to the first one we added, which handled the creation of pages. First we make sure that the method of the request is GET, and if not, we respond appropriately and return. The middleware then continues as follows:

var params = this.path.split('/').slice(1);
var id = params[0];

if (id.length == 0) {
  var pages = yield Page.find();
  var html = jade.renderFile('list.jade', { pages: pages });
  this.body = html;
  return
}

Next, we inspect the path attribute of the Koa context, looking for an ID that represents the page in the database. Remember how we redirected using the ID in the previous middleware. We inspect the path by splitting it into an array of strings separated by the forward slashes of the URL; this way the path /1234 becomes an array containing '' and '1234'. Because the path starts with a forward slash, the first item in the array will always be the empty string, so we discard it by default. Then we check the length of the ID parameter; if it is zero, we know that there is in fact no ID in the path, so we simply look up all the pages in the database and render our list.jade template with those pages made available to the template as the variable pages. Making data available in templates is also known as providing locals to the template.
list.jade:

doctype html
html(lang="en")
  head
    title Your Web Application
  body
    h1 Your Web Application
    ul
      - each page in pages
        li
          a(href='/#{page._id}')= page.title

But if the length of id is not zero, we assume that it is an ID, try to load that specific page from the database instead of all the pages, and render our view.jade template with it:

var page = yield Page.findById(id);
var html = jade.renderFile('view.jade', page);
this.body = html;

view.jade:

doctype html
html(lang="en")
  head
    title= title
  body
    h1= title
    p= contents

That's it

You should now be able to run the app as previously described, create a page, list all of your pages, and view them. If you want to, you can continue and build a simple CMS system. Koa is very simple to use and doesn't enforce a lot of functionality on you, allowing you to pick and choose the libraries that you need and want to use. There are many possibilities, and that is one of Koa's biggest strengths.

Find even more Node.js content on our Node.js page. Featuring our latest titles and most popular tutorials, it's the perfect place to learn more about Node.js.

About the author

Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea), he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and on Twitter as @hamderhallas.


Integrating Spring Framework with Hibernate ORM Framework: Part 1

Packt
31 Dec 2009
6 min read
Spring is a general-purpose framework that plays different roles in many areas of application architecture. One of these areas is persistence. Spring does not provide its own persistence framework; instead, it provides an abstraction layer over JDBC and over a variety of O/R mapping frameworks, such as iBATIS SQL Maps, Hibernate, JDO, Apache OJB, and Oracle TopLink. This abstraction allows a consistent, manageable data-access implementation.

Spring's abstraction layer decouples the application from the connection factory, the transaction API, and the exception hierarchies used by the underlying persistence technology. Application code always uses the Spring API to work with connection factories, utilizes Spring strategies for transaction management, and relies on Spring's generic exception hierarchy to handle underlying exceptions.

Spring sits between the application classes and the O/R mapping tool, undertakes transactions, and manages connection objects. It translates the underlying persistence exceptions thrown by Hibernate into meaningful, unchecked exceptions of type DataAccessException. Moreover, Spring provides IoC and AOP, which can be used in the persistence layer. Spring undertakes Hibernate's transactions and provides a more powerful, comprehensive approach to transaction management.

The Data Access Object pattern

Although you can obtain a Session object and connect to Hibernate anywhere in the application, it is recommended that all interactions with Hibernate be done only through distinct classes. For this, there is a JEE design pattern called the DAO pattern. According to the DAO pattern, all persistence operations should be performed via specific classes, technically called DAO classes. These classes are used exclusively for communicating with the data tier. The purpose of this pattern is to separate persistence-related code from the application's business logic, which makes for more manageable and maintainable code and lets you change the persistence strategy flexibly, without changing the business rules or workflow logic.

The DAO pattern states that we should define a DAO interface corresponding to each DAO class. This DAO interface outlines the structure of a DAO class, defines all of the persistence operations that the business layer needs, and (in Spring-based applications) allows us to apply IoC to decouple the business layer from the DAO class.

The Service Facade pattern

When implementing the data access tier, the Service Facade pattern is always used in addition to the DAO pattern. This pattern calls for an intermediate object, called a service object, between all business-tier objects and DAO objects. The service object assembles the DAO methods to be managed as a unit of work. Note that only one service class is created for all the DAOs that are implemented in each use case. The service class uses instances of the DAO interfaces to interact with them. These instances are instantiated from the concrete DAO classes by the IoC container at runtime. Therefore, the service object is unaware of the actual DAO implementation details.

Regardless of the persistence strategy your application uses (even if it uses direct JDBC), applying the DAO and Service Facade patterns to decouple the application tiers is highly recommended.

Data tier implementation with Hibernate

Let's now see how the discussed patterns are applied to an application that uses Hibernate directly.
The following code shows a sample DAO interface:

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

public interface StudentDao {
    public Student getStudent(long id);
    public Collection getAllStudents();
    public Collection getGraduatedStudents();
    public Collection findStudents(String lastName);
    public void saveStudent(Student std);
    public void removeStudent(Student std);
}

The following code shows a DAO class that implements this DAO interface:

package com.packtpub.springhibernate.ch13;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.HibernateException;
import org.hibernate.Query;

import java.util.Collection;

public class HibernateStudentDao implements StudentDao {

    SessionFactory sessionFactory;

    public Student getStudent(long id) {
        Student student = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            student = (Student) session.get(Student.class, new Long(id));
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return student;
    }

    public Collection getAllStudents() {
        Collection allStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std order by std.lastName, std.firstName");
            allStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return allStudents;
    }

    public Collection getGraduatedStudents() {
        Collection graduatedStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.status=1");
            graduatedStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return graduatedStudents;
    }

    public Collection findStudents(String lastName) {
        Collection students = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.lastName like ?");
            query.setString(0, lastName + "%");
            students = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return students;
    }

    public void saveStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.saveOrUpdate(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void removeStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.delete(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }
}

As you can see, all of the implemented methods follow the same routine: each obtains a Session object, starts a Transaction, performs a persistence operation, commits the transaction, rolls the transaction back if an exception occurs, and finally closes the Session object.
Each method contains a great deal of boilerplate code that is very similar to the other methods. Although applying the DAO pattern to the persistence code leads to more manageable and maintainable code, the DAO classes still include much boilerplate: each DAO method must obtain a Session instance, start a transaction, perform the persistence operation, and commit the transaction. Additionally, each DAO method must include its own duplicated exception-handling implementation. These are exactly the problems that motivate us to use Spring with Hibernate.

Template Pattern: to clean up the code and make it more manageable, Spring uses a pattern called the Template Pattern. In this pattern, a template object wraps all of the repetitive boilerplate code, and the persistence calls are delegated to it as part of the template's functionality. In the Hibernate case, HibernateTemplate extracts all of the boilerplate code, such as obtaining a Session, performing the transaction, and handling exceptions.
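To make that concrete, here is a rough sketch (not the book's own listing) of how the same StudentDao could be implemented on top of HibernateTemplate by extending Spring's HibernateDaoSupport. The SessionFactory is injected through the inherited setSessionFactory method, and transaction demarcation would typically be configured declaratively in Spring rather than coded inside each method:

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

// Sketch: each method shrinks to the persistence call itself; Session handling,
// exception translation to DataAccessException, and (with a configured
// transaction manager) transactions are taken care of by Spring.
public class TemplateStudentDao extends HibernateDaoSupport implements StudentDao {

    public Student getStudent(long id) {
        return (Student) getHibernateTemplate().get(Student.class, new Long(id));
    }

    public Collection getAllStudents() {
        return getHibernateTemplate().find(
            "from Student std order by std.lastName, std.firstName");
    }

    public Collection getGraduatedStudents() {
        return getHibernateTemplate().find("from Student std where std.status=1");
    }

    public Collection findStudents(String lastName) {
        return getHibernateTemplate().find(
            "from Student std where std.lastName like ?", lastName + "%");
    }

    public void saveStudent(Student std) {
        getHibernateTemplate().saveOrUpdate(std);
    }

    public void removeStudent(Student std) {
        getHibernateTemplate().delete(std);
    }
}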

Working with Master Pages in ASP.NET MVC 2

Packt
17 Jan 2011
6 min read
ASP.NET MVC 2 Cookbook

A fast-paced cookbook with recipes covering all that you wanted to know about developing with ASP.NET MVC
Solutions to the most common problems encountered with ASP.NET MVC development
Build and maintain large applications with ease using ASP.NET MVC
Recipes to enhance the look, feel, and user experience of your web applications
Expand your MVC toolbox with an introduction to lots of open source tools
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

How to create a master page

In this recipe, we will take a look at how to create a master page and associate it with our view. Part of creating a master page is defining placeholders for use in the view. We will then see how to utilize the content placeholders that we defined in the master page.

How to do it...

Start by creating a new ASP.NET MVC application. Then add a new master page to your solution called Custom.Master and place it in the Views/Shared directory. Notice that there is a placeholder already present in the middle of the page. Let's wrap that placeholder with a table, putting a column to the left and the right of the existing placeholder. Then we will rename the placeholder to MainContent.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
    </td>
  </tr>
</table>

Next, we will copy the placeholder into the first and third columns.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="MainContent" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder2" runat="server"></asp:ContentPlaceHolder>
    </td>
  </tr>
</table>

Next, we need to add a new action to the HomeController.cs file, from which we will create a new view. Do this by opening the HomeController.cs file and adding a new action named CustomMasterDemo.

Controllers/HomeController.cs:

public ActionResult CustomMasterDemo()
{
    return View();
}

Then right-click on CustomMasterDemo, choose Add View, and select the new Custom.Master page that we created. Next, change the ContentPlaceHolderID box to show the center placeholder name, MainContent. Then hit Add and you should see a new view with four placeholders.

Views/Home/CustomMasterDemo.aspx:

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
  <h2>Custom Master Demo</h2>
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server">
  <meta name="description" content="Here are some keywords for our page description.">
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
  <div style="width:200px;height:200px;border:1px solid #ff0000;">
    <ul>
      <li>Home</li>
      <li>Contact Us</li>
      <li>About Us</li>
    </ul>
  </div>
</asp:Content>
<asp:Content ID="Content4" ContentPlaceHolderID="ContentPlaceHolder2" runat="server">
  <div style="width:200px;height:200px;border:1px solid #000000;">
    <b>News</b><br/>
    Here is a blurb of text on the right!
  </div>
</asp:Content>

You should now see a page similar to this:

How it works...

This particular feature is a server-side carry-over from Web Forms. It works just as it always has.
Before being sent down to the client, the view is merged into the master file and processed according to the matching placeholder IDs.

Determining the master page in the ActionResult

In the previous recipe, we took a look at how to build a master page. In this recipe, we are going to take a look at how to control programmatically which master page is used. There are all sorts of reasons for using different master pages. For example, you might want to use different master pages based on the time of day, on whether a user is logged in or not, or for different areas of your site (blog, shopping, forum, and so on).

How to do it...

We will get started by creating a new MVC web application. Next, we need to create a second master page. We can do this quickly by making a copy of the default master page that is provided; name it Site2.Master. We then need to make sure we can tell these two master pages apart. The easiest way to do this is to change the contents of the H1 tag to say Master 1 and Master 2 in the respective master pages.

Now we can take a look at the HomeController. We will check whether we are in an even or odd second and, based on that, return an even or odd master page. We do this by specifying the name of the master page that we want to use when we return the view.

Controllers/HomeController.cs:

public ActionResult Index()
{
    ViewData["Message"] = "Welcome to ASP.NET MVC!";

    string masterName = "";
    if (DateTime.Now.Second % 2 == 0)
        masterName = "Site2";
    else
        masterName = "Site";

    return View("Index", masterName);
}

Now you can run the application. Refreshing the home page should alternate between the two master pages now and then. (Remember that this is based on the current second and is just a purely alternating page scheme.)

How it works...

This method of controlling which master page is used by the view is built into the MVC framework and is the easiest way of performing this type of control. However, having to dictate this type of logic in every single action would create quite a bit of fluff code in our controller. This option might be appropriate for certain needs, though!
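If the same rule needs to apply across many actions, one option (a sketch, not part of the original recipe) is to move the decision into an action filter, so individual actions stay clean. The attribute below can be applied to any controller or action; the names are illustrative:

using System;
using System.Web.Mvc;

// Picks the master page after the action has run but before the view executes.
public class AlternatingMasterPageAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Only ViewResults carry a MasterName; redirects, JSON, etc. are left alone.
        var viewResult = filterContext.Result as ViewResult;
        if (viewResult != null)
        {
            viewResult.MasterName = DateTime.Now.Second % 2 == 0 ? "Site2" : "Site";
        }

        base.OnActionExecuted(filterContext);
    }
}

[AlternatingMasterPage]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewData["Message"] = "Welcome to ASP.NET MVC!";
        return View();  // no master page logic needed here any more
    }
}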


Securing Your trixbox Server

Packt
12 Oct 2009
6 min read
Start with a good firewall

Never have your trixbox system exposed completely on the open Internet; always make sure it is behind a good firewall. Many people think that because trixbox is running on Linux it is totally secure, but Linux, like anything else, has its share of vulnerabilities, and if things are not configured properly it is fairly simple for hackers to get in. There are really good open-source firewalls available, such as pfSense, Vyatta, and m0n0wall. Any access to system services, such as HTTP or SSH, should only be done via a VPN or a pseudo-VPN such as Hamachi. Well-designed security starts with being exposed to the outside world as little as possible. If we have remote extensions that cannot use VPNs, then we will be forced to leave SIP ports open, and the next step will be to secure those as well.

Stopping unneeded services

Since trixbox CE is basically a stock installation of CentOS Linux, very little hardening has been done to secure the system. This lack of hardening is intentional, as the first level of defence should always be a good firewall. Since there will be people who still insist on putting the system in a data center with no firewall, some care needs to be taken to ensure that the system is as secure as possible. The first step is to disable any running services that could be potential security vulnerabilities. We can see the list of services with the chkconfig --list command:

[trixbox1.localdomain rules]# chkconfig --list
anacron        0:off 1:off 2:on  3:on  4:on  5:on  6:off
asterisk       0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-daemon   0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-dnsconfd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
bgpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
capi           0:off 1:off 2:off 3:off 4:off 5:off 6:off
crond          0:off 1:off 2:on  3:on  4:on  5:on  6:off
dc_client      0:off 1:off 2:off 3:off 4:off 5:off 6:off
dc_server      0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcrelay       0:off 1:off 2:off 3:off 4:off 5:off 6:off
ez-ipupdate    0:off 1:off 2:off 3:off 4:off 5:off 6:off
haldaemon      0:off 1:off 2:off 3:on  4:on  5:on  6:off
httpd          0:off 1:off 2:off 3:on  4:on  5:on  6:off
ip6tables      0:off 1:off 2:off 3:off 4:off 5:off 6:off
iptables       0:off 1:off 2:off 3:off 4:off 5:off 6:off
isdn           0:off 1:off 2:off 3:off 4:off 5:off 6:off
kudzu          0:off 1:off 2:off 3:on  4:on  5:on  6:off
lm_sensors     0:off 1:off 2:on  3:on  4:on  5:on  6:off
lvm2-monitor   0:off 1:on  2:on  3:on  4:on  5:on  6:off
mDNSResponder  0:off 1:off 2:off 3:on  4:on  5:on  6:off
mcstrans       0:off 1:off 2:off 3:off 4:off 5:off 6:off
mdmonitor      0:off 1:off 2:on  3:on  4:on  5:on  6:off
mdmpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
memcached      0:off 1:off 2:on  3:on  4:on  5:on  6:off
messagebus     0:off 1:off 2:off 3:on  4:on  5:on  6:off
multipathd     0:off 1:off 2:off 3:off 4:off 5:off 6:off
mysqld         0:off 1:off 2:off 3:on  4:on  5:on  6:off
named          0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole     0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs          0:off 1:off 2:off 3:on  4:on  5:on  6:off
netplugd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
network        0:off 1:off 2:on  3:on  4:on  5:on  6:off
nfs            0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock        0:off 1:off 2:off 3:on  4:on  5:on  6:off
ntpd           0:off 1:off 2:off 3:on  4:on  5:on  6:off
ospf6d         0:off 1:off 2:off 3:off 4:off 5:off 6:off
ospfd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
portmap        0:off 1:off 2:off 3:on  4:on  5:on  6:off
postfix        0:off 1:off 2:on  3:on  4:on  5:on  6:off
rdisc          0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond    0:off 1:off 2:on  3:on  4:on  5:on  6:off
ripd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
ripngd         0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcgssd        0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcidmapd      0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcsvcgssd     0:off 1:off 2:off 3:off 4:off 5:off 6:off
saslauthd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmptrapd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd           0:off 1:off 2:on  3:on  4:on  5:on  6:off
syslog         0:off 1:off 2:on  3:on  4:on  5:on  6:off
vsftpd         0:off 1:off 2:off 3:off 4:off 5:off 6:off
xinetd         0:off 1:off 2:off 3:on  4:on  5:on  6:off
zaptel         0:off 1:off 2:on  3:on  4:on  5:on  6:off
zebra          0:off 1:off 2:off 3:off 4:off 5:off 6:off

The entries marked "on" are services that are started automatically at system startup. The following services are required by trixbox CE and should not be disabled: anacron, crond, haldaemon, httpd, kudzu, lm_sensors, lvm2-monitor, mDNSResponder, mdmonitor, memcached, messagebus, mysqld, network, ntpd, postfix, sshd, syslog, xinetd, and zaptel.

To disable a service, we use the command chkconfig <servicename> off. We can now turn off some of the services that are not needed:

chkconfig ircd off
chkconfig netfs off
chkconfig nfslock off
chkconfig openibd off
chkconfig portmap off
chkconfig restorecond off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig vsftpd off

We can also stop the services immediately without having to reboot:

service ircd stop
service netfs stop
service nfslock stop
service openibd stop
service portmap stop
service restorecond stop
service rpcgssd stop
service rpcidmapd stop
service vsftpd stop

Securing SSH

A very common misconception is that by using SSH to access your system, you are safe from outside attacks. The security of SSH access is only as good as the measures you have used to secure it. Far too often, we see systems that have been hacked because their root password was very simple to guess (things like password or trixbox are not safe passwords). Any dictionary word is not safe at all, and substituting numbers for letters is very poor practice as well. So, as long as SSH is exposed to the outside, it is vulnerable. The best thing to do, if you absolutely have to have SSH running on the open Internet, is to change the port number used to access SSH. This section details the best methods of securing your SSH connections.

Create a remote login account

First, we should create a user on the system and only allow SSH connections from it. The username should be something that only you know and that is not easily guessed. Here, we will create a user called trixuser and assign a password to it. The password should contain letters, numbers, and symbols, and should not be based on a dictionary word. Try to string it into a sentence, making sure to use the letters, numbers, and symbols. Spaces in passwords work well too, and are hard to include in scripts that might try to break into your server. A nice and simple tool for creating hard-to-guess passwords can be found at http://www.pctools.com/guides/password/.

[trixbox1.localdomain init.d]# useradd trixuser
[trixbox1.localdomain init.d]# passwd trixuser

Now, ensure that the new account works by using SSH to log in to the trixbox CE server with this new account. If it does not let you in, make sure the password is correct or reset it. If it works, continue on. Allowing only one account access to the system over SSH is a great way to lock out most brute-force attacks. To do this, we need to edit the file /etc/ssh/sshd_config and add the following to it:
AllowUsers trixuser

The PermitRootLogin setting can also be edited so that root can't log in over SSH. Remove the # from in front of the setting and change the yes to no:

PermitRootLogin no
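Putting these recommendations together, the relevant part of /etc/ssh/sshd_config might end up looking like the following sketch (the port number is only an example; pick any unused port, update your firewall rules to match, and keep your current session open until you have verified that you can still log in):

# /etc/ssh/sshd_config (excerpt)

# Example non-standard port; port 22 stays closed at the firewall
Port 2222

# Root must log in as trixuser first and then use su
PermitRootLogin no

# Only this account may connect over SSH
AllowUsers trixuser

After saving the file, restart the SSH daemon so the changes take effect:

[trixbox1.localdomain init.d]# service sshd restart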


Facelets Components in JSF 1.2

Packt
30 Nov 2009
12 min read
One of the more advanced features of the Facelets framework is the ability to define complex templates containing dynamic nested content. What is a template?The Merriam-Webster dictionary defines the word "template" as "a gauge, pattern, or mold (as a thin plate or board) used as a guide to the form of a piece being made" and as "something that establishes or serves as a pattern." In the context of user interface design for the Web, a template can be thought of as an abstraction of a set of pages in the web application.A template does not define content, but rather it defines placeholders for content, and provides the layout, orientation, flow, structure, and logical organization of the elements on the page. We can also think of templates as documents with "blanks" that will be filled in with real data and user interface controls at request time. One of the benefits of templating is the separation of content from presentation, making the maintenance of the views in our web application much easier. The <ui:insert> tag has a name attribute that is used to specify a dynamic content region that will be inserted by the template client. When Facelets renders a UI composition template, it attempts to substitute any <ui:insert> tags in the Facelets template document with corresponding <ui:define> tags from the Facelets template client document. Conceptually, the Facelets composition template transformation process can be visualized as follows: In this scenario, the browser requests a Facelets template client document in our JSF application. This document contains two <ui:define> tags that specify named content elements and references a Facelets template document using the <ui:composition> tag's template attribute. The Facelets template document contains two <ui:insert> tags that have the same names as the <ui:define> tags in the client document, and three <ui:include> tags for the header, footer, and navigation menu. This is a good example of the excellent support that Facelets provides for the Composite View design pattern. Facelets transforms the template client document by merging any content it defines using <ui:define> tags with the content insertion points specified in the Facelets template document using the <ui:insert> tag. The result of merging the Facelets template client document with the Facelets template document is rendered in the browser as a composite view. While this concept may seem a bit complicated at first, it is actually a powerful feature of the Facelets view defi nition framework that can greatly simplify user interface templating in a web application. In fact, the Facelets composition template document can itself be a template client by referencing another composition template. In this way, a complex hierarchy of templates can be used to construct a flexible, multi-layered presentation tier for a JSF application. Without the Facelets templating system, we would have to copy and paste view elements such as headers, footers, and menus from one page to the next to achieve a consistent look and feel across our web application. Facelets templating enables us to define our look and feel in one document and to reuse it across multiple pages. Therefore, if we decide to change the look and feel, we only have to update one document and the change is immediately propagated to all the views of the JSF application. Let's look at some examples of how to use the Facelets templating feature. A simple Facelets template The following is an example of a simple Facelets template. 
It simply renders a message within an HTML <h2> element. Facelets will replace the "unnamed" <ui:insert> tag (without the name attribute) in the template document with the content of the <ui:composition> tag from the template client document.

template01.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Facelets template example</title>
<link rel="stylesheet" type="text/css" href="/css/style.css" />
</head>
<body>
<h2><ui:insert /></h2>
</body>
</html>

A simple Facelets template client

Let's look at a simple example of Facelets templating. The following page is a Facelets template client document. (Remember: you can identify a Facelets template client by looking for the existence of the template attribute on the <ui:composition> tag.) The <ui:composition> tag simply contains the text Hello World.

templateClient01.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:composition example</title>
</head>
<body>
<ui:composition template="/WEB-INF/templates/template01.jsf">
Hello World
</ui:composition>
<ui:debug />
</body>
</html>

The following screenshot displays the result of the Facelets UI composition template transformation when the browser requests templateClient01.jsf.

Another simple Facelets template client

The following Facelets template client example demonstrates how a template can be reused across multiple pages in the JSF application:

templateClient01a.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:composition example</title>
</head>
<body>
<ui:composition template="/WEB-INF/templates/template01.jsf">
How are you today?
</ui:composition>
<ui:debug />
</body>
</html>

The following screenshot displays the result of the Facelets UI composition template transformation when the browser requests templateClient01a.jsf:

A more complex Facelets template

The Facelets template in the previous example is quite simple and does not demonstrate some of the more advanced capabilities of Facelets templating. In particular, the template in the previous example only has a single <ui:insert> tag, with no name attribute specified. The behavior of the unnamed <ui:insert> tag is to include any content in the referencing template client page. In more complex templates, multiple <ui:insert> tags can be used to enable template client documents to define several custom content elements that will be inserted throughout the template. The following Facelets template document declares three named <ui:insert> elements. Notice carefully where these tags are located.
template02.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title><ui:insert name="title" /></title>
<link rel="stylesheet" type="text/css" href="/css/style.css" />
</head>
<body>
<ui:include src="/WEB-INF/includes/header.jsf" />
<h2><ui:insert name="header" /></h2>
<ui:insert name="content" />
<ui:include src="/WEB-INF/includes/footer.jsf" />
</body>
</html>

In the following example, the template client document defines three content elements named title, header, and content using the <ui:define> tag. Their position in the client document is not important, because the template document determines where this content will be positioned.

templateClient02.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:composition example</title>
</head>
<body>
<ui:composition template="/WEB-INF/templates/template02.jsf">
<ui:define name="title">Facelet template example</ui:define>
<ui:define name="header">Hello World</ui:define>
<ui:define name="content">Page content goes here.</ui:define>
</ui:composition>
<ui:debug />
</body>
</html>

The following screenshot displays the result of a more complex Facelets UI composition template transformation when the browser requests the page named templateClient02.jsf. The next example demonstrates reusing a more advanced Facelets UI composition template. At this stage, we should have a good understanding of the basic concepts of Facelets templating and reuse.

templateClient02a.jsf

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:composition example</title>
</head>
<body>
<ui:composition template="/WEB-INF/templates/template02.jsf">
<ui:define name="title">Facelet template example</ui:define>
<ui:define name="header">Thanks for visiting!</ui:define>
<ui:define name="content">We hope you enjoyed our site.</ui:define>
</ui:composition>
<ui:debug />
</body>
</html>

The next screenshot displays the result of the Facelets UI composition transformation when the browser requests templateClient02a.jsf. We can follow this pattern to make a number of JSF pages reuse the template in this manner, achieving a consistent look and feel across our web application.

Decorating the user interface

The Facelets framework supports the definition of smaller, reusable view elements that can be combined at runtime using the Facelets UI tag library. Some of these tags, such as the <ui:composition> and <ui:component> tags, trim their surrounding content. This behavior is desirable when including content from one complete XHTML document within another complete XHTML document. There are cases, however, when we do not want Facelets to trim the content outside the Facelets tag, such as when we are decorating content on one page with additional JSF or HTML markup defined in another page. For example, suppose there is a section of content in our XHTML document that we want to wrap or "decorate" with an HTML <div> element defined in another Facelets page.
In this scenario, we want all the content on the page to be displayed, and we are simply surrounding part of the content with additional markup defined in another Facelets template. Facelets provides the <ui:decorate> tag for this purpose.

Decorating content on a Facelets page

The following example demonstrates how to decorate content on a Facelets page with markup from another Facelets page using the <ui:decorate> tag. The <ui:decorate> tag has a template attribute and behaves like the <ui:composition> tag that Facelets templating typically uses: it references a Facelets template document that contains markup to be included in the current document. The main difference between the <ui:composition> tag and the <ui:decorate> tag is that Facelets trims the content outside the <ui:composition> tag but does not trim the content outside the <ui:decorate> tag.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:decorate example</title>
<link rel="stylesheet" type="text/css" href="css/style.css" />
</head>
<body>
Text before will stay.
<ui:decorate template="/WEB-INF/templates/box.jsf">
<span class="header">Information Box</span>
<p>This is the first line of information.</p>
<p>This is the second line of information.</p>
<p>This is the third line of information.</p>
</ui:decorate>
Text after will stay.
<ui:debug />
</body>
</html>

Creating a Facelets decoration

Let's examine the Facelets decoration template referenced by the previous example. The following source code demonstrates how to create a Facelets template that provides the decoration that will surround the content on another page. As we are using a <ui:composition> tag, only the content inside this tag will be used. In this example, we declare an HTML <div> element with the "box" CSS style class that contains a single Facelets <ui:insert> tag. When Facelets renders the above Facelets page, it encounters the <ui:decorate> tag that references the box.jsf page. The <ui:decorate> tag will be merged together with the associated decoration template and then rendered in the view. In this scenario, Facelets will insert the child content of the <ui:decorate> tag into the Facelets decoration template where the <ui:insert> tag is declared.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Box</title>
</head>
<body>
<ui:composition>
<div class="box">
<ui:insert />
</div>
</ui:composition>
</body>
</html>

The result is that our content is surrounded or "decorated" by the <div> element. Any text before or after the <ui:decorate> tag is still rendered on the page, as shown in the next screenshot: The included decoration is rendered as is, and is not nested inside a UI component, as demonstrated in the following Facelets debug page:

Drawing in 2D

Packt
08 Oct 2013
15 min read
(For more resources related to this topic, see here.) Drawing basics The screens of modern computers consist of a number of small squares, called pixels ( picture elements ). Each pixel can light in one color. You create pictures on the screen by changing the colors of the pixels. Graphics based on pixels is called raster graphics. Another kind of graphics is vector graphics, which is based on primitives such as lines and circles. Today, most computer screens are arrays of pixels and represent raster graphics. But images based on vector graphics (vector images) are still used in computer graphics. Vector images are drawn on raster screens using the rasterization procedure. The openFrameworks project can draw on the whole screen (when it is in fullscreen mode) or only in a window (when fullscreen mode is disabled). For simplicity, we will call the area where openFrameworks can draw, the screen . The current width and height of the screen in pixels may be obtained using the ofGetWidth() and ofGetHeight() functions. For pointing the pixels, openFrameworks uses the screen's coordinate system. This coordinate system has its origin on the top-left corner of the screen. The measurement unit is a pixel. So, each pixel on the screen with width w and height h pixels can be pointed by its coordinates (x, y), where x and y are integer values lying in the range 0 to w-1 and from 0 to h-1 respectively. In this article, we will deal with two-dimensional (2D) graphics, which is a number of methods and algorithms for drawing objects on the screen by specifying the two coordinates (x, y) in pixels. The other kind of graphics is three-dimensional (3D) graphics, which represents objects in 3D space using three coordinates (x, y, z) and performs rendering on the screen using some kind of projection of space (3D) to the screen (2D). The background color of the screen The drawing on the screen in openFrameworks should be performed in the testApp::draw() function. Before this function is called by openFrameworks, the entire screen is filled with a fixed color, which is set by the function ofSetBackground( r, g, b ). Here r, g, and b are integer values corresponding to red, green, and blue components of the background color in the range 0 to 255. Note that each of the ofSetBackground() function call fills the screen with the specified color immediately. You can make a gradient background using the ofBackgroundGradient() function. You can set the background color just once in the testApp::setup() function, but we often call ofSetBackground() in the beginning of the testApp::draw() function to not mix up the setup stage and the drawing stage. Pulsating background example You can think of ofSetBackground() as an opportunity to make the simplest drawings, as if the screen consists of one big pixel. Consider an example where the background color slowly changes from black to white and back using a sine wave. This is example 02-2D/01-PulsatingBackground. The project is based on the openFrameworks emptyExample example. Copy the folder with the example and rename it. 
Then fill the body of the testApp::draw() function with the following code:

float time = ofGetElapsedTimef();    //Get time in seconds
//Get periodic value in [-1,1], with wavelength equal to 1 second
float value = sin( time * M_TWO_PI );
//Map value from [-1,1] to [0,255]
float v = ofMap( value, -1, 1, 0, 255 );
ofBackground( v, v, v );             //Set background color

This code gets the time elapsed from the start of the project using the ofGetElapsedTimef() function, and uses this value for computing value = sin( time * M_TWO_PI ). Here, M_TWO_PI is an openFrameworks constant equal to 2π; that is, approximately 6.283185. So, time * M_TWO_PI increases by 2π per second. The value 2π is equal to the period of the sine wave function, sin(). So, the argument of sin(...) will go through its wavelength in one second, hence value = sin(...) will run from -1 to 1 and back. Finally, we map value to v, which changes in the range from 0 to 255, using the ofMap() function, and set the background to a color with red, green, and blue components equal to v. Run the project; you will see how the screen color pulsates by smoothly changing from black to white and back. Replace the last line, which sets the background color, with ofBackground( v, 0, 0 ); and the color will pulsate from black to red. Replace the argument of the sin(...) function with the formula time * M_TWO_PI * 2 and the pulsation becomes twice as fast. We will return to backgrounds in the Drawing with an uncleared background section. Now we will consider how to draw geometric primitives.

Geometric primitives

In this article we will deal with 2D graphics. 2D graphics can be created in the following ways:

Drawing geometric primitives such as lines, circles, and other curves, and shapes like triangles and rectangles. This is the most natural way of creating graphics by programming. Generative art and creative coding projects are often based on this graphics method. We will consider it in the rest of the article.

Drawing images lets you add more realism to the graphics.

Setting the contents of the screen directly, pixel by pixel, is the most powerful way of generating graphics, but it is harder to use for simple things like drawing curves, so this method is normally used together with both of the previous methods. A somewhat fast technique for drawing the screen pixel by pixel consists of filling an array with pixel colors, loading it into an image, and drawing the image on the screen. The fastest, but slightly harder, technique is using fragment shaders.

openFrameworks has the following functions for drawing primitives:

ofLine( x1, y1, x2, y2 ): This function draws a line segment connecting points (x1, y1) and (x2, y2)
ofRect( x, y, w, h ): This function draws a rectangle with the top-left corner (x, y), width w, and height h
ofTriangle( x1, y1, x2, y2, x3, y3 ): This function draws a triangle with vertices (x1, y1), (x2, y2), and (x3, y3)
ofCircle( x, y, r ): This function draws a circle with center (x, y) and radius r

openFrameworks has no special function for changing the color of a separate pixel. To do so, you can draw the pixel (x, y) as a rectangle with width and height equal to 1 pixel; that is, ofRect( x, y, 1, 1 ). This is a very slow method, but we sometimes use it for educational and debugging purposes. All the coordinates in these functions are of float type. Although the coordinates (x, y) of a particular pixel on the screen are integer values, openFrameworks uses float numbers for drawing geometric primitives.
This is because a video card can draw objects with float coordinates using modeling, as if the line goes between pixels. So the resultant picture drawn with float coordinates is smoother than with integer coordinates. Using these functions, it is possible to create simple drawings.

The simplest example of a flower

Let's consider an example that draws a circle, a line, and two triangles, which together form the simplest kind of flower. This is example 02-2D/02-FlowerSimplest. This example project is based on the openFrameworks emptyExample project. Fill the body of the testApp::draw() function with the following code:

ofBackground( 255, 255, 255 );               //Set white background
ofSetColor( 0, 0, 0 );                       //Set black color
ofCircle( 300, 100, 40 );                    //Blossom
ofLine( 300, 100, 300, 400 );                //Stem
ofTriangle( 300, 270, 300, 300, 200, 220 );  //Left leaf
ofTriangle( 300, 270, 300, 300, 400, 220 );  //Right leaf

On running this code, you will see the following picture of the "flower":

Controlling the drawing of primitives

There are a number of functions for controlling the parameters used when drawing primitives.

ofSetColor( r, g, b ): This function sets the color used for drawing primitives, where r, g, and b are integer values corresponding to the red, green, and blue components of the color in the range 0 to 255. After calling ofSetColor(), all the primitives will be drawn using this color until another ofSetColor() call. We will discuss colors in more detail in the Colors section.

ofFill() and ofNoFill(): These functions enable and disable filling shapes like circles, rectangles, and triangles. After calling ofFill() or ofNoFill(), all the primitives will be drawn filled or unfilled until the other function is called. By default, the shapes are rendered filled with color. Add the line ofNoFill(); before ofCircle(...); in the previous example and you will see all the shapes unfilled, as follows:

ofSetLineWidth( lineWidth ): This function sets the width of the rendered lines to the lineWidth value, which is of type float. The default value is 1.0, and calling this function with larger values will result in thick lines. It only affects the drawing of unfilled shapes. The line thickness can be changed up to some limit depending on the video card. Normally, this limit is not less than 8.0. Add the line ofSetLineWidth( 7 ); before the line drawing in the previous example, and you will see the flower with a thick vertical line, whereas all the filled shapes will remain unchanged. Note that we use the value 7; this is an odd number, so it gives symmetrical line thickening. Note that this method for obtaining thick lines is simple but not perfect, because adjacent lines are drawn quite crudely. For obtaining smooth thick lines, you should draw them as filled shapes.

ofSetCircleResolution( res ): This function sets the circle resolution; that is, the number of line segments used for drawing circles, to res. The default value is 20, but with such settings only small circles look good. For bigger circles, it is recommended to increase the circle resolution; for example, to 40 or 60. Add the line ofSetCircleResolution( 40 ); before ofCircle(...); in the previous example and you will see a smoother circle. Note that a large res value can decrease the performance of the project, so if you need to draw many small circles, consider using smaller res values.

ofEnableSmoothing() and ofDisableSmoothing(): These functions enable and disable line smoothing. Such settings can be controlled by your video card.
In our example, calling these functions will not have any effect. Performance considerations The functions discussed work well for drawings containing not more than a 1000 primitives. When you draw more primitives, the project's performance can decrease (it depends on your video card). The reason is that each command such as ofSetColor() or ofLine() is sent to drawing separately, which takes time. So, for drawing 10,000, 100,000, or even 1 million primitives, you should use advanced methods, which draw many primitives at once. In openFrameworks, you can use the ofMesh and ofVboMesh classes for this. Using ofPoint Maybe you noted a problem when considering the preceding flower example: drawing primitives by specifying the coordinates of all the vertices is a little cumbersome. There are too many numbers in the code, so it is hard to understand the relation between primitives. To solve this problem, we will learn about using the ofPoint class and then apply it for drawing primitives using control points. ofPoint is a class that represents the coordinates of a 2D point. It has two main fields: x and y, which are float type. Actually, ofPoint has the third field z, so ofPoint can be used for representing 3D points too. If you do not specify z, it sets to zero by default, so in this case you can think of ofPoint as a 2D point indeed. Operations with points To represent some point, just declare an object of the ofPoint class. ofPoint p; To initialize the point, set its coordinates. p.x = 100.0; p.y = 200.0; Or, alternatively, use the constructor. p = ofPoint( 100.0, 200.0 ); You can operate with points just as you do with numbers. If you have a point q, the following operations are valid: p + q or p - q provides points with coordinates (p.x + q.x, p.y + q.y) or (p.x - q.x, p.y - q.y) p * k or p / k, where k is the float value, provides the points (p.x * k, p.y * k) or (p.x / k, p.y / k) p += q or p -= q adds or subtracts q from p There are a number of useful functions for simplifying 2D vector mathematics, as follows: p.length(): This function returns the length of the vector p, which is equal to sqrt( p.x * p.x + p.y * p.y ). p.normalize(): This function normalizes the point so it has the unit length p = p / p.length(). Also, this function handles the case correctly when p.length() is equal to zero. See the full list of functions for ofPoint in the libs/openFrameworks/math/ofVec3f.h file. Actually, ofPoint is just another name for the ofVec3f class, representing 3D vectors and corresponding functions. All functions' drawing primitives have overloaded versions working with ofPoint: ofLine( p1, p2 ) draws a line segment connecting the points p1 and p2 ofRect( p, w, h ) draws a rectangle with top-left corner p, width w, and height h ofTriangle( p1, p2, p3 ) draws a triangle with the vertices p1, p2, and p3 ofCircle( p, r ) draws a circle with center p and radius r Using control points example We are ready to solve the problem stated in the beginning of the Using ofPoint section. To avoid using many numbers in drawing code, we can declare a number of points and use them as vertices for primitive drawing. In computer graphics, such points are called control points . Let's specify the following control points for the flower in our simplest flower example: Now we implement this in the code. This is example 02-2D/03-FlowerControlPoints. 
Add the following declaration of control points to the testApp class declaration in the testApp.h file:

ofPoint stem0, stem1, stem2, stem3, leftLeaf, rightLeaf;

Then set values for the points in the testApp::update() function as follows:

stem0 = ofPoint( 300, 100 );
stem1 = ofPoint( 300, 270 );
stem2 = ofPoint( 300, 300 );
stem3 = ofPoint( 300, 400 );
leftLeaf = ofPoint( 200, 220 );
rightLeaf = ofPoint( 400, 220 );

Finally, use these control points for drawing the flower in the testApp::draw() function:

ofBackground( 255, 255, 255 );          //Set white background
ofSetColor( 0, 0, 0 );                  //Set black color
ofCircle( stem0, 40 );                  //Blossom
ofLine( stem0, stem3 );                 //Stem
ofTriangle( stem1, stem2, leftLeaf );   //Left leaf
ofTriangle( stem1, stem2, rightLeaf );  //Right leaf

You will observe that when drawing with control points the code is much easier to understand. Furthermore, there is one more advantage of using control points: we can easily change the control points' positions and hence obtain animated drawings. See the full example code in 02-2D/03-FlowerControlPoints. In addition to the code already explained, it contains code for shifting the leftLeaf and rightLeaf points depending on time. So, when you run the code, you will see the flower with moving leaves.

Coordinate system transformations

Sometimes we need to translate, rotate, and resize drawings. For example, arcade games are based on characters moving across the screen. When we perform drawing using control points, the straightforward solution for translating, rotating, and resizing graphics is to apply the desired transformations to the control points using the corresponding mathematical formulas. Such an idea works, but sometimes leads to complicated formulas in the code (especially when we need to rotate graphics). The more elegant solution is to use coordinate system transformations. This is a method of temporarily changing the coordinate system during drawing, which lets you translate, rotate, and resize drawings without changing the drawing algorithm. The current coordinate system is represented in openFrameworks with a matrix. All coordinate system transformations are made by changing this matrix in some way. When openFrameworks draws something using the changed coordinate system, it performs exactly the same number of computations as with the original matrix. It means that you can apply as many coordinate system transformations as you want without any decrease in the performance of the drawing. Coordinate system transformations are managed in openFrameworks with the following functions:

ofPushMatrix(): This function pushes the current coordinate system onto a matrix stack. This stack is a special container that holds the coordinate system matrices. It gives you the ability to restore coordinate system transformations when you do not need them any more.

ofPopMatrix(): This function pops the last added coordinate system from the matrix stack and uses it as the current coordinate system. You should take care that the number of ofPopMatrix() calls doesn't exceed the number of ofPushMatrix() calls. Though the coordinate system is restored before testApp::draw() is called, we recommend that the number of ofPushMatrix() and ofPopMatrix() calls in your project be exactly the same. It will simplify the project's debugging and further development.

ofTranslate( x, y ) or ofTranslate( p ): This function moves the current coordinate system by the vector (x, y) or, equivalently, by the vector p.
If x and y are equal to zero, the coordinate system remains unchanged.

ofScale( scaleX, scaleY ): This function scales the current coordinate system by scaleX along the x axis and by scaleY along the y axis. If both parameters are equal to 1.0, the coordinate system remains unchanged. The value -1.0 means inverting the coordinate axis in the opposite direction.

ofRotate( angle ): This function rotates the current coordinate system around its origin by angle degrees clockwise. If the angle value is equal to 0, or k * 360 with k an integer, the coordinate system remains unchanged.

All transformations can be applied in any sequence; for example, translating, scaling, rotating, translating again, and so on. The typical usage of these functions is the following:

1. Store the current transformation matrix using ofPushMatrix().
2. Change the coordinate system by calling any of these functions: ofTranslate(), ofScale(), or ofRotate().
3. Draw something.
4. Restore the original transformation matrix using ofPopMatrix().

Step 3 can include steps 1 to 4 again. For example, to move the origin of the coordinate system to the center of the screen, use the following code in testApp::draw():

ofPushMatrix();
ofTranslate( ofGetWidth() / 2, ofGetHeight() / 2 );
//Draw something
ofPopMatrix();

If you replace the //Draw something comment with ofCircle( 0, 0, 100 );, you will see the circle in the center of the screen. This transformation significantly simplifies coding drawings that should be located at the center of the screen. Now let's use coordinate system transformations for adding triangular petals to the flower; a rough sketch of the idea follows.
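The original excerpt ends here, but as an illustration of the idea (not code from the book's example project), petals could be drawn by rotating the coordinate system around the blossom's center and drawing the same triangle several times. The petal count and the triangle coordinates below are arbitrary values chosen for this sketch:

//Draw several triangular petals around the blossom center (stem0).
//A rough sketch: petal count and size are arbitrary, not from the book.
ofPushMatrix();
ofTranslate( stem0 );                      //Origin moves to the blossom center
int petals = 6;
for ( int i = 0; i < petals; i++ ) {
    ofRotate( 360.0 / petals );            //Rotate by equal angular steps
    ofTriangle( 0, 0, -15, -60, 15, -60 ); //One petal pointing "up" in local coordinates
}
ofPopMatrix();

Because each ofRotate() call accumulates on top of the previous one, the same local triangle is repeated at equal angular steps around the center, and wrapping the whole block in ofPushMatrix()/ofPopMatrix() keeps the rest of the drawing unaffected.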

Data Modeling with ERWin

Packt
14 Oct 2009
3 min read
Depending on your data modeling need and what you already have, there are two other ways to create a data model: Derive from an existing model and Reverse Engineer an existing database. Let’s start with creating a new model by clicking the Create model button. We’d like to create both logical and physical models, so select Logical/Physical. You can see in the Model Explorer that our new model gets Model_1 name, ERWin’s default name. Let’s rename our model to Packt Model. Confirm by looking at the Model Explorer that our model is renamed correctly. Next, we need to choose the ER notation. ERWin offers two notations: IDEF1X and IE. We’ll use IE for our logical and physical models. It’s a good practice during model development to save our work from time to time. Our model is still empty, so let’s next create a logical model in it. Logical Model Logical model in ERWin is basically ER model. An ER model consists of entities and their attributes, and their relationships. Let’s start by creating our first entity: CUSTOMER and its attributes. To add an entity, load (click) the Entity button on the toolbar, and drop it on the diagramming canvas by clicking your mouse on the canvas. Rename the entity’s default E/1 name to CUSTOMER by clicking on the name (E/1) and typing its new name over it. To add an attribute to an entity, right-click the entity and select Attributes. Click the New button. Type in our first attribute name (CUSTOMER_NO) over its name, and select Number as its data type, and then click OK. We want this attribute as the entity’s primary key, so check the Primary Key box. In the same way, add an attribute: CUSTOMER_NAME with a String data type. Our CUSTOMER entity now has two attributes as seen in its ER diagram. Notice that the CUSTOMER_NO attribute is at the key area, the upper part of the entity box; while the CUSTOMER_NAME is in the common attribute area, the bottom part. Similarly, add the rest of the entities and their attributes in our model. When you’re done, we’ll have six entities in our ER diagram. Our entities are not related yet. To rearrange the entities in the diagram, you can move around the entities by clicking and dragging them.

Integrating with Microsoft Dynamics AX 2009 using BizTalk Adapter

Packt
03 Aug 2011
5 min read
  Microsoft BizTalk 2010: Line of Business Systems Integration What is Dynamics AX? Microsoft Dynamics AX (formally Microsoft Axapta) is Microsoft's Enterprise Resource Planning (ERP) solution for mid-size and large customers. Much like SAP, Dynamics AX provides functions that are critical to businesses that can benefit from BizTalk's integration. Microsoft Dynamics AX is fully customizable and extensible through its rich development platform and tools. It has direct connections to products such as Microsoft BizTalk Server, Microsoft SQL Server, Exchange, and Office. Often Dynamics AX is compared to SAP All in One. Those who are familiar with SAP are also familiar with high cost of implementation, maintenance, and customization associated with it. A Microsoft Dynamics AX solution offers more customizability, lower maintenance costs, and lower per-user costs than SAP. ERP implementations often fail in part due to lack of user acceptance in adopting a new system. The Dynamics AX user interface has a similar look and feel to other widely used products such as Microsoft Office and Microsoft Outlook, which significantly increases the user's comfort level when dealing with a new ERP system. For more information on Dynamics AX 2009 and SAP, please see http://www.microsoft.com/dynamics/en/us/compare-sap.aspx. Methods of integration with AX Included with Dynamics AX 2009, Microsoft provides two tools for integration with Dynamics AX: Dynamics AX BizTalk Adapter .NET Business Connector The BizTalk adapter interfaces via the Application Interface Framework Module (AIF) in Dynamics AX 2009, and the .NET Business Connector directly calls the Application Object Tree (AOT) classes in your AX source code. The AIF module requires a license key, which can add cost to your integration projects if your organization has not purchased this module. It provides an extensible framework that enables integration via XML document exchange. A great advantage of the AIF module is its integration functionality with the BizTalk Dynamics AX adapter. Other adapters include a FILE adapter and MSMQ, as well as Web Services to consume XML files are included out of the box. The AIF module requires a fair amount of setup and configuration. Other advantages include full and granular security, capability of synchronous and asynchronous mode integration mode, and full logging of transactions and error handling. The Microsoft BizTalk AX 2009 adapter can execute AX actions (exposed functions to the AIF module) to write data to AX in both synch and asynch modes. Which mode is used is determined by the design of your BizTalk application (via logical ports). A one-way send port will put the XML data into the AIF queue, whereas a two-way send-receive port will execute the actions and return a response message. Asynch transitions will stay in the AIF queue until a batch job is executed. Setting up and executing the batch jobs can be very difficult to manage. Pulling data from AX can also be achieved using the BizTalk adapter. Transactions pushed into the same AIF queue (with an OUTBOUND direction in an async mode) can be retrieved using the AX adapter which polls AX for these transactions. The .NET Business connector requires custom .NET code to be written in order to implement it. If your business requirements are for a single (or very small amount) of point-to-point integration data flows, then we would recommend using the .NET Business Connector. However, this often requires customizations in order to create and expose the methods. 
Security also needs to be handled with the service account that the code is running under. Installing the adapter and .NET Business Connector The Microsoft BizTalk adapter for Dynamics AX 2009 and the .NET Business Connector are installed from your Dynamics AX Setup install setup under Integration on the Add or Modify components window. Each component is independent of one another; however the BizTalk adapter leverages components of the business connector. You are not required to install the Dynamics AX client on the BizTalk server. When installed in BizTalk adapter, you can simply select all the defaults from the install wizard. For the .NET business connector, you'll be prompted for the location of your Dynamics AX instance. This will be used only as a default configuration and can easily be changed. Configuring Dynamics AX 2009 Application Integration Framework for BizTalk Adapter Configuration of the AIF module involves several steps. It also goes a long way to increasing your understanding of the granularity of Dynamics AX setup and security considerations that were taken into account for integration of what can be highly sensitive data.It is recommended that this setup be done with Admin level security, however, only full control of the AIF module is required. This setup is almost identical in version prior to Dynamics AX 2009; minor differences will be noted. All AIF setup tables can be found in Dynamics AX under Basic | Setup | Application Integration Framework. The first step is rather simple, however critical. In the Transport Adapters form, add in a new entry selecting Adapter Class drop down AifBizTalkAdapter, select Active, and Direction will be Receive and Respond. You also notice there are two other out-of-the-box adapters: FILE and MSMQ. This is a one-time setup that is effective across all companies. Next, using the Channels form, set up an active channel for your specific BizTalk server. Select a meaningful and identifiable Channel ID and Name such as BizTalkChannelID and BizTalkChannel. Select the Adapter to BizTalk Adapter, check Active, set Direction to Both, Response channel equal to the Channel ID of BizTalkChannelID. Set the Address to your BizTalk Server (I2CDARS1 as shown below). (Move the mouse over the image to enlarge.)  

Interacting with the User

Packt
08 Aug 2013
23 min read
(For more resources related to this topic, see here.)

Creating actions, commands, and handlers

The first few releases of the Eclipse framework provided Action as a means of contributing menu items. These were defined declaratively via actionSets in the plugin.xml file, and many tutorials still reference those today. At the programming level, when creating views, Actions are still used to provide context menus programmatically. They were replaced with commands in Eclipse 3, as a more abstract way of decoupling the operation of a command from its representation in the menu. To connect the two together, a handler is used.

E4: Eclipse 4.x uses the command's model, and decouples it further using the @Execute annotation on the handler class. Commands and views are hooked up with entries on the application's model.

Time for action – adding context menus

A context menu can be added to the TimeZoneTableView class and responded to dynamically during the view's creation. The typical pattern for Eclipse 3 applications is to create a hookContextMenu() method, which is used to wire up the context menu operation with displaying the menu. A default implementation can be seen by creating an example view, or one can be created from first principles. Eclipse menus are managed by a MenuManager. This is a specialized subclass of a more general ContributionManager, which looks after a dynamic set of contributions that can be made from other sources. When the menu manager is connected to a control, it responds in the standard ways for the platform for showing the menu (typically a context-sensitive click or shortcut key). Menus can also be displayed in other locations, such as a view's or the workspace's coolbar (toolbar). The same MenuManager approach works in these different locations.

Open the TimeZoneTableView class and go to the createPartControl() method. At the bottom of the method, add a new MenuManager with the ID #PopupMenu and associate it with the viewer's control.

MenuManager manager = new MenuManager("#PopupMenu");
Menu menu = manager.createContextMenu(tableViewer.getControl());
tableViewer.getControl().setMenu(menu);

If the Menu is empty, the MenuManager won't show any content, so this currently has no effect. To demonstrate this, an Action will be added to the Menu. An Action has text (for rendering in the pop-up menu, or the menu at the top of the screen), as well as a state (enabled/disabled, selected) and a behavior. These are typically created as subclasses with (although the Action class doesn't strictly require it) an implementation of the run() method. Add this to the bottom of the createPartControl() method.

Action deprecated = new Action() {
  public void run() {
    MessageDialog.openInformation(null, "Hello", "World");
  }
};
deprecated.setText("Hello");
manager.add(deprecated);

Run the Eclipse instance, open the Time Zone Table View, and right-click on the table. The Hello menu can be seen, and when selected, an informational dialog is shown.

What just happened?

The MenuManager (with the ID #PopupMenu) was bound to the control, which means that when that particular control's context-sensitive menu is invoked, the manager will be asked to display a menu. The manager is associated with a single Menu object (which is also stamped on the underlying control itself) and is responsible for updating the status of the menu. Actions are deprecated.
They are included here since examples on the Internet may have preferred references to them, but it's important to note that while they still work, the way of building user interfaces are with the commands and handlers, shown in the next section. When the menu is shown, the actions that the menu contains are rendered in the order in which they are added. Action are usually subclasses that implement a run() method, which performs a certain operation, and have text which is displayed. Action instances also have other metadata, such as whether they are enabled or disabled. Although it is tempting to override the access or methods, this behavior doesn't work—the setters cause an event to be sent out to registered listeners, which causes side effects, such as updating any displayed controls. Time for action – creating commands and handlers Since the Action class is deprecated, the supported mechanism is to create a command, a handler, and a menu to display the command in the menu bar. Open the plug-in manifest for the project, or double-click on the plugin.xml file. Edit the source on the plugin.xml tab, and add a definition of a Hello command as follows: <extension point="org.eclipse.ui.commands"> <command name="Hello" description="Says Hello World" id="com.packtpub.e4.clock.ui.command.hello"/></extension> This creates a command, which is just an identifier and a name. To specify what it does, it must be connected to a handler, which is done by adding the following extension: <extension point="org.eclipse.ui.handlers"> <handler class= "com.packtpub.e4.clock.ui.handlers.HelloHandler" commandId="com.packtpub.e4.clock.ui.command.hello"/></extension> The handler joins the processing of the command to a class that implements IHandler, typically AbstractHandler. Create a class HelloHandler in a new com.packtpub.e4.clock.ui.handlers package, which implements AbstractHandler(from the org.eclipse.core.commands package). public class HelloHandler extends AbstractHandler { public Object execute(ExecutionEvent event) { MessageDialog.openInformation(null, "Hello", "World"); return null; }} The command's ID com.packtpub.e4.clock.ui.command.hello is used to refer to it from menus or other locations. To place the contribution in an existing menu structure, it needs to be specified by its locationURI, which is a URL that begins with menu:such as menu:window?after=additionsor menu:file?after=additions. To place it in the Help menu, add this to the plugin.xml file. <extension point="org.eclipse.ui.menus"> <menuContribution allPopups="false" locationURI="menu:help?after=additions"> <command commandId="com.packtpub.e4.clock.ui.command.hello" label="Hello" style="push"> </command> </menuContribution></extension> Run the Eclipse instance, and there will be a Hello menu item under the Help menu. When selected, it will pop up the Hello World message. If the Hello menu is disabled, verify that the handler extension point is defined, which connects the command to the handler class. What just happened? The main issue with the actions framework was that it tightly coupled the state of the command with the user interface. Although an action could be used uniformly between different menu locations, the Action superclass still lives in the JFace package, which has dependencies on both SWT and other UI components. As a result, Action cannot be used in a headless environment. Eclipse 3.x introduced the concept of commands and handlers, as a means of separating their interface from their implementation. 
This allows a generic command (such as Copy) to be overridden by specific views. Unlike the traditional command design pattern, which provides implementation as subclasses, the command in Eclipse 3.x uses a final class and then a retargetable IHandler to perform the actual execution. E4: In Eclipse 4.x, the concepts of commands and handlers are used extensively to provide the components of the user interface. The key difference is in their definition; for Eclipse 3.x, this typically occurs in the plugin.xml file, whereas in E4 it is part of the application model. In the example, a specific handler was defined for the command, which is valid in all contexts. The handler's class is the implementation; the command ID is the reference. The org.eclipse.ui.menus extension point allows menuContributions to be added anywhere in the user interface. To address where the menu can be contributed to, the location URIobject defines where the menu item can be created. The syntax for the URI is as follows: menu: Menus begin with the menu: protocol (can also be toolbar:or popup:) identifier: This can be a known short name (such as file, window, and help), the global menu (org.eclipse.ui.main.menu), the global toolbar (org.eclipse.ui.main.toolbar), a view identifier (org.eclipse. ui.views.ContentOutline), or an ID explicitly defined in a pop-up menu's registerContextMenu()call. ?after(or before)=key: This is the placement instruction to put this after or before other items; typically additions is used as an extensible location for others to contribute to. The locationURIallows plug-ins to contribute to other menus, regardless of where they are ultimately located. Note, that if the handler implements the IHandler interface directly instead of subclassing AbstractHandler, the isEnabled() method will need to be overridden as otherwise the command won't be enabled, and the menu won't have any effect. Time for action – binding commands to keys To hook up the command to a keystroke a binding is used. This allows a key (or series of keys) to be used to invoke the command, instead of only via the menu. Bindings are set up via an extension point org.eclipse.ui.bindings, and connect a sequence of keystrokes to a command ID. Open the plugin.xml in the clock.uiproject. In the plugin.xml tab, add the following: <extension point="org.eclipse.ui.bindings"> <key commandId="com.packtpub.e4.clock.ui.command.hello" sequence="M1+9" contextId="org.eclipse.ui.contexts.window" schemeId= "org.eclipse.ui.defaultAcceleratorConfiguration"/></extension> Run the Eclipse instance, and press Cmd+ 9(for OS X) or Ctrl+ 9 (for Windows/Linux). The same Hello dialog should be displayed, as if it was shown from the menu. The same keystroke should be displayed in the Help menu. What just happened? The M1 key is the primary meta key, which is Cmd on OS X and Ctrl on Windows/Linux. This is typically used for the main operations; for example M1+ C is copy and M1+ V is paste on all systems. The sequence notation M1+ 9 is used to indicate pressing both keys at the same time. The command that gets invoked is referenced by its commandId. This may be defined in the same plug-in, but does not have to be; it is possible for one application to provide a set of commands and another plug-in to provide keystrokes that bind them. It is also possible to set up a sequence of key presses; for example, M1+ 9 8 7would require pressing Cmd+ 9 or Ctrl+ 9 followed by 8 and then 7 before the command is executed. 
This allows a set of keystrokes to be used to invoke a command; for example, it's possible to emulate an Emacs quit operation with the keybinding Ctrl + X Ctrl + Cto the quit command. Other modifier keys include M2(Shift), M3(Alt/Option), and M4(Ctrl on OS X). It is possible to use CTRL, SHIFT, or ALT as long names, but the meta names are preferred, since M1tends to be bound to different keys on different operating systems. The non-modifier keys themselves can either be single characters (A to Z), numbers (0to 9), or one of a set of longer name key-codes, such as F12, ARROW_UP, TAB, and PAGE_UP. Certain common variations are allowed; for example, ESC/ESCAPE, ENTER/RETURN, and so on. Finally, bindings are associated with a scheme, which in the default case should be org.eclipse.ui.defaultAcceleratorConfiguration. Schemes exist to allow the user to switch in and out of keybindings and replace them with others, which is how tools like "vrapper" (a vi emulator) and the Emacs bindings that come with Eclipse by default can be used. (This can be changed via Window | Preferences | Keys menu in Eclipse.) Time for action – changing contexts The context is the location in which this binding is valid. For commands that are visible everywhere—typically the kind of options in the default menu—they can be associated with the org.eclipse.ui.contexts.windowcontext. If the command should also be invoked from dialogs as well, then the org.eclipse.ui.context.dialogAndWindowcontext would be used instead. Open the plugin.xml file of the clock.ui project. To enable the command only for Java editors, go to the plugin.xml tab, and modify the contextId as follows: <extension point="org.eclipse.ui.bindings"> <key commandId="com.packtpub.e4.clock.ui.command.hello" sequence="M1+9" contextId="org.eclipse.ui.contexts.window" contextId="org.eclipse.jdt.ui.javaEditorScope" schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"/></extension> Run the Eclipse instance, and create a Java project, a test Java class, and an empty text file. Open both of these in editors. When the focus is on the Java editor, the Cmd + 9 or Ctrl+ 9 operation will run the command, but when the focus is on the text editor, the keybinding will have no effect. Unfortunately, it also highlights the fact that just because the keybinding is disabled when in the Java scope, it doesn't disable the underlying command. If there is no change in behavior, try cleaning the workspace of the test instance at launch, by going to the Run | Run... menu, and choosing Clear on the workspace. This is sometimes necessary when making changes to the plugin.xml file, as some extensions are cached and may lead to strange behavior. What just happened? Context scopes allow bindings to be valid for certain situations, such as when a Java editor is open. This allows the same keybinding to be used for different situations, such as a Format operation—which may have a different effect in a Java editor than an XML editor, for instance. Since scopes are hierarchical, they can be specifically targeted for the contexts in which they may be used. The Java editor context is a subcontext of the general text editor, which in turn is a subcontext of the window context, which in turn is a subcontext of the windowAndDialogcontext. 
The available contexts can be seen by editing the plugin.xml file in the plug-in editor; in the extensions tab the binding shows an editor window with a form: Clicking on the Browse… button next to the contextId brings up a dialog, which presents the available contexts: It's also possible to find out all the contexts programmatically or via the running OSGi instance, by navigating to Window | Show View | Console, and then using New Host OSGi Console in the drop-down menu, and then running the following code snippet: osgi> pt -v org.eclipse.ui.contextsExtension point: org.eclipse.ui.contexts [from org.eclipse.ui]Extension(s):-------------------null [from org.eclipse.ant.ui] <context> name = Editing Ant Buildfiles description = Editing Ant Buildfiles Context parentId = org.eclipse.ui.textEditorScope id = org.eclipse.ant.ui.AntEditorScope </context>null [from org.eclipse.compare] <context> name = Comparing in an Editor description = Comparing in an Editor parentId = org.eclipse.ui.contexts.window id = org.eclipse.compare.compareEditorScope </context> Time for action – enabling and disabling the menu's items The previous section showed how to hide or show a specific keybinding depending on the open editor type. However, it doesn't stop the command being called via the menu, or from it showing up in the menu itself. Instead of just hiding the keybinding, the menu can be hidden as well by adding a visibleWhenblock to the command. The expressions framework provides a number of variables, including activeContexts, which contains a list of the active contexts at the time. Since many contexts can be active simultaneously, the active contexts is a list (for example, [dialogAndWindows,windows, textEditor, javaEditor]). So, to find an entry (in effect, a contains operation) an iterate operator with the equals expression is used. Open up the plugin.xml file, and update the the Hello command by adding a visibleWhen expression. <extension point="org.eclipse.ui.menus"> <menuContribution allPopups="false" locationURI="menu:help?after=additions"> <command commandId="com.packtpub.e4.clock.ui.command.hello" label="Hello" style="push"> <visibleWhen> <with variable="activeContexts"> <iterate operator="or"> <equals value="org.eclipse.jdt.ui.javaEditorScope"/> </iterate> </with> </visibleWhen> </command> </menuContribution></extension> Run the Eclipse instance, and verify that the menu is hidden until a Java editor is opened. If this behavior is not seen, run the Eclipse application with the clean argument to clear the workspace. After clearing, it will be necessary to create a new Java project with a Java class, as well as an empty text file, to verify that the menu's visibility is correct. What just happened? Menus have a visibleWhen guard that is evaluated when the menu is shown. If it is false, he menu is hidden. The expressions syntax is based on nested XML elements with certain conditions. For example, an <and> block is true if all of its children are true, whereas an <or> block is true if one of its children is true. Variables can also be used with a property test using a combination of a <with> block (which binds the specified variable to the stack) and an <equals> block or other comparison. In the case of variables that have lists, an <iterate> can be used to step through elements using either operator="or" or operator="and" to dynamically calculate enablement. To find out if a list contains an element, a combination of <iterate> and <equals> operators is the standard pattern. 
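The defined and active contexts can also be queried from code inside a plug-in. The following sketch is not from the original text; it assumes it is called from UI code (for example, from a handler's execute() method) and simply prints the context identifiers known to the workbench using the IContextService:

import java.util.Collection;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.contexts.IContextService;

public class ContextLister {
  // Prints the defined and currently active context IDs to the console.
  public static void listContexts() {
    IContextService contextService = (IContextService) PlatformUI
        .getWorkbench().getService(IContextService.class);
    Collection<?> defined = contextService.getDefinedContextIds();
    Collection<?> active = contextService.getActiveContextIds();
    System.out.println("Defined contexts: " + defined);
    System.out.println("Active contexts: " + active);
  }
}

A context such as org.eclipse.jdt.ui.javaEditorScope will only appear in the active list while a Java editor has focus, which is exactly the condition that the keybinding and the visibleWhen expressions described here rely on.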
There are a number of variables that can be used in tests; they include the following:

activeContexts: the list of context IDs that are active at the time
activeShell: the active shell (dialog or window)
activeWorkbenchWindow: the active window
activeEditor: the current or last active editor
activePart: the active part (editor or view)
selection: the current selection
org.eclipse.core.runtime.Platform: the Platform object

The Platform object is useful for performing dynamic tests with the test element, such as the following:

<test value="ACTIVE" property="org.eclipse.core.runtime.bundleState"
      args="org.eclipse.core.expressions"/>
<test property="org.eclipse.core.runtime.isBundleInstalled"
      args="org.eclipse.core.expressions"/>

Knowing whether a bundle is installed is often useful, but it's better to only enable functionality if a bundle is started (or, in OSGi terminology, ACTIVE). As a result, use of isBundleInstalled has been superseded by the bundleState=ACTIVE test.

Time for action – reusing expressions

Although it's possible to copy and paste expressions between the places where they are used, it is preferable to re-use an identical expression. Declare an expression using the expressions definitions extension point, by opening the plugin.xml file of the clock.ui project:

<extension point="org.eclipse.core.expressions.definitions">
  <definition id="when.hello.is.active">
    <with variable="activeContexts">
      <iterate operator="or">
        <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
      </iterate>
    </with>
  </definition>
</extension>

If defined via the extension wizard, it will prompt to add a dependency on the org.eclipse.core.expressions bundle. This isn't strictly necessary for this example to work.

To use the definition, the visibleWhen expression needs to use the reference; replace the inline <with> block added earlier with a <reference> element:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello" label="Hello" style="push">
      <visibleWhen>
        <reference definitionId="when.hello.is.active"/>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>

Now that the reference has been defined, it can be used to modify the handler as well, so that the handler and menu become enabled and visible together. Add the following to the Hello handler in the plugin.xml file:

<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.Hello"
           commandId="com.packtpub.e4.clock.ui.command.hello">
    <enabledWhen>
      <reference definitionId="when.hello.is.active"/>
    </enabledWhen>
  </handler>
</extension>

Run the Eclipse application and exactly the same behavior will occur; but should the enablement condition change, it can now be updated in one place.

What just happened?

The org.eclipse.core.expressions extension point defined a virtual condition that can be evaluated when the user's context changes, so both the menu and the handler can be made visible and enabled at the same time. The reference was bound in the enabledWhen condition for the handler, and the visibleWhen condition for the menu. Since references can be used anywhere, expressions can also be defined in terms of other expressions. As long as the expressions aren't recursive, they can be built up in any manner.

Time for action – contributing commands to pop-up menus

It's useful to be able to add contributions to pop-up menus so that they can be reused in different places.
Fortunately, this can be done fairly easily with the menuContribution element and a combination of enablement tests. This allows the Action introduced in the first part of this article to be replaced with a more generic command and handler pairing.

There is a deprecated extension point—which still works in Eclipse 4.2 today—called objectContribution, which is a single specialized hook for contributing a pop-up menu to an object. It has been deprecated for some time, but older tutorials and examples may still refer to it.

Open the TimeZoneTableView class and add the hookContextMenu() method as follows:

private void hookContextMenu(Viewer viewer) {
  MenuManager manager = new MenuManager("#PopupMenu");
  Menu menu = manager.createContextMenu(viewer.getControl());
  viewer.getControl().setMenu(menu);
  getSite().registerContextMenu(manager, viewer);
}

Add the same hookContextMenu() method to the TimeZoneTreeView class. In the TimeZoneTreeView class, at the end of the createPartControl() method, call hookContextMenu() with the tree's viewer. In the TimeZoneTableView class, at the end of the createPartControl() method, replace the deprecated Action code with a single call to hookContextMenu() instead:

hookContextMenu(tableViewer);

The MenuManager and Action code that this call replaces should be removed:

MenuManager manager = new MenuManager("#PopupMenu");
Menu menu = manager.createContextMenu(tableViewer.getControl());
tableViewer.getControl().setMenu(menu);
Action deprecated = new Action() {
  public void run() {
    MessageDialog.openInformation(null, "Hello", "World");
  }
};
deprecated.setText("Hello");
manager.add(deprecated);

Running the Eclipse instance now and showing the menu results in nothing being displayed, because no menu items have been added to it yet.

Create a command and a handler called Show the Time:

<extension point="org.eclipse.ui.commands">
  <command name="Show the Time" description="Shows the Time"
           id="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>
<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.ShowTheTime"
           commandId="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>

Create a class ShowTheTime in the com.packtpub.e4.clock.ui.handlers package, which extends org.eclipse.core.commands.AbstractHandler, to show the time in a specific time zone:

public class ShowTheTime extends AbstractHandler {
  public Object execute(ExecutionEvent event) {
    // The current selection is obtained from the window's selection service
    ISelection sel = HandlerUtil.getActiveWorkbenchWindow(event)
      .getSelectionService().getSelection();
    if (sel instanceof IStructuredSelection && !sel.isEmpty()) {
      Object value = ((IStructuredSelection) sel).getFirstElement();
      if (value instanceof TimeZone) {
        SimpleDateFormat sdf = new SimpleDateFormat();
        sdf.setTimeZone((TimeZone) value);
        MessageDialog.openInformation(null, "The time is",
          sdf.format(new Date()));
      }
    }
    return null;
  }
}

Finally, to hook it up, a menu needs to be added to the special locationURI popup:org.eclipse.ui.popup.any:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="popup:org.eclipse.ui.popup.any">
    <command label="Show the Time" style="push"
             commandId="com.packtpub.e4.clock.ui.command.showTheTime">
      <visibleWhen checkEnabled="false">
        <with variable="selection">
          <iterate ifEmpty="false">
            <adapt type="java.util.TimeZone"/>
          </iterate>
        </with>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>

Run the Eclipse instance, and open the Time Zone Table view or the Time Zone Tree view. Right-click on a TimeZone (that is, one of the leaves of the tree or one of the rows of the table), and the Show the Time command will be displayed.
Select the command and a dialog should show the time.

What just happened?

The views from earlier in this article, together with the knowledge of how to wire up commands, provide a unified means of adding commands based on the selected object type. This approach of registering commands is powerful, because any time a TimeZone is exposed as a selection in the future, it will automatically gain a Show the Time menu. The commands define a generic operation, and handlers bind those commands to implementations.

The context-sensitive menu is provided by the pop-up menu extension point using the locationURI popup:org.eclipse.ui.popup.any. This allows the menu to be added to any pop-up menu that uses a MenuManager, whenever the selection contains a TimeZone. The MenuManager is responsible for listening to the mouse gestures that show a menu, and for filling it with details when it is shown.

In the example, the command was enabled when the object was an instance of TimeZone, and also if it could be adapted to a TimeZone. This would allow another object type (say, a contact card) to have an adapter to convert it to a TimeZone, and thus show the time in that contact's location.

Have a go hero – using view menus and toolbars

The way to add a view menu is similar to adding a pop-up menu; the locationURI used is the view's ID rather than that of the menu item itself. Add a Show the Time menu to the TimeZone view as a view menu.

Another way of adding the menu is as a toolbar entry, which is an icon in the main Eclipse window. Add the Show the Time icon to the global toolbar instead.

To facilitate testing of views, add a menu item that allows you to show the TimeZone views with PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage().showView(id).

Jobs and progress

Since the user interface is single-threaded, a command that takes a long time will block the user interface from being redrawn or processed. As a result, it is necessary to run long-running operations in a background thread to prevent the UI from hanging. Although the core Java library contains java.util.Timer, the Eclipse Jobs API provides a mechanism to both run jobs and report progress. It also allows jobs to be grouped together and paused or joined as a whole.
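As a minimal sketch of the Jobs API (the TickJob class name and its work loop are illustrative assumptions, not code from the example project), a background job subclasses Job and reports progress through the supplied monitor:

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class TickJob extends Job {
  public TickJob() {
    super("Clock tick"); // the name shown in the Progress view
  }
  @Override
  protected IStatus run(IProgressMonitor monitor) {
    monitor.beginTask("Ticking", 10);
    for (int i = 0; i < 10; i++) {
      if (monitor.isCanceled()) {
        return Status.CANCEL_STATUS; // respond to user cancellation
      }
      // perform one unit of long-running work here
      monitor.worked(1);
    }
    monitor.done();
    return Status.OK_STATUS;
  }
}

The job would be scheduled in the background with new TickJob().schedule(); note that any updates to SWT widgets from within run() still need to be posted back to the UI thread, for example via Display.asyncExec().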

Skin Customization in JBoss RichFaces 3.3

Packt
30 Nov 2009
5 min read
Skinnability

Every RichFaces component supports skinnability, which means that just by changing the skin, we change the look of all the components. That's very good for giving our application a consistent look without repeating the same CSS values for each component every time. RichFaces still uses CSS, but it also enhances it in order to make it simpler to manage and maintain.

Customize skin parameters

A skin file contains the basic settings (such as fonts, colors, and so on) that we'll use for all the components—just by changing those settings, we can customize the basic look and feel of the RichFaces framework. As you might know, RichFaces comes with some built-in skins (and other external plug 'n' skin ones)—you can start with those skins in order to create your own custom skin.

The built-in skins are: plain, emeraldTown, blueSky, wine, japanCherry, ruby, classic, and deepMarine.

The plug 'n' skin ones are: laguna, darkX, and glassX.

The plug 'n' skin skins are packaged in external jar files (which you can download from the same location as the RichFaces framework) that must be added to the project in order to use them. Remember that the skin used by the application can be set as a context-param in the web.xml file:

<context-param>
  <param-name>org.richfaces.SKIN</param-name>
  <param-value>emeraldTown</param-value>
</context-param>

This is an example with the emeraldTown skin set:

If we change the skin to japanCherry, we have the following screenshot:

That's without changing a single line of CSS or XHTML!

Edit a basic skin

Now let's start creating our own basic skin. In order to do that, we are going to reuse one of the built-in skin files and change it. You can find the skin files in the richfaces-impl-3.x.x.jar file inside the META-INF/skins directory. Let's open the jar and then open, for example, the emeraldTown.skin.properties file, which looks like this (yes, the skin file is a .properties file!):

#Colors
headerBackgroundColor=#005000
headerGradientColor=#70BA70
headerTextColor=#FFFFFF
headerWeightFont=bold
generalBackgroundColor=#f1f1f1
generalTextColor=#000000
generalSizeFont=18px
generalFamilyFont=Arial, Verdana, sans-serif
controlTextColor=#000000
controlBackgroundColor=#ffffff
additionalBackgroundColor=#E2F6E2
shadowBackgroundColor=#000000
shadowOpacity=1
panelBorderColor=#C0C0C0
subBorderColor=#ffffff
tabBackgroundColor=#ADCDAD
tabDisabledTextColor=#67AA67
trimColor=#BBECBB
tipBackgroundColor=#FAE6B0
tipBorderColor=#E5973E
selectControlColor=#FF9409
generalLinkColor=#43BD43
hoverLinkColor=#FF9409
visitedLinkColor=#43BD43
# Fonts
headerSizeFont=18px
headerFamilyFont=Arial, Verdana, sans-serif
tabSizeFont=11
tabFamilyFont=Arial, Verdana, sans-serif
buttonSizeFont=18
buttonFamilyFont=Arial, Verdana, sans-serif
tableBackgroundColor=#FFFFFF
tableFooterBackgroundColor=#cccccc
tableSubfooterBackgroundColor=#f1f1f1
tableBorderColor=#C0C0C0
tableBorderWidth=2px
#Calendar colors
calendarWeekBackgroundColor=#f5f5f5
calendarHolidaysBackgroundColor=#FFEBDA
calendarHolidaysTextColor=#FF7800
calendarCurrentBackgroundColor=#FF7800
calendarCurrentTextColor=#FFEBDA
calendarSpecBackgroundColor=#E2F6E2
calendarSpecTextColor=#000000
warningColor=#FFE6E6
warningBackgroundColor=#FF0000
editorBackgroundColor=#F1F1F1
editBackgroundColor=#FEFFDA
#Gradients
gradientType=plain

In order to test it, let's open our application project, create a file called mySkin.skin.properties inside the directory /resources/WEB-INF/, and add the above text.
Then, let's open the build.xml file and add the following code to the war target:

<copy tofile="${war.dir}/WEB-INF/classes/mySkin.skin.properties"
      file="${basedir}/resources/WEB-INF/mySkin.skin.properties"
      overwrite="true"/>

Also, as our application supports multiple skins, let's open the components.xml file and add support for it:

<property name="defaultSkin">mySkin</property>
<property name="availableSkins">
  <value>mySkin</value>
  <value>laguna</value>
  <value>darkX</value>
  <value>glassX</value>
  <value>blueSky</value>
  <value>classic</value>
  <value>ruby</value>
  <value>wine</value>
  <value>deepMarine</value>
  <value>emeraldTown</value>
  <value>japanCherry</value>
</property>

If you just want to select the new skin as the fixed skin, you would simply edit the web.xml file and set the new skin name in the context parameter (as explained before).

Just to make a (bad-looking, but understandable) example, let's change some parameters in the skin file:

#Colors
headerBackgroundColor=#005000
headerGradientColor=#70BA70
headerTextColor=#FFFFFF
headerWeightFont=bold
generalBackgroundColor=#f1f1f1
generalTextColor=#000000
generalSizeFont=18px
generalFamilyFont=Arial, Verdana, sans-serif
controlTextColor=#000000
controlBackgroundColor=#ffffff
additionalBackgroundColor=#E2F6E2
shadowBackgroundColor=#000000
shadowOpacity=1
panelBorderColor=#C0C0C0
subBorderColor=#ffffff
tabBackgroundColor=#ADCDAD
tabDisabledTextColor=#67AA67
trimColor=#BBECBB
tipBackgroundColor=#FAE6B0
tipBorderColor=#E5973E
selectControlColor=#FF9409
generalLinkColor=#43BD43
hoverLinkColor=#FF9409
visitedLinkColor=#43BD43
# Fonts
headerSizeFont=18px
headerFamilyFont=Arial, Verdana, sans-serif
tabSizeFont=11
tabFamilyFont=Arial, Verdana, sans-serif
buttonSizeFont=18
buttonFamilyFont=Arial, Verdana, sans-serif
tableBackgroundColor=#FFFFFF
tableFooterBackgroundColor=#cccccc
tableSubfooterBackgroundColor=#f1f1f1
tableBorderColor=#C0C0C0
tableBorderWidth=2px
#Calendar colors
calendarWeekBackgroundColor=#f5f5f5
calendarHolidaysBackgroundColor=#FFEBDA
calendarHolidaysTextColor=#FF7800
calendarCurrentBackgroundColor=#FF7800
calendarCurrentTextColor=#FFEBDA
calendarSpecBackgroundColor=#E2F6E2
calendarSpecTextColor=#000000
warningColor=#FFE6E6
warningBackgroundColor=#FF0000
editorBackgroundColor=#F1F1F1
editBackgroundColor=#FEFFDA
#Gradients
gradientType=plain

Here is the screenshot of what happened with the new skin:

How do I know which parameters to change? The official RichFaces Developer Guide contains, for every component, a table with the correspondences between the skin parameters and the CSS properties they are connected to.


Database, Active Record, and Model Tricks

Packt
11 Jul 2013
14 min read
Getting data from a database

Most applications today use databases. Be it a small website or a social network, at least some parts are powered by databases. Yii introduces three ways that allow you to work with databases:

Active Record
Query builder
SQL via DAO

We will use all these methods to get data from the film, film_actor, and actor tables and show it in a list. We will measure the execution time and memory usage to determine when to use these methods.

Getting ready

Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app
Download the Sakila database from the following URL: http://dev.mysql.com/doc/index-other.html
Execute the downloaded SQL files; first the schema, then the data.
Configure the DB connection in protected/config/main.php to use the Sakila database.
Use Gii to create models for the actor and film tables.

How to do it...

We will create protected/controllers/DbController.php as follows:

<?php
class DbController extends Controller
{
    protected function afterAction($action)
    {
        $time = sprintf('%0.5f', Yii::getLogger()->getExecutionTime());
        $memory = round(memory_get_peak_usage()/(1024*1024),2)."MB";
        echo "Time: $time, memory: $memory";
        parent::afterAction($action);
    }

    public function actionAr()
    {
        $actors = Actor::model()->findAll(array('with' => 'films',
            'order' => 't.first_name, t.last_name, films.title'));
        echo '<ol>';
        foreach($actors as $actor)
        {
            echo '<li>';
            echo $actor->first_name.' '.$actor->last_name;
            echo '<ol>';
            foreach($actor->films as $film)
            {
                echo '<li>';
                echo $film->title;
                echo '</li>';
            }
            echo '</ol>';
            echo '</li>';
        }
        echo '</ol>';
    }

    public function actionQueryBuilder()
    {
        $rows = Yii::app()->db->createCommand()
            ->from('actor')
            ->join('film_actor', 'actor.actor_id=film_actor.actor_id')
            ->leftJoin('film', 'film.film_id=film_actor.film_id')
            ->order('actor.first_name, actor.last_name, film.title')
            ->queryAll();
        $this->renderRows($rows);
    }

    public function actionSql()
    {
        $sql = "SELECT * FROM actor a
            JOIN film_actor fa ON fa.actor_id = a.actor_id
            JOIN film f ON fa.film_id = f.film_id
            ORDER BY a.first_name, a.last_name, f.title";
        $rows = Yii::app()->db->createCommand($sql)->queryAll();
        $this->renderRows($rows);
    }

    public function renderRows($rows)
    {
        $lastActorName = null;
        echo '<ol>';
        foreach($rows as $row)
        {
            $actorName = $row['first_name'].' '.$row['last_name'];
            if($actorName!=$lastActorName){
                if($lastActorName!==null){
                    echo '</ol>';
                    echo '</li>';
                }
                $lastActorName = $actorName;
                echo '<li>';
                echo $actorName;
                echo '<ol>';
            }
            echo '<li>';
            echo $row['title'];
            echo '</li>';
        }
        echo '</ol>';
    }
}

Here, we have three actions corresponding to three different methods of getting data from a database. After running the preceding db/ar, db/queryBuilder, and db/sql actions, you should get a tree showing 200 actors and the 1,000 films they have acted in, as shown in the following screenshot:

At the bottom there are statistics that give information about the memory usage and execution time. Absolute numbers can be different if you run this code, but the difference between the methods should be about the same:

Method           Memory usage (megabytes)   Execution time (seconds)
Active Record    19.74                      1.14109
Query builder    17.98                      0.35732
SQL (DAO)        17.74                      0.35038

How it works...

Let's review the preceding code. The actionAr action method gets model instances by using the Active Record approach.
We start with the Actor model generated with Gii to get all the actors, and specify 'with' => 'films' to get the corresponding films in a single query (eager loading) through the relation, which Gii builds for us from the InnoDB table foreign keys. We then simply iterate over all the actors and, for each actor, over each of their films, printing each item's name.

The actionQueryBuilder function uses the query builder. First, we create a query command for the current DB connection with Yii::app()->db->createCommand(). We then add query parts one by one with from, join, and leftJoin. These methods escape values, tables, and field names automatically. The queryAll function returns an array of raw database rows. Each row is also an array indexed with the result field names. We pass the result to renderRows, which renders it.

With actionSql, we do the same, except we pass SQL directly instead of adding its parts one by one. It's worth mentioning that we should escape parameter values manually with Yii::app()->db->quoteValue before using them in the query string.

The renderRows function renders the raw rows returned by the query builder and DAO. It requires you to add more checks and, generally, feels unnatural compared to rendering an Active Record result.

As we can see, all these methods give the same result in the end, but they all have different performance, syntax, and extra features. We will now do a comparison and figure out when to use each method:

Active Record
  Syntax: Will do SQL for you. Gii will generate models and relations for you. Works with models, completely OO-style, and has a very clean API. Produces an array of properly nested models as the result.
  Performance: Higher memory usage and execution time compared to SQL and query builder.
  Extra features: Quotes values and names automatically. Behaviors. Before/after hooks. Validation.
  Best for: Prototyping selects. Update, delete, and create actions for single models (the model gives a huge benefit when used with forms).

Query builder
  Syntax: Clean API, suitable for building queries on the fly. Produces raw data arrays as the result.
  Performance: Okay.
  Extra features: Quotes values and names automatically.
  Best for: Working with large amounts of data, building queries on the fly.

SQL (DAO)
  Syntax: Good for complex SQL. Manual quoting of values and keywords. Not very suitable for building queries on the fly. Produces raw data arrays as the result.
  Performance: Okay.
  Extra features: None.
  Best for: Complex queries you want to write in pure SQL while getting the maximum possible performance.

There's more...

In order to learn more about working with databases in Yii, refer to the following resources:

http://www.yiiframework.com/doc/guide/en/database.dao
http://www.yiiframework.com/doc/guide/en/database.query-builder
http://www.yiiframework.com/doc/guide/en/database.ar

See also

The Getting data from a database recipe
The Using CDbCriteria recipe

Defining and using multiple DB connections

Multiple database connections are not used very often for new standalone web applications. However, when you are building an add-on application for an existing system, you will most probably need another database connection. From this recipe you will learn how to define multiple DB connections and use them with DAO, the query builder, and Active Record models.

Getting ready

Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app
Create two MySQL databases named db1 and db2.
Create a table named post in db1 as follows:

DROP TABLE IF EXISTS `post`;
CREATE TABLE IF NOT EXISTS `post` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `title` VARCHAR(255) NOT NULL,
  `text` TEXT NOT NULL,
  PRIMARY KEY (`id`)
);

Create a table named comment in db2 as follows:

DROP TABLE IF EXISTS `comment`;
CREATE TABLE IF NOT EXISTS `comment` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `text` TEXT NOT NULL,
  `postId` INT(10) UNSIGNED NOT NULL,
  PRIMARY KEY (`id`)
);

How to do it...

We will start by configuring the DB connections. Open protected/config/main.php and define a primary connection as described in the official guide:

'db'=>array(
    'connectionString' => 'mysql:host=localhost;dbname=db1',
    'emulatePrepare' => true,
    'username' => 'root',
    'password' => '',
    'charset' => 'utf8',
),

Copy it, rename the db component to db2, and change the connection string accordingly. Also, you need to add the class name as follows:

'db2'=>array(
    'class'=>'CDbConnection',
    'connectionString' => 'mysql:host=localhost;dbname=db2',
    'emulatePrepare' => true,
    'username' => 'root',
    'password' => '',
    'charset' => 'utf8',
),

That is it. Now you have two database connections and you can use them with DAO and the query builder as follows:

$db1Rows = Yii::app()->db->createCommand($sql)->queryAll();
$db2Rows = Yii::app()->db2->createCommand($sql)->queryAll();

Now, if we need to use Active Record models, we first need to create Post and Comment models with Gii. Starting from Yii version 1.1.11, you can simply select an appropriate connection for each model. Now you can use the Comment model as usual. Create protected/controllers/DbtestController.php as follows:

<?php
class DbtestController extends CController
{
    public function actionIndex()
    {
        $post = new Post();
        $post->title = "Post #".rand(1, 1000);
        $post->text = "text";
        $post->save();

        echo '<h1>Posts</h1>';
        $posts = Post::model()->findAll();
        foreach($posts as $post) {
            echo $post->title."<br />";
        }

        $comment = new Comment();
        $comment->postId = $post->id;
        $comment->text = "comment #".rand(1, 1000);
        $comment->save();

        echo '<h1>Comments</h1>';
        $comments = Comment::model()->findAll();
        foreach($comments as $comment) {
            echo $comment->text."<br />";
        }
    }
}

Run dbtest/index multiple times and you should see records added to both databases, as shown in the following screenshot:

How it works...

In Yii you can add and configure your own components through the configuration file. For non-standard components, such as db2, you have to specify the component class. Similarly, you can add db3, db4, or any other component, for example, facebookApi. The remaining array key/value pairs are assigned to the component's public properties respectively.

There's more...

Depending on the RDBMS used, there are additional things we can do to make it easier to use multiple databases.

Cross-database relations

If you are using MySQL, it is possible to create cross-database relations for your models.
In order to do this, you should prefix the Comment model's table name with the database name as follows:

class Comment extends CActiveRecord
{
    //…
    public function tableName()
    {
        return 'db2.comment';
    }
    //…
}

Now, if you have a comments relation defined in the Post model's relations method, you can use the following code:

$posts = Post::model()->with('comments')->findAll();

Further reading

For further information, refer to the following URL:

http://www.yiiframework.com/doc/api/CActiveRecord

See also

The Getting data from a database recipe

Using scopes to get models for different languages

Internationalizing your application is not an easy task. You need to translate interfaces, translate messages, format dates properly, and so on. Yii helps you to do this by giving you access to the Common Locale Data Repository (CLDR) data of Unicode and by providing translation and formatting tools. When it comes to applications with data in multiple languages, you have to find your own way. From this recipe, you will learn a possible way to get a handy model function that will help to get blog posts for different languages.

Getting ready

Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app
Set up the database connection and create a table named post as follows:

DROP TABLE IF EXISTS `post`;
CREATE TABLE IF NOT EXISTS `post` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `lang` VARCHAR(5) NOT NULL DEFAULT 'en',
  `title` VARCHAR(255) NOT NULL,
  `text` TEXT NOT NULL,
  PRIMARY KEY (`id`)
);
INSERT INTO `post`(`id`,`lang`,`title`,`text`) VALUES
(1,'en_us','Yii news','Text in English'),
(2,'de','Yii Nachrichten','Text in Deutsch');

Generate a Post model using Gii.

How to do it...

Add the following methods to protected/models/Post.php:

class Post extends CActiveRecord
{
    public function defaultScope()
    {
        return array(
            'condition' => "lang=:lang",
            'params' => array(
                ':lang' => Yii::app()->language,
            ),
        );
    }

    public function lang($lang){
        $this->getDbCriteria()->mergeWith(array(
            'condition' => "lang=:lang",
            'params' => array(
                ':lang' => $lang,
            ),
        ));
        return $this;
    }
}

That is it. Now we can use our model. Create protected/controllers/DbtestController.php as follows:

<?php
class DbtestController extends CController
{
    public function actionIndex()
    {
        // Get posts written in the default application language
        $posts = Post::model()->findAll();
        echo '<h1>Default language</h1>';
        foreach($posts as $post) {
            echo '<h2>'.$post->title.'</h2>';
            echo $post->text;
        }

        // Get posts written in German
        $posts = Post::model()->lang('de')->findAll();
        echo '<h1>German</h1>';
        foreach($posts as $post) {
            echo '<h2>'.$post->title.'</h2>';
            echo $post->text;
        }
    }
}

Now, run dbtest/index and you should get an output similar to the one shown in the following screenshot:

How it works...

We have used Yii's Active Record scopes in the preceding code. The defaultScope function returns the default condition or criteria that will be applied to all the Post model query methods. As we need to specify the language explicitly, we create a scope named lang, which accepts the language name. With $this->getDbCriteria(), we get the model's criteria in its current state and then merge it with the new condition. As the condition is exactly the same as in defaultScope, except for the parameter value, it overrides the default scope. In order to support chained calls, lang returns the model instance by itself.

There's more...
For further information, refer to the following URLs:

http://www.yiiframework.com/doc/guide/en/database.ar
http://www.yiiframework.com/doc/api/CDbCriteria/

See also

The Getting data from a database recipe
The Using CDbCriteria recipe

Processing model fields with AR event-like methods

The Active Record implementation in Yii is very powerful and has many features. One of these features is event-like methods, which you can use to preprocess model fields before putting them into the database or getting them from the database, as well as to delete data related to the model, and so on. In this recipe, we will linkify all URLs in the post text, and we will list all the existing Active Record event-like methods.

Getting ready

Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app
Set up a database connection and create a table named post as follows:

DROP TABLE IF EXISTS `post`;
CREATE TABLE IF NOT EXISTS `post` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `title` VARCHAR(255) NOT NULL,
  `text` TEXT NOT NULL,
  PRIMARY KEY (`id`)
);

Generate the Post model using Gii.

How to do it...

Add the following method to protected/models/Post.php:

protected function beforeSave()
{
    $this->text = preg_replace('~((?:https?|ftps?)://.*?)( |$)~iu',
        '<a href="\1">\1</a>\2', $this->text);
    return parent::beforeSave();
}

That is it. Now, try saving a post containing a link. Create protected/controllers/TestController.php as follows:

<?php
class TestController extends CController
{
    function actionIndex()
    {
        $post = new Post();
        $post->title = 'links test';
        $post->text = 'test http://www.yiiframework.com/ test';
        $post->save();
        print_r($post->text);
    }
}

Run test/index. You should get the following:

How it works...

The beforeSave method is implemented in the CActiveRecord class and executed just before saving a model. By using a regular expression, we replace everything that looks like a URL with a link that uses this URL, and we call the parent implementation so that the real events are raised properly. In order to prevent saving, you can return false.

There's more...

There are more event-like methods available, as shown in the following list:

afterConstruct: called after a model instance is created by the new operator
beforeDelete/afterDelete: called before/after deleting a record
beforeFind/afterFind: invoked before/after each record is instantiated by a find method
beforeSave/afterSave: invoked before/after a record is saved successfully
beforeValidate/afterValidate: invoked before/after validation ends

Further reading

In order to learn more about using event-like methods in Yii, you can refer to the following URLs:

http://www.yiiframework.com/doc/api/CActiveRecord/
http://www.yiiframework.com/doc/api/CModel

See also

The Using Yii events recipe
The Highlighting code with Yii recipe
The Automating timestamps recipe
The Setting up an author automatically recipe