
How-To Tutorials

7019 Articles

Deploying the Orchestrator Appliance

Packt
16 Sep 2015
5 min read
This article by Daniel Langenhan, the author of VMware vRealize Orchestrator Essentials, discusses the deployment of the Orchestrator Appliance and then explains how to access it using the Orchestrator home page. In the following sections, we will discuss how to deploy Orchestrator in vCenter and with VMware Workstation.

Deploying the Appliance with vCenter

To make the best use of Orchestrator, it's best to deploy it into your vSphere infrastructure. For this, we deploy it with vCenter:

1. Open your vSphere Web Client and log in.
2. Select a host or cluster that should host the Orchestrator Appliance.
3. Right-click the host or cluster and select Deploy OVF Template.
4. The deploy wizard will start and ask you the typical OVF questions:
   - Accept the EULA.
   - Choose the VM name and the VM folder where it will be stored.
   - Select the storage and network it should connect to. Make sure that you select a static IP.

The Customize template step will now ask you about some more Orchestrator-specific details. You will be asked to provide a new password for the root user. The root user is used to connect to the vRO appliance operating system or the web console. The other password that is needed is for the vRO Configurator interface. The last piece of information needed is the network information for the new VM. The following screenshot shows an example of the Customize template step.

The last step summarizes all the settings and lets you power on the VM after creation. Click on Finish and wait until the VM is deployed and powered on.

Deploying the appliance into VMware Workstation

For learning how to use Orchestrator, or for testing purposes, you can deploy Orchestrator using VMware Workstation (Fusion for Mac users). The process is pretty simple:

1. Download the Orchestrator Appliance on to your desktop.
2. Double-click on the OVA file. The import wizard now asks you for a name and location in your local file structure for this VM.
3. Choose a location and click on Import.
4. Accept the EULA. Wait until the import has finished.
5. Click on Edit virtual machine settings.
6. Select Network Adapter. Choose the correct network (Bridged, NAT, or Host-only) for this VM. I typically use Host-only.
7. Click on OK to exit the settings.
8. Power on the VM.
9. Watch the boot screen. At some stage, the boot will stop and you will be prompted for the root password. Enter a new password and confirm it.
10. After a moment, you will be asked for the password for the Orchestrator Configurator. Enter a new password and confirm it.

After this, the boot process should finish, and you should see the Orchestrator Appliance DHCP IP. If you would like to configure the VM with a fixed IP, access the appliance configuration, as shown on the console screen (see the next section).

After the deployment

If the deployment is successful, the console of the VM should show a screen that looks like the following screenshot.

You can now access the Orchestrator Appliance, as shown in the next section.

Accessing Orchestrator

Orchestrator has its own small web server that can be accessed by any web browser.

Accessing the Orchestrator home page

We will now access the Orchestrator home page:

1. Open a web browser such as Mozilla Firefox, IE, or Google Chrome.
2. Enter the IP or FQDN of the Orchestrator Appliance.
3. The Orchestrator home page will open. It looks like the following screenshot.

The home page contains some very useful links, as shown in the preceding screenshot. Here is an explanation of each numbered link:

1: Click here to start the Orchestrator Java Client. You can also access the Client directly by visiting https://[IP or FQDN]:8281/vco/client/client.jnlp.
2: Click here to download and install the Orchestrator Java Client locally.
3: Click here to access the Orchestrator Configurator, which is scheduled to disappear soon, whereupon we won't use it any more. The way forward will be the Orchestrator Control Center.
4: This is a selection of links that can be used to find helpful information and download plugins.
5: These are some additional links to VMware sites.

Starting the Orchestrator Client

Let's open the Orchestrator Client. We will use an internal user to log in until we have hooked up Orchestrator to SSO. For the Orchestrator Client, you need at least Java 7.

1. From the Orchestrator home page, click on Start Orchestrator Client.
2. Your Java environment will start. You may be required to acknowledge that you really want to start this application. You will now be greeted with the Orchestrator login screen.
3. Enter vcoadmin as the username and vcoadmin as the password. This is a preconfigured user that allows you to log in and use Orchestrator directly.
4. Click on Login. The Orchestrator Client will now load. After a moment, you will see something that looks like the following screenshot.

You are now logged in to the Orchestrator Client.

Summary

This article guided you through the process of deploying and accessing an Orchestrator Appliance with vCenter and VMware Workstation.

Further resources on this subject:
- Working with VMware Infrastructure
- Upgrading VMware Virtual Infrastructure Setups
- VMware vRealize Operations Performance and Capacity Management
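As a small illustration of the client URL pattern quoted above (https://[IP or FQDN]:8281/vco/client/client.jnlp), here is a sketch of a helper that builds the launch URL for a given appliance address. The class name and the example host are illustrative, not part of the product:

```java
// Illustrative helper (not part of Orchestrator): builds the Java Client
// launch URL for a given appliance address, following the pattern quoted
// on the Orchestrator home page.
public class VcoClientUrl {

    static String clientUrl(String hostOrIp) {
        return "https://" + hostOrIp + ":8281/vco/client/client.jnlp";
    }

    public static void main(String[] args) {
        // Placeholder appliance address for demonstration
        System.out.println(clientUrl("vco.example.com"));
        // prints: https://vco.example.com:8281/vco/client/client.jnlp
    }
}
```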

Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints, Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel.

WordPress is the world's most popular Content Management System (CMS), and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites, or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free!

Planning your video streaming site

The first step is to plan how you want to use your video site. Ask yourself a few questions:

- Will I be streaming all my video from YouTube?
- Will I be uploading any video manually?
- Will I be streaming from multiple sources?
- What kind of design do I want?
- Will I include any other types of content on my site?
- How will I record and upload my videos?
- Who is my target audience and how will I reach them?
- Do I want to make money from my videos?
- How often will I create videos and what will my recording and editing process be?
- What software and hardware will I need for recording and editing videos?

It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are.

Streaming from YouTube or uploading videos directly?
WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things, as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons:

- It saves on storage space in your site.
- It ensures your videos will play on any device people choose to view your site from.
- It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change.
- It can have massive SEO benefits if you use YouTube. YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site.

In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin.

Understanding copyright considerations

Before you start uploading video to YouTube, you need to understand what you're allowed to add and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it. If you've recorded a video from the TV or the web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it.
It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you run the risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process.

If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this, as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music generally have stricter rules than games, however. If you upload videos containing someone else's copyrighted video or music content that you don't have permission to reproduce, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too.

Creating a YouTube channel and uploading videos

So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos.

Creating your YouTube channel

Let's create a YouTube channel by following these steps:

1. If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp.
2. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet.
3. Go to https://www.youtube.com/channel_switcher.
4. Click on the Create a new channel button.
5. Follow the instructions onscreen to create your channel.
6. Customize your channel, uploading images for your profile photo or channel art and adding a description using the About tab. Here's my channel:

It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away.

Uploading videos

The next step is to upload some videos. YouTube accepts videos in the following formats: .MOV, .MPEG4, .AVI, .WMV, .MPEGPS, .FLV, 3GPP, and WebM.

Depending on the video software you've used to record, your video may already be in one of these formats, or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it.

You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407.

If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video, as this will help you categorize the videos in your site later.

Now, when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded. When you click on the Playlists tab, you'll see your new playlist. So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos.
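As a quick illustration of the format list above, here is a sketch of a simple extension check. The class name is ours, and the extension spellings (for example, treating "mp4" as the usual file extension for .MPEG4 content) are assumptions for illustration, not YouTube's official validation logic:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Illustrative sketch: checks a filename against the upload formats listed
// above. The extension set is an assumption derived from that list.
public class YouTubeFormatCheck {

    private static final Set<String> SUPPORTED = new HashSet<>(Arrays.asList(
            "mov", "mpeg4", "mp4", "avi", "wmv", "mpegps", "flv", "3gpp", "webm"));

    static boolean isSupported(String filename) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0 || dot == filename.length() - 1) {
            return false; // no extension to check
        }
        return SUPPORTED.contains(
                filename.substring(dot + 1).toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(isSupported("holiday.MOV"));     // true
        System.out.println(isSupported("raw-footage.mkv")); // false
    }
}
```

A file that fails a check like this would need converting to one of the supported formats before upload, as described above.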
Installing and configuring the YouTube plugin

Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer, you can use different channels for each category, or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts.

Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just the URL. But if you want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy.

Installing the Automatic YouTube Video Posts plugin

The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation, but you can also do this on your existing site if that's what you're working with. Follow these steps:

1. In the WordPress admin, go to Plugins | Add New.
2. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed.
3. Select the Automatic YouTube Video Posts plugin, and then install and activate it.

For the plugin to work, you'll need to configure its settings and add one or more channels or playlists.

Configuring the plugin settings

Let's start with the plugin settings screen. You access this via the Youtube Posts menu, which the plugin has added to your admin menu:

1. Go to Youtube Posts | Settings.
2. Edit the settings as follows:
   - Automatically publish posts: Set this to Yes
   - Display YouTube video meta: Set this to Yes
   - Number of words and Video dimensions: Leave these at the default values
   - Display related videos: Set this to No
   - Display videos in post lists: Set this to Yes
   - Import the latest videos every: Set this to 1 hours (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited)
3. Click on the Save changes button.

The settings screen will look similar to the following screenshot.

Adding a YouTube channel or playlist

The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier. But first, I'll create a category for all the videos from that playlist.

Creating a category for a playlist

Create a category for your playlist in the normal way:

1. In the WordPress admin, go to Posts | Categories.
2. Add the category name, and a slug or description if you want to (if you don't, WordPress will automatically create a slug).
3. Click on the Add New Category button.

Adding your channel or playlist to the plugin

Now you need to configure the plugin so that it creates posts in the category you've just created:

1. In the WordPress admin, go to Youtube Posts | Channels/Playlists.
2. Click on the Add New button.
3. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:
   - Name: Dizzy
   - Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv, and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.
   - Type: Select Channel or Playlist; I'm selecting Playlist.
   - Add videos from this channel/playlist to the following categories: Select the category you just created.
   - Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author.
4. Finally, click on the Add Channel button.

Once you click on the Add Channel button, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added:

The newly added playlist

If you like, you can add more channels or playlists and more categories. Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist:

Automatically added posts

Installing and configuring a suitable theme

You'll need a suitable theme for your site to make your videos stand out. I'm going to use the Keratin theme, which is grid-based with a right-hand sidebar. A grid-based theme works well, as people can see your videos on your home page and category pages.

Installing the theme

Let's install the theme:

1. Go to Appearance | Themes.
2. Click on the Add New button.
3. In the search box, type Keratin. The theme will be listed.
4. Click on the Install button.
5. When prompted, click on the Activate button.

The theme will now be displayed in your admin screen as active:

The installed and activated theme

Creating a navigation menu

Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot; or, if you haven't created a menu yet, you'll need to create one. Follow these steps:

1. Go to Appearance | Menus.
2. If you don't already have a menu, click on the Create Menu button and name your new menu.
3. Add your home page to the menu, along with any category pages you've created, by clicking on the Categories metabox on the left-hand side.
4. Once everything is in the right place in your menu, click on the Save Menu button.
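Going back to the playlist ID we copied out of the YouTube URL when configuring the plugin: that manual step of grabbing the text after &list= can be sketched as a small helper. The class name is illustrative, not part of the plugin:

```java
// Illustrative helper: extracts the "list" query parameter (the playlist ID)
// from a YouTube watch URL, mirroring the manual copy-paste step described
// in the plugin setup above.
public class PlaylistId {

    static String extract(String url) {
        int q = url.indexOf('?');
        if (q < 0) {
            return null; // no query string, so no playlist ID
        }
        for (String param : url.substring(q + 1).split("&")) {
            if (param.startsWith("list=")) {
                return param.substring("list=".length());
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String url = "https://www.youtube.com/watch?v=vd128vVQc6Y"
                + "&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv";
        System.out.println(extract(url)); // PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv
    }
}
```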
Your Menus screen will look something similar to this. Now that you have a menu, let's take a look at the site:

The live site

That's looking good, but I'd like to add some text in the sidebar instead of the default content.

Adding a text widget to the sidebar

Let's add a text widget with some information about the site:

1. In the WordPress admin, go to Appearance | Widgets.
2. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar.
3. Give the widget a title.
4. Type the following text into the widget's contents, replacing the link I've added here with a link to your own channel:

   Welcome to this video site. To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>.

Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively, if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post.

Now go back to your live site to see how things are looking:

The live site with a text widget added

It's looking much better! If you click on one of these videos, you're taken to the post for that video. Your site is now ready.

Managing and updating your videos

The great thing about using this plugin is that once you've set it up, you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site. If you want to add extra content to the posts holding your videos, you can do so.
Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed alongside the videos. If you want to create new playlists in the future, you just do this in YouTube, then create a new category on your site and add the playlist in the settings for the plugin, assigning the new playlist to the relevant category.

You can upload your videos to YouTube in a variety of ways: via the YouTube website, or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same.

Good luck with your video site. I hope it gets you lots of views!

Summary

In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel, added videos and playlists to it, and then set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed and configured a suitable theme, creating categories for your channels and adding these to your navigation menu.

Further resources on this subject:
- Adding Geographic Capabilities via the GeoPlaces Theme
- Adding Flash to your WordPress Theme

Implementing Microsoft Dynamics AX

Packt
16 Sep 2015
6 min read
In this article by Yogesh Kasat and JJ Yadav, authors of the book Microsoft Dynamics AX Implementation Guide, you will learn about one of the most important topics in the Microsoft Dynamics AX implementation process: configuration data management.

The configuration of an ERP system is one of the most important parts of the process. Configuration means setting up the base data and parameters to enable product features such as financials, shipping, sales tax, and so on. Microsoft Dynamics AX has been developed based on the generic requirements of various organizations and contains business processes belonging to diverse business segments. It is a very configurable product that allows the implementation team to configure features based on specific business needs. During the project, the implementation team identifies the relevant components of the system and sets up and aligns these components to meet the specific business requirements. This process starts in the analysis phase of the project and carries on through the design, development, and deployment phases.

Configuration management is different from data migration. Data migration broadly covers the transactional data of the legacy system and core data such as opening balances, open AR, open AP, customers, vendors, and so on. When we talk about configuration management, we are referring to items like fiscal years and periods, the chart of accounts, segments and their applicable rules, journal types, customer groups, terms of payment, module-based parameters, workflows, number sequences, and the like. In a broader sense, configuration covers the basic parameters, setup data, and reference data that you configure for the different modules in Dynamics AX. The following diagram shows the different phases of configuration management.

In any ERP implementation project, you deal with multiple environments.
For example, you start with CRP; after development you move to the test environment, and then to training, UAT, and production, as shown in the following diagram.

One of the biggest challenges that an implementation team faces is moving the configuration from one environment to another. If configurations keep changing in every environment, it becomes more difficult to manage them. Similar to code promotion and release management across environments, configuration changes need to be tracked through a change-control process to ensure that you are testing with a consistent set of configurations. The objective is to keep track of all the configuration changes and make sure that they make it to the final cut in the production environment. The following sections outline some approaches used for configuration data management in Dynamics AX projects.

The golden environment

The golden environment is a pristine environment without any transactions; it is sometimes referred to as a stage or pre-prod environment. Create the configurations from scratch and/or use various tools to create and update the configuration data. Develop a process to update the configuration in the golden environment once it has been changed and approved in the test environments. The golden environment can be turned into the production environment, or its data can be copied over to the production environment using a database restore.

The golden environment database can also be used as a starting point for every run of data migration. For example, if you are preparing for UAT, use the golden environment database as a starting point: copy it to UAT and perform data migration in your UAT environment. This ensures that you are testing with the golden configurations each time (if a configuration is missing in the golden environment, you will be able to catch it during testing and fix both your UAT environment and the golden environment).
The pros of the golden environment are as follows:

- It is a single environment for controlling the configuration data.
- It lets you use all the tools available for the initial configuration.
- There is less chance of the configuration data being corrupted.

The cons of the golden environment are as follows:

- There is a risk of missing configuration updates when the process isn't followed (for example, when configuration updates are made directly in the testing and UAT environments).
- There is a chance of migrating revision data into the production environment, such as workflow history, address revisions, and policy versions.
- There is a risk of migrating environment-specific data from the golden environment to the production environment.
- It is not useful for a project going live in multiple phases, as you will not be able to transfer the incremental configuration data using a database restore.
- You must keep the environment in sync with the latest code.

Copying the template company

In this approach, the implementation team typically defines a template legal entity and configures the template company from scratch. Once completed, the template company's configuration data is copied over to the actual legal entity using the data export/import process. This approach is useful for projects going live in multiple phases, where a global template is created and used across different legal entities. However, in AX 2012, a lot of configuration data is shared, which makes it almost impossible to copy the company data.

Building configuration templates

In this approach, the implementation team typically builds a repository of all the configurations in a file, imports it into each subsequent environment, and finally into the production environment. The pros of building configuration templates are as follows:

- It is a clean approach.
- You can version-control the configuration file.
- It is very useful for projects going live in multiple phases, as you can import the incremental configuration data in subsequent releases.

The main drawback is that this approach may need significant development effort to create the X++ scripts or DIXF custom entities required to import all the configurations.

Summary

Clearly, there are several options to choose from for configuration data management, each with its own pros and cons. While building configuration templates is the ideal solution for configuration data management, it can be costly, as it may need significant development effort to build custom entities to export and import data across environments. The golden environment process is widely used on implementation projects, as it is easy to manage and requires minimal development team involvement.

Further resources on this subject:
- Web Services and Forms
- Setting Up and Managing E-mails and Batch Processing
- Integrating Microsoft Dynamics GP Business Application Fundamentals

Writing Custom Spring Boot Starters

Packt
16 Sep 2015
10 min read
In this article by Alex Antonov, author of the book Spring Boot Cookbook, we will cover the following topics:

- Understanding Spring Boot autoconfiguration
- Creating a custom Spring Boot autoconfiguration starter

Introduction

It's time to take a look behind the scenes, find out the magic behind Spring Boot autoconfiguration, and write some starters of our own as well. This is a very useful capability to possess, especially in large software enterprises, where the presence of proprietary code is inevitable and it is very helpful to be able to create internal custom starters that automatically add some configuration or functionality to applications. Some likely candidates are custom configuration systems, libraries, and configurations that deal with connecting to databases, using custom connection pools, HTTP clients, servers, and so on. We will go through the internals of Spring Boot autoconfiguration, take a look at how new starters are created, explore conditional initialization and wiring of beans based on various rules, and see that annotations can be a powerful tool that gives the consumers of starters more control over dictating which configurations should be used and where.

Understanding Spring Boot autoconfiguration

Spring Boot has a lot of power when it comes to bootstrapping an application and configuring it with exactly the things that are needed, all without much of the glue code that is usually required of us, the developers. The secret behind this power actually comes from Spring itself, or rather from the Java Configuration functionality that it provides. As we add more starters as dependencies, more and more classes will appear in our classpath.
Spring Boot detects the presence or absence of specific classes and, based on this information, makes some decisions (which are fairly complicated at times) and automatically creates and wires the necessary beans into the application context. Sounds simple, right?

How to do it…

Conveniently, Spring Boot provides us with the ability to get an AUTO-CONFIGURATION REPORT by simply starting the application with the debug flag. This can be passed to the application either as an environment variable, DEBUG; as a system property, -Ddebug; or as an application property, --debug.

Start the application by running DEBUG=true ./gradlew clean bootRun.

Now, if you look at the console logs, you will see a lot more information printed there, marked with the DEBUG log level. At the end of the startup log sequence, we will see the AUTO-CONFIGURATION REPORT, as follows:

    =========================
    AUTO-CONFIGURATION REPORT
    =========================

    Positive matches:
    -----------------
    …
    DataSourceAutoConfiguration
        - @ConditionalOnClass classes found: javax.sql.DataSource,org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType (OnClassCondition)
    …

    Negative matches:
    -----------------
    …
    GsonAutoConfiguration
        - required @ConditionalOnClass classes not found: com.google.gson.Gson (OnClassCondition)
    …

How it works…

As you can see, the amount of information printed in debug mode can be somewhat overwhelming, so I've selected only one example each of a positive and a negative match. For each line of the report, Spring Boot tells us why certain configurations were selected for inclusion, what they were positively matched on, or, for the negative matches, what was missing that prevented a particular configuration from being included in the mix.
Let's look at the positive match for DataSourceAutoConfiguration: The @ConditionalOnClass classes found line tells us that Spring Boot has detected the presence of a particular class, specifically two classes in our case: javax.sql.DataSource and org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType. The OnClassCondition indicates the kind of matching that was used. This is supported by the @ConditionalOnClass and @ConditionalOnMissingClass annotations. While OnClassCondition is the most common kind of detection, Spring Boot also uses many other conditions. For example, OnBeanCondition is used to check the presence or absence of specific bean instances, OnPropertyCondition is used to check the presence, absence, or a specific value of a property, as well as any number of custom conditions that can be defined using the @Conditional annotation and Condition interface implementations. The negative matches show us a list of configurations that Spring Boot has evaluated, which means that they do exist in the classpath and were scanned by Spring Boot but didn't pass the conditions required for their inclusion. GsonAutoConfiguration, while available in the classpath as part of the imported spring-boot-autoconfigure artifact, was not included because the required com.google.gson.Gson class was not detected in the classpath, thus failing the OnClassCondition. The implementation of the GsonAutoConfiguration file looks as follows:

@Configuration
@ConditionalOnClass(Gson.class)
public class GsonAutoConfiguration {
    @Bean
    @ConditionalOnMissingBean
    public Gson gson() {
        return new Gson();
    }
}

After looking at the code, it is very easy to make the connection between the conditional annotations and the report information that is provided by Spring Boot at start time.
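The essence of an OnClassCondition check, namely "is this class loadable from the current classpath?", can be sketched in a few lines of plain Java. This is an illustration of the idea, not Spring Boot's actual implementation:

```java
// Minimal sketch of a class-presence check, the kind of test that
// backs @ConditionalOnClass / @ConditionalOnMissingClass.
public class ClassPresenceCheck {

    // Try to load the class without initializing it; presence in the
    // classpath is all we care about.
    static boolean isPresent(String className) {
        try {
            Class.forName(className, false, ClassPresenceCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // javax.sql.DataSource ships with the JDK, so it is found;
        // Gson is absent unless the Gson JAR is on the classpath.
        System.out.println("DataSource present: " + isPresent("javax.sql.DataSource"));
        System.out.println("Gson present: " + isPresent("com.google.gson.Gson"));
    }
}
```

Run against the report shown earlier, this mirrors why DataSourceAutoConfiguration matched while GsonAutoConfiguration did not.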
Creating a custom Spring Boot autoconfiguration starter

We have a high-level idea of the process by which Spring Boot decides which configurations to include in the formation of the application context. Now, let's take a stab at creating our own Spring Boot starter artifact, which we can include as an autoconfigurable dependency in our build. Let's build a simple starter that will create another CommandLineRunner that will take the collection of all the Repository instances and print out the count of the total entries for each. We will start by adding a child Gradle project to our existing project that will house the codebase for the starter artifact. We will call it db-count-starter.

How to do it…

We will start by creating a new directory named db-count-starter in the root of our project. As our project has now become what is known as a multiproject build, we will need to create a settings.gradle configuration file in the root of our project with the following content:

include 'db-count-starter'

We should also create a separate build.gradle configuration file for our subproject in the db-count-starter directory in the root of our project with the following content:

apply plugin: 'java'

repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
}

dependencies {
    compile("org.springframework.boot:spring-boot:1.2.3.RELEASE")
    compile("org.springframework.data:spring-data-commons:1.9.2.RELEASE")
}

Now we are ready to start coding. So, the first thing is to create the directory structure, src/main/java/org/test/bookpubstarter/dbcount, in the db-count-starter directory in the root of our project.
In the newly created directory, let's add our implementation of the CommandLineRunner file named DbCountRunner.java with the following content:

public class DbCountRunner implements CommandLineRunner {
    protected final Log logger = LogFactory.getLog(getClass());

    private Collection<CrudRepository> repositories;

    public DbCountRunner(Collection<CrudRepository> repositories) {
        this.repositories = repositories;
    }

    @Override
    public void run(String... args) throws Exception {
        repositories.forEach(crudRepository ->
            logger.info(String.format("%s has %s entries",
                getRepositoryName(crudRepository.getClass()),
                crudRepository.count())));
    }

    private static String getRepositoryName(Class crudRepositoryClass) {
        for(Class repositoryInterface : crudRepositoryClass.getInterfaces()) {
            if (repositoryInterface.getName().startsWith("org.test.bookpub.repository")) {
                return repositoryInterface.getSimpleName();
            }
        }
        return "UnknownRepository";
    }
}

With the actual implementation of DbCountRunner in place, we will now need to create the configuration object that will declaratively create an instance during the configuration phase. So, let's create a new class file called DbCountAutoConfiguration.java with the following content:

@Configuration
public class DbCountAutoConfiguration {
    @Bean
    public DbCountRunner dbCountRunner(Collection<CrudRepository> repositories) {
        return new DbCountRunner(repositories);
    }
}

We will also need to tell Spring Boot that our newly created JAR artifact contains the autoconfiguration classes. For this, we will need to create a resources/META-INF directory in the db-count-starter/src/main directory in the root of our project.
In this newly created directory, we will place the file named spring.factories with the following content:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.test.bookpubstarter.dbcount.DbCountAutoConfiguration

For the purpose of our demo, we will add the dependency to our starter artifact in the main project's build.gradle by adding the following entry in the dependencies section:

compile project(':db-count-starter')

Start the application by running ./gradlew clean bootRun. Once the application is compiled and has started, we should see the following in the console logs:

2015-04-05 INFO org.test.bookpub.StartupRunner : Welcome to the Book Catalog System!
2015-04-05 INFO o.t.b.dbcount.DbCountRunner : AuthorRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner : PublisherRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner : BookRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner : ReviewerRepository has 0 entries
2015-04-05 INFO org.test.bookpub.BookPubApplication : Started BookPubApplication in 8.528 seconds (JVM running for 9.002)
2015-04-05 INFO org.test.bookpub.StartupRunner : Number of books: 1

How it works…

Congratulations! You have now built your very own Spring Boot autoconfiguration starter. First, let's quickly walk through the changes that we made to our Gradle build configuration, and then we will examine the starter setup in detail. As the Spring Boot starter is a separate, independent artifact, just adding more classes to our existing project source tree would not really demonstrate much. To make this separate artifact, we had a few choices: making a separate Gradle configuration in our existing project or creating a completely separate project altogether. The most ideal solution, however, was to just convert our build to a Gradle multi-project build by adding a nested project directory and subproject dependency to build.gradle of the root project.
By doing this, Gradle actually creates a separate artifact JAR for us, but we don't have to publish it anywhere, only include it as a compile project(':db-count-starter') dependency. For more information about Gradle multi-project builds, you can check out the manual at http://gradle.org/docs/current/userguide/multi_project_builds.html. A Spring Boot autoconfiguration starter is nothing more than a regular Spring Java Configuration class annotated with the @Configuration annotation, together with the presence of spring.factories in the classpath in the META-INF directory with the appropriate configuration entries. During the application startup, Spring Boot uses SpringFactoriesLoader, which is a part of Spring Core, in order to get a list of the Spring Java Configurations that are configured for the org.springframework.boot.autoconfigure.EnableAutoConfiguration property key. Under the hood, this call collects all the spring.factories files located in the META-INF directory from all the jars or other entries in the classpath and builds a composite list to be added as application context configurations. In addition to the EnableAutoConfiguration key, we can declare other automatically initialized startup implementations in a similar fashion:

org.springframework.context.ApplicationContextInitializer
org.springframework.context.ApplicationListener
org.springframework.boot.SpringApplicationRunListener
org.springframework.boot.env.PropertySourceLoader
org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider
org.springframework.test.context.TestExecutionListener

Ironically enough, a Spring Boot starter does not need to depend on the Spring Boot library as its compile-time dependency. If we look at the list of class imports in the DbCountAutoConfiguration class, we will not see anything from the org.springframework.boot package.
The only reason that we have a dependency declared on Spring Boot is that our implementation of DbCountRunner implements the org.springframework.boot.CommandLineRunner interface.

Summary

Resources for Article:

Further resources on this subject: Welcome to the Spring Framework [article] Time Travelling with Spring [article] Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging [article]
Packt
16 Sep 2015
16 min read

Virtualization

This article by Skanda Bhargav, the author of Troubleshooting Ubuntu Server, deals with virtualization techniques—why virtualization is important and how administrators can install and serve users with services via virtualization. We will learn about KVM, Xen, and Qemu. So sit back and let's take a spin into the virtual world of Ubuntu. (For more resources related to this topic, see here.)

What is virtualization?

Virtualization is a technique by which you can convert a set of files into a live running machine with an OS. It is easy to set up one machine, and much easier to clone and replicate the same machine across hardware. Also, each of the clones can be customized based on requirements. We will look at setting up a virtual machine using Kernel-based Virtual Machine, Xen, and Qemu in the sections that follow. Today, people are using the power of virtualization in different situations and environments. Developers use virtualization in order to have an independent environment in which to safely test and develop applications without affecting other working environments. Administrators are using virtualization to separate services and also to commission or decommission services as and when required or requested. By default, Ubuntu supports the Kernel-based Virtual Machine (KVM), which has built-in extensions for AMD and Intel-based processors. Xen and Qemu are the suggested options where you have hardware that does not have virtualization extensions.

libvirt

The libvirt library is an open source library that is helpful for interfacing with different virtualization technologies. One small task before starting with libvirt is to check your hardware's support for KVM extensions. The command to do so is as follows:

kvm-ok

You will see a message stating whether or not your CPU supports hardware virtualization. An additional task would be to verify the BIOS settings for virtualization and activate it.
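Under the hood, the kvm-ok check largely boils down to looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A simplified stand-in for that check might look like the following; note that the real tool also verifies /dev/kvm and BIOS state, which this sketch does not:

```shell
#!/bin/sh
# Check a cpuinfo dump for hardware virtualization flags.
# Defaults to the live /proc/cpuinfo when no file is given.
has_virt_flags() {
    grep -Eq '(vmx|svm)' "${1:-/proc/cpuinfo}"
}

if has_virt_flags "$@"; then
    echo "CPU advertises virtualization extensions"
else
    echo "no virtualization extensions found; check your BIOS settings"
fi
```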
Installation

Use the following command to install the package for libvirt:

sudo apt-get install kvm libvirt-bin

Next, you will need to add the user to the libvirtd group. This will ensure that the user gets additional options for networking. The command is as follows:

sudo adduser $USER libvirtd

We are now ready to install a guest OS. Its installation is very similar to that of installing a normal OS on the hardware. If your virtual machine needs a graphical user interface (GUI), you can make use of an application called virt-viewer and connect to the virtual machine's console using VNC. We will discuss virt-viewer and its uses in the later sections of this article.

virt-install

virt-install is a part of the python-virtinst package. The command to install this package is as follows:

sudo apt-get install python-virtinst

One of the ways of using virt-install is as follows:

sudo virt-install -n new_my_vm -r 256 -f new_my_vm.img -s 4 -c jeos.iso --accelerate --connect=qemu:///system --vnc --noautoconsole -v

Let's understand the preceding command part by part:

-n: This specifies the name of the virtual machine that will be created
-r: This specifies the RAM amount in MBs
-f: This is the path for the virtual disk
-s: This specifies the size of the virtual disk
-c: This is the file to be used as a virtual CD, but it can be an .iso file as well
--accelerate: This is used to make use of kernel acceleration technologies
--vnc: This exports the guest console via VNC
--noautoconsole: This disables autoconnect for the virtual machine console
-v: This creates a fully virtualized guest

Once virt-install is launched, you may connect to the console with the virt-viewer utility from remote connections or locally using the GUI. Use \ to wrap long text to the next line.

virt-clone

One of the applications used to clone one virtual machine to another is virt-clone. Cloning is the process of creating an exact replica of the virtual machine that you currently have.
Cloning is helpful when you need a lot of virtual machines with the same configuration. Here is an example of cloning a virtual machine:

sudo virt-clone -o my_vm -n new_vm_clone -f /path/to/new_vm_clone.img --connect=qemu:///system

Let's understand the preceding command part by part:

-o: This is the original virtual machine that you want to clone
-n: This is the new virtual machine name
-f: This is the new virtual machine's file path
--connect: This specifies the hypervisor to be used

Managing the virtual machine

Let's see how to manage the virtual machine we installed using virt.

virsh

Numerous utilities are available for managing virtual machines and libvirt; virsh is one such utility that can be used via the command line. Here are a few examples:

The following command lists the running virtual machines:

virsh -c qemu:///system list

The following command starts a virtual machine:

virsh -c qemu:///system start my_new_vm

The following command starts a virtual machine at boot:

virsh -c qemu:///system autostart my_new_vm

The following command restarts a virtual machine:

virsh -c qemu:///system reboot my_new_vm

You can save the state of a virtual machine to a file, and it can be restored later. Note that once you save the virtual machine, it will not be running anymore. The following command saves the state of the virtual machine:

virsh -c qemu:///system save my_new_vm my_new_vm-290615.state

The following command restores a virtual machine from the saved state:

virsh -c qemu:///system restore my_new_vm-290615.state

The following command shuts down a virtual machine:

virsh -c qemu:///system shutdown my_new_vm

The following command mounts a CD-ROM in the virtual machine:

virsh -c qemu:///system attach-disk my_new_vm /dev/cdrom /media/cdrom

The virtual machine manager

A GUI-type utility for managing virtual machines is virt-manager. You can manage both local and remote virtual machines.
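The tabular output of the virsh list command shown above lends itself to scripting. For instance, the names of the running domains can be extracted with awk; the sample text below is only illustrative of the table's fixed layout, and the domain names are made up:

```shell
#!/bin/sh
# Pull domain names (second column) out of `virsh list` output,
# skipping the two header lines.
running_domains() {
    awk 'NR > 2 && NF >= 2 { print $2 }'
}

# Illustrative sample of what `virsh -c qemu:///system list` prints:
sample=' Id    Name                           State
----------------------------------------------------
 1     my_new_vm                      running
 3     another_vm                     running'

printf '%s\n' "$sample" | running_domains
```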
The command to install the package is as follows:

sudo apt-get install virt-manager

The virt-manager works in a GUI environment. Hence, it is advisable to install it on a remote machine other than the production cluster, as the production cluster should be used for doing the main tasks. The command to connect the virt-manager to a local server running libvirt is as follows:

virt-manager -c qemu:///system

If you want to connect the virt-manager from a different machine, then you first need to have SSH connectivity. This is required as libvirt will ask for a password on the machine. Once you have set up passwordless authentication, use the following command to connect the manager to the server:

virt-manager -c qemu+ssh://virtnode1.ubuntuserver.com/system

Here, the virtualization server is identified with the hostname ubuntuserver.com.

The virtual machine viewer

A utility for connecting to your virtual machine's console is virt-viewer. This requires a GUI to work with the virtual machine. Use the following command to install virt-viewer:

sudo apt-get install virt-viewer

Now, connect to your virtual machine console from your workstation using the following command:

virt-viewer -c qemu:///system my_new_vm

You may also connect to a remote host using SSH passwordless authentication with the following command:

virt-viewer -c qemu+ssh://virtnode4.ubuntuserver.com/system my_new_vm

JeOS

JeOS, short for Just Enough Operating System, is pronounced "juice" and is an operating system in the Ubuntu flavor. It is specially built for running virtual applications. JeOS is no longer available as a downloadable ISO CD-ROM. However, you can pick either of the following approaches:

Get a server ISO of the Ubuntu OS. While installing, hit F4 on your keyboard. You will see a list of items; select the one that reads Minimal installation. This will install the JeOS variant.
Build your own copy with vmbuilder from Ubuntu.

The kernel of JeOS is specifically tuned to run in virtual environments.
It is stripped of unwanted packages and has only the base ones. JeOS takes advantage of the technological advancement in VMware products. A powerful combination of limited size and performance optimization is what makes JeOS a preferred OS over a full server OS in a large virtual installation. Also, with this OS being so light, the updates and security patches will be small and limited to this variant. So, users who are running their virtual applications on JeOS will have less maintenance to worry about compared to a full server OS installation.

vmbuilder

The second way of getting JeOS is by building your own copy of Ubuntu; you need not download any ISO from the Internet. The beauty of vmbuilder is that it will get the packages and tools based on your requirements and then build a virtual machine with them, and the whole process is quick and easy. Essentially, vmbuilder is a script that automates the process of creating a virtual machine, which can be easily deployed. Currently, the virtual machines built with vmbuilder are supported on the KVM and Xen hypervisors. Using command-line arguments, you can specify what additional packages you require, remove the ones that you feel aren't necessary for your needs, select the Ubuntu version, and do much more. Some developers and admins contributed to vmbuilder and changed the design specifics, but kept the commands the same. Some of the goals were as follows:

Reusability by other distributions
A plugin feature for interactions, so people can add logic for other environments
A web interface along with the CLI for easy access and maintenance

Setup

Firstly, we will need to set up libvirt and KVM before we use vmbuilder. libvirt was covered in the previous section. Let's now look at setting up KVM on your server. We will install some additional packages along with the KVM package, one of which enables X server on the machine.
The command that you will need to run on your Ubuntu server is as follows:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

Let's look at what each of the packages means:

libvirt-bin: This is used by libvirtd for administration of KVM and Qemu
qemu-kvm: This runs in the background
ubuntu-vm-builder: This is a tool for building virtual machines from the command line
bridge-utils: This enables networking for various virtual machines

Adding users to groups

You will have to add the user to the libvirtd group; this will enable them to run virtual machines. The command to add the current user is as follows:

sudo adduser `id -un` libvirtd

Installing vmbuilder

Download the latest vmbuilder, called python-vm-builder. You may also use the older ubuntu-vm-builder, but there are slight differences in the syntax. The command to install python-vm-builder is as follows:

sudo apt-get install python-vm-builder

Defining the virtual machine

While defining the virtual machine that you want to build, you need to take care of the following two important points:

Do not assume that the end user will know the technicalities of extending the disk size of the virtual machine if the need arises. Either have a large virtual disk so that the application can grow or document the process to do so. However, it would be better to have your data stored in an external storage device.
Allocating RAM is fairly simple. But remember that you should allocate your virtual machine an amount of RAM that is safe to run your application.

To check the list of parameters that vmbuilder provides, use the following command:

vmbuilder --help

The two main parameters are the virtualization technology, also known as the hypervisor, and the targeted distribution. The distribution we are using is Ubuntu 14.04, also known by its codename, trusty.
The command to check the release version is as follows:

lsb_release -a

Let's build a virtual machine on the same version of Ubuntu. Here's an example of building a virtual machine with vmbuilder:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system

Now, we will discuss what the parameters mean:

--suite: This specifies which Ubuntu release we want the virtual machine built on
--flavour: This specifies which virtual kernel to use to build the JeOS image
--arch: This specifies the processor architecture (64 bit or 32 bit)
-o: This overwrites the previous version of the virtual machine image
--libvirt: This adds the virtual machine to the list of available virtual machines

Now that we have created a virtual machine, let's look at the next steps.

JeOS installation

We will examine the settings that are required to get our virtual machine up and running.

IP address

A good practice for assigning IP addresses to virtual machines is to set a fixed IP address, usually from the private pool, and then include this information as part of the documentation. We will define an IP address with the following parameters:

--ip (address): This is the IP address in dotted form
--mask (value): This is the IP mask in dotted form (default is 255.255.255.0)
--net (value): This is the IP net address (default is X.X.X.0)
--bcast (value): This is the IP broadcast (default is X.X.X.255)
--gw (address): This is the gateway address (default is X.X.X.1)
--dns (address): This is the name server address (default is X.X.X.1)

Our command looks like this now:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10

You may have noticed that we have assigned only the IP, and all the others will take the default value.

Enabling the bridge

We will have to enable the bridge for our virtual machines, as various remote hosts will have to access the applications.
We will configure libvirt and modify the vmbuilder template to do so. First, create the template hierarchy and copy the default template into this folder:

mkdir -p VMBuilder/plugins/libvirt/templates
cp /etc/vmbuilder/libvirt/* VMBuilder/plugins/libvirt/templates/

Use your favorite editor and modify the following lines in the VMBuilder/plugins/libvirt/templates/libvirtxml.tmpl file:

<interface type='network'>
<source network='default'/>
</interface>

Replace these lines with the following lines:

<interface type='bridge'>
<source bridge='br0'/>
</interface>

Partitions

You have to allocate partitions to applications for their data storage and working. It is normal to have a separate storage space for each application in /var. The option provided by vmbuilder for this is --part:

--part PATH

vmbuilder will read the file from the PATH parameter and consider each line as a separate partition. Each line has two entries, mountpoint and size, where size is defined in MBs and is the maximum limit defined for that mountpoint. For this particular exercise, we will create a new file named vmbuilder.partition and enter the following lines to create the partitions:

root 6000
swap 4000
---
/var 16000

Also, please note that different disks are identified by the delimiter ---. Now, the command should look like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition

Use \ to wrap long text to the next line.

Setting the user and password

We have to define a user and a password in order for the user to log in to the virtual machine after startup. For now, let's use a generic user identified as user and the password password. We can ask the user to change the password after first login.
The following parameters are used to set the username and password:

--user (username): This sets the username (default is ubuntu)
--name (fullname): This sets a name for the user (default is ubuntu)
--pass (password): This sets the password for the user (default is ubuntu)

So, now our command will be as follows:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password

Final steps in the installation – first boot

There are certain things that will need to be done at the first boot of a machine. We will install openssh-server at first boot. This will ensure that each virtual machine has a unique key. If we had done this earlier in the setup phase, all the virtual machines would have been given the same key, which might have posed a security issue. Let's create a script called first_boot.sh and run it at the first boot of every new virtual machine:

# This script will run the first time the virtual machine boots
# It is run as root
apt-get update
apt-get install -qqy --force-yes openssh-server

Then, add the following line to the command line:

--firstboot first_boot.sh

Final steps in the installation – first login

Remember that we specified a default password for the virtual machine. This means all the machines where this image is used for installation will have the same password. We will prompt the user to change the password at first login. For this, we will use a shell script named first_login.sh. Add the following lines to the file:

# This script is run the first time a user logs in.
echo "Almost at the end of setting up your machine"
echo "As a security precaution, please change your password"
passwd

Then, add the parameter to your command line:

--firstlogin first_login.sh

Auto updates

You can make your virtual machine update itself at regular intervals.
To enable this feature, add a package named unattended-upgrades to the command line:

--addpkg unattended-upgrades

ACPI handling

ACPI handling will enable your virtual machine to take care of shutdown and restart events that are received from a remote machine. We will install the acpid package for this:

--addpkg acpid

The complete command

So, the final command with the parameters that we discussed previously would look like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password --firstboot first_boot.sh --firstlogin first_login.sh --addpkg unattended-upgrades --addpkg acpid

Summary

In this article, we discussed various virtualization techniques. We discussed virtualization as well as the tools and packages that help in creating and running a virtual machine. Also, you learned about the ways we can view, manage, connect to, and make use of the applications running on the virtual machine. Then, we saw the lightweight version of Ubuntu that is fine-tuned to run virtualization and applications on a virtual platform. In the later stages of this article, we covered how to build a virtual machine from the command line, how to add packages, how to set up user profiles, and the steps for first boot and first login.

Resources for Article:

Further resources on this subject: Introduction to OpenVPN [article] Speeding up Gradle builds for Android [article] Installing Red Hat CloudForms on Red Hat OpenStack [article]
Packt
16 Sep 2015
17 min read

Working on the User Interface

In this article by Fabrizio Caldarelli, author of the book Yii By Example, we will cover the following topics related to the user interface:

Customizing JavaScript and CSS
Using AJAX
Using the Bootstrap widget
Viewing multiple models in the same view
Saving linked models in the same view

It is now time for you to learn what Yii2 supports in order to customize the JavaScript and CSS parts of web pages. A recurrent use of JavaScript is to handle AJAX calls, that is, to manage widgets and compound controls (such as a dependent drop-down list) from jQuery and Bootstrap. Finally, we will employ jQuery to dynamically create more models from the same class in the form, which will be passed to the controller in order to be validated and saved. (For more resources related to this topic, see here.)

Customize JavaScript and CSS

Using JavaScript and CSS is fundamental to customizing frontend output. Unlike in Yii1, where calling JavaScript and CSS scripts and files was done using the Yii::app() singleton, in the new framework version, Yii2, this task is part of the yii\web\View class. There are two ways to include JavaScript or CSS: either by directly passing the code to be executed or by passing a file path.
The registerJs() function allows us to register JavaScript code with three parameters:

The first parameter is the JavaScript code block to be registered
The second parameter is the position where the JavaScript tag should be inserted (the header, the beginning of the body section, the end of the body section, enclosed within the jQuery load() method, or enclosed within the jQuery document.ready() method, which is the default)
The third and last parameter is a key that identifies the JavaScript code block (if it is not provided, the content of the first parameter will be used as the key)

On the other hand, the registerJsFile() function allows us to register a JavaScript file with three parameters:

The first parameter is the path of the JavaScript file
The second parameter is the HTML attributes for the script tag, with particular attention given to the depends and position values, which are not treated as tag attributes
The third parameter is a key that identifies the JavaScript file (if it's not provided, the content of the first parameter will be used as the key)

CSS, similar to JavaScript, can be registered using code or by passing a file path. Generally, JavaScript or CSS files are published in the basic/web folder, which is accessible without restrictions. So, when we have to use custom JavaScript or CSS files, it is recommended to put them in a subfolder of the basic/web folder, which can be named css or js. In some circumstances, we might be required to add a new CSS or JavaScript file for all web application pages. The most appropriate place to put these entries is AppAsset.php, a file located in basic/assets/AppAsset.php. In it, we can add the CSS and JavaScript entries required in the web application, even using dependencies if we need to.
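The key parameter's deduplication behavior is worth illustrating. The sketch below mimics it in plain JavaScript as a teaching model, not Yii's actual code: registering the same code twice without a key, or two blocks under the same key, results in a single block being rendered.

```javascript
// Toy model of keyed script registration, mirroring registerJs()'s
// key parameter: when no key is given, the code itself is the key.
function makeScriptRegistry() {
  const blocks = new Map();
  return {
    register(code, key) {
      blocks.set(key !== undefined ? key : code, code);
    },
    render() {
      return [...blocks.values()].join("\n");
    },
  };
}

const reg = makeScriptRegistry();
reg.register("console.log('init');");
reg.register("console.log('init');");          // same implicit key: kept once
reg.register("initWidget('a');", "widget-js");
reg.register("initWidget('b');", "widget-js"); // same key: replaced
console.log(reg.render());
// prints:
// console.log('init');
// initWidget('b');
```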
Using AJAX

Yii2 provides appropriate attributes for some widgets to make AJAX calls; sometimes, however, writing JavaScript code in these attributes will make the code hard to read, especially if we are dealing with complex code. Consequently, to make an AJAX call, we will use external JavaScript code executed by registerJs(). This is a template of an AJAX call using the GET or POST method:

<?php
$this->registerJs( <<< EOT_JS

// using GET method
$.get({
    url: url,
    data: data,
    success: success,
    dataType: dataType
});

// using POST method
$.post({
    url: url,
    data: data,
    success: success,
    dataType: dataType
});

EOT_JS
);
?>

An AJAX call is usually the effect of a user interface event (such as a click on a button, a link, and so on). So, it is directly connected to a jQuery .on() event on an element. For this reason, it is important to remember how Yii2 renders the name and id attributes of input fields. When we call Html::activeTextInput($model, $attribute), or in the same way use <?= $form->field($model, $attribute)->textInput() ?>, the name and id attributes of the input text field will be rendered as follows:

id: the model class name and the attribute name, both in lowercase, separated by a dash; for example, if the model class name is Room and the attribute is floor, the id attribute will be room-floor
name: the model class name enclosing the attribute name; for example, if the model class name is Reservation and the attribute is price_per_day, the name attribute will be Reservation[price_per_day]; so every field owned by the Reservation model will be enclosed in a single array

In this example, there are two drop-down lists and a detail box. The two drop-down lists refer to customers and reservations; when the user clicks on a customer list item, the second drop-down list of reservations will be filled out according to their choice. Finally, when the user clicks on a reservation list item, a details box will be filled out with data about the selected reservation.
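The client side of this customer-to-reservations dependency can be sketched in plain JavaScript. The article's examples use jQuery; here the element ids follow Yii's model-attribute convention described above (for example, reservation-id for the reservations list), and the helper that formats the option tags mirrors the shape the server action returns. All names are illustrative:

```javascript
// Build the <option> tags for the reservations drop-down, in the same
// shape the AJAX action returns for a given customer.
function reservationOptions(reservations) {
  return reservations
    .map((r) => `<option value="${r.id}">reservation #${r.id} at ${r.date}</option>`)
    .join("");
}

// Wire the dependent drop-down: on customer change, hide the detail
// box and reload the reservations list. The document is passed in so
// the pure helper above stays testable outside a browser.
function onCustomerChange(doc, reservations) {
  doc.getElementById("detail-box").style.display = "none";
  doc.getElementById("reservation-id").innerHTML = reservationOptions(reservations);
}
```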
In the controller, create an action named actionDetailDependentDropdown():

public function actionDetailDependentDropdown()
{
    $showDetail = false;
    $model = new Reservation();
    if(isset($_POST['Reservation'])) {
        $model->load( Yii::$app->request->post() );
        if(isset($_POST['Reservation']['id']) && ($_POST['Reservation']['id'] != null)) {
            $model = Reservation::findOne($_POST['Reservation']['id']);
            $showDetail = true;
        }
    }
    return $this->render('detailDependentDropdown', [
        'model' => $model,
        'showDetail' => $showDetail
    ]);
}

In this action, we get the customer_id and id parameters from a form based on the Reservation model data, and if they are filled out, the data will be used to search for the correct reservation model to be passed to the view. There is a flag called $showDetail that displays the reservation details content if the id attribute of the model is received. In the controller, there is also an action that will be called using AJAX when the user changes the customer selection in the drop-down list:

public function actionAjaxDropDownListByCustomerId($customer_id)
{
    $output = '';
    $items = Reservation::findAll(['customer_id' => $customer_id]);
    foreach($items as $item) {
        $content = sprintf('reservation #%s at %s', $item->id, date('Y-m-d H:i:s', strtotime($item->reservation_date)));
        $output .= yii\helpers\Html::tag('option', $content, ['value' => $item->id]);
    }
    return $output;
}

This action returns <option> HTML tags filled out with reservation data filtered by the customer ID passed as a parameter. If the customer drop-down list changes, the detail div will be hidden, an AJAX call will get all the reservations filtered by customer_id, and the result will be passed as content to the reservations drop-down list. If the reservations drop-down list changes, the form will be submitted. Next, in the form declaration, we find first the customer drop-down list and then the reservations list, which uses a closure to get its values from the ArrayHelper::map() method.
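The client-side wiring that drives this behavior could look like the following jQuery sketch, registered with registerJs(); the element ids follow the rendering convention described earlier, and the URL is an assumption based on Yii's default route naming, not code from the book:

```javascript
// Assumed ids: #reservation-customer_id (customer list), #reservation-id
// (reservations list), #detail (details box). The route maps to
// actionAjaxDropDownListByCustomerId under Yii's default conventions.
$('#reservation-customer_id').on('change', function () {
    $('#detail').hide();
    $.get(
        'index.php?r=reservations/ajax-drop-down-list-by-customer-id',
        { customer_id: $(this).val() },
        function (data) {
            $('#reservation-id').html(data);
        }
    );
});

$('#reservation-id').on('change', function () {
    $(this).closest('form').submit();
});
```

The AJAX response is the raw string of <option> tags produced by the action, so it can be injected directly with .html().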
We could add a new property to the Reservation model by creating a function starting with the get prefix, such as getDescription(), and putting the content of the closure in it:

public function getDescription()
{
    $content = sprintf('reservation #%s at %s', $this->id, date('Y-m-d H:i:s', strtotime($this->reservation_date)));
    return $content;
}

Then we could use a short syntax to get data from ArrayHelper::map() in this way:

<?= $form->field($model, 'id')->dropDownList(ArrayHelper::map(
    $reservations, 'id', 'description'), [
    'prompt' => '--- choose'
]); ?>

Finally, if $showDetail is flagged, a simple details box with only the price per day of the reservation will be displayed.

Using the Bootstrap widget

Yii2 supports Bootstrap as a core feature. The Bootstrap framework's CSS and JavaScript files are injected by default into all pages, and we could use this feature even just to apply CSS classes or to call our own JavaScript functions provided by Bootstrap. However, Yii2 also embeds Bootstrap components as widgets, and we can access this framework's capabilities like any other widget. The most used are:

yii\bootstrap\Alert: renders an alert Bootstrap component
yii\bootstrap\Button: renders a Bootstrap button
yii\bootstrap\Dropdown: renders a Bootstrap drop-down menu component
yii\bootstrap\Nav: renders a nav HTML component
yii\bootstrap\NavBar: renders a navbar HTML component

For example, yii\bootstrap\Nav and yii\bootstrap\NavBar are used in the default main template:

<?php
NavBar::begin([
    'brandLabel' => 'My Company',
    'brandUrl' => Yii::$app->homeUrl,
    'options' => [
        'class' => 'navbar-inverse navbar-fixed-top',
    ],
]);
echo Nav::widget([
    'options' => ['class' => 'navbar-nav navbar-right'],
    'items' => [
        ['label' => 'Home', 'url' => ['/site/index']],
        ['label' => 'About', 'url' => ['/site/about']],
        ['label' => 'Contact', 'url' => ['/site/contact']],
        Yii::$app->user->isGuest ?
        ['label' => 'Login', 'url' => ['/site/login']] :
        ['label' => 'Logout (' . Yii::$app->user->identity->username . ')',
            'url' => ['/site/logout'],
            'linkOptions' => ['data-method' => 'post']],
    ],
]);
NavBar::end();
?>

Yii2 also supports many jQuery UI widgets through the JUI extension for Yii 2, yii2-jui. If we do not have the yii2-jui extension in the vendor folder, we can get it from Composer using this command:

php composer.phar require --prefer-dist yiisoft/yii2-jui

In this example, we will discuss the two most used widgets: datepicker and autocomplete. Let's have a look at the datepicker widget. This widget can be initialized using a model attribute or by filling out a value property. The following is an example made using a model instance and one of its attributes:

echo DatePicker::widget([
    'model' => $model,
    'attribute' => 'from_date',
    //'language' => 'it',
    //'dateFormat' => 'yyyy-MM-dd',
]);

And here is a sample of the value property's use:

echo DatePicker::widget([
    'name' => 'from_date',
    'value' => $value,
    //'language' => 'it',
    //'dateFormat' => 'yyyy-MM-dd',
]);

When data is sent via POST, the date_from and date_to fields will be converted from the d/m/y format to the y-m-d format so that the database can save the data. Then, the model object is updated through the save() method. Using the Bootstrap Alert widget, an alert box will be displayed in the view after updating the model.
Create the datePicker view:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\jui\DatePicker;
?>

<div class="row">
    <div class="col-lg-6">
        <h3>Date Picker from Value<br />(using MM/dd/yyyy format and English language)</h3>
        <?php
        $value = date('Y-m-d');
        echo DatePicker::widget([
            'name' => 'from_date',
            'value' => $value,
            'language' => 'en',
            'dateFormat' => 'MM/dd/yyyy',
        ]);
        ?>
    </div>
    <div class="col-lg-6">
        <?php if($reservationUpdated) { ?>
            <?php echo yii\bootstrap\Alert::widget([
                'options' => [
                    'class' => 'alert-success',
                ],
                'body' => 'Reservation successfully updated',
            ]); ?>
        <?php } ?>
        <?php $form = ActiveForm::begin(); ?>
        <h3>Date Picker from Model<br />(using dd/MM/yyyy format and Italian language)</h3>
        <br />
        <label>Date from</label>
        <?php
        // First implementation of the DatePicker widget
        echo DatePicker::widget([
            'model' => $reservation,
            'attribute' => 'date_from',
            'language' => 'it',
            'dateFormat' => 'dd/MM/yyyy',
        ]);
        ?>
        <br />
        <br />
        <?php
        // Second implementation of the DatePicker widget
        echo $form->field($reservation, 'date_to')->widget(yii\jui\DatePicker::classname(), [
            'language' => 'it',
            'dateFormat' => 'dd/MM/yyyy',
        ]) ?>
        <?php echo Html::submitButton('Send', ['class' => 'btn btn-primary']) ?>
        <?php ActiveForm::end(); ?>
    </div>
</div>

The view is split into two columns, left and right. The left column simply displays a DatePicker example from a value (fixed to the current date). The right column displays an alert box if the $reservation model has been updated, followed by two kinds of widget declaration: the first without using $form and the second using $form, both outputting the same HTML code. In either case, the DatePicker date output format is set to dd/MM/yyyy through the dateFormat property and the language is set to Italian through the language property.

Multiple models in the same view

Often, we can find many models of the same or different classes in a single view.
First of all, remember that Yii2 encapsulates all of a view's form attributes in the same container, named after the model class. Therefore, when the controller receives the data, it will all be organized under a key of the $_POST array named after the model class. If the model class name is Customer, every form input name attribute will be:

Customer[attributeA_of_model]

This is built with:

$form->field($model, 'attributeA_of_model')->textInput();

In the case of multiple models of the same class, the container will again be named after the model class, but the attributes of each model will be inserted in an array, such as:

Customer[0][attributeA_of_model_0]
Customer[0][attributeB_of_model_0]
…
Customer[n][attributeA_of_model_n]
Customer[n][attributeB_of_model_n]

These are built with:

$form->field($model, '[0]attributeA_of_model')->textInput();
$form->field($model, '[0]attributeB_of_model')->textInput();
…
$form->field($model, '[n]attributeA_of_model')->textInput();
$form->field($model, '[n]attributeB_of_model')->textInput();

Notice that the array key information is inserted in the attribute name! So when the data is passed to the controller, $_POST['Customer'] will be an array composed of Customer models, and every key of this array (for example, $_POST['Customer'][0]) holds the data of one Customer model. Let's now see how to save three customers at once. We will create three blocks, one for each model, each containing some fields of the Customer model.
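Before building the view, the indexed naming scheme can be sanity-checked with a tiny script (plain JavaScript, purely illustrative; Yii generates these names for you):

```javascript
// Tabular-input name as Yii renders it for $form->field($model, "[$k]attribute").
function yiiIndexedName(modelClass, index, attribute) {
  return modelClass + '[' + index + '][' + attribute + ']';
}

// The names for three Customer models, each with three attributes:
const names = [];
for (let k = 0; k < 3; k++) {
  for (const attribute of ['name', 'surname', 'phone_number']) {
    names.push(yiiIndexedName('Customer', k, attribute));
  }
}
console.log(names.join('\n')); // Customer[0][name] ... Customer[2][phone_number]
```

PHP will parse these names back into the nested $_POST['Customer'][0], $_POST['Customer'][1], ... arrays the controller expects.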
Create a view containing a block of input fields repeated for every model passed from the controller:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;

/* @var $this yii\web\View */
/* @var $model app\models\Customer */
/* @var $form yii\widgets\ActiveForm */
?>

<div class="customer-form">
    <?php $form = ActiveForm::begin(); ?>
    <div class="model">
        <?php for($k=0; $k<sizeof($models); $k++) { ?>
            <?php $model = $models[$k]; ?>
            <hr />
            <label>Model #<?php echo $k+1 ?></label>
            <?= $form->field($model, "[$k]name")->textInput() ?>
            <?= $form->field($model, "[$k]surname")->textInput() ?>
            <?= $form->field($model, "[$k]phone_number")->textInput() ?>
        <?php } ?>
    </div>
    <hr />
    <div class="form-group">
        <?= Html::submitButton('Save', ['class' => 'btn btn-primary']) ?>
    </div>
    <?php ActiveForm::end(); ?>
</div>

For each model, all the fields will have the same validation rules defined in the Customer class, and every single model object will be validated separately.

Saving linked models in the same view

It can be convenient to save different kinds of models in the same view. This approach allows us to save time and to navigate from every single detail until a final item that merges all the data is created. Handling different kinds of models linked to each other is not so different from what we have seen so far. The only point to take care of is the link (foreign keys) between the models, which we must ensure is valid. The controller action will receive the $_POST data encapsulated in each model's class name container; if we think, for example, of the Customer and Reservation models, we will have two arrays in the $_POST variable, $_POST['Customer'] and $_POST['Reservation'], containing all the fields of the customer and reservation models. Then, all the data must be saved together. It is advisable to use a database transaction while saving the data, because the action can be considered complete only when all the data has been saved. Using database transactions in Yii2 is incredibly simple!
A database transaction starts by calling beginTransaction() on the database connection object and finishes by calling the commit() or rollback() method on the database transaction object created by beginTransaction().

To start a transaction:

$dbTransaction = Yii::$app->db->beginTransaction();

To commit a transaction, saving all the database activity:

$dbTransaction->commit();

To roll back a transaction, clearing all the database activity:

$dbTransaction->rollback();

If a customer were saved and the reservation were not (for any possible reason), our data would be partial and incomplete. Using a database transaction, we avoid this danger. We now want to create both the customer and reservation models in the same view in a single step. In this way, we will have a box containing the customer model fields and a box with the reservation model fields in the view. Create a view with the fields from the customer and reservation models:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\helpers\ArrayHelper;
use app\models\Room;
?>

<div class="room-form">
    <?php $form = ActiveForm::begin(); ?>
    <div class="model">
        <?php echo $form->errorSummary([$customer, $reservation]); ?>
        <h2>Customer</h2>
        <?= $form->field($customer, "name")->textInput() ?>
        <?= $form->field($customer, "surname")->textInput() ?>
        <?= $form->field($customer, "phone_number")->textInput() ?>
        <h2>Reservation</h2>
        <?= $form->field($reservation, "room_id")->dropDownList(ArrayHelper::map(Room::find()->all(), 'id', function($room, $defaultValue) {
            return sprintf('Room n.%d at floor %d', $room->room_number, $room->floor);
        })); ?>
        <?= $form->field($reservation, "price_per_day")->textInput() ?>
        <?= $form->field($reservation, "date_from")->textInput() ?>
        <?= $form->field($reservation, "date_to")->textInput() ?>
    </div>
    <div class="form-group">
        <?= Html::submitButton('Save customer and room', ['class' => 'btn btn-primary']) ?>
    </div>
    <?php ActiveForm::end(); ?>
</div>

We have created two blocks in the form to fill
out the fields for the customer and the reservation. Now, create a new action named actionCreateCustomerAndReservation in ReservationsController in basic/controllers/ReservationsController.php:

public function actionCreateCustomerAndReservation()
{
    $customer = new \app\models\Customer();
    $reservation = new \app\models\Reservation();

    // It is useful to set a fake customer_id on the reservation model
    // to avoid a validation error (because customer_id is mandatory)
    $reservation->customer_id = 0;

    if(
        $customer->load(Yii::$app->request->post())
        && $reservation->load(Yii::$app->request->post())
        && $customer->validate()
        && $reservation->validate()
    )
    {
        $dbTrans = Yii::$app->db->beginTransaction();

        $customerSaved = $customer->save();
        if($customerSaved)
        {
            $reservation->customer_id = $customer->id;
            $reservationSaved = $reservation->save();
            if($reservationSaved)
            {
                $dbTrans->commit();
            }
            else
            {
                $dbTrans->rollback();
            }
        }
        else
        {
            $dbTrans->rollback();
        }
    }

    return $this->render('createCustomerAndReservation', [
        'customer' => $customer,
        'reservation' => $reservation
    ]);
}

Summary

In this article, we saw how to embed JavaScript and CSS in layouts and views, with file content or an inline block. This was applied to an example that showed you how to change the number of columns displayed based on the browser's available width; this is a typical task for websites or web apps that display advertising columns. Again on the subject of JavaScript, you learned how to implement direct AJAX calls, taking an example where the reservation detail was dynamically loaded from the customers drop-down list. Next, we looked at Yii's core user interface library, which is built on Bootstrap, and we illustrated how to use the main Bootstrap widgets natively, together with DatePicker, probably the most commonly used jQuery UI widget. Finally, the last topics covered were multiple models of the same and different classes.
We looked at two examples on these topics: the first one to save multiple customers at the same time and the second to create a customer and reservation in the same view. Resources for Article: Further resources on this subject: Yii: Adding Users and User Management to Your Site [article] Meet Yii [article] Creating an Extension in Yii 2 [article]
Packt
16 Sep 2015
6 min read
Remote Desktop to Your Pi from Everywhere

In this article by Gökhan Kurt, author of the book Raspberry Pi Android Projects, we will make a gentle introduction to both the Pi and Android platforms to warm us up. Many users of the Pi face a similar problem when they wish to administer it: you have to be near your Pi and connect a screen and a keyboard to it. We will solve this everyday problem by remotely connecting to our Pi desktop interface. The article covers the following topics:

Installing necessary components in the Pi and Android
Connecting the Pi and Android

(For more resources related to this topic, see here.)

Installing necessary components in the Pi and Android

The following image shows you that the LXDE desktop manager comes with an initial setup and a few preinstalled programs:

LXDE desktop management environment

By clicking on the screen image on the tab bar located at the top, you will be able to open a terminal screen that we will use to send commands to the Pi. The next step is to install a component called x11vnc. This is a VNC server for X, the window management component of Linux. Issue the following command in the terminal:

sudo apt-get install x11vnc

This will download and install x11vnc on the Pi. We can even set a password to be used by VNC clients that will remote desktop to this Pi, using the following command and providing a password to be used later on:

x11vnc -storepasswd

Next, we can get the x11vnc server running whenever the Pi is rebooted and the LXDE desktop manager starts. This can be done through the following steps:

Go into the .config directory in the Pi user's home directory located at /home/pi:

cd /home/pi/.config

Make a subdirectory here named autostart:

mkdir autostart

Go into the autostart directory:

cd autostart

Start editing a file named x11vnc.desktop.
As a terminal editor, I am using nano, which is the easiest one to use on the Pi for novice users, but there are more exciting alternatives, such as vi:

nano x11vnc.desktop

Add the following content to this file:

[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=X11VNC
Comment=
Exec=x11vnc -forever -usepw -display :0 -ultrafilexfer
StartupNotify=false
Terminal=false
Hidden=false

Save and exit (using Ctrl+X, then Y, then Enter, in that order, if you are using nano). Now you should reboot the Pi to get the server running, using the following command:

sudo reboot

After rebooting, we can find out what IP address our Pi has been given by issuing the ifconfig command in a terminal window. The IP address assigned to your Pi is found under the eth0 entry and is given after the inet addr keyword. Write this address down:

Example output from the ifconfig command

The next step is to download a VNC client to your Android device. In this project, we will use a freely available client for Android, namely androidVNC or, as it is named in the Play Store, VNC Viewer for Android by androidVNC team + antlersoft. The latest version in use at the time of writing this book was 0.5.0. Note that in order to be able to connect your Android VNC client to the Pi, both the Pi and the Android device should be connected to the same network: Android through Wi-Fi and the Pi through its Ethernet port.
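For reference, the editing steps above can also be scripted in one go; this is a sketch that assumes the default pi user's home directory layout:

```shell
#!/bin/sh
# Recreate the LXDE autostart entry for x11vnc (same content as the nano steps).
AUTOSTART_DIR="$HOME/.config/autostart"
mkdir -p "$AUTOSTART_DIR"
cat > "$AUTOSTART_DIR/x11vnc.desktop" <<'EOF'
[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=X11VNC
Comment=
Exec=x11vnc -forever -usepw -display :0 -ultrafilexfer
StartupNotify=false
Terminal=false
Hidden=false
EOF
echo "Wrote $AUTOSTART_DIR/x11vnc.desktop"
```

The quoted heredoc delimiter ('EOF') keeps the file content literal, so nothing inside it is expanded by the shell.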
Install and open androidVNC on your device. You will be presented with a first activity user interface asking for the details of the connection. Here, you should provide a Nickname for the connection, the Password you entered when you ran the x11vnc -storepasswd command, and the IP Address of the Pi that you found using the ifconfig command. Initiate the connection by pressing the Connect button, and you should now be able to see the Pi desktop on your Android device.

In androidVNC, you should be able to move the mouse pointer by clicking on the screen, and under the options menu in the androidVNC app, you will find out how to send text and keys to the Pi, with the help of Enter and Backspace. You may even find it convenient to connect to the Pi from another computer. I recommend using RealVNC for this purpose, which is available on Windows, Linux, and Mac OS.

What if I want to use Wi-Fi on the Pi?

In order to use a Wi-Fi dongle on the Pi, first of all open the wpa-supplicant configuration file using the nano editor with the following command:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Add the following to the end of this file:

network={
    ssid="THE ID OF THE NETWORK YOU WANT TO CONNECT TO"
    psk="PASSWORD OF YOUR WIFI"
}

I assume that you have set up your wireless home network to use WPA-PSK as the authentication mechanism. If you have another mechanism, you should refer to the wpa_supplicant documentation. LXDE provides even better ways to connect to Wi-Fi networks through a GUI, which can be found in the upper-right corner of the desktop environment on the Pi.

Connecting from everywhere

Now, we have connected to the Pi from our device, which needs to be on the same network as the Pi. However, most of us would also like to connect to the Pi from around the world. To do this, we first need to know the IP address of the home network assigned to us by our network provider. By going to the http://whatismyipaddress.com URL, we can figure out what our home network's IP address is. The next step is to log in to our router and open up requests to the Pi from around the world. For this purpose, we will use a functionality found on most modern routers, called port forwarding. Be aware of the risks involved in port forwarding. You are opening up access to your Pi from all around the world, even to malicious users. I strongly recommend that you change the default password of the user pi before performing this step.
You can change passwords using the passwd command. By logging onto a router's management portal and navigating to the Port Forwarding tab, we can open up requests to the Pi's internal network IP address, which we have figured out previously, and the default port of the VNC server, which is 5900. Now, we can provide our external IP address to androidVNC from anywhere around the world instead of an internal IP address that works only if we are on the same network as the Pi. Port forwarding settings on Netgear router administration page Refer to your router's user manual to see how to change the Port Forwarding settings. Most routers require you to connect through the Ethernet port in order to access the management portal instead of Wi-Fi. Summary In this article, we installed Raspbian, warmed up with the Pi, and connected the Pi using an Android device. Resources for Article:   Further resources on this subject: Raspberry Pi LED Blueprints [article] Color and motion finding [article] From Code to the Real World [article]
Packt
16 Sep 2015
10 min read
Prerequisites for a Map Application

In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics:

Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X
Registering our application in the Google Developer Console
Configuring Google Play services with our application
Adding permissions and defining an API key

Generating the SHA1 fingerprint

Let's learn about generating the SHA1 fingerprint on different platforms one by one.

Windows

The keytool usually comes with the JDK package. We use the keytool to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key:

keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android

You will see output similar to what is shown here:

Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044
Certificate fingerprints:
    MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17
    SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33

In the preceding output, note down the SHA1 value, which is required to register our application with the Google Developer Console. The preceding screenshot is representative of the typical output screen that is shown when the preceding command is executed.

Linux

We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

This will return an output similar to the one we obtained in Windows.
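As a convenience, the SHA1 line can be pulled out of keytool's output with grep and awk; this is a small sketch, demonstrated here on the sample output above rather than on a live keystore:

```shell
# Print only the SHA1 fingerprint from keytool's verbose listing.
extract_sha1() {
  grep 'SHA1:' | awk '{ print $2 }'
}

# Demo on the sample output shown earlier; in real use, pipe the
# keytool command itself into extract_sha1.
sample='MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17
SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33'
printf '%s\n' "$sample" | extract_sha1
```

For example, `keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android | extract_sha1` prints just the fingerprint you need to copy.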
Note down the SHA1 fingerprint, which we will use later. If you've installed Java manually, you'll need to run keytool from its installation location. You can export the Java JDK path as follows:

export JAVA_HOME={PATH to JDK}

After exporting the path, run keytool as follows:

$JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

The output of the preceding command is shown as follows:

Mac OS X

Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Registering your application in the Google Developer Console

This is one of the most important steps in our process. Our application will not function without obtaining an API key from the Google Developer Console. Follow these steps one by one to obtain the API key:

Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create. As soon as your project is created, you will be redirected to the project dashboard. On the left-hand side, under the APIs & auth section, select APIs. Then, scroll down and enable Google Maps Android API v2. Next, under the same APIs & auth section, select Credentials. Select Create new Key under Public API access, and then select Android key in the following dialog. In the next window, enter the SHA1 fingerprint we noted in the previous section, followed by a semicolon and the package name of the Android application we wish to register.
For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following:

C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map

You need to enter the value shown in the following screen. Finally, click on Create. Now our Android application will be registered with the Google Developer Console, and it will display a screen similar to the following one. Note down the API key from the screen, which will be similar to this:

AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU

Configuring Google Play services

Google Play services includes the classes required for our map application, so it needs to be set up properly. The setup differs between Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for each separately; it is relatively simple.

Android Studio

Configuring Google Play services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files. You must add the code to the inner app's build.gradle file. The following screenshot shows the structure of the project. The code should be added to the second Gradle build file, which contains our app module's configuration. Add the following code to the dependencies section in the Gradle build file:

compile 'com.google.android.gms:play-services:7.5.0'

The structure should be similar to the following code:

dependencies {
    compile 'com.google.android.gms:play-services:7.5.0'
    compile 'com.android.support:appcompat-v7:21.0.3'
}

The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version. The current version can be found in the values.xml file present in the res/values directory of the Google Play services library project.
The newest version of Google Play services can be found at https://developers.google.com/android/guides/setup. That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Now, Google Play services will be integrated with your project.

Eclipse

Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into the workspace. Navigate to File | Import, and the following window will appear. In this window, navigate to Android | Existing Android Code Into Workspace. Then click on Next. In the next window, browse to the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory, as shown in the following screenshot. Finally, click on Finish. Now, google-play-services_lib will be added to your workspace. Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add and choose google-play-services_lib. Then, click on OK. Now, google-play-services_lib will be added as a library to our application project, as shown in the following screenshot. In the next section, we will see how to configure the API key and add the permissions that will help us to deploy our application.

Adding permissions and defining the API key

The permissions and the API key must be defined in the AndroidManifest.xml file, which provides essential information about the application to the operating system. The OpenGL ES version, which is required to render the map, and the Google Play services version must also be specified in the manifest file.

Adding permissions

Four permissions are required for our map application to work properly. The permissions should be added inside the <manifest> element.
The four permissions are as follows:

INTERNET
ACCESS_NETWORK_STATE
WRITE_EXTERNAL_STORAGE
READ_GSERVICES

Let's take a look at what these permissions are for.

INTERNET

This permission is required for our application to gain access to the Internet. Since Google Maps mainly works on real-time Internet access, Internet access is essential.

ACCESS_NETWORK_STATE

This permission gives information about a network and whether we are connected to a particular network or not.

WRITE_EXTERNAL_STORAGE

This permission is required to write data to external storage. In our application, it is required to cache map data to the external storage.

READ_GSERVICES

This permission allows you to read Google services. The permissions are added to AndroidManifest.xml as follows:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

There are some more permissions that are currently not required.

Specifying the Google Play services version

The Google Play services version must be specified in the manifest file for maps to function. It must be within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" />

Specifying OpenGL ES version 2

Android Google Maps uses OpenGL to render the map. Google Maps will not work on devices that do not support version 2 of OpenGL. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to the permissions.
Add the following code to AndroidManifest.xml:

<uses-feature android:glEsVersion="0x00020000" android:required="true"/>

The preceding code specifies that version 2 of OpenGL is required for the functioning of our application.

Defining the API key

The Google Maps API key is required to provide authorization to the Google Maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="API_KEY"/>

The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console. The complete AndroidManifest.xml structure, after adding the permissions and specifying the OpenGL and Google Play services versions and the API key, is as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.raj.sampleapplication"
    android:versionCode="1"
    android:versionName="1.0" >
    <uses-feature android:glEsVersion="0x00020000" android:required="true"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />
    <application>
        <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" />
        <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/>
    </application>
</manifest>

Summary

In this article, we learned how to generate the SHA1 fingerprint on different platforms, register our application in the Google Developer Console, and generate an API key. We also configured Google Play services in Android Studio and Eclipse, and added the permissions and other data in the manifest file that are essential to create a map application.
Resources for Article: Further resources on this subject: Testing with the Android SDK [article] Signing an application in Android using Maven [article] Code Sharing Between iOS and Android [article]

Packt
16 Sep 2015
8 min read

Building a WPF .NET Client

In this article by Einar Ingebrigtsen, author of the book SignalR: Real-time Application Development - Second Edition, we will bring the full feature set of what we've built so far for the web onto the desktop through a WPF .NET client. There are quite a few ways of developing Windows client solutions; WPF was introduced back in 2005 and has become one of the most popular ways of developing software for Windows. In WPF, we have something called XAML, which is what Windows Phone development supports and is also the latest programming model in Windows 10. In this chapter, the following topics will be covered:

MVVM
A brief introduction to the SOLID principles
XAML
WPF

(For more resources related to this topic, see here.)

Decoupling it all

So you might be asking yourself, what is MVVM? It stands for Model View ViewModel: a pattern for client development that became very popular in the XAML stack, promoted by Microsoft and based on Martin Fowler's presentation model (http://martinfowler.com/eaaDev/PresentationModel.html). Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state the ViewModel exposes, making the ViewModel totally unaware that there is a View. The ViewModel is decoupled and can be put in isolation, which makes it perfect for automated testing. Part of the state that the ViewModel typically holds is the model, which is something it usually gets from the server, and a SignalR hub is the perfect transport to get this. It boils down to recognizing the different concerns that make up the frontend and separating it all. This gives us the following diagram:

Decoupling – the next level

In this chapter, one of the things we will brush up on is the usage of the Dependency Inversion Principle, the D of SOLID.
Let's start with the first principle: the S in SOLID, the Single Responsibility Principle, which states that a method or a class should only have one reason to change and only have one responsibility. With this, our units can't take on more than one responsibility, so they need help from collaborators to do the entire job. These collaborators are things we now depend on, and we should represent these dependencies clearly to our units so that anyone or anything instantiating them knows what we are depending on. We have now flipped around the way in which we get dependencies. Instead of the unit trying to instantiate everything itself, we now clearly state what we need as collaborators, opening it up for the calling code to decide which implementations of these dependencies to pass in. Also, this is an important aspect: typically, you'd want the dependencies expressed in the form of interfaces, yielding flexibility for the calling code. Basically, what this all means is that instead of a unit or system instantiating and managing its dependencies, we decouple and let something called an Inversion of Control (IoC) container deal with this. In the sample, we will use an IoC container called Ninject that will deal with this for us. What it basically does is manage which implementations to give to the dependencies specified on the constructor. Often, you'll find that the dependencies are interfaces in C#. This means one is not coupled to a specific implementation and has the flexibility of changing things at runtime based on configuration. Another role of the IoC container is to govern the life cycle of the dependencies. It is responsible for knowing when to create new instances and when to reuse an instance. For instance, in a web application, there are some systems that you want to have a life cycle of per request, meaning that we will get the same instance for the lifetime of a web request.
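To make the life-cycle idea concrete, here is a toy container sketch in plain Java (purely illustrative, and not how Ninject is implemented; the class and method names are invented for this example). A singleton binding hands back the same instance on every resolution, while a transient binding creates a new instance each time:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy IoC container illustrating life-cycle management only.
// Real containers (Ninject, etc.) also do constructor injection,
// scoping such as per-request, and disposal.
class ToyContainer {
    private final Map<Class<?>, Supplier<?>> factories = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    // Transient: every resolution invokes the factory again.
    <T> void bindTransient(Class<T> contract, Supplier<T> factory) {
        factories.put(contract, factory);
    }

    // Singleton: the factory runs once; later resolutions reuse the instance.
    <T> void bindSingleton(Class<T> contract, Supplier<T> factory) {
        factories.put(contract, () -> singletons.computeIfAbsent(contract, key -> factory.get()));
    }

    <T> T resolve(Class<T> contract) {
        return contract.cast(factories.get(contract).get());
    }
}
```

A per-request life cycle, as described above, would simply be a third strategy: one cached instance per request identifier instead of one global instance.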
The life cycle is configurable in what is known as a binding. When you explicitly set up the relationship between a contract (interface) and its implementation, you can choose to set up the life cycle behavior as well. Building for the desktop The first thing we will need is a separate project in our solution: Let's add it by right-clicking on the solution in Solution Explorer and navigating to Add | New Project: In the Add New Project dialog box, we want to make sure the .NET Framework 4.5.1 is selected. We could have gone with 4.5, but some of the dependencies that we're going to use have switched to 4.5.1. This is the latest version of the .NET Framework at the time of writing, so if you can, use it. Make sure to select Windows Desktop and then select WPF Application. Give the project the name SignalRChat.WPF and then click on the OK button: Setting up the packages We will need some packages to get started properly. This process is described in detail in Chapter 1, The Primer. Let's start off by adding SignalR, which is our primary framework that we will be working with to move on. We will be pulling this using NuGet, as described in Chapter 1, The Primer: Right-click on References in Solution Explorer and select Manage NuGet Packages, and type Microsoft.AspNet.SignalR.Client in the Search dialog box. Select it and click on Install. Next, we're going to pull down something called Bifrost. Bifrost is a library that helps us build MVVM-based solutions on WPF; there are a few other solutions out there, but we'll focus on Bifrost. Add a package called Bifrost.Client. Then, we need the package that gives us the IoC container called Ninject, working together with Bifrost. Add a package called Bifrost.Ninject. Observables One of the things that is part of WPF and all other XAML-based platforms is the notion of observables; be it in properties or collections that will notify when they change.
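Before looking at the WPF specifics, the observable-property idea can be sketched in plain Java using the JDK's java.beans support (this is only an analogy — WPF uses its own interfaces, covered next — and the class here is invented for illustration):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// A rough Java analog of an observable property: setters fire a change
// notification so that any registered observer (a "view") can react.
class ObservablePerson {
    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    private String name;

    void addListener(PropertyChangeListener listener) {
        changes.addPropertyChangeListener(listener);
    }

    void setName(String newName) {
        String old = this.name;
        this.name = newName;
        // Notify observers that the "name" property changed.
        changes.firePropertyChange("name", old, newName);
    }

    String getName() { return name; }
}
```

Writing this boilerplate for every property is exactly the tedium the text describes, which is why code generation at build time is attractive.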
The notification is done through well-known interfaces for this, such as INotifyPropertyChanged or INotifyCollectionChanged. Implementing these interfaces quickly becomes tedious all over the place where you want to notify everything when there are changes. Luckily, there are ways to make this pretty much go away. We can generate the code for this instead, either at runtime or at build time. For our project, we will go for a build-time solution. To accomplish this, we will use something called Fody and a plugin for it called PropertyChanged. Add another NuGet package called PropertyChanged.Fody. If you happen to get problems during compiling, it could be the result of the dependency on a package called Fody not being installed. This happens for some versions of the package in combination with the latest Roslyn compiler. To fix this, install the NuGet package called Fody explicitly. Now that we have all the packages, we will need some configuration in code: Open the App.xaml.cs file and add the following statement: using Bifrost.Configuration; The next thing we will need is a constructor for the App class: public App() { Configure.DiscoverAndConfigure(); } This will tell Bifrost to discover the implementations of the well-known interfaces to do the configuration. Bifrost uses the IoC container internally all the time, so the next thing we will need to do is give it an implementation. Add a class called ContainerCreator at the root of the project. Make it look as follows: using Bifrost.Configuration; using Bifrost.Execution; using Bifrost.Ninject; using Ninject; namespace SignalRChat.WPF { public class ContainerCreator : ICanCreateContainer { public IContainer CreateContainer() { var kernel = new StandardKernel(); var container = new Container(kernel); return container; } } } We've chosen Ninject among the others that Bifrost supports, mainly because of familiarity and habit. If you happen to have another favorite, Bifrost supports a few.
It's also fairly easy to implement your own support; just go to the source at http://github.com/dolittle/bifrost to find reference implementations. In order for Bifrost to be targeting the desktop, we need to tell it through configuration. Add a class called Configurator at the root of the project. Make it look as follows: using Bifrost.Configuration; namespace SignalRChat.WPF { public class Configurator : ICanConfigure { public void Configure(IConfigure configure) { configure.Frontend.Desktop(); } } } Summary Although there are differences between creating a web solution and a desktop client, the differences have faded over time. We can apply the same principles across the different environments; it's just different programming languages. The SignalR API adds the same type of consistency in thinking, although not as matured as the JavaScript API with proxy generation and so on; still the same ideas and concepts are found in the underlying API. Resources for Article: Further resources on this subject: The Importance of Securing Web Services [article] Working with WebStart and the Browser Plugin [article] Microsoft Azure – Developing Web API for Mobile Apps [article]

Packt
16 Sep 2015
11 min read

CRUD Operations in REST

In this article by Ludovic Dewailly, the author of Building a RESTful Web Service with Spring, we will learn how requests to retrieve data from a RESTful endpoint, created to access the rooms in a sample property management system, are typically mapped to the HTTP GET method in RESTful web services. We will expand on this by implementing some of the endpoints to support all the CRUD (Create, Read, Update, Delete) operations. In this article, we will cover the following topics:

Mapping the CRUD operations to the HTTP methods
Creating resources
Updating resources
Deleting resources
Testing the RESTful operations
Emulating the PUT and DELETE methods

(For more resources related to this topic, see here.)

Mapping the CRUD operations to HTTP methods

The HTTP 1.1 specification defines the following methods:

OPTIONS: This method represents a request for information about the communication options available for the requested URI. This is, typically, not directly leveraged with REST. However, this method can be used as a part of the underlying communication. For example, this method may be used when consuming web services from a web page (as part of the Cross-Origin Resource Sharing mechanism).
GET: This method retrieves the information identified by the request URI. In the context of the RESTful web services, this method is used to retrieve resources. This is the method used for read operations (the R in CRUD).
HEAD: The HEAD requests are semantically identical to the GET requests except the body of the response is not transmitted. This method is useful for obtaining meta-information about resources. Similar to the OPTIONS method, this method is not typically used directly in REST web services.
POST: This method is used to instruct the server to accept the entity enclosed in the request as a new resource. The create operations are typically mapped to this HTTP method.
PUT: This method requests the server to store the enclosed entity under the request URI. To support the updating of REST resources, this method can be leveraged. As per the HTTP specification, the server can create the resource if the entity does not exist. It is up to the web service designer to decide whether this behavior should be implemented or resource creation should only be handled by POST requests.
DELETE: The last operation not yet mapped is for the deletion of resources. The HTTP specification defines a DELETE method that is semantically aligned with the deletion of RESTful resources.
TRACE: This method echoes the received request back to the client. It is mainly used to aid the development and testing of HTTP applications. The TRACE requests aren't usually mapped to any particular RESTful operations.
CONNECT: This HTTP method is defined to support HTTP tunneling through a proxy server. Since it deals with transport layer concerns, this method has no natural semantic mapping to the RESTful operations.

The RESTful architecture does not mandate the use of HTTP as a communication protocol. Furthermore, even if HTTP is selected as the underlying transport, no provisions are made regarding the mapping of the RESTful operations to the HTTP methods. Developers could feasibly support all operations through POST requests. This being said, the following CRUD to HTTP method mapping is commonly used in REST web services:

Operation: HTTP method
Create: POST
Read: GET
Update: PUT
Delete: DELETE

Our sample web service will use these HTTP methods to support CRUD operations. The rest of this article will illustrate how to build such operations.

Creating resources

The inventory component of our sample property management system deals with rooms. We have already built an endpoint to access the rooms.
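As a quick aside, the conventional CRUD-to-HTTP mapping shown in the table above can be captured as a simple lookup in code (a trivial sketch; the class name is invented for this example):

```java
import java.util.Map;

// A minimal sketch of the common CRUD-to-HTTP mapping described above.
final class CrudHttpMapping {
    static final Map<String, String> METHOD_FOR = Map.of(
            "Create", "POST",
            "Read", "GET",
            "Update", "PUT",
            "Delete", "DELETE");

    private CrudHttpMapping() { }
}
```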
Let's take a look at how to define an endpoint to create new resources: @RestController @RequestMapping("/rooms") public class RoomsResource { @RequestMapping(method = RequestMethod.POST) public ApiResponse addRoom(@RequestBody RoomDTO room) { Room newRoom = createRoom(room); return new ApiResponse(Status.OK, new RoomDTO(newRoom)); } } We've added a new method to our RoomsResource class to handle the creation of new rooms. @RequestMapping is used to map requests to the Java method. Here, we map the POST requests to addRoom(). Not specifying a value (that is, a path) in @RequestMapping is equivalent to using "/". We pass the new room as @RequestBody. This annotation instructs Spring to map the body of the incoming web request to the method parameter. Jackson is used here to convert the JSON request body to a Java object. With this new method, POSTing a request to http://localhost:8080/rooms with the following JSON body will result in the creation of a new room: { name: "Cool Room", description: "A room that is very cool indeed", room_category_id: 1 } Our new method will return the newly created room: { "status":"OK", "data":{ "id":2, "name":"Cool Room", "room_category_id":1, "description":"A room that is very cool indeed" } } We could decide to return only the ID of the new resource in response to the resource creation. However, since we may sanitize or otherwise manipulate the data that was sent over, it is a good practice to return the full resource.

Quickly testing endpoints

For the purpose of quickly testing our newly created endpoint, let's use Postman to create a new room. Postman (https://www.getpostman.com) is a Google Chrome extension that provides tools to build and test web APIs.
The following screenshot illustrates how Postman can be used to test this endpoint: In Postman, we specify the URL to send the POST request to, http://localhost:8080/rooms, with the "application/json" content type header and the body of the request. Sending this request will result in a new room being created and returned, as shown in the following: We have successfully added a room to our inventory service using Postman. It is equally easy to create incomplete requests to ensure our endpoint performs any necessary sanity checks before persisting data into the database.

JSON versus form data

Posting forms is the traditional way of creating new entities on the web and could easily be used to create new RESTful resources. We can change our method to the following: @RequestMapping(method = RequestMethod.POST, consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE) public ApiResponse addRoom(String name, String description, long roomCategoryId) { Room room = createRoom(name, description, roomCategoryId); return new ApiResponse(Status.OK, new RoomDTO(room)); } The main difference from the previous method is that we tell Spring to map form requests (that is, with the application/x-www-form-urlencoded content type) instead of JSON requests. In addition, rather than expecting an object as a parameter, we receive each field individually. By default, Spring will use the Java method attribute names to map incoming form inputs. Developers can change this behavior by annotating an attribute with @RequestParam("…") to specify the input name. In situations where the main web service consumer is a web application, using form requests may be more applicable. In most cases, however, the former approach is more in line with RESTful principles and should be favored. Besides, when complex resources are handled, form requests will prove cumbersome to use. From a developer standpoint, it is easier to delegate object mapping to a third-party library such as Jackson.
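The endpoint examples in this article return an ApiResponse wrapper whose definition is not shown in this extract. A minimal sketch consistent with the JSON responses shown might look like the following (the field and accessor names are inferred from the JSON, so treat them as assumptions rather than the book's actual classes):

```java
// A minimal, assumed sketch of the response wrapper used in the examples.
// Fields mirror the JSON shown in the article: "status", "data", "error".
class ApiResponse {
    enum Status { OK, ERROR }

    private final Status status;
    private final Object data;
    private final ApiError error;

    ApiResponse(Status status, Object data) {
        this(status, data, null);
    }

    ApiResponse(Status status, Object data, ApiError error) {
        this.status = status;
        this.data = data;
        this.error = error;
    }

    Status getStatus() { return status; }
    Object getData() { return data; }
    ApiError getError() { return error; }
}

// Mirrors the "error": { "error_code": ..., "description": ... } shape.
class ApiError {
    private final int errorCode;
    private final String description;

    ApiError(int errorCode, String description) {
        this.errorCode = errorCode;
        this.description = description;
    }

    int getErrorCode() { return errorCode; }
    String getDescription() { return description; }
}
```

With a JSON mapper such as Jackson, serializing instances of these classes would yield the response bodies shown throughout the article.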
Now that we have created a new resource, let's see how we can update it.

Updating resources

Choosing URI formats is an important part of designing RESTful APIs. As seen previously, rooms are accessed using the /rooms/{roomId} path and created under /rooms. You may recall that as per the HTTP specification, PUT requests can result in the creation of entities if they do not exist. The decision to create new resources on update requests is up to the service designer. It does, however, affect the choice of path to be used for such requests. Semantically, PUT requests update entities stored under the supplied request URI. This means the update requests should use the same URI as the GET requests: /rooms/{roomId}. However, this approach hinders the ability to support resource creation on update since no room identifier will be available. The alternative path we can use is /rooms, with the room identifier passed in the body of the request. With this approach, the PUT requests can be treated as POST requests when the resource does not contain an identifier. Given that the first approach is semantically more accurate, we will choose not to support resource creation on update, and we will use the following path for the PUT requests: /rooms/{roomId}

The update endpoint

The following method provides the necessary endpoint to modify the rooms: @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) public ApiResponse updateRoom(@PathVariable long roomId, @RequestBody RoomDTO updatedRoom) { try { Room room = updateRoom(updatedRoom); return new ApiResponse(Status.OK, new RoomDTO(room)); } catch (RecordNotFoundException e) { return new ApiResponse(Status.ERROR, null, new ApiError(999, "No room with ID " + roomId)); } } As discussed in the beginning of this article, we map update requests to the HTTP PUT verb. Annotating this method with @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) instructs Spring to direct the PUT requests here.
The room identifier is part of the path and mapped to the first method parameter. In a fashion similar to the resource creation requests, we map the body to our second parameter with the use of @RequestBody.

Testing update requests

With Postman, we can quickly create a test case to update the room we created. To do so, we send a PUT request with the following body: { id: 2, name: "Cool Room", description: "A room that is really very cool indeed", room_category_id: 1 } The resulting response will be the updated room, as shown here: { "status": "OK", "data": { "id": 2, "name": "Cool Room", "room_category_id": 1, "description": "A room that is really very cool indeed." } } Should we attempt to update a nonexistent room, the server will generate the following response: { "status": "ERROR", "error": { "error_code": 999, "description": "No room with ID 3" } } Since we do not support resource creation on update, the server returns an error indicating that the resource cannot be found.

Deleting resources

It will come as no surprise that we will use the DELETE verb to delete REST resources. Similarly, the reader will have already figured out that the path for delete requests will be /rooms/{roomId}. The Java method that deals with room deletion is as follows: @RequestMapping(value = "/{roomId}", method = RequestMethod.DELETE) public ApiResponse deleteRoom(@PathVariable long roomId) { try { Room room = inventoryService.getRoom(roomId); inventoryService.deleteRoom(room.getId()); return new ApiResponse(Status.OK, null); } catch (RecordNotFoundException e) { return new ApiResponse(Status.ERROR, null, new ApiError( 999, "No room with ID " + roomId)); } } By declaring the request mapping method to be RequestMethod.DELETE, Spring will make this method handle the DELETE requests. Since the resource is deleted, returning it in the response would not make a lot of sense.
Service designers may choose to return a boolean flag to indicate that the resource was successfully deleted. In our case, we leverage the status element of our response to carry this information back to the consumer. The response to deleting a room will be as follows: { "status": "OK" } With this operation, we now have a full-fledged CRUD API for our Inventory Service. Before we conclude this article, let's discuss how REST developers can deal with situations where not all HTTP verbs can be utilized.

HTTP method override

In certain situations (for example, when the service or its consumers are behind an overzealous corporate firewall, or if the main consumer is a web page), only the GET and POST HTTP methods might be available. In such cases, it is possible to emulate the missing verbs by passing a custom header in the requests. For example, resource updates can be handled using POST requests by setting a custom header (for example, X-HTTP-Method-Override) to PUT to indicate that we are emulating a PUT request via a POST request. The following method will handle this scenario: @RequestMapping(value = "/{roomId}", method = RequestMethod.POST, headers = {"X-HTTP-Method-Override=PUT"}) public ApiResponse updateRoomAsPost(@PathVariable("roomId") long id, @RequestBody RoomDTO updatedRoom) { return updateRoom(id, updatedRoom); } By setting the headers attribute on the mapping annotation, Spring request routing will intercept the POST requests with our custom header and invoke this method. Normal POST requests will still map to the Java method we had put together to create new rooms.

Summary

In this article, we've completed the implementation of our sample RESTful web service by adding all the CRUD operations necessary to manage the room resources. We've discussed how to organize URIs to best embody the REST principles and looked at how to quickly test endpoints using Postman.
Now that we have a fully working component of our system, we can take some time to discuss performance. Resources for Article: Further resources on this subject: Introduction to Spring Web Application in No Time[article] Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging[article] Time Travelling with Spring[article]
Packt
16 Sep 2015
9 min read

Identifying the Best Places

In this article by Ben Mearns, author of the book QGIS Blueprints, we will take a look at how the raster data can be analyzed, enhanced, and used for map production. Specifically, you will learn to produce a grid of the suitable locations based on the criteria values in other grids using raster analysis and map algebra. Then, using the grid, we will produce a simple click-based map. The end result will be a site suitability web application with click-based discovery capabilities. We'll be looking at the suitability for the farmland preservation selection. In this article, we will cover the following topics: Vector data ETL for raster analysis Batch processing Leaflet map application publication with QGIS2Leaf (For more resources related to this topic, see here.) Vector data Extract, Transform, and Load Our suitability analysis uses map algebra and criteria grids to give us a single value for the suitability for some activity in every place. This requires that the data be expressed in the raster (grid) format. So, let's perform the other necessary ETL steps and then convert our vector data to raster. We will perform the following actions: Ensure that our data has identical spatial reference systems. For example, we may be using a layer of the roads maintained by the state department of transportation and a layer of land use maintained by the department of natural resources. These layers must have identical spatial reference systems or be transformed to have identical systems. Extract geographic objects according to their classes as defined in some attribute table field if we want to operate on them while they're still in the vector form. If no further analysis is necessary, convert to raster. Loading data and establishing the CRS conformity It is important for the layers in this project to be transformed or projected into the same geographic or projected coordinate system. This is necessary for an accurate analysis and for publication to the web formats. 
Perform the following steps for this:

Disable 'on the fly' projection if it is turned on. Otherwise, 'on the fly' will automatically project your data again to display it with the layers that are already in the Canvas. Navigate to Settings | Options and perform the settings shown in the following screenshot:
Add the project layers: Navigate to Layer | Add Layer | Vector Layer. Add the following layers from within c2/data: Applicants, County, Easements, Landuse, and Roads. You can select multiple layers to add by pressing Shift and clicking on contiguous files or pressing Ctrl and clicking on noncontiguous files.
Import the Digital Elevation Model from c2/data/dem/dem.tif. Navigate to Layer | Add Layer | Raster Layer. From the dem directory, select dem.tif and then click on Open.

Even though the layers are in a different CRS, QGIS does not warn us in this case. You must discover the issue by checking each layer individually. Check the CRS of the county layer and one other layer:

Highlight the county layer in the Layers panel.
Navigate to Layer | Properties. The CRS is displayed under the General tab in the Coordinate reference system section:

Note that the county layer is in EPSG:26957, while the others are in EPSG:2776. We will transform the county layer from EPSG:26957 to EPSG:2776. Navigate to Layer | Save As | Select CRS. We will save all the output from this article in c2/output. To prepare the layers for conversion to raster, we will add a new generic column to all the layers, populated with the number 1. This will be translated to a Boolean-type raster, where the presence of the object that the raster represents (for example, roads) is indicated by a cell value of 1 and absence by a zero. Follow these steps for the applicants, easements, and roads layers:

Navigate to Layer | Toggle Editing. Then, navigate to Layer | Open Attribute Table.
Add a column with the button at the top of the Attribute table dialog.
Use value as the name for the new column and the following data format options:

Select the new column from the dropdown in the Attribute table and enter 1 into the value box:
Click on Update All.
Navigate to Layer | Toggle Editing. Finally, save.

Extracting (filtering) features

Let's suppose that our criteria include only a subset of the features in our roads layer—major unlimited access roads (but not freeways), a subset of the features as determined by a classification code (CFCC). To temporarily extract this subset, we will do a layer query by performing the following steps:

Filter the major roads from the roads layer. Highlight the roads layer. Navigate to Layer | Query.
Double-click on CFCC to add it to the expression.
Click on the = operator to add it to the expression.
Under the Values section, click on All to view all the unique values in the CFCC field.
Double-click on A21 to add this to the expression. Do this for all the codes less than A36. Include A63 for highway on-ramps. Your selection code will look similar to this: "CFCC" = 'A21' OR "CFCC" = 'A25' OR "CFCC" = 'A31' OR "CFCC" = 'A35' OR "CFCC" = 'A63'
Click on OK, as shown in the following screenshot:
Create a new c2/output directory.
Save the roads layer as a new layer with only the selected features (major_roads) in this directory.

To clear a layer filter, return to the query dialog on the applied layer (highlight it in the Layers pane; navigate to Layer | Query and click on Clear). Repeat these steps for the developed (LULC1 = 1) and agriculture (LULC1 = 2) landuses (separately) from the landuse layer.
A batch process is invoked from an operation's context menu in the Processing Toolbox. The batch dialog requires that the parameters for each layer be populated for every iteration. Convert the vector layers to raster:

Navigate to Processing Toolbox. Select Advanced Interface from the dropdown at the bottom of Processing Toolbox (if it is not selected, it will show as Simple Interface).
Type rasterize to search for the Rasterize tool.
Right-click on the Rasterize tool and select Execute as batch process:
Fill in the Batch Processing dialog, making sure to specify the parameters as follows:

Parameter: Value
Input layer: (for example, roads)
Attribute field: value
Output raster size: Output resolution in map units per pixel
Horizontal: 30
Vertical: 30
Raster type: Int16
Output layer: (for example, roads)

The following images show how this will look in QGIS. Scroll to the right to complete the entry of parameter values.

Organize the new layers (optional step). Batch sometimes gives unfriendly names because of a bug in the dialog box. Change the layer names by doing the following for each layer created by batch:

Highlight the layer.
Navigate to Layer | Properties.
Change the layer name to the name of the vector layer from which this was created (for example, applicants). You should be able to find a hint for this value in the layer properties, in the layer source (the name of the .tif file).

Group the layers: press Shift + click on all the layers created by batch and the previous roads raster, then right-click and select Group selected.
QGIS2Leaf converts all our vector layers to GeoJSON, which is the most common textual way to express the geographic JavaScript objects. As our operational layer is in GeoJSON, Leaflet's click interaction is supported, and we can access the information in the layers by clicking. It is a fully editable HTML and JavaScript file. You can customize and upload it to an accessible web location. QGIS2leaf is very simple to use as long as the layers are prepared properly (for example, with respect to CRS) up to this point. It is also very powerful in creating a good starting application including GeoJSON, HTML, and JavaScript for our Leaflet web map. Make sure to install the QGIS2Leaf plugin if you haven't already. Navigate to Web | QGIS2leaf | Exports a QGIS Project to a working Leaflet webmap. Click on the Get Layers button to add the currently displayed layers to the set that QGIS2leaf will export. Choose a basemap and enter the additional details if so desired. Select Encode to JSON. These steps will produce a map application similar to the following one. We'll take a look at how to restore the labels: Summary In this article, using the site selection example, we covered basic vector data ETL, raster analysis, and web map creation. We started with vector data, and after unifying CRS, we prepared the attribute tables. We then filtered and converted it to raster grids using batch processing. Finally, we published the prepared vector output with QGIS2Leaf as a simple Leaflet web map application with a strong foundation for extension. Resources for Article:   Further resources on this subject: Style Management in QGIS [article] Preparing to Build Your Own GIS Application [article] Geocoding Address-based Data [article]

Java Hibernate Collections, Associations, and Advanced Concepts

Packt
15 Sep 2015
16 min read
In this article, Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, provide a complete guide to the following recipes:

Working with a first-level cache
One-to-one mapping using a common join table
Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Once we execute a particular query using hibernate, it always hits the database. As this process may be very expensive, hibernate provides the facility to cache objects within a certain boundary. The basic actions performed in each database transaction are as follows:

The request reaches the database server via the network.
The database server processes the query in the query plan.
Now the database server executes the processed query.
Again, the database server returns the result to the querying application through the network.
At last, the application processes the results.

This process is repeated every time we request a database operation, even if it is for a simple or small query. It is always a costly transaction to hit the database for the same records multiple times. Sometimes, we also face some delay in receiving the results because of network routing issues. There may be some other parameters that affect and contribute to the delay, but network routing issues play a major role in this cycle. To overcome this issue, the database uses a mechanism that stores the result of a query, which is executed repeatedly, and uses this result again when the data is requested using the same query. These operations are done on the database side. Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache). The following are some properties of the first-level cache:

It is enabled by default. We cannot disable it even if we want to.
The scope of the first-level cache is limited to a particular Session object only; the other Session objects cannot access it.
All cached objects are destroyed once the session is closed.
If we request an object, hibernate returns the object from the cache only if the requested object is found in the cache; otherwise, a database call is initiated.
We can use Session.evict(Object object) to remove single objects from the session cache.
The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

@Entity
@Table
public class Employee {

  @Id
  @GeneratedValue
  private long id;

  @Column(name = "name")
  private String name;

  // getters and setters

  @Override
  public String toString() {
    return "Employee: "
        + "\n\t Id: " + this.id
        + "\n\t Name: " + this.name;
  }
}

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create:

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

We will assume that two records are already inserted, as shown in the following employee table:

id  name
1   Yogesh
2   Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.

How to do it…

Here is the code to see how caching works.
In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

System.out.println("\nLoading employee#2...");
/* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2.toString());

System.out.println("\nLoading employee#1 again...");
/* Line 10 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

System.out.println("\nLoading employee#2 again...");
/* Line 15 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 1
	 Name: Yogesh

Loading employee#2...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 2
	 Name: Aarush

Loading employee#1 again...
Employee:
	 Id: 1
	 Name: Yogesh

Loading employee#2 again...
Employee:
	 Id: 2
	 Name: Aarush

How it works…

Here, we loaded Employee#1 and Employee#2, as shown in Lines 2 and 6 respectively, and printed the output for both. It's clear from the output that hibernate hits the database to load Employee#1 and Employee#2 because, at startup, no object is cached in hibernate. Now, in Line 10, we tried to load Employee#1 again. This time, hibernate did not hit the database but simply used the cached object, because Employee#1 was already loaded and this object was still in the session. The same thing happened with Employee#2.
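The caching behavior above follows the classic identity-map pattern: the session keeps a map from identifier to entity and only hits the database on a miss. Here is a minimal, self-contained sketch of that pattern in plain Java; note that SessionCache, the String "entity", and the hit counter are illustrative assumptions for this sketch, not Hibernate API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal identity-map sketch of a first-level cache (not Hibernate code).
class SessionCache {
    private final Map<Long, String> cache = new HashMap<>();
    private int databaseHits = 0;

    // Return the cached entity if present; otherwise simulate a database hit.
    public String load(long id) {
        String cached = cache.get(id);
        if (cached != null) {
            return cached;              // served from the session cache
        }
        databaseHits++;                 // stands in for the real SELECT
        String entity = "Employee#" + id;
        cache.put(id, entity);
        return entity;
    }

    public void evict(long id) { cache.remove(id); }   // like Session.evict(...)
    public void clear()        { cache.clear(); }      // like Session.clear()
    public int getDatabaseHits() { return databaseHits; }

    public static void main(String[] args) {
        SessionCache session = new SessionCache();
        System.out.println(session.load(1));   // first load: database hit
        System.out.println(session.load(1));   // second load: cache hit
        System.out.println("Database hits: " + session.getDatabaseHits());
    }
}
```

Loading the same identifier twice causes only one simulated database hit, while evicting or clearing forces the next load back to the database, mirroring the evict() and clear() behavior the recipe demonstrates with real SQL.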
Hibernate stores an object in the cache only if one of the following operations is completed: Save, Update, Get, Load, and List.

There's more…

In the previous section, we took a look at how caching works. Now, we will discuss some other methods used to remove a cached object from the session. There are two more methods that are used to remove a cached object:

evict(Object object): This method removes a particular object from the session
clear(): This method removes all the objects from the session

evict(Object object)

This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked, and the request for the object hits the database:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

/* Line 5 */ session.evict(employee1);
System.out.println("\nEmployee#1 removed using evict(…)...");

System.out.println("\nLoading employee#1 again...");
/* Line 9 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 1
	 Name: Yogesh

Employee#1 removed using evict(…)...

Loading employee#1 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 1
	 Name: Yogesh

Here, we loaded Employee#1, as shown in Line 2. This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, the loading of Employee#1 will again hit the database.

clear()

This method is used to remove all the cached objects from the session cache.
They will no longer be available in the session once this method is invoked, and the request for the objects hits the database:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

System.out.println("\nLoading employee#2...");
/* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2.toString());

/* Line 9 */ session.clear();
System.out.println("\nAll objects removed from session cache using clear()...");

System.out.println("\nLoading employee#1 again...");
/* Line 13 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

System.out.println("\nLoading employee#2 again...");
/* Line 17 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 1
	 Name: Yogesh

Loading employee#2...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 2
	 Name: Aarush

All objects removed from session cache using clear()...

Loading employee#1 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 1
	 Name: Yogesh

Loading employee#2 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	 Id: 2
	 Name: Aarush

Here, Lines 2 and 6 show how to load Employee#1 and Employee#2 respectively. Now, we removed all the objects from the session cache using the clear() method.
As a result, the loading of both Employee#1 and Employee#2 will again result in a database hit, as shown in Lines 13 and 17.

One-to-one mapping using a common join table

In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table will hold the primary key values of both tables to represent a relationship between them.

Getting ready

Use the following script to create the tables and classes. Here, we use Employee and Detail to show a one-to-one mapping using a common join table:

Creating the tables

Use the following script to create the tables if you are not using hbm2ddl=create|update:

Use the following script to create the detail table:

CREATE TABLE `detail` (
  `detail_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `city` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`detail_id`)
);

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) DEFAULT NULL,
  PRIMARY KEY (`employee_id`)
);

Use the following script to create the employee_detail table:

CREATE TABLE `employee_detail` (
  `detail_id` BIGINT(20) DEFAULT NULL,
  `employee_id` BIGINT(20) NOT NULL,
  PRIMARY KEY (`employee_id`),
  KEY `FK_DETAIL_ID` (`detail_id`),
  KEY `FK_EMPLOYEE_ID` (`employee_id`),
  CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`) REFERENCES `employee` (`employee_id`),
  CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`) REFERENCES `detail` (`detail_id`)
);

Creating the classes

Use the following code to create the classes:

Source file: Employee.java

@Entity
@Table(name = "employee")
public class Employee {

  @Id
  @GeneratedValue
  @Column(name = "employee_id")
  private long id;

  @Column(name = "name")
  private String name;

  @OneToOne(cascade = CascadeType.ALL)
  @JoinTable(
      name = "employee_detail"
      , joinColumns = @JoinColumn(name = "employee_id")
      , inverseJoinColumns = @JoinColumn(name = "detail_id")
  )
  private Detail employeeDetail;

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public Detail getEmployeeDetail() {
    return employeeDetail;
  }

  public void setEmployeeDetail(Detail employeeDetail) {
    this.employeeDetail = employeeDetail;
  }

  @Override
  public String toString() {
    return "Employee"
        + "\n Id: " + this.id
        + "\n Name: " + this.name
        + "\n Employee Detail "
        + "\n\t Id: " + this.employeeDetail.getId()
        + "\n\t City: " + this.employeeDetail.getCity();
  }
}

Source file: Detail.java

@Entity
@Table(name = "detail")
public class Detail {

  @Id
  @GeneratedValue
  @Column(name = "detail_id")
  private long id;

  @Column(name = "city")
  private String city;

  @OneToOne(cascade = CascadeType.ALL)
  @JoinTable(
      name = "employee_detail"
      , joinColumns = @JoinColumn(name = "detail_id")
      , inverseJoinColumns = @JoinColumn(name = "employee_id")
  )
  private Employee employee;

  public Employee getEmployee() {
    return employee;
  }

  public void setEmployee(Employee employee) {
    this.employee = employee;
  }

  public String getCity() {
    return city;
  }

  public void setCity(String city) {
    this.city = city;
  }

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  @Override
  public String toString() {
    return "Employee Detail"
        + "\n Id: " + this.id
        + "\n City: " + this.city
        + "\n Employee "
        + "\n\t Id: " + this.employee.getId()
        + "\n\t Name: " + this.employee.getName();
  }
}

How to do it…

In this section, we will take a look at how to insert a record step by step.

Inserting a record

Using the following code, we will insert an Employee record with a Detail object:

Code

Detail detail = new Detail();
detail.setCity("AHM");

Employee employee = new Employee();
employee.setName("vishal");
employee.setEmployeeDetail(detail);

Transaction transaction = session.getTransaction();
transaction.begin();
session.save(employee);
transaction.commit();

Output

Hibernate: insert into detail (city) values (?)
Hibernate: insert into employee (name) values (?)
Hibernate: insert into employee_detail (detail_id, employee_id) values (?,?)

Hibernate saves one record in the detail table and one in the employee table, and then inserts a record into the third table, employee_detail, using the primary key column values of the detail and employee tables.

How it works…

From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here, hibernate reacts differently. The first two statements of the output insert the records into the detail and employee tables respectively, and the third statement inserts the mapping record into the third table, employee_detail, using the primary key column values of both tables. Let's take a look at the options used in the previous code in detail:

@JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail"
joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table
inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table

Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id".

Persisting Map

Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap.

Getting ready

Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the String key is the e-mail address label, and the value String is the e-mail address.
For example, we will try to construct a data structure similar to <"Personal e-mail", "emailaddress2@provider2.com">, <"Business e-mail", "emailaddress1@provider1.com">. This means that we will create an alias of the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get a business e-mail using the Business email key. Use the following code to create the required tables and classes.

Creating tables

Use the following script to create the tables if you are not using hbm2ddl=create|update. This script is for the tables that are generated by hibernate:

Use the following code to create the email table:

CREATE TABLE `email` (
  `Employee_id` BIGINT(20) NOT NULL,
  `emails` VARCHAR(255) DEFAULT NULL,
  `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`Employee_id`,`emails_KEY`),
  KEY `FK5C24B9C38F47B40` (`Employee_id`),
  CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`) REFERENCES `employee` (`id`)
);

Use the following code to create the employee table:

CREATE TABLE `employee` (
  `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

Creating a class

Source file: Employee.java

@Entity
@Table(name = "employee")
public class Employee {

  @Id
  @GeneratedValue
  @Column(name = "id")
  private long id;

  @Column(name = "name")
  private String name;

  @ElementCollection
  @CollectionTable(name = "email")
  private Map<String, String> emails;

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public Map<String, String> getEmails() {
    return emails;
  }

  public void setEmails(Map<String, String> emails) {
    this.emails = emails;
  }

  @Override
  public String toString() {
    return "Employee"
        + "\n\tId: " + this.id
        + "\n\tName: " + this.name
        + "\n\tEmails: " + this.emails;
  }
}

How to do it…

Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating.

Inserting a record

Here, we will create one employee record with two e-mail addresses:

Code

Employee employee = new Employee();
employee.setName("yogesh");

Map<String, String> emails = new HashMap<String, String>();
emails.put("Business email", "emailaddress1@provider1.com");
emails.put("Personal email", "emailaddress2@provider2.com");
employee.setEmails(emails);

session.getTransaction().begin();
session.save(employee);
session.getTransaction().commit();

Output

Hibernate: insert into employee (name) values (?)
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?)
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?)

When the code is executed, it inserts one record into the employee table and two records into the email table, and also sets the primary key value of the employee record in each record of the email table as a reference.

Retrieving a record

Here, we know that our record is inserted with id 1. So, we will try to get only that record and understand how Map works in our case.

Code

Employee employee = (Employee) session.get(Employee.class, 1l);
System.out.println(employee.toString());
System.out.println("Business email: " + employee.getEmails().get("Business email"));

Output

Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=?
Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=?
Employee
	Id: 1
	Name: yogesh
	Emails: {Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com}
Business email: emailaddress1@provider1.com

Here, we can easily get a business e-mail address using the Business email key from the map of e-mail addresses.
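The alias-style lookup above relies on the key-uniqueness guarantee of java.util.Map, which hibernate later enforces in SQL with a composite primary key on (Employee_id, emails_KEY). A plain-Java sketch of that invariant follows; note that the overwrite address emailaddress4@provider4.com is made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

class EmailMapDemo {
    public static void main(String[] args) {
        // The key is the e-mail label, the value is the address,
        // mirroring the emails_KEY and emails columns of the email table.
        Map<String, String> emails = new HashMap<>();
        emails.put("Business email", "emailaddress1@provider1.com");
        emails.put("Personal email", "emailaddress2@provider2.com");

        // Re-putting an existing key overwrites the value instead of adding
        // an entry -- the same effect the composite primary key has in SQL.
        emails.put("Business email", "emailaddress4@provider4.com");

        System.out.println(emails.size());                 // prints 2
        System.out.println(emails.get("Business email"));  // prints emailaddress4@provider4.com
    }
}
```

This is why adding a third label in the update recipe produces an insert, while re-using an existing label would update the existing row instead.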
This is just a simple scenario created to demonstrate how to persist Map in hibernate.

Updating a record

Here, we will try to add one more e-mail address to Employee#1:

Code

Employee employee = (Employee) session.get(Employee.class, 1l);
Map<String, String> emails = employee.getEmails();
emails.put("Personal email 1", "emailaddress3@provider3.com");

session.getTransaction().begin();
session.saveOrUpdate(employee);
session.getTransaction().commit();
System.out.println(employee.toString());

Output

Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=?
Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=?
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?, ?, ?)
Employee
	Id: 1
	Name: yogesh
	Emails: {Personal email 1=emailaddress3@provider3.com, Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com}

Here, we added a new e-mail address with the Personal email 1 key and the value emailaddress3@provider3.com.

Deleting a record

Here again, we will try to delete the records of Employee#1 using the following code:

Code

Employee employee = new Employee();
employee.setId(1);

session.getTransaction().begin();
session.delete(employee);
session.getTransaction().commit();

Output

Hibernate: delete from email where Employee_id=?
Hibernate: delete from employee where id=?

While deleting the object, hibernate will delete the child records (here, e-mail addresses) as well.

How it works…

Here again, we need to understand the table structure created by hibernate: Hibernate creates a composite primary key in the email table using two fields, Employee_id and emails_KEY.

Summary

In this article, you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting Map.
Resources for Article:

Further resources on this subject:

PostgreSQL in Action [article]
OpenShift for Java Developers [article]
Oracle 12c SQL and PL/SQL New Features [article]

Beautiful Designs

Packt
15 Sep 2015
6 min read
In this article, written by Stefan Kottwitz, author of the book LaTeX Cookbook, we will learn about the following topics:

Adding a background image
Preparing pretty headings

(For more resources related to this topic, see here.)

Non-standard documents, such as photo books, calendars, greeting cards, and fairy tale books, may have a fancier design. The following recipes will show some decorative examples.

Adding a background image

We can add background graphics such as watermarks, pre-designed letterheads, or photos to any LaTeX document. This recipe will show us a way to add a background image.

How to do it...

We will use the background package. In this recipe, you can use any LaTeX document. You may also start with the article class and add some dummy text. You just need to insert some commands into your document preamble, that is, between \documentclass{…} and \begin{document}. It would be:

Loading the background package
Setting up the background using the command \backgroundsetup with options

Here we go:

Load the background package using the following command:

\usepackage{background}

Set up the background. Optionally, specify the scaling factor, rotation angle, and opacity. Provide the command for printing on the background. We will use \includegraphics here with a drawing of the CTAN lion:

\backgroundsetup{scale = 1, angle = 0, opacity = 0.2,
  contents = {\includegraphics[width = \paperwidth,
    height = \paperheight, keepaspectratio]{ctanlion.pdf}}}

Compile at least twice to let the layout settle. Now all of your pages will show a light version of the image over the whole page background, like this:

How it works...

The background package can place any text, drawing, or image on the page background. It provides options for position, color, and opacity. The example already showed some self-explanatory parameters. They can be given as package options or by using the \backgroundsetup command. This command can be used as often as you like to make changes.
The contents option contains the actual commands which shall be applied to the background. This can simply be \includegraphics, some text, or any sequence of drawing commands. The package is based on TikZ and the everypage package. It can require several compiling runs until the positioning is finally correct. That is because TikZ writes the marks into the .aux file, which gets read in and processed in the next LaTeX run.

There's more...

Instead of images, you could display dynamic values such as the page number or the head mark with the project title, instead of using a package such as fancyhdr, scrpage2, or scrlayer-scrpage. The following command places a page number on the background:

Placed at the top
With customizable rotation, here 0 degrees
Scaled to four times the size of normal text
Colored with 80 percent of standard blue (like mixed with 20 percent of white)
Vertically shifted by 2ex downwards
With dashes around it

\backgroundsetup{placement = top, angle = 0, scale = 4,
  color = blue!80, vshift = -2ex,
  contents = {--\thepage--}}

Here is a cut-out of the top of page 7:

To see how you can draw with TikZ on the background, let's take a look at an example. It draws a rounded border, and fills the interior background with a light yellow color:

\usetikzlibrary{calc}
\backgroundsetup{angle = 0, scale = 1, vshift = -2ex,
  contents = {\tikz[overlay, remember picture]
    \draw [rounded corners = 20pt, line width = 1pt,
      color = blue, fill = yellow!20, double = blue!10]
      ($(current page.north west)+(1cm,-1cm)$) rectangle
      ($(current page.south east)+(-1,1)$);}}

Here, we first loaded the calc library, which provides syntax for the coordinate calculations that we used at the end. A TikZ image in the overlay mode draws a rectangle with rounded corners. It has double lines with yellow in-between. The rectangle dimensions are calculated from the position of the current page node, which stands for the whole page.
The result looks like this:

Here is a summary of selected options with their default values:

contents: Text, images, or drawing commands; Draft is the default
placement: center, top, or bottom; center is the default
color: A color expression which TikZ understands; default is red!45
angle: A value between -360 and 360; 0 is the default for top and bottom, 60 for center
opacity: A value for the transparency between 0 and 1; default is 0.5
scale: A positive value; default is 8 for top and bottom, 15 for center
hshift and vshift: Any length for horizontal or vertical shifting; default is 0 pt

Further options for TikZ node parameters are explained in the package manual, which also contains some examples. It also shows how to select just certain pages for having this background. You can open it by typing texdoc background at the command line, or at http://texdoc.net/pkg/background. There are more packages which can do a similar task to what we showed in this recipe, for example, watermark, xwatermark, and the packages everypage and eso-pic, which don't require TikZ.

Preparing pretty headings

This recipe will show how to bring some color into document headings.

How to do it...

We will use TikZ for coloring and positioning.
Follow these steps:

Set up a basic document with blindtext support:

\documentclass{scrartcl}
\usepackage[automark]{scrpage2}
\usepackage[english]{babel}
\usepackage{blindtext}

Load TikZ; beforehand, pass a naming option to the implicitly loaded package xcolor for using names for predefined colors:

\PassOptionsToPackage{svgnames}{xcolor}
\usepackage{tikz}

Define a macro which prints the heading, given as an argument:

\newcommand{\tikzhead}[1]{%
  \begin{tikzpicture}[remember picture,overlay]
    \node[yshift=-2cm] at (current page.north west)
      {\begin{tikzpicture}[remember picture, overlay]
        \path[draw=none, fill=LightSkyBlue]
          (0,0) rectangle (\paperwidth,2cm);
        \node[anchor=east, xshift=.9\paperwidth, rectangle,
          rounded corners=15pt, inner sep=11pt,
          fill=MidnightBlue, font=\sffamily\bfseries]
          {\color{white}#1};
      \end{tikzpicture}};
  \end{tikzpicture}}

Use the new macro for the headings, printing \headmark, and complete the document with some dummy text:

\clearscrheadings
\ihead{\tikzhead{\headmark}}
\pagestyle{scrheadings}
\begin{document}
\tableofcontents
\clearpage
\blinddocument
\end{document}

Compile and take a look at a sample page header:

How it works...

We created a macro which draws a filled rectangle over the whole page width and puts a node with text inside it, shaped as a rectangle with rounded corners. It's just a brief glimpse of TikZ's drawing syntax. The main points are as follows:

Referring to the current page node for positioning
Using the drawing macro within a header command

The rest are drawing syntax and style options, described in the TikZ manual. You can read it by typing the texdoc tikz command at the command prompt, or by visiting http://texdoc.net/pkg/tikz.

Summary

In this article, we learnt how to add a background image to our document and also how to create pretty and attractive headings for our documents.
Resources for Article: Further resources on this subject: Creating Tables in Latex [article] Parsing Specific Data in Python Text Processing [article] Scribus: Managing Colors [article]
Using 3D Objects

Packt
15 Sep 2015
11 min read
In this article by Liz Staley, author of the book Manga Studio EX 5 Cookbook, you will learn the following topics: Adding existing 3D objects to a page Importing a 3D object from another program Manipulating 3D objects Adjusting the 3D camera (For more resources related to this topic, see here.) One of the features of Manga Studio 5 that people ask me about all the time is 3D objects. Manga Studio 5 comes with a set of 3D assets: characters, poses, and a few backgrounds and small objects. These can be added directly to your page, posed and positioned, and used in your artwork. While I usually use these 3D poses as a reference (much like the wooden drawing dolls that you can find in your local craft store), you can conceivably use 3D characters and imported 3D assets from programs such as Poser to create entire comics. Let's get into the third dimension now, and you will learn how to use these assets in Manga Studio 5. Adding existing 3D objects to a page Manga Studio 5 comes with many 3D objects present in the materials library. This is the fastest way to get started with using the 3D features. Getting ready You must have a page open in order to add a 3D object. Open a page of any size to start the recipes covered here. How to do it… The following steps will show us how to add an existing 3D material to a page: Open the materials library. This can be done by going to Window | Material | Material [3D]. Select a category of 3D material from the list on the left-hand side of the library, or scroll down the Material library preview window to browse all the available materials. Select a material to add to the page by clicking on it to highlight it. In this recipe, we are choosing the School girl B 02 character material. It is highlighted in the following screenshot: Hold the left mouse button down on the selected material and drag it onto the page, releasing the mouse button once the cursor is over the page, to display the material. 
Alternatively, you can click on the Paste selected material to canvas icon at the bottom of the Material library menu. The selected 3D material will be added to the page. The School girl B 02 material is shown in this default character pose: Importing a 3D object from another program You don't have to use only the default 3D models included in Manga Studio 5. The process of importing a model is very easy. The types of files that can be imported into Manga Studio 5 are c2fc, c2fr, fbx, lwo, lws, obj, 6kt, and 6kh. Getting ready You must have a page open in order to add a 3D object. Open a page of any size to start this recipe. For this recipe, you will also need a model to import into the program. These can be found on numerous websites, including my.smithmicro.com, under the Poser tab. How to do it… The following steps will walk us through the simple process of importing a 3D model into Manga Studio 5: Open the location where the 3D model you wish to import has been saved. If you have downloaded the 3D model from the Internet, it may be in the Downloads folder on your PC. Arrange the windows on your computer screen so that the location of the 3D model and Manga Studio 5 are both visible, as shown in the following screenshot: Click on the 3D model file and hold down the mouse button. While still holding down the mouse button, drag the 3D model file into the Manga Studio 5 window. Release the mouse button. The 3D model will be imported into the open page, as shown in this screenshot: Manipulating 3D objects You've learned how to add a 3D object to your project. But how can you pose it the way you want it to look for your scene? With a little time and patience, you'll be posing characters like a pro in no time! Getting ready Follow the directions in the Adding existing 3D objects to a page recipe before following the steps in this recipe.
How to do it…

This recipe will walk us through moving a character into a custom pose:

1. Be sure that the Object tool under Operation is selected.
2. Click on the 3D object to manipulate, if it is not already selected.
3. To move the entire object up, down, left, or right, hover the mouse cursor over the fourth icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then, drag to move the object in the desired direction. The following screenshot shows the location of the icon used to move the object up, down, left, or right. It is highlighted in pink and also shown over the 3D character.

If your models are moving very slowly, you may need to allocate more memory to Manga Studio EX 5. This can be done by going to File | Preferences | Performance.

4. To rotate the object along the y axis (or the horizon line), hover the mouse cursor over the fifth icon in the top-left corner of the box around the selected object. Click on it, hold the left mouse button, and drag. The object will rotate along the y axis, as shown in this screenshot:
5. To rotate the object along the x axis (straight up and down vertically), hover the mouse cursor over the sixth icon in the top-left corner of the box around the selected object. Click and drag. The object will rotate vertically around its center, as shown in the following screenshot:
6. To move the object back and forth in 3D space, hover the mouse cursor over the seventh icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then drag it. The icon is shown as follows, highlighted in pink, and the character has been moved back—away from the camera:
7. To move one part of a character, click on the part to be moved. For this recipe, we'll move the character's arm down. To do this, we'll click on the upper arm portion of the character to select it. When a portion of the character is selected, a sphere with three lines circling it will appear.
Each of these three lines represents one axis (x, y, and z) and controls the rotation of that portion of the character. This set of lines is shown here:

8. Use the lines of the sphere to rotate the part of the character to the desired position. For a more precise movement, the scroll wheel on the mouse can be used as well. In the following screenshot, the arm has been rotated so that it is down at the character's side:

Do you keep accidentally moving a part of the model that you don't want to move? Put the cursor over the part of the model that you'd like to keep in place, and then right-click. A blue box will appear on that part of the model, and the piece will be locked into place. Right-click again to unlock the part.

How it works…

In this recipe, we covered how to move and rotate a 3D object and portions of 3D characters. This is the start of being able to create your own custom poses and saving them for reuse. It's also the way to pose the drawing doll models in Manga Studio to make pose references for your comic artwork. In the 3D-Body Type folder of the materials library, you will find Female and Male drawing dolls that can be posed just as the premade characters can. These generic dolls are great for getting that difficult pose down. Then use the next recipe, Adjusting the 3D camera, to get the angle you need, and draw away!

The following screenshot shows a drawing doll 3D object that has been posed in a custom stance. The preceding pose was relatively easy to achieve. The figure was rotated along the x axis, and then the head and neck joints were both rotated individually so that the doll looked toward the camera. Both its arms were rotated down and then inward. The hands were posed. The ankle joints were selected and the feet were rotated so that the toes were pointed. Then the knee of the near leg was rotated to bend it. The hip of the near leg was also rotated so that the leg was lifted slightly, giving a "cutesy" look to the pose.
Having trouble posing a character's hands exactly the way you want them? Then open the Sub Tool Detail palette and click on Pose in the left-hand-side menu. In this area, you will find a menu with a picture of a hand. This is a quick controller for the fingers. Select the hand that you wish to pose. Along the bottom of the menu are some preset hand poses for things such as closed fists. At the top of each finger on this menu is an icon that looks like chain links. Click on one of them to lock the finger that it is over and prevent it from moving. The triangle area over the large blue hand symbol controls how open and closed the fingers are. You will find this menu much easier than rotating each joint individually—I'm sure!

Adjusting the 3D camera

In addition to manipulating 3D objects or characters, you can also change the position of the 3D camera to get the composition that you desire for your work. Think of the 3D camera just like a camera on a movie set. It can be rotated or moved around to frame the actors (3D characters) and scenery just the way the director wants!

Not sure whether you moved the character or the camera? Take a look at the ground plane, which is the "checkerboard" floor area underneath the characters and objects. If the character is standing straight up and down on the ground plane, it means that the camera was moved. If the character is floating above or below the ground plane, or part of the way through it, it means that the character or object was moved.

Getting ready

Follow the directions given in the Adding existing 3D objects to a page recipe before following the steps in this recipe.

How to do it…

1. To rotate the camera around an object (the object will remain stationary), hover the mouse cursor over the first icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag.
The icon and the camera rotation are shown in the following screenshot:

2. To move the camera up, down, left, or right, hover the mouse cursor over the second icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and camera movement are shown in this screenshot:
3. To move the camera back and forth in the 3D space, hover the mouse cursor over the third icon in the top-left corner of the box around the selected object. Again, click and hold the left mouse button, and then drag. The next screenshot shows the zoom icon in pink at the top and the overlay on top of the character. Note how the hand of the character and the top of the head are now out of the page, since the camera is closer to her and she appears larger on the canvas.

Summary

In this article, we studied in detail how to add existing 3D objects to a page in Manga Studio 5. After adding an existing object, we saw the steps for importing a 3D object from another program. Then, we walked through manipulating these 3D objects along the coordinate axes by using the tools available in Manga Studio 5. Finally, we learned to position the 3D camera by rotating it around an object.

Resources for Article:

Further resources on this subject:
- Ink Slingers [article]
- Getting Familiar with the Story Features [article]
- Animating capabilities of Cinema 4D [article]
Smart Features to Improve Your Efficiency

Packt
15 Sep 2015
11 min read
In this article by Denis Patin and Stefan Rosca, authors of the book WebStorm Essentials, we are going to deal with a number of really smart features that will enable you to fundamentally change your approach to web development and learn how to gain maximum benefit from WebStorm. We are going to study the following in this article:

- On-the-fly code analysis
- Smart code features
- Multiselect feature
- Refactoring facility

(For more resources related to this topic, see here.)

On-the-fly code analysis

WebStorm will perform static code analysis on your code on the fly. The editor will check the code based on the language used and the rules you specify, and highlight warnings and errors as you type. This is a very powerful feature: it means you don't need an external linter, and it will catch most errors quickly, thus making a dynamic and complex language like JavaScript more predictable and easy to use.

Runtime errors and static issues, such as syntax or performance problems, are two different things. To investigate the first kind, you need tests or a debugger, which have almost nothing in common with the IDE itself (although when these facilities are integrated into the IDE, the synergy is better). You could examine the second kind of issue the same way, but is it convenient? Just imagine having to run tests after writing each line of code. It is no go! Isn't it far more efficient and helpful to use something that keeps an eye on and analyzes each word being typed, in order to notify you about probable performance issues and bugs, code style and workflow issues, and various validation issues; warn of dead code and other likely execution issues before the code is run; and even report inadvertent misprints? WebStorm is the best fit for this. It performs a deep-level analysis of each line, each word in the code.
Moreover, you needn't break off your development process while WebStorm scans your code; the analysis is performed on the fly, hence its name.

WebStorm also enables you to get a full inspection report on demand. To get it, go to the menu: Code | Inspect Code. It pops up the Specify Inspection Scope dialog, where you can define what exactly you would like to inspect, and click OK. Depending on what is selected and its size, you may need to wait a little for the process to finish, and you will see the detailed results where the Terminal window is located.

You can expand all the items, if needed. To the right of this inspection result list, you can see an explanation window. To jump to the erroneous code lines, simply click on the necessary item, and you will flip to the corresponding line.

Besides simply indicating where an issue is located, WebStorm also unequivocally suggests ways to eliminate it. You needn't even make the changes yourself—WebStorm already has quick solutions, which you need only click on, and they will be instantly inserted into the code.

Smart code features

Being an Integrated Development Environment (IDE) and tending to be intelligent, WebStorm provides a really powerful pack of features, by using which you can strongly improve your efficiency and save a lot of time.

One of the most useful and hot features is code completion. WebStorm continually analyzes and processes the code of the whole project, and smartly suggests the pieces of code appropriate in the current context, and even more—alongside the method names you can find the usage of these methods. Of course, code completion itself is not a fresh innovation; however, WebStorm performs it in a much smarter way than other IDEs do. WebStorm can auto-complete a lot of things: class and function names, keywords and parameters, types and properties, punctuation, and even file paths.

By default, the code completion facility is on. To invoke it, simply start typing some code.
For example, in the following image you can see how WebStorm suggests object methods:

You can navigate through the list of suggestions using your mouse or the Up and Down arrow keys. However, the list can be very long, which makes it not very convenient to browse. To reduce it and retain only the things appropriate in the current context, keep on typing the next letters. Besides typing only the initial consecutive letters of the method, you can either type something from the middle of the method name, or even use the CamelCase style, which is usually the quickest way of typing really long method names:

It may turn out, for some reason, that code completion isn't working automatically. To manually invoke it, press Control + Space on Mac or Ctrl + Space on Windows. To insert the suggested method, press Enter; to replace the string next to the current cursor position with the suggested method, press Tab. If you want the facility to also arrange the correct syntactic surroundings for the method, press Shift + ⌘ + Enter on Mac or Ctrl + Shift + Enter on Windows, and missing brackets and/or new lines will be inserted, up to the styling standards of the current language of the code.

Multiselect feature

With the multiple selection (or simply multiselect) feature, you can place the cursor in several locations simultaneously, and when you type the code, it will be applied at all these positions. For example, suppose you need to add a different background color to each table cell, and then make the cells twenty pixels wide. In this case, what you need, in order not to perform these identical tasks repeatedly and thus save a lot of time, is to place the cursor after the first <td> tag, press Alt, and put the cursor in each <td> tag that you are going to apply styling to.

Now you can start typing the necessary attribute—it is bgcolor. Note that WebStorm performs smart code completion here too, independently of whether you are typing on a single line or not.
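After the multiselect edit described above, the table row might look something like the following sketch (the color values are placeholders that you would fill in individually, as the text goes on to explain):

```html
<!-- All three cells received bgcolor and width in one multiselect pass;
     the hex colors here are illustrative, not from the book -->
<tr>
  <td bgcolor="#ffdddd" width="20">A</td>
  <td bgcolor="#ddffdd" width="20">B</td>
  <td bgcolor="#ddddff" width="20">C</td>
</tr>
```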
You get empty values for the bgcolor attributes, which you will fill out individually a bit later. You also need to change the width, so you can continue typing. As cell widths are arranged to be fixed-sized, simply add the value for the width attributes as well. This is what you get, as shown in the following image:

Moreover, the multiselect feature can select identical values or just words independently, that is, you needn't place the cursor in multiple locations. Let us watch this feature in another example. Say you changed your mind and decided to colorize not the backgrounds but the borders of several consecutive cells. You may instantly think of using a simple replace feature, but you needn't replace all attribute occurrences, only several consecutive ones. To do this, place the cursor on the first attribute that you are going to change, and click Ctrl + G on Mac or Alt + J on Windows as many times as you need. One by one, the same attributes will be selected, and you can replace the bgcolor attribute with the bordercolor one:

You can also select all occurrences of any word by clicking Ctrl + ⌘ + G on Mac or Ctrl + Alt + Shift + J on Windows. To get out of the multiselect mode, you have to click in a different position or use the Esc key.

Refactoring facility

Throughout the development process, it is almost unavoidable that you will have to use refactoring. Also, the bigger the code base, the more difficult it becomes to control the code, and when you need to refactor some of it, you will most likely run up against issues relating to, for example, naming omissions or function usages that were not taken into consideration. You have learned that WebStorm performs a thorough code analysis, so it understands what is connected with what; if some changes occur, it collates them and decides what is acceptable to perform in the rest of the code and what is not. Let us try a simple example.
In a big HTML file, you have the following line:

<input id="search" type="search" placeholder="search" />

And in a big JavaScript file, you have another one:

var search = document.getElementById('search');

You decided to rename the id attribute's value of the input element to search_field because it is less confusing. You could simply rename it here, but after that you would have to manually find all the occurrences of the word search in the code. It is evident that the word is rather frequent, so you would spend a lot of time recognizing whether each usage is appropriate in the current context or not. And there is a high probability that you would forget something important, and even more time would be spent on investigating the resulting issue.

Instead, you can entrust WebStorm with this task. Select the code unit to refactor (in our case, it is the search value of the id attribute), and press Ctrl + T on Mac or Ctrl + Alt + Shift + T on Windows (or simply click the Refactor menu item) to call the Refactor This dialog. There, choose the Rename… item and enter the new name for the selected code unit (search_field in our case).

To get only a preview of what will happen during the refactoring process, click the Preview button, and all the changes to apply will be displayed at the bottom. You can walk through the hierarchical tree and either apply the change by clicking the Do Refactor button, or not. If you don't need a preview, you can simply click the Refactor button. What you will see is that the id attribute got the search_field value, not the type or placeholder values, even though they have the same value, and in the JavaScript file you got getElementById('search_field').

Note that even though WebStorm can perform various smart tasks, it still remains a program, and some issues can occur, caused by so-called artificial intelligence imperfection, so you should always be careful when performing refactoring.
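To make the before and after concrete, here is a sketch of the two lines from the example, shown together in one file for brevity (the surrounding markup is illustrative):

```html
<!-- Before the Rename… refactoring: the id value "search" is
     referenced from the script below -->
<input id="search" type="search" placeholder="search" />
<script>
  var search = document.getElementById('search');
</script>

<!-- After renaming the id value to search_field: only the id and the
     getElementById reference change; the type and placeholder
     attributes keep their "search" values untouched -->
<input id="search_field" type="search" placeholder="search" />
<script>
  var search = document.getElementById('search_field');
</script>
```

This is exactly the distinction a plain find-and-replace cannot make: it would have rewritten the type and placeholder values as well.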
In particular, manually check the var declarations, because WebStorm can sometimes apply the changes to them as well, though this is not always necessary because of scope.

Of course, this is just a little of what you can perform with refactoring. The basic things that the refactoring facility allows you to do are as follows:

The elements in the preceding screenshot are explained as follows:

- Rename…: You have already become familiar with this refactoring. Once again, with it you can rename code units, and WebStorm will automatically fix all references to them in the code. The shortcut is Shift + F6.
- Change Signature…: This feature is used basically for changing function names, and adding/removing, reordering, or renaming function parameters, that is, changing the function signature. The shortcut is ⌘ + F6 for Mac and Ctrl + F6 for Windows.
- Move…: This feature enables you to move files or directories within a project, and it simultaneously repairs all references to these project elements in the code, so you needn't repair them manually. The shortcut is F6.
- Copy…: With this feature, you can copy a file or directory, or even a class with its structure, from one place to another. The shortcut is F5.
- Safe Delete…: This feature is really helpful. It allows you to safely delete any code or entire files from the project. When performing this refactoring, you will be asked whether to inspect comments and strings, or all text files, for occurrences of the required piece of code. The shortcut is ⌘ + Delete for Mac and Alt + Delete for Windows.
- Variable…: This refactoring feature declares a new variable, into which the result of the selected statement or expression is put. It can be useful when you realize there are too many occurrences of a certain expression, so it can be turned into a variable, and the expression can just initialize it. The shortcut is Alt + ⌘ + V for Mac and Ctrl + Alt + V for Windows.
- Parameter…: When you need to add a new parameter to some method and appropriately update its calls, use this feature. The shortcut is Alt + ⌘ + P for Mac and Ctrl + Alt + P for Windows.
- Method…: During this refactoring, the code block you selected undergoes analysis, through which the input and output variables get detected, and the extracted function receives the output variable as a return value. The shortcut is Alt + ⌘ + M for Mac and Ctrl + Alt + M for Windows.
- Inline…: The inline refactoring works contrariwise to the extract method refactoring—it replaces surplus variables with their initializers, making the code more compact and concise. The shortcut is Alt + ⌘ + N for Mac and Ctrl + Alt + N for Windows.

Summary

In this article, you have learned about the most distinctive features of WebStorm, which are the core constituents of improving your efficiency in building web applications.

Resources for Article:

Further resources on this subject:
- Introduction to Spring Web Application in No Time [article]
- Applications of WebRTC [article]
- Creating Java EE Applications [article]