How-To Tutorials - Web Development

1802 Articles

Vim 7.2 Scripting

Packt
30 Apr 2010
9 min read
Scripting tips

In this section, we will look at a few extra tips that can be handy when you create scripts for Vim. Some are simple pieces of code you can add directly to your script, while others are good-to-know tips.

Gvim or Vim?

Some scripts have extra features when used in the GUI version of Vim (Gvim). This could be adding menus, toolbars, or other things that only work if you are using Gvim. So how do you check whether the user runs the script in a console Vim or in Gvim? Vim has already prepared the information for you. You simply have to check whether the feature gui_running is enabled. To do so, you use a function called has(), which returns 1 (true) if a given feature is supported / enabled and 0 (false) otherwise. An example could be:

    if has("gui_running")
        " execute gui-only commands here
    endif

This is all you need to do to check whether a user is running Gvim or not. Note that it is not enough to check whether the feature "gui" exists, because this will return true if your Vim is merely compiled with GUI support, even if it is not using it. Look in :help 'feature-list' for a complete list of other features you can check with the has() function.

Which operating system?

If you have tried to work with multiple operating systems, such as Microsoft Windows and Linux, you will surely know that there are many differences: everything from where programs are placed, to which programs are available, to how access to files is restricted. Sometimes this can also influence how you construct your Vim script, as you might have to call external programs or use other functionality specific to a certain operating system. Vim lets you check which operating system you are on, so that you can stop executing your script or make decisions about how to configure it. This is done with the following piece of code:

    if has("win16") || has("win32") || has("win64") || has("win95")
        " do windows things here
    elseif has("unix")
        " do linux/unix things here
    endif

This example only shows how to check for Windows (all available flavors) and Linux / Unix. As Vim is available on a wide variety of platforms, you can of course also check for these. All of the operating systems can be found in :help 'feature-list'.
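As a concrete illustration, such checks are often used to pick a platform-specific external program. A minimal sketch (the variable name and program options are illustrative, not from the original article):

    " choose an external search program per platform
    if has("win32") || has("win64")
        let s:grepprg = 'findstr /n'
    elseif has("unix")
        let s:grepprg = 'grep -n'
    endif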
Which version of Vim?

Throughout the last decade or two, Vim has developed and been extended with a large list of functions. Sometimes you want to use the newest functions in your script, as these are the best / easiest to use. But what about users whose version of Vim is older than yours, and who hence don't have access to the functions you use? You have three options:

1. Don't care and let it be the user's own problem (not a good option).
2. Check whether the user has an old version of Vim, and stop executing the script if this is the case.
3. Check whether the user has too old a version of Vim, and use alternative code in that case.

The first option is really not one I would recommend anyone to use, so please don't. The second option is acceptable if you can't work around the problem in the old version of Vim. However, if it is possible to provide an alternative solution for the older version of Vim, then this is the preferable option. So let's look at how you can check the version of Vim. Before doing so, we have to take a look at how the version number is built. The number consists of three parts:

- Major number (for example, 7 for Vim version 7)
- Minor number (for example, 3 for Vim version 6.3)
- Patch number (for example, 123 for Vim 7.0.123)

The first two numbers are the actual version number, but when minor features / patches are applied to a version of Vim, it is mostly only the patch number that is increased. It takes quite a bit of change to get the minor number to increase, and a major part of Vim should change in order to increase the major version number. Therefore, when you want to check which version of Vim the user has, you check two things: the major and minor version number, and the patch number. The code for this could look like:

    if v:version >= 702 || v:version == 701 && has("patch123")
        " code here is only executed for version 7.1 with patch 123
        " and version 7.2 and above
    endif

The first part of the condition checks whether our version of Vim is 7.2 (notice that the minor version number is padded with 0 if less than 10) or above. If this is not the case, it checks whether we have version 7.1 with patch 123. If patch 124 or above is included, then patch 123 is automatically included as well.

Printing longer lines

Vim was originally created for old text terminals where the length of lines was limited to a certain number of characters. Today, this old limitation still shows up once in a while. One place where you meet this limitation is when printing longer lines to the screen using the echo statement. Even though you use Vim in a window with more than the traditional 80 characters per line, Vim will still prompt you to press Enter after echoing lines longer than 80 characters. There is, however, a way to get around this and make it possible to use the entire window width for echoing. Window width means the total number of columns in the Vim window minus a single character. So if your Vim window is wide enough to have 120 characters on each line, then the window width is 120-1 characters. By adding the following function to your script, you will be able to echo screen-wide lines in Vim:

    " WideMsg() prints [long] message up to (&columns-1) length
    function! WideMsg(msg)
        let x=&ruler | let y=&showcmd
        set noruler noshowcmd
        redraw
        echo a:msg
        let &ruler=x | let &showcmd=y
    endfunction

This function was originally proposed by the Vim script developer Yakov Lerner on the Vim online community site at http://www.vim.org. Now, whenever you need to echo a long line of text in your script, you simply use the function WideMsg() instead of the echo statement. An example could be:

    :call WideMsg("This should be a very long line of text")

The length of a single-line message is still limited, but now it is limited by the width of the Vim window instead of the traditional 80-1 characters.

Debugging Vim scripts

Sometimes things in your scripts do not work exactly as you expect them to. In these cases, it is always good to know how to debug your script. In this section, we will look at some of the methods you can use to find the error. Well-structured code often has fewer bugs and is also easier to debug. Vim has a special mode for script debugging, and depending on what you want to debug, there are different ways to start this mode. So let's look at some different cases. If Vim just throws some errors (by printing them at the bottom of the Vim window), but you are not really sure where they come from or why they happen, then you might want to start Vim directly in debugging mode.
This is done on the command line by invoking Vim with the -D argument:

    vim -D somefile.txt

Debugging mode starts when Vim begins to read the first vimrc file it loads (in most cases the global vimrc file where Vim is installed). We will look at what to do once you are in debug mode in a moment. Another case where you might want to enter debug mode is when you already know which function the error (most likely) is in, and hence just want to debug that function. In that case, you open Vim as normal (loading the script with the particular function if needed) and then use the following command:

    :debug call Myfunction()

Here, everything after :debug is the functionality you want to debug. In this case it is a simple call of the function Myfunction(), but it could just as well be any of the following:

    :debug read somefile.txt
    :debug nmap ,a :call Myfunction() <CR>
    :debug help :debug

So let's look at what to do once we are in debugging mode. When reaching the first line it should debug, Vim breaks the loading and shows something like:

    Entering Debug mode. Type "cont" to continue.
    cmd: call MyFunction()
    >

Now you are in the Vim script debugger and have some choices for what to make Vim do. If you are not familiar with debugging techniques, it might be a good idea to read up on the subject before starting to debug your scripts. The following commands are available in the debugger (shortcuts are in parentheses):

- cont (c): Continue running the script / commands as normal (no debugging) until the next breakpoint (more about this later).
- quit (q): Quit the debugging process without executing the last lines.
- interrupt (i): Stop the current process like quit, but go back to the debugger.
- step (s): Execute the next line of code and come back to the debugger when it is finished. If a line calls a function or sources a file, it steps into the function / file.
- next (n): Execute the next command and come back to the debugger when it is finished. If used on a line with a function call, it does not go into the function but steps over it.
- finish (f): Continue executing the script without stopping on breakpoints, and go into debug mode when done.
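The breakpoints mentioned above are set with Vim's :breakadd command. A minimal sketch, with illustrative function and file names:

    " break whenever MyFunction is entered
    :breakadd func MyFunction
    " break when line 10 of myscript.vim is executed
    :breakadd file 10 myscript.vim
    " then run the code you want to inspect
    :debug call MyFunction()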


Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications. In this article, we will cover the following recipes:

- Creating a multiuser conference using WebRTCO
- Taking a screenshot using WebRTC
- Compiling and running a demo for Android

Creating a multiuser conference using WebRTCO

In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO, an open source JavaScript framework for developing WebRTC applications.

Getting ready

For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server. To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO.

How to do it…

The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and do some initialization: Create an HTML file and add the common HTML head:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8">

Add some style definitions to make the web page look nicer:

    <style type="text/css">
        video {
            width: 384px;
            height: 288px;
            border: 1px solid black;
            text-align: center;
        }
        .container {
            width: 780px;
            margin: 0 auto;
        }
    </style>

Include the framework in your project:

    <script type="text/javascript" src="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js" charset="utf-8"></script>
    </head>

Define the onLoad function; it will be called after the web page has loaded, and we will do some preliminary initialization work in it:

    <body onload="onLoad();">

Define the HTML container where the local video will be placed:

    <div class="container">
        <video id="localVideo"></video>
    </div>
Define a place where the remote video will be added. Note that we don't create HTML video objects here; we just define a separate div. Video objects will be created and added to the page by the framework automatically:

    <div class="container" id="remoteVideos"></div>
    <div class="container">

Create the controls for the chat area:

    <div id="chat_area" style="width:100%; height:250px; overflow: auto;
        margin:0 auto 0 auto; border:1px solid rgb(200,200,200);
        background: rgb(250,250,250);"></div>
    </div>
    <div class="container" id="div_chat_input">
    <input type="text" class="search-query" placeholder="chat here" name="msgline" id="chat_input">
    <input type="submit" class="btn" id="chat_submit_btn" onclick="sendChatTxt();"/>
    </div>

Initialize a few variables:

    <script type="text/javascript">
        var videoCount = 0;
        var webrtco = null;
        var parent = document.getElementById('remoteVideos');
        var chatArea = document.getElementById("chat_area");
        var chatColorLocal = "#468847";
        var chatColorRemote = "#3a87ad";

Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:

        function getRemoteVideo(remPid) {
            var video = document.createElement('video');
            var id = 'remoteVideo_' + remPid;
            video.setAttribute('id', id);
            parent.appendChild(video);
            return video;
        }

Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory; we do it just to make the demo page look nicer:

        function onLoad() {
            var divChatInput = document.getElementById("div_chat_input");
            var divChatInputWidth = divChatInput.offsetWidth;
            var chatSubmitButton = document.getElementById("chat_submit_btn");
            var chatSubmitButtonWidth = chatSubmitButton.offsetWidth;
            var chatInput = document.getElementById("chat_input");
            var chatInputWidth = divChatInputWidth - chatSubmitButtonWidth - 40;
            chatInput.setAttribute("style", "width:" + chatInputWidth + "px");
            chatInput.style.width = chatInputWidth + 'px';
            var lv = document.getElementById("localVideo");

Create a new WebRTCO object and start the application. After this point, the framework will start the signaling connection, get access to the user's media, and be ready for incoming connections from remote peers:

            webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling', lv,
                OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);
        };

Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework; however, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object ID. Then we supply functions to process the received room, received chat messages, and received remote video streams. The last parameter is the function that will be called when one of the remote peers disconnects. The following function will be called when a remote peer has closed the connection; it removes the video objects that have become outdated:

        function OnBye(pid) {
            var video = document.getElementById("remoteVideo_" + pid);
            if (null !== video) video.remove();
        };

We also need a function that will create a URL to share with other peers so that they are able to connect to the virtual room.
The following piece of code represents such a function:

        function OnRoomReceived(room) {
            addChatTxt("Now, if somebody wants to join you, they should use this link: " +
                "<a href=\"" + window.location.href + "?room=" + room + "\">" +
                window.location.href + "?room=" + room + "</a>", chatColorRemote);
        };

The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:

        function addChatTxt(msg, msgColor) {
            var txt = "<font color=" + msgColor + ">" + getTime() + msg + "</font><br/>";
            chatArea.innerHTML = chatArea.innerHTML + txt;
            chatArea.scrollTop = chatArea.scrollHeight;
        };

The next function is a callback that is called by the framework when a peer has sent us a message. This function prints the message in the chat area:

        function onChatMsgReceived(msg) {
            addChatTxt(msg, chatColorRemote);
        };

To send messages to remote peers, we create another function, represented in the following code:

        function sendChatTxt() {
            var msgline = document.getElementById("chat_input");
            var msg = msgline.value;
            addChatTxt(msg, chatColorLocal);
            msgline.value = '';
            webrtco.API_sendPutChatMsg(msg);
        };

We also want to print the time along with each message, so we have a special function that formats the time data appropriately:

        function getTime() {
            var d = new Date();
            var c_h = d.getHours();
            var c_m = d.getMinutes();
            var c_s = d.getSeconds();
            if (c_h < 10) { c_h = "0" + c_h; }
            if (c_m < 10) { c_m = "0" + c_m; }
            if (c_s < 10) { c_s = "0" + c_s; }
            return c_h + ":" + c_m + ":" + c_s + ": ";
        };

Finally, we have some helper code to make our life easier. We will use it when removing obsolete video objects after remote peers have disconnected:

        Element.prototype.remove = function() {
            this.parentElement.removeChild(this);
        }
        NodeList.prototype.remove = HTMLCollection.prototype.remove = function() {
            for (var i = 0, len = this.length; i < len; i++) {
                if (this[i] && this[i].parentElement) {
                    this[i].parentElement.removeChild(this[i]);
                }
            }
        }
    </script>
    </body>
    </html>

Now, save the file and put it on the web server where it can be accessed from a web browser.

How it works…

Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see a URL in the chat area. Open this URL in a new browser window or on another machine; the framework will create a new video object for every new peer and add it to the web page. The number of peers is not limited by the application. For example, you can use three peers: two web browser windows on the same machine and a notebook as the third peer.

Taking a screenshot using WebRTC

Sometimes it can be useful to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature.

Getting ready

No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application.

How to do it…

Follow these steps: First of all, add image and canvas objects to the web page of the application.
We will use these objects to take screenshots and display them on the page:

    <img id="localScreenshot" src="">
    <canvas style="display:none;" id="localCanvas"></canvas>

Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local stream video:

    <button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button>

Finally, we need to implement the screenshot-taking function:

    function btn_screenshot() {
        var v = document.getElementById("localVideo");
        var s = document.getElementById("localScreenshot");
        var c = document.getElementById("localCanvas");
        var ctx = c.getContext("2d");

Draw an image on the canvas object; the image will be taken from the video object:

        ctx.drawImage(v, 0, 0);

Now, take a reference to the canvas, convert it to a DataURL object, and insert the value into the src option of the image object. As a result, the image object will show us the taken screenshot:

        s.src = c.toDataURL('image/png');
    }

That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to disk using right-click and the pop-up menu.

How it works…

We use the canvas object to take a frame of the video object. Then we convert the canvas' data to a DataURL and assign this value to the src parameter of the image object. After that, the image object refers to the video frame stored in the canvas.
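If you also want to offer the captured frame as a downloadable file rather than relying on right-click saving, the same canvas data can be reused. A minimal sketch, assuming the element IDs from the recipe above (the function name and filename are illustrative, and the download attribute requires a reasonably modern browser):

    function btn_save_screenshot() {
        // reuse the canvas that btn_screenshot() has already drawn into
        var c = document.getElementById("localCanvas");
        var a = document.createElement("a");
        a.href = c.toDataURL("image/png");
        a.download = "screenshot.png"; // suggested filename for the download
        document.body.appendChild(a);
        a.click();
        document.body.removeChild(a);
    }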
Compiling and running a demo for Android

Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands during the whole building process.

Getting ready

We need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box: Ubuntu 14.04.1 x64. So all the commands that might be OS-specific will be relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can use Windows or Mac OS X instead. If you're using Linux, it should be 64-bit; otherwise, you most likely won't be able to compile the Android code.

Preparing the system

First of all, you need to install the necessary system packages:

    sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0 libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6

Installing Oracle JDK

By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK instead; otherwise, you can face issues while building WebRTC applications for Android. Another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6; other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future but, in my case, only Oracle JDK 1.6 was able to build the demo successfully. Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link for such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first; after that, you will be able to download anything from their archive. Install the downloaded JDK:

    sudo mkdir -p /usr/lib/jvm
    cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister

Here, I assume that you downloaded the JDK package into the home directory. Register the JDK in the system:

    sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
    sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
    sudo update-alternatives --config javac
    sudo update-alternatives --config java
    cd /usr/lib
    sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
    export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/

Test the Java version:

    java -version

You should see something like "Java HotSpot" on the screen; this means that the correct JVM is installed.

Getting the WebRTC source code

Perform the following steps to get the WebRTC source code: Download and prepare Google Developer Tools:

    mkdir -p ~/dev && cd ~/dev
    git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
    export PATH=`pwd`/depot_tools:"$PATH"

Download the WebRTC source code:

    gclient config http://webrtc.googlecode.com/svn/trunk
    echo "target_os = ['android', 'unix']" >> .gclient
    gclient sync

The last command can take a couple of minutes (depending on your Internet connection speed), as you will be downloading several gigabytes of source code.

Installing Android Developer Tools

To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT: Download ADT from its home page http://developer.android.com/sdk/index.html#download. Unpack ADT to a folder:

    cd ~/dev
    unzip ~/adt-bundle-linux-x86_64-20140702.zip

Set up the ANDROID_HOME environment variable:

    export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk

How to do it…

After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application: Prepare the Android-specific build dependencies:

    cd ~/dev/trunk
    source ./build/android/envsetup.sh

Configure the build scripts:

    export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
    gclient runhooks

Build the WebRTC code with the demo application:

    ninja -C out/Debug -j 5 AppRTCDemo

After the last command, you can find the compiled Android package with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.

Running on the Android simulator

Follow these steps to run the application on the Android simulator: Run the Android SDK manager and install the necessary Android components:

    $ANDROID_HOME/tools/android sdk

Choose at least Android 4.x; lower versions don't have WebRTC support. (I chose Android SDK 4.4 and 4.2.) Create an Android virtual device:

    cd $ANDROID_HOME/tools
    ./android avd &

The last command executes the Android SDK tool for creating and maintaining virtual devices. Create a new virtual device using this tool.
Start the emulator using the virtual device you just created:

    ./emulator -avd emu1 &

This can take a couple of seconds (or even minutes); after that, you should see a typical Android device home screen. Check whether the virtual device is simulated and running:

    cd $ANDROID_HOME/platform-tools
    ./adb devices

You should see something like the following:

    List of devices attached
    emulator-5554   device

This means that your newly created virtual device is OK and running, so we can use it to test our demo application. Install the demo application on the virtual device:

    ./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You should see something like the following:

    636 KB/s (2507985 bytes in 3.848s)
    pkg: /data/local/tmp/AppRTCDemo-debug.apk
    Success

This means that the application has been transferred to the virtual device and is ready to be started. Switch to the simulator window and you should see the icon of the installed demo application, AppRTC; execute it as you would on a real Android device. While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to switch on the Use Host GPU option while creating the virtual device with the Android Virtual Device (AVD) tool.

Fixing a bug with GLSurfaceView

Sometimes, if you're using an Android simulator with a virtual device on the ARM architecture, you can face an issue where the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209. The following steps can help you fix this bug in the original demo application: Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code:

    vsv = new AppRTCGLView(this, displaySize);

Right after this line, add the following line of code:

    vsv.setEGLConfigChooser(8,8,8,8,16,16);

You will need to recompile the application:

    cd ~/dev/trunk
    ninja -C out/Debug AppRTCDemo

Now you can deploy your application and the issue will not appear anymore.

Running on a physical Android device

To deploy applications on an Android device, you don't need any developer certificates (unlike in the case of iOS devices). So if you have a physical Android device, it will probably be easier to debug and run the demo application on the device rather than on the simulator. Connect the Android device to the machine using a USB cable. On the Android device, switch the USB debug mode on. Check whether your machine sees your device:

    cd $ANDROID_HOME/platform-tools
    ./adb devices

If the device is connected and the machine sees it, you should see the device's name in the output of the preceding command:

    List of devices attached
    QO4721C35410   device

Deploy the application onto the device:

    cd $ANDROID_HOME/platform-tools
    ./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You will get the following output:

    3016 KB/s (2508031 bytes in 0.812s)
    pkg: /data/local/tmp/AppRTCDemo-debug.apk
    Success

After that, you should see the AppRTC demo application's icon on the device. After you have started the application, you should see a prompt to enter a room number.
At this stage, go to http://apprtc.webrtc.org in a web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and the other machine will try to establish a peer-to-peer connection, which might take some time. Once the connection has been established, the desktop machine shows a big image with what is transmitted from the front camera of the Android smartphone, and a small image with the picture from the notebook's web camera. Both devices have thus established a direct connection and transmit audio and video to each other.

There's more…

The original demo doesn't contain any ready-to-use IDE project files, so you have to deal with console commands and scripts during the whole development process. You can make your life a bit easier by using third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc.

Summary

In this article, we have learned how to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android.


Working with Rails – Setting up and connecting to a database

Packt
22 Oct 2009
6 min read
In this article, authors Elliot Smith and Rob Nichols explain the setup of a new Rails application and how to integrate it with other data sources. Specifically, this article focuses on turning the abstract data structure for Intranet into a Rails application. This requires a variety of concepts and tools, namely:

- The structure of a Rails application
- Initializing an application using the rails command
- Associating Rails with a database
- The built-in utility scripts included with each application
- Using migrations to maintain a database
- Building models and validating them
- Using the Rails console to manually test models
- Automated testing of models using Test::Unit
- Hosting a project in a Subversion repository
- Importing data into the application using scripts

In this article, we'll focus on the first three concepts.

The World According to Rails

To understand how Rails applications work, it helps to get under its skin: find out what motivated its development, and the philosophy behind it. The first thing to grasp is that Rails is often referred to as opinionated software (see http://www.oreillynet.com/pub/a/network/2005/08/30/ruby-rails-davidheinemeier-hansson.html). It encapsulates an approach to web application development centered on good practice, emphasizing automation of common tasks and minimization of effort. Rails helps developers make good choices, and even removes the need to make choices where they are just distractions. How is this possible? It boils down to a couple of things:

- Use of a default design for applications: by making it easy to build applications using the Model-View-Controller (MVC) architecture, Rails encourages separation of an application's database layer, its control logic, and the user interface. Rails' implementation of the MVC pattern is the key to understanding the framework as a whole.
- Use of conventions instead of explicit configuration: by encouraging use of a standard directory layout and file-naming conventions, Rails reduces the need to configure relationships between the elements of the MVC pattern. Code generators are used to great effect in Rails, making it easy to follow the conventions.

We'll see each of these features in more detail in the next two sections.

Model-View-Controller Architecture

The original aim of the MVC pattern was to provide an architecture to bridge the gap between human and computer models of data. Over time, MVC has evolved into an architecture which decouples the components of an application, so that one component (e.g. the control logic) can be changed with minimal impact on the other components (e.g. the interface). Explaining MVC makes more sense in the context of "traditional" web applications. When using languages such as PHP or ASP, it is tempting to mix application logic with database-access code and HTML generation. (Ruby, itself, can also be used in this way to write CGI scripts.)
To highlight how a traditional web application works, here's a pseudo-code example:

    # define a file to save email addresses into
    email_addresses_file = 'emails.txt'
    # get the email_address variable from the querystring
    email_address = querystring['email_address']
    # CONTROLLER: switch action of the script based on whether
    # email address has been supplied
    if '' == email_address
        # VIEW: generate HTML form to accept user input which
        # posts back to this script
        content = "<form method='post' action='" + self + "'>
        <p>Email address: <input type='text' name='email_address'/></p>
        <p><input type='submit' value='Save'/></p>
        </form>"
    else
        # VIEW: generate HTML to confirm data submission
        content = "<p>Your email address is " + email_address + "</p>"
        # MODEL: persist data
        if not file_exists(email_addresses_file)
            create_file(email_addresses_file)
        end if
        write_to_file(email_addresses_file, email_address)
    end if
    print "<html><head><title>Email manager</title></head>
    <body>" + content + "</body></html>"

The highlighted comments indicate how the code can be mapped to elements of the MVC architecture:

- Model components handle an application's state. Typically, the model does this by putting data into some kind of long-term storage (e.g. database, filesystem). Models also encapsulate business logic, such as data validation rules. Rails uses ActiveRecord as its model layer, enabling data handling in a variety of relational database back-ends. In the example script, the model role is performed by the section of code which saves the email address into a text file.
- View components generate the user interface (e.g. HTML, XML). Rails uses ActionView (part of the ActionPack library) to manage generation of views. The example script has sections of code to create an appropriate view, generating either an HTML form for the user to enter their email address, or a confirmation message acknowledging their input.
- The Controller orchestrates between the user and the model, retrieving data from the user's request and manipulating the model in response (e.g. creating objects, populating them with data, saving them to a database). In the case of Rails, ActionController (another part of the ActionPack library) is used to implement controllers. These controllers handle all requests from the user, talk to the model, and generate appropriate views. In the example script, the code which retrieves the submitted email address is performing the controller role. A conditional statement is used to generate an appropriate response, depending on whether an email address was supplied or not.

In a traditional web application, the three broad classes of behavior described above are frequently mixed together. In a Rails application, these behaviors are separated out, so that a single layer of the application (the model, view, or controller) can be altered with minimal impact on the other layers. This gives a Rails application the right mix of modularity, flexibility, and power. Next, we'll see another piece of what makes Rails so powerful: the idea of using conventions to create associations between models, views, and controllers. Once you can see how this works, the Rails implementation of MVC makes more sense; we'll return to that topic in the section Rails and MVC.
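To make the contrast concrete, here is a minimal sketch of how the same email-capture behavior might be split across Rails' three layers. The class and file names are illustrative; they follow the Rails conventions of the period, not the book's actual Intranet application:

    # app/models/subscriber.rb -- MODEL: persistence and validation
    class Subscriber < ActiveRecord::Base
      validates_presence_of :email_address
    end

    # app/controllers/subscribers_controller.rb -- CONTROLLER:
    # retrieves request data and manipulates the model
    class SubscribersController < ApplicationController
      def create
        @subscriber = Subscriber.new(params[:subscriber])
        @subscriber.save
        # the VIEW lives separately, in app/views/subscribers/create.html.erb
      end
    end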


Understanding Backbone

Packt
02 Sep 2013
12 min read
Backbone.js is a lightweight JavaScript framework that is based on the Model-View-Controller (MVC) pattern and allows developers to create single-page web applications. With Backbone, it is possible to update a web page quickly using the REST approach, with a minimal amount of data transferred between a client and a server. Backbone.js is becoming more popular day by day and is being used on a large scale in web applications and IT startups, for example:

- Groupon Now!: The team decided that their first product would be AJAX-heavy but should still be linkable and shareable. Though they were completely new to Backbone, they found that its learning curve was incredibly quick, so they were able to deliver the working product in just two weeks.
- Foursquare: This used the Backbone.js library to create model classes for the entities in foursquare (for example, venues, check-ins, and users). They found that Backbone's model classes provide a simple and lightweight mechanism to capture an object's data and state, complete with the semantics of classical inheritance.
- LinkedIn mobile: This used Backbone.js to create its next-generation HTML5 mobile web app. Backbone made it easy to keep the app modular, organized, and extensible, so it was possible to program the complexities of LinkedIn's user experience. Moreover, they use the same code base in their mobile applications for the iOS and Android platforms.
- WordPress.com: This is a SaaS version of WordPress. It uses Backbone.js models, collections, and views in its notification system, and is integrating Backbone.js into the Stats tab and other features throughout the home page.
- Airbnb: This is a community marketplace for users to list, discover, and book unique spaces around the world. Its development team has used Backbone in many of its latest products. Recently, they rebuilt their mobile website with Backbone.js and Node.js tied together with a library named Rendr.

You can visit the following link to get acquainted with other usage examples of Backbone.js: http://backbonejs.org/#examples

Backbone.js was started by Jeremy Ashkenas from DocumentCloud in 2010 and is now being used and improved by lots of developers all over the world using Git, the distributed version control system. In this article, we are going to provide some practical examples of how to use Backbone.js, and we will structure a design for a program named Billing Application by following the MVC and Backbone patterns. Reading this article is especially useful if you are new to developing with Backbone.js.

Designing an application with the MVC pattern

MVC is a design pattern that is widely used in user-facing software, such as web applications. It is intended for splitting data and representing it in a way that makes it convenient for user interaction.
To understand what it does, understand the following:

- Model: This contains data and provides the business logic used to run the application
- View: This presents the model to the user
- Controller: This reacts to user input by updating the model and the view

There can be some differences between MVC implementations, but in general they conform to this scheme. Worldwide practice shows that the use of the MVC pattern provides various benefits to the developer:

- Following the separation-of-concerns paradigm, which splits an application into independent parts, it is easier to modify or replace parts of the application
- It achieves code reusability by rendering a model in different views, without the need to implement model functionality in each view
- It requires less training and offers a quicker startup time for new developers within an organization

To get a better understanding of the MVC pattern, we are going to design a Billing Application. We will refer to this design throughout the book when we are learning specific topics. Our Billing Application will allow users to generate invoices, manage them, and send them to clients. Following worldwide practice, an invoice should contain a reference number, date, information about the buyer and seller, bank account details, a list of provided products or services, and an invoice sum.

How to do it...

Let's follow the ensuing steps to design an MVC structure for the Billing Application: First, let's write down a list of functional requirements for this application. We assume that the end user may want to be able to do the following:

- Generate an invoice
- E-mail the invoice to the buyer
- Print the invoice
- See a list of existing invoices
- Manage invoices (create, read, update, and delete)
- Update an invoice status (draft, issued, paid, and canceled)
- View a yearly income graph and other reports

To simplify the process of creating multiple invoices, the user may want to manage information about buyers and his/her personal details in a specific part of the application before creating an invoice. So our application should provide additional functionality to the end user, such as the following:

- The ability to see a list of buyers and use it when generating an invoice
- The ability to manage buyers (create, read, update, and delete)
- The ability to see a list of bank accounts and use it when generating an invoice
- The ability to manage his/her own bank accounts (create, read, update, and delete)
- The ability to edit personal details and use them when generating an invoice

Of course, we may want to have more functions, but this is enough for demonstrating how to design an application using the MVC pattern. Next, we architect the application using the MVC pattern. After we have defined the features of our application, we need to understand what is more related to the model (business logic) and what is more related to the view (presentation), splitting the functionality into several parts. Then, we learn how to define models. Models present data and provide data-specific business logic. Models can be related to each other; in our case, they are as follows:

- InvoiceModel
- InvoiceItemModel
- BuyerModel
- SellerModel
- BankAccountModel

Then, we will define collections of models. Our application allows users to operate on a number of models, so they need to be organized into a special iterable object named Collection.
We need the following collections:

- InvoiceCollection
- InvoiceItemCollection
- BuyerCollection
- BankAccountCollection

Next, we define views. Views present a model or a collection to the application user. A single model or collection can be rendered for use by multiple views. The views that we need in our application are as follows:

- EditInvoiceFormView
- InvoicePageView
- InvoiceListView
- PrintInvoicePageView
- EmailInvoiceFormView
- YearlyIncomeGraphView
- EditBuyerFormView
- BuyerPageView
- BuyerListView
- EditBankAccountFormView
- BankAccountPageView
- BankAccountListView
- EditSellerInfoFormView
- ViewSellectInfoPageView
- ConfirmationDialogView

Finally, we define a controller. A controller allows users to interact with an application. In MVC, each view can have a different controller that is used to do the following:

- Map a URL to a specific view
- Fetch models from a server
- Show and hide views
- Handle user input

Defining business logic with models and collections

Now it is time to design the business logic for the Billing Application using the MVC and OOP approaches. In this recipe, we are going to define an internal structure for our application with model and collection objects. While a model represents a single object, a collection is a set of models that can be iterated, filtered, and sorted.

How to do it...

For each model, we are going to create two tables: one for properties and another for methods. We define the BuyerModel properties:

    Name          Type     Required  Unique
    id            Integer  Yes       Yes
    name          Text     Yes
    address       Text     Yes
    phoneNumber   Text     No

Then, we define the SellerModel properties:

    Name          Type     Required  Unique
    id            Integer  Yes       Yes
    name          Text     Yes
    address       Text     Yes
    phoneNumber   Text     No
    taxDetails    Text     Yes

After this, we define the BankAccountModel properties:

    Name                 Type     Required  Unique
    id                   Integer  Yes       Yes
    beneficiary          Text     Yes
    beneficiaryAccount   Text     Yes
    bank                 Text     No
    SWIFT                Text     Yes
    specialInstructions  Text     No

We define the InvoiceItemModel properties:

    Name          Type     Required  Unique
    id            Integer  Yes       Yes
    deliveryDate  Date     Yes
    description   Text     Yes
    price         Decimal  Yes
    quantity      Decimal  Yes

Next, we define the InvoiceItemModel methods. We don't need to store the item amount in the model, because it always depends on the price and the quantity, so it can be calculated:

    Name             Arguments  Return Type
    calculateAmount  -          Decimal

Now, we define the InvoiceModel properties:

    Name             Type        Required  Unique
    id               Integer     Yes       Yes
    referenceNumber  Text        Yes
    date             Date        Yes
    bankAccount      Reference   Yes
    items            Collection  Yes
    comments         Text        No
    status           Integer     Yes

We define the InvoiceModel methods. The invoice amount can easily be calculated as the sum of the invoice item amounts:

    Name             Arguments  Return Type
    calculateAmount  -          Decimal

Finally, we define the collections. In our case, they are InvoiceCollection, InvoiceItemCollection, BuyerCollection, and BankAccountCollection. They are used to store models of an appropriate type and provide methods to add models to and remove models from the collections.

How it works...

Models in Backbone.js are implemented by extending Backbone.Model, and collections are made by extending Backbone.Collection. To implement relations between models and collections, we can use special Backbone extensions.
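As a brief illustration of that, here is a minimal sketch of how one model and its collection from the tables above might be declared. The defaults mirror the property table; everything else is illustrative:

    // InvoiceItemModel: properties plus the calculateAmount method
    var InvoiceItemModel = Backbone.Model.extend({
        defaults: {
            deliveryDate: null,
            description: '',
            price: 0,
            quantity: 0
        },
        // the amount is derived from price and quantity, so it is not stored
        calculateAmount: function () {
            return this.get('price') * this.get('quantity');
        }
    });

    // InvoiceItemCollection: an iterable set of InvoiceItemModel objects
    var InvoiceItemCollection = Backbone.Collection.extend({
        model: InvoiceItemModel
    });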
To learn more about object properties, methods, and OOP programming in JavaScript, you can refer to the following resource: https://developer.mozilla.org/en-US/docs/JavaScript/Introduction_to_Object-Oriented_JavaScript

Modeling an application's behavior with views and a router

Unlike traditional MVC frameworks, Backbone does not provide any distinct object that implements controller functionality. Instead, the controller is diffused between Backbone.Router and Backbone.View, and the following is done:

- A router handles URL changes and delegates application flow to a view. Typically, the router fetches a model from the storage asynchronously. When the model is fetched, it triggers a view update.
- A view listens to DOM events and either updates a model or navigates the application through a router.

How to do it...

Let's follow the ensuing steps to understand how to define basic views and a router in our application: First, we need to create wireframes for the application. Let's describe a couple of them in this recipe:

- The Edit Invoice page allows users to select a buyer, select the seller's bank account from the lists, enter the invoice's date and a reference number, and build a table of shipped products and services.
- The Preview Invoice page shows how the final invoice will be seen by a buyer. This display should render all the information we have entered in the Edit Invoice form. Buyer and seller information can be looked up in the application storage. The user has the option to either go back to the Edit display or save this invoice.

Then, we will define the view objects. According to the previous wireframes, we need two main views: EditInvoiceFormView and PreviewInvoicePageView. These views will operate with InvoiceModel, which refers to other objects such as BankAccountModel and InvoiceItemCollection. Now, we will split the views into subviews. For each item in the Products or Services table, we may want to recalculate the Amount field depending on what the user enters in the Price and Quantity fields. One way to do this is to re-render the entire view when the user changes a value in the table; however, this is not efficient, and it takes a significant amount of computing power. We don't need to re-render the entire view if we only want to update a small part of it. It is better to split the big view into different, independent pieces, or subviews, that are able to render only a specific part of the big view. In our case, EditInvoiceItemTableView and PreviewInvoiceItemTableView render InvoiceItemCollection with the help of the additional views EditInvoiceItemView and PreviewInvoiceItemView, which render InvoiceItemModel. Such separation allows us to re-render an item inside a collection when it is changed. Finally, we will define the URL paths that will be associated with corresponding views. In our case, we can have several URLs to show different views, for example:

- /invoice/add
- /invoice/:id/edit
- /invoice/:id/preview

Here, we assume that the Edit Invoice view can be used either for creating a new invoice or for editing an existing one. In the router implementation, we can load this view and show it on specific URLs.

How it works...

The Backbone.View object can be extended to create our own view that will render model data. In a view, we can define handlers for user actions, such as data input and keyboard or mouse events.
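A minimal sketch of such a view, following the subview idea above (the CSS selectors and markup are illustrative, not from the recipe):

    // re-renders only the Amount cell when Price or Quantity changes
    var EditInvoiceItemView = Backbone.View.extend({
        events: {
            'change .price': 'updateAmount',
            'change .quantity': 'updateAmount'
        },
        updateAmount: function () {
            this.model.set({
                price: parseFloat(this.$('.price').val()) || 0,
                quantity: parseFloat(this.$('.quantity').val()) || 0
            });
            this.$('.amount').text(this.model.calculateAmount());
        }
    });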
In the application, we can have a single Backbone.Router object that allows users to navigate through the application by changing the URL in the address bar of the browser. The router object contains a list of available URLs and callbacks. In a callback function, we can trigger the rendering of a specific view associated with a URL. If we want a user to be able to jump from one view to another, we can have him/her either click on regular HTML links associated with a view or navigate through the application programmatically.
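Tying the pieces together, here is a minimal sketch of a router that maps the URL paths listed earlier to the main views; the construction and render calls are illustrative:

    var AppRouter = Backbone.Router.extend({
        routes: {
            'invoice/add':         'editInvoice',
            'invoice/:id/edit':    'editInvoice',
            'invoice/:id/preview': 'previewInvoice'
        },
        editInvoice: function (id) {
            // the same view serves both creating and editing an invoice
            new EditInvoiceFormView({ model: new InvoiceModel({ id: id }) }).render();
        },
        previewInvoice: function (id) {
            new PreviewInvoicePageView({ model: new InvoiceModel({ id: id }) }).render();
        }
    });

    var router = new AppRouter();
    Backbone.history.start(); // start listening for URL changes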


Joomla! VirtueMart: Product List Templates

Packt
26 May 2011
23 min read
The product list page

The product list page is the most important starting page for the shopping life cycle. While the landing page gives some general information regarding the shop, listing the items for sale is the major job of the product list page. Some shop owners even prefer to use the product list page as their home page. "Product list page" is singular, but it is actually a series of pages. The total number of pages in the series varies from store to store and typically depends on the number of categories you have on the site. Each category will have its own page, or even several pages if the category contains many products. Furthermore, the product list page is also used to list the products that relate to a particular manufacturer. It is also used for the keyword search and advanced search, if you enable the product search and advanced search Joomla! modules or the product search Joomla! plugin. To simplify our discussion, we will first restrict ourselves to the study of category listing; the manufacturer listing and search listing are very similar. Looking at a typical category listing, we can identify a number of important elements on a product list page:

- Page header: This includes the category name, category description, and the PDF and print icons. The layout of the page header depends on the page header templates.
- Navigation: This includes the order-by form, the order-by direction button (toggling between ascending and descending), the number-per-page drop-down box, and the page navigation links. Note that the page navigation links can appear both at the top and the bottom. The navigation layout is controlled by the navigation templates.
- Product listing: This is the major item of the page, where the products are listed in a way defined by the product listing style and the number-of-products-per-row settings. Each of the products displayed within the listing is controlled by the core browse template (the core browse template is explained in the section Core browse templates).
- Addendum elements: This includes the recent products, latest products, featured products, and so on. Each of the addenda may have its own template.
- Page footer: This is the element placed at the end of the listing. Right now, there is only one element within the page footer: the page navigation.

As we shall see, the layout of each of these elements is controlled by one or more templates. By customizing any one of these templates, we may be able to change the look of the page completely. We need to distinguish the usage between the terms browse templates and core browse templates. For the purpose of making things clear, we retain the term "browse templates" to refer to all templates within the browse template group. Within this broad template group, there are two subgroups: those which control the layout detail of each individual product (each product in the product listing section) and those which control all the other elements. We refer to them as core and non-core templates, respectively. The core browse templates reside directly under the templates/browse subdirectory. All the non-core templates reside under the subdirectory templates/browse/includes. The difference between the core and non-core templates will become clear in the following explanation.
Looking at our first template

While VirtueMart templates are different from each other, they actually follow a definite pattern. To understand how a template is structured, probably the best way is to look at a sample. Let's take a look at the file browse_1.tpl.php as an example. This is one of the core browse templates. The full text of this file is as follows (with line numbers added):

    1.  <?php if( !defined( '_VALID_MOS' ) && !defined( '_JEXEC' ) ) die( 'Direct Access to '.basename(__FILE__).' is not allowed.' );
    2.  mm_showMyFileName(__FILE__);
    3.  ?>
    4.  <div class="browseProductContainer">
    5.  <h3 class="browseProductTitle"><a title="<?php echo $product_name ?>" href="<?php echo $product_flypage ?>">
    6.  <?php echo $product_name ?></a>
    7.  </h3>
    8.
    9.  <div class="browsePriceContainer">
    10. <?php echo $product_price ?>
    11. </div>
    12.
    13. <div class="browseProductImageContainer">
    14. <script type="text/javascript">//<![CDATA[
    15. document.write('<a href="javascript:void window.open(\'<?php echo $product_full_image ?>\', \'win2\', \'status=no,toolbar=no,scrollbars=yes,titlebar=no,menubar=no,resizable=yes,width=<?php echo $full_image_width ?>,height=<?php echo $full_image_height ?>,directories=no,location=no\');">');
    16. document.write( '<?php echo ps_product::image_tag( $product_thumb_image, 'class="browseProductImage" border="0" title="'.$product_name.'" alt="'.$product_name .'"' ) ?></a>' );
    17. //]]>
    18. </script>
    19. <noscript>
    20. <a href="<?php echo $product_full_image ?>" target="_blank" title="<?php echo $product_name ?>">
    21. <?php echo ps_product::image_tag( $product_thumb_image, 'class="browseProductImage" border="0" title="'.$product_name.'" alt="'.$product_name .'"' ) ?>
    22. </a>
    23. </noscript>
    24. </div>
    25.
    26. <div class="browseRatingContainer">
    27. <?php echo $product_rating ?>
    28. </div>
    29. <div class="browseProductDescription">
    30. <?php echo $product_s_desc ?>&nbsp;
    31. <a href="<?php echo $product_flypage ?>" title="<?php echo $product_details ?>"><br />
    32. <?php echo $product_details ?>...</a>
    33. </div>
    34. <br />
    35. <span class="browseAddToCartContainer">
    36. <?php echo $form_addtocart ?>
    37. </span>
    38.
    39. </div>

HTML fragments

The coding is pretty typical of a VirtueMart template file. You can see that the template is basically an HTML fragment embedded with PHP code. All PHP code is enclosed within the tag <?php ... ?>. In most cases, the PHP code uses the statement echo $field_name to add the field value to the HTML code. We will be looking at those PHP constructs in the next subsection. After parsing the template, the output should be well-formed HTML code. You should note that the template is just an HTML fragment, meaning no <html>, <head>, and <body> tags are needed. As you may recall, VirtueMart is just a Joomla! component that handles the main content. So the HTML fragment produced by the template (together with other code, if any, built up by the page file) is returned to the Joomla! engine for further processing. Typically, the Joomla! engine passes this HTML fragment to the Joomla! template which, in turn, inserts the HTML into a location designated by the template. The final output of the Joomla! template will then be a valid HTML document. The <html>, <head>, and <body> tags are therefore the responsibility of the Joomla! template. Let's look at the code to see how these 39 lines work. Remarks are only needed for the lines with the PHP tag.
All the rest are HTML code that you should be familiar with. Lines 1 to 3 are some housekeeping code following the Joomla!/VirtueMart pattern. They restrict direct access to the code and print out the template filename when debugging. Line 5 outputs the product title, with the product name embedded inside a hot link pointing to the product detail page. Line 10 outputs the product price. Lines 14 to 23 contain a lengthy piece of JavaScript. Its purpose is to output the image thumbnail embedded inside a hot link that opens the full image. We need JavaScript here because we want to ensure the pop-up window size fits the full image size. (Otherwise, the pop-up window size will depend on the default size of the browser.) The window size cannot be controlled by HTML, so we need JavaScript's help. If JavaScript is not enabled in the client browser, we fall back to HTML code to handle the pop-up. Line 27 outputs the product rating, as reviewed by the users. Line 30 outputs the product's short description. Lines 31 to 33 output the product details text within a hot link pointing to the product details page. Line 36 outputs the add-to-cart form, which includes the add-to-cart button, the quantity box, and so on.

PHP crash course

While we are not going to present all the complex program structures of PHP, it will be useful to have a basic understanding of some of its major constructs. You may not fully understand what exactly each line of code does at first, but stick with us for a little while. You will soon grasp the concepts as the patterns appear repeatedly in the exercises we will work on. In the preceding sample template, the PHP coding is pretty simple. (The most complex structure is actually the JavaScript that tries to spit out some HTML on the client browser, not PHP!) We can identify a few basic PHP constructs among the sample code:

Variables: Just like any other programming language, a variable is a basic element in PHP. All PHP variables start with the dollar sign $. A variable name consists of alphanumeric characters and the underscore _ character. The first character must be either alphabetical or _, while numerals can be used after the first character. Note that the space character, together with most punctuation characters, is not allowed in a variable name. Alphabetic characters can be either uppercase or lowercase. Conventionally, VirtueMart uses only lowercase letters for variable names. While both uppercase and lowercase letters can be used without restriction, variable names are case sensitive, meaning that $Product and $product are treated as two different variables. The variable name chosen usually reflects the actual usage of the variable. In the sample template, $product_name and $product_flypage, for example, are typical variables and they represent the value of a product name and a product flypage, respectively. VirtueMart uses _ to separate words within a variable name to make it more readable. Many of the variables are passed into the template by the VirtueMart page file. These variables are called available fields. We will have more to say about that in the next subsection.

Constants: Variables have changing values; you can assign a new value to a variable at any time and it will take up the new value. There are times when you want to keep a value unchanged. You can use a constant for that purpose. A constant name is pretty much the same as a variable name, but you don't need the $ character.
In line 1 of the sample template, both _VALID_MOS and _JEXEC are constants. You probably noticed that they both use capital letters. This is conventional in Joomla! and VirtueMart so that constants stand out within the code. Constants are values that cannot be changed. If you try to give a constant another value, PHP will complain and fail.

Data type: Every variable has a data type associated with it. The data type can be a number, a string (that is, a series of characters, or text), or other possibilities. Since the major purpose of a VirtueMart template is to produce HTML code, we will find that most of the variables we deal with are strings. Often, we will need to write out a literal string in our coding. To distinguish our string from the rest of the coding, we need to enclose the literal string in quotes. We can use single or double quotes. Single and double quotes actually have subtle differences, but we won't go into the details for the time being. According to the VirtueMart programming standard, a literal string should be enclosed in single quotes such as 'product name'. Note that 'product name' is a literal string containing the text product name. It is different from $product_name, which is a variable and may contain a value like 'circular saw' instead.

Operators: You learnt addition and subtraction at school. They are mathematical operations that combine numbers. In PHP, we also have operations to combine two or more variables. The most important one in our exercises is probably string concatenation, symbolized by . (the dot character). String concatenation combines two or more strings together to form a single string. The operation 'hello'.'world' will give a new string 'helloworld'. Note that there is no space character between the words. To make sure the words are separated by a space, we need two concatenations such as 'hello'.' '.'world', which will give you the new string 'hello world'.

Functions: Often, we will find that the same pattern of program code is used repeatedly to produce a given result. In PHP, we can group such code together to form a function. Each function has a name and can be invoked using the following syntax: function_name(parameters). Here, parameters are values that need to be passed into the function to evaluate the result. PHP has lots of functions that deal with strings. The function strlen($product_name), for example, will return the number of characters in the string variable $product_name. If $product_name contains the string 'circular saw', strlen($product_name) will return 12. (You probably recognize that strlen is just a short form for string length.) We will learn some more functions along the way.

echo statements: This is the most common statement in the template. echo is used to send the value of a string to the output buffer. So echo $product_name literally means "print out the value of the variable $product_name to the output buffer". Sometimes, the echo statement is mistaken for a function, so you may be tempted to write something like echo($product_name) instead of echo $product_name. While this is acceptable in PHP most of the time, the parentheses are actually not needed. (You may be aware that sometimes the print command is used to send data to the output buffer in place of echo. While print and echo seem interchangeable, echo runs faster than print and so should be the preferred choice for outputting data.)
if statements: The if statement is a construct that tests a condition before taking a certain action. The action is taken only if the condition evaluates to true. The syntax of an if statement is as follows: if (condition) action, where the condition is an expression to test and the action is a statement or a series of statements to be performed if the expression evaluates to true. The expression can be a true-false type condition (such as $i>0), a mathematical expression (such as $i+$j), or some kind of complex operation involving functions. In any case, it is considered true if it evaluates to a nonzero number or a non-empty string.

Statement separator: One important PHP construct we usually overlook is the statement separator ; (the semicolon). We need it to separate two or more statements, even if they are on lines of their own. In the preceding sample code, we have a ";" at the end of lines 1 and 2. This ; is very important. Without it, the PHP parser would get confused and would probably refuse to execute, giving you a fatal error.

These are just a few constructs of PHP for the time being. We will have more to say about PHP as we encounter more constructs along the way.

Available fields

Since many of the variables in our template code are passed down from the VirtueMart page file, one natural question to ask is "What variables can we use in our code?". Variables that we can use in a template are known as available fields. The available fields we have inside a template vary with the template itself. A field which is available in the flypage template may not be available in a browse template. Even among the browse templates, there may be differences. To get the most out of our customization effort on a template, it is essential to be aware of the available fields in each template. However, there are so many available fields in a template that it would not be wise to list them all here. For now, it is useful to distinguish four different types of available fields:

Database fields: Most of the data we have comes from the database. Often, the VirtueMart page file just passes those fields directly to the template without changing anything. These are called database fields. The same data you put into the database from the backend will be at your fingertips. Examples are $product_id and $product_sku.

Formatted database fields: Sometimes the data you store in the database is raw data and you need a different format in the presentation. VirtueMart does some formatting on the data before passing it to the template. An example is $product_available_date, which is stored in the database as an integer. However, you need to display it in a form that is appropriate to your culture, such as yyyy-mm-dd, mm-dd-yyyy, and so on.

Processed data: Sometimes complex logic is needed before the data is useful to the template. A typical example is $product_price. Do not expect this to be a simple number or a formatted number with the currency symbol added. Actually, the product price depends on a number of factors such as whether the user has logged in, the shopper group, discounts, tax, and so on. So the $product_price in the frontend may be different from the value you entered in the backend. Sometimes it is a formatted number and sometimes it is a message such as "call for price". Another example is $product_thumb_image.
You may expect $product_thumb_image to be just the file location you see in the backend, but its value depends on whether it is an off-site image, whether the image exists, and whether you want the image to be resized from the full image.

VirtueMart class object: In certain cases, the VirtueMart developers decided there are too many possibilities for the use of a piece of data, so they let the template designer control what to do with it. In those cases, VirtueMart simply passes a class object to the template. An example of this is $ps_product. There are lots of opportunities to make good use of these class objects. However, you will need to understand how they can be properly used and bear with all the complexities to make them work.

Core browse templates

Product listing is unarguably the most important element on the product list page. There are two major factors that affect the product listing: the product listing style and the core browse template. Core browse templates are used to define the layout and styles for each product in the product list. There are actually six different core browse templates in the default theme. We can define a default core browse template for general use and also a specific template for each of the product categories. If you take a closer look at the templates, you will find that they are pretty much the same, except the last one, which is for creating a PDF file. We already saw the detailed coding in browse_1.php, so we don't need to repeat it here. Let's start on some exercises with the browse_1 template right away.

Exercise 3.1: Adding an Ask-Seller link to the browse page

We know that on the product detail page there is an Ask-Seller link which brings up a form so that a shopper can ask a question about the product. This link is not available on the product list page. In this exercise, we will add a similar link to the browse page. While we could use the exact same link here, we purposely use a simpler way to do it to make it easier to understand.

Steps

Open your favorite text editor. Navigate to the VirtueMart frontend root. Open the file themes/default/templates/browse/browse_1.php. Insert the following line of code after line 5:

<a href="index.php?option=com_virtuemart&page=shop.ask&product_id=<?php echo $product_id ?>">Ask a question about this product</a><br />

Save the file and upload it to your server. Point your browser to any VirtueMart browse page that uses the browse_1.php template, and you should see the Ask-Seller link added to every product. (This exercise is done on the browse_1 template only. If you browse to the product list of an individual category, the new styles will show only if the category is using the browse_1 template. The same applies to most of the following exercises.)

Notes

The Ask-Seller link is an <a> tag with the href pointing to the Ask Seller page. The href is built using three parameters:

option=com_virtuemart points to the VirtueMart component.

page=shop.ask points to the actual Ask Seller page. By changing the page parameter, we can point the shopper to any of the VirtueMart pages.

product_id=<?php echo $product_id ?> provides the product ID to the Ask Seller page so that it knows which product the shopper has questions on. We need to use a variable because the product_id varies from product to product.

In the previous code, we purposely hardcoded the link as a relative URL to make the code simpler. This works unless SEF is enabled.
To cater for SEF, a more generic way of creating the link would be needed. The text Ask a question about this product is static text. Feel free to change it to anything you think appropriate. This will not affect the function of the link. <br /> is needed to insert a line break after the link. Exercise 3.1 demonstrates the basic technique for modifying a template. You can add static text to a template in whatever way you want. If you need variable data, simply insert the appropriate echo statement at the required place.

Exercise 3.2: Changing core browse template CSS

One major task in customizing a template is changing the style of HTML elements. In this exercise, we are going to add some CSS styles to the core browse template.

Preparation

This exercise is built upon the browse_1.php file we modified in Exercise 3.1. If you start from the original template file, the exact line numbers may differ.

Steps

Open your favorite text editor. Navigate to the VirtueMart frontend root. Open the file themes/default/templates/browse/browse_1.php. At line 4 (that is, the top of the file), insert the following lines of code:

<?php if (!defined('VM_CUSTOM_CSS')) { define('VM_CUSTOM_CSS', 1); ?>
<style>
.browseProductContainer {
border: 1px solid #999;
padding: 5px;
background: #eee;
margin: 5px;
}
</style>
<?php } ?>

Save the file and upload it to your server. Point your browser to any VirtueMart browse page that uses the browse_1.php template. You should see the product list now with the border, margin, and padding added.

Notes

We added a stylesheet for the class browseProductContainer in the template file. The stylesheet will be included as part of the HTML output to the browser. The core browse template is applied once for each product, so any coding added to it will be repeated for each product. To ensure that the stylesheet is included only once in the HTML, we define a constant named VM_CUSTOM_CSS the first time the stylesheet is included. The if condition at the start of the coding tests for the existence of the constant VM_CUSTOM_CSS. When the code is executed a second time, VM_CUSTOM_CSS is already defined and so the statements within the braces are skipped. Exercise 3.2 demonstrates another basic technique for modifying a template. The technique applies not only to a CSS stylesheet, but to all coding in general. It can be used for JavaScript inclusion, and for any other coding that needs to appear only once in the HTML.

Exercise 3.3: Moving and modifying data

In this exercise, we are going to experiment with moving data around and adding some new data fields that are available to the template.

Preparation

This exercise is built upon the browse_1.php file we modified in Exercise 3.2. If you start from the original template file, the exact line numbers may differ.

Steps

Open your favorite text editor. Navigate to the VirtueMart frontend root. Open the file themes/default/templates/browse/browse_1.php. At line 40, insert the following line of code:

<br />Weight: <?php echo number_format($product_weight, 1) . ' ' . $product_weight_uom ?><br />

Move the Ask-Seller link from line 13 to line 47, that is, after the closing </span> tag for form_addtocart. Move the <br /> tag from the end of line 47 to the beginning of the line, that is, the line will become:

<br /><a href="index.php?option=com_virtuemart&page=shop.ask&product_id=<?php echo $product_id ?>">Ask a question about this product</a>

Save the file and upload it to your server.
Point your browser to any VirtueMart browse page that uses the browse_1.php template, and you should see that the Ask-Seller link has moved to the end of the display, and the product weight and unit have been added to every product.

Notes

In this exercise, we performed two modifications: we moved the Ask-Seller link to the bottom instead of the top, and we added the product_weight field to the browse template. Actually, the order of appearance of the product fields can be changed at will. You can move fields around to fit your requirements in a similar way. To add new data to the display, you first need to determine what you want to show and whether the data is within the list of available fields. Since we know $product_weight and $product_weight_uom (uom stands for unit of measure) are available, we can simply use concatenation to build the final text for the output. The weight is rounded off to 1 decimal place using the number_format() function to make it look nicer. You can change the number of decimal places by changing the second parameter of the number_format() function.
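The three exercises all apply the same small set of techniques. As a recap, the following is a minimal, hypothetical template fragment that combines them: the include-once constant guard from Exercise 3.2, static text mixed with echo statements, and the number formatting from Exercise 3.3. The available fields $product_id, $product_weight, and $product_weight_uom are the ones used above; the constant name, CSS class, and label text are arbitrary choices for this sketch, not part of VirtueMart itself.

<?php
// Emit the stylesheet only once, even though this template runs once per product.
if (!defined('VM_CUSTOM_EXTRAS')) {
    define('VM_CUSTOM_EXTRAS', 1);
    echo '<style>.productExtras { font-size: 11px; color: #666; }</style>';
}

// Static text concatenated with available fields.
echo '<div class="productExtras">';
echo 'Weight: ' . number_format($product_weight, 1) . ' ' . $product_weight_uom;
echo ' | <a href="index.php?option=com_virtuemart&page=shop.ask&product_id='
    . $product_id . '">Ask a question about this product</a>';
echo '</div>';
?>

As in Exercise 3.1, the link is hardcoded as a relative URL, so the same SEF caveat applies.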


Building Queries

Understanding DQL

DQL is the acronym of Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but it is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit this website: http://www.hibernate.org/. Learn more about domain-specific languages at: http://en.wikipedia.org/wiki/Domain-specific_language

To better understand what this means, let's run our first DQL query. Doctrine command-line tools are as versatile as a Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays its result. Use it to retrieve the title and all the comments of the post with 1 as its identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

A fully qualified entity class name is used in the FROM clause as the root of the query.

All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause.

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause. Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language for retrieving object graphs. Internally, Doctrine parses the DQL queries, generates the corresponding SQL queries, executes them through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.

Until now, we have only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated. As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just as entities are related to database rows, entity repositories are related to database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. This hides the ORM from other components of the application and makes it easier to re-use, refactor, and optimize the queries. Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: http://martinfowler.com/eaaCatalog/tableDataGateway.html

A base repository, available for every entity, provides useful methods for managing the entities in the following manner:

find($id): It returns the entity with $id as an identifier, or null if there is none. It is used internally by the find() method of the Entity Manager.
findAll(): It retrieves an array that contains all the entities in this repository.

findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains the entities matching all the criteria passed in the first parameter, ordered by the second parameter.

findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy(), but retrieves only the first entity, or null if none of the entities match the criteria.

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow this pattern: findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']). This feature uses the magic __call() PHP method. For more details visit the following website: http://php.net/manual/en/language.oop5.overloading.php#object.call

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them for the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution to this is to create a custom repository with extra methods for executing our own queries. We will write a custom method that collates comments in the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation. Kindly perform the following steps to create a custom entity repository:

Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation like the following line of code:

@Entity(repositoryClass="PostRepository")

Doctrine command-line tools also provide an entity repository generator. Type the following command to use it:

php vendor/bin/doctrine.php orm:generate:repositories src/

Open this new empty custom repository, which we just generated in the PostRepository.php file at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

/**
 * Finds a post with its comments
 *
 * @param int $id
 * @return Post
 */
public function findWithComments($id)
{
    return $this
        ->createQueryBuilder('p')
        ->addSelect('c')
        ->leftJoin('p.comments', 'c')
        ->where('p.id = :id')
        ->orderBy('c.publicationDate', 'ASC')
        ->setParameter('id', $id)
        ->getQuery()
        ->getOneOrNullResult()
    ;
}

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL queries through the getDql() method (useful for debugging) or directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state.
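As a quick, hypothetical illustration of that fluent API, the following sketch builds a simple query against the Blog\Entity\Post entity used throughout this article and prints the generated DQL. The $entityManager instance is assumed to be set up as in the rest of the blog app; publicationDate is the Post property used elsewhere in this article.

<?php
// A minimal QueryBuilder sketch for inspecting the generated DQL.
$qb = $entityManager
    ->getRepository('Blog\Entity\Post')
    ->createQueryBuilder('p')              // 'p' becomes the root alias
    ->where('p.publicationDate <= :now')   // prepared-statement parameter
    ->setParameter('now', new \DateTime());

// Print the DQL string that Doctrine will parse; call
// $qb->getQuery()->getResult() to actually execute it.
echo $qb->getDql();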
The full API and states of the DQL query are documented on the following website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html

We will give an in-depth explanation of the findWithComments() method that we created in the PostRepository class. Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The createQueryBuilder() method takes a string as a parameter. This string will be used as an alias for the main entity class. By default, all the fields of the main entity class are selected and no clauses other than SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class). Unlike the SQL JOIN clause, the DQL JOIN clause needs no keywords like ON or USING: Doctrine automatically knows whether a join table or a foreign-key column must be used, and automatically fetches the entities associated with the main entity.

The addSelect() call appends the comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL Injection attacks are automatically avoided.

SQL Injection attacks are a way to execute malicious SQL queries using user inputs that have not been escaped. Let's take the following example of a bad DQL query that checks if a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine. If someone types the following username:

" OR "a"="a

the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area. The proper way is to use the following code:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role');
$query->setParameters([
    'username' => $username,
    'role' => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders the results by the publication date of the comments, oldest first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (if needed; it will take the query from its cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query.
This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes it, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazing-fast supported systems (APC, Memcache, XCache, or Redis) as shown on the following website: http://docs.doctrine-project.org/en/latest/reference/caching.html

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);
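To round off, here is a short sketch of the built-in repository methods described at the beginning of this article, applied to the same Post entity. The title and publicationDate properties are the ones appearing in the generated SQL above; the criteria values are made up for the example.

<?php
$repository = $entityManager->getRepository('Blog\Entity\Post');

// find(): a single entity by identifier, or null.
$post = $repository->find(1);

// findBy(): entities matching criteria, ordered by the second argument.
$posts = $repository->findBy(
    ['title' => 'My title'],
    ['publicationDate' => 'DESC']
);

// findOneBy() and the magic finder are two ways to get the first match.
$first = $repository->findOneBy(['title' => 'My title']);
$same  = $repository->findOneByTitle('My title');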

Navigating Pages in XAML Browser Applications

Page navigation is an important part of application development, and Windows Presentation Foundation supports navigation in two types of applications: the standalone application and the XAML browser application, also known as an XBAP application. In addition to navigation between pages in the application, it also supports navigation to other types of content such as HTML, objects, and XAML files.

Implementing a page

The Page class is one of the classes supported by WPF, and you can use hyperlinks declaratively to go from page to page. You can also go from page to page programmatically using the NavigationService. In this article, navigation using hyperlinks will be described. A page can be viewed as a package that consists of content of many different kinds, such as .NET framework objects, HTML, and XAML. Using the Page class you can implement a page, which is a navigable object with XAML content. You implement a page declaratively by using the following markup:

<Page />

Page is the root element of a page and requires the XAML namespace in its declaration, as shown above. The Page can contain only one child element, so the markup can be as simple as:

<Page >
<Page.Content>
<!-- Page Content -->
Hello, XAML
</Page.Content>
</Page>

Since a Page has content, you can access any content on the page by the dot notation Page.Content, as shown above.

Creating a Page in Visual Studio 2008

When you use Visual Studio 2008 you can create a WPF Browser Application in C# as shown in the New Project window (opened with File | New Project...). When you create the project as above, you will be creating a project which has a Page element as shown in the figure. Page1.xaml has both a preview window and XAML markup in a tabbed arrangement, as shown here. Page1 is a named page object with a class [x:Class="Page1"]. You can review the content shown in the earlier snippet by taking out the extra namespace from the page shown here. The contents of a page can consist of other objects, such as controls, which can be used for various activities, one of which is navigating from page to page. These controls can raise events which can be implemented both as markup and code that supports the page (also known as code-behind). The page created by Visual Studio provides the necessary configuration to interact with the page, as shown in Page1.xaml in the above figure and the code-behind page shown in the figure below. The default page created by Visual Studio fulfils the three necessary criteria for a page with which you can interact by providing the following:

The markup attribute x:Class="Page1" enables the MSBuild engine to build a Page1 partial class, which has the same name as the value of the x:Class attribute, namely "Page1".

This requires the inclusion of a namespace, provided by the second namespace declaration.

The generated class should implement an InitializeComponent method, and the default page has this implemented as shown above.

Configuring a Start Page

When the application starts you need to specify that the browser should bring up a designated page. In order to support this, the browser application requires an infrastructure to host the application, and WPF's Application class provides the necessary information in the form of an ApplicationDefinition file. In the XBAP application we have created, you may review this information in the Application definition as shown here. You can specify the start page by setting it in the Application definition, as shown in the next figure.
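The figure itself is not reproduced here, but a minimal Application definition of this kind typically looks like the following sketch. The class and page names are just the Visual Studio defaults, not requirements; StartupUri is the property that designates the start page.

<!-- App.xaml: a minimal ApplicationDefinition sketch -->
<Application x:Class="WpfBrowserApplication1.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             StartupUri="Page1.xaml">
    <Application.Resources>
        <!-- application-wide resources go here -->
    </Application.Resources>
</Application>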
Without the StartupUri the page does not get displayed. The StartupUri can also be set in a Startup event handler.

Page appearance in the browser

As the application starts up, you may want to control how the page hosted in the window appears. You can set certain properties declaratively: the WindowTitle, WindowWidth, WindowHeight, and Title can all be set. The first three items are what you will see when the page gets displayed. For example, consider the following XAML markup:

<Page x:Class="WPFSharp_Nav01.Page1"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      WindowWidth="500" WindowHeight="200"
      Title="Pagex" Background="Blue"
      WindowTitle="What is this?">
<TextBox Width="400" Height="25">Hello, XAML Browser App</TextBox>
</Page>

The page gets displayed as shown when the application starts up. The WindowWidth is the outer width and the WindowHeight is the outer height of the browser window.
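The article began by noting that hyperlinks are the declarative way to navigate between pages. As a closing sketch, this is what such a link typically looks like inside a page; Page2.xaml stands for a hypothetical second page in the same project.

<!-- A Hyperlink navigating to another page in the project -->
<Page x:Class="WPFSharp_Nav01.Page1"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      WindowTitle="Navigation demo">
    <TextBlock>
        <Hyperlink NavigateUri="Page2.xaml">Go to page 2</Hyperlink>
    </TextBlock>
</Page>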

Creating different font files and using web fonts

Creating different font files

In this recipe, we will learn how to create or get fonts and how to generate the different font formats used by different browsers (Embedded OpenType, OpenType, TrueType Font, Web Open Font Format, and SVG fonts).

Getting ready

To get the original file of the font created during this recipe, in addition to the generated font formats and the full source code of the FontCreation project, please refer to the recipe2 project folder.

How to do it...

The following steps are performed for creating the different font files:

Firstly, we will get an original TTF font file. There are two different ways to get fonts:

The first method is by downloading one from specialized websites. Both free and commercial solutions can be found with a wide variety of beautiful fonts. The following are a few sites for downloading free fonts: Google Fonts, Font Squirrel, Dafont, ffonts, Jokal, fontzone, STIX, Fontex, and so on. Here are a few sites for downloading commercial fonts: Typekit, Fontdeck, Fontspring, and so on. We will consider the example of Fontex, as shown in the following screenshot. There is a variety of free fonts. You can visit the website at http://www.fontex.org/.

The second method is by creating your own font and then generating a TTF file. There are a lot of font generators on the Web. We can find online generators, or follow the professionals by scanning handwritten typography and importing it into Adobe Illustrator to convert it into vector-based letters or symbols. For newbies, I recommend trying FontStruct (http://fontstruct.com). It is a WYSIWYG Flash editor that will help you create your first font file, as shown in the following screenshot:

As you can see, we were trying to create the letter S using a grid and some different forms. After completing the font creation, we can preview it and then download the TTF file. The file is in the recipe2 project folder. The following screenshot is an example of a font we created on the run:

Now we have to generate the rest of the file formats in order to ensure maximum compatibility with the common browsers. We highly recommend the use of the Font Squirrel webfont generator (http://www.fontsquirrel.com/tools/webfont-generator). This online tool helps to create fonts for @font-face by generating the different font formats. All we need to do is to upload the original file (optionally adding some font variants: bold, italic, or bold-italic), select the output formats, add some optimizations, and finally download the package.
It is shown in the following screenshot:

The following code shows how to use this font:

<!DOCTYPE html>
<html>
<head>
<title>My first @font-face demo</title>
<style type="text/css">
@font-face {
font-family: 'font_testregular';
src: url('font_test-webfont.eot');
src: url('font_test-webfont.eot?#iefix') format('embedded-opentype'),
url('font_test-webfont.woff') format('woff'),
url('font_test-webfont.ttf') format('truetype'),
url('font_test-webfont.svg#font_testregular') format('svg');
font-weight: normal;
font-style: normal;
}

Normal font usage:

h1, p {
font-family: 'font_testregular', Helvetica, Arial, sans-serif;
}
h1 {
font-size: 45px;
}
p:first-letter {
font-size: 100px;
text-decoration: wave;
}
p {
font-size: 18px;
line-height: 27px;
}
</style>

Font usage in canvas:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script language="javascript" type="text/javascript">
var x = 30, y = 60;
function generate(){
var canvas = $('canvas')[0],
ctx = canvas.getContext('2d');
var t = 'font_testregular';
var c = 'red';
var v = ' sample text via canvas';
ctx.font = '52px "'+t+'"';
ctx.fillStyle = c;
ctx.fillText(v, x, y);
}
</script>
</head>
<body onload="generate();">
<h1>Header sample</h1>
<p>Sample text with lettrine effect</p>
<canvas height="800px" width="500px">
Your browser does not support the CANVAS element.
Try the latest Firefox, Google Chrome, Safari or Opera.
</canvas>
</body>
</html>

How it works...

This recipe takes us through getting an original TTF file:

Font download: When downloading a font (either free or commercial), we have to pay close attention to the terms of use. Sometimes, you are not allowed to use these fonts on the web and are only allowed to use them locally.

Font creation: During this process, we have to pay attention to some directives. We have to create glyphs for all the needed alphabets (uppercase and lowercase), numbers, and symbols to avoid font incompatibility. We have to take care of the spacing between glyphs and, eventually, variations and ligatures. A special creation process is reserved for right-to-left written languages.

Font format generation: Font Squirrel is a very good online tool for generating the most common formats to handle cross-browser compatibility. It is recommended that we optimize the font ourselves via expert mode. We have the possibility of fixing some issues during font creation such as missing glyphs, x-height matching, and glyph spacing.

Font usage: We will go through the following font usage:

Normal font usage: We used the same method as already adopted via font-family; web-safe fonts are also applied:

h1, p {
font-family: 'font_testregular', Helvetica, Arial, sans-serif;
}

Font usage in canvas: The canvas is an HTML5 tag that dynamically renders bitmap images via scripts, for creating 2D shapes. In order to generate this image based on fonts, we will create the canvas tag first. An alternative text will be displayed if canvas is not supported by the browser:

<canvas height="800px" width="500px">
Your browser does not support the CANVAS element.
Try the latest Firefox, Google Chrome, Safari or Opera.
</canvas>

We will now use the jQuery library in order to generate the canvas output. An onload function will be initiated to create the content of this tag:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
In the following function, we create a variable ctx, which is a 2D context of the canvas obtained via canvas.getContext('2d'). We also define the font-family (using the variable t), the font-size, the text to display (using the variable v), and the color (using the variable c). These properties are used as follows:

<script language="javascript" type="text/javascript">
var x = 30, y = 60;
function generate(){
var canvas = $('canvas')[0],
ctx = canvas.getContext('2d');
var t = 'font_testregular';
var c = 'red';
var v = ' sample text via canvas';

This is for the font-size and family. Here the font-size is 52px and the font-family is font_testregular:

ctx.font = '52px "'+t+'"';

This is for the color, set by fillStyle:

ctx.fillStyle = c;

Here we establish both the text to display and the axis coordinates, where x is the horizontal position and y is the vertical one:

ctx.fillText(v, x, y);

Using Web fonts

In this recipe, you will learn how to use fonts hosted on distant servers, for many reasons such as support services and special loading scripts. A lot of solutions are widely available on the web, such as Typekit, Google fonts, Ascender, Fonts.com web fonts, and Fontdeck. In this task, we will be using Google fonts and its special open source JavaScript library, the WebFont loader.

Getting ready

Please refer to the WebFonts project to get the full source code.

How to do it...

We will go through four steps:

Let us configure the link tag:

<link rel="stylesheet" id="linker" type="text/css" href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

Then we will set up the WebFont loader:

<script type="text/javascript">
WebFontConfig = {
google: {
families: [ 'Tangerine' ]
}
};
(function() {
var wf = document.createElement('script');
wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
'://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
wf.type = 'text/javascript';
wf.async = 'true';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(wf, s);
})();
</script>
<style type="text/css">
.wf-loading p#firstp {font-family: serif}
.wf-inactive p#firstp {font-family: serif}
.wf-active p#firstp {font-family: 'Tangerine', serif}

Next we will write the import command:

@import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

Then we will cover font usage:

h1 {
font-size: 45px;
font-family: "Bigelow Rules";
}
p {
font-family: "Mr De Haviland";
font-size: 40px;
text-align: justify;
color: blue;
padding: 0 5px;
}
</style>
</head>
<body>
<div id="container">
<h1>This H1 tag's font was used via @import command</h1>
<p>This font was imported via a Stylesheet link</p>
<p id="firstp">This font was created via WebFont loader and managed by wf, a script generated from webfont.js.<br />
Loading time will be managed by the CSS properties: <i>.wf-loading, .wf-inactive and .wf-active</i></p>
</div>
</body>
</html>

How it works...

In this recipe, and for educational purposes, we used the following ways to embed the font in the source code: the link tag, the WebFont loader, and the import command.

The link tag: A simple link tag to a style sheet is used, referring to the address already created:

<link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

The WebFont loader: It is a JavaScript library developed by Google and Typekit. It grants advanced control options over the font loading process and exceptions, and it lets you use multiple web font providers. In the following script, we can identify the font we used, Tangerine, and the link to the predefined address of the Google APIs under the google key:

WebFontConfig = {
google: { families: [ 'Tangerine' ] }
};

We now create wf, which is an asynchronous script element.
This instance is issued from the Ajax Google API:

var wf = document.createElement('script');
wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
'://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
wf.type = 'text/javascript';
wf.async = 'true';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(wf, s);
})();

We can have control over the fonts during and after loading by using specific class names. In this particular case, only the p tag with the ID firstp will be processed during and after font loading. During loading, we use the class .wf-loading. We can use a safe font (for example, serif) rather than the browser's default font until loading is complete, as follows:

.wf-loading p#firstp {
font-family: serif;
}

After loading is complete, we will usually use the font that we were importing earlier. We can also add a safe font for older browsers:

.wf-active p#firstp {
font-family: 'Tangerine', serif;
}

Loading failure: In case we fail to load the font, we can specify a safe font to avoid falling back to the browser's default font:

.wf-inactive p#firstp {
font-family: serif;
}

The import command: It is the easiest way to link to the fonts:

@import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

Font usage: We use the fonts as we already did, via the font-family property:

h1 {
font-family: "Bigelow Rules";
}
p {
font-family: "Mr De Haviland";
}

There's more...

The WebFont loader has the ability to embed fonts from multiple web font providers. It has some predefined providers in the script, such as Google, Typekit, Ascender, Fonts.com web fonts, and Fontdeck. For example, the following is the specific source code for Typekit and Ascender:

WebFontConfig = {
typekit: {id: 'TypekitId'}
};
WebFontConfig = {
ascender: {
key: 'AscenderKey',
families: ['AscenderSans:bold,bolditalic,italic,regular']
}
};

For the font providers that are not listed above, a custom module can handle the loading of the specific style sheet:

WebFontConfig = {
custom: {
families: ['OneFont', 'AnotherFont'],
urls: ['http://myotherwebfontprovider.com/stylesheet1.css',
'http://yetanotherwebfontprovider.com/stylesheet2.css']
}
};

For more details and options of the WebFont loader script, you can visit the following link: https://developers.google.com/fonts/docs/webfont_loader

To download this API, you may access the following URL: https://github.com/typekit/webfontloader

How to generate the link to the font?

The URL used in each of these methods (the link tag, the WebFont loader, and the import command) is composed of the Google fonts API base URL (http://fonts.googleapis.com/css) and the family parameter including one or more font names, such as ?family=Tangerine. Multiple fonts are separated with a pipe character (|) as follows:

?family=Tangerine|Inconsolata|Droid+Sans

Optionally, we can add subsets or also specify a style for each font:

Cantarell:italic|Droid+Serif:bold&subset=latin

Browser-dependent output

The Google fonts API serves a generated style sheet specific to the client, via the browser's request. The response is relative to the browser. For example, the output for Firefox will be:

@font-face {
font-family: 'Inconsolata';
src: local('Inconsolata'),
url('http://themes.googleusercontent.com/fonts/font?kit=J_eeEGgHN8Gk3Eud0dz8jw') format('truetype');
}

This method lowers the loading time because the generated style sheet is tailored to the client's browser. No multiformat font files are needed because the Google API generates them automatically.
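Putting those pieces together, a single link tag requesting several families, a style, and a subset might look like the following sketch (the font choices here are arbitrary):

<!-- Three families in one request; Droid Sans in bold, latin subset only -->
<link rel="stylesheet" type="text/css"
href="http://fonts.googleapis.com/css?family=Tangerine|Inconsolata|Droid+Sans:bold&subset=latin">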
Summary

In this article, we learned how to create different font formats, such as Embedded OpenType, OpenType, TrueType Font, Web Open Font Format, and SVG fonts, and how to use web font services such as Typekit, Google fonts, Ascender, Fonts.com web fonts, and Fontdeck.

Resources for Article:

Further resources on this subject:

So, what is Markdown? [Article]
Building HTML5 Pages from Scratch [Article]
HTML5: Generic Containers [Article]


Adding an Event Calendar to your Joomla! Site using JEvents

Getting ready...

There are many extensions for adding event calendars to Joomla!. However, JEvents is the most feature-rich and popular extension. Download this extension from http://www.jevents.net/jevents-download and install it from the Extensions | Install/Uninstall screen.

How to do it...

After installing JEvents, follow these steps to add the calendar:

From the Joomla! administration panel, select Components | JEvents. This will show you the JEvents:: Control Panel screen.

Click on the Manage Categories icon in the Control Panel screen. This will show you the Categories screen, listing all the available categories, if any.

Click on the New button in the toolbar. It will show you a form similar to that in the following screenshot:

Enter a Title, select the parent category (if any), then select the Access Level and Administrator, select Yes in the Published field, and type a brief description of the category. From the Event Colour field, choose the color for the events in this category. Then click on the Save icon in the toolbar. Repeat these steps to create another category.

To go back to the Control Panel page, click on the CPanel icon on the Categories screen. Then click on the Manage Calendars icon in the Control Panel screen. That shows the Calendars screen. Click on the New button. This shows you a form similar to the one in the following screenshot:

Type a Unique Identifier (name) for the calendar, select the default category and Access Level, select No for the Is Default field, and click on Create Calendar from Scratch. This creates a new calendar.

Go back to the Control Panel page, and click on the Manage Events icon. This will show you the Events screen. Click on the New button in the toolbar, and you will get the Edit Event screen.

On the Edit Event screen, first select a calendar and then type the subject of the event. Then select a category, a color for the event, and the access level. In the Activity field, briefly describe the activity, and then fill in the Location, Contact, and Extra Info fields. Then click on the Calendar tab.


Learn Cinder Basics – Now

What is creative coding

This is a really short introduction to what creative coding is, and I'm sure you can find out much more about this topic on the Internet. Nevertheless, I will try to explain how it looks from my perspective. Creative coding is a relatively new term for a field that combines coding and design. The central part of this term might be the "coding" one: to become a creative coder, you need to know how to write code and some other things about programming in general. The other part, the "creative" one, covers design and all the other things that can be combined with coding. Being skilled in coding and design at the same time lets you express your ideas as working prototypes for interface designs, art installations, phone applications, and other fields. It can save the time and effort you would spend explaining your ideas to someone else so that he/she could help you. The creative coding approach may not work so well in large projects, unless more than one creative coder is involved. A lot of new tools that make programming more accessible have emerged during the last few years. All of them are easy to use, but usually the less complicated a tool is, the less powerful it is, and vice versa.

A few words about Cinder

So we are up to some Cinder coding! Cinder is one of the most professional and powerful creative coding frameworks that you can get for free on the Internet. It can help you if you are creating a really complicated interactive real-time audio-visual piece, because it uses C++, one of the most popular and powerful low-level programming languages out there, and relies on a minimum of third-party code libraries. The creators of Cinder also try to use all the newest C++ language features, even those that are not standardized yet (but soon will be), by using the so-called Boost libraries. This book is not intended as an A-to-Z guide about Cinder, the C++ programming language, or the areas of mathematics involved. It is a short introduction for those of us who have been working with similar frameworks or tools and know some programming already. As Cinder relies on C++, the more we know about it, the better. Knowledge of ActionScript, Java, or even JavaScript will help you understand what is going on here.

Introducing the 3D space

To use Cinder with 3D we need to understand a bit about 3D computer graphics. The first thing we need to know is that 3D graphics are created in a three-dimensional space that exists somewhere in the computer and is transformed into a two-dimensional image that can be displayed on our computer screen afterwards. Usually there is a projection (frustum) that has properties similar to those of cameras in the real world. The frustum takes care of rendering all the 3D objects that are visible inside it. It is responsible for creating the 2D image that we see on the screen.

As you can see in the preceding figure, all objects inside the frustum are rendered on the screen. Objects outside the view frustum are ignored. OpenGL (which is used for drawing in Cinder) relies on the so-called rendering pipeline to map the 3D coordinates of the objects to the 2D screen coordinates. Three kinds of matrices are used for this process: the model, view, and projection matrices.
The model matrix maps the 3D object's local coordinates to the world (or global) space, the view matrix maps them to the camera space, and finally the projection matrix takes care of the mapping to the 2D screen space. Older versions of OpenGL combine the model and view matrices into one: the modelview matrix. The coordinate system in Cinder starts from the top-left corner of the screen. Any object placed there has the coordinates 0, 0, 0 (these are the values of x, y, and z respectively). The x axis extends to the right, the y axis to the bottom, and the z axis extends towards the viewer (us), as shown in the following figure:

Drawing in 3D

Let's try to draw something by taking into account that there is a third dimension. Create another project by using TinderBox and name it Basic3D. Open the project file (xcode/Basic3D.xcodeproj on Mac or vc10\Basic3D.sln on Windows). Open Basic3DApp.cpp in the editor and navigate to the draw() method implementation. Just after the gl::clear() method add the following line to draw a cube:

gl::drawCube( Vec3f(0,0,0), Vec3f(100,100,100) );

The first parameter defines the position of the center of the cube, the second defines its size. Note that we use Vec3f variables to define position and size within three dimensions (x, y, and z respectively). Compile and run the project. This will draw a solid cube at the top-left corner of the screen. We are able to see just one quarter of it because the center of the cube is the reference point. Let's move it to the center of the screen by transforming the previous line as follows:

gl::drawCube( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0), Vec3f(100,100,100) );

Now we are positioning the cube in the middle of the screen, no matter what the window's width or height is, because we pass half of the window's width (getWindowWidth()/2) and half of the window's height (getWindowHeight()/2) as the values for the x and y coordinates of the cube's location. Compile and run the project to see the result. Play around with the size parameters to understand the logic behind it. We may want to rotate the cube a bit. There is a built-in rotate() function that we can use. One thing we have to remember, though, is that we have to use it before drawing the object. So add the following line before gl::drawCube():

gl::rotate( Vec3f(0,1,0) );

Compile and run the project. You should see a strange rotation animation around the y axis. The problem here is that the rotate() function rotates the whole 3D world of our application, including the object in it, and it does so by taking into account the scene coordinates. As the center of the 3D world (the place where all axes cross and are zero) is in the top-left corner, all rotation happens around this point. To change that we have to use the translate() function. It is used to move the scene (or canvas) before we call rotate() or drawCube(). To make our cube rotate around the center of the screen, we have to perform the following steps:

Use the translate() function to translate the 3D world to the center of the screen.

Use the rotate() function to rotate the 3D world.

Draw the object (drawCube()).

Use the translate() function to translate the scene back.

We have to use the translate() function to translate the scene back to its original location because each time we call translate(), the values are added instead of being replaced.
In code it should look similar to the following:

gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
gl::rotate( Vec3f::yAxis()*1 );
gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
gl::translate( Vec3f(-getWindowWidth()/2,-getWindowHeight()/2,0) );

So now we get a smooth rotation of the cube around the y axis. The rotation angle around the y axis increases by 1 degree in each frame, as we pass the Vec3f::yAxis()*1 value to the rotate() function. Experiment with the rotation values to understand this a bit more.

What if we want the cube to stay in a constant rotated position? We have to remember that the rotate() function works similarly to the translate() function: it adds values to the rotation of the scene instead of replacing them. Instead of rotating the object back, we will use the pushModelView() and popModelView() functions.

Rotation and translation are transformations. Every time you call translate() or rotate(), you are modifying the modelview matrix, and once something is done, it is not always easy to undo. Every transformation is applied on top of all previous transformations in the current state. So what is this state? Each state contains a copy of the current transformation matrices. By calling pushModelView() we enter a fresh state, making a copy of the current modelview matrix and storing it on the stack. We can then make some crazy transformations without worrying about how to undo them. To go back, we call popModelView(), which pops (or deletes) the current modelview matrix from the stack and returns us to the state with the previous modelview matrix.

So let's try this out by adding the following code after the gl::clear() call:

gl::pushModelView();
gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
gl::rotate( Vec3f(35,20,0) );
gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );
gl::popModelView();

Compile and run our program now; you should see a cube tilted 35 degrees around the x axis and 20 degrees around the y axis in the middle of the screen.

As we can see, before doing anything we create a copy of the current state with pushModelView(). Then we do the same as before: translate our scene to the middle of the screen, rotate it, and finally draw the cube! To reset the scene to the state it was in before, we need just one line of code: popModelView().

Using built-in eases

Now, say we want to make use of the easing algorithms that we saw in the EaseGallery sample. To do that, we have to change the code in a few steps. To use the easing functions, we have to include the Easing.h header file:

#include "cinder/Easing.h"

First we are going to add two more variables, startPosition and circleTimeBase:

Vec2f startPosition[CIRCLE_COUNT];
Vec2f currentPosition[CIRCLE_COUNT];
Vec2f targetPosition[CIRCLE_COUNT];
float circleRadius[CIRCLE_COUNT];
float circleTimeBase[CIRCLE_COUNT];

Then, in the setup() method implementation, we have to change the currentPosition parts to startPosition and add an initial value to the circleTimeBase array members:

startPosition[i].x = Rand::randFloat(0, getWindowWidth());
startPosition[i].y = Rand::randFloat(0, getWindowHeight());
circleTimeBase[i] = 0;

Next, we have to change the update() method so that it can be used along with the easing functions.
They are based on time and return a floating point value between 0 and 1 that defines the playhead position on an abstract 0 to 1 timeline:

void BasicAnimationApp::update() {
    Vec2f difference;
    for (int i=0; i<CIRCLE_COUNT; i++) {
        difference = targetPosition[i] - startPosition[i];
        currentPosition[i] = easeOutExpo(
            getElapsedSeconds()-circleTimeBase[i]) * difference
            + startPosition[i];
        if ( currentPosition[i].distance(targetPosition[i]) < 1.0f ) {
            targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
            targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
            startPosition[i] = currentPosition[i];
            circleTimeBase[i] = getElapsedSeconds();
        }
    }
}

The changed parts are the easing call and the circleTimeBase bookkeeping; the most important is the currentPosition[i] calculation. We take the distance between the start and end points of the timeline and multiply it by the position value returned by our easing function, which in this case is easeOutExpo(). Again, it returns a floating point value between 0 and 1 that represents the position on an abstract 0 to 1 timeline. If we multiply any number by, say, 0.33f, we get one-third of that number; by 0.5f, one-half of it; and so on. So, by adding this scaled distance to the circle's starting position, we get its current position!

Compile and run our application now. Almost like a snowstorm! We will add a small modification to the code, though. I will add a TWEEN_SPEED definition at the top of the code and multiply the time parameter passed to the ease function by it, so we can control the speed of the circles:

#define TWEEN_SPEED 0.2

Change the following line in the update() method implementation:

currentPosition[i] = easeOutExpo(
    (getElapsedSeconds()-circleTimeBase[i])*TWEEN_SPEED) * difference
    + startPosition[i];

I did this because the default time base for each tween is 1 second. That means each transition takes exactly 1 second, which is a bit too fast for our current situation. We want it to be slower, so we multiply the time we pass to the easing function by a floating point number that is less than 1.0f and greater than 0.0f. By doing that we ensure that the time is scaled down, and instead of 1 second we get 5 seconds for our transition. So try to compile and run this, and see for yourself!
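If you want a different feel, swapping the ease is a one-line change. As a sketch (assuming your Cinder version ships easeInOutQuad in cinder/Easing.h, as recent ones do), the calculation becomes:

currentPosition[i] = easeInOutQuad(
    (getElapsedSeconds()-circleTimeBase[i])*TWEEN_SPEED) * difference
    + startPosition[i];

The timeline math is unchanged; only the curve that maps the 0 to 1 playhead position to a 0 to 1 progress value differs.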
Here is the full source code of our circle animation:

#include "cinder/app/AppBasic.h"
#include "cinder/gl/gl.h"
#include "cinder/Rand.h"
#include "cinder/Easing.h"

#define CIRCLE_COUNT 100
#define TWEEN_SPEED 0.2

using namespace ci;
using namespace ci::app;
using namespace std;

class BasicAnimationApp : public AppBasic {
public:
    void setup();
    void update();
    void draw();
    void prepareSettings( Settings *settings );

    Vec2f startPosition[CIRCLE_COUNT];
    Vec2f currentPosition[CIRCLE_COUNT];
    Vec2f targetPosition[CIRCLE_COUNT];
    float circleRadius[CIRCLE_COUNT];
    float circleTimeBase[CIRCLE_COUNT];
};

void BasicAnimationApp::prepareSettings( Settings *settings ) {
    settings->setWindowSize(800,600);
    settings->setFrameRate(60);
}

void BasicAnimationApp::setup() {
    for(int i=0; i<CIRCLE_COUNT; i++) {
        currentPosition[i].x = Rand::randFloat(0, getWindowWidth());
        currentPosition[i].y = Rand::randFloat(0, getWindowHeight());
        targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
        targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
        circleRadius[i] = Rand::randFloat(1, 10);
        startPosition[i].x = Rand::randFloat(0, getWindowWidth());
        startPosition[i].y = Rand::randFloat(0, getWindowHeight());
        circleTimeBase[i] = 0;
    }
}

void BasicAnimationApp::update() {
    Vec2f difference;
    for (int i=0; i<CIRCLE_COUNT; i++) {
        difference = targetPosition[i] - startPosition[i];
        currentPosition[i] = easeOutExpo(
            (getElapsedSeconds()-circleTimeBase[i]) * TWEEN_SPEED)
            * difference + startPosition[i];
        if ( currentPosition[i].distance( targetPosition[i] ) < 1.0f ) {
            targetPosition[i].x = Rand::randFloat(0, getWindowWidth());
            targetPosition[i].y = Rand::randFloat(0, getWindowHeight());
            startPosition[i] = currentPosition[i];
            circleTimeBase[i] = getElapsedSeconds();
        }
    }
}

void BasicAnimationApp::draw() {
    gl::clear( Color( 0, 0, 0 ) );
    for (int i=0; i<CIRCLE_COUNT; i++) {
        gl::drawSolidCircle( currentPosition[i], circleRadius[i] );
    }
}

CINDER_APP_BASIC( BasicAnimationApp, RendererGl )

Experiment with the properties and try to change the eases. Not all of them will work with this example, but at least you will understand how to use them to create smooth animations with Cinder.

Summary

This article explained what Cinder is, introduced the 3D space, showed how to draw in 3D, and briefly covered the use of built-in eases.
Building an EJB 3.0 Persistence Model with Oracle JDeveloper

Packt
27 Aug 2010
5 min read
WebLogic Server 10.x provides some value-added features to facilitate EJB 3 development. WebLogic Server 10.x supports automatic deployment of a persistence unit based on the injected variable's name. The @javax.persistence.PersistenceContext and @javax.persistence.PersistenceUnit annotations are used to inject the persistence context into an EntityManager or EntityManagerFactory variable. A persistence context is a set of entities that are mapped to a database with a global JNDI name. If the name of the injected variable is the same as the persistence unit, the unitName attribute of @PersistenceContext or @PersistenceUnit does not have to be specified. The EJB container automatically deploys the persistence unit and sets its JNDI name to be the same as the persistence unit name in persistence.xml. For example, if the persistence unit name in the persistence.xml file is em, an EntityManager variable may be injected with the persistence context as follows:

@PersistenceContext
private EntityManager em;

We did not need to specify the unitName attribute in the @PersistenceContext annotation because the variable name is the same as the persistence unit name. Similarly, an EntityManagerFactory variable may be injected with the persistence context as follows, emf again being the persistence unit name:

@PersistenceUnit
private EntityManagerFactory emf;

Another value-added feature in WebLogic Server 10.x is support for vendor-specific subinterfaces of the EntityManager interface. For example, the BEA Kodo persistence provider provides the KodoEntityManager subinterface, which may be injected with the persistence context as follows:

@PersistenceContext
private KodoEntityManager em;

Setting the environment

Before getting started, we need to install Oracle JDeveloper 11g, which may be downloaded from http://www.oracle.com/technology/products/jdev/index.html. Download the Studio Edition, which is the complete version of JDeveloper with all the features. Oracle JDeveloper 11g is distributed as a GUI self-extractor application. Click on the jdevstudio11110 install application; the Oracle Installer gets started. Click on Next in the Oracle Installer. Choose a middleware home directory and click on Next. Choose the Install Type as Complete, which includes the integrated WebLogic Server, and click on Next. Confirm the default Product Installation directories and click on Next. The WebLogic Server installation directory is the wlserver_10.3 folder within the middleware home directory. Choose a shortcut location and click on Next. The Installation Summary lists the products that are installed, which include the WebLogic Server and the WebLogic JDBC drivers. Click on Next to install Oracle JDeveloper 11g and the integrated WebLogic Server 10.3.

We also need to install the Oracle database 10g/11g or the lightweight Oracle XE, which may be downloaded from http://www.oracle.com/technology/software/products/database/index.html. When installing the Oracle database, also install the sample schemas.

Creating a datasource in JDeveloper

Next, we create a JDBC datasource in JDeveloper. We shall use the datasource in the EJB 3.0 entity bean for database persistence. First, we need to create a database table in a sample schema, OE for example.
Run the following SQL script in SQL*Plus:

CREATE TABLE Catalog (
    id INTEGER PRIMARY KEY NOT NULL,
    journal VARCHAR(100),
    publisher VARCHAR(100),
    edition VARCHAR(100),
    title VARCHAR(100),
    author VARCHAR(100)
);

A database table gets created in the OE sample schema. Next, we need to create a JDBC connection in JDeveloper with the Oracle database. Open the Database Navigator, or select the Database Navigator tab if it is already open. Right-click on the IDE Connections node and select New Connection. In the Create Database Connection window, specify a Connection Name, select Connection Type as Oracle (JDBC), specify Username as OE, which is the schema in which the Catalog table is created, and specify the password for the OE schema. Select Driver as thin, Host Name as localhost, SID as ORCL, and JDBC Port as 1521. Click on the Test Connection button to test the connection. If the connection gets established, click on OK. The OracleDBConnection gets added to the Database Navigator view, and the CATALOG table that we created is listed under Tables.

Creating an EJB 3 application

In this section, we create an EJB 3.0 application in JDeveloper. Select New Application. Specify an Application Name, select the Java EE Web Application template, which consists of a Model project and a ViewController project, and click on Next. Next, specify the name (EJB3ViewController) for the View and Controller project. In the Project Technologies tab, transfer the EJB project technology from the Available list to the Selected list using the > button. We have selected the EJB project technology, as we shall be creating an EJB 3.0 model. Click on Next. Select the default Java settings for the View project and click on Next. Configure the EJB Settings for the View project: select EJB Version as Enterprise JavaBeans 3.0, select Using Annotations, and click on Next.

Next, create the Model project. Specify the Project Name (EJB3Model for example), and in the Project Technologies tab transfer the EJB project technology from the Available list to the Selected list using the > button. We have added the EJB project technology, as the EJB 3.0 application client is created in the View project. Click on Next. Select the default Java settings for the Model project and click on Next. Similar to the View project, configure the EJB settings for the Model project: select EJB Version as Enterprise JavaBeans 3.0, select Using Annotations, and click on Finish. As we won't be using a jndi.properties file or an ejb-jar.xml file, we don't need to select the generate options for them.

An EJB 3.0 application, which consists of a Model project and a ViewController project, gets added in the Application tab. Select the EJB3Model project in the Application navigator and select Tools | Project Properties. In the Project Properties window, select the Libraries and Classpath node; the EJB 3.0 library should be in the Classpath Entries. Select the EJB Module node and select the OracleDBConnection in the Connection drop-down list. The datasource corresponding to the OracleDBConnection is jdbc/OracleDBConnectionDS.
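To see how the pieces fit together, here is a sketch of what a persistence.xml for this model could look like. It is hand-written for illustration, not the file JDeveloper generates verbatim, and the entity class name model.Catalog is hypothetical; the unit name em follows the naming convention discussed earlier:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="em">
    <!-- The JTA datasource derived from the IDE connection -->
    <jta-data-source>jdbc/OracleDBConnectionDS</jta-data-source>
    <!-- Hypothetical entity class mapped to the CATALOG table -->
    <class>model.Catalog</class>
  </persistence-unit>
</persistence>

With a unit named em, an injected variable declared as private EntityManager em picks up this persistence unit without any unitName attribute.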

Drupal Intranets with Open Atrium: Creating Dashboard

Packt
05 Jan 2011
7 min read
Drupal Intranets with Open Atrium

- Discover an intranet solution for your organization with Open Atrium
- Unlock the features of Open Atrium to set up an intranet to improve communication and workflow
- Explore the many features of Open Atrium and how you can utilize them in your intranet
- Learn how to support, maintain, and administer your intranet
- A how-to guide written for non-developers to learn how to set up and use Open Atrium

Main dashboard

The main dashboard provides an interface for managing and monitoring our Open Atrium installation. It provides a central place to monitor what's going on across our departments and is also used as the central gateway for most of our administrative tasks. From this screen we can add groups, invite users, and customize group dashboards. Each individual who logs in also sees the main dashboard and can quickly glance at the overall activity of their company.

The dashboard initially uses a default two-column layout: the left side of the screen contains the Main Content section and the right side contains a Sidebar. In a default installation of Open Atrium, there is a welcome video in the Main Content area on the left. The first thing you will notice when you log in is this quick video on the main dashboard screen. It provides a quick overview of Open Atrium for our users and a review of the options for working with the dashboard.

Each dashboard can be customized to either a two-column (split) layout or a three-column layout; under the Modifying Layout section of this article, we will cover how to change the overall layout. The dashboard is divided into three distinct sections. There is the header area, which includes the Navigation tabs for creating content, modifying settings, and searching the site. Under the header area, we have the main content and sidebar areas. These areas are made up of blocks of content from the site. The blocks can bring forward and include different items depending on how we customize our site. For example, the left column could display Recent Activity and Blog Posts, while the right column could show Upcoming Events and a Calendar. Any of the features found throughout Open Atrium can be brought forward to a dashboard page. The beauty of this setup is that each group can customize its own dashboard. In the next section of this article, we will cover group dashboards in more detail; however, the same basic concepts apply to all dashboards.

After our users are comfortable with using Open Atrium, we may decide that we no longer need to show the tutorial video on the main dashboard. The video can be easily removed by clicking on the Customize the dashboard link just above the Recent Activity block, or by clicking on the Customize dashboard link on the top right in the header section. Click on the link and we will see a dashboard widget on the screen. This is the main interface for configuring the layout and content on our dashboard. Now, hover over the video: on the top right you will see two icons. The first icon, which looks like a plus sign (+), indicates that the content can be dragged.
We can click on this icon when hovering over a section of content and move that content to another column or below another section of content on our dashboard. The X indicates that we can remove that item from our dashboard. Hovering over any piece of content while you are customizing the dashboard should reveal these two icons. To remove the welcome video, we click on the red X and then click on Save changes, and the video tutorial is removed from the dashboard.

Group dashboard

The group dashboard works the same as the main dashboard. The only difference is that the group dashboard exposes content for the individual departments or groups that are set up on our site. For example, a site could have a separate group for the Human Resources, Accounting, and Management departments. Each of these groups can create a group dashboard that can be customized by any of the administrators of that particular group.

As an example, an HR department might customize its dashboard as follows. In the left column it adds a Projects and a Blog section: the Projects section links to specific projects within the site, and the Blog section links to the detailed blog entries. A customized block in the right column holds Upcoming events, a Mini calendar, and a Recent activity block. The Projects section is a block provided by the system that exposes content from the Case tracker or Todo sections of the HR site. The Upcoming events section is a customized block that highlights future events entered through the calendar feature.

To demonstrate how each department can have a different dashboard, an Accounting department might instead configure a custom block as the first item in the left column and, below that, a listing of Upcoming events, while in the right column the Accounting administrator adds a block that brings forward the Latest cases, exposing the most recent issues entered into the tracking system. It is also worth noting that each department can have a completely different color scheme. The color scheme can be changed by clicking on Settings | Group Settings | Features, scrolling down to the bottom of the screen, and clicking on BACKGROUND to either enter a hexadecimal color for our main color or pick a color from the color wheel.

Spaces

Spaces is a Drupal API module that allows sitewide configurable options to be overridden by individual spaces. The Spaces API is included with our Open Atrium installation and provides the foundation for creating group- and user-configurable dashboards. Users can customize their space and apply settings that affect only that space. This shows the power and flexibility of Open Atrium: users can apply customizations without affecting any of the other areas of Open Atrium. Users can use the functionality provided by spaces to create an individualized home page.

Group spaces

Group spaces provide an area for each group or department to arrange content in a contextual manner that makes sense for that group.
In the preceding examples, the content that is important to the Accounting department is not necessarily important to the Human Resources department. Administrators of each department can take advantage of Open Atrium's flexibility to arrange content in a way that works for them. The URLs in the example we have been looking at are as follows:

- Human Resources: http://acme.alphageekdev.com/hr
- Accounting: http://acme.alphageekdev.com/acct

Each URL is composed of the site URL, that is, http://acme.alphageekdev.com/, followed by the short name that we provided for the group space: hr and acct respectively.

User spaces

User spaces work in the same way that group dashboards and spaces work. Each user of the system can customize their dashboard in any way they see fit. A user's dashboard can, for example, include blocks from two different group spaces on the same page, showing how content can be brought forward to various dashboards so that each user sees only what is important to them.

Organizing your Balsamiq files

Packt
09 Oct 2012
3 min read
There are two important things to note about organizing your files in Balsamiq:

- Keep all of your .bmml files together.
- The assets folder houses everything else, that is, artwork, logos, PDFs, PSDs, symbols, and so on.

Naming your files

Naming your files in Balsamiq is very important. This is because Balsamiq does not automatically remember the order in which you organized your files after you close them. Balsamiq will reopen them in the order in which they sit in a folder. There are, however, two ways you can gain greater control.

Alphabetically

You could alphabetize your files, although this can pose a problem as you add and delete files, requiring you to carefully name the new files so that they open in the same order as before. While it is a fine solution, the time it takes to ensure proper alphabetization does not seem worth the effort.

Numbering

The second, and more productive, way to name your files is to not name them at all, but instead to number them. For example, after naming a new .bmml file, add a number to the end of it in sequential order, for example, filename_1, filename_2, filename_3, and so on. Subpages, in turn, become filename_1a, filename_1b, filename_1c, and so on. Keep in mind, however, that if you add, delete, or modify numbered files, you may still have to modify the remaining page numbers accordingly. Nevertheless, I suspect you will find it to be easier than alphabetizing.

Another way to number your files can be found on Balsamiq's website. The link to the exact page is a bit long, so go to http://www.balsamiq.com/ and do a search for Managing Projects in Mockups for Desktop. In the article, they recommend an alternate method of numbering your files by 10s, for example, filename_10, filename_20, filename_30, and so on. The idea is that as you add or remove pages, you can do so incrementally, rather than having to do a complete renumbering each time. In other words, you could add numbers between 11 and 19 and still be fine.

Keep in mind that if you choose to use single digits, be sure to add a zero before the number for consistency and to ensure proper file folder organization, for example, filename_05, filename_06, filename_07, and so on. How you name or number your files is completely up to you; these tips are simply recommendations to consider. The bottom line is to find a system for naming your files that works for you and to stick with it. You will be glad you did.
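To see why that leading zero matters, compare how a plain directory listing (and therefore Balsamiq) orders the two schemes; the filenames here are hypothetical:

$ ls          # without zero-padding
filename_1.bmml   filename_10.bmml  filename_2.bmml

$ ls          # with zero-padding
filename_01.bmml  filename_02.bmml  filename_10.bmml

Without the padding, filename_10 sorts between 1 and 2 because the comparison is character by character; with it, the files open in the intended order.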
Red5: A video-on-demand Flash Server

Packt
04 Jun 2010
6 min read
Plone does not provide a responsive user experience out of the box. This is not because the system is slow, but because it simply does (too) much. It does a lot of security checks and workflow operations, handles the content rules, does content validation, and so on. Still, there are some high-traffic sites running on this popular Content Management System. How do they manage?

"All Plone integrators are caching experts." This saying is commonly heard and read in the Plone community, and it is true. If we want a fast and responsive system, we have to use caching and load-balancing applications to spread the load. This article discusses a practical example: we will set up a protected video-on-demand solution with Plone and a Red5 server, and see how to integrate the two for an effective and secure video-streaming solution.

The Red5 server is an open source Flash server. It is written in Java and is very extensible via plugins. There are plugins for transcoding, different kinds of streaming, and several other manipulations we might want to perform on video or audio content. What we want to investigate here is how to integrate video streams protected by Plone permissions.

Requirements for setting up a Red5 server

The requirement for running a Red5 Flash server is Java 6. We can check the Java version by running this:

$ java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-9M3125)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)

The version needs to be at least 1.6. Earlier versions of the Red5 server run with 1.5, but the plugin for protecting the media files needs Java 6. To get Java 6, if we do not have it already, we can download it from the Sun home page. There are packages available for Windows and Linux. Some Linux distributions ship different implementations of Java because of licensing issues; you may check the corresponding documentation if this is the case for you. Mac OS X ships with its own Java bundled. To set the Java version to 1.6 on Mac OS X, we need to do the following:

$ cd /System/Library/Frameworks/JavaVM.framework/Versions
$ rm Current*
$ ln -s 1.6 Current
$ ln -s 1.6 CurrentJDK

After doing so, we should double-check the Java version with the command shown before. The Red5 server is available as a package for various operating systems. In the next section, we will see how we can integrate a Red5 server into a Plone buildout.

A Red5 buildout

Red5 can be downloaded in several different ways. As it is open source, even the sources are available as a tarball from the product home page. For the buildout, we use the bundle of ready-compiled Java libraries. This bundle comes with everything needed to run a standalone Red5 server, and startup scripts are provided for Windows and Bash (usable with Linux and Mac OS X). Let's see how to configure our buildout.

The buildout needs the usual common elements for a Plone 3.3.3 installation. Apart from the application and the instance, the Red5-specific parts are also present: an fss storage part and a part for setting up the supervisor.

[buildout]
newest = false
parts =
    zope2
    instance
    fss
    red5
    red5-webapp
    red5-protectedVOD
    supervisor
extends =
    http://dist.plone.org/release/3.3.3/versions.cfg
versions = versions
find-links =
    http://dist.plone.org/release/3.3.3
    http://dist.plone.org/thirdparty
    http://pypi.python.org/simple/

There is nothing special in the zope2 application part.
[zope2]
recipe = plone.recipe.zope2install
fake-zope-eggs = true
url = ${versions:zope2-url}

On the Plone side, we need, besides the fss eggs, a package called unimr.red5.protectedvod. This package with the rather complicated name creates rather complicated one-time URLs for the communication with Red5.

[instance]
recipe = plone.recipe.zope2instance
zope2-location = ${zope2:location}
user = admin:admin
http-address = 8080
eggs =
    Plone
    unimr.red5.protectedvod
    iw.fss
zcml =
    unimr.red5.protectedvod
    iw.fss
    iw.fss-meta

First, we need to configure FileSystemStorage. FileSystemStorage is used for sharing the videos between Plone and Red5: the videos are uploaded via the Plone UI and are put on the filesystem. The storage strategy needs to be either site1 or site2; these two strategies store the binary data with its original filename and file extension. The extension is needed for the Red5 server to recognize the file.

[fss]
recipe = iw.recipe.fss
zope-instances =
    ${instance:location}
storages =
    global /
    site /site site2

The red5 part downloads and extracts the Red5 application. Bear in mind that everything is placed into the parts directory. This includes configurations, plugins, logs, and even content, so we need to be extra careful about changing the recipe in the buildout when running in production mode. The content we share with Plone is symlinked, so this is not a problem. For the logs, we might move the location outside the parts directory and symlink them back.

[red5]
recipe = hexagonit.recipe.download
url = http://www.red5.org/downloads/0_8/red5-0.8.0.tar.gz

The next part adds our custom application, which handles the temporary links used for protection, to the Red5 application. The plugin is shipped together with the unimr.red5.protectedvod egg we use on the Plone side, but it is easier to get it from the Subversion repository directly.

[red5-webapp]
recipe = infrae.subversion
urls = http://svn.plone.org/svn/collective/unimr.red5.protectedvod/trunk/unimr/red5/protectedvod/red5-webapp red5-webapp

The red5-protectedVOD part configures the protectedVOD plugin. Basically, the WAR archive we checked out in the previous step is extracted, and if the location of the fss storage does not exist already, it is symlinked into the streams directory of the plugin. The streams directory is the usual place for media files in Red5.

[red5-protectedVOD]
recipe = iw.recipe.cmd
on_install = true
on_update = false
cmds =
    mkdir -p ${red5:location}/webapps/protectedVOD
    cd ${red5:location}/webapps/protectedVOD
    jar xvf ${red5-webapp:location}/red5-webapp/protectedVOD_0.1-red5_0.8-java6.war
    cd streams
    if [ ! -L ${red5:location}/webapps/protectedVOD/streams/fss_storage_site ]; then ln -s ${buildout:directory}/var/fss_storage_site .; fi

The commands used above are Unix/Linux-centric. Windows versions before Vista/Server 2008 do not understand symbolic links, so the whole idea of the recipe does not work there. The recipe might work with Windows Vista, Windows Server 2008, or Windows 7, but the commands would look different.

Finally, we add the Red5 server to our supervisor configuration. We need to set the RED5_HOME environment variable so that the startup script can find the necessary libraries of Red5.
[supervisor]
recipe = collective.recipe.supervisor
programs =
    30 instance2 ${instance2:location}/bin/runzope ${instance2:location} true
    40 red5 env [RED5_HOME=${red5:location} ${red5:location}/red5.sh] ${red5:location} true

After running the buildout, we can start the supervisor by issuing the following command:

bin/supervisord

The supervisor will take care of running all the subprocesses. To find out more on the supervisor, we may visit its website. To check if everything worked, we can request a status report by issuing this:

bin/supervisorctl status
instance    RUNNING    pid 2176, uptime 3:00:23
red5        RUNNING    pid 7563, uptime 0:51:25
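As one more sanity check, we can ask the Red5 server itself for a response. The port is an assumption: a stock Red5 0.8 install listens for HTTP on 5080, so adjust it if your configuration differs:

curl -I http://localhost:5080/

A normal HTTP status line (for example, HTTP/1.1 200 OK) tells us the server is up. If the request hangs or is refused, recheck the red5 entry in bin/supervisorctl status and the Red5 logs under the parts directory.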

Basic Dijit Knowledge in Dojo

Packt
22 Oct 2009
7 min read
All Dijits can be subclassed to change parts of their behavior and then used like the original Dijits, or you can create your own Dijits from scratch and include existing Dijits (forms, buttons, calendars, and so on) in a hierarchical manner. All Dijits can be created in either of the following two ways:

- Using the dojoType markup property inside selected tags in the HTML page
- Programmatic creation inside any JavaScript

For instance, if you want to have a ColorPalette in your page, you can write the following:

<div dojoType="dijit.ColorPalette"></div>

But you also need to load the required Dojo packages, which consist of the ColorPalette and anything else it needs. This is generally done in a script statement in the <head> part of the HTML page, along with any CSS resources and the djConfig declaration. So a complete example would look like this:

<html>
  <head>
    <title>ColorPalette</title>
    <style>
      @import "dojo-1.1b1/dojo/resources/dojo.css";
      @import "dojo-1.1b1/dijit/themes/tundra/tundra.css";
    </style>
    <script type="text/javascript">
      djConfig = { parseOnLoad: true }
    </script>
    <script type="text/javascript" src="dojo-1.1b1/dojo/dojo.js"></script>
    <script type="text/javascript">
      dojo.require("dojo.parser");
      dojo.require("dijit.ColorPalette");
    </script>
  </head>
  <body class="tundra">
    <div dojoType="dijit.ColorPalette"></div>
  </body>
</html>

Obviously, this shows a simple color palette, which can be told to call a function when a choice has been made. If we start from the top, I've chosen to include two CSS files in the <style> tag. The first one, dojo.css, is a reset CSS, which gives lists, table elements, and various other things their defaults. The file itself is quite small and well commented. The second file is called tundra.css and is a wrapper around lots of other stylesheets; some are generic for the theme it represents, but most are specific to widgets or widget families.

The two ways to create Dijits

So putting a Dojo widget in your page is very simple. If you want the ColorPalette created dynamically in a script instead, remove the <div> line just before the closing body tag (giving its place an empty element with an id, such as myPalette, so it can be found) and write the following:

<script>
  new dijit.ColorPalette({}, dojo.byId('myPalette'));
</script>

This seems fairly easy, but what's up with the empty object literal ({}) as the first argument? Well, as some Dijits take few arguments and others more, all configuration arguments to a Dijit get stuffed into the first argument, while the last argument is (if needed) the DOM node which the Dijit shall replace with its own content somewhere in the page. The default for all Dijits is that if we give only one argument to the constructor, it will be taken as the DOM node where the Dijit is to be created.

Let's see how to create a more complex Dijit in our page, a NumberSpinner:

<input type="text" dojoType="dijit.form.NumberSpinner" value="200" constraints="{max:500,places:0}" />

This will create a NumberSpinner that is set at the value '200', has '500' as a maximum, and shows no decimals. As the snippet shows, Dijits can also be declared on top of ordinary form elements; a DateTextBox, for example, looks like this:

<input type="text" name="date1" value="2008-12-30" dojoType="dijit.form.DateTextBox"/>

One rather peculiar feature of markup instantiation of Dijits is that you can use almost any kind of tag for the Dijit. The Dijit will replace the element with its own template when it is initialized. Certain Dijits work in a more complicated fashion and do not replace child nodes of the element where they're defined, but wrap them instead.
However, each Dijit has support for template HTML, which will be inserted, with variable substitutions, whenever that Dijit is put in the page. This is a very powerful feature, since when you start creating your own widgets, you will have an excellent system in place already which constrains where things will be put and how they are called. This means that when you finish your super-complicated graph-drawing widget and your client or boss wants three more just like it on the same page, you just slap up three more tags which have the dojoType defining your widget.

How do I find my widget?

You already know that you can use dojo.byId('foo') as a shorter version of document.getElementById('foo'). If you still think that dojo.byId is too long, you can create a shorthand function like this:

var $ = dojo.byId;

And then use $('foo') instead of dojo.byId for simple DOM node lookup. But Dijits also seem to have an id. Is that the same as the id of the DOM node they reside in, or what? Well, the answer is both yes and no. All created Dijit widgets have a unique id. That id can be the same string as the id that defines the DOM node where they're created, but it doesn't have to be. Suppose that you create a Dijit like this:

<div id='foo' dojoType='dijit._Calendar'></div>

The created Dijit will have the same widget id as the id of the DOM node it was created in, because no other was given. But can you define a different id for the widget than for its DOM node? Sure thing. There's a magic attribute called widgetId. So we could do the following:

<div id='foo' dojoType='dijit._Calendar' widgetId='bar'></div>

This would give the widget the id of 'bar'. But, really, what is the point? Why would we care that the widget/Dijit has some kind of obscure id? All we really need is the DOM node, right? Not at all. Sure, you might want to reach out and do bad things to the DOM node of a widget, but that node will not be the widget and will have none of its functions. If you want to grab hold of a widget instance after it is created, you need to know its widget id, so you can call the functions defined on the widget. So it's almost its entire reason to exist!

So how do I get hold of a widget object now that I have its id? By using dijit.byId(). These two functions look pretty similar, so here is a clear and easy-to-find (when browsing the book) explanation:

- dojo.byId(): Returns the DOM node for the given id.
- dijit.byId(): Returns the widget object for the given widget id.

Just one more thing. What happens if we create a widget and don't give it either a DOM or widget id? Does the created widget still get an id? How do we get at it? Yes, the widget will get a generated id if we write the following:

<div dojoType='dijit._Calendar'></div>

The widget will get a widget id like this: dijit__Calendar_0. The id will be the string of the file or namespace path down to the .js file which declares the widget, with / exchanged for _, and with a static widget counter attached to the end.
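A minimal sketch pulling the two lookups together; it assumes the Calendar above was declared with id='foo' and that the parser has already run, which is why the code is wrapped in dojo.addOnLoad():

<script type="text/javascript">
  dojo.addOnLoad(function() {
    var node = dojo.byId('foo');    // the DOM node
    var widget = dijit.byId('foo'); // the widget object, with all its functions
    // Every widget exposes its root DOM node as widget.domNode; normally the
    // two lookups meet in the middle, since the declared id ends up on that node.
    console.log(widget.domNode === node);
  });
</script>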