
How-To Tutorials - CMS and E-Commerce

830 Articles

Creating a Direct2D game window class

Packt
23 Dec 2013
12 min read
(For more resources related to this topic, see here.)

To put some graphics on the screen, our first step is to create a new game window class that uses Direct2D. This new game window class will derive from our original game window class, adding the Direct2D functionality. Open Visual Studio and add a new class to the project called GameWindow2D. We need to change its declaration to:

public class GameWindow2D : GameWindow, IDisposable

As you can see, it inherits from the GameWindow class, meaning that it has all of the public and protected members of the GameWindow class, as though we had implemented them again in this class. It also implements the IDisposable interface, just as the GameWindow class does. Also, don't forget to add a reference to SlimDX to this project if you haven't already.

We need to add some using statements to the top of this class file as well. They are all the same using statements that the GameWindow class has, plus one more: SlimDX.Direct2D. They are as follows:

using System.Windows.Forms;
using System.Diagnostics;
using System.Drawing;
using System;
using SlimDX;
using SlimDX.Direct2D;
using SlimDX.Windows;

Next, we need to create a handful of member variables:

WindowRenderTarget m_RenderTarget;
Factory m_Factory;
PathGeometry m_Geometry;
SolidColorBrush m_BrushRed;
SolidColorBrush m_BrushGreen;
SolidColorBrush m_BrushBlue;

The first variable is a WindowRenderTarget object. The term render target refers to the surface we are going to draw on. In this case, it is our game window, but this is not always the case: games can render to other places as well. For example, rendering into a texture object is used to create various effects. One example would be a simple security camera effect. Say we have a security camera in one room and a monitor in another room, and we want the monitor to display what the camera sees. To do this, we can render the camera's view into a texture, which can then be used to texture the screen of the monitor. Of course, this has to be redone in every frame so that the monitor screen shows what the camera is currently seeing. This idea is useful in 2D too.

Back to our member variables: the second one is a Factory object that we will use to set up our Direct2D resources, such as render targets. The third variable is a PathGeometry object that will hold the geometry for the first thing we will draw, a rectangle. The last three variables are all SolidColorBrush objects. We use these to specify the color we want to draw something with. There is a little more to them than that, but that's all we need right now.

The constructor

Let's turn our attention now to the constructor of our Direct2D game window class. It will do two things. Firstly, it will call the base class constructor (remember, the base class is the original GameWindow class), and it will then get our Direct2D objects initialized.
The following is the initial code for our constructor:

public GameWindow2D(string title, int width, int height, bool fullscreen)
    : base(title, width, height, fullscreen)
{
    m_Factory = new Factory();

    WindowRenderTargetProperties properties = new WindowRenderTargetProperties();
    properties.Handle = FormObject.Handle;
    properties.PixelSize = new Size(width, height);

    m_RenderTarget = new WindowRenderTarget(m_Factory, properties);
}

In the preceding code, the line starting with a colon calls the constructor of the base class for us. This ensures that everything inherited from the base class is initialized. In the body of the constructor, the first line creates a new Factory object and stores it in our m_Factory member variable. Next, we create a WindowRenderTargetProperties object and store the handle of our RenderForm object in it. Note that FormObject is one of the properties defined in our GameWindow base class. Remember that the RenderForm object is a SlimDX object that represents a window for us to draw on. The next line saves the size of our game window in the PixelSize property. The WindowRenderTargetProperties object is basically how we specify the initial configuration for a WindowRenderTarget object when we create it.

The last line in our constructor creates our WindowRenderTarget object, storing it in our m_RenderTarget member variable. The two parameters we pass in are our Factory object and the WindowRenderTargetProperties object we just created. A WindowRenderTarget object is a render target that refers to the client area of a window; we use it to draw in a window.

Creating our rectangle

Now that our render target is set up, we are ready to draw, but first we need to create something to draw! So, we will add a bit more code at the bottom of our constructor. First, we need to initialize our three SolidColorBrush objects. Add these three lines of code at the bottom of the constructor:

m_BrushRed = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 1.0f, 0.0f, 0.0f));
m_BrushGreen = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 1.0f, 0.0f));
m_BrushBlue = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 0.0f, 1.0f));

This code is fairly simple. For each brush, we pass in two parameters. The first parameter is the render target we will use this brush on. The second parameter is the color of the brush, which is an ARGB (Alpha, Red, Green, Blue) value. The first component we give for the color is 1.0f; the f character on the end indicates that the number is of the float data type. We set alpha to 1.0 because we want the brush to be completely opaque. A value of 0.0 would make it completely transparent, and a value of 0.5 would be 50 percent transparent. Next come the red, green, and blue components. These are all float values in the range 0.0 to 1.0 as well. As you can see for the red brush, we set the red channel to 1.0f and the green and blue channels both to 0.0f. This means we have maximum red, but no green or blue in our color.

With our SolidColorBrush objects set up, we now have three brushes we can draw with, but we still lack something to draw! So, let's fix that by adding some code to make our rectangle.
Add this code to the end of the constructor:

m_Geometry = new PathGeometry(m_RenderTarget.Factory);

using (GeometrySink sink = m_Geometry.Open())
{
    int top = (int) (0.25f * FormObject.Height);
    int left = (int) (0.25f * FormObject.Width);
    int right = (int) (0.75f * FormObject.Width);
    int bottom = (int) (0.75f * FormObject.Height);

    PointF p0 = new PointF(left, top);
    PointF p1 = new PointF(right, top);
    PointF p2 = new PointF(right, bottom);
    PointF p3 = new PointF(left, bottom);

    sink.BeginFigure(p0, FigureBegin.Filled);
    sink.AddLine(p1);
    sink.AddLine(p2);
    sink.AddLine(p3);
    sink.EndFigure(FigureEnd.Closed);
    sink.Close();
}

This code is a bit longer, but it's still fairly simple. The first line creates a new PathGeometry object and stores it in our m_Geometry member variable. The next line starts the using block and creates a new GeometrySink object that we will use to build the geometry of our rectangle. The using block will automatically dispose of the GeometrySink object for us when program execution reaches the end of the block. (using blocks only work with objects that implement the IDisposable interface.)

The next four lines calculate where each edge of our rectangle will be. For example, the first line calculates the vertical position of the top edge of the rectangle; in this case, we are placing the rectangle's top edge 25 percent of the way down from the top of the screen. We then do the same thing for the other three sides of the rectangle. The second group of four lines creates four PointF objects and initializes them using the values we just calculated. These four points represent the corners of our rectangle. A point is also often referred to as a vertex; when we have more than one vertex, we call them vertices (pronounced vur-tih-seez).

The final group of six lines uses the GeometrySink and the PointF objects we just created to set up the geometry of our rectangle inside the PathGeometry object. The first line uses the BeginFigure() method to begin the creation of a new geometric figure. The next three lines each add one more line segment to the figure by adding another point or vertex to it. With all four vertices added, we then call the EndFigure() method to specify that we are done adding vertices. The last line calls the Close() method to specify that we are finished adding geometric figures, since we can have more than one if we want. In this case, we are adding only one geometric figure, our rectangle.

Drawing our rectangle

Since our rectangle never changes, we don't need to add any code to our UpdateScene() method. We will override the base class's UpdateScene() method anyway, in case we need to add some code here later:

public override void UpdateScene(double frameTime)
{
    base.UpdateScene(frameTime);
}

As you can see, we have only one line of code in this override of the base class's UpdateScene() method. It simply calls the base class's version of the method. This is important because the base class's UpdateScene() method contains the code that gets the latest user input data each frame.

Now we are finally ready to write the code that will draw our rectangle on the screen! We will override the RenderScene() method so we can add our custom code.
The following is the code:

public override void RenderScene()
{
    if ((!this.IsInitialized) || this.IsDisposed)
    {
        return;
    }

    m_RenderTarget.BeginDraw();
    m_RenderTarget.Clear(ClearColor);
    m_RenderTarget.FillGeometry(m_Geometry, m_BrushBlue);
    m_RenderTarget.DrawGeometry(m_Geometry, m_BrushRed, 1.0f);
    m_RenderTarget.EndDraw();
}

First, we have an if statement, which happens to be identical to the one we put in the base class's RenderScene() method. This is because we are not calling the base class's RenderScene() method, since the only code in it is this if statement. Not calling the base class version of this method gives us a slight performance boost, since we avoid the overhead of that method call. We could do the same thing with the UpdateScene() method as well; in this case we didn't, because the base class version of that method has a lot more code in it. In your own projects, you may want to copy and paste that code into your override of the UpdateScene() method.

The next line of code calls the render target's BeginDraw() method to tell it that we are ready to begin drawing. Then, we clear the screen on the next line by filling it with the color stored in the ClearColor property defined by our GameWindow base class. The last three lines draw our geometry twice. First, we draw it using the FillGeometry() method of our render target. This draws our rectangle filled in with the specified brush (in this case, solid blue). Then we draw the rectangle a second time, this time with the DrawGeometry() method. This draws only the lines of our shape without filling it in, so it draws a border on our rectangle. The extra parameter on the DrawGeometry() method is optional and specifies the width of the lines we are drawing; we set it to 1.0f, which means the lines will be one pixel wide. The last line calls the EndDraw() method to tell the render target that we are finished drawing.

Cleanup

As usual, we need to clean things up after ourselves when the program closes. So, we need to add an override of the base class's Dispose(bool) method. We've already done this a few times, so it should be somewhat familiar and is not shown here; a sketch of what it might look like appears at the end of this article.

Our blue rectangle with a red border

As you might guess, there is a lot more you can do with drawing geometry. You can draw curved line segments and draw shapes with gradient brushes too, for example. You can also draw text on the screen using the render target's DrawText() method. But since we have limited space on these pages, we're going to look at how to draw bitmap images on the screen. These images are what make up the graphics of most 2D games.

Summary

In this article, we first made a simple demo application that drew a rectangle on the screen. Then, we got a bit more ambitious and built a 2D tile-based game world.

Resources for Article:

Further resources on this subject:
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Flash Game Development: Creation of a Complete Tetris Game [Article]
Interface Designing for Games in iOS [Article]
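For reference, here is a minimal sketch of the Dispose(bool) override mentioned in the Cleanup section above. This is an assumption based on the standard dispose pattern and the member names declared in this article, not the author's exact code; it presumes the GameWindow base class exposes IsDisposed and a virtual Dispose(bool):

protected override void Dispose(bool disposing)
{
    if (!this.IsDisposed)
    {
        if (disposing)
        {
            // Dispose of the managed Direct2D resources we created.
            if (m_BrushRed != null) m_BrushRed.Dispose();
            if (m_BrushGreen != null) m_BrushGreen.Dispose();
            if (m_BrushBlue != null) m_BrushBlue.Dispose();
            if (m_Geometry != null) m_Geometry.Dispose();
            if (m_RenderTarget != null) m_RenderTarget.Dispose();
            if (m_Factory != null) m_Factory.Dispose();
        }
    }

    // Let the base class clean up anything it owns.
    base.Dispose(disposing);
}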


Compression Formats in Linux Shell Script

Packt
31 Jan 2011
6 min read
Linux Shell Scripting Cookbook

Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes
Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data from files, and a lot more
Practical problem-solving techniques adherent to the latest Linux platform
Packed with easy-to-follow examples to exercise all the features of the Linux shell scripting language
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Compressing with gzip

gzip is a commonly used compression format on GNU/Linux platforms. Utilities such as gzip, gunzip, and zcat are available to handle gzip compressed file types. gzip can be applied to a single file only; it cannot archive directories and multiple files. Hence, we use a tar archive and compress it with gzip. When multiple files are given as input, it will produce several individually compressed (.gz) files. Let's see how to operate with gzip.

How to do it...

In order to compress a file with gzip, use the following command:

$ gzip filename
$ ls
filename.gz

This removes the original file and produces a compressed file called filename.gz.

Extract a gzip compressed file as follows:

$ gunzip filename.gz

This removes filename.gz and produces an uncompressed version of the file.

In order to list the properties of a compressed file, use:

$ gzip -l test.txt.gz
compressed  uncompressed  ratio  uncompressed_name
        35             6  -33.3%  test.txt

The gzip command can read a file from stdin and also write a compressed file to stdout. Read from stdin and write to stdout as follows:

$ cat file | gzip -c > file.gz

The -c option is used to direct output to stdout. We can also specify the compression level for gzip: use --fast or --best to provide the lowest or highest compression ratio, respectively.

There's more...

The gzip command is often used with other commands. It also has advanced options to specify the compression ratio. Let's see how to work with these features.

Gzip with tarball

We usually use gzip with tarballs. A tarball can be compressed by using the -z option passed to the tar command while archiving and extracting. You can create gzipped tarballs using the following methods:

Method 1:

$ tar -czvvf archive.tar.gz [FILES]

Or:

$ tar -cavvf archive.tar.gz [FILES]

The -a option specifies that the compression format should be detected automatically from the extension.

Method 2: First, create a tarball:

$ tar -cvvf archive.tar [FILES]

Then compress it after tarballing as follows:

$ gzip archive.tar

If many files (a few hundred) are to be archived in a tarball and then compressed, we use Method 2 with a few changes. The issue with giving many files as command arguments to tar is that it can accept only a limited number of files from the command line. In order to solve this issue, we can create a tar file by adding files one by one using a loop with the append option (-r) as follows:

FILE_LIST="file1 file2 file3 file4 file5"
for f in $FILE_LIST;
do
tar -rvf archive.tar $f
done
gzip archive.tar

In order to extract a gzipped tarball, use the following:

$ tar -xzvvf archive.tar.gz -C extract_directory

In this command:
-x is used for extraction
-z is for gzip specification

Or:

$ tar -xavvf archive.tar.gz -C extract_directory

In the above command, the -a option is used to detect the compression format automatically.

zcat – reading gzipped files without extracting

zcat is a command that can be used to dump the extracted contents of a .gz file to stdout without manually extracting it.
The .gz file remains unchanged, but the extracted contents are dumped to stdout as follows:

$ ls
test.gz
$ zcat test.gz
A test file
# file test contains a line "A test file"
$ ls
test.gz

Compression ratio

We can specify the compression ratio, which is available in the range 1 to 9, where:

1 is the lowest compression, but the fastest
9 is the best compression, but the slowest

You can also specify any ratio in between, as follows:

$ gzip -9 test.img

This compresses the file to the maximum.

Compressing with bzip2

bzip2 is another compression technique, very similar to gzip. bzip2 typically produces smaller (more compressed) files than gzip, and it comes with all Linux distributions. Let's see how to use bzip2.

How to do it...

In order to compress with bzip2, use:

$ bzip2 filename
$ ls
filename.bz2

This removes the original file and produces a compressed file called filename.bz2.

Extract a bzipped file as follows:

$ bunzip2 filename.bz2

This removes filename.bz2 and produces an uncompressed version of filename.

bzip2 can read a file from stdin and also write a compressed file to stdout. In order to read from stdin and write to stdout, use:

$ cat file | bzip2 -c > file.tar.bz2

The -c option is used to direct output to stdout.

We usually use bzip2 with tarballs. A tarball can be compressed by using the -j option passed to the tar command while archiving and extracting. A bzipped tarball can be created by using the following methods:

Method 1:

$ tar -cjvvf archive.tar.bz2 [FILES]

Or:

$ tar -cavvf archive.tar.bz2 [FILES]

The -a option specifies that the compression format should be detected automatically from the extension.

Method 2: First create the tarball:

$ tar -cvvf archive.tar [FILES]

Then compress it after tarballing:

$ bzip2 archive.tar

If we need to add hundreds of files to the archive, the above commands may fail. To fix that issue, use a loop to append files to the archive one by one using the -r option; a sketch of this appears at the end of this article.

Extract a bzipped tarball as follows:

$ tar -xjvvf archive.tar.bz2 -C extract_directory

In this command:
-x is used for extraction
-j is for bzip2 specification
-C is for specifying the directory to which the files are to be extracted

Or, you can use the following command:

$ tar -xavvf archive.tar.bz2 -C extract_directory

-a will detect the compression format automatically.

There's more...

bzip2 has several additional options to carry out different functions. Let's go through a few of them.

Keeping input files without removing them

While using bzip2 or bunzip2, the input file is removed and a compressed (or uncompressed) output file is produced. We can prevent it from removing input files by using the -k option. For example:

$ bunzip2 test.bz2 -k
$ ls
test  test.bz2

Compression ratio

We can specify the compression ratio, which is available in the range of 1 to 9 (where 1 is the least compression, but fast, and 9 is the highest possible compression, but much slower). For example:

$ bzip2 -9 test.img

This command provides maximum compression.
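As promised above, here is a minimal sketch of the append loop for the bzip2 case. It mirrors the gzip loop shown earlier in this article; the file names are placeholders:

# Append each file to the archive individually, avoiding
# the command-line argument limit of a single tar invocation.
FILE_LIST="file1 file2 file3 file4 file5"
for f in $FILE_LIST;
do
tar -rvf archive.tar $f
done
# Compress the finished archive with bzip2 instead of gzip.
bzip2 archive.tar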


Creating Identity and Resource Pools in Cisco Unified Computing System

Packt
24 Dec 2013
7 min read
Computers and their various peripherals have some unique identities, such as Universally Unique Identifiers (UUIDs), Media Access Control (MAC) addresses of Network Interface Cards (NICs), World Wide Node Numbers (WWNNs) for Host Bus Adapters (HBAs), and others. These identities are used to uniquely identify a computer system in a network. For traditional computers and peripherals, these identities were burned into the hardware and, hence, couldn't be altered easily. Operating systems and some applications rely on these identities and may fail if they are changed. In case of a full computer system failure, or the failure of a computer peripheral with a unique identity, administrators have to follow cumbersome firmware upgrade procedures to replicate the identities of the failed components on the replacement components.

The Unified Computing System (UCS) platform introduced the idea of creating identity and resource pools in the UCS Manager (UCSM) to abstract the compute node identities, instead of using the hardware burned-in identities. In this article, we'll discuss the different pools you can create during UCS deployments and server provisioning. We'll start by looking at what pools are, and then discuss the different types of pools and show how to configure each of them.

Understanding identity and resource pools

The salient feature of the Cisco UCS platform is stateless computing. In the Cisco UCS platform, none of the computer peripherals consume the hardware burned-in identities. Rather, all the unique characteristics are extracted from identity and resource pools, which reside on the Fabric Interconnects (FIs) and are managed using UCSM. These resource and identity pools are defined in an XML format, which makes them extremely portable and easily modifiable.

UCS computers and peripherals extract these identities from UCSM in the form of a service profile. A service profile holds all the server identities, including UUIDs, MACs, WWNNs, firmware versions, BIOS settings, and other server settings. A service profile is associated with the physical server using a customized Linux OS that assigns all the settings in the service profile to the physical server. In case of a server failure, the failed server needs to be removed and the replacement server has to be associated with the existing service profile of the failed server. In this service profile association process, the new server automatically picks up all the identities of the failed server, and the operating system or applications dependent upon these identities will not observe any change in the hardware. In case of a peripheral failure, the replacement peripheral automatically acquires the identities of the failed component. This greatly reduces the time required to recover a system after a failure.

Using service profiles with identity and resource pools also greatly reduces the server provisioning effort. A service profile with all the settings can be prepared in advance while an administrator is waiting for the delivery of the physical server. The administrator can also create service profile templates that can be used to create hundreds of service profiles; these profiles can be associated with physical servers that have the same hardware specifications. Creating a server template is highly recommended, as this greatly reduces the time for server provisioning: a template is created once and used for any number of physical servers with the same hardware.
Server identity and resource pools are created using the UCSM. In order to organize them better, it is possible to define as many pools as needed in each category. Keep in mind that each defined resource will consume space in the UCSM database. It is, therefore, a best practice to create identity and resource pool ranges based on current and near-future assessments.

For larger deployments, it is best practice to define a hierarchy of resources in the UCSM based on geographical, departmental, or other criteria; for example, a hierarchy can be defined based on different departments. This hierarchy is defined as an organization, and resource pools can be created for each organizational unit. In the UCSM, the main organizational unit is root, and further suborganizations can be defined under it. The only consideration to keep in mind is that pools defined under one organizational unit can't be migrated to other organizational units unless they are deleted first and then created again where required.

The following diagram shows how identity and resource pools provide unique identities to a stateless blade server and components such as the mezzanine card:

Learning to create a UUID pool

A UUID is a 128-bit number assigned to every compute node on a network to identify the compute node globally. A UUID is denoted as 32 hexadecimal digits. In the Cisco UCSM, a server UUID can be generated using a UUID suffix pool. The UCSM software generates a unique prefix to ensure that the generated compute node UUID is unique. Operating systems, including hypervisors, and some applications may leverage UUID bindings. The UUIDs generated with a resource pool are portable: in case of a catastrophic failure of the compute node, the pooled UUID assigned through a service profile can easily be transferred to a replacement compute node without going through complex firmware upgrades.

Following are the steps to create UUIDs for the blade servers:

1. Log in to the UCSM screen.
2. Click on the Servers tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on UUID Suffix Pools and click on Create UUID Suffix Pool, as shown in the following screenshot:
5. In the pop-up window, assign the Name and Description values to the UUID pool. Leave the Prefix value as Derived to make sure that UCSM makes the prefix unique.
6. The default Assignment Order selection is random; select Sequential to assign the UUIDs sequentially. Click on Next, as shown in the following screenshot:
7. Click on Add in the next screen.
8. In the pop-up window, change the value of Size to create the desired number of UUIDs.
9. Click on OK and then on Finish in the previous screen, as shown in the following screenshot:
10. In order to verify the UUID suffix pool, click on the UUID Suffix Pools tab in the navigation pane and then on the UUID Suffixes tab in the work pane, as shown in the following screenshot:

Learning to create a MAC pool

A MAC address is a 48-bit address assigned to a network interface for communication in the physical network. MAC address pools make server provisioning easier by providing scalable NIC configurations before the actual deployment.

Following are the steps to create MAC pools:

1. Log in to the UCSM screen.
2. Click on the LAN tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on MAC Pools and click on Create MAC Pool, as shown in the following screenshot:
5. In the pop-up window, assign the Name and Description values to the MAC pool.
6. The default Assignment Order selection is random; select Sequential to assign the MAC addresses sequentially. Click on Next, as shown in the following screenshot:
7. Click on Add in the next screen.
8. In the pop-up window, change Size to create the desired number of MAC addresses.
9. Click on OK and then on Finish in the previous screen, as shown in the following screenshot:
10. In order to verify the MAC pool, click on the MAC Pools tab in the navigation pane and then on the MAC Addresses tab in the work pane, as shown in the following screenshot:
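To make the pool contents concrete, the following lines show hypothetical example values such pools typically hand out. These values are placeholders chosen for illustration only; the 00:25:B5 MAC prefix reflects Cisco's usual recommendation for UCS MAC pools, and your own prefixes and ranges may differ:

UUID suffixes (sequential, size 3):
0000-000000000001
0000-000000000002
0000-000000000003

MAC addresses (sequential, size 3):
00:25:B5:00:00:01
00:25:B5:00:00:02
00:25:B5:00:00:03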


Your First Page with PHP-Nuke

Packt
24 Feb 2010
12 min read
We're going to look at our new homepage and from there move on to some of the main concepts of PHP-Nuke: blocks, modules, themes, and site security. Along the way, we're going to create the super user, a user with absolute power over our site; we will edit our first piece of content in PHP-Nuke, and begin the construction of the Dinosaur Portal.

Your New Homepage

Navigate to your site's homepage in your browser. For our newly installed PHP-Nuke site, this will be http://localhost/nuke/. You should be presented with the following screen, which we saw at the end of the last article:

Considering that we've not really done anything, this is impressive. I'm sure you won't be able to resist clicking on some of these links and seeing what PHP-Nuke has in store for us. Currently, the system is 'empty', so it has a rather cold and eerie feeling about it. Rest assured that it will start to warm up over the next few articles as we add content to the site. By the way, if you are impressed with the features you're seeing right now, let me tell you that there are others that haven't yet been activated, and there are many other add-ons to be found on various PHP-Nuke resource sites across the Internet.

Let's now talk about some of the PHP-Nuke elements that we see on the front page. First of all, there's the look of the page: there is the banner at the top, a site logo, and a horizontal navigation bar:

The page 'body' begins below the navigation bar. You can see a three-column layout with a big chunk of information in the middle column. The page layout of a PHP-Nuke site need not always look like this; the arrangement of the elements and the choice of color, text styles, and images are controlled by the theme. A different theme can be selected for the site and, immediately, the look and feel of your site is changed.

Blocks

The elements that you see in the left- and right-hand columns are known as blocks:

Blocks in PHP-Nuke are little nuggets of information positioned at the sides, or sometimes at the bottom, of a page. They often provide 'navigation', linking to other parts of the site, and provide a report or summary of the content that is available either on your site or, possibly, on another site. Typically, many blocks are displayed on a single page. An important block is the Modules block in the left-hand column:

This block shows a list of the active modules on your site, and is the standard navigational element of a typical PHP-Nuke site. Each entry in the above list is a link to a module on your site, and by clicking on the links the visitor is able to move between the modules.

Modules

PHP-Nuke is a modular system. Each module is like a mini website in itself, performing different tasks and working with different types of content. The PHP-Nuke 'core' provides a central mechanism for handling these modules, so that they work together, sharing data and user information, and ensuring a consistent look and operation throughout your site. In short, the modules define your site. The good thing with PHP-Nuke is that you can add and remove modules as needed, selecting the best range of features to suit your site and its visitors. We will discuss the standard PHP-Nuke modules over the next few articles.

When viewing a page on a PHP-Nuke site, you can tell which module is currently in play by looking at the URL of that page. For example, if you are looking at the Downloads module, the URL will be something like this:

http://localhost/nuke/modules.php?name=Downloads

The part of the URL after the ?
character is the query string. The query string contains variables that are separated by the & character. In the above URL, the query string contains a single variable, name, which has the value Downloads. PHP-Nuke switches between modules according to the value specified in the name variable. The other query string variables determine what else is to be displayed on that page, such as the required news story, for example. (Handling these query string variables appropriately has traditionally been a security weakness in PHP-Nuke, but that is true for many other web applications.) The output of the module currently being viewed is displayed in the middle column of the web page.

A Fistful of Default Modules

Let's have a quick overview of what some of the standard modules offer:

Home: Shows the homepage of the site. There isn't actually a Home module, but some particular module is associated with the homepage. The homepage has the URL index.php, rather than modules.php?name=XXXX.

Downloads and Web Links: Allow you to create and maintain categorized lists of downloadable resources or links to other sites. Possibly you have already seen the Downloads module in action when you downloaded PHP-Nuke itself from a PHP-Nuke powered site. This is another 'interactive' module; visitors can submit their own downloadable resources or links here.

Recommend Us: Allows the visitor on your site to send a message to their friends suggesting that they come and visit your site.

Search: Allows the visitor to search the contents of your site.

Statistics: Provides site statistics like the number of visits to your site, the different browsers used by visitors, and the most-viewed stories on your site.

Stories Archive: Contains an archive of past stories that have appeared on the site, arranged by month of publication.

Submit News: Allows visitors to submit a news story to the site through a form, after which the story goes straight onto the site, provided it is acceptable. The story is then said to be published.

Surveys: Displays the results of polls that have appeared on the site. Polls can be attached to stories and other pieces of content.

Topics: Provides a different view of the stories, this time arranged by their topic.

Your Account: Allows visitors to your site to register and create their own accounts. All visitors that register at your site can have their own area, which is accessed through this module. They can customize their own area, including their own Journal.

That's not even all of the modules, but it's enough to give you an idea of the breadth of the functionality that PHP-Nuke offers and the kind of experience that your visitors can look forward to.

Coming back to the homepage, have a look at the message in the middle that says:

For security reasons the best idea is to create the Super User right NOW by clicking HERE

It's not every day that we're invited to create a super user, so I think we should get on with that, especially as the word NOW is in upper case; that always suggests a sense of urgency. Clicking on the word HERE in that message will take you to the page http://localhost/nuke/admin.php, and we can begin creating our super user.

Creating the Super User

PHP-Nuke enables visitors to your site to create their own user account, and add and maintain their own personal details. The user account is required to identify them for posting news stories, making comments, or contributing to discussions in the forums, among other activities.
By registering on the site and creating a user account, visitors are given greater freedom on the site. However, their freedom has limits. We are about to create a special type of user, the super user. This is a registered user of the site who has almost total freedom on the site and absolute power over it. The super user can access, add, remove, and modify any part of the site, and can configure and control anything on it. Given the nature of this power, there comes the obvious responsibility of ensuring that the identity of this user is kept a secret. Anyone obtaining these account details will be able to do almost anything to your site, and that could be worse than it sounds, so you must ensure that these details do not fall into the wrong hands.

The super user is a site administrator, in fact, the site administrator; we will use the terms administrator and super user interchangeably. It is also possible to create other, less powerful, site administrators who can manage various parts of the site, such as approving bits of content submitted by visitors.

We shall now create the super user account. As with any user account in PHP-Nuke, it will consist of a username ('nickname', as it is also known in PHP-Nuke) and a password. On the page http://localhost/nuke/admin.php, you will be presented with a form asking you to choose a super user Nickname, the HomePage of that user, a contact Email address, and a Password. The password should only contain alphanumeric characters (letters and numbers). This is how the form looks:

The super user account is not the only type of user account that can be created with PHP-Nuke. Visitors to your site can register and create their own user accounts, which makes them Registered Users of your site. When creating the super user, there is an option to create a registered user with the same details, although obviously that user doesn't have the extended power of the super user. This does mean that when you log in with this administrator account, you will enjoy all the personalization benefits of the standard user account.

We will create the nickname and password for the super user account now. Do not use nicknames like admin, super user, or root for the super user; these would be the first guess of any miscreant attempting to break into your system. Also, make your password difficult to guess: make it long, with a mixture of digits and letters, both upper and lowercase (and definitely do not use the word password as your password!). Making the password secure is another vital step toward the overall security of your site.

In the page, we will enter dinoportmeister for the nickname, and use the password Pa2112cktXog. You can enter your own nickname and password here if you like, but make sure you remember them! Your email address needs to go into the Email field; this is another required field. The HomePage field does not have to correspond to the address of this site; it is for informational purposes only. The option to create a normal user with the same data will do just that: it will create a user with the same username and password as the administrator account. However, the two accounts are distinct, and changing the password for either account will not affect the other.

Click Submit and the super user is created.

Becoming the Administrator

After you have created the details for the super user, you still have to log yourself in with these details. On the admin.php page, you will find a form for entering the administrator username and password.
Hopefully you haven't forgotten them already! After entering the details here, click the Login button and you will pass over to the other side: the administration area of the site.

The admin.php page is where you need to log in to access the administration area. Whenever you want to log in as an administrator to perform some site maintenance, you do so from this page. Logging in from any other place on the site will log you in 'normally', as if you were a standard visitor to the site, even if the administrator username and password are accepted.

If you think about it, this suggests that, unless it has been specially customized, any PHP-Nuke site has an administrator login page at admin.php. This means that anyone intent on accessing the administrator area of that site does not have to look far to find the administrator login (of course, getting the right username and password combination is another matter). To counter this, from PHP-Nuke 7.6 onwards, you can rename the admin.php file and store the new name of the file in the $admin_file variable in the config.php file. This relocates your administrator login page; a sketch of the change appears at the end of this article.

Once you have entered the administration username and password, you will get your first taste of the administration area:

That might be more than you were expecting. We are presented with two towering graphical menus, the Administration Menu and the Modules Administration menu, the main navigation tools for the site administrator. (In versions of PHP-Nuke earlier than 7.5, these menus were one: the Administration Menu.) We'll dig into more detail about these menus in the next few articles. This is the place where you will spend most of your PHP-Nuke life, so you will need to get comfortable with it. Before we go any further, click the Home link in the Modules block to return to the homepage of your site.

A New Welcome

When you return to the homepage, you will notice that some extra text has appeared at the bottom of the welcome message:

[ View: All Visitors - Unlimited - Edit ]

This text is evidence of the super user's extra powers. If you click on the Edit link, you can begin changing the site. The presence of the Edit link is an example of 'in-position' editing, whereby as you browse the site you can quickly edit or delete the content you see. This link is not available to normal users of the site and is a pretty neat feature of PHP-Nuke. When you click the Edit link, you will be taken back to the administration area.
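As a concrete illustration of the admin.php renaming described above, here is a minimal sketch. The name myadmin is a placeholder, and you should verify the variable against your own copy of config.php before relying on it:

<?php
// config.php (PHP-Nuke 7.6 or later): store the new name of the
// renamed administration script, without the .php extension.
// The shipped default is "admin", which maps to admin.php.
$admin_file = "myadmin";
?>

After setting this, rename admin.php itself to myadmin.php on the server. The administrator login page is then reachable at http://localhost/nuke/myadmin.php instead of the well-known default location.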


Drupal 8 Configuration Management

Packt
18 Mar 2015
14 min read
In this article by Stefan Borchert and Anja Schirwinski, the authors of the book Drupal 8 Configuration Management, we will learn the inner workings of the Configuration Management system in Drupal 8. You will learn about config and schema files and read about the difference between simple configuration and configuration entities.

(For more resources related to this topic, see here.)

The config directory

During installation, Drupal adds a directory within sites/default/files called config_HASH, where HASH is a long random string of letters and numbers, as shown in the following screenshot:

This sequence is a random hash generated during the installation of your Drupal site. It is used to add some protection to your configuration files, in addition to the default restriction enforced by the .htaccess file within the subdirectories of the config directory, which prevents unauthorized users from seeing the contents of the directories. As a result, it would be really hard for someone to guess the folder's name.

Within the config directory, you will see two additional directories that are empty by default (leaving the .htaccess and README.txt files aside). One of the directories is called active. If you change the configuration system to use file storage instead of the database for the active Drupal site configuration, this directory will contain the active configuration. If you did not customize the storage mechanism of the active configuration (we will learn later how to do this), Drupal 8 uses the database to store the active configuration. The other directory is called staging. This directory is empty by default, but can host the configuration you want to be imported into your Drupal site from another installation. You will learn how to use this later on in this article.

A simple configuration example

First, we want to become familiar with configuration itself. If you look into the database of your Drupal installation and open up the config table, you will find the entire active configuration of your site, as shown in the following screenshot:

Depending on your site's configuration, table names may be prefixed with a custom string, so you'll have to look for a table name that ends with config.

Don't worry about the strange-looking text in the data column; this is the serialized content of the corresponding configuration. It expands to single configuration values, such as system.site.name, which holds the name of your site. Changing the site's name in the user interface at admin/config/system/site-information will immediately update the record in the database; put simply, the records in the table are the current state of your site's configuration, as shown in the following screenshot:

But where does the initial configuration of your site come from? Drupal itself and the modules you install must provide some kind of default configuration that gets added to the active storage during installation.

Config and schema files – what are they and what are they used for?

In order to provide a default configuration during the installation process, Drupal (modules and profiles) comes with a set of files that hold the configuration needed to run your site. To make parsing of these files simple and to enhance their readability, the configuration is stored in the YAML format. YAML (http://yaml.org/) is a data-orientated serialization standard that aims for simplicity. With YAML, it is easy to map common data types such as lists, arrays, or scalar values.
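As a quick, hypothetical illustration of those YAML constructs (the keys below are invented for the example, not taken from Drupal core):

# A scalar value
name: 'My site'

# A list of values
enabled_modules:
  - node
  - views

# A nested mapping (key-value pairs)
settings:
  cache: true
  max_age: 3600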
Config files

Directly beneath the root directory of each module and profile defining or overriding configuration (either core or contrib), you will find a directory named config. Within this directory, there may be two more directories (although both are optional): install and schema. Check the image module inside core/modules and take a look at its config directory, as shown in the following screenshot:

The install directory, shown in the following screenshot, contains all the configuration values that the specific module defines or overrides, stored in files with the extension .yml (one of the default extensions for files in the YAML format):

During installation, the values stored in these files are copied to the active configuration of your site. With the default configuration storage, the values are added to the config table; with file-based configuration storage mechanisms, on the other hand, the files are copied to the appropriate directories.

Looking at the filenames, you will see that they follow a simple convention: <module name>.<type of configuration>[.<machine name of configuration object>].yml (setting aside <module name>.settings.yml for now). The parts are explained as follows:

<module name>: This is the name of the module that defines the settings included in the file. For instance, the image.style.large.yml file contains settings defined by the image module.

<type of configuration>: This can be seen as a type of group for configuration objects. The image module, for example, defines several image styles. These styles are a set of different configuration objects, so the group is defined as style. Hence, all configuration files that contain image styles defined by the image module itself are named image.style.<something>.yml. The same structure applies to blocks (block.block.*.yml), filter formats (filter.format.*.yml), menus (system.menu.*.yml), content types (node.type.*.yml), and so on.

<machine name of configuration object>: The last part of the filename is the unique machine-readable name of the configuration object itself. In our examples from the image module, you see three different items: large, medium, and thumbnail. These are exactly the three image styles you will find at admin/config/media/image-styles after installing a fresh copy of Drupal 8. The image styles are shown in the following screenshot:

Schema files

The primary reason schema files were introduced into Drupal 8 is multilingual support: a tool was needed to identify all translatable strings within the shipped configuration. The secondary reason is to provide actual translation forms for configuration based on your data and to expose translatable configuration pieces to external tools.

Each module can have as many configuration .yml files as needed. All of these are described in one or more schema files that are shipped with the module. As a simple example of how schema files work, let's look at the system module's maintenance settings in the system.maintenance.yml file at core/modules/system/config/install. The file's contents are as follows:

message: '@site is currently under maintenance. We should be back shortly. Thank you for your patience.'
langcode: en

The system module's schema files live in core/modules/system/config/schema. These define the basic types but, for our example, the most important aspect is that they define the schema for the maintenance settings.
The corresponding schema section from the system.schema.yml file is as follows:

system.maintenance:
  type: mapping
  label: 'Maintenance mode'
  mapping:
    message:
      type: text
      label: 'Message to display when in maintenance mode'
    langcode:
      type: string
      label: 'Default language'

The first line corresponds to the filename of the .yml file, and the nested lines underneath describe the file's contents. Mapping is a basic type for key-value pairs (always the top-level type in a .yml file). The system.maintenance.yml file is labeled as label: 'Maintenance mode'. Then, the actual elements in the mapping are listed under the mapping key. As shown in the code, the file has two items, so the message and langcode keys are described. These are a text and a string value, respectively. Both values are given a label as well, in order to identify them in configuration forms.

Learning the difference between active and staging

By now, you know that Drupal works with the two directories active and staging. But what is the intention behind those directories? And how do we use them?

The configuration used by your site is called the active configuration, since it's the configuration that is affecting the site's behavior right now. The current (active) configuration is stored in the database, and direct changes to your site's configuration go into the specific tables. The reason Drupal 8 stores the active configuration in the database is that it enhances performance and security (source: https://www.drupal.org/node/2241059). However, sometimes you might not want to store the active configuration in the database and might need to use a different storage mechanism. For example, using the filesystem as configuration storage will enable you to track changes in the site's configuration using a versioning system such as Git or SVN.

Changing the active configuration storage

If you do want to switch your active configuration storage to files, here's how. Note that changing the configuration storage is only possible before installing Drupal. After installing it, there is no way to switch to another configuration storage!

To use a different configuration storage mechanism, you have to make some modifications to your settings.php file. First, you'll need to find the section named Active configuration settings. Now you will have to uncomment the line that starts with $settings['bootstrap_config_storage'] to enable file-based configuration storage. Additionally, you need to copy the existing default.services.yml (next to your settings.php file) to a file named services.yml and enable the new configuration storage:

services:
  # Override configuration storage.
  config.storage:
    class: Drupal\Core\Config\CachedStorage
    arguments: ['@config.storage.active', '@cache.config']
  config.storage.active:
    # Use file storage for active configuration.
    alias: config.storage.file

This tells Drupal to override the default service used for configuration storage and use config.storage.file as the active configuration storage mechanism instead of the default database storage. After installing the site with these settings, we will take another look at the config directory in sites/default/files (assuming you didn't change the location of the active and staging directories):

As you can see, the active directory now contains the entire site's configuration. The files in this directory get copied here during the website's installation process.
Whenever you make a change to your website, the change is reflected in these files. Exporting a configuration always exports a snapshot of the active configuration, regardless of the storage method.

The staging directory contains the changes you want to add to your site. Drupal compares the staging directory to the active directory and checks for differences between them. When you upload your compressed export file, it actually gets placed inside the staging directory. This means you can save yourself the trouble of using the interface to export and import the compressed file if you're comfortable enough with copying and pasting files to another directory. Just make sure you copy all of the files to the staging directory, even if only one of the files was changed. Any missing files are interpreted as deleted configuration, and will mess up your site.

In order to get the contents of staging into active, we simply have to use the synchronize option at admin/config/development/configuration again. This page shows us what was changed and allows us to import the changes. On importing, your active configuration gets overridden with the configuration in your staging directory. Note that the files inside the staging directory will not be removed after the synchronization is finished; the next time you want to copy and paste from your active directory, make sure you empty staging first. Also note that you cannot override files directly in the active directory: the changes have to be made inside staging and then synchronized.

Changing the storage location of the active and staging directories

In case you do not want Drupal to store your configuration in sites/default/files, you can set the path according to your wishes. Actually, this is recommended for security reasons, as these directories should never be accessible over the Web or by unauthorized users on your server. Additionally, it makes your life easier if you work with version control. By default, the whole files directory is usually ignored in version-controlled environments, because Drupal writes to it, and having the active and staging directories located within sites/default/files would result in them being ignored too.

So how do we change the location of the configuration directories? Before installing Drupal, you will need to create and modify the settings.php file that Drupal uses to load its basic configuration data from (that is, the database connection settings). If you haven't done so yet, copy the default.settings.php file and rename the copy to settings.php. Afterwards, open the new file with the editor of your choice and search for the following line:

$config_directories = array();

Change the preceding line to the following (or simply insert your addition at the bottom of the file):

$config_directories = array(
  CONFIG_ACTIVE_DIRECTORY => './../config/active', // folder outside the webroot
  CONFIG_STAGING_DIRECTORY => './../config/staging', // folder outside the webroot
);

The directory names can be chosen freely, but it is recommended that you use names at least similar to the default ones, so that you or other developers don't get confused when looking at them later. Remember to put these directories outside your webroot, or at least protect them using an .htaccess file (if using Apache as the server). Directly after adding the paths to your settings.php file, make sure you remove write permissions from the file, as it would be a security risk if someone could change it.
Drupal will now use your custom location for its configuration files on installation.

You can also change the location of the configuration directories after installing Drupal. Open up your settings.php file and find the two lines near the end of the file that start with $config_directories. Change their paths to something like this:

$config_directories['active'] = './../config/active';
$config_directories['staging'] = './../config/staging';

This places the directories above your Drupal root.

Now that you know about active and staging, let's learn more about the different types of configuration you can create on your own.

Simple configuration versus configuration entities

As soon as you want to start storing your own configuration, you need to understand the differences between simple configuration and configuration entities. Here's a short definition of the two types of configuration used in Drupal.

Simple configuration

This configuration type is easier to implement and is therefore ideal for basic configuration settings that result in Boolean values, integers, or simple strings of text being stored, as well as global variables that are used throughout your site. A good example would be the value of an on/off toggle for a specific feature in your module, or our previously used example of the site name configured by the system module:

name: 'Configuration Management in Drupal 8'

Simple configuration also includes any settings that your module requires in order to operate correctly. For example, JavaScript aggregation has to be either on or off; if the setting doesn't exist, the system module won't be able to determine the appropriate course of action. (A short sketch of reading and writing simple configuration from code appears at the end of this article.)

Configuration entities

Configuration entities are much more complicated to implement but far more flexible. They are used to store information about objects that users can create and destroy without breaking the code. A good example of configuration entities is an image style provided by the image module. Take a look at the image.style.thumbnail.yml file:

uuid: fe1fba86-862c-49c2-bf00-c5e1f78a0f6c
langcode: en
status: true
dependencies: { }
name: thumbnail
label: 'Thumbnail (100×100)'
effects:
  1cfec298-8620-4749-b100-ccb6c4500779:
    uuid: 1cfec298-8620-4749-b100-ccb6c4500779
    id: image_scale
    weight: 0
    data:
      width: 100
      height: 100
      upscale: false
third_party_settings: { }

This defines a specific style for images, so the system is able to create derivatives of images that a user uploads to the site.

Configuration entities also come with a complete set of create, read, update, and delete (CRUD) hooks that are fired just like those of any other entity in Drupal, making them an ideal candidate for configuration that might need to be manipulated or responded to by other modules. As an example, the Views module uses configuration entities that allow for a scenario where, at runtime, hooks are fired that allow any other module to provide configuration (in this case, custom views) to the Views module.

Summary

In this article, you learned how to store configuration and briefly got to know the two different types of configuration.

Resources for Article:

Further resources on this subject:
Tabula Rasa: Nurturing your Site for Tablets [article]
Components - Reusing Rules, Conditions, and Actions [article]
Introduction to Drupal Web Services [article]
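To round off the discussion of simple configuration above, here is a minimal sketch of how module code typically reads and writes such values in Drupal 8. system.site is real core configuration; my_module.settings and its toggle key are hypothetical names used purely for illustration:

<?php
// Read-only access to a simple configuration object.
$site_name = \Drupal::config('system.site')->get('name');

// Editable access: load a config object, change a value, and save it.
\Drupal::configFactory()
  ->getEditable('my_module.settings')
  ->set('toggle', TRUE)
  ->save();
?>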


Implementing a WCF Service in the Real World

Packt
09 Jun 2010
18 min read
WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. In this article by Mike Liu, author of WCF 4.0 Multi-tier Services Development with LINQ to Entities, we will create and test the WCF service by following these steps:

Create the project using a WCF Service Library template
Create the project using a WCF Service Application template
Create the Service Operation Contracts
Create the Data Contracts
Add a Product Entity project
Add a business logic layer project
Call the business logic layer from the service interface layer
Test the service

In this article, we will also learn how to separate the service interface layer from the business logic layer.

Why layer a service?

An important aspect of SOA design is that service boundaries should be explicit, which means hiding all the details of the implementation behind the service boundary. This includes revealing or dictating what particular technology was used. Furthermore, inside the implementation of a service, the code responsible for the data manipulation should be separated from the code responsible for the business logic. So in the real world, it is always good practice to implement a WCF service in three or more layers. The three layers are the service interface layer, the business logic layer, and the data access layer.

Service interface layer: This layer will include the service contracts and operation contracts that are used to define the service interfaces that will be exposed at the service boundary. Data contracts are also defined to pass in and out of the service. If any exception is expected to be thrown outside of the service, then Fault contracts will also be defined at this layer.

Business logic layer: This layer will apply the actual business logic to the service operations. It will check the preconditions of each operation, perform business activities, and return any necessary results to the caller of the service.

Data access layer: This layer will take care of all of the tasks needed to access the underlying databases. It will use a specific data adapter to query and update the databases. This layer will handle connections to databases, transaction processing, and concurrency controlling. Neither the service interface layer nor the business logic layer needs to worry about these things.

Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split out layers into separate physical tiers for scalability. The data access code should be separated into its own layer that focuses on performing translation services between the databases and the application domain. Services should be placed in a separate service layer that focuses on performing translation services between the service-oriented external world and the application domain. The service interface layer will be compiled into a separate class assembly and hosted in a service host environment. The outside world will only know about and have access to this layer. Whenever a request is received by the service interface layer, the request will be dispatched to the business logic layer, and the business logic layer will get the actual work done. If any database support is needed by the business logic layer, it will always go through the data access layer.
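To make that delegation concrete, here is a minimal sketch of how a call flows through the three layers. This is illustrative only, not the chapter's final code: the ProductLogic and ProductDao class names, and the Product type they pass around, are stand-ins for the layers we will actually build later.

using System;

// Service interface layer: translates between the service boundary and the
// domain, and delegates all real work to the business logic layer.
public class ProductService : IProductService
{
    private readonly ProductLogic productLogic = new ProductLogic();

    public Product GetProduct(int id)
    {
        // No business rules here; just pass the call through.
        return productLogic.GetProduct(id);
    }
}

// Business logic layer: checks preconditions and applies business rules,
// then uses the data access layer for persistence.
public class ProductLogic
{
    private readonly ProductDao productDao = new ProductDao();

    public Product GetProduct(int id)
    {
        if (id <= 0)
            throw new ArgumentOutOfRangeException("id", "Product ID must be positive.");
        return productDao.GetProduct(id);
    }
}

// Data access layer: the only layer that talks to the database.
public class ProductDao
{
    public Product GetProduct(int id)
    {
        // Query the database here (connections, transactions, concurrency).
        return new Product { ProductID = id };
    }
}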
Creating a new solution and project using WCF templates

We need to create a new solution for this example and add a new WCF project to this solution. This time we will use the built-in Visual Studio WCF templates for the new project.

Using the C# WCF service library template

There are a few built-in WCF service templates within Visual Studio 2010; two of them are Visual Studio WCF Service Library and Visual Studio WCF Service Application. In this article, we will use the service library template. Follow these steps to create the RealNorthwind solution and the project using the service library template:

Start Visual Studio 2010, select the menu option File | New | Project…, and you will see the New Project dialog box. From this point onwards, we will create a completely new solution and save it in a different location.

In the New Project window, specify Visual C# | WCF | WCF Service Library as the project template, RealNorthwindService as the (project) name, and RealNorthwind as the solution name. Make sure that the checkbox Create directory for solution is selected.

Click on the OK button, and the solution is created with a WCF project inside it. The project already has an IService1.cs file to define a service interface and Service1.cs to implement the service. It also has an app.config file, which we will cover shortly.

Using the C# WCF service application template

Instead of using the Visual Studio WCF Service Library template to create our new WCF project, we can use the Visual Studio WCF Service Application template to create the new WCF project. Because we have created the solution, we will add a new project using the Visual Studio WCF Service Application template.

Right-click on the solution item in Solution Explorer, select the menu option Add | New Project… from the context menu, and you will see the Add New Project dialog box.

In the Add New Project window, specify Visual C# | WCF Service Application as the project template, RealNorthwindService2 as the (project) name, and leave the default location of C:\SOAWithWCFandLINQ\Projects\RealNorthwind unchanged.

Click on the OK button and the new project will be added to the solution. The project already has an IService1.cs file to define a service interface, and Service1.svc.cs to implement the service. It also has a Service1.svc file and a web.config file, which are used to host the new WCF service. It has also had the necessary references added to the project, such as System.ServiceModel.

You can follow these steps to test this service:

Change this new project, RealNorthwindService2, to be the startup project (right-click on it in Solution Explorer and select Set as Startup Project). Then run it (Ctrl + F5 or F5). You will see that ASP.NET Development Server has been started, and a browser is open listing all of the files under the RealNorthwindService2 project folder. Clicking on the Service1.svc file will open the metadata page of the WCF service in this project.

If you have pressed F5 in the previous step to run this project, you might see a warning message box asking you if you want to enable debugging for the WCF service. As we said earlier, you can choose to enable debugging or just run in the non-debugging mode.

You may also have noticed that the WCF Service Host is started together with ASP.NET Development Server. This is actually another way of hosting a WCF service in Visual Studio 2010.
It has been started at this point because, within the same solution, there is a WCF service project (RealNorthwindService) created using the WCF Service Library template. So far we have used two different Visual Studio WCF templates to create two projects. The first project, using the C# WCF Service Library template, is a more sophisticated one because this project is actually an application containing a WCF service, a hosting application (WcfSvcHost), and a WCF Test Client. This means that we don't need to write any other code to host it, and as soon as we have implemented a service, we can use the built-in WCF Test Client to invoke it. This makes it very convenient for WCF development. The second project, using the C# WCF Service Application template, is actually a website. This is the hosting application of the WCF service so you don't have to create a separate hosting application for the WCF service. As we have already covered them and you now have a solid understanding of these styles, we will not discuss them further. But keep in mind that you have this option, although in most cases it is better to keep the WCF service as clean as possible, without any hosting functionalities attached to it. To focus on the WCF service using the WCF Service Library template, we now need to remove the project RealNorthwindService2 from the solution. In Solution Explorer, right-click on the RealNorthwindService2 project item and select Remove from the context menu. Then you will see a warning message box. Click on the OK button in this message box and the RealNorthwindService2 project will be removed from the solution. Note that all the files of this project are still on your hard drive. You will need to delete them using Windows Explorer. Creating the service interface layer In this article, we will create the service interface layer contracts. Because two sample files have already been created for us, we will try to reuse them as much as possible. Then we will start customizing these two files to create the service contracts. Creating the service interfaces To create the service interfaces, we need to open the IService1.cs file and do the following: Change its namespace from RealNorthwindService to: MyWCFServices.RealNorthwindService Change the interface name from IService1 to IProductService. Don't be worried if you see the warning message before the interface definition line, as we will change the web.config file in one of the following steps. Change the first operation contract definition from this line: string GetData(int value); to this line: Product GetProduct(int id); Change the second operation contract definition from this line: CompositeType GetDataUsingDataContract(CompositeType composite); to this line: bool UpdateProduct(Product product); Change the filename from IService1.cs to IProductService.cs. With these changes, we have defined two service contracts. The first one can be used to get the product details for a specific product ID, while the second one can be used to update a specific product. The product type, which we used to define these service contracts, is still not defined. 
The content of the service interface for RealNorthwindService.ProductService should look like this now:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);

        [OperationContract]
        bool UpdateProduct(Product product);

        // TODO: Add your service operations here
    }
}

This is not the whole content of the IProductService.cs file. The bottom part of this file should still have the class, CompositeType.

Creating the data contracts

Another important aspect of SOA design is that you shouldn't assume that the consuming application supports a complex object model. One part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values. For maximum interoperability and alignment with SOA principles, you should not pass any .NET-specific types such as DataSet or Exceptions across the service boundary. You should stick to fairly simple data structure objects such as classes with properties and backing member fields. You can pass objects that have nested complex types such as 'Customer with an Order collection'. However, you shouldn't make any assumption about the consumer being able to support object-oriented constructs such as inheritance or base-classes for interoperable web services.

In our example, we will create a complex data type to represent a product object. This data contract will have five properties: ProductID, ProductName, QuantityPerUnit, UnitPrice, and Discontinued. These will be used to communicate with client applications. For example, a supplier may call the web service to update the price of a particular product or to mark a product for discontinuation.

It is preferable to put data contracts in separate files within a separate assembly but, to simplify our example, we will put DataContract in the same file as the service contract. We will modify the file, IProductService.cs, as follows:

Change the DataContract name from CompositeType to Product.

Change the fields from the following lines:

bool boolValue = true;
string stringValue = "Hello ";

to these five lines:

int productID;
string productName;
string quantityPerUnit;
decimal unitPrice;
bool discontinued;

Delete the old BoolValue and StringValue DataMember properties. Then, for each of the above fields, add a DataMember property. For example, for productID, we will have this DataMember property:

[DataMember]
public int ProductID
{
    get { return productID; }
    set { productID = value; }
}

A better way is to take advantage of the automatic property feature of C#, and add the following ProductID DataMember without defining the productID field:

[DataMember]
public int ProductID { get; set; }

To save some space, we will use the latter format. So, we need to delete all of those field definitions and add an automatic property for each field, with the first letter capitalized.
The data contract part of the finished service contract file, IProductService.cs, should now look like this:

[DataContract]
public class Product
{
    [DataMember]
    public int ProductID { get; set; }

    [DataMember]
    public string ProductName { get; set; }

    [DataMember]
    public string QuantityPerUnit { get; set; }

    [DataMember]
    public decimal UnitPrice { get; set; }

    [DataMember]
    public bool Discontinued { get; set; }
}

Implementing the service contracts

To implement the two service interfaces that we defined, open the Service1.cs file and do the following:

Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.

Change the class name from Service1 to ProductService. Make it inherit from the IProductService interface, instead of IService1. The class definition line should be like this:

public class ProductService : IProductService

Delete the GetData and GetDataUsingDataContract methods.

Add the following method, to get a product:

public Product GetProduct(int id)
{
    // TODO: call business logic layer to retrieve product
    Product product = new Product();
    product.ProductID = id;
    product.ProductName = "fake product name from service layer";
    product.UnitPrice = (decimal)10.0;
    return product;
}

In this method, we created a fake product and returned it to the client. Later, we will remove the hard-coded product from this method and call the business logic to get the real product.

Add the following method to update a product:

public bool UpdateProduct(Product product)
{
    // TODO: call business logic layer to update product
    if (product.UnitPrice <= 0)
        return false;
    else
        return true;
}

Also, in this method, we don't update anything. Instead, we always return true if a valid price is passed in.

Change the filename from Service1.cs to ProductService.cs.

The content of the ProductService.cs file should be like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    public class ProductService : IProductService
    {
        public Product GetProduct(int id)
        {
            // TODO: call business logic layer to retrieve product
            Product product = new Product();
            product.ProductID = id;
            product.ProductName = "fake product name from service layer";
            product.UnitPrice = (decimal)10;
            return product;
        }

        public bool UpdateProduct(Product product)
        {
            // TODO: call business logic layer to update product
            if (product.UnitPrice <= 0)
                return false;
            else
                return true;
        }
    }
}

Modifying the app.config file

Because we have changed the service name, we have to make the appropriate changes to the configuration file. Note that when you rename the service, if you have used the refactor feature of Visual Studio, some of the following tasks may have been done by Visual Studio. Follow these steps to change the configuration file:

Open the app.config file from Solution Explorer.

Change all instances of the RealNorthwindService string except the one in baseAddress to MyWCFServices.RealNorthwindService. This is for the namespace change.

Change the RealNorthwindService string in baseAddress to MyWCFServices/RealNorthwindService.

Change all instances of the Service1 string to ProductService. This is for the actual service name change.

Change the service address port from 8732 to 8080. This is to prepare for the client application, which we will create soon. You can also change Design_Time_Addresses to whatever address you want, or delete the baseAddress part from the service. This can be used to test your service locally.
We will leave it unchanged for our example. The content of the app.config file should now look like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
  <!-- When deploying the service library project, the content of the config file
       must be added to the host's app.config file. System.Configuration does not
       support config files for libraries. -->
  <system.serviceModel>
    <services>
      <service name="MyWCFServices.RealNorthwindService.ProductService">
        <endpoint address="" binding="wsHttpBinding"
                  contract="MyWCFServices.RealNorthwindService.IProductService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding"
                  contract="IMetadataExchange" />
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/RealNorthwindService/ProductService/" />
          </baseAddresses>
        </host>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value below to
               false and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="True"/>
          <!-- To receive exception details in faults for debugging purposes, set
               the value below to true. Set to false before deployment to avoid
               disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="False" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Testing the service using WCF Test Client

Because we are using the WCF Service Library template in this example, we are now ready to test this web service. As we pointed out when creating this project, this service will be hosted in the Visual Studio 2010 WCF Service Host environment. To start the service, press F5 or Ctrl + F5. WcfSvcHost will be started and WCF Test Client is also started. This is a Visual Studio 2010 built-in test client for WCF Service Library projects.

In order to run the WCF Test Client you have to log in to your machine as a local administrator. You also have to start Visual Studio as an administrator because we have changed the service port from 8732 to 8080 (port 8732 is pre-registered but 8080 is not). Again, if you get an Access is denied error, make sure you run Visual Studio as an administrator (under Windows XP you need to log on as an administrator).

Now from this WCF Test Client we can double-click on an operation to test it. First, let us test the GetProduct operation. Now the message Invoking Service… will be displayed in the status bar as the client is trying to connect to the server. It may take a while for this initial connection to be made as several things need to be done in the background. Once the connection has been established, a channel will be created and the client will call the service to perform the requested operation. Once the operation has been completed on the server side, the response package will be sent back to the client, and the WCF Test Client will display this response in the bottom panel.

If you started the test client in debugging mode (by pressing F5), you can set a breakpoint at a line inside the GetProduct method in the ProductService.cs file, and when the Invoke button is clicked, the breakpoint will be hit so that you can debug the service as we explained earlier. However, here you don't need to attach to the WCF Service Host. Note that the response is always the same, no matter what product ID you use to retrieve the product.
Specifically, the product name is hard-coded, as shown in the diagram. Moreover, from the client response panel, we can see that several properties of the Product object have been assigned default values. Also, because the product ID is an integer value from the WCF Test Client, you can only enter an integer for it. If a non-integer value is entered, when you click on the Invoke button, you will get an error message box to warn you that you have entered a value with the wrong type. Now let's test the operation, UpdateProduct. The Request/Response packages are displayed in grids by default but you have the option of displaying them in XML format. Just select the XML tab at the bottom of the right-side panel, and you will see the XML-formatted Request/Response packages. From these XML strings, you can see that they are SOAP messages. Besides testing operations, you can also look at the configuration settings of the web service. Just double-click on Config File from the left-side panel and the configuration file will be displayed in the right-side panel. This will show you the bindings for the service, the addresses of the service, and the contract for the service. What you see here for the configuration file is not an exact image of the actual configuration file. It hides some information such as debugging mode and service behavior, and includes some additional information on reliable sessions and compression mode. If you are satisfied with the test results, just close the WCF Test Client, and you will go back to Visual Studio IDE. Note that as soon as you close the client, the WCF Service Host is stopped. This is different from hosting a service inside ASP.NET Development Server, where ASP.NET Development Server still stays active even after you close the client.
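Beyond the WCF Test Client, you can also exercise the service from code. The following is a minimal console client sketch, not part of the chapter's steps: it assumes the service is running, that the client project references the contract types (IProductService and Product), and that the address matches the baseAddress configured in app.config.

using System;
using System.ServiceModel;
using MyWCFServices.RealNorthwindService;

class TestClient
{
    static void Main()
    {
        // Binding and address must match the service's endpoint configuration.
        var binding = new WSHttpBinding();
        var address = new EndpointAddress(
            "http://localhost:8080/Design_Time_Addresses/MyWCFServices/RealNorthwindService/ProductService/");

        var factory = new ChannelFactory<IProductService>(binding, address);
        IProductService proxy = factory.CreateChannel();

        Product product = proxy.GetProduct(23);
        Console.WriteLine("{0}: {1} at {2}",
            product.ProductID, product.ProductName, product.UnitPrice);

        factory.Close();
    }
}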

Animation Effects in ASP.NET using jQuery

Packt
03 May 2011
9 min read
  ASP.NET jQuery Cookbook Over 60 practical recipes for integrating jQuery with ASP.NET   Introduction Some useful inbuilt functions in jQuery that we will explore in this article for achieving animation effects are: animate ( properties, [ duration ], [ easing ], [ complete ] ): This method allows us to create custom animation effects on any numeric css property. The parameters supported by this method are: properties: This is the map of css properties to animate, for e.g. width, height, fontSize, borderWidth, opacity, etc. duration: This is the duration of the animation in milliseconds. The constants slow and fast can be used to specify the durations, and they represent 600 ms and 200 ms respectively. easing: This is the easing function to use. Easing indicates the speed of the animation at different points during the animation. jQuery provides inbuilt swing and linear easing functions. Various plugins can be interfaced if other easing functions are required. complete: This indicates the callback function on completion of the animation. fadeIn ( [ duration ], [ callback ] ): This method animates the opacity of the matched elements from 0 to 1 i.e. transparent to opaque. The parameters accepted are: duration: This is the duration of the animation callback: This is the callback function on completion of the animation fadeOut( [ duration ], [ callback ] ): This method animates the opacity of the matched elements from 1 to 0 i.e. opaque to transparent. The parameters accepted are: duration: This is the duration of the animation callback: This is the callback function on completion of the animation slideUp( [ duration ], [ callback ] ): This method animates the height of the matched elements with an upward sliding motion. When the height of the element reaches 0, the css property display of the element is updated to none so that the element is hidden on the page. The parameters accepted are: duration: This is the duration of the animation callback: This is the callback function on completion of the animation slideDown( [ duration ], [ callback ] ): This method animates the height of the matched elements from 0 to the specified maximum height. Thus, the element appears to slide down on the page. The parameters accepted are: duration: This is the duration of the animation callback: This is the callback function on completion of the animation slideToggle( [ duration ], [ callback ] ): This method animates the height of the matched elements. If the element is initially hidden, it will slide down and become completely visible. If the element is initially visible, it will slide up and become hidden on the page. The parameters accepted are: duration: This is the duration of the animation callback: This is the callback function on completion of the animation jQuery.fx.off: If there is a need to disable animations because of a resource constraint or due to difficulties in viewing the animations, then this utility can be used to turn off the animation completely. This is achieved by setting all animated controls to their final state. stop ( [ clearQueue ], [ jumpToEnd ] ): This method stops the currently running animations on the page. The parameters accepted are: clearQueue: This indicates whether any queued up animations are required to be cleared. The default value is false. jumpToEnd: This indicates if the current animation is to be cleared immediately. The default value is false. 
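The recipes that follow don't use the last two utilities, so here is a quick sketch of how they might be wired up; the #stopButton and #panel selectors are hypothetical, not part of the recipes:

// Stop whatever animation is currently running on a panel.
// clearQueue = true drops any queued animations; jumpToEnd = false
// leaves the element wherever the animation had got to.
$("#stopButton").click(function () {
    $("#panel").stop(true, false);
});

// Disable all jQuery animations globally, e.g. for low-powered clients.
// Animated elements jump straight to their final state instead.
jQuery.fx.off = true;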
In this article, we will cover some of the animation effects that can be achieved in ASP.NET using the capabilities of jQuery.

Getting started

Let's start by creating a new ASP.NET website in Visual Studio and name it Chapter5. Save the jQuery library in a script folder js in the project. To enable jQuery on any web form, drag-and-drop to add the following to the page:

<script src="js/jquery-1.4.1.js" type="text/javascript"></script>

Now let's move on to the recipes where we will see different animation techniques using jQuery.

Enlarging text on hover

In this recipe, we will animate the font size of text content on hover.

Getting ready

Add a new web form Recipe1.aspx to the current project. Create a Css class for the text content that we want to animate. The font size specified in the css Class is the original font size of the text before animation is applied to it:

.enlarge {
    font-size:12.5px;
    font-family:Arial,sans-serif;
}

Add an ASP.NET Label control on the form and set its Css Class to the preceding style:

<asp:Label CssClass="enlarge" runat="server">Lorem ipsum dolor sit ...............</asp:Label>

Thus, the ASPX markup of the form is as follows:

<form id="form1" runat="server">
  <div align="center">
    Mouseover to enlarge text:<br />
    <fieldset id="content" style="width:500px;height:300px;">
      <asp:Label CssClass="enlarge" runat="server">Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</asp:Label>
    </fieldset>
  </div>
</form>

Thus, initially, the page will display the Label control as follows: We will now animate the font size of the Label on hover on the containing fieldset element.

How to do it…

In the document.ready() function of the jQuery script block, retrieve the original font size of the Label:

var origFontSize = parseFloat($(".enlarge").css('font-size'));

The parseFloat() function takes in an input string and returns the first floating point value in the string. It discards any content after the floating point value. For example, if the css property returns 12.5 px, then the function will discard the px.

Define the hover event of the containing fieldset element:

$("#content").hover(

In the mouseenter event handler of the hover method, update the cursor style to pointer:

function() {
    $(".enlarge").css("cursor", "pointer");

Calculate the maximum font size that we want to animate to. In this example, we will set the maximum size to thrice the original:

var newFontSize = origFontSize * 3;

Animate the fontSize css property of the Label in 300 ms:

$(".enlarge").animate({ fontSize: newFontSize }, 300);
},

In the mouseleave event handler of the hover method, animate the fontSize to the original value in 300 ms as shown:

function() {
    $(".enlarge").animate({ fontSize: origFontSize }, 300);
}
);

Thus, the complete jQuery solution is as follows:

<script language="javascript" type="text/javascript">
    $(document).ready(function() {
        var origFontSize = parseFloat($(".enlarge").css('font-size'));
        $("#content").hover(
            function() {
                $(".enlarge").css("cursor", "pointer");
                var newFontSize = origFontSize * 3;
                $(".enlarge").animate({ fontSize: newFontSize }, 300);
            },
            function() {
                $(".enlarge").animate({ fontSize: origFontSize }, 300);
            }
        );
    });
</script>

How it works…

Run the web form. Mouseover on the fieldset area.
The text size will animate over the stated duration and change to the maximum specified font size as displayed in the following screenshot: On removing the mouse from the fieldset area, the text size will return back to the original. Creating a fade effect on hover In this recipe, we will create a fade effect on an ASP.NET Image control on hover. We will use the fadeIn and fadeOut methods to achieve the same. Getting ready Add a new web form Recipe2.aspx to the current project. Add an image control to the form: <asp:Image src="images/Image1.jpg" ID="Image1" runat="server" /> Define the properties of the image in the css: #Image1 { width:438px; height:336px; } Thus, the complete ASPX markup of the web form is as follows: <form id="form1" runat="server"> <div align="center"> Mouseover on the image to view fade effect: <fieldset id="content" style="width:480px;height:370px;"> <br /> <asp:Image src="images/Image1.jpg" ID="Image1" runat="server" /> </fieldset> </div> </form> On page load, the image is displayed as follows: We will now create a fade effect on the image on hover on the containing fieldset area. How to do it… In the document.ready() function of the jQuery script block, define the hover event on the containing fieldset area: $("#content").hover( In the mouseenter event handler of the hover method, update the cursor to pointer: function() { $("#Image1").css("cursor", "pointer"); Apply the fadeOut method on the Image control with an animation duration of 1000 ms: $("#Image1").fadeOut(1000); }, In the mouseleave event handler of the hover method, apply the fadeIn method on the Image control with an animation duration of 1000 ms: function() { $("#Image1").fadeIn(1000); } ); Thus, the complete jQuery solution is as follows: <script language="javascript" type="text/javascript"> $(document).ready(function() { $("#content").hover( function() { $("#Image1").css("cursor", "pointer"); $("#Image1").fadeOut(1000); }, function() { $("#Image1").fadeIn(1000); } ); }); </script> How it works... Run the web page. Mouseover on the Image control on the web page. The image will slowly fade away as shown in the following screenshot: On mouseout from the containing fieldset area, the image reappears. Sliding elements on a page In this recipe, we will use the slideUp and slideDown methods for achieving sliding effects on an ASP.NET panel. Getting ready Add a new web form Recipe3.aspx in the current project. Add an ASP.NET panel to the page as follows: <asp:Panel class="slide" runat="server"> Sliding Panel </asp:Panel> The css class for the panel is defined as follows: .slide { font-size:12px; font-family:Arial,sans-serif; display:none; height:100px; background-color:#9999FF; } Add a button control to trigger the sliding effect on the panel: <asp:Button ID="btnSubmit" runat="server" Text="Trigger Slide" /> Thus, the complete ASPX markup of the web form is as follows: <form id="form1" runat="server"> <div align="center"> <fieldset style="width:400px;height:150px;"> <asp:Button ID="btnSubmit" runat="server" Text="Trigger Slide" /> <br /><br/> <asp:Panel class="slide" runat="server"> Sliding Panel </asp:Panel> </fieldset> </div> </form> On page load, the page appears as shown in the following screenshot: We will now use jQuery to slide up and slide down the panel. 
How to do it… In the document.ready() function of the jQuery script block, define the click event of the button control: $("#btnSubmit").click(function(e) { Prevent default form submission: e.preventDefault(); Check if the ASP.NET panel control is hidden: if ($(".slide").is(":hidden")) The jQuery selector :hidden selects matched elements that are hidden on the page. If yes, then slide down the panel until its height reaches the maximum (100 px) defined in the css property. $(".slide").slideDown("slow"); If the panel is initially visible then slide up so that its height slowly reduces until it becomes 0 and the panel disappears from the page: else $(".slide").slideUp("slow"); }); Thus, the complete jQuery solution is as follows: <script language="javascript" type="text/javascript"> $(document).ready(function() { $("#btnSubmit").click(function(e) { e.preventDefault(); if ($(".slide").is(":hidden")) $(".slide").slideDown("slow"); else $(".slide").slideUp("slow"); }); }); </script>  
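As a side note, since this recipe simply flips between the hidden and visible states, the same behavior could arguably be written more compactly with the slideToggle method described earlier; a sketch:

<script language="javascript" type="text/javascript">
    $(document).ready(function() {
        $("#btnSubmit").click(function(e) {
            e.preventDefault();
            // slideToggle picks slideUp or slideDown automatically
            // based on the panel's current visibility.
            $(".slide").slideToggle("slow");
        });
    });
</script>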


Downloading PyroCMS and its pre-requisites

Packt
31 Oct 2013
6 min read
(For more resources related to this topic, see here.) Getting started PyroCMS, like many other content management systems including WordPress, Typo3, or Drupal, comes with a pre-developed installation process. For PyroCMS, this installation process is easy to use and comes with a number of helpful hints just in case you hit a snag while installing the system. If, for example, your system files don't have the correct permissions profile (writeable versus write-protected), the PyroCMS installer will help you, along with all the other installation details, such as checking for required software and taking care of file permissions. Before you can install PyroCMS (the version used for examples in this article is 2.2) on a server, there are a number of server requirements that need to be met. If you aren't sure if these requirements have been met, the PyroCMS installer will check to make sure they are available before installation is complete. Following are the software requirements for a server before PyroCMS can be installed: HTTP Web Server MySQL 5.x or higher PHP 5.2.x or higher GD2 cURL Among these requirements, web developers interested in PyroCMS will be glad to know that it is built on CodeIgniter, a popular MVC patterned PHP framework. I recommend that the developers looking to use PyroCMS should also have working knowledge of CodeIgniter and the MVC programming pattern. Learn more about CodeIgniter and see their excellent system documentation online at http://ellislab.com/codeigniter. CodeIgniter If you haven't explored the Model-View-Controller (MVC) programming pattern, you'll want to brush up before you start developing for PyroCMS. The primary reason that CodeIgniter is a good framework for a CMS is that it is a well-documented framework that, when leveraged in the way PyroCMS has done, gives developers power over how long a project will take to build and the quality with which it is built. Add-on modules for PyroCMS, for example, follow the MVC method, a programming pattern that saves developers time and keeps their code dry and portable. Dry and portable programming are two different concepts. Dry is an acronym for "don't repeat yourself" code. Portable code is like "plug-and-play" code—write it once so that it can be shared with other projects and used quickly. HTTP web server Out of the PyroCMS software requirements, it is obvious, you can guess, that a good HTTP web server platform will be needed. Luckily, PyroCMS can run on a variety of web server platforms, including the following: Abyss Web Server Apache 2.x Nginx Uniform Server Zend Community Server If you are new to web hosting and haven't worked with web hosting software before, or this is your first time installing PyroCMS, I suggest that you use Apache as a HTTP web server. It will be the system for which you will find the most documentation and support online. If you'd prefer to avoid Apache, there is also good support for running PyroCMS on Nginx, another fairly-well documented web server platform. MySQL Version 5 is the latest major release of MySQL, and it has been in use for quite some time. It is the primary database choice for PyroCMS and is thoroughly supported. You don't need expert level experience with MySQL to run PyroCMS, but you'll need to be familiar with writing SQL queries and building relational databases if you plan to create add-ons for the system. You can learn more about MySQL at http://www.mysql.com. 
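If the MVC pattern mentioned above is new to you, the following bare-bones CodeIgniter controller sketch shows the flow; the Products controller and product_model names are hypothetical examples for illustration, not part of PyroCMS itself:

<?php
// A minimal CodeIgniter 2.x controller: the controller receives the
// request, asks a model for data, and hands the result to a view.
class Products extends CI_Controller {

    public function view($id)
    {
        // Model: fetch the requested record (product_model is hypothetical).
        $this->load->model('product_model');
        $data['product'] = $this->product_model->get($id);

        // View: render the page with the data the model returned.
        $this->load->view('products/detail', $data);
    }
}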
PHP

Version 5.2 of PHP is no longer the officially supported release of PHP, which is, at the time of this article, Version 5.4. Version 5.2, which has been criticized as being a low server requirement for any CMS, is allowed with PyroCMS because it is the minimum version requirement for CodeIgniter, the framework upon which PyroCMS is built. While future versions of PyroCMS may upgrade this minimum requirement to PHP 5.3 or higher, you can safely use PyroCMS with PHP 5.2. Also, many server operating systems, like SUSE and Ubuntu, install PHP 5.2 by default. You can, of course, upgrade PHP to the latest version without causing harm to your instance of PyroCMS. To help future-proof your installation of PyroCMS, it may be wise to install PHP 5.3 or above, to maximize your readiness for when PyroCMS more strictly adopts features found in PHP 5.3 and 5.4, such as namespacing.

GD2

GD2, a library used in the manipulation and creation of images, is used by PyroCMS to dynamically generate images (where needed) and to crop and resize images used in many PyroCMS modules and add-ons. The image-based support offered by this library is invaluable.

cURL

As described on the cURL project website, cURL is "a command line tool for transferring data with URL syntax" using a large number of methods, including HTTP(S) GET, POST, PUT, and so on. You can learn more about the project and how to use cURL on their website http://curl.haxx.se. If you've never used cURL with PHP, I recommend taking time to learn how to use it, especially if you are thinking about building a web-based API using PyroCMS. Most popular web hosting companies meet the basic server requirements for PyroCMS.

Downloading PyroCMS

Getting your hands on a copy of PyroCMS is very simple. You can download the system files from one of two locations, the PyroCMS project website and GitHub. To download PyroCMS from the project website, visit http://www.pyrocms.com and click on the green button labeled Get PyroCMS! This will take you to a download page that gives you the choice between downloading the Community version of PyroCMS and buying the Professional version. If you are new to PyroCMS, you can start with the Community version, currently at Version 2.2.3. The following screenshot shows the download screen:

To download PyroCMS from GitHub, visit https://github.com/pyrocms/pyrocms and click on the button labeled Download ZIP to get the latest Community version of PyroCMS, as shown in the following screenshot:

If you know how to use Git, you can also clone a fresh version of PyroCMS using the following command. A word of warning: cloning PyroCMS from GitHub will usually give you the latest, stable release of the system, but it could include changes not described in this article. Make sure you check out a stable release from PyroCMS's repository.

git clone https://github.com/pyrocms/pyrocms.git

As a side-note, if you've never used Git, I recommend taking some time to get started using it. PyroCMS is an open source project hosted in a Git repository on Github, which means that the system is open to being improved by any developer looking to contribute to the well-being of the project. It is also very common for PyroCMS developers to host their own add-on projects on Github and other online Git repository services.

Summary

In this article, we have covered the pre-requisites for using PyroCMS, and also how to download PyroCMS.
Resources for Article:
Further resources on this subject:
Kentico CMS 5 Website Development: Managing Site Structure [Article]
Kentico CMS 5 Website Development: Workflow Management [Article]
Web CMS [Article]


The DHTMLX Grid

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.) The DHTMLX grid component is one of the more widely used components of the library. It has a vast number of settings and abilities, so robust that we could probably write an entire book on them. But since we have an application to build, we will touch on some of the main methods and get into utilizing it. Some of the cool features that the grid supports are filtering, spanning rows and columns, multiple headers, dynamic scroll loading, paging, inline editing, cookie state, dragging/ordering columns, images, multi-selection, and events. By the end of this article, we will have a functional grid where we will control the editing, viewing, adding, and removing of users.

The grid methods and events

When creating a DHTMLX grid, we first create the object; second we add all the settings and then call a method to initialize it. After the grid is initialized data can then be added. The order of steps to create a grid is as follows:

Create the grid object
Apply settings
Initialize
Add data

Now we will go over initializing a grid.

Initialization choices

We can initialize a DHTMLX grid in two ways, similar to the other DHTMLX objects. The first way is to attach it to a DOM element and the second way is to attach it to an existing DHTMLX layout cell or layout. A grid can be constructed by either passing in a JavaScript object with all the settings or built through individual methods.

Initialization on a DOM element

Let's attach the grid to a DOM element. First we must clear the page and add a div element using JavaScript. Type and run the following code line in the developer tools console:

document.body.innerHTML = "<div id='myGridCont'></div>";

We just cleared all of the body tag's content and replaced it with a div tag having the id attribute value of myGridCont. Now, create a grid object on the div tag, add some settings, and initialize it. Type and run the following code in the developer tools console:

var myGrid = new dhtmlXGridObject("myGridCont");
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1", "Column2", "Column3"]);
myGrid.init();

You should see the page showing just the grid header with three columns. Next, we will create a grid on an existing cell object.

Initialization on a cell object

Refresh the page and add a grid to the appLayout cell. Type and run the following code in the developer tools console:

var myGrid = appLayout.cells("a").attachGrid();
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1","Column2","Column3"]);
myGrid.init();

You will now see the grid columns just below the toolbar.

Grid methods

Now let's go over some available grid methods. Then we can add rows and call events on this grid. For these exercises we will be using the global appLayout variable. Refresh the page.

attachGrid

We will begin by creating a grid on a cell. The attachGrid method creates and attaches a grid object to a cell. This is the first step in creating a grid. Type and run the following code line in the console:

var myGrid = appLayout.cells("a").attachGrid();

setImagePath

The setImagePath method allows the grid to know where we have the images placed for referencing in the design. We have the application image path set in the config object. Type and run the following code line in the console:

myGrid.setImagePath(config.imagePath);

setHeader

The setHeader method sets the column headers and determines how many headers we will have. The argument is a JavaScript array.
Type and run the following code line in the console:

myGrid.setHeader(["Column1", "Column2", "Column3"]);

setInitWidths

The setInitWidths method will set the initial widths of each of the columns. The asterisk mark (*) is used to set the width automatically. Type and run the following code line in the console:

myGrid.setInitWidths("125,95,*");

setColAlign

The setColAlign method allows us to align the column's content position. Type and run the following code line in the console:

myGrid.setColAlign("right,center,left");

init

Up until this point, we haven't seen much going on. It was all happening behind the scenes. To see these changes the grid must be initialized. Type and run the following code line in the console:

myGrid.init();

Now you see the columns that we provided.

addRow

Now that we have a grid created let's add a couple of rows and start interacting. The addRow method adds a row to the grid. The parameters are ID and columns. Type and run the following code in the console:

myGrid.addRow(1,["test1","test2","test3"]);
myGrid.addRow(2,["test1","test2","test3"]);

We just created two rows inside the grid.

setColTypes

The setColTypes method sets what types of data a column will contain. The available type options are:

ro (readonly)
ed (editor)
txt (textarea)
ch (checkbox)
ra (radio button)
co (combobox)

Currently, the grid allows for inline editing if you were to double-click on a grid cell. We do not want this for the application. So, we will set the column types to read-only. Type and run the following code in the console:

myGrid.setColTypes("ro,ro,ro");

Now the cells are no longer editable inside the grid.

getSelectedRowId

The getSelectedRowId method returns the ID of the selected row. If there is nothing selected it will return null. Type and run the following code line in the console:

myGrid.getSelectedRowId();

clearSelection

The clearSelection method clears all selections in the grid. Type and run the following code line in the console:

myGrid.clearSelection();

Now any previous selections are cleared.

clearAll

The clearAll method removes all the grid rows. Prior to adding more data to the grid we first must clear it. If not, we will have duplicated data. Type and run the following code line in the console:

myGrid.clearAll();

Now the grid is empty.

parse

The parse method allows the loading of data to a grid in the format of an XML string, CSV string, XML island, XML object, JSON object, or JavaScript array. We will use the parse method with a JSON object while creating a grid for the application. Here is what the parse method syntax looks like (do not run this in console):

myGrid.parse(data, "json");

Grid events

The DHTMLX grid component has a vast number of events. You can view them in their entirety in the documentation. We will cover the onRowDblClicked and onRowSelect events.

onRowDblClicked

The onRowDblClicked event is triggered when a grid row is double-clicked. The handler receives the argument of the row ID that was double-clicked. Type and run the following code in console:

myGrid.attachEvent("onRowDblClicked", function(rowId){
    console.log(rowId);
});

Double-click one of the rows and the console will log the ID of that row.

onRowSelect

The onRowSelect event will trigger upon selection of a row. Type and run the following code in console:

myGrid.attachEvent("onRowSelect", function(rowId){
    console.log(rowId);
});

Now, when you select a row the console will log the id of that row. This can be perceived as a single click.
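Putting these pieces together, here is a sketch of how the application's user grid might be created and fed JSON data with parse; the column set and the sample rows are placeholders for illustration, not the application's final data:

// Create, configure, and initialize the grid on the layout cell,
// then load rows from a JSON object in dhtmlxGrid's expected format.
var userGrid = appLayout.cells("a").attachGrid();
userGrid.setImagePath(config.imagePath);
userGrid.setHeader(["User Name", "First Name", "Last Name"]);
userGrid.setInitWidths("125,95,*");
userGrid.setColTypes("ro,ro,ro");
userGrid.init();

var data = {
    rows: [
        { id: 1, data: ["jsmith", "John", "Smith"] },
        { id: 2, data: ["mdoe", "Mary", "Doe"] }
    ]
};
userGrid.parse(data, "json");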
Summary In this article, we learned about the DHTMLX grid component. We also added the user grid to the application and tested it with the storage and callbacks methods. Resources for Article: Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article] HTML5 Canvas [Article]


Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel.

WordPress is the world's most popular Content Management System (CMS) and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free!

(For more resources related to this topic, see here.)

Planning your video streaming site

The first step is to plan how you want to use your video site. Ask yourself a few questions:

Will I be streaming all my video from YouTube?
Will I be uploading any video manually?
Will I be streaming from multiple sources?
What kind of design do I want?
Will I include any other types of content on my site?
How will I record and upload my videos?
Who is my target audience and how will I reach them?
Do I want to make money from my videos?
How often will I create videos and what will my recording and editing process be?
What software and hardware will I need for recording and editing videos?

It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are.

Streaming from YouTube or uploading videos direct?

WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons:

It saves on storage space in your site.
It ensures your videos will play on any device people choose to view your site from.
It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change.
It can have massive SEO benefits socially if you use YouTube.

YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site. In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin.

Understanding copyright considerations

Before you start uploading video to YouTube, you need to understand what you're allowed to add, and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it.
If you've recorded a video from the TV or the Web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it. It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you are running a risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you can create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process. If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music have stricter rules than games generally do however. If you upload videos containing someone else's video or music content that's copyrighted and you haven't got permission to reproduce, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too. Creating a YouTube channel and uploading videos So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos. Creating your YouTube channel You'll need a YouTube channel in order to do this. Let's create a YouTube channel by following these steps: If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet. Go to https://www.youtube.com/channel_switcher. Click on the Create a new channel button. Follow the instructions onscreen to create your channel. Customize your channel, uploading images to your profile photo or channel art and adding a description using the About tab. Here's my channel: It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away. Uploading videos The next step is to upload some videos. YouTube accepts videos in the following formats: .MOV .MPEG4 .AVI .WMV .MPEGPS .FLV 3GPP WebM Depending on the video software you've used to record, your video may already be in one of these formats or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it. You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407. If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. 
Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video as this will help you categorize the videos in your site later. Now when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded: When you click on the Playlists tab, you'll see your new playlist: So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos.

Installing and configuring the YouTube plugin

Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer you can use different channels for each category or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts.

Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just add the URL. But if you want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy.

Installing the Automatic YouTube Video Posts plugin

The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation but you can also do this on your existing site if that's what you're working with. Follow these steps: In the WordPress admin, go to Plugins | Add New. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed. Select the Automatic YouTube Video Posts plugin and then install and activate it. For the plugin to work, you'll need to configure its settings and add one or more channels or playlists.

Configuring the plugin settings

Let's start with the plugin settings screen. You do this via the Youtube Posts menu, which the plugin has added to your admin menu: Go to Youtube Posts | Settings. Edit the settings as follows:

Automatically publish posts: Set this to Yes
Display YouTube video meta: Set this to Yes
Number of words and Video dimensions: Leave these at the default values
Display related videos: Set this to No
Display videos in post lists: Set this to Yes
Import the latest videos every: Set this to 1 hour (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited)

Click on the Save changes button. The settings screen will look similar to the following screenshot:

Adding a YouTube channel or playlist

The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier on. But first, I'll create a category for all my videos from that playlist.

Creating a category for a playlist

Create a category for your playlist in the normal way: In the WordPress admin, go to Posts | Categories.
Add the category name and slug or description if you want to (if you don't, WordPress will automatically create a slug). Click on the Add New Category button. Adding your channel or playlist to the plugin Now you need to configure the plugin so that it creates posts in the category you've just created. In the WordPress admin, go to Youtube Posts | Channels/Playlists. Click on the Add New button. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:     Name: Dizzy     Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is   https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.      Type: Select Channel or Playlist; I'm selecting Playlist.      Add videos from this channel/playlist to the following categories: Select the category you just created.      Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author. Finally, click on the Add Channel button. Adding a YouTube playlist Once you click on the Add Channel button, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added: The newly added playlist If you like, you can add more channels or playlists and more categories. Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist: Automatically added posts Installing and configuring a suitable theme You'll need a suitable theme in your site to make your videos stand out. I'm going to use the Keratin theme which is grid-based with a right-hand sidebar. A grid-based theme works well as people can see your videos on your home page and category pages. Installing the theme Let's install the theme: Go to Appearance | Themes. Click on the Add New button. In the search box, type Keratin. The theme will be listed. Click on the Install button. When prompted, click on the Activate button. The theme will now be displayed in your admin screen as active: The installed and activated theme Creating a navigation menu Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot, or if you haven't created a menu yet, you'll need to create one. Follow these steps: Go to Appearance | Menus. If you don't already have a menu, click on the Create Menu button and name your new menu. Add your home page to the menu along with any category pages you've created by clicking on the Categories metabox on the left-hand side. Once everything is in the right place in your menu, click on the Save Menu button. Your Menus screen will look something similar to this: Now that you have a menu, let's take a look at the site: The live site That's looking good, but I'd like to add some text in the sidebar instead of the default content. Adding a text widget to the sidebar Let's add a text widget with some information about the site: In the WordPress admin, go to Appearance | Widgets. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar. Give the widget a title. Type the following text into the widget's contents: Welcome to this video site. 
To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>.

Replace the link I've added here with a link to your own channel:

The Widgets screen with a text widget added

Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively, if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post.

Now go back to your live site to see how things are looking:

The live site with a text widget added

It's looking much better! If you click on one of these videos, you're taken to the post for that video:

A single post with a video automatically added

Your site is now ready.

Managing and updating your videos

The great thing about using this plugin is that once you've set it up, you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site.

If you want to add extra content to the posts holding your videos, you can do so. Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed as well as the videos.

If you want to create new playlists in the future, you just do this in YouTube, then create a new category on your site and add the playlist in the plugin's settings, assigning the new playlist to the relevant category.

You can upload your videos to YouTube in a variety of ways: via the YouTube website or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same.

Good luck with your video site. I hope it gets you lots of views!

Summary

In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel, added videos and playlists to it, and then set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed a suitable theme and configured it, creating categories for your channels and adding these to your navigation menu.

Resources for Article:

Further resources on this subject:

Adding Geographic Capabilities via the GeoPlaces Theme [article]
Adding Flash to your WordPress Theme [article]
URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
(For more resources on Ruby, see here.)

We start off with an easy application: a simple yet very useful Internet application, the URL shortener. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important; it should be just useful enough for its target users. The archetypical and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature. It provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL.

For this simple feature, the top three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views, and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites.

The idea to shorten long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. In 2008, an estimated 100 similar services came into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for URL is a Web address. Every URL is made up of the following:

<resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser: if the resource type is missing, it is normally assumed to be http; if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string, and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was, well, rather short. Many chose TinyURL over MASL because TinyURL had a shorter and easier-to-remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd), and so on.

The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters in an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140-character limit.

Originally, Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users in May 2009, dropping from 6.1 million to 5.3 million unique users, while bit.ly jumped from 1.8 million to 2.9 million almost overnight.

That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of this writing it is still unclear how the market share will pan out, but it's clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with its own two URL shorteners: goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me), and Wordpress (wp.me) all have their own URL shorteners as well.

Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

Create short and easy-to-remember URLs
Allow passing of links in character-limited services such as Twitter
Create vanity URLs for marketing purposes
Can verbally pass URLs

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted in documents, but sometimes certain applications will truncate parts of the URL while processing. This makes a long URL difficult to click on and even produces erroneous links. In fact, this was the main motivation in creating most of the earlier URL shorteners: older email clients tend to truncate URLs when they are more than 80 characters.

Short links are of course crucial in character-limited message passing systems like Twitter, Plurk, and SMS. Passing long URLs is impossible without URL shorteners.

Short URLs are very useful in cases of vanity URLs where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passing from one person to another, or even when used in a mass marketing way. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL. Instead you would want a nice, descriptive, and short URL. Short URLs are also useful in cases of accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison.

Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness.

On the other hand, URL shorteners have their fair share of criticism as well. Here is a summary of the bad side of URL shorteners:

Provide opportunities to spammers because they hide original URLs
Could be unreliable if depended on for redirection
Possible undesirable or vulgar short URLs

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this provides an opportunity for spammers or other abusers to redirect users to their sites. One relatively mild form of such attack is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to a Rick Astley music video of Never Gonna Give You Up. For example, you might feel that the URL http://tinyurl.com/singapore-flyer goes to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead.

Also, because most short URLs are not customized, it is quite difficult to see if the link is genuine or not just from the URL. Many prominent websites and applications have such concerns, including MySpace, Flickr, and even Microsoft Live Messenger, and have at one time or another banned or restricted the usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services have come up with the idea of link previews, which allows users to preview a short URL before it redirects the user to the long URL. For example, TinyURL will show the user the long URL on a preview page and require the user to explicitly go to the long URL.

Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, but the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem of course lies in over-dependency on the shortening service.

Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar, or embarrassing short URLs can be created. Earlier on, TinyURL's short URLs were predictable and this was exploited; for example, embarrassing short URLs were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney.

We have just covered significant ground on URL shorteners. If you are a programmer you might be wondering, "Why do I need to know such information? I am really interested in the programming bits; the others are just fluff to me." Background information on the application we want to clone is very important. It tells us why that application exists in the first place and gives us an idea of its main features (what makes it popular). It also tells us what problems it faces, so that we are aware of them while programming it, or can even avoid them altogether. This is important when we come to the design of the application. Finally, it gives us a better appreciation of the application and of the motivations and issues faced by the product and technical people behind the application we wish to clone.
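As a quick aside before we look at the features, the URL anatomy described earlier is easy to explore from Ruby itself. Here is a minimal sketch using Ruby's built-in URI library; the example URL is made up purely for illustration:

require 'uri'

url = URI.parse("http://user:secret@example.com:8080/maps/view.html?q=flyer#top")
url.scheme    # => "http" (the resource type)
url.userinfo  # => "user:secret" (username and password)
url.host      # => "example.com" (the domain)
url.port      # => 8080
url.path      # => "/maps/view.html" (the file path name)
url.query     # => "q=flyer" (the query string)
url.fragment  # => "top" (the anchor)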
Main features

Next, let's list down the features of a URL shortener. The intention in this section is to distill the basic features of the application, features that define the service. Features listed here will be features that make the application what it is. However, as much as possible we want to also explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly features of the most popular and definitive web application in the category. In this article, this will be TinyURL. These are the main features of a URL shortener:

Users can create a short URL that represents a long URL
Users who visit the short URL will be redirected to the long URL
Users can preview a short URL to enable them to see what the long URL is
Users can provide a custom URL to represent the long URL
Undesirable words are not allowed in the short URL
Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed. What's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL.

One of the ways we can associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long and hashing functions could be slow. The faster and easier way is to use a relational database's auto-incremented row ID as the unique key. The database will help ensure the uniqueness of the ID. However, the running row ID number is base 10. Representing the number 1,000,000 already requires seven digits, and 1,000,000,000 takes up ten digits. In order to keep the number of characters smaller, we will need a larger base numbering system. In this clone we will use base 36, which is the 26 characters of the alphabet (case insensitive) plus the 10 digits. Using this system, we need only four characters to represent 1 million URLs:

1,000,000 base 36 = lfls

And 1 billion URLs can be represented in just six characters:

1,000,000,000 base 36 = gjdgxs
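Ruby makes this base conversion trivial, since Integer#to_s and String#to_i both accept a base argument. Here is a quick sketch verifying the numbers above (Tinyclone's actual encoding code comes later in the full walkthrough):

1_000_000.to_s(36)      # => "lfls" (four characters)
1_000_000_000.to_s(36)  # => "gjdgxs" (six characters)
"gjdgxs".to_i(36)       # => 1000000000 (and back again)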
Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, describes how interactions are created and how they function, and introduces the quiz feature. In this article, we will cover the following specific topics:

The types of interactions available in Camtasia Studio
Video player requirements
Creating simple action hotspots
Using the quiz feature

(For more resources related to this topic, see here.)

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking to see if they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.

Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines the features that let learners take action while viewing an e-learning video when you ask them to interact. There are three types of interactions available in Camtasia Studio:

Simple action hotspots
Branching hotspots
Quizzes

You are probably thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

Multiple choice
Fill in the blanks
Short answers
True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know about some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio without including some additional program elements cannot react when you click on it, except for what the video player tells it to do.
For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space. Click interactions in videos created with Camtasia are able to recognize where clicks occur and the actions to take. You provide the click instructions when you set up the interaction. These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination. These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video. Creating simple hotspots The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following: Taking learners to a specific marker or frame within the video, as determined on the timeline Allowing learners to replay a section of the video Directing learners to a website or document to view reference material Showing a pop up with additional information, such as a phone number or web link Try it – creating a hotspot If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused. Inserting the Replay/Continue buttons The first step is to insert a Replay button to allow viewers to review what they just saw or continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps: Open your exercise project in Camtasia Studio or one of your own projects where you can practice. Position the play head right after the part where text is shown being pasted into the CuePrompter window. From the Properties area, select Callouts from the task tabs above the timeline. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline. Set the Fade in and Fade out durations to about half a second. Select the Effects dropdown and choose Style. Choose the 3D Edge style. It looks like a raised button. Set any other formatting so the button looks the way you want in the preview window. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste. Select the button in the preview window and make a copy of the button. You can use Ctrl + C to copy and Ctrl + V to paste the button. In the second copy of the button, select the text and retype it as Continue. It should be stacked on the timeline as shown in the following screenshot: Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project. Save the project. Adding a hotspot to the Continue button The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button. 
To add a hotspot to the Continue button, perform the following steps: With the Continue button selected, select the Make hotspot checkbox in the Callouts panel. Click on the Hotspot Properties... button to set properties for the callout button. Under Actions, make sure to select Click to continue. Click on OK. The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article. Adding a hotspot to the Replay button Now, let's move on to create an action for the Replay copy & paste button: Select the Replay copy & paste button in the preview window. Select the Make hotspot checkbox in the Callouts panel. Click on the Hotspot properties... button. Under Actions, select Go to frame at time. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script. Click on OK. Save the project. The Replay copy & paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay copy & paste, the video will be repositioned at the time you entered and begin playing from there. Using the quiz feature A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material presented puts the video into the true e-learning category. By definition, a knowledge check is a way for the student to check their understanding without worrying about scoring. Typically, feedback is given to the student for them to better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, informing if the answer is correct or not and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check. A quiz can be in the same form as a knowledge check but a record of the student's answer is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy. In Camtasia Studio, you can insert a quiz question or set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab. Try it – inserting a quiz In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter. Creating a quiz Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial, showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. I believe a good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll reverse. Let's give it a try with multiple choice and true/false questions: Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it. Click on the Add quiz button to begin adding questions. 
A marker appears on the timeline where your quiz will appear during the video, as illustrated in the following screenshot: In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown. In the Question box, type the question text. In the sample project, the first question is With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F. In the next checkbox text that says <Type an answer choice here>, double-click on it and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer. Add two more choices: Alt-Insert and Tab. Your Quiz panel should look like the following screenshot: Click on Add question. From the Question type dropdown, select True/False. In the Question box, type You can stop CuePrompter with the End key. In Answers, select False. For the final question, click on Add question again. From the Question type dropdown, select Multiple Choice. In the Question box, type Which keyboard command tells CuePrompter to reverse?. Enter the four possible answers: Left arrow Right arrow Down arrow Up arrow Select Down arrow as the correct answer. Save the project. Now you have entered three questions and answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check format and function. Previewing the quiz Camtasia Studio allows you to preview quizzes for correct formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps: Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes. Click on the Preview button. A web page opens in your Internet browser showing the questions, as shown in the following screenshot: Select an answer and click on Next. The second quiz question is displayed. Select an answer and click on Next. The third quiz question is displayed. Select an answer and click on Submit Answers. As this is the final question, there is no Next. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a prompt, as shown in the following screenshot: Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones are shown with a red X mark. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting. Exit the browser to discontinue previewing the quiz. Save the project. This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project. Summary In this article, we learned different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz. Resources for Article: Further resources on this subject: Introduction to Moodle [article] Installing Drupal [article] Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]
Writing XML data to the File System with SSIS

Packt
29 Dec 2009
5 min read
Integrating data into applications or reports is one of the most important, expensive, and exacting activities in building enterprise data warehousing applications. SQL Server Integration Services, which first appeared in MS SQL Server 2005 and continued into MS SQL Server 2008, provides a one-stop solution to the ETL process. The ETL process consists of extracting data from a data source, transforming the data so that it fits cleanly into the destination, followed by loading the transformed data to the destination. Enterprise data can be of very different kinds, ranging from flat files to data stored in relational databases. Recently, storing data in XML data sources has become common, as exchanging data in XML format has many advantages.

Creating a stored procedure that retrieves XML

In the present example it is assumed that you have a copy of the Northwind database. You could use any other database. We will be creating a stored procedure that selects a number of columns from a table in the database using the FOR XML clause. The SELECT query will return an XML fragment from the database. The next listing shows the stored procedure.

CREATE PROCEDURE [dbo].[tst]
AS
SELECT FirstName, LastName, City FROM Employees
FOR XML RAW

The result of executing this stored procedure (EXEC tst) in SQL Server Management Studio is shown in the next listing.

<row FirstName="Nancy" LastName="Davolio" City="Seattle"/>
<row FirstName="Andrew" LastName="Fuller" City="Tacoma"/>
<row FirstName="Janet" LastName="Leverling" City="Kirkland"/>
<row FirstName="Margaret" LastName="Peacock" City="Redmond"/>
<row FirstName="Steven" LastName="Buchanan" City="London"/>
<row FirstName="Michael" LastName="Suyama" City="London"/>
<row FirstName="Robert" LastName="King" City="London"/>
<row FirstName="Laura" LastName="Callahan" City="Seattle"/>
<row FirstName="Anne" LastName="Dodsworth" City="London"/>

Creating a package in BIDS or Visual Studio 2008

You require SQL Server 2008 installed to create a package. In either of these programs, File | New | Projects... brings up the New Project window, where you can choose to create a business intelligence project with an Integration Services Project template. You create a project by providing a name for it. Herein it was named XMLquery. After providing a name and closing the New Project window, the XMLquery project will be created with a default package with the file name Package.dtsx. The package can be renamed by right-clicking the file and clicking OK in the window that pops up regarding the change you are making. Herein the package was renamed XmlToFile.dtsx. The following figure shows the project created by the program.

When the project is created, the package designer surface will be open with a tabbed page where you can configure control flow tasks, data flow tasks, and event handlers. You can also look at the package explorer to review the contents of the package. The reader may benefit by reviewing my book, Beginners Guide to SQL Server Integration Services, on this site.

Adding and configuring an Execute SQL Task

Using an Execute SQL Task component, the stored procedure on SQL Server 2008 will be executed. The result of this will be stored in a package variable, which will then be retrieved using a Script Task. In this section you will be configuring the Execute SQL Task. Drag and drop an Execute SQL Task from under Control Flow Items in the Toolbox onto the Control Flow tabbed page of the package designer.
Double-click the Execute SQL Task component in the package designer to display the Execute SQL Task Editor, as shown. It is a good practice to provide a description for the task. Herein it is "Retrieving XML from the SQL Server", as shown. The result set can be any of those shown in the next figure. Since the information retrieved by running the stored procedure is XML, XML is the correct choice.

The stored procedure is on SQL Server 2008 and therefore a connection needs to be established. Leave the connection type as OLE DB and click on an empty area along the line item, Connection. This brings up the Configure OLE DB Connection Manager window, where you can select an existing connection or create a new one. Hit the New... button to bring up the Connection Manager window, as shown. The window comes up with just the right provider [Native OLE DB\SQL Server Native Client 10.0]. You can choose the server by browsing with the drop-down handler, as shown. In the present case, Windows Authentication is used with the current user as the database administrator. If this information is correct, you can browse the database objects to choose the correct database that hosts the stored procedure, as shown. You may also test the connection with the Test Connection button.

You must close the Connection Manager window, which will bring you back to the Configure OLE DB Connection Manager window; it now displays the connection you just made. To proceed further you need to close this window as well. This will bring the connection information into the Execute SQL Task editor window. The type of input is chosen to be a direct input (the others are file and variable). The query to be executed is the stored procedure, tst, described earlier in the tutorial. BypassPrepare is set to false. The General page of the Execute SQL Task editor is as shown here.
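As an aside, RAW is only one of the modes the FOR XML clause supports. If you would rather have element-centric XML than the attribute-centric rows shown earlier, the query inside the stored procedure could use the ELEMENTS option instead. This is a hedged sketch against the same Northwind Employees table, not something the tutorial itself requires:

-- Attribute-centric rows, as used in the tst stored procedure:
SELECT FirstName, LastName, City FROM Employees FOR XML RAW

-- Element-centric output: each column becomes a child element
-- of an <Employees> element, one per row:
SELECT FirstName, LastName, City FROM Employees FOR XML AUTO, ELEMENTS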
Working with forms using Ext JS 4

Packt
31 Aug 2012
25 min read
Ext JS 4 is Sencha’s latest JavaScript framework for developing cross-platform web applications. Built upon web standards, Ext JS provides a comprehensive library of user interface widgets and data manipulation classes to turbo-charge your application’s development. In this article, written by Stuart Ashworth and Andrew Duncan, the authors of Ext JS 4 Web Application Development Cookbook, we will cover: Constructing a complex form layout Populating your form with data Submitting your form's data Validating form fields with VTypes Creating custom VTypes Uploading files to the server Handling exceptions and callbacks This article introduces forms in Ext JS 4. We begin by creating a support ticket form in the first recipe. To get the most out of this article you should be aware that this form is used by a number of recipes throughout the article. Instead of focusing on how to configure specific fields, we demonstrate more generic tasks for working with forms. Specifically, these are populating forms, submitting forms, performing client-side validation, and handling callbacks/exceptions. Constructing a complex form layout In the previous releases of Ext JS, complicated form layouts were quite difficult to achieve. This was due to the nature of the FormLayout, which was required to display labels and error messages correctly, and how it had to be combined with other nested layouts. Ext JS 4 takes a different approach and utilizes the Ext.form.Labelable mixin, which allows form fields to be decorated with labels and error messages without requiring a specific layout to be applied to the container. This means we can combine all of the layout types the framework has to offer without having to overnest components in order to satisfy the form field's layout requirements. We will describe how to create a complex form using multiple nested layouts and demonstrate how easy it is to get a form to look exactly as we want. Our example will take the structure of a Support Ticket Request form and, once we are finished, it will look like the following screenshot: (Move the mouse over the image to enlarge.) How to do it... We start this recipe by creating a simple form panel that will contain all of the layout containers and their fields: var formPanel = Ext.create('Ext.form.Panel', { title: 'Support Ticket Request', width: 650, height: 500, renderTo: Ext.getBody(), style: 'margin: 50px', items: [] }); Now, we will create our first set of fields— the FirstName and LastName fields. These will be wrapped in an Ext.container.Container component, which is given an hbox layout so our fields appear next to each other on one line: var formPanel = Ext.create('Ext.form.Panel', { title: 'Support Ticket Request', width: 650, height: 500, renderTo: Ext.getBody(), style: 'margin: 50px', items: [{ xtype: 'container', layout: 'hbox', items: [{ xtype: 'textfield', fieldLabel: 'First Name', name: 'FirstName', labelAlign: 'top', cls: 'field-margin', flex: 1 }, { xtype: 'textfield', fieldLabel: 'Last Name', name: 'LastName', labelAlign: 'top', cls: 'field-margin', flex: 1 }] }] }); We have added a CSS class (field-margin) to each field, to provide some spacing between them. We can now add this style inside <style> tags in the head of our document: <style type="text/css"> .field-margin { margin: 10px; }</style> Next, we create a container with a column layout to position our e-mail address and telephone number fields. 
We nest our telephone number fields in an Ext.form.FieldContainer class , which we will discuss later in the recipe: items: [ ... { xtype: 'container', layout: 'column', items: [{ xtype: 'textfield', fieldLabel: 'Email Address', name: 'EmailAddress', labelAlign: 'top', cls: 'field-margin', columnWidth: 0.6 }, { xtype: 'fieldcontainer', layout: 'hbox', fieldLabel: 'Tel. Number', labelAlign: 'top', cls: 'field-margin', columnWidth: 0.4, items: [{ xtype: 'textfield', name: 'TelNumberCode', style: 'margin-right: 5px;', flex: 2 }, { xtype: 'textfield', name: 'TelNumber', flex: 4 }] }] } ... ] The text area and checkbox group are created and laid out in a similar way to the previous sets, by using an hbox layout: items: [ ... { xtype: 'container', layout: 'hbox', items: [{ xtype: 'textarea', fieldLabel: 'Request Details', name: 'RequestDetails', labelAlign: 'top', cls: 'field-margin', height: 250, flex: 2 }, { xtype: 'checkboxgroup', name: 'RequestType', fieldLabel: 'Request Type', labelAlign: 'top', columns: 1, cls: 'field-margin', vertical: true, items: [{ boxLabel: 'Type 1', name: 'type1', inputValue: '1' }, { boxLabel: 'Type 2', name: 'type2', inputValue: '2' }, { boxLabel: 'Type 3', name: 'type3', inputValue: '3' }, { boxLabel: 'Type 4', name: 'type4', inputValue: '4' }, { boxLabel: 'Type 5', name: 'type5', inputValue: '5' }, { boxLabel: 'Type 6', name: 'type6', inputValue: '6' }], flex: 1 }] } ... ] Finally, we add the last field, which is a file upload field, to allow users to provide attachments: items: [ ... { xtype: 'filefield', cls: 'field-margin', fieldLabel: 'Attachment', width: 300 } ... ] How it works... All Ext JS form fields inherit from the base Ext.Component class and so can be included in all of the framework's layouts. For this reason, we can include form fields as children of containers with layouts (such as hbox and column layouts) and their position and size will be calculated accordingly. Upgrade Tip: Ext JS 4 does not have a form layout meaning a level of nesting can be removed and the form fields' labels will still be displayed correctly by just specifying the fieldLabel config. The Ext.form.FieldContainer class used in step 4 is a special component that allows us to combine multiple fields into a single container, which also implements the Ext.form. Labelable mixin . This allows the container itself to display its own label that applies to all of its child fields while also giving us the opportunity to configure a layout for its child components. Populating your form with data After creating our beautifully crafted and user-friendly form we will inevitably need to populate it with some data so users can edit it. Ext JS makes this easy, and this recipe will demonstrate four simple ways of achieving it. We will start by explaining how to populate the form on a field-by-field basis, then move on to ways of populating the entire form at once. We will also cover populating it from a simple object, a Model instance, and a remote server call. Getting ready We will be using the form created in this article's first recipe as our base for this section, and many of the subsequent recipes in this article, so please look back if you are not familiar with it. All the code we will write in this recipe should be placed under the definition of this form panel. You will also require a working web server for the There's More example, which loads data from an external file. How to do it... 
We'll demonstrate how to populate an entire form's fields in bulk and also how to populate them individually. Populating individual fields We will start by grabbing a reference to the first name field using the items property's get method. The items property contains an instance of Ext.util. MixedCollection, which holds a reference to each of the container's child components. We use its get method to retrieve the component at the specified index: var firstNameField = formPanel.items.get(0).items.get(0); Next, we use the setValue method of the field to populate it: firstNameField.setValue('Joe'); Populating the entire form To populate the entire form, we must create a data object containing a value for each field. The property names of this object will be mapped to the corresponding form field by the field's name property. For example, the FirstName property of our requestData object will be mapped to a form field with a name property value of FirstName: var requestData = { FirstName: 'Joe', LastName: 'Bloggs', EmailAddress: 'info@swarmonline.com', TelNumberCode: '0777', TelNumber: '7777777', RequestDetails: 'This is some Request Detail body text', RequestType: { type1: true, type2: false, type3: false, type4: true, type5: true, type6: false } }; We then call the setValues method of the form panel's Ext.form.Basic instance, accessed through the getForm method, passing it our requestData variable: formPanel.getForm().setValues(requestData); How it works... Each field contains a method called setValue , which updates the field's value with the value that is passed in. We can see this in action in the first part of the How to do it section. A form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method ), which provides all of the validation, submission, loading, and general field management that is required by a form. This class contains a setValues method , which can be used to populate all of the fields that are managed by the basic form class. This method works by simply iterating through all of the fields it contains and calling their respective setValue methods. This method accepts either a simple data object, as in our example, whose properties are mapped to fields based on the field's name property. Alternatively, an array of objects can be supplied, containing id and value properties, with the id mapping to the field's name property. The following code snippet demonstrates this usage: formPanel.getForm().setValues([{id: 'FirstName', value: 'Joe'}]);   There's more... Further to the two previously discussed methods there are two others that we will demonstrate here. Populating a form from a Model instance Being able to populate a form directly from a Model instance is extremely useful and is very simple to achieve. This allows us to easily translate our data structures into a form without having to manually map it to each field. We initially define a Model and create an instance of it (using the data object we used earlier in the recipe): Ext.define('Request', { extend: 'Ext.data.Model', fields: [ 'FirstName', 'LastName', 'EmailAddress', 'TelNumberCode', 'TelNumber', 'RequestDetails', 'RequestType' ] }); var requestModel = Ext.create('Request', requestData); Following this we call the loadRecord method of the Ext.form.Basic class and supply the Model instance as its only parameter. 
This will populate the form, mapping each Model field to its corresponding form field based on the name:

formPanel.getForm().loadRecord(requestModel);

Populating a form directly from the server

It is also possible to load a form's data directly from the server through an AJAX call. Firstly, we define a JSON file, containing our request data, which will be loaded by the form:

{
    "success": true,
    "data": {
        "FirstName": "Joe",
        "LastName": "Bloggs",
        "EmailAddress": "info@swarmonline.com",
        "TelNumberCode": "0777",
        "TelNumber": "7777777",
        "RequestDetails": "This is some Request Detail body text",
        "RequestType": {
            "type1": true,
            "type2": false,
            "type3": false,
            "type4": true,
            "type5": true,
            "type6": false
        }
    }
}

Notice the format of the data: we must provide a success property to indicate that the load was successful and put our form data inside a data property. Next we use the basic form's load method and provide it with a configuration object containing a url property pointing to our JSON file:

formPanel.getForm().load({
    url: 'requestDetails.json'
});

This method automatically performs an AJAX request to the specified URL and populates the form's fields with the data that was retrieved. This is all that is required to successfully load the JSON data into the form. The basic form's load method accepts similar configuration options to a regular AJAX request.

Submitting your form's data

Having taken care of populating the form, it's now time to look at sending newly added or edited data back to the server. As with form population, you'll learn just how easy this is with the Ext JS framework. There are two parts to this example. Firstly, we will submit data using the options of the basic form underlying the form panel. The second example will demonstrate binding the form to a Model and saving our data.

Getting ready

We will be using the form created in the first recipe as our base for this section, so refer to the Constructing a complex form layout recipe if you are not familiar with it.

How to do it...

Add a function to submit the form:

var submitForm = function(){
    formPanel.getForm().submit({
        url: 'submit.php'
    });
};

Add a button to the form that calls the submitForm function:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Submit Form',
        handler: submitForm
    }],
    items: [
        ...
    ]
});

How it works...

As we learned in the previous recipe, a form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method). The submit method in Ext.form.Basic is a shortcut to the Ext.form.action.Submit action. This class handles the form submission for us. All we are required to do is provide it with a URL and it will handle the rest. It's also possible to define the URL in the configuration for the Ext.form.Panel. Before submitting, it must first gather the data from the form. The Ext.form.Basic class contains a getValues method, which is used to gather the data values for each form field. It does this by iterating through all fields in the form, making a call to their respective getValue methods.

There's more...

The previous recipe demonstrated how to populate the form from a Model instance. Here we will take it a step further and use the same Model instance to submit the form as well.
Submitting a form from a Model instance

Extend the Model with a proxy and load the data into the form:

Ext.define('Request', {
    extend: 'Ext.data.Model',
    fields: ['FirstName', 'LastName', 'EmailAddress', 'TelNumberCode', 'TelNumber', 'RequestDetails', 'RequestType'],
    proxy: {
        type: 'ajax',
        api: {
            create: 'addTicketRequest.php',
            update: 'updateTicketRequest.php'
        },
        reader: {
            type: 'json'
        }
    }
});

var requestModel = Ext.create('Request', {
    FirstName: 'Joe',
    LastName: 'Bloggs',
    EmailAddress: 'info@swarmonline.com'
});

formPanel.getForm().loadRecord(requestModel);

Change the submitForm function to get the Model instance, update the record with the form data, and save the record to the server:

var submitForm = function(){
    var record = formPanel.getForm().getRecord();
    formPanel.getForm().updateRecord(record);
    record.save();
};

Validating form fields with VTypes

In addition to form fields' built-in validation (such as allowBlank and minLength), we can apply more advanced and more extensible validation by using VTypes. A VType (contained in the Ext.form.field.VTypes singleton) can be applied to a field and its validation logic will be executed as part of the field's periodic validation routine. A VType encapsulates a validation function, an error message (which will be displayed if the validation fails), and a regular expression mask to prevent any undesired characters from being entered into the field. This recipe will explain how to apply a VType to the e-mail address field in our example form, so that only properly formatted e-mail addresses are deemed valid and an error will be displayed if the value doesn't conform to this pattern.

How to do it...

We will start by defining our form and its fields. We will be using our example form that was created in the first recipe of this article as our base. Now that we have a form, we can add the vtype configuration option to our e-mail address field:

{
    xtype: 'textfield',
    fieldLabel: 'Email Address',
    name: 'EmailAddress',
    labelAlign: 'top',
    cls: 'field-margin',
    columnWidth: 0.6,
    vtype: 'email'
}

That is all we have to do to add e-mail address validation to a field. We can see the results in the following screenshot, with an incorrectly formatted e-mail address on the left and a valid one on the right:

How it works...

When a field is validated it runs through various checks. When a VType is defined, the associated validation routine is executed and will flag the field as valid or invalid. As previously mentioned, each VType has an error message coupled with it, which is displayed if the value is found to be invalid, and a mask expression that prevents unwanted characters from being entered. Unfortunately, only one VType can be applied to a field and so, if multiple checks are required, a custom hybrid may need to be created. See the next recipe for details on how to do this.

There's more...

Along with the e-mail VType, the framework provides three other VTypes that can be applied straight out of the box. These are:

alpha: this restricts the field to only alphabetic characters
alphanum: this VType allows only alphanumeric characters
url: this ensures that the value is a valid URL
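Applying these is identical to the email example: just set the vtype config on the field. Here is a minimal sketch; note that the Website and Nickname fields are hypothetical illustrations and not part of our support ticket form:

{
    xtype: 'textfield',
    fieldLabel: 'Website',
    name: 'Website',
    vtype: 'url' // only a well-formed URL is valid
}, {
    xtype: 'textfield',
    fieldLabel: 'Nickname',
    name: 'Nickname',
    vtype: 'alphanum' // letters and numbers only
}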
Creating custom VTypes

We have seen in the previous recipe how to use VTypes to apply more advanced validation to our form's fields. The built-in VTypes provided by the framework are excellent, but we will often want to create custom implementations to impose more complex and domain-specific validation on a field. We will walk through creating a custom VType to be applied to our telephone number field to ensure it is in the format that a telephone number should be. Although our telephone number field is split into two (the first field for the area code and the second for the rest of the number), for this example we will combine them so our VType is more comprehensive. For this example, we will be validating a very simple, strict telephone number format of "0777-777-7777".

How to do it...

We start by defining our VType's structure. This consists of a simple object literal with three properties: a function called telNumber and two strings called telNumberText (which will contain the error message text) and telNumberMask (which holds a regex to restrict the characters allowed to be entered into the field) respectively.

var telNumberVType = {
    telNumber: function(val, field){
        // function executed when field is validated
        // return true when field's value (val) is valid
        return true;
    },
    telNumberText: 'Your Telephone Number must only include numbers and hyphens.',
    telNumberMask: /[\d-]/
};

Next we define the regular expression that we will use to validate the field's value. We add this as a variable to the telNumber function:

telNumber: function(val, field){
    var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
    return true;
}

Once this has been done, we can add the logic to this telNumber function that will decide whether the field's current value is valid. This is a simple call to the regular expression's test method, which returns true if the value matches or false if it doesn't:

telNumber: function(val, field){
    var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
    return telNumberRegex.test(val);
}

The final step to defining our new VType is to apply it to the Ext.form.field.VTypes singleton, which is where all of the VTypes are located and where our field's validation routine will go to get its definition:

Ext.apply(Ext.form.field.VTypes, telNumberVType);

Now that our VType has been defined and registered with the framework, we can apply it to the field by using the vtype configuration option. The result can be seen in the following screenshot:

{
    xtype: 'textfield',
    name: 'TelNumber',
    flex: 4,
    vtype: 'telNumber'
}

How it works...

A VType consists of three parts:

The validity checking function
The validation error text
A keystroke filtering mask (optional)

VTypes rely heavily on naming conventions so they can be executed dynamically within a field's validation routine. This means that each of these three parts must follow the standard convention. The validation function's name will become the name used to reference the VType and form the prefix for the other two properties. In our example, this name was telNumber, which can be seen referencing the VType in Step 5. The error text property is then named with the VType's name prefixing the word Text (that is, telNumberText). Similarly, the filtering mask is the VType's name followed by the word Mask (that is, telNumberMask). The final step to create our VType is to merge it into the Ext.form.field.VTypes singleton, allowing it to be accessed dynamically during validation. The Ext.apply function does this by merging the VType's three properties into the Ext.form.field.VTypes class instance. When the field is validated and a vtype is defined, the VType's validation function is executed with the current value of the field and a reference to the field itself being passed in. If the function returns true then all is well and the routine moves on.
When the field is validated and a vtype is defined, the VType's validation function is executed with the current value of the field and a reference to the field itself passed in. If the function returns true then all is well and the routine moves on. However, if it evaluates to false, the VType's Text property is retrieved and pushed onto the errors array. This message is then displayed to the user, as in the earlier screenshot. This process can be seen in the following code snippet, taken directly from the framework:

if (vtype) {
    if (!vtypes[vtype](value, me)) {
        errors.push(me.vtypeText || vtypes[vtype + 'Text']);
    }
}

There's more...

It is often necessary to validate fields based on the values of other fields as well as their own. We will demonstrate this by creating a simple VType for validating that a confirm password field's value matches the value entered in an initial password field. We start by creating our VType structure as we did before:

Ext.apply(Ext.form.field.VTypes, {
    password: function(val, field){
        return false;
    },
    passwordText: 'Your Passwords do not match.'
});

We then complete the validation logic. We use the field's up method to get a reference to its parent form. Using that reference, we get the values of all the form's fields with the getValues method:

password: function(val, field){
    var parentForm = field.up('form'); // get parent form
    // get the form's values
    var formValues = parentForm.getValues();
    return false;
}

The next step is to get the first password field's value. We do this by using an extra property (firstPasswordFieldName) that we will specify when we add our VType to the confirm password field. This property will contain the name of the initial password field (in this example, Password). We can then compare the confirm password's value with the retrieved value and return the outcome:

password: function(val, field){
    var parentForm = field.up('form'); // get parent form
    // get the form's values
    var formValues = parentForm.getValues();
    // get the value from the configured 'First Password' field
    var firstPasswordValue = formValues[field.firstPasswordFieldName];
    // return true if they match
    return val === firstPasswordValue;
}

The VType is added to the confirm password field in exactly the same way as before, but we must include the extra firstPasswordFieldName option to link the fields together:

{
    xtype: 'textfield',
    fieldLabel: 'Confirm Password',
    name: 'ConfirmPassword',
    labelAlign: 'top',
    cls: 'field-margin',
    flex: 1,
    vtype: 'password',
    firstPasswordFieldName: 'Password'
}

Uploading files to the server

Uploading files is very straightforward with Ext JS 4. This recipe will demonstrate how to create a basic file upload form and send the data to your server.

Getting ready

This recipe requires the use of a web server for accepting the uploaded file. A PHP file is provided to handle the file upload; however, you can integrate this Ext JS code with any server-side technology you wish.

How to do it...

Create a simple form panel:

Ext.create('Ext.form.Panel', {
    title: 'Document Upload',
    width: 400,
    bodyPadding: 10,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: [],
    buttons: []
});

In the panel's items collection, add a file field:

Ext.create('Ext.form.Panel', {
    ...
    items: [{
        xtype: 'filefield',
        name: 'document',
        fieldLabel: 'Document',
        msgTarget: 'side',
        allowBlank: false,
        anchor: '100%'
    }],
    buttons: []
});

Add a button to the panel's buttons collection to handle the form submission:

Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Upload Document',
        handler: function(){
            var form = this.up('form').getForm();
            if (form.isValid()) {
                form.submit({
                    url: 'upload.php',
                    waitMsg: 'Uploading...'
                });
            }
        }
    }]
});
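The submit call accepts the same success and failure callbacks as any other form submission, so you can react once the upload finishes. The following is a hedged sketch of such a handler; the success and message properties are assumptions about what upload.php returns in its JSON response, not something the framework mandates:

form.submit({
    url: 'upload.php',
    waitMsg: 'Uploading...',
    success: function(form, action){
        // assumes upload.php replies with e.g. {"success": true, "message": "..."}
        Ext.Msg.alert('Upload complete', action.result.message);
    },
    failure: function(form, action){
        Ext.Msg.alert('Upload failed', action.result ? action.result.message : 'Unexpected server response');
    }
});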
How it works...

Your server-side code should handle these form submissions in the same way it would handle a regular HTML file upload form. You should not have to do anything special to make your server-side code compatible with Ext JS. The example works by defining an Ext.form.field.File (xtype: 'filefield'), which takes care of the styling and the button for selecting local files. The form submission handler works the same way as any other form submission; however, behind the scenes the framework tweaks how the form is submitted to the server. A form with a file upload field is not submitted using an XMLHttpRequest object. Instead, the framework creates and submits a temporary hidden <form> element whose target references a temporary hidden <iframe>. The request header's Content-Type is set to multipart/form-data. When the upload is finished and the server has responded, the temporary form and <iframe> are removed. A fake XMLHttpRequest object is then created containing a responseText property (populated from the contents of the <iframe>) to ensure that event handlers and callbacks work as if we were submitting the form using AJAX. If your server is responding to the client with JSON, you must ensure that the response Content-Type header is text/html.

There's more...

It's possible to customize your Ext.form.field.File. Some useful config options are highlighted as follows, with a short sketch combining them after the list:

buttonOnly: Boolean
Setting buttonOnly: true removes the visible text field from the file field.

buttonText: String
If you wish to change the text in the button from the default of "Browse…", set the buttonText config option.

buttonConfig: Object
The entire configuration of the button can be changed by defining a standard Ext.button.Button config object in the buttonConfig option. Anything defined in the buttonText config option will be ignored if you use this.
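As a brief illustration of these options, the following is our own hedged sketch (the iconCls value is a hypothetical CSS class), not code from the recipe:

{
    xtype: 'filefield',
    name: 'document',
    buttonOnly: true,             // hide the text field and show only the button
    buttonConfig: {
        text: 'Choose a file...', // set the label here, since buttonText is ignored when buttonConfig is used
        iconCls: 'upload-icon'    // hypothetical CSS class for illustration
    }
}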
Handling exceptions and callbacks

This recipe demonstrates how to handle callbacks when loading and submitting forms. This is particularly useful for two reasons:

You may wish to carry out further processing once the form has been submitted (for example, displaying a thank you message to the user)
In the unfortunate event that the submission fails, it's good to be ready to inform the user that something has gone wrong and perhaps perform extra processing

The recipe shows you what to do in the following circumstances:

The server responds informing you the submission was successful
The server responds with an unusual status code (for example, 404, 500, and so on)
The server responds informing you the submission was unsuccessful (for example, there was a problem processing the data)
The form is unable to load data because the server has sent an empty data property
The form is unable to submit data because the framework has deemed the values in the form to be invalid

Getting ready

The following recipe requires you to submit values to a server. An example submit.php file has been provided; however, please ensure you have a web server for serving this file.

How to do it...

Start by creating a simple form panel:

var formPanel = Ext.create('Ext.form.Panel', {
    title: 'Form',
    width: 300,
    bodyPadding: 10,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: [],
    buttons: []
});

Add a field to the form and set allowBlank to false:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    items: [{
        xtype: 'textfield',
        fieldLabel: 'Text field',
        name: 'field',
        allowBlank: false
    }],
    buttons: []
});

Add a button to handle the form's submission, and add success and failure handlers to the submit method's only parameter:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Submit',
        handler: function(){
            formPanel.getForm().submit({
                url: 'submit.php',
                success: function(form, action){
                    Ext.Msg.alert('Success', action.result.message);
                },
                failure: function(form, action){
                    if (action.failureType === Ext.form.action.Action.CLIENT_INVALID) {
                        Ext.Msg.alert('CLIENT_INVALID', 'Something has been missed. Please check and try again.');
                    }
                    if (action.failureType === Ext.form.action.Action.CONNECT_FAILURE) {
                        Ext.Msg.alert('CONNECT_FAILURE', 'Status: ' + action.response.status + ': ' + action.response.statusText);
                    }
                    if (action.failureType === Ext.form.action.Action.SERVER_INVALID) {
                        Ext.Msg.alert('SERVER_INVALID', action.result.message);
                    }
                }
            });
        }
    }]
});

When you run the code, watch for the different failureTypes or the success callback:

CLIENT_INVALID is fired when there is no value in the text field.
The success callback is fired when the server returns true in the success property.
Switch the response in the submit.php file and watch for the SERVER_INVALID failureType, which is fired when the success property is set to false.
Finally, edit url: 'submit.php' to url: 'unknown.php' and CONNECT_FAILURE will be fired.

How it works...

The Ext.form.action.Submit and Ext.form.action.Load classes both have a failure and a success function. One of these two functions will be called depending on the outcome of the action. The success callback is called when the action is successful and the success property is true. The failure callback, on the other hand, can be extended to look for specific reasons why the failure occurred (for example, there was an internal server error, the form did not pass client-side validation, and so on). This is done by looking at the failureType property of the action parameter. Ext.form.action.Action has four failureType static properties: CLIENT_INVALID, SERVER_INVALID, CONNECT_FAILURE, and LOAD_FAILURE, which can be compared with what has been returned by the server.

There's more...

A number of additional options are described as follows:

Handling form population failures

The Ext.form.action.Action.LOAD_FAILURE static property can be used in the failure callback when loading data into your form. The LOAD_FAILURE is returned as the action parameter's failureType when the success property is false or the data property contains no fields. The following code shows how this failure type can be caught inside the failure callback function:

failure: function(form, action){
    ...
    if (action.failureType == Ext.form.action.Action.LOAD_FAILURE) {
        Ext.Msg.alert('LOAD_FAILURE', action.result.message);
    }
    ...
}

An alternative to CLIENT_INVALID

The isValid method in Ext.form.Basic is an alternative method for handling client-side validation before the form is submitted. isValid will return true when client-side validation passes:

handler: function(){
    if (formPanel.getForm().isValid()) {
        formPanel.getForm().submit({
            url: 'submit.php'
        });
    }
}

Further resources on this subject:

Ext JS 4: Working with the Grid Component [Article]
Ext JS 4: Working with Tree and Form Components [Article]
Infinispan Data Grid: Infinispan and JBoss AS 7 [Article]
Building a To-do List with Ajax

Packt
08 Nov 2013
8 min read
(For more resources related to this topic, see here.)

Creating and migrating our to-do list's database

As you know, migrations are very helpful for controlling development steps. We'll use migrations in this article. To create our first migration, type the following command:

php artisan migrate:make create_todos_table --table=todos --create

When you run this command, Artisan will generate a migration for creating a database table named todos. Now we should edit the migration file to define the necessary table columns. When you open the app/database/migrations folder with a file manager, you will see the migration file under it. Let's open and edit the file as follows:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;

class CreateTodosTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('todos', function(Blueprint $table){
            $table->increments("id");
            $table->string("title", 255);
            $table->enum('status', array('0', '1'))->default('0');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop("todos");
    }

}

To build a simple to-do list, we need five columns:

The id column will store the ID numbers of to-do tasks
The title column will store a to-do task's title
The status column will store the status of each task
The created_at and updated_at columns will store the created and updated dates of tasks

If you write $table->timestamps() in the migration file, Laravel's schema builder automatically creates the created_at and updated_at columns. As you know, to apply migrations, we should run the following command:

php artisan migrate

After the command is run, if you check your database, you will see that our todos table and columns have been created. Now we need to write our model.

Creating a todos model

To create a model, open the app/models/ directory with your file manager. Create a file named Todo.php under the directory and write the following code:

<?php

class Todo extends Eloquent {

    protected $table = 'todos';

}

Let's examine the Todo.php file. As you see, our Todo class extends Eloquent, which is the ORM (Object Relational Mapper) database class of Laravel. The protected $table = 'todos'; line tells Eloquent our model's table name. If we don't set the table variable, Eloquent uses the plural, lowercase version of the model name as the table name, so setting it isn't technically required here. Now, our application needs a template file, so let's create it.

Creating the template

Laravel uses a template engine called Blade for static and application template files. Laravel calls template files from the app/views/ directory, so we need to create our first template under this directory. Create a file with the name index.blade.php.
The file contains the following code:

<html>
    <head>
        <title>To-do List Application</title>
        <link rel="stylesheet" href="assets/css/style.css">
        <!--[if lt IE 9]><script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]-->
    </head>
    <body>
        <div class="container">
            <section id="data_section" class="todo">
                <ul class="todo-controls">
                    <li><img src="/assets/img/add.png" width="14px" onClick="show_form('add_task');" /></li>
                </ul>
                <ul id="task_list" class="todo-list">
                    @foreach($todos as $todo)
                        @if($todo->status)
                            <li id="{{$todo->id}}" class="done">
                                <a href="#" class="toggle"></a>
                                <span id="span_{{$todo->id}}">{{$todo->title}}</span>
                                <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
                                <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
                            </li>
                        @else
                            <li id="{{$todo->id}}">
                                <a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a>
                                <span id="span_{{$todo->id}}">{{$todo->title}}</span>
                                <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
                                <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
                            </li>
                        @endif
                    @endforeach
                </ul>
            </section>
            <section id="form_section">
                <form id="add_task" class="todo" style="display:none">
                    <input id="task_title" type="text" name="title" placeholder="Enter a task name" value=""/>
                    <button name="submit">Add Task</button>
                </form>
                <form id="edit_task" class="todo" style="display:none">
                    <input id="edit_task_id" type="hidden" value="" />
                    <input id="edit_task_title" type="text" name="title" value="" />
                    <button name="submit">Edit Task</button>
                </form>
            </section>
        </div>
        <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
        <script src="assets/js/todo.js" type="text/javascript"></script>
    </body>
</html>

The preceding code may be difficult to understand if you're writing a Blade template for the first time, so let's examine it. You see a foreach loop in the file; this statement loops over our todo records. We will provide more details about it when we create our controller later in this article. The if and else statements separate finished and waiting tasks so that we can style them differently.

We need one more template file for appending new records to the task list on the fly. Create a file with the name ajaxData.blade.php under the app/views/ folder. The file contains the following code:

@foreach($todos as $todo)
    <li id="{{$todo->id}}">
        <a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a>
        <span id="span_{{$todo->id}}">{{$todo->title}}</span>
        <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
        <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
    </li>
@endforeach

Also, you will see the /assets/ directory in the source paths of the static files. When you look at the app/views directory, there is no directory named assets. Laravel separates system and public files: publicly accessible files stay under the public folder in the project root, so you should create a directory under your public folder for asset files. We recommend working with these kinds of organized folders to keep your code tidy and easy to read. Finally, you see that we are calling jQuery from its main website; we recommend this approach for getting the latest stable jQuery into your application. You can style your application as you wish, hence we won't examine the styling code here.
We are putting our style.css file under /public/assets/css/. For performing Ajax requests, we need some JavaScript code. This code posts our add_task and edit_task forms and updates tasks when they are completed. Let's create a JavaScript file with the name todo.js in /public/assets/js/. The file contains the following code:

function task_done(id){
    $.get("/done/"+id, function(data) {
        if(data=="OK"){
            $("#"+id).addClass("done");
        }
    });
}

function delete_task(id){
    $.get("/delete/"+id, function(data) {
        if(data=="OK"){
            var target = $("#"+id);
            target.hide('slow', function(){
                target.remove();
            });
        }
    });
}

function show_form(form_id){
    $("form").hide();
    $('#'+form_id).show("slow");
}

function edit_task(id,title){
    $("#edit_task_id").val(id);
    $("#edit_task_title").val(title);
    show_form('edit_task');
}

$('#add_task').submit(function(event) {
    /* stop form from submitting normally */
    event.preventDefault();
    var title = $('#task_title').val();
    if(title){
        //ajax post the form
        $.post("/add", {title: title}).done(function(data) {
            $('#add_task').hide("slow");
            $("#task_list").append(data);
        });
    }
    else{
        alert("Please give a title to task");
    }
});

$('#edit_task').submit(function(event) {
    /* stop form from submitting normally */
    event.preventDefault();
    var task_id = $('#edit_task_id').val();
    var title = $('#edit_task_title').val();
    var current_title = $("#span_"+task_id).text();
    var new_title = current_title.replace(current_title, title);
    if(title){
        //ajax post the form
        $.post("/update/"+task_id, {title: title}).done(function(data) {
            $('#edit_task').hide("slow");
            $("#span_"+task_id).text(new_title);
        });
    }
    else{
        alert("Please give a title to task");
    }
});

Let's examine the JavaScript file.
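Each function in todo.js follows the same pattern: a jQuery shorthand AJAX call ($.get or $.post) is made to an application URL, and the DOM is updated only in the callback, once the server has confirmed the operation. Note that the routes these calls target, such as /add, /done/{id}, /update/{id}, and /delete/{id}, are assumed to be defined in the application's routes file and handled by a controller that returns either the plain string "OK" or the rendered ajaxData template; neither of those pieces appears in this excerpt. Also note the event parameter in the two submit handlers: event.preventDefault() stops the browser from performing a normal, non-AJAX form submission, so the page never reloads.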