How-To Tutorials


Designing the New FlexCast® Management Architecture

Packt
18 Aug 2014
3 min read
This article, written by Luca Dentella, the author of Citrix® XenApp® 7.x Performance Essentials, teaches us how to design a XenApp architecture and gets us acquainted with the key features of the new FlexCast Management Architecture and its layers. The design of a XenApp infrastructure is a complex task that requires good knowledge of XenApp components. Making the right decisions in the design phase can also greatly help system administrators expand XenApp farms to satisfy new business requirements or to improve the user experience. In this article, you will learn about the following:

- The key features of the new FlexCast Management Architecture
- The five-layer model
- Sizing each layer's components
- Implementing and using Machine Creation Services to deploy new worker servers in minutes
- The differences between XenApp 6.5 and 7.5

FlexCast® Management Architecture

With XenApp 7.5, Citrix adopted the same architecture that was introduced in XenDesktop 5 and refined in XenDesktop 7, namely, FlexCast Management Architecture (FMA). FMA is primarily made up of Delivery Controllers and agents. Delivery agents are installed on all virtual and/or physical machines that host and publish resources (named worker servers), while the controllers manage users, resources, and configurations, and store them in a central SQL Server database. Unlike the previous versions of XenApp, the delivery agent now communicates only with the controllers in the Site and does not need to access the Site's database or license server directly, as illustrated in the following figure:

Overview of FlexCast infrastructure's elements

The main advantage of this architectural change is that only one underlying infrastructure is now used by both XenApp and XenDesktop. Therefore, the overall solution might include both published applications and virtual desktops, leveraging the same infrastructure elements. XenApp administrators who have moved to version 7.5 might be a bit confused; there are no more zones or data collectors. By the end of this article, you will find a table that maps concepts and terms from XenApp 6.x to the new ones in XenApp 7.5.

The five-layer model

When designing a new infrastructure, a common mistake is trying to focus on everything at once. A better approach is to divide the solution into layers and then analyze, size, and make decisions one level at a time. FlexCast Management Architecture can be divided into the following five layers:

- User layer: This defines user groups and locations
- Access layer: This defines how users access the resources
- Resource layer: This defines which resources are assigned to the given users
- Control layer: This defines the components required to run the solution
- Hardware layer: This defines the physical elements where the software components run

The power of a FlexCast architecture is that it's extremely flexible; different users can have their own set of policies and resources, but everything is managed by a single, integrated control layer, as shown in the following figure:

The five-layer model of FlexCast Management Architecture

The user layer

The need for a new application delivery solution normally comes from user requirements.
The minimum information that must be collected is as follows:

- What users need access to (business applications, a personalized desktop environment, and so on)
- What endpoints the users will use (personal devices, thin clients, smartphones, and so on)
- Where users connect from (the company's internal network, unreliable external networks, and so on)

User groups can access more than one resource at a time. For example, office workers can access a shared desktop environment with some common office applications installed and, in addition, use some hosted applications.


Moving the Space Pod Using Touch

Packt
18 Aug 2014
10 min read
This article, written by Frahaan Hussain, Arutosh Gurung, and Gareth Jones, authors of the book Cocos2d-x Game Development Essentials, covers how to set up touch events within our game. So far, the game has had no user interaction from a gameplay perspective. This article will rectify this by adding touch controls in order to move the space pod and avoid the asteroids. The topics that will be covered in this article are as follows:

- Implementing touch
- Single-touch
- Multi-touch
- Using touch locations
- Moving the spaceship when touching the screen

There are two main routes to detect touch provided by Cocos2d-x:

- Single-touch: This method detects a single touch event at any given time, which is what will be implemented in the game, as it is sufficient for most gaming circumstances
- Multi-touch: This method detects multiple touches simultaneously; this is great for pinching and zooming; for example, the Angry Birds game uses this technique

Though single-touch is the approach that the game will incorporate, multi-touch will also be covered in this article so that you are aware of how to use it in future games.

The general process for setting up touches

The general process of setting up touch events, be it single or multi-touch, is as follows:

1. Declare the touch functions.
2. Declare a listener to listen for touch events.
3. Assign touch functions to the appropriate touch events: when the touch has begun, when the touch has moved, and when the touch has ended.
4. Implement the touch functions.
5. Add appropriate game logic/code for when touch events have occurred.

Single-touch events

Single-touch events can be detected at any given time, and for many games this is sufficient, as it is for this game. Follow these steps to implement single-touch events into a scene:

1. Declare the touch functions in the GameScene.h file as follows:

    bool onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event);

The previous functions do the following:

- The onTouchBegan function detects when a single touch has occurred, and it returns a Boolean value. This should be true if the event is swallowed by the node, while false indicates that it will keep on propagating.
- The onTouchMoved function detects when the touch moves.
- The onTouchEnded function detects when the touch event has ended, essentially when the user has lifted up their finger.
- The onTouchCancelled function detects when a touch event has ended but not by the user, for example, due to a system alert. The general practice is to call the onTouchEnded method to run the same code, as it can be considered the same event for most games.
2. Declare a Boolean variable in the GameScene.h file, which will be true if the screen is being touched and false if it isn't, and also declare a float variable to keep track of the position being touched:

    bool isTouching;
    float touchPosition;

3. Add the following code in the init() method of GameScene.cpp:

    auto listener = EventListenerTouchOneByOne::create();
    listener->setSwallowTouches(true);
    listener->onTouchBegan = CC_CALLBACK_2(GameScreen::onTouchBegan, this);
    listener->onTouchMoved = CC_CALLBACK_2(GameScreen::onTouchMoved, this);
    listener->onTouchEnded = CC_CALLBACK_2(GameScreen::onTouchEnded, this);
    listener->onTouchCancelled = CC_CALLBACK_2(GameScreen::onTouchCancelled, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
    isTouching = false;
    touchPosition = 0;

There is quite a lot of new code in the previous snippet, so let's run through it line by line:

- The first statement declares and initializes a listener for a single touch
- The second statement makes the listener swallow touches so that they are not propagated to layers underneath
- The third statement assigns our onTouchBegan method to the onTouchBegan listener
- The fourth statement assigns our onTouchMoved method to the onTouchMoved listener
- The fifth statement assigns our onTouchEnded method to the onTouchEnded listener
- The sixth statement assigns our onTouchCancelled method to the onTouchCancelled listener
- The seventh statement adds the touch listener to the event dispatcher so that the events can be detected
- The eighth statement sets the isTouching variable to false, as the player won't be touching the screen initially when the game starts
- The final statement initializes the touchPosition variable to 0

4. Implement the touch functions inside the GameScene.cpp file:

    bool GameScreen::onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        isTouching = true;
        touchPosition = touch->getLocation().x;
        return true;
    }

    void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        // not used for this game
    }

    void GameScreen::onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        isTouching = false;
    }

    void GameScreen::onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        onTouchEnded(touch, event);
    }

Let's go over the touch functions that have just been implemented:

- The onTouchBegan method sets the isTouching variable to true, as the user is now touching the screen, and stores the starting touch position
- The onTouchMoved function isn't used in this game, but it has been implemented so that you are aware of the steps for implementing it (as an extra task, you can implement touch movement so that if the user moves his/her finger from one side to the other, the space pod changes direction)
- The onTouchEnded method sets the isTouching variable to false, as the user is no longer touching the screen
- The onTouchCancelled method calls the onTouchEnded method, as the touch event has essentially ended

If the game were to be run now, the space pod wouldn't move, as the movement code hasn't been implemented yet. It will be implemented within the update() method to move the space pod left when the user touches the left half of the screen and right when the user touches the right half of the screen.
5. Add the following code at the end of the update() method:

    // check if the screen is being touched
    if (true == isTouching)
    {
        // check which half of the screen is being touched
        if (touchPosition < visibleSize.width / 2)
        {
            // move the space pod left
            playerSprite->setPositionX(playerSprite->getPosition().x - (0.50 * visibleSize.width * dt));

            // check to prevent the space pod from going off the screen (left side)
            if (playerSprite->getPosition().x <= 0 + (playerSprite->getContentSize().width / 2))
            {
                playerSprite->setPositionX(playerSprite->getContentSize().width / 2);
            }
        }
        else
        {
            // move the space pod right
            playerSprite->setPositionX(playerSprite->getPosition().x + (0.50 * visibleSize.width * dt));

            // check to prevent the space pod from going off the screen (right side)
            if (playerSprite->getPosition().x >= visibleSize.width - (playerSprite->getContentSize().width / 2))
            {
                playerSprite->setPositionX(visibleSize.width - (playerSprite->getContentSize().width / 2));
            }
        }
    }

The preceding code performs the following steps:

1. Checks whether the screen is being touched.
2. Checks which side of the screen is being touched.
3. Moves the player left or right.
4. Checks whether the player is going off the screen and, if so, stops him/her from moving.
5. Repeats the process until the screen is no longer being touched.

This section covered how to set up single-touch events and implement them within the game to be able to move the space pod left and right.

Multi-touch events

Multi-touch is set up in a similar manner: declare the functions and create a listener to actively listen out for touch events. Follow these steps to implement multi-touch into a scene:

1. Firstly, the multi-touch feature needs to be enabled in the AppController.mm file, which is located within the ios folder.
To do so, add the following code line below the viewController.view = eaglView; line:

    [eaglView setMultipleTouchEnabled: YES];

2. Declare the touch functions within the game scene header file (the functions do the same thing as their single-touch equivalents but allow multiple touches to be detected simultaneously):

    void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);

3. Add the following code in the init() method of the scene.cpp file to listen for multi-touch events. It uses the EventListenerTouchAllAtOnce class, which allows multiple touches to be detected at once:

    auto listener = EventListenerTouchAllAtOnce::create();
    listener->onTouchesBegan = CC_CALLBACK_2(GameScreen::onTouchesBegan, this);
    listener->onTouchesMoved = CC_CALLBACK_2(GameScreen::onTouchesMoved, this);
    listener->onTouchesEnded = CC_CALLBACK_2(GameScreen::onTouchesEnded, this);
    listener->onTouchesCancelled = CC_CALLBACK_2(GameScreen::onTouchesCancelled, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);

4. Implement the following multi-touch functions inside scene.cpp:

    void GameScreen::onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("Multi-touch BEGAN");
    }

    void GameScreen::onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        for (int i = 0; i < touches.size(); i++)
        {
            CCLOG("Touch %i: %f", i, touches[i]->getLocation().x);
        }
    }

    void GameScreen::onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("MULTI TOUCHES HAVE ENDED");
    }

    void GameScreen::onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("MULTI TOUCHES HAVE BEEN CANCELLED");
    }

The multi-touch functions simply print out a log stating that they have occurred, except that when touches are moved, their respective x positions are logged. This section covered how to implement the core foundations for multi-touch events so that they can be used for features such as zooming (for example, zooming into a scene in the Clash of Clans game) and panning. Multi-touch wasn't incorporated within the game, as it wasn't needed, but this section is a good starting point for implementing it in future games.

Summary

This article covered how to set up touch listeners to detect touch events for single-touch and multi-touch. We incorporated single-touch within the game to be able to move the space pod left or right, depending on which half of the screen was being touched. Multi-touch wasn't used, as the game didn't require it, but its implementation was shown so that it can be used in future projects.

Resources for Article:

Further resources on this subject:

- Cocos2d: Uses of Box2D Physics Engine [article]
- Cocos2d-x: Installation [article]
- Thumping Moles for Fun [article]


Shapefiles in Leaflet

Packt
18 Aug 2014
5 min read
This article, written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows how a shapefile can be used to create geographical features on a map, and explains how shapefiles can be used to add a pop up or for styling purposes.

Using shapefiles in Leaflet

A shapefile is the most common geographic file type you are likely to encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf files at a minimum. These files contain the geometry, the index, and a database of attributes, respectively. Your shapefile will most likely also include a projection file (.prj) that tells the application the projection of the data, so that the coordinates make sense to the application. In the examples, you will also have a .shp.xml file that contains metadata and two spatial index files, .sbn and .sbx.

To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format because it contains multiple files.

To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps:

1. First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin; the second, shp.js, is the shapefile parser it depends on:

    <script src="leaflet.shpfile.js"></script>
    <script src="shp.js"></script>

2. Next, create a new shapefile layer and add it to the map, passing the layer the path to the zipped shapefile:

    var shpfile = new L.Shapefile('council.zip');
    shpfile.addTo(map);

Performing the preceding steps will add the shapefile to the map, but you will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data, followed by the options, which are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer:

    var shpfile = new L.Shapefile('council.zip', {
        onEachFeature: function(feature, layer) {
            layer.bindPopup("<a href='" + feature.properties.WEBPAGE + "'>Page</a><br><a href='" + feature.properties.PICTURE + "'>Image</a>");
        }
    });

In the preceding code, you pass council.zip to the shapefile, and for the options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display, using the format feature.properties.NAME-OF-PROPERTY. To find the names of the properties in a shapefile, you can open the .dbf file and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing their contents.
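As an aside for readers starting with this article: the snippets here assume that a Leaflet map object named map already exists. A minimal setup might look like the following sketch (the tile URL is the standard OpenStreetMap template; the coordinates and zoom level are placeholder values roughly centered on Albuquerque):

    // create the map and center it before adding any shapefile layers
    var map = L.map('map').setView([35.08, -106.65], 11);
    L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
        attribution: '&copy; OpenStreetMap contributors'
    }).addTo(map);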
If you do not know the names of the properties for a given shapefile, the following example shows you how to get them and then display each of them with its value in a pop up (the snippet below restores the onEachFeature wrapper implied by the previous example):

    var shpfile = new L.Shapefile('council.zip', {
        onEachFeature: function(feature, layer) {
            var holder = [];
            for (var key in feature.properties) {
                holder.push(key + ": " + feature.properties[key] + "<br>");
            }
            var popupContent = holder.join("");
            layer.bindPopup(popupContent);
        }
    });
    shpfile.addTo(map);

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the object, grabbing each key and concatenating the key name with the value and a line break. You push each line into the array and then join all of the elements into a single string. When you use the .join() method, it separates each element of the array in the new string with a comma; you can pass empty quotes to remove the comma. Lastly, you bind the pop up with the string as the content and then add the shapefile to the map.

The shapefile also takes a style option. You can pass any of the path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and makes it slightly transparent:

    var shpfile = new L.Shapefile('council.zip', {
        style: function(feature) {
            return { color: "black", fillColor: "red", fillOpacity: .75 };
        }
    });

Summary

In this article, we learned how shapefiles can be added to a geographical map, and how pop ups are added to them and displayed. The book also shows how to connect to an ESRI server that has an exposed REST service.

Resources for Article:

Further resources on this subject:

- Getting started with Leaflet [Article]
- Using JavaScript Effects with Joomla! [Article]
- Quick start [Article]


Configuring Your Operating System

Packt
18 Aug 2014
3 min read
In this article by William Smith, author of Learning Xamarin Studio, we will configure our operating system.

Configuring your Mac

To configure your Mac, perform the following steps:

1. From the Apple menu, open System Preferences.
2. Open the Personal group.
3. Select the Security and Privacy item.
4. Open the Firewall tab, and ensure the Firewall is turned off.

Configuring your Windows machine

To configure your Windows machine, download and install the Xamarin Unified Installer. This installer includes a tool called Xamarin Bonjour Service, which runs Apple's network discovery protocol. Xamarin Bonjour Service requires administrator rights, so you may want to just run the installer as an administrator.

Configuring a Windows VM within Mac

There is really no difference between using the Visual Studio plugin from a Windows machine or from a VM using software such as Parallels or VMware. However, if you are running Xamarin Studio on a Retina MacBook Pro, it is advisable to adjust the hardware video settings; otherwise, some of the elements within Xamarin Studio will render poorly, making them difficult to use. To adjust the settings in Parallels, follow these steps:

1. If your Windows VM is running, shut it down.
2. With your VM shut down, go to Virtual Machine | Configure….
3. Choose the Hardware tab.
4. Select the Video group.
5. Under Resolution, choose Scaled.

Final installation steps

Now that the necessary tools are installed and the settings have been enabled, you still need to link to your Xamarin account in Visual Studio, as well as connect Visual Studio to your Mac build machine. To connect to your Xamarin account, follow these steps:

1. In Visual Studio, go to Tools | Xamarin Account….
2. Click Login to your Xamarin Account and enter your credentials. Once your credentials are verified, you will receive a confirmation message.

To connect to your Mac build machine, follow these steps:

1. On your Mac, open Spotlight and type Xamarin build host.
2. Choose Xamarin.iOS Build Host under the Applications results group.
3. After the Build Host utility dialog opens, click the Pair button to continue. You will be provided with a PIN. Write this down.
4. On your PC, open Xamarin Studio.
5. Go to Tools | Options | Xamarin | iOS Settings.
6. After the Build Host utility opens, click the Continue button. If your Mac and network are correctly configured, you will see your Mac in the list of available build machines.
7. Choose your build machine and click the Continue button.
8. You will be prompted to enter the PIN. Do so, then click the Pair button.

Once the machines are paired, you can build, test, and deploy applications using the networked Mac. If, for whatever reason, you want to unpair these two machines, open the Xamarin.iOS Build Host on your Mac again and click the Invalidate PIN button. When prompted, complete the process by clicking the Unpair button.

Summary

In this article, we learned how to configure our operating system and how to connect to a Mac build machine.

Resources for Article:

Further resources on this subject:

- Updating data in the background [Article]
- Gesture [Article]
- Making POIApp Location Aware [Article]


Additional SOA Patterns – Supporting Composition Controllers

Packt
14 Aug 2014
10 min read
In this article by Sergey Popov, author of the book Applied SOA Patterns on the Oracle Platform, we will learn about some complex SOA patterns realized on two very interesting Oracle products: Coherence and Oracle Event Processing.

We have to admit that for SOA Suite developers and architects (especially from the old BPEL school), the Oracle Event Processing platform could be a bit outlandish. This could be the reason why some people oppose service-oriented and event-driven architecture, or see them as different architectural approaches. The situation is aggravated by the abundance of acronyms flying around, such as EDA, EPN, EDN, CEP, and so on. Even here, we use EPN and EDN interchangeably, as Oracle calls it event processing, and generically, it is used in an event delivery network.

The main argument used for distinguishing SOA and EDN is that SOA relies on the application of the standardized contract principle, whereas EDN has to deal with all types of events. This is true, and we have mentioned this fact before. We also mentioned that we have to declare all the event parameters in the form of key-value pairs with their types in <event-type-repository>. We also mentioned that the reference to the event type from the event type repository is not mandatory for a standard EPN adapter, but it's essential when you are implementing a custom inbound adapter in the EPN framework, which is an extremely powerful Java-based feature. As long as it's Java, you can do practically everything! Just follow the programming flow explained in the Oracle documentation; see the EP Input Adapter Implementation section:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import com.bea.wlevs.ede.api.EventProperty;
    import com.bea.wlevs.ede.api.EventRejectedException;
    import com.bea.wlevs.ede.api.EventType;
    import com.bea.wlevs.ede.api.EventTypeRepository;
    import com.bea.wlevs.ede.api.RunnableBean;
    import com.bea.wlevs.ede.api.StreamSender;
    import com.bea.wlevs.ede.api.StreamSink;
    import com.bea.wlevs.ede.api.StreamSource;
    import com.bea.wlevs.util.Service;
    import java.lang.RuntimeException;

    public class cargoBookingAdapter implements RunnableBean, StreamSource, StreamSink
    {
        static final Log v_logger = LogFactory.getLog("cargoBookingAdapter");

        private String v_eventTypeName;
        private EventType v_eventType;
        private StreamSender v_eventSender;
        private EventTypeRepository v_EvtRep = null;

        public cargoBookingAdapter() {
            super();
        }

        /**
         * Called by the server to pass in the name of the event
         * type to which event data should be bound.
         */
        public void setEventType(String v_EvType) {
            v_eventTypeName = v_EvType;
        }

        /**
         * Called by the server to set an event type repository
         * instance that knows about the event types configured
         * for this application.
         *
         * This repository instance will be used to retrieve an
         * event type instance that will be populated with event
         * data retrieved from the event data file.
         *
         * @param etr The event repository.
         */
        @Service(filter = EventTypeRepository.SERVICE_FILTER)
        public void setEventTypeRepository(EventTypeRepository etr) {
            v_EvtRep = etr;
        }

        /**
         * Executes to retrieve raw event data, create event type
         * instances from it, and then send the events to the next
         * stage in the EPN. This method, implemented from the
         * RunnableBean interface, executes when this adapter
         * instance is active.
         */
        public void run()
        {
            if (v_EvtRep == null) {
                throw new RuntimeException("EventTypeRepository is not set");
            }
            // Get the event type from the repository by using the event
            // type name specified as a property of this adapter in the
            // EPN assembly file.
            v_eventType = v_EvtRep.getEventType(v_eventTypeName);
            if (v_eventType == null) {
                throw new RuntimeException("EventType(" + v_eventTypeName + ") is not found.");
            }
            /*
             * Actual adapter implementation:
             *
             * 1. Create an object and assign to it an event type instance
             *    generated from event data retrieved by the reader
             *
             * 2. Send the newly created event type instance to a downstream
             *    stage that is listening to this adapter
             */
        }
    }

The presented code snippet demonstrates the injection of a dependency into the Adapter class using the setEventTypeRepository method, implanting the event type definition that is specified in the adapter's configuration. So, it appears that we do, in fact, have the data format and model declarations in an XML form for the event, and we put some effort into adapting the inbound flows to our underlying component. Thus, the Adapter Framework is essential in EDN, and dependency injection can be seen here as a form of dynamic Data Model/Format Transformation of the object's data.

Going further, just following the SOA reusability principle, a single adapter can be used in multiple event-processing networks, and for that, we can employ the Adapter Factory pattern discussed earlier (although it's not an official SOA pattern, remember?). For that, we will need the Adapter Factory class and the registration of this factory in the EPN assembly file with a dedicated provider name, which we will use later in applications employing an instance of this adapter. You must follow the OSGi service registry rules if you want to specify additional service properties in the <osgi:service interface="com.bea.wlevs.ede.api.AdapterFactory"> section, and register it only once as an OSGi service.
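To make this concrete, here is a minimal sketch of such a factory registration in the EPN assembly file. The class and provider names are hypothetical placeholders, not names from the book; the overall shape follows the Oracle CEP documentation for adapter factories:

    <!-- Hypothetical AdapterFactory registration in the EPN assembly file -->
    <osgi:service interface="com.bea.wlevs.ede.api.AdapterFactory">
        <osgi:service-properties>
            <!-- the dedicated provider name that applications will reference -->
            <entry key="type" value="cargoBookingAdapterProvider"/>
        </osgi:service-properties>
        <bean class="com.example.adapters.CargoBookingAdapterFactory"/>
    </osgi:service>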
We also use Asynchronous Queuing and persistence storage to provide reliable delivery of event aggregations to event subscribers, as we demonstrated in the previous paragraph. Talking about aggregation on our CQL processors, we have practically unlimited possibilities to merge and correlate various event sources, such as streams:

    <query id="cargoQ1"><![CDATA[
        select * from CargoBookingStream, VoyPortCallStream
        where CargoBookingStream.POL_CODE = VoyPortCallStream.PORT_CODE
        and VoyPortCallStream.PORT_CALL_PURPOSE = "LOAD"
    ]]></query>

Here, we employ Intermediate Routing (content-based routing) to scale and balance our event processors and also to achieve a desirable level of high availability. Combined together, all these basic SOA patterns are represented in the Event-Driven Network, which has Event-Driven Messaging as one of its forms.

Simply put, the entire EDN has one main purpose: effective decoupling of event (message) providers and consumers (the Loose Coupling principle), with reliable event identification and delivery capabilities. So, what is it really? It is a subset of the Enterprise Service Bus compound SOA pattern, and yes, it is a form of an extended Publish-Subscribe pattern.

Some may say that CQL processors (or bean processors) are not completely aligned with the classic ESB pattern. Well, you will not find OSB XQuery in the canonical ESB patterns catalog either; it's just a tool that supports ESB VETRO operations in this matter. In ESB, we can also call Java beans when necessary for message processing; for instance, doing complex sorts in Java Collections is far easier than in XML/XSLT, and it is worth the serialization/deserialization effort. In a similar way, EDN extends the classic ESB by providing the following functionalities:

- Continuous Query Language
- The ability to operate on multiple streams of disparate data
- The ability to join incoming data with persisted data
- The ability to plug in any type of adapter

Combined together, these features can cover almost any range of practical challenges, and the logistics example we used in this article is probably too insignificant for such a powerful event-driven platform; however, for a more insightful look at Oracle CEP, refer to Getting Started with Oracle Event Processing 11g, Alexandre Alves, Robin J. Smith, Lloyd Williams, Packt Publishing. Using exactly the same principles and patterns, you can employ the tools already in your arsenal. The world is apparently bigger, and this tool can demonstrate all its strength in the following use cases:

- As already mentioned, Cablecom Enterprise strives to improve the overall customer experience (not only for VOD). It does so by gathering and aggregating information about user preferences through purchasing history, watch lists, channel switching, activity in social networks, search history and the meta tags used in searches, other user experiences from the same target group, upcoming related public events (shows, performances, or premieres), and even the duration of the cursor's position over certain elements of corporate web portals. The task is complex and comprises many activities, including meta tag updates in metadata storage that depend on new findings for predicting trends, and so on; however, here we can tolerate (to some extent) events that aren't processed or received.

- For bank transaction monitoring, we do not have such a luxury. All online events must be accounted for and processed as quickly as possible. If the last transaction with your credit card was an ATM cash withdrawal at Bond Street in London, and 5 minutes later the same card is used to purchase expensive jewellery online with a peculiar delivery address, then someone should flag the card with a possible fraud case and contact the card holder. This is the simplest example that we can provide. When it comes to money-laundering tracking cases in our borderless world, the decision-parsing tree from the very first figure in this article, based on all possible correlated events, would require all the pages of this book, and you would need a strong magnifying glass to read it; the stratagem of the web nodes and links would drive even the most worldly-wise spider crazy.

For these use cases, Oracle EPN is simply compulsory, with some spice such as Coherence for cache management and adequate hardware. It would be prudent to avoid implementing homebrewed solutions (without dozens of years of relevant experience), and following the SOA design patterns is essential.

Let's now assemble all that we discussed in the preceding paragraphs in one final figure.
Installation routines will not give you any trouble: just install OEPE 3.5, then download and install the CEP components for Eclipse, and you are done with the client/dev environment. The installation of the server should not pose many difficulties either (http://docs.oracle.com/cd/E28280_01/doc.1111/e14476/install.htm#CEPGS472). When the server is up and running, you can register it in Eclipse (1). The graphical interface will support you in assembling event-handling applications from adapters, processor channels, and event beans; however, knowledge of the internal organization of the XML config and application assembly files (as demonstrated in the earlier code snippets) is always beneficial. In addition to the Eclipse development environment, you have the CEP server web console (Visualizer) with almost identical functionality, which gives you a quick hand with practically all CQL constructs (2).

Parallel Complex Events Processing


Avoiding Obstacles Using Sensors

Packt
14 Aug 2014
8 min read
In this article by Richard Grimmett, the author of Arduino Robotic Projects, we'll cover the following topics:

- How to add sensors to your projects
- How to add a servo to your sensor

An overview of the sensors

Before you begin, you'll need to decide which sensors to use. You require basic sensors that will return information about the distance to an object, and there are two choices: sonar and infrared. Let's look at each.

Sonar sensors

The sonar sensor uses ultrasonic sound to calculate the distance to an object. The sensor consists of a transmitter and a receiver. The transmitter creates a sound wave that travels out from the sensor. The device sends out a sound wave 10 times a second. If an object is in the path of these waves, the waves reflect off the object, returning sound waves to the sensor. The sensor measures the returning sound waves and uses the time difference between when the sound wave was sent out and when it returned to measure the distance to the object.

Infrared sensors

Another type of sensor uses infrared (IR) signals to detect distance. An IR sensor also uses both a transmitter and a receiver. The transmitter transmits a narrow beam of light, and the sensor receives this beam. The difference in transit ends up as an angle measurement at the sensor. The different angles give you an indication of the distance to the object. Unfortunately, the relationship between the output of the sensor and the distance is not linear, so you'll need to do some calibration to predict the actual distance and its relationship to the output of the sensor.

Connecting a sonar sensor to Arduino

The HC-SR04 is a sonar sensor that works well with Arduino. These sonar sensors are available at most places that sell Arduino products, including amazon.com. In order to connect this sonar sensor to your Arduino, you'll need some female-to-male jumper cables. You'll notice that there are four pins to connect on the sonar sensor. Two of these supply the voltage and current to the sensor. One pin, the Trig pin, triggers the sensor to send out a sound wave. The Echo pin then senses the return from the echo. To access the sensor with Arduino, make the following connections using the male-to-female jumper wires:

    Arduino pin    Sensor pin
    5V             Vcc
    GND            GND
    12             Trig
    11             Echo

Accessing the sonar sensor from the Arduino IDE

Now that the hardware is connected, you'll want to download a library that supports this sensor. One of the better libraries for this sensor is available at https://code.google.com/p/arduino-new-ping/. Download the NewPing library and then open the Arduino IDE. You can include the library in the IDE by navigating to Sketch | Import Library | Add Library | Downloads and selecting the NewPing ZIP file. Once you have the library installed, you can access the example program by navigating to File | Examples | NewPing | NewPingExample.
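The original article showed the example as an IDE screenshot; it boils down to something like the following sketch (a reconstruction, not the library's verbatim example; the pin constants match the wiring above, and the maximum distance is an assumption):

    #include <NewPing.h>

    #define TRIGGER_PIN  12   // Arduino pin tied to the sensor's Trig pin
    #define ECHO_PIN     11   // Arduino pin tied to the sensor's Echo pin
    #define MAX_DISTANCE 200  // maximum distance to ping for, in centimeters

    NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE);

    void setup() {
      Serial.begin(115200);   // matches the 115200 baud rate mentioned below
    }

    void loop() {
      delay(50);                      // wait between pings
      Serial.print("Ping: ");
      Serial.print(sonar.ping_cm());  // send a ping, get the distance in centimeters
      Serial.println("cm");
    }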
Now, upload the code to Arduino and open a serial terminal by navigating to Tools | Serial Monitor in the IDE. Initially, you will see characters that make no sense; you need to change the serial port baud rate to 115200 baud by selecting this value in the lower-right corner of the Serial Monitor. Now, you should begin to see results that make sense. If you place your hand in front of the sensor and then move it, you should see the distance results change. You can now measure the distance to an object using your sonar sensor.

Connecting an IR sensor to Arduino

One popular choice is the Sharp series of IR sensors. One of the models, the Sharp 2Y0A02, is a unit that provides sensing to a distance of 150 cm. To connect this unit, you'll need to connect the three pins that are available on the bottom of the sensor. Here is the connection list:

    Arduino pin    Sensor pin
    5V             Vcc
    GND            GND
    A3             Vo

Unfortunately, there are no labels on the unit, but there is a data sheet that you can download from www.phidgets.com/documentation/Phidgets/3522_0_Datasheet.pdf. One of the challenges of making this connection is that the female-to-male connection jumpers are too big to connect directly to the sensor. You'll want to order the three-wire cable with connectors along with the sensor, and then you can make the connections between this cable and your Arduino device using the male-to-male jumper wires. Once the pins are connected, you are ready to access the sensor via the Arduino IDE.

Accessing the IR sensor from the Arduino IDE

Now, bring up the Arduino IDE. Here is a simple sketch that provides access to the sensor and returns the distance to the object via the serial link.
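The sketch appeared as a screenshot in the original article; reconstructed from the description that follows, it looks something like this (the variable names and the printed text are assumptions, but the pin, baud rate, and conversion formula come straight from the text):

    int inputPin = A3;     // analog input connected to the sensor's Vo pin
    int inValue = 0;       // raw reading from the sensor
    float distance = 0.0;  // calculated distance in centimeters

    void setup() {
      Serial.begin(9600);
      Serial.println("Sharp IR distance sensor");
    }

    void loop() {
      inValue = analogRead(inputPin);           // get the value from the A3 input port
      distance = 30431 * pow(inValue, -1.169);  // convert the reading to centimeters
      Serial.print("Distance: ");
      Serial.print(distance);
      Serial.println(" cm");
      delay(500);
    }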
The sketch is quite simple. The three global variables at the top set the input pin to A3 and provide storage locations for the input value and the distance. The setup() function simply sets the serial port baud rate to 9600 and prints out a single line to the serial port. In the loop() function, you first get the value from the A3 input port. The next step is to convert it to a distance based on the voltage. To do this, you need to use the voltage-to-distance chart for the device. There are two parts to the curve: the first is the distance up to about 15 centimeters, and the second is the distance from 15 centimeters to 150 centimeters. This simple example ignores distances closer than 15 centimeters and models the distance from 15 centimeters outward as a decaying exponential of the following form:

    distance = constant * inValue ^ exponent

Thanks to teaching.ericforman.com/how-to-make-a-sharp-ir-sensor-linear/, the values that work quite well for a conversion to centimeters are 30431 for the constant and -1.169 for the exponent of this curve. If you open the Serial Monitor tab and place an object in front of the sensor, you'll see the readings for the distance to the object. By the way, when you place the object closer than 15 cm, you will begin to see distances that seem much larger than should be indicated. This is due to the voltage-to-distance curve at these much shorter distances. If you truly need very short distances, you'll need a much more complex calculation.

Creating a scanning sensor platform

While knowing the distance in front of your robotic project is normally important, you might want to know other distances around the robot as well. One solution is to hook up multiple sensors, which is quite simple. However, there is another solution that may be a bit more cost effective. To create a scanning sensor of this type, take a sensor of your choice (in this case, I'll use the IR sensor) and mount it on a servo. I like to use a servo L bracket for this, mounted on the servo. You'll need to connect both the IR sensor and the servo to Arduino. Now, you will need some Arduino code that will move the servo and also take the sensor readings.
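The original article showed this code as a screenshot; a minimal reconstruction along the lines it describes might look like the following (the servo pin and the prompt text are assumptions; the four key statements are quoted from the article just below):

    #include <Servo.h>

    Servo servo;           // the servo the IR sensor is mounted on
    int servoPin = 9;      // assumed servo control pin
    int inputPin = A3;     // IR sensor analog input
    int inValue = 0;
    float distance = 0.0;

    void setup() {
      Serial.begin(9600);
      servo.attach(servoPin);
      Serial.println("Enter an angle (0-180):");
    }

    void loop() {
      if (Serial.available() > 0) {
        int angle = Serial.parseInt();  // read the angle typed into the Serial Monitor
        if (angle >= 0 && angle <= 180) {
          servo.write(angle);           // move the sensor to the requested angle
          delay(500);                   // give the servo time to reach the position
          inValue = analogRead(inputPin);
          distance = 30431 * pow(inValue, -1.169);
          Serial.print("Angle: ");
          Serial.print(angle);
          Serial.print("  Distance: ");
          Serial.print(distance);
          Serial.println(" cm");
        }
      }
    }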
The code simply moves the servo to an angle and then prints out the distance value reported by the IR sensor. The specific statements that may be of interest are as follows:

- servo.attach(servoPin);: This statement attaches the servo control to the pin defined
- servo.write(angle);: This statement sends the servo to this angle
- inValue = analogRead(inputPin);: This statement reads the analog input value from this pin
- distance = 30431 * pow(inValue, -1.169);: This statement translates the reading to a distance in centimeters

If you upload the sketch, open the Serial Monitor, and enter different angle values, the servo should move and you should see the corresponding distance readings.

Summary

Now that you know how to use sensors to understand the environment, you can create even more complex programs that will sense these barriers and then change the direction of your robot to avoid them or collide with them. You learned how to find distance using sonar sensors and how to connect them to Arduino. You also learned about IR sensors and how they can be used with Arduino.

Resources for Article:

Further resources on this subject:

- Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]
- Using PVR with Raspbmc [article]
- Home Security by BeagleBone [article]

Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, author of the jMonkeyEngine 3.0 Cookbook, we will learn how to use physics in games using different physics engines. This article contains the following recipes:

- Creating a pushable door
- Building a rocket engine
- Ballistic projectiles and arrows
- Handling multiple gravity sources
- Self-balancing using RotationalLimitMotors

Using physics in games has become very common and accessible, thanks to open source physics engines such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based library with JNI bindings to the original Bullet, which is based on C++. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath; no coding change is required. Use jme3-libraries-physics for the implementation of jBullet and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and is fully featured.

Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations.

Creating a pushable door

Doors are useful in games. Visually, it is more appealing not to have holes in the walls, but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometries and increase performance. There is also a gameplay aspect, where doors are used to open new areas to the player and give a sense of progression. In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class. This door consists of the following three elements:

- Door object: This is a visible object
- Attachment: This is the fixed end of the joint around which the hinge swings
- Hinge: This defines how the door should move

Getting ready

Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, you will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used.

How to do it...

This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps:

1. Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall, where the player can't run into it. It should have a mass of 0 to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet:

    attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f));
    bulletAppState.getPhysicsSpace().add(attachment);

2. Now, create a Geometry class called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows:

    Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f));

3. Similarly, create a RigidBodyControl instance with the same dimensions and a mass of 1; add it as a control to the doorGeometry class first and then add it to the physicsSpace of bulletAppState.
The following code snippet shows you how to do this:

    RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1);
    bulletAppState.getPhysicsSpace().add(doorPhysicsBody);
    doorGeometry.addControl(doorPhysicsBody);

4. Now, we're going to connect the two with a HingeJoint. Create a new HingeJoint instance called joint, as follows:

    new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y);

5. Then, we set the limit for the rotation of the door and add it to physicsSpace, as follows:

    joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f);
    bulletAppState.getPhysicsSpace().add(joint);

6. Now, we have a door that can be opened by walking into it. It is primitive but effective.

Normally, you want doors in games to close after a while. Here, however, once the door is opened, it remains open. In order to implement an automatic closing mechanism, perform the following steps:

1. Create a new class called DoorCloseControl extending AbstractControl.
2. Add a HingeJoint field called joint along with a setter for it, and a float variable called timeOpen.
3. In the controlUpdate method, we get hingeAngle from the HingeJoint and store it in a float variable called angle, as follows:

    float angle = joint.getHingeAngle();

4. If the angle deviates more than a little from zero, we increase timeOpen using tpf; otherwise, timeOpen is reset to 0, as shown in the following code snippet:

    if(angle > 0.1f || angle < -0.1f) timeOpen += tpf;
    else timeOpen = 0f;

5. If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed to be the inverse of the angle and enable the door's motor to make it move in the opposite direction of its angle, as follows:

    if(timeOpen > 5) {
        float speed = angle > 0 ? -0.9f : 0.9f;
        joint.enableMotor(true, speed, 0.1f);
        spatial.getControl(RigidBodyControl.class).activate();
    }

6. If timeOpen is less than 5, we should set the speed of the motor to 0:

    joint.enableMotor(true, 0, 1);

Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows:

    DoorCloseControl doorControl = new DoorCloseControl();
    doorControl.setHingeJoint(joint);
    doorGeometry.addControl(doorControl);

How it works...

The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't keep it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half pi in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through.

When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player: making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can also consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped through.
This might have implications on the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through; making the door thicker will move them even closer to the player.

In DoorCloseControl, we consider the door to be open if hingeAngle deviates a bit from 0. We don't use exactly 0 because we can't control the exact rotation of the joint. Instead, we use a rotational force to move it, which is what we do with joint.enableMotor. Once the door has been open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor, in this case, would cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on the RigidBodyControl or it will not move.
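Assembling the snippets from the recipe, the complete DoorCloseControl class might look something like the following sketch (the imports and the exact structure are a reconstruction; the logic follows the steps above):

    import com.jme3.bullet.control.RigidBodyControl;
    import com.jme3.bullet.joints.HingeJoint;
    import com.jme3.renderer.RenderManager;
    import com.jme3.renderer.ViewPort;
    import com.jme3.scene.control.AbstractControl;

    public class DoorCloseControl extends AbstractControl {

        private HingeJoint joint;
        private float timeOpen = 0f;

        public void setHingeJoint(HingeJoint joint) {
            this.joint = joint;
        }

        @Override
        protected void controlUpdate(float tpf) {
            // The door counts as open if the hinge deviates noticeably from 0
            float angle = joint.getHingeAngle();
            if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
            else timeOpen = 0f;

            if (timeOpen > 5) {
                // Drive the motor back toward the closed (zero) angle
                float speed = angle > 0 ? -0.9f : 0.9f;
                joint.enableMotor(true, speed, 0.1f);
                spatial.getControl(RigidBodyControl.class).activate();
            } else {
                // Hold the motor at zero speed
                joint.enableMotor(true, 0, 1);
            }
        }

        @Override
        protected void controlRender(RenderManager rm, ViewPort vp) {
            // Nothing to render for this control
        }
    }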


Lightning Introduction

Packt
13 Aug 2014
11 min read
In this article by Jorge González and James Watts, the authors of CakePHP 2 Application Cookbook, we will cover the following recipes:

- Listing and viewing records
- Adding and editing records
- Deleting records
- Adding a login
- Including a plugin

CakePHP is a web framework for rapid application development (RAD), which admittedly covers a wide range of areas and possibilities. However, at its core, it provides a solid architecture for the CRUD (create/read/update/delete) interface. This chapter is a set of quick-start recipes to dive head first into using the framework and build out a simple CRUD around product management. If you want to try the code examples on your own, make sure that you have CakePHP 2.5.2 installed and configured to use a database.

Listing and viewing records

To begin, we'll need a way to view the products available and also allow the option to select and view any one of those products. In this recipe, we'll create a listing of products as well as a page where we can view the details of a single product.

Getting ready

To go through this recipe, we'll first need a table of data to work with. So, create a table named products using the following SQL statement:

    CREATE TABLE products (
        id VARCHAR(36) NOT NULL,
        name VARCHAR(100),
        details TEXT,
        available TINYINT(1) UNSIGNED DEFAULT 1,
        created DATETIME,
        modified DATETIME,
        PRIMARY KEY(id)
    );

We'll then need some sample data to test with, so now run this SQL statement to insert some products:

    INSERT INTO products (id, name, details, available, created, modified)
    VALUES
    ('535c460a-f230-4565-8378-7cae01314e03', 'Cake', 'Yummy and sweet', 1, NOW(), NOW()),
    ('535c4638-c708-4171-985a-743901314e03', 'Cookie', 'Browsers love cookies', 1, NOW(), NOW()),
    ('535c49d9-917c-4eab-854f-743801314e03', 'Helper', 'Helping you all the way', 1, NOW(), NOW());

Before we begin, we'll also need to create a ProductsController. To do so, create a file named ProductsController.php in app/Controller/ and add the following content:

    <?php
    App::uses('AppController', 'Controller');

    class ProductsController extends AppController {
        public $helpers = array('Html', 'Form');
        public $components = array('Session', 'Paginator');
    }

Now, create a directory named Products/ in app/View/. Then, in this directory, create one file named index.ctp and another named view.ctp.

How to do it...
Perform the following steps:

1. Define the pagination settings to sort the products by adding the following property to the ProductsController class:

    public $paginate = array('limit' => 10);

2. Add the following index() method to the ProductsController class:

    public function index() {
        $this->Product->recursive = -1;
        $this->set('products', $this->paginate());
    }

3. Introduce the following content in the index.ctp file that we created:

    <h2><?php echo __('Products'); ?></h2>
    <table>
        <tr>
            <th><?php echo $this->Paginator->sort('id'); ?></th>
            <th><?php echo $this->Paginator->sort('name'); ?></th>
            <th><?php echo $this->Paginator->sort('created'); ?></th>
        </tr>
        <?php foreach ($products as $product): ?>
        <tr>
            <td><?php echo $product['Product']['id']; ?></td>
            <td><?php
                echo $this->Html->link($product['Product']['name'],
                    array('controller' => 'products', 'action' => 'view', $product['Product']['id']));
            ?></td>
            <td><?php echo $this->Time->nice($product['Product']['created']); ?></td>
        </tr>
        <?php endforeach; ?>
    </table>
    <div>
        <?php echo $this->Paginator->counter(array('format' => __('Page {:page} of {:pages}, showing {:current} records out of {:count} total, starting on record {:start}, ending on {:end}'))); ?>
    </div>
    <div>
        <?php
        echo $this->Paginator->prev(__('< previous'), array(), null, array('class' => 'prev disabled'));
        echo $this->Paginator->numbers(array('separator' => ''));
        echo $this->Paginator->next(__('next >'), array(), null, array('class' => 'next disabled'));
        ?>
    </div>

4. Returning to the ProductsController class, add the following view() method to it:

    public function view($id) {
        if (!($product = $this->Product->findById($id))) {
            throw new NotFoundException(__('Product not found'));
        }
        $this->set(compact('product'));
    }

5. Introduce the following content in the view.ctp file:

    <h2><?php echo h($product['Product']['name']); ?></h2>
    <p><?php echo h($product['Product']['details']); ?></p>
    <dl>
        <dt><?php echo __('Available'); ?></dt>
        <dd><?php echo __((bool)$product['Product']['available'] ? 'Yes' : 'No'); ?></dd>
        <dt><?php echo __('Created'); ?></dt>
        <dd><?php echo $this->Time->nice($product['Product']['created']); ?></dd>
        <dt><?php echo __('Modified'); ?></dt>
        <dd><?php echo $this->Time->nice($product['Product']['modified']); ?></dd>
    </dl>

Now, navigating to /products in your web browser will display a listing of the products. Clicking on one of the product names in the listing will redirect you to a detailed view of that product.

How it works...

We started by defining the pagination setting in our ProductsController class, which defines how the results are treated when returning them via the Paginator component (previously defined in the $components property of the controller). Pagination is a powerful feature of CakePHP, which extends well beyond simply defining the number of results or the sort order.

We then added an index() method to our ProductsController class, which returns the listing of products. You'll first notice that we accessed a $Product property on the controller. This is the model that we are acting against to read from our table in the database. We didn't create a file or class for this model, as we're taking full advantage of the framework's ability to determine the aspects of our application through convention. Here, as our controller is called ProductsController (plural), it automatically assumes a Product (singular) model. This Product model, in turn, assumes a products table in our database.
This alone is a prime example of how CakePHP can speed up development by making use of these conventions. You'll also notice that in our ProductsController::index() method, we set the $recursive property of the Product model to -1. This is to tell our model that we're not interested in resolving any associations on it. Associations are other models that are related to this one. This is another powerful aspect of CakePHP: it allows you to determine how models are related to each other, allowing the framework to dynamically generate those links so that you can return results with the relations already mapped out for you. We then called the paginate() method to handle the resolving of the results via the Paginator component.

It's common practice to set the $recursive property of all models to -1 by default. This saves heavy queries where associations are resolved to return the related models, when it may not be necessary for the query at hand. This can be done via the AppModel class, which all models extend, or via an intermediate class that you may be using in your application.

We also defined a view($id) method, which is used to resolve a single product and display its details. First, you probably noticed that our method receives an $id argument. By default, CakePHP treats the arguments in methods for actions as parts of the URL. So, if we have a product with an ID of 123, the URL would be /products/view/123. In this case, as our argument doesn't have a default value, in its absence from the URL the framework would return an error page, which states that an argument was required.

You will also notice that our IDs in the products table aren't sequential numbers in this case. This is because we defined our id field as VARCHAR(36). When doing this, CakePHP will use a Universally Unique Identifier (UUID) instead of an auto_increment value. To use a UUID instead of a sequential ID, you can use either CHAR(36) or BINARY(36). Here, we used VARCHAR(36), but note that it can be less performant than BINARY(36) due to collation. A UUID is usually preferred over a sequential ID for obfuscation, as it's harder to guess a string of 36 characters, but more importantly when you use database partitioning, replication, or any other means of distributing or clustering your data.

We then used the findById() method on the Product model to return a product by its ID (the one passed to the action). This method is actually a magic method: just as findById() returns a record by its ID, changing the method name to findByAvailable(), for example, would return the records that have the given value for the available field in the table. These methods are very useful to easily perform queries on the associated table without having to define the methods in question. We also threw a NotFoundException for the cases in which a product isn't found for the given ID. This exception is HTTP aware, so it results in an error page if thrown from an action. Finally, we used the set() method to assign the result to a variable in the view. Here, we're using the compact() function in PHP, which converts the given variable names into an associative array, where the key is the variable name and the value is the variable's value. In this case, this provides a $product variable with the results array in the view. You'll find this function useful to rapidly assign variables for your views. We also created our views using HTML, making use of the Paginator, Html, and Time helpers.
You may have noticed that the usage of TimeHelper was not declared in the $helpers property of our ProductsController. This is because CakePHP is able to find and instantiate helpers from the core or the application automatically, when a helper is used in the view for the first time. Then, the sort() method on the Paginator helper helps you create links, which, when clicked on, toggle the sorting of the results by that field. Likewise, the counter(), prev(), numbers(), and next() methods create the paging controls for the table of products.

You will also notice the structure of the array that we assigned from our controller. This is the common structure of results returned by a model. This can vary slightly, depending on the type of find() performed (in this case, all), but the typical structure would be as follows (using the real data from our products table here):

    Array
    (
        [0] => Array
            (
                [Product] => Array
                    (
                        [id] => 535c460a-f230-4565-8378-7cae01314e03
                        [name] => Cake
                        [details] => Yummy and sweet
                        [available] => true
                        [created] => 2014-06-12 15:55:32
                        [modified] => 2014-06-12 15:55:32
                    )
            )
        [1] => Array
            (
                [Product] => Array
                    (
                        [id] => 535c4638-c708-4171-985a-743901314e03
                        [name] => Cookie
                        [details] => Browsers love cookies
                        [available] => true
                        [created] => 2014-06-12 15:55:33
                        [modified] => 2014-06-12 15:55:33
                    )
            )
        [2] => Array
            (
                [Product] => Array
                    (
                        [id] => 535c49d9-917c-4eab-854f-743801314e03
                        [name] => Helper
                        [details] => Helping you all the way
                        [available] => true
                        [created] => 2014-06-12 15:55:34
                        [modified] => 2014-06-12 15:55:34
                    )
            )
    )

We also used the link() method on the Html helper, which provides us with the ability to perform reverse routing to generate the link to the desired controller and action, with arguments if applicable. Here, the absence of a controller assumes the current controller, in this case, products.

Finally, you may have seen that we used the __() function when writing text in our views. This function is used to handle translations and internationalization of your application. When using this function, if you were to provide your application in various languages, you would only need to handle the translation of your content and would have no need to revise and modify the code in your views. There are other variations of this function, such as __d() and __n(), which allow you to enhance how you handle the translations. Even if you have no initial intention of providing your application in multiple languages, it's always recommended that you use these functions. You never know, using CakePHP might enable you to create a world class application, which is offered to millions of users around the globe!
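Before moving on, here is a quick, hedged illustration of the magic finders described above. The available() action is our own invention for this sketch (it is not part of the recipe), but findAllByAvailable() is a standard CakePHP 2 magic finder:

    // A minimal sketch, assuming it is added to ProductsController.
    public function available() {
        // findAllByAvailable() is the "find all" variant of the magic
        // finders: it returns every record whose available field matches
        // the given value, using the same result structure shown above.
        $products = $this->Product->findAllByAvailable(1);
        $this->set(compact('products'));
    }

A matching available.ctp view could then iterate over $products exactly as index.ctp does.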

Program structure, execution flow, and runtime objects

Packt
30 Jul 2014
8 min read
(For more resources related to this topic, see here.) Unfortunately, C++ isn't an easy language at all. Sometimes it can seem like a limitless language that cannot be learned and understood entirely, but you don't need to worry about that. It is not important to know everything; it is important to use the parts that are required in specific situations correctly. Practice is the best teacher, so it's better to understand how to use as many of the features as needed.

In the examples to come, we will use Charles Simonyi's Hungarian notation. In his 1977 PhD thesis, Meta-Programming: A Software Production Method, he proposed a standard notation for programming, which says that the first letter of a type or variable name should represent its data type. For example, a Test class should be named CTest, where the first letter says that Test is a class. This is good practice because a programmer who is not familiar with the Test data type will immediately know that Test is a class. The same standard stands for primitive types, such as int or double. For example, iCount stands for an integer variable Count, while dValue stands for a double variable Value. Using the given prefixes, it is easy to read code even if you are not so familiar with it.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

Now, let's create our first program by performing the following steps and explaining its structure:

1. Create a new C++ console application named TestDemo.
2. Open TestDemo.cpp.
3. Add the following code:

    #include "stdafx.h"
    #include <iostream>
    using namespace std;

    int _tmain(int argc, _TCHAR* argv[])
    {
        cout << "Hello world" << endl;
        return 0;
    }

How it works...

The structure of the C++ program varies due to different programming techniques. What most programs must have is the #include or preprocessor directives. The #include <iostream> directive tells the compiler to include the iostream header file, where the available function prototypes reside. It also means that libraries with the functions' implementations should be built into executables. So, if we want to use some API or function, we need to include an appropriate header file, and maybe we will have to add an additional input library that contains the function/API implementation. One more important difference when including files is <header> versus "header". The first (<>) targets solution-configured project paths, while the other ("") targets folders relative to the C++ project.

The using command instructs the compiler to use the std namespace. Namespaces are packages with object declarations and function implementations. Namespaces have a very important usage: they are used to minimize ambiguity while including third-party libraries when the same function name is used in two different packages.

We need to implement a program entry point: the main function. As we said before, we can use main for an ANSI signature, wmain for a Unicode signature, or _tmain, where the compiler will resolve its signature depending on the preprocessor definitions in the project property pages. For a console application, the main function can have the following four different prototypes:

    int _tmain(int argc, TCHAR* argv[])
    void _tmain(int argc, TCHAR* argv[])
    int _tmain(void)
    void _tmain(void)

The first prototype has two arguments, argc and argv. The first argument, argc, or the argument count, says how many arguments are present in the second argument, argv, or the argument values.
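To make the relationship between argc and argv concrete, here is a small sketch of our own (not from the original recipe) that echoes every command-line argument. It assumes a Unicode character set, the Visual C++ default, so _TCHAR resolves to wchar_t and we print with wcout:

    #include "stdafx.h"
    #include <iostream>

    int _tmain(int argc, _TCHAR* argv[])
    {
        // argc holds the number of entries in argv; argv[0] is the program name.
        for (int i = 0; i < argc; ++i)
        {
            std::wcout << L"argv[" << i << L"] = " << argv[i] << std::endl;
        }
        return 0;
    }

Running TestDemo.exe first second would print three lines: the program path, first, and second.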
The argv parameter is an array of strings, where each string represents one command-line argument. The first string in argv is always the current program name. The second prototype is the same as the first one, except for the return type. This means that the main function may or may not return a value. This value is returned to the OS. The third prototype has no arguments and returns an integer value, while the fourth prototype neither has arguments nor returns a value. It is good practice to use the first format.

The next command uses the cout object. The cout object is the name of the standard output stream in C++, and the meaning of the entire statement is to insert a sequence of characters (in this case, the Hello world sequence of characters) into the standard output stream (which usually corresponds to the screen). The cout object is declared in the iostream standard file within the std namespace. This is why we need to include the specific file and declare that we will use this specific namespace earlier in our code.

In our usual selection of the int _tmain(int, _TCHAR**) prototype, _tmain returns an integer. We must specify some int value after the return command, in this case 0. When returning a value to the operating system, 0 usually means success, but this is operating system dependent. This simple program is very easy to create. We use this simple example to demonstrate the basic program structure and the usage of the main routine as the entry point for every C++ program.

Programs with one thread are executed sequentially, line by line. This is why our program would not be user friendly if we placed all code into one thread: after the user gives some input, control is returned to the application, and only then can the application continue the execution. In order to overcome such an issue, we can create concurrent threads that will handle the user input. In this way, the application does not stall and is responsive all the time. After a thread handles its task, it can signal to the application that the user has performed the requested operation.

There's more...

Every time we have an operation that needs to be executed separately from the main execution flow, we have to think about a separate thread. The simplest example is when we have some calculations, and we want to have a progress bar where the calculation progress will be shown. If the same thread were responsible for the calculation as well as for updating the progress bar, it probably wouldn't work. This occurs because, if both the work and the UI update are performed from a single thread, that thread can't interact with OS painting adequately; so, almost always, the UI thread is separate from the working threads.

Let's review the following example. Assume that we have created a function where we will calculate something, for example, sines or cosines of some angle, and we want to display progress in every step of the calculation:

    void CalculateSomething(int iCount)
    {
        int iCounter = 0;
        while (iCounter++ < iCount)
        {
            //make calculation
            //update progress bar
        }
    }

As commands are executed one after another inside each iteration of the while loop, the operating system doesn't have the required time to properly update the user interface (in this case, the progress bar), so we would see an empty progress bar. After the function returns, a fully filled progress bar appears. The solution for this is to create the progress bar on the main thread and run the calculation on a separate thread, as the following sketch suggests.
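Here is a hedged sketch of that split using std::thread from C++11 (the original example predates this facility, and a real UI framework would provide its own signaling mechanism; an atomic counter and a textual progress bar stand in for both):

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<int> g_iProgress(0); // shared counter the main thread reads

    void CalculateSomething(int iCount)
    {
        for (int iCounter = 0; iCounter < iCount; ++iCounter)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(50)); // stand-in for real work
            g_iProgress = iCounter + 1; // signal progress; no UI calls from the worker
        }
    }

    int main()
    {
        const int iCount = 20;
        std::thread worker(CalculateSomething, iCount);
        while (g_iProgress < iCount)
        {
            // The "UI" stays responsive: it keeps repainting the progress display.
            std::cout << "\rProgress: " << g_iProgress << "/" << iCount << std::flush;
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        worker.join();
        std::cout << "\rProgress: " << iCount << "/" << iCount << " - done" << std::endl;
        return 0;
    }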
As the sketch shows, a separate thread executes the CalculateSomething function; in each step of the iteration, it signals the main thread so that the progress bar can be updated step by step. As we said before, threads are switched extremely fast on the CPU, so we can get the impression that the progress bar is updated at the same time as the calculation is performed. To conclude, each time we have to perform a parallel task, wait for some kind of user input, or wait for an external dependency such as a response from a remote server, we will create a separate thread so that our program won't hang and become unresponsive.

In our future examples, we will discuss static and dynamic libraries, so let's say a few words about both. A static library (*.lib) is usually some code placed in separate files and already compiled for future use. We will add it to the project when we want to use some of its features. When we wrote #include <iostream> earlier, we instructed the compiler to include a static library where implementations of input-output stream functions reside. A static library is built into the executable at compile time, before we actually run the program. A dynamic library (*.dll) is similar to a static library, but the difference is that it is not resolved during compilation; it is linked later, when we start the program, in other words, at runtime. Dynamic libraries are very useful when we have functions that a lot of programs will use. This way, we don't need to include these functions in every program; we simply link every program at runtime with one dynamic library. A good example is User32.dll, where the Windows OS has placed the majority of its GUI functions. So, if we create two programs where both have a window (GUI form), we do not need to include CreateWindow in both programs. We will simply link User32.dll at runtime, and the CreateWindow API will be available.

Summary

Thus the article covers the four main paradigms: imperative, declarative, functional (or structural), and object-oriented.

Resources for Article: Further resources on this subject: OpenGL 4.0: Building a C++ Shader Program Class [Article] Application Development in Visual C++ - The Tetris Application [Article] Building UI with XAML for Windows 8 Using C [Article]

Making a Better FAQ Page

Packt
24 Jul 2014
17 min read
(For more resources related to this topic, see here.)

Marking up the FAQ page

We'll get started by taking some extra care and attention with the way we mark up our FAQ list. As with most things that deal with web development, there's no right way of doing anything, so don't assume this approach is the only correct one. Any markup that makes sense semantically and makes it easy to enhance your list with CSS and JavaScript is perfectly acceptable.

Time for action – setting up the HTML file

Perform the following steps to get the HTML file set up for our FAQ page: We'll get started with our sample HTML file, the jQuery file, the scripts.js file, and the styles.css file. In this case, our HTML page will contain a definition list with the questions inside the <dt> tags and the answers wrapped in the <dd> tags. By default, most browsers will indent the <dd> tags, which means the questions hang into the left margin, making them easy to scan. Inside the <body> tag of your HTML document, add a heading and a definition list as shown in the following code:

    <h1>Frequently Asked Questions</h1>
    <dl>
        <dt>What is jQuery?</dt>
        <dd>
            <p>jQuery is an awesome JavaScript library</p>
        </dd>
        <dt>Why should I use jQuery?</dt>
        <dd>
            <p>Because it's awesome and it makes writing JavaScript faster and easier</p>
        </dd>
        <dt>Why would I want to hide the answers to my questions?</dt>
        <dd>
            <p>To make it easier to peruse the list of available questions - then you simply click to see the answer you're interested in reading.</p>
        </dd>
        <dt>What if my answers were a lot longer and more complicated than these examples?</dt>
        <dd>
            <p>The great thing about the &lt;dd&gt; element is that it's a block level element that can contain lots of other elements.</p>
            <p>That means your answer could contain:</p>
            <ul>
                <li>Unordered</li>
                <li>Lists</li>
                <li>with lots</li>
                <li>of items</li>
                <li>(or ordered lists or even another definition list)</li>
            </ul>
            <p>Or it might contain text with lots of <strong>special</strong> <em>formatting</em>.</p>
            <h2>Other things</h2>
            <p>It can even contain headings. Your answers could take up an entire screen or more all on their own - it doesn't matter since the answer will be hidden until the user wants to see it.</p>
        </dd>
        <dt>What if a user doesn't have JavaScript enabled?</dt>
        <dd>
            <p>You have two options for users with JavaScript disabled - which you choose might depend on the content of your page.</p>
            <p>You might just leave the page as it is - and make sure the &lt;dt&gt; tags are styled in a way that makes them stand out and easy to pick up when you're scanning down through the page. This would be a great solution if your answers are relatively short.</p>
            <p>If your FAQ page has long answers, it might be helpful to put a table of contents list of links to individual questions at the top of the page so users can click it to jump directly to the question and answer they're interested in. This is similar to what we did in the tabbed example, but in this case, we'd use jQuery to hide the table of contents when the page loaded since users with JavaScript wouldn't need to see the table of contents.</p>
        </dd>
    </dl>

You can adjust the style of the page however you'd like by adding in some CSS styles. For users with JavaScript disabled, this page works fine as is. The questions hang into the left margin and are bolder and larger than the rest of the text on the page, making them easy to scan.

What just happened?
We set up a basic definition list to hold our questions and answers. The default style of the definition list lends itself nicely to making the list of questions scannable for site visitors without JavaScript. We can enhance that further with our own custom CSS code to make the style of our list match our site.

As this simple collapse-and-show (or accordion) action is such a common one, two new elements have been proposed for HTML5, <summary> and <details>, which will enable us to build accordions in HTML without the need for JavaScript interactivity. However, at the time of writing this, the new elements are only supported in Webkit browsers, require some finagling to get them styled with CSS, and are also not accessible. Do keep an eye on these new elements to see if more widespread support for them develops. You can read about the elements in the HTML5 specs (http://www.whatwg.org/specs/web-apps/current-work/multipage/interactive-elements.html). If you'd like to understand the elements better, the HTML5 Doctor has a great tutorial that explains their use and styling at http://html5doctor.com/the-details-and-summary-elements/.

Time for action – moving around an HTML document

Perform the following steps to move from one element to another in JavaScript: We're going to keep working with the files we set up previously. Open up the scripts.js file that's inside your scripts folder. Add a document ready statement, then write a new empty function called dynamicFAQ, as follows:

    $(document).ready(function(){
    });

    function dynamicFAQ() {
        // Our function will go here
    }

Let's think through how we'd like this page to behave. We'd like to have all the answers to our questions hidden when the page is loaded. Then, when a user finds the question they're looking for, we'd like to show the associated answer when they click on the question. This means the first thing we'll need to do is hide all the answers when the page loads. Get started by adding a jsOff class to the <body> tag, as follows:

    <body class="jsOff">

Now, inside the document ready statement in scripts.js, add the line of code that removes the jsOff class and adds a jsOn class:

    $(document).ready(function(){
        $('body').removeClass('jsOff').addClass('jsOn');
    });

Finally, in the styles.css file, add this bit of CSS to hide the answers for the site visitors who have JavaScript enabled:

    .jsOn dd {
        display: none;
    }

Now if you refresh the page in the browser, you'll see that the <dd> elements and the content they contain are no longer visible. Now, we need to show the answer when the site visitor clicks on a question. To do that, we need to tell jQuery to do something whenever someone clicks on one of the questions, or the <dt> tags. Inside the dynamicFAQ function, add a line of code to add a click event handler to the <dt> elements, as shown in the following code:

    function dynamicFAQ() {
        $('dt').on('click', function(){
            //Show function will go here
        });
    }

When the site visitor clicks on a question, we want to get the answer to that question and show it, because our FAQ list is set up as follows:

    <dl>
        <dt>Question 1</dt>
        <dd>Answer to Question 1</dd>
        <dt>Question 2</dt>
        <dd>Answer to Question 2</dd>
        ...
    </dl>

We know that the answer is the next node or element in the DOM after our question. We'll start from the question. When a site visitor clicks on a question, we can get the current question by using jQuery's $(this) selector.
The user has just clicked on a question, and we say $(this) to mean the question they just clicked on. Inside the new click function, add $(this) so that we can refer to the clicked question, as follows:

    $('dt').on('click', function(){
        $(this);
    });

Now that we have the question that was just clicked, we need to get the next thing, or the answer to that question, so that we can show it. This is called traversing the DOM in JavaScript. It just means that we're moving to a different element in the document. jQuery gives us the next method to move to the next node in the DOM. We'll select our answer by inserting the following code:

    $('dt').on('click', function(){
        $(this).next();
    });

Now, we've moved from the question to the answer. All that's left to do is show the answer. To do so, add a line of code as follows:

    $('dt').on('click', function(){
        $(this).next().show();
    });

If you refresh the page in the browser, you might be disappointed to see that nothing happens when we click the questions. Don't worry; that's easy to fix. We wrote a dynamicFAQ() function, but we didn't call it. Functions don't work until they're called. Inside the document ready statement, call the function as follows:

    $(document).ready(function(){
        $('body').removeClass('jsOff').addClass('jsOn');
        dynamicFAQ();
    });

Now, if we load the page in the browser, you can see that all of our answers are hidden until we click on the question. This is nice and useful, but it would be even nicer if the site visitor could hide the answer again when they're done reading it, to get it out of their way. Luckily, this is such a common task that jQuery makes it very easy for us. All we have to do is replace our call to the show method with a call to the toggle method, as follows:

    $('dt').on('click', function(){
        $(this).next().toggle();
    });

Now when you refresh the page in the browser, you'll see that clicking on the question once shows the answer, and clicking on the question a second time hides the answer again.

What just happened?

We learned how to traverse the DOM: how to get from one element to another. Toggling the display of elements on a page is a common JavaScript task, so jQuery already has built-in methods to handle it and make it simple and straightforward to get this up and running on our page. That was pretty easy; just a few lines of code.

Sprucing up our FAQ page

That was so easy, in fact, that we have plenty of time left over to enhance our FAQ page to make it even better. This is where the power of jQuery becomes apparent: you can not only create a show/hide FAQ page, but you can make it a fancy one and still meet your deadline. How's that for impressing a client or your boss?

Time for action – making it fancy

Perform the following steps to add some fancy new features to the FAQ page: Let's start with a little CSS code to change the cursor to a pointer and add a little hover effect to our questions, to make it obvious to site visitors that the questions are clickable. Open up the styles.css file that's inside the styles folder and add the following bit of CSS code:

    .jsOn dt {
        cursor: pointer;
    }
    .jsOn dt:hover {
        color: #ac92ec;
    }

We're only applying these styles for those site visitors that have JavaScript enabled. These styles definitely help to communicate to the site visitor that the questions are clickable. You might also choose to change something other than the font color for the hover effect. Feel free to style your FAQ list however you'd like.
Now that we've made it clear that our <dt> elements can be interacted with, let's take a look at how to show the answers in a nicer way. When we click on a question to see the answer, the change isn't communicated to the site visitor very well; the jump in the page is a little disconcerting, and it takes a moment to realize what just happened. It would be nicer and easier to understand if the answers were to slide into view. The site visitor could literally see the answer appearing and would understand immediately what change just happened on the screen. jQuery makes that easy for us. We just have to replace our call to the toggle method with a call to the slideToggle method:

    $('dt').on('click', function(){
        $(this).next().slideToggle();
    });

Now if you view the page in your browser, you can see that the answers slide smoothly in and out of view when the question is clicked. It's easy to understand what's happening when the page changes, and the animation is a nice touch.

Now, there's just one little detail we've still got to take care of. Depending on how you've styled your FAQ list, you might see a little jump in the answer at the end of the animation. This is caused by some extra margins around the <p> tags inside the <dd> element. They don't normally cause any issues in HTML, and browsers can figure out how to display them correctly. However, when we start working with animation, sometimes this becomes a problem. It's easy to fix. Just remove the top margin from the <p> tags inside the FAQ list as follows:

    .content dd p {
        margin-top: 0;
    }

If you refresh the page in the browser, you'll see that the little jump is now gone and our animation smoothly shows and hides the answers to our questions.

What just happened?

We replaced our toggle method with the slideToggle method to animate the showing and hiding of the answers. This makes it easier for the site visitor to understand the change that's taking place on the page. We also added some CSS to make the questions appear to be clickable, to communicate the abilities of our page to our site visitors.

We're almost there! jQuery made animating that show and hide so easy that we've still got time left over to enhance our FAQ page even more. It would be nice to add some sort of indicator to our questions to show that they're collapsed and can be expanded, and to add some sort of special style to our questions once they're opened, to show that they can be collapsed again.

Time for action – adding some final touches

Perform the following steps to add some finishing touches to our FAQ list: Let's start with some simple CSS code to add a small arrow icon to the left side of our questions. Head back into styles.css and modify the styles a bit to add an arrow, as follows:

    .jsOn dt:before {
        border: 0.5em solid;
        border-color: transparent transparent transparent #f2eeef;
        content: '';
        display: inline-block;
        height: 0;
        margin-right: 0.5em;
        vertical-align: middle;
        width: 0;
    }
    .jsOn dt:hover:before {
        border-left-color: #ac92ec;
    }

You might be wondering about this sort of odd bit of CSS. This is a technique to create triangles in pure CSS without having to use any images. If you're not familiar with this technique, I recommend checking out appendTo's blog post that explains pure CSS triangles at http://appendto.com/2013/03/pure-css-triangles-explained/. We've also included a hover style so that the triangle will match the text color when the site visitor hovers their mouse over the question.
Note that we're using the jsOn class so that arrows don't get added to the page unless the site visitors have JavaScript enabled. Next, we'll change the arrow to a different orientation when the question is opened. We'll create a new CSS class, open, and use it to define some new styles for our CSS arrow, using the following code:

    .jsOn dt.open:before {
        border-color: #f2eeef transparent transparent transparent;
        border-bottom-width: 0;
    }
    .jsOn dt.open:hover:before {
        border-left-color: transparent;
        border-top-color: #ac92ec;
    }

Just make sure you add these new classes after the other CSS we're using to style our <dt> tags. This will ensure that the CSS cascades the way we intended. So we have our CSS code to change the arrows and show that our questions are open, but how do we actually use that new class? We'll use jQuery to add the class to our question when it is opened and to remove the class when it's closed. jQuery provides some nice methods to work with CSS classes. The addClass method will add a class to a jQuery object, and the removeClass method will remove a class. However, we want to toggle our class just like we're toggling the show and hide of our questions, and jQuery's got us covered for that too. We want the class to change when we click on the question, so we'll add a line of code inside our dynamicFAQ function that we're calling each time a <dt> tag is clicked, as follows:

    $('dt').on('click', function(){
        $(this).toggleClass('open');
        $(this).next().slideToggle();
    });

Now when you view the page, you'll see your open styles being applied to the <dt> tags when they're open and removed again when they're closed. However, we can actually crunch our code to be a little bit smaller. Remember how we chain methods in jQuery? We can take advantage of chaining again. We have a bit of redundancy in our code because we're starting two different lines with $(this). We can remove this extra $(this) and just add our toggleClass method to the chain we've already started, as follows:

    $(this).toggleClass('open').next().slideToggle();

This helps keep our code short and concise, and just look at what we're accomplishing in one line of code!

What just happened?

We created the CSS styles to style the open and closed states of our questions, and then we added a bit of code to our JavaScript to change the CSS class of the question to use our new styles. jQuery provides a few different methods to update CSS classes, which is often a quick and easy way to update the display of our document in response to input from the site visitor. In this case, since we wanted to add and remove a class, we used the toggleClass method. It saved us from having to figure out on our own whether we needed to add or remove the open class. We also took advantage of chaining to simply add this new functionality to our existing line of code, making the animated show and hide of the answer and the change of CSS class of our question happen all in just one line of code. How's that for impressive power in a small amount of code?
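Pulling the snippets from this article together, the finished scripts.js would look something like the following sketch; everything here is assembled from the steps above, with nothing new introduced:

    $(document).ready(function(){
        // Flag that JavaScript is available so the CSS can hide the answers.
        $('body').removeClass('jsOff').addClass('jsOn');
        dynamicFAQ();
    });

    function dynamicFAQ() {
        $('dt').on('click', function(){
            // Toggle the open class on the question, then slide its answer in or out.
            $(this).toggleClass('open').next().slideToggle();
        });
    }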
Summary

You learned how to set up a basic FAQ page that hides the answers to the questions until the site visitor needs to see them. Because jQuery made this so simple, we had plenty of time left over to enhance our FAQ page even more, adding animations to the showing and hiding of the answers, and taking advantage of CSS to style our questions with special open and closed classes to communicate to our site visitors how our page works. And we did all of that with just a few lines of code!

Resources for Article: Further resources on this subject: Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 1 [article] Using jQuery and jQuery Animation: Tips and Tricks [article] Using jQuery and jQueryUI Widget Factory plugins with RequireJS [article]

Introspecting Maya, Python, and PyMEL

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.) Maya and Python are both excellent and elegant tools that can together achieve amazing results. And while it may be tempting to dive in and start wielding this power, it is prudent to understand some basic things first. In this article, we will look at Python as a language, Maya as a program, and PyMEL as a framework. We will begin by briefly going over how to use the standard Python interpreter, the Maya Python interpreter, the Script Editor in Maya, and your Integrated Development Environment (IDE) or text editor in which you will do the majority of your development. Our goal for the article is to build a small library that can easily link us to documentation about Python and PyMEL objects. Building this library will illuminate how Maya, Python, and PyMEL are designed, and demonstrate why PyMEL is superior to maya.cmds. We will use the powerful technique of type introspection to teach us more about Maya's node-based design than any Hypergraph or static documentation can.

Creating your library

There are generally three different modes you will be developing in while programming Python in Maya: using the mayapy interpreter to evaluate short bits of code and explore ideas, using your Integrated Development Environment to work on the bulk of the code, and using Maya's Script Editor to help iterate and test your work. In this section, we'll start learning how to use all three tools to create a very simple library.

Using the interpreter

The first thing we must do is find your mayapy interpreter. It should be next to your Maya executable, named mayapy or mayapy.exe. It is a Python interpreter that can run Python code as if it were being run in a normal Maya session. When you launch it, it will start up the interpreter in interactive mode, which means you enter commands and it gives you results, interactively. The >>> and ... characters in code blocks indicate something you should enter at the interactive prompt; the code listing in the article and your prompt should look basically the same. In later listings, long output lines will be elided with ... to save on space. Start a mayapy process by double clicking or calling it from the command line, and enter the following code:

    >>> print 'Hello, Maya!'
    Hello, Maya!
    >>> def hello():
    ...     return 'Hello, Maya!'
    ...
    >>> hello()
    'Hello, Maya!'

The first statement prints a string, which shows up under the prompting line. The second statement is a multiline function definition. The ... indicates the line is part of the preceding line. The blank line following the ... indicates the end of the function. For brevity, we will leave out empty ... lines in other code listings. After we define our hello function, we invoke it. It returns the string "Hello, Maya!", which is printed out beneath the invocation.

Finding a place for our library

Now, we need to find a place to put our library file. In order for Python to load the file as a module, it needs to be on some path where Python can find it. We can see all available paths by looking at the path list on the sys module:

    >>> import sys
    >>> for p in sys.path:
    ...     print p
    C:\Program Files\Autodesk\Maya2013\bin\python26.zip
    C:\Program Files\Autodesk\Maya2013\Python\DLLs
    C:\Program Files\Autodesk\Maya2013\Python\lib
    C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
    C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
    C:\Program Files\Autodesk\Maya2013\bin
    C:\Program Files\Autodesk\Maya2013\Python
    C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages

A number of paths will print out; I've replicated what's on my Windows system, but yours will almost definitely be different. Unfortunately, the default paths don't give us a place to put custom code. They are application installation directories, which we should not modify. Instead, we should be doing our coding outside of all the application installation directories. In fact, it's a good practice to avoid editing anything in the application installation directories entirely.

Choosing a development root

Let's decide where we will do our coding. To be concise, I'll choose C:\mayapybook\pylib to house all of our Python code, but it can be anywhere. You'll need to choose something appropriate if you are on OSX or Linux; we will use ~/mayapybook/pylib as our path on these systems, but I'll refer only to the Windows path except where more clarity is needed. Create the development root folder, and inside of it create an empty file named minspect.py.

Now, we need to get C:\mayapybook\pylib onto Python's sys.path so it can be imported. The easiest way to do this is to use the PYTHONPATH environment variable. From a Windows command line, you can run the following to add the path and ensure it worked:

    > set PYTHONPATH=%PYTHONPATH%;C:\mayapybook\pylib
    > mayapy.exe
    >>> import sys
    >>> 'C:\\mayapybook\\pylib' in sys.path
    True
    >>> import minspect
    >>> minspect
    <module 'minspect' from '...minspect.py'>

The following are the equivalent commands on OSX or Linux:

    $ export PYTHONPATH=$PYTHONPATH:~/mayapybook/pylib
    $ mayapy
    >>> import sys
    >>> '~/mayapybook/pylib' in sys.path
    True
    >>> import minspect
    >>> minspect
    <module 'minspect' from '.../minspect.py'>

There are actually a number of ways to get your development root onto Maya's path. The option presented here (using environment variables before starting Maya or mayapy) is just one of the more straightforward choices, and it works for mayapy as well as normal Maya. Calling sys.path.append('C:\\mayapybook\\pylib') inside your userSetup.py file, for example, would work for Maya but not mayapy (you would need to use maya.standalone.initialize to register user paths, as we will do later).

Using set or export to set environment variables only works for the current process and any new children. If you want it to work for unrelated processes, you may need to modify your global or user environment. Each OS is different, so you should refer to your operating system's documentation or a Google search. Some possibilities are setx from the Windows command line, editing /etc/environment in Linux, or editing /etc/launchd.conf on OS X. If you are in a studio environment and don't want to make changes to people's machines, you should consider an alternative such as using a script to launch Maya that will set up the PYTHONPATH, instead of launching the maya executable directly.

Creating a function in your IDE

Now it is time to use our IDE to do some programming. We'll start by turning the path printing code we wrote at the interactive prompt into a function in our file.
Open C:\mayapybook\pylib\minspect.py in your IDE and type the following code:

    import sys

    def syspath():
        print 'sys.path:'
        for p in sys.path:
            print '    ' + p

Save the file, and bring up your mayapy interpreter. If you've closed down the one from the last session, make sure C:\mayapybook\pylib (or whatever you are using as your development root) is present on your sys.path, or the following code will not work! See the preceding section for making sure your development root is on your sys.path.

    >>> import minspect
    >>> reload(minspect)
    <module 'minspect' from '...minspect.py'>
    >>> minspect.syspath()
    sys.path:
        C:\Program Files\Autodesk\Maya2013\bin\python26.zip
        C:\Program Files\Autodesk\Maya2013\Python\DLLs
        C:\Program Files\Autodesk\Maya2013\Python\lib
        C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
        C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
        C:\Program Files\Autodesk\Maya2013\bin
        C:\Program Files\Autodesk\Maya2013\Python
        C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages

First, we import the minspect module. It may already be imported if this was an old mayapy session. That is fine, as importing an already-imported module is fast in Python and causes no side effects. We then use the reload function, which we will explore in the next section, to make sure the most up-to-date code is loaded. Finally, we call the syspath function, and its output is printed. Your actual paths will likely vary.

Reloading code changes

It is very common as you develop that you'll make changes to some code and want to immediately try out the changed code without restarting Maya or mayapy. You can do that with Python's built-in reload function. The reload function takes a module object and reloads it from disk so that the new code will be used. When we jump between our IDE and the interactive interpreter (or the Maya application) as we did earlier, we will usually reload the code to see the effect of our changes. I will usually write out the import and reload lines, but occasionally will only mention them in text preceding the code.

Keep in mind that reload is not a magic bullet. When you are dealing with simple data and functions as we are here, it is usually fine. But as you start building class hierarchies, decorators, and other things that have dependencies or state, the situation can quickly get out of control. Always test your code in a fresh version of Maya before declaring it done, to be sure it does not have some lingering defect hidden by reloading. Though once you are a master Pythonista you can ignore these warnings and figure out how to reload just about anything!
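Since reload only works on a module object you already hold, a tiny helper can speed up this iteration loop. The following is our own convenience sketch, not part of the minspect library being built in this article:

    import sys

    def reload_by_name(name):
        """Reload an already-imported module given its dotted name.

        A convenience sketch for iterating in mayapy; raises KeyError if
        the module has not been imported yet.
        """
        module = sys.modules[name]
        reload(module)  # Python 2 built-in, available in Maya's Python 2.6
        return module

With this in place, >>> reload_by_name('minspect') refreshes the library without needing a reference to the module in the current namespace.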

Creating an Application using ASP.NET MVC, AngularJS and ServiceStack

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.) Routing considerations for ASP.NET MVC and AngularJS In the previous example, we had to make changes to the ASP.NET MVC routing so it ignores the requests handled by the ServiceStack framework. Since the AngularJS application currently uses hashbang URLs, we don't need to make any other changes to the ASP.NET MVC routing. Changing an AngularJS application to use the HTML5 History API instead of hashbang URLs requires a lot more work as it will conflict directly with the ASP.NET MVC routing. You need to set up IIS URL rewriting and use the URL Rewrite module for IIS 7 and higher, which is available at www.iis.net/downloads/microsoft/url-rewrite. AngularJS application routes have to be mapped using this module to the ASP.NET MVC view that hosts the client-side application. We also need to ensure that web service request paths are excluded from URL rewriting. You can explore some changes required for the HTML5 navigation mode in the project found in the Example2 folder from the source code for this article. The HTML5 History API is not supported in Internet Explorer 8 and 9. Using ASP.NET bundling and minification features for AngularJS files So far, we have referenced and included JavaScript and CSS files directly in the _Layout.cshtml file. This makes it difficult to reuse script references between different views, and the assets are not concatenated and minified when deployed to a production environment. Microsoft provides a NuGet package called Microsoft.AspNet.Web.Optimization that contains this essential functionality. When you create a new ASP.NET MVC project, it gets installed and configured with default options. First, we need to add a new BundleConfig.cs file, which will define collections of scripts and style sheets under a virtual path, such as ~/bundles/app, that does not match a physical file. This file will contain the following code: bundles.Add(new ScriptBundle("~/bundles/app").Include( "~/scripts/app/app.js", "~/scripts/app/services/*.js", "~/scripts/app/controllers/*.js")); You can explore these changes in the project found in the Example3 folder from the source code for this article. If you take a look at the BundleConfig.cs file, you will see three script bundles and one style sheet bundle defined. Nothing is stopping you from defining only one script bundle instead, to reduce the resource requests further. We can now reference the bundles in the _Layout.cshtml file and replace the previous scripts with the following code: @Scripts.Render("~/bundles/basejs") @Scripts.Render("~/bundles/angular") @Scripts.Render("~/bundles/app") Each time we add a new file to a location like ~/scripts/app/services/ it will automatically be included in its bundle. 
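For context, a minimal BundleConfig class wrapping the snippet above might look like the following sketch. The class shape follows the standard Microsoft.AspNet.Web.Optimization pattern and the RegisterBundles method named in this article; the style bundle name and its contents are our own assumptions:

    using System.Web.Optimization;

    public class BundleConfig
    {
        // Called once at startup, typically from Application_Start in Global.asax:
        // BundleConfig.RegisterBundles(BundleTable.Bundles);
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/app").Include(
                "~/scripts/app/app.js",
                "~/scripts/app/services/*.js",
                "~/scripts/app/controllers/*.js"));

            // A hypothetical style sheet bundle, shown for completeness.
            bundles.Add(new StyleBundle("~/bundles/css").Include(
                "~/styles/site.css"));
        }
    }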
If we add the following line of code to the BundleConfig.RegisterBundles method, when we run the application, the scripts or style sheets defined in a bundle will be minified (all of the whitespace, line separators, and comments will be removed) and concatenated in a single file: BundleTable.EnableOptimizations = true; If we take a look at the page source, the script section now looks like the following code: <script src ="/bundles/basejs?v=bWXds_q0E1qezGAjF9o48iD8-hlMNv7nlAONwLLM0Wo1"></script> <script src ="/bundles/angular?v=k-PtTeaKyBiBwT4gVnEq9YTPNruD0u7n13IOEzGTvfw1"></script> <script src ="/bundles/app?v=OKa5fFQWjXSQCNcBuWm9FJLcPFS8hGM6uq1SIdZNXWc1"></script> Using this process, the previous separate requests for each script or style sheet file will be reduced to a request to one or more bundles that are much reduced in content due to concatenation and minification. For convenience, there is a new EnableOptimizations value in web.config that will enable or disable the concatenation and minification of the asset bundles. Securing the AngularJS application We previously discussed that we need to ensure that all browser requests are secured and validated on the server for specific scenarios. Any browser request can be manipulated and changed even unintentionally, so we cannot rely on client-side validation alone. When discussing securing an AngularJS application, there are a couple of alternatives available, of which I'll mention the following: You can use client-side authentication and employ a web service call to authenticate the current user. You can create a time-limited authentication token that will be passed with each data request. This approach involves additional code in the AngularJS application to handle authentication. You can rely on server-side authentication and use an ASP.NET MVC view that will handle any unauthenticated request. This view will redirect to the view that hosts the AngularJS application only when the authentication is successful. The AngularJS application will implicitly use an authentication cookie that is set on the server side, and it does not need any additional code to handle authentication. I prefer server-side authentication as it can be reused with other server-side views and reduces the code required to implement it on both the client side and server side. We can implement server-side authentication in at least two ways, as follows: We can use the ASP.NET Identity system or the older ASP.NET Membership system for scenarios where we need to integrate with an existing application We can use built-in ServiceStack authentication features, which have a wide range of options with support for many authentication providers. This approach has the benefit that we can add a set of web service methods that can be used for authentication outside of the ASP.NET MVC context. The last approach ensures the best integration between ASP.NET MVC and ServiceStack, and it allows us to introduce a ServiceStack NuGet package that provides new productivity benefits for our sample application. Using the ServiceStack.Mvc library ServiceStack has a library that allows deeper integration with ASP.NET MVC through the ServiceStack.Mvc NuGet package. This library provides access to the ServiceStack dependency injection system for ASP.NET MVC applications. It also introduces a new base controller class called ServiceStackController; this can be used by ASP.NET MVC controllers to gain access to the ServiceStack caching, session, and authentication infrastructures. 
To install this package, you need to run the following command in the NuGet Package Manager Console:

    Install-Package ServiceStack.Mvc -Version 3.9.71

The following line needs to be added to the AppHost.Configure method, and it will register a ServiceStack controller factory class for ASP.NET MVC:

    ControllerBuilder.Current.SetControllerFactory(new FunqControllerFactory(container));

The ControllerBuilder.Current.SetControllerFactory method is an ASP.NET MVC extension point that allows the replacement of its DefaultControllerFactory class with a custom one. This class is tasked with matching requests with controllers, among other responsibilities. The FunqControllerFactory class provided in the new NuGet package inherits the DefaultControllerFactory class and ensures that all controllers that have dependencies managed by the ServiceStack dependency injection system will be resolved at application runtime. To exemplify this, the BicycleRepository class is now referenced in the HomeController class, as shown in the following code:

    public class HomeController : Controller
    {
        public BicycleRepository BicycleRepository { get; set; }

        //
        // GET: /Home/
        public ActionResult Index()
        {
            ViewBag.BicyclesCount = BicycleRepository.GetAll().Count();
            return View();
        }
    }

The application menu now displays the current number of bicycles as initialized in the BicycleRepository class. If we add a new bicycle and refresh the browser page, the menu bicycle count is updated. This highlights the fact that the ASP.NET MVC application uses the same BicycleRepository instance as the ServiceStack web services. You can explore this example in the project found in the Example4 folder from the source code for this article.

Using the ServiceStack.Mvc library, we have reached a new milestone by bridging ASP.NET MVC controllers with ServiceStack services. In the next section, we will effectively transition to a single server-side application with unified caching, session, and authentication infrastructures.

The building blocks of the ServiceStack security infrastructure

ServiceStack has built-in, optional authentication and authorization provided by its AuthFeature plugin, which builds on two other important components, as follows:

- Caching: Every service or controller powered by ServiceStack has optional access to an ICacheClient interface instance that provides cache-related methods. The interface needs to be registered as an instance of one of the many caching providers available: an in-memory cache, a relational database cache, a cache based on a key-value data store using Redis, a memcached-based cache, a Microsoft Azure cache, and even a cache based on Amazon DynamoDB.

- Sessions: These are enabled by the SessionFeature ServiceStack plugin and rely on the caching component when the AuthFeature plugin is not enabled. Every service or controller powered by ServiceStack has an ISession property that provides read and write access to the session data. Each ServiceStack request automatically has two cookies set: an ss-id cookie, which is a regular session cookie, and an ss-pid cookie, which is a permanent cookie with an expiry date set far in the future. You can also gain access to a typed session as part of the AuthFeature plugin that will be explored next.
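Before moving on, here is a hedged sketch of how these two building blocks might be wired up inside AppHost.Configure for the ServiceStack 3.x version used in this article; the in-memory cache choice is an assumption suitable only for a single-server setup:

    // Inside AppHost.Configure(Funq.Container container) - a sketch for ServiceStack 3.x.
    // Register an in-memory cache; a Redis or memcached client could be
    // swapped in for multi-server deployments.
    container.Register<ICacheClient>(new MemoryCacheClient());

    // Enable the session plugin so the ss-id and ss-pid cookies are issued
    // and the ISession property becomes available to services and controllers.
    Plugins.Add(new SessionFeature());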

The Importance of Securing Web Services

Packt
23 Jul 2014
10 min read
(For more resources related to this topic, see here.) In the upcoming sections of this article, we are going to briefly explain several concepts about the importance of securing web services.

The importance of security

The management of security is one of the main aspects to consider when designing applications. No matter what, neither the functionality nor the information of an organization can be exposed to all users without any kind of restriction. Consider a human resource management application that allows you to consult the wages of employees: if the company manager needs to know the salary of one of their employees, it is not something of great importance. But in the same context, imagine that one of the employees wants to know the salary of their colleagues; if access to this information is completely open, it could generate problems among employees with varied salaries.

Security management options

Java provides some options for security management. Right now, we will explain some of them and demonstrate how to implement them. All authentication methods are practically based on delivering credentials from the client to the server. In order to perform this, there are several methods:

- BASIC authentication
- DIGEST authentication
- CLIENT CERT authentication
- Using API keys

Security management in applications built with Java, including those with RESTful web services, always relies on JAAS.

Basic authentication by providing user credentials

This is possibly one of the most used techniques in all kinds of applications. Before gaining access to functionality in the application, the user is asked to enter a username and password; both are validated in order to verify that the credentials are correct, that is, that they belong to an application user. We are 99 percent sure you have performed this technique at least once, maybe through a customized mechanism, or, if you used the JEE platform, probably through JAAS. This kind of control is known as basic authentication.

In order to have a working example, let's start our application server, JBoss AS 7, then go to the bin directory and execute the add-user.bat file (the .sh file for UNIX users), and create a new user. As a result, we will have a new user in the JBOSS_HOME/standalone/configuration/application-users.properties file. JBoss already ships with a default security domain called other, which uses the information stored in the file we just mentioned in order to authenticate. Right now, we are going to configure the application to use this security domain. Inside the WEB-INF folder of the resteasy-examples project, let's create a file named jboss-web.xml with the following content:

    <?xml version="1.0" encoding="UTF-8"?>
    <jboss-web>
        <security-domain>other</security-domain>
    </jboss-web>

Alright, let's configure the web.xml file in order to add the security constraints.
The following block of code shows what you should add (the security elements are the additions):

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
        <!-- Roles -->
        <security-role>
            <description>Any role</description>
            <role-name>*</role-name>
        </security-role>
        <!-- Resource / Role Mapping -->
        <security-constraint>
            <display-name>Area secured</display-name>
            <web-resource-collection>
                <web-resource-name>protected_resources</web-resource-name>
                <url-pattern>/services/*</url-pattern>
                <http-method>GET</http-method>
                <http-method>POST</http-method>
            </web-resource-collection>
            <auth-constraint>
                <description>User with any role</description>
                <role-name>*</role-name>
            </auth-constraint>
        </security-constraint>
        <login-config>
            <auth-method>BASIC</auth-method>
        </login-config>
    </web-app>

From a terminal, let's go to the home folder of the resteasy-examples project and execute mvn jboss-as:redeploy. Now, we are going to test our web service as we did earlier by using SoapUI, performing a request with the POST method against the service URL. SoapUI shows us the HTTP 401 error; this means that the request wasn't authorized. This is because we performed the request without delivering the credentials to the server.

Digest access authentication

This authentication method makes use of a hash function to encrypt the password entered by the user before sending it to the server. This obviously makes it much safer than the BASIC authentication method, in which the user's password travels in plain text that can be easily read by whoever intercepts it. To overcome such drawbacks, digest MD5 authentication applies a hash function to the combination of the username, the application security realm, and the password. As a result, we obtain an encrypted string that can hardly be interpreted by an intruder.

Now, in order to perform what we explained before, we need to generate a password for our example user, and we have to generate it using the parameters we talked about earlier: username, realm, and password. Let's go into the JBOSS_HOME/modules/org/picketbox/main/ directory from a terminal and type the following:

    java -cp picketbox-4.0.7.Final.jar org.jboss.security.auth.callback.RFC2617Digest username MyRealmName password

We will obtain the following result:

    RFC2617 A1 hash: 8355c2bc1aab3025c8522bd53639c168

Through this process, we obtain the encrypted password and use it in our password storage file (JBOSS_HOME/standalone/configuration/application-users.properties). We must replace the password in the file, and it will be used for the user username. We have to replace it because the old password doesn't contain the realm name information of the application.

Next, we have to modify the web.xml file, changing the value of the auth-method tag from BASIC to DIGEST, and we should set the application realm name this way:

    <login-config>
        <auth-method>DIGEST</auth-method>
        <realm-name>MyRealmName</realm-name>
    </login-config>

Now, let's create a new security domain in JBoss so we can manage the DIGEST authentication mechanism.
Now, let's create a new security domain in JBoss so we can manage the DIGEST authentication mechanism. In the JBOSS_HOME/standalone/configuration/standalone.xml file, in the <security-domains> section, let's add the following entry:

<security-domain name="domainDigest" cache-type="default">
  <authentication>
    <login-module code="UsersRoles" flag="required">
      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
      <module-option name="hashAlgorithm" value="MD5"/>
      <module-option name="hashEncoding" value="RFC2617"/>
      <module-option name="hashUserPassword" value="false"/>
      <module-option name="hashStorePassword" value="true"/>
      <module-option name="passwordIsA1Hash" value="true"/>
      <module-option name="storeDigestCallback" value="org.jboss.security.auth.callback.RFC2617Digest"/>
    </login-module>
  </authentication>
</security-domain>

Finally, in the application, change the security domain name in the jboss-web.xml file, as shown in the following snippet:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/domainDigest</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application on JBoss. To do this, execute the following command in the terminal:

mvn jboss-as:redeploy

Authentication through certificates

This is a mechanism in which a trust agreement is established between the server and the client through certificates. The certificates must be signed by an agency established to ensure that the certificate presented for authentication is legitimate; such an agency is known as a certification authority (CA). This security mechanism requires that our application server uses HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file; look for the following line:

<connector name="http"

Then add the following block of code:

<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">
  <ssl password="changeit" certificate-key-file="${jboss.server.config.dir}/server.keystore" verify-client="want" ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
</connector>

Next, we add the security domain:

<security-domain name="RequireCertificateDomain">
  <authentication>
    <login-module code="CertificateRoles" flag="required">
      <module-option name="securityDomain" value="RequireCertificateDomain"/>
      <module-option name="verifier" value="org.jboss.security.auth.certs.AnyCertVerifier"/>
      <module-option name="usersProperties" value="${jboss.server.config.dir}/my-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/my-roles.properties"/>
    </login-module>
  </authentication>
  <jsse keystore-password="changeit" keystore-url="file:${jboss.server.config.dir}/server.keystore" truststore-password="changeit" truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
</security-domain>

As you can see, we need two files: my-users.properties and my-roles.properties. Both are empty and located in the JBOSS_HOME/standalone/configuration path.
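The connector and security domain above also expect server.keystore and server.truststore files in that same configuration directory. For a local test with self-signed certificates, these can be generated with the JDK's keytool; the following is a minimal sketch, where the aliases, distinguished names, and validity period are assumptions:

# 1. Server identity used by the HTTPS connector
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -dname "CN=localhost" -keystore server.keystore -storepass changeit -keypass changeit

# 2. A client identity; the client (for example, SoapUI) will present this certificate
keytool -genkeypair -alias client -keyalg RSA -keysize 2048 -validity 365 -dname "CN=testclient" -keystore client.keystore -storepass changeit -keypass changeit

# 3. Export the client certificate and import it into the server truststore,
#    so the connector (verify-client="want") can validate the client
keytool -exportcert -alias client -keystore client.keystore -storepass changeit -file client.crt
keytool -importcert -alias client -file client.crt -keystore server.truststore -storepass changeit -noprompt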
We are going to add the <user-data-constraint> tag to the security-constraint in web.xml, in this way:

<security-constraint>
  ...
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Then, change the authentication method to CLIENT-CERT:

<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>

And finally, change the security domain in the jboss-web.xml file in the following way:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>RequireCertificateDomain</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application with Maven:

mvn jboss-as:redeploy

API keys

With the advent of cloud computing, it is not difficult to think of applications that integrate with many others available in the cloud. Right now, it is easy to see how applications interact with Flickr, Facebook, Twitter, Tumblr, and so on through the use of API keys. This authentication method is used primarily when we need to authenticate from another application but do not need to access the private user data hosted by that application; if, on the contrary, you want to access this information, you must use OAuth. Today it is very easy to get an API key: simply sign up with one of the many cloud providers and obtain credentials, consisting of a KEY and a SECRET, which are needed to interact with the provider's authentication services. Keep in mind that when creating an API key, you accept the supplier's terms of service, which clearly state what you can and cannot do, protecting the provider against abusive users trying to degrade its services.

Summary

In this article, we went through several models of authentication. We can apply them to any web service functionality we create. As you have seen, it is important to choose the correct security management; otherwise information is exposed and can easily be intercepted and used by third parties. Therefore, tread carefully.

Resources for Article:

Further resources on this subject:
RESTful Java Web Services Design [Article]
Debugging REST Web Services [Article]
RESTful Services JAX-RS 2.0 [Article]
Indexes

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.)

As a database administrator (DBA) or developer, one of your most important goals is to ensure that query times are consistent with the service-level agreement (SLA) and meet user expectations. Along with other performance enhancement techniques, creating indexes on the underlying tables of your queries is one of the most effective and common ways to achieve this objective. The index of an underlying relational table is very similar in purpose to the index section at the back of a book. For example, instead of flipping through each page of the book, you use the index section at the back to quickly find particular information or a topic within the book. In the same way, instead of scanning each individual row on a data page, SQL Server uses indexes to quickly find the data for a qualifying query. Therefore, by indexing an underlying relational table, you can significantly enhance the performance of your database. Indexing affects processing speed for both OLTP and OLAP workloads and helps you achieve optimum query performance and response time.

The cost associated with indexes

SQL Server uses indexes to optimize overall query performance. However, there is also a cost associated with indexes: they slow down insert, update, and delete operations. Therefore, it is important to weigh the costs and benefits of indexes when you plan your indexing strategy.

How SQL Server uses indexes

A table that doesn't have a clustered index is stored in a set of data pages called a heap. Initially, the data in a heap is stored in the order in which rows are inserted into the table. However, the SQL Server Database Engine moves the data around the heap to store the rows efficiently. Therefore, you cannot predict the order of the rows in a heap, because data pages are not sequenced in any particular order. The only way to guarantee the order of the rows returned from a heap is to use the SELECT statement with the ORDER BY clause.

Access without an index

When you access the data, SQL Server first determines whether a suitable index is available for the submitted SELECT statement. If no suitable index is found, SQL Server retrieves the data by scanning the entire table. The database engine begins scanning at the physical beginning of the table and scans through the full table, page by page and row by row, to look for the qualifying data specified in the submitted SELECT statement. Then, it extracts and returns the rows that meet the criteria, in the format specified in the statement.

Access with an index

The process is improved when indexes are present. If an appropriate index is available, SQL Server uses it to locate the data. An index improves the search process by sorting the data on the key columns. The database engine begins scanning from the first page of the index and scans only those pages that potentially contain qualifying data, based on the index structure and key columns. Finally, it retrieves either the data rows themselves or pointers that contain the locations of the data rows, allowing direct row retrieval.
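Before tuning access paths, it helps to know how a table is currently stored. The sys.indexes catalog view reports a heap with index_id = 0 and a clustered index with index_id = 1. The following is a minimal T-SQL sketch; the table name reuses the AdventureWorks table queried later in this article:

-- index_id = 0 means the table is a heap; index_id = 1 is the clustered index
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('Purchasing.Vendor');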
The structure of indexes

In SQL Server, all indexes (except full-text, XML, in-memory optimized, and columnstore indexes) are organized as a balanced tree (B-tree). This is because full-text indexes use their own engine to manage and query full-text catalogs, XML indexes are stored as internal SQL Server tables, in-memory optimized indexes use the Bw-tree structure, and columnstore indexes utilize SQL Server in-memory technology. In the B-tree structure, each page is called a node. The top page of the B-tree structure is called the root node. Non-leaf nodes, also referred to as intermediate levels, are hierarchical tree nodes that comprise the index sort order. Non-leaf nodes point to other non-leaf nodes one step below them in the B-tree hierarchy, down to the leaf nodes. Leaf nodes are at the bottom of the B-tree hierarchy. The following diagram illustrates the typical B-tree structure:

Index types

In SQL Server 2014, you can create several types of indexes. They are explored in the next sections.

Clustered indexes

A clustered index sorts table or view rows in order of the clustered index key column values. In short, the leaf nodes of a clustered index contain the data pages, and scanning them returns the actual data rows. Therefore, a table can have only one clustered index. Unless you explicitly specify a nonclustered index, SQL Server automatically creates a clustered index when you define a PRIMARY KEY constraint on a table.

When should you have a clustered index on a table? Although it is not mandatory to have a clustered index on every table, according to the TechNet article Clustered Index Design Guidelines, with few exceptions, every table should have a clustered index defined on the column or columns that are used as follows:

- The table is large and does not have a nonclustered index. The presence of a clustered index improves performance, because without it, all rows of the table have to be read to find any row.
- A column or columns are frequently queried, and data is returned in sorted order. The presence of a clustered index on the sorting column or columns avoids the sort operation and returns the data in sorted order.
- A column or columns are frequently queried, and data is grouped together. As data must be sorted before it is grouped, the presence of a clustered index on the sorting column or columns avoids the sort operation.
- A column or columns are frequently used in queries to search for ranges of data in the table. The presence of a clustered index on the range column helps avoid sorting the entire table's data.

Nonclustered indexes

Nonclustered indexes do not sort or store the data of the underlying table. This is because the leaf nodes of a nonclustered index are index pages that contain pointers to the data rows. SQL Server automatically creates nonclustered indexes when you define a UNIQUE KEY constraint on a table. A table can have up to 999 nonclustered indexes. You can use the CREATE INDEX statement to create both clustered and nonclustered indexes, as shown in the sketch that follows. A detailed discussion of the CREATE INDEX statement and its parameters is beyond the scope of this article; for help with this, refer to the CREATE INDEX (Transact-SQL) article at http://msdn.microsoft.com/en-us/library/ms188783.aspx. SQL Server 2014 also supports new inline index creation syntax for standard, disk-based database tables, temp tables, and table variables. For more information, refer to the CREATE TABLE (SQL Server) article at http://msdn.microsoft.com/en-us/library/ms174979.aspx.
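As a quick illustration of the CREATE INDEX syntax for both index types, the following minimal sketch uses a hypothetical dbo.Orders table; the table, column, and index names are assumptions:

-- A table can have only one clustered index,
-- because it defines the physical order of the data pages
CREATE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID ASC);

-- ...but up to 999 nonclustered indexes,
-- whose leaf pages point back to the data rows
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID ASC);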
Single-column indexes

As the name implies, single-column indexes are based on a single key column. You can define a single-column index as either clustered or nonclustered. You cannot drop the index key column or change the data type of the underlying table column without dropping the index first. Single-column indexes are useful for queries that search data based on a single column value.

Composite indexes

Composite indexes include two or more columns from the same table. You can define composite indexes as either clustered or nonclustered. You can use composite indexes when you have two or more columns that need to be searched together. You typically place the most selective key (the key with the highest degree of selectivity) first in the key list. For example, examine the following query, which returns a list of account numbers and names from the Purchasing.Vendor table, where both the name and the account number start with the character A:

USE [AdventureWorks2012];
SELECT [AccountNumber], [Name]
FROM [Purchasing].[Vendor]
WHERE [AccountNumber] LIKE 'A%'
  AND [Name] LIKE 'A%';
GO

If you look at the execution plan of this query without modifying the existing indexes of the table, you will notice that the SQL Server query optimizer uses the table's clustered index to retrieve the query result, as shown in the following screenshot:

As our search is based on the Name and AccountNumber columns, the presence of the following composite index will improve the query execution time significantly:

USE [AdventureWorks2012];
GO
CREATE NONCLUSTERED INDEX [AK_Vendor_AccountNumber_Name]
ON [Purchasing].[Vendor] ([AccountNumber] ASC, [Name] ASC)
ON [PRIMARY];
GO

Now, examine the execution plan of this query once again, after creating the previous composite index on the Purchasing.Vendor table, as shown in the following screenshot:

As you can see, SQL Server now performs a seek operation on this composite index to retrieve the qualifying data.

Summary

Thus, we have learned what indexes are, how SQL Server uses indexes, the structure of indexes, and some of the types of indexes.

Resources for Article:

Further resources on this subject:
Easily Writing SQL Queries with Spring Python [article]
Manage SQL Azure Databases with the Web Interface 'Houston' [article]
VB.NET Application with SQL Anywhere 10 database: Part 1 [article]
The .NET Framework Primer

Packt
22 Jul 2014
17 min read
(For more resources related to this topic, see here.)

An evaluation framework for .NET Framework APIs

Understanding the .NET Framework in its entirety, including keeping track of the APIs that are available in its various versions (for example, 3.5, 4, 4.5, and 4.5.1) and platforms (such as Windows 8, Windows Phone 8, and Silverlight 5), is a near-impossible undertaking. What software developers and architects need is a high-level framework to logically partition the .NET Framework and identify the APIs that should be used to address a given requirement or category of requirements. API boundaries in the .NET Framework can be a little fuzzy. Some logical APIs span multiple assemblies and namespaces. Some are nicely contained within a neat hierarchy under a single root namespace. To confuse matters even further, a single assembly might contain portions of multiple APIs. The most practical way to distinguish an API is to use the API's root namespace or the namespace that contains the majority of the API's implementation. We will point out the cases where an API spans multiple namespaces or where there are peculiarities in the namespaces of an API.

Evaluation framework dimensions

The dimensions of the .NET Framework API evaluation framework are as follows:

Maturity: This dimension indicates how long the API has been available, how long it has been part of the .NET Framework, and what the API's expected longevity is. It is also a measure of how relevant the API is, or an indication that the API has been subsumed by newer and better APIs.

Productivity: This dimension is an indication of how the use of the API will impact developer productivity. It is measured by how easy the API is to learn and use, how well known the API is within the developer community, how simple or complex it is to use, the richness of the tool support (primarily in Visual Studio), and how abstract the API is, that is, whether it is declarative or imperative.

Performance: This dimension indicates whether the API was designed with performance, resource utilization, user interface responsiveness, or scalability in mind; alternatively, it indicates whether convenience, ease of use, or code pithiness were the primary design criteria, which often comes at the expense of the former.

Availability: This dimension indicates whether the API is available only on limited versions of the .NET Framework and Microsoft operating systems, or whether it is available everywhere that managed code is executed, including third-party implementations on non-Microsoft operating systems, for example, Mono on Linux.

Evaluation framework ratings

Each dimension of the API evaluation framework is given a four-level rating. In the original, each rating is also represented by a glyph, which is omitted here. Let's take a look at the ratings for each of the dimensions.

The ratings for Maturity are as follows:

Emerging: This refers to a new API that was either added to the .NET Framework in the last release or is a candidate for addition in an upcoming release, and that has not yet gained widespread adoption. This also includes APIs that are not officially part of the .NET Framework.

New and promising: This is an API that has been in the .NET Framework for a couple of releases; it is already being used by the community in production systems, but it has yet to hit the mainstream. This rating may also include Microsoft APIs that are not officially part of .NET but show a great deal of promise or are being used extensively in production.
Tried and tested: This is an API that has been in the .NET Framework for multiple releases, has attained very broad adoption, has been refined and improved with each release, and is probably not going to be subsumed by a new API or deprecated in a later version of the Framework.

Showing its age: The API is no longer relevant, has been subsumed by a superior API, has been entirely deprecated in recent versions of .NET, or has been merged into a new API.

The ratings for Productivity are as follows:

Decrease: This is a complex API that is difficult to learn and use and not widely understood within the .NET developer community. Typically, these APIs are imperative, that is, they expose the underlying plumbing that needs to be understood to correctly use the API, and there is little or no tooling provided in Visual Studio. Using this API results in lowered developer productivity.

No or little impact: This API is fairly well known and used by the .NET developer community, but its use will have little effect on productivity, either because of its complexity, steep learning curve, and lack of tool support, or because there is simply no alternative API.

Increase: This API is well known and used by the .NET developer community, is easy to learn and use, has good tool support, and is typically declarative; that is, the API allows developers to express the behavior they want without requiring an understanding of the underlying plumbing, and in minimal lines of code too.

Significant increase: This API is very well known and used in the .NET developer community, is very easy to learn, has excellent tool support, and is declarative and pithy. Its use will significantly improve developer productivity.

The ratings for Performance and Scalability are as follows:

Decrease: The API was designed for developer productivity or convenience and will more than likely result in slower code execution and increased usage of system resources (when compared to the use of other .NET APIs that provide the same or similar capabilities). Do not use this API if performance is a concern.

No or little impact: The API strikes a good balance between performance and developer productivity. Using it should not significantly impact the performance or scalability of your application. If performance is a concern, you can use the API, but do so with caution and make sure you measure its impact.

Increase: The API has been optimized for performance or scalability, and it generally results in faster, more scalable code that uses fewer system resources. It is safe to use in performance-sensitive code paths if best practices are followed.

Significant increase: The API was designed and written from the ground up with performance and scalability in mind. The use of this API will result in a significant improvement in performance and scalability over other APIs.

The ratings for Availability are as follows:

Rare: The API is available in limited versions of the .NET Framework and on limited operating systems. Avoid this API if you are writing code that needs to be portable across all platforms.

Limited: This API is available on most versions of the .NET Framework and Microsoft operating systems. It is generally safe to use, unless you are targeting very old versions of .NET or Microsoft operating systems.

Microsoft Only: This API is available on all versions of the .NET Framework and all Microsoft operating systems.
It is safe to use if you are on the Microsoft platform and are not targeting third-party CLI implementations, such as Mono.

Universal: The API is available on all versions of .NET, including those from third parties, and it is available on all operating systems, including non-Microsoft operating systems. It is always safe to use this API.

The .NET Framework

The rest of this article will highlight some of the more commonly used APIs within the .NET Framework and rate each of these APIs using the evaluation framework described previously.

The Base Class Library

The Base Class Library (BCL) is the heart of the .NET Framework. It contains base types, collections, and APIs to work with events and attributes; console, file, and network I/O; and text, XML, threads, application domains, security, debugging, tracing, serialization, interoperation with native COM and Win32 APIs, and the other core capabilities that most .NET applications need. The BCL is contained within the mscorlib.dll, System.dll, and System.Core.dll assemblies. The mscorlib.dll assembly is loaded during the CLR bootstrap (not by the CLR Loader), contains all non-optional APIs and types, and is universally available in every .NET process, such as Silverlight, Windows Phone, and ASP.NET. Optional BCL APIs and types are available in System.dll and System.Core.dll, which are loaded on demand by the CLR Loader, as with all other managed assemblies. It would be a rare exception, however, for a .NET application not to use either of these two assemblies, since they contain some very useful APIs. When creating any project type in Visual Studio, these assemblies will be referenced by default. For the purpose of this framework, we will treat all of the BCL as a logical unit and not differentiate the non-optional APIs (that is, the ones contained within mscorlib.dll) from the optional ones. Despite being only a subset of the .NET Framework, the BCL contains a significant number of namespaces and APIs. The following is a partial list of some of the more notable namespaces/APIs within the BCL, with an evaluation for each:

System namespace
System.Text namespace
System.IO namespace
System.Net namespace
System.Collections namespace
System.Collections.Generic namespace
System.Collections.Concurrent namespace
System.Linq namespace
System.Xml namespace
System.Xml.Linq namespace
System.Security.Cryptography namespace
System.Threading namespace
System.Threading.Tasks namespace
System.ServiceProcess namespace
System.ComponentModel.Composition namespace
System.ComponentModel.DataAnnotations namespace
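As a small taste of the BCL, the following minimal C# sketch combines a few of the namespaces listed above (System, System.Collections.Generic, System.Linq, and System.IO); the sample data and output file name are assumptions:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class BclSample
{
    static void Main()
    {
        // System.Collections.Generic: a strongly typed collection
        var words = new List<string> { "delta", "alpha", "charlie", "bravo" };

        // System.Linq: declarative filtering and ordering over the collection
        var sorted = words.Where(w => w.Length > 4)
                          .OrderBy(w => w)
                          .ToList();

        // System.IO: persist the result; System.Console reports what was done
        File.WriteAllLines("words.txt", sorted);
        Console.WriteLine("Wrote {0} words.", sorted.Count);
    }
}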
ADO.NET

Most computer programs are meaningless without appropriate data to operate over. Accessing this data in an efficient way has become one of the greatest challenges modern developers face, as datasets have grown in size from megabytes, to gigabytes, to terabytes, and now petabytes in the most extreme cases; for example, Google's search database is around a petabyte. Though relational databases no longer hold the scalability high ground, a significant percentage of the world's data still resides in them and will probably continue to do so for the foreseeable future. ADO.NET contains a number of APIs to work with relational data and data provider APIs to access Microsoft SQL Server, Oracle Database, OLEDB, ODBC, and SQL Server Compact Edition:

System.Data namespace
System.Data.Entity namespace
System.Data.Linq namespace
System.Data.Services namespace

Windows Forms

Windows Forms (WinForms) was the original API for developing the user interface (UI) of Windows desktop applications with the .NET Framework. It was released in the first version of .NET and has been included in every version since.

System.Windows.Forms namespace

The WinForms API is contained within the System.Windows.Forms namespace. Though WinForms is a managed API, it is actually a fairly thin façade over earlier, unmanaged APIs, primarily Win32 and User32, and any advanced use of WinForms requires a good understanding of these underlying APIs. Advanced customization of WinForms controls often requires the use of the System.Drawing API, which is also just a managed shim over the unmanaged GDI+ API. Many new applications are still developed using WinForms, despite its age and the alternative .NET user interface APIs that are available. It is a very well-understood API, is very stable, and has been optimized for performance (though it is not GPU-accelerated like WPF or WinRT). A significant number of vendors produce feature-rich, high-quality, third-party WinForms controls, and WinForms is available in every version of .NET and on most platforms, including Mono. WinForms is clearly showing its age, particularly when its capabilities are compared to those of WPF and WinRT, but it is still a viable API for applications that exclusively target the desktop and where a sophisticated modern UI is not necessary.

Evaluation of the System.Windows.Forms namespace (ratings shown as glyphs in the original): Maturity | Productivity | Performance | Availability
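To illustrate how small the WinForms programming model can be, the following is a minimal sketch of a complete WinForms application; the form title and button caption are assumptions:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread] // WinForms requires a single-threaded apartment for COM interop
    static void Main()
    {
        var form = new Form { Text = "WinForms Sketch" };
        var button = new Button { Text = "Click me", Dock = DockStyle.Fill };
        // Classic .NET event wiring: the click handler is just a delegate
        button.Click += (sender, e) => MessageBox.Show("Hello from WinForms!");
        form.Controls.Add(button);
        Application.Run(form); // Starts the message loop on the main thread
    }
}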
Windows Presentation Foundation

Windows Presentation Foundation (WPF) is an API, introduced in .NET 3.0, for developing rich user interfaces for .NET applications, with no dependencies on legacy Windows APIs and with support for GPU-accelerated 3D rendering, animation, and media playback. If you want to play a video on a clickable button control on the surface of an animated, 3D rotating cube, and the only C# code you want to write is the button click event handler, then WPF is the API for the job. See the WPFSample code for a demonstration.

System.Windows namespace

The System.Windows namespace contains the Windows Presentation Foundation API. WPF includes many of the "standard" controls that are in WinForms, for example, Button, Label, CheckBox, ComboBox, and so on. However, it also includes APIs to create, animate, and render 3D graphics; render multimedia; draw bitmap and vector graphics; and perform animation. WPF addresses many of the limitations of Windows Forms, but this power comes at a price. WPF introduces a number of novel concepts that developers need to master, including a new declarative UI markup called Extensible Application Markup Language (XAML); new event handling, data binding, and control theming mechanisms; and a variant of the Model-View-Controller (MVC) pattern called Model-View-ViewModel (MVVM); that said, the use of this pattern is optional but highly recommended. WPF has significantly more moving parts than WinForms, if you ignore the underlying native Windows APIs that WinForms abstracts. Microsoft, though, has gone to some lengths to make the WPF development experience easier for both UI designers and developers. Developers using WPF can choose to design and develop user interfaces using XAML, any of the .NET languages, or, most often, a combination of the two. Visual Studio and Expression Blend provide rich WYSIWYG designers to create WPF controls and interfaces and hide the complexities of the underlying XAML; direct tweaking of the XAML is sometimes required for precise adjustments. WPF is now a mature, stable API that has been highly optimized for performance; all of its APIs are GPU-accelerated. Though it is probably not as well known as WinForms, it has become relatively well known within the developer community, particularly because Silverlight, which is Microsoft's platform for developing rich web and mobile applications, uses a subset of WPF. Many of the third-party control vendors who produce WinForms controls now also produce equivalent WPF controls. The tools for creating WPF applications, predominantly Visual Studio and Expression Blend, are particularly good, and there are also a number of good third-party and open source tools for working with XAML. The introduction of WinRT and the increasingly powerful capabilities of web browser technologies, including HTML5, CSS3, JavaScript, WebGL, and GPU acceleration, raise valid questions about the long-term future of WPF and Silverlight. Microsoft seems to be continuing to promote the use of WPF, and even WinRT supports a variant of the XAML markup language, so it should remain a viable API for a while.

Evaluation of the System.Windows namespace (ratings shown as glyphs in the original): Maturity | Productivity | Performance | Availability

ASP.NET

The .NET Framework was originally designed to be Microsoft's first web development platform, and it included APIs to build both web applications and web services. These APIs were, and still are, part of the ASP.NET web development framework that lives in the System.Web namespace. ASP.NET has come a very long way since the first release of .NET, and it is the second most widely used and popular web framework in the world today (see http://trends.builtwith.com/framework). The ASP.NET platform provides a number of complementary APIs that can be used to develop web applications, including Web Forms, web services, MVC, web pages, Web API, and SignalR:

System.Web.Forms namespace
System.Web.Mvc namespace
System.Web.WebPages namespace
System.Web.Services namespace
Microsoft.AspNet.SignalR namespace
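As a small example of the Web API flavor of ASP.NET, the following is a minimal sketch of a REST-style controller; the controller name, sample data, and route conventions shown here are assumptions based on Web API's defaults:

using System.Collections.Generic;
using System.Net;
using System.Web.Http;

// Convention-based routing: GET api/products reaches Get(),
// and GET api/products/1 reaches Get(id)
public class ProductsController : ApiController
{
    private static readonly Dictionary<int, string> Products =
        new Dictionary<int, string> { { 1, "Tea" }, { 2, "Coffee" } };

    public IEnumerable<string> Get()
    {
        return Products.Values;
    }

    public string Get(int id)
    {
        string name;
        if (!Products.TryGetValue(id, out name))
        {
            // Throwing HttpResponseException translates into an HTTP 404 response
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return name;
    }
}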
Windows Communication Foundation

One of the major selling points of the first release of .NET was that the platform had support for web services baked in, in the form of ASP.NET Web Services. Web services have come a very long way since SOAP was invented in 1998 and since the first release of .NET, and WCF has subsumed the limited capabilities of ASP.NET Web Services with a far richer platform. WCF has also subsumed the original .NET Remoting (System.Runtime.Remoting), MSMQ (System.Messaging), and Enterprise Services (System.EnterpriseServices) APIs.

System.ServiceModel namespace

The root namespace for WCF is System.ServiceModel. This API includes support for most of the WS-* web services standards as well as non-HTTP or non-XML-based services, including MSMQ and TCP services that use binary or Message Transmission Optimization Mechanism (MTOM) message encoding. The Address, Binding, and Contract (ABC) model of WCF is very well understood by the majority of the developer community, though deep technical knowledge of WCF's inner workings is rarer. The use of attributes to declare service and data contracts and a configuration-over-code approach make the WCF API highly declarative, and creating sophisticated services that use advanced WS-* capabilities is relatively easy. WCF is very stable and can be used to create high-performance distributed applications. WCF is available on all recent versions of .NET, though not all platforms include the server components of WCF. Partial support for WCF is also available on third-party CLI implementations, such as Mono. REST-based web services that serve relatively simple XML or JSON have become very popular, and though WCF fairly recently added support for REST, these capabilities have now evolved into the ASP.NET Web API.

Evaluation of the System.ServiceModel namespace (ratings shown as glyphs in the original): Maturity | Productivity | Performance | Availability
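The attribute-driven, declarative style mentioned above looks like the following minimal sketch of a WCF service contract and its implementation; the contract and operation names are assumptions:

using System.ServiceModel;

// The contract is declared with attributes; addresses and bindings are
// typically supplied in configuration rather than in code
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

Hosting the service is then a matter of pointing a ServiceHost (or IIS) at the implementation type and choosing a binding, such as BasicHttpBinding for SOAP over HTTP.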
Windows Workflow Foundation

Windows Workflow Foundation (WF) is a workflow framework that was introduced in .NET 3.0, and it brings the power and flexibility of declarative workflow, or business process, design and execution to .NET applications.

System.Activities namespace

The System.Activities namespace contains the Windows Workflow Foundation API. WF includes a workflow runtime, a hosting API, a number of basic workflow activities, APIs to create custom activities, and a workflow designer control, which was originally a WinForms control but has been a WPF control since .NET 4.0. WF also uses a variant of the same XAML markup that WPF and WinRT use to represent workflows; that said, an excellent designer, hosted by default in Visual Studio, should mean that you never have to directly modify the XAML. The adoption of the first few versions of the WF API was limited, but WF was completely rewritten for .NET 4.0, and many of the shortcomings of the original version were entirely addressed. WF is now a mature, stable, best-of-breed workflow API with a proven track record. The previous implementation of WF is still available in current versions of the .NET Framework, for migration and interoperation purposes, and lives in the System.Workflow namespace. WF is used by SharePoint Server, Windows Server AppFabric, Windows Azure AppFabric, Office 365, Visual Studio Team Foundation Server (MSBuild), and a number of other Microsoft products and services. Windows Server AppFabric and Windows Azure AppFabric enable a new class of scalable SOA server and cloud application called a Workflow Service, which combines the capabilities of WCF and WF. WF has a relatively small but strong following within the .NET developer community, and there are a number of third-party and open source WF activity libraries and tools available. Though applications composed using workflows typically have poorer performance than those implemented entirely in code, the flexibility and significantly increased developer productivity (particularly when it comes to modifying existing processes) that workflows give you are often worth the performance price. That said, Microsoft has made significant investments in optimizing the performance of WF, and it should be more than adequate for most enterprise application scenarios. Though versions of WF are available on other CLI platforms, the availability of WF 4.x is limited to Microsoft platforms and .NET 4.0 and higher.

The following evaluation of the System.Activities namespace (ratings shown as glyphs in the original) is for the most recent version of WF; the use of versions of WF prior to 4.0 is not recommended for new applications: Maturity | Productivity | Performance | Availability

Summary

There is more to the .NET Framework than has been articulated in this primer; it includes many useful APIs that have not even been mentioned here, for example, System.Media, System.Speech, and Windows Identity Foundation. There are also a number of very powerful APIs developed by Microsoft (and Microsoft Research) that are not (yet) officially part of the .NET Framework; for example, Reactive Extensions, Microsoft Solver Foundation, the Windows Azure APIs, and the new .NET for Windows Store Apps APIs are worth looking into.

Resources for Article:

Further resources on this subject:
Content Based Routing on Microsoft Platform [article]
Building the Content Based Routing Solution on Microsoft Platform [article]
Debatching Bulk Data on Microsoft Platform [article]