
How-To Tutorials - Web Development


OpenID: The Ultimate Sign On

Packt
23 Oct 2009
13 min read
Introduction

How many times have you walked away from some Internet forum because you could not remember your login ID or password, and just did not want to go through the tedium of registering again? Or gone back to re-register yourself only to forget your password the next day? Remembering all those login IDs and passwords is indeed an onerous task, and one more registration for a new site seems like one too many. We have all tried to get around these problems by jotting down passwords on pieces of paper or sticking notes to our terminal – all potentially dangerous practices that defeat the very purpose of keeping a digital identity secure. If you had the choice of a single user ID and password combination – essentially a single digital identity – imagine how easy it might become to sign up or sign in to new sites. Suppose you could also host your own digital identity or get it hosted by third-party providers whom you could change at will, or create different identity profiles for different classes of sites, or choose when your user ID with a particular site should expire; suppose you could do all this and more in a free, non-proprietary, open-standards-based, extensible, community-driven framework (whew!) with Open Source libraries and helpful tutorials to get you on board. You would say: "OpenID". To borrow a quote from the OpenID website openid.net: "OpenID is an open, decentralized, free framework for user-centric digital identity."

The Concept

The concept itself is not new (and there are proprietary authentication frameworks already in existence). We are all aware of reference checks or identity documents where a reliable agency is asked to vouch for your credentials. A Passport or a Driver's License is a familiar example. Web sites, especially those that transact business, have digital certificates provided by a reliable Certification Authority so that they can prove to you, the site visitor, that they are indeed who they claim to be. From here, it does not require a great stretch of imagination to appreciate that an individual netizen can have his or her own digital identity based on similar principles. This is how you get the show on the road. First, you need to get yourself a personal identity based on OpenID from one of the numerous OpenID providers[1] or from one of the sites that provide an OpenID with membership. This personal identity comes in the form of a URL or URI (essentially a web address that starts with http:// or https://) that is unique to you. When you need to sign up or sign in to a web site that accepts OpenID logins (look for the words 'OpenID' or the OpenID logo), you submit your OpenID URL. The web site then redirects you to the site of your ID provider, where you authenticate yourself with your password and optionally choose the details – such as full name, e-mail ID, or nickname, or when your login ID should expire for a particular site – that you want to share with the requesting site, and allow the authentication request to go through. You are then returned to the requesting site. That is all there is to it. You are authenticated! The requesting site will usually ask you to associate a nickname with your OpenID. It should be possible to register with and sign in to different sites using different nicknames – one for each site – but the same OpenID. But you may not want to overdo this lest you get into trouble trying to recall the right nickname for a particular site.

Just Enough Detail

This is not a technical how-to.
For serious technical details, you can follow the excellent links in the References section. This is a basic guide to get you started with OpenID, to show you how flexible it is, and to give pointers to its technical intricacies. By the end of this article you should be able to create your own personal digital identities based on OpenID (or discover if you already have one – you just might!), and be able to use them effectively. In the following sections, I have used some real web sites as examples. These are only for the purpose of illustration and in no way show any preference or endorsement.

Getting Your OpenID

The simplest and most direct way to get your personal OpenID is to go to a third-party provider. But before that, the smart thing to do would be to find out if you already have one. For instance, if you blog at wordpress.com, then http://yourblogname.wordpress.com is an OpenID already available to you. There are other sites[1], too, that automatically provide you an OpenID with membership. Yahoo! gives you an OpenID if you have an account with them, but it is not automatic and you need to sign up for it at http://openid.yahoo.com. Your OpenID at Yahoo! will be of the form https://me.yahoo.com/your-nickname. To get your third-party hosted OpenID we will choose VeriSign Labs' Personal Identity Provider (PIP) site – http://pip.verisignlabs.com/ – as an example. You are of course free to decide and choose your own provider(s). The sign-up form is a simple no-fuss affair with the minimum number of fields. (If you are tired of hearing 'third party', the reason for using the term will get clearer further on. For the purpose of this article, you, the owner of the OpenID, are the first party; the web site that wants you authenticated is the second party; and the OpenID provider is the third.) After replying to the confirmation e-mail you are ready to take on the wide world with your OpenID. If you gave your ID as 'johndoe', then you will get an OpenID like http://johndoe.pip.verisignlabs.com. You can come back to the PIP site and update your profile; some sites request information such as full name or e-mail ID, but you are always in control of whether you want to pass this information back to them. If you choose to have just one OpenID, then this is about as much as you would ever do to sign on to any OpenID-enabled site. You can also create multiple OpenIDs for yourself – remember what we said earlier about having multiple IDs to suit different classes of sites.

Testing Your OpenID

Now that we have our OpenID, we will test it and in the process also see how a typical OpenID-based authentication works in practice. Use the testing form[7] in the References section and enter the OpenID URL that you want tested. When you are redirected to your PIP's site (we are sticking to our Verisign example), enter your password and also choose what information you want passed back to the requesting site before clicking "Allow" to let the authentication go through. Important tip: enter your password only on the PIP's site and nowhere else! Be aware that this particular testing page may not work with all OpenIDs; that does not necessarily mean that the OpenID itself has a problem.

Step-by-Step: Use your WordPress or Verisign OpenID

For this tutorial part, we will take the example of http://www.propeller.com (a voting site, among other things) that accepts OpenID sign-ups and sign-ins. For an OpenID we will use the URL of your WordPress blog – http://yourblogname.wordpress.com.
You could also use your OpenID URL (the one you got from the Verisign example) and follow through. On the Propeller site, go to the sign-up page. Look for the prominent OpenID logo. Type in your OpenID URL and click on the 'Verify ...' button. You are taken to the site of your PIP, where you need to authenticate yourself. If you used your Verisign OpenID, enter your password, complete the details you want to pass back to the requesting site (remember, we are trying to sign up with Propeller), and allow the authentication to go through. You are now back with the Propeller site. Just hang in there a moment as we check the flow for a WordPress OpenID. For a WordPress OpenID, you will instead get a screen that asks you to deliberately sign in to your WordPress account. Once you are signed in, you will see a hyperlink that prompts you to continue with the authentication request from Propeller. Follow this link to a form that asks your permission to pass back information to Propeller such as your nickname and e-mail ID. You can change both these fields if you wish and allow the authentication to go through. Now you should be back at the Propeller site with a successful OpenID verification. The site will ask you to associate a nickname with your OpenID and a working e-mail to complete your registration process. This step is no different from a normal sign-up process. Check your e-mail, click on the link provided therein, get back to the Propeller site, and click another link to complete the registration process. You are automatically signed in to Propeller. Sign out for the moment so that we can see how an OpenID sign-in works. Go to the sign-in page at Propeller. You will see a normal sign-in and an OpenID sign-in. We will use the OpenID one (of course!). Type in your OpenID URL and click on the "Sign in..." button. Complete the formalities on your PIP site (for Verisign you will get a sign-in page; for WordPress you will need to sign in first unless you are already signed in) and let the authentication go through. This time you are back on the Propeller site, all signed in and ready to go. Note that your nickname appears correctly because your OpenID is associated with it. That is all there is to it. Easier done than said. Try this a couple of times and I bet it will feel easier than the remote control of your home entertainment system!

Your Custom OpenID URL

If you want a personalized OpenID URL and do not like the one provided by your PIP, you can always use delegation to get what you want. To make your blog or personal home page your OpenID URL, insert the markup shown below in the head portion (the part that falls between <head> and </head> on an HTML page) of your blog or any page that you own. This will only work with pages that you completely own and whose source you control. There is a WordPress plug-in that gives delegating capability to your WordPress.com blog, but we will not go into that here. The first URL is your OpenID server. The second URL is your OpenID URL – either the one you host yourself or the one provided by a third party. The requesting site discovers your OpenID and correctly authenticates you. With this approach you can switch providers transparently. At the risk of repeating: test your new personalized URL before you start using it. Note that the 'openid.server' URL may vary depending on the PIP.
To get the name of your PIP's OpenID server, use the testing service[7], which reports the correct URL for your PIP to use with the "openid.server" part of your delegation markup.

<link rel="openid.server" href="http://pip.verisignlabs.com/server" />
<link rel="openid.delegate" href="http://johndoe.pip.verisignlabs.com/" />

Rolling Your Own

If you are paranoid about entrusting the management of your digital identity to another web site and also have the technical smarts to match, there are ways you can become your own PIP[5][6]. If you are tech-savvy then you cannot fail to appreciate the elegance of the OpenID architecture and the way it lets control stay where it should – with you.

Account Management – Lite?

OpenID makes life easier for site visitors. But what about the site and the domain administrators? If administrators decide to go the OpenID way[3], it lightens their load by taking away a major part of the chore of membership administration and authentication. As a bonus, it also potentially opens up a site to the entire community of net users that have OpenIDs or are getting one.

Security and Reliability

As the wisecrack goes – if you want complete security, you should unplug from the Internet. On a serious note, there are some precautions you have to take while using OpenID, and they are no different from the precautions you would take for any item associated with your identity, say your Passport or your credit card. Remember to enter your password only on the Identity Provider's site and nowhere else. Be alert to phishing. This explains why WordPress asks you to log in explicitly rather than take you directly to their authentication page. Never use your e-mail ID handle as your OpenID name; use a different one. Using OpenID has its flip side, too. Getting your OpenID from a provider potentially lays open your browsing habits to tracking. You can get around this by being your own PIP, delegating from your own domain, or creating a PIP profile under an alias. There is the possibility that your OpenID provider goes out of service or, worse, out of business. It is thus important to choose a reliable identity provider. There are sites that allow you to associate multiple OpenIDs with your account, and perhaps this can be a way forward to popularize OpenID and to allay any fears of getting locked in with a single vendor and getting locked out of your identity in the process.

Your Call

There are many sites today that are not OpenID-ready. There are some sites that allow only OpenID sign-ons. However, if you see the elegance of the OpenID mechanism and the convenience it provides both site administrators and members, you might agree that its time has come. Get an OpenID if you do not have one. Convince your friends to get theirs. And if you run an online community or are a member of one, throw your weight around to ensure that your site also provides an OpenID sign-on.

References

[1] http://wiki.openid.net/OpenIDServers is a list of ID providers.
[2] http://blogs.zdnet.com/digitalID/?p=78 makes a strong case for OpenID. Read it to get a good perspective on the subject.
[3] http://www.plaxo.com/api/openid_recipe is a soup-to-nuts tutorial on how to enable your site for OpenID authentication or migrate to OpenID from your current site-specific authentication scheme.
[4] Check out http://www.openidenabled.com/php-openid/ if you are looking for software libraries to OpenID-enable your site (a brief, hedged sketch of what such library code can look like follows this list).
[5] http://www.intertwingly.net/blog/2007/01/03/OpenID-for-non-SuperUsers is a crisp, if intermediate-level, how-to that lets you try out new things in the OpenID space.
[6] http://siege.org/projects/phpMyID/ shows you how you can run your own (yes, your own) PIP server.
[7] http://www.openidenabled.com/resources/openid-test/checkup is a link that helps you test your OpenID. Once you get your OpenID, you can submit it to the form on this URL and get yourself authenticated to see if everything works fine. It does not seem to work with WordPress and Yahoo! OpenIDs as of this writing.
[8] http://www.openid.net is the OpenID site.
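If you would like to see what the relying-party (requesting site) side of this flow looks like in code, the sketch below uses the php-openid library from reference [4]. Treat it purely as an illustration and not a definitive implementation: it assumes the library is on your include path, the store directory, trust root, and return URL are placeholders you would replace with your own values, and a real site would split the two steps across separate pages with proper error handling.

<?php
// Minimal relying-party sketch using the JanRain php-openid library (reference [4]).
require_once 'Auth/OpenID/Consumer.php';
require_once 'Auth/OpenID/FileStore.php';

session_start();

// The store keeps association and nonce data between requests (placeholder path).
$store    = new Auth_OpenID_FileStore('/tmp/openid_store');
$consumer = new Auth_OpenID_Consumer($store);

if (isset($_POST['openid_url'])) {
    // Step 1: the visitor submits an OpenID URL, for example
    // http://johndoe.pip.verisignlabs.com
    $authRequest = $consumer->begin($_POST['openid_url']);
    if (!$authRequest) {
        die('That does not appear to be a valid OpenID.');
    }
    // Send the visitor off to their identity provider to authenticate.
    $redirectUrl = $authRequest->redirectURL(
        'http://www.example.com/',                  // trust root (this site) - placeholder
        'http://www.example.com/openid-return.php'  // where the provider sends the user back
    );
    header('Location: ' . $redirectUrl);
    exit;
}

// Step 2: on the return URL, complete the authentication with the provider's response.
$response = $consumer->complete('http://www.example.com/openid-return.php');
if ($response->status == Auth_OpenID_SUCCESS) {
    echo 'Signed in as ' . htmlspecialchars($response->getDisplayIdentifier());
} else {
    echo 'OpenID verification failed or was cancelled.';
}

In a real deployment you would typically also use the Simple Registration (SReg) extension to request the nickname and e-mail fields discussed earlier, subject, of course, to what the visitor agrees to release on the PIP's consent page.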


Setting up a BizTalk Server Environment

Packt
09 Apr 2012
18 min read
Gathering requirements by asking the right questions

Although this is not an exact recipe, asking questions to obtain requirements for your BizTalk environment is important. Having a clear view and understanding of the requirements enables you to deploy the desired BizTalk environment, one that meets the customer's expectations. What are the right questions, you may ask yourself? Well, there is quite a large area you need to cover with questions. These questions will be around the following topics:

- BizTalk workload(s) (functional)
- Non-functional requirements (high availability, scalability, and so on)
- Licensing (software)
- Hardware
- Virtualization
- Development, Test, Acceptance, and Production (DTAP) environment
- Tracking/Tracing
- Hosting
- Security

Getting ready

Organize the sessions and/or workshop(s) to discuss the BizTalk architecture (environment), functionality, and non-functional requirements, where you do a series of interviews with the appropriate stakeholders. This way you will be able to retrieve the necessary requirements and information for a BizTalk environment. You will need to focus on business first and IT later. You will notice that each business will have a different set of requirements on integration of data and processes. Some of these are listed as follows:

- The business is able to access information from anywhere, at any time
- Have the proper information to present to the proper people
- Have the necessary information available when needed
- Manage knowledge efficiently and be able to share it with the business
- Change the information when needed
- Automate business processes that are error-prone
- Automate business processes to reduce the processing time of orders, invoices, and so on

Regarding the business requirements, BizTalk will have certain workloads, and with the business you determine whether you want BizTalk to aid in automating processes, exchange of information with partners, maintaining business rules, visibility of physical events, and/or integration with different systems. One important factor to reckon with when bringing BizTalk into an organization is the risk associated with transitioning to its platform. This risk can be of a technical, operational, political, and financial nature. BizTalk solutions have to operate correctly, meet the business requirements, be accepted by stakeholders within the organization, and not be too expensive. With IT, you focus more on the technical side of the BizTalk environment, asking questions such as, "What size, format, and encoding of messages are sent to the BizTalk system, and what does it need to output?" You should consider security when information going to or coming from trading partners is confidential; encryption and decryption of data can come into play. Questions such as "What automated processes need to interact with internal and external systems?" or "How are you going to monitor messages that are going in and out?" also need answers. Support needs to be set up properly to keep BizTalk and its solutions healthy. Solutions need to be developed and tested, preferably using different environments such as test and acceptance. For that, you will need an agreed deployment process with IT. These are factors to reckon with and need to be addressed when interviewing or talking to IT stakeholders within the organization.

How to do it…

Categorize your stakeholders into two categories: business and IT. Create a communication plan and a list of questions related to the areas mentioned earlier.
With the list of questions, you can assign each question to a person you think can answer it. This way you ask the right questions to the right people. The following table shows a sample of roles belonging to business and/or IT. It could be that you identify more roles depending on your situation:

- Business: CEO, CIO, Security Officer, Business Analyst, Enterprise Architect, and Solution Architect
- IT: IT Manager, Enterprise Architect, Solution Architect, System/Application Architect, System Analyst, Developer, System Engineer, and DBA

Having established which roles belong to business, IT, or both, you will then need to take the list of questions and assign each to the appropriate role. You can find an example list of questions associated with a particular role in the following table:

- Will BizTalk integrate with systems in the enterprise? Which consumers and host systems will it integrate with? – Enterprise Architect, Solution Architect
- What are the applicable workloads? – Enterprise Architect
- Is BizTalk going to be strategic for integration with internal/external systems? – CEO, CIO, Enterprise Architect, and Business Analyst
- Number of messages a day/hour – Enterprise Architect
- What are the candidate processes to automate with BizTalk? – Business Analyst, Solution Architect
- What communication protocols are required? – Enterprise Architect, Solution Architect
- Choice of Microsoft platform (Operating System, SQL Server database) – Enterprise Architect, Security Officer, Solution Architect, System Engineer, and DBA
- Encryption algorithm for data – Enterprise Architect, Security Officer, Solution Architect, and System Engineer
- Is Secure Socket Layer required for communication? – Enterprise Architect, Security Officer, Solution Architect, and System Engineer
- What kind of certificate store is there? – Enterprise Architect, Security Officer, Solution Architect, and System Engineer
- Is the support for BizTalk going to be outsourced? – CEO, IT Manager

There's more…

The best approach to gathering the requirements is to view it as a project, or a part of the project. You can use a methodology such as PRINCE2.

PRINCE2: Projects in Controlled Environments (PRINCE) is a project management method. It covers the management, control, and organization of a project. PRINCE2 is the second major release of it. More information is available at http://www.prince2.com/.

Microsoft BizTalk Server website: The Microsoft BizTalk Server website provides a lot of information. In particular, the Product Information section provides detailed information on system requirements, the roadmap, and the FAQs. The latter sections provide details on pricing, licensing, and so on. Go to http://www.microsoft.com/biztalk/en/us/default.aspx.

Analyzing requirements and creating a design

Analyzing requirements and creating a design for the BizTalk landscape is the next step forward before planning and installing. With the gathered requirements, you can make decisions on how to design the BizTalk environment(s). If BizTalk is used for the first time in an enterprise environment, capacity planning and server allocation are things to focus on. Once you have gathered requirements and asked questions, you will have a clear picture of where the platform will be hosted and whether it needs to be scaled up or out. If everything gets placed on one big server, it will introduce a serious single point of failure. You should try to avoid this scenario.
Therefore, separating BizTalk from the SQL Server is the first thing you will do in your design, preferably each on its own hardware. Depending on availability requirements, you will probably cluster the SQL Server. Besides that, you can choose to scale out BizTalk into a multiserver group, either because of availability requirements or because the expected load cannot be handled by one BizTalk instance. You can opt for installing BizTalk and SQL separately first and then scaling out after performing benchmark tests. You can scale vertically (scale up) by increasing the number of processors and the amount of memory each server uses, or you can scale horizontally (scale out) by adding more servers to your BizTalk Server configuration. Other options you can consider during your design are as follows:

- Having multiple MessageBox databases
- Separating the BizTalk databases

These options are best visualized by the scale-out poster from Microsoft (http://www.microsoft.com/download/en/details.aspx?id=13103). Based on the requirements, you can consider isolating the BizTalk hosts to be able to manage BizTalk applications better and divide the load. By separating send, receive, and processing functionality into different hosts, you will benefit from better memory and thread management. If you expect a high load of large messages, or orchestrations that would consume large amounts of resources, you should isolate send and/or receive adapters. Another consideration is to dedicate a separate host to tracking, to relieve the processing hosts of it. So far we have discussed scalability and design decisions you could consider. There are some other design considerations for a BizTalk environment, such as security, tracking, fault tolerance, load balancing, choice of license, and support for virtualization (http://support.microsoft.com/kb/842301). BizTalk security can be enhanced by deploying Secure Socket Layer (SSL), IPSec tunneling, the Internet Security and Acceleration (ISA) Server, and the certificate services included with Windows Server 2008. With the BizTalk Server, you can apply access control, implement least rights to limit access, and provide integrated security through Enterprise Single Sign-On (http://msdn.microsoft.com/en-us/library/aa577802%28v=bts.70%29.aspx). Furthermore, you can protect and secure applications and data by authenticating the sender of a message and authorizing the receiver of a message. Tracking messages in BizTalk can be useful to see what messages come in and out of the system, or for auditing, troubleshooting, or archiving purposes. Tracking of messages within BizTalk is a process by which parts of a message, such as the body, properties, and metadata, are stored in a database. These parts can be viewed by running queries from the Group Hub page in the BizTalk Server Administration console. It is important that you decide, and take up into the design, what needs to be tracked based on the requirements. There are some considerations to make regarding tracking. Tracking everything is not the smart thing to do, as each time a message is touched in BizTalk, a copy is made and stored. Focus the scope by tracking only on a specific port, which is better for performance and keeps the database uncluttered. For the latter, it is important that the data purge and archive job is configured properly. As mentioned earlier, it is worth considering a dedicated host for tracking.
Fault tolerance and load balancing for BizTalk can be achieved through clustering, separating hosts as described earlier, implementing a Storage Area Network (SAN) to house the BizTalk Server databases, clustering the Enterprise Single Sign-On (SSO) Master Secret Server, and configuring the Internet Information Services (IIS) web server for isolated host instances and the BAM Portal web page to be highly available, using Network Load Balancing (NLB) or other load balancing devices. The best way to implement this is to follow the steps in the Checklist: Providing High Availability with Fault Tolerance or Load Balancing document found on MSDN (http://msdn.microsoft.com/en-us/library/gg634479%28v=bts.70%29.aspx). Another important topic regarding your BizTalk environment is cost; based on the requirements, you will choose the Branch, Standard, or Enterprise Edition. The editions differ not only in price, but also in functionality. The Standard Edition, for instance, cannot support scenarios for high availability or fault tolerance, and is limited in CPUs and applications. The Branch Edition is even more limited and is designed for hub-and-spoke deployment scenarios, including Radio Frequency Identification (RFID). With any version, you probably want to consider whether or not to virtualize. With virtualization in mind, licensing can be difficult. With the Standard Edition, you need a license for each virtual processor used by the virtual OS environment, regardless of whether the number of virtual processors is less than, or greater than, the number of physical processors on the server. With the Enterprise Edition, if you license all physical CPUs on the server you can run any number of instances in the physical or virtual OS environment. With both of these, a virtual processor is assumed to have the same number of cores as the physical processor. Using less than the number of cores available in the physical processor still counts as a full virtual processor (http://www.microsoft.com/biztalk/en/us/editions.aspx). Last, but not least, you need to consider how to support your BizTalk environment. It is worth considering System Center Operations Manager to monitor your BizTalk environment, using the management packs for the SQL Server, Windows Server, and BizTalk Server 2010. The management pack for BizTalk Server 2010 provides two views, one for the enterprise IT administrator and one for the BizTalk Server administrator. The first will be monitoring the state and health of the various enterprise deployments, the machines hosting the SQL Server databases, the machines hosting the Enterprise SSO service, host instance machines, IIS, and network services, and is interested in the overall health of the "physical deployment" of a BizTalk Server setup. The BizTalk Server administrator will be monitoring the state and health of various BizTalk Server application artifacts, such as orchestrations, send ports, and receive locations, and is interested in monitoring and tracking the BizTalk Server's health. If necessary, he or she can carry out corrective measures to keep applications running as expected. What you have read so far are considerations that are useful while analyzing requirements and preparing your design. You need to take a considerable amount of time analyzing requirements to be able to create a solid design for your BizTalk environment. There is a wealth of information provided by Microsoft and referenced in this book.
It is worth investing the time now, as you will lose a lot of time and money if your applications do not perform or the system crumbles under load while receiving and processing messages.

How to do it...

To analyze the requirements, you will need to categorize them into the topics mentioned in the Gathering requirements by asking the right questions recipe. You will then go over each requirement and decide how it can best be met. For each requirement, you will consider what the best option is and capture that in your design for the BizTalk setup. The BizTalk design will be a Word document, where you capture your design, considerations, and decisions.

How it works...

During analysis of each requirement, you will capture your considerations and decisions in a Word document. Besides that, you will also describe the situation at the enterprise where the BizTalk environment will be deployed. You will find an example structure of a design document for a Development, Test, Acceptance, and Production (DTAP) environment, as follows, where you can place all the information:

- Introduction
- Purpose
- Current situation
- IT landscape
- Design Decisions
- Considerations/Issues
- Overview DTAP landscape
- Scope
- MS BizTalk and SQL Server editions
- SQL Database Server
- ICT Policy
- Operating systems
- Windows Server
- Backup
- Antivirus
- Windows update
- Security Settings
- Backup and Restore
- Backup procedure
- Restore procedure
- Development
- Development environment
- Development server
- Developer machine
- Test
- Test server
- Acceptance
- SQL Server clustering
- BizTalk group
- Acceptance server
- Production
- SQL Server clustering
- BizTalk group (load balancing)
- Production server
- Management and security
- Groups and accounts
- SCOM
- Single Sign-On
- Hosts
- In-process hosts
- Isolated hosts
- Trusted and untrusted hosts
- Hosts configuration DTAP
- Resources
- Appendix A: Redistributable CAB Files

Design decisions are an important part of your document. Here, you summarize all your design decisions and reference them to each corresponding chapter/section in the document where a decision is described; you also note issues around your design.

There's more...

Analyzing requirements is an important task, which should not be taken lightly. Knowing architectural patterns, for instance, can help you choose the right technology and create the appropriate design. It can be that BizTalk Server is not the right fit for the purpose. The following resources can aid you in analyzing the requirements.

Architectural patterns: Packt has published a book called Applied Architecture Patterns on the Microsoft Platform that can aid you in analyzing the requirements by selecting the right technology.

TechNet wiki article: Refer to the Recommendations for Installing, Sizing, Deploying, and Maintaining a BizTalk Server Solution article at http://social.technet.microsoft.com/wiki/contents/articles/666.aspx.

Microsoft BizTalk Server 2010 Operations Guide: Microsoft has created a BizTalk Server 2010 Operations Guide for anyone involved in the implementation and administration of a BizTalk solution, particularly IT professionals. You can find it online (http://msdn.microsoft.com/en-us/library/gg634499%28v=bts.70%29.aspx) or you can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4ef9eebb-b3f4-4534-b733-3eb2cb83d867&displaylang=en.

Microsoft volume licensing brief: Licensing Microsoft Server Products in Virtual Environments is an interesting white paper from Microsoft. It describes licensing models under virtual environments for the server operating systems and server applications.
This paper can help you understand how to use Microsoft server products with virtualization technologies, such as Microsoft Hyper-V technology, Microsoft Virtual Server 2005 R2, or third-party virtualization solutions provided by VMware and Parallels. You can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9ef7fc47-c531-40f1-a4e9-9859e593a1f1&displaylang=en.

Microsoft poster on scale-out configurations: Microsoft has published a poster (normal or interactive) that can be downloaded, describing typical scenarios and commonly used options for scaling out BizTalk Server 2010 physical configurations. This poster clearly illustrates how to scale for achieving high availability through load balancing and fault tolerance. It also shows how to configure for high-throughput scenarios. The normal poster can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2b70cbfc-d158-45a6-8bbd-99782d6747dc. An interactive poster created in Silverlight can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7ef9ae69-9cc8-442a-8193-831a414dfc30.

Installing and using the BizTalk Best Practices Analyzer

The Best Practices Analyzer (BPA) examines a BizTalk Server 2010 deployment and generates a list of issues pertaining to best practice standards for BizTalk Server deployments. This tool is designed to assess the configuration of a BizTalk installation. The BPA performs configuration-level verification by gathering data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries, and presents a report to the user. Under the hood, it uses the data to evaluate the deployment configuration. It does not modify any system settings and is not a self-tuning tool. The tool is there to support you in achieving the most suitable configuration and to report issues, or possible issues, that could potentially harm the BizTalk environment.

Getting ready

The latest version of the BPA tool (V1.2) can be obtained from the Microsoft download center (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=93d432fe-1370-4b6d-aaa8-a0c43c30f5ab&displaylang=en) and must be installed on the BizTalk machine. As a user, you need an account that has local administrative rights, is a member of the BizTalk Server Administrators group, and is a member of the SSO Administrators group to be able to run the BPA. You may need to explicitly set some WMI permissions before you can use the BPA in a distributed environment, where the SQL Server is not installed on the same computer as the BizTalk Server. This is because when the BPA tries to connect to a remote computer running the SQL Server, WMI may not have sufficient access to determine whether the SQL Server Agent is running. This may result in incorrect BPA evaluations.

How to do it...

To run the Best Practices Analyzer, perform one of the following:

- Start the BizTalk Server Best Practices Analyzer from the Start menu: go to Start | Programs | Microsoft BizTalk Server Best Practices Analyzer.
- Open Windows Explorer, navigate to the Best Practices Analyzer installation directory (by default, C:\Program Files\BizTalkBPA), and double-click on BizTalkBPA.exe.
- Open a command prompt, change to the installation directory, and then enter BizTalkBPACmd.exe.

The analysis then proceeds as follows. As soon as you start the BPA, it will check for updates.
The user can decide whether or not to check for newer versions of the configuration. If a newer version is found, you are able to download the latest updates. The next step is to perform a scan by clicking on Start a scan. After starting the scan, data will be gathered from the different information sources described earlier. After the scan has been completed, you can view the report of the performed scan: click View a report of this Best Practices scan and the report will be generated. After generation of the report, several tabs will appear:

- Critical Issues
- All Issues
- Non-Default Settings
- Recent Changes
- Baseline
- Informational Items

How it works...

When the BPA is running, it gathers information and evaluates it against best practice rules from the Microsoft product group and support. A report is presented to the user providing information on issues, non-default settings, changes, and so on. The report enables you to take action and apply the necessary changes to resolve identified issues. The BPA can be run again to verify that the environment adheres to all the necessary best practices. This shows the value of the tool when assessing the deployed BizTalk environment before it is operational. Once BizTalk becomes operational, the MessageBox Viewer (MBV) has more value.

There's more...

The BPA is very useful and gives you information that helps you to tune BizTalk and to keep it healthy. There are more tools that can help in sustaining a healthy environment overall. The Microsoft SQL Server 2008 R2 BPA is a diagnostic tool that provides information about a server and a Microsoft SQL Server 2008 or Microsoft SQL Server 2008 R2 instance installed on that server. The Microsoft SQL Server 2008 R2 Best Practices Analyzer can be downloaded from http://www.microsoft.com/download/en/details.aspx?id=15289. There are a couple of analyzers provided by Microsoft that do a good job of helping you and the system engineer to put together a healthy, robust, and stable environment:

- Best Practices Analyzer: http://technet.microsoft.com/en-us/library/dd759260.aspx
- Microsoft Baseline Configuration Analyzer 2.0: http://www.microsoft.com/download/en/details.aspx?id=16475
- Microsoft Baseline Security Analyzer 2.1.1: http://www.microsoft.com/download/en/details.aspx?id=19892


Creating a Shopping Cart using Zend Framework: Part 1

Packt
27 Oct 2009
13 min read
Our next task in creating the storefront is to create the shopping cart. This will allow users to select the products they wish to purchase. Users will be able to select, edit, and delete items from their shopping cart. Lets get started. Creating the Cart Model and Resources We will start by creating our model and model resources. The Cart Model differs from our previous model in the fact that it will use the session to store its data instead of the database. Cart Model The Cart Model will store the products that they wish to purchase. Therefore, the Cart Model will contain Cart Items that will be stored in the session. Let's create this class now. application/modules/storefront/models/Cart.php class Storefront_Model_Cart extends SF_Model_Abstract implements SeekableIterator, Countable, ArrayAccess { protected $_items = array(); protected $_subTotal = 0; protected $_total = 0; protected $_shipping = 0; protected $_sessionNamespace; public function init() { $this->loadSession(); } public function addItem( Storefront_Resource_Product_Item_Interface $product,$qty) { if (0 > $qty) { return false; } if (0 == $qty) { $this->removeItem($product); return false; } $item = new Storefront_Resource_Cart_Item( $product, $qty ); $this->_items[$item->productId] = $item; $this->persist(); return $item; } public function removeItem($product) { if (is_int($product)) { unset($this->_items[$product]); } if ($product instanceof Storefront_Resource_Product_Item_Interface) { unset($this->_items[$product->productId]); } $this->persist(); } public function setSessionNs(Zend_Session_Namespace $ns) { $this->_sessionNamespace = $ns; } public function getSessionNs() { if (null === $this->_sessionNamespace) { $this->setSessionNs(new Zend_Session_Namespace(__CLASS__)); } return $this->_sessionNamespace; } public function persist() { $this->getSessionNs()->items = $this->_items; $this->getSessionNs()->shipping = $this->getShippingCost(); } public function loadSession() { if (isset($this->getSessionNs()->items)) { $this->_items = $this->getSessionNs()->items; } if (isset($this->getSessionNs()->shipping)) { $this->setShippingCost($this->getSessionNs()->shipping); } } public function CalculateTotals() { $sub = 0; foreach ($this as $item) { $sub = $sub + $item->getLineCost(); } $this->_subTotal = $sub; $this->_total = $this->_subTotal + (float) $this->_shipping; } public function setShippingCost($cost) { $this->_shipping = $cost; $this->CalculateTotals(); $this->persist(); } public function getShippingCost() { $this->CalculateTotals(); return $this->_shipping; } public function getSubTotal() { $this->CalculateTotals(); return $this->_subTotal; } public function getTotal() { $this->CalculateTotals(); return $this->_total; } /*...*/ } We can see that the Cart Model class is fairly weighty and in fact, we have not included the full class here. The reason we have slightly truncated the class is that we are implementing the SeekableIterator, Countable, and ArrayAccess interfaces. These interfaces are defined by PHP's SPL Library and we use them to provide a better way to interact with the cart data. For the complete code, copy the methods below getTotal() from the example files for this article. We will look at what each method does shortly in the Cart Model implementation section, but first, let's look at what functionality the SPL interfaces allow us to add. 
Cart Model interfaces

The SeekableIterator interface allows us to access our cart data in these ways:

// iterate over the cart
foreach ($cart as $item) {}
// seek an item at a position
$cart->seek(1);
// standard iterator access
$cart->rewind();
$cart->next();
$cart->current();

The Countable interface allows us to count the items in our cart:

count($cart);

The ArrayAccess interface allows us to access our cart like an array:

$cart[0];

Obviously, the interfaces provide no concrete implementation for the functionality, so we have to provide it on our own. The methods not listed in the previous code listing are:

- offsetExists($key)
- offsetGet($key)
- offsetSet($key, $value)
- offsetUnset($key)
- current()
- key()
- next()
- rewind()
- valid()
- seek($index)
- count()

We will not cover the actual implementation of these interfaces, as they are standard to PHP. However, you will need to copy all these methods from the example files to get the Cart Model working (a rough sketch of what they can look like appears a little further on). Documentation for the SPL library can be found at http://www.php.net/~helly/php/ext/spl/.

Cart Model implementation

Going back to our code listing, let's now look at how the Cart Model is implemented. Let's start by looking at the properties and methods of the class. The Cart Model has the following class properties:

- $_items: An array of cart items
- $_subTotal: Monetary total of cart items
- $_total: Monetary total of cart items plus shipping
- $_shipping: The shipping cost
- $_sessionNamespace: The session store

The Cart Model has the following methods:

- init(): Called during construct and loads the session data
- addItem(Storefront_Resource_Product_Item_Interface $product, $qty): Adds or updates items in the cart
- removeItem($product): Removes a cart item
- setSessionNs(Zend_Session_Namespace $ns): Sets the session instance to use for storage
- getSessionNs(): Gets the current session instance
- persist(): Saves the cart data to the session
- loadSession(): Loads the stored session values
- calculateTotals(): Calculates the cart totals
- setShippingCost($cost): Sets the shipping cost
- getShippingCost(): Gets the shipping cost
- getSubTotal(): Gets the total cost for items in the cart (not including the shipping)
- getTotal(): Gets the subtotal plus the shipping cost

When we instantiate a new Cart Model instance, the init() method is called. This is defined in the SF_Model_Abstract class and is called by the __construct() method. This enables us to easily extend the class's instantiation process without having to override the constructor. The init() method simply calls the loadSession() method. This method populates the model with the cart items and shipping information stored in the session. The Cart Model uses Zend_Session_Namespace to store this data, which provides an easy-to-use interface to the $_SESSION variable. If we look at the loadSession() method, we see that it tests whether the items and shipping properties are set in the session namespace. If they are, then we set these values on the Cart Model. To get the session namespace, we use the getSessionNs() method. This method checks whether the $_sessionNamespace property is set and returns it. Otherwise, it will lazy load a new Zend_Session_Namespace instance for us. When using Zend_Session_Namespace, we must provide a string to its constructor that defines the name of the namespace to store our data in. This will then create a clean place to add variables to, without worrying about variable name clashes. For the Cart Model, the default namespace will be Storefront_Model_Cart.
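As mentioned above, the bodies of these SPL methods live in the downloadable example files for the article. The following is only a rough sketch of one way they could be written for a cart keyed by productId, so that foreach, count(), and array-style access behave as shown earlier; the methods shipped with the example files may differ in their details.

// A possible shape for the SPL methods inside Storefront_Model_Cart (illustrative only).
public function offsetExists($key)
{
    return isset($this->_items[$key]);
}

public function offsetGet($key)
{
    return isset($this->_items[$key]) ? $this->_items[$key] : null;
}

public function offsetSet($key, $value)
{
    $this->_items[$key] = $value;
    $this->persist();
}

public function offsetUnset($key)
{
    unset($this->_items[$key]);
    $this->persist();
}

public function current()
{
    return current($this->_items);
}

public function key()
{
    return key($this->_items);
}

public function next()
{
    next($this->_items);
}

public function rewind()
{
    reset($this->_items);
}

public function valid()
{
    return false !== current($this->_items);
}

public function seek($index)
{
    $this->rewind();
    $position = 0;
    while ($position < $index && $this->valid()) {
        $this->next();
        $position++;
    }
    if (!$this->valid()) {
        throw new OutOfBoundsException('Invalid seek position');
    }
}

public function count()
{
    return count($this->_items);
}

With methods along these lines in place, iterating the cart with foreach ($cart as $item), calling count($cart), and reading an item with $cart[$productId] all work as described.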
The Zend_Session_Namespace component provides a range of functionality that we can use to control the session. For example, we can set the expiration time as follows:

$ns = new Zend_Session_Namespace('test');
$ns->setExpirationSeconds(60, 'items');
$ns->setExpirationHops(10);
$ns->setExpirationSeconds(120);

This code would set the items property expiration to 60 seconds and the namespace's expiration to 10 hops (requests) or 120 seconds, whichever is reached first. The useful thing about this is that the expiration is not global. Therefore, we can have specialized expiration per session namespace. There is a full list of Zend_Session_Namespace functionalities in the reference manual.

Testing with Zend_Session and Zend_Session_Namespace: Testing with the session components can be fairly difficult. For the Cart Model, we use the setSessionNs() method to allow us to inject a mock object for testing, which you can see in the Cart Model unit tests. There are plans to rewrite the session components to make testing easier in the future, so keep an eye out for those updates.

To add an item to the cart, we use the addItem() method. This method accepts two parameters, $product and $qty. The $product parameter must be an instance of the Storefront_Resource_Product_Item class, and the $qty parameter must be an integer that defines the quantity that the customer wants to order. If the addItem() method receives a valid $qty, then it will create a new Storefront_Resource_Cart_Item and add it to the $_items array using the productId as the array key. We then call the persist() method. This method simply stores all the relevant cart data in the session namespace for us. You will notice that we are not using a Model Resource in the Cart Model, and instead we are directly instantiating a Model Resource Item. This is because the Model Resources represent store items and the Cart Model is already doing this for us, so it is not needed. To remove an item, we use the removeItem() method. This accepts a single parameter, $product, which can be either an integer or a Storefront_Resource_Product_Item instance. The matching cart item will be removed from the $_items array and the data will be saved to the session. Also, addItem() will call removeItem() if the quantity is set to zero. The other methods in the Cart Model are used to calculate the monetary totals for the cart and to set the shipping. We will not cover these in detail here as they are fairly simple mathematical calculations.

Cart Model Resources

Now that we have our Model created, let's create the Resource Interface and concrete Resource class for our Model to use.

application/modules/storefront/models/resources/Cart/Item/Interface.php

interface Storefront_Resource_Cart_Item_Interface
{
    public function getLineCost();
}

The Cart Resource Item has a very simple interface that has one method, getLineCost(). This method is used when calculating the cart totals in the Cart Model.
application/modules/storefront/models/resources/Cart/Item.php

class Storefront_Resource_Cart_Item implements Storefront_Resource_Cart_Item_Interface
{
    public $productId;
    public $name;
    public $price;
    public $taxable;
    public $discountPercent;
    public $qty;

    public function __construct(Storefront_Resource_Product_Item_Interface $product, $qty)
    {
        $this->productId = (int) $product->productId;
        $this->name = $product->name;
        $this->price = (float) $product->getPrice(false, false);
        $this->taxable = $product->taxable;
        $this->discountPercent = (int) $product->discountPercent;
        $this->qty = (int) $qty;
    }

    public function getLineCost()
    {
        $price = $this->price;

        if (0 !== $this->discountPercent) {
            $discounted = ($price * $this->discountPercent) / 100;
            $price = round($price - $discounted, 2);
        }

        if ('Yes' === $this->taxable) {
            $taxService = new Storefront_Service_Taxation();
            $price = $taxService->addTax($price);
        }

        return $price * $this->qty;
    }
}

The concrete Cart Resource Item has two methods, __construct() and getLineCost(). The constructor accepts two parameters, $product and $qty, which must be a Storefront_Resource_Product_Item_Interface instance and an integer respectively. This method will then simply copy the values from the product instance and store them in the matching public properties. We do this because we do not want to store the product instance itself, as it has all the database connection data contained within, and this object will be serialized and stored in the session. The getLineCost() method simply calculates the cost of the product, applying tax and discounts, and then multiplies it by the given quantity.

Shipping Model

We also need to create a Shipping Model so that the user can select what type of shipping they would like. This Model will simply act as a data store for some predefined shipping values.

application/modules/storefront/models/Shipping.php

class Storefront_Model_Shipping extends SF_Model_Abstract
{
    protected $_shippingData = array(
        'Standard' => 1.99,
        'Special'  => 5.99,
    );

    public function getShippingOptions()
    {
        return $this->_shippingData;
    }
}

The Shipping Model is very simple and only contains the shipping options and a single method to retrieve them. In a normal application, shipping would usually be stored in the database and would most likely have its own set of business rules. For the Storefront, we are not creating a complete ordering process, so we do not need these complications.

Creating the Cart Controller

With our Model and Model Resources created, we can now start wiring the application layer together. The Cart will have a single Controller, CartController, that will be used to add, view, and update cart items stored in the Cart Model.

application/modules/storefront/controllers/CartController.php

class Storefront_CartController extends Zend_Controller_Action
{
    protected $_cartModel;
    protected $_catalogModel;

    public function init()
    {
        $this->_cartModel = new Storefront_Model_Cart();
        $this->_catalogModel = new Storefront_Model_Catalog();
    }

    public function addAction()
    {
        $product = $this->_catalogModel->getProductById(
            $this->_getParam('productId')
        );

        if (null === $product) {
            throw new SF_Exception(
                'Product could not be added to cart as it does not exist'
            );
        }

        $this->_cartModel->addItem(
            $product, $this->_getParam('qty')
        );

        $return = rtrim(
            $this->getRequest()->getBaseUrl(), '/'
        ) .
            $this->_getParam('returnto');

        $redirector = $this->getHelper('redirector');
        return $redirector->gotoUrl($return);
    }

    public function viewAction()
    {
        $this->view->cartModel = $this->_cartModel;
    }

    public function updateAction()
    {
        foreach ($this->_getParam('quantity') as $id => $value) {
            $product = $this->_catalogModel->getProductById($id);

            if (null !== $product) {
                $this->_cartModel->addItem($product, $value);
            }
        }

        $this->_cartModel->setShippingCost(
            $this->_getParam('shipping')
        );

        return $this->_helper->redirector('view');
    }
}

The Cart Controller has three actions that provide a way to:

- add: add cart items
- view: view the cart contents
- update: update cart items

The addAction() first tries to find the product to be added to the cart. This is done by searching for the product by its productId field, which is passed either in the URL or by post, using the Catalog Model. If the product is not found, then we throw an SF_Exception stating so. Next, we add the product to the cart using the addItem() method. When adding the product, we also pass in the qty. The qty can again be either in the URL or post. Once the product has been successfully added to the cart, we then need to redirect back to the page where the product was added. As we can have multiple locations, we send a returnto variable with the add request (an illustrative form that supplies these parameters is shown at the end of this article). This will contain the URL to redirect back to once the item has been added to the cart. To stop people from being able to redirect away from the storefront, we prepend the baseUrl to the redirect string. To perform the actual redirect, we use the redirector Action Helper's gotoUrl() method. This will create an HTTP redirect for us. The viewAction() simply assigns the Cart Model to the cartModel View property. Most of the cart viewing functionality has been pushed to the Cart View Helper and Forms, which we will create shortly. The updateAction() is used to update the Cart Items already stored in the cart. The first part of this updates the quantities. The quantities will be posted to the Action as an array in the quantity parameter. The array will contain the productId as the array key, and the quantity as the value. Therefore, we iterate over the array, finding the product by its ID and adding it to the cart. The addItem() method will then update the quantity for us if the item exists and remove any item with a zero quantity. Once we have updated the cart quantities, we set the shipping and redirect back to the viewAction.

Continue reading in Creating a Shopping Cart using Zend Framework: Part 2.
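As referenced in the discussion of addAction() above, the following purely hypothetical view-script fragment shows one way a product page could submit the parameters the action expects (productId, qty, and returnto). It is not taken from the Storefront code or from Part 2; it assumes a $this->product object in the view, the default /cart/add routing, and a return path relative to the base URL.

<!-- Hypothetical product view-script fragment (for example, product.phtml) -->
<form method="post" action="<?php echo $this->baseUrl('cart/add'); ?>">
    <input type="hidden" name="productId"
           value="<?php echo (int) $this->product->productId; ?>" />
    <!-- Path (relative to the base URL) to come back to after the item is added -->
    <input type="hidden" name="returnto" value="/catalog/index" />
    <label>Quantity:
        <input type="text" name="qty" value="1" size="2" />
    </label>
    <input type="submit" value="Add to basket" />
</form>

Submitting this form posts to addAction(), which looks the product up by productId, adds it with the given qty, and then redirects back to the returnto path, exactly as described above.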


Advanced System Management

Packt
25 Oct 2013
11 min read
Beyond backups

Of course, backups are not the only issue involved in managing multiple, remote systems. In particular, managing multiple such configurations using a centralized application is often desirable.

Configuration management

One of the issues frequently faced by administrators is that of having multiple, remote systems, all with similar software for the most part, but with minor differences in what is installed or running. Debian provides several packages that can help manage such an environment in a unified manner. Two of the more popular packages, both available in Debian, are FAI and Puppet. While we don't have the space to go into details, both applications are described briefly here.

Fully Automated Installation

Fully Automated Installation (FAI) focuses on managing Linux installations, and is developed using Debian, although it works with many different distributions, not just Debian. FAI uses a class concept for categorizing similar systems, and provides a good deal of flexibility and customization via hooks. FAI provides for unattended, automatic installation as well as tools for monitoring and updating groups of systems. FAI is frequently used for creating and maintaining clusters. More information is available at http://fai-project.org/.

Puppet

Probably the best known application for distributed management is Puppet, developed by Puppet Labs. Unlike FAI, only the Open Source edition is free; the Enterprise edition, which has many additional features, is not. Puppet does include support for environments other than Linux. The desired configuration is described in a custom, high-level definition language, and distributed to systems with installed clients. Unlike FAI, Puppet does not provide its own bare-metal remote installation method, but does use existing methods (such as kickstart) to provide this function. A number of companies that make heavy use of distributed and clustered systems use Puppet to manage their environments. More information is available at http://puppetlabs.com/.

Other packages

There are other packages that can be used to manage a distributed environment, such as Chef and BCFG2. While simpler than Puppet or FAI, they support similar functions and have been used in some distributed and clustered environments. The use of FAI, Puppet, and others in cluster management warrants a brief look at clustering next, and at what packages in Debian support clustering.

Clusters

A cluster is a group of systems that work together in such a way that the whole functions as a single unit. Such clusters can be loosely coupled or tightly coupled. In a loosely coupled environment, each system is complete in itself, and can handle all of the tasks any of the other systems can handle. The environment provides mechanisms for redundancy, load sharing, and fail-over between systems, and is often called a High Availability (HA) cluster. In a tightly coupled environment, the systems involved are highly dependent on one another, often sharing memory and disk storage, and all work on the same task together. The environment provides mechanisms for data sharing, avoiding storage conflicts, keeping the systems in synchronization, and splitting up tasks appropriately. This design is often used in super-computing environments. Clustering is an advanced technique that involves more than just installing and configuring software. It also involves hardware integration, and systems and network design and implementation.
Along with the URLs mentioned below, a good text on the subject is Building Clustered Linux Systems by Robert W. Lucke (Prentice Hall). Here we will only touch on the very basics, along with the tools Debian provides. Let's take a brief look at each environment and some of the tools used to create them.

High Availability clusters

Two primary functions are required to implement a High Availability cluster:

A way to handle load balancing and individual host fail-over.
A way to synchronize storage so that all servers provide the same view of the data they serve.

Debian includes meta packages that bring together software from the Linux High Availability project, including cluster-agents and resource-agents, two of the higher-level meta packages. These packages install various agents that are useful in coordinating and managing load balancing and fail-over. In some cases, a master server is designated to distribute the processing load among the other servers. Data synchronization is handled by using shared storage and any of the filesystems that provide for multiple accesses and shared files, such as NFS or AFS. High Availability clusters generally use standard software, along with readily available software to manage the dynamics of such environments.

Beowulf clusters

In addition to the considerations for High Availability clusters, more tightly coupled environments such as Beowulf clusters also require an infrastructure to manage and distribute computing tasks. There are several web pages devoted to creating a Beowulf cluster using Debian, as well as packages that aid in creating such a cluster. One such page is https://wiki.debian.org/StartaBeowulf, a Debian Wiki page on Beowulf basics. The manual for FAI also has a section on creating a Beowulf cluster, and books are available as well. Debian provides several packages that are helpful in building such a cluster, such as the OpenMPI libraries for message passing, and various utilities that run commands on multiple systems, such as those in the kadif package. There are even projects that have released scripts and live CDs that allow you to set up a cluster quickly (one such project is PelicanHPC, developed for Debian Lenny and hosted at http://www.pelicanhpc.org/).

This type of cluster is not something that you can simply set up and walk away from. Beowulf and other tightly coupled clusters are intended for highly parallel computing, and the programs that do the actual computing must be designed specifically for such an environment. That said, some packages for specific parallel computations do exist in Debian, such as nwchem, which provides several applications for computational chemistry that take advantage of parallelism.

Common tools

Some common components of clusters have already been mentioned, such as the OpenMPI libraries. Aside from the meta packages already mentioned, the redhat-cluster suite of tools is available in Debian, as well as many useful libraries, scheduling tools, and fail-over tools such as booth. All of these can be found using apt-cache or Synaptic by searching for "cluster".

Webmin

Many administrators will never have to administer a cluster, and many won't be responsible for a large number of systems requiring central backup solutions. However, even administering a single system using command-line tools and text editors can be a chore, and even clusters sometimes require administrative tasks on individual systems.
Fortunately, there is an application that can ease many administrative tasks, is easy to use, and can handle many aspects of Linux administration. It is called Webmin.

Up until Debian Sarge, Webmin was a part of the Debian distribution. However, the Debian developer in charge of packaging it had difficulty keeping up with the frequent releases, and it was eventually dropped from Debian. The upstream Webmin developers, though, maintain current packages that install cleanly. Some users have reported issues because Webmin does not always handle configuration files exactly as Debian intends, but it certainly attempts to handle them in a compatible manner, and while some users have experienced problems with upgrades, many administrators are quite happy with Webmin. As long as you are willing to deal with conflicts during upgrades, or restrict the use of modules that have major configuration impacts, you will find Webmin quite useful.

Installing Webmin

Webmin may be installed by adding the following lines to your apt sources file (a consolidated command sketch appears just before the Webmin and Debian section below):

deb http://download.webmin.com/download/repository sarge contrib
deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

Usually, this is added to a separate webmin.list file in /etc/apt/sources.list.d. The use of 'sarge' for the release name in the configuration is not a mistake. Since Webmin was dropped after the Sarge release (Debian 3.1), the developers update the repository as it is and haven't bothered changing it to keep up with the Debian code names. However, the versions available in the repository are compatible with any Debian release since 3.1. After updating your cache file, Webmin can be installed and maintained using apt-get, aptitude, or Synaptic. Also, if you request a Webmin upgrade from within Webmin itself on a Debian system, it will use the proper Debian package to upgrade.

Using Webmin

Webmin runs in the background and provides an HTTP or HTTPS server on localhost port 10000. You can use any web browser to connect to http://localhost:10000/ to access Webmin. Upon first installation, only the root user, or those in a group allowed to use sudo to access the root account, may log in, but Webmin users can be managed separately or in conjunction with local users. Webmin provides extensive and easy-to-understand menus and icons for the various configuration tasks.

Webmin is also highly modular and extensible, and an extensive list of standard modules is included with the base package. It is not possible to cover Webmin as fully here as it deserves, but a short list of some of its capabilities includes:

Configuration of Webmin itself (the server, users, modules, and security)
Local system user and password management
Filesystem management
Bootup and service management
Cron job management
Software updates
Basic filesystem backups
Authentication and security configuration
Apache, DNS, SSH, and FTP (if you're using ProFTPD) configuration
User mail management
Qmail or sendmail configuration
Network and firewall configuration and management
Bandwidth monitoring
Printer management

There are even modules that apply to clusters. Also, Webmin can search for and allow access to other Webmin servers on the local network, or you can define remote servers manually. This allows a central Webmin server, installed on a particular system, to be the gateway to all of the other servers in your environment, essentially providing a single point of access to manage all Webmin-enabled servers.
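For convenience, here is a consolidated sketch of the installation steps described above. The repository lines are the ones given in the text; the signing-key URL is the one the Webmin developers have traditionally published, so treat it as an assumption and check webmin.com if the download fails:

# /etc/apt/sources.list.d/webmin.list
deb http://download.webmin.com/download/repository sarge contrib
deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

# Import the repository signing key (URL assumed), then install
wget -q -O - http://www.webmin.com/jcameron-key.asc | apt-key add -
apt-get update
apt-get install webmin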
Webmin and Debian Webmin understands the configuration file layout of many distributions. The main problem is when a particular module does not handle certain types of configuration in the way the Debian developers prefer, which can make package upgrades somewhat difficult. This can be handled in a couple of ways. Most modules provide a means to edit configuration files directly, so if you have read the Debian documentation you can modify the configuration appropriately to use Debian specific configuration techniques. Or, you may choose to allow Webmin to modify files as it sees fit, and handle any conflicts manually when you upgrade the software involved. Finally, you can avoid those modules involved with specific software that are more likely to cause problems. One such module is Apache, which doesn't use links from sites-enabled to sites-available. Rather, it configures directly in the sites-enabled directory. Some administrators create the configuration in Webmin, and then move and link the files. Others prefer to manually configure Apache outside of Webmin. Webmin modules are constantly changing, and some actually recognize the Debian file layouts well, so it is not possible to give a comprehensive list of modules to avoid at this time. Best practice when using Webmin is to read the documentation and check the configuration files for specific software prior to using Webmin. Then, after configuring with Webmin, check the files again to determine whether changes may be required to work within the particular package's Debian configuration framework. Based upon this, you can decide whether to continue to configure using Webmin or switch back to manual configuration of that particular software. Webmin security Security is always a concern when remote access to a system is involved. Webmin handles this by requiring authentication and providing for detailed access restrictions that provide a layer of control beyond the firewall. Webmin users can be defined separately, or certain local users can be designated. Access to the various modules in Webmin can be restricted to certain users or groups of users, and detailed logs of Webmin actions are kept. Usermin In addition to Webmin, there is a server called Usermin which may be installed from the same repository as Webmin. It allows individual users to perform a number of functions more easily, such as changing their password, accessing their files, read and manage their email, and managing some aspects of their user profile. It is also modular and has the same security features as Webmin. Summary Several powerful and flexible central backup solutions exist that help manage backups for multiple remote servers and sites. Debian provides packages that assist in building High Availability and Beowulf style multiprocessing clusters as well. And, whether you are involved in managing clusters or not, or even a single system, Webmin can ease an administrator's tasks. Resources for Article: Further resources on this subject: Customizing a Linux kernel [Article] Microsoft SharePoint 2010 Administration: Farm Governance [Article] Testing Workflows for Microsoft Dynamics AX 2009 Administration [Article]

Managing Your Joomla! Media Files with Media Manager

Packt
29 Jan 2010
6 min read
Overview of the Joomla! Media Manager

The Media Manager is a useful file management tool, which is included in the Joomla! CMS. The Media Manager tool is located within your administration area and can be accessed by using the "Quick Link" icon on your Control Panel, or by going to the Menu: Site | Media Manager.

One of the main purposes of the Media Manager is to easily allow site administrators, and frontend users with permissions, the ability to upload and manage files for their Joomla! site. In circumstances where you do not have FTP (File Transfer Protocol) access to your web server, the media manager might be the only available tool with which you can add new images, videos, documents, and other files to your website.

Uploading media using the Media Manager

During initial site development, there are usually regular requirements to upload new files to your Joomla! site. Depending on the content for your website, this process can decrease as you move into the maintenance stages, or stay as a requirement for sites which are updated often.

Media Manager settings

As with all software applications, the Media Manager tool contains a set of predefined settings. Before using the Media Manager for the first time, it is recommended that you take a look at these as they offer the ability to customize media handling for your website. Depending on your file requirements, adjusting the media configuration settings now may save you time and effort down the line. Your Joomla! site Media Settings can be found by going to Site | Global Configuration. Once the page has loaded, you will then need to click on the link named System. The Media Settings area not only allows you to adjust settings related to the Media Manager, but also contains general settings for the media used throughout your Joomla! website. Information regarding each setting is as follows:

Legal Extensions (File Types): This field is a comma-separated list of file types that you want to allow to be uploaded to your Joomla! website. This setting applies to the frontend of your site, as well as the backend which includes the Media Manager tool.

Maximum Size (in bytes): This field holds the maximum size of the file (in bytes) to be uploaded. This can be set to "0" if you do not wish to restrict your file upload sizes. Most web servers will have their own file size limit that is usually configurable for the server by adjusting the server information file.

Path to Media Folder: By default, Joomla! has a media folder called <joomlaroot>/images. This is the area where all files will be uploaded to when using the Media Manager. You can change this value to a different directory if you wish, creating a default path for managing your media. The majority of Joomla! projects would probably leave this value as default. If you do decide to use another folder name for your media directory, it is important to leave the current /images directory on the server as this can often be used by other components.

Path to Image Folder: This is generally a path where you put your images for your Joomla! Content Articles. By default, it is set to <joomlaroot>/images/stories. You can change this to be what you wish. If you want to access this folder from the Media Manager, then make this a subfolder of the "Media Folder" previously mentioned. For example, <joomlaroot>/<mediafoldername>/<imagefoldername>.
If you do decide to use another folder name for your image directory, it is important to leave the current /images/stories directory on the server as this can often be used by other components.

Restrict Uploads: This feature restricts uploads by user type. The default is set to Yes, which means that users below the status of a "Manager" will only get one folder option to upload files into. That folder is your main "Media Folder". If you set this option to No, then users will also be allowed to upload to subdirectories within your main media folder.

Check MIME Types: This is a security feature, and uses MIME Magic or Fileinfo to verify your uploaded file types. By checking the MIME file information, you help ensure users don't upload malicious files to your site. Further information about Fileinfo can be found at http://www.php.net/manual/en/book.fileinfo.php. (A short standalone sketch using Fileinfo appears at the end of this section.)

Legal Image Extensions (File Types): This is a list of legal image extensions that you and other users are allowed to upload to your Joomla! site. The default list includes bmp, gif, jpg, and png files. Adjust if you require further image extension types.

Ignored Extensions: This setting lists the file types which should be ignored for MIME checking. By default, this is left blank so all files would be included if MIME checking is turned on.

Legal MIME Types: This sets the list of legal MIME types for uploading. By default, this setting includes some file types, and it is recommended that you do not adjust this setting unless you know what you are doing.

Illegal MIME Types: This sets the list of illegal MIME types for uploading. As with the legal MIME types, it is recommended that you do not adjust this setting unless you know what you are doing.

Enable Flash Uploader: The Media Manager contains an integrated Flash uploader tool. If enabled, this allows you to upload multiple files at once. The default setting is No. If you do decide to enable the Flash uploader and receive uploading issues, then disable this feature again. Issues can arise from incompatible Adobe Flash settings.

If you have made adjustments to the default Joomla! Media Settings, then you will need to save these by clicking on the Save button at the top right-hand side of the page in the Global Configuration section. A confirmation message to inform you that these settings have been saved should show on the following page. Now that we have configured our site's Media Settings, let's head over to take a detailed look at the Media Manager upload feature.
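To give an idea of what the Check MIME Types setting guards against, here is a minimal, standalone PHP sketch using the Fileinfo extension mentioned above. The allowed list and the field name 'upload' are illustrative assumptions, not Joomla!'s actual configuration:

<?php
// Inspect the real MIME type of an uploaded file, regardless of its extension
$allowed = array('image/jpeg', 'image/png', 'image/gif');

$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime  = finfo_file($finfo, $_FILES['upload']['tmp_name']);
finfo_close($finfo);

if (!in_array($mime, $allowed)) {
    die('Upload rejected: unexpected MIME type ' . $mime);
}
?>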

Adding Advanced Form Features using ChronoForms

Packt
31 Aug 2010
16 min read
(For more resources on ChronoForms, see here.) Using PHP to create "select" dropdowns One frequent request is to create a select drop-down from a set of records in a database table. Getting ready You'll need to know the MySQL query to extract the records you need from the database table. This requires some Joomla! knowledge and some MySQL know-how. Joomla! keeps its articles in the jos_content database table and the two-table columns that we want are the article title and the article id. A quick check in the database tells us that the columns are appropriately called title and id. We can also see that the section id column is called sectionid (with no spaces); the column that tells us if the article is published or not is called state and takes the values 1 for published and 0 for not published. How to do it . . . We are going to use some PHP to look up the information about the articles in the database table and then output the results it finds as a series of options for our drop-down box. You'll recall that normal HTML code for a drop-down box looks something like this: <select . . .> <option value='option_value_1'>Option 1 text</option> <option value='option_value_2'>Option 2 text</option> . . .</select> This is simplified a little so that we can see the main parts that concern us here—the options. Each option uses the same code with two variables—a value attribute and a text description. The value will be returned when the form is submitted; the text description is shown when the form is displayed on your site. In simple forms, the value and the description text are often the same. This can be useful if all you are doing with the results is to display them on a web page or in an e-mail. If you are going to use them for anything more complicated than that, it can be much more useful to use a simplified, coded form in the value. For our list of articles, it will be helpful if our form returns the ID of the article rather than its title. Hence, we need to set the options to be something like this: <option value='99'>Article title</option> Having the article ID will let us look up the article in the database and extract any other information that we might need, or to update the record to change the corresponding record. Here's the code that we'll use. The beginning and ending HTML lines are exactly the same as the standard drop-down code that ChronoForms generates but the "option" lines are replaced by the section inside the <?php . . . ?> tags. The PHP snippet looks up the article IDs and titles from the jos_content table, then loops through the results writing an <option> tag for each one: <div class="form_item"> <div class="form_element cf_dropdown"> <label class="cf_label" style="width: 150px;"> Articles</label> <select class="cf_inputbox validate-selection" id="articles" size="1" name="articles"> <option value=''>--?--</option><?phpif (!$mainframe->isSite() ) {return;}$db =& JFactory::getDBO();$query = " SELECT `id`, `title` FROM `#__content` WHERE `sectionid` = 1 AND `state` = 1 ;";$db->setQuery($query);$options = $db->loadAssocList();foreach ( $options as $o ) { echo "<option value='".$o[id]."'>".$o[title]."</option>";}?> </select> </div> <div class="cfclear">&nbsp;</div></div> The resulting HTML will be a standard select drop-down that displays the list of titles and returns the article ID when the form is submitted. 
Here's what the form input looks like: A few of the options from the page source are shown: <option value='1'>Welcome to Joomla!</option><option value='2'>Newsflash 1</option><option value='3'>Newsflash 2</option>. . . How it works... We are loading the values of the id and title columns from the database record and then using a PHP for each loop to go through the results and add each id and title pair into an <option> tag. There's more... There are many occasions when we want to add select drop-downs into forms with long lists of options. Date and time selectors, country and language lists, and many others are frequently used. We looked here to get the information from a database table which is simple and straightforward when the data is in a table or when the data can conveniently be stored in a table. It is the preferred solution for data such as article titles that can change from day to day. There are a couple of other solutions that can also be useful: Creating numeric options lists directly from PHP Using arrays to manage option lists that change infrequently Creating numeric options lists Let's imagine that we need to create a set of six numeric drop-downs to select: day, month, year, hour, minute, and second. We could clearly do these with manually-created option lists but it soon gets boring creating sixty similar options. There is a PHP method range() that lets us use a similar approach to the one in the recipe. For a range of zero to 60, we can use range(0, 60). Now, the PHP part of our code becomes: <div class="form_item"> <div class="form_element cf_dropdown"> <label class="cf_label" style="width: 150px;"> Minutes</label> <select class="cf_inputbox validate-selection" id="minutes" size="1" name="minutes"> <option value=''>--?--</option><?phpif (!$mainframe->isSite() ) {return;}foreach ( range(0, 60) as $v ) { echo "<option value='$v'>$v</option>";}?> </select> </div> <div class="cfclear">&nbsp;</div></div> This is slightly simpler than the database foreach code, as we don't need the quotes round the array values. This will work very nicely and we could repeat something very similar for each of the other five drop-downs. However, when we think about it, they will all be very similar and that's usually a sign that we can use more PHP to do some of the work for us. Indeed we can create our own little PHP function to output blocks of HTML for us. Looking at this example, there are four things that will change between the blocks—the label text, the name and id, and the range start and the range end. We can set these as variables in a PHP function: <?phpif ( !$mainframe->isSite() ) {return;}function createRangeSelect($label, $name, $start, $end) {?><div class="form_item"> <div class="form_element cf_dropdown"> <label class="cf_label" style="width: 150px;"> <?php echo $label; ?></label> <select class="cf_inputbox validate-selection" id="<?php echo $name; ?>" size="1" name="<?php echo $name; ?>"> <option value=''>--?--</option><?php foreach ( range($start, $end) as $v ) { echo "<option value='$v'>$v</option>"; }?> </select> </div> <div class="cfclear">&nbsp;</div></div><?php}?> Notice that this is very similar to the previous code example. We've added the function . . . line at the start, the } at the end, and replaced the values with variable names. It's important to get the placement of the <?php . . . ?> tags right. Code inside the tags will be treated as PHP, outside them as HTML. 
All that remains now is to call the function to generate our drop-downs: <?phpif (!$mainframe->isSite() ) {return;}createRangeSelect('Day', 'day', 0, 31);createRangeSelect('Month', 'month', 1, 12);createRangeSelect('Year', 'year', 2000, 2020);createRangeSelect('Hour', 'hour', 0, 24);createRangeSelect('Minute', 'minute', 0, 60);createRangeSelect('Second', 'second', 0, 60);function createRangeSelect($label, $name, $start, $end) {. . . The result tells us that we have more work to do on the layout, but the form elements work perfectly well. Creating a drop-down from an array In the previous example, we used the PHP range() method to generate our options. This works well for numbers but not for text. Imagine that we have to manage a country list. These do change, but not frequently. So they are good candidates for keeping in an array in the Form HTML. It's not too difficult to find pre-created PHP arrays of countries with a little Google research and it's probably easier to use one of these and correct it for your needs than to start from scratch. As we mentioned with the Article list, it's generally simpler and more flexible to use a list with standard IDs (we've used two-letter codes below). With countries, this can remove many problems with special characters and translations. Here are the first few lines of a country list: $countries = array( 'AF'=>'Afghanistan', 'AL'=>'Albania', 'DZ'=>'Algeria', 'AS'=>'American Samoa', . . .); Once we have this, it's easy to modify our foreach . . . loop to use it: foreach ( $countries as $k => $v ) { echo "<option value='$k'>$v</option>";} If you are going to use the country list in more than one form, then it may be worthwhile keeping it in a separate file that is included in the Form HTML. That way, any changes you make will be updated immediately in all of your forms. Using Ajax to look up e-mail addresses It's not very difficult to add Ajax functionality to ChronoForms, but it's not the easiest task in the world either. We'll walk through a fairly simple example here which will provide you with the basic experience to build more complex applications. You will need some knowledge of JavaScript to follow through this recipe. Normally, the only communication between the ChronoForms client (the user in their browser) and the server (the website host) is when a page is loaded or a form is submitted. Form HTML is sent to the client and a $_POST array of results is returned. Ajax is a technique, or a group of techniques, that enables communication while the user is browsing the page without them having to submit the form. As usual, at the browser end the Ajax communication is driven by JavaScript and at the server end we'll be responding using PHP. Put simply, the browser asks a question, the server replies, then the browser shows the reply to the user. For the browser JavaScript and the server PHP to communicate, there needs to be an agreed set of rules about how the information will be packaged. We'll be using the JSON (www.json.org) format. The task we will work on will use a newsletter form. We'll check to see if the user's e-mail is already listed in our user database. This is slightly artificial but the same code can easily be adapted to work with the other database tables and use more complex checks. Getting ready We'll need a form with an e-mail text input. 
The input id needs to be email for the following code to work:

<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label" style="width: 150px;">Email</label>
    <input class="cf_inputbox" maxlength="150" size="30" title="" id="email" name="email" type="text" />
  </div>
  <div class="cfclear">&nbsp;</div>
</div>

The form we use will also have a name text input and a submit button, but they are there to make it look like a real form and aren't used in the Ajax coding.

How to do it . . .

We'll follow the action and start with the user action in the browser. We need to start our check when the user makes an entry in the e-mail input. So, we'll link our JavaScript to the blur event in that input. Here's the core of the code that goes in the Form JavaScript box:

// set the url to send the request to
var url = 'index.php?option=com_chronocontact&chronoformname=form_name&task=extra&format=raw';
// define 'email'
var email = $('email');
// set the Ajax call to run
// when the email field loses focus
email.addEvent('blur', function() {
  // Send the JSON request
  var jSonRequest = new Json.Remote(url, {
    . . .
  }).send({'email': email.value});
});

Note that the long line starting with var url = . . . &format=raw'; is all one line and should not have any breaks in it. You also need to replace 'form_name' with the name of your form in this URL.

There really isn't too much to this. We are using the MooTools JSON functions and they make sending the request very simple. The next step is to look at what happens back on the server. The URL we used in the JavaScript includes the task=extra parameter. When ChronoForms sees this, it will ignore the normal Form Code and instead run the code from the Extra Code boxes at the bottom of the Form Code tab. By default, ChronoForms will execute the code from Extra Code 1. If you need to access one of the other boxes, then use, for example, task=extra&extraid=3 to run the code from Extra Code box 3.

Now we are working back on the server, so we need to use PHP to unpack the Ajax message, check the database, and send a message back:

<?php
// clean up the JSON message
$json = stripslashes($_POST['json']);
$json = json_decode($json);
$email = strtolower(trim($json->email));
// check that the email field isn't empty
$response = false;
if ( $email ) {
  // Check the database
  $db =& JFactory::getDBO();
  $query = "
    SELECT COUNT(*)
      FROM `#__users`
      WHERE LOWER(`email`) = ".$db->quote($email).";
  ";
  $db->setQuery($query);
  $response = (bool) !$db->loadResult();
}
$response = array('email_ok' => $response );
// send the reply
echo json_encode($response);
// stop the form from running
$MyForm->stopRunning = true;
die;
?>

This code has three main parts: to start with, we "unwrap" the JSON message; then we check that it isn't empty and run the database query; lastly, we package up the reply and tidy up at the end to stop any more form processing from this request.

The result we send will be array('email_ok' => $response ) where $response will be either true or false. This is probably the simplest JSON message possible, but is enough for our purpose. Note that here, true means that this e-mail is not listed and is OK to use.

The third step is to go back to the form JavaScript and decide how we are going to respond to the JSON reply. Again, we'll keep it simple and just change the background color of the box—red if the e-mail is already in use (or isn't a valid e-mail) or green if the entry isn't in use and is OK to submit.
Here's the code snippet to do this using the onComplete parameter of the MooTools JSON function:

onComplete: function(r) {
  // check the result and set the background color
  if ( r.email_ok ) {
    email.setStyle('background-color', 'green');
  } else {
    email.setStyle('background-color', 'red');
  }
}

Instead of (or as well as) changing the background color, we could make other CSS changes, display a message, show a pop-up alert, or almost anything else.

Lastly, let's put the two parts of the client-side JavaScript together with a little more code to make it run smoothly and to check that there is a valid e-mail before sending the JSON request.

window.addEvent('domready', function() {
  // set the url to send the request to
  var url = 'index.php?option=com_chronocontact&chronoformname=form_name&task=extra&format=raw';
  var email = $('email');
  email.addEvent('blur', function() {
    // clear any background color from the input
    email.setStyle('background-color', 'white');
    // check that the email address is valid
    regex = /^([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})$/i;
    var value = email.value.trim();
    if ( value.length > 6 && regex.test(value) ) {
      // if all is well send the JSON request
      var jSonRequest = new Json.Remote(url, {
        onComplete: function(r) {
          // check the result and set the background color
          if ( r.email_ok ) {
            email.setStyle('background-color', 'green');
          } else {
            email.setStyle('background-color', 'red');
          }
        }
      }).send({'email': email.value});
    } else {
      // if this isn't a valid email set background color red
      email.setStyle('background-color', 'red');
    }
  });
});

Note that the long line starting with var url = . . . &format=raw'; is all one line and should not have any breaks in it. You also need to replace form_name with the name of your form in this URL.

Make sure both code blocks are in place, in the Form JavaScript box and in the Extra Code 1 box, then save and publish your form. Then test it to make sure that the code is working OK. The Ajax may take a second or two to respond, but once you move out of the e-mail input by tabbing on to another input or clicking somewhere else, the background colour should go red or green.

How it works...

As far as the Ajax and JSON parts of this are concerned, all we can say here is that it works. You'll need to dig into the MooTools, Ajax, or JSON documents to find out more. From the point of view of ChronoForms, the "clever" bit is the ability to interpret the URL that the Ajax message uses. We ignored most of it at the time, but the JavaScript included this long URL (with the query string broken up into separate parameters):

index.php?option=com_chronocontact&chronoformname=form_name&task=extra&format=raw

The option parameter is the standard Joomla! way to identify which extension to pass the URL to. The chronoformname parameter tells ChronoForms which form to pass the URL to. The task=extra parameter tells ChronoForms that this URL is a little out of the ordinary (you may have noticed that forms usually have &task=send in the onSubmit URL). When ChronoForms sees this, it will pass the URL to the Extra Code box for processing and bypass the usual OnSubmit processing. Lastly, the format=raw parameter tells Joomla! to show the resulting code without any extra formatting and without adding the template code. This means that all that is sent back is just the JSON message. Without it we'd have to dig the message out from loads of surrounding HTML we don't need.
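To make the exchange concrete, here is roughly what travels over the wire in this recipe; the address and the flag value are illustrative only. The browser posts a single json parameter to the task=extra URL, and the Extra Code box echoes back one flag:

// Request body sent by Json.Remote (url-encoded):
json={"email":"someone@example.com"}

// Response body produced by the PHP in Extra Code 1:
{"email_ok":true}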

Table and Database Operations in PHP

Packt
23 Oct 2009
8 min read
Various links that enable table operations have been put together on one sub-page of the Table view: Operations. Here is an overview of this sub-page: Table Maintenance During the lifetime of a table, it repeatedly gets modified, and so grows and shrinks. Outages may occur on the server, leaving some tables in a damaged state. Using the Operations sub-page, we can perform various operations, but not every operation is available for every table type: Check table: Scans all rows to verify that deleted links are correct. Also, a checksum is calculated to verify the integrity of the keys; we should get an 'OK' message if everything is all right. Analyze table: Analyzes and stores the key distribution; this will be used on subsequent JOIN operations to determine the order in which the tables should be joined. Repair table: Repairs any corrupted data for tables in the MyISAM and ARCHIVE engines. Note that the table might be so corrupted that we cannot even go into Table view for it! In such a case, refer to the Multi-Table Operations section for the procedure to repair it. Optimize table: This is useful when the table contains overheads. After massive deletions of rows or length changes for VARCHAR fields, lost bytes remain in the table. PhpMyAdmin warns us in various places (for example, in the Structure view) if it feels the table should be optimized. This operation is a kind of defragmentation for the table. In MySQL 4.x, this operation works only on tables in the MyISAM, Berkeley (BDB), and InnoDB engines. In MySQL 5.x, it works only on tables in the MyISAM, InnoDB, andARCHIVE engines. Flush table: This must be done when there have been lots of connection errors and the MySQL server blocks further connections. Flushing will clear some internal caches and allow normal operations to resume. Defragment table: Random insertions or deletions in an InnoDB table fragment its index. The table should be periodically defragmented for faster data retrieval. The operations are based on the underlying MySQL queries available—phpMyAdmin is only calling those queries. Changing Table Attributes Table attributes are the various properties of a table. This section discusses the settings for some of them. Table Type The first attribute we can change is called Table storage engine: This controls the whole behavior of the table: its location (on-disk or in-memory), the index structure, and whether it supports transactions and foreign keys. The drop-down list may vary depending on the table types supported by our MySQL server. Changing the table type may be a long operation if the number of rows is large. Table Comments This allows us to enter comments for the table. These comments will be shown at appropriate places (for example, in the left panel, next to the table name in the Table view and in the export file). Here is what the left panel looks like when the $cfg['ShowTooltip'] parameter is set to its default value of TRUE: The default value of $cfg['ShowTooltipAliasDB'] and $cfg['ShowTooltipAliasTB'] (FALSE) produces the behavior we have seen earlier: the true database and table names are displayed in the left panel and in the Database view for the Structure sub-page. Comments appear when the mouse pointer is moved over a table name. If one of these parameters is set toTRUE, the corresponding item (database names for DB and table names for TB) will be shown as the tooltip instead of the names. This time, the mouse-over shows the true name for the item. 
This is convenient when the real table names are not meaningful. There is another possibility for $cfg['ShowTooltipAliasTB']: the 'nested' value. Here is what happens if we use this feature: The true table name is displayed in the left panel. The table comment (for example project__) is interpreted as the project name and is displayed as such. Table Order When we Browse a table or execute a statement such as SELECT * from book, without specifying a sort order, MySQL uses the order in which the rows are physically stored. This table order can be changed with the Alter table order by dialog. We can choose any field, and the table will be reordered once on this field. We choose author_id in the example, and after we click Go, the table gets sorted on this field. Reordering is convenient if we know that we will be retrieving rows in this order most of the time. Moreover, if later we use an ORDER BY clause and the table is already physically sorted on this field, the performance should be higher. This default ordering will last as long as there are no changes in the table (no insertions, deletions, or updates). This is why phpMyAdmin shows the (singly) warning. After the sort has been done on author_id, books for author 1 will be displayed first, followed by the books for author 2, and so on. (We are talking about a default browsing of the table without explicit sorting.) We can also specify the sort order: Ascending or Descending. If we insert another row, describing a new book from author 1, and then click Browse, the book will not be displayed along with the other books for this author because the sort was done before the insertion. Table Options Other attributes that influence the table's behavior may be specified using the Table options dialog: The options are: pack_keys:Setting this attribute results in a smaller index; this can be read faster but takes more time to update. Available for the MyISAM storage engine. checksum: This makes MySQL compute a checksum for each row. This results in slower updates, but easier finding of corrupted tables. Available for MyISAM only. delay_key_write: This instructs MySQL not to write the index updates immediately but to queue them for later, which improves performance. Available for MyISAM only. auto-increment: This changes the auto-increment value. It is shown only if the table's primary key has the auto-increment attribute. Renaming, Moving, and Copying Tables The Rename operation is the easiest to understand: the table simply changes its name and stays in the same database. The Move operation (shown in the following screen) can manipulate a table in two ways: change its name and also the database in which it is stored Moving a table is not directly supported by MySQL, so phpMyAdmin has to create the table in the target database, copy the data, and then finally drop the source table. The Copy operation leaves the original table intact and copies its structure or data (or both) to another table, possibly in another database. Here, the book-copy table will be an exact copy of the book source table. After the copy, we will stay in the Table view for the book table unless we selected Switch to copied table. The Structure only copy is done to create a test table with the same structure. Appending Data to a Table The Copy dialog may also be used to append (add) data from one table to another. Both tables must have the same structure. 
This operation is achieved by entering the table to which we want to copy the data of the current table and choosing Data only. For example, we would want to append data when book data comes from various sources (various publishers), is stored in more than one table, and we want to aggregate all the data to one place. For MyISAM, a similar result can be obtained by using the MERGE storage engine (which is a collection of identical MyISAM tables), but if the table is InnoDB, we need to rely on phpMyAdmin's Copy feature. Multi-Table Operations In the Database view, there is a checkbox next to each table name and a drop-down menu under the table list. This enables us to quickly choose some tables and perform an operation on all those tables at once. Here we select the book-copy and the book tables, and choose the Check operation for these tables. We could also quickly select or deselect all the checkboxes with Check All/Uncheck All. Repairing an "in use" Table The multi-table mode is the only method (unless we know the exact SQL query to type) for repairing a corrupted table. Such tables may be shown with the in use flag in the database list. Users seeking help in the support forums for phpMyAdmin often receive this tip from experienced phpMyAdmin users. Database Operations The Operations tab in the Database view gives access to a panel that enables us to perform operations on a database taken as a whole. Renaming a Database Starting with phpMyAdmin 2.6.0, a Rename database dialog is available. Although this operation is not directly supported by MySQL, phpMyAdmin does it indirectly by creating a new database, renaming each table (thus sending it to the new database), and dropping the original database. Copying a Database Since phpMyAdmin 2.6.1, it is possible to do a complete copy of a database, even if MySQL itself does not support this operation natively. Summary In this article, we covered the operations we can perform on whole tables or databases. We also took a look at table maintenance operations for table repair and optimization, changing various table attributes, table movements, including renaming and moving to another database, and multi-table operations.
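For reference, the operations described in this article map onto ordinary MySQL statements that phpMyAdmin issues behind the scenes. The following is a rough summary using the book table from the examples; exact syntax and supported options vary with the MySQL version and storage engine:

-- Table maintenance
CHECK TABLE book;
ANALYZE TABLE book;
REPAIR TABLE book;
OPTIMIZE TABLE book;
FLUSH TABLES book;

-- Changing table attributes
ALTER TABLE book ENGINE = InnoDB;
ALTER TABLE book COMMENT = 'List of books';
ALTER TABLE book ORDER BY author_id;

-- Renaming or moving, and the copy technique phpMyAdmin relies on
RENAME TABLE book TO other_db.book;
CREATE TABLE `book-copy` LIKE book;
INSERT INTO `book-copy` SELECT * FROM book;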

YUI 2.8: Rich Text Editor

Packt
16 Jul 2010
9 min read
(For more resources on YUI, see here.) Long gone are the days when we struggled to highlight a word in an e-mail message for lack of underlining or boldfacing. The rich graphic environment that the web provides has extended to anything we do on it; plain text is no longer fashionable. YUI includes a Rich Text Editor (RTE) component in two varieties, the basic YA-HOO.widget.SimpleEditor and the full YAHOO.widget.Editor. Both editors are very simple to include in a web page and they enable our visitors to enter richly formatted documents which we can easily read and use in our applications. Beyond that, the RTE is highly customizable and allows us to tailor the editor we show the user in any way we want. In this article we’ll see: What each of the two editors offers How to create either of them Ways to retrieve the text entered How to add toolbar commands   The Two Editors Nothing comes for free, features take bandwidth so the RTE component has two versions, SimpleEditor which provides the basic editing functionality and Editor which is a subclass of SimpleEditor and adds several features at a cost of close to 40% more size plus several more dependencies which we might have already loaded and might not add to the total. A look at their toolbars can help us to see the differences: The above is the standard toolbar of SimpleEditor. The toolbar allows selection of fonts, sizes and styles, select the color both for the text and the background, create lists and insert links and pictures. The full editor adds to the top toolbar sub and superscript, remove formatting, show source, undo and redo and to the bottom toolbar, text alignment, &ltHn> paragraph styles and indenting commands. The full editor requires, beyond the common dependencies for both, Button and Menu so that the regular HTML &ltselect> boxes can be replaced by a fancier one: Finally, while in the SimpleEditor, when we insert an image or a link, RTE will simply call window.prompt() to show a standard input box asking for the URL for the image or the link destination, the full editor can show a more elaborate dialog box such as the following for the Insert Image command: A simple e-mail editor It is high time we did some coding, however I hope nobody gets frustrated at how little we’ll do because, even though the RTE is quite a complex component and does wonderful things, there is amazingly little we have to do to get one up and running. This is what our page will look like: This is the HTML for the example: <form method="get" action="#" id="form1"> <div class="fieldset"><label for="to">To:</label> <input type="text" name="to" id="to"/></div> <div class="fieldset"><label for="from">From:</label> <input type="text" name="from" id="from" value="me" /></div> <div class="fieldset"><label for="subject">Subject:</label> <input type="text" name="subject" id="subject"/></div> <textarea id="msgBody" name="msgBody" rows="20" cols="75"> Lorem ipsum dolor sit amet, and so on </textarea> <input type="submit" value=" Send Message " /></form> This simple code, assisted by a little CSS would produce something pretty much like the image above, except for the editing toolbar. This is by design, RTE uses Progressive Enhancement to turn the &lttextarea> into a fully featured editing window so, if you don’t have JavaScript enabled, you’ll still be able to get your text edited, though it will be plain text. 
The form should have its method set to "post", since the body of the message might be quite long and exceed the browser limit for a "get" request, but using "get" in this demo will allow us to see in the location bar of the browser what would actually get transmitted to the server.

Our page will require the following dependencies: yahoo-dom-event.js, element-min.js, and simpleeditor-min.js, along with its CSS file, simpleeditor.css. In a <script> tag right before the closing </body> we will have:

YAHOO.util.Event.onDOMReady(function () {
  var myEditor = new YAHOO.widget.SimpleEditor('msgBody', {
    height: '300px',
    width: '740px',
    handleSubmit: true
  });
  myEditor.get('toolbar').titlebar = false;
  myEditor.render();
});

This is all the code we need to turn that <textarea> into an RTE; we simply create an instance of SimpleEditor giving the id of the <textarea> and a series of options. In this case we set the size of the editor and tell it that it should take care of submitting the data in the RTE along with the rest of the form. What the RTE does when this option is true is to set a listener for the form submission and dump the contents of the editor window back into the <textarea> so it gets submitted along with the rest of the form.

The RTE normally shows a title bar over the toolbar; we don't want this in our application and we eliminate it simply by setting the titlebar property in the toolbar configuration attribute to false. Alternatively, we could have set it to any HTML string we wanted shown in that area. Finally, we simply render the editor.

That is all we need to do; the RTE will take care of all editing chores and, when the form is about to be submitted, it will take care of sending its data along with the rest.

Filtering the data

The RTE will not send the data unfiltered; it will process the HTML in its editing area to make sure it is clean, safe, and compliant. Why would we expect our data to contain anything invalid? If all text was written from within the RTE, there would be no problem at all as the RTE won't generate anything wrong, but that is not always the case. Plenty of text will be cut from somewhere else and pasted into the RTE, and that text brings with it plenty of existing markup. To clean up the text, the RTE will consider the idiosyncrasies of a few user agents and the settings of a couple of configuration attributes.

The filterWord configuration attribute will make sure that the extra markup introduced by text pasted into the editor from MS Word does not get through. The markup configuration attribute has four possible settings:

semantic: This is the default setting; it will favor semantic tags in contrast to styling tags, for example, it will change <b> into <strong>, <i> into <em>, and <font> into <span style="font: ….

css: It will favor CSS style attributes, for example, changing <b> into <span style="font-weight:bold">.

default: It does the minimum amount of changes required for safety and compliance.

xhtml: Among other changes, it makes sure all tags are closed, such as <br />, <img />, and <input />.

The default setting (which, despite its name, is not the default value of the attribute) describes the least filtering that will be done in all cases; it will make sure tags have their matching closing tags, extra whitespace is stripped off, and the tags and attributes are in lower case. It will also drop several tags that don't fit in an HTML fragment, such as <html> or <body>, that are unsafe, such as <script> or <iframe>, or would involve actions, such as <form> or form input elements.
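If the defaults don't suit, both of these attributes can simply be set when the editor is created. A small sketch follows; the option values are illustrative, not recommendations:

YAHOO.util.Event.onDOMReady(function () {
  var myEditor = new YAHOO.widget.SimpleEditor('msgBody', {
    height: '300px',
    width: '740px',
    handleSubmit: true,
    filterWord: true,  // strip the extra markup MS Word pastes bring along
    markup: 'xhtml'    // favor closed, XHTML-style tags in the cleaned output
  });
  myEditor.render();
});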
The list of invalid tags is stored in the property .invalidHTML and can be freely changed.

More validation

We can further validate what the RTE sends in the form; instead of letting the RTE handle the data submission automatically, we can handle it ourselves by simply changing the previous code to this:

YAHOO.util.Event.onDOMReady(function () {
  var Dom = YAHOO.util.Dom,
    Event = YAHOO.util.Event;
  var myEditor = new YAHOO.widget.SimpleEditor('msgBody', {
    height: '300px',
    width: '740px'
  });
  myEditor.get('toolbar').titlebar = false;
  myEditor.render();

  Event.on('form1', 'submit', function (ev) {
    var html = myEditor.getEditorHTML();
    html = myEditor.cleanHTML(html);
    if (html.search(/<strong/gi) > -1) {
      alert("Don't shout at me!");
      Event.stopEvent(ev);
    }
    this.msgBody.innerHTML = html;
  });
});

We have dropped the handleSubmit configuration attribute when creating the SimpleEditor instance as we want to handle it ourselves. We listen to the submit event for the form and in the listener we read the actual rich text from the RTE via .getEditorHTML(). We may or may not want to clean it; in this example, we do so by calling .cleanHTML(). In fact, if we call .cleanHTML() with no arguments we will get the cleaned-up rich text; we don't need to call .getEditorHTML() first. Then we can do any validation that we want on that string and any of the other values. We use the Event utility's .stopEvent() method to prevent the form from submitting if an error is found, but if everything checks out fine, we save the HTML we recovered from the RTE into the <textarea>, just as if we had the handleSubmit configuration attribute set, except that now we actually control what goes there.

In the case of text in boldface, it would seem easy to filter it out by simply adding this line:

myEditor.invalidHTML.strong = true;

However, this erases the tag and all the content in between, probably not what we wanted. Likewise, we could have set .invalidHTML.em to true to drop italics, but other elements are not so easy. The RTE replaces a <u> (long deprecated) by <span style="text-decoration:underline;">, which is impossible to drop in this way. Besides, these replacements depend on the setting of the markup configuration attribute.

This example has also served to show how data can be read from the RTE and cleaned if desired. Data can also be sent to the RTE by using .setEditorHTML(), but not before the editorContentLoaded event is fired, as the RTE would not be ready to receive it. In the example, we wanted to manipulate the editor contents, so we read it and saved it back into the <textarea> in separate steps; otherwise, we could have used the .saveHTML() method to send the data back to the <textarea> directly. In fact, this is what the RTE itself does when we set handleSubmit to true.
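As a final sketch of that last point, pre-loading content into the editor looks roughly like this; the subscription through .on() and the sample HTML are assumptions based on the event name given above:

myEditor.on('editorContentLoaded', function () {
  // the editing area is ready, so it is now safe to push content into it
  myEditor.setEditorHTML('<p>Dear subscriber,</p>');
});
myEditor.render();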

Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging

Packt
20 Feb 2015
26 min read
In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components and we will cover the following topics in detail: Aggregators File exchange over FTP/FTPS Social integration Enterprise messaging (For more resources related to this topic, see here.) Aggregators The aggregators are the opposite of splitters - they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start by a real life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of the articles arrives much sooner than the associated images - but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges; partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and also there should be a way to identify the completion of a message. Aggregators are there to handle all of these aspects - some of the relevant concepts that are used are MessageStore, CorrelationStrategy, and ReleaseStrategy. Let's start with a code sample and then we will dive down to explore each of these concepts in detail: <int:aggregator   input-channel="fetchedFeedChannelForAggregatior"   output-channel="aggregatedFeedChannel"   ref="aggregatorSoFeedBean"   method="aggregateAndPublish"   release-strategy="sofeedCompletionStrategyBean"   release-strategy-method="checkCompleteness"   correlation-strategy="soFeedCorrelationStrategyBean"   correlation-strategy-method="groupFeedsBasedOnCategory"   message-store="feedsMySqlStore "   expire-groups-upon-completion="true">   <int:poller fixed-rate="1000"></int:poller> </int:aggregator> Hmm, a pretty big declaration! And why not—a lot of things combine together to act as an aggregator. Let's quickly glance at all the tags used: int:aggregator: This is used to specify the Spring framework's namespace for the aggregator. input-channel: This is the channel from which messages will be consumed. output-channel: This is the channel to which messages will be dropped after aggregation. ref: This is used to specify the bean having the method that is called on the release of messages. method: This is used to specify the method that is invoked when messages are released. release-strategy: This is used to specify the bean having the method that decides whether aggregation is complete or not. release-strategy-method: This is the method having the logic to check for completeness of the message. correlation-strategy: This is used to specify the bean having the method to correlate the messages. correlation-strategy-method: This is the method having the actual logic to correlate the messages. message-store: This is used to specify the message store, where messages are temporarily stored until they have been correlated and are ready to release. This can be in memory (which is default) or can be a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash. 
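The aggregator definition above refers to two channels that are not shown in the snippet. As a rough sketch (channel names taken from the sample, queue capacity arbitrary), the polled input channel would typically be queue-backed, while the output channel can be a plain direct channel:

<int:channel id="fetchedFeedChannelForAggregatior">
    <int:queue capacity="100"/>
</int:channel>

<int:channel id="aggregatedFeedChannel"/>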
Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of bean (referred by ref) should be invoked when messages have been aggregated as per CorrelationStrategy and released after fulfilment of ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain: public class SoFeedAggregator {   public List<SyndEntry> aggregateAndPublish(List<SyndEntry>     messages) {     //Do some pre-processing before passing on to next channel     return messages;   } } Let's get to the details of the three most important components that complete the aggregator. Correlation strategy Aggregator needs to group the messages—but how will it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID. All messages having the same value for the CORRELATION_ID header will be put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy or can extend Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see our correlation strategy in the feeds example: public class CorrelationStrategy {   public Object groupFeedsBasedOnCategory(Message<?> message) {     if(message!=null){       SyndEntry entry = (SyndEntry)message.getPayload();       List<SyndCategoryImpl> categories=entry.getCategories();       if(categories!=null&&categories.size()>0){         for (SyndCategoryImpl category: categories) {           //for simplicity, lets consider the first category           return category.getName();         }       }     }     return null;   } } So how are we correlating our messages? We are correlating the feeds based on the category name. The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map such as defining hashcode() and equals(). The return value must not be null. Alternatively, if we would have wanted to implement it by extending framework support, then it would have looked like this: public class CorrelationStrategy implements CorrelationStrategy {   public Object getCorrelationKey(Message<?> message) {     if(message!=null){       …             return category.getName();           }         }       }       return null;     }   } } Release strategy We have been grouping messages based on correlation strategy—but when will we release it for the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy or we can extend framework support. Here is the example of using the Java POJO class: public class CompletionStrategy {   public boolean checkCompleteness(List<SyndEntry> messages) {     if(messages!=null){       if(messages.size()>2){         return true;       }     }     return false;   } } The argument of a message must be of type collection and it must return a Boolean indication whether to release the accumulated messages or not. For simplicity, we have just checked for the number of messages from the same category—if it's greater than two, we release the messages. Message store Until an aggregated message fulfils the release criteria, the aggregator needs to store them temporarily. 
This is where message stores come into the picture. Message stores can be of two types: in-memory and persistence stores. The default is in-memory, and if this is to be used, then there is no need to declare the attribute at all. If a persistent message store needs to be used, then it must be declared and its reference given to the message-store attribute. A MySQL message store can be declared and referenced as follows:

<bean id="feedsMySqlStore"
  class="org.springframework.integration.jdbc.JdbcMessageStore">
  <property name="dataSource" ref="feedsSqlDataSource"/>
</bean>

The data source is the Spring framework's standard JDBC data source. The greatest advantage of using a persistence store is recoverability: if the system recovers from a crash, the messages being aggregated will not be lost. Another advantage is capacity: memory can hold only a limited number of messages for aggregation, whereas a database offers far more room.

FTP/FTPS

FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communications consist of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server, that is, which server will it connect to? If you have access to any public or hosted FTP server, use it. Otherwise, the easiest way to try out the example in this section is to set up a local instance of an FTP server. FTP setup is out of the scope of this article.

Prerequisites

To use the Spring Integration components for FTP/FTPS, we need to add the FTP namespace to our configuration file and the Maven dependency entry for the FTP module to the pom.xml file. With those entries in place, a session factory is defined as follows:

<bean id="ftpClientSessionFactory"
  class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
  <property name="host" value="localhost"/>
  <property name="port" value="21"/>
  <property name="username" value="testuser"/>
  <property name="password" value="testuser"/>
</bean>

The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: the host that is running the FTP server, the port on which it is running, the username, and the password for the server. A session pool for the factory is maintained and an instance is returned when required. Spring takes care of validating that a stale session is never returned.

Downloading files from the FTP server

Inbound adapters can be used to read files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing:

<int-ftp:inbound-channel-adapter
  channel="ftpOutputChannel"
  session-factory="ftpClientSessionFactory"
  remote-directory="/"
  local-directory="C:\Chandan\Projects\siexample\ftp\ftplocalfolder"
  auto-create-local-directory="true"
  delete-remote-files="true"
  filename-pattern="*.txt"
  local-filename-generator-expression="#this.toLowerCase() + '.trns'">
  <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>

Let's quickly go through the tags used in this code:
int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter.
channel: This is the channel on which the downloaded files will be put as a message.
session-factory: This is a factory instance that encapsulates details for connecting to a server. remote-directory: This is the directory on the server where the adapter should listen for the new arrival of files. local-directory: This is the local directory where the downloaded files should be dumped. auto-create-local-directory: If enabled, this will create the local directory structure if it's missing. delete-remote-files: If enabled, this will delete the files on the remote directory after it has been downloaded successfully. This will help in avoiding duplicate processing. filename-pattern: This can be used as a filter, but only files matching the specified pattern will be downloaded. local-filename-generator-expression: This can be used to generate a local filename. An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At this point, it will initiate the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local- filename-generator-expression. Incomplete files On the remote server, there could be files that are still in the process of being written. Typically, there the extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use the filename pattern that will copy only those files that have been written completely. Uploading files to the FTP server Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it inside the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. The following configuration sets up a FTP adapter that can upload files in the specified directory:   <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"     remote-directory="/uploadfolder"     session-factory="ftpClientSessionFactory"     auto-create-directory="true">   </int-ftp:outbound-channel-adapter> Here is a brief description of the tags used: int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter. channel: This is the name of the channel whose payload will be written to the remote server. remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have appropriate permission. session-factory: This encapsulates details for connecting to the FTP server. auto-create-directory: If enabled, this will automatically create a remote directory if it's missing, and the given user should have sufficient permission. The payload on the channel need not necessarily be a file type; it can be one of the following: java.io.File: A Java file object byte[]: This is a byte array that represents the file contents java.lang.String: This is the text that represents the file contents Avoiding partially written files Files on the remote server must be made available only when they have been written completely and not when they are still partial. Spring uses a mechanism of writing the files to a temporary location and its availability is published only when it has been completely written. 
By default, the suffix is written, but it can be changed using the temporary-file-suffix property. This can be completely disabled by setting use-temporary-file- name to false. FTP outbound gateway Gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what is the input and output in the case of FTP? It issues commands to the FTP server and returns the result of the command. The following command will issue an ls command with the option –l to the server. The result is a list of string objects containing the filename of each file that will be put on the reply- channel. The code is as follows: <int-ftp:outbound-gateway id="ftpGateway"     session-factory="ftpClientSessionFactory"     request-channel="commandInChannel"     command="ls"     command-options="-1"     reply-channel="commandOutChannel"/> The tags are pretty simple: int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway session-factory: This is the wrapper for details needed to connect to the FTP server command: This is the command to be issued command-options: This is the option for the command reply-channel: This is the response of the command that is put on this channel FTPS support For FTPS support, all that is needed is to change the factory class—an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration: <bean id="ftpSClientFactory"   class="org.springframework.integration.ftp.session.   DefaultFtpsSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="22"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and open the appropriate port. Social integration Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces such as e-mails, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. Prior to Version 2.1, Spring Integration was dependent on the Twitter4J API for Twitter support, but now it leverages Spring's social module for Twitter integration. Spring Integration provides an interface for receiving and sending tweets as well as searching and publishing the search results in messages. Twitter uses oauth for authentication purposes. An app must be registered before we start Twitter development on it. Prerequisites Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example: Twitter account setup: A Twitter account is needed. Perform the following steps to get the keys that will allow the user to use Twitter using the API: Visit https://apps.twitter.com/. Sign in to your account. Click on Create New App. Enter the details such as Application name, Description, Website, and so on. All fields are self-explanatory and appropriate help has also been provided. The value for the field Website need not be a valid one—put an arbitrary website name in the correct format. Click on the Create your application button. 
If the application is created successfully, a confirmation message will be shown and the Application Management page will appear, as shown here: Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings, as shown in the following screenshot: You need additional access tokens so that applications can use Twitter using APIs. Click on Create my access token; it takes a while to generate these tokens. Once it is generated, note down the value of Access Token and Access Token Secret. Go to the Permissions tab and provide permission to Read, Write and Access direct messages. After performing all these steps, and with the required keys and access token, we are ready to use Twitter. Let's store these in the twitterauth.properties property file: twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh The next step towards Twitter integration is the creation of a Twitter template. This is similar to the datasource or connection factory for databases, JMS, and so on. It encapsulates details to connect to a social platform. Here is the code snippet: <context:property-placeholder location="classpath: twitterauth.properties "/> <bean id="twitterTemplate" class=" org.springframework.social.   twitter.api.impl.TwitterTemplate ">   <constructor-arg value="${twitter.oauth.apiKey}"/>   <constructor-arg value="${twitter.oauth.apiSecret}"/>   <constructor-arg value="${twitter.oauth.accessToken}"/>   <constructor-arg value="${twitter.oauth.accessTokenSecret}"/> </bean> As I mentioned, the template encapsulates all the values. Here is the order of the arguments: apiKey apiSecret accessToken accessTokenSecret With all the setup in place, let's now do some real work: Namespace support can be added by using the following code snippet: <beans   twitter-template="twitterTemplate"   channel="twitterChannel"> </int-twitter:inbound-channel-adapter> The components in this code are covered in the following bullet points: int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter. twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake; it should be replaced with real connection parameters. channel: Messages are dumped on this channel. These adapters are further used for other applications, such as for searching messages, retrieving direct messages, and retrieving tweets that mention your account, and so on. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are almost similar to what have been discussed previously. Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows: <int-twitter:search-inbound-channel-adapter id="testSearch"   twitter-template="twitterTemplate"   query="#springintegration"   channel="twitterSearchChannel"> </int-twitter:search-inbound-channel-adapter> Retrieving Direct Messages: This adapter allows us to receive the direct message for the account in use (account configured in Twitter template). 
The code is as follows: <int-twitter:dm-inbound-channel-adapter   id="testdirectMessage"   twitter-template="twiterTemplate"   channel="twitterDirectMessageChannel"> </int-twitter:dm-inbound-channel-adapter> Retrieving Mention Messages: This adapter allows us to receive messages that mention the configured account via the @user tag (account configured in the Twitter template). The code is as follows: <int-twitter:mentions-inbound-channel-adapter   id="testmentionMessage"   twitter-template="twiterTemplate"   channel="twitterMentionMessageChannel"> </int-twitter:mentions-inbound-channel-adapter> Sending tweets Twitter exposes outbound adapters to send messages. Here is a sample code:   <int-twitter:outbound-channel-adapter     twitter-template="twitterTemplate"     channel="twitterSendMessageChannel"/> Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to an inbound gateway, the outbound gateway provides support for sending direct messages. Here is a simple example of an outbound adapter: <int-twitter:dm-outbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterSendDirectMessage"/> Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically, or by using enrichers or SpEL. For example, it can be programmatically added as follows: Message message = MessageBuilder.withPayload("Chandan")   .setHeader(TwitterHeaders.DM_TARGET_USER_ID,   "test_id").build(); Alternatively, it can be populated by using a header enricher, as follows: <int:header-enricher input-channel="twitterIn"   output-channel="twitterOut">   <int:header name="twitter_dmTargetUserId" value=" test_id "/> </int:header-enricher> Twitter search outbound gateway As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:   <int-twitter:search-outbound-gateway id="twitterSearch"     request-channel="searchQueryChannel"     twitter-template="twitterTemplate"     search-args-expression="#springintegration"     reply-channel="searchQueryResultChannel"/> And here is what the tags covered in this code mean: int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway request-channel: This is the channel that is used to send search requests to this gateway twitter-template: This is the Twitter template reference search-args-expression: This is used as arguments for the search reply-channel: This is the channel on which searched results are populated This gives us enough to get started with the social integration aspects of the spring framework. Enterprise messaging Enterprise landscape is incomplete without JMS—it is one of the most commonly used mediums of enterprise integration. Spring provides very good support for this. Spring Integration builds over that support and provides adapter and gateways to receive and consume messages from many middleware brokers such as ActiveMQ, RabbitMQ, Rediss, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail. 
A basic understanding of the JMS mechanism and its concepts is expected. It is not possible to cover even the introduction of JMS here. Let's start with the prerequisites. Prerequisites To use Spring Integration messaging components, namespaces, and relevant Maven the following dependency should be added: Namespace support can be added by using the following code snippet: > Maven entry can be provided using the following code snippet: <dependency>   <groupId>org.springframework.integration</groupId>   <artifactId>spring-integration-jms</artifactId>   <version>${spring.integration.version}</version> </dependency> After adding these two dependencies, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ. Add the following in pom.xml:   <dependency>     <groupId>org.apache.activemq</groupId>     <artifactId>activemq-core</artifactId>     <version>${activemq.version}</version>     <exclusions>       <exclusion>         <artifactId>spring-context</artifactId>         <groupId>org.springframework</groupId>       </exclusion>     </exclusions>   </dependency>   <dependency>     <groupId>org.springframework</groupId>     <artifactId>spring-jms</artifactId>     <version>${spring.version}</version>     <scope>compile</scope>   </dependency> After this, we are ready to create a connection factory and JMS queue that will be used by the adapters to communicate. First, create a session factory. As you will notice, this is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ: <bean id="connectionFactory" class="org.springframework.   jms.connection.CachingConnectionFactory">   <property name="targetConnectionFactory">     <bean class="org.apache.activemq.ActiveMQConnectionFactory">       <property name="brokerURL" value="vm://localhost"/>     </bean>   </property> </bean> Let's create a queue that can be used to retrieve and put messages: <bean   id="feedInputQueue"   class="org.apache.activemq.command.ActiveMQQueue">   <constructor-arg value="queue.input"/> </bean> Now, we are ready to send and retrieve messages from the queue. Let's look into each message one by one. Receiving messages – the inbound adapter Spring Integration provides two ways of receiving messages: polling and event listener. Both of them are based on the underlying Spring framework's comprehensive support for JMS. JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts the message on the configured channel if it finds one. On the other hand, in the case of the event-driven adapter, it's the responsibility of the server to notify the configured adapter. 
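To make the distinction between the two styles concrete, here is a rough sketch in plain Spring JMS (not Spring Integration) of what each approach boils down to; the connection factory and queue name are the ones defined earlier, and everything else is illustrative:

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ReceiveStylesSketch {

  // Polling style: ask the broker for a message, wait briefly, repeat.
  static Message pollOnce(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    template.setReceiveTimeout(1000);
    return template.receive("queue.input"); // null if nothing arrived in time
  }

  // Event-driven style: the container calls the listener when a message arrives.
  static DefaultMessageListenerContainer listen(ConnectionFactory connectionFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("queue.input");
    container.setMessageListener((MessageListener) message -> {
      // handle the incoming JMS message here
    });
    container.afterPropertiesSet();
    container.start();
    return container;
  }
}

Spring Integration hides both of these mechanisms behind the adapters described next.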
The polling adapter Let's start with a code sample: <int-jms:inbound-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel">   <int:poller fixed-rate="1000" /> </int-jms:inbound-channel-adapter> This code snippet contains the following components: int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter connection-factory: This is the encapsulation for the underlying JMS provider setup, such as ActiveMQ destination: This is the JMS queue where the adapter is listening for incoming messages channel: This is the channel on which incoming messages should be put There is a poller element, so it's obvious that it is a polling-based adapter. It can be configured in one of two ways: by providing a JMS template or using a connection factory along with a destination. I have used the latter approach. The preceding adapter has a polling queue mentioned in the destination and once it gets any message, it puts the message on the channel configured in the channel attribute. The event-driven adapter Similar to polling adapters, event-driven adapters also need a reference either to an implementation of the interface AbstractMessageListenerContainer or need a connection factory and destination. Again, I will use the latter approach. Here is a sample configuration: <int-jms:message-driven-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel"/> There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, which puts it on the configured channel. Sending messages – the outbound adapter Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue. To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler. This is is an implementation of MessageHandler. Outbound adapters should be configured with either JmsTemplate or with a connection factory and destination queue. Keeping in sync with the preceding examples, we will take the latter approach, as follows: <int-jms:outbound-channel-adapter   connection-factory="connectionFactory"   channel="jmsChannel"   destination="feedInputQueue"/> This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination. Gateway Gateway provides a request/reply behavior instead of a one-way send or receive. For example, after sending a message, we might expect a reply or we may want to send an acknowledgement after receiving a message. The inbound gateway Inbound gateways provide an alternative to inbound adapters when request-reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to Spring Message, and puts it on the channel. Here is a sample code: <int-jms:inbound-gateway   request-destination="feedInputQueue"   request-channel="jmsProcessedChannel"/> However, this is what an inbound adapter does—even the configuration is similar, except the namespace. So, what is the difference? The difference lies in replying back to the reply destination. Once the message is put on the channel, it will be propagated down the line and at some stage a reply would be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue. 
Then, where is the reply destination? The reply destination is decided in one of the following ways:

If the original message has the JMSReplyTo property, it takes the highest precedence.
Otherwise, the inbound gateway looks for a configured default reply destination, which can be specified either as a name or as a direct reference; for a direct reference, the default-reply-destination attribute should be used.

The gateway will throw an exception if it finds neither of the preceding two.

The outbound gateway

Outbound gateways should be used in scenarios where a reply is expected for the sent messages. Let's start with an example:

<int-jms:outbound-gateway
  request-channel="jmsChannel"
  request-destination="feedInputQueue"
  reply-channel="jmsProcessedChannel" />

The preceding configuration will send messages to request-destination. When an acknowledgement is received, it can be fetched from the configured reply-destination. If reply-destination has not been configured, a JMS TemporaryQueue will be created.

Summary

In this article, we covered the out-of-the-box components provided by the Spring Integration framework, such as the aggregator. It also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be it file-based, HTTP, JMS, or any other integration mechanism.

Resources for Article:

Further resources on this subject: Modernizing Our Spring Boot App [article] Home Security by Beaglebone [article] Integrating With Other Frameworks [article]

Themes and Templates with Apache Struts 2

Packt
20 Oct 2009
10 min read
Extracting the templates The first step to modifying an existing theme or creating our own is to extract the templates from the Struts 2 distribution. This actually has the advantageous performance side effect of keeping the templates in the file system (as opposed to in the library file), which allows FreeMarker to cache the templates properly. Caching the templates provides a performance boost and involves no work other than extracting the templates. The issue with caching templates contained in library files, however, will be fixed. If we examine the Struts 2 core JAR file, we'll see a /template folder. We just need to put that in our application's classpath. The best way to do this depends on your build and deploy environment. For example, if we're using Eclipse, the easiest thing to do is put the /template folder in our source folder; Eclipse should deploy them automatically. A maze of twisty little passages Right now, consider a form having only text fields and a submit button. We'll start by looking at the template for the text field tag. For the most part, Struts 2 custom tags are named similarly to the template file that defines it. As we're using the "xhtml" theme, we'll look in our newly-created /template/xhtml folder. Templates are found in a folder with the same name as the theme. We find the <s:textfield> template in /template/xhtml/text.ftl file. However, when we open it, we are disappointed to find it implemented by the following files—controlheader.ftl file retrieved from the current theme's folder, text.ftl from the simple theme, and controlfooter.ftl file from "xhtml" theme. This is curious, but satisfactory for now. We'll assume what we need is in the controlheader.ftl file. However, upon opening that, we discover we actually need to look in controlheader-core.ftl file. Opening that file shows us the table rows that we're looking for. Going walkabout through source code, both Java and FreeMarker, can be frustrating, but ultimately educational. Developing the habit of looking at framework source can lead to a greater mastery of that framework. It can be frustrating at times, but is a critical skill. Even without a strong understanding of the FreeMarker template language, we can get a pretty good idea of what needs to be done by looking at the controlheader-core.ftl file. We notice that the template sets a convenience variable (hasFieldErrors) when the field being rendered has an error. We'll use that variable to control the style of the table row and cells of our text fields. This is how the class of the text field label is being set. Creating our theme To keep the template clean for the purpose of education, we'll go ahead and create a new theme. (Most of the things will be the same, but we'll strip out some unused code in the templates we modify.) While we have the possibility of extending an existing theme (see the Struts 2 documentation for details), we'll just create a new theme called s2wad by copying the xhtml templates into a folder called s2wad. We can now use the new theme by setting the theme in our <s:form> tag by specifying a theme attribute: <s:form theme="s2wad" ... etc ...> Subsequent form tags will now use our new s2wad theme. Because we decided not to extend the existing "xhtml" theme, as we have a lot of tags with the "xhtml" string hard coded inside. In theory, it probably wasn't necessary to hard code the theme into the templates. 
However, we're going to modify only a few tags for the time being, while the remaining tags will remain hard coded (although incorrectly). In an actual project, we'd either extend an existing theme or spend more time cleaning up the theme we've created (along with the "xhtml" theme, and provide corrective patches back to the Struts 2 project). First, we'll modify controlheader.ftl to use the theme parameter to load the appropriate controlheader-core.ftl file. Arguably, this is how the template should be implemented anyway, even though we could hard code in the new s2wad theme. Next, we'll start on controlheader-core.ftl. As our site will never use the top label position, we'll remove that. Doing this isn't necessary, but will keep it cleaner for our use. The controlheader-core.ftl template creates a table row for each field error for the field being rendered, and creates the table row containing the field label and input field itself. We want to add a class to both the table row and table cells containing the field label and input field. By adding a class to both, the row itself and each of the two table cells, we maximize our ability to apply CSS styles. Even if we end up styling only one or the other, it's convenient to have the option. We'll also strip out the FreeMarker code that puts the required indicator to the left of the label, once again, largely to keep things clean. Projects will normally have a unified look and feel. It's reasonable to remove unused functionality, and if we're already going through the trouble to create a new theme, then we might as well do that. We're also going to clean up the template a little bit by consolidating how we handle the presence of field errors. Instead of putting several FreeMarker <#if> directives throughout the template, we'll create some HTML attributes at the top of the template, and use them in the table row and table cells later on. Finally, we'll indent the template file to make it easier to read. This may not always be a viable technique in production, as the extra spaces may be rendered improperly, (particularly across browsers), possibly depending on what we end up putting in the tag. For now, imagine that we're using the default "required" indicator, an asterisk, but it's conceivable we might want to use something like an image. Whitespace is something to be aware of when dealing with HTML. 
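One thing worth noting before looking at the modified template: the <#t/>, <#rt/>, and <#lt/> markers that appear throughout it are FreeMarker's white-space trimming directives, which keep the indentation we add for readability out of the rendered HTML. A tiny illustrative fragment (the variable names here are made up for the example):

<#-- <#t/> strips both the leading and trailing white-space of this line in the output,
     <#rt/> strips only the trailing white-space, and <#lt/> only the leading white-space -->
<label for="${fieldId?html}"><#rt/>
    ${fieldLabel?html}<#t/>
</label><#lt/>

Without these directives, the extra spaces and line breaks used to indent the template would be copied verbatim into the generated markup.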
Our modified controlheader-core.ftl file now looks like this: <#assign hasFieldErrors = parameters.name?exists && fieldErrors?exists && fieldErrors[parameters.name]?exists/><#if hasFieldErrors> <#assign labelClass = "class='errorLabel'"/> <#assign trClass = "class='hasErrors'"/> <#assign tdClass = "class='tdLabel hasErrors'"/><#else> <#assign labelClass = "class='label'"/> <#assign trClass = ""/> <#assign tdClass = "class='tdLabel'"/></#if><#if hasFieldErrors> <#list fieldErrors[parameters.name] as error> <tr errorFor="${parameters.id}" class="hasErrors"> <td>&nbsp;</td> <td class="hasErrors"><#rt/> <span class="errorMessage">${error?html}</span><#t/> </td><#lt/> </tr> </#list></#if><tr ${trClass}> <td ${tdClass}> <#if parameters.label?exists> <label <#t/> <#if parameters.id?exists> for="${parameters.id?html}" <#t/> </#if> ${labelClass} ><#t/> ${parameters.label?html}<#t/> <#if parameters.required?default(false)> <span class="required">*</span><#t/> </#if> :<#t/> <#include "/${parameters.templateDir}/s2e2e/tooltip.ftl" /> </label><#t/> </#if> </td><#lt/> It's significantly different when compared to the controlheader-core.ftl file of the "xhtml" theme. However, it has the same functionality for our application, with the addition of the new hasErrors class applied to both the table row and cells for the recipe's name and description fields. We've also slightly modified where the field errors are displayed (it is no longer centered around the entire input field row, but directly above the field itself). We'll also modify the controlheader.ftl template to apply the hasErrors style to the table cell containing the input field. This template is much simpler and includes only our new hasErrors class and the original align code. Note that we can use the variable hasFieldErrors, which is defined in controlheader-core.ftl. This is a valuable technique, but has the potential to lead to spaghetti code. It would probably be better to define it in the controlheader.ftl template. <#include "/${parameters.templateDir}/${parameters.theme}/controlheader-core.ftl" /><td<#if hasFieldErrors>class="hasErrors"<#t/></#if><#if parameters.align?exists>align="${parameters.align?html}"<#t/></#if>><#t/> We'll create a style for the table cells with the hasErrors class, setting the background to be just a little red. Our new template sets the hasErrors class on both the label and the input field table cells, and we've collapsed our table borders, so this will create a table row with a light red background. .hasErrors td {background: #fdd;} Now, a missing Name or Description will give us a more noticeable error, as shown in the following screenshot: This is fairly a simple example. However, it does show that it's pretty straightforward to begin customizing our own templates to match the requirements of the application. By encapsulating some of the view layer inside the form tags, our JSP files are kept significantly cleaner. Other uses of templates Anything we can do in a typical JSP page can be done in our templates. We don't have to use Struts 2's template support. We can do many similar things in a JSP custom tag file (or a Java-based tag), but we'd lose some of the functionality that's already been built. Some potential uses of templates might include the addition of accessibility features across an entire site, allowing them to be encapsulated within concise JSP notation. 
Enhanced JavaScript functionality could be added to all fields, or only specific fields of a form, including things such as detailed help or informational pop ups. This overlaps somewhat with the existing tooltip support, we might have custom usage requirements or our own framework that we need to support. Struts 2 now also ships with a Java-based theme that avoids the use of FreeMarker tags. These tags provide a noticeable speed benefit. However, only a few basic tags are supported at this time. It's bundled as a plug-in, which can be used as a launching point for our own Java-based tags. Summary Themes and templates provide another means of encapsulating functionality and/or appearance across an entire application. The use of existing themes can be a great benefit, particularly when doing early prototyping of a site, and are often sufficient for the finished product. Dealing effectively with templates is largely a matter of digging through the existing template source. It also includes determining what our particular needs are, and modifying or creating our own themes, adding and removing functionality as appropriate. While this article only takes a brief look at templates, it covers the basics and opens the door to implementing any enhancements we may require. If you have read this article you may be interested to view : Exceptions and Logging in Apache Struts 2 Documenting our Application in Apache Struts 2 (part 1) Documenting our Application in Apache Struts 2 (part 2)

FreeSWITCH 1.0.6: SIP and the User Directory

Packt
05 Aug 2010
7 min read
(For more resources on Telephony, see here.) Understanding the FreeSWITCH user directory The FreeSWITCH user directory is based on a centralized XML document, comprised of one or more <domain> elements. Each <domain> can contain either <user> elements or <groups> elements. A <groups> element contains one or more <groups> elements, each of which contains one or more <user> elements. A small, simple example would look like the following: <section name="directory"> <domain name="example.com"> <groups> <group name="default"> <user id="1001"> <params> <param name="password" value="1234"/> </params> </user> </group> </groups> </domain></section> Some more basic configurations may not have a need to organize the users in groups so it is possible to omit the <groups> element completely, and just insert several <user> elements into the top <domain> element. The important thing is that each user@domain derived from this directory is available to all components in the system—it's a single centralized directory for storing all of your user information. If you register as a user with a SIP phone or if you try to leave a voicemail message for a user, FreeSWITCH looks in the same place for user data. This is important because it limits duplication of data, and makes it more efficient than it would be if each component kept track of its users separately. This system should work well for a small system with a few users in it, but what about a large system with thousands of users? What if a user wants to connect his existing database to FreeSWITCH to provide the user directory? Well, using mod_xml_curl (download here-ch:1,ch:3), we can create a web service that gets the request for the entries in the user directory, in the same way a web page sends the results of a form submission. In turn, that web service can query an existing database of users formatted any way possible, and construct the XML records in the format that FreeSWITCH registry expects. mod_xml_curl returns the data to the module requesting the lookup. This means that instant, seamless integration with your existing setup is possible; your data is still kept in its original, central location. The user directory can be accessed by any subsystem within FreeSWITCH. This includes modules, scripts, and the FSAPI interface among others. In this article, we are going to learn how the Sofia SIP module employs the user directory to authenticate your softphone or hardware SIP phone. If you are a developer you may appreciate some nifty things you can do with your user directory, such as adding a <variables> element to either the <domain>, the <groups>, or the <user> element. In this element you can set many <variable> elements, allowing you to set channel variables that will apply to every call made by a particular authenticated user. This can come in very handy in the Dialplan because it allows you to make user-specific routing decisions. It is also possible to define IP address ranges using CIDR notation, which can be used to authenticate particular users based on what remote network address they connect from. This removes the need for a login and password, if your user always logs in from the same remote IP address. The directory is implemented in pure XML. This is advantageous for several reasons, not the least of which is the "X" in XML: Extensible. Since XML is, by definition, extensible, the directory structure is also extensible. If we need to add a new element into the directory, we can do so simply by adding to the existing XML structure. 
Authentication versus authorizationAuthentication is the process of identifying a user. Authorization is the process of determining the level of access of a user. Authentication answers the question, "Is this person really who he says he is?" Authorization answers the question, "What is this person allowed to do here?" When you see expressions such as "IP Auth" and "Digest Auth", remember that they are referring to the two primary ways of identifying (that is, authenticating) a user. IP authorization is based upon the user's IP address. Digest authentication is based upon the user supplying a username and password. SIP (and FreeSWITCH) can use either method. Visit http://en.wikipedia.org/wiki/Digest_access_authentication for a discussion of how digest authentication works Working with the FreeSWITCH user directory The default configuration has one domain with a directory of 20 users. Users can be added or removed very easily. There is no set limit to how many users can be defined on the system. The list of users is collectively referred to as the directory. Users can belong to one or more groups. Finally, all the users belong to a single domain. By default, the domain is the IP address of the FreeSWITCH server. In the following sections we will discuss these topics: User features Adding a user Testing voicemail Groups of users User features Let's begin by looking at the XML file that defines a user. Locate the file conf/directory/default/1000.xml and open it in an editor. You should see a file like the following: <include> <user id="1000"> <params> <param name="password" value="$${default_password}"/> <param name="vm-password" value="1000"/> </params> <variables> <variable name="toll_allow" value="domestic,international,local"/> <variable name="accountcode" value="1000"/> <variable name="user_context" value="default"/> <variable name="effective_caller_id_name" value="Extension 1000"/> <variable name="effective_caller_id_number" value="1000"/> <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/> <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/> <variable name="callgroup" value="techsupport"/> </variables> </user></include> The XML structure of a user is simple. Within the <include> tags the user has the following: The user element with the id attribute The params element, wherein parameters are specified The variables element, wherein channel variables are defined Even before we know what much of the specifics mean, we can glean from this file that the user id is 1000 and that there is both a password and a vm-password. In this case, the password parameter refers to the SIP authorization password. The expression $${default_password} refers to the value contained in the global variable default_password which is defined in the conf/vars.xml file. If you surmised that vm-password means "voicemail password" then you are correct. This value refers to the digits that the user needs to dial when logging in to check his or her voicemail messages. The value of id is used both as the authorization username and the SIP username. Additionally, there are a number of channel variables that are defined for this user. Most of these are directly related to the default Dialplan. 
The following table lists each variable and what it is used for:

toll_allow: Specifies which types of calls this user can make
accountcode: Arbitrary value that shows up in CDR data
user_context: The Dialplan context that is used when this person makes a phone call
effective_caller_id_name: Caller ID name displayed on the called party's phone when calling another registered user
effective_caller_id_number: Caller ID number displayed on the called party's phone when calling another registered user
outbound_caller_id_name: Caller ID name sent to the provider on outbound calls
outbound_caller_id_number: Caller ID number sent to the provider on outbound calls
callgroup: Arbitrary value that can be used in the Dialplan or CDR

In summary, a user in the default configuration has the following:

A username for SIP and for authorization
A voicemail password
A means of allowing/restricting dialling
A means of handling the caller ID being sent out
Several arbitrary variables that can be used or ignored as needed

Let's now add a new user to our directory.
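Based on the structure of 1000.xml, a new user file follows exactly the same pattern. Here is a minimal sketch, assuming an unused extension number such as 1020 is chosen and the file is saved as conf/directory/default/1020.xml (the extension number and the trimmed-down variable list are choices made for this example):

<include>
  <user id="1020">
    <params>
      <!-- SIP/auth password; $${default_password} is defined in conf/vars.xml -->
      <param name="password" value="$${default_password}"/>
      <!-- digits dialled when checking voicemail -->
      <param name="vm-password" value="1020"/>
    </params>
    <variables>
      <variable name="user_context" value="default"/>
      <variable name="effective_caller_id_name" value="Extension 1020"/>
      <variable name="effective_caller_id_number" value="1020"/>
    </variables>
  </user>
</include>

After saving the file, FreeSWITCH must re-read its XML configuration (for example, with the reloadxml command from the FreeSWITCH console) before the new user can register or receive voicemail.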

Building a Color Picker with Hex RGB Conversion

Packt
02 Mar 2015
18 min read
In this article by Vijay Joshi, author of the book Mastering jQuery UI, we are going to create a color selector, or color picker, that will allow the users to change the text and background color of a page using the slider widget. We will also use the spinner widget to represent individual colors. Any change in colors using the slider will update the spinner and vice versa. The hex value of both text and background colors will also be displayed dynamically on the page. (For more resources related to this topic, see here.) This is how our page will look after we have finished building it: Setting up the folder structure To set up the folder structure, follow this simple procedure: Create a folder named Article inside the MasteringjQueryUI folder. Directly inside this folder, create an HTML file and name it index.html. Copy the js and css folder inside the Article folder as well. Now go inside the js folder and create a JavaScript file named colorpicker.js. With the folder setup complete, let's start to build the project. Writing markup for the page The index.html page will consist of two sections. The first section will be a text block with some text written inside it, and the second section will have our color picker controls. We will create separate controls for text color and background color. Inside the index.html file write the following HTML code to build the page skeleton: <html> <head> <link rel="stylesheet" href="css/ui-lightness/jquery-ui- 1.10.4.custom.min.css"> </head> <body> <div class="container"> <div class="ui-state-highlight" id="textBlock"> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. 
</p> </div> <div class="clear">&nbsp;</div> <ul class="controlsContainer"> <li class="left"> <div id="txtRed" class="red slider" data-spinner="sptxtRed" data-type="text"></div><input type="text" value="0" id="sptxtRed" data-slider="txtRed" readonly="readonly" /> <div id="txtGreen" class="green slider" dataspinner=" sptxtGreen" data-type="text"></div><input type="text" value="0" id="sptxtGreen" data-slider="txtGreen" readonly="readonly" /> <div id="txtBlue" class="blue slider" dataspinner=" sptxtBlue" data-type="text"></div><input type="text" value="0" id="sptxtBlue" data-slider="txtBlue" readonly="readonly" /> <div class="clear">&nbsp;</div> Text Color : <span>#000000</span> </li> <li class="right"> <div id="bgRed" class="red slider" data-spinner="spBgRed" data-type="bg" ></div><input type="text" value="255" id="spBgRed" data-slider="bgRed" readonly="readonly" /> <div id="bgGreen" class="green slider" dataspinner=" spBgGreen" data-type="bg" ></div><input type="text" value="255" id="spBgGreen" data-slider="bgGreen" readonly="readonly" /> <div id="bgBlue" class="blue slider" data-spinner="spBgBlue" data-type="bg" ></div><input type="text" value="255" id="spBgBlue" data-slider="bgBlue" readonly="readonly" /> <div class="clear">&nbsp;</div> Background Color : <span>#ffffff</span> </li> </ul> </div> <script src="js/jquery-1.10.2.js"></script> <script src="js/jquery-ui-1.10.4.custom.min.js"></script> <script src="js/colorpicker.js"></script> </body> </html> We started by including the jQuery UI CSS file inside the head section. Proceeding to the body section, we created a div with the container class, which will act as parent div for all the page elements. Inside this div, we created another div with id value textBlock and a ui-state-highlight class. We then put some text content inside this div. For this example, we have made three paragraph elements, each having some random text inside it. After div#textBlock, there is an unordered list with the controlsContainer class. This ul element has two list items inside it. First list item has the CSS class left applied to it and the second has CSS class right applied to it. Inside li.left, we created three div elements. Each of these three div elements will be converted to a jQuery slider and will represent the red (R), green (G), and blue (B) color code, respectively. Next to each of these divs is an input element where the current color code will be displayed. This input will be converted to a spinner as well. Let's look at the first slider div and the input element next to it. The div has id txtRed and two CSS classes red and slider applied to it. The red class will be used to style the slider and the slider class will be used in our colorpicker.js file. Note that this div also has two data attributes attached to it, the first is data-spinner, whose value is the id of the input element next to the slider div we have provided as sptxtRed, the second attribute is data-type, whose value is text. The purpose of the data-type attribute is to let us know whether this slider will be used for changing the text color or the background color. Moving on to the input element next to the slider now, we have set its id as sptxtRed, which should match the value of the data-spinner attribute on the slider div. It has another attribute named data-slider, which contains the id of the slider, which it is related to. Hence, its value is txtRed. Similarly, all the slider elements have been created inside div.left and each slider has an input next to id. 
The data-type attribute will have the text value for all sliders inside div.left. All input elements have also been assigned a value of 0 as the initial text color will be black. The same pattern that has been followed for elements inside div.left is also followed for elements inside div.right. The only difference is that the data-type value will be bg for slider divs. For all input elements, a value of 255 is set as the background color is white in the beginning. In this manner, all the six sliders and the six input elements have been defined. Note that each element has a unique ID. Finally, there is a span element inside both div.left and div.right. The hex color code will be displayed inside it. We have placed #000000 as the default value for the text color inside the span for the text color and #ffffff as the default value for the background color inside the span for background color. Lastly, we have included the jQuery source file, the jQuery UI source file, and the colorpicker.js file. With the markup ready, we can now write the properties for the CSS classes that we used here. Styling the content To make the page presentable and structured, we need to add CSS properties for different elements. We will do this inside the head section. Go to the head section in the index.html file and write these CSS properties for different elements: <style type="text/css">   body{     color:#025c7f;     font-family:Georgia,arial,verdana;     width:700px;     margin:0 auto;   }   .container{     margin:0 auto;     font-size:14px;     position:relative;     width:700px;     text-align:justify;    } #textBlock{     color:#000000;     background-color: #ffffff;   }   .ui-state-highlight{     padding: 10px;     background: none;   }   .controlsContainer{       border: 1px solid;       margin: 0;       padding: 0;       width: 100%;       float: left;   }   .controlsContainer li{       display: inline-block;       float: left;       padding: 0 0 0 50px;       width: 299px;   }   .controlsContainer div.ui-slider{       margin: 15px 0 0;       width: 200px;       float:left;   }   .left{     border-right: 1px solid;   }   .clear{     clear: both;   }     .red .ui-slider-range{ background: #ff0000; }   .green .ui-slider-range{ background: #00ff00; }   .blue .ui-slider-range{ background: #0000ff; }     .ui-spinner{       height: 20px;       line-height: 1px;       margin: 11px 0 0 15px;     }   input[type=text]{     margin-top: 0;     width: 30px;   } </style> First, we defined some general rules for page body and div .container. Then, we defined the initial text color and background color for the div with id textBlock. Next, we defined the CSS properties for the unordered list ul .controlsContainer and its list items. We have provided some padding and width to each list item. We have also specified the width and other properties for the slider as well. Since the class ui-slider is added by jQuery UI to a slider element after it is initialized, we have added our properties in the .controlsContainer div .ui-slider rule. To make the sliders attractive, we then defined the background colors for each of the slider bars by defining color codes for red, green, and blue classes. Lastly, CSS rules have been defined for the spinner and the input box. We can now check our progress by opening the index.html page in our browser. Loading it will display a page that resembles the following screenshot: It is obvious that sliders and spinners will not be displayed here. 
This is because we have not written the JavaScript code required to initialize those widgets. Our next section will take care of them. Implementing the color picker In order to implement the required functionality, we first need to initialize the sliders and spinners. Whenever a slider is changed, we need to update its corresponding spinner as well, and conversely if someone changes the value of the spinner, we need to update the slider to the correct value. In case any of the value changes, we will then recalculate the current color and update the text or background color depending on the context. Defining the object structure We will organize our code using the object literal. We will define an init method, which will be the entry point. All event handlers will also be applied inside this method. To begin with, go to the js folder and open the colorpicker.js file for editing. In this file, write the code that will define the object structure and a call to it: var colorPicker = {   init : function ()   {       },   setColor : function(slider, value)   {   },   getHexColor : function(sliderType)   {   },   convertToHex : function (val)   {   } }   $(function() {   colorPicker.init(); }); An object named colorPicker has been defined with four methods. Let's see what all these methods will do: init: This method will be the entry point where we will initialize all components and add any event handlers that are required. setColor: This method will be the main method that will take care of updating the text and background colors. It will also update the value of the spinner whenever the slider moves. This method has two parameters; the slider that was moved and its current value. getHexColor: This method will be called from within setColor and it will return the hex code based on the RGB values in the spinners. It takes a sliderType parameter based on which we will decide which color has to be changed; that is, text color or background color. The actual hex code will be calculated by the next method. convertToHex: This method will convert an RGB value for color into its corresponding hex value and return it to get a HexColor method. This was an overview of the methods we are going to use. Now we will implement these methods one by one, and you will understand them in detail. After the object definition, there is the jQuery's $(document).ready() event handler that will call the init method of our object. The init method In the init method, we will initialize the sliders and the spinners and set the default values for them as well. Write the following code for the init method in the colorpicker.js file:   init : function () {   var t = this;   $( ".slider" ).slider(   {     range: "min",     max: 255,     slide : function (event, ui)     {       t.setColor($(this), ui.value);     },     change : function (event, ui)     {       t.setColor($(this), ui.value);     }   });     $('input').spinner(   {     min :0,     max : 255,     spin : function (event, ui)     {       var sliderRef = $(this).data('slider');       $('#' + sliderRef).slider("value", ui.value);     }   });       $( "#txtRed, #txtGreen, #txtBlue" ).slider('value', 0);   $( "#bgRed, #bgGreen, #bgBlue" ).slider('value', 255); } In the first line, we stored the current scope value, this, in a local variable named t. Next, we will initialize the sliders. Since we have used the CSS class slider on each slider, we can simply use the .slider selector to select all of them. 
During initialization, we provide four options for sliders: range, max, slide, and change. Note the value for max, which has been set to 255. Since the value for R, G, or B can be only between 0 and 255, we have set max as 255. We do not need to specify min as it is 0 by default. The slide method has also been defined, which is invoked every time the slider handle moves. The call back for slide is calling the setColor method with an instance of the current slider and the value of the current slider. The setColor method will be explained in the next section. Besides slide, the change method is also defined, which also calls the setColor method with an instance of the current slider and its value. We use both the slide and change methods. This is because a change is called once the user has stopped sliding the slider handle and the slider value has changed. Contrary to this, the slide method is called each time the user drags the slider handle. Since we want to change colors while sliding as well, we have defined the slide as well as change methods. It is time to initialize the spinners now. The spinner widget is initialized with three properties. These are min and max, and the spin. min and max method has been set to 0 and 255, respectively. Every time the up/down button on the spinner is clicked or the up/down arrow key is used, the spin method will be called. Inside this method, $(this) refers to the current spinner. We find our related slider to this spinner by reading the data-slider attribute of this spinner. Once we get the exact slider, we set its value using the value method on the slider widget. Note that calling the value method will invoke the change method of the slider as well. This is the primary reason we have defined a callback for the change event while initializing the sliders. Lastly, we will set the default values for the sliders. For sliders inside div.left, we have set the value as 0 and for sliders inside div.right, the value is set to 255. You can now check the page on your browser. You will find that the slider and the spinner elements are initialized now, with the values we specified: You can also see that changing the spinner value using either the mouse or the keyboard will update the value of the slider as well. However, changing the slider value will not update the spinner. We will handle this in the next section where we will change colors as well. Changing colors and updating the spinner The setColor method is called each time the slider or the spinner value changes. We will now define this method to change the color based on whether the slider's or spinner's value was changed. Go to the setColor method declaration and write the following code: setColor : function(slider, value) {   var t = this;   var spinnerRef = slider.data('spinner');   $('#' + spinnerRef).spinner("value", value);     var sliderType = slider.data('type')     var hexColor = t.getHexColor(sliderType);   if(sliderType == 'text')   {       $('#textBlock').css({'color' : hexColor});       $('.left span:last').text(hexColor);                  }   else   {       $('#textBlock').css({'background-color' : hexColor});       $('.right span:last').text(hexColor);                  } } In the preceding code, we receive the current slider and its value as a parameter. First we get the related spinner to this slider using the data attribute spinner. Then we set the value of the spinner to the current value of the slider. 
Now we find out the type of slider for which setColor is being called and store it in the sliderType variable. The value for sliderType will either be text, in case of sliders inside div.left, or bg, in case of sliders inside div.right. In the next line, we will call the getHexColor method and pass the sliderType variable as its argument. The getHexColor method will return the hex color code for the selected color. Next, based on the sliderType value, we set the color of div#textBlock. If the sliderType is text, we set the color CSS property of div#textBlock and display the selected hex code in the span inside div.left. If the sliderType value is bg, we set the background color for div#textBlock and display the hex code for the background color in the span inside div.right. The getHexColor method In the preceding section, we called the getHexColor method with the sliderType argument. Let's define it first, and then we will go through it in detail. Write the following code to define the getHexColor method: getHexColor : function(sliderType) {   var t = this;   var allInputs;   var hexCode = '#';   if(sliderType == 'text')   {     //text color     allInputs = $('.left').find('input[type=text]');   }   else   {     //background color     allInputs = $('.right').find('input[type=text]');   }   allInputs.each(function (index, element) {     hexCode+= t.convertToHex($(element).val());   });     return hexCode; } The local variable t has stored this to point to the current scope. Another variable allInputs is declared, and lastly a variable to store the hex code has been declared, whose value has been set to # initially. Next comes the if condition, which checks the value of parameter sliderType. If the value of sliderType is text, it means we need to get all the spinner values to change the text color. Hence, we use jQuery's find selector to retrieve all input boxes inside div.left. If the value of sliderType is bg, it means we need to change the background color. Therefore, the else block will be executed and all input boxes inside div.right will be retrieved. To convert the color to hex, individual values for red, green, and blue will have to be converted to hex and then concatenated to get the full color code. Therefore, we iterate in inputs using the .each method. Another method convertToHex is called, which converts the value of a single input to hex. Inside the each method, we keep concatenating the hex value of the R, G, and B components to a variable hexCode. Once all iterations are done, we return the hexCode to the parent function where it is used. Converting to hex convertToHex is a small method that accepts a value and converts it to the hex equivalent. Here is the definition of the convertToHex method: convertToHex : function (val) {   var x  = parseInt(val, 10).toString(16);   return x.length == 1 ? "0" + x : x; } Inside the method, firstly we will convert the received value to an integer using the parseInt method and then we'll use JavaScript's toString method to convert it to hex, which has base 16. In the next line, we will check the length of the converted hex value. Since we want the 6-character dash notation for color (such as #ff00ff), we need two characters each for red, green, and blue. Hence, we check the length of the created hex value. If it is only one character, we append a 0 to the beginning to make it two characters. The hex value is then returned to the parent function. With this, our implementation is complete and we can check it on a browser. 
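Before loading the page, it is worth seeing why the zero-padding step in convertToHex matters. The following stand-alone sketch (not part of the tutorial files; the sample RGB values are arbitrary) exercises the same conversion logic:

// a copy of the tutorial's convertToHex, pulled out for experimentation
var convertToHex = function (val) {
  var x = parseInt(val, 10).toString(16);
  return x.length == 1 ? "0" + x : x;
};

console.log(convertToHex("255")); // "ff"
console.log(convertToHex("10"));  // "0a" -- padded to two characters
console.log(convertToHex("0"));   // "00"

// building a full code the way getHexColor does, for R=255, G=0, B=128
var hexCode = '#' + convertToHex(255) + convertToHex(0) + convertToHex(128);
console.log(hexCode); // "#ff0080"

Without the padding, a triple such as (255, 0, 8) would produce "#ff08" instead of "#ff0008", which the browser would not interpret as the intended color.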
Load the page in your browser and play with the sliders and spinners. You will see the text or background color changing, based on their value: You will also see the hex code displayed below the sliders. Also note that changing the sliders will change the value of the corresponding spinner and vice versa. Improving the Colorpicker This was a very basic tool that we built. You can add many more features to it and enhance its functionality. Here are some ideas to get you started: Convert it into a widget where all the required DOM for sliders and spinners is created dynamically Instead of two sliders, incorporate the text and background changing ability into a single slider with two handles, but keep two spinners as usual Summary In this article, we created a basic color picker/changer using sliders and spinners. You can use it to view and change the colors of your pages dynamically. Resources for Article: Further resources on this subject: Testing Ui Using WebdriverJs? [article] Important Aspect Angularjs Ui Development [article] Kendo Ui Dataviz Advance Charting [article]

Node.js Fundamentals and Asynchronous JavaScript

Packt
19 Feb 2016
5 min read
Node.js is a JavaScript-driven technology. The language has been in development for more than 15 years, and it was first used in Netscape. Over the years, the JavaScript community has found interesting and useful design patterns, which will be of use to us in this book. All this knowledge is now available to Node.js coders. Of course, there are some differences because we are running the code in different environments, but we are still able to apply all these good practices, techniques, and paradigms. I always say that it is important to have a good basis for your applications. No matter how big your application is, it should rely on flexible and well-tested code (For more resources related to this topic, see here.)

Node.js fundamentals

Node.js is a single-threaded technology. This means that every request is processed in only one thread. In other languages, for example, Java, the web server instantiates a new thread for every request. However, Node.js is meant to use asynchronous processing, and there is a theory that doing this in a single thread could bring good performance. The problem with single-threaded applications is blocking I/O operations; for example, when we need to read a file from the hard disk to respond to the client. Once a new request lands on our server, we open the file and start reading from it. The problem occurs when another request arrives while the application is still processing the first one. Let's elucidate the issue with the following example:

var http = require('http');

var getTime = function() {
  var d = new Date();
  return d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds() + ':' + d.getMilliseconds();
}

var respond = function(res, str) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(str + '\n');
  console.log(str + ' ' + getTime());
}

var handleRequest = function (req, res) {
  console.log('new request: ' + req.url + ' - ' + getTime());
  if(req.url == '/immediately') {
    respond(res, 'A');
  } else {
    var now = new Date().getTime();
    while(new Date().getTime() < now + 5000) {
      // simulate a synchronous (blocking) read of a large file
    }
    respond(res, 'B');
  }
}

http.createServer(handleRequest).listen(9000, '127.0.0.1');

The http module, which we initialize on the first line, is needed for running the web server. The getTime function returns the current time as a string, and the respond function sends a simple text to the client's browser and reports that the incoming request has been processed. The most interesting function is handleRequest, which is the entry point of our logic. To simulate the reading of a large file, we create a while cycle that spins for 5 seconds. Once we run the server, we will be able to make an HTTP request to http://localhost:9000. In order to demonstrate the single-thread behavior, we will send two requests at the same time. These requests are as follows:

One request will be sent to http://localhost:9000, where the server will perform a synchronous operation that takes 5 seconds
The other request will be sent to http://localhost:9000/immediately, where the server should respond immediately

The following screenshot is the output printed from the server, after pinging both the URLs: As we can see, the first request came at 16:58:30:434, and its response was sent at 16:58:35:440, that is, 5 seconds later. However, the problem is that the second request is registered only when the first one finishes. That's because the thread belonging to Node.js was busy processing the while loop. Of course, Node.js has a solution for the blocking I/O operations. 
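Before looking at that solution, note that the while loop above only simulates the problem; a real blocking call behaves the same way. The following hedged variation of handleRequest (reusing the getTime and respond helpers from the listing above, and assuming a hypothetical large file named big-file.txt) blocks the single thread with fs.readFileSync:

var fs = require('fs');

var handleRequest = function (req, res) {
  console.log('new request: ' + req.url + ' - ' + getTime());
  if(req.url == '/immediately') {
    respond(res, 'A');
  } else {
    // a genuinely blocking read: the single thread can do nothing else,
    // not even accept the /immediately request, until it returns
    var content = fs.readFileSync('big-file.txt'); // hypothetical large file
    respond(res, 'B (' + content.length + ' bytes read)');
  }
}

Until readFileSync returns, the thread cannot accept or answer the /immediately request either. Blocking reads like this are exactly the I/O operations that Node.js addresses.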
They are transformed to asynchronous functions that accept callback. Once the operation finishes, Node.js fires the callback, notifying that the job is done. A huge benefit of this approach is that while it waits to get the result of the I/O, the server can process another request. The entity that handles the external events and converts them into callback invocations is called the event loop. The event loop acts as a really good manager and delegates tasks to various workers. It never blocks and just waits for something to happen; for example, a notification that the file is written successfully. Now, instead of reading a file synchronously, we will transform our brief example to use asynchronous code. The modified example looks like the following code: var handleRequest = function (req, res) { console.log('new request: ' + req.url + ' - ' + getTime()); if(req.url == '/immediately') { respond(res, 'A'); } else { setTimeout(function() { // reading the file respond(res, 'B'); }, 5000); } } The while loop is replaced with the setTimeout invocation. The result of this change is clearly visible in the server's output, which can be seen in the following screenshot: The first request still gets its response after 5 seconds. However, the second one is processed immediately. Summary In this article, we went through the most common programming paradigms in Node.js. We learned how Node.js handles parallel requests. We understood how to write modules and make them communicative. We saw the problems of the asynchronous code and their most popular solutions. For more information on Node.js you can refer to the following URLs: https://www.packtpub.com/web-development/mastering-nodejs https://www.packtpub.com/web-development/deploying-nodejs Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] AngularJS Project [Article] Working with Live Data and AngularJS [Article]

AngularJS Performance

Packt
04 Mar 2015
20 min read
In this article by Chandermani, the author of AngularJS by Example, we focus our discussion on the performance aspect of AngularJS. For most scenarios, we can all agree that AngularJS is insanely fast. For standard size views, we rarely see any performance bottlenecks. But many views start small and then grow over time. And sometimes the requirement dictates we build large pages/views with a sizable amount of HTML and data. In such a case, there are things that we need to keep in mind to provide an optimal user experience. Take any framework and the performance discussion on the framework always requires one to understand the internal working of the framework. When it comes to Angular, we need to understand how Angular detects model changes. What are watches? What is a digest cycle? What roles do scope objects play? Without a conceptual understanding of these subjects, any performance guidance is merely a checklist that we follow without understanding the why part. Let's look at some pointers before we begin our discussion on performance of AngularJS: The live binding between the view elements and model data is set up using watches. When a model changes, one or many watches linked to the model are triggered. Angular's view binding infrastructure uses these watches to synchronize the view with the updated model value. Model change detection only happens when a digest cycle is triggered. Angular does not track model changes in real time; instead, on every digest cycle, it runs through every watch to compare the previous and new values of the model to detect changes. A digest cycle is triggered when $scope.$apply is invoked. A number of directives and services internally invoke $scope.$apply: Directives such as ng-click, ng-mouse* do it on user action Services such as $http and $resource do it when a response is received from server $timeout or $interval call $scope.$apply when they lapse A digest cycle tracks the old value of the watched expression and compares it with the new value to detect if the model has changed. Simply put, the digest cycle is a workflow used to detect model changes. A digest cycle runs multiple times till the model data is stable and no watch is triggered. Once you have a clear understanding of the digest cycle, watches, and scopes, we can look at some performance guidelines that can help us manage views as they start to grow. (For more resources related to this topic, see here.) Performance guidelines When building any Angular app, any performance optimization boils down to: Minimizing the number of binding expressions and hence watches Making sure that binding expression evaluation is quick Optimizing the number of digest cycles that take place The next few sections provide some useful pointers in this direction. Remember, a lot of these optimization may only be necessary if the view is large. Keeping the page/view small The sanest advice is to keep the amount of content available on a page small. The user cannot interact/process too much data on the page, so remember that screen real estate is at a premium and only keep necessary details on a page. The lesser the content, the lesser the number of binding expressions; hence, fewer watches and less processing are required during the digest cycle. Remember, each watch adds to the overall execution time of the digest cycle. The time required for a single watch can be insignificant but, after combining hundreds and maybe thousands of them, they start to matter. 
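These pointers become easier to reason about with a small sketch in front of you. The following minimal, self-contained example (the module name digestDemo and controller name DigestCtrl are invented for illustration, not taken from any sample app in this article) shows that a watch only reacts when something triggers a digest cycle:

var app = angular.module('digestDemo', []);

app.controller('DigestCtrl', function ($scope, $timeout) {
  $scope.counter = 0;

  // every registered watch is checked on every digest cycle
  $scope.$watch('counter', function (newVal, oldVal) {
    console.log('watch fired: counter changed from ' + oldVal + ' to ' + newVal);
  });

  // $timeout calls $scope.$apply internally when it lapses,
  // so this change is detected and the watch above fires
  $timeout(function () {
    $scope.counter = 1;
  }, 1000);

  // a plain setTimeout changes the model outside Angular's knowledge;
  // no digest cycle runs, so the watch stays silent until something
  // else (ng-click, $http, $timeout, ...) triggers $scope.$apply
  setTimeout(function () {
    $scope.counter = 2;
  }, 2000);
});

The contrast between $timeout and the raw setTimeout call is exactly why the Angular wrappers are recommended later in this article.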
Angular's data binding infrastructure is insanely fast and relies on a rudimentary dirty check that compares the old and the new values. Check out the stack overflow (SO) post (http://stackoverflow.com/questions/9682092/databinding-in-angularjs), where Misko Hevery (creator of Angular) talks about how data binding works in Angular. Data binding also adds to the memory footprint of the application. Each watch has to track the current and previous value of a data-binding expression to compare and verify if data has changed. Keeping a page/view small may not always be possible, and the view may grow. In such a case, we need to make sure that the number of bindings does not grow exponentially (linear growth is OK) with the page size. The next two tips can help minimize the number of bindings in the page and should be seriously considered for large views. Optimizing watches for read-once data In any Angular view, there is always content that, once bound, does not change. Any read-only data on the view can fall into this category. This implies that once the data is bound to the view, we no longer need watches to track model changes, as we don't expect the model to update. Is it possible to remove the watch after one-time binding? Angular itself does not have something inbuilt, but a community project bindonce (https://github.com/Pasvaz/bindonce) is there to fill this gap. Angular 1.3 has added support for bind and forget in the native framework. Using the syntax {{::title}}, we can achieve one-time binding. If you are on Angular 1.3, use it! Hiding (ng-show) versus conditional rendering (ng-if/ng-switch) content You have learned two ways to conditionally render content in Angular. The ng-show/ng-hide directive shows/hides the DOM element based on the expression provided and ng-if/ng-switch creates and destroys the DOM based on an expression. For some scenarios, ng-if can be really beneficial as it can reduce the number of binding expressions/watches for the DOM content not rendered. Consider the following example: <div ng-if='user.isAdmin'>   <div ng-include="'admin-panel.html'"></div></div> The snippet renders an admin panel if the user is an admin. With ng-if, if the user is not an admin, the ng-include directive template is neither requested nor rendered saving us of all the bindings and watches that are part of the admin-panel.html view. From the preceding discussion, it may seem that we should get rid of all ng-show/ng-hide directives and use ng-if. Well, not really! It again depends; for small size pages, ng-show/ng-hide works just fine. Also, remember that there is a cost to creating and destroying the DOM. If the expression to show/hide flips too often, this will mean too many DOMs create-and-destroy cycles, which are detrimental to the overall performance of the app. Expressions being watched should not be slow Since watches are evaluated too often, the expression being watched should return results fast. The first way we can make sure of this is by using properties instead of functions to bind expressions. These expressions are as follows: {{user.name}}ng-show='user.Authorized' The preceding code is always better than this: {{getUserName()}}ng-show = 'isUserAuthorized(user)' Try to minimize function expressions in bindings. If a function expression is required, make sure that the function returns a result quickly. 
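As a hedged sketch of this guideline (the module, controller, and property names below are invented for the example), here is the difference between binding to a function expression and binding to a precomputed property:

var app = angular.module('bindingDemo', []);

app.controller('UserCtrl', function ($scope) {
  $scope.user = { firstName: 'Jane', lastName: 'Doe' };

  // bound as {{getUserName()}} -- evaluated on every single digest cycle
  $scope.getUserName = function () {
    return $scope.user.firstName + ' ' + $scope.user.lastName;
  };

  // bound as {{userName}} -- the digest only has to dirty-check a string;
  // recompute it explicitly at the points where the user model changes
  $scope.userName = $scope.user.firstName + ' ' + $scope.user.lastName;
});

With the property version, each digest cycle only compares a string value instead of invoking a function.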
Make sure a function being watched does not: Make any remote calls Use $timeout/$interval Perform sorting/filtering Perform DOM manipulation (this can happen inside directive implementation) Or perform any other time-consuming operation Be sure to avoid such operations inside a bound function. To reiterate, Angular will evaluate a watched expression multiple times during every digest cycle just to know if the return value (a model) has changed and the view needs to be synchronized. Minimizing the deep model watch When using $scope.$watch to watch for model changes in controllers, be careful while setting the third $watch function parameter to true. The general syntax of watch looks like this: $watch(watchExpression, listener, [objectEquality]); In the standard scenario, Angular does an object comparison based on the reference only. But if objectEquality is true, Angular does a deep comparison between the last value and new value of the watched expression. This can have an adverse memory and performance impact if the object is large. Handling large datasets with ng-repeat The ng-repeat directive undoubtedly is the most useful directive Angular has. But it can cause the most performance-related headaches. The reason is not because of the directive design, but because it is the only directive that allows us to generate HTML on the fly. There is always the possibility of generating enormous HTML just by binding ng-repeat to a big model list. Some tips that can help us when working with ng-repeat are: Page data and use limitTo: Implement a server-side paging mechanism when a number of items returned are large. Also use the limitTo filter to limit the number of items rendered. Its syntax is as follows: <tr ng-repeat="user in users |limitTo:pageSize">…</tr> Look at modules such as ngInfiniteScroll (http://binarymuse.github.io/ngInfiniteScroll/) that provide an alternate mechanism to render large lists. Use the track by expression: The ng-repeat directive for performance tries to make sure it does not unnecessarily create or delete HTML nodes when items are added, updated, deleted, or moved in the list. To achieve this, it adds a $$hashKey property to every model item allowing it to associate the DOM node with the model item. We can override this behavior and provide our own item key using the track by expression such as: <tr ng-repeat="user in users track by user.id">…</tr> This allows us to use our own mechanism to identify an item. Using your own track by expression has a distinct advantage over the default hash key approach. Consider an example where you make an initial AJAX call to get users: $scope.getUsers().then(function(users){ $scope.users = users;}) Later again, refresh the data from the server and call something similar again: $scope.users = users; With user.id as a key, Angular is able to determine what elements were added/deleted and moved; it can also determine created/deleted DOM nodes for such elements. Remaining elements are not touched by ng-repeat (internal bindings are still evaluated). This saves a lot of CPU cycles for the browser as fewer DOM elements are created and destroyed. Do not bind ng-repeat to a function expression: Using a function's return value for ng-repeat can also be problematic, depending upon how the function is implemented. 
Consider a repeat with this: <tr ng-repeat="user in getUsers()">…</tr> And consider the controller getUsers function with this: $scope.getUser = function() {   var orderBy = $filter('orderBy');   return orderBy($scope.users, predicate);} Angular is going to evaluate this expression and hence call this function every time the digest cycle takes place. A lot of CPU cycles were wasted sorting user data again and again. It is better to use scope properties and presort the data before binding. Minimize filters in views, use filter elements in the controller: Filters defined on ng-repeat are also evaluated every time the digest cycle takes place. For large lists, if the same filtering can be implemented in the controller, we can avoid constant filter evaluation. This holds true for any filter function that is used with arrays including filter and orderBy. Avoiding mouse-movement tracking events The ng-mousemove, ng-mouseenter, ng-mouseleave, and ng-mouseover directives can just kill performance. If an expression is attached to any of these event directives, Angular triggers a digest cycle every time the corresponding event occurs and for events like mouse move, this can be a lot. We have already seen this behavior when working with 7 Minute Workout, when we tried to show a pause overlay on the exercise image when the mouse hovers over it. Avoid them at all cost. If we just want to trigger some style changes on mouse events, CSS is a better tool. Avoiding calling $scope.$apply Angular is smart enough to call $scope.$apply at appropriate times without us explicitly calling it. This can be confirmed from the fact that the only place we have seen and used $scope.$apply is within directives. The ng-click and updateOnBlur directives use $scope.$apply to transition from a DOM event handler execution to an Angular execution context. Even when wrapping the jQuery plugin, we may require to do a similar transition for an event raised by the JQuery plugin. Other than this, there is no reason to use $scope.$apply. Remember, every invocation of $apply results in the execution of a complete digest cycle. The $timeout and $interval services take a Boolean argument invokeApply. If set to false, the lapsed $timeout/$interval services does not call $scope.$apply or trigger a digest cycle. Therefore, if you are going to perform background operations that do not require $scope and the view to be updated, set the last argument to false. Always use Angular wrappers over standard JavaScript objects/functions such as $timeout and $interval to avoid manually calling $scope.$apply. These wrapper functions internally call $scope.$apply. Also, understand the difference between $scope.$apply and $scope.$digest. $scope.$apply triggers $rootScope.$digest that evaluates all application watches whereas, $scope.$digest only performs dirty checks on the current scope and its children. If we are sure that the model changes are not going to affect anything other than the child scopes, we can use $scope.$digest instead of $scope.$apply. Lazy-loading, minification, and creating multiple SPAs I hope you are not assuming that the apps that we have built will continue to use the numerous small script files that we have created to separate modules and module artefacts (controllers, directives, filters, and services). Any modern build system has the capability to concatenate and minify these files and replace the original file reference with a unified and minified version. 
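Here is a short sketch of that invokeApply argument in action (the module and controller names are invented, and the logged work is a stand-in for any background task that does not touch the view):

var app = angular.module('backgroundDemo', []);

app.controller('PollingCtrl', function ($scope, $timeout, $interval) {
  // third argument (invokeApply) set to false:
  // the callback runs, but no $scope.$apply and hence no digest cycle
  $timeout(function () {
    console.log('housekeeping work that never touches the view');
  }, 5000, false);

  // with $interval the fourth argument is invokeApply (count 0 = repeat forever)
  var poll = $interval(function () {
    console.log('background polling, view stays untouched');
  }, 10000, 0, false);

  // stop polling when the scope goes away
  $scope.$on('$destroy', function () {
    $interval.cancel(poll);
  });
});

Passing false tells Angular that the callback will not change anything the view is bound to, so no digest cycle is triggered when it runs.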
Therefore, like any JavaScript library, use minified script files for production. The problem with the Angular bootstrapping process is that it expects all Angular application scripts to be loaded before the application can bootstrap. We cannot load modules, controllers, or in fact, any of the other Angular constructs on demand. This means we need to provide every artefact required by our app, upfront. For small applications, this is not a problem as the content is concatenated and minified; also, the Angular application code itself is far more compact as compared to the traditional JavaScript of jQuery-based apps. But, as the size of the application starts to grow, it may start to hurt when we need to load everything upfront. There are at least two possible solutions to this problem; the first one is about breaking our application into multiple SPAs. Breaking applications into multiple SPAs This advice may seem counterintuitive as the whole point of SPAs is to get rid of full page loads. By creating multiple SPAs, we break the app into multiple small SPAs, each supporting parts of the overall app functionality. When we say app, it implies a combination of the main (such as index.html) page with ng-app and all the scripts/libraries and partial views that the app loads over time. For example, we can break the Personal Trainer application into a Workout Builder app and a Workout Runner app. Both have their own start up page and scripts. Common scripts such as the Angular framework scripts and any third-party libraries can be referenced in both the applications. On similar lines, common controllers, directives, services, and filters too can be referenced in both the apps. The way we have designed Personal Trainer makes it easy to achieve our objective. The segregation into what belongs where has already been done. The advantage of breaking an app into multiple SPAs is that only relevant scripts related to the app are loaded. For a small app, this may be an overkill but for large apps, it can improve the app performance. The challenge with this approach is to identify what parts of an application can be created as independent SPAs; it totally depends upon the usage pattern of the application. For example, assume an application has an admin module and an end consumer/user module. Creating two SPAs, one for admin and the other for the end customer, is a great way to keep user-specific features and admin-specific features separate. A standard user may never transition to the admin section/area, whereas an admin user can still work on both areas; but transitioning from the admin area to a user-specific area will require a full page refresh. If breaking the application into multiple SPAs is not possible, the other option is to perform the lazy loading of a module. Lazy-loading modules Lazy-loading modules or loading module on demand is a viable option for large Angular apps. But unfortunately, Angular itself does not have any in-built support for lazy-loading modules. Furthermore, the additional complexity of lazy loading may be unwarranted as Angular produces far less code as compared to other JavaScript framework implementations. Also once we gzip and minify the code, the amount of code that is transferred over the wire is minimal. 
If we still want to try our hands on lazy loading, there are two libraries that can help: ocLazyLoad (https://github.com/ocombe/ocLazyLoad): This is a library that uses script.js to load modules on the fly angularAMD (http://marcoslin.github.io/angularAMD): This is a library that uses require.js to lazy load modules With lazy loading in place, we can delay the loading of a controller, directive, filter, or service script, until the page that requires them is loaded. The overall concept of lazy loading seems to be great but I'm still not sold on this idea. Before we adopt a lazy-load solution, there are things that we need to evaluate: Loading multiple script files lazily: When scripts are concatenated and minified, we load the complete app at once. Contrast it to lazy loading where we do not concatenate but load them on demand. What we gain in terms of lazy-load module flexibility we lose in terms of performance. We now have to make a number of network requests to load individual files. Given these facts, the ideal approach is to combine lazy loading with concatenation and minification. In this approach, we identify those feature modules that can be concatenated and minified together and served on demand using lazy loading. For example, Personal Trainer scripts can be divided into three categories: The common app modules: This consists of any script that has common code used across the app and can be combined together and loaded upfront The Workout Runner module(s): Scripts that support workout execution can be concatenated and minified together but are loaded only when the Workout Runner pages are loaded. The Workout Builder module(s): On similar lines to the preceding categories, scripts that support workout building can be combined together and served only when the Workout Builder pages are loaded. As we can see, there is a decent amount of effort required to refactor the app in a manner that makes module segregation, concatenation, and lazy loading possible. The effect on unit and integration testing: We also need to evaluate the effect of lazy-loading modules in unit and integration testing. The way we test is also affected with lazy loading in place. This implies that, if lazy loading is added as an afterthought, the test setup may require tweaking to make sure existing tests still run. Given these facts, we should evaluate our options and check whether we really need lazy loading or we can manage by breaking a monolithic SPA into multiple smaller SPAs. Caching remote data wherever appropriate Caching data is the one of the oldest tricks to improve any webpage/application performance. Analyze your GET requests and determine what data can be cached. Once such data is identified, it can be cached from a number of locations. Data cached outside the app can be cached in: Servers: The server can cache repeated GET requests to resources that do not change very often. This whole process is transparent to the client and the implementation depends on the server stack used. Browsers: In this case, the browser caches the response. Browser caching depends upon the server sending HTTP cache headers such as ETag and cache-control to guide the browser about how long a particular resource can be cached. Browsers can honor these cache headers and cache data appropriately for future use. 
If server and browser caching is not available or if we also want to incorporate any amount of caching in the client app, we do have some choices: Cache data in memory: A simple Angular service can cache the HTTP response in the memory. Since Angular is SPA, the data is not lost unless the page refreshes. This is how a service function looks when it caches data: var workouts;service.getWorkouts = function () {   if (workouts) return $q.resolve(workouts);   return $http.get("/workouts").then(function (response){       workouts = response.data;       return workouts;   });}; The implementation caches a list of workouts into the workouts variable for future use. The first request makes a HTTP call to retrieve data, but subsequent requests just return the cached data as promised. The usage of $q.resolve makes sure that the function always returns a promise. Angular $http cache: Angular's $http service comes with a configuration option cache. When set to true, $http caches the response of the particular GET request into a local cache (again an in-memory cache). Here is how we cache a GET request: $http.get(url, { cache: true}); Angular caches this cache for the lifetime of the app, and clearing it is not easy. We need to get hold of the cache dedicated to caching HTTP responses and clear the cache key manually. The caching strategy of an application is never complete without a cache invalidation strategy. With cache, there is always a possibility that caches are out of sync with respect to the actual data store. We cannot affect the server-side caching behavior from the client; consequently, let's focus on how to perform cache invalidation (clearing) for the two client-side caching mechanisms described earlier. If we use the first approach to cache data, we are responsible for clearing cache ourselves. In the case of the second approach, the default $http service does not support clearing cache. We either need to get hold of the underlying $http cache store and clear the cache key manually (as shown here) or implement our own cache that manages cache data and invalidates cache based on some criteria: var cache = $cacheFactory.get('$http');cache.remove("http://myserver/workouts"); //full url Using Batarang to measure performance Batarang (a Chrome extension), as we have already seen, is an extremely handy tool for Angular applications. Using Batarang to visualize app usage is like looking at an X-Ray of the app. It allows us to: View the scope data, scope hierarchy, and how the scopes are linked to HTML elements Evaluate the performance of the application Check the application dependency graph, helping us understand how components are linked to each other, and with other framework components. If we enable Batarang and then play around with our application, Batarang captures performance metrics for all watched expressions in the app. This data is nicely presented as a graph available on the Performance tab inside Batarang: That is pretty sweet! When building an app, use Batarang to gauge the most expensive watches and take corrective measures, if required. Play around with Batarang and see what other features it has. This is a very handy tool for Angular applications. This brings us to the end of the performance guidelines that we wanted to share in this article. Some of these guidelines are preventive measures that we should take to make sure we get optimal app performance whereas others are there to help when the performance is not up to the mark. 
Summary In this article, we looked at the ever-so-important topic of performance, where you learned ways to optimize an Angular app performance. Resources for Article: Further resources on this subject: Role of AngularJS [article] The First Step [article] Recursive directives [article]


Navigation Stack – Robot Setups

Packt
09 Oct 2013
11 min read
Introduction to the navigation stacks and their powerful capabilities—clearly one of the greatest pieces of software that comes with ROS. The TF is explained in order to show how to transform from the frame of one physical element to the other; for example, the data received using a sensor or the command for the desired position of an actuator. We will see how to create a laser driver or simulate it. We will learn how the odometry is computed and published, and how Gazebo provides it. A base controller will be presented, including a detailed description of how to create one for your robot. We will see how to execute SLAM with ROS. That is, we will show you how you can build a map from the environment with your robot as it moves through it. Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack. The navigation stack in ROS In order to understand the navigation stack, you should think of it as a set of algorithms that use the sensors of the robot and the odometry, and you can control the robot using a standard message. It can move your robot without problems (for example, without crashing or getting stuck in some location, or getting lost) to another position. You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack. The robot must satisfy some requirements before it uses the navigation stack: The navigation stack can only handle a differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways. It requires that the robot publishes information about the relationships between all the joints and sensors' position. The robot must send messages with linear and angular velocities. A planar laser must be on the robot to create the map and localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot. The following diagram shows you how the navigation stacks are organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate those stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous: In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and to be used by the navigation stack. Creating transforms The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it will be a bit complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us. If we put the laser 10 cm backwards and 20 cm above with regard to the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once inserted and created, we could easily know the position of the laser with regard to the base_link value or the wheels. 
The only thing we need to do is call the TF library and get the transformation.

Creating a broadcaster

Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;
  ros::Rate r(100);
  tf::TransformBroadcaster broadcaster;
  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}

Remember to add the following line in your CMakeLists.txt file to create the new executable:

rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)

We will also create another node that uses the transform; it will give us the position of a point seen by a sensor with regard to the center of base_link (our robot).

Creating a listener

Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";
  //we'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();
  //just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 2.0;
  laser_point.point.z = 0.0;
  try{
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);
    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
      laser_point.point.x, laser_point.point.y, laser_point.point.z,
      base_point.point.x, base_point.point.y, base_point.point.z,
      base_point.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;
  tf::TransformListener listener(ros::Duration(10));
  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0),
    boost::bind(&transformPoint, boost::ref(listener)));
  ros::spin();
}

Remember to add the corresponding line in the CMakeLists.txt file to create the executable. Compile the package and run both nodes using the following commands:

$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun chapter7_tutorials tf_listener

Then you will see the following messages:

[ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
[ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33

This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point. A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this. 
We are going to add another laser, say, on the back of the robot (base_link): The system had to know the position of the new laser to detect collisions, such as the one between wheels and walls. With the TF tree, this is very simple to do and maintain and is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to exactly know where each one of their components is. Before starting to write the code to configure each component, keep in mind that you have the geometry of the robot specified in the URDF file. So, for this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We used it for the first time; therefore, you do have the robot configured to be used with the navigation stack. Watching the transformation tree If you want to see the transformation tree of your robot, use the following command: $ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"$ rosrun tf view_frames The resultant frame is depicted as follows: And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code: $ rosrun chapter7_tutorials tf_broadcaster $ rosrun tf view_frames The resultant frame is depicted as follows: Publishing sensor information Your robot can have a lot of sensors to see the world; you can program a lot of nodes to take these data and do something, but the navigation stack is prepared only to use the planar laser's sensor. So, your sensor must publish the data with one of these types: sensor_msgs/LaserScan or sensor_msgs/PointCloud. We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated on Gazebo, and it publishes data on the base_scan/scan frame. In our case, we do not need to configure anything of our laser to use it on the navigation stack. This is because we have tf configured in the .urdf file, and the laser is publishing data with the correct type. If you use a real laser, ROS might have a driver for it. Anyway, if you are using a laser that has no driver on ROS and want to write a node to publish the data with the sensor_msgs/LaserScan sensor, you have an example template to do it, which is shown in the following section. But first, remember the structure of the message sensor_msgs/LaserScan. 
Use the following command: $ rosmsg show sensor_msgs/LaserScan std_msgs/Header header uint32 seq time stamp string frame_id float32 angle_min float32 angle_max float32 angle_increment float32 time_increment float32 scan_time float32 range_min float32 range_max float32[] rangesfloat32[] intensities Creating the laser node Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it: #include <ros/ros.h> #include <sensor_msgs/LaserScan.h> int main(int argc, char** argv){ ros::init(argc, argv, "laser_scan_publisher"); ros::NodeHandle n; ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50); unsigned int num_readings = 100; double laser_frequency = 40; double ranges[num_readings]; double intensities[num_readings]; int count = 0; ros::Rate r(1.0); while(n.ok()){ //generate some fake data for our laser scan for(unsigned int i = 0; i < num_readings; ++i){ ranges[i] = count; intensities[i] = 100 + count; } ros::Time scan_time = ros::Time::now(); //populate the LaserScan message sensor_msgs::LaserScan scan; scan.header.stamp = scan_time; scan.header.frame_id = "base_link"; scan.angle_min = -1.57; scan.angle_max = 1.57; scan.angle_increment = 3.14 / num_readings; scan.time_increment = (1 / laser_frequency) / (num_readings); scan.range_min = 0.0; scan.range_max = 100.0; scan.ranges.resize(num_readings); scan.intensities.resize(num_readings); for(unsigned int i = 0; i < num_readings; ++i){ scan.ranges[i] = ranges[i]; scan.intensities[i] = intensities[i]; } scan_pub.publish(scan); ++count; r.sleep(); } } As you can see, we are going to create a new topic with the name scan and the message type sensor_msgs/LaserScan. You must be familiar with this message type from sensor_msgs/LaserScan. The name of the topic must be unique. When you configure the navigation stack, you will select this topic to be used for the navigation. The following command line shows how to create the topic with the correct name: ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50); It is important to publish data with header, stamp, frame_id, and many more elements because, if not, the navigation stack could fail with such data: scan.header.stamp = scan_time; scan.header.frame_id = "base_link"; Other important data on header is frame_id. It must be one of the frames created in the .urdf file and must have a frame published on the tf frame transforms. The navigation stack will use this information to know the real position of the sensor and make transforms such as the one between the data sensor and obstacles. With this template, you can use any laser although it has no driver for ROS. You only have to change the fake data with the right data from your laser. This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.