
Packt
19 Sep 2014
11 min read

Mobility

In this article by Martyn Coupland, author of the book Microsoft System Center Configuration Manager Advanced Deployments, we will explore some of these options and look at how they can help you manage mobility in your workforce. We'll cover the following topics:

- Deploying company resource profiles
- Managing roaming devices
- Integrating the Microsoft Exchange connector
- Using Windows Intune

Deploying company resource profiles

One of the improvements that shipped with the R2 release of Configuration Manager 2012 was the ability to deploy company resources such as Wi-Fi profiles, certificates, and VPN profiles. This functionality really opened up the management story for organizations that already have a big uptake of bring your own device, or that have mobility in their long-term strategy. You do not need Windows Intune to deploy company resource profiles.

Company resource profiles are really useful for extending some of the services that you provide to domain-based clients using Group Policy. Examples of this include deploying VPN and Wi-Fi profiles to domain clients using Group Policy preferences. As you cannot deploy a group policy to non-domain-joined devices, it becomes really useful to manage and deploy these via Configuration Manager.

Another great use case for company resource profiles is deploying certificates. Configuration Manager includes the functionality to allow managed clients to have certificates enrolled to them. This can include those resources that rarely or never contact the domain. This scenario is becoming more common, so it is important that we have the capability to deploy these settings to users without relying on the domain.

Managing Wi-Fi profiles with Configuration Manager

The deployment of Wi-Fi profiles in Configuration Manager is very similar to a manual setup.
The wizard provides you with the same options that you would expect to see should you configure the network manually within Windows. You can also configure a number of security settings, such as certificates for client and server authentication. You can configure the following device types with Wi-Fi profiles:

- Windows 8.1 32-bit
- Windows 8.1 64-bit
- Windows RT 8.1
- Windows Phone 8.1
- iOS 5, iOS 6, and iOS 7
- Android devices that run Version 4

Configuring a Wi-Fi network profile in Configuration Manager is a simple, wizard-driven process. First, in the Assets and Compliance workspace, expand Compliance Settings and Company Resource Access, and then click on Wi-Fi Profiles. Right-click on the node and select Create Wi-Fi Profile, or select this option from the Home tab on the ribbon.

On the General page of the wizard, provide a name for the profile. If required, you can add a description here as well. If you have exported the settings from a Windows 8.1 device, you can import them here too. Click on Next. On the Wi-Fi Profile page, you need to provide information about the network you want to connect to. Network Name is what is displayed on the users' devices, so it should be friendly for them. You also need to enter the SSID of the network. Make sure this is entered correctly, as clients will use it to attempt to connect to the network. You can also specify other settings here, as you can in Windows, such as whether to connect when the network is not broadcasting or whenever the network is in range.

Click on Next to continue to the security configuration page. Depending on the security, encryption, and Extensible Authentication Protocol (EAP) settings that you select, some items on this page of the wizard might not be available. As shown in the previous screenshot, the settings you configure here replicate those that you can configure in Windows when manually connecting to the network.
On the Advanced Settings page of the Create Wi-Fi Profile Wizard, specify any additional settings for the Wi-Fi profile. These can include the authentication mode, single sign-on options, and Federal Information Processing Standards (FIPS) compliance. If you require any proxy settings, you can also configure these on the next page, as well as providing information on which platforms should process this profile. When the profile has been created, you can then right-click on it to deploy it to a collection.

Managing certificates with Configuration Manager

Deploying a certificate profile in Configuration Manager is actually a little quicker than creating a Wi-Fi profile. However, before you move on to deploying a certificate, you need some prerequisites in your environment. First, you need to deploy the Network Device Enrollment Service (NDES), which is part of the Certificate Services functionality in Windows Server. You can find guidance on deploying NDES in the Active Directory TechNet library at http://bit.ly/1kjpgxD. You must then install and configure at least one certificate registration point in the Configuration Manager hierarchy; you can install this site system role in the central administration site or in a primary site.

In the preceding screenshot, you can see the configuration screen in the wizard to deploy the certificate enrollment point in Configuration Manager. For the URL, enter the address in the https://<FQDN>/certsrv/mscep/mscep.dll format. For the root certificate, you should browse for the certificate file of your certificate authority. If you are using certificates in Configuration Manager, this will be the same certificate that you imported in the Client Communication tab in Site Settings.
When this is configured on the server that runs the NDES, log on as a domain administrator and copy the following files from the <ConfigMgrInstallationMedia>\SMSSETUP\POLICYMODULE\X64 folder on the Configuration Manager installation media to a folder on your server:

- PolicyModule.msi
- PolicyModuleSetup.exe

On the Certificate Registration Point page, specify the URL of the certificate registration point and the virtual application name. The default virtual application name is CMCertificateRegistration. For example, if the site system server has an FQDN of scep1.contoso.com and you used the default virtual application name, specify https://scep1.contoso.com/CMCertificateRegistration.

Creating certificate profiles

Click on Certificate Profiles in the Assets and Compliance workspace under the Compliance Settings folder. On the General page, provide the name and description of the profile, and then provide information about the type of certificate that you want to deploy. Select the trusted CA certificate profile type if you want to deploy a trusted root certification authority (CA) or intermediate CA certificate; for example, you might want to deploy your own internal CA certificate to your own workgroup devices managed by Configuration Manager. Select the SCEP certificate profile type if you want to request a certificate for a user or device using the Simple Certificate Enrollment Protocol and the Network Device Enrollment Service role service.

You will be presented with different settings depending on the option that you specify. If you select SCEP, then you will be asked about the number of retries and storage information about the TPM. You can find specific information about each of the settings in the TechNet library at http://bit.ly/1n5CtZF.
Configuring a trusted CA certificate is much simpler; provide the certificate settings and the destination store, as shown in the following screenshot. When you have finished configuring your certificate profile, select the supported platforms for the profile and continue through the wizard to create it. When it has been created, you can right-click on the profile to deploy it to a collection.

Managing VPN profiles with Configuration Manager

At a high level, the process to create VPN profiles is the same as creating Wi-Fi profiles; no prerequisites, such as deploying certificates, are required. Click on VPN Profiles in the Assets and Compliance workspace under the Compliance Settings folder. Create a new VPN profile, and on the initial screen, provide simple information about the profile. The following table provides an overview of which connection types are supported on which devices:

Connection type               | iOS | Windows 8.1 | Windows RT | Windows RT 8.1 | Windows Phone 8.1
Cisco AnyConnect              | Yes | No          | No         | No             | No
Juniper Pulse                 | Yes | Yes         | No         | Yes            | Yes
F5 Edge Client                | Yes | Yes         | No         | Yes            | Yes
Dell SonicWALL Mobile Connect | Yes | Yes         | No         | Yes            | Yes
Check Point Mobile VPN        | Yes | Yes         | No         | Yes            | Yes
Microsoft SSL (SSTP)          | No  | Yes         | Yes        | Yes            | No
Microsoft Automatic           | No  | Yes         | Yes        | Yes            | No
IKEv2                         | No  | Yes         | Yes        | Yes            | Yes
PPTP                          | Yes | Yes         | Yes        | Yes            | No
L2TP                          | Yes | Yes         | Yes        | Yes            | No

Specific options will be required depending on which technology you choose from the drop-down list. Ensure that the settings are specified, and move on to the authentication method information for the profile. If you require proxy settings with your VPN profile, then specify these on the Proxy Settings page of the wizard; see the following screenshot for an example of this screen. Continue through the wizard and select the supported platforms for the profile. When the profile is created, you can right-click on it and select Deploy.
Managing Internet-based devices

We have already looked at deploying certain company resources to those clients with which we have very little connectivity on a regular basis. We can use Configuration Manager to manage these devices over the Internet, just like domain-based clients. This scenario works really well when the clients do not use VPN or DirectAccess, or when we do not deploy a remote access solution for our remote users at all. This is where we can use Configuration Manager to manage clients using Internet-based client management (IBCM).

How Internet-based client management works

We can manage Internet-based clients in Configuration Manager by deploying certain site system roles in the DMZ. By doing this, we make the management point, distribution point, and software update point Internet-facing and configure clients to connect to them while on the Internet. With these measures in place, we now have the ability to manage clients that are on the Internet, extending our management capabilities.

Functionality in Internet-based client management

In general, functionality will not be supported for Internet-based client management when it relies on network functionality that is not appropriate on a public network, or on some kind of communication with Active Directory. The following is not supported for Internet-based clients:

- Client push and software-update-based client deployment
- Automatic site assignment
- Network access protection
- Wake-On-LAN
- Operating system deployment
- Remote control
- Out-of-band management

Software distribution for users is only supported when the Internet-based management point can authenticate the user in Active Directory using Windows authentication.

Requirements for Internet-based client management

In terms of requirements, the list is fairly short, but depending on your current setup, this might take a while to set up.
The first requirement seems fairly obvious: any site system server or client must have Internet connectivity. This might mean some firewall changes, depending on your configuration. A public key infrastructure (PKI) is also required. It must be able to deploy and manage certificates for clients that are on the Internet and for Internet-based site systems. This does not mean deploying certificates over the public Internet. The following information can help you plan and deploy Internet-based client management in your environment:

- Planning for Internet-based client management (http://bit.ly/1p1qtsU)
- Planning for certificates (http://bit.ly/1kj9PFr)
- PKI certificate requirements (http://bit.ly/1hssMFM)

Using Internet-based client management

As the administrator, you have no additional concerns or requirements in terms of how you manage your clients when they are based on the Internet and are reporting to an Internet-facing management point. When you administer Internet-based clients, the only differences you will see are that they report to the Internet-facing management point and that the features listed earlier do not work. The icon for the client in the list of devices does not change; this is one of the reasons the functionality is powerful, as it gives you many of the management capabilities you already use on your on-premises devices. Many people implement DirectAccess to get around the need to set up additional Configuration Manager infrastructure and provision certificates; DirectAccess with the Manage Out functionality is a viable alternative.

Summary

In this article, we explored a number of ways in which you can manage the growing popularity of bring your own device, and also looked at how to manage mobility in your user estate.
We explored the deployment of profiles that contain settings for Wi-Fi and VPN on Windows and other devices, as well as deploying certificates via Configuration Manager.

Packt
18 Sep 2014
18 min read

Caches

In this article by Federico Razzoli, author of the book Mastering MariaDB, we will see how, in order to avoid accessing the disks, MariaDB and its storage engines use several caches that a DBA should know about.

InnoDB caches

Since InnoDB is the recommended engine for most use cases, configuring it is very important. The InnoDB buffer pool is a cache that should speed up most read and write operations, so every DBA should know how it works. The doublewrite buffer is an important mechanism that guarantees that a row is never half-written to a file. For heavy-write workloads, we may want to disable it to obtain more speed.

InnoDB pages

Tables, data, and indexes are organized in pages, both in the caches and in the files. A page is a package of data that contains one or more rows and usually some empty space. The ratio between the used space and the total size of pages is called the fill factor. By changing the page size, the fill factor changes inevitably. InnoDB tries to keep pages 15/16 full. If a page's fill factor is lower than 1/2, InnoDB merges it with another page. If the rows are written sequentially, the fill factor should be about 15/16; if the rows are written randomly, the fill factor is between 1/2 and 15/16. A low fill factor represents a memory waste. With a very high fill factor, when pages are updated and their content grows, they often need to be reorganized, which negatively affects performance.

The columns with a variable-length type (TEXT, BLOB, VARCHAR, or VARBIT) are written into separate data structures called overflow pages. Such columns are called off-page columns. They are better handled by the DYNAMIC row format, which can be used for most tables when backward compatibility is not a concern. A page never changes its size, and the size is the same for all pages. The page size, however, is configurable: it can be 4 KB, 8 KB, or 16 KB.
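As a sketch, the page size could be set in the server configuration file; the value below is purely illustrative, and note that in practice the existing InnoDB data files must match the configured page size, so this is normally chosen before the data files are created:

```
[mysqld]
# Allowed page sizes are 4 KB, 8 KB, and 16 KB (the default).
# Illustrative value only; the InnoDB data files must be built
# with the same page size.
innodb_page_size = 8k
```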
The default size is 16 KB, which is appropriate for many workloads and optimizes full table scans. However, smaller sizes can improve the performance of some OLTP workloads that involve many small insertions (because of lower memory allocation) or that run on storage devices with smaller blocks (old SSD devices). Another reason to change the page size is that it can greatly affect InnoDB compression. The page size can be changed by setting the innodb_page_size variable in the configuration file and restarting the server.

The InnoDB buffer pool

On servers that mainly use InnoDB tables (the most common case), the buffer pool is the most important cache to consider. Ideally, it should contain all the InnoDB data and indexes, so that MariaDB can execute queries without accessing the disks. Changes to data are written into the buffer pool first and flushed to the disks later, to reduce the number of I/O operations. Of course, if the data does not fit in the server's memory, only a subset of it can be in the buffer pool. In this case, that subset should be the so-called working set: the most frequently accessed data.

The default size of the buffer pool is 128 MB and should always be changed. On production servers, this value is too low. On a developer's computer, there is usually no need to dedicate so much memory to InnoDB; the minimum size, 5 MB, is usually more than enough when developing a simple application.

Old and new pages

We can think of the buffer pool as a list of data pages sorted with a variation of the classic Least Recently Used (LRU) algorithm. The list is split into two sublists: the new list contains the most used pages, and the old list contains the less used pages. The first page in each sublist is called the head, and the head of the old list is called the midpoint. When a page that is not in the buffer pool is accessed, it is inserted at the midpoint. The other pages in the old list shift by one position, and the last one is evicted.
When a page from the old list is accessed, it is moved to the head of the new list. When a page in the new list is accessed, it goes to the head of that list. The following variables affect the previously described algorithm:

- innodb_old_blocks_pct: This variable defines the percentage of the buffer pool reserved for the old list. The allowed range is 5 to 95, and it is 37 (about 3/8) by default.
- innodb_old_blocks_time: If this value is not 0, it represents the minimum age (in milliseconds) that old pages must reach before they can be moved into the new list. If an old page that has not reached this age is accessed, it goes to the head of the old list.
- innodb_max_dirty_pages_pct: This variable defines the maximum percentage of pages that have been modified in memory. This mechanism will be discussed in the Dirty pages section later in this article. This value is not a hard limit, but InnoDB tries not to exceed it. The allowed range is 0 to 100, and the default is 75. Increasing this value can reduce the rate of writes, but shutdown will take longer (because dirty pages need to be written to disk before the server can be stopped in a clean way).
- innodb_flush_neighbors: If set to 1, when a dirty page is flushed from memory to disk, the contiguous pages are flushed too. If set to 2, all dirty pages from the same extent (a portion of memory whose size is 1 MB) are flushed. With 0, dirty pages are only flushed when their number exceeds innodb_max_dirty_pages_pct or when they are evicted from the buffer pool. The default is 1. This optimization is only useful for spinning disks. Write-intensive workloads may need an aggressive flushing strategy; however, if pages are written too often, performance degrades.

Buffer pool instances

On MariaDB versions older than 5.5, InnoDB creates only one instance of the buffer pool. However, concurrent threads are blocked by a mutex, and this may become a bottleneck.
This is particularly true if the concurrency level is high and the buffer pool is very big. Splitting the buffer pool into multiple instances can solve the problem. Multiple instances represent an advantage only if the buffer pool size is at least 2 GB; each instance should be of size 1 GB. InnoDB will ignore the configuration and maintain only one instance if the buffer pool size is less than 1 GB. Furthermore, this feature is more useful on 64-bit systems. The following variables control the instances and their size:

- innodb_buffer_pool_size: This variable defines the total size of the buffer pool (not of the single instances). Note that the real size will be about 10 percent bigger than this value. A percentage of this amount of memory is dedicated to the change buffer.
- innodb_buffer_pool_instances: This variable defines the number of instances. If the value is -1, InnoDB will automatically decide the number of instances. The maximum value is 64. The default value is 8 on Unix, and on Windows it depends on the innodb_buffer_pool_size variable.

Dirty pages

When a user executes a statement that modifies data in the buffer pool, InnoDB initially modifies the data in memory only. The pages that have been modified only in the buffer pool are called dirty pages. Pages that have not been modified, or whose changes have already been written to disk, are called clean pages.

Note that changes to data are also written to the redo log. If a crash occurs before those changes are applied to the data files, InnoDB is usually able to recover the data, including the last modifications, by reading the redo log and the doublewrite buffer. The doublewrite buffer will be discussed later, in the Explaining the doublewrite buffer section.

At some point, the data needs to be flushed to the InnoDB data files (the .ibd files). In MariaDB 10.0, this is done by a dedicated thread called the page cleaner.
In older versions, this was done by the master thread, which executes several InnoDB maintenance operations. The flushing is not only concerned with the buffer pool, but also with the InnoDB redo and undo logs. The list of dirty pages is frequently updated when transactions write data at the physical level. It has its own mutex, which does not lock the whole buffer pool.

The maximum number of dirty pages is determined by innodb_max_dirty_pages_pct as a percentage. When this limit is reached, dirty pages are flushed. The innodb_flush_neighbor_pages value determines how InnoDB selects the pages to flush. If it is set to none, only the selected pages are written. If it is set to area, the neighboring dirty pages are written too. If it is set to cont, all contiguous blocks of dirty pages are flushed.

On shutdown, a complete page flush is only done if innodb_fast_shutdown is 0. Normally, this method should be preferred, because it leaves the data in a consistent state. However, if many changes have been requested but not yet written to disk, this process could be very slow. It is possible to speed up the shutdown by specifying a higher value for innodb_fast_shutdown; in this case, a crash recovery will be performed on the next restart.

The read ahead optimization

The read ahead feature is designed to reduce the number of read operations from the disks. It tries to guess which data will be needed in the near future and reads it with one operation. Two algorithms are available to choose the pages to read in advance:

- linear read ahead
- random read ahead

The linear read ahead is used by default. It counts the pages in the buffer pool that are read sequentially. If their number is greater than or equal to innodb_read_ahead_threshold, InnoDB will read all the data from the same extent (a portion of data whose size is always 1 MB). The innodb_read_ahead_threshold value must be a number from 0 to 64.
The value 0 disables the linear read ahead but does not enable the random read ahead. The default value is 56. The random read ahead is only used if the innodb_random_read_ahead server variable is set to ON; by default, it is set to OFF. This algorithm checks whether at least 13 pages from the same extent are in the buffer pool; in this case, it does not matter whether they were read sequentially. With this variable enabled, the full extent will be read. The 13-page threshold is not configurable. If innodb_read_ahead_threshold is set to 0 and innodb_random_read_ahead is set to OFF, the read ahead optimization is completely turned off.

Diagnosing the buffer pool performance

MariaDB provides some tools to monitor the activities of the buffer pool and the InnoDB main thread. By inspecting these activities, a DBA can tune the relevant server variables to improve performance. In this section, we will discuss the SHOW ENGINE INNODB STATUS SQL statement and the INNODB_BUFFER_POOL_STATS table in the information_schema database. While the latter provides more information about the buffer pool, the SHOW ENGINE INNODB STATUS output is easier to read. The INNODB_BUFFER_POOL_STATS table contains the following columns:

Column name | Description
POOL_ID | Each InnoDB buffer pool instance has a different ID.
POOL_SIZE | Size (in pages) of the instance.
FREE_BUFFERS | Number of free pages.
DATABASE_PAGES | Total number of data pages.
OLD_DATABASE_PAGES | Pages in the old list.
MODIFIED_DATABASE_PAGES | Dirty pages.
PENDING_DECOMPRESS | Number of pages that need to be decompressed.
PENDING_READS | Pending read operations.
PENDING_FLUSH_LRU | Pages in the old or new lists that need to be flushed.
PENDING_FLUSH_LIST | Pages in the flush list that need to be flushed.
PAGES_MADE_YOUNG | Number of pages moved into the new list.
PAGES_NOT_MADE_YOUNG | Old pages that did not become young.
PAGES_MADE_YOUNG_RATE | Pages made young per second. This value is reset each time it is shown.
PAGES_NOT_MADE_YOUNG_RATE | Pages read but not made young (because they did not reach the minimum age) per second. This value is reset each time it is shown.
NUMBER_PAGES_READ | Number of pages read from disk.
NUMBER_PAGES_CREATED | Number of pages created in the buffer pool.
NUMBER_PAGES_WRITTEN | Number of pages written to disk.
PAGES_READ_RATE | Pages read from disk per second.
PAGES_CREATE_RATE | Pages created in the buffer pool per second.
PAGES_WRITTEN_RATE | Pages written to disk per second.
NUMBER_PAGES_GET | Requests for pages that are not in the buffer pool.
HIT_RATE | Rate of page hits.
YOUNG_MAKE_PER_THOUSAND_GETS | Pages made young per thousand physical reads.
NOT_YOUNG_MAKE_PER_THOUSAND_GETS | Pages that remain in the old list per thousand reads.
NUMBER_PAGES_READ_AHEAD | Number of pages read with a read ahead operation.
NUMBER_READ_AHEAD_EVICTED | Number of pages read with a read ahead operation that were never used and were then evicted.
READ_AHEAD_RATE | Similar to NUMBER_PAGES_READ_AHEAD, but as a per-second rate.
READ_AHEAD_EVICTED_RATE | Similar to NUMBER_READ_AHEAD_EVICTED, but as a per-second rate.
LRU_IO_TOTAL | Total number of pages read from or written to disk.
LRU_IO_CURRENT | Pages read from or written to disk within the last second.
UNCOMPRESS_TOTAL | Pages that have been uncompressed.
UNCOMPRESS_CURRENT | Pages that have been uncompressed within the last second.

The per-second values are reset after they are shown. The PAGES_MADE_YOUNG_RATE and PAGES_NOT_MADE_YOUNG_RATE values show us, respectively, how often old pages become new and how many old pages are never accessed in a reasonable amount of time. If the former value is too high, the old list is probably not big enough, and vice versa. Comparing READ_AHEAD_RATE and READ_AHEAD_EVICTED_RATE is useful to tune the read ahead feature. The READ_AHEAD_EVICTED_RATE value should be low, because it indicates how many pages read with the read ahead operations were not useful.
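For instance, these counters can be read with an ordinary query against a running server; a minimal sketch (column selection is just an example):

```sql
-- Inspect hit rate and read ahead effectiveness per buffer pool instance.
SELECT POOL_ID,
       HIT_RATE,
       PAGES_MADE_YOUNG_RATE,
       READ_AHEAD_RATE,
       READ_AHEAD_EVICTED_RATE
FROM information_schema.INNODB_BUFFER_POOL_STATS;
```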
If their ratio is good but READ_AHEAD_RATE is low, the read ahead should probably be used more often. In this case, if the linear read ahead is used, we can try to increase or decrease innodb_read_ahead_threshold, or we can change the algorithm used (linear or random read ahead). The columns whose names end with _RATE better describe the current server activities. They should be examined several times a day, over a whole week or month, perhaps with the help of one or more monitoring tools. Good, free software monitoring tools include Cacti and Nagios; the Percona Monitoring Tools package includes MariaDB (and MySQL) plugins that provide an interface to these tools.

Dumping and loading the buffer pool

In some cases, one may want to save the current contents of the buffer pool and reload them later. The most common case is when the server is stopped. Normally, on startup, the buffer pool is empty, and InnoDB needs to fill it with useful data. This process is called warm-up. Until the warm-up is complete, InnoDB performance is lower than usual.

Two variables help avoid the warm-up phase: innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup. If their value is ON, InnoDB automatically saves the buffer pool into a file at shutdown and restores it at startup. Their default value is OFF. Turning them ON can be very useful, but remember the caveats:

- The startup and shutdown times might be longer. In some cases, we might prefer MariaDB to start more quickly, even if it is slower during warm-up.
- We need the disk space necessary to store the buffer pool.

The user may also want to dump the buffer pool at any moment and restore it without restarting the server. This is advisable when the buffer pool is optimal and some statements are going to heavily change its contents. A common example is when a big InnoDB table is fully scanned; this happens, for example, during logical backups.
A full table scan will fill the old list with non-frequently accessed data. A good way to solve the problem is to dump the buffer pool before the table scan and reload it later. This operation can be performed by setting two special variables: innodb_buffer_pool_dump_now and innodb_buffer_pool_load_now. Reading the values of these variables always returns OFF. Setting the first variable to ON forces InnoDB to immediately dump the buffer pool into a file; setting the latter to ON forces InnoDB to load the buffer pool from that file. In both cases, the progress of the dump or load operation is indicated by the Innodb_buffer_pool_dump_status and Innodb_buffer_pool_load_status status variables. If loading the buffer pool takes too long, it is possible to stop it by setting innodb_buffer_pool_load_abort to ON.

The name and path of the dump file are specified in the innodb_buffer_pool_filename server variable. Of course, we should make sure that the chosen directory can contain the file, but the file is much smaller than the memory used by the buffer pool.

InnoDB change buffer

The change buffer is a cache that is part of the buffer pool. It contains dirty pages related to secondary indexes (not primary keys) that are not stored in the main part of the buffer pool. If the modified data is read later, it will be merged into the buffer pool. In older versions, this buffer was called the insert buffer, but it has been renamed because it can now handle deletions too. The change buffer speeds up the following write operations:

- Insertions: When new rows are written.
- Deletions: When existing rows are marked for deletion but not yet physically erased, for performance reasons.
- Purges: The physical elimination of previously marked rows and obsolete index values. This is periodically done by a dedicated thread.

In some cases, we may want to disable the change buffer. For example, we may have a working set that only fits in memory if the change buffer is discarded.
In this case, even after disabling it, we will still have all the frequently accessed secondary indexes in the buffer pool. Also, DML statements may be rare for our database, or we may have just a few secondary indexes; in these cases, the change buffer does not help. The change buffer can be configured using the following variables:

- innodb_change_buffer_max_size: This is the maximum size of the change buffer, expressed as a percentage of the buffer pool. The allowed range is 0 to 50, and the default value is 25.
- innodb_change_buffering: This determines which types of operations are cached by the change buffer. The allowed values are none (to disable the buffer), all, inserts, deletes, purges, and changes (to cache inserts and deletes, but not purges). The all value is the default.

Explaining the doublewrite buffer

When InnoDB writes a page to disk, at least two events can interrupt the operation after it has started: a hardware failure or an OS failure. In the case of an OS failure, this should not be possible if the pages are not bigger than the blocks written by the system. The InnoDB redo and undo logs are not sufficient to recover a half-written page, because they only contain page IDs, not page data; this improves performance.

To avoid half-written pages, InnoDB uses the doublewrite buffer. This mechanism involves writing every page twice; a page is valid only after the second write is complete. When the server restarts, if a recovery occurs, half-written pages are discarded. The doublewrite buffer has a small impact on performance, because the writes are sequential and are flushed to disk together. However, it is still possible to disable the doublewrite buffer by setting the innodb_doublewrite variable to OFF in the configuration file or by starting the server with the --skip-innodb-doublewrite parameter. This can be done if data correctness is not important.
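As a sketch, and only for data we can afford to lose, disabling the doublewrite buffer in the configuration file looks like this:

```
[mysqld]
# Disables the doublewrite buffer; half-written pages become possible
# after a crash, so only use this when data correctness does not matter.
innodb_doublewrite = OFF
```

Starting the server with the --skip-innodb-doublewrite parameter has the same effect.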
If performance is very important and we use a fast storage device, we may notice the overhead caused by the additional disk writes. But if data correctness is important to us, we do not want to simply disable the doublewrite buffer. MariaDB provides an alternative mechanism called atomic writes. These writes are like a transaction: they either completely succeed or completely fail, so half-written data is not possible. However, MariaDB does not implement this mechanism directly, so it can only be used on FusionIO storage devices using the DirectFS filesystem. FusionIO flash memories are very fast flash memories that can be used as block storage or DRAM memory. To enable this alternative mechanism, we can set innodb_use_atomic_writes to ON. This automatically disables the doublewrite buffer.

Summary

In this article, we discussed the main MariaDB buffers. The most important ones are the caches used by the storage engines. We dedicated the most space to the InnoDB buffer pool, because it is more complex and, usually, InnoDB is the most used storage engine.

Resources for Article:

Further resources on this subject:
Building a Web Application with PHP and MariaDB – Introduction to caching [article]
Installing MariaDB on Windows and Mac OS X [article]
Using SHOW EXPLAIN with running queries [article]
Waiting for AJAX, as always…

Packt
18 Sep 2014
6 min read
In this article, by Dima Kovalenko, author of the book, Selenium Design Patterns and Best Practices, we will learn how test automation has progressed over time. Test automation was simpler in the good old days, before asynchronous page loading became mainstream. Previously, a test would click on a button, causing the whole page to reload; after the new page loaded, we could check whether any errors were displayed. The act of waiting for the page to load guaranteed that all of the items on the page were already there, and if an expected element was missing, our test could fail with confidence. Now, an element might be missing for several seconds, and magically show up after an unspecified delay. The only thing for a test to do is become smarter!

(For more resources related to this topic, see here.)

Filling out credit card information is a common test for any online store. Let's take a look at a typical credit card form:

Our form has some default values for the user to fill out, and a quick JavaScript check that the required information was entered into the field, adding Done next to a filled-out input field, like this:

Once all of the fields have been filled out and seem correct, JavaScript makes the Purchase button clickable. Clicking on the button will trigger an AJAX request for the purchase, followed by a successful purchase message, like this:

Very simple and straightforward; anyone who has made an online purchase has seen some variation of this form. Writing a quick test to fill out the form and make sure the purchase is complete should be a breeze!

Testing AJAX with the sleep method

Let's take a look at a simple test written to test this form. Our tests are written in Ruby for this demonstration, for ease of readability. However, this technique will work in Java or any other programming language you may choose to use. To follow along with this article, please make sure you have Ruby and the selenium-webdriver gem installed.
Installers for both can be found at https://www.ruby-lang.org/en/installation/ and http://rubygems.org/gems/selenium-webdriver. Our test file starts like this:

If this code looks like a foreign language to you, don't worry; we will walk through it until it all makes sense. The first three lines of the test file specify all of the dependencies, such as the selenium-webdriver gem. On line five, we declare our test class as TestAjax, which inherits its behavior from the Test::Unit framework we required on line two. The setup and teardown methods will take care of the Selenium instance for us. In setup, we create a new instance of the Firefox browser and navigate to the page that contains the mentioned form; the teardown method closes the browser after the test is complete. Now let's look at the test itself:

Lines 17 to 21 fill out the purchase form with some test data, followed by an assertion that the Purchase complete! text appears in the DIV with the ID of success. Let's run this test to see if it passes. The following is the output of our test run; as you can see, it's a failure:

Our test fails because it was expecting to see Purchase complete! right here:

But no text was found, because the AJAX request took a much longer time than expected. The AJAX request in progress indicator is seen here:

Since this AJAX request can take anywhere from 15 to 30 seconds to complete, the most logical next step is to add a pause between the click on the Purchase button and the test assertion, shown as follows:

However, this obvious solution is bad for two reasons:

If the majority of AJAX requests take 15 seconds to run, then our test wastes another 15 seconds waiting instead of continuing.
If our test environment is under heavy load, the AJAX request can take as long as 45 seconds to complete, so our test will still fail.

The better choice is to make our tests smart enough to wait for the AJAX request to complete, instead of using a sleep method.
Using smart AJAX waits

To solve the shortcomings of the sleep method, we will create a new method called wait_for_ajax, seen here:

In this method, we use the Wait class built into WebDriver. The until method in the Wait class allows us to pause the test execution for an arbitrary reason; in this case, to sleep for 1 second, on line 29, and to execute a JavaScript command in the browser with the help of the execute_script method. This method allows us to run a JavaScript snippet in the current browser window on the current page, which gives us access to all of the variables and methods that JavaScript has. The snippet of JavaScript that we are sending to the browser is a query against the jQuery framework. The active method in jQuery returns the number of currently active AJAX requests. Zero means that the page is fully loaded, and there are no background HTTP requests happening. On line 30, we ask execute_script to return the current count of active AJAX requests happening on the page, and if the returned value equals 0, we break out of the Wait loop. Once the loop is broken, our tests can continue on their way. Note that the upper limit of the wait_for_ajax method is set to 60 seconds on line 28. This value can be increased or decreased, depending on how slow the test environment is. Let's replace the sleep method call with our newly created method, shown here:

And run our tests one more time, to see this passing result:

Now that we have stabilized our test against slow and unpredictable AJAX requests, we need to add a method that will wait for JavaScript animations to finish. These animations can break our tests just as much as the AJAX requests can. Also, our tests are incredibly vulnerable to third-party slowness, such as when the Facebook Like button takes a long time to load.
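The polling pattern behind wait_for_ajax is language-agnostic. Here is a plain-Python sketch of the same idea; the names (wait_for, active_ajax_count, FakePage) are illustrative stand-ins, not Selenium's API, and the fake page simulates jQuery's active counter draining to zero:

```python
import time

def wait_for(condition, timeout=60, interval=1):
    """Poll `condition` every `interval` seconds until it returns a truthy
    value, or raise TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError("condition not met within %s seconds" % timeout)

class FakePage:
    """Simulated page whose number of active AJAX requests drains over time,
    standing in for `return jQuery.active` run via execute_script."""
    def __init__(self, active=3):
        self.active = active

    def active_ajax_count(self):
        self.active = max(0, self.active - 1)
        return self.active

page = FakePage()
wait_for(lambda: page.active_ajax_count() == 0, timeout=10, interval=0.01)
print(page.active)  # 0
```

In a real Selenium test, the lambda would call execute_script to read jQuery.active in the browser instead of querying a fake object.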
Summary

This article introduced a simple method that intelligently waits for all of the AJAX requests to complete. By using it, we have increased the overall stability of our test and test suite, and we have removed a wasteful, unnecessary delay from our test execution. In conclusion, we have improved test stability while at the same time making our tests run faster!

Resources for Article:

Further resources on this subject:
Quick Start into Selenium Tests [article]
Behavior-driven Development with Selenium WebDriver [article]
Exploring Advanced Interactions of WebDriver [article]
Redis in Autosuggest

Packt
18 Sep 2014
8 min read
In this article by Arun Chinnachamy, the author of Redis Applied Design Patterns, we are going to see how to use Redis to build a basic autocomplete or autosuggest server. Also, we will see how to build a faceting engine using Redis. To build such a system, we will use sorted sets and operations involving ranges and intersections. To summarize, we will focus on the following topics in this article:

(For more resources related to this topic, see here.)

Autocompletion for words
Multiword autosuggestion using a sorted set
Faceted search using sets and operations such as union and intersection

Autosuggest systems

These days, autosuggest is seen in virtually all e-commerce stores, in addition to a host of other websites. Almost all websites are utilizing this functionality in one way or another, from a basic website search to programming IDEs. The ease of use afforded by autosuggest has led every major website, from Google and Amazon to Wikipedia, to use this feature to make it easier for users to navigate to where they want to go. The primary metric for any autosuggest system is how fast we can respond with suggestions to a user's query. Usability research studies have found that the response time should be under a second to ensure that a user's attention and flow of thought are preserved. Redis is ideally suited for this task, as it is one of the fastest data stores in the market right now. Let's see how to design such a structure and use Redis to build an autosuggest engine. We can tweak Redis to suit individual use case scenarios, ranging from the simple to the complex. For instance, if we only want to autocomplete a word, we can enable this functionality by using a sorted set. Let's see how to perform single word completion, and then we will move on to more complex scenarios, such as phrase completion.

Word completion in Redis

In this section, we want to provide a simple word completion feature through Redis. We will use a sorted set for this exercise.
The reason behind using a sorted set is that it always guarantees O(log(N)) operations. While it is commonly known that in a sorted set, elements are arranged based on their score, what is not widely acknowledged is that elements with the same score are arranged lexicographically. This is going to form the basis of our word completion feature. Let's look at a scenario in which we have the following words to autocomplete: jack, smith, scott, jacob, and jackeline.

In order to complete a word, we need to use its n-grams. An n-gram is a contiguous sequence of n items from a given sequence of text or speech. To find out more, check http://en.wikipedia.org/wiki/N-gram. For example, the n-grams of jack are as follows:

j
ja
jac
jack$

In order to signify the completed word, we can use a delimiter such as * or $. To add the word into the sorted set, we will be using ZADD in the following way:

> zadd autocomplete 0 j
> zadd autocomplete 0 ja
> zadd autocomplete 0 jac
> zadd autocomplete 0 jack$

Likewise, we need to add all the words we want to index for autocompletion. Once we are done, our sorted set will look as follows:

> zrange autocomplete 0 -1
1) "j"
2) "ja"
3) "jac"
4) "jack$"
5) "jacke"
6) "jackel"
7) "jackeli"
8) "jackelin"
9) "jackeline$"
10) "jaco"
11) "jacob$"
12) "s"
13) "sc"
14) "sco"
15) "scot"
16) "scott$"
17) "sm"
18) "smi"
19) "smit"
20) "smith$"

Now, we will use the ZRANK and ZRANGE operations over the sorted set to achieve our desired functionality.
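Before looking at the Redis commands, the ZRANK + ZRANGE logic can be simulated in plain Python with a sorted list and binary search. This is only a sketch of the algorithm (in production, the sorted set lives in Redis), and the function names are illustrative:

```python
import bisect

def build_index(words):
    """Mimic the ZADD calls above: every proper prefix of a word plus the
    '$'-terminated full word, all stored with the same (zero) score."""
    entries = set()
    for word in words:
        for i in range(1, len(word)):
            entries.add(word[:i])
        entries.add(word + "$")
    return sorted(entries)  # lexicographic order, as in the sorted set

def complete(index, prefix, limit=50):
    """The ZRANK + ZRANGE idea: start just after `prefix` and collect the
    '$'-terminated entries that still begin with `prefix`."""
    start = bisect.bisect_right(index, prefix)   # rank of the prefix, plus one
    results = []
    for entry in index[start:start + limit]:
        if not entry.startswith(prefix):
            break                                # left the prefix range
        if entry.endswith("$"):
            results.append(entry[:-1])           # strip the terminator
    return results

index = build_index(["jack", "smith", "scott", "jacob", "jackeline"])
print(complete(index, "jac"))   # ['jack', 'jackeline', 'jacob']
print(complete(index, "smi"))   # ['smith']
```

Note that '$' sorts before the letters, which is why jack$ appears before jacke both here and in the Redis listing above.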
To autocomplete for jac, we have to execute the following commands:

> zrank autocomplete jac
2
> zrange autocomplete 3 50
1) "jack$"
2) "jacke"
3) "jackel"
4) "jackeli"
5) "jackelin"
6) "jackeline$"
7) "jaco"
8) "jacob$"
9) "s"
10) "sc"
11) "sco"
12) "scot"
13) "scott$"
14) "sm"
15) "smi"
16) "smit"
17) "smith$"

Another example, completing smi, is as follows:

> zrank autocomplete smi
17
> zrange autocomplete 18 50
1) "smit"
2) "smith$"

Now, in our program, we have to do the following tasks:

Iterate through the result set.
Check if the word starts with the query, and only use the words with $ as the last character.

Though it looks like a lot of operations are performed, both ZRANGE and ZRANK are O(log(N)) operations. Therefore, there should be virtually no problem in handling a huge list of words. When it comes to memory usage, we will have n+1 elements for every word, where n is the number of characters in the word. For M words, we will have M(avg(n) + 1) records, where avg(n) is the average number of characters in a word. The more the characters in our words collide, the lower the memory usage. In order to conserve memory, we can use the EXPIRE command to expire unused long-tail autocomplete terms.

Multiword phrase completion

In the previous section, we saw how to use autocomplete for a single word. However, in most real world scenarios, we will have to deal with multiword phrases. This is much more difficult to achieve, as there are a few inherent challenges involved:

Suggesting a phrase for all matching words. For instance, the same manufacturer has a lot of models available. We have to ensure that we list all models if a user decides to search for a manufacturer by name.
Ordering the results based on overall popularity and relevance of the match, instead of ordering lexicographically.

The following screenshot shows the typical autosuggest box, which you find in popular e-commerce portals.
This feature improves the user experience and also reduces spelling errors:

For this case, we will use a sorted set along with hashes. We will use a sorted set to store the n-grams of the indexed data, followed by getting the complete title from hashes. Instead of storing the n-grams in the same sorted set, we will store them in different sorted sets. Let's look at the following scenario, in which we have model names of mobile phones along with their popularity:

For this set, we will create multiple sorted sets. Let's take Apple iPhone 5S:

ZADD a 9 apple_iphone_5s
ZADD ap 9 apple_iphone_5s
ZADD app 9 apple_iphone_5s
ZADD apple 9 apple_iphone_5s
ZADD i 9 apple_iphone_5s
ZADD ip 9 apple_iphone_5s
ZADD iph 9 apple_iphone_5s
ZADD ipho 9 apple_iphone_5s
ZADD iphon 9 apple_iphone_5s
ZADD iphone 9 apple_iphone_5s
ZADD 5 9 apple_iphone_5s
ZADD 5s 9 apple_iphone_5s
HSET titles apple_iphone_5s "Apple iPhone 5S"

In the preceding scenario, we have added every n-gram value as a sorted set and created a hash that holds the original title. Likewise, we have to add all the titles into our index.

Searching in the index

Now that we have indexed the titles, we are ready to perform a search. Consider a situation where a user is querying with the term apple. We want to show the user the five best suggestions based on the popularity of the product. Here's how we can achieve this:

> zrevrange apple 0 4 withscores
1) "apple_iphone_5s"
2) 9.0
3) "apple_iphone_5c"
4) 6.0

As the elements inside the sorted set are ordered by the element score, we get the matches ordered by the popularity we inserted. To get the original title, type the following command:

> hmget titles apple_iphone_5s
1) "Apple iPhone 5S"

In the preceding scenario, the query was a single word. Now imagine the user types multiple words, such as Samsung nex, and we have to suggest the autocomplete as Samsung Galaxy Nexus.
To achieve this, we will use ZINTERSTORE, as follows:

> zinterstore samsung_nex 2 samsung nex aggregate max

The syntax of the command is:

ZINTERSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX]

It computes the intersection of the sorted sets given by the specified keys and stores the result in destination. It is mandatory to provide the number of input keys before passing the input keys and the other (optional) arguments. For more information about ZINTERSTORE, visit http://redis.io/commands/ZINTERSTORE.

The previous command, zinterstore samsung_nex 2 samsung nex aggregate max, computes the intersection of the two sorted sets, samsung and nex, and stores the result in another sorted set, samsung_nex. To see the result, type the following commands:

> zrevrange samsung_nex 0 4 withscores
1) samsung_galaxy_nexus
2) 7
> hmget titles samsung_galaxy_nexus
1) Samsung Galaxy Nexus

If you want to cache the result for multiword queries and remove it automatically, use the EXPIRE command and set an expiry for the temporary keys.

Summary

In this article, we have seen how to perform autosuggest and faceted searches using Redis. We have also understood how sorted sets and sets work, and how Redis can be used as the backend for a simple faceting and autosuggest system, making the system ultrafast.

Further resources on this subject:
Using Redis in a hostile environment (Advanced) [Article]
Building Applications with Spring Data Redis [Article]
Implementing persistence in Redis (Intermediate) [Article]
Backups in the VMware View Infrastructure

Packt
17 Sep 2014
18 min read
In this article, by Chuck Mills and Ryan Cartwright, authors of the book VMware Horizon 6 Desktop Virtualization Solutions, we will study the backup options available in VMware. It also provides guidance on scheduling appropriate backups of a Horizon View environment.

(For more resources related to this topic, see here.)

While a single point of failure should not exist in the VMware View environment, it is still important to ensure regular backups are taken for a quick recovery when failures occur. Also, if a setting becomes corrupted or is changed, a backup could be used to restore to a previous point in time. The backup of the VMware View environment should be performed on a regular basis, in line with an organization's existing backup methodology. A VMware View environment contains both files and databases. The main backup points of a VMware View environment are as follows:

VMware View Connection Server (ADAM database)
VMware View Security Server
VMware View Composer database
Remote Desktop Service host servers
Remote Desktop Service host templates and virtual machines
Virtual desktop templates and parent VMs
Virtual desktops
Linked clones (stateless)
Full clones (stateful)
ThinApp repository
Persona Management
VMware vCenter
Restoring the VMware View environment
Business Continuity and Disaster Recovery

With a backup of all of the preceding components, the VMware View Server infrastructure can be recovered during a time of failure. To maximize the chances of success in a recovery, it is advised to take backups of the View ADAM database, View Composer, and vCenter database at the same time to avoid discrepancies. Backups can be scheduled and automated or executed manually; ideally, scheduled backups will be used to ensure that they are performed and completed regularly. Proper design dictates that there should always be two or more View Connection Servers.
As all View Connection Servers in the same replica pool contain the same configuration data, it is only necessary to back up one View Connection Server. This backup is typically configured on the first View Connection Server installed in standard mode in an environment.

VMware View Connection Server – ADAM database backup

The View Connection Server stores its configuration data in the View LDAP repository. View Composer stores the configuration data for linked clone desktops in the View Composer database. When you use View Administrator to perform backups, the Connection Server backs up the View LDAP configuration data and the View Composer database, and both sets of backup files are stored in the same location. The LDAP data is exported in LDAP Data Interchange Format (LDIF). If you have multiple View Connection Servers in a replicated group, you only need to export data from one of the instances, as all replicated instances contain the same configuration data.

It is not good practice to rely on replicated instances of View Connection Server as your backup mechanism. When the Connection Server synchronizes data across the instances of Connection Server, any data lost on one instance might be lost in all the members of the group. If the View Connection Server uses multiple vCenter Server instances and multiple View Composer services, the View Connection Server will back up all the View Composer databases associated with those vCenter Server instances.

View Connection Server backups are configured from the VMware View Admin console. The backups dump the configuration files and the database information to a location on the View Connection Server. The data must then be backed up through normal mechanisms, such as a backup agent and a scheduled job. The procedure for a View Connection Server backup is as follows:

1. Schedule VMware View backup runs and exports to C:\View_Backup.
2. Use your third-party backup solution on the View Connection Server and have it back up the System State, Program Files, and the C:\View_Backup folder created in step 1.

From within the View Admin console, there are three primary options that must be configured to back up the View Connection Server settings:

Automatic backup frequency: This is the frequency at which backups are automatically taken. Recommendation: every day. As most server backups are performed daily, if the automatic View Connection Server backup is taken before the full backup of the Windows server, it will be included in the nightly backup. Adjust as necessary.
Backup time: This displays the time based on the automatic backup frequency. (Every day produces the 12 midnight time.)
Maximum number of backups: This is the maximum number of backups that can be stored on the View Connection Server; once the maximum number has been reached, backups are rotated out based on age, with the oldest backup being replaced by the newest. Recommendation: 30. This ensures that approximately one month of backups is retained on the server. Adjust as necessary.
Folder location: This is the location on the View Connection Server where the backups will be stored. Ensure that the third-party backup solution is backing up this location.

The following screenshot shows the Backup tab:

Performing a manual backup of the View database

Use the following steps to perform a manual backup of your View database:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (on the left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning, as shown in the following screenshot:

Continue to disable provisioning for each of the pools. This will ensure that no new information is added to the ADAM database.
After you disable provisioning for all the pools, there are two ways to perform the backup:

The View Administrator console
Running a command at the command prompt

The View Administrator console

Follow these steps to perform a backup:

1. Log in to the View Administrator console.
2. Expand View Configuration, found under Inventory.
3. Select Servers, which displays all the servers found in your environment.
4. Select the Connection Servers tab.
5. Right-click on one of the Connection Servers and choose Backup Now, as shown in the following screenshot:

After the backup process is complete, enable provisioning for the pools.

Using the command prompt

You can export the ADAM database by executing a built-in export tool at the command prompt. Perform the following steps:

1. Connect directly to the View Connection Server with a remote desktop utility such as RDP.
2. Open a command prompt and use the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin.
3. Execute the vdmexport.exe command and use the -f option to specify a location and filename, as shown in the following screenshot (in this example, C:\View_Backup is the location and vdmBackup.ldf is the filename):

Once a backup has been either automatically run or manually executed, there will be two types of files saved in the backup location:

LDF files: These are the LDIF exports from the VMware View Connection Server ADAM database and store the configuration settings of the VMware View environment.
SVI files: These are the backups of the VMware View Composer database.

The backup process for the View Connection Server is fairly straightforward. While the process is easy, it should not be overlooked.

Security Server considerations

Surprisingly, there is no option to back up the VMware View Security Server via the VMware View Admin console. For View Connection Servers, backup is configured by selecting the server, selecting Edit, and then clicking on Backup. Highlighting the View Security Server provides no such functionality.
Instead, the Security Server should be backed up via normal third-party mechanisms. The installation directory is of primary concern, which is C:\Program Files\VMware\VMware View\Server by default. The .config file is in the …\sslgateway\conf directory and includes the following settings:

pcoipClientIPAddress: This is the public address used by the Security Server.
pcoipClientUDPPort: This is the port used for UDP traffic (the default is 4172).

In addition, the settings file is located in this directory, which includes settings such as the following:

maxConnections: This is the maximum number of concurrent connections the View Security Server can have at one time (the default is 2000).
serverID: This is the hostname used by the Security Server.

In addition, custom certificates and logfiles are stored within the installation directory of the VMware View Security Server. Therefore, it is important to back up the data regularly if the logfile data is to be maintained (and is not being ingested into a larger enterprise logfile solution).

The View Composer database

The View Composer database used for linked clones is backed up using the following steps:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (on the left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning.
5. Connect directly to the server where View Composer was installed, using a remote desktop utility such as RDP.
6. Stop the View Composer service, as shown in the following screenshot. This will prevent provisioning requests that would change the Composer database.
7. After the service is stopped, use the standard practice for backing up databases in your current environment.
8. Restart the Composer service after the backup completes.

Remote Desktop Service host servers

VMware View 6 uses virtual machines to deliver hosted applications and desktops.
In some cases, tuning and optimization, or other customer-specific configurations to the environment or applications, may be built on the Remote Desktop Service (RDS) host. Use the Windows Server Backup tool or the backup software currently deployed in your environment.

RDS host templates and virtual machines

The virtual machine templates and virtual machines are an important part of the Horizon View infrastructure and need protection in the event that the system must be recovered. Back up the RDS host templates when changes are made and testing/validation is completed. The production RDS host machines should be backed up at frequent intervals if they contain user data or any other elements that require protection. Third-party backup solutions are used in this case.

Virtual desktop templates and parent VMs

Horizon View uses virtual machine templates to create the desktops in pools for full virtual machines, and uses parent VMs to create the desktops in a linked clone desktop pool. These virtual machine templates and parent VMs are another important part of the View infrastructure that needs protection. Their backups are a crucial part of being able to quickly restore the desktop pools and the RDS hosts in the event of data loss. While standard virtual machines change frequently, the virtual machine templates and parent VMs only need backing up after new changes have been made to the template or parent VM image. These backups should be readily available for rapid redeployment when required.

For environments that use full cloning as the provisioning technique for vDesktops, the gold template should be backed up regularly. The gold template is the master vDesktop from which all other vDesktops are cloned. The VMware KB article, Backing up and restoring virtual machine templates using VMware APIs, covers the steps to both back up and restore a template.
In short, most backup solutions will require that the gold template is converted from a template to a regular virtual machine, after which it can be backed up. You can find more information at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009395.

Backing up the parent VM can be tricky, as it is a virtual machine, often with many different point-in-time snapshots. The most common technique is to collapse the virtual machine snapshot tree at a given point-in-time snapshot, and then back up or copy the newly created virtual machine to a second datastore. By storing the parent VM on a redundant storage solution, it is quite unlikely that the parent VM will be lost. What's more likely is that a point-in-time snapshot of the parent VM may be created while it's in a nonfunctional or less-than-ideal state.

Virtual desktops

There are three types of virtual desktops in a Horizon View environment:

Linked clone desktops
Stateful desktops
Stateless desktops

Linked clone desktops

Virtual desktops that are created by View Composer using the linked clone technology present special challenges for backup and restoration. In many cases, a linked clone desktop will also be considered a stateless desktop. The dynamic nature of a linked clone desktop and the underlying structure of the virtual machine itself mean that linked clone desktops are not good candidates for backup and restoration. However, the same qualities that impede the use of a standard backup solution provide an advantage for rapid reprovisioning of virtual desktops. When the underlying infrastructure for things such as the delivery of applications and user data, along with the parent VMs, is restored, linked clone desktop pools can be recreated and made available within a short amount of time, thereby lessening the impact of an outage or data loss.
Stateful desktops

In the stateful desktop pool scenario, all of the virtual desktops retain user data when the user logs back in to the virtual desktop. So, in this case, backing up the virtual machines with third-party tools, like any other virtual machine in vSphere, is considered the optimal method for protection and recovery.

Stateless desktops

With the stateless desktop architecture, the virtual desktops do not retain the desktop state when the user logs back in to the virtual desktop. By their nature, stateless desktops do not directly contain any data that requires a backup. All the user data in a stateless desktop is stored on a file share. The user data includes any files the user creates, changes, or copies within the virtual infrastructure, along with the user persona data. Therefore, because no user data is stored within the virtual desktop, there is no need to back up the desktop. File shares should be included in the standard backup strategy, and all user data and persona information will be included in the existing daily backups.

The ThinApp repository

The ThinApp repository is similar in nature to the user data on the stateless desktops, in that it should reside on a redundant file share that is backed up regularly. If the ThinApp packages are configured to preserve each user's sandbox, the ThinApp repository should likely be backed up nightly.

Persona Management

With the View Persona Management feature, the user's remote profile is dynamically downloaded after the user logs in to a virtual desktop. A secure, centralized repository can be configured in which Horizon View will store user profiles. The standard practice is to back up the network shares on which View Persona Management stores the profile repository. View Persona Management ensures that user profiles are backed up to the remote profile share, eliminating the need for additional tools to back up user data on the desktops.
Therefore, backup software to protect the user profile on the View desktop is unnecessary.

VMware vCenter

Most established IT departments use backup tools from the storage or backup vendor to protect the datastores where the VMs are stored. This makes the recovery of the base vSphere environment faster and easier. The central piece of vCenter is the vCenter database. If the database is lost entirely, you lose all of your vSphere configuration information, including the configuration specific to View (for example, users, folders, and many more). Another important item to understand is that even if you rebuild your vCenter using the same folder and resource pool names, your View environment will not reconnect and use the new vCenter. The reason is that each object in vSphere has what is called a Managed Object Reference (MoRef), and these are stored in the vSphere database. View uses the MoRef information to talk to vCenter. As View and vSphere rely on each other, making a backup of your View environment without backing up your vSphere environment doesn't make sense.

Restoring the VMware View environment

If your environment has multiple Connection Servers, the best thing to do is to delete all the servers but one, and then use the following steps to restore the ADAM database:

Connect directly to the server where the View Connection Server is located using a remote desktop utility such as RDP.

Stop the View Connection Server service, as shown in the following screenshot:

Locate the backup (or exported) ADAM database file that has the .ldf extension. The first step of the import is to decrypt the file by opening a command prompt and using the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin. Use the following command:

vdmimport -f View_Backup\vdmBackup.ldf -d > View_Backup\vmdDecrypt.ldf

You will be prompted to enter the password from the account you used to create the backup file.
Now use the vdmimport -f [decrypted filename] command (from the preceding example, the filename will be vmdDecrypt.ldf).

After the ADAM database is updated, you can restart the View Connection Server service.

Replace the deleted Connection Servers by running the Connection Server installation and using the Replica option.

To reinstall the View Composer database, connect to the server where Composer is installed. Stop the View Composer service and use your standard procedure for restoring a database. After the restore, start the View Composer service. While this provides the steps to restore the main components of the Connection Server, the steps to perform a complete View Connection Server restore can be found in the VMware KB article, Performing an end-to-end backup and restore for VMware View Manager, at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1008046.

Reconciliation after recovery

One of the main factors to consider when performing a restore in a Horizon View infrastructure is the possibility that the Connection Server environment could be out of sync with the current View state, in which case a reconciliation is required. After restoring the Connection Server ADAM database, missing desktops may be shown in the Connection Server Admin user interface if the following actions were executed after the backup but before a restore:

- The administrator deleted pools or desktops
- The desktop pool was recomposed, which resulted in the removal of the unassigned desktops

Missing desktops or pools can be manually removed from the Connection Server Admin UI. Some of the automated desktops may become disassociated from their pools due to the creation of a pool between the time of the backup and the restore time. View administrators may be able to make them usable by cloning the linked clone desktop to a full clone desktop using vCenter Server.
They would be created as individual desktops in the Connection Server, and those desktops could then be assigned to specific users.

Business Continuity and Disaster Recovery

It's important to ensure that the virtual desktops, along with the application delivery infrastructure, are included and prioritized in the Business Continuity and Disaster Recovery plan. It's also important to ensure that the recovery procedures are tested and validated on a regular cycle, and to have procedures and mechanisms in place that ensure critical data (images, software media, data backups, and so on) is always stored and ready in an alternate location. This will ensure an efficient and timely recovery.

It would be ideal to have a disaster recovery and business continuity plan that recovers the essential services to an alternate "standby" data center. This allows the data to be backed up and available offsite at the alternate facility for an additional measure of protection. The alternate data center could have "hot" standby capacity for the virtual desktops and application delivery infrastructure. This site would then provide 50 percent capacity in the event of a disaster, and also 50 percent additional capacity in the event of a business continuity event that prevents users from accessing the main facility. The additional capacity also provides a rollback option if there were failed updates to the main data center. Operational procedures should ensure the desktop and server images are available at the alternate facility whenever changes are made to the main VMware View system. Desktop and application pools should also be updated in the alternate data center whenever maintenance procedures are executed and validated in the main data center.

Summary

As expected, it is important to back up the fundamental components of a VMware View solution.
While a resilient design should mitigate most types of failure, there are still occasions when a backup may be needed to bring an environment back up to an operational level. This article covered the major components of View and provided some of the basic options for creating backups of those components. The Connection Server and Composer databases, along with vCenter, were explained, and there was an overview of the options used to protect the different types of virtual desktops. The ThinApp repository and Persona Management were also explained. The article also covered the basic recovery options and where to find information on the complete View recovery procedures.

Resources for Article:

Further resources on this subject:
- Introduction to Veeam® Backup & Replication for VMware [article]
- Design, Install, and Configure [article]
- VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]

Packt
17 Sep 2014
13 min read
Index, Item Sharding, and Projection in DynamoDB

Understanding the secondary index and projections should go hand in hand because a secondary index cannot be used efficiently without specifying projection. In this article by Uchit Vyas and Prabhakaran Kuppusamy, authors of DynamoDB Applied Design Patterns, we will take a look at local and global secondary indexes, and projection and its usage with indexes. (For more resources related to this topic, see here.)

The use of projection in DynamoDB is pretty much similar to that of traditional databases. However, here are a few things to watch out for:

- Whenever a DynamoDB table is created, it is mandatory to create a primary key, which can be of a simple type (hash type) or a complex type (hash and range key).
- For the specified primary key, an index will be created (we call this index the primary index). Along with this primary key index, the user is allowed to create up to five secondary indexes per table.
- There are two kinds of secondary index. The first is a local secondary index (in which the hash key of the index must be the same as that of the table) and the second is the global secondary index (in which the hash key can be any field).
- In both of these secondary index types, the range key can be a field that the user needs to create an index for.

Secondary indexes

A quick question: while writing a query in any database, keeping the primary key field as part of the query (especially in the where condition) will return results much faster compared to the other way. Why? This is because an index will be created automatically in most databases for the primary key field. This is the case with DynamoDB as well. This index is called the primary index of the table. There is no customization possible using the primary index, so the primary index is seldom discussed. In order to make retrieval faster, the frequently retrieved attributes need to be made part of the index.
However, a DynamoDB table can have only one primary index, and the index can have a maximum of two attributes (hash and range key). So, for faster retrieval, the user should be given privileges to create user-defined indexes. This index, which is created by the user, is called the secondary index. Similar to the table key schema, the secondary index also has a key schema. Based on the key schema attributes, the secondary index can be either a local or a global secondary index. Whenever a secondary index is created, during every item insertion, the items in the index will be rearranged. This rearrangement will happen for each item insertion into the table, provided the item contains both the index's hash and range key attributes.

Projection

Once we have an understanding of the secondary index, we are all set to learn about projection. While creating the secondary index, it is mandatory to specify the hash and range attributes based on which the index is created. Apart from these two attributes, if the query wants one or more other attributes (assuming that none of these attributes are projected into the index), then DynamoDB will scan the entire table. This will consume a lot of throughput capacity and will have comparatively higher latency. The following is the table (with some data) that is used to store book information:

Here are a few more details about the table:

- The BookTitle attribute is the hash key of the table and the local secondary index
- The Edition attribute is the range key of the table
- The PubDate attribute is the range key of the index (let's call this index Idx_PubDate)

Local secondary index

While creating the secondary index, the hash and range keys of the table and index will be inserted into the index; optionally, the user can specify what other attributes need to be added.
There are three kinds of projection possible in DynamoDB:

- KEYS_ONLY: Using this, the index consists of the hash and range key values of the table and index
- INCLUDE: Using this, the index consists of the attributes in KEYS_ONLY plus other non-key attributes that we specify
- ALL: Using this, the index consists of all of the attributes from the source table

The following code shows the creation of a local secondary index named Idx_PubDate with BookTitle as the hash key (which is a must in the case of a local secondary index), PubDate as the range key, and the KEYS_ONLY projection:

private static LocalSecondaryIndex getLocalSecondaryIndex() {
  ArrayList<KeySchemaElement> indexKeySchema =
    new ArrayList<KeySchemaElement>();
  indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("BookTitle")
    .withKeyType(KeyType.HASH));
  indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("PubDate")
    .withKeyType(KeyType.RANGE));
  LocalSecondaryIndex lsi = new LocalSecondaryIndex()
    .withIndexName("Idx_PubDate")
    .withKeySchema(indexKeySchema)
    .withProjection(new Projection()
      .withProjectionType("KEYS_ONLY"));
  return lsi;
}

The usage of the KEYS_ONLY index type will create the smallest possible index and the usage of ALL will create the biggest possible index. We will discuss the trade-offs between these index types a little later. Going back to our example, let us assume that we are using the KEYS_ONLY index type, so none of the attributes (other than the previous three key attributes) are projected into the index. So the index will look as follows:

You may notice that the row order of the index is almost the same as the table order (except the second and third rows). Here, you can observe one point: the table records will be grouped primarily based on the hash key, and then the records that have the same hash key will be ordered based on the range key of the index.
In the case of the index, even though the table's range key is part of the index attributes, it will not play any role in the ordering (only the index's hash and range keys take part in the ordering). There is a negative in this approach. If the user is writing a query using this index to fetch BookTitle and Publisher with PubDate as 28-Dec-2008, then what happens? Will DynamoDB complain that the Publisher attribute is not projected into the index? The answer is no. The reason is that even though Publisher is not projected into the index, we can still retrieve it using the secondary index. However, retrieving a nonprojected attribute will scan the entire table. So, if we are sure that certain attributes need to be fetched frequently, then we must project them into the index; otherwise, retrieval will consume a large number of capacity units and will be much slower as well.

One more question: if the user is writing a query using the local secondary index to fetch BookTitle and Publisher with PubDate as 28-Dec-2008, then what happens? Will DynamoDB complain that the PubDate attribute is not part of the primary key and hence queries are not allowed on nonprimary key attributes? The answer is no. It is a rule of thumb that we can write queries on the secondary index attributes. It is possible to include nonprimary key attributes as part of the query, but these attributes must at least be key attributes of the index. The following code shows how to add non-key attributes to the secondary index's projection:

private static Projection getProjectionWithNonKeyAttr() {
  Projection projection = new Projection()
    .withProjectionType(ProjectionType.INCLUDE);
  ArrayList<String> nonKeyAttributes = new ArrayList<String>();
  nonKeyAttributes.add("Language");
  nonKeyAttributes.add("Author2");
  projection.setNonKeyAttributes(nonKeyAttributes);
  return projection;
}

There is a slight limitation with the local secondary index.
If we write a query on a non-key (both table and index) attribute, then internally DynamoDB might need to scan the entire table; this is inefficient. For example, consider a situation in which we need to retrieve the number of editions of the books in each and every language. Since both of these attributes are non-key, even if we create a local secondary index with either of the attributes (Edition and Language), the query will still result in a scan operation on the entire table.

Global secondary index

A question arises here: is there any way in which we can create a secondary index using keys that are different from the table's primary keys? The answer is the global secondary index. The following code shows how to create the global secondary index for this scenario:

private static GlobalSecondaryIndex getGlobalSecondaryIndex() {
  GlobalSecondaryIndex gsi = new GlobalSecondaryIndex()
    .withIndexName("Idx_Pub_Edtn")
    .withProvisionedThroughput(new ProvisionedThroughput()
      .withReadCapacityUnits((long) 1)
      .withWriteCapacityUnits((long) 1))
    .withProjection(new Projection().withProjectionType("KEYS_ONLY"));

  ArrayList<KeySchemaElement> indexKeySchema1 =
    new ArrayList<KeySchemaElement>();
  indexKeySchema1.add(new KeySchemaElement()
    .withAttributeName("Language")
    .withKeyType(KeyType.HASH));
  indexKeySchema1.add(new KeySchemaElement()
    .withAttributeName("Edition")
    .withKeyType(KeyType.RANGE));

  gsi.setKeySchema(indexKeySchema1);
  return gsi;
}

While deciding the attributes to be projected into a global secondary index, there are trade-offs we must consider between provisioned throughput and storage costs. A few of these are listed as follows:

- If our application doesn't need to query the table very often, but performs frequent writes or updates against the data in the table, then we must consider projecting the KEYS_ONLY attributes.
The global secondary index will be of minimum size, but it will still be available when required for query activity. The smaller the index, the cheaper it is to store, and our write costs will be cheaper too. Other trade-offs to consider are as follows:

- If we need to access only a few attributes, and with the lowest possible latency, then we must project only those (fewer) attributes into the global secondary index.
- If we need to access almost all of the non-key attributes of the DynamoDB table on a frequent basis, we can project these attributes (even the entire table) into the global secondary index. This gives us maximum flexibility, with the trade-off that our storage cost would increase, or even double if we project the entire table's attributes into the index. The additional storage costs to store the global secondary index might equalize the cost of performing frequent table scans.
- If our application will frequently retrieve some non-key attributes, we must consider projecting these non-key attributes into the global secondary index.

Item sharding

Sharding, also called horizontal partitioning, is a technique in which rows are distributed among the database servers to perform queries faster. In the case of sharding, a hash operation is performed on the table rows (mostly on one of the columns) and, based on the hash operation output, the rows are grouped and sent to the proper database server. Take a look at the following diagram:

As shown in the previous diagram, if all the table data (only four rows and one column are shown for illustration purposes) is stored in a single database server, the read and write operations will become slower, and the server that has the frequently accessed table data will work more compared to the server storing the table data that is not accessed frequently.
The following diagram shows the advantage of sharding over a multitable, multiserver database environment:

In the previous diagram, two tables (Tbl_Places and Tbl_Sports) are shown on the left-hand side with four sample rows of data (Austria.. means only the first column of the first item is illustrated, and all other fields are represented by ..). We are going to perform a hash operation on the first column only. In DynamoDB, this hashing will be performed automatically. Once the hashing is done, rows with similar hashes will be saved automatically in different servers (if necessary) to satisfy the specified provisioned throughput capacity.

Have you ever wondered about the importance of the hash type key while creating a table (which is mandatory)? Of course, we all know the importance of the range key and what it does: it simply sorts items based on the range key value. So far, we might have been thinking that the range key is more important than the hash key. If you think that way, then you may be correct, provided we neither need our table to be provisioned faster nor need to create any partitions for our table. As long as the table data is small, the importance of the hash key will be realized only while writing a query operation. However, once the table grows, in order to satisfy the same provisioned throughput, DynamoDB needs to partition our table data based on this hash key (as shown in the previous diagram). This partitioning of table items based on the hash key attribute is called sharding. It means the partitions are created by splitting items and not attributes. This is the reason why a query that uses the hash key (of the table or index) retrieves items much faster. Since the number of partitions is managed automatically by DynamoDB, we cannot just hope for things to work fine; we also need to keep certain things in mind, for example, the hash key attribute should have more distinct values.
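The effect of hash-key cardinality on partition spread can be sketched with a toy partitioner. Note that this is purely illustrative: the class, the use of String.hashCode, and the modulo scheme are our own simplifications, not DynamoDB's actual internal hashing algorithm.

```java
import java.util.*;

public class ShardingDemo {
    // Assign an item to one of N partitions by hashing its hash-key value
    // (illustrative stand-in for DynamoDB's internal partitioning).
    static int partitionFor(String hashKey, int partitions) {
        return Math.floorMod(hashKey.hashCode(), partitions);
    }

    // Count how many distinct partitions a set of hash-key values can reach.
    static int reachablePartitions(List<String> keys, int partitions) {
        Set<Integer> used = new HashSet<>();
        for (String k : keys) {
            used.add(partitionFor(k, partitions));
        }
        return used.size();
    }

    public static void main(String[] args) {
        // A binary attribute ("Yes"/"No") can never spread load across more
        // than two partitions, regardless of how many partitions exist.
        List<String> binaryKeys = Arrays.asList("Yes", "No", "Yes", "No");
        // A high-cardinality attribute (for example, ISBNs) spreads far better.
        List<String> isbnKeys = Arrays.asList("12345678", "23456789", "34567890", "45678901");
        System.out.println("Binary key reaches " + reachablePartitions(binaryKeys, 8) + " partition(s) of 8");
        System.out.println("ISBN key reaches " + reachablePartitions(isbnKeys, 8) + " partition(s) of 8");
    }
}
```

However many partitions DynamoDB provisions, a two-valued hash key can land items in at most two of them, which is exactly why low-cardinality hash keys cap the achievable throughput.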
To simplify, it is not advisable to put binary values (such as Yes or No, Present or Past or Future, and so on) into the hash key attribute, as this restricts the number of partitions. If our hash key attribute has either Yes or No values in all the items, then DynamoDB can create only a maximum of two partitions; therefore, the specified provisioned throughput cannot be achieved.

Just consider that we have created a table called Tbl_Sports with a provisioned throughput capacity of 10, and then we put 10 items into the table. Assuming that only a single partition is created, we are able to retrieve 10 items per second. After a point of time, we put 10 more items into the table. DynamoDB will create another partition (by hashing over the hash key), thereby satisfying the provisioned throughput capacity. There is a formula taken from the AWS site:

Total provisioned throughput / Number of partitions = Throughput per partition

or, equivalently:

Number of partitions = Total provisioned throughput / Throughput per partition

In order to satisfy the throughput capacity, the other parameters will be automatically managed by DynamoDB.

Summary

In this article, we saw what the local and global secondary indexes are. We walked through projection and its usage with indexes.

Resources for Article:

Further resources on this subject:
- Comparative Study of NoSQL Products [Article]
- Ruby with MongoDB for Web Development [Article]
- Amazon DynamoDB - Modelling relationships, Error handling [Article]
Packt
17 Sep 2014
5 min read

Turning your PowerPoint into a Prezi

In this article by Domi Sinclair, the author of Prezi Essentials, we look at how to import our existing content from a PowerPoint presentation. For this, you will need an existing PowerPoint presentation to work with. (For more resources related to this topic, see here.)

When you add your PowerPoint slides to Prezi, it turns each slide into a separate frame. A frame is Prezi's way of grouping together different content, just like PowerPoint groups content by using slides. Once the slide is converted into a frame, Prezi identifies each individual element of content that went into making that slide, such as images, title text, and body text. This means you can still edit and remove these elements individually, just as you would be able to in PowerPoint.

In Prezi, it is easy to have frames within other frames (whereas it is inelegant and clunky to have a slide within a slide when using PowerPoint); this allows you to create subframes. Subframes can be used to hold additional details you are not sure whether you'll have time for. You might also use them to hold notes that you don't need for the main presentation, but that might be useful for colleagues presenting the Prezi or for audience members reviewing the presentation at a later date. You can add a slide as a subframe and simply zoom into it, if you need it, or skip past it, if it's not needed. This is much more subtle than having to skip past the slide in a PowerPoint. We will learn more about this as we develop our Prezi skills.

Without further delay, let's have a go at adding some PowerPoint slides to our Prezi:

- In the template, locate the Insert button (the middle button in the center of the bar that goes across the top of the screen).
- Click on the Insert button and then, from the drop-down menu that appears, select PowerPoint…, as shown in the following screenshot:
- You should now be presented with a file explorer window. Use this to navigate to the PowerPoint file you wish to use for this example.
When you find the file, click on it and upload it by clicking on Open. This file can be any PowerPoint you wish to use, but you will also find an example to download at http://digidomi.wordpress.com/2014/08/31/essential-prezi-resources/. This should load the slides into a panel within Prezi, on the right-hand side of the screen.

Left-click on the second slide and then drag it onto the Prezi canvas. Release the mouse to drop it somewhere on the screen, and then click on the green tick to place it there. This is shown in the following screenshot:

You can zoom in to it. Do this by clicking on the slide to select it with a blue box, and then click on the Zoom to Frame button. You may notice that the slide has become slightly jumbled in the conversion to a Prezi frame. Click on any element of the frame, such as the title or body text, then click and hold the hand icon to drag the content to the desired location.

You can also use the Edit Text button to enter additional text, or the dustbin icon to delete an element. When you are happy, click on the save icon that resembles a floppy disk on the left-hand side of the bar that runs across the top of the screen; this will save your progress. Although Prezi does autosave as you go along, it is always a good idea to do a manual save after making important changes. Why not try adding some other slides and experimenting with how to move and edit the content? It is also a good idea to try dragging one of the slides from the right-hand panel onto the canvas and into an existing template's frame to turn it into a subframe.

Although we have just looked at how to add slides individually, you can insert all the slides from a PowerPoint in just one click. In the side panel that appears when you upload the PowerPoint, instead of dragging the slides one by one, simply click on the button at the top of the panel labeled Insert All….
This will then give you a list of layout options and the ability to add a path between them, as shown in the following screenshot. Select the layout you want, tick the path option if desired, and then click on Insert. If you do not want any of those layouts, then don't worry; once all of the slides have been placed into the Prezi, you can move them around by hovering over a slide, left-clicking on it, and dragging it to your desired location, so just choose any layout.

Although you can just add all the slides, it might be useful to go through the slides individually, as it can be a good time to review your PowerPoint content and assess what is still useful. This is advisable, rather than simply continuing with the same content, which may no longer be relevant. It is best to only include important information in a Prezi, so narrowing it down is a valuable exercise.

Summary

This article described how we can convert our existing PowerPoint into a Prezi.

Resources for Article:

Further resources on this subject:
- Using Prezi - The Online Presentation Software Tool [Article]
- The Fastest Way to Go from an Idea to a Prezi [Article]
- Turning your PowerPoint presentation into a Prezi [Article]

Packt
17 Sep 2014
12 min read

What is REST?

This article by Bhakti Mehta, the author of RESTful Java Patterns and Best Practices, starts with the basic concepts of REST, how to design RESTful services, and best practices around designing REST resources. It also covers the architectural aspects of REST. (For more resources related to this topic, see here.)

Where REST has come from

The confluence of social networking, cloud computing, and the era of mobile applications has created a generation of emerging technologies that allow different networked devices to communicate with each other over the Internet. In the past, there were traditional and proprietary approaches for building solutions encompassing different devices and components communicating with each other over an unreliable network or through the Internet. Some of these approaches, such as RPC, CORBA, and SOAP-based web services, which evolved as different implementations of Service Oriented Architecture (SOA), required tighter coupling between components along with greater complexity in integration.

As the technology landscape evolves, today's applications are built on the notion of producing and consuming APIs instead of using web frameworks that invoke services and produce web pages. This requirement enforces the need for easier exchange of information between distributed services, along with predictable, robust, well-defined interfaces. API-based architecture enables agile development, easier adoption and prevalence, scale, and integration with applications within and outside the enterprise.

HTTP 1.1 is defined in RFC 2616, and is ubiquitously used as the standard protocol for distributed, collaborative, hypermedia information systems. Representational State Transfer (REST) is inspired by HTTP and can be used wherever HTTP is used. The widespread adoption of REST and JSON opens up the possibilities of applications incorporating and leveraging functionality from other applications as needed.
The popularity of REST is mainly due to the fact that it enables building lightweight, simple, cost-effective modular interfaces, which can be consumed by a variety of clients. This article covers the following topics:

- Introduction to REST
- Safety and Idempotence
- HTTP verbs and REST
- Best practices when designing RESTful services
- REST architectural components

Introduction to REST

REST is an architectural style that conforms to web standards, such as using HTTP verbs and URIs. It is bound by the following principles:

- All resources are identified by URIs
- All resources can have multiple representations
- All resources can be accessed/modified/created/deleted by standard HTTP methods
- There is no state on the server

REST is extensible due to the use of URIs for identifying resources. For example, a URI to represent a collection of book resources could look like this:

http://foo.api.com/v1/library/books

A URI to represent a single book identified by its ISBN could be as follows:

http://foo.api.com/v1/library/books/isbn/12345678

A URI to represent a coffee order resource could be as follows:

http://bar.api.com/v1/coffees/orders/1234

A user in a system can be represented like this:

http://some.api.com/v1/user

A URI to represent all the book orders for a user could be:

http://bar.api.com/v1/user/5034/book/orders

All the preceding samples show a clear, readable pattern, which can be interpreted by the client. All these resources could have multiple representations. The resource examples shown here can be represented by JSON or XML and can be manipulated by the HTTP methods GET, PUT, POST, and DELETE. The following table summarizes the HTTP methods and descriptions of the actions taken on the resource, with a simple example of a collection of books in a library.
HTTP method   Resource URI                    Description
GET           /library/books                  Gets a list of books
GET           /library/books/isbn/12345678    Gets a book identified by ISBN "12345678"
POST          /library/books                  Creates a new book order
DELETE        /library/books/isbn/12345678    Deletes a book identified by ISBN "12345678"
PUT           /library/books/isbn/12345678    Updates a specific book identified by ISBN "12345678"
PATCH         /library/books/isbn/12345678    Can be used to do a partial update for a book identified by ISBN "12345678"

REST and statelessness

REST is bound by the principle of statelessness. Each request from the client to the server must have all the details needed to understand the request. This helps to improve visibility, reliability, and scalability for requests:

- Visibility is improved, as the system monitoring the requests does not have to look beyond one request to get details.
- Reliability is improved, as there is no checkpointing/resuming to be done in the case of partial failures.
- Scalability is improved, as the number of requests that can be processed increases because the server is not responsible for storing any state.

Roy Fielding's dissertation on the REST architectural style provides details on the statelessness of REST; see http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

With this initial introduction to the basics of REST, we shall cover the different maturity levels and where REST falls in them in the following section.

Richardson Maturity Model

The Richardson Maturity Model was developed by Leonard Richardson. It talks about the basics of REST in terms of resources, verbs, and hypermedia controls. The starting point for the maturity model is to use the HTTP layer as the transport.

Level 0 – Remote Procedure Invocation

This level uses SOAP or XML-RPC, sending data as POX (Plain Old XML). Only POST methods are used. This is the most primitive way of building SOA applications, with a single POST method and using XML to communicate between services.
Level 1 – REST resources

This level uses POST methods and, instead of calling a function and passing arguments, uses REST URIs. It still uses only one HTTP method, but it improves on Level 0 by breaking a complex functionality into multiple resources with one method.

Level 2 – more HTTP verbs

This level uses other HTTP verbs such as GET, HEAD, DELETE, and PUT along with POST. Level 2 is the real use case of REST, which advocates using different verbs based on the HTTP request method; the system can have multiple resources.

Level 3 – HATEOAS

Hypermedia as the Engine of Application State (HATEOAS) is the most mature level of Richardson's model. The responses to client requests contain hypermedia controls, which help the client decide what action it can take next. Level 3 encourages easy discoverability and makes it easy for responses to be self-explanatory.

Safety and idempotence

This section discusses safe and idempotent methods in detail.

Safe methods

Safe methods are methods that do not change state on the server. GET and HEAD are safe methods. For example, GET /v1/coffees/orders/1234 is a safe request. Responses to safe methods can be cached. The PUT method is not safe, as it will create or modify a resource on the server. The POST method is not safe for the same reason. The DELETE method is not safe, as it deletes a resource on the server.

Idempotent methods

An idempotent method is a method that produces the same result irrespective of how many times it is called. For example, the GET method is idempotent, as multiple calls to the same GET resource will always return the same response. The PUT method is idempotent, as calling it multiple times will update the same resource and not change the outcome. POST is not idempotent: calling the POST method multiple times can have different results and will result in creating new resources.
DELETE is idempotent because once the resource is deleted, it is gone, and calling the method multiple times will not change the outcome.

HTTP verbs and REST

HTTP verbs inform the server what to do with the data sent as part of the URL.

GET

GET is the simplest HTTP verb; it provides access to a resource. Whenever the client clicks a URL in the browser, it sends a GET request to the address specified by the URL. GET is safe and idempotent, GET responses can be cached, and query parameters can be used in GET requests. A simple GET request looks as follows:

curl http://api.foo.com/v1/user/12345

POST

POST is used to create a resource. POST requests are neither idempotent nor safe: multiple invocations of a POST request can create multiple resources. A POST request should invalidate a cache entry if one exists. Query parameters with POST requests are not encouraged. For example, a POST request to create a user could be:

curl -X POST -d '{"name":"John Doe","username":"jdoe","phone":"412-344-5644"}' http://api.foo.com/v1/user

PUT

PUT is used to update a resource. PUT is idempotent but not safe: multiple invocations of a PUT request should produce the same result by updating the resource. A PUT request should invalidate the cache entry if one exists. For example, a PUT request to update a user could be:

curl -X PUT -d '{"phone":"413-344-5644"}' http://api.foo.com/v1/user

DELETE

DELETE is used to delete a resource. DELETE is idempotent but not safe. DELETE is idempotent because, based on RFC 2616, "the side effects of N > 0 requests is the same as for a single request". This means that once the resource is deleted, calling DELETE multiple times will get the same response. For example, a request to delete a user is as follows:

curl -X DELETE http://foo.api.com/v1/user/1234

HEAD

A HEAD request is similar to a GET request. The difference is that only the HTTP headers are returned and no content. HEAD is idempotent and safe.
For example, a HEAD request sent with curl is as follows:

curl -X HEAD http://foo.api.com/v1/user

It can be useful to send a HEAD request to see whether the resource has changed before trying to get a large representation with a GET request.

PUT vs POST

According to the RFC, the difference between PUT and POST is in the Request-URI. The URI identified by POST defines the entity that will handle the POST request. The URI in a PUT request includes the entity itself. So POST /v1/coffees/orders means creating a new resource and returning an identifier to describe it. In contrast, PUT /v1/coffees/orders/1234 means updating the resource identified by "1234" if it exists, or else creating a new order and using the URI orders/1234 to identify it.

Best practices when designing resources

This section highlights some of the best practices when designing RESTful resources:

The API developer should use nouns to understand and navigate through resources, and verbs via the HTTP method. For example, the URI /user/1234/books is better than /user/1234/getBook.
Use associations in URIs to identify subresources. For example, to get the authors of book 5678 for user 1234, use the URI /user/1234/books/5678/authors.
For specific variations, use query parameters. For example, to get all the books with 10 reviews, use /user/1234/books?reviews_counts=10.
Allow partial responses as part of query parameters if possible. For example, to get only the name and age of a user, the client can specify ?fields as a query parameter listing the fields the server should send in the response: /users/1234?fields=name,age.
Have a default output format for the response in case the client does not specify which format it is interested in. Most API developers choose JSON as the default response MIME type.
Use camelCase or underscores for attribute names.
Support a standard API for counting, for example users/1234/books/count, in the case of collections, so the client can get an idea of how many objects to expect in the response. This will also help the client with pagination queries.
Support a pretty-printing option, for example users/1234?pretty_print. It is also good practice not to cache queries with the pretty-print query parameter.
Avoid chattiness by being as verbose as possible in the response. If the server does not provide enough details in the response, the client needs to make more calls to get additional details. That wastes network resources and counts against the client's rate limits.

REST architecture components

This section will cover the various components that must be considered when building RESTful APIs.

As seen in the preceding screenshot, REST services can be consumed from a variety of clients and applications running on different platforms and devices, such as mobile devices and web browsers. The requests are sent through a proxy server. The HTTP requests are sent to the resources and, based on the CRUD operation, the right HTTP method is selected. On the response side there can be pagination, to ensure the server sends a subset of results, and the server can do asynchronous processing, improving responsiveness and scale. There can be links in the response, which is where HATEOAS comes in.

Here is a summary of the various REST architectural components:

HTTP requests use REST APIs with HTTP verbs for the uniform interface constraint.
Content negotiation allows selecting a representation for a response when there are multiple representations available.
Logging helps provide traceability to analyze and debug issues.
Exception handling allows sending application-specific exceptions with HTTP codes.
Authentication and authorization with OAuth 2.0 give other applications access to take actions without the user having to send their credentials.
Validation provides support for sending detailed messages with error codes back to the client, as well as validation of the inputs received in the request.
Rate limiting ensures the server is not burdened with too many requests from a single client.
Caching helps improve application responsiveness.
Asynchronous processing enables the server to send responses back to the client asynchronously.
Microservices involve breaking up a monolithic service into fine-grained services.
HATEOAS improves usability, understandability, and navigability by returning a list of links in the response.
Pagination allows clients to specify the items in a dataset that they are interested in.

The REST architectural components can be chained one after the other, as shown earlier. For example, there can be a filter chain consisting of filters related to authentication, rate limiting, caching, and logging. This will take care of authenticating the user and checking whether the requests from the client are within rate limits; then a caching filter can check whether the request can be served from the cache. This can be followed by a logging filter, which can log the details of the request. For more details, check RESTful Patterns and best practices.
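The verb semantics discussed in this article (safe GET, idempotent PUT and DELETE, non-idempotent POST) can be sketched with a minimal in-memory resource store. This is an illustrative Python sketch, not a production framework; the class and method names are assumptions made for the example:

```python
# Minimal in-memory sketch of REST verb semantics for /library/books.
# GET is safe (no state change); PUT and DELETE are idempotent
# (repeating them leaves the same end state); POST is not idempotent
# (each call creates a new resource).

class BookStore:
    def __init__(self):
        self.books = {}       # isbn -> book dict
        self.next_order = 1

    def get(self, isbn):
        # Safe: reading never changes server state.
        return self.books.get(isbn)

    def put(self, isbn, book):
        # Idempotent: repeated PUTs of the same body converge
        # on the same stored representation.
        self.books[isbn] = book
        return book

    def delete(self, isbn):
        # Idempotent: deleting twice leaves the same end state
        # (the resource is absent either way).
        self.books.pop(isbn, None)

    def post(self, book):
        # Not idempotent: every call creates a new order resource.
        order_id = self.next_order
        self.next_order += 1
        return {"order": order_id, "book": book}


store = BookStore()
store.put("12345678", {"title": "REST in Practice"})
store.put("12345678", {"title": "REST in Practice"})  # same end state
print(store.post({"isbn": "12345678"})["order"])      # 1
print(store.post({"isbn": "12345678"})["order"])      # 2 -- new resource each time
store.delete("12345678")
store.delete("12345678")                              # still fine: same end state
print(store.get("12345678"))                          # None
```

Running the sketch shows why clients may safely retry PUT and DELETE after a network failure, while retrying POST risks creating duplicate orders.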

Using R for Statistics, Research, and Graphics

Packt
16 Sep 2014
12 min read
In this article by David Alexander Lillis, author of R Graph Essentials, we will talk about R. Developed by Professor Ross Ihaka and Dr. Robert Gentleman at the University of Auckland (New Zealand) during the early 1990s, the R statistics environment is a real success story. R is open source software, which you can download in a couple of minutes from the Comprehensive R Archive Network (CRAN) website (http://cran.r-project.org/), and it combines a powerful programming language, outstanding graphics, and a comprehensive range of useful statistical functions. If you need a statistics environment that includes a programming language, R is ideal. It's true that the learning curve is longer than for spreadsheet-based packages, but once you master the R programming syntax, you can develop your own very powerful analytic tools. Many contributed packages are available on the web for use with R, and very often the analytic tools you need can be downloaded at no cost. (For more resources related to this topic, see here.) The main problem for those new to R is the time required to master the programming language, but several nice graphical user interfaces, such as John Fox's R Commander package, are available, which make it much easier for the newcomer to develop proficiency in R than it used to be. For many statisticians and researchers, R is the package of choice because of its powerful programming language, the easy availability of code, and because it can import Excel spreadsheets, comma-separated values (.csv) files, and text files, as well as SPSS files, STATA files, and files produced within other statistical packages. You may be looking for a tool for your own data analysis. If so, let's take a brief look at what R can do for you.

Some basic R syntax

Data can be created in R or else read in from .csv or other files as objects.
For example, you can read in the data contained within a .csv file called mydata.csv as follows:

A <- read.csv("mydata.csv", header=TRUE)
A

The object A now contains all the data of the original file. The syntax A[3,7] picks out the element in row 3 and column 7. The syntax A[14, ] selects the fourteenth row and A[ ,6] selects the sixth column. The functions mean(A) and sd(A) find the mean and standard deviation of each column. The expression 3*A + 7 triples each element of A and adds 7 to each element; you can store the result as a new object B:

B <- 3*A + 7

Now you can save this array as a .csv file called Outputfile.csv as follows:

write.csv(B, file="Outputfile.csv")

Statistical modeling

R provides a comprehensive range of basic statistical functions relating to the commonly used distributions (normal distribution, t-distribution, Poisson, gamma, and so on), and many less well-known distributions. It also provides a range of non-parametric tests that are appropriate when your data are not distributed normally. Linear and non-linear regressions are easy to perform, and finding the optimum model (that is, by eliminating non-significant independent variables and non-significant factor interactions) is particularly easy. Implementing Generalized Linear Models and other commonly used models such as Analysis of Variance, Multivariate Analysis of Variance, and Analysis of Covariance is also straightforward and, once you know the syntax, you may find that such tasks can be done more quickly in R than in other packages. The usual post-hoc tests for identifying factor levels that are significantly different from the other levels (for example, Tukey and Scheffé tests) are available, and testing for interactions between factors is easy. Factor Analysis, and the related Principal Components Analysis, are well-known data reduction techniques that enable you to explain your data in terms of smaller sets of independent variables (or factors).
Both methods are available in R, and code for complex designs, including one- and two-way repeated measures and four-way ANOVA (for example, two repeated measures and two between-subjects factors), can be written relatively easily or downloaded from various websites (for example, http://www.personality-project.org/r/). Other analytic tools include Cluster Analysis, Discriminant Analysis, Multidimensional Scaling, and Correspondence Analysis. R also provides various methods for fitting analytic models to data and smoothing (for example, lowess and spline-based methods).

Miscellaneous packages for specialist methods

You can find some very useful packages of R code for fields as diverse as biometry, epidemiology, astrophysics, econometrics, financial and actuarial modeling, the social sciences, and psychology. For example, if you are interested in astrophysics, the Penn State astrostatistics school offers a nice website that includes both tutorials and code (http://www.iiap.res.in/astrostat/RTutorials.html). Here I'll mention just a few of the popular techniques:

Monte Carlo methods

A number of sources give excellent accounts of how to perform Monte Carlo simulations in R (that is, drawing samples from multidimensional distributions and estimating expected values). A valuable text is Christian Robert's book Introducing Monte Carlo Methods with R. Murali Haran gives another interesting astrophysical example on the CAStR website (http://www.stat.psu.edu/~mharan/MCMCtut/MCMC.html).

Structural Equation Modeling

Structural Equation Modeling (SEM) is becoming increasingly popular in the social sciences and economics as an alternative to other modeling techniques such as multiple regression, factor analysis, and analysis of covariance. Essentially, SEM is a kind of multiple regression that takes account of factor interactions, nonlinearities, measurement error, multiple latent independent variables, and latent dependent variables.
Useful references for conducting SEM in R include those of Revelle, Farnsworth (2008), and Fox (2002 and 2006).

Data mining

A number of very useful resources are available for anyone contemplating data mining using R. For example, Luis Torgo has published a book on data mining using R, presenting case studies, along with the datasets and code, which the interested student can work through. Torgo's book provides the analytic and graphical techniques used every day by data miners, including visualization techniques, dealing with missing values, developing prediction models, and methods for evaluating the performance of those models. Also of interest to the data miner is the Rattle GUI (R Analytical Tool To Learn Easily). Rattle is a data mining facility for analyzing very large datasets. It provides many useful statistical and graphical data summaries, presents mechanisms for developing a variety of models, and summarizes the performance of your models.

Graphics in R

Quite simply, the quality and range of graphics available through R are superb and, in my view, vastly superior to those of any other package I have encountered. Of course, you have to write the necessary code, but once you have mastered this skill, you have access to wonderful graphics. You can write your own code from scratch, but many websites provide helpful examples, complete with code, which you can download and modify to suit your own needs. R's base graphics (graphics created without the use of any additional contributed packages) are superb, but various graphics packages such as ggplot2 (and the associated qplot function) help you to create wonderful graphs.
R's graphics capabilities include, but are not limited to, the following:

Base graphics in R

Basic graphics techniques and syntax
Creating scatterplots and line plots
Customizing axes, colors, and symbols
Adding text – legends, titles, and axis labels
Adding lines – interpolation lines, regression lines, and curves
Increasing complexity – graphing three variables, multiple plots, or multiple axes
Saving your plots to multiple formats – PDF, postscript, and JPG
Including mathematical expressions on your plots
Making graphs clear and pretty – including a grid, point labels, and shading
Shading and coloring your plot
Creating bar charts, histograms, boxplots, pie charts, and dotcharts
Adding loess smoothers
Scatterplot matrices
R's color palettes
Adding error bars

Creating graphs using qplot

Using basic qplot graphics techniques and syntax to customize in easy steps
Creating scatterplots and line plots in qplot
Mapping symbol size, symbol type, and symbol color to categorical data
Including regressions and confidence intervals on your graphs
Shading and coloring your graph
Creating bar charts, histograms, boxplots, pie charts, and dotcharts
Labelling points on your graph

Creating graphs using ggplot

Plotting options – backgrounds, sizes, transparencies, and colors
Superimposing points
Controlling symbol shapes and using pretty color schemes
Stacked, clustered, and paneled bar charts
Methods for detailed customization of lines, point labels, smoothers, confidence bands, and error bars

The following graph records information on the heights in centimeters and weights in kilograms of patients in a medical study. The curve in red gives a smoothed version of the data, created using locally weighted scatterplot smoothing. Both the graph and the modeling required to produce the smoothed curve were performed in R. Here is another graph. It gives the heights and body masses of female patients receiving treatment in a hospital. Each patient is identified by name.
This graph was created very easily using ggplot, and shows the default background produced by ggplot (a grey plotting background and white grid lines). Next, we see a histogram of patients' heights and body masses, partitioned by gender. The bars are given in orange and ivory. The ggplot package provides a wide range of colors and hues, as well as a wide range of color palettes. Finally, we see a line graph of height against age for a group of four children. The graph includes both points and lines, and we have a unique color for each child. The ggplot package makes it possible to create attractive and effective graphs for research and data analysis.

Summary

For many scientists and data analysts, mastery of R could be an investment for the future, particularly for those who are beginning their careers. The technology for handling scientific computation is advancing very quickly and is a major impetus for scientific advance. Some level of mastery of R has become, for many applications, essential for taking advantage of these developments. Spatial analysis, where R provides an integrated framework for abilities that are spread across many different computer programs, is a good example. A few years ago, I would not have recommended R as a statistics environment for generalist data analysts or postgraduate students, except those working directly in areas involving statistical modeling. However, many tutorials are downloadable from the Internet, and a number of organizations provide online tutorials and/or face-to-face workshops (for example, The Analysis Factor, http://www.theanalysisfactor.com/). In addition, the appearance of GUIs such as R Commander and the new iNZight GUI (designed for use in schools) makes it easier for non-specialists to learn and use R effectively. I am most happy to provide advice to anyone contemplating learning to use this outstanding statistical and research tool.
References

Some useful material on R is as follows:

Benzécri, J. P. (1973). L'analyse des données. Tome 1: La taxinomie; Tome 2: L'analyse des correspondances. Dunod, Paris.
Blasius, J. and Greenacre, M. J. (1994). Computation of Correspondence Analysis. In M. J. Greenacre and J. Blasius (eds.), Correspondence Analysis in the Social Sciences, pp. 53–75. Academic Press, London.
Crawley, M. J. (2005). Statistics: An Introduction using R. John Wiley & Sons, Ltd. ISBN 0-470-02297-3. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470022973,subjectCd-ST05.html; http://www3.imperial.ac.uk/naturalsciences/research/statisticsusingr.
Fox, John. Structural Equation Models: Appendix to An R and S-PLUS Companion to Applied Regression. http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-sems.pdf.
Fox, John (2006). Getting Started with the R Commander, 26 August 2006.
Fox, John (2005). The R Commander: A Basic-Statistics Graphical User Interface to R. Journal of Statistical Software, Volume 14, Issue 9. http://www.jstatsoft.org/.
Fox, John (2006). Structural Equation Modeling With the sem Package in R. Structural Equation Modeling, 13(3), 465–486. Lawrence Erlbaum Associates, Inc.
Gabriel, K. R. and Odoroff, C. (1990). Biplots in Biomedical Research. Statistics in Medicine, 9, 469–485.
Greenacre, M. J. (1984). Theory and Applications of Correspondence Analysis. Academic Press, London.
Maindonald, J. H. Using R for Data Analysis and Graphics: Introduction, Code and Commentary. Centre for Mathematics and its Applications, Australian National University.
Robert, Christian P. and Casella, George (2010). Introducing Monte Carlo Methods with R. Use R! series. Springer. ISBN 978-1-4419-1575-7.
Anderson, E. (1935). The irises of the Gaspé peninsula. Bulletin of the American Iris Society, 59, 2–5.

Useful tutorials available on the web are as follows:

De Silva, N. (2006). An Introduction to R: examples for Actuaries. http://toolkit.pbworks.com/f/R%20Examples%20for%20Actuaries%20v0.1-1.pdf.
Farnsworth, Grant V. (2008). Econometrics in R, October 26, 2008. http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf.
Harte, David. An Introduction to the R Language. Statistics Research Associates Limited. www.statsresearch.co.nz.
Kabakoff, Rob. Quick-R. http://www.statmethods.net/index.html.
Muenchen, Bob. R for SAS and SPSS Users. http://RforSASandSPSSusers.com.
Nenadić and Zucchini, Walter. Statistical Analysis with R – a quick start.
Paradis, Emmanuel (paradis@isem.univ-montp2.fr). R for Beginners. Institut des Sciences de l'Evolution, Université Montpellier II, F-34095 Montpellier cedex 05, France.
Torgo, Luis. Data Mining with R: learning by case studies. http://www.liaad.up.pt/~ltorgo/DataMiningWithR/.
Verzani, John. simpleR – Using R for Introductory Statistics. http://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf.
Time Series Analysis and Its Applications: With R Examples. http://www.stat.pitt.edu/stoffer/tsa2/textRcode.htm#ch2.

Resources for Article:

Further resources on this subject:

Aspects of Data Manipulation in R [Article]
Learning Data Analytics with R and Hadoop [Article]
First steps with R [Article]

Standard Functionality

Packt
16 Sep 2014
16 min read
In this article by Mark Brummel, author of Microsoft Dynamics NAV 2013 Application Design, we will learn how to search the standard functionality and reuse parts of it in our own software. For this part, we will look at resources in Microsoft Dynamics NAV. Resources are similar to items but far less complex, making them easier to look at and learn from. (For more resources related to this topic, see here.)

Squash court master data

Our company has 12 courts that we want to register in Microsoft Dynamics NAV. This master data is comparable to resources, so we'll go ahead and copy this functionality. Resources are not attached to the contact table like the vendor and squash player tables are. We need the number series again, so we'll add a new number series to our Squash Setup table. The Squash Court table should look like this after creation:

Chapter objects

The Object Designer window shows the Page tab, as shown in the following screenshot: After the import process is completed, make sure that your current database is the default database for the role tailored client and run page 123456701, Squash Setup. From this page, select the action Initialize Squash Application. This will execute the C/AL code in the InitSquashApp function of this page, which will prepare the demo data for us to play with. The objects are prepared and tested in a Microsoft Dynamics NAV 2013 R2 W1 database.

Reservations

When running a squash court, we want to be able to keep track of reservations. Looking at standard Dynamics NAV functionality, it might be a good idea to create a squash player journal. The journal can create entries for reservations that can be invoiced. A journal needs the object structure. Creating a new journal from scratch is a lot of work and can easily lead to mistakes. It is easier and safer to copy an existing journal structure from the standard application that is similar to the journal we need for our design.
In our example, we have copied the Resource Journals:

You can export these objects in text format and then rename and renumber them to be reused easily. The Squash Journal objects are renumbered and renamed from the Resource Journal. All journals have the same structure: the template, batch, and register tables are almost always the same, whereas the journal line and ledger entry tables contain function-specific fields. Let's have a look at the first.

The Journal Template has several fields, as shown in the following screenshot. Let's discuss these fields in more detail:

Name: The unique name. It is possible to define as many templates as required, but usually one template per form ID and one for recurring will do. If you want journals with different source codes, you need more templates.
Description: A readable and understandable description of its purpose.
Test Report ID: All templates have a test report that allows the user to check for posting errors.
Form ID: For some journals, more UI objects are required. For example, the General Journals have a special form for bank and cash.
Posting Report ID: This report is printed when a user selects Post and Print.
Force Posting Report: Use this option when a posting report is mandatory.
Source Code: Here you can enter a trail code for all the postings done via this journal.
Reason Code: This functionality is similar to Source Code.
Recurring: Whenever you post lines from a recurring journal, new lines are automatically created with a posting date defined in the recurring date formula.
No. Series: When you use this feature, the Document No. in the journal line is automatically populated with a new number from this number series.
Posting No. Series: Use this feature for recurring journals.
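The way a template's No. Series drives automatic Document No. assignment on new journal lines can be sketched outside of C/AL. This is an illustrative Python sketch only; the class names, the "RES" prefix, and the number format are assumptions made for the example, not part of the NAV implementation:

```python
# Illustrative sketch (Python, not C/AL): a journal template carrying a
# No. Series, so that each new journal line gets its Document No.
# populated automatically from that series.

class NumberSeries:
    def __init__(self, prefix, start=1):
        self.prefix = prefix
        self.next_no = start

    def get_next(self):
        # Hand out the next number in the series, e.g. "RES00001".
        no = f"{self.prefix}{self.next_no:05d}"
        self.next_no += 1
        return no


class JournalTemplate:
    def __init__(self, name, description, no_series=None):
        self.name = name
        self.description = description
        self.no_series = no_series


def new_journal_line(template, line_no):
    # When the template carries a No. Series, the Document No. is
    # populated automatically, as described for the template fields.
    doc_no = template.no_series.get_next() if template.no_series else ""
    return {"Journal Template Name": template.name,
            "Line No.": line_no,       # lines increment by 10000
            "Document No.": doc_no}


tpl = JournalTemplate("SQUASH", "Squash Journal", NumberSeries("RES"))
print(new_journal_line(tpl, 10000)["Document No."])  # RES00001
print(new_journal_line(tpl, 20000)["Document No."])  # RES00002
```

A template without a No. Series simply leaves Document No. blank for the user to fill in, which mirrors the optional nature of the field described above.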
The Journal Batch has various fields, as shown in the following screenshot. Let's discuss these fields in more detail:

Journal Template Name: The name of the journal template this batch refers to.
Name: Each batch should have a unique code.
Description: A readable description explaining the purpose of this batch.
Reason Code: When populated, this Reason Code will overrule the Reason Code from the Journal Template.
No. Series: When populated, this No. Series will overrule the No. Series from the Journal Template.
Posting No. Series: When populated, this Posting No. Series will overrule the Posting No. Series from the Journal Template.

The Register table has various fields, as shown in the following screenshot. The fields from the Journal Register table that you need to know are:

No.: This field is automatically and incrementally populated for each transaction with this journal; there are no gaps between the numbers.
From Entry No.: A reference to the first ledger entry created with this transaction.
To Entry No.: A reference to the last ledger entry created with this transaction.
Creation Date: Always populated with the real date when the transaction was posted.
User ID: The ID of the end user who posted the transaction.

The journal

The journal line has a number of mandatory fields that are required for all journals, and some fields that are required for its designed functionality. In our case, the journal should create a reservation, which can then be invoiced. This requires some information to be populated in the lines.

Reservation

The reservation process is a logistical process that requires us to know the number of the squash court, the date, and the time of the reservation. We also need to know how long the players want to play. To check the reservation, it might also be useful to store the number of the squash player.

Invoicing

For the invoicing part, we need to know the price we need to invoice. It might also be useful to store the cost to see our profit.
For the system to figure out the proper G/L account for the turnover, we also need to define a General Product Posting Group. Let's discuss these fields in more detail:

Journal Template Name: A reference to the current Journal Template.
Line No.: Each journal has a virtually unlimited number of lines; this number is automatically incremented by 10000, allowing lines to be created in between.
Entry Type: Reservation or Invoice.
Document No.: This number can be given to the squash player as a reservation number. When the Entry Type is Invoice, it is the invoice number.
Posting Date: Usually the reservation date, but when the Entry Type is Invoice, it might be the date of the invoice, which might differ from the posting date in the general ledger.
Squash Player No.: A reference to the squash player who made the reservation.
Squash Court No.: A reference to the squash court.
Description: Automatically updated with the number of the squash court and the reservation date and times, but can be changed by the user.
Reservation Date: The actual date of the reservation.
From Time: The starting time of the reservation. We only allow whole and half hours.
To Time: The ending time of the reservation. We only allow whole and half hours. This is automatically populated when people enter a quantity.
Quantity: The number of hours of playing time. We only allow units of 0.5 to be entered here. This is automatically calculated when the times are populated.
Unit Cost: The cost to run a squash court for one hour.
Total Cost: The cost for this reservation.
Unit Price: The invoice price for this reservation per hour. This depends on whether or not the squash player is a member.
Total Price: The total invoice price for this reservation.
Shortcut Dimension Code 1 & 2: A reference to the dimensions used for this transaction.
Applies-to Entry No.: When a reservation is invoiced, this is the reference to the Squash Entry No. of the reservation.
Source Code: Inherited from the journal batch or template and used when posting the transaction.
Chargeable: When this option is cleared, no invoice will be created for the reservation.
Journal Batch Name: A reference to the journal batch that is used for this transaction.
Reason Code: Inherited from the journal batch or template and used when posting the transaction.
Recurring Method: When the journal is a recurring journal, you can use this field to determine whether the Amount field is blanked after posting the lines.
Recurring Frequency: This field determines the new posting date after the recurring lines are posted.
Gen. Bus. Posting Group: The combination of general business and product posting groups determines the G/L account for turnover when we invoice the reservation. The Gen. Bus. Posting Group is inherited from the bill-to customer.
Gen. Prod. Posting Group: This will be inherited from the squash player.
External Document No.: When a squash player wants us to note a reference number, we can store it here.
Posting No. Series: When the Journal Template has a Posting No. Series, it is populated here to be used when posting.
Bill-to Customer No.: This determines who is paying for the reservation. We will inherit this from the squash player.

So now we have a place to enter reservations, but we have something to do before we can start doing this. Some fields were determined to be inherited or calculated:

The time fields need calculation logic to prevent people from entering wrong values
The Unit Price should be calculated
The Unit Cost, posting groups, and Bill-to Customer No. need to be inherited

As a final cherry on top, we will look at implementing dimensions.

Time calculation

When it comes to time, we only want to allow specific start and end times. Our squash court can be used in blocks of half an hour.
The Quantity field should be calculated based on the entered times and vice versa. To have the most flexible solution possible, we will create a new table with the allowed starting and ending times. This table will have two fields: Reservation Time and Duration. The Duration field will be a decimal field that we will promote to a SumIndexField, which will enable us to use SIFT to calculate the quantity. When populated, the table will look like this:

The time fields in the squash journal table will now get a table relation with this table. This prevents a user from entering values that are not in the table; thus, only valid starting and ending times are allowed. This is all done without any C/AL code and remains flexible if the times change later. Now, we need some code that calculates the quantity based on the input:

From Time - OnValidate()
CalcQty;

To Time - OnValidate()
CalcQty;

CalcQty()
IF ("From Time" <> 0T) AND ("To Time" <> 0T) THEN BEGIN
  IF "To Time" <= "From Time" THEN
    FIELDERROR("To Time");
  ResTime.SETRANGE("Reservation Time", "From Time", "To Time");
  ResTime.FIND('+');
  ResTime.NEXT(-1);
  ResTime.SETRANGE("Reservation Time", "From Time", ResTime."Reservation Time");
  ResTime.CALCSUMS(Duration);
  VALIDATE(Quantity, ResTime.Duration);
END;

When a user enters a value in the From Time or To Time fields, the CalcQty function is executed. It checks whether both fields have a value and then checks whether To Time is larger than From Time. Then we place a filter on the Reservation Time table. If a user makes a reservation from 8:00 to 9:00, there would be three records in the filter (8:00, 8:30, and 9:00), making the result of CALCSUMS (the total of all records) a duration of 1.5. Therefore, we find the reservation time just before To Time and narrow the filter to end there, so that the summed duration is correct. This example shows how easy it is to use built-in Microsoft Dynamics NAV functionality such as table relations and CALCSUMS instead of the complex time calculations that we could also have used.

Price calculation

There is a special technique to determine prices.
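The essence of that technique, collecting candidate prices by filtering on their parameters, taking the lowest match, and falling back to a default when nothing matches, can be sketched outside C/AL. The following Python sketch is an illustration only; the field names member, court_no, and unit_price are assumptions, not the actual table layout:

```python
# Toy best-price lookup: filter candidate prices on their
# parameters, then pick the lowest matching unit price.
FALLBACK_PRICE = 12.0  # e.g., the squash court's own unit price

def best_unit_price(prices, is_member, court_no):
    # Keep only prices whose parameters match (None acts as a wildcard)
    candidates = [
        p for p in prices
        if p["member"] == is_member
        and p["court_no"] in (None, court_no)
    ]
    if not candidates:
        return FALLBACK_PRICE
    # If several prices overlap, the lowest one wins
    return min(p["unit_price"] for p in candidates)

prices = [
    {"member": True,  "court_no": None, "unit_price": 8.0},
    {"member": True,  "court_no": "C1", "unit_price": 6.5},
    {"member": False, "court_no": None, "unit_price": 10.0},
]
print(best_unit_price(prices, True, "C1"))   # 6.5: court-specific member rate
print(best_unit_price(prices, False, "C2"))  # 10.0: general non-member rate
```

NAV implements the same idea with SETRANGE and SETFILTER on the price table and a buffer table of matching records, as the FindSquashPrice code below shows.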
Prices are stored in a table with all possible parameters as fields, and by filtering down on these fields, the best price is determined, if required with extra logic to find the lowest (or highest) price when more than one price is found. To look at, learn from, and love this part of the standard application, we have used table Sales Price (7002) and codeunit Sales Price Calc. Mgt. (7000), even though we only need a small part of this functionality. This mechanism of price calculation is used throughout the application and offers a normalized way of calculating sales prices. A similar construction is used for purchase prices, with table Purchase Price (7012) and codeunit Purch. Price Calc. Mgt. (7010).

Squash prices

In our case, we have already determined that we have a special rate for members, but let's say we also have special rates for daytime and evening in winter and summer. This could make our table look like this:

We can make special prices for members with dates for winter and summer, and make a price valid only until a certain time. We can also make a special price for a specific court. This table could be creatively expanded with all kinds of codes until we end up with table Sales Price (7002) in the standard product, which was the template for our example.

Price Calc Mgt. codeunit

To calculate the price, we need a codeunit similar to the one in the standard product. This codeunit is called with a squash journal line record; it stores all valid prices in a buffer table and then finds the lowest price if there is overlap.
FindSquashPrice()
WITH FromSquashPrice DO BEGIN
  SETFILTER("Ending Date",'%1|>=%2',0D,StartingDate);
  SETRANGE("Starting Date",0D,StartingDate);

  ToSquashPrice.RESET;
  ToSquashPrice.DELETEALL;

  SETRANGE(Member, IsMember);

  SETRANGE("Ending Time", 0T);
  SETRANGE("Squash Court No.", '');
  CopySquashPriceToSquashPrice(FromSquashPrice,ToSquashPrice);

  SETRANGE("Ending Time", 0T);
  SETRANGE("Squash Court No.", CourtNo);
  CopySquashPriceToSquashPrice(FromSquashPrice,ToSquashPrice);

  SETRANGE("Squash Court No.", '');
  IF StartingTime <> 0T THEN BEGIN
    SETFILTER("Ending Time",'%1|>=%2',000001T,StartingTime);
    CopySquashPriceToSquashPrice(FromSquashPrice,ToSquashPrice);
  END;

  SETRANGE("Squash Court No.", CourtNo);
  IF StartingTime <> 0T THEN BEGIN
    SETFILTER("Ending Time",'%1|>=%2',000001T,StartingTime);
    CopySquashPriceToSquashPrice(FromSquashPrice,ToSquashPrice);
  END;
END;

If there is no price in the filter, it uses the unit price from the squash court, as shown:

CalcBestUnitPrice()
WITH SquashPrice DO BEGIN
  FoundSquashPrice := FINDSET;
  IF FoundSquashPrice THEN BEGIN
    BestSquashPrice := SquashPrice;
    REPEAT
      IF SquashPrice."Unit Price" < BestSquashPrice."Unit Price" THEN
        BestSquashPrice := SquashPrice;
    UNTIL NEXT = 0;
  END;
END;

// No price found in the agreement
IF BestSquashPrice."Unit Price" = 0 THEN
  BestSquashPrice."Unit Price" := SquashCourt."Unit Price";

SquashPrice := BestSquashPrice;

Inherited data

To use the journal for the product part of the application, we want to inherit some of the fields from the master data tables. To make that possible, we need to copy these fields from other tables to our master data tables and populate them. In our example, we can copy and paste the fields from the Resource table (156). We also need to add code to the OnValidate triggers in the journal line table. The squash court table, for example, is expanded with the fields Unit Cost, Unit Price, Gen. Prod.
Posting Group, and VAT Prod. Posting Group, as shown in the preceding screenshot. We can now add code to the OnValidate trigger of the Squash Court No. field in the journal line table.

Squash Court No. - OnValidate()
IF SquashCourt.GET("Squash Court No.") THEN BEGIN
  Description := SquashCourt.Description;
  "Unit Cost" := SquashCourt."Unit Cost";
  "Gen. Prod. Posting Group" := SquashCourt."Gen. Prod. Posting Group";
  FindSquashPlayerPrice;
END;

Please note that the unit price is used in the Squash Price Calc. Mgt. codeunit, which is executed from the FindSquashPlayerPrice function.

Dimensions

In Microsoft Dynamics NAV, dimensions are defined on master data and posted to the ledger entries to be used in analysis view entries. We will now discuss how to analyze the data generated by dimensions. On that journey, dimensions move around a lot in the following tables:

Table 348 | Dimension: This is where the main dimension codes are defined.
Table 349 | Dimension Value: Each dimension can have an unlimited number of values.
Table 350 | Dimension Combination: In this table, we can block certain combinations of dimension codes.
Table 351 | Dimension Value Combination: In this table, we can block certain combinations of dimension values. If this table is populated, the value Limited is shown in the dimension combination table for these dimensions.
Table 352 | Default Dimension: This table is populated for all master data that has dimensions defined.
Table 354 | Default Dimension Priority: When more than one master data record in one transaction has the same dimensions, it is possible to set priorities here.
Table 480 | Dimension Set Entry: This table contains a matrix of all used dimension combinations.
Codeunit 408 | Dimension Management: This codeunit is the single point in the application where all dimension movement is done.
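Among these, the Dimension Set Entry table (480) deserves a closer look: each distinct combination of dimension values is stored only once and referenced by an ID. That deduplication idea can be sketched in Python as a conceptual illustration; this is not NAV's actual implementation, and the class and method names are invented:

```python
class DimensionSets:
    """Stores each distinct set of (dimension, value) pairs only once."""
    def __init__(self):
        self._sets = {}    # frozenset of pairs -> set ID
        self.entries = []  # set ID -> its pairs, like Dimension Set Entry

    def get_set_id(self, dims):
        # The same combination always maps to the same ID,
        # regardless of the order the dimensions are given in
        key = frozenset(dims.items())
        if key not in self._sets:
            self._sets[key] = len(self.entries)
            self.entries.append(dict(dims))
        return self._sets[key]

sets = DimensionSets()
a = sets.get_set_id({"DEPARTMENT": "SALES", "PROJECT": "SQUASH"})
b = sets.get_set_id({"PROJECT": "SQUASH", "DEPARTMENT": "SALES"})
c = sets.get_set_id({"DEPARTMENT": "ADMIN"})
print(a == b)             # True: the same combination reuses one ID
print(len(sets.entries))  # 2
```

Ledger entries then only need to carry a single dimension set ID instead of one record per dimension value.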
In our application, dimensions are moved from the squash player, squash court, and customer tables via the squash journal line to the squash ledger entries. When we create an invoice, we move the dimensions from the ledger entries to the sales line table.

Master data

To connect dimensions to master data, we first need to allow this by changing codeunit 408, Dimension Management.

SetupObjectNoList()
TableIDArray[1] := DATABASE::"Salesperson/Purchaser";
TableIDArray[2] := DATABASE::"G/L Account";
TableIDArray[3] := DATABASE::Customer;
...
TableIDArray[22] := DATABASE::"Service Item Group";
TableIDArray[23] := DATABASE::"Service Item";

//* Squash Application
TableIDArray[49] := DATABASE::"Squash Player";
TableIDArray[50] := DATABASE::"Squash Court";
//* Squash Application

Object.SETRANGE(Type,Object.Type::Table);

FOR Index := 1 TO ARRAYLEN(TableIDArray) DO BEGIN
...

The TableIDArray variable has a default size of 23 entries, which we have changed to 50. By leaving gaps, we allow Microsoft to add master data tables in the future without us having to change our code. Without this change, the system would return the following error message when we try to use dimensions:

The next change is to add the Global Dimension fields to the master data tables. They can be copied and pasted from other master data tables. When these fields are validated, the ValidateShortcutDimCode function is executed as follows:

ValidateShortcutDimCode()
DimMgt.ValidateDimValueCode(FieldNumber,ShortcutDimCode);
DimMgt.SaveDefaultDim(DATABASE::"Squash Player","No.", FieldNumber,ShortcutDimCode);
MODIFY;

Summary

In this article, we gained a better understanding of how journals and ledger entries work throughout the system and how to create your own journal application. You also learned how to reverse engineer the standard application to learn from it and apply this to your own customizations.
Resources for Article: Further resources on this subject: Achieving site resilience for the Mailbox server [Article] Setting Up and Managing E-mails and Batch Processing [Article] Where Is My Data and How Do I Get to It? [Article]
Security Settings in Salesforce

Packt
11 Sep 2014
10 min read
In the article by Rakesh Gupta and Sagar Pareek, authors of Salesforce.com Customization Handbook, we will discuss Organization-Wide Default (OWD) and the various ways to share records. We will also discuss the various security settings in Salesforce. The following topics will be covered in this article:

Concepts of OWD
The sharing rule
Field-Level Security and its effect on data visibility
Setting up password policies

(For more resources related to this topic, see here.)

Concepts of OWD

Organization-Wide Default is also known as OWD. This is the base-level sharing setting for the objects in your organization. By using it, you can secure your data so that users can't access data they shouldn't see. The following diagram shows the basic database security in Salesforce, in which OWD plays a key role. It is the base-level object setting in the organization, and you can't go below it. So here, we will discuss OWD in Salesforce.

Let's start with an example. Sagar Pareek is the system administrator at Appiuss. His manager, Sara Barellies, told him that only the user who created or owns an account record, as well as the users higher in the role hierarchy, should be able to access it. Here, you have to think of OWD first, because it is the basic way to restrict object-level access in Salesforce. To achieve this, Sagar Pareek has to set the Organization-Wide Default for the account object to Private.

Setting up OWD

To change or update OWD for your organization, follow these steps:

Navigate to Setup | Administer | Security Controls | Sharing Settings.
From the Manage sharing settings for drop-down menu, select the object for which you want to change OWD.
Click on Edit.
From the Default Access drop-down menu, select an access level as per your business needs. For the preceding scenario, select Private, and grant access to users who are higher in the role hierarchy by selecting Grant access using hierarchy.
For standard objects, it is automatically selected, and for custom objects, you have the option to select it. Finally, click on Save.

The following table describes the various types of OWD access and their respective descriptions:

Private: Only the owner of the records and the users higher in the role hierarchy are able to access and report on the records.
Public Read Only: All users can view the records, but only the owners and the users higher in the role hierarchy can edit them.
Public Read/Write: All users can view, edit, and report on all records.
Public Read/Write/Transfer: All users can view, edit, transfer, and report on all records. This is only available for the case and lead objects.
Controlled by Parent: Access to a child object's records is controlled by its parent.
Public Full Access: This is available for campaigns. All users can view, edit, transfer, and report on all records.

You can assign this access to campaigns, accounts, cases, contacts, contracts, leads, opportunities, users, and custom objects. This feature is only available in the Professional, Enterprise, Unlimited, Performance, Developer, and Database.com Editions.

Basic OWD settings for objects

Whenever you buy your Salesforce instance, it comes with predefined OWD settings for the standard objects. You can change them at any time by following the path Setup | Administer | Security Controls | Sharing Settings. The following table describes the default access for each object:

Account: Public Read/Write
Activity: Private
Asset: Public Read/Write
Campaign: Public Full Access
Case: Public Read/Write/Transfer
Contact: Controlled by Parent (that is, Account)
Contract: Public Read/Write
Custom Object: Public Read/Write
Lead: Public Read/Write/Transfer
Opportunity: Public Read Only
Users: Public Read Only (Private for external users)

Let's continue with another example. Sagar Pareek is the system administrator at Appiuss.
His manager, Sara Barellies, told him that only the users who created records for the demo object can access those records, and no one else should have the power to view, edit, or delete them. To do this, you have to change the OWD for the demo object to Private, and not select Grant Access Using Hierarchy. When you select the Grant Access Using Hierarchy option, it provides access to people who are higher in the role hierarchy.

Sharing Rule

To open up record-level access for a group of users, roles, or roles and subordinates beyond OWD, you can use Sharing Rules. Sharing Rules are used to open up access; you can't use them to restrict access. Let's start with an example. Sagar Pareek is the system administrator at Appiuss. His manager, Sara Barellies, wants every user in the organization to be able to view the account records, but only a group of users (who do not all belong to the same role or have the same profile) to be able to edit them. To solve the preceding business requirement, you have to follow these steps:

First, change the account OWD to Public Read Only by following the path Setup | Administer | Security Controls | Sharing Settings, so that all users in the organization can view the account records.
Now, create a public group named Account access and add users as per the business requirement. To create a public group, follow the path Name | Setup | Administration Setup | Manage Users | Public Groups.
Finally, you have to create a sharing rule. To create sharing rules, follow the path Setup | Administer | Security Controls | Sharing Settings, and navigate to the list related to Account Sharing Rules.
Click on New, and it will redirect you to a new window where you have to enter Label, Rule Name, and Description (always write a description so that other administrators or developers know why this rule was created). Then, for Rule Type, select Based on criteria.
Select the criteria by which records are to be shared, and create a criterion that all records fall under (such as Account Name not equal to null). Select Public Groups in the Share with option, along with your group name. Select the level of access for the users; here, select Read/Write from the drop-down menu of Default Account, Contract and Asset Access. Finally, it will look like the following screenshot:

Types of Sharing Rules

What we did to solve the preceding business requirement is called a sharing rule. There is a limitation on Sharing Rules: you can create only 50 criteria-based Sharing Rules and 300 Sharing Rules in total (both owner- and criteria-based) per object. The following are the types of Sharing Rules in Salesforce:

Manual Sharing: A Sharing button is enabled on the record detail page only when OWD is set to Private or Public Read Only for the object. Record owners, or users who are higher in the role hierarchy, can share records with other users. For the last business use case, we changed the account OWD to Public Read Only, so if you navigate to an account record's detail page, you can see the Sharing button. Click on the Sharing button, and it will redirect you to a new window. Now, click on Add, and you are ready to share records with the following:

Public groups
Users
Roles
Roles and subordinates

Select the access type for each object and click on Save. It will look like what is shown in the following screenshot. The Lead and Case Sharing buttons are enabled when OWD is Private, Public Read Only, or Public Read/Write.

Apex Sharing: When all other Sharing Rules can't fulfill your requirements, you can use the Apex sharing method to share records. It gives you the flexibility to handle complex sharing. Apex-managed sharing is a type of programmatic sharing that allows you to define a custom sharing reason to associate with your programmatic share.
Standard Salesforce objects support programmatic sharing, while custom objects support Apex-managed sharing.

Field-Level Security and its effect on data visibility

The data in fields is very important for any organization, and often it should be visible only to specific users. In Salesforce, you can use Field-Level Security to make fields hidden or read-only for a specific profile. There are three ways in Salesforce to set Field-Level Security:

From an object-field
From a profile
Field accessibility

From an object-field

Let's start with an example. Sagar Pareek is the system administrator at Appiuss. His manager, Sara Barellies, wants to create a field (Phone) on the account object and make this field read-only for all users while still allowing system administrators to edit it. To solve this business requirement, follow these steps:

Navigate to Setup | Customize | Account | Fields, and then click on the Phone field (it's a hyperlink). It will redirect you to the detail page of the Phone field; you will see a page like the following screenshot:
Click on the Set Field-Level Security button, and it will redirect you to a new page where you can set the Field-Level Security.
Select Visible and Read-Only for all the profiles other than that of the system administrator. For the system administrator, select only Visible.
Click on Save.

If you select Read-Only, the Visible checkbox will automatically get selected.

From a profile

Similarly, you can achieve the same result for Field-Level settings from a profile. Let's achieve the preceding business use case through the profile. To do this, follow these steps:

Navigate to Setup | Administer | Manage Users | Profile, go to the System Administrator profile, and click on it.
Now, you are on the profile detail page. Navigate to the Field-Level Security section. It will look like the following screenshot:
Click on the View link beside the Account option.
It will open the Account Field-Level Security page for the profile. Click on the Edit button and edit the Field-Level Security as we did in the previous section.

Field accessibility

We can achieve the same outcome by using field accessibility. To do this, follow these steps:

Navigate to Setup | Administer | Security Controls | Field Accessibility.
Click on the object name; in our case, it's Account. It will redirect you to a new page where you can select View by Fields or View by Profiles.
In our case, select View by Fields and then select the field Phone.
Click on the editable link as shown in the following screenshot. It will open the Access Settings for Account Field page, where you can edit the Field-Level Security.
Once done, click on Save.

Setting up password policies

For security purposes, Salesforce provides options to set password policies for the organization. Let's start with an example. Sagar Pareek, the system administrator of an organization, has decided to create a password policy for the organization: each user's password must be 10 characters long and must be a combination of alphanumeric and special characters. To do this, he will have to follow these steps:

Navigate to Setup | Security Controls | Password Policies. It will open the Password Policies setup page.
In the Minimum password length field, select 10 characters.
In the Password complexity requirement field, select Must mix Alpha, numeric and special characters.
Here, you can also decide when passwords should expire (under the User passwords expire in option), enforce the password history (under the Enforce password history option), and set a password question requirement as well as the number of invalid login attempts allowed and the lockout period.
Click on Save.

Summary

In this article, we went through the various security setting features available in Salesforce. Starting with OWD, followed by Sharing Rules and Field-Level Security, we also covered password policy concepts.
Resources for Article: Further resources on this subject: Introducing Salesforce Chatter [Article] Salesforce CRM Functions [Article] Adding a Geolocation Trigger to the Salesforce Account Object [Article]
LiveCode: Loops and Timers

Packt
10 Sep 2014
9 min read
In this article by Dr Edward Lavieri, author of LiveCode Mobile Development Cookbook, you will learn how to use timers and loops in your mobile apps. Timers can be used for many different functions, including a basketball shot clock, car racing times, the length of time logged into a system, and so much more. Loops are useful for counting and iterating through lists. All of this will be covered in this article.

(For more resources related to this topic, see here.)

Implementing a countdown timer

To implement a countdown timer, we will create two objects: a field to display the current timer and a button to start the countdown. We will code two handlers: one for the button and one for the timer.

How to do it...

Perform the following steps to create a countdown timer:

Create a new main stack.
Place a field on the stack's card and name it timerDisplay.
Place a button on the stack's card and name it Count Down.
Add the following code to the Count Down button:

on mouseUp
  local pTime
  put 19 into pTime
  put pTime into fld "timerDisplay"
  countDownTimer pTime
end mouseUp

Add the following code to the Count Down button:

on countDownTimer currentTimerValue
  subtract 1 from currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue > 0 then
    send "countDownTimer" && currentTimerValue to me in 1 sec
  end if
end countDownTimer

Test the code using a mobile simulator or an actual device.

How it works...

To implement our timer, we created a simple callback situation where the countDownTimer method will be called each second until the timer reaches zero. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior.

There's more...

LiveCode provides us with the send command, which allows us to transfer messages to handlers and objects immediately or at a specific time, such as in this recipe's example.
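The callback pattern in this recipe, where each tick schedules the next one instead of blocking in a repeat loop, is not specific to LiveCode. It can be sketched in Python using threading.Timer as a stand-in for LiveCode's send ... in 1 sec; the CountdownTimer class and on_tick callback are illustrative assumptions, not part of the recipe:

```python
import threading

class CountdownTimer:
    """Re-schedules itself each tick instead of blocking in a loop,
    mirroring LiveCode's `send ... in 1 sec` callback pattern."""
    def __init__(self, start, interval=1.0, on_tick=None):
        self.value = start
        self.interval = interval
        self.on_tick = on_tick or (lambda v: None)
        self.done = threading.Event()

    def start(self):
        self.on_tick(self.value)  # show the initial value
        self._schedule()

    def _schedule(self):
        # Like `send "countDownTimer" ... in 1 sec`: returns immediately
        threading.Timer(self.interval, self._tick).start()

    def _tick(self):
        self.value -= 1
        self.on_tick(self.value)
        if self.value > 0:
            self._schedule()  # schedule the next tick
        else:
            self.done.set()

# Count down from 3 with a fast interval so the demo finishes quickly
history = []
t = CountdownTimer(3, interval=0.01, on_tick=history.append)
t.start()
t.done.wait(timeout=2)
print(history)  # [3, 2, 1, 0]
```

Because each tick only schedules the next one, the main thread (or, in LiveCode, the message queue) stays free to handle other events while the timer runs.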
Implementing a count-up timer

To implement a count-up timer, we will create two objects: a field to display the current timer and a button to start the upwards counting. We will code two handlers: one for the button and one for the timer.

How to do it...

Perform the following steps to implement a count-up timer:

Create a new main stack.
Place a field on the stack's card and name it timerDisplay.
Place a button on the stack's card and name it Count Up.
Add the following code to the Count Up button:

on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp

Add the following code to the Count Up button:

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 10 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer

Test the code using a mobile simulator or an actual device.

How it works...

To implement our timer, we created a simple callback situation where the countUpTimer method will be called each second until the timer reaches 10. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior.

There's more...

Timers can be tricky, especially on mobile devices. For example, using the repeat loop control when working with timers is not recommended, because repeat blocks other messages.

Pausing a timer

It can be important to have the ability to stop or pause a timer once it is started. The difference between stopping and pausing a timer lies in keeping track of where the timer was when it was interrupted. In this recipe, you'll learn how to pause a timer. Of course, if you never resume the timer, then pausing it has the same effect as stopping it.

How to do it...

Use the following steps to create a count-up timer and pause function:

Create a new main stack.
Place a field on the stack's card and name it timerDisplay.
Place a button on the stack's card and name it Count Up.
Add the following code to the Count Up button:

on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp

Add the following code to the Count Up button:

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer

Add a button to the card and name it Pause.
Add the following code to the Pause button:

on mouseUp
  repeat for each line i in the pendingMessages
    cancel (item 1 of i)
  end repeat
end mouseUp

In LiveCode, the pendingMessages option returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered.

To test this, first click on the Count Up button, and then click on the Pause button before the timer reaches 60.

How it works...

We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler.

Resuming a timer

If you have a timer as part of your mobile app, you will most likely want the user to be able to pause and resume it, either directly or through in-app actions. See the previous recipes in this article to create and pause a timer. This recipe covers how to resume a timer once it is paused.

How to do it...

Perform the following steps to resume a timer once it is paused:

Create a new main stack.
Place a field on the stack's card and name it timerDisplay.
Place a button on the stack's card and name it Count Up.
Add the following code to the Count Up button:

on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer

Add a button to the card and name it Pause.
Add the following code to the Pause button:

on mouseUp
  repeat for each line i in the pendingMessages
    cancel (item 1 of i)
  end repeat
end mouseUp

Place a button on the card and name it Resume.
Add the following code to the Resume button:

on mouseUp
  local pTime
  put the text of fld "timerDisplay" into pTime
  countUpTimer pTime
end mouseUp

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer

To test this, first click on the Count Up button, then click on the Pause button before the timer reaches 60, and finally click on the Resume button.

How it works...

We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. When the Resume button is clicked, the current value of the timer, taken from the timerDisplay field, is used to continue incrementing the timer. In LiveCode, pendingMessages returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered.

Using a loop to count

There are numerous reasons why you might want to implement a counter in a mobile app. You might want to count the number of items on a screen (for example, gold pieces in a game), the number of players using your app simultaneously, and so on. One of the easiest methods of counting is to use a loop.
This recipe shows you how to easily implement a loop.

How to do it...

Use the following steps to instantiate a loop that counts:

Create a new main stack.
Rename the stack's default card to MainScreen.
Drag a label field to the card and name it counterDisplay.
Drag five checkboxes to the card and place them anywhere. Change their names to 1, 2, 3, 4, and 5.
Drag a button to the card and name it Loop to Count.
Add the following code to the Loop to Count button:

on mouseUp
  local tButtonNumber
  put the number of buttons on this card into tButtonNumber
  if tButtonNumber > 0 then
    repeat with tLoop = 1 to tButtonNumber
      set the label of btn value(tLoop) to "Changed " & tLoop
    end repeat
    put "Number of button's changed: " & tButtonNumber into fld "counterDisplay"
  end if
end mouseUp

Test the code by running it in a mobile simulator or on an actual device.

How it works...

In this recipe, we created several buttons on a card. Next, we wrote code to count the number of buttons, with a repeat control structure to sequence through the buttons and change their labels.

Using a loop to iterate through a list

In this recipe, we will create a loop to iterate through a list of text items. Our list will be a to-do or action list. Our loop will process each line and number them on screen. This type of loop can be useful when you need to process lists of unknown lengths.

How to do it...

Perform the following steps to create an iterative loop:

Create a new main stack.
Drag a scrolling list field to the stack's card and name it myList.
Change the contents of the myList field to the following, paying special attention to the upper- and lowercase values of each line:

Wash Truck
Write Paper
Clean Garage
Eat Dinner
Study for Exam

Drag a button to the card and name it iterate.
Add the following code to the iterate button:

on mouseUp
  local tLines
  put the number of lines of fld "myList" into tLines
  repeat with tLoop = 1 to tLines
    put tLoop & " - " & line tLoop of fld "myList" into line tLoop of fld "myList"
  end repeat
end mouseUp

Test the code by clicking on the iterate button.

How it works...

We used the repeat control structure to iterate through a list field one line at a time. This was accomplished by first determining the number of lines in the list field, and then setting the repeat control structure to sequence through those lines.

Summary

In this article, we examined the LiveCode scripting required to implement and control count-up and countdown timers. We also learned how to use loops to count and iterate through a list.

Resources for Article: Further resources on this subject: Introduction to Mobile Forensics [article] Working with Pentaho Mobile BI [article] Building Mobile Apps [article]
The physics engine

In this article by Martin Varga, the author of Learning AndEngine, we will look at the physics in AndEngine. (For more resources related to this topic, see here.) AndEngine uses the Android port of the Box2D physics engine. Box2D is very popular in games, including the most popular ones such as Angry Birds, and many game engines and frameworks use Box2D to simulate physics. It is free, open source, and written in C++, and it is available on multiple platforms. AndEngine offers a Java wrapper API for the C++ Box2D backend, and therefore, no prior C++ knowledge is required to use it. Box2D can simulate 2D rigid bodies. A rigid body is a simplification of a solid body with no deformations. Such objects do not exist in reality, but if we limit the bodies to those moving much slower than the speed of light, we can say that solid bodies are also rigid. Box2D uses real-world units and works with physics terms. A position in a scene in AndEngine is defined in pixel coordinates, whereas in Box2D, it is defined in meters. AndEngine uses a pixel to meter conversion ratio. The default value is 32 pixels per meter. Basic terms Box2D works with something we call a physics world. There are bodies and forces in the physics world. 
Every body in the simulation has the following basic properties:

- Position
- Orientation
- Mass (in kilograms)
- Velocity (in meters per second)
- Torque (or angular velocity in radians per second)

Forces are applied to bodies, and the following Newton's laws of motion apply:

- The first law, An object that is not moving or moving with constant velocity will stay that way until a force is applied to it, can be tweaked a bit
- The second law, Force is equal to mass multiplied by acceleration, is especially important to understand what will happen when we apply force to different objects
- The third law, For every action, there is an equal and opposite reaction, is a bit flexible when using different types of bodies

Body types

There are three different body types in Box2D, and each one is used for a different purpose. The body types are as follows:

- Static body: This doesn't have velocity, and forces do not apply to a static body. If another body collides with a static body, it will not move. Static bodies do not collide with other static and kinematic bodies. Static bodies usually represent walls, floors, and other immobile things. In our case, they will represent platforms which don't move.
- Kinematic body: This has velocity, but forces don't apply to it. If a kinematic body is moving and a dynamic body collides with it, the kinematic body will continue in its original direction. Kinematic bodies also do not collide with other static and kinematic bodies. Kinematic bodies are useful to create moving platforms, which is exactly how we are going to use them.
- Dynamic body: A dynamic body has velocity, and forces apply to it. Dynamic bodies are the closest to real-world bodies, and they collide with all types of bodies. We are going to use a dynamic body for our main character.

It is important to understand the consequences of choosing each body type.
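One consequence of the body-type choice can be sketched with a little plain JavaScript (a hypothetical illustration of the rules above, not Box2D's actual API): over a single simulation step, only a dynamic body's velocity is changed by gravity, while static and kinematic bodies ignore it.

```javascript
// Hypothetical sketch of how the three Box2D body types respond to
// gravity during one step of length dt (in seconds): static bodies
// never move, kinematic bodies keep their set velocity, and only
// dynamic bodies accelerate (v = v + g * dt).
function stepVelocity(bodyType, velocity, gravity, dt) {
  if (bodyType === 'static') return 0;            // never moves
  if (bodyType === 'kinematic') return velocity;  // unaffected by forces
  return velocity + gravity * dt;                 // dynamic body
}

console.log(stepVelocity('static', 0, 9.8, 1));    // 0
console.log(stepVelocity('kinematic', 2, 9.8, 1)); // 2
console.log(stepVelocity('dynamic', 0, 9.8, 1));   // 9.8
```

This mirrors what happens in the engine: a falling platform never occurs with a static body, and a moving platform (kinematic) glides through a gravity field unchanged.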
When we define gravity in Box2D, it will pull all dynamic bodies in the direction of the gravitational acceleration, but static bodies will remain still, and kinematic bodies will either remain still or keep moving in their set direction as if there were no gravity.

Fixtures

Every body is composed of one or more fixtures. Each fixture has the following four basic properties:

- Shape: In Box2D, fixtures can be circles, rectangles, and polygons
- Density: This determines the mass of the fixture
- Friction: This plays a major role in body interactions
- Elasticity: This is sometimes called restitution and determines how bouncy the object is

There are also special properties of fixtures, such as filters and filter categories, and a single Boolean property called sensor.

Shapes

The position of fixtures and their shapes in the body determine the overall shape, mass, and center of gravity of the body. The upcoming figure is an example of a body that consists of three fixtures. The fixtures do not need to connect. They are part of one body, and that means their positions relative to each other will not change. The red dot represents the body's center of gravity. The green rectangle is a static body and the other three shapes are part of a dynamic body. Gravity pulls the whole body down, but the square will not fall.

Density

Density determines how heavy the fixtures are. Because Box2D is a two-dimensional engine, we can imagine all objects to be one meter deep. In fact, it doesn't matter as long as we are consistent. There are two bodies, each with a single circle fixture, in the following figure. The left circle is exactly twice as big as the right one, but the right one has double the density of the first one. The triangle is a static body, and the rectangle and the circles are dynamic, creating a simple scale. When the simulation is run, the scales are balanced.

Friction

Friction defines how slippery a surface is.
A body can consist of multiple fixtures with different friction values. When two bodies collide, the final friction is calculated from the point of collision based on the colliding fixtures. Friction can be given a value between 0 and 1, where 0 means completely frictionless and 1 means super strong friction. Let's say we have a slope which is made of a body with a single fixture that has a friction value of 0.5, as shown in the following figure:

The other body consists of a single square fixture. If its friction is 0, the body slides very fast all the way down. If the friction is more than 0, then it would still slide, but slow down gradually. If the value is more than 0.25, it would still slide but not reach the end. Finally, with friction close to 1, the body will not move at all.

Elasticity

The coefficient of restitution is a ratio between the speeds before and after a collision, and for simplicity, we can call the material property elasticity. In the following figure, there are three circles and a rectangle representing a floor with restitution 0, which means not bouncy at all. The circles have restitutions (from left to right) of 1, 0.5, and 0. When this simulation is started, the three balls will fall with the same speed and touch the floor at the same time. However, after the first bounce, the first one will move upwards and climb all the way to the initial position. The middle one will bounce a little and keep bouncing less and less until it stops. The right one will not bounce at all. The following figure shows the situation after the first bounce:

Sensor

When we need a fixture that detects collisions but is otherwise not affected by them and doesn't affect other fixtures and bodies, we use a sensor. A goal line in a 2D air hockey top-down game is a good example of a sensor. We want it to detect the disc passing through, but we don't want it to prevent the disc from entering the goal.
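The bouncing-ball behaviour described under Elasticity can be checked with a little arithmetic: the rebound speed is the restitution times the impact speed, and since height under constant gravity is proportional to speed squared, the rebound height is the restitution squared times the drop height. A plain-JavaScript sketch (illustrative only, not an engine API):

```javascript
// Rebound height after one bounce: h' = e^2 * h, where e is the
// coefficient of restitution and h is the drop height in meters.
function bounceHeight(restitution, dropHeight) {
  return restitution * restitution * dropHeight;
}

console.log(bounceHeight(1, 2));   // 2   -- climbs back to the initial height
console.log(bounceHeight(0.5, 2)); // 0.5 -- bounces a little
console.log(bounceHeight(0, 2));   // 0   -- does not bounce at all
```

This matches the figure: the e = 1 ball returns to its starting position, the e = 0.5 ball reaches only a quarter of the drop height, and the e = 0 ball stays on the floor.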
The physics world

The physics world is the whole simulation, including all bodies with their fixtures, gravity, and other settings that influence the performance and quality of the simulation. Tweaking the physics world settings is important for large simulations with many objects. These settings include the number of steps performed per second and the number of velocity and position iterations per step.

The most important setting is gravity, which is determined by a vector of gravitational acceleration. Gravity in Box2D is simplified, but for the purpose of games, it is usually enough. Box2D works best when simulating a relatively small scene where objects are a few tens of meters big at most. To simulate, for example, a planet's (radial) gravity, we would have to implement our own gravitational force and turn the Box2D built-in gravity off.

Forces and impulses

Both forces and impulses are used to make a body move. Gravity is nothing else but a constant application of a force. While it is possible to set the position and velocity of a body in Box2D directly, it is not the right way to do it, because it makes the simulation unrealistic. To move a body properly, we need to apply a force or an impulse to it. These two things are almost the same. While forces are added to all the other forces and change the body velocity over time, impulses change the body velocity immediately. In fact, an impulse is defined as a force applied over time. We can imagine a foam ball falling from the sky. When the wind starts blowing from the left, the ball will slowly change its trajectory. An impulse is more like a tennis racket that hits the ball in flight and changes its trajectory immediately.

There are two types of forces and impulses: linear and angular. Linear makes the body move left, right, up, and down, and angular makes the body spin around its center. Angular force is called torque.
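The "force applied over time" definition can be made concrete with basic mechanics: an impulse J = F * t changes the velocity of a body of mass m by J / m (which follows from F = m * a). A small plain-JavaScript sketch (illustrative arithmetic, not a Box2D call):

```javascript
// An impulse is a force applied over time: J = F * t (in newton-seconds).
function impulseFromForce(forceNewtons, seconds) {
  return forceNewtons * seconds;
}

// The resulting change in velocity of a body of mass m is J / m.
function velocityChangeFromImpulse(impulse, massKg) {
  return impulse / massKg;
}

// A 6 N force applied for 2 s equals a single 12 N*s impulse: both give
// a 3 kg body the same 4 m/s change in velocity -- the force just
// delivers it gradually, while the impulse delivers it at once.
console.log(velocityChangeFromImpulse(impulseFromForce(6, 2), 3)); // 4
```

This is exactly the foam-ball analogy: the wind (a force) accumulates the same velocity change gradually that the tennis racket (an impulse) delivers in an instant.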
Linear forces and impulses are applied at a given point, which will have different effects based on the position. The following figure shows a simple body with two fixtures and quite high friction, something like a carton box on a carpet.

First, we apply force to the center of the large square fixture. When the force is applied, the body simply moves on the ground to the right a little. This is shown in the following figure:

Second, we try to apply force to the upper-right corner of the large box. This is shown in the following figure:

Using the same force at a different point, the body will be toppled to the right side. This is shown in the following figure:
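The toppling behaviour above follows from basic mechanics: a force applied at a perpendicular distance r from the center of gravity produces a torque tau = r * F, while a push straight through the center produces none. A plain-JavaScript sketch (illustrative only, not an engine API):

```javascript
// Torque from an off-center force: tau = r * F, where r is the
// perpendicular distance (meters) from the center of gravity to the
// force's line of action, and F is the force in newtons.
function torque(leverArmMeters, forceNewtons) {
  return leverArmMeters * forceNewtons;
}

console.log(torque(0, 20));   // 0  -- force through the center: no rotation
console.log(torque(0.5, 20)); // 10 -- same force at the corner: the box tips
```

This is why the same push slides the box when aimed at its center but topples it when aimed at the upper-right corner.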
Packt
04 Sep 2014
27 min read

Working with Data Access and File Formats Using Node.js

In this article by Surendra Mohan, the author of Node.js Essentials, we will cover the following concepts:

- Reading and writing files using Node.js
- MySQL database handling using Node.js
- Working with data formats using Node.js

Let's get started! (For more resources related to this topic, see here.)

Reading and writing files

The easiest and most convenient way of reading a file in a PHP application is by using the PHP file_get_contents() API function. Let's look into the following example PHP code snippet, wherein we intend to read a sample text file named sampleaf.txt that resides in the same directory as that of our PHP file (sampleaf.php):

   <?php
   $text = file_get_contents('sampleaf.txt');
   print $text;
   ?>

In the preceding code snippet, if the source file, sampleaf.txt, exists and can be read, the content of this file is assigned to the $text variable (a long string); otherwise, the function returns the Boolean value false. All PHP API functions are blocking in nature, and so is the file_get_contents() API function. Thus, the PHP code that is supposed to be executed after the file_get_contents() API function call is blocked until the former either executes successfully or fails completely. There is no callback mechanism available for this PHP API.

Let's convert the preceding PHP code snippet into its corresponding Node.js code. Because the readFileSync() API function in the fs module is the closest Node.js equivalent to the PHP file_get_contents() API function, let's use it. Our Node.js code equivalent looks something like the following code snippet (sampleaf.njs):

   var fs = require('fs');
   var text = false;
   try {
      text = fs.readFileSync(__dirname+'/'+'sampleaf.txt', 'utf8');
   } catch (err) {
      // No action
   }
   console.log(text);

Node.js functions come in both asynchronous and synchronous forms, asynchronous being the default.
In our preceding Node.js code, we have appended the Sync term to the Node.js fs.readFile() API function, which is asynchronous in nature and becomes synchronous once Sync is appended to the end of it. The asynchronous version is nonblocking in nature and depends upon callbacks to take care of the Node.js API function call results. On the other hand, the synchronous version is blocking in nature (the same as our PHP code), which results in the blocking of the Node.js code that is supposed to be executed after the API function until it completely succeeds or fails.

If we look into the arguments passed to the Node.js fs.readFileSync() API function, we find the source file sampleaf.txt (prepended with the __dirname variable) that needs to be read, and utf8 stating the encoding we intend to use. The __dirname variable holds the directory name where our Node.js code file, sampleaf.njs, resides. The use of the __dirname variable instructs the Node.js fs.readFileSync() API function to locate the source file in the directory returned by this variable. If this variable is missing, our API function will try to find the source file in the directory where the Node.js server was started. By default, the second argument is omitted and no encoding is applied, which results in the function returning a raw buffer of bytes instead of a string. In order to make the function return a string, we pass the utf8 string (for UTF-8 encoding) as the second parameter to the function.

Because we are dealing with the synchronous version of the API function, we handled Node.js exceptions by using the Node.js try and catch keywords. In this case, if the try block code gets executed successfully, the catch block code will be ignored and never get executed; otherwise, the try block code will immediately stop executing, allowing the catch block code to be executed.
While replacing the PHP file_get_contents() API function with its corresponding Node.js API function, it is recommended to use the Node.js fs.readFile() API function instead of fs.readFileSync(), due to the fact that synchronous API functions (in our case, fs.readFileSync()) are blocking in nature, whereas asynchronous API functions (in our case, fs.readFile()) are not. So, let's try converting the preceding PHP code snippet to its corresponding asynchronous Node.js code by using the fs.readFile() API function. We write the following Node.js code snippet and save it in a new file with the filename sampleafa.njs:

   var fs = require('fs');
   var text = false;
   fs.readFile(__dirname+'/'+'sampleaf.txt', 'utf8', function(err, fo) {
      if (!err) {
         text = fo;
      }
      console.log(text);
   });

Our preceding asynchronous Node.js fs.readFile() API function accepts a callback function as its third argument that can return both the data as well as an error, whichever is applicable.

There is another way to read a file in Node.js. However, it doesn't match any feature available in PHP so far. We do so by creating a Node.js stream that will help us read the file. While the stream is read, events such as data, error, and close are sent along with the stream. In such scenarios, we need to set up event handlers that take care of such events.

The PHP file() API function

The file() API function helps us read the content of a file and returns it as an indexed array. The content in the array is stored in such a way that each value of the array holds a single line of the file. If we want to print the first line of our source file (sampleaf.txt) in PHP, we write the following code snippet; the printed line includes the End Of Line (EOL) character sequence at the end:

   $x = file('sampleaf.txt');
   print $x[0];

If we are using the PHP 5 version, we get an opportunity to include a second and optional parameter (the flags parameter) in our file() API function.
The flags parameter provides us with three options that can either be used individually or be combined together using the OR operator (|), and they are as follows:

- FILE_IGNORE_NEW_LINES
- FILE_SKIP_EMPTY_LINES
- FILE_USE_INCLUDE_PATH

The FILE_IGNORE_NEW_LINES flag option is normally used, and it instructs the PHP file() API function to eradicate EOL characters from the end of each line. Let's rework our preceding PHP code snippet such that it prints the first line of the sampleaf.txt file, but eradicates the EOL character sequence at the end of each value in the array. So, our modified PHP code snippet will look like the following:

   $x = file('sampleaf.txt', FILE_IGNORE_NEW_LINES);
   print $x[0];

Now it's time to convert the preceding PHP code snippet into its corresponding Node.js code snippet. Converting the PHP file() API function is a bit more complicated than converting the PHP file_get_contents() API function. The following code demonstrates the converted Node.js code snippet corresponding to our PHP code snippet:

   var fs = require('fs');
   var FILE_IGNORE_NEW_LINES = 0x2;
   var x = false;
   var flag = FILE_IGNORE_NEW_LINES;
   fs.readFile(__dirname+'/'+'sampleaf.txt', 'utf8', function(err, data) {
      if (!err) {
         x = data.replace(/\r\n?/g, '\n');
         x = x.split('\n');
         x.neol = x.length - 1;
         if ((x.length > 0) && (x[x.length-1] === '')) {
            x.splice(x.length-1, 1);
         }
         if ((flag & FILE_IGNORE_NEW_LINES) === 0) {
            for (var i=0; i < x.neol; ++i) {
               x[i] += '\n';
            }
         }
         delete x.neol;
      }
      console.log(x[0]);
   });

In the preceding Node.js code, the body of the if (!err) statement is what actually makes this code complicated.
Let's now walk through the preceding Node.js code snippet, especially the parts we have embedded in the if statement:

1. First of all, we converted the EOL characters of Linux, Windows, and Mac text files into the end-of-line character (\n) of the operating system our Node.js server is currently running on, using the following code chunk:

   x = data.replace(/\r\n?/g, '\n');

2. Then, we converted the string to an array of lines that complies with the PHP file() API function standards, using the following line of code:

   x = x.split('\n');

3. Then, we handled the last line of the file by implementing the following code snippet:

   x.neol = x.length - 1;
   if ((x.length > 0) && (x[x.length-1] === '')) {
      x.splice(x.length-1, 1);
   }

4. Finally, in the following code snippet, we check whether FILE_IGNORE_NEW_LINES has been specified or not. If it hasn't been specified, the EOL character will be added back to the end of the lines:

   if ((flag & FILE_IGNORE_NEW_LINES) === 0) {
      for (var i=0; i < x.neol; ++i) {
         x[i] += '\n';
      }
   }

File handling APIs

The core set of file handling APIs in PHP and Node.js is shaped after the C language file handling API functions. For instance, the PHP fopen() API function opens a file in different modes, such as read, write, and append. The Node.js open() API function is the equivalent of this PHP fopen() API function, and both of these API functions are shaped after fopen() from the C language. Let's consider the following PHP code snippet that opens a file for reading purposes and reads the first 500 bytes of content from the file:

   $fo = fopen('sampleaf.txt', 'r');
   $text = fread($fo, 500);
   fclose($fo);

In case the file size is less than 500 bytes, our preceding PHP code snippet will read the entire file. In Node.js, the fs.read() API function is used to read from the file, and the buffer built-in module is used to hold an ordered collection of bytes. Likewise, the Node.js fs.close() API function is used to close the file once it has been read as intended.
In order to convert the preceding PHP code snippet, we write the following Node.js code snippet:

   var fs = require('fs');
   var Buffer = require('buffer').Buffer;
   fs.open(__dirname+'/'+'sampleaf.txt', 'r', function(err, fo) {
      var text = '';
      var b = new Buffer(500);
      fs.read(fo, b, 0, b.length, null, function(err, bytesRead, buf) {
         var bufs = buf.slice(0, bytesRead);
         text += bufs.toString();
         fs.close(fo, function() {
            console.log(text);
         });
      });
   });

In our Node.js code, besides the usual callback functions, we have used a couple of buffer variables, such as the b and bufs variables, which add some complexity to our code. The b variable holds the data that is read using the Node.js fs.read() API function. The bufs variable holds the actual bytes that are read, wherein the unused bytes of the b variable are sliced off. The buf argument is an alias of the b variable.

Both PHP and Node.js maintain a file pointer that indicates the next bytes that should be read from the file. We can check for the end of the file using the PHP feof() API function. In order to implement the PHP feof() API function, we write the following PHP code snippet:

   $fo = fopen('sampleaf.txt', 'r');
   $text = '';
   while (!feof($fo)) {
      $text .= fread($fo, 500);
   }
   fclose($fo);
   print $text;

Node.js doesn't have anything that is equivalent to the PHP feof() API function. Instead, we use the bytesRead argument that is passed to the callback function and compare it with the number of bytes requested in order to read the file.
We land on the following Node.js code snippet when we convert our preceding and modified PHP code snippet:

   var fs = require('fs');
   var Buffer = require('buffer').Buffer;
   fs.open(__dirname+'/'+'sampleaf.txt', 'r', function(err, fo) {
      var text = '';
      var b = new Buffer(500);
      var fread = function() {
         fs.read(fo, b, 0, b.length, null, function(err, bytesRead, buf) {
            var eof = (bytesRead != b.length);
            if (!eof) {
               text += buf.toString();
               fread();
            } else {
               if (bytesRead > 0) {
                  var bufs = buf.slice(0, bytesRead);
                  text += bufs.toString();
               }
               fs.close(fo, function() {
                  console.log(text);
               });
            }
         });
      };
      fread();
   });

Due to callbacks in Node.js, the fread function variable must be defined in such a way that it can be called again if the file size is greater than the b buffer variable. The fread function variable is triggered continuously till the end of the file. At the end of the file, the partially occupied buffer is processed, and then the Node.js fs.close() API function is triggered. We also use the linearity concept, where the Node.js console.log() API function call is embedded in the callback of the Node.js fs.close() API function.

MySQL access

In the previous section, we learned how to access data from files using PHP code and exercised how to convert such PHP code to its corresponding Node.js code. As an alternative to what we discussed earlier, we can even access the necessary data from our database, and write to it as a record or set of records. Because the database server can be accessed remotely, PHP and Node.js are capable enough to connect to the intended database, regardless of whether it is running on the same server or a remote server. You must be aware that data in a database is arranged in rows and columns. This makes it easy to organize and store data such as usernames. On the other hand, it is quite complex to organize and store certain types of data, such as image files or any other media files.
In this section, we will use the MySQL database with PHP, and learn how to convert our PHP code that uses the MySQL database into its equivalent Node.js code based on different scenarios. The reason behind choosing the MySQL database for our exercise is that it is quite popular in the database and hosting market. Moreover, PHP applications have a special bond with the MySQL database. We assume that the MySQL server has already been installed, so that we can create and use the MySQL database with the PHP and Node.js code during our exercise.

In order to access our database through a PHP or Node.js application, our web application server (where PHP or Node.js is running) needs some tweaking, so that the necessary accesses are granted to the database. If you are running the Apache2 web server in a Linux environment, the phpx-mysql extension needs to be installed, where x denotes the PHP version you are using. For instance, when using PHP 5.x, the required extension would be php5-mysql. Likewise, the php4-mysql and php6-mysql extensions are necessary for PHP versions 4.x and 6.x, respectively. On the other hand, if you are using the Apache2 web server in a Windows environment, you need to install the PHP-to-MySQL extension during the Apache2 web server installation.

Database approaches

Node.js doesn't have a built-in module that can help a Node.js application access the MySQL database. However, there are a number of modules provided through the Node.js npm registry that can be installed in order to achieve database access in a variety of approaches.

Using the MySQL socket protocol

The MySQL socket protocol is one of the easiest approaches that can be implemented in Node.js using a Node.js npm package. Such an npm package uses the built-in net module to open a network socket to the MySQL server, connect the application to it, and exchange packets in a format that is supported and expected by the MySQL server.
The Node.js npm package effectively bluffs the MySQL server, which remains unaware of the fact that it is communicating with the Node.js driver instead of the default driver that has been built in the C language. There are a number of ways the MySQL socket protocol can be implemented in Node.js. The most popular implementation is the Node.js node-mysql npm package. In order to install this npm package, you can either retrieve it from its GitHub repository at http://github.com/felixge/node-mysql or run npm install mysql on the command line. An alternative Node.js implementation of this protocol is the mysql-native npm package. In order to install this package, you can either retrieve it from its GitHub repository at http://github.com/sidorares/nodejs-mysql-native, or run npm install mysql-native on the command line.

In order to play around with database records, the SQL language that needs to be applied consists of commands such as SELECT, INSERT, UPDATE, and DELETE, along with other commands. However, Node.js stores data as properties on a Node.js object. Object-relational mapping (ORM, or O/R mapping) is a set of planned actions to read and write objects (in our case, Node.js objects) to a SQL-based database (in our case, the MySQL database). ORM is implemented on top of other database approaches. Thus, a Node.js ORM npm package can use other Node.js npm packages (for instance, node-mysql and mysql-native) to access and play around with the database. Normally, ORM npm packages use SQL statements during implementation; however, they provide a better and more logical set of API functions to do the database access and data exchange job. The following are a couple of Node.js npm modules that provide object-relational mapping support to Node.js:

- The Node.js persistencejs npm module: This is an asynchronous JavaScript-based ORM library.
We recommend that you refer to its GitHub repository documentation at https://github.com/coresmart/persistencejs, in case you wish to learn more about it.

- The Node.js sequelize npm module: This is a JavaScript-based ORM library that provides access to databases such as MySQL, SQLite, PostgreSQL, and so on, by mapping database records to objects and vice versa. If you want to learn more about the sequelize library, we recommend that you refer to its documentation at http://sequelizejs.com/.

Normally, an object-relational mapping layer makes the Node.js world quite simple, convenient, and developer friendly.

Using the node-mysql Node.js npm package

In this section, we will learn how to implement the Node.js node-mysql npm package, which is the most popular way of accessing a MySQL database using Node.js. In order to use the node-mysql package, we need to install it. This Node.js npm module can be installed in the same way as we install other Node.js npm packages. So, to install it, we run the following command:

   npm install mysql

As soon as the node-mysql package gets installed, we need to make this package available for use by means of the Node.js require() API function. To do so, we create a mysql variable to access the Node.js node-mysql module, as demonstrated in the following line of Node.js code:

   var mysql = require('mysql');

Before you can read or write database records, it is mandatory to connect your PHP or Node.js application to the database. In order to connect our PHP application to our MySQL database, we use one of the three sets of PHP API functions (mysql, mysqli, and PDO) that use the PDO_MySQL driver. Let's write the following PHP code snippet:

   $sql_host = '192.168.0.100';
   $sql_user = 'adminuser';
   $sql_pass = 'password';
   $conn = mysql_connect($sql_host, $sql_user, $sql_pass);

In the preceding code snippet, the $conn variable holds the database connection.
In order to establish the database connection, we used the PHP mysql_connect() API function, which accepts three arguments: the database server as an IP address or DNS name (in our case, the IP address 192.168.0.100), the database username (in our case, adminuser), and the password associated with the database user (in our case, password).

When working with the node-mysql Node.js npm package, the Node.js createClient() API function is used as the equivalent of the PHP mysql_connect() API function. Unlike the PHP API function, the Node.js API function accepts a Node.js object with the three properties as its parameter. Moreover, we want our Node.js code to load the mysql Node.js npm package; thus, we use the Node.js require() function to achieve this. Let's write the following Node.js code snippet that is equivalent to our preceding PHP code snippet:

   var mysql = require('mysql');
   var sql_host = '192.168.0.100';
   var sql_user = 'adminuser';
   var sql_pass = 'password';
   var sql_conn = {host: sql_host, user: sql_user, password: sql_pass};
   var conn = mysql.createClient(sql_conn);

We can even merge the last two statements into a single statement. Thus, we replace them with the following one:

   var conn = mysql.createClient({host: sql_host, user: sql_user, password: sql_pass});

In the case of both the PHP $conn and Node.js conn variables, a meaningful value is assigned to these variables if the MySQL server is accessible; otherwise, they are assigned a false value. Once the database is no longer needed, it needs to be disconnected from our PHP and Node.js code. Using PHP, the MySQL connection variable (in our case, $conn) needs to be closed using the PHP mysql_close() API function by implementing the following PHP code statement:

   $disconn = mysql_close($conn);

The PHP mysql_close() API function returns a Boolean value that indicates whether the connection has been closed successfully or has failed.
In the case of Node.js, we use the destroy() method on the conn object in order to close the database connection, using the following Node.js code statement:

   conn.destroy();

Once our applications get connected to the MySQL server, the desired database needs to be selected. In the case of PHP, the PHP mysql_select_db() API function is used to do this job. The following PHP code snippet demonstrates how we select the desired database:

   $sql_db = 'desiredDB';
   $selectedDB = mysql_select_db($sql_db, $conn);

While converting the PHP code into its equivalent Node.js code, it should be refactored to explicitly pass the PHP $conn variable to all the mysql API functions. As soon as the database is selected, we use the PHP mysql_query() API function to play around with the data of the selected database. In the case of Node.js, we use the Node.js query() method, which is equivalent to its corresponding PHP code statement. In order to select a database, the SQL USE command is used, as demonstrated in the following code line:

   USE desiredDB;

In the preceding statement, if we are using a single database, the semicolon (;) is optional. It is used as a separator in case you plan to use more than one database. When working with Node.js, the USE desiredDB SQL command needs to be sent using the Node.js query() method. Let's write the following Node.js code snippet that selects the desiredDB database from our MySQL server via the Node.js conn variable:

   var sql_db = 'desiredDB';
   var sql_db_select = 'USE '+sql_db;
   conn.query(sql_db_select, function(err) {
      if (!err) {
         // Selects the desired MySQL database, that is, desiredDB
      } else {
         // MySQL database selection error
      }
   });

We can even merge the last two statements in the preceding code snippet into one statement.
Our Node.js code snippet will look something like the following once these statements get merged:

var sql_db = 'desiredDB';
conn.query('USE '+sql_db, function(err) {
  if (!err) {
    // Selects the desired MySQL database, that is, desiredDB
  } else {
    // MySQL database selection error
  }
});

By now, we are able to connect our PHP and Node.js applications to the MySQL server and select the desired MySQL database to play around with. Our data in the selected database can be accessed (in terms of reading and writing) using popular SQL commands, such as CREATE TABLE, DROP TABLE, SELECT, INSERT, UPDATE, and DELETE.

Creating a table

Let's consider the CREATE TABLE SQL command. In PHP, the CREATE TABLE SQL command is triggered using the PHP mysql_query() API function, as demonstrated in the following PHP code snippet:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'CREATE TABLE `'.$sql_prefix.'users` (`id` int AUTO_INCREMENT KEY, `user` text)';
$tabCreated = mysql_query($sql_cmd, $conn);

In the preceding PHP code snippet, we created a table, desiredDB_users, which consists of two columns: id and user. The id column holds an integer value and carries the SQL AUTO_INCREMENT and KEY options. These SQL options indicate that the MySQL server should set a unique value for each row in the id column, and that the user has no control over this value. The user column holds string values and is set to the value supplied by the requester when a new row is inserted into our MySQL database. Let's write the following Node.js code snippet, which is equivalent to our preceding PHP code:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'CREATE TABLE `'+sql_prefix+'users` (`id` int AUTO_INCREMENT KEY, `user` text)';
var tabCreated = false;
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    tabCreated = true;
  }
});

Here, the err parameter passed to our query() method's callback function indicates whether any error has been triggered.
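The (err, rows, fields) signature used in the preceding callback follows Node's error-first callback convention: the first argument is an Error (or null on success), and the results follow. The fakeQuery() function below is a toy stand-in (not part of the mysql package) that mimics only this calling shape, so the convention can be seen without a database:

```javascript
// Toy stand-in for conn.query() that mimics only the error-first
// callback shape (err, rows, fields); it talks to no database.
function fakeQuery(sql, callback) {
  if (typeof sql !== 'string' || sql.length === 0) {
    // Failure: pass an Error as the first argument.
    return callback(new Error('empty SQL statement'));
  }
  // Success: pass null as the error, then rows and fields.
  callback(null, [], []);
}

fakeQuery('CREATE TABLE `t` (`id` int)', function(err, rows, fields) {
  console.log(err === null);         // → true
});
fakeQuery('', function(err) {
  console.log(err instanceof Error); // → true
});
```

This is why every Node.js snippet in this section begins with an if (!err) check before touching rows.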
Deleting a table

Let's now learn how to delete a table and attempt to delete the same users table we just created. This activity is quite similar to creating a table, which we recently discussed. In PHP, we use the DROP TABLE SQL command to achieve our purpose, as demonstrated in the following PHP code snippet:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'DROP TABLE `'.$sql_prefix.'users`';
$tabDropped = mysql_query($sql_cmd, $conn);

Converting the preceding PHP code snippet into its corresponding Node.js code snippet, we follow the same pattern we discussed while creating the table. Our converted Node.js code snippet will look like the following one:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'DROP TABLE `'+sql_prefix+'users`';
var tabDropped = false;
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    tabDropped = true;
  }
});

Using a SELECT statement

Coming to the SQL SELECT statement, it is used to read data from the database tables and works a bit differently. In PHP, the PHP mysql_query() API function triggers the statement and returns a result object that is stored in the PHP $sql_result variable. In order to access the actual data, the PHP mysql_fetch_assoc() API function is called in a loop in order to fetch the data from more than one row. We assume that we have retained our users table and all of its records, despite the deletion we performed recently. Our PHP code snippet will look like the following one:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'SELECT user FROM `'.$sql_prefix.'users`';
$sql_result = mysql_query($sql_cmd, $conn);
while ($row = mysql_fetch_assoc($sql_result)) {
  $user = $row['user'];
  print $user;
}

It is always good to extract the PHP $row variable into an array-free variable (in our case, $user), because doing so eliminates complexity when converting the PHP code to its corresponding Node.js code.
When converting the preceding PHP code snippet into a Node.js code snippet, we use the Node.js query() method to trigger the statement, which returns the data as arguments to the callback function. The rows parameter of the callback function holds a two-dimensional array of data, that is, an indexed array of rows along with the array of values associated with each row. Our Node.js code snippet for the preceding PHP code snippet will look like the following one:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'SELECT user FROM `'+sql_prefix+'users`';
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    for (var i=0; i < rows.length; ++i) {
      var row = rows[i];
      var user = row['user'];
      console.log(user);
    }
  }
});

In the preceding Node.js code snippet, we could have written rows[i]['user'] directly, which shows that the Node.js rows variable is a two-dimensional array.

Using the UPDATE statement

Now, let's try out the SQL UPDATE statement, which is used to modify the data of a table. In PHP, we trigger the SQL UPDATE statement using the PHP mysql_query() API function, as demonstrated in the following code snippet:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'UPDATE `'.$sql_prefix.'users` SET `user`="mohan" WHERE `user`="surendra"';
$tabUpdated = mysql_query($sql_cmd, $conn);
if ($tabUpdated) {
  $rows_updated = mysql_affected_rows($conn);
  print 'Updated '.$rows_updated.' rows.';
}

Here, the PHP mysql_affected_rows() API function returns the number of rows that have been modified by the SQL UPDATE statement. In Node.js, we use the same SQL UPDATE statement. Additionally, we use the affectedRows property of the rows object, which holds the same value that is returned by the PHP mysql_affected_rows() API function.
Our converted Node.js code snippet will look like the following one:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'UPDATE `'+sql_prefix+'users` SET `user`="mohan" WHERE `user`="surendra"';
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    var rows_updated = rows.affectedRows;
    console.log('Updated '+rows_updated+' rows.');
  }
});

Using the INSERT statement

Now it is time to write the PHP code to insert data into a table, and then convert the code to its equivalent Node.js code. In order to insert data into a table, we use the SQL INSERT statement. In PHP, the SQL INSERT statement is triggered using the PHP mysql_query() API function, as demonstrated in the following PHP code snippet:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'INSERT INTO `'.$sql_prefix.'users` (`id`, `user`) VALUES (0, "surmohan")';
$tabInserted = mysql_query($sql_cmd, $conn);
if ($tabInserted) {
  $inserted_id = mysql_insert_id($conn);
  print 'Successfully inserted row with id='.$inserted_id.'.';
}

Here, the PHP mysql_insert_id() API function returns the value of id that is associated with the newly inserted data. In Node.js, we use the same SQL INSERT statement. Additionally, we use the insertId property of the rows object, which holds the same value that is returned by the PHP mysql_insert_id() API function. The Node.js code snippet that is equivalent to the preceding PHP code snippet looks like the following one:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'INSERT INTO `'+sql_prefix+'users` (`id`, `user`) VALUES (0, "surmohan")';
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    var inserted_id = rows.insertId;
    console.log('Successfully inserted row with id='+inserted_id+'.');
  }
});

Using the DELETE statement

Finally, we have reached the last activity of this section, that is, the use of the SQL DELETE statement.
Like the SQL statements we discussed earlier in this section, the SQL DELETE statement is also triggered using the PHP mysql_query() API function, as demonstrated in the following PHP code snippet:

$sql_prefix = 'desiredDB_';
$sql_cmd = 'DELETE FROM `'.$sql_prefix.'users` WHERE `user`="surmohan"';
$tabDeleted = mysql_query($sql_cmd, $conn);
if ($tabDeleted) {
  $rows_deleted = mysql_affected_rows($conn);
  print 'Successfully deleted '.$rows_deleted.' rows.';
}

In Node.js, we use the same SQL DELETE statement. We also use the affectedRows property, which serves us in the same way as discussed while dealing with the SQL UPDATE statement. The equivalent Node.js code snippet will look like the following one:

var sql_prefix = 'desiredDB_';
var sql_cmd = 'DELETE FROM `'+sql_prefix+'users` WHERE `user`="surmohan"';
conn.query(sql_cmd, function(err, rows, fields) {
  if (!err) {
    var rows_deleted = rows.affectedRows;
    console.log('Successfully deleted '+rows_deleted+' rows.');
  }
});
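The snippets in this section mirror the PHP originals by concatenating values directly into the SQL strings. As a side note, the mysql npm package also supports `?` placeholders with an array of values, for example conn.query('DELETE FROM `users` WHERE `user` = ?', [name], callback), and escapes the values for you. The substitute() helper below is only a toy illustration of the idea (it is not part of any package); real escaping is done by the driver, not by application code:

```javascript
// Toy illustration of placeholder substitution; the real mysql driver
// does this internally, with proper escaping.
function substitute(sql, values) {
  var i = 0;
  return sql.replace(/\?/g, function() {
    // Quote the value and escape any embedded double quotes.
    return '"' + String(values[i++]).replace(/"/g, '\\"') + '"';
  });
}

console.log(substitute('DELETE FROM `users` WHERE `user` = ?', ['surmohan']));
// → DELETE FROM `users` WHERE `user` = "surmohan"
```

Hand-built concatenation breaks as soon as a value contains a quote character; letting the driver substitute values avoids that entire class of bugs.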
Components in Unity

Packt
26 Aug 2014
13 min read
In this article by Simon Jackson, author of Mastering Unity 2D Game Development, we will have a walkthrough of the new 2D system and other new features. We will then understand some of the Unity components deeply, and after that dig into animation and its components. (For more resources related to this topic, see here.)

Unity 4.3 improvements

Unity 4.3 was not just about the new 2D system; there is also a host of other improvements and features in this release. The major highlights of Unity 4.3 are covered in the following sections.

Improved Mecanim performance

Mecanim is a powerful tool for both 2D and 3D animations. In Unity 4.3, there have been many improvements and enhancements, including a new game object optimizer that ensures objects are more tightly bound to their skeletal systems and removes unnecessary transform holders, thus making Mecanim animations lighter and smoother. Refer to the following screenshot:

In Unity 4.3, Mecanim also adds greater control when blending animations together, allowing the addition of curves for smooth transitions, and it now also includes events that can be hooked into at every step.

The Windows Phone API improvements and Windows 8.1 support

Unity 4.2 introduced Windows Phone and Windows 8 support; since then, things have been going wild, especially since Microsoft has thrown its support behind the movement and offered free licensing to existing Pro owners. Refer to the following screenshot:

Unity 4.3 builds solidly on the v4 foundations by bringing additional platform support, and it closes some more gaps between the existing platforms.
Some of the advantages are as follows:

- The emulator is now fully supported with Windows Phone (new x86 phone build)
- It has more orientation support, which allows even the splash screens to rotate properly, enabling pixel-perfect display
- It has trial application APIs for both Phone and Windows 8
- It has improved sensor and location support

On top of this, with the recent release of Windows 8.1, Unity 4.3 now also supports Windows 8.1 fully; additionally, Unity 4.5.3 will introduce support for Windows Phone 8.1 and universal projects.

Dynamic Nav Mesh (Pro version only)

If you have only been using the free version of Unity till now, you will not be aware of what a Nav Mesh agent is. Nav Meshes are invisible meshes that are created for your 3D environment at build time to simplify pathfinding and navigation for movable entities. Refer to the following screenshot:

You can, of course, create simplified models for your environment and use them in your scenes; however, every time you change your scene, you need to update your navigation model. Nav Meshes simply remove this overhead.

Nav Meshes are crucial, especially in larger environments where collision and navigation calculations can make the difference between your game running well or not. Unity 4.3 has improved this by allowing more runtime changes to the dynamic Nav Mesh, allowing you to destroy parts of your scene that alter the walkable parts of your terrain. Nav Mesh calculations are also now multithreaded to give an even better speed boost to your game. Also, there have been many other under-the-hood fixes and tweaks.

Editor updates

The Unity editor received a host of updates in Unity 4.3 to improve its performance and usability, as you can see in the following demo screenshot. Granted, most of the improvements are behind the scenes.
The improved Unity Editor GUI with huge improvements

The release refactored a lot of the scripting features on the platform, primarily to reduce the code complexity required for a lot of scripting components, such as unifying parts of the API into single components. For example, the LookLikeControls and LookLikeInspector options have been unified into a single LookLike function, which allows easier creation of editor GUI components. Further simplification of the programmable editor interface is an ongoing task, and a lot of headway is being made in each release. Additionally, the keyboard controls have been tweaked to ensure that navigation works in a uniform way and the sliders/fields work more consistently.

MonoDevelop 4.01

Besides the editor features, one of the biggest enhancements has to be the upgrade of the MonoDevelop editor (http://monodevelop.com/), which Unity supports and ships with. This has been a long-running complaint for most developers, simply due to the brand new features in the later editions. Refer to the following screenshot:

MonoDevelop isn't made by Unity; it's an open source initiative run by Xamarin, hosted on GitHub (https://github.com/mono/monodevelop) for all willing developers to contribute and submit fixes to. Although the current stable release is 4.2.1, Unity is not fully up to date. Hopefully, this recent upgrade will mean that Unity can keep more in line with future versions of this free tool. Sadly, this doesn't mean that Unity has yet been upgraded from the modified V2 version of the Mono compiler (http://www.mono-project.com/Main_Page) it uses to the current V3 branch, most likely due to the reduced platform support of the later Mono versions.

Movie textures

Movie textures are not exactly a new feature in Unity, as they have been available for some time on platforms such as Android and iOS. However, in Unity 4.3, they were made available on both the new Windows 8 and Windows Phone platforms.
This adds even more functionality to these platforms, which was missing in the initial Unity 4.2 release where this feature was introduced. Refer to the following screenshot:

With movie textures now added to the platform, other streaming features are also available; for example, webcam (or a built-in camera in this case) and microphone support were also added.

Understanding components

Components in Unity are the building blocks of any game; almost everything you will use or apply will end up as a component on a GameObject in a scene. Until you build your project, Unity doesn't know which components will be in the final game when your code actually runs (there is some magic applied in the editor). So, these components are not actually attached to your GameObjects but rather linked to them.

Accessing components using a shortcut

Now, in the previous Unity example, we added some behind-the-scenes trickery to enable you to reference a component without first discovering it. We did this by adding shortcuts to the MonoBehaviour class that the game object inherits from. You can access the components with the help of the following code:

this.renderer.collider.attachedRigidbody.angularDrag = 0.2f;

What Unity then does behind the scenes for you is convert the preceding code to the following code:

var renderer = this.GetComponent<Renderer>();
var collider = renderer.GetComponent<Collider>();
var rigidBody = collider.GetComponent<Rigidbody>();
rigidBody.angularDrag = 0.2f;

The preceding code is also the same as executing the following code:

GetComponent<Renderer>().GetComponent<Collider>().GetComponent<Rigidbody>().angularDrag = 0.2f;

Now, while this is functional and working, it isn't very performant or even a best practice, as it creates variables and destroys them each time you use them; it also calls GetComponent for each component every time you access them.
Using GetComponent in the Start or Awake methods isn't too bad, as they are only called once when the script is loaded; however, if you do this on every frame in the Update method, or even worse, in FixedUpdate, the problem multiplies; not to say you can't, you just need to be aware of the potential cost of doing so.

A better way to use components – referencing

Now, every programmer knows that they have to worry about garbage and exactly how much memory they should allocate to objects for the entire lifetime of the game. To improve on the preceding shortcut code, we simply need to manually maintain references to the components we want to change or affect on a particular object. So, instead of the preceding code, we could simply use the following:

Rigidbody myScriptRigidBody;

void Awake() {
  var renderer = this.GetComponent<Renderer>();
  var collider = renderer.GetComponent<Collider>();
  myScriptRigidBody = collider.GetComponent<Rigidbody>();
}

void Update() {
  myScriptRigidBody.angularDrag = 0.2f * Time.deltaTime;
}

This way, the Rigidbody object that we want to affect can simply be discovered once (when the script awakes); then, we can just use the reference each time a value needs to be changed instead of discovering it every time.

An even better way

Now, it has been pointed out (by those who like to test such things) that even the GetComponent call isn't as fast as it should be because it uses C# generics to determine what type of component you are asking for (it's a two-step process: first, you determine the type and then get the component). However, there is another overload of the GetComponent function in which, instead of using generics, you just supply the type (therefore removing the need to discover it).
To do this, we will simply use the following code instead of the preceding GetComponent<>:

myScriptRigidBody = (Rigidbody2D)GetComponent(typeof(Rigidbody2D));

The code is slightly longer and arguably only gives you a marginal increase, but if you need to use every byte of processing power, it is worth keeping in mind.

If you are using the "." shortcut to access components, I recommend that you change that practice now. In Unity 5, these shortcuts are being removed. There will, however, be a tool built into the project importer to upgrade any scripts that use the shortcuts. This is not a huge task, just something to be aware of; act now if you can!

Animation components

All of the animation in the new 2D system in Unity uses the new Mecanim system (introduced in Version 4) for design and control, which, once you get used to it, is very simple and easy to use. It is broken up into three main parts: animation controllers, animation clips, and animator components.

Animation controllers

Animation controllers are simply state machines that are used to control when an animation should be played and how often, including what conditions control the transition between each state. In the new 2D system, there must be at least one controller per animation for it to play, and controllers can contain many animations, as you can see here with three states and transition lines between them:

Animation clips

Animation clips are the heart of the animation system and have come very far from their previous implementation in Unity. Clips were previously used just to hold the crafted animations of 3D models, with a limited ability to tweak them for use on a complete 3D model:

The new animation dope sheet system (as shown in the preceding screenshot) is very advanced; in fact, it now tracks almost every change in the inspector for sprites, allowing you to animate just about everything. You can even control which sprite from a spritesheet is used for each frame of the animation.
The preceding screenshot shows a three-frame sprite animation with a modified x position for the middle image, giving a hopping effect to the sprite as it runs. This ability of the dope sheet system means there is less burden on the shoulders of art designers to craft complex animations, as the animation system itself can be used to produce great effects.

Sprites don't have to be picked from the same spritesheet to be animated. They can come from individual textures or be picked from any spritesheet you have imported.

The Animator component

To use the new animation prepared in a controller, you need to apply it to a game object in the scene. This is done through the Animator component, as shown here:

The only property we actually care about in 2D is the Controller property. This is where we attach the controller we just created. The other properties only apply to 3D humanoid models, so we can ignore them for 2D. For more information about the complete 3D Mecanim system, refer to the Unity Learn guide at http://unity3d.com/learn/tutorials/modules/beginner/animation.

Animation is just one of the uses of the Mecanim system.

Setting up animation controllers

So, to start creating animations, you first need an animation controller in order to define your animation clips. As stated before, this is just a state machine that controls the execution of animations, even if there is only one animation. In this case, the controller runs the selected animation for as long as it's told to.

If you browse around the components that can be added to a game object, you will come across the Animation component, which takes a single animation clip as a parameter. This is the legacy animation system, kept for backward compatibility only. Any new animation clip created and set on this component will not work; it will simply generate a console log item stating The AnimationClip used by the Animation component must be marked as Legacy. So, in Unity 4.3 onwards, just avoid this.
Creating an animation controller is just as easy as creating any other game object. In the Project view, simply right-click on the view and select Create | Animator Controller. Opening the new controller will show you the blank animator controller in the Mecanim state manager window, as shown in the following screenshot:

There is a lot of functionality in the Mecanim state engine, which is largely outside the scope of this article. Check out more dedicated books on this, such as Unity 4 Character Animation with Mecanim, Jamie Dean, Packt Publishing.

If you have any existing clips, you can just drag them to the Mecanim controller's Edit window; alternatively, you can select them in the Project view, right-click on them, and select From selected clip under Create. However, we will cover more of this later in practice.

Once you have a controller, you can add it to any game object in your project by clicking on Add Component in the inspector or by navigating to Component | Miscellaneous | Animator and selecting it. Then, you can select your new controller as the Controller property of the Animator. Alternatively, you can just drag your new controller onto the game object you wish to add it to.

Clips in a controller are bound to the spritesheet texture of the object the controller is attached to. Changing or removing this texture will prevent the animation from being displayed correctly; however, it will appear as if it's still running.

So, with a controller in place, let's add some animation to it.

Summary

In this article, we did a detailed analysis of the new 2D features added in Unity 4.3. Then we overviewed all the main Unity components.

Resources for Article:

Further resources on this subject:

Parallax scrolling [article]
What's Your Input? [article]
Unity 3-0 Enter the Third Dimension [article]