
How-To Tutorials

7014 Articles

Define the Necessary Connections

Packt
02 Dec 2016
5 min read
In this article by Robert van Mölken and Phil Wilkins, the authors of the book Implementing Oracle Integration Cloud Service, we look at creating connections, one of the core components of an integration. We can easily navigate to the Designer Portal and start creating connections.

On the home page, click the Create link on the Connection tile, as shown in the following screenshot. When we click this link, the Connections page is loaded, which lists all created connections, and a modal dialog automatically opens on top of the list. This pop-up shows all the adapter types we can create. For our first integration we define two technology adapter connections: an inbound SOAP connection and an outbound REST connection.

Inbound SOAP connection

In the pop-up we can scroll down the list and find the SOAP adapter, but the modal dialog also includes a search field. Just search on SOAP and the list will show the adapters matching the search criteria. Find your adapter by searching on the name, or change the appearance from card to list view to show more adapters at once. Click Select to open the New Connection page.

Before we can set up any adapter-specific configuration, every creation starts with choosing a name and an optional description. Create the connection with the following details:

- Connection Name: FlightAirlinesSOAP_Ch2
- Identifier: This will be proposed based on the connection name and there is no need to change it unless you'd like an alternate name. It is usually the name in all capitals, without spaces, and has a maximum length of 32 characters.
- Connection Role: Trigger. The chosen role restricts the connection to be used only in the selected role(s).
- Description: This receives Airline objects as a SOAP service.

Click the Create button to accept the details. This will bring us to the adapter-specific configuration page where we can add and modify the necessary properties. The one thing all the adapters have in common is the optional Email Address under Connection Administration. This email address is used to send notifications to when problems or changes occur in the connection.

A SOAP connection consists of three sections: Connection Properties, Security, and an optional Agent Group. On the right side of each section we can find a button to configure its properties. Let's configure each section using the following steps:

1. Click the Configure Connectivity button.
2. Instead of entering a URL, we are uploading the WSDL file, so check the box in the Upload File column.
3. Click the newly shown Upload button.
4. Upload the file ICSBook-Ch2-FlightAirlines-Source WSDL.
5. Click OK to save the properties.
6. Click the Configure Credentials button. In the pop-up that is shown we can configure the security credentials. We have the choice of Basic Authentication, Username Password Token, or No Security Policy. Because we use this connection inbound, we don't have to configure this.
7. Select No Security Policy from the dropdown list. This removes the username and password fields.
8. Click OK to save the properties.
9. We leave the Agent Group section untouched. We can attach an Agent Group if we want to use the connection outbound to an on-premises web service.
10. Click Test to check whether the connection is working (otherwise it can't be used). For SOAP and REST this simply pings the given domain to check the connectivity, but others, for example the Oracle SaaS adapters, also authenticate and collect metadata.
11. Click the Save button at the top of the page to persist our changes.
12. Click Exit Connection to return to the list from where we started.

Outbound REST connection

Now that the inbound connection is created we can create our REST adapter. Click the Create New Connection button to show the Create Connection pop-up again and select the REST adapter. Create the connection with the following details:

- Connection Name: FlightAirlinesREST_Ch2
- Identifier: This will be proposed based on the connection name.
- Connection Role: Invoke
- Description: This returns the Airline objects as a REST/JSON service.
- Email Address: Your email address, used to send notifications to.

Let's configure the connection properties using the following steps:

1. Click the Configure Connectivity button.
2. Select REST API Base URL for the Connection Type.
3. Enter the URL where your Apiary mock is running: http://private-xxxx-yourapidomain.apiary-mock.com.
4. Click OK to save the values.

Next, configure the security credentials using the following steps:

1. Click the Configure Credentials button.
2. Select No Security Policy for the Security Policy. This removes the username and password fields.
3. Click the OK button to save our choice.
4. Click Test at the top to check whether the connection is working.
5. Click the Save button at the top of the page to persist our changes.
6. Click Exit Connection to return to the list from where we started.

Troubleshooting

If the test fails for one of these connections, check whether the correct WSDL is used and whether the connection URL for the REST adapter exists and is reachable.

Summary

In this article we looked at the process of creating and testing the necessary connections and the creation of the integration itself. We have seen an inbound SOAP connection and an outbound REST connection. In demonstrating the integration we have also seen how to use Apiary to document and mock our backend REST service.

Resources for Article:

Further resources on this subject:
- Getting Started with a Cloud-Only Scenario [article]
- Extending Oracle VM Management [article]
- Docker Hosts [article]


Modelling an RPG in D

Ryan Roden-Corrent
02 Dec 2016
7 min read
In this post, I'll show off some of the cool features of a language called D in the context of creating a game, specifically an RPG.

Character Stats

For our RPG, let's say there are three categories of stats on every character:

- Attributes: An int value for each of the classic six (Strength, Dexterity, and so on).
- Skills: An int value for each of several skills (diplomacy, stealth, and so on).
- Resistances: An int value for each 'type' (physical, fire, and so on) of damage.

In D, we can represent such a character like so:

    struct Character {
      // attributes
      int strength;
      int dexterity;
      int constitution;
      int intellect;
      int wisdom;
      int charisma;

      // skills
      int stealth;
      int perception;
      int diplomacy;

      // resistances
      int resistPhysical;
      int resistFire;
      int resistWater;
      int resistAir;
      int resistEarth;
    }

However, it would be nicer if we could have each category (attributes, skills, and resistances) represented as a single group of values. First, let's define some enums:

    enum Attribute { strength, dexterity, constitution, intellect, wisdom, charisma }
    enum Skill { stealth, perception, diplomacy }
    enum Element { physical, fire, water, air, earth }

Now we want to map each of these enum members to a value for that particular attribute, skill, or resistance. One option is an associative array, which would look like this:

    struct Character {
      int[Attribute] attributes;
      int[Skill] skills;
      int[Element] resistances;
    }

int[Attribute] attributes declares that Character.attributes returns an int when indexed by an Attribute, like so:

    if (hero.attributes[Attribute.dexterity] < 4) hero.trip();

However, associative arrays are heap-allocated and don't have a default value for each key. It seems like overkill for storing a small bundle of values. Another option is a static array. Static arrays are stack-allocated value types and will contain exactly the number of values that we need:

    struct Character {
      int[6] attributes;
      int[3] skills;
      int[5] resistances;
    }

Our enum values are backed by ints, so we can use them directly as indexes just as we did with the associative array:

    if (hero.attributes[Attribute.intellect] > 12) hero.pontificate();

This is more efficient for our needs, but nothing enforces using enums as keys. If we accidentally gave an out-of-bounds index, the compiler wouldn't catch it and we'd get a runtime error. Ideally, we want the efficiency of the static array with the syntax of the associative array. Even better, it would be nice if we could say something like attributes.charisma instead of attributes[Attribute.charisma], like you would with a table in Lua. Fortunately, you can achieve this with only a few lines of D code.

The Enumap

    import std.traits;

    /// Map each member of the enum `K` to a value of type `V`
    struct Enumap(K, V) {
      private enum N = EnumMembers!K.length;
      private V[N] _store;

      auto opIndex(K key) { return _store[key]; }
      auto opIndexAssign(V value, K key) { return _store[key] = value; }
    }

Here's a line-by-line breakdown:

    import std.traits;

We need access to std.traits.EnumMembers, a standard-library function that returns (at compile time!) the members of an enum.

    struct Enumap(K, V)

Here, we declare a templated struct. In many other languages, this would look like Enumap<K, V>. K will be our key type (the enum) and V will be the value. K and V are known as 'compile-time parameters'. In this case, they are simply used to create a generic type, but in D, such parameters can be used for much more than just generic types, as we will see later.
    private enum N = EnumMembers!K.length;
    private V[N] _store;

Here we leverage EnumMembers to determine how many entries are in the provided enum. We use this to declare a static array capable of holding exactly N values of type V.

    auto opIndex(K key) { return _store[key]; }
    auto opIndexAssign(V value, K key) { return _store[key] = value; }

opIndex is a special method that allows us to provide a custom implementation of the indexing ([]) operator. The call skills[Skill.stealth] is translated to skills.opIndex(Skill.stealth), while the assignment skills[Skill.stealth] = 5 is translated to skills.opIndexAssign(5, Skill.stealth).

Let's use that in our Character struct:

    struct Character {
      Enumap!(Attribute, int) attributes;
      Enumap!(Skill, int) skills;
      Enumap!(Element, int) resistances;
    }

    if (hero.attributes[Attribute.wisdom] < 2) hero.drink(unidentifiedPotion);

There! Now the length of each underlying array is figured out for us, and the values can only be accessed using the enum members as keys. The underlying array _store is statically sized, so it requires no managed-memory allocation.

Here's the really clever bit:

    import std.conv;
    //...

    struct Enumap(K, V) {
      //...
      auto opDispatch(string s)() { return this[s.to!K]; }
      auto opDispatch(string s)(V val) { return this[s.to!K] = val; }
    }

    if (hero.attributes.charisma < 5) hero.makeAwkwardJoke();

opDispatch essentially overloads the . operator to provide some nice syntactic sugar. Here's a quick rundown of what happens for hero.attributes.charisma = 5:

1. The compiler sees attributes.charisma.
2. It looks for the charisma symbol in the type Enumap!(Attribute, int).
3. Failing to find this, it tries attributes.opDispatch!"charisma".
4. That call resolves to attributes["charisma".to!Attribute].
5. And further resolves to attributes[Attribute.charisma].

Remember I mentioned that compile-time arguments can be much more than types? Here is a compile-time string argument; in this case, its value is whatever symbol follows the dot. Note that the above happens at compile time and is equivalent to using the indexing operator.

So, we get the "charisma" string, but what we actually want is the enum member Attribute.charisma. std.conv.to makes quick work of this; it can, among other things, translate between strings and enum names.

A Step Further – Enumap Arithmetic

Let's suppose we add items to the game, and each item can provide some stat bonuses:

    struct Item {
      Enumap!(Attribute, int) bonuses;
    }

It would be really nice if we could just add these bonuses to our character's base stats, like so:

    auto totalStats = character.attributes + item.bonuses;

Yet again, D lets us implement this quite concisely, this time by leveraging opBinary:

    struct Enumap(K, V) {
      //...
      auto opBinary(string op)(typeof(this) other) {
        V[N] result = mixin("_store[] " ~ op ~ " other._store[]");
        return typeof(this)(result);
      }
    }

Breakdown time again!

    auto opBinary(string op)(typeof(this) other)

An expression like enumap1 + enumap2 will get translated (at compile time!) to enumap1.opBinary!"+"(enumap2). The operator (in this case, +) is passed as a compile-time string argument. If passing the operator as a string sounds weird, read on…

    V[N] result = mixin("_store[] " ~ op ~ " other._store[]");

mixin is a D keyword that translates a compile-time string into code. Continuing with our + example, we end up with V[N] result = mixin("_store[] " ~ "+" ~ " other._store[]"), which simplifies to V[N] result = _store[] + other._store[];.
The _store[] + other._store[] expression is called an "array-wise operation". It's a concise way of performing an operation between corresponding elements of two arrays, in this case adding each pair of integers into a resulting array.

    return typeof(this)(result);

Here we wrap the resulting array in an Enumap before returning it. typeof(this) resolves to the enclosing type. It is equivalent to Enumap!(K, V), but preferable: if we change the name of the struct, we won't have to refactor this line.

In many languages, we'd have to separately define opAdd, opSub, opMult, and more, most of which would likely contain similar code. However, thanks to the way opBinary allows us to work with a string representation of the operator at compile time, our single opBinary implementation supports operators like - and * as well.

Summary

I hope you enjoyed learning a little about D! There is a full implementation of Enumap available here: https://github.com/rcorre/enumap.

About the Author

Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor to the free/open source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work here and Creative Commons art here.


The Sales and Purchase Process

Packt
02 Dec 2016
21 min read
In this article by Anju Bala, the author of the book Microsoft Dynamics NAV 2016 Financial Management - Second Edition, we will see the sales and purchase process using Microsoft Dynamics NAV 2016 in detail.

Sales and purchases are two essential business areas in all companies. In many organizations, the salespeople or the purchase department are the ones responsible for generating quotes and orders. People from the finance area are the ones in charge of finalizing the sales and purchase processes by issuing the documents that have an accounting reflection: invoices and credit memos. In the past, most systems required someone to translate all the transactions to accountancy language, so a finance person was needed to do the job. In Dynamics NAV, anyone can issue an invoice, with zero accountancy knowledge needed. But a lot of companies keep their old division of labor between departments. This is why we have decided to explain the sales and purchase processes in this book. This article explains how these workflows are managed in Dynamics NAV.

In this article you will learn:

- What Dynamics NAV is and what it can offer to your company
- How to define the master data needed to sell and purchase
- How to set up your pricing policies

Introducing Microsoft Dynamics NAV

Dynamics NAV is an Enterprise Resource Planning (ERP) system targeted at small and medium-sized companies. An ERP is a software system that integrates the internal and external management information across an entire organization. The purpose of an ERP is to facilitate the flow of information between all business functions inside the boundaries of the organization. An ERP system is meant to handle all the organization's areas in a single software system. This way, the output of one area can be used as the input of another area.

Dynamics NAV 2016 covers the following functional areas:

- Financial Management: It includes accounting, G/L budgets, account schedules, financial reporting, cash management, receivables and payables, fixed assets, VAT reporting, intercompany transactions, cost accounting, consolidation, multicurrency, and Intrastat.
- Sales and Marketing: This area covers customers, order processing, pricing, contacts, marketing campaigns, and so on.
- Purchase: The purchase area includes vendors, order processing, approvals, planning, costing, and other such areas.
- Warehouse: Under the warehouse area you will find inventory, shipping and receiving, locations, picking, assembly, and so on.
- Manufacturing: This area includes product design, capacities, planning, execution, costing, subcontracting, and so on.
- Job: Within the job area you can create projects, phases and tasks, planning, time sheets, work in process, and other such areas.
- Resource Planning: Manage resources, capacity, and so on.
- Service: Within this area you can manage service items, contracts, order processing, planning and dispatching, service tasks, and so on.
- Human Resources: Manage employees, absences, and so on.

Some of these areas will be covered in detail in this book. Dynamics NAV offers much more than robust financial and business management functionality. It is also a perfect platform to customize the solution to truly fit your company's needs. If you have studied different ERP solutions, you know by now that customizations to fit your specific needs will always be necessary. Dynamics NAV has a reputation for being easy to customize, which is a distinct advantage.
Since you will probably have customizations in your system, you might find some differences with what is explained in this book. Your customizations could imply that:

- You have more functionality in your implementation
- Some steps are automated, so some manual work can be avoided
- Some features behave differently than explained here
- There are new functional areas in your Dynamics NAV

In addition, Dynamics NAV has around forty different country localizations that are meant to cover country-specific legal requirements or common practices. Many people and companies have already developed solutions on top of Dynamics NAV to cover horizontal or industry-specific needs, and they have registered their solution as an add-on, such as:

- Solutions for the retail industry or the food and beverages industry
- Electronic Data Interchange (EDI)
- Quality or maintenance management
- Integration with third-party applications such as electronic shops, data warehouse solutions, or CRM systems

Those are just a few examples. You can find almost 2,000 registered third-party solutions that cover all kinds of functional areas. If you feel that Dynamics NAV does not cover your needs and you would need too much customization, the best solution will probably be to look for an existing add-on and implement it along with your Dynamics NAV.

Anyway, with or without an add-on, we said that you will probably need customizations. How many customizations can you expect? This is hard to tell as each case is particular, but we'll try to give you some guidance. If your ERP system covers 100 percent of your needs without any customization, you should worry. This means that your procedures are so standard that there is no difference between you and your competitors. You are not offering any special service to your customers, so they are only going to measure you by the price they are getting. On the other hand, if your Dynamics NAV only covers a low percentage of your needs, it could mean one of two things: this is not the product you need, or your organization is too chaotic and you should rethink your processes to standardize them a bit. Some people agree that the ideal scenario would be to get about 70-80 percent of your needs covered out of the box, and about 20-30 percent from customizations that cover those needs that make you different from your competitors.

Importance of Financial Management

In order to use Dynamics NAV, all organizations have to use the Financial Management area. It is the epicenter of the whole application. Any other area is optional and its usage depends on the organization's needs. The sales and the purchase areas are also used in almost every Dynamics NAV implementation. Actually, accountancy is the epicenter, and the general ledger is included inside the Financial Management area.

In Dynamics NAV everything leads to accounting. It makes sense, as accountancy is the act of recording, classifying, and summarizing, in terms of money, the transactions and events that take place in the company. Every time the warehouse guy ships an item, or the payment department orders a transfer, these actions can be written in terms of money using accounts, credit, and debit amounts. An accountant could collect all the company transactions and translate them one by one to accountancy language. But this means duplicate manual work, a lot of chances of introducing errors and inconsistencies, and no real-time data. On the other hand, Dynamics NAV is capable of interpreting such transactions and translating them to accountancy on the fly.
In Dynamics NAV everything leads to accountancy, so all the company's employees are helping the financial department with their job. The finance people can now focus on analyzing the data and making decisions, and they don't have to bother with entering the data anymore.

Posted data cannot be modified (or deleted)

One of the first things you will face when working with Dynamics NAV is the inability to modify what has been posted, whether it's a sales invoice, a shipment document, a general ledger entry, or any other data. Any posted document or entry is unchangeable. This might cause frustration, especially if you are used to working with other systems that allow you to modify data. However, this feature is a great advantage since it ensures data integrity. You will never find an unbalanced transaction.

If you need to correct any data, the Dynamics NAV approach is to post new entries to null the incorrect ones, and then post the good entries again. For instance, if you have posted an invoice and the prices were wrong, you will have to post a credit memo to null the original invoice and then issue a new invoice with the correct prices:

    Document No.       Amount
    Invoice 01          1000
    Credit Memo 01     -1000    (this nulls the original invoice)
    Invoice 02           800

As you can see, this method for correcting mistakes always leaves a track of what was wrong and how we solved it. Users get the feeling that they have to perform too many steps to correct the data, with the addition that everyone can see that there was a mistake at some point. Our experience tells us that users tend to pay more attention before they post anything in Dynamics NAV, which leads to fewer mistakes in the first place. So another great advantage of using Dynamics NAV as your ERP system is that the whole organization tends to improve its internal procedures, making fewer mistakes.

No save button

Dynamics NAV does not have any kind of save button anywhere in the application. Data is saved into the database while it is being introduced. When you enter data in one field, right after you leave the field, the data is already saved. There is no undo feature.

The major advantage is that you can create any card (for instance, a Customer Card), any document (for instance, a Sales Order), or any other kind of data without knowing all the information that is needed. Imagine you need to create a new customer. You have all their fiscal data except their VAT number. You could create the card, fill in all the information except the VAT Registration No. field, and leave the card without losing the rest of the information. When you have figured out the VAT number of your customer, you can come back and fill it in.

The not-losing-the-rest-of-the-information part is important. Imagine that there actually was a Save button; you spend a few minutes filling in all the information and, at the end, click on Save. At that moment, the system carries out some checks and finds out that one field is missing. It throws you a message saying that the Customer Card cannot be saved. So you basically have two options:

- Lose the information introduced, find out the VAT number for the customer, and start all over again.
- Cheat. Fill the field with some wrong value so that the system actually lets you save the data. Of course, you can come back to the card and change the data once you've found out the right value. But nothing will prevent any other user from posting a transaction with the customer in the meantime.
Understanding master data

Master data is all the key information for the operation of a business. Third-party companies, such as customers and vendors, are part of the master data. The items a company manufactures or sells are also part of the master data. Many other things can be considered master data, such as the warehouses or locations, the resources, or the employees. The first thing you have to do when you start using Dynamics NAV is load your master data into the system. Later on, you will keep growing your master data by adding new customers, for instance. To do so, you need to know which kind of information you have to provide.

Customers

We will open a Customer Card to see which kind of information is stored in Dynamics NAV about customers. To open a Customer Card, follow these steps:

1. Navigate to Departments/Sales & Marketing/Sales/Customers.
2. You will see a list of customers; find No. 10000, The Cannon Group PLC.
3. Double-click on it to open its card, or select it and click on the View icon found on the Home tab of the ribbon.

The following screenshot shows the Customer Card for The Cannon Group PLC. Customers are always referred to by their No., which is a code that identifies them. We can also provide the following information:

- Name, Address, and Contact: A Search Name can also be provided if you refer to your customer by its commercial name rather than by its fiscal name.
- Invoicing information: It includes posting groups, price and discount rates, and so on. You may still not know what a posting group is, since this is the first time those words are mentioned in this book. At this moment, we can only tell you that posting groups are important, but it's not time to go through them yet. We will talk about posting groups in Chapter 6, Financial Management Setup.
- Payments information: It includes when and how we will receive payments from the customer.
- Shipping information: It explains how we ship items to the customer.

Besides the information you see on the card, there is much other information we can introduce about customers. Take a look at the Navigate tab found on the ribbon. Other information that can be entered is as follows:

- Information about bank accounts: So that we know where we can request the payments. Multiple bank accounts can be set up for each customer.
- Credit card information: In case customers pay using this procedure.
- Prepayment information: In case you require your customers to pay in advance, either totally or partially.
- Additional addresses: Where goods can be shipped (Ship-to Addresses).
- Contacts: You may relate to different departments or individuals from your customers.
- Relations: Between our items and the customer's items (Cross References).
- Prices and Discounts: These will be discussed in the Pricing section.

But customers, just like any other master data record, do not only hold information that users enter manually. They have a bunch of other information that is filled in automatically by the system as actions are performed:

- History: You can see it on the right side of the card and it holds information such as how many quotes or orders are currently being processed or how many invoices and credit memos have been issued.
- Entries: You can access the ledger entries of a customer through the Navigate tab. They hold the details of every single monetary transaction done (invoices, credit memos, payments, and so on).
- Statistics: You can see them on the right side and they hold monetary information such as the amount in orders or the amount of goods or services that have been shipped but not yet invoiced.
- The Balance: It is the sum of all invoices issued to the customer minus all payments received from the customer.

Not all the information we have seen on the Customer Card is mandatory. Actually, the only information that is required if you want to create a transaction is to give it a No. (its identification) and to fill in the posting group fields (Gen. Bus. Posting Group and Customer Posting Group). All other information can be understood as default information and setup that will be used in transactions so that you don't have to write it down every single time. You don't want to write the customer's address in every single order or invoice, do you?

Items

Let's take a look now at an Item Card to see which kind of information is stored in Dynamics NAV about items. To open an Item Card, follow these steps:

1. Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items.
2. You will see a list of items; find item 1000, Bicycle.
3. Double-click on it to open its card.

The following screenshot shows the item card for item 1000, Bicycle. As you can see in the screenshot, items also have a No., which is a code that identifies them. For an item, we can enter the following information:

- Description: It's the item's description. A Search Description can also be provided if you better identify an item using a different name.
- Base Unit of Measure: It is the unit of measure in which most quantities and other information, such as Unit Price, for the item will be expressed. We will see later what other units of measure can be used as well, but the Base is the most important one and should be the smallest measure in which the item can be referred to.
- Classification: The Item Category Code and Product Group Code fields offer a hierarchical classification to group items. The classification can fill in the invoicing information we will see in the next point.
- Invoicing information: It includes posting groups, the costing method used for the item, and so on. Posting groups are explained in Chapter 6, Financial Management Setup, and costing methods are explained in Chapter 3, Accounting Processes.
- Pricing information: It is the item's unit price and other pricing configuration that we will cover in more detail in the Pricing section.
- Foreign trade information: It is needed if you have to do Intrastat reporting.
- Replenishment, planning, item tracking, and warehouse information: These fast-tabs are not explained in detail because they are out of the scope of this book. They are used to determine how to store the stock and how to replenish it.

Besides the information you see on the Item Card, there is much other information we can introduce about items through the Navigate tab found on the ribbon. As you can see, other information that can be entered is as follows:

- Units of Measure: It is useful when you can sell your item either in units, boxes, or other units of measure at the same time.
- Variants: It is useful when you have multiple items that are actually the same one (thus, they share most of the information) but with some slight differences. You can use variants to differentiate colors, sizes, or any other small difference you can think of.
- Extended texts: It is useful when you need long descriptions or technical info to be shown on documents.
- Translations: It is used so that you can show the item's descriptions in other languages, depending on the language used by your customers.
- Prices and discounts: These will be discussed in the Pricing section.

As with customers, not all the information in the Item Card is mandatory.

Vendors, resources, and locations

We will start with third parties: customers and vendors. They work exactly the same way. We will just look at customers, but everything we explain about them can be applied to vendors as well. Then, we will look at items, and finally, we will take a brief look at locations and resources. The concepts learned can be used for resources and locations, and also for other master data such as G/L accounts, fixed assets, employees, service items, and so on.

Pricing

Pricing is the combination of prices for items and resources and the discounts that can be applied to individual document lines or to the whole document. Prices can be defined for items and resources and can be assigned to customers. Discounts can be defined for items and documents and can also be assigned to customers. Both prices and discounts can be defined at different levels and can cover multiple pricing policies. The following diagram illustrates different pricing policies that can be established in Dynamics NAV.

Defining sales prices

Sales prices can be defined at different levels to target different pricing policies. The easiest scenario is when we have a single price per item or resource; that is, the "one single price for everyone" policy. In that case, the sales price can be specified on the Item Card or on the Resource Card, in a field called Unit Price. In a more complex scenario, where prices depend on different conditions, we will have to define the possible combinations and the resulting price. We will explain how prices can be configured for items. Prices for resources can be defined in a similar way, although they offer fewer possibilities.

To define sales prices for an item, follow these steps:

1. Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items.
2. You will see a list of items; find item 1936-S, BERLIN Guest Chair, yellow.
3. Double-click on it to open its card.
4. On the Navigate tab, click on the Prices icon found under the Sales group. The Edit – Sales Prices page will open.

As you can see in the screenshot, multiple prices have been defined for the same item. A specific price will only be used when all the conditions are met. For example, a Unit Price will be used for any customer that buys item 1936-S after 20/01/2017, but only if they buy a minimum of 11 units. Different fields can be used to address each of the pricing policies:

- The combination of the Sales Type and Sales Code fields enables the "different prices for different customers" policy.
- The Unit of Measure Code and Minimum Quantity fields are used for the "different prices per volume" policy.
- The Starting Date, Ending Date, and Currency Code fields are used for the "different prices per period or currency" policy.

They can all be used at the same time to enable mixed policies. When multiple pricing conditions are met, the price that is used is the one that is most favorable to the customer (the cheapest one). Imagine customer 10000 belongs to the RETAIL price group. On 20/01/2017 he buys 20 units of item 1936-S. There are three different prices that could be used: the one defined for him, the one defined for his price group, and the one defined for all customers when they buy at least 11 units.
Among the three prices, 130.20 is the cheapest one, so this is the one that will be used. Prices can be defined including or excluding VAT.

Defining sales discounts

Sales discounts can be defined at different levels to target different pricing policies. We can also define item discounts based on conditions. This addresses the "discounts based on items" policy and also the "discounts per volume, period, or currency" policy, depending on which fields are used to establish the conditions. In the following screenshot, we can see some examples of item discounts based on conditions, which are called line discounts because they will be applied to individual document lines.

In some cases, items or customers may already have a very low profit for the company and we may want to prevent the usage of line discounts, even if the conditions are met. A field called Allow Line Disc. can be found on the Customer Card and on sales prices. By unchecking it, we will prevent line discounts from being applied to a certain customer or when a specific sales price is used.

Besides the line discounts, invoice discounts can be defined to implement the "general discounts per customer" policy. Invoice discounts apply to the whole document and they depend only on the customer. Follow these steps to see and define invoice discounts for a specific customer:

1. Open the Customer Card for customer 10000, The Cannon Group PLC.
2. On the Navigate tab, click on Invoice Discounts.

The following screenshot shows that customer 10000 has an invoice discount of 5 percent. Just like line discounts, invoice discounts can also be disabled using a field called Allow Invoice Disc. that can be found on the Item Card and on sales prices.

There is a third kind of discount, the payment discount, which can be defined to implement the "financial discounts per early payment" policy. This kind of discount applies to the whole document and depends on when the payment is done. Payment discounts are bound to a Payment Term and are applied if the payment is received within a specific number of days. The following screenshot shows the Payment Terms that can be found by navigating to Departments/Sales & Marketing/Administration/Payment Terms. As you can see, a 2 percent payment discount has been established when the 1M(8D) Payment Term is used and the payment is received within the first eight days.

Purchase pricing

Purchase prices and discounts can also be defined in Dynamics NAV. The way they are defined is exactly the same as the way you define sales prices and discounts. There are some slight differences:

- When defining single purchase pricing on the Item Card, instead of using the Unit Price field, we will use the Last Direct Cost field. This field gets automatically updated as purchase invoices are posted.
- Purchase prices and discounts can only be defined per single vendor and not per group of vendors, as we could do in sales prices and discounts.
- Purchase discounts can only be defined per single item and not per group of items, as we could do in sales discounts.
- We cannot prevent purchase discounts from being applied.
- Purchase prices can only be defined excluding VAT.

Summary

In this article, we have learned that Dynamics NAV is an ERP system meant to handle all the organization's areas in a single software system. The sales and purchase processes can be handled by anyone without the need for accountancy knowledge, because the system is capable of translating all the transactions to accountancy language on the fly. Customers, vendors, and items are the master data of these areas.
Their information is used in documents to post transactions. There are multiple options to define your pricing policy: from one single price for everyone to different prices and discounts per group of customers, per volume, or per period or currency. You can also define financial discounts per early payment. In the next chapter, we will learn how to manage cash by showing how to handle receivables, payables, and bank accounts.

Resources for Article:

Further resources on this subject:
- Modifying the System using Microsoft Dynamics Nav 2009: Part 3 [article]
- Introducing Dynamics CRM [article]
- Features of Dynamics GP [article]


Administering a Swarm Cluster

Packt
02 Dec 2016
12 min read
In this article by Fabrizio Soppelsa and Chanwit Kaewkasi, the authors of Native Docker Clustering with Swarm, we're going to see how to administer a running Swarm cluster. The topics include scaling the cluster size (adding and removing nodes), updating the cluster and node information, handling the node status (promotion and demotion), troubleshooting, and graphical interfaces (UIs).

Docker Swarm standalone

In standalone mode, cluster operations must be done directly inside the container swarm. We're not going to cover every option in detail. Swarm standalone is not deprecated yet and is still in use, which is why we're discussing it here, but it will probably be declared deprecated soon, as it is obsoleted by the Swarm mode. The commands to administer a Docker Swarm standalone cluster are:

- Create (c): Typically, in production people use Consul or Etcd, so this command has no relevance for production.
- List (l): This shows the list of cluster nodes, based on an iteration through Consul or Etcd; that is, the Consul or Etcd endpoint must be passed as an argument.
- Join (j): This joins a node, on which the swarm container is running, to the cluster. Here, again, a discovery mechanism must be passed at the command line.
- Manage (m): This is the core of the standalone mode. Managing a cluster here means changing some cluster properties, such as filters, schedulers, external CA URLs, and timeouts.

Docker Swarm mode: Scale a cluster size

Manually adding nodes

You can choose to create Docker hosts whichever way you prefer. If you plan to use Docker Machine, you're probably going to hit Machine's limits very soon, and you will need to be very patient while even listing machines, having to wait several seconds for Machine to get and print all the information. My favorite method is to use Machine with the generic driver, thus delegating the host provisioning (operating system installation, network and security group configuration, and so on) to something else (that is, Ansible), and later exploiting Machine to install Docker the proper way:

1. Manually configure the cloud environment (security groups, networks, and so on).
2. Provision Ubuntu hosts with a third-party tool.
3. Run Machine with the generic driver on these hosts, with the only goal of properly installing Docker.
4. Then handle the hosts with the tool from step 2, or even others.

If you use Machine's generic driver, it will select the latest stable Docker binaries. While we were writing this article, in order to use Docker 1.12, we had to overcome this by passing Machine a special option to get the latest, unstable, version of Docker:

    docker-machine create -d DRIVER --engine-install-url https://test.docker.com mymachine

For a production Swarm (mode), at the time you'll be reading this article, 1.12 will already be stable, so this trick will not be necessary anymore, unless you need to use some of the very latest Docker features.

Managers

The theory of HA suggests that the number of managers must be odd, and equal to or greater than 3. This is to grant a quorum in high availability, that is, the majority of nodes agreeing on which part of the nodes is leading the operations. If there were two managers, and one goes down and comes back, it's possible that both will think they are the leader. That causes a logical crash in the cluster organization called split brain. The more managers you have, the higher the resistance ratio to failures.
Refer to the following table:

    Number of managers   Quorum (majority)   Maximum possible failures
    3                    2                   1
    5                    3                   2
    7                    4                   3
    9                    5                   4

Also, in Swarm mode, an overlay network is created automatically and associated as ingress traffic to the nodes. Its purpose is to be used with containers: you will want your containers to be associated with an internal overlay (VxLAN meshed) network to communicate with each other, rather than using public or other networks. Thus, Swarm creates this for you already, ready to use.

We further recommend geographically distributing managers. If an earthquake hits the datacenter where all the managers are serving, the cluster would go down, wouldn't it? So, consider placing each manager or group of managers in a different physical location. With the advent of cloud computing, that's really easy: you can spawn each manager in a different AWS region, or even better, have managers running on different providers in different regions, that is, on AWS, on DigitalOcean, on Azure, and also on a private cloud such as OpenStack.

Workers

You can add an arbitrary number of workers. This is the elastic part of the Swarm. It's totally fine to have 5, 15, 200, or 2,300 running workers. This is the easiest part to handle: you can add and remove workers with no burden, at any time, at any size.

Scripted nodes addition

The easiest way to add nodes, if you plan not to go over 100 nodes in total, is to use basic scripting. At the time of docker swarm init, just copy and paste the lines printed in the output. Then, create a certain bunch of workers:

    #!/bin/bash
    for i in `seq 0 9`; do
      docker-machine create -d amazonec2 --engine-install-url https://test.docker.com --amazonec2-instance-type "t2.large" swarm-worker-$i
    done

After that, it will only be necessary to go through the list of machines, ssh into them, and join the nodes:

    #!/bin/bash
    SWARMWORKER="swarm-worker-"
    for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`; do
      docker-machine ssh $machine sudo docker swarm join --token SWMTKN-1-5c3mlb7rqytm0nk795th0z0eocmcmt7i743ybsffad5e04yvxt-9m54q8xx8m1wa1g68im8srcme 172.31.10.250:2377
    done

This script runs through the machines, and for each one with a name starting with swarm-worker-, it will ssh into it and join the node to the existing Swarm through the leader manager, here 172.31.10.250. Refer to https://github.com/swarm2k/swarm2k/tree/master/amazonec2 for some further details or to download these one-liners.

Belt

Belt is another tool for massively provisioning Docker Engines. It is basically an SSH wrapper on steroids, and it requires you to prepare provider-specific images as well as provisioning templates before going massive. In this section, we'll learn how to do so. You can compile Belt yourself by getting its source from GitHub:

    # Set $GOPATH here
    go get https://github.com/chanwit/belt

Currently, Belt supports the DigitalOcean driver. We can prepare our template for provisioning inside config.yml:

    digitalocean:
      image: "docker-1.12-rc4"
      region: nyc3
      ssh_key_fingerprint: "your SSH ID"
      ssh_user: root

Then we can create a hundred nodes with basically a couple of commands. First we create three boxes of 16 GB, namely mg0, mg1, and mg2.
    $ belt create 16gb mg[0:2]
    NAME   IPv4              MEMORY   REGION   IMAGE                     STATUS
    mg2    104.236.231.136   16384    nyc3     Ubuntu docker-1.12-rc4    active
    mg1    45.55.136.207     16384    nyc3     Ubuntu docker-1.12-rc4    active
    mg0    45.55.145.205     16384    nyc3     Ubuntu docker-1.12-rc4    active

Then we can use the status command to wait for all nodes to become active:

    $ belt status --wait active=3
    STATUS   #NODES   NAMES
    active   3        mg2, mg1, mg0

We'll do this again for 10 worker nodes:

    $ belt create 512mb node[1:10]
    $ belt status --wait active=13
    STATUS   #NODES   NAMES
    active   3        node10, node9, node8, node7

Use Ansible

You can alternatively use Ansible (if you like; it's becoming very popular) to make things more repeatable. I (Fabrizio) created some Ansible modules to work directly with Machine and Swarm (mode), compatible with Docker 1.12 (https://github.com/fsoppelsa/ansible-swarm). They require Ansible 2.2+, the very first version of Ansible compatible with binary modules. You will need to compile the modules (written in Go) and then pass them to the ansible-playbook -M parameter:

    git clone https://github.com/fsoppelsa/ansible-swarm
    cd ansible-swarm/library
    go build docker_machine.go
    go build docker_swarm.go
    cd ..

There are some examples of plays in playbooks/. Ansible's play syntax is so easy to understand that it's even superfluous to explain it in detail. I used this play to join 10 workers to the Swarm2k experiment:

    ---
    - name: Join the Swarm2k project
      hosts: localhost
      connection: local
      gather_facts: False
      #mg0 104.236.18.183
      #mg1 104.236.78.154
      #mg2 104.236.87.10
      tasks:
        - name: Load shell variables
          shell: >
            eval $(docker-machine env "{{ machine_name }}")
            echo $DOCKER_TLS_VERIFY &&
            echo $DOCKER_HOST &&
            echo $DOCKER_CERT_PATH &&
            echo $DOCKER_MACHINE_NAME
          register: worker

        - name: Set facts
          set_fact:
            whost: "{{ worker.stdout_lines[0] }}"
            wcert: "{{ worker.stdout_lines[1] }}"

        - name: Join a worker to Swarm2k
          docker_swarm:
            role: "worker"
            operation: "join"
            join_url: ["tcp://104.236.78.154:2377"]
            secret: "d0cker_swarm_2k"
            docker_url: "{{ whost }}"
            tls_path: "{{ wcert }}"
          register: swarm_result

        - name: Print final msg
          debug: msg="{{ swarm_result.msg }}"

Basically, after loading some host facts from Machine, it invokes the docker_swarm module:

1. The operation is join.
2. The role of the new node is worker.
3. The new node joins tcp://104.236.78.154:2377, which was the leader manager at the time of joining. This argument takes an array of managers, such as ["tcp://104.236.78.154:2377", "tcp://104.236.18.183:2377", "tcp://104.236.87.10:2377"].
4. It passes the password (secret).
5. It specifies some basic Engine connection facts. The module will connect to docker_url using the certificates at tls_path.

After having docker_swarm.go compiled in library/, adding workers to the swarm is as easy as:

    #!/bin/bash
    SWARMWORKER="swarm-worker-"
    for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`; do
      ansible-playbook -M library --extra-vars "{machine_name: $machine}" playbook.yaml
    done

Cluster management

Let's now operate a little with this example, made of 3 managers and 10 workers. You can reference the nodes by calling them either by their hostname (manager1) or by their ID (ctv03nq6cjmbkc4v1tc644fsi). The other columns in this list statement describe the properties of the cluster nodes:

- STATUS: This is about the physical reachability of the node. If the node is up, it's Ready; otherwise, it's Down.
- AVAILABILITY: This is the node availability.
  A node can be either Active (participating in cluster operations), Pause (in standby, suspended, not accepting tasks), or Drain (waiting to evacuate its tasks).
- MANAGER STATUS: This is the current status of the manager. If a node is not a manager, this field will be empty. If a node is a manager, this field can be either Reachable (one of the managers present to guarantee high availability) or Leader (the host leading all operations).

Nodes operations

The docker node command comes with some possible options.

Demotion and promotion

Promotion is possible for worker nodes (transforming them into managers), while demotion is possible for manager nodes (transforming them into workers). Always keep in mind the quorum table when managing the number of managers and workers (an odd number of managers, greater than or equal to 3), in order to guarantee high availability.

Use the following syntax to promote worker0 and worker1 to managers:

    docker node promote worker0
    docker node promote worker1

There is nothing magic behind the curtain. It's just that Swarm attempts to change the node role with an on-the-fly instruction. Demotion is the same (docker node demote worker1). But be careful not to demote the node you're working from, otherwise you'll get locked out.

What happens if you try to demote a Leader manager? In this case, the Raft algorithm will start an election, and a new leader will be selected among the active managers.

Tagging nodes

You must have noticed, in the preceding screenshot, that worker9 is in Drain availability. This means that the node is in the process of evacuating its tasks (if any), which will be rescheduled somewhere else on the cluster. You can change the availability of a node by updating its status using the docker node update command. The --availability option can take either active, pause, or drain; here we just restored worker9 to the active state:

- active: This means that the node is running and ready to accept tasks.
- pause: This means that the node is running but not accepting tasks.
- drain: This means that the node is running and not accepting tasks; it is currently draining its tasks, which are getting rescheduled somewhere else.

Another powerful update argument is about labels. The --label-add and --label-rm options respectively allow us to add and remove labels on Swarm nodes. Docker Swarm labels do not affect the Engine labels. It's possible to specify labels when starting the Docker Engine (dockerd [...] --label "staging" --label "dev" [...]), but Swarm has no power to edit or change them. The labels we see here only affect the Swarm behavior.

Labels are useful to categorize nodes. When you start services, you can then filter and decide where to physically spawn containers, using labels. For instance, if you want to dedicate a bunch of nodes with SSDs to host MySQL, you can actually do this:

    docker node update --label-add type=ssd --label-add type=mysql worker1
    docker node update --label-add type=ssd --label-add type=mysql worker2
    docker node update --label-add type=ssd --label-add type=mysql worker3

Later, when you start a service with some replica factor, say 3, you'll be sure that it will start the MySQL containers exactly on worker1, worker2, and worker3, if you filter by the node label (node.labels.type):

    docker service create --replicas 3 --constraint 'node.labels.type == mysql' --name mysql-service mysql:5.5

Summary

In this article, we went through the typical Swarm administration procedures and options.
After showing how to add managers and workers to the cluster, we explained in detail how to update cluster and node properties, how to check the Swarm health, and we encountered Shipyard as a UI. After this focus on infrastructure, it's now time to use our Swarm.

Resources for Article:

Further resources on this subject:
- Hands On with Docker Swarm [article]
- Setting up a Project Atomic host [article]
- Getting Started with Flocker [article]


Introduction to the Functional Programming

Packt
01 Dec 2016
19 min read
In this article by Wisnu Anggoro, the author of the book Functional C#, we are going to explore functional programming by testing it. We will use the power of C# to construct some functional code. We will also deal with the features in C# that are mostly used in developing functional programs. By the end of this article, we will have an idea of what the functional approach in C# will be like. Here are the topics we will cover:

- Introduction to the functional programming concept
- Comparison between the functional and imperative approach
- The concepts of functional programming
- The advantages and disadvantages of functional programming

In functional programming, we write functions without side effects, the way we write in mathematics. The variable in the code function represents the value of the function parameter, similar to a mathematical function. The idea is that a programmer defines the functions that contain the expression, the definition, and the parameters that can be expressed by a variable in order to solve problems.

After a programmer builds the function and sends the function to the computer, it's the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it will analyze the expression from the function and yield the result to the user in a printed format. The calculator will evaluate a function which is composed of variables passed as parameters and expressions which form the body of the function. Variables are substituted by their values in the expression. We can give simple expressions and compound expressions using algebraic operators. Since expressions without assignments never alter the value, sub-expressions need to be evaluated only once.

Suppose we have the expression 3 + 5 inside a function. The computer will definitely return 8 as the result right after it completely evaluates it. However, this is just a simple example of how the computer acts when evaluating an expression. In fact, a programmer can increase the ability of the computer by creating complex definitions and expressions inside the function. Not only can the computer evaluate simple expressions, but it can also evaluate complex calculations and expressions.

Understanding definitions, scripts, and sessions

As we discussed earlier regarding a calculator that analyzes the expression from the function, let's imagine we have a calculator that has a console panel like a computer does. The difference between that and a conventional calculator is that we have to press Enter instead of = (equals) in order to run the evaluation process of the expression. Here, we can type the expression and then press Enter. Now, imagine that we type the following expression:

    3 x 9

Immediately after pressing Enter, the computer will print 27 in the console, and that's what we are expecting. The computer has done a great job of evaluating the expression we gave. Now, let's move on to analyzing the following definitions. Imagine that we type them on our functional calculator:

    square a = a * a
    max a b  = a, if a ≥ b
             = b, if b > a

We have defined two definitions, square and max. We can call the list of definitions a script. By calling the square function followed by any number representing variable a, we will be given the square of that number.
Also, in the max definition, we supply two numbers to represent variables a and b, and the computer then evaluates the expression to find the bigger of the two. Having written these two definitions, we can use them in what we can call a session, as follows:

square (1 + 2)

The computer will definitely print 9 after evaluating the preceding function. The computer will also be able to evaluate the following function:

max 1 2

It will return 2 as the result, based on the definition we wrote earlier. This is also possible if we provide the following expression:

square (max 2 5)

Then, 25 will be displayed in our calculator console panel. We can also build a definition using a previous definition. Suppose we want to quadruple an integer number and take advantage of the definition of the square function; here is what we can send to our calculator:

quad q = square q * square q
quad 10

The first line of the preceding expression is the definition of the quad function. In the second line, we call that function, and we will be given 10000 as the result. The script can also define a variable's value; for instance, take a look at the following:

radius = 20

So, we should expect the computer to be able to evaluate the following definition:

area = (22 / 7) * square (radius)

Understanding the functions for functional programming

Functional programming uses a technique of emphasizing functions and their application instead of commands and their execution. Most values in functional programming are function values. Let's take a look at the following mathematical notation:

f :: A -> B

From the preceding notation, we can say that function f is a relation between each element of A and B. We call A the source type and B the target type. In other words, the notation A -> B states that A is an argument where we have to input a value, and B is the return value, or output, of the function evaluation. Consider that x denotes an element of A and x + 2 denotes an element of B; we can then write the mathematical notation as follows:

f(x) = x + 2

In mathematics, we use f(x) to denote a functional application. In functional programming, the function will be passed an argument and will return the result after the evaluation of the expression. We can construct many definitions for one and the same function. The following two definitions are equivalent and will triple the input passed as an argument:

triple y = y + y + y
triple' y = 3 * y

As we can see, triple and triple' have different expressions. However, they are the same function, so we can say that triple = triple'. Although we can write many definitions to express one function, we will find that there is only one definition that proves to be the most efficient in the evaluation procedure, in the sense of the expression reduction we discussed previously. Unfortunately, we cannot determine which of our two preceding definitions is the most efficient, since that depends on the characteristics of the evaluation mechanism.

Forming the definition

Now, let's go back to our discussion on definitions at the beginning of this chapter. We have the following definition in order to retrieve a value from case analysis:

max a b = a, if a ≥ b
        = b, if b > a

There are two expressions in this definition, distinguished by Boolean-valued expressions. These distinguishers are called guards, and we use them to decide between True and False. The first line is one of the alternative result values for this function.
It states that the return value will be a if the expression a ≥ b is True. In contrast, the function will return the value b if the expression b > a is True. Using these two cases, a ≥ b and b > a, the max value depends on the values of a and b. The order of the cases doesn't matter.

We can also define the max function using the special word otherwise. This word ensures that the otherwise case will be executed if no other expression results in a True value. Here, we will refactor our max function using the word otherwise:

max a b = a, if a ≥ b
        = b, otherwise

From the preceding function definition, we can see that if the first expression is False, the function will return b immediately without performing any further evaluation. In other words, the otherwise case always applies if all previous guards return False.

Another special word commonly used in mathematical notation is where. This word is used to set a local definition for the expression of the function. Let's take a look at the following example:

f x y = (z + 2) * (z + 3)
where z = x + y

In the preceding example, we have a function f with a variable z, whose value is determined by x and y. There, we introduce a local definition of z to the function. This local definition can also be used along with the case analysis we discussed earlier. Here is an example of a local definition in conjunction with case analysis:

f x y = x + z, if x > 100
      = x - z, otherwise
where z = triple(y + 3)

In the preceding function, there is a local definition of z, which applies to both the x + z and x - z expressions. As we discussed earlier, although the function has two equal to (=) signs, only one expression will return the value.

Currying

Currying is a simple technique of restructuring a function's arguments into a sequence: it transforms an n-ary function into n unary functions. It is a technique that was created to work around the limitation of lambda functions, which are unary. Let's go back to our max function again and look at the following definition:

max a b = a, if a ≥ b
        = b, if b > a

We can see that there are no brackets around the arguments a b, and a and b are not separated by a comma. We can add brackets and a comma to the function definition, as follows:

max' (a,b) = a, if a ≥ b
           = b, if b > a

At first glance, we might take the two functions to be the same since they have the same expressions. However, they are different because of their different types. The max' function has a single argument, which consists of a pair of numbers. The type of the max' function can be written as follows:

max' :: (num, num) -> num

On the other hand, the max function has two arguments. The type of this function can be written as follows:

max :: num -> (num -> num)

The max function takes a number and then returns a function from a single number to a number. When we pass the value a to the max function, it returns a new function; applying that function to b then yields the maximum of the two numbers.

Comparison between functional and imperative programming

The main difference between functional and imperative programming is that imperative programming produces side effects while functional programming doesn't. In imperative programming, expressions are evaluated and their resulting values are assigned to variables. So, when we group a series of expressions into a function, the resulting value depends on the state of the variables at that point in time. This is called a side effect; the short sketch below makes the contrast concrete.
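The following is a small, hypothetical C# sketch (not from the book); the class, field, and method names are made up for illustration. The first function depends on mutable shared state, so calling it with the same input gives different results; the second has no side effects:

using System;

class SideEffectDemo
{
    static int counter = 0;                   // shared, mutable state

    static int NextWithState(int x)
    {
        counter++;                            // side effect: mutates the shared state
        return x + counter;                   // result depends on the call history
    }

    static int AddTwo(int x) => x + 2;        // no side effects: same input, same output

    static void Main()
    {
        Console.WriteLine(NextWithState(1));  // 2
        Console.WriteLine(NextWithState(1));  // 3, same input but a different result
        Console.WriteLine(AddTwo(1));         // always 3
        Console.WriteLine(AddTwo(1));         // always 3
    }
}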
Because of the continuous change in state, the order of evaluation matters. In the functional programming world, destructive assignment is forbidden, and each time an assignment happens, a new variable is introduced.

Concepts of functional programming

We can also distinguish functional programming from imperative programming by its concepts. The core ideas of functional programming are encapsulated in constructs such as first-class functions, higher-order functions, purity, recursion over loops, and partial functions. We will discuss these concepts in this topic.

First-class and higher-order functions

In imperative programming, the given data is of primary importance and is passed through a series of functions (with side effects). Functions are special constructs with their own semantics. In effect, functions do not have the same standing as variables and constants. Since a function cannot be passed as a parameter or returned as a result, functions are regarded as the second-class citizens of the programming world. In the functional programming world, we can pass a function as a parameter and return a function as a result. Functions obey the same semantics as variables and their values. Thus, they are first-class citizens. We can also create functions of functions, called second-order functions, through composition. There is no limit imposed on the composability of functions, and such functions are called higher-order functions.

Fortunately, the C# language supports these two concepts, since it has a feature called the function object, which has types and values. To discuss the function object in more detail, let's take a look at the following code:

using System;

class Program
{
    static void Main(string[] args)
    {
        Func<int, int> f = (x) => x + 2;
        int i = f(1);
        Console.WriteLine(i);

        f = (x) => 2 * x + 1;
        i = f(1);
        Console.WriteLine(i);
    }
}

We can find the code in FuncObject.csproj; if we run it, it prints 3 twice on the console (first from x + 2 with x = 1, then from 2 * x + 1 with x = 1). Why do we display it? Let's continue the discussion on function types and function values. Hit Ctrl + F5 instead of F5 in order to run the code without the debugger; this is useful to stop the console from closing on exit.

Pure functions

In functional programming, most functions do not have side effects. In other words, the function doesn't change any variables outside the function itself. Also, it is consistent, which means that it always returns the same value for the same input data. The following are example actions that generate side effects in programming:

- Modifying a global or static variable, since this makes the function interact with the outside world.
- Modifying an argument in a function. This usually happens if we pass a parameter by reference.
- Raising an exception.
- Performing input and output with the outside world, for instance, getting a keystroke from the keyboard or writing data to the screen.

Although it does not satisfy the rule of a pure function, we will use many Console.WriteLine() methods in our programs in order to ease our understanding of the code samples. The following is the sample non-pure function that we can find in NonPureFunction1.csproj:

using System;

class Program
{
    private static string strValue = "First";

    public static void AddSpace(string str)
    {
        strValue += ' ' + str;
    }

    static void Main(string[] args)
    {
        AddSpace("Second");
        AddSpace("Third");
        Console.WriteLine(strValue);
    }
}

If we run the preceding code, it prints First Second Third on the console, as expected. In this code, we modify the strValue global variable inside the AddSpace function.
Since it modifies a variable outside the function, it's not considered a pure function. Let's take a look at another non-pure function example in NonPureFunction2.csproj:

using System;
using System.Text;

class Program
{
    public static void AddSpace(StringBuilder sb, string str)
    {
        sb.Append(' ' + str);
    }

    static void Main(string[] args)
    {
        StringBuilder sb1 = new StringBuilder("First");
        AddSpace(sb1, "Second");
        AddSpace(sb1, "Third");
        Console.WriteLine(sb1);
    }
}

We see the AddSpace function again, but this time with an additional StringBuilder argument. In the function, we append a space and str to the sb argument. Since sb and sb1 refer to the same StringBuilder object, the function also modifies the sb1 variable in the Main function. Note that it will display the same output as NonPureFunction1.csproj.

To convert the preceding two pieces of non-pure function code into pure function code, we can refactor the code as follows. This code can be found in PureFunction.csproj:

using System;

class Program
{
    public static string AddSpace(string strSource, string str)
    {
        return (strSource + ' ' + str);
    }

    static void Main(string[] args)
    {
        string str1 = "First";
        string str2 = AddSpace(str1, "Second");
        string str3 = AddSpace(str2, "Third");
        Console.WriteLine(str3);
    }
}

Running PureFunction.csproj, we will get the same output as from the two previous non-pure function examples. However, in this pure function code, we have three variables in the Main function. This is because in functional programming we cannot modify a variable we have initialized earlier. In the AddSpace function, instead of modifying a global variable or an argument, we now return a string value to satisfy the functional rule. The following are the advantages we gain if we implement pure functions in our code:

- Our code will be easier to read and maintain, because the function does not depend on external state and variables. It is also designed to perform specific tasks, which increases maintainability.
- The design will be easier to change, since it is easier to refactor.
- Testing and debugging will be easier, since it's quite easy to isolate a pure function.

Recursive functions

In the imperative programming world, we have destructive assignment to mutate the state of a variable. By using loops, one can change multiple variables to achieve a computational objective. In the functional programming world, since a variable cannot be destructively assigned, we need recursive function calls to achieve the effect of looping.

Let's create a factorial function. In mathematical terms, the factorial of a nonnegative integer N is the product of all positive integers less than or equal to N. It is usually denoted by N!. We can denote the factorial of 7 as follows:

7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040

If we look deeper at the preceding formula, we will discover that its pattern is as follows:

N! = N * (N-1) * (N-2) * (N-3) * (N-4) * (N-5) ...

Now, let's take a look at the following factorial function in C#. It's an imperative approach and can be found in the RecursiveImperative.csproj file:

public partial class Program
{
    private static int GetFactorial(int intNumber)
    {
        if (intNumber == 0)
        {
            return 1;
        }
        return intNumber * GetFactorial(intNumber - 1);
    }
}

As we can see, we invoke the GetFactorial() function from the GetFactorial() function itself. This is what we call a recursive function.
We can use this function by creating a Main() method containing the following code:

using System;

public partial class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(
            "Enter an integer number (Imperative approach)");
        int inputNumber = Convert.ToInt32(Console.ReadLine());
        int factorialNumber = GetFactorial(inputNumber);
        Console.WriteLine(
            "{0}! is {1}", inputNumber, factorialNumber);
    }
}

We invoke the GetFactorial() method and pass our desired number as the argument. The method then multiplies our number by the result of the GetFactorial() call whose argument has been decreased by 1. The recursion lasts until intNumber reaches 0, at which point the method returns 1.

Now, let's compare the preceding recursive function in the imperative approach with one in the functional approach. We will use the power of the Aggregate operator in LINQ to achieve this goal. We can find the code in the RecursiveFunctional.csproj file. The code will look like the following:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(
            "Enter an integer number (Functional approach)");
        int inputNumber = Convert.ToInt32(Console.ReadLine());
        IEnumerable<int> ints = Enumerable.Range(1, inputNumber);
        int factorialNumber = ints.Aggregate((f, s) => f * s);
        Console.WriteLine(
            "{0}! is {1}", inputNumber, factorialNumber);
    }
}

In the preceding code, we initialize the ints variable, which contains the values from 1 to our desired integer number, and then we fold ints using the Aggregate operator. The output of RecursiveFunctional.csproj is exactly the same as the output of RecursiveImperative.csproj; however, the code in RecursiveFunctional.csproj uses the functional approach.

The advantages and disadvantages of functional programming

So far, we have dealt with functional programming by writing code using the functional approach. Now, we can look at the advantages of the functional approach, such as the following.

The order of execution doesn't matter, since evaluation is handled by the system to compute the value we have asked for rather than following a sequence defined by the programmer. In other words, the expressions are declarative rather than sequential.

Because functional programs are built on mathematical concepts, the system can be designed with a notation as close as possible to mathematical notation.

Variables can be replaced by their values, since the evaluation of an expression can be done at any time. Functional code is then more mathematically traceable, because the program can be manipulated or transformed by substituting equals with equals. This feature is called referential transparency.

Immutability makes functional code free of side effects. A shared variable, which is an example of a side effect, is a serious obstacle to creating parallel code and results in non-deterministic execution. By removing side effects, we get a better approach to coding.

The power of lazy evaluation makes the program run faster, because it only provides what we really require for a query's result. Suppose we have a large amount of data and want to filter it by a specific condition, such as showing only the data that contains the word Name. In imperative programming, we would have to evaluate every operation on all the data. The problem is that when the operation takes a long time, the program will need more time to run as well. Fortunately, functional programming that applies LINQ performs the filtering operation only when it is needed, as the short sketch below illustrates.
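The following is a minimal, hypothetical sketch of this deferred behavior (not taken from the book's project files); the class and variable names are made up. Defining the query does not run the filter; the filter runs only when the result is enumerated:

using System;
using System.Collections.Generic;
using System.Linq;

class LazyDemo
{
    static void Main()
    {
        var data = new List<string> { "Name: Ann", "Other", "Name: Bob" };

        // Defining the query does not filter anything yet.
        IEnumerable<string> names = data.Where(s => s.Contains("Name"));

        data.Add("Name: Carol");          // added after the query was defined

        // The filter runs only now, while enumerating, so Carol is included.
        foreach (var s in names)
        {
            Console.WriteLine(s);
        }
    }
}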
That's why functional programming saves us so much time through lazy evaluation.

We also have a solution for complex problems in composability. It is a principle that manages a problem by dividing it and giving the pieces of the problem to several functions. The concept is similar to a situation in which we organize an event and ask different people to take up particular responsibilities. By doing this, we can ensure that everything will be done properly by each person.

Besides the advantages of functional programming, there are several disadvantages as well. Here are some of them.

Since there's no state and no update of variables is allowed, a loss of performance can take place. The problem occurs when we deal with a large data structure that needs to be duplicated, even though only a small part of its data changes.

Compared to imperative programming, much more garbage tends to be generated in functional programming, due to the concept of immutability, which needs more variables to handle specific assignments. Because we cannot control garbage collection, performance can decrease as well.

Summary

So we have become acquainted with the functional approach by discussing the introduction to functional programming. We have also compared the functional approach with the mathematical concepts used when we create functional programs. It's now clear that the functional approach uses a mathematical approach to compose a functional program. The comparison between functional and imperative programming also led us to the important point of distinguishing the two. It's now clear that in functional programming, the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach, the programmer focuses on the way of performing the task and on tracking changes in state.

Resources for Article:

Further resources on this subject:

Introduction to C# and .NET [article]
Why we need Design Patterns? [article]
Parallel Computing [article]


Storage Practices and Migration to Hyper-V 2016

Packt
01 Dec 2016
17 min read
In this article by Romain Serre, the author of Hyper-V 2016 Best Practices, we will learn about why Hyper-V projects fail, an overview of the failover cluster, Storage Replica, Microsoft System Center, migrating VMware virtual machines, and upgrading single Hyper-V hosts.

(For more resources related to this topic, see here.)

Why Hyper-V projects fail

Before you start deploying your first production Hyper-V host, make sure that you have completed a detailed planning phase. I have been called in to many Hyper-V projects to assist in repairing what a specialist has implemented. Most of the time, I start by correcting the design, because the biggest failures happen there but are only discovered later, during implementation. I remember many projects in which I was called in to assist with installations and configurations during the implementation phases, because these were thought to be the project phases where a real expert was needed. However, based on experience, this notion is wrong. Most critical to a successful design phase are two things: that it happens at all, which is rare, and that someone with technological and organizational Hyper-V experience is involved. If you don't have the latter, look for a Microsoft Partner with a Gold Competency called Management and Virtualization on Microsoft Pinpoint (http://pinpoint.microsoft.com) and take a quick look at the reviews done by customers for successful Hyper-V projects. If you think it's expensive to hire a professional, wait until you hire an amateur. Having an expert in the design phase is the best way to accelerate your Hyper-V project.

Overview of the failover cluster

Before you start your first deployment in production, make sure you have defined the aim of the project and its SMART criteria and have done a thorough analysis of the current state. After this, you should be able to plan the necessary steps to reach the target state, including a pilot phase.

In a Hyper-V Failover Cluster, the failure of a cluster node is instantly detected. The virtual machines running on that particular node are powered off immediately because of the hardware failure on their compute node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications, in just a few minutes.

Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and its virtual hard disks. In the case of a planned failover, that is, for patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster, utilizing the Windows Server Failover Clustering feature, it is indeed capable of running as an Active/Active cluster. To ensure that all these capabilities of a Failover Cluster are indeed working, an accurate planning and implementation process is required.

Storage Replica

Storage Replica is a new feature in Windows Server 2016 that provides block-level replication at the storage layer, for a data recovery plan or for a stretched cluster.
Storage Replica can be used in the following scenarios:

- Server-to-server storage replication using Storage Replica
- Storage replication in a stretch cluster using Storage Replica
- Cluster-to-cluster storage replication using Storage Replica
- Server-to-itself replication between volumes using Storage Replica

Depending on the scenario and on the bandwidth and latency of the inter-site link, you can choose between synchronous and asynchronous replication. For further information about Storage Replica, you can read about this topic at http://bit.ly/2albebS.

Storage Replica relies on the SMB3 protocol. You can leverage TCP/IP or RDMA on the network. I recommend implementing RDMA where possible to reduce latency and CPU workload and to increase throughput.

Compared to Hyper-V Replica, the Storage Replica feature provides replication of all virtual machines stored in a volume, at the block level. Moreover, Storage Replica can replicate in synchronous mode, while Hyper-V Replica is always asynchronous. Finally, with Hyper-V Replica you have to specify the failover IP address, because replication is executed at the VM level, whereas with Storage Replica you don't need to specify a failover IP address; however, in the case of replication between two clusters in two different rooms, the VM networks must be configured in the destination room.

The choice between Hyper-V Replica and Storage Replica depends on the disaster recovery plan you need. If you want to protect some application workloads, you can use Hyper-V Replica. On the other hand, if you have a passive room ready to restart in case of issues in the active room, Storage Replica can be a great solution, because all the VMs in a volume will already be replicated. To deploy replication between two clusters, you need two sets of storage based on iSCSI, SAS JBOD, Fibre Channel SAN, or Shared VHDX. For better performance, I recommend using SSDs to hold the Storage Replica logs.

Microsoft System Center

Microsoft System Center 2016 is Microsoft's solution for advanced management of Windows Server and its components, along with its dependencies such as various hardware and software products. It consists of various components that support every stage of your IT services, from planning to operating, over backup, to automation. System Center has existed since 1994 and has evolved continuously. It now offers a great set of tools for very efficient management of server and client infrastructures. It also offers the ability to create and operate whole clouds, run in your own data center or in a public cloud data center such as Microsoft Azure. Today, it's your choice whether to run your workloads on-premises or off-premises. System Center provides a standardized set of tools for a unique and consistent Cloud OS management experience.

System Center does not add any new features to Hyper-V, but it does offer great ways to make the most out of it and ensure streamlined operating processes after its implementation. System Center is licensed via the same model as Windows Server, leveraging Standard and Datacenter editions at the physical host level. While every System Center component offers great value in itself, the binding of multiple components into a single workflow offers even more advantages, as shown in the following figure:

System Center overview

When do you need System Center? There is no right or wrong answer to this, and the answer most often given by any IT consultant around the world is, "It depends".
System Center adds value to any IT environment, starting with only a few systems. In my experience, a Hyper-V environment with up to three hosts and 15 VMs can be managed efficiently without the use of System Center. If you plan to use more hosts or virtual machines, System Center will definitely be a great solution for you. Let's take a look at the components of System Center.

Migrating VMware virtual machines

If you are running virtual machines on VMware ESXi hosts, there are really good options available for moving them to Hyper-V. There are different approaches to converting a VMware virtual machine to Hyper-V: from inside the VM at the guest level, running cold conversions with the VM powered off at the host level, running hot conversions on a running VM, and so on. I will give you a short overview of the tools currently available in the market.

System Center VMM

SCVMM should not be the first tool of your choice; take a look at MVMC combined with MAT to get equivalent functionality from a better-working tool. Earlier versions of SCVMM allowed online or offline conversions of VMs; the current version, 2016, allows only offline conversions. Select a powered-off VM on a VMware host or from the SCVMM library share to start the conversion. The VM conversion will convert VMware-hosted virtual machines through vCenter and ensure that the entire configuration, such as memory, virtual processors, and other machine settings, is also migrated from the initial source. The tool also adds virtual NICs to the deployed virtual machine on Hyper-V. The VMware tools must be uninstalled before the conversion, because you won't be able to remove the VMware tools once the VM is no longer running on a VMware host. SCVMM 2016 supports ESXi hosts running 4.1 and 5.1, but not the latest ESX version 5.5. SCVMM conversions are easy to automate through their integrated PowerShell support, and it's very easy to install upgraded Hyper-V integration services as part of the setup or by adding any kind of automation through PowerShell or System Center Orchestrator. Apart from the need to remove the VMware tools manually, SCVMM is an end-to-end solution for the migration process. You can find some PowerShell examples for SCVMM-powered V2V conversion scripts at http://bit.ly/Y4bGp8. I no longer recommend the use of this tool, because Microsoft no longer spends time on it.

Microsoft Virtual Machine Converter

Microsoft released the first version of the free solution accelerator Microsoft Virtual Machine Converter (MVMC) in 2013, and it should be available in version 3.1 by the release of this book. MVMC provides a small and easy option to migrate selected virtual machines to Hyper-V. It takes a very similar approach to the conversion as SCVMM does. The conversion happens at the host level and offers a fully integrated end-to-end solution. MVMC supports all recent versions of VMware vSphere. It will even uninstall the VMware tools and install the Hyper-V integration services. MVMC 2.0 works with all supported Hyper-V guest operating systems, including Linux. MVMC comes with a full GUI wizard as well as a fully scriptable command-line interface (CLI). Besides being a free tool, it is fully supported by Microsoft in case you experience any problems during the migration process. MVMC should be the first tool of your choice if you do not know which tool to use.
Like most other conversion tools, MVMC does the actual conversion on the MVMC server itself and requires disk space there to host the original VMware virtual disk as well as the converted Hyper-V disk. MVMC even offers an add-on for VMware vCenter servers to start conversions directly from the vSphere console. The current release of MVMC is freely available at its official download site at http://bit.ly/1HbRIg7. Download MVMC to the conversion system and start the click-through setup. After finishing the download, start MVMC with the GUI by executing Mvmc.Gui.exe. The wizard guides you through some choices. MVMC is not only capable of migrating to Hyper-V, but also allows you to move virtual machines to Microsoft Azure. Follow these few steps to convert a VMware VM:

1. Select Hyper-V as a target.
2. Enter the name of the Hyper-V host you want this VM to run on, and specify a fileshare to use and the format of the disks you want to create. Choosing dynamically expanding disks should be the best option most of the time.
3. Enter the name of the ESXi server you want to use as a source, as well as valid credentials.
4. Select the virtual machine to convert. Make sure it has the VMware tools installed. The VM can be either powered on or off.
5. Enter a workspace folder to store the converted disk.
6. Wait for the process to finish.

There is some additional guidance available at http://bit.ly/1vBqj0U. This is a great and easy way to migrate a single virtual machine. Repeat the steps for every other virtual machine you have, or use some automation.

Upgrading single Hyper-V hosts

If you are currently running a single host with an older version of Hyper-V and now want to upgrade this host on the same hardware, there is a limited set of decisions to be made. You want to upgrade the host with the least amount of downtime and without losing any data from your virtual machines. Before you start the upgrade process, make sure all components of your infrastructure are compatible with the new version of Hyper-V. Then it's time to prepare your hardware for this new version of Hyper-V by upgrading all firmware to the latest available version and downloading the necessary drivers for Windows Server 2016 with Hyper-V, along with its installation media.

One of the most crucial questions in this upgrade scenario is whether you should use the integrated installation option called in-place upgrade, where the existing operating system is transformed to the recent version of Hyper-V, or delete the current operating system and perform a clean installation. While the installation experience of in-place upgrades works well when only the Hyper-V role is installed, based on experience, some versions of upgraded systems are more likely to suffer problems. Numbers pulled from the Elanity support database show about 15 percent more support cases on systems upgraded from Windows Server 2008 R2 than on clean installations. Remember how fast and easy it is nowadays to do a clean install of Hyper-V; this is why it is highly recommended over upgrading existing installations. If you are currently using Windows Server 2012 R2 and want to upgrade to Windows Server 2016, note that we have not yet seen any differences in the number of support cases between the installation methods. However, clean installations of Hyper-V are so fast and easy that I barely use in-place upgrades anyway. Before starting any type of upgrade scenario, make sure you have current backups of all affected virtual machines.
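It also pays to record what is currently running before you touch the host. The following one-liner is only a suggestion and not part of the original article; it uses the standard Hyper-V PowerShell module to capture each VM's state and integration services version so you can compare them after the upgrade (the CSV path is a placeholder, adjust it to your environment):

# Hypothetical pre-upgrade inventory of all VMs on this host
Get-VM | Select-Object Name, State, Generation, IntegrationServicesVersion |
    Export-Csv -Path C:\PreUpgrade-VMInventory.csv -NoTypeInformation

After the upgrade, running the same command again gives you a quick before/after comparison of every virtual machine on the host.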
Nonetheless, if you want to use the in-place upgrade, insert the Windows Server 2016 installation media and run this command from your current operating system:

Setup.exe /auto:upgrade

If it fails, it's most likely due to an incompatible application installed on the older operating system. Start the setup without the parameter to find out which applications need to be removed before executing the unattended setup. If you upgrade from Windows Server 2012 R2, no additional preparation is needed; if you upgrade from older operating systems, make sure to remove all snapshots from your virtual machines.

Importing virtual machines

If you choose to do a clean installation of the operating system, you do not necessarily have to export the virtual machines first; just make sure all VMs are powered off and are stored on a different partition than your Hyper-V host OS. If you are using a SAN, disconnect all LUNs before the installation and reconnect them afterwards to ensure their integrity through the installation process. After the installation, just reconnect the LUNs and set the disks online in diskpart or in Disk Management at Control Panel | Computer Management. If you are using local disks, make sure not to reformat the partition with your virtual machines on it. You should export the VMs to another location and import them back after reformatting; it requires more effort, but it is safer. Set the partition online and then reimport the virtual machines. Before you start the reimport process, make sure all dependencies of your virtual machines are available, especially vSwitches.

To import a single Hyper-V VM, use the following PowerShell cmdlet:

Import-VM -Path 'D:\VMs\VM01\Virtual Machines\2D5EECDA-8ECC-4FFC-ACEE-66DAB72C8754.xml'

To import all virtual machines from a specific folder, use this command:

Get-ChildItem d:\VMs -Recurse -Filter "Virtual Machines" | %{Get-ChildItem $_.FullName -Filter *.xml} | %{Import-VM $_.FullName -Register}

After that, all VMs are registered and ready for use on your new Hyper-V host. Make sure to update the Hyper-V integration services of all virtual machines before going back into production. If you still have virtual disks in the old .vhd format, it's now time to convert them to .vhdx files. Use this PowerShell cmdlet on powered-off VMs or standalone virtual disks to convert a single .vhd file:

Convert-VHD -Path d:\VMs\testvhd.vhd -DestinationPath d:\VMs\testvhdx.vhdx

If you want to convert the disks of all your VMs, fellow MVPs Aidan Finn and Didier van Hoye have provided a great end-to-end solution to achieve this. It can be found at http://bit.ly/1omOagi.

I often hear from customers that they don't want to upgrade their disks, so as to be able to revert to older versions of Hyper-V when needed. First, you should know that I have never met a customer who has done that, because there really is no technical reason why anyone should do this. Second, even if you did make this backwards move, running virtual machines on older Hyper-V hosts is not supported if they have previously been deployed on more modern versions of Hyper-V. The reason for this is very simple: Hyper-V does not offer a way to downgrade the Hyper-V integration services. The only way to move a virtual machine back to an older Hyper-V host is by restoring a backup of the VM made before the upgrade process.

Exporting virtual machines

If you want to use another physical system running a newer version of Hyper-V, you have multiple possible options.
They are as follows:

- When using a SAN as shared storage, make sure all your virtual machines, including their virtual disks, are located on LUNs other than the one holding the host operating system. Disconnect all LUNs hosting virtual machines from the source host and connect them to the target host. Bulk import the VMs from the specified folders.
- When using SMB3 shared storage from Scale-Out File Servers, make sure to switch access to the shares hosting the VMs over to the new Hyper-V hosts.
- When using local hard drives and upgrading from Windows Server 2008 SP2 or Windows Server 2008 R2 with Hyper-V, it's necessary to export the virtual machines to a storage location reachable from the new host. Hyper-V servers running legacy versions of the OS (prior to 2012 R2) need to power off the VMs before an export can occur.

To export a virtual machine from a host, use the following PowerShell cmdlet:

Export-VM -Name VM -Path D:\

To export all virtual machines to folders underneath the same root, use the following command:

Get-VM | Export-VM -Path D:\

In most cases, it is also possible to just copy the virtual machine folders containing the virtual hard disks and configuration files to the target location and import them on Windows Server 2016 Hyper-V hosts. However, the export method is more reliable and should be preferred.

A good alternative for moving virtual machines can be the re-creation of the virtual machines. If you have another host up and running with a recent version of Hyper-V, it may be a good opportunity to also upgrade some guest OSes. For instance, Windows Server 2003 and 2003 R2 have been out of extended support since July 2015. Depending on your applications, it may now be the right choice to create new virtual machines with Windows Server 2016 as the guest operating system and migrate your existing workloads from the older VMs to these new machines.

Summary

In this article, we learned about why Hyper-V projects fail, how to migrate VMware virtual machines, and how to upgrade single Hyper-V hosts. The article also covered an overview of the failover cluster and Storage Replica.

Resources for Article:

Further resources on this subject:

Hyper-V Basics [article]
The importance of Hyper-V Security [article]
Hyper-V building blocks for creating your Microsoft virtualization platform [article]

Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

- Choosing the best place for an implementation
- Automation of routine tasks
- Creating the application structure by using code generation tools

(For more resources related to this topic, see here.)

Introduction

In this article you will learn that, when creating new projects, developers often face questions such as which components they should create, where to place them in the application structure, what each component should implement, what naming convention to follow, and so on. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component, which will display different menus on your site depending on where we are in the application.

From one project to another, the developer's work is usually repeated. This holds true for tasks such as creating the project structure, configuration, creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton by using one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against a situation where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful for you to do everything manually. But in any case, all of us would like to reduce repetitive tasks. Phalcon tries to help you by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools help you simplify the creation of CRUD components for a regular application. Therefore, you can create working code in a matter of seconds without creating the code yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code that is called in turn on each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth, but instead we will work with tasks that are common to most projects and with implementations extending Phalcon.

Choosing the best place for an implementation

Let's pretend you want to add a custom component. In this case, the component allows you to change your site navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out. Then you ask yourself where the best place in the project is to put the code, where to place the files, how to name the classes, and how to make the autoloader find them.

Getting ready…

For successful implementation of this recipe you must have your application deployed.
By this we mean that you need to have a web server installed and configured for handling requests to your application, that the application is able to receive requests, and that you have implemented the necessary components such as controllers, views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…

Follow these steps to complete this recipe:

1. Create the app/library directory, if you haven't got one, where user components will be stored. Next, create the Elements (app/library/Elements.php) component. This class extends Phalcon\Mvc\User\Component. Generally, it is not necessary, but it helps get access to application services quickly. The contents of Elements should be:

<?php
namespace Library;

use Phalcon\Mvc\User\Component;
use Phalcon\Mvc\View\Simple as View;

class Elements extends Component
{
    public function __construct()
    {
        // ...
    }

    public function getMenu()
    {
        // ...
    }
}

2. Now we register this class in the Dependency Injection Container. We use a shared instance in order to prevent creating new instances on each service resolution:

$di->setShared('elements', function () {
    return new Library\Elements();
});

3. If your session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance for the same reasons:

$di->setShared('session', function () {
    $session = new Phalcon\Session\Adapter\Files();
    $session->start();
    return $session;
});

4. Create the templates directory within the directory with your views, views/templates.

5. Then you need to tell the class autoloader about the new namespace which we have just introduced. Let's do it in the following way:

$loader->registerNamespaces([
    // The APP_PATH constant should point
    // to the project's root
    'Library' => APP_PATH . '/apps/library/',
    // ...
]);

6. Add the following code right after the body tag in the main layout of your application:

<div class="container">
  <div class="navbar navbar-inverse">
    <div class="container-fluid">
      <div class="navbar-header">
        <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top-menu" aria-expanded="false">
          <span class="sr-only">Toggle navigation</span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
        </button>
        <a class="navbar-brand" href="#">Blog 24</a>
      </div>
      <?php echo $this->elements->getMenu(); ?>
    </div>
  </div>
</div>

7. Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml:

<div class="collapse navbar-collapse" id="blog-top-menu">
  <ul class="nav navbar-nav">
    <li class="active">
      <a href="#">Home</a>
    </li>
  </ul>
  <ul class="nav navbar-nav navbar-right">
    <li>
      <?php if ($this->session->get('identity')): ?>
        <a href="#">Sign Out</a>
      <?php else: ?>
        <a href="#">Sign In</a>
      <?php endif; ?>
    </li>
  </ul>
</div>

8. Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the component's constructor:

public function __construct()
{
    $this->simpleView = new View();
    $this->simpleView->setDI($this->getDI());
}

9. And finally, implement the getMenu method as follows:

public function getMenu()
{
    $this->simpleView->setViewsDir($this->view->getViewsDir());
    return $this->simpleView->render('templates/topMenu');
}

10. Open the main page of your site to ensure that your top menu is rendered.
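Because elements is registered with setShared(), every resolution returns the very same instance. As a quick sanity check, which is not part of the recipe, you could drop something like the following into any controller action of your application (the action name is made up; it is only an illustration):

public function menuCheckAction()
{
    // Both lines resolve the same shared "elements" service instance.
    $fromContainer = $this->getDI()->getShared('elements');
    $fromMagicGet  = $this->elements;   // resolved via Phalcon's magic property access

    var_dump($fromContainer === $fromMagicGet);   // bool(true)
    echo $fromContainer->getMenu();               // renders views/templates/topMenu.phtml
}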
How it works…

The main idea of our component is to generate a top menu and to display the correct menu option depending on the situation, that is, whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace. This is exactly what we have done.

However, we should take note of one important peculiarity. If you want quick access to your components, even in HTML templates like $this->elements, then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container under the name elements. Since our component inherits Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view can be written in the long form $this->getDi()->getShared('view'), but the first one is obviously more concise.

Although not necessary, for application structure purposes it is better to use a separate directory for views that are not connected directly to specific controllers and actions. In our case, the views/templates directory serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml.

When the getMenu method is used, our component renders the view topMenu.phtml and returns its HTML. In the getMenu method, we get the current path of all our views and set it for the Phalcon\Mvc\View\Simple component created earlier in the constructor. In the topMenu view we access the session component, which we placed earlier in the DI container. When generating the menu, we check whether the user is authorized or not. In the former case, we use the Sign Out menu item; in the latter case, we display the menu item with an invitation to Sign In.

Automation of routine tasks

The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by means of code generation of components as well as a project skeleton. Most of the components of your application can be created with only one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting Ready…

Before you begin to work on this recipe, you should have a DBMS configured and a web server installed and configured for handling requests from your application. You may need to configure a virtual host (this is optional) for your application, which will receive and handle requests. You should be able to open your newly created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where appname is the name of your project. You should have Git installed, too.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using a DBMS other than MySQL may vary.
How to do it…

Follow these steps to complete this recipe:

1. Clone Developer Tools into your home directory:

git clone git@github.com:phalcon/phalcon-devtools.git devtools

2. Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation completion:

$ ./phalcon.sh
Phalcon Developer Tools Installer
Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
Installing Devtools...
Working dir is: /home/user/devtools
Generating symlink...
Done. Devtools installed!

3. Run the phalcon command without arguments to see the available command list and your current Phalcon version:

$ phalcon
Phalcon DevTools (3.0.0)
Available commands:
  commands (alias of: list, enumerate)
  controller (alias of: create-controller)
  model (alias of: create-model)
  module (alias of: create-module)
  all-models (alias of: create-all-models)
  project (alias of: create-project)
  scaffold (alias of: create-scaffold)
  migration (alias of: create-migration)
  webtools (alias of: create-webtools)

4. Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

$ phalcon project myapp simple

5. Open the website which you have just created with the previous command in your browser. You should see a message about the successful installation.

6. Create a database for your project:

mysql -e 'CREATE DATABASE myapp' -u root -p

7. You will need to configure your application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured your virtual host according to your project. The value of this parameter must be / or /myapp/. As a result, your configuration file must look like this:

<?php
use Phalcon\Config;

defined('APP_PATH') || define('APP_PATH', realpath('.'));

return new Config([
    'database' => [
        'adapter'  => 'Mysql',
        'host'     => 'localhost',
        'username' => 'root',
        'password' => '',
        'dbname'   => 'myapp',
        'charset'  => 'utf8',
    ],
    'application' => [
        'controllersDir' => APP_PATH . '/app/controllers/',
        'modelsDir'      => APP_PATH . '/app/models/',
        'migrationsDir'  => APP_PATH . '/app/migrations/',
        'viewsDir'       => APP_PATH . '/app/views/',
        'pluginsDir'     => APP_PATH . '/app/plugins/',
        'libraryDir'     => APP_PATH . '/app/library/',
        'cacheDir'       => APP_PATH . '/app/cache/',
        'baseUri'        => '/myapp/',
    ]
]);

8. Now, after you have configured the database access, let's create a users table in your database and fill it with primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

9. After that, we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views.
Let's do it with just one command:

$ phalcon scaffold users

10. In your web browser, open the URL associated with your newly created User resource and try to find one of the users from our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling).

11. Finally, open your project in your file manager to see the whole project structure created with Developer Tools:

+-- app
¦   +-- cache
¦   +-- config
¦   +-- controllers
¦   +-- library
¦   +-- migrations
¦   +-- models
¦   +-- plugins
¦   +-- schemas
¦   +-- views
¦       +-- index
¦       +-- layouts
¦       +-- users
+-- public
    +-- css
    +-- files
    +-- img
    +-- js
    +-- temp

How it works…

We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool. Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent anything outstanding in terms of features, but we have saved the time of creating the application structure manually. Developer Tools did that for you! If you examine your newly created project after this command completes, you may notice that the primary application configuration has also been generated, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the help command to see all available project creation options:

$ phalcon project help

In the modern world, you can hardly find a web application which works without access to a database. Our application isn't an exception. We created a database for our application, and then we created a users table and filled it with primary data. Of course, we need to supply our application, in the app/config/config.php file, with the database access parameters as well as the database name.

After the successful database and table creation, we used the scaffold command to generate predefined code templates, particularly the Users controller with all the main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files. Phalcon Developer Tools is equipped with a good number of useful tools. To see all the available options, you can use the help command. We have taken only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as the creation of the application skeleton), we can now use that time for more exciting tasks. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving and becomes more feature-rich day by day. If you have any problems, you can always visit the project on GitHub at https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…

You can find more information on Phalcon Developer Tools installation for Windows and OS X at https://docs.phalconphp.com/en/latest/reference/tools.html.

More detailed information on web server configuration can be found at https://docs.phalconphp.com/en/latest/reference/install.html.

Creating the application structure by using code generation tools

In the following recipe, we will discuss the available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually.
Getting Ready…

Before you begin, you need to have Git installed, as well as any DBMS (for example, MySQL, PostgreSQL, SQLite, and the like), the Phalcon PHP extension (usually it is named php5-phalcon), and a PHP extension which offers database connectivity support using PDO (for example, php5-mysql, php5-pgsql, or php5-sqlite, and the like). You also need to be able to create tables in your database.

To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step. In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using a DBMS other than MySQL may vary.

How to do it…

Follow these steps to complete this recipe:

1. First you need to decide where you will install Developer Tools. Suppose you are going to place Developer Tools in your home directory.

2. Then, go to your home directory and run the following command:

git clone git@github.com:phalcon/phalcon-devtools.git

3. Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

./phalcon.sh

4. Now that you have Developer Tools installed, browse to the directory where you intend to create your project and run the following command:

phalcon project blog modules

5. If there were no errors during the previous step, create a Help controller by running the following command:

phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

6. Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction.

7. Open the database configuration of the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment. Enter the name of an existing database user and a password that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now.

8. Now, after you have configured the database access, let's create a users table in your database.
8. Create the users table and fill it with the primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

9. Next, let's create the Controller, Views, Layout, and Model by using the scaffold command:

phalcon scaffold users --ns-controllers=Blog\Frontend\Controllers --ns-models=Blog\Frontend\Models

10. Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all the actions needed for user search, editing, creating, displaying, and deleting.

11. To check that all actions work as designed, if you have a web server installed and configured for this recipe, you can go to http://{your-server}/users/index. In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the user layout is created in the apps/frontend/views/layouts folder.

12. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown.

How it works…
In the world of programming, code generation is designed to lessen the burden of manually creating repetitive code by using predefined code templates. The Phalcon framework provides excellent code generation tools, which come with Phalcon Developer Tools.

We start with the installation of Phalcon Developer Tools. Note that if you already have Developer Tools installed, you should skip the steps involving its installation. Next, we generate a fully functional MVC application that implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. To that end, we use only one command. It's really great, isn't it?

Our next step is creating a controller. In our example, we use HelpController, which simply demonstrates the approach to creating controllers. Next, we create the users table in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, called Scaffold. Using only one command in the Terminal, we generate the UsersController controller with all the necessary actions and appropriate views. Besides this, we get the Users model and the required layout. If you have a web server configured, you can check out the work of Developer Tools at http://{your-server}/users/index.

When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on this data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them. A rough sketch of the kind of model class the scaffolder produces is shown below.
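Here is that sketch: a rough approximation of the model class that phalcon scaffold produces for the users table. It is illustrative only; the generated apps/frontend/models/Users.php may contain additional metadata, but the general shape is a class whose public properties mirror the table columns:

<?php

namespace Blog\Frontend\Models;

use Phalcon\Mvc\Model;

class Users extends Model
{
    // Each public property corresponds to a column of the users table,
    // which is how the scaffolder wires up basic CRUD without extra code.
    public $id;
    public $email;
    public $first_name;
    public $last_name;
    public $created_at;
}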
And all of this was made possible with one command. We have discussed only some of the features of code generation; Phalcon Developer Tools actually has many more. For help on the available commands, you can use the phalcon command (without arguments).

There's more…
For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on Scaffold, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary
In this article, you learned about the automation of routine tasks and creating the application structure.

Resources for Article:
Further resources on this subject:
Using Phalcon Models, Views, and Controllers [Article]
Phalcon's ORM [Article]
Planning and Structuring Your Test-Driven iOS App [Article]

Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button in your application. (For more resources related to this topic, see here.) Introduction You know how to start your adventure by developing universal applications for smartphones, tablets, and desktops running on the Windows 10 operating system. In the next step, it is crucial to get to know how to design particular pages within the application to provide the user with a convenient user interface that works smoothly on screens with various resolutions. Fortunately, designing the user interface is really simple using the XAML language, as well as Microsoft Visual Studio Community 2015. A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, one can easily arrange controls in various variants, either vertically, horizontally, or in a grid. This is not all; developers could prepare their own controls as well. Such controls could be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops. You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control as well as how to programmatically handle controls. Thus, some controls could either change their appearance, or the new controls could be added to the page when some specific conditions are met. Another important question is how to provide the user with a consistent user interface within the whole application. While developing solutions for the Windows 10 operating system, such a task could be easily accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that could be applied to either particular controls or to all the controls of a given type. At the end, you could ask yourself a simple question, "Why should I restrict access to my new awesome application only to people who know a particular language in which the user interface is prepared?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization will use additional resource files, so translations could be prepared not by a developer, but by a specialist who knows the given language well. Adding a button When developing applications, you can use a set of predefined controls among which a button exists. It allows you to handle the event of pressing the button by a user. Of course, the appearance of the button could be easily adjusted, for instance, by choosing a proper background or border, as you could see in this recipe. The button can present textual content; however, it can also be adjusted to the user's needs, for instance, by choosing a proper color or font size. This is not all because the content shown on the button could not be only textual. For instance, you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to the small icon that visually informs about the operation. Such modifications are presented in the following part of this recipe as well. Getting ready To step through this recipe, you only need the automatically generated project. 
How to do it… Add a button to the page by modifying the content of the MainPage.xaml file, as follows: <Page (...)> <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"> <Button Content="Click me!" Foreground="#0a0a0a" FontWeight="SemiBold" FontSize="20" FontStyle="Italic" Background="LightBlue" BorderBrush="RoyalBlue" BorderThickness="5" Padding="20 10" VerticalAlignment="Center" HorizontalAlignment="Center" /> </Grid> </Page> Generate a method for handling the event of clicking the button by pressing the button (either in a graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows: private void Button_Click(object sender, RoutedEventArgs e) { } How it works… In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties that are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness. The first property chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand side and 10 pixels at the top and bottom. The button presents the Click me! text defined as a value of the Content property. The text is shown in the color #0a0a0a with semi-bold italic font with size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively. If you run the application on a local machine, you should see the following result: It is worth mentioning that the IDE supports a live preview of the designed page. So, you can modify the values of particular properties and have real-time feedback regarding the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see an impact of each introduced change. There's more… As already mentioned, even the Button control has many advanced features. For example, you could place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now. First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows: <Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center"> <Image Source="/Assets/Image.jpg" /> </Button> Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window, shown as follows: In the Add Existing Item window, choose the Image.jpg file and click on the Add button. As you could see, the previous example uses the Image control. In this recipe, no more information about such a control is presented because it is the topic of one of the next recipes, namely Adding an image. If you run the application now, you should see a result similar to the following: The second additional example presents a button with a text over an image. To do so, let's modify the XAML code, as follows: <Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center"> <Grid> <Image Source="/Assets/Image.jpg" /> <TextBlock Text="Click me!" 
Foreground="White" FontWeight="Bold" FontSize="28" VerticalAlignment="Bottom" HorizontalAlignment="Center" Margin="10" /> </Grid> </Button> You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of such controls is not explained in the current recipe. If you run the application now, you should see a result similar to the following: As the last example, you will see a button that contains both a textual label and an icon. Such a solution could be accomplished using the StackPanel, TextBlock, and Image controls, as you could see in the following code snippet: <Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20"> <StackPanel Orientation="Horizontal"> <Image Source="/Assets/Icon.png" MaxHeight="32" /> <TextBlock Text="Accept" Foreground="White" FontSize="28" Margin="20 0 0 0" /> </StackPanel> </Button> Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe. The result should be similar to the following: Resources for Article: Further resources on this subject: Deployment and DevOps [article] Introduction to C# and .NET [article] Customizing Kernel and Boot Sequence [article]

How to create a Breakout game with Godot Engine – Part 2

George Marques
23 Nov 2016
8 min read
In part one of this article you learned how to set up a project and create a basic scene with input and scripting. By now you should grasp the basic concepts of the Godot Engine, such as nodes and scenes. Here we're going to complete the game up to a playable demo. Game scene Let's create new scene to hold the game itself. Click on the menu Scene > New Scene and add a Node2D as its root. You may feel tempted to resize this node to occupy the scene, but you shouldn't. If you resize it you'll be changing the scale and position, which will be reflected on child nodes. We want the position and scale both to be (0, 0). Rename the root node to Game and save the scene as game.tscn. Go to the Project Settings, and in the Application section, set this as the main_scene option. This will make the Game scene run when the game starts. Drag the paddle.tscn file from the FileSystem dock and drop it over the root Game node. This will create a new instance of the paddle scene. It's also possible to click on the chain icon on the Scene dock and select a scene to instance. You can then move the instanced paddle to the bottom of the screen where it should stay in the game (use the guides in the editor as reference). Play the project and you can then move the paddle with your keyboard. If you find the movement too slow or too fast, you can select the Paddle node and adjust the Speed value on the Inspector because it's an exported script variable. This is a great way to tweak the gameplay without touching the code. It also allows you to put multiple paddles in the game, each with its own speed if you wish. To make this better, you can click the Debug Options button (the last one on the top center bar) and activate Sync Scene Changes. This will reflect the changes on the editor in the running game, so you can set the speed without having to stop and play again. The ball Let's create a moving object to interact with. Make a new scene and add a RigidBody2D as the root. Rename it to Ball and save the scene as ball.tscn. The rigid body can be moved by the physics engine and interact with other physical objects, like the static body of the paddle. Add a Sprite as a child node and set the following image as its texture: Ball Now add a CollisionShape2D as a child of the Ball. Set its shape to new CircleShape2D and adjust the radius to cover the image. We need to adjust some of the Ball properties to behave in an appropriate way for this game. Set the Mode property to Character to avoid rotation. Set the Gravity Scale to 0 so it doesn't fall. Set the Friction to 0 and the Damp Override > Linear to 0 in to avoid the loss of momentum. Finally, set the Bounce property to 1, as we want the ball to completely bounce when touching the paddle. With that done, add the following script to the ball, so it starts moving when the scene plays: extends RigidBody2D # Export the ball speed to be changed in the editor export var ball_speed = 150.0 func _ready(): # Apply the initial impulse to the ball so it starts moving # It uses a bit of vector math to make the speed consistent apply_impulse(Vector2(), Vector2(1, 1).normalized() * ball_speed) Walls Going back to the Game scene, instance the ball as a child of the root node. We're going to add the walls so the ball doesn't get lost in the world. Add a Node2D as a child of the root and rename it to Walls. This will be the root for the wall nodes, to keep things organized. 
As a child of that, add four StaticBody2D nodes, each with its own rectangular collision shape to cover the borders of the screen. You'll end up with something like the following: Walls By now you can play the game a little bit and use the paddle to deflect the ball or leave it to bounce on the bottom wall. Bricks The last part of this puzzle left is the bricks. Create a new scene, add a StaticBody2D as the root and rename it to Brick. Save the scene as brick.tscn. Add a Sprite as its child and set the texture to the following image: Brick Add a CollisionShape2D and set its shape to rectangle, making it cover the whole image. Now add the following script to the root to make a little bit of magic: # Tool keyword makes the script run in editor # In this case you can see the color change in the editor itself tool extends StaticBody2D # Export the color variable and a setter function to pass it to the sprite export (Color) var brick_color = Color(1, 1, 1) setget set_color func _ready(): # Set the color when first entering the tree set_color(brick_color) # This is a setter function and will be called whenever the brick_color variable is set func set_color(color): brick_color = color # We make sure the node is inside the tree otherwise it cannot access its children if is_inside_tree(): # Change the modulate property of the sprite to change its color get_node("Sprite").set_modulate(color) This will allow to set the color of the brick using the Inspector, removing the need to make a scene for each brick color. To make it easier to see, you can click the eye icon besides the CollisionShape2D to hide it. Hide CollisionShape2D The last thing to be done is to make the brick disappear when touched by the ball. Using the Node dock, add the group brick to the root Brick node. Then go back to the Ball scene and, again using the Node dock, but this time in the Signals section, double-click the body_enter signal. Click the Connect button with the default values. This will open the script editor with a new function. Replace it with this: func _on_Ball_body_enter( body ): # If the body just touched in member of the "brick" group if body.is_in_group("brick"): # Mark it for deletion in the next idle frame body.queue_free() Using the Inspector, change the Ball node to enable the Contact Monitor property and increase the Contacts Reported to 1. This will make sure the signal is sent when the ball touches something. Level Make a new scene for the level. Add a Node2D as the root, rename it to Level 1 and save the scene as level1.tscn. Now instance a brick in the scene. Position it anywhere, set a color using the Inspector and then duplicate it. You can repeat this process to make the level look the way you want. Using the Edit menu you can set a grid with snapping to make it easier to position the bricks. Then go back to the Game scene, instance the level there as a child of the root. Play the game and you will finally see the ball bouncing around and destroying the bricks it touches. Breakout Game Going further This is just a basic tutorial showing some of the fundamental aspects of the Godot Engine. The Node and Scene system, physics bodies, scripting, signals, and groups are very useful concepts but not all that Godot has to offer. Once you get acquainted with those, it's easy to learn other functions of the engine. The finished game in this tutorial is just bare bones. There are many things you can do, such as adding a start menu, progressing the levels as they are finished and detecting when the player loses. 
Thankfully, Godot makes all those things very easy and it should not be much effort to make this a complete game. Author: George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelancer programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors of the Godot Engine, helping solving bugs and adding new features to the software, while also giving solutions to the community for the questions they have.

Android Game Development with Unity3D

Packt
23 Nov 2016
8 min read
In this article by Wajahat Karim, author of the book Mastering Android Game Development with Unity, we will be creating addictive fun games by using a very famous game engine called Unity3D. In this article, we will cover the following topics: Game engines and Unity3D Features of Unity3D Basics of Unity game development (For more resources related to this topic, see here.) Game engines and Unity3D A game engine is a software framework designed for the creation and development of video games. Many tools and frameworks are available for game designers and developers to code a game quickly and easily without building from the ground up. As time passed by, game engines became more mature and easy for developers, with feature-rich environments. Starting from native code frameworks for Android such as AndEngine, Cocos2d-x, LibGDX, and so on, game engines started providing clean user interfaces and drag-drop functionalities to make game development easier for developers. These engines include lots of tools which are different in user interface, features, porting, and many more things; but all have one thing in common— they create video games in the end. Unity (http://unity3d.com) is a cross-platform game engine developed by Unity Technologies. It made its first public announcement at Apple Worldwide Developers Conference in 2005, and supported only game development for Mac OS, but since then it has been extended to target more than 15 platforms for desktop, mobile, and consoles. It is notable for its one-click ability to port games on multiple platforms including BlackBerry 10, Windows Phone 8, Windows, OS X, Linux, Android, iOS, Unity Web Player (including Facebook), Adobe Flash, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox 360, Xbox One, Wii U, and Wii. Unity has a fantastic interface, which lets the developers manage the project really efficiently from the word go. It has a nice drag-drop functionality with connecting behavior scripts written in either C#, JavaScript (or UnityScript), or Boo to define the custom logic and functionality with visual objects quite easily. Unity has been proven quite easy to learn for new developers who are just starting out with game development. Now more largely studios have also started using , and that too for good reasons. Unity is one of those engines that provide support for both 2D and 3D games without putting developers in trouble or confusing them. Due to its popularity all over the game development industry, it has a vast collection of online tutorials, great documentation, and a very helping community of developers. Features of Unity3D Unity is a game development ecosystem comprising a powerful rendering engine, intuitive tools, rapid workflows for 2D and 3D games, all-in-one deployment support, and thousands of already created free and paid assets with a helping developer's community. 
The feature list includes the following: Easy workflow allowing developers to rapidly assemble scenes in an intuitive editor workspace Quality game creation like AAA visuals, high-definition audio, full-throttle action without any glitches on screen Dedicated tools for both 2D and 3D game creation with shared conventions to make it easy for developers A very unique and flexible animation system to create natural animations with very less time-consuming efforts Smooth frame rate with reliable performance on all the platforms where developers publish their games One-click ability to deploy to all platforms from desktops, browsers, and mobiles to consoles within minutes Reduces time of development by using already created reusable assets available at the huge asset store Basics of Unity game development Before delving into details of Unity3D and game development concepts, let's have a look at some of the very basics of Unity 5.0. We will go through the Unity interface, menu items, using assets, creating scenes, and publishing builds. Unity editor interface When you launch Unity 5.0 for the first time, you will be presented with an editor with a few panels on the left, right, and bottom of the screen. The following screenshot shows the editor interface when it's first launched: Fig 1.7 Unity 5 editor interface at first launch First of all, take your time to look over the editor, and become a little familiar with it. The Unity editor is divided into different small panels and views, which can be dragged to customize the workspace according to the developer/designer's needs. Unity 5 comes with some prebuilt workspace layout templates, which can be selected from the Layout drop-down menu at top-right corner of the screen, as shown in the following screenshot: Fig 1.8 Unity 5 editor layouts The layout currently displayed in the editor shown in the preceding screenshot is the Default layout. You can select these layouts, and see how the editor's interface changes, and how the different panels are placed at different positions in each layout. This book uses the 2 by 3 workspace layout for the whole game. The following figure shows the 2 by 3 workspace with the names of the views and panels highlighted: Fig 1.9 Unity 5 2 by 3 Layout with views and panel names As you can see in the preceding figure, the Unity editor contains different views and panels. Every panel and view have a specific purpose, which is described as follows: Scene view The Scene view is the whole stage for the game development, and it contains every asset in the game from a tiny point to any heavy 3D model. The Scene view is used to select and position environments, characters, enemies, the player, camera, and all other objects which can be placed on the stage for the game. All those objects which can be placed and shown in the game are called game objects. The Scene view allows developers to manipulate game objects such as selecting, scaling, rotating, deleting, moving, and so on. It also provides some controls such as navigation and transformation.  In simple words, the Scene view is the interactive sandbox for developers and designers. Game view The Game view is the final representation of how your game will look when published and deployed on the target devices, and it is rendered from the cameras of the scene. This view is connected to the play mode navigation bar in the center at the top of the whole Unity workspace. The play mode navigation bar is shown in the following: figure. 
Fig 1.14 Play mode bar When the game is played in the editor, this control bar gets changed to blue color. A very interesting feature of Unity is that it allows developers to pause the game and code while running, and the developers can see and change the properties, transforms, and much more at runtime, without recompiling the whole game, for quick workflow. Hierarchy view The Hierarchy view is the first point to select or handle any game object available in the scene. This contains every game object in the current scene. It is tree-type structure, which allows developers to utilize the parent and child concept on the game objects easily. The following figure shows a simple Hierarchy view: Fig 1.16 Hierarchy view Project browser panel This panel looks like a view, but it is called the Project browser panel. It is an embedded files directory in Unity, and contains all the files and folders included in the game project. The following figure shows a simple Project browser panel: Fig 1.17 Project browser panel The left side of the panel shows a hierarchal directory, while the rest of the panel shows the files, or, as they are called, assets in Unity. Unity represents these files with different icons to differentiate these according to their file types. These files can be sprite images, textures, model files, sounds, and so on. You can search any specific file by typing in the search text box. On the right side of search box, there are button controls for further filters such as animation files, audio clip files, and so on. An interesting thing about the Project browser panel is that if any file is not available in the Assets, then Unity starts looking for it on the Unity Asset Store, and presents you with the available free and paid assets. Inspector panel This is the most important panel for development in Unity. Unity structures the whole game in the form of game objects and assets. These game objects further contain components such as transform, colliders, scripts, meshes, and so on. Unity lets developers manage these components of each game object through the Inspector panel. The following figure shows a simple Inspector panel of a game object: Fig 1.18 Inspector panel These components vary in types, for example, Physics, Mesh, Effects, Audio, UI, and so on. These components can be added in any object by selecting it from the Component menu. The following figure shows the Component menu: Fig 1.19 Components menu Summary In this article, you learned about game engines, such as Unity3D, which is used to create games for Android devices. We also discussed the important features of Unity along with the basics of its development environment. Resources for Article: Further resources on this subject: The Game World [article] Customizing the Player Character [article] Animation features in Unity 5 [article]

Client-Side Validation with the jQuery Validation Plugin

Jabran Rafique
23 Nov 2016
9 min read
Form validation is a critical procedure in data collection. A correct validation in place makes sure that forms collect correct and expected data. Validation must be done on the server side and optionally on the client side. Server-side validation is robust and secure because a user cannot access and modify its behaviour. However, client-side validation can be tampered with easily. Applications that rely completely on client-side validation and bypass it on the server side are more open to security threats and data exploits. Client-side validation is about giving users a better experience. This is so that users don't have to go through everything on the page or submit a whole form to find out that they have one or even a few entries incorrect, but rather are alerted instantly so they can correct mistakes in place. Modern browsers enforce client-side validation by default for form fields in an HTML5 document with certain attributes (that is, required). There are cross-browser limitations on how these validation messages can be styled, positioned, and labeled. JavaScript plays a vital role in client-side form validation. There are many different ways to validate a form using JavaScript. JavaScript libraries such as jQuery also provide a number of ways to validate a form. If a project is using any such library already, it would be easier to utilize the library's form validation methods. jQuery Validation is a jQuery plugin that makes it convenient to validate an HTML form. We will take a quick look at it below. Installation There are a number of ways to install this library. Download directly from GitHub Use CDN hosted files Using bower $ bower install jquery-validation --save For simplicity of this tutorial's demo we will use CDN hosted files. Usage Now that we have installed jQuery Validation, we start by adding it to our HTML document. Here is the HTML for our demo app: <!DOCTYPE html> <html> <head> <title>Learn jQuery Validation</title> </head> <body> <div class="container"> <h1>Learn jQuery Validation</h1> <form method="post"> <div> <label for="first-name">First Name:</label> <input type="text" id="first-name" name="first_name"> </div> <div> <label for="last-name">Last Name:</label> <input type="text" id="last-name" name="last_name"> </div> <div> <label for="email-address">Email:</label> <input type="text" id="email-address" name="email_address"> </div> <div> <button type="submit" id="submit-cta">Submit</button> </div> </form> </div> <script src="https://code.jquery.com/jquery-2.2.4.min.js" integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" crossorigin="anonymous"></script> <script src="//cdnjs.cloudflare.com/ajax/libs/jquery-validate/1.15.0/jquery.validate.min.js"></script> </body> </html> All jQuery Validation plugin methods are instantly available after the page is loaded. In the following example we add the required attribute to our form fields, which will be used by the jQuery Validation plugin for basic validation: ... <form method="post"> <div> <label for="first-name">First Name:</label> <input type="text" id="first-name" name="first_name" required> </div> <div> <label for="last-name">Last Name:</label> <input type="text" id="last-name" name="last_name" required> </div> <div> <label for="email-address">Email:</label> <input type="text" id="email-address" name="email_address" required> </div> <div> <button type="submit" id="submit-cta">Submit</button> </div> </form> ... This simply enables HTML5 browser validation in most modern browsers. 
Adding the following JavaScript line will activate the jQuery Validation plugin. $('form').validate(); validate() is a special method exposed by the jQuery Validation plugin and it can help customise our validation further. We will discuss this method in more detail later in this article. Here it is in action: See the Pen jQuery Validation – Part I by Jabran Rafique (@jabranr) on CodePen. Submitting the form will validate the form and for empty fields it will return an error message. The generic error messages (This field is required) you see are set as default messages by the plugin. These can easily be changed by adding a data-msg-required attribute to the form field element, as shown in the following example: ... <form method="post"> <div> <label for="first-name">First Name:</label> <input type="text" id="first-name" name="first_name" required data-msg-required="First name is required."> </div> <div> <label for="last-name">Last Name:</label> <input type="text" id="last-name" name="last_name" required data-msg-required="Last name is required."> </div> <div> <label for="email-address">Email:</label> <input type="text" id="email-address" name="email_address" required data-msg-required="Email is required."> </div> <div> <button type="submit" id="submit-cta">Submit</button> </div> </form> ... Here it is in action: See the Pen jQuery Validation – Part II by Jabran Rafique (@jabranr) on CodePen. Similarly we can add different types of validation to our form fields using attributes such as minlength, maxlength, and so on, as shown in the following examples: ... <div> <label for="phone">Phone:</label> <!-- With default error messages --> <input type="text" id="phone" name="phone" minlength="11" maxlength="15"> <!-- With custom error messages --> <input type="text" id="phone" name="phone" minlength="11" maxlength="15" data-msg-min="Enter minimum of 11 digits." data-msg-max="Enter maximum of 15 digits."> </div> ... The alternative way of setting rules and messages is to pass these settings as arguments to the validate() method. Here are the above examples using the validate() method. Here is the HTML for the form with no custom validation messages and optionally no validation attributes: ... <form method="post"> <div> <label for="first-name">First Name:</label> <input type="text" id="first-name" name="first_name"> </div> <div> <label for="last-name">Last Name:</label> <input type="text" id="last-name" name="last_name"> </div> <div> <label for="email-address">Email:</label> <input type="text" id="email-address" name="email_address"> </div> <div> <button type="submit" id="submit-cta">Submit</button> </div> </form> ... Here is the JavaScript that enables validation and sets custom validation messages for this form: $('form').validate({ rules: { 'first_name': 'required', 'last_name': 'required', 'email_address': { required: true, email: true } }, messages: { 'first_name': 'First name is required.', 'last_name': 'Last name is required.', 'email_address': { required: 'Email is required.', email: 'A valid email is required.' } } }); As you may have noticed, there is a new validation constraint in the above code: email: true. This enables checks for a valid email address. There are many more built-in constraints in the jQuery Validation plugin. You can find those in the official documentation. validate() has other properties that can be updated to customise the behavior of the jQuery Validation plugin completely. 
Some of those are the following:

errorClass: Set a custom CSS class for the error message element
validClass: Set a custom CSS class for a validated element
errorPlacement(): Define where to put error messages
highlight(): Define a method to highlight error/validated states
unhighlight(): Define a method to unhighlight error/validated states

The default plugin methods can also be overridden for custom implementations using the Validator object.

Additional Methods
Sooner or later in every other project, there arises a requirement for some advanced validation. Let's think of date of birth fields. We want to validate all three fields as one, and it is not possible to do so using the default methods of this plugin. There are optional additional methods to include if such a requirement appears. The additional methods can either be included as a script using a CDN link, or each of them can be included individually when using the bower installation.

Contribute
Contributions to any open source project make it robust and more useful with bugfixes and new features. Just like any other open source project, the jQuery Validation plugin also welcomes contributions. To contribute to the jQuery Validation plugin project, just head over to the GitHub project and fork the repository to work on an existing issue, or start a new one. Don't forget to read the contribution guidelines before starting. All of the demos in this tutorial can be found at CodePen as a Collection. Hopefully this quick getting-started guide will make it easier to use the jQuery Validation plugin in your next project for better form validation.

Author
Jabran Rafique is a London-based web engineer. He currently works as a front-end web developer at Rated People. He has a master's in computer science from Staffordshire University and more than 6 years of professional experience in web systems. He has also served as a regional lead and advocate at Google Map Maker since 2008, where he contributed to building digital maps in order to make them available to millions of people worldwide, as well as organizing and speaking at international events. He writes on his website/blog about different things, and shares code at GitHub and thoughts on Twitter.

Debugging in Vulkan

Packt
23 Nov 2016
16 min read
In this article by Parminder Singh, author of Learning Vulkan, we learn Vulkan debugging in order to avoid unpleasant mistakes. Vulkan allows you to perform debugging through validation layers. These validation layer checks are optional and can be injected into the system at runtime. Traditional graphics APIs perform validation right up front using some sort of error-checking mechanism, which is a mandatory part of the pipeline. This is indeed useful in the development phase, but actually, it is an overhead during the release stage because the validation bugs might have already been fixed at the development phase itself. Such compulsory checks cause the CPU to spend a significant amount of time in error checking. On the other hand, Vulkan is designed to offer maximum performance, where the optional validation process and debugging model play a vital role. Vulkan assumes the application has done its homework using the validation and debugging capabilities available at the development stage, and it can be trusted flawlessly at the release stage. In this article, we will learn the validation and debugging process of a Vulkan application. We will cover the following topics: Peeking into Vulkan debugging Understanding LunarG validation layers and their features Implementing debugging in Vulkan (For more resources related to this topic, see here.) Peeking into Vulkan debugging Vulkan debugging validates the application implementation. It not only surfaces the errors, but also other validations, such as proper API usage. It does so by verifying each parameter passed to it, warning about the potentially incorrect and dangerous API practices in use and reporting any performance-related warnings when the API is not used optimally. By default, debugging is disabled, and it's the application's responsibility to enable it. Debugging works only for those layers that are explicitly enabled at the instance level at the time of the instance creation (VkInstance). When debugging is enabled, it inserts itself into the call chain for the Vulkan commands the layer is interested in. For each command, the debugging visits all the enabled layers and validates them for any potential error, warning, debugging information, and so on. Debugging in Vulkan is simple. The following is an overview that describes the steps required to enable it in an application: Enable the debugging capabilities by adding the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. Define the set of the validation layers that are intended for debugging. For example, we are interested in the following layers at the instance and device level. For more information about these layer functionalities, refer to the next section: VK_LAYER_GOOGLE_unique_objects VK_LAYER_LUNARG_api_dump VK_LAYER_LUNARG_core_validation VK_LAYER_LUNARG_image VK_LAYER_LUNARG_object_tracker VK_LAYER_LUNARG_parameter_validation VK_LAYER_LUNARG_swapchain VK_LAYER_GOOGLE_threading The Vulkan debugging APIs are not part of the core command, which can be statically loaded by the loader. These are available in the form of extension APIs that can be retrieved at runtime and dynamically linked to the predefined function pointers. So, as the next step, the debug extension APIs vkCreateDebugReportCallbackEXT and vkDestroyDebugReportCallbackEXT are queried and linked dynamically. These are used for the creation and destruction of the debug report. 
Once the function pointers for the debug report are retrieved successfully, the former API (vkCreateDebugReportCallbackEXT) creates the debug report object. Vulkan returns the debug reports in a user-defined callback, which has to be linked to this API. Destroy the debug report object when debugging is no more required. Understanding LunarG validation layers and their features The LunarG Vulkan SDK supports the following layers for debugging and validation purposes. In the following points, we have described some of the layers that will help you understand the offered functionalities: VK_LAYER_GOOGLE_unique_objects: Non-dispatchable handles are not required to be unique; a driver may return the same handle for multiple objects that it considers equivalent. This behavior makes the tracking of the object difficult because it is not clear which object to reference at the time of deletion. This layer packs the Vulkan objects into a unique identifier at the time of creation and unpacks them when the application uses it. This ensures there is proper object lifetime tracking at the time of validation. As per LunarG's recommendation, this layer must be last in the chain of the validation layer, making it closer to the display driver. VK_LAYER_LUNARG_api_dump: This layer is helpful in knowing the parameter values passed to the Vulkan APIs. It prints all the data structure parameters along with their values. VK_LAYER_LUNARG_core_validation: This is used for validating and printing important pieces of information from the descriptor set, pipeline state, dynamic state, and so on. This layer tracks and validates the GPU memory, object binding, and command buffers. Also, it validates the graphics and compute pipelines. VK_LAYER_LUNARG_image: This layer can be used for validating texture formats, rendering target formats, and so on. For example, it verifies whether the requested format is supported on the device. It validates whether the image view creation parameters are reasonable for the image that the view is being created for. VK_LAYER_LUNARG_object_tracker: This keeps track of object creation along with its use and destruction, which is helpful in avoiding memory leaks. It also validates that the referenced object is properly created and is presently valid. VK_LAYER_LUNARG_parameter_validation: This validation layer ensures that all the parameters passed to the API are correct as per the specification and are up to the required expectation. It checks whether the value of a parameter is consistent and within the valid usage criteria defined in the Vulkan specification. Also, it checks whether the type field of a Vulkan control structure contains the same value that is expected for a structure of that type. VK_LAYER_LUNARG_swapchain: This layer validates the use of the WSI swapchain extensions. For example, it checks whether the WSI extension is available before its functions could be used. Also, it validates that an image index is within the number of images in a swapchain. VK_LAYER_GOOGLE_threading: This is helpful in the context of thread safety. It checks the validity of multithreaded API usage. This layer ensures the simultaneous use of objects using calls running under multiple threads. It reports threading rule violations and enforces a mutex for such calls. Also, it allows an application to continue running without actually crashing, despite the reported threading problem. VK_LAYER_LUNARG_standard_validation: This enables all the standard layers in the correct order. 
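Which of these layers are actually available depends on the LunarG SDK and driver installed on your machine, so it is worth listing what the Vulkan loader exposes before requesting any of them. The following is a small sketch of that check; the function name is our own and not part of the book's code:

#include <vulkan/vulkan.h>
#include <vector>
#include <iostream>

void printAvailableInstanceLayers()
{
    // The first call queries the count, the second call fills the list.
    uint32_t layerCount = 0;
    vkEnumerateInstanceLayerProperties(&layerCount, nullptr);

    std::vector<VkLayerProperties> layers(layerCount);
    vkEnumerateInstanceLayerProperties(&layerCount, layers.data());

    for (const VkLayerProperties& layer : layers) {
        std::cout << layer.layerName << " : " << layer.description << std::endl;
    }
}

Any layer name passed to vkCreateInstance that does not show up in this list will cause instance creation to fail with VK_ERROR_LAYER_NOT_PRESENT, which is why the implementation later in this article validates the requested layer names first.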
For more information on validation layers, visit LunarG's official website. Check out https://vulkan.lunarg.com/doc/sdk and specifically refer to the Validation layer details section for more details. Implementing debugging in Vulkan Since debugging is exposed by validation layers, most of the core implementation of the debugging will be done under the VulkanLayerAndExtension class (VulkanLED.h/.cpp). In this section, we will learn about the implementation that will help us enable the debugging process in Vulkan: The Vulkan debug facility is not part of the default core functionalities. Therefore, in order to enable debugging and access the report callback, we need to add the necessary extensions and layers: Extension: Add the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension to the instance level. This will help in exposing the Vulkan debug APIs to the application: vector<const char *> instanceExtensionNames = { . . . . // other extensios VK_EXT_DEBUG_REPORT_EXTENSION_NAME, }; Layer: Define the following layers at the instance level to allow debugging at these layers: vector<const char *> layerNames = { "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation", "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker", "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation", "VK_LAYER_LUNARG_swapchain", “VK_LAYER_GOOGLE_unique_objects” }; In addition to the enabled validation layers, the LunarG SDK provides a special layer called VK_LAYER_LUNARG_standard_validation. This enables basic validation in the correct order as mentioned here. Also, this built-in metadata layer loads a standard set of validation layers in the optimal order. It is a good choice if you are not very specific when it comes to a layer. a) VK_LAYER_GOOGLE_threading b) VK_LAYER_LUNARG_parameter_validation c) VK_LAYER_LUNARG_object_tracker d) VK_LAYER_LUNARG_image e) VK_LAYER_LUNARG_core_validation f) VK_LAYER_LUNARG_swapchain g) VK_LAYER_GOOGLE_unique_objects These layers are then supplied to the vkCreateInstance() API to enable them: VulkanApplication* appObj = VulkanApplication::GetInstance(); appObj->createVulkanInstance(layerNames, instanceExtensionNames, title); // VulkanInstance::createInstance() VkResult VulkanInstance::createInstance(vector<const char *>& layers, std::vector<const char *>& extensionNames, char const*const appName) { . . . VkInstanceCreateInfo instInfo = {}; // Specify the list of layer name to be enabled. instInfo.enabledLayerCount = layers.size(); instInfo.ppEnabledLayerNames = layers.data(); // Specify the list of extensions to // be used in the application. instInfo.enabledExtensionCount = extensionNames.size(); instInfo.ppEnabledExtensionNames = extensionNames.data(); . . . vkCreateInstance(&instInfo, NULL, &instance); } The validation layer is very specific to the vendors and SDK version. Therefore, it is advisable to first check whether the layers are supported by the underlying implementation before passing them to the vkCreateInstance() API. This way, the application remains portable throughout when ran against another driver implementation. The areLayersSupported() is a user-defined utility function that inspects the incoming layer names against system-supported layers. 
The unsupported layers are informed to the application and removed from the layer names before feeding them into the system: // VulkanLED.cpp VkBool32 VulkanLayerAndExtension::areLayersSupported (vector<const char *> &layerNames) { uint32_t checkCount = layerNames.size(); uint32_t layerCount = layerPropertyList.size(); std::vector<const char*> unsupportLayerNames; for (uint32_t i = 0; i < checkCount; i++) { VkBool32 isSupported = 0; for (uint32_t j = 0; j < layerCount; j++) { if (!strcmp(layerNames[i], layerPropertyList[j]. properties.layerName)) { isSupported = 1; } } if (!isSupported) { std::cout << "No Layer support found, removed” “ from layer: "<< layerNames[i] << endl; unsupportLayerNames.push_back(layerNames[i]); } else { cout << "Layer supported: " << layerNames[i] << endl; } } for (auto i : unsupportLayerNames) { auto it = std::find(layerNames.begin(), layerNames.end(), i); if (it != layerNames.end()) layerNames.erase(it); } return true; } The debug report is created using the vkCreateDebugReportCallbackEXT API. This API is not a part of Vulkan's core commands; therefore, the loader is unable to link it statically. If you try to access it in the following manner, you will get an undefined symbol reference error: vkCreateDebugReportCallbackEXT(instance, NULL, NULL, NULL); All the debug-related APIs need to be queried using the vkGetInstanceProcAddr() API and linked dynamically. The retrieved API reference is stored in a corresponding function pointer called PFN_vkCreateDebugReportCallbackEXT. The VulkanLayerAndExtension::createDebugReportCallback() function retrieves the create and destroy debug APIs, as shown in the following implementation: /********* VulkanLED.h *********/ // Declaration of the create and destroy function pointers PFN_vkCreateDebugReportCallbackEXT dbgCreateDebugReportCallback; PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback; /********* VulkanLED.cpp *********/ VulkanLayerAndExtension::createDebugReportCallback(){ . . . // Get vkCreateDebugReportCallbackEXT API dbgCreateDebugReportCallback=(PFN_vkCreateDebugReportCallbackEXT) vkGetInstanceProcAddr(*instance,"vkCreateDebugReportCallbackEXT"); if (!dbgCreateDebugReportCallback) { std::cout << "Error: GetInstanceProcAddr unable to locate vkCreateDebugReportCallbackEXT function.n"; return VK_ERROR_INITIALIZATION_FAILED; } // Get vkDestroyDebugReportCallbackEXT API dbgDestroyDebugReportCallback= (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr (*instance, "vkDestroyDebugReportCallbackEXT"); if (!dbgDestroyDebugReportCallback) { std::cout << "Error: GetInstanceProcAddr unable to locate vkDestroyDebugReportCallbackEXT function.n"; return VK_ERROR_INITIALIZATION_FAILED; } . . . } The vkGetInstanceProcAddr() API obtains the instance-level extensions dynamically; these extensions are not exposed statically on a platform and need to be linked through this API dynamically. The following is the signature of this API: PFN_vkVoidFunction vkGetInstanceProcAddr( VkInstance instance, const char* name); The following table describes the API fields: Parameters Description instance This is a VkInstance variable. If this variable is NULL, then the name must be one of these: vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties, or vkCreateInstance. name This is the name of the API that needs to be queried for dynamic linking.   Using the dbgCreateDebugReportCallback()function pointer, create the debugging report object and store the handle in debugReportCallback. 
The second parameter of the API accepts a VkDebugReportCallbackCreateInfoEXT control structure. This data structure defines the behavior of the debugging, such as what the debug information should include: errors, general warnings, information, performance-related warnings, debug information, and so on. In addition, it also takes the reference of a user-defined function (debugFunction); this helps filter and print the debugging information once it is retrieved from the system. Here's the syntax for creating the debugging report: struct VkDebugReportCallbackCreateInfoEXT { VkStructureType type; const void* next; VkDebugReportFlagsEXT flags; PFN_vkDebugReportCallbackEXT fnCallback; void* userData; }; The following describes the purpose of the mentioned API fields: type: This is the type information of this control structure. It must be specified as VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT. flags: This defines the kind of debugging information to be retrieved when debugging is on; the possible flag values are listed below. fnCallback: This field refers to the function that filters and displays the debug messages. The flags field accepts a bitwise combination of the following VkDebugReportFlagBitsEXT values: VK_DEBUG_REPORT_INFORMATION_BIT_EXT, VK_DEBUG_REPORT_WARNING_BIT_EXT, VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, VK_DEBUG_REPORT_ERROR_BIT_EXT, and VK_DEBUG_REPORT_DEBUG_BIT_EXT. The createDebugReportCallback function implements the creation of the debug report. First, it fills the VkDebugReportCallbackCreateInfoEXT control structure (dbgReportCreateInfo) with relevant information. This primarily includes two things: first, assigning a user-defined function (pfnCallback) that will print the debug information received from the system (see the next point), and second, assigning the debugging flags (flags) in which the programmer is interested: /********* VulkanLED.h *********/ // Handle of the debug report callback VkDebugReportCallbackEXT debugReportCallback; // Debug report callback create information control structure VkDebugReportCallbackCreateInfoEXT dbgReportCreateInfo = {}; /********* VulkanLED.cpp *********/ VulkanLayerAndExtension::createDebugReportCallback(){ . . . // Define the debug report control structure, // provide the reference of 'debugFunction', // this function prints the debug information on the console. dbgReportCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbgReportCreateInfo.pfnCallback = debugFunction; dbgReportCreateInfo.pUserData = NULL; dbgReportCreateInfo.pNext = NULL; dbgReportCreateInfo.flags = VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT | VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_DEBUG_BIT_EXT; // Create the debug report callback and store the handle // into 'debugReportCallback' result = dbgCreateDebugReportCallback (*instance, &dbgReportCreateInfo, NULL, &debugReportCallback); if (result == VK_SUCCESS) { cout << "Debug report callback object created successfully\n"; } return result; } Define the debugFunction() function that prints the retrieved debug information in a user-friendly way. 
It describes the type of debug information along with the reported message: VKAPI_ATTR VkBool32 VKAPI_CALL VulkanLayerAndExtension::debugFunction( VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData){ if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) { std::cout << "[VK_DEBUG_REPORT] ERROR: [" <<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { std::cout << "[VK_DEBUG_REPORT] WARNING: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) { std::cout<<"[VK_DEBUG_REPORT] INFORMATION:[" <<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if(msgFlags& VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT){ cout <<"[VK_DEBUG_REPORT] PERFORMANCE: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) { cout << "[VK_DEBUG_REPORT] DEBUG: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else { return VK_FALSE; } return VK_SUCCESS; } The following table describes the various fields from the debugFunction()callback: Parameters Description msgFlags This specifies the type of debugging event that has triggered the call, for example, an error, warning, performance warning, and so on. objType This is the type object that is manipulated by the triggering call. srcObject This is the handle of the object that's being created or manipulated by the triggered call. location This refers to the place of the code describing the event. msgCode This refers to the message code. layerPrefix This is the layer responsible for triggering the debug event. msg This field contains the debug message text. userData Any application-specific user data is specified to the callback using this field.  The debugFunction callback has a Boolean return value. The true return value indicates the continuation of the command chain to subsequent validation layers even after an error is occurred. However, the false value indicates the validation layer to abort the execution when an error occurs. It is advisable to stop the execution at the very first error. Having an error itself indicates that something has occurred unexpectedly; letting the system run in these circumstances may lead to undefined results or further errors, which could be completely senseless sometimes. In the latter case, where the execution is aborted, it provides a better chance for the developer to concentrate and fix the reported error. In contrast, it may be cumbersome in the former approach, where the system throws a bunch of errors, leaving the developers in a confused state sometimes. In order to enable debugging at vkCreateInstance, provide dbgReportCreateInfo to the VkInstanceCreateInfo’spNext field: VkInstanceCreateInfo instInfo = {}; . . . instInfo.pNext = &layerExtension.dbgReportCreateInfo; vkCreateInstance(&instInfo, NULL, &instance); Finally, once the debug is no longer in use, destroy the debug callback object: void VulkanLayerAndExtension::destroyDebugReportCallback(){ VulkanApplication* appObj = VulkanApplication::GetInstance(); dbgDestroyDebugReportCallback(instance,debugReportCallback,NULL); } The following is the output from the implemented debug report. Your output may differ from this based on the GPU vendor and SDK provider. 
Also, the explanations of the errors or warnings reported are very specific to the SDK itself. At a higher level, however, the specification still holds; this means you can expect to see a debug report with warnings, information, debugging help, and so on, based on the debugging flags you have turned on.

Summary

This article was short, precise, and full of practical implementations. Working on Vulkan without debugging capabilities is like shooting in the dark. We know very well that Vulkan demands an appreciable amount of programming, and developers make mistakes for obvious reasons; they are humans after all. We learn from our mistakes, and debugging allows us to find and correct these errors. It also provides insightful information to build quality products.

Let's do a quick recap. We learned the Vulkan debugging process. We looked at the various LunarG validation layers and understood the roles and responsibilities offered by each one of them. Next, we added a few selected validation layers that we were interested in debugging. We also added the debug extension that exposes the debugging capabilities; without this, the extension's API entry points cannot be dynamically linked to the application. Then, we implemented the Vulkan debug report callback creation and linked it to our debug reporting callback; this callback decorates the captured debug report in a user-friendly and presentable fashion. Finally, we implemented the API to destroy the debugging report callback object.

Resources for Article:

Further resources on this subject:
Get your Apps Ready for Android N [article]
Multithreading with Qt [article]
Manage Security in Excel [article]
All About the Protocol

Packt
22 Nov 2016
19 min read
In this article by Jon Hoffman, the author of the book Swift 3 Protocol Oriented Programming - Second Edition, we put the protocol at the center of our design. Coming from an object-oriented background, I am very familiar with protocols (or interfaces, as they are known in other object-oriented languages). However, prior to Apple introducing protocol-oriented programming, protocols, or interfaces, were rarely the focal point of my application designs, unless I was working with an Open Service Gateway Initiative (OSGi) based project. When I designed an application in an object-oriented way, I always began the design with the objects. The protocols or interfaces were then used where they were appropriate, mainly for polymorphism when a class hierarchy did not make sense. Now, all that has changed, and with protocol-oriented programming, the protocol has been elevated to the focal point of our application design.

(For more resources related to this topic, see here.)

In this article you will learn the following:

How to define property and method requirements within a protocol
How to use protocol inheritance and composition
How to use a protocol as a type
What polymorphism is

When we are designing an application in an object-oriented way, we begin the design by focusing on the objects and how they interact. The object is a data structure that contains information about the attributes of the object in the form of properties, and the actions performed by or to the object in the form of methods. We cannot create an object without a blueprint that tells the application what attributes and actions to expect from the object. In most object-oriented languages, this blueprint comes in the form of a class. A class is a construct that allows us to encapsulate the properties and actions of an object into a single type.

Most object-oriented programming languages contain an interface type. This interface is a type that contains method and property signatures, but does not contain any implementation details. An interface can be considered a contract where any type that conforms to the interface must implement the required functionality defined within it. Interfaces in most object-oriented languages are primarily used as a way to achieve polymorphism. There are some frameworks, such as OSGi, that use interfaces extensively; however, in most object-oriented designs, the interface takes a back seat to the class and class hierarchy.

Designing an application in a protocol-oriented way is significantly different from designing it in an object-oriented way. As we stated earlier, object-oriented design begins with the objects and the interaction between the objects, while protocol-oriented design begins with the protocol. While protocol-oriented design is about so much more than just the protocol, we can think of the protocol as the backbone of protocol-oriented programming. After all, it would be pretty hard to have protocol-oriented programming without the protocol.

A protocol in Swift is similar to interfaces in object-oriented languages, where the protocol acts as a contract that defines the methods, properties, and other requirements needed by our types to perform their tasks. We say that the protocol acts as a contract because any type that adopts, or conforms to, the protocol promises to implement the requirements defined by the protocol. Any class, structure, or enumeration can conform to a protocol. A type cannot conform to a protocol unless it implements all required functionality defined within the protocol.
If a type adopts a protocol and it does not implement all functionality defined by the protocol, we will get a compile time error and the project will not compile.

Most modern object-oriented programming languages implement their standard library with a class hierarchy; however, the basis of Swift's standard library is the protocol (https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/SwiftStandardLibraryReference/index.html). Therefore, not only does Apple recommend that we use the protocol-oriented programming paradigm in our applications, but they also use it in the Swift standard library. With the protocol being the basis of the Swift standard library and also the backbone of the protocol-oriented programming paradigm, it is very important that we fully understand what the protocol is and how we can use it. In this article, we will go over the basic usage of the protocol, which will include the syntax for defining the protocol, how to define requirements in a protocol, and how to make our types conform to a given protocol.

Protocol syntax

In this section, we will look at how to define a protocol, define requirements within a protocol, and specify that a type conforms to a protocol.

Defining a protocol

The syntax we use to define a protocol is very similar to the syntax used to define a class, structure, or enumeration. The following example shows the syntax used to define a protocol:

protocol MyProtocol {
    //protocol definition here
}

To define the protocol, we use the protocol keyword followed by the name of the protocol. We then put the requirements, which our protocol defines, between curly brackets. Custom types can state that they conform to a particular protocol by placing the name of the protocol after the type's name, separated by a colon. The following example shows how we would state that the MyStruct structure conforms to the MyProtocol protocol:

struct MyStruct: MyProtocol {
    //structure implementation here
}

A type can also conform to multiple protocols. We list the multiple protocols that the type conforms to by separating them with commas. The following example shows how we would specify that the MyStruct structure type conforms to the MyProtocol, AnotherProtocol, and ThirdProtocol protocols:

struct MyStruct: MyProtocol, AnotherProtocol, ThirdProtocol {
    // Structure implementation here
}

Having a type conform to multiple protocols is a very important concept within protocol-oriented programming, as we will see later in the article. This concept is known as protocol composition. Now let's see how we would add property requirements to our protocol.

Property requirements

A protocol can require that the conforming types provide certain properties with specified names and types. The protocol does not say whether the property should be a stored or computed property because the implementation details are left up to the conforming types. When defining a property within a protocol, we must specify whether the property is a read-only or a read-write property by using the get and set keywords. We also need to specify the property's type since we cannot use type inference in a protocol. Let's look at how we would define properties within a protocol by creating a protocol named FullName, as shown in the next example:

protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
}

In the FullName protocol, we define two properties named firstName and lastName. Both of these properties are defined as read-write properties; a quick sketch of how a conforming type can satisfy them follows.
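As noted above, the protocol does not dictate how these requirements are met. The following is a minimal sketch (not taken from the original text) of two hypothetical conforming types: one satisfies the FullName requirements with stored properties, and the other backs them with computed properties over an assumed internal representation:

struct Contact: FullName {
    // Stored properties satisfy the {get set} requirements directly.
    var firstName: String
    var lastName: String
}

struct Employee: FullName {
    // An assumed internal representation; the protocol never sees it.
    private var names: [String] = ["", ""]

    // Computed properties can also satisfy read-write requirements,
    // as long as they provide both a getter and a setter.
    var firstName: String {
        get { return names[0] }
        set { names[0] = newValue }
    }
    var lastName: String {
        get { return names[1] }
        set { names[1] = newValue }
    }
}

Either approach is valid; from the protocol's point of view the two types are interchangeable.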
Any type that conforms to the FullName protocol must implement these properties. If we wanted to define a property as read-only, we would define it using only the get keyword, as shown in the following code:

var readOnly: String {get}

If the property is going to be a type property, then we must define it as such in the protocol. A type property is defined using the static keyword, as shown in the following example:

static var typeProperty: String {get}

Now let's see how we would add method requirements to our protocol.

Method requirements

A protocol can require that the conforming types provide specific methods. These methods are defined within the protocol exactly as we define them within a class or structure, but without the curly brackets and method body. We can define these methods as instance or type methods using the static keyword. Adding default values to the method's parameters is not allowed when defining the method within a protocol. Let's add a method named getFullName() to our FullName protocol:

protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
    func getFullName() -> String
}

Our FullName protocol now requires one method named getFullName() and two read-write properties named firstName and lastName.

For value types, such as the structure, if we intend for a method to modify the instance that it belongs to, we must prefix the method definition with the mutating keyword. This keyword indicates that the method is allowed to modify the instance it belongs to. The following example shows how to use the mutating keyword with a method definition:

mutating func changeName()

If we mark a method requirement as mutating, we do not need to write the mutating keyword for that method when we adopt the protocol with a reference (class) type. The mutating keyword is only used with value (structure or enumeration) types.

Optional requirements

There are times when we want protocols to define optional requirements—that is, methods or properties that are not required to be implemented. To use optional requirements, we need to start off by marking the protocol with the @objc attribute. It is important to note that only classes can adopt protocols that use the @objc attribute. Structures and enumerations cannot adopt these protocols. To mark a property or method as optional, we use the optional keyword. Let's look at how we would use the optional keyword to define optional properties and methods:

@objc protocol Phone {
    var phoneNumber: String {get set}
    @objc optional var emailAddress: String {get set}
    func dialNumber()
    @objc optional func getEmail()
}

In the Phone protocol we just created, we define a required property named phoneNumber and an optional property named emailAddress. We also defined a required function named dialNumber() and an optional function named getEmail(). Now let's explore how protocol inheritance works.

Protocol inheritance

Protocols can inherit requirements from one or more other protocols and then add additional requirements. The following code shows the syntax for protocol inheritance:

protocol ProtocolThree: ProtocolOne, ProtocolTwo {
    // Add requirements here
}

The syntax for protocol inheritance is very similar to class inheritance in Swift, except that we are able to inherit from more than one protocol. Let's see how protocol inheritance works.
We will use the FullName protocol that we defined earlier in this section and create a new protocol named Person:

protocol Person: FullName {
    var age: Int {get set}
}

Now, when we create a type that conforms to the Person protocol, we must implement the requirements defined in the Person protocol, as well as the requirements defined in the FullName protocol. As an example, we could define a Student structure that conforms to the Person protocol as shown in the following code:

struct Student: Person {
    var firstName = ""
    var lastName = ""
    var age = 0

    func getFullName() -> String {
        return "\(firstName) \(lastName)"
    }
}

Note that in the Student structure we implemented the requirements defined in both the FullName and Person protocols; however, the only protocol specified when we defined the Student structure was the Person protocol. We only needed to list the Person protocol because it inherited all of the requirements from the FullName protocol. Now let's look at a very important concept in the protocol-oriented programming paradigm: protocol composition.

Protocol composition

Protocol composition lets our types adopt multiple protocols. This is a major advantage that we get when we use protocols rather than a class hierarchy because classes, in Swift and other single-inheritance languages, can only inherit from one superclass. The syntax for protocol composition is the same as the protocol inheritance that we just saw. The following example shows how to do protocol composition:

struct MyStruct: ProtocolOne, ProtocolTwo, ProtocolThree {
    // implementation here
}

Protocol composition allows us to break our requirements into many smaller components rather than inheriting all requirements from a single superclass or class hierarchy. This allows our type families to grow in width rather than height, which means we avoid creating bloated types that contain requirements that are not needed. Protocol composition may seem like a very simple concept, but it is a concept that is essential to protocol-oriented programming. Let's look at an example of protocol composition so we can see the advantage we get from using it.

Let's say that we have a class hierarchy like the following. In this class hierarchy, we have a base class named Athlete. The Athlete base class then has two subclasses named Amateur and Pro. These classes are used depending on whether the athlete is an amateur athlete or a pro athlete. An amateur athlete may be a collegiate athlete, and we would need to store information such as which school they go to and their GPA. A pro athlete is one that gets paid for playing the game. For the pro athletes, we would need to store information such as what team they play for and their salary.

In this example, things get a little messy under the Amateur and Pro classes. As we can see, we have a separate football player class under both the Amateur and Pro classes (the AmFootballPlayer and ProFootballPlayer classes). We also have a separate baseball class under both the Amateur and Pro classes (the AmBaseballPlayer and ProBaseballPlayer classes). This will require us to have a lot of duplicate code between these classes.

With protocol composition, instead of having a class hierarchy where our subclasses inherit all functionality from a single superclass, we have a collection of protocols that we can mix and match in our types. We then use one or more of these protocols as needed for our types; a sketch of what these smaller protocols might look like follows.
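The protocol names below come from the hierarchy discussed above; the specific requirements placed inside each one (school, GPA, team, salary, position) are illustrative assumptions rather than definitions from the original text:

// A minimal sketch of the smaller, composable protocols.
protocol Athlete {
    var name: String {get set}
    var age: Int {get set}
}

protocol Amateur {
    // Assumed requirements for a collegiate athlete.
    var school: String {get set}
    var gpa: Double {get set}
}

protocol Pro {
    // Assumed requirements for a paid athlete.
    var team: String {get set}
    var salary: Double {get set}
}

protocol FootballPlayer {
    // Assumed sport-specific requirement.
    var position: String {get set}
}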
For example, we can create an AmFootballPlayer structure that conforms to the Athlete, Amateur, and FootballPlayer protocols. We could also create the ProFootballPlayer structure that conforms to the Athlete, Pro, and FootballPlayer protocols. This allows us to be very specific about the requirements for our types and only adopt the requirements that we need. From a pure protocol point of view, this last example may not make a lot of sense right now because protocols only define the requirements.

One word of warning: If you find yourself creating numerous protocols that only contain one or two requirements in them, then you are probably making your protocols too granular. This will lead to a design that is hard to maintain and manage. Now let's look at how a protocol is a full-fledged type in Swift.

Using protocols as a type

Even though no functionality is implemented in a protocol, protocols are still considered a full-fledged type in the Swift programming language, and can mostly be used like any other type. What this means is that we can use protocols as parameters or return types for a function. We can also use them as the type for variables, constants, and collections. Let's take a look at some examples. For these next few examples, we will use the following PersonProtocol protocol:

protocol PersonProtocol {
    var firstName: String {get set}
    var lastName: String {get set}
    var birthDate: Date {get set}
    var profession: String {get}
    init(firstName: String, lastName: String, birthDate: Date)
}

In this PersonProtocol, we define four properties and one initializer. For this first example, we will show how to use a protocol as a parameter and return type for a function, method, or initializer. Within the function itself, we also use the PersonProtocol as the type for a variable:

func updatePerson(person: PersonProtocol) -> PersonProtocol {
    var newPerson: PersonProtocol
    // Code to update person goes here
    return newPerson
}

We can also use protocols as the type to store in a collection, as shown in the next example:

var personArray = [PersonProtocol]()
var personDict = [String: PersonProtocol]()

The one thing we cannot do with protocols is create an instance of one. This is because no functionality is implemented within a protocol. As an example, if we tried to create an instance of the PersonProtocol protocol, as shown in the following example, we would receive the error protocol type 'PersonProtocol' cannot be instantiated:

var test = PersonProtocol(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)

We can use the instance of any type that conforms to our protocol anywhere that the protocol type is required. As an example, if we define a variable to be of the PersonProtocol protocol type, we can then populate that variable with the instance of any type that conforms to the PersonProtocol protocol. Let's assume that we have two types named SwiftProgrammer and FootballPlayer that conform to the PersonProtocol protocol (a minimal sketch of these two types appears after this example). We can then use them as shown in this next example:

var myPerson: PersonProtocol
myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

In this example, the myPerson variable is defined to be of the PersonProtocol protocol type. We can then set this variable to instances of either the SwiftProgrammer or FootballPlayer types. One thing to note is that Swift does not care whether the instance is a class, structure, or enumeration; it only matters that the type conforms to the PersonProtocol protocol type.
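Neither SwiftProgrammer nor FootballPlayer is defined in this excerpt. The following is a minimal sketch, under the assumption that simple structures are sufficient, of what two conforming implementations might look like:

import Foundation

struct SwiftProgrammer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    // A read-only computed property satisfies the {get} requirement.
    var profession: String { return "Swift Programmer" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

struct FootballPlayer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    var profession: String { return "Football Player" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

Because both types satisfy every requirement of PersonProtocol, either one can be used wherever the protocol type is expected.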
As we saw earlier, we can use our PersonProtocol protocol as the type for an array, which means that we can populate the array with instances of any type that conforms to the PersonProtocol protocol. The following is an example of this (note that the bDateProgrammer and bDatePlayer variables are instances of the Date type that would represent the birthdate of the individual):

var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

var people: [PersonProtocol] = []
people.append(programmer)
people.append(player)

What we are seeing in these last couple of examples is a form of polymorphism. To use protocols to their fullest potential, we need to understand what polymorphism is.

Polymorphism with protocols

The word polymorphism comes from the Greek roots poly (meaning many) and morphe (meaning form). In programming languages, polymorphism is a single interface to multiple types (many forms). There are two reasons to learn the meaning of the word polymorphism. The first reason is that using such a fancy word can make you sound very intelligent in casual conversation. The second reason is that polymorphism provides one of the most useful programming techniques not only in object-oriented programming, but also in protocol-oriented programming.

Polymorphism lets us interact with multiple types through a single uniform interface. In the object-oriented programming world the single uniform interface usually comes from a superclass, while in the protocol-oriented programming world that single interface usually comes from a protocol. In the last section, we saw two examples of polymorphism with Swift. The first example was the following code:

var myPerson: PersonProtocol
myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

In this example, we had a single variable of the PersonProtocol type. Polymorphism allowed us to set the variable to instances of any type that conforms to the PersonProtocol protocol, such as the SwiftProgrammer or FootballPlayer types. The other example of polymorphism was in the following code:

var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

var people: [PersonProtocol] = []
people.append(programmer)
people.append(player)

In this example, we created an array of PersonProtocol types. Polymorphism allowed us to add instances of any types that conform to PersonProtocol to this array.

When we access an instance of a type through a single uniform interface, as we just showed, we are unable to access type-specific functionality. As an example, if we had a property in the FootballPlayer type that records the age of the player, we would be unable to access that property because it is not defined in the PersonProtocol protocol. If we do need to access type-specific functionality, we can use type casting.

Type casting with protocols

Type casting is a way to check the type of an instance and/or to treat the instance as a specified type. In Swift, we use the is keyword to check whether an instance is of a specific type and the as keyword to treat an instance as a specific type.
The following example shows how we would use the is keyword:

if person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

In this example, the conditional statement returns true if the person instance is of the SwiftProgrammer type or false if it isn't. We can also use the switch statement (as shown in the next example) if we want to check for multiple types:

for person in people {
    switch (person) {
    case is SwiftProgrammer:
        print("\(person.firstName) is a Swift Programmer")
    case is FootballPlayer:
        print("\(person.firstName) is a Football Player")
    default:
        print("\(person.firstName) is an unknown type")
    }
}

We can use the where statement in combination with the is keyword to filter an array to only return instances of a specific type. In the next example, we filter an array that contains instances of the PersonProtocol to only return those elements of the array that are instances of the SwiftProgrammer type:

for person in people where person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

Now let's look at how we would cast an instance to a specific type. To do this, we can use the as keyword. Since the cast can fail if the instance is not of the specified type, the as keyword comes in two forms: as? and as!. With the as? form, if the casting fails it returns nil; with the as! form, if the casting fails we get a runtime error. Therefore, it is recommended to use the as? form unless we are absolutely sure of the instance type or we perform a check of the instance type prior to doing the cast. The following example shows how we would use the as? keyword to attempt to cast an instance of a variable to the SwiftProgrammer type:

if let p = person as? SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

Since the as? keyword returns an optional, in the last example we could use optional binding to perform the cast. If we are sure of the instance type, we can use the as! keyword as shown in the next example:

for person in people where person is SwiftProgrammer {
    let p = person as! SwiftProgrammer
}

Summary

While protocol-oriented programming is about so much more than just the protocol, it would be impossible to have the protocol-oriented programming paradigm without the protocol. We can think of the protocol as the backbone of protocol-oriented programming. Therefore, it is important to fully understand the protocol in order to properly implement protocol-oriented programming.

Resources for Article:

Further resources on this subject:
Using Protocols and Protocol Extensions [Article]
What we can learn from attacks on the WEP Protocol [Article]
Hosting the service in IIS using the TCP protocol [Article]
How to create a Breakout game with Godot Engine – Part 1

George Marques
22 Nov 2016
8 min read
The Godot Engine is a piece of open source software designed to help you make any kind of game. It possesses a great number of features to ease the workflow of game development. This two-part article will cover some basic features of the engine—such as physics and scripting—by showing you how to make a game with the mechanics of the classic Breakout.

To install Godot, you can download it from the website and extract it in your place of preference. You can also install it through Steam. While the latter has a larger download, it has all the demos and export templates already installed.

Setting up the project

When you first open Godot, you are presented with the Project Manager window. In here, you can create new or import existing projects. It's possible to run a project directly from here without opening the editor itself. The Templates tab shows the available projects in the online Asset Library where you can find and download community-made content.

Note that there might also be a console window, which shows some messages about what's happening in the engine, like warnings and error messages. This window must remain open, but it's also helpful for debugging. It will not show up in your final exported version of the game.

Project Manager

To start creating our game, let's click on the New Project button. Using the dialog interface, navigate to where you want to place it, and then create a new folder just for the project. Select it and choose a name for the project ("Breakout" may be a good choice). Once you do that, the new project will be shown at the top of the list. You can double-click to open it in the editor.

Creating the paddle

You will first see the main screen of the Godot editor. I rearranged the docks to suit my preferences, but you can leave the default one or change it to something you like. If you click on the Settings button at the top-right corner, you can save and load layouts.

Godot uses a system in which every object is a Node. A Scene is just a tree of nodes and can also be used as a "prefab", as other engines call it. Every scene can be instanced as a part of another scene. This helps in dividing the project and reusing the work. This is all explained in the documentation and you can consult it if you have any doubt.

Main screen

Now we are going to create a scene for the paddle, which can later be instanced on the game scene. I like to start with an object that can be controlled by the player so that we can start to feel what the interactivity looks like.

On the Scene dock, click on the "+" (plus) button to add a new Node (pressing Ctrl + A will also work). You'll be presented with a large collection of Nodes to choose from, each with its own behavior. For the paddle, choose a StaticBody2D. The search field can help you find it easily. This will be the root of the scene. Remember that a scene is a tree of nodes, so it needs a "root" to be its main anchor point.

You may wonder why we chose a static body for the paddle since it will move. The reason for using this kind of node is that we don't want it to be moved by physical interaction. When the ball hits the paddle, we want it to be kept in the same place. We will move it only through scripting.

Select the node you just added in the Scene dock and rename it to Paddle. Save the scene as paddle.tscn. Saving often is good to avoid losing your work.

Add a new node of the type Sprite (not Sprite3D, since this is a 2D game). This is now a child of the root node in the tree. The sprite will serve as the image of the paddle.
You can use the following image in it:

Paddle

Save the image in the project folder and use the FileSystem dock in the Godot editor to drag and drop it into the Texture property on the Inspector dock. Any property that accepts a file can be set with drag and drop.

Now the Static Body needs a collision area so that it can tell where other physics objects (like the ball) will collide. To do this, select the Paddle root node in the Scene dock and add a child node of type CollisionShape2D to it. The warning icon is there because we didn't set a shape to it yet, so let's do that now.

On the Inspector dock, set the Shape property to a New RectangleShape2D. You can set the shape extents visually using the editor. Or you can click on the ">" button just in front of the Shape property on the Inspector to edit its properties and set the extents to (100, 15) if you're using the provided image. This is the half-extent, so the rectangle will be doubled in each dimension based on what you set here.

User input

While our paddle is mostly done, it still isn't controllable. Before we delve into the scripting world, let's set up the Input Map. This is a Godot functionality that allows us to abstract user input into named actions. You can later modify the keys needed to move the player without changing the code. It also allows you to use multiple keys and joystick buttons for a single action, making the game work on keyboard and gamepad seamlessly.

Click on the Project Settings option under the Scene menu in the top left of the window. There you can see the Input Map tab. There are some predefined actions, which are needed by UI controls. Add the actions move_left and move_right that we will use to move the paddle. Then map the left and right arrow keys of the keyboard to them. You can also add a mapping to the D-pad left and right buttons if you want to use a joystick. Close this window when done.

Now we're ready to do some coding. Right-click on the Paddle root node in the Scene dock and select the option Add Script from the context menu. The "Create Node Script" dialog will appear and you can use the default settings, which will create a new script file with the same name as the scene (with a different extension). Godot has its own script language called "GDScript", which has a syntax a bit like Python and is quite easy to learn if you are familiar with programming.
You can use the following code on the Paddle script:

extends StaticBody2D

# Paddle speed in pixels per second
# The "export" keyword allows you to edit this value from the Inspector
export var speed = 150.0

# Holds the limits of the screen
var left_limit = 0
var right_limit = 0

# This function is called when this node enters the game tree
# It is useful for initialization code
func _ready():
	# Enable the processing function for this node
	set_process(true)

	# Set the limits to the paddle based on the screen size
	left_limit = get_viewport_rect().pos.x + (get_node("Sprite").get_texture().get_width() / 2)
	right_limit = get_viewport_rect().pos.x + get_viewport_rect().size.x - (get_node("Sprite").get_texture().get_width() / 2)

# The processing function
func _process(delta):
	var direction = 0

	if Input.is_action_pressed("move_left"):
		# If the player is pressing the left arrow, move to the left,
		# which means going in the negative direction of the X axis
		direction = -1
	elif Input.is_action_pressed("move_right"):
		# Same as above, but this time we go in the positive direction
		direction = 1

	# Create a movement vector
	var movement = Vector2(direction * speed * delta, 0)

	# Move the paddle using vector arithmetic
	set_pos(get_pos() + movement)

	# Here we clamp the paddle position so it does not go off the screen
	if get_pos().x < left_limit:
		set_pos(Vector2(left_limit, get_pos().y))
	elif get_pos().x > right_limit:
		set_pos(Vector2(right_limit, get_pos().y))

If you play the scene (using the top center bar or pressing F6), you can see the paddle and move it with the keyboard. You may find it too slow, but this will be covered in part two of this article when we set up the game scene.

Up next

You now have a project set up and a paddle on the screen that can be controlled by the player. You also have some understanding of how the Godot Engine operates with its nodes, scenes, and scripts. In part two, you will learn how to add the ball and the destroyable bricks.

About the Author: George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping solve bugs and add new features to the software, while also giving solutions to the community for the questions they have.
Installing vCenter Site Recovery Manager 6.1

Packt
22 Nov 2016
3 min read
In this article by Abhilash G B, the author of Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager - Second Edition, we will learn about Site Recovery Manager and its architecture.

(For more resources related to this topic, see here.)

What is Site Recovery Manager?

vCenter Site Recovery Manager (SRM) is an orchestration software that is used to automate disaster recovery testing and failover. It can be configured to leverage either vSphere Replication or a supported array-based replication. With SRM, you can create protection groups and run recovery plans against them. These recovery plans can then be used to test the Disaster Recovery (DR) setup, perform a planned failover, or be initiated during a DR.

SRM is not a product that performs an automatic failover, which means there is no intelligence built into SRM that would detect a disaster/outage and cause failover of the virtual machines (VMs). The DR process must be manually initiated. Hence, it is not a high-availability solution either, but purely a tool that orchestrates a recovery plan.

The SRM architecture

vCenter SRM is not a tool that works on its own. It needs to talk to other components in your vSphere environment. The following are the components that will be involved in an SRM-protected environment. SRM requires both the protected and the recovery sites to be managed by separate instances of vCenter Server. It also requires an SRM instance at both the sites. SRM now uses PSC as an intermediary to fetch vCenter information. There are several possible topologies.

SRM as a solution cannot work on its own. This is because it is only an orchestration tool and does not include a replication engine. However, it can leverage either a supported array-based replication or VMware's proprietary replication engine, vSphere Replication.

Array manager

Each SRM instance needs to be configured with an array manager for it to communicate with the storage array. The array manager will detect the storage array using the information you supply to connect to the array. Before you can even add an array manager, you will need to install an array-specific Storage Replication Adapter (SRA). This is because the array manager uses the installed SRA to collect the replication information from the array.

Storage Replication Adapter (SRA)

The SRA is a storage vendor component that makes SRM aware of the replication configuration at the array. SRM leverages the SRA's ability to gather information regarding the replicated volumes and the direction of the replication from the array. SRM also uses the SRA for the following functions:

Test Failover
Recovery
Reprotect

It is important to understand that SRM requires the SRA to be installed for all of its functions leveraging array-based replication. When all these components are put together, a site protected by SRM would look as depicted in the architecture described above. SRM conceptually assumes that both the protected and the recovery sites are geographically separated, but such a separation is not mandatory. You can use SRM to protect a chassis of servers and have another chassis in the same data center as the recovery site.

Summary

In this article, we learned what VMware vCenter Site Recovery Manager is and also its architecture.

Resources for Article:

Further resources on this subject:
Virtualization [article]
VM, It Is Not What You Think! [article]
The importance of Hyper-V Security [article]