
How-To Tutorials


Event detection from the news headlines in Hadoop

Packt
08 Dec 2016
13 min read
In this article by Anurag Shrivastava, author of Hadoop Blueprints, we will learn how to build a text analytics system that detects specific events in random news headlines. The Internet has become the main source of news in the world. There are thousands of websites that constantly publish and update news stories from around the world. Not every news item is relevant to everyone, but some news items are critical for certain people or businesses. For example, if you were a major car manufacturer based in Germany with suppliers located in India, you would be interested in news from that region which could affect your supply chain.

Road accidents in India are a major social and economic problem. Road accidents leave a large number of fatalities behind and result in the loss of capital. In this example, we will build a system that detects whether a news item refers to a road accident event. Let us define what we mean by that in the next paragraph.

A road accident event may or may not result in fatal injuries. One or more vehicles and pedestrians may be involved in the accident. A non-road-accident news item is everything else that cannot be categorized as a road accident event; it could be a trend analysis related to road accidents or something totally unrelated.

Technology stack

To build this system, we will use the following technologies:

    Task             Technology
    Data storage     HDFS
    Data processing  Hadoop MapReduce
    Query engine     Hive and Hive UDF
    Data ingestion   curl and HDFS copy
    Event detection  OpenNLP

The event detection system is a machine learning based natural language processing system. The natural language processing component brings the intelligence to detect events in the random headline sentences from the news items.

OpenNLP

The Open Source Natural Language Processing Framework (OpenNLP) is an Apache Software Foundation project. You can download version 1.6.0 from https://opennlp.apache.org/ to run the examples in this article. It is capable of detecting entities, document categories, parts of speech, and so on in text written by humans. We will use the document categorization feature of OpenNLP in our system. Document categorization requires you to train the OpenNLP model with the help of sample text. As a result of training, we get a model, and this resulting model is used to categorize new text. Our training data looks as follows:

    r 1.46 lakh lives lost on Indian roads last year - The Hindu.
    r Indian road accident data | OpenGovernmentData (OGD) platform...
    r 400 people die everyday in road accidents in India: Report - India TV.
    n Top Indian female biker dies in road accident during country-wide tour.
    n Thirty die in road accidents in north India mountains—World—Dunya...
    n India's top woman biker Veenu Paliwal dies in road accident: India...
    r Accidents on India's deadly roads cost the economy over $8 billion...
    n Thirty die in road accidents in north India mountains (The Express)

The first column can take two values: n indicates that the news item is a road accident event, and r indicates that the news item is not a road accident event (everything else). This training set has 200 lines in total. Please note that OpenNLP requires at least 15,000 lines in the training set to deliver good results. Because we do not have that much training data, we will start with a small set but remain aware of the limitations of our model.
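One lightweight way to keep an eye on that limitation is to hold back a slice of the training file and spot-check the trained model against it afterwards. This is an aside of my own, not part of the original walkthrough; only roadaccident.train.prn comes from the article, the other file names are assumptions.

```sh
# Shuffle the 200-line training file and hold out ~15% for a manual spot check.
shuf roadaccident.train.prn > shuffled.prn
head -n 170 shuffled.prn > roadaccident.train.prn
tail -n 30  shuffled.prn > roadaccident.heldout.prn
rm shuffled.prn
```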
You will see that even with a small training dataset, this model works reasonably well. Let us train and build our model:

    $ opennlp DoccatTrainer -model en-doccat.bin -lang en -data roadaccident.train.prn -encoding UTF-8

Here the file roadaccident.train.prn contains the training data, and the output file en-doccat.bin contains the model, which we will use in our data pipeline. We have built our model using the command-line utility, but it is also possible to build the model programmatically. The training data file is a plain text file, which you can expand with a bigger corpus of knowledge to make the model smarter. Next we will build the data pipeline as follows.

Fetch RSS feeds

This component will fetch RSS news feeds from popular news websites. In this case, we will use just one feed, from Google. We can always add more sites after our first RSS feed has been integrated. The whole RSS feed can be downloaded using the following command:

    $ curl "https://news.google.com/news?cf=all&hl=en&ned=in&topic=n&output=rss"

The previous command downloads the news headlines for India. You can customize the RSS feed for your region by visiting the Google news site at https://news.google.com.

Scheduler

Our scheduler will fetch the RSS feed once every 6 hours. Let us assume that in a 6-hour interval we have a good likelihood of fetching fresh news items. We will wrap our feed-fetching script in a shell file and invoke it using cron. The script is as follows:

    $ cat feedfetch.sh
    NAME="newsfeed-"`date +%Y-%m-%dT%H.%M.%S`
    curl "https://news.google.com/news?cf=all&hl=en&ned=in&topic=n&output=rss" > $NAME
    hadoop fs -put $NAME /xml/rss/newsfeeds

The cron job setup line will be as follows:

    0 */6 * * * /home/hduser/mycommand

Please edit your cron table using the following command and add the setup line to it:

    $ crontab -e

Loading data in HDFS

To load data into HDFS, we will use the HDFS put command, which copies the downloaded RSS feed into a directory in HDFS. Let us make the directory in HDFS where our feed-fetcher script will store the RSS feeds:

    $ hadoop fs -mkdir /xml/rss/newsfeeds
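Putting the ingestion pieces together, a slightly hardened version of the fetch-and-load script could look like the following sketch. The feed URL and HDFS path come from the article; the curl flags, the temporary path, and the .xml suffix are my own assumptions.

```sh
#!/bin/sh
# Fetch the RSS feed and load it into HDFS, failing loudly on errors.
set -e
FEED_URL="https://news.google.com/news?cf=all&hl=en&ned=in&topic=n&output=rss"
NAME="newsfeed-$(date +%Y-%m-%dT%H.%M.%S).xml"

curl -sf "$FEED_URL" -o "/tmp/$NAME"             # -f makes curl exit non-zero on HTTP errors
hadoop fs -put "/tmp/$NAME" /xml/rss/newsfeeds   # load the raw XML into HDFS
rm -f "/tmp/$NAME"
```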
Query using Hive

First we will create an external table in Hive for the new RSS feed. Using XPath-based select queries, we will extract the news headlines from the RSS feeds. These headlines will then be passed to a UDF to detect their categories:

    CREATE EXTERNAL TABLE IF NOT EXISTS rssnews(
      document STRING)
    COMMENT 'RSS Feeds from media'
    STORED AS TEXTFILE
    location '/xml/rss/newsfeeds';

The following command parses the XML to retrieve the titles (the headlines) and explodes them into a single-column table:

    SELECT explode(xpath(document, '//item/title/text()')) FROM rssnews;

The sample output of the above command on my system is as follows:

    hive> select explode(xpath(document, '//item/title/text()')) from rssnews;
    Query ID = hduser_20161010134407_dcbcfd1c-53ac-4c87-976e-275a61ac3e8d
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Starting Job = job_1475744961620_0016, Tracking URL = http://localhost:8088/proxy/application_1475744961620_0016/
    Kill Command = /home/hduser/hadoop-2.7.1/bin/hadoop job -kill job_1475744961620_0016
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
    2016-10-10 14:46:14,022 Stage-1 map = 0%, reduce = 0%
    2016-10-10 14:46:20,464 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.69 sec
    MapReduce Total cumulative CPU time: 4 seconds 690 msec
    Ended Job = job_1475744961620_0016
    MapReduce Jobs Launched:
    Stage-Stage-1: Map: 1 Cumulative CPU: 4.69 sec HDFS Read: 120671 HDFS Write: 1713 SUCCESS
    Total MapReduce CPU Time Spent: 4 seconds 690 msec
    OK
    China dispels hopes of early breakthrough on NSG, sticks to its guns on Azhar - The Hindu
    Pampore attack: Militants holed up inside govt building; combing operations intensify - Firstpost
    CPI(M) worker hacked to death in Kannur - The Hindu
    Akhilesh Yadav's comment on PM Modi's Lucknow visit shows Samajwadi Party's insecurity: BJP - The Indian Express
    PMO maintains no data about petitions personally read by PM - Daily News & Analysis
    AIADMK launches social media campaign to put an end to rumours regarding Amma's health - Times of India
    Pakistan, India using us to play politics: Former Baloch CM - Times of India
    Indian soldier, who recited patriotic poem against Pakistan, gets death threat - Zee News
    This Dussehra effigies of 'terrorism' to go up in flames - Business Standard
    'Personal reasons behind Rohith's suicide': Read commission's report - Hindustan Times
    Time taken: 5.56 seconds, Fetched: 10 row(s)

Hive UDF

Our Hive User Defined Function (UDF) categorizeDoc takes a news headline and suggests whether it is news about a road accident event, as defined earlier.
This function is as follows:

    package com.mycompany.app;

    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;
    import opennlp.tools.util.InvalidFormatException;
    import opennlp.tools.doccat.DoccatModel;
    import opennlp.tools.doccat.DocumentCategorizerME;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.IOException;

    @Description(
        name = "getCategory",
        value = "_FUNC_(string) - gets the category of a document")
    public final class MyUDF extends UDF {

        public Text evaluate(Text input) {
            if (input == null) return null;
            try {
                return new Text(categorizeDoc(input.toString()));
            } catch (Exception ex) {
                ex.printStackTrace();
                return new Text("Sorry Failed: >> " + input.toString());
            }
        }

        public String categorizeDoc(String doc) throws InvalidFormatException, IOException {
            InputStream is = new FileInputStream("./en-doccat.bin");
            DoccatModel model = new DoccatModel(is);
            is.close();
            DocumentCategorizerME classificationME = new DocumentCategorizerME(model);
            String documentContent = doc;
            double[] classDistribution = classificationME.categorize(documentContent);
            String predictedCategory = classificationME.getBestCategory(classDistribution);
            return predictedCategory;
        }
    }

The function categorizeDoc takes a single string as input. It loads the model we created earlier from the file en-doccat.bin in the local directory, and then calls the classifier, which returns the result to the calling function. The class MyUDF extends the Hive UDF class; its evaluate method calls categorizeDoc for each input line. If the call succeeds, the predicted category is returned to the calling program; otherwise, a message is returned indicating that category detection has failed.
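Note that categorizeDoc reopens and reparses en-doccat.bin for every row, which becomes expensive on larger tables. A possible refinement, sketched below as my own suggestion rather than part of the original article, is to cache the model after the first load and have categorizeDoc call getModel() instead of constructing a new DoccatModel itself.

```java
// Hypothetical optimization: load the model once per JVM instead of once per row.
private static DoccatModel cachedModel;

private static synchronized DoccatModel getModel() throws IOException {
    if (cachedModel == null) {
        InputStream is = new FileInputStream("./en-doccat.bin");
        try {
            cachedModel = new DoccatModel(is);
        } finally {
            is.close();
        }
    }
    return cachedModel;
}
```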
The pom.xml file to build the above file is as follows:

    $ cat pom.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.mycompany</groupId>
      <artifactId>app</artifactId>
      <version>1.0</version>
      <packaging>jar</packaging>
      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
      </properties>
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.12</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-client</artifactId>
          <version>2.7.1</version>
          <type>jar</type>
        </dependency>
        <dependency>
          <groupId>org.apache.hive</groupId>
          <artifactId>hive-exec</artifactId>
          <version>2.0.0</version>
          <type>jar</type>
        </dependency>
        <dependency>
          <groupId>org.apache.opennlp</groupId>
          <artifactId>opennlp-tools</artifactId>
          <version>1.6.0</version>
        </dependency>
      </dependencies>
      <build>
        <pluginManagement>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <version>2.8</version>
            </plugin>
            <plugin>
              <artifactId>maven-assembly-plugin</artifactId>
              <configuration>
                <archive>
                  <manifest>
                    <mainClass>com.mycompany.app.App</mainClass>
                  </manifest>
                </archive>
                <descriptorRefs>
                  <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
              </configuration>
            </plugin>
          </plugins>
        </pluginManagement>
      </build>
    </project>

You can build the jar with all the dependencies in it using the following command:

    $ mvn clean compile assembly:single

The resulting jar file app-1.0-jar-with-dependencies.jar can be found in the target directory. Let us use this jar file in Hive to categorise the news headlines as follows.

Copy the jar file to the bin subdirectory in the Hive root:

    $ cp app-1.0-jar-with-dependencies.jar $HIVE_ROOT/bin

Copy the trained model to the bin subdirectory in the Hive root:

    $ cp en-doccat.bin $HIVE_ROOT/bin

Run the categorization queries. Run Hive:

    $ hive

Add the jar file in Hive:

    hive> ADD JAR ./app-1.0-jar-with-dependencies.jar;

Create a temporary categorization function catDoc:

    hive> CREATE TEMPORARY FUNCTION catDoc as 'com.mycompany.app.MyUDF';

Create a table headlines to hold the headlines extracted from the RSS feed:

    hive> create table headlines( headline string);

Insert the extracted headlines into the table headlines:

    hive> insert overwrite table headlines select explode(xpath(document, '//item/title/text()')) from rssnews;

Let's test our UDF by manually passing a real news headline to it from a newspaper website:

    hive> select catDoc("8 die as SUV falls into river while crossing bridge in Ghazipur");
    OK
    N

The output is N, which means this is indeed a headline about a road accident incident.
This is reasonably good, so now let us run this function for all the headlines:

    hive> select headline, catDoc(*) from headlines;
    OK
    China dispels hopes of early breakthrough on NSG, sticks to its guns on Azhar - The Hindu   r
    Pampore attack: Militants holed up inside govt building; combing operations intensify - Firstpost   r
    Akhilesh Yadav Backs Rahul Gandhi's 'Dalali' Remark - NDTV   r
    PMO maintains no data about petitions personally read by PM Narendra Modi - Economic Times   n
    Mobile Internet Services Suspended In Protest-Hit Nashik - NDTV   n
    Pakistan, India using us to play politics: Former Baloch CM - Times of India   r
    CBI arrests Central Excise superintendent for taking bribe - Economic Times   n
    Be extra vigilant during festivals: Centre's advisory to states - Times of India   r
    CPI-M worker killed in Kerala - Business Standard   n
    Burqa-clad VHP activist thrashed for sneaking into Muslim women gathering - The Hindu   r
    Time taken: 0.121 seconds, Fetched: 10 row(s)

You can see that our headline detection function works and outputs r or n for each headline. In the above example, we see many false positives where a headline has been incorrectly identified as a road accident. Better training of our model can improve the quality of the results.

Further reading

The book Hadoop Blueprints covers several case studies where we apply Hadoop, HDFS, data ingestion tools such as Flume and Sqoop, query and visualization tools such as Hive and Zeppelin, and machine learning tools such as BigML and Spark to build solutions. You will discover, for example, how to build a fraud detection system or a data lake using Hadoop.

Summary

In this article we learned how to build a text analytics system that detects specific events in random news headlines, and saw how Hadoop, HDFS, and the other tools in the stack fit together along the way.


A simple content pipeline with Make

Ryan Roden-Corrent
08 Dec 2016
5 min read
Many game engines have the concept of a content pipeline. Your project includes a collection of assets like images, sounds, and music. These may be stored in one format that you use for development but translated into another format that gets packaged along with the game. The content pipeline is responsible for this translation.

If you are developing a small game, the overhead of a large-scale game engine may be more than what you want to deal with. For those who prefer a minimalistic work environment, a relatively simple Makefile can serve as your content pipeline. It took a few game projects for me to set up a pipeline that I was happy with, and looking back, I really wish I had a post like this to get me started. I'm hoping this will be specific enough to get you started but generic enough to be adaptable to your needs!

The setup

Suppose you are making a game and you use Aseprite to create pixel art and LMMS to compose music. Your game's file structure would look like this:

    - src/
      - ... source code ...
    - content/
      - song1.mmpz
      - song2.mmpz
      - ...
      - image1.ase
      - image2.ase
      - ...
    - bin/
      - game
      - song1.ogg
      - song2.ogg
      - ...
      - image1.png
      - image2.png
      - ...

src contains your source code—the language is irrelevant for this discussion. content contains the work-in-progress art and music for your game, saved in the source formats for Aseprite and LMMS (.ase and .mmpz, respectively). The bin folder represents the actual game "package"—the thing you would distribute to those who want to play your game. bin/game represents the executable built from the source files. bin/ also contains playable .ogg files that are exported from the corresponding .mmpz files. Similarly, bin contains .png files that are built from the corresponding .ase files. We want to automate the process of exporting the content files into their game-ready format.

The Makefile

I'll start by showing the example Makefile and then explain how it works:

    CONTENT_DIR = content
    BIN_DIR = bin

    IMAGE_FILES := $(wildcard $(CONTENT_DIR)/*.ase)
    MUSIC_FILES := $(wildcard $(CONTENT_DIR)/*.mmpz)

    all: code music art

    code: bin_dir
        # build code here ...

    bin_dir:
        @mkdir -p $(BIN_DIR)

    art: bin_dir $(IMAGE_FILES:$(CONTENT_DIR)/%.ase=$(BIN_DIR)/%.png)

    $(BIN_DIR)/%.png : $(CONTENT_DIR)/%.ase
        @echo building image $*
        @aseprite --batch --sheet $(BIN_DIR)/$*.png $(CONTENT_DIR)/$*.ase --data /dev/null

    music: bin_dir $(MUSIC_FILES:$(CONTENT_DIR)/%.mmpz=$(BIN_DIR)/%.ogg)

    $(BIN_DIR)/%.ogg : $(CONTENT_DIR)/%.mmpz
        @echo building song $*
        lmms -r $(CONTENT_DIR)/$*.mmpz -f ogg -b 64 -o $(BIN_DIR)/$*.ogg

    clean:
        $(RM) -r $(BIN_DIR)

The first rule (all) will be run when you just type make. This depends on code, music, and art. I won't get into the specifics of code, as that will differ depending on the language you use. Whatever the code is, it should build your source code into an executable that gets placed in the bin directory. You can see that code, art, and music all depend on bin_dir, which ensures that the bin folder exists before we try to build anything.

Let's take a look at how the art rule works. At the top of the file, we define IMAGE_FILES := $(wildcard $(CONTENT_DIR)/*.ase). This uses a wildcard search to collect the names of all the .ase files in our content directory. The expression $(IMAGE_FILES:$(CONTENT_DIR)/%.ase=$(BIN_DIR)/%.png) says that for every .ase file in the content directory, we want a corresponding .png file in bin (a concrete expansion is sketched below).
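To make that substitution concrete, here is a hypothetical expansion with two assumed file names (player.ase and tileset.ase are mine, not from the post):

```make
# If content/ contains player.ase and tileset.ase, then:
#   IMAGE_FILES                                          = content/player.ase content/tileset.ase
#   $(IMAGE_FILES:$(CONTENT_DIR)/%.ase=$(BIN_DIR)/%.png) = bin/player.png bin/tileset.png
# so the art rule effectively reads:
art: bin_dir bin/player.png bin/tileset.png
```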
The rule below it provides the recipe for building a single .png from a single .ase:

    $(BIN_DIR)/%.png : $(CONTENT_DIR)/%.ase
        @echo building image $*
        @aseprite --batch --sheet $(BIN_DIR)/$*.png $(CONTENT_DIR)/$*.ase --data /dev/null

That is, for every .png file we want in bin, we need to find a matching .ase file in content and invoke the given aseprite command on it. The music rule works pretty much the same way, but for .mmpz and .ogg files instead. Now you can run make music to build music files, make art to build art files, or just make to build everything. As all the resulting content ends up in bin, the clean rule just removes the bin directory.

Advantages

You don't have to remember to export content every time you work on it. Without a system like this, you would typically have to save whatever you are working on to a source file (e.g. a .mmpz file for LMMS) and export it to the output format (e.g. .ogg). This is tedious, and the second part is easy to forget.

If you are using a version control system (and you should be!), it doesn't have to track the bin directory, as it can be generated just by running make. For git, this means you can put bin/ in .gitignore, which is a huge advantage as git doesn't handle large binary files well.

It is relatively easy to create a distributable package for your game. Just run make and compress the bin directory.

Summary

I hope this illuminated how a process that is typically wrapped up in the complexity of a large-scale game engine can be made quite simple. While I used LMMS and Aseprite as specific examples, this method can be easily adapted to any content-creation programs that have a command-line tool you can use to export files.

About the author

Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor to the free/open-source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work and Creative Commons art online.


Build a Chatbot

Packt
07 Dec 2016
23 min read
In this article written by Alexander T. Combs, author of the book Python Machine Learning Blueprints, we are going to learn how to construct a chatbot from scratch. Along the way, we'll learn more about the history of the field and its future prospects. We'll cover the following topics in this article: the Turing test, the history of chatbots, the design of chatbots, and building a chatbot.

The Turing test

In 1950, Alan Turing posed the question, "Can machines think?" in one of his more famous papers. Being the practical genius that he was, he decided not to tackle this question head on but to pose it in the framework of the "problem of other minds" instead. This problem poses the question, "How do we truly know that other people have minds like our own?" As we can only observe their behavior and not the inner workings of their minds, we must take it on faith that they are like us. Turing proposed that if a machine behaved as if it was intelligent, then in fact, we should view it as such. This, in a nutshell, is the Turing test. Passing this test means convincing humans that a machine is a fellow human.

You may wonder at this point whether a program has ever passed the Turing test successfully. The answer is that there really is no official Turing test. The closest thing to an official event is the Loebner Prize. This is a contest held annually that awards modest prizes to chatbots that appear to be the most convincingly human to a panel of judges. If this sounds rigorous and academic, it has been described as anything but. Loebner's sponsorship of the contest is said to have been a product of his total disdain for labor of any kind. Hopefully, by this point, you have realized that any claims about "passing the Turing test" are more spectacle than actual science. However, despite this, Turing's original concept is a useful construct.

The history of chatbots

I still remember the day my father sat me down in front of his IBM XT with its black and green flickering monitor and asked me to talk to "it". The conversation went something like the following:

    >Hello? How are you feeling today?
    >Hi. Who is this?
    >Hi there... how are you today?
    >I'm fine. You didn't tell me who you are.
    >We should be discussing you, not me.
    >We should? Why?
    >Let's change focus a bit... Tell me about your family.
    >You are impossible to have a conversation with.
    >Can you elaborate on that?
    >You are killing me.
    >Perhaps you would like me to be killing you.
    >Ok, you are freaking me out.
    >Perhaps you're really talking about yourself?
    >What?? That makes no sense.
    >How would an answer to that help you?
    >END
    >Very Interesting.
    >QUIT
    >Thank you for talking with me.
    >#$###$#@@....

The "it" that I talked to was a program called ELIZA. Written in the mid-1960s at MIT's Artificial Intelligence Lab by Joseph Weizenbaum, ELIZA was created to mimic the responses of a Rogerian psychotherapist. Though nearly comical when examined in any depth, the program was capable of convincing some users that they were chatting with an actual human. This was a remarkable feat considering it was a scant 200 lines of code that used randomization and regular expressions to parrot back responses. Even today, this simple program remains a staple of popular culture. If you ask Siri who ELIZA is, she will tell you she is a friend and brilliant psychiatrist.

If ELIZA was an early example of chatbots, what have we seen since then?
In recent years, there has been an explosion of new chatbots; the most notable of these is Cleverbot. Cleverbot was released to the world via the web in 1997. Since then, this bot has racked up hundreds of millions of conversations. Unlike early chatbots, Cleverbot (as the name suggests) appears to become more intelligent with each conversation. Though the exact details of the workings of the algorithm are difficult to find, it is said to work by recording all conversations in a database and finding the most appropriate response by identifying the most similar questions and responses in the database. I made up a nonsensical question in the following screenshot, and you can see that it found something similar to the object of my question in terms of a string match. I persisted, and again I got something... similar? You'll also notice that topics can persist across the conversation. In response to my answer, I was asked to go into more detail and justify my answer. This is one of the things that appears to make Cleverbot, well, clever.

While chatbots that learn from humans can be quite amusing, they can also have a darker side. Just this past year, Microsoft released a chatbot named Tay on Twitter. People were invited to ask questions of Tay, and Tay would respond in accordance with her "personality". Microsoft had apparently programmed the bot to appear to be a 19-year-old American girl. She was intended to be your virtual "bestie"; the only problem was she started sounding like she would rather hang with the Nazi youth than you. As a result of these unbelievably inflammatory tweets, Microsoft was forced to pull Tay off Twitter and issue an apology:

"As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values." -March 25, 2016 Official Microsoft Blog

Clearly, brands that want to release chatbots into the wild in the future should take a lesson from this debacle. There is no doubt that brands are embracing chatbots. Everyone from Facebook to Taco Bell is getting in on the game. Witness the TacoBot: yes, this is a real thing, and despite stumbles such as Tay, there is a good chance the future of UI looks a lot like TacoBot. One last example might even help explain why.

Quartz recently launched an app that turns news into a conversation. Rather than laying out the day's stories as a flat list, you are engaged in a chat as if you were getting news from a friend. David Gasca, a PM at Twitter, describes his experience using the app in a post on Medium. He describes how the conversational nature invoked feelings that were normally only triggered in human relationships. This is his take on how he felt when he encountered an ad in the app: "Unlike a simple display ad, in a conversational relationship with my app, I feel like I owe something to it: I want to click. At the most subconscious level, I feel the need to reciprocate and not let the app down: The app has given me this content. It's been very nice so far and I enjoyed the GIFs.
I should probably click since it's asking nicely."

If this experience is universal—and I expect that it is—this could be the next big thing in advertising, and have no doubt that advertising profits will drive UI design:

"The more the bot acts like a human, the more it will be treated like a human." -Mat Webb, technologist and co-author of Mind Hacks

At this point, you are probably dying to know how these things work, so let's get on with it!

The design of chatbots

The original ELIZA application was two-hundred-odd lines of code. The Python NLTK implementation is similarly short. An excerpt can be seen at the following link on NLTK's website (http://www.nltk.org/_modules/nltk/chat/eliza.html). I have also reproduced an excerpt below:

    # Natural Language Toolkit: Eliza
    #
    # Copyright (C) 2001-2016 NLTK Project
    # Authors: Steven Bird <stevenbird1@gmail.com>
    #          Edward Loper <edloper@gmail.com>
    # URL: <http://nltk.org/>
    # For license information, see LICENSE.TXT

    # Based on an Eliza implementation by Joe Strout <joe@strout.net>,
    # Jeff Epler <jepler@inetnebr.com> and Jez Higgins <mailto:jez@jezuk.co.uk>.

    # a translation table used to convert things you say into things the
    # computer says back, e.g. "I am" --> "you are"

    from __future__ import print_function

    # a table of response pairs, where each pair consists of a
    # regular expression, and a list of possible responses,
    # with group-macros labelled as %1, %2.

    pairs = (
        (r'I need (.*)',
         ("Why do you need %1?",
          "Would it really help you to get %1?",
          "Are you sure you need %1?")),
        (r'Why don\'t you (.*)',
         ("Do you really think I don't %1?",
          "Perhaps eventually I will %1.",
          "Do you really want me to %1?")),
        [snip]
        (r'(.*)\?',
         ("Why do you ask that?",
          "Please consider whether you can answer your own question.",
          "Perhaps the answer lies within yourself?",
          "Why don't you tell me?")),
        (r'quit',
         ("Thank you for talking with me.",
          "Good-bye.",
          "Thank you, that will be $150. Have a good day!")),
        (r'(.*)',
         ("Please tell me more.",
          "Let's change focus a bit... Tell me about your family.",
          "Can you elaborate on that?",
          "Why do you say that %1?",
          "I see.",
          "Very interesting.",
          "%1.",
          "I see. And what does that tell you?",
          "How does that make you feel?",
          "How do you feel when you say that?"))
    )

    eliza_chatbot = Chat(pairs, reflections)

    def eliza_chat():
        print("Therapist\n---------")
        print("Talk to the program by typing in plain English, using normal upper-")
        print('and lower-case letters and punctuation. Enter "quit" when done.')
        print('='*72)
        print("Hello. How are you feeling today?")
        eliza_chatbot.converse()

    def demo():
        eliza_chat()

    if __name__ == "__main__":
        demo()

As you can see from this code, input text was parsed and then matched against a series of regular expressions. Once the input was matched, a randomized response (that sometimes echoed back a portion of the input) was returned. So, something such as I need a taco would trigger a response of Would it really help you to get a taco? Obviously, the answer is yes, and fortunately, we have advanced to the point that technology can provide one to you (bless you, TacoBot), but this was still in the early days. Shockingly, some people did actually believe ELIZA was a real human.

However, what about more advanced bots? How are they constructed? Surprisingly, most of the chatbots that you're likely to encounter don't even use machine learning; they use what's known as retrieval-based models. This means responses are predefined according to the question and the context.
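As a rough illustration of that idea (a toy of my own, not code from the article or from any real bot), the simplest retrieval-based "model" is little more than a lookup from a normalized question to a canned answer:

```python
# Toy retrieval-based responder: every response is predefined.
RESPONSES = {
    "HELLO": "Hi there... how are you today?",
    "WHO ARE YOU": "We should be discussing you, not me.",
}
DEFAULT = "Please tell me more."

def respond(user_input: str) -> str:
    key = user_input.upper().strip(" ?!.")   # crude normalization
    return RESPONSES.get(key, DEFAULT)

print(respond("hello!"))        # -> "Hi there... how are you today?"
print(respond("Who are you?"))  # -> "We should be discussing you, not me."
```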
The most common architecture for these bots is something called Artificial Intelligence Markup Language (AIML). AIML is an XML-based schema for representing how the bot should react to the user's input. It's really just a more advanced version of how ELIZA works.

Let's take a look at how responses are generated using AIML. First, all inputs are preprocessed to normalize them. This means that when you input "Waaazzup???", it is mapped to "WHAT IS UP". This preprocessing step funnels the myriad ways of saying the same thing down into one input that can be run against a single rule. Punctuation and other extraneous input are removed as well at this point. Once this is complete, the input is matched against the appropriate rule. The following is a sample template:

    <category>
      <pattern>WHAT IS UP</pattern>
      <template>The sky, duh. Pfft. Humans...</template>
    </category>

This is the basic setup, but you can also layer in wildcards, randomization, and prioritization schemes. For example, the following pattern uses wildcard matching:

    <category>
      <pattern>* FOR ME</pattern>
      <template>I'm a bot. I don't <star/>. Ever.</template>
    </category>

Here, the * wildcard matches one or more words before FOR ME and then repeats these back in the output template. If the user were to type in Dance for me!, the response would be, I'm a bot. I don't dance. Ever.

As you can see, these rules don't make for anything that approximates any type of real intelligence, but there are a few tricks that strengthen the illusion. One of the better ones is the ability to generate responses conditioned on a topic. For example, here is a rule that invokes a topic:

    <category>
      <pattern>I LIKE TURTLES</pattern>
      <template>I feel like this whole <set name="topic">turtles</set> thing could be a problem. What do you like about them?</template>
    </category>

Once the topic is set, the rules specific to that context can be matched:

    <topic name="turtles">
      <category>
        <pattern>* SHELL IS *</pattern>
        <template>I dislike turtles primarily because of their shells. What other creepy things do you like about turtles?</template>
      </category>
      <category>
        <pattern>* HIDE *</pattern>
        <template>I wish like a turtle that I could hide from this conversation.</template>
      </category>
    </topic>

Let's see what this interaction looks like:

    >I like turtles!
    >I feel like this whole turtle thing could be a problem. What do you like about them?
    >I like how they hide in their shell
    >I wish like a turtle I could hide from this conversation.

You can see that the continuity across the conversation adds a measure of realism.

You probably think that this can't be state of the art in this age of deep learning, and you're right. While most bots are rule-based, the next generation of chatbots is emerging, and they are based on neural networks. In 2015, Oriol Vinyals and Quoc Le of Google published a paper (http://arxiv.org/pdf/1506.05869v1.pdf) that described the construction of a neural network based on sequence-to-sequence models. This type of model maps an input sequence, such as "ABC", to an output sequence, such as "XYZ". These inputs and outputs can be translations from one language to another, for example. In the case of their work, however, the training data was not language translation but rather tech support transcripts and movie dialog. While the results from both models are interesting, it was the interactions based on the movie model that stole the headlines.
The following are sample interactions taken from the paper. None of this was explicitly encoded by humans or present in the training set as asked, and yet looking at it is frighteningly like speaking with a human. However, let's see more. Note that the model responds with what appears to be knowledge of gender (he, she), of place (England), and of career (player). Even questions of meaning, ethics, and morality are fair game, and the conversation continues from there. If this transcript doesn't give you a slight chill of fear for the future, there's a chance you may already be some sort of AI. I wholeheartedly recommend reading the entire paper. It isn't overly technical, and it will definitely give you a glimpse of where this technology is headed.

We talked a lot about the history, types, and design of chatbots, but let's now move on to building our own!

Building a chatbot

Now, having seen what is possible in terms of chatbots, you most likely want to build the best, most state-of-the-art, Google-level bot out there, right? Well, just put that out of your mind right now, because we will do just the opposite! We will build the best, most awful bot ever!

Let me tell you why. Building a chatbot comparable to what Google built takes some serious hardware and time. You aren't going to whip up a model on your MacBook Pro that takes anything less than a month or two to run with any type of real training set. This means that you will have to rent some time on an AWS box, and not just any box. This box will need to have some heavy-duty specs and preferably be GPU-enabled. You are more than welcome to attempt such a thing. However, if your goal is just to build something very cool and engaging, I have you covered here. I should also warn you in advance: although Cleverbot is no Tay, the conversations can get a bit salty. If you are easily offended, you may want to find a different training set.

Ok, let's get started! First, as always, we need training data. Again, as always, this is the most challenging step in the process. Fortunately, I have come across an amazing repository of conversational data. The notsocleverbot.com site has people submit the most absurd conversations they have with Cleverbot. How can you ask for a better training set? Take a look at a sample conversation between Cleverbot and a user on the site; this is where we'll begin.

We'll need to download the transcripts from the site to get started. You'll just need to paste the link into the form on the page. The format will be like the following: http://www.notsocleverbot.com/index.php?page=1. Once this is submitted, the site will process the request and return a page. From here, if everything looks right, click on the pink Done button near the top right. The site will process the page and then bring you to the next page. Next, click on the Show URL Generator button in the middle. You can then set the range of page numbers that you'd like to download, for example, 1-20, by 1 step. Obviously, the more pages you capture, the better this model will be. However, remember that you are taxing the server, so please be considerate. Once this is done, click on Add to list, hit Return in the text box, and you should be able to click on Save. It will begin running, and when it is complete, you will be able to download the data as a CSV file.

Next, we'll use our Jupyter notebook to examine and process the data. We'll first import pandas and the Python regular expressions library, re.
We will also set the option in pandas to widen our column width so that we can see the data better:

    import pandas as pd
    import re
    pd.set_option('display.max_colwidth', 200)

Now, we'll load in our data:

    df = pd.read_csv('/Users/alexcombs/Downloads/nscb.csv')
    df

The preceding code will result in the following output. As we're only interested in the first column, the conversation data, we'll parse this out:

    convo = df.iloc[:,0]
    convo

You should be able to make out that we have interactions between User and Cleverbot, and that either can initiate the conversation. To get the data into the format we need, we'll have to parse it into question and response pairs. We aren't necessarily concerned with who says what, but we are concerned with matching up each response to each question. You'll see why in a bit. Let's now perform a bit of regular expression magic on the text:

    clist = []

    def qa_pairs(x):
        cpairs = re.findall(": (.*?)(?:$|\n)", x)
        clist.extend(list(zip(cpairs, cpairs[1:])))

    convo.map(qa_pairs);

    convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
    convo_frame.columns = ['q', 'a']

Okay, there's a lot of code there. What just happened? We first created a list to hold our question and response tuples. We then passed our conversations through a function to split them into these pairs using regular expressions. Finally, we set it all into a pandas DataFrame with columns labelled q and a.

We will now apply a bit of algorithmic magic to match up the closest question to the one a user inputs:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    vectorizer = TfidfVectorizer(ngram_range=(1,3))
    vec = vectorizer.fit_transform(convo_frame['q'])

What we did in the preceding code was to import the TfidfVectorizer and cosine similarity libraries. We then used our training data to create a tf-idf matrix. We can now use this to transform our own new questions and measure their similarity to existing questions in our training set. We covered cosine similarity and tf-idf algorithms in detail earlier in the book, so flip back there if you want to understand how these work under the hood.

Let's now get our similarity scores:

    my_q = vectorizer.transform(['Hi. My name is Alex.'])
    cs = cosine_similarity(my_q, vec)
    rs = pd.Series(cs[0]).sort_values(ascending=0)
    top5 = rs.iloc[0:5]
    top5

What are we looking at here? This is the cosine similarity between the question I asked and the top five closest questions. To the left is the index and on the right is the cosine similarity. Let's take a look at these:

    convo_frame.iloc[top5.index]['q']

As you can see, nothing is exactly the same, but there are definitely some similarities. Let's now take a look at the response:

    rsi = rs.index[0]
    rsi
    convo_frame.iloc[rsi]['a']

Okay, so our bot seems to have an attitude already. Let's push further.
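Before wrapping this lookup into a reusable function, here is a self-contained toy condensation of the same retrieval steps. The three question/answer pairs are made up by me so the snippet runs without the scraped CSV; everything else mirrors the approach above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for convo_frame['q'] and convo_frame['a']
questions = ["Hi there", "What is your name?", "Do you like turtles?"]
answers   = ["Hello.",   "I'm a bot.",         "I prefer tortoises."]

vectorizer = TfidfVectorizer(ngram_range=(1, 3))
vec = vectorizer.fit_transform(questions)          # tf-idf matrix of known questions

my_q = vectorizer.transform(["what's your name"])  # vectorize the new question
scores = cosine_similarity(my_q, vec)[0]           # similarity to every known question
print(answers[scores.argmax()])                    # answer of the closest match -> "I'm a bot."
```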
We'll create a handy function so that we can test a number of statements easily:

    def get_response(q):
        my_q = vectorizer.transform([q])
        cs = cosine_similarity(my_q, vec)
        rs = pd.Series(cs[0]).sort_values(ascending=0)
        rsi = rs.index[0]
        return convo_frame.iloc[rsi]['a']

    get_response('Yes, I am clearly more clever than you will ever be!')

We have clearly created a monster, so we'll continue:

    get_response('You are a stupid machine. Why must I prove anything to you?')

I'm enjoying this. Let's keep rolling with it:

    get_response('My spirit animal is a menacing cat. What is yours?')

To which I responded:

    get_response("I mean I didn't actually name it.")

Continuing:

    get_response('Do you have a name suggestion?')

To which I respond:

    get_response('I think it might be a bit aggressive for a kitten')

I attempt to calm the situation:

    get_response('No need to involve the police.')

And finally:

    get_response('And I you, Cleverbot')

Remarkably, this may be one of the best conversations I've had in a while, bot or no bot.

Now that we have created this cake-based intelligence, let's set it up so that we can actually chat with it via text message. We'll need a few things to make this work. The first is a twilio account. They will give you a free account that lets you send and receive text messages. Go to http://www.twilio.com and click to sign up for a free developer API key. You'll set up some login credentials, and they will text your phone to confirm your number. Once this is set up, you'll be able to find the details in their Quickstart documentation. Make sure that you select Python from the drop-down menu in the upper left-hand corner.

Sending messages from Python code is a breeze, but you will need to request a twilio number. This is the number that you will use to send and receive messages in your code. The receiving bit is a little more complicated because it requires you to have a webserver running. The documentation is succinct, so you shouldn't have too hard a time getting it set up. You will need to paste a public-facing Flask server's URL under the area where you manage your twilio numbers. Just click on the number and it will bring you to the spot to paste in your URL. Once this is all set up, you will just need to make sure that you have your Flask web server up and running.
I have condensed all the code here for you to use in your Flask app:

    from flask import Flask, request, redirect
    import twilio.twiml
    import pandas as pd
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    app = Flask(__name__)

    PATH_TO_CSV = 'your/path/here.csv'
    df = pd.read_csv(PATH_TO_CSV)
    convo = df.iloc[:,0]
    clist = []

    def qa_pairs(x):
        cpairs = re.findall(": (.*?)(?:$|\n)", x)
        clist.extend(list(zip(cpairs, cpairs[1:])))

    convo.map(qa_pairs);
    convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
    convo_frame.columns = ['q', 'a']

    vectorizer = TfidfVectorizer(ngram_range=(1,3))
    vec = vectorizer.fit_transform(convo_frame['q'])

    @app.route("/", methods=['GET', 'POST'])
    def get_response():
        input_str = request.values.get('Body')

        def get_response(q):
            my_q = vectorizer.transform([input_str])
            cs = cosine_similarity(my_q, vec)
            rs = pd.Series(cs[0]).sort_values(ascending=0)
            rsi = rs.index[0]
            return convo_frame.iloc[rsi]['a']

        resp = twilio.twiml.Response()
        if input_str:
            resp.message(get_response(input_str))
            return str(resp)
        else:
            resp.message('Something bad happened here.')
            return str(resp)

It looks like there is a lot going on, but essentially we use the same code that we used before, only now we grab the POST data that twilio sends—the text body specifically—rather than the data we hand-entered before into our get_response function. If all goes as planned, you should have your very own weirdo bestie that you can text anytime, and what could be better than that!

Summary

In this article, we took a full tour of the chatbot landscape. It is clear that we are just on the cusp of an explosion of these sorts of applications. The Conversational UI revolution is just about to begin. Hopefully, this article has inspired you to create your own bot, but if not, at least perhaps you have a much richer understanding of how these applications work and how they will shape our future. I'll let the app say the final words:

    get_response("Say goodbye, Clevercake")


An Architect’s Critical Competencies

Packt
07 Dec 2016
6 min read
In this article, Sameer Paradkar, the author of the book Cracking the IT Architect Interview, consolidates information into a single reference guide that will save time prior to interviews and can serve as a ready reference for important topics that need to be revised beforehand.

A good architect is one who leads by example, and without a good understanding of the technology stack and business domain, an architect is not equipped to deliver the prerequisite outcomes for the enterprise. The team members typically have deep-dive expertise in specific technology areas, but they will lack confidence in an architect who does not have competencies in the domain or technology. The architect is the bridge between the technology and the business team, and hence he/she must understand all aspects of the technology stack to be able to liaise with the business. The architect must be conversant in the business domain in order to drive the team and all the stakeholders toward a common organizational goal. An architect might not be busy all the time, but he/she leverages decades of expertise to solve problems in and monitor the organizational IT landscape, making quick decisions during various stages of the SDLC. The project manager handles the people management aspects, freeing the architect of the hassles of operational tasks. An excellent architect is very much a hands-on person and should be able to mentor members of the design and implementation teams. He should be knowledgeable and competent enough to handle any complex situation.

An architect's success in interviews does not come easily. One has to spend hours prior to each interview wading through various books and references in preparation. The motivation for this book was to consolidate all this information into a single reference guide that saves time prior to interviews and can be a ready reference for important topics that need to be revised before them.

Leadership: The architect has to make decisions and take ownership, and a lot of the time the right choice is not simple. The architect needs to find a solution that works, and it may not always be the best alternative on technical merits, but it should work best in the given situation. To take such decisions, the architect must have an excellent understanding of the cultural and political environment within the organization and should have the ability to generate buy-in from the key stakeholders.

Strategic mindset: This is the ability of an architect to look at things from a 10,000-foot elevation, at a strategic level, isolating the operational nuances. This requires creating an organizational vision, such as making the product a market leader, and then dividing it into achievable objectives to make it simpler for all the stakeholders to achieve these results. Architects are often tasked with finding the alternative solution that provides the best ROI to the organization and creating a business case for getting sponsorship. Architects often work with top-level executives such as the CEO, CTO, and CIO, where it is necessary to create and present strategic architectures and roadmaps for the organization.

Domain knowledge: It is critical to understand the problem domain before creating and defining a solution. It is also mandatory to be knowledgeable about domain-specific requirements, such as legal and regulatory requirements.
A sound domain understanding is not only essential for understanding the requirements and evangelizing the target state, but it also helps in articulating the right decisions. The architect must be able to speak the business vocabulary and draw on experiences from the domain to have meaningful discussions with the business stakeholders.

Technical acumen: This is a key competency, as architects are hired for their technical expertise and acumen. The architect should have a breadth of expertise in technologies and platforms in order to understand their strengths and weaknesses and make the right decisions. Even for technical architect roles, it is mandatory to have skills in multiple technology stacks and frameworks and to be knowledgeable about technology trends.

Architects' growth paths

The software architecture discipline has matured since its inception. The architecting practice is no longer reserved for veteran practitioners; the core concepts and principles of the discipline can now be acquired through training programs, books, and college curricula. The discipline is turning from an art into a competency accessible through training and experience. A significant number of methodologies, frameworks, and processes have been developed to support various perspectives of the architecture practice.

A software architect is responsible for creating the most appropriate architecture for the enterprise or system to suit the business goals, fulfill user requirements, and achieve the desired business outcome. A software architect's career starts with a rigorous education in computer science. An architect is responsible for making the hardest decisions on software architecture and design, and hence must have a sound understanding of the concepts, patterns, and principles independent of any programming language.

There are a number of architect flavors: enterprise architect, business architect, business strategy architect, solution architect, infrastructure architect, security architect, integration architect, technical architect, systems architect, and software designer. Other variations exist as well, and the different roles can be compared on the basis of breadth of knowledge versus depth of expertise.

Finally, for an architect, learning must never stop. Continuous participation in the communities and learning about new technologies, methodologies, and frameworks are mandatory for value creation and to stay ahead of the demand curve.

Summary

Individual passion is the primary driving factor that determines the growth path of an architect. For instance, a security architect who is passionate about the domain of IT security and has developed an immensely valuable body of knowledge over time should ideally not be coerced into a shift to a solution architect role and eventually a governance role.


Gathering and analyzing stock market data with R Part 1 of 2

Erik Kappelman
07 Dec 2016
6 min read
This two-part blog series walks through a set of R scripts used to collect and analyze data from the New York Stock Exchange. Collecting data in real time from the stock market can be valuable in at least two ways. First, historical intraday trading data is valuable in itself; there are many companies around the Web that sell it. This data can also be used to make quick investment decisions: investment strategies like day trading and short selling rely on being able to ride waves in the stock market that might only last a few hours or minutes. So, if a person could collect daily trading data for long enough, this data would eventually become valuable and could be sold.

While almost any programming language can be used to collect data from the Internet, using R to collect stock market data is somewhat more convenient if R will also be used to analyze the data and make predictions with it. Additionally, I find R to be an intuitive scripting language that can be used for a wide range of solutions.

I will first discuss how to create a script that can collect intraday trading data. I will then discuss using R to collect historical daily trading data, as well as analyzing this data and making predictions from it. There is a lot of ground to cover, so this post is split into two parts. All of the code and accompanying files can be found in this repository. So, let's get started. If you don't have the R binaries installed, go ahead and get them, as they are going to be a must for following along. Additionally, I would highly recommend using RStudio for development projects centered around R. Although there are absolutely flaws with RStudio, in my opinion, it is the best choice.

    library(httr)
    library(jsonlite)
    source('DataFetch.R')

The above three lines load the required packages and source the file containing the functions that actually collect the data. Libraries are a common feature in R. Before you try to do something too complex, make sure that you check whether there is an existing library that already performs the operation. The R community is extensive and thriving, which makes using R for development that much better.

    Sys.sleep(55*60)

    frame.list <- list()

    ticker <- function(rest.time){
      ptm <- proc.time()
      df <- data.frame(get.data(), Date = date())
      timer.time <- proc.time() - ptm
      Sys.sleep(as.numeric(rest.time - timer.time[3]))
      return(list(df))
    }

The next lines of code pause the script until it is time for the stock market to open. I start this script before I go to work in the morning, so 55*60 is about how many seconds pass between when I leave for work and when the market opens. We then initialize an empty list using the next line of code. If you are new to R, you will notice the use of an arrow instead of an equals sign; although the equals sign does work, many people, including me, use the arrow. This list is going to hold the dataframes containing the stock data that is created throughout the day. We then define the ticker function, which is used to repeatedly call the set of functions that retrieve the data and then return the data in the form of a dataframe.

    for(i in 1:80){
      frame.list <- c(suppressWarnings(ticker(5*30)), frame.list)
    }

    save(frame.list, file="RealTimeData.rda")

The ticker function takes the number of seconds to wait between queries to the market as its only argument. This number is modified based on the length of time the query takes, which ensures that the timing of the data points is consistent. The ticker function is called eighty times in five minute intervals, and the results are appended onto the list of dataframes. After the for-loop is completed, the data is saved in R's native format.
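As a hypothetical aside that is not part of the original post, once RealTimeData.rda exists it can be reloaded and the list of dataframes flattened into a single table for analysis:

```r
# Reload the saved list and stack the per-interval dataframes into one table.
load("RealTimeData.rda")                 # restores frame.list
all.prices <- do.call(rbind, frame.list)
head(all.prices)                         # columns: Symbol, Price, Date
```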
The ticker function is called eighty times in five-minute intervals. The results are appended to the list of dataframes. After the for-loop is completed, the data is saved in the R format. Now let's look into the functions that fetch the data, located in DataFetch.R. R code can become pretty verbose, so it is good to get in the habit of segmenting your code into multiple files. The functions used to fetch data are displayed below. We will start by discussing the parse.data function because it is the workhorse, and the get.data function is more of a controller.

parse.data <- function(symbols, range){
  base.URL <- "http://finance.google.com/finance/info?client=ig&q="
  start <- min(range)
  end <- max(range)
  symbol.string <- paste0("NYSE:", symbols[start], ",")
  for(i in (start+1):end){
    temp <- paste0("NYSE:", symbols[i], ",")
    symbol.string <- paste(symbol.string, temp, sep="")
  }
  URL <- paste(base.URL, symbol.string, sep="")
  data <- GET(URL)
  now <- date()
  bin <- content(data, "raw")
  writeBin(bin, "data.txt")
  conn <- file("data.txt", open="r")
  linn <- readLines(conn)
  jstring <- "["
  for(i in 3:length(linn)){
    jstring <- paste0(jstring, linn[i])
  }
  close(conn)
  file.remove("data.txt")
  obj <- fromJSON(jstring)
  return(data.frame(Symbol=obj$t, Price=as.numeric(obj$l)))
}

The first function takes a list of stock symbols and the list indices of the symbols that are to be queried. The function then builds a string in the proper format to be used to query Google Finance for the latest price information on the chosen symbols. The query is performed using the 'httr' R package, a package used to perform HTTP tasks. The response from the web request is shuttled through a few formats in order to get the data into an easy-to-use format. The function then returns a dataframe containing the symbols and prices.

get.data <- function(){
  syms <- read.csv("NYSE.txt", header=2, sep="\t")
  sb <- grep("[A-Z]{4}|[A-Z]{3}", syms$Symbol, perl = F, value = T)
  result <- c()
  in.list <- list()
  list.seq <- seq(1, 2901, 100)
  for(i in 1:(length(list.seq)-1)){
    range <- list.seq[i]:list.seq[i+1]
    result <- rbind(result, parse.data(sb, range))
  }
  return(droplevels.data.frame(na.omit(result)))
}

The get.data function above is called by the ticker function. It serves as a controller on the parse.data function by calling for the prices in chunks so that the queries are small enough. It also reads the symbol list in from the "NYSE.txt" file, which is a simple list of stocks in the New York Stock Exchange and their symbols. The symbols are then put through a RegEx routine that eliminates symbols that do not follow the right format for Google Finance. Gathering intraday data from the stock market using R, or any language, is obviously somewhat of a pain; however, if properly executed, the results could be quite useful and valuable. I hope you read part two of this blog series, where we use R to gather and analyze historical stock market data. About the author Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.
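A quick follow-on sketch (my own addition, not part of the original script): once RealTimeData.rda has been written by the collection loop above, the list of per-query dataframes can be loaded back and stacked into a single dataframe for analysis. This assumes the object and column names used above (frame.list, Symbol, Price, Date); the "IBM" filter is only an example symbol.

load("RealTimeData.rda")                 # restores frame.list
intraday <- do.call(rbind, frame.list)   # one row per symbol per query
str(intraday)                            # columns: Symbol, Price, Date
mean(intraday$Price[intraday$Symbol == "IBM"])  # example: average intraday price for one symbol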

Define the Necessary Connections

Packt
02 Dec 2016
5 min read
In this article by Robert van Mölken and Phil Wilkins, the author of the book Implementing Oracle Integration Cloud Service, where we will see creating connections which is one of the core components of an integration we can easily navigate to the Designer Portal and start creating connections. (For more resources related to this topic, see here.) On the home page, click the Create link of the Connection tile as given in the following screenshot: Because we click on this link the Connections page is loaded, which lists of all created connections, a modal dialogue automatically opens on top of the list. This pop-up shows all the adapter types we can create. For our first integration we define two technology adapter connections, an inbound SOAP connection and an outbound REST connection. Inbound SOAP connection In the pop-up we can scroll down the list and find the SOAP adapter, but the modal dialogue also includes a search field. Just search on SOAP and the list will show the adapters matching the search criteria: Find your adapter by searching on the name or change the appearance from card to list view to show more adapters at ones. Click Select to open the New Connection page. Before we can setup any adapter specific configurations every creation starts with choosing a name and an optional description: Create the connection with the following details: Connection Name FlightAirlinesSOAP_Ch2 Identifier This will be proposed based on the connection name and there is no need to change unless you'd like an alternate name. It is usually the name in all CAPITALS and without spaces and has a max length of 32 characters. Connection Role Trigger The role chosen restricts the connection to be used only in selected role(s). Description This receives in Airline objects as a SOAP service. Click the Create button to accept the details. This will bring us to the specific adapter configuration page where we can add and modify the necessary properties. The one thing all the adapters have in common is the optional Email Address under Connection Administration. This email address is used to send notification to when problems or changes occur in the connection. A SOAP connection consists of three sections; Connection Properties, Security, and an optional Agent Group. On the right side of each section we can find a button to configure its properties.Let's configure each section using the following steps: Click the Configure Connectivity button. Instead of entering in an URL we are uploading the WSDL file. Check the box in the Upload File column. Click the newly shown Upload button. Upload the file ICSBook-Ch2-FlightAirlines-Source WSDL. Click OK to save the properties. Click the Configure Credentials button. In the pop-up that is shown we can configure the security credentials. We have the choice for Basic authentication, Username Password Token, or No Security Policy. Because we use it for our inbound connection we don't have to configure this. Select No Security Policy from the dropdown list. This removes the username and password fields. Click OK to save the properties. We leave the Agent Group section untouched. We can attach an Agent Group if we want to use it as an outbound connection to an on-premises web service. Click Test to check if the connection is working (otherwise it can't be used). For SOAP and REST it simply pings the given domain to check the connectivity, but others for example the Oracle SaaS adapters also authenticate and collect metadata. 
Click the Save button at the top of the page to persist our changes. Click Exit Connection to return to the list from where we started. Outbound REST connection Now that the inbound connection is created, we can create our REST adapter. Click the Create New Connection button to show the Create Connection pop-up again and select the REST adapter. Create the connection with the following details:

Connection Name: FlightAirlinesREST_Ch2
Identifier: This will be proposed based on the connection name
Connection Role: Invoke
Description: This returns the Airline objects as a REST/JSON service
Email Address: Your email address to use to send notifications to

Let's configure the connection properties using the following steps: Click the Configure Connectivity button. Select REST API Base URL for the Connection Type. Enter the URL where your Apiary mock is running on: http://private-xxxx-yourapidomain.apiary-mock.com. Click OK to save the values. Next, configure the security credentials using the following steps: Click the Configure Credentials button. Select No Security Policy for the Security Policy. This removes the username and password fields. Click the OK button to save our choice. Click Test at the top to check if the connection is working. Click the Save button at the top of the page to persist our changes. Click Exit Connection to return to the list from where we started. Troubleshooting If the test fails for one of these connections, check that the correct WSDL is used and that the connection URL for the REST adapter exists and is reachable. Summary In this article we looked at the process of creating and testing the necessary connections and the creation of the integration itself. We have seen an inbound SOAP connection and an outbound REST connection. In demonstrating the integration we have also seen how to use Apiary to document and mock our backend REST service. Resources for Article: Further resources on this subject: Getting Started with a Cloud-Only Scenario [article] Extending Oracle VM Management [article] Docker Hosts [article]
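As a small aid for the Troubleshooting note above (my own sketch, not part of the original article): before relying on the ICS Test button, you can check from a terminal that the Apiary mock base URL answers at all. The URL below is the placeholder from the article, and the /flights path is hypothetical; substitute your own mock domain and a resource path that exists in your API blueprint.

curl -i http://private-xxxx-yourapidomain.apiary-mock.com/flights
# -i prints the response headers; any HTTP response at all confirms the host
# resolves and is reachable, which is essentially what the ICS connection
# test verifies for a REST connection.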

Modelling a RPG in D

Ryan Roden-Corrent
02 Dec 2016
7 min read
In this post, I'll show off some of the cool features of a language called D in the context of creating a game, specifically an RPG. Character Stats For our RPG, let's say there are three categories of stats on every character: Attributes: An int value for each of the classic six (Strength, Dexterity, and so on). Skills: An int value for each of several skills (diplomacy, stealth, and so on). Resistance: An int value for each 'type' (physical, fire, and so on) of damage. In D, we can represent such a character like so:

struct Character {
  // attributes
  int strength;
  int dexterity;
  int constitution;
  int intellect;
  int wisdom;
  int charisma;

  // skills
  int stealth;
  int perception;
  int diplomacy;

  // resistances
  int resistPhysical;
  int resistFire;
  int resistWater;
  int resistAir;
  int resistEarth;
}

However, it would be nicer if we could have each category (attributes, skills, and resistances) represented as a single group of values. First, let's define some enums:

enum Attribute { strength, dexterity, constitution, intellect, wisdom, charisma }
enum Skill { stealth, perception, diplomacy }
enum Element { physical, fire, water, air, earth }

Now we want to map each of these enum members to a value for that particular attribute, skill, or resistance. One option is an associative array, which would look like this:

struct Character {
  int[Attribute] attributes;
  int[Skill] skills;
  int[Element] resistances;
}

int[Attribute] attributes declares that Character.attributes returns an int when indexed by an Attribute, like so:

if (hero.attributes[Attribute.dexterity] < 4) hero.trip();

However, associative arrays are heap-allocated and don't have a default value for each key. It seems like overkill for storing a small bundle of values. Another option is a static array. Static arrays are stack-allocated value types and will contain exactly the number of values that we need.

struct Character {
  int[6] attributes;
  int[3] skills;
  int[5] resistances;
}

Our enum values are backed by ints, so we can use them directly as indexes just as we did with the associative array:

if (hero.attributes[Attribute.intellect] > 12) hero.pontificate();

This is more efficient for our needs, but nothing enforces using enums as keys. If we accidentally gave an out-of-bounds index, the compiler wouldn't catch it and we'd get a runtime error. Ideally, we want the efficiency of the static array with the syntax of the associative array. Even better, it would be nice if we could say something like attributes.charisma instead of attributes[Attribute.charisma], like you would with a table in Lua. Fortunately, you can achieve this with only a few lines of D code. The Enumap

import std.traits;

/// Map each member of the enum `K` to a value of type `V`
struct Enumap(K, V) {
  private enum N = EnumMembers!K.length;
  private V[N] _store;

  auto opIndex(K key) { return _store[key]; }
  auto opIndexAssign(V value, K key) { return _store[key] = value; }
}

Here's a line-by-line breakdown: import std.traits; We need access to std.traits.EnumMembers, a standard-library function that returns (at compile-time!) the members of an enum. struct Enumap(K, V) Here, we declare a templated struct. In many other languages, this would look like Enumap<K, V>. K will be our key type (the enum) and V will be the value. K and V are known as 'compile-time parameters'. In this case, they are simply used to create a generic type, but in D, such parameters can be used for much more than just generic types, as we will see later.
private enum N = EnumMembers!K.length; private V[N] _store; Here we leverage EnumMembers to determine how many entries are in the provided enum. We use this to declare a static array capable of holding exactly N values of type V.

auto opIndex(K key) { return _store[key]; }
auto opIndexAssign(V value, K key) { return _store[key] = value; }

opIndex is a special method that allows us to provide a custom implementation of the indexing ([]) operator. The call skills[Skill.stealth] is translated to skills.opIndex(Skill.stealth), while the assignment skills[Skill.stealth] = 5 is translated to skills.opIndexAssign(5, Skill.stealth). Let's use that in our Character struct:

struct Character {
  Enumap!(Attribute, int) attributes;
  Enumap!(Skill, int) skills;
  Enumap!(Element, int) resistances;
}

if (hero.attributes[Attribute.wisdom] < 2) hero.drink(unidentifiedPotion);

There! Now the length of each underlying array is figured out for us, and the values can only be accessed using the enum members as keys. The underlying array _store is statically sized, so it requires no managed-memory allocation. Here's the really clever bit:

import std.conv;
//...
struct Enumap(K, V) {
  //...
  auto opDispatch(string s)() { return this[s.to!K]; }
  auto opDispatch(string s)(V val) { return this[s.to!K] = val; }
}

if (hero.attributes.charisma < 5) hero.makeAwkwardJoke();

opDispatch essentially overloads the . operator to provide some nice syntactic sugar. Here's a quick rundown of what happens for hero.attributes.charisma: The compiler sees attributes.charisma. It looks for the charisma symbol in the type Enumap!(Attribute, int). Failing to find this, it tries attributes.opDispatch!"charisma". That call resolves to attributes["charisma".to!Attribute]. And further resolves to attributes[Attribute.charisma]. Remember I mentioned that compile-time arguments can be much more than types? Here is a compile-time string argument; in this case, its value is whatever symbol follows the dot. Note that the above happens at compile time and is equivalent to using the indexing operator. So, we get the "charisma" string, but what we actually want is the enum member Attribute.charisma. std.conv.to makes quick work of this; it can, among other things, translate between strings and enum names. A Step Further – Enumap Arithmetic Let's suppose we add items to the game, and each item can provide some stat bonuses:

struct Item {
  Enumap!(Attribute, int) bonuses;
}

It would be really nice if we could just add these bonuses to our character's base stats, like so:

auto totalStats = character.attributes + item.bonuses;

Yet again, D lets us implement this quite concisely, this time by leveraging opBinary.

struct Enumap(K, V) {
  //...
  auto opBinary(string op)(typeof(this) other) {
    V[N] result = mixin("_store[] " ~ op ~ " other._store[]");
    return typeof(this)(result);
  }
}

Breakdown time again! auto opBinary(string op)(typeof(this) other) An expression like enumap1 + enumap2 will get translated (at compile time!) to enumap1.opBinary!"+"(enumap2). The operator (in this case, +) is passed as a compile-time string argument. If passing the operator as a string sounds weird, read on… V[N] result = mixin("_store[] " ~ op ~ " other._store[]"); mixin is a D keyword that translates a compile-time string into code. Continuing with our + example, we end up with V[N] result = mixin("_store[] " ~ "+" ~ " other._store[]"), which simplifies to V[N] result = _store[] + other._store[].
The _store[] + other._store[] expression is called an "array-wise operation". It's a concise way of performing an operation between corresponding elements of two arrays, in this case, adding each pair of integers into a resulting array. return typeof(this)(result); Here we wrap the resulting array in an Enumap before returning it. typeof(this) resolves to the enclosing type. It is equivalent, but preferable, to Enumap!(K, V), as if we change the name of the class we won't have to refactor this line. In many languages, we'd have to separately define opAdd, opSub, opMult, and more, most of which would likely contain similar code. However, thanks to the way opBinary allows us to work with a string representation of the operator at compile time, our single opBinary implementation supports operators like - and * as well. Summary I hope you enjoyed learning a little about D! There is a full implementation of Enumap available here: https://github.com/rcorre/enumap. About the Author Ryan Roden-Corrent is a software developer by trade and hobby. He is an active contributor to the free/open source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work here  and Creative Commons art here.
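To see the pieces above working together, here is a small usage sketch. It is not from the original post; it assumes the Enumap, Attribute, Character, and Item definitions exactly as shown earlier (the full library at the linked repository is more complete) and uses D's built-in unittest blocks, which you can run with dmd -unittest -main.

// A minimal exercise of Enumap: indexing, opDispatch sugar, and opBinary addition.
unittest {
    Enumap!(Attribute, int) base;
    base.strength = 10;                // goes through the opDispatch setter
    base[Attribute.dexterity] = 7;     // goes through opIndexAssign

    Item sword;
    sword.bonuses.strength = 2;

    auto total = base + sword.bonuses; // goes through opBinary!"+"
    assert(total.strength == 12);
    assert(total[Attribute.dexterity] == 7);
    assert(total.charisma == 0);       // untouched slots default to int.init (0)
}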

The Sales and Purchase Process

Packt
02 Dec 2016
21 min read
In this article by Anju Bala, the author of the book Microsoft Dynamics NAV 2016 Financial Management - Second Edition, we will see the sales and purchase process using Microsoft Dynamics NAV 2016 in detail. Sales and purchases are two essential business areas in all companies. In many organizations, the salesperson or the purchase department are the ones responsible for generating quotes and orders. People from the finance area are the ones in charge of finalizing the sales and purchase processes by issuing the documents that have an accountant reflection: invoices and credit memos. In the past, most systems required someone to translate all the transactions to accountancy language, so they needed a financer to do the job. In Dynamics NAV, anyone can issue an invoice, with zero accountant knowledge needed. But a lot of companies keep their old division of labor between departments. This is why we have decided to explain the sales and purchase processes in this book. This article explains how their workflows are managed in Dynamics NAV. In this article you will learn: What is Dynamics NAV and what it can offer to your company To define the master data needed to sell and purchase How to set up your pricing policies (For more resources related to this topic, see here.) Introducing Microsoft Dynamics NAV Dynamics NAV is an Enterprise Resource Planning (ERP) system targeted at small and medium-sized companies. An ERP is a system, a software, which integrates the internal and external management information across an entire organization. The purpose of an ERP is to facilitate the flow of information between all business functions inside the boundaries of the organizations. An ERP system is meant to handle all the organization areas on a single software system. This way the output of an area can be used as an input of another area. Dynamics NAV 2016 covers the following functional areas: Financial Management: It includes accounting, G/L budgets, account schedules, financial reporting, cash management, receivables and payables, fixed assets, VAT reporting, intercompany transactions, cost accounting, consolidation, multicurrency, and Intrastat. Sales and Marketing: This area covers customers, order processing, pricing, contacts, marketing campaigns, and so on. Purchase: The purchase area includes vendors, order processing, approvals, planning, costing, and other such areas. Warehouse: Under the warehouse area you will find inventory, shipping and receiving, locations, picking, assembly, and so on. Manufacturing: This area includes product design, capacities, planning, execution, costing, subcontracting, and so on. Job: Within the job area you can create projects, phases and tasks, planning, time sheets, work in process, and other such areas. Resource Planning: Manage resources, capacity, and so on. Service: Within this area you can manage service items, contracts, order processing, planning and dispatching, service tasks, and so on. Human Resources: Manage employees, absences, and so on. Some of these areas will be covered in detail in this book. Dynamics NAV offers much more than robust financial and business management functionalities. It is also a perfect platform to customize the solution to truly fit your company needs. If you have studied different ERP solutions, you know by now customizations to fit your specific needs will always be necessary. Dynamics NAV has a reputation as being easy to customize, which is a distinct advantage. 
Since you will probably have customizations in your system, you might find some differences with what is explained in this book. Your customizations could imply that: You have more functionality in your implementation Some steps are automated, so some manual work can be avoided Some features behave different than explained here There are new functional areas in your Dynamics NAV In addition Dynamics NAV has around forty different country localizations that are meant to cover country-specific legal requirements or common practices. Many people and companies have already developed solutions on top of Dynamics NAV to cover horizontal or industry-specific needs, and they have registered their solution as an add-on, such as: Solutions for the retail industry or the food and beverages industry Electronic Data Interchange (EDI) Quality or Maintenance management Integration with third-party applications such as electronic shops, data warehouse solutions, or CRM systems Those are just a few examples. You can find almost 2,000 registered third-party solutions that cover all kinds of functional areas. If you feel that Dynamics NAV does not cover your needs and you will need too much customization, the best solution will probably be to look for an existing add-on and implement it along with your Dynamics NAV. Anyway, with or without an add-on, we said that you will probably need customizations. How many customizations can you expect? This is hard to tell as each case is particular, but we'll try to give you some highlights. If your ERP system covers a 100 percent of your needs without any customization, you should worry. This means that your procedures are so standard that there is no difference between you and your competence. You are not offering any special service to your customer, so they are only going to measure you by the price they are getting. On the other hand, if your Dynamics NAV only covers a low percentage of your needs it could just mean two things: this is not the product you need; or your organization is too chaotic and you should re-think your processes to standardize them a bit. Some people agree that the ideal scenario would be to get about 70-80 percent of your needs covered out of the box, and about 20-30 percent customizations to cover those needs that make you different from your competitors. Importance of Financial Management In order to use Dynamics NAV, all organizations have to use the Financial Management area. It is the epicenter of the whole application. Any other area is optional and their usage depends on the organization's needs. The sales and the purchase areas are also used in almost any Dynamics NAV implementation. Actually, accountancy is the epicenter, and the general ledger is included inside the Financial Management area. In Dynamics NAV everything leads to accounting. It makes sense as accountancy is the act of recording, classifying, and summarizing, in terms of money, the transactions and events that take place in the company. Every time the warehouse guy ships an item, or the payment department orders a transfer, these actions can be written in terms of money using accounts, credit, and debit amounts. An accountant could collect all the company transactions and translate them one by one to accountancy language. But this means manual duplicate work, a lot of chances of getting errors and inconsistencies, and no real-time data. On the other hand, Dynamics NAV is capable to interpret such transactions and translate them to accountancy on the fly. 
In Dynamics NAV everything leads to accountancy, so all the company's employees are helping the financial department with their job. The financers can now focus on analyzing the data and taking decisions, and they don't have to bother on entering the data anymore. Posted data cannot be modified (or deleted) One of the first things you will face when working with Dynamics NAV is the inability to modify what has been posted, whether it's a sales invoice, a shipment document, a general ledger entry, or any other data. Any posted document or entry is unchangeable. This might cause frustration, especially if you are used to work with other systems that allow you to modify data. However, this feature is a great advantage since it ensures data integrity. You will never find an unbalanced transaction. If you need to correct any data, the Dynamics NAV approach is to post new entries to null the incorrect ones, and then post the good entries again. For instance, if you have posted and invoice, and the prices were wrong, you will have to post a credit memo to null the original invoice and then issue a new invoice with the correct prices. Document No. Amount Invoice 01 1000 Credit Memo 01 -1000 This nulls the original invoice Invoice 02 800 As you can see this method for correcting mistakes always leaves a track of what was wrong and how we solved it. Users get the feeling that they have to perform too many steps to correct the data; with the addition that everyone can see that there was a mistake at some point. Our experience tells us that users tend to pay more attention before they post anything in Dynamics NAV, which leads to make fewer mistakes on the first place. So another great advantage of using Dynamics NAV as your ERP system is that the whole organization tends to improve their internal procedures, so no mistakes. No save button Dynamics NAV does not have any kind of save button anywhere in the application. Data is saved into the database while it is being introduced. When you enter data in one field, right after you leave the field, the data is already saved. There is no undo feature. The major advantage is that you can create any card (for instance, Customer Card), any document (for instance, Sales Order), or any other kind of data without knowing all the information that is needed. Imagine you need to create a new customer. You have all their fiscal data except their VAT Number. You could create the card, fill in all the information except the VAT Registration No. field, and leave the card without losing the rest of the information. When you have figured out the VAT Number of your customer, you can come back and fill it in. The not-losing-the-rest-of-the-information part is important. Imagine that there actually was a Save button; you spend a few minutes filling in all the information and, at the end, click on Save. At that moment, the system carries out some checks and finds out that one field is missing. It throws you a message saying that the Customer Card cannot be saved. So you basically have two options: To lose the information introduced, find out the VAT number for the customer, and start all over again. To cheat. Fill the field with some wrong value so that the system actually lets you save the data. Of course, you can come back to the card and change the data once you've found out the right one. But nothing will prevent any other user to post a transaction with the customer in the meantime. 
Understanding master data Master data is all the key information to the operation of a business. Third-party companies, such as customers and vendors, are part of the master data. The items a company manufactures or sells are also part of the master data. Many other things can be considered master data, such as the warehouses or locations, the resources, or the employees. The first thing you have to do when you start using Dynamics NAV is loading your master data into the system. Later on, you will keep growing your master data by adding new customers, for instance. To do so, you need to know which kind of information you have to provide. Customers We will open a Customer Card to see which kind of information is stored in Dynamics NAV about customers. To open a Customer Card, follow these steps: Navigate to Departments/Sales & Marketing/Sales/Customers. You will see a list of customers, find No. 10000 The Cannon Group PLC. Double-click on it to open its card, or select it and click on the View icon found on the Home tab of the ribbon. The following screenshot shows the Customer Card for The Cannon Group PLC: Customers are always referred to by their No., which is a code that identifies them. We can also provide the following information: Name, Address, and Contact: A Search Name can also be provided if you refer to your customer by its commercial name rather than by its fiscal name. Invoicing information: It includes posting groups, price and discount rates, and so on. You may still don't know what a posting group is, since it is the first time those words are mentioned on this book. At this moment, we can only tell you that posting groups are important. But it's not time to go through them yet. We will talk about posting groups in Chapter 6, Financial Management Setup. Payments information: It includes when and how will we receive payments from the customer. Shipping information: It explains how do we ship items to the customer. Besides the information you see on the card, there is much other information we can introduce about customers. Take a look at the Navigate tab found on the ribbon. Other information that can be entered is as follows: Information about bank accounts: so that we can know where can we request the payments. Multiple bank accounts can be setup for each customer. Credit card information: in case customers pay using this procedure. Prepayment information: in case you require your customers to pay in advance, either totally or partially. Additional addresses: where goods can be shipped (Ship-to Addresses). Contacts: You may relate to different departments or individuals from your customers. Relations: between our items and the customer's items (Cross References). Prices and Discounts: which will be discussed in the Pricing section. But customers, just as any other master data record, do not only have information that users inform manually. They have a bunch of other information that is filled in automatically by the system as actions are performed: History: You can see it on the right side of the card and it holds information such as how many quotes or orders are currently being processed or how many invoices and credit memos have been issued. Entries: You can access the ledger entries of a customer through the Navigate tab. They hold the details of every single monetary transaction done (invoices, credit memos, payments, and so on). 
Statistics: You can see them on the right side and they hold monetary information such as the amount in orders or what is the amount of goods or services that have been shipped but not yet invoiced. The Balance: It is a sum of all invoices issued to the customer minus all payments received from the customer. Not all the information we have seen on the Customer Card is mandatory. Actually, the only information that is required if you want to create a transaction is to give it a No. (its identification) and to fill in the posting group's fields (Gen. Bus. Posting Group and Customer Posting Group). All other information can be understood as default information and setup that will be used in transactions so that you don't have to write it down every single time. You don't want to write the customer's address in every single order or invoice, do you? Items Let's take a look now at an Item Card to see which kind of information is stored in Dynamics NAV about items. To open an Item Card, follow these steps: Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items. You will see a list of items, find item 1000 Bicycle. Double-click on it to open its card. The following screenshot shows the item card for item 1000 Bicycle: As you can see in the screenshot, items first have a No., which is a code that identifies them. For an item, we can enter the following information: Description: It's the item's description. A Search Description can also be provided if you better identify an item using a different name. Base Unit of Measure: It is the unit of measure in which most quantities and other information such as Unit Price for the item will be expressed. We will see later what other units of measure can be used as well, but the Base is the most important one and should be the smallest measure in which the item can be referred. Classification: Item Category Code and Product Group Code fields offer a hierarchical classification to group items. The classification can fill in the invoicing information we will see in the next point. Invoicing information: It includes posting groups, costing method used for the item, and so on. Posting groups are explained in Chapter 6, Financial Management Setup, and costing methods are explained in Chapter 3, Accounting Processes. Pricing information: It is the item's unit price and other pricing configuration that we will cover in more detail in the Pricing section. Foreign trade information: It is needed if you have to do Instrastat reporting. Replenishment, planning, item tracking, and warehouse information: These fast-tabs are not explained in detail because they are out of the scope of this book. They are used to determine how to store the stock and how to replenish it. Besides the information you see on the Item Card, there is much other information we can introduce about items through the Navigate tab found on the ribbon: As you can see, other information that can be entered is as follows: Units of Measure: It is useful when you can sell your item either in units, boxes, or other units of measure at the same time. Variants: It is useful when you have multiple items that are actually the same one (thus, they share most of the information) but with some slight differences. You can use variants to differentiate colors, sizes, or any other small difference you can think of. Extended texts: It is useful when you need long descriptions or technical info to be shown on documents. 
Translations: It is used so that you can show item's descriptions on other languages, depending on the language used by your customers. Prices and discounts: It will be discussed in the Pricing section. As with customers, not all the information in the Item Card is mandatory. Vendors, resources, and locations We will start with third-parties; customers and vendors. They work exactly the same way. We will just look at customers, but everything we will explain about them can be applied to vendors as well. Then, we will look at items, and finally, we will take a brief look to locations and resources. The concepts learned can be used in resources and locations, and also to other master data such as G/L accounts, fixed assets, employees, service items, and so on. Pricing Pricing is the combination of prices for items and resources and the discounts that can be applied to individual document lines or to the whole document. Prices can be defined for items and resources and can be assigned to customers. Discounts can be defined for items and documents and can also be assigned to customers. Both prices and discounts can be defined at different levels and can cover multiple pricing policies. The following diagram illustrates different pricing policies that can be established in Dynamics NAV: Defining sales prices Sales prices can be defined in different levels to target different pricing policies. The easiest scenario is when we have a single price per item or resource. That is, the One single price for everyone policy. In that case, the sales price can be specified on the Item Card or on the Resource Card, in a field called Unit Price. In a more complex scenario, where prices depend on different conditions, we will have to define the possible combinations and the resulting price. We will explain how prices can be configured for items. Prices for resources can be defined in a similar way, although they offer fewer possibilities. To define sales prices for an Item, follow these steps: Navigate to Departments/Sales & Marketing/Inventory & Pricing/Items. You will see a list of items, find item 1936-S BERLIN Guest Chair, yellow. Double-click on it to open its card. On the Navigate tab, click on the Prices icon found under the Sales group. The Edit – Sales Prices page will open. As you can see in the screenshot, multiple prices have been defined for the same item. A specific price will only be used when all the conditions are met. For example, a Unit Price will be used for any customer that buys item 1936-S after the 20/01/2017 but only if they buy a minimum of 11 units. Different fields can be used to address each of the pricing policies: The combination of Sales Type and Sales Code fields enable the different prices for different customers policy Fields Unit of Measure Code and Minimum Quantity are used on the different prices per volume policy Fields Starting Date, Ending Date, and Currency Code are used on the different prices per period or currency policy They can all be used at the same time to enable mixed policies. When multiple pricing conditions are met, the price that is used is the one that is most favorable to the customer (the cheapest one). Imagine Customer 10000 belongs to the RETAIL price group. On 20/01/2017 he buys 20 units of item 1936-S. There are three different prices that could be used: the one defined for him, the one defined for its price group, and the one defined to all customers when they buy at least 11 units. 
Among the three prices, 130.20 is the cheapest one, so this is the one that will be used. Prices can be defined including or excluding VAT. Defining sales discounts Sales discounts can be defined in different levels to target different pricing policies. We can also define item discounts based on conditions. This addresses the Discounts based on items policy and also the Discounts per volume, period or currency policy, depending on which fields are used to establish the conditions. In the following screenshot, we can see some examples of item discounts based on conditions, which are called Line Discounts because they will be applied to individual document lines. In some cases, items or customers may already have a very low profit for the company and we may want to prevent the usage of line discounts, even if the conditions are met. A field called Allow Line Disc, can be found on the Customer Card and on sales prices. By unchecking it, we will prevent line discounts to be applied to a certain customer or when a specific sales price is used. Besides the line discounts, invoice discounts can be defined to use the General discounts per customer policy. Invoice discounts apply to the whole document and they depend only on the customer. Follow these steps to see and define invoice discounts for a specific customer: Open the Customer Card for customer 10000, The Cannon Group PLC. On the Navigate tab, click on Invoice Discounts. The following screenshot shows that customer 10000 has an invoice discount of 5 percent: Just as line discounts, invoice discounts can also be disabled using a field called Allow Invoice disc. that can be found on the Item Card and on sales prices. There is a third kind of discount, payment discount, which can be defined to use the Financial discounts per early payments policy. This kind of discount applies to the whole document and depends on when the payment is done. Payment discounts are bound to a Payment Term and are to be applied if the payment is received within a specific number of days. The following screenshot shows the Payment Terms that can be found by navigating to Departments/Sales & Marketing/Administration/Payment Terms: As you can see, a 2 percent payment discount has been established when the 1M(8D) Payment Term is used and the payment is received within the first eight days. Purchase pricing Purchase prices and discounts can also be defined in Dynamics NAV. The way they are defined is exactly the same as you can define sales prices and discounts. There are some slight differences: When defining single purchase pricing on the Item Card, instead of using the Unit Price field, we will use the Last Direct Cost field. This field gets automatically updated as purchase invoices are posted. Purchase prices and discounts can only be defined per single vendors and not per group of vendors as we could do in sales prices and discounts. Purchase discounts can only be defined per single items and not per group of items as we could do in sales discounts. We cannot prevent purchase discounts to be applied. Purchase prices can only be defined excluding VAT. Summary In this chapter, we have learned that Dynamics NAV as an ERP system meant to handle all the organization areas on a single software system. The sales and purchases processes can be held by anyone without the need of having accountancy knowledge, because the system is capable of translating all the transactions to accountant language on the fly. Customers, vendors, and items are the master data of these areas. 
Its information is used in documents to post transactions. There are multiple options to define your pricing policy: from one single price to everyone to different prices and discounts per groups of customers, per volume, or per period or currency. You can also define financial discounts per early payment. In the next chapter, we will learn how to manage cash by showing how to handle receivables, payables, and bank accounts. Resources for Article: Further resources on this subject: Modifying the System using Microsoft Dynamics Nav 2009: Part 3 [article] Introducing Dynamics CRM [article] Features of Dynamics GP [article]

Administering a Swarm Cluster

Packt
02 Dec 2016
12 min read
In this article by Fabrizio Soppelsa and Chanwit Kaewkasi, the author of Native Docker Clustering with Swarm we're now going to see how to administer a running Swarm cluster. The topics include scaling the cluster size (adding and removing nodes), updating the cluster and nodes information; handling the node status (promotion and demotion), troubleshooting, and graphical interfaces (UI). (For more resources related to this topic, see here.) Docker Swarm standalone In standalone mode, cluster operations must be done directly inside the container 'swarm'. We're not going to cover every option in detail. Swarm standalone is not deprecated yet, and is used around, the reason for which we're discussing it here, but it will be probably declared deprecated soon. It is obsoleted by the Swarm mode. The commands to administer a Docker Swarm standalone cluster are: Create (c): Typically, in production people use Consul or Etcd, so this command has no relevance for production List (l): This shows the list of cluster nodes, basing on a iteration through Consul or Etcd, that is, the Consul or Etcd must be passed as an argument Join (j): This joins a node on which the swarm container is running to the cluster. Here, still, a discovery mechanism must be passed at the command line Manage (m): This is the core of the Standalone mode. By managing a cluster, here it's meant how to change some cluster properties, such as Filters, Schedulers, external CA URLs, and timeouts. Docker Swarm mode: Scale a cluster size Manually adding nodes You can choose to create Docker hosts either way you prefer. If you plan to use Docker Machine, you're probably going to hit Machine's limits very soon, and you will need to be very patient while even listing machines, having to wait several seconds for Machine to get and print all the information on the whole. My favorite method is to use Machine with the generic driver, thus delegate to something else (that is, Ansible) the host provisioning (Operating System installation, network and security groups configurations, and so on), and later exploit Machine to install Docker the proper way: Manually configure the cloud environment (security groups, networks, and so on) Provision Ubuntu hosts with a third-party tool Run Machine with the generic driver on these hosts with the only goal to properly install Docker Then handle hosts with the tool in part 2, or even others. If you use Machine's generic driver, it will select the latest stable Docker binaries. While we were writing this article, in order to use Docker 1.12, we had to overcome this by passing Machine a special option to get the latest, unstable, version of Docker: docker-machine create -d DRIVER--engine-install-url https://test.docker.com mymachine For a production Swarm (mode), at the time you'll be reading this article, 1.12 will be already stable, so this trick will not be necessary anymore, unless you need to use some of the very latest Docker features. Managers The theory of HA suggests us that the number of managers must be odd, and equal or more than 3. This is to grant a quorum in high availability, that is the majority of nodes agree on what part of nodes are leading the operations. If there were two managers, and one goes down and comes back, it's possible that both will think to be the leaders. That causes a logical crash in the cluster organization called split brain. The more managers you have, the higher is the resistance ratio to failures. 
Refer to the following table: Number of managers Quorum (majority) Maximum possible failures 3 2 1 5 3 2 7 4 3 9 5 4 Also, in Swarm Mode, an overlay network is created automatically and associated as ingress traffic to the nodes. Its purpose is to be used with containers: You will want that your containers be associated to an internal overlay (VxLAN meshed) network to communicate with each other, rather than using public or other networks. Thus, Swarm creates this already for you, ready to use. We recommend, further, to geographically distribute managers. If an earthquake hits the datacenter where all managers are serving, the cluster would go down, wouldn't it? So, consider to place each manager or groups of managers into different physical locations. With the advent of cloud computing, that's really easy, you can spawn up each manager in a different AWS region, or even better have a manager running each on different providers on different regions, that is on AWS, on Digital Ocean, on Azure and also on private cloud, such as OpenStack. IMAGE OF A WORLD WITH SCATTERED MANAGERS IN CONTINENTS? Workers You can add an arbitrary number of workers. This is the elastic part of the Swarm. It's totally fine to have 5, 15, 200, or 2,300 running workers. This is the easiest part to handle: You can add and remove workers with no burdens, at any time, at any size. Scripted nodes addition The very easiest way to add nodes, if you plan to not go over 100 nodes total, is to use basic scripting. At the time of docker swarm init, just copy and paste the lines printed in the output. Then, create a certain bunch of workers: #!/bin/bash for i in `seq 0 9`; do docker-machine create -d amazonec2 --engine-install-url https://test.docker.com --amazonec2-instance-type "t2.large" swarm-worker-$i done After that, it will be only necessary to go through the list of machines, ssh into them, and join the nodes: #!/bin/bash SWARMWORKER="swarm-worker-" for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`; do docker-machine ssh $machine sudo docker swarm join --token SWMTKN-1-5c3mlb7rqytm0nk795th0z0eocmcmt7i743ybsffad5e04yvxt-9m54q8xx8m1wa1g68im8srcme 172.31.10.250:2377 done This script runs through the machines, and for each with a name starting with swarm-worker-, will ssh into, and join the node to the existing Swarm, to the leader manager, here 172.31.10.250. Refer to https://github.com/swarm2k/swarm2k/tree/master/amazonec2 for some further details or to download these one liners. Belt Belt is another tool for massively provisioning Docker Engines. It is basically a SSH wrapper on steroids and it requires you to prepare provider-specific images as well as provisioning templates before go massively. In this section, we'll learn to do so: You can compile Belt yourself by getting its source from Github. # Set $GOPATH here go get https://github.com/chanwit/belt Currently, Belt supports the DigitalOcean driver. We can prepare our template for provisioning such as the following inside config.yml: digitalocean: image: "docker-1.12-rc4" region: nyc3 ssh_key_fingerprint: "your SSH ID" ssh_user: root Then we can create a hundred nodes basically with a couple of commands. First we create three boxes of 16 GB, namely, mg0, mg1, and mg2. 
$ belt create 16gb mg[0:2] NAME IPv4 MEMORY REGION IMAGE STATUS mg2 104.236.231.136 16384 nyc3 Ubuntu docker-1.12-rc4 active mg1 45.55.136.207 16384 nyc3 Ubuntu docker-1.12-rc4 active mg0 45.55.145.205 16384 nyc3 Ubuntu docker-1.12-rc4 active Then we can use the status command to wait for all nodes to become active: $ belt status --wait active=3 STATUS #NODES NAMES active 3 mg2, mg1, mg0 We'll do this again for 10 worker nodes. $ belt create 512mb node[1:10] $ belt status --wait active=13 STATUS #NODES NAMES active 3 node10, node9, node8, node7 Use Ansible You can alternatively use Ansible (as you like, and it's becoming very popular) to make things more repeatable. I (Fabrizio) created some Ansible modules to work directly with Machine and Swarm (Mode), compatible with Docker 1.12 (https://github.com/fsoppelsa/ansible-swarm). They require Ansible 2.2+, the very first version of Ansible compatible with binary modules. You will need to compile the modules (written in Go), and then pass them to the ansible-playbook -M parameter. git clone https://github.com/fsoppelsa/ansible-swarm cd ansible-swarm/library go build docker_machine_ go build docker_swarm_ cd .. There are some examples of plays in playbooks/. Ansible's plays syntax is that easy to understand, that's even superfluous to explain in detail. I used this play to join 10 workers to the Swarm2k experiment: --- name: Join the Swarm2k project hosts: localhost connection: local gather_facts: False #mg0 104.236.18.183 #mg1 104.236.78.154 #mg2 104.236.87.10 tasks: name: Load shell variables shell: > eval $(docker-machine env "{{ machine_name }}") echo $DOCKER_TLS_VERIFY && echo $DOCKER_HOST && echo $DOCKER_CERT_PATH && echo $DOCKER_MACHINE_NAME register: worker name: Set facts set_fact: whost: "{{ worker.stdout_lines[0] }}" wcert: "{{ worker.stdout_lines[1] }}" name: Join a worker to Swarm2k docker_swarm: role: "worker" operation: "join" join_url: ["tcp://104.236.78.154:2377"] secret: "d0cker_swarm_2k" docker_url: "{{ whost }}" tls_path: "{{ wcert }}" register: swarm_result name: Print final msg debug: msg="{{ swarm_result.msg }}" Basically, after loading some host facts from Machine, it invokes the docker_swarm module: The operation is join. The role of the new node is worker. The new node joins "tcp://104.236.78.154:2377", that was the leader manager at the time of joining. This argument takes an array of managers, such as might be ["tcp://104.236.78.154:2377", "104.236.18.183:2377", "tcp://104.236.87.10:2377"]. It passes the password (secret). It specifies some basic Engine connection facts. The module will connect to the dockerurl using the certificates at tlspath. After having docker_swarm.go compiled in library/, adding workers to the swarm is as easy as: #!/bin/bash SWARMWORKER="swarm-worker-" for machine in `docker-machine ls --format {{.Name}} | grep $SWARMWORKER`; do ansible-playbook -M library --extra-vars "{machine_name: $machine}" playbook.yaml don Cluster management We now operate a little bit with this example, made of 3 managers and 10 workers. You can reference the nodes by calling them either by their hostname (manager1) or by their ID (ctv03nq6cjmbkc4v1tc644fsi). The other columns in this list statement describe the properties of the cluster nodes. STATUS: This is about the physical reachability of the node. If the node is up, it's Ready, otherwise Down AVAILABILITY: This is the node availability. 
A node can be either Active (so participating to the cluster operations), Pause (in standby, suspended, not accepting tasks), or Drain (waiting to evacuate its tasks). MANAGER STATUS: This is about the current status of manager. If a node is not a manager, this field will be empty. If a node is a manager, this field can be either Reachable (one of the managers presents to guarantee high availability) or Leader (the host leading all operations). Nodes operations The docker node command comes with some possible options. Demotion and promotion Promotion is possible for worker nodes (transforming them into managers), while demotion is possible for manager nodes (transforming them into workers). Always keep in mind the table to guarantee high availability when managing the number of managers and workers (odd number, more than or equal to 3). Use the following syntax to promote worker0 and worker1 to managers: docker node promote worker0 docker node promote worker1 There is nothing magic behind the curtain. It is just that Swarm attempts to change the node role, with an on-the-fly instruction. Demote is the same (docker node demote worker1). But be careful to not demote the node you're working from, otherwise you'll get locked out. What happens if you try to demote a Leader manager? In this case, the RAFT algorithm will start an election and a new leader will be selected among the Active managers. Tagging nodes You must have noticed, in the preceding screenshot, that worker9 is in Drain availability. This means that the node is in the process of evacuating its tasks (if any), which will be rescheduled somewhere else on the cluster. You can change the availability of a node by updating its status using the docker node update command: The --availability option can take either active, pause, or drain. Here we just restored worker9 to the active state. Active: This means that the node is running and ready to accept tasks pause: This means that the node is running, but not accepting tasks drain: This means that the node is running and not accepting tasks, it is currently draining its tasks, that are getting rescheduled somewhere else Another powerful update argument is about labels. There are --label-add and --label-rm that respectively allow us to add labels to Swarm nodes. Docker Swarm labels do not affect the Engine labels. It's possible to specify labels when starting the Docker Engine (dockerd [...] --label "staging" --label "dev" [...]). But Swarm has no power to edit/change them. The labels we see here only affect the Swarm behavior. Labels are useful to categorize nodes. When you start services, you can then filter and decide where to physically spawn containers, using labels. For instance, if you want to dedicate a bunch of nodes with SSD to host MySQL, you can actually do this: docker node update --label-add type=ssd --label-add type=mysql worker1 docker node update --label-add type=ssd --label-add type=mysql worker2 docker node update --label-add type=ssd --label-add type=mysql worker3 Later, when you will start a service with some replica factor, say 3, you'll be sure that it will start MySQL containers exactly on worker1, worker2, and worker3, if you filter by node.type: docker service create --replicas 3 --constraint 'node.type == mysql' --name mysql-service mysql:5.5. Summary In this article, we went through the typical Swarm administration procedures and options. 
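Before and after creating the service, it is worth confirming from the CLI that the labels and placement actually line up with the constraint. The following is a hedged sketch: the node names and label values follow the example above, and the subcommand names reflect the current Docker CLI, which may differ slightly from the 1.12 builds discussed in this article.

docker node ls --filter "role=worker"                        # list only the worker nodes
docker node inspect --format '{{ .Spec.Labels }}' worker1    # show the labels we just added
docker service ps mysql-service                              # confirm the three replicas landed on the labeled nodes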
After showing how to add managers and workers to the cluster, we explained in detail how to update cluster and node properties, how to check the Swarm health, and we encountered Shipyard as a UI. After this focus on infrastructure, now it's time to use our Swarms. Resources for Article: Further resources on this subject: Hands On with Docker Swarm [article] Setting up a Project Atomic host [article] Getting Started with Flocker [article]


Introduction to Functional Programming

Packt
01 Dec 2016
19 min read
In this article by Wisnu Anggoro, the author of the book Functional C#, we are going to explore functional programming by putting it to the test. We will use the power of C# to construct some functional code. We will also deal with the features in C# which are mostly used in developing functional programs. By the end of this article, we will have an idea of what the functional approach in C# looks like. Here are the topics we will cover:

Introduction to the functional programming concept
Comparison between the functional and imperative approach
The concepts of functional programming
The advantages and disadvantages of functional programming

(For more resources related to this topic, see here.)

In functional programming, we write functions without side effects, the way we write them in mathematics. A variable in the code represents the value of a function parameter, similar to a variable in a mathematical function. The idea is that a programmer defines functions that contain the expression, the definition, and the parameters that can be expressed by a variable in order to solve problems. After a programmer builds a function and sends it to the computer, it is the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it analyzes the expression in the function and yields the result to the user in a printed format. The calculator evaluates a function which is composed of variables passed as parameters and expressions which form the body of the function. Variables are substituted by their values in the expression. We can build simple and compound expressions using algebraic operators. Since expressions without assignments never alter values, subexpressions need to be evaluated only once.

Suppose we have the expression 3 + 5 inside a function. The computer will return 8 as the result right after it has completely evaluated it. However, this is just a simple example of how the computer acts when evaluating an expression. In fact, a programmer can increase the ability of the computer by creating complex definitions and expressions inside the function. Not only can the computer evaluate simple expressions, but it can also evaluate complex calculations and expressions.

Understanding definitions, scripts, and sessions
As we discussed earlier about a calculator that analyzes the expression in a function, let's imagine we have a calculator that has a console panel like a computer does. The difference between it and a conventional calculator is that we have to press Enter instead of = (equal to) in order to run the evaluation process of the expression. Here, we can type the expression and then press Enter. Now, imagine that we type the following expression:

3 x 9

Immediately after pressing Enter, the computer will print 27 in the console, and that is what we are expecting. The computer has done a great job of evaluating the expression we gave. Now, let's move on to analyzing the following definitions. Imagine that we type them on our functional calculator:

square a = a * a
max a b = a, if a ≥ b
        = b, if b > a

We have written two definitions, square and max. We can call this list of definitions a script. By calling the square function followed by any number representing variable a, we will be given the square of that number.
Also, in the max definition, we supply two numbers to represent variables a and b, and then the computer evaluates this expression to find the bigger of the two. By defining these two definitions, we can use them as functions in what we call a session, as follows:

square (1 + 2)

The computer will print 9 after evaluating the preceding function. The computer will also be able to evaluate the following function:

max 1 2

It will return 2 as the result, based on the definition we defined earlier. This is also possible if we provide the following expression:

square (max 2 5)

Then, 25 will be displayed in our calculator console panel. We can also build a definition using a previous definition. Suppose we want to raise an integer number to the fourth power and take advantage of the definition of the square function; here is what we can send to our calculator:

quad q = square q * square q
quad 10

The first line of the preceding expression is the definition of the quad function. In the second line, we call that function, and we will be provided with 10000 as the result. The script can also define variable values; for instance, take a look at the following:

radius = 20

So, we should expect the computer to be able to evaluate the following definition:

area = (22 / 7) * square (radius)

Understanding the functions for functional programming
Functional programming uses a technique of emphasizing functions and their application instead of commands and their execution. Most values in functional programming are function values. Let's take a look at the following mathematical notation:

f :: A -> B

From the preceding notation, we can say that function f relates each element of A to an element of B. We call A the source type and B the target type. In other words, the notation A -> B states that A is an argument where we have to input the value, and B is the return value, or the output of the function evaluation. Consider that x denotes an element of A and x + 2 denotes an element of B, so we can create the mathematical notation as follows:

f(x) = x + 2

In mathematics, we use f(x) to denote a functional application. In functional programming, the function will be passed an argument and will return the result after the evaluation of the expression. We can construct many definitions for one and the same function. The following two definitions are similar and will triple the input passed as an argument:

triple y = y + y + y
triple' y = 3 * y

As we can see, triple and triple' have different expressions. However, they are the same function, so we can say that triple = triple'. Although we have many definitions to express one function, we will find that there is only one definition that proves to be the most efficient in the procedure of evaluation, in the sense of reducing the expression we discussed previously. Unfortunately, we cannot determine which of our preceding two definitions is the most efficient, since that depends on the characteristics of the evaluation mechanism.
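To connect this notation to C#, a function of type num -> num corresponds to a Func<int, int> delegate. The following is a small illustrative sketch of the two equivalent triple definitions (my own example, not one of the book's sample projects):

using System;

class TripleExample
{
    static void Main()
    {
        // Two definitions of the same function, as in the triple example above
        Func<int, int> triple = y => y + y + y;
        Func<int, int> triple2 = y => 3 * y;

        Console.WriteLine(triple(7));  // 21
        Console.WriteLine(triple2(7)); // 21
    }
}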
Forming the definition
Now, let's go back to our discussion on definitions at the beginning of this article. We have the following definition in order to retrieve the value from the case analysis:

max a b = a, if a ≥ b
        = b, if b > a

There are two expressions in this definition, distinguished by Boolean-valued expressions. These distinguishing expressions are called guards, and we use them to evaluate to a value of True or False. The first line is one of the alternative result values for this function. It states that the return value will be a if the expression a ≥ b is True. In contrast, the function will return the value b if the expression b > a is True. Using these two cases, a ≥ b and b > a, the max value depends on the values of a and b. The order of the cases doesn't matter. We can also define the max function using the special word otherwise. This word ensures that the otherwise case will be executed if no other expression results in a True value. Here, we will refactor our max function using the word otherwise:

max a b = a, if a ≥ b
        = b, otherwise

From the preceding function definition, we can see that if the first expression is False, the function will return b immediately without performing any further evaluation. In other words, the otherwise case will always return True if all previous guards return False. Another special word usually used in mathematical notation is where. This word is used to set a local definition for the expression of the function. Let's take a look at the following example:

f x y = (z + 2) * (z + 3)
        where z = x + y

In the preceding example, we have a function f with a variable z, whose value is determined by x and y. There, we introduce a local z definition to the function. This local definition can also be used along with the case analysis we discussed earlier. Here is an example of a local definition used in conjunction with the case analysis:

f x y = x + z, if x > 100
      = x - z, otherwise
        where z = triple(y + 3)

In the preceding function, there is a local z definition, which qualifies for both the x + z and x - z expressions. As we discussed earlier, although the function has two equal to (=) signs, only one expression will return the value.

Currying
Currying is a technique of changing the structure of arguments into a sequence. It transforms an n-ary function into n unary functions, and it was created to work around the limitation that lambda functions are unary. Let's go back to our max function again and look at the following definition:

max a b = a, if a ≥ b
        = b, if b > a

We can see that there is no bracket in the max a b function name. Also, there is no comma separating a and b in the function name. We can add a bracket and a comma to the function definition, as follows:

max' (a,b) = a, if a ≥ b
           = b, if b > a

At first glance, we find the two functions to be the same since they have the same expression. However, they are different because of their different types. The max' function has a single argument, which consists of a pair of numbers. The type of the max' function can be written as follows:

max' :: (num, num) -> num

On the other hand, the max function has two arguments. The type of this function can be written as follows:

max :: num -> (num -> num)

The max function takes a number and then returns a function from a number to a number. With the curried max function, we pass the variable a to the max function, which returns a function; that function is then applied to variable b in order to find the maximum number.
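C# can express both forms through delegates. The following is a small hand-written sketch (not from the book's sample projects) showing the uncurried and curried versions of max, and the partial application that the curried form makes possible:

using System;

class CurryingExample
{
    static void Main()
    {
        // Uncurried form: (num, num) -> num
        Func<int, int, int> max = (a, b) => a >= b ? a : b;

        // Curried form: num -> (num -> num)
        Func<int, Func<int, int>> maxCurried = a => b => a >= b ? a : b;

        Console.WriteLine(max(1, 2));        // 2
        Console.WriteLine(maxCurried(1)(2)); // 2

        // Partial application falls out naturally from the curried form
        Func<int, int> atLeastTen = maxCurried(10);
        Console.WriteLine(atLeastTen(3));    // 10
    }
}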
Comparison between functional and imperative programming
The main difference between functional and imperative programming is that imperative programming produces side effects while functional programming doesn't. In imperative programming, expressions are evaluated and their resulting values are assigned to variables. So, when we group a series of expressions into a function, the resulting value depends on the state of those variables at that point in time; this is what we call side effects. Because of the continuous change in state, the order of evaluation matters. In the functional programming world, destructive assignment is forbidden, and each time an assignment happens, a new variable is introduced.

Concepts of functional programming
We can also distinguish functional programming from imperative programming by its concepts. The core ideas of functional programming are encapsulated in constructs such as first-class functions, higher-order functions, purity, recursion over loops, and partial functions. We will discuss these concepts in this section.

First-class and higher-order functions
In imperative programming, the data is given more importance and is passed through a series of functions (with side effects). Functions are special constructs with their own semantics. In effect, functions do not have the same standing as variables and constants. Since a function cannot be passed as a parameter or returned as a result, functions are regarded as second-class citizens of the programming world. In the functional programming world, we can pass a function as a parameter and return a function as a result. Functions obey the same semantics as variables and their values; thus, they are first-class citizens. We can also create functions of functions, called second-order functions, through composition. There is no limit imposed on the composability of functions, and such functions are called higher-order functions. Fortunately, the C# language has supported these two concepts, since it has a feature called function objects, which have types and values. To discuss the function object in more detail, let's take a look at the following code:

class Program
{
    static void Main(string[] args)
    {
        Func<int, int> f = (x) => x + 2;
        int i = f(1);
        Console.WriteLine(i);

        f = (x) => 2 * x + 1;
        i = f(1);
        Console.WriteLine(i);
    }
}

We can find this code in FuncObject.csproj; if we run it, it will display 3 twice on the console screen, since both f(1) calls evaluate to 3. Let's continue the discussion on function types and function values. Hit Ctrl + F5 instead of F5 in order to run the code without attaching the debugger; this is useful to stop the console window from closing on exit.
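As a small additional illustration of a higher-order function (my own sketch, not part of the book's sample projects), the following method takes two functions and returns their composition:

using System;

class HigherOrderExample
{
    // A higher-order function: it takes two functions and returns a new one
    static Func<int, int> Compose(Func<int, int> f, Func<int, int> g)
    {
        return x => g(f(x));
    }

    static void Main()
    {
        Func<int, int> addTwo = x => x + 2;
        Func<int, int> triple = x => 3 * x;

        Func<int, int> addThenTriple = Compose(addTwo, triple);

        Console.WriteLine(addThenTriple(1)); // (1 + 2) * 3 = 9
    }
}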
Pure functions
In functional programming, most functions do not have side effects. In other words, a function doesn't change any variables outside the function itself. Also, it is consistent, which means that it always returns the same value for the same input data. The following are example actions that will generate side effects in programming:

Modifying a global variable or static variable, since it makes the function interact with the outside world.
Modifying the argument of a function. This usually happens if we pass a parameter by reference.
Raising an exception.
Taking input from and sending output to the outside world, for instance, getting a keystroke from the keyboard or writing data to the screen.

Although it does not satisfy the rules of a pure function, we will use many Console.WriteLine() methods in our programs in order to ease our understanding of the code samples. The following is a sample non-pure function that we can find in NonPureFunction1.csproj:

class Program
{
    private static string strValue = "First";

    public static void AddSpace(string str)
    {
        strValue += ' ' + str;
    }

    static void Main(string[] args)
    {
        AddSpace("Second");
        AddSpace("Third");
        Console.WriteLine(strValue);
    }
}

If we run the preceding code, as expected, the string First Second Third will be displayed on the console. In this code, we modify the strValue global variable inside the AddSpace function. Since it modifies a variable outside the function, it is not considered a pure function. Let's take a look at another non-pure function example in NonPureFunction2.csproj:

class Program
{
    public static void AddSpace(StringBuilder sb, string str)
    {
        sb.Append(' ' + str);
    }

    static void Main(string[] args)
    {
        StringBuilder sb1 = new StringBuilder("First");
        AddSpace(sb1, "Second");
        AddSpace(sb1, "Third");
        Console.WriteLine(sb1);
    }
}

We see the AddSpace function again, but this time with the addition of a StringBuilder argument. In the function, we append a space and str to the sb argument. Since sb refers to the same object as the sb1 variable in the Main function, the function modifies sb1 as well. Note that it will display the same output as NonPureFunction1.csproj. To convert the preceding two non-pure function examples into pure function code, we can refactor the code as follows. This code can be found in PureFunction.csproj:

class Program
{
    public static string AddSpace(string strSource, string str)
    {
        return (strSource + ' ' + str);
    }

    static void Main(string[] args)
    {
        string str1 = "First";
        string str2 = AddSpace(str1, "Second");
        string str3 = AddSpace(str2, "Third");
        Console.WriteLine(str3);
    }
}

Running PureFunction.csproj, we will get the same output as with the two previous non-pure function examples. However, in this pure function code, we have three variables in the Main function. This is because in functional programming, we cannot modify a variable we have initialized earlier. In the AddSpace function, instead of modifying the global variable or the argument, it now returns a string value in order to satisfy the functional rule. The following are the advantages we will have if we implement pure functions in our code:

Our code will be easier to read and maintain, because the function does not depend on external state and variables. It is also designed to perform specific tasks, which increases maintainability.
The design will be easier to change, since it is easier to refactor.
Testing and debugging will be easier, since it is quite easy to isolate a pure function.

Recursive functions
In the imperative programming world, we have destructive assignment to mutate the state of a variable. By using loops, one can change multiple variables to achieve the computational objective. In the functional programming world, since a variable cannot be destructively assigned, we need recursive function calls to achieve the effect of looping. Let's create a factorial function. In mathematical terms, the factorial of a nonnegative integer N is the product of all positive integers less than or equal to N. This is usually denoted by N!. We can denote the factorial of 7 as follows:

7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040

If we look deeper at the preceding formula, we will discover that its pattern is as follows:

N! = N * (N-1) * (N-2) * (N-3) * (N-4) * (N-5) ...

Now, let's take a look at the following factorial function in C#. It uses the imperative approach and can be found in the RecursiveImperative.csproj file:

public partial class Program
{
    private static int GetFactorial(int intNumber)
    {
        if (intNumber == 0)
        {
            return 1;
        }

        return intNumber * GetFactorial(intNumber - 1);
    }
}

As we can see, we invoke the GetFactorial() function from the GetFactorial() function itself. This is what we call a recursive function.
We can use this function by creating a Main() method containing the following code:

public partial class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(
            "Enter an integer number (Imperative approach)");
        int inputNumber = Convert.ToInt32(Console.ReadLine());
        int factorialNumber = GetFactorial(inputNumber);
        Console.WriteLine(
            "{0}! is {1}", inputNumber, factorialNumber);
    }
}

We invoke the GetFactorial() method and pass our desired number as the argument. The method then multiplies our number by the result of calling GetFactorial() with the argument decremented by 1. The recursion continues until the argument is equal to 0, in which case it returns 1. Now, let's compare the preceding recursive function in the imperative approach with one in the functional approach. We will use the power of the Aggregate operator in the LINQ feature to achieve this goal. We can find the code in the RecursiveFunctional.csproj file. The code will look like what is shown in the following:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(
            "Enter an integer number (Functional approach)");
        int inputNumber = Convert.ToInt32(Console.ReadLine());
        IEnumerable<int> ints = Enumerable.Range(1, inputNumber);
        int factorialNumber = ints.Aggregate((f, s) => f * s);
        Console.WriteLine(
            "{0}! is {1}", inputNumber, factorialNumber);
    }
}

In the preceding code, we initialize the ints variable, which contains the values from 1 to our desired integer number, and then we iterate over ints using the Aggregate operator. The output of RecursiveFunctional.csproj will be exactly the same as the output of RecursiveImperative.csproj. However, the code in RecursiveFunctional.csproj uses the functional approach.

The advantages and disadvantages of functional programming
So far, we have dealt with functional programming by creating code using the functional approach. Now, we can look at the advantages of the functional approach, such as the following:

The order of execution doesn't matter, since it is handled by the system in order to compute the value we have described rather than being defined by the programmer. In other words, the expressions are declarative.

Because functional programs are built on mathematical concepts, the system is designed with notation as close as possible to the mathematical way of expressing the concept.

Variables can be replaced by their values, since the evaluation of an expression can be done at any time. The functional code is then more mathematically traceable, because the program is allowed to be manipulated or transformed by substituting equals with equals. This feature is called referential transparency.

Immutability makes the functional code free of side effects. A shared variable, which is an example of a side effect, is a serious obstacle to creating parallel code and results in non-deterministic execution. By removing side effects, we can have a good coding approach.

The power of lazy evaluation will make the program run faster, because it only provides what we really require for the query result. Suppose we have a large amount of data and want to filter it by a specific condition, such as showing only the data that contains the word Name. In imperative programming, we would have to evaluate each operation on all the data. The problem is that when the operation takes a long time, the program needs more time to run as well. Fortunately, functional programming that applies LINQ performs the filtering operation only when it is needed; a short sketch of this behavior follows after this list. That's why functional programming saves us a lot of time by using lazy evaluation.

We have a solution for complex problems using composability. It is a principle that manages a problem by dividing it, and it gives pieces of the problem to several functions. The concept is similar to a situation when we organize an event and ask different people to take up particular responsibilities. By doing this, we can ensure that everything will be done properly by each person.
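The following is a small hand-written sketch (not one of the book's sample projects) showing the deferred execution behavior described in the lazy evaluation point above: the query is only evaluated when it is enumerated, and only as far as needed.

using System;
using System.Collections.Generic;
using System.Linq;

class LazyEvaluationExample
{
    static void Main()
    {
        IEnumerable<int> numbers = Enumerable.Range(1, 1000000);

        // Nothing is evaluated here yet; Where and Select only describe the query
        IEnumerable<int> query = numbers.Where(n => n % 2 == 0).Select(n => n * n);

        // Evaluation happens now, and stops as soon as five results are produced
        foreach (int n in query.Take(5))
        {
            Console.WriteLine(n);
        }
    }
}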
Besides the advantages of functional programming, there are several disadvantages as well. Here are some of them:

Since there is no state and no update of variables is allowed, a loss of performance will take place. The problem occurs when we deal with a large data structure and it needs to duplicate data, even though only a small part of the data has changed.

Compared to imperative programming, much more garbage will be generated in functional programming due to the concept of immutability, which needs more variables to handle specific assignments. Because we cannot control garbage collection, performance will decrease as well.

Summary
So we have become acquainted with the functional approach by discussing the introduction to functional programming. We have also compared the functional approach to the mathematical concepts used when we create functional programs. It is now clear that the functional approach uses a mathematical approach to compose a functional program. The comparison between functional and imperative programming also led us to the important point of distinguishing the two. It is now clear that in functional programming, the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach, the programmer focuses on the way of performing the task and on tracking changes in state.

Resources for Article:
Further resources on this subject:
Introduction to C# and .NET [article]
Why we need Design Patterns? [article]
Parallel Computing [article]

Storage Practices and Migration to Hyper-V 2016

Packt
01 Dec 2016
17 min read
In this article by Romain Serre, the author of Hyper-V 2016 Best Practices, we will learn about why Hyper-V projects fail, an overview of the failover cluster, Storage Replica, Microsoft System Center, migrating VMware virtual machines, and upgrading single Hyper-V hosts.

(For more resources related to this topic, see here.)

Why Hyper-V projects fail
Before you start deploying your first production Hyper-V host, make sure that you have completed a detailed planning phase. I have been called in to many Hyper-V projects to assist in repairing what a specialist has implemented. Most of the time, I start by correcting the design, because the biggest failures happen there but are only discovered later during implementation. I remember many projects in which I was called in to assist with installations and configurations during the implementation phases, because these were the project phases where a real expert was supposedly needed. However, based on experience, this notion is wrong. Two things are critical to a successful design phase: that it takes place at all (which is rare), and that someone with technological and organizational experience with Hyper-V is involved. If you don't have the latter, look out for a Microsoft Partner with a Gold Competency called Management and Virtualization on Microsoft Pinpoint (http://pinpoint.microsoft.com) and take a quick look at the reviews done by customers for successful Hyper-V projects. If you think it's expensive to hire a professional, wait until you hire an amateur. Having an expert in the design phase is the best way to accelerate your Hyper-V project.

Overview of the failover cluster
Before you start your first deployment in production, make sure you have defined the aim of the project and its SMART criteria, and have done a thorough analysis of the current state. After this, you should be able to plan the necessary steps to reach the target state, including a pilot phase. In a Failover Cluster, the hardware failure of a compute node is instantly detected. The virtual machines running on that particular node are powered off immediately because of the hardware failure on their compute node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications in just a few minutes. Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and its virtual hard disks. In case of a planned failover, that is, for patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster, utilizing the Windows Server Failover Clustering feature, it is indeed capable of running as an Active/Active cluster. To ensure that all these capabilities of a Failover Cluster actually work, an accurate planning and implementation process is required.

Storage Replica
Storage Replica is a new feature in Windows Server 2016 that provides block-level replication at the storage level, for a disaster recovery plan or for a stretched cluster.
Storage Replica can be used in the following scenarios:

Server-to-server storage replication
Storage replication in a stretch cluster
Cluster-to-cluster storage replication
Server-to-itself replication between volumes

Depending on the scenario and on the bandwidth and latency of the inter-site link, you can choose between synchronous and asynchronous replication. For further information about Storage Replica, you can read about this topic at http://bit.ly/2albebS. Storage Replica relies on the SMB3 protocol. You can leverage TCP/IP or RDMA on the network; I recommend implementing RDMA where possible to reduce latency and CPU load and to increase throughput. Compared to Hyper-V Replica, the Storage Replica feature replicates all virtual machines stored in a volume at the block level. Moreover, Storage Replica can replicate in synchronous mode, while Hyper-V Replica is always asynchronous. Finally, with Hyper-V Replica you have to specify the failover IP address because the replication is executed at the VM level, whereas with Storage Replica you don't need to specify a failover IP address; however, in case of a replication between two clusters in two different rooms, the VM networks must be configured in the destination room. The choice between Hyper-V Replica and Storage Replica depends on the disaster recovery plan you need. If you want to protect specific application workloads, you can use Hyper-V Replica. On the other hand, if you have a passive room ready to restart in case of issues in the active room, Storage Replica can be a great solution, because all the VMs in a volume will already be replicated. To deploy a replication between two clusters, you need two sets of storage based on iSCSI, SAS JBOD, Fibre Channel SAN, or Shared VHDX. For better performance, I recommend using SSDs for the Storage Replica logs.
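For a server-to-server scenario, replication is typically set up from PowerShell with the Storage Replica cmdlets. The following is only a rough sketch: the server names, volumes, and resource group names are placeholders, and you should check the exact cmdlets and parameters available on your build with Get-Command -Module StorageReplica before using them.

# Placeholders: SRV1/SRV2 are the two servers, D: holds data, E: holds the SR logs
# Validate the proposed topology first (generates a requirements report)
Test-SRTopology -SourceComputerName SRV1 -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV2 -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 10 -ResultPath C:\Temp

# Create a synchronous server-to-server partnership
New-SRPartnership -SourceComputerName SRV1 -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV2 -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -ReplicationMode Synchronous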
Microsoft System Center
Microsoft System Center 2016 is Microsoft's solution for advanced management of Windows Server and its components, along with its dependencies such as various hardware and software products. It consists of various components that support every stage of your IT services, from planning to operations, over backup to automation. System Center has existed since 1994 and has evolved continuously. It now offers a great set of tools for very efficient management of server and client infrastructures. It also offers the ability to create and operate whole clouds, run in your own data center or in a public cloud data center such as Microsoft Azure. Today, it's your choice whether to run your workloads on-premises or off-premises; System Center provides a standardized set of tools for a unique and consistent Cloud OS management experience. System Center does not add any new features to Hyper-V, but it does offer great ways to make the most out of it and ensure streamlined operating processes after its implementation. System Center is licensed via the same model as Windows Server, leveraging Standard and Datacenter editions at the physical host level. While every System Center component offers great value in itself, binding multiple components into a single workflow offers even more advantages, as shown in the following screenshot:

System Center overview

When do you need System Center? There is no right or wrong answer to this, and the answer most given by any IT consultant around the world is, "It depends". System Center adds value to any IT environment, starting with only a few systems. In my experience, a Hyper-V environment with up to three hosts and 15 VMs can be managed efficiently without the use of System Center. If you plan to use more hosts or virtual machines, System Center will definitely be a great solution for you. Let's take a look at the components of System Center.

Migrating VMware virtual machines
If you are running virtual machines on VMware ESXi hosts, there are really good options available for moving them to Hyper-V. There are different approaches to converting a VMware virtual machine to Hyper-V: from the inside of the VM at the guest level, running cold conversions with the VM powered off at the host level, running hot conversions on a running VM, and so on. I will give you a short overview of the tools currently available in the market.

System Center VMM
SCVMM should not be the first tool of your choice; take a look at MVMC combined with MAT to get equal functionality from a better working tool. The earlier versions of SCVMM allowed online or offline conversions of VMs; the current version, 2016, allows only offline conversions. Select a powered-off VM on a VMware host or from the SCVMM library share to start the conversion. The VM conversion will convert VMware-hosted virtual machines through vCenter and ensure that the entire configuration, such as memory, virtual processors, and other machine settings, is also migrated from the initial source. The tool also adds virtual NICs to the deployed virtual machine on Hyper-V. The VMware tools must be uninstalled before the conversion, because you won't be able to remove the VMware tools once the VM is no longer running on a VMware host. SCVMM 2016 supports ESXi hosts running 4.1 and 5.1, but not the latest ESX version, 5.5. SCVMM conversions are easy to automate through their integrated PowerShell support, and it's very easy to install upgraded Hyper-V Integration Services as part of the setup or by adding any kind of automation through PowerShell or System Center Orchestrator. Besides manually removing the VMware tools, using SCVMM is an end-to-end solution in the migration process. You can find some PowerShell examples for SCVMM-powered V2V conversion scripts at http://bit.ly/Y4bGp8. I don't recommend the use of this tool anymore, because Microsoft no longer invests in it.

Microsoft Virtual Machine Converter
Microsoft released the first version of the free solution accelerator Microsoft Virtual Machine Converter (MVMC) in 2013, and it should be available in version 3.1 by the release of this book. MVMC provides a small and easy option to migrate selected virtual machines to Hyper-V. It takes a very similar approach to the conversion as SCVMM does. The conversion happens at the host level and offers a fully integrated end-to-end solution. MVMC supports all recent versions of VMware vSphere. It will even uninstall the VMware tools and install the Hyper-V Integration Services. MVMC 2.0 works with all supported Hyper-V guest operating systems, including Linux. MVMC comes with a full GUI wizard as well as a fully scriptable command-line interface (CLI). Besides being a free tool, it is fully supported by Microsoft in case you experience any problems during the migration process. MVMC should be the first tool of your choice if you do not know which tool to use.
Like most other conversion tools, it does the actual conversion on the MVMC server itself and requires disk space there to host the original VMware virtual disk as well as the converted Hyper-V disk. MVMC even offers an add-on for VMware vCenter servers to start conversions directly from the vSphere console. The current release of MVMC is freely available at its official download site at http://bit.ly/1HbRIg7. Download MVMC to the conversion system and start the click-through setup. After finishing the installation, start the MVMC GUI by executing Mvmc.Gui.exe. The wizard guides you through some choices. MVMC is not only capable of migrating to Hyper-V, but also allows you to move virtual machines to Microsoft Azure. Follow these few steps to convert a VMware VM:

Select Hyper-V as a target.
Enter the name of the Hyper-V host you want this VM to run on, and specify a file share to use and the format of the disks you want to create. Choosing dynamically expanding disks should be the best option most of the time.
Enter the name of the ESXi server you want to use as a source, as well as valid credentials.
Select the virtual machine to convert. Make sure it has VMware tools installed. The VM can be either powered on or off.
Enter a workspace folder to store the converted disk.
Wait for the process to finish.

There is some additional guidance available at http://bit.ly/1vBqj0U. This is a great and easy way to migrate a single virtual machine. Repeat the steps for every other virtual machine you have, or use some automation.

Upgrading single Hyper-V hosts
If you are currently running a single host with an older version of Hyper-V and now want to upgrade this host on the same hardware, there is a limited set of decisions to be made. You want to upgrade the host with the least amount of downtime and without losing any data from your virtual machines. Before you start the upgrade process, make sure all components of your infrastructure are compatible with the new version of Hyper-V. Then it's time to prepare your hardware for this new version of Hyper-V by upgrading all firmware to the latest available version and downloading the necessary drivers for Windows Server 2016 with Hyper-V, along with its installation media. One of the most crucial questions in this upgrade scenario is whether you should use the integrated installation option called an in-place upgrade, where the existing operating system is transformed to the recent version of Hyper-V, or delete the current operating system and perform a clean installation. While the installation experience of in-place upgrades works well when only the Hyper-V role is installed, based on experience, upgraded systems are more likely to suffer problems. Numbers pulled from the Elanity support database show about 15 percent more support cases on systems upgraded from Windows Server 2008 R2 than on clean installations. Remember how fast and easy it is nowadays to do a clean install of Hyper-V; this is why it is highly recommended over upgrading existing installations. If you are currently using Windows Server 2012 R2 and want to upgrade to Windows Server 2016, note that we have not yet seen any differences in the number of support cases between the installation methods. However, with clean installations of Hyper-V being so fast and easy, I rarely use in-place upgrades. Before starting any type of upgrade scenario, make sure you have current backups of all affected virtual machines.
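As an extra safety net on top of your regular backups, you can quickly export every VM to a spare volume before you touch the host; the path below is only an example and needs enough free space.

# Export all VMs on this host as an additional pre-upgrade copy
# (this complements, but does not replace, a proper backup)
Get-VM | Export-VM -Path 'E:\PreUpgradeExport'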
Nonetheless, if you want to use the in-place upgrade, insert the Windows Server 2016 installation media and run this command from your current operating system:

Setup.exe /auto:upgrade

If it fails, it's most likely due to an incompatible application installed on the older operating system. Start the setup without the parameter to find out which applications need to be removed before executing the unattended setup. If you upgrade from Windows Server 2012 R2, there is no additional preparation needed; if you upgrade from older operating systems, make sure to remove all snapshots from your virtual machines.

Importing virtual machines
If you choose to do a clean installation of the operating system, you do not necessarily have to export the virtual machines first; just make sure all VMs are powered off and are stored on a different partition than your Hyper-V host OS. If you are using a SAN, disconnect all LUNs before the installation and reconnect them afterwards to ensure their integrity through the installation process. After the installation process, just reconnect the LUNs and set the disks online in diskpart or in Disk Management at Control Panel | Computer Management. If you are using local disks, make sure not to reformat the partition with your virtual machines on it. If you do need to reformat it, export the VMs to another location and import them back afterwards; this requires more effort, but it is safer. Set the partition online and then reimport the virtual machines. Before you start the reimport process, make sure all dependencies of your virtual machines are available, especially vSwitches. To import a single Hyper-V VM, use the following PowerShell cmdlet:

Import-VM -Path 'D:\VMs\VM01\Virtual Machines\2D5EECDA-8ECC-4FFC-ACEE-66DAB72C8754.xml'

To import all virtual machines from a specific folder, use this command:

Get-ChildItem D:\VMs -Recurse -Filter "Virtual Machines" | %{Get-ChildItem $_.FullName -Filter *.xml} | %{Import-VM $_.FullName -Register}

After that, all VMs are registered and ready for use on your new Hyper-V host. Make sure to update the Hyper-V Integration Services of all virtual machines before going back into production. If you still have virtual disks in the old .vhd format, it's now time to convert them to .vhdx files. Use this PowerShell cmdlet on powered-off VMs or standalone vDisks to convert a single .vhd file:

Convert-VHD -Path D:\VMs\testvhd.vhd -DestinationPath D:\VMs\testvhdx.vhdx

If you want to convert the disks of all your VMs, fellow MVPs Aidan Finn and Didier van Hoye have provided a great end-to-end solution to achieve this. It can be found at http://bit.ly/1omOagi; a minimal bulk-conversion sketch also follows at the end of this section. I often hear from customers that they don't want to upgrade their disks, so as to be able to revert to older versions of Hyper-V when needed. First, you should know that I have never met a customer who has done that, because there really is no technical reason why anyone should do this. Second, even if you wanted to make this backwards move, running virtual machines on older Hyper-V hosts is not supported if they had been deployed on more modern versions of Hyper-V before. The reason for this is very simple: Hyper-V does not offer a way of downgrading the Hyper-V Integration Services. The only way to move a virtual machine back to an older Hyper-V host is by restoring a backup of the VM made before the upgrade process.
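The following is the minimal bulk-conversion sketch mentioned above. It assumes the .vhd files under D:\VMs are not attached to running VMs and that there is enough free space for the converted copies; adjust the path to your own layout.

# Convert every standalone .vhd found under D:\VMs to the .vhdx format
Get-ChildItem 'D:\VMs' -Recurse -Filter *.vhd | ForEach-Object {
    Convert-VHD -Path $_.FullName -DestinationPath ($_.FullName + 'x')
}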
Exporting virtual machines
If you want to use another physical system running a newer version of Hyper-V, you have multiple possible options. They are as follows:

When using a SAN as shared storage, make sure all your virtual machines, including their virtual disks, are located on other LUNs than the host operating system. Disconnect all LUNs hosting virtual machines from the source host, connect them to the target host, and bulk import the VMs from the specified folders.
When using SMB3 shared storage from Scale-Out File Servers, make sure to switch access to the shares hosting the VMs over to the new Hyper-V hosts.
When using local hard drives and upgrading from Windows Server 2008 SP2 or Windows Server 2008 R2 with Hyper-V, it's necessary to export the virtual machines to a storage location reachable from the new host. Hyper-V servers running legacy versions of the OS (prior to 2012 R2) need to power off the VMs before an export can occur.

To export a virtual machine from a host, use the following PowerShell cmdlet:

Export-VM -Name VM -Path D:\

To export all virtual machines to a folder underneath the given root, use the following command:

Get-VM | Export-VM -Path D:\

In most cases, it is also possible to just copy the virtual machine folders containing the virtual hard disks and configuration files to the target location and import them on Windows Server 2016 Hyper-V hosts. However, the export method is more reliable and should be preferred. A good alternative for moving virtual machines can be the recreation of virtual machines. If you have another host up and running with a recent version of Hyper-V, it may be a good opportunity to also upgrade some guest OSes. For instance, Windows Server 2003 and 2003 R2 have been out of extended support since July 2015. Depending on your applications, it may now be the right choice to create new virtual machines with Windows Server 2016 as the guest operating system and migrate your existing workloads from the older VMs to these new machines.

Summary
In this article, we learned why Hyper-V projects fail, how to migrate VMware virtual machines, and how to upgrade single Hyper-V hosts. This article also covered an overview of the failover cluster and Storage Replica.

Resources for Article:
Further resources on this subject:
Hyper-V Basics [article]
The importance of Hyper-V Security [article]
Hyper-V building blocks for creating your Microsoft virtualization platform [article]


Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

Choosing the best place for an implementation
Automation of routine tasks
Creating the application structure by using code generation tools

(For more resources related to this topic, see here.)

Introduction
In this article you will learn that, when creating new projects, developers often face issues such as which components they should create, where to place them in the application structure, what each component should implement, what naming convention to follow, and so on. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component which will display different menus on your site depending on where we are in the application.

From one project to another, the developer's work is usually repeated. This holds true for tasks such as creating the project structure, configuration, creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton by using one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against a situation where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful for you to do everything manually. But eventually, all of us would like to reduce repetitive tasks. Phalcon tries to help you by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools help you simplify the creation of CRUD components for a regular application. Therefore, you can create working code in a matter of seconds without writing the code yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code which is called in turn on each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth, but instead we will work with tasks that are common for most projects, and implementations extending Phalcon.

Choosing the best place for an implementation
Let's pretend you want to add a custom component. As the case may be, this component allows you to change your site's navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out. Then you're asking yourself where the best place in the project is to put the code, where to place the files, how to name the classes, and how to make the autoloader load them.

Getting ready…
For successful implementation of this recipe you must have your application deployed.
By this we mean that you need to have a web server installed and configured for handling requests to your application, and that the application must be able to receive requests and have the necessary components implemented, such as controllers, views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…
Follow these steps to complete this recipe:

Create the library directory inside your application directory (apps/library/), if you haven't got one, where user components will be stored. Next, create the Elements (apps/library/Elements.php) component. This class extends Phalcon\Mvc\User\Component. Generally, this is not necessary, but it helps get access to application services quickly. The contents of Elements should be:

<?php
namespace Library;

use Phalcon\Mvc\User\Component;
use Phalcon\Mvc\View\Simple as View;

class Elements extends Component
{
    public function __construct()
    {
        // ...
    }

    public function getMenu()
    {
        // ...
    }
}

Now we register this class in the Dependency Injection container. We use a shared instance in order to prevent creating new instances on each service resolution:

$di->setShared('elements', function () {
    return new Library\Elements();
});

If your session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance for the same reason:

$di->setShared('session', function () {
    $session = new Phalcon\Session\Adapter\Files();
    $session->start();
    return $session;
});

Create the templates directory within your views directory (views/templates). Then you need to tell the class autoloader about the new namespace, which we have just introduced. Let's do it in the following way:

$loader->registerNamespaces([
    // The APP_PATH constant should point
    // to the project's root
    'Library' => APP_PATH . '/apps/library/',
    // ...
]);

Add the following code right after the body tag in the main layout of your application:

<div class="container">
    <div class="navbar navbar-inverse">
        <div class="container-fluid">
            <div class="navbar-header">
                <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top-menu" aria-expanded="false">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                <a class="navbar-brand" href="#">Blog 24</a>
            </div>
            <?php echo $this->elements->getMenu(); ?>
        </div>
    </div>
</div>

Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml:

<div class="collapse navbar-collapse" id="blog-top-menu">
    <ul class="nav navbar-nav">
        <li class="active">
            <a href="#">Home</a>
        </li>
    </ul>
    <ul class="nav navbar-nav navbar-right">
        <li>
            <?php if ($this->session->get('identity')): ?>
                <a href="#">Sign Out</a>
            <?php else: ?>
                <a href="#">Sign In</a>
            <?php endif; ?>
        </li>
    </ul>
</div>

Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the component's constructor:

public function __construct()
{
    $this->simpleView = new View();
    $this->simpleView->setDI($this->getDI());
}

And finally, implement the getMenu method as follows:

public function getMenu()
{
    $this->simpleView->setViewsDir($this->view->getViewsDir());
    return $this->simpleView->render('templates/topMenu');
}

Open the main page of your site to ensure that your top menu is rendered.
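The shared elements service is also reachable anywhere the DI container is available, not only in the layout. As a quick illustration, here is a hypothetical controller action (not part of the recipe itself) that fetches the rendered menu and hands it to the view:

<?php
use Phalcon\Mvc\Controller;

// Hypothetical controller, only to show that the shared 'elements'
// service registered above can be resolved by name wherever the DI is injected
class IndexController extends Controller
{
    public function indexAction()
    {
        // Renders views/templates/topMenu.phtml and returns the HTML
        $menuHtml = $this->elements->getMenu();

        // Pass it to the view instead of calling the service from the layout
        $this->view->setVar('topMenu', $menuHtml);
    }
}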
How it works…
The main idea of our component is to generate a top menu and to display the correct menu option depending on the situation, that is, whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace. This is exactly what we have done. However, we should take note of one important peculiarity: if you want to get access to your components quickly, even in HTML templates like $this->elements, then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container under the name elements. Since our component inherits Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view can be written in a longer form, $this->getDI()->getShared('view'), but the first one is obviously more concise.

Although not necessary, for application structure purposes it is better to use a separate directory for views that are not connected directly to specific controllers and actions. In our case, the directory views/templates serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml. When the getMenu method is called, our component renders the view topMenu.phtml and returns its HTML. In the getMenu method, we get the current path of all our views and set it on the Phalcon\Mvc\View\Simple component created earlier in the constructor. In the topMenu view, we access the session component, which we placed in the DI container earlier. When generating the menu, we check whether the user is authorized or not. In the former case, we use the Sign Out menu item; in the latter case, we display the menu item with an invitation to Sign In.

Automation of routine tasks
The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by means of code generation for components as well as a project skeleton. Most of the components of your application can be created with just one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting Ready…
Before you begin to work on this recipe, you should have a DBMS configured and a web server installed and configured for handling requests from your application. You may need to configure a virtual host (this is optional) for your application, which will receive and handle requests. You should be able to open your newly created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where {your-host-here} is your host name. You should have Git installed, too.

In this recipe, we assume that your operating system is Linux. The Developer Tools installation instructions for Mac OS X and Windows are similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating tables in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table using a DBMS other than MySQL may vary.
How to do it…
Follow these steps to complete this recipe:

Clone Developer Tools into your home directory:

git clone git@github.com:phalcon/phalcon-devtools.git devtools

Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation completion:

$ ./phalcon.sh
Phalcon Developer Tools Installer
Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
Installing Devtools...
Working dir is: /home/user/devtools
Generating symlink...
Done.
Devtools installed!

Run the phalcon command without arguments to see the available command list and your current Phalcon version:

$ phalcon
Phalcon DevTools (3.0.0)
Available commands:
  commands (alias of: list, enumerate)
  controller (alias of: create-controller)
  model (alias of: create-model)
  module (alias of: create-module)
  all-models (alias of: create-all-models)
  project (alias of: create-project)
  scaffold (alias of: create-scaffold)
  migration (alias of: create-migration)
  webtools (alias of: create-webtools)

Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

$ phalcon project myapp simple

Open the website which you have just created with the previous command in your browser. You should see a message about the successful installation. Create a database for your project:

mysql -e 'CREATE DATABASE myapp' -u root -p

You will need to configure your application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured your virtual host according to your project; the value of this parameter must be / or /myapp/. As a result, your configuration file should look like this:

<?php
use Phalcon\Config;

defined('APP_PATH') || define('APP_PATH', realpath('.'));

return new Config([
    'database' => [
        'adapter'  => 'Mysql',
        'host'     => 'localhost',
        'username' => 'root',
        'password' => '',
        'dbname'   => 'myapp',
        'charset'  => 'utf8',
    ],
    'application' => [
        'controllersDir' => APP_PATH . '/app/controllers/',
        'modelsDir'      => APP_PATH . '/app/models/',
        'migrationsDir'  => APP_PATH . '/app/migrations/',
        'viewsDir'       => APP_PATH . '/app/views/',
        'pluginsDir'     => APP_PATH . '/app/plugins/',
        'libraryDir'     => APP_PATH . '/app/library/',
        'cacheDir'       => APP_PATH . '/app/cache/',
        'baseUri'        => '/myapp/',
    ]
]);

Now, after you have configured the database access, let's create a users table in your database and fill it with the primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
('john@doe.com', 'John', 'Doe'),
('janie@doe.com', 'Janie', 'Doe');

After that, we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views.
Let's do it with just one command:

$ phalcon scaffold users

In your web browser, open the URL associated with your newly created Users resource and try to find one of the users from our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling). Finally, open your project in your file manager to see the whole project structure created with Developer Tools:

+-- app
¦   +-- cache
¦   +-- config
¦   +-- controllers
¦   +-- library
¦   +-- migrations
¦   +-- models
¦   +-- plugins
¦   +-- schemas
¦   +-- views
¦       +-- index
¦       +-- layouts
¦       +-- users
+-- public
    +-- css
    +-- files
    +-- img
    +-- js
    +-- temp

How it works…
We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool. Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't offer anything outstanding in terms of features, but we have saved the time of manually creating the application structure. Developer Tools did that for you! If, after this command completes, you examine your newly created project, you may notice that the primary application configuration has been generated as well, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the help command to see all available project creation options:

$ phalcon project help

In the modern world, you can hardly find a web application which works without access to a database. Our application isn't an exception. We created a database for our application, and then we created a users table and filled it with primary data. Of course, we need to supply our application, via the app/config/config.php file, with the database access parameters as well as the database name. After the successful database and table creation, we used the scaffold command to generate predefined code templates, in particular the Users controller with all the main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files.

Phalcon Developer Tools is equipped with a good amount of different useful tools. To see all the available options, you can use the help command. We have taken only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as the creation of the application skeleton), we can now use that time for more exciting tasks. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving, and it becomes more featureful day by day. If you have any problems, you can always visit the project on GitHub at https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…
You can find more information on Phalcon Developer Tools installation for Windows and OS X at https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at https://docs.phalconphp.com/en/latest/reference/install.html.

Creating the application structure by using code generation tools
In the following recipe, we will discuss the available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually.
Getting Ready…
Before you begin, you need to have Git installed, as well as any DBMS (for example, MySQL, PostgreSQL, SQLite, and the like), the Phalcon PHP extension (usually named php5-phalcon), and a PHP extension that offers database connectivity support using PDO (for example, php5-mysql, php5-pgsql, or php5-sqlite). You also need to be able to create tables in your database.

To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as the particular DBMS, is yours. Note that the syntax for creating a table with DBMSs other than MySQL may vary.

How to do it…
Follow these steps to complete this recipe:

First, you need to decide where you will install Developer Tools. Suppose that you are going to place Developer Tools in your home directory. Then, go to your home directory and run the following command:

git clone git@github.com:phalcon/phalcon-devtools.git

Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

./phalcon.sh

Now that you have Developer Tools installed, browse to the directory where you intend to create your project, and run the following command:

phalcon project blog modules

If there were no errors during the previous step, create a Help controller by running the following command:

phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction.

Open the database configuration in the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment (a rough sketch of what this block might look like follows below). Enter the name and password of an existing database user that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now.

Now, after you have configured the database access, let's create a users table in your database.
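As a point of reference only, the edited database block in blog/apps/frontend/config/config.php might end up looking roughly like the following sketch. The adapter, credentials, and database name (blog here) are assumptions for illustration and must match your own environment:

'database' => [
    'adapter'  => 'Mysql',      // or Postgresql / Sqlite, depending on your DBMS
    'host'     => 'localhost',
    'username' => 'root',       // an existing user with access to the database
    'password' => '',
    'dbname'   => 'blog',       // the application database name
    'charset'  => 'utf8',
],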
Create the users table and fill it with the primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
('john@doe.com', 'John', 'Doe'),
('janie@doe.com', 'Janie', 'Doe');

Next, let's create the controller, views, layout, and model by using the scaffold command:

phalcon scaffold users --ns-controllers=Blog\Frontend\Controllers --ns-models=Blog\Frontend\Models

Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all the actions needed for user search, editing, creating, displaying, and deleting.

To check whether all actions work as designed, go to http://{your-server}/users/index (provided you have a web server installed and configured for this recipe). In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the user layout is created in the apps/frontend/views/layouts folder. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown.

How it works…
In the world of programming, code generation is designed to lessen the burden of manually creating repeated code by using predefined code templates. The Phalcon framework provides powerful code generation tools, which come with Phalcon Developer Tools.

We start with the installation of Phalcon Developer Tools. Note that if you already have Developer Tools installed, you should skip the installation steps. Next, we generate a fully functional MVC application which implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. To that end, we use only one command. It's really great, isn't it?

Our next step is creating a controller. In our example, we use HelpController, which demonstrates this approach to creating controllers. Next, we create the users table in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, which is called Scaffold. Using only one command in the Terminal, we generate the UsersController controller with all the necessary actions and appropriate views. Besides this, we get the Users model and the required layout. If you have a web server configured, you can check out the work of Developer Tools at http://{your-server}/users/index.

When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on this data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them.
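If you later want to extend the generated controller, you work with the namespaced model directly. The following is a minimal sketch of a hypothetical extra action added to the generated UsersController; it is not part of the generated code, and the namespaces simply mirror the ones passed to the scaffold command above:

<?php
namespace Blog\Frontend\Controllers;

use Blog\Frontend\Models\Users;

class UsersController extends ControllerBase
{
    // A hypothetical action listing the ten most recently created users
    public function latestAction()
    {
        $this->view->users = Users::find([
            'order' => 'created_at DESC',
            'limit' => 10,
        ]);
    }
}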
All of the generated functionality described above was made possible with one command. We have discussed only some of the features of code generation; actually, Phalcon Developer Tools has many more. For help on the available commands, you can use the phalcon command (without arguments).

There's more…
For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on Scaffold, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary
In this article, you learned about the automation of routine tasks and creating the application structure.

Resources for Article:
Further resources on this subject:
Using Phalcon Models, Views, and Controllers [Article]
Phalcon's ORM [Article]
Planning and Structuring Your Test-Driven iOS App [Article]

How to create a Breakout game with Godot Engine – Part 2

George Marques
23 Nov 2016
8 min read
In part one of this article you learned how to set up a project and create a basic scene with input and scripting. By now you should grasp the basic concepts of the Godot Engine, such as nodes and scenes. Here we're going to complete the game up to a playable demo. Game scene Let's create new scene to hold the game itself. Click on the menu Scene > New Scene and add a Node2D as its root. You may feel tempted to resize this node to occupy the scene, but you shouldn't. If you resize it you'll be changing the scale and position, which will be reflected on child nodes. We want the position and scale both to be (0, 0). Rename the root node to Game and save the scene as game.tscn. Go to the Project Settings, and in the Application section, set this as the main_scene option. This will make the Game scene run when the game starts. Drag the paddle.tscn file from the FileSystem dock and drop it over the root Game node. This will create a new instance of the paddle scene. It's also possible to click on the chain icon on the Scene dock and select a scene to instance. You can then move the instanced paddle to the bottom of the screen where it should stay in the game (use the guides in the editor as reference). Play the project and you can then move the paddle with your keyboard. If you find the movement too slow or too fast, you can select the Paddle node and adjust the Speed value on the Inspector because it's an exported script variable. This is a great way to tweak the gameplay without touching the code. It also allows you to put multiple paddles in the game, each with its own speed if you wish. To make this better, you can click the Debug Options button (the last one on the top center bar) and activate Sync Scene Changes. This will reflect the changes on the editor in the running game, so you can set the speed without having to stop and play again. The ball Let's create a moving object to interact with. Make a new scene and add a RigidBody2D as the root. Rename it to Ball and save the scene as ball.tscn. The rigid body can be moved by the physics engine and interact with other physical objects, like the static body of the paddle. Add a Sprite as a child node and set the following image as its texture: Ball Now add a CollisionShape2D as a child of the Ball. Set its shape to new CircleShape2D and adjust the radius to cover the image. We need to adjust some of the Ball properties to behave in an appropriate way for this game. Set the Mode property to Character to avoid rotation. Set the Gravity Scale to 0 so it doesn't fall. Set the Friction to 0 and the Damp Override > Linear to 0 in to avoid the loss of momentum. Finally, set the Bounce property to 1, as we want the ball to completely bounce when touching the paddle. With that done, add the following script to the ball, so it starts moving when the scene plays: extends RigidBody2D # Export the ball speed to be changed in the editor export var ball_speed = 150.0 func _ready(): # Apply the initial impulse to the ball so it starts moving # It uses a bit of vector math to make the speed consistent apply_impulse(Vector2(), Vector2(1, 1).normalized() * ball_speed) Walls Going back to the Game scene, instance the ball as a child of the root node. We're going to add the walls so the ball doesn't get lost in the world. Add a Node2D as a child of the root and rename it to Walls. This will be the root for the wall nodes, to keep things organized. 
As a child of that, add four StaticBody2D nodes, each with its own rectangular collision shape to cover the borders of the screen. You'll end up with something like the following:

Walls

By now you can play the game a little bit and use the paddle to deflect the ball or leave it to bounce on the bottom wall.

Bricks
The last piece of this puzzle is the bricks. Create a new scene, add a StaticBody2D as the root and rename it to Brick. Save the scene as brick.tscn. Add a Sprite as its child and set the texture to the following image:

Brick

Add a CollisionShape2D and set its shape to rectangle, making it cover the whole image. Now add the following script to the root to make a little bit of magic:

# Tool keyword makes the script run in editor
# In this case you can see the color change in the editor itself
tool
extends StaticBody2D

# Export the color variable and a setter function to pass it to the sprite
export (Color) var brick_color = Color(1, 1, 1) setget set_color

func _ready():
    # Set the color when first entering the tree
    set_color(brick_color)

# This is a setter function and will be called whenever the brick_color variable is set
func set_color(color):
    brick_color = color
    # We make sure the node is inside the tree otherwise it cannot access its children
    if is_inside_tree():
        # Change the modulate property of the sprite to change its color
        get_node("Sprite").set_modulate(color)

This will allow you to set the color of the brick using the Inspector, removing the need to make a scene for each brick color. To make it easier to see, you can click the eye icon beside the CollisionShape2D to hide it.

Hide CollisionShape2D

The last thing to be done is to make the brick disappear when touched by the ball. Using the Node dock, add the group brick to the root Brick node. Then go back to the Ball scene and, again using the Node dock, but this time in the Signals section, double-click the body_enter signal. Click the Connect button with the default values. This will open the script editor with a new function. Replace it with this:

func _on_Ball_body_enter( body ):
    # If the body that was just touched is a member of the "brick" group
    if body.is_in_group("brick"):
        # Mark it for deletion in the next idle frame
        body.queue_free()

Using the Inspector, change the Ball node to enable the Contact Monitor property and increase Contacts Reported to 1. This will make sure the signal is sent when the ball touches something.

Level
Make a new scene for the level. Add a Node2D as the root, rename it to Level 1 and save the scene as level1.tscn. Now instance a brick in the scene. Position it anywhere, set a color using the Inspector and then duplicate it. You can repeat this process to make the level look the way you want. Using the Edit menu, you can set a grid with snapping to make it easier to position the bricks. Then go back to the Game scene and instance the level there as a child of the root. Play the game and you will finally see the ball bouncing around and destroying the bricks it touches.

Breakout Game

Going further
This is just a basic tutorial showing some of the fundamental aspects of the Godot Engine. The Node and Scene system, physics bodies, scripting, signals, and groups are very useful concepts, but not all that Godot has to offer. Once you get acquainted with those, it's easy to learn other functions of the engine. The finished game in this tutorial is just bare bones. There are many things you can do, such as adding a start menu, progressing through the levels as they are finished, and detecting when the player loses.
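Detecting when the player loses, for example, can reuse the same group-and-signal pattern used for the bricks. The sketch below is only one possible approach and is not part of the original tutorial: it assumes you add the bottom wall to a group named "bottom" (via the Node dock, just like the brick group) and then extend the ball's existing signal handler:

func _on_Ball_body_enter( body ):
    # Destroy bricks, exactly as before
    if body.is_in_group("brick"):
        body.queue_free()
    # If the ball reached the bottom wall, the player missed it
    elif body.is_in_group("bottom"):
        print("Game over!")
        # Remove the ball; a real game would show a menu or restart the level here
        queue_free()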
Thankfully, Godot makes all those things very easy, and it should not take much effort to turn this into a complete game.

Author: George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also providing solutions to the questions the community has.

Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button in your application. (For more resources related to this topic, see here.) Introduction You know how to start your adventure by developing universal applications for smartphones, tablets, and desktops running on the Windows 10 operating system. In the next step, it is crucial to get to know how to design particular pages within the application to provide the user with a convenient user interface that works smoothly on screens with various resolutions. Fortunately, designing the user interface is really simple using the XAML language, as well as Microsoft Visual Studio Community 2015. A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, one can easily arrange controls in various variants, either vertically, horizontally, or in a grid. This is not all; developers could prepare their own controls as well. Such controls could be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops. You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control as well as how to programmatically handle controls. Thus, some controls could either change their appearance, or the new controls could be added to the page when some specific conditions are met. Another important question is how to provide the user with a consistent user interface within the whole application. While developing solutions for the Windows 10 operating system, such a task could be easily accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that could be applied to either particular controls or to all the controls of a given type. At the end, you could ask yourself a simple question, "Why should I restrict access to my new awesome application only to people who know a particular language in which the user interface is prepared?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization will use additional resource files, so translations could be prepared not by a developer, but by a specialist who knows the given language well. Adding a button When developing applications, you can use a set of predefined controls among which a button exists. It allows you to handle the event of pressing the button by a user. Of course, the appearance of the button could be easily adjusted, for instance, by choosing a proper background or border, as you could see in this recipe. The button can present textual content; however, it can also be adjusted to the user's needs, for instance, by choosing a proper color or font size. This is not all because the content shown on the button could not be only textual. For instance, you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to the small icon that visually informs about the operation. Such modifications are presented in the following part of this recipe as well. Getting ready To step through this recipe, you only need the automatically generated project. 
How to do it… Add a button to the page by modifying the content of the MainPage.xaml file, as follows: <Page (...)> <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"> <Button Content="Click me!" Foreground="#0a0a0a" FontWeight="SemiBold" FontSize="20" FontStyle="Italic" Background="LightBlue" BorderBrush="RoyalBlue" BorderThickness="5" Padding="20 10" VerticalAlignment="Center" HorizontalAlignment="Center" /> </Grid> </Page> Generate a method for handling the event of clicking the button by pressing the button (either in a graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows: private void Button_Click(object sender, RoutedEventArgs e) { } How it works… In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties that are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness. The first property chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand side and 10 pixels at the top and bottom. The button presents the Click me! text defined as a value of the Content property. The text is shown in the color #0a0a0a with semi-bold italic font with size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively. If you run the application on a local machine, you should see the following result: It is worth mentioning that the IDE supports a live preview of the designed page. So, you can modify the values of particular properties and have real-time feedback regarding the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see an impact of each introduced change. There's more… As already mentioned, even the Button control has many advanced features. For example, you could place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now. First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows: <Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center"> <Image Source="/Assets/Image.jpg" /> </Button> Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window, shown as follows: In the Add Existing Item window, choose the Image.jpg file and click on the Add button. As you could see, the previous example uses the Image control. In this recipe, no more information about such a control is presented because it is the topic of one of the next recipes, namely Adding an image. If you run the application now, you should see a result similar to the following: The second additional example presents a button with a text over an image. To do so, let's modify the XAML code, as follows: <Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center"> <Grid> <Image Source="/Assets/Image.jpg" /> <TextBlock Text="Click me!" 
Foreground="White" FontWeight="Bold" FontSize="28" VerticalAlignment="Bottom" HorizontalAlignment="Center" Margin="10" /> </Grid> </Button> You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of such controls is not explained in the current recipe. If you run the application now, you should see a result similar to the following: As the last example, you will see a button that contains both a textual label and an icon. Such a solution could be accomplished using the StackPanel, TextBlock, and Image controls, as you could see in the following code snippet: <Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20"> <StackPanel Orientation="Horizontal"> <Image Source="/Assets/Icon.png" MaxHeight="32" /> <TextBlock Text="Accept" Foreground="White" FontSize="28" Margin="20 0 0 0" /> </StackPanel> </Button> Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe. The result should be similar to the following: Resources for Article: Further resources on this subject: Deployment and DevOps [article] Introduction to C# and .NET [article] Customizing Kernel and Boot Sequence [article]

Android Game Development with Unity3D

Packt
23 Nov 2016
8 min read
In this article by Wajahat Karim, author of the book Mastering Android Game Development with Unity, we will be creating addictive, fun games using a very famous game engine called Unity3D. In this article, we will cover the following topics:

Game engines and Unity3D
Features of Unity3D
Basics of Unity game development

(For more resources related to this topic, see here.)

Game engines and Unity3D
A game engine is a software framework designed for the creation and development of video games. Many tools and frameworks are available for game designers and developers to code a game quickly and easily without building from the ground up. As time passed, game engines became more mature and easier for developers to use, with feature-rich environments. Starting from native code frameworks for Android such as AndEngine, Cocos2d-x, LibGDX, and so on, game engines moved on to providing clean user interfaces and drag-and-drop functionality to make game development easier for developers. These engines include lots of tools which differ in user interface, features, porting, and many more things, but all have one thing in common: they create video games in the end.

Unity (http://unity3d.com) is a cross-platform game engine developed by Unity Technologies. It made its first public announcement at the Apple Worldwide Developers Conference in 2005 and supported game development only for Mac OS, but since then it has been extended to target more than 15 platforms for desktop, mobile, and consoles. It is notable for its one-click ability to port games to multiple platforms, including BlackBerry 10, Windows Phone 8, Windows, OS X, Linux, Android, iOS, Unity Web Player (including Facebook), Adobe Flash, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox 360, Xbox One, Wii U, and Wii.

Unity has a fantastic interface, which lets developers manage the project really efficiently from the word go. It has nice drag-and-drop functionality and lets you connect behavior scripts written in either C#, JavaScript (or UnityScript), or Boo to visual objects to define custom logic and functionality quite easily. Unity has proven quite easy to learn for new developers who are just starting out with game development. Now larger studios have also started using it, and for good reasons. Unity is one of those engines that provide support for both 2D and 3D games without putting developers in trouble or confusing them. Due to its popularity all over the game development industry, it has a vast collection of online tutorials, great documentation, and a very helpful community of developers.

Features of Unity3D
Unity is a game development ecosystem comprising a powerful rendering engine, intuitive tools, rapid workflows for 2D and 3D games, all-in-one deployment support, and thousands of ready-made free and paid assets, along with a helpful developer community.
The feature list includes the following:

Easy workflow, allowing developers to rapidly assemble scenes in an intuitive editor workspace
Quality game creation, with AAA visuals, high-definition audio, and full-throttle action without any glitches on screen
Dedicated tools for both 2D and 3D game creation, with shared conventions to make it easy for developers
A unique and flexible animation system to create natural animations with less time-consuming effort
Smooth frame rate with reliable performance on all the platforms where developers publish their games
One-click ability to deploy to all platforms, from desktops, browsers, and mobiles to consoles, within minutes
Reduced development time by using ready-made, reusable assets available in the huge Asset Store

Basics of Unity game development
Before delving into the details of Unity3D and game development concepts, let's have a look at some of the very basics of Unity 5.0. We will go through the Unity interface, menu items, using assets, creating scenes, and publishing builds.

Unity editor interface
When you launch Unity 5.0 for the first time, you will be presented with an editor with a few panels on the left, right, and bottom of the screen. The following screenshot shows the editor interface when it's first launched:

Fig 1.7 Unity 5 editor interface at first launch

First of all, take your time to look over the editor, and become a little familiar with it. The Unity editor is divided into different small panels and views, which can be dragged to customize the workspace according to the developer/designer's needs. Unity 5 comes with some prebuilt workspace layout templates, which can be selected from the Layout drop-down menu at the top-right corner of the screen, as shown in the following screenshot:

Fig 1.8 Unity 5 editor layouts

The layout currently displayed in the editor shown in the preceding screenshot is the Default layout. You can select these layouts and see how the editor's interface changes, and how the different panels are placed at different positions in each layout. This book uses the 2 by 3 workspace layout for the whole game. The following figure shows the 2 by 3 workspace with the names of the views and panels highlighted:

Fig 1.9 Unity 5 2 by 3 Layout with views and panel names

As you can see in the preceding figure, the Unity editor contains different views and panels. Every panel and view has a specific purpose, which is described as follows:

Scene view
The Scene view is the whole stage for game development, and it contains every asset in the game, from a tiny point to any heavy 3D model. The Scene view is used to select and position environments, characters, enemies, the player, the camera, and all other objects which can be placed on the stage for the game. All those objects which can be placed and shown in the game are called game objects. The Scene view allows developers to manipulate game objects: selecting, scaling, rotating, deleting, moving, and so on. It also provides some controls such as navigation and transformation. In simple words, the Scene view is the interactive sandbox for developers and designers.

Game view
The Game view is the final representation of how your game will look when published and deployed on the target devices, and it is rendered from the cameras of the scene. This view is connected to the play mode navigation bar in the center at the top of the whole Unity workspace. The play mode navigation bar is shown in the following figure:
Fig 1.14 Play mode bar When the game is played in the editor, this control bar gets changed to blue color. A very interesting feature of Unity is that it allows developers to pause the game and code while running, and the developers can see and change the properties, transforms, and much more at runtime, without recompiling the whole game, for quick workflow. Hierarchy view The Hierarchy view is the first point to select or handle any game object available in the scene. This contains every game object in the current scene. It is tree-type structure, which allows developers to utilize the parent and child concept on the game objects easily. The following figure shows a simple Hierarchy view: Fig 1.16 Hierarchy view Project browser panel This panel looks like a view, but it is called the Project browser panel. It is an embedded files directory in Unity, and contains all the files and folders included in the game project. The following figure shows a simple Project browser panel: Fig 1.17 Project browser panel The left side of the panel shows a hierarchal directory, while the rest of the panel shows the files, or, as they are called, assets in Unity. Unity represents these files with different icons to differentiate these according to their file types. These files can be sprite images, textures, model files, sounds, and so on. You can search any specific file by typing in the search text box. On the right side of search box, there are button controls for further filters such as animation files, audio clip files, and so on. An interesting thing about the Project browser panel is that if any file is not available in the Assets, then Unity starts looking for it on the Unity Asset Store, and presents you with the available free and paid assets. Inspector panel This is the most important panel for development in Unity. Unity structures the whole game in the form of game objects and assets. These game objects further contain components such as transform, colliders, scripts, meshes, and so on. Unity lets developers manage these components of each game object through the Inspector panel. The following figure shows a simple Inspector panel of a game object: Fig 1.18 Inspector panel These components vary in types, for example, Physics, Mesh, Effects, Audio, UI, and so on. These components can be added in any object by selecting it from the Component menu. The following figure shows the Component menu: Fig 1.19 Components menu Summary In this article, you learned about game engines, such as Unity3D, which is used to create games for Android devices. We also discussed the important features of Unity along with the basics of its development environment. Resources for Article: Further resources on this subject: The Game World [article] Customizing the Player Character [article] Animation features in Unity 5 [article]
Modal Close icon