Thinking Probabilistically

Packt
04 Oct 2016
16 min read
In this article by Osvaldo Martin, the author of the book Bayesian Analysis with Python, we will learn that Bayesian statistics has been developing for more than 250 years. During this time it has enjoyed as much recognition and appreciation as disdain and contempt. In the last few decades it has gained an increasing amount of attention from people in statistics, almost all the other sciences, engineering, and even outside the walls of the academic world. This revival has been possible due to theoretical and computational developments; modern Bayesian statistics is mostly computational statistics. The need for flexible and transparent models, and for a more intuitive interpretation of the results of a statistical analysis, has only contributed to the trend.

Here we will adopt a pragmatic approach to Bayesian statistics and will not care too much about other statistical paradigms and their relationship with Bayesian statistics. The aim of this book is to learn how to do Bayesian statistics with Python; philosophical discussions are interesting, but they have already been discussed elsewhere in a much richer way than we could manage in these pages. We will use a computational and modeling approach: we will learn to think in terms of probabilistic models and to apply Bayes' theorem to derive the logical consequences of our models and data. Models will be coded using Python and PyMC3, a great library for Bayesian statistics that hides most of the mathematical details of Bayesian analysis from the user.

Bayesian statistics is theoretically grounded in probability theory, so it is no wonder that many books about Bayesian statistics are full of mathematical formulas requiring a certain level of mathematical sophistication. Nevertheless, programming allows us to learn and do Bayesian statistics with only modest mathematical knowledge.
This is not to say that learning the mathematical foundations of statistics is useless; don't get me wrong, that can certainly help you build better models and gain an understanding of problems, models, and results. In this article, we will cover the following topics:

Statistical modeling
Probabilities and uncertainty

Statistical modeling

Statistics is about collecting, organizing, analyzing, and interpreting data, and hence statistical knowledge is essential for data analysis. Another useful skill when analyzing data is knowing how to write code in a programming language such as Python. Manipulating data is usually necessary given that we live in a messy world with even messier data, and coding helps to get things done. Even if your data is clean and tidy, programming will still be very useful since, as we will see, modern Bayesian statistics is mostly computational statistics.

Most introductory statistics courses, at least for non-statisticians, are taught as a collection of recipes that more or less go like this: go to the statistical pantry, pick one can and open it, add data to taste, and stir until obtaining a consistent p-value, preferably under 0.05 (if you don't know what a p-value is, don't worry; we will not use them in this book). The main goal in this type of course is to teach you how to pick the proper can. We will take a different approach: we will also learn some recipes, but this will be home-made food rather than canned food; we will learn how to mix fresh ingredients to suit different gastronomic occasions. But before we can cook, we must learn some statistical vocabulary and concepts.

Exploratory data analysis

Data is an essential ingredient of statistics. Data comes from several sources, such as experiments, computer simulations, surveys, field observations, and so on.
If we are the ones generating or gathering the data, it is always a good idea to first think carefully about the questions we want to answer and which methods we will use, and only then proceed to get the data. In fact, there is a whole branch of statistics dealing with data collection, known as experimental design. In the era of the data deluge, we can sometimes forget that getting data is not always cheap. For example, while it is true that the Large Hadron Collider (LHC) produces hundreds of terabytes a day, its construction took years of manual and intellectual effort. In this book we will assume that we have already collected the data and that the data is clean and tidy, something rarely true in the real world. We make these assumptions in order to focus on the subject of this book. If you want to learn how to use Python for cleaning and manipulating data, as well as a primer on statistics and machine learning, you should probably read Python Data Science Handbook by Jake VanderPlas.

OK, so let's assume we have our dataset; it is usually a good idea to explore and visualize it in order to get some idea of what we have in our hands. This can be achieved through what is known as Exploratory Data Analysis (EDA), which basically consists of the following:

Descriptive statistics
Data visualization

The first, descriptive statistics, is about using measures (or statistics) to summarize or characterize the data in a quantitative manner. You probably already know that you can describe data using the mean, mode, standard deviation, interquartile range, and so forth. The second, data visualization, is about inspecting the data visually; you are probably familiar with representations such as histograms, scatter plots, and others.
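As a quick illustration of the descriptive-statistics half of EDA, here is a minimal sketch using only Python's standard library; the small dataset is made up purely for the example:

```python
import statistics

# A small, made-up sample of measurements
data = [2.1, 2.5, 2.2, 2.8, 2.4, 2.6, 2.3]

mean = statistics.mean(data)      # central tendency
median = statistics.median(data)  # robust center
stdev = statistics.stdev(data)    # spread (sample standard deviation)

print('mean={:.2f} median={:.2f} stdev={:.2f}'.format(mean, median, stdev))
```

In practice you would compute these with NumPy or pandas over a real dataset, but the idea is the same: a handful of numbers that summarize the data before any modeling.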
While EDA was originally thought of as something you apply to data before doing any complex analysis, or even as an alternative to complex model-based analysis, throughout the book we will learn that EDA is also useful for understanding, interpreting, checking, summarizing, and communicating the results of Bayesian analysis.

Inferential statistics

Sometimes plotting our data and computing simple numbers, such as the average, is all we need. Other times we want to go beyond our data to understand the underlying mechanism that could have generated it, or maybe we want to make predictions for future data, or we need to choose among several competing explanations for the same data. That's the job of inferential statistics. To do inferential statistics, we will rely on probabilistic models. There are many types of model, and most of science (and, I will add, all of our understanding of the real world) proceeds through models. The brain is just a machine that models reality, whatever reality might be; see http://www.tedxriodelaplata.org/videos/m%C3%A1quina-construye-realidad.

What are models? Models are simplified descriptions of a given system (or process). Those descriptions are purposely designed to capture only the most relevant aspects of the system; hence, most models do not pretend to explain everything. On the contrary, if we have a simple and a complex model and both explain the data well, we will generally prefer the simpler one. Model building, no matter which type of model you are building, is an iterative process following more or less the same basic rules. We can summarize the Bayesian modeling process in three steps: Given some data and some assumptions about how this data could have been generated, we build models. Most of the time these models will be crude approximations, but most of the time that is all we need.
Then we use Bayes' theorem to add data to our models and derive the logical consequences of mixing the data with our assumptions. We say we are conditioning the model on our data. Lastly, we check that the model makes sense according to different criteria, including our data and our expertise on the subject we are studying. In general, we will find ourselves performing these three steps in a non-linear, iterative fashion. Sometimes we will retrace our steps at any given point: maybe we made a silly programming mistake, maybe we found a way to change and improve the model, maybe we need to add more data.

Bayesian models are also known as probabilistic models because they are built using probabilities. Why probabilities? Because probabilities are the correct mathematical tool for dealing with uncertainty in our data and models, so let's take a walk through the garden of forking paths.

Probabilities and uncertainty

While probability theory is a mature and well-established branch of mathematics, there is more than one interpretation of what probabilities are. To a Bayesian, a probability is a measure that quantifies the uncertainty level of a statement. If we know nothing about coins and have no data about coin tosses, it is reasonable to think that the probability of a coin landing heads could take any value between 0 and 1; that is, in the absence of information, all values are equally likely, and our uncertainty is maximal. If we know instead that coins tend to be balanced, we may say that the probability of a coin landing heads is exactly 0.5, or maybe around 0.5 if we admit that the balance is not perfect. If we collect data, we can update these prior assumptions and hopefully reduce the uncertainty about the bias of the coin.
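To make the coin example concrete, here is a hedged sketch (pure Python, no PyMC3) that approximates the posterior over the coin's bias on a grid, starting from a uniform prior; the data, 6 heads in 9 tosses, is invented for illustration:

```python
# Grid approximation of the posterior for a coin's bias theta,
# assuming a uniform prior and a binomial likelihood.
heads, tosses = 6, 9  # made-up data for the example
n = 1000              # grid resolution

grid = [(i + 0.5) / n for i in range(n)]  # theta values in (0, 1)
prior = [1.0] * n                         # uniform prior: maximal uncertainty
like = [t**heads * (1 - t)**(tosses - heads) for t in grid]

# Bayes' theorem, up to normalization: posterior ∝ prior * likelihood
unnorm = [p * l for p, l in zip(prior, like)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Collecting data pulls us away from "all values equally likely"
post_mean = sum(t * p for t, p in zip(grid, posterior))
print(round(post_mean, 3))  # close to the analytic value (heads + 1) / (tosses + 2)
```

The grid approximation agrees with the analytic Beta-posterior mean of 7/11 ≈ 0.636, showing how the flat prior has been sharpened by nine observations.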
Under this definition of probability, it is totally valid and natural to ask about the probability of life on Mars, the probability of the mass of the electron being 9.1 x 10^-31 kg, or the probability of the 9th of July of 1816 having been a sunny day. Notice, for example, that life on Mars either exists or it does not; the outcome is binary, but what we are really asking is how likely it is to find life on Mars given our data and what we know about biology and the physical conditions on that planet. The statement is about our state of knowledge and not, directly, about a property of nature. We use probabilities because we cannot be sure about the events, not because the events are necessarily random. Since this definition of probability is about our epistemic state of mind, it is sometimes referred to as the subjective definition of probability, which explains the slogan of subjective statistics often attached to the Bayesian paradigm. Nevertheless, this definition does not mean that all statements are equally valid and anything goes; it is about acknowledging that our understanding of the world is imperfect and conditioned by the data and models we have. There is no such thing as a model-free or theory-free understanding of the world; even if it were possible to free ourselves from our social preconditioning, we would end up with a biological limitation: our brain, shaped by the evolutionary process, is wired with models of the world. We are doomed to think like humans, and we will never think like bats or anything else! Moreover, the universe is an uncertain place, and all we can do is make probabilistic statements about it. Notice that it does not matter whether the underlying reality of the world is deterministic or stochastic; we are using probability as a tool to quantify uncertainty.

Logic is about thinking without making mistakes. In Aristotelian or classical logic, we can only have statements that are true or false.
In the Bayesian definition of probability, certainty is just a special case: a true statement has a probability of 1, and a false one has a probability of 0. We would assign a probability of 1 to life on Mars only after having conclusive data indicating that something is growing, reproducing, and doing other activities we associate with living organisms. Notice, however, that assigning a probability of 0 is harder, because we can always think that there is some Martian spot left unexplored, or that we made mistakes in some experiment, or of several other reasons that could lead us to falsely believe life is absent on Mars when it is not. Interestingly enough, Cox mathematically proved that if we want to extend logic to contemplate uncertainty, we must use probabilities and probability theory, from which Bayes' theorem is just a logical consequence, as we will see soon. Hence, another way of thinking about Bayesian statistics is as an extension of logic for dealing with uncertainty, something that clearly has nothing to do with subjective reasoning in the pejorative sense.

Now that we know the Bayesian interpretation of probability, let's see some of the mathematical properties of probabilities. For a more detailed study of probability theory, you can read Introduction to Probability by Joseph K. Blitzstein and Jessica Hwang. Probabilities are numbers in the interval [0, 1], that is, numbers between 0 and 1, including both extremes. Probabilities follow some rules; one of these rules is the product rule:

p(A, B) = p(A|B) p(B)

We read this as follows: the probability of A and B is equal to the probability of A given B, multiplied by the probability of B. The expression p(A|B) is used to indicate a conditional probability; the name refers to the fact that the probability of A is conditioned on knowing B. For example, the probability that the pavement is wet is different from the probability that the pavement is wet given that it is raining.
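The product rule can be checked numerically. Below is a small sketch over a made-up joint distribution of rain and wet pavement; the probabilities are invented for illustration and sum to 1:

```python
# Joint distribution p(rain, wet), with made-up numbers that sum to 1
joint = {
    ('rain', 'wet'): 0.40,
    ('rain', 'dry'): 0.10,
    ('no-rain', 'wet'): 0.05,
    ('no-rain', 'dry'): 0.45,
}

# Marginal p(rain): sum the joint over the other variable
p_rain = sum(p for (r, _), p in joint.items() if r == 'rain')

# Conditional p(wet | rain) = p(rain, wet) / p(rain)
p_wet_given_rain = joint[('rain', 'wet')] / p_rain

# Unconditioned p(wet), for comparison
p_wet = sum(p for (_, w), p in joint.items() if w == 'wet')

print(p_wet_given_rain)  # 0.8: knowing it rains changes the probability of wet pavement
print(p_wet)             # roughly 0.45 without that knowledge

# The product rule holds: p(rain, wet) == p(wet | rain) * p(rain)
assert abs(joint[('rain', 'wet')] - p_wet_given_rain * p_rain) < 1e-12
```

Dividing by p(rain) is exactly the "restrict the space of possible events" idea: we only count the worlds where it rains.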
In fact, a conditional probability can be larger than, smaller than, or equal to the unconditioned probability. If knowing B does not provide us with information about A, then p(A|B) = p(A); that is, A and B are independent of each other. On the contrary, if knowing B gives us useful information about A, then p(A|B) differs from p(A). Conditional probabilities are a key concept in statistics, and understanding them is crucial for understanding Bayes' theorem, as we will see soon. Let's try to understand them from a different perspective. If we reorder the equation for the product rule, we get the following:

p(A|B) = p(A, B) / p(B)

Hence, p(A|B) is the probability that both A and B happen, relative to the probability of B happening. Why do we divide by p(B)? Knowing B is equivalent to saying that we have restricted the space of possible events to B; thus, to find the conditional probability, we take the favorable cases and divide them by the total number of events. It is important to realize that all probabilities are in fact conditional; there is no such thing as an absolute probability floating in a vacuum. There is always some model, assumption, or condition, even if we don't notice or know it. The probability of rain is not the same if we are talking about Earth, Mars, or some other place in the Universe, in the same way that the probability of a coin landing heads or tails depends on our assumptions about the coin being biased in one way or another. Now that we are more familiar with the concept of probability, let's jump to the next topic, probability distributions.

Probability distributions

A probability distribution is a mathematical object that describes how likely different events are. In general, these events are restricted somehow to a set of possible events. A common and useful conceptualization in statistics is to think of data as having been generated from some probability distribution with unobserved parameters.
Since the parameters are unobserved and we only have data, we use Bayes' theorem to invert the relationship, that is, to go from the data to the parameters. Probability distributions are the building blocks of Bayesian models; by combining them in proper ways we can get useful complex models. We will meet several probability distributions throughout the book; every time we discover one, we will take a moment to try to understand it. Probably the most famous of them all is the Gaussian or normal distribution. A variable x follows a Gaussian distribution if its values are dictated by the following formula:

p(x) = (1 / (σ √(2π))) exp(−(x − μ)² / (2σ²))

In the formula, μ and σ are the parameters of the distribution. The first, μ, can take any real value and dictates the mean of the distribution (and also the median and mode, which are all equal). The second, σ, is the standard deviation, which can only be positive and dictates the spread of the distribution. Since there are an infinite number of possible combinations of μ and σ values, there is an infinite number of instances of the Gaussian distribution, and all of them belong to the same Gaussian family. Mathematical formulas are concise and unambiguous, and some people even say beautiful, but we must admit that meeting them can be intimidating; a good way to break the ice is to use Python to explore them.
Let's see what the Gaussian distribution family looks like:

import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns

mu_params = [-1, 0, 1]
sd_params = [0.5, 1, 1.5]
x = np.linspace(-7, 7, 100)
f, ax = plt.subplots(len(mu_params), len(sd_params), sharex=True, sharey=True)
for i in range(3):
    for j in range(3):
        mu = mu_params[i]
        sd = sd_params[j]
        y = stats.norm(mu, sd).pdf(x)
        ax[i, j].plot(x, y)
        ax[i, j].set_ylim(0, 1)
        ax[i, j].plot(0, 0, label="$\mu$ = {:3.2f}\n$\sigma$ = {:3.2f}".format(mu, sd), alpha=0)
        ax[i, j].legend()
ax[2, 1].set_xlabel('$x$')
ax[1, 0].set_ylabel('$pdf(x)$')

The output of the preceding code is a 3x3 grid of Gaussian curves, one for each combination of mu and sd. A variable, such as x, that comes from a probability distribution is called a random variable. It is not that the variable can take any possible value; on the contrary, the values are strictly dictated by the probability distribution. The randomness arises from the fact that we cannot predict which value the variable will take, only the probability of observing each value. A common notation used to say that a variable is distributed as a Gaussian or normal distribution with parameters μ and σ is as follows:

x ~ N(μ, σ)

The symbol ~ is read as "is distributed as". There are two types of random variable: continuous and discrete. Continuous variables can take any value from some interval (we can use Python floats to represent them), and discrete variables can take only certain values (we can use Python integers to represent them). Many models assume that successive values of a random variable are all sampled from the same distribution and that those values are independent of each other. In such a case, we say that the variables are independently and identically distributed, or iid for short.
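The idea of iid draws from a random variable can also be explored by sampling. Here is a small sketch using only the standard library; the seed and sample size are arbitrary choices made for reproducibility:

```python
import random

random.seed(42)  # arbitrary seed, so the run is reproducible

mu, sd = 0.0, 1.0
# Each draw is independent and comes from the same distribution, i.e. x ~ N(mu, sd)
samples = [random.gauss(mu, sd) for _ in range(10_000)]

sample_mean = sum(samples) / len(samples)
sample_var = sum((x - sample_mean) ** 2 for x in samples) / len(samples)

# With many iid draws, the sample statistics approach the parameters
print(sample_mean)
print(sample_var)
```

The more samples we draw, the closer the sample mean and variance get to μ and σ², which is precisely why the iid assumption is so convenient for modeling.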
Using mathematical notation, we can say that two variables are independent if, for every value of x and y:

p(x, y) = p(x) p(y)

A common example of non-iid variables are time series, where the temporal dependency of the random variable is a key feature that should be taken into account.

Summary

In this article we took a practical approach to Bayesian statistics and saw how to do Bayesian statistics with Python. We learned to think of problems in terms of probabilities and uncertainty, and to apply Bayes' theorem to derive the logical consequences of our models and data.


Reactive Python - Real-time events processing

Xavier Bruhiere
04 Oct 2016
8 min read
A recent trend in programming literature promotes functional programming as a sensible alternative to object-oriented programs for many use cases. This subject feeds many discussions and highlights how important program design is as our applications become more and more complex. Although there might be some seductive intellectual challenge here (because yeah, we love to juggle elegant abstractions), there is also real business value:

Building sustainable, maintainable programs
Decoupling architecture components for proper team work
Limiting bug exposure
Better product iteration

When developers spot an interesting approach to solving a recurrent issue in our industry, they formalize it as a design pattern. Today, we will discuss a powerful member of this family: the observer pattern. We won't dive into the strict rhetorical details (sorry, not sorry). Instead, we will look at how reactive programming can level up the quality of our work.

The scene

That was a bold statement; let's illustrate it with a real-world scenario. Say we were tasked with building a monitoring system. We need some way to collect data, analyze it, and take action when things go unexpectedly. Anomaly detection is an exciting yet challenging problem. We don't want our data scientists to be bothered by infrastructure failures, and in the same spirit, we need other engineers to focus only on how to react to specific disaster scenarios. The core of our approach consists of two components: a monitoring module firing and forgetting its discoveries on channels, and another processing brick intercepting those events with an appropriate response. The UNIX philosophy at its best: do one thing and do it well. We split the infrastructure by concerns and the workers by event types.
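Before wiring in a real broker, note that the observer pattern itself fits in a few lines. This is a hedged, in-process sketch of the fire-and-forget design described above; names like EventBus are my own, not from any library:

```python
from collections import defaultdict


class EventBus:
    """Minimal in-process observer: subscribers register callbacks per channel."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, channel, handler):
        self._handlers[channel].append(handler)

    def publish(self, channel, event):
        # Fire and forget: the publisher knows nothing about the reactions
        for handler in self._handlers[channel]:
            handler(event)


bus = EventBus()
received = []
bus.subscribe('cpu_overload', received.append)

bus.publish('cpu_overload', {'host': 'web-1', 'load': 9.8})
print(received)  # the subscriber saw the event; the publisher never knew it existed
```

A message broker such as NATS plays the same role as EventBus here, except the publishers and subscribers live in different processes, possibly on different machines.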
Assuming that our team defines well-documented interfaces, this is a promising design. The rest of the article will discuss the technical implementation, but keep in mind that I/O documentation and proper estimation of the processing load are also fundamental.

The strategy

Our local lab is composed of three elements:

The alert module, which we will emulate with a simple cli tool that publishes alert messages.
The actual processing unit, subscribing to the events it knows how to react to.
A message broker supporting the Publish/Subscribe (or PUBSUB) pattern.

For this purpose, Redis offers a popular, efficient, and rock-solid solution. It comes highly recommended, but the database isn't designed for this use case. NATS, however, presents itself as follows: "NATS acts as a central nervous system for distributed systems such as mobile devices, IoT networks, enterprise microservices and cloud native infrastructure. Unlike traditional enterprise messaging systems, NATS provides an always on 'dial-tone'." Sounds promising! Client libraries are available for major languages, and Apcera, the company sponsoring the technology, has a solid reputation for building reliable distributed systems. Again, we won't cover how the processing actually happens, only the orchestration of these three moving parts.

The setup

Since NATS is a message broker, we need to run a server locally (version 0.8.0 as of today). Gnatsd is the official and scalable first choice. It is written in Go, so we get performance and a drop-in binary out of the box. For fans of microservices (as I am), an official Docker image is available for pulling. Also, for the lazy ones (as I am), a demo server is already running at nats://demo.nats.io:4222. Services will use Python 3.5.1, but 2.7.10 should do the job with minimal changes. Our scenario is mostly about data analysis and system administration on the backend, and Python has a wide range of tools for both areas.
So let's install the requirements:

$ pip --version
pip 8.1.1
$ pip install -e git+https://github.com/mcuadros/pynats@6851e84eb4b244d22ffae65e9fbf79bd9872a5b3#egg=pynats click==6.6  # for cli integration

That's all. We are now ready to write services.

Publishing events

Let's warm up by sending some alerts to the cloud. First, we need to connect to the NATS server:

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: broker.py

import pynats


def nats_conn(conf):
    """Connect to the nats server from environment variables.

    The point is to allow easy switching without changing the code.
    You can read more on this approach, borrowed from 12-factor apps.
    """
    # the default values come from docker-compose
    # (https://docs.docker.com/compose/) service-link behavior
    host = conf.get('__BROKER_HOST__', 'nats')
    port = conf.get('__BROKER_PORT__', 4222)
    opts = {
        'url': conf.get('url', 'nats://{host}:{port}'.format(host=host, port=port)),
        'verbose': conf.get('verbose', False)
    }
    print('connecting to broker ({opts})'.format(opts=opts))
    conn = pynats.Connection(**opts)
    conn.connect()
    return conn

This should be enough to start our client:

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: observer.py

import os

import broker


def send(channel, msg):
    # use environment variables for configuration
    nats = broker.nats_conn(os.environ)
    nats.publish(channel, msg)
    nats.close()

And right after that, a few lines of code to shape a cli tool:

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: __main__.py

import click

import observer


@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'Terminator just dropped in our space-time')


if __name__ == '__main__':
    main()

chmod +x ./__main__.py gives it execution permission, so we can test how our first bytes are doing.
$ # `click` package gives us a productive cli interface
$ ./__main__.py --help
Usage: __main__.py [OPTIONS] COMMAND

Options:
  --on TEXT  messages topic name
  --help     Show this message and exit.

$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on=click
connecting to broker ({'verbose': False, 'url': 'nats://demo.nats.io:4222'})
publishing message
...

This is indeed quite poor in feedback, but no exception means that we did connect to the server and publish a message.

Reacting to events

We're done with the heavy lifting! Now that interesting events are flying through the Internet, we can catch them and actually provide business value. Don't forget the point: let the team write reactive programs without worrying about how they will be triggered. I found the following snippet to be a readable syntax for such a goal:

# filename: __main__.py
import observer


@observer.On('terminator_detected')
def alert_sarah_connor(msg):
    print(msg.data)

As the capitalized first letter of On suggests, this is a Python class wrapping a NATS connection. It aims to call the decorated function whenever a new message goes through the given channel.
Here is a naive implementation that shamefully ignores any reasonable error handling and safe connection termination (broker.nats_conn would be much more production-ready as a context manager, but hey, we do things that don't scale, move fast, and break things):

# filename: observer.py

class On(object):

    def __init__(self, event_name, **kwargs):
        self._count = kwargs.pop('count', None)
        self._event = event_name
        self._opts = kwargs or os.environ

    def __call__(self, fn):
        nats = broker.nats_conn(self._opts)
        subscription = nats.subscribe(self._event, fn)

        def inner():
            print('waiting for incoming messages')
            nats.wait(self._count)
            # we are done
            nats.unsubscribe(subscription)
            return nats.close()
        return inner

Instil some life into this file from __main__.py:

# filename: __main__.py

@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'bad robot detected')
    elif command == 'listen':
        try:
            alert_sarah_connor()
        except KeyboardInterrupt:
            click.echo('caught CTRL-C, cleaning after ourselves...')

Your linter might complain about the injection of the msg argument into alert_sarah_connor, but no offense, it should just work (tm):

$ # In a first terminal, listen to messages
$ __BROKER_HOST__="demo.nats.io" ./__main__.py listen
connecting to broker ({'url': 'nats://demo.nats.io:4222', 'verbose': False})
waiting for incoming messages

$ # And fire up alerts in a second terminal
$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on='terminator_detected'

The data appears in the first terminal; celebrate!

Conclusion

Reactive programming implemented with the Publish/Subscribe pattern brings a lot of benefits for event-oriented products: modular development, decoupled components, a scalable distributed infrastructure, and the single-responsibility principle. One should think about how data flows into the system before diving into the technical details.
This kind of approach also gains traction from real-time data processing pipelines (Riemann, Spark, and Kafka). NATS performance, indeed, allows the development of ultra-low-latency architectures without too much deployment overhead. We covered in a few lines of Python the basics of a reactive programming design, with a lot of improvement opportunities: event filtering, built-in instrumentation, and infrastructure-wide error tracing. I hope you found in this article the building blocks to develop upon!

About the author

Xavier Bruhiere is the lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.


Cloud and Async Communication

Packt
03 Oct 2016
6 min read
In this article by Matteo Bortolu and Engin Polat, the authors of the book Xamarin 4 By Example, we are going to create a new project called fast food, with the help of a Service layer and a Presentation layer.

Example project – Xamarin fast food

First of all, we create a new Xamarin.Forms PCL project and prepare the empty subfolders of Core to define the business logic of our project. To use the base classes, we need to import into our projects the SQLite.Net PCL from the NuGet Package manager. It is good practice to update all the packages before you start. As soon as a new package update is available, we will be notified in the Packages folder. To update a package, right-click the Packages folder and select Update from the contextual menu.

Under the Business subfolder of the Core, we can create the class MenuItem, which contains the properties of the items available to order. A MenuItem will have:

Name
Price
Required seconds

The class will be developed as:

public class MenuItem : BaseEntity<int>
{
    public string Name { get; set; }
    public int RequiredSeconds { get; set; }
    public float Price { get; set; }
}

We will also prepare the Data Layer element and the Business Layer element for this class. At first they will simply inherit from the base classes. The Data layer will be coded like this:

public class MenuItemData : BaseData<MenuItem, int>
{
    public MenuItemData ()
    {
    }
}

and the Business layer will look like:

public class MenuItemBusiness : BaseBusiness<MenuItem, int>
{
    public MenuItemBusiness () : base (new MenuItemData ())
    {
    }
}

Now we can add a new base class under the Services subfolder of the base layer.

Service layer

In this example we will develop a simple service that makes the request wait for the required number of seconds. We will change the base service later in the article in order to make server requests.
We will define our base service using a generic BaseEntity type:

public class BaseService<TEntity, TKey>
    where TEntity : BaseEntity<TKey>
{
    // we will write here the code for the base service
}

Inside the base service we need to define an event to raise when the response is ready to be dispatched:

public event ResponseReceivedHandler ResponseReceived;
public delegate void ResponseReceivedHandler (TEntity item);

We will raise this event when our process has completed. Before we raise an event, we always need to check whether it has been subscribed to by someone. It is good practice to use a design pattern called observer. A design pattern is a model of a solution for a common problem, and patterns help us to reuse the design of the software. To be compliant with the observer pattern, we only need to add to the code we wrote the following snippet, which raises the event only when the event has been subscribed to:

protected void OnResponseReceived (TEntity item)
{
    if (ResponseReceived != null) {
        ResponseReceived (item);
    }
}

The only thing we need to do in order to raise the ResponseReceived event is to call the method OnResponseReceived. Now we will write a base method that gives us a response after a number of seconds passed as a parameter, as seen in the following code:

public virtual async Task<TEntity> GetDelayedResponse (TEntity item, int seconds)
{
    await Task.Delay (seconds * 1000);
    OnResponseReceived (item);
    return item;
}

We will use this base to simulate a delayed response. Let's create the Core service layer object for MenuItem. We can name it MenuItemService, and it will inherit from BaseService as follows:

public class MenuItemService : BaseService<MenuItem, int>
{
    public MenuItemService ()
    {
    }
}

We now have all the core ingredients to start writing our UI. Add a new empty class named OrderPage in the Presentation subfolder of Core.
We will insert here a label to show the results and three buttons to make the requests:

public class OrderPage : ContentPage
{
    public OrderPage () : base ()
    {
        Label response = new Label ();
        Button buttonSandwich = new Button { Text = "Order Sandwich" };
        Button buttonSoftdrink = new Button { Text = "Order Drink" };
        Button buttonShowReceipt = new Button { Text = "Show Receipt" };
        // ... insert here the presentation logic
    }
}

Presentation layer

We can now define the presentation logic by creating instances of the business object and the service object. We will also define our items:

MenuItemBusiness menuManager = new MenuItemBusiness ();
MenuItemService service = new MenuItemService ();
MenuItem sandwich = new MenuItem {
    Name = "Sandwich",
    RequiredSeconds = 10,
    Price = 5
};
MenuItem softdrink = new MenuItem {
    Name = "Sprite",
    RequiredSeconds = 5,
    Price = 2
};

Now we need to subscribe to the buttons' Clicked events to send the order to our service. The GetDelayedResponse method of the service simulates a slow response; in a real case, the delay would depend on the network availability and the time that the remote server needs to process the request and send back a response:

buttonSandwich.Clicked += (sender, e) => {
    service.GetDelayedResponse (sandwich, sandwich.RequiredSeconds);
};
buttonSoftdrink.Clicked += (sender, e) => {
    service.GetDelayedResponse (softdrink, softdrink.RequiredSeconds);
};

Our service will raise an event when the response is ready. We can subscribe to this event to present the results on the label and to save the items in our local database:

service.ResponseReceived += (item) => {
    // Append the received item to the label
    response.Text += String.Format ("\nReceived: {0} ({1}$)", item.Name, item.Price);
    // Read the data from the local database
    List<MenuItem> itemlist = menuManager.Read ();
    // Calculate the new database key for the item
    item.Key = itemlist.Count == 0 ?
        0 : itemlist.Max (x => x.Key) + 1;
    // Add the item to the local database
    menuManager.Create (item);
};

We can now subscribe to the click event of the receipt button in order to display an alert that shows the number of items saved in the local database and the total price to pay:

buttonShowReceipt.Clicked += (object sender, EventArgs e) => {
    List<MenuItem> itemlist = menuManager.Read ();
    float total = itemlist.Sum (x => x.Price);
    DisplayAlert (
        "Receipt",
        String.Format ("Total: {0}$ ({1} items)", total, itemlist.Count),
        "OK");
};

The last step is to add the components to the content page:

Content = new StackLayout {
    VerticalOptions = LayoutOptions.CenterAndExpand,
    HorizontalOptions = LayoutOptions.CenterAndExpand,
    Children = {
        response,
        buttonSandwich,
        buttonSoftdrink,
        buttonShowReceipt
    }
};

At this point we are ready to run the iOS version and try it out. In order to make the Android version work, we need to set the permissions to read and write the database file. To do that, we can double-click the Droid project and, under the Android Application section, check the ReadExternalStorage and WriteExternalStorage permissions.

In the OnCreate method of the MainActivity of the Droid project we also need to:

- Create the database file when it hasn't been created yet.
- Set the database path in the configuration file.

var path = System.Environment.GetFolderPath (
    System.Environment.SpecialFolder.ApplicationData
);
if (!Directory.Exists (path)) {
    Directory.CreateDirectory (path);
}
var filename = Path.Combine (path, "fastfood.db");
if (!File.Exists (filename)) {
    File.Create (filename);
}
Configuration.DatabasePath = filename;

Summary

In this article, we have learned how to create a project in Xamarin with the help of the Service and Presentation layers. We have also seen how to set read and write permissions to make the Android version work.
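The event-raising pattern that drives this example — an event, a guarded raise method, and external subscribers — can be reduced to a small framework-free sketch. The Order and OrderService names below are illustrative stand-ins for the article's MenuItem and MenuItemService, not part of any Xamarin API:

```csharp
using System;

public class Order
{
    public string Name { get; set; }
    public float Price { get; set; }
}

public class OrderService
{
    // Subscribers register here; the field is null until someone subscribes.
    public event Action<Order> ResponseReceived;

    // Guarded raise: only fire when the event has been subscribed to,
    // as the Observer pattern discussion above recommends.
    protected void OnResponseReceived(Order item)
    {
        if (ResponseReceived != null)
        {
            ResponseReceived(item);
        }
    }

    public Order Process(Order item)
    {
        // A real service would do work (or await a delay) before raising.
        OnResponseReceived(item);
        return item;
    }
}
```

A subscriber attaches a handler with `service.ResponseReceived += handler;`, exactly as the OrderPage does when it appends the received item to the label.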
Resources for Article:

Further resources on this subject:
- A cross-platform solution with Xamarin.Forms and MVVM architecture [article]
- Working with Xamarin.Android [article]
- Integrating Accumulo into Various Cloud Platforms [article]
Packt
03 Oct 2016
15 min read

An Introduction to AWS Lumberyard Game Development

In this article by Dr. Edward Lavieri, author of the book Learning AWS Lumberyard Game Development, you will learn what the Lumberyard game engine means for game developers and the game development industry. (For more resources related to this topic, see here.)

What is Lumberyard?

Lumberyard is a free 3D game engine that has, in addition to typical 3D game engine capabilities, an impressive set of unique qualities. Most impressively, Lumberyard integrates with Amazon Web Services (AWS) for cloud computing and storage. Lumberyard, also referred to as Amazon Lumberyard, integrates with Twitch to facilitate in-game engagement with fans. Another component that makes Lumberyard unique among game engines is its tremendous support for multiplayer games. The use of Amazon GameLift empowers developers to instantiate multiplayer game sessions with relative ease.

Lumberyard is presented as a game engine intended for creating cross-platform AAA games. There are two important components of that statement. First, cross-platform refers to, in the case of Lumberyard, the ability to develop games for PC/Windows, PlayStation 4, and Xbox One. There is even additional support for Mac OS, iOS, and Android devices. The second component of the earlier statement is AAA games. A triple-A (AAA) game is like a top-grossing movie: one that had a tremendous budget, was extensively advertised, and was wildly successful. If you can think of a console game (for Xbox One and/or PlayStation 4) that is advertised on national television, it is a sign the title is an AAA game. Now that this AAA game engine is available for free, it is likely that more than just AAA games will be developed using Lumberyard. This is an exciting time to be a game developer. More specifically, Amazon hopes that Lumberyard will be used to develop multiplayer online games that use AWS for cloud computing and storage and that integrate with Twitch for user engagement. The engine is free, but AWS usage is not.
Don't worry, you can create single-player games with Lumberyard as well.

System requirements

Amazon recommends a system with the following specifications for developing games with Lumberyard:

- PC running a 64-bit version of Windows 7 or Windows 10
- At least 8 GB RAM
- Minimum of 60 GB hard disk storage
- A 3 GHz or greater quad-core processor
- A DirectX 11 (DX11) compatible video card with at least 2 GB of video RAM (VRAM)

As mentioned earlier, there is no support for running Lumberyard on a Mac OS or Linux computer. The game engine is a very large and complex software suite. You should take the system requirements seriously and, if at all possible, exceed the minimum requirements.

Beta software

As you likely know, the Lumberyard game engine is, at the time of this book's publication, in beta. What does that mean? It means a couple of things that are worth exploring. First, developers (that's you!) get early access to amazing software. Other than the cool factor of being able to experiment with a new game engine, it can accelerate game projects. There are several detractors to this as well. Here are the primary detractors from using beta software:

- Not all functions and features will be implemented. Depending on the engine's specific limitations, this can be a showstopper for your game project.
- Some functions and features might be partially implemented, not function correctly, or be unreliable. If the features that have these characteristics are not the ones you plan to use, then this is not an issue for you. Otherwise, this can be a tremendous problem. For example, let's say that the engine's gravity system is buggy. That would make testing your game very difficult, as you would not be able to rely on the gravity system and would not know whether your code has issues or not.
- Things can change from release to release. Anything done in one beta version is not guaranteed to work the same way in subsequent beta releases.
Things that tend to change between beta versions, other than bug fixes and improvements, are interface changes. This can slow a project up considerably, as development workflows you have adopted may no longer work. In the next section, you will see what changes were ushered in with each sequential beta release.

Release notes

Amazon initially launched the Lumberyard game engine in February 2016. Since then, there have been several new versions. At the time of this book's publication, there were five releases: 1.0, 1.1, 1.2, 1.3, and 1.4. The following graphic shows the timeline of the five releases. Let's look at the major offerings of each beta release.

Beta 1.0

The initial beta of the Lumberyard game engine was released on February 9, 2016. This was an innovative offering from Amazon. Lumberyard was released as a triple-A cross-platform game engine at no cost to developers. Developers had full access to the game engine along with the underlying source code. This permits developers to release games without a revenue share and even to create their own game engines using the Lumberyard source code as a base.

Beta 1.1

Beta 1.1 was released just a few short weeks after Beta 1.0. According to Amazon, there were 208 feature upgrades, bug fixes, and improvements with this release. Here are the highlights:

- Autoscaling features
- Component Entity System
- FBX Importer
- New Game Gems
- Cloud Canvas Resource Manager
- New Twitch ChatPlay features

Beta 1.2

Just a few short weeks after Beta 1.1, Beta 1.2 was made available. The rapid release of sequential beta versions is indicative of tremendous development work by the Lumberyard team. This also gives some indication as to the amount of support the game engine is likely to have once it is no longer in beta. With this beta, Amazon announced 218 enhancements and bug fixes to nearly two dozen core Lumberyard components.
Here are the largest Lumberyard game engine components to be upgraded in Beta 1.2:

- Particle editor
- Mannequin
- Geppetto
- FBX Importer
- Multiplayer Gem
- Cloud Canvas Resource Manager

Beta 1.3

The three previous beta versions were released in subsequent months. This was an impressive pace, but not likely sustainable due to the tremendous complexities of game engine modifications and the fact that the Lumberyard game engine continues to mature. Released in June 2016, the Beta 1.3 release of Lumberyard introduced support for Virtual Reality (VR) and High Dynamic Range (HDR). Adding support for VR and HDR is enough reason to release a new beta version. Impressively, this release also contained over 130 enhancements and bug fixes to the game engine. Here is a partial list of game engine components that were updated in this release:

- Volumetric fog
- Motion blur
- Height Mapped Ambient Occlusion
- Depth of field
- Emittance
- Integrated Graphics Profiler
- FBX Importer
- UI Editor
- FlowGraph Nodes
- Cloud Canvas Resource Manager

Beta 1.4

At the time of this book's publication, the current beta version of Lumberyard was 1.4, which was released in August 2016. This release contained over 230 enhancements and bug fixes as well as some new features. The primary focus of this release seemed to be multiplayer games and making them more efficient. The result of the changes provided in this release is greater cost-efficiency for multiplayer games when using Amazon GameLift.

Sample game content

Creating triple-A games is a complex process that typically involves a large number of people in a variety of roles, including developers, designers, artists, and more. There is no industry average for how long it takes to create a triple-A game because there are too many variables, including budget, team size, game specifications, and individual and team experience. This being said, it is likely to take up to 2 years to create a triple-A game from scratch.
Triple-A, or AAA, games typically have very large budgets, large design and development teams, large advertising efforts, and are largely successful. In a nutshell, triple-A games are large!

Around 2 years is a long time, so we have shortened things for you in this book using the game samples that come bundled with the game engine. As illustrated in the following section, Lumberyard comes with a great set of starter content and sample games.

Starter content

When you first launch Lumberyard, you are able to create a new level or open an existing one. In the Open a Level dialog window, you will find nine levels listed under Levels | GettingStartedFiles. Each of these levels presents a unique opportunity to explore the game engine and learn how things are done. Let's look at each of these next.

getting-started-completed-level

As the level name suggests, this is a complete game level featuring a small game grid and a player-controlled robot character. The character is moved with the standard WASD keyboard keys, rotated with the mouse, and uses the spacebar to jump. The level does a great job of demonstrating physics. As is indicated in the following screenshot, the level contains a wall of blocks that have natural physics applied. The robot can run into the wall and fallen blocks as well as use the ramp to launch into the wall. More than just playing the game, you can examine how it was created. This level is fully playable. Simply use the Ctrl + G keyboard combination to enter Game Mode. When you are through playing the game, press the Esc key to exit. This completed level can be further examined by loading the remaining levels featured in this section. These subsequent levels make it easier to examine specific components of the level.

start-section03-terrain

This section simply contains the terrain. The complete terrain is provided as well as additional space to explore and practice creating, duplicating, and placing objects.
start-section04-lighting

This level is presented with an expanded terrain to help you explore lighting options. There are environmental lighting effects as well as street lamp objects that can be used to emit light and generate shadows in the game. This level is not playable and is provided to aid your learning of Lumberyard's lighting system.

start-section05-camera-playerstart

This non-playable level is convenient for examining camera placement and discovering how that impacts the player's starting position on game launch.

start-section06-designer-objects

This level is playable, but only to the extent that you can control the robot character and explore the game's environment. With this level, you can focus your exploration on editing the objects.

start-section07-materials

This level includes the full game environment along with two natively created objects: a block and a sphere. You can freely edit these objects and see how they look in Game Mode. This represents a great way to learn, as it is a no-risk situation. This simply means that you do not have to save your changes, as you are essentially working in a sandbox with no impact on a real game project. This is a playable level that allows you to explore the game environment and preview any changes you make to the level.

start-section08-physics

This starter level has the same two 3D objects (block and sphere) as the previous starter level. In this level, the objects have textures. No physics are applied to this level yet, so it is a good level to use to practice creating objects with physics. One option is to attempt to replicate the wall of stacked objects that is present in the completed level.

start-section09-flowgraph-scripting

This playable level contains the wall of 3D blocks that can be knocked over. The game's gameplay is instantiated with FlowGraphs, which can be viewed and edited using this starter level.
start-section10-audio

This final starter level contains the full playable game and serves as a testing ground for implementing audio in the game.

Sample games

There are six unrelated sample game levels accessible through the Open a Level dialog window, listed under Levels | Samples. Each of these levels is a single-level game that demonstrates specific functionality and gameplay. Let's look at each of these next.

Animation_Basic_Sample

This game level contains an animated character in an empty game environment. There is a camera and a light. You can play the game to watch the animated character's idle animation. When you create 3D characters, you will use Geppetto, Lumberyard's animation tool.

Camera_Sample

You can use this sample game to help learn how to create gameplay. The sample game includes a Heads Up Display (HUD) that presents the player with three game modes, each with a different camera. This game can also be used to further explore FlowGraphs and the FlowGraph Editor.

Dont_Die

The Don't Die game level provides a colorful example of full gameplay with an interactive menu system. The game starts with a "Press any key to start" message followed by a color selection menu, depicted here. The selected color is applied to the spacecraft used in the game.

Movers_Sample

With this game, you can learn how to instantiate animations triggered by user input. Using this game, you will also gain exposure to FlowGraphs and the FlowGraph Editor.

Trigger_Sample

This sample game provides several examples of Proximity and Area Triggers.
Here is the list of triggers instantiated in this sample game:

Proximity triggers:
- Player only
- Any entity
- One entity at a time
- Only selected entity
- Three entities required to trigger

Area triggers:
- Two volumes use the same trigger
- Stand in all three volumes at the same time
- Step inside each trigger in any order
- Step inside each trigger in the correct sequence

UIEditor_Sample

This sample game is not playable but provides a commercial-quality User Interface (UI) example. If you run the level in Game Mode, you will not have a game to play, but the stunning visuals of the UI give you a glimpse of what is possible with the Lumberyard game engine.

Amazon Web Services

AWS is a family of cloud-based scalable services to support, in the context of Lumberyard, your game. AWS includes several technologies that can support your Lumberyard game. These services include:

- Cloud Canvas
- Cloud Computing
- GameLift
- Simple Notification Service (SNS)
- Simple Queue Service (SQS)
- Simple Storage Service (S3)

Asset creation

Game assets include graphic files such as materials, textures, color palettes, 2D objects, and 3D objects. These assets are used to bring a game to life. A terrain, for example, is nothing without grass and dirt textures applied to it. Much of this content is likely to be created with external tools. One internal tool used to implement the externally created graphical assets is the Material Editor.

Audio system

Lumberyard has an Audio System that controls how in-game audio is instantiated. No audio sounds are created directly in Lumberyard. Instead, they are created using Wwise (Wave Works Interactive Sound Engine) by Audiokinetic. Because audio is created externally to Lumberyard, a game project's audio team will likely consist of content creators and developers who implement the content in the Lumberyard game.

Cinematics system

Lumberyard has a Cinematics System that can be used to create cut-scenes and promotional videos.
With this system, you can also make your cinematics interactive.

Flow graph system

Lumberyard's flow graph system is a visual scripting system for creating gameplay. This tool is likely to be used by many of your smaller teams. It can be beneficial to have someone who oversees all Flow Graphs to ensure compatibility and standardization.

Geppetto

Geppetto is Lumberyard's character tool. A character team will likely create the game's characters using external tools such as Maya or 3D Studio Max. Using those systems, they can export the necessary files to support importing the character assets into your Lumberyard game. Lumberyard has an FBX Importer tool that is used to import characters created in external programs.

Mannequin editor

Animating objects, especially 3D objects, is a complex process that takes artistic talent and technical expertise. Some projects incorporate separate teams for object creation and animation. For example, you might have a small team that creates robot characters and another team that generates their animations.

Production team

The production team is responsible for creating builds and distributing releases. They will also handle testing coordination. One of their primary tools will be the Waf Build System.

Terrain editor

A game's environment consists of terrain and objects. The terrain is the foundation for the entire game experience and is the focus of exacting efforts. The creation of a terrain starts when a new level is created. The height map resolution is the first decision a level editor, or the person responsible for creating terrain, is faced with.

Twitch ChatPlay system

Twitch integration represents exciting game possibilities, as it allows you to engage your game's users in unique ways.

UI editor

Creating a user interface is often, at least on very large projects, the responsibility of a specialized team. This team, or individual, will create the user interface components on each game level to ensure consistency.
Artwork required for the user interfaces is likely to be produced by the asset team.

Summary

In this article, you learned about AWS Lumberyard and what it is capable of. You gained an appreciation for Lumberyard's significance to the game development industry. You also learned about the beta history of Lumberyard and how quickly it is maturing into a game engine of choice.

Resources for Article:

Further resources on this subject:
- What Makes a Game a Game? [article]
- Integrating Accumulo into Various Cloud Platforms [article]
- CryENGINE 3: Breaking Ground with Sandbox [article]
Packt
03 Oct 2016
14 min read

Extending Yii

Introduction

In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will see three types of Yii extensions—helpers, behaviors, and components. In addition, we will learn how to make your extension reusable and useful for the community, and will focus on the many things you should do in order to make your extension as efficient as possible. (For more resources related to this topic, see here.)

Helpers

There are a lot of built-in framework helpers, like StringHelper in the yii\helpers namespace. They contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior, you can create your own helper and put any static functions into it. For example, we will implement a number helper in this recipe.

Getting ready

Create a new yii2-app-basic application by using Composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

Create the helpers directory in your project and write the NumberHelper class:

<?php
namespace app\helpers;

class NumberHelper
{
    public static function format($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Add the actionNumbers method into SiteController:

<?php
...
class SiteController extends Controller
{
    // ...

    public function actionNumbers()
    {
        return $this->render('numbers', ['value' => 18878334526.3]);
    }
}

Add the views/site/numbers.php view:

<?php
use app\helpers\NumberHelper;
use yii\helpers\Html;

/* @var $this yii\web\View */
/* @var $value float */

$this->title = 'Numbers';
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="site-numbers">
    <h1><?= Html::encode($this->title) ?></h1>
    <p>
        Raw number:<br />
        <b><?= $value ?></b>
    </p>
    <p>
        Formatted number:<br />
        <b><?= NumberHelper::format($value) ?></b>
    </p>
</div>

Open the action and see the result. In other cases, you can specify another count of decimal digits; for example:

NumberHelper::format($value, 3)

How it works…

Any helper in Yii2 is just a set of functions implemented as static methods in a corresponding class. You can use one to implement any different format of output, for manipulations with values of any variable, and for other cases.

Note: Usually, static helpers are lightweight clean functions with a small count of arguments. Avoid putting your business logic and other complicated manipulations into helpers. Use widgets or other components instead of helpers in those cases.

See also

For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html. For examples of built-in helpers, see the sources in the helpers directory of the framework at https://github.com/yiisoft/yii2/tree/master/framework/helpers.

Creating model behaviors

There are many similar solutions in today's web applications. Leading products such as Google's Gmail are defining nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object such as blog posts, comments, and so on.
Let's create a behavior that will allow marking models as deleted, restoring models, and selecting not-yet-deleted models, deleted models, and all models. In this recipe we'll follow a test-driven development approach to plan the behavior and test whether the implementation is correct.

Getting ready

Create a new yii2-app-basic application by using Composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

Create two databases: one for working and one for tests.

Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses the second database in tests/codeception/config/config.php.

Create a new migration:

<?php
use yii\db\Migration;

class m160427_103115_create_post_table extends Migration
{
    public function up()
    {
        $this->createTable('{{%post}}', [
            'id' => $this->primaryKey(),
            'title' => $this->string()->notNull(),
            'content_markdown' => $this->text(),
            'content_html' => $this->text(),
        ]);
    }

    public function down()
    {
        $this->dropTable('{{%post}}');
    }
}

Apply the migration to both the working and testing databases:

./yii migrate
tests/codeception/bin/yii migrate

Create a Post model:

<?php
namespace app\models;

use app\behaviors\MarkdownBehavior;
use yii\db\ActiveRecord;

/**
 * @property integer $id
 * @property string $title
 * @property string $content_markdown
 * @property string $content_html
 */
class Post extends ActiveRecord
{
    public static function tableName()
    {
        return '{{%post}}';
    }

    public function rules()
    {
        return [
            [['title'], 'required'],
            [['content_markdown'], 'string'],
            [['title'], 'string', 'max' => 255],
        ];
    }
}

How to do it…

Let's prepare a test environment, starting with defining the fixtures for the Post model.
Create the tests/codeception/unit/fixtures/PostFixture.php file:

<?php
namespace app\tests\codeception\unit\fixtures;

use yii\test\ActiveFixture;

class PostFixture extends ActiveFixture
{
    public $modelClass = 'app\models\Post';
    public $dataFile = '@tests/codeception/unit/fixtures/data/post.php';
}

Add a fixture data file in tests/codeception/unit/fixtures/data/post.php:

<?php
return [
    [
        'id' => 1,
        'title' => 'Post 1',
        'content_markdown' => 'Stored *markdown* text 1',
        'content_html' => "<p>Stored <em>markdown</em> text 1</p>\n",
    ],
];

Then, we need to create a test case, tests/codeception/unit/MarkdownBehaviorTest.php:

<?php
namespace app\tests\codeception\unit;

use app\models\Post;
use app\tests\codeception\unit\fixtures\PostFixture;
use yii\codeception\DbTestCase;

class MarkdownBehaviorTest extends DbTestCase
{
    public function testNewModelSave()
    {
        $post = new Post();
        $post->title = 'Title';
        $post->content_markdown = 'New *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>New <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function testExistingModelSave()
    {
        $post = Post::findOne(1);
        $post->content_markdown = 'Other *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>Other <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function fixtures()
    {
        return [
            'posts' => [
                'class' => PostFixture::className(),
            ]
        ];
    }
}

Run the unit tests:

codecept run unit MarkdownBehaviorTest

and ensure that the tests do not pass:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave        Error
Trying to test ...
MarkdownBehaviorTest::testExistingModelSave   Error
---------------------------------------------------------------------------
Time: 289 ms, Memory: 16.75MB

Now we need to implement the behavior, attach it to the model, and make sure the tests pass.

Create a new directory, behaviors. Under this directory, create the MarkdownBehavior class:

<?php
namespace app\behaviors;

use yii\base\Behavior;
use yii\base\Event;
use yii\base\InvalidConfigException;
use yii\db\ActiveRecord;
use yii\helpers\Markdown;

class MarkdownBehavior extends Behavior
{
    public $sourceAttribute;
    public $targetAttribute;

    public function init()
    {
        if (empty($this->sourceAttribute) || empty($this->targetAttribute)) {
            throw new InvalidConfigException('Source and target must be set.');
        }
        parent::init();
    }

    public function events()
    {
        return [
            ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
            ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
        ];
    }

    public function onBeforeSave(Event $event)
    {
        if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
            $this->processContent();
        }
    }

    private function processContent()
    {
        $model = $this->owner;
        $source = $model->{$this->sourceAttribute};
        $model->{$this->targetAttribute} = Markdown::process($source);
    }
}

Let's attach the behavior to the Post model:

class Post extends ActiveRecord
{
    ...

    public function behaviors()
    {
        return [
            'markdown' => [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Run the tests and make sure they pass:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave        Ok
Trying to test ... MarkdownBehaviorTest::testExistingModelSave   Ok
---------------------------------------------------------------------------
Time: 329 ms, Memory: 17.00MB

That's it.
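The before-save hook at the heart of this recipe can be reduced to a framework-free sketch. The Record class, its on() method, and the preg_replace() stand-in for Markdown conversion below are illustrative assumptions, not Yii APIs:

```php
<?php
// Framework-free sketch of the before-save hook idea behind
// MarkdownBehavior. Record, on(), and save() are illustrative
// stand-ins, not part of Yii.
class Record
{
    public $attributes = [];
    private $listeners = [];

    public function on($event, callable $handler)
    {
        $this->listeners[$event][] = $handler;
    }

    public function save()
    {
        // Run every before-save listener, just as ActiveRecord fires
        // EVENT_BEFORE_INSERT / EVENT_BEFORE_UPDATE before persisting.
        foreach ($this->listeners['beforeSave'] ?? [] as $handler) {
            $handler($this);
        }
        return true;
    }
}

$post = new Record();
$post->attributes['content_markdown'] = 'Some *markdown* text';

// A listener converting a source attribute into a target attribute;
// preg_replace() here stands in for yii\helpers\Markdown::process().
$post->on('beforeSave', function (Record $record) {
    $record->attributes['content_html'] = preg_replace(
        '/\*(.+?)\*/',
        '<em>$1</em>',
        $record->attributes['content_markdown']
    );
});

$post->save();
echo $post->attributes['content_html']; // Some <em>markdown</em> text
```

The real behavior adds two refinements on top of this idea: it only converts when the source attribute has actually changed, and it is packaged as a class that can be attached to any ActiveRecord.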
We've created a reusable behavior and can use it for all future projects by just connecting it to a model.

How it works…

Let's start with the test case. Since we want to use a set of models, we define fixtures. A fixture set is put into the DB each time a test method is executed. We prepare unit tests that specify how the behavior works:

First, we test the processing of new model content. The behavior must convert Markdown text from a source attribute to HTML and store the result in the target attribute.

Second, we test updating the content of an existing model. After changing the Markdown content and saving the model, we must get updated HTML content.

Now let's move to the interesting implementation details. In a behavior, we can add our own methods that will be mixed into the model that the behavior is attached to. We can also subscribe to the owner component's events. We use this to add our own listener:

public function events()
{
    return [
        ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
        ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
    ];
}

And now we can implement this listener:

public function onBeforeSave(Event $event)
{
    if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
        $this->processContent();
    }
}

In all methods, we can use the owner property to get the object the behavior is attached to. In general, we can attach any behavior to models, controllers, the application, and other components that extend the yii\base\Component class. We can also attach a behavior again and again to a model to process different attributes:

class Post extends ActiveRecord
{
    ...
public function behaviors() { return [ [ 'class' => MarkdownBehavior::className(), 'sourceAttribute' => 'description_markdown', 'targetAttribute' => 'description_html', ], [ 'class' => MarkdownBehavior::className(), 'sourceAttribute' => 'content_markdown', 'targetAttribute' => 'content_html', ], ]; } } Besides, we can also extend the yii\base\AttributeBehavior class, like yii\behaviors\TimestampBehavior, to update specified attributes for any event. See also To learn more about behaviors and events, refer to the following pages: http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html http://www.yiiframework.com/doc-2.0/guide-concept-events.html For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/. Creating components If you have some code that looks like it can be reused but you don't know if it's a behavior, widget, or something else, it's most probably a component. A component should inherit from the yii\base\Component class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class. We also get support for behaviors, events, getters, and setters. For our example, we'll implement a simple Exchange application component that will be able to get currency rates from the http://fixer.io site; we'll attach it to the application and use it. Getting ready Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html. How to do it… To get a currency rate, our component should send an HTTP GET query to a service URL, like http://api.fixer.io/2016-05-14?base=USD. The service must return all supported rates on the nearest working day: { "base":"USD", "date":"2016-05-13", "rates": { "AUD":1.3728, "BGN":1.7235, ... 
"ZAR":15.168, "EUR":0.88121 } } The component should extract needle currency from the response in a JSON format and return a target rate. Create a components directory in your application structure. Create the component class example with the following interface: <?php namespace appcomponents; use yiibaseComponent; class Exchange extends Component { public function getRate($source, $destination, $date = null) { } } Implement the component functional: <?php namespace appcomponents; use yiibaseComponent; use yiibaseInvalidConfigException; use yiibaseInvalidParamException; use yiicachingCache; use yiidiInstance; use yiihelpersJson; class Exchange extends Component { /** * @var string remote host */ public $host = 'http://api.fixer.io'; /** * @var bool cache results or not */ public $enableCaching = false; /** * @var string|Cache component ID */ public $cache = 'cache'; public function init() { if (empty($this->host)) { throw new InvalidConfigException('Host must be set.'); } if ($this->enableCaching) { $this->cache = Instance::ensure($this->cache, Cache::className()); } parent::init(); } public function getRate($source, $destination, $date = null) { $this->validateCurrency($source); $this->validateCurrency($destination); $date = $this->validateDate($date); $cacheKey = $this->generateCacheKey($source, $destination, $date); if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) { $result = $this->getRemoteRate($source, $destination, $date); if ($this->enableCaching) { $this->cache->set($cacheKey, $result); } } return $result; } private function getRemoteRate($source, $destination, $date) { $url = $this->host . '/' . $date . '?base=' . 
$source; $response = Json::decode(file_get_contents($url)); if (!isset($response['rates'][$destination])) { throw new \RuntimeException('Rate not found.'); } return $response['rates'][$destination]; } private function validateCurrency($source) { if (!preg_match('#^[A-Z]{3}$#s', $source)) { throw new InvalidParamException('Invalid currency format.'); } } private function validateDate($date) { if (!empty($date) && !preg_match('#\d{4}-\d{2}-\d{2}#s', $date)) { throw new InvalidParamException('Invalid date format.'); } if (empty($date)) { $date = date('Y-m-d'); } return $date; } private function generateCacheKey($source, $destination, $date) { return [__CLASS__, $source, $destination, $date]; } } Attach our component in the config/console.php or config/web.php configuration files: 'components' => [ 'cache' => [ 'class' => 'yii\caching\FileCache', ], 'exchange' => [ 'class' => 'app\components\Exchange', 'enableCaching' => true, ], // ... 'db' => $db, ], We can now use the new component directly or via a get method: echo Yii::$app->exchange->getRate('USD', 'EUR'); echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12'); Create a demonstration console controller: <?php namespace app\commands; use Yii; use yii\console\Controller; class ExchangeController extends Controller { public function actionTest($currency, $date = null) { echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL; } } And try to run any commands: $ ./yii exchange/test EUR > 0.90196 $ ./yii exchange/test EUR 2015-11-24 > 0.93888 $ ./yii exchange/test OTHER > Exception 'yii\base\InvalidParamException' with message 'Invalid currency format.' $ ./yii exchange/test EUR 2015/24/11 > Exception 'yii\base\InvalidParamException' with message 'Invalid date format.' $ ./yii exchange/test ASD > Exception 'RuntimeException' with message 'Rate not found.' As a result, you should see rate values in success cases or specific exceptions in error ones. In addition to creating your own components, you can do more. 
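Before moving on to overriding built-in components, it is worth noting that the get-or-compute logic inside getRate() is a generic caching pattern, not something Yii-specific. Here is a minimal, language-neutral sketch of that pattern in Python (an illustration only; the Exchange class, fake_remote, and the dict-backed cache below are hypothetical stand-ins, not part of Yii):

```python
# Sketch of the "check cache, fall back to remote, store result" pattern
# used by the component's getRate() method.
class Exchange:
    def __init__(self, fetch_remote, cache=None):
        self.fetch_remote = fetch_remote  # callable performing the HTTP call
        self.cache = cache                # dict-like store, or None to disable caching

    def get_rate(self, source, destination, date):
        key = (source, destination, date)
        if self.cache is not None and key in self.cache:
            return self.cache[key]        # cache hit: no network round trip
        result = self.fetch_remote(source, destination, date)
        if self.cache is not None:
            self.cache[key] = result      # remember the result for next time
        return result

# A fake remote backend that records how often it is actually called.
calls = []
def fake_remote(source, destination, date):
    calls.append((source, destination, date))
    return 0.88121  # pretend USD -> EUR rate

ex = Exchange(fake_remote, cache={})
ex.get_rate("USD", "EUR", "2016-05-14")
ex.get_rate("USD", "EUR", "2016-05-14")  # second call is served from the cache
```

The real component adds input validation and delegates storage to Yii's cache component, but the control flow (check the cache, fall back to the remote call, store the result) is the same.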
Overriding existing application components Most of the time there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core. For example, to be able to format numbers using the Yii::$app->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow the next steps: Extend the yii\i18n\Formatter component like the following: <?php namespace app\components; class Formatter extends \yii\i18n\Formatter { public function asNumber($value, $decimal = 2) { return number_format($value, $decimal, '.', ','); } } Override the class of the built-in formatter component: 'components' => [ // ... 'formatter' => [ 'class' => 'app\components\Formatter', ], // … ], Right now, we can use this method directly: echo Yii::$app->formatter->asNumber(1534635.2, 3); or as a new format for GridView and DetailView widgets: <?= yii\grid\GridView::widget([ 'dataProvider' => $dataProvider, 'columns' => [ 'id', 'created_at:datetime', 'title', 'value:number', ], ]) ?> You can also extend any existing component without overwriting its source code. How it works… To be able to attach a component to an application, it must extend the yii\base\Component class. Attaching is as simple as adding a new array to the components section of the configuration. There, the class value specifies the component's class, and all other values are set on the component through its corresponding public properties and setter methods. The implementation itself is very straightforward: we wrap http://api.fixer.io calls in a comfortable API with validators and caching. We can access our class by its component name using Yii::$app. In our case, it will be Yii::$app->exchange. 
See also For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html. For the NumberHelper class sources, see the Helpers recipe. Summary In this article we learned about the Yii extensions—helpers, behaviors, and components. Helpers contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications. A component is an instance of yii\base\Component or its derived class. Using a component mainly involves accessing its properties and raising/handling its events. Resources for Article: Further resources on this subject: Creating an Extension in Yii 2 [article] Atmosfall – Managing Game Progress with Coroutines [article] Optimizing Games for Android [article]

Build a Universal JavaScript App, Part 2
John Oerter
30 Sep 2016
10 min read
In this post series, we will walk through how to write a universal (or isomorphic) JavaScript app. Part 1 covered what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating our app, which are serving post data and adding React. In this second part of the series, we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. Let’s get started. Step 3: Client-side routing with React Router git checkout client-side-routing && npm install Now that we're pulling and displaying posts, let's add some navigation to individual pages for each post. To do this, we will turn our list of posts from step 2 (see the Part 1 post) into links that are always present on the page. Each post will live at http://localhost:3000/:postId/:postSlug. We can use React Router and a routes.js file to set up this structure: // components/routes.js import React from 'react' import { Route } from 'react-router' import App from './App' import Post from './Post' module.exports = ( <Route path="/" component={App}> <Route path="/:postId/:postName" component={Post} /> </Route> ) We've changed the render method in App.js to render links to posts instead of just <li> tags: // components/App.js import React from 'react' import { Link } from 'react-router' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: [] } } ... 
render() { const posts = this.state.posts.map((post) => { const linkTo = `/${post.id}/${post.slug}`; return ( <li key={post.id}> <Link to={linkTo}>{post.title}</Link> </li> ) }) return ( <div> <h3>Posts</h3> <ul> {posts} </ul> {this.props.children} </div> ) } } export default App And, we'll add a Post.js component to render each post's content: // components/Post.js import React from 'react' class Post extends React.Component { constructor(props) { super(props) this.state = { title: '', content: '' } } fetchPost(id) { const request = new XMLHttpRequest() request.open('GET', '/api/post/' + id, true) request.setRequestHeader('Content-type', 'application/json'); request.onload = () => { if (request.status === 200) { const response = JSON.parse(request.response) this.setState({ title: response.title, content: response.content }); } } request.send(); } componentDidMount() { this.fetchPost(this.props.params.postId) } componentWillReceiveProps(nextProps) { this.fetchPost(nextProps.params.postId) } render() { return ( <div> <h3>{this.state.title}</h3> <p>{this.state.content}</p> </div> ) } } export default Post The componentDidMount() and componentWillReceiveProps() methods are important because they let us know when we should fetch a post from the server. componentDidMount() will handle the first time the Post.js component is rendered, and then componentWillReceiveProps() will take over as React Router handles rerendering the component with different props. Run npm run build:client && node server.js again to build and run the app. You will now be able to go to http://localhost:3000 and navigate around to the different posts. However, if you try to refresh on a single post page, you will get something like Cannot GET /3/debugging-node-apps. That's because our Express server doesn't know how to handle that kind of route. React Router is handling it completely on the front end. Onward to server rendering! 
Step 4: Server rendering git checkout server-rendering && npm install Okay, now we're finally getting to the good stuff. In this step, we'll use React Router to help our server take application requests and render the appropriate markup. To do that, we need to also build a server bundle like we build a client bundle, so that the server can understand JSX. Therefore, we've added the below webpack.server.config.js: // webpack.server.config.js var fs = require('fs') var path = require('path') module.exports = { entry: path.resolve(__dirname, 'server.js'), output: { filename: 'server.bundle.js' }, target: 'node', // keep node_module paths out of the bundle externals: fs.readdirSync(path.resolve(__dirname, 'node_modules')).concat([ 'react-dom/server', 'react/addons', ]).reduce(function (ext, mod) { ext[mod] = 'commonjs ' + mod return ext }, {}), node: { __filename: true, __dirname: true }, module: { loaders: [ { test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' } ] } } We've also added the following code to server.js: // server.js import React from 'react' import { renderToString } from 'react-dom/server' import { match, RouterContext } from 'react-router' import routes from './components/routes' const app = express() ... app.get('*', (req, res) => { match({ routes: routes, location: req.url }, (err, redirect, props) => { if (err) { res.status(500).send(err.message) } else if (redirect) { res.redirect(redirect.pathname + redirect.search) } else if (props) { const appHtml = renderToString(<RouterContext {...props} />) res.send(renderPage(appHtml)) } else { res.status(404).send('Not Found') } }) }) function renderPage(appHtml) { return ` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Universal Blog</title> </head> <body> <div id="app">${appHtml}</div> <script src="/bundle.js"></script> </body> </html> ` } ... 
Using React Router's match function, the server can find the appropriate requested route, render it to a string with renderToString, and send the markup down the wire. Run npm start to build the client and server bundles and start the app. Fantastic right? We're not done yet. Even though the markup is being generated on the server, we're still fetching all the data client side. Go ahead and click through the posts with your dev tools open, and you'll see the requests. It would be far better to load the data while we're rendering the markup instead of having to request it separately on the client. Since server rendering and universal apps are still bleeding-edge, there aren't really any established best practices for data loading. If you're using some kind of Flux implementation, there may be some specific guidance. But for this use case, we will simply grab all the posts and feed them through our app. In order to do this, we first need to do some refactoring on our current architecture. Step 5: Data Flow Refactor git checkout data-flow-refactor && npm install It's a little weird how each post page has to make a request to the server for its content, even though the App component already has all the posts in its state. A better solution would be to have App simply pass the appropriate content down to the Post component. // components/routes.js import React from 'react' import { Route } from 'react-router' import App from './App' import Post from './Post' module.exports = ( <Route path="/" component={App}> <Route path="/:postId/:postName" /> </Route> ) In our routes.js, we've made the Post route a componentless route. It's still a child of the App route, but now has to completely rely on the App component for rendering. Below are the changes to App.js: // components/App.js ... 
render() {    const posts = this.state.posts.map((post) => {      const linkTo = `/${post.id}/${post.slug}`;      return (        <li key={post.id}>          <Link to={linkTo}>{post.title}</Link>        </li>      )    })    const { postId, postName } = this.props.params;    let postTitle, postContent    if (postId && postName) {      const post = this.state.posts.find(p => p.id == postId)      postTitle = post.title      postContent = post.content    }    return (      <div>        <h3>Posts</h3>        <ul>          {posts}        </ul>        {postTitle && postContent ? (          <Post title={postTitle} content={postContent} />        ) : (          <h1>Welcome to the Universal Blog!</h1>        )}      </div>    ) } } export default App If we are on a post page, then props.params.postId and props.params.postName will both be defined and we can use them to grab the desired post and pass the data on to the Post component to be rendered. If those properties are not defined, then we're on the home page and can simply render a greeting. Now, our Post.js component can be a simple stateless functional component that simply renders its properties. // components/Post.js import React from 'react' const Post = ({title, content}) => ( <div> <h3>{title}</h3> <p>{content}</p> </div>) export default Post With that refactoring complete, we're ready to implement data loading. Step 6: Data Loading git checkout data-loading && npm install For this final step, we just need to make two small changes in server.js and App.js: // server.js ... 
app.get('*', (req, res) => { match({ routes: routes, location: req.url }, (err, redirect, props) => { if (err) { res.status(500).send(err.message) } else if (redirect) { res.redirect(redirect.pathname + redirect.search) } else if (props) { const routerContextWithData = ( <RouterContext {...props} createElement={(Component, props) => { return <Component posts={posts} {...props} /> }} /> ) const appHtml = renderToString(routerContextWithData) res.send(renderPage(appHtml)) } else { res.status(404).send('Not Found') } }) }) ... // components/App.js import React from 'react' import Post from './Post' import { Link, IndexLink } from 'react-router' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: props.posts || [] } } ... In server.js, we're changing how the RouterContext creates elements by overwriting its createElement function and passing in our data as additional props. These props will get passed to any component that is matched by the route, which in this case will be our App component. Then, when the App component is initialized, it sets its posts state property to what it got from props or an empty array. That's it! Run npm start one last time, and cruise through your app. You can even disable JavaScript, and the app will automatically degrade to requesting whole pages. Thanks for reading! About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at here.
Parallel Computing
Packt
30 Sep 2016
9 min read
In this article written by Jalem Raj Rohit, author of the book Julia Cookbook, we cover the following recipes: Basic concepts of parallel computing Data movement Parallel map and loop operations Channels (For more resources related to this topic, see here.) Introduction In this article, you will learn about performing parallel computing and using it to handle big data. So, some concepts like data movements, sharded arrays, and the map-reduce framework are important to know in order to handle large amounts of data by computing on it using parallelized CPUs. So, all the concepts discussed in this article will help you build good parallel computing and multiprocessing basics, including efficient data handling and code optimization. Basic concepts of parallel computing Parallel computing is a way of dealing with data in a parallel way. This can be done by connecting multiple computers as a cluster and using their CPUs for carrying out the computations. This style of computation is used when handling large amounts of data and also while running complex algorithms over significantly large data. The computations are executed faster due to the availability of multiple CPUs running them in parallel as well as the direct availability of RAM to each of them. Getting ready Julia has in-built support for parallel computing and multiprocessing. So, these computations rarely require any external libraries for the task. How to do it… Julia can be started on your local computer using multiple cores of your CPU. So, we will now have multiple workers for the process. This is how you can fire up Julia in multiprocessing mode in your terminal; the following creates two worker processes on the machine, which means it uses two CPU cores: julia -p 2 The output looks something like this. It might differ for different operating systems and different machines: Now, we will look at the remotecall() function. 
It takes in multiple arguments, the first one being the process which we want to assign the task to. The next argument would be the function which we want to execute. The subsequent arguments would be the parameters or the arguments of that function which we want to execute. In this example, we will create a 2 x 2 random matrix and assign it to process number 2. This can be done as follows: task = remotecall(2, rand, 2, 2) The preceding command gives the following output: Now that the remotecall() function for remote referencing has been executed, we will fetch the results of the function through the fetch() function. This can be done as follows: fetch(task) The preceding command gives the following output: Now, to perform some mathematical operations on the generated matrix, we can use the @spawnat macro, which takes in the mathematical operation and the fetch() function. The @spawnat macro actually wraps the expression 5 .+ fetch(task) into an anonymous function and runs it on the second process. This can be done as follows: task2 = @spawnat 2 5 .+ fetch(task) There is also a function that eliminates the need of using two different functions: remotecall() and fetch(). The remotecall_fetch() function takes in multiple arguments. The first one is the process that the task is being assigned to. The next argument is the function which you want to be executed. The subsequent arguments would be the arguments or the parameters of the function that you want to execute. Now, we will use the remotecall_fetch() function to fetch an element of the task matrix for a particular index. This can be done as follows: remotecall_fetch(2, getindex, task2, 1, 1) How it works… Julia can be started in multiprocessing mode by specifying the number of processes needed while starting up the REPL. In this example, we started Julia in two-process mode. The maximum number of processes depends on the number of cores available in the CPU. 
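The remotecall()/fetch() pair follows the future pattern found in many languages: start a computation elsewhere, keep a handle to it, and ask the handle for its result later. For comparison, here is the same shape sketched with Python's standard library (a cross-language illustration only, not Julia code, and using threads rather than separate worker processes):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rand_matrix(rows, cols):
    # Stand-in for Julia's rand(rows, cols)
    return [[random.random() for _ in range(cols)] for _ in range(rows)]

with ThreadPoolExecutor(max_workers=2) as pool:
    task = pool.submit(rand_matrix, 2, 2)  # like task = remotecall(2, rand, 2, 2)
    mat = task.result()                    # like fetch(task)
    # Submitting and immediately waiting mirrors remotecall_fetch(2, getindex, ...)
    elem = pool.submit(lambda m: m[0][0], mat).result()
```

As in Julia, the submit call returns immediately with a handle, and only the result() call blocks until the computation has finished.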
The remotecall() function helps in selecting a particular process from the running processes in order to run a function or, in fact, any computation for us. The fetch() function is used to fetch the results of the remotecall() function from a common data resource (or the process) for all the running processes. The details of the data source will be covered in later sections. The results of the fetch() function can also be used for further computations, which can be carried out with the @spawnat macro along with the results of fetch(). This would assign a process for the computation. The remotecall_fetch() function further eliminates the need for the fetch function in the case of a direct execution. It has both the remotecall() and fetch() operations built into it. So, it acts as a combination of both the second and third points in this section. Data movement In parallel computing, data movements are quite common, and they should be minimized because of the time and network overhead they incur. In this recipe, we will see how this can be optimized to avoid as much latency as possible. Getting ready To get ready for this recipe, you need to have the Julia REPL started in multiprocessing mode. This is explained in the Getting ready section of the preceding recipe. How to do it… Firstly, we will see how to do a matrix computation using the @spawn macro, which helps in data movement. So, we construct a matrix of shape 200 x 200 and then try to square it using the @spawn macro. This can be done as follows: mat = rand(200, 200) exec_mat = @spawn mat^2 fetch(exec_mat) The preceding command gives the following output: Now, we will look at another way to achieve the same. This time, we will use the @spawn macro directly instead of the initialization step. We will discuss the advantages and drawbacks of each method in the How it works… section. 
So, this can be done as follows: mat = @spawn rand(200, 200)^2 fetch(mat) The preceding command gives the following output: How it works… In this example, we construct a 200 x 200 matrix and then use the @spawn macro to spawn a process in the CPU to execute the computation for us. The @spawn macro picks one of the two running processes and uses it for the computation. In the second example, you learned how to use the @spawn macro directly without an extra initialization step. The fetch() function helps us fetch the results from a common data resource of the processes. More on this will be covered in the following recipes. Parallel maps and loop operations In this recipe, you will learn a bit about the famous Map-Reduce framework and why it is one of the most important ideas in the domains of big data and parallel computing. You will learn how to parallelize loops and use reducing functions on them through several CPUs and machines, using the concepts of parallel computing which you learned about in the previous recipes. Getting ready Just like the previous sections, Julia just needs to be running in multiprocessing mode to follow the examples below. This can be done through the instructions given in the first section. How to do it… Firstly, we will write a function that generates and adds n random bits. The writing of this function has nothing to do with multiprocessing, so it uses simple Julia functions and loops. This function can be written as follows: function count_heads(n) c = 0 for i = 1:n c += Int(rand(Bool)) end c end Now, we will use the @spawn macro, which we learned about previously, to run the count_heads() function as separate processes. The count_heads() function needs to be in the same directory for this to work. This can be done as follows: require("count_heads") a = @spawn count_heads(100) b = @spawn count_heads(100) fetch(a) + fetch(b) However, we can use the concept of multiprocessing and parallelize the loop directly as well as take the sum. 
The parallelizing part is called mapping, and the addition of the parallelized bits is called reduction. Together, they constitute the famous Map-Reduce framework. This can be made possible using the @parallel macro, as follows: nheads = @parallel (+) for i = 1:200 Int(rand(Bool)) end How it works… The first function is a simple Julia function that adds random bits with every loop iteration. It was created just for the demonstration of Map-Reduce operations. In the second point, we spawn two separate processes for executing the function and then fetch the results of both of them and add them up. However, that is not really a neat way to carry out parallel computation of functions and loops. Instead, the @parallel macro provides a better way to do it: it allows the user to parallelize the loop and then reduce the computations through an operator, which together constitute the Map-Reduce operation. Channels Channels are like the background plumbing for parallel computing in Julia. They are like reservoirs from which the individual processes access their data. Getting ready The requirements are similar to those of the previous sections. This is mostly a theoretical section, so you just need to run your experiments on your own. For that, you need to run your Julia REPL in multiprocessing mode. How to do it… Channels are shared queues with a fixed length. They are common data reservoirs for the processes which are running. The channels are like common data resources, which multiple readers or workers can access. They can access the data through the fetch() function, which we already discussed in the previous sections. The workers can also write to the channel through the put!() function. This means that the workers can add more data to the resource, which can be accessed by all the workers running a particular computation. Closing a channel after usage is a good practice to avoid data corruption and unnecessary memory usage. 
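The map-then-reduce shape of the @parallel (+) loop above can also be sketched with Python's standard library for comparison (an illustration of the pattern only, not Julia code; note that CPython threads will not give true CPU parallelism for this workload, so a real port would use separate processes, much like Julia's workers):

```python
import random
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def count_heads(n):
    # Sum n random coin flips, mirroring the Julia helper above.
    return sum(random.getrandbits(1) for _ in range(n))

# Map step: run the chunks of the loop concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    partial_sums = list(pool.map(count_heads, [100, 100]))

# Reduce step: combine the partial results with the (+) operator.
nheads = reduce(add, partial_sums)
```

The two steps are exactly what @parallel (+) fuses into a single construct: distribute the loop body (map), then fold the per-worker results with the given operator (reduce).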
It can be done using the close() function. Summary In this article we covered the basic concepts of parallel computing and data movement that takes place in the network. We also learned about parallel maps and loop operations along with the famous Map Reduce framework. At the end we got a brief understanding of channels and how individual processes access their data from channels. Resources for Article: Further resources on this subject: More about Julia [article] Basics of Programming in Julia [article] Simplifying Parallelism Complexity in C# [article]

Introduction to Neural Networks with Chainer – Part 1
Hiroyuki Vincent
30 Sep 2016
8 min read
With the increasing popularity of neural networks, or deep learning, companies ranging from smaller start-ups to major ones such as Google have been releasing frameworks for deep learning related tasks. Caffe from the Berkeley Vision and Learning Center (BVLC), Torch and Theano have been around for quite a while. TensorFlow was open sourced by Google last year in 2015 and has since then been expanding its community. Neon from Nervana Systems is a more recent addition, with a good reputation for its performance. This three-part post series will introduce you to yet another framework for neural networks called Chainer, which, similarly to most of the previously mentioned frameworks, is based on Python. It has an intuitive interface with a low learning curve but hasn't been widely adopted yet outside the borders of Japan, where it is being developed. This first part aims to explain the characteristics of Chainer and the basic data structures. The second and third parts will help you get started with the actual training. The basic theory of neural networks will not be covered, but if you are familiar with the forward pass, back propagation and gradient descent, and on top of that have some coding experience, you should be able to follow this article. What is Chainer? Chainer is an open sourced Python based framework maintained by Preferred Infrastructure/Preferred Networks in Japan. The company behind the framework puts a heavy emphasis on closing the gap between the machine learning research being carried out in academia and the more practical applications of machine learning. They focus on deep learning, IoT, edge-heavy computing with applications in the automobile manufacturing and healthcare markets by, for instance, developing autonomous cars and factory robots with grasping capabilities. Why Chainer? Defining and training simple neural networks with Chainer can be done with just a few lines of code. 
It can also scale to larger models with more complex architectures with little effort. It is a framework for basically anyone working, studying, or researching in neural networks. There are, however, other alternatives, as mentioned in the introduction. This section explains the characteristics of the framework and why you might want to try it out. One major issue with deep learning tasks is configuring the hyperparameters. Chainer makes this less of a pain. It comes with many layers, activation functions, loss functions, and optimization algorithms in a plug-and-play fashion. With a single line of code or a single function call, those components can be added or removed without affecting the rest of the program. The abstraction and class structure of the framework make it intuitive to learn and start experimenting with. We will dig deeper into that in the second part of this series. On top of that, it is well documented. It is actually so well documented that you may stop reading this article right here and jump to the official documentation. Much of the content in this post is extracted from the official documentation, but I will try to complement it with additional details and awareness of common pitfalls.

GPU Support

NumPy is a Python package commonly used in academia due to its rich interface for manipulating multidimensional arrays, similar to MATLAB. If you are working with neural networks, chances are you're well familiar with NumPy and its methods and operations. Chainer comes with CuPy (chainer.cuda.cupy), a GPU alternative to NumPy. CuPy is a CUDA-based GPU backend for array manipulation that implements a subset of the NumPy interface. Hence, it is possible to write almost generic code for both the CPU and the GPU: you can simply change the NumPy package to CuPy and vice versa in your source code to switch from one to the other. It unfortunately lacks some features, such as advanced indexing and numpy.where.
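To make the backend switching concrete, here is a small hand-rolled sketch of the common pattern. The helper below is invented for illustration (Chainer ships its own chainer.cuda.get_array_module), and the GPU branch only works with a CUDA-enabled CuPy installation; with NumPy alone it runs as-is:

```python
import numpy as np

def get_array_module(use_gpu=False):
    # Hand-rolled stand-in for chainer.cuda.get_array_module: return the
    # array library to compute with. "xp" is the conventional name for it.
    if use_gpu:
        import cupy  # only available with a CUDA-enabled installation
        return cupy
    return np

def affine(xp, x, w, b):
    # A forward pass written once against the xp interface, so the same
    # code runs on the CPU (NumPy) or the GPU (CuPy).
    return xp.dot(x, w) + b

xp = get_array_module(use_gpu=False)
x = xp.ones((2, 3), dtype=xp.float32)
w = xp.ones((3, 4), dtype=xp.float32)
b = xp.zeros(4, dtype=xp.float32)
out = affine(xp, x, w, b)
print(out.shape)  # (2, 4)
```

Because only the `xp` binding changes, switching backends does not touch the model code itself.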
Multi-GPU training in terms of model parallelism and data parallelism is also supported, although it is not in the scope of this series. As we will discover, the most fundamental data structure in Chainer is a NumPy (or CuPy) array wrapper with added functionality.

The Basics of Chainer

Let's write some code to get familiar with the Chainer interface. First, make sure that you have a working Python environment. We will use pip to install Chainer, even though it can also be installed directly from source. The code included in this post is verified with Python 3.5.1 and Chainer 1.8.2.

Installation

Install NumPy and Chainer using pip, which comes with the Python environment.

pip install numpy
pip install chainer

Chainer Variables and Variable Differentiation

You are now ready to start writing code. First, let's take a look at the snippet below.

import numpy as np
from chainer import Variable

x_data = np.array([4], dtype=np.float32)
x = Variable(x_data)
assert x.data == x_data  # True

y = x ** 2 + 5 * x + 3
assert isinstance(y, Variable)  # True

# Compute the gradient for x and store it in x.grad
y.backward()

# y'(x) = 2 * x + 5; y'(4) = 13
assert x.grad == 13  # True

The most fundamental data structure is the Chainer variable class, chainer.Variable. It can be initialized by passing a NumPy array (which must have the datatype numpy.float32) or by applying functions to already instantiated Chainer variables. Think of them as wrappers of NumPy's N-dimensional arrays. To access the NumPy array from the variable, use the data property. So what makes them different? Each Chainer variable actually holds a reference to its creator, unless it is a leaf node, such as x in the example above. This means that y can reference x through the functions that created it. That way, a computational graph is maintained by the framework, which is used for computing the gradients during back propagation. This is exactly what happens when calling the Variable.backward() method on y.
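To see why the creator references matter, here is a deliberately tiny pure-Python sketch of the same idea. This is not Chainer's implementation: ToyVariable and its operator set are invented for illustration (real Chainer builds the graph through Function objects), but it reproduces the x.grad == 13 result from the snippet above:

```python
import numpy as np

class ToyVariable:
    """A toy autodiff variable: wraps an array and remembers which
    variables created it, so gradients can flow backwards."""
    def __init__(self, data, parents=()):
        self.data = np.asarray(data, dtype=np.float32)
        self.parents = parents  # pairs of (parent variable, local gradient)
        self.grad = None

    def __add__(self, other):
        other = other if isinstance(other, ToyVariable) else ToyVariable(other)
        return ToyVariable(self.data + other.data, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, ToyVariable) else ToyVariable(other)
        return ToyVariable(self.data * other.data,
                           [(self, other.data), (other, self.data)])

    __radd__ = __add__
    __rmul__ = __mul__

    def __pow__(self, n):
        # d(x**n)/dx = n * x**(n-1)
        return ToyVariable(self.data ** n, [(self, n * self.data ** (n - 1))])

    def backward(self, grad=1.0):
        # Accumulate the incoming gradient, then push it to the parents
        # via the chain rule, walking the recorded graph backwards.
        self.grad = grad if self.grad is None else self.grad + grad
        for parent, local in self.parents:
            parent.backward(grad * local)

x = ToyVariable([4])
y = x ** 2 + 5 * x + 3
y.backward()
print(x.grad[0])  # 13.0
```

Chainer automates exactly this bookkeeping, for arbitrary array shapes and a much larger set of functions.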
The variable is differentiated and the gradient with respect to x is stored in the x variable itself. A pitfall here is that if x were an array with more than one element, y.grad would need to be initialized with an initial error. If, as in the example above, x only contains one element, the error is automatically set to 1.

Principles of Gradient Descent

Using the differentiation mechanism above, you can implement a gradient descent optimization algorithm in the following way. Initialize a weight w, a one-dimensional array with one element, to any value (say 4, as in the previous example) and iteratively minimize the loss function w ** 2. The loss function is a function depending on the parameter w, just as y depended on x. The loss function is obviously at its global minimum when w == 0. This is what we want to achieve with gradient descent. We can get very close by repeating the optimization step using Variable.backward().

import numpy as np
from chainer import Variable

w = Variable(np.array([4], dtype=np.float32))
learning_rate = 0.1
max_iters = 100

for i in range(max_iters):
    loss = w ** 2
    loss.backward()  # Compute w.grad

    # Optimize / Update the parameter using gradient descent
    w.data -= learning_rate * w.grad

    # Reset the gradient for the next iteration, w.grad == 0
    w.zerograd()

    print('Iteration: {} Loss: {}'.format(i, loss.data))

In each iteration, the parameter is updated in the direction of the negative gradient to lower the loss; we are performing gradient descent. Note that the gradient is scaled by a learning rate before the parameter is updated in order to stabilize the optimization. In fact, if the learning rate were removed from this example, the loss would be stuck at 16, since the derivative of the loss function is 2 * w, which would cause w to simply jump back and forth between 4 and -4. With the learning rate, we see that the loss decreases with each iteration.
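Because the derivative of this particular loss is known in closed form (d(w^2)/dw = 2w), the same optimization can be reproduced with plain NumPy and no Chainer at all, which makes the effect of the learning rate easy to verify:

```python
import numpy as np

w = np.array([4.0], dtype=np.float32)
learning_rate = 0.1
losses = []

for i in range(100):
    losses.append(float(w[0] ** 2))
    grad = 2 * w               # what loss.backward() computes for loss = w ** 2
    w -= learning_rate * grad  # each step multiplies w by (1 - 2 * 0.1) = 0.8
    # With no learning rate (w -= grad), w would flip between 4 and -4
    # forever and the loss would stay at 16.

print(losses[:3])  # matches the Chainer output below: 16, 10.24..., 6.55...
```

Since w shrinks by a constant factor of 0.8 per step, the loss shrinks by 0.64 per step, which is exactly the geometric decay visible in the iteration log.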
loss.data, as seen in the output below, is an array with one element, since it has the same dimensions as w.data.

Iteration: 0 Loss: [ 16.]
Iteration: 1 Loss: [ 10.24000072]
Iteration: 2 Loss: [ 6.55359983]
Iteration: 3 Loss: [ 4.19430351]
Iteration: 4 Loss: [ 2.68435407]
Iteration: 5 Loss: [ 1.71798646]
Iteration: 6 Loss: [ 1.09951138]
Iteration: 7 Loss: [ 0.70368725]
Iteration: 8 Loss: [ 0.45035988]
Iteration: 9 Loss: [ 0.2882303]
...

Summary

This was a very brief introduction to the framework and its fundamental chainer.Variable class, and to how variable differentiation using computational graphs forms the core concept of the framework. In the second and third parts of this series, we will implement a complete training algorithm using Chainer.

About the Author

Hiroyuki Vincent Yamazaki is a graduate student at KTH, Royal Institute of Technology in Sweden, currently conducting research in convolutional neural networks at Keio University in Tokyo, partially using Chainer as part of a double-degree programme. GitHub  LinkedIn

Packt
30 Sep 2016
15 min read

Functions in Swift

In this article by Dr. Fatih Nayebi, the author of the book Swift 3 Functional Programming, we will see that, as functions are the fundamental building blocks in functional programming, this article dives deeper into them and explains all the aspects related to the definition and usage of functions in functional Swift, with coding examples. This article will cover the following topics with coding examples:

The general syntax of functions
Defining and using function parameters
Setting internal and external parameters
Setting default parameter values
Defining and using variadic functions
Returning values from functions
Defining and using nested functions

(For more resources related to this topic, see here.)

What is a function?

Object-oriented programming (OOP) looks very natural to most developers as it simulates a real-life situation of classes (in other words, blueprints) and their instances, but it brought a lot of complexities and problems such as instance and memory management, complex multithreading, and concurrency programming. Before OOP became mainstream, we used to develop in procedural languages. In the C programming language, we did not have objects and classes; we would use structs and function pointers. So now we are talking about functional programming, which relies mostly on functions just as procedural languages relied on procedures. We are able to develop very powerful programs in C without classes; in fact, most operating systems are developed in C. There are other multipurpose programming languages, such as Go by Google, that are not object-oriented and are getting very popular because of their performance and simplicity. So, are we going to be able to write very complex applications without classes in Swift? We might wonder why we should do this. Generally, we should not, but attempting it will introduce us to the capabilities of functional programming.
A function is a block of code that executes a specific task, can be stored, can persist data, and can be passed around. We define functions in standalone Swift files as global functions, or inside other building blocks such as classes, structs, enums, and protocols as methods. They are called methods if they are defined in classes, but in terms of definition there is no difference between a function and a method in Swift. Defining them in other building blocks enables methods to use the scope of the parent or to be able to change it. They can access the scope of their parent and they have their own scope: any variable that is defined inside a function is not accessible outside of it. The variables defined inside a function, and the corresponding allocated memory, go away when the function terminates. Functions are very powerful in Swift. We can compose a program with only functions, as functions can receive and return functions, capture variables that exist in the context they were declared in, and persist data inside themselves. To understand the functional programming paradigms, we need to understand the capability of functions in detail. We need to think about whether we can avoid classes and only use functions, so we will cover all the details related to functions in the upcoming sections of this article.

The general syntax of functions and methods

We can define functions or methods as follows:

accessControl func functionName(parameter: ParameterType) throws -> ReturnType { }

As we know already, when functions are defined in objects, they become methods. The first step in defining a method is to tell the compiler from where it can be accessed. This concept is called access control in Swift and there are three levels of access control. We are going to explain them for methods as follows:

Public access: Any entity can access a method that is defined as public if it is in the same module.
If an entity is not in the same module, we will need to import the module to be able to call the method. We need to mark our methods and objects as public when we develop frameworks in order to enable other modules to use them.

Internal access: Any method that is defined as internal can be accessed from other entities in a module but cannot be accessed from other modules.

Private access: Any method that is defined as private can be accessed only from the same source file.

By default, if we do not provide the access modifier, a variable or function becomes internal. Using these access modifiers, we can structure our code properly; for instance, we can hide details from other modules if we define an entity as internal. We can even hide the details of a method from other files if we define it as private. Before Swift 2.0, we had to define everything as public or add all source files to the testing target. Swift 2.0 introduced the @testable import syntax that enables us to access internal or private methods from testing modules. Methods can generally be in three forms:

Instance methods: We need to obtain an instance of an object (in this article we will refer to classes, structs, and enums as objects) in order to be able to call the method defined in it, and then we will be able to access the scope and data of the object.

Static methods: Swift also names these type methods. They do not need any instance of an object and they cannot access instance data. They are called by putting a dot after the name of the object type (for example, Person.sayHi()). Static methods cannot be overridden by subclasses of the object that they reside in.

Class methods: Class methods are like static methods but they can be overridden by subclasses.

We have covered the keywords that are required for method definitions; now we will concentrate on the syntax that is shared among functions and methods.
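Before moving on, the difference between static and class methods can be made concrete with a short sketch (the type and method names are made up for illustration):

```swift
class Person {
    // A static method: called on the type, cannot be overridden.
    static func species() -> String { return "Homo sapiens" }
    // A class method: called on the type, but subclasses may override it.
    class func sayHi() -> String { return "Hi, I am a Person" }
}

class Employee: Person {
    // Writing 'override static func species()' here would not compile.
    override class func sayHi() -> String { return "Hi, I am an Employee" }
}

// Both are called with a dot after the type name; no instance is needed.
print(Person.sayHi())     // Hi, I am a Person
print(Employee.sayHi())   // Hi, I am an Employee
print(Employee.species()) // Homo sapiens (inherited, not overridable)
```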
There are other concepts related to methods that are out of the scope of this article, as we will concentrate on functional programming in Swift. Continuing with the function definition, next comes the func keyword, which is mandatory and is used to tell the compiler that it is going to deal with a function. Then comes the function name, which is mandatory and is recommended to be camel-cased with the first letter lowercase. The function name should state what the function does, and it is recommended that it be in the form of a verb when we define methods in objects. Basically, our classes will be named with nouns and methods will be verbs that are in the form of orders to the class. In pure functional programming, as functions do not reside in other objects, they can be named by their functionality. Parameters follow the function name. They are defined in parentheses, which are used to pass arguments to the function. Parentheses are mandatory even if we do not have any parameters. We will cover all aspects of parameters in an upcoming section of this article. Then comes throws, which is not mandatory. A function or method that is marked with the throws keyword may or may not throw errors. At this point, it is enough to be able to recognize it when we see it in a function or method signature. The next entity in a function type declaration is the return type. If a function is not void, the return type comes after the -> sign. The return type indicates the type of entity that is going to be returned from a function. We will cover return types in detail in an upcoming section of this article, so now we can move on to the last piece of the function, present in most programming languages: our beloved { }. We defined functions as blocks of functionality, and { } defines the borders of the block, so the function body is declared and execution happens in there. We will write the functionality inside { }.
Best practices in function definition

There are proven best practices for function and method definition, provided by amazing software engineering resources such as Clean Code, Code Complete, and Coding Horror, that we can summarize as follows:

Try not to exceed 8-10 lines of code in each function, as shorter functions or methods are easier to read, understand, and maintain.
Keep the number of parameters minimal, because the more parameters a function has, the more complex it is.
Functions should have at least one parameter and one return value.
Avoid using type names in function names, as that would be redundant.
Aim for one and only one functionality in a function.
Name a function or method in a way that describes its functionality properly and is easy to understand.
Name functions and methods consistently. If we have a connect function, we can have a disconnect one.
Write functions to solve the current problem and generalize them when needed. Try to avoid "what if" scenarios, as probably you aren't going to need it (YAGNI).

Calling functions

We have covered the general syntax for defining a function, or a method if it resides in an object. Now it is time to talk about how we call our defined functions and methods. To call a function, we use its name and provide its required parameters. There are complexities in providing parameters that we will cover in the upcoming section. For now, we are going to cover the most basic way of providing parameters, as follows:

funcName(paramName, secondParam: secondParamName)

This style of function calling should be familiar to Objective-C developers, as the first parameter is unnamed and the rest are named. To call a method, we need to use the dot notation provided by Swift.
The following examples are for class instance methods and static class methods:

let someClassInstance = SomeClass()
someClassInstance.funcName(paramName, secondParam: secondParamName)
StaticClass.funcName(paramName, secondParam: secondParamName)

Defining and using function parameters

In a function definition, parameters follow the function name. They are constants by default, so we are not able to alter them inside the function body unless we mark them with var. In functional programming, we avoid mutability; therefore, we would never use mutable parameters in functions. Parameters should be inside parentheses. If we do not have any parameters, we simply put open and close parentheses without any characters between them:

func functionName() { }

In functional programming, it is important to have functions that have at least one parameter. We will explain why this is important in upcoming sections. We can have multiple parameters separated by commas. In Swift, parameters are named, so we need to provide the parameter name and type after a colon, as shown in the following example:

func functionName(parameter: ParameterType, secondParameter: ParameterType) { }

// To call:
functionName(parameter, secondParameter: secondParam)

ParameterType can also be an optional type, so the function becomes the following if our parameters need to be optionals:

func functionName(parameter: ParameterType?, secondParameter: ParameterType?) { }

Swift enables us to provide external parameter names that are used when functions are called. The following example presents the syntax:

func functionName(externalParamName localParamName: ParameterType) { }

// To call:
functionName(externalParamName: parameter)

Only the local parameter name is usable in the function body.
It is possible to omit parameter names with the _ syntax; for instance, if we do not want to provide any parameter name when the function is called, we can use _ as the external parameter name for the second and subsequent parameters. If we want to have a parameter name for the first parameter in function calls, we can simply provide the local parameter name as the external one as well. In this article, we are going to use the default function parameter definition. Parameters can have default values as follows:

func functionName(parameter: Int = 3) {
    print("\(parameter) is provided.")
}

functionName(5) // prints "5 is provided."
functionName() // prints "3 is provided."

Parameters can be defined as inout to enable function callers to obtain parameters that are going to be changed in the body of a function. As we can use tuples for function returns, it is not recommended to use inout parameters unless we really need them. We can define function parameters as tuples. For instance, the following example function accepts a tuple of the (Int, Int) type:

func functionWithTupleParam(tupleParam: (Int, Int)) {}

As, under the hood, variables are represented by tuples in Swift, the parameters to a function can also be tuples. For instance, let's have a simple convert function that takes an array of Int and a multiplier and converts it to a different structure. Let's not worry about the implementation of this function for now:

let numbers = [3, 5, 9, 10]

func convert(numbers: [Int], multiplier: Int) -> [String] {
    let convertedValues = numbers.enumerate().map { (index, element) in
        return "\(index): \(element * multiplier)"
    }
    return convertedValues
}

If we use this function as convert(numbers, multiplier: 3), the result is going to be ["0: 9", "1: 15", "2: 27", "3: 30"]. We can call our function with a tuple. Let's create a tuple and pass it to our function:

let parameters = (numbers, multiplier: 3)
convert(parameters)

The result is identical to our previous function call.
However, passing tuples in function calls is deprecated and will be removed in Swift 3.0, so it is not recommended to use them. We can define higher-order functions that receive functions as parameters. In the following example, we define funcParam as a function type of (Int, Int) -> Int:

func functionWithFunctionParam(funcParam: (Int, Int) -> Int) { }

In Swift, parameters can be of a generic type. The following example presents a function that has two generic parameters. In this syntax, any type (for example, T or V) that we put inside <> should be used in the parameter definition:

func functionWithGenerics<T, V>(firstParam: T, secondParam: V) { }

Defining and using variadic functions

Swift enables us to define functions with variadic parameters. A variadic parameter accepts zero or more values of a specified type. Variadic parameters are similar to array parameters, but they are more readable and can only be used as the last parameter in multiparameter functions. As variadic parameters can accept zero values, we will need to check whether the parameter is empty. The following example presents a function with a variadic parameter of the String type:

func greet(names: String...) {
    for name in names {
        print("Greetings, \(name)")
    }
}

// To call this function
greet("Steve", "Craig") // prints twice
greet("Steve", "Craig", "Johny") // prints three times

Returning values from functions

If we need our function to return a value, tuple, or another function, we can specify it by providing the ReturnType after ->. For instance, the following example returns String:

func functionName() -> String { }

Any function that has a ReturnType in its definition should have a return keyword with a matching type in its body. Return types can be optionals in Swift, so the function becomes the following if the return needs to be optional:

func functionName() -> String? { }

Tuples can be used to provide multiple return values.
For instance, the following function returns a tuple of the (Int, String) type:

func functionName() -> (code: Int, status: String) { }

As we are using parentheses for tuples, we should avoid using parentheses for single-return-value functions. Tuple return types can be optional too, so the syntax becomes the following:

func functionName() -> (code: Int, status: String)? { }

This syntax makes the entire tuple optional; if we want to make only status optional, we can define the function as follows:

func functionName() -> (code: Int, status: String?) { }

In Swift, functions can return functions. The following example presents a function with the return type of a function that takes two Int values and returns Int:

func funcName() -> (Int, Int) -> Int { }

If we do not expect a function to return any value, tuple, or function, we simply do not provide a ReturnType:

func functionName() { }

We could also explicitly declare it with the Void keyword:

func functionName() -> Void { }

In functional programming, it is important to have return types in functions. In other words, it is a good practice to avoid functions that have Void as return types. A function with the Void return type typically is a function that changes another entity in the code; otherwise, why would we need the function at all? OK, we might have wanted to log an expression to the console/log file or write data to a database or a file in a filesystem. In these cases, it is also preferable to have a return or feedback related to the success of the operation. As we try to avoid mutability and stateful programming in functional programming, we can assume that our functions will have returns in different forms. This requirement is in line with the mathematical underlying bases of functional programming. In mathematics, a simple function is defined as follows:

y = f(x) or f(x) -> y

Here, f is a function that takes x and returns y. Therefore, a function receives at least one parameter and returns at least one value.
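The points above about functions as parameters, tuple returns, and functions as return values can be sketched together as follows (Swift 3 parameter syntax; the function names are illustrative):

```swift
// A higher-order function that takes a function of type (Int, Int) -> Int.
func apply(_ operation: (Int, Int) -> Int, _ a: Int, _ b: Int) -> Int {
    return operation(a, b)
}

// A function that returns a function of type (Int) -> Int,
// capturing 'amount' from its defining context.
func makeAdder(_ amount: Int) -> (Int) -> Int {
    return { value in value + amount }
}

// A function returning a tuple with named elements.
func status(_ code: Int) -> (code: Int, status: String) {
    return (code, code == 0 ? "OK" : "Error")
}

let sum = apply({ $0 + $1 }, 3, 4)  // 7
let addFive = makeAdder(5)
let result = addFive(10)            // 15
let s = status(0)                   // (code: 0, status: "OK")
```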
In functional programming, following the same paradigm makes reasoning easier, makes function composition possible, and makes code more readable.

Summary

This article explained function definition and usage in detail by giving examples of parameter and return types. You can also refer to the following books on similar topics: Protocol-Oriented Programming with Swift: https://www.packtpub.com/application-development/protocol-oriented-programming-swift OpenStack Object Storage (Swift) Essentials: https://www.packtpub.com/virtualization-and-cloud/openstack-object-storage-swift-essentials Implementing Cloud Storage with OpenStack Swift: https://www.packtpub.com/virtualization-and-cloud/implementing-cloud-storage-openstack-swift Resources for Article: Further resources on this subject: Introducing the Swift Programming Language [article] Swift for Open Source Developers [article] Your First Swift App [article]

Packt
29 Sep 2016
7 min read

Learning How to Manage Records in Visualforce

In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change available picklist options, or capturing different information based on the user's selections.

(For more resources related to this topic, see here.)

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot: In the scenario where one or more inputs are required and there are additional validation rules (for example, when one of either the Email or Phone fields must be defined for a contact), this can lead to a drip feed of error messages to the user. This is because the inputs cause repeated unsuccessful attempts to submit the form, each time getting slightly further in the process. Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
Click on the New button. Paste the contents of the RequiredStylingExt.cls Apex class from the downloaded code into the Apex Class area. Click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field. Paste the contents of the RequiredStyling.page file from the downloaded code into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredStyling page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form:

<apex:pageBlockSectionItem >
  <apex:outputLabel value="Last Name"/>
  <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput">
    <apex:outputPanel layout="block" styleClass="requiredBlock" />
    <apex:inputText value="{!Contact.LastName}"/>
  </apex:outputPanel>
</apex:pageBlockSectionItem>

The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes, to ensure that if Salesforce changes the names of its style classes, this does not break the page.
The controller extension's save action method carries out validation of all fields and attaches error messages to the page for all validation failures:

if (String.IsBlank(cont.name)) {
    ApexPages.addMessage(new ApexPages.Message(
        ApexPages.Severity.ERROR, 'Please enter the contact name'));
    error=true;
}
if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) {
    ApexPages.addMessage(new ApexPages.Message(
        ApexPages.Severity.ERROR, 'Please supply the email address or phone number'));
    error=true;
}

Styling table columns as required

When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs.

Getting ready

This topic makes use of a custom controller, so this will need to be created before the Visualforce page.

How to do it…

First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the downloaded code into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the downloaded code into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link.
On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for that particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component:

<apex:column >
  <apex:facet name="header">
    <apex:outputText styleclass="requiredHeader"
        value="{!$ObjectType.Contact.fields.LastName.label}" />
  </apex:facet>
  <apex:inputField value="{!contact.LastName}" required="false"/>
</apex:column>

The Visualforce page's custom controller Save method checks whether any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete:

if ( (!String.IsBlank(cont.FirstName)) ||
     (!String.IsBlank(cont.LastName)) ) {
    // a field is defined - check for last name
    if (String.IsBlank(cont.LastName)) {
        error=true;
        cont.LastName.addError('Please enter a value');
    }
}

String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not contain only whitespace.

Summary

Thus, in this article we successfully mastered the techniques to style fields and table columns to suit custom needs. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Using Spring JMX within Java Applications [Article]

Getting Started with KeystoneJS

Jake Stockwin
29 Sep 2016
5 min read
KeystoneJS is a content management framework for Node.js. It is an easy-to-use system that does all the hard work of making a website for you. This article works through a simple example to get you started with KeystoneJS.

Initial Setup

KeystoneJS comes paired with a generator to make setup simple. You'll need to have Node.js and MongoDB installed before you begin. To generate your site, all you need to do is run npm install -g generator-keystone and then yo keystone. You'll be asked a few questions, and after a while your site is ready. Running node keystone, you'll find a site with a ready-made blog, gallery, and contact form, but the main feature of KeystoneJS is the admin UI. Navigate to localhost:3000/keystone and sign in with the default credentials, and you'll be able to manage all the content on your site from a user-friendly interface. Take a look around your site and the code so that you're familiar with it; it's also worth having a read through the documentation.

Keystone Models

You now have a site up and running, but what if you need more than just a blog and a gallery? Perhaps you would like a page to display upcoming events. No problem; to achieve this we create a model. Open the models folder in your file browser and you will be able to see the existing models, User.js for example. We're going to add our own model; create a new file called Event.js in the models folder. Say our event should have both a start and an end time, a name, and a description. Then our model will look like this:

var keystone = require('keystone');
var Types = keystone.Field.Types;

var Event = new keystone.List('Event');

Event.add({
  name: { type: Types.Name, required: true, index: true },
  description: { type: Types.Textarea },
  start: { type: Types.Datetime },
  end: { type: Types.Datetime }
});

Event.register();

Now restart your app.
Under the hood, KeystoneJS is managing all the database schemas for you, and if you sign back in to the admin UI, you'll see that there is now a page to manage your events. All that was required was to create a new model, and Keystone wrote the entire backend for you; this shows the power of Keystone. You don't have to spend your time writing the backend for your site, and you are free to focus on the client-facing side of things.

Routes and Templates

We have created our model and are now able to log in to the admin UI and manage our events. However, we still need to display these events to our website viewers. This is done in two parts: a route is used to obtain the data from the database and make it available to the template, which displays the data. First, create the route. Open a new file, routes/views/events.js, and enter the following code:

var keystone = require('keystone');

exports = module.exports = function(req, res) {
  var view = new keystone.View(req, res);
  var locals = res.locals;

  // Set locals
  locals.section = 'events';
  locals.data = {}; // container the template will read the events from

  // Load the events
  view.on('init', function(next) {
    var q = keystone.list('Event').model.find();
    q.exec(function(err, results) {
      locals.data.events = results;
      next(err);
    });
  });

  // Render the view (the events template we create next)
  view.render('events');
};

You can now create your template. The events will be available to the template as data.events, because we have set locals.data.events in the route. KeystoneJS gives you the option of which template engine to use. The default is Jade, so we will use this as the example here, but you can easily adapt the code to any other engine, and if you get stuck, a good place to start is the blog post template.
Templates are stored in templates/views, so create templates/views/events.jade with the following code:

extends ../layouts/default

mixin event(event)
  h2= event.name
  p
    if event.start
      | start: #{event._.start.format('MMMM Do, YYYY')}
  p
    if event.end
      | end: #{event._.end.format('MMMM Do, YYYY')}
  p
    if event.description
      | details: #{event.description}

block content
  .container: .row
    .events
      each event in data.events
        +event(event)

This is by no means a well-designed page, but it will do for this example. We're almost done, but if you go to /events in your web browser, you'll get a 404 error. That's because we haven't told our route controllers about the new page yet. This is done in routes/index.js, and you just need to add the line app.get('/events', routes.views.events);. This tells your app to send any GET requests for /events to your new route, which in turn renders the new template. You can also add your new events page to your header by simply adding { label: 'Events', key: 'events', href: '/events' }, to routes/middleware.js. The key in this should match the res.locals.section set in the route we created.
Over the past few months, he has designed websites for various clients and has begun developing in Node.js.

Deep Learning with Torch

Preetham Sreenivas
29 Sep 2016
10 min read
Torch is a scientific computing framework built on top of Lua[JIT]. The nn package and the ecosystem around it provide a very powerful framework for building deep learning models, striking a perfect balance between speed and flexibility. It is used at Facebook AI Research(FAIR), Twitter Cortex, DeepMind, Yann LeCun's group at NYU, Fei-Fei Li's at Stanford, and many more industrial and academic labs. If you are like me, and don't like writing equations for backpropagation every time you want to try a simple model, Torch is a great solution. With Torch, you can also do pretty much anything you can imagine, whether that is writing custom loss functions, dreaming up an arbitrary acyclic graph network, or even using multiple GPUs or loading pre-trained models on imagenet from caffe model-zoo (yes, you can load models trained in caffe with a single line). Without further ado, let's jump right into the awesome world of deep learning. Prerequisites Some knowledge of deep learning—A Primer, Bengio's deep learning book, Hinton's Coursera course. A bit of Lua. Its syntax is very C-like and can be picked up fairly quickly if you know Python or JavaScript—Learn Lua in 15 minutes, Torch For Numpy Users. A machine with Torch installed since this is intended to be hands-on. On Ubuntu 12+ and Mac OS X, installing Torch looks like this: # in a terminal, run the commands WITHOUT sudo $ git clone https://github.com/torch/distro.git ~/torch --recursive $ cd ~/torch; bash install-deps; $ ./install.sh # On Linux with bash $ source ~/.bashrc # On OSX or in Linux with no bash. $ source ~/.profile Once you’ve installed Torch, you can run a Torch script using: $ th script.lua # alternatively you can fire up a terminal torch interpreter using th -i $ th -i # and run multiple scripts one by one, the variables will be accessible to other scripts > dofile 'script1.lua' > dofile 'script2.lua' > print(variable) -- variable from either of these scripts. 
The sections below are very code intensive, but you can run these commands from Torch's terminal interpreter:

$ th -i

Building a Model: The Basics

A module is the basic building block of any Torch model. It has forward and backward methods for the forward and backward passes of backpropagation. You can combine modules using containers, and of course, calling forward and backward on containers propagates inputs and gradients correctly.

-- A simple mlp model with sigmoids
require 'nn'
linear1 = nn.Linear(100,10) -- A linear layer Module
linear2 = nn.Linear(10,2)

-- You can combine modules using containers; sequential is the most used one
model = nn.Sequential() -- A container
model:add(linear1)
model:add(nn.Sigmoid())
model:add(linear2)
model:add(nn.Sigmoid())

-- the forward step
input = torch.rand(100)
target = torch.rand(2)
output = model:forward(input)

Now we need a criterion to measure how well our model is performing, in other words, a loss function. nn.Criterion is the abstract class that all loss functions inherit. It provides forward and backward methods, computing loss and gradients respectively. Torch provides most of the commonly used criterions out of the box. It isn't much of an effort to write your own either.

criterion = nn.MSECriterion() -- mean squared error criterion
loss = criterion:forward(output, target)
gradientsAtOutput = criterion:backward(output, target)

-- To perform the backprop step, we need to pass these gradients to the backward
-- method of the model
gradAtInput = model:backward(input, gradientsAtOutput)

lr = 0.1 -- learning rate for our model
model:updateParameters(lr) -- updates the parameters using the lr parameter

The updateParameters method simply subtracts the gradients, scaled by the learning rate, from the model parameters. This is vanilla stochastic gradient descent. Typically, the updates we do are more complex. For example, if we want to use momentum, we need to keep track of the updates we did in the previous epoch.
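To make the update rules concrete, here is a small numeric sketch, in plain Python rather than Lua and purely as an illustration, of the vanilla update performed by updateParameters and of a momentum update that keeps a velocity term from previous steps:

```python
# Illustrative sketch in plain Python (not Torch): what updateParameters(lr)
# does, and what a momentum update adds on top of it.

def sgd_step(params, grads, lr):
    # vanilla SGD: subtract the gradient scaled by the learning rate
    return [p - lr * g for p, g in zip(params, grads)]

def sgd_momentum_step(params, grads, velocity, lr, momentum):
    # the velocity term remembers a decaying sum of previous updates
    velocity = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    return [p + v for p, v in zip(params, velocity)], velocity

params, grads = [1.0, -2.0], [0.5, -0.5]
print(sgd_step(params, grads, lr=0.5))   # [0.75, -1.75]

velocity = [0.0, 0.0]
params2, velocity = sgd_momentum_step(params, grads, velocity, lr=0.5, momentum=0.9)
print(params2)   # first step matches vanilla SGD: [0.75, -1.75]
print(velocity)  # [-0.25, 0.25]
```

On the first step the two rules coincide; on later steps the accumulated velocity keeps pushing the parameters in the direction of recent gradients, which is what helps momentum escape small plateaus.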
There are a lot more fancy optimization schemes, such as RMSProp, Adam, Adagrad, and L-BFGS, that do more complex things like adapting the learning rate, the momentum factor, and so on. The optim package provides such optimization routines out of the box.

Dataset

We'll use the German Traffic Sign Recognition Benchmark (GTSRB) dataset. This dataset has 43 classes of traffic signs of varying sizes, illuminations, and occlusions. There are 39,000 training images and 12,000 test images. Traffic signs in each of the images are not centered, and they have a 10% border around them. I have included a shell script for downloading the data along with the code for this tutorial in this GitHub repo. (The code in the repo is much more polished than the snippets in this tutorial; it is modular and allows you to change the model and/or datasets easily.)

git clone https://github.com/preethamsp/tutorial.gtsrb.torch.git
cd tutorial.gtsrb.torch/datasets
bash download_gtsrb.sh

Model

Let's build a downsized VGG-style model with what we've learned.

function createModel()
  require 'nn'
  nbClasses = 43
  local net = nn.Sequential()

  --[[building block: adds a convolution layer, batch norm layer
      and a relu activation to the net]]--
  function ConvBNReLU(nInputPlane, nOutputPlane)
    -- kernel size = (3,3), stride = (1,1), padding = (1,1)
    net:add(nn.SpatialConvolution(nInputPlane, nOutputPlane, 3,3, 1,1, 1,1))
    net:add(nn.SpatialBatchNormalization(nOutputPlane,1e-3))
    net:add(nn.ReLU(true))
  end

  ConvBNReLU(3,32)
  ConvBNReLU(32,32)
  net:add(nn.SpatialMaxPooling(2,2,2,2))
  net:add(nn.Dropout(0.2))

  ConvBNReLU(32,64)
  ConvBNReLU(64,64)
  net:add(nn.SpatialMaxPooling(2,2,2,2))
  net:add(nn.Dropout(0.2))

  ConvBNReLU(64,128)
  ConvBNReLU(128,128)
  net:add(nn.SpatialMaxPooling(2,2,2,2))
  net:add(nn.Dropout(0.2))

  net:add(nn.View(128*6*6))
  net:add(nn.Dropout(0.5))
  net:add(nn.Linear(128*6*6,512))
  net:add(nn.BatchNormalization(512))
  net:add(nn.ReLU(true))
  net:add(nn.Linear(512,nbClasses))
  net:add(nn.LogSoftMax())

  return net
end

The first layer contains three input channels because we're going to pass RGB images (three channels). For grayscale images, the first layer has one input channel. I encourage you to play around and modify the network. (You'll find that the model code, models/vgg_small.lua, in the repo is different; it is designed to allow you to experiment quickly.) There are a bunch of new modules that need some elaboration. The Dropout module randomly deactivates a neuron with some probability. It is known to help generalization by preventing co-adaptation between neurons; that is, a neuron should now depend less on its peers, forcing it to learn a bit more. BatchNormalization is a very recent development. It is known to speed up convergence by normalizing the outputs of a layer to unit gaussian using the statistics of a batch.

Let's use this model and train it. In the interest of brevity, I'll use these constructs directly; the code describing them is in datasets/gtsrb.lua:

DataGen:trainGenerator(batchSize)
DataGen:valGenerator(batchSize)

These provide iterators over batches of train and test data respectively.
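Before moving on to training, here is a minimal numeric sketch of what a dropout layer such as nn.Dropout(0.2) is doing, in one common formulation known as inverted dropout (plain Python, for illustration only):

```python
import random

def dropout(activations, p, training=True):
    # Inverted dropout: during training, zero each unit with probability p and
    # scale the survivors by 1/(1-p) so the expected activation is unchanged;
    # at test time the layer is a no-op.
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1.0 - p) for a in activations]

random.seed(42)
out = dropout([1.0, 1.0, 1.0, 1.0], p=0.5)
print(out)  # each entry is either 0.0 (dropped) or 2.0 (kept and rescaled)
print(dropout([1.0, 2.0], p=0.5, training=False))  # unchanged at test time
```

Because a different random mask is drawn on every forward pass, no neuron can rely on any particular peer being active, which is the co-adaptation-breaking effect described above.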
Using optim to train the model Using a stochastic gradient descent (sgd) from the optim package to minimize a function f looks like this: optim.sgd(feval, params, optimState) Where: feval: A user-defined function that respects the API: f, df/params = feval(params) params: The current parameter vector (a 1D torch.Tensor) optimState: A table of parameters, and state variables, dependent upon the algorithm Since we are optimizing the loss of the neural network, parameters should be the weights and other parameters of the network. We get these as a flattened 1D tensor using model:getParameters. It also returns a tensor containing the gradients of these parameters. This is useful in creating the feval function above. model = createModel() criterion = nn.ClassNLLCriterion() -- criterion we are optimizing: negative log loss params, gradParams = model:getParameters() local function feval() -- criterion.output stores the latest output of criterion return criterion.output, gradParams end We need to create an optimState table and initialize it with a configuration of our optimizer like learning rate and momentum: optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } Now, an update to the model should do the following: Compute the output of the model using model:forward(). Compute the loss and the gradients at output layer using criterion:forward() and criterion:backward() respectively. Update the gradients of the model parameters using model:backward(). Update the model using optim.sgd. -- Forward pass output = model:forward(input) loss = criterion:forward(output, target) -- Backward pass critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) Note: The order above should be respected, as backward assumes forward was run just before it. Changing this order might result in gradients not being computed correctly. 
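Since feval and the optimizer communicate only through this "return the loss and the gradients" contract, the mechanics are easy to sketch outside Torch. A stripped-down Python analogue of optim.sgd's calling convention (illustration only, not the optim implementation):

```python
def sgd(feval, params, state):
    # mirrors optim.sgd's contract: feval returns (loss, grads) for params,
    # and the optimizer updates params in place using its state table
    loss, grads = feval(params)
    lr = state["learningRate"]
    for i, g in enumerate(grads):
        params[i] -= lr * g
    return params, loss

# toy quadratic problem: f(x) = sum(x^2), df/dx = 2x
def feval(x):
    return sum(v * v for v in x), [2.0 * v for v in x]

params = [1.0, -2.0]
state = {"learningRate": 0.25}
params, loss = sgd(feval, params, state)
print(params, loss)  # [0.5, -1.0] 5.0
```

The point of the closure trick in the Lua code is the same: feval does not recompute anything, it just hands the optimizer the loss and gradient tensors that forward and backward have already filled in.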
Putting it all together Let's put it all together and write a function that trains the model for an epoch. We'll create a loop that iterates over the train data in batches and updates the model. model = createModel() criterion = nn.ClassNLLCriterion() dataGen = DataGen('datasets/GTSRB/') -- Data generator params, gradParams = model:getParameters() batchSize = 32 optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } function train() -- Dropout and BN behave differently during training and testing -- So, switch to training mode model:training() local function feval() return criterion.output, gradParams end for input, target in dataGen:trainGenerator(batchSize) do -- Forward pass local output = model:forward(input) local loss = criterion:forward(output, target) -- Backward pass model:zeroGradParameters() -- clear grads from previous update local critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) end end The test function is extremely similar, except that we don't need to update the parameters: confusion = optim.ConfusionMatrix(nbClasses) -- to calculate accuracies function test() model:evaluate() -- switch to evaluate mode confusion:zero() -- clear confusion matrix for input, target in dataGen:valGenerator(batchSize) do local output = model:forward(input) confusion:batchAdd(output, target) end confusion:updateValids() local test_acc = confusion.totalValid * 100 print(('Test accuracy: %.2f'):format(test_acc)) end Now that everything is set, you can train your network and print the test accuracies: max_epoch = 20 for i = 1,20 do train() test() end An epoch takes around 30 seconds on a TitanX and gives about 97.7% accuracy after 20 epochs. This is a very basic model and honestly I haven't tried optimizing the parameters much. There are a lot of things that can be done to crank up the accuracies. Try different processing procedures. 
Experiment with the net structure. Different weight initializations, and learning rate schedules. An Ensemble of different models; for example, train multiple models and take a majority vote. You can have a look at the state of the art on this dataset here. They achieve upwards of 99.5% accuracy using a clever method to boost the geometric variation of CNNs. Conclusion We looked at how to build a basic mlp in Torch. We then moved on to building a Convolutional Neural Network and trained it to solve a real-world problem of traffic sign recognition. For a beginner, Torch/LUA might not be as easy. But once you get a hang of it, you have access to a deep learning framework which is very flexible yet fast. You will be able to easily reproduce latest research or try new stuff unlike in rigid frameworks like keras or nolearn. I encourage you to give it a fair try if you are going anywhere near deep learning. Resources Torch Cheat Sheet Awesome Torch Torch Blog Facebook's Resnet Code Oxford's ML Course Practicals Learn torch from Github repos About the author Preetham Sreenivas is a data scientist at Fractal Analytics. Prior to that, he was a software engineer at Directi.

Directory Services

Packt
29 Sep 2016
11 min read
In this article by Gregory Boyce, author of Linux Networking Cookbook, we will focus on getting you started by configuring Samba as an Active Directory compatible directory service and joining a Linux box to the domain. If you have worked in corporate environments, then you are probably familiar with a directory service such as Active Directory. What you may not realize is that Samba, originally created to be an open source implementation of Windows file sharing (SMB/CIFS), can now operate as an Active Directory compatible directory service. It can even act as a Backup Domain Controller (BDC) in an Active Directory domain. In this article, we will configure Samba to centralize authentication for your network services. We will also configure a Linux client to leverage it for authentication and set up a RADIUS server, which uses the directory server for authentication.

Configuring Samba as an Active Directory compatible directory service

As of Samba 4.0, Samba has the ability to act as a primary domain controller (PDC) in a manner that is compatible with Active Directory.

How to do it…

Installing on Ubuntu 14.04:

Configure your system with a static IP address and update /etc/hosts to point to that IP address rather than localhost.

Make sure that your time is kept up to date by installing an NTP client:

sudo apt-get install ntp

Pre-emptively disable smbd/nmbd from running automatically:

sudo bash -c 'echo "manual" > /etc/init/nmbd.override'
sudo bash -c 'echo "manual" > /etc/init/smbd.override'

Install Samba and smbclient:

sudo apt-get install samba smbclient

Remove the stock smb.conf:

sudo rm /etc/samba/smb.conf

Provision the domain:

sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ

Save the randomly generated admin password.
Symlink the AD krb5.conf to /etc:

sudo ln -sf /var/lib/samba/private/krb5.conf /etc/krb5.conf

Edit /etc/bind/named.conf.local to allow Samba to publish data:

dlz "AD DNS Zone" {
    # For BIND 9.9.0
    database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";
};

Edit /etc/bind/named.conf.options to use the Kerberos keytab within the options stanza:

tkey-gssapi-keytab "/var/lib/samba/private/dns.keytab";

Modify your zone record to allow updates from Samba:

zone "example.org" {
    type master;
    notify no;
    file "/var/lib/bind/example.org.db";
    update-policy {
        grant AD.EXAMPLE.ORG ms-self * A AAAA;
        grant Administrator@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
        grant SERVER$@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
        grant DDNS wildcard * A AAAA SRV CNAME;
    };
};

Modify /etc/apparmor.d/usr.sbin.named to allow bind9 access to a few additional resources within the /usr/sbin/named stanza:

/var/lib/samba/private/dns/** rw,
/var/lib/samba/private/named.conf r,
/var/lib/samba/private/named.conf.update r,
/var/lib/samba/private/dns.keytab rk,
/var/lib/samba/private/krb5.conf r,
/var/tmp/* rw,
/dev/urandom rw,

Reload the apparmor configuration:

sudo service apparmor restart

Restart bind9:

sudo service bind9 restart

Restart the Samba AD DC service:

sudo service samba-ad-dc restart

Installing on CentOS 7:

Unfortunately, setting up a domain controller on CentOS 7 is not possible using the default packages provided by the distribution. This is because Samba utilizes the Heimdal implementation of Kerberos, while Red Hat, CentOS, and Fedora use the MIT Kerberos 5 implementation.

How it works…

The process for provisioning Samba to act as an Active Directory compatible domain is deceptively easy given all that is happening on the backend. Let us look at some of the expectations and see how we are going to meet them, as well as what is happening behind the scenes.
Active Directory requirements

Successfully running an Active Directory forest has a number of requirements that need to be in place:

Synchronized time: AD uses Kerberos for authentication, which can be very sensitive to time skews. In our case, we are going to use ntpd, but other options including openntpd or chrony are also available.

The ability to manage DNS records: AD automatically generates a number of DNS records, including SRV records that tell clients of the domain how to locate the domain controller itself.

A static IP address: Due to a number of pieces of the AD functionality being very dependent on the specific IP address of your domain controller, it is recommended that you use a static IP address. A static DHCP lease may work as long as you are certain the IP address will not change. A rogue DHCP server on the network, for example, may cause difficulties.

Selecting a realm and domain name

The Samba team has published some very useful information regarding the proper naming of your realm and your domain, along with a link to Microsoft's best practices on the subject. It may be found at: https://wiki.samba.org/index.php/Active_Directory_Naming_FAQ. The short version is that your realm should be globally unique, while the domain only needs to be unique within the layer 2 broadcast domain of your network. Preferably, the realm should be a subdomain of a registered domain owned by you. This ensures that you can buy SSL certificates if necessary and you will not experience conflicts with outside resources. Samba-tool will default to using the first part of the realm you specified as the domain, ad from ad.example.org. The Samba group instead recommends using the second part, example in our case, as it is more likely to be locally unique. Using a subdomain of your purchased domain rather than the domain itself makes life easier when splitting internal DNS records, which are managed by your AD instance, from the more publicly accessible external names.
Using Samba-Tool

Samba-tool can work in an automated fashion with command line options, or it can operate in interactive mode. We are going to specify the options that we want to use on the command line:

sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ

The realm and domain options here specify the name for your domain as described above. Since we are going to be supporting Linux systems, we are going to want the AD schema to support RFC2307 settings, which allow for definitions for UID, GID, shell, home directory, and other settings which Unix systems will require. The pair of options specified on our command line is used for restricting what interfaces Samba will bind to. While not strictly required, it is a good practice to keep your Samba services bound to the internal interfaces. Finally, Samba wants to be able to manage your DNS in order to add systems to the zone automatically. This is handled by a variety of available DNS backends. These include:

SAMBA_INTERNAL: This is a built-in method where a Samba process acts as a DNS service. This is a good quick option for small networks.

BIND9_DLZ: This option allows you to tie your local named/bind9 instance in with your Samba server. It introduces a named plugin for bind versions 9.8.x/9.9.x to support reading host information directly out of the Samba data stores.

BIND_FLATFILE: This option is largely deprecated in favor of BIND9_DLZ, but it is still an option if you are running an older version of bind. It causes the Samba services to write out zone files periodically, which Bind may use.

Bind configuration

Now that Samba is set up to support BIND9_DLZ, we need to configure named to leverage it. There are a few pieces to this support:

tkey-gssapi-keytab: This setting in your named options section defines the Kerberos keytab file to use for DNS updates.
This allows the Samba server to communicate with the Bind server in order to let it know about zone file changes.

dlz setting: This tells bind to load the dynamic module which Samba provides in order to have it read from Samba's data files.

Zone updating: In order to be able to update the zone file, you need to switch from an allow-update definition to update-policy, which allows more complex definitions including Kerberos-based updates.

Apparmor rules changes: Ubuntu uses a Linux Security Module called Apparmor, which allows you to define the allowed actions of a particular executable. Apparmor contains rules restricting the access rights of the named process, but these existing rules do not account for integration with Samba. We need to adjust the existing rules to allow named to access some additional required resources.

Joining a Linux box to the domain

In order to participate in an AD style domain, you must have the machine joined to the domain using Administrator credentials. This will create the machine's account within the database and provide credentials to the system for querying the LDAP server.

How to do it…

Install Samba, heimdal-clients, and winbind:

sudo apt-get install samba heimdal-clients winbind

Populate /etc/samba/smb.conf:

[global]
    workgroup = EXAMPLE
    realm = ad.example.org
    security = ads
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind enum users = yes
    winbind enum groups = yes
    template homedir = /home/%U
    template shell = /bin/bash
    winbind use default domain = yes

Join the system to the domain:

sudo net ads join -U Administrator

Configure the system to use winbind for account information in /etc/nsswitch.conf:

passwd: compat winbind
group: compat winbind

How it works…

To join a Linux box to an AD domain, you need to utilize winbind, which provides a PAM interface for interacting with Windows RPC calls for user authentication. Winbind requires that you set up your smb.conf file and then join the domain before it functions.
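The idmap range in the smb.conf above deserves a quick illustration. AD identifies users by Windows SIDs, and winbind hands out Unix UIDs/GIDs for them out of the configured range on a first-come, first-served basis, remembering each mapping. A minimal sketch of that idea follows (plain Python, not Samba's actual code; real winbind persists its mappings in a tdb database, and the SIDs here are made up):

```python
# Illustrative sketch of first-come-first-served SID-to-UID allocation,
# roughly what "idmap uid = 10000-20000" asks the default backend to do.

class IdMap:
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.next_id = low
        self.by_sid = {}

    def uid_for(self, sid):
        # allocate a new UID from the range the first time a SID is seen,
        # and return the remembered UID on every later lookup
        if sid not in self.by_sid:
            if self.next_id > self.high:
                raise RuntimeError("idmap range exhausted")
            self.by_sid[sid] = self.next_id
            self.next_id += 1
        return self.by_sid[sid]

idmap = IdMap(10000, 20000)
alice = idmap.uid_for("S-1-5-21-3623811015-3361044348-30300820-1013")
bob = idmap.uid_for("S-1-5-21-3623811015-3361044348-30300820-1014")
print(alice, bob)  # 10000 10001
print(idmap.uid_for("S-1-5-21-3623811015-3361044348-30300820-1013"))  # stable: 10000
```

Note that the mapping depends on the order in which SIDs are first seen, which is why each client keeping its own local map (as in this single-client recipe) can produce different UIDs on different machines; larger deployments use a deterministic or centralized idmap backend instead.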
Nsswitch.conf controls how glibc attempts to look up particular types of information. In our case, we are modifying it to talk to winbind for user and group information. Most of the actual logic is in the smb.conf file itself, so let us take a look.

Define the AD domain we're working with, including both the workgroup/domain and the realm:

workgroup = EXAMPLE
realm = ad.example.org

Now we tell Samba to use Active Directory Services (ADS) security mode:

security = ads

AD domains use Windows Security IDs (SIDs) to provide unique user and group identifiers. In order to be compatible with Linux systems, we need to map those SIDs to UIDs and GIDs. Since we're only dealing with a single client for now, we're going to let the local Samba instance map the SIDs to UIDs and GIDs from a range which we provide:

idmap uid = 10000-20000
idmap gid = 10000-20000

Some Unix utilities such as finger depend on the ability to loop through all of the user/group instances. On a large AD domain, this can be far too many entries, so winbind suppresses this capability by default. For now, we're going to want to enable it:

winbind enum users = yes
winbind enum groups = yes

Unless you go through specific steps to populate your AD domain with per-user home directory and shell information, Winbind will use templates for home directories and shells. We'll want to define these templates in order to avoid the defaults of /home/%D/%U (/home/EXAMPLE/user) and /bin/false:

template homedir = /home/%U
template shell = /bin/bash

The default winbind configuration takes users in the form of username@example.org rather than the more Unix-style plain username. Let's override that setting:

winbind use default domain = yes

Joining a Windows box to the domain

While not a Linux configuration topic, the most common use for an Active Directory domain is to manage a network of Windows systems.
While the overarching topic of managing Windows via an AD domain is too large and out of scope for this article, let's look at how we can join a Windows system to our new domain.

How to do it…

Click on Start and go to Settings.
Click on System.
Select About.
Select Join a Domain.
Type in the name of your domain; ad.example.org in our case.
Enter your administrator credentials for the domain.
Select a user who will own the system.

How it works…

When you tell your Windows system to join an AD domain, it first attempts to find the domain by looking up a series of SRV records for the domain, including _ldap._tcp.dc._msdcs.ad.example.org, in order to determine which hosts to connect to within the domain for authentication purposes. From there a connection is established.

Resources for Article:

Further resources on this subject:
- OpenStack Networking in a Nutshell [article]
- Zabbix Configuration [article]
- Supporting hypervisors by OpenNebula [article]

How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not ship with a configuration or setting for handling themes by itself. This two-part post shows one way you can set up theming for your Sails application, making use of some of Sails' capabilities.

You may have an application that needs to handle theming for different reasons, such as custom branding, licensing, or dynamic theme configuration. You can adjust the theming of your application based on external factors, like patterns in the domain of the site you are browsing. Imagine you have an application that handles deliveries and that you customize per client. Your app renders the default theme when browsed as http://www.smartdelivery.com, but when a customer, let's say "Burrito", logs in, the domain changes to http://burrito.smartdelivery.com and the theme changes with it.

In this series we use Less as our language to define our CSS. Sails handles Less right out of the box; the default Less file is located in /assets/styles/importer.less. We will also use Bootstrap as our base CSS framework, importing its Less file into our importer.less file. The technique shown here consists of having a base CSS and a theme CSS that varies according to the host name.

Step 1 - Adding Bootstrap to Sails

We use Bower to add Bootstrap to our project. First, install it by issuing the following command:

```shell
npm install bower --save-dev
```

Then, initialize the Bower configuration file:

```shell
node_modules/bower/bin/bower init
```

This command allows us to configure our bower.json file. Answer the questions asked by Bower:

```
? name sails-themed-application
? description Sails Themed Application
? main file app.js
? keywords
? authors lobo
? license MIT
? homepage
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being
  accidentally published to the registry? No

{
  name: 'sails-themed-application',
  description: 'Sails Themed Application',
  main: 'app.js',
  authors: [
    'lobo'
  ],
  license: 'MIT',
  homepage: '',
  ignore: [
    '**/.*',
    'node_modules',
    'bower_components',
    'assets/vendor',
    'test',
    'tests'
  ]
}
```

This generates a bower.json file in the root of your project. Now we need to tell Bower to install everything into a specific directory. Create a file named .bowerrc and put this configuration into it:

```json
{"directory" : "assets/vendor"}
```

Finally, install Bootstrap:

```shell
node_modules/bower/bin/bower install bootstrap --save --production
```

This action creates a folder in assets named vendor, with bootstrap inside of it. Since Bootstrap uses jQuery, you also get a jquery folder:

```
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   ├── templates
│   ├── themes
│   └── vendor
│       ├── bootstrap
│       │   ├── dist
│       │   │   ├── css
│       │   │   ├── fonts
│       │   │   └── js
│       │   ├── fonts
│       │   ├── grunt
│       │   ├── js
│       │   ├── less
│       │   │   └── mixins
│       │   └── nuget
│       └── jquery
│           ├── dist
│           ├── external
│           │   └── sizzle
│           │       └── dist
│           └── src
│               ├── ajax
│               │   └── var
│               ├── attributes
│               ├── core
│               │   └── var
│               ├── css
│               │   └── var
│               ├── data
│               │   └── var
│               ├── effects
│               ├── event
│               ├── exports
│               ├── manipulation
│               │   └── var
│               ├── queue
│               ├── traversing
│               │   └── var
│               └── var
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views
```

We now need to add Bootstrap to our importer. Edit /assets/styles/importer.less and add this instruction at the end of it:

```less
@import "../vendor/bootstrap/less/bootstrap.less";
```

Now you need to tell Sails where to import the Bootstrap and jQuery JavaScript files from.
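The host-name-based theme selection described above can be sketched as a small helper. This is a minimal sketch, not a Sails API: the function name `themeFromHost` is hypothetical, and the fallback rules (bare domain or `www` means the default theme) are an assumption for illustration.

```javascript
// Hypothetical helper (not part of Sails): derive a theme name from the
// request host. "burrito.smartdelivery.com" -> "burrito"; the bare domain
// and the "www" subdomain fall back to the default theme.
function themeFromHost(host) {
  const parts = String(host).toLowerCase().split('.');
  // Fewer than three labels means there is no customer subdomain.
  if (parts.length < 3 || parts[0] === 'www') {
    return 'default';
  }
  return parts[0];
}
```

In a Sails controller or policy you could call something like this with the request's host and hand the result to the view, which is the mechanism the theme `<link>` tag shown later relies on.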
Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file:

```javascript
// Load sails.io before everything else
'js/dependencies/sails.io.js',
// <ADD THESE LINES>
// JQuery JS
'vendor/jquery/dist/jquery.min.js',
// Bootstrap JS
'vendor/bootstrap/dist/js/bootstrap.min.js',
// </ADD THESE LINES>
```

Now you have to edit your views layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line after your <!--STYLES END--> tag:

```html
<link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">
```

This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs I changed (the text under the headings is random text):

/views/layout.ejs:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- The above 3 meta tags *must* come first in the head; any other head
       content must come *after* these tags -->
  <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title>
  <!--STYLES-->
  <link rel="stylesheet" href="/styles/importer.css">
  <!--STYLES END-->
  <!-- THIS IS WHERE THE THEME CSS IS LOADED -->
  <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">
</head>
<body>
  <%- body %>
  <!--TEMPLATES-->
  <!--TEMPLATES END-->
  <!--SCRIPTS-->
  <script src="/js/dependencies/sails.io.js"></script>
  <script src="/vendor/jquery/dist/jquery.min.js"></script>
  <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script>
  <!--SCRIPTS END-->
</body>
</html>
```

Notice the lines after the <!--STYLES END--> tag.
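The EJS ternary in the theme `<link>` tag above can be mirrored as a plain function to make the fallback behavior explicit. The helper name `themeStylesheetHref` is purely illustrative, not part of Sails or EJS:

```javascript
// Mirrors the EJS expression
//   /themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css
// as a plain function: an undefined theme falls back to "default".
function themeStylesheetHref(theme) {
  return '/themes/' + (typeof theme === 'undefined' ? 'default' : theme) + '.css';
}
```

This is why views that never receive a `theme` local still render correctly: they simply load /themes/default.css.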
/views/homepage.ejs:

```html
<nav class="navbar navbar-inverse navbar-fixed-top">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse"
              data-target="#navbar" aria-expanded="false" aria-controls="navbar">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#">Project name</a>
    </div>
    <div id="navbar" class="navbar-collapse collapse">
      <form class="navbar-form navbar-right">
        <div class="form-group">
          <input type="text" placeholder="Email" class="form-control">
        </div>
        <div class="form-group">
          <input type="password" placeholder="Password" class="form-control">
        </div>
        <button type="submit" class="btn btn-success">Sign in</button>
      </form>
    </div><!--/.navbar-collapse -->
  </div>
</nav>

<!-- Main jumbotron for a primary marketing message or call to action -->
<div class="jumbotron">
  <div class="container">
    <h1>Hello, world!</h1>
    <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p>
    <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p>
  </div>
</div>

<div class="container">
  <!-- Example row of columns -->
  <div class="row">
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
  </div>
  <hr>
  <footer>
    <p>&copy; 2015 Company, Inc.</p>
  </footer>
</div> <!-- /container -->
```

You can now lift Sails and see your Bootstrapped Sails application. Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme's CSS and the necessary Less files, and we will set up the theme Sails hook to complete our application.

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, a mentor and advisor, independent software engineering consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products, solutions, frameworks, and platforms for several kinds of industries. In recent years he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.
QT Style Sheets

Packt
29 Sep 2016
26 min read
In this article by Lee Zhi Eng, author of the book Qt5 C++ GUI Programming Cookbook, we will see how Qt allows us to easily design our program's user interface through a method most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets. (For more resources related to this topic, see here.)

In this article, we will cover the following recipes:

- Using style sheets with Qt Designer
- Basic style sheet customization
- Creating a login screen using style sheets

Using style sheets with Qt Designer

In this example, we will learn how to change the look and feel of our program and make it look more professional by using style sheets and resources. Qt allows you to decorate your GUIs (Graphical User Interfaces) using a style sheet language called Qt Style Sheets, which is very similar to CSS (Cascading Style Sheets), used by web designers to decorate their websites.

How to do it…

The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or New Project. Then, select Application under the Project window and select Qt Widgets Application. After that, click the Choose button at the bottom. A window will then pop out and ask you to insert the project name and its location. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now.
Once the project is created, the first thing you will see is the panel with tons of big icons on the left side of the window, which is called the Mode Selector panel; we will discuss this more in the How it works section. You will also see all your source files listed on the Side Bar panel, located right next to the Mode Selector panel. This is where you can select which file you want to edit, which, in this case, is mainwindow.ui, because we are about to start designing the program's UI!

Double-click mainwindow.ui and you will see an entirely different interface appear. Qt Creator has switched you from the script editor to the UI editor (Qt Designer) because it detected the .ui extension on the file you're trying to open. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or to any other tool by clicking one of the buttons located in the upper half of the Mode Selector panel.

Let's go back to Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the file name implies) and it's empty by default, without any widget on it. You can try to compile and run the program by pressing the Run button (the green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window pop out once the compilation is complete.

Now, let's add a push button to our program's UI by clicking on the Push Button item in the widget box (under the Buttons category) and dragging it to your main window in the form editor. Then, keep the push button selected and you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet.
This is where you apply styles to your widget, which may or may not be inherited by its children or grandchildren recursively, depending on how you set your style sheet. Alternatively, you can right-click on any widget in your UI in the form editor and select Change Style Sheet from the pop-up menu. You can click on the input field of the styleSheet property to write the style sheet code directly, or click on the … button beside the input field to open up the Edit Style Sheet window, which has more space for writing longer style sheet code. At the top of the window you can find several buttons, such as Add Resource, Add Gradient, Add Color, and Add Font, that can help you to kick-start your coding if you don't remember the properties' names.

Let's try some simple styling with the Edit Style Sheet window. Click Add Color and choose color. Pick a random color from the color picker window, let's say a pure red, then click Ok. A line of code is added to the text field on the Edit Style Sheet window, which in my case is as follows:

```css
color: rgb(255, 0, 0);
```

Click the Ok button and you will see that the text on your push button has changed to red.

How it works

Let's take a bit of time to get ourselves familiar with Qt Designer's interface before we start learning how to design our own UI:

- Menu bar: The menu bar houses application-specific menus which provide easy access to essential functions such as creating new projects, saving files, undo, redo, copy, paste, and so on. It also allows you to access the development tools that come with Qt Creator, such as the compiler, debugger, profiler, and so on.
- Widget box: This is where you can find all the different types of widgets provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets in the widget box and dragging it to the form editor.
- Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools. You can quickly switch between the script editor and the form editor by clicking the Edit or Design buttons on the mode selector panel, which is very useful for multitasking. You can also navigate to the debugger and profiler tools in the same manner.
- Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here.
- Form editor: The form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor.
- Form toolbar: From here, you can quickly select a different form to edit: click the drop-down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop-down box are buttons for switching between different modes for the form editor, as well as buttons for changing the layout of your UI.
- Object inspector: The object inspector lists all the widgets within your current .ui file. The widgets are arranged according to their parent-child relationships in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor.
- Property editor: The property editor displays all the properties of the widget you selected, either from the object inspector window or from the form editor window.
- Action Editor and Signals & Slots Editor: This window contains two editors, the Action Editor and the Signals & Slots Editor, which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI.
- Output panes: The output panes consist of several different windows that display information and output messages related to script compilation and debugging. You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on.

There's more…

In the previous section, we discussed how to apply style sheets to Qt widgets through C++ coding. Although that method works really well, most of the time the person in charge of designing the program's UI is not the programmer himself, but a UI designer who specializes in designing user-friendly UIs. In this case, it's better to let the UI designer design the program's layout and style sheet with a different tool and not mess around with the code. Qt provides an all-in-one editor called Qt Creator. Qt Creator consists of several different tools, such as a script editor, compiler, debugger, profiler, and UI editor. The UI editor, also called Qt Designer, is the perfect tool for designers to design their program's UI without writing any code. This is because Qt Designer adopts the What-You-See-Is-What-You-Get approach, providing an accurate visual representation of the final result: whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run.

The similarities between Qt Style Sheets and CSS are as follows:

CSS:

```css
h1 { color: red; background-color: white; }
```

Qt Style Sheets:

```css
QLineEdit { color: red; background-color: white; }
```

As you can see, both contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon. In Qt, a style sheet can be applied to a single widget by calling the QWidget::setStyleSheet() function in C++ code. For example:

```cpp
myPushButton->setStyleSheet("color : blue");
```

The preceding code turns the text of a button with the variable name myPushButton blue. You can achieve the same result by writing the declaration in the styleSheet property field in Qt Designer. We will discuss Qt Designer more in the next section.
Qt Style Sheets also supports all the different types of selectors defined in the CSS2 standard, including the universal selector, type selector, class selector, ID selector, and so on, which allows us to apply styling to a very specific individual widget or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can use an ID selector to refer to it:

```css
QLineEdit#usernameEdit { background-color: blue }
```

To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: http://www.w3.org/TR/REC-CSS2/selector.html.

Basic style sheet customization

In the previous example, you learned how to apply a style sheet to a widget with Qt Designer. Let's go crazy and push things further by creating a few other types of widgets and changing their style properties to something bizarre for the sake of learning. This time, however, we will not apply the style to every single widget one by one; instead, we will learn to apply the style sheet to the main window and let it be inherited down the hierarchy by all the other widgets, so that the style sheet is easier to manage and maintain in the long run.

How to do it…

First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button beside the styleSheet property. This button reverts the property to the default value, which in this case is an empty style sheet. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, combo box, horizontal slider, radio button, and a check box. For the sake of simplicity, delete the menu bar, main toolbar, and status bar from your UI by selecting them in the object inspector, right-clicking, and choosing Remove.
Now your UI should look something similar to this:

Select the main window either from the form editor or the object inspector, then right-click and choose Change Stylesheet to open the Edit Style Sheet window. Add the following style sheet:

```css
border: 2px solid gray;
border-radius: 10px;
padding: 0 8px;
background: yellow;
```

What you will see is a completely bizarre-looking UI, with everything covered in yellow with a thick border. This is because the preceding style sheet does not have any selector, which means the style applies to the child widgets of the main window all the way down the hierarchy. To change that, let's try something different:

```css
QPushButton {
    border: 2px solid gray;
    border-radius: 10px;
    padding: 0 8px;
    background: yellow;
}
```

This time, only the push button gets the style described in the preceding code, and all the other widgets return to the default styling. You can try adding a few more push buttons to your UI and they will all look the same:

This happens because we specifically tell the selector to apply the style to all widgets of the class QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so:

```css
QPushButton#pushButton_3 {
    border: 2px solid gray;
    border-radius: 10px;
    padding: 0 8px;
    background: yellow;
}
```

Once you understand this method, we can add the following code to the style sheet:

```css
QPushButton {
    color: red;
    border: 0px;
    padding: 0 8px;
    background: white;
}

QPushButton#pushButton_2 {
    border: 1px solid red;
    border-radius: 10px;
}

QPushButton#pushButton_3 {
    border: 2px solid gray;
    border-radius: 10px;
    padding: 0 8px;
    background: yellow;
}
```

What this does is change the style of all the push buttons, as well as some properties of a specific button named pushButton_2. We keep the style sheet of pushButton_3 as it is. Now the buttons will look like this:

The first set of style sheets changes all widgets of the QPushButton type to white rectangular buttons with no border and red text.
The second set of style sheets changes only the border of a specific QPushButton widget with the name pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively, because we didn't override them in the second set of style sheets; it falls back to the style described in the first set, since that applies to all QPushButton type widgets. Notice also that the text of the third button has changed to red, because we didn't describe the color property in the third set of style sheets.

After that, create another set of styles using the universal selector, like so:

```css
* {
    background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4,
                                radius: 1.35, stop: 0 #fff, stop: 1 #888);
    color: rgb(255, 255, 255);
    border: 1px solid #ffffff;
}
```

The universal selector affects all the widgets regardless of their type. Therefore, the preceding style sheet applies a nice gradient color to all the widgets' backgrounds, as well as setting their text to white and giving them a one-pixel solid white outline. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or a hex code (#ffffff) to describe the color value. As before, the preceding style sheet will not affect the push buttons, because we have already given them their own styles, which override the general style described in the universal selector. Just remember that in Qt, the more specific style is ultimately used when more than one style has influence on a widget. This is how the UI will look now:
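The selector types and the specificity rule used in this section can be summarized in a single hedged sketch. The widget name okButton is hypothetical, and the class and descendant selector forms shown here follow the CSS2 subset that Qt Style Sheets documents:

```css
/* Type selector: every QPushButton, including subclasses */
QPushButton { background: white; }

/* Class selector: instances of QPushButton itself, but not its subclasses */
.QPushButton { color: black; }

/* Descendant selector: only push buttons placed inside a QDialog */
QDialog QPushButton { padding: 0 8px; }

/* ID selector: the single widget whose objectName is okButton; being the
   most specific rule, its background wins over the type selector above */
QPushButton#okButton { background: yellow; }
```

Reading the rules from least to most specific mirrors how Qt resolves conflicts: the ID selector overrides the type selector, which in turn overrides the universal selector.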
If you specify the name of the widget in the style sheet, it will change the style of the particular push button widget with the name you provide. All the other widgets will not be affected and will keep the default style. To change the name of a widget, select the widget either in the form editor or the object inspector and change the property called objectName in the property window. If you have previously used the ID selector to change the style of the widget, changing its object name will break the style sheet and lose the style. To fix this problem, simply change the object name in the style sheet as well.

Creating a login screen using style sheets

Next, we will learn how to put all the knowledge from the previous example together and create a fake graphical login screen for an imaginary operating system. Style sheets are not the only thing you need to master in order to design a good UI. You will also need to learn how to arrange the widgets neatly using the layout system in Qt Designer.

How to do it…

The first thing we need to do is design the layout of the graphical login screen before we start doing anything. Planning is very important in order to produce good software. The following is a sample layout design I made to show you how I imagine the login screen will look. Just a simple line drawing like this is sufficient, as long as it conveys the message clearly:

Now that we know exactly how the login screen should look, let's go back to Qt Designer. We will place the widgets in the top panel first, then the logo and the login form below it. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively, because we'll need a bigger space in which to place all the widgets in a moment. Click and drag a label under the Display Widgets category from the widget box to the form editor.
Change the objectName property of the label to currentDateTime and change its Text property to the current date and time just for display purposes, such as Monday, 25-10-2015 3:14 PM.

Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time, because we have two buttons on the top panel. Rename the two buttons restartButton and shutdownButton respectively.

Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse over it. Now you will see the widgets being automatically arranged on the main window, but it's not exactly what we want yet. Click and drag a horizontal layout widget under the Layouts category to the main window. Click and drag the two push buttons and the text label into the horizontal layout. You will now see the three widgets arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off.

Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we just created (below the red rectangular outline). Now you will see all the widgets being pushed to the top by the spacer. Next, place a horizontal spacer between the text label and the two buttons to keep them apart. This makes the text label always stick to the left and the buttons align to the right.

Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty, as we will be using icons instead of text.
We will learn how to place an icon in the button widgets in the following section. Now your UI should look similar to this:

Next, we will add the logo, using the following steps:

Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. After adding the horizontal layout, you will find that the layout is way too thin in height to be able to add any widget to it. This is because the layout is empty and it's being pushed into zero height by the vertical spacer below it. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout.

Next, add a label to the horizontal layout that you just created and rename it logo. We will learn more about how to insert an image into the label to use it as a logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 150x150. Set the vertical margin of the layout back to zero if you haven't done so. The logo now looks invisible, so we will just place a temporary style sheet to make it visible until we add an image to it in the next section. The style sheet is really simple:

```css
border: 1px solid;
```

Now your UI should look something similar to this:

Now let's create the login form, using the following steps:

Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is, 100) so that you can add a widget to it more easily. After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it.
Next, right-click the vertical layout you just created and choose Morph into -> QWidget. The vertical layout is now converted into an empty widget. This step is essential because we will be adjusting the width and height of the container for the login form. A layout widget does not have any properties for width and height, only margins, due to the fact that a layout expands toward the empty space surrounding it, which does make sense, considering that it does not have any size properties. After you have converted the layout to a QWidget object, it automatically inherits all the properties from the widget class, and so we are now able to adjust its size to suit our needs.

Rename the QWidget object we just converted from the layout to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 350x200. Since we already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero.

Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it only applies the style to loginForm and not its children widgets:

```css
#loginForm { border: 1px solid; }
```

Now your UI should look something like this:

We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form:

Place two horizontal layouts into the login form container. We need two layouts: one for the username field and another for the password field. Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the one below to Password:. Then, rename the two line edits username and password respectively. Add a push button below the password layout and change its text property to Login. After that, rename it loginButton.
You can add a vertical spacer between the password layout and the login button to distance them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5. Now, select the loginForm container and set all its margins to 35. This makes the login form look better by adding some space on all its sides. You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped.

Now your UI should look something like this:

We're not done yet! As you can see, the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of at the top. To fix this problem, use the following steps:

Add another vertical spacer between the top panel and the logo's layout. This counters the spacer at the bottom and balances out the alignment. If you think the logo is sticking too close to the login form, you can also add a vertical spacer between the logo's layout and the login form's layout. Set its sizeType property to Fixed and the Height property to 10.

Right-click the top panel's layout and choose Morph into -> QWidget. Then, rename it topPanel. The reason the layout has to be converted into a QWidget is that we cannot apply style sheets to a layout, as it doesn't have any properties other than margins.

Currently you can see there is a little bit of margin around the edges of the main window, which we don't want. To remove the margins, select the centralWidget object from the object inspector window, which is right under the MainWindow panel, and set all the margin values to zero.

At this point, you can run the project by clicking the Run button (with the green arrow icon) to see what your program looks like now.
If everything went well, you should see something like this:

After we've done the layout, it's time for us to add some fanciness to the UI using style sheets! Since all the important widgets have been given an object name, it's easier for us to apply the style sheets from the main window: we will only write the style sheets on the main window and let them be inherited down the hierarchy tree. Right-click on MainWindow in the object inspector window and choose Change Stylesheet. Add the following code to the style sheet:

```css
#centralWidget { background: rgba(32, 80, 96, 100); }
```

Now you will see the background of the main window change color. We will learn how to use an image for the background in the next section, so the color is just temporary. In Qt, if you want to apply styles to the main window itself, you must apply them to its central widget instead of the main window itself, because the window is just a container.

Then, we will add a nice gradient color to the top panel:

```css
#topPanel {
    background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0,
                                      stop:0 rgba(91, 204, 233, 100),
                                      stop:1 rgba(32, 80, 96, 100));
}
```

After that, we will apply a black, semi-transparent look to the login form, and also make the corners of the login form container slightly rounded by setting the border-radius property:

```css
#loginForm {
    background: rgba(0, 0, 0, 80);
    border-radius: 8px;
}
```

After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead:

```css
QLabel { color: white; }

QLineEdit { border-radius: 3px; }
```

The style sheets above change all the labels' text to white, which includes the text on the widgets as well because, internally, Qt uses the same type of label on widgets that have text on them. We also made the corners of the line edit widgets slightly rounded.
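These general type rules can be refined further with pseudo-states. The following is a hedged sketch (the colors are arbitrary choices, not from this recipe); the :focus and :disabled pseudo-states are part of Qt Style Sheet syntax:

```css
/* Highlight the line edit the user is currently typing in */
QLineEdit:focus    { border: 1px solid white; }

/* Gray out line edits that are not enabled */
QLineEdit:disabled { background: gray; }
```

Because the more specific rule wins, these pseudo-state declarations override the plain QLineEdit rule only while the state applies.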
Next, we will apply style sheets to all the push buttons on our UI:

QPushButton {
    color: white;
    background-color: #27a9e3;
    border-width: 0px;
    border-radius: 3px;
}

The preceding style sheet changes the text of all the buttons to white, sets their background color to blue, and makes their corners slightly rounded as well. To push things even further, we will change the color of a push button when the mouse hovers over it, using the hover keyword:

QPushButton:hover {
    background-color: #66c011;
}

The preceding style sheet changes the background color of a push button to green when the mouse hovers over it. We will talk more about this in the following section.

You can further adjust the sizes and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet we applied directly to it earlier. Now your login screen should look something like this:

How it works

This example focuses on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. The spacer items used in the preceding example push the widgets contained in a layout outward to create spacing along the width of the spacer item. To place a widget in the middle of a layout, put two spacer items into the layout, one on the left side of the widget and another on the right side. The two spacers will then push the widget to the middle of the layout.

Summary

In this article we saw how Qt allows us to easily design our program's user interface through a method most people are familiar with. We also covered the Qt Designer tool, which enables us to design our user interface without writing a single line of code. Finally, we saw how to create a login screen.
For more information on Qt5 and C++ you can check out these other books by Packt:

- Qt 5 Blueprints: https://www.packtpub.com/application-development/qt-5-blueprints
- Boost C++ Application Development Cookbook: https://www.packtpub.com/application-development/boost-c-application-development-cookbook
- Learning Boost C++ Libraries: https://www.packtpub.com/application-development/learning-boost-c-libraries

Further resources on this subject:

- OpenCart Themes: Styling Effects of jQuery Plugins [article]
- Responsive Web Design [article]
- Gearing Up for Bootstrap 4 [article]