
How-To Tutorials


Debugging Xamarin Application on Visual Studio [Tutorial]

Gebin George
11 Jul 2018
6 min read
Visual Studio is a great IDE for debugging any application, whether it's a web, mobile, or desktop application. It uses the same debugger for all three, and it is very easy to follow. In this tutorial, we will learn how to debug a mobile application using Visual Studio. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Using the output window

The output window in Visual Studio is where you can see diagnostic output from the IDE and from your running application. To view it, follow these steps:

1. Go to View and click Output.
2. This opens a small window at the bottom where you can see the current output being written by Visual Studio. For example, this is what is shown in the output window when we rebuild the application.

Using the Console class to show useful output

The Console class can be used to print useful information, such as logs, to the output window to get an idea of which steps are being executed. This can help if a method is failing after certain steps, as those steps will be printed in the output window. To achieve this, C# provides the static Console class, which has methods such as Write() and WriteLine() to write anything to the output window. Write() writes text as-is, while WriteLine() appends a new line at the end.

Look at the following screenshot and analyze how Console.WriteLine() is used to break down the method into several steps (it is the same Click event method that was written while developing PhoneCallApp):

1. Add Console.WriteLine() to your code, as shown in the preceding screenshot.
2. Run the application, perform the operation, and see the output written as per your code.

This way, Console.WriteLine() can be used to write useful step-based output/logs to the output window, which can be analyzed to identify issues while debugging.

Using breakpoints

As described earlier, breakpoints are a great way to dig deep into the code without much hassle. They let you inspect variables and their values, and follow the flow of execution at a given point or line in the code. Using breakpoints is very simple:

1. The simplest way to add a breakpoint on a line is to click on the margin on the left side, in front of the line, or to click on the line and hit the F9 key. You'll see a red dot in the margin area where the breakpoint is set.
2. Now, run the application and tap the call button; the flow should stop at the breakpoint and the line will turn yellow when it does.
3. At this point, you can inspect the values of variables before the breakpoint line by hovering over them.

Setting a conditional breakpoint

You can also set a conditional breakpoint, which tells Visual Studio to pause the flow only when a certain condition is met:

1. Right-click on the breakpoint set in the previous steps, and click Conditions.
2. This opens a small window over the code where you can set a condition for the breakpoint. For example, in the following screenshot, a condition is set to phoneNumber == "9900000700". The breakpoint will only be hit when this condition is met; otherwise, it will not be hit.
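The book's screenshots aren't reproduced in this excerpt, so as a rough sketch of the step-based logging described above, consider the following. The class, method, and helper names (PhoneCallPage, OnCallButtonClicked, PlaceCall) are illustrative assumptions, not the book's exact code:

```csharp
using System;

// A minimal sketch of the kind of step logging described above.
// Names are illustrative (hypothetical), not the book's exact code.
public class PhoneCallPage
{
    public void OnCallButtonClicked(string phoneNumber)
    {
        Console.WriteLine("Step 1: Call button clicked");
        Console.WriteLine($"Step 2: Phone number entered is '{phoneNumber}'");

        if (string.IsNullOrWhiteSpace(phoneNumber))
        {
            Console.WriteLine("Step 3: Validation failed, nothing to dial");
            return;
        }

        Console.WriteLine("Step 3: Validation passed, placing call");
        PlaceCall(phoneNumber);
        Console.WriteLine("Step 4: PlaceCall returned without errors");
    }

    // Stand-in for the platform call API used in the book's app
    private void PlaceCall(string phoneNumber) =>
        Console.WriteLine($"Dialing {phoneNumber}...");
}
```

The last message printed in the output window tells you which step to inspect next; dropping a breakpoint (or a conditional one, as above) on the line right after it narrows the problem down quickly.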
Stepping through the code

When a breakpoint has been reached, the debug tools give you control over the program's execution flow. You'll see some buttons in the toolbar that allow you to run and step through the code (hover over them to see their names):

Step Over (F10): Executes the next line of code. If the next line is a function call, Step Over executes the whole function and stops after it.
Step Into (F11): If the next line is a function call, Step Into stops at the first line of that function, allowing you to continue line-by-line debugging inside it. If the next line is not a function call, it behaves the same as Step Over.
Step Out (Shift + F11): Returns to the line where the current function was called.
Continue: Continues execution and runs until the next breakpoint is reached.
Stop Debugging: Stops the debugging process.

Using a watch

A watch is a very useful debugging feature; it lets you see the values, types, and other details related to variables, and evaluate them more conveniently than by hovering over them. There are two types of watch tools available in Visual Studio:

QuickWatch

QuickWatch is similar to a watch but, as the name suggests, it lets you evaluate a variable's value at that moment. Follow these steps to use QuickWatch in Visual Studio:

1. Right-click on the variable you want to analyze and click QuickWatch.
2. This opens a new window where you can see the type, value, and other details of the variable.

This is very useful when a variable has a long value or string that cannot be read and evaluated properly just by hovering over it.

Adding a watch

Adding a watch is similar to QuickWatch, but it is more useful when you have multiple variables to analyze, where inspecting each one individually would take a lot of time. Follow these steps to add a watch on variables:

1. Right-click on the variable and click Add Watch.
2. This adds the variable to the Watch window, which always shows its value and reflects any change at runtime.

You can also view these values in a format suited to the data type, so an XML value can be shown in XML format and a JSON object value in JSON format. It is a lifesaver when you want to evaluate a variable's value at each step of the code and see how it changes with every line.

To summarize, we learned how to debug a Xamarin application using Visual Studio. If you found this post useful, do check out the book Mobile DevOps to continuously improve your mobile application development process.

Create machine learning pipelines using unsupervised AutoML [Tutorial]

Sunith Shetty
07 Aug 2018
11 min read
AutoML uses unsupervised algorithms for performing an automated process of algorithm selection, hyperparameter tuning, iterative modeling, and model assessment. When your dataset doesn't have a target variable, you can use clustering algorithms to explore it based on different characteristics. These algorithms group examples together so that each group contains examples as similar as possible to each other, but dissimilar to examples in other groups.

Since you mostly don't have labels when you are performing such analysis, there is a performance metric that you can use to examine the quality of the separation found by the algorithm: the Silhouette Coefficient. The Silhouette Coefficient helps you to understand two things:

Cohesion: similarity within clusters
Separation: dissimilarity among clusters

It gives you a value between -1 and 1, with values close to 1 indicating well-formed clusters. Clustering algorithms are used to tackle many different tasks, such as finding similar users, songs, or images, detecting key trends and changes in patterns, and understanding community structures in social networks.

This tutorial deals with using unsupervised machine learning algorithms for creating machine learning pipelines. The code files for this article are available on GitHub. This article is an excerpt from the book Hands-On Automated Machine Learning, written by Sibanjan Das and Umit Mert Cakmak.

Commonly used clustering algorithms

There are two types of commonly used clustering algorithms: distance-based and probabilistic models. For example, k-means and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) are distance-based algorithms, whereas the Gaussian mixture model is probabilistic. Distance-based algorithms may use a variety of distance measures, with Euclidean distance being the most common. Probabilistic algorithms assume that there is a generative process with a mixture of probability distributions with unknown parameters, and the goal is to estimate these parameters from the data.

Since there are many clustering algorithms, picking the right one depends on the characteristics of your data. For example, k-means works with the centroids of clusters, which requires the clusters in your data to be evenly sized and convexly shaped; it will not work well on elongated clusters or irregularly shaped manifolds. When the clusters in your data are not evenly sized or convexly shaped, you may want to use DBSCAN, which can cluster areas of any shape.

Knowing a thing or two about your data will bring you closer to finding the right algorithm, but what if you don't know much about your data? Many times, when you are performing exploratory analysis, it can be hard to get your head around what's happening. If you find yourself in this kind of situation, an automated unsupervised ML pipeline can help you to understand the characteristics of your data better. Be careful when you perform this kind of analysis, though: the actions you take later will be driven by the results you see, and this could quickly send you down the wrong path if you are not cautious.
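Since the Silhouette Coefficient drives much of what follows, here is a quick standalone sketch (not from the book) of computing it with scikit-learn's silhouette_score:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs  # modern import path
from sklearn.metrics import silhouette_score

# Two toy datasets: one with well-separated blobs, one with heavy overlap
X_tight, _ = make_blobs(n_samples=500, centers=3, cluster_std=0.3, random_state=0)
X_loose, _ = make_blobs(n_samples=500, centers=3, cluster_std=3.0, random_state=0)

for name, X in [("tight", X_tight), ("loose", X_loose)]:
    labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)
    # silhouette_score averages per-sample cohesion vs. separation;
    # closer to 1 means better-formed clusters
    print(name, silhouette_score(X, labels))
```

The tight blobs score close to 1, while the overlapping ones score much lower; this is the same signal the pipeline below prints for each estimator.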
Creating sample datasets with sklearn

In sklearn, there are some useful ways to create sample datasets for testing algorithms:

```python
# Importing necessary libraries for visualization
import matplotlib.pyplot as plt
import seaborn as sns

# Set context helps you to adjust things like label size, lines and various elements
# Try "notebook", "talk" or "paper" instead of "poster" to see how it changes
sns.set_context('poster')

# set_color_codes will affect how colors such as 'r', 'b', 'g' will be interpreted
sns.set_color_codes()

# Plot keyword arguments will allow you to set things like size or line width to be used in charts
plot_kwargs = {'s': 10, 'linewidths': 0.1}

import numpy as np
import pandas as pd

# Pprint will better output your variables in console for readability
from pprint import pprint

# Creating sample dataset using sklearn samples_generator
# (in recent scikit-learn versions, import make_blobs from sklearn.datasets instead)
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler

# make_blobs will generate isotropic Gaussian blobs
# You can play with arguments like center of blobs, cluster standard deviation
centers = [[2, 1], [-1.5, -1], [1, -1], [-2, 2]]
cluster_std = [0.1, 0.1, 0.1, 0.1]

# Sample data will help you to see your algorithms behavior
X, y = make_blobs(n_samples=1000, centers=centers, cluster_std=cluster_std, random_state=53)

# Plot generated sample data
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()
```

The preceding code produces a scatter plot of four tight blobs. cluster_std affects the amount of dispersion; change it to [0.4, 0.5, 0.6, 0.5] and try again:

```python
cluster_std = [0.4, 0.5, 0.6, 0.5]
X, y = make_blobs(n_samples=1000, centers=centers, cluster_std=cluster_std, random_state=53)
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()
```

Now the plot looks more realistic! Let's write a small class with helpful methods to create unsupervised experiments. First, you will use the fit_predict method to apply one or more clustering algorithms on the sample dataset:

```python
class Unsupervised_AutoML:

    def __init__(self, estimators=None, transformers=None):
        self.estimators = estimators
        self.transformers = transformers
```

The Unsupervised_AutoML class initializes with a set of estimators and transformers.
The second class method is fit_predict:

```python
def fit_predict(self, X, y=None):
    """
    fit_predict will train given estimator(s) and predict cluster membership for each sample
    """
    # This list will hold predictions for each estimator
    predictions = []
    performance_metrics = {}

    for estimator in self.estimators:
        labels = estimator['estimator'](*estimator['args'], **estimator['kwargs']).fit_predict(X)
        estimator['estimator'].n_clusters_ = len(np.unique(labels))
        metrics = self._get_cluster_metrics(estimator['estimator'].__name__,
                                            estimator['estimator'].n_clusters_,
                                            X, labels, y)
        predictions.append({estimator['estimator'].__name__: labels})
        performance_metrics[estimator['estimator'].__name__] = metrics

    self.predictions = predictions
    self.performance_metrics = performance_metrics

    return predictions, performance_metrics
```

The fit_predict method uses the _get_cluster_metrics method to get the performance metrics, which is defined in the following code block:

```python
# Printing cluster metrics for given arguments
def _get_cluster_metrics(self, name, n_clusters_, X, pred_labels, true_labels=None):
    from sklearn.metrics import (homogeneity_score, completeness_score, v_measure_score,
                                 adjusted_rand_score, adjusted_mutual_info_score,
                                 silhouette_score)

    print("""################## %s metrics #####################""" % name)

    if len(np.unique(pred_labels)) >= 2:
        silh_co = silhouette_score(X, pred_labels)

        if true_labels is not None:
            h_score = homogeneity_score(true_labels, pred_labels)
            c_score = completeness_score(true_labels, pred_labels)
            vm_score = v_measure_score(true_labels, pred_labels)
            adj_r_score = adjusted_rand_score(true_labels, pred_labels)
            adj_mut_info_score = adjusted_mutual_info_score(true_labels, pred_labels)

            metrics = {"Silhouette Coefficient": silh_co,
                       "Estimated number of clusters": n_clusters_,
                       "Homogeneity": h_score,
                       "Completeness": c_score,
                       "V-measure": vm_score,
                       "Adjusted Rand Index": adj_r_score,
                       "Adjusted Mutual Information": adj_mut_info_score}

            for k, v in metrics.items():
                print("\t%s: %0.3f" % (k, v))

            return metrics

        metrics = {"Silhouette Coefficient": silh_co,
                   "Estimated number of clusters": n_clusters_}

        for k, v in metrics.items():
            print("\t%s: %0.3f" % (k, v))

        return metrics

    else:
        print("\t# of predicted labels is {}, cannot produce metrics.\n".format(np.unique(pred_labels)))
```

The _get_cluster_metrics method calculates metrics such as homogeneity_score, completeness_score, v_measure_score, adjusted_rand_score, adjusted_mutual_info_score, and silhouette_score. These metrics help you to assess how well the clusters are separated, and measure the similarity within and between clusters.
K-means algorithm in action

You can now apply the KMeans algorithm to see how it works:

```python
from sklearn.cluster import KMeans

estimators = [{'estimator': KMeans, 'args': (), 'kwargs': {'n_clusters': 4}}]
unsupervised_learner = Unsupervised_AutoML(estimators)
```

You can inspect the estimators:

```python
unsupervised_learner.estimators
```

This will output the following:

```
[{'args': (),
  'estimator': sklearn.cluster.k_means_.KMeans,
  'kwargs': {'n_clusters': 4}}]
```

You can now invoke fit_predict to obtain the predictions and performance metrics:

```python
predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)
```

Metrics will be written to the console:

```
################## KMeans metrics #####################
Silhouette Coefficient: 0.631
Estimated number of clusters: 4.000
Homogeneity: 0.951
Completeness: 0.951
V-measure: 0.951
Adjusted Rand Index: 0.966
Adjusted Mutual Information: 0.950
```

You can always print the metrics later:

```python
pprint(performance_metrics)
```

This will output the name of the estimator and its metrics:

```
{'KMeans': {'Silhouette Coefficient': 0.9280431207593165,
            'Estimated number of clusters': 4,
            'Homogeneity': 1.0,
            'Completeness': 1.0,
            'V-measure': 1.0,
            'Adjusted Rand Index': 1.0,
            'Adjusted Mutual Information': 1.0}}
```

Let's add another class method to plot the clusters of a given estimator and its predicted labels:

```python
# plot_clusters will visualize the clusters given predicted labels
def plot_clusters(self, estimator, X, labels, plot_kwargs):
    palette = sns.color_palette('deep', np.unique(labels).max() + 1)
    colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in labels]

    plt.scatter(X[:, 0], X[:, 1], c=colors, **plot_kwargs)
    plt.title('{} Clusters'.format(str(estimator.__name__)), fontsize=14)
    plt.show()
```

Let's see the usage:

```python
plot_kwargs = {'s': 12, 'linewidths': 0.1}
unsupervised_learner.plot_clusters(KMeans,
                                   X,
                                   unsupervised_learner.predictions[0]['KMeans'],
                                   plot_kwargs)
```

The resulting plot shows four clearly separated clusters. In this example, the clusters are evenly sized and clearly separated from each other but, when you are doing this kind of exploratory analysis, you should try different hyperparameters and examine the results. You will write a wrapper function later in this article to apply a list of clustering algorithms and their hyperparameters to examine the results. For now, let's see one more example with k-means, where it does not work well.
When clusters in your dataset have different statistical properties, such as differences in variance, k-means will fail to identify them correctly:

```python
X, y = make_blobs(n_samples=2000, centers=5, cluster_std=[1.7, 0.6, 0.8, 1.0, 1.2], random_state=220)

# Plot sample data
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()
```

Although this sample dataset is generated with five centers, that's not obvious from the plot, and there might appear to be only four clusters:

```python
from sklearn.cluster import KMeans

estimators = [{'estimator': KMeans, 'args': (), 'kwargs': {'n_clusters': 4}}]
unsupervised_learner = Unsupervised_AutoML(estimators)

predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)
```

Metrics in the console are as follows:

```
################## KMeans metrics #####################
Silhouette Coefficient: 0.549
Estimated number of clusters: 4.000
Homogeneity: 0.729
Completeness: 0.873
V-measure: 0.795
Adjusted Rand Index: 0.702
Adjusted Mutual Information: 0.729
```

KMeans clusters are plotted as follows:

```python
plot_kwargs = {'s': 12, 'linewidths': 0.1}
unsupervised_learner.plot_clusters(KMeans,
                                   X,
                                   unsupervised_learner.predictions[0]['KMeans'],
                                   plot_kwargs)
```

In the resulting plot, points between the red (dark gray) cluster and the bottom-green cluster (light gray) seem to form one big cluster. K-means calculates each centroid based on the mean value of the points surrounding it, so here you need a different approach.

The DBSCAN algorithm in action

DBSCAN is one of the clustering algorithms that can deal with non-flat geometry and uneven cluster sizes. Let's see what it can do:

```python
from sklearn.cluster import DBSCAN

estimators = [{'estimator': DBSCAN, 'args': (), 'kwargs': {'eps': 0.5}}]
unsupervised_learner = Unsupervised_AutoML(estimators)

predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)
```

Metrics in the console are as follows:

```
################## DBSCAN metrics #####################
Silhouette Coefficient: 0.231
Estimated number of clusters: 12.000
Homogeneity: 0.794
Completeness: 0.800
V-measure: 0.797
Adjusted Rand Index: 0.737
Adjusted Mutual Information: 0.792
```

DBSCAN clusters are plotted as follows:

```python
plot_kwargs = {'s': 12, 'linewidths': 0.1}
unsupervised_learner.plot_clusters(DBSCAN,
                                   X,
                                   unsupervised_learner.predictions[0]['DBSCAN'],
                                   plot_kwargs)
```

In the resulting plot, the conflict between the red (dark gray) and bottom-green (light gray) clusters from the k-means case seems to be gone. What's interesting here is that some small clusters appeared, and some points were not assigned to any cluster at all, based on their distance. DBSCAN has the eps (epsilon) hyperparameter, which controls how close points must be to belong to the same neighborhood; you can play with that parameter to see how the algorithm behaves. When you are doing this kind of exploratory analysis, where you don't know much about the data, visual clues are always important, because metrics can mislead you; not every clustering algorithm can be assessed using similar metrics.

To summarize, we learned about many different aspects of choosing a suitable ML pipeline for a given problem, and you gained a better understanding of how unsupervised algorithms may suit your needs. To get a clearer understanding of the different aspects of automated machine learning, and how to incorporate automation tasks using practical datasets, check out the book Hands-On Automated Machine Learning.

Can DevOps promote empathy in software engineering?

Richard Gall
14 Oct 2019
8 min read
If DevOps is, at a really basic level, about getting different teams to work together, then you could say that DevOps is a discipline that promotes empathy. It's an interesting idea, and one that's explored in Viktor Farcic's book The DevOps Paradox. What's particularly significant about the notion of empathy existing inside DevOps is that it could help us to focus on exactly what we're trying to achieve by employing it. In turn, this could help us develop or evolve the way we actually do DevOps - so, instead of worrying about a new toolchain, or new platform purchases, we could, perhaps, just explore ways of getting people to simply understand what their respective needs are.

However, in The DevOps Paradox there are a number of different insights on empathy in DevOps and what it means not just for the field itself, but also for its place in a modern business. Let's take a look at some DevOps experts' thoughts on empathy and DevOps.

"Empathy helps developers put the user at the center of what they do"

Jeff Sussna (@jeffsussna) is an IT consultant and coach who helps organizations to design, build, and deliver products quickly.

"There's a lot of confusion and anxiety about [empathy's] meaning, and a lot of people tend to misunderstand it. Sometimes people think empathy means wallowing in someone else's pain. In fact, there's actually a philosopher from Yale University who is now putting out the idea that empathy is actually bad, and that it's the cause of all of the world's problems, and what we need instead is compassion.

"From my perspective, that represents a misunderstanding of both empathy and compassion, but my favorite is when people say things like, 'Sociopaths are really good at empathizing'. My answer to that is, if you have a sociopath in your organization, you have a much bigger problem, and DevOps isn't going to solve it. At that point, you have an HR problem. What you need to distinguish between is emotional empathy and cognitive empathy, and I use cognitive empathy in the context of DevOps in a very simple way, which is the ability to think about things as if from another's perspective.

"If you're a developer and you think, 'What is the experience of deploying and running my application going to be?' you're thinking about it from the perspective of the operations person.

"If you're an operations person and you're thinking in terms of, 'What is the experience going to be when you need to spin up a test server in a matter of hours in order to test a hotfix because all of your testing swim lanes are full of other things, and what does that mean for my process of provisioning servers?' then you're thinking about things from the tester's point of view.

"And so, to me, that's empathy, and that's empathizing, which is really at the heart of customer service. It's at the heart of design thinking, and it's at the heart of product development. What is it that our customers are trying to accomplish, what help do they need from us, and how can we help them?"

"As soon as you have empathy you can understand why you provide value"

Damien Duportal (@DamienDuportal) is a Developer Advocate at Traefik, Containous' cloud native edge router.

"If you have a tool that helps you to share empathy, then you have a great foundation for starting the conversation. Even if this seems boring to engineers, at least they'll start talking and listening to each other.
"I mean, once they've stopped debating sterile tabs versus spaces or JavaScript versus Java - or whatever sterile debate it is - they'll have to focus on the value they're going to provide. So, this is really how I would sum up DevOps, which again is about how you bring empathy back and focus on the value creation and interaction side of IT.

"Empathy is one of the most advanced bricks you can have for building human interaction. If we are able to achieve so many different things - with different people, different opinions, and different cultures - it's because we, as humans, are capable of having high levels of empathy.

"As soon as you have empathy, you can understand why you provide value. If you don't, then what's the point of trying to create value? It will only be from your point of view, and there are over seven billion other people in the world. So, ultimately, we need empathy to understand what we are going to do with our tools."

"Let's not wait for culture to change: culture is in the rearview mirror"

As CSO of PraxisFlow, Kevin Behr spends his time working with clients who seek to develop their DevOps process. His 25 years of experience have been driven by a passion for engaging with the complex problems that large IT organizations face, and how we can use DevOps to solve them. You can follow Kevin on Twitter at @kevinbehr.

"What do we mean when we talk about empathy in DevOps? We're saying that we understand what it feels like to do what you're doing, and that I'll never do that to you again. So, let's build a system together that will allow us never to end up there.

"DevOps, to me, has evolved into a lot of tools, because we're humans, and humans love tools of all kinds. As a species, we've defined ourselves by our tools and technologies. And, as a species, we also talk about culture a lot but, to my mind, culture is a rearview mirror. Culture is just all the things that we've done: our organizational disposition. The way to change culture is to do things differently. Let's not wait for culture, because culture is in the rearview mirror: it's the past. If you're in a transition, then what are you transitioning toward, and what does that mean about how you need to act?

"The very interesting thing about DevOps is that while, frequently, its mission is to create a change in the culture of an organization, this change requires far more than coordination: it also requires pure collaboration, and co-laboring. These can be particularly awkward to achieve, given the likelihood that we haven't worked with the people in the organization before. And it can become intensely awkward when those people may have already made villains out of each other because they couldn't get what they wanted. The goal of the DevOps process is to create a new culture, despite these challenges...

"...When you manage to introduce empathy to a team, the development and the operations people seem finally to come together. You suddenly hear someone in operations say, 'Oh, can we do that differently? When you threw that thing at me last time, it gave me a black eye and I had to stay up for four days straight!' And the developer is like, 'It did? How did it do that? Next time, if something happens, please call me, I want to come help.' That empathy of figuring out what went wrong, and working together, is what builds trust."
"The CFO doesn't give a shit about empathy"

Chris Riley (@HoardingInfo) is a self-proclaimed bad coder turned editor of Sweetcode.io at fixate.io, a content marketing firm for those who sell to technical audiences. Through this, he's involved with DevOps, SecOps, big data, machine learning, and blockchain. He's a member of the DevOps Institute Board of Regents, a position he's held for over four years.

"...The CFO doesn't give a shit about empathy, and the person with the money may not care about that at all. The HR department might, but that's the problem with selling anything. You have to speak their language, and the CFO is going to respond to money. Either you're saving us money, or you're making us more money, and I think DevOps is doing both, which is cool. I think what's nice about that explanation is the fact that it doesn't seem insurmountable. It's kind of like how Pixar was structured.

"After Steve Jobs started at Pixar, he structured all of the work environments so as to create chance encounters among the employees, so that the graphic designer of one movie would talk to the application developer of another, even when they don't have any real reason to interact with each other. The way they did it at Pixar was that, as everybody has to go to the bathroom, they put the bathrooms in a large communal area where these people are going to run into each other - that's what created that empathy. They understand what each other's job is. They're excited about each other's movies. They're excited about what they're working on, and they're aware of that in everything they do. It's a really good explanation."

What do you think? Can DevOps help promote empathy inside engineering teams and across wider businesses? Or is there anything else we should be doing?

What is an Artificial Neural Network?

Packt
04 Jan 2017
11 min read
In this article by Prateek Joshi, author of the book Artificial Intelligence with Python, we are going to learn about artificial neural networks. We will start with an introduction to artificial neural networks and the installation of the relevant library. We will discuss the perceptron and how to build a classifier based on it. Then we will learn about single-layer and multilayer neural networks.

Introduction to artificial neural networks

One of the fundamental premises of Artificial Intelligence is to build machines that can perform tasks that require human intelligence. The human brain is amazing at learning new things, so why not use its model to build a machine? An artificial neural network is a model designed to simulate the learning process of the human brain. Artificial neural networks are designed so that they can identify underlying patterns in data and learn from them. They can be used for various tasks, such as classification, regression, and segmentation. We need to convert any given data into numerical form before feeding it into the neural network. For example, we deal with many different types of data, including visual, textual, and time-series data, so we need to figure out how to represent problems in a way that can be understood by artificial neural networks.

Building a neural network

The human learning process is hierarchical. We have various stages in our brain's neural network, and each stage corresponds to a different granularity. Some stages learn simple things and some stages learn more complex things. Let's consider the example of visually recognizing an object. When we look at a box, the first stage identifies simple things like corners and edges. The next stage identifies the generic shape, and the stage after that identifies what kind of object it is. This process differs for different tasks, but you get the idea! By building this hierarchy, the human brain quickly separates concepts and identifies the given object.

To simulate the learning process of the human brain, an artificial neural network is built using layers of neurons, inspired by biological neurons. Each layer in an artificial neural network is a set of independent neurons, and each neuron in a layer is connected to the neurons in the adjacent layer.

Training a neural network

If we are dealing with N-dimensional input data, then the input layer will consist of N neurons. If we have M distinct classes in our training data, then the output layer will consist of M neurons. The layers between the input and output layers are called hidden layers. A simple neural network will consist of a couple of layers, whereas a deep neural network will consist of many layers.

Consider the case where we want to use a neural network to classify given data. The first step is to collect the appropriate training data and label it. Each neuron acts as a simple function, and the neural network trains itself until the error goes below a certain value. The error is basically the difference between the predicted output and the actual output. Based on how big the error is, the neural network adjusts itself and retrains until it gets closer to the solution.

You can learn more about neural networks here: http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html. We will be using a library called NeuroLab. You can find out more about it here: https://pythonhosted.org/neurolab.
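Before installing anything, it's worth seeing how small the computation at the heart of a perceptron is. The following self-contained numpy sketch (an illustration, not the book's code) implements what the next section describes in prose: a weighted sum of the inputs plus a bias, passed through a step function:

```python
import numpy as np

# A single neuron: weighted sum of inputs plus a bias, then a step activation.
# The weights and bias here are arbitrary illustrative values, not trained ones.
weights = np.array([0.8, -0.4])
bias = 0.3

def perceptron_output(x):
    weighted_sum = np.dot(weights, x) + bias
    return 1 if weighted_sum > 0 else 0  # step function decides the class

for point in [np.array([1.0, 0.5]), np.array([-1.0, 2.0])]:
    print(point, '->', perceptron_output(point))
```

Training, which NeuroLab automates below, just means adjusting the weights and bias until these outputs match the labels.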
You can install it by running the following command in your Terminal:

```
$ pip3 install neurolab
```

Once you have installed it, you can proceed to the next section.

Building a perceptron-based classifier

A perceptron is the building block of an artificial neural network. It is a single neuron that takes inputs, performs computation on them, and then produces an output. It uses a simple linear function to make its decision. Let's say we are dealing with an N-dimensional input datapoint. A perceptron computes the weighted summation of those N numbers and then adds a constant to produce the output. The constant is called the bias of the neuron. It is remarkable that these simple perceptrons are used to design very complex deep neural networks.

Let's see how to build a perceptron-based classifier using NeuroLab. Create a new Python file and import the following packages:

```python
import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl
```

Load the input data from the text file data_perceptron.txt provided to you. Each line contains space-separated numbers, where the first two numbers are the features and the last number is the label:

```python
# Load input data
text = np.loadtxt('data_perceptron.txt')
```

Separate the text into datapoints and labels:

```python
# Separate datapoints and labels
data = text[:, :2]
labels = text[:, 2].reshape((text.shape[0], 1))
```

Plot the datapoints:

```python
# Plot input data
plt.figure()
plt.scatter(data[:, 0], data[:, 1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')
```

Define the maximum and minimum values that each dimension can take:

```python
# Define minimum and maximum values for each dimension
dim1_min, dim1_max, dim2_min, dim2_max = 0, 1, 0, 1
```

Since the data is separated into two classes, we just need one bit to represent the output, so the output layer will contain a single neuron:

```python
# Number of neurons in the output layer
num_output = labels.shape[1]
```

We have a dataset where the datapoints are two-dimensional. Let's define a perceptron with two input neurons, assigning one neuron to each dimension:

```python
# Define a perceptron with 2 input neurons (because we
# have 2 dimensions in the input data)
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
perceptron = nl.net.newp([dim1, dim2], num_output)
```

Train the perceptron with the training data:

```python
# Train the perceptron using the data
error_progress = perceptron.train(data, labels, epochs=100, show=20, lr=0.03)
```

Plot the training progress using the error metric:

```python
# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()
```

The full code is given in the file perceptron_classifier.py. If you run the code, you will get two output figures: the first shows the input datapoints, and the second shows the training progress using the error metric. As we can observe from the training progress figure, the error goes down to 0 at the end of the fourth epoch.

Constructing a single-layer neural network

A perceptron is a good start, but it cannot do much. The next step is to have a set of neurons act as a unit and see what we can achieve. Let's create a single-layer neural network that consists of independent neurons acting on input data to produce an output. Create a new Python file and import the following packages:

```python
import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl
```

Load the input data from the file data_simple_nn.txt provided to you. Each line in this file contains four numbers.
The first two numbers form the datapoint and the last two numbers are the labels. Why do we need two numbers for the labels? Because we have four distinct classes in our dataset, so we need two bits to represent them.

```python
# Load input data
text = np.loadtxt('data_simple_nn.txt')
```

Separate the data into datapoints and labels:

```python
# Separate it into datapoints and labels
data = text[:, 0:2]
labels = text[:, 2:]
```

Plot the input data:

```python
# Plot input data
plt.figure()
plt.scatter(data[:, 0], data[:, 1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')
```

Extract the minimum and maximum values for each dimension (we don't need to hardcode them like we did in the previous section):

```python
# Minimum and maximum values for each dimension
dim1_min, dim1_max = data[:, 0].min(), data[:, 0].max()
dim2_min, dim2_max = data[:, 1].min(), data[:, 1].max()
```

Define the number of neurons in the output layer:

```python
# Define the number of neurons in the output layer
num_output = labels.shape[1]
```

Define a single-layer neural network using the above parameters:

```python
# Define a single-layer neural network
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
nn = nl.net.newp([dim1, dim2], num_output)
```

Train the neural network using the training data:

```python
# Train the neural network
error_progress = nn.train(data, labels, epochs=100, show=20, lr=0.03)
```

Plot the training progress:

```python
# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()
```

Define some sample test datapoints and run the network on those points:

```python
# Run the classifier on test datapoints
print('\nTest results:')
data_test = [[0.4, 4.3], [4.4, 0.6], [4.7, 8.1]]
for item in data_test:
    print(item, '-->', nn.sim([item])[0])
```

The full code is given in the file simple_neural_network.py. If you run the code, you will get two figures: the first represents the input datapoints, and the second shows the training progress. The test results are printed in your Terminal. If you locate those test datapoints on a 2D graph, you can visually verify that the predicted outputs are correct.

Constructing a multilayer neural network

In order to achieve higher accuracy, we need to give the neural network more freedom. This means that a neural network needs more than one layer to extract the underlying patterns in the training data. Let's create a multilayer neural network to achieve that. Create a new Python file and import the following packages:

```python
import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl
```

In the previous two sections, we saw how to use a neural network as a classifier. In this section, we will use a multilayer neural network as a regressor. Generate some sample datapoints based on the equation y = 3x^2 + 5 and then normalize the points:

```python
# Generate some training data
min_val = -15
max_val = 15
num_points = 130
x = np.linspace(min_val, max_val, num_points)
y = 3 * np.square(x) + 5
y /= np.linalg.norm(y)
```

Reshape the above variables to create a training dataset:

```python
# Create data and labels
data = x.reshape(num_points, 1)
labels = y.reshape(num_points, 1)
```

Plot the input data:

```python
# Plot input data
plt.figure()
plt.scatter(data, labels)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')
```

Define a multilayer neural network with two hidden layers. You are free to design a neural network any way you want; for this case, let's have 10 neurons in the first hidden layer and 6 neurons in the second.
Our task is to predict a value, so the output layer will contain a single neuron:

```python
# Define a multilayer neural network with 2 hidden layers;
# First hidden layer consists of 10 neurons
# Second hidden layer consists of 6 neurons
# Output layer consists of 1 neuron
nn = nl.net.newff([[min_val, max_val]], [10, 6, 1])
```

Set the training algorithm to gradient descent:

```python
# Set the training algorithm to gradient descent
nn.trainf = nl.train.train_gd
```

Train the neural network using the training data that was generated:

```python
# Train the neural network
error_progress = nn.train(data, labels, epochs=2000, show=100, goal=0.01)
```

Run the neural network on the training datapoints:

```python
# Run the neural network on training datapoints
output = nn.sim(data)
y_pred = output.reshape(num_points)
```

Plot the training progress:

```python
# Plot training error
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.title('Training error progress')
```

Plot the predicted output:

```python
# Plot the output
x_dense = np.linspace(min_val, max_val, num_points * 2)
y_dense_pred = nn.sim(x_dense.reshape(x_dense.size, 1)).reshape(x_dense.size)

plt.figure()
plt.plot(x_dense, y_dense_pred, '-', x, y, '.', x, y_pred, 'p')
plt.title('Actual vs predicted')
plt.show()
```

The full code is given in the file multilayer_neural_network.py. If you run the code, you will get three figures: the first shows the input data, the second shows the training progress, and the third shows the predicted output overlaid on top of the input data. The predicted output seems to follow the general trend. If you continue to train the network and reduce the error, you will see that the predicted output will match the input curve even more accurately.

Summary

In this article, we learned more about artificial neural networks. We discussed how to build and train neural networks, talked about the perceptron, and built a classifier based on it. We also learned about single-layer and multilayer neural networks.

Modern web development: what makes it ‘modern’?

Richard Gall
14 Oct 2019
10 min read
The phrase 'modern web development' is one that I have clung to during years writing copy for Packt books. But what does it really mean? I know it means something because it sounds right - but there's still a part of me that feels that it's a bit vague and empty. Although it might sound somewhat fluffy, the truth is that there probably is such a thing as modern web development. Insofar as the field has changed significantly in a matter of years, and things are different now from how they were in, say, 2013, modern web development can be characterised as all the things being done in web development in 2019 that are different from 5-10 years ago.

By this I don't just mean trends like artificial intelligence and mobile development (although those are both important). I'm also talking about the more specific ways in which we actually build web projects. So, let's take a look at how we got to where we are, and the various ways in which 'modern web development' is, well, modern.

The story of modern web development: how we got here

It sounds obvious, but the catalyst for the changes that we currently see in web development is the rise of mobile.

Mobile and the rise of web applications

There are a few different parts to this that have got us to where we are today. In the first instance, the growth of mobile in the middle part of this decade (around 2013 or 2014) initiated the trend of mobile-first or responsive web design. Those terms might sound a bit old hat; if they do, it's a mark of how quickly the web development world has changed. Primarily, though, this was about appearance and UI - making web properties easy to use and navigate on mobile devices, rather than just desktop. Tools like Bootstrap grew quickly, providing an easy and templated way to build mobile-first and responsive websites. But what began as a trend concerned primarily with appearance later shifted as mobile usage grew. This called for a more sophisticated approach, as mobile users came to expect richer and faster web experiences, and businesses sought new ways to monetize these significant changes in user behavior.

Lightweight apps for data-intensive user experiences

This is where concepts like the single-page web app came to the fore. Lightweight, dynamic, and capable of handling data-intensive tasks and changes in state, single-page web apps were unique in that they handled logic in the browser rather than on the server. This was arguably a watershed in changing how we think about web development, and it was instrumental in collapsing the well-established distinction between back end and front end. Behind this trend we saw a shift towards new technologies: Node.js quietly emerged on the scene (arguably it's only in the last couple of years that its popularity has really exploded), and frameworks like Angular were at the height of their popularity.

Full-stack web development

It's around this time that full-stack development started to accelerate as a trend. Look at Google Trends: searches for the phrase have grown since the beginning of 2012, and it's around 2015 that the term undergoes a step change in the level of interest. Undoubtedly one of the reasons for this is that the relationship between client and server was starting to shift, which meant the web developer skill set was starting to change as well.
As a web developer, you weren't only concerned with how to build the front end, but also with how that front end managed dynamic content and different states.

The rise and fall of Angular

A good corollary to this tale is the fate of AngularJS. While it rose to the top amidst the chaos and confusion of the mid-teens framework bubble, as the mobile revolution matured in a way that gave way to more sophisticated web applications, the framework soon became too cumbersome. And while Google - the framework's creator - aimed to keep it up to date with Angular 2 and subsequent versions, delays and missteps meant the project lost ground to React. This isn't to say that Angular is dead and buried; there are plenty of reasons to use Angular over React and other JavaScript tools if the use case is right. But it is nevertheless the case that the Angular project no longer defines web development to the extent that it used to. The fact that Ionic, the JavaScript mobile framework, is now backed by Web Components rather than Angular is an important indicator of what modern web development actually looks like - and how it contrasts with what we were doing just a few years ago.

The core elements of modern web development in 2019

There are a number of core components to modern web development that have evolved out of industry changes over the last decade. Some of these are tools, some are ideas and approaches. All of them are based on the need to balance complex requirements with performance and simplicity.

Web Components

Web Components are the most important element if we're trying to characterise 'modern' web development. The principle is straightforward: Web Components provide a set of reusable custom elements. This makes it easier to build web pages and applications without writing additional lines of code that add complexity to your codebase. The main thing to keep in mind here is that Web Components improve encapsulation. This concept, which is really about building in a more modular and loosely coupled manner, is crucial when thinking about what makes modern web development modern. There are three main elements to Web Components (a short sketch follows below):

Custom elements: a set of JavaScript APIs that you can call and define however you need them to work.
The shadow DOM: a DOM that's attached to individual elements on your page. It isolates the resources that different elements and components need, which makes them easier to manage from a development perspective and can unlock better performance for users.
HTML templates: bits of HTML that can be reused and called upon only when needed.

Together, these elements paint a picture of modern web development: one in which developers are trying to handle more complexity and sophistication while improving their productivity and efficiency.
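As a rough illustration of how these three pieces fit together (not from the original article), here is a minimal custom element that stamps an HTML template into its own shadow DOM; the element name and markup are invented for this sketch:

```typescript
// A minimal custom element combining the three pieces described above.
// <hello-card> and its markup are invented for this sketch.
const template = document.createElement('template');
template.innerHTML = `
  <style>p { font-weight: bold; }</style>
  <p>Hello from a Web Component!</p>
`;

class HelloCard extends HTMLElement {
  constructor() {
    super();
    // The shadow DOM keeps these styles and nodes isolated from the rest of the page
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.appendChild(template.content.cloneNode(true));
  }
}

// Registering the custom element makes <hello-card> usable anywhere in the page
customElements.define('hello-card', HelloCard);
```

Once defined, the element can be dropped into markup like any built-in tag, which is the reusability the list above describes.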
React.js

One of the reasons that React managed to usurp Angular is the fact that it does many of the things that Google wanted Angular to do. Perhaps the most significant difference between React and Angular is that React tackles some of the scalability issues presented by Angular's two-way data binding (which was, for a while, incredibly exciting and innovative) with unidirectional data flow. There's a lot of discussion around this but, by moving towards a singular model of data flow, applications can handle data on a much larger scale without running into problems. Elsewhere, concepts like the virtual DOM (which is distinct from the shadow DOM) help to improve encapsulation for developers. Indeed, flexibility is one of the biggest draws of React: to use Angular you need to know TypeScript, for example, and although you can use TypeScript when working with React, it isn't essential. You have options.

Redux, Flux, and how we think about application state

The growth of React has got web developers thinking more and more about application state. While this isn't something new, as applications have become more interactive and complex it has become more important for developers to take the issue of 'statefulness' seriously. Consequently, libraries such as Flux and Redux have emerged, which act as stores in which all the values that comprise an application's state can be kept. This article on egghead.io explains why state is important in a clear and concise way:

"For me, the key to understanding state management was when I realised that there is always state... users perform actions, and things change in response to those actions. State management makes the state of your app tangible in the form of a data structure that you can read from and write to. It makes your 'invisible' state clearly visible for you to work with."

APIs and microservices

One software trend that we haven't mentioned yet, but which nevertheless remains important when thinking about modern web development, is the rise of APIs and microservices. For web developers, the trend reinforces the importance of the encapsulation and modularity that things like Web Components and React are designed to help with. Insofar as microservices are simplifying the development process but adding architectural complexity, it's not hard to see that web developers are having to think in a more holistic manner about how their applications interact with a variety of services and data sources. Indeed, you could even say that this trend is only extending the growth of the full-stack developer as a job role: if development today is more about pulling together multiple different services and components, rather than different roles building different parts of a monolithic app, it makes sense that the demand for full-stack developers is growing. But there's another, more specific, way in which the microservices trend is influencing modern web development: micro frontends.

Micro frontends

Micro frontends take the concept of microservices and apply it to the front end. Rather than just building an application in which the front end is powered by microservices (in a way that's common today), you also treat individual constituent parts of the front end as microservices. In turn, you build teams around each of these parts: perhaps one works on search, another on checkout, another on user accounts. This is more of an organizational shift than a technological one, but it again feeds into the idea that modern web development is something modular, broken up and - usually - full-stack.

Conclusion: modern web development is both a set of tools and a way of thinking

The web development toolset has been evolving for more than a decade.
Mobile was the catalyst for significant change, and has helped get us to a world that is modular, lightweight, and highly flexible. Heavyweight frameworks like AngularJS paved the way, but an alternative appears to have found real purchase with the wider development community. Of course, it won't always be this way. Although React has dominated developer mindshare for a good three years or so (quite a while in the engineering world), something will certainly replace it at some point. But however the toolchain evolves, the basic idea that we build better applications and websites when we break things apart will likely stick. Complexity won't decrease. Even if writing code gets easier, understanding how the component parts of an application fit together - from front-end elements to API integration - will become crucial. Even if it starts to bend your mind, and poses new problems you hadn't even thought of, it's clear that things are going to remain interesting as far as the future of web development is concerned.

Introduction to Penetration Testing and Kali Linux

Packt
22 Sep 2015
4 min read
In this article by Juned A Ansari, author of the book Web Penetration Testing with Kali Linux, Second Edition, we will cover the following topics:

Introduction to penetration testing
An overview of Kali Linux
Using Tor for penetration testing

Introduction to penetration testing

Penetration testing, or ethical hacking, is a proactive way of testing your web applications by simulating an attack that's similar to a real attack that could occur on any given day. We will use the tools provided in Kali Linux to accomplish this. Kali Linux is the rebranded version of BackTrack and is now a Debian-derived Linux distribution. It comes preinstalled with a large list of popular hacking tools that are ready to use, with all the prerequisites installed. We will dive deep into the tools that help pentest web applications, and also attack websites in a lab that is vulnerable to the major flaws found in real-world web applications.

An overview of Kali Linux

Kali Linux is a security-focused Linux distribution based on Debian. It's a rebranded version of the famous Linux distribution known as BackTrack, which came with a huge repository of open source hacking tools for network, wireless, and web application penetration testing. Although Kali Linux contains most of the tools from BackTrack, its main aim is to be portable, so that it can be installed on devices based on ARM architectures, such as tablets and Chromebooks, which makes the tools available at your disposal with much more ease.

Using open source hacking tools comes with a major drawback: they have a whole lot of dependencies when installed on Linux, and they need to be installed in a predefined sequence; moreover, the authors of some tools have not released accurate documentation, which makes our life difficult. Kali Linux simplifies this process: it contains many tools preinstalled with all their dependencies, in ready-to-use condition, so that you can pay more attention to the actual attack and not to installing the tool. Updates for tools installed in Kali Linux are released frequently, which helps you to keep the tools up to date. A noncommercial toolkit that has all the major hacking tools preinstalled to test real-world networks and applications is the dream of every ethical hacker, and the authors of Kali Linux make every effort to make our life easy, enabling us to spend more time on finding actual flaws rather than building a toolkit.

Using Tor for penetration testing

The main aim of a penetration test is to hack into a web application in the way a real-world malicious hacker would. Tor provides an interesting option to emulate the steps that a black hat hacker uses to protect his identity and location. Although an ethical hacker trying to improve the security of a web application should not be concerned about hiding his location, Tor provides the additional option of testing edge security systems such as network firewalls, web application firewalls, and IPS devices.

Black hat hackers try every method to protect their location and true identity; they do not use a permanent IP address and constantly change it to fool cybercrime investigators. You will find port scanning requests coming from a different range of IP addresses, and the actual exploitation carrying a source IP address that your edge security systems are logging for the first time. With the necessary written approval from the client, you can use Tor to emulate an attacker by connecting to the web application from an unknown IP address that the system does not usually see connections from. Using Tor makes it more difficult to trace the intrusion attempt back to the actual attacker.
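The article itself doesn't prescribe specific commands, but as a hedged illustration, routing a single tool's traffic through Tor on a Debian-based system such as Kali typically looks something like the following; the package names and service commands are assumptions that may vary between releases:

```
# Install and start the Tor service (Debian/Kali package names assumed)
$ sudo apt install tor torsocks
$ sudo systemctl start tor

# Wrap a command with torsocks so its TCP traffic exits through Tor
$ torsocks curl https://check.torproject.org/
# The returned page reports whether the request really arrived via a Tor exit node
```

Only use this against targets you have written authorization to test, as the article stresses above.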
With the necessary written approval from the client, you can use Tor to emulate an attacker by connecting to the web application from an unknown IP address that the system does not usually see connections from. Using Tor makes it more difficult to trace the intrusion attempt back to the actual attacker.

Tor uses a virtual circuit of interconnected network relays to bounce encrypted data packets. The encryption is multilayered, and the final network relay releasing the data to the public Internet cannot identify the source of the communication, because the entire packet was encrypted and only a part of it is decrypted at each node. The destination computer sees the final exit point of the data packet as the source of the communication, thus protecting the real identity and location of the user. The following figure shows the working of Tor:

Summary

This article served as an introduction to penetration testing of web applications and Kali Linux. At the end, we looked at how to use Tor for penetration testing.

Resources for Article:

Further resources on this subject: An Introduction to WEP [article], WLAN Encryption Flaws [article], What is Kali Linux [article]
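The layered (onion) routing described above can be modeled in a few lines of code. The following is purely a conceptual C++ sketch, not real Tor: it uses XOR as a toy stand-in for encryption (real Tor uses layered public-key and symmetric cryptography), and the message, relay keys, and output are illustrative only. It shows why an intermediate relay cannot read the payload: each relay peels exactly one layer, and only the exit node sees the plaintext.

// toy_onion.cpp -- a conceptual model of onion layering, NOT real Tor.
#include <iostream>
#include <string>
#include <vector>

// Toy stand-in for encryption: XOR every byte with a relay's key.
std::string xorLayer(std::string data, char key) {
    for (char &c : data) c ^= key;
    return data;
}

int main() {
    std::string message = "GET /login HTTP/1.1";   // illustrative payload
    std::vector<char> relayKeys = {'a', 'b', 'c'}; // one key per relay

    // The client wraps the message once per relay, innermost layer first,
    // so the outermost layer belongs to the first relay on the circuit.
    std::string packet = message;
    for (auto it = relayKeys.rbegin(); it != relayKeys.rend(); ++it)
        packet = xorLayer(packet, *it);

    // Each relay peels one layer; intermediate output is unreadable, and
    // no single relay knows both the source and the destination.
    for (char key : relayKeys) {
        packet = xorLayer(packet, key);
        std::cout << "after relay '" << key << "': " << packet << "\n";
    }
    return 0;
}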

Xamarin.Forms

Packt
09 Dec 2016
11 min read
Since the beginning of Xamarin's lifetime as a company, their motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem—a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem: a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those that know C# (and XAML), but who may not want to get into the full details of using the native iOS and Android APIs.

In this article by Jonathan Peppers, author of the book Xamarin 4.x Cross-Platform Application Development - Third Edition, we will discuss the following topics:

Using XAML with Xamarin.Forms
Data binding and MVVM with Xamarin.Forms

(For more resources related to this topic, see here.)

Using XAML in Xamarin.Forms

In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in XAML (Extensible Application Markup Language). XAML is a declarative language that is basically a set of XML elements that map to certain controls in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms creates C# objects that represent a native UI.

To understand how XAML works in Xamarin.Forms, let's create a new page with lots of UI on it. Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags:

<StackLayout Orientation="Vertical" Padding="10,20,10,10">
  <Label Text="My Label" XAlign="Center" />
  <Button Text="My Button" />
  <Entry Text="My Entry" />
  <Image Source="https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png" />
  <Switch IsToggled="true" />
  <Stepper Value="10" />
</StackLayout>

Go ahead and run the application on iOS and Android; your application will look something like the following screenshots:

First, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally, one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may be familiar with this syntax for defining rectangles if you are familiar with WPF or Silverlight. Xamarin.Forms uses the same syntax of left, top, right, and bottom values delimited by commas.

We also used several of the built-in Xamarin.Forms controls to see how they work:

Label: We used this earlier in the article. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android.
Button: A general-purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android.
Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android.
Image: This is a simple control for displaying an image on the screen, which maps to a UIImageView on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible.
Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android.
Stepper: This is a general-purpose input for entering numbers via two plus and minus buttons. On iOS this maps to a UIStepper, while on Android Xamarin.Forms implements this functionality with two Buttons.

These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, that you would expect for delivering mobile UIs. Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like:

public class UIDemoPageFromCode : ContentPage
{
    public UIDemoPageFromCode()
    {
        var layout = new StackLayout
        {
            Orientation = StackOrientation.Vertical,
            Padding = new Thickness(10, 20, 10, 10),
        };
        layout.Children.Add(new Label
        {
            Text = "My Label",
            XAlign = TextAlignment.Center,
        });
        layout.Children.Add(new Button
        {
            Text = "My Button",
        });
        layout.Children.Add(new Image
        {
            Source = "https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png",
        });
        layout.Children.Add(new Switch
        {
            IsToggled = true,
        });
        layout.Children.Add(new Stepper
        {
            Value = 10,
        });
        Content = layout;
    }
}

So you can see where using XAML can be a bit more readable, and is generally a bit better at declaring UIs. However, using C# to define your UIs is still a viable, straightforward approach.

Using data binding and MVVM

At this point, you should be grasping the basics of Xamarin.Forms, but may be wondering how the MVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use along with XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern to be used with Xamarin.Forms. Let's cover the basics of how data binding and MVVM are set up with Xamarin.Forms:

Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern.
Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels.
Any Page or control in Xamarin.Forms has a BindingContext, which is the object that it is data bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property.
In XAML, you can set up a data binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext.
In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, a Button's click event can be data bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this.
Data binding can also be set up from C# code in Xamarin.Forms via the Binding class. However, it is generally much easier to set up bindings from XAML, since the syntax has been simplified with XAML markup extensions.

Now that we have covered the basics, let's go step by step through using Xamarin.Forms. We can reuse most of the Model and ViewModel layers, although we will have to make a few minor changes to support data binding from XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL named XamSnap:

First, create three folders in the XamSnap project named Views, ViewModels, and Models.
Add the appropriate ViewModels and Models. Build the project, just to make sure everything is saved. You will get a few compiler errors that we will resolve shortly.

The first class we will need to edit is the BaseViewModel class; open it and make the following changes:

public class BaseViewModel : BindableObject
{
    protected readonly IWebService service = DependencyService.Get<IWebService>();
    protected readonly ISettings settings = DependencyService.Get<ISettings>();

    bool isBusy = false;

    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            OnPropertyChanged();
        }
    }
}

First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It has one method, Get<T>, and registrations are set up via an assembly attribute that we will set up shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface that supports data binding. Inheriting from BindableObject gave us the helper method OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed. Notice we didn't pass a string containing the property name to OnPropertyChanged. This method uses a lesser-known feature of .NET 4.5 called CallerMemberName, which will automatically fill in the calling property's name at runtime.

Next, let's set up our needed services with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration:

[assembly: Dependency(typeof(XamSnap.FakeWebService))]
[assembly: Dependency(typeof(XamSnap.FakeSettings))]

The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.xaml.cs file, just so they are easy to manage and in one place.

Next, let's modify LoginViewModel by adding a new property:

public Command LoginCommand { get; set; }

We'll use this shortly for data binding a button's command. One last change in the view model layer is to set up INotifyPropertyChanged for the MessageViewModel:

Conversation[] conversations;

public Conversation[] Conversations
{
    get { return conversations; }
    set
    {
        conversations = value;
        OnPropertyChanged();
    }
}

Likewise, you could repeat this pattern for the remaining public properties throughout the view model layer, but this is all we will need for this example. Next, let's create a new Forms ContentPage XAML file under the Views folder named LoginPage. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes:

public partial class LoginPage : ContentPage
{
    readonly LoginViewModel loginViewModel = new LoginViewModel();

    public LoginPage()
    {
        Title = "XamSnap";
        BindingContext = loginViewModel;

        loginViewModel.LoginCommand = new Command(async () =>
        {
            try
            {
                await loginViewModel.Login();
                await Navigation.PushAsync(new ConversationsPage());
            }
            catch (Exception exc)
            {
                await DisplayAlert("Oops!", exc.Message, "Ok");
            }
        });

        InitializeComponent();
    }
}

We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application.
Next, open LoginPage.xaml and add the following XAML code inside the ContentPage's content:

<StackLayout Orientation="Vertical" Padding="10,10,10,10">
  <Entry Placeholder="Username" Text="{Binding UserName}" />
  <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" />
  <Button Text="Login" Command="{Binding LoginCommand}" />
  <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" />
</StackLayout>

This sets up the basics of two text fields, a button, and a spinner, complete with all the bindings to make everything work. Since we set up the BindingContext from the LoginPage code-behind, all the properties are bound to the LoginViewModel.

Next, create a ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code-behind:

public partial class ConversationsPage : ContentPage
{
    readonly MessageViewModel messageViewModel = new MessageViewModel();

    public ConversationsPage()
    {
        Title = "Conversations";
        BindingContext = messageViewModel;
        InitializeComponent();
    }

    protected async override void OnAppearing()
    {
        try
        {
            await messageViewModel.GetConversations();
        }
        catch (Exception exc)
        {
            await DisplayAlert("Oops!", exc.Message, "Ok");
        }
    }
}

In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen.

Now let's add the following XAML code to ConversationsPage.xaml:

<ListView ItemsSource="{Binding Conversations}">
  <ListView.ItemTemplate>
    <DataTemplate>
      <TextCell Text="{Binding UserName}" />
    </DataTemplate>
  </ListView.ItemTemplate>
</ListView>

In this example, we used a ListView to data bind a list of items and display them on the screen. We defined a DataTemplate class, which represents a set of cells for each item in the list that the ItemsSource is data bound to. In our case, a TextCell displaying the UserName is created for each item in the Conversations list.

Last but not least, we must return to the App.xaml.cs file and modify the startup page:

MainPage = new NavigationPage(new LoginPage());

We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform. At this point, if you compile and run the application, you will get a functional iOS and Android application that can log in and view a list of conversations:

Summary

In this article we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dove into the concepts of data binding and how to use the MVVM design pattern with Xamarin.Forms.

Resources for Article:

Further resources on this subject: Getting Started with Pentaho Data Integration [article], Where Is My Data and How Do I Get to It? [article], Configuring and Managing the Mailbox Server Role [article]

Polyglot programming allows developers to choose the right language to solve tough engineering problems

Richard Gall
11 Jun 2019
9 min read
Programming languages can divide opinion. They are, for many engineers, a mark of identity. Yes, they say something about the kind of work you do, but they also say something about who you are and what you value. But this is changing, with polyglot programming becoming a powerful and important trend. We're moving towards a world in which developers are no longer as loyal to their chosen programming languages as they once were. Instead, they are more flexible and open-minded about the languages they use.

This year's Skill Up report highlights that there are a number of different drivers behind the programming languages developers use, which, in turn, implies a level of contextual decision making. Put simply, developers today are less likely to stick with a specific programming language, and instead move between them depending on the problems they are trying to solve and the tasks they need to accomplish.

Download this year's Skill Up report here.

[Figure: Skill Up 2019 data]

As the data above shows, languages aren't often determined by organizational requirements. They are more likely to be if you're primarily using Java or C#, but that makes sense, as these are languages that have long been associated with proprietary software organizations (Oracle and Microsoft respectively); in fact, programming languages are more often chosen due to projects and use cases.

The return to programming language standardization

This is something backed up by the most recent ThoughtWorks Radar, published in April. Polyglot programming finally moved its way into the Adopt 'quadrant', after 9 years of living in the Trial quadrant. Part of the reason for this, ThoughtWorks explains, is that the organization is seeing a reaction against this flexibility, writing that "we're seeing a new push to standardize language stacks by both developers and enterprises." The organization argues - quite rightly - that "promoting a few languages that support different ecosystems or language features is important for both enterprises to accelerate processes and go live more quickly and developers to have the right tools to solve the problem at hand."

Arguably, we're in the midst of a conflict within software engineering. The drive to standardize tooling in the face of increasingly complex distributed systems makes sense, but it's one that we should resist. This level of standardization will ultimately remove decision-making power from engineers.

What's driving polyglot programming?

It's worth digging a little deeper into why developers are starting to be more flexible about the languages they use. One of the most important drivers of this change is the dominance of Agile as a software engineering methodology. As Agile has become embedded in the software industry, software engineers have found themselves working across the stack rather than specializing in a specific part of it.

Full-stack development and polyglot programming

This is something suggested by Stack Overflow survey data. This year, 51.9% of developers described themselves as full-stack developers, compared to 50.0% describing themselves as backend developers. This is a big change from 2018, where 57.9% described themselves as backend developers compared to 48.2% of respondents calling themselves full-stack developers.
Given that earlier Stack Overflow data from 2016 indicates that full-stack developers are comfortable using more languages and frameworks than other roles, it's understandable that today we're seeing developers take more ownership and control over the languages (and, indeed, other tools) they use. With developers sitting in small Agile teams, working closer to problem domains than they may have been a decade ago, the power is now much more in their hands to select and use the programming languages and tools that are most appropriate.

If infrastructure is code, more people are writing code... which means more people are using programming languages

But it's not just about full-stack development. With infrastructure today being treated as code, it makes sense that those responsible for managing and configuring it - sysadmins, SREs, systems engineers - need to use programming languages. This is a dramatic shift in how we think about system administration and infrastructure management; programming languages are important to a whole new group of people.

Python and polyglot programming

The popularity of Python is symptomatic of this industry-wide change. Not only is it a language primarily selected due to use case (as the data above shows), it's also a language that's popular across the industry. When we asked our survey respondents what language they want to learn next, Python came out on top regardless of their primary programming language.

[Figure: Skill Up 2019 data]

This highlights that Python has appeal across the industry. It doesn't fit neatly into a specific job role, and it isn't designed for a specific task. It's flexible - as developers today need to be. Although it's true that Python's popularity is being driven by machine learning, it would be wrong to see this as the sole driver. It is, in fact, its wide range of use cases, ranging from scripting to building web services and APIs, that is making Python so popular. Indeed, it's worth noting that Python is viewed as a tool as much as it is a programming language. When we specifically asked survey respondents what tools they wanted to learn, Python came up again, suggesting it occupies a category unlike every other programming language.

[Figure: Skill Up 2019 data]

What about other programming languages?

The popularity of Python is a perfect starting point for today's polyglot programmer. It's relatively easy to learn, and it can be used for a range of different tasks. But if we're to convincingly talk about a new age of programming, where developers are comfortable using multiple programming languages, we have to look beyond the popularity of Python at other programming languages.

Perhaps a good way to do this is to look at the languages that developers primarily using Python want to learn next. If you look at the graphic above, there's no clear winner for Python developers. While every other language shows significant interest in Python, Python developers are looking at a range of different languages. This alone isn't evidence of the popularity of polyglot programming, but it does indicate some level of fragmentation in the programming language 'marketplace'. Or, to put it another way, we're moving to a place where it becomes much more difficult to say that given languages are definitive in a specific field.
The popularity of Golang

Go has particular appeal for Python programmers, with almost 20% saying they want to learn it next. This isn't that surprising - Go is a flexible language that has many applications, from microservices to machine learning, but most importantly it can give you incredible performance. With powerful concurrency, goroutines, and garbage collection, it has features designed to ensure application efficiency. Given it was designed by Google, this isn't that surprising - it's almost purpose-built for software engineering today. Its popularity with JavaScript developers further confirms that it holds significant developer mindshare, particularly among those in positions where projects and use cases demand flexibility.

Read next: Is Golang truly community driven and does it really matter?

A return to C++

An interesting contrast to the popularity of Go is the relative popularity of C++ in our Skill Up results. C++ is ancient in comparison to Golang, but it nevertheless seems to occupy a similar level of developer mindshare. The reasons are probably similar - it's another language that can give you incredible power and performance. For Python developers, part of the attraction is down to its usefulness for deep learning (TensorFlow is written in C++). But more than that, C++ is also an important foundational language. While it isn't easy to learn, it does help you to understand some of the fundamentals of software. From this perspective, it provides a useful starting point to go on and learn other languages; it's a vital piece that can unlock the puzzle of polyglot programming.

A more mature JavaScript

JavaScript also came up in our Skill Up survey results. Indeed, Python developers are keen on the language, which tells us something about the types of tasks Python developers are doing, as well as the way JavaScript has matured. On the one hand, Python developers are starting to see the value of web-based technologies, while on the other, JavaScript is expanding in scope to become much more than just a front-end programming language.

Read next: Is web development dying?

Kotlin and TypeScript

The appearance of other smaller languages in our survey results emphasizes the way in which the language ecosystem is fragmenting. TypeScript, for example, may not ever supplant JavaScript, but it could become an important addition to a developer's skill set if they begin running into problems scaling JavaScript. Kotlin represents something similar for Java developers - indeed, it could eventually even outpace its older relative. But again, its popularity will emerge according to specific use cases. It will begin to take hold in particular where Java's limitations become more exposed, such as in modern app development.

Rust: a Goldilocks programming language perfect for polyglot programming

One final mention deserves to go to Rust. In many ways, Rust's popularity is related to the continued relevance of C++, but it offers some improvements - essentially, it's easier to leverage Rust, while using C++ to its full potential requires experience and skill.

Read next: How Deliveroo migrated from Ruby to Rust without breaking production

One commenter on Hacker News described it as a 'Goldilocks' language - "It's not so alien as to make it inaccessible, while being alien enough that you'll learn something from it." This is arguably what a programming language should be like in a world where polyglot programming rules.
It shouldn't be so complex as to consume your time and energy, but it should be sophisticated enough to allow you to solve difficult engineering problems.

Learning new programming languages makes it easier to solve engineering problems

The value of learning multiple programming languages is indisputable. Python is the language that's changing the game, becoming a vital extra for a range of developers from different backgrounds, but there are plenty of other languages that could prove useful. What's ultimately important is to explore the options that are available and to start using a language that's right for you. Indeed, the right choice isn't always immediately obvious - but don't let that put you off. Give yourself some time to explore new languages and find the one that's going to work for you.

Hardware configuration

Packt
21 Jul 2014
2 min read
The hardware configuration of this project is not really complex. For each motion sensor module you want to build, you'll need to follow these steps.

The first step is to plug an XBee module onto the XBee shield. Then, you need to plug the shield into your Arduino board, as shown in the following image:

Now, you can connect the motion sensor. It has three pins: VCC (for the positive power supply), GND (which corresponds to the reference voltage level), and SIG (which will turn to a digital HIGH state in case any motion is detected). Connect VCC to the Arduino 5V pin, GND to Arduino GND, and SIG to Arduino pin number 8 (the example code uses pin 8, but you could also use any digital pin). You should end up with something similar to this image:

You will also need to set a jumper correctly on the board so we can upload a sketch. On the XBee shield, there is a little switch close to the XBee module to choose between the XBee module being connected directly to the Arduino board's serial interface (which means you can't upload any sketches anymore) or leaving it disconnected. As we need to upload the Arduino sketch first, you need to set this switch to DLINE, as shown in this image:

You will also need to connect the XBee explorer board to your computer at this point. Simply insert one XBee module into the board, as shown in the following image:

Now that this is done, you can power up everything by connecting the Arduino board and the explorer module to your computer via USB cables. If you want to use several XBee motion sensors, you will need to repeat the beginning of the procedure for each of them: assemble one Arduino board with an XBee shield, one XBee module, and one motion sensor. However, you only need one USB XBee module connected to your computer, even if you have many sensors.

Summary

In this article, we learned about the hardware configuration required to build wireless XBee motion detectors. We looked at the Arduino R3 board, the XBee module, the XBee shield, and the other important hardware configuration.

Resources for Article:

Further resources on this subject: Playing with Max 6 Framework [Article], Our First Project – A Basic Thermometer [Article], Sending Data to Google Docs [Article]
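Before bringing XBee into the picture, you can sanity-check the sensor wiring with a short test sketch. The following is only a minimal sketch of the idea, not the project's actual code: it assumes the sensor's SIG pin is wired to digital pin 8 as described above, and it prints over the USB serial port (at an illustrative 9600 baud) instead of sending data wirelessly.

// Minimal wiring test for the motion sensor (no XBee involved yet).
const int sensorPin = 8;  // SIG pin of the motion sensor, as wired above

void setup() {
  pinMode(sensorPin, INPUT);
  Serial.begin(9600);      // print results over USB serial
}

void loop() {
  if (digitalRead(sensorPin) == HIGH) {
    Serial.println("Motion detected!");
  }
  delay(500);              // check twice per second
}

If "Motion detected!" appears in the Serial Monitor when you wave a hand in front of the sensor, the wiring is correct and you can move on to the wireless part of the project.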

Overview of Physics Bodies and Physics Materials

Packt
30 Sep 2015
14 min read
In this article by Katax Emperor and Devin Sherry, authors of the book Unreal Engine Physics Essentials, we will take a deeper look at Physics Bodies in Unreal Engine 4. We will also look at some of the detailed properties available to these assets. In addition, we will discuss the following topic:

Physical Materials – an overview

For the purposes of this article, we will continue to work with Unreal Engine 4 and the Unreal_PhyProject. Let's begin by discussing Physics Bodies in Unreal Engine 4.

(For more resources related to this topic, see here.)

Physics Bodies – an overview

When it comes to creating Physics Bodies, there are multiple ways to go about it (most of which we have covered up to this point), so we will not go into much detail about the creation of Physics Bodies. We can have Static Meshes react as Physics Bodies by checking the Simulate Physics property of the asset when it is placed in our level.

We can also create Physics Bodies by creating Physics Assets and Skeletal Meshes, which automatically have the properties of physics by default. Lastly, Shape Components in blueprints, such as spheres, boxes, and capsules, will automatically gain the properties of a Physics Body if they are set for any sort of collision, overlap, or other physics simulation events. As always, remember to ensure that our asset has a collision applied to it before attempting to simulate physics or establish Physics Bodies, otherwise the simulation will not work.

When we work with the Physics properties of Static Meshes, or of any other assets that we will attempt to simulate physics with, we will see a handful of different parameters under the Details panel that we can change in order to produce the desired effect. Let's break down these properties:

Simulate Physics: This parameter allows you to enable or simulate physics with the asset you have selected. When this option is unchecked, the asset will remain static; once enabled, we can edit the Physics Body properties for additional customization.

Auto Weld: When this property is set to True, and when the asset is attached to a parent object, such as in a blueprint, the two bodies are merged into a single rigid body. Physics settings, such as collision profiles and body settings, are determined by the Root Component.

Start Awake: This parameter determines whether the selected asset will simulate physics at the start once it is spawned, or whether it will simulate physics at a later time. We can change this parameter with the level and actor blueprints.

Override Mass: When this property is checked and set to True, we can freely change the Mass of our asset in kilograms (kg). Otherwise, the Mass in Kg parameter will be set to a default value that is based on a computation between the physical material applied and the mass scale value.

Mass in Kg: This parameter determines the Mass of the selected asset in kilograms. This is important when you work with different-sized physics objects and want them to react to forces appropriately.

Locked Axis: This parameter allows you to lock the physical movement of our object along a specified axis. We have the choice to lock the default axes as specified in Project Settings. We also have the choice to lock physical movement along the individual X, Y, and Z axes. We can have none of the axes locked in translation or rotation, or we can customize each axis individually with the Custom option.

Enable Gravity: This parameter determines whether the object should have the force of gravity applied to it.
The force of gravity can be altered in the World Settings properties of the level or in the Physics section of the Engine properties in Project Settings.

Use Async Scene: This property allows you to enable the use of Asynchronous Physics for the specified object. By default, we cannot edit this property. In order to do so, we must navigate to Project Settings and then to the Physics section. Under the advanced Simulation tab, we will find the Enable Async Scene parameter. In an asynchronous scene, objects (such as Destructible actors) are simulated, while a synchronous scene is where classic physics tasks, such as a falling crate, take place.

Override Walkable Slope on Instance: This parameter determines whether or not we can customize an object's walkable slope. In general, we would use this parameter for our player character, but this property enables the customization of how steep a slope an object can walk on. This can be controlled specifically by the Walkable Slope Angle parameter and the Walkable Slope Behavior parameter.

Override Max Depenetration Velocity: This parameter allows you to customize the Max Depenetration Velocity of the selected physics body.

Center of Mass Offset: This property allows you to specify a vector offset for the selected object's center of mass from the calculated location. Being able to know, and even modify, the center of mass for our objects can be very useful when you work with sensitive physics simulations (such as flight).

Sleep Family: This parameter allows you to control the set of functions that the physics object uses when in sleep mode, or when the object is moving and slowly coming to a stop. The SF Sensitive option contains values with a lower sleep threshold. This is best used for objects that can move very slowly or for improved physics simulations (such as billiards). The SF Normal option contains values with a higher sleep threshold, and objects will come to a stop in a more abrupt manner once in motion, as compared to the SF Sensitive option.

Mass Scale: This parameter allows you to scale the mass of our object by multiplying by a scalar value. The lower the number, the lower the mass of the object will become, whereas the larger the number, the larger the mass of the object will become. This property can be used in conjunction with the Mass in Kg parameter to add more customization to the mass of the object.

Angular Damping: This property is a modifier of the drag force that is applied to the object in order to reduce angular movement, which means reducing the rotation of the object. We will go into more detail regarding Angular Damping.

Linear Damping: This property is used to simulate the different types of friction that can assist in the game world. This modifier adds a drag force to reduce linear movement, reducing the translation of the object. We will go into more detail regarding Linear Damping.

Max Angular Velocity: This parameter limits the Max Angular Velocity of the selected object in order to prevent the object from rotating at high rates. By increasing this value, the object will spin at very high speeds once it is impacted by an outside force that is strong enough to reach the Max Angular Velocity value. By decreasing this value, the object will not rotate as fast, and it will come to a halt much faster, depending on the angular damping applied.
Position Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its position; the solver iteration count is responsible for periodically checking the physics body's position. Increasing this value will be more CPU intensive, but better stabilized.

Velocity Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its velocity; the solver iteration count is responsible for periodically checking the physics body's velocity. Increasing this value will be more CPU intensive, but better stabilized.

Now that we have discussed all the different parameters available to Physics Bodies in Unreal Engine 4, feel free to play around with these values in order to obtain a stronger grasp of what each property controls and how it affects the physical properties of the object. As there are a handful of properties, we will not go into detailed examples of each, but the best way to learn more is to experiment with these values. However, we will work with how to create various examples of physics bodies in order to explore Physics Damping and Friction.

Physical Materials – an overview

Physical Materials are assets that are used to define the response of a physics body when you dynamically interact with the game world. When you first create a Physical Material, you are presented with a set of default values that are identical to the default Physical Material that is applied to all physics objects.

To create a Physical Material, let's navigate to Content Browser and select the Content folder so that it is highlighted. From here, we can right-click on the Content folder and select the New Folder option to create a new folder for our Physical Material; name this new folder PhysicalMaterials. Now, in the PhysicalMaterials folder, right-click on an empty area of Content Browser, navigate to the Physics section, and select Physical Material. Make sure to name this new asset PM_Test. Double-click on the new Physical Material asset to open Generic Asset Editor, and we should see the following values that we can edit in order to make our physics objects behave in certain ways:

Let's take a few minutes to break down each of these properties:

Friction: This parameter controls how easily objects can slide on this surface. The lower the friction value, the more slippery the surface; the higher the friction value, the less slippery the surface. For example, ice would have a Friction surface value of .05, whereas a Friction surface value of 1 would cause the object not to slip as much once moved.

Friction Combine Mode: This parameter controls how friction is computed for multiple materials. This property is important when it comes to interactions between multiple physical materials and how we want these calculations to be made. Our choices are Average, Minimum, Maximum, and Multiply.

Override Friction Combine Mode: This parameter allows you to set the Friction Combine Mode parameter here, instead of using the Friction Combine Mode found in the Project Settings | Engine | Physics section.

Restitution: This parameter controls how bouncy the surface is. The higher the value, the more bouncy the surface will become.

Density: This parameter is used in conjunction with the shape of the object to calculate its mass properties. The higher the number, the heavier the object becomes (in grams per cubic centimeter).

Raise Mass to Power: This parameter is used to adjust the way in which the mass increases as the object gets larger.
This is applied to the mass that is calculated based on a solid object. In actuality, larger objects do not tend to be solid and become more like shells (such as a vehicle). The values are clamped to 1 or less.

Destructible Damage Threshold Scale: This parameter is used to scale the damage threshold for the destructible objects that this physical material is applied to.

Surface Type: This parameter is used to describe what type of real-world surface we are trying to imitate for our project. We can edit these values by navigating to the Project Settings | Physics | Physical Surface section.

Tire Friction Scale: This parameter is used as the overall tire friction scalar for every type of tire and is multiplied by the parent values of the tire.

Tire Friction Scales: This parameter is almost identical to the Tire Friction Scale parameter, but it looks for a Tire Type data asset to associate with. Tire Types can be created through the use of Data Assets by right-clicking in Content Browser and navigating to Miscellaneous | Data Asset | Tire Type.

Now that we have briefly discussed how to create Physical Materials and what their properties are, let's take a look at how to apply Physical Materials to our physics bodies. In FirstPersonExampleMap, we can select any of the physics body cubes throughout the level, and in the Details panel under Collision, we will find the Phys Material Override parameter. It is here that we can apply our Physical Material to the cube and view how it reacts in our game world.

For the sake of an example, let's return to the Physical Material, PM_Test, that we created earlier, change the Friction property from 0.7 to 0.2, and save it. With this change in place, let's select a physics body cube in FirstPersonExampleMap and apply the Physical Material, PM_Test, to the Phys Material Override parameter of the object. Now, if we play the game, we will see that the cube we applied PM_Test to will start to slide more once shot by the player than it did when it had a Friction value of 0.7. We can also apply this Physical Material to the floor mesh in FirstPersonExampleMap to see how it affects the other physics bodies in our game world. From here, feel free to play around with the Physical Material parameters to see how we can affect the physics bodies in our game world.

Lastly, let's briefly discuss how to apply Physical Materials to normal Materials, Material Instances, and Skeletal Meshes. To apply a Physical Material to a normal material, we first need to either create or open an already created material in Content Browser. To create a material, just right-click on an empty area of Content Browser and select Material from the drop-down menu. Double-click on the Material to open Material Editor, and we will see the parameter for Phys Material under the Physical Material section of the Details panel in the bottom-left of Material Editor.

To apply a Physical Material to a Material Instance, we first need to create the Material Instance by navigating to Content Browser and right-clicking on an empty area to bring up the context drop-down menu. Under the Materials & Textures section, we will find an option for Material Instance. Double-click on this option to open Material Instance Editor. Under the Details panel in the top-left corner of this editor, we will find an option to apply Phys Material under the General section.

Lastly, to apply a Physical Material to a Skeletal Mesh, we need to either create or open an already created Physics Asset that contains a Skeletal Mesh.
In the First Person Shooter Project template, we can find TutorialTPP_PhysicsAsset under the Engine Content folder. If the Engine Content folder is not visible by default in Content Browser, we simply need to navigate to View Options in the bottom-right corner of Content Browser and check the Show Engine Content parameter. Under the Engine Content folder, we can navigate to the Tutorial folder and then to the TutorialAssets folder to find the TutorialTPP_PhysicsAsset asset. Double-click on this asset to open Physical Asset Tool. Now, we can click on any of the body parts found on the Skeletal Mesh to highlight it. Once it is highlighted, we can view the option for Simple Collision Physical Material in the Details panel under the Physics section. Here, we can apply any of our Physical Materials to this body part.

Summary

In this article, we discussed what Physics Bodies are and how they function in Unreal Engine 4. Moreover, we looked at the properties that are involved in Physics Bodies and how these properties can affect the behavior of these bodies in the game. Additionally, we briefly discussed Physical Materials, how to create them, and what their properties entail when it comes to affecting their behavior in the game. We then reviewed how to apply Physical Materials to static meshes, materials, material instances, and skeletal meshes. Now that we have a stronger understanding of how Physics Bodies work in the context of angular and linear velocities, momentum, and the application of damping, we can move on and explore in detail how Physical Materials work and how they are implemented.

Resources for Article:

Further resources on this subject: Creating a Brick Breaking Game [article], Working with Away3D Cameras [article], Replacing 2D Sprites with 3D Models [article]
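The same parameters we toggled in the Details panel can also be driven from C++. The following is a minimal sketch, not from the book: it assumes a hypothetical actor class, AMyPhysicsActor, with a static mesh component and a UPhysicalMaterial pointer exposed to the editor. The comments map each call back to the Details-panel properties discussed above; the class name, member name, and numeric values are illustrative assumptions.

#include "Components/StaticMeshComponent.h"
#include "PhysicalMaterials/PhysicalMaterial.h"

// Inside the hypothetical AMyPhysicsActor, whose header declares:
//   UPROPERTY(EditAnywhere) UPhysicalMaterial* SlipperyMaterial;
void AMyPhysicsActor::BeginPlay()
{
    Super::BeginPlay();

    UStaticMeshComponent* Mesh = FindComponentByClass<UStaticMeshComponent>();
    if (Mesh == nullptr)
    {
        return;
    }

    Mesh->SetSimulatePhysics(true);                    // Simulate Physics
    Mesh->SetEnableGravity(true);                      // Enable Gravity
    Mesh->SetMassOverrideInKg(NAME_None, 50.0f, true); // Override Mass / Mass in Kg
    Mesh->SetLinearDamping(0.1f);                      // Linear Damping
    Mesh->SetAngularDamping(0.5f);                     // Angular Damping

    if (SlipperyMaterial != nullptr)
    {
        // Equivalent of the Phys Material Override parameter under Collision.
        Mesh->SetPhysMaterialOverride(SlipperyMaterial);
    }
}

Setting these at BeginPlay is handy when the same mesh asset needs different physical behavior in different levels, without duplicating the asset.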

Systems and Logics

Packt
06 Apr 2017
19 min read
In this article by Priya Kuber, Rishi Gaurav Bhatnagar, and Vijay Varada, authors of the book Arduino for Kids, we will learn about the structure and various components of code:

How does code work?
What is code?
What is a system?
How to download, save, and access a file in the Arduino IDE

(For more resources related to this topic, see here.)

What is a System?

Imagine a system as a box in which a process is completed. Every system solves a larger problem, and can be broken down into smaller problems that can be solved and assembled. Sort of like a Lego set! Each small process has 'logic' as the backbone of the solution. Logic can be expressed as an algorithm and implemented in code. You can design a system to arrive at solutions to a problem. Another advantage of breaking down a system into small processes is that, in case your solution fails to work, you can easily spot the source of your problem by checking whether your individual processes work.

What is Code?

Code is a simple set of written instructions, given to a specific program in a computer, to perform a desired task. Code is written in a computer language. As we all know by now, a computer is an intelligent, electronic device capable of solving logical problems with a given set of instructions. Some examples of computer languages are Python, Ruby, C, C++, and so on. Find out some more examples of languages from the internet and write them down in your notebook.

What is an Algorithm?

A logical, step-by-step process, guided by the boundaries (or constraints) defined by a problem, followed to find a solution, is called an algorithm. In a better and more pictorial form, it can be represented as follows:

Logic + Control = Algorithm

What does that even mean? Look at the following example to understand the process.

Let's understand what an algorithm means with the help of an example. It's your friend's birthday and you have been invited for the party (Isn't this exciting already?). You decide to gift her something. Since it's a gift, let's wrap it. What would you do to wrap the gift? How would you do it?

Look at the size of the gift
Fetch the gift wrapping paper
Fetch the scissors
Fetch the tape

Then you would proceed to place the gift inside the wrapping paper. You will start folding the corners in a way that efficiently covers the gift. In the meanwhile, to make sure that your wrapping is tight, you would use some scotch tape. You keep working on the wrapper till the whole gift is covered (and mind you, neatly! You don't want mommy scolding you, right?). What did you just do? You used a logical, step-by-step process to solve a simple task given to you.

Again, coming back to the sentence: Logic + Control = Algorithm

'Logic' here is the set of instructions given to a computer to solve the problem. 'Control' is what makes sure that the computer understands all your boundaries.

Logic

Logic is the study of reasoning, and when we add control structures to it, it becomes an algorithm. Have you ever watered the plants using a water pipe, or washed a car with it? How do you think it works? The pipe guides the water from the water tap to the car. It makes sure an optimum amount of water reaches the end of the pipe. A pipe is a control structure for water in this case. We will understand more about control structures in the next topic.

How does a control structure work?

A very good example to understand how a control structure works is taken from wikiversity.
(https://en.wikiversity.org/wiki/Control_structures)

A precondition is the state of a variable before entering a control structure. In the gift wrapping example, the size of the gift determines the amount of gift wrapping paper you will use. Hence, it is a condition that you need to follow to successfully finish the task. In programming terms, such a condition is called a precondition. Similarly, a postcondition is the state of the variable after exiting the control structure. And a variable, in code, is an alphabetic character, or a set of alphabetic characters, representing or storing a number or a value. Some examples of variables are x, y, z, a, b, c, kitten, dog, and robot.

Let us analyze flow control by using traffic flow as a model. A vehicle is arriving at an intersection. Thus, the precondition is that the vehicle is in motion. Suppose the traffic light at the intersection is red. The control structure must determine the proper course of action to assign to the vehicle:

Precondition: The vehicle is in motion.
Control Structure: Is the traffic light green? If so, then the vehicle may stay in motion. Is the traffic light red? If so, then the vehicle must stop.
End of Control Structure.
Postcondition: The vehicle comes to a stop.

Thus, upon exiting the control structure, the vehicle is stopped.

If you wonder where you learnt to wrap the gift, you would know that you learnt it by observing other people doing a similar task through your eyes. Since our microcontroller does not have eyes, we need to teach it logical thinking using code. The series of logical steps that lead to a solution is called an algorithm, as we saw in the previous task. Hence, all the instructions we give to a microcontroller are in the form of an algorithm. A good algorithm solves the problem in a fast and efficient way. Blocks of small algorithms form larger algorithms.

But an algorithm is just code! What will happen when you add sensors to your code? A combination of electronics and code can be called a system. Logic is universal. Just like there can be multiple ways to fold the wrapping paper, there can be multiple ways to solve a problem too! A microcontroller takes instructions only in certain languages. The instructions then go to a compiler that translates the code that we have written for the machine.

What language does your Arduino understand?

For Arduino, we will use the language 'Processing'. Quoting from processing.org: Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Processing is an open source programming language and integrated development environment (IDE). Processing was originally built for designers, and it was extensively used in the electronic arts and visual design communities with the sole purpose of teaching the fundamentals of computer science in a visual context. It also served as the foundation of electronic sketchbooks.

From the previous example of gift wrapping, you noticed that before you brought in the paper and the other stationery needed, you had to see the size of the problem at hand (the gift).

What is a Library?

In computer language, the stationery needed to complete your task is called a "library". A library is a collection of reusable code that a programmer can 'call' instead of writing everything again.
Now imagine if you had to cut a tree, make paper, and then color the paper to produce the beautiful wrapping paper that you used, when I asked you to wrap the gift. How tiresome would it be? (If you are inventing a new type of paper, sure, go ahead, chop some wood!) So, before writing a program, you make sure that you have 'called' all the right libraries. Can you search the internet and make a note of a few Arduino libraries in your inventor's diary? Please remember that libraries are also made up of code! As your next activity, we will together learn more about how a library is created.

Activity: Understanding the Morse Code

In the times before two-way mobile communication, people used a one-way communication called the Morse code. The following image shows the experimental setup of a Morse code. Do not worry; we will not get into how you will perform it physically, but through this example, you will understand how your Arduino works. We will show you the bigger picture first and then dissect it systematically so that you understand what a code contains.

The Morse code is made up of two components: "short" and "long" signals. The signals could be in the form of a light pulse or sound. The following image shows what the Morse code looks like. A dot is a short signal and a dash is a long signal. Interesting, right? Try encrypting a message for your friend with these dots and dashes. For example, "Hello" would be:

The image below shows how the Arduino code for the Morse code looks (a sketch along these lines is reconstructed at the end of this section). The piece of code in dots and dashes is the message SOS which, I am sure you all know, is an urgent appeal for help. SOS in Morse goes: dot dot dot; dash dash dash; dot dot dot. Since this is a library being created using dots and dashes, it is important that we first define how the dot becomes a dot, and the dash becomes a dash.

The following sections will take smaller pieces of the main code and explain how they work. We will also introduce some interesting concepts using the same.

What is a function?

A function is a named piece of code, containing instructions that tell the values in the brackets how to act. Let us see which one is the function in our code. Can you try to guess from the following screenshot? No? Let me help you!

digitalWrite() in the above code is a function that, as you understand, 'writes' to the correct pin of the Arduino. delay is a function that tells the controller how frequently it should send the message. The higher the delay number, the slower the message will be (Imagine it as a way to slow down your friend who speaks too fast, helping you to understand him better!). Look up the internet to find out the maximum number that you can stuff into delay.

What is a constant?

A constant is an identifier with a pre-defined, non-changeable value. What is an identifier, you ask? An identifier is a name that labels the identity of a unique object or value. As you can see from the above piece of code, HIGH and LOW are constants.

Q: What is the opposite of constant?
Ans: Variable

The above food for thought brings us to the next section.

What is a variable?

A variable is a symbolic name for information. In plain English, a 'teacher' can have any name; hence, the 'teacher' could be a variable. A variable is used to store a piece of information temporarily. The value of a variable changes if any action is taken on it, for example: add, subtract, multiply, and so on. (Imagine how your teacher praises you when you complete your assignment on time and scolds you when you do not!)
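Since the original screenshots are not reproduced here, here is a minimal sketch along those lines that ties the three ideas together: dot() and dash() are functions, HIGH and LOW are constants, and ledPin is a variable. The pin number and delay timings are illustrative assumptions, not taken from the book's screenshot.

// Blink SOS on the built-in LED (most Arduino boards have one on pin 13).
int ledPin = 13;            // a variable holding the LED pin number

void dot() {
  digitalWrite(ledPin, HIGH);
  delay(250);               // a short signal
  digitalWrite(ledPin, LOW);
  delay(250);               // gap between signals
}

void dash() {
  digitalWrite(ledPin, HIGH);
  delay(750);               // a long signal, three times a dot
  digitalWrite(ledPin, LOW);
  delay(250);
}

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  dot(); dot(); dot();      // S
  dash(); dash(); dash();   // O
  dot(); dot(); dot();      // S
  delay(2000);              // pause before repeating the message
}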
What is a Datatype?

A Datatype describes the kind of value a variable can hold. Now look at the first block of the example program in the following image: int, as shown in the above screenshot, is a Datatype. The following table shows some examples of Datatypes:

int: describes an integer number; used to represent whole numbers. Examples: 1, 2, 13, 99, and so on.
float: used to represent numbers with a decimal point. Examples: 0.66, 1.73, and so on.
char: represents a single character; characters are written in single quotes. Examples: 'A', 65, and so on.
str: represents a string of characters. Example: "This is a good day!"

With the above definitions, can we recognize what pinMode is? Every time you have a doubt about a command, or you want to learn more about it, you can always look it up on the Arduino.cc website. You could do the same for digitalWrite() as well! From the pinMode page of Arduino.cc, we can define it as a command that configures the specified pin to behave either as an input or an output.

Let us now see something more interesting.

What is a control structure?

We have already seen the working of a control structure. In this section, we will be more specific to our code. Now I draw your attention towards this specific block from the main example above: Do you see void setup() followed by code in brackets? Similarly, void loop()? These form the basic structure of an Arduino program sketch. A structure holds the program together and helps the compiler to make sense of the commands entered. A compiler is a program that turns code understood by humans into code that is understood by machines. There are other loop and control structures, as you can see in the following screenshot. These control structures are explained next.

How do you use control structures?

Imagine you are teaching your friend to build a 6 cm high Lego wall. You ask her to place one layer of Lego bricks, and then you ask her to place another layer of Lego bricks on top of the bottom layer. You ask her to repeat the process until the wall is 6 cm high. This process of repeating instructions until a desired result is achieved is called a loop. A microcontroller is only as smart as you program it to be. Hence, we will move on to the different types of loops:

While loop: Like the name suggests, it repeats a statement (or a group of statements) while the given condition is true. The condition is tested before executing the loop body.
For loop: Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
Do while loop: Like a while statement, except that it tests the condition at the end of the loop body.
Nested loop: You can use one or more loops inside any other while, for, or do..while loop.

Now you were able to successfully tell your friend when to stop, but how do you control the microcontroller? Do not worry, the magic is on its way! You introduce control statements (a sketch showing a for loop and a break statement working together appears at the end of this section):

Break statements: A break statement breaks the flow of the loop or switch statement and transfers execution to the statement immediately following the loop or switch.
Continue statements: This statement causes the loop to skip the remainder of its body and immediately retest its condition before reiterating.
Goto statements: This transfers control to a labeled statement. It is not advised to use the goto statement in your programs.

Quiz time: What is an infinite loop? Look up the internet and note it in your inventor's notebook.
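Here is a small sketch (illustrative, not from the book) that puts a for loop and a break statement to work: it blinks the LED ten times, but stops the batch early if a button wired to pin 2 is pressed. The pin numbers and timings are assumptions for the example.

int ledPin = 13;    // built-in LED on most Arduino boards
int buttonPin = 2;  // hypothetical push button wired to pin 2

void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(buttonPin, INPUT);
}

void loop() {
  for (int i = 0; i < 10; i++) {        // repeat ten times
    if (digitalRead(buttonPin) == HIGH) {
      break;                            // leave the loop early
    }
    digitalWrite(ledPin, HIGH);
    delay(500);
    digitalWrite(ledPin, LOW);
    delay(500);
  }
  delay(3000);                          // rest before the next batch
}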
The Arduino IDE

The full form of IDE is Integrated Development Environment. The IDE uses a compiler to translate your code into a simple language that the computer understands. The compiler is the program that reads all your code and translates your instructions for your microcontroller. In the case of the Arduino IDE, it also verifies whether your code makes sense to it or not. The Arduino IDE is like a friend who helps you finish your homework and reviews it before you submit it; if there are any errors, it helps you identify and resolve them.

Why should you love the Arduino IDE?

I am sure that by now things look too technical: you have been introduced to so many new terms to learn and understand. The important thing here is not to forget to have fun while learning. Understanding how the IDE works is very useful when you are trying to modify or write your own code. If you make a mistake, it will tell you which line is giving you trouble. Isn't that cool? The Arduino IDE also comes with loads of cool examples that you can plug and play, and it has a long list of libraries for you to access. Now let us learn how to get the library onto your computer. Ask an adult to help you with this section if you are unable to succeed.

Make a note of the following answers in your inventor's notebook before downloading the IDE. Get your answers from Google or ask an adult:

What is an operating system?
What is the name of the operating system running on your computer?
What is the version of your current operating system?
Is your operating system 32-bit or 64-bit?
What is the name of the Arduino board that you have?

Now that we have done our homework, let us start playing!

How to download the IDE

Let us now go further and understand how to download something that is going to be our playground. I am sure you are eager to see the place where you will be working and building new and interesting stuff! For those of you wanting to learn and do everything by yourselves, open any browser and search for "Arduino IDE" followed by the name of your operating system with "32 bits" or "64 bits", as learnt in the previous section. Click to download the latest version and install! Otherwise, the step-by-step instructions are here:

Open your browser (Firefox, Chrome, Safari).
Go to www.arduino.cc, as shown in the following screenshot.
Click on the 'Download' section of the homepage, which is the third option from your left, as shown in the following screenshot.
From the options, locate the name of your operating system and click on the right version (32 bits or 64 bits).
Then click on 'Just Download' after the new page appears.
After clicking on the desired link and saving the files, you should be able to double-click on the Arduino icon and install the software.

If you have managed to install it successfully, you should see the following screens. If not, go back to step 1 and follow the procedure again. The next screenshot shows how the program looks while it is loading, and this is how the IDE looks when no code is written into it.

Your first program

Now that you have your IDE ready and open, it is time to start exploring. As promised, the Arduino IDE comes with many examples, libraries, and helper tools to get curious minds like you started soon. Let us now look at how you can access your first program via the Arduino IDE. A large number of examples can be accessed from the File > Examples option, as shown in the following screenshot. Just like we all have nicknames in school, a program written in Processing is called a 'sketch'.

Whenever you write any program for Arduino, it is important that you save your work. Programs written in Processing are saved with the extension .ino; the name .ino is derived from the last three letters of the word ArduINO. What other extensions are you aware of? (Hint: .doc, .ppt, and so on.) Make a note in your inventor's notebook, and ask yourself why so many extensions exist. An extension tells the computer which software should open the file, so that when the contents are displayed, they make sense.

As we learnt above, a program written in the Arduino IDE is called a 'sketch'. Your first sketch is named 'Blink'. What does it do? Well, it makes your Arduino blink! For now, we can concentrate on the code. Click on File | Examples | Basics | Blink; refer to the next image for this. When you load an example sketch, this is how it will look. In the image below, you will be able to identify the structure of the code and recall the meaning of functions and integers from the previous section.

We learnt that the Arduino IDE is a compiler too! After opening your first example, we can now learn how to hide information from the compiler. If you want to insert any extra information in plain English, you can do so by using comment symbols, as follows:

/* your text here */
OR
// your text here

Comments can also be inserted individually above lines of code, explaining what they do. It is good practice to write comments, as they are useful when you revisit your old code to modify it at a later date. Try editing the contents of the comment section by spelling out your name. The following screenshot shows how your edited code will look.
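For reference, here is roughly what the Blink example sketch looks like, with comments explaining each line. Your IDE's version may differ slightly; newer versions use the constant LED_BUILTIN, while older examples use pin 13 directly:

/*
  Blink
  Turns an LED on for one second, then off for one second, repeatedly.
*/

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // set the built-in LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on
  delay(1000);                      // wait for one second
  digitalWrite(LED_BUILTIN, LOW);   // turn the LED off
  delay(1000);                      // wait for one second
}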
Verifying your first sketch

Now that you have your first complete sketch in the IDE, how do you confirm that your microcontroller will understand it? You do this by clicking on the easy-to-locate Verify button, marked with a tick icon. The IDE will then inform you "Done compiling", as shown in the screenshot below. It means that you have successfully written and verified your first code. Congratulations, you did it!

Saving your first sketch

As we learnt above, it is very important to save your work. We will now learn the steps to make sure that your work does not get lost. Now that you have your first code inside your IDE, click on File > Save As. The following screenshot shows you how to save the sketch. Give an appropriate name to your project file, and save it just like you would save your files from Paintbrush or any other software that you use. The file will be saved in the .ino format.

Accessing your first sketch

Open the folder where you saved the sketch and double-click on the .ino file. The program will open in a new window of the IDE. The following screenshot has been taken on macOS; the file will look different on a Linux or Windows system.

Summary

Now we know about systems and how logic is used to solve problems. We can write and modify simple code. We also know the basics of the Arduino IDE and have studied how to verify, save, and access a program.

Further resources on this subject:
Getting Started with Arduino
Connecting Arduino to the Web
Functions with Arduino


How TFLearn makes building TensorFlow models easier

Savia Lobo
04 Jun 2018
7 min read
Today, we will introduce you to TFLearn, and will create layers and models that are directly useful in any model implementation with TensorFlow. TFLearn is a modular library in Python that is built on top of core TensorFlow.

[This article is an excerpt taken from the book Mastering TensorFlow 1.x, written by Armando Fandango. In this book, you will learn how to build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R.]

TIP: TFLearn is different from the TensorFlow Learn package, which is also known as TF Learn (with one space between TF and Learn). It is available at the following link, and the source code is available on GitHub.

TFLearn can be installed in Python 3 with the following command:

pip3 install tflearn

Note: To install TFLearn in other environments or from source, please refer to the following link: http://tflearn.org/installation/

The simple workflow in TFLearn is as follows:

1. Create an input layer first.
2. Pass the input object to create further layers.
3. Add the output layer.
4. Create the net using an estimator layer such as regression.
5. Create a model from the net created in the previous step.
6. Train the model with the model.fit() method.
7. Use the trained model to predict or evaluate.

Creating the TFLearn Layers

Let us learn how to create the layers of the neural network models in TFLearn:

1. Create an input layer first:

input_layer = tflearn.input_data(shape=[None, num_inputs])

2. Pass the input object to create further layers:

layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')

3. Add the output layer:

output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

4. Create the final net from the estimator layer such as regression:

net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

TFLearn provides several classes of layers, described in the following subsections.

TFLearn core layers

TFLearn offers the following layers in the tflearn.layers.core module:

input_data: This layer is used to specify the input layer for the neural network.
fully_connected: This layer is used to specify a layer where all the neurons are connected to all the neurons in the previous layer.
dropout: This layer is used to specify dropout regularization. The input elements are scaled by 1/keep_prob while keeping the expected sum unchanged.
custom_layer: This layer is used to specify a custom function to be applied to the input. This class wraps our custom function and presents the function as a layer.
reshape: This layer reshapes the input into the output of the specified shape.
flatten: This layer converts the input tensor to a 2D tensor.
activation: This layer applies the specified activation function to the input tensor.
single_unit: This layer applies the linear function to the inputs.
highway: This layer implements the fully connected highway function.
one_hot_encoding: This layer converts numeric labels to their binary vector one-hot encoded representations.
time_distributed: This layer applies the specified function to each time step of the input tensor.
multi_target_data: This layer creates and concatenates multiple placeholders, specifically used when the layers use targets from multiple sources.
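Putting the workflow together, here is a minimal end-to-end sketch assembled from the snippets above. It is an illustration rather than the book's full notebook: the layer sizes and training hyperparameters are assumptions, and it assumes MNIST data loaded via the tflearn.datasets.mnist helper:

import tflearn
import tflearn.datasets.mnist as mnist

# Load MNIST with one-hot encoded labels
X_train, Y_train, X_test, Y_test = mnist.load_data(one_hot=True)

# Build the network following the seven-step workflow
input_layer = tflearn.input_data(shape=[None, 784])
layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')
output = tflearn.fully_connected(layer2, 10, activation='softmax')
net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

# Create, train, and evaluate the model
model = tflearn.DNN(net)
model.fit(X_train, Y_train, n_epoch=10, batch_size=100,
          show_metric=True, run_id='dense_model')
score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])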
TFLearn convolutional layers

TFLearn offers the following layers in the tflearn.layers.conv module:

conv_1d: This layer applies 1D convolutions to the input data.
conv_2d: This layer applies 2D convolutions to the input data.
conv_3d: This layer applies 3D convolutions to the input data.
conv_2d_transpose: This layer applies the transpose of conv_2d to the input data.
conv_3d_transpose: This layer applies the transpose of conv_3d to the input data.
atrous_conv_2d: This layer computes a 2D atrous convolution.
grouped_conv_2d: This layer computes a depth-wise 2D convolution.
max_pool_1d: This layer computes 1D max pooling.
max_pool_2d: This layer computes 2D max pooling.
avg_pool_1d: This layer computes 1D average pooling.
avg_pool_2d: This layer computes 2D average pooling.
upsample_2d: This layer applies the row- and column-wise 2D repeat operation.
upscore_layer: This layer implements the upscore operation as specified in http://arxiv.org/abs/1411.4038.
global_max_pool: This layer implements the global max pooling operation.
global_avg_pool: This layer implements the global average pooling operation.
residual_block: This layer implements the residual block used to create deep residual networks.
residual_bottleneck: This layer implements the residual bottleneck block for deep residual networks.
resnext_block: This layer implements the ResNeXt block.

TFLearn recurrent layers

TFLearn offers the following layers in the tflearn.layers.recurrent module:

simple_rnn: This layer implements the simple recurrent neural network model.
bidirectional_rnn: This layer implements the bi-directional RNN model.
lstm: This layer implements the LSTM model.
gru: This layer implements the GRU model.

TFLearn normalization layers

TFLearn offers the following layers in the tflearn.layers.normalization module:

batch_normalization: This layer normalizes the output of activations of previous layers for each batch.
local_response_normalization: This layer implements the LR normalization.
l2_normalization: This layer applies the L2 normalization to the input tensors.

TFLearn embedding layers

TFLearn offers only one layer in the tflearn.layers.embedding_ops module:

embedding: This layer implements the embedding function for a sequence of integer IDs or floats.

TFLearn merge layers

TFLearn offers the following layers in the tflearn.layers.merge_ops module:

merge_outputs: This layer merges a list of tensors into a single tensor, generally used to merge output tensors of the same shape.
merge: This layer merges a list of tensors into a single tensor; you can specify the axis along which the merge needs to be done.

TFLearn estimator layers

TFLearn offers only one layer in the tflearn.layers.estimator module:

regression: This layer implements linear or logistic regression.

While creating the regression layer, you can specify the optimizer and the loss and metric functions. TFLearn offers the following optimizer functions as classes in the tflearn.optimizers module:

SGD
RMSprop
Adam
Momentum
AdaGrad
Ftrl
AdaDelta
ProximalAdaGrad
Nesterov

Note: You can create custom optimizers by extending the tflearn.optimizers.Optimizer base class.

TFLearn offers the following metric functions as classes or ops in the tflearn.metrics module:

Accuracy or accuracy_op
Top_k or top_k_op
R2 or r2_op
WeightedR2 or weighted_r2_op
binary_accuracy_op

Note: You can create custom metrics by extending the tflearn.metrics.Metric base class.
TFLearn provides the following loss functions, known as objectives, in the tflearn.objectives module:

softmax_categorical_crossentropy
categorical_crossentropy
binary_crossentropy
weighted_crossentropy
mean_square
hinge_loss
roc_auc_score
weak_cross_entropy_2d

While specifying the input, hidden, and output layers, you can specify the activation functions to be applied to the output. TFLearn provides the following activation functions in the tflearn.activations module:

linear
tanh
sigmoid
softmax
softplus
softsign
relu
relu6
leaky_relu
prelu
elu
crelu
selu

Creating the TFLearn Model

Create the model from the net created in the previous step (step 4 in the Creating the TFLearn Layers section):

model = tflearn.DNN(net)

Types of TFLearn models

TFLearn offers two different classes of models:

DNN (Deep Neural Network) model: This class allows you to create a multilayer perceptron from the network that you have created from the layers.
SequenceGenerator model: This class allows you to create a deep neural network that can generate sequences.

Training the TFLearn Model

After creating the model, train it with the model.fit() method:

model.fit(X_train, Y_train,
          n_epoch=n_epochs,
          batch_size=batch_size,
          show_metric=True,
          run_id='dense_model')

Using the TFLearn Model

Use the trained model to predict or evaluate:

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

The complete code for the TFLearn MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TFLearn MNIST example is as follows:

Training Step: 5499  | total loss: 0.42119 | time: 1.817s
| Adam | epoch: 010 | loss: 0.42119 - acc: 0.8860 -- iter: 54900/55000
Training Step: 5500  | total loss: 0.40881 | time: 1.820s
| Adam | epoch: 010 | loss: 0.40881 - acc: 0.8854 -- iter: 55000/55000
--
Test accuracy: 0.9029

Note: You can get more information about TFLearn from the following link: http://tflearn.org/.

To summarize, we got to know about TFLearn and the different TFLearn layers and models. If you found this post useful, do check out the book Mastering TensorFlow 1.x to explore advanced features of TensorFlow 1.x and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.

Read next:
TensorFlow.js 0.11.1 releases!
How to Build TensorFlow Models for Mobile and Embedded devices
Distributed TensorFlow: Working with multiple GPUs and servers


What is HCL (Hashicorp Configuration Language), how does it relate to Terraform, and why is it growing in popularity?

Savia Lobo
18 Jul 2019
6 min read
HCL (HashiCorp Configuration Language) is rapidly growing in popularity. Last year's Octoverse report by GitHub showed it to be the second fastest growing language on the platform, more than doubling its contributors since 2017 (Kotlin was top, with GitHub contributors growing 2.6 times). However, despite this growth, it hasn't had the level of attention that other programming languages have had. One of the reasons for this is that HCL is a configuration language. It is also part of a broader ecosystem of tools built by cloud automation company HashiCorp that largely centers around Terraform.

What is Terraform?

Terraform is an infrastructure-as-code tool that makes it easier to define and manage your cloud infrastructure. HCL is simply the syntax that allows you to better leverage its capabilities. It gives you a significant degree of control over your infrastructure in a way that's more 'human-readable' than other configuration languages such as YAML and JSON.

HCL and Terraform are both important parts of the DevOps world. They are not only built for a world that has transitioned to infrastructure-as-code, but also one in which this transition demands more from engineers. By making HCL a more readable, higher-level configuration language, the language can better facilitate collaboration and transparency between cross-functional engineering teams.

With all of this in mind, HCL's growing popularity can be taken to indicate broader shifts in the software development world. HashiCorp clearly understands them very well and is eager to help drive them forward. But before we go any further, let's dive a bit deeper into why HCL was created, how it works, and how it sits within the Terraform ecosystem.

Why did HashiCorp create HCL?

The development of HCL was born out of HashiCorp's experience of trying multiple different options for configuration languages. "What we learned," the team explains on GitHub, "is that some people wanted human-friendly configuration languages and some people wanted machine-friendly languages." The HashiCorp team needed a compromise: something that could offer a degree of flexibility and accessibility. As the team outlines their thinking, it's clear to see what the drivers behind HCL actually are. JSON, they say, "is fairly verbose and... doesn't support comments", while YAML is viewed as too complex for beginners to properly parse and use effectively. Traditional programming languages also pose problems: they're too sophisticated and demand too much background knowledge from users to make them truly useful configuration languages.

Put together, this underlines the fact that with HCL, HashiCorp wanted to build something that is accessible to engineers of different abilities and skill sets, while also being clear enough to enable appropriate levels of transparency between teams. It is "designed to be written and modified by humans."

Listen: Uber engineer Yuri Shkuro talks distributed tracing and observability on the Packt Podcast

How does the HashiCorp Configuration Language work?

HCL is not a replacement for the likes of YAML or JSON. The team's aim "is not to alienate other configuration languages. It is," they say, "instead to provide HCL as a specialized language for our tools, and JSON as the interoperability layer." Effectively, it builds on some of the things you can get with JSON, but reimagines them in the context of infrastructure and application configuration.
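To give a flavor of the syntax, here is a small illustrative Terraform resource block written in HCL (the resource values here are placeholders, not taken from any real configuration):

# Provision a single EC2 instance (illustrative values only)
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "hcl-example"
  }
}

Note how the block reads almost like structured prose, supports comments, and avoids the nesting noise of JSON.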
According to the documentation, we should see HCL as a "structured configuration language rather than a data structure serialization language." HCL is "always decoded using an application-defined schema," which gives you a level of flexibility. In practice, this means the application is always at the center of the language; you don't have to work around it. If you want to learn more about the HCL syntax and how it works at a much deeper level, the documentation is a good place to start, as is this page on GitHub.

Read next: Why do IT teams need to transition from DevOps to DevSecOps?

The advantages of HCL and Terraform

You can't really talk about the advantages of HCL without also considering the advantages of Terraform. Indeed, while HCL might well be a well-designed configuration language that's accessible and caters to a wide range of users and use cases, it's only in the context of Terraform that its growth really makes sense.

Why is Terraform so popular?

To understand the popularity of Terraform, you need to place it in the context of current trends and today's software marketplace for infrastructure configuration. Terraform is widely seen as a competitor to configuration management tools like Chef, Ansible, and Puppet. However, Terraform isn't exactly a configuration management tool; it's more accurate to call it a provisioning tool (configuration management tools configure software on servers that already exist, while provisioning tools set up new ones). This is important because, thanks to Docker and Kubernetes, the need for configuration has radically changed; you might even say that it's no longer there. If a Docker container is effectively self-sufficient, with all the configuration files it needs to run, then the need for 'traditional' configuration management begins to drop.

Of course, this isn't to say that one tool is intrinsically better than another; there are use cases for all of these types of tools. But the fact remains that Terraform suits use cases that are starting to grow. Part of this is due to the rise of cloud agnosticism. As multi-cloud and hybrid cloud architectures become prevalent, DevOps teams need tools that let them navigate and manage resources across different platforms. Although all the major public cloud vendors have native tools for managing resources, these can sometimes be restrictive. The templates they offer can also be difficult to reuse. Take Azure ARM templates, for example: they can only be used to create Azure resources. In contrast, Terraform allows you to provision and manage resources across different cloud platforms.

Conclusion: Terraform and HCL can make DevOps more accessible

It's not hard to see why ThoughtWorks sees Terraform as such an important emerging technology. (The latest edition of the ThoughtWorks Radar claims that now is the time to adopt it.) But it's also important to understand that HCL is an important element in the success of Terraform. It makes infrastructure-as-code not only something that's accessible to developers who might previously have only dipped their toes in operations, but also something that can be more collaborative, transparent, and observable for team members. The DevOps picture will undoubtedly evolve over the next few years, but it would appear that HashiCorp is going to have a big part to play in it.

Top reasons why businesses should adopt enterprise collaboration tools

Guest Contributor
05 Mar 2019
8 min read
Following the trends of the modern digital workplace, organizations apply automation even to domains that are intrinsically human-centric. Collaboration is one of them. And while organizations have already gained broad experience in digitizing business processes and foreseeing their potential pitfalls, the situation with collaboration is different. Automating collaboration processes can bring a significant number of unexpected challenges, even to companies that have tested the waters.

State of Collaboration 2018 reveals a curious fact: even though organizations can be highly involved in collaborative initiatives, employees still report that both they and their companies are poorly prepared to collaborate. Almost a quarter of respondents (24%) affirm that they lack relevant enterprise collaboration tools, while 27% say that their organizations undervalue collaboration and don't offer any incentives to support it. Two reasons can explain these stats:

The collaboration process can hardly be standardized and split into precise workflows. The number of collaboration scenarios is enormous, and it's impossible to fit them all into a single software solution. It's also pretty hard to manage collaboration, assess its effectiveness, or understand its bottlenecks.

Unlike business process automation systems, which play a critical role in an organization and support core production or business activities, enterprise collaboration tools are mostly seen as supplementary solutions, so they are the last to be implemented. Moreover, as organizations often don't spend much effort on adapting collaboration tools to their specifics, the end solutions are frequently subject to poor adoption.

At the same time, the IT market offers numerous enterprise collaboration tools: Slack, Trello, Stride, Confluence, Google Suite, Workplace by Facebook, SharePoint, and Office 365, to mention a few, compete to win enterprises' loyalty. But how do you choose the right enterprise collaboration tools and make them effective? And how do you make employees actually use the tools you implement? To answer these questions and understand how to succeed in their collaboration-focused projects, organizations have to examine both the technology-related and employee-related challenges they may face.

Challenges rooted in technologies

From the deployment model to customization and integration flexibility, companies should consider a whole array of aspects before deciding which solution to implement.

Selecting a technologically suitable solution

Finding a proper solution is a long process that requires companies to make several important decisions:

Cloud or on-premises? By choosing the deployment type, organizations define their future infrastructure to run the solution, the required management effort, the data location, and the amount of customization available. Cloud solutions can help enterprises save both technical and human resources. However, companies often mistrust them because of multiple security concerns. On-premises solutions can be attractive from the customization, performance, and security points of view, but they are resource-demanding and expensive due to high licensing costs.

Ready-to-use or custom? Today, many vendors offer ready-made enterprise collaboration tools, particularly in the field of enterprise intranets. This option is attractive for organizations because they can save on customizing a solution from scratch.
However, with ready-made products, organizations face a bigger risk of being locked into a vendor's rigid policies (subscription/ownership price, support rates, functional capabilities, and so on). If companies choose custom enterprise collaboration software, they have a wider choice of IT service providers to cooperate with and can adjust the solution to their needs.

One tool or several integrated tools? Some organizations prefer using a couple of apps that cover different collaboration needs (for example, document management, video conferencing, and instant messaging). At the same time, companies can also go for a centralized solution, such as SharePoint or Office 365, that can support all collaboration types and let users create a centralized enterprise collaboration environment.

Exploring integration options

Collaboration isn't an isolated process; it is tightly related to the business and organizational activities that employees carry out. That's why integration capabilities are among the most critical aspects companies should check before investing in their collaboration stack. Connecting an enterprise collaboration tool to ERP, CRM, HRM, or ITSM solutions will not only contribute to business process consistency but also reduce the risk of collaboration gaps and communication inconsistencies.

Planning ongoing investment

Like any other business solution, an enterprise collaboration tool requires financial investment to implement, customize (even ready-made solutions require tuning), and support. The initial budget will strongly depend on the deployment type, the estimated number of users, and the customizations needed. While planning their yearly collaboration investment, companies should remember that their budgets should cover not only the activities necessary to ensure the solution's technical health but also a user adoption program.

Eliminating duplicate functionality

Consider the following scenario: a company implements a collaboration tool that includes project management functionality while also running a legacy project management system. The same situation can happen with time tracking, document management, knowledge management, and other stand-alone solutions. In this case, it is reasonable to consider switching to the new suite completely and retiring the legacy one. For example, by choosing SharePoint Server or Online, organizations can unite various functions within a single solution. To ensure a smooth transition to the new environment, SharePoint developers can migrate all the data from legacy systems, making it part of the new solution.

Choosing a security vector

As mentioned before, the solution's deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds enterprises' collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. SharePoint and Office 365 trends 2018 show that security represents the major worry for organizations considering moving their on-premises deployments to cloud environments. What's even more surprising is that while software providers like Microsoft are continually improving their security measures, the degree of concern keeps growing. The report mentioned above reveals that 50% of businesses were concerned about security in 2018, compared to 36% in 2017 and 32% in 2016.
Human-related challenges

The technology challenges are multiple, but they can all be solved quite quickly, especially if a company partners with a professional IT service provider that backs them up at the technical level. At the same time, companies should be ready to face employee-related barriers that may ruin their collaboration effort.

Changing employees' typical style of collaboration

Don't expect that your employees will automatically welcome the new collaboration solution: it is about to change their typical collaboration style, which may be difficult for many. Some employees won't share their knowledge openly, while others will find it difficult to switch from one-to-one discussions to digitized team meetings. In this context, change management should work at two levels: a technological one and a mental one. Companies should not just explain to employees how to use the new solution effectively, but also show each team how to adapt the collaboration system to the needs of each team member without damaging the usual collaboration flow.

Finding the right tools for collaborators and non-collaborators

Every team consists of different personalities. Some people are open to collaboration; others can be quite hesitant. The task is to ensure the productive co-working of these two very different types of employees, and everyone in between. Teams shouldn't expect instant collaboration consistency or general satisfaction. These are only possible to achieve if the entire team works together to create an optimal collaboration area for each individual.

Launching digital collaboration within large distributed teams

When organizing collaboration within a small or medium-sized team, difficulties can be quite simple to avoid, as the collaboration flow is moderate. But when it comes to collaboration in big teams, the risk of failure increases dramatically. Organizing effective communication between remote employees, connecting distributed offices, offering relevant collaboration areas to the entire team and its subteams, and enabling cross-device consistency of collaboration are just a few of the steps to undertake for effective teamwork.

Preparing strategies to overcome adoption difficulties

The biggest human-related challenge is the poor adoption of an enterprise collaboration system. It can be hard for employees to get used to the new solution and accept the new communication medium, its UI, and its logic. Adoption issues are critical to address because they may engender more severe consequences than the tech-related ones. Say there is a functional defect in a solution: a company can fix it within a few days. However, if there are adoption issues, all the effort an organization puts into polishing the technology can be blown away because employees don't use the solution at all. Ongoing training and communication between the collaboration manager and particular teams is a must to keep employees satisfied with the solution they use.

Is there more pain than gain?

On recognizing all these challenges, companies might feel that there are too many barriers to overcome to get a decent collaboration solution. So maybe it's reasonable to stay away from the collaboration race? Not really. If you take a look at Internet Trends 2018, you will see that there are multiple improvements that companies gain as they adopt enterprise collaboration tools.
Typical advantages include reduced meeting time, quicker onboarding, less time required for support, more effective document management, and a substantial rise in teams' productivity. If your company wants to get all these advantages, be ready to face the possible collaboration challenges to earn a great reward.

Author Bio

Sandra Lupanova is SharePoint and Office 365 Evangelist at Itransition, a software development and IT consulting company headquartered in Denver. Sandra focuses on SharePoint and Office 365 capabilities and the challenges that companies face while adopting these platforms, and shares practical tips on how to improve SharePoint and Office 365 deployments through her articles.


“The challenge in Deep Learning is to sustain the current pace of innovation”, explains Ivan Vasilev, machine learning engineer

Sugandha Lahoti
13 Dec 2019
8 min read
If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders: the usage, adoption, and experimentation of deep learning have increased exponentially. Deep learning has made unprecedented progress, especially in the areas of computer vision, speech, and natural language processing and understanding. GANs, variational autoencoders, and deep reinforcement learning are also producing impressive AI results.

To learn more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python, in which he teaches advanced deep learning topics such as the attention mechanism, meta-learning, graph neural networks, memory-augmented neural networks, and more, using the Python ecosystem. In this interview, he shares his experiences of working on the book, compares TensorFlow and PyTorch, and talks about computer vision, NLP, and GANs.

On why he chose Computer Vision and NLP as the two major focus areas of his book

Computer vision and natural language processing are two popular areas with a large amount of ongoing development. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP," he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

"The other reason for focusing on computer vision," he says, "is the natural (or human-driven, if you wish) progress of deep learning. One of the first modern breakthroughs was in 2012, when a solution based on a convolutional network won that year's ImageNet competition with a large margin over any previous algorithm. Thanks in part to this impressive result, interest in the field was renewed and brought many other advances, including solutions to complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism."

On the ongoing battle between TensorFlow and PyTorch

Two popular machine learning frameworks are currently at par: TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch against each other as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, which is why he has included them both in the book. He explains: "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution, and TensorFlow 2.0 now has much better support for eager execution, to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API.
On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices, and quantization (computation operations with reduced precision for increased efficiency)."

Using machine learning in the stock trading process can make markets more efficient

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. "One reason," Ivan states, "is that the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices." He adds: "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself can have huge volume."

"However," he counters, "using ML in finance could have benefits besides the obvious (getting rich by trading stocks). The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

GANs can be used for nefarious purposes, but that doesn't warrant discarding them

Ivan has also given special emphasis to generative adversarial networks in his book. Although extremely useful, GANs have recently been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and wondered whether it is even possible to prevent and counter their misuse and abuse.

Ivan acknowledges that GANs may have unintended outcomes, but argues that this shouldn't be the sole reason to discard them. He says: "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with a similar purpose) because of this, if only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way, we can raise awareness of machine learning and spark an honest debate about its role in our society."

Machine learning can have both intentional and unintentional harmful effects

Awareness and ethics go hand in hand. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says: "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage from potentially better algorithms.
Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or the unconscious bias of researchers. Although the bias might not be intentional, we still have a responsibility to put in a conscious effort to eliminate it."

Challenges in the machine learning ecosystem

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

"So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images) compared to structured data (like graphs)."

"Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to give an estimate for such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives."

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python, in which you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch, and apply deep neural networks to state-of-the-art domains such as computer vision, NLP, and GANs.

Author Bio

Ivan Vasilev started working on the first open source Java deep learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017, he has focused on financial machine learning. He is working on a Python-based platform that provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.

Read next:
Kaggle's Rachel Tatman on what to do when applying deep learning is overkill
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
François Chollet, creator of Keras, on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning, and more