
How-To Tutorials


ASP.NET Core High Performance

Packt
07 Mar 2018
20 min read
In this article by James Singleton, the author of the book ASP.NET Core High Performance, we will see that many things have changed for version 2 of the ASP.NET Core framework and there have also been a lot of improvements to the various supporting technologies. Now is a great time to give it a try, as the code has stabilized and the pace of change has settled down a bit. There were significant differences between the original release candidate and version 1 of ASP.NET Core and yet more alterations between version 1 and version 2. Some of these changes have been controversial, particularly around tooling but the scope of .NET Core has grown massively and ultimately this is a good thing. One of the highest profile differences between 1 and 2 is the change (and some would say regression) away from the new JavaScript Object Notation (JSON) based project format and back towards the Extensible Markup Language (XML) based .csproj format. However, it is a simplified and stripped down version compared to the format used in the original .NET Framework. There has been a move towards standardization between the different .NET frameworks and .NET Core 2 has a much larger API surface as a result. The interface specification known as .NET Standard 2 covers the intersection between .NET Core, the .NET Framework, and Xamarin. There is also an effort to standardize Extensible Application Markup Language (XAML) into the XAML Standard that will work across Universal Windows Platform (UWP) and Xamarin.Forms apps. C# and .NET can be used on a huge amount of platforms and in a large number of use cases, from server side web applications to mobile apps and even games using engines like Unity 3D. In this article we will go over the changes between version 1 and version 2 of the new Core releases. We will also look at some new features of the C# language. There are many useful additions and a plethora of performance improvement too. In this article we will cover: .NET Core 2 scope increases ASP.NET Core 2 additions Performance improvements .NET Standard 2 New C# 6 features New C# 7 features JavaScript considerations New in Core 2 There are two different products in the Core family. The first is .NET Core, which is the low level framework providing basic libraries. This can be used to write console applications and it is also the foundation for higher level application frameworks. The second is ASP.NET Core, which is a framework for building web applications that run on a server and service clients (usually web browsers). This was originally the only workload for .NET Core until it grew in scope to handle a more diverse range of scenarios. We'll cover the differences in the new versions separately for each of these frameworks. The changes in .NET Core will also apply to ASP.NET Core, unless you are running it on top of the .NET Framework version 4. New in .NET Core 2 The main focus of .NET Core 2 is the huge increase in scope. There are more than double the number of APIs included and it supports .NET Standard 2 (covered later in this article). You can also refer .NET Framework assemblies with no recompile required. This should just work as long as the assemblies only use APIs that have been implemented in .NET Core. This means that more NuGet packages will work with .NET Core. Finding if your favorite library was supported or not, was always a challenge with the previous version. The author set up a repository listing package compatibility to help with this. 
You can find the ASP.NET Core Library and Framework Support (ANCLAFS) list at github.com/jpsingleton/ANCLAFS and also via anclafs.com. If you want to make a change then please send a pull request. Hopefully in the future all packages will support Core and this list will no longer be required. There is now support in Core for Visual Basic and for more Linux distributions. You can also perform live unit testing with Visual Studio 2017, much like the old NCrunch extension. Performance improvements Some of the more interesting changes for 2 are the performance improvements over the original .NET Framework. There have been tweaks to the implementations of many of the framework data structures. Some of the classes and methods that have seen speed improvements or memory reduction include: List<T> Queue<T> SortedSet<T> ConcurrentQueue<T> Lazy<T> Enumerable.Concat() Enumerable.OrderBy() Enumerable.ToList() Enumerable.ToArray() DeflateStream SHA256 BigInteger BinaryFormatter Regex WebUtility.UrlDecode() Encoding.UTF8.GetBytes() Enum.Parse() DateTime.ToString() String.IndexOf() String.StartsWith() FileStream Socket NetworkStream SslStream ThreadPool SpinLock We won't go into specific benchmarks here because benchmarking is hard and the improvements you see will clearly depend on your usage. The thing to take away is that lots of work has been done to increase the performance of .NET Core, both over the previous version 1 and .NET Framework 4.7. Many of these changes have come from the community, which shows one of the benefits of open source development. Some of these advances will probably work their way back into a future version of the regular .NET Framework too. There have also been improvements to the RyuJIT compiler for .NET Core 2. As just one example, finally blocks are now almost as efficient as not using exception handing at all, in the normal situation where no exceptions are thrown. You now have no excuses not to liberally use try and using blocks, for example by having checked arithmetic to avoid integer overflows. New in ASP.NET Core 2 ASP.NET Core 2 takes advantage of all the improvements to .NET Core 2, if that is what you choose to run it on. It will also run on .NET Framework 4.7 but it's best to run it on .NET Core, if you can. With the increase in scope and support of .NET Core 2 this should be less of a problem than it was previously. It includes a new meta package so you only need to reference one NuGet item to get all the things! However, it is still composed of individual packages if you want to pick and choose. They haven't reverted back to the bad old days of one huge System.Web assembly. A new package trimming feature will ensure that if you don't use a package then its binaries won't be included in your deployment, even if you use the meta package to reference it. There is also a sensible default for setting up a web host configuration. You don't need to add logging, Kestrel, and IIS individually anymore. Logging has also got simpler and, as it is built in, you have no excuses not to use it from the start. A new feature is support for controller-less Razor Pages. These are exactly what they sound like and allow you to write pages with just a Razor template. This is similar to the Web Pages product, not to be confused with Web Forms. There is talk of Web Forms making a comeback, but if so then hopefully the abstraction will be thought out more and it won't carry so much state around with it. There is a new authentication model that makes better use of Dependency Injection. 
ASP.NET Core Identity allows you to use OpenID, OAuth 2 and get access tokens for your APIs. A nice time saver is you no longer need to emit anti-forgery tokens in forms (to prevent Cross Site Request Forgery) with attributes to validate them on post methods. This is all done automatically for you, which should prevent you forgetting to do this and leaving a security vulnerability. Performance improvements There have been additional increases to performance in ASP.NET Core that are not related to the improvements in .NET Core, which also help. Startup time has been reduced by shipping binaries that have already been through the Just In Time compilation process. Although not a new feature in ASP.NET Core 2, output caching is now available. In 1.0, only response caching was included, which simply set the correct HTTP headers. In 1.1, an in-memory cache was added and today you can use local memory or a distributed cache kept in SQL Server or Redis. Standards Standards are important, that's why we have so many of them. The latest version of the .NET Standard is 2 and .NET Core 2 implements this. A good way to think about .NET Standard is as an interface that a class would implement. The interface defines an abstract API but the concrete implementation of that API is left up to the classes that inherit from it. Another way to think about it is like the HTML5 standard that is supported by different web browsers. Version 2 of the .NET Standard was defined by looking at the intersection of the .NET Framework and Mono. This standard was then implemented by .NET Core 2, which is why is contains so many more APIs than version 1. Version 4.6.1 of the .NET Framework also implements .NET Standard 2 and there is work to support the latest versions of the .NET Framework, UWP and Xamarin (including Xamarin.Forms). There is also the new XAML Standard that aims to find the common ground between Xamarin.Forms and UWP. Hopefully it will include Windows Presentation Foundation (WPF) in the future. If you create libraries and packages that use these standards then they will work on all the platforms that support them. As a developer simply consuming libraries, you don't need to worry about these standards. It just means that you are more likely to be able to use the packages that you want, on the platforms you are working with. New C# features It not just the frameworks and libraries that have been worked on. The underlying language has also had some nice new features added. We will focus on C# here as it is the most popular language for the Common Language Runtime (CLR). Other options include Visual Basic and the functional programming language F#. C# is a great language to work with, especially when compared to a language like JavaScript. Although JavaScript is great for many reasons (such as its ubiquity and the number of frameworks available), the elegance and design of the language is not one of them. Many of these new features are just syntactic sugar, which means they don't add any new functionality. They simply provide a more succinct and easier to read way of writing code that does the same thing. C# 6 Although the latest version of C# is 7, there are some very handy features in C# 6 that often go underused. Also, some of the new additions in 7 are improvements on features added in 6 and would not make much sense without context. We will quickly cover a few features of C# 6 here, in case you are unaware of how useful they can be. 
String interpolation

String interpolation is a more elegant and easier to work with version of the familiar string format method. Instead of supplying the arguments to embed in the string placeholders separately, you can now embed them directly in the string. This is far more readable and less error prone. Let us demonstrate with an example. Consider the following code that embeds an exception in a string.

catch (Exception e)
{
    Console.WriteLine("Oh dear, oh dear! {0}", e);
}

This embeds the first (and in this case only) object in the string at the position marked by the zero. It may seem simple, but this quickly gets complex if you have many objects and want to add another at the start. You then have to correctly renumber all the placeholders. Instead, you can now prefix the string with a dollar character and embed the object directly in it. This is shown in the following code, which behaves the same as the previous example.

catch (Exception e)
{
    Console.WriteLine($"Oh dear, oh dear! {e}");
}

The ToString() method on an exception outputs all the required information, including name, message, stack trace, and any inner exceptions. There is no need to deconstruct it manually; you may even miss things if you do. You can also use the same format strings as you are used to. Consider the following code that formats a date in a custom manner.

Console.WriteLine($"Starting at: {DateTimeOffset.UtcNow:yyyy/MM/dd HH:mm:ss}");

When this feature was being built, the syntax was slightly different, so be wary of any old blog posts or documentation that may not be correct.

Null conditional

The null conditional operator is a way of simplifying null checks. You can now inline a check for null rather than using an if statement or ternary operator. This makes it easier to use in more places and will hopefully help you to avoid the dreaded null reference exception. You can avoid doing a manual null check like in the following code.

int? length = (null == bytes) ? null : (int?)bytes.Length;

This can now be simplified to the following statement by adding a question mark.

int? length = bytes?.Length;

Exception filters

You can filter exceptions more easily with the when keyword. You no longer need to catch every type of exception that you are interested in and then filter manually inside the catch block. This is a feature that was already present in VB and F#, so it's nice that C# has finally caught up. There are some small benefits to this approach. For example, if your filter is not matched, then the exception can still be caught by other catch blocks in the same try statement. You also don't need to remember to re-throw the exception to avoid it being swallowed. This helps with debugging, as Visual Studio will no longer break as it would when you re-throw. For example, you could check to see if there is a message in the exception and handle it differently, as shown here.

catch (Exception e) when (e?.Message?.Length > 0)

When this feature was in development, a different keyword (if) was used, so be careful of any old information online. One thing to keep in mind is that relying on a particular exception message is fragile. If your application is localized then the message may be in a different language than what you expect. This holds true outside of exception filtering too.

Asynchronous availability

Another small improvement is that you can use the await keyword inside catch and finally blocks. This was not initially allowed when this incredibly useful feature was added in C# 5. There is not a lot more to say about this.
The implementation is complex, but you don't need to worry about this unless you're interested in the internals. From a developer's point of view, it just works, as in this trivial example.

catch (Exception e) when (e?.Message?.Length > 0)
{
    await Task.Delay(200);
}

This feature has been improved in C# 7, so read on. You will see async and await used a lot. Asynchronous programming is a great way of improving performance, and not just from within your C# code.

Expression bodies

Expression bodies allow you to assign an expression to a method or getter property using the lambda arrow operator (=>) that you may be familiar with from fluent LINQ syntax. You no longer need to provide a full statement or method signature and body. This feature has also been improved in C# 7, so see the examples in the next section. For example, a getter property can be implemented like so.

public static string Text => $"Today: {DateTime.Now:o}";

A method can be written in a similar way, such as the following example.

private byte[] GetBytes(string text) => Encoding.UTF8.GetBytes(text);

C# 7

The most recent version of the C# language is 7, and there are yet more improvements to readability and ease of use. We'll cover a subset of the more interesting changes here.

Literals

There are a couple of minor additional capabilities and readability enhancements when specifying literal values in code. You can specify binary literals, which means you don't have to work out how to represent them using a different base anymore. You can also put underscores anywhere within a literal to make it easier to read the number. The underscores are ignored but allow you to separate digits into conventional groupings. This is particularly well suited to the new binary literals, as they can be very verbose, listing out all those zeros and ones. Take the following example using the new 0b prefix to specify a binary literal that will be rendered as an integer in a string.

Console.WriteLine($"Binary solo! {0b0000001_00000011_000000111_00001111}");

You can do this with other bases too, such as this integer, which is formatted to use a thousands separator.

Console.WriteLine($"Over {9_000:#,0}!"); // Prints "Over 9,000!"

Tuples

One of the big new features in C# 7 is support for tuples. Tuples are groups of values, and you can now return them directly from method calls. You are no longer restricted to returning a single value. Previously, you could work around this limitation in a few sub-optimal ways, including creating a custom complex object to return, perhaps with a Plain Old C# Object (POCO) or Data Transfer Object (DTO), which are the same thing. You could also have passed in a reference using the ref or out keywords, which, although their syntax has improved, is still not great. There was System.Tuple in C# 6, but this wasn't ideal. It was a framework feature rather than a language feature, and the items were only numbered, not named. With C# 7 tuples, you can name the items, and they make a great alternative to anonymous types, particularly in LINQ query expression lambda functions. As an example, if you only want to work on a subset of the data available, perhaps when filtering a database table with an O/RM such as Entity Framework, then you could use a tuple for this. The following example returns a tuple from a method. You may need to add the System.ValueTuple NuGet package for this to work.
private static (int one, string two, DateTime three) GetTuple()
{
    return (one: 1, two: "too", three: DateTime.UtcNow);
}

You can also use tuples in string interpolation, and all the values are rendered, as shown here.

Console.WriteLine($"Tuple = {GetTuple()}");

Out variables

If you did want to pass parameters into a method for modification, then you have always needed to declare them first. This is no longer necessary, and you can simply declare the variables at the point you pass them in. You can also declare a variable to be discarded by using an underscore. This is particularly useful if you don't want to use the returned value, for example in some of the try parse methods of the native framework data types. Here we parse a date without declaring the dt variable first.

DateTime.TryParse("2017-08-09", out var dt);

In this example we test for an integer, but we don't care what it is.

var isInt = int.TryParse("w00t", out _);

References

You can now return values by reference from a method as well as consume them. This is a little like working with pointers in C, but safer. For example, you can only return references that were passed into the method, and you can't modify references to point to a different location in memory. This is a very specialist feature, but in certain niche situations it can dramatically improve performance. Given the following method.

private static ref string GetFirstRef(ref string[] texts)
{
    if (texts?.Length > 0)
    {
        return ref texts[0];
    }
    throw new ArgumentOutOfRangeException();
}

You could call it like so, and the second console output line would appear differently (one instead of 1).

var strings = new string[] { "1", "2" };
ref var first = ref GetFirstRef(ref strings);
Console.WriteLine($"{strings?[0]}"); // 1
first = "one";
Console.WriteLine($"{strings?[0]}"); // one

Patterns

The other big addition is that you can now match patterns in C# 7 using the is keyword. This simplifies testing for null and matching against types, among other things. It also lets you easily use the cast value. This is a simpler alternative to using full polymorphism (where a derived class can be treated as a base class and override methods). However, if you control the code base and can make use of proper polymorphism, then you should still do this and follow good Object-Oriented Programming (OOP) principles. In the following example, pattern matching is used to parse the type and value of an unknown object.

private static int PatternMatch(object obj)
{
    if (obj is null)
    {
        return 0;
    }
    if (obj is int i)
    {
        return i++;
    }
    if (obj is DateTime d || (obj is string str && DateTime.TryParse(str, out d)))
    {
        return d.DayOfYear;
    }
    return -1;
}

You can also use pattern matching in the cases of a switch statement, and you can switch on non-primitive types such as custom objects.

More expression bodies

Expression bodies are expanded from the offering in C# 6, and you can now use them in more places, for example as object constructors and property setters. Here we extend our previous example to include setting the value on the property we were previously just reading.

private static string text;

public static string Text
{
    get => text ?? $"Today: {DateTime.Now:r}";
    set => text = value;
}

More asynchronous improvements

There have been some small improvements to what async methods can return and, although small, they could offer big performance gains in certain situations. You no longer have to return a task, which can be beneficial if the value is already available.
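For instance, the following sketch (not from the book; it assumes the System.Threading.Tasks.Extensions package that provides ValueTask<T>, using directives for System.Collections.Generic and System.Threading.Tasks, and a hypothetical LoadCountFromStoreAsync data access method) shows an async method that skips the Task allocation entirely when a cached value can be returned synchronously.

private static readonly Dictionary<string, int> cache = new Dictionary<string, int>();

public static async ValueTask<int> GetCountAsync(string key)
{
    // Fast path: the value is already available, so no Task object is allocated.
    if (cache.TryGetValue(key, out var count))
    {
        return count;
    }
    // Slow path: only now do we pay for a real asynchronous operation.
    count = await LoadCountFromStoreAsync(key); // hypothetical data access call
    cache[key] = count;
    return count;
}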
Avoiding the creation of a task object in this way can reduce the overheads of using async methods.

JavaScript

You can't write a book on web applications without covering JavaScript. It is everywhere. If you write a web app that does a full page load on every request and it's not a simple content site, then it will feel slow. Users expect responsiveness. If you are a back-end developer, then you may think that you don't have to worry about this. However, if you are building an API, then you may want to make it easy to consume with JavaScript, and you will need to make sure that your JSON is correctly and quickly serialized. Even if you are building a Single Page Application (SPA) in JavaScript (or TypeScript) that runs in the browser, the server can still play a key role. You can use SPA services to run Angular or React on the server and generate the initial output. This can increase performance, as the browser has something to render immediately. For example, there is a project called React.NET that integrates React with ASP.NET, and it supports ASP.NET Core. If you have been struggling to keep up with the latest developments in the .NET world, then JavaScript is on another level. There seems to be something new almost every week, and this can lead to framework fatigue and the paradox of choice. There is so much to choose from that you don't know what to pick.

Summary

In this article, you have seen a brief high-level summary of what has changed in .NET Core 2 and ASP.NET Core 2, compared to the previous versions. You are also now aware of .NET Standard 2 and what it is for. We have shown examples of some of the new features available in C# 6 and C# 7. These can be very useful in letting you write more with less, and in making your code more readable and easier to maintain.


Implement Long-short Term Memory (LSTM) with TensorFlow

Gebin George
06 Mar 2018
4 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book, Deep Learning Essentials written by Wei Di, Anurag Bhardwaj, and Jianing Wei. This book will help you get started with the essentials of deep learning and neural network modeling.[/box] In today’s tutorial, we will look at an example of using LSTM in TensorFlow to perform sentiment classification. The input to LSTM will be a sentence or sequence of words. The output of LSTM will be a binary value indicating a positive sentiment with 1 and a negative sentiment with 0. We will use a many-to-one LSTM architecture for this problem since it maps multiple inputs onto a single output. Figure LSTM: Basic cell architecture shows this architecture in more detail. As shown here, the input takes a sequence of word tokens (in this case, a sequence of three words). Each word token is input at a new time step and is input to the hidden state for the corresponding time step. For example, the word Book is input at time step t and is fed to the hidden state ht: Sentiment analysis: To implement this model in TensorFlow, we need to first define a few variables as follows: batch_size = 4 lstm_units = 16 num_classes = 2 max_sequence_length = 4 embedding_dimension = 64 num_iterations = 1000 As shown previously, batch_size dictates how many sequences of tokens we can input in one batch for training. lstm_units represents the total number of LSTM cells in the network. max_sequence_length represents the maximum possible length of a given sequence. Once defined, we now proceed to initialize TensorFlow-specific data structures for input data as follows: import tensorflow as tf labels = tf.placeholder(tf.float32, [batch_size, num_classes]) raw_data = tf.placeholder(tf.int32, [batch_size, max_sequence_length]) Given we are working with word tokens, we would like to represent them using a good feature representation technique. Let us assume the word embedding representation takes a word token and projects it onto an embedding space of dimension, embedding_dimension. The two-dimensional input data containing raw word tokens is now transformed into a three-dimensional word tensor with the added dimension representing the word embedding. We also use pre-computed word embedding, stored in a word_vectors data structure. We initialize the data structures as follows: data = tf.Variable(tf.zeros([batch_size, max_sequence_length, embedding_dimension]),dtype=tf.float32) data = tf.nn.embedding_lookup(word_vectors,raw_data) Now that the input data is ready, we look at defining the LSTM model. As shown previously, we need to create lstm_units of a basic LSTM cell. Since we need to perform a classification at the end, we wrap the LSTM unit with a dropout wrapper. To perform a full temporal pass of the data on the defined network, we unroll the LSTM using a dynamic_rnn routine of TensorFlow. 
We also initialize a random weight matrix and a constant value of 0.1 as the bias vector, as follows:

weight = tf.Variable(tf.truncated_normal([lstm_units, num_classes]))
bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))
lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_units)
wrapped_lstm_cell = tf.contrib.rnn.DropoutWrapper(cell=lstm_cell, output_keep_prob=0.8)
output, state = tf.nn.dynamic_rnn(wrapped_lstm_cell, data, dtype=tf.float32)

Once the output is generated by the dynamically unrolled RNN, we transpose its shape, multiply it by the weight matrix, and add the bias vector to it to compute the final prediction value:

output = tf.transpose(output, [1, 0, 2])
last = tf.gather(output, int(output.get_shape()[0]) - 1)
prediction = (tf.matmul(last, weight) + bias)
weight = tf.cast(weight, tf.float64)
last = tf.cast(last, tf.float64)
bias = tf.cast(bias, tf.float64)

Since the initial prediction needs to be refined, we define an objective function with cross-entropy to minimize the loss as follows:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=labels))
optimizer = tf.train.AdamOptimizer().minimize(loss)

After this sequence of steps, we have a trained, end-to-end LSTM network for sentiment classification of arbitrary-length sentences. To summarize, we saw how effectively we can implement an LSTM network using TensorFlow. If you are interested to know more, check out the book Deep Learning Essentials, which will help you take your first steps in training efficient deep learning models and apply them in various practical scenarios.
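The excerpt stops before showing the training loop that actually drives these operations. A minimal sketch of that loop (not part of the original text; it assumes a hypothetical get_next_batch(batch_size) helper that returns token-id and one-hot label arrays matching the raw_data and labels placeholders defined above) could look like this:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(num_iterations):
        # fetch one batch of token ids and one-hot sentiment labels
        batch_data, batch_labels = get_next_batch(batch_size)  # hypothetical data helper
        sess.run(optimizer, feed_dict={raw_data: batch_data, labels: batch_labels})
        if i % 100 == 0:
            train_loss = sess.run(loss, feed_dict={raw_data: batch_data, labels: batch_labels})
            print("iteration %d, loss %.4f" % (i, train_loss))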


Logistic Regression Using TensorFlow

Packt
06 Mar 2018
9 min read
In this article, by PKS Prakash and Achyutuni Sri Krishna Rao, authors of R Deep Learning Cookbook we will learn how to Perform logistic regression using TensorFlow. In this recipe, we will cover the application of TensorFlow in setting up a logistic regression model. The example will use a similar dataset to that used in the H2O model setup. (For more resources related to this topic, see here.) What is TensorFlow TensorFlow is another open source library developed by the Google Brain Team to build numerical computation models using data flow graphs. The core of TensorFlow was developed in C++ with the wrapper in Python. The tensorflow package in R gives you access to the TensorFlow API composed of Python modules to execute computation models. TensorFlow supports both CPU- and GPU-based computations. The tensorflow package in R calls the Python tensorflow API for execution, which is essential to install the tensorflow package in both R and Python to make R work. The following are the dependencies for tensorflow: Python 2.7 / 3.x  R (>3.2) devtools package in R for installing TensorFlow from GitHub  TensorFlow in Python pip Getting ready The code for this section is created on Linux but can be run on any operating system. To start modeling, load the tensorflow package in the environment. R loads the default TensorFlow environment variable and also the NumPy library from Python in the np variable:  library("tensorflow") # Load TensorFlow np <- import("numpy") # Load numpy library How to do it... The data is imported using a standard function from R, as shown in the following code. The data is imported using the read.csv file and transformed into the matrix format followed by selecting the features used to model as defined in xFeatures and yFeatures. The next step in TensorFlow is to set up a graph to run optimization: # Loading input and test data xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio") yFeatures = "Occupancy" occupancy_train <- as.matrix(read.csv("datatraining.txt",stringsAsFactors = T)) occupancy_test <- as.matrix(read.csv("datatest.txt",stringsAsFactors = T)) # subset features for modeling and transform to numeric values occupancy_train<-apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN=as.numeric) occupancy_test<-apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN=as.numeric) # Data dimensions nFeatures<-length(xFeatures) nRow<-nrow(occupancy_train) Before setting up the graph, let's reset the graph using the following command: # Reset the graph tf$reset_default_graph() Additionally, let's start an interactive session as it will allow us to execute variables without referring to the session-to-session object: # Starting session as interactive session sess<-tf$InteractiveSession() Define the logistic regression model in TensorFlow: # Setting-up Logistic regression graph x <- tf$constant(unlist(occupancy_train[, xFeatures]), shape=c(nRow, nFeatures), dtype=np$float32) # W <- tf$Variable(tf$random_uniform(shape(nFeatures, 1L))) b <- tf$Variable(tf$zeros(shape(1L))) y <- tf$matmul(x, W) + b The input feature x is defined as a constant as it will be an input to the system. The weight W and bias b are defined as variables that will be optimized during the optimization process. The y is set up as a symbolic representation between x, W, and b. The weight W is set up to initialize random uniform distribution and b is assigned the value zero.  
The next step is to set up the cost function for logistic regression:

# Setting up cost function and optimizer
y_ <- tf$constant(unlist(occupancy_train[, yFeatures]), dtype="float32", shape=c(nRow, 1L))
cross_entropy <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=y_, logits=y, name="cross_entropy"))
optimizer <- tf$train$GradientDescentOptimizer(0.15)$minimize(cross_entropy)

# Start a session
init <- tf$global_variables_initializer()
sess$run(init)

Execute the gradient descent algorithm for the optimization of weights, using cross entropy as the loss function:

# Running optimization
for (step in 1:5000) {
  sess$run(optimizer)
  if (step %% 20 == 0)
    cat(step, "-", sess$run(W), sess$run(b), "==>", sess$run(cross_entropy), "\n")
}

How it works...

The performance of the model can be evaluated using AUC:

# Performance on train
library(pROC)
ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))

# Performance on test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))

The ROC curves (and hence the AUC) can be visualized using the plot.roc function from the pROC package, as shown in the screenshot following this command. The performance for training and testing (holdout) is very similar.

plot.roc(roc_obj, col = "green", lty=2, lwd=2)
plot.roc(roc_objt, add=T, col="red", lty=4, lwd=2)

(Figure: Performance of logistic regression using TensorFlow)

Visualizing TensorFlow graphs

TensorFlow graphs can be visualized using TensorBoard, a service that utilizes TensorFlow event files to visualize TensorFlow models as graphs. Graph visualization in TensorBoard is also used to debug TensorFlow models.

Getting ready

TensorBoard can be started using the following command in the terminal:

$ tensorboard --logdir home/log --port 6006

The following are the major parameters for TensorBoard:

--logdir: To map to the directory from which to load TensorFlow events
--debug: To increase log verbosity
--host: To define the host to listen on; localhost (127.0.0.1) by default
--port: To define the port on which TensorBoard will serve

The preceding command will launch the TensorBoard service on localhost at port 6006, as shown in the following screenshot:

(Screenshot: TensorBoard)

The tabs in TensorBoard capture the relevant data generated during graph execution.

How to do it...

This section covers how to visualize TensorFlow models and output in TensorBoard. To visualize summaries and graphs, data from TensorFlow can be exported using the FileWriter command from the summary module.
A default session graph can be added using the following command:

# Create writer object for the log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

The graph for the logistic regression developed using the preceding code is shown in the following screenshot:

(Screenshot: Visualization of the logistic regression graph in TensorBoard)

Similarly, other variable summaries can be added to TensorBoard using the correct summary operations, as shown in the following code:

# Adding histogram summary to weight and bias variables
w_hist = tf$histogram_summary("weights", W)
b_hist = tf$histogram_summary("biases", b)

Create a cross entropy evaluation for test. An example script to generate the cross entropy cost function for test and train is shown in the following command:

# Set up cross entropy for test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- tf$nn$sigmoid(tf$matmul(xt, W) + b)
yt_ <- tf$constant(unlist(occupancy_test[, yFeatures]), dtype="float32", shape=c(nRowt, 1L))
cross_entropy_tst <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=yt_, logits=ypredt, name="cross_entropy_tst"))

Add summary variables to be collected:

# Add summary ops to collect data
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)
crossEntropySummary <- tf$summary$scalar("costFunction", cross_entropy)
crossEntropyTstSummary <- tf$summary$scalar("costFunction_test", cross_entropy_tst)

Open the writing object, log_writer. It writes the default graph to the location c:/log:

# Create writer object for the log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

Run the optimization and collect the summaries:

for (step in 1:2500) {
  sess$run(optimizer)
  # Evaluate performance on training and test data after every 50 iterations
  if (step %% 50 == 0){
    ### Performance on train
    ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
    roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))
    ### Performance on test
    ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
    roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))
    cat("train AUC: ", auc(roc_obj), " Test AUC: ", auc(roc_objt), "\n")
    # Save summary of bias and weights
    log_writer$add_summary(sess$run(b_hist), global_step=step)
    log_writer$add_summary(sess$run(w_hist), global_step=step)
    log_writer$add_summary(sess$run(crossEntropySummary), global_step=step)
    log_writer$add_summary(sess$run(crossEntropyTstSummary), global_step=step)
  }
}

Collect all the summaries into a single tensor using the merge_all command from the summary module:

summary = tf$summary$merge_all()

Write the summaries to the log file using the log_writer object:

log_writer = tf$summary$FileWriter('c:/log', sess$graph)
summary_str = sess$run(summary)
log_writer$add_summary(summary_str, step)
log_writer$close()

Summary

In this article, we have learned how to perform logistic regression using TensorFlow, and we have covered the application of TensorFlow in setting up a logistic regression model.


Gather Intel and Plan Attack Strategies

Packt
06 Mar 2018
2 min read
In this article by Himanshu Sharma, author of the  Kali Linux - An Ethical Hacker's Cookbook, we will cover the following recipes: Getting a list of subdomains Shodan honeyscore Shodan plugins Using Nmap to find open ports (For more resources related to this topic, see here.) In this article,we'll dive a little deeper and look at other different tools available for gathering intel on our target. We'll start by using some of the infamous tools of Kali Linux, such as Fierce. Gathering information is a very crucial stage of performing a penetration test,as every step we take after this will totally be an outcome of all the information we gather during this stage. So it is very important that we gather as much information as possible before jumping into the exploitation stage. Getting a list of subdomains Not always do we have a situation where a client has defined a full detailed scope of what needs to be pentested. So, we will use the followingrecipes to gather as much information we can to perform a pentest. How to do it… We will see how to get a list of subdomains in the following ways: Fierce We'll start with jumping into Kali's terminal and using the first and mostly widely used tool,Fierce. To launch Fierce,type fierce –h to see the help menu: fierce –dns host.com –threads 10 To perform a subdomain scan, we use this command: fierce –dns host.com –threads 10 Dnsdumpster Dnsdumpster is a free project by HackerTarget to lookup subdomains. It relies on https://scans.io/ for its results. It is pretty simply to use.We type the domain name we want the subdomains for and it will show us the results. Using Shodan for fun and profit Shodan is the world's first search engine to search for devices connected on the Internet. It was launched in 2009 by John Matherly. Shodan can be used to lookup webcams, databases, industrial systems, videogames,and so on. Shodan mostly collects data on the most popular web services running, such as HTTP, HTTPS, MongoDB,and FTP. Getting ready To use Shodan, we will need to create an account. How to do it... Open your browser and visit https://www.shodan.io: We begin by performing a simple search for FTP services running.To do this, we can use the following Shodan dorks: port:"21" This search can be made more specific by specifying a particular country, organization,and so on: port:21 country:"IN" We can now see all the FTP servers running in India.We can also see the servers that allow anonymous login and the version of FTP server they are running. Next, we'll try the organization filter by typing the following: port:21 country:"IN"org:"BSNL" Shodan has other tags aswell, which can be used to perform advanced searches: net: To scan IP ranges city: To filter by city More details can be found at https://www.shodan.io/explore. Shodan honeyscore Shodan Honeyscore is another great project built in Python.It helps us figure out whether an IP address we have is a honeypot or a real system. How to do it... To use Shodan Honeyscore, visit https://honeyscore.shodan.io/: Enter the IP address you want to check, and that's it! Shodan plugins To make our lives even easier,Shodan has plugins for Chrome and Firefox that can be used to check for open ports for websites we visit on the go! How to do it... Download and install the plugin from https://www.shodan.io/. Browse any website, and you will see that by clicking on the plugin,you can see the open ports.   Using Nmap to find open ports Nmap, or Network Mapper, is a security scanner written by Gordon Lyon. 
It is used to find hosts and services in a network. It first came out in September 1997. Nmap has various features as well as scripts to perform various tests, such as finding the OS and service version, and it can be used to brute force default logins too. Some of the most common types of scan are as follows:

TCP connect() scan
SYN stealth scan
UDP scan
Ping scan
Idle scan

How to do it...

Nmap comes preinstalled in Kali Linux. We can type the following command to start it and see all the options available:

nmap -h

To perform a basic scan, use the following command:

nmap -sV -Pn x.x.x.x

Here, -Pn implies that we do not check whether the host is up or not by performing a ping request first. The -sV parameter lists all the running services on the open ports that are found. Another flag we can use is -A, which automatically performs OS detection, version detection, script scanning, and traceroute. The command is as follows:

nmap -A -Pn x.x.x.x

To scan an IP range or multiple IPs, we can use this command:

nmap -A -Pn x.x.x.0/24

Using scripts

NSE, or the Nmap Scripting Engine, allows users to create their own scripts to perform different tasks automatically. These scripts are executed side by side when a scan is run. They can be used to perform more effective version detection, exploitation of a vulnerability, and so on. The command for using a script is this:

nmap -Pn -sV host.com --script dns-brute

The following is the output of the preceding command. Here, the dns-brute script tries to fetch available subdomains by brute forcing them against a set of common subdomain names.

See also

More information on the scripts can be found in the official NSE documentation at https://nmap.org/nsedoc/

Summary

In this article, we learned how to get a list of subdomains on the network. Then we learned how to tell whether a system is a honeypot by calculating its Shodan Honeyscore, and saw that Chrome and Firefox have Shodan plugins that let you check open ports from your browser itself. Finally, we looked at how to use Nmap to find open ports.

Further resources on this subject: Wireless Attacks in Kali Linux, Introduction to Penetration Testing and Kali Linux, What is Kali Linux.


How to Compute Interpolation in SciPy

Pravin Dhandre
05 Mar 2018
8 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book co-authored by L. Felipe Martins, Ruben Oliva Ramos and V Kishore Ayyadevara titled SciPy Recipes. This book provides numerous recipes in mastering common tasks related to SciPy and associated libraries such as NumPy, pandas, and matplotlib.[/box] In today’s tutorial, we will see how to compute and solve polynomial, univariate interpolations using SciPy with detailed process and instructions. In this recipe, we will look at how to compute data polynomial interpolation by applying some important methods which are discussed in detail in the coming How to do it... section. Getting ready We will need to follow some instructions and install the prerequisites. How to do it… Let's get started. In the following steps, we will explain how to compute a polynomial interpolation and the things we need to know: They require the following parameters: points: An ndarray of floats, shape (n, D) data point coordinates. It can be either an array of shape (n, D) or a tuple of ndim arrays. values: An ndarray of float or complex shape (n,) data values. xi: A 2D ndarray of float or tuple of 1D array, shape (M, D). Points at which to interpolate data. method: A {'linear', 'nearest', 'cubic'}—This is an optional method of interpolation. One of the nearest return value is at the data point closest to the point of interpolation. See NearestNDInterpolator for more details. linear tessellates the input point set to n-dimensional simplices, and interpolates linearly on each simplex. See LinearNDInterpolator for more details. cubic (1D): Returns the value determined from a cubic spline. cubic (2D): Returns the value determined from a piecewise cubic, continuously differentiable (C1), and approximately curvature-minimizing polynomial surface. See CloughTocher2DInterpolator for more details. fill_value: float; optional. It is the value used to fill in for requested points outside of the convex hull of the input points. If it is not provided, then the default is nan. This option has no effect on the nearest method. rescale: bool; optional. Rescale points to the unit cube before performing interpolation. This is useful if some of the input dimensions have non-commensurable units and differ by many orders of magnitude. How it works… One can see that the exact result is reproduced by all of the methods to some degree, but for this smooth function, the piecewise cubic interpolant gives the best results: import matplotlib.pyplot as plt import numpy as np methods = [None, 'none', 'nearest', 'bilinear', 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos'] # Fixing random state for reproducibility np.random.seed(19680801) grid = np.random.rand(4, 4) fig, axes = plt.subplots(3, 6, figsize=(12, 6), subplot_kw={'xticks': [], 'yticks': []}) fig.subplots_adjust(hspace=0.3, wspace=0.05) for ax, interp_method in zip(axes.flat, methods): ax.imshow(grid, interpolation=interp_method, cmap='viridis') ax.set_title(interp_method) plt.show() This is the result of the execution: Univariate interpolation In the next section, we will look at how to solve univariate interpolation. Getting ready We will need to follow some instructions and install the prerequisites. 
How to do it… The following table summarizes the different univariate interpolation modes coded in SciPy, together with the processes that we may use to resolve them: Finding a cubic spline that interpolates a set of data In this recipe, we will look at how to find a cubic spline that interpolates with the main method of spline. Getting ready We will need to follow some instructions and install the prerequisites. How to do it… We can use the following functions to solve the problems with this parameter: x: array_like, shape (n,). A 1D array containing values of the independent variable. The values must be real, finite, and in strictly increasing order. y: array_like. An array containing values of the dependent variable. It can have an arbitrary number of dimensions, but the length along axis must match the length of x. The values must be finite. axis: int; optional. The axis along which y is assumed to be varying, meaning for x[i], the corresponding values are np.take(y, i, axis=axis). The default is 0. bc_type: String or two-tuple; optional. Boundary condition type. Two additional equations, given by the boundary conditions, are required to determine all coefficients of polynomials on each segment. Refer to: https:/​/​docs.​scipy.​org/doc/​scipy-​0.​19.​1/​reference/​generated/​scipy.​interpolate.​CubicSpline.html#r59. If bc_type is a string, then the specified condition will be applied at both ends of a spline. The available conditions are: not-a-knot (default): The first and second segment at a curve end are the same polynomial. This is a good default when there is no information about boundary conditions. periodic: The interpolated function is assumed to be periodic in the period x[-1] - x[0]. The first and last value of y must be identical: y[0] == y[-1]. This boundary condition will result in y'[0] == y'[-1] and y''[0] == y''[-1]. clamped: The first derivatives at the curve ends are zero. Assuming there is a 1D y, bc_type=((1, 0.0), (1, 0.0)) is the same condition. natural: The second derivatives at the curve ends are zero. Assuming there is a 1D y, bc_type=((2, 0.0), (2, 0.0)) is the same condition. If bc_type is two-tuple, the first and the second value will be applied at the curve's start and end respectively. The tuple value can be one of the previously mentioned strings (except periodic) or a tuple (order, deriv_values), allowing us to specify arbitrary derivatives at curve ends: order: The derivative order; it is 1 or 2. deriv_value: An array_like containing derivative values. The shape must be the same as y, excluding the axis dimension. For example, if y is 1D, then deriv_value must be a scalar. If y is 3D with shape (n0, n1, n2) and axis=2, then deriv_value must be 2D and have the shape (n0, n1). extrapolate: {bool, 'periodic', None}; optional. bool, determines whether or not to extrapolate to out-of-bounds points based on first and last intervals, or to return NaNs. periodic, periodic extrapolation is used. If none (default), extrapolate is set to periodic for bc_type='periodic' and to True otherwise. How it works... 
We have the following example:

%pylab inline
from scipy.interpolate import CubicSpline
import matplotlib.pyplot as plt

x = np.arange(10)
y = np.sin(x)
cs = CubicSpline(x, y)
xs = np.arange(-0.5, 9.6, 0.1)
plt.figure(figsize=(6.5, 4))
plt.plot(x, y, 'o', label='data')
plt.plot(xs, np.sin(xs), label='true')
plt.plot(xs, cs(xs), label="S")
plt.plot(xs, cs(xs, 1), label="S'")
plt.plot(xs, cs(xs, 2), label="S''")
plt.plot(xs, cs(xs, 3), label="S'''")
plt.xlim(-0.5, 9.5)
plt.legend(loc='lower left', ncol=2)
plt.show()

We can see the result here:

We see the next example:

theta = 2 * np.pi * np.linspace(0, 1, 5)
y = np.c_[np.cos(theta), np.sin(theta)]
cs = CubicSpline(theta, y, bc_type='periodic')
print("ds/dx={:.1f} ds/dy={:.1f}".format(cs(0, 1)[0], cs(0, 1)[1]))
# prints: ds/dx=0.0 ds/dy=1.0
xs = 2 * np.pi * np.linspace(0, 1, 100)
plt.figure(figsize=(6.5, 4))
plt.plot(y[:, 0], y[:, 1], 'o', label='data')
plt.plot(np.cos(xs), np.sin(xs), label='true')
plt.plot(cs(xs)[:, 0], cs(xs)[:, 1], label='spline')
plt.axes().set_aspect('equal')
plt.legend(loc='center')
plt.show()

In the following screenshot, we can see the final result:

Defining a B-spline for a given set of control points

In the next section, we will look at how to define a B-spline given some control data.

Getting ready

We need to follow some instructions and install the prerequisites.

How to do it...

A univariate spline in the B-spline basis is expressed as

S(x) = \sum_{j=0}^{n-1} c_j B_{j,k;t}(x)

where the B_{j,k;t} are B-spline basis functions of degree k over the knots t, and the c_j are the coefficients for the control points. We can use the following parameters:

How it works...

Here, we construct a quadratic spline function on the base interval 2 <= x <= 4 and compare it with the naive way of evaluating the spline:

from scipy import interpolate
import numpy as np
import matplotlib.pyplot as plt

# sampling
x = np.linspace(0, 10, 10)
y = np.sin(x)

# spline through all the sampled points
tck = interpolate.splrep(x, y)
x2 = np.linspace(0, 10, 200)
y2 = interpolate.splev(x2, tck)

# spline with all the middle points as knots (not working yet)
# knots = x[1:-1]  # it should be something like this
knots = np.array([x[1]])  # not working with above line and just seeing what this line does
weights = np.concatenate(([1], np.ones(x.shape[0]-2)*.01, [1]))
tck = interpolate.splrep(x, y, t=knots, w=weights)
x3 = np.linspace(0, 10, 200)
y3 = interpolate.splev(x2, tck)

# plot
plt.plot(x, y, 'go', x2, y2, 'b', x3, y3, 'r')
plt.show()

Note that outside of the base interval, the results differ. This is because BSpline extrapolates the first and last polynomial pieces of the B-spline functions active on the base interval. This is the result of solving the problem:

We successfully performed numerical computation and found interpolating functions using the polynomial and univariate interpolation routines in SciPy. If you found this tutorial useful, do check out the book SciPy Recipes to get quick recipes for performing other mathematical operations like differential equations, K-means, and the Discrete Fourier Transform.
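The first recipe above lists the griddata-style parameters (points, values, xi, method, fill_value, rescale), but the snippet shown there exercises Matplotlib's imshow interpolation modes rather than calling SciPy directly. As a hedged sketch of how those parameters fit together in a direct scipy.interpolate.griddata call (the sample function and grid size here are arbitrary illustrations, not from the book):

import numpy as np
from scipy.interpolate import griddata

# 200 scattered samples of an arbitrary smooth function f(x, y)
points = np.random.rand(200, 2)
values = np.sin(points[:, 0] * np.pi) * points[:, 1]

# regular 50 x 50 grid to interpolate onto
grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]
interpolated = griddata(points, values, (grid_x, grid_y), method='cubic', fill_value=0.0)
print(interpolated.shape)  # (50, 50)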


How to implement Reinforcement Learning with TensorFlow

Gebin George
05 Mar 2018
3 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book, Deep Learning Essentials co-authored by Wei Di, Anurag Bhardwaj, and Jianing Wei. This book will help you get to grips with the essentials of deep learning by leveraging the power of Python.[/box] In today’s tutorial, we will implement reinforcement learning with TensorFlow-based Qlearning algorithm. We will look at a popular game, FrozenLake, which has an inbuilt environment in the OpenAI gym package. The idea behind the FrozenLake game is quite simple. It consists of 4 x 4 grid blocks, where each block can have one of the following four states: S: Starting point/Safe state F: Frozen surface/Safe state H: Hole/Unsafe state G: Goal/Safe or Terminal state In each of the 16 cells, you can use one of the four actions, namely up/down/left/right, to move to a neighboring state. The goal of the game is to start from state S and end at state G. We will show how we can use a neural network-based Q-learning system to learn a safe path from state S to state G. First, we import the necessary packages and define the game environment: import gym import numpy as np import random import tensorflow as tf env = gym.make('FrozenLake-v0') Once the environment is defined, we can define the network structure that learns the Qvalues. We will use a one-layer neural network with 16 hidden neurons and 4 output neurons as follows: input_matrix = tf.placeholder(shape=[1,16],dtype=tf.float32) weight_matrix = tf.Variable(tf.random_uniform([16,4],0,0.01)) Q_matrix = tf.matmul(input_matrix,weight_matrix) prediction_matrix = tf.argmax(Q_matrix,1) nextQ = tf.placeholder(shape=[1,4],dtype=tf.float32) loss = tf.reduce_sum(tf.square(nextQ - Q_matrix)) train = tf.train.GradientDescentOptimizer(learning_rate=0.05) model = train.minimize(loss) init_op = tf.global_variables_initializer() Now we can choose the action greedily: ip_q = np.zeros(num_states) ip_q[current_state] = 1 a,allQ = sess.run([prediction_matrix,Q_matrix],feed_dict={input_matrix: [ip_q]}) if np.random.rand(1) < sample_epsilon: a[0] = env.action_space.sample() next_state, reward, done, info = env.step(a[0]) ip_q1 = np.zeros(num_states) ip_q1[next_state] = 1 Q1 = sess.run(Q_matrix,feed_dict={input_matrix:[ip_q1]}) maxQ1 = np.max(Q1) targetQ = allQ targetQ[0,a[0]] = reward + y*maxQ1 _,W1 = sess.run([model,weight_matrix],feed_dict={input_matrix: [ip_q],nextQ:targetQ}) Figure RL with Q-learning example shows the sample output of the program when executed. You can see different values of Q matrix as the agent moves from one state to the other. You also notice a value of reward 1 when the agent is in state 15: To summarize, we saw how reinforcement learning can be practically implemented using TensorFlow. If you found this post useful, do check out the book Deep Learning Essentials which will help you fine-tune and optimize your deep learning models for better performance.  

Walkthrough of Storm UI

Packt
04 Mar 2018
5 min read
In this article by Ankit Jain, the author of the book Mastering Apache Storm, we will see how we can start the Storm UI daemon. Before starting the Storm UI daemon, we assume that you have a running Storm cluster; the Storm cluster deployment steps are covered in the previous chapter. Now, go to the Storm home directory (cd $STORM_HOME) on the leader Nimbus machine and run the following commands to start the Storm UI daemon:

$> cd $STORM_HOME
$> bin/storm ui &

(For more resources related to this topic, see here.)

By default, the Storm UI starts on port 8080 of the machine where it is started. Now, we will browse to http://nimbus-node:8080 to view the Storm UI, where nimbus-node is the IP address or hostname of the Nimbus machine. The following is a screenshot of the Storm home page.

Cluster Summary section

This portion of the Storm UI shows the version of Storm deployed in the cluster, the uptime of the Nimbus nodes, the number of free worker slots, the number of used worker slots, and so on. While submitting a topology to the cluster, the user first needs to make sure that the value of the Free slots column is not zero; otherwise, the topology doesn't get any worker for processing and will wait in the queue until a worker becomes free.

Nimbus Summary section

This portion of the Storm UI shows the number of Nimbus processes running in the Storm cluster. The section also shows the status of the Nimbus nodes. A node with the status Leader is an active master, while a node with the status Not a Leader is a passive master.

Supervisor Summary section

This portion of the Storm UI shows the list of supervisor nodes running in the cluster along with their Id, Host, Uptime, Slots, and Used slots columns.

Nimbus Configuration section

This portion of the Storm UI shows the configuration of the Nimbus node. Some of the important properties are:

supervisor.slots.ports
storm.zookeeper.port
storm.zookeeper.servers
storm.zookeeper.retry.interval
worker.childopts
supervisor.childopts

The definition of each of these properties is covered in Chapter 3. The following screenshot shows the Nimbus Configuration section.

Topology Summary section

This portion of the Storm UI shows the list of topologies running in the Storm cluster along with their ID, the number of workers assigned to the topology, the number of executors, the number of tasks, uptime, and so on. Let's deploy the sample topology (if it is not running already) on a remote Storm cluster by running the following commands:

$> cd $STORM_HOME
$> bin/storm jar ~/storm_example-0.0.1-SNAPSHOT-jar-with-dependencies.jar com.stormadvance.storm_example.SampleStormClusterTopology storm_example

We have created the SampleStormClusterTopology topology by defining three worker processes, two executors for SampleSpout, and four executors for SampleBolt. Workers, executors, and tasks are covered in the next chapter. After submitting SampleStormClusterTopology to the Storm cluster, the user has to refresh the Storm home page. The following screenshot shows that a row is added for SampleStormClusterTopology in the Topology summary section. The topology section contains the name of the topology, the unique ID of the topology, the status of the topology, uptime, the number of workers assigned to the topology, and so on. The possible values of the status field are ACTIVE, KILLED, and INACTIVE. Let's click on SampleStormClusterTopology to view its detailed statistics; two screenshots illustrate this.

The first screenshot contains information about the number of workers, executors, and tasks assigned to the SampleStormClusterTopology topology. The next screenshot contains information about the spouts and bolts, namely the number of executors and tasks assigned to each spout and bolt.

The information shown in the previous screenshots is:

Topology stats: This section gives information about the number of tuples emitted, transferred, and acknowledged, the capacity, latency, and so on, within windows of 10 minutes, 3 hours, 1 day, and since the start of the topology.
Spouts (All time): This section shows the statistics of all the spouts running inside the topology. Detailed information about spout stats is covered in Chapter 3.
Bolts (All time): This section shows the statistics of all the bolts running inside the topology. Detailed information about bolt stats is covered in Chapter 3.
Topology actions: This section allows us to perform activate, deactivate, rebalance, kill, and other operations on topologies directly through the Storm UI:
Deactivate: Click on Deactivate to deactivate the topology. Once the topology is deactivated, the spout stops emitting tuples and the status of the topology changes to INACTIVE on the Storm UI. Deactivating a topology does not free the Storm resources.
Activate: Click on Activate to activate the topology. Once the topology is activated, the spout again starts emitting tuples.
Kill: Click on Kill to destroy/kill the topology. Once the topology is killed, it will free all the Storm resources allotted to it. While killing the topology, Storm will first deactivate the spouts and wait for the kill time mentioned in the alert box, so the bolts have a chance to finish processing the tuples emitted by the spouts before the kill command completes. The following screenshot shows how we can kill the topology through the Storm UI.

Let's go to the Storm UI's home page to check the status of SampleStormClusterTopology, as shown in the following screenshot.

Summary

We have seen how to start the Storm UI daemon and walked through the sections of the Storm UI home page.


FastTrack to OOP - Classes and Interfaces

Packt
04 Mar 2018
7 min read
In this article by Mohamed Sanaulla and Nick Samoylov, the authors of Java 9 Cookbook, we will cover the following recipe:

Implementing object-oriented design using classes

(For more resources related to this topic, see here.)

Implementing object-oriented design using classes

In this recipe, you will learn about the first two OOD concepts--object/class and encapsulation.

Getting ready

An object is the coupling of data and the procedures that can be applied to them. Neither data nor procedures are required, but one of them is--and typically, both are--always present. The data is called object properties, while the procedures are called methods. Properties capture the state of the object. Methods describe the object's behavior. An object has a type, which is defined by its class (see the information box). An object is said to be an instance of a class. A class is a collection of definitions of properties and methods that will be present in each of its instances--the objects created based on this class. Encapsulation is the hiding of object properties and methods that should not be accessible by other objects. Encapsulation is achieved by the Java keywords private or protected in the declaration of the properties and methods.

How to do it...

Create an Engine class with a horsePower property, a setHorsePower() method, which sets this property's value, and a getSpeedMph() method, which calculates the speed of a vehicle based on the time since the vehicle started moving, the vehicle weight, and the engine power:

public class Engine {
  private int horsePower;
  public void setHorsePower(int horsePower) {
    this.horsePower = horsePower;
  }
  public double getSpeedMph(double timeSec, int weightPounds) {
    double v = 2.0*this.horsePower*746;
    v = v*timeSec*32.17/weightPounds;
    return Math.round(Math.sqrt(v)*0.68);
  }
}

Create the Vehicle class:

public class Vehicle {
  private int weightPounds;
  private Engine engine;
  public Vehicle(int weightPounds, Engine engine) {
    this.weightPounds = weightPounds;
    this.engine = engine;
  }
  public double getSpeedMph(double timeSec){
    return this.engine.getSpeedMph(timeSec, weightPounds);
  }
}

Create the application that uses these classes:

public static void main(String... arg) {
  double timeSec = 10.0;
  int horsePower = 246;
  int vehicleWeight = 4000;
  Engine engine = new Engine();
  engine.setHorsePower(horsePower);
  Vehicle vehicle = new Vehicle(vehicleWeight, engine);
  System.out.println("Vehicle speed (" + timeSec + " sec)=" + vehicle.getSpeedMph(timeSec) + " mph");
}

How it works...

The preceding application yields the following output:

As you can see, an engine object was created by invoking the default constructor of the Engine class, without parameters, and with the Java keyword new, which allocated memory for the newly created object on the heap. The second object, vehicle, was created with the explicitly defined constructor of the Vehicle class with two parameters. The second parameter of the constructor is the engine object, which carries the horsePower property with 246 set as its value using the setHorsePower() method. It also contains the getSpeedMph() method, which can be called by any object with access to engine, as is done in the getSpeedMph() method of the Vehicle class. It's worth noticing that the getSpeedMph() method of the Vehicle class relies on the presence of a value assigned to the engine property. The object of the Vehicle class delegates speed calculation to the object of the Engine class.

If the latter is not set (null passed in the Vehicle() constructor, for example), we will get an unpleasant NullPointerException at runtime. To avoid it, we can place a check for the presence of this value, either in the Vehicle() constructor or in the getSpeedMph() method of the Vehicle class. Here's the check that we can place in Vehicle():

if(engine == null){
  throw new RuntimeException("Engine" + " is required parameter.");
}

Here is the check that you can place in the getSpeedMph() method of the Vehicle class:

if(getEngine() == null){
  throw new RuntimeException("Engine value is required.");
}

This way, we avoid the ambiguity of NullPointerException and tell the user exactly what the source of the problem is. As you probably noticed, the getSpeedMph() method can be removed from the Engine class and fully implemented in the Vehicle class:

public double getSpeedMph(double timeSec){
  double v = 2.0 * this.engine.getHorsePower() * 746;
  v = v * timeSec * 32.174 / this.weightPounds;
  return Math.round(Math.sqrt(v) * 0.68);
}

To do so, we would need to add a public getHorsePower() method to the Engine class in order to make it available for usage by the getSpeedMph() method in Vehicle. For now, we leave the getSpeedMph() method in the Engine class. This is one of the design decisions you need to make. If you think that an object of the Engine class is going to be passed around to the objects of different classes (not only Vehicle), then you would keep the getSpeedMph() method in the Engine class. Otherwise, if you think that the Vehicle class is going to be responsible for the speed calculation (which makes sense, since it is the speed of a vehicle, not of an engine), then you should implement the method inside Vehicle.

There's more...

Java provides the capability to extend a class and to allow its subclass to access all the functionalities of the base class. For example, you can decide that every object that can be asked about its speed belongs to a subclass that is derived from the Vehicle class. In such a case, the Car class may look as follows:

public class Car extends Vehicle {
  private int passengersCount;
  public Car(int passengersCount, int weightPounds, Engine engine){
    super(weightPounds, engine);
    this.passengersCount = passengersCount;
  }
  public int getPassengersCount() {
    return this.passengersCount;
  }
}

Now, we can change our test code by replacing Vehicle with Car:

public static void main(String... arg) {
  double timeSec = 10.0;
  int horsePower = 246;
  int vehicleWeight = 4000;
  Engine engine = new Engine();
  engine.setHorsePower(horsePower);
  Vehicle vehicle = new Car(4, vehicleWeight, engine);
  System.out.println("Car speed (" + timeSec + " sec) = " + vehicle.getSpeedMph(timeSec) + " mph");
}

When we run the preceding code, we get the same value as with an object of the Vehicle class. Because of polymorphism, a reference to an object of Car can be assigned to a reference of the base class, Vehicle. The object of Car has two types--its own type (Car) and the type of the base class (Vehicle). There are usually many ways to design the same functionality. It all depends on the needs of your project and the style adopted by the development team. But in any context, clarity of design helps to communicate the intent. A good design contributes to the quality and longevity of your code.

Summary

This article gives you a quick introduction to the components of object-oriented programming and covers the new enhancements to these components in Java 8 and Java 9. We have also tried to share good OOD practices wherever applicable. Throughout the recipe, we used the new enhancements (introduced in Java 8 and Java 9), defined and demonstrated the concepts of OOD in specific code examples, and presented new capabilities for better code documentation. One can spend many hours reading articles and practical advice on OOD in books and on the internet. Some of these activities can be beneficial for some people. But, in our experience, the fastest way to get hold of OOD is to try its principles early on in your own code. This was exactly the goal of this article--to give you a chance to see and use OOD principles so that the formal definition makes sense immediately.


Introducing Microsoft Dynamics 365

Packt
04 Mar 2018
6 min read
In this article by Rahul Mohta, Yogesh Kasat, and Jila Jeet Yadav, the authors of Implementing Microsoft Dynamics 365 for Finance and Operations, we will discuss organizations' need for a system of record to manage their data, control it, and use it for their growth. This often leads to embracing business applications for managing their resources well and continuously improving. Traditionally, this happened with software installed at the customer's location; it later evolved into hosting either internally or at a partner's premises. Now, in this modern world, it has transformed into leveraging the power and elasticity of the cloud. Dynamics 365 is a cloud service from Microsoft, combining several business needs into a single, scalable, and agile platform, allowing organizations to bring in the much-needed digital disruption. This article will introduce you to Microsoft Dynamics 365 and share the details of the various apps, solution elements, buying choices, and complementary tools. We hope you will get an insight into the various tools, offerings, and options provided by Microsoft in Dynamics 365. This may help you in your business transformation initiatives and in solution and platform evaluation, spanning CRM (Customer Relationship Management), ERP (Enterprise Resource Planning), and BI (Business Intelligence).

(For more resources related to this topic, see here.)

What is Microsoft Dynamics 365?

To understand Dynamics 365, let's first understand the Microsoft cloud competencies and the overall cloud vision. The Microsoft cloud has numerous offerings and services; Microsoft groups these offerings into four broad categories, namely modern workplace, business applications, application and infrastructure, and data and AI. Each of these categories comprises multiple applications and services. The following image highlights these four categories and the service and application offerings.

As shown in the preceding image, the Modern Workplace category combines Office 365, Windows 10, and enterprise mobility and security, and is offered as Microsoft 365. The Business Applications category is a combination of ERP and CRM and is offered as Dynamics 365. The third category is Applications and Infrastructure, which is powered through Azure. The last category is Data and AI, which deals with data, AI, and analytics.

Turning our focus back to the business applications category: in the business application world, business leaders are looking for greater business process automation to achieve digital transformation. What gets in the way today is monolithic application suites, which try to solve business process automation as a single application. You need modular applications that are built for a specific purpose, but at the same time, you need these applications to talk to each other and produce a connected graph of data, which can be further used for AI and analytics. Microsoft, for the past several years, has been focused on building modular, integrated applications infused with AI and analytics capabilities.

Microsoft Dynamics 365 is the next generation of intelligent business applications in the cloud. It is a unification of the current CRM and ERP cloud solutions into one cloud service, delivered by purpose-built applications. It enables end-to-end business processes driven by unified navigation and a consistent core user experience, with the applications seamlessly integrating with each other.

Microsoft Dynamics 365 further extends Microsoft's commitment to the cloud, bringing world-class business apps together in its overall cloud offering. Dynamics 365 applications can be independently deployed. A customer can start with what they need and, as business demands grow, adopt additional applications.

Many of you may be new to Microsoft Dynamics 365, and it would be a good idea to recognize the logo/brand image of this solution from Microsoft. The following is a common symbol you can expect to see as organizations embrace business applications in the Microsoft cloud.

Let's now explore the key deciding factors for adopting Microsoft Dynamics 365 in your day-to-day organizational life, with the help of its benefits and salient features.

Benefits of Microsoft Dynamics 365

Any business application and platform decision is often based on benefits, return on investment, and the commitment of the product principal to an assured road map. We would like to share the top benefits of leveraging Dynamics 365 as your business solution platform:

Productivity like never before, with purpose-built applications
A powerful and highly adaptable platform that enables the business to transform effectively
Integrated applications that eliminate data silos
Insightful intelligence to drive informed decision making

Microsoft Dynamics 365 salient features

What makes Microsoft Dynamics 365 stand apart from its competition and an enabler for organizations lies in its features, capabilities, and offerings. Here are the salient features of Dynamics 365:

Cloud-driven, browser-based application
Generally made available on Nov 01, 2016 to a number of markets
Mobile apps available on Android, iOS, and Windows platforms
Available in 137+ markets
Available in 40+ languages
More than 18 country localizations are built in, and more are on the way
Seamlessly integrated with Office 365, all out of the box, to increase productivity and stand apart from others
Intelligence built in for predictive analysis and decision-making support
Redefines and revolutionizes the traditional approach towards business solutions

Dynamics 365 is the next generation of intelligent business applications in the cloud (public and private) as well as on premise. It is expected to transform how businesses use technological solutions to achieve their goals.

Microsoft Dynamics 365 apps

The Microsoft Dynamics 365 approach to business applications unifies Microsoft's current CRM and ERP cloud solutions into one cloud service, with new purpose-built business applications that work together seamlessly to help you manage specific business functions. Let's now get a high-level view of the various apps available in Dynamics 365. The following image shows the apps and their association with ERP/CRM.

Now let's get to know these apps, starting with their names, their former solution base, and their brand logos. The following is a matrix of business solution enablers in Microsoft Dynamics 365, with their quick URLs:

Microsoft Dynamics 365 for Sales (popularly known as Dynamics CRM): https://www.microsoft.com/en-us/dynamics365/sales
Microsoft Dynamics 365 for Customer Service (popularly known as Dynamics CRM): https://www.microsoft.com/en-us/dynamics365/customer-service
Microsoft Dynamics 365 for Field Service (popularly known as Dynamics CRM): https://www.microsoft.com/en-us/dynamics365/field-service
Microsoft Dynamics 365 for Project Service Automation (popularly known as Dynamics CRM): https://www.microsoft.com/en-us/dynamics365/project-service-automation
Microsoft Dynamics 365 for Finance and Operations, Enterprise edition (popularly known as Dynamics AX): https://www.microsoft.com/en-us/dynamics365/operations
Microsoft Dynamics 365 for Finance and Operations, Business edition (also known as Project Madeira and based on the popular Dynamics NAV): https://www.microsoft.com/en-us/dynamics365/financials
Microsoft Dynamics 365 for Talent: https://www.microsoft.com/en-us/dynamics365/talent
Microsoft Dynamics 365 for Retail: https://www.microsoft.com/en-us/dynamics365/retail
Microsoft Dynamics 365 for Marketing: https://www.microsoft.com/en-us/dynamics365/marketing
Microsoft Dynamics 365 for Customer Insights: https://www.microsoft.com/en-us/dynamics365/customer-insights

Summary

In this article, you learned about Microsoft Dynamics 365 and all the different products that are part of it.


Internationalization and localization

Packt
03 Mar 2018
16 min read
In this article by Dmitry Sheiko, the author of the book Cross-Platform Desktop Application Development: Electron, Node, NW.js, and React, we will cover the concepts of internationalization and localization, as well as the context menu and the system clipboard in detail. Internationalization, often abbreviated as i18n, implies a particular software design capable of adapting to the requirements of target local markets. In other words, if we want to distribute our application to markets other than the USA, we need to take care of translations, formatting of datetime, numbers, addresses, and so on.

(For more resources related to this topic, see here.)

Date format by country

Internationalization is a cross-cutting concern. When you are changing the locale, it usually affects multiple modules. So I suggest going with the observer pattern that we already examined while working on DirService. The ./js/Service/I18n.js file contains the following code:

const EventEmitter = require( "events" );
class I18nService extends EventEmitter {
  constructor(){
    super();
    this.locale = "en-US";
  }
  notify(){
    this.emit( "update" );
  }
}

As you see, we can change the locale by setting a new value to the locale property. As soon as we call the notify method, all the subscribed modules immediately respond. But locale is a public property and therefore we have no control over its access and mutation. We can fix that by using property accessors (a getter and a setter). The ./js/Service/I18n.js file contains the following code:

//...
constructor(){
  super();
  this._locale = "en-US";
}
get locale(){
  return this._locale;
}
set locale( locale ){
  // validate locale...
  this._locale = locale;
}
//...

Now if we access the locale property of an I18n instance, it gets delivered by the getter (get locale). When setting it a value, it goes through the setter (set locale). Thus we can add extra functionality, such as validation and logging, on property access and mutation. Remember, we have a combobox in the HTML for selecting the language. Why not give it a view? The ./js/View/LangSelector.js file contains the following code:

class LangSelectorView {
  constructor( boundingEl, i18n ){
    boundingEl.addEventListener( "change", this.onChanged.bind( this ), false );
    this.i18n = i18n;
  }
  onChanged( e ){
    const selectEl = e.target;
    this.i18n.locale = selectEl.value;
    this.i18n.notify();
  }
}
exports.LangSelectorView = LangSelectorView;

In the preceding code, we listen for change events on the combobox. When the event occurs, we change the locale property of the passed-in I18n instance and call notify to inform the subscribers. The ./js/app.js file contains the following code:

const i18nService = new I18nService(),
      { LangSelectorView } = require( "./js/View/LangSelector" );
new LangSelectorView( document.querySelector( "[data-bind=langSelector]" ), i18nService );

Well, we can change the locale and trigger the event. What about consuming modules? In the FileList view, we have the static method formatTime that formats the passed-in timeString for printing. We can make it format the string in accordance with the currently chosen locale. The ./js/View/FileList.js file contains the following code:

constructor( boundingEl, dirService, i18nService ){
  //...
  this.i18n = i18nService;
  // Subscribe on i18nService updates
  i18nService.on( "update", () => this.update( dirService.getFileList() ) );
}
static formatTime( timeString, locale ){
  const date = new Date( Date.parse( timeString ) ),
        options = {
          year: "numeric", month: "numeric", day: "numeric",
          hour: "numeric", minute: "numeric", second: "numeric",
          hour12: false
        };
  return date.toLocaleString( locale, options );
}
update( collection ) {
  //...
  this.el.insertAdjacentHTML( "beforeend", `<li class="file-list__li" data-file="${fInfo.fileName}">
    <span class="file-list__li__name">${fInfo.fileName}</span>
    <span class="file-list__li__size">${filesize(fInfo.stats.size)}</span>
    <span class="file-list__li__time">${FileListView.formatTime( fInfo.stats.mtime, this.i18n.locale )}</span>
  </li>` );
  //...
}
//...

In the constructor, we subscribe to the I18n update event and update the file list every time the locale changes. The static method formatTime converts the passed-in string into a Date object and uses the Date.prototype.toLocaleString() method to format the datetime according to a given locale. This method belongs to the so-called ECMAScript Internationalization API (http://norbertlindenberg.com/2012/12/ecmascript-internationalization-api/index.html). The API describes methods of the built-in objects String, Date, and Number designed to format and compare localized data. What it really does here is format a Date instance with toLocaleString for the English (United States) locale ("en-US"), and it returns the date as follows:

3/17/2017, 13:42:23

However, if we feed the method the German locale ("de-DE"), we get quite a different result:

17.3.2017, 13:42:23

To put it into action, we set an identifier on the combobox. The ./index.html file contains the following code:

..
<select class="footer__select" data-bind="langSelector">
..

And of course, we have to create an instance of the I18n service and pass it into LangSelectorView and FileListView. The ./js/app.js file contains the following code:

// ...
const { I18nService } = require( "./js/Service/I18n" ),
      { LangSelectorView } = require( "./js/View/LangSelector" ),
      i18nService = new I18nService();
new LangSelectorView( document.querySelector( "[data-bind=langSelector]" ), i18nService );
// ...
new FileListView( document.querySelector( "[data-bind=fileList]" ), dirService, i18nService );

Now we start the application. As we change the language in the combobox, the file modification dates adjust accordingly.

Multilingual support

Localizing dates and numbers is a good thing, but it would be more exciting to provide translations into multiple languages. We have a number of terms across the application, namely the column titles of the file list and the tooltips (via the title attribute) on the windowing action buttons. What we need is a dictionary. Normally it implies sets of token-translation pairs mapped to language codes or locales. Thus, when you request a term from the translation service, it can correlate it to a matching translation according to the currently used language/locale. Here I suggest making the dictionary a static module that can be loaded with the require function. The ./js/Data/dictionary.js file contains the following code:

exports.dictionary = {
  "en-US": {
    NAME: "Name",
    SIZE: "Size",
    MODIFIED: "Modified",
    MINIMIZE_WIN: "Minimize window",
    RESTORE_WIN: "Restore window",
    MAXIMIZE_WIN: "Maximize window",
    CLOSE_WIN: "Close window"
  },
  "de-DE": {
    NAME: "Dateiname",
    SIZE: "Grösse",
    MODIFIED: "Geändert am",
    MINIMIZE_WIN: "Fenster minimieren",
    RESTORE_WIN: "Fenster wiederherstellen",
    MAXIMIZE_WIN: "Fenster maximieren",
    CLOSE_WIN: "Fenster schliessen"
  }
};

So we have two locales with translations per term. We are going to inject the dictionary as a dependency into our I18n service. The ./js/Service/I18n.js file contains the following code:

//...
constructor( dictionary ){
  super();
  this.dictionary = dictionary;
  this._locale = "en-US";
}
translate( token, defaultValue ) {
  const dictionary = this.dictionary[ this._locale ];
  return dictionary[ token ] || defaultValue;
}
//...

We also added a new method, translate, that accepts two parameters: a token and a default translation. The first parameter can be one of the keys from the dictionary, like NAME. The second one is a fallback value for the case when the requested token does not yet exist in the dictionary. Thus we still get meaningful text, at least in English. Let's see how we can use this new method. The ./js/View/FileList.js file contains the following code:

//...
update( collection ) {
  this.el.innerHTML = `<li class="file-list__li file-list__head">
    <span class="file-list__li__name">${this.i18n.translate( "NAME", "Name" )}</span>
    <span class="file-list__li__size">${this.i18n.translate( "SIZE", "Size" )}</span>
    <span class="file-list__li__time">${this.i18n.translate( "MODIFIED", "Modified" )}</span>
  </li>`;
  //...

We replace the hardcoded column titles in the FileList view with calls to the translate method of the I18n instance, meaning that every time the view updates, it receives the actual translations. We shall not forget about the TitleBarActions view, where we have the windowing action buttons. The ./js/View/TitleBarActions.js file contains the following code:

constructor( boundingEl, i18nService ){
  this.i18n = i18nService;
  //...
  // Subscribe on i18nService updates
  i18nService.on( "update", () => this.translate() );
}
translate(){
  this.unmaximizeEl.title = this.i18n.translate( "RESTORE_WIN", "Restore window" );
  this.maximizeEl.title = this.i18n.translate( "MAXIMIZE_WIN", "Maximize window" );
  this.minimizeEl.title = this.i18n.translate( "MINIMIZE_WIN", "Minimize window" );
  this.closeEl.title = this.i18n.translate( "CLOSE_WIN", "Close window" );
}

Here we add the method translate, which updates the button title attributes with the actual translations. We subscribe to the i18n update event to call the method every time the user changes the locale.

Context menu

Well, with our application we can already navigate through the file system and open files. Yet, one might expect more of a file explorer. We can add some file-related actions like delete and copy/paste. Usually these tasks are available via the context menu, which gives us a good opportunity to examine how to make one with NW.js. With the environment integration API, we can create an instance of the system menu (http://docs.nwjs.io/en/latest/References/Menu/). Then we compose objects representing menu items and attach them to the menu instance (http://docs.nwjs.io/en/latest/References/MenuItem/).
This menu can be shown in an arbitrary position:

const menu = new nw.Menu(),
      menuItem = new nw.MenuItem({
        label: "Say hello",
        click: () => console.log( "hello!" )
      });
menu.append( menuItem );
menu.popup( 10, 10 );

Yet our task is more specific. We have to display the menu on a right mouse click, at the position of the cursor. We achieve that by subscribing a handler to the contextmenu DOM event:

document.addEventListener( "contextmenu", ( e ) => {
  console.log( `Show menu in position ${e.x}, ${e.y}` );
});

Now whenever we right-click within the application window, the menu shows up. That is not exactly what we want, is it? We need it only when the cursor resides within a particular region, for instance, when it hovers over a file name. That means we have to test whether the target element matches our conditions:

document.addEventListener( "contextmenu", ( e ) => {
  const el = e.target;
  if ( el instanceof HTMLElement && el.parentNode.dataset.file ) {
    console.log( `Show menu in position ${e.x}, ${e.y}` );
  }
});

Here we ignore the event until the cursor hovers over any cell of a file table row, given that every row is a list item generated by the FileList view and therefore provided with a value for the data-file attribute. This passage explains pretty much how to build a system menu and how to attach it to the file list. But before starting on a module capable of creating the menu, we need a service to handle file operations. The ./js/Service/File.js file contains the following code:

const fs = require( "fs" ),
      path = require( "path" ),
      // Copy file helper
      cp = ( from, toDir, done ) => {
        const basename = path.basename( from ),
              to = path.join( toDir, basename ),
              write = fs.createWriteStream( to );
        fs.createReadStream( from )
          .pipe( write );
        write
          .on( "finish", done );
      };

class FileService {
  constructor( dirService ){
    this.dir = dirService;
    this.copiedFile = null;
  }
  remove( file ){
    fs.unlinkSync( this.dir.getFile( file ) );
    this.dir.notify();
  }
  paste(){
    const file = this.copiedFile;
    if ( fs.lstatSync( file ).isFile() ){
      cp( file, this.dir.getDir(), () => this.dir.notify() );
    }
  }
  copy( file ){
    this.copiedFile = this.dir.getFile( file );
  }
  open( file ){
    nw.Shell.openItem( this.dir.getFile( file ) );
  }
  showInFolder( file ){
    nw.Shell.showItemInFolder( this.dir.getFile( file ) );
  }
}
exports.FileService = FileService;

What's going on here? FileService receives an instance of DirService as a constructor argument. It uses the instance to obtain the full path to a file by name (this.dir.getFile( file )). It also exploits the notify method of the instance to request that all the views subscribed to DirService update. The method showInFolder calls the corresponding method of nw.Shell to show the file in its parent folder with the system file manager. As you can reckon, the method remove deletes the file. As for copy/paste, we do the following trick: when the user clicks copy, we store the target file path in the copiedFile property, so when the user next clicks paste, we can use it to copy that file to the (possibly changed) current location. The method open evidently opens the file with the default associated program. That is what we did in the FileList view directly; actually, this action belongs in FileService, so we refactor the view to use the service. The ./js/View/FileList.js file contains the following code:

constructor( boundingEl, dirService, i18nService, fileService ){
  this.file = fileService;
  //...
}
bindUi(){
  //...
  this.file.open( el.dataset.file );
  //...
}

Now we have a module to handle the context menu for a selected file. The module will subscribe to the contextmenu DOM event and build a menu when the user right-clicks on a file. This menu will contain the items Show Item in the Folder, Copy, Paste, and Delete, with Copy and Paste separated from the other items by delimiters. Besides, Paste will be disabled until we store a file with Copy. The source code follows. The ./js/View/ContextMenu.js file contains the following code:

class ContextMenuView {
  constructor( fileService, i18nService ){
    this.file = fileService;
    this.i18n = i18nService;
    this.attach();
  }
  getItems( fileName ){
    const file = this.file,
          isCopied = Boolean( file.copiedFile );
    return [
      {
        label: this.i18n.translate( "SHOW_FILE_IN_FOLDER", "Show Item in the Folder" ),
        enabled: Boolean( fileName ),
        click: () => file.showInFolder( fileName )
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate( "COPY", "Copy" ),
        enabled: Boolean( fileName ),
        click: () => file.copy( fileName )
      },
      {
        label: this.i18n.translate( "PASTE", "Paste" ),
        enabled: isCopied,
        click: () => file.paste()
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate( "DELETE", "Delete" ),
        enabled: Boolean( fileName ),
        click: () => file.remove( fileName )
      }
    ];
  }
  render( fileName ){
    const menu = new nw.Menu();
    this.getItems( fileName ).forEach(( item ) => menu.append( new nw.MenuItem( item )));
    return menu;
  }
  attach(){
    document.addEventListener( "contextmenu", ( e ) => {
      const el = e.target;
      if ( !( el instanceof HTMLElement ) ) {
        return;
      }
      if ( el.classList.contains( "file-list" ) ) {
        e.preventDefault();
        this.render()
          .popup( e.x, e.y );
      }
      // If a child of an element matching [data-file]
      if ( el.parentNode.dataset.file ) {
        e.preventDefault();
        this.render( el.parentNode.dataset.file )
          .popup( e.x, e.y );
      }
    });
  }
}
exports.ContextMenuView = ContextMenuView;

So in the ContextMenuView constructor, we receive instances of FileService and I18nService. During construction we also call the attach method, which subscribes to the contextmenu DOM event, creates the menu, and shows it at the position of the mouse cursor. The event gets ignored unless the cursor hovers over a file or resides in the empty area of the file list component. When the user right-clicks the file list, the menu still appears, but with all items disabled except Paste (in case a file was copied before). The render method creates an instance of the menu and populates it with nw.MenuItems created by the getItems method. That method creates an array representing the menu items; elements of the array are object literals. The label property accepts the translation for the item caption. The enabled property defines the state of the item depending on our case (whether we have a copied file or not, whether the cursor is on a file or not). Finally, the click property expects the handler for the click event. Now we need to enable our new components in the main module. The ./js/app.js file contains the following code:

const { FileService } = require( "./js/Service/File" ),
      { ContextMenuView } = require( "./js/View/ContextMenu" ),
      fileService = new FileService( dirService );
new FileListView( document.querySelector( "[data-bind=fileList]" ), dirService, i18nService, fileService );
new ContextMenuView( fileService, i18nService );

Let's now run the application, right-click on a file and voilà! We have the context menu and new file actions.

System clipboard

Usually copy/paste functionality involves the system clipboard.
NW.js provides an API to control it (http://docs.nwjs.io/en/latest/References/Clipboard/). Unfortunately, it's quite limited; we cannot transfer an arbitrary file between applications, which you might expect of a file manager. Yet some things are still available to us.

Transferring text

In order to examine text transfer with the clipboard, we modify the copy method of FileService:

copy( file ){
  this.copiedFile = this.dir.getFile( file );
  const clipboard = nw.Clipboard.get();
  clipboard.set( this.copiedFile, "text" );
}

What does it do? As soon as we have obtained the file's full path, we create an instance of nw.Clipboard and save the file path there as text. So now, after copying a file within the File Explorer, we can switch to an external program (for example, a text editor) and paste the copied path from the clipboard.

Transferring graphics

It doesn't look very handy, does it? It would be more interesting if we could copy/paste a file. Unfortunately, NW.js doesn't give us many options when it comes to file exchange. Yet we can transfer PNG and JPEG images between an NW.js application and external programs. The ./js/Service/File.js file contains the following code:

//...
copyImage( file, type ){
  const clip = nw.Clipboard.get(),
        // load file content as Base64
        data = fs.readFileSync( file ).toString( "base64" ),
        // image as HTML
        html = `<img src="data:image/${type};base64,${data}">`;
  // write both options (raw image and HTML) to the clipboard
  clip.set([
    { type, data: data, raw: true },
    { type: "html", data: html }
  ]);
}
copy( file ){
  this.copiedFile = this.dir.getFile( file );
  const ext = path.parse( this.copiedFile ).ext.substr( 1 );
  switch ( ext ){
    case "jpg":
    case "jpeg":
      return this.copyImage( this.copiedFile, "jpeg" );
    case "png":
      return this.copyImage( this.copiedFile, "png" );
  }
}
//...

We extended our FileService with the private method copyImage. It reads a given file, converts its contents to Base64, and passes the resulting code to a clipboard instance. In addition, it creates HTML with an image tag holding the Base64-encoded image as a data Uniform Resource Identifier (URI). Now, after copying an image (PNG or JPEG) in the File Explorer, we can paste it into an external program such as a graphical editor or a text processor.

Receiving text and graphics

We've learned how to pass text and graphics from our NW.js application to external programs. But how can we receive data from outside? As you can guess, it is accessible through the get method of nw.Clipboard. Text can be retrieved as simply as this:

const clip = nw.Clipboard.get();
console.log( clip.get( "text" ) );

When a graphic is put in the clipboard, we can get it with NW.js only as Base64-encoded content or as HTML. To see it in practice, we add a few methods to FileService. The ./js/Service/File.js file contains the following code:

//...
hasImageInClipboard(){
  const clip = nw.Clipboard.get();
  return clip.readAvailableTypes().indexOf( "png" ) !== -1;
}
pasteFromClipboard(){
  const clip = nw.Clipboard.get();
  if ( this.hasImageInClipboard() ) {
    const base64 = clip.get( "png", true ),
          binary = Buffer.from( base64, "base64" ),
          filename = Date.now() + "--img.png";
    fs.writeFileSync( this.dir.getFile( filename ), binary );
    this.dir.notify();
  }
}
//...

The method hasImageInClipboard checks whether the clipboard holds any graphics. The method pasteFromClipboard takes graphical content from the clipboard as a Base64-encoded PNG, converts the content into binary code, writes it into a file, and requests that DirService subscribers update. To make use of these methods, we need to edit the ContextMenu view. The ./js/View/ContextMenu.js file contains the following code:

getItems( fileName ){
  const file = this.file,
        isCopied = Boolean( file.copiedFile );
  return [
    //...
    {
      label: this.i18n.translate( "PASTE_FROM_CLIPBOARD", "Paste image from clipboard" ),
      enabled: file.hasImageInClipboard(),
      click: () => file.pasteFromClipboard()
    },
    //...
  ];
}

We add a new item to the menu, Paste image from clipboard, which is enabled only when there is a graphic in the clipboard.

Summary

In this article, we have covered the concepts of internationalization and localization, and also covered the context menu and system clipboard in detail.

Introduction to Raspberry Pi Zero W Wireless

Packt
03 Mar 2018
14 min read
In this article by Vasilis Tzivaras, the author of the book Raspberry Pi Zero W Wireless Projects, we will be covering the following topics:

An overview of the Raspberry Pi family
An introduction to the new Raspberry Pi Zero W
Distributions
Common issues

Raspberry Pi Zero W is the new product of the Raspberry Pi Zero family. In early 2017, the Raspberry Pi community announced a new board with a wireless extension. It offers wireless functionality, and now everyone can develop their own projects without cables and other components. Comparing the new board with the Raspberry Pi 3 Model B, we can easily see that it is quite a bit smaller, with many possibilities for the Internet of Things. But what is a Raspberry Pi Zero W and why do you need it? Let's go through the rest of the family and introduce the new board.

(For more resources related to this topic, see here.)

Raspberry Pi family

As said earlier, Raspberry Pi Zero W is the new member of the Raspberry Pi family of boards. Over the years, Raspberry Pi boards have evolved and become more user friendly, with endless possibilities. Let's have a short look at the rest of the family so we can understand how the Pi Zero board differs. Right now, the heavy-duty board is named Raspberry Pi 3 Model B. It is the best solution for projects such as face recognition, video tracking, gaming, or anything else that is demanding:

RASPBERRY PI 3 MODEL B

It is the third generation of Raspberry Pi boards, after the Raspberry Pi 2, and has the following specs:

A 1.2GHz 64-bit quad-core ARMv8 CPU
802.11n Wireless LAN
Bluetooth 4.1
Bluetooth Low Energy (BLE)
Like the Pi 2, it also has 1GB RAM
4 USB ports
40 GPIO pins
Full HDMI port
Ethernet port
Combined 3.5mm audio jack and composite video
Camera interface (CSI)
Display interface (DSI)
Micro SD card slot (now push-pull rather than push-push)
VideoCore IV 3D graphics core

The next board is the Raspberry Pi Zero, on which the Zero W is based. It is a small, low-cost, low-power board able to do many things:

Raspberry Pi Zero

The specs of this board are as follows:

1GHz, single-core CPU
512MB RAM
Mini-HDMI port
Micro-USB OTG port
Micro-USB power
HAT-compatible 40-pin header
Composite video and reset headers
CSI camera connector (v1.3 only)

At this point, we should not forget to mention that, apart from the boards mentioned earlier, there are several other modules and components, such as the Sense HAT or the Raspberry Pi Touch Display, which work great for advanced projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems, and embedded projects:

RASPBERRY PI Touch Display

The Sense HAT is an add-on board for Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:

Gyroscope
Accelerometer
Magnetometer
Temperature
Barometric pressure
Humidity

sense HAT

Stay tuned for more new boards and modules at the official website: https://www.raspberrypi.org/

Raspberry Pi Zero W

Raspberry Pi Zero W is a small device that can be connected to an external monitor or TV and, of course, to the internet. The operating system varies, as there are many distros on the official page, and almost every one is based on Linux.

Raspberry Pi Zero W

With Raspberry Pi Zero W you have the ability to do almost everything, from automation to gaming! It is a small computer that allows you to easily program with the help of the GPIO pins and some other components, such as a camera. Its possibilities are endless!

Specifications

If you have bought a Raspberry Pi 3 Model B, you will be familiar with the Cypress CYW43438 wireless chip. It provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with that wireless chip as well. Following are the specifications of the new board:

Dimensions: 65mm × 30mm × 5mm
SoC: Broadcom BCM2835 chip
ARM11 at 1GHz, single-core CPU
512MB RAM
Storage: MicroSD card
Video and Audio: 1080p HD video and stereo audio via mini-HDMI connector
Power: 5V, supplied via micro USB connector
Wireless: 2.4GHz 802.11n wireless LAN
Bluetooth: Bluetooth Classic 4.1 and Bluetooth Low Energy (BLE)
Output: Micro USB
GPIO: 40-pin GPIO, unpopulated

Raspberry Pi Zero W

Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep the board safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be as visible as it is on other similar boards, but it works great and offers quite a lot of functionality:

Raspberry Pi Zero W capacitors

Also, the product is limited to only one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some extra components for about $45, or choose the camera full kit, which contains a small camera module, for $55.

Camera support

Image processing projects such as video tracking or face recognition require a camera. Following is the official camera support of the Raspberry Pi Zero W. The camera can easily be mounted at the side of the board using a cable, just like on the Raspberry Pi 3 Model B board:

The official camera support of Raspberry Pi Zero W

Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be mentioned in the project.

Accessories

When building projects with the new board, there are some other gadgets that you might find useful to work with. Following is a list of some essential components. Notice that if you buy the Raspberry Pi Zero W kit, it includes some of them, so be careful and don't buy them twice:

OTG cable
Power hub
GPIO header
microSD card and card adapter
HDMI to miniHDMI cable
HDMI to VGA cable

Distributions

The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze next are RASPBIAN and NOOBS. Following you can see how the desktop environment looks. Both RASPBIAN and NOOBS allow you to choose from two versions: there is the full version of the operating system and the lite one. Obviously, the lite version does not contain everything that you might use, so if you tend to use your Raspberry Pi with a desktop environment, choose and download the full version. On the other hand, if you tend to just SSH in and do some basic stuff, pick the lite one. It's really up to you, and of course you can easily download anything you like again and re-write your microSD card.
NOOBS distribution

Download NOOBS: https://www.raspberrypi.org/downloads/noobs/.

The NOOBS distribution is for new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it is really "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut, and of course you can download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links:

Full guide at https://www.raspberrypi.org/learning/software-guide/.
View the video at https://www.raspberrypi.org/help/videos/#noobs-setup.

The NOOBS operating system contains Raspbian and provides various other operating systems available to download.

RASPBIAN distribution

Download RASPBIAN: https://www.raspberrypi.org/downloads/raspbian/.

Raspbian is the officially supported operating system. It can be installed through NOOBS or by downloading the image file at the following link and going through the guide on the official website.

Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md.

It comes with plenty of pre-installed software such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, other distributions like Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant to be installed for more specific projects like Internet of Things (IoT) or weather stations. To conclude, the right distribution to install actually depends on your project and your expertise in Linux systems administration.

Raspberry Pi Zero W needs a microSD card to host any operating system. You are able to write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like, so all that you need to do is simply write your operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use different programs:

7-Zip for Windows
The Unarchiver for Mac
Unzip for Linux

Now we are ready to write the image to the microSD card. You can easily write the .img file to the microSD card by following one of the next guides according to your system.

For Linux users, the dd tool is recommended. Before connecting your microSD card with your adapter to your computer, run the following command:

df -h

Now connect your card and run the same command again. You must see some new records. For example, if the new device is called /dev/sdd1, keep in mind that the card is at /dev/sdd (without the 1). The next step is to use the dd command and copy the image to the microSD card. We can do this with the following command:

dd if=<image file> of=<microSD card device>

where if is the input file (the image file of the distribution) and of is the output file (the microSD card). Again, be careful here and use only /dev/sdd, or whatever yours is, without any numbers. If you are having trouble with that, please use the full manual at the following link: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md.

A good tool that could help you out with this job is GParted. If it is not installed on your system, you can easily install it with the following command:

sudo apt-get install gparted

Then run sudo gparted to start the tool. It handles partitions very easily and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md

For Mac OS users, the dd tool is also recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md

There are several other ways to write an image file to a microSD card, so if you run into any problems when following the guides above, feel free to use any other guide available on the internet. Now, assuming that everything is OK and the image is ready, you can gently plug the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code: on Linux systems you can use sha1sum followed by the file name (the image) to print the SHA-1 code, which must be the same as the one shown at the end of the official page where you downloaded the image.

Common issues

Sometimes, working with Raspberry Pi boards can lead to issues. We have all faced some of them and hope to never face them again. The Pi Zero is so minimal that it can be tough to tell whether it is working or not. Since there is no LED on the board, a quick check of whether it is working properly or something went wrong is handy.

Debugging steps

With the following steps you will probably find its status:

Take your board, with nothing in any slot or socket. Remove even the microSD card!
Take a normal micro-USB to USB-A data sync cable and connect one side to your computer and the other side to the Pi's USB port (not PWR_IN).
If the Zero is alive:
On Windows, the PC will go ding for the presence of new hardware and you should see BCM2708 Boot in Device Manager.
On Linux, you will see an ID 0a5c:2763 Broadcom Corp message from dmesg. Try to run dmesg in a terminal before you plug in the USB and after. You will find a new record there. Output example:

[226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd
[226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763
[226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[226314.213284] usb 4-2: Product: BCM2708 Boot
[226314.213] usb 4-2: Manufacturer: Broadcom

If you see any of the preceding, so far so good; you know the Zero is not dead.

microSD card issue

Remember that if you boot your Raspberry Pi and nothing is working, you may have burned your microSD card wrong. This means that your card may not contain a boot partition as it should, and it is not able to boot the first files. That problem occurs when the distribution is burned to /dev/sdd1 and not to /dev/sdd, as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work!

Case protection

Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. Doing so will affect the booting operation of the Raspberry Pi and it will probably not work. So, a word of advice: spend some extra money on the Raspberry Pi case and protect your board from anything like that. There are also many problems and issues when hanging your Raspberry Pi using tacks. It may be silly, but there are many people that do that.
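One more check that helps with the microSD card issue above is verifying the checksum of the image you downloaded. The following is a minimal Python sketch (standard library only, not from the book) that prints the SHA-1 digest of a file so you can compare it with the value published on the downloads page; the filename is only an example, so substitute the archive you actually downloaded:

import hashlib

def sha1_of_file(path, chunk_size=1024 * 1024):
    # Read the file in chunks so even a multi-gigabyte image fits in memory
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# example usage: compare the printed digest with the one on raspberrypi.org
print(sha1_of_file("raspbian-image.zip"))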
Summary

The Raspberry Pi Zero W is a promising new board that allows everyone to connect their devices to the Internet and use their skills to develop projects involving both software and hardware. This board is the new toy of any engineer interested in the Internet of Things, security, automation, and more! We have gone through an introduction to the new Raspberry Pi Zero W board and the rest of its family, and a brief look at some extra components that you should buy as well.

Further resources on this subject: Raspberry Pi Zero W Wireless Projects, Full Stack Web Development with Raspberry Pi 3


How to compute Discrete Fourier Transform (DFT) using SciPy

Pravin Dhandre
02 Mar 2018
5 min read
This article is an excerpt from a book co-authored by L. Felipe Martins, Ruben Oliva Ramos, and V Kishore Ayyadevara titled SciPy Recipes. This book provides numerous recipes to tackle day-to-day challenges associated with scientific computing and data manipulation using the SciPy stack.

Today, we will compute the Discrete Fourier Transform (DFT) and the inverse DFT using the SciPy stack. In this article, we will focus mainly on the syntax and the application of the DFT in SciPy, assuming you are well versed with the mathematics of this concept.

Discrete Fourier Transforms

A discrete Fourier transform transforms any signal from its time/space domain into a related signal in the frequency domain. This allows us not only to analyze the different frequencies of the data, but also enables faster filtering operations, when used properly. It is possible to turn a signal in the frequency domain back to its time/spatial domain thanks to the inverse Fourier transform (IFT).

How to do it…

To follow the example, we need to go through the following steps:

The basic routines in the scipy.fftpack module compute the DFT and its inverse, for discrete signals in any dimension—fft, ifft (one dimension), fft2, ifft2 (two dimensions), and fftn, ifftn (any number of dimensions). Note that all these routines assume that the data is complex valued. If we know beforehand that a particular dataset is actually real-valued, we can use rfft and irfft instead, for a faster algorithm that stores the spectrum in a real-valued array. In all cases, these routines are designed so that composition with their inverses always yields the identity. The syntax is the same in all cases, as follows:

 fft(x[, n, axis, overwrite_x])

The first parameter, x, is always the signal in any array-like form. Note that fft performs one-dimensional transforms. This means that if x happens to be two-dimensional, for example, fft will output another two-dimensional array, where each row is the transform of each row of the original. We can use columns instead, with the optional axis parameter. The rest of the parameters are also optional; n indicates the length of the transform and overwrite_x gets rid of the original data to save memory and resources. We usually play with the n integer when we need to pad the signal with zeros or truncate it. For a higher dimension, n is substituted by shape (a tuple) and axis by axes (another tuple).

To better understand the output, it is often useful to shift the zero frequencies to the center of the output arrays with fftshift. The inverse of this operation, ifftshift, is also included in the module.

How it works…

The following code shows some of these routines in action when applied to a checkerboard:

 import numpy
 from scipy.fftpack import fft, fft2, fftshift
 import matplotlib.pyplot as plt

 B = numpy.ones((4,4)); W = numpy.zeros((4,4))
 signal = numpy.bmat("B,W;W,B")
 onedimfft = fft(signal, n=16)
 twodimfft = fft2(signal, shape=(16,16))
 plt.figure()
 plt.gray()
 plt.subplot(121, aspect='equal')
 plt.pcolormesh(onedimfft.real)
 plt.colorbar(orientation='horizontal')
 plt.subplot(122, aspect='equal')
 plt.pcolormesh(fftshift(twodimfft.real))
 plt.colorbar(orientation='horizontal')
 plt.show()

Note how the first four rows of the one-dimensional transform are equal (and so are the last four), while the two-dimensional transform (once shifted) presents a peak at the origin and nice symmetries in the frequency domain.
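As a quick aside (this snippet is not part of the original recipe), the real-valued variants mentioned above can be exercised with a minimal sketch like the following; the array values are made up purely for illustration:

 import numpy
 from scipy.fftpack import rfft, irfft

 # A small real-valued signal (values chosen arbitrarily for illustration)
 x = numpy.array([1.0, 2.0, 1.0, -1.0, 1.5, 1.0, 0.5, 0.0])

 # rfft packs the spectrum of a real signal into a real-valued array
 spectrum = rfft(x)
 print(spectrum)

 # Composition with the inverse recovers the original signal (up to rounding error)
 recovered = irfft(spectrum)
 print(numpy.allclose(x, recovered))   # expected: True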
In the following screenshot, obtained from the preceding checkerboard code, the image on the left is the fft and the one on the right is the fft2 of a 2 x 2 checkerboard signal:

Computing the discrete Fourier transform (DFT) of a data series using the FFT algorithm

In this section, we will see how to compute the discrete Fourier transform and some of its applications.

How to do it…

In the following table, we will see the parameters to create a data series using the FFT algorithm:

How it works…

The following code computes the FFT of a complex exponential:

 np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8))
 array([ -3.44505240e-16 +1.14383329e-17j,
          8.00000000e+00 -5.71092652e-15j,
          2.33482938e-16 +1.22460635e-16j,
          1.64863782e-15 +1.77635684e-15j,
          9.95839695e-17 +2.33482938e-16j,
          0.00000000e+00 +1.66837030e-15j,
          1.14383329e-17 +1.22460635e-16j,
         -1.64863782e-15 +1.77635684e-15j])

In the next example, the real input has an FFT that is Hermitian, that is, symmetric in the real part and anti-symmetric in the imaginary part, as described in the numpy.fft documentation:

 import matplotlib.pyplot as plt
 t = np.arange(256)
 sp = np.fft.fft(np.sin(t))
 freq = np.fft.fftfreq(t.shape[-1])
 plt.plot(freq, sp.real, freq, sp.imag)
 [<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]
 plt.show()

The following screenshot shows how we represent the results:

Computing the inverse DFT of a data series

In this section, we will learn how to compute the inverse DFT of a data series.

How to do it…

In this section we will see how to compute the inverse Fourier transform. The returned complex array contains y(0), y(1), ..., y(n-1), where:

 y(j) = (1/n) * sum over k from 0 to n-1 of x(k) * exp(2*pi*i*j*k/n), for j = 0, ..., n-1

How it works…

In this part, we compute the inverse DFT of a short data series:

 np.fft.ifft([0, 4, 0, 0])
 array([ 1.+0.j, 0.+1.j, -1.+0.j, 0.-1.j])

Create and plot a band-limited signal with random phases:

 import matplotlib.pyplot as plt
 t = np.arange(400)
 n = np.zeros((400,), dtype=complex)
 n[40:60] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20,)))
 s = np.fft.ifft(n)
 plt.plot(t, s.real, 'b-', t, s.imag, 'r--')
 plt.legend(('real', 'imaginary'))
 plt.show()

Then we represent it, as shown in the following screenshot:

We successfully explored how to transform signals from the time or space domain into the frequency domain and vice versa, allowing you to analyze frequencies in detail. If you found this tutorial useful, do check out the book SciPy Recipes to get hands-on recipes to perform various data science tasks with ease.
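To close the loop on the earlier claim that composing a transform with its inverse yields the identity, here is a tiny self-check (not part of the original excerpt; the test signal is arbitrary):

 import numpy as np

 # Arbitrary complex test signal
 x = np.random.random(32) + 1j * np.random.random(32)

 # Round-tripping through fft and ifft should reproduce x up to floating-point error
 roundtrip = np.fft.ifft(np.fft.fft(x))
 print(np.allclose(x, roundtrip))   # expected: True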


How to use MapReduce with Mongo shell

Amey Varangaonkar
02 Mar 2018
8 min read
The following excerpt is taken from the book Mastering MongoDB 3.x, authored by Alex Giamas. This book demonstrates the power of MongoDB to build high performance database solutions with ease.

MongoDB is one of the most popular NoSQL databases in the world and can be combined with various Big Data tools for efficient data processing. In this article we explore an interesting feature of MongoDB that has been underappreciated and not widely supported throughout the industry as yet: the ability to write MapReduce natively using the shell.

MapReduce is a data processing method for getting aggregate results from a large set of data. Its main advantage is that it is inherently parallelizable, as evidenced by frameworks such as Hadoop. A simple example of MapReduce, given the following input books collection, would be:

 > db.books.find()
 { "_id" : ObjectId("592149c4aabac953a3a1e31e"), "isbn" : "101", "name" : "Mastering MongoDB", "price" : 30 }
 { "_id" : ObjectId("59214bc1aabac954263b24e0"), "isbn" : "102", "name" : "MongoDB in 7 years", "price" : 50 }
 { "_id" : ObjectId("59214bc1aabac954263b24e1"), "isbn" : "103", "name" : "MongoDB for experts", "price" : 40 }

Our map and reduce functions are defined as follows:

 > var mapper = function() { emit(this.id, 1); };

In this mapper, we simply output a key of the id of each document with a value of 1:

 > var reducer = function(id, count) { return Array.sum(count); };

In the reducer, we sum across all values (where each one has a value of 1):

 > db.books.mapReduce(mapper, reducer, { out: "books_count" });
 {
   "result" : "books_count",
   "timeMillis" : 16613,
   "counts" : { "input" : 3, "emit" : 3, "reduce" : 1, "output" : 1 },
   "ok" : 1
 }
 > db.books_count.find()
 { "_id" : null, "value" : 3 }
 >

Our final output is a document with no ID, since we didn't output any value for id, and a value of 3, since there are three documents in the input dataset.

Using MapReduce, MongoDB will apply map to each input document, emitting key-value pairs at the end of the map phase. Then each reducer will get key-value pairs with the same key as input, processing all of the multiple values. The reducer's output will be a single key-value pair for each key. Optionally, we can use a finalize function to further process the results of the mapper and reducer. MapReduce functions use JavaScript and run within the mongod process. MapReduce can output inline as a single document, subject to the 16 MB document size limit, or as multiple documents in an output collection. Input and output collections can be sharded.

MapReduce concurrency

MapReduce operations will place several short-lived locks that should not affect operations. However, at the end of the reduce phase, if we are outputting data to an existing collection, then output actions such as merge, reduce, and replace will take an exclusive global write lock for the whole server, blocking all other writes in the db instance. If we want to avoid that, we should invoke MapReduce in the following way:

 > db.collection.mapReduce( Mapper, Reducer, { out: { merge: "bookOrders", nonAtomic: true } })

Here, merge can be replaced by reduce; we can apply nonAtomic only to the merge or reduce actions. replace will just replace the contents of documents in bookOrders, which would not take much time anyway.

With the merge action, the new result is merged with the existing result if the output collection already exists. If an existing document has the same key as the new result, then it will overwrite that existing document.
With the reduce action, the new result is processed together with the existing result if the output collection already exists. If an existing document has the same key as the new result, it will apply the reduce function to both the new and the existing documents and overwrite the existing document with the result.

Although MapReduce has been present since the early versions of MongoDB, it hasn't evolved as much as the rest of the database, resulting in its usage being less than that of specialized MapReduce frameworks such as Hadoop.

Incremental MapReduce

Incremental MapReduce is a pattern where we use MapReduce to aggregate on top of previously calculated values. An example would be counting non-distinct users in a collection for different reporting periods (that is, hour, day, month) without the need to recalculate the result every hour.

To set up our data for incremental MapReduce we need to do the following:

 Output our reduced data to a different collection.
 At the end of every hour, query only for the data that got into the collection in the last hour.
 With the output of our reduced data, merge our results with the calculated results from the previous hour.

Following up on the previous example, let's assume that we have a published field in each of the documents, with our input dataset being:

 > db.books.find()
 { "_id" : ObjectId("592149c4aabac953a3a1e31e"), "isbn" : "101", "name" : "Mastering MongoDB", "price" : 30, "published" : ISODate("2017-06-25T00:00:00Z") }
 { "_id" : ObjectId("59214bc1aabac954263b24e0"), "isbn" : "102", "name" : "MongoDB in 7 years", "price" : 50, "published" : ISODate("2017-06-26T00:00:00Z") }

Using our previous example of counting books, we would get the following:

 var mapper = function() { emit(this.id, 1); };
 var reducer = function(id, count) { return Array.sum(count); };
 > db.books.mapReduce(mapper, reducer, { out: "books_count" })
 {
   "result" : "books_count",
   "timeMillis" : 16700,
   "counts" : { "input" : 2, "emit" : 2, "reduce" : 1, "output" : 1 },
   "ok" : 1
 }
 > db.books_count.find()
 { "_id" : null, "value" : 2 }

Now we get a third book in our mongo_books collection with this document:

 { "_id" : ObjectId("59214bc1aabac954263b24e1"), "isbn" : "103", "name" : "MongoDB for experts", "price" : 40, "published" : ISODate("2017-07-01T00:00:00Z") }

 > db.books.mapReduce( mapper, reducer, { query: { published: { $gte: ISODate('2017-07-01 00:00:00') } }, out: { reduce: "books_count" } } )
 > db.books_count.find()
 { "_id" : null, "value" : 3 }

What happened here is that by querying for documents published in July 2017, we only got the new document out of the query, and then reduced its value together with the already calculated value of 2 in our books_count document, adding 1 for a final sum of three documents. This example, as contrived as it is, shows a powerful attribute of MapReduce: the ability to re-reduce results to incrementally calculate aggregations over time.

Troubleshooting MapReduce

Throughout the years, one of the major shortcomings of MapReduce frameworks has been the inherent difficulty in troubleshooting compared to simpler, non-distributed patterns. Most of the time, the most effective tool is debugging using log statements to verify that output values match our expected values. In the mongo shell, this being a JavaScript shell, it is as simple as outputting using the console.log() function. Diving deeper into MapReduce in MongoDB, we can debug both the map and the reduce phase by overloading the output values.
Debugging the mapper phase, we can overload the emit() function to test what the output key values are:

 > var emit = function(key, value) {
     print("debugging mapper's emit");
     print("key: " + key + " value: " + tojson(value));
 }

We can then call it manually on a single document to verify that we get back the key-value pair that we would expect:

 > var myDoc = db.orders.findOne( { _id: ObjectId("50a8240b927d5d8b5891743c") } );
 > mapper.apply(myDoc);

The reducer function is somewhat more complicated. A MapReduce reducer function must meet the following criteria:

 It must be idempotent
 It must be commutative
 The order of values coming from the mapper function should not matter for the reducer's result
 The reduce function must return the same type of result as the mapper function

We will dissect these requirements to understand what they really mean:

It must be idempotent: MapReduce by design may call the reducer multiple times for the same key with multiple values from the mapper phase. It also doesn't need to reduce single instances of a key, as they are just added to the set. The final value should be the same no matter the order of execution. This can be verified by writing our own "verifier" function forcing the reducer to re-reduce, or by executing the reducer many, many times:

 reduce( key, [ reduce(key, valuesArray) ] ) == reduce( key, valuesArray )

It must be commutative: Again, because multiple invocations of the reducer may happen for the same key, if it has multiple values, the following should hold:

 reduce(key, [ C, reduce(key, [ A, B ]) ] ) == reduce( key, [ C, A, B ] )

The order of values coming from the mapper function should not matter for the reducer's result: We can test that the order of values from the mapper doesn't change the output of the reducer by passing in documents to the mapper in a different order and verifying that we get the same results out:

 reduce( key, [ A, B ] ) == reduce( key, [ B, A ] )

The reduce function must return the same type of result as the mapper function: Hand-in-hand with the first requirement, the type of object that the reduce function returns should be the same as the output of the mapper function.

We saw how MapReduce is useful when implemented on a data pipeline. Multiple MapReduce commands can be chained to produce different results. An example would be aggregating data by different reporting periods (hour, day, week, month, year), where we use the output of each more granular reporting period to produce a less granular report.

If you found this article useful, make sure to check our book Mastering MongoDB 3.x to get more insights and information about MongoDB's vast data storage, management and administration capabilities.
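If you prefer to drive the same map/reduce job from Python rather than the mongo shell, a rough pymongo sketch might look like the following. This is illustrative only and not taken from the book; it assumes a local mongod with the mongo_books.books collection shown earlier:

 from pymongo import MongoClient
 from bson.code import Code

 client = MongoClient("mongodb://localhost:27017")
 db = client["mongo_books"]

 # Same mapper/reducer as in the shell example, wrapped as JavaScript Code objects
 mapper = Code("function() { emit(this.id, 1); }")
 reducer = Code("function(id, count) { return Array.sum(count); }")

 # Run the server-side mapReduce command and write the output to books_count
 result = db.command({
     "mapReduce": "books",
     "map": mapper,
     "reduce": reducer,
     "out": "books_count",
 })
 print(result["counts"])

 # Inspect the reduced output collection
 for doc in db["books_count"].find():
     print(doc)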

Preparing the Spring Web Development Environment

Packt
02 Mar 2018
28 min read
In this article by Ajitesh Kumar, the author of the book Building Web Apps with Spring 5 and Angular, we will see the key aspects of web request-response handling in relation to the Spring Web MVC framework. In this article, we will go into the details of setting up a development environment for working with Spring web applications. The following are the key areas we are going to look into:

 Installing Java SDK
 Installing/configuring Maven
 Installing Eclipse IDE
 Installing/configuring Apache Tomcat Server
 Installing/configuring MySQL Database
 Introducing Docker containers
 Setting up the development environment using Docker Compose

Installing Java SDK

First and foremost, we will install the Java SDK. We will work with Java 8 throughout this book. Go ahead and access this page (http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and download the appropriate JDK kit. For Windows OS, there are two different versions, one for x86 and another for x64. Select the appropriate version and download the "exe" file. Once downloaded, double-click on the executable file. This starts the installer. Once installed, the following needs to be done:

 Set JAVA_HOME to the path where the JDK is installed.
 Include the %JAVA_HOME%/bin path in the PATH environment variable. One can do that by adding the %JAVA_HOME%/bin directory to the user PATH environment variable: open up the system properties (WinKey + Pause), select the "Advanced" tab and the "Environment Variables" button, then add or select the PATH variable in the user variables with that value.

Once done with the preceding steps, open a shell and type the command "java -version". It should print the version of Java you just installed. Next, let us try and understand how to install and configure Maven, a tool for building and managing Java projects.

Installing/Configuring Maven

Maven is a tool which can be used for building and managing Java-based projects. The following are some of the key benefits of using Maven as a build tool:

 It provides a simple project setup that follows best practices - get a new project or module started in seconds.
 It allows a project to build using its Project Object Model (POM) and a set of plugins that are shared by all projects using Maven, providing a uniform build system.
 It allows usage of a large and growing repository of libraries and metadata to use out of the box.
 Based on model-based builds, it provides the ability to work with multiple projects at the same time. Any number of projects can be built into predefined output types such as a JAR, WAR, or distribution based on metadata about the project, without the need to do any scripting in most cases.

One can download Maven from https://maven.apache.org/download.cgi. Before installing Maven, make sure Java is installed and configured (JAVA_HOME) appropriately as mentioned in the previous section. On Windows, you can check this by typing the command "echo %JAVA_HOME%".

 Extract the distribution archive in any directory. If you work on Windows, install an unzip tool such as WinRAR, right-click on the ZIP file, and unzip it. A directory (named "apache-maven-3.3.9", the version of Maven at the time of writing) holding files and folders such as bin, conf, and so on will be created.
 Add the bin directory of the created directory, "apache-maven-3.3.9", to the PATH environment variable.
One can do that by adding the bin directory to the user PATH environment variable: open up the system properties (WinKey + Pause), select the "Advanced" tab and the "Environment Variables" button, then add or select the PATH variable in the user variables with that value.

Open a new shell and type "mvn -v". The result should print the Maven version along with details including the Java version, Java home, OS name, and so on. Now, let's look at how we can create a Java project using Maven from the command prompt before we get on to creating a Maven project in Eclipse IDE. Use the following mvn command to create a Java project:

 mvn archetype:generate -DgroupId=com.healthapp -DartifactId=HealthApp -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

With archetype:generate and the -DarchetypeArtifactId=maven-archetype-quickstart template, the following project directory structure is created:

In the preceding diagram, the healthapp folders within the src/main and src/test folders contain a hello world program named "App.java" and a corresponding test program, "AppTest.java". Also, in the top-most folder, a pom.xml file is created. In the next section, we will install Eclipse IDE and create a Maven project using the functionality provided by the IDE.

Installing Eclipse IDE

In this section, we will get ourselves set up with Eclipse IDE, a tool used by Java developers to create Java EE and web applications. Go to the Eclipse website, http://www.eclipse.org, download the latest version of Eclipse, and install it. As we shall be working with web applications, select the option "Eclipse IDE for Java EE Developers" while downloading the IDE. As you launch the IDE, it will ask you to select a folder for the workspace. Select an appropriate path and start the IDE. The following are some of the different types of projects developers can work on using the IDE:

 A new Java EE web project
 A new JavaScript project. This option is very useful when you are working on a standalone JavaScript project and planning to integrate with server components using APIs.
 Checking out existing Eclipse projects from Git and working on them
 Importing one or more existing Eclipse projects from the filesystem or an archive

Import an existing Maven project in Eclipse

In the previous section, we created a Maven project named HealthApp. We will now see how we can import this project into Eclipse IDE:

 Click File > Import.
 Type Maven in the search box under Select an import source.
 Select Existing Maven Projects.
 Click Next.
 Click Browse and select the HealthApp folder, which is the root of the Maven project. Note that it contains the pom.xml file.
 Click Finish.
 The project will be imported into Eclipse.

Make sure this is how it looks:

Figure 2: Maven project imported into Eclipse

Let's also see how one can create a new Maven project with Eclipse IDE.

Create a new Maven project in Eclipse

Follow the instructions given to create a new Java Maven project with Eclipse IDE:

 Click File > New > Project.
 Type Maven in the search box under Wizards.
 Select Maven Project. A dialog box with the title "New Maven Project", having the option "Use default Workspace location" checked, appears.
 Make sure that the Group Id is selected as org.apache.maven.archetypes, with the Artifact Id selected as maven-archetype-quickstart.
 Give a name to the Group Id, say, "com.orgname".
 Give a name to the Artifact Id, say, "healthapp2".
 Click Finish.

As a result of the preceding steps, a new Maven project will be created in Eclipse.
Make sure this is how it looks:

Figure 3: Maven project created within Eclipse

In the next section, we will see how to install and configure the Tomcat server.

Installing/Configuring Apache Tomcat Server

In this section, we will learn about the following:

 How to install and configure the Apache Tomcat server
 Common deployment approaches with the Tomcat server
 How to add the Tomcat server in Eclipse

The Apache Tomcat software is an open source implementation of the Java Servlet, JavaServer Pages (JSP), Java Expression Language, and Java WebSocket technologies. We will work with Apache Tomcat 8.x in this book and look at both the Windows and Unix versions. One can go to http://tomcat.apache.org/ and download the appropriate version from this page. At the time of installation, it requires you to choose the path to one of the JREs installed on your computer. Once installation is complete, the Apache Tomcat server is started as a Windows service. With default installation options, one can then access the Tomcat server at a URL such as http://127.0.0.1:8080/. A page such as the following will be displayed:

Figure 4: Apache Tomcat Server Homepage

The following is how Tomcat's folder structure looks:

Figure 5: Apache Tomcat Folder Structure

In the preceding diagram, note the "webapps" folder, which will contain our web apps. The following description uses these variable names:

 $CATALINA_HOME, the directory into which Tomcat is installed.
 $CATALINA_BASE, the base directory against which most relative paths are resolved. If you have not configured Tomcat for multiple instances by setting a CATALINA_BASE directory, then $CATALINA_BASE will be set to the value of $CATALINA_HOME.

The following are the most commonly used approaches to deploy web apps in Tomcat:

 Copy an unpacked directory hierarchy into a subdirectory of the directory $CATALINA_BASE/webapps/. Tomcat will assign a context path to your application based on the subdirectory name you choose.
 Copy the web application archive (WAR) file into the directory $CATALINA_BASE/webapps/. When Tomcat is started, it will automatically expand the web application archive file into its unpacked form, and execute the application that way.

Let us learn how to configure Apache Tomcat from within Eclipse. This is very useful, as one can start and stop Tomcat from Eclipse while working on web applications.

Adding/Configuring Apache Tomcat in Eclipse

In this section, we will learn how to add and configure Apache Tomcat in Eclipse. This helps to start and stop the server from within Eclipse IDE. The following steps need to be taken to achieve this objective:

 Make sure you are in the Java EE perspective.
 Click on the "Servers" tab in the lower panel. You will find a link saying "No servers are available. Click this link to create a new server...". Click on this link.
 Type "Tomcat" under Select the server type. It will show a list of Tomcat servers with different versions.
 Select "Tomcat v8.5 Server" and click Next.
 Select the Tomcat installation directory.
 Click on the "Installed JREs..." button and make sure that the appropriate JRE is checked. Click Next.
 Click Finish. This will create an entry for the Tomcat server in the "Servers" tab.
 Double-click on the Tomcat server. This will open up a configuration window where multiple options such as Server Locations, Server Options, and Ports can be configured.
 Under Server Locations, click on the "Browse Path" button to select the path to the "webapps" folder within your local Tomcat installation folder. Once done, save it using Ctrl-S.
Right click on "Tomcat Server" link listed under "Servers" panel and click "Start". This should start the server. You should be able to access the Tomcat page on the URL, http://localhost:8080/. Installing/Configuring MySQL Database In this section, we will learn on how to install MySQL database. Go to MySQL Downloads site (https://www.mysql.com/downloads/) and click on "Community (GPL) Downloads" under MySQL community edition. On the next page, you will see listing of several MySQL software packages. Download following: MySQL Community Server MySQL Connector for Java development (Connector/J) Installing/Configuring MySQL Server In this section, we will see how to download, install and configure the MySQL database and related utility such as MySQL Workbench. Note that MySQL Workbench is a unified visual tool which can be used by database architects, developers and DBA for activities such as data modeling, SQL development, and comprehensive administration tools for server configuration, user administration etc. Follow the instructions given for installation & configuration of MySQL server and workbench:  Click on "Download" link under "MySQL Community Server (GPL)" found as first entry on "MySQL Community Downloads" page. We shall be working with Windows version of MySQL in the following instructions. Click the "Download" button against the entry "Windows (x86, 32-bit), MySQL Installer MSI". This would download the an exe file such as mysql-installercommunity-5.7.16.0.exe. Double-click on the installer to start the installation. As you progress ahead after accepting the license terms and condition, you would find the interactive UI such as following. Choose the appropriate version of MySQL server and also, MySQL Workbench and click on Next. Figure 6: Selecting and installing MySQL Server and MySQL Workbench Clicking on Execute would install the MySQL server and MySQL workbench as shown in the following diagram: Figure 7: MySQL Server and Workbench installation in progress Once installation is complete, next few steps would require you to configure the MySQL database including setting root password, adding one or more users, opting to start MySQL server as a Windows service and so on. The quickest way will be to use default instructions as much as possible and finish the installation. Once all is done, you would see UI such as following: Figure 8: Completion of MySQL Server and Workbench installation Clicking on "Finish" button will take on the next window where you could choose to start MySQL workbench. Following is how the MySQL Workbench would look like after you click on MySQL server instance on the Workbench homepage, enter the root password and execute "Show databases" command: Figure 9: MySQL Workbench Using MySQL Connector Before testing MySQL database connection from Java program, one would need to add the MySQL JDBC connector library to the classpath. In this section, we will learn how to configure/add MySQL JDBC connector library to classpath while working with Eclipse IDE or command console. The MySQL connector (Connector/J) comes in ZIP file (*.tar.gz). The MySQL connector is a concrete implementation of JDBC API. Once extracted, one can see a JAR file with name such as mysql-connector-java-xxx.jar. Following are different ways in which this JAR file is dealt with while working with or without IDEs such as Eclipse:  While working with Eclipse IDE, one can add the JAR file to the classpath by adding it as Library to the Build Path in project's properties. 
While working with command console, one needs to specify the path to the JAR file in the -cp or -classpath argument when executing the Java application. Following is the sample command representing the preceding: java -cp .;/path/to/mysql-connector-java-xxx.jar com.healthapp.JavaClassName Note the "." in classpath (-cp) option. This is there to add the current directory to the classpath as well such that com.healthapp.JavaClassName can be located. Connecting to MySQL Database from a Java Class In this section, we will learn how to test the MySQL database connection from a Java program. Before executing the code shown as follows in your Eclipse IDE, make sure to do the following:  Add the MySQL connector jar file by right-clicking on top-level project folder, clicking on "Properties", clicking on "Java Build Path" and, then, adding mysqlconnector-java-xxx.jar file by clicking on "Add External JARs...": Figure 10: Adding MySQL Java Connector to Java Build Path in Eclipse IDE Create a MySQL database namely "healthapp". You could do that by accessing MySQL Workbench and executing the MySQL command such as "create database healthapp". Following diagram represents the same: Figure 11: Creating new MySQL Database using MySQL Workbench Once done with the preceding steps, use the following code to test the connection to MySQL database from your Java class. On successful connection, you should be able to see "Database connected!" getting printed. import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; /** * Sample program to test MySQL database connection */ public class App { public static void main( String[] args ) { String url = "jdbc:mysql://localhost:3306/healthapp"; String username = "root"; String password = "r00t"; //Root password set during MySQL installation procedure as described above. System.out.println("Connecting database..."); try { Connection connection = DriverManager.getConnection(url, username, password); System.out.println("Database connected!"); } catch (SQLException e) { throw new IllegalStateException("Cannot connect the database!", e); } } } Introduction to Dockers Docker is a virtualization technology which helps IT organizations achieve some of the following:  Enable Dev/QA team develop and test applications in a quick and easy manner in any environment. Break the barriers between Dev/QA and Operations teams during software development life cycle (SDLC) processes. Optimize infrastructure usage in the most appropriate manner. In this section, we will emphasize on first point which would help us setup Spring web application development in quick and easy manner. So far, we have seen traditional manners in which we could set up the Java web application development environment by installing different tools in independent manner and later configuring them appropriately. In a traditional setup, one would be required to setup and configure Java, Maven, Tomcat, MySQL server and so on, one tool at a time, by following manual steps. On the same lines, you could see that all of the steps described in preceding sections have to be performed one-by-one in manual fashion. Following are some of the disadvantages of setting up development/test environments in this manner:  Conflicting Runtimes: If a need arises to use software packages (say, different versions of Java and Tomcat) of different versions to run and test the same web application, it can become very cumbersome to manually set up the environment having different versions of software. 
Environments getting corrupted: If more than one developers are working in a particular development environment, there are chances that the environment could get corrupted due to changes made by one developer while others are not aware about. And, that generally leads to developers'/team's productivity loss due to time spent in fixing the configuration issue or re-installing the development environment from scratch. "Works for me" syndrome: Have you come across another member of your team saying that the application works in their environment although the application seems to have broken? New Developers/Testers' On-boarding: If there is a need to quickly on-board the new developers, manually setting up development environment takes some significant amount of time depending upon the applications' complexity. All of the praceding disadvantages could be taken care by making use of Dockers technology. In this section, we will learn briefly about some of the following: What are Docker Containers? What are key building blocks of Docker containers? Installing Dockers Useful commands to work with Docker containers What are Docker Containers? In this section, we will try and understand what are Docker containers while comparing them with real-world containers. Simply speaking, Docker is an open platform for developing, shipping and running applications. It provides the ability to package and run an application in a loosely isolated environment called a container. Before going into details of Docker containers, let us try and understand the problems that are solved by real-world containers. What are real-world containers good for? Following picture represents real world containers which are used to package annything and everything and, then, transport the goods from one place to other in an easy and safe manner:Figure 12: Real-world containers The following diagram represents different form of goods which needs to be transported using different from of transport mechanisms from one place to another: Figure 13: Different forms of goods vis-a-vis different form of transport mechanisms The following diagram displays the matrix representing need to transport each of the goods via different transport mechanism. The challenge is to make sure that these goods get transported in easy and safe manner: Figure 14: Complexity associated with transporting goods of different types using different transport mechanisms In order to solve preceding problem of transporting the goods in safe and easy manner irrespective of transport medium, the containers are used. Look at the following diagram: Figure 15: Goods can be packed within containers, and containers can be transported. How does Docker containers relate to the real-world containers? Now imagine the act of moving a software application from one environment to another environment starting from development right up to production. Following diagram represents complexity associated with making different application components work in different environments: Figure 16: Complexity associated with making different application components work in different environments As per the preceding diagram, to make different application components work in different environments (different hardware platforms), one would require to make sure environment compatible software versions and related configurations are set appropriately. Doing this using manual steps can be real cumbersome and error prone task. This is where docker containers fit in. 
The following diagram represents containerizing different application components using Docker containers. Just like with real-world containers, it becomes very easy to move the containerized application components from one environment to another with few or no issues:

Figure 17: Docker containers to move application components across different environments

Docker containers

In simple terms, Docker containers provide an isolated and secured environment for the application components to run. The isolation and security allow one or many containers to run simultaneously on a given host. Often, for simplicity's sake, Docker containers are loosely termed lightweight VMs (virtual machines). However, they are very different from traditional VMs. Docker containers do not need hypervisors to run, unlike virtual machines, and thus multiple containers can be run on a given hardware combination. Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of GBs. On the other hand, Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. This very aspect makes them look like real-world containers. The following diagram sums it all up:

Figure 18: Difference between traditional VMs and Docker containers

The following are some of the key building blocks of Docker technology:

 Docker containers: Isolated and secured environments for applications to run.
 Docker engine: A client-server application having the following components: a daemon process used to create and manage Docker objects, such as images, containers, networks, and data volumes; a REST API interface; and a command line interface (CLI) client.
 Docker client: Client program that invokes the Docker engine using APIs.
 Docker host: Underlying operating system sharing the kernel space with Docker containers. Until recently, Windows OS needed Linux virtualization to host Docker containers.
 Docker Hub: Public repository used to manage Docker images posted by various users. Images made public are available for all to download in order to create containers using those images.

What are the key building blocks of Docker containers?

For setting up our development environment, we will rely on Docker containers and assemble them together using the tool called Docker Compose, which we shall learn about a little later. Let us understand the following, which can also be termed key building blocks of Docker containers:

 Docker image: In simple terms, a Docker image can be thought of as a "class" in Java. Docker containers can be thought of as running instances of the image, just like having one or more "instances" of a Java class. Technically speaking, Docker images consist of a list of layers that are stacked on top of each other to form a base for a container's root file system. The following diagram represents the command which can be used to create a Docker container using an image named helloworld:

Figure 19: Docker command representing creation of a Docker container using a Docker image.

In order to set up our development environment, we will require images of the following to create the respective Docker containers: Tomcat and MySQL.
 Dockerfile: A Dockerfile is a text document that contains all the commands which could be called on the command line to assemble or build an image. The docker build command is used to build an image from a Dockerfile and a context. In order to create custom images for Tomcat and MySQL, it may be required to create a Dockerfile and then build the image. The following is a sample command for building an image using a Dockerfile:

 docker build -f tomcat.df -t tomcat_debug .

The preceding command looks for the Dockerfile "tomcat.df" in the current directory, specified by ".", and builds the image with the tag "tomcat_debug".

Installing Docker

Now that we have an understanding of what Docker is, let's install it. We shall look into the steps that are required to install Docker on Windows OS:

 Download the Windows version of Docker Toolbox from the webpage https://www.docker.com/products/docker-toolbox. Docker Toolbox comes as an installer which can be double-clicked for quick setup and launch of the Docker environment. The following comes with the Docker Toolbox installation:
 Docker Machine for running docker-machine commands.
 Docker Engine for running the docker commands.
 Docker Compose for running the docker-compose commands. This is what we are looking for.
 Kitematic, the Docker GUI.
 A shell preconfigured for a Docker command-line environment.
 Oracle VirtualBox.

Setting up the Development Environment using Docker Compose

In this section, we will learn how to set up an on-demand, self-service development environment using Docker Compose. The following are the points covered in this section:

 What is Docker Compose?
 Docker Compose script for setting up the development environment

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. One needs to create a Compose file to configure the application's services. The following steps are required in order to work with Docker Compose:

 Define the application's environment with a Dockerfile so it can be reproduced anywhere.
 Define the services that make up the application in docker-compose.yml so they can be run together in an isolated environment.
 Lastly, run docker-compose up, and Compose will start and run the entire application.

As we are going to set up a multi-container application using Tomcat and MySQL as different containers, we will use Docker Compose to configure both of them and then assemble the application.

Docker Compose script for setting up the development environment

In order to come up with a Docker Compose script which can set up our Spring web app development environment with one script execution, we will first set up images for the following by creating independent Dockerfiles:

 Tomcat 8.x with Java and Maven installed, as one container
 MySQL, as another container

Setting up Tomcat 8.x as a Container Service

The following steps can be used to set up Tomcat 8.x along with Java 8 and Maven 3.x as one container:

Create a folder and put the following files within the folder.
The source code for the files is given as follows:

 tomcat.df
 create_tomcat_admin_user.sh
 run.sh

Copy the following source code for tomcat.df:

 FROM phusion/baseimage:0.9.17
 RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list
 RUN apt-get -y update
 RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -q python-software-properties software-properties-common
 ENV JAVA_VER 8
 ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
 RUN echo 'deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' >> /etc/apt/sources.list && echo 'deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' >> /etc/apt/sources.list && apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C2518248EEA14886 && apt-get update && echo oracle-java${JAVA_VER}-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections && apt-get install -y --force-yes --no-install-recommends oracle-java${JAVA_VER}-installer oracle-java${JAVA_VER}-set-default && apt-get clean && rm -rf /var/cache/oracle-jdk${JAVA_VER}-installer
 RUN update-java-alternatives -s java-8-oracle
 RUN echo "export JAVA_HOME=/usr/lib/jvm/java-8-oracle" >> ~/.bashrc
 RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
 ENV MAVEN_VERSION 3.3.9
 RUN mkdir -p /usr/share/maven && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
 ENV MAVEN_HOME /usr/share/maven
 VOLUME /root/.m2
 RUN apt-get update && apt-get install -yq --no-install-recommends wget pwgen ca-certificates && apt-get clean && rm -rf /var/lib/apt/lists/*
 ENV TOMCAT_MAJOR_VERSION 8
 ENV TOMCAT_MINOR_VERSION 8.5.8
 ENV CATALINA_HOME /tomcat
 RUN wget -q https://archive.apache.org/dist/tomcat/tomcat-${TOMCAT_MAJOR_VERSION}/v${TOMCAT_MINOR_VERSION}/bin/apache-tomcat-${TOMCAT_MINOR_VERSION}.tar.gz && wget -qO- https://archive.apache.org/dist/tomcat/tomcat-${TOMCAT_MAJOR_VERSION}/v${TOMCAT_MINOR_VERSION}/bin/apache-tomcat-${TOMCAT_MINOR_VERSION}.tar.gz.md5 | md5sum -c - && tar zxf apache-tomcat-*.tar.gz && rm apache-tomcat-*.tar.gz && mv apache-tomcat* tomcat
 ADD create_tomcat_admin_user.sh /create_tomcat_admin_user.sh
 RUN mkdir /etc/service/tomcat
 ADD run.sh /etc/service/tomcat/run
 RUN chmod +x /*.sh
 RUN chmod +x /etc/service/tomcat/run
 EXPOSE 8080
 CMD ["/sbin/my_init"]

Copy the following code into a file named create_tomcat_admin_user.sh. This file should be created in the same folder as the preceding file, tomcat.df. While copying into Notepad and later using it with the Docker terminal, you may find a Ctrl-M character inserted at the end of each line.
Make sure that those characters are appropriately handled and removed:

 #!/bin/bash
 if [ -f /.tomcat_admin_created ]; then
     echo "Tomcat 'admin' user already created"
     exit 0
 fi
 PASS=${TOMCAT_PASS:-$(pwgen -s 12 1)}
 _word=$( [ ${TOMCAT_PASS} ] && echo "preset" || echo "random" )
 echo "=> Creating an admin user with a ${_word} password in Tomcat"
 sed -i -r 's/<\/tomcat-users>//' ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '<role rolename="manager-gui"/>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '<role rolename="manager-script"/>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '<role rolename="manager-jmx"/>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '<role rolename="admin-gui"/>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '<role rolename="admin-script"/>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo "<user username=\"admin\" password=\"${PASS}\" roles=\"manager-gui,manager-script,manager-jmx,admin-gui,admin-script\"/>" >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo '</tomcat-users>' >> ${CATALINA_HOME}/conf/tomcat-users.xml
 echo "=> Done!"
 touch /.tomcat_admin_created
 echo "========================================================================"
 echo "You can now connect to this Tomcat server using:"
 echo ""
 echo "    admin:${PASS}"
 echo ""
 echo "========================================================================"

Copy the following code into a file named run.sh, in the same folder as the preceding two files:

 #!/bin/bash
 if [ ! -f /.tomcat_admin_created ]; then
     /create_tomcat_admin_user.sh
 fi
 exec ${CATALINA_HOME}/bin/catalina.sh run

Open up a Docker terminal and go to the folder where these files are located. Execute the following command to create the Tomcat image. In a few minutes, the Tomcat image will be created:

 docker build -f tomcat.df -t demo/tomcat:8 .

Execute the following command and make sure that an image named demo/tomcat is listed:

 docker images

Next, run a container named "tomcatdev" using the following command:

 docker run -ti -d -p 8080:8080 --name tomcatdev -v "$PWD":/mnt/ demo/tomcat:8

Open a browser and type the URL http://192.168.99.100:8080/. You should see the following page get loaded. Note the URL and the Tomcat version, 8.5.8. This is the same version we installed earlier (check figure 1.4):

Figure 20: Tomcat 8.5.8 installed as a Docker container

You can access the container through the terminal using the following command. Make sure to check the Tomcat installation inside the folder "/tomcat". Also, execute commands such as "java -version" and "mvn -v" to check the versions of Java and Maven respectively:

 docker exec -ti tomcatdev /bin/bash

In this section, we learned how to set up Tomcat 8.5.8 along with Java 8 and Maven 3.x as one container.

Setting up MySQL as a Container Service

In this section, we will learn how to set up MySQL as a container service. In the Docker terminal, execute the following command:

 docker run -ti -d -p 3326:3306 --name mysqldev -e MYSQL_ROOT_PASSWORD=r00t -v "$PWD":/mnt/ mysql:5.7

The preceding command sets up MySQL 5.7 within the container and starts the mysqld service. Open MySQL Workbench and create a new connection by entering details such as the following, then click "Test Connection".
You should be able to establish the connection successfully:

Figure 21: MySQL server running in the container and accessible from the host machine at port 3326 using MySQL Workbench

Docker Compose script to set up the Dev Environment

Now that we have set up both Tomcat and MySQL as individual containers, let us learn how to create a Docker Compose script with which both containers can be started simultaneously, thereby starting the Dev environment.

Save the following source code as docker-compose.yml in the same folder as the previously mentioned files:

 version: '2'
 services:
   web:
     build:
       context: .
       dockerfile: tomcat.df
     ports:
       - "8080:8080"
     volumes:
       - .:/mnt/
     links:
       - db
   db:
     image: mysql:5.7
     ports:
       - "3326:3306"
     environment:
       - MYSQL_ROOT_PASSWORD=r00t

Execute the following commands to start and stop the services:

 // For starting the services in the foreground
 docker-compose up
 // For starting the services in the background (detached mode)
 docker-compose up -d
 // For stopping the services
 docker-compose stop

Test whether both the default Tomcat web app and the MySQL server can be accessed. Access the URL 192.168.99.100:8080 and make sure that the web page shown in figure 1.20 is displayed. Also, open MySQL Workbench and access the MySQL server at IP 192.168.99.100 and port 3326 (as specified in the preceding docker-compose.yml file).

Summary

In this article, we learned how we can start and stop the web app Dev environment on demand. Note that with these scripts, including the Dockerfiles, shell scripts, and Docker Compose file, you can set up the Dev environment on any machine where Docker Toolbox can be installed.

Further resources on this subject: Building Web Apps with Spring 5 and Angular 4, Spring 5 Design Patterns


4 must-know levels in MongoDB security

Amey Varangaonkar
01 Mar 2018
8 min read
The following excerpt is taken from the book Mastering MongoDB 3.x, written by Alex Giamas. It presents the techniques and essential concepts needed to tackle even the trickiest problems when it comes to working with and administering your MongoDB instance.

Security is a multifaceted goal in a MongoDB cluster. In this article, we will examine different attack vectors and how we can protect MongoDB against them.

1. Authentication in MongoDB

Authentication refers to verifying the identity of a client. This prevents impersonating someone else in order to gain access to our data. The simplest way to authenticate is using a username/password pair. This can be done via the shell in two ways:

 > db.auth( <username>, <password> )

Passing in a comma-separated username and password will assume default values for the rest of the fields:

 > db.auth( {
     user: <username>,
     pwd: <password>,
     mechanism: <authentication mechanism>,
     digestPassword: <boolean>
   } )

If we pass a document object, we can define more parameters than username/password. The (authentication) mechanism parameter can take several different values, with the default being SCRAM-SHA-1. The parameter value MONGODB-CR is used for backwards compatibility with versions earlier than 3.0. MONGODB-X509 is used for TLS/SSL authentication. Users and internal replica set servers can be authenticated using SSL certificates, which are self-generated and signed, or come from a trusted third-party authority. This is set in the configuration file with:

 security.clusterAuthMode / net.ssl.clusterFile

Or like this on the command line, using --clusterAuthMode and --sslClusterFile:

 > mongod --replSet <name> --sslMode requireSSL --clusterAuthMode x509 --sslClusterFile <path to membership certificate and key PEM file> --sslPEMKeyFile <path to SSL certificate and key PEM file> --sslCAFile <path to root CA PEM file>

MongoDB Enterprise Edition, the paid offering from MongoDB Inc., adds two more options for authentication. The first added option is GSSAPI (Kerberos). Kerberos is a mature and robust authentication system that can be used, among others, for Windows-based Active Directory deployments. The second added option is PLAIN (LDAP SASL). LDAP is, just like Kerberos, a mature and robust authentication mechanism. The main consideration when using the PLAIN authentication mechanism is that credentials are transmitted in plaintext over the wire. This means that we should secure the path between client and server via a VPN or a TLS/SSL connection to avoid a man in the middle stealing our credentials.

2. Authorization in MongoDB

After we have configured authentication to verify that users are who they claim to be when connecting to our MongoDB server, we need to configure the rights that each one of them will have in our database. This is the authorization aspect of permissions. MongoDB uses role-based access control to control permissions for different user classes. Every role has permissions to perform some actions on a resource. A resource can be a collection or a database, or any collections or any databases. The resource document's format is:

 { db: <database>, collection: <collection> }

If we specify "" (an empty string) for either db or collection, it means any db or collection. For example:

 { db: "mongo_books", collection: "" }

This would apply our action to every collection in the database mongo_books.
Similar to the preceding, we can define:

 { db: "", collection: "" }

We define this to apply our rule to all collections across all databases, except system collections of course. We can also apply rules across an entire cluster as follows:

 { resource: { cluster : true }, actions: [ "addShard" ] }

The preceding example grants privileges for the addShard action (adding a new shard to our system) across the entire cluster. The cluster resource can only be used for actions that affect the entire cluster rather than a collection or database, for example shutdown, replSetReconfig, appendOplogNote, resync, closeAllDatabases, and addShard.

What follows is an extensive list of cluster-specific actions and some of the most widely used actions. The list of the most widely used actions is:

 find
 insert
 remove
 update
 bypassDocumentValidation
 viewRole / viewUser
 createRole / dropRole
 createUser / dropUser
 inprog
 killop
 replSetGetConfig / replSetConfigure / replSetStateChange / resync
 getShardMap / getShardVersion / listShards / moveChunk / removeShard / addShard
 dropDatabase / dropIndex / fsync / repairDatabase / shutDown
 serverStatus / top / validate

The cluster-specific actions are: unlock, authSchemaUpgrade, cleanupOrphaned, cpuProfiler, inprog, invalidateUserCache, killop, appendOplogNote, replSetConfigure, replSetGetConfig, replSetGetStatus, replSetHeartbeat, replSetStateChange, resync, addShard, flushRouterConfig, getShardMap, listShards, removeShard, shardingState, applicationMessage, closeAllDatabases, connPoolSync, fsync, getParameter, hostInfo, logRotate, setParameter, shutdown, touch, connPoolStats, cursorInfo, diagLogging, getCmdLineOpts, getLog, listDatabases, netstat, serverStatus, and top.

If this sounds too complicated, that is because it is. The flexibility that MongoDB allows in configuring different actions on resources means that we need to study and understand the extensive lists described previously. Thankfully, some of the most common actions and resources are bundled in built-in roles. We can use the built-in roles to establish the baseline of permissions that we will give to our users, and then fine-tune these based on the extensive list.

User roles in MongoDB

There are two different generic user roles that we can specify:

 read: A read-only role across non-system collections and the following system collections: system.indexes, system.js, and system.namespaces collections
 readWrite: A read and modify role across non-system collections and the system.js collection

Database administration roles in MongoDB

There are three database-specific administration roles, shown as follows:

 dbAdmin: The basic admin user role, which can perform schema-related tasks, indexing, and gathering statistics. A dbAdmin cannot perform user and role management.
 userAdmin: Create and modify roles and users. This is complementary to the dbAdmin role.
 dbOwner: Combining the readWrite, dbAdmin, and userAdmin roles, this is the most powerful admin user role.

Cluster administration roles in MongoDB

These are the cluster-wide administration roles available:

 hostManager: Monitor and manage servers in a cluster.
 clusterManager: Provides management and monitoring actions on the cluster. A user with this role can access the config and local databases, which are used in sharding and replication, respectively.
 clusterMonitor: Read-only access for monitoring tools provided by MongoDB, such as the MongoDB Cloud Manager and Ops Manager agents.
 clusterAdmin: Provides the greatest cluster-management access.
Backup and restore roles

Role-based authorization roles can also be defined at the backup and restore granularity level:

backup: Provides the privileges needed to back up data. This role provides sufficient privileges to use the MongoDB Cloud Manager backup agent, the Ops Manager backup agent, or to use mongodump.
restore: Provides the privileges needed to restore data with mongorestore without the --oplogReplay option or without system.profile collection data.

Roles across all databases

Similarly, here is the set of available roles across all databases:

readAnyDatabase: Provides the same read-only permissions as read, except it applies to all but the local and config databases in the cluster. The role also provides the listDatabases action on the cluster as a whole.
readWriteAnyDatabase: Provides the same read and write permissions as readWrite, except it applies to all but the local and config databases in the cluster. The role also provides the listDatabases action on the cluster as a whole.
userAdminAnyDatabase: Provides the same access to user administration operations as userAdmin, except it applies to all but the local and config databases in the cluster. Since the userAdminAnyDatabase role allows users to grant any privilege to any user, including themselves, the role also indirectly provides superuser access.
dbAdminAnyDatabase: Provides the same access to database administration operations as dbAdmin, except it applies to all but the local and config databases in the cluster. The role also provides the listDatabases action on the cluster as a whole.

Superuser

Finally, these are the superuser roles available:

root: Provides access to the operations and all the resources of the readWriteAnyDatabase, dbAdminAnyDatabase, userAdminAnyDatabase, clusterAdmin, restore, and backup roles combined.
__internal: Similar to the root user, any __internal user can perform any action against any object across the server.

3. Network level security

Apart from MongoDB-specific security measures, there are established best practices for network level security:

Only allow communication between servers and only open the ports that are used for communicating between them.
Always use TLS/SSL for communication between servers. This prevents man-in-the-middle attacks impersonating a client (see the command-line sketch after this list).
Always use different sets of development, staging, and production environments and security credentials. Ideally, create different accounts for each environment and enable two-factor authentication in both staging and production environments.
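As a minimal command-line sketch of these practices, the following starts mongod with authentication enabled, bound only to the loopback and one internal interface, and requiring TLS/SSL for every connection; the IP address and certificate paths are illustrative assumptions:

> mongod --auth --bind_ip 127.0.0.1,10.0.0.5 --port 27017 --sslMode requireSSL --sslPEMKeyFile /etc/ssl/mongodb.pem --sslCAFile /etc/ssl/ca.pem

Combined with firewall rules that only open port 27017 between cluster members and trusted application servers, this keeps the database off the public network.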
4. Auditing security

No matter how much we plan our security measures, a second or third pair of eyes from someone outside our organization can give a different view of our security measures and uncover problems that we may not have thought of or may have underestimated. Don't hesitate to involve security experts and white hat hackers to do penetration testing on your servers.

Special cases

Medical or financial applications require added levels of security for data privacy reasons. If we are building an application in the healthcare space that accesses users' personally identifiable information, we may need to get HIPAA certified. If we are building an application interacting with payments and managing cardholder information, we may need to become PCI/DSS compliant. The specifics of each certification are outside the scope of this book, but it is important to know that MongoDB has use cases in these fields that fulfill the requirements, and as such it can be the right tool with proper design beforehand.

To sum up, in addition to the best practices listed above, developers and administrators must always use common sense so that security interferes only as much as needed with operational goals.

If you found our article useful, make sure to check out this book Mastering MongoDB 3.x to master other MongoDB administration-related techniques and become a true MongoDB expert.