How-To Tutorials

7019 Articles

Visualizing data in R and Python using Anaconda [Tutorial]

Natasha Mathur
09 Aug 2018
7 min read
It is said that a picture is worth a thousand words. Through pictures and graphical presentations, we can express abstract concepts, theories, data patterns, and ideas much more clearly. Data can be messy at times, and simply showing the raw data points would confuse audiences further; a simple graph that shows the main characteristics, properties, or patterns of the data helps greatly. In this tutorial, we explain why we should care about data visualization and then discuss techniques used for data visualization in R and Python. This article is an excerpt from the book Hands-On Data Science with Anaconda, written by Dr. Yuxing Yan and James Yan.

Data visualization in R

Firstly, let's see the simplest graph in R. With the following one-line R code, we draw a cosine function from -2π to 2π:

> plot(cos,-2*pi,2*pi)

The related graph is shown here:

Histograms also help us understand the distribution of data points. The following is a simple example. First, we generate a set of random numbers drawn from a standard normal distribution. For the purposes of illustration, the first line, set.seed(), is actually redundant; its presence guarantees that all users get the same set of random numbers if the same seed (333 in this case) is used. In other words, with the same set of input values, our histogram will look the same. In the next line, the rnorm(n) function draws n random numbers from a standard normal distribution. The last line calls the hist() function to generate the histogram:

> set.seed(333)
> data<-rnorm(5000)
> hist(data)

The associated histogram is shown here:

Note that rnorm(5000) is the same as rnorm(5000,mean=0,sd=1), which implies that the default value of mean is 0 and the default value of sd is 1. The next R program shades the left tail of a standard normal distribution:

x<-seq(-3,3,length=100)
y<-dnorm(x,mean=0,sd=1)
title<-"Area under standard normal dist & x less than -2.33"
yLabel<-"standard normal distribution"
xLabel<-"x value"
plot(x,y,type="l",lwd=3,col="black",main=title,xlab=xLabel,ylab=yLabel)
x<-seq(-3,-2.33,length=100)
y<-dnorm(x,mean=0,sd=1)
polygon(c(-4,x,-2.33),c(0,y,0),col="red")

The related graph is shown here:

Note that, according to the last line of the preceding code, the shaded area is red.

In terms of exploring the properties of various datasets, the R package called rattle is quite useful. If the rattle package is not preinstalled, we can run the following code to install it:

> install.packages("rattle")

Then, we run the following code to launch it:

> library(rattle)
> rattle()

After hitting the Enter key, we can see the following:

As our first step, we need to import certain datasets. For the sources of data, we can choose from seven potential formats, such as File, ARFF, ODBC, R Dataset, and RData File, and load our data from there. The simplest way is to use the Library option, which lists all the embedded datasets in the rattle package. After clicking Library, we can see a list of embedded datasets. Assume that we choose acme:boot:Monthly Excess Returns and then click Execute in the top left. We would then see the following:

Now, we can study the properties of the dataset. After clicking Explore, we can use various graphs to view our dataset. Assume that we choose Distribution and select the Benford check box; we can refer to the following screenshot for more details. After clicking Execute, the following would pop up.
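The chart itself appears only as a screenshot in the original post. As a rough illustration of what rattle is comparing (this sketch is mine, not from the book), the expected Benford frequencies and the observed leading-digit frequencies of an arbitrary positive vector can be computed directly in R:

# Benford's law: the expected frequency of leading digit d is log10(1 + 1/d)
digits <- 1:9
benford <- log10(1 + 1/digits)
names(benford) <- digits
round(benford, 3)

# Observed leading-digit frequencies of an arbitrary positive vector
x <- abs(rnorm(5000)) * 1000
lead <- floor(x / 10^floor(log10(x)))  # first significant digit of each value
round(table(lead) / length(lead), 3)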
The top red line shows the frequencies for the Benford Law for each digits of 1 to 9, while the blue line at the bottom shows the properties of our data set. Note that if you don't have the reshape package already installed in your system, then this either won't run or will ask for permission to install the package to your computer: The dramatic difference between those two lines indicates that our data does not follow a distribution suggested by the Benford Law. In our real world, we know that many people, events, and economic activities are interconnected, and it would be a great idea to use various graphs to show such a multi-node, interconnected picture. If the qgraph package is not preinstalled, users have to run the following to install it: > install.packages("qgraph") The next program shows the connection from a to b, a to c, and the like: library(qgraph) stocks<-c("IBM","MSFT","WMT") x<-rep(stocks, each = 3) y<-rep(stocks, 3) correlation<-c(0,10,3,10,0,3,3,3,0) data <- as.matrix(data.frame(from =x, to =y, width =correlation)) qgraph(data, mode = "direct", edge.color = rainbow(9)) If the data is shown, the meaning of the program will be much clearer. The correlation shows how strongly those stocks are connected. Note that all those values are randomly chosen with no real-world meanings: > data from to width [1,] "IBM" "IBM" " 0" [2,] "IBM" "MSFT" "10" [3,] "IBM" "WMT" " 3" [4,] "MSFT" "IBM" "10" [5,] "MSFT" "MSFT" " 0" [6,] "MSFT" "WMT" " 3" [7,] "WMT" "IBM" " 3" [8,] "WMT" "MSFT" " 3" [9,] "WMT" "WMT" " 0" A high value for the third variable suggests a stronger correlation. For example, IBM is more strongly correlated with MSFT, with a value of 10, than its correlation with WMT, with a value of 3. The following graph shows how strongly those three stocks are correlated: The following program shows the relationship or interconnection between five factors: library(qgraph) data(big5) data(big5groups) title("Correlations among 5 factors",line = 2.5) qgraph(cor(big5),minimum = 0.25,cut = 0.4,vsize = 1.5, groups = big5groups,legend = TRUE, borders = FALSE,theme = 'gray') The related graph is shown here: Data visualization in Python The most widely used Python package for graphs and images is called matplotlib. The following program can be viewed as the simplest Python program to generate a graph since it has just three lines: import matplotlib.pyplot as plt plt.plot([2,3,8,12]) plt.show() The first command line would upload a Python package called matplotlib.pyplot and rename it to plt. Note that we could even use other short names, but it is conventional to use plt for the matplotlib package. The second line plots four points, while the last one concludes the whole process. The completed graph is shown here: For the next example, we add labels for both x and y, and a title. The function is the cosine function with an input value varying from -2π to 2π: import scipy as sp import matplotlib.pyplot as plt x=sp.linspace(-2*sp.pi,2*sp.pi,200,endpoint=True) y=sp.cos(x) plt.plot(x,y) plt.xlabel("x-value") plt.ylabel("Cosine function") plt.title("Cosine curve from -2pi to 2pi") plt.show() The nice-looking cosine graph is shown here: If we received $100 today, it would be more valuable than what would be received in two years. This concept is called the time value of money, since we could deposit $100 today in a bank to earn interest. 
The following Python program uses marker size to illustrate this concept:

import matplotlib.pyplot as plt
fig = plt.figure(facecolor='white')
dd = plt.axes(frameon=False)
dd.set_frame_on(False)
dd.get_xaxis().tick_bottom()
dd.axes.get_yaxis().set_visible(False)
x=range(0,11,2)
x1=range(len(x),0,-1)
y = [0]*len(x)
plt.annotate("$100 received today",xy=(0,0),xytext=(2,0.15),arrowprops=dict(facecolor='black',shrink=2))
plt.annotate("$100 received in 2 years",xy=(2,0),xytext=(3.5,0.10),arrowprops=dict(facecolor='black',shrink=2))
s = [50*2.5**n for n in x1]
plt.title("Time value of money")
plt.xlabel("Time (number of years)")
plt.scatter(x,y,s=s)
plt.show()

The associated graph is shown here. Again, the different marker sizes show the present values in relative terms:

To summarize, we discussed how data visualization works in R and Python. Visual presentations can help our audience understand data better.

If you found this post useful, check out the book Hands-On Data Science with Anaconda to learn about different types of visual representation built with languages such as R, Python, and Julia.

A tale of two tools: Tableau and Power BI
Anaconda Enterprise version 5.1.1 released!
10 reasons why data scientists love Jupyter notebooks
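A quick note on the Python examples in the tutorial above: newer SciPy releases have deprecated, and recent ones removed, the top-level NumPy aliases such as sp.linspace, sp.pi, and sp.cos, so on a current installation the cosine plot is usually written with NumPy directly. A minimal equivalent sketch:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 200)  # 200 points between -2*pi and 2*pi
y = np.cos(x)

plt.plot(x, y)
plt.xlabel("x-value")
plt.ylabel("Cosine function")
plt.title("Cosine curve from -2pi to 2pi")
plt.show()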


Building a Twitter news bot using Twitter API [Tutorial]

Bhagyashree R
07 Sep 2018
11 min read
This article is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. In this article, we will explore the Twitter API and build core modules for tweeting, searching, and retweeting. We will further explore a data source for news around the globe and build a simple bot that tweets top news on its timeline. Getting started with the Twitter app To get started, let us explore the Twitter developer platform. Let us begin by building a Twitter app and later explore how we can tweet news articles to followers based on their interests: Log on to Twitter. If you don't have an account on Twitter, create one. Go to Twitter Apps, which is Twitter's application management dashboard. Click the Create New App button: Create an application by filling in the form providing name, description, and a website (fully-qualified URL). Read and agree to the Developer Agreement and hit Create your Twitter application: You will now see your application dashboard. Explore the tabs: Click Keys and Access Tokens: Copy consumer key and consumer secret and hang on to them. Scroll down to Your Access Token: Click Create my access token to create a new token for your app: Copy the Access Token and Access Token Secret and hang on to them. Now, we have all the keys and tokens we need to create a Twitter app. Building your first Twitter bot Let's build a simple Twitter bot. This bot will listen to tweets and pick out those that have a particular hashtag. All the tweets with a given hashtag will be printed on the console. This is a very simple bot to help us get started. In the following sections, we will explore more complex bots. To follow along you can download the code from the book's GitHub repository. Go to the root directory and create a new Node.js program using npm init: Execute the npm install twitter --save command to install the Twitter Node.js library: Run npm install request --save to install the Request library as well. We will use this in the future to make HTTP GET requests to a news data source. Explore your package.json file in the root directory: { "name": "twitterbot", "version": "1.0.0", "description": "my news bot", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "", "license": "ISC", "dependencies": { "request": "^2.81.0", "twitter": "^1.7.1" } } Create an index.js file with the following code: //index.js var TwitterPackage = require('twitter'); var request = require('request'); console.log("Hello World! I am a twitter bot!"); var secret = { consumer_key: 'YOUR_CONSUMER_KEY', consumer_secret: 'YOUR_CONSUMER_SECRET', access_token_key: 'YOUR_ACCESS_TOKEN_KEY', access_token_secret: 'YOUR_ACCESS_TOKEN_SECRET' } var Twitter = new TwitterPackage(secret); In the preceding code, put the keys and tokens you saved in their appropriate variables. We don't need the request package just yet, but we will later. Now let's create a hashtag listener to listen to the tweets on a specific hashtag: //Twitter stream var hashtag = '#brexit'; //put any hashtag to listen e.g. #brexit console.log('Listening to:' + hashtag); Twitter.stream('statuses/filter', {track: hashtag}, function(stream) { stream.on('data', function(tweet) { console.log('Tweet:@' + tweet.user.screen_name + '\t' + tweet.text); console.log('------') }); stream.on('error', function(error) { console.log(error); }); }); Replace #brexit with the hashtag you want to listen to. Use a popular one so that you can see the code in action. 
Run the index.js file with the node index.js command. You will see a stream of tweets from Twitter users all over the globe who used the hashtag: Congratulations! You have built your first Twitter bot. Exploring the Twitter SDK In the previous section, we explored how to listen to tweets based on hashtags. Let's now explore the Twitter SDK to understand the capabilities that we can bestow upon our Twitter bot. Updating your status You can also update your status on your Twitter timeline by using the following status update module code: tweet ('I am a Twitter Bot!', null, null); function tweet(statusMsg, screen_name, status_id){ console.log('Sending tweet to: ' + screen_name); console.log('In response to:' + status_id); var msg = statusMsg; if (screen_name != null){ msg = '@' + screen_name + ' ' + statusMsg; } console.log('Tweet:' + msg); Twitter.post('statuses/update', { status: msg }, function(err, response) { // if there was an error while tweeting if (err) { console.log('Something went wrong while TWEETING...'); console.log(err); } else if (response) { console.log('Tweeted!!!'); console.log(response) } }); } Comment out the hashtag listener code and instead add the preceding status update code and run it. When run, your bot will post a tweet on your timeline: In addition to tweeting on your timeline, you can also tweet in response to another tweet (or status update). The screen_name argument is used to create a response. tweet. screen_name is the name of the user who posted the tweet. We will explore this a bit later. Retweet to your followers You can retweet a tweet to your followers using the following retweet status code: var retweetId = '899681279343570944'; retweet(retweetId); function retweet(retweetId){ Twitter.post('statuses/retweet/', { id: retweetId }, function(err, response) { if (err) { console.log('Something went wrong while RETWEETING...'); console.log(err); } else if (response) { console.log('Retweeted!!!'); console.log(response) } }); } Searching for tweets You can also search for recent or popular tweets with hashtags using the following search hashtags code: search('#brexit', 'popular') function search(hashtag, resultType){ var params = { q: hashtag, // REQUIRED result_type: resultType, lang: 'en' } Twitter.get('search/tweets', params, function(err, data) { if (!err) { console.log('Found tweets: ' + data.statuses.length); console.log('First one: ' + data.statuses[1].text); } else { console.log('Something went wrong while SEARCHING...'); } }); } Exploring a news data service Let's now build a bot that will tweet news articles to its followers at regular intervals. We will then extend it to be personalized by users through a conversation that happens over direct messaging with the bot. In order to build a news bot, we need a source where we can get news articles. We are going to explore a news service called NewsAPI.org in this section. News API is a service that aggregates news articles from roughly 70 newspapers around the globe. Setting up News API Let us set up an account with the News API data service and get the API key: Go to NewsAPI.org: Click Get API key. Register using your email. Get your API key. Explore the sources: https://newsapi.org/v1/sources?apiKey=YOUR_API_KEY. There are about 70 sources from across the globe including popular ones such as BBC News, Associated Press, Bloomberg, and CNN. You might notice that each source has a category tag attached. 
The possible options are: business, entertainment, gaming, general, music, politics, science-and-nature, sport, and technology. You might also notice that each source also has language (en, de, fr) and country (au, de, gb, in, it, us) tags. The following is the information on the BBC-News source: { "id": "bbc-news", "name": "BBC News", "description": "Use BBC News for up-to-the-minute news, breaking news, video, audio and feature stories. BBC News provides trusted World and UK news as well as local and regional perspectives. Also entertainment, business, science, technology and health news.", "url": "http://www.bbc.co.uk/news", "category": "general", "language": "en", "country": "gb", "urlsToLogos": { "small": "", "medium": "", "large": "" }, "sortBysAvailable": [ "top" ] } Get sources for a specific category, language, or country using: https://newsapi.org/v1/sources?category=business&apiKey=YOUR_API_KEY The following is the part of the response to the preceding query asking for all sources under the business category: "sources": [ { "id": "bloomberg", "name": "Bloomberg", "description": "Bloomberg delivers business and markets news, data, analysis, and video to the world, featuring stories from Businessweek and Bloomberg News.", "url": "http://www.bloomberg.com", "category": "business", "language": "en", "country": "us", "urlsToLogos": { "small": "", "medium": "", "large": "" }, "sortBysAvailable": [ "top" ] }, { "id": "business-insider", "name": "Business Insider", "description": "Business Insider is a fast-growing business site with deep financial, media, tech, and other industry verticals. Launched in 2007, the site is now the largest business news site on the web.", "url": "http://www.businessinsider.com", "category": "business", "language": "en", "country": "us", "urlsToLogos": { "small": "", "medium": "", "large": "" }, "sortBysAvailable": [ "top", "latest" ] }, ... ] Explore the articles: https://newsapi.org/v1/articles?source=bbc-news&apiKey=YOUR_API_KEY The following is the sample response: "articles": [ { "author": "BBC News", "title": "US Navy collision: Remains found in hunt for missing sailors", "description": "Ten US sailors have been missing since Monday's collision with a tanker near Singapore.", "url": "http://www.bbc.co.uk/news/world-us-canada-41013686", "urlToImage": "https://ichef1.bbci.co.uk/news/1024/cpsprodpb/80D9/ production/_97458923_mediaitem97458918.jpg", "publishedAt": "2017-08-22T12:23:56Z" }, { "author": "BBC News", "title": "Afghanistan hails Trump support in 'joint struggle'", "description": "President Ghani thanks Donald Trump for supporting Afghanistan's battle against the Taliban.", "url": "http://www.bbc.co.uk/news/world-asia-41012617", "urlToImage": "https://ichef.bbci.co.uk/images/ic/1024x576/p05d08pf.jpg", "publishedAt": "2017-08-22T11:45:49Z" }, ... ] For each article, the author, title, description, url, urlToImage,, and publishedAt fields are provided. Now that we have explored a source of news data that provides up-to-date news stories under various categories, let us go on to build a news bot. Building a Twitter news bot Now that we have explored News API, a data source for the latest news updates, and a little bit of what the Twitter API can do, let us combine them both to build a bot tweeting interesting news stories, first on its own timeline and then specifically to each of its followers: Let's build a news tweeter module that tweets the top news article given the source. 
The following code uses the tweet() function we built earlier:

topNewsTweeter('cnn', null);

function topNewsTweeter(newsSource, screen_name, status_id){
  request({
    url: 'https://newsapi.org/v1/articles?source=' + newsSource + '&apiKey=YOUR_API_KEY',
    method: 'GET'
  }, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      var botResponse = JSON.parse(body);
      console.log(botResponse);
      tweetTopArticle(botResponse.articles, screen_name);
    } else {
      console.log('Sorry. No news.');
    }
  });
}

function tweetTopArticle(articles, screen_name, status_id){
  var article = articles[0];
  tweet(article.title + " " + article.url, screen_name);
}

Run the preceding program to fetch news from CNN and post the topmost article on Twitter. Here is the post on Twitter:

Now, let us build a module that tweets news stories from a randomly chosen source in a list of sources:

function tweetFromRandomSource(sources, screen_name, status_id){
  var max = sources.length;
  // Pick a random index in [0, max) so we never run past the end of the array
  var randomSource = sources[Math.floor(Math.random() * max)];
  topNewsTweeter(randomSource, screen_name, status_id);
}

Let's call the tweeting module after we acquire the list of sources:

function getAllSourcesAndTweet(){
  var sources = [];
  console.log('getting sources...');
  request({
    url: 'https://newsapi.org/v1/sources?apiKey=YOUR_API_KEY',
    method: 'GET'
  }, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      var botResponse = JSON.parse(body);
      for (var i = 0; i < botResponse.sources.length; i++){
        console.log('adding.. ' + botResponse.sources[i].id);
        sources.push(botResponse.sources[i].id);
      }
      tweetFromRandomSource(sources, null, null);
    } else {
      console.log('Sorry. No news sources!');
    }
  });
}

Let's create a new JS file called tweeter.js. In the tweeter.js file, call getAllSourcesAndTweet() to get the process started:

//tweeter.js
var TwitterPackage = require('twitter');
var request = require('request');

console.log("Hello World! I am a twitter bot!");

var secret = {
  consumer_key: 'YOUR_CONSUMER_KEY',
  consumer_secret: 'YOUR_CONSUMER_SECRET',
  access_token_key: 'YOUR_ACCESS_TOKEN_KEY',
  access_token_secret: 'YOUR_ACCESS_TOKEN_SECRET'
}
var Twitter = new TwitterPackage(secret);

getAllSourcesAndTweet();

Run the tweeter.js file on the console. This bot will tweet a news story every time it is called, choosing the top story from one of around 70 news sources at random. Hurray! You have built your very own Twitter news bot.

In this tutorial, we covered a lot. We started off with the Twitter API and got a taste of how we can automatically tweet, retweet, and search for tweets using hashtags. We then explored a news data API that provides articles from about 70 different newspapers, and integrated it with our Twitter bot to create a news-tweeting bot.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Build and train an RNN chatbot using TensorFlow [Tutorial]
Building a two-way interactive chatbot with Twilio: A step-by-step guide
How to create a conversational assistant or chatbot using Python
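The excerpt describes tweeting news stories at regular intervals but stops at a single run of tweeter.js. A minimal sketch of how that run could be repeated on a timer, assuming the tweeter.js setup above (the four-hour interval is illustrative, not something prescribed by the book):

// tweeter.js (addition) -- repeat the random-source tweet on a fixed schedule
var FOUR_HOURS = 4 * 60 * 60 * 1000;

getAllSourcesAndTweet();                         // tweet once at startup
setInterval(getAllSourcesAndTweet, FOUR_HOURS);  // then tweet every four hours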


How to create your own R package with RStudio [Tutorial]

Natasha Mathur
17 Apr 2019
3 min read
In this tutorial, we will look at the process of creating your own R package. If you are going to create code and put it into production, it's always a good idea to create a package with version control, examples, and other features. Plus, with RStudio, it is easy to do. So, we will use a simple example with one small function to show how easy it is. This tutorial is an excerpt taken from the book Mastering Machine Learning with R - Third Edition written by Cory Lesmeister. The book explores expert techniques for solving data analytics and covers machine learning challenges that can help you gain insights from complex projects and power up your applications. Before getting started, you will need to load two packages: > install.packages("roxygen2") > install.packages("devtools") You now want to open File in RStudio and select New Project, which will put you at this point: Select a new directory as desired, and specify R Package, as shown in the following screenshot: You will now name your package – I've innovatively called this one package – and select Create Project: Go to your Files tab in RStudio and you should see several files populated like this: Notice the folder called R. That is where we will put the R functions for our package. But first, click on Description and fill it out accordingly, and save it. Here is my version, which will be a function to code all missing values in a dataframe to zero: I've left imports and suggests blank. This is where you would load other packages, such as tidyverse or caret. Now, open up the hello.R function in the R folder, and delete all of it. The following format will work nicely: Title: Your package title of course Description: A brief description Param: The parameters for that function; the arguments Return: The values returned Examples: Provide any examples of how to use the function Export: Here, write the function you desire Here is the function for our purposes, which just turns all NAs to zero: You will now go to Build - Configure Build Tools and you should end up here: Click the checkmark for Generate documentation with Roxygen. Doing so will create this popup, which you can close and hit OK. You probably want to rename your function now from hello.R to something relevant. Now comes the moment of truth to build your package. Do this by clicking Build - Clean and Rebuild. Now you can search for your package, and it should appear: Click on it and go through the documentation: There you have it, a useless package, but think of what you can do by packaging your own or your favorite functions, and anyone who inherits your code will thank you. In this tutorial, we went through the process of creating an R package, which can help you and your team put your code into production. We created one user-defined function for our package, but your only limit is your imagination. I hope you've enjoyed it and can implement the methods in here, as well as other methods you learn over time. If you want to learn other concepts of Machine Learning with R, be sure to check out the book Mastering Machine Learning with R - Third Edition. How to make machine learning based recommendations using Julia [Tutorial] The rise of machine learning in the investment industry GitHub Octoverse: top machine learning packages, languages, and projects of 2018
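The NA-to-zero function itself appears only as a screenshot in the original post. A minimal sketch of what a roxygen2-documented version could look like (the name na_to_zero is mine, not the book's):

#' Convert missing values to zero
#'
#' Replaces every NA in a data frame with 0.
#' @param df A data frame.
#' @return The same data frame with all NA values set to 0.
#' @examples
#' na_to_zero(data.frame(x = c(1, NA, 3)))
#' @export
na_to_zero <- function(df) {
  df[is.na(df)] <- 0
  df
}

If you prefer the console to the Build menu, devtools::document() regenerates the roxygen documentation and devtools::install() builds and installs the package.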


Xamarin: How to add a MVVM pattern to an app [Tutorial]

Sugandha Lahoti
22 Jun 2018
13 min read
In our previous tutorial, we created a basic travel app using Xamarin.Forms. In this post, we will look at adding the Model-View-View-Model (MVVM) pattern to our travel app. The MVVM elements are offered with the Xamarin.Forms toolkit and we can expand on them to truly take advantage of the power of the pattern. As we dig into MVVM, we will apply what we have learned to the TripLog app that we started building in our previous tutorial. This article is an excerpt from the book Mastering Xamaring.Forms by Ed Snider. Understanding the MVVM pattern At its core, MVVM is a presentation pattern designed to control the separation between user interfaces and the rest of an application. The key elements of the MVVM pattern are as follows: Models: Models represent the business entities of an application. When responses come back from an API, they are typically deserialized to models. Views: Views represent the actual pages or screens of an application, along with all of the elements that make them up, including custom controls. Views are very platform-specific and depend heavily on platform APIs to render the application's user interface (UI). ViewModels: ViewModels control and manipulate the Views by serving as their data context. ViewModels are made up of a series of properties represented by Models. These properties are part of what is bound to the Views to provide the data that is displayed to users, or to collect the data that is entered or selected by users. In addition to model-backed properties, ViewModels can also contain commands, which are action-backed properties that bind the actual functionality and execution to events that occur in the Views, such as button taps or list item selections. Data binding: Data binding is the concept of connecting data properties and actions in a ViewModel with the user interface elements in a View. The actual implementation of how data binding happens can vary and, in most cases is provided by a framework, toolkit, or library. In Windows app development, data binding is provided declaratively in XAML. In traditional (non-Xamarin.Forms) Xamarin app development, data binding is either a manual process or dependent on a framework such as MvvmCross (https://github.com/MvvmCross/MvvmCross), a popular framework in the .NET mobile development community. Data binding in Xamarin.Forms follows a very similar approach to Windows app development. Adding MVVM to the app The first step of introducing MVVM into an app is to set up the structure by adding folders that will represent the core tenants of the pattern, such as Models, ViewModels, and Views. Traditionally, the Models and ViewModels live in a core library (usually, a portable class library or .NET standard library), whereas the Views live in a platform-specific library. Thanks to the power of the Xamarin.Forms toolkit and its abstraction of platform-specific UI APIs, the Views in a Xamarin.Forms app can also live in the core library. Just because the Views can live in the core library with the ViewModels and Models, this doesn't mean that separation between the user interface and the app logic isn't important. When implementing a specific structure to support a design pattern, it is helpful to have your application namespaces organized in a similar structure. This is not a requirement but it is something that can be useful. 
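For the TripLog app built in this series, that means folders and namespaces line up roughly like this (a sketch of the target layout, not taken from the book):

TripLog/          -> namespace TripLog
  Models/         -> namespace TripLog.Models
  ViewModels/     -> namespace TripLog.ViewModels
  Views/          -> namespace TripLog.Views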
By default, Visual Studio for Mac will associate namespaces with directory names, as shown in the following screenshot: Setting up the app structure For the TripLog app, we will let the Views, ViewModels, and Models all live in the same core portable class library. In our solution, this is the project called TripLog. We have already added a Models folder in our previous tutorial, so we just need to add a ViewModels folder and a Views folder to the project to complete the MVVM structure. In order to set up the app structure, perform the following steps: Add a new folder named ViewModels to the root of the TripLog project. Add a new folder named Views to the root of the TripLog project. Move the existing XAML pages files (MainPage.xaml, DetailPage.xaml, and NewEntryPage.xaml and their .cs code-behind files) into the Views folder that we have just created. Update the namespace of each Page from TripLog to TripLog.Views. Update the x:Class attribute of each Page's root ContentPage from TripLog.MainPage, TripLog.DetailPage, and TripLog.NewEntryPage to TripLog.Views.MainPage, TripLog.Views.DetailPage, and TripLog.Views.NewEntryPage, respectively. Update the using statements on any class that references the Pages. Currently, this should only be in the App class in App.xaml.cs, where MainPage is instantiated. Once the MVVM structure has been added, the folder structure in the solution should look similar to the following screenshot: In MVVM, the term View is used to describe a screen. Xamarin.Forms uses the term View to describe controls, such as buttons or labels, and uses the term Page to describe a screen. In order to avoid confusion, I will stick with the Xamarin.Forms terminology and refer to screens as Pages, and will only use the term Views in reference to screens for the folder where the Pages will live, in order to stick with the MVVM pattern. Adding ViewModels In most cases, Views (Pages) and ViewModels have a one-to-one relationship. However, it is possible for a View (Page) to contain multiple ViewModels or for a ViewModel to be used by multiple Views (Pages). For now, we will simply have a single ViewModel for each Page. Before we create our ViewModels, we will start by creating a base ViewModel class, which will be an abstract class containing the basic functionality that each of our ViewModels will inherit. Initially, the base ViewModel abstract class will only contain a couple of members and will implement INotifyPropertyChanged, but we will add to this class as we continue to build upon the TripLog app throughout this book. In order to create a base ViewModel, perform the following steps: Create a new abstract class named BaseViewModel in the ViewModels folder using the following code: public abstract class BaseViewModel { protected BaseViewModel() { } } Update BaseViewModel to implement INotifyPropertyChanged: public abstract class BaseViewModel : INotifyPropertyChanged { protected BaseViewModel() { } public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged( [CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } The implementation of INotifyPropertyChanged is key to the behavior and role of the ViewModels and data binding. It allows a Page to be notified when the properties of its ViewModel have changed. Now that we have created a base ViewModel, we can start adding the actual ViewModels that will serve as the data context for each of our Pages. 
We will start by creating a ViewModel for MainPage. Adding MainViewModel The main purpose of a ViewModel is to separate the business logic, for example, data access and data manipulation, from the user interface logic. Right now, our MainPage directly defines the list of data that it is displaying. This data will eventually be dynamically loaded from an API but for now, we will move this initial static data definition to its ViewModel so that it can be data bound to the user interface. In order to create the ViewModel for MainPage, perform the following steps: Create a new class file in the ViewModels folder and name it MainViewModel. Update the MainViewModel class to inherit from BaseViewModel: public class MainViewModel : BaseViewModel { // ... } Add an ObservableCollection<T> property to the MainViewModel class and name it LogEntries. This property will be used to bind to the ItemsSource property of the ListView element on MainPage.xaml: public class MainViewModel : BaseViewModel { ObservableCollection<TripLogEntry> _logEntries; public ObservableCollection<TripLogEntry> LogEntries { get { return _logEntries; } set { _logEntries = value; OnPropertyChanged (); } } // ... } Next, remove the List<TripLogEntry> that populates the ListView element on MainPage.xaml and repurpose that logic in the MainViewModel—we will put it in the constructor for now: public MainViewModel() { LogEntries = new ObservableCollection<TripLogEntry>(); LogEntries.Add(new TripLogEntry { Title = "Washington Monument", Notes = "Amazing!", Rating = 3, Date = new DateTime(2017, 2, 5), Latitude = 38.8895, Longitude = -77.0352 }); LogEntries.Add(new TripLogEntry { Title = "Statue of Liberty", Notes = "Inspiring!", Rating = 4, Date = new DateTime(2017, 4, 13), Latitude = 40.6892, Longitude = -74.0444 }); LogEntries.Add(new TripLogEntry { Title = "Golden Gate Bridge", Notes = "Foggy, but beautiful.", Rating = 5, Date = new DateTime(2017, 4, 26), Latitude = 37.8268, Longitude = -122.4798 }); } Set MainViewModel as the BindingContext property for MainPage. Do this by simply setting the BindingContext property of MainPage in its code-behind file to a new instance of MainViewModel. The BindingContext property comes from the Xamarin.Forms.ContentPage base class: public MainPage() { InitializeComponent(); BindingContext = new MainViewModel(); } Finally, update how the ListView element on MainPage.xaml gets its items. Currently, its ItemsSource property is being set directly in the Page's code behind. Remove this and instead update the ListView element's tag in MainPage.xaml to bind to the MainViewModel LogEntries property: <ListView ... ItemsSource="{Binding LogEntries}"> Adding DetailViewModel Next, we will add another ViewModel to serve as the data context for DetailPage, as follows: Create a new class file in the ViewModels folder and name it DetailViewModel. Update the DetailViewModel class to inherit from the BaseViewModel abstract class: public class DetailViewModel : BaseViewModel { // ... } Add a TripLogEntry property to the class and name it Entry. This property will be used to bind details about an entry to the various labels on DetailPage: public class DetailViewModel : BaseViewModel { TripLogEntry _entry; public TripLogEntry Entry { get { return _entry; } set { _entry = value; OnPropertyChanged (); } } // ... } Update the DetailViewModel constructor to take a TripLogEntry parameter named entry. 
Use this constructor property to populate the public Entry property created in the previous step: public class DetailViewModel : BaseViewModel { // ... public DetailViewModel(TripLogEntry entry) { Entry = entry; } } Set DetailViewModel as the BindingContext for DetailPage and pass in the TripLogEntry property that is being passed to DetailPage: public DetailPage (TripLogEntry entry) { InitializeComponent(); BindingContext = new DetailViewModel(entry); // ... } Next, remove the code at the end of the DetailPage constructor that directly sets the Text properties of the Label elements: public DetailPage(TripLogEntry entry) { // ... // Remove these lines of code: //title.Text = entry.Title; //date.Text = entry.Date.ToString("M"); //rating.Text = $"{entry.Rating} star rating"; //notes.Text = entry.Notes; } Next, update the Label element tags in DetailPage.xaml to bind their Text properties to the DetailViewModel Entry property: <Label ... Text="{Binding Entry.Title}" /> <Label ... Text="{Binding Entry.Date, StringFormat='{0:M}'}" /> <Label ... Text="{Binding Entry.Rating, StringFormat='{0} star rating'}" /> <Label ... Text="{Binding Entry.Notes}" /> Finally, update the map to get the values it is plotting from the ViewModel. Since the Xamarin.Forms Map control does not have bindable properties, the values have to be set directly to the ViewModel properties. The easiest way to do this is to add a private field to the page that returns the value of the page's BindingContext and then use that field to set the values on the map: public partial class DetailPage : ContentPage { DetailViewModel _vm { get { return BindingContext as DetailViewModel; } } public DetailPage(TripLogEntry entry) { InitializeComponent(); BindingContext = new DetailViewModel(entry); TripMap.MoveToRegion(MapSpan.FromCenterAndRadius( new Position(_vm.Entry.Latitude, _vm.Entry.Longitude), Distance.FromMiles(.5))); TripMap.Pins.Add(new Pin { Type = PinType.Place, Label = _vm.Entry.Title, Position = new Position(_vm.Entry.Latitude, _vm.Entry.Longitude) }); } } Adding NewEntryViewModel Finally, we will need to add a ViewModel for NewEntryPage, as follows: Create a new class file in the ViewModels folder and name it NewEntryViewModel. Update the NewEntryViewModel class to inherit from BaseViewModel: public class NewEntryViewModel : BaseViewModel { // ... } Add public properties to the NewEntryViewModel class that will be used to bind it to the values entered into the EntryCell elements in NewEntryPage.xaml: public class NewEntryViewModel : BaseViewModel { string _title; public string Title { get { return _title; } set { _title = value; OnPropertyChanged(); } } double _latitude; public double Latitude { get { return _latitude; } set { _latitude = value; OnPropertyChanged(); } } double _longitude; public double Longitude { get { return _longitude; } set { _longitude = value; OnPropertyChanged(); } } DateTime _date; public DateTime Date { get { return _date; } set { _date = value; OnPropertyChanged(); } } int _rating; public int Rating { get { return _rating; } set { _rating = value; OnPropertyChanged(); } } string _notes; public string Notes { get { return _notes; } set { _notes = value; OnPropertyChanged(); } } // ... } Update the NewEntryViewModel constructor to initialize the Date and Rating properties: public NewEntryViewModel() { Date = DateTime.Today; Rating = 1; } Add a public Command property to NewEntryViewModel and name it SaveCommand. This property will be used to bind to the Save ToolbarItem in NewEntryPage.xaml. 
The Xamarin.Forms Command type implements System.Windows.Input.ICommand to provide an Action to run when the command is executed, and a Func to determine whether the command can be executed:

public class NewEntryViewModel : BaseViewModel
{
    // ...

    Command _saveCommand;
    public Command SaveCommand
    {
        get
        {
            return _saveCommand ?? (_saveCommand = new Command(ExecuteSaveCommand, CanSave));
        }
    }

    void ExecuteSaveCommand()
    {
        var newItem = new TripLogEntry
        {
            Title = Title,
            Latitude = Latitude,
            Longitude = Longitude,
            Date = Date,
            Rating = Rating,
            Notes = Notes
        };
    }

    bool CanSave()
    {
        return !string.IsNullOrWhiteSpace(Title);
    }
}

In order to keep the CanExecute function of the SaveCommand up to date, we will need to call the SaveCommand.ChangeCanExecute() method in any property setter that impacts the result of that CanExecute function. In our case, this is only the Title property:

public string Title
{
    get { return _title; }
    set
    {
        _title = value;
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}

The CanExecute function is not required, but by providing it, you can automatically manipulate the state of the control in the UI that is bound to the Command, so that it is disabled until all of the required criteria are met, at which point it becomes enabled.

Next, set NewEntryViewModel as the BindingContext for NewEntryPage:

public NewEntryPage()
{
    InitializeComponent();
    BindingContext = new NewEntryViewModel();
    // ...
}

Next, update the EntryCell elements in NewEntryPage.xaml to bind to the NewEntryViewModel properties:

<EntryCell Label="Title" Text="{Binding Title}" />
<EntryCell Label="Latitude" Text="{Binding Latitude}" ... />
<EntryCell Label="Longitude" Text="{Binding Longitude}" ... />
<EntryCell Label="Date" Text="{Binding Date, StringFormat='{0:d}'}" />
<EntryCell Label="Rating" Text="{Binding Rating}" ... />
<EntryCell Label="Notes" Text="{Binding Notes}" />

Finally, we will need to update the Save ToolbarItem element in NewEntryPage.xaml to bind to the NewEntryViewModel SaveCommand property:

<ToolbarItem Text="Save" Command="{Binding SaveCommand}" />

Now, when we run the app and navigate to the new entry page, we can see the data binding in action, as shown in the following screenshots. Notice how the Save button is disabled until the title field contains a value:

To summarize, we updated the app that we created in the previous article, Create a basic travel app using Xamarin.Forms. We removed data and data-related logic from the Pages, offloading it to a series of ViewModels and then binding the Pages to those ViewModels.

If you liked this tutorial, read our book, Mastering Xamarin.Forms, to create an architecture-rich mobile application with good design patterns and best practices using Xamarin.Forms.

Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Five reasons why Xamarin will change mobile development
Creating Hello World in Xamarin.Forms
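As an aside (this refinement is not part of the tutorial's steps): the property setters above all repeat the same backing-field pattern. A common convenience is a generic SetProperty helper on BaseViewModel that assigns the field and raises PropertyChanged only when the value actually changes; a minimal sketch:

// Inside BaseViewModel.
// Requires: using System.Collections.Generic; using System.Runtime.CompilerServices;
protected bool SetProperty<T>(ref T field, T value,
    [CallerMemberName] string propertyName = null)
{
    if (EqualityComparer<T>.Default.Equals(field, value))
        return false;

    field = value;
    OnPropertyChanged(propertyName);
    return true;
}

A property such as Title then shrinks to set { if (SetProperty(ref _title, value)) SaveCommand.ChangeCanExecute(); }.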


How to build Microservices using REST framework

Gebin George
28 Mar 2018
7 min read
Today, we will learn to build microservices using REST framework. Our microservices are Java EE 8 web projects, built using maven and published as separate Payara Micro instances, running within docker containers. The separation allows them to scale individually, as well as have independent operational activities. Given the BCE pattern used, we have the business component split into boundary, control, and entity, where the boundary comprises of the web resource (REST endpoint) and business service (EJB). The web resource will publish the CRUD operations and the EJB will in turn provide the transactional support for each of it along with making external calls to other resources. Here's a logical view for the boundary consisting of the web resource and business service: The microservices will have the following REST endpoints published for the projects shown, along with the boundary classes XXXResource and XXXService: Power Your APIs with JAXRS and CDI, for Server-Sent Events. In IMS, we publish task/issue updates to the browser using an SSE endpoint. The code observes for the events using the CDI event notification model and triggers the broadcast. The ims-users and ims-issues endpoints are similar in API format and behavior. While one deals with creating, reading, updating, and deleting a User, the other does the same for an Issue. Let's look at this in action. After you have the containers running, we can start firing requests to the /users web resource. The following curl command maps the URI /users to the @GET resource method named getAll() and returns a collection (JSON array) of users. The Java code will simply return a Set<User>, which gets converted to JsonArray due to the JSON binding support of JSON-B. The method invoked is as follows: @GET public Response getAll() {... } curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users ... HTTP/1.1 200 OK ... [{ "id":1,"name":"Marcus","email":"marcus_jee8@testem.com" "credential":{"password":"1234","username":"marcus"} }, { "id":2,"name":"Bob","email":"bob@testem.com" "credential":{"password":"1234","username":"bob"} }] Next, for selecting one of the users, such as Marcus, we will issue the following curl command, which uses the /users/xxx path. This will map the URI to the @GET method which has the additional @Path("{id}") annotation as well. The value of the id is captured using the @PathParam("id") annotation placed before the field. The response is a User entity wrapped in the Response object returned. The method invoked is as follows: @GET @Path("{id}") public Response get(@PathParam("id") Long id) { ... } curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users/1 ... HTTP/1.1 200 OK ... { "id":1,"name":"Marcus","email":"marcus_jee8@testem.com" "credential":{"password":"1234","username":"marcus"} } In both the preceding methods, we saw the response returned as 200 OK. This is achieved by using a Response builder. Here's the snippet for the method: return Response.ok( ..entity here..).build(); Next, for submitting data to the resource method, we use the @POST annotation. You might have noticed earlier that the signature of the method also made use of a UriInfo object. This is injected at runtime for us via the @Context annotation. A curl command can be used to submit the JSON data of a user entity. The method invoked is as follows: @POST public Response add(User newUser, @Context UriInfo uriInfo) We make use of the -d flag to send the JSON body in the request. 
The POST request is implied: curl -v -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users -d '{"name": "james", "email":"james@testem.io", "credential": {"username":"james","password":"test123"}}' ... HTTP/1.1 201 Created ... Location: http://localhost:8081/ims-users/resources/users/3 The 201 status code is sent by the API to signal that an entity has been created, and it also returns the location for the newly created entity. Here's the relevant snippet to do this: //uriInfo is injected via @Context parameter to this method URI location = uriInfo.getAbsolutePathBuilder() .path(newUserId) // This is the new entity ID .build(); // To send 201 status with new Location return Response.created(location).build(); Similarly, we can also send an update request using the PUT method. The method invoked is as follows: @PUT @Path("{id}") public Response update(@PathParam("id") Long id, User existingUser) curl -v -X PUT -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users/3 -d '{"name": "jameson", "email":"james@testem.io"}' ... HTTP/1.1 200 Ok The last method we need to map is the DELETE method, which is similar to the GET operation, with the only difference being the HTTP method used. The method invoked is as follows: @DELETE @Path("{id}") public Response delete(@PathParam("id") Long id) curl -v -X DELETE http://localhost:8081/ims-users/resources/users/3 ... HTTP/1.1 200 Ok You can try out the Issues endpoint in a similar manner. For the GET requests of /users or /issues, the code simply fetches and returns a set of entity objects. But when requesting an item within this collection, the resource method has to look up the entity by the passed in id value, captured by @PathParam("id"), and if found, return the entity, or else a 404 Not Found is returned. Here's a snippet showing just that: final Optional<Issue> issueFound = service.get(id); //id obtained if (issueFound.isPresent()) { return Response.ok(issueFound.get()).build(); } return Response.status(Response.Status.NOT_FOUND).build(); The issue instance can be fetched from a database of issues, which the service object interacts with. The persistence layer can return a JPA entity object which gets converted to JSON for the calling code. We will look at persistence using JPA in a later section. For the update request which is sent as an HTTP PUT, the code captures the identifier ID using @PathParam("id"), similar to the previous GET operation, and then uses that to update the entity. The entity itself is submitted as a JSON input and gets converted to the entity instance along with the passed in message body of the payload. Here's the code snippet for that: @PUT @Path("{id}") public Response update(@PathParam("id") Long id, Issue updated) { updated.setId(id); boolean done = service.update(updated); return done ? Response.ok(updated).build() : Response.status(Response.Status.NOT_FOUND).build(); } The code is simple to read and does one thing—it updates the identified entity and returns the response containing the updated entity or a 404 for a non-existing entity. The service references that we have looked at so far are @Stateless beans which are injected into the resource class as fields: // Project: ims-comments @Stateless public class CommentsService {... } // Project: ims-issues @Stateless public class IssuesService {... } // Project: ims-users @Stateless public class UsersService {... } These will in turn have the EntityManager injected via @PersistenceContext. 
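As a rough sketch of how such a service fits together (the persistence details and entity accessors here are illustrative, not copied from the IMS project), an IssuesService exposing the get and update methods used by the resource class could look like this:

import java.util.Optional;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class IssuesService {

    @PersistenceContext
    private EntityManager em;

    // Used by the GET resource method: an empty Optional maps to a 404 response
    public Optional<Issue> get(Long id) {
        return Optional.ofNullable(em.find(Issue.class, id));
    }

    // Used by the PUT resource method: false maps to a 404 response
    public boolean update(Issue updated) {
        if (em.find(Issue.class, updated.getId()) == null) {
            return false;
        }
        em.merge(updated);
        return true;
    }
}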
Combined with the resource and service, our components have made the boundary ready for clients to use. Similar to the WebSockets section in Chapter 6, Power Your APIs with JAXRS and CDI, in IMS we use a @ServerEndpoint which maintains the list of active sessions and then uses that to broadcast a message to all connected users. A ChatThread keeps track of the messages being exchanged through the @ServerEndpoint class. For the message to be sent, we take the stream of sessions, filter it down to open sessions, and then send the message to each of them:

chatSessions.getSessions().stream().filter(Session::isOpen)
    .forEach(s -> {
        try {
            s.getBasicRemote().sendObject(chatMessage);
        } catch(Exception e) {...}
    });

To summarize, we saw in practice how to leverage a REST framework to build microservices. This article is an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. The book covers building modern, user-friendly web apps with Java EE.
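As a final aside (not from the book), the endpoints exercised with curl above can also be called from Java using the standard JAX-RS client API:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;

public class UsersClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // GET the collection of users as raw JSON
        String users = client.target("http://localhost:8081/ims-users/resources/users")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(users);

        // POST a new user as a JSON payload and print the status code (expect 201)
        String newUser = "{\"name\":\"james\",\"email\":\"james@testem.io\","
                + "\"credential\":{\"username\":\"james\",\"password\":\"test123\"}}";
        System.out.println(client.target("http://localhost:8081/ims-users/resources/users")
                .request()
                .post(Entity.json(newUser))
                .getStatus());

        client.close();
    }
}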


Drones: Everything you ever wanted to know!

Aarthi Kumaraswamy
11 Apr 2018
10 min read
When you were a kid, did you have fun with paper planes? They were so much fun. So, what is a gliding drone? Well, before answering this, let me be clear that there are other types of drones, too. We will know all common types of drones soon, but before doing that, let's find out what a drone first. Drones are commonly known as Unmanned Aerial Vehicles (UAV). A UAV is a flying thing without a human pilot on it. Here, by thing we mean the aircraft. For drones, there is the Unmanned Aircraft System (UAS), which allows you to communicate with the physical drone and the controller on the ground. Drones are usually controlled by a human pilot, but they can also be autonomously controlled by the system integrated on the drone itself. So what the UAS does, is it communicates between the UAS and UAV. Simply, the system that communicates between the drone and the controller, which is done by the commands of a person from the ground control station, is known as the UAS. Drones are basically used for doing something where humans cannot go or carrying out a mission that is impossible for humans. Drones have applications across a wide spectrum of industries from military, scientific research, agriculture, surveillance, product delivery, aerial photography, recreations, to traffic control. And of course, like any technology or tool it can do great harm when used for malicious purposes like for terrorist attacks and smuggling drugs. Types of drones Classifying drones based on their application Drones can be categorized into the following six types based on their mission: Combat: Combat drones are used for attacking in the high-risk missions. They are also known as Unmanned Combat Aerial Vehicles (UCAV). They carry missiles for the missions. Combat drones are much like planes. The following is a picture of a combat drone: Logistics: Logistics drones are used for delivering goods or cargo. There are a number of famous companies, such as Amazon and Domino's, which deliver goods and pizzas via drones. It is easier to ship cargo with drones when there is a lot of traffic on the streets, or the route is not easy to drive. The following diagram shows a logistic drone: Civil: Civil drones are for general usage, such as monitoring the agriculture fields, data collection, and aerial photography. The following picture is of an aerial photography drone: Reconnaissance: These kinds of drones are also known as mission-control drones. A drone is assigned to do a task and it does it automatically, and usually returns to the base by itself, so they are used to get information from the enemy on the battlefield. These kinds of drones are supposed to be small and easy to hide. The following diagram is a reconnaissance drone for your reference, they may vary depending on the usage: Target and decoy: These kinds of drones are like combat drones, but the difference is, the combat drone provides the attack capabilities for the high-risk mission and the target and decoy drones provide the ground and aerial gunnery with a target that simulates the missile or enemy aircrafts. You can look at the following figure to get an idea what a target and decoy drone looks like: Research and development: These types of drones are used for collecting data from the air. For example, some drones are used for collecting weather data or for providing internet. 
[box type="note" align="" class="" width=""]Also read this interesting news piece on Microsoft committing $5 billion to IoT projects.[/box] Classifying drones based on wing types We can also classify drones by their wing types. There are three types of drones depending on their wings or flying mechanism: Fixed wing: A fixed wing drone has a rigid wing. They look like airplanes. These types of drones have a very good battery life, as they use only one motor (or less than the multiwing). They can fly at a high altitude. They can carry more weight because they can float on air for the wings. There are also some disadvantages of fixed wing drones. They are expensive and require a good knowledge of aerodynamics. They break a lot and training is required to fly them. The launching of the drone is hard and the landing of these types of drones is difficult. The most important thing you should know about the fixed wing drones is they can only move forward. To change the directions to left or right, we need to create air pressure from the wing. We will build one fixed wing drone in this book. I hope you would like to fly one. Single rotor: Single rotor drones are simply like helicopter. They are strong and the propeller is designed in a way that it helps to both hover and change directions. Remember, the single rotor drones can only hover vertically in the air. They are good with battery power as they consume less power than a multirotor. The payload capacity of a single rotor is good. However, they are difficult to fly. Their wing or the propeller can be dangerous if it loosens. Multirotor: Multirotor drones are the most common among the drones. They are classified depending on the number of wings they have, such as tricopter (three propellers or rotors), quadcopter (four rotors), hexacopter (six rotors), and octocopter (eight rotors). The most common multirotor is the quadcopter. The multirotors are easy to control. They are good with payload delivery. They can take off and land vertically, almost anywhere. The flight is more stable than the single rotor and the fixed wing. One of the disadvantages of the multirotor is power consumption. As they have a number of motors, they consume a lot of power. Classifying drones based on body structure We can also classify multirotor drones by their body structure. They can be known by the number of propellers used on them. Some drones have three propellers. They are called tricopters. If there are four propellers or rotors, they are called quadcopters. There are hexacopters and octacopters with six and eight propellers, respectively. The gliding drones or fixed wings do not have a structure like copters. They look like the airplane. The shapes and sizes of the drones vary from purpose to purpose. If you need a spy drone, you will not make a big octacopter right? If you need to deliver a cargo to your friend's house, you can use a multirotor or a single rotor: The Ready to Fly (RTF) drones do not require any assembly of the parts after buying. You can fly them just after buying them. RTF drones are great for the beginners. They require no complex setup or programming knowledge. The Bind N Fly (BNF) drones do not come with a transmitter. This means, if you have bought a transmitter for yourother drone, you can bind it with this type of drone and fly. The problem is that an old model of transmitter might not work with them and the BNF drones are for experienced flyers who have already flown drones with safety, and had the transmitter to test with other drones. 
The Almost Ready to Fly (ARF) drones come with everything needed to fly, but a few parts might be missing that might keep it from flying properly. Just kidding! They come with all the parts, but you have to assemble them before flying. You might lose one or two things while assembling, so be careful if you buy ARF drones. I always lose screws or small spare parts of the drones while I assemble. From the name of these types of drones, you can imagine why they are called by this name. The ARF drones require a lot of patience to assemble and bind to fly. Just be calm while assembling. Don't throw away the user manuals like me. You might end up with either spare screws or a lack of screws or parts.

Key components for building a drone

To build a drone, you will need a drone frame, motors, a radio transmitter and receiver, a battery, battery adapters/chargers, and connectors and modules to make the drone smarter.

Drone frames

Basically, the drone frame is the most important component to build a drone. It helps to mount the motors, battery, and other parts on it. If you want to build a copter or a glider, you first need to decide what frame you will buy or build. For example, if you choose a tricopter, your drone will be smaller, the number of motors will be three, the number of propellers will be three, the number of ESCs (electronic speed controllers) will be three, and so on. If you choose a quadcopter, it will require four of each of the earlier specifications. For the gliding drone, the number of parts will vary. So, choosing a frame is important, as the purpose of the drone depends on its body, and a drone's body skeleton is the frame. In this book, we will build a quadcopter, as it is a medium-sized drone and we can implement all the things we want on it.

If you want to buy the drone frame, there are lots of online shops that sell ready-made drone frames. Make sure you read the specification before buying the frames. While buying frames, always double check the motor mounts and the other screw mountings. If you cannot mount your motors firmly, you will lose the stability of the drone in the air. We will discuss the aerodynamics of drone flight soon. The following figure shows a number of drone frames. All of them are pre-made and do not need any calculation to assemble. You will be given a manual which is really easy to follow:

You should also choose a material which is light but strong. My personal choice is carbon fiber, but if you want to save some money, you can buy strong plastic frames. You can also buy acrylic frames. When you buy the frame, you will get all the parts of the frame unassembled, as mentioned earlier. The following picture shows how the frame will be shipped to you if you buy it from an online shop:

If you want to build your own frame, you will require a lot of calculations and knowledge about the materials. You need to focus on how the assembling will be done if you build a frame by yourself. The thrust of the motors after mounting on the frame is really important. It will tell you whether your drone will float in the air, fall down, or become imbalanced. To calculate the thrust of the motors, you can follow the equation that we will speak about next. If P is the payload capacity of your drone (how much your drone can lift; I'll explain how you can find it), M is the number of motors, W is the weight of the drone itself, and H is the hover throttle % (which will be explained later).
Then, the thrust T required from each motor will be as follows:

T = (W + P) / (M × H)

The drone's payload capacity can be found by rearranging the same relation:

P = (T × M × H) − W

A quick worked example with assumed numbers follows at the end of this excerpt.

Note: Remember to keep the frame balanced so that the center of gravity remains at the center of the drone.

Check out the book, Building Smart Drones with ESP8266 and Arduino by Syed Omar Faruk Towaha, to read about the other components that go into making a drone and then build some fun drone projects, from follow-me drones, to drones that take selfies, to those that race and glide.

Check out other posts on IoT:
How IoT is going to change tech teams
AWS Sydney Summit 2018 is all about IoT
25 Datasets for Deep Learning in IoT
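To make the thrust relation concrete, here is a small sketch that simply evaluates the formula above; the weights, motor count, and hover throttle used here are assumed illustrative values, not figures from the book:

// Evaluates T = (W + P) / (M * H) with assumed numbers:
// a 1.2 kg frame (W), a 0.8 kg payload (P), four motors (M),
// and a 50% hover throttle (H = 0.5). All values are illustrative.
fun requiredThrustPerMotor(w: Double, p: Double, m: Int, h: Double): Double =
    (w + p) / (m * h)

fun main() {
    val t = requiredThrustPerMotor(w = 1.2, p = 0.8, m = 4, h = 0.5)
    println("Each motor should be able to produce about $t kg of thrust") // 1.0 kg
}

With these assumptions, each motor needs to be capable of roughly 1 kg of thrust, which matches the familiar rule of thumb of keeping about a 2:1 thrust-to-weight ratio for a quadcopter.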

Is web development dying?

Richard Gall
23 May 2018
7 min read
It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell? Why do people think web development is dying? The question might seem a bit overwrought, but there are good reasons for people to ask the question. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all. Even if you do want a custom solution, you can now do that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like SquareSpace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into. Web development is getting easier When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's actually getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been. Are templates killing web development? Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things the more you've got to fix. Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt. 
Web development isn't dying, it's fragmenting The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is, perhaps well and truly dead. Instead, it's fragmenting into specialized areas; design on the one hand, and full-stack on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital. This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes:  ...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc. Building websites is no longer remarkable - as we've seen, people that can do it are ubiquitous. But building a native application; that's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do. Full-stack development and the expansion of the developer skill set In his piece, Pierno argues that the scope of the web developers role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development, is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, they need to know mobile, databases, and maybe even Blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it. Web development's decline is design's gain If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites is going to become a free for all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different. Think of it like a sandwich shop - anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness. Maybe web development is dying, but maybe it just needs to change Clearly, what we call web development is very different in 2018 than what it was 5 years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something which is only going to play into the hands of poor design and poor quality software. 
It might even damage the careers of talented engineers and designers. You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, how engineers make data accessible, usable and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt.

Read next:
Why is everyone talking about JavaScript fatigue?
Is novelty ruining web development?

Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]

Natasha Mathur
05 Jul 2018
17 min read
AWS provides hybrid capabilities for networking, storage, database, application development, and management tools for secure and seamless integration. In today's tutorial, we will integrate applications with the two popular AWS services namely Amazon DynamoDB and Amazon Kinesis. Amazon DynamoDB is a fast, fully managed, highly available, and scalable NoSQL database service from AWS. DynamoDB uses key-value and document store data models. Amazon Kinesis is used to collect real-time data to process and analyze it. This article is an excerpt from a book 'Expert AWS Development' written by Atul V. Mistry. By the end of this tutorial, you will know how to integrate applications with the relative AWS services and best practices. Amazon DynamoDB The Amazon DynamoDB service falls under the Database category. It is a fast NoSQL database service from Amazon. It is highly durable as it will replicate data across three distinct geographical facilities in AWS regions. It's great for web, mobile, gaming, and IoT applications. DynamoDB will take care of software patching, hardware provisioning, cluster scaling, setup, configuration, and replication. You can create a database table and store and retrieve any amount and variety of data. It will delete expired data automatically from the table. It will help to reduce the usage storage and cost of storing data which is no longer needed. Amazon DynamoDB Accelerator (DAX) is a highly available, fully managed, and in-memory cache. For millions of requests per second, it reduces the response time from milliseconds to microseconds. DynamoDB is allowed to store up to 400 KB of large text and binary objects. It uses SSD storage to provide high I/O performance. Integrating DynamoDB into an application The following diagram provides a high-level overview of integration between your application and DynamoDB: Please perform the following steps to understand this integration: Your application in your programming language which is using an AWS SDK. DynamoDB can work with one or more programmatic interfaces provided by AWS SDK. From your programming language, AWS SDK will construct an HTTP or HTTPS request with a DynamoDB low-level API. The AWS SDK will send a request to the DynamoDB endpoint. DynamoDB will process the request and send the response back to the AWS SDK. If the request is executed successfully, it will return HTTP 200 (OK) response code. If the request is not successful, it will return HTTP error code and error message. The AWS SDK will process the response and send the result back to the application. The AWS SDK provides three kinds of interfaces to connect with DynamoDB. These interfaces are as follows: Low-level interface Document interface Object persistence (high-level) interface Let's explore all three interfaces. The following diagram is the Movies table, which is created in DynamoDB and used in all our examples: Low-level interface AWS SDK programming languages provide low-level interfaces for DynamoDB. These SDKs provide methods that are similar to low-level DynamoDB API requests. The following example uses the Java language for the low-level interface of AWS SDKs. Here you can use Eclipse IDE for the example. In this Java program, we request getItem from the Movies table, pass the movie name as an attribute, and print the movie release year: Let's create the MovieLowLevelExample file. We have to import a few classes to work with the DynamoDB. AmazonDynamoDBClient is used to create the DynamoDB client instance. 
AttributeValue is used to construct the data. In AttributeValue, name is datatype and value is data: GetItemRequest is the input of GetItem GetItemResult is the output of GetItem The following code will create the dynamoDB client instance. You have to assign the credentials and region to this instance: Static AmazonDynamoDBClient dynamoDB; In the code, we have created HashMap, passing the value parameter as AttributeValue().withS(). It contains actual data and withS is the attribute of String: String tableName = "Movies"; HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>(); key.put("name", new AttributeValue().withS("Airplane")); GetItemRequest will create a request object, passing the table name and key as a parameter. It is the input of GetItem: GetItemRequest request = new GetItemRequest() .withTableName(tableName).withKey(key); GetItemResult will create the result object. It is the output of getItem where we are passing request as an input: GetItemResult result = dynamoDB.getItem(request); It will check the getItem null condition. If getItem is not null then create the object for AttributeValue. It will get the year from the result object and create an instance for yearObj. It will print the year value from yearObj: if (result.getItem() != null) { AttributeValue yearObj = result.getItem().get("year"); System.out.println("The movie Released in " + yearObj.getN()); } else { System.out.println("No matching movie was found"); } Document interface This interface enables you to do Create, Read, Update, and Delete (CRUD) operations on tables and indexes. The datatype will be implied with data from this interface and you do not need to specify it. The AWS SDKs for Java, Node.js, JavaScript, and .NET provides support for document interfaces. The following example uses the Java language for the document interface in AWS SDKs. Here you can use the Eclipse IDE for the example. In this Java program, we will create a table object from the Movies table, pass the movie name as attribute, and print the movie release year. We have to import a few classes. DynamoDB is the entry point to use this library in your class. GetItemOutcomeis is used to get items from the DynamoDB table. Table is used to get table details: static AmazonDynamoDB client; The preceding code will create the client instance. You have to assign the credentials and region to this instance: String tableName = "Movies"; DynamoDB docClient = new DynamoDB(client); Table movieTable = docClient.getTable(tableName); DynamoDB will create the instance of docClient by passing the client instance. It is the entry point for the document interface library. This docClient instance will get the table details by passing the tableName and assign it to the movieTable instance: GetItemOutcome outcome = movieTable.getItemOutcome("name","Airplane"); int yearObj = outcome.getItem().getInt("year"); System.out.println("The movie was released in " + yearObj); GetItemOutcome will create an outcome instance from movieTable by passing the name as key and movie name as parameter. It will retrieve the item year from the outcome object and store it into the yearObj object and print it: Object persistence (high-level) interface In the object persistence interface, you will not perform any CRUD operations directly on the data; instead, you have to create objects which represent DynamoDB tables and indexes and perform operations on those objects. It will allow you to write object-centric code and not database-centric code. 
The AWS SDKs for Java and .NET provide support for the object persistence interface. Let's create a DynamoDBMapper object in AWS SDK for Java. It will represent data in the Movies table. This is the MovieObjectMapper.java class. Here you can use the Eclipse IDE for the example. You need to import a few classes for annotations. DynamoDBAttribute is applied to the getter method. If it will apply to the class field then its getter and setter method must be declared in the same class. The DynamoDBHashKey annotation marks property as the hash key for the modeled class. The DynamoDBTable annotation marks DynamoDB as the table name: @DynamoDBTable(tableName="Movies") It specifies the table name: @DynamoDBHashKey(attributeName="name") public String getName() { return name;} public void setName(String name) {this.name = name;} @DynamoDBAttribute(attributeName = "year") public int getYear() { return year; } public void setYear(int year) { this.year = year; } In the preceding code, DynamoDBHashKey has been defined as the hash key for the name attribute and its getter and setter methods. DynamoDBAttribute specifies the column name and its getter and setter methods. Now create MovieObjectPersistenceExample.java to retrieve the movie year: static AmazonDynamoDB client; The preceding code will create the client instance. You have to assign the credentials and region to this instance. You need to import DynamoDBMapper, which will be used to fetch the year from the Movies table: DynamoDBMapper mapper = new DynamoDBMapper(client); MovieObjectMapper movieObjectMapper = new MovieObjectMapper(); movieObjectMapper.setName("Airplane"); The mapper object will be created from DynamoDBMapper by passing the client. The movieObjectMapper object will be created from the POJO class, which we created earlier. In this object, set the movie name as the parameter: MovieObjectMapper result = mapper.load(movieObjectMapper); if (result != null) { System.out.println("The song was released in "+ result.getYear()); } Create the result object by calling DynamoDBMapper object's load method. If the result is not null then it will print the year from the result's getYear() method. DynamoDB low-level API This API is a protocol-level interface which will convert every HTTP or HTTPS request into the correct format with a valid digital signature. It uses JavaScript Object Notation (JSON) as a transfer protocol. AWS SDK will construct requests on your behalf and it will help you concentrate on the application/business logic. The AWS SDK will send a request in JSON format to DynamoDB and DynamoDB will respond in JSON format back to the AWS SDK API. DynamoDB will not persist data in JSON format. Troubleshooting in Amazon DynamoDB The following are common problems and their solutions: If error logging is not enabled then enable it and check error log messages. Verify whether the DynamoDB table exists or not. Verify the IAM role specified for DynamoDB and its access permissions. AWS SDKs take care of propagating errors to your application for appropriate actions. Like Java programs, you should write a try-catch block to handle the error or exception. If you are not using an AWS SDK then you need to parse the content of low-level responses from DynamoDB. 
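To tie that advice to the Movies example, the following is a rough Kotlin sketch of such a try-catch block. Kotlin compiles against the same AWS SDK for Java classes used above; the region string, table name, and key value here are assumptions for illustration, not part of the original example:

import com.amazonaws.AmazonClientException
import com.amazonaws.AmazonServiceException
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import com.amazonaws.services.dynamodbv2.model.GetItemRequest
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException

fun main() {
    // Region is an assumption; use the region your Movies table lives in
    val client = AmazonDynamoDBClientBuilder.standard().withRegion("us-east-1").build()
    val key = mapOf("name" to AttributeValue().withS("Airplane"))
    val request = GetItemRequest().withTableName("Movies").withKey(key)
    try {
        val item = client.getItem(request).item
        if (item != null) {
            println("The movie was released in " + item["year"]?.n)
        } else {
            println("No matching movie was found")
        }
    } catch (e: ResourceNotFoundException) {
        // The Movies table does not exist or is still being created
        println("Table not found: " + e.errorMessage)
    } catch (e: AmazonServiceException) {
        // DynamoDB received the request but could not process it
        println("Service error: " + e.errorMessage)
    } catch (e: AmazonClientException) {
        // The client could not reach DynamoDB or could not parse the response
        println("Client error: " + e.message)
    }
}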
A few exceptions are as follows: AmazonServiceException: Client request sent to DynamoDB but DynamoDB was unable to process it and returned an error response AmazonClientException: Client is unable to get a response or parse the response from service ResourceNotFoundException: Requested table doesn't exist or is in CREATING state Now let's move on to Amazon Kinesis, which will help to collect and process real-time streaming data. Amazon Kinesis The Amazon Kinesis service is under the Analytics product category. This is a fully managed, real-time, highly scalable service. You can easily send data to other AWS services such as Amazon DynamoDB, AmazaonS3, and Amazon Redshift. You can ingest real-time data such as application logs, website clickstream data, IoT data, and social stream data into Amazon Kinesis. You can process and analyze data when it comes and responds immediately instead of waiting to collect all data before the process begins. Now, let's explore an example of using Kinesis streams and Kinesis Firehose using AWS SDK API for Java. Amazon Kinesis streams In this example, we will create the stream if it does not exist and then we will put the records into the stream. Here you can use Eclipse IDE for the example. You need to import a few classes. AmazonKinesis and AmazonKinesisClientBuilder are used to create the Kinesis clients. CreateStreamRequest will help to create the stream. DescribeStreamRequest will describe the stream request. PutRecordRequest will put the request into the stream and PutRecordResult will print the resulting record. ResourceNotFoundException will throw an exception when the stream does not exist. StreamDescription will provide the stream description: Static AmazonKinesis kinesisClient; kinesisClient is the instance of AmazonKinesis. You have to assign the credentials and region to this instance: final String streamName = "MyExampleStream"; final Integer streamSize = 1; DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest().withStreamName(streamName); Here you are creating an instance of describeStreamRequest. For that, you will pass the streamNameas parameter to the withStreamName() method: StreamDescription streamDescription = kinesisClient.describeStream(describeStreamRequest).getStreamDescription(); It will create an instance of streamDescription. You can get information such as the stream name, stream status, and shards from this instance: CreateStreamRequest createStreamRequest = new CreateStreamRequest(); createStreamRequest.setStreamName(streamName); createStreamRequest.setShardCount(streamSize); kinesisClient.createStream(createStreamRequest); The createStreamRequest instance will help to create a stream request. You can set the stream name, shard count, and SDK request timeout. In the createStream method, you will pass the createStreamRequest: long createTime = System.currentTimeMillis(); PutRecordRequest putRecordRequest = new PutRecordRequest(); putRecordRequest.setStreamName(streamName); putRecordRequest.setData(ByteBuffer.wrap(String.format("testData-%d", createTime).getBytes())); putRecordRequest.setPartitionKey(String.format("partitionKey-%d", createTime)); Here we are creating a record request and putting it into the stream. We are setting the data and PartitionKey for the instance. 
It will create the records:

PutRecordResult putRecordResult = kinesisClient.putRecord(putRecordRequest);

It will create the record from the putRecord method and pass putRecordRequest as a parameter:

System.out.printf("Success : Partition key \"%s\", ShardID \"%s\" and SequenceNumber \"%s\".%n", putRecordRequest.getPartitionKey(), putRecordResult.getShardId(), putRecordResult.getSequenceNumber());

It will print the output on the console as follows:

Troubleshooting tips for Kinesis streams

The following are common problems and their solutions:

Unauthorized KMS master key permission error: This occurs when a producer or consumer application tries to write to or read from an encrypted stream without authorized permission on the master key. Provide access permission to the application using key policies in AWS KMS or IAM policies with AWS KMS.

The producer is writing more slowly than expected. The common causes are as follows:

Service limits exceeded: Check whether the producer is throwing throughput exceptions from the service, and validate which API operations are being throttled. Also check the Amazon Kinesis Streams limits, because different calls have different limits. If calls are not the issue, check that you have selected a partition key that allows put operations to be distributed evenly across all shards, and that you don't have a particular partition key that's bumping into the service limits while the rest are not. This requires you to measure peak throughput and the number of shards in your stream.

Producer optimization: A producer is either a large producer or a small producer. A large producer runs from an EC2 instance or on-premises, while a small producer runs from a web client, mobile app, or IoT device. Customers can use different strategies depending on their latency needs: the Kinesis Producer Library or multiple threads are useful for buffering/micro-batching records, PutRecords for multi-record operations, and PutRecord for single-record operations.

Shard iterator expires unexpectedly: The shard iterator expires because its GetRecords method has not been called for more than 5 minutes, or you have performed a restart of your consumer application. If the shard iterator expires immediately, before you can use it, this might indicate that the DynamoDB table used by Kinesis does not have enough capacity to store the data. This is more likely to happen if you have a large number of shards. Increase the write capacity assigned to the shard table to solve this.

Consumer application is reading at a slower rate: The following are common reasons for read throughput being slower than expected:

Total reads for multiple consumer applications exceed the per-shard limits. In the Kinesis stream, increase the number of shards.
The maximum number of GetRecords per call may have been configured with a low limit value.
The logic inside the processRecords call may be taking longer than expected for a number of possible reasons; the logic may be CPU-intensive, bottlenecked on synchronization, or I/O blocking.

We have covered Amazon Kinesis streams. Now, we will cover Kinesis Firehose.

Amazon Kinesis Firehose

Amazon Kinesis Firehose is a fully managed, highly available, and durable service for loading real-time streaming data easily into AWS services such as Amazon S3, Amazon Redshift, or Amazon Elasticsearch. It replicates your data synchronously at three different facilities. It automatically scales with your data throughput. You can compress your data into different formats and also encrypt it before loading.
AWS SDK for Java, Node.js, Python, .NET, and Ruby can be used to send data to a Kinesis Firehose stream using the Kinesis Firehose API. The Kinesis Firehose API provides two operations to send data to the Kinesis Firehose delivery stream: PutRecord: In one call, it will send one record PutRecordBatch: In one call, it will send multiple data records Let's explore an example using PutRecord. In this example, the MyFirehoseStream stream has been created. Here you can use Eclipse IDE for the example. You need to import a few classes such as AmazonKinesisFirehoseClient, which will help to create the client for accessing Firehose. PutRecordRequest and PutRecordResult will help to put the stream record request and its result: private static AmazonKinesisFirehoseClient client; AmazonKinesisFirehoseClient will create the instance firehoseClient. You have to assign the credentials and region to this instance: String data = "My Kinesis Firehose data"; String myFirehoseStream = "MyFirehoseStream"; Record record = new Record(); record.setData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8))); As mentioned earlier, myFirehoseStream has already been created. A record in the delivery stream is a unit of data. In the setData method, we are passing a data blob. It is base-64 encoded. Before sending a request to the AWS service, Java will perform base-64 encoding on this field. A returned ByteBuffer is mutable. If you change the content of this byte buffer then it will reflect to all objects that have a reference to it. It's always best practice to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before reading from the buffer or using it. Now you have to mention the name of the delivery stream and the data records you want to create the PutRecordRequest instance: PutRecordRequest putRecordRequest = new PutRecordRequest() .withDeliveryStreamName(myFirehoseStream) .withRecord(record); putRecordRequest.setRecord(record); PutRecordResult putRecordResult = client.putRecord(putRecordRequest); System.out.println("Put Request Record ID: " + putRecordResult.getRecordId()); putRecordResult will write a single record into the delivery stream by passing the putRecordRequest and get the result and print the RecordID: PutRecordBatchRequest putRecordBatchRequest = new PutRecordBatchRequest().withDeliveryStreamName("MyFirehoseStream") .withRecords(getBatchRecords()); You have to mention the name of the delivery stream and the data records you want to create the PutRecordBatchRequest instance. The getBatchRecord method has been created to pass multiple records as mentioned in the next step: JSONObject jsonObject = new JSONObject(); jsonObject.put("userid", "userid_1"); jsonObject.put("password", "password1"); Record record = new Record().withData(ByteBuffer.wrap(jsonObject.toString().getBytes())); records.add(record); In the getBatchRecord method, you will create the jsonObject and put data into this jsonObject . You will pass jsonObject to create the record. These records add to a list of records and return it: PutRecordBatchResult putRecordBatchResult = client.putRecordBatch(putRecordBatchRequest); for(int i=0;i<putRecordBatchResult.getRequestResponses().size();i++){ System.out.println("Put Batch Request Record ID :"+i+": " + putRecordBatchResult.getRequestResponses().get(i).getRecordId()); } putRecordBatchResult will write multiple records into the delivery stream by passing the putRecordBatchRequest, get the result, and print the RecordID. 
You will see the output like the following screen: Troubleshooting tips for Kinesis Firehose Sometimes data is not delivered at specified destinations. The following are steps to solve common issues while working with Kinesis Firehose: Data not delivered to Amazon S3: If error logging is not enabled then enable it and check error log messages for delivery failure. Verify that the S3 bucket mentioned in the Kinesis Firehose delivery stream exists. Verify whether data transformation with Lambda is enabled, the Lambda function mentioned in your delivery stream exists, and Kinesis Firehose has attempted to invoke the Lambda function. Verify whether the IAM role specified in the delivery stream has given proper access to the S3 bucket and Lambda function or not. Verify your Kinesis Firehose metrics to check whether the data was sent to the Kinesis Firehose delivery stream successfully. Data not delivered to Amazon Redshift/Elasticsearch: For Amazon Redshift and Elasticsearch, verify the points mentioned in Data not delivered to Amazon S3, including the IAM role, configuration, and public access. For CloudWatch and IoT, delivery stream not available as target: Some AWS services can only send messages and events to a Kinesis Firehose delivery stream which is in the same region. Verify that your Kinesis Firehose delivery stream is located in the same region as your other services. We completed implementations, examples, and best practices for Amazon DynamoDB and Amazon Kinesis AWS services using AWS SDK. If you found this post useful, do check out the book 'Expert AWS Development' to learn application integration with other AWS services like Amazon Lambda, Amazon SQS, and Amazon SWF. A serverless online store on AWS could save you money. Build one. Why is AWS the preferred cloud platform for developers working with big data? Verizon chooses Amazon Web Services(AWS) as its preferred cloud provider

How to Secure and Deploy an Android App

Sugandha Lahoti
20 Apr 2018
17 min read
In this article, we will be covering two extremely important Android-related topics:

Android application security
Android application deployment

We will kick off the post by discussing Android application security.

Securing an Android application

It should come as no surprise that security is an important consideration when building software. Besides the security measures put in place in the Android operating system, it is important that developers pay extra attention to ensure that their applications meet the set security standards. In this section, a number of important security considerations and best practices will be broken down for your understanding. Following these best practices will make your applications less vulnerable to malicious programs that may be installed on a client device.

Data storage

All things being equal, the privacy of data saved by an application to a device is the most common security concern in developing an Android application. Some simple rules can be followed to make your application data more secure.

Securing your data when using internal storage

As we saw in the previous chapter, internal storage is a good way to save private data on a device. Every Android application has a corresponding internal storage directory in which private files can be created and written to. These files are private to the creating application, and as such cannot be accessed by other applications on the client device. As a rule of thumb, if data should only be accessible by your application and it is possible to store it in internal storage, do so. Feel free to refer to the previous chapter for a refresher on how to use internal storage.

Securing your data when using external storage

External storage files are not private to applications and, as such, can be easily accessed by other applications on the same client device. As a result of this, you should consider encrypting application data before storing it in external storage. There are a number of libraries and packages that can be used to encrypt data prior to saving it to external storage. Facebook's Conceal (http://facebook.github.io/conceal/) library is a good option for external-storage data encryption. In addition to this, as another rule of thumb, do not store sensitive data in external storage. This is because external storage files can be manipulated freely. Validation should also be performed on input retrieved from external storage because of the untrustworthy nature of data stored there.

Using content providers

Content providers can either prevent or enable external access to your application data. Use the android:exported attribute when registering your content provider in the manifest file to specify whether external access to the content provider should be permitted. Set android:exported to true if you wish the content provider to be exported, otherwise set the attribute to false. In addition to this, content provider query methods (for example, query(), update(), and delete()) should be used to prevent SQL injection (a code injection technique that involves the execution of malicious SQL statements in an entry field by an attacker).

Networking security best practices for your Android app development

There are a number of best practices that should be followed when performing network transactions via an Android application. These best practices can be split into different categories.
We shall speak about Internet Protocol (IP) networking and telephony networking best practices in this section.

IP networking

When communicating with a remote computer via IP, it is important to ensure that your application makes use of HTTPS wherever possible (that is, wherever the server supports it). One major reason for doing this is that devices often connect to insecure networks, such as public wireless connections. HTTPS ensures encrypted communication between clients and servers, regardless of the network they are connected to. In Java, an HttpsURLConnection can be used for secure data transfer over a network. It is important to note that data received via an insecure network connection should not be trusted.

Telephony networking

In instances where data needs to be transferred freely between a server and client applications, Firebase Cloud Messaging (FCM), along with IP networking, should be utilized instead of other means, such as the Short Messaging Service (SMS) protocol. FCM is a multi-platform messaging solution that facilitates the seamless and reliable transfer of messages between applications. SMS is not a good candidate for transferring data messages, because:

It is not encrypted
It is not strongly authenticated
Messages sent via SMS are subject to spoofing
SMS messages are subject to interception

Input validation

The validation of user input is extremely important in order to avoid security risks that may arise. One such risk, as explained in the Using content providers section, is SQL injection. The malicious injection of SQL script can be prevented by the use of parameterized queries and the extensive sanitization of inputs used in raw SQL queries. In addition to this, inputs retrieved from external storage must be appropriately validated because external storage is not a trusted data source.

Working with user credentials

The risk of phishing can be alleviated by reducing the requirement for user credential input in an application. Instead of constantly requesting user credentials, consider using an authorization token. Eliminate the need for storing usernames and passwords on the device. Instead, make use of a refreshable authorization token.

Code obfuscation

Before publishing an Android application, it is imperative to utilize a code obfuscation tool, such as ProGuard, to prevent individuals from getting unhindered access to your source code through means such as decompilation. ProGuard comes prepackaged with the Android SDK, and, as such, no dependency inclusion is required. It is automatically included in the build process if you specify your build type to be a release. You can find out more about ProGuard here: https://www.guardsquare.com/en/proguard.

Securing broadcast receivers

By default, a broadcast receiver component is exported and as a result can be invoked by other applications on the same device. You can control other applications' access to your app's broadcast receiver by applying security permissions to it. Permissions can be set for broadcast receivers in an application's manifest file with the <receiver> element.

Securing dynamically loaded code

In scenarios in which the dynamic loading of code by your application is necessary, you must ensure that the code being loaded comes from a trusted source. In addition to this, you must make sure to reduce the risk of code tampering at all costs. Loading and executing code that has been tampered with is a huge security threat.
When code is being loaded from a remote server, ensure it is transferred over a secure, encrypted network. Keep in mind that code that is dynamically loaded runs with the same security permissions as your application (the permissions you defined in your application's manifest file). Securing services Unlike broadcast receivers, services are not exported by the Android system by default. The default exportation of a service only happens when an intent filter is added to the declaration of a service in the manifest file. The android:exported attribute should be used to ensure services are exported only when you want them to be. Set android:exported to true when you want a service to be exported and false otherwise. Deploying your Android Application So far, we have taken an in-depth look at the Android system, application development in Android, and some other important topics, such as Android application security. It is time for us to cover our final topic for this article pertaining to the Android ecosystem—launching and publishing an Android application. You may be wondering at this juncture what the words launch and publish mean. A launch is an activity that involves the introduction of a new product to the public (end users). Publishing an Android application is simply the act of making an Android application available to users. Various activities and processes must be carried out to ensure the successful launch of an Android application. There are 15 of these activities in all. They are: Understanding the Android developer program policies Preparing your Android developer account Localization planning Planning for simultaneous release Testing against the quality guideline Building a release-ready APK Planning your application's Play Store listing Uploading your application package to the alpha or beta channel Device compatibility definition Pre-launch report assessment Pricing and application distribution setup Distribution option selection In-app products and subscriptions setup Determining your application's content rating Publishing your application Wow! That's a long list. Don't fret if you don't understand everything on the list. Let's look at each item in more detail. Understanding the Android developer program policies There is a set of developer program policies that were created for the sole purpose of making sure that the Play Store remains a trusted source of software for its users. Consequences exist for the violation of these defined policies. As a result, it is important that you peruse and fully understand these developer policies—their purposes and consequences—before continuing with the process of launching your application. Preparing your Android developer account You will need an Android developer account to launch your application on the Play Store. Ensure that you set one up by signing up for a developer account and confirming the accuracy of your account details. If you ever need to sell products on an Android application of yours, you will need to set up a merchant account. Localization planning Sometimes, for the purpose of localization, you may have more than one copy of your application, with each localized to a different language. When this is the case, you will need to plan for localization early on and follow the recommended localization checklist for Android developers. You can view this checklist here: https://developer.android.com/distribute/best-practices/launch/localization-checklist.html. 
Planning for simultaneous release You may want to launch a product on multiple platforms. This has a number of advantages, such as increasing the potential market size of your product, reducing the barrier of access to your product, and maximizing the number of potential installations of your application. Releasing on numerous platforms simultaneously is generally a good idea. If you wish to do this with any product of yours, ensure you plan for this well in advance. In cases where it is not possible to launch an application on multiple platforms at once, ensure you provide a means by which interested potential users can submit their contact details so as to ensure that you can get in touch with them once your product is available on their platform of choice. Testing against the quality guidelines Quality guidelines provide testing templates that you can use to confirm that your application meets the fundamental functional and non-functional requirements that are expected by Android users. Ensure that you run your applications through these quality guides before launch. You can access these application quality guides here: https://developer.android.com/develop/quality-guidelines/index.html. Building a release-ready application package (APK) A release-ready APK is an Android application that has been packaged with optimizations and then built and signed with a release key. Building a release-ready APK is an important step in the launch of an Android application. Pay extra attention to this step. Planning your application's Play Store listing This step involves the collation of all resources necessary for your product's Play Store listing. These resources include, but are not limited to, your application's log, screenshots, descriptions, promotional graphics, and videos, if any. Ensure you include a link to your application's privacy policy along with your application's Play Store listing. It is also important to localize your application's product listing to all languages that your application supports. Uploading your application package to the alpha or beta channel As testing is an efficient and battle-tested way of detecting defects in software and improving software quality, it is a good idea to upload your application package to alpha and beta channels to facilitate carrying out alpha and beta software testing on your product. Alpha testing and beta testing are both types of acceptance testing. Device compatibility definition This step involves the declaration of Android versions and screen sizes that your application was developed to work on. It is important to be as accurate as possible in this step as defining inaccurate Android versions and screen sizes will invariably lead to users experiencing problems with your application. Pre-launch report assessment Pre-launch reports are used to identify issues found after the automatic testing of your application on various Android devices. Pre-launch reports will be delivered to you, if you opt in to them, when you upload an application package to an alpha or beta channel. Pricing and application distribution setup First, determine the means by which you want to monetize you application. After determining this, set up your application as either a free install or a paid download. After you have set up the desired pricing of your application, select the countries you wish to distribute you applications to. 
Distribution option selection This step involves the selection of devices and platforms—for example, Android TV and Android Wear—that you wish to distribute your app on. After doing this, the Google Play team will be able to review your application. If your application is approved after its review, Google Play will make it more discoverable. In-app products and subscriptions setup If you wish to sell products within your application, you will need to set up your in-app products and subscriptions. Here, you will specify the countries that you can sell into and take care of various monetary-related issues, such as tax considerations. In this step, you will also set up your merchant account. Determining your application's content rating It is necessary that you provide an accurate rating for the application you are publishing to the Play Store. This step is mandated by the Android Developer Program Policies for good reason. It aids the appropriate age group you are targeting to discover your application. Publishing your application Once you have catered for the necessary steps prior to this, you are ready to publish your application to the production channel of the Play Store. Firstly, you will need to roll out a release. A release allows you to upload the APK files of your application and roll out your application to a specific track. At the end of the release procedure, you can publish your application by clicking Confirm rollout. So, that was all we need to know to publish a new application on the Play Store. In most cases, you will not need to follow all these steps in a linear manner, you will just need to follow a subset of the steps—more specifically, those pertaining to the type of application you wish to publish. Releasing your Android app Having signed your  application, you can proceed with completing the required application details toward the goal of releasing your app. Firstly, you need to create a suitable store listing for the application. Open the application  in the Google Play Console and navigate to the store-listing page (this can be done by selecting Store Listing on the side navigation bar). You will need to fill out all the required information in the store listing page before we proceed further. This information includes product details, such as a title, short description, full description, as well as graphic assets and categorization information—including the application type, category and content rating, contact details, and privacy policy. The Google Play Console store listing page is shown in the following screenshot: Once the store listing information has been filled in, the next thing to fill in is the pricing and distribution information. Select Pricing & distribution on the left navigation bar to open up its preference selection page. For the sake of this demonstration, we set the pricing of this app to FREE. We also selected five random countries to distribute this application to. These countries are Nigeria, India, the United States of America, the United Kingdom, and Australia: Besides selecting the type of pricing and the available countries for product distribution, you will need to provide additional preference information. The necessary information to be provided includes device category information, user program information, and consent information. It is now time to add our signed APK to our Google Play Console app. Navigate to App releases | MANAGE BETA | EDIT RELEASE. 
In the page that is presented to you, you may be asked whether you want to opt into Google play app signing: For the sake of this example, select OPT-OUT. Once OPT-OUT is selected, you will be able to choose your APK file for upload from your computer's file system. Select your APK for upload by clicking BROWSE FILES, as shown in the following screenshot: After selecting an appropriate APK, it will be uploaded to the Google Play Console. Once the upload is done, the play console will automatically add a suggested release name for your beta release. This release name is based on the version name of the uploaded APK. Modify the release name if you are not comfortable with the suggestion. Next, add a suitable release note in the text field provided. Once you are satisfied with the data you have input, save and continue by clicking the Review button at the bottom of the web page. After reviewing the beta release, you can roll it out if you have added beta testers to your app. Rolling out a beta release is not our focus, so let's divert back to our main goal: publishing the Messenger app. Having uploaded an APK for your application, you can now complete the mandatory content rating questionnaire. Click the Content rating navigation item on the sidebar and follow the instructions to do this. Once the questionnaire is complete, appropriate ratings for your application will be generated: With the completion of the content rating questionnaire, the application is ready to be published to production. Applications that are published to production are made available to all users on the Google Play Store. On the play console, navigate to App releases | Manage Production | Create releases. When prompted to upload an APK, click the ADD APK FROM LIBRARY button to the right of the screen and select the APK we previously uploaded (the APK with a version name of 1.0) and complete the necessary release details similar to how we did when creating a beta release. Click the review button at the bottom of the page once you are ready to proceed. You will be given a brief release summary in the page that follows: Go through the information presented in the summary carefully. Start the roll out to production once you have asserted that you are satisfied with the information presented to you in the summary. Once you start the roll out to production, you will be prompted to confirm your understanding that your app will become available to users of the Play Store: Click Confirm once you are ready for the app to go live on the Play Store. Congratulations! You have now published your first application to the Google Play Store! In this article, we learned how to secure and publish Android applications to the Google Play Store. We identified security threats to Android applications and fully explained ways to alleviate them, we also noted best practices to follow when developing applications for the Android ecosystem.  Finally, we took a deep dive into the process of application publication to the Play Store covering all the necessary steps for the successful publication of an Android application. You enjoyed an excerpt from the book, Kotlin Programming By Example, written by Iyanu Adelekan. This book will take on Android development with Kotlin, from building a classic game Tetris to a messenger app, a level up in terms of complexity. Build your first Android app with Kotlin Creating a custom layout implementation for your Android app  

How to secure data in Salesforce Einstein Analytics

Amey Varangaonkar
22 Mar 2018
5 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book Learning Einstein Analytics written by Santosh Chitalkar. This book includes techniques to build effective dashboards and Business Intelligence metrics to gain useful insights from data.[/box] Before getting into security in Einstein Analytics, it is important to set up your organization, define user types so that it is available to use. In this article we explore key aspects of security in Einstein Analytics. The following are key points to consider for data security in Salesforce: Salesforce admins can restrict access to data by setting up field-level security and object-level security in Salesforce. These settings prevent data flow from loading sensitive Salesforce data into a dataset. Dataset owners can restrict data access by using row-level security. Analytics supports security predicates, a robust row-level security feature that enables you to model many different types of access control on datasets. Analytics also supports sharing inheritance. Take a look at the following diagram: Salesforce data security In Einstein Analytics, dataflows bring the data to the Analytics Cloud from Salesforce. It is important that Einstein Analytics has all the necessary permissions and access to objects as well as fields. If an object or a field is not accessible to Einstein then the data flow fails and it cannot extract data from Salesforce. So we need to make sure that the required access is given to the integration user and security user. We can configure the permission set for these users. Let’s configure permissions for an integration user by performing the following steps: Switch to classic mode and enter Profiles in the Quick Find / Search… box Select and clone the Analytics Cloud Integration User profile and Analytics Cloud Security User profile for the integration user and security user respectively: Save the cloned profiles and then edit them Set the permission to Read for all objects and fields Save the profile and assign it to users Take a look at the following diagram: Data pulled from Salesforce can be made secure from both sides: Salesforce as well as Einstein Analytics. It is important to understand that Salesforce and Einstein Analytics are two independent databases. So, a user security setting given to Einstein will not affect the data in Salesforce. There are the following ways to secure data pulled from Salesforce: Salesforce Security Einstein Analytics Security Roles and profiles Inheritance security Organization-Wide Defaults (OWD) and record ownership Security predicates Sharing rules Application-level security Sharing mechanism in Einstein All Analytics users start off with Viewer access to the default Shared App that’s available out-of-the-box; administrators can change this default setting to restrict or extend access. All other applications created by individual users are private, by default; the application owner and administrators have Manager access and can extend access to other Users, groups, or roles. The following diagram shows how the sharing mechanism works in Einstein Analytics: Here’s a summary of what users can do with Viewer, Editor, and Manager access: Action / Access level Viewer Editor Manager View dashboards, lenses, and datasets in the application. If the underlying dataset is in a different application than a lens or dashboard, the user must have access to both applications to view the lens or dashboard. Yes Yes Yes See who has access to the application. 
Yes Yes Yes Save contents of the application to another application that the user has Editor or Manager access to. Yes Yes Yes Save changes to existing dashboards, lenses, and datasets in the application (saving dashboards requires the appropriate permission set license and permission). Yes Yes Change the application’s sharing settings. Yes Rename the application. Yes Delete the application. Yes Confidentiality, integrity, and availability together are referred to as the CIA Triad and it is designed to help organizations decide what security policies to implement within the organization. Salesforce knows that keeping information private and restricting access by unauthorized users is essential for business. By sharing the application, we can share a lens, dashboard, and dataset all together with one click. To share the entire application, do the following: Go to your Einstein Analytics and then to Analytics Studio Click on the APPS tab and then the icon for your application that you want to share, as shown in the following screenshot: 3. Click on Share and it will open a new popup window, as shown in the following screenshot: Using this window, you can share the application with an individual user, a group of users, or a particular role. You can define the access level as Viewer, Editor, or Manager After selecting User, click on the user you wish to add and click on Add Save and then close the popup And that’s it. It’s done. Mass-sharing the application Sometimes, we are required to share the application with a wide audience: There are multiple approaches to mass-sharing the Wave application such as by role or by username In Salesforce classic UI, navigate to Setup|Public Groups | New For example, to share a sales application, label a public group as Analytics_Sales_Group Search and add users to a group by Role, Roles and Subordinates, or by Users (username): 5. Search for the Analytics_Sales public group 6. Add the Viewer option as shown in the following screenshot: 7. Click on Save Protecting data from breaches, theft, or from any unauthorized user is very important. And we saw that Einstein Analytics provides the necessary tools to ensure the data is secure. If you found this excerpt useful and want to know more about securing your analytics in Einstein, make sure to check out this book Learning Einstein Analytics.  
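For reference, the row-level security predicates mentioned above are simple filter expressions that Analytics evaluates against every row of a dataset when a user queries it. As a minimal illustrative sketch (the field name here is an assumption, not taken from this excerpt), a predicate that restricts each user to the rows they own could look like this:

'OwnerId' == "$User.Id"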

Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform

Sugandha Lahoti
14 May 2018
30 min read
When one mentions server-side JavaScript technology, Node.js is what comes to our mind first. Node.js is an extremely powerful and robust platform, and using this JavaScript platform we can build server-side applications very easily. In today's tutorial, we will focus on creating a chat application written in Kotlin that runs on Node.js; so, basically, we will transpile Kotlin code to JavaScript on the server side. This article is an excerpt from the book Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will get to know the building blocks of Kotlin and best practices for building quality, world-class applications.

Kotlin is a modern language and is gaining popularity in the JavaScript community day by day. The Kotlin language, being statically typed and rich in modern features, is superior to JavaScript. As with JavaScript, developers who know Kotlin can use the same language skills on both sides, but they also have the advantage of using a better language. The Kotlin code gets transpiled to JavaScript, and that in turn works with Node.js. This is the mechanism that lets you use Kotlin code with a server-side technology such as Node.js.

Creating a chat application

Our chat app will have the following functionalities:

User can log in by entering their nickname
User can see the list of online users
User will get notified when a new user joins
User can receive chats from anyone
User can perform a group chat in a chat room
User will receive a notification when any user leaves the chat

To visualize the app that we will develop, take a look at the following screenshots. The following screenshot is a page where the user will enter a nickname and gain entry to our chat app:

In the following screen, you can see a chat window and a list of online users:

We have configured this application in a slightly different way: we have kept the backend code module and the frontend code module separate, using the following method:

1. Create a new project named kotlin_node_chat_app.
2. Now, create a new Gradle module named backend, select Kotlin (JavaScript) under the libraries and additional information window, and follow the remaining steps.
3. Similarly, also create a Gradle module named webapp.

The backend module will contain all the Kotlin code that will later be converted into Node.js code, and the webapp module will contain all the Kotlin code that will later be converted into browser JavaScript code. We have referred to the directory structure from GitHub. After performing the previous steps correctly, your project will have three build.gradle files. We have highlighted all three files in the project explorer section, as shown in the following screenshot:

Setting up the Node.js server

We need to initialize our root directory for Node. Execute npm init and it will create package.json. To run the login page that we will create later in this article, we need to set up the Node.js server, and we want to set it up in such a way that executing npm start starts the server.
To achieve it, our package.json file should look like the following piece of code:

{
  "name": "kotlin_node_chat_app",
  "version": "1.0.0",
  "description": "",
  "main": "backend/server/app.js",
  "scripts": {
    "start": "node backend/server/app.js"
  },
  "author": "Hardik Trivedi",
  "license": "ISC",
  "dependencies": {
    "ejs": "^2.5.7",
    "express": "^4.16.2",
    "kotlin": "^1.1.60",
    "socket.io": "^2.0.4"
  }
}

We have specified a few dependencies here as well:

EJS to render HTML pages
Express.JS as its framework, which makes it easier to deal with Node.js
Kotlin, because, ultimately, we want to write our code in Kotlin and want it compiled into the Node.js code
Socket.IO to perform chat

Execute npm install on the Terminal/Command Prompt and it should trigger the download of all these dependencies.

Specifying the output files

Now, it's very important where your output will be generated once you trigger the build. For that, build.gradle will help us. Specify the following lines in your module-level build.gradle file. The backend module's build.gradle will have the following lines of code:

compileKotlin2Js {
    kotlinOptions.outputFile = "${projectDir}/server/app.js"
    kotlinOptions.moduleKind = "commonjs"
    kotlinOptions.sourceMap = true
}

The webapp module's build.gradle will have the following lines of code:

compileKotlin2Js {
    kotlinOptions.metaInfo = true
    kotlinOptions.outputFile = "${projectDir}/js/main.js"
    kotlinOptions.sourceMap = true
    kotlinOptions.main = "call"
}

In both the compileKotlin2Js nodes, kotlinOptions.outputFile plays a key role. This basically tells us that once Kotlin's code gets compiled, it will generate app.js and main.js for Node.js and JavaScript respectively. In the index.ejs file, you should define a script tag to load main.js. It will look something like the following line of code:

<script type="text/javascript" src="js/main.js"></script>

Along with this, also specify the following two tags:

<script type="text/javascript" src="lib/kotlin/kotlin.js"></script>
<script type="text/javascript" src="lib/kotlin/kotlinx-html-js.js"></script>

Examining the compilation output

The kotlin.js and kotlinx-html-js.js files are nothing but the Kotlin output files. It's not compilation output, but actually transpiled output. The following are output compilations:

kotlin.js: This is the runtime and standard library. It doesn't change between applications, and it's tied to the version of Kotlin being used.
{module}.js: This is the actual code from the application. All files are compiled into a single JavaScript file that has the same name as the module.
{file}.meta.js: This metafile will be used for reflection and other functionalities.
Let's assume our final Main.kt file will look like this:

fun main(args: Array<String>) {
    val socket: dynamic = js("window.socket")
    val chatWindow = ChatWindow {
        println("here")
        socket.emit("new_message", it)
    }
    val loginWindow = LoginWindow {
        chatWindow.showChatWindow(it)
        socket.emit("add_user", it)
    }
    loginWindow.showLogin()
    socket.on("login", { data ->
        chatWindow.showNewUserJoined(data)
        chatWindow.showOnlineUsers(data)
    })
    socket.on("user_joined", { data ->
        chatWindow.showNewUserJoined(data)
        chatWindow.addNewUsers(data)
    })
    socket.on("user_left", { data ->
        chatWindow.showUserLeft(data)
    })
    socket.on("new_message", { data ->
        chatWindow.showNewMessage(data)
    })
}

For this, inside main.js, our main function will look like this:

function main(args) {
    var socket = window.socket;
    var chatWindow = new ChatWindow(main$lambda(socket));
    var loginWindow = new LoginWindow(main$lambda_0(chatWindow, socket));
    loginWindow.showLogin();
    socket.on('login', main$lambda_1(chatWindow));
    socket.on('user_joined', main$lambda_2(chatWindow));
    socket.on('user_left', main$lambda_3(chatWindow));
    socket.on('new_message', main$lambda_4(chatWindow));
}

The actual main.js file will be bulkier because it will have all the code transpiled, including other functions and the LoginWindow and ChatWindow classes. Keep a watchful eye on how the Lambda functions are converted into simple JavaScript functions. The Lambda functions for all the socket events are transpiled into the following piece of code:

function main$lambda_1(closure$chatWindow) {
    return function (data) {
        closure$chatWindow.showNewUserJoined_qk3xy8$(data);
        closure$chatWindow.showOnlineUsers_qk3xy8$(data);
    };
}
function main$lambda_2(closure$chatWindow) {
    return function (data) {
        closure$chatWindow.showNewUserJoined_qk3xy8$(data);
        closure$chatWindow.addNewUsers_qk3xy8$(data);
    };
}
function main$lambda_3(closure$chatWindow) {
    return function (data) {
        closure$chatWindow.showUserLeft_qk3xy8$(data);
    };
}
function main$lambda_4(closure$chatWindow) {
    return function (data) {
        closure$chatWindow.showNewMessage_qk3xy8$(data);
    };
}

As can be seen, Kotlin aims to create very concise and readable JavaScript, allowing us to interact with it as needed.

Specifying the router

We need to write a behavior in the router.kt file. This will let the server know which page to load when any request hits the server. The router.kt file will look like this:

fun router() {
    val express = require("express")
    val router = express.Router()
    router.get("/", { req, res ->
        res.render("index")
    })
    return router
}

This simply means that whenever a GET request for the root path reaches the server, it should display the index page to the user. We then instruct the framework to use this router by writing the following line of code:

app.use("/", router())

Starting the node server

Now let's create a server. We should create an app.kt file under the backend module at the backend/src/kotlin path. Refer to the source code to verify. Write the following piece of code in app.kt:

external fun require(module: String): dynamic
external val process: dynamic
external val __dirname: dynamic

fun main(args: Array<String>) {
    println("Server Starting!")
    val express = require("express")
    val app = express()
    val path = require("path")
    val http = require("http")

    /**
     * Get port from environment and store in Express.
     */
    val port = normalizePort(process.env.PORT)
    app.set("port", port)

    // view engine setup
    app.set("views", path.join(__dirname, "../../webapp"))
    app.set("view engine", "ejs")
    app.use(express.static("webapp"))

    val server = http.createServer(app)
    app.use("/", router())
    app.listen(port, {
        println("Chat app listening on port http://localhost:$port")
    })
}

fun normalizePort(port: Int) = if (port >= 0) port else 7000

There are multiple things to highlight here:

external: This is basically an indicator for Kotlin that the declaration written along with it is pure JavaScript code. When this code gets compiled into the respective language, the compiler understands that the class, function, or property written along with it will be provided by the developer, and so no JavaScript code should be generated for that invocation. The external modifier is automatically applied to nested declarations. For example, consider the following code block. We declare the class as external and automatically all its functions and properties are treated as external:

external class Node {
    val firstChild: Node
    fun append(child: Node): Node
    fun removeChild(child: Node): Node
    // etc
}

dynamic: You will often see the usage of dynamic while working with JavaScript. Kotlin is a statically typed language, but it still has to interoperate with languages such as JavaScript. To support such use cases with a loosely typed or untyped programming language, dynamic is useful. It turns off Kotlin's type checker. A value of this type can be assigned to any variable or passed anywhere as a parameter; any value can be assigned to a variable of dynamic type or passed to a function that takes dynamic as a parameter. Null checks are disabled for such values.

require("express"): We typically use Express.js with Node.js. It's a framework that goes hand in hand with Node.js and is designed with the sole purpose of developing web applications. A Node.js developer must be very familiar with it.

process.env.PORT: This will find an available port on the server, as simple as that. This line is required if you want to deploy your application on a utility like Heroku. Also, notice the normalizePort function and see how concise it is. The if…else condition is written as an expression, so no explicit return keyword is required. The Kotlin compiler also identifies that if (port >= 0) port else 7000 will always return an Int, hence no explicit return type is required. Smart, isn't it!

__dirname: This is always the location where your currently executing script is present. We will use it to create a path that indicates where we have kept our web pages.

app.listen(): This is a crucial one. It starts the socket and listens for incoming requests. It takes multiple parameters; mainly, we will use the two-parameter form, which takes the port number and a connection callback as arguments. The app.listen() method is identical to http.Server.listen(). In Kotlin, it takes a Lambda function.

Now, it's time to kick-start the server. Trigger the Gradle build by using ./gradlew build; all Kotlin code will get compiled into Node.js code. On the Terminal, go to the root directory and execute npm start. You should be able to see the following message on your Terminal/Command Prompt:

Creating a login page

Now, let's begin with the login page. Along with that, we will have to enable some other settings in the project as well.
If you refer to the screenshot that we mentioned at the beginning of the previous section, you can make out that we will have a title, an input field, and a button as part of the login page. We will create the page using Kotlin: the entire HTML tree structure, and the CSS applied to it, will be part of our Kotlin code. For that, you should refer to the Main.kt and LoginWindow files.

Creating an index.ejs file

We will use EJS (effective JavaScript templating) to render HTML content on the page. EJS and Node.js go hand in hand; it's simple, flexible, easy to debug, and increases development speed. Initially, index.ejs would look like the following code snippet:

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body>
<div id="container" class="mainContainer">
</div>
</body>
</html>

The <div> tag will contain all the different views, for example, the Login View, the Chat Window View, and so on.

Using DSL

DSL stands for domain-specific language. As the name indicates, it gives you the feeling as if you are writing code in a language using terminology particular to a given domain without being geeky, but then this terminology is cleverly embedded as a syntax in a powerful language. If you are from the Groovy community, you must be aware of builders. Groovy builders allow you to define data in a semi-declarative way; it's a kind of mini-language of its own. Builders are considered good for generating XML and laying out UI components. The Kotlin DSL uses Lambdas a lot. The DSL in Kotlin is a type-safe builder, which means we can detect compilation errors in IntelliJ's beautiful IDE. Type-safe builders are much better than the dynamically typed builders of Groovy.

Using kotlinx.html

The DSL to build HTML trees is a pluggable dependency; we therefore need to set it up and configure it for our project. We are using Gradle as a build tool, and Gradle has the best way to manage dependencies. We will define the following line of code in our build.gradle file to use kotlinx.html:

compile("org.jetbrains.kotlinx:kotlinx-html-js:$html_version")

Gradle will automatically download this dependency from jcenter(). Build your project from the menu Build | Build Project. You can also trigger a build from the terminal/command prompt; to build a project from the Terminal, go to the root directory of your project and then execute ./gradlew build. Now create the index.ejs file under the webapp directory. At this moment, your index.ejs file may look like the following:

Inside your LoginWindow class file, you should write the following piece of code:

class LoginWindow(val callback: (String) -> Unit) {

    fun showLogin() {
        val formContainer = document.getElementById("container") as HTMLDivElement
        val loginDiv = document.create.div {
            id = "loginDiv"
            h3(classes = "title") {
                +"Welcome to Kotlin Blueprints chat app"
            }
            input(classes = "nickNameInput") {
                id = "nickName"
                onInputFunction = onInput()
                maxLength = 16.toString()
                placeholder = "Enter your nick name"
            }
            button(classes = "loginButton") {
                +"Login"
                onClickFunction = onLoginButtonClicked()
            }
        }
        formContainer.appendChild(loginDiv)
    }
}

Observe how we have provided the ID, the placeholder text, and a maximum length for the nickname input. Let's spend some time understanding the previous code. The div, input, button, and h3 blocks are all nothing but functions; they are functions that take a Lambda as the last parameter.
You can call them in different ways: someFunction({}) someFunction("KotlinBluePrints",1,{}) someFunction("KotlinBluePrints",1){} someFunction{} Lambda functions Lambda functions are nothing but functions without a name. We used to call them anonymous functions. A function is basically passed into a parameter of a function call, but as an expression. They are very useful. They save us a lot of time by not writing specific functions in an abstract class or interface. Lambda usage can be as simple the following code snippet, where it seems like we are simply binding a block an invocation of the helloKotlin function: fun main(args: Array<String>) { val helloKotlin={println("Hello from KotlinBlueprints team!")} helloKotlin() } At the same time, lambda can be a bit complex as well, just like the following code block: fun <T> lock(lock: Lock, body: () -> T): T { lock.lock() try { return body() } finally { lock.unlock() } } In the previous function, we are acquiring a lock before executing a function and releasing it when the function gets executed. This way, you can synchronously call a function in a multithreaded environment. So, if we have a use case where we want to execute sharedObject.someCrucialFunction() in a thread-safe environment, we will call the preceding lock function like this: lock(lock,{sharedObject.someCrucialFunction()}) Now, the lambda function is the last parameter of a function call, so it can be easily written like this: lock(lock) { sharedObject.someCrucialFunction() } Look how expressive and easy to understand the code is. We will dig more into the Lambda in the upcoming section. Reading the nickname In the index.ejs page, we will have an input field with the ID nickName when it is rendered. We can simply read the value by writing the following lines of code: val nickName = (document.getElementById("nickName") as? HTMLInputElement)?.value However, to cover more possibilities, we have written it in a slightly different way. We have written it as if we are taking the input as an event. The following code block will continuously read the value that is entered into the nickName input field: private fun onInput(): (Event) -> Unit { return { val input = it.currentTarget as HTMLInputElement when (input.id) { "nickName" -> nickName = input.value "emailId" -> email = input.value } } } Check out, we have used the when function, which is a replacement for the switch case. The preceding code will check whether the ID of the element is nickName or emailId, and, based on that, it will assign the value to the respective objects by reading them from the in-out field. In the app, we will only have the nickname as the input file, but using the preceding approach, you can read the value from multiple input fields. In its simplest form, it looks like this: when (x) { 1 -> print("x == 1") 2 -> print("x == 2") else -> { // Note the block print("x is neither 1 nor 2") } } The when function compares its argument against all branches, top to bottom until some branch condition is met. The when function can be used either as an expression or as a statement. The else branch is evaluated if none of the other branch conditions are satisfied. If when is used as an expression, the else branch is mandatory, unless the compiler can prove that all possible cases are covered with branch conditions. 
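For instance, here is a minimal sketch (illustrative values, not from the chat app) of when used as an expression, where else is required because the result is assigned to a value:

val label = when (x) {
    1 -> "one"
    2 -> "two"
    else -> "many" // mandatory: the expression must produce a value for every x
}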
If many cases should be handled in the same way, the branch conditions may be combined with a comma, as shown in the following code: when (x) { 0, 1 -> print("x == 0 or x == 1") else -> print("otherwise") } The following uses arbitrary expressions (not only constants) as branch conditions: when (x) { parseInt(s) -> print("s encodes x") else -> print("s does not encode x") } The following is used to check a value for being in or !in a range or a collection: when (x) { in 1..10 -> print("x is in the range") in validNumbers -> print("x is valid") !in 10..20 -> print("x is outside the range") else -> print("none of the above") } Passing nickname to the server Once our setup is done, we are able to start the server and see the login page. It's time to pass the nickname or server and enter the chat room. We have written a function named onLoginButtonClicked(). The body for this function should like this: private fun onLoginButtonClicked(): (Event) -> Unit { return { if (!nickName.isBlank()) { val formContainer = document.getElementById("loginDiv") as HTMLDivElement formContainer.remove() callback(nickName) } } } The preceding function does two special things: Smart casting Registering a simple callback Smart cast Unlike any other programming language, Kotlin also provides class cast support. The document.getElementById() method returns an Element type if instance. We basically want it to cast into HTMLDivElement to perform some <div> related operation. So, using as, we cast the Element into HTMLDivElement. With the as keyword, it's unsafe casting. It's always better to use as?. On a successful cast, it will give an instance of that class; otherwise, it returns null. So, while using as?, you have to use Kotlin's null safety feature. This gives your app a great safety net onLoginButtonClicked can be refactored by modifying the code a bit. The following code block is the modified version of the function. We have highlighted the modification in bold: private fun onLoginButtonClicked(): (Event) -> Unit { return { if (!nickName.isBlank()) { val formContainer = document.getElementById("loginDiv") as? HTMLDivElement formContainer?.remove() callback(nickName) } } } Registering a callback Oftentimes, we need a function to notify us when something gets done. We prefer callbacks in JavaScript. To write a click event for a button, a typical JavaScript code could look like the following: $("#btn_submit").click(function() { alert("Submit Button Clicked"); }); With Kotlin, it's simple. Kotlin uses the Lambda function to achieve this. For the LoginWindow class, we have passed a callback as a constructor parameter. In the LoginWindow class (val callback: (String) -> Unit), the class header specifies that the constructor will take a callback as a parameter, which will return a string when invoked. To pass a callback, we will write the following line of code: callback(nickName) To consume a callback, we will write code that will look like this: val loginWindow = LoginWindow { chatWindow.showChatWindow(it) socket.emit("add_user", it) } So, when callback(nickName) is called, chatWindow.showChatWindow will get called and the nickname will be passed. Without it, you are accessing nothing but the nickname. Establishing a socket connection We shall be using the Socket.IO library to set up sockets between the server and the clients. 
Socket.IO takes care of the following complexities: Setting up connections Sending and receiving messages to multiple clients Notifying clients when the connection is disconnected Read more about Socket.IO at https://socket.io/. Setting up Socket.IO We have already specified the dependency for Socket.IO in our package.json file. Look at this file. It has a dependency block, which is mentioned in the following code block: "dependencies": { "ejs": "^2.5.7", "express": "^4.16.2", "kotlin": "^1.1.60", "socket.io": "^2.0.4" } When we perform npm install, it basically downloads the socket-io.js file and keeps node_modules | socket.io inside. We will add this JavaScript file to our index.ejs file. There we can find the following mentioned <script> tag inside the <body> tag: <script type="text/javascript" src="/socket.io/socket.io.js"> </script> Also, initialize socket in the same index.js file like this: <script> window.socket = io(); </script> Listening to events With the Socket.IO library, you should open a port and listen to the request using the following lines of code. Initially, we were directly using app.listen(), but now, we will pass that function as a listener for sockets: val io = require("socket.io").listen(app.listen(port, { println("Chat app listening on port http://localhost:$port") })) The server will listen to the following events and based on those events, it will perform certain tasks: Listen to the successful socket connection with the client Listen for the new user login events Whenever a new user joins the chat, add it to the online users chat list and broadcast it to every client so that they will know that a new member has joined the chat Listen to the request when someone sends a message Receive the message and broadcast it to all the clients so that the client can receive it and show it in the chat window Emitting the event The Socket.IO library works on a simple principle—emit and listen. Clients emit the messages and a listener listens to those messages and performs an action associated with them. So now, whenever a user successfully logs in, we will emit an event named add_user and the server will add it to an online user's list. The following code line emits the message: socket.emit("add_user", it) The following code snippet listens to the message and adds a user to the list: socket.on("add_user", { nickName -> socket.nickname= nickName numOfUsers = numOfUsers.inc() usersList.add(nickName as String) }) The socket.on function will listen to the add_user event and store the nickname in the socket. Incrementing and decrementing operator overloading There are a lot of things operator overloading can do, and we have used quite a few features here. Check out how we increment a count of online users: numOfUsers = numOfUsers.inc() It is a much more readable code compared to numOfUsers = numOfUsers+1, umOfUsers += 1, or numOfUsers++. Similarly, we can decrement any number by using the dec() function. Operator overloading applies to the whole set of unary operators, increment-decrement operators, binary operators, and index access operator. Read more about all of them here. Showing a list of online users Now we need to show the list of online users. For this, we need to pass the list of all online users and the count of users along with it. Using the data class The data class is one of the most popular features among Kotlin developers. It is similar to the concept of the Model class. 
The compiler automatically derives the following members from all properties declared in the primary constructor: The equals()/hashCode() pair The toString() of the User(name=John, age=42) form The componentN() functions corresponding to the properties in their order of declaration The copy() function A simple version of the data class can look like the following line of code, where name and age will become properties of a class: data class User(val name: String, val age: Int) With this single line and, mainly, with the data keyword, you get equals()/hasCode(), toString() and the benefits of getters and setters by using val/var in the form of properties. What a powerful keyword! Using the Pair class In our app, we have chosen the Pair class to demonstrate its usage. The Pair is also a data class. Consider the following line of code: data class Pair<out A, out B> : Serializable It represents a generic pair of two values. You can look at it as a key–value utility in the form of a class. We need to create a JSON object of a number of online users with the list of their nicknames. You can create a JSON object with the help of a Pair class. Take a look at the following lines of code: val userJoinedData = json(Pair("numOfUsers", numOfUsers), Pair("nickname", nickname), Pair("usersList", usersList)) The preceding JSON object will look like the following piece of code in the JSON format: { "numOfUsers": 3, "nickname": "Hardik Trivedi", "usersList": [ "Hardik Trivedi", "Akshay Chordiya", "Ashish Belagali" ] } Iterating list The user's list that we have passed inside the JSON object will be iterated and rendered on the page. Kotlin has a variety of ways to iterate over the list. Actually, anything that implements iterable can be represented as a sequence of elements that can be iterated. It has a lot of utility functions, some of which are mentioned in the following list: hasNext(): This returns true if the iteration has more elements hasPrevious(): This returns true if there are elements in the iteration before the current element next(): This returns the next element in the iteration nextIndex(): This returns the index of the element that would be returned by a subsequent call to next previous(): This returns the previous element in the iteration and moves the cursor position backward previousIndex(): This returns the index of the element that would be returned by a subsequent call to previous() There are some really useful extension functions, such as the following: .asSequence(): This creates a sequence that returns all elements from this iterator. The sequence is constrained to be iterated only once. .forEach(operation: (T) -> Unit): This performs the given operation on each element of this iterator. .iterator(): This returns the given iterator itself and allows you to use an instance of the iterator in a for the loop. .withIndex(): Iterator<IndexedValue<T>>: This returns an iterator and wraps each value produced by this iterator with the IndexedValue, containing a value, and its index. We have used forEachIndexed; this gives the extracted value at the index and the index itself. Check out the way we have iterated the user list: fun showOnlineUsers(data: Json) { val onlineUsersList = document.getElementById("onlineUsersList") onlineUsersList?.let { val usersList = data["usersList"] as? 
Array<String> usersList?.forEachIndexed { index, nickName -> it.appendChild(getUserListItem(nickName)) } } } Sending and receiving messages Now, here comes the interesting part: sending and receiving a chat message. The flow is very simple: The client will emit the new_message event, which will be consumed by the server, and the server will emit it in the form of a broadcast for other clients. When the user clicks on Send Message, the onSendMessageClicked method will be called. It sends the value back to the view using callback and logs the message in the chat window. After successfully sending a message, it clears the input field as well. Take a look at the following piece of code: private fun onSendMessageClicked(): (Event) -> Unit { return { if (chatMessage?.isNotBlank() as Boolean) { val formContainer = document.getElementById("chatInputBox") as HTMLInputElement callback(chatMessage!!) logMessageFromMe(nickName = nickName, message = chatMessage!!) formContainer.value = "" } } } Null safety We have defined chatMessage as nullable. Check out the declaration here: private var chatMessage: String? = null Kotlin is, by default, null safe. This means that, in Kotlin, objects cannot be null. So, if you want any object that can be null, you need to explicitly state that it can be nullable. With the safe call operator ?., we can write if(obj !=null) in the easiest way ever. The if (chatMessage?.isNotBlank() == true) can only be true if it's not null, and does not contain any whitespace. We do know how to use the Elvis operator while dealing with null. With the help of the Elvis operator, we can provide an alternative value if the object is null. We have used this feature in our code in a number of places. The following are some of the code snippets that highlight the usage of the safe call operator. Removing the view if not null: formContainer?.remove() Iterating over list if not null: usersList?.forEachIndexed { _, nickName -> it.appendChild(getUserListItem(nickName)) } Appending a child if the div tag is not null: onlineUsersList?.appendChild(getUserListItem (data["nickName"].toString())) Getting a list of all child nodes if the <ul> tag is not null: onlineUsersList?.childNodes Checking whether the string is not null and not blank: chatMessage?.isNotBlank() Force unwraps Sometimes, you will have to face a situation where you will be sure that the object will not be null at the time of accessing it. However, since you have declared nullable at the beginning, you will have to end up using force unwraps. Force unwraps have the syntax of !!. This means you have to fetch the value of the calling object, irrespective of it being nullable. We are explicitly reading the chatMessage value to pass its value in the callback. The following is the code: callback(chatMessage!!) Force unwraps are something we should avoid. We should only use them while dealing with interoperability issues. Otherwise, using them is basically nothing but throwing away Kotlin's beautiful features. Using the let function With the help of Lambda and extension functions, Kotlin is providing yet another powerful feature in the form of let functions. The let() function helps you execute a series of steps on the calling object. This is highly useful when you want to perform some code where the calling object is used multiple times and you want to avoid a null check every time. In the following code block, the forEach loop will only get executed if onlineUsersList is not null. 
We can refer to the calling object inside the let function using it: fun showOnlineUsers(data: Json) { val onlineUsersList = document.getElementById("onlineUsersList") onlineUsersList?.let { val usersList = data["usersList"] as? Array<String> usersList?.forEachIndexed { _, nickName -> it.appendChild(getUserListItem(nickName)) } } } Named parameter What if we told you that while calling, it's not mandatory to pass the parameter in the same sequence that is defined in the function signature? Believe us. With Kotlin's named parameter feature, it's no longer a constraint. Take a look at the following function that has a nickName parameter and the second parameter is message: private fun logMessageFromMe(nickName: String, message: String) { val onlineUsersList = document.getElementById("chatMessages") val li = document.create.li { div(classes = "sentMessages") { span(classes = "chatMessage") { +message } span(classes = "filledInitialsMe") { +getInitials(nickName) } } } onlineUsersList?.appendChild(li) } If you call a function such as logMessageForMe(mrssage,nickName), it will be a blunder. However, with Kotlin, you can call a function without worrying about the sequence of the parameter. The following is the code for this: fun showNewMessage(data: Json) { logMessage(message = data["message"] as String, nickName = data["nickName"] as String) } Note how the showNewMessage() function is calling it, passing message as the first parameter and nickName as the second parameter. Disconnecting a socket Whenever any user leaves the chat room, we will show other online users a message saying x user left. Socket.IO will send a notification to the server when any client disconnects. Upon receiving the disconnect, the event server will remove the user from the list, decrement the count of online users, and broadcast the event to all clients. The code can look something like this: socket.on("disconnect", { usersList.remove(socket.nicknameas String) numOfUsers = numOfUsers.dec() val userJoinedData = json(Pair("numOfUsers", numOfUsers), Pair("nickName", socket.nickname)) socket.broadcast.emit("user_left", userJoinedData) }) Now, it's the client's responsibility to show the message for that event on the UI. The client will listen to the event and the showUsersLeft function will be called from the ChatWindow class. The following code is used for receiving the user_left broadcast: socket.on("user_left", { data -> chatWindow.showUserLeft(data) }) The following displays the message with the nickname of the user who left the chat and the count of the remaining online users: fun showUserLeft(data: Json) { logListItem("${data["nickName"]} left") logListItem(getParticipantsMessage(data["numOfUsers"] as Int)) } Styling the page using CSS We saw how to build a chat application using Kotlin, but without showing the data on a beautiful UI, the user will not like the web app. We have used some simple CSS to give a rich look to the index.ejs page. The styling code is kept inside webapp/css/ styles.css. However, we have done everything so far entirely and exclusively in Kotlin. So, it's better we apply CSS using Kotlin as well. You may have already observed that there are a few mentions of classes. It's nothing but applying the CSS in a Kotlin way. 
Take a look at how we have applied the classes while making HTML tree elements using a DSL: fun showLogin() { val formContainer = document.getElementById("container") as HTMLDivElement val loginDiv = document.create.div { id = "loginDiv" h3(classes = "title") { +"Welcome to Kotlin Blueprints chat app" } input(classes = "nickNameInput") { id = "nickName" onInputFunction = onInput() maxLength = 16.toString() placeholder = "Enter your nick name" } button(classes = "loginButton") { +"Login" onClickFunction = onLoginButtonClicked() } } formContainer.appendChild(loginDiv) } We developed an entire chat application using Kotlin. If you liked this extract, read our book Kotlin Blueprints to build a REST API using Kotlin. Read More Top 4 chatbot development frameworks for developers How to build a basic server-side chatbot using Go 5 reasons to choose Kotlin over Java


Exploring Windows PowerShell 5.0

Packt
12 Oct 2015
16 min read
In this article by Chendrayan Venkatesan, the author of the book Windows PowerShell for .NET Developers, we will cover the following topics:

Basics of Desired State Configuration (DSC)
Parsing structured objects using PowerShell
Exploring package management
Exploring PowerShell Get-Module
Exploring other enhanced features

(For more resources related to this topic, see here.)

Windows PowerShell 5.0 has many significant benefits; to know more about its features, refer to the following link: http://go.microsoft.com/fwlink/?LinkID=512808

A few highlights of Windows PowerShell 5.0 are as follows:

Improved usability
Backward compatibility
The Class and Enum keywords are introduced
Parsing structured objects is made easy using the ConvertFrom-String command
New modules are introduced in Windows PowerShell 5.0, such as Archive and Package Management (formerly known as OneGet)
ISE supports transcriptions
Using the PowerShell Get-Module cmdlet, we can find, install, and publish modules
Debugging at the runspace level can be done using the Microsoft.PowerShell.Utility module

Basics of Desired State Configuration

Desired State Configuration, also known as DSC, is a new management platform in Windows PowerShell. Using DSC, we can deploy and manage configuration data for software servicing and manage the environment. DSC can be used to streamline datacenters; it was introduced along with Windows Management Framework 4.0 and has been heavily extended in Windows Management Framework 5.0.

A few highlights of DSC in the April 2015 Preview are as follows:

New cmdlets are introduced in WMF 5.0
A few DSC commands are updated and remarkable changes are made to the configuration management platform in PowerShell 5.0
DSC resources can be built using class, so there is no need for a MOF file

It's not mandatory to know PowerShell to learn DSC, but it's a great added advantage. Similar to a function, we can also use the configuration keyword, but there is a huge difference, because in DSC everything is declarative, which is a cool thing in Desired State Configuration. Before beginning this exercise, I created a DSCDemo lab machine in the Azure cloud with Windows Server 2012, which is available out of the box, so the default PowerShell version is 4.0.

For now, let's create and define a simple configuration that creates a file on the local host. Yes, a simple New-Item command can do that, but it's an imperative cmdlet, and we would need to write a program to tell the computer to create the file only if it does not exist. The structure of a DSC configuration is as follows:

Configuration Name {
    Node ComputerName {
        ResourceName <String> {
        }
    }
}

To create a simple text file with contents, we use the following code:

Configuration FileDemo {
    Node $env:COMPUTERNAME {
        File FileDemo {
            Ensure = 'Present'
            DestinationPath = 'C:\Temp\Demo.txt'
            Contents = 'PowerShell DSC Rocks!'
            Force = $true
        }
    }
}

Look at the following screenshot. The following are the steps represented in the preceding figure:

1. Using the Configuration keyword, we are defining a configuration with the name FileDemo—it's a friendly name.
2. Inside the Configuration block we created a Node block and also a file on the local host.
3. File is the resource name.
4. FileDemo is a friendly name for the resource, and it's also a string.
5. Properties of the file resource.

This creates the MOF file; we invoke the configuration in the same way we call a function. But wait: nothing has been created on the machine yet, we have only generated a MOF file.
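To actually produce and apply the MOF, the configuration is typically compiled and enacted as follows; this is a minimal sketch, and the output path is illustrative rather than taken from the excerpt:

# Running the configuration function generates the MOF file(s) under the given path
FileDemo -OutputPath C:\DSCDemo

# Apply the configuration to the local machine (the "Make it so" phase described below)
Start-DscConfiguration -Path C:\DSCDemo -Wait -Verbose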
Look at the MOF file structure in the following image: We can manually edit the MOF and use it on another machine that has PS 4.0 installed on it. It's not mandatory to use PowerShell for generating MOF, if you are comfortable with PowerShell, you can directly write the MOF file. To explore the available DSC resources you can execute the following command: Get-DscResource The output is illustrated in the following image: Following are the steps represented in the preceding figure: Shows you how the resources are implemented. Binary, Composite, PowerShell, and so on. In the preceding example, we created a DSC Configuration that's FileDemo and that is listed as Composite. Name of the resource. Module name the resource belongs to. Properties of the resource. To know the Syntax of a particular DSC resource we can try the following code: Get-DscResource -Name Service -Syntax The output is illustrated in the following figure ,which shows the resource syntax in detail: Now, let's see how DSC works and its three different phases: The authoring phase. The staging phase. The "Make it so" phase. The authoring phase In this phase we will create a DSC Configuration using PowerShell and this outputs a MOF file. We saw a FileDemo example to create a configuration is considered to be an authoring phase. The staging phase In this phase the declarative MOF will be staged and it's as per node. DSC has a push and pull model, where push is simply pushing the configuration to target nodes. The custom providers need to be manually placed in target machines whereas in pull mode, we need to build an IIS Server that will have MOF for target nodes and this is well defined by the OData interface. In pull mode, the custom providers are downloaded to target system. The "Make it so" phase This is the phase for enacting the configuration, that is applying the configuration on the target nodes. Before we summarize the basics of DSC, let's see a few more DSC Commands. We can do this by executing the following command: Get-Command -Noun DSC* The output is as follows: We are using a PowerShell 4.0 stable release and not 5.0, so the version will not be available. Local Configuration Manager (LCM) is the engine for DSC and it runs on all nodes. LCM is responsible to call the configuration resources that are included in a DSC configuration script. Try executing Get-DscLocalConfigurationManager cmdlet to explore its properties. To Apply the LCM settings on target nodes we can use Set-DscLocalConfigurationManager cmdlet. Use case of classes in WMF 5.0 Using classes in PowerShell makes IT professionals, system administrators, and system engineers to start learning development in WMF. It's time for us to switch back to Windows PowerShell 5.0 because the Class keyword is supported from version 5.0 onwards. Why do we need to write class in PowerShell? Is there any special need? May be we will answer this in this section but this is one reason why I prefer to say that, PowerShell is far more than a scripting language. When the Class keyword was introduced, it mainly focused on creating DSC resources. But using class we can create objects like in any other object oriented programming language. The documentation that reads New-Object is not supported. But it's revised now. Indeed it supports the New-Object. The class we create in Windows PowerShell is a .NET framework type. How to create a PowerShell Class? It's easy, just use the Class keyword! The following steps will help you to create a PowerShell class. 
Create a class named ClassName {}—this is an empty class. Define properties in the class as Class ClassName {$Prop1 , $prop2} Instantiate the class as $var = [ClassName]::New() Now check the output of $var: Class ClassName { $Prop1 $Prop2 } $var = [ClassName]::new() $var Let's now have a look at how to create a class and its advantages. Let us define the properties in class: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' } $var = New-Object Catalog $var The following image shows the output of class, its members, and setting the property value: Now, by changing the property value, we get the following output: Now let's create a method with overloads. In the following example we have created a method name SetInformation that accepts two arguments $mdl and $mfgr and these are of string type. Using $var.SetInformation command with no parenthesis will show the overload definitions of the method. The code is as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation #Output OverloadDefinitions ------------------- void SetInformation(string mdl, string mfgr) Let's set the model and manufacturer using set information, as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation('Surface' , 'Microsoft') $var The output is illustrated in following image: Inside the PowerShell class we can use PowerShell cmdlets as well. The following code is just to give a demo of using PowerShell cmdlet. Class allows us to validate the parameters as well. Let's have a look at the following example: Class Order { [ValidateSet("Red" , "Blue" , "Green")] $color [ValidateSet("Audi")] $Manufacturer Book($Manufacturer , $color) { $this.color = $color $this.Manufacturer = $Manufacturer } } The parameter $Color and $Manufacturer has ValidateSet property and has a set of values. Now let's use New-Object and set the property with an argument which doesn't belong to this set, shown as follows: $var = New-Object Order $var.color = 'Orange' Now, we get the following error: Exception setting "color": "The argument "Orange" does not belong to the set "Red,Blue,Green" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command again." Let's set the argument values correctly to get the result using Book method, as follows: $var = New-Object Order $var.Book('Audi' , 'Red') $var The output is illustrated in the following figure: Constructors A constructor is a special type of method that creates new objects. It has the same name as the class and the return type is void. Multiple constructors are supported, but each one takes different numbers and types of parameters. In the following code, let's see the steps to create a simple constructor in PowerShell that simply creates a user in the active directory. Class ADUser { $identity $Name ADUser($Idenity , $Name) { New-ADUser -SamAccountName $Idenity -Name $Name $this.identity = $Idenity $this.Name = $Name } } $var = [ADUser]::new('Dummy' , 'Test Case User') $var We can also hide the properties in PowerShell class, for example let's create two properties and hide one. 
In theory, it just hides the property but we can use the property as follows: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var The preceding code is illustrated in the following figure: Additionally, we can carry out operations, such as Get and Set, as shown in the following code: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var.Id = '23' $var.Id This returns output as 23. To explore more about class use help about_Classes -Detailed. Parsing structured objects using PowerShell In Windows PowerShell 5.0 a new cmdlet ConvertFrom-String has been introduced and it's available in Microsoft.PowerShell.Utility. Using this command, we can parse the structured objects from any given string content. To see information, use help command with ConvertFrom-String -Detailed command. The help has an incorrect parameter as PropertyName. Copy paste will not work, so use help ConvertFrom-String –Parameter * and read the parameter—it's actually PropertyNames. Now, let's see an example of using ConvertFrom-String. Let us examine a scenario where a team has a custom code which generates log files for daily health check-up reports of their environment. Unfortunately, the tool delivered by the vendor is an EXE file and no source code is available. The log file format is as follows: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" There are many ways to manipulate this record but let's see how PowerShell cmdlet ConvertFrom-String helps us. Using the following code, we will simply extract the Type, EventID, and Server: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server Following figure shows the output of the code we just saw: Okay, what's interesting in this? It's cool because now your output is a PSCustom object and you can manipulate it as required. "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 SharePoint" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server | ? {$_.Type -eq 'Error'} An output in Lync and SharePoint has some error logs that needs to be taken care of on priority. Since, requirement varies you can use this cmdlet as required. ConvertFrom-String has a delimiter parameter, which helps us to manipulate the strings as well. In the following example let's use the –Delimiter parameter that removes white space and returns properties, as follows: "Chen V" | ConvertFrom-String -Delimiter "s" -PropertyNames "FirstName" , "SurName" This results FirstName and SurName – FirstName as Chen and SurName as V In the preceding example, we walked you through using template file to manipulate the string as we need. To do this we need to use the parameter –Template Content. Use help ConvertFrom-String –Parameter Template Content Before we begin we need to create a template file. To do this let's ping a web site. 
Ping www.microsoft.com and the output returned is, as shown: Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from 2.21.47.138: bytes=32 time=37ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=36ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms Now, we have the information in some structure. Let's extract IP and bytes; to do this I replaced the IP and Bytes as {IP*:2.21.47.138} Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=37ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=36ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms ConvertFrom-String has a debug parameter using which we can debug our template file. In the following example let's see the debugging output: ping www.microsoft.com | ConvertFrom-String -TemplateFile C:TempTemplate.txt -Debug As we mentioned earlier PowerShell 5.0 is a Preview release and has few bugs. Let's ignore those for now and focus on the features, which works fine and can be utilized in environment. Exploring package management In this topic, we will walk you through the features of package management, which is another great feature of Windows Management Framework 5.0. This was introduced in Windows 10 and was formerly known as OneGet. Using package management we can automate software discovery, installation of software, and inventorying. Do not think about Software Inventory Logging (SIL) for now. As we know, in Windows Software Installation, technology has its own way of doing installations, for example MSI type, MSU type, and so on. This is a real challenge for IT professionals and developers, to think about the unique automation of software installation or deployment. Now, we can do it using package management module. To begin with, let's see the package management Module using the following code: Get-Module -Name PackageManagement The output is illustrated as follows: Yeah, well we got an output that is a binary module. Okay, how to know the available cmdlets and their usage? PowerShell has the simplest way to do things, as shown in the following code: Get-Module -Name PackageManagement The available cmdlets are shown in the following image: Package Providers are the providers connected to package management (OneGet) and package sources are registered for providers. To view the list of providers and sources we use the following cmdlets: Now, let's have a look at the available packages—in the following example I am selecting the first 20 packages, for easy viewing: Okay, we have 20 packages so using Install-Package cmdlet, let us now install WindowsAzurePowerShell on our Windows 2012 Server. We need to ensure that the source are available prior to any installation. To do this just execute the cmdlet Get-PackageSource. If the chocolatey source didn't come up in the output, simply execute the following code—do not change any values. This code will install chocolatey package manager on your machine. 
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
Once the installation is done, we need to restart PowerShell. We can then find and install the package:
Find-Package -Name WindowsAzurePowerShell | Install-Package -Verbose
The command we just saw shows the confirmation dialog for Chocolatey, which is the package source, as shown in the following figure. Click on Yes to install the package. The following are the steps represented in the figure that we just saw:
1. Installs the prerequisites.
2. Creates a temporary folder.
3. Installation successful.
Windows Server 2012 has .NET 4.5 in the box by default, so the verbose output turned up as False for the .NET 4.5 prerequisite; WindowsAzurePowerShell itself is installed successfully. If you are trying to install the same package and the same version that is already available on your system, the cmdlet will skip the installation:
Find-Package -Name PowerShellHere | Install-Package -Verbose
VERBOSE: Skipping installed package PowerShellHere 0.0.3
Explore all the package management cmdlets and automate your software deployments.
Exploring PowerShellGet
PowerShellGet is a module available in the Windows PowerShell 5.0 preview. Following are a few of the cmdlets it provides:
Search through modules in the gallery with Find-Module
Save modules to your system from the gallery with Save-Module
Install modules from the gallery with Install-Module
Update your modules to the latest version with Update-Module
Add your own custom repository with Register-PSRepository
The following screenshot shows the additional cmdlets that are available:
These allow us to find a module in the PowerShell Gallery and install it in our environment. The PS Gallery is a repository of modules. Using the Find-Module cmdlet, we get a list of the modules available in the PS Gallery. Pipe the required module to Install-Module to install it; alternatively, we can save the module and examine it before installation using the Save-Module cmdlet. The following screenshot illustrates the installation and deletion of the xJEA module.
We can also publish a module to the PS Gallery, which makes it available over the internet to others. The module shown here is not a great one—all it does is get user information from Active Directory for a given account name—but it creates a function and saves it as a PSM1 file in the module folder. In order to publish the module to the PS Gallery, we need to ensure that the module has a manifest. Following are the steps to publish your module:
1. Create a PSM1 file.
2. Create a PSD1 file, which is the module manifest (also known as a data file).
3. Get your NuGet API key from the PS Gallery.
4. Publish your module using the Publish-PSModule cmdlet.
The following figures show the modules that are currently published and the commands used to publish modules.
Summary
In this article, we saw that the Windows PowerShell 5.0 preview has many significant features, such as enhancements in PowerShell DSC, improved and new cmdlets, transcription support in the ISE, and support for creating and using classes. We can create custom DSC resources with easy string manipulation, and a new Network Switch module has been introduced with which we can automate and manage Microsoft-signed network switches.
Resources for Article:
Further resources on this subject:
Windows Phone 8 Applications
The .NET Framework Primer
Unleashing Your Development Skills with PowerShell
Prompt Engineering with Azure Prompt Flow

Shankar Narayanan SGS
06 May 2024
10 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!
Introduction
The ability to generate relevant and creative prompts is an essential part of working with natural language processing systems, and it matters more than ever as the artificial intelligence landscape evolves. Microsoft's Azure Prompt Flow addresses this need, giving data scientists and developers a way to engineer prompts effectively. Here, let us explore the nuances of Azure Prompt Flow while delving deeper into prompt engineering.
Significance of prompt engineering
Prompt engineering is the practice of constructing prompts that guide machine learning models effectively. It involves formulating contextually relevant and specific questions or statements that elicit the desired responses from artificial intelligence models. Azure Prompt Flow is a sophisticated tool from Microsoft Azure that simplifies this intricate process, enabling developers to create prompts that produce meaningful and accurate outcomes.
Getting started with Azure Prompt Flow
Before exploring the practical applications of Azure Prompt Flow, it is necessary to understand a few of its essential components. At its core, prompt flow utilizes the GPT 3.5 architecture to generate relevant responses to prompts, and the integration with Azure provides a secure and seamless environment for prompt engineering.
Let us consider a practical example of a chatbot application:
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Set up Azure Text Analytics client
key = "YOUR_AZURE_TEXT_ANALYTICS_KEY"
endpoint = "YOUR_AZURE_TEXT_ANALYTICS_ENDPOINT"
credential = AzureKeyCredential(key)
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=credential)

# User input
user_input = "Tell me a joke."

# Generate a prompt
prompt = f"User: {user_input}\nChatbot:"

# Get the chatbot's response (analyze_sentiment expects a list of documents)
response = text_analytics_client.analyze_sentiment([prompt])

# Output the response
print(f"Chatbot: {response[0].sentiment}")
In this example, the user inputs a request, a prompt is constructed for the chatbot, and a sentiment analysis response is generated. Here is the output:
Chatbot: Positive
Tuning prompts using Azure Prompt Flow
Crafting good prompts can be a challenging task. With the concept of variants, the user can test the behavior of the model under various conditions. For example, if the user wants to create a chatbot using Azure Prompt Flow, prompt tuning might help it respond creatively to queries about movies:
User: "Tell me about your favorite movie."
Chatbot: "Certainly! One of my favorite movies is 'Inception.' Directed by Christopher Nolan, it's a mind-bending sci-fi thriller that explores the depths of the human mind."
Python code:
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Set up Azure Text Analytics client
key = "YOUR_AZURE_TEXT_ANALYTICS_KEY"
endpoint = "YOUR_AZURE_TEXT_ANALYTICS_ENDPOINT"
credential = AzureKeyCredential(key)
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=credential)

# User input
user_input = "Tell me about your favorite movie."

# Generate a creative prompt using Azure Promptflow
prompt = f"User: {user_input}\nChatbot: Certainly! One of my favorite movies is 'Inception.' Directed by Christopher Nolan, it's a mind-bending sci-fi thriller that explores the depths of the human mind."

# Get the chatbot's response (analyze_sentiment expects a list of documents)
response = text_analytics_client.analyze_sentiment([prompt])

# Output the response
print(f"Chatbot: {response[0].sentiment}")
In this example, Azure Prompt Flow is used to create prompts tailored to specific user queries, providing creative and contextually relevant responses. The analyze_sentiment function from the Azure Text Analytics client is used to assess the sentiment of the generated prompts. Replace "YOUR_AZURE_TEXT_ANALYTICS_KEY" and "YOUR_AZURE_TEXT_ANALYTICS_ENDPOINT" with your actual Azure Text Analytics API key and endpoint.
Here are a few examples:
URL: https://music.apple.com/us/app/apple-music/id1108187390
Text Content: Apple Music is a comprehensive music streaming app that boasts an extensive library of songs, albums, and playlists. Users can enjoy curated playlists, radio shows, and exclusive content from their favorite artists. Apple Music allows offline downloads and offers a family plan for multiple users. It also integrates with the user's existing music library, making it seamless to access purchased and uploaded music.
OUTPUT: {"category": "App", "evidence": "Both"}
URL: https://www.youtube.com/user/premierleague
Text Content: Premier League Pass, in collaboration with the English Premier League, delivers live football matches, highlights, and exclusive behind-the-scenes content on YouTube. Football aficionados can stay updated with their favorite teams and players through this official channel. Subscribing to Premier League Pass on YouTube ensures fans never miss a moment from the most exciting football league in the world.
OUTPUT: {"category": "Channel", "evidence": "URL"}
URL: https://arxiv.org/abs/2305.06858
Text Content: This research paper explores the realm of image captioning, where advanced algorithms generate descriptive captions for images. The study delves into techniques that combine computer vision and natural language processing to achieve accurate and contextually relevant image captions. The paper discusses various models, evaluates their performance, and presents findings that contribute to the field of image captioning technology.
OUTPUT: {"category": "Academic", "evidence": "Text content"}
URL: https://exampleconstructionsite.com/
Text Content: This website is currently under construction. Please check back later for updates and exciting content.
OUTPUT: {"category": "None", "evidence": "None"}
For a given URL: {{url}}, and text content: {{text_content}}:
Classified Category: Travel
Evidence: The text contains information about popular tourist destinations, travel itineraries, and hotel recommendations.
After summarizing, here is the final prompt flow with two variants for the summarize_text_content node.
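To make the {{url}} / {{text_content}} template above a little more concrete, here is a minimal, illustrative Python sketch of how such a classification prompt could be assembled and handed to a model. The render_prompt and send_to_llm names are hypothetical placeholders for this article, not part of the Azure Prompt Flow SDK; in a real flow, the template and its variants live in the flow definition itself.
# A minimal sketch of building the URL-classification prompt shown above.
# The template mirrors the {{url}} / {{text_content}} placeholders from the article;
# send_to_llm is a hypothetical stand-in for whatever model connection your flow uses.

CLASSIFY_TEMPLATE = (
    "For a given URL: {url}, and text content: {text_content}.\n"
    "Classify the page as App, Channel, Academic, Travel, or None, and cite the evidence.\n"
    'Answer as JSON, for example: {{"category": "App", "evidence": "Both"}}'
)

def render_prompt(url: str, text_content: str) -> str:
    # Fill the placeholders to produce the prompt a variant would send to the model
    return CLASSIFY_TEMPLATE.format(url=url, text_content=text_content)

def send_to_llm(prompt: str) -> str:
    # Hypothetical stub: replace with your actual model/connection call
    return '{"category": "Travel", "evidence": "Text content"}'

if __name__ == "__main__":
    prompt = render_prompt(
        "https://example-travel-blog.com/",
        "Popular tourist destinations, travel itineraries, and hotel recommendations.",
    )
    print(prompt)
    print(send_to_llm(prompt))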
Benefits of using Azure ML prompt flow
Apart from offering a wide range of benefits, Azure ML prompt flow helps users make the transition from ideation to experimentation, ultimately resulting in production-ready LLM-based applications.
Prompt engineering agility
Azure Prompt Flow offers a visual representation of the flow's structure, allowing users to understand and navigate their projects, and it provides a notebook-like coding experience for debugging and efficient flow development. Users can also create and compare multiple prompt variants, which facilitates an iterative refinement process.
Enterprise readiness
Prompt flow streamlines the entire prompt engineering process and leverages robust enterprise-readiness solutions, offering a secure, reliable, and scalable foundation for experimentation and development. It also supports team collaboration, where multiple users can work together, share knowledge, and maintain version control.
Application development
The well-defined process of Azure Prompt Flow facilitates the seamless development of AI applications, guiding the user through the stages of developing, testing, tuning, and deploying flows, and ultimately resulting in fully fledged AI applications. Following this methodical, structured approach empowers users to develop, fine-tune, test rigorously, and deploy with confidence.
Real-world applications of Azure Prompt Flow
Content creation
One application of Azure Prompt Flow lies in content creation. Content creators can generate outlines, creative ideas, and even entire paragraphs by crafting prompts tailored to specific topics, streamlining the content creation process and making it more efficient.
Language translation
Developers are leveraging Azure Prompt Flow to build language translation applications. By constructing prompts around source-language input, the system can translate it and return accurate output in the desired language, helping break down language barriers in a globalized world.
Customer support chatbots
Integrating Azure Prompt Flow into customer support chatbots can enhance the user experience. Prompt engineering techniques help ensure that queries are accurately understood, resulting in relevant and precise responses, reduced response times, and improved customer satisfaction.
Azure Prompt Flow simplifies prompt engineering
Prompt engineering is an iterative and challenging process. Azure Prompt Flow simplifies the development, comparison, and evaluation of prompts, making it easier to find the best prompt for a use case. For example, developing a chatbot that utilizes large language models, including GPT 3.5, can help companies provide personalized product recommendations based on customer input. Here, Azure Prompt Flow allows users to create, evaluate, and deploy flows built on machine learning models, speeding up the whole process of developing and deploying artificial intelligence solutions. At the same time, it also allows the user to create connections to large language models.
Such models include GPT 3.5 and Azure OpenAI. Users can use these models for different purposes, including chat completion or creating embeddings.
Designing and modifying prompts
Designing and modifying prompts for effective use is crucial, especially when using them with large language models. Azure Prompt Flow enables users to create, test, and deploy various prompt versions, for example for recommendation scenarios. To use a large language model effectively, especially while dealing with multiple prompts, it is imperative to design and modify them appropriately for better results. Once you have created the prompts, it is time to evaluate and test them in multiple scenarios. For instance, if you are creating prompts for a product company, you must explain how the prompts and their flow will handle user queries, and you may also need custom coding and deployment of end-to-end solutions with the help of Azure's prompt flow feature.
Conclusion
With powerful prompt engineering capabilities, Azure Prompt Flow enables developers to construct contextually relevant prompts, enhancing the efficiency and accuracy of AI applications across various domains. The potential of prompt engineering makes the future of AI development promising, with Azure AI helping to lead the way.
Author Bio
Shankar Narayanan (aka Shanky) has worked on numerous different cloud and emerging technologies like Azure, AWS, Google Cloud, IoT, Industry 4.0, and DevOps, to name a few. He has led the architecture design and implementation for many enterprise customers and helped enable them to break the barrier and take the first step towards a long and successful cloud journey. He was one of the early adopters of Microsoft Azure and Snowflake Data Cloud. Shanky likes to contribute back to the community. He contributes to open source, is a frequently sought-after speaker, and has delivered numerous talks on Microsoft Technologies and Snowflake. He is recognized as a Data Superhero by Snowflake and an SAP Community Topic Leader by SAP.
Lambda Functions

Packt
05 Jul 2017
16 min read
In this article, by Udita Gupta and Yohan Wadia, the authors of the book Mastering AWS Lambda, we are going to take things a step further by learning the anatomy of a typical Lambda function and also how to actually write your own functions. We will cover the programming model for a Lambda function using simple functions as examples, the use of logs, and exceptions and error handling.
(For more resources related to this topic, see here.)
The Lambda programming model
Certain applications can be broken down into one or more simple nuggets of code called functions and uploaded to AWS Lambda for execution. Lambda then takes care of provisioning the necessary resources to run your function, along with other management activities such as auto-scaling of your functions, their availability, and so on. So what exactly are we supposed to do in all this? A developer basically has three tasks to perform when it comes to working with Lambda:
Writing the code
Packaging it for deployment
Finally, monitoring its execution and fine-tuning
In this section, we are going to explore the different components that actually make up a Lambda function by understanding what AWS calls a programming model or a programming pattern. As of date, AWS officially supports Node.js, Java, Python, and C# as the programming languages for writing Lambda functions, with each language following a generic programming pattern that comprises certain concepts, which we will see in the following sections.
Handler
The handler function is basically the function that Lambda calls first for execution. A handler function is capable of processing incoming event data that is passed to it as well as invoking other functions or methods from your code. We will be concentrating a lot of our code and development on Node.js; however, the programming model remains more or less the same for the other supported languages as well. A skeleton structure of a handler function is shown as follows:
exports.myHandler = function(event, context, callback) {
  // Your code goes here.
  callback();
}
Here, myHandler is the name of your handler function. By exporting it, we make sure that Lambda knows which function it has to invoke first. The other parameters that are passed with the handler function are:
event: Lambda uses this parameter to pass any event-related data back to the handler.
context: Lambda again uses this parameter to provide the handler with the function's runtime information, such as the name of the function, the time it took to execute, and so on.
callback: This parameter is used to return any data back to its caller. The callback parameter is the only optional parameter that gets passed when writing handlers. If not specified, AWS Lambda will call it implicitly and return the value as null. The callback parameter also supports two optional parameters in the form of error and result, where error will return any of the function's error information back to the caller, while result will return any result of your function's successful execution. Here are a few simple examples of invoking callbacks in your handler:
callback()
callback(null, 'Hello from Lambda')
callback(error)
The callback parameter is supported only in Node.js runtime v4.3. You will have to use the context methods in case your code supports the earlier Node.js runtime (v0.10.42).
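As noted above, the programming model remains much the same for the other supported languages. Purely as an illustration (this example is not from the book), a minimal equivalent handler in the Python runtime might look like the following; the function name and the key field are arbitrary choices:
# Minimal Python-runtime counterpart of the Node.js handler skeleton above.
# There is no callback parameter in Python: the handler's return value is
# what Lambda hands back to a synchronous (RequestResponse) caller.
import json

def my_handler(event, context):
    # 'event' carries the invocation payload; 'context' carries runtime information
    value = event.get("key", "no value supplied")
    print("value =", value)                        # print output lands in CloudWatch Logs
    print("functionName =", context.function_name)
    return {"statusCode": 200, "body": json.dumps("Yippee! Something worked!")}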
Let us try out a simple handler example with some code:
exports.myHandler = function(event, context, callback) {
  console.log("value = " + event.key);
  console.log("functionName = ", context.functionName);
  callback(null, "Yippee! Something worked!");
};
This code snippet will print the value of an event (key) that we will pass to the function, print the function's name as part of the context object, and finally print the success message Yippee! Something worked! if all goes well:
1. Login to the AWS Management Console and select AWS Lambda from the dashboard.
2. Select the Create a Lambda function option.
3. From the Select blueprint page, select the Blank Function blueprint.
4. Since we are not configuring any triggers for now, simply click on Next at the Configure triggers page.
5. Provide a suitable Name and Description for your Lambda function and paste the preceding code snippet in the inline code editor as shown:
6. Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler as shown. The handler name must match the handler name in your function for it to work. Remember also to select the basic-lambda-role for your function's execution before selecting the Next button:
7. In the Review page, select the Create function option.
8. With your function now created, select the Test option to pass the sample event to our function. In the Sample event, pass the following event and select the Save and test option:
{
  "key": "My Printed Value!!"
}
With your code execution completed, you should get a similar execution result as shown in the following figure. The important things to note here are the values for the event, context, and callback parameters. You can see the callback message being returned back to the caller as the function executed successfully. The other event and context object values are printed in the Log output section, as highlighted in the following figure:
In case you end up with any errors, make sure the handler function name matches the handler name that you passed during the function's configuration.
Context object
The context object is a really useful utility when it comes to obtaining runtime information about your function. The context object can provide information such as the executing function's name, the time remaining before Lambda terminates your function's execution, the log name and stream associated with your function, and much more. The context object also comes with its own methods that you can call to correctly terminate your function's executions, such as context.succeed(), context.fail(), context.done(), and so on. However, post April 2016, Lambda transitioned the Node.js runtime from v0.10.42 to v4.3, which still supports these methods but encourages the use of callback() for performing the same actions. Here are some of the commonly used context object methods and properties:
getRemainingTimeInMillis(): This method returns the number of milliseconds left for execution before Lambda terminates your function. This comes in really handy when you want to perform some corrective actions before your function exits or gets timed out.
callbackWaitsForEmptyEventLoop: This property is used to override the default behaviour of a callback() function, which is to wait till the entire event loop is processed and only then return back to the caller.
If set to false, this property causes the callback() function to stop any further processing in the event loop, even if there are other tasks to be performed. The default value is set to true.
functionName: This property returns the name of the executing Lambda function.
functionVersion: The current version of the executing Lambda function.
memoryLimitInMB: The amount of memory, in MB, configured for your Lambda function.
logGroupName: This property returns the name of the CloudWatch Log Group that stores the function's execution logs.
logStreamName: This property returns the name of the CloudWatch Log Stream that stores the function's execution logs.
awsRequestID: This property returns the request ID associated with that particular function's execution.
If you are using Lambda functions as mobile backend processing services, you can extract additional information about your mobile application using the context object's identity and clientContext properties. These are populated when the function is invoked using the AWS Mobile SDK. To learn more, click here http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html.
Let us look at a simple example to understand the context object a bit better. In this example, we use the callbackWaitsForEmptyEventLoop property and demonstrate how it works by setting the event's contextCallbackOption value to either yes or no on invocation:
1. Login to the AWS Management Console and select AWS Lambda from the dashboard.
2. Select the Create a Lambda function option.
3. From the Select blueprint page, select the Blank Function blueprint.
4. Since we are not configuring any triggers for now, simply click on Next at the Configure triggers page.
5. Provide a suitable Name and Description for your Lambda function and paste the following code in the inline code editor:
exports.myHandler = (event, context, callback) => {
    console.log('remaining time =', context.getRemainingTimeInMillis());
    console.log('functionName =', context.functionName);
    console.log('AWSrequestID =', context.awsRequestId);
    console.log('logGroupName =', context.logGroupName);
    console.log('logStreamName =', context.logStreamName);
    switch (event.contextCallbackOption) {
        case "no":
            setTimeout(function(){
                console.log("I am back from my timeout of 30 seconds!!");
            }, 30000); // 30 seconds break
            break;
        case "yes":
            console.log("The callback won't wait for the setTimeout() \n if the callbackWaitsForEmptyEventLoop is set to false");
            setTimeout(function(){
                console.log("I am back from my timeout of 30 seconds!!");
            }, 30000); // 30 seconds break
            context.callbackWaitsForEmptyEventLoop = false;
            break;
        default:
            console.log("The Default code block");
    }
    callback(null, 'Hello from Lambda');
};
6. Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler as shown. The handler name must match the handler name in your function for it to work. Remember also to select the basic-lambda-role for your function's execution.
7. The final change that we will make is to the Timeout value of our function, from the default 3 seconds to 1 minute, specifically for this example. Click Next to continue.
8. In the Review page, select the Create function option.
9. With your function now created, select the Test option to pass the sample event to our function.
10. In the Sample event, pass the following event and select the Save and test option.
You should see a similar output in the Log output window as shown. With the contextCallbackOption set to yes, the function does not wait for the 30-second setTimeout() function and will exit; however, it prints the function's runtime information such as the remaining execution time, the function name, and so on. Now set the contextCallbackOption to no, re-run the test, and verify the output. This time, you can see the setTimeout() function getting called, and you can verify the same by comparing the remaining time left for execution with the earlier test run.
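Purely for comparison (this is not part of the book's example), the same runtime details are exposed as attributes on the context object in the Python runtime; a rough sketch, with the log wording being my own:
# Illustrative Python-runtime counterpart of the context example above.
def my_handler(event, context):
    print("remaining time =", context.get_remaining_time_in_millis())
    print("functionName =", context.function_name)
    print("functionVersion =", context.function_version)
    print("memoryLimitInMB =", context.memory_limit_in_mb)
    print("AWSrequestID =", context.aws_request_id)
    print("logGroupName =", context.log_group_name)
    print("logStreamName =", context.log_stream_name)
    return "Hello from Lambda"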
Logging
You can always log your code's execution and activities using simple log statements. The following statements are supported for logging with the Node.js runtime:
console.log()
console.error()
console.warn()
console.info()
The logs can be viewed using both the Management Console as well as the CLI. Let us quickly explore both options.
Using the Management Console
We have already been using Lambda's dashboard to view the function's execution logs; however, those logs are only for the current execution. To view your function's logs from the past, you need to view them using the CloudWatch Logs section. To do so, search for and select the CloudWatch option from the AWS Management Console. Next, select the Logs option to display the function's logs, as shown in the following figure.
You can use the Filter option to filter out your Lambda logs by typing in the log group name prefix as /aws/lambda.
Select any of the present Log Groups and its corresponding Log Stream Name to view the complete and detailed execution logs of your function. If you do not see any Lambda logs listed here, it is most likely due to your Lambda execution role. Make sure your role has the necessary access rights to create the log group and log stream, along with the capability to put log events.
Using the CLI
The CLI provides two ways in which you can view your function's execution logs.
The first is using the Lambda function's invoke command itself. The invoke command, when used with the --log-type parameter, will print the latest 4 KB of log data that is written to CloudWatch Logs. To do so, first list out all available functions in your current region using the following command:
# aws lambda list-functions
Next, pick a Lambda function that you wish to invoke and substitute that function's name and payload in the following example snippet:
# aws lambda invoke --invocation-type RequestResponse --function-name myFirstFunction --log-type Tail --payload '{"key1":"Lambda","key2":"is","key3":"awesome!"}' output.txt
The second way is by using a combination of the context object and the CloudWatch CLI. You can obtain your function's log group name and log stream name using context.logGroupName and context.logStreamName. Next, substitute the data gathered from the output of these parameters in the following command:
# aws logs get-log-events --log-group-name "/aws/lambda/myFirstFunction" --log-stream-name "2017/02/07/[$LATEST]1ae6ac9c77384794a3202802c683179a"
If you run into the error The specified log stream does not exist in spite of providing correct values for the log group name and stream name, make sure to add the "\" escape character before $LATEST, as in [\$LATEST].
Let us look at a few options that you can additionally pass with the get-log-events command:
--start-time: The start of the log's time range. All times are in UTC.
--end-time: The end of the log's time range. All times are in UTC.
--next-token: The token for the next set of items to return. (You received this token from a previous call.)
--limit: Used to set the maximum number of log events returned. By default, the limit is 10,000 log events, or as many events as fit in 1 MB of response data.
Alternatively, if you don't wish to use the context object in your code, you can still find the log group name and log stream name by using a combination of the following commands:
# aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/"
The describe-log-groups command will list all the log groups that are prefixed with /aws/lambda. Make a note of your function's log group name from this output. Next, execute the following command to list your log group's associated log stream names:
# aws logs describe-log-streams --log-group-name "/aws/lambda/myFirstFunction"
Make a note of the log stream name and substitute the same in the next and final command to view your log events for that particular log stream name:
# aws logs get-log-events --log-group-name "/aws/lambda/myFirstFunction" --log-stream-name "2017/02/07/[$LATEST]1ae6ac9c77384794a3202802c683179a"
Once again, make sure to add the backslash ("\") in [\$LATEST] to avoid the The specified log stream does not exist error. With the logging done, let's move on to the next piece of the programming model: exceptions.
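If you would rather script this log lookup than run the CLI commands by hand, the same calls are available through boto3, AWS's Python SDK. The following is a small illustrative sketch, not from the book; it reuses the myFirstFunction example above and assumes your credentials and region are already configured:
# Sketch: fetch the latest log events for a function's most recent log stream
# using boto3 instead of the aws CLI.
import boto3

logs = boto3.client("logs")
log_group = "/aws/lambda/myFirstFunction"   # same example function as above

# Most recently active stream first
streams = logs.describe_log_streams(
    logGroupName=log_group,
    orderBy="LastEventTime",
    descending=True,
    limit=1,
)["logStreams"]

if streams:
    events = logs.get_log_events(
        logGroupName=log_group,
        logStreamName=streams[0]["logStreamName"],
        limit=50,
    )["events"]
    for e in events:
        print(e["message"].rstrip())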
Exceptions and error handling
Functions have the ability to notify AWS Lambda if they fail to execute correctly. This is primarily done by the function passing the error object to Lambda, which converts it to a string and returns it to the user as an error message. The error messages that are returned also depend on the invocation type of the function; for example, if your function performs a synchronous execution (the RequestResponse invocation type), then the error is returned back to the user and displayed on the Management Console as well as in the CloudWatch Logs. For any asynchronous executions (the Event invocation type), Lambda will not return anything. Instead, it logs the error messages to CloudWatch Logs.
Let us examine a function's error and exception handling capabilities with a simple example of a calculator function that accepts two numbers and an operand as the test event during invocation:
1. Login to the AWS Management Console and select AWS Lambda from the dashboard.
2. Select the Create a Lambda function option.
3. From the Select blueprint page, select the Blank Function blueprint.
4. Since we are not configuring any triggers for now, simply click on Next at the Configure triggers page.
5. Provide a suitable Name and Description for your Lambda function and paste the following code in the inline code editor:
exports.myHandler = (event, context, callback) => {
    console.log("Hello, Starting the " + context.functionName + " Lambda Function");
    console.log("The event we pass will have two numbers and an operand value");
    // operand can be +, -, /, *, add, sub, mul, div
    console.log('Received event:', JSON.stringify(event, null, 2));
    var error, result;
    if (isNaN(event.num1) || isNaN(event.num2)) {
        console.error("Invalid Numbers");          // different logging
        error = new Error("Invalid Numbers!");     // Exception Handling
        callback(error);
    }
    switch(event.operand) {
        case "+":
        case "add":
            result = event.num1 + event.num2;
            break;
        case "-":
        case "sub":
            result = event.num1 - event.num2;
            break;
        case "*":
        case "mul":
            result = event.num1 * event.num2;
            break;
        case "/":
        case "div":
            if (event.num2 === 0) {
                console.error("The divisor cannot be 0");
                error = new Error("The divisor cannot be 0");
                callback(error, null);
            } else {
                result = event.num1 / event.num2;
            }
            break;
        default:
            callback("Invalid Operand");
            break;
    }
    console.log("The Result is: " + result);
    callback(null, result);
};
6. Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler. The handler name must match the handler name in your function for it to work. Remember also to select the basic-lambda-role for your function's execution. Leave the rest of the values at their defaults and click Next to continue.
7. In the Review page, select the Create function option.
8. With your function now created, select the Test option to pass the sample event to our function. In the Sample event, pass the following event and select the Save and test option:
{
  "num1": 3,
  "num2": 0,
  "operand": "div"
}
You should see a similar output in the Log output window as shown.
So what just happened there? Well, first, we can print simple, user-friendly error messages with the help of the console.error() statement. Additionally, we can also print the stackTrace array of the error by passing the error in the callback(), as shown:
error = new Error("The divisor cannot be 0");
callback(error, null);
You can also view the custom error message and the stackTrace JSON array both from the Lambda dashboard as well as from the CloudWatch Logs section. Next, give this code a couple of tries with some different permutations and combinations of events and check out the results. You can even write your own custom error messages and error handlers that perform some additional task when an error is returned by the function.
With this, we come to the end of a function's generic programming model and its components.
Summary
We dived deep into the Lambda programming model and understood each of its subcomponents (handlers, context objects, errors, and exceptions) with easy-to-follow examples.
How to manage complex applications using Kubernetes-based Helm tool [Tutorial]

Savia Lobo
16 Jul 2019
16 min read
Helm is a popular tool in the Kubernetes ecosystem that gives us a way of building packages (known as charts) of related Kubernetes objects that can be deployed in a cohesive way to a cluster. It also allows us to parameterize these packages, so they can be reused in different contexts and deployed to the varying environments that the services they provide might be needed in. This article is an excerpt taken from the book Kubernetes on AWS written by Ed Robinson. In this book, you will discover how to utilize the power of Kubernetes to manage and update your applications. In this article, you will learn how to manage complex applications using the Kubernetes-based Helm tool. You will start by learning how to install Helm and, later on, how to configure and package Helm charts.
Like Kubernetes, development of Helm is overseen by the Cloud Native Computing Foundation. As well as Helm (the package manager), the community maintains a repository of standard charts for a wide range of open source software you can install and run on your cluster. From the Jenkins CI server to MySQL or Prometheus, it's simple to install and run complex deployments involving many underlying Kubernetes resources with Helm.
Installing Helm
If you have already set up your own Kubernetes cluster and have correctly configured kubectl on your machine, then it is simple to install Helm.
On macOS
On macOS, the simplest way to install the Helm client is with Homebrew:
$ brew install kubernetes-helm
On Linux and Windows
Every release of Helm includes prebuilt binaries for Linux, Windows, and macOS. Visit https://github.com/kubernetes/helm/releases to download the version you need for your platform. To install the client, simply unpack and copy the binary onto your path. For example, on a Linux machine you might do the following:
$ tar -zxvf helm-v2.7.2-linux-amd64.tar.gz
$ mv linux-amd64/helm /usr/local/bin/helm
Installing Tiller
Once you have the Helm CLI tool installed on your machine, you can go about installing Helm's server-side component, Tiller. Helm uses the same configuration as kubectl, so start by checking which context you will be installing Tiller onto:
$ kubectl config current-context
minikube
Here, we will be installing Tiller into the cluster referenced by the Minikube context. In this case, this is exactly what we want. If your kubectl is currently pointing at another cluster, you can quickly switch to the context you want to use like this:
$ kubectl config use-context minikube
If you are still not sure that you are using the correct context, take a quick look at the full config and check that the cluster server field is correct:
$ kubectl config view --minify=true
The minify flag removes any config not referenced by the current context. Once you are happy that the cluster that kubectl is connecting to is the correct one, we can set up Helm's local environment and install Tiller on to your cluster:
$ helm init
$HELM_HOME has been configured at /Users/edwardrobinson/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
We can use kubectl to check that Tiller is indeed running on our cluster:
$ kubectl -n kube-system get deploy -l app=helm
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
tiller-deploy   1        1        1           1          3m
Once we have verified that Tiller is correctly running on the cluster, let's use the version command.
This will validate that we are able to connect correctly to the API of the Tiller server and return the version number of both the CLI and the Tiller server: $ helm version Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"} Installing a chart Let's start by installing an application by using one of the charts provided by the community. You can discover applications that the community has produced Helm charts for at https://hub.kubeapps.com/. As well as making it simple to deploy a wide range of applications to your Kubernetes cluster, it's a great resource for learning some of the best practices the community uses when packaging applications for Helm. Helm charts can be stored in a repository, so it is simple to install them by name. By default, Helm is already configured to use one remote repository called Stable. This makes it simple for us to try out some commonly used applications as soon as Helm is installed. Before you install a chart, you will need to know three things: The name of the chart you want to install The name you will give to this release (If you omit this, Helm will create a random name for this release) The namespace on the cluster you want to install the chart into (If you omit this, Helm will use the default namespace) Helm calls each distinct installation of a particular chart a release. Each release has a unique name that is used if you later want to update, upgrade, or even remove a release from your cluster. Being able to install multiple instances of a chart onto a single cluster makes Helm a little bit different from how we think about traditional package managers that are tied to a single machine, and typically only allow one installation of a particular package at once. But once you have got used to the terminology, it is very simple to understand: A chart is the package that contains all the information about how to install a particular application or tool to the cluster. You can think of it as a template that can be reused to create many different instances or releases of the packaged application or tool. A release is a named installation of a chart to a particular cluster. By referring to a release by name, Helm can make upgrades to a particular release, updating the version of the installed tool, or making configuration changes. A repository is an HTTP server storing charts along with an index file. When configured with the location of a repository, the Helm client can install a chart from that repository by downloading it and then making a new release. Before you can install a chart onto your cluster, you need to make sure that Helm knows about the repository that you want to use. You can list the repositories that are currently in use by running the helm repo list command: $ helm repo list NAME URL stable https://kubernetes-charts.storage.googleapis.com local http://127.0.0.1:8879/charts By default, Helm is configured with a repository named stable pointing at the community chart repository and local repository that points at a local address for testing your own local repository. (You need to be running helm serve for this.) Adding a Helm repository to this list is simple with the helm repo add command. 
You can add my Helm repository that contains some example applications related to this book by running the following command: $ helm repo add errm https://charts.errm.co.uk "errm" has been added to your repositories In order to pull the latest chart information from the configured repositories, you can run the following command: $ helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "errm" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. Happy Helming! Let's start with one of the simplest applications available in my Helm repository, kubeslate. This provides some very basic information about your cluster, such as the version of Kubernetes you are running and the number of pods, deployments, and services in your cluster. We are going to start with this application, since it is very simple and doesn't require any special configuration to run on Minikube, or indeed any other cluster. Installing a chart from a repository on your cluster couldn't be simpler: $ helm install --name=my-slate errm/kubeslate You should see a lot of output from the helm command. Firstly, you will see some metadata about the release, such as its name, status, and namespace: NAME: my-slate LAST DEPLOYED: Mon Mar 26 21:55:39 2018 NAMESPACE: default STATUS: DEPLOYED Next, you should see some information about the resources that Helm has instructed Kubernetes to create on the cluster. As you can see, a single service and a single deployment have been created: RESOURCES: ==> v1/Service NAME TYPE CLUSTER-IP PORT(S) AGE my-slate-kubeslate ClusterIP 10.100.209.48 80/TCP 0s ==> v1/Deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE my-slate-kubeslate 2 0 0 0 0s ==> v1/Pod(related) NAME READY STATUS AGE my-slate-kubeslate-77bd7479cf-gckf8 0/1 ContainerCreating 0s my-slate-kubeslate-77bd7479cf-vvlnz 0/1 ContainerCreating 0s Finally, there is a section with some notes that have been provided by the chart's author to give us some information about how to start using the application: Notes: To access kubeslate. First start the kubectl proxy: kubectl proxy Now open the following URL in your browser: http://localhost:8001/api/v1/namespaces/default/services/my-slate-kubeslate:http/proxy Please try reloading the page if you see ServiceUnavailable / no endpoints available for service, as pod creation might take a few moments. Try following these instructions yourself and open Kubeslate in your browser: Kubeslate deployed with Helm Configuring a chart When you use Helm to make a release of a chart, there are certain attributes that you might need to change or configuration you might need to provide. Luckily, Helm provides a standard way for users of a chart to override some or all of the configuration values. In this section, we are going to look at how, as the user of a chart, you might go about supplying configuration to Helm. Later in the chapter, we are going to look at how you can create your own charts and use the configuration passed in to allow your chart to be customized. When we invoke helm install, there are two ways we can provide configuration values: passing them as command-line arguments, or by providing a configuration file. These configuration values are merged with the default values provided by a chart. 
This allows a chart author to provide a default configuration to allow users to get up and running quickly, but still allow users to tweak important settings, or enable advanced features. Providing a single value to Helm on the command line is achieved by using the set flag. The kubeslate chart allows us to specify additional labels for the pod(s) that it launches using the podLabels variable. Let's make a new release of the kubeslate chart, and then use the podLabels variable to add an additional hello label with the value world: $ helm install --name labeled-slate --set podLabels.hello=world errm/kubeslate Once you have run this command, you should be able to prove that the extra variable you passed to Helm did indeed result in the pods launched by Helm having the correct label. Using the kubectl get pods command with a label selector for the label we applied using Helm should return the pods that have just been launched with Helm: $ kubectl get pods -l hello=world NAME READY STATUS labeled-slate-kubeslate-5b75b58cb-7jpfk 1/1 Running labeled-slate-kubeslate-5b75b58cb-hcpgj 1/1 Running As well as being able to pass a configuration to Helm when we create a new release, it is also possible to update the configuration in a pre-existing release using the upgrade command. When we use Helm to update a configuration, the process is much the same as when we updated deployment resources in the last chapter, and a lot of those considerations still apply if we want to avoid downtime in our services. For example, by launching multiple replicas of a service, we can avoid downtime, as a new version of a deployment configuration is rolled out. Let's also upgrade our original kubeslate release to include the same hello: world pod label that we applied to the second release. As you can see, the structure of the upgrade command is quite similar to the install command. But rather than specifying the name of the release with the --name flag, we pass it as the first argument. This is because when we install a chart to the cluster, the name of the release is optional. If we omit it, Helm will create a random name for the release. However, when performing an upgrade, we need to target a pre-existing release to upgrade, and thus this argument is mandatory: $ helm upgrade my-slate --set podLabels.hello=world errm/kubeslate If you now run helm ls, you should see that the release named my-slate has been upgraded to Revision 2. You can test that the deployment managed by this release has been upgraded to include this pod label by repeating our kubectl get command: $ kubectl get pods -l hello=world NAME READY STATUS labeled-slate-kubeslate-5b75b58cb-7jpfk 1/1 Running labeled-slate-kubeslate-5b75b58cb-hcpgj 1/1 Running my-slate-kubeslate-5c8c4bc77-4g4l4 1/1 Running my-slate-kubeslate-5c8c4bc77-7pdtf 1/1 Running We can now see that four pods, two from each of our releases, now match the label selector we passed to kubectl get. Passing variables on the command line with the set flag is convenient when we just want to provide values for a few variables. But when we want to pass more complex configurations, it can be simpler to provide the values as a file. Let's prepare a configuration file to apply several labels to our kubeslate pods: values.yml podLabels: hello: world access: internal users: admin We can then use the helm command to apply this configuration file to our release: $ helm upgrade labeled-slate -f values.yml errm/kubeslate To learn how to create your own charts, head over to our book. 
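As an aside (this is not from the book), if you generate values files programmatically—say, one per environment in a deployment script—a small Python sketch like the following can write the values.yml shown above and hand it to Helm. The PyYAML dependency and the reuse of the same release and chart names are assumptions for illustration:
# Sketch: render a values file and apply it with `helm upgrade -f`.
# Assumes PyYAML is installed and the helm binary is on PATH.
import subprocess
import yaml

values = {
    "podLabels": {
        "hello": "world",
        "access": "internal",
        "users": "admin",
    }
}

with open("values.yml", "w") as f:
    yaml.safe_dump(values, f, default_flow_style=False)

# Same command as in the text, just driven from Python
subprocess.run(
    ["helm", "upgrade", "labeled-slate", "-f", "values.yml", "errm/kubeslate"],
    check=True,
)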
Packaging Helm charts
While we are developing our chart, it is simple to use the Helm CLI to deploy our chart straight from the local filesystem. However, Helm also allows you to create your own repository in order to share your charts. A Helm repository is a collection of packaged Helm charts, plus an index, stored in a particular directory structure on a standard HTTP web server.
Once you are happy with your chart, you will want to package it so it is ready to distribute in a Helm repository. This is simple to do with the helm package command. When you start to distribute your charts with a repository, versioning becomes important. The version number of a chart in a Helm repository needs to follow the SemVer 2 guidelines.
In order to build a packaged chart, start by checking that you have set an appropriate version number in Chart.yaml. If this is the first time you have packaged your chart, the default will be OK:
$ helm package version-app
Successfully packaged chart and saved it to: ~/helm-charts/version-app-0.1.0.tgz
You can test a packaged chart without uploading it to a repository by using the helm serve command. This command will serve all of the packaged charts found in the current directory and generate an index on the fly:
$ helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
You can now test installing your chart by using this local repository:
$ helm install local/version-app
Building an index
A Helm repository is just a collection of packaged charts stored in a directory. In order to discover and search the charts and versions available in a particular repository, the Helm client downloads a special index.yaml that includes metadata about each packaged chart and the location it can be downloaded from.
In order to generate this index file, we need to copy all the packaged charts that we want in our index to the same directory:
cp ~/helm-charts/version-app-0.1.0.tgz ~/helm-repo/
Then, in order to generate the index.yaml file, we use the helm repo index command. You will need to pass the root URL where the packaged charts will be served from. This could be the address of a web server, or on AWS, you might use an S3 bucket:
helm repo index ~/helm-repo --url https://helm-repo.example.org
The chart index is quite a simple format, listing the name of each chart available, and then providing a list of each version available for each named chart. The index also includes a checksum in order to validate the download of charts from the repository:
apiVersion: v1
entries:
  version-app:
  - apiVersion: v1
    created: 2018-01-10T19:28:27.802896842Z
    description: A Helm chart for Kubernetes
    digest: 79aee8b48cab65f0d3693b98ae8234fe889b22815db87861e590276a657912c1
    name: version-app
    urls:
    - https://helm-repo.example.org/version-app-0.1.0.tgz
    version: 0.1.0
generated: 2018-01-10T19:28:27.802428278Z
This is the generated index.yaml file for our new chart repository.
Once we have created the index.yaml file, it is simply a question of copying your packaged charts and the index file to the host you have chosen to use. If you are using S3, this might look like this:
aws s3 sync ~/helm-repo s3://my-helm-repo-bucket
In order for Helm to be able to use your repository, your web server (or S3) needs to be correctly configured. The web server needs to serve the index.yaml file with the correct content type header (text/yaml or text/x-yaml), and the charts need to be available at the URLs listed in the index.
Using your repository
Once you have set up the repository, you can configure Helm to use it:
helm repo add my-repo https://helm-repo.example.org
my-repo has been added to your repositories
When you add a repository, Helm validates that it can indeed connect to the URL given and download the index file. You can check this by searching for your chart using helm search:
$ helm search version-app
NAME                 VERSION  DESCRIPTION
my-repo/version-app  0.1.1    A Helm chart for Kubernetes
Thus, in this article, you learned how to install Helm and how to configure and package Helm charts. Helm can be used for a wide range of scenarios where you want to deploy resources to a Kubernetes cluster, from providing a simple way for others to install an application you have written on their own clusters, to forming the cornerstone of an internal Platform as a Service within a larger organization. To know more about how to configure your own charts using Helm and to learn the organizational patterns for Helm, head over to our book, Kubernetes on AWS.
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
Introducing 'Quarkus', a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot
Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes