How-To Tutorials

AI Distilled 33: Tech Revolution 2024: AI's Impact Across Industries

Merlyn Shelley
22 Jan 2024
13 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

👋 Hello,

“This year, every industry will become a technology industry. You can now recognize and learn the language of almost anything with structure, and you can translate it to anything with structure — so text-protein, protein-text. This is the generative AI revolution.” - Jensen Huang, NVIDIA founder and CEO.

AI is revolutionizing drug development and reshaping medical tech with cutting-edge algorithms. Dive into the latest AI_Distilled edition for sharp insights on AI's impact across industries, including breakthroughs in machine learning, NLP, and more.

AI Launches & Industry Updates:
- OpenAI Revises Policy, Opening Doors to Military Applications
- Google Cloud Introduces Advanced Generative AI Tools for Retail Enhancement
- Google Confirms Significant Layoffs Across Core Teams
- OpenAI Launches ChatGPT Team for Collaborative Workspaces
- Microsoft Launches Copilot Pro Plan and Expands Business Availability
- Vodafone and Microsoft Forge 10-Year Partnership for Digital Transformation

AI in Healthcare:
- MIT Researchers Harness AI to Uncover New Antibiotic Candidates
- Google Research Unveils AMIE: AI System for Diagnostic Medical Conversations
- NVIDIA CEO Foresees Tech Transformation Across All Industries in 2024

AI in Finance:
- AI Reshapes Financial Industry: 2024 Trends Unveiled in Survey
- JPMorgan Seeks AI Strategist to Monitor London Startups
- AI in Fintech Market to Surpass $222.49 Billion by 2030

AI in Business:
- AI to Impact 40% Jobs Globally, Balanced Policies Needed, Says IMF
- Deloitte's Quarterly Survey Reveals Business Leaders' Concerns About Gen AI's Societal Impact and Talent Shortage

AI in Science & Technology:
- NASA Boosts Scientific Discovery with Generative AI-Powered Search
- Swarovski Unveils World's First AI Binoculars

AI in Supply Chain Management:
- AI Proves Crucial in Securing Healthcare Supply Chains: Economist Impact Study
- Unlocking Supply Chain Potential: Generative AI Transforms Operations

We've also got your fresh dose of LLM, GPT, and Gen AI secret knowledge and tutorials:
- How to Craft Effective AI Prompts
- Understanding and Managing KV Caching for LLM Inference
- Understanding and Enhancing Chain-of-Thought (CoT) Reasoning with Graphs
- Unlocking the Power of Hybrid Deep Neural Networks

We know how much you love hands-on tips and strategies from the community, so here they are:
- Building a Local Chatbot with Next.js, Llama.cpp, and ModelFusion
- How to Build an Anomaly Detector with OpenAI
- Building Multilingual Financial Search Applications with Cohere Embedding Models in Amazon Bedrock
- Maximizing GPU Utilization with AWS ParallelCluster and EC2 Capacity Blocks

Don't forget to review these GitHub repositories that have been doing rounds:
- vanna-ai/vanna
- dvmazur/mixtral-offloading
- pootiet/explain-then-translate
- genezc/minima

📥 Feedback on the Weekly Edition
Take our weekly survey and get a free PDF copy of our best-selling book, "Interactive Data Visualization with Python - Second Edition." We appreciate your input and hope you enjoy the book! Share your thoughts and opinions here!

Writer's Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week's newsletter content!
Cheers,  Merlyn Shelley  Editor-in-Chief, Packt  SignUp | Advertise | Archives⚡ TechWave: AI/GPT News & AnalysisAI Launches & Industry Updates: 💎 OpenAI Revises Policy, Opening Doors to Military Applications: OpenAI updated its policy, lifting the ban on using its tech for military purposes, aiming for clarity and national security discussions. However, it maintains a strict prohibition against developing and using weapons. 💎 Google Cloud Introduces Advanced Generative AI Tools for Retail Enhancement: Google Cloud has released new AI tools to improve online shopping and help retail businesses. This includes a smart chatbot for websites and apps to help customers, a feature to make product searches better, and tools to improve customer service and speed up listing products. 💎 Google Confirms Significant Layoffs Across Core Teams: Google announced major job cuts affecting its Hardware, core engineering, and Google Assistant teams, totaling around a thousand layoffs in a day. The exact number might be higher, but no total count was provided. 💎 OpenAI Launches ChatGPT Team for Collaborative Workspaces: ChatGPT Team is a plan for teams offering a secure space with advanced models like GPT-4 and DALL·E 3. It includes tools for data analysis and lets users create custom GPTs, ensuring business data remains private. 💎 Microsoft Launches Copilot Pro Plan and Expands Business Availability: Copilot Pro, at $20/month per user, offers enhanced text, command, and image features in Microsoft 365 apps, plus early access to new GenAI models. It's also available for businesses on various Microsoft 365 and Office 365 plans. 💎 Vodafone and Microsoft Forge 10-Year Partnership for Digital Transformation: Vodafone and Microsoft have formed a 10-year partnership to serve over 300 million people in Europe and Africa, using Microsoft's AI to improve customer experiences, IoT, digital services for small businesses, and global data center strategies. AI in Healthcare: 💎 MIT Researchers Harness AI to Uncover New Antibiotic Candidates: MIT researchers have employed deep learning to identify a new class of antibiotic compounds capable of combating drug-resistant bacterium Methicillin-resistant Staphylococcus aureus (MRSA). Published in Nature, the study underscores researchers' ability to unveil the deep-learning model's criteria for antibiotic predictions, paving the way for enhanced drug design. 💎 Google Research Unveils AMIE: AI System for Diagnostic Medical Conversations: Google Research introduces the Articulate Medical Intelligence Explorer (AMIE), an AI system tailored for diagnostic reasoning and conversations in the medical field. AMIE, based on LLMs, focuses on replicating the nuanced and skilled dialogues between clinicians and patients, addressing diagnostic challenges. The system employs a unique self-play simulated learning environment, refining its diagnostic capabilities across various medical conditions. 💎 NVIDIA CEO Foresees Tech Transformation Across All Industries in 2024: Jensen Huang predicts a tech revolution in all industries by 2024, focusing on generative AI's impact. At a healthcare conference, he highlighted AI's role in language and translation, and NVIDIA's shift from aiding drug discovery to designing drugs with computers. AI in Finance: 💎 AI Reshapes Financial Industry: 2024 Trends Unveiled in Survey: NVIDIA's survey reveals 91% of financial companies are adopting or planning to use AI. 55% are interested in generative AI and LLMs, mainly to enhance operations, risk, and marketing. 
97% intend to increase AI investments for new uses and workflow optimization. 💎 JPMorgan Seeks AI Strategist to Monitor London Startups: JPMorgan is hiring an 'AI Strategy Consultant' in London to identify and assess startups using Generative AI and LLMs, reporting to the Chief Data and Analytics Officer. This aligns with financial trends like HSBC's launch of Zing, a money transfer app. 💎 AI in Fintech Market to Surpass $222.49 Billion by 2030: The AI in Fintech market, valued at $13.23 billion in 2022, is growing fast. It's improving financial services with data analytics and machine learning, enhancing decision-making and security. It's projected to reach $222.49 billion by 2030, growing at 42.3% annually.  AI in Business: 💎 AI to Impact 40% Jobs Globally, Balanced Policies Needed, Says IMF: The IMF warns that AI affects 40% of global jobs, posing more risks and opportunities in advanced economies than emerging ones. It may increase income inequality, calling for social safety nets, retraining, and AI-focused policies to ensure inclusivity. 💎 Deloitte's Quarterly Survey Reveals Business Leaders' Concerns About Gen AI's Societal Impact and Talent Shortage: Deloitte's new quarterly survey, based on input from 2,800 professionals globally, shows 79% are optimistic about gen AI's impact on their businesses in 3 years. However, over 50% fear it may centralize global economic power and worsen economic inequality.  AI in Science & Technology:  💎 NASA Boosts Scientific Discovery with Generative AI-Powered Search: NASA introduces the Science Discovery Engine, powered by generative AI, simplifying access to its extensive data. Developed by the Open Source Science Initiative (OSSI) and Sinequa, it comprehends 9,000 scientific terms, offers contextual search, and enables natural language queries for 88,000 datasets and 715,000 documents from 128 sources. 💎 Swarovski Unveils World's First AI Binoculars: Swarovski Optik and designer Marc Newson launch AX VISIO, the first AI binoculars. They merge analog optics with AI, instantly identifying 9,000+ species, boasting a camera-like design, and enabling quick photo and video capture through a neural processing unit.  AI in Supply Chain Management: 💎 AI Proves Crucial in Securing Healthcare Supply Chains: Economist Impact Study: A study by Economist Impact, with DP World's support, finds 46% of healthcare firms use AI to predict supply chain issues. Amid geopolitical uncertainties, 39% use "friendshoring" for trade, and 23% optimize suppliers, showcasing industry adaptability. 💎 Unlocking Supply Chain Potential: Generative AI Transforms Operations: About 40% of supply chains invest in Gen AI for knowledge management. It's widely adopted (62%) for sustainability tracking and helps with forecasting, production, risk management, manufacturing design, predictive maintenance, and logistics efficiency.  🔮 Expert Insights from Packt Community Generative AI with LangChain - By Ben Auffarth How do GPT models work? Generative pre-training has been around for a while, employing methods such as Markov models or other techniques. However, language models such as BERT and GPT were made possible by the transformer deep neural network architecture (Vaswani and others, Attention Is All You Need, 2017), which has been a game-changer for NLP. Designed to avoid recursion to allow parallel computation, the Transformer architecture, in different variations, continues to push the boundaries of what’s possible within the field of NLP and generative AI. 
Transformers have pushed the envelope in NLP, especially in translation and language understanding. Neural Machine Translation (NMT) is a mainstream approach to machine translation that uses DL to capture long-range dependencies in a sentence. Models based on transformers outperformed previous approaches, such as using recurrent neural networks, particularly Long Short-Term Memory (LSTM) networks. The transformer model architecture has an encoder-decoder structure, where the encoder maps an input sequence to a sequence of hidden states, and the decoder maps the hidden states to an output sequence. The hidden state representations consider not only the inherent meaning of the words (their semantic value) but also their context in the sequence. The encoder is made up of identical layers, each with two sub-layers. The input embedding is passed through an attention mechanism, and the second sub-layer is a fully connected feed-forward network. Each sub-layer is followed by a residual connection and layer normalization. The output of each sub-layer is the sum of the input and the output of the sub-layer, which is then normalized. The architectural features that have contributed to the success of transformers are: Positional encoding: Since the transformer doesn’t process words sequentially but instead processes all words simultaneously, it lacks any notion of the order of words. To remedy this, information about the position of words in the sequence is injected into the model using positional encodings. These encodings are added to the input embeddings representing each word, thus allowing the model to consider the order of words in a sequence. Layer normalization: To stabilize the network’s learning, the transformer uses a technique called layer normalization. This technique normalizes the model’s inputs across the features dimension (instead of the batch dimension as in batch normalization), thus improving the overall speed and stability of learning. Multi-head attention: Instead of applying attention once, the transformer applies it multiple times in parallel – improving the model’s ability to focus on different types of information and thus capturing a richer combination of features. This is an excerpt from the book Generative AI with LangChain - By Ben Auffarth and published in Dec ‘23. To see what's inside the book, read the entire chapter here or try a 7-day free trial to access the full Packt digital library. To discover more, click the button below. Read through the Chapter 1 unlocked here...  🌟 Secret Knowledge: AI/LLM Resources💎 How to Craft Effective AI Prompts: Embark on a journey to understand the intricacies of AI prompts and how they can revolutionize creative content generation. Delve into the workings of AI Prompts, powered by NLP algorithms, and uncover the steps involved in their implementation. 💎 Understanding and Managing KV Caching for LLM Inference: Explore the intricacies of KV caching in the inference process of LLMs in this post. The KV cache, storing key and value tensors during token generation, poses challenges due to its linear growth with batch size and sequence length. The post delves into the memory constraints, presenting calculations for popular MHA models. 💎 Understanding and Enhancing Chain-of-Thought (CoT) Reasoning with Graphs: Explore using graphs to advance Chain-of-Thought (CoT) prompting, boosting reasoning in GPT-4. CoT enables multi-step problem-solving, spanning math to puzzles, vital for enhancing language models. 
💎 Unlocking the Power of Hybrid Deep Neural Networks: This article explains Hybrid Deep Neural Networks (HDNNs), advanced ML models changing AI. It covers HDNN architecture, uses, benefits, and future trends, including how they combine various neural networks like CNNs, RNNs, and GANs.  🔛 Masterclass: AI/LLM Tutorials💎 Building a Local Chatbot with Next.js, Llama.cpp, and ModelFusion: Discover how to build a chatbot with Next.js, Llama.cpp, and ModelFusion. This tutorial covers setup, using Llama.cpp for LLM inference in C++, and creating a chatbot base with Next.js, TypeScript, ESLint, and Tailwind CSS. 💎 How to Build an Anomaly Detector with OpenAI: Learn to build an anomaly detector for different data types, including text and numbers, that fits into your data pipeline. The guide starts with the importance of anomaly detection and OpenAI's LLM role, using OpenAI and BigQuery.  💎 Building Multilingual Financial Search Applications with Cohere Embedding Models in Amazon Bedrock: Learn to use Cohere's multilingual model on Amazon Bedrock for advanced financial search tools. Unlike traditional keyword-based methods, Cohere uses machine learning for semantic searches in over 100 languages, improving document analysis and information retrieval. 💎 Maximizing GPU Utilization with AWS ParallelCluster and EC2 Capacity Blocks: Discover how to tackle GPU shortages in machine learning with AWS ParallelCluster and EC2 Capacity Blocks. This guide outlines a three-step method: reserve Capacity Block, configure your cluster, and run jobs effectively, including GPU failure management and multi-queue optimization.  🚀 HackHub: Trending AI Tools💎 vanna-ai/vanna: Toolkit for accurate Text-to-SQL generation via LLMs using RAG to interact with SQL databases through chat.  💎 dvmazur/mixtral-offloading: Achieve efficient inference for Mixtral-8x7B models, utilizing mixed quantization with HQQ for attention layers and experts, along with a MoE offloading strategy. 💎 pootiet/explain-then-translate: 2-stage Chain-of-Thought (CoT) prompting technique for program translation to improve translation across various Python-to-X and X-to-X directions. 💎 genezc/minima: Addresses the challenge of distilling knowledge from large teacher LMs to smaller student ones to optimize the capacity gap for effective LM distillation and achieving competitive performance with resource-efficient models. 
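The transformer primer in the Expert Insights excerpt above (positional encoding, layer normalization, multi-head attention) is easier to digest with a small worked example. The following NumPy sketch is purely illustrative and not taken from the book: it adds sinusoidal positional encodings to token embeddings and then applies a single head of scaled dot-product attention, the operation that multi-head attention runs several times in parallel.

```python
# Illustrative sketch only: sinusoidal positional encoding + one attention head.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal encodings added to embeddings so the model sees word order."""
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])          # even dimensions
    encoding[:, 1::2] = np.cos(angles[:, 1::2])          # odd dimensions
    return encoding

def scaled_dot_product_attention(q, k, v):
    """Each position attends to every position; softmax weights sum to 1 per row."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

seq_len, d_model = 6, 16
embeddings = np.random.randn(seq_len, d_model)            # stand-in token embeddings
x = embeddings + positional_encoding(seq_len, d_model)    # inject order information
out = scaled_dot_product_attention(x, x, x)                # self-attention over the sequence
print(out.shape)  # (6, 16)
```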


10 machine learning algorithms every engineer needs to know

Aaron Lazar
30 Nov 2017
7 min read
When it comes to machine learning, it's all about the algorithms. But although machine learning algorithms are the bread and butter of a data scientist's job, it's not always as straightforward as simply picking up an algorithm and running with it. Algorithm selection is incredibly important and often very challenging. There are always a number of things you have to take into consideration, such as:

Accuracy: While accuracy is important, it's not always necessary. In many cases an approximation is sufficient, and there is no point chasing accuracy at the expense of processing time.

Training time: This goes hand in hand with accuracy and is not the same for all algorithms. Training time also tends to go up as the number of parameters grows. When time is a big constraint, you should choose an algorithm wisely.

Linearity: Algorithms that assume linearity expect the data to follow a linear trend. While this works well for some problems, for others it can result in lower accuracy.

Once you've taken those three considerations on board, you can start to dig a little deeper. Kaggle ran a survey in 2017 asking its readers which algorithms - or 'data science methods' more broadly - they were most likely to use at work. Below is a screenshot of the results. Kaggle's research offers a useful insight into the algorithms actually being used by data scientists and data analysts today. But we've brought together the types of machine learning algorithms that are most important. Every algorithm is useful in different circumstances - the skill is knowing which one to use and when.

10 machine learning algorithms

Linear regression
This is clearly one of the most interpretable ML algorithms. It requires minimal tuning and is easy to explain, which is the key reason for its popularity. It shows the relationship between two or more variables and how a change in one of the independent variables impacts the dependent variable. It is used for forecasting sales based on trends, as well as for risk assessment. Although its accuracy is relatively low, the small number of parameters required and the short training times make it quite popular among beginners.

Logistic regression
Logistic regression is typically viewed as a special form of linear regression, where the output variable is categorical. It's generally used to predict a binary outcome, i.e. True or False, 1 or 0, Yes or No, for a set of independent variables. As you will have guessed, this algorithm is generally used when the dependent variable is binary. Like linear regression, logistic regression has a low level of accuracy, few parameters, and short training times. It goes without saying that it's quite popular among beginners too.

Decision trees
These algorithms are mainly decision support tools that use tree-like graphs or models of decisions and possible consequences, including outcomes based on chance events, utilities, and so on. Put simply, a decision tree asks the smallest number of yes/no questions needed to identify the right decision with as high a probability as possible. It lets you tackle the problem at hand in a structured, systematic way and logically deduce the outcome. Decision trees are excellent when it comes to accuracy, but their training times are a bit longer compared to other algorithms. They also require a moderate number of parameters, so arriving at a good combination is not too complicated.
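The three algorithms above are quick to try out in practice. The following is a minimal sketch using scikit-learn on synthetic data; the dataset and parameter choices are made up purely for illustration.

```python
# A minimal sketch (scikit-learn, synthetic data) of the three algorithms above.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                          # three numeric features
y_cont = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)     # continuous target
y_bin = (y_cont > 0).astype(int)                                       # binary target derived from it

# Linear regression: model a continuous outcome; coefficients make it easy to interpret
lin = LinearRegression().fit(X, y_cont)
print("linear coefficients:", lin.coef_)

# Logistic regression: same idea, but the output is a class probability
log = LogisticRegression().fit(X, y_bin)
print("P(class=1) for first row:", log.predict_proba(X[:1])[0, 1])

# Decision tree: a sequence of yes/no splits that is easy to visualise and explain
tree = DecisionTreeClassifier(max_depth=3).fit(X, y_bin)
print("tree accuracy on training data:", tree.score(X, y_bin))
```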
Naive Bayes
This is a classification ML algorithm based on Bayes' well-known probability theorem, and it is one of the most popular learning algorithms. It groups similarities together and is usually used for document classification, facial recognition software, or predicting diseases. It generally works well when you have a medium to large data set to train your models. Naive Bayes models have moderate training times and make use of linearity. While this is good, linearity might also bring down accuracy for certain problems. They also do not depend on too many parameters, making it easy to arrive at a good combination, although sometimes at the cost of accuracy.

Random forest
Without a doubt, this one is a popular go-to machine learning algorithm that creates a group of decision trees, each built on a random subset of the data. It can be used for both classification and regression. It is simple to use, as just a few lines of code are enough to implement the algorithm. It is used by banks to predict high-risk loan applicants, or by hospitals to predict whether a particular patient is likely to develop a chronic disease. With a high accuracy level and moderate training time, it is quite efficient to implement. Moreover, it has an average number of parameters.

K-Means
K-Means is a popular unsupervised algorithm used for cluster analysis; it is an iterative and non-deterministic method. It operates on a given dataset through a predefined number of clusters. The output of a K-Means algorithm will be k clusters, with the input data partitioned among these clusters. Companies like Google use K-Means to cluster pages by similarity and discover the relevance of search results. This algorithm has a moderate training time and good accuracy. It doesn't involve many parameters, meaning that it's easy to arrive at the best possible combination.

K nearest neighbors
K nearest neighbors is a very popular machine learning algorithm that can be used for both regression and classification, although it's mostly used for the latter. Although it is simple, it is extremely effective. It takes little to no time to train, although its accuracy can be heavily degraded by high-dimensional data, since there is not much difference between the nearest neighbor and the farthest one.

Support vector machines
SVMs are supervised ML algorithms mainly used for classification. They can be used for either regression or classification, in situations where the training dataset teaches the algorithm about specific classes, so that it can then classify newly included data. What sets them apart from other machine learning algorithms is that they are able to separate classes more quickly and with less overfitting than several other classification algorithms. A few of the biggest pain points that have been addressed using SVMs are display advertising, image-based gender detection, and image classification with large feature sets. They are moderate in their accuracy, as well as in their training times, mostly because they assume a linear approximation. On the other hand, they require an average number of parameters to get the work done.

Ensemble methods
Ensemble methods are techniques that build a set of classifiers and combine their predictions to classify new data points. The original ensemble method is Bayesian averaging, but newer approaches include error-correcting output coding and others.
Although ensemble methods allow you to devise sophisticated algorithms and produce results with a high level of accuracy, they are not preferred so much in industries where interpretability of the algorithm is more important. However, with their high level of accuracy, it makes sense to use them in fields like healthcare, where even the smallest improvement can add a lot of value.

Artificial neural networks
Artificial neural networks are so named because they mimic the functioning and structure of biological neural networks. In these algorithms, information flows through the network and, depending on the input and output, the neural network changes in response. One of the most common use cases for ANNs is speech recognition, as in voice-based services. As the information fed to them grows, these algorithms improve. However, artificial neural networks are imperfect: with great power come longer training times. They also have several more parameters than other algorithms. That being said, they are very flexible and customizable.

If you want to skill up in implementing machine learning algorithms, you can check out the following books from Packt:
Data Science Algorithms in a Week by Dávid Natingga
Machine Learning Algorithms by Giuseppe Bonaccorso
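As with the earlier algorithms, the clustering and ensemble methods described above are easy to experiment with. Here is a minimal scikit-learn sketch on synthetic data; again, the data and parameter choices are illustrative only.

```python
# A minimal sketch (scikit-learn, synthetic data) of K-Means and a random forest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0, size=(100, 2)), rng.normal(loc=4, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# K-Means: unsupervised; you choose k up front and the data is partitioned into k clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))

# Random forest: an ensemble of decision trees trained on random subsets, predictions combined
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("forest accuracy:", forest.score(X, y))
```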


Implementing Web application vulnerability scanners with Kali Linux [Tutorial]

Savia Lobo
05 Oct 2018
10 min read
Vulnerability scanners suffer the common shortcomings of all scanners (a scanner can only detect the signature of a known vulnerability; they cannot determine if the vulnerability can actually be exploited; there is a high incidence of false-positive reports). Furthermore, web vulnerability scanners cannot identify complex errors in business logic, and they do not accurately simulate the complex chained attacks used by hackers. This tutorial is an excerpt taken from the book, Mastering Kali Linux for Advanced Penetration Testing - Second Edition written by Vijay Kumar Velu. In this book, we will be using a laboratory environment to validate tools and techniques, and using an application that supports a collaborative approach to penetration testing. This article includes a list of web application vulnerability scanners and how we can implement them using Kali Linux. In an effort to increase reliability, most penetration testers use multiple tools to scan web services; when multiple tools report that a particular vulnerability may exist, this consensus will direct the tester to areas that may require manually verifying the findings. Kali comes with an extensive number of vulnerability scanners for web services and provides a stable platform for installing new scanners and extending their capabilities. This allows penetration testers to increase the effectiveness of testing by selecting scanning tools that: Maximize the completeness (the total number of vulnerabilities that are identified) and accuracy (the vulnerabilities that are real and not false-positive results) of testing. Minimize the time required to obtain usable results. Minimize any negative impacts on the web services being tested. This can include slowing down the system due to an increase of traffic throughput. For example, one of the most common negative effects is a result of testing forms that input data to a database and then email an individual providing an update of the change that has been made-uncontrolled testing of such forms can result in more than 30,000 emails being sent! There is significant complexity in choosing the most effective tool. In addition to the factors already listed, some vulnerability scanners will also launch the appropriate exploit and support the post-exploit activities. For our purposes, we will consider all tools that scan for exploitable weaknesses to be vulnerability scanners. Kali provides access to several different vulnerability scanners, including the following: Scanners that extend the functionality of traditional vulnerability scanners to include websites and associated services (Metasploit framework and Websploit) Scanners that extend the functionality of non-traditional applications, such as web browsers, to support web service vulnerability scanning (OWASP Mantra) Scanners that are specifically developed to support reconnaissance and exploit detection in websites and web services (Arachnid, Nikto, Skipfish, Vega, w3af, and so on) Introduction to Nikto and Vega Nikto is one of the most utilized active web application scanners that performs comprehensive tests against web servers. Basic functionality is to check for 6,700+ potentially dangerous files or programs, along with outdated versions of servers and vulnerabilities specific to versions over 270 servers; server mis-configuration, index files, HTTP methods, and also attempts to identify the installed web server and the software version. 
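Before moving on to customization, it helps to see what a first Nikto run looks like. The commands below are a minimal sketch; the target host and output file are hypothetical and should be replaced with systems you are authorized to test.

```bash
# Basic Nikto scan of a single host, writing an HTML report (hypothetical target)
nikto.pl -host target.lab -port 80 -output nikto-target.html -Format htm

# List the plugins bundled with your Nikto install before customizing a scan
nikto.pl -list-plugins
```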
Nikto is released based on Open-General Public license versions (https://opensource.org/licenses/gpl-license). A Perl-based open-source scanner allows IDS evasion and user changes to scan modules; however, this original web scanner is beginning to show its age, and is not as accurate as some of the more modern scanners. Most testers start testing a website by using Nikto, a simple scanner (particularly with regards to reporting) that generally provides accurate but limited results; a sample output of this scan is shown in the following screenshot: The next step is to use more advanced scanners that scan a larger number of vulnerabilities; in turn, they can take significantly longer to run to completion. It is not uncommon for complex vulnerability scans (as determined by the number of pages to be scanned as well as the site's complexity, which can include multiple pages that permit user input such as search functions or forms that gather data from the user for a backend database) to take several days to be completed. One of the most effective scanners based on the number of verified vulnerabilities discovered is Subgraph's Vega. As shown in the following screenshot, it scans a target and classifies the vulnerabilities as high, medium, low, and informational. The tester is able to click on the identified results to drill down to specific findings. The tester can also modify the search modules, which are written in Java, to focus on particular vulnerabilities or identify new vulnerabilities: Vega can help you find vulnerabilities such as reflected cross-site scripting, stored cross-site scripting, blind SQL injection, remote file include, shell injection, and others. Vega also probes for TLS/SSL security settings and identifies opportunities for improving the security of your TLS servers. Also, Vega provides special features of Proxy section, which allows the penetration testers to query back the request and observe the response to perform the validation, which we call manual PoC. The following screenshot provides the proxy section of Vega: Customizing Nikto and Vega From Nikto version 2.1.1, the community allowed developers to debug and call specific plugins, the same can be customized accordingly from version 2.1.2, the listing can be done for all the plugins and then specify a specific plugin to perform any scan. There are currently around 35 plugins that can be utilized by penetration testers; the following screenshot provides the list of plugins that are currently available in the latest version of Nikto: For example, if attackers found a banner information as Apache server 2.2.0 then the first point that Nikto scans to burp or any proxy tool by nikto.pl -host <hostaddress> -port <hostport> -useragentnikto -useproxy http://127.0.0.1:8080, Nikto can be customized to run specific plugins only for Apache user enumeration by running the following command: nikto.pl -host target.com -Plugins "apacheusers(enumerate,dictionary:users.txt);report_xml" -output apacheusers.xml Penetration testers should be able to see the following screenshot: When the Nikto plugin is run successfully, the output file apacheusers.xml should include the active users on the target host. Similar to Nikto, Vega also allows us to customize the scanner by navigating to the window and selecting Preferences whereby one can set up a general proxy configuration or even point the traffic to a third-party proxy tool. However, Vega has its own proxy tool that can be utilized. 
The following screenshot provides the scanner options that can be set before beginning any web application scan: Attackers can define their own user agent or mimic any well-known user agents, such as IRC bot or Google bot and also configure the maximum number of total descendants and sub-processes, the number of paths that can be traversed. For example, if the spider reveals www.target.com/admin/, - there is a dictionary to add to the URL as www.target.com/admin/secret/. The maximum by default is set to 16, but attackers would be able to drill down by utilizing other tools to maximize the effectiveness of Vega and would select precisely the right number of paths and also, in the case of any protection mechanisms in place, such as WAF or network level IPS, pentesters can select to scan the target with a slow rate of connections per second to send to the target. One can also set the maximum number of responses size to be set, by default it is set: to 1 MB (1,024 kB). Once the preferences are set, the scan can be further customized while adding a new scan. When penetration testers click on a new scan and enter the base URL to scan and click next, the following screen should take the testers to customize the scan and they should be able to see the following screenshot: Vega provides two sections to customize: one is Injection modules and the other is Response processing modules: Injection modules: This includes a list of exploit modules that are available as part of the built-in Vega web vulnerability databases and it tests the target for those vulnerabilities, such as blind SQL injection, XSS, remote file inclusion, local file inclusion, header injections, and so on. Response processing modules: This includes the list of security misconfigurations that can be picked up as part of the HTTP response in itself, such as directory listing, error pages, cross-domain policies, version control strings, and so on. Vega also supports testers to add their own plugin modules (https://github.com/subgraph/Vega/). To know about vulnerability scanners for mobile applications, head over to the book. The OpenVAS network vulnerability scanner Open Vulnerability Assessment System (OpenVAS) is an open source vulnerability assessment scanner and also a vulnerability management tool often utilized by attackers to scan a wide range of networks, which includes around 47,000 vulnerabilities in its database; however, this can be considered as a slow network vulnerability scanner compared with other commercial tools, such as Nessus, nexpose, Qualys, and so on. If OpenVAS is already not installed, make sure your Kali is up to date and install the latest OpenVAS by running the apt-get install Openvas command. Once done, run the openvas-setup command to setup OpenVAS. To make sure the installation is okay, the penetration testers can run the command openvas-check-setup and it will list down the top 10 items that are required to run OpenVAS effectively. Upon successful installation, testers should be able to see the following screenshot: Next, create an admin user by running the openvasmd -user=admin -new-password=YourNewPassword1,-new-password=YourNewPassword1command, and start up the OpenVAS scanner and OpenVAS manager services by running the openvas-start command from the prompt. Depending on bandwidth and computer resources, this could take a while. 
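Collected in one place, the OpenVAS setup sequence described above looks roughly like the following. This is a sketch based on the commands in the text; package names and option syntax can vary between Kali releases, so treat it as a guide rather than a copy-paste script.

```bash
apt-get update && apt-get install openvas                  # install OpenVAS on an up-to-date Kali
openvas-setup                                              # initial setup: certificates, feed sync, services
openvas-check-setup                                        # verify the installation and list anything missing
openvasmd --user=admin --new-password=YourNewPassword1     # create the admin user
openvas-start                                              # start the OpenVAS scanner and manager services
```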
Once the installation and update has been completed, penetration testers should be able to access the OpenVAS server on port 9392 with SSL (https://localhost:9392), as shown in the following screenshot: The next step is to validate the user credentials, by entering the username as admin and password with yournewpassword1 and testers should be able to login without any issues and see the following screenshot. Attackers are now set to utilize OpenVAS by entering the target information and clicking Start Scan from the scanner portal: Customizing OpenVAS Unlike any other scanners, OpenVAS is also customizable for scan configuration, it allows the testers to add credentials, disable particular plugins, and set the maximum and minimum number of connections that can be made and so on. The following sample screenshot shows the place where attackers are allowed to change all the required settings to customize it accordingly: To summarize, in this article we focused on multiple vulnerability assessment tools and techniques implementation using Kali Linux. If you've enjoyed this post, do check out our book Mastering Kali Linux for Advanced Penetration Testing - Second Edition to explore approaches to carry out advanced penetration testing in tightly secured environments. Getting Started with Metasploitable2 and Kali Linux Introduction to Penetration Testing and Kali Linux Wireless Attacks in Kali Linux


What matters on an engineering resume? Hacker Rank report says skills, not certifications

Richard Gall
24 May 2018
6 min read
Putting together an engineering resume can be a real headache. What should you include? How can you best communicate your experience and skills? Software engineers are constantly under pressure to deliver new projects and fix problems while learning new skills. Documenting the complexity of developer life in a straightforward and marketable manner is a challenge, to say the least. Luckily, hiring managers and tech recruiters today recognize just how difficult communicating skill and competency in an engineering resume can be. A report by Hacker Rank revealed that the things that feature on a resume aren't that highly valued by recruiters and hiring managers. However, skill does remain top of the agenda: the question, really, is how we demonstrate and communicate those skills.

The quality of your previous experience matters on an engineering resume
Hacker Rank found that hiring managers and tech recruiters value previous experience over everything else. 77% of survey respondents said previous experience was one of the 3 most important qualifications before a formal interview. In second place was years of experience, with 46%. The difference between the two is subtle but important; it offers a useful takeaway for engineers creating an engineering resume. Essentially, the quality of your experience is more important than the quantity of your experience. You need to make sure you communicate the details of your employment experiences. It sounds obvious but it's worth stating: applying for an engineering job isn't just a competition based on who has the most experience. You should explain the nature of the projects you are working on. The skills you used are essential, but being clear about how the project supported wider strategic or tactical goals is also important. This demonstrates not only your skills, but also your contextual awareness. It suggests to a hiring manager or recruiter that you not only have the competence, but that you are also a team player with commercial awareness.

Certifications aren't that important on your resume
One of the most interesting insights from the Hacker Rank report was that both hiring managers and recruiters don't really care about certifications any more. Less than 16% listed them as one of the 3 most important things they look at during the recruitment process. Does this mean, then, that the certification is well and truly over? At this stage, it's hard to tell. But it does point to a wider cultural change that probably has a lot to do with open source. Because change is built into the reality of open source software, certifications are never going to be able to keep up with what's new and important. The things you learned to pass one year will likely be out of date the next. It probably also says something about the nature of technical roles today. Years ago, engineers would start a job knowing what they were going to be using. The toolchains and tech stacks would be relatively stable and consistent. In this context, certification was like a license, proving you understood the various components of a given tool or suite of tools. But today, it's more important for engineers to prove that they are both adaptable and capable of solving a range of different problems. With that in mind, it's essential to demonstrate your flexibility on your engineering resume. Make it clear that you're able to learn new things quickly, and that you can adapt your skill set to the problems you need to solve.

You don't need to look good on paper to get the job... but it's going to help
Hacker Rank's research also revealed that 75% of recruiters and hiring managers have hired people they initially thought didn't look good on paper. But that doesn't necessarily mean you should stop working on your resume. If anything, what this shows is that if you get your resume right, you could really catch someone's attention. You need to consider everything in your resume. Traditional resumes have a pretty clear structure, whatever job you're applying for, but if Hacker Rank's research tells us anything, it's that an engineering resume requires a slightly different approach.

Personal projects are more important than your portfolio on an engineering resume
A further insight from Hacker Rank's report suggests one way you might adopt a different approach to your resume. Responding to the same question as the one we looked at above, 37% said personal projects were one of the 3 most important factors in determining whether to invite a candidate to interview. By contrast, only 22% said portfolio. This seems strange - surely a portfolio offers a deeper insight into someone's professional experience. Personal projects are more like hobbies, right? In fact, personal projects tell you much more about a candidate than a portfolio. A portfolio is largely determined by the work you have been doing. What's more, it's not always that easy to communicate value in a portfolio. Equally, if you've been badly managed, or faced a lack of support, your portfolio might not actually be a good reflection of how good you really are. Personal projects give you an insight into how a person thinks. They show recruiters what makes an engineer tick. In the workplace your scope for creativity and problem solving might well be limited. With personal projects you're free to test out ideas and try new tools. You're able to experiment. So, when you're putting together an engineering resume, make sure you dedicate some time to outlining your personal projects. Consider these sorts of questions: Why did you start a project? What did you find interesting? What did you learn?

Engineering skills still matter
Just because the traditional resume appears to have fallen out of favor, it doesn't mean your skills don't matter. In fact, skill matters more than ever. For a third of hiring managers, skill assessments are the area they want to invest in. This would allow them to judge a candidate's competencies and skills much more effectively than simply looking at a resume. As we've seen, things like personal projects are valuable because they demonstrate skills in a way that is otherwise difficult. They not only prove you have the technical skills you say you have, they also provide a good indication of how you think and how you might approach solving problems. They can help illustrate how you deploy those skills. And when it's so easy to learn how to write lines of code (no bad thing, true), showing how you think and apply your skills is a surefire way to make sure you stand out from the crowd.

Read next:
How to assess your tech team's skills
Are technical skills overrated when hiring tech pros?
'Soft' Skills Every Data Pro Needs


Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]

Bhagyashree R
10 Feb 2019
10 min read
The basic idea behind Reactive Programming (RP) is that of asynchronous data streams, such as the stream of events generated by mouse clicks, or a piece of data coming through a network connection. Anything can be a stream; there are really no constraints. The only property that makes it sensible to model an entity as a stream is its ability to change at unpredictable times. The other half of the picture is the idea of observers, which you can think of as agents that subscribe to receive notifications of new events in a stream. In between, you have ways of transforming those streams, combining them, creating new streams, filtering them, and so on. You could look at RP as a generalization of Key-Value Observing (KVO), a mechanism that has been present in the macOS and iOS SDKs since their inception. KVO enables objects to receive notifications about changes to other objects' properties to which they have subscribed as observers. An observer object can register by providing a keypath, hence the name, into the observed object.

This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone. This book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You'll begin with a quick refresher on Swift, the compiler, the standard library, and the foundation, followed by the Cocoa design patterns, before moving on to the creational, structural, and behavioral patterns as defined by the GoF. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will give a brief introduction to one popular framework for RP in Swift, RxSwift, and its Cocoa counterpart, RxCocoa, which makes Cocoa ready for use with RP. RxSwift is not the only RP framework for Swift. Another popular one is ReactiveCocoa, but we think that, once you have understood the basic concepts behind one, it won't be hard to switch to the other.

Using RxSwift and RxCocoa in reactive programming
RxSwift aims to be fully compatible with Rx, Reactive Extensions for Microsoft .NET, a mature reactive programming framework that has been ported to many languages, including Java, Scala, JavaScript, and Clojure. Adopting RxSwift thus has the advantage that it will be quite natural for you to use the same approach and concepts in another language for which Rx is available, in case you need to. If you want to play with RxSwift, the first step is creating an Xcode project and adding the RxSwift dependency. If you use the Swift Package Manager, just make sure your Package.swift file declares RxSwift as a dependency (an illustrative manifest is sketched at the end of this section). If you use CocoaPods, add the following dependencies to your podfile:

```
pod 'RxSwift', '~> 4.0'
pod 'RxCocoa', '~> 4.0'
```

Then, run this command:

```
pod install
```

Finally, if you use Carthage, add this to Cartfile:

```
github "ReactiveX/RxSwift" ~> 4.0
```

Then, run this command to finish:

```
carthage update
```

As you can see, we have also included RxCocoa as a dependency. RxCocoa is a framework that extends Cocoa to make it ready to be used with RxSwift. For example, RxCocoa will make many properties of your Cocoa objects observable without requiring you to add a single line of code. So if you have a UI object whose position changes depending on some user action, you can observe its center property and react to its evolution.
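The Package.swift snippet referenced above is not reproduced on this page. Purely as an illustration (assuming RxSwift 4.x and the Swift 4 package manifest format; the package name and version are assumptions), a manifest might look roughly like this:

```swift
// swift-tools-version:4.0
// Illustrative manifest only – package name and versions are assumptions.
import PackageDescription

let package = Package(
    name: "RxDemo",
    dependencies: [
        // RxCocoa ships in the same repository as RxSwift
        .package(url: "https://github.com/ReactiveX/RxSwift.git", from: "4.0.0")
    ],
    targets: [
        .target(name: "RxDemo", dependencies: ["RxSwift", "RxCocoa"])
    ]
)
```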
Observables and observers
Now that RxSwift is set up in our project, let's start with a few basic concepts before diving into some code:

A stream in RxSwift is represented through Observable<ObservableType>, which is equivalent to Sequence, with the added capability of being able to receive new elements asynchronously.
An observable stream in Rx can emit three different events: next, error, and complete. When an observer registers for a stream, the stream begins to emit next events, and it does so until an error or complete event is generated, in which case the stream stops emitting events.
You subscribe to a stream by calling ObservableType.subscribe, which is equivalent to Sequence.makeIterator. However, you do not use that iterator directly, as you would to iterate a sequence; rather, you provide a callback that will receive new events.
When you are done with a stream, you should release it, along with all the resources it allocated, by calling dispose. To make it easier not to forget to release streams, RxSwift provides DisposeBag and takeUntil. Make sure that you use one of them in your production code.

All of this can be translated into the following code snippet:

```swift
let aDisposableBag = DisposeBag()
let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6])

let subscription = thisIsAnObservableStream.subscribe(
    onNext: { print("Next value: \($0)") },
    onError: { print("Error: \($0)") },
    onCompleted: { print("Completed") })

// add the subscription to the disposable bag
// when the bag is collected, the subscription is disposed
subscription.disposed(by: aDisposableBag)

// if you do not use a disposable bag, do not forget this!
// subscription.dispose()
```

Usually, your view controller is where you create your subscriptions, while, in our example, thisIsAnObservableStream, observers, and observables fit into your view model. In general, you should make all of your model properties observable, so your view controller can subscribe to those observables to update the UI when need be. In addition to being observable, some properties of your view model could also be observers. For example, you could have a UITextField or UISearchBar in your app UI, and a property of your view model could observe its text property. Based on that value, you could display some relevant information, for example, the result of a query.

When a property of your view model is at the same time an observable and an observer, RxSwift provides you with a different role for your entity: that of a Subject. There exist multiple categories of subjects, categorized based on their behavior, so you will see BehaviourSubject, PublishSubject, ReplaySubject, and Variable. They only differ in the way that they make past events available to their observers.

Before looking at how these new concepts may be used in your program, we need to introduce two further concepts: transformations and schedulers.

Transformations
Transformations allow you to create new observable streams by combining, filtering, or transforming the events emitted by other observable streams. The available transformations include the following:

map: This transforms each event in a stream into another value before any observer can observe that value. For example, you could map the text property of a UISearchBar into a URL to be used to query some remote service.
flatMap: This transforms each event into another Observable. For example, you could map the text property of a UISearchBar into the result of an asynchronous query.
scan: This is similar to the reduce Swift operator on sequences. It will accumulate each new event into a partial result based on all previously emitted events, and emit that result.
filter: This enables filtering of emitted events based on a condition to be verified.
merge: This merges two streams of events while preserving their ordering.
zip: This combines two streams of events by creating a new stream whose events are tuples made from the successive events of the two original streams.

Schedulers
Schedulers allow you to control to which queue RxSwift operators are dispatched. By default, all RxSwift operations are executed on the same queue where the subscription was made, but by using schedulers with observeOn and subscribeOn, you can alter that behavior. For example, you could subscribe to a stream whose events are emitted from a background queue, possibly the results of some lengthy tasks, and observe those events from the main thread to be able to update the UI based on those tasks' outcomes. Recalling our previous example, this is how we could use observeOn and subscribeOn as described:

```swift
let aDisposableBag = DisposeBag()
let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6])
    .observeOn(MainScheduler.instance)
    .map { n in print("This is performed on the main scheduler") }

let subscription = thisIsAnObservableStream
    .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background))
    .subscribe(onNext: { event in
        print("Handle \(event) on main thread? \(Thread.isMainThread)")
    }, onError: {
        print("Error: \($0). On main thread? \(Thread.isMainThread)")
    }, onCompleted: {
        print("Completed. On main thread? \(Thread.isMainThread)")
    })

subscription.disposed(by: aDisposableBag)
```

Asynchronous networking – an example
Now we can take a look at a slightly more compelling example, showing off the power of reactive programming. Let's get back to our previous example: a UISearchBar collects user input that a view controller observes, to update a table displaying the result of a remote query. This is a pretty standard UI design. Using RxCocoa, we can observe the text property of the search bar and map it into a URL. For example, if the user inputs a GitHub username, the URLRequest could retrieve a list of all their repositories. We then further transform the URLRequest into another observable using flatMap. The remoteStream function is defined in the following snippet, and simply returns an observable containing the result of the network query. Finally, we bind the stream returned by flatMap to our tableView, again using one of the methods provided by RxCocoa, to update its content based on the JSON data passed in record:

```swift
searchController.searchBar.rx.text.asObservable()
    .map(makeURLRequest)
    .flatMap(remoteStream)
    .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in
        cell.textLabel?.text = "" // update here the table cells
    }
    .disposed(by: disposeBag)
```

This all looks pretty clear and linear. The only bit left out is the networking code. This is pretty standard code, with the major difference being that it returns an observable wrapping a URLSession.dataTask call. The following code shows the standard way to create an observable stream by calling observer.onNext and passing on the result of the asynchronous task:

```swift
func remoteStream<T: Codable>(_ request: URLRequest) -> Observable<T> {
    return Observable<T>.create { observer in
        let task = URLSession.shared.dataTask(with: request) { (data, response, error) in
            do {
                let records: T = try JSONDecoder().decode(T.self, from: data ?? Data())
                for record in records {
                    observer.onNext(record)
                }
            } catch let error {
                observer.onError(error)
            }
            observer.onCompleted()
        }
        task.resume()
        return Disposables.create { task.cancel() }
    }
}
```

As a final bit, we could consider the following variant: we want to store the UISearchBar text property value in our model, instead of simply retrieving the information associated with it from our remote service. To do so, we add a username property to our view model and recognize that it should, at the same time, be an observer of the UISearchBar text property as well as an observable, since it will be observed by the view controller to retrieve the associated information whenever it changes. This is the relevant code for our view model:

```swift
import Foundation
import RxSwift
import RxCocoa

class ViewModel {
    var username = Variable<String>("")

    init() {
        setup()
    }

    func setup() {
        // ...
    }
}
```

The view controller will need to be modified as in the following code block, where you can see that we bind the UISearchBar text property to our view model's username property; then, we observe the latter, as we did previously with the search bar:

```swift
searchController.searchBar.rx.observe(String.self, "text")
    .bindTo(viewModel.username)
    .disposed(by: disposeBag)

viewModel.username.asObservable()
    .map(makeURLRequest)
    .flatMap(remoteStream)
    .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in
        cell.textLabel?.text = "" // update here the table cells
    }
    .disposed(by: disposeBag)
```

With this last example, our short introduction to RxSwift is complete. There is much more to be said, though. A whole book could be devoted to RxSwift/RxCocoa and how they can be used to write Swift apps! If you found this post useful, do check out the book, Hands-On Design Patterns with Swift. This book provides a complete overview of how to implement classic design patterns in Swift. It will guide you to build Swift applications that are scalable, faster, and easier to maintain.

Reactive Extensions: Ways to create RxJS Observables [Tutorial]
What's new in Vapor 3, the popular Swift based web framework
Exclusivity enforcement is now complete in Swift 5
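As a small addendum to the Transformations section above, here is a self-contained illustration (not from the book) of the filter, map, and zip operators on two simple in-memory streams; the values are made up for demonstration.

```swift
// Illustrative only: transformation operators on two simple streams.
import RxSwift

let bag = DisposeBag()
let numbers = Observable.from([1, 2, 3, 4, 5, 6])
let letters = Observable.from(["a", "b", "c"])

numbers
    .filter { $0 % 2 == 0 }        // keep even values only
    .map { $0 * 10 }               // transform each surviving event
    .subscribe(onNext: { print("even x10:", $0) })
    .disposed(by: bag)

Observable.zip(numbers, letters) { num, letter in "\(num)\(letter)" }
    .subscribe(onNext: { print("zipped:", $0) })   // "1a", "2b", "3c"
    .disposed(by: bag)
```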

How to build a desktop app using Electron

Amit Kothari
17 Oct 2016
9 min read
Desktop apps are making a comeback. Even companies with cloud-based applications with awesome web apps are investing in desktop apps to offer a better user experience. One example is team collaboration tool called Slack. They built a really good desktop app with web technologies using Electron. Electron is an open source framework used to build cross-platform desktop apps using web technologies. It uses Node.js and Chromium and allows us to develop desktop GUI apps using HTML, CSS and JavaScript. Electron is developed by GitHub, initially for Atom editor but now used by many companies, including Slack, Wordpress, Microsoft and Docker to name a few. Electron apps are web apps running in embedded Chromium web browser, with access to the full suite of Node.js modules and underlying operating system. In this post we will build a simple desktop app using Electron. Hello Electron Let’s start by creating a simple app. Before we start, we need Node.js and npm installed. Follow the instructions on the Node.js website if you do not have these installed already. Create a new director for your application and inside the app directory, create a package.json file by using the npm init command. Follow the prompts and remember to set main.js as the entry point. Once the file is generated, install electron-prebuild, which is the precomplied version of electron, and add it as a dev depenency in the package.json using the command npm install --save-dev electron-prebuilt. Also add "start": "electron ." under scripts, which we will use later to start our app. The package.json file will look something like this: { "name": "electron-tutorial", "version": "1.0.0", "description": "Electron Tutorial ", "main": "main.js", "scripts": { "start": "electron ." }, "devDependencies": { "electron-prebuilt": "^1.3.3" } } Create a file main.js with the following content: const {app, BrowserWindow} = require('electron'); // Global reference of the window object. let mainWindow; // When Electron finish initialization, create window and load app index.html app.on('ready', () => { mainWindow = new BrowserWindow({ width: 800, height: 600 }); mainWindow.loadURL(`file://${__dirname}/index.html`); }); We defined main.js as the entry point to our app in package.json. In main.js the electron app module controls the application lifecyle and BrowserWindow is used to create a native browser window. When Electron finishes initializing and our app is ready, we create a browser window to load our web page—index.html. As mentioned in the Electron documentation, remember to keep a global reference of the window object to avoid it from closing automatically when the JavaScript garbage collector kicks in. Finally, create the index.html file: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Hello Electron</title> </head> <body> <h1>Hello Electron</h1> </body> </html> We can now start our app by running the npm start command. Testing the Electron app Let’s write some integration tests for our app using Spectron. spectron allows us to test Electron apps using ChromeDriver and WebdriverIO. It is a test framework that is agnostic, but for this example, we will use mocha to write the tests. Let’s start by adding spectron and mocha as dev dependecies using the npm install --save-dev spectron and npm install --save-dev mocha commands. Then add "test": "./node_modules/mocha/bin/mocha" under scripts in the package.json file. This will be used to run our tests later. 
The package.json should look something like this: { "name": "electron-tutorial", "version": "1.0.0", "description": "Electron Tutorial ", "main": "main.js", "scripts": { "start": "electron .", "test": "./node_modules/mocha/bin/mocha" }, "devDependencies": { "electron-prebuilt": "^1.3.3", "mocha": "^3.0.2", "spectron": "^3.3.0" } } Now that we have all the dependencies installed, let’s write some tests. Create a directory called test and a file called test.js inside it. Copy the following content to test.js: var Application = require('spectron').Application; var electron = require('electron-prebuilt'); var assert = require('assert'); describe('Sample app', function () { var app; beforeEach(function () { app = new Application({ path: electron, args: ['.'] }); return app.start(); }); afterEach(function () { if (app && app.isRunning()) { return app.stop(); } }); it('should show initial window', function () { return app.browserWindow.isVisible() .then(function (isVisible) { assert.equal(isVisible, true); }); }); it('should have correct app title', function () { return app.client.getTitle() .then(function (title) { assert.equal(title, 'Hello Electron') }); }); }); Here we have couple of simple tests. We start the app before each test and stop after each test. The first test is to verify that the app's browserWindow is visible, and the second test is to verify the app’s title. We can run these tests using the npm run test command. spectron not only allows us to easily set up and tear down our app, but also give access to various APIs, allowing us to write sophisticated tests covering various business requirements. Please have a look at their documentation for more details. Packaging our app Now that we have a basic app, we are ready to package and build it for distribution. We will use electron-builder for this, which offers a complete solution to distribute apps on different platforms with the option to auto-update. It is recommended to use two separate package.jsons when using electron-builder, one for the development environment and build scripts and another one with app dependencies. But for our simple app, we can just use one package.json file. Let’s start by adding electron-builder as dev dependency using command npm install --save-dev electron-builder. Make sure you have the name, desciption, version and author defined in package.json. You also need to add electron-builder-specific options as build property in package.json: "build": { "appId": "com.amitkothari.electronsample", "category": "public.app-category.productivity" } For Mac OS, we need to specify appId and category. Look at the documentation for options for other platforms. Finally add script in package.json to package and build the app: "dist": "build" The updated package.json will look like this: { "name": "electron-tutorial", "version": "1.0.0", "description": "Electron Tutorial ", "author": "Amit Kothari", "main": "main.js", "scripts": { "start": "electron .", "test": "./node_modules/mocha/bin/mocha", "dist": "build" }, "devDependencies": { "electron-prebuilt": "^1.3.3", "mocha": "^3.0.2", "spectron": "^3.3.0", "electron-builder": "^5.25.1" }, "build": { "appId": "com.amitkothari.electronsample", "category": "public.app-category.productivity" } } Next we need to create a build directory under our project root directory. In this, put a file background.png for the Mac OS DMG background and icon.icns for app icon. We can now package our app by running the npm run dist command. 
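For orientation, this is roughly the project layout the steps so far have produced; it is only a recap of the files created above, not additional required structure:

```
electron-tutorial/
├── package.json     # entry point, npm scripts, devDependencies, "build" config
├── main.js          # Electron main process: creates the BrowserWindow
├── index.html       # page loaded into the window
├── test/
│   └── test.js      # Spectron + mocha integration tests
└── build/
    ├── background.png   # DMG background image (macOS)
    └── icon.icns        # application icon (macOS)
```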
Todo App We’ve built a very simple app, but Electron apps can do more than just show static text. Lets add some dynamic behavior to our app and convert it into a Todo list manager. We can use any JavaScript framework of choice, from AngularJS to React, with Electron, but for this example, we will use plain JavaScript. To start with, let’s update our index.html to display a todo list: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Hello Electron</title> <link rel="stylesheet" type="text/css" href="./style.css"> </head> <body> <div class="container"> <ul id="todoList"></ul> <textarea id="todoInput" placeholder="What needs to be done ?"></textarea> <button id="addTodoButton">Add to list</button> </div> </body> <script>require('./app.js')</script> </html> We also included style.css and app.js in index.html. All our CSS will be in style.css and our app logic will be in app.js. Create the style.css file with the following content: body { margin: 0; } ul { list-style-type: none; margin: 0; padding: 0; } li { padding: 10px; border-bottom: 1px solid #ddd; } button { background-color: black; color: #fff; margin: 5px; padding: 5px; cursor: pointer; border: none; font-size: 12px; } .container { width: 100%; } #todoInput { float: left; display: block; overflow: auto; margin: 15px; padding: 10px; font-size: 12px; width: 250px; } #addTodoButton { float: left; margin: 25px 10px; } And finally create the app.js file: (function () { const addTodoButton = document.getElementById('addTodoButton'); const todoList = document.getElementById('todoList'); // Create delete button for todo item const createTodoDeleteButton = () => { const deleteButton = document.createElement("button"); deleteButton.innerHTML = "X"; deleteButton.onclick = function () { this.parentNode.outerHTML = ""; }; return deleteButton; } // Create element to show todo text const createTodoText = (todo) => { const todoText = document.createElement("span"); todoText.innerHTML = todo; return todoText; } // Create a todo item with delete button and text const createTodoItem = (todo) => { const todoItem = document.createElement("li"); todoItem.appendChild(createTodoDeleteButton()); todoItem.appendChild(createTodoText(todo)); return todoItem; } // Clear input field const clearTodoInputField = () => { document.getElementById("todoInput").value = ""; } // Add new todo item and clear input field const addTodoItem = () => { const todo = document.getElementById('todoInput').value; if (todo) { todoList.appendChild(createTodoItem(todo)); clearTodoInputField(); } } addTodoButton.addEventListener("click", addTodoItem, false); } ()); Our app.js has a self invoking function which registers a listener (addTodoItem) on addTodoButton click event. On add button click event, the addTodoItem function will add a new todo item and clear the text area. Run the app again using the npm start command. Conclusion We built a very simple app, but it shows the potential of Electron. As stated on the Electron website, if you can build a website, you can build a desktop app. I hope you find this post interesting. If you have built an application with Electron, please share it with us. About the author Amit Kothari is a full-stack software developer based in Melbourne, Australia. He has 10+ years experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks such as React and AngularJS and backend microservices/REST API in Java. 
He is passionate about lean software development and continuous delivery.
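As an optional extension to the todo example above (a hedged sketch, not part of the original tutorial), the list could be persisted between launches using the standard localStorage API available in the renderer. createTodoItem and todoList are the functions and elements already defined in app.js; the storage key name is illustrative:

```javascript
const STORAGE_KEY = 'todos'; // illustrative key name

const loadTodos = () => JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');
const saveTodos = (todos) => localStorage.setItem(STORAGE_KEY, JSON.stringify(todos));

// On startup, rebuild the list from storage:
loadTodos().forEach((todo) => todoList.appendChild(createTodoItem(todo)));

// Inside addTodoItem(), after appending the new item:
// saveTodos(loadTodos().concat(todo));
// (deleting an item would also need to update storage; omitted here for brevity)
```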

What are data professionals planning to learn this year? Python, deep learning, yes. But also...

Amey Varangaonkar
14 Jun 2018
4 min read
One thing that every data professional absolutely dreads is the day their skills are no longer relevant in the market. In an ever-changing tech landscape, one must be constantly on the lookout for the most relevant, industrially-accepted tools and frameworks. This is applicable everywhere - from application and web developers to cybersecurity professionals. Not even the data professionals are excluded from this, as new ways and means to extract actionable insights from raw data are being found out almost every day. Gone are the days when data pros stuck to a single language and a framework to work with their data. Frameworks are more flexible now, with multiple dependencies across various tools and languages. Not just that, new domains are being identified where these frameworks can be applied, and how they can be applied varies massively as well. A whole new arena of possibilities has opened up, and with that new set of skills and toolkits to work on these domains have also been unlocked. What’s the next big thing for data professionals? We recently polled thousands of data professionals as part of our Skill-Up program, and got some very interesting insights into what they think the future of data science looks like. We asked them what they were planning to learn in the next 12 months. The following word cloud is the result of their responses, weighted by frequency of the tools they chose: What data professionals are planning on learning in the next 12 months Unsurprisingly, Python comes out on top as the language many data pros want to learn in the coming months. With its general-purpose nature and innumerable applications across various use-cases, Python’s sky-rocketing popularity is the reason everybody wants to learn it. Machine learning and AI are finding significant applications in the web development domain today. They are revolutionizing the customers’ digital experience through conversational UIs or chatbots. Not just that, smart machine learning algorithms are being used to personalize websites and their UX. With all these reasons, who wouldn’t want to learn JavaScript, as an important tool to have in their data science toolkit? Add to that the trending web dev framework Angular, and you have all the tools to build smart, responsive front-end web applications. We also saw data professionals taking active interest in the mobile and cloud domains as well. They aim to learn Kotlin and combine its power with the data science tools for developing smarter and more intelligent Android apps. When it comes to the cloud, Microsoft’s Azure platform has introduced many built-in machine learning capabilities, as well as a workbench for data scientists to develop effective, enterprise-grade models. Data professionals also prefer Docker containers to run their applications seamlessly, and hence its learning need seems to be quite high. [box type="shadow" align="" class="" width=""]Has machine learning with JavaScript caught your interest? Don’t worry, we got you covered - check out Hands-on Machine Learning with JavaScript for a practical, hands-on coverage of the essential machine learning concepts using the leading web development language. [/box] With Crypto’s popularity off the roof (sadly, we can’t say the same about Bitcoin’s price), data pros see Blockchain as a valuable skill. Building secure, decentralized apps is on the agenda for many, perhaps. Cloud, Big Data, Artificial Intelligence are some of the other domains that the data pros find interesting, and feel worth skilling up in. 
Work-related skills that data pros want to learn We also asked the data professionals what skills the data pros wanted to learn in the near future that could help them with their daily jobs more effectively. The following word cloud of their responses paints a pretty clear picture: Valuable skills data professionals want to learn for their everyday work As Machine learning and AI go mainstream, so do their applications in mainstream domains - often resulting in complex problems. Well, there’s deep learning and specifically neural networks to tackle these problems, and these are exactly the skills data pros want to master in order to excel at their work. [box type="shadow" align="" class="" width=""]Data pros want to learn Machine Learning in Python. Do you? Here’s a useful resource for you to get started - check out Python Machine Learning, Second Edition today![/box] So, there it is! What are the tools, languages or frameworks that you are planning to learn in the coming months? Do you agree with the results of the poll? Do let us know. What are web developers favorite front-end tools? Packt’s Skill Up report reveals all Data cleaning is the worst part of data analysis, say data scientists 15 Useful Python Libraries to make your Data Science tasks Easier

Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the World Wide Developers Conference (WWDC) in 2016, they also declared that Swift was the world’s first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocol; however, that would be a wrong assumption. POP is about so much more than just protocol; it is actually a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss a protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way. Requirements When we develop applications, we usually have a set of requirements that we need to develop against. With that in mind, let’s define the requirements for the animal types that we will be creating in this article: We will have three categories of animals: land, sea, and air. Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories. Animals may attack and/or move when they are on a tile that matches the categories they are in. Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, then they will be considered dead. POP Design We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design: Figure 1: Protocol-oriented design In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions. Protocol inheritance Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. We can also inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix/match them to create larger protocols. You will want to be careful not to create protocols that are too granular because they will become hard to maintain and manage. Protocol composition Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Let’s look at how protocol composition works. Protocol inheritance and composition are really powerful features but can also cause problems if used wrongly. Protocol composition and inheritance may not seem that powerful on their own; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let’s look at how powerful this paradigm is. Protocol-oriented design — putting it all together We will begin by writing the Animal superclass as a protocol: protocol Animal { var hitPoints: Int { get set } } In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal. We only need to add the hitPoints property to this protocol. Next, we need to add an Animal protocol extension, which will contain the functionality that is common for all types that conform to the protocol. 
Our Animal protocol extension would contain the following code: extension Animal { mutating func takeHit(amount: Int) { hitPoints -= amount } func hitPointsRemaining() -> Int { return hitPoints } func isAlive() -> Bool { return hitPoints > 0 ? true : false } } The Animal protocol extension contains the same takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let’s define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively: protocol LandAnimal: Animal { var landAttack: Bool { get } var landMovement: Bool { get } func doLandAttack() func doLandMovement() } protocol SeaAnimal: Animal { var seaAttack: Bool { get } var seaMovement: Bool { get } func doSeaAttack() func doSeaMovement() } protocol AirAnimal: Animal { var airAttack: Bool { get } var airMovement: Bool { get } func doAirAttack() func doAirMovement() } These three protocols only contain the functionality needed for their particular type of animal. Each of these protocols only contains four lines of code. This makes our protocol design much easier to read and manage. The protocol design is also much safer because the functionalities for the various animal types are isolated in their own protocols rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of the animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here. Now, let’s look at how we would create our Lion and Alligator types using protocol-oriented design: struct Lion: LandAnimal { var hitPoints = 20 let landAttack = true let landMovement = true func doLandAttack() { print(“Lion Attack”) } func doLandMovement() { print(“Lion Move”) } } struct Alligator: LandAnimal, SeaAnimal { var hitPoints = 35 let landAttack = true let landMovement = true let seaAttack = true let seaMovement = true func doLandAttack() { print(“Alligator Land Attack”) } func doLandMovement() { print(“Alligator Land Move”) } func doSeaAttack() { print(“Alligator Sea Attack”) } func doSeaMovement() { print(“Alligator Sea Move”) } } Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type that conforms to multiple protocols is called protocol composition and is what allows us to use smaller protocols, rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, then they would also inherit the function added by those extensions. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to. Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let’s look at how this works: var animals = [Animal]() animals.append(Alligator()) animals.append(Alligator()) animals.append(Lion()) for (index, animal) in animals.enumerated() { if let _ = animal as? AirAnimal { print(“Animal at \(index) is Air”) } if let _ = animal as? 
LandAnimal { print(“Animal at \(index) is Land”) } if let _ = animal as? SeaAnimal { print(“Animal at \(index) is Sea”) } } In this example, we create an array that will contain Animal types named animals. We then create two instances of the Alligator type and one instance of the Lion type that are added to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocol that the instance conforms to. Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman. About Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.
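The chapter notes that a full design would also add protocol extensions for each of the animal-type protocols. As a hedged illustration of what that might look like (the Tortoise type is invented here and is not part of the book's example), a LandAnimal extension could supply a default movement implementation:

```swift
extension LandAnimal {
    func doLandMovement() {
        print("Default land movement")
    }
}

// A conforming type may now rely on the default implementation...
struct Tortoise: LandAnimal {
    var hitPoints = 10
    let landAttack = false
    let landMovement = true
    func doLandAttack() { print("Tortoise has no land attack") }
    // doLandMovement() is inherited from the extension
}

// ...while a type such as Lion keeps its own doLandMovement(),
// which takes precedence over the default provided by the extension.
```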

Developer Workflow

Packt
02 Mar 2017
7 min read
In this article by Chaz Chumley and William Hurley, the author of the book Mastering Drupal 8, we will to decide on a local AMP stack and the role of a Composer. (For more resources related to this topic, see here.) Deciding on a local AMP stack Any developer workflow begins with having an AMP (Apache, MySQL, PHP) stack installed and configured on a Windows, OSX, or *nix based machine. Depending on the operating system, there are a lot of different methods that one can take to setup an ideal environment. However, when it comes down to choices there are really only three: Native AMP stack: This option refers to systems that generally either come preconfigured with Apache, MySQL, and PHP or have a generally easy install path to download and configure these three requirements. There are plenty of great tutorials on how to achieve this workflow but this does require familiarity with the operating system. Packaged AMP stacks: This option refers to third-party solutions such as MAMP—https://www.mamp.info/en/, WAMP—http://www.wampserver.com/en/, or Acquia Dev Desktop—https://dev.acquia.com/downloads. These solutions come with an installer that generally works on Windows and OSX and is a self-contained AMP stack allowing for general web server development.  Out of these three only Acquia Dev Desktop is Drupal specific. Virtual machine: This option is often the best solution as it closely represents the actual development, staging, and production web servers. However, this can also be the most complex to initially setup and requires some knowledge of how to configure specific parts of the AMP stack. That being said, there are a few really well documented VM’s available that can help reduce the experience needed. Two great virtual machines to look at are Drupal VM—https://www.drupalvm.com/ and Vagrant Drupal Development (VDD)—https://www.drupal.org/project/vdd. In the end, my recommendation is to choose an environment that is flexible enough to quickly install, setup, and configure Drupal instances.  The above choices are all good to start with, and by no means is any single solution a bad choice. If you are a single person developer, then a packaged AMP stack such as MAMP may be the perfect choice. However, if you are in a team environment I would strongly recommend one of the VM options above or look into creating your own VM environment that can be distributed to your team. We will discuss virtualized environments in more detail, but before we do, we need to have a basic understanding of how to work with three very important command line interfaces. Composer, Drush, and Drupal Console. The role of Composer Drupal 8 and each minor version introduces new features and functionality. Everything from moving the most commonly used 3rd party modules into its core to the introduction of an object oriented PHP framework. These improvements also introduced the Symfony framework which brings along the ability to use a dependency management tool called Composer. Composer (https://getcomposer.org/) is a dependency manager for PHP that allows us to perform a multitude of tasks. Everything from creating a Drupal project to declaring libraries and even installing contributed modules just to name a few.  The advantage to using Composer is that it allows us to quickly install and update dependencies by simply running a few commands. These configurations are then stored within a composer.json file that can be shared with other developers to quickly setup identical Drupal instances. 
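To give a feel for the commands involved once Composer is installed (installation is covered next), the day-to-day workflow looks roughly like this. This is a hedged sketch: the module name is illustrative, and pulling contributed Drupal modules this way additionally assumes the Drupal package repository is configured in composer.json:

```bash
# Add a dependency and record it in composer.json / composer.lock
composer require drupal/admin_toolbar

# Update dependencies to the newest versions allowed by composer.json
composer update

# On another developer's machine: install exactly what composer.json describes
composer install
```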
If you are new to Composer then let’s take a moment to discuss how to go about installing Composer for the first time within a local environment. Installing Composer locally Composer can be installed on Windows, Linux, Unix, and OSX. For this example, we will be following the install found at https://getcomposer.org/download/. Make sure to take a look at the Getting Started documentation that corresponds with your operating system. Begin by opening a new terminal window. By default, our terminal window should place us in the user directory. We can then continue by executing the following four commands: Download Composer installer to local directory: php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" Verify the installer php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" Since Composer versions are often updated it is important to refer back to these directions to ensure the hash file above is the most current one. Run the installer: php composer-setup.php Remove the installer: php -r "unlink('composer-setup.php');" Composer is now installed locally and we can verify this by executing the following command within a terminal window: php composer.phar Composer should now present us with a list of available commands: The challenge with having Composer installed locally is that it restricts us from using it outside the current user directory. In most cases, we will be creating projects outside of our user directory, so having the ability to globally use Composer quickly becomes a necessity. Installing Composer globally Moving the composer.phar file from its current location to a global directory can be achieved by executing the following command within a terminal window: mv composer.phar /usr/local/bin/composer We can now execute Composer commands globally by typing composer in the terminal window. Using Composer to create a Drupal project One of the most common uses for Composer is the ability to create a PHP project. The create-project command takes several arguments including the type of PHP project we want to build, the location of where we want to install the project, and optionally, the package version. Using this command, we no longer need to manually download Drupal and extract the contents into an install directory. We can speed up the entire process by using one simple command. Begin by opening a terminal window and navigating to a folder where we want to install Drupal. Next we can use Composer to execute the following command: composer create-project drupal/drupal mastering The create-project command tells Composer that we want to create a new Drupal project within a folder called mastering. Once the command is executed, Composer locates the current version of Drupal and installs the project along with any additional dependencies that it needs During the install process Composer will prompt us to remove any existing version history that the Drupal repository stores.  It is generally a good idea to choose yes to remove these files as we will be wanting to manage our own repository with our own version history. Once Composer has completed creating our Drupal project, we can navigate to the mastering folder and review the composer.json file the Composer creates to store project specific configurations. 
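What does that file look like? As a trimmed, illustrative sketch (the field values are examples rather than the exact output of create-project), a Drupal project's composer.json declares the project metadata and its dependencies:

```json
{
    "name": "example/mastering",
    "description": "A Drupal 8 project managed with Composer",
    "type": "project",
    "require": {
        "composer/installers": "^1.0",
        "drupal/core": "~8.0"
    },
    "minimum-stability": "dev",
    "prefer-stable": true
}
```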
As soon as the composer.json file is created our Drupal project can be referred to as a package. We can version the file, distribute it to a team, and they can run composer install to generate an identical Drupal 8 code base. With Composer installed globally we can take a look at another command line tool that will assist us with making Drupal development much easier. Summary In this article, you learned about how to decide on a local AMP stack and how to install a composer both locally and globally. Also we saw a bit about how to use Composer to create a Drupal project. Resources for Article: Further resources on this subject: Drupal 6 Performance Optimization Using Throttle and Devel Module [article] Product Cross-selling and Layout using Panels with Drupal and Ubercart 2.x [article] Setting up an online shopping cart with Drupal and Ubercart [article]

Why do React developers love Redux for state management?

Sugandha Lahoti
03 Jul 2018
3 min read
Redux is an implementation of Flux, a pattern for managing application state in React. Redux brings a clean and testable design to the table using a purely functional approach. It completes a missing piece of the React ecosystem and sits at the core of most complex React projects. This video tutorial talks about why Redux is needed and touches upon the Redux flow.

Why Redux?

If you have written a large-scale application before, you will know that managing application state can become a pain as the app grows. Application state includes server responses, cached data, and data that has not been persisted to the server yet. Furthermore, the User Interface (UI) state constantly increases in complexity. Let's take the example of an e-commerce website. Any website contains a lot of components, for instance, the product view, the menu section, and the filter panel. Whenever we have such a complex app, whether it is a mobile or a web app, it becomes difficult for components to communicate with each other and to know each other's updated state. For instance, when you interact with the price filter slider, the product view changes. This can work if a parent component calls the child component and they share properties; however, that only scales for simple apps. For complex apps, it becomes difficult to manage the state and update history between multiple components. Redux comes to the rescue here. In order to understand how Redux works, we will go through a flow chart.

Redux Flow

Action: Whenever a state change occurs in the components, it triggers an action creator. An action creator is a function that returns an action. Actions are plain JavaScript objects of information that send data from your application to your store. They are the only source of information for the store.

Reducers: Once the action creator returns this object, it is handled by reducers. Reducers specify how the application's state changes in response to actions sent to the store, depending on the action type.

Store: The store is the object that brings them together. It holds the application state, allows access to the state, and allows the state to be updated.

Provider: The provider distributes the data retrieved from the store to all the other components by encapsulating a main base component.

This all seems highly theoretical and may be a bit difficult to digest at first, but once you apply it in practice, you will get used to the terminology and to how Redux flows. Don't forget to watch the video tutorial from Learning React Native Development by Mifta Sintaha to know more about Redux. For a comprehensive guide to building React Native mobile apps, buy the full video course from the Packt store. Introduction to Redux Creating Reusable Generic Modals in React and Redux Minko Gechev: "Developers should learn all major front-end frameworks to go to the next level"
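To make the flow described above concrete, here is a hedged, minimal sketch in plain Redux with no React bindings. The SET_PRICE_FILTER action and the state shape are invented for illustration, echoing the price-filter example, and are not from the original article:

```javascript
import { createStore } from 'redux';

// Action creator: returns a plain object describing what happened
const setPriceFilter = (maxPrice) => ({ type: 'SET_PRICE_FILTER', maxPrice });

// Reducer: computes the next state from the current state and the action type
const initialState = { maxPrice: Infinity };
const filterReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'SET_PRICE_FILTER':
      return { ...state, maxPrice: action.maxPrice };
    default:
      return state;
  }
};

// Store: holds the state and lets components subscribe to updates
const store = createStore(filterReducer);
store.subscribe(() => console.log('New state:', store.getState()));
store.dispatch(setPriceFilter(100)); // e.g. the product view re-renders from this state
```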

Implementing Dependency Injection in Google Guice [Tutorial]

Natasha Mathur
09 Sep 2018
10 min read
Choosing a framework wisely is important when implementing Dependency Injection as each framework has its own advantages and disadvantages. There are various Java-based dependency injection frameworks available in the open source community, such as Dagger, Google Guice, Spring DI, JAVA EE 8 DI, and PicoContainer. In this article we will learn about Google Guice (pronounced juice), a lightweight DI framework that helps developers to modularize applications. Guice encapsulates annotation and generics features introduced by Java 5 to make code type-safe. It enables objects to wire together and tests with fewer efforts. Annotations help you to write error-prone and reusable code. This tutorial is an excerpt taken from the book  'Java 9 Dependency Injection', written by Krunal Patel, Nilang Patel. In Guice, the new keyword is replaced with @inject for injecting dependency. It allows constructors, fields, and methods (any method with multiple numbers of arguments) level injections. Using Guice, we can define custom scopes and circular dependency. It also has features to integrate with Spring and AOP interception. Moreover, Guice also implements Java Specification Request (JSR) 330, and uses the standard annotation provided by JSR-330. The first version of Guice was introduced by Google in 2007 and the latest version is Guice 4.1. Before we see how dependency injection gets implemented in Guice, let's first setup Guice. Guice setup To make our coding simple, throughout this tutorial, we are going to use a Maven project to understand Guice DI.  Let’s create a simple Maven project using the following parameters: groupid:, com.packt.guice.id, artifactId : chapter4, and version : 0.0.1-SNAPSHOT. By adding Guice 4.1.0 dependency on the pom.xml file, our final pom.xml will look like this: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.packt.guice.di</groupId> <artifactId>chapter4</artifactId> <packaging>jar</packaging> <version>0.0.1-SNAPSHOT</version> <name>chapter4</name> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> <dependency> <groupId>com.google.inject</groupId> <artifactId>guice</artifactId> <version>4.1.0</version> </dependency> </dependencies> <build> <finalName>chapter2</finalName> </build> </project> For this tutorial, we have used JDK 9, but not as a module project because the Guice library is not available as a Java 9 modular jar. Basic injection in Guice We have set up Guice, now it is time to understand how injection works in Guice. Let's rewrite the example of a notification system using Guice, and along with that, we will see several indispensable interfaces and classes in Guice.  We have a base interface called NotificationService, which is expecting a message and recipient details as arguments: public interface NotificationService { boolean sendNotification(String message, String recipient); } The SMSService concrete class is an implementation of the NotificationService interface. Here, we will apply the @Singleton annotation to the implementation class. When you consider that service objects will be made through injector classes, this annotation is furnished to allow them to understand that the service class ought to be a singleton object. 
Because of JSR-330 support in Guice, annotation, either from javax.inject or the com.google.inject package, can be used: import javax.inject.Singleton; import com.packt.guice.di.service.NotificationService; @Singleton public class SMSService implements NotificationService { public boolean sendNotification(String message, String recipient) { // Write code for sending SMS System.out.println("SMS has been sent to " + recipient); return true; } } In the same way, we can also implement another service, such as sending notifications to a social media platform, by implementing the NotificationService interface. It's time to define the consumer class, where we can initialize the service class for the application. In Guice, the @Inject annotation will be used to define setter-based as well as constructor-based dependency injection. An instance of this class is utilized to send notifications by means of the accessible correspondence services. Our AppConsumer class defines setter-based injection as follows: import javax.inject.Inject; import com.packt.guice.di.service.NotificationService; public class AppConsumer { private NotificationService notificationService; //Setter based DI @Inject public void setService(NotificationService service) { this.notificationService = service; } public boolean sendNotification(String message, String recipient){ //Business logic return notificationService.sendNotification(message, recipient); } } Guice needs to recognize which service implementation to apply, so we should configure it with the aid of extending the AbstractModule class, and offer an implementation for the configure() method. Here is an example of an injector configuration: import com.google.inject.AbstractModule; import com.packt.guice.di.impl.SMSService; import com.packt.guice.di.service.NotificationService; public class ApplicationModule extends AbstractModule{ @Override protected void configure() { //bind service to implementation class bind(NotificationService.class).to(SMSService.class); } } In the previous class, the module implementation determines that an instance of SMSService is to be injected into any place a NotificationService variable is determined. In the same way, we just need to define a binding for the new service implementation, if required. Binding in Guice is similar to wiring in Spring: import com.google.inject.Guice; import com.google.inject.Injector; import com.packt.guice.di.consumer.AppConsumer; import com.packt.guice.di.injector.ApplicationModule; public class NotificationClient { public static void main(String[] args) { Injector injector = Guice.createInjector(new ApplicationModule()); AppConsumer app = injector.getInstance(AppConsumer.class); app.sendNotification("Hello", "9999999999"); } } In the previous program, the  Injector object is created using the Guice class's createInjector() method, by passing the ApplicationModule class's implementation object. By using the injector's getInstance() method, we can initialize the AppConsumer class. At the same time as creating the AppConsumer's objects, Guice injects the needy service class implementation (SMSService, in our case). The following is the yield of running the previous code: SMS has been sent to Recipient :: 9999999999 with Message :: Hello So, this is how Guice dependency injection works compared to other DI. Guice has embraced a code-first technique for dependency injection, and management of numerous XML is not required. Let's test our client application by writing a JUnit test case. 
We can simply mock the service implementation of SMSService, so there is no need to implement the actual service. The MockSMSService class looks like this: import com.packt.guice.di.service.NotificationService; public class MockSMSService implements NotificationService { public boolean sendNotification(String message, String recipient) { System.out.println("In Test Service :: " + message + "Recipient :: " + recipient); return true; } } The following is the JUnit 4 test case for the client application: import org.junit.After; import org.junit.Assert; import org.junit.Before; import org.junit.Test; import com.google.inject.AbstractModule; import com.google.inject.Guice; import com.google.inject.Injector; import com.packt.guice.di.consumer.AppConsumer; import com.packt.guice.di.impl.MockSMSService; import com.packt.guice.di.service.NotificationService; public class NotificationClientTest { private Injector injector; @Before public void setUp() throws Exception { injector = Guice.createInjector(new AbstractModule() { @Override protected void configure() { bind(NotificationService.class).to(MockSMSService.class); } }); } @After public void tearDown() throws Exception { injector = null; } @Test public void test() { AppConsumer appTest = injector.getInstance(AppConsumer.class); Assert.assertEquals(true, appTest.sendNotification("Hello There", "9898989898"));; } } Take note that we are binding the MockSMSService class to NotificationService by having an anonymous class implementation of AbstractModule. This is done in the setUp() method, which runs for some time before the test methods run. Guice dependency injection As we know what dependency injection is, let us explore how Google Guice provides injection. We have seen that the injector helps to resolve dependencies by reading configurations from modules, which are called bindings. Injector is preparing charts for the requested objects. Dependency injection is managed by injectors using various types of injection: Constructor injection Method injection Field injection Optional injection Static injection Constructor Injection Constructor injection can be achieved  by using the @Inject annotation at the constructor level. This constructor ought to acknowledge class dependencies as arguments. Multiple constructors will, at that point, assign the arguments to their final fields: public class AppConsumer { private NotificationService notificationService; //Constructor level Injection @Inject public AppConsumer(NotificationService service){ this.notificationService=service; } public boolean sendNotification(String message, String recipient){ //Business logic return notificationService.sendNotification(message, recipient); } } If our class does not have a constructor with @Inject, then it will be considered a default constructor with no arguments. When we have a single constructor and the class accepts its dependency, at that time the constructor injection works perfectly and is helpful for unit testing. It is also easy because Java is maintaining the constructor invocation, so you don't have to stress about objects arriving in an uninitialized state. Method injection Guice allows us to define injection at the method level by annotating methods with the @Inject annotation. This is similar to the setter injection available in Spring. In this approach, dependencies are passed as parameters, and are resolved by the injector before invocation of the method. 
The name of the method and the number of parameters does not affect the method injection: private NotificationService notificationService; //Setter Injection @Inject public void setService(NotificationService service) { this.notificationService = service; } This could be valuable when we don't want to control instantiation of classes. We can, moreover, utilize it in case you have a super class that needs a few dependencies. (This is difficult to achieve in a constructor injection.) Field injection Fields can be injected by the @Inject annotation in Guice. This is a simple and short injection, but makes the field untestable if used with the private access modifier. It is advisable to avoid the following: @Inject private NotificationService notificationService; Optional injection Guice provides a way to declare an injection as optional. The method and field might be optional, which causes Guice to quietly overlook them when the dependencies aren't accessible. Optional injection can be used by mentioning the @Inject(optional=true) annotation: public class AppConsumer { private static final String DEFAULT_MSG = "Hello"; private string message = DEFAULT_MSG; @Inject(optional=true) public void setDefaultMessage(@Named("SMS") String message) { this.message = message; } } Static injection Static injection is helpful when we have to migrate a static factory implementation into Guice. It makes it feasible for objects to mostly take part in dependency injection by picking up access to injected types without being injected themselves. In a module, to indicate classes to be injected on injector creation, use requestStaticInjection(). For example,  NotificationUtil is a utility class that provides a static method, timeZoneFormat, to a string in a given format, and returns the date and timezone. The TimeZoneFormat string is hardcoded in NotificationUtil, and we will attempt to inject this utility class statically. Consider that we have one private static string variable, timeZonFmt, with setter and getter methods. We will use @Inject for the setter injection, using the @Named parameter. NotificationUtil will look like this: @Inject static String timezonFmt = "yyyy-MM-dd'T'HH:mm:ss"; @Inject public static void setTimeZoneFmt(@Named("timeZoneFmt")String timeZoneFmt){ NotificationUtil.timeZoneFormat = timeZoneFmt; } Now, SMSUtilModule should look like this: class SMSUtilModule extends AbstractModule{ @Override protected void configure() { bindConstant().annotatedWith(Names.named(timeZoneFmt)).to(yyyy-MM-dd'T'HH:mm:ss); requestStaticInjection(NotificationUtil.class); } } This API is not suggested for common utilization since it faces many of the same issues as static factories. It is also difficult to test and it makes dependencies uncertain. To sum up, what we learned in this tutorial, we began with basic dependency injection then we learned how basic Dependency Injection works in Guice, with examples. If you found this post useful, be sure to check out the book  'Java 9 Dependency Injection' to learn more about Google Guice and other concepts in dependency injection. Learning Dependency Injection (DI) Angular 2 Dependency Injection: A powerful design pattern
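As printed, the static injection snippet mixes the names timezonFmt and timeZoneFormat and omits the string quotes inside the module. A cleaned-up sketch of the same idea (imports from com.google.inject and com.google.inject.name are assumed) might read:

```java
public class NotificationUtil {
    private static String timeZoneFormat = "yyyy-MM-dd'T'HH:mm:ss"; // default value

    @Inject
    public static void setTimeZoneFormat(@Named("timeZoneFmt") String timeZoneFmt) {
        NotificationUtil.timeZoneFormat = timeZoneFmt;
    }

    public static String getTimeZoneFormat() {
        return timeZoneFormat;
    }
}

class SMSUtilModule extends AbstractModule {
    @Override
    protected void configure() {
        // Bind the named constant and ask Guice to inject NotificationUtil's static members
        bindConstant().annotatedWith(Names.named("timeZoneFmt")).to("yyyy-MM-dd'T'HH:mm:ss");
        requestStaticInjection(NotificationUtil.class);
    }
}
```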

Visualizing BigQuery Data with Tableau

Sugandha Lahoti
04 Jun 2018
8 min read
Tableau is an interactive data visualization tool that can be used to create business intelligence dashboards. Much like most business intelligence tools, it can be used to pull and manipulate data from a number of sources. The difference is its dedication to help users create insightful data visualizations. Tableau's drag-and-drop interface makes it easy for users to explore data via elegant charts. It also includes an in-memory engine in order to speed up calculations on extremely large data sets. In today’s tutorial, we will be using Tableau Desktop for visualizing BigQuery Data. [box type="note" align="" class="" width=""]This article is an excerpt from the book, Learning Google BigQuery, written by Thirukkumaran Haridass and Eric Brown. This book is a comprehensive guide to mastering Google BigQuery to get intelligent insights from your Big Data.[/box] The following section explains how to use Tableau Desktop Edition to connect to BigQuery and get the data from BigQuery to create visuals: After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery: At this point, all the tables in your dataset should be displayed on the left: You can drag and drop the table you are interested in using to the middle section labeled Drop Tables Here. In this case, we want to query the Google Analytics BigQuery test data, so we will click where it says New Custom SQL and enter the following query in the dialog: SELECT trafficsource.medium as Medium, COUNT(visitId) as Visits FROM `google.com:analytics- bigquery.LondonCycleHelmet.ga_sessions_20130910` GROUP BY Medium Now we can click on Update Now to view the first 10,000 rows of our data. We can also do some simple transformations on our columns, such as changing string values to dates and many others. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Tableau's interface allows users to simply drag and drop dimensions and metrics from the left side of the report into the central part to create simple text charts, with a feel much like Excel's pivot chart functionality. This makes Tableau easy to transition to for Excel users. From the Dimensions section on the left-hand-side navigation, drag and drop the Medium dimension into the sheet section. Then drag the Visits metric in the Metric section on the left-hand-side navigation to the Text sub-section in the Marks section. This will create a simple text chart with data from the original query: On the right, click on the button marked Show Me. This should bring up a screen with icons for each graph type that can be created in Tableau: Tableau helps by shading graph types that are not available based on the data that is currently selected in the report. It will also make suggestions based on the data available. In this case, a bar chart has been preselected for us as our data is a text dimension and a numeric metric. Click on the bar chart. Once clicked, the default sideways bar chart will appear with the data we have selected. Click on the Swap Rows and Columns in the icon bar at the top of the screen to flip the chart from horizontal to vertical: Map charts in Tableau One of Tableau's strengths is its ease of use when creating a number of different types of charts. This is true when creating maps, especially because maps can be very painful to create using other tools. Here is the way to create a simple map in Tableau using BigQuery public data. 
The first few steps are the same as in the preceding example: After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left-hand side. Click where it says New Custom SQL and enter the following query in the dialog: SELECT zipcode, SUM(population) AS population FROM `bigquery-public- data.census_bureau_usa.population_by_zip_2010` GROUP BY zipcode ORDER BY population desc This data is from the United States Census from 2010. The query returns all zip codes in USA, sorted by most populous to least populous. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Double-click on the zipcode dimension on the dimensions section on the left navigation. Clicking on a dimension of zip codes (or any other formatted location dimension such as latitude/longitude, country names, state names, and so on) will automatically create a map in Tableau: Drag the population metric from the metrics section on the left navigation and drop it on the color tab in the marks section: The map will now show the most populous zip codes shaded darker than the less populous zip codes. The map chart also includes zoom features in order to make dealing with large maps easy. In the top-left corner of the map, there is a magnifying glass icon. This icons has the map zoom features. Clicking on the arrow at the bottom of this icon opens more features. The icon with a rectangle and a magnifying glass is the selection tool (The first icon to the right of the arrow when hovering over arrow): Click on this icon and then on the map to select a section of the map to be zoomed into: This image is shown after zooming into the California area of the United States. The map now shows the areas of the state that are the most populous. Create a word cloud in Tableau Word clouds are great visualizations for finding words that are most referenced in books, publications, and social media. This section will cover creating a word cloud in Tableau using BigQuery public data. The first few steps are the same as in the preceding example: After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left. Click where it says New Custom SQL and enter the following query in the dialog: SELECT word, SUM(word_count) word_count FROM `bigquery-public-data.samples.shakespeare` GROUP BY word ORDER BY word_count desc The dataset is from the works of William Shakespeare. The query returns a list of all words in his works, along with a count of the times each word appears in one of his works. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. In the dimensions section, drag and drop the word dimension into the text tab in the marks section. In the dimensions section, drag and drop the word_count measure to the size tab in the marks section. There will be two tabs used in the marks section. Right-click on the size tab labeled word and select Measure | Count: This will create what is called a tree map. In this example, there are far too many words in the list to utilize the visualization. Drag and drop the word_count measure from the measures section to the filters section. When prompted with How do you want to filter on word_count, select Sum and click on next.. 
Select At Least for your condition and type 2000 in the dialog. Click on OK. This will return only those words that have a word count of at least 2,000.. Use the dropdown in the marks card to select Text: 11. Drag and drop the word_count measure from the measures section to the color tab in the marks section. This will color each word based on the count for that word: You should be left with a color-coded word cloud. Other charts can now be created as individual worksheet tabs. Tabs can then be combined to make what Tableau calls a dashboard. The process of creating a dashboard here is a bit more cumbersome than creating a dashboard in Google Data Studio, but Tableau offers a great deal of more customization for its dashboards. This, coupled with all the other features it offers, makes Tableau a much more attractive option, especially for enterprise users. We learnt various features of Tableau and how to use it for visualizing BigQuery data.To know about other third party tools for reporting and visualization purposes such as R and Google Data Studio, check out this book Learning Google BigQuery. Tableau is the most powerful and secure end-to-end analytics platform - Interview Insights Tableau 2018.1 brings new features to help organizations easily scale analytics Getting started with Data Visualization in Tableau      
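The at-least-2,000 filter applied in Tableau can also be pushed down into BigQuery itself, so Tableau only ever receives the words it will display. A hedged variant of the custom SQL used above:

```sql
SELECT
  word,
  SUM(word_count) AS word_count
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
HAVING SUM(word_count) >= 2000
ORDER BY word_count DESC
```

Filtering in the query reduces the data transferred to Tableau, at the cost of having to edit the SQL rather than the Tableau filter shelf when the threshold changes.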

Introduction to Raspberry Pi Zero W Wireless

Packt
03 Mar 2018
14 min read
In this article by Vasilis Tzivaras, the author of the book Raspberry Pi Zero W Wireless Projects, we will cover the following topics:

An overview of the Raspberry Pi family
An introduction to the new Raspberry Pi Zero W
Distributions
Common issues

Raspberry Pi Zero W is the newest product in the Raspberry Pi Zero family. In early 2017, the Raspberry Pi community announced a new board with a wireless extension. It offers wireless functionality, so everyone can now develop their own projects without cables and other components. Comparing the new board with the Raspberry Pi 3 Model B, we can easily see that it is considerably smaller, with many possibilities for the Internet of Things. But what is a Raspberry Pi Zero W, and why do you need it? Let's go through the rest of the family and then introduce the new board.

Raspberry Pi family

As mentioned earlier, the Raspberry Pi Zero W is the newest member of the Raspberry Pi family of boards. Over the years, Raspberry Pi boards have kept evolving and have become more user friendly, with endless possibilities. Let's have a short look at the rest of the family so we can understand what makes the Pi Zero board different. Right now, the flagship board is the Raspberry Pi 3 Model B. It is the best solution for demanding projects such as face recognition, video tracking, or gaming:

RASPBERRY PI 3 MODEL B

It is the third generation of Raspberry Pi boards, following the Raspberry Pi 2, and has the following specs:

A 1.2GHz 64-bit quad-core ARMv8 CPU
802.11n wireless LAN
Bluetooth 4.1
Bluetooth Low Energy (BLE)
1GB RAM (like the Pi 2)
4 USB ports
40 GPIO pins
Full HDMI port
Ethernet port
Combined 3.5mm audio jack and composite video
Camera interface (CSI)
Display interface (DSI)
Micro SD card slot (now push-pull rather than push-push)
VideoCore IV 3D graphics core

The next board is the Raspberry Pi Zero, on which the Zero W is based. It is a small, low-cost, low-power board able to do many things:

Raspberry Pi Zero

The specs of this board are as follows:

1GHz, single-core CPU
512MB RAM
Mini-HDMI port
Micro-USB OTG port
Micro-USB power
HAT-compatible 40-pin header
Composite video and reset headers
CSI camera connector (v1.3 only)

At this point we should not forget to mention that, apart from the boards listed above, there are several other modules and components available, such as the Sense HAT or the Raspberry Pi Touch Display, which work great for advanced projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems, and embedded projects:

RASPBERRY PI Touch Display

The Sense HAT is an add-on board for Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:

Gyroscope
Accelerometer
Magnetometer
Temperature
Barometric pressure
Humidity

Sense HAT

Stay tuned for more new boards and modules at the official website: https://www.raspberrypi.org/

Raspberry Pi Zero W

Raspberry Pi Zero W is a small device that can be connected to an external monitor or TV and, of course, to the internet.
The operating system varies, as there are many distros on the official page, and almost every one of them is based on Linux.

Raspberry Pi Zero W

With the Raspberry Pi Zero W you can do almost everything, from automation to gaming! It is a small computer that lets you program easily with the help of the GPIO pins and other components such as a camera. Its possibilities are endless!

Specifications

If you have bought a Raspberry Pi 3 Model B, you will be familiar with the Cypress CYW43438 wireless chip, which provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with the same wireless chip. The specifications of the new board are as follows:

Dimensions: 65mm × 30mm × 5mm
SoC: Broadcom BCM2835 chip
CPU: ARM11 at 1GHz, single core
RAM: 512MB
Storage: MicroSD card
Video and audio: 1080p HD video and stereo audio via mini-HDMI connector
Power: 5V, supplied via micro USB connector
Wireless: 2.4GHz 802.11n wireless LAN
Bluetooth: Bluetooth Classic 4.1 and Bluetooth Low Energy (BLE)
Output: Micro USB
GPIO: 40-pin GPIO, unpopulated

Raspberry Pi Zero W

Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep the board safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be as visible as on other similar boards, but it works great and offers quite a lot of functionality:

Raspberry Pi Zero W Capacitors

Also, the product is limited to one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some extra components for about $45, or choose the camera kit, which also contains a small camera module, for $55.

Camera support

Image-processing projects such as video tracking or face recognition require a camera. The official camera support for the Raspberry Pi Zero W is shown below. The camera can easily be mounted at the side of the board using a cable, just like on the Raspberry Pi 3 Model B board:

The official camera support of Raspberry Pi Zero W

Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be covered in the project.

Accessories

When building projects with the new board, there are some other gadgets that you might find useful. Following is a list of some crucial components. Notice that the Raspberry Pi Zero W kit already includes some of them, so be careful not to buy them twice:

OTG cable
Power HUB
GPIO header
microSD card and card adapter
HDMI to mini-HDMI cable
HDMI to VGA cable

Distributions

The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze are RASPBIAN and NOOBS. Both RASPBIAN and NOOBS let you choose between two versions: the full version of the operating system and the lite one. Obviously, the lite version does not contain everything you might use, so if you intend to use your Raspberry Pi with a desktop environment, download the full version. On the other hand, if you only intend to SSH in and do some basic work, pick the lite one. It's really up to you, and of course you can always download anything you like again later and re-write your microSD card.
NOOBS distribution

Download NOOBS from https://www.raspberrypi.org/downloads/noobs/. The NOOBS distribution is aimed at new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it really is "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut, and of course you can download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links:

Full guide at https://www.raspberrypi.org/learning/software-guide/
Video at https://www.raspberrypi.org/help/videos/#noobs-setup

The NOOBS operating system contains Raspbian and also provides a variety of other operating systems to download.

RASPBIAN distribution

Download RASPBIAN from https://www.raspberrypi.org/downloads/raspbian/. Raspbian is the officially supported operating system. It can be installed through NOOBS or by downloading the image file at the following link and following the guide on the official website. Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md. It comes with plenty of pre-installed software such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, other distributions such as Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant for more specific projects, such as Internet of Things (IoT) or weather stations. To conclude, the right distribution to install depends on your project and your expertise in Linux systems administration.

The Raspberry Pi Zero W needs a microSD card to host its operating system. You can write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like; all you need to do is write your chosen operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use a different program to extract it:

7-Zip for Windows
The Unarchiver for Mac
Unzip for Linux

Now we are ready to write the image to the microSD card by following one of the next guides, according to your system.

For Linux users, the dd tool is recommended. Before connecting your microSD card and its adapter to your computer, run the following command:

df -h

Now connect your card and run the same command again. You should see some new entries. For example, if the new device is called /dev/sdd1, keep in mind that the card itself is at /dev/sdd (without the 1). The next step is to use the dd command to copy the image to the microSD card:

dd if=<image file> of=<microSD device>

Here, if is the input file (the image file of the distribution) and of is the output file (the microSD card device). Again, be careful here and use only /dev/sdd, or whatever device is yours, without any trailing partition number. If you have trouble with this, please use the full manual at https://www.raspberrypi.org/documentation/installation/installing-images/linux.md.

A good tool that can help you with this job is GParted. If it is not installed on your system, you can easily install it with the following command:

sudo apt-get install gparted

Then run sudo gparted to start the tool.
GParted handles partitions very easily, and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md

For Mac OS users, the dd tool is also recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md

There are several other ways to write an image file to a microSD card, so if you run into any problems when following the guides above, feel free to use any other guide available on the Internet. Now, assuming that everything is OK and the image is ready, you can gently plug the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code. On Linux systems you can run sha1sum followed by the file name (the image) to print the SHA-1 code, which must be the same as the one listed at the end of the official page where you downloaded the image.

Common issues

Sometimes, working with Raspberry Pi boards can lead to issues. We have all faced some of them and hope to never face them again. The Pi Zero is so minimal that it can be tough to tell whether it is working or not. Since there is no LED on the board, a quick check of whether it is working properly or something went wrong comes in handy.

Debugging steps

With the following steps you will probably be able to find its status:

Take your board, with nothing in any slot or socket. Remove even the microSD card!
Take a normal micro-USB to USB-A data sync cable and connect one side to your computer and the other side to the Pi's USB port (not the PWR_IN).
If the Zero is alive: on Windows, the PC will go ding for the presence of new hardware and you should see BCM2708 Boot in Device Manager. On Linux, you will see an ID 0a5c:2763 Broadcom Corp message from dmesg. Try running dmesg in a terminal before you plug in the USB cable and after; you will find a new record there. Example output:

[226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd
[226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763
[226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[226314.213284] usb 4-2: Product: BCM2708 Boot
[226314.213] usb 4-2: Manufacturer: Broadcom

If you see any of the preceding, so far so good: you know the Zero is not dead.

microSD card issue

Remember that if you boot your Raspberry Pi and nothing works, you may have burned your microSD card wrong. This means that your card may not contain a boot partition as it should, so it is not able to load the first boot files. That problem occurs when the distribution is burned to /dev/sdd1 instead of /dev/sdd, as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work!

Case protection

Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. Doing so can affect the booting operation of the Raspberry Pi, and it will probably not work. So, a tip of advice: spend some extra money on a Raspberry Pi case and protect your board from anything like that. Many problems and issues also arise from hanging your Raspberry Pi using tacks; it may sound silly, but many people do that.
Summary

The Raspberry Pi Zero W is a promising new board that allows everyone to connect their devices to the Internet and use their skills to develop projects combining software and hardware. This board is the new toy of any engineer interested in the Internet of Things, security, automation, and more! We have gone through an introduction to the new Raspberry Pi Zero W board and the rest of its family, along with a brief look at some extra components you may want to buy as well.

Resources for Article:

Further resources on this subject:

Raspberry Pi Zero W Wireless Projects
Full Stack Web Development with Raspberry Pi 3

Understanding Microservices

Packt
22 Jun 2017
19 min read
This article by Tarek Ziadé, author of the book Python Microservices Development, explains the benefits and implementation of microservices with Python. While the microservices architecture looks more complicated than its monolithic counterpart, its advantages are multiple. It offers the following benefits.

Separation of concerns

First of all, each microservice can be developed independently by a separate team. For instance, building a reservation service can be a full project on its own. The team in charge can build it in whatever programming language and with whatever database they like, as long as it exposes a well-documented HTTP API. That also means the evolution of the app is more under control than with monoliths. For example, if the payment system changes its underlying interactions with the bank, the impact is localized inside that service and the rest of the application stays stable and under control. This loose coupling improves the overall project velocity a lot, as we're applying at the service level a philosophy similar to the single responsibility principle. The single responsibility principle was defined by Robert Martin to explain that a class should have only one reason to change - in other words, each class should provide a single, well-defined feature. Applied to microservices, it means that we want to make sure that each microservice focuses on a single role.

Smaller projects

The second benefit is breaking down the complexity of the project. When you are adding a feature to an application, like PDF reporting, even if you are doing it cleanly, you are making the code base bigger, more complicated, and sometimes slower. Building that feature in a separate application avoids this problem and makes it easier to write it with whatever tools you want. You can refactor it often, shorten your release cycles, and stay on top of things. The growth of the application remains under your control. Dealing with a smaller project also reduces risks when improving the application: if a team wants to try out the latest programming language or framework, they can iterate quickly on a prototype that implements the same microservice API, try it out, and decide whether or not to stick with it. One real-life example is the Firefox Sync storage microservice. There are currently experiments to switch from the current Python+MySQL implementation to a Go-based one that stores user data in standalone SQLite databases. That prototype is highly experimental, but since the storage feature is isolated in a microservice with a well-defined HTTP API, it's easy enough to try it out with a small subset of the user base.

Scaling and deployment

Last, having your application split into components makes it easier to scale depending on your constraints. Let's say you are starting to get a lot of customers who book hotels daily, and the PDF generation is starting to heat up the CPUs. You can deploy that specific microservice on servers with bigger CPUs. Another typical example is RAM-consuming microservices, like the ones interacting with in-memory databases such as Redis or Memcache. You could tweak your deployments accordingly by deploying them on servers with less CPU and a lot more RAM.

To summarize the benefits of microservices:

A team can develop each microservice independently, and use whatever technological stack makes sense. They can define a custom release cycle. The tip of the iceberg is the service's language-agnostic HTTP API.
Developers break the application complexity into logical components. Each microservice focuses on doing one thing well.
Since microservices are standalone applications, there's finer control over deployments, which makes scaling easier.

Microservices architectures are good at solving a lot of the problems that may arise once your application starts to grow. However, we need to be aware of some of the new issues they also bring in practice.

Implementing microservices with Python

Python is an amazingly versatile language. As you probably already know, it's used to build many different kinds of applications, from simple system scripts that perform tasks on a server, to large object-oriented applications that run services for millions of users. According to a study conducted by Philip Guo in 2014, published on the Association for Computing Machinery (ACM) website, Python has surpassed Java in top U.S. universities and is the most popular language for learning computer science. This trend is also true in the software industry. Python now sits in the top 5 languages in the TIOBE index (http://www.tiobe.com/tiobe-index/), and it's probably even bigger in web development, since languages like C are rarely used as main languages to build web applications. However, some developers criticize Python for being slow and unfit for building efficient web services. Python is slow, and this is undeniable. But it's still a language of choice for building microservices, and many major companies are happily using it. This section will give you some background on the different ways you can write microservices using Python, some insights on asynchronous versus synchronous programming, and conclude with some details on Python performance. It is composed of five parts:

The WSGI standard
Greenlet & Gevent
Twisted & Tornado
asyncio
Language performances

The WSGI standard

What strikes web developers starting with Python the most is how easy it is to get a web application up and running. The Python web community has created a standard, inspired by the Common Gateway Interface (CGI), called the Web Server Gateway Interface (WSGI). It simplifies a lot how you can write a Python application whose goal is to serve HTTP requests. When your code uses that standard, your project can be executed by standard web servers like Apache or nginx, using WSGI extensions like uwsgi or mod_wsgi. Your application just has to deal with incoming requests and send back JSON responses, and Python includes all that goodness in its standard library. You can create a fully functional microservice that returns the server's local time with a vanilla Python module of fewer than ten lines:

import json
import time

def application(environ, start_response):
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    return [bytes(json.dumps({'time': time.time()}), 'utf8')]

Since its introduction, the WSGI protocol has become an essential standard, and the Python web community has widely adopted it. Developers wrote middlewares, which are functions you can hook before or after the WSGI application function itself, to do something within the environment; a minimal middleware sketch follows.
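The following sketch is not from the book; it repeats the ten-line application above so the block is self-contained, and it only illustrates the middleware idea. The wrapper adds an extra response header (the 'X-Served-By' name is purely hypothetical) around any WSGI application, and the result is served with wsgiref, the reference server from the standard library (fine for local experiments, not for production):

import json
import time
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # same service as above: return the server's local time as JSON
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    return [bytes(json.dumps({'time': time.time()}), 'utf8')]

def add_server_header(app):
    # a WSGI middleware is just a callable that wraps another WSGI application
    def wrapped(environ, start_response):
        def custom_start_response(status, headers, exc_info=None):
            # append a header before handing off to the real start_response
            headers = list(headers) + [('X-Served-By', 'wsgi-demo')]  # hypothetical header value
            return start_response(status, headers, exc_info)
        return app(environ, custom_start_response)
    return wrapped

if __name__ == '__main__':
    # wrap the application and serve it on port 8000 for a quick local test
    make_server('', 8000, add_server_header(application)).serve_forever()

Middlewares like this compose: you can chain several wrappers around the same application before handing it to the server.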
Some web frameworks were created specifically around that standard, like Bottle (http://bottlepy.org), and soon enough, every framework out there could be used through WSGI in one way or another. The biggest problem with WSGI, though, is its synchronous nature. The application function is called exactly once per incoming request, and when the function returns, it has to send back the response. That means that every call to the function blocks until the response is ready. And writing microservices means your code will be waiting for responses from various network resources all the time. In other words, your application will sit idle and just block the client until everything is ready.

That's an entirely acceptable behavior for HTTP APIs - we're not talking about building bidirectional applications like WebSocket-based ones. But what happens when several incoming requests call your application at the same time? WSGI servers will let you run a pool of threads to serve several requests concurrently. But you can't run thousands of them, and as soon as the pool is exhausted, the next request will block even if your microservice is doing nothing but idling and waiting for backend services' responses. That's one of the reasons why non-WSGI frameworks like Twisted and Tornado - and, in JavaScript land, Node.js - became very successful: they are fully async. When you're coding a Twisted application, you can use callbacks to pause and resume the work done to build a response. That means you can accept new requests and start to treat them. That model dramatically reduces the idling time in your process; it can serve thousands of concurrent requests. Of course, that does not mean the application will return each single response faster. It just means one process can accept more concurrent requests and juggle between them as the data is getting ready to be sent back.

There's no simple way with the WSGI standard to introduce something similar, and the community has debated for years to come up with a consensus - and failed. The odds are that the community will eventually drop the WSGI standard for something else. In the meantime, building microservices with synchronous frameworks is still possible and completely fine, as long as your deployments take into account the one request == one thread limitation of the WSGI standard. There is, however, one trick to boost synchronous web applications: greenlets.

Greenlet & Gevent

The general principle of asynchronous programming is that the process deals with several concurrent execution contexts to simulate parallelism. Asynchronous applications use an event loop that pauses and resumes execution contexts when an event is triggered - only one context is active at a time, and they take turns. Explicit instructions in the code tell the event loop where it can pause the execution. When that occurs, the process will look for some other pending work to resume. Eventually, the process will come back to your function and continue where it stopped - moving from one execution context to another is called switching. The Greenlet project (https://github.com/python-greenlet/greenlet) is a package based on the Stackless project, a particular CPython implementation, and provides greenlets. Greenlets are pseudo-threads that are very cheap to instantiate, unlike real threads, and can be used to call Python functions. Within those functions, you can switch and give back control to another function. The switching is done with an event loop and allows you to write an asynchronous application using a thread-like interface paradigm.
Here's an example adapted from the Greenlet documentation (note that it uses Python 2 print statements):

from greenlet import greenlet

def test1(x, y):
    z = gr2.switch(x+y)
    print z

def test2(u):
    print u
    gr1.switch(42)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch("hello", " world")

The two greenlets explicitly switch from one to the other. For building microservices based on the WSGI standard, if the underlying code used greenlets we could accept several concurrent requests and just switch from one to another when we know a call is going to block the request - like performing a SQL query. However, switching from one greenlet to another has to be done explicitly, and the resulting code can quickly become messy and hard to understand. That's where Gevent becomes very useful. The Gevent project (http://www.gevent.org/) is built on top of Greenlet and offers, among other things, an implicit and automatic way of switching between greenlets. It provides a cooperative version of the socket module that uses greenlets to automatically pause and resume execution when some data is made available on the socket. There's even a monkey-patch feature that automatically replaces the standard library socket with Gevent's version. That makes your standard synchronous code magically asynchronous every time it uses sockets - with just one extra line:

from gevent import monkey; monkey.patch_all()

def application(environ, start_response):
    headers = [('Content-type', 'application/json')]
    start_response('200 OK', headers)
    # ...do something with sockets here...
    return result

This implicit magic comes with a price, though. For Gevent to work well, all the underlying code needs to be compatible with the patching Gevent does. Some packages from the community will continue to block or even have unexpected results because of this - in particular if they use C extensions that bypass some of the features of the standard library Gevent patched. But for most cases, it works well. Projects that play well with Gevent are dubbed "green", and when a library is not functioning well and the community asks its authors to "make it green", it usually happens. That's what was used to scale the Firefox Sync service at Mozilla, for instance.

Twisted and Tornado

If you are building microservices where increasing the number of concurrent requests you can hold is important, it's tempting to drop the WSGI standard and just use an asynchronous framework like Tornado (http://www.tornadoweb.org/) or Twisted (https://twistedmatrix.com/trac/). Twisted has been around for ages. To implement the same microservice you need to write slightly more verbose code:

import json
import time
from twisted.web import server, resource
from twisted.internet import reactor, endpoints

class Simple(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.responseHeaders.addRawHeader(b"content-type", b"application/json")
        return bytes(json.dumps({'time': time.time()}), 'utf8')

site = server.Site(Simple())
endpoint = endpoints.TCP4ServerEndpoint(reactor, 8080)
endpoint.listen(site)
reactor.run()

While Twisted is an extremely robust and efficient framework, it suffers from a few problems when building HTTP microservices:

You need to implement each endpoint in your microservice with a class derived from a Resource class, implementing each supported method. For a few simple APIs, this adds a lot of boilerplate code.
Twisted code can be hard to understand and debug due to its asynchronous nature.
It's easy to fall into callback hell when you're chaining too many functions that get triggered successively one after the other - and the code can get messy, as the sketch after this list suggests.
Properly testing your Twisted application is hard, and you have to use a Twisted-specific unit testing model.
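To make the callback point concrete, here is a small sketch - not from the book, with hypothetical function names - of how a Twisted handler grows a chain of Deferred callbacks; each step only runs once the previous (simulated) network call has fired, and the control flow quickly stops reading top to bottom:

import json
from twisted.internet import defer

def get_user(user_id):
    # stand-in for a real asynchronous lookup; returns an already-fired Deferred
    return defer.succeed({"id": user_id, "name": "alice"})

def fetch_orders(user):
    # another stand-in for a network call, chained off the previous result
    return defer.succeed({"user": user["name"], "orders": [1, 2, 3]})

def render(orders):
    return bytes(json.dumps(orders), "utf8")

def log_failure(failure):
    # errback: every step in the chain also needs its error path handled
    return bytes(json.dumps({"error": str(failure.value)}), "utf8")

def handle(user_id):
    d = get_user(user_id)
    d.addCallback(fetch_orders)
    d.addCallback(render)
    d.addErrback(log_failure)
    return d

With only three steps this is still readable; real handlers that branch, retry, and clean up resources are where the chain becomes hard to follow.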
Tornado is based on a similar model, but does a better job in some areas. It has a lighter routing system and does everything possible to keep the code close to plain Python. Tornado also uses a callback model, so debugging can be hard. But both frameworks are working hard at bridging the gap to rely on the new async features introduced in Python 3.

asyncio

When Guido van Rossum started to work on adding async features in Python 3, part of the community pushed for a Gevent-like solution, because it made a lot of sense to write applications in a synchronous, sequential fashion rather than having to add explicit callbacks as in Tornado or Twisted. But Guido picked the explicit technique and experimented in a project called Tulip, inspired by Twisted. Eventually, asyncio was born out of that side project and added into Python. In hindsight, implementing an explicit event-loop mechanism in Python instead of going the Gevent way makes a lot of sense. The way the Python core developers coded asyncio, and how they elegantly extended the language with the async and await keywords to implement coroutines, made asynchronous applications built with vanilla Python 3.5+ code look very elegant and close to synchronous programming. By doing this, Python did a great job of avoiding the callback syntax mess we sometimes see in Node.js or Twisted (Python 2) applications. And beyond coroutines, Python 3 has introduced a full set of features and helpers in the asyncio package to build asynchronous applications; see https://docs.python.org/3/library/asyncio.html. Python is now as expressive as languages like Lua for creating coroutine-based applications, and there are now a few emerging frameworks that embrace those features and will only work with Python 3.5+ to benefit from this. KeepSafe's aiohttp (http://aiohttp.readthedocs.io) is one of them, and building the same microservice, fully asynchronous, with it would simply be these few elegant lines:

from aiohttp import web
import time

async def handle(request):
    return web.json_response({'time': time.time()})

if __name__ == '__main__':
    app = web.Application()
    app.router.add_get('/', handle)
    web.run_app(app)

In this small example, we're very close to how we would implement a synchronous app. The only hint that we're async is the async keyword marking the handle function as a coroutine. And that's what's going to be used at every level of an async Python app going forward. Here's another example using aiopg, a PostgreSQL library for asyncio, taken from the project documentation:

import asyncio
import aiopg

dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'

async def go():
    pool = await aiopg.create_pool(dsn)
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT 1")
            ret = []
            async for row in cur:
                ret.append(row)
            assert ret == [(1,)]

loop = asyncio.get_event_loop()
loop.run_until_complete(go())

With a few async and await prefixes, the function that performs a SQL query and sends back the result looks a lot like a synchronous function. The flip side is that every library you call from a coroutine needs to cooperate with the event loop; the sketch below shows the usual workaround when one does not.
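This sketch is not from the book; it assumes a hypothetical blocking helper (fetch_report) standing in for any library without asyncio support, and shows the standard asyncio pattern of pushing the blocking call onto a thread pool with run_in_executor so the event loop stays free:

import asyncio
import time

def fetch_report(customer_id):
    # hypothetical blocking call, e.g. a synchronous HTTP or database client
    time.sleep(1)
    return {"customer": customer_id, "total": 42}

async def handle(customer_id):
    loop = asyncio.get_event_loop()
    # run the blocking function in the default thread pool executor;
    # the coroutine suspends here without blocking other requests
    report = await loop.run_in_executor(None, fetch_report, customer_id)
    return report

print(asyncio.get_event_loop().run_until_complete(handle(1)))

The cost is one thread per blocking call in flight, so this is a stopgap rather than a substitute for truly asynchronous drivers.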
But asynchronous frameworks and libraries based on Python 3 are still emerging, and if you are using asyncio or a framework like aiohttp, you will need to stick with particular asynchronous implementations for each feature you need. If you require a library that is not asynchronous, using it from your asynchronous code means you will need to go through some extra and challenging work, such as the executor trick sketched above, if you want to prevent blocking the event loop. If your microservices are dealing with a limited number of resources, this can be manageable. But it's probably a safer bet at this point (2017) to stick with a synchronous framework that's been around for a while rather than an asynchronous one. Let's enjoy the existing ecosystem of mature packages, and wait until the asyncio ecosystem gets more sophisticated. There are many great synchronous frameworks to build microservices with Python, like Bottle, Pyramid with Cornice, or Flask; a minimal Flask version of the same time service is sketched below.
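For comparison - this sketch is not from the book, and simply assumes Flask is installed - the time-returning microservice from the earlier examples looks like this with Flask:

import time
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def get_time():
    # same contract as the WSGI, Twisted and aiohttp versions above
    return jsonify(time=time.time())

if __name__ == '__main__':
    app.run(port=8080)

Under a WSGI server with a thread pool, this runs with the one request == one thread model discussed earlier.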
Language performances

In the previous sections we've been through the two different ways to write microservices - asynchronous versus synchronous - and whatever technique you use, the speed of Python directly impacts the performance of your microservice. Of course, everyone knows Python is slower than Java or Go, but execution speed is not always the top priority. A microservice is often a thin layer of code that spends most of its life waiting for network responses from other services. Its core speed is usually less important than how long your SQL queries take to return from your Postgres server, because the latter will represent most of the time spent building the response. But wanting an application that's as fast as possible is legitimate.

One controversial topic in the Python community around speeding up the language is how the Global Interpreter Lock (GIL) mutex can ruin performance, because multi-threaded applications cannot use several cores. The GIL has good reasons to exist. It protects non-thread-safe parts of the CPython interpreter, and exists in other languages like Ruby. And all attempts to remove it so far have failed to produce a faster CPython implementation. Larry Hastings is working on a GIL-free CPython project called Gilectomy (https://github.com/larryhastings/gilectomy); its minimal goal is to come up with a GIL-free implementation that can run a single-threaded application as fast as CPython. As of today (2017), this implementation is still slower than CPython. But it's interesting to follow this work and see if it reaches speed parity one day. That would make a GIL-free CPython very appealing. For microservices, besides preventing the usage of multiple cores in the same process, the GIL will slightly degrade performance under high load, because of the system-call overhead introduced by the mutex. However, all the scrutiny around the GIL has had one beneficial impact: work has been done in the past years to reduce its contention in the interpreter, and in some areas Python's performance has improved a lot.

Bear in mind that even if the core team removes the GIL, Python is an interpreted language and the produced code will never be very efficient at execution time. Python provides the dis module if you are interested in seeing how the interpreter decomposes a function. In the example below, the interpreter decomposes a simple function that yields incremented values from a sequence in no fewer than 29 steps!

>>> def myfunc(data):
...     for value in data:
...         yield value + 1
...
>>> import dis
>>> dis.dis(myfunc)
  2           0 SETUP_LOOP              23 (to 26)
              3 LOAD_FAST                0 (data)
              6 GET_ITER
        >>    7 FOR_ITER                15 (to 25)
             10 STORE_FAST               1 (value)
  3          13 LOAD_FAST                1 (value)
             16 LOAD_CONST               1 (1)
             19 BINARY_ADD
             20 YIELD_VALUE
             21 POP_TOP
             22 JUMP_ABSOLUTE            7
        >>   25 POP_BLOCK
        >>   26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

A similar function written in a statically compiled language dramatically reduces the number of operations required to produce the same result. There are ways to speed up Python execution, though. One is to write part of your code as compiled code by building C extensions, or by using a static extension of the language like Cython (http://cython.org/) - but that makes your code more complicated. Another solution, which is the most promising one, is simply to run your application using the PyPy interpreter (http://pypy.org/). PyPy implements a Just-In-Time (JIT) compiler. This compiler directly replaces, at run time, pieces of Python with machine code that can be used directly by the CPU. The whole trick for the JIT is to detect in real time, ahead of the execution, when and how to do it. Even if PyPy is always a few Python versions behind CPython, it has reached a point where you can use it in production, and its performance can be quite amazing. In one of our projects at Mozilla that needed fast execution, the PyPy version was almost as fast as the Go version, and we decided to use Python there instead. The PyPy Speed Center website is a great place to look at how PyPy compares to CPython: http://speed.pypy.org/

However, if your program uses C extensions, you will need to recompile them for PyPy, and that can be a problem - in particular if other developers maintain some of the extensions you are using. But if you are building your microservice with a standard set of libraries, chances are that it will work out of the box with the PyPy interpreter, so it's worth a try. In any case, for most projects, the benefits of Python and its ecosystem largely surpass the performance issues described in this section, because the overhead in a microservice is rarely a problem.

Summary

In this article we saw that Python is considered one of the best languages to write web applications, and therefore microservices - for the same reasons it's a language of choice in other areas, and because it provides tons of mature frameworks and packages to do the work.

Resources for Article:

Further resources on this subject:

Inbuilt Data Types in Python [article]
Getting Started with Python Packages [article]
Layout Management for Python GUI [article]


#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment

Melisha Dsouza
09 Nov 2018
4 min read
Last week, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment they encountered at Google's workplace. This global walkout by Google workers was a response to The New York Times report published last month on how Google shielded senior executives accused of sexual misconduct. Yesterday, Google addressed these demands in a note written by Sundar Pichai to employees. He admits that they have "not always gotten everything right in the past" and that they are "sincerely sorry" for it. This supposedly 'comprehensive' plan will provide more transparency into how employees raise concerns and how Google will handle them. Here are some of the major changes that caught our attention:

Following suit after Uber and Microsoft, Google has eliminated forced arbitration in cases of sexual harassment.
To foster more transparency in reporting a sexual harassment case, employees can now bring support persons to meetings with HR.
Google is planning to update and expand its mandatory sexual harassment training, which will now be conducted annually instead of once every two years. If an employee fails to complete the training, they will receive a one-rating dock in the employee performance review system. This applies to senior management as well, who could be downgraded from 'exceeds expectations' to 'meets expectations'.
Google will increase its focus on diversity, equity, and inclusion in 2019, through hiring, progression, and retention, in order to create a more inclusive culture for everyone.
Google found that one of the most common factors in harassment complaints is that the perpetrator was under the influence of alcohol (~20% of cases). Restating the policy, the plan mentions that excessive consumption of alcohol is not permitted when an employee is at work, performing Google business, or attending a Google-related event, whether onsite or offsite. Going forward, all leaders at the company will be expected to create teams, events, offsites, and environments in which excessive alcohol consumption is strongly discouraged, and to follow the two-drink rule.

Although the plan is a step towards stabilizing workplace conditions, it leaves out some of the more fundamental concerns about structural change raised by the organizers of the Google walkout - for example, the structural inequity that separates 'full time' employees from contract workers. Contract workers make up more than half of Google's workforce and perform essential roles across the company, yet they receive few of the benefits associated with tech company employment. They are also largely women, people of color, immigrants, and people from working-class backgrounds.

"We demand a truly equitable culture, and Google leadership can achieve this by putting employee representation on the board and giving full rights and protections to contract workers, our most vulnerable workers, many of whom are Black and Brown women."

-Google Walkout organizer Stephanie Parker

Google's plan to bring transparency to the workplace looks like a positive step towards improving its workplace culture. It will be interesting to see how the plan works out for Google's employees, and whether other organizations use it as an example for maintaining a safe workplace environment for their workers.
You can head over to Medium.com to read the #GoogleWalkout organizers' response to the update, and to Pichai's blog post for details on the announcement itself.

Technical and hidden debts in machine learning - Google engineers give their perspective
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?