How-To Tutorials

React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]

Sugandha Lahoti
14 Mar 2019
10 min read
There are a large number of React Native development tools, with Expo, the React Native CLI, and CocoaPods being among the most popular. As with any development tools, there is going to be a trade-off between flexibility and ease of use. I encourage you to start by using Expo for your React Native development workflow unless you're sure you'll need access to the native code. This article is taken from the book React Native Cookbook, Second Edition by Dan Ward. In this book, you will improve your React Native mobile development skills or transition from web development to mobile development. In this article, we will learn about the various React Native development tools: Expo, the React Native CLI, and CocoaPods. We will also learn how to set up Expo and the React Native CLI.

Expo

The expo.io site describes it like this: "Expo is a free and open source toolchain built around React Native to help you build native iOS and Android projects using JavaScript and React." Expo is becoming an ecosystem of its own, and is made up of five interconnected tools:

- Expo CLI: The command-line interface for Expo. We'll be using the Expo CLI to create, build, and serve apps. A list of all the commands supported by the CLI can be found in the official documentation at https://docs.expo.io/versions/latest/workflow/expo-cli
- Expo developer tools: A browser-based tool that automatically runs whenever an Expo app is started from the Terminal via the expo start command. It provides active logs for your in-development app, and quick access to running the app locally and sharing it with other developers.
- Expo Client: An app for Android and iOS. It allows you to run your React Native project within the Expo app on the device, without needing to install your project as a standalone app. This lets developers hot reload on a real device, or share development code with anyone else without an install step.
- Expo Snack: Hosted at https://snack.expo.io, this web app allows you to work on a React Native app in the browser, with a live preview of the code you're working on. If you've ever used CodePen or JSFiddle, Snack is the same concept applied to React Native applications.
- Expo SDK: The SDK that houses a wonderful collection of JavaScript APIs providing native functionality not found in the base React Native package, including working with the device's accelerometer, camera, notifications, geolocation, and many others. This SDK comes baked in with every new project created with Expo.

These tools together make up the Expo workflow. With the Expo CLI, you can create and build new applications with Expo SDK support baked in. The XDE/CLI also provides a simple way to serve your in-development app by automatically pushing your code to Amazon S3 and generating a URL for the project. From there, the CLI generates a QR code linked to the hosted code. Open the Expo Client app on your iPhone or Android device, scan the QR code, and BOOM, there's your app, equipped with live/hot reload! And since the app is hosted on Amazon S3, you can even share the in-development app with other developers in real time.
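Whichever tool creates the project, the JavaScript you write is plain React Native. For reference, a freshly created project's entry point looks roughly like the following (a minimal, illustrative sketch; the exact template generated by the tooling may differ):

```javascript
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

// A minimal App component: the same code runs inside the Expo Client
// or in a pure React Native project created with react-native init.
export default class App extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <Text>Open up App.js to start working on your app!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});
```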
React Native CLI

The original bootstrapping method for creating a new React Native app is the following command, provided by the React Native CLI:

react-native init

You'll likely only use this method of bootstrapping a new app if you're sure you'll need access to the native layer of the app. In the React Native community, an app created with this method is said to be a pure React Native app, since all of the development and native code files are exposed to the developer. While this provides the most freedom, it also forces the developer to maintain the native code. If you're a JavaScript developer who has jumped onto the React Native bandwagon because you intend to write native applications solely with JavaScript, having to maintain the native code in a React Native project is probably the biggest disadvantage of this method.

On the other hand, when working on an app that's been bootstrapped with react-native init, you'll have access to third-party plugins and get direct access to the native portion of the code base. You'll also be able to sidestep a few of Expo's current limitations, particularly the inability to use background audio or background GPS services.

CocoaPods

Once you begin working with apps that have components that use native code, you're going to be using CocoaPods in your development as well. CocoaPods is a dependency manager for Swift and Objective-C Cocoa projects. It works nearly the same as npm, but manages open source dependencies for native iOS code instead of JavaScript code. We won't be using CocoaPods much in this book, but React Native makes use of CocoaPods for some of its iOS integration, so having a basic understanding of the manager can be helpful.

Just as the package.json file houses all of the packages for a JavaScript project managed with npm, CocoaPods uses a Podfile for listing a project's iOS dependencies. Likewise, these dependencies can be installed using the following command:

pod install

Ruby is required for CocoaPods to run. Verify that Ruby is already installed by running the following at the command line:

ruby -v

If not, it can be installed with Homebrew via the following command:

brew install ruby

Once Ruby has been installed, CocoaPods can be installed via the following command:

sudo gem install cocoapods

If you encounter any issues while installing, you can read the official CocoaPods Getting Started guide at https://guides.cocoapods.org/using/getting-started.html.

Planning your app and choosing your workflow

When trying to choose which development workflow best fits your app's needs, here are a few things you should consider:

- Will I need access to the native portion of the code base?
- Will I need any third-party packages in my app that are not supported by Expo?
- Will my app need to play audio while it is not in the foreground?
- Will my app need location services while it is not in the foreground?
- Will I need push notification support?
- Am I comfortable working, at least nominally, in Xcode and Android Studio?

In my experience, Expo usually serves as the best starting place. It provides a lot of benefits to the development process, and gives you an escape hatch in the eject process if your app grows beyond the original requirements. I would recommend only starting development with the React Native CLI if you're sure your app needs something that cannot be provided by an Expo app, or if you're sure you will need to work on the native code.

I also recommend browsing the Native Directory hosted at http://native.directory. This site has a very large catalog of the third-party packages available for React Native development. Each package listed on the site has an estimated stability, popularity, and links to documentation.
Arguably the best feature of the Native Directory, however, is the ability to filter packages by the kind of device/development they support, including iOS, Android, Expo, and web. This will help you narrow down your package choices and better indicate which workflow should be adopted for a given app.

React Native CLI setup

We'll begin with the React Native CLI setup of our app, which will create a new pure React Native app, giving us access to all of the native code, but also requiring that Xcode and Android Studio are installed.

First, we'll install all the dependencies needed for working with a pure React Native app, starting with the Homebrew (https://brew.sh/) package manager for macOS. As stated on the project's home page, Homebrew can be easily installed from the Terminal via the following command:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once Homebrew is installed, it can be used to install the dependencies needed for React Native development: Node.js and Watchman. If you're a JavaScript developer, you've likely already got Node.js installed. You can check whether it's installed via the following command:

node -v

This command will list the version of Node.js that's installed, if any. Note that you will need Node.js version 8 or higher for React Native development. If Node.js is not already installed, you can install it with Homebrew via the following command:

brew install node

We also need the watchman package, which React Native uses behind the scenes to enable things like live reload during development. Install watchman with Homebrew via the following command:

brew install watchman

We'll also, of course, need the React Native CLI for running the commands that bootstrap the React Native app. This can be installed globally with npm via the following command:

npm install -g react-native-cli

With the CLI installed, all it takes to create a new pure React Native app is the following:

react-native init name-of-project

This will create a new project in a new name-of-project directory. This project has all native code exposed, and requires Xcode for running the iOS app and Android Studio for running the Android app.

Luckily, installing Xcode for supporting iOS React Native development is a simple process. The first step is to download Xcode from the App Store and install it. The second step is to install the Xcode command-line tools. To do this, open Xcode, choose Preferences... from the Xcode menu, open the Locations panel, and install the most recent version from the Command Line Tools dropdown.

Unfortunately, setting up Android Studio for supporting Android React Native development is not as cut and dried, and requires some very specific installation steps. Since this process is particularly involved, and since there is some likelihood that it will have changed by the time you read this chapter, I recommend referring to the official documentation for in-depth, up-to-date instructions on installing all Android development dependencies. These instructions are hosted at the following URL:

https://facebook.github.io/react-native/docs/getting-started.html#java-development-kit

Now that all dependencies have been installed, we're able to run our pure React Native project via the command line.
The iOS app can be executed via the following:

react-native run-ios

And the Android app can be started with this:

react-native run-android

Each of these commands should start up the associated emulator for the correct platform, install our new app, and run the app within the emulator. If either of these commands doesn't behave as expected, you might be able to find an answer in the React Native troubleshooting docs, hosted here:

https://facebook.github.io/react-native/docs/troubleshooting.html#content

Expo CLI setup

The Expo CLI can be installed from the Terminal with npm via the following command:

npm install -g expo-cli

The Expo CLI can be used to do all the great things the Expo GUI client can do. For all the commands that can be run with the CLI, check out the docs here:

https://docs.expo.io/versions/latest/workflow/expo-cli

If you liked this post, support the author by reading the book React Native Cookbook, Second Edition to enhance your React Native mobile development skills.

React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
React Native community announce March updates, post sharing the roadmap for Q4
How to create a native mobile app with React Native [Tutorial]

The seven deadly sins of web design

Guest Contributor
13 Mar 2019
7 min read
Just 30 days before the debut of "Captain Marvel," the latest cinematic offering from the successful and prolific Marvel Studios, a delightful and nostalgia-filled website was unveiled to promote the movie. Since the story of "Captain Marvel" is set in the 1990s, the brilliant minds in the marketing department of Marvel Studios decided to design a website with the right look and feel, which in this case meant using FrontPage and hosting on Angelfire. The "Captain Marvel" promo website is filled with the typography, iconography, glitter, and crudely animated GIFs you would expect from a 1990s creation, including a guestbook, hidden easter eggs, flaming borders, a hit counter, and even headers made with Microsoft WordArt.

(Image courtesy of Marvel)

The site is delightful not just for the dead-on nostalgia trip it provides to visitors, but also because it is very well developed. This is a site with a lot to explore, and it is clearly evident that the website developers met client demands while at the same time thinking about users. This site may look and feel like it was made during the GeoCities era, but it does not make any of the following seven mistakes.

Sin #1: Non-responsiveness

In 2019, it is simply inconceivable to think of a web development firm that neglects to make a responsive site. Since 2016, internet traffic flowing through mobile devices has been higher than the traffic originating from desktops and laptops. Current rates are about 53 percent smartphones and tablets versus 47 percent desktops, laptops, kiosks, and smart TVs. Failure to develop responsive websites means potentially alienating more than 50 percent of prospective visitors. As for the "Captain Marvel" website, it is amazingly responsive when you consider that internet users in the 1990s barely dreamed about the day when they would be able to access the web from handheld devices (mobile phones were yet to be mass distributed back then).

Sin #2: Way too much jargon

(Image courtesy of the Botanical Linguist)

Not all website developers have a good sense of readability, and this often shows when completed projects leave visitors struggling to comprehend the content. We're talking about jargon. There's a lot of it online, not only in the usual places like the privacy policy and terms of service sections but sometimes in content too. Regardless of how jargon creeps onto your website, it should be rooted out. The "Captain Marvel" website features legal notices written by The Walt Disney Company, and they are very reader-friendly with minimal jargon. The best way to handle jargon is to avoid it as much as possible unless the business developer has good reasons to include it.

Sin #3: A noticeable lack of content

No content means no message, and this is the reason 46 percent of visitors who land on B2B websites end up leaving without further exploration or interaction. Quality content that is relevant to the intention of a website is crucial in terms of establishing credibility, and this goes beyond B2B websites. In the case of "Captain Marvel," the amount of content is reduced to match the retro sensibility, but there are enough photos, film trailers, character bios, and games to keep visitors entertained. Modern website development firms that provide full-service solutions can either provide or advise clients on the content they need to get started. Furthermore, they can also offer lessons on how to operate content management systems.
Sin #4: Making essential information hard to find

There was a time when the "mystery meat navigation" issue of website development was thought to have been eradicated through the judicious application of recommended practices, but then mobile apps came around. Even technology giant Google fell victim to mystery meat navigation with its 2016 release of Material Design, which introduced bottom navigation bars intended to offer a more clarifying alternative to hamburger menus. Unless there is a clever purpose for prompting visitors to click or tap on a button, link, or page element that does not explain the next steps, mystery meat navigation should be avoided, particularly when it comes to essential information. When the 1990s "Captain Marvel" page loads, visitors can click or tap on labeled links to get information about the film, enjoy multimedia content, play games, interact with the guestbook, or get tickets. There is a mysterious old woman who pops up every now and then from the edges of the screen, but the reason behind this mysterious element is explained in the information section.

Sin #5: Website loads too slow

(Image courtesy of Horton Marketing Solutions)

There is an anachronism related to the "Captain Marvel" website that users who actually used Netscape in the 1990s will notice: all pages load very fast. This is one retro aspect that Marvel Studios decided not to include on this site, and it makes perfect sense. For a fast-loading site, a web design rule of thumb is to simplify, and this responsibility lies squarely with the developer. It stands to reason that the more "stuff" you have on a page (images, forms, videos, widgets, shiny things), the longer it takes the server to send over the site files and the longer it takes the browser to render them. Here are a few design best practices to keep in mind:

1. Make the site light - get rid of non-essential elements, especially if they are bandwidth-sucking images or video.
2. Compress your pages - it's easy with Gzip (see the sketch below).
3. Split long pages into several shorter ones.
4. Write clean code that doesn't rely on external sources.
5. Optimize images.
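To make tip 2 concrete, here is a minimal sketch of serving a site with Gzip compression enabled on a Node.js/Express server using the compression middleware. The article doesn't prescribe a particular server stack, so treat the setup below as an illustrative assumption rather than a recommendation of a specific host or framework:

```javascript
const express = require('express');
const compression = require('compression'); // npm install express compression

const app = express();

// Gzip-compress every HTTP response before it is sent to the browser.
app.use(compression());

// Serve the site's static files (HTML, CSS, JS, images) from ./public.
app.use(express.static('public'));

app.listen(3000, () => {
  console.log('Site available at http://localhost:3000');
});
```

Most web servers and CDNs offer an equivalent one-line setting, so there is rarely a good excuse for shipping uncompressed pages.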
For more web design tips that help your site load in the sub-three-second range, like Google expects in 2019, check out our article on current design trends.

Once you have design issues under control, investigate your web host. They aren't all created equal. Cheap, entry-level shared packages are notoriously slow and unpredictable, especially as your traffic increases. But even beyond that, the reality is that some companies spend money buying better, faster servers and don't overload them with too many clients. Some do. Recent testing from review site HostingCanada.org checked load times across the leading providers and found variances from a 'meh' 2,850 ms all the way down to a speedy 226 ms. With pricing amongst credible competitors roughly equal, web developers should know which hosts are the fastest and point clients in that direction.

Sin #6: Outdated information

Functional and accurate information will always triumph over form. The "Captain Marvel" website is garish to look at by 2019 standards, but all the information is current. The film's theater release date is clearly displayed, and should something happen that would require this date to change, you can be sure that Marvel Studios will fire up FrontPage to promptly make the adjustment.

Sin #7: No clear call to action

Every website should compel visitors to do something. Even if the purpose is to provide information, the call to action, or CTA, should encourage visitors to remember it and return for updates. The CTA should be as clear as the navigation elements; otherwise, the purpose of the visit is lost. Creating enticements is acceptable, but the CTA message should be explained nonetheless. In the case of "Captain Marvel," visitors can click on the "Get Tickets" link to be taken to a Fandango.com page with geolocation redirection for their region.

The Bottom Line

In the end, the seven mistakes listed here are easy to avoid. Whenever developers run into clients whose instructions may result in one of these mistakes, proper explanations should be given.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum Foundation, as well as an active GitHub contributor.

7 Web design trends and predictions for 2019
How to create a web designer resume that lands you a Job
Will Grant's 10 commandments for effective UX Design

WWW turns 30: Tim Berners-Lee, its inventor, shares his plan to save the Web from its current dysfunctions

Bhagyashree R
13 Mar 2019
6 min read
The World Wide Web turned 30 years old yesterday. As part of the celebration, its creator, Tim Berners-Lee, published an open letter on Monday, sharing his vision for the future of the web. In this year's letter, he also expressed his concerns about the direction in which the web is heading and how we can make it the one he envisioned. To celebrate #Web30, Tim Berners-Lee is on a 30-hour trip, and his first stop was the birthplace of the WWW, the European Organization for Nuclear Research, CERN.

https://twitter.com/timberners_lee/status/1105400740112203777

Back in 1989, Tim Berners-Lee, then a research fellow at CERN, wrote a proposal to his boss titled Information Management: A Proposal. This proposal was for building an information system that would allow researchers to share general information about accelerators and experiments. Initially, he named the project "The Mesh", which combined hypertext with the internet's TCP and domain name system. The project did not go that well, but Berners-Lee's boss, Mike Sendall, did remark that the idea was "vague but exciting". Later on, in 1990, he actually started coding for the project, and this time he named it what we know today as the World Wide Web. Fast forward to now, and the simple, innocent system he built has become enormous, connecting millions and millions of people across the globe. If you are curious to know how the WWW looked back then, check out its revived version by a CERN team:

https://twitter.com/CERN/status/1105457772626358273

The three dysfunctions the Web is now facing

The World Wide Web has come a long way. It has opened up various opportunities, given voice to marginalized groups, and made our daily lives much more convenient and easier. At the same time, it has also given opportunities to scammers, provided a platform for hate speech, and made it extremely easy to commit crimes while sitting behind a computer screen. Berners-Lee listed three sources of problems that are affecting today's web and also suggested a few ways we can minimize or prevent them:

"Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behavior, and online harassment." Though it is not really possible to completely eliminate this dysfunction, policymakers can come up with laws, and developers can take the responsibility to write code, that will help minimize this behavior.

"System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation." These types of systems introduce the wrong kinds of rewards, ones that encourage sacrificing the user's interests. To prevent this problem, developers need to rethink the incentives and redesign the systems accordingly so that they do not promote these wrong behaviors.

"Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse." These are systems that are created thoughtfully and with good intent but still result in negative outcomes. The problem is that it is really difficult to tell what all the outcomes of the system you are building will be. Berners-Lee, in an interview with The Guardian, said, "Given there are more web pages than there are neurons in your brain, it's a complicated thing. You build Reddit, and people on it behave in a particular way. For a while, they all behave in a very positive, constructive way.
And then you find a subreddit in which they behave in a nasty way." This problem could be eliminated by researching and understanding existing systems. Based on this research, we can then model possible new systems or enhance those we already have.

Contract for the Web

Berners-Lee further explained that we can't really just put the blame on a government or a social network for all the loopholes and dysfunctions that are affecting the Web. He said, "You can't generalise. You can't say, you know, social networks tend to be bad, tend to be nasty." We need to find the root causes, and to do exactly that, we all need to come together as a global web community. "As the web reshapes, we are responsible for making sure that it is seen as a human right and is built for the public good", he wrote in the open letter.

To address these problems, Berners-Lee has a radical solution. Back in November last year at the Web Summit, he and The Web Foundation introduced the Contract for the Web. The contract aims to bring together governments, companies, and citizens who believe that there is a need for setting clear norms, laws, and standards that underpin the web. "Governments, companies, and citizens are all contributing, and we aim to have a result later this year," he shared. In theory, the contract defines people's online rights and lists the key principles and duties that governments, companies, and citizens should follow. In Berners-Lee's mind, it will restore some degree of equilibrium and transparency to the digital realm. The contract is part of a broader project that Berners-Lee believes is essential if we are to 'save' the web from its current problems. First, we need to create an open web for the users who are already connected to it and give them the power to fix the issues we have with the existing web. Second, we need to bring in the other half of the world, which is not yet connected to the web.

Many people agree with the points Berners-Lee discussed in the open letter. Here is what some Twitter users are saying:

https://twitter.com/girlygeekdom/status/1105375206829256704
https://twitter.com/solutionpoint/status/1105366111678279681

The Contract for the Web, as Berners-Lee says, is about "going back to the values". His idea of bringing together governments, companies, and citizens to make the Web safer and accessible to everyone looks pretty solid. Read the full open letter by Tim Berners-Lee on the Web Foundation's website.

Web Summit 2018: day 2 highlights
Tim Berners-Lee is on a mission to save the web he invented
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

Building a Progressive Web Application with Create React App 2 [Tutorial]

Bhagyashree R
13 Mar 2019
12 min read
The beauty of building a modern web application is being able to take advantage of functionality such as a Progressive Web App (PWA)! But PWAs can be a little complicated to work with. As always, the Create React App tool makes a lot of this easier for us, but it does carry some significant caveats that we'll need to think about. This article is taken from the book Create React App 2 Quick Start Guide by Brandon Richey. This book is intended for those who want to get intimately familiar with the Create React App tool. It covers all the commands in Create React App and all of the new additions in version 2. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository. In this article, we will learn what exactly PWAs are and how we can configure our Create React App project into a custom PWA. We will also explore service workers, their life cycle, and how to use them with Create React App.

Understanding and building PWAs

Let's talk a little bit about what a PWA is, because there is, unfortunately, a lot of misinformation and confusion about precisely what a PWA does! In very simple words, it's simply a website that does the following:

- Only uses HTTPS
- Adds a JSON manifest (a web app manifest) file
- Has a service worker

A PWA, for us, is a React application that would be installable/runnable on a mobile device or desktop. Essentially, it's just your app, but with capabilities that make it a little more advanced, a little more effective, and a little more resilient to poor or no internet. A PWA accomplishes this via a few tenets, tricks, and requirements that we'd want to follow:

- The app must be usable by mobile and desktop users alike
- The app must operate over HTTPS
- The app must implement a web app JSON manifest file
- The app must implement a service worker

Now, the first one is a design question. Did you make your design responsive? If so, congratulations, you built the first step toward having a PWA! The next one is also more of an implementation question that's maybe not as relevant to us here: when you deploy your app to production, did you make it HTTPS only? I hope the answer to this is yes, of course, but it's still a good question to ask! The next two, though, are things we can do as part of our Create React App project, and we'll make those the focus of this article.

Building a PWA in Create React App

Okay, so we identified the two items that we need to build to make this all happen: the JSON manifest file and the service worker! Easy, right? Actually, it's even easier than that. You see, Create React App populates a JSON manifest file for us as part of our project creation by default. That means we have already completed this step! Let's celebrate, go home, and kick off our shoes, because we're all done now, right? Well, sort of. We should take a look at that default manifest file, because it's very unlikely that we want our fancy TodoList project to be called "Create React App Sample". Let's take a look at the manifest file, located in public/manifest.json:

{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}

Some of these keys are pretty self-explanatory, or at least carry enough information that you can infer what they accomplish. Some of the other keys, though, might be a little stranger.
For example, what does "start_url" mean? What are the different options we can pick for display? What's a "theme_color" or "background_color"? Aren't those just decided by the CSS of our application? Not really. Let's dive deeper into this world of JSON manifest files and turn it into something more useful!

Viewing our manifest file in action with Chrome

First, to be able to test this, we should have something where we can verify the results of our changes. We'll start off with Chrome, where, if you go into the Developer tools section, you can navigate to the Application tab and be brought right to the Service Workers section. This is where we can see what it all looks like for our application.

Exploring the manifest file options

Having a manifest file with no explanation of what the different keys and options mean is not very helpful. So, let's learn about each of them, the different configuration options available to us, and some of the possible values we could use for each.

name and short_name

The first key we have is short_name. This is a shorter version of the name that might be displayed when, for example, the title can only display a smaller bit of text than the full app or site name. The counterpart to this is name, which is the full name of your application. For example:

{
  "short_name": "Todos",
  "name": "Best Todoifier"
}

icons

Next is the icons key, which is a list of sub-objects, each of which has three keys. This contains a list of icons that the PWA should use, whether it's for displaying on someone's desktop, someone's phone home screen, or something else. Each "icon" object should contain an "src", which is a link to the image file that will be your icon. Next, you have the "type" key, which should tell the PWA what type of image file you're working with. Finally, we have the "sizes" key, which tells the PWA the size of the icon. For best results, you should have at least a "512x512" and a "192x192" icon.

start_url

The start_url key is used to tell the application at what point it should start in your application in relation to your server. While we're not using it for anything, as we have a single-page, no-route app, that might be different in a much larger application, so you might want the start_url key to indicate where users should start off from. Another option would be to add a query string onto the end of the URL, such as a tracking link. An example of that would be something like this:

{
  "start_url": "/?source=AB12C"
}

background_color

This is the color used for the splash screen that is displayed when the application is first launched. This is similar to when you launch an application from your phone for the first time; that little page that pops up temporarily while the app loads is the splash screen, and background_color would be the background of that. This can either be a color name like you'd use in CSS, or it can be a hex value for a color.

display

The display key affects the browser's UI when the application is launched. There are ways to make the application full-screen, to hide some of the UI elements, and so on. Here are the possible options, with their explanations:

- browser: A normal web browser experience.
- fullscreen: No browser UI; the app takes up the entire display.
- standalone: Makes the web app look like a native application. It will run in its own window and hide a lot of the browser UI to make it look and feel more native.

orientation

If you want to make your application use the landscape orientation, you would specify it here.
Otherwise, you would leave this option out of your manifest:

{
  "orientation": "landscape"
}

scope

Scope helps to determine where the PWA in your site lies and where it doesn't. This prevents your PWA from trying to load things outside of where your PWA runs. start_url must be located inside your scope for it to work properly! This is optional, and in our case, we'll be leaving it out.

theme_color

This sets the color of the toolbar, again to make the app feel and look a little more native. If we specify a meta theme color, we'd set this to be the same as that specification. Much like background_color, this can either be a color name, as you'd use in CSS, or a hex value for a color.

Customizing our manifest file

Now that we're experts on manifest files, let's customize our manifest file! We're going to change a few things here and there, but we won't make any major changes. Let's take a look at how we've set up the manifest file in public/manifest.json:

{
  "short_name": "Todos",
  "name": "Best Todoifier",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    }
  ],
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#343a40",
  "background_color": "#a5a5f5"
}

So we've set our short_name and name keys to match the actual application. We've left the icons key alone completely, since we don't really need to do much of anything with that anyway. Next, we've changed start_url to just be "/", since we're working under the assumption that this application is the only thing running on its domain. We've set the display key to standalone, since we want our application to have the ability to be added to someone's home screen and be recognized as a true PWA. Finally, we set the theme color to #343a40, which matches the color of the nav bar and will give a more seamless look and feel to the PWA. We also set the background_color key, which is for our splash screen, to #a5a5f5, which is the color of our normal Todo items! If you think back to the explanation of keys, you'll remember we also need to change our meta theme tag in our public/index.html file, so we'll open that up and quickly make that change:

<meta name="theme-color" content="#343a40" />

And that's it! Our manifest file has been customized! If we did it all correctly, we should be able to verify the changes again in our Chrome Developer tools.

Hooking up service workers

A service worker is a script that your browser runs behind the scenes, separate from the main browser thread. It can intercept network requests, interact with a cache (either storing or retrieving information from it), or listen to and deliver push messages.

The service worker life cycle

The life cycle for a service worker is pretty simple. There are three main stages:

1. Registration
2. Installation
3. Activation

Registration is the process of letting the browser know where the service worker is located and how to install it into the background. The code for registration may look something like this:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(registration => {
      console.log('Service Worker registered!');
    })
    .catch(error => {
      console.log('Error registering service worker! Error is:', error);
    });
}

Installation is the process that happens after the service worker has been registered, and it only happens if the service worker either hasn't already been installed or has changed since the last time. In a service-worker.js file, you'd add something like this to be able to listen to this event:

self.addEventListener('install', event => {
  // Do something after install
});

Finally, Activation is the step that happens after all of the other steps have completed. The service worker has been registered and then installed, so now it's time for the service worker to start doing its thing:

self.addEventListener('activate', event => {
  // Do something upon activation
});
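Beyond these lifecycle hooks, a hand-rolled service worker typically also listens for fetch events so it can serve responses from a cache. The following is a minimal, cache-first sketch for illustration only; as discussed in the next section, the service worker generated by Create React App can't be customized like this without ejecting, so this assumes you are maintaining your own service-worker.js (the cache name is made up):

```javascript
const CACHE_NAME = 'app-cache-v1'; // illustrative cache name

// Pre-cache a few core assets during installation.
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(['/', '/index.html']))
  );
});

// Serve cached responses when available, falling back to the network.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```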
How can we use a service worker in our app?

So, how do we use a service worker in our application? Well, it's simple to do with Create React App, but there is a major caveat: you can't configure the service-worker.js file generated by Create React App by default without ejecting your project! Not all is lost, however; you can still take advantage of some of the highlights of PWAs and service workers by using the default Create React App-generated service worker. To enable this, hop over into src/index.js and, at the final line, change the service worker unregister() call to register() instead:

serviceWorker.register();

And now we're opting into our service worker! Next, to actually see the results, you'll need to run the following:

$ yarn build

This creates a production build. You'll see some output that we'll want to follow as part of this:

The build folder is ready to be deployed.
You may serve it with a static server:

yarn global add serve
serve -s build

As per the instructions, we'll install serve globally and run the command as instructed:

$ serve -s build

We will see output indicating that the build is being served. Now open up http://localhost:5000 in your local browser, and you'll be able to see, again in the Chrome Developer tools, the service worker up and running for your application.

Hopefully, we've explored at least enough of PWAs that they have been partially demystified! A lot of the confusion and trouble with building PWAs tends to stem from the fact that there's not always a good starting point for building one. Create React App limits us a little bit in how we can implement service workers, which admittedly limits the functionality and usefulness of our PWA. It doesn't hamstring us by any means, but it does keep us from doing fun tricks such as pre-caching network and API responses and loading up our application instantly, even if the browser doing the loading is offline in the first place. That being said, it's like many other things in Create React App: an amazing stepping stone and a great way to get moving with PWAs in the future!

If you found this post useful, do check out the book Create React App 2 Quick Start Guide. In addition to getting familiar with Create React App 2, you will also build modern React projects with SASS and progressive web applications.

ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
React Native community announce March updates, post sharing the roadmap for Q4
React Native Vs Ionic: Which one is the better mobile app development framework?

Google confirms it paid $135 million as exit packages to senior execs accused of sexual harassment

Natasha Mathur
12 Mar 2019
4 min read
According to a complaint filed in a lawsuit yesterday, Google paid $135 million in total as exit packages to two top senior execs, namely Andy Rubin (creator of Android) and Amit Singhal (former senior VP of Google search), after they were accused of sexual misconduct at the company. The lawsuit was filed by an Alphabet shareholder, James Martin, in the Santa Clara, California court. Google also confirmed paying the exit packages to the senior execs to The Verge yesterday.

The complaint is against certain directors and officers of Alphabet, Google's parent company, for their active and direct participation in a "multi-year scheme" to hide sexual harassment and discrimination at Alphabet. It also states that the misconduct by these directors has caused severe financial and reputational damage to Alphabet. The exit packages for Rubin and Singhal were approved by the Leadership Development and Compensation Committee (LDCC). The news of Google paying high exit packages to its top execs first came to light last October, after the New York Times released a report on Google stating that the firm paid $90 million to Rubin and $15 million to Singhal. Rubin had previously also received an offer for a $150 million stock grant, which he then used to negotiate the $90 million in severance pay, even though he should have been fired for cause without any pay, states the lawsuit.

To protest against the handling of sexual misconduct within Google, more than 20,000 Google employees, along with vendors, contractors, and temps, organized the Google "walkout for real change" and walked out of their offices in November 2018. Googlers also launched an industry-wide awareness campaign to fight against forced arbitration in January, where they shared information about arbitration on their Twitter and Instagram accounts throughout the day. Last year in November, Google ended its forced arbitration (a move that was soon followed by Facebook) for its employees (excluding temps, vendors, etc.) and only in the case of sexual harassment. This led to contractors writing an open letter on Medium to Sundar Pichai, CEO of Google, in December, demanding that he address their demands for better conditions and equal benefits for contractors. In response to the Google walkout and the growing public pressure, Google finally decided to end its forced arbitration policy for all employees (including contractors) and for all kinds of discrimination within Google last month. The changes will go into effect for all Google employees starting March 21st, 2019.

Yesterday, the Google Walkout for Real Change group tweeted condemning the multi-million dollar payouts and has asked people to use the hashtag #Googlepayoutsforall to highlight other, better ways that money could have been used.

https://twitter.com/GoogleWalkout/status/1105450565193121792

"The conduct of Rubin and other executives was disgusting, illegal, immoral, degrading to women and contrary to every principle that Google claims it abides by", reads the lawsuit. James Martin also filed a lawsuit against Alphabet's board members, Larry Page, Sergey Brin, and Eric Schmidt, earlier this year in January for covering up the sexual harassment allegations against the former top execs at Google. Martin had sued Alphabet for breaching its fiduciary duty to shareholders, unjust enrichment, abuse of power, and corporate waste. "The directors' wrongful conduct allowed illegal conduct to proliferate and continue.
As such, members of Alphabet's board were knowing, direct enablers of sexual harassment and discrimination", reads the lawsuit. It also states that the board members not only violated California and federal law but also the ethical standards and guidelines set by Alphabet. Public reaction to the news is largely negative, with people condemning Google's handling of sexual misconduct:

https://twitter.com/awesome/status/1105295877487263744
https://twitter.com/justkelly_ok/status/1105456081663225856
https://twitter.com/justkelly_ok/status/1105457965790707713
https://twitter.com/conradwt/status/1105386882135875584
https://twitter.com/mer__edith/status/1105464808831361025

For more information, check out the official lawsuit here.

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Liz Fong Jones, prominent ex-Googler shares her experience at Google and 'grave concerns' for the company
Google's pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis

Will Grant’s 10 commandments for effective UX Design

Will Grant
11 Mar 2019
8 min read
Somewhere along the journey of web maturity, we forgot something important: user experience is not art. It's the opposite of art. UX design should perform one primary function: serving users. Your UX design has to look great, but not at the expense of hampering the working of the website. This is an extract from 101 UX Principles by Will Grant. Read our interview with Will here.

#1 Empathy and objectivity are the primary skills of a UX professional

Empathy and objectivity are the primary skills you must possess to be good at UX. This is not to undermine those who have spent many years studying and working in the UX field — their insights and experience are valuable — but rather to say that study and practice alone are not enough. You need empathy to understand your users' needs, goals, and frustrations. You need objectivity to look at your product with fresh eyes, spot the flaws, and fix them. You can learn everything else.
Read More: Soft skills every data scientist should teach their child

#2 Don't use more than two typefaces

Too often, designers add too many typefaces to their products. You should aim to use two typefaces at most: one for headings and titles, and another for body copy that is intended to be read. Using too many typefaces creates too much visual 'noise' and increases the effort that the user has to put into understanding the view in front of them. What's more, many custom-designed brand typefaces are made with punchy visual impact in mind, not readability. Use weights and italics within that font family for emphasis, rather than switching to another family. Typically, this means using your corporate brand font for headings, while leaving the controls, dialogs, and in-app copy (which need to be clearly legible) in a more proven, readable typeface.

#3 Make your buttons look like buttons

There are parts of your UI that can be interacted with, but your user doesn't know which parts and doesn't want to spend time learning. Flat design is bad. It's really terrible for usability. It's style over substance, and it forces your users to think more about every interaction they make with your product. Stop making it hard for your customers to find the buttons! By drawing on real-world examples, we can make UI buttons that are obvious and instantly familiar. By using real-life inspiration to create affordances, a new user can identify the controls right away. Create the visual cues your user needs to know instantly that they're looking at a button that can be tapped or clicked.

#4 Make 'blank slates' more than just empty views

The default behavior of many apps is to simply show an empty view where the content would be. For a new user, this is a pretty poor experience and a massive missed opportunity for you to give them some extra orientation and guidance. The blank slate is only shown once (before the user has generated any content). This makes it an ideal way of orienting people to the functions of your product while getting out of the way of more established users who will hopefully 'know the ropes' a little better. For that reason, it should be considered mandatory for UX designers to offer users a useful blank slate.
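To make the idea concrete, here is a small React-style sketch (illustrative only — the book itself is framework-agnostic, and the component and prop names are made up) of a list view that renders a helpful blank slate instead of an empty screen:

```javascript
import React from 'react';

// Shows a helpful "blank slate" with guidance and a call to action
// when the user has not created any content yet.
function TodoList({ todos, onCreate }) {
  if (todos.length === 0) {
    return (
      <div className="blank-slate">
        <h2>No todos yet</h2>
        <p>Todos you add will show up here. Create your first one to get started.</p>
        <button onClick={onCreate}>Create a todo</button>
      </div>
    );
  }
  return (
    <ul>
      {todos.map(todo => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}

export default TodoList;
```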
#5 Hide 'advanced' settings from most users

There's no need to include every possible option on your menu when you can hide advanced settings away. Group settings together, but separate out the more obscure ones into their own section of 'power user' settings. These should also be grouped into sections if there are a lot of them (don't just throw all the advanced items in at random). Not only does hiding advanced settings reduce the number of items for a user to mentally juggle, it also makes the app appear less daunting by hiding complex settings from most users. By picking good defaults, you can ensure that the vast majority of users will never need to alter advanced settings. For the ones that do, an advanced menu section is a pretty well-used pattern.

#6 Use device-native input features where possible

If you're using a smartphone or tablet to dial a telephone number, the device's built-in 'phone' app will have a large numeric keypad that won't force you to use a fiddly 'QWERTY' keyboard for numeric entry. Sadly, too often we ask users to use the wrong input features in our products. By leveraging what's already there, we can turn painful form entry experiences into effortless interactions. No matter how good you are, you can't justify spending the time and money that other companies have spent on making usable system controls. Even if you get it right, it's still yet another UI for your user to learn, when there's a perfectly good one already built into their device. Use that one.
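In an HTML or React form, simply declaring the appropriate native input types usually gets you the right on-screen keyboard for free. The snippet below is an illustrative sketch (the form and field names are made up, not taken from the book):

```javascript
import React from 'react';

// Native input types prompt mobile browsers to show the matching controls:
// a dial pad for "tel", an email keyboard for "email", a date picker for "date".
function ContactForm() {
  return (
    <form>
      <label>
        Phone
        <input type="tel" name="phone" autoComplete="tel" />
      </label>
      <label>
        Email
        <input type="email" name="email" autoComplete="email" />
      </label>
      <label>
        Delivery date
        <input type="date" name="deliveryDate" />
      </label>
    </form>
  );
}

export default ContactForm;
```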
#7 Always give icons a text label

Icons are used and misused so relentlessly, across so many products, that you can't rely on any one single icon to convey a definitive meaning. For example, if you're offering a 'history' feature, there's a wide range of pictogram clocks, arrows, clocks within arrows, hourglasses, and parchment scrolls to choose from. This can confuse the user, so you need to add a text label that makes clear what the icon means in this context within your product. Often, a designer will decide to sacrifice the icon label on mobile responsive views. Don't do this. Mobile users still need the label for context. The icon and the label then work in tandem to provide context and instruction and offer recall to the user, whether they're new to your product or use it every day.

#8 Decide if an interaction should be obvious, easy or possible

To help decide where (and how prominently) a control or interaction should be placed, it's useful to classify interactions into one of three types:

- Obvious interactions: The core functions of the app, for example, the shutter button on a camera app or the "new event" button on a calendar app.
- Easy interactions: An easy interaction could be switching between the front-facing and rear-facing lens in a camera app, or editing an existing event in a calendar app.
- Possible interactions: Interactions we classify as possible are rarely used, and they are often advanced features. For example, it is possible to adjust the white balance or auto-focus in a camera app, or make an event recurring in a calendar app.

#9 Don't join the dark side

So-called 'dark patterns' are UI or UX patterns designed to trick the user into doing what the corporation or brand wants them to do. These are, in a way, exactly the same as the scams used by old-time fraudsters and rogue traders, now transplanted to the web and updated for the post-internet age. Examples include:

- Shopping carts that add extra "add-on" items (like insurance, protection policies, and so on) to your cart before you check out, hoping that you won't remove them
- Search results that begin their list by showing the item they'd like to sell you instead of the best result
- Ads that don't look like ads, so you accidentally tap them
- Changing a user's settings — edit your private profile and, if you don't explicitly make it private again, the company will switch it back to public
- Unsubscribe "confirmation screens", where you have to uncheck a ton of checkboxes just right to actually unsubscribe

In some fields, medicine, for example, professionals have a code of conduct and ethics that forms the core of the work they do. Building software does not have such a code of conduct, but maybe it should.

#10 Test with real users

There's a myth that user testing is expensive and time-consuming, but the reality is that even very small test groups (fewer than 10 people) can provide fascinating insights. The nature of such tests is very qualitative and doesn't lend itself well to quantitative analysis, so you can learn a lot from working with a small sample set of fewer than 10 users.
Read More: A UX strategy is worthless without a solid usability test plan

You need to test with real users — not your colleagues, not your boss, and not your partner. You need to test with a diverse mix of people, from the widest section of society you can get access to. User testing is an essential step to understanding not just your product but also the users you're testing: what their goals really are, how they want to achieve them, and where your product delivers or falls short.

Summary

In the web development world, UX and UI professionals keep making UX mistakes, trying to reinvent the wheel, and forgetting to put themselves in the place of a user. Following these 10 commandments and applying them to your software design will create more usable and successful products that look great but at the same time do not hinder functionality.

Is your web design responsive?
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability

Top announcements from the TensorFlow Dev Summit 2019

Sugandha Lahoti
08 Mar 2019
5 min read
The two-day TensorFlow Dev Summit 2019 has just wrapped up, leaving in its wake major updates to the TensorFlow ecosystem. The major announcements included the release of the first alpha version of the much-anticipated TensorFlow 2.0. Also announced were TensorFlow Lite 1.0, TensorFlow Federated, TensorFlow Privacy, and more.

TensorFlow Federated

In a Medium blog post, Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist) introduced the TensorFlow Federated (TFF) framework on the first day. This open source framework is useful for experimenting with machine learning and other computations on decentralized data. As the name suggests, the framework uses federated learning, a learning approach introduced by Google in 2017. This technique enables ML models to collaboratively learn a shared prediction model while keeping all the training data on the device, thus eliminating the need to store training data in the cloud. The authors note that TFF is based on their experiences with developing federated learning technology at Google. TFF uses the Federated Learning API to express an ML model architecture and then train it across data provided by multiple developers, while keeping each developer's data separate and local. It also uses the Federated Core (FC) API, a set of lower-level primitives that enables the expression of a broad range of computations over a decentralized dataset. The authors conclude, "With TFF, we are excited to put a flexible, open framework for locally simulating decentralized computations into the hands of all TensorFlow users. You can try out TFF in your browser, with just a few clicks, by walking through the tutorials."

TensorFlow 2.0.0-alpha0

The event also saw the release of the first alpha version of the TensorFlow 2.0 framework, which comes with fewer APIs. First introduced last August by Martin Wicke, an engineer at Google, TensorFlow 2.0 is expected to come with:

- Easy model building with Keras and eager execution.
- Robust model deployment in production on any platform.
- Powerful experimentation for research.
- API simplification by reducing duplication and removing deprecated endpoints.

The first teaser, the TensorFlow 2.0.0-alpha0 release, comes with the following changes:

- API clean-up, including removing tf.app, tf.flags, and tf.logging in favor of absl-py.
- No more global variables with helper methods like tf.global_variables_initializer and tf.get_global_step.
- Functions, not sessions (tf.Session and session.run -> tf.function).
- Added support for TensorFlow Lite in TensorFlow 2.0.
- tf.contrib has been deprecated, and its functionality has been either migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely.
- Checkpoint breakage for RNNs and for Optimizers.
- Minor bug fixes have also been made to the Keras and Python APIs and tf.estimator. Read the full list of bug fixes in the changelog.

TensorFlow Lite 1.0

The TF Lite framework is designed to help developers deploy machine learning and artificial intelligence models on mobile and IoT devices. Lite was first introduced at the I/O developer conference in May 2017 and went into developer preview later that year. At the TensorFlow Dev Summit, the team announced a new version of this framework, TensorFlow Lite 1.0. According to a post by VentureBeat, improvements include selective registration and quantization during and after training for faster, smaller models.
TensorFlow Lite 1.0

The TF-Lite framework is designed to help developers deploy machine learning and artificial intelligence models on mobile and IoT devices. Lite was first introduced at the I/O developer conference in May 2017 and moved into developer preview later that year. At the TensorFlow Dev Summit, the team announced a new version of this framework, TensorFlow Lite 1.0. According to a post by VentureBeat, improvements include selective registration and quantization during and after training for faster, smaller models. The team behind TF-Lite 1.0 says that quantization has helped them achieve up to 4x compression of some models.

TensorFlow Privacy

Another interesting library released at the TensorFlow Dev Summit was TensorFlow Privacy. This Python-based open source library helps developers train their machine learning models with strong privacy guarantees. To achieve this, it takes inspiration from the principles of differential privacy. This technique offers strong mathematical guarantees that models do not learn or remember the details about any specific user when training on user data. TensorFlow Privacy includes implementations of TensorFlow optimizers for training machine learning models with differential privacy. For more information, you can go through the technical whitepaper describing its privacy mechanisms in more detail. The creators also note that "no expertise in privacy or its underlying mathematics should be required for using TensorFlow Privacy. Those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes."

TensorFlow Replicator

TF-Replicator, also released at the TensorFlow Dev Summit, is a software library that helps researchers deploy their TensorFlow models on GPUs and Cloud TPUs. The creators say this requires minimal effort and no previous experience with distributed systems. For multi-GPU computation, TF-Replicator relies on an "in-graph replication" pattern, where the computation for each device is replicated in the same TensorFlow graph. When TF-Replicator builds an in-graph replicated computation, it first builds the computation for each device independently and leaves placeholders where cross-device computation has been specified by the user. Once the sub-graphs for all devices have been built, TF-Replicator connects them by replacing the placeholders with actual cross-device computation. For a more comprehensive description, you can go through the research paper.

These were the top announcements made at the TensorFlow Dev Summit 2019. You can go through the keynote and other videos of the announcements and tutorials on this YouTube playlist.

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more
TensorFlow 2.0 is coming. Here's what we can expect.
Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling


Rachel Batish's 3 tips to build your own interactive conversational app

Guest Contributor
07 Mar 2019
10 min read
In this article, we will provide 3 tips for making an interactive conversational application using current chat and voice examples. This is an excerpt from the book Voicebot and Chatbot Design written by Rachel Batish. In this book, the author shares her insights into cutting-edge voice-bot and chatbot technologies Help your users ask the right questions Although this sounds obvious, it is actually crucial to the success of your chatbot or voice-bot. I learned this when I initially set up my Amazon Echo device at home. Using a complementary mobile app, I was directed to ask Alexa specific questions, to which she had good answers to, such as “Alexa, what is the time?” or “Alexa, what is the weather today?” I immediately received correct answers and therefore wasn’t discouraged by a default response saying, “Sorry, I don’t have an answer to that question.” By providing the user with successful experience, we are encouraging them to trust the system and to understand that, although it has its limitations, it is really good in some specific details. Obviously, this isn’t enough because as time passes, Alexa (and Google) continues to evolve and continues to expand its support and capabilities, both internally and by leveraging third parties. To solve this discovery problem, some solutions, like Amazon Alexa and Google Home, send a weekly newsletter with the highlights of their latest capabilities. In the email below, Amazon Alexa is providing a list of questions that I should ask Alexa in my next interaction with it, exposing me to new functionalities like donation. From the Amazon Alexa weekly emails “What’s new with Alexa?” On the Google Home/Assistant, Google has also chosen topics that it recommends its users to interact with. Here, as well, the end user is exposed to new offerings/capabilities/knowledge bases, that may give them the trust needed to ask similar questions on other topics. From the Google Home newsletter Other chat and voice providers can also take advantage of this email communication idea to encourage their users to further interact with their chatbots or voice-bots and expose them to new capabilities. The simplest way of encouraging usage is by adding a dynamic ‘welcoming’ message to the chat voice applications, that includes new features that are enabled. Capital One, for example, updates this information every now and then, exposing its users to new functionalities. On Alexa, it sounds like this: “Welcome to Capital One. You can ask me for things like account balance and recent transactions.” Another way to do this – especially if you are reaching out to a random group of people – is to initiate discovery during the interaction with the user (I call this contextual discovery). For example, a banking chatbot offers information on account balances. Imagine that the user asks, “What’s my account balance?” The system gives its response: “Your checking account balance is $5,000 USD.” The bank has recently activated the option to transfer money between accounts. To expose this information to its users, it leverages the bot to prompt a rational suggestion to the user and say, “Did you know you can now transfer money between accounts? Would you like me to transfer $1,000 to your savings account?” As you can see, the discovery process was done in context with the user’s actions. Not only does the user know that he/she can now transfer money between two accounts, but they can also experience it immediately, within the relevant context. 
To sum up tip #1, by finding the direct path to initial success, your users will be encouraged to further explore and discover your automated solutions and will not fall back to other channels. The challenge is, of course, to continuously expose users to new functionalities, made available on your chatbots and voice-bots, preferably in a contextual manner. Give your bot a ‘personality’, but don’t pretend it’s a human Your bot, just like any digital solution you provide today, should have a personality that makes sense for your brand. It can be visual, but it can also be enabled over voice. Whether it is a character you use for your brand or something created for your bot, personality is more than just the bot’s icon. It’s the language that it ‘speaks’, the type of interaction that it has and the environment it creates. In any case, don’t try to pretend that your bot is a human talking with your clients. People tend to ask the bot questions like “are you a bot?” and sometimes even try to make it fail by asking questions that are not related to the conversation (like asking how much 30*4,000 is or what the bot thinks of *a specific event*). Let your users know that it’s a bot that they are talking to and that it’s here to help. This way, the user has no incentive to intentionally trip up the bot. ICS.ai have created many custom bots for some of the leading UK public sector organisations like county councils, local governments and healthcare trusts. Their conversational AI chat bots are custom designed by name, appearance and language according to customer needs. Chatbot examples Below are a few examples of chatbots with matching personalities. Expand your vocabulary with a word a day (Wordsworth) The Wordsworth bot has a personality of an owl (something clever), which fits very well with the purpose of the bot: to enrich the user’s vocabulary. However, we can see that this bot has more than just an owl as its ‘presenter’, pay attention to the language and word games and even the joke at the end. Jokes are a great way to deliver personality. From these two screenshots only, we can easily capture a specific image of this bot, what it represents and what it’s here to do. DIY-Crafts-Handmade FB Messenger bot The DIY-Crafts-Handmade bot has a different personality, which signals something light and fun. The language used is much more conversational (and less didactic) and there’s a lot of usage of icons and emojis. It’s clear that this bot was created for girls/women and offers the end user a close ‘friend’ to help them maximize the time they spend at home with the kids or just start some DIY projects. Voicebot examples One of the limitations around today’s voice-enabled devices is the voice itself. Whereas Google and Siri do offer a couple of voices to choose from, Alexa is limited to only one voice and it’s very difficult to create that personality that we are looking for. While this problem probably will be solved in the future, as technology improves, I find insurance company GEICO’s creativity around that very inspiring. In its effort to keep Gecko’s unique voice and personality, GEICO has incorporated multiple MP3 files with a recording of Gecko’s personalized voice. https://www.youtube.com/watch?v=11qo9a1lgBE GEICO has been investing for years in Gecko’s personalization. Gecko is very familiar from TV and radio advertisements, so when a customer activates the Alexa app or Google Action, they know they are in the right place. 
To make this successful, GEICO incorporated Gecko’s voice into various (non-dynamic) messages and greetings. It also handled the transition back to the device’s generic voice very nicely; after Gecko has greeted the user and provided information on what they can do, it hands it back to Alexa with every question from the user by saying, “My friend here can help you with that.” This is a great example of a cross-channel brand personality that comes to life also on automated solutions such as chatbots and voice-bots. Build an omnichannel solution – find your tool Think less on the design side and more on the strategic side, remember that new devices are not replacing old devices; they are only adding to the big basket of channels that you must support. Users today are looking for different services anywhere and anytime. Providing a similar level of service on all the different channels is not an easy task, but it will play a big part in the success of your application. There are different reasons for this. For instance, you might see a spike in requests coming from home devices such as Amazon Echo and Google Home during the early morning and late at night. However, during the day you will receive more activities from FB Messenger or your intelligent assistant. Different age groups also consume products from different channels and, of course, geography impacts as well. Providing cross-channel/omnichannel support doesn’t mean providing different experiences or capabilities. However, it does mean that you need to make that extra effort to identify the added value of each solution, in order to provide a premium, or at least the most advanced, experience on each channel. Building an omnichannel solution for voice and chat Obviously, there are differences between a chatbot and a voice-bot interaction; we talk differently to how we write and we can express ourselves with emojis while transferring our feelings with voice is still impossible. There are even differences between various voice-enabled devices, like Amazon Alexa and Google Assistant/Home and, of course, Apple’s HomePod. There are technical differences but also behavioral ones. The HomePod offers a set of limited use cases that businesses can connect with, whereas Amazon Alexa and Google Home let us create our own use cases freely. In fact, there are differences between various Amazon Echo devices, like the Alexa Show that offers a complimentary screen and the Echo Dot that lacks in screen and sound in comparison. There are some developer tools today that offer multi-channel integration to some devices and channels. They are highly recommended from a short and long-term perspective. Those platforms let bot designers and bot builders focus on the business logic and structure of their bots, while all the integration efforts are taken care of automatically. Some of those platforms focus on chat and some of them on voice. A few tools offer a bridge between all the automated channels or devices. Among those platforms, you can find Conversation.one (disclaimer: I’m one of the founders), Dexter and Jovo. With all that in mind, it is clear that developing a good conversational application is not an easy task. Developers must prove profound knowledge of machine learning, voice recognition, and natural language processing. In addition to that, it requires highly sophisticated and rare skills, that are extremely dynamic and flexible. 
In such a high-risk environment, where today’s top trends can skyrocket in days or simply be crushed in just a few months, any initial investment can be dicey. To know more trips and tricks to make a successful chatbot or voice-bot, read the book Voicebot and Chatbot Design by Rachel Batish. Creating a chatbot to assist in network operations [Tutorial] Building your first chatbot using Chatfuel with no code [Tutorial] Conversational AI in 2018: An arms race of new products, acquisitions, and more


Using Autoencoders for detecting credit card fraud [Tutorial]

Guest Contributor
07 Mar 2019
12 min read
Autoencoders, which are one of the important types of generative models, have some interesting properties that can be exploited for applications like detecting credit card fraud. In this article, we will use autoencoders for detecting credit card fraud. This is an excerpt from the book Machine Learning for Finance written by Jannes Klaas. This book introduces the study of machine learning and deep learning algorithms for financial practitioners. We will use a new dataset, which contains records of actual credit card transactions with anonymized features. The dataset does not lend itself to much feature engineering, so we will have to rely on end-to-end learning methods to build a good fraud detector. You can find the dataset here, and the notebook with an implementation of an autoencoder and variational autoencoder here.

Loading data from the dataset

As usual, we first load the data. The Time feature shows the absolute time of the transaction, which makes it a bit hard to deal with here, so we will just drop it.

import pandas as pd

df = pd.read_csv('../input/creditcard.csv')
df = df.drop('Time', axis=1)

We separate the X data on the transaction from the classification of the transaction and extract the NumPy array that underlies the pandas DataFrame.

X = df.drop('Class', axis=1).values
y = df['Class'].values

Feature scaling

Now we need to scale the features. Feature scaling makes it easier for our model to learn a good representation of the data. Here we scale all features to lie between zero and one, which ensures that there are no very high or very low values in the dataset. But beware that this method is susceptible to outliers influencing the result. For each column, we first subtract the minimum value, so that the new minimum value becomes zero. We then divide by the maximum value, so that the new maximum value becomes one. By specifying axis=0 we perform the scaling column-wise.

X -= X.min(axis=0)
X /= X.max(axis=0)

Finally, we split our data:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

The input for our encoder now has 29 dimensions, which we compress down to 12 dimensions before aiming to restore the original 29-dimensional output.

from keras.models import Model
from keras.layers import Input, Dense

You will notice that we are using the sigmoid activation function at the end. This is only possible because we scaled the data to have values between zero and one. We are also using a tanh activation for the encoded layer. This is just a style choice that worked well in experiments and ensures that encoded values are all between minus one and one. You might use different activation functions depending on your needs. If you are working with images or deeper networks, a relu activation is usually a good choice. If you are working with a shallower network, as we are doing here, a tanh activation often works well.

data_in = Input(shape=(29,))
encoded = Dense(12, activation='tanh')(data_in)
decoded = Dense(29, activation='sigmoid')(encoded)
autoencoder = Model(data_in, decoded)

We use a mean squared error loss. Using a sigmoid activation with a mean squared error loss is a bit of an unusual choice at first, yet it makes sense. Most people think that sigmoid activations have to be used with a cross-entropy loss, but cross-entropy loss encourages values to be either zero or one and works well for classification tasks where this is the case. In our credit card example, however, most values will be around 0.5.
Mean squared error is better at dealing with values where the target is not binary, but on a spectrum.

autoencoder.compile(optimizer='adam', loss='mean_squared_error')

After training, the autoencoder converges to a low loss.

autoencoder.fit(X_train, X_train, epochs=20, batch_size=128, validation_data=(X_test, X_test))

The reconstruction loss is low, but how do we know whether our autoencoder is doing a good job? Once again, visual inspection to the rescue. Humans are very good at judging things visually, but not very good at judging abstract numbers. We will first make some predictions, in which we run a subset of our test set through the autoencoder.

pred = autoencoder.predict(X_test[0:10])

We can then plot individual samples. The code below produces an overlaid bar chart comparing the original transaction data with the reconstructed transaction data.

import matplotlib.pyplot as plt
import numpy as np

width = 0.8
prediction = pred[9]
true_value = X_test[9]
indices = np.arange(len(prediction))

fig = plt.figure(figsize=(10,7))
plt.bar(indices, prediction, width=width,
        color='b', label='Predicted Value')
plt.bar([i+0.25*width for i in indices], true_value,
        width=0.5*width, color='r', alpha=0.5, label='True Value')
plt.xticks(indices+width/2.,
        ['V{}'.format(i) for i in range(len(prediction))])
plt.legend()
plt.show()

Autoencoder reconstruction vs original data

As you can see, our model does a fine job of reconstructing the original values. The visual inspection gives more insight than an abstract number.

Visualizing latent spaces with t-SNE

We now have a neural network that takes in a credit card transaction and outputs a credit card transaction that looks more or less the same. But that is, of course, not why we built the autoencoder. The main advantage of an autoencoder is that we can now encode the transaction into a lower-dimensional representation that captures the main elements of the transaction. To create the encoder model, all we have to do is define a new Keras model that maps from the input to the encoded state:

encoder = Model(data_in, encoded)

Note that you don't need to train this model again. The layers keep the weights from the autoencoder that we trained before. To encode our data, we now use the encoder model:

enc = encoder.predict(X_test)

But how would we know whether these encodings contain any meaningful information about fraud? Once again, the visual representation is key. While our encodings are lower dimensional than the input data, they still have twelve dimensions. It is impossible for humans to think about 12-dimensional space, so we need to draw our encodings in a lower-dimensional space while still preserving the characteristics we care about. In our case, the characteristic we care about is proximity. We want points that are close to each other in the 12-dimensional space to be close to each other in the two-dimensional plot. More precisely, we care about the neighborhood: we want the points that are closest to each other in the high-dimensional space to also be closest to each other in the low-dimensional space. Preserving neighborhood is relevant because we want to find clusters of fraud. If we find that fraudulent transactions form a cluster in our high-dimensional encodings, we can use a simple check of whether a new transaction falls into the fraud cluster to flag it as fraudulent.
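Before moving on to t-SNE, here is a minimal sketch of an even more direct flagging rule based on reconstruction error. It is an illustration added here rather than part of the book excerpt; it reuses the autoencoder and test split defined above, and the 99th-percentile threshold is an arbitrary assumption:

```python
import numpy as np

# Transactions the autoencoder reconstructs poorly are candidates for fraud.
reconstructions = autoencoder.predict(X_test)
mse = np.mean(np.square(X_test - reconstructions), axis=1)

threshold = np.percentile(mse, 99)   # assumption: flag the worst-reconstructed 1%
flagged = mse > threshold
print('Flagged', flagged.sum(), 'of', len(flagged), 'transactions as suspicious')
```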
A popular method to project high-dimensional data into low-dimensional plots while preserving neighborhoods is called t-distributed stochastic neighbor embedding, or t-SNE. In a nutshell, t-SNE aims to faithfully represent the probability that two points are neighbors in a random sample of all points. That is, it tries to find a low-dimensional representation of the data in which points in a random sample have the same probability of being closest neighbors as in the high-dimensional data.

How t-SNE measures similarity

The t-SNE algorithm follows these steps:

1. Calculate the Gaussian similarity between all points. This is done by calculating the Euclidean (spatial) distance between points and then evaluating a Gaussian curve at that distance. The Gaussian similarity of point j to point i can be written as

p_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}

where \sigma_i controls the width (variance) of the Gaussian centered on point i. We will look at how to determine this variance later. Note that since the similarity between points i and j is scaled by the sum over distances to all other points (indexed by k), the similarity of j to i, p_{j|i}, can be different from the similarity of i to j, p_{i|j}. Therefore, we average the two similarities to obtain the final similarity that we work with going forward,

p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}

where n is the number of data points.

2. Randomly position the data points in the lower-dimensional space.

3. Calculate the t-similarity q_{ij} between all points in the lower-dimensional space.

4. Just like in training neural networks, we optimize the positions of the data points in the lower-dimensional space by following the gradient of a loss function. The loss function, in this case, is the Kullback-Leibler (KL) divergence between the similarities in the higher- and lower-dimensional spaces. We will give the KL divergence a closer look in the section on variational autoencoders. For now, just think of it as a way to measure the difference between two distributions. The derivative of the loss function with respect to the position y_i of data point i in the lower-dimensional space is

\frac{\partial C}{\partial y_i} = 4 \sum_j (p_{ij} - q_{ij})(y_i - y_j)(1 + \lVert y_i - y_j \rVert^2)^{-1}

5. Adjust the data points in the lower-dimensional space using gradient descent, moving points that were close in the high-dimensional data closer together and moving points that were further away further from each other. You will recognize this as a form of gradient descent with momentum, as the previous gradient is incorporated into the position update.

The t-distribution used always has one degree of freedom. The choice of one degree of freedom leads to a simpler formula as well as some nice numerical properties that lead to faster computation and more useful charts.

The standard deviation of the Gaussian distribution can be influenced by the user with a perplexity hyperparameter. Perplexity can be interpreted as the number of neighbors we expect a point to have. A low perplexity value emphasizes local proximities, while a large perplexity value emphasizes global structure. Mathematically, perplexity can be calculated as

Perp(P_i) = 2^{H(P_i)}

where P_i is the probability distribution over the positions of all data points induced by point i and H(P_i) is the Shannon entropy of this distribution, calculated as

H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}

While the details of this formula are not very relevant to using t-SNE, it is important to know that t-SNE performs a search over values of the standard deviation \sigma_i so that it finds a distribution P_i for which the entropy over our data gives our desired perplexity.
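As a quick numerical check of the perplexity definition, here is a small illustration added for this article (not from the book): for a uniform distribution over k neighbors, the entropy is log2(k), so the perplexity is exactly k, which is why perplexity is read as an effective number of neighbors.

```python
import numpy as np

def perplexity(p):
    """Perplexity of a discrete distribution p, defined as 2 ** Shannon entropy."""
    p = p[p > 0]                        # ignore zero-probability entries
    entropy = -np.sum(p * np.log2(p))
    return 2.0 ** entropy

uniform_over_30 = np.full(30, 1.0 / 30)
print(perplexity(uniform_over_30))      # ~30.0 -> roughly "30 effective neighbors"
```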
In other words, you need to specify the perplexity by hand, but what that perplexity means also depends on your dataset. Van der Maaten and Hinton, the inventors of t-SNE, report that the algorithm is relatively robust to choices of perplexity between five and 50. The default value in most libraries is 30, which is a fine value for most datasets. If you find that your visualizations are not satisfactory, tuning the perplexity value is probably the first thing you want to do.

For all the math involved, using t-SNE is surprisingly simple. scikit-learn has a handy t-SNE implementation that we can use just like any other algorithm in scikit. We first import the TSNE class and then create a new TSNE instance. We define that we want to train for 5,000 iterations, use the default perplexity of 30 and the default learning rate of 200, and we also specify that we would like output during the training process. We then just call fit_transform, which transforms our 12-dimensional encodings into two-dimensional projections.

from sklearn.manifold import TSNE
tsne = TSNE(verbose=1, n_iter=5000)
res = tsne.fit_transform(enc)

As a word of warning, t-SNE is quite slow, as it needs to compute the distances between all the points. By default, sklearn uses a faster version of t-SNE called the Barnes-Hut approximation, which is not as precise but significantly faster. There is a faster Python implementation of t-SNE that can be used as a drop-in replacement for sklearn's implementation. It is not as well documented, however, and has fewer features. You can find the faster implementation with installation instructions here.

We can plot our t-SNE results as a scatter plot. For illustration, we will distinguish frauds from non-frauds by color, with frauds plotted in red and non-frauds plotted in blue. Since the actual values of t-SNE do not matter as much, we will hide the axes.

fig = plt.figure(figsize=(10,7))
scatter = plt.scatter(res[:,0], res[:,1], c=y_test, cmap='coolwarm', s=0.6)
scatter.axes.get_xaxis().set_visible(False)
scatter.axes.get_yaxis().set_visible(False)

t-SNE plot of the credit card autoencoder encodings

For easier spotting, the cluster containing most frauds is marked with a circle. You can see that the frauds nicely separate from the rest of the transactions. Clearly, our autoencoder has found a way to distinguish frauds from genuine transactions without being given labels. This is a form of unsupervised learning. In fact, plain autoencoders perform an approximation of PCA, which is useful for unsupervised learning. In the chart, you can also see a few more clusters that are clearly separate from the other transactions but which are not frauds. Using autoencoders and unsupervised learning, it is possible to separate and group our data in ways we did not even think about before.

Summary

In this article, we have learned about one of the most important types of generative models: autoencoders. We used autoencoders to detect credit card fraud.

Implementing Autoencoders using H2O
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
What are generative adversarial networks (GANs) and how do they work? [Video]


5 useful Visual Studio Code extensions for Angular developers

Aditya Modhi
06 Mar 2019
5 min read
Visual Studio Code has become a very popular code editor for Angular developers, particularly those running the Angular CLI. Features such as syntax highlighting and autocomplete, the ability to debug code right in the editor, built-in Git commands, and support for extensions make VSCode one of the most popular code editors around.

Image source: TRIPLEBYTE

In this post, I am going to look at 5 VSCode extensions that are useful for Angular developers.

#1 angular2-switcher

If you have an Angular CLI application running on your localhost, the app folder contains the app component that is dynamically generated by the Angular CLI. As an Angular developer, you will be working on this component and quite often switching between the HTML, CSS, and TS files. When the application grows, you'll be switching between these three files of the individual components a lot more. For that, there is a useful extension called angular2-switcher. If you install this extension, you get keyboard shortcuts to quickly navigate between the individual files:

app.component.html: Shift+Alt+u
app.component.css: Shift+Alt+i
app.component.ts: Shift+Alt+o
app.component.spec.ts: Shift+Alt+p

The list above shows the four keyboard shortcuts to switch between the CSS and HTML files, the TS file for testing, and the TS file of the component itself. The letters u, i, o, and p are very close together, which makes it fast to switch between the individual files.

#2 Angular Language Service

In Angular, if you add a name to the app component and try to render it inside the HTML template, VSCode won't offer the name in auto-completion out of the box and needs an extension for that added functionality. As an Angular developer, you want access to the inside of a template. You can use the Angular Language Service extension, which will add auto-completion. If you enable it and go back to the HTML file, you'll see the name appear in the autocomplete list as soon as you start typing. The same happens for the title and, for that matter, anything else that is created inside the app component; you have access to it inside the template. If you create a simple function that returns a string, then you'll have access to it as well, thanks to the Angular Language Service extension.

#3 json2ts

The other thing you will work with very often in Angular is endpoints that return JSON data. For the JSON data, you will need to create a TypeScript interface. You can do it manually, but if you have a massive object, it would take you some time. Luckily, a VSCode extension can automate this for you. json2ts isn't Angular-specific and works whenever you're working with TypeScript. json2ts comes in handy when you have to create a TypeScript interface from a JSON object.

#4 Bookmarks

Bookmarks comes in handy when you're working with long files. If you want to work on a little block of code but need to check something at the top and then go back to the place you were before, Bookmarks lets you put down a little marker by pressing Alt+Ctrl+K; you'll see a blue marker at that place. If you go to the top of the code where all your variables are stored, you can do the same thing with Alt+Ctrl+K. You can then use Alt+Ctrl+J and Alt+Ctrl+L to jump between these two markers. When you're working on a longer file and want to quickly jump to a specific section, you can put down as many of these markers as you like.

Set/remove bookmark: Alt+Ctrl+K
Go to previous bookmark: Alt+Ctrl+J
Go to next bookmark: Alt+Ctrl+L

There are more shortcuts than this.
You can go to the menu, type in "bookmarks", and you'll see all the other keyboard shortcuts related to this extension. Setting, removing, and going to the next and previous bookmark are the most useful shortcuts.

#5 Guides

I'm sure you have come across this issue: you're looking at a long block of HTML and wondering, where does this tag start and end? Which div is it closing? Wouldn't it be nice to have some connection between the opening and closing tags? You need some sort of guide, and that's exactly what Guides does. After installing the Guides extension, vertical lines connect the opening and closing divs and help you visualize correct indentation, as shown below. Guides has many settings as well; you can change the colors or the thickness of the lines, for example.

These VSCode extensions improve the Angular development workflow, and I believe you will find them useful too. I know there are many more useful extensions that you use every day. I would love to hear about them.

Author Bio

Aditya Modi is the CEO of TOPS Infosolutions, a mobile and web development company. With the right allocation of resources and emerging technology, he provides innovative solutions to businesses worldwide to solve their business and engineering problems. An avid reader, he values great books and calls them his source of motivation. You may reach out to him on LinkedIn.

The January 2019 release of Visual Studio Code v1.31 is out
React Native vs Ionic: Which one is the better mobile app development framework?
Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019

Top reasons why businesses should adopt enterprise collaboration tools

Guest Contributor
05 Mar 2019
8 min read
Following the trends of the modern digital workplace, organizations apply automation even to the domains that are intrinsically human-centric. Collaboration is one of them. And if we can say that organizations have already gained broad experience in digitizing business processes while foreseeing potential pitfalls, the situation is different with collaboration. The automation of collaboration processes can bring a significant number of unexpected challenges even to those companies that have tested the waters. State of Collaboration 2018 reveals a curious fact: even though organizations can be highly involved in collaborative initiatives, employees still report that both they and their companies are poorly prepared to collaborate. Almost a quarter of respondents (24%) affirm that they lack relevant enterprise collaboration tools, while 27% say that their organizations undervalue collaboration and don't offer any incentives for them to support it. Two reasons can explain these stats: The collaboration process can be hardly standardized and split into precise workflows. The number of collaboration scenarios is enormous, and it’s impossible to get them all into a single software solution. It’s also pretty hard to manage collaboration, assess its effectiveness, or understand bottlenecks. Unlike business process automation systems that play a critical role in an organization and ensure core production or business activities, enterprise collaboration tools are mostly seen as supplementary solutions, so they are the last to be implemented. Moreover, as organizations often don’t spend much effort on adapting collaboration tools to their specifics, the end solutions are frequently subject to poor adoption. At the same time, the IT market offers numerous enterprise collaboration tools Slack, Trello, Stride, Confluence, Google Suite, Workplace by Facebook, SharePoint and Office 365, to mention a few, compete to win enterprises’ loyalty. But how to choose the right enterprise Collaboration tools and make them effective? Or how to make employees use the implemented enterprise Collaboration tools actively? To answer these questions and understand how to succeed in their collaboration-focused projects, organizations have to examine both tech- and employee-related challenges they may face. Challenges rooted in technologies From the enterprise Collaboration tools' deployment model to its customization and integration flexibility, companies should consider a whole array of aspects before they decide which solution they will implement. Selecting a technologically suitable solution Finding a proper solution is a long process that requires companies to make several important decisions: Cloud or on-premises? By choosing the deployment type, organizations define their future infrastructure to run the solution, required management efforts, data location, and the amount of customization available. Cloud solutions can help enterprises save both technical and human resources. However, companies often mistrust them because of multiple security concerns. On-premises solutions can be attractive from the customization, performance, and security points of view, but they are resource-demanding and expensive due to high licensing costs. Ready-to-use or custom? Today many vendors offer ready-made enterprise collaboration tools, particularly in the field of enterprise intranets. This option is attractive for organizations because they can save on customizing a solution from scratch. 
However, with ready-made products, organizations can face a bigger risk of following a vendor’s rigid politics (subscription/ownership price, support rates, functional capabilities, etc.). If companies choose custom enterprise collaboration software, they have a wider choice of IT service providers to cooperate with and adjust their solutions to their needs. One tool or several integrated tools? Some organizations prefer using a couple of apps that cover different collaboration needs (for example, document management, video conferencing, instant messaging). At the same time, companies can also go for a centralized solution, such as SharePoint or Office 365 that can support all collaboration types and let users create a centralized enterprise collaboration environment. Exploring integration options Collaboration isn’t an isolated process. It is tightly related to business or organizational activities that employees do. That’s why integration capabilities are among the most critical aspects companies should check before investing in their collaboration stack. Connecting an enterprise Collaboration tool to ERP, CRM, HRM, or ITSM solutions will not only contribute to the business process consistency but will also reduce the risk of collaboration gaps and communication inconsistencies. Planning ongoing investment Like any other business solution, an enterprise collaboration tool requires financial investment to implement, customize (even ready-made solutions require tuning), and support it. The initial budget will strongly depend on the deployment type, the estimated number of users, and needed customizations. While planning their yearly collaboration investment, companies should remember that their budgets should cover not only the activities necessary to ensure the solution’s technical health but also a user adoption program. Eliminating duplicate functionality Let’s consider the following scenario: a company implements a collaboration tool that includes the project management functionality, while they also run a legacy project management system. The same situation can happen with time tracking, document management, knowledge management systems, and other stand-alone solutions. In this case, it will be reasonable to consider switching to the new suite completely and depriving the legacy one. For example, by choosing SharePoint Server or Online, organizations can unite various functions within a single solution. To ensure a smooth transition to a new environment, SharePoint developers can migrate all the data from legacy systems, thus making it part of the new solution. Choosing a security vector As mentioned before, the solution’s deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds enterprises’ collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. SharePoint and Office 365 trends 2018 show that security represents the major worry for organizations that consider changing their on-premises deployments for cloud environments. What’s even more surprising is that while software providers, like Microsoft, are continually improving their security measures, the degree of concern keeps on growing. The report mentioned above reveals that 50% of businesses were concerned about security in 2018 compared to 36% in 2017 and 32% in 2016. 
Human-related challenges

Technology challenges are multiple, but they can all be solved quite quickly, especially if a company partners with a professional IT service provider that backs them up at the tech level. At the same time, companies should be ready to face employee-related barriers that may ruin their collaboration effort.

Changing employees' typical style of collaboration

Don't expect that your employees will welcome the new collaboration solution. It is about to change their typical collaboration style, which may be difficult for many. Some employees won't share their knowledge openly, while others will find it difficult to switch from one-to-one discussions to digitized team meetings. In this context, change management should work at two levels: a technological one and a mental one. Companies should not just explain to employees how to use the new solution effectively, but also show each team how to adapt the collaboration system to the needs of each team member without damaging the usual collaboration flow.

Finding the right tools for collaborators and non-collaborators

Every team consists of different personalities. Some people can be open to collaboration; others can be quite hesitant. The task is to ensure productive co-working between these two very different types of employees and everyone in between. Teams shouldn't expect instant collaboration consistency or general satisfaction. These are only possible to achieve if the entire team works together to create an optimal collaboration area for each individual.

Launching digital collaboration within large distributed teams

When it comes to organizing collaboration within a small or medium-sized team, collaboration difficulties can be quite simple to avoid, as the collaboration flow is moderate. But when it comes to collaboration in big teams, the risk of failure increases dramatically. Organizing effective communication between remote employees, connecting distributed offices, offering relevant collaboration areas to the entire team and subteams, and enabling cross-device consistency of collaboration are just a few of the steps to undertake for effective teamwork.

Preparing strategies to overcome adoption difficulties

The biggest human-related challenge is the poor adoption of an enterprise collaboration system. It can be hard for employees to get used to the new solution and accept the new communication medium, its UI, and its logic. Adoption issues are critical to address because they may engender more severe consequences than the tech-related ones. Say there is a functional defect in a solution: a company can fix it within a few days. However, if there are adoption issues, all the effort an organization puts into polishing the technology can be blown away because their employees don't use the solution at all. Ongoing training and communication between the collaboration manager and particular teams is a must to keep employees satisfied with the solution they use.

Is there more pain than gain?

On recognizing all these challenges, companies might feel that there are too many barriers to overcome to get a decent collaboration solution. So maybe it's reasonable to stay away from the collaboration race? Is that the case? Not really. If you take a look at Internet Trends 2018, you will see that there are multiple improvements that companies gain as they adopt enterprise collaboration tools.
Typical advantages include reduced meeting time, quicker onboarding, less time required for support, more effective document management, and a substantial rise in teams’ productivity. If your company wants to get all these advantages, be brave to face the possible collaboration challenges to get a great reward. Author Bio Sandra Lupanova is SharePoint and Office 365 Evangelist at Itransition, a software development and IT consulting company headquartered in Denver. Sandra focuses on the SharePoint and Office 365 capabilities, challenges that companies face while adopting these platforms, as well as shares practical tips on how to improve SharePoint and Office 365 deployments through her articles.


Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten - find it, and you could get $100,000

Richard Gall
05 Mar 2019
3 min read
In theory, stealing cryptocurrency should be impossible. But a mystery has emerged that seems to throw all that into question and even suggests a bigger, much stranger conspiracy. Gerald Cotten, the founder of cryptocurrency exchange QuadrigaCX, died in December in India. He was believed to have left $136 million USD worth of crypto-cash in 'cold wallets' on his own laptop, to which only he had access. However, investigators from EY, who have been working on closing QuadrigaCX following Cotten's death, were surprised to find that the wallets were empty. In fact, it's believed the crypto-cash had disappeared from them months before Cotten died.

A cryptocurrency mystery now involving the FBI

The only lead in this mystery is the fact that the EY investigators have found other user accounts that appear to be linked to Gerald Cotten. There's a chance that Cotten used these to trade on his own exchange, but the nature of these trades remains a little unclear. To add to the intrigue, Fortune reported yesterday that the FBI is working with Canada's Mounted Police Force to investigate the missing money. This information came from Jesse Powell, CEO of another cryptocurrency company called Kraken. Powell told Fortune that both the FBI and the Mounted Police have been in touch with him about the mystery surrounding QuadrigaCX. Powell has offered a reward of $100,000 to anyone who can locate the missing cryptocurrency funds.

So what actually happened to Gerald Cotten and his crypto-cash?

The story has many layers of complexity. There are rumors that Cotten faked his own death. For example, Cotten filed a will just 12 days before his death, leaving a significant amount of wealth and assets to his wife. And while sources from the hospital in India where Cotten is believed to have died say he died of cardiac arrest, as Fortune explains, "Cotten's body was handled by hotel staff after an embalmer refused to receive it" - something which is, at the very least, strange. It should be noted that there is certainly no clear evidence that Cotten faked his own death - only missing pieces that encourage such rumors. A further subplot - one that might or might not be useful in cracking this case - emerged late last week when Canada's Globe and Mail reported that QuadrigaCX's co-founder has a history of identity theft and of using digital currencies to launder money.

Where could the money be?

There is, as you might expect, no shortage of theories about where the cash could be. A few days ago, it was suggested that it might be possible to locate Cotten's Ethereum funds: a blog post by James Edwards, the editor of the cryptocurrency blog zerononcense, claimed that Ethereum linked to QuadrigaCX can be found in Bitfinex, Poloniex, and Jesse Powell's Kraken. "It appears that a significant amount of Ethereum (600,000+ ETH) was transferred to these exchanges as a means of 'storage' during the years that QuadrigaCX was in operation and offering Ethereum on their exchange," Edwards writes. Edwards is keen for his findings to be the starting point for a clearer line of inquiry, free from speculation and conspiracy. He wrote that he hoped it would be "a helpful addition to the QuadrigaCX narrative, rather than a conspiratorial piece that speculates on whether the exchange or its owners have been honest."


New programming video courses for March 2019

Richard Gall
04 Mar 2019
6 min read
It’s not always easy to know what to learn next if you’re a programmer. Industry shifts can be subtle but they can sometimes be dramatic, making it incredibly important to stay on top of what’s happening both in your field and beyond. No one person can make that decision for you. All the thought leadership and mentorship in the world isn’t going to be able to tell you what’s right for you when it comes to your career. But this list of videos, released last month, might give you a helping hand as to where to go next when it comes to your learning… New data science and artificial intelligence video courses for March Apache Spark is carving out a big presence as the go-to software for big data. Two videos from February focus on Spark - Distributed Deep Learning with Apache Spark and Apache Spark in 7 Days. If you’re new to Spark and want a crash course on the tool, then clearly, our video aims to get you up and running quickly. However, Distributed Deep Learning with Apache Spark offers a deeper exploration that shows you how to develop end to end deep learning pipelines that can leverage the full potential of cutting edge deep learning techniques. While we’re on the subject of machine learning, other choice video courses for March include TensorFlow 2.0 New Features (we’ve been eagerly awaiting it and it finally looks like we can see what it will be like), Hands On Machine Learning with JavaScript (yes, you can now do machine learning in the browser), and a handful of interesting videos on artificial intelligence and finance: AI for Finance Machine Learning for Algorithmic Trading Bots with Python Hands on Python for Finance Elsewhere, a number of data visualization video courses prove that communicating and presenting data remains an urgent challenge for those in the data space. Tableau remains one of the definitive tools - you can learn the latest version with Tableau 2019.1 for Data Scientists and Data Visualization Recipes with Python and Matplotlib 3.   New app and web development video courses for March 2019 There are a wealth of video courses for web and app developers to choose from this month. True, Hands-on Machine Learning for JavaScript is well worth a look, but moving past the machine learning hype, there are a number of video courses that take a practical look at popular tools and new approaches to app and web development. Angular’s death has been greatly exaggerated - it remains a pillar of the JavaScript world. While the project’s versioning has arguably been lacking some clarity, if you want to get up to speed with where the framework is today, try Angular 7: A Practical Guide. It’s a video that does exactly what it says on the proverbial tin - it shows off Angular 7 and demonstrates how to start using it in web projects. We’ve also been seeing some uptake of Angular by ASP.NET developers, as it offers a nice complement to the Microsoft framework on the front end side. Our latest video on the combination, Hands-on Web Development with ASP.NET Core and Angular, is another practical look at an effective and increasingly popular approach to full-stack development. Other picks for March include Building Mobile Apps with Ionic 4, a video that brings you right up to date with the recent update that launched in January (interestingly, the project is now backed by web components, not Angular), and a couple of Redux videos - Mastering Redux and Redux Recipes. Redux is still relatively new. 
Essentially, it’s a JavaScript library that helps you manage application state - because it can be used with a range of different frameworks and libraries, including both Angular and React, it’s likely to go from strength to strength in 2019. Infrastructure, admin and security video courses for March 2019 Node.js is becoming an important library for infrastructure and DevOps engineers. As we move to a cloud native world, it’s a great tool for developing lightweight and modular services. That’s why we’re picking Learn Serverless App Development with Node.js and Azure Functions as one of our top videos for this month. Azure has been growing at a rapid rate over the last 12 months, and while it’s still some way behind AWS, Microsoft’s focus on developer experience is making Azure an increasingly popular platform with developers. For Node developers, this video is a great place to begin - it’s also useful for anyone who simply wants to find out what serverless development actually feels like. Read next: Serverless computing wars: AWS Lambda vs. Azure Functions A partner to this, for anyone beginning Node, is the new Node.js Design Patterns video. In particular, if Node.js is an important tool in your architecture, following design patterns is a robust method of ensuring reliability and resilience. Elsewhere, we have Modern DevOps in Practice, cutting through the consultancy-speak to give you useful and applicable guidance on how to use DevOps thinking in your workflows and processes, and DevOps with Azure, another video that again demonstrates just how impressive Azure is. For those not Azure-inclined, there’s AWS Certified Developer Associate - A Practical Guide, a video that takes you through everything you need to know to pass the AWS Developer Associate exam. There’s also a completely cloud-agnostic video course in the form of Creating a Continuous Deployment Pipeline for Cloud Platforms that’s essential for infrastructure and operations engineers getting to grips with cloud native development.     Learn a new programming language with these new video courses for March Finally, there are a number of new video courses that can help you get to grips with a new programming language. So, perfect if you’ve been putting off your new year’s resolution to learn a new language… Java 11 in 7 Days is a new video that brings you bang up to date with everything in the latest version of Java, while Hands-on Functional Programming with Java will help you rethink and reevaluate the way you use Java. Together, the two videos are a great way for Java developers to kick start their learning and update their skill set.  

Announcing Linux 5.0!

Melisha Dsouza
04 Mar 2019
2 min read
Yesterday, Linus Torvalds announced the stable release of Linux 5.0. This release comes with AMDGPU FreeSync support, Raspberry Pi touch screen support, and much more. According to Torvalds, "I'd like to point out (yet again) that we don't do feature-based releases, and that '5.0' doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes."

Features of Linux 5.0

- AMDGPU FreeSync support, which will improve the display of fast-moving images and will prove especially advantageous for gamers. According to CRN, this will also make Linux a better platform for dense data visualizations and will support "a dynamic refresh rate, aimed at providing a low monitor latency and a smooth, virtually stutter-free viewing experience."
- Support for the Raspberry Pi's official touch-screen. All information is copied into a memory-mapped area by the RPi's firmware, instead of using a conventional bus.
- An energy-aware scheduling feature that lets the task scheduler make scheduling decisions resulting in lower power usage on asymmetric SMP platforms. This feature targets Arm's big.LITTLE CPUs and helps achieve better power management in phones.
- Adiantum file system encryption for low-power devices.
- Btrfs support for swap files; the swap file must be fully allocated as "nocow", with no compression, on one device.
- Support for binderfs, a binder filesystem that will help run multiple instances of Android and is backward compatible.
- An improvement that reduces fragmentation by over 90%, which results in better transparent hugepage (THP) usage.
- Support for the Speculation Barrier (SB) instruction, introduced as part of the fallout from Spectre and Meltdown.

The merge window for 5.1 is now open. Read Linux's official documentation for the detailed list of upgraded features in Linux 5.0.

Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with cryptominers


Working on Jetson TX1 Development Board [Tutorial]

Amrata Joshi
03 Mar 2019
11 min read
When high-end visual computing and computer vision applications need to be deployed in real-life scenarios, then embedded development platforms are required, which can do computationally intensive tasks efficiently. Platforms such as Raspberry Pi can use OpenCV for computer vision applications and camera-interfacing capability, but it is very slow for real-time applications. Nvidia, which specializes in GPU manufacturing, has developed modules that use GPUs for computationally intensive tasks. These modules can be used to deploy computer vision applications on embedded platforms and include Jetson TK1, Jetson TX1, and Jetson TX2. Jetson TK1 is the preliminary board and contains 192 CUDA cores with the Nvidia Kepler GPU.  Jetson TX1 is intermediate in terms of processing speed, with 256 CUDA cores with Maxwell architecture, operating at 998 MHz along with ARM CPU. Jetson TX2 is highest in terms of processing speed and price. It comprises 256 CUDA cores with Pascal architecture operating at 1,300 MHz. This article is an excerpt taken from the book Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA. This book covers CUDA applications, threads, synchronization and memory, computer vision operations and more.  This article covers the Jetson TX1 Development Board, features and applications of the Jetson TX1 Development Board and basic requirements and steps to install JetPack on the Jetson TX1 Development Board. This article requires a good understanding of the Linux operating system (OS) and networking. It also requires any Nvidia GPU development board, such as Jetson TK1, TX1, or TX2. The JetPack installation file can be downloaded from Nvidia's official website. Jetson TX1 is a small system on a module developed specifically for demanding embedded applications. It is Linux-based and offers super-computing performance at the level of teraflops, which can be utilized for computer vision and deep learning applications. The Jetson TX1 module is shown in the following photograph: The size of the module is 50 x 87 mm, which makes it easy to integrate into any system. Nvidia also offers the Jetson TX1 Development Board, which houses this GPU for prototyping applications in a short amount of time. The whole development kit is shown in the following photograph: As can be seen from the photograph, apart from the GPU module, the development kit contains a camera module, USB ports, an Ethernet port, a heat sink, fan, and antennas. It is backed by a software ecosystem including JetPack, Linux for Tegra, CUDA Toolkit, cuDNN, OpenCV, and VisionWorks. This makes it ideal for developers who are doing research into deep learning and computer vision for rapid prototyping. The features of the Jetson TX1 development kit are explained in detail in the following section. Features of the Jetson TX1 The Jetson TX1 development kit has many features that make it ideal for super-computing tasks: It is a system on a chip built using 20 nm technology, and comprises an ARM Cortex A57 quad-core CPU operating at 1.73 GHz and a 256 core Maxwell GPU operating at 998 Mhz. It has 4 GB of DDR4 memory with a data bus of 64 bits working at a speed of 1,600 MHz, which is equivalent to 25.6 GB/s. It contains a 5 MP MIPI CSI-2 camera module. It supports up to six two lane or three four lane cameras at 1,220 MP/s. The development kit also has a normal USB 3.0 type A port and micro USB port for connecting a mouse, a keyboard, and USB cameras to the board. 
Applications of Jetson TX1

The Jetson TX1 can be used in many deep learning and computer vision applications that require computationally intensive tasks. Some of the areas and applications in which it can be used are as follows:

It can be used in building autonomous machines and self-driving cars for various computationally intensive tasks.
It can be used in various computer vision applications such as object detection, classification, and segmentation.
It can also be used in medical imaging for the analysis of MRI and computed tomography (CT) images.
It can be used to build smart video surveillance systems that can help in crime monitoring or traffic monitoring.
It can be used in bioinformatics and computational chemistry for simulations such as DNA gene sequencing, protein docking, and so on.
It can be used in various defense equipment where fast computing is required.

Installation of JetPack on Jetson TX1

The Jetson TX1 comes with a preinstalled Linux OS. The Nvidia drivers for it should be installed when it is booted for the first time. The commands to do this are as follows:

cd ${HOME}/NVIDIA-INSTALLER
sudo ./installer.sh

When the TX1 is rebooted after these two commands, the Linux OS with a user interface will start.

Nvidia offers a software development kit (SDK) called JetPack, which contains all of the software needed for building computer vision and deep learning applications, along with the target OS to flash onto the development board. The latest JetPack contains the Linux for Tegra (L4T) board support packages; TensorRT, which is used for deep learning inference in computer vision applications; the latest CUDA toolkit; cuDNN, which is a CUDA deep neural network library; VisionWorks, which is also used for computer vision and deep learning applications; and OpenCV. All of these packages are installed by default when you install JetPack. This section describes the procedure for installing JetPack on the board. The procedure is long, tedious, and a little complex for a newcomer to Linux, so follow the steps and screenshots in the following section carefully.

Basic requirements for installation

There are a few basic requirements for the installation of JetPack on the TX1:

JetPack can't be installed directly on the board, so a PC or virtual machine running Ubuntu 14.04 is required as a host PC. The installation has not been tested with the latest version of Ubuntu, but you are free to experiment with it.
The Jetson TX1 board needs peripherals such as a mouse, a keyboard, and a monitor, which can be connected to its USB and HDMI ports.
The Jetson TX1 board should be connected to the same router as the host machine via an Ethernet cable.
The installation will also require a micro USB to USB cable to connect the board to the PC, so that packages can be transferred to the board via serial transfer.
Note down the IP address of the board by checking the router configuration (an alternative way to find it is shown after this list).

If all requirements are satisfied, move on to the following section for the installation of JetPack.
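If you would rather not dig through the router's admin page, the board's IP address can also be read directly on the TX1 once it has booted with a keyboard and monitor attached. This is just an optional shortcut; the interface name eth0 is an assumption and may differ on your setup:

ip addr show eth0 | grep "inet "   # prints the IPv4 address assigned by the router
hostname -I                        # alternative: lists all addresses assigned to the board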
Steps for installation

This section describes the steps to install the latest JetPack version, accompanied by screenshots. All of the steps need to be executed on the host machine, which is running Ubuntu 14.04.

Download the latest JetPack version from the official Nvidia site by following the link https://developer.nvidia.com/embedded/jetpack and clicking on the download button, as shown in the following screenshot. JetPack 3.3 is used here to demonstrate the installation procedure; the name of the downloaded file is JetPack-L4T-3.3-linux-x64_b39.run. Create a folder on the Desktop named jetpack and copy this file into that folder, as shown in the following screenshot.

Start a Terminal in that folder by right-clicking and selecting the Open option. The file needs to be executed, so it must have execute permission; if it does not, change the permission (with chmod +x, for example) and then start the installer, as shown in the screenshot.

This will start the installation wizard for JetPack 3.3, as shown in the following screenshot. Just click on Next in this window.

The wizard will ask for the directories where the packages will be downloaded and installed. You can choose the current directory for installation and create a new folder in this directory for saving the downloaded packages, as shown in the following screenshot. Then click on Next.

The installation wizard will ask you to choose the development board on which the JetPack packages are to be installed. Select Jetson TX1, as shown in the following screenshot, and click on Next.

The components manager window will be displayed, which shows which packages will be downloaded and installed. It shows packages such as the CUDA Toolkit, cuDNN, OpenCV, and VisionWorks, along with the OS image, as shown in the following screenshot.

It will ask you to accept the license agreement, so click on Accept all, as shown in the following screenshot, and then click on Next.

It will start to download the packages, as shown in the following screenshot.

When all of the packages have been downloaded and installed, click on Next to complete the installation on the host. It will display the following window:

It will ask you to select a network layout describing how the board is connected to the host PC. The board and the host PC are connected to the same router, so select the first option, which tells the installer that the device accesses the internet via the same router or switch, as shown in the following screenshot, and then click Next.

It will ask for the interface used to connect the board to the network. We are using an Ethernet cable to connect the router to the board, so select the eth0 interface, as shown in the following screenshot.

This finishes the installation on the host, and a summary of the packages that will be transferred to and installed on the board is shown. When you click Next in this window, it will show you the steps to connect the board to the PC via the micro USB to USB cable and to boot the board in Force USB Recovery Mode. The window with the steps is shown as follows:

To go into force recovery mode, press the POWER button, then press the FORCE RECOVERY button and, while holding it, press and release the RESET button. Then release the FORCE RECOVERY button. The device will boot into force recovery mode.

Type the lsusb command in the Terminal window; if the board is connected correctly, the installer will start transferring packages to the device. If you are using a virtual machine, you will have to enable the device from the USB settings of the virtual machine. Also, select the USB 3.0 controller if it is not already selected.
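Before kicking off the transfer, it can help to confirm that the host actually sees the board in recovery mode. This is an optional check rather than part of the official procedure; an NVIDIA entry should appear in the lsusb output while recovery mode is active:

lsusb | grep -i nvidia   # an "NVidia Corp." device should be listed while the board is in recovery mode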
The process that starts after typing the lsusb command is shown as follows:

The process will flash the OS onto the device. This can take a long time, up to an hour, to complete. After the flashing has completed, the installer will ask you to reset the device and to provide an IP address for ssh. Enter the IP address noted down earlier, along with the default username and password, which are both ubuntu, and click Next. The following window will be displayed after that:

Click on Next and it will push all of the packages, such as the CUDA Toolkit, VisionWorks, OpenCV, and Multimedia, onto the device. The following window will be displayed:

After the process has completed, it will ask whether to delete all of the packages downloaded during the process. If you want to delete them, tick the checkbox; otherwise, leave it as it is, as shown in the following screenshot:

Click on Next and the installation process will be finished. Reboot the Jetson TX1 Development Board and it will boot into the normal Ubuntu OS. You will also find sample examples for all of the packages that have been installed.
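As an optional sanity check (not part of the official procedure), you can log in to the freshly flashed board over ssh and confirm that the CUDA toolkit is present. The placeholder <board-ip> stands for the IP address noted earlier, and the exact CUDA directory name depends on the JetPack release you installed:

# On the host: log in to the board (the default password is ubuntu)
ssh ubuntu@<board-ip>

# On the board: confirm that a CUDA toolkit directory was installed
ls /usr/local/ | grep cuda

# If nvcc is on the PATH, this prints the CUDA compiler version
nvcc --version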
This article introduced the Jetson TX1 Development Board for deploying computer vision and deep learning applications on embedded platforms. It also covered the features and applications of the Jetson TX1 Development Board, as well as the basic requirements and steps for installing JetPack on it. To know more about the Jetson TX1 and CUDA applications, check out the book Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA.

Read more:

NVIDIA makes its new “brain for autonomous AI machines”, Jetson AGX Xavier Module, available for purchase
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
NVIDIA launches GeForce Now’s (GFN) ‘recommended router’ program to enhance the overall performance and experience of GFN