How-To Tutorials

How to create a desktop application with Electron [Tutorial]

Bhagyashree R
06 Feb 2019
15 min read
Electron is an open source framework, created by GitHub, that lets you develop desktop executables that bring together Node and Chrome to provide a full GUI experience. Electron has been used for several well-known projects, including developer tools such as Visual Studio Code, Atom, and Light Table. Basically, you can define the UI with HTML, CSS, and JS (or using React, as we'll be doing), but you can also use all of the packages and functions in Node. So, you won't be limited to a sandboxed experience, and you'll be able to go beyond what you could do with just a browser.

This article is taken from the book Modern JavaScript Web Development Cookbook by Federico Kereki. This problem-solving guide teaches you popular problem-solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will look at how we can use Electron together with tools such as React and Node to create a native desktop application, which you can distribute to users.

Setting up Electron

We will start by installing Electron, and then, in the later recipes, we'll see how we can turn a React app into a desktop program. You can install Electron by executing the following command:

npm install electron --save-dev

Then, we'll need a starter JS file. Taking some tips from the main.js file, we'll create the following electron-start.js file:

// Source file: electron-start.js

/* @flow */

const { app, BrowserWindow } = require("electron");

let mainWindow;

const createWindow = () => {
  mainWindow = new BrowserWindow({
    height: 768,
    width: 1024
  });
  mainWindow.loadURL("http://localhost:3000");
  mainWindow.on("closed", () => {
    mainWindow = null;
  });
};

app.on("ready", createWindow);

app.on("activate", () => mainWindow === null && createWindow());

app.on(
  "window-all-closed",
  () => process.platform !== "darwin" && app.quit()
);

Here are some points to note regarding the preceding code snippet:

This code runs in Node, so we are using require() instead of import.
The mainWindow variable will point to the browser instance where our code will run.
We'll start by running our React app, so Electron will be able to load the code from http://localhost:3000.

In our code, we also have to process the following events:

"ready" is called when Electron has finished its initialization and can start creating windows.
"closed" means your window was closed; your app might have several windows open, so at this point, you should delete the closed one.
"window-all-closed" implies your whole app was closed. On Windows and Linux, this means quitting, but on macOS, you don't usually quit applications, because of Apple's usual rules.
"activate" is called when your app is reactivated, so if the window had been deleted (as on Windows or Linux), you have to create it again.

We already have our React app (you can find the React app in the GitHub repository) in place, so we just need a way to call Electron. Add the following script to package.json, and you'll be ready:

"scripts": {
    "electron": "electron .",
    . . .
}

How it works...

To run the Electron app in development mode, we have to do the following:

1. Run our restful_server_cors server code from the GitHub repository.
2. Start the React app, which requires the server to be running.
3. Wait until it's loaded, and then, and only then, move on to the next step.
4. Start Electron.
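For reference, the scripts section of package.json at this stage might look something like the following. This is only a sketch, not the book's exact file, and it assumes the React app was bootstrapped with Create React App, so that the start and build scripts come from react-scripts:

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "electron": "electron ."
}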
So, basically, you'll have to run the following two commands, but you'll need to do so in separate terminals:

// in the directory for our restful server:
node out/restful_server_cors.js

// in the React app directory:
npm start

// and after the React app is running, in another terminal:
npm run electron

After starting Electron, a screen quickly comes up, and we again find our countries and regions app, now running independently of a browser. The app works as always; as an example, I selected a country, Canada, and correctly got its list of regions. We are done! You can see that everything is interconnected, as before, in the sense that if you make any changes to the React source code, they will be instantly reflected in the Electron app.

Adding Node functionality to your app

In the previous recipe, we saw that with just a few small configuration changes, we can turn our web page into an application. However, you're still restricted in terms of what you can do, because you are still using only those features available in a sandboxed browser window. It doesn't have to be that way, though: you can add basically all of Node's functionality by using functions that let you go beyond the limits of the web. Let's see how to do it in this recipe.

How to do it

We want to add some functionality to our app of the kind that a typical desktop app would have. The key to adding Node functions to your app is to use the remote module in Electron. With it, your browser code can invoke methods of the main process, and thus gain access to extra functionality.

Let's say we wanted to add the possibility of saving the list of a country's regions to a file. We'd require access to the fs module to be able to write a file, and we'd also need to open a dialog box to select what file to write to. In our serviceApi.js file, we would add the following functions:

// Source file: src/regionsApp/serviceApi.js

/* @flow */

const electron = window.require("electron").remote;

. . .

const fs = electron.require("fs");

export const writeFile = fs.writeFile.bind(fs);

export const showSaveDialog = electron.dialog.showSaveDialog;

Having added this, we can now write files and show dialog boxes from our main code. To use this functionality, we could add a new action to our world.actions.js file:

// Source file: src/regionsApp/world.actions.js

/* @flow */

import {
  getCountriesAPI,
  getRegionsAPI,
  showSaveDialog,
  writeFile
} from "./serviceApi";

. . .

export const saveRegionsToDisk = () => async (
  dispatch: ({}) => any,
  getState: () => { regions: [] }
) => {
  showSaveDialog((filename: string = "") => {
    if (filename) {
      writeFile(
        filename,
        JSON.stringify(getState().regions),
        e => e && window.console.log(`ERROR SAVING ${filename}`, e)
      );
    }
  });
};

When the saveRegionsToDisk() action is dispatched, it will show a dialog to prompt the user to select what file is to be written, and will then write the current set of regions, taken from getState().regions, to the selected file in JSON format.
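As a small aside that is not part of the book's recipe: Electron's dialog.showSaveDialog() also accepts an options object before the callback (with properties such as title, defaultPath, and filters in the callback-style API used here), so the action could suggest a file name and limit the extension. A minimal sketch of that variant, reusing the same showSaveDialog and writeFile exports:

showSaveDialog(
  {
    title: "Save regions",
    defaultPath: "regions.json",
    filters: [{ name: "JSON files", extensions: ["json"] }]
  },
  (filename: string = "") => {
    if (filename) {
      writeFile(
        filename,
        JSON.stringify(getState().regions),
        e => e && window.console.log(`ERROR SAVING ${filename}`, e)
      );
    }
  }
);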
We just have to add the appropriate button to our <RegionsTable> component to be able to dispatch the necessary action:

// Source file: src/regionsApp/regionsTableWithSave.component.js

/* @flow */

import React from "react";
import PropTypes from "prop-types";

import "../general.css";

export class RegionsTable extends React.PureComponent<{
  loading: boolean,
  list: Array<{
    countryCode: string,
    regionCode: string,
    regionName: string
  }>,
  saveRegions: () => void
}> {
  static propTypes = {
    loading: PropTypes.bool.isRequired,
    list: PropTypes.arrayOf(PropTypes.object).isRequired,
    saveRegions: PropTypes.func.isRequired
  };

  static defaultProps = {
    list: []
  };

  render() {
    if (this.props.list.length === 0) {
      return <div className="bordered">No regions.</div>;
    } else {
      const ordered = [...this.props.list].sort(
        (a, b) => (a.regionName < b.regionName ? -1 : 1)
      );

      return (
        <div className="bordered">
          {ordered.map(x => (
            <div key={x.countryCode + "-" + x.regionCode}>
              {x.regionName}
            </div>
          ))}
          <div>
            <button onClick={() => this.props.saveRegions()}>
              Save regions to disk
            </button>
          </div>
        </div>
      );
    }
  }
}

We are almost done! When we connect this component to the store, we'll simply add the new action, as follows:

// Source file: src/regionsApp/regionsTableWithSave.connected.js

/* @flow */

import { connect } from "react-redux";

import { RegionsTable } from "./regionsTableWithSave.component";
import { saveRegionsToDisk } from "./world.actions";

const getProps = state => ({
  list: state.regions,
  loading: state.loadingRegions
});

const getDispatch = (dispatch: any) => ({
  saveRegions: () => dispatch(saveRegionsToDisk())
});

export const ConnectedRegionsTable = connect(
  getProps,
  getDispatch
)(RegionsTable);

How it works

The code we added showed how we could gain access to a Node package (fs, in our case) and some extra functions, such as showing a Save to disk dialog. When we run our updated app and select a country, we'll see our newly added button. Clicking on the button will pop up a dialog, allowing you to select the destination for the data. If you click Save, the list of regions will be written in JSON format, as we specified earlier in our saveRegionsToDisk() function.

Building a more windowy experience

In the previous recipe, we added the possibility of using any and all of the functions provided by Node. In this recipe, let's now focus on making our app more window-like, with icons, menus, and so on. We want the user to really believe that they're using a native app, with all the features that they would be accustomed to. The following list of interesting modules from the Electron APIs is just a short list of highlights, but there are many more options available:

clipboard: to do copy and paste operations using the system's clipboard
dialog: to show the native system dialogs for messages, alerts, opening and saving files, and so on
globalShortcut: to detect keyboard shortcuts
Menu, MenuItem: to create a menu bar with menus and submenus
Notification: to add desktop notifications
powerMonitor, powerSaveBlocker: to monitor power state changes, and to disable entering sleep mode
screen: to get information about the screen, displays, and so on
Tray: to add icons and context menus to the system's tray

Let's add a few of these functions so that we can get a better-looking app that is more integrated with the desktop.
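As a quick illustration of how little code these modules need, and purely as a sketch that is not part of the book's code base, the clipboard module could be used to copy the regions list to the system clipboard. It is assumed here that nodeIntegration is enabled in the window (the default in older Electron versions), so the module can be required directly from the browser code, following the same window.require() pattern used in serviceApi.js:

// Hypothetical helper (not in the book): copy the regions list to the clipboard.
const { clipboard } = window.require("electron");

export const copyRegionsToClipboard = (regions: Array<{}>) => {
  // writeText() places plain text on the system clipboard;
  // readText() returns whatever text the clipboard currently holds.
  clipboard.writeText(JSON.stringify(regions));
  return clipboard.readText();
};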
How to do it

Any decent app should probably have at least an icon and a menu, possibly with some keyboard shortcuts, so let's add those features now, and, just for the sake of it, let's also add some notifications for when regions are written to disk. Together with the Save dialog we already used, this means that our app will include several native windowing features.

To start with, let's add an icon. Showing an icon is the simplest thing, because it just requires an extra option when creating the BrowserWindow() object. I'm not very graphics-visual-designer oriented, so I just downloaded the Alphabet, letter, r Icon Free file from the Icon-Icons website. Implement the icon as follows:

mainWindow = new BrowserWindow({
  height: 768,
  width: 1024,
  icon: "./src/regionsApp/r_icon.png"
});

You can also choose icons for the system tray; although there's no way of using our regions app in that context, you may want to look into it nonetheless.

To continue, the second feature we'll add is a menu, with some global shortcuts to boot. In our App.regions.js file, we'll need to add a few lines to access the Menu module, and to define our menu itself:

// Source file: src/App.regions.js

. . .

import { getRegions } from "./regionsApp/world.actions";

. . .

const electron = window.require("electron").remote;
const { Menu } = electron;

const template = [
  {
    label: "Countries",
    submenu: [
      {
        label: "Uruguay",
        accelerator: "Alt+CommandOrControl+U",
        click: () => store.dispatch(getRegions("UY"))
      },
      {
        label: "Hungary",
        accelerator: "Alt+CommandOrControl+H",
        click: () => store.dispatch(getRegions("HU"))
      }
    ]
  },
  {
    label: "Bye!",
    role: "quit"
  }
];

const mainMenu = Menu.buildFromTemplate(template);
Menu.setApplicationMenu(mainMenu);

Using a template is a simple way to create a menu, but you can also do it manually, adding item by item. I decided to have a Countries menu with two options to show the regions for Uruguay and Hungary. The click property dispatches the appropriate action. I also used the accelerator property to define global shortcuts. See accelerator.md for the list of possible key combinations to use, including the following:

Command keys, such as Command (or Cmd), Control (or Ctrl), or both (CommandOrControl or CmdOrCtrl)
Alternate keys, such as Alt, AltGr, or Option
Common keys, such as Shift, Escape (or Esc), Tab, Backspace, Insert, or Delete
Function keys, such as F1 to F24
Cursor keys, including Up, Down, Left, Right, Home, End, PageUp, and PageDown
Media keys, such as MediaPlayPause, MediaStop, MediaNextTrack, MediaPreviousTrack, VolumeUp, VolumeDown, and VolumeMute

I also want to be able to quit the application. A complete list of roles is available in the Electron docs. With these roles, you can do a huge amount, including some specific macOS functions, along with the following:

Work with the clipboard (cut, copy, paste, and pasteAndMatchStyle)
Handle the window (minimize, close, quit, reload, and forceReload)
Zoom (zoomIn, zoomOut, and resetZoom)

To finish, and really just for the sake of it, let's add a notification trigger for when a file is written. Electron has a Notification module, but I opted to use node-notifier, which is quite simple to use. First, we'll add the package in the usual fashion:

npm install node-notifier --save

In serviceApi.js, we'll have to export the new function, so we'll be able to import it from elsewhere, as we'll see shortly:
// Source file: src/regionsApp/serviceApi.js

const electron = window.require("electron").remote;

. . .

export const notifier = electron.require("node-notifier");

Finally, let's use this in our world.actions.js file:

import {
  notifier,
  . . .
} from "./serviceApi";

With all our setup, actually sending a notification is quite simple, requiring very little code:

// Source file: src/regionsApp/world.actions.js

. . .

export const saveRegionsToDisk = () => async (
  dispatch: ({}) => any,
  getState: () => { regions: [] }
) => {
  showSaveDialog((filename: string = "") => {
    if (filename) {
      writeFile(filename, JSON.stringify(getState().regions), e => {
        if (e) {
          window.console.log(`ERROR SAVING ${filename}`, e);
        } else {
          notifier.notify({
            title: "Regions app",
            message: `Regions saved to ${filename}`
          });
        }
      });
    }
  });
};

How it works

First, we can easily check that the icon appears. Next, looking at the menu, it has our options, including the shortcuts. Then, if we select an option with either the mouse or the global shortcut, the screen correctly loads the expected regions. Finally, to see whether the notifications work as expected, we can click on the Save regions to disk button and select a file; a notification pops up confirming that the regions were saved.

Making a distributable package

Now that we have a full app, all that's left to do is package it up so that you can deliver it as an executable file for Windows, Linux, or macOS users.

How to do it

There are many ways of packaging an app, but we'll use a tool, electron-builder, that will make it even easier, if you can get its configuration right! First of all, we'll have to begin by defining the build configuration, and our initial step will be, as always, to install the tool:

npm install electron-builder --save-dev

To access the added tool, we'll require a new script, which we'll add in package.json:

"scripts": {
    "dist": "electron-builder",
    . . .
}

We'll also have to add a few more details to package.json, which are needed for the build process and the produced app. In particular, the homepage change is required, because the CRA-created index.html file uses absolute paths that won't work later with Electron:

"name": "chapter13",
"version": "0.1.0",
"description": "Regions app for chapter 13",
"homepage": "./",
"license": "free",
"author": "Federico Kereki",

Finally, some specific building configuration will be required. You cannot build for macOS with a Linux or Windows machine, so I'll leave that configuration out. We have to specify where the files will be found, what compression method to use, and so on:

"build": {
    "appId": "com.electron.chapter13",
    "compression": "normal",
    "asar": true,
    "extends": null,
    "files": [
        "electron-start.js",
        "build/**/*",
        "node_modules/**/*",
        "src/regionsApp/r_icon.png"
    ],
    "linux": {
        "target": "zip"
    },
    "win": {
        "target": "portable"
    }
}

We have completed the required configuration, but there are also some changes to make in the code itself to adapt it for building the package. When the packaged app runs, there won't be any webpack server running; the code will be taken from the built React package.
The starter code will require the following changes:

// Source file: electron-start.for.builder.js

/* @flow */

const { app, BrowserWindow } = require("electron");
const path = require("path");
const url = require("url");

let mainWindow;

const createWindow = () => {
  mainWindow = new BrowserWindow({
    height: 768,
    width: 1024,
    icon: path.join(__dirname, "./build/r_icon.png")
  });
  mainWindow.loadURL(
    url.format({
      pathname: path.join(__dirname, "./build/index.html"),
      protocol: "file",
      slashes: true
    })
  );
  mainWindow.on("closed", () => {
    mainWindow = null;
  });
};

app.on("ready", createWindow);

app.on("activate", () => mainWindow === null && createWindow());

app.on(
  "window-all-closed",
  () => process.platform !== "darwin" && app.quit()
);

Mainly, we are taking the icon and the code from the build/ directory. An npm run build command will take care of generating that directory, so we can proceed with creating our executable app.

How it works

After doing this setup, building the app is essentially trivial. Just run the script we defined earlier, and all the distributable files will be found in the dist/ directory:

npm run dist

Now that we have the Linux app, we can run it by unzipping the .zip file and clicking on the chapter13 executable. (The name came from the "name" attribute in package.json, which we modified earlier.) I also wanted to try out the Windows EXE file. Since I didn't have a Windows machine, I made do by downloading a free VirtualBox virtual machine. After downloading the virtual machine, setting it up in VirtualBox, and finally running it, the result was the same as for Linux.

So, we've managed to develop a React app, enhance it with Node and Electron features, and finally package it for different operating systems. With that, we are done! If you found this post useful, do check out the book Modern JavaScript Web Development Cookbook. You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more.

Read next:
How to perform event handling in React [Tutorial]
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
Electron 3.0.0 releases with experimental textfield, and button APIs

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and no one really understands, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might be. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you run tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks.
If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That all being said, it is probably worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way - if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?
Containers can help solve the problems that DevOps aims to tackle by breaking software up into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than anything DevOps proponents have managed in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make a change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.

Understanding Address spaces and subnetting in IPv4 [Tutorial]

Melisha Dsouza
05 Feb 2019
13 min read
In any network, Internet Protocol (IP) addressing is needed to ensure that data is sent to the correct recipient or device. Both IPv4 and IPv6 address schemes are managed by the Internet Assigned Numbers Authority (IANA). Most of the internet that we know today is based on the IPv4 addressing scheme, and it is still the predominant method of communication on both the internet and private networks.

This tutorial is an excerpt from a book written by Glen D. Singh and Rishi Latchmepersad titled CompTIA Network+ Certification Guide. This book is a practical certification guide that covers all CompTIA certification exam topics in an easy-to-understand manner, along with self-assessment scenarios for better preparation.

Public IPv4 addresses

There are two main IPv4 address spaces: the public address space and the private address space. The primary difference between the two is that public IPv4 addresses are routable on the internet, which means that any device that requires communication with other devices on the internet will need a public IPv4 address assigned to the interface that is connected to the internet.

The public address space is divided into five classes:

Class A: 0.0.0.0 – 126.255.255.255
Class B: 128.0.0.0 – 191.255.255.255
Class C: 192.0.0.0 – 223.255.255.255
Class D: 224.0.0.0 – 239.255.255.255
Class E: 240.0.0.0 – 255.255.255.255

Class D addresses are used for multicast traffic; these addresses are not assignable. Class E addresses are reserved for experimental usage and are not assignable. On the internet, classes A, B, and C are commonly used on devices that are directly connected to the internet, such as layer 3 switches, routers, firewalls, servers, and any other network-related device.

As mentioned earlier, there are approximately four billion public IPv4 addresses. However, in a lot of organizations and homes, only one public IPv4 address is assigned to the router or modem's publicly facing interface. So, what about the devices that require internet access from within the organization or home? There may be a few devices, or even hundreds or thousands of devices, that require an internet connection and an IP address to communicate with the internet from within a company. If ISPs give their customers a single public IPv4 address on their modem or router, how can this single public IPv4 address serve more than one device from within the organization or home?

The internet gateway or router is usually configured with Network Address Translation (NAT), which is the method of mapping either a group of IP addresses or a single IP address on the internet-facing interface to the local area network (LAN). For any device behind the internet gateway that wants to communicate with another device on the internet, NAT will translate the sender's source IP address to the public IPv4 address. Therefore, all of the devices on the internet will see the public IPv4 address and not the sender's actual IP address.

Private IPv4 addresses

As defined by RFC 1918, there are three classes of private IPv4 addresses that are allocated for private use only. This means within a private network, such as a LAN. The benefit of using the private address space (RFC 1918) is that the classes are not unique to any particular organization or group; they can be used within any organization or private network. However, on the internet, a public IPv4 address is unique to a device.
This means that if a device with a private IPv4 address is directly connected to the internet, there will be no network connectivity to devices on the internet. Most ISPs usually have a filter to prevent any private addresses (RFC 1918) from entering their network.

The private address space is divided into three classes:

Class A: 10.0.0.0/8 network block (10.0.0.0 – 10.255.255.255)
Class B: 172.16.0.0/12 network block (172.16.0.0 – 172.31.255.255)
Class C: 192.168.0.0/16 network block (192.168.0.0 – 192.168.255.255)

Subnetting in IPv4

What is subnetting, and why do we need to subnet a network? First, subnetting is the process of breaking down a single IP address block into smaller subnetworks (subnets). Second, the reason we need to subnet is to distribute IP addresses efficiently, resulting in less wastage. This brings us to other questions, such as: why do we need to break down a single IP address block, and why is minimal wastage so important? Could we simply assign a Class A, B, or C address block to a network of any size? To answer these questions, we will go more in depth into this topic by using practical examples and scenarios.

Let's assume that you are a network administrator at a local company and, one day, the IT manager assigns a new task to you. The task is to redesign the IP scheme of the company. He has also told you to use an address class that is suitable for the company's size and to ensure that there is minimal wastage of IP addresses. The first thing you decide to do is draw a high-level network diagram indicating each branch, showing the number of hosts per branch office and the Wide Area Network (WAN) links between each branch router (the network diagram in the original article). Each building has a branch router, and each router is connected to another using a WAN link. Each branch location has a different number of host devices that require an IP address for network communication.

Step 1 – determining an appropriate class of address and why

The subnet mask can tell us a lot about a network, such as the following:

The network and host portions of an IP address
The number of hosts within a network

As you may remember, the network portion of an address is represented by 1s in the subnet mask, while the 0s represent the host portion. We can use the following formula to calculate the total number of IP addresses within a subnet when we know the number of host bits in the subnet mask. Using the formula 2^H, where H represents the number of host bits, we get the following results:

Class A = 2^24 = 16,777,216 total IPs
Class B = 2^16 = 65,536 total IPs
Class C = 2^8 = 256 total IPs

In IPv4, there are two IPs that cannot be assigned to any devices: the Network ID and the Broadcast IP address. Therefore, you need to subtract two addresses from the total IP formula. Using the formula 2^H – 2 to calculate usable IPs, we get the following:

Class A = 2^24 – 2 = 16,777,214 usable IPs
Class B = 2^16 – 2 = 65,534 usable IPs
Class C = 2^8 – 2 = 254 usable IPs

Looking back at the network diagram, we can identify the following seven networks:

Branch A LAN: 25 hosts
Branch B LAN: 15 hosts
Branch C LAN: 28 hosts
Branch D LAN: 26 hosts
WAN R1-R2: 2 IPs are needed
WAN R2-R3: 2 IPs are needed
WAN R3-R4: 2 IPs are needed

Determining the appropriate address class depends on the largest network and the number of networks needed.
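Since everything in this step rests on the 2^H and 2^H – 2 formulas, a short Node.js sketch (illustrative only, not from the book) can reproduce the numbers above:

// Illustrative only: reproduce the host-count arithmetic used in this step.
const totalIPs = hostBits => 2 ** hostBits;      // 2^H
const usableIPs = hostBits => 2 ** hostBits - 2; // 2^H - 2 (minus Network ID and Broadcast)

// The default Class A, B, and C masks leave 24, 16, and 8 host bits, respectively.
for (const [cls, hostBits] of [["A", 24], ["B", 16], ["C", 8]]) {
  console.log(`Class ${cls}: ${totalIPs(hostBits)} total, ${usableIPs(hostBits)} usable`);
}
// Class A: 16777216 total, 16777214 usable
// Class B: 65536 total, 65534 usable
// Class C: 256 total, 254 usable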
Currently, the largest network is Branch C, which has 28 host devices that need an IP address. We can use the smallest available class, which is any Class C address, because it will be able to support the largest network we have. However, to do this, we need to choose a Class C address block. Let's use the 192.168.1.0/24 block.

Remember, the subnet mask is used to identify the network portion of the address. This also means that we are unable to modify the network portion of the IP address when we are subnetting, but we can modify the host portion. In the 192.168.1.0/24 block, the first 24 bits represent the network portion and the remaining 8 bits represent the host portion. Using the formula 2^H – 2 to calculate the number of usable host IPs, we get the following:

2^H – 2 = 2^8 – 2 = 256 – 2 = 254 usable IP addresses

If we assigned this single network block to any one of the seven networks, a lot of IP addresses would be wasted. Therefore, we need to apply our subnetting techniques to this Class C address block.

Step 2 – creating subnets (subnetworks)

To create more subnets or subnetworks, we need to borrow bits from the host portion of the network. The formula 2^N is used to calculate the number of subnets, where N is the number of bits borrowed from the host portion. Once these bits are borrowed, they become part of the network portion, and a new subnet mask is produced.

So far, we have a Network ID of 192.168.1.0/24. We need to get seven subnets, and each subnet should be able to fit our largest network (which is Branch C, with 28 hosts). Let's create our subnets. Remember that we need to borrow bits from the host portion, starting where the 1s end in the subnet mask. When bits are borrowed from the host portion, those bits are changed to 1s in the subnet mask, which produces a new subnet mask for all of the subnets that have been created. Let's borrow two host bits and apply our formula for calculating the number of networks, to determine whether we are able to get the seven subnets:

Number of networks = 2^N = 2^2 = 2 x 2 = 4 networks

As we can see, two host bits are not enough, as we need at least seven networks. Let's borrow one more host bit and use the formula again:

Number of networks = 2^N = 2^3 = 2 x 2 x 2 = 8 networks

Using 3 host bits, we are able to get a total of 8 subnets. In this situation, we have one additional network, which can be placed aside for future use if there's an additional branch in the future. Since we borrowed 3 bits, we have 5 host bits remaining. Let's use our formula for calculating usable IP addresses:

Usable IP addresses = 2^H – 2 = 2^5 – 2 = 32 – 2 = 30 usable IPs

This means that each of the 8 subnets will have a total of 32 IP addresses, 30 of which are usable. Now we have a perfect match. Let's work out our 8 new subnets. The guidelines we must follow at this point (the colors refer to the highlighted bits in the book's figures) are as follows:

We cannot modify the network portion of the address (red)
We cannot modify the host portion of the address (black)
We can only modify the bits that we borrowed (green)

Starting with the Network ID, we get the following eight subnets: 192.168.1.0, 192.168.1.32, 192.168.1.64, 192.168.1.96, 192.168.1.128, 192.168.1.160, 192.168.1.192, and 192.168.1.224. We can't forget about the subnet mask: there are now twenty-seven 1s in it, which gives us 255.255.255.224, or /27, as the new subnet mask for all eight subnets we've just created.

Take a look at each of the subnets. They all have a fixed increment of 32. A quick method to calculate the incremental size is to use the formula 2^x, where x is the number of remaining host bits. This assists in working out the decimal notation of each subnet much more easily than calculating in binary.
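To make the increment concrete, here is another small Node.js sketch (again illustrative only, not from the book) that lists each /27 subnet of 192.168.1.0/24 together with its usable range and broadcast address:

// Illustrative only: enumerate the eight /27 subnets of 192.168.1.0/24.
// With 5 host bits left, each subnet spans 2^5 = 32 addresses.
const blockSize = 2 ** 5; // 32

for (let net = 0; net < 256; net += blockSize) {
  const networkId   = `192.168.1.${net}`;
  const firstUsable = `192.168.1.${net + 1}`;
  const lastUsable  = `192.168.1.${net + blockSize - 2}`;
  const broadcast   = `192.168.1.${net + blockSize - 1}`;
  console.log(`${networkId}/27  usable ${firstUsable}-${lastUsable}  broadcast ${broadcast}`);
}
// First line printed: 192.168.1.0/27  usable 192.168.1.1-192.168.1.30  broadcast 192.168.1.31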
The last network in any subnet always ends with the customized ending of the new subnet mask. From our example, the new subnet mask, 255.255.255.224, ends with 224, and the last subnet also ends with the same value, 192.168.1.224.

Step 3 – assigning each network an appropriate subnet and calculating the ranges

To determine the first usable IP address within a subnet, the first host bit from the right must be 1. To determine the last usable IP address within a subnet, all of the host bits except the first bit from the right should be 1s. The broadcast IP of any subnet is when all of the host bits are 1s.

Let's allocate the subnets: we will assign subnet 1 to the Branch A LAN, subnet 2 to the Branch B LAN, subnet 3 to the Branch C LAN, and subnet 4 to the Branch D LAN. At this point, we have successfully allocated subnets 1 to 4 to each of the branches' LANs.

During our initial calculation for determining the size of each subnet, we saw that each of the eight subnets is equal, and that we have 32 total IPs with 30 usable IP addresses. Currently, we have subnets 5 to 8 left for allocation, but if we allocated subnets 5, 6, and 7 to the WAN links between the branches (R1-R2, R2-R3, and R3-R4), we would be wasting 28 IP addresses per link, since each WAN link (point-to-point) only requires 2 IP addresses. What if we could take one of our existing subnets and create even more, but smaller, networks to fit each WAN (point-to-point) link? We can do this with a process known as Variable Length Subnet Masking (VLSM). By using this process, we are subnetting a subnet. For now, we will place aside subnets 5, 6, and 7 as a reservation for any future branches.

Step 4 – VLSM and subnetting a subnet

For the WAN links, we need at least three subnets, and each must have a minimum of two usable IP addresses. To get started, let's use the 2^H – 2 formula, where H is the number of host bits, to determine the number of host bits needed so that we have at least two usable IP addresses. If we use one host bit, we get 2^1 – 2 = 2 – 2 = 0 usable IP addresses. Let's add an extra host bit to our formula: 2^2 – 2 = 4 – 2 = 2 usable IP addresses. At this point, we have a perfect match, and we know that only two host bits are needed to give us our WAN (point-to-point) links. We are going to use the following guidelines (again, the colors refer to the highlighted bits in the book's figures):

We cannot modify the network portion of the address (red)
Since we know that two host bits are needed to represent two usable IP addresses, we can lock them into place (purple)
The bits between the network portion (red) and the locked-in host bits (purple) will be the new network bits (black)

To calculate the number of networks, we can use 2^N = 2^3 = 8 networks. Even though we got a lot more networks than we actually needed, the remainder of the networks can be set aside for future use. To calculate the total IPs and the increment, we can use 2^H = 2^2 = 4 total IP addresses (inclusive of the Network ID and Broadcast IP addresses). To calculate the number of usable IP addresses, we can use 2^H – 2 = 2^2 – 2 = 2 usable IP addresses per network.

Let's work out our eight new subnets for any existing and future WAN (point-to-point) links. Now that we have eight new subnets, let's allocate them accordingly.
The first subnet will be allocated to WAN 1 (R1-R2), the second subnet to WAN 2 (R2-R3), and the third subnet to WAN 3 (R3-R4). Now that we have allocated the first three subnets to each of the WAN links, the remaining subnets can be set aside for any future branches that may need another WAN link; these will be kept as a future reservation.

Summary

In this tutorial, we looked at public and private IPv4 addresses. We also learned the importance of subnetting and saw the four simple steps needed to complete the subnetting process. To learn from industry experts and implement their practices to resolve complex IT issues, and to effectively prepare for and achieve this certification, check out our book CompTIA Network+ Certification Guide.

Read next:
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications

How to recover deleted data from an Android device [Tutorial]

Sugandha Lahoti
04 Feb 2019
11 min read
In this tutorial, we are going to learn about data recovery techniques that enable us to view data that has been deleted from a device. Deleted data could contain highly sensitive information, and thus data recovery is a crucial aspect of mobile forensics. This article will cover the following topics:

Data recovery overview
Recovering data deleted from an SD card
Recovering data deleted from a phone's internal storage

This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book is a comprehensive guide to Android forensics, from setting up the workstation to analyzing key artifacts.

Data recovery overview

Data recovery is a powerful concept within digital forensics. It is the process of retrieving deleted data from a device or SD card when it cannot be accessed normally. Being able to recover data that has been deleted by a user could help solve civil or criminal cases. This is because many accused simply delete data from their device, hoping that the evidence will be destroyed. Thus, in most criminal cases, deleted data could be crucial, because it may contain information the user wanted to erase from their Android device. For example, consider the scenario where a mobile phone has been seized from a terrorist. Wouldn't it be of the greatest importance to know which items were deleted by them? Access to any deleted SMS messages, pictures, dialed numbers, and so on could be of critical importance, as they may reveal a lot of sensitive information.

From a normal user's point of view, recovering data that has been deleted would usually mean referring to the operating system's built-in solutions, such as the Recycle Bin in Windows. While it's true that data can be recovered from these locations, due to an increase in user awareness, these options often don't work. For instance, on a desktop computer, people now use Shift + Del whenever they want to delete a file completely from their desktop. Similarly, in mobile environments, users are aware of the restore operations provided by apps and so on. In spite of these situations, data recovery techniques allow a forensic investigator to access the data that has been deleted from the device. With respect to Android, it is possible to recover most of the deleted data, including SMS, pictures, application data, and so on. But it is important to seize the device in a proper manner and follow certain procedures; otherwise, data might be deleted permanently.

To ensure that the deleted data is not lost forever, it is recommended to keep the following points in mind:

Do not use the phone for any activity after seizing it. A deleted text message exists on the device until the space is needed by some other incoming data, so the phone must not be used for any sort of activity, to prevent the data from being overwritten.
Even when the phone is not used, data can be overwritten without any intervention from our end. For instance, an incoming SMS would automatically occupy space, which overwrites the deleted data. Also, remote wipe commands can wipe the content present on the device. To prevent such events, you can consider placing the device in a Faraday bag. Thus, care should be taken to prevent the delivery of any new messages or data through any means of communication.

How can deleted files be recovered?

When a user deletes any data from a device, the data is not actually erased and continues to exist on the device. What gets deleted is the pointer to that data.
All filesystems contain metadata, which maintains information about the hierarchy of files, filenames, and so on. Deletion will not really erase the data, but will instead remove the file system metadata. Thus, when text messages or any other files are deleted from a device, they are just made invisible to the user, but the files are still present on the device as long as they are not overwritten by some other data. Hence, there is the possibility of recovering them before new data is added and occupies the space. Deleting the pointer and marking the space as available is an extremely fast operation compared to actually erasing all the data from the device, so, to increase performance, operating systems just delete the metadata.

Recovering deleted data on an Android device involves three scenarios:

Recovering data that is deleted from the SD card, such as pictures, videos, and so on
Recovering data that is deleted from SQLite databases, such as SMS, chats, web history, and so on
Recovering data that is deleted from the device's internal storage

The following sections cover the techniques that can be used to recover deleted data from SD cards and from the internal storage of an Android device.

Recovering deleted data from SD cards

Data present on an SD card can reveal lots of information that is useful during a forensic investigation. The fact that pictures, videos, voice recordings, and application data are stored on the SD card adds weight to this. As mentioned in the previous chapters, Android devices often use FAT32 or exFAT file systems on their SD card. The main reason for this is that these file systems are widely supported by most operating systems, including Windows, Linux, and macOS. The maximum file size on a FAT32 formatted drive is around 4 GB. With increasingly high-resolution formats now available, this limit is commonly reached, which is why newer devices support exFAT: this file system doesn't have such limitations.

Recovering data deleted from an external SD card is pretty easy if it can be mounted as a drive. If the SD card is removable, it can be mounted as a drive by connecting it to a computer using a card reader. Any files can be transferred to the SD card while it's mounted. Some of the older devices that use USB mass storage also mount the device as a drive when connected through a USB cable. In order to make sure that the original evidence is not modified, a physical image of the disk is taken and all further experimentation is done on the image itself. Similarly, in the case of SD card analysis, an image of the SD card needs to be taken. Once the imaging is done, we have a raw image file. In our example, we will use FTK Imager by AccessData, which is an imaging utility. In addition to creating disk images, it can also be used to explore the contents of a disk image. The following steps can be followed to recover the contents of an SD card using this tool:

1. Start FTK Imager and click on File, and then Add Evidence Item..., in the menu.
2. Select Image File in the Select Source dialog and click on Next.
3. In the Select File dialog, browse to the location where you downloaded the sdcard.dd file, select it, and click on Finish.

FTK Imager's default display will appear, with the contents of the SD card visible in the View pane at the lower right.
You can also click on the Properties tab below the lower left pane to view the properties of the disk image. In the left pane, the drive has been opened; you can open folders by clicking on the + sign. When a folder is highlighted, its contents are shown in the right pane, and when a file is selected, its contents can be seen in the bottom pane. Deleted files are shown with a red X over the icon derived from their file extension. To export a file, right-click on the file that contains the picture and select Export Files....

Sometimes, only a fragment of a file is recoverable, and it cannot be read or viewed directly. In that case, we need to look through free or unallocated space for more data. Carving can be used to recover files from free and unallocated space, and PhotoRec is one of the tools that can help you to do that. You will learn more about file carving with PhotoRec in the following sections.

Recovering deleted data from internal memory

Recovering files deleted from Android's internal memory, such as app data and so on, is not as easy as recovering such data from SD cards and SQLite databases, but, of course, it's not impossible. Many commercial forensic tools are capable of recovering deleted data from Android devices, provided a physical acquisition is possible and the user data partition isn't encrypted. But this is not very common for modern devices, especially those running the most recent versions of the operating system, such as Oreo and Pie.

Most Android devices, especially modern smartphones and tablets, use the EXT4 file system to organize data in their internal storage. This file system is very common for Linux-based devices. So, if we want to recover deleted data from the device's internal storage, we need a tool capable of recovering deleted files from the EXT4 file system. One such tool is extundelete, which is available for download here: http://extundelete.sourceforge.net/.

To recover the contents of an inode, extundelete searches a file system's journal for an old copy of that inode. The information contained in the inode helps the tool to locate the file within the file system. To recover not only the file's contents, but also its name, extundelete is able to search the deleted entries in a directory to match the inode number of a file to a file name. To use this tool, you will need a Linux workstation. Most forensic Linux distributions already have it on board, for example, SIFT Workstation, a popular digital forensics and incident response Linux distribution created by Rob Lee and his team from the SANS Institute (https://digital-forensics.sans.org/community/downloads).

Before you can start the recovery process, you will need to mount a previously imaged userdata partition. In this example, we are going to use an Android device imaged via the chip-off technique. First of all, we need to determine the location of the userdata partition within the image. To do this, we can use mmls from The Sleuth Kit. In our image, the userdata partition is the last one and starts at sector 9199616.
To make sure the userdata partition is EXT4 formatted, we can check it with fsstat. All you need now is to mount the userdata partition and run extundelete against it, as shown in the following example:

extundelete /userdata/partition/mount/point --restore-all

All recovered files will be saved to a subdirectory of the current directory named RECOVERED_FILES. If you are interested in recovering only files deleted before or after a specified date, you can use the --before and --after options. It's important to note that these dates must be in UNIX Epoch format. There are quite a lot of both online and offline tools capable of converting timestamps; for example, you can use https://www.epochconverter.com/.

As you can see, this method isn't very easy or fast, but there is a better way: using Autopsy, an open source digital forensic tool. In our example, we used its built-in file extension filter to find all the images on the Android device, and found a lot of deleted artifacts.

Summary

Data recovery is the process of retrieving deleted data from the device and is thus a very important concept in forensics. In this chapter, we have seen various techniques to recover deleted data from both the SD card and the internal memory. While recovering data from a removable SD card is easy, recovering data from internal memory involves a few complications. SQLite file parsing and file carving techniques aid a forensic analyst in recovering the deleted items present in the internal memory of an Android device. In order to understand the forensic perspective and the analysis of Android apps, read our book Learning Android Forensics.

Read next:
What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]

How to extract SIM card data from Android devices [Tutorial]

Sugandha Lahoti
03 Feb 2019
9 min read
This tutorial discusses logical data extraction and one of its subtopics, Android SIM card extraction. This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book explores open source and commercial forensic tools and teaches readers the basic skills of Android malware identification and analysis.

Logical extraction overview

In digital forensics, the term logical extraction is typically used to refer to extractions that don't recover deleted data or do not include a full bit-by-bit copy of the evidence. However, a more correct definition of logical extraction is any method that requires communication with the base operating system. Because of this interaction with the operating system, a forensic examiner cannot be sure that they have recovered all of the data possible; the operating system is choosing which data it allows the examiner to access.

In traditional computer forensics, logical extraction is analogous to copying and pasting a folder in order to extract data from a system; this process will only copy files that the user can access and see. If any hidden or deleted files are present in the folder being copied, they won't be in the pasted version of the folder.

As you'll see, however, the line between logical and physical extractions in mobile forensics is somewhat blurrier than in traditional computer forensics. For example, deleted data can routinely be recovered from logical extractions on mobile devices due to the prevalence of SQLite databases being used to store data. Furthermore, almost every mobile extraction will require some form of interaction with the Android operating system; there's no simple equivalent to pulling a hard drive and imaging it without booting the drive.

What data can be recovered logically?

For the most part, any and all user data may be recovered logically:

Contacts
Call logs
SMS/MMS
Application data
System logs and information

The bulk of this data is stored in SQLite databases, so it's even possible to recover large amounts of deleted data through a logical extraction.

Root access

When forensically analyzing an Android device, the limiting factor is often not the type of data being sought, but rather whether or not the examiner has the ability to access the data. All of the data listed previously, when stored on the internal flash memory, is protected and requires root access to read. The exception to this is application data that is stored on the SD card, which will be discussed later in this book.

Without root access, a forensic examiner cannot simply copy information from the /data partition. The examiner will have to find some method of escalating privileges in order to gain access to the contacts, call logs, SMS/MMS, and application data. These methods often carry many risks, such as the potential to destroy or brick the device (making it unable to boot), and may alter data on the device in order to gain permanence. The methods commonly vary from device to device, and there is no universal, one-click method to gain root access to every device.

Commercial mobile forensic tools such as Oxygen Forensic Detective and Cellebrite UFED have built-in capabilities to temporarily and safely root many devices, but they do not cover the wide range of all Android devices. The decision to root a device should be in accordance with your local operating procedures and court opinions in your jurisdiction. The legal acceptance of evidence obtained by rooting varies by jurisdiction.
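To make the idea of a logical extraction more concrete, the following is a minimal sketch of pulling two well-known databases over ADB from a device where root access is already available (for example, via a temporary root provided by a forensic tool). The paths shown are the common AOSP locations and can differ between Android versions and vendor builds, and su syntax varies between root implementations, so treat this purely as an illustration rather than a validated forensic procedure; note that copying to the SD card also modifies the device.

# Confirm the device is visible and that elevated privileges are available
adb devices
adb shell su -c id

# Copy the SMS/MMS and contacts databases out of /data, then pull them
adb shell su -c "cp /data/data/com.android.providers.telephony/databases/mmssms.db /sdcard/"
adb shell su -c "cp /data/data/com.android.providers.contacts/databases/contacts2.db /sdcard/"
adb pull /sdcard/mmssms.db
adb pull /sdcard/contacts2.db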
Android SIM card extractions

Traditionally, SIM cards were used for transferring data between devices. SIM cards in the past were used to store many different types of data, such as the following:

User data:
  Contacts
  SMS messages
  Dialed calls
Network data:
  Integrated Circuit Card Identifier (ICCID): Serial number of the SIM
  International Mobile Subscriber Identity (IMSI): Identifier that ties the SIM to a specific user account
  MSISDN: Phone number assigned to the SIM
  Location Area Identity (LAI): Identifies the cell that a user is in
  Authentication Key (Ki): Used to authenticate the SIM to the mobile network
  Various other network-specific information

With the rise in capacity of device storage, SD cards, and cloud backups, the necessity for storing data on a SIM card has decreased. As such, most modern smartphones typically do not store much, if any, user data on the SIM card. All of the network data listed previously does still reside on the SIM, as a SIM is necessary to connect to all modern (4G) cellular networks.

As with all Android devices, though, there is no concrete stipulation that user data can't be stored on a SIM; it simply doesn't happen by default. Individual device manufacturers can easily decide to write user data to the SIM, and individual users can download applications to provide that functionality. This means that a device's SIM card should always be examined during a forensic examination. It is a very quick process, and should never be overlooked.

Acquiring SIM card data

The SIM card should always be removed from the device and examined separately. While some tools claim to read the SIM card through the device interface, this may not recover deleted data or all of the data on the SIM; the only way for an examiner to be certain all data was acquired is to read the SIM through a standalone SIM card reader with a tool that has been tested and verified.

The location of the SIM will vary by device, but it is typically either stored beneath the battery or in a tray located on the side of the device. Once the SIM is removed, it should be placed in a SIM card reader. There are hundreds of SIM card readers available in the marketplace, and all major mobile forensics tools come with an included reader that will work with their software. Oftentimes, forensic tools will support third-party SIM readers as well.

There is a surprising lack of thorough, free SIM card reading software available. Any software used should always be tested and validated on a SIM card that has been populated with known data prior to being used in an actual forensic investigation. Also, keep in mind that much of the free software available works for older 2G/3G SIMs, but may not work properly on a modern 4G SIM. We used Mobiledit! Lite, a free version of Mobiledit!, for the following screenshots. It is available at: http://www.mobiledit.com/downloads.
The following is a sample 4G SIM card extraction from an Android phone running version 4.4.4; note that nothing that could be considered user data was acquired despite the SIM being used actively for over a year, though fields such as the ICCID, IMSI, and MSISDN (own phone number) could be useful for subpoenas/warrants or other aspects of an investigation:

SIM card extraction overview

The following screenshot highlights SMS messages on the SIM card:

The following screenshot highlights the phonebook of the SIM card:

The following screenshot highlights the phone number of the SIM card (also called the MSISDN):

SIM Security

Because SIM cards conform to established, international standards, all SIM cards provide the same security functionality: a 4- to 8-digit PIN. Generally, this PIN must be set through a menu on the device. On Android devices, this setting is found at Settings | Security | Set up SIM card lock. The SIM PIN is completely independent of any lock screen security settings and only has to be entered when the device boots. The SIM PIN only protects user data on the SIM; all network information is still recoverable even if the SIM is PIN locked.

The SIM card will allow three attempts to enter the PIN; if one of these attempts is correct, the counter will reset. On the other hand, if all of these attempts are incorrect, the SIM will enter Personal Unblocking Key (PUK) mode. The PUK is an 8-digit number assigned by the carrier and is frequently found on documentation when the SIM is purchased. Bypassing a PUK is not possible with any commercial forensic software; because of this, an examiner should never attempt to enter the PIN on the device, as the device will not indicate how many attempts remain before the PUK is activated. An examiner could unwittingly PUK lock the SIM and be unable to access the device. Forensic tools, however, will show how many attempts remain before the PUK is activated, as seen in the previous screenshots.

Common carrier defaults for SIM PINs are 0000 and 1234. If three tries remain before activating the PUK, an examiner may successfully unlock the SIM with one of these defaults. Carriers frequently retain PUK keys when a SIM is issued. These may be available through a subpoena or warrant issued to the carrier.

SIM cloning

The SIM PIN itself provides almost no additional security, and can easily be bypassed through SIM cloning. SIM cloning is a feature provided in almost all commercial mobile forensic software, although the term cloning is somewhat misleading. SIM cloning, in the case of mobile forensics, is the process of copying the network data from a locked SIM onto a forensically sterile SIM that does not have the PIN activated. The phone will identify the cloned SIM based on this network data (typically the ICCID and IMSI) and think that it is the same SIM that was inserted previously, but this time there will be no SIM PIN. This cloned SIM will also be unable to access the cellular network, which makes it an effective solution similar to Airplane Mode. Therefore, SIM cloning will allow an examiner to access the device, but the user data on the original SIM is still inaccessible as it remains protected by the PIN.

We are unaware of any free software that performs forensic SIM cloning. It is supported by almost all commercial mobile forensic kits, however. These kits will typically include a SIM card reader, software to perform the clone, as well as multiple blank SIM cards for the cloning process.
This article has covered SIM card extraction, which is a subtopic of logical extractions of Android devices. To know more about the other methods of logical extraction in Android devices, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]

Creating views in Odoo 12 - List, Form, Search [Tutorial]

Sugandha Lahoti
02 Feb 2019
10 min read
Odoo provides a rapid application development framework that's particularly suited to building business applications. This type of application is usually concerned with keeping business records, centered around create, read, update, and delete (CRUD) operations. Not only does Odoo make it easy to build this type of application, but it also provides rich components to create compelling user interfaces, such as kanban, calendar, and graph views. In this tutorial, we will create list, form, and search views, the basic building blocks for the user interface. This article is taken from the book Odoo 12 Development Essentials by Daniel Reis. This book will teach you to build a business application from scratch using Odoo 12.

Technical requirements

The minimal requirement is for you to have a modern web browser, such as Firefox, Chrome, or Edge. You may go a little further and use a packaged Odoo distribution to have it locally installed on your computer. For that, you only need an operating system such as Windows, macOS, Debian-based Linux (such as Ubuntu), or Red Hat-based Linux (such as Fedora). Windows, Debian, and Red Hat have installation packages available. Another option is to use Docker, available for all these systems and for macOS.

In this article, we will mostly have point-and-click interaction with the user interface. You will find the code snippets used and a summary of the steps performed in the book's code repository, under the ch01 folder.

It's important to note that Odoo databases are incompatible between Odoo major versions. If you run an Odoo 11 server against a database created for a previous major version of Odoo, it won't work. Non-trivial migration work is needed before a database can be used with a later version of the product. The same is true for add-on modules: as a general rule, an add-on module developed for an Odoo major version will not work on other versions. When downloading a community module from the web, make sure it targets the Odoo version you are using.

On the other hand, major releases (10.0, 11.0) are expected to receive frequent updates, but these should be mostly bug fixes. They are assured to be API-stable, meaning that model data structures and view element identifiers will remain stable. This is important because it means there will be no risk of custom modules breaking due to incompatible changes in the upstream core modules.

Creating a new Model

Models are the basic components for applications, providing the data structures and storage to be used. We will create the Model for To-do Items. It will have three fields:

Description
Is done? flag
Work team partner list

Model definitions are accessed in the Settings app, in the Technical | Database Structure | Models menu. To create a Model, follow these steps:

Visit the Models menu, and click on the upper-left Create button.
Fill in the new Model form with these values:
  Model Description: To-do Item
  Model: x_todo_item

We should save it before we can properly add new fields to it. So, click on Save and then Edit it again. You can see that a few fields were automatically added. The ORM includes them in all Models, and they can be useful for audit purposes.

The x_name (or Name) field is a title representing the record in lists or when it is referenced in other records. It makes sense to use it for the To-do Item title. You may edit it and change the Field Label to a more meaningful label description.

Adding the Is Done? flag to the Model should be straightforward now.
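For reference, this is roughly where we will end up: the same Model, declared in code inside a custom addon module. This is only an illustrative sketch and not part of the point-and-click steps in this tutorial; the class name and file layout are assumptions, and a model defined in code would normally drop the x_ prefix, which Odoo reserves for customizations made through the user interface.

from odoo import fields, models

class TodoItem(models.Model):
    _name = "x_todo_item"
    _description = "To-do Item"

    # Title field, equivalent to the automatically created x_name field
    x_name = fields.Char(string="Description", required=True)
    x_is_done = fields.Boolean(string="Is Done?")
    x_work_team_ids = fields.Many2many(
        "res.partner",
        string="Work Team",
        # Only partners flagged as work team members are selectable
        domain=[("x_is_work_team", "=", True)],
    )

With that picture in mind, let's add the remaining fields through the user interface.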
In the Fields list, click on Add a line, at the bottom of the list, to create a new field with these values:

Field Name: x_is_done
Field Label: Is Done?
Field Type: boolean

The new Fields form should look like this:

Now, something a little more challenging is to add the Work Team selection. Not only is it a relation field, referring to a record in the res.partner Model, it is also a multiple-value selection field. In many frameworks, this is not a trivial task, but fortunately, that's not the case in Odoo, because it supports many-to-many relations. This is the case because one to-do can have many people, and each person can participate in many to-do items. In the Fields list, click again on Add a line to create the new field:

Field Name: x_work_team_ids
Field Label: Work Team
Field Type: many2many
Object Relation: res.partner
Domain: [('x_is_work_team', '=', True)]

The many-to-many field has a few specific definitions—the Relation Table, Column 1, and Column 2 fields. These are automatically filled out for you and the defaults are good for most cases, so we don't need to worry about them now.

The domain attribute is optional, but we used it so that only eligible work team members are selectable from the list. Otherwise, all partners would be available for selection. The Domain expression defines a filter for the records to be presented. It follows an Odoo-specific syntax—it is a list of triplets, where each triplet is a filter condition, indicating the Field Name to filter, the filter operator to use, and the value to filter against.

Odoo has an interactive domain filter wizard that can be used as a helper to generate Domain expressions. You can use it at Settings | User Interface | User-defined Filters. Once a target Model is selected in the form, the Domain field will display an add filter button, which can be used to add filter conditions, and the text box below it will dynamically show the corresponding Domain expression code.

Creating views

We have created the To-do Items Model. Next, we will be creating the two essential views for it—a list (also called a tree) and a form.

List views

We will now create a list view. In Settings, navigate to Technical | User Interface | Views and create a new record with the following values:

View Name: To-do List View
View Type: Tree
Model: x_todo_item

This is how the View definition is expected to look:

In the Architecture tab, we should write XML with the view structure. Use the following XML code:

<tree>
    <field name="x_name" />
    <field name="x_is_done" />
</tree>

The basic structure of a list view is quite simple—a <tree> element containing one or more <field> elements for each of the columns to display in the list view.

Form views

Next, we will create the form view. Create another View record, using the following values:

View Name: To-do Form View
View Type: Form
Model: x_todo_item

If we don't specify the View Type, it will be auto-detected from the view definition. In the Architecture tab, type the following XML code:

<form>
    <group>
        <field name="x_name" />
        <field name="x_is_done" />
        <field name="x_work_team_ids"
               widget="many2many_tags"
               context="{'default_x_is_work_team': True}" />
    </group>
</form>

The form view structure has a root <form> element, containing elements such as <group> and <field>. Here, we also chose a specific widget for the Work Team field: we added the widget attribute so that the team members are presented as button-like tags instead of a list grid.
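If you later package this customization as an addon module, the same list view can be shipped as an XML data file instead of being typed into the Views form. The following is a hedged sketch for illustration only; the record ID and the surrounding module are assumptions, not something defined in this tutorial:

<odoo>
    <record id="view_todo_item_tree" model="ir.ui.view">
        <field name="name">To-do List View</field>
        <field name="model">x_todo_item</field>
        <field name="arch" type="xml">
            <tree>
                <field name="x_name"/>
                <field name="x_is_done"/>
            </tree>
        </field>
    </record>
</odoo>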
By default, relational fields allow you to directly create a new record to be used in the relationship. This means that we are allowed to create a new Partner directly from the Work Team field. But if we do so, they won't have the Is Work Team? flag enabled, which can cause inconsistencies. For a better user experience, we can have this flag set by default in these cases.

This is done with the context attribute, used to pass session information to the next View, such as default values to be used. This will be discussed in detail in later chapters; for now, we just need to know that it is a dictionary of key-value pairs, and that values prefixed with default_ provide the default value for the corresponding field. So, in our case, the expression needed to set a default value for the partner's Is Work Team? flag is {'default_x_is_work_team': True}.

That's it. If we now try the To-Do menu option, and create a new item or open an existing one from the list, we will see the form we just added.

Search views

We can also make predefined filter and grouping options available in the search box in the upper-right corner of the list view. Odoo considers these view elements also, and so they are defined in View records, just like lists and forms are. As you may already know by now, Views can be edited either in the Settings | Technical | User Interface menu, or from the contextual Developer Tools menu. Let's go for the latter now; navigate to the to-do list, click on the Developer Tools icon in the upper-right corner, and select Edit Search view from the available options.

Since no search view is yet defined for the To-do Items Model, we will see an empty form, inviting us to create the first one. Fill in these values and save it:

View Name: Some meaningful description, such as To-do Items Filter
View Type: Search
Model: x_todo_item
Architecture: Add this XML code:

<search>
    <filter name="item_not_done"
            string="Not Done"
            domain="[('x_is_done', '=', False)]" />
</search>

If we now open the to-do list from the menu, so that it is reloaded, we will see that our predefined filter is now available from the Filters button below the search box. If we type Not Done inside the search box, it will also show a suggested selection.

It would be nice to have this filter enabled by default and be able to disable it when needed. Just like default field values, we can also use the context to set default filters. When we click on the To-do menu option, it runs a Window Action to open the To-do list view. This Window Action can set a context value, signaling the Views to enable a search filter by default. Let's try this:

Click on the To-do menu option to go to the To-do list.
Click on the Developer Tools icon and select the Edit Action option. This will open the Window Action used to open the current Views.
In the lower-right corner, there is a Filter section, where we have the Domain and Context fields.

The Domain allows setting a fixed filter on the records shown, which can't be removed by the user. We don't want to use that. Instead, we want to enable the item_not_done filter created before by default, so that it can be deselected whenever the user wishes to. To enable a filter by default, add a context key with its name prefixed with search_default_, in this case {'search_default_item_not_done': True}. If we click on the To-do menu option now, we should see the Not Done filter enabled by default on the search box.

In this article, we created list, form, and search views, the basic building blocks of the user interface for our model.
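As a final note, the same default filter can also be declared in an addon module by defining the Window Action in XML. Again, this is an illustrative sketch; the record ID and any menu wiring are assumptions rather than part of the steps above:

<odoo>
    <record id="action_todo_item" model="ir.actions.act_window">
        <field name="name">To-do Items</field>
        <field name="res_model">x_todo_item</field>
        <field name="view_mode">tree,form</field>
        <!-- Enables the item_not_done search filter by default -->
        <field name="context">{'search_default_item_not_done': True}</field>
    </record>
</odoo>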
To learn more about Odoo development in depth, read our book Odoo 12 Development Essentials.

"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" - An interview by Yenthe van Ginneken
Implement an effective CRM system in Odoo 11 [Tutorial]
Handle Odoo application data with ORM API [Tutorial]

Why don't you have a monorepo?

Viktor Charypar
01 Feb 2019
27 min read
You've probably heard that Facebook, Twitter, Google, Microsoft, and other tech industry behemoths keep their entire codebase, all services, applications and tools in a single huge repository - a monorepo. If you're used to the standard way most teams manage their codebase - one application, service or tool per repository - this sounds very strange. Many people conclude it must only solve problems the likes of Google and Facebook have.

This is a guest post by Viktor Charypar, Technical Director at Red Badger.

But monorepos are not only useful if you can build a custom version control system to cope. They actually have many advantages even at a smaller scale that standard tools like Git handle just fine. Using a monorepo can result in fewer barriers in the software development lifecycle. It can allow faster feedback loops, less time spent looking for code, and less time reporting bugs and waiting for them to be fixed. It also makes it much easier to analyze a huge treasure trove of interesting data about how your software is actually built and where problem areas are.

We've used a monorepo at one of our clients for almost three years and it's been great. I really don't see why you wouldn't. But roughly every two months I tend to have a conversation with someone who's not used to working in this way and the entire idea just seems totally crazy to them. And the conversation tends to always follow the same path, starting with the sheer size and quickly moving on to dependency management, testing and versioning strategies. It gets complicated.

It's time I finally wrote down a coherent reasoning behind why I believe monorepos should be the default way we manage a codebase. Especially if you're building something even vaguely microservices based, you have multiple teams and want to share common code.

What do you mean "just one repo"?

Just so we're all thinking about the same thing, when I say monorepo, I'm talking about a strategy of storing all the code you as an organization are responsible for. This could be a project, a programme of work, or the entirety of a product and infrastructure code of your company in a single repository, under one revision history. Individual components (libraries, services, custom tools, infrastructure automation, ...) are stored alongside each other in folders. It's analogous to the UNIX file tree which has a single root, as opposed to multiple, device based roots in Windows operating systems.

People not familiar with the concept typically have a fairly strong reaction to the idea. One giant repo? Why would anyone do that? That cannot possibly scale! Many different objections come out, most of them often only tangentially related to storing all the code together. Occasionally, people get almost religious about it (I am talking about engineers after all). Despite being used by some of the largest tech companies, it is a relatively foreign concept and on the surface goes against everything you've been taught about not making huge monolithic things.

It also seems like we're fixing things that are not broken: everyone in the world is doing multiple repos, building and sharing artifacts (npm modules, JARs, ruby gems…), using SemVer to version and manage dependencies and long running branches to patch bugs in older versions of code, right? Surely if it's industry standard it must be the right thing to do. Well, I don't believe so.
I personally think almost every single one of those things is harder, more laborious, more brittle, harder to test and generally just more wasteful than the equivalent approach you get as a consequence of a monorepo. And a few of the capabilities a monorepo enables can't be replicated in a multi-repo situation even if you build a lot of infrastructure around it, basically because you introduce distributed computing problems and get on the bad side of CAP theorem (we'll look at this closer below). Apart from letting you make dependency management easier and testing more reliable than it can get with multiple repos, a monorepo will also give you a few simple, important, but easy to underestimate advantages.

The biggest advantages of using a monorepo

It's easier to find and discover your code in a monorepo

With a monorepo, there is no question about where all the code is, and when you have access to some of it, you can see all of it. It may come as a surprise, but making code visible to the rest of the organization isn't always the default behavior. Human insecurities get in the way and people create private repositories and squirrel code away to experiment with things "until they are ready". Typically, when the thing does get "ready", it now has a Continuous Integration (CI) service attached, many hyperlinks lead to it from emails, chat rooms and internal wikis, several people have cloned the repo and it's now quite a hassle to move the code to a more visible, obvious place and so it stays where it started.

As a consequence, it is often quite hard work to find all the code for the project and gain access to it, which is hard and expensive for new joiners and hurts collaboration in general. You could say this is a matter of discipline and I will agree with you, but why leave to individual discipline what you can simply prevent by telling everyone that all the code belongs in the one repo, and that it's completely ok to put even little experiments and pet projects there. You never know what they will grow into and putting them in the repo has basically no cost attached.

Visibility aids understanding of how to use internal APIs (internal in the sense of being designed and built by your organisation). The ability to search the entire codebase from within your editor and find usages of the call you're considering using is very powerful. Code editors and languages can also be set up for cross-references to work, which means you can follow references into shared libraries and find usages of shared code across the codebase. And I mean the entire codebase.

This also enables all kinds of analyses to be done on the codebase and its history. Knowing the totality of the codebase and having a history of all the code lets you see dependencies, find parts of the codebase only committed to by a very limited group of people, hotspots changing suspiciously frequently or by a large number of people… Your codebase is the source of truth about what your engineering organization is producing, it contains an incredible amount of interesting information we typically just ignore.

Monorepos give you more flexibility when moving code

Conway's Law famously states that "organizations which design systems (...) are constrained to produce designs which are copies of the communication structures of these organisations". This is due to the level of communication necessary to produce a coherent piece of software.
The further away in the organisation an owner of a piece of software is, the harder it is to directly influence it, so you design strict interfaces to insulate yourself from the effect of "their" changes. This typically affects the repository structure as well. There are two problems with this: the structure is generally chosen upfront, before we know what the right shape of the software is, and changing the structure has a cost attached.

With each service and library being in a separate repository, the module boundaries are quite a lot stronger than if they are all in one repository. Extracting common pieces of code into a shared library becomes more difficult and involves setting up a whole new repository - complete with CI integration, pull request templates and labels, access control setup… hard work. In a monorepo, these boundaries are much more fluid and flexible: moving code between services and libraries, extracting new ones or inlining libraries back into their consumers all become as easy as general refactoring. There is no reason to use a completely different set of tools to change the small-scale and the large-scale structure of your codebase.

The only real downside is tooling support for access control and declaring ownership. However, as monorepos get more popular, this support is getting better. GitHub now supports codeowners, for example. We will get there.

A monorepo gives you a single history timeline

While visibility and flexibility are quite convenient, the one feature of a monorepo which is very hard (if not impossible) to replicate is the single history timeline. We'll go into why it's so hard further below, but for now let's look at the advantages it brings.

A single history timeline gives us a reliable total order of changes to the codebase over time. This means that for any two contributions to the codebase, we can definitively and reliably decide which came first and which came second. It should never be ambiguous. It also means that each commit in a monorepo is a snapshot of the system as it was at that given moment. This enables a really interesting capability: it means cross-cutting changes can be made atomically, safely, in one go.

Atomic cross-cutting commits

Atomic cross-cutting commits make two specific scenarios much easier to achieve.

First, externally forced global migrations are much easier and quicker. Let's say multiple services use a common database, all of them need its password, and we need to rotate it. The password itself is (hopefully!) stored in a secure credential store, but at least the reference to it will be in several different places within the codebase. If the reference changes (let's say the reference is generated every time), we can update every mention of it at once, in one commit, with a search and replace. This will get everything working again.

Second, and more important, we can change APIs and update both producer and all consumers at the same time, atomically. For example, we can add an endpoint to an API service and migrate consumers to use the new endpoint. In the next commit, we can remove the old API endpoint as it's no longer needed. If you're trying to do this across multiple repositories with their own histories, the change will have to be split into several parallel commits. This leaves the potential for the two changes to overlap and happen in the wrong order. Some consumers can get migrated, then the endpoint gets removed, then the rest of the consumers get migrated.
The mid-stage is an inconsistent state, and an attempt to use the not-yet-migrated consumers will fail attempting to call an API endpoint that no longer exists.

Monorepos remove inconsistencies in your dependencies

Inconsistencies between dependent modules are the entire reason why dependency management and versioning exist. In a monorepo, the above scenario simply can't happen. And that's how the conversation about storing code in one place ends up being about versioning and dependency management. Monorepos essentially make the problem go away (which is my favourite kind of problem solving).

Okay, this isn't entirely true. There are still consequences to making breaking changes to APIs. For one, you need to update all the consumers, which is work, but you also need to build all of them, test everything works and deploy it all. This is quite hard for (micro)services that get individually deployed: making coordinated deployment of multiple services atomic is possible but not trivial. You can use a blue-green strategy, for example, to make sure there is no moment in time where only some services changed but not others.

It gets harder for shared libraries. Building and publishing artifacts of new versions and updating all consumers to use the new version are still at least two commits, otherwise you'd be referring to versions that won't exist until the builds finish. Now, things are getting inconsistent again and the view of what is supposed to work together is getting blurred in time again. And what if someone sneaks some changes in between the two commits? We are, once again, in a race. Unless...

Building from the latest source

Yes. What if, instead of building and publishing shared code as prebuilt artifacts (binaries, jars, gems, npm modules), we build each deployable service completely from source? Every time a service changes, it is entirely rebuilt, including all dependencies. This is a fair bit of work for some compiled languages. However, it can be optimised with incremental build tools which skip work that's already been done and cached. Some, like Go, solve it by simply having been designed around a fast compiler. For dynamic languages, it's just a matter of setting up include paths correctly and bundling all the relevant code. The added benefit here is you don't need to do anything special if you're working on a set of interdependent projects locally. No more `npm link`.

The more interesting consequence is how this affects changing a shared dependency. When building from source, you have to make sure every time that happens, all the consumers get rebuilt using it. This is great, everyone gets the latest and greatest things immediately. ...right? Don't worry, I can hear the alarm bells ringing in your head all the way from here. Your service depends on tens, if not hundreds of libraries. Any time anyone makes a mistake and breaks any of them, it breaks your code? Hell no.

But hear me out. This is a problem of understanding dependencies and testing consumers. The important consequence of building from source is you now have a single handle on what is supposed to work together. There are no separate versions, you know what to test and it's just one thing at any one time.

Push dependency management

In manual dependency update management - I will call it "pull" dependency management - you as a consumer are responsible for updating your dependencies as you see fit and making sure everything still works. If you find a bug, you simply don't upgrade.
Instead you report the bug to the maintainer and expect them to fix it. This can be months after the bug was introduced, and the bug may have already been fixed in a newer version you haven't yet upgraded to, because things have moved on quite a bit while you were busy hitting a deadline and it would now be a sizable investment to upgrade. Now you're a little stuck and all the ways out are a significant amount of work for someone, all because the feedback loop is too long.

Normally, as a library maintainer, you're never quite certain how to make sure you're not breaking anything. Even if you could run your consumers' test suites, which consumers at what versions do you test against? And as a DevOps team doing 24/7 support for a system, how do you know which version or versions of a library are used across your services? What do you need to update to roll out that important bug fix to your customers?

In push dependency management, quite a few things are the other way round. As a consumer, you're not responsible for updating, it is done for you - effectively, you depend on the "latest" version of everything. Every time a maintainer of the library makes a change, you are responsible for testing for regressions. No, not manually! You do have unit tests, right? Right?? Please have a solid regression test suite you trust, it's 2019. So with your unit test suite, all you need to do is run it. Actually no, let's let the maintainer run it. If they introduce a problem, they get immediate feedback from you, before their change ever hits the master branch. And this is the contract agreement in push dependency management:

If you make a change and break anyone, you are responsible for fixing them.
They are responsible for supplying a good enough, by their own standard, automated mechanism for you to verify things still work. The definition of "works" is the tests pass.

Seriously though, you need to have a decent regression test suite!

Continuous integration for push dependencies: the final piece of the monorepo puzzle

The main missing piece of tooling around monorepos is support for push dependencies in CI systems. It's quite straightforward to implement the strategy yourself, but it's still hard enough to be worth some shared tooling. Unfortunately, the existing build tools geared towards monorepos, like Bazel and Buck, take over the entire build process from more familiar tools (like Maven or Babel) and you need to switch to them. Although to be fair, in exchange, you get very performant incremental builds.

A lighter tooling, which lets you express dependencies between components in a monorepo in a language agnostic way, and is only responsible for deciding which build jobs need to be triggered given a set of changed components in a commit, seems to be missing. So I built one. It's far from perfect, but it should do the trick. Hopefully, someone with more time on their hands will eventually come up with something similarly cheap to introduce into your build system and the community will adopt it more widely.

The main takeaway is that if we build from source in a monorepo, we can set up a central Continuous Integration system responsible for triggering builds for all projects potentially affected by a change, intended to make sure you didn't break anything with the work you did, whether it belongs to you or someone else. This is next to impossible in a multi-repo codebase because of the blurriness of history mentioned above.
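To make the idea concrete, here is a minimal, illustrative sketch of the kind of decision such a tool makes. It is not the author's actual tool; the component layout and dependency map are invented for the example, and a real system would read them from per-component manifests rather than hard-coding them:

import subprocess

# Hypothetical mapping of monorepo components to the components that depend on them
DEPENDENTS = {
    "libs/logging": ["services/api", "services/worker"],
    "services/api": [],
    "services/worker": [],
}

def changed_components(base: str = "origin/master") -> set:
    """Return the components touched by the commits since `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    changed = set()
    for path in out.splitlines():
        for component in DEPENDENTS:
            if path.startswith(component + "/"):
                changed.add(component)
    return changed

def components_to_build() -> set:
    """A changed component and its direct dependents get rebuilt and tested."""
    to_build = set()
    for component in changed_components():
        to_build.add(component)
        to_build.update(DEPENDENTS[component])
    return to_build

if __name__ == "__main__":
    for component in sorted(components_to_build()):
        print(component)  # a CI pipeline would trigger one build job per line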
It's interesting to me that we have this problem today in the larger ecosystem. Everywhere. And we stumble forward and somewhat successfully live with upstream changes occasionally breaking us, because we don't really have a better choice. We don't have the ability to test all the consumers in the world and fix things when we break them. But if we can, at least for our own codebase, why wouldn't we do that? Along with a "you broke it you fix it" policy. Building from source in a monorepo allows that. It also makes breaking changes significantly harder to make. That said...

About breaking changes

There are two kinds of changes that break the consumer - the ones you introduce by accident, intending to keep backwards compatibility, and then the intentional ones. The first kind should not be too laborious to fix: once you find out what's wrong, fix it in one place, make sure you didn't break anything else, done. The second kind is harder. If you absolutely have to make an intentional breaking change, you will need to update all the consumers. Yes. That's the deal. And that's also fair.

I'm not sure why we're okay with breaking changes being intentionally introduced upstream on a whim. In any other area of human endeavour, a breach of contract makes people angry and they will expect you to make good by them. Yet we accept breaking changes in software as a fact of life. "It's fine, I bumped the major version!"

Semantic versioning: a bad idea

It's not fine. In fact, semantic versioning is just a bad idea. I know that's a bold claim and this is a whole separate article (which I promise to write soon), but I'll try to do it at least some justice here.

Semantic versioning is meant to convey some meaning with the version number, but the chosen meanings it expresses are completely arbitrary. First of all, semver only talks about API contract, not behaviour. Adding side effects or changing performance characteristics of an API call for the worse while keeping the data interface the same is a completely legal patch level change. And I bet you consider that a breaking change, because it will break your app. Second, does anyone really care about minor vs. patch? The promise is the API doesn't break. So really we only care about major or other. Major is a dealbreaker, otherwise we're ok. From a consumer perspective a major version bump spells trouble and potentially a lot of work.

Making breaking changes is a mean thing to do to your consumers and you can and should avoid them. Just keep the old API around and working and add the new one next to it. As for version numbers, the most important meaning to convey seems to be "how old?" because code tends to rot, and so versioning by date might be a good choice.

But, you say, I'll have more and more code to maintain! Well yes, of course. And that's the other problem with semver, the expectation that even old versions still get patches and fixes. It's not very explicitly stated but it's there. And because we typically maintain old versions on long-running branches in version control, it's not even very visible in the codebase. What if you kept older APIs around, but deprecated them, and the answer to bugs in them would be to migrate to the newer version of a particular call, which doesn't have the bug? Would you care about having that old code? It just sits there in the codebase, until nobody uses it. It would also be much less work for the consumer, it's just one particular call.
Also, the bug is typically deeper inside your code, so it's actually more likely you can fix it in one go for all the API surfaces, old or new. Doing the same thing in the branching model is N times the work (for N maintenance branches).

There are technologies that follow this model out of necessity. One example is GraphQL, which was built to solve (among other things) the problem of many old API consumers in people's hands and the need to support all of them for at least some time. In GraphQL, you deprecate data fields in your API and they become invisible in documentation and introspection calls, but they still work as they used to. Possibly forever. Or at least until barely anyone uses them.

The other option you have, if you want to keep an older version of a library around and maintain it in a monorepo, is to make a copy of the folder and work on the two separately. It's the same thing as cutting a long-running branch, you're just making a copy in "file space" not "branch space". And it's more visible and representative of reality - both versions exist as first-class components being maintained.

There are many different versioning and maintenance strategies you could adopt, but in my opinion the preference should be to invest effort into the latest version, making breaking changes only when absolutely inevitable (and at that point, isn't the new version just a new thing? Like Luxon, the next version of Moment.js) and making updates trivial for your consumers. And if it's trivial you can do it for them. Ultimately it was your decision to break the API, so you should also do the work, it's only fair and it makes you evaluate the cost-benefit trade-off of the change.

In a monorepo with building from source, this versioning strategy happens naturally. You can, however, adopt others. You just lose some of the guarantees and make feedback loops longer. Versioning strategy is really an orthogonal concept to storing code in a single repository, but the relative costs do change if you use one. Versioning by using a single version that cuts across the system becomes a lot cheaper, which means breaking changes become more expensive. This tends to lead to more discussions about versioning. This is actually true for most of the things we covered above. You can, but don't have to, adopt these strategies with a monorepo.

Pay as you go monorepos

It's totally possible to just store everything in a single repo and not do anything else. You'll get the visibility of what exists and flexibility of boundaries and ownership. You can still publish individual build artifacts and pull-manage dependencies (but you'd be missing out). Add building from source and you get the single snapshot benefit - you now know what code runs in a particular version of your system and, to an extent, you can think about it as a monolith, despite being formed of many different independent modules and services. Add dependency aware continuous integration and the feedback loop around issues introduced while working on the codebase gets much, much shorter, allowing you to go faster and waste less time on carefully managing versions, reporting bugs, making big forklift upgrades, etc. Things tend to get out of control much less. It's simpler.

Best of all, you can mix and match strategies. If you have a hugely popular library in your monorepo and each change in it triggers a build of hundreds of consumers, it only takes a couple of those builds being flakey and it will make it very hard to get builds for the changes in the library to pass.
This is really a CI problem to fix (and there are so many interesting strategies out there), but sometimes you can't do that easily. You could also say the feedback loop is now too tight for the scale and start versioning the library's intentional releases. This still doesn't mean you have to publish versioned artifacts. You can have a stable copy of the library in the repo, which consumers depend on, and a development copy next to it which the maintainers work on. Releasing then means moving changes from the development folder to the release one and getting its builds to pass.

Or, if you wish, you can publish artifacts and let consumers pull them in their own time and report bugs to you. And you still don't need to promise fixes for older versions without upgrading to the latest (libraries should really have a published "code of maintenance" outlining the promises and setting maintenance expectations). And if you have to, I would again recommend making a copy, not branching.

In fact, in a monorepo, branching might just not be a very good idea. Temporary branches are still useful to work on proposed changes, but long-running branches just hide the full truth about the system. And so does relying on old commits. The copies of code being used exist either way, they are still relevant and you still need to consider them for testing and security patching; they are just hidden in less apparent dimensions of the codebase "space" - the branch dimension or the time dimension. These are hard to think about and visualise, so maybe it's not a good idea to use them to keep relevant and current versions of the code; stick to them as change proposal and "time travel" mechanisms.

Hopefully you can see that there's an entire spectrum of strategies you can follow but don't have to adopt wholesale.

I'm sold, but... can't we do all this with a multi-repo?

Most of the things discussed above are not really strictly dependent on monorepos, they are more a natural consequence of adopting one. You can follow versioning strategies other than semver outside of a monorepo. You can probably implement an automated version bumping system which will upgrade all the dependents of a library and test them, logging issues if they don't pass.

What you can't do outside of a monorepo, as far as I can tell, is the atomic snapshotting of history to have a clear view of the system. And have the same view of the system a year ago. And be able to reproduce it. As soon as multiple parallel version histories are established, this ability goes away and you introduce distributed state. It's impossible to update all the "heads" in this multi-history at the same time, consistently. In version control, like git, the histories are ordered by the "follows" relationship. A later version follows - points to - its predecessor. To get a consistent, canonical view of time, there needs to be a single entry point. Without this central entry point, it's impossible to define a consistent order across the entire set, it depends on where we start looking.

Essentially, you already chose Partition from the three CAP properties. Now you can pick either Consistency or Availability. Typically, availability is important and so you lose consistency. You could choose consistency instead, but that would mean you can't have availability - in order to get a consistent snapshot of the state of all of the repos, write access would need to be stopped while the snapshot is taken. In a monorepo, you don't have partitioning, and can therefore have consistency and availability.
From a physics perspective, multiple repositories with their own history effectively create a kind of spacetime, where each repository is a place and references across repos represent information propagating across space. The speed of that propagation isn't infinite - it's not instant. If changes happen in two places close enough in time, from the perspective of those two places, they happen in a globally inconsistent order: first the local change, then the remote change. Neither of the views is better or more true, and it's impossible to decide which of the changes came first. Unless, that is, we introduce an agreed upon central point which references all the repositories that exist, and every time one of them updates, the reference in this master gets updated and a revision is created. Congratulations, we created a monorepo, well done us.

The benefits of going all-in when it comes to monorepos

As I said at the beginning, adopting the monorepo approach fully will result in fewer barriers in the software development lifecycle. You get faster feedback loops - the ability to test consumers of libraries before checking in a change and immediate feedback. You will spend less time looking for code and working out how it gets assembled together. You won't need to set up repositories or ask for permissions to contribute. You can spend more time solving problems to help your customers instead of problems you created for yourself.

It takes some time to get the tooling set up right, but you only do it once, and all the later projects get the setup for free. Some of the tooling is a little lacking, but in our experience there are no show stoppers. A stable, reliable CI is an absolute must, but that's regardless of monorepos. Monorepos should also help make builds repeatable.

The repo does eventually get big, but it takes years and years and hundreds of people to get to a size where it actually becomes a real problem. The Linux kernel is a monorepo and it's probably still at least an order of magnitude bigger than your project (it is bigger than ours anyway, despite having hundreds of engineers involved at this point). Basically, you're not Google or Microsoft. And when you are, you'll be able to afford optimising your version control system. The UX of your code review and source hosting tooling is probably the first thing that will break, not the underlying infrastructure. For smoother scaling, the one recommendation I have is to set a file size limit - accidentally committed large files are quite hard to remove, at least in git.

After using a monorepo for over two years we're still yet to have any big technical issues with it (plenty of political ones, but that's a story for another day) and we see the same benefits as Google reported in their recent paper. I honestly don't really know why you would start with any other strategy.

Viktor Charypar is Technical Director at Red Badger, a digital consultancy based in London. You can follow him on Twitter @charypar or read more of his writing on the Red Badger blog here.

The future of net neutrality is being decided in court right now, as Mozilla takes on the FCC

Richard Gall
01 Feb 2019
3 min read
Back in August, in a bid to defend net neutrality, Mozilla filed a case against the FCC, opposing the FCC's rollback of the laws that defend users against the interests of ISPs. Today, the oral arguments in that case have come to court in Washington D.C., making it an important day in the fight to save the very principle of net neutrality.

What is net neutrality and why did the FCC roll it back?

To understand the significance of today, it's important to know what net neutrality is, exactly, and how and why the FCC removed the rules that put it in place. Essentially, net neutrality is the principle that all internet service providers must treat all content and services equally. It means your internet provider can't slow your access to Netflix, or prevent you from accessing any other content for commercial reasons. Essentially, net neutrality protects users like you and me, and prevents a market emerging where companies and individuals can pay more for faster speeds or more access to services.

The argument against net neutrality is grounded in the liberal economic principle that understands regulation as necessarily restrictive. It suggests that regulation will actually lead to price rises, rather than pushing them down. From a political perspective too, the argument is that regulating the internet in this way effectively puts it under government control - that's misleading, but it's easy to see how that argument can be peddled.

The two key arguments in the net neutrality case against the FCC

The case will center upon two arguments. The first is whether the FCC's decision to repeal the legislation was warranted in the first place. As a Federal agency, the FCC is forbidden by the Administrative Procedure Act to make decisions that could be described as "arbitrary and capricious." In essence, this means they can't make decisions based on the opinions and personal judgements of those that lead the organization. All regulatory decisions need to be clear and considered, and, of course, backed up by compelling evidence.

From the FCC's perspective, the decision to repeal net neutrality legislation was sound. The agency argued, for example, that the rules were damaging investment in infrastructure, and restricting private businesses from developing their products and services in a way that would ultimately benefit users. This position has, however, been disputed by a Wired report that found that investment was high from market leaders during the period when net neutrality legislation was in place.

The second point that will be crucial is whether ISPs are simply an information service or a telecommunications provider. This distinction is important - information services are less tightly regulated than telecommunications (think of all the various ways subscription services make money). Under net neutrality rules, ISPs are regarded as telecommunications companies; by removing net neutrality rules, the FCC is saying they are merely information services.

At court already this morning, Pantelis Michalopoulos, one of the plaintiff attorneys against the FCC, compared the assertion that an ISP isn't a telecommunications company to Magritte's famous painting The Treachery of Images. "This is like a surrealist painting that shows a pipe and says 'this is not a pipe,'" the EFF reports he said in court.

https://twitter.com/EFFLive/status/1091351488540995584

How to follow the case

Representatives from the Electronic Frontier Foundation are live tweeting from the courtroom from @EFFLive.
If you want to follow the debates and arguments - as well as plenty of useful commentary and information from the EFF, make sure you follow them.


How to set up Odoo as a system service [Tutorial]

Sugandha Lahoti
01 Feb 2019
7 min read
In this tutorial, we'll learn the basics of setting up Odoo as a system service. This article is taken from the book Odoo 12 Development Essentials by Daniel Reis. This book will help you extend your skills with Odoo 12 to build resourceful and open source business applications.

Setting up and maintaining servers is a non-trivial topic in itself and should be done by specialists. The information given here is not enough to ensure that an average user can create a resilient and secure environment that hosts sensitive data and services.

In this article, we'll discuss the following topics:

Setting up Odoo as a system service, including the following:
Creating a systemd service
Creating an Upstart or sysvinit service
Checking the Odoo service from the command line

The code and scripts used here can be found in the ch14/ directory of the Git repository.

Setting up Odoo as a system service

We will learn how to set up Odoo as a system service and have it started automatically when the system boots. In Ubuntu or Debian, the init system is responsible for starting services. Historically, Debian (and derived operating systems) has used sysvinit, and Ubuntu has used a compatible system called Upstart. Recently, however, this has changed, and the init system used in both the latest Debian and Ubuntu editions is systemd.

This means that there are now two different ways to install a system service, and you need to pick the correct one depending on the version of your operating system. On Ubuntu 16.04 and later, you should be using systemd. However, older versions are still used by many cloud providers, so there is a good chance that you might still need to set up an Upstart or sysvinit service. To check whether systemd is used on your system, try the following command:

$ man init

This command opens the documentation for the init system currently in use, so you're able to check what is being used.

Ubuntu on Windows Subsystem for Linux (WSL) is an environment good enough for development only, but it may have some quirks and is entirely inappropriate for running production servers. At the time of writing, our tests revealed that while man init identifies the init system as systemd, installing a systemd service doesn't work, while installing a sysvinit service does.

Creating a systemd service

If the operating system you're using is recent, such as Debian 8 or Ubuntu 16.04, you should be using systemd for the init system. To add a new service to the system, simply create a file describing it. Create a /lib/systemd/system/odoo.service file with the following content:

[Unit]
Description=Odoo
After=postgresql.service

[Service]
Type=simple
User=odoo
Group=odoo
ExecStart=/home/odoo/odoo-12/odoo-bin -c /etc/odoo/odoo.conf

[Install]
WantedBy=multi-user.target

The Odoo source code includes a sample odoo.service file inside the debian/ directory. Instead of creating a new file, you can copy it and then make the required changes. At the very least, the ExecStart option should be changed according to your setup.

Next, we need to register the new service with the following command:

$ sudo systemctl enable odoo.service

To start this new service, use the following command:

$ sudo systemctl start odoo

To check its status, run the following command:

$ sudo systemctl status odoo

Finally, if you want to stop it, use the following command:

$ sudo systemctl stop odoo

Creating an Upstart or sysvinit service

If you're using an older operating system, such as Debian 7 or Ubuntu 15.04, chances are your init system is sysvinit or Upstart. For the purpose of creating a system service, both should behave in the same way. Some cloud Virtual Private Server (VPS) services are still based on older Ubuntu images, so be aware of this scenario in case you encounter it when deploying your Odoo server.

The Odoo source code includes an init script used for the Debian packaged distribution. We can use it as our service init script with minor modifications, as follows:

$ sudo cp /home/odoo/odoo-12/debian/init /etc/init.d/odoo
$ sudo chmod +x /etc/init.d/odoo

At this point, you might want to check the content of the init script. The key parameters are assigned to variables at the top of the file, as illustrated in the following example:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
DAEMON=/usr/bin/odoo
NAME=odoo
DESC=odoo
CONFIG=/etc/odoo/odoo.conf
LOGFILE=/var/log/odoo/odoo-server.log
PIDFILE=/var/run/${NAME}.pid
USER=odoo

These variables should be adequate, so we'll prepare the rest of the setup with their default values in mind. However, you can, of course, change them to better suit your needs.

The USER variable is the system user under which the server will run. We have already created the expected odoo user.

The DAEMON variable is the path to the server executable. Our executable used to start Odoo is in a different location, but we can create the following symbolic link to it:

$ sudo ln -s /home/odoo/odoo-12/odoo-bin /usr/bin/odoo
$ sudo chown -h odoo /usr/bin/odoo

The CONFIG variable is the configuration file we need to use. In a previous section, we created a configuration file in the default expected location, /etc/odoo/odoo.conf.

Finally, the LOGFILE variable is the log file to write to. It is expected to live in the /var/log/odoo directory, which we created when we defined the configuration file.

Now we should be able to start and stop our Odoo service, as follows:

$ sudo /etc/init.d/odoo start
Starting odoo: ok

Stopping the service is done in a similar way with the following command:

$ sudo /etc/init.d/odoo stop
Stopping odoo: ok

In Ubuntu, the service command can also be used, as follows:

$ sudo service odoo start
$ sudo service odoo status
$ sudo service odoo stop

Now we need to make the service start automatically on system boot; this can be done with the following command:

$ sudo update-rc.d odoo defaults

After this, when we reboot our server, the Odoo service should start automatically and with no errors. It's a good time to verify that all is working as expected.

Checking the Odoo service from the command line

At this point, we can confirm whether our Odoo instance is up and responding to requests as expected. If Odoo is running properly, we should be able to get a response from it and see no errors in the log file. We can check whether Odoo is responding to HTTP requests inside the server by using the following command:

$ curl http://localhost:8069
<html><head><script>window.location = '/web' + location.hash;</script></head></html>

In addition, to see what is in the log file, use the following command:

$ sudo less /var/log/odoo/odoo-server.log

You can also follow what is being added to the log file live, using tail -f as follows:

$ sudo tail -f /var/log/odoo/odoo-server.log
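If you prefer to script this check, here is a minimal sketch of the same HTTP probe written in Node.js; it is only an illustrative alternative to the curl command above, and it assumes Odoo is listening on the default port 8069 (the file name healthcheck.js is invented for the example):

// healthcheck.js - a quick probe of the Odoo HTTP endpoint
const http = require("http");

http
  .get("http://localhost:8069", res => {
    // Odoo normally answers the root URL with a small redirect page to /web
    console.log(`Odoo responded with HTTP ${res.statusCode}`);
  })
  .on("error", err => {
    console.error(`Odoo is not reachable: ${err.message}`);
    process.exit(1);
  });

Run it on the server itself with node healthcheck.js.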
Summary

In this tutorial, we learned about the steps required for setting up Odoo as a system service. To learn more about Odoo, you should read our book Odoo 12 Development Essentials. You may also take a look at the official documentation at https://www.odoo.com/documentation.

Odoo is an open source product with a vibrant community. Getting involved, asking questions, and contributing is a great way not only to learn but also to build a business network. With this in mind, we can't help but mention the Odoo Community Association (OCA), which promotes collaboration and quality open source code. You can learn more about it at odoo-community.org.

"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" - an interview with Yenthe van Ginneken.
Implement an effective CRM system in Odoo 11 [Tutorial]
Handle Odoo application data with ORM API [Tutorial]


16 JavaScript frameworks developers should learn in 2019

Bhagyashree R
27 Jan 2019
14 min read
According to Stack Overflow's Developer Survey 2018, JavaScript is one of the most widely used programming languages. This is thanks, in part, to its ever-evolving framework ecosystem, which helps developers find the best solution for complex and challenging problems. Although JavaScript has spent most of its lifetime being associated with web development, in recent years its usage seems to be expanding. Not only has it moved from front end to back end, we're also beginning to see it used for things like machine learning and augmented reality.

JavaScript's evolution is driven by frameworks. And although there are a few that seem to be leading the way, there are many other smaller tools that could be well worth your attention in 2019. Let's take a look at them now.

JavaScript web development frameworks

React

React was first developed by Facebook in 2011 and then open sourced in 2013. Since then it has become one of the most popular JavaScript libraries for building user interfaces. According to npm's survey, despite a slowdown in React's growth in 2018, it will be the dominant framework in 2019. The State of JavaScript 2018 survey designates it as "a safe technology to adopt" given its high usage satisfaction ratio and a large user base.

In 2018, the React team released versions from 16.3 to 16.7 with some major updates. These updates included new lifecycle methods, the Context API, suspense for code splitting, a React Profiler, Create React App 2.0, and more. The team has already laid out its plan for 2019 and will soon be releasing one of the most awaited features, Hooks. It allows developers to access features such as state without using JavaScript classes. It aims to simplify the code for React components by allowing developers to reuse stateful logic without making any changes to the component hierarchy. Other features will include a concurrent mode to allow component tree rendering without blocking the main thread, suspense for data fetching, and more.
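To give a feel for what Hooks look like, here is a minimal sketch of a counter component based on the Hooks API previewed in the React 16.7 alpha; the component name and mount point are invented for illustration, and the snippet assumes a build step (such as Create React App) that compiles JSX:

import React, { useState } from "react";
import ReactDOM from "react-dom";

// A function component that keeps local state without a class,
// using the proposed useState Hook.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

ReactDOM.render(<Counter />, document.getElementById("root"));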
Vue

Vue was created by Evan You after working for Google using AngularJS in a number of projects. It was first released in 2014. Sharing his motivation for creating Vue, Evan said, "I figured, what if I could just extract the part that I really liked about Angular and build something really lightweight." Vue has continued to show great adoption among JavaScript developers and I doubt this trend is going to stop anytime soon. According to the npm survey, some developers prefer Vue over React because they feel that it is "easier to get started with, while maintaining extensibility."

Vue is a library that allows developers to build interactive web interfaces. It provides data-reactive components, similar to React, with a simple and flexible API. Unlike React or Angular, one of the benefits of Vue is the clean HTML output it produces. Other JavaScript libraries tend to leave the HTML scattered with extra attributes and classes in the code, whereas Vue removes these to produce clean, semantic output. It provides advanced features such as routing, state management, and build tooling for complex applications via officially maintained supporting libraries and packages.
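As a rough sketch of that data-reactive style, the snippet below creates a tiny Vue instance; it assumes Vue 2.x is loaded globally (for example from a CDN script tag) and that the page contains an element with the id "app" - both details are invented for the example:

// Assumes Vue 2.x is available globally, e.g. via
// <script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
new Vue({
  el: "#app",                        // mounts onto an existing <div id="app">
  template: "<p>{{ message }}</p>",  // declarative template bound to the data below
  data: {
    message: "Hello from Vue!"       // changing this value re-renders the template
  }
});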
Angular

Google developed AngularJS in 2009 and released its first version in 2012. Since then it has seen enthusiastic support and widespread adoption among both enterprises and individuals. AngularJS was originally developed for designers, not developers. While it did see a few evolutionary improvements in its design, they were not enough to fulfill developer requirements. The later versions, Angular 2, Angular 4, and so on, have been upgraded to provide an overall improvement in performance, especially in speed and dependency injection.

The new version is simply called Angular, a platform and framework that allows developers to build client applications in HTML and TypeScript. It comes with declarative templates, dependency injection, end-to-end tooling, and integrated best practices to solve development challenges. While the architecture of AngularJS is based on the model-view-controller (MVC) design, Angular has a component-based architecture. Every Angular application consists of at least one component, known as the root component. Each component is associated with a class that's responsible for handling the business logic and a template that represents the view layer.

Node.js

There has been a lot of debate around whether Node is a framework (it's really a library), but when talking about web development it is very hard to skip it. Node.js was originally written by Ryan Dahl, who demonstrated it at the inaugural European JSConf on November 8, 2009. Node.js is a free, open source, cross-platform JavaScript runtime environment that executes JavaScript code outside of a browser. Node.js follows a "JavaScript everywhere" paradigm by unifying web application development around a single programming language, rather than different languages for server-side and client-side scripts.

At JSConf 2018, Dahl described some limitations of his server-side JavaScript runtime engine. Many parts of its architecture suffer from limitations, including security and how modules are managed. As a solution to this he introduced a new software project, called Deno, a secure TypeScript runtime on the V8 JavaScript engine that sets out to correct some of the design flaws in Node.js.

Cross-platform mobile development frameworks

React Native

The story of React Native started in the summer of 2013 as Facebook's internal hackathon project, and it was later open sourced in 2015. React Native is a JavaScript framework used to build native mobile applications. As you might have already guessed from its name, React Native is based on React, which we discussed earlier. The reason why it is called "native" is that the UI built with React Native consists of native UI widgets that look and feel consistent with the apps you build using native languages. Under the hood, React Native translates your UI definition written in JavaScript/JSX into a hierarchy of native views correct for the target platform. For example, if we are building an iOS app, it will translate the Text primitive to a native iOS UIView, and in Android, it will result in a native TextView. So, even though we are writing a JavaScript application, we do not get a web app embedded inside the shell of a mobile one. We are getting a "real native app".

NativeScript

NativeScript was developed by Telerik (a subsidiary of Progress) and first released in 2014. It's an open source framework that helps you build apps using JavaScript or any other language that transpiles to JavaScript, for example, TypeScript. It directly supports the Angular framework and supports the Vue framework via a community-developed plugin. Mobile applications built with NativeScript result in fully native apps, which use the same APIs as if they were developed in Xcode or Android Studio. Since the applications are built in JavaScript there is a need for some proxy mechanism to translate JavaScript code to the corresponding native APIs. This is done by the runtime parts of NativeScript, which act as a "bridge" between the JavaScript and the native world (Android and iOS). The runtimes facilitate calling APIs in the Android and iOS frameworks using JavaScript code. To do that, JavaScript virtual machines are used - Google's V8 for Android and WebKit's JavaScriptCore implementation distributed with iOS 7.0+.

Ionic Framework

The Ionic framework was created by Drifty Co. and initially released in 2013. It is an open source, frontend SDK for developing hybrid mobile apps with familiar web technologies such as HTML5, CSS, and JavaScript. With Ionic, you will be able to build and deploy apps that work across multiple platforms, such as native iOS, Android, desktop, and the web as a Progressive Web App.

Ionic is mainly focused on an application's look and feel, or the UI interaction. This tells us that it's not meant to replace Cordova or your favorite JavaScript framework. In fact, it still needs a native wrapper like Cordova to run your app as a mobile app. It uses these wrappers to gain access to host operating system features such as the camera, GPS, flashlight, etc. Ionic apps run in a low-level browser shell like UIWebView in iOS or WebView in Android, which is wrapped by tools like Cordova/PhoneGap.

JavaScript Desktop application development frameworks

Electron

Electron was created by Cheng Zhao, a software engineer at GitHub. It was initially released in 2013 as Atom Shell and was then renamed to Electron in 2015. Electron enables web developers to use their existing knowledge, and native developers to build one codebase and ship it for each platform separately. There are many popular apps that are built with Electron, including Slack, Skype for Linux, Simplenote, and Visual Studio Code, among others.

An Electron app consists of three components: the Chromium web engine, a Node.js interpreter, and your application's source code. The Chromium web engine is responsible for rendering the UI. The Node.js interpreter executes JavaScript and provides your app access to OS features that are not available to the Chromium engine, such as filesystem access, networking, native desktop functions, etc. The application's source code is usually a combination of JavaScript, HTML, and CSS.

JavaScript Machine learning frameworks

Tensorflow.js

At the TensorFlow Dev Summit 2018, Google announced the JavaScript implementation of TensorFlow, their machine learning framework, called TensorFlow.js. It is the successor of deeplearn.js, which was released in August 2017, and is now named TensorFlow.js Core. The team recently released Node.js bindings for TensorFlow, so now the same JavaScript code will work on both the browser and Node.js.

TensorFlow.js consists of four layers, namely the WebGL API for GPU-supported numerical operations, the web browser for user interactions, and two APIs: Core and Layers. The low-level Core API corresponds to the former deeplearn.js library, which provides hardware-accelerated linear algebra operations and an eager API for automatic differentiation. The higher-level Layers API is used to build machine learning models on top of Core. It also allows developers to import models previously trained in Python with Keras or TensorFlow SavedModels and use them for inference or transfer learning in the browser.
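As an illustration of the Layers API, here is a minimal sketch that fits a single-unit model to a toy linear relationship; it assumes the @tensorflow/tfjs package is installed from npm, and the data, layer size, and epoch count are invented for the example:

const tf = require("@tensorflow/tfjs");

async function run() {
  // One dense unit learning y ≈ 2x - 1.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ loss: "meanSquaredError", optimizer: "sgd" });

  // Toy training data.
  const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

  await model.fit(xs, ys, { epochs: 200 });

  // Should print a value close to 9 (2 * 5 - 1).
  model.predict(tf.tensor2d([5], [1, 1])).print();
}

run();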
Brain.js

Brain.js is a neural network library written in JavaScript, and a continuation of the "brain" library, which can be used with Node.js or in the browser. It simplifies the process of creating and training a neural network by utilizing the ease of use of JavaScript and by limiting the API to just a few method calls and options. It comes with different types of networks for different tasks, which include a feedforward neural network with backpropagation, a time step recurrent neural network, and a time step long short-term memory neural network, among others.
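The classic demonstration of that small API surface is teaching a network XOR; the sketch below assumes the brain.js package from npm, and the training data and logged output are only illustrative:

const brain = require("brain.js");

// A small feedforward network trained on XOR -
// the whole workflow is essentially train() and run().
const net = new brain.NeuralNetwork();

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

console.log(net.run([1, 0])); // prints a value close to 1, e.g. [0.93]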
JavaScript augmented reality and virtual reality frameworks

React 360

In 2017, Facebook and Oculus together introduced React VR, which was revamped and rebranded last year as React 360. This improved version simplifies UI layout in 3D space and is faster than React VR. Built on top of React, which we discussed earlier, React 360 is a JavaScript library that enables developers to create 3D and VR interfaces. It allows web developers to use familiar tools and concepts to create immersive 360 experiences on the web.

An application built with React 360 consists of two pieces, namely, your React application and the runtime, which turns your components into 3D elements on the screen. This "division of roles" concept is similar to React Native. As web browsers are single-threaded, the app code is separated from the rendering code to avoid any blocking behavior in the app. By running the app code in a separate context, the rendering loop is allowed to consistently update at a high frame rate.

AR.js

AR.js was developed by Jerome Etienne in 2017 with the aim of implementing augmented reality efficiently on the web. It currently runs at 60fps, which is not bad for an open source web-based solution. The library was inspired by projects like three.js, ARToolKit 5, emscripten, and Chromium. AR.js requires WebGL, a 3D graphics API for the HTML5 Canvas element, and WebRTC, a set of browser APIs and protocols that allow for real-time communication of audio, video, and data in web browsers and native apps. Leveraging features in ARToolKit and A-Frame, AR.js makes the development of AR for the web a straightforward process that can be implemented by novice coders.

New and emerging JavaScript frameworks

Gatsby.js

The creator of Gatsby, Kyle Mathews, quit his startup job in 2017 and started focusing full-time on his side projects: Gatsby.js and Typography.js. Gatsby.js was initially released in 2015 and its version 1.0 came out in 2017. It is a modern site generator for React.js, which means everything in Gatsby is built using components. With Gatsby, you can create both dynamic and static websites/web apps ranging from simple blogs and e-commerce websites to user dashboards. Gatsby supports many data sources, such as Markdown files, a headless CMS like Contentful or WordPress, or a REST or GraphQL API, which you can consolidate via GraphQL. It also makes things like code splitting, image optimization, inlining critical styles, lazy loading, and prefetching resources easier by automating them.

Next.js

Next.js was created by ZEIT and open sourced in 2016. Built on top of React, Webpack, and Babel, Next.js is a small JavaScript framework that enables easy server-side rendering of React applications. It provides features like automatic code splitting, simple client-side routing, a Webpack-based dev environment that supports HMR, and more. It aims to help developers write an isomorphic React application, so that the same rendering logic can be used for both client-side and server-side rendering. Next.js basically allows you to write a React app, with the SSR and things like code splitting being taken care of for you.

It supports two server-side rendering modes: on-demand and static export. On-demand rendering means that for each request, a unique page is rendered. This mode is great for web apps that are highly dynamic, in which content changes often, that have a login state, and similar use cases; it requires having a Node.js server running. Static export, on the other hand, renders all pages to .html files up front and serves them using any file server. This mode does not require a Node.js server to be running, and the HTML can run anywhere.

Nuxt.js

Nuxt.js was originally created by the Chopin brothers, Alexandre and Sébastien Chopin, and released in 2016. In January 2018, it was updated to a production-ready 1.0 version and is backed by an active and well-supported community. It is a higher-level framework inspired by Next.js, which builds on top of the Vue.js ecosystem and simplifies the development of universal or single-page Vue.js applications. Under the hood, Nuxt.js uses webpack with vue-loader and babel-loader to bundle, code-split, and minify your code. One of the perks of using Nuxt.js is that it provides a nuxt generate command, which generates a completely static version of your Vue application using the same codebase. In addition to that, it provides features for development across the client side and the server side, such as asynchronous data, middleware, layouts, etc.

NestJS

NestJS was created by Kamil Mysliwiec and released in 2017. It is a framework for effortlessly building efficient, reliable, and scalable Node.js server-side applications. It builds on top of TypeScript and JavaScript (ES6, ES7, ES8) and is heavily inspired by Angular, as both use a module/component system that allows for reusability. Under the hood, NestJS uses Express, and is also compatible with a wide range of other libraries, for example, Fastify. For most of its abstractions, it uses classes and leverages the benefits of decorators and metadata reflection that classes and TypeScript bring. It comes with concepts like guards, pipes, and interceptors, and built-in support for other transports like WebSockets and gRPC.

These were some of my picks from the plethora of JavaScript frameworks. You surely don't have to be an expert in all of them. Play with them, read the documentation, and get an overview of their features. Before you start using a framework, you can check a few things, such as the problems it solves, whether any other frameworks do the same things better, whether it aligns with your project requirements, and which types of projects the framework would be ideal for. If a framework appeals to you, maybe try to build a project with it.

npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn
4 key findings from The State of JavaScript 2018 developer survey
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Why Google kills its own products

Sugandha Lahoti
25 Jan 2019
6 min read
The internet is abuzz with discussions of popular (and sometimes short-lived) Google products that the company has killed. The conversation has recently been kickstarted by Killed by Google and Google Cemetery, which provided an 'obituary' of dead Google products and services last week.

Google has always been enthusiastic about venturing into new fields. That's one of the crucial reasons for its success. Taking risks on new products is inevitably going to produce a share of martyrs, but it's the price you pay to establish new products. Most importantly, none of these 'dead' products have vanished completely. There is always a strong alternative that Google is investing in. Many of these dead products are actually an important step towards something better and more successful. Those that do die have either reached EOL (if hardware based), or are rebranded/merged with an existing product, or split into a separate Alphabet company.

But why does Google kill products?

Dead products are really just a by-product of innovation. For Google to move quickly as a business - to compete with the likes of Amazon - it needs to try new things, and, by the same token, stop things when they're not working out. While no one likes to fail, in Silicon Valley failing fast has become a well-known philosophy. Dead products, once seemingly cutting-edge, lay the groundwork for better, more well-timed ideas that flourish later. Failure can lead to success - maybe even something world-changing. Like an experiment gone awry, they teach companies more about technology and how people want to use it.

Google likes to ignore the market and see what surprises users

Google's strategy has always been to avoid getting hung up on market research. By doing market research, a company tries to design and launch a product that fits with people's expectations - in general, a good idea, especially if you can't afford to invest in something that's a risk. Google, on the other hand, with the astonishing amount of capital at its disposal, can almost skip this altogether. If they have a bright, smart idea, they just put it in the market for people to test and see.

This was done with Google Tez, a mobile payments service by Google that was targeted at users in India. Since launching the app, over 55 million people have downloaded it, and more than 22 million people and businesses actively use the app for digital transactions every month. This was an instant signal to Google that the app may have done better if it was given a universally accepted name. Tez was killed almost 3 months ago and rebranded to Google Pay. Google now has a unified global payments service built on what it had created for India.

Deceased Google products with a second life under a new brand name

Here are a few more examples of what Google has demolished and subsequently rebranded:

On September 16, 2014, it was announced that Google intended to close Panoramio and migrate it to Google Maps Views.
Google News & Weather was a news aggregator application developed by Google. On May 8, 2018, Google announced that it was merging Google Play Newsstand and Google News & Weather into a single service, called Google News.
Google Allo is an instant messaging mobile app by Google. It was killed 7 months ago and will be rebranded as Google Chat.
Project Tango was an API for augmented reality apps that was killed and replaced by ARCore.

Sometimes, poor products are the problem

While some Google products simply needed better branding, there are plenty of examples of projects that were terminated simply because they weren't good enough. This is often down to engineering mistakes (bugs) or a lack of user engagement.

Google stated that the primary reason for retiring Picasa was that it wanted to focus its efforts "entirely on a single photos service", the cross-platform, web-based Google Photos.
Over the past decade, the growth of Facebook, YouTube, Blogger, and Google+ outpaced Orkut's. Google decided to bid Orkut farewell and shut it down.
On April 20, 2015, Google officially shut down Helpouts, stating that the service hadn't "grown at the pace we had expected."
Most recently, in October 2018, Google announced that it was shutting down Google+ for consumers, citing low user engagement and a software error.

Surprisingly, lists such as these have had the exact opposite effect to what was intended by their creators. People support Google for rebranding its projects. A Hacker News user said, "This list actually had the opposite intended effect on me. Yeah, Google Reader should have stuck around. But half of these I've either never heard of or only faintly remember. And the ones I do remember seem like reasonable axes. Google Video, for example, seemed to serve the sole purpose of making me think "dammit, why doesn't the 'Video' tab just take me to YouTube?" So Google's huge and had to cut off some redundant services over the years. So what. In view of privacy violations, military tech collaborations, and so on, EOL-ing a couple dozen services is hardly a cardinal sin."

However, the downside of retiring products is that there will always be someone who is unhappy. Even if a product isn't widely used, there will always be some people that like the product, maybe have even grown to love it. Like a breakfast radio show, people form habits around a product's UI and overall experience. They become comfortable. Some people have argued that Google has killed stuff on a whim. Google Reader, the URL shortener, Code Search, and Picasa were all cited as examples of things that the company should not have shut down.

Here are some of the reactions of people on Hacker News.

"Other day I was looking to buy a movie and it was available on Amazon as well as YouTube, I went to Amazon because YouTube feels much more likely to shut down it's movie business on a whim while Amazon will likely fight out to last moment. Same goes for buying music."

"Even after 5+ years, I still miss Google Reader almost every day. Just pure simplicity and tight community around sharing are yet to be matched in my opinion. The web has moved on and as someone commented here, it's walled garden everywhere now."

Read more of this conversation on Hacker News.

Dead products can teach us a lot about the priorities of businesses, and maybe even something about the people that use them - people like us. Ultimately, however, dead products are the waste product of a philosophy of growth: as a business looks to expand into new markets, some products are probably going to get the chop.

Read Next

Google hints shutting down Google News over EU's implementation of Article 11 or the "link tax"
Google releases Magenta Studio beta, an open source Python machine learning library for music artists
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native


Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
Two days ago, the Blizzard team announced an update about the demo of the progress made by Google's DeepMind AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream, where it showed AlphaStar, DeepMind's StarCraft II AI program, beating the top two professional StarCraft II players, TLO and MaNa.

The demo presented a series of five separate test matches that were held earlier, on 19 December, against Team Liquid's Grzegorz "MaNa" Komincz and Dario "TLO" Wünsch. AlphaStar beat the two professional players, managing to score 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar finally got beaten by MaNa in a live match streamed by Blizzard and DeepMind.

https://twitter.com/LiquidTLO/status/1088524496246657030
https://twitter.com/Liquid_MaNa/status/1088534975044087808

How does AlphaStar learn?

AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning from anonymised human games released by Blizzard. This initial AI agent managed to defeat the "Elite" level AI in 95% of games.

Once the agents are trained from human game replays, they're then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors). Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.

As the league continues to progress, new counter-strategies emerge that can defeat the earlier strategies. Also, each agent has its own learning objective, which gets adapted during the training. One agent might have an objective to beat one specific competitor, while another one might want to beat a whole distribution of competitors. So, the neural network weights of each agent are updated using reinforcement learning from its games against competitors. This helps optimise their personal learning objective.

How does AlphaStar play the game?

TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, which is significantly lower than the professional players. This is because AlphaStar starts its learning using replays and thereby mimics the way humans play the game. Moreover, AlphaStar also showed a delay between observation and action of 350ms on average.

AlphaStar might have had a slight advantage over the human players, as it interacted with the StarCraft game engine directly via its raw interface. What this means is that it could observe the attributes of its own as well as its opponent's visible units on the map directly, basically getting a zoomed-out view of the game. Human players, however, have to split their time and attention to decide where to focus the camera on the map. But the analysis of the games showed that the AI agents "switched context" about 30 times per minute, akin to MaNa or TLO. This shows that AlphaStar's success against MaNa and TLO was due to its superior macro and micro-strategic decision-making. It wasn't superior click-rate, faster reaction times, or the raw interface that made the AI win.

MaNa managed to beat AlphaStar in one match

DeepMind also developed a second version of AlphaStar, which played like human players, meaning that it had to choose when and where to move the camera. Two new agents were trained against the AlphaStar league, one that used the raw interface and the other that learned to control the camera.

"The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard", states the DeepMind team. But the team didn't get the chance to test the AI against a human pro prior to the live stream.

In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which had been trained for only 7 days. "We hope to evaluate a fully trained instance of the camera interface in the near future", says the team.

The DeepMind team states that AlphaStar's performance was initially tested against TLO, where it won the match. "I was surprised by how strong the agent was..(it) takes well-known strategies..I hadn't thought of before, which means there may still be new ways of playing the game that we haven't fully explored yet," said TLO.

The agents were then trained for an extra week, after which they played against MaNa. AlphaStar again won the game. "I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected..this has put the game in a whole new light for me. We're all excited to see what comes next," said MaNa.

Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar's win:

https://twitter.com/SebastienBubeck/status/1088524371285557248
https://twitter.com/KaiLashArul/status/1088534443718045696
https://twitter.com/fhuszar/status/1088534423786668042
https://twitter.com/panicsw1tched/status/1088524675540549635
https://twitter.com/Denver_sc2/status/1088525423229759489

To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website.

Best game engines for Artificial Intelligence game development
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare


What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019

Sugandha Lahoti
24 Jan 2019
6 min read
Atlantic Council CEO Frederick Kempe spoke at the World Economic Forum (WEF) in Davos, Switzerland. In his presentation, Future Frontiers of Technology Control, he talked about the cold war between the US and China, and why the two countries need to co-operate, not compete, in the tech arms race.

He began his presentation by posing a question set forth by former US National Security Advisor Stephen Hadley: "Can the incumbent US and insurgent China become strategic collaborators and strategic competitors in this tech space at the same time?"

Read also: The New AI Cold War Between China and the USA

Kempe's three framing arguments

Geopolitical Competition

The fusion of tech breakthroughs blurring the lines between the physical, digital, and biological spaces is reaching an inflection point, and it is already clear that it will usher in a revolution that will determine the shape of the global economy. It will also determine which nations and political constructs may assume the commanding heights of global politics in the coming decade.

Technological superiority

Over the course of history, societies that dominated economic innovation and progress have dominated in international relations - from military superiority to societal progress and prosperity. On balance, technological progress has contributed to higher standards of living in most parts of the world; however, a disproportionate benefit goes to first movers.

Commanding Heights

The technological arms race for supremacy in the fourth industrial revolution has essentially become a two-horse contest between the United States and China. We are in the early stages of this race, but how it unfolds and is conducted will do much to shape global human relations. The shift in 2018 in US-China relations, from a period of strategic engagement to greater strategic competition, has also significantly accelerated the tech arms race.

China vs the US: Why China has the edge?

It was Vladimir Putin, President of the Russian Federation, who said that "The one who becomes the leader in Artificial Intelligence will rule the world." In 2017, DeepMind's AlphaGo defeated a Chinese master at Go, a traditional Chinese game. Following this defeat, China launched an ambitious roadmap, called the Next Generation AI Plan. The goal was to become the global leader in AI by 2030 in theory, technology, and application. On current trajectories, in the four primary areas of AI over the next 5 years, China will emerge the winner of this new technology race.

Kempe also quotes Kai-Fu Lee, author of the book AI Superpowers, who argues that harnessing the power of AI today - the electricity of the 21st century - requires abundant data, hungry entrepreneurs, AI scientists, and an AI-friendly policy. He believes that China has the edge in all of these. AI has now moved from out-of-the-box research, where the US has the expertise, to actual implementation, where China has the edge. Per Kai-Fu Lee, China already has the edge in entrepreneurship, data, and government support, and is rapidly catching up to the U.S. in expertise.

The world has moved from the age of world-leading expertise - the US's strength - to the age of data, where China wins hands down. Economists call China the Saudi Arabia of data, and with that as the fuel for AI, it has an enormous advantage. The Chinese government, without privacy restrictions, can gain and use data in a manner that is out of reach of any democracy. Kempe concludes that the nature of this technological arms contest may favor insurgent China rather than the incumbent US.

What are the societal implications of this tech cold war

He also touched upon the societal implications of AI and the cold war between the US and China. A large number of jobs will be lost by 2030. Quoting from Kai-Fu Lee's book, Kempe says that job displacement caused by artificial intelligence and advanced robotics could affect up to 54 million US workers, which comprises 30% of the US labor force, and up to 100 million Chinese workers, which is 12% of the Chinese labor force.

What is the way forward, given these huge societal implications of a bilateral race underway? Kempe sees three possibilities.

A sloppy Status Quo

A status quo where China and the US will continue to cooperate but increasingly view each other with suspicion. They will manage their rising differences and distrust imperfectly, never bridging them entirely, but also not burning bridges, either between researchers, corporations, or others.

Techno Cold War

China and the US turn the global tech contest into more of a zero-sum battle for global domination. They organize themselves in a manner that separates their tech sectors from each other and ultimately divides up the world.

Collaborative Future - the one we hope for

Nicholas Thompson and Ian Bremmer argued in a Wired interview that despite the two countries' societal differences, the US should wrap China in a tech embrace. The two countries should work together to establish international standards to ensure that the algorithms governing people's lives and livelihoods are transparent and accountable. They should recognize that while the geopolitics of technological change is significant, even more important will be the challenges AI poses to all societies across the world in terms of job automation and the social disruptions that may come with it.

It may sound utopian to expect the US and China to cooperate in this manner, but this is what we should hope for. To do otherwise would be self-defeating and at the cost of others in the global community, which needs our best thinking to navigate the challenges of the fourth industrial revolution.

Kempe concludes his presentation with a quote by Henry Kissinger, former US Secretary of State and National Security Advisor: "We're in a position in which the peace and prosperity of the world depend on whether China and the US can find a method to work together, not always in agreement, but to handle our disagreements...This is the key problem of our time."

Note: All images in this article are taken from Frederick Kempe's presentation.

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so overhyped?
Alarming ways governments are using surveillance tech to watch you

7 things Java programmers need to watch for in 2019

Prasad Ramesh
24 Jan 2019
7 min read
Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index ranking is unmatched for the most part, holding the number 1 position for almost 20 years. Although Java's dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This move comes as Oracle has decided to change the support model for the Java language. The change currently affects Java SE 8, which is an LTS release with premier and extended support up to March 2022 and 2025 respectively. For individual users, however, support and updates will continue until December 2020. The recently released Java SE 11 will also have long-term support, with five years of premier and eight years of extended support from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed its support model, non-LTS versions will be released every six months and probably won't contain many major changes. JDK 12 is non-LTS; that is not to say that the changes in it are trivial, as it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a couple of new features; some of them are approved to ship in its March release and some are under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it's due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release. However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to be featured in Java 13, but that's just speculation.

https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though the major long-term support version of Java, Java 11, was released last year, the releases this year also have some noteworthy features in store. Let's take a look at what the two releases this year might have.

Confirmed candidates for Java 12:

A new low pause time garbage collector called Shenandoah is added to cause minimal interruption when a program is running. It is added to match modern computing resources. The pause time will be the same irrespective of the heap size, which is achieved by reducing GC pause times.
The Microbenchmark Suite feature will make it easier for developers to run existing testing benchmarks or create new ones.
Revamped switch statements should help simplify the process of writing code. It essentially means the switch statement can also be used as an expression.
The JVM Constants API will, the OpenJDK website explains, "introduce a new API to model nominal descriptions of key class-file and run-time artifacts".
Integrated with Java 12 is one AArch64 port, instead of two.
Default CDS Archives.
G1 mixed collections.

Other features that may not be out with Java 12:

Raw string literals will be added to Java.
A Packaging Tool, designed to make it easier to install and run a self-contained Java application on a native platform.
Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE) which has contributions from both Oracle and the open source community. As of now, the binaries of OpenJDK are available for the newest LTS release, Java 11. Even the life cycles of OpenJDK 7 and 8 have been extended to June 2020 and 2023 respectively. This suggests that Oracle does seem to be interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community. Microsoft seems to have benefitted from open sourcing with the incoming submissions. Although Oracle will not support these versions after six months from their initial release, Red Hat will be extending support.

As the chief architect of the Java platform, Mark Reinhold, said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs, bring new OpenJDK problems to notice (leading to more JEPs), and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there's Kotlin, but it is still relatively new, and many developers are yet to adopt the new language. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the U.S. With the Android ecosystem growing rapidly over the last decade, it's not hard to see what's driving Java's value.

But Java - and the broader Java ecosystem - are about much more than mobile. Although Java's importance in enterprise application development is well known, it's also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java does have its own set of libraries and is used a lot in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries in artificial intelligence.

#7 Java conferences in 2019

Now that we've talked about the language, possible releases, and new features, let's take a look at the conferences that are going to take place in 2019. Conferences are a good medium to hear top professionals present and speak, and for programmers to socialize. Even if you can't attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

JAX is a Java architecture and software innovation conference, to be held in Mainz, Germany, May 6-10 this year; the Expo runs from May 7 to 9. Other than Java, topics like agile, cloud, Kubernetes, DevOps, microservices, and machine learning are also a part of this event. They're offering discounts on passes till February 14.

JBCNConf is happening in Barcelona, Spain from May 27. It will be a three-day conference with talks from notable Java champions. The focus of the conference is on Java, the JVM, and open source technologies.

Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include the Java language architect Brian Goetz from Oracle and many other notable experts. The conference will cover Java, of course, as well as frontend and web, cloud and DevOps, IoT and AI, and future trends.

One of the biggest conferences is JavaZone, attracting thousands of visitors and hundreds of speakers; it will be 18 years old this year. It is usually held in Oslo, Norway in the month of September. Their website for 2019 is not active at the time of writing, but you can check out last year's website.

Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany from March 19 to 21, attendees can also exhibit at this conference.

If you're working in or around Java this year, there's clearly a lot to look forward to - as well as a few unanswered questions about the evolution of the language in the future. While these changes might not impact the way you work in the immediate term, keeping on top of what's happening and what key figures are saying will set you up nicely for the future.

4 key findings from The State of JavaScript 2018 developer survey
Netflix adopts Spring Boot as its core Java framework
Java 11 is here with TLS 1.3, Unicode 11, and more updates


The 10 best cloud and infrastructure conferences happening in 2019

Sugandha Lahoti
23 Jan 2019
11 min read
The latest Gartner report suggests that the cloud market is going to grow an astonishing 17.3% ($206 billion) in 2019, up from $175.8 billion in 2018. By 2022, the report claims, 90% of organizations will be using cloud services. But the cloud isn’t one thing, and 2019 is likely to bring the diversity of solutions, from hybrid to multi-cloud, to serverless, to the fore. With such a mix of opportunities and emerging trends, it’s going to be essential to keep a close eye on key cloud computing and software infrastructure conferences throughout the year. These are the events where we’ll hear the most important announcements, and they’ll probably also be the place where the most important conversations happen too. But with so many cloud computing conferences dotted throughout the year, it’s hard to know where to focus your attention. For that very reason, we’ve put together a list of some of the best cloud computing conferences taking place in 2019. #1 Google Cloud Next When and where is Google Cloud Next 2019 happening? April 9-11 at the Moscone Center in San Francisco. What is it? This is Google’s annual global conference focusing on the company’s cloud services and products, namely Google Cloud Platform. At previous events, Google has announced enterprise products such as G Suite and Developer Tools. The three-day conference features demonstrations, keynotes, announcements, conversations, and boot camps. What’s happening at Google Cloud Next 2019? This year Google Cloud Next has more than 450 sessions scheduled. You can also meet directly with Google experts in artificial intelligence and machine learning, security, and software infrastructure. Themes covered this year include application development, architecture, collaboration, and productivity, compute, cost management, DevOps and SRE, hybrid cloud, and serverless. The conference may also serve as a debut platform for new Google Cloud CEO Thomas Kurian. Who’s it for? The event is a not-to-miss event for IT professionals and engineers, but it will also likely attract entrepreneurs. For those of us who won’t attend, Google Cloud Next will certainly be one of the most important conferences to follow. Early bird registration begins from March 1 for $999. #2 OpenStack Infrastructure Summit When and where is OpenStack Infrastructure Summit 2019 happening? April 29 - May 1 in Denver. What is it? The OpenStack Infrastructure Summit, previously the OpenStack Summit, is focused on open infrastructure integration and has evolved over the years to cover more than 30 different open source projects.  The event is structured around use cases, training, and related open source projects. The summit also conducts Project Teams Gathering, just after the main conference (this year May 2-4). PTG provides meeting facilities, allowing various technical teams contributing to OSF (Open Science Framework) projects to meet in person, exchange ideas and get work done in a productive setting. What’s happening at this year’s OpenStack Infrastructure Summit? This year the summit is expected to have almost 300 sessions and workshops on Container Infrastructure, CI/CD, Telecom + NFV, Public Cloud, Private & Hybrid Cloud, Security etc. The Summit is going to have members of open source communities like Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, Zuul among other topics. Who’s it for? This is an event for engineers working in operations and administration. 
If you're interested in OpenStack and how the foundation fits into the modern cloud landscape, there will certainly be something here for you.

#3 DockerCon

When and where is DockerCon 2019 happening? April 29 to May 2 at Moscone West, San Francisco.

What is it? DockerCon is perhaps the container event of the year. The focus is on what's happening across the Docker world, but it will offer plenty of opportunities to explore the ways Docker is interacting and evolving with a wider ecosystem of tools.

What's happening at DockerCon 2019? This three-day conference will feature networking opportunities and hands-on labs. It will also hold an exposition where innovators will showcase their latest products. It's expected to draw over 6,000 attendees, with 5+ tracks and 100 sessions. You'll also have the opportunity to become a Docker Certified Associate with an on-venue test.

Who's it for? The event is essential for anyone working in and around containers - so DevOps, SRE, administration, and infrastructure engineers. Of course, with Docker finding its way into the toolsets of a variety of roles, it may be useful for people who want to understand how Docker might change the way they work in the future. Pricing for DockerCon runs from around $1,080 for early-bird reservations to $1,350 for standard tickets.

#4 Red Hat Summit

When and where is Red Hat Summit 2019 happening? May 7-9 in Boston.

What is it? Red Hat Summit is an open source technology event run by Red Hat. It covers a wide range of topics and issues, essentially providing a snapshot of where the open source world is at the moment and where it might be going. With open source shaping cloud and other related trends, it's easy to see why the event could be important for anyone with an interest in cloud and infrastructure.

What's happening at Red Hat Summit 2019? The theme for this year is AND. The copy on the event's website reads: "AND is about scaling your technology and culture in whatever size or direction you need, when you need to, with what you actually need―not a bunch of bulky add-ons. From the right foundation―an open foundation―AND adapts with you. It's interoperable, adjustable, elastic. Think Linux AND Containers. Think public AND private cloud. Think Red Hat AND you."

There's clearly an interesting conceptual proposition at the center of this year's event that hints at how Red Hat wants to get engineers and technology buyers to think about the tools they use and how they use them.

Who's it for? The event is big for any admin or engineer that works with open source technology - Linux in particular (so, quite a lot of people…). Given that Red Hat was bought by IBM just a few months ago in 2018, this event will certainly be worth watching for anyone interested in the evolution of both companies as well as open source software more broadly.

#5 KubeCon + CloudNativeCon Europe

When and where is KubeCon + CloudNativeCon Europe 2019? May 20 to 23 at Fira Barcelona.

What is it? KubeCon + CloudNativeCon is the CNCF's (Cloud Native Computing Foundation) flagship conference for open source and cloud-native communities. It brings together contributors working on cloud-native applications and computing, containers, microservices, orchestration, and related projects to further education around the technologies that support the cloud-native ecosystem.

What's happening at this year's KubeCon? The conference will feature a range of events and sessions from industry experts, project leaders, and sponsors.
The full details of the conference have yet to be announced, but the focus will be on projects such as Kubernetes (obviously), Prometheus, Linkerd, and CoreDNS.

Who's it for? The conference is relevant to anyone with an interest in software infrastructure. It's likely to be instructive and insightful for those working in SRE, DevOps, and administration, but because of Kubernetes' importance in cloud-native practices, there will be something here for many others in the technology industry. The cost is unconfirmed, but it is likely to fall anywhere between $150 and $1,100.

#6 IEEE International Conference on Cloud Computing

When and where is the IEEE International Conference on Cloud Computing? July 8-13 in Milan.

What is it? This is an IEEE conference dedicated solely to cloud computing. IEEE Cloud is primarily a venue for research practitioners to exchange their findings on the latest cloud computing advances. It includes findings across all "as a service" categories, including network, infrastructure, platform, software, and function.

What's happening at the IEEE International Conference on Cloud Computing? IEEE Cloud 2019 invites original research papers addressing all aspects of cloud computing technology, systems, applications, and business innovations. These are mostly based on technical topics including cloud as a service, cloud applications, cloud infrastructure, cloud computing architectures, cloud management, and operations. Shangguang Wang and Stephan Reiff-Marganiec have been appointed as congress workshops chairs. Featured keynote speakers for the 2019 World Congress on Services include Kathryn Guarini, VP at IBM Industry Research, and Joseph Sifakis, the Emeritus Senior CNRS Researcher at Verimag.

Who's it for? The conference has a more academic bent than the others on this list. That means it's particularly important for researchers in the field, but there will undoubtedly be lots here for industry practitioners who want to find new perspectives on the relationship between cloud computing and business.

#7 VMworld

When and where is VMworld 2019? August 25-29 in San Francisco.

What is it? VMworld is a virtualization and cloud computing conference hosted by VMware. It is the largest virtualization-specific event. VMware CEO Pat Gelsinger and the executive team typically provide updates on the company's various business strategies, including multi-cloud management, VMware Cloud on AWS, end-user productivity, security, mobile, and other efforts.

What's happening at VMworld 2019? The five-day conference starts with general sessions on IT and business. It then goes deeper into breakout sessions, expert panels, and quick talks. It also holds various VMware Hands-on Labs and VMware certification opportunities, as well as one-on-one appointments with in-house experts. More than 21,000 attendees are expected.

Who's it for? VMworld may not have the glitz and glamor of an event like DockerCon or KubeCon, but it's an essential event for administrators and technology decision makers with an interest in VMware's products and services.

#8 Microsoft Ignite

When and where is Microsoft Ignite 2019? November 4-8 in Orlando, Florida.

What is it? Ignite is Microsoft's flagship enterprise event for everything cloud, data, business intelligence, teamwork, and productivity.

What's happening at Microsoft Ignite 2019? Microsoft Ignite 2019 is expected to feature more than 700 deep-dive sessions and 100+ expert-led and self-paced workshops. The full agenda will be posted sometime in Spring 2019. You can pre-register for Ignite 2019 here.
Microsoft will also be touring many cities around the world to bring the Ignite experience to more people.

Who's it for? The event should have wide appeal, and will likely reflect Microsoft's efforts to bring a range of tech professionals into the ecosystem. Whether you're a developer, infrastructure engineer, or operations manager, Ignite is, at the very least, an event you should pay attention to.

#9 Dreamforce

When and where is Dreamforce 2019? November 19-22 in San Francisco.

What is it? Dreamforce, hosted by Salesforce, is a truly huge conference, attended by more than 100,000 people. Focusing on Salesforce and CRM, the event is an opportunity to learn from experts, share experiences and ideas, and stay up to speed with trends in the field, such as automation and artificial intelligence.

What's happening at Dreamforce 2019? Dreamforce features over 25 keynotes, a vast range of breakout sessions (almost 2,700), and plenty of opportunities for networking. The conference is so extensive that it has its own app to help delegates manage their agenda and navigate venues.

Who's it for? Dreamforce is primarily about Salesforce - for that reason, it's very much an event for customers and users. But given the size of the event, it also offers a great deal of insight into how businesses are using SaaS products and what they expect from them. This means there is plenty for those working in more technical or product roles to learn at the event.

#10 Amazon re:Invent

When and where is Amazon re:Invent 2019? December 2-6 at The Venetian, Las Vegas, USA.

What is it? Amazon re:Invent is hosted by AWS. In case you've been living on Mars in recent years: AWS is the market leader when it comes to cloud. The event, then, is AWS' opportunity to set the agenda for the cloud landscape, announcing updates and new features, as well as an opportunity to discuss the future of the platform.

What's happening at Amazon re:Invent 2019? Around 40,000 people typically attend Amazon's top cloud event. Amazon Web Services and its cloud-focused partners usually reveal product releases on several fronts, including enterprise security, the Transit Virtual Private Cloud service, and general releases. This year, Amazon is also launching a related conference dedicated exclusively to cloud security, called re:Inforce. The inaugural event will take place on June 25-26, 2019 at the Boston Convention and Exhibition Center.

Who's it for? The conference attracts Amazon's top customers, independent software vendors (ISVs), and public cloud MSPs. The event is essential for developers and engineers, administrators, architects, and decision makers. Given the importance of AWS in the broader technology ecosystem, this is an event that will be well worth tracking, wherever you are in the world.

Did we miss an important cloud computing conference? Are you attending any of these this year? Let us know in the comments – we'd love to hear from you. Also, check this space for more detailed coverage of the conferences.

Read more:

Cloud computing trends in 2019

Key trends in software development in 2019: cloud native and the shrinking stack

Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity