Tech News

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has now published a root cause analysis of what went wrong. This comes a month after the Azure cloud itself was affected by severe weather.

Incidents on October 3, 4 and 8

It started on October 3 with what appeared to be a networking issue in the North Central US region, lasting over an hour. It happened again the following day, also lasting about an hour. On following up with the Azure networking team, however, it was found that there were no networking issues when the outages happened. When another incident occurred on October 8, the team realized something was fundamentally wrong and ran an analysis on telemetry, which initially did not reveal the issue.

After the third incident, it was found that the thread count on the affected machine continued to rise, an indication that some activity was going on even with no load coming to the machine. All 1,202 threads had the same call stack, the key call being:

Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal every minute to notify the service that they are online. If no signal arrives from an agent for over a minute, it is marked offline and the agent needs to reconnect. In this case the agent machines were marked offline and eventually reconnected after retries; on success, each agent was stored in an in-memory list. Potentially thousands of agents were reconnecting at a time.

In addition, asynchronous call patterns adopted recently gave the threads a way to fill up with messages. The .NET message queue stores a queue of messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue. The thread pool, in this case, was smaller than the queue: with N threads, at most N messages are processed simultaneously.

When an async call is made, the same message queue is used, and a new message is queued to complete the async call and read its value. That message sits at the end of the queue while all the threads are occupied processing other messages, so the call cannot complete until the earlier messages have finished, tying up one thread. The process comes to a standstill once N such messages are being processed, where N equals the number of threads. At that point a machine can no longer process requests, causing the load balancer to take it out of rotation. Hence the outage. The immediate fix was to conditionalize this code so that no more async calls were made, which was acceptable because the pool providers feature is not in effect yet.

Incident on October 10

On October 10, an incident with a 15-minute impact took place. The initial problem was a spike in slow response times from SPS, the authentication service, ultimately caused by problems in one of its databases. A Team Foundation Server (TFS) deployment put pressure on SPS: when TFS is deployed, sets of scale units called deployment rings are deployed as well, and each scale unit that finishes deploying puts extra load on SPS. There are built-in delays between scale units to accommodate the extra load, and SPS is also being sharded into multiple scale units. Together, these factors tripped the circuit breakers in the database, which led to slow response times and failed calls. The incident was mitigated by manually recycling the unhealthy scale units.

For more details and the complete analysis, visit the Microsoft website.

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.
Is your Enterprise Measuring the Right DevOps Metrics?
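The queue starvation Microsoft describes can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern (the function and names are invented), not Microsoft's actual .NET code: a pool with one worker runs a message whose asynchronous continuation is queued on the same pool, so the wait can never be satisfied.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# A thread pool that is smaller than the message queue: here, a single worker.
pool = ThreadPoolExecutor(max_workers=1)

def handle_message():
    # The async continuation is queued as a new message on the same pool...
    continuation = pool.submit(lambda: "value")
    try:
        # ...but the only worker is busy running handle_message itself, so the
        # continuation never starts and this wait can only time out.
        return continuation.result(timeout=1.0)
    except FutureTimeout:
        return "deadlock"

result = pool.submit(handle_message).result()
print(result)  # -> "deadlock"
```

With N workers, N such messages in flight produce the same standstill, which is why the immediate fix was simply not to queue the extra async message.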

The new RStudio Package Manager is now generally available

Natasha Mathur
19 Oct 2018
2 min read
The RStudio team announced the general availability of its latest professional product, RStudio Package Manager, two days ago. Its features include CRAN access, approved subsets of CRAN packages, adding internal packages from Git, and an optimized experience for R users, among others.

RStudio Package Manager is an on-premises server product that helps teams and organizations centralize and organize R packages. In other words, it allows R users and the IT team to work together to build a central repository for R packages. Let's discuss the features of this new Package Manager.

CRAN access

RStudio Package Manager allows R users to access CRAN (The Comprehensive R Archive Network) without requiring a network exception on every production node. It also helps automate CRAN updates on your schedule. Moreover, you can optimize disk usage and download only the packages that you need. However, RStudio Package Manager does not yet provide binary packages from CRAN, only source packages; this limitation will be addressed in the future.

Approved subsets of CRAN packages

RStudio Package Manager enables admins to create approved subsets of CRAN packages, and it makes sure that those subsets remain stable even as packages are added or updated.

Adding internal packages using the CLI

Administrators can now add internal packages using the CLI. For instance, if your internal packages are in Git, RStudio Package Manager can automatically track your Git repositories and make new commits accessible to users.

Optimized experience for R users

RStudio Package Manager offers a seamless experience optimized for R users. All packages are versioned, which automatically keeps older versions accessible to users. The Package Manager also records usage statistics. These metrics help administrators conduct audits and make it easy for R users to discover the most popular and useful packages.

For more information, check out the official RStudio Package Manager blog.

Getting Started with RStudio
Introducing R, RStudio, and Shiny

Ebiten 1.8, a 2D game library in Go, is here with experimental WebAssembly support and newly added APIs

Natasha Mathur
19 Oct 2018
3 min read
Version 1.8 of Ebiten, a 2D game library written in Go, was released yesterday. Ebiten 1.8 comes with new features such as an experimental WebAssembly port, newly added APIs, and bug fixes, among other updates.

Ebiten is a very simple 2D game library for Go that offers 2D graphics (geometry/color matrix transformations, various composition modes, offscreen rendering, fullscreen, text rendering), input, and audio support.

Experimental WebAssembly port

Ebiten 1.8 adds a WebAssembly port, though it is still in the experimental phase. A game compiles to a single WebAssembly module that includes the Go runtime for goroutine scheduling, garbage collection, maps, and other Go essentials, resulting in a module of at least 2MB, or 500KB when compressed. WebAssembly is a binary instruction format for a stack-based virtual machine, designed as a compilation target for high-level languages such as C/C++/Rust, which helps with easily deploying apps on the client and the server.

New APIs added

New APIs have been added in Ebiten 1.8 for polygons, TPS, vsync, package audio, and package ebitenutil.

For polygons, the type DrawTrianglesOptions has been added, which represents options for rendering triangles on an image. Another new type, Vertex, represents a vertex passed to DrawTriangles.

For TPS (ticks per second), func CurrentTPS() float64 returns the current TPS, the number of update-function calls in a second. func MaxTPS() int returns the current maximum TPS, and func SetMaxTPS(tps int) sets the maximum TPS, the number of update-function calls per second.

For vsync, func IsVsyncEnabled() bool returns a boolean value that indicates whether the game is using the display's vsync, and func SetVsyncEnabled(enabled bool) sets that behavior.

For package audio, func (c *Context) IsReady() bool returns a boolean value that indicates whether the audio is ready or not. For package ebitenutil, func DebugPrintAt(image *ebiten.Image, str string, x, y int) draws the string str on the image at position (x, y).

Bug fixes

A bug causing issues on multi-monitor setups has been fixed, as have issues related to macOS 10.14 Mojave.

For more information, check out the official release notes.

GitHub is bringing back Game Off, its sixth annual game building competition, in November
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week
Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service

Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet

Sunith Shetty
19 Oct 2018
3 min read
Graph Nets is DeepMind's new library for building graph networks in TensorFlow and Sonnet. Last week, the paper "Relational inductive biases, deep learning, and graph networks" was published on arXiv by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. The paper introduces a machine learning framework called graph networks, which is expected to bring new innovations to the artificial general intelligence realm.

What are graph networks?

Graph networks generalize and extend various types of neural networks to perform calculations on graphs. They can implement relational inductive bias, a technique used for reasoning about inter-object relations. The graph networks framework is based on graph-to-graph modules. Each graph's features are represented by three characteristics:

Nodes
Edges: relations between the nodes
Global attributes: system-level properties

A graph network takes a graph as input, performs the required operations and calculations from the edges, to the nodes, and to the global attributes, and then returns a new graph as output. The research paper argues that graph networks can support two critical human-like capabilities:

Relational reasoning: drawing logical conclusions about how different objects and things relate to one another
Combinatorial generalization: constructing new inferences, behaviors, and predictions from known building blocks

To understand and learn more about graph networks, you can refer to the official research paper.

Graph Nets

The Graph Nets library can be installed from pip. To install the library, run the following command:

$ pip install graph_nets

The installation is compatible with Linux/Mac OS X and Python versions 2.7 and 3.4+. The library includes Jupyter notebook demos which allow you to create, manipulate, and train graph networks to perform tasks such as a shortest-path finding task, a sorting task, and a prediction task. Each demo uses the same graph network architecture, which shows the flexibility of the approach.

You can try out the demos in your browser using Colaboratory; in other words, you don't need to install anything locally when running the demos in the browser (or on a phone) via the cloud-based Colaboratory backend. You can also run the demos on your local machine by installing the necessary dependencies.

What's ahead?

The concept draws on ideas not only from artificial intelligence research but also from the computer and cognitive sciences. Graph networks are still an early-stage research theory which does not yet offer convincing experimental results, but it will be very interesting to see how well graph networks live up to the hype as they mature.

To try out the open source library, you can visit the official GitHub page. To provide comments or suggestions, you can contact graph-nets@google.com.

2018 is the year of graph databases. Here's why.
Why Neo4j is the most popular graph database
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
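To make the node/edge/global structure described above concrete, here is a minimal, hypothetical sketch of one graph-to-graph update step in plain Python. The function name and the numeric features are invented for illustration; the actual graph_nets library uses learned TensorFlow/Sonnet modules for each update, not fixed arithmetic.

```python
# A tiny graph: three nodes, two directed edges, one global attribute.
graph = {
    "nodes": [1.0, 2.0, 3.0],             # per-node features
    "edges": [(0, 1, 0.5), (1, 2, 1.0)],  # (sender, receiver, edge feature)
    "global": 0.0,                        # system-level property
}

def graph_network_step(g):
    """One edge -> node -> global update pass, returning a new graph."""
    nodes = list(g["nodes"])
    # 1. Edge update: each edge combines its feature with its endpoint features.
    edges = [(s, r, f + nodes[s] + nodes[r]) for s, r, f in g["edges"]]
    # 2. Node update: each node aggregates its incoming updated edge features.
    for s, r, f in edges:
        nodes[r] += f
    # 3. Global update: aggregate over all updated node features.
    glob = g["global"] + sum(nodes)
    return {"nodes": nodes, "edges": edges, "global": glob}

out = graph_network_step(graph)
print(out["global"])  # -> 15.5
```

In the real library each of the three update functions is a trainable neural network, which is what lets graph networks learn relational reasoning rather than apply a fixed rule.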

Ubuntu 18.10 ‘Cosmic Cuttlefish’ releases with focus on AI development, multi-cloud and edge deployments, and much more!

Melisha Dsouza
19 Oct 2018
3 min read
“Ubuntu is now the world’s reference platform for AI engineering and analytics.” -Mark Shuttleworth, CEO of Canonical

Yesterday (18th October), Canonical announced the release of Ubuntu 18.10, termed ‘Cosmic Cuttlefish’. The new release focuses on multi-cloud deployments, AI software development, a new community desktop theme, and richer snap desktop integration. According to Mark, the new release will help accelerate developer productivity and help enterprises operate at better speed while being scalable across multiple clouds and diverse edge appliances.

Fun fact: Ubuntu codenames are in incremental alphabetical order. Following Ubuntu 18.04 Bionic Beaver, we now have the Cosmic Cuttlefish. The codenames are made up of an adjective and an animal, both starting with the same letter.

5 major features of Ubuntu 18.10

#1 New compression algorithms for faster installation and boot

Ubuntu 18.10 uses the compression algorithms LZ4 and zstd, which support around 10% faster boot compared to the algorithms used in the previous version. They also speed up the installation process, which takes around 5 minutes in offline mode.

#2 Optimized for multi-cloud computing

This version is designed especially with cloud-based deployments in mind. The Ubuntu Server 18.10 images are available on all major public clouds. For private clouds, the release supports OpenStack Rocky for AI and NFV hardware acceleration, and it comes with Ceph Mimic to reduce storage overhead. By including Kubernetes version 1.12, the new version brings increased security and scalability, automating the provisioning of clusters with transport layer encryption, and it is more responsive to dynamic workloads through faster scaling.

#3 Improved gaming performance

The kernel has been updated to the 4.18-based Linux kernel. In addition, updates in Mesa and X.org significantly improve game performance. Graphics support expands to AMD VegaM in the latest Intel Kabylake-G CPUs, the Raspberry Pi 3 Model B and B+, and Qualcomm Snapdragon 845. Ubuntu 18.10 also introduces the recently released GNOME 3.30 desktop, contributing to an overall gaming performance boost.

#4 Startup time boost and XDG Portals support for Snap applications

Canonical is bringing some useful improvements to its Snap packages. Snap applications will start in less time, and with XDG portal support, Snaps can be installed in a few clicks from the Snapcraft Store website. Major public cloud and server applications like Google Cloud SDK, AWS CLI, and Azure CLI are now available in the new version. The new release also allows accessing files on the host system through native desktop controls.

#5 New default theme and icons

Ubuntu 18.10 uses the Yaru community theme, replacing the long-serving Ambiance and Radiance themes and giving the desktop a fresh new look and feel.

Other miscellaneous changes include:

DLNA support connects Ubuntu with DLNA-supported smart TVs, tablets, and other devices
Fingerprint scanners are now supported
Ubuntu Software removes dependencies while uninstalling software
The default toolchain has moved to gcc 8.2 with glibc 2.28
Ubuntu 18.10 is also updated to openssl 1.1.1 and gnutls 3.6.4 with TLS 1.3 support

All these upgrades are causing waves in the Linux community. That being said, users are requested to check the release notes for issues that were encountered in this new version. You can head over to the official release page to download the new version of the OS, or learn more about these new features at itsfoss.com.

KUnit: A new unit testing framework for Linux Kernel
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
The kernel community attempting to make Linux more secure

Angular 7 is now stable

Bhagyashree R
19 Oct 2018
3 min read
After releasing Angular 7 beta in August, the Angular team released the stable version of Angular 7 yesterday. This version comes with a DoBootstrap interface, an update to XMB placeholders, and updated dependencies: TypeScript 3.1 and RxJS 6.3. Let's see what features this version brings:

The DoBootstrap interface

Earlier, there was an interface available for each lifecycle hook, but ngDoBootstrap was missing a corresponding interface. This release adds the new lifecycle hook interface DoBootstrap. Here's an example using DoBootstrap:

class AppModule implements DoBootstrap {
  ngDoBootstrap(appRef: ApplicationRef) {
    appRef.bootstrap(AppComponent);
  }
}

An "original" placeholder value on extracted XMB

XMB placeholders (<ph>) are now updated to include the original value on top of an example. By definition, placeholders have one example tag (<ex>) and a text node. The text node is used as the original value from the placeholder, while the example represents a dummy value.

compiler-cli

An option has been added to extend angularCompilerOptions in tsconfig. Currently, only TypeScript supports merging and extending of compilerOptions; this update allows extending and inheriting angularCompilerOptions from multiple files.

A new parameter for the CanLoad interface

In addition to Route, an UrlSegment[] is passed to implementations of CanLoad as a second parameter. It contains the array of path elements the user tried to navigate to before canLoad is evaluated. This helps users store the initial URL segments and refer to them later, for example, to go back to the original URL after authentication via router.navigate(urlSegments). Existing code still works as before because the second function parameter is not mandatory.

Dependency updates

The following dependencies are upgraded to their latest versions:

@angular/core now depends on TypeScript 3.1 and RxJS 6.3
@angular/platform-server now depends on Domino 2.1

Bug fixes

Along with these added features, Angular 7 comes with many bug fixes, some of which are:

Mappings are added for ngfactory and ngsummary files to their module names in the AOT summary resolver.
fileNameToModuleName lookups are now cached to save expensive reparsing of files when not running as a Bazel worker.
It is allowed to privately import compile_strategy.
Earlier, an attempt to bootstrap a component that includes a router config using AOT summaries caused the test to fail; this is fixed in this release.
The compiler is updated to flatten nested template fns and to generate new slot allocations.

To read the full list of changes, check out Angular's GitHub repository.

Angular 7 beta.0 is here!
Why switch to Angular for web development – Interview with Minko Gechev
ng-conf 2018 highlights, the popular angular conference

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT. It is a small board that, together with a TV antenna, lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna connector. The new add-on follows a new HAT (Hardware Attached on Top) form factor: it is a half-sized HAT matching the outline of Raspberry Pi Zero boards.

TV HAT specifications and requirements

The add-on board has a Sony CXD2880 TV tuner. It supports TV standards such as DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required for using the Raspberry Pi TV HAT, and TVHeadend is the recommended software to start with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT attached, you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing, so a Pi 2, 3, or 3B+ is recommended.

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network, including computers, smartphones, and tablets. Initially, the TV HAT will be available only in Europe. It is now on sale for $21.50; visit the Raspberry Pi website for more details.
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation

Natasha Mathur
19 Oct 2018
3 min read
After releasing PostgreSQL 11 beta 1 back in May, the PostgreSQL Global Development Group finally released PostgreSQL 11 yesterday. PostgreSQL 11 brings features such as increased performance for partitioning, support for transactions in stored procedures, improved capabilities for query parallelism, and Just-in-Time (JIT) compilation for expressions, among other updates. PostgreSQL is a popular open source relational database management system known for its reliability, robustness, and performance. Let's have a look at these features in PostgreSQL 11.

Increased performance for partitioning

PostgreSQL 11 adds the ability to partition data using a hash key, which is known as hash partitioning. This adds to the already existing ability to partition data in PostgreSQL by a list of values or by a range. PostgreSQL 11 also improves data federation abilities with functionality improvements for partitions that use the PostgreSQL foreign data wrapper, postgres_fdw.

For managing partitions, PostgreSQL 11 comes with a "catch-all" default partition for data that doesn't match a partition key, and with the ability to create primary keys, foreign keys, indexes, and triggers on partitioned tables. The release also supports automatic movement of rows to the correct partition when the partition key for a row is updated. Additionally, PostgreSQL 11 improves query performance when reading from partitions with the help of a new partition elimination strategy, and it supports the popular "upsert" feature on partitioned tables, which helps users simplify application code and reduce network overhead when interacting with their data.

Support for transactions in stored procedures

PostgreSQL 11 adds SQL procedures that can perform full transaction management within their body, enabling developers to build advanced server-side applications, such as ones that involve incremental bulk data loading. SQL procedures are created using the CREATE PROCEDURE command, executed using the CALL command, and supported by the server-side procedural languages PL/pgSQL, PL/Perl, PL/Python, and PL/Tcl.

Improved capabilities for query parallelism

PostgreSQL 11 enhances parallel query performance through gains in parallel sequential scans and hash joins, along with more efficient scans of partitioned data. It adds parallelism for a range of data definition commands, especially the creation of B-tree indexes with the standard CREATE INDEX command. Other data definition commands that create tables or materialized views from queries, including CREATE TABLE .. AS, SELECT INTO, and CREATE MATERIALIZED VIEW, are also now enabled with parallelism.

Just-in-Time (JIT) compilation for expressions

PostgreSQL 11 offers support for Just-in-Time (JIT) compilation, which helps accelerate the execution of certain expressions during query execution. The JIT expression compilation uses the LLVM project to boost the execution of expressions in WHERE clauses, target lists, aggregates, projections, as well as some other internal operations.

Other improvements

ALTER TABLE .. ADD COLUMN .. DEFAULT .. with a non-NULL default no longer rewrites the whole table on execution, which offers a significant performance boost when running this command.
Additional functionality has been added for working with window functions, including allowing RANGE to use PRECEDING/FOLLOWING, GROUPS, and frame exclusion.
The keywords "quit" and "exit" have been added to the PostgreSQL command-line interface to make it easier to leave the command-line tool.

For more information, check out the official release notes.
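As a brief illustration of the partitioning and stored procedure features described above, here is a hedged SQL sketch for PostgreSQL 11. The table and procedure names are invented for the example; it shows a four-way hash-partitioned table and a procedure that commits in batches, which a plain function cannot do.

```sql
-- Hash partitioning (new in 11): rows are distributed across four
-- partitions by a hash of the partition key.
CREATE TABLE events (id bigint NOT NULL, payload text) PARTITION BY HASH (id);
CREATE TABLE events_p0 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE events_p2 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE events_p3 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 3);

-- A SQL procedure (new in 11) may COMMIT inside its body, unlike a
-- function, which makes incremental bulk loading possible.
CREATE PROCEDURE load_events(n bigint)
LANGUAGE plpgsql AS $$
BEGIN
  FOR i IN 1..n LOOP
    INSERT INTO events (id, payload) VALUES (i, 'row ' || i);
    IF i % 1000 = 0 THEN
      COMMIT;  -- make the batch durable mid-procedure
    END IF;
  END LOOP;
END;
$$;

CALL load_events(10000);
```

Note that in PostgreSQL 11 the "catch-all" DEFAULT partition applies to list and range partitioning; a hash-partitioned table instead needs a partition for every modulus/remainder combination, as above.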
PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24
How to perform data partitioning in PostgreSQL 10
How to write effective Stored Procedures in PostgreSQL

Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Sugandha Lahoti
19 Oct 2018
3 min read
Atlassian has completely revamped its traditional Jira software, adding a simplified user experience, new third-party integrations, and a new product roadmaps tool. Announcing the release yesterday in their official blog post, the company said it has "rolled out an entirely new project experience for the next generation with a focus on making Jira Simply Powerful."

Sean Regan, head of growth for Software Teams at Atlassian, said that with a more streamlined and simplified application, Atlassian hopes to appeal to a wider range of business execs involved in the software-creation process.

What's new in the revamped Jira software?

Powerful tech stack: Jira Software has been transformed into a modern cloud app, with an updated tech stack, permissions, and UX. Developers have more autonomy, administrators have more flexibility, and advanced users have more power. "Additionally, we've made Jira simpler to use across the board. Now, anyone who works with development teams can collaborate more easily."

Customizable workflow: To upgrade the user experience, Atlassian has introduced a new feature called build-your-own-boards. Users can customize their own workflow, issue types, and fields for a board without administrator access and without jeopardizing other projects' customizations. This customizable workflow was inspired by Trello, the task management app acquired by Atlassian for $425 million in 2017. "What we tried to do in this new experience is mirror the power that people know and love about Jira, with the simplicity of an experience like Trello," said Regan.

Third-party integrations: The new Jira comes with almost 600 third-party integrations. These third-party applications, Atlassian said, should help appeal to a broader range of job roles that interact with developers. Integrations include Adobe, Sketch, and Invision, along with Facebook's Workplace and updated integrations for Gmail and Slack.

Jira Cloud Mobile: Jira Cloud Mobile helps developers access their projects from their smartphones. Developers can create, read, update, and delete issues and columns; groom their backlog and start and complete sprints; and respond to comments and tag relevant stakeholders, all from their mobile.

Roadmapping tool: Jira now features a brand-new roadmaps tool that makes it easier for teams to see the big picture. "When you have multiple teams coordinating on multiple projects at the same time, shipping different features at different percentage releases, it's pretty easy for nobody to know what is going on," said Regan. "Roadmaps helps bring order to the chaos of software development."

Pricing for the Jira software varies by the number of users: $10 per user per month for teams of up to 10 people, $7 per user per month for teams of between 11 and 100 users, and varying prices for teams larger than 100. The company also offers a free 7-day trial.

Read more about the release on the Jira Blog. You can also have a look at Atlassian's public roadmap.

Atlassian acquires OpsGenie, launches Jira Ops to make the incident response more powerful
GitHub's new integration for Jira Software Cloud aims to provide teams with a seamless project management experience
Atlassian open sources Escalator, a Kubernetes autoscaler project

KUnit: A new unit testing framework for Linux Kernel

Savia Lobo
18 Oct 2018
2 min read
On Tuesday, Google engineer Brendan Higgins announced an experimental set of 31 patches introducing KUnit, a new Linux kernel unit testing framework intended to help preserve and improve the quality of the kernel's code. KUnit is a lightweight unit testing and mocking framework designed for the Linux kernel. Unit tests have finer granularity than other kernel tests: they can exercise all code paths easily, solving the classic problem of how difficult it is to exercise error-handling code. KUnit is heavily inspired by JUnit, Python's unittest.mock, and Googletest/Googlemock for C++. It provides facilities for defining unit test cases, grouping related test cases into test suites, providing common infrastructure for running tests, mocking, spying, and much more.

Brendan writes, "It does not require installing the kernel on a test machine or in a VM and does not require tests to be written in userspace running on a host kernel. Additionally, KUnit is fast: From invocation to completion KUnit can run several dozen tests in under a second. Currently, the entire KUnit test suite for KUnit runs in under a second from the initial invocation (build time excluded)."

When asked whether KUnit will replace the other testing frameworks for the Linux kernel, Brendan said it would not: "Most existing tests for the Linux kernel are end-to-end tests, which have their place. A well tested system has lots of unit tests, a reasonable number of integration tests, and some end-to-end tests. KUnit is just trying to address the unit test space which is currently not being addressed."

To know more about KUnit in detail, read Brendan Higgins' email threads.

What role does Linux play in securing Android devices?
bpftrace, a DTrace like tool for Linux now open source
Linux drops Code of Conflict and adopts new Code of Conduct
"We can sell dangerous surveillance systems to police or we can stand up for what's right. We can't do both," says a protesting Amazon employee

Natasha Mathur
18 Oct 2018
5 min read
An Amazon employee has spoken out, in a letter, against Amazon selling its facial recognition technology, Rekognition, to police departments across the world. The news of Amazon selling its facial recognition technology to the police first came out in May this year.

Earlier this week, Jeff Bezos spoke at the WIRED25 Summit regarding the use of technology to help the Department of Defense: "We are going to continue to support the DoD, and I think we should. The last thing we'd ever want to do is stop the progress of new technologies. If big tech companies are going to turn their back on US Department of Defense, this country is going to be in trouble."

Soon after, a letter was published yesterday on Medium by an anonymous Amazon employee, whose identity was verified offline by the Medium editorial team. It read, "A couple weeks ago, my co-workers delivered a letter to this effect, signed by over 450 employees, to Jeff Bezos and other executives. We know Bezos is aware of these concerns... he acknowledged that big tech's products might be misused, even exploited, by autocrats. But rather than meaningfully explain how Amazon will act to prevent the bad uses of its own technology, Bezos suggested we wait for society's immune response."

The letter also laid out the employees' demands: kick Palantir, the software firm powering ICE's deportation and tracking program, off Amazon Web Services, and initiate employee oversight for ethical decisions within the company. It also clearly states that their concern is not about harm some company might cause in the future; it is about the fact that Amazon is "designing, marketing, and selling a system for mass surveillance right now". In fact, Rekognition is already being used by law enforcement with zero debate or restrictions on its use from Amazon.
For instance, Orlando, Florida, is currently testing Rekognition with live video feeds from surveillance cameras around the city. Rekognition is a deep-learning-based service capable of storing and searching tens of millions of faces at a time. It allows detection of objects, scenes, activities, and inappropriate content.

Amazon had also received criticism from the ACLU over selling Rekognition to police: "People should be free to walk down the street without being watched by the government. By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo."

Amazon was quick to defend itself at the time, saying in a statement emailed to various news organizations: "Our quality of life would be much worse today if we outlawed new technology because some people could choose to abuse the technology. Imagine if customers couldn't buy a computer because it was possible to use that computer for illegal purposes? Like any of our AWS services, we require our customers to comply with the law and be responsible when using Amazon Rekognition."

The protest by Amazon employees stems from the same concern as the ACLU's: putting Rekognition in the hands of the government puts people's privacy at stake, as they won't be able to go about their lives without being constantly monitored. "Companies like ours should not be in the business of facilitating authoritarian surveillance. Not now, not ever. But Rekognition supports just that by pulling dozens of facial IDs from a single frame of video and storing them for later use or instantly comparing them with databases of millions of pictures. We cannot profit from a subset of powerful customers at the expense of our communities; we cannot avert our eyes from the human cost of our business", mentions the letter.

The letter also points out that Rekognition is not accurate in its ability to identify people and is a "flawed technology" that is more likely to "misidentify people" with darker skin tones. Earlier this year, Rekognition was put to the test with pictures of Congress members compared against a collection of mugshots; the result was 28 false matches, with incorrect results disproportionately affecting people of color. This, the letter argues, makes it irresponsible, unreliable, and unethical for the government to use Rekognition.

"We will not silently build technology to oppress and kill people, whether in our country or in others. Amazon talks a lot about values of leadership. If we want to lead, we need to make a choice between people and profits. We can sell dangerous surveillance systems to police or we can stand up for what's right. We can't do both", reads the letter.

For more information, check out the official letter by Amazon employees.

Redis 5 is now out

Bhagyashree R
18 Oct 2018
2 min read
After announcing Redis 5 RC1 in May earlier this year, the stable version of Redis 5 was released yesterday. This release brings a new Stream data type, LFU/LRU information in RDB, active defragmentation version 2, HyperLogLog improvements, and many other refinements.

What is new in Redis 5?

Redis 5 introduces a new data type called Stream, which models a log data structure in a more abstract way.

Modules gain three important APIs: the Cluster API, Timer API, and Dictionary API. With these APIs, you can build a distributed system with Redis, using it as a framework and creating your own protocols.

To provide better caching accuracy after a restart or when a slave does a full sync, RDB now stores LFU and LRU information. Future releases are likely to add a feature that sends TOUCH commands to slaves to update their information about hot keys.

The cluster manager has been ported from Ruby to C and is integrated with redis-cli. As a result, it is faster and no longer has an external dependency. To learn more about the cluster manager, run the redis-cli --cluster help command. Also, many commands with subcommands now have a HELP subcommand.

New sorted set commands, ZPOPMIN/ZPOPMAX, and their blocking variants are introduced. These commands are useful in applications such as time series and leaderboards.

Active defragmentation version 2 improves the process of defragmenting the memory of a running server. This is very useful for long-running workloads that tend to fragment Jemalloc; Jemalloc itself is upgraded to version 5.1.

The HyperLogLog implementation has been improved with refined algorithms that offer more accurate cardinality estimation.

This version also brings better memory reporting capabilities, and improved networking, especially around emitting large objects, plus CLIENT UNBLOCK and CLIENT ID for useful patterns around connection pools and blocking commands.
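To make the Stream type concrete: a stream is an append-only log of field-value entries with monotonically increasing IDs, read back by ID ranges (XADD and XRANGE in Redis). The toy Python model below mimics those semantics for illustration only; it is not the Redis client API, and real Redis IDs are millisecond-timestamp/sequence pairs rather than a plain counter:

```python
class MiniStream:
    """Toy model of Redis 5 Stream semantics (illustration, not the real API)."""

    def __init__(self):
        self._entries = []  # (entry_id, fields) in append order
        self._seq = 0

    def xadd(self, fields):
        # Real Redis IDs look like "<ms-time>-<seq>"; a counter suffices here.
        self._seq += 1
        entry_id = f"0-{self._seq}"
        self._entries.append((entry_id, dict(fields)))
        return entry_id

    def xrange(self, start_seq=1, end_seq=None):
        # Return entries whose sequence number falls in [start_seq, end_seq].
        out = []
        for entry_id, fields in self._entries:
            seq = int(entry_id.split("-")[1])
            if seq >= start_seq and (end_seq is None or seq <= end_seq):
                out.append((entry_id, fields))
        return out


stream = MiniStream()
stream.xadd({"sensor": "1", "temp": "19.8"})
stream.xadd({"sensor": "1", "temp": "20.1"})
print(stream.xrange())  # both entries, with IDs "0-1" and "0-2"
```

The append-only, range-readable shape is what distinguishes Streams from Redis lists and sorted sets, and it is what makes them a fit for log-style workloads.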
Read the full Redis 5 release notes on GitHub.

How the Titan M chip will improve Android security

Prasad Ramesh
18 Oct 2018
4 min read
Aside from the big notch on the Pixel 3 XL, both the Pixel 3 XL and the Pixel 3 will sport a new security chip called the Titan M. This dedicated chip raises the security game in the new Pixel devices. The "M" most likely stands for mobile. The Titan chip was previously used internally at Google, and this is another move towards putting better security in the hands of everyday consumers, after Google made the Titan security key available for purchase.

What does the Titan M do?

The Titan M is an individual low-power security chip designed and manufactured by Google. It is not part of the Snapdragon 845 powering the new Pixel devices. It performs several security functions at the hardware level:

- Stores and enforces the locks and rollback counters used by Android Verified Boot to prevent attackers from unlocking the bootloader.
- Securely locks and encrypts your phone and limits invalid attempts at unlocking the device.
- Lets apps use the Android StrongBox Keymaster module to generate and store keys on the Titan M.
- Has direct electrical connections to the Pixel's side buttons, preventing an attacker from faking button presses.
- Enforces factory-reset policies so that lost or stolen devices can be restored only by the owner.
- With Insider Attack Resistance, ensures that even Google cannot unlock a phone or install firmware updates without the passcode set by the owner.

An overview of the Titan M chip

Since the Titan M is a separate chip, it protects against hardware-level attacks such as Rowhammer, Spectre, and Meltdown. Google has complete control and supervision over building the chip, right from the silicon stage, and has incorporated features like low power usage, low latency, hardware cryptographic acceleration, tamper detection, and secure, timely firmware updates. On the left is the first-generation Titan chip and on the right is the new Titan M chip.
Source: Google Blog

Titan M CPU

The CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and act upon abnormal conditions. The CPU core also exposes several control registers that gate access to chip configuration settings and peripherals. The Titan M verifies the signature of its firmware using a public key built into the chip; on successful signature verification, the flash is locked to prevent any modification. The chip also has a large programmable coprocessor for public-key algorithms.

Encryption in the chip

The chip features hardware accelerators for algorithms such as AES and SHA. The accelerators are flexible: they can be initialized either with firmware-provided keys or with chip-specific, hardware-bound keys generated by the Key Manager module. The chip-specific keys are generated internally with a True Random Number Generator (TRNG), so such keys are confined entirely to the chip and are never available outside it.

Google packed as many security features as possible into the Titan M's 64 KB of RAM. The chip's RAM contents can be preserved even during battery-saving mode, when most hardware modules are turned off. Here's a diagram showing the chip components.

Source: Google Blog

Google is aware of what goes into each chip, from logic gates to boot code. The chip enables higher security in areas like two-factor authentication, medical device control, and P2P payments, among other potential future uses. The Titan M firmware source code will be made publicly available soon.

For more details, visit the Google Blog.
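The firmware check described above is a standard verified-boot pattern: a key baked into the chip verifies a signature over the firmware image before it is allowed to run. The Python sketch below illustrates the idea using an HMAC in place of a real public-key signature (a deliberate simplification; Titan M uses asymmetric signatures, and none of these names or values come from Google's implementation):

```python
import hashlib
import hmac

# Stand-in for the key provisioned into the chip at manufacturing time
# (hypothetical; a real chip embeds a *public* key and verifies an
# asymmetric signature rather than an HMAC).
BUILT_IN_KEY = b"factory-provisioned-secret"


def sign_firmware(image: bytes) -> bytes:
    """What a (hypothetical) build pipeline would do before shipping."""
    return hmac.new(BUILT_IN_KEY, image, hashlib.sha256).digest()


def verify_and_boot(image: bytes, signature: bytes) -> str:
    """Boot only if the signature over the image checks out."""
    expected = hmac.new(BUILT_IN_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "refused: bad signature"
    # At this point a real chip would also lock flash against modification.
    return "booted"


fw = b"firmware-image-bytes"
good_sig = sign_firmware(fw)
print(verify_and_boot(fw, good_sig))              # booted
print(verify_and_boot(fw + b"tamper", good_sig))  # refused: bad signature
```

Because any change to the image invalidates the signature, tampered firmware is rejected before it ever executes, which is the property Verified Boot relies on.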
Creator-Side Optimization: How LinkedIn's new feed model helps small creators

Melisha Dsouza
18 Oct 2018
4 min read
LinkedIn has 567M members and creates new opportunities for connecting with professionals all over the globe, with more than a million posts, videos, and articles shared on the LinkedIn feed each day. However, the team identified that growth in the number of post creators and viewers has led to problems: almost no recognition for lesser-known creators, and viral posts drowning out posts from closer connections. To combat this, the team combined multiple experimental techniques and came up with a smarter feed relevance model.

The problem: almost no recognition for small creators

The team discovered that the reactions members give to creators' posts were not distributed equally. In other words, the number of creators who get zero feedback on a post was increasing. This posed a huge problem, as getting feedback is a motivational boost for creators to continue posting. Influencers with millions of followers get far more reactions than the average person, and if feed viewers kept giving feedback to the top 1% of posters, who were already getting plenty of attention, lesser-known creators would not be recognized at all. A second issue was anecdotal reports that irrelevant hyper-viral posts were gaming the feed and crowding out posts from closer connections.

Issues with the old feed model

The original LinkedIn feed model was designed so that if many people had already enjoyed, liked, and shared a piece of content, the feed would guess that a new viewer is also highly likely to enjoy it, and hence show highly viral content. This meant viewers missed important posts from close connections and people they know personally. Moreover, the model was not programmed to consider how much the creator might appreciate receiving feedback from the viewer.
The solution: a new optimization function

The team added an additional term to the optimization function of the relevance model. The term quantifies the value a creator receives when a viewer gives feedback on their post. Knowing how much a given creator will appreciate feedback from a given viewer, the feed uses this information to rank posts. The model also accounts for "spam feedback" by considering the quality of the post, to avoid spamming viewers with low-quality posts. This consideration for small creators ensures that no one is left behind and that they can reach their community.

Optimization from the creator's perspective

To test the model, the team used "upstream/downstream" metrics. One upstream metric is "first likes given", which quantifies how often a feed viewer likes a post that previously had no likes: if a viewer sees a post with no likes and clicks the like button, that creates a "first like given" event. Another family of upstream metrics, called "creator love", describes how the creator feels about the viewer's actions and the impact those actions have on the creator's post. The suite of metrics contains several variations involving comments, the freshness of the post, and the changing value of feedback beyond the first piece; it all boils down to measuring the value given to the creator. The team also used edge-based bootstrapping on Bernoulli randomization, pioneered by Jim Sorenson.
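A minimal sketch of what such a two-term ranking objective could look like. Everything here (the feature names, the weight `ALPHA`, the heuristic creator-value function) is a hypothetical illustration of the idea of adding a creator-side term to a viewer-side relevance score, not LinkedIn's actual model:

```python
# Hypothetical two-term relevance score:
#   score = P(viewer engages) + ALPHA * value_to_creator
# The second term boosts posts whose creators would especially
# appreciate feedback (e.g. posts with few or no reactions yet).

ALPHA = 0.5  # made-up trade-off weight between viewer and creator value


def value_to_creator(post):
    # Toy heuristic: feedback is most valuable when a post has no likes yet,
    # and its value decays as reactions accumulate.
    return 1.0 / (1.0 + post["likes"])


def score(post):
    return post["p_viewer_engages"] + ALPHA * value_to_creator(post)


posts = [
    {"id": "viral", "p_viewer_engages": 0.60, "likes": 5000},
    {"id": "friend_first_post", "p_viewer_engages": 0.45, "likes": 0},
]
ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # ['friend_first_post', 'viral']
```

Even though the viral post has the higher predicted engagement, the creator-value term lifts the zero-like post from a close connection to the top, which is exactly the redistribution effect the article describes.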
Did the new feed model help?

The answer is yes: the change turned out to be successful for both creators and feed viewers. The team believes it is helping posts from close connections appear at the top of a member's feed, and members like seeing more content from people they know. The model especially benefited creators with smaller networks. It was expected to take about 8% of feedback away from the top 0.1% of creators and redistribute it to the bottom 98%. This worked, and showed a 5% increase in creators returning to post again. As for top creators, losing 8% of their likes still leaves the top 0.1% better off than they were a year ago; the changes simply help ensure equality among all members of the network. It will be interesting to see the impact the new model has on viewers' feeds and their reaction to it.

For more depth on the experiments the team performed and their line of thought, you can visit their official blog.

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, a customer had to pass a Foundational- or Associate-level exam before appearing for a Professional or Specialty certification. AWS has now eliminated this prerequisite in response to customer requests for flexibility: customers are no longer required to hold an Associate certification before pursuing a Professional certification, nor a Foundational or Associate certification before pursuing a Specialty certification.

The Professional-level exams are tough to pass; without deep knowledge of the AWS platform, clearing them is difficult. A customer who skips the Foundational or Associate exams and sits the Professional exam directly may lack the practice and knowledge needed to fare well, and failing and then backing up to the Associate level can be demotivating.

AWS Certification helps individuals demonstrate expertise in designing, deploying, and operating highly available, cost-effective, and secure applications on AWS, bringing them tangible career benefits. It also helps employers identify skilled professionals who can lead IT initiatives with AWS technologies and reduce the risk and cost of implementing workloads and projects on the AWS platform. With AWS dominating the cloud computing market, the AWS Certified Solutions Architect exams can help candidates secure a career in this field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams.

To know more about this announcement, head over to the official AWS blog.