Tech News

3711 Articles

Chaos engineering platform Gremlin launches Gremlin Free

Richard Gall
27 Feb 2019
3 min read
Chaos engineering has been a trend to watch for the last 12 months, but it has yet to really capture the imagination of the global software industry. It remains a fairly specialised discipline, confined to the most forward-thinking companies that depend on extensive distributed systems. However, that could all be about to change thanks to Gremlin, which today announced the launch of Gremlin Free. Gremlin Free is a tool that allows software, infrastructure, and DevOps engineers to perform shutdown and CPU attacks on their infrastructure in a safe and controlled way, using a neat and easy-to-use UI.

In a blog post published on the Gremlin site today, Lorne Kligerman, Director of Product, said "we believe the industry has answered why do chaos engineering, and has begun asking how do I begin practicing Chaos Engineering in order to significantly increase the reliability and resiliency of our systems to provide the best user experience possible."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What is Gremlin Free?

Gremlin Free is based on Netflix's Chaos Monkey tool. Chaos Monkey is the tool that gave rise to chaos engineering back in 2011, when the streaming platform first moved to AWS. It let Netflix engineers "randomly shut down compute instances," which became a useful tactic for stress testing the reliability and resilience of its new microservices architecture.

What can you do with Gremlin Free?

There are two attacks you can run with Gremlin Free: Shutdown and CPU. As the name indicates, Shutdown lets you take down (or reboot) multiple hosts or containers. CPU attacks let you cause spikes in CPU usage and monitor their impact on your infrastructure. Both attacks can help teams identify pain points within their infrastructure, and ultimately form the foundations of an engineering strategy that relies heavily on the principles of chaos engineering.

Why Gremlin Free now?
Gremlin cites data from Gartner that underlines just how expensive downtime can be: according to Gartner, eCommerce companies lose an average of $5,600 per minute of downtime, with that figure climbing even higher for the planet's leading eCommerce businesses. However, despite the cost of downtime making a clear argument for chaos engineering's value, its adoption isn't widespread, certainly not as widespread as Gremlin believes it should be. Kligerman said "It's still a new concept to most engineering teams, so we wanted to offer a free version of our software that helps them become more familiar with chaos engineering - from both a tooling and culture perspective." If you're interested in trying chaos engineering, sign up for Gremlin Free here.
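Gremlin runs its attacks through its own agent and UI, but the core idea of a CPU attack is easy to sketch. The following toy Python script is illustrative only, not Gremlin's implementation: it pins worker processes in busy loops for a fixed duration so you can watch how your monitoring or autoscaling reacts.

```python
import multiprocessing
import time


def burn(seconds: float) -> None:
    """Spin the CPU until the deadline passes."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass


def cpu_attack(workers: int = 2, seconds: float = 1.0) -> None:
    """Launch `workers` busy-loop processes and wait for them to finish."""
    procs = [multiprocessing.Process(target=burn, args=(seconds,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()


if __name__ == "__main__":
    # Short, bounded demo run; a real experiment would target specific
    # hosts and have an abort switch, which is what Gremlin's UI provides.
    cpu_attack(workers=2, seconds=0.2)
```

The value of a managed tool over a script like this is exactly the "safe and controlled" part: scoping the blast radius and being able to halt the attack instantly.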

New research from Eclypsium discloses a vulnerability in Bare Metal Cloud Servers that allows attackers to steal data

Natasha Mathur
27 Feb 2019
4 min read
Security researchers at Eclypsium, a hardware security startup, published a paper yesterday examining vulnerabilities in bare-metal cloud servers that allow attackers to steal data. “We found weaknesses in methods for updating server BMC firmware that would allow an attacker to install malicious BMC firmware... these vulnerabilities can allow an attacker to not only do damage but also add other malicious implants that can persist and steal data”, the researchers state.

The BMC (baseboard management controller) is a highly privileged component and part of the Intelligent Platform Management Interface (IPMI). It can monitor the state of a computer and allow an operating system reinstall from a remote management console through an independent connection. This means there is no need to physically attach a monitor, keyboard, and installation media to the server.

Although bare-metal cloud offerings come with considerable benefits, they also pose new risks and challenges to security. In the majority of cloud services, once a customer is done using a bare-metal server, the hardware is reclaimed by the service provider and repurposed for another customer. In a bare-metal cloud offering, the underlying hardware can therefore easily pass through different owners, each with direct access to and control over that hardware. This opens the door to attackers: for a nominal sum of money, an attacker can gain access to a server and implant malicious firmware at the UEFI or BMC level, or within drives or network adapters. The attacker can then release the hardware back to the service provider, who could pass it on to another customer. Eclypsium researchers used IBM SoftLayer technology as a case study to test this attack scenario.
However, the researchers mention that the attack is not limited to any one service provider. IBM acquired SoftLayer Technologies, a managed hosting and cloud computing provider, in 2013; the offering is now known as IBM Cloud. The vulnerability found has been named Cloudborne.

The researchers chose SoftLayer as the testing environment due to its simplified logistics and access to hardware. However, SoftLayer was using vulnerable Supermicro server hardware. It took about 45 minutes for the Eclypsium team to provision the server. Once the instance was provisioned, they found that it had the latest BMC firmware available. They then made a benign, traceable change to the BMC firmware (flipping a single bit), and an additional IPMI user was created and given administrative access to the BMC channels. The system was then released back to IBM, which kicked off the reclamation process. The researchers noticed that the additional IPMI user was removed during the reclamation process, but the BMC firmware containing the flipped bit was still present, meaning that the server's BMC firmware was not re-flashed during server reclamation.

“The combination of using vulnerable hardware and not re-flashing the firmware makes it possible to implant malicious code into the server’s BMC firmware and inflict damage or steal data from IBM clients that use that server in the future”, the researchers state. Beyond that, BMC logs were also retained across provisioning, giving a new customer insight into the actions of the previous device owner. The BMC root password was also the same across provisioning, allowing an attacker to easily regain control over the machine in the future.

“While these issues have heightened importance for bare-metal services, they also apply to all services hosted in public and private clouds... to secure their applications, organizations must be able to manage these issues—or run the risk of endangering their most critical assets”, the Eclypsium researchers note. For more information, check out the official Eclypsium paper.
Read next:
• Security researchers disclose vulnerabilities in TLS libraries and the downgrade attack on TLS 1.3
• Drupal releases security advisory for ‘serious’ Remote Code Execution vulnerability
• A WordPress plugin vulnerability is leaking Twitter account information of users, making them vulnerable to compromise

Cloudflare takes a step towards transparency by expanding its government warrant canaries

Amrata Joshi
27 Feb 2019
3 min read
Just two days ago, Cloudflare, a U.S.-based company that provides content delivery network services, DDoS (distributed denial of service) mitigation, Internet security, and more, took a strong step towards transparency by releasing its transparency report for the second half of 2018. The company has been publishing biannual Transparency Reports since 2013.

A post by Cloudflare reads, “We believe an essential part of earning the trust of our customers is being transparent about our features and services, what we do – and do not do – with our users’ data, and generally how we conduct ourselves in our engagement with third parties such as law enforcement authorities.”

The company believes in allowing companies to silently warn customers when the government secretly tries to acquire customer data. A “warrant canary” is named after the canary birds that coal miners once took into mines: if the canary died, it signaled danger. The warrant canary has become a key transparency tool that privacy-focused companies can use to keep customers informed about what is happening with their data.

Cloudflare’s current canaries

Cloudflare has set forth certain ‘warrant canary’ statements, things it claims never to have done as a company. According to Cloudflare, the company has never turned over its SSL keys or its customers’ SSL keys to anyone. The company claims never to have installed any law enforcement software or equipment anywhere on its network. The report also states that Cloudflare has never terminated a customer or taken down content due to political pressure, and that it has never provided customers’ content to any law enforcement organization.

Cloudflare’s updated warrant canaries

The company has never modified customer content at the request of law enforcement or another third party.
Cloudflare has never modified the destination of DNS responses at the request of law enforcement or another third party. It has never compromised, weakened, or subverted any of its encryption at the request of law enforcement or another third party. Cloudflare has also expanded its first canary, confirming that the company has never turned over its encryption or authentication keys, or its customers’ encryption or authentication keys, to anyone. Cloudflare said that if it were ever asked to do any of the above, the company would “exhaust all legal remedies” to protect customer data, and remove the statements from its site.

Big companies like Apple have also worked in this direction. Apple included a statement in its most recent transparency reports stating that the company has to date “not received any orders for bulk data.” Reddit, meanwhile, removed its warrant canary in 2015, which indicated that it had received a national security order it wasn’t permitted to disclose.

In the current reporting period, Cloudflare responded to seven of the 19 subpoena requests it received, affecting 12 accounts and 309 domains, and to 44 of the 55 court orders, affecting 134 accounts and 19,265 domains. To know more about this news, check out Cloudflare’s official post.

Read next:
• workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice
• Cloudflare’s 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
• Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Bhagyashree R
27 Feb 2019
2 min read
Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework to simplify modern web development. Zero takes care of the usual project configuration for routing, bundling, and transpiling, making it easier to get started.

Zero applications consist of static files and code files. Static files are all non-code files like images, documents, media files, etc. Code files are parsed, bundled, and served by a particular builder for that file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features in Zero server

Autoconfiguration
Zero eliminates the need for any configuration files in your project folder. Developers just have to place their code, and it will be automatically compiled, bundled, and served.

File-system based routing
Routing is based on the file system. For example, if your code is placed in ‘./api/login.js’, it will be exposed at ‘http://domain.com/api/login’.

Auto-dependency resolution
Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages
Zero supports code written in multiple languages. So, with Zero, you can do things like exposing your TensorFlow model as a Python API and writing user login code in Node.js, all under a single project folder.

Better error handling
Zero isolates endpoints from each other by running each of them in its own process. This ensures that if one endpoint crashes, there is no effect on any other component of the application. For instance, if /api/login crashes, there will be no effect on the /chatroom page or the /api/chat API. Zero will also automatically restart crashed endpoints when the next user visits them.

To know more about the Zero server, check out its official website.
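The file-system routing convention described above (a code file's path, minus its extension, becomes its URL) can be sketched in a few lines. This is a hypothetical Python illustration of the mapping rule, not Zero's actual implementation:

```python
from pathlib import PurePosixPath


def route_for(path: str) -> str:
    """Map a project-relative code file path to the URL it is served at.

    Hypothetical helper illustrating Zero's convention: drop the
    leading './', strip the file extension, and prefix with '/'.
    """
    relative = PurePosixPath(path.lstrip("./"))
    return "/" + str(relative.with_suffix(""))


print(route_for("./api/login.js"))  # /api/login
```

In the real framework the extension also selects the builder (Node.js, React, Markdown, and so on) that compiles and serves the file.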
Read next:
• Introducing Mint, a new HTTP client for Elixir
• Symfony leaves PHP-FIG, the framework interoperability group
• Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

MariaDB announces MariaDB Enterprise Server and welcomes Amazon’s Mark Porter as an advisor to the board of directors

Amrata Joshi
27 Feb 2019
4 min read
Yesterday, at MariaDB OpenWorks 2019, the database's annual user and developer conference, the MariaDB Corporation team announced a new, fully open source MariaDB Enterprise Server. MariaDB Enterprise Server will support customers by delivering a database engineered for greater reliability and stability, and will be the default version for customers running on-premises or in the cloud.

Max Mether, VP of Server Product Management at MariaDB Corporation, said to us in an email, "We're seeing that our enterprise customers have very different needs from the average community user. These customers are working on a completely different scale with a strong focus on stability and security. In order to be able to cater to these requirements, it is clear that we need to focus on a different solution by creating another version of MariaDB Server specifically focused on enterprise production workloads."

Features of MariaDB Enterprise Server

New enterprise-centric features
MariaDB Enterprise Server comes with features that solve specific enterprise requirements. The new features in development include enhanced MariaDB Backup, an improved audit plugin, and full data-at-rest encryption of MariaDB Cluster.

Security, performance, and scalability for production
MariaDB Enterprise Server is configured for secure, high-performance production environments, unlike Community Server. It provides reliable and faster backups for large databases, and end-to-end encryption for all data at rest in MariaDB clusters.

Stability at scale
MariaDB Enterprise Server goes through rigorous quality assurance and testing, and is pre-configured to fulfill the requirements of secure production environments.

Release Integrity
MariaDB Enterprise Server is distributed securely with a clearly established chain of custody from MariaDB to customers, ensuring that binaries cannot be tampered with.
Pat Casey, SVP of Development and Operations at ServiceNow, said to us via email, "Thousands of the world's largest organizations depend on the Now Platform to create great experiences and unlock productivity. Better quality assurance and stability of critical enterprise features are extremely compelling. At our scale and in production with 100,000 MariaDB databases, reliability is what matters most."

MariaDB Enterprise Server 10.4 will be available with the next version of MariaDB Platform in spring 2019. The team will also release GA versions of MariaDB Enterprise Server 10.2 and 10.3 this spring, which will include high-end enterprise features such as enhanced Backup.

Amazon's Mark Porter joins MariaDB as an advisor to the board of directors

The company further announced that Mark Porter, who until recently ran the Amazon Relational Database Service (RDS), has joined MariaDB as an advisor to the board of directors. Porter is currently the CTO at Grab, a transportation and mobile payments company, and previously served as a vice president at Oracle Corporation.

Porter said, "MariaDB's DBaaS solutions give businesses many advantages. By focusing on customer needs and using their deep database expertise, they have built optimizations, flexibility and enterprise capabilities that no one else can deliver. With MariaDB's growing popularity as an option to escape Oracle, the opportunity is extremely strong to capture large market share and delight customers. I'm both humbled and thrilled to be part of the MariaDB team as relational databases continue to run the most important companies on the internet."

According to the team at MariaDB, Mark Porter will contribute his expertise in cloud, distributed systems, and database operations to help MariaDB rapidly grow its database-as-a-service (DBaaS) offering. He will also help guide the growth of SkySQL and the integration of new distributed technology into MariaDB Platform.
Michael Howard, CEO of MariaDB Corporation, said, "Mark's guidance will be a tremendous asset in building a next-generation MariaDB cloud. Mark has a proven record of operating and scaling database services while driving rapid growth. SkySQL is designed from the ground up to offer the best MariaDB service for multi-cloud, including private cloud environments. It offers enterprise product capabilities beyond the MariaDB community server, that is used widely in public clouds, to ensure quality of service, security and features otherwise only found in proprietary legacy databases."

Read next:
• TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool
• MariaDB acquires Clustrix to give database customers ‘freedom from Oracle lock-in’
• MariaDB 10.3.7 releases

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019

Bhagyashree R
27 Feb 2019
3 min read
At the ongoing Mobile World Congress event, Google yesterday announced the release of Flutter 1.2. This first feature update comes with support for Android App Bundles, improved Material and Cupertino widget sets, and more. Mobile World Congress is a four-day event starting on the 25th of this month; it is the mobile industry's largest annual gathering, where some of the world’s leading companies present their latest innovations and technology.

Following are some of the updates Flutter 1.2 includes:

Improved Material and Cupertino widget sets
The team has been putting effort into improving the Material and Cupertino widget sets. Developers now have more flexibility when using Material widgets. For Cupertino widgets, the team has added support for floating cursor text editing on iOS, which can be triggered either by force-pressing the keyboard or by long-pressing the spacebar.

Support for Android App Bundles
Flutter 1.2 supports Android App Bundles, a new upload format that includes all of an app’s compiled code and resources. This format helps reduce app size and enables new features like dynamic delivery for Android apps.

Support for the Dart 2.2 SDK
This release includes the Dart 2.2 SDK, which was also released yesterday. Dart 2.2 comes with significant performance improvements to make ahead-of-time compilation even faster, and a literal syntax for initializing sets. It also introduces the Dart Common Front End (CFE), which parses Dart code, performs type inference, and translates Dart into a lower-level intermediate language.

Other updates
Flutter 1.2 also supports a broader set of animation easing functions, inspired by Robert Penner’s work. The team is already preparing Flutter for desktop-class operating systems by adding new keyboard events and mouse hover support. Flutter’s plug-in team has also added changes to Flutter 1.2 to support the In App Purchases plugin.
Along with these updates, the team has made bug fixes for the video player, webview, and maps. Alongside Flutter 1.2, the team has also released a preview of Dart DevTools, a suite of performance tools for Dart and Flutter; some of the tools from this suite, including the web inspector and timeline view, are now available for installation. Read the full set of updates in Flutter 1.2 on the Google Developers blog.

Read next:
• Google to make Flutter 1.0 “cross-platform”; introduces Hummingbird to bring Flutter apps to the web
• Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
• Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!

Rancher Labs announces ‘K3s’: A lightweight distribution of Kubernetes to manage clusters in edge computing environments

Melisha Dsouza
27 Feb 2019
3 min read
Yesterday, Rancher Labs announced K3s, a lightweight Kubernetes distribution for running Kubernetes in resource-constrained environments. According to the official blog post, the project was launched to “address the increasing demand for small, easy to manage Kubernetes clusters running on x86, ARM64 and ARMv7 processors in edge computing environments”. Operating Kubernetes in edge computing environments is a complex task. K3s reduces the memory required to run Kubernetes, giving developers a distribution that requires less than 512 MB of RAM, ideally suited for edge use cases.

Features of K3s

#1 Simplicity of installation
K3s was designed to maximize the simplicity of installing and operating a large-scale Kubernetes cluster. It is a standards-compliant Kubernetes distribution for “mission-critical, production use cases”.

#2 Zero host dependencies
There is no requirement for an external installer: everything necessary to install Kubernetes on any device is included in a single 40MB binary. A single command provisions or upgrades a single-node K3s cluster. Nodes can be added to the cluster by running a single command on the new node, pointing it at the original server and passing through a secure token.

#3 Automatic certificate and encryption key generation
All of the certificates needed to establish TLS between the Kubernetes masters and nodes, as well as the encryption keys for service accounts, are automatically created when a cluster is launched.

#4 Reduced memory footprint
K3s reduces the memory required to run Kubernetes by removing old and non-essential code and any alpha functionality that is disabled by default. It also removes deprecated features, non-default admission controllers, in-tree cloud providers, and storage drivers. Users can add in any drivers they need.
#5 Conservation of RAM
Rancher’s K3s combines the processes that run on a Kubernetes management server into a single process. It also combines the kubelet, kube-proxy, and flannel agent processes that run on a worker node into a single process. Both of these techniques help conserve RAM.

#6 Reduced runtime footprint
Rancher Labs was able to cut down the runtime footprint significantly by using containerd instead of Docker as the container runtime engine. Functionality like libnetwork, swarm, Docker storage drivers, and other plugins has also been removed to achieve this aim.

#7 SQLite as an optional datastore
To provide a lightweight alternative to etcd, Rancher added SQLite as an optional datastore in K3s. This was done because SQLite has “a lower memory footprint, as well as dramatically simplified operations.”

Kelsey Hightower, a Staff Developer Advocate at Google Cloud Platform, commended Rancher Labs for removing features, instead of adding anything additional, in order to focus on running clusters in low-resource computing environments.
https://twitter.com/kelseyhightower/status/1100565940939436034

Kubernetes users have also welcomed the news with enthusiasm.
https://twitter.com/toszos/status/1100479805106147330
https://twitter.com/ashim_k_saha/status/1100624734121689089

K3s is released with support for the x86_64, ARM64, and ARMv7 architectures, to work across any edge infrastructure. Head over to the K3s page for a quick demo.

Read next:
• Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
• Introducing Platform9 Managed Kubernetes Service
• CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure

Dart 2.2 is out with support for set literals and more!

Savia Lobo
27 Feb 2019
2 min read
Michael Thomsen, the Project Manager for Dart, announced the stable release of Dart 2.2, an incremental update to v2 of the general-purpose programming language. This version offers improved performance of ahead-of-time (AOT) compiled native code and a new set literal language feature.

Improvements in Dart 2.2

Improved AOT performance
Developers have improved AOT performance by 11–16% on microbenchmarks (at the cost of a ~1% increase in code size). Prior to this optimization, compiled code had to make several lookups in an object pool to determine the destination address. The optimized AOT code is now able to call the destination directly using a PC-relative call.

Literals extended to support sets
Dart previously supported literal syntax only for Lists and Maps, which made initializing Sets awkward: a set had to be initialized via a list, as follows:

Set<String> currencies = Set.of(['EUR', 'USD', 'JPY']);

This code was inefficient due to the lack of literal support, and it also made it impossible to make currencies a compile-time constant. With Dart 2.2’s extension of literals to support sets, users can initialize a set and make it const using a convenient new syntax:

const Set<String> currencies = {'EUR', 'USD', 'JPY'};

Updated Dart language specification
Dart 2.2 includes the up-to-date ‘Dart language specification’, with the spec source moved to a new language repository. Developers have also added continuous integration to ensure a rolling draft specification is generated in PDF format as the specification for future versions of the Dart language evolves. Both the 2.2 version and the rolling Dart 2.x specification are available on the Dart specification page.

To know more about this announcement in detail, visit Michael Thomsen’s blog on Medium.

Read next:
• Google Dart 2.1 released with improved performance and usability
• Google’s Dart hits version 2.0 with major changes for developers
• Is Dart programming dead already?

Python 3.8 alpha 2 is now available for testing

Natasha Mathur
27 Feb 2019
2 min read
After releasing Python 3.8.0 alpha 1 earlier this month, the Python team released the second of the four planned alpha releases of Python 3.8, Python 3.8.0a2, last week. Alpha releases make it easier for developers to test the current state of new features, bug fixes, and the release process. The Python team states that many new features for Python 3.8 are still being planned and written.

Here is a list of some of the major new features and changes so far; these features are currently raw and not meant for production use:

• PEP 572, assignment expressions, has been accepted. Users can now assign to variables within an expression using the notation NAME := expr. A new exception, TargetScopeError, has also been added, along with one change to the evaluation order.
• typed_ast, a fork of the ast module (in C) used by mypy and pytype, has been merged back into CPython. typed_ast helps preserve certain comments.
• The multiprocessing module can now use shared memory segments to avoid pickling costs and the need for serialization between processes.

The next pre-release for Python 3.8 will be Python 3.8.0a3, scheduled for 25th March 2019. For more information, check out the official Python 3.8.0a2 announcement.

Read next:
• PyPy 7.0 released for Python 2.7, 3.5, and 3.6 alpha
• 5 blog posts that could make you a better Python programmer
• Python Software Foundation and JetBrains’ Python Developers Survey 2018
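As a quick illustration of PEP 572's assignment expressions (a hypothetical snippet, not taken from the release announcement), the walrus operator lets a comprehension bind a value in its filter clause and reuse it in its output expression:

```python
import re

# Hypothetical log data for the example.
log_lines = ["ok", "error: disk full", "ok", "error: timeout"]

# Without := you would either call re.match twice or hoist the match out
# of the comprehension; the assignment expression binds `m` once per line.
errors = [m.group(1)
          for line in log_lines
          if (m := re.match(r"error: (.+)", line))]
print(errors)  # ['disk full', 'timeout']
```

Note that this syntax requires Python 3.8 or later; on earlier versions it is a SyntaxError.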

FastMail expresses issues with Australia’s Assistance and Access bill

Savia Lobo
26 Feb 2019
2 min read
Australian email provider FastMail recently reported that it is losing customers following Australia’s Assistance and Access (A&A) bill, and that it has received requests to shift its email operations outside Australia. The bill faced a lot of opposition from the tech community when it was first passed at the end of last year.

FastMail CEO Bron Gondwana said, “The way in which [the laws] were introduced, debated, and ultimately passed ... creates a perception that Australia has changed - that we are no longer a country which respects the right to privacy.”

“We have seen existing customers leave, and potential customers go elsewhere, citing this bill as the reason for their choice. We are [also] regularly being asked by customers if we plan to move”, Gondwana said in an email.

Gondwana mentions that the problems caused by the bill revolve around perception and trust. His email states, “Our staff are curious and capable - if our system is behaving unexpectedly, they will attempt to understand why. This is a key part of bug discovery and keeping our systems secure.” The email further states, “Technology is a tinkerer’s arena. Tools exist to monitor network data, system calls and give computer users more observability than ever before. Secret data exfiltration code may be discovered by tinkerers or even anti-virus firms looking at unexpected behaviour.”

“Additionally, as code is refactored and products change over time, ensuring that a technical capability isn’t lost means that everybody working on the design and implementation needs to know that the technical capability exists and take it into account.”

To know more about this news in detail, read the complete email.
Three major Australian political parties hacked by ‘sophisticated state actor’ ahead of election
Australian intelligence and law enforcement agencies already issued notices under the ‘Assistance and Access’ Act despite opposition from industry groups
Australia’s ACCC publishes a preliminary report recommending Google, Facebook be regulated and monitored for discriminatory and anti-competitive behavior
Melisha Dsouza
26 Feb 2019
4 min read

SenseTime researchers train ImageNet/AlexNet in record 1.5 minutes using ‘GradientFlow’

Researchers from SenseTime Research and Nanyang Technological University have broken the record for training ImageNet/AlexNet, completing it in 1.5 minutes. The previous record of four minutes was held by a model developed by researchers at Tencent, a Chinese tech giant, and Hong Kong Baptist University; the new result is a significant 2.6-times speedup. The SenseTime and Nanyang team used a communication backend called “GradientFlow”, along with a set of network optimization techniques, to reduce deep neural network (DNN) model training time. The researchers also proposed a technique called “lazy allreduce” to combine multiple communication operations into a single one.

The researchers say that high communication overhead is one of the major performance bottlenecks in distributed DNN training across multiple GPUs. To combat this, one technique they used was increasing the batch size and running through the dataset quickly to process more samples per iteration. They also used a mixture of half-precision floating point (FP16) and single-precision floating point (FP32). Both techniques reduce the memory bandwidth pressure on the GPUs used to accelerate the machine-learning math in hardware, but cause some loss of accuracy.

How does GradientFlow work?

GradientFlow is a software toolkit that tackles the high communication cost of distributed DNN training. It is a communication backend that slashed training times on GPUs, as described in the paper the researchers published earlier this month. GradientFlow employs lazy allreduce to reduce network cost, improving network throughput by fusing multiple allreduce operations into a single one. It also employs “coarse-grained sparse communication” to reduce network traffic by sending only important gradient chunks. Every GPU stores batches of data from ImageNet and uses gradient descent to crunch through their pixels.
These gradient values are passed to server nodes in order to update the parameters in the overall model, using a type of parallel-processing algorithm known as allreduce. Trying to ingest these values, or tensors, from hundreds of GPUs at a time results in bottlenecks. GradientFlow increases efficiency by allowing the GPUs to communicate and exchange gradients locally before final values are sent to the model. “Instead of immediately transmitting generated gradients with allreduce, GradientFlow tries to fuse multiple sequential communication operations into a single one, avoiding sending a huge number of small tensors via network,” the researchers wrote.

Lazy allreduce

Lazy allreduce fuses multiple allreduce operations into a single operation with minimal GPU-memory-copy overhead. On completing a backward computation, a layer with learnable parameters generates one or more gradient tensors. In the baseline system, every tensor is allocated a separate GPU memory space. With lazy allreduce, all gradient tensors are instead placed in a memory pool. Lazy allreduce waits for the lower layers’ gradient tensors until the total size of the waiting tensors exceeds a given threshold θ, then performs a single allreduce operation on all of them. This avoids transmitting many small tensors over the network and improves network utilization.

Coarse-grained sparse communication (CSC)

To further reduce network traffic while maintaining high bandwidth utilization, the researchers propose coarse-grained sparse communication, which selects important gradient chunks for allreduce. The generated tensors are placed in a memory pool with continuous address space, based on the order in which they are generated. CSC partitions the gradient memory pool into equally sized chunks, each containing a number of gradients.
In this research, each chunk contains 32K gradients, and CSC partitions the gradient memory pools of AlexNet and ResNet-50 into 1903 and 797 chunks respectively. A percentage (e.g., 10%) of gradient chunks is selected as important chunks at the end of each iteration.

Design of coarse-grained sparse communication (CSC)

Conclusion

GradientFlow improves network performance for distributed DNN training. When training ImageNet/AlexNet on 512 GPUs, the researchers achieved a speedup ratio of up to 410.2 and completed 95-epoch training in 1.5 minutes, outperforming existing approaches. You can head over to the research paper for a more in-depth performance analysis of the proposed model.

Generating automated image captions using NLP and computer vision [Tutorial]
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Exploring Deep Learning Architectures [Tutorial]
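The lazy allreduce buffering described above can be sketched in a few lines (a hypothetical illustration, not the GradientFlow implementation; the threshold and tensor sizes are made up, and plain lists stand in for gradient tensors):

```python
# Hypothetical sketch of lazy allreduce (not the GradientFlow implementation):
# gradient tensors are buffered in a pool until their total size exceeds a
# threshold, then a single fused allreduce runs over the whole pool.
class LazyAllreduce:
    def __init__(self, threshold):
        self.threshold = threshold  # total element count that triggers a flush
        self.pool = []              # gradient tensors waiting to be reduced
        self.fused_calls = 0        # number of fused allreduce operations run

    def _allreduce(self, tensors):
        # Stand-in for the real collective operation across workers.
        self.fused_calls += 1
        return tensors

    def submit(self, grad):
        self.pool.append(grad)
        if sum(len(t) for t in self.pool) >= self.threshold:
            self.flush()

    def flush(self):
        # Reduce everything waiting in the pool in one fused call.
        if self.pool:
            self._allreduce(self.pool)
            self.pool = []

lazy = LazyAllreduce(threshold=8)
for _ in range(6):               # six small gradient tensors, 3 elements each
    lazy.submit([1.0, 1.0, 1.0])
lazy.flush()                     # reduce any remainder at the end of the iteration
print(lazy.fused_calls)          # 2 fused calls instead of 6 individual ones
```

Fusing six small transfers into two larger ones is exactly the trade the paper describes: fewer network round trips at the cost of briefly holding gradients in a memory pool.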
Amrata Joshi
26 Feb 2019
4 min read

The Verge spotlights the hidden cost of being a Facebook content moderator, a role Facebook outsources to 3rd parties to make the platform safe for users

Facebook has been in the news in recent years over data leaks and data privacy concerns. This time the company is on the radar because of the deplorable working conditions of its content moderators. Reviewers are so deeply affected by the content on the platform that they try to cope with PTSD by having sex and getting into drugs at work, reports The Verge in a compelling and horrifying insight into the lives of content moderators who work as contract workers at Facebook’s Arizona office. There was a similar report against Facebook last year: in September, an ex-employee filed a lawsuit against the company for not providing enough protection to the content moderators responsible for reviewing disturbing content on the platform. The platform hosts millions of videos and images of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder. Facebook relies on machine learning augmented by human content moderators to keep the platform safe for its users, meaning any image that violates the corporation’s terms of use is removed. In a statement to CNBC, a Facebook spokesperson said, "We value the hard work of content reviewers and have certain standards around their well-being and support. We work with only highly reputable global partners that have standards for their workforce, and we jointly enforce these standards with regular touch points to ensure the work environment is safe and supportive, and that the most appropriate resources are in place." The company has also published a blog post about its work with partners like Cognizant and its steps toward ensuring a healthy working environment for content reviewers. As reported by The Verge, the contracted moderators get one 30-minute lunch, two 15-minute breaks, and nine minutes of "wellness time" per day, but much of this time is spent waiting in queues for the bathroom, where three stalls per restroom serve hundreds of employees.
Facebook’s environment is such that workers cope with stress by telling dark jokes about committing suicide, then smoking weed during breaks to numb their emotions. According to the report, it’s a place where employees can be fired for making just a few errors a week, and where team leaders give content moderators a hard time by micromanaging their bathroom and prayer breaks. The moderators are paid $15 per hour for moderating content that ranges from offensive jokes to potential threats to videos depicting murder. A Cognizant spokesperson said, “The company has investigated the issues raised by The Verge and previously taken action where necessary and have steps in place to continue to address these concerns and any others raised by our employees. In addition to offering a comprehensive wellness program at Cognizant, including a safe and supportive work culture, 24x7 phone support and onsite counselor support to employees, Cognizant has partnered with leading HR and Wellness consultants to develop the next generation of wellness practices." Public reaction to this news is mostly negative, with users complaining about and condemning how the company is being run. https://twitter.com/waltmossberg/status/1100245569451237376 https://twitter.com/HawksNest/status/1100068105336774656 https://twitter.com/likalaruku/status/1100194103902523393 https://twitter.com/blakereid/status/1100094391241170944 People are angry that content moderators at Facebook endure such trauma in their role. Some believe compensation should be given to those suffering from PTSD as a result of working in high-stress roles at companies across industries. https://twitter.com/hypatiadotca/status/1100206605356851200 According to Kevin Collier, a cyber reporter, Facebook is underpaying and overworking content moderators in a desperate attempt to rein in abuse of the platform it created.
https://twitter.com/kevincollier/status/1100077425357176834 One user tweeted, “And I've concluded that FB is run by sociopaths.” YouTube has rolled out a feature in the US that displays notices below videos uploaded by news broadcasters that receive government or public money. Alex Stamos, former Chief Security Officer at Facebook, highlighted something similar with reference to Facebook: according to him, Facebook needs a state-sponsored label, and people should know the human cost of policing online humanity. https://twitter.com/alexstamos/status/1100157296527589376 To know more about this news, check out the report by The Verge.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
NIPS 2017 Special: Decoding the Human Brain for Artificial Intelligence to make smarter decisions
Facebook and Google pressurized to work against ‘Anti-Vaccine’ trends after Pinterest blocks anti-vaccination content from its pinboards
Amrata Joshi
26 Feb 2019
2 min read

Facebook open sources Magma, a software platform for deploying mobile networks

Yesterday, the team at Facebook open-sourced Magma, a software platform that helps operators deploy mobile networks easily. The platform comes with a software-centric distributed mobile packet core and tools for automating network management. Rather than replacing existing EPC deployments for large networks, Magma extends existing network topologies to the edge for rural deployments, private LTE (Long Term Evolution) networks, or wireless enterprise deployments. Magma enables new types of network archetypes that need continuous integration of software components and incremental upgrade cycles, and it allows authentication and integration with existing LTE EPCs (Evolved Packet Core). It also reduces the complexity of operating mobile networks by automating network operations such as software updates, element configuration, and device provisioning. Magma’s centralized cloud-based controller can run in a public or private cloud environment, and its automated provisioning infrastructure makes deploying LTE as easy as deploying a WiFi access point. The platform currently works with existing LTE base stations and can associate with traditional mobile cores to extend services to new areas. According to a few users, “Facebook internally considers the social network to be its major asset and not their technology.” Any investment in open technologies, or in internal technology that strengthens the network effect, is considered important. A few users discussed Facebook’s revenue strategies in the HackerNews thread. One comment on HackerNews reads, “I noticed that FB and mobile phone companies offering "free Facebook" are all in a borderline antagonistic relationship because messenger kills their revenue, and they want to bill FB an arm and a leg for that.” To know more about this news in detail, check out Facebook’s blog post.
Facebook open sources SPARTA to simplify abstract interpretation
Facebook open sources the ELF OpenGo project and retrains the model using reinforcement learning
Facebook’s AI Chief at ISSCC talks about the future of deep learning hardware
Amrata Joshi
26 Feb 2019
2 min read

Introducing Mint, a new HTTP client for Elixir

Yesterday, the Elixir team introduced Mint, a new low-level HTTP client that provides a small and functional core. It is connection-based: each connection is a single structure with an associated socket belonging to the process that started the connection.

Features of Mint

Connections

Mint's HTTP connections are managed directly in the process that starts the connection. No connection pool is used when a connection is opened, which lets users build a process structure that fits their application. Each connection has a single immutable data structure that the user manages. Mint uses "active mode" sockets, so data and events from the socket are sent as messages to the process that started the connection. The user then passes these messages to the stream/2 function, which returns the updated connection and a list of "responses". These responses are streamed back in partial response chunks.

Process-less

To many users, Mint may seem more cumbersome to use than other HTTP libraries, but by providing a low-level API without a predetermined process architecture, Mint gives flexibility to the user of the library. If a user writes GenStage pipelines, a pool of producers can fetch data from external sources via HTTP. With Mint, a GenStage producer can manage its own connection, reducing overhead and simplifying the code.

HTTP/1 and HTTP/2

The Mint.HTTP module provides a single interface for both HTTP/1 and HTTP/2 connections and performs version negotiation on HTTPS connections. Users can also pin an HTTP version by choosing the Mint.HTTP1 or Mint.HTTP2 modules directly.

Safe-by-default HTTPS

When connecting over HTTPS, Mint performs certificate verification by default. Mint also has an optional dependency on CAStore to provide certificates from Mozilla’s CA Certificate Store.
Users seem happy about this news, with one commenting on HackerNews, “I like that Mint keeps dependencies to a minimum.” Another user commented, “I'm liking the trend of designing runtime-behaviour agnostic libraries in Elixir.” To know more about this news, check out Mint’s official blog post.

Elixir 1.8 released with new features and infrastructure improvements
Elixir 1.7, the programming language for Erlang virtual machine, releases
Elixir Basics – Foundational Steps toward Functional Programming
Natasha Mathur
26 Feb 2019
3 min read

Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling

Google researchers announced a new TensorFlow framework, called Lingvo, last week. Lingvo offers a complete solution for collaborative deep learning research, with a particular focus on sequence-modeling tasks such as machine translation, speech recognition, and speech synthesis. The TensorFlow team also announced yesterday that it is open sourcing Lingvo: “To show our support of the research community and encourage reproducible research effort, we have open-sourced the framework and are starting to release the models used in our papers,” the team states. https://twitter.com/GoogleAI/status/1100177047857487872 Lingvo has been designed for collaboration: its code has a consistent interface and style that is easy to read and understand, and a flexible modular layering system that promotes code reuse. Since many people use the same codebase, it is easier to adopt others' ideas in your models and to adapt existing models to new datasets. Lingvo also makes it easier to reproduce and compare results in research, because all the hyperparameters of a model are configured in their own dedicated sub-directory, separate from the model logic. All models within Lingvo are built from the same common layers, which allows them to be compared with each other easily; they share the same overall structure from input processing to loss computation, and all layers have the same interface.

Overview of the LINGVO framework

Moreover, all hyperparameters in Lingvo are explicitly declared and their values are logged at runtime, which makes the models easier to read and understand. Lingvo can also train on production-scale datasets, with additional support for synchronous and asynchronous distributed training.
In Lingvo, inference-specific graphs are built from the same shared code used for training, and quantization support is built directly into the framework. Lingvo initially started out for natural language processing (NLP) tasks but has become flexible enough to apply to models for tasks such as image segmentation and point cloud classification. It also supports distillation, GANs, and multi-task models. Additionally, the framework does not compromise on speed, coming with an optimized input pipeline and fast distributed training. For more information, check out the official LINGVO research paper.

Google engineers work towards large scale federated learning
Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images
Google to acquire cloud data migration start-up ‘Alooma’
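The "hyperparameters declared explicitly, separate from model logic" idea can be sketched as follows (an illustrative pattern only, not Lingvo's actual API; all class and method names here are hypothetical):

```python
# Hypothetical sketch of the "hyperparameters separate from model logic"
# pattern described above -- NOT Lingvo's actual API.
class Params:
    def __init__(self):
        self._values = {}

    def define(self, name, default):
        # Every hyperparameter must be declared explicitly with a default.
        self._values[name] = default

    def set(self, name, value):
        # Overriding an undeclared hyperparameter is an error, which makes
        # configurations self-documenting and typo-resistant.
        if name not in self._values:
            raise KeyError(f"undeclared hyperparameter: {name}")
        self._values[name] = value

    def get(self, name):
        return self._values[name]

class Layer:
    @classmethod
    def default_params(cls):
        p = Params()
        p.define("hidden_dim", 128)
        p.define("dropout", 0.1)
        return p

    def __init__(self, params):
        # Model logic only reads from the params object; configuration
        # lives entirely outside this class.
        self.params = params

# A model configuration overrides defaults without touching model code.
p = Layer.default_params()
p.set("hidden_dim", 256)
layer = Layer(p)
print(layer.params.get("hidden_dim"))  # 256
```

Keeping every knob in one declared, logged structure is what makes experiments comparable and reproducible, which is the collaboration benefit the article describes.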