
Tech News


Fauna announces Jepsen results for FaunaDB 2.5.4 and 2.6.0

Natasha Mathur
06 Mar 2019
3 min read
Fauna yesterday released the official results of tests run by Jepsen, an independent testing organization, against FaunaDB, its distributed OLTP (online transaction processing) database, versions 2.5.4 and 2.6.0. FaunaDB passed the tests with flying colors and was found to be architecturally sound, correctly implemented, and ready to take on enterprise workloads in the cloud.

The Fauna team had worked extensively on the FaunaDB tests with Kyle Kingsbury, a distributed systems safety researcher at Jepsen, for three months. “Our mandate for him was not merely to test the basic properties of the system, but rather to poke into the dark corners and exhaustively validate...FaunaDB”, states the Fauna team.

The Jepsen team notes that Fauna had written its own Jepsen tests, which were refined and expanded throughout the collaboration between Jepsen and Fauna. Jepsen evaluated FaunaDB 2.5.4 and 2.5.5, along with several development builds up to 2.6.0-rc10, using three replicas and 5-10 nodes striped evenly across replicas. Additionally, the log node topologies in 2.5.4 and 2.5.5 were explicitly partitioned, with a copy in each replica. The Jepsen team waited for data movement to complete, and for all indices to signal readiness, before starting the tests.

Fauna states that FaunaDB's core operations on single instances in 2.5.5 appeared quite “solid”. During the tests, records could reliably be created, read, updated, and deleted transactionally at snapshot, serializable, and strict serializable isolation, and acknowledged instance updates were never lost. FaunaDB also passed additional tests covering features such as indexes and temporality.

By the release of FaunaDB 2.6.0-rc10, Fauna had addressed all the issues identified by Jepsen, although some minor work around schema changes remains. Beyond that, FaunaDB aims to provide the “highest possible level of correctness”: the team is currently planning to promote snapshot-isolated and serializable transactions to strict serializability, the gold standard for concurrent systems.

Another notable fact about FaunaDB is that it is self-operating: it is designed to support online addition and removal of nodes with appropriate backpressure. It is also architecturally sound, combining Calvin's cross-shard transactional protocol with Raft-based consensus for individual shards.

Finally, the Jepsen team states that the bugs found in FaunaDB are implementation problems, and Fauna will be working on fixing them as soon as possible. “FaunaDB’s approach is fundamentally sound...Calvin-based systems like FaunaDB could play an important future role in the distributed database landscape”, states the Jepsen team. For more information, check out the official Jepsen results post.
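As a rough illustration of the kind of property Jepsen checks, such as "acknowledged updates are never lost", here is a minimal toy sketch in Python. This is my own illustration, not Jepsen's actual tooling (which is written in Clojure): clients concurrently add elements to a set, and a final read must contain every acknowledged add.

```python
import random
import threading

class FlakyStore:
    """Toy set store; drop_rate simulates a buggy database that
    acknowledges a write but fails to persist it."""
    def __init__(self, drop_rate=0.0):
        self._lock = threading.Lock()
        self._data = set()
        self._drop_rate = drop_rate

    def add(self, value):
        with self._lock:
            if random.random() >= self._drop_rate:
                self._data.add(value)
            return True  # acknowledge the write either way (the bug)

    def read_all(self):
        with self._lock:
            return set(self._data)

def set_test(store, n_clients=8, ops_per_client=100):
    """Jepsen-style 'set' test: every acknowledged add must appear
    in a final read. Returns the set of lost updates."""
    acknowledged = set()
    ack_lock = threading.Lock()

    def client(cid):
        for i in range(ops_per_client):
            value = (cid, i)
            if store.add(value):
                with ack_lock:
                    acknowledged.add(value)

    threads = [threading.Thread(target=client, args=(c,)) for c in range(n_clients)]
    for t in threads: t.start()
    for t in threads: t.join()
    return acknowledged - store.read_all()

lost = set_test(FlakyStore(drop_rate=0.01))
print("lost acknowledged writes:", len(lost))  # > 0 exposes the bug
```

Real Jepsen tests additionally inject network partitions, clock skew, and node crashes while the operations run, which is where the "dark corners" mentioned above show up.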
Read next:
• MariaDB CEO says big proprietary cloud vendors “strip-mining open-source technologies and companies”
• Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
• Uber releases AresDB, a new GPU-powered real-time Analytics Engine


OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents

Amrata Joshi
06 Mar 2019
3 min read
On Monday, the team at OpenAI launched Neural MMO, a multiagent game environment for reinforcement learning agents inspired by Massively Multiplayer Online (MMO) games. It will be used for training AI in complex, open-world environments, and it supports a large number of agents within a persistent and open-ended task.

The need for Neural MMO
Over the past few years, the suitability of MMOs for modeling real-life events has been explored, but two main challenges remain for multiagent reinforcement learning. First, there is a need to create open-ended tasks with a high complexity ceiling, as current environments are complex but narrow. The other challenge, the OpenAI team notes, is the need for more benchmark environments that can quantify learning progress in the presence of large population scales.

Different criteria to overcome the challenges
The team suggests certain criteria an environment needs to meet to overcome these challenges:
• Persistence: Agents can learn concurrently in the presence of other learning agents, without environment resets. Strategies should adapt to rapid changes in the behavior of other agents and consider long time horizons.
• Scale: Neural MMO supports a large and variable number of entities. The OpenAI team's experiments consider up to 100M lifetimes of 128 concurrent agents in each of 100 concurrent servers.
• Efficiency: The computational barrier to entry is low; effective policies can be trained on a single desktop CPU.
• Expansion: Neural MMO is designed so that new content can be added. The core features include a food and water foraging system, procedural generation of tile-based terrain, and a strategic combat system, with opportunities for open-source-driven expansion in the future.

The environment
Players can join any available server, each containing an automatically generated tile-based game map of configurable size. Some tiles are traversable, such as food-bearing forest tiles and grass tiles, while others, such as water and solid stone, are not. To sustain their health, players must obtain food and water and avoid combat damage from other agents. The platform comes with a procedural environment generator and visualization tools for map tile visitation distribution, value functions, and agent-agent dependencies of learned policies.

The team trained a fully connected architecture using vanilla policy gradients, with a value function baseline and reward discounting as the only enhancements. Variable-length observations, such as the list of surrounding players, are converted into a fixed-length vector by computing the maximum across all players (a sketch of this pooling step follows below).

Neural MMO resolves a couple of limitations of previous game-based environments, but many are still left unsolved.

A few users are excited about this news. One user commented on HackerNews, “What I find interesting about this is that the agents naturally become pacifists.” Others think the company should come up with novel ideas rather than replicate known ones. Another user commented on HackerNews, “So far, they are replicating known results from evolutionary game theory (pacifism & niches) to economics (distance & diversification). I wonder when and if they will surprise some novel results.”

To know more about this news, check out OpenAI's official blog post.
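The observation-pooling trick mentioned above is easy to illustrate. Here is a minimal sketch in Python/NumPy; this is my own toy illustration, not OpenAI's code, and the feature layout is made up:

```python
import numpy as np

N_FEATURES = 4  # e.g. relative x, relative y, health, level (hypothetical)

def pool_observations(neighbor_features):
    """Collapse a variable-length list of per-player feature vectors
    into one fixed-length vector via an elementwise maximum.

    neighbor_features: (n_players, N_FEATURES) array; n_players varies
    per timestep, N_FEATURES is fixed.
    """
    if len(neighbor_features) == 0:
        # No visible players: fall back to a zero vector of the right width.
        return np.zeros(N_FEATURES)
    return np.max(neighbor_features, axis=0)

# Three visible players on one step, one on the next; the pooled
# observation has the same shape either way, so it can feed a
# fixed-input fully connected policy network.
step1 = np.array([[0.1, -0.3, 0.9, 0.2],
                  [0.5,  0.2, 0.4, 0.7],
                  [-0.2, 0.8, 0.6, 0.1]])
step2 = np.array([[0.3, 0.3, 0.5, 0.5]])

print(pool_observations(step1))  # [0.5 0.8 0.9 0.7]
print(pool_observations(step2))  # [0.3 0.3 0.5 0.5]
```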
Read next:
• AI Village shares its perspective on OpenAI’s decision to release a limited version of GPT-2
• OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
• OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words


It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Melisha Dsouza
06 Mar 2019
2 min read
Yesterday, Holger Levsen, a member of the team maintaining reproducible.debian.net, started a discussion on reproducible builds, stating that “Debian Buster will only be 54% reproducible (while we could be at >90%)”.

He started by noting that, according to tests, 26476 (92.8%) of Debian Buster's 28523 source packages can be built reproducibly on buster/amd64. Those 28523 source packages build 57448 binary packages.

Next, looking at the binary packages Debian actually distributes, he says Vagrant came up with the idea of checking buildinfo.debian.net for .deb files for which two or more .buildinfo files exist (a sketch of this check appears below). Turning this into a Jenkins job, he ran the check for all 57448 binary packages in amd64/buster/main (including downloading all those .deb files from ftp.d.o) and obtained the following results:

• reproducible packages in buster/amd64: 30885 (53.76%)
• unreproducible packages in buster/amd64: 26543 (46.20%)
• reproducible binNMUs in buster/amd64: 0 (0%)
• unreproducible binNMUs in buster/amd64: 7423 (12.92%)

He suggests that binNMUs are unreproducible by design, and proposes that binNMUs be replaced by easy “no-change-except-debian/changelog” uploads. That would mean a 12% increase in reproducibility on top of the 54%.

He also discovered that 6804 source packages need a rebuild, because they were last built before December 2016 with an old dpkg that did not produce .buildinfo files; 6804 of 28523 is 23.9%. Summing everything up, 54% + 12% + 24% gives the 90% reproducibility figure.

Refer to the entire discussion thread for more details on this news.
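The classification logic behind that check is simple to picture. A minimal sketch in Python, assuming we already have, for each binary package, the output checksums recorded by its known .buildinfo attestations (buildinfo.debian.net's real interface may differ):

```python
def classify(buildinfo_hashes):
    """buildinfo_hashes: dict mapping a binary package name to the list of
    SHA256 checksums its known .buildinfo files record for the .deb.

    A package counts as reproducible when at least two independent builds
    produced an identical .deb; with fewer than two attestations we
    cannot tell either way."""
    reproducible, unreproducible, unknown = [], [], []
    for pkg, hashes in buildinfo_hashes.items():
        if len(hashes) < 2:
            unknown.append(pkg)
        elif len(set(hashes)) == 1:
            reproducible.append(pkg)
        else:
            unreproducible.append(pkg)
    return reproducible, unreproducible, unknown

# Toy data: two matching attestations, a mismatch, and a single build.
sample = {
    "hello": ["abc123", "abc123"],
    "world": ["def456", "0f9e88"],
    "lonely": ["aa11bb"],
}
good, bad, unknown = classify(sample)
print(good, bad, unknown)  # ['hello'] ['world'] ['lonely']
```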
Read next:
• Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
• User discovers bug in debian stable kernel upgrade; armmp package affected
• Debian 9.7 released with fix for RCE flaw


Preact X alpha is out now with Fragments, Hooks, and more!

Bhagyashree R
06 Mar 2019
3 min read
Yesterday, the team behind Preact, a fast and smaller alternative to React, announced that Preact X is now in alpha. Preact X is the next major release and includes some of the most in-demand features from React, such as Fragments, Hooks, componentDidCatch, and createContext.
https://twitter.com/preactjs/status/1102726702860517376

Following are some of the updates Preact X alpha comes with:

Support for Fragments
Preact X alpha supports Fragments, the major feature of this release. Fragments let you group a list of children without adding extra nodes to the DOM. Developers can now return an array of children from a component's render method without having to wrap them in a DOM element.

The componentDidCatch lifecycle method
This release comes with the componentDidCatch lifecycle method for better error handling. To turn a class component into an error boundary, developers just need to define a componentDidCatch(error, info) method. This method was introduced in React 16 to prevent a single JavaScript error in the UI from breaking the whole app, and it works through the concept of an error boundary: a component responsible for catching JavaScript errors anywhere in its child component tree, logging them, and displaying a fallback UI instead of the component tree that crashed.

Hooks
Preact X alpha supports Hooks, functions that let you “hook into” state and other lifecycle features from function components. You can import Hooks in Preact from preact/hooks.

The createContext API
The createContext API, as the name suggests, creates a Context object. When a component that subscribes to this Context object is rendered, it reads the current context value from the closest matching provider above it in the tree. The Preact team calls it a successor to getChildContext: getChildContext is fine when you are certain a value will never change, while createContext is a true pub/sub solution that delivers updates deep down the tree.

Devtools adapter
To support the recent updates to the react-devtools extension, the team has rewritten Preact's devtools adapter from scratch so that it now hooks directly into the renderer. This also makes feature development much more straightforward for the team.

Along with these updates, this version also comes with a few breaking changes. The most noticeable one is that props.children is no longer guaranteed to be an array. This change was made to support rendering components that return an array of children without wrapping them in a root node.

Check out Preact's GitHub repo to read the entire list of updates in Preact X alpha.

Read next:
• React Native 0.59 RC0 is now out with React Hooks, and more
• Getting started with React Hooks by building a counter with useState and useEffect
• React 16.8 releases with the stable implementation of Hooks


Researchers discover Spectre-like new speculative flaw, “SPOILER”, in Intel CPUs

Melisha Dsouza
06 Mar 2019
5 min read
Intel CPUs are reportedly vulnerable to a new attack, described in the paper “SPOILER: Speculative Load Hazards Boost Rowhammer and Cache Attacks”. The vulnerability takes advantage of speculative execution in Intel CPUs and was discovered by computer scientists at Worcester Polytechnic Institute in Massachusetts and the University of Lübeck in Germany.

According to the research, the flaw is a “novel microarchitectural leakage which reveals critical information about physical page mappings to user space processes.” The flaw can be exploited by malicious JavaScript within a web browser tab, malware running on the system, or illicitly logged-in users, to steal sensitive information and other data from running applications. The paper further states that the leakage can be exploited by only a limited set of instructions, is visible in all Intel generations starting from the 1st-generation Intel Core processors, and is independent of the OS. It also works from within virtual machines and sandboxed environments.

The flaw is very similar to the Spectre attacks revealed early last year. Like Spectre, SPOILER takes advantage of speculative execution, and it reveals memory layout data, making other attacks such as Rowhammer, cache attacks, and JavaScript-enabled attacks easier to execute.

"The root cause of the issue is that the memory operations execute speculatively and the processor resolves the dependency when the full physical address bits are available," says Ahmad Moghimi, one of the researchers who contributed to the paper. "Physical address bits are security sensitive information and if they are available to user space, it elevates the user to perform other micro architectural attacks."

Intel was informed of the findings in early December last year but did not immediately respond to the researchers. An Intel spokesperson has now provided TechRadar with the following statement on the SPOILER vulnerability:

“Intel received notice of this research, and we expect that software can be protected against such issues by employing side channel safe software development practices. This includes avoiding control flows that are dependent on the data of interest. We likewise expect that DRAM modules mitigated against Rowhammer style attacks remain protected. Protecting our customers and their data continues to be a critical priority for us and we appreciate the efforts of the security community for their ongoing research.”

Impact of SPOILER: boosting Rowhammer attacks in a native user-level environment
The research paper defines the Rowhammer attack as “an attack causing cells of a victim row to leak faster by activating the neighboring rows repeatedly. If the refresh cycle fails to refresh the victim fast enough, that leads to bit flips. Once bit flips are found, they can be exploited by placing any security-critical data structure or code page at that particular location and triggering the bit flip again.”

To perform a Rowhammer attack, the adversary needs access to DRAM rows adjacent to a victim row and must ensure that multiple virtual pages co-locate on the same bank. Double-sided Rowhammer attacks cause bit flips faster, owing to the extra charge on the nearby cells of the victim row, but they require access to contiguous memory pages. SPOILER can boost both single- and double-sided Rowhammer attacks with the additional 8 bits of physical address information it leaks, which enables the detection of contiguous memory.

The researchers used SPOILER to detect aliased virtual memory addresses whose physical addresses match in their 20 least significant bits; these are among the bits the memory controller uses to map physical addresses to DRAM banks. With the majority of those bits known through SPOILER, “an attacker can directly hammer such aliased addresses to perform a more efficient single-sided Rowhammer attack with a significantly increased probability of hitting the same bank.” The researchers reverse engineered the DRAM mappings for different hardware configurations using the DRAMA tool, leaving only a few bits of physical address entropy beyond the 20 bits unknown. To verify whether aliased virtual addresses co-locate on the same bank, they used the row-conflict side channel. They observed that whenever the memory controller uses 20 or fewer physical address bits to map data to physical memory, they always hit the same bank.

To summarize the findings, SPOILER drastically improves the efficiency of finding addresses that map to the same bank, without needing administrative privileges or reverse engineering of the memory controller mapping, and the approach also works in sandboxed environments such as JavaScript.

You can go through the research paper for more insights on the SPOILER flaw.
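The 20-bit aliasing condition itself is simple bit arithmetic. A minimal sketch in Python, my own illustration of the address math only (actually finding aliases from user space requires the timing side channel described in the paper):

```python
PAGE_ALIAS_BITS = 20  # SPOILER leaks physical-address bits up to bit 19

def lsbs_match(phys_a, phys_b, bits=PAGE_ALIAS_BITS):
    """True when two physical addresses agree in their low `bits` bits,
    i.e. they are 1 MB-aliased in the sense used by the paper."""
    mask = (1 << bits) - 1
    return (phys_a ^ phys_b) & mask == 0

# Two addresses exactly 1 MB (2**20 bytes) apart alias; shifting one of
# them by a 4 KB page breaks the aliasing.
a = 0x12345000
print(lsbs_match(a, a + (1 << 20)))         # True
print(lsbs_match(a, a + (1 << 20) + 4096))  # False
```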
Read next:
• Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
• Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
• Researchers prove that Intel SGX and TSX can hide malware from antivirus software


Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip

Sugandha Lahoti
06 Mar 2019
2 min read
Google teased new hardware products built around its Edge TPU at the Google Next conference last summer. Yesterday, it officially launched the Coral Dev Board, a Raspberry Pi look-alike designed to run machine learning algorithms “at the edge”, and a USB Accelerator.

Coral Dev Board
The Coral Dev Board has a 40-pin header and runs Linux on an i.MX8M with an Edge TPU chip for accelerating TensorFlow Lite. The board also features 8GB eMMC storage, 1GB LPDDR4 RAM, Wi-Fi, and Bluetooth 4.1. It has USB 2.0/3.0 ports, a 3.5mm audio jack, a DSI display interface, a MIPI-CSI camera interface, an HDMI 2.0a connector, and two digital PDM microphones.

[Image source: Google]

The Coral Dev Board can be used as a single-board computer when you need accelerated ML processing in a small form factor. It can also be used as an evaluation kit for the SOM and for prototyping IoT devices and other embedded systems. The board is available for $149.00, and Google has also announced a $25 MIPI-CSI 5-megapixel camera for it.

USB Accelerator
The USB Accelerator is a plug-in USB 3.0 stick that adds machine learning capabilities to existing Linux machines. This 65 x 30 mm accelerator connects to Linux-based systems via a USB Type-C port, and can also work with a Raspberry Pi board at USB 2.0 speeds. The accelerator is built around a 32-bit, 32MHz Cortex-M0+ chip with 16KB of flash and 2KB of RAM.

[Image source: Google]

The USB Accelerator is available for $75.

Developers can build machine learning models for both devices in TensorFlow Lite (a rough sketch of what inference can look like follows below). More information is available on Google's Coral Beta website.

Coming soon is a PCI-E Accelerator for integrating the Edge TPU into legacy systems using a PCI-E interface, as well as a fully integrated System-on-Module with CPU, GPU, Edge TPU, Wifi, Bluetooth, and Secure Element in a 40mm x 40mm pluggable module.
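As a sketch of what running a model on these devices can look like, here is the generic TensorFlow Lite pattern with an Edge TPU delegate. This is an assumption based on the general tflite_runtime API rather than Coral's official sample code; the model path, delegate library name, and input shape are placeholders, so consult Coral's documentation for the exact interface:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Hypothetical paths: a model compiled for the Edge TPU, and the
# Edge TPU runtime library assumed to be installed on the system.
MODEL_PATH = "mobilenet_v2_edgetpu.tflite"
interpreter = Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input frame matching the model's expected shape, e.g. 224x224 RGB.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # the delegate offloads supported ops to the Edge TPU
scores = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(scores)))
```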
Read next:
• Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha)
• Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
• Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference

Savia Lobo
06 Mar 2019
2 min read
The National Security Agency released the Ghidra toolkit today at the RSA security conference in San Francisco. Ghidra is a free software reverse engineering (SRE) framework developed by NSA's Research Directorate for NSA's cybersecurity mission. It helps in analyzing malicious code and malware such as viruses, and can give cybersecurity professionals a better understanding of potential vulnerabilities in their networks and systems.

“The NSA's general plan was to release Ghidra so security researchers can get used to working with it before applying for positions at the NSA or other government intelligence agencies with which the NSA has previously shared Ghidra in private”, ZDNet reports.

News of Ghidra's anticipated release broke at the start of 2019, and users have been looking forward to it since, because Ghidra is a free alternative to IDA Pro, a similar reverse engineering tool that is only available under an expensive commercial license, priced in the range of thousands of US dollars per year.

NSA cybersecurity advisor Rob Joyce said that Ghidra is capable of analyzing binaries written for a wide variety of architectures and can easily be extended with more if ever needed.
https://twitter.com/RGB_Lights/status/1103019876203978752

Key features of Ghidra
• A suite of software analysis tools for analyzing compiled code on a variety of platforms, including Windows, Mac OS, and Linux.
• Capabilities such as disassembly, assembly, decompilation, graphing, and scripting, along with hundreds of other features.
• Support for a wide variety of processor instruction sets and executable formats, in both user-interactive and automated modes.
• Users may develop their own Ghidra plug-in components and/or scripts using the exposed API (a small example follows below).

To know more about the Ghidra cybersecurity tool, visit its documentation on the GitHub repo or its official website.
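To give a flavor of that scripting API: Ghidra scripts can be written in Java or in Python (via Jython) and inherit a "flat API" plus a handle to the program being analyzed. A minimal sketch, assuming the standard flat-API calls behave as documented, that lists every function the analyzer found:

```python
# ListFunctions.py - a minimal Ghidra script (run from the Script Manager).
# getFirstFunction/getFunctionAfter come from Ghidra's flat API, which
# every script inherits; no imports are needed for them.

func = getFirstFunction()
count = 0
while func is not None:
    # The entry point is the address where the function's code begins.
    print("%s @ %s" % (func.getName(), func.getEntryPoint()))
    count += 1
    func = getFunctionAfter(func)

print("total functions: %d" % count)
```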
Read next:
• Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity [Interview]
• Hackers are our society’s immune system – Keren Elazari on the future of Cybersecurity
• 5 lessons public wi-fi can teach us about cybersecurity


Security researcher exposes malicious GitHub repositories that host more than 300 backdoored apps

Savia Lobo
05 Mar 2019
2 min read
An unnamed security researcher at dfir.it recently revealed GitHub accounts hosting more than “300 backdoored Windows, Mac, and Linux applications and software libraries”. In a blog post titled “The Supreme Backdoor Factory”, the researcher explains how he stumbled upon the malicious code and various other findings within the GitHub repositories.

The investigation started when the researcher spotted a malicious version of the JXplorer LDAP browser. As he states in the post, “I did not expect an installer for a quite popular LDAP browser to create a scheduled task in order to download and execute PowerShell code from a subdomain hosted by free dynamic DNS provider.” According to ZDNet, “All the GitHub accounts that were hosting these files --backdoored versions of legitimate apps-- have now been taken down.”

The malicious files included code that could establish boot persistence on infected systems and download further malicious code. The researcher also mentions that the malicious apps downloaded a Java-based malware named Supreme NYC Blaze Bot (supremebot.exe). “According to researchers, this appeared to be a 'sneaker bot,' a piece of malware that would add infected systems to a botnet that would later participate in online auctions for limited edition sneakers”, ZDNet reports.

The researcher revealed that some of the malicious entries were made via an account under the name Andrew Dunkins that included a set of nine repositories, each hosting Linux cross-compilation tools. Each repository was watched or starred by several already-known suspicious accounts. The report mentions that accounts that did not host backdoored apps were used to 'star' or 'watch' the malicious repositories and help boost their popularity in GitHub's search results.

To learn about these backdoored apps in detail, read the complete report, “The Supreme Backdoor Factory”.

Read next:
• Brave Privacy Browser has a ‘backdoor’ to remotely inject headers in HTTP requests: HackerNews
• Undetected Linux Backdoor ‘SpeakUp’ infects Linux, MacOS with cryptominers
• Cisco and Huawei Routers hacked via backdoor attacks and botnets


GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32

Natasha Mathur
05 Mar 2019
2 min read
The GNOME team released the beta of GNOME 3.32, the free and open source desktop environment for Linux, last month, with the final release set for 13th March 2019. Now, the team has also added the much-awaited support for fractional scaling to GNOME 3.32, reports Phoronix.

The GNOME 3.32 beta brought major improvements, bug fixes, and other changes. Previously, GNOME allowed users to scale windows only by integral factors (typically 2). This was very limiting, since many displays fall between the DPI ranges where a scale factor of 2, or no scaling at all, looks right. To improve this, GNOME now lets users scale by fractional values, e.g. 3/2 or 2/1.3333, giving them much finer control over UI scaling than the previous integer-based scaling of 2, 3, and so on.

The newly added fractional scaling support in GNOME 3.32 should enhance the user experience on modern HiDPI displays. The corresponding GNOME Shell and Mutter changes have been merged ahead of GNOME 3.32.0.

Read next:
• GNOME version 3.32 says goodbye to application menus
• Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
• GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more


USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps

Amrata Joshi
05 Mar 2019
2 min read
Just a week after revealing the details of USB 3.2, the USB Implementers Forum (USB-IF) announced USB 4, the next version of the ubiquitous connector, yesterday at an event in Taipei. According to the USB-IF, USB 4 doubles the maximum transfer speed from 20Gbps to 40Gbps.

The USB-IF is using Thunderbolt 3 as the foundation for USB 4: Intel has provided manufacturers with Thunderbolt 3 under open licensing, and USB 4 integrates this technology, effectively becoming the “new” Thunderbolt 3. USB 4 will be ready for powerful PCIe and DisplayPort devices; a single cable connected to a PC can drive an external graphics card enclosure, two 4K monitors, and other Thunderbolt 3 accessories. It will also be backward compatible with USB 2.0 and 3.2.

USB 4 will support charging at up to 100W of power, transfer speeds of 40Gbps, and video bandwidth for two 4K displays or one 5K display. It is likely to become widely available and cheaper in the future.

In a statement to Techspot, Brad Saunders, USB Promoter Group Chairman, said, “The primary goal of USB is to deliver the best user experience combining data, display and power delivery over a user-friendly and robust cable and connector solution. The USB4 solution specifically tailors bus operation to further enhance this experience by optimizing the blend of data and display over a single connection and enabling the further doubling of performance.”

The USB-IF plans to produce a list of features for USB 4 to help standardize capabilities such as display out and audio out, though the exact features are yet to be determined.

Some users are not confident about USB 4. One user commented on HackerNews, “Maybe it will charge the device. Maybe it won't. Maybe it'll do USB hosting, maybe it won't.” A few others think the group's major focus is on manufacturers, with user experience coming second. Another comment reads, “USB-IF is for manufacturers, most of whom want to do whatever the cheapest quickest thing is. The user experience absolutely comes second to manufacturing cost and marking convenience.”

To know more about this, check out the post by Engadget.

Read next:
• USB-IF launches ‘Type-C Authentication Program’ for better security
• Apple USB Restricted Mode: Here’s Everything You Need to Know
• Working on Jetson TX1 Development Board [Tutorial]

Alphabet’s Chronicle launches ‘Backstory’ for business network security management

Melisha Dsouza
05 Mar 2019
3 min read
Alphabet's Chronicle, launched last year, announced its first product, Backstory, at the ongoing RSA 2019. Backstory is a security data platform that ingests huge amounts of a business' network data (including information from domain name servers, employee laptops, and phones) into a Chronicle-installed collection of servers on the customer's premises, where it is quickly indexed and organized.

According to Forbes, customers can then run searches over the data, like “Are any of my computers sending data to Russian government servers?” Cybersecurity investigators can start asking questions such as: what kinds of information are the Russians taking, and when and how? Forbes likens this way of working with security telemetry to Google Photos. Backstory gives security analysts the ability to quickly understand their real vulnerabilities.

According to the Backstory blog, “Backstory is a global security telemetry platform for investigation and threat hunting within your enterprise network. It is a specialized, cloud-native security analytics system, built on the core infrastructure that powers Google itself. Making security analytics instant, easy, and cost-effective.” The company states that the service requires zero customer hardware, maintenance, tuning, or ongoing management, and can support security analytics against the largest customer networks with ease.

Features of Backstory
• Real-time and retroactive instant indicator matching across all logs; for example, if a domain flips from good to bad, Backstory shows all devices that have ever communicated with that domain (a toy illustration of this follows below).
• Prebuilt search results and smart filters designed for security-specific use cases.
• Data displayed in real time to support security investigations and hunts.
• Intelligent analytics that derive insights to support security investigations.
• The ability to work automatically with petabytes of data.

Chronicle's CEO Stephen Gillett told CNBC that the pricing model will not be based on data volume; instead, licenses will be based on the size of the company. Backstory also intends to partner with other cybersecurity companies rather than compete with them.

Considering that Alphabet already has a history of collecting sensitive customer information, it will be interesting to see how Backstory handles such data.

To know more about this news in detail, read Backstory's official blog.
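Chronicle has not published Backstory's query interface in detail, so as a toy illustration only: retroactive indicator matching amounts to joining a newly flagged indicator against historical logs. A minimal sketch in Python with made-up log records:

```python
from datetime import datetime

# Hypothetical DNS log records: (timestamp, device, domain queried).
dns_logs = [
    (datetime(2019, 1, 12, 9, 30), "laptop-017", "updates.example.com"),
    (datetime(2019, 2, 3, 22, 14), "laptop-042", "cheap-sneakers.example.net"),
    (datetime(2019, 2, 28, 4, 2), "server-db1", "cheap-sneakers.example.net"),
]

def devices_that_contacted(logs, bad_domain):
    """Retroactive indicator match: every device that EVER talked to the
    domain, not just traffic observed after the domain was flagged."""
    return sorted({device for _, device, domain in logs if domain == bad_domain})

# The domain flips from good to bad today; look backwards through history.
print(devices_that_contacted(dns_logs, "cheap-sneakers.example.net"))
# ['laptop-042', 'server-db1']
```

The pitch, per the coverage above, is less the join itself than doing it instantly over years of telemetry at petabyte scale.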
Read next:
• Liz Fong Jones, prominent ex-Googler shares her experience at Google and ‘grave concerns’ for the company
• Google finally ends Forced arbitration for all its employees
• Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment


Google open-sources GPipe, a pipeline parallelism library to scale up Deep Neural Network training

Natasha Mathur
05 Mar 2019
3 min read
Yesterday, the Google AI research team announced that it is open-sourcing GPipe, a distributed machine learning library for efficiently training large-scale deep neural network models, under the Lingvo framework. GPipe uses synchronous stochastic gradient descent and pipeline parallelism for training: it divides the network's layers across accelerators and pipelines execution to achieve high hardware utilization. GPipe also allows researchers to easily deploy more accelerators to train larger models and to scale performance without tuning hyperparameters.

Google AI researchers had also published a paper titled “GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism” last December, demonstrating the use of pipeline parallelism to scale deep neural networks beyond the memory limits of current accelerators. Let's have a look at the major highlights of GPipe.

GPipe helps with maximizing memory and efficiency
GPipe helps maximize the memory available for model parameters. The researchers ran experiments on Cloud TPUv2s, each consisting of 8 accelerator cores and 64 GB of memory (8 GB per accelerator). Without GPipe, a single accelerator can train up to 82 million model parameters because of this memory limitation; GPipe brought the intermediate activation memory down from 6.26 GB to 3.46 GB on a single accelerator.

The researchers also measured the effect of GPipe on the throughput of AmoebaNet-D to test its efficiency, and observed an almost linear speedup in training. GPipe also enabled an 8-billion-parameter Transformer language model on 1024-token sentences, with a speedup of 11x.

[Figure: Speedup of AmoebaNet-D using GPipe]

Putting the accuracy of GPipe to the test
The researchers used GPipe to verify the hypothesis that scaling up existing neural networks can achieve better model quality. For this experiment, an AmoebaNet-B with 557 million model parameters and an input image size of 480 x 480 was trained on the ImageNet ILSVRC-2012 dataset. The model reached 84.3% top-1 / 97% top-5 single-crop validation accuracy without the use of any external data. The researchers also ran transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, where the giant models improved the best published CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.

“We are happy to provide GPipe to the broader research community and hope it is a useful infrastructure for efficient training of large-scale DNNs”, say the researchers. For more information, check out the official GPipe blog post.
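GPipe's actual API lives in the Lingvo (TensorFlow) codebase, but the scheduling idea is easy to sketch. A toy Python illustration (my own, not GPipe's code) of splitting a mini-batch into micro-batches so that pipeline stages on different accelerators overlap instead of idling:

```python
def pipeline_schedule(num_stages, num_micro_batches):
    """Yield (clock_tick, stage, micro_batch) triples for the forward pass
    of a GPipe-style schedule: stage s works on micro-batch m at tick s+m,
    so every stage is busy once the pipeline has filled."""
    for tick in range(num_stages + num_micro_batches - 1):
        for stage in range(num_stages):
            micro = tick - stage
            if 0 <= micro < num_micro_batches:
                yield tick, stage, micro

# A model split into 4 stages across 4 devices, mini-batch split into 4
# micro-batches: running the micro-batches strictly one after another
# would take 16 device-steps, while the pipelined forward pass finishes
# in 7 ticks.
for tick, stage, micro in pipeline_schedule(4, 4):
    print("t=%d: device %d runs micro-batch %d" % (tick, stage, micro))
```

The same staggering is run in reverse for the backward pass, and gradients are accumulated across micro-batches so the update matches synchronous SGD on the full mini-batch.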
Read next:
• Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
• Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images
• Researchers release unCaptcha2, a tool that uses Google’s speech-to-text API to bypass the reCAPTCHA audio challenge


DAV1D 0.2.0 released with SSSE3 support, improved x86 performance and more

Amrata Joshi
05 Mar 2019
2 min read
Yesterday, the team behind dav1d released dav1d 0.2.0, a new version of the open-source AV1 video decoder that focuses on helping older desktop CPUs and mobile devices. The initial release, dav1d 0.1.0, which arrived three months ago, featured hand-written AVX2 code that runs faster than the reference decoder on modern Intel and AMD CPUs. A stable build of dav1d 0.2.0 is yet to be released.

What's new in dav1d 0.2.0

SSSE3 support
The SSSE3 support is aimed at unlocking the performance potential of older desktop CPUs. As per the Steam Hardware Survey (Feb. 2019), 97.23% of Steam's user base is on SSSE3-capable hardware.

x86 performance
dav1d 0.1.0 did not support older and lower-end processors, but this release adds support for processors without AVX2. There is also NEON SIMD support for Arm hardware, and AVX2 performance itself has improved by 1% to 2%.

Mobile: NEON
In the previous release, the speedup from NEON assembly over C was around 80%; it has now been doubled in dav1d 0.2.0.

Arm64 performance
Arm64 performance has improved, with a 38% gain for single-threaded and a 53% gain for multi-threaded decoding.

32-bit Arm (Armv7)
32-bit Arm (Armv7) has also improved, as most of the assembly code can be fairly easily ported.

Major bug fixes
This release rewrites the inverse transforms to avoid overflows, and fixes issues with un-decodable samples.

To know more about this news, check out the official post on Medium.

Read next:
• dav1d 0.1.0, the AV1 decoder by VideoLAN, is here
• dav1d to release soon with all features of AV1, and better performance than libaom
• Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg

‘2019 Upskilling: Enterprise DevOps Skills’ report gives an insight into the DevOps skill set required for enterprise growth

Melisha Dsouza
05 Mar 2019
3 min read
DevOps Institute has announced the results of the “2019 Upskilling: Enterprise DevOps Skills Report”. The research and analysis for the report were conducted by Eveline Oehrlich, former vice president and research director at Forrester Research, and the project was supported by founding Platinum Sponsor Electric Cloud, Gold Sponsor CloudBees, and Silver Sponsor Lenovo.

The report outlines the most valued and in-demand skills needed to achieve DevOps transformation within enterprise IT organizations of all sizes. It also gives insight into the skills DevOps professionals should develop to help build the right mindset and culture for their organizations and colleagues.

According to Jayne Groll, CEO of DevOps Institute, “DevOps Institute is thrilled to share the research findings that will help businesses and the IT community understand the requisite skills IT practitioners need to meet the growing demand for T-shaped professionals. By identifying skill sets needed to advance the human side of DevOps, we can nurture the development of the T-shaped professional that is being driven by the requirement for speed, agility and quality software from the business.”

Key findings from the report
• 55% of the survey respondents said that they first look for internal candidates when searching for DevOps team members, and turn to external candidates only if no internal candidate is identified.
• Respondents rated automation skills (57%), process skills (55%), and soft skills (53%) as the most important must-have skills.
• Asked which job titles their companies recently hired or plan to hire, respondents answered: DevOps Engineer/Manager, 39%; Software Engineer, 29%; DevOps Consultant, 22%; Test Engineer, 18%; Automation Architect, 17%; and Infrastructure Engineer, 17%. Other recruits included CI/CD Engineers, 16%; System Administrators, 15%; Release Engineers/Managers, 13%; and Site Reliability Engineers, 10%.
• Functional skills and key technical skills, combined, complement the soft skills required to create qualified DevOps engineers. Automation, process, and soft skills are the “must-have” skills for a DevOps engineer, and process skills are needed for intelligent automation.
• IT operations is another key functional skill, with security coming in second.
• Business skills matter most to leaders, but less so to individual contributors.
• Cloud and analytical knowledge are the top technical skills.
• Recruiting for DevOps is on the rise.

[Figure: key findings. Source: press release, DevOps Institute's “2019 Upskilling: Enterprise DevOps Skills Report”]

[Figure: priorities across the top skill categories relative to the key roles surveyed. Source: press release, DevOps Institute's “2019 Upskilling: Enterprise DevOps Skills Report”]

Oehrlich said in a statement that hiring managers see a DevOps professional as a creative, knowledge-sharing, eager-to-learn individual with shapeable skill sets. Andre Pino, vice president of marketing at CloudBees, said in a statement that “The survey results show the importance for developers and managers to have the right skills that empower them to meet business objectives and have a rewarding career in our fast-paced industry.”

You can check out the entire report for more insights on this news.

Read next:
• Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
• Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
• JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution


ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!

Bhagyashree R
05 Mar 2019
2 min read
Yesterday, the ReactOS team announced the release of ReactOS 0.4.11. This release comes with kernel improvements for better overall system stability, support for manifests, and more. Following are some of the updates ReactOS 0.4.11 comes with:

Kernel improvements
ReactOS 0.4.11 comes with substantial updates to the interface that lets the operating system talk to storage devices. Modern computers generally use SATA connections and the corresponding AHCI interface, for which ReactOS relies on the UniATA driver. That driver did not work on 6th-generation Intel Core processors (Skylake); the team has now resolved this incompatibility, enabling users to test ReactOS on more modern platforms.

Support for manifests
Applications often depend on other libraries in the form of dynamic link libraries (DLLs), which are loaded by the loader (LDR). One way these dependencies are specified is with manifests (an example follows below). In previous versions of ReactOS, manifests were not properly supported. ReactOS 0.4.11 comes with sufficient manifest support to widen the range of applications that can run on ReactOS, including Blender 2.57b, BumpTop, Evernote 5.8.3, QuickTime Player 7.7.9, and many others.

USETUP improvements
ReactOS 0.4.11 brings major improvements to the USETUP module, aimed at enabling users to upgrade an existing installation of ReactOS. This is also a step toward making ReactOS an actual everyday OS that can update without loss of data and configuration.

Testing
In this release, the team has restructured the test results page to better encapsulate the relevant information. In addition to the overall conclusion of a test, users can now see details such as what drove a particular conclusion and the workarounds they might attempt themselves.

Support for network debugging and diagnosis programs
Thanks to work on TCP and UDP connection enumeration, ReactOS 0.4.11 now supports various network debugging and diagnosis programs. With this update, the ReactOS team aims to make the platform useful not just for running applications, but also for debugging them.

To read the full list of updates in ReactOS 0.4.11, check out the official announcement.
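For readers unfamiliar with manifests: on Windows, and therefore on ReactOS, they are small XML documents, embedded in the executable or shipped alongside it as app.exe.manifest, that the loader consults when choosing DLL versions. A typical example is the stock manifest requesting version 6 of the common controls; this is the standard Windows side-by-side (SxS) format, shown for illustration, not ReactOS-specific code:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <!-- Ask the loader for comctl32 v6 (themed controls) instead of the
       default v5 that un-manifested applications receive. -->
  <dependency>
    <dependentAssembly>
      <assemblyIdentity
          type="win32"
          name="Microsoft.Windows.Common-Controls"
          version="6.0.0.0"
          processorArchitecture="*"
          publicKeyToken="6595b64144ccf1df"
          language="*"/>
    </dependentAssembly>
  </dependency>
</assembly>
```

An application whose manifest cannot be parsed may fail to start at all, which is why proper manifest support unlocks the applications listed above.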
Read next:
• Btrfs now boots ReactOS, a free and open source alternative for Windows NT
• ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
• You can now install Windows 10 on a Raspberry Pi 3