
Tech News

GoCity: Turn your Golang program into a 3D city

Prasad Ramesh
05 Nov 2018
2 min read
A team from the Federal University of Minas Gerais (UFMG) created a Code City metaphor for visualizing Golang source code, called GoCity. You simply paste the URL of a GitHub repository and GoCity plots it as a city with districts and buildings, letting you visualize your code as a neat three-dimensional city. GoCity represents a program written in Go as a city:

- Folders are represented as districts
- Files in the program are shown as buildings of varying heights, shapes, and sizes
- Structs are represented as buildings stacked on top of their files

Characteristics of the structures

- The number of lines of source code (LOC) determines the building color; higher values make the building darker.
- The number of variables (NOV) in the program affects the building's base size.
- The number of methods (NOM) in the program affects the building's height.

The UI/front-end

The UI for GoCity is built with React and uses babylon.js to plot the 3D structures. The source code for the front-end is available in the front-end branch on GitHub.

What the users are saying

A comment on Hacker News by user napsterbr reads: “Cool! Interestingly I always use a similar metaphor on my own projects. For instance, the event system may be seen as the roads linking different blocks (domains), each with their own building (module).” The Kubernetes repository does take a toll on performance, as it produces a large number of spaced-out buildings: “The granddaddy of them all, Kubernetes, takes quite a toll performance-wise. https://go-city.github.io/#/github.com/kubernetes/kubernetes.” But as another user, jackwilsdon, pointed out on Reddit: “Try github.com/golang/go if you want some real browser-hanging action!”

For more details, visit the GitHub repository. For an interactive live demonstration, visit the Go City website.

Read next:
- Golang plans to add a core implementation of an internal language server protocol
- Why Golang is the fastest growing language on GitHub
- GoMobile: GoLang’s Foray into the Mobile World
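As a footnote on the metric-to-geometry scheme described above, here is a minimal Python sketch of the idea. It is an illustration only, not GoCity's code (GoCity itself is written in Go and JavaScript), and the names and scaling factors below are invented:

```python
# Conceptual sketch (not GoCity's implementation): mapping Go source
# metrics to "building" attributes following the scheme in the article.
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    base_size: float   # driven by number of variables (NOV)
    height: float      # driven by number of methods (NOM)
    shade: float       # 0.0 = light, 1.0 = dark; driven by lines of code (LOC)

def building_for_file(name: str, loc: int, nov: int, nom: int,
                      max_loc: int = 2000) -> Building:
    """Turn raw metrics into visual attributes; the scales are made up."""
    return Building(
        name=name,
        base_size=1.0 + nov * 0.5,      # bigger base for more variables
        height=1.0 + nom * 1.0,         # taller for more methods
        shade=min(loc / max_loc, 1.0),  # darker for more lines of code
    )

print(building_for_file("parser.go", loc=850, nov=12, nom=30))
```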

Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE

Amrata Joshi
05 Nov 2018
2 min read
Late last week, Red Hat announced that RHEL has deprecated KDE (K Desktop Environment) support. KDE Plasma Workspaces (KDE) is an alternative to the default GNOME desktop environment for RHEL; a major future release of Red Hat Enterprise Linux will no longer support using KDE in its place.

Red Hat's resistance to KDE goes back to the 90s: since Qt was under a not-quite-free license at the time, the Red Hat team was entirely against KDE and put its effort firmly behind GNOME.

Steve Almy, principal product manager of Red Hat Enterprise Linux, told the Register, “Based on trends in the Red Hat Enterprise Linux customer base, there is overwhelming interest in desktop technologies such as Gnome and Wayland, while interest in KDE has been waning in our installed base.”

Red Hat heavily backs GNOME, which is developed as an independent open-source project and is used by many other distributions. And although Red Hat is signaling the end of KDE support in RHEL, KDE is very much its own independent project, one that will continue on its own, with or without support from future RHEL editions.

Almy said, “While Red Hat made the deprecation note in the RHEL 7.6 notes, KDE has quite a few years to go in RHEL's roadmap.” The note is simply a warning that certain functionality may be removed from RHEL in the future, or replaced with similar or more advanced functionality. KDE, like everything listed in Chapter 51 of the Red Hat Enterprise Linux 7.6 release notes, will continue to be supported for the life of Red Hat Enterprise Linux 7.

Read more about this news on the official website of Red Hat.

Read next:
- Red Hat released RHEL 7.6
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

A new data breach on Facebook due to malicious browser extensions put almost 81,000 users’ private data up for sale, reports BBC News

Bhagyashree R
05 Nov 2018
4 min read
Throughout this year, we saw many data breaches and security issues involving Facebook. Adding to this list, last week, hackers gained access to 120 million accounts and posted private posts of Facebook users. As reported by BBC News, the hackers also put up an advert selling access to these compromised accounts for 10 cents per account.

What was this Facebook hack about?

This case of data breach seems to be different from the ones we saw previously. While the previous attacks took advantage of vulnerabilities in Facebook's code, this breach happened due to malicious browser extensions. The breach was first spotted in September, when a user nicknamed “FBSaler” appeared on an English-language internet forum, selling the personal information of Facebook users: "We sell personal information of Facebook users. Our database includes 120 million accounts.”

BBC contacted Digital Shadows, a cyber-security company, to investigate the case. The company confirmed that more than 81,000 of the profiles posted online contained private messages. Data from a further 176,000 accounts was also made available online, though BBC added that this data may have been scraped from members who had not hidden it. To confirm that these private posts and messages actually belonged to real users, BBC contacted five Russian Facebook users, who confirmed that the posts were theirs.

Who exactly is responsible for this hack?

Going by Facebook's statement to BBC, this hack happened because of malicious browser extensions that tracked victims' activity on Facebook and shared their personal details and private conversations with the hackers. Facebook has not yet disclosed any information about the extensions. One of Facebook's executives, Guy Rosen, told BBC: "We have contacted browser-makers to ensure that known malicious extensions are no longer available to download in their stores. We have also contacted law enforcement and have worked with local authorities to remove the website that displayed information from Facebook accounts."

On deeper investigation by BBC News, one of the websites where the data was published appeared to have been set up in St Petersburg. In addition to the website being taken down, its IP address has been flagged by the Cybercrime Tracker service, which says the address was also used to spread the LokiBot Trojan, a trojan that lets attackers gain access to user passwords.

Cyber experts told BBC that if malicious extensions were the root cause of this data breach, then the browsers share some responsibility: “Independent cyber-experts have told the BBC that if rogue extensions were indeed the cause, the browsers' developers might share some responsibility for failing to vet the programs, assuming they were distributed via their marketplaces.”

This news has led to a big discussion on Hacker News. One user shared how these kinds of attacks could be mitigated by browser policies: “Maybe it's time for the browsers to put more effort into extension network security. 1) Every extension has to declare up front what urls it needs to communicate to. 2) Every extension has to provide schema of any data it intends to send out of browser. 3) Browser locally logs all this comms. 4) Browser blocks anything which doesn't match strict key values & value values and doesn't leave browser in plain text.”

We will have to wait and see how browsers stop the use of malicious extensions, and how Facebook hardens itself against data breaches like these. Read the full report on this Facebook hack on BBC News.

Read next:
- Facebook’s CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons
- Facebook’s Machine Learning system helped remove 8.7 million abusive images of children
- Facebook says only 29 million and not 50 million users were affected by last month’s security breach
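The commenter's first and fourth points boil down to a per-extension URL allowlist plus a local egress log. As a toy Python sketch of that idea only (the hostname and "manifest" set below are invented, and real enforcement would have to live inside the browser):

```python
# Toy illustration of the proposed policy: an extension declares up front
# which hosts it may talk to; the browser logs every request locally and
# blocks anything outside the declared allowlist.
from urllib.parse import urlparse

DECLARED_HOSTS = {"api.example-extension.com"}  # hypothetical manifest entry

def allow_request(url: str, comms_log: list) -> bool:
    host = urlparse(url).hostname or ""
    allowed = host in DECLARED_HOSTS
    comms_log.append((url, "allowed" if allowed else "blocked"))  # local log
    return allowed

log: list = []
print(allow_request("https://api.example-extension.com/sync", log))  # True
print(allow_request("https://evil-collector.example/upload", log))   # False
```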

Google open sources BERT, an NLP pre-training technique

Prasad Ramesh
05 Nov 2018
2 min read
Google open-sourced Bidirectional Encoder Representations from Transformers (BERT) last Friday for NLP pre-training. Natural language processing (NLP) covers tasks like sentiment analysis, language translation, and question answering. Large NLP datasets containing millions, or billions, of annotated training examples are scarce. Google says that with BERT, you can train your own state-of-the-art question answering system in 30 minutes on a single Cloud TPU, or in a few hours on a single GPU. The source code is built on top of TensorFlow, and a number of pre-trained language representation models are also included.

BERT features

BERT improves on recent work in pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFiT. BERT is different from these models: it is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus (Wikipedia). Context-free models like word2vec generate a single word embedding representation for every word. Contextual models, on the other hand, generate a representation of each word based on the other words in the sentence. BERT is deeply bidirectional as it considers both the previous and the next words.

Bidirectionality

It is not possible to train bidirectional models by simply conditioning each word on the words before and after it: doing so would allow the word being predicted to indirectly see itself in a multi-layer model. To solve this, the Google researchers used a straightforward technique of masking out some words in the input and conditioning each word bidirectionally to predict the masked words. The idea is not new, but BERT is the first technique to use it successfully to pre-train a deep neural network.

Results

On the Stanford Question Answering Dataset (SQuAD) v1.1, BERT achieved a 93.2% F1 score, surpassing the previous state-of-the-art score of 91.6% and the human-level score of 91.2%. BERT also improves the state of the art by 7.6% absolute on the very challenging GLUE benchmark, a set of 9 diverse Natural Language Understanding (NLU) tasks.

For more details, visit the Google Blog.

Read next:
- Intel AI Lab introduces NLP Architect Library
- FAT Conference 2018 Session 3: Fairness in Computer Vision and NLP
- Implement Named Entity Recognition (NER) using OpenNLP and Java
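A minimal Python sketch of the masked-word idea follows. This illustrates the technique only, not Google's TensorFlow implementation; the real recipe also sometimes keeps or randomizes the chosen tokens instead of always substituting a mask token:

```python
# Minimal sketch of BERT-style input masking: hide a fraction of tokens
# and keep them as labels; the model is trained to predict them from the
# words on both sides, which is what makes the training bidirectional.
import random

MASK, MASK_RATE = "[MASK]", 0.15

def mask_tokens(tokens, rate=MASK_RATE, seed=0):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            targets[i] = tok      # label the model must recover
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

sentence = "the man went to the store to buy a gallon of milk".split()
masked, targets = mask_tokens(sentence)
print(masked)   # e.g. ['the', 'man', '[MASK]', 'to', ...]
print(targets)  # e.g. {2: 'went', ...}
```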

Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3. ProcDump for Linux is a reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump

The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6 and Ubuntu 14.04 LTS, with other versions being tested. It also requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump

- Runs on Linux kernels version 3.5+
- Does not have full feature parity with the Windows version of ProcDump; specifically, it lacks the stay-alive functionality and custom performance counters

Installing ProcDump

ProcDump can be installed in two ways: via a package manager, which is the preferred method, or via a .deb package.

To know more about ProcDump in detail, visit its GitHub page.

Read next:
- Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
- ‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs
- Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers

The React Native team shares their open source roadmap, React Suite hits 3.4.0

Bhagyashree R
02 Nov 2018
3 min read
Yesterday, the React Native team shared further plans for React Native to provide better support to its users and collaborators outside of Facebook. The team is planning to open source some of its internal tools and improve the tools widely used in the open source community. To ensure no breaking code is open sourced, it is also improving its testing infrastructure. The following are some of the focus areas the developers will be working on.

Cleaning up for leaner core

The developers are planning to reduce the surface area of React Native by removing non-core and unused components. React Native is currently a huge repo, so it makes sense to break it into smaller ones. This will come with many advantages, some of which are:

- Managing contributions to React Native will become easier
- A chance to deprecate older modules
- The bundle size for projects that don't use the extractable components will be reduced, leading to faster startup times for apps
- Faster reviewing and merging of pull requests

Open sourcing internals and improving popular tools

They will be open sourcing some of the tools that Facebook uses internally and providing improved support for tools that are widely used by the open source community. Some of the projects they will be working on are:

- Open sourcing JavaScript Interface (JSI), an interface that facilitates communication between JavaScript and the native language
- Support for 64-bit libraries on Android
- Debugging enabled in the new architecture
- Improved support for CocoaPods, Gradle, Maven, and the new Xcode build system

Improved testing infrastructure

Before code is published, it goes through several tests internally by the React Native engineers. But since there are a few differences in how React Native is used at Facebook and by the open source community, these updates sometimes introduce breaking changes to the React Native surface. To avoid such situations, they will be improving internal tests and ensuring that new features are tested in an environment as similar to open source as possible. Along with these infrastructure improvements, Facebook will start using React Native via the public API, as the open source community does, which will reduce unintentional breaking changes.

All these changes, along with some more, will be rolled out over the next year as per the official announcement. Some goals have already been completed; JSI, for example, has already landed in open source.

Releasing React Suite 3.4.0

Meanwhile, the React Suite developers announced the release of React Suite 3.4.0. React Suite, or RSUITE, consists of React component libraries for enterprise system products. This release comes with TypeScript support and a few minor bug fixes. The following are some of the updates introduced in React Suite 3.4.0:

- Support added for TypeScript
- renderTooltip added for Slider
- MultiCascader, a single selection of data with a hierarchical relationship structure, has been added
- Fixed an issue where customizing options in <DatePicker> shortcuts was not working properly
- Fixed an issue where the scroll bar would not reset after the columns of the <Table> changed

To read React Native's open source roadmap, check out the official announcement. You can also read React Suite's release notes to learn more about the updates in React Suite 3.4.0.

Read next:
- React Conf 2018 highlights: Hooks, Concurrent React, and more
- React introduces Hooks, a JavaScript function to allow using React without classes
- React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!

Neo4j rewarded with $80M Series E, plans to expand company

Savia Lobo
02 Nov 2018
2 min read
On 1st November, Neo4j closed an $80 million Series E round to bring its products to a wider market; this could possibly be the company's last private fundraise. In 2016, the company raised a $36 million Series D. Neo4j has been successful, with around 200 enterprise customers to its credit, including Walmart, UBS, IBM and NASA, and with customers from 20 of the top 25 banks and 7 of the top 10 retailers.

The Series E round was led by One Peak Partners and Morgan Stanley Expansion Capital, with participation from existing investors Creandum, Eight Roads and Greenbridge Partners. As reported in TechCrunch, Neo4j CEO Emil Eifrem said: “If your mental framework is around building a great company, you're going to have all kinds of options along the way. So that's what I'm completely focused on.”

This year, the company has focused on expanding into artificial intelligence. Graph databases help companies understand connections in large datasets, and AI involves large amounts of data to drive its learning models, so the two used hand-in-hand can benefit an organization. Eifrem has expressed intentions to use the money to expand the company internationally, and he also plans to provide localized service, in terms of language and culture, wherever Neo4j's customers happen to be.

The news seems to have gone down well with Neo4j users on Y Combinator's Hacker News.

Head over to TechCrunch to know more about this news.

Read next:
- Why Neo4j is the most popular graph database
- Neo4j 3.4 aims to make connected data even more accessible
- From Graph Database to Graph Company: Neo4j’s Native Graph Platform addresses evolving needs of customers

Senator Ron Wyden’s data privacy law draft can punish tech companies that misuse user data

Savia Lobo
02 Nov 2018
3 min read
On Thursday, Sen. Ron Wyden, a Democrat from Oregon, introduced a draft data privacy bill with harsh penalties for companies that violate data privacy. The bill would apply to companies that bring in more than $50 million in revenue and hold personal information on more than 1 million people.

The decision took root a year ago, when Equifax disclosed that hackers had stolen the personal information of 147.7 million Americans from its servers. Following this, Facebook and Cambridge Analytica were sued over the firm's gathering of private data on more than 50 million people through the social network. A lawsuit was also filed against Uber after the San Francisco-based ride-sharing company took more than 12 months to inform users that it had suffered a major hack. And in August, Google narrowly escaped a multi-million-dollar GDPR fine for tracking users' locations even when they had asked Google to turn location tracking off, including in incognito mode. According to CNET, "lawmakers still felt that the companies involved weren't being held accountable for mishandling data on millions of people."

Wyden has always been at the forefront of cybersecurity and privacy issues in the Senate. He said, "Today's economy is a giant vacuum for your personal information. Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation's database. But individual Americans know far too little about how their data is collected, how it's used and how it's shared."

Ron Wyden's draft bill

Wyden's draft bill recommends boosting the ability of the Federal Trade Commission to take action on privacy violations. Currently, the FTC can only fine tech companies if they agree to a consent decree, which straightforwardly states that users must be notified, and must explicitly give their permission, before data about them is shared beyond the privacy settings they have established. Facebook agreed to such a decree in 2011.

The bill also requires companies to submit an annual data protection report, similar to the transparency reports on government demands that companies like Google and Apple release voluntarily. CNET reports, "The report needs to be signed by CEOs, who could face up to 20 years in prison if they lie to the FTC."

The draft bill introduces a national "Do Not Track" website, allowing Americans to use a central page to opt out of data sharing across the internet. The FTC would also be able to issue fines of up to 4 percent of a company's annual global revenue, the same percentage that the European Union's General Data Protection Regulation uses.

Wyden's draft bill is among the first comprehensive data privacy legislation proposed in the US. Read Senator Ron Wyden's draft bill to know more about this data privacy legislation in detail.

Read next:
- Is AT&T trying to twist data privacy legislation to its own favor?
- Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy
- Apple now allows U.S. users to download their personal data via its online privacy data portal

This AI-generated animation can dress like humans using deep reinforcement learning

Prasad Ramesh
02 Nov 2018
4 min read
In a paper published this month, researchers synthesize the human motion of putting on clothes in animation using reinforcement learning. The paper, named Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning, was published yesterday. The team is made up of two Ph.D. students from the Georgia Institute of Technology, two of its professors, and a researcher from Google Brain.

Understanding the dressing problem

Dressing, putting on a t-shirt or a jacket, is something we do every day. Yet it is a computationally costly and complex task for a machine to perform or for computers to simulate. The paper uses techniques from physics simulation and machine learning to simulate an animation: a physics engine simulates the character motion and cloth motion, while deep reinforcement learning on a neural network produces the character motion.

Physics engine and reinforcement learning on a neural network

The authors of the paper introduce a salient representation of haptic information to guide the dressing process. The haptic information is then used in the reward function to provide learning signals when training the network. As the task is too complex to perform in one go, the dressing task is separated into several subtasks for better control. A policy sequencing algorithm is introduced to match the distribution of output states from one task to the input distribution of the next task. The same approach is used to produce character controllers for various dressing tasks, like wearing a t-shirt, wearing a jacket, and robot-assisted dressing of a sleeve.

Dressing is complex, split into several subtasks

The approach taken by the authors splits the dressing task into a sequence of subtasks, and a state machine guides the transitions between these tasks. Dressing a jacket, for example, consists of four subtasks:

1. Pulling the sleeve over the first arm
2. Moving the second arm behind the back to get in position for the second sleeve
3. Putting the hand in the second sleeve
4. Returning the body to a rest position

A separate reinforcement learning problem is formulated for each subtask in order to learn a control policy. The policy sequencing algorithm ensures that these individual control policies, executed sequentially, lead to a successful dressing sequence: it matches the initial state of each subtask with the final state of the previous subtask in the sequence. A variety of successful dressing motions can be produced by applying the resulting control policies. Each subtask is formulated as a partially observable Markov decision process (POMDP). Character dynamics are simulated with the Dynamic Animation and Robotics Toolkit (DART) and cloth dynamics with NVIDIA PhysX.

Conclusion and room for improvement

A system that learns to animate a character that puts on clothing was successfully created with the use of deep reinforcement learning and physics simulation. The system learns each subtask individually, then connects them with a state machine. The authors found that carefully selecting the cloth observations and the reward functions were important factors in the success of their approach. The system currently performs only upper-body dressing; for the lower body, balance would have to be built into the controller. Using a control policy architecture with memory might reduce the number of subtasks, allowing greater generalization of the skills learned.
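To make the sequencing idea concrete, here is a conceptual Python sketch, not the authors' code: each subtask has its own learned policy, and a simple state machine runs them in order, handing each policy's final state to the next. The subtask names mirror the jacket example above, and the rollout function is a stand-in:

```python
# Conceptual sketch of subtask sequencing for the jacket task. In the
# paper, each subtask policy is trained so that the previous policy's
# final states fall inside its own initial-state distribution, which is
# what makes running them back-to-back valid.
SUBTASKS = [
    "pull_sleeve_over_first_arm",
    "move_second_arm_behind_back",
    "put_hand_in_second_sleeve",
    "return_to_rest",
]

def run_policy(name: str, state: dict) -> dict:
    """Stand-in for rolling out a trained subtask policy in the physics
    simulator until the subtask's end condition is met."""
    return dict(state, last_subtask=name)

def dress(initial_state: dict) -> dict:
    state = initial_state
    for task in SUBTASKS:     # the state machine: run subtasks in order
        state = run_policy(task, state)
    return state

print(dress({"pose": "rest"}))
```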
You can read the research paper on the Georgia Institute of Technology website.

Read next:
- Facebook launches Horizon, its first open source reinforcement learning platform for large-scale products and services
- Deep reinforcement learning – trick or treat?
- Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system

Introducing Zink: An OpenGL implementation on top of Vulkan

Amrata Joshi
02 Nov 2018
3 min read
Erik Kusma Faye-Lund, a graphics programmer, introduced Zink on Wednesday. Zink is an OpenGL implementation on top of Vulkan: a Mesa Gallium driver that reuses the OpenGL implementation in Mesa to provide hardware-accelerated OpenGL when only a Vulkan driver is available.

Currently, Zink is only available as source code; distro packages aren't available yet, and it has only been tested on Linux. To build Zink, one needs Git, the Vulkan headers and libraries, Meson and Ninja, as well as the dependencies needed to compile Mesa. Erik says, “And most importantly, we are not a conformant OpenGL implementation. I’m not saying we will never be, but as it currently stands, we do not do conformance testing, and as such we neither submit conformance results to Khronos.”

What Zink may include

1. Just one API. OpenGL is a big API and is well-established as a requirement for applications and desktop compositors. But since the release of Vulkan, there are two APIs for essentially the same hardware functionality, and both are important. As the software world works hard to implement Vulkan support everywhere, this leads to complexity. In the future, things like desktop compositors would only need to support one API, and OpenGL's role could become purely one of legacy application compatibility. Maybe Zink can help in making that future better.

2. Less workload for GPU drivers. Everyone wants less code to maintain for legacy hardware, but the set of drivers to maintain is growing rapidly, and new drivers are still being written for old hardware. If the hardware is capable of supporting Vulkan, it could be easier to support only Vulkan “natively” and do OpenGL through Zink. There aren't infinite programmers to maintain every GPU driver forever, but with Zink, driver support might get better and easier.

3. Zink comes with benefits. Since Zink is implemented as a Gallium driver in Mesa, some side benefits come “for free”. For instance, projects like Gallium Nine or Clover could, in theory, work on top of the i965 Vulkan driver through Zink in the future. In the coming years, Zink might also act as a cooperation layer between OpenGL and Vulkan code in the same application.

4. Zink could be used on top of a closed-source Vulkan driver. Zink might also run smoothly on top of a closed-source Vulkan driver and still get proper window system integration.

What does Zink require?

Currently, Zink requires a Vulkan 1.0 implementation and the following extensions:

- VK_KHR_maintenance1: required for viewport flipping
- VK_KHR_external_memory_fd: required for getting the rendered result on screen

Additionally, Erik has shared a list of features that Zink doesn't support yet:

- glPointSize() is not supported, though writing to gl_PointSize from the vertex shader does work
- Texture borders are currently black, due to Vulkan's lack of arbitrary border-color support
- No control flow is supported in the shaders
- There is no GL_ALPHA_TEST or glShadeModel(GL_FLAT) support yet

It will be interesting to see how Zink turns out when these features go live! Read more about this news on Kusma's official website.

Read next:
- Valve’s Steam Play Beta uses Proton, a modified WINE, allowing Linux gamers to play Windows games
- UI elements and their implementation
- Game Engine Wars: Unity vs Unreal Engine

Salesforce open sources Centrifuge: a library for accelerating JVM restarts

Amrata Joshi
02 Nov 2018
3 min read
Yesterday, Paymon Teyer, a principal member of the technical staff at Salesforce, introduced Centrifuge, a library and framework for scheduling and running startup and warmup tasks, focused mainly on accelerating JVM restarts. It provides an interface for implementing warmup tasks, such as calling an HTTP endpoint, populating caches, and handling pre-compilation tasks for generated code.

When the JVM restarts in a production environment, server performance suffers: the JVM has to reload classes, trigger reflection inflation, rerun its JIT compiler on any hot code paths, reinitialize objects and dependency injections, and populate component caches. The performance impact of JVM restarts can be minimized by allowing individual components to execute arbitrary warmup logic themselves after a cold start. Centrifuge was created to execute such warmup tasks while managing resource usage and handling failures.

Centrifuge allows users to register and configure warmup tasks either declaratively or programmatically. It also schedules tasks, manages and monitors threads, handles exceptions and retries, and provides status reports. Centrifuge supports the following two categories of warmup tasks:

Blocking tasks

Blocking tasks prevent the application from returning to the available server pool until they complete. These tasks must be executed for the application to function properly, for example, executing source code generators or populating a cache from storage to meet SLA requirements.

Non-blocking tasks

Non-blocking tasks execute asynchronously and don't interfere with the application's readiness. These tasks do work that is needed after an application restarts but is not required immediately for the application to be in a consistent state. Examples include warmup logic that triggers JIT compilation on code paths, or eagerly triggering dependency injection and object creation.

How to use Centrifuge?

1. Include a Maven dependency for Centrifuge in the POM.
2. Implement the Warmer interface for each warmup task. The warmer class should have an accessible default constructor and should not swallow InterruptedException.
3. Register the warmers, either programmatically in code or declaratively in a configuration file. To allow adding and removing warmers without recompiling, register them declaratively in a configuration file, then load the configuration file into Centrifuge.

How is the HTTP warmer useful?

Centrifuge provides a simple HTTP warmer which calls HTTP endpoints to trigger the code paths exercised by the resources implementing those endpoints. If an application provides a homepage URL which, when called, connects to a database, populates caches, and so on, the HTTP warmer can warm these code paths.

Read more about Centrifuge on Salesforce's official website.

Read next:
- About Java Virtual Machine – JVM Languages
- Tuning Solr JVM and Container
- Concurrency programming 101: Why do programmers hang by a thread?
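Centrifuge itself is a Java library, so the following is only a language-neutral Python sketch of the pattern described above: blocking warmers must finish before the application reports itself ready, while non-blocking warmers run in the background. The function names are invented:

```python
# Sketch of the blocking vs non-blocking warmup pattern (illustration
# only; Centrifuge implements this in Java with scheduling, monitoring,
# and retry handling on top).
import threading
import time

def populate_cache():      # blocking: must finish before serving traffic
    time.sleep(0.1)
    print("cache populated")

def trigger_hot_paths():   # non-blocking: useful, but not required for readiness
    time.sleep(0.5)
    print("hot code paths warmed")

def start_app():
    for warmer in (populate_cache,):          # blocking warmers run first
        warmer()
    for warmer in (trigger_hot_paths,):       # non-blocking warmers run async
        threading.Thread(target=warmer, daemon=True).start()
    print("app ready: returned to the available server pool")

start_app()
time.sleep(1)  # keep the demo alive so the background warmer can finish
```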

Kubeflow 0.3 released with simpler setup and improved machine learning development

Melisha Dsouza
02 Nov 2018
3 min read
Early this week, the Kubeflow project launched its latest version, Kubeflow 0.3, just three months after version 0.2 came out. This release comes with easier deployment and customization of components, along with better multi-framework support.

Kubeflow is the machine learning toolkit for Kubernetes: an open source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Users get an easy-to-use ML stack anywhere Kubernetes is already running, and this stack can self-configure based on the cluster it deploys into.

Features of Kubeflow 0.3

1. Declarative and extensible deployment

Kubeflow 0.3 comes with a deployment command-line script, kfctl.sh. This tool allows consistent configuration and deployment of Kubernetes resources and non-K8s resources (e.g., clusters, filesystems, etc.). Minikube deployment provides a single-command, shell-script-based deployment, and users can also use MicroK8s to easily run Kubeflow on their laptops.

2. Better inference capabilities

Version 0.3 makes it possible to do batch inference with GPUs (though non-distributed) for TensorFlow using Apache Beam. Batch and streaming data processing jobs that run on a variety of execution engines can be easily written with Apache Beam. Running TFServing in production is now easier thanks to an added liveness probe and the use of fluentd to log requests and responses to enable model retraining. The release also takes advantage of the NVIDIA TensorRT Inference Server to offer more options for online prediction using both CPUs and GPUs. This server is a containerized, production-ready AI inference server which maximizes the utilization of GPU servers by running multiple models concurrently on the GPU, and it supports all the top AI frameworks.

3. Hyperparameter tuning

Kubeflow 0.3 introduces a new K8s custom controller, StudyJob, which allows a hyperparameter search to be defined using YAML, making it easy to use hyperparameter tuning without writing any code.

4. Miscellaneous updates

- A release of a K8s custom controller for Chainer (docs)
- Cisco has created a v1alpha2 API for PyTorch that brings parity and consistency with the TFJob operator
- It is easier to handle production workloads for PyTorch and TFJob because of new features added to them
- Support for gang scheduling using Kube Arbitrator, to avoid stranding resources and deadlocking in clusters under heavy load
- The 0.3 Kubeflow Jupyter images ship with TF Data-Validation, a library used to explore and validate machine learning data

You can check the examples added by the team to understand how to leverage Kubeflow:

- The XGBoost example shows how to use non-DL frameworks with Kubeflow
- The object detection example illustrates leveraging GPUs for online and batch inference
- The financial time series prediction example shows how to leverage Kubeflow for time series analysis

The team has said that the next major release, 0.4, will be coming by the end of this year. They will focus on making it easier to perform common ML tasks without having to learn Kubernetes, and on making models easier to track by providing a simple API and database for tracking them. Finally, they intend to upgrade the PyTorch and TFJob operators to beta.

For a complete list of updates, visit the 0.3 changelog on GitHub.

Read next:
- Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
- Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
- ‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities, powered by the Google Chrome browser, to Node applications. Using Puppeteer, it is able to communicate with the locally installed browser instance. Puppeteer is also a Google Chrome project; it comes with a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why was Carlo introduced?

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node v8 and Chrome v8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What can you do with Carlo?

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. You can do the following with it:

- Using the web rendering stack, you can visualize the dynamic state of your Node applications
- Expose additional system capabilities, accessible from Node, to your web applications
- Package your application into a single executable using the command-line interface pkg

How does it work?

Its working involves three steps:

1. Carlo checks whether Google Chrome is installed locally
2. It launches Google Chrome and establishes a connection to it over the process pipe
3. It exposes a high-level API for rendering in Chrome

For users who do not have Chrome installed, Carlo prints an error message. It supports the Chrome stable channel, versions 70.* and later, and Node v7.6.0 onwards. You can install it and get started by executing the following command:

npm i carlo

Read the full description on Carlo's GitHub repository.

Read next:
- Node v11.0.0 released
- npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
- Node.js and JS Foundation announce intent to merge; developers have mixed feelings

Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based, AI-powered game development platform

Natasha Mathur
02 Nov 2018
4 min read
Electronic Arts (EA) announced Project Atlas, a new AI-powered, cloud-computing-based game development platform, earlier this week. Project Atlas comes with high-quality LIDAR data, improved scalability, a cloud-based engine, and enhanced security, among other features. When Project Atlas will be generally available hasn't been disclosed yet.

“We're calling this Project Atlas and we believe in it so much that we have over 1,000 EA employees working on building it every day, and dozens of studios around the world contributing their innovations, driving priorities, and already using many of the components,” said Ken Moss, Chief Technology Officer at Electronic Arts. Let's discuss the features of Project Atlas.

High-quality LIDAR Data

Project Atlas will use high-quality LIDAR data about real mountain ranges. This data is then passed through a deep neural network trained to create terrain-building algorithms. With the help of this AI-assisted terrain generation, designers will be able to generate not just a single mountain, but a series of mountains along with the surrounding environment, bringing in the realism of the real world. “This is just one example of dozens or even hundreds where we can apply advanced technology to help game teams of all sizes scale to build bigger and more fun games,” says Moss.

Improved Scalability

Earlier, all simulation or rendering of in-game actions was limited either to the processing performance of the player's console or to a single server that would interact with your system. Now, with the help of the cloud, players will be able to tap into a network of many servers dedicated to computing complex tasks. This will deliver hyper-realistic destruction within new HD games that is nearly indistinguishable from real life. “We're working to deploy that level of gaming immersion on every device,” says Moss. Moreover, the integration of distributed networks at the rendering level means infinite scalability from the cloud. So whether you're on a team of 500 or just 5, you'll now be able to scale games and create immersive experiences in unprecedented ways.

Cloud-based engine and Moddable asset database

With Project Atlas, you can turn your own vision into reality and share the creation with your friends as well as the whole world. You can also market your ideas and visions to the community. Keeping this in mind, the Project Atlas team is planning a cloud-enabled engine that can seamlessly integrate different services, along with a moddable asset database and a common marketplace where users can share and rate other players' creations. “Players and developers want to create. We want to help them. By blurring the line between content producers and players, this will truly democratize the game experience,” adds Moss.

Enhanced Security

Project Atlas comes with a unified platform where game makers can seamlessly deploy security measures such as SSL certificates, configuration, appropriate encryption of data, and zero-downtime patches for every feature, from a single secure source. This allows them to focus more on creating games and less on taking the required security measures. “We're solving for some of the manually intensive demands by bringing together AI capabilities in an engine and cloud-enabled services at scale. With an integrated platform that delivers consistency and seamless delivery from the game, game makers will free up time, brainspace, and energy for the creative pursuit,” says Moss.

For more information, check out the official Project Atlas blog.

Read next:
- Xenko 3.0 game engine is here, now free and open-source
- Meet yuzu – an experimental emulator for the Nintendo Switch
- AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Newer Apple Maps is greener and has more details

Prasad Ramesh
02 Nov 2018
3 min read
Apple Maps failed horribly with incorrect data at its launch in 2012, and Apple has been putting in effort to make it better ever since. In a blog post, Justin O'Beirne, who has contributed to Apple Maps, compares the new and old Apple Maps. The following is a summary of his analysis.

Greener maps with more vegetation

The new Apple Maps is greener, with a lot more vegetation. The cities, even the smaller ones, are also more noticeable and have more detail. The vegetation is mapped from the satellite view, as in Google Maps. According to the blog, 25% of county seats had no vegetation or green areas whatsoever on the old map; the blog shows some dramatic before-and-after examples. The vegetation coverage extends to the smallest details: small strips of grass and vegetation between roads are all covered. In some areas, Apple Maps covers vegetation details that no other map service does, not even Google Maps.

More structures are covered

Beyond vegetation, the shapes of structures, and smaller structures other than buildings, are now in Apple Maps. Details are distinguished in areas like beaches, harbors, racetracks, and parking lots. Smaller details, like fairways in golf courses, running tracks in schools, pools, and playgrounds in parks, are also covered. The building structures are being upgraded too: since the original launch in 2012 there have, of course, been redevelopments and new buildings, and Apple is covering all of that in an attempt at a comprehensive, proper update. The buildings show great detail, including arches, stepped designs, and partitions. But some structures are rendered with incorrect heights, and some rooftop details present in Google Maps are missing. On the other hand, the building perimeters are more accurate than those in Google Maps.

Updating road data

Apple used to license road data from TomTom, but it wasn't up to date, and Apple is now replacing all of it with new data. Some roads that are missing even from Google Maps are now in Apple Maps. But Apple does not have all the businesses and stores, street names are not always displayed properly, there are errors, and in some cases it is difficult to find streets while zooming in.

The new Apple Maps has been in the making since 2014. It has certainly improved a lot, with more detail than Google in some areas, but it still lacks the large amount of data that Google Maps has. For more in-depth details, visit Justin O'Beirne's blog.

Read next:
- A kernel vulnerability in Apple devices gives access to remote code execution
- Helium proves to be less than an ‘ideal gas’ for iPhones and Apple watches
- Apple launches iPad Pro, updates MacBook Air and Mac mini