
Tech News

3711 Articles

Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server integration, improved console testing, notification localization, and other changes.

Laravel's versioning scheme follows the paradigm.major.minor convention. Major releases are done every six months, in February and August, while minor releases may ship every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years, giving them the longest support window. For general releases, bug fixes are provided for six months and security fixes for a year.

Laravel Nova

Laravel Nova is a pleasant looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

After the upgrade, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can have breaking changes.

Email Verification

Laravel 5.7 introduces optional email verification in the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that is included with the framework.

Guest User Policies

In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

    Gate::define('update-post', function (?User $user, Post $post) {
        // ...
    });

Symfony Dump Server

Laravel 5.7 offers integration with Symfony's dump-server command via a package by Marcel Pociot. To get started, run the dump-server Artisan command:

    php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of in your browser. This allows inspection of values without mangling your HTTP response output.

Notification Localization

You can now send notifications in a locale other than the current language, and Laravel will remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console Testing

Laravel 5.7 allows you to easily "mock" user input for console commands using the expectsQuestion method. Additionally, you can assert the exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel Release Notes.

Read next
Building a Web Service with Laravel 5
Google App Engine standard environment (beta) now includes PHP 7.2
Perform CRUD operations on MongoDB with PHP


SatPy 0.10.0, Python library for manipulating meteorological remote sensing data, released

Amrata Joshi
26 Nov 2018
2 min read
SatPy is a Python library for reading and manipulating meteorological remote sensing data and writing it to various image and data file formats. Last week, the team at Pytroll announced the release of SatPy 0.10.0. SatPy can build RGB composites directly from satellite instrument channel data or from higher-level processing output. It also makes data loading, manipulation, and analysis easy.

https://twitter.com/PyTrollOrg/status/1066865986953986050

Features of SatPy 0.10.0

- This version comes with two luminance sharpening compositors, LuminanceSharpeningCompositor and SandwichCompositor. The LuminanceSharpeningCompositor replaces the luminance of the RGB, while the SandwichCompositor multiplies the RGB channels with the reflectance.
- SatPy 0.10.0 comes with a check_satpy function for finding missing dependencies.
- This version also allows writers to create output directories in case they don't exist.
- SatPy 0.10.0 improves the handling of dependency loading in case of multiple matches.
- This version also supports new OLCI L2 datasets in the OLCI L2 reader. OLCI is used for ocean and land processing.
- Since YAML is the new format for area definitions in SatPy 0.10.0, areas.def has been replaced with areas.yaml.
- In SatPy 0.10.0, file handlers use filenames as strings. This version also allows readers to accept pathlib.Path instances as filenames.
- With this version, it is easier to configure in-line composites.
- A README document has been added to the setup.py description.

Resolved issues in SatPy 0.10.0

- The issue with resampling a user-defined scene has been resolved.
- The native resampler now works with DataArrays.
- It is now possible to review subclasses of BaseFileHandler.
- Readthedocs builds are now working.
- A custom string formatter has been added in this version for lower/upper support.
- The inconsistent units of geostationary radiances have been resolved.

Major Bug Fixes

- A discrete data type now gets preserved through resampling.
- Native resampling has been fixed.
- The slstr reader has been fixed for consistency.
- Masking in DayNightCompositor has been fixed.
- The problem with attributes not getting preserved while adding overlays or decorations has now been fixed.

To know more about this news, check out the official release notes.

Read next
Introducing ReX.js v1.0.0, a companion library for RegEx written in TypeScript
Spotify releases Chartify, a new data visualization library in Python for easier chart creation
Google releases Magenta Studio beta, an open source Python machine learning library for music artists
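As a quick addendum to the release notes above, here is a minimal usage sketch of SatPy's Scene API together with the new dependency check. The reader name, file pattern, composite name, and the import location of check_satpy are assumptions for illustration, not details from the announcement:

    # Minimal SatPy usage sketch (reader name and file pattern are illustrative assumptions)
    from glob import glob
    from satpy import Scene

    scn = Scene(filenames=glob("/data/seviri/*"), reader="seviri_l1b_hrit")
    scn.load(["natural_color"])  # build an RGB composite from the loaded channels
    scn.save_dataset("natural_color", filename="natural_color.png")

    # The dependency check added in 0.10.0; its module location has moved between
    # releases, so the import below is an assumption for your installed version
    try:
        from satpy.utils import check_satpy
    except ImportError:
        from satpy.config import check_satpy
    check_satpy()  # reports which readers/writers are usable and which dependencies are missing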


Facebook and Arm join Yocto Project as platinum members for embedded Linux development

Natasha Mathur
03 Sep 2018
3 min read
Last week, the Yocto Project announced that Arm and Facebook will be joining the project as new platinum members.

The Yocto Project is an open source collaboration project (originally an Intel project) that was launched back in 2011. It aims to let developers create customized Linux-based systems for embedded products. The Yocto Project comes with a flexible set of tools and offers a space where embedded developers across the globe share technologies, software, and best practices. This helps them build tailored Linux images for embedded and Internet of Things (IoT) devices.

According to Rhonda Dirvin, Senior Director, Marketing, Embedded & Automotive Line of Business, Arm, "The Yocto Project provides an excellent framework to facilitate embedded Linux development, and through our membership we will collaborate with the community to further advance Yocto Project's custom open-source distribution."

Earlier, Linaro, which consolidates and optimizes open source software and tools for the Arm architecture, was considered a competitor of the Yocto Project. However, that's not entirely the case, as the two groups have become complementary and Linaro's Arm toolchain can be used within the Yocto Project.

Facebook's role in the Yocto Project and embedded Linux

Facebook's role has been minor when it comes to embedded Linux. Facebook may have joined the Yocto Project because of a new project, or simply to expand its open source presence. "The Yocto Project is the basis for important open source and embedded firmware initiatives. We are happy to lend our support to the Yocto Project community, and look forward to joining with other members in this important work," said Aaron Sullivan, Director of Hardware Engineering at Facebook.

The Yocto Project currently has more than 22 active members. "We are delighted to welcome Arm and Facebook to the Yocto Project at the Platinum level. With their continued support, we are furthering the embedded systems ecosystem and the Yocto Project as a whole," mentioned Lieu Ta, Senior Director of Governance and Business Operations at Wind River and Chair of the Yocto Project Advisory Board.

The Yocto Project seems to be continually growing with Facebook and Arm joining in. Yocto will benefit from Facebook and Arm's technical and financial support to consolidate it as a "secure, stable and adaptable industry standard". For more information, be sure to check out the official Yocto Project blog post.

Read next
Arm unveils its Client CPU roadmap designed for always-on, always-connected devices
Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban
A new conservative employee group within Facebook to protest Facebook's "intolerant" liberal policies


LLVM's Arm stack protection feature turns ineffective when the stack is re-allocated

Vincy Davis
16 Jul 2019
2 min read
The stack protection feature in LLVM's Arm backend becomes ineffective when the stack protector slot is re-allocated. This was reported in a vulnerability note by the CERT Coordination Center at the Software Engineering Institute.

The stack protection feature is optionally used to protect against buffer overflows in the LLVM Arm backend. To make this feature work, a cookie value is added between the local variables and the stack frame return address. After storing this value in memory, the compiler checks the cookie with the LocalStackSlotAllocation function. The function checks whether the value has been changed or overwritten, and execution is terminated if the address value is found to have changed.

If a new value is allocated later on, the stack protection becomes ineffective, as the new stack protector slot appears only after the local variables it is supposed to protect. It is also possible for the value to be overwritten by the stack cookie pointer, which happens when the stack protection feature is rendered ineffective.

When the stack protection feature becomes ineffective, the function becomes vulnerable to a stack-based buffer overflow. This can cause the return address to be changed or the cookie itself to be overwritten, thus allowing an unintended value to pass the check.

The proposed solution for the stack vulnerability is to apply the latest updates from both LLVM and Arm.

This year has seen many cases of buffer overflow vulnerabilities. In the June release of VLC 3.0.7, many security issues were resolved; one of the high-severity issues resolved was a stack buffer overflow in the RIST module of VLC 4.0.

Read next
LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces
Google proposes a libc in LLVM, Rich Felker of musl libc thinks it's a very bad idea
Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed


Svelte 3 releases with reactivity through language instead of an API

Bhagyashree R
23 Apr 2019
2 min read
Yesterday, the Svelte community announced the stable release of Svelte 3. In this version, the team has worked towards moving reactivity into the language. Developers will now be able to write components in Svelte with significantly less boilerplate.

Svelte is a component framework, similar to JavaScript frameworks such as React and Vue, but with an important difference. In traditional frameworks, the major part of the work happens in the browser. Svelte, on the other hand, shifts this work into a compile step that happens when your app is built. Instead of relying on techniques like virtual DOM diffing, with this framework you write code that surgically updates the DOM when the app state changes.

Rich Harris, the Svelte developer, says Svelte aims to be more like spreadsheets: "Spreadsheets are pretty cool and we should be more like them... Wouldn't it be wonderful if the tools we use to build the web become as accessible as spreadsheets are? And, that is one of Svelte's overriding goals, to make web development accessible..."

What's new in Svelte 3?

With the introduction of hooks to React, many other frameworks started to experiment with their own implementations of hooks. However, Svelte decided that hooks were not the "direction they wanted to go in." Explaining the reason behind not implementing hooks, Harris said, "Hooks have some intriguing properties, but they also involve some unnatural code and create unnecessary work for the garbage collector. For a framework that's used in animation-heavy interactives, that's no good."

Because of these reasons, the team concluded that Svelte does not require such an API and chose to go with no API at all. "We can just use the language," shared Harris.

Beyond the component model, the team has also given Svelte a completely new look and feel in this release. They have updated the logo and website, and changed the tagline from 'The magical disappearing UI framework' to 'Cybernetically enhanced web apps'.

To know more, check out the official announcement by Svelte.

Read next
Applying Modern CSS to Create React App Projects [Tutorial]
React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]
React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!


Salesforce open sources ‘Lightning Web Components framework’

Savia Lobo
30 May 2019
4 min read
Yesterday, developers at Salesforce open sourced the Lightning Web Components framework, a new JavaScript framework that leverages the web standards breakthroughs of the last five years. This will allow developers to contribute to the roadmap and also use the framework irrespective of whether they are building applications on Salesforce or on any other platform.

Lightning Web Components was first introduced in December 2018. The developers mention in their official blog post, "The last five years have seen an unprecedented level of innovation in web standards, mostly driven by the W3C/WHATWG and the ECMAScript Technical Committee (TC39): ECMAScript 6, 7, 8, 9 and beyond, Web components, Custom elements, Templates and slots, Shadow DOM, etc." This has led to a dramatic transformation of the web stack: many features that previously required frameworks are now standard. The framework was "born as a modern framework built on the modern web stack", the developers say.

The Lightning Web Components framework includes three key parts:

- The Lightning Web Components framework, the framework's engine.
- The Base Lightning Components, a set of over 70 UI components all built as custom elements.
- Salesforce Bindings, a set of specialized services that provide declarative and imperative access to Salesforce data and metadata, data caching, and data synchronization.

The Lightning Web Components framework doesn't have dependencies on the Salesforce platform; rather, Salesforce-specific services are built on top of the framework. This layered architecture means that one can now use the Lightning Web Components framework to build web apps that run anywhere. The benefits include:

- You only need to learn a single framework.
- You can share code between apps.
- As Lightning Web Components is built on the latest web standards, you know you are using a cutting-edge framework based on the latest patterns and best practices.

Many users, however, said they are unhappy and that the Lightning Web Components framework is comparatively slow. One user wrote on Hacker News, "the Lightning Experience always felt non-performant compared to the traditional server-rendered pages. Things always took a noticeable amount of time to finish loading. Even though the traditional interface is, by appearance alone, quite traditional, at least it felt fast. I don't know if Lightning's problems were with poor performing front end code, or poor API performance. But I was always underwhelmed when testing the SPA version of Salesforce."

Another user wrote, "One of the bigger mistakes Salesforce made with Lightning is moving from purely transactional model to default-cached-no-way-to-purge model. Without letting a single developer know that they did it, what are the pitfalls or how to disable it (you can't). WRT Lightning motivation, sounds like a much better option would've been supplement older server-rendered pages with some JS, update the stylesheets and make server language more useable. In fact server language is still there, still heavily used and still lacking expressiveness so badly that it's 10x slower to prototype on it rather than client side JS…"

In support of Salesforce, a user on Hacker News explained why the framework might be slow: "At its core, Salesforce is a platform. As such, our customers expect their code to work for the long run (and backwards compatibility forever). Not owning the framework fundamentally means jeopardizing our business and our customers, since we can't control our future. We believe the best way to future-proof our platform is to align with standards and help push the web platform forward, hence our sugar and take on top of Web Components."

He further added, "about using different frameworks, again as a platform, allowing our customers to trivially include their framework choice of the day, will mean that we might end up having to load seven versions of react, five of Vue, 2 Embers .... You get the idea :) Outside the platform we love all the other frameworks (hence other properties might choose what it fits their use cases) and we had a lot of good discussions with framework owners about how to keep improving things over the last two years. Our goal is to keep contributing to the standards and push all the things to be implemented natively on the platform so we all get faster and better."

To know more about this news, visit the Lightning Web Components framework's official website.

Read next
Applying styles to Material-UI components in React [Tutorial]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

GitLab faces backlash from users over performance degradation issues tied to Redis latency

Vincy Davis
02 Jul 2019
4 min read
Yesterday, GitLab suffered major performance degradation, with a 5x increased error rate and site slowdown. The degradation was identified and rectified within a few hours of its discovery.

https://twitter.com/gabrielchuan/status/1145711954457088001

https://twitter.com/lordapo_/status/1145737533093027840

The GitLab engineers promptly started investigating the slowdown on GitLab.com and notified users that the slowdown was in the Redis and LRU clusters, thus impacting all web requests serviced by the Rails front-end. What followed was a comprehensive account of the issue, its causes, who was handling which part of it, and more. GitLab's step-by-step response looked like this:

- First, they investigated slow response times on GitLab.
- Next, they added more workers to alleviate the symptoms of the incident.
- Then, they investigated jobs on shared runners that were being picked up at a low rate or appeared to be stuck.
- Next, they tracked CI issues and observed the performance degradation as one incident.
- Over time, they continued to investigate the degraded performance and CI pipeline delays.
- After a few hours, all services were restored to normal operation and the CI pipelines caught up from the earlier delays to nearly normal levels.

David Smith, the Production Engineering Manager at GitLab, also updated users that the performance degradation was due to a few issues tied to Redis latency. Smith added, "We have been looking into the details of all of the network activity on redis and a few improvements are being worked on. GitLab.com has mostly recovered."

Many users on Hacker News wrote about their unpleasant experience with GitLab.com. One user stated, "I recently started a new position at a company that is using Gitlab. In the last month I've seen a lot of degraded performance and service outages (especially in Gitlab CI). If anyone at Gitlab is reading this - please, please slow down on chasing new markets + features and just make the stuff you already have work properly, and fill in the missing pieces."

Another user commented, "Slow down, simplify things, and improve your user experience. Gitlab already has enough features to be competitive for a while, with the Github + marketplace model."

Later, a GitLab employee with the username kennyGitLab commented that GitLab is not losing sight and is just following the company's new strategy of 'breadth over depth'. He further added, "We believe that the company plowing ahead of other contributors is more valuable in the long run. It encourages others to contribute to the polish while we validate a future direction. As open-source software we want everyone to contribute to the ongoing improvement of GitLab."

Users were indignant at this response. One user commented, "'We're Open Source!' isn't a valid defense when you have paying customers. That pitch sounds great for your VCs, but for someone who spends a portion of their budget on your cloud services - I'm appalled. Gitlab is a SaaS company who also provides an open source set of software. If you don't want to invest in supporting up time - then don't sell paid SaaS services."

Another comment read, "I think I understand the perspective, but the messaging sounds a bit like, 'Pay us full price while serving as our beta tester; sacrifice the needs of your company so you can fulfill the needs of ours'."

A few users also praised GitLab for its prompt action and for providing everybody with in-depth details about the investigation. One user wrote, "This is EXACTLY what I want to see when there's a service disruption. A live, in-depth view of who is doing what, any new leads on the issue, multiple teams chiming in with various diagnostic stats, honestly it's really awesome. I know this can't be expected from most businesses, especially non-open sourced ones, but it's so refreshing to see this instead of the typical 'We're working on a potential service disruption' that we normally get."

Read next
GitLab goes multicloud using Crossplane with kubectl
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note


ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes

Sugandha Lahoti
24 Jul 2018
3 min read
ReactOS, the free and open source "Windows-like" operating system, has a new release. ReactOS 0.4.9 comes with system stability and general consistency improvements such as self-hosting, shell improvements, FastFAT crash fixes, and more. As the project targets a new release every three months, the focus is on improvements rather than headliner changes.

ReactOS is now capable of Self-Hosting

Self-hosting is the process of building an OS on the OS itself. Although self-hosting is considered a milestone in any OS's maturity, it comes with many challenges of its own. Compiling any large codebase requires high memory usage and storage I/O, stressing the operating system. Scheduling is also stressed, as modern build systems generally spawn multiple compilation processes to speed up the build.

ReactOS featured self-hosting in an older version. However, changes brought by subsequent releases, such as the reworking of the kernel, made self-hosting impossible for a time. With the recent changes made to the filesystem, self-hosting is now completely established in the 0.4.9 release. The open source FreeBSD project's implementation of qsort played a major role in achieving this.

Stability brought in by fixing FastFAT crashes

ReactOS had significant resource leakages caused by the FastFAT driver. This leakage was eating up the common cache to the point where attempts to copy large files would result in a crash. The new version fixes the FastFAT driver's behavior by adding write throttling support and restraining its usage of the cache. A conservative usage of the cache may slow the system a bit during IO operations; however, it ensures that resources remain available to service large IO operations instead of crashing as before. The FastFAT driver also features a complete rewrite of the support for dirty volumes, greatly reducing the chance of file corruption. This will protect the system from becoming unusable after a crash.

Shell Improvements & Features

The shell has also received several upgrades. It now has a built-in zipfldr (Zip Folder) extension, so ReactOS can uncompress zipped files without needing third-party tools. It also allows users to choose whether to move, copy, or link a file or folder when they drag it with the right mouse button.

Some other new improvements

- A new mouse properties dialog in the GUI component of the ReactOS installer.
- The inclusion of RAPPS, the gateway program used for getting various applications installed on ReactOS. With Unicode support, ReactOS can now easily support many different languages.
- ReactOS can now present itself as Windows 8.1 with the Version APIs.

These are just a select few major updates. For a full list of features, upgrades, and improvements, read the changelog.

Read next
Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs
Microsoft releases Windows 10 Insider build 17682!
What's new in the Windows 10 SDK Preview Build 17704


Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news

Savia Lobo
11 Jul 2018
5 min read
In the real world, a person with multiple identities is said to have dissociative identity disorder (DID); but what about the virtual world? Social media sites such as Facebook and Twitter have as many, or even more, fake identity profiles as real ones. Twitter has set out on a mission to excise these fake and suspicious profiles from its platform. The company says it has removed 214% more accounts year-over-year for violating its spam policies.

Source: Twitter blog

Twitter initiated this drive to improve the authenticity of conversations on the platform. It also aims to ensure users have access to information that is credible, relevant, and of high quality. Following this, it started its battle against fake profiles and has been constantly suspending accounts that are inauthentic, spammy, or created via malicious automated bots. Instead of waiting for people to report these accounts, the company is proactively identifying problematic accounts and observing their behavior using machine learning tools. These tools identify spam or automated accounts and automatically take the necessary actions. Some of Twitter's plans to curb fake account creation include:

Enabling a read-only mode to reduce visibility of suspicious accounts

Twitter plans to monitor the behaviour of every profile and update its account metrics in near-real time. This will help in knowing the number of followers an account has, the number of likes or Retweets a Tweet receives, and so on. An account may even be converted into read-only mode if found behaving suspiciously, and removed from follower figures and engagement counts until it has passed a challenge, such as confirming a phone number. A warning is displayed against such read-only accounts to prevent new accounts from following them. Once the account passes the challenge, its footprint is restored.

Improving Twitter's sign-up process

Twitter will make it more difficult for spam accounts to register. New accounts will have to confirm either an email address or a phone number when they sign up to Twitter. Twitter also plans to work closely with its Trust and Safety Council and other expert NGOs to ensure this change does not affect people working in high-risk environments where anonymity is necessary. This process will be rolled out later this year.

Auditing existing accounts for signs of automated sign-up

Twitter is also conducting an audit to secure a number of legacy systems used to create accounts. This process will ensure that every account created on Twitter passes some simple, automatic security checks designed to prevent automated signups. The new protections Twitter has developed as a result of this audit have already helped prevent more than 50,000 spam sign-ups per day.

Malicious behavior detection systems being expanded

Twitter is also planning to automate some processes where suspicious account activity is detected by the behavior detection systems, such as exceptionally high-volume tweeting with the same hashtag, or mentioning the same @username without a reply from that account. These tests vary in intensity, and may simply ask the account owner to complete a reCAPTCHA or a password reset request. Complex cases are automatically passed to the team for review.

Twitter has fastened its seat belt and won't stop until it takes down all the fake accounts on its platform. While this move is bold and commendable for a social network, given the steep rise in fake news and other unsavory consequences of an ever-connected world, Twitter's investors did not take it well. The company's shares fell around 9.7% on Monday, after it announced that it is suspending more than 1 million accounts a day. As per a Twitter statement, account suspensions have doubled since October last year. Many speculate that this is a response to the congressional pressure the platform has been receiving regarding the alleged Russian fake accounts found on Twitter interfering with the U.S. elections held last year. The number of suspensions reached around 7 million in May and June, and a similar pace continues in July.

Though this move raises serious concerns around Twitter's falling user growth rate, it is an important step for the organization to improve the health of its platform. Chief Financial Officer Ned Segal tweeted, "most accounts we remove are not included in our reported metrics as they have not been active on the platform for 30 days or more, or we catch them at sign up and they are never counted."

I, for one, 'like' Twitter's decision. Minor inconveniences are a small price to pay for a more honest commune and information sharing.

Read more about this news in The Washington Post's original coverage.

Read next
Top 5 cybersecurity assessment tools for networking professionals
Top 5 Cybersecurity Myths Debunked
Top 10 IT certifications for cloud and networking professionals in 2018


Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?

Richard Gall
08 May 2018
3 min read
Nvidia has revealed that its Volta Tensor Core GPU has hit some significant milestones in performance. This is big news for the world of AI, as it raises the bar for the complexity and sophistication of the deep learning models that can be built. The Volta Tensor Core GPU has, according to the Nvidia team, "achieved record-setting ResNet-50 performance for a single chip and single server" thanks to the updates and changes they have made.

Here are the headline records and milestones the Volta Tensor Core GPU has hit, according to the team's intensive and rigorous testing:

- When training ResNet-50, one V100 Tensor Core GPU can achieve more than 1,075 images per second. That is apparently four times more than the Pascal GPU, the previous generation of Nvidia's GPU microarchitecture.
- Last year, one DGX-1 server supported by 8 Tensor Core V100s could achieve 4,200 images a second (still a hell of a lot). Now it can achieve 7,850.
- One AWS P3 cloud instance supported by 8 Tensor Core V100s can train ResNet-50 in less than 3 hours. That's three times faster than on a single TPU.

But what do these advances in performance mean in practice? And has Nvidia really managed to outperform its competitors?

Volta Tensor Core GPUs might not be as fast as you think

Nvidia is clearly pretty excited about what it has achieved. Certainly the power of the Volta Tensor Core GPUs is impressive and not to be sniffed at. But the website ExtremeTech poses a caveat. The piece argues that there are problems with using FLOPS (floating point operations per second) as a metric for performance. This is because the mathematical formula used to calculate FLOPS assumes a degree of consistency in how something is processed that may be misleading. One GPU, for example, might have higher potential FLOPS but not be running at capacity. It could, of course, be outperformed by an 'inferior' GPU.

Other studies (this one from RiseML) have indicated that Google's TPU actually performs better than Nvidia's offering (when using a different test). Admittedly the difference wasn't huge, but it matters when you consider that the TPU is significantly cheaper than the Volta.

Ultimately, the difference between the two is as much about what you want from your GPU or TPU. Google might give you a little more power, but there's much less flexibility than you get with the Volta. It will be interesting to see how the competition changes over the next few years. Based on current form, Nvidia and Google are going to be leading the way for some time, whoever has bragging rights about performance.

Read next
Distributed TensorFlow: Working with multiple GPUs and servers
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
OpenAI announces block sparse GPU kernels for accelerating neural networks

OpenAI’s AI robot hand learns to solve a Rubik’s Cube using reinforcement learning and Automatic Domain Randomization (ADR)

Savia Lobo
16 Oct 2019
5 min read
A team of OpenAI researchers shared their research on training neural networks to solve a Rubik's Cube with a human-like robot hand. The researchers trained the neural networks only in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). In their research paper, the team demonstrates how a system trained only in simulation can handle situations it never saw during training.

"Solving a Rubik's Cube one-handed is a challenging task even for humans, and it takes children several years to gain the dexterity required to master it. Our robot still hasn't perfected its technique though, as it solves the Rubik's Cube 60% of the time (and only 20% of the time for a maximally difficult scramble)," the researchers mention on their official blog. The neural networks were trained with RL algorithms, while Kociemba's algorithm was used for picking the solution steps.

Read also: DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help

What is Automatic Domain Randomization (ADR)?

Domain randomization enables networks trained solely in simulation to transfer to a real robot. However, it was a challenge for the researchers to recreate real-world physics in the simulation environment. The team realized that it was difficult to measure factors like friction, elasticity, and dynamics for complex objects like Rubik's Cubes or robotic hands, and domain randomization alone was not enough.

To overcome this, the OpenAI researchers developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. In ADR, the neural network first learns to solve the cube in a single, nonrandomized environment. As the neural network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically. This makes the task harder, since the neural network must now learn to generalize to more randomized environments. The network keeps learning until it again exceeds the performance threshold, at which point more randomization kicks in and the process repeats.

"The hypothesis behind ADR is that a memory-augmented network combined with a sufficiently randomized environment leads to emergent meta-learning, where the network implements a learning algorithm that allows itself to rapidly adapt its behavior to the environment it is deployed in," the researchers state.

Source: OpenAI.com

OpenAI's AI hand and the Giiker Cube

The researchers used the Shadow Dexterous E Series Hand (E3M5R) as a humanoid robot hand and the PhaseSpace motion capture system to track the Cartesian coordinates of all five fingertips. They also used RGB Basler cameras for vision-based pose estimation. Sensing the state of a Rubik's cube from vision alone is a challenging task. The team therefore used a "smart" Rubik's cube with built-in sensors and a Bluetooth module as a stepping stone. They also used a Giiker cube for some of the experiments to test the control policy without compounding errors made by the vision model's face angle predictions.

The hardware is based on the Xiaomi Giiker cube. This cube is equipped with a Bluetooth module and allows one to sense the state of the Rubik's cube. However, it is limited to a face angle resolution of 90 degrees, which is not sufficient for state tracking purposes on the robot setup. The team therefore replaced some of the components of the original Giiker cube with custom ones in order to achieve a tracking accuracy of approximately 5 degrees.

A few challenges faced

OpenAI's method currently solves the Rubik's Cube 20% of the time when applying a maximally difficult scramble that requires 26 face rotations. For simpler scrambles that require 15 rotations to undo, the success rate is 60%. The researchers consider an attempt to have failed when the Rubik's Cube is dropped or a timeout is reached. However, their network is capable of solving the Rubik's Cube from any initial condition, so if the cube is dropped, it is possible to put it back into the hand and continue solving.

The neural network is much more likely to fail during the first few face rotations and flips. The team says this happens because the neural network needs to balance solving the Rubik's Cube with adapting to the physical world during those early rotations and flips.

The team also implemented a few perturbations while training the AI robot hand, including:

- Resetting the hidden state: During a trial, the hidden state of the policy was reset. This leaves the environment dynamics unchanged but requires the policy to re-learn them, since its memory has been wiped.
- Re-sampling environment dynamics: This corresponds to an abrupt change of environment dynamics by resampling the parameters of all randomizations while leaving the simulation state and hidden state intact.
- Breaking a random joint: This corresponds to disabling a randomly sampled joint of the robot hand by preventing it from moving. This is a more nuanced experiment, since the overall environment dynamics are the same but the way in which the robot can interact with the environment has changed.

https://twitter.com/OpenAI/status/1184145789754335232

Here's the complete video of how the AI robot hand solved the Rubik's Cube single-handedly:

https://www.youtube.com/watch?time_continue=84&v=x4O8pojMF0w

To know more about this research in detail, you can read the research paper.

Read next
Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
Build your first Reinforcement learning agent in Keras [Tutorial]
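As a quick addendum to the ADR description above, here is a minimal Python sketch of the core loop: widen the randomization ranges each time the policy clears a performance threshold. The parameter class, the training and evaluation hooks, and the threshold value are hypothetical stand-ins for illustration, not OpenAI's actual implementation (which tracks per-parameter bounds and samples performance at those bounds):

    import random

    class ADRParameter:
        """One randomized simulation parameter whose range expands around a default value."""
        def __init__(self, default, step):
            self.low = self.high = default   # start with no randomization
            self.step = step
        def expand(self):
            self.low -= self.step            # widen the sampling range
            self.high += self.step
        def sample(self):
            return random.uniform(self.low, self.high)

    def adr_loop(train_episode, evaluate, params, threshold=0.8, iterations=1000):
        """Widen domain randomization whenever the policy clears the performance threshold."""
        for _ in range(iterations):
            env_cfg = {name: p.sample() for name, p in params.items()}
            train_episode(env_cfg)           # train the policy in this randomized environment
            if evaluate() >= threshold:      # policy is good enough at the current difficulty
                for p in params.values():
                    p.expand()               # make the simulated environments harder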


Mozilla partners with Khronos Group to bring glTF format to Blender

Sugandha Lahoti
22 Oct 2018
2 min read
Mozilla has announced a collaboration with the Khronos Group and the developers of existing open source Blender tools to bring a GL Transmission Format (glTF) import and export add-on to Blender. This release supports the upcoming release of Blender 2.8, which will feature physically-based rendering (PBR) and an improved user interface.

The glTF format is the foundation for interoperable 3D tools and services. Basically, it's the "JPEG of 3D". It is royalty free and coordinated by the Khronos consortium. With glTF support for Blender, mixed reality developers, designers, and creators anywhere in the world can create, edit, and remix glTF models without having to purchase specialized software.

How did the collaboration come to be?

Mozilla's Mixed Reality (WebXR) team conducted a joint ecosystem analysis focusing on content creators, their motivations, current pain points, and the expected impact of tools that could empower them. After the joint ecosystem analysis, Khronos, UX3D, and Mozilla decided to co-fund the development of the Blender importer and exporter tool. Airbus was also announced as one of the partners in the development of the Blender glTF tools; they are using the glTF format internally to visualize their Blender-created mock-ups in VR and AR. The partners came up with an ecosystem partners model to accelerate the advancement of the glTF standard and the WebXR ecosystem. The team hopes that "the Blender tools will unleash the creativity of the global community."

The glTF Blender import and export tool will be released and ready for beta testers around the time of the Blender Conference in late October. Mozilla will also be sharing a detailed analysis of the tool on Mozilla's Hacks blog in the coming weeks. Read more about the collaboration on Medium.

Read next
Building VR objects in React V2 2.0: Getting started with polygons in Blender
Is Mozilla the most progressive tech organization on the planet right now?
Mozilla announces $3.5 million award for 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates
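As an addendum, here is a rough sense of how an import/export add-on like this is typically driven from Blender's Python API. The operator names follow the Khronos glTF Blender add-on as an assumption and may differ by Blender build; the file paths are placeholders:

    import bpy  # Blender's Python API; run inside Blender with the glTF add-on enabled

    # Import a binary glTF asset into the current scene
    bpy.ops.import_scene.gltf(filepath="/path/to/model.glb")

    # ... edit or remix the imported objects here ...

    # Export the scene back out as binary glTF (export_format may also accept
    # 'GLTF_SEPARATE' or 'GLTF_EMBEDDED' depending on the add-on version)
    bpy.ops.export_scene.gltf(filepath="/path/to/remixed.glb", export_format="GLB")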


Python 3.8 alpha 2 is now available for testing

Natasha Mathur
27 Feb 2019
2 min read
After releasing Python 3.8.0 alpha 1 earlier this month, the Python team released the second of the four planned alpha releases of Python 3.8, Python 3.8.0a2, last week. Alpha releases make it easier for developers to test the current state of new features, bug fixes, and the release process. The Python team states that many new features for Python 3.8 are still being planned and written.

Here is a list of some of the major new features and changes so far; these features are currently raw and not meant for production use:

- PEP 572, assignment expressions, has been accepted. Users can now assign to variables within an expression using the notation NAME := expr. A new exception, TargetScopeError, has also been added, along with one change to the evaluation order.
- typed_ast, a fork of the ast module (in C) used by mypy, pytype, and other tools, has been merged back into CPython. typed_ast helps preserve certain comments.
- The multiprocessing module now lets users use shared memory segments to avoid pickling costs and the need for serialization between processes.

The next pre-release of Python 3.8 will be Python 3.8.0a3, scheduled for 25th March 2019. For more information, check out the official Python 3.8.0a2 announcement.

Read next
PyPy 7.0 released for Python 2.7, 3.5, and 3.6 alpha
5 blog posts that could make you a better Python programmer
Python Software Foundation and JetBrains' Python Developers Survey 2018
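As a quick illustration of the first and third items above, here is a short sketch (requires Python 3.8); the regex and the shared-memory payload are arbitrary examples, not taken from the announcement:

    import re
    from multiprocessing import shared_memory

    # PEP 572 assignment expression: bind the match object and test it in one place
    if (match := re.search(r"\d+", "order 42")) is not None:
        print(match.group())  # prints: 42

    # multiprocessing.shared_memory (new in 3.8): share raw bytes between processes
    # without pickling; another process would attach via SharedMemory(name=shm.name)
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"
    shm.close()
    shm.unlink()  # free the segment once no process needs it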

Babel 7 released with TypeScript and JSX fragment support

Sugandha Lahoti
28 Aug 2018
3 min read
Babel 7 has been released, three years after Babel 6. Babel is a JavaScript compiler, mainly used to convert ECMAScript 2015+ code into a backward-compatible version of JavaScript. Babel gives developers the freedom to use the latest JavaScript syntax without worrying about backward compatibility. It has been going strong in the JavaScript ecosystem: there are currently over 1.3 million dependent repos on GitHub, 17 million downloads on npm per month, and hundreds of users including many major frameworks (React, Vue, Ember, Polymer) and companies (Facebook, Netflix, Airbnb).

Major Breaking Changes

Most of these changes can be applied automatically with the new babel-upgrade tool, which currently updates dependencies in package.json and the .babelrc config.

- Dropped support for un-maintained Node versions: 0.10, 0.12, 4, 5.
- Introduced the @babel namespace to differentiate official packages, so babel-core becomes @babel/core.
- Deprecated the yearly presets (preset-es2015, etc).
- Dropped the "Stage" presets (@babel/preset-stage-0, etc) in favor of opting into individual proposals.
- Some packages have been renamed: any TC39 proposal plugin is now -proposal instead of -transform, so @babel/plugin-transform-class-properties becomes @babel/plugin-proposal-class-properties.
- Introduced a peerDependency on @babel/core for certain user-facing packages (e.g. babel-loader, @babel/cli, etc).

TypeScript and JSX fragment support

Babel 7 now ships with TypeScript support, so Babel users get the benefits of TypeScript like catching typos, error checking, and fast editing experiences. It enables JavaScript users to take advantage of gradual typing. The TypeScript preset is installed with npm install --save-dev @babel/preset-typescript.

The JSX fragment support in Babel 7 allows returning multiple children from a component's render method. Fragments look like empty JSX tags; they let you group a list of children without adding extra nodes to the DOM.

Speed improvements

Babel 7 includes changes to optimize the code as well as patches accepted from the V8 team. It is also part of the Web Tooling Benchmark alongside many other great JavaScript tools. There are changes to the loose option of some plugins. Moreover, transpiled ES6 classes are annotated with a /*#__PURE__*/ comment that gives a hint to minifiers like Uglify and babel-minify for dead code elimination.

What's Next

There are a lot of new features in the works: plugin ordering, better validation/errors, speed, re-thinking loose/spec options, caching, using Babel asynchronously, etc. You can check out the roadmap doc for a more detailed version. These are just a select few updates; the full list of changes is available on the Babel blog.

Read next
TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
The 5 hurdles to overcome in JavaScript
Tools in TypeScript


Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data: these data centres typically comprise tens to thousands of interconnected servers and consume a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030, data centres will use anywhere between 3% and 13% of global electricity.

At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The "Low Carbon Kubernetes Scheduler" can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity.

In their paper the researchers highlight, "All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy". Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy. Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment; Oracle Cloud, for example, is currently 100% carbon neutral in Europe, but not in other regions.

The scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. In order to demonstrate the scheduler's ability to perform global load balancing, the researchers also evaluated it against the metric of solar irradiation.

"While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres", the paper mentions. Demand-side management (DSM) refers to any initiatives that affect how and when electricity is being required by consumers.

Source: CEUR-WS.org

Existing schedulers work with consideration to singular data centres rather than taking a more global view. The Low Carbon Scheduler, on the other hand, considers carbon intensity across regions, since scaling a large number of containers up and down can be done in a matter of seconds.

Each national electric grid contains electricity generated from a variable mix of sources. The carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity input to the grid at regular intervals to the organizations operating the grid (for example the National Grid in the UK), in real time via APIs. These APIs typically allow the retrieval of production volumes and thus allow the carbon intensity to be calculated in real time. The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks the regions to identify the one with the lowest carbon intensity. (For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity, www.entsoe.eu, and for the UK this is the Balancing Mechanism Reporting Service, www.elexon.co.uk.)

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has, to a large extent, won the container orchestration war. It has support for extensibility and plugins, which makes it "most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction".

Kubernetes allows schedulers to run in parallel, which means the scheduler does not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes. It need only apply a scheduling layer to complement the existing capabilities proffered by Kubernetes. According to the researchers, "Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres".

The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding new scheduling rules to the scheduler source code and recompiling, implementing one's own scheduler process that runs instead of, or alongside, kube-scheduler, or implementing a scheduler extender.

Evaluating the performance of the low carbon Kubernetes scheduler

The researchers recorded the carbon intensities for the countries in which the major cloud providers operate data centres, between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. (A table in the paper lists the countries where the largest public cloud providers operate data centres, as of April 2019. Source: CEUR-WS.org)

They further ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the 30-minute intervals, Norway in 0.31%, France in 0.11% and Sweden in 0.01%. However, the list of the least carbon intense countries only contains countries in central European locations.

To demonstrate Kubernetes' ability to handle globally distributed deployments, the researchers chose to optimize placement towards regions with the greatest degree of solar irradiance, in a variant termed the Heliotropic Scheduler. This scheduler is termed 'heliotropic' in order to differentiate it from a 'follow-the-sun' application management policy that relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A 'heliotropic' policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant.

They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research. Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported projects.

The researchers say: "Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity".

While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof-of-concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.

Read next
Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
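As an addendum, the region-ranking step described above is easy to picture in a few lines of Python. This is a minimal sketch only, with a hypothetical region-to-API mapping and an assumed response shape; it is not the scheduler from the paper:

    import requests

    # Hypothetical mapping from region labels to real-time grid carbon intensity APIs;
    # the UK entry points at the public Carbon Intensity API, and the JSON shape
    # assumed below should be checked against each provider's documentation
    REGION_APIS = {
        "uk": "https://api.carbonintensity.org.uk/intensity",
        # "de": "...", "fr": "...", and so on for other grids
    }

    def current_intensity(region):
        """Return the latest carbon intensity (gCO2/kWh) reported for a region."""
        data = requests.get(REGION_APIS[region], timeout=5).json()
        return data["data"][0]["intensity"]["actual"]

    def pick_region(regions):
        """Rank candidate regions and return the one with the lowest carbon intensity."""
        return min(regions, key=current_intensity)

    # A custom scheduler or scheduler extender would then bind pending pods to nodes
    # labelled with the selected region (e.g. via a topology.kubernetes.io/region label).
    print(pick_region(list(REGION_APIS)))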