
Tech News


New memory usage optimizations implemented in V8 Lite can also benefit V8

Sugandha Lahoti
13 Sep 2019
4 min read
V8 Lite was released in late 2018, in V8 version 7.3, to dramatically reduce V8's memory usage. V8 is Google's open-source JavaScript and WebAssembly engine, written in C++. V8 Lite cuts typical web page heap size by 22% compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of the memory optimizations could be applied to regular V8, benefiting all of its users. They found that most of Lite mode's memory savings, with none of the performance impact, could be achieved by making V8 lazier. Three techniques brought the V8 Lite memory optimizations to regular V8: lazy feedback allocation, lazy source positions, and bytecode flushing.

Read also: LLVM WebAssembly backend will soon become Emscripten default backend, V8 announces

Lazy allocation of feedback vectors

The team now allocates feedback vectors lazily, once a function has executed a certain amount of bytecode (currently 1KB). Since most functions aren't executed very often, this avoids feedback vector allocation in most cases, while still quickly allocating vectors where needed to avoid performance regressions and allow code to be optimized. One hitch was that lazy allocation no longer allowed feedback vectors to form a tree. To address this, the team created a new ClosureFeedbackCellArray to maintain this tree, then swaps out a function's ClosureFeedbackCellArray for a full FeedbackVector when the function becomes hot. The team says that they "have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance."

Compiling bytecode without collecting source positions

Source position tables are generated when compiling bytecode from JavaScript, but this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this waste, bytecode is now compiled without collecting source positions; they are only collected when a stack trace is actually generated. The team has also fixed bytecode mismatches and added checks and a stress mode to ensure that eager and lazy compilation of a function always produce consistent outputs.

Flushing compiled bytecode from functions not executed recently

Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space, so compiled bytecode is now flushed from functions during garbage collection if they haven't been executed recently. The feedback vectors associated with flushed functions are flushed as well. To track the age of a function's bytecode, the age is incremented after every major garbage collection and reset to zero when the function is executed (a toy model of this policy appears below).

Additional memory optimizations

The size of FunctionTemplateInfo objects has been reduced: the object is split so that rarely used fields are stored in a side table that is only allocated on demand. TurboFan optimized code is also deoptimized differently, so that deopt points in optimized code load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more.
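To make the flushing policy concrete, here is a minimal sketch of it in Python. This is purely illustrative, not V8's actual C++: the two-GC threshold is a hypothetical stand-in for whatever heuristic V8 actually uses, and the Function class is our own toy model.

```python
# Conceptual sketch of the bytecode-flushing policy described above.
# Ages increment on every major GC and reset to zero whenever the
# function runs; bytecode that survives too many GCs unused is flushed.

FLUSH_AGE = 2  # hypothetical threshold, not V8's real heuristic

class Function:
    def __init__(self, name):
        self.name = name
        self.bytecode = f"<compiled {name}>"
        self.age = 0

    def execute(self):
        if self.bytecode is None:
            self.bytecode = f"<recompiled {self.name}>"  # lazy recompile
        self.age = 0  # executing resets the age

def major_gc(functions):
    for fn in functions:
        if fn.bytecode is not None:
            fn.age += 1
            if fn.age > FLUSH_AGE:
                fn.bytecode = None  # flush seldom-used bytecode

hot, cold = Function("hot"), Function("cold")
for _ in range(4):
    hot.execute()          # keeps its bytecode fresh
    major_gc([hot, cold])  # 'cold' ages out and is flushed
print(hot.bytecode, cold.bytecode)  # <compiled hot> None
```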
[Figure: Result comparison for V8 Lite and V8. Source: V8 blog]

People on Hacker News appreciated the work done by the V8 team. A comment reads, "Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October." Another says, "At the beginning of the article, they are talking about building a "v8 light" for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding."

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech

Google releases Flutter 1.9 at GDD (Google Developer Days) conference
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more


Has IBM edged past Google in the battle for Quantum Supremacy?

Abhishek Jha
14 Nov 2017
3 min read
Last month, when researchers at Google unveiled a blueprint for quantum supremacy, little did they know that rival IBM was about to snatch the pole position. In what could be the largest and most sophisticated quantum computer built to date, IBM has announced the development of a quantum computer capable of handling 50 qubits (quantum bits). Big Blue also announced another 20-qubit processor that will be made available through the IBM Q cloud by the end of the year.

"Our 20-qubit machine has double the coherence time, at an average of 90 microseconds, compared to previous generations of quantum processors with an average of 50 microseconds. It is also designed to scale; the 50-qubit prototype has similar performance," Dario Gil, who leads IBM's quantum computing and artificial intelligence research division, said in his blog post.

IBM's progress in this space has been truly rapid. After launching a 5-qubit system in May 2016, the company followed with a 15-qubit machine this year, then upgraded the IBM Q experience to 20 qubits, with 50 qubits next in line. That is quite a leap in 18 months.

As a technology, quantum computing is a rather difficult area to understand, because information is processed differently. Unlike normal computers, which interpret either a 0 or a 1, quantum computers can exist in a superposition of many states at once (formalized in the note at the end of this article), leading to all kinds of new programming possibilities. Add to this the coherence problem, which makes it very difficult for programmers to build a quantum algorithm.

While the company did not divulge technical details about how its engineers simultaneously expanded the number of qubits and increased coherence times, it did mention that the improvements were due to better "superconducting qubit design, connectivity and packaging," and that the 50-qubit prototype is a "natural extension" of the 20-qubit technology, with both exhibiting "similar performance metrics."

The major goal, though, is to create a fault-tolerant universal system capable of correcting errors automatically while maintaining high coherence. "The holy grail is fault-tolerant universal quantum computing. Today, we are creating approximate universal, meaning it can perform arbitrary operations and programs, but it's approximating so that I have to live with errors and a limited window of time to perform the operations," Gil said.

The good news is that an ecosystem is building up. Through the IBM Q experience, more than 60,000 users have run over 1.7 million quantum experiments and generated over 35 third-party research publications. That the beta testers included 1,500 universities, 300 high schools, and 300 private-sector participants means quantum computing is closer to real-world implementation, in areas like medicine, drug discovery, and materials science. "Quantum computing will open up new doors in the fields of chemistry, optimisation, and machine learning in the coming years," Gil added. "We should savor this period in the history of quantum information technology, in which we are truly in the process of rebooting computing."

All eyes are now on Google, IBM's nearest rival in quantum computing at this stage. While IBM's 50-qubit processor has taken some of the shine off Google's soon-to-be-announced 49-qubit system, expect more surprises in the offing, as Google has so far managed to keep its entire quantum computing machinery behind closed doors.
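A note on what "existing in many states at once" means formally. This is the standard textbook formulation, not something from IBM's announcement: an n-qubit register holds a superposition over all 2^n classical bit strings, which is why each added qubit doubles the state space a machine like the 50-qubit prototype must manipulate.

```latex
% State of an n-qubit register: a normalized superposition over all
% 2^n basis states. For n = 50 that is roughly 10^15 complex amplitudes.
\[
  \lvert \psi \rangle = \sum_{i=0}^{2^{n}-1} c_i \,\lvert i \rangle,
  \qquad \sum_{i} \lvert c_i \rvert^{2} = 1,
  \qquad 2^{50} \approx 1.1 \times 10^{15}.
\]
```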


What we learned at the ICRA 2018 conference for robotics & automation

Savia Lobo
25 May 2018
5 min read
This year's ICRA 2018 conference featured interactive sessions, keynotes, exhibitions, workshops, and much more. Below are some of the most interesting keynotes on machine learning, robotics, and more.

Note: The International Conference on Robotics and Automation (ICRA) is an international forum for robotics researchers to present their work, and a flagship conference of the IEEE Robotics and Automation Society. Held at the Brisbane Convention and Exhibition Centre from the 21st to the 25th of May 2018, it brought together experts at the frontier of science and technology in robotics and automation.

Implementing machine learning for safe, high-performance control of mobile robots

Traditional algorithms are designed based on a-priori knowledge of the system and its environment, including system dynamics and an environment map. Such an approach allows a system to work successfully in a predictable environment; however, if the system is unaware of the environment's details, it can suffer severe performance losses. To build systems that work efficiently in unknown and uncertain situations, the speaker, Prof. Angela Schoellig, introduced systems that are capable of learning during operation and adapting their behaviour accordingly. Angela presented several approaches for online, data-efficient, and safety-guaranteed learning for robot control. In these approaches, the algorithms can:

• Leverage insights from control theory
• Make use of neural networks and Gaussian processes, which are state-of-the-art probabilistic learning methods
• Take into account any prior knowledge about system dynamics

The speaker also demonstrated how such novel robot control and learning algorithms can be safe and effective in real-world scenarios. You can check out Angela Schoellig's video on how she demonstrated these algorithms on self-flying and self-driving vehicles and mobile manipulators.

Meta-learning and the art of Learning to Learn

In his talk on meta-learning (learning to learn), Pieter Abbeel explained how reinforcement learning and imitation learning have been successful in domains such as Atari, Go, and so on. You can also check out 6 Key Challenges in Deep Learning for Robotics, presented by Pieter Abbeel at the NIPS 2017 conference. Humans have an innate ability to learn from past experience and can pick up new skills far more quickly than machines. Pieter described some of his recent experiments on meta-learning, in which agents learn the imitation or reinforcement learning algorithms themselves and, using those algorithms as a base, can learn from past instances just like humans. Thanks to meta-learning, machines can now acquire a skill from a single demonstration or a few trials. He noted that meta-learning applies to standard few-shot classification benchmarks such as Omniglot and mini-ImageNet. To learn about meta-learning from the ground up, you can check out our article, What is Meta Learning?. You can also read our coverage of Pieter Abbeel's accepted paper at ICLR 2018.

Robo-peers: Robust Interaction in Human-Robot Teams

In this keynote, Richard Vaughan explained how robots should behave in natural surroundings, i.e. among humans, animals, and peer robots. His team has worked on behaviour strategies for mobile robots.
These strategies give the robots sensing capabilities and allow them to behave in sophisticated, human-like ways, interacting robustly with the world and with the other agents around them. Richard further described a series of vision-mediated human-robot interactions conducted within groups of driving and flying robots. The mechanisms used were simple but highly effective.

From Building Robots to Bridging the Gap between Robotics and AI

Robots possess smart, reactive, and user-centered programming systems through which they can physically interact with the world. Today, even a layman can use cutting-edge robotics technology for complex tasks such as force-sensitive assembly and safe physical human-robot interaction. Franka Emika's Panda, the first commercial robot system, is an example of a robot with such abilities. In this talk, Sami Haddadin proposed bridging the gap between model-based nonlinear control algorithms and data-driven machine learning via a holistic approach. He explained that neither pure control-based nor end-to-end learning algorithms come close to human-level general-purpose machine intelligence. Two recent results reinforce this statement:

i) Learning exact articulated robot dynamics using the concept of first-order principle networks.
ii) Learning human-like manipulation skills by combining adaptive impedance control and meta-learning.

Panda was, right from the beginning, released with consistent research interfaces and modules to enable the robotics and AI community to build on the developments in the field so far and to push the boundaries in manipulation, interaction, and general AI-enhanced robotics. Sami believes this step will enable the community to address the immense challenges in robotics and AI research.

Socially Assistive Robots: The Next-Gen Healthcare Helpers

Goldie Nejat voiced her concern that the world's elderly population is rising, and with it dementia, a disease with hardly any cure. Robots, she argued, can become a unique strategic technology here, taking a crucial place in society by helping the aged population with day-to-day activities. In this talk she presented intelligent assistive robots that can improve the lives of older people, including those suffering from dementia. She discussed how the socially assistive robots Brian, Casper, and Tangy have been designed to autonomously provide cognitive and social interventions, help with activities of daily living, and lead group recreational activities in human-centered environments. These robots can serve individuals as well as groups of users, and can personalize their interactions to the needs of each user. They can also be integrated into the everyday lives of people outside the aged bracket.

Read more about the other keynotes and highlights on ICRA's official website.

How to build an Arduino based 'follow me' drone
AI powered Robotics: Autonomous machines in the making
Tips and tricks for troubleshooting and flying drones safely


AmoebaNets: Google’s new evolutionary AutoML

Savia Lobo
16 Mar 2018
2 min read
Detecting objects within an image requires artificial neural networks that experts carefully design over years of difficult research. Each network then addresses one specific task, such as finding what's in a photograph, calling a genetic variant, or helping diagnose a disease. Google believes one approach to generating these ANN architectures is the use of evolutionary algorithms. So Google has introduced AmoebaNets, an evolutionary approach to AutoML that achieves state-of-the-art results on datasets such as ImageNet and CIFAR-10.

Google offers AmoebaNets as an answer to questions such as: by using computational resources to programmatically evolve image classifiers at unprecedented scale, can one achieve solutions with minimal expert participation? How good can today's artificially evolved neural networks be? These questions were addressed in two papers:

• "Large-Scale Evolution of Image Classifiers," presented at ICML 2017. In this paper, the authors set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to "sit back" and let evolution at scale do the work of constructing the architecture.
• "Regularized Evolution for Image Classifier Architecture Search" (2018). This paper scaled up the computation using Google's new TPUv2 chips. The combination of modern hardware, expert knowledge, and evolution produced state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.

One important feature of the evolutionary algorithm used in the second paper is a form of regularization: instead of letting the worst neural networks die, the algorithm removes the oldest ones, regardless of how good they are. This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end. Since weight inheritance is not allowed, all networks must train from scratch; this form of regularization therefore selects for networks that remain good when retrained.

These models achieve state-of-the-art results on CIFAR-10 (mean test error = 2.13%), mobile-size ImageNet (top-1 accuracy = 75.1% with 5.1M parameters), and ImageNet (top-1 accuracy = 83.1%).

Read more about AmoebaNets on the Google Research Blog.
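The aging selection just described fits in a few lines. The sketch below is a toy Python illustration with a stand-in fitness function instead of actually training networks; the population size, tournament size, and genome encoding are arbitrary choices for demonstration, not the paper's settings.

```python
# Minimal sketch of aging ("regularized") evolution: each generation,
# a random tournament picks a parent, its mutated child joins the
# population, and the OLDEST member dies -- never the worst one.
import random
from collections import deque

def fitness(genome):              # toy stand-in for "train and evaluate"
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    child = list(genome)
    child[random.randrange(len(child))] = random.random()  # tweak one gene
    return child

POP, SAMPLE, GENERATIONS = 20, 5, 200
population = deque([[random.random() for _ in range(4)] for _ in range(POP)])

for _ in range(GENERATIONS):
    tournament = random.sample(list(population), SAMPLE)
    parent = max(tournament, key=fitness)
    population.append(mutate(parent))   # child joins on the right...
    population.popleft()                # ...oldest dies on the left

print(max(map(fitness, population)))    # best fitness found
```

Because survival depends on recency rather than score, a lineage only persists if its descendants keep winning tournaments after retraining from scratch, which is exactly the selection pressure the paper credits for more robust architectures.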


Python 3.8 beta 1 is now ready for you to test

Bhagyashree R
11 Jun 2019
2 min read
Last week, the team behind Python announced the release of Python 3.8.0b1, the first of four planned beta previews of Python 3.8. This release marks the beginning of the beta phase, during which you can test new features and make your applications ready for the new release.

https://twitter.com/ThePSF/status/1137797764828553222

These are some of the features you will see in the upcoming Python 3.8 version:

Assignment expressions

Assignment expressions were proposed in PEP 572, which was accepted after an extensive discussion among Python developers. This feature introduces a new operator (:=) with which you can assign variables within an expression.

Positional-only arguments

In Python, you can pass an argument to a function by position, keyword, or both. API designers may sometimes want to restrict passing arguments by position only. To make this easy to implement, Python 3.8 comes with a new marker (/) indicating that the arguments to its left are positional-only. This is similar to *, which indicates that the arguments to its right are keyword-only. A short example of both features appears below.

Python Initialization Configuration

Python is highly configurable, but the configuration options are scattered all around the code. This version introduces new functions and structures in the Python Initialization C API to give Python developers a "straightforward and reliable way" to configure Python.

The Vectorcall protocol for CPython

The calling convention considerably impacts the flexibility and performance of your code. To optimize the calling of objects, this release introduces the Vectorcall protocol, a calling convention already used internally for Python and built-in functions.

Runtime audit hooks

Python 3.8 comes with two new APIs, Audit Hook and Verified Open Hook, to give you insight into a running Python application. These make it easier for both application developers and system administrators to integrate Python into their existing monitoring systems.

As this is a beta release, developers should refrain from using it in production environments. The next beta release is currently planned for July 1st. To know more about Python 3.8.0b1, check out the official announcement.

Which Python framework is best for building RESTful APIs? Django or Flask?
PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure
Python 3.8 alpha 2 is now available for testing
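The assignment-expression and positional-only features described above can be tried directly in the beta. A short example (the function and variable names are ours, purely for illustration):

```python
# Runs on Python 3.8+.

# Assignment expressions (PEP 572): bind and test in one expression.
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    print(f"list is long ({n} elements)")

# Positional-only parameters (PEP 570): arguments left of '/' cannot
# be passed by keyword, mirroring how '*' forces keyword-only.
def clamp(value, /, *, low=0.0, high=1.0):
    return max(low, min(high, value))

print(clamp(1.7, high=1.5))   # OK: value positional, high keyword
# clamp(value=1.7)            # TypeError: 'value' is positional-only
```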


GitLab, the DevOps platform for coding, more than doubles its valuation to $2.75 billion ahead of its 2020 IPO

Fatema Patrawala
19 Sep 2019
4 min read
Yesterday, GitLab, a San Francisco based start-up, raised $268 million in a Series E funding round valuing the company at $2.75 billion, more than double its last valuation. In its $100 million Series D round, the company was valued at $1.1 billion; with this announcement, that valuation has more than doubled in less than a year.

GitLab provides a DevOps platform for developing and collaborating on code, offering a single application for companies to draft, develop, and release code. The product is used by companies like Delta Air Lines Inc., Ticketmaster Entertainment Inc., and Goldman Sachs Group Inc.

The Series E round was led by investors including Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp., and Two Sigma Investments.

GitLab plans to go public in November 2020

According to Forbes, GitLab has already set November 18, 2020 as the date for going public, and the company seems primed and ready for the eventual IPO. The $268 million gives the company considerable runway ahead of the planned event, as well as the flexibility to choose how to take the company public.

"One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we're not going to raise any additional money, and we wanted to make sure that this is enough in that case," Sid Sijbrandij, GitLab co-founder and CEO, explained in an interview with TechCrunch.

He further adds that the new funds will be used to add monitoring and security to GitLab's offering, and to grow the company from its current 400 employees to more than 1,000 this year. GitLab is able to add workers at a rapid rate because it has an all-remote workforce.

GitLab wants to be independent and chooses transparency for its community

Sijbrandij says the company made a deliberate decision to be transparent early on. For a company built on an open-source project, the transition to a commercial company is sometimes tricky, and it can hurt the community and the number of contributions. Transparency was a way to combat that, and it seems to be working: he reports that the community contributes 200 improvements to the GitLab open-source products every month, double the amount of just a year ago, so the community is still highly active.

He did not ignore the fact that Microsoft acquired GitHub, a similar company that helps developers manage and distribute code in a DevOps environment, for $7.5 billion last year. Despite that eye-popping number, he claims his goal is to remain an independent company and take GitLab through to the next phase.

"Our ambition is to stay an independent company. And that's why we put out the ambition early to become a listed company. That's not totally in our control as the majority of the company is owned by investors, but as long as we're more positive about the future than the people around us, I think we can we have a shot at not getting acquired," he said.

Community is happy with GitLab's products and services

Overall, the community is happy with this news and with GitLab's products and services. One of the comments on Hacker News reads, "Congrats, GitLab team. Way to build an impressive business.
When anybody tells you there are rules to venture capital — like it's impossible to take on massive incumbents that have network effects — ignore them. The GitLab team is doing something phenomenal here. Enjoy your success! You've earned it."

Another user comments, "We've been using Gitlab for 4 years now. What got us initially was the free private repos before github had that. We are now a paying customer. Their integrated CICD is amazing. It works perfectly for all our needs and integrates really easily with AWS and GCP. Also their customer service is really damn good. If I ever have an issue, it's dealt with so fast and with so much detail. Honestly one of the best customer service I've experienced. Their product is feature rich, priced right and is easy. I'm amazed at how the operate. Kudos to the team"

Other interesting news in programming

Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements
NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!

Mobile-aware phishing campaign targets UNICEF, the UN, and many other humanitarian organizations

Savia Lobo
30 Oct 2019
2 min read
A few days ago, researchers from Lookout Phishing AI reported a mobile-aware phishing campaign targeting non-governmental organizations around the world, including UNICEF, a variety of United Nations humanitarian organizations, the Red Cross, and the UN World Food Programme, among others. The company has also contacted law enforcement and the targeted organizations.

"The campaign is using landing pages signed by SSL certificates, to create legitimate-looking Microsoft Office 365 login pages," Threatpost reports.

According to the Lookout Phishing AI researchers, "The infrastructure connected to this attack has been live since March 2019. Two domains have been hosting phishing content, session-services[.]com and service-ssl-check[.]com, which resolved to two IPs over the course of this campaign: 111.90.142.105 and 111.90.142.91. The associated IP network block and ASN (Autonomous System Number) is understood by Lookout to be of low reputation and is known to have hosted malware in the past."

The researchers also observed some very interesting techniques in this campaign. The phishing pages quickly detect mobile devices and log keystrokes directly as they are entered in the password field, while JavaScript logic on the pages delivers device-specific content based on the device the victim uses. "Mobile web browsers also unintentionally help obfuscate phishing URLs by truncating them, making it harder for the victims to discover the deception," Jeremy Richards, Principal Security Researcher at Lookout Phishing AI, wrote in his blog post.

Further, the SSL certificates used by the phishing infrastructure had two main validity ranges: May 5, 2019 to August 3, 2019, and June 5, 2019 to September 3, 2019. The Lookout researchers said that six certificates are currently still valid, and they suspect these attacks may still be ongoing.

Alexander García-Tobar, CEO and co-founder of Valimail, told Threatpost via email, "By using deviously coded phishing sites, hackers are attempting to steal login credentials and ultimately seek monetary gain or insider information."

To know more about this news in detail, read Lookout's official blog post.

UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses
A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
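Defenders can act on the indicators quoted above directly. A minimal sketch of an IOC check in Python: the helper function is our own illustration (not a Lookout API), and the bracketed "[.]" spellings from the report are simply defanged dots, normalized here.

```python
# IOC check built from the indicators quoted in the report above.
IOC_DOMAINS = {"session-services.com", "service-ssl-check.com"}
IOC_IPS = {"111.90.142.105", "111.90.142.91"}

def is_suspicious(hostname: str, resolved_ip: str) -> bool:
    """Flag traffic whose hostname or resolved IP matches a known IOC."""
    return hostname.lower().rstrip(".") in IOC_DOMAINS or resolved_ip in IOC_IPS

print(is_suspicious("session-services.com", "93.184.216.34"))  # True
print(is_suspicious("example.org", "111.90.142.91"))            # True
print(is_suspicious("example.org", "93.184.216.34"))            # False
```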


Build custom maps the easy way with multiple map layers in Tableau

Anonymous
22 Dec 2020
5 min read
Ashwin Kumar, Senior Product Manager | December 22, 2020

The Tableau 2020.4 release comes fully loaded with tons of great features, including several key updates to boost your geospatial analysis. In particular, the new multiple marks layers feature lets you add an unlimited number of layers to the map. This means you can visualize multiple sets of location data in the context of one another, with no need for external tools to build custom background maps.

Drag and drop map layers—yes, it's just that easy

Spend less time preparing spatial datasets and more time analyzing your data with drag-and-drop map layers across Tableau Online, Server, and Desktop in Tableau 2020.4. Getting started is easy! Once you've connected to a data source that contains location data and created a map, simply drag any geographic field onto the Add a Marks Layers drop target, and Tableau will instantly draw the new layer of marks on the map. For each layer that you create, Tableau provides a new marks card, so you can encode each layer's data by size, shape, and color. What's more, you can control the formatting of each layer independently, giving you maximum flexibility over the appearance of your map.

But that's not all. While drawing an unlimited number of customized map layers is a powerful capability in its own right, the multiple map layers feature gives you even more tools to supercharge your analytics. First up: the ability to toggle the visibility of each layer. With this feature, you can show or hide each layer at will, visualizing only the layers relevant to the question at hand. You can use this feature by hovering over each layer's name in the marks card, revealing the interactive eye icon.

Sometimes, you may want only some of your layers to be interactive, with the remaining layers simply part of the background. The multiple map layers feature gives you exactly this type of control. Hovering over each layer's name in the marks card reveals a dropdown arrow; clicking it, you can select the first option in the context menu, Disable Selection. With this option, you can customize the end-user experience, ensuring that background contextual layers do not produce tooltips or other interactive elements when not required.

Finally, you also have fine-grained control over the drawing order, or z-order, of layers on your map. With this capability, you can ensure that background layers that might obscure other map features are drawn on the bottom. To adjust the z-order of layers on the map, you can either drag to reorder your layers in the marks card, or use the Move Up and Move Down options in each layer's dropdown context menu.

Drawing an unlimited number of map layers is critical to helping you build authoritative, context-appropriate maps for your organization, across a wide variety of use cases, industries, and businesses. Check out some examples below:

• A national coffee chain might want to visualize stores, competitor locations, and win/loss metrics by sales area to understand competitive pressures.
• In the oil and gas industry, visualizing drilling rigs, block leases, and nautical boundaries could help devise exploration and investment strategies.
• A disaster relief NGO may decide to map out hurricane paths, at-risk hospitals, and first-responder bases to deploy rescue teams to those in need.

Essentially, you can use this feature to build rich context into your maps and support easy analysis and exploration for any scenario!

Plus, spatial updates across the stack: Tableau Prep, Amazon Redshift, and offline maps

The 2020.4 release also includes other maps features to help you take location intelligence to the next level. This release adds support for spatial data in Tableau Prep, so you can clean and transform your location data without having to use a third-party tool. It also adds support for spatial data from Amazon Redshift databases, and offline maps for Tableau Server, so you can use Tableau maps in any environment and connect to your location data directly from more data sources.

Want to know what else we released with Tableau 2020.4? Learn about Tableau Prep in the browser, web authoring and predictive modeling enhancements, and more in our launch announcement.

We'd love your feedback

Can you think of additional features you need to take your mapping in Tableau to greater heights? We would love to hear from you! Submit your request on the Tableau Ideas Forum today. Every idea is considered by our Product Management team, and we value your input in making decisions about what to build next.

Want a sneak peek at the latest and greatest in Tableau? Visit our Coming Soon page to learn more about what we're working on next. Happy mapping!


Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more

Amrata Joshi
23 Apr 2019
3 min read
Today, the team behind Node.js announced the release of Node.js 12, with new updates and features including faster startup, better default heap limits, updates to V8, and much more. This release replaces version 11 in the current release line, and the Node.js 12 line will move into Long Term Support (LTS) in October 2019.

What's new in Node.js 12?

V8 JavaScript engine v7.4

The new version of the V8 JavaScript engine brings improved performance, language features, and runtime behavior. The team has added a new feature called zero-cost async stack traces, which enriches the error.stack property with asynchronous call frames. V8 v7.4 also makes calls with argument-count mismatches faster, and JavaScript parsing has gotten faster as well.

TLS 1.3

This version of Node.js ships with TLS 1.3 (Transport Layer Security) support, which is now the default maximum protocol. The release also supports CLI/NODE_OPTIONS switches to disable it if required.

Configured default heap limits

As of Node.js 12, the JavaScript heap size is configured based on available memory, instead of using the defaults V8 sets for use in browsers. With this configuration, Node.js won't try to use more memory than is available, and it terminates when its memory is exhausted. This is especially useful when processing large data sets.

Default HTTP parser switched to llhttp

This release switches the default HTTP parser to llhttp, which makes it easier to test and compare the new llhttp-based implementation.

Making native modules easier

Node.js 12 makes building and supporting native modules easier, including better support for native modules in combination with worker threads. Users can now use their own threads for native asynchronous functions.

Worker threads

As of this release, worker threads no longer require a flag, so additional threads can be leveraged whenever required for better results.

Diagnostic reports

Node.js 12 comes with a new experimental feature called diagnostic report, which allows users to generate a report on demand. The report contains information useful for diagnosing problems in production, including crashes, high CPU usage, slow performance, memory leaks, unexpected errors, and more.

Google is planning to bring Node.js support to Fuchsia
Node.js and JS Foundations are now merged into the OpenJS Foundation
7 Best Practices for Logging in Node.js


Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack

Vincy Davis
13 Sep 2019
5 min read
Two days ago, Intel disclosed a vulnerability in the line of microprocessors it has shipped since 2011 with Data Direct I/O (DDIO) and Remote Direct Memory Access (RDMA) technologies. The vulnerability was found by a group of researchers from the Vrije Universiteit Amsterdam and ETH Zurich, who present a detailed security analysis of the attack in their paper, NetCAT: Practical Cache Attacks from the Network. The analysis was carried out by reverse engineering the behavior of Data-Direct I/O (DDIO), also called Direct Cache Access (DCA), on recent Intel processors.

The security analysis resulted in the discovery of the first network-based PRIME+PROBE cache attack, named NetCAT. The NetCAT attack works in both cooperative and general adversarial settings. In the cooperative setting, an attacker can build a covert channel between a network client and a sandboxed server process without network access. In the general adversarial setting, an attacker can disclose sensitive information through network timing.

On June 23, 2019, the researchers coordinated the disclosure process with Intel and NCSC (the Dutch national CERT). Intel acknowledged the vulnerability with a bounty and assigned CVE-2019-11184 to track the issue.

What is a NetCAT attack?

The threat model implemented in the paper targets victim servers with DDIO-equipped Intel processors; DDIO has been enabled by default in virtually all Intel server-grade processors since 2012. The cache attack is conducted over a network against a target server, such that secret information can be leaked from the connection between the server and a different client.

The researchers say there are many potential ways to exploit DDIO. The paper states, "For instance, an attacker with physical access to the victim machine could install a malicious PCIe device to directly access the LLC's DDIO region. Our aim in this paper is to show that a similar attack is feasible even for an attacker with only remote (unprivileged) network access to the victim machine, without the need for any malicious PCIe devices."

The threat model uses the RDMA support in modern NICs to bypass the operating system at the data plane, which gives remote machines direct read and write access to a previously specified memory region. The figure below illustrates the model's target topology, which is common in data centers.

[Figure: target topology of the threat model. Source: NetCAT: Practical Cache Attacks from the Network]

To launch the remote PRIME+PROBE attack, the researchers used the remote read/write primitives provided by the PCIe device's DDIO capabilities to remotely measure cache activity. The paper explains two cooperative DDIO-based attacks: in the first scenario, a covert channel is built between two clients that are not on the same network; in the second, a covert channel is built between a client and a sandboxed process on a server. In both scenarios, the transmission rounds are loosely synchronized with a predefined time window.

An attacker controlling a machine with an RDMA link to an application server can use remote PRIME+PROBE to detect network activity in the LLC, as shown in the figure above. The victim then opens an interactive SSH session to the application server from a different machine. In an interactive SSH session, each keystroke is sent in a separate packet. The attacker is able to recover the inter-packet times from the cache using the ring buffer location and map them to keystrokes.
The security analysis successfully explored the implications of the NetCAT attack, proving that the DDIO feature on modern Intel CPUs does expose the system to cache attacks over the network. The researchers write, "We have merely scratched the surface of possibilities for network-based cache attacks, and we expect similar attacks based on NetCAT in the future. We hope that our efforts caution processor vendors against exposing microarchitectural elements to peripherals without a thorough security design to prevent abuse."

A video demonstrating the NetCAT attack is available here: https://www.youtube.com/watch?v=QXut1XBymAk

The paper discusses various other NetCAT-like attacks, such as PCIe-to-CPU attacks, which may generalize beyond the given proof-of-concept scenarios. The researchers also explain possible mitigations against these last-level cache side-channel attacks from PCIe devices, including disabling DDIO, LLC partitioning, and hardening DDIO itself.

With repeated vulnerabilities being found in Intel processors, many are beginning to distrust Intel, and some are even considering moving to alternatives. A Redditor comments, "Another one? Come on man, my i7 2600k already works like crap, and now another vulnerability that surely will affect performance via patches appeared? It is settled, next month I'm ditching Intel." Another comment reads, "Soooo the moral of the story is, never buy Intel chips."

For more information about the attack, head over to the NetCAT: Practical Cache Attacks from the Network paper.

Other Intel news

Intel discloses four new vulnerabilities labeled MDS attacks affecting Intel chips
Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
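The final step of that keystroke attack, mapping recovered inter-packet times to likely key sequences, can be illustrated in a few lines. The sketch below is purely conceptual: the arrival times and per-bigram timing means are made-up numbers standing in for a trained model, and the real attack recovers these timings from LLC activity via RDMA, not from Python.

```python
# Conceptual illustration of keystroke recovery from packet timing.
# ARRIVALS: when each SSH packet (one per keystroke) was observed.
ARRIVALS = [0.000, 0.210, 0.405, 0.735]  # seconds, illustrative values

# Hypothetical trained model: mean inter-key gap for each letter pair.
BIGRAM_GAPS = {"th": 0.21, "he": 0.19, "er": 0.33}

gaps = [b - a for a, b in zip(ARRIVALS, ARRIVALS[1:])]
for gap in gaps:
    # guess the bigram whose trained mean gap is closest to what we saw
    guess = min(BIGRAM_GAPS, key=lambda k: abs(BIGRAM_GAPS[k] - gap))
    print(f"gap {gap:.3f}s -> likely bigram '{guess}'")
```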

Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices

Sugandha Lahoti
20 Jul 2018
4 min read
Following the release of Unreal Engine 4.19 this April, Epic Games has launched Unreal Engine 4.20. This major update focuses on enhancing scalability and creativity, helping developers create more realistic characters and immersive environments for games, film, TV, and VR/AR devices.

Multiple optimizations for mobile game development

Epic Games brought over 100 optimizations created for Fortnite on iOS and Android to Unreal Engine 4.20. Hardware occlusion queries are now supported on high-end iOS and Android devices that support ES 3.1 or Vulkan using the GPU. Developers can also iterate and debug on Android without having to repackage the UE4 project, and they now have unlimited Landscape Material layers on mobile devices.

Mixed Reality Capture

Unreal Engine 4.20 provides new Mixed Reality Capture functionality, which makes it easy to composite real players into a virtual space for mixed reality applications. It has three components: video input, calibration, and in-game compositing. You can use supported webcams and HDMI capture devices to pull real-world green-screened video into the Unreal Engine from a variety of sources. The setup and calibration are done through a standalone calibration tool that can be reused across Unreal Engine 4 titles.

Niagara visual effects editor

The Niagara visual effects editor is available as an early access plugin. While the Niagara editor builds on the same particle manipulation methods as Cascade (UE4's previous VFX editor), unlike Cascade, Niagara is fully modular. UE 4.20 adds multiple improvements to Niagara effect design and creation. All of Niagara's modules have been updated to support behaviors commonly used in building effects for games, and new UI features have been added to the Niagara stack that mimic the options developers have with UProperties in C++. Niagara now supports GPU simulation on DX11, PS4, Xbox One, OpenGL (ES3.1), and Metal platforms, and CPU simulation on PC, PS4, Xbox One, OpenGL (ES3.1), and Metal. Niagara was showcased at GDC 2018; see the presentation Programmable VFX with Unreal Engine's Niagara for a complete overview.

Cinematic Depth of Field

Unreal Engine 4.20 also adds Cinematic Depth of Field, with which developers can achieve cinema-quality camera effects in real time. Cinematic DoF provides a cleaner depth-of-field effect with a cinematic appearance through a procedural bokeh simulation. It also features dynamic resolution stability, supports the alpha channel, and includes settings to scale it down for console projects. For additional information, see the Depth of Field documentation.

Proxy LOD improvements

The Proxy LOD tool is now production-ready. This tool improves performance by reducing the rendering cost of poly count, draw calls, and material complexity, yielding significant gains when developing for mobile and console platforms. The production-ready version has several enhancements over the experimental version found in UE 4.19:

• Improved normal control: the user may now supply the hard-edge cutoff angle and the method used in computing the vertex normal.
• Gap filling: the proxy system automatically discards any inaccessible structures. Gap filling results in fewer total triangles and better use of the limited texture resource.

Magic Leap One Early Access support

With Unreal Engine 4.20, game developers can now build for Magic Leap One.
Unreal Engine 4's support for Magic Leap One uses built-in UE4 frameworks such as camera control, world meshing, motion controllers, and forward and deferred rendering. For developers with access to hardware, Unreal Engine 4.20 can deploy and run on the device, in addition to supporting zero-iteration workflows through Play In Editor.

Read more: The hype behind Magic Leap's new augmented reality headsets; Magic Leap's first AR headset, powered by Nvidia Tegra X2, is coming this summer

Apple ARKit 2.0 and Google ARCore 1.2 support

Unreal Engine 4.20 adds support for Apple's ARKit 2.0, with better tracking quality, support for vertical plane detection, face tracking, 2D and 3D image detection, and persistent and shared AR experiences. It also adds support for Google's ARCore 1.2, including vertical plane detection, Augmented Images, and Cloud Anchors for building collaborative AR experiences.

These are just a select few of the updates to the Unreal Engine. The full list of release notes is available on the Unreal Engine blog.

What's new in Unreal Engine 4.19?
Game Engine Wars: Unity vs Unreal Engine


OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more

Bhagyashree R
18 Oct 2019
3 min read
Yesterday, the team behind OpenBSD, a Unix-like operating system, announced the release of OpenBSD 6.6. This release has the GNU Compiler Collection (GCC) disabled in its base packages for i386 and ARMv7, and expands LLVM Clang platform support. OpenBSD 6.6 also features various SMP improvements, improved Linux compatibility with ACPI interfaces, a number of new hardware drivers, and more. It ships with OpenSSH 8.1, LibreSSL 3.0.2, OpenSMTPD 6.6, and other updated packages.

Read also: OpenSSH code gets an update to protect against side-channel attacks

Key updates in OpenBSD 6.6

Unlocked system calls

OpenBSD 6.6 unlocks the getrlimit and setrlimit system calls, which control maximum system resource consumption, as well as the read and write system calls for reading input and writing output (see the illustration at the end of this article for what the rlimit pair does).

Improved hardware support

OpenBSD 6.6 comes with Linux-compatible ACPI interfaces, and ACPI support is enabled in the 'radeon' and 'amdgpu' drivers. The Time Stamp Counter (TSC) is re-enabled as the default AMD64 time source, and TSC synchronization is added for multiprocessor machines. This release also supports the cryptographic coprocessor found on newer AMD Ryzen CPUs/APUs.

IEEE 802.11 wireless stack improvements

The ifconfig 'nwflag' is now repaired. A new 'stayauth' nwflag is added, which you can set to ignore deauth frames and protect your system from spoofing attacks. Support for 802.11n Tx aggregation is added to net80211 and the 'iwn' driver. Starting with OpenBSD 6.6, all wireless drivers submit a batch of received packets to the network stack during one interrupt, instead of submitting them individually.

Security improvements

The unveil command is updated to improve application behavior when encountering hidden filesystem paths. OpenBSD 6.6 improves mitigations against a number of vulnerabilities, including the Spectre side-channel vulnerability in Intel CPUs and Intel's Microarchitectural Data Sampling vulnerability. This release also introduces 'malloc_conceal' and 'calloc_conceal', which return memory in pages marked MAP_CONCEAL and call 'freezero' on 'free'.

Read also: Seven new Spectre and Meltdown attacks found

In a discussion on Hacker News, many users expressed their excitement. A user commented, "Just keeps getting better and better every release. I wish they would add an easy encryption option in the installer. You can enable full-disk encryption, but you have to mess with the bioctl settings, which potentially scares off new users." A few users also wondered why the release keeps U2F support and Bluetooth disabled for security. A user explained, "I'm not sure why U2F would be "disabled for security". I guess it's just that nobody has implemented all the required things. For the USB tokens, you need userspace USB HID access and hotplug notifications. I did that in Firefox for FreeBSD."

These were some of the updates in OpenBSD 6.6. Check out the official announcement to know more.

OpenBSD 6.4 released
OpenSSH code gets an update to protect against side-channel attacks
OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions
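For readers unfamiliar with getrlimit/setrlimit, the pair queries and sets per-process resource ceilings. A quick illustration using Python's standard resource module, which wraps the same syscalls; this shows what the calls do on any Unix-like system, not OpenBSD's new unlocked implementation.

```python
# Query and lower a per-process resource limit via getrlimit/setrlimit.
import resource

# Current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-files limit: soft={soft} hard={hard}")

# Lower the soft limit for this process; it may never exceed the hard
# limit, so clamp against it (handling an unlimited hard limit).
new_soft = 256 if hard == resource.RLIM_INFINITY else min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```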


.NET Core 2.0 reaches end of life, no longer supported by Microsoft

Prasad Ramesh
04 Oct 2018
2 min read
.NET Core 2.0 was released in mid-August 2017. It has now reached end of life (EOL) and will no longer be supported by Microsoft.

.NET Core 2.0 EOL

.NET Core 2.1 was released towards the end of May 2018, and .NET Core 2.0 reached EOL on October 1. This was supposed to happen on September 1 but was pushed back a month because users experienced issues upgrading to the newer version. .NET Core 2.1 is a long-term support (LTS) release and should be supported until at least August 2021. It is recommended to upgrade to and use .NET Core 2.1 for your projects; there are no major changes in the newer version.

.NET Core 2.0 is no longer supported and updates won't be provided. The installers, zips, and Docker images of .NET Core 2.0 will remain available, but they won't be supported. Downloads for 2.0 will still be accessible via the Download Archives; however, .NET Core 2.0 has been removed from the microsoft/dotnet repository README file. All the existing images remain available in that repository.

Microsoft's support policy

The 'LTS' releases contain stabilized features and components and require fewer updates over their longer support lifetime. LTS releases are a good choice for applications that developers do not intend to update very often. The 'current' releases include features that are new and may change in the future based on feedback and issues. They give access to the latest features and improvements, and hence are a good choice for applications in active development, though upgrades to newer .NET Core releases are required more frequently to stay in support.

Some of the new features in .NET Core 2.1 include performance improvements, long-term support, Brotli compression, and new cryptography APIs. To migrate from .NET Core 2.0 to .NET Core 2.1, visit the Microsoft website. You can read the official announcement on GitHub.

Note: article amended 08.10.2018 - .NET Core 2.0 reached EOL on October 1, not .NET Core 2.1. The installers, zips and Docker images will still remain available but won't be supported, not unsupported.

.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
Microsoft's .NET Core 2.1 now powers Bing.com
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

AWS Greengrass brings machine learning to the edge

Richard Gall
09 Apr 2018
3 min read
AWS already has solutions for machine learning, edge computing, and IoT. But a recent update to AWS Greengrass combines all of these facets so you can deploy machine learning models to the edge of networks. That's an important step forward in the IoT space for AWS. With Microsoft also recently announcing a $5 billion investment in IoT projects over the next 4 years, extending the capability of AWS Greengrass is how the AWS team makes sure it sets the pace in the industry.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog: "...You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields..."

Industrial applications of machine learning inference

Machine learning inference brings many advantages to industry and agriculture. For example:

• In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition; in turn, this enables corrective action to be taken, allowing farmers to optimize yields.
• In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. For example, by monitoring vibrations or noise levels, Barr explains, you'll be able to identify faulty or failing machines before they actually break (a toy version of this idea is sketched after this article).

Running this on AWS Greengrass offers a number of advantages over running machine learning models and processing data locally: it means you can run complex models without draining your computing resources. Read more in detail in the AWS Greengrass Developer Guide.

AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of using AWS Greengrass should be that it simplifies machine learning inference at every stage of the typical machine learning workflow. From building and deploying machine learning models to developing inference applications that can be launched locally within an IoT network, it should, in theory, make the advantages of machine learning inference accessible to more people.

It will be interesting to see how this new feature is applied by IoT engineers over the next year or so, and whether it has any impact on the wider battle for the future of industrial IoT.

Further reading:

What is edge computing?
AWS IoT Analytics: The easiest way to run analytics on IoT data, Amazon says
What you need to know about IoT product development
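To make the vibration-monitoring example concrete, here is a toy local-inference loop in Python. This is not the Greengrass API (a real deployment would package a trained model for the Greengrass runtime); the window size, threshold, and sensor simulation are arbitrary illustrative values.

```python
# Toy edge-inference loop: flag vibration samples that deviate strongly
# from the recent baseline, so only anomalies need to reach the cloud.
import random
from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 50, 3.0          # hypothetical tuning values
readings = deque(maxlen=WINDOW)      # rolling window of recent samples

def on_sensor_sample(value):
    """Score one vibration sample against the rolling baseline."""
    if len(readings) == WINDOW:
        mu, sigma = mean(readings), stdev(readings)
        if sigma and abs(value - mu) / sigma > THRESHOLD:
            print(f"anomaly: {value:.2f} (baseline mean {mu:.2f})")
    readings.append(value)

for _ in range(200):                  # normal operation
    on_sensor_sample(random.gauss(1.0, 0.05))
on_sensor_sample(5.0)                 # a failing bearing spikes vibration
```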


Google's kaniko - An open-source build tool for Docker images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile even without privileged root access. Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. That makes it difficult to build container images in environments that can't easily or securely expose their Docker daemons, such as Kubernetes clusters. kaniko was created to combat these challenges.

With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn't require any special privileges or permissions, kaniko can run in a standard Kubernetes cluster, in Google Kubernetes Engine, or in any environment that can't grant privileges or access to a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images.

[Figure: kaniko image generation]

The kaniko executor extracts the base image's file system into the root, then executes each command in order, taking a snapshot of the file system after each command. The snapshot is created in the user area where the file system is running and compared to the previous state held in memory. All changes to the file system are appended to the base image, with the relevant changes made in the image's metadata. After successfully executing each command in the Dockerfile, the executor pushes the newly built image to the desired registry.

In short, kaniko unpacks the filesystem, executes commands, and takes snapshots of the filesystem entirely in user space within the executor image; this is how it avoids requiring privileged access on your machine, with no Docker daemon or CLI involved.

To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation in the GitHub repo.

The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What's new in Docker Enterprise Edition 2.0?
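The snapshot-and-diff loop described above can be sketched in a few lines. This is a conceptual Python model, not kaniko's actual Go implementation: real layers are tarballs with metadata, and the content hashing here merely stands in for kaniko's change detection.

```python
# Conceptual model of kaniko's build loop: after each command, re-scan
# the filesystem and keep only the changed files as a new "layer".
import hashlib
import os
import tempfile

def snapshot(root):
    """Map every file under root to a digest of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def build(root, commands):
    layers, previous = [], snapshot(root)
    for run in commands:                 # each 'run' mimics a Dockerfile step
        run()
        current = snapshot(root)
        changed = {p: h for p, h in current.items() if previous.get(p) != h}
        layers.append(changed)           # only the diff becomes a layer
        previous = current
    return layers

# Tiny demo: two "commands" that each touch one file become two layers.
root = tempfile.mkdtemp()
def step(n):
    def run():
        with open(os.path.join(root, f"file{n}"), "w") as f:
            f.write(f"step {n}")
    return run

for i, layer in enumerate(build(root, [step(1), step(2)])):
    print(f"layer {i}: {sorted(layer)}")
```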