
Tech News


Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform.' This is essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation.

CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?

According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stress that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov. Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "as Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours. Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0

Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs - everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of SREs - site reliability engineers. It's a tool that's designed to manage the relationship between various services, and to ensure they not only interact with each other in the way they should, but that they do so with minimum downtime. The Kong team appear to have a huge amount of confidence in the launch of the platform - the extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next:

How Gremlin is making chaos engineering accessible [Interview]
Is the 'commons clause' a threat to open source?


An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates

Savia Lobo
01 Jul 2019
6 min read
Robert J. Hansen, a maintainer of the GnuPG FAQ, revealed a certificate spamming attack against him and Daniel Kahn Gillmor, two high-profile contributors in the OpenPGP community, in the last week of June 2019. The attack exploited a defect in the OpenPGP protocol to "poison" both Hansen's and Gillmor's OpenPGP certificates. "Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways", Hansen wrote in his GitHub blog post. Gillmor said his OpenPGP certificate was flooded with bogus certifications, which were uploaded to the SKS keyserver network. The main use of OpenPGP today is to verify downloaded packages for Linux-based operating systems, usually using a software tool called GnuPG.

This attack has the following consequences:

- If you fetch a poisoned certificate from the keyserver network, you will break your GnuPG installation.
- Poisoned certificates cannot be deleted from the keyserver network.
- The number of deliberately poisoned certificates, currently only a few, will only rise over time.
- The attackers may intend to poison other certificates, and the scope of the damage is still unknown.

A year ago, OpenPGP experienced similar certificate flooding: first, spam on Werner Koch's key, and second, abuse of tools made available years ago under the name "trollwot". A keyserver-backed filesystem was also proposed as a proof of concept to point out the potential for abuse. "Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned", Hansen further added. He also said that the mitigation to this attack cannot be carried out "in any reasonable time period" and that future releases of OpenPGP software may include mitigations. However, he is unsure of the time frame. The best mitigation that can be applied at present is simple, Hansen says: stop retrieving data from the SKS keyserver network.

The keyserver software was written to facilitate the discovery and distribution of public certificates. Users can search a keyserver by a variety of criteria to discover public certificates which claim to belong to the desired user. The keyserver network, however, does not attest to the accuracy of the information; that is left for each user to ascertain according to their own criteria. According to the keyserver design goals, "Keyservers could add information to existing certificates but could never, ever, ever, delete either a certificate or information about a certificate", said Hansen, who has been involved in the PGP community since 1992 and was present for these discussions. "In the early 1990s this design seemed sound. It is not sound in 2019. We've known it has problems for well over a decade", Hansen adds. This shows how vulnerable keyservers are to attack, and how easily their data can be misused.

Why the SKS keyserver network can never be fixed

Hansen also gives some reasons why the software has not been fixed or updated for security to date.

A difficult-to-understand algorithm
The SKS (Synchronizing Key Server) software was written by Yaron Minsky. It became the keystone of his Ph.D. thesis, and he wrote SKS originally as a proof of concept of his idea. It is written in OCaml, an unusual programming language, in what Hansen says is an idiosyncratic dialect. "Not only do we need to be bright enough to understand an algorithm that's literally someone's Ph.D. thesis, but we need expertise in obscure programming languages and strange programming customs", Hansen says.

A change in design goals may mean starting from scratch
Because of the difficult programming language it is written in, there are hardly any programmers qualified to do such a major overhaul, Hansen says. Also, the design goal of the keyserver network is "baked into" essentially every part of the infrastructure, and changing it may require huge changes across the entire software.

Lack of a centralized authority
The lack of centralized authority was a feature, not a bug: there is no single point of failure for a government to go after. This makes it even harder to change the design goals, as the network works as a confederated system.

The keyserver network is a write-only file system
The keyserver network is based on a write-only design, which makes it susceptible to a lot of attacks, as anyone can write into it while deleting anything is very hard. The keyserver network can be thought of as an extremely large, extremely reliable, extremely censorship-resistant distributed file system which anyone can write to. Attackers can easily add malicious or censored content, which no one can delete.

Mitigations for users of the SKS keyserver network

Hansen says high-risk users should stop using the keyserver network immediately. For those confident with editing their GnuPG configuration files, the following process is recommended:

1. Open gpg.conf in a text editor. Ensure there is no line starting with keyserver. If there is, remove it.
2. Open dirmngr.conf in a text editor. Add the line keyserver hkps://keys.openpgp.org to the end of it.

keys.openpgp.org is a new experimental keyserver which is not part of the keyserver network and has some features which make it resistant to this sort of attack. It has some limitations; for example, its search functionality is sharply constrained. However, once these changes are made, users will be able to run gpg --refresh-keys with confidence.

Daniel Kahn Gillmor, in his blog post, says, "This is a mess, and it's a mess a long time coming. The parts of the OpenPGP ecosystem that rely on the naive assumptions of the SKS keyserver can no longer be relied on because people are deliberately abusing those keyservers. We need significantly more defensive programming and a better set of protocols for thinking about how and when to retrieve OpenPGP certificates".

Public reaction to this attack has been largely speculative. People shared their opinions on Twitter, and some have suggested migrating from the SKS network towards the new OpenPGP keyserver, Hagrid.

https://twitter.com/matthew_d_green/status/1145030844131753985
https://twitter.com/adulau/status/1145045929428443137

To know more about this in detail, head over to Robert J. Hansen's GitHub post.

Read next:

Training Deep Convolutional GANs to generate Anime Characters [Tutorial]
Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies
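The two-step mitigation Hansen recommends can be automated in a few lines. This is an illustrative Python sketch (the helper name and the idea of scripting the edit are mine, not from Hansen's post), assuming the default per-user GnuPG configuration directory:

```python
from pathlib import Path

def apply_sks_mitigation(gnupg_dir: Path) -> None:
    """Drop any 'keyserver' line from gpg.conf and point dirmngr
    at keys.openpgp.org instead of the SKS pool."""
    gnupg_dir.mkdir(parents=True, exist_ok=True)

    gpg_conf = gnupg_dir / "gpg.conf"
    if gpg_conf.exists():
        # Step 1: remove any line starting with 'keyserver'.
        kept = [line for line in gpg_conf.read_text().splitlines()
                if not line.strip().startswith("keyserver")]
        gpg_conf.write_text("\n".join(kept) + "\n")

    # Step 2: append the new keyserver to dirmngr.conf.
    dirmngr_conf = gnupg_dir / "dirmngr.conf"
    with dirmngr_conf.open("a") as f:
        f.write("keyserver hkps://keys.openpgp.org\n")

# GnuPG reads ~/.gnupg by default; try this on a copy first if unsure.
# apply_sks_mitigation(Path.home() / ".gnupg")
```

After the change (and a dirmngr restart), gpg --refresh-keys should fetch from keys.openpgp.org rather than the SKS pool, as described above.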


Microsoft open sources the Windows Calculator code on GitHub

Amrata Joshi
07 Mar 2019
3 min read
For the past couple of years, Microsoft has been supporting open source projects; it even joined the Open Invention Network. Last year, Microsoft announced the general availability of its Windows 3.0 File Manager code. Yesterday, the team at Microsoft announced that it is releasing its Windows Calculator program as an open source project on GitHub under the MIT License. Microsoft is making the source code, build system, unit tests, and product roadmap available to the community.

Developers can explore how different parts of the Calculator app work and get to know the logic behind it. Microsoft is also encouraging developers to participate in the project by bringing new perspectives to the Calculator code. The company highlighted that developers can contribute by participating in discussions, fixing or reporting issues, prototyping new features, and addressing design flaws. By reviewing the Calculator code, developers can explore the latest Microsoft technologies like XAML, the Universal Windows Platform, and Azure Pipelines. They can also learn about Microsoft's full development lifecycle and can even reuse the code to build their own projects. Microsoft will also be contributing custom controls and API extensions used in Calculator to projects like the Windows UI Library and Windows Community Toolkit. The official announcement reads, "Our goal is to build an even better user experience in partnership with the community."

With these recent updates, it seems the company is becoming more and more developer friendly. Just two days ago, the company updated its App Developer Agreement; as per the new policy, developers will now get up to a 95% share. According to a few users, Microsoft might collect user information via this new project, and the telemetry section of the GitHub post states as much. The post reads, "This project collects usage data and sends it to Microsoft to help improve our products and services. Read our privacy statement to learn more. Telemetry is disabled in development builds by default, and can be enabled with the SEND_TELEMETRY build flag."

One of the users commented on HackerNews, "Well it must include your IP address too, and they know the time and date it was received. And then it gets bundled with the rest of the data they collected. I don't even want them knowing when I'm using my computer. What gets measured gets managed." A few users have a different perspective. Another comment reads, "Separately, I question whether anyone looking at the telemetry on the backend. In my experience, developers add this stuff because they think it will be useful, then it never or rarely gets looked at. A telemetry event here, a telemetry event there, pretty soon you're talking real bandwidth."

Check out Microsoft's blog post for more details on this news.

Read next:

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military
Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more


Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”

Savia Lobo
16 Jul 2019
3 min read
On July 2, 2019, Cloudflare suffered a major outage due to a massive spike in CPU utilization across its network. Ten days after the outage, on July 12, Cloudflare's CTO, John Graham-Cumming, released a report detailing how the Cloudflare service went down for 27 minutes.

During the outage, the company speculated that the cause was a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), deployed during a routine rollout of new Cloudflare WAF Managed Rules. This speculation turned out to be true: the rule caused CPU exhaustion on every CPU core that handles HTTP/HTTPS traffic on the Cloudflare network worldwide. Graham-Cumming said they are "constantly improving WAF Managed Rules to respond to new vulnerabilities and threats". The CPU exhaustion was caused by a single WAF rule containing a poorly written regular expression that ended up creating excessive backtracking. The regular expression at the heart of the outage is reproduced in the report. (Source: Cloudflare report.)

Graham-Cumming says Cloudflare deploys dozens of new rules to the WAF every week, and also has numerous systems in place to prevent any negative impact from such a deployment. He shared a list of weaknesses in that process that led to the major outage.

What's Cloudflare doing to mend the situation?

Graham-Cumming said they have stopped all release work on the WAF completely and are following several new processes. For the longer term, Cloudflare is "moving away from the Lua WAF that I wrote years ago". The company plans to port the WAF to its new firewall engine, which gives customers the ability to control requests in a flexible and intuitive way, inspired by the widely known Wireshark language. This will make the WAF faster and add yet another layer of protection.

Users have appreciated Cloudflare's immediate response to the outage and its complete transparency about the root cause in a full post-mortem report.

https://twitter.com/fatih/status/1150014793253904386
https://twitter.com/nealmcquaid/status/1150754753825165313
https://twitter.com/_stevejansen/status/1150928689053470720

"We are ashamed of the outage and sorry for the impact on our customers. We believe the changes we've made mean such an outage will never recur," Graham-Cumming writes. Read the complete in-depth report on the Cloudflare blog.

Read next:

How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
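Cloudflare's post-mortem attributes the blow-up to a fragment of the rule equivalent to `.*.*=.*`: two unbounded wildcards before a literal. Here is a minimal, illustrative Python sketch of why that shape is dangerous for a backtracking regex engine (the real rule ran in Cloudflare's Lua WAF, not Python):

```python
import re

# Two unbounded '.*' in a row before a literal '=': when the input
# contains '=', the engine finds a match quickly...
pattern = re.compile(r".*.*=.*")
assert pattern.match("x=1") is not None

# ...but when the input has NO '=', a backtracking engine must try
# every split point between the two '.*' before giving up: O(n^2)
# attempts, and far worse once the fragment is nested inside yet
# another '.*', as it was in the full WAF rule.
subject = "x" * 2000  # no '=' anywhere
assert pattern.match(subject) is None
```

Runaway patterns like this are the textbook motivation for linear-time (RE2-style) regex engines, which the report discusses as a longer-term defence.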


Survey reveals how artificial intelligence is impacting developers across the tech landscape

Richard Gall
13 Sep 2018
2 min read
The hype around artificial intelligence has reached fever pitch. It has captured the imagination - and stoked the fears - of the wider public, reaching beyond computer science departments and research institutions. But when artificial intelligence dominates the international conversation, it's easy to forget that it's not simply a thing that exists and develops itself. However intelligent machines are, and however adept they are at 'learning', it's essential to remember they are things that are engineered - things that are built by developers. That's the thinking behind this year's AI Now survey: to capture the experiences and perspectives of developers, and to better understand the impact of artificial intelligence on their work and lives.

Key findings from Packt's artificial intelligence survey

Launched in August, and receiving 2,869 responses from developers working in every area from cloud to cybersecurity, the survey had some interesting findings. These include:

- 69% of developers aren't currently using AI-enabling tools in their day-to-day role, but 75% of respondents said they were planning on learning AI-enabling software in the next 12 months.
- TensorFlow is the tool defining AI development - 27% of respondents listed it as the top tool on their to-learn list.
- 75% of developers believe automation will have either a positive or significantly positive impact on their career.
- 47% of respondents believe AGI will be a reality within the next 30 years.
- The biggest challenges for developers in terms of AI are having the time to learn new skills and knowing which frameworks and tools to learn.
- Internal data literacy is the biggest challenge for AI implementation.

As well as quantitative results, the survey also produced qualitative insights from developers, providing some useful and unique perspectives on artificial intelligence. One developer, talking about bias in AI, said: "As a CompSci/IT professional I understand this is a more subtle manifestation of 'Garbage In/Garbage Out'. As an African American, I have significant concerns about say, well documented bias in say criminal sentencing being legitimized because 'the algorithm said so'."

To read the report click here. To coincide with the release of the survey results, Packt is also running a $10 sale on all eBooks and videos across their website throughout September. Visit the Packt eCommerce store to start exploring.


Anaconda 5.3.0 released, takes advantage of Python’s speed and feature improvements

Melisha Dsouza
03 Oct 2018
2 min read
The Anaconda team announced the release of Anaconda Distribution 5.3.0 in a blog post yesterday. Harnessing the speed of Python 3.7, the new update makes learning and performing data science and machine learning all the easier. Here is the list of new features in Anaconda 5.3.0:

#1 Utilising Python's speed
Anaconda Distribution 5.3 is compiled with Python 3.7, in addition to the Python 2.7 Anaconda installers and Python 3.6 Anaconda metapackages. This ensures the new update takes full advantage of Python's speed and feature improvements.

#2 Better CPU performance
Users deploying TensorFlow can make use of the Intel Math Kernel Library 2019 for Deep Neural Networks (MKL 2019) included in this upgrade. These Python binary packages ensure high CPU performance.

#3 Better reliability
The team has improved reliability by capturing and storing package metadata for installed packages. The additional metadata is used by the package cache to efficiently manage the environment, while the patched metadata is used by the conda solver.

#4 New packages added
Over 230 packages have been updated or added by the team.

#5 Work in progress on the casting bug
The team is working on the casting bug in NumPy with Python 3.7; a patch is in progress until NumPy is updated.

To know more about this release, you can head over to the full release notes for the Distribution.

Read next:

Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
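Since the headline change is shipping Python 3.7 builds, here is a quick, illustrative taste of one of 3.7's feature improvements - dataclasses (my example, not from the Anaconda release notes):

```python
from dataclasses import dataclass

# dataclasses, new in Python 3.7, generate __init__, __repr__ and
# __eq__ for plain data holders automatically.
@dataclass
class Package:
    name: str
    version: str

p = Package("numpy", "1.15.2")
assert p == Package("numpy", "1.15.2")  # auto-generated __eq__
assert repr(p) == "Package(name='numpy', version='1.15.2')"
```

Under Python 2.7 or 3.6 metapackages this import fails, which is one concrete reason to pick the 3.7-based installers.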

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end

Bhagyashree R
23 Jan 2019
4 min read
Chromium developers recently shared the updates they are planning for Manifest V3, and one of them is limiting the blocking version of the webRequest API. They are introducing an alternative to this API called the declarativeNetRequest API. After learning about this update, many ad blocker maintainers and developers felt that the introduction of the declarativeNetRequest API could mean the end of many existing ad blockers. One of the users at the Chromium bug tracker said: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist."

What is a manifest version?

A manifest version is basically a mechanism through which certain capabilities can be restricted to a certain class of extensions. These restrictions are specified in the form of either a minimum version or a maximum version.

What does Chromium state as its reason for this update?

The webRequest API permits extensions to intercept requests and modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form. The non-blocking form of the API, which permits extensions to observe network requests but not modify, redirect, or block them, will remain available. The team has not yet listed the limitations they are going to put on the webRequest API.

Manifest V3 will treat the declarativeNetRequest API as the primary content-blocking API in extensions. This API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension, which allows Chrome to handle a request synchronously. As per the doc shared by the team, this API is more performant and provides better privacy guarantees to users.

What are ad blocker developers and maintainers saying?

Many developers are concerned that this change will end up crippling all ad blockers. "Beside causing uBO and uMatrix to no longer be able to exist, it's really concerning that the proposed declarativeNetRequest API will make it impossible to come up with new and novel filtering engine designs, as the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and a rather limited one (the 30,000 limit is not sufficient to enforce the famous EasyList alone)", commented an ad blocker developer. He also stated that with the declarativeNetRequest API, developers will not be able to implement features like blocking media elements that are larger than a set size, or disabling JavaScript execution through the injection of CSP directives.

Users also feel that this is similar to Safari's content blocking API, which likewise puts a limit on the number of rules. One developer stated on the Chromium issue tab, "Safari has introduced a similar API, which I guess inspires this. My personal experience is that extensions written in that API is usable, but far inferior to the full power of uBlock Origin. I don't want to see this API to be the sole future."

You can check out the issue reported on the Chromium bug tracker. You can also join the discussion or raise your concerns on the Google group: Manifest V3: Web Request Changes.

Read next:

Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more
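For flavor, a declarative blocking rule under the proposed API looks roughly like the following JSON fragment (a sketch based on the draft design; exact field names were still subject to change at the time of writing):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Instead of a JavaScript listener deciding per request, the extension registers rules like this up front and Chrome evaluates them itself - which is exactly why filter-list authors worry about the fixed rule limit.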


Blazor 0.6 release and what it means for WebAssembly

Amarabha Banerjee
05 Oct 2018
3 min read
WebAssembly is changing the way we develop applications for the web. Graphics-heavy applications, browser-based games, and interactive data visualizations seem to have found a better way to our UI - the WebAssembly way. The latest Blazor 0.6 experimental release from Microsoft is an indication that Microsoft has identified WebAssembly as one of the upcoming trends and extended support to its bevy of developers.

Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. The following image shows the architecture of Blazor. (Source: MSDN)

Blazor ships its own JavaScript file, blazor.js. It uses Mono, an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime (CLR). It also uses Razor, a template engine that combines C# with HTML to create dynamic web content. Together, this means Blazor promises dynamic and fast web apps without using the popular JavaScript frontend frameworks, which reduces the learning curve for existing C# developers.

Microsoft released the 0.6 experimental version of Blazor on October 2nd. This release includes new features for authoring templated components and enables using server-side Blazor with the Azure SignalR Service. Another important piece of news from this release is that the server-side Blazor model will be included, as Razor components, in the .NET Core 3.0 release.

The major highlights of this release are:

- Templated components: define components with one or more template parameters, specify template arguments using child elements, generic typed components with type inference, and Razor templates
- Refactored server-side Blazor startup code to support the Azure SignalR Service

Now the important question is: how is this release going to fuel the growth of WebAssembly-based web development? It will probably take some time for WebAssembly to become mainstream, because this is just the alpha release, which means there will be plenty of changes before the final release arrives. But why Blazor is the right step ahead can be explained by the fact that, unlike former Microsoft platforms like Silverlight, it does not have its own rendering engine. Pixel rendering in the browser is not its responsibility, and that's what makes it lightweight. Blazor uses the browser's DOM to display data. However, the C# code running in WebAssembly cannot access the DOM directly; it has to go through JavaScript. The process presently looks like this. (Source: Learn Blazor)

The way this process happens might change with the beta and subsequent releases of Blazor, so that the intermediate JavaScript layer can be avoided. But that's what WebAssembly is at present: a bridge between your code and the browser, which evidently runs on JavaScript. Blazor can prove to be a very good supportive tool to fuel the growth of WebAssembly-based apps.

Read next:

Why is everyone going crazy over WebAssembly?
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
Unity Benchmark report approves WebAssembly load times and performance in popular web browsers


PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

Melisha Dsouza
04 Sep 2018
3 min read
HyperLearn is a statsmodels-like project, the result of combining PyTorch, NoGil Numba, Numpy, Pandas, Scipy & LAPACK, and it has similarities to Scikit Learn. The project was started last month by Daniel Hanchen and still has some unstable packages. He aims to make Linear Regression, Ridge, PCA, and LDA/QDA faster, speedups which then flow on to other algorithms. The combination incorporates novel algorithms to make it 50% faster and lets it use 50% less RAM, alongside a leaner GPU Sklearn. HyperLearn also has embedded statistical inference measures, which can be called with Scikit-Learn-like syntax (model.confidence_interval_).

HyperLearn's speed/memory comparison

There is a 50%+ improvement on Quadratic Discriminant Analysis (with similar improvements for other models), as can be seen below. (Source: GitHub.) Time(s) is Fit + Predict. RAM(mb) = max( RAM(Fit), RAM(Predict) ).

Key methodologies and aims of the HyperLearn project

#1 Parallel for loops
HyperLearn for loops will include memory sharing and memory management. CUDA parallelism will be made possible through PyTorch & Numba.

#2 50%+ faster and leaner
Improved matrix operations include: matrix multiplication ordering; element-wise matrix multiplication, reducing complexity to O(n^2) from O(n^3); reducing matrix operations to Einstein notation; and evaluating one-time matrix operations in succession to reduce RAM overhead. Applying QR decomposition and then SVD (singular value decomposition) might be faster in some cases. Utilising the structure of the matrix to compute a faster inverse: computing SVD(X) and then getting pinv(X) is sometimes faster than pure pinv(X).

#3 Statsmodels is sometimes slow
Confidence intervals, prediction intervals, hypothesis tests & goodness-of-fit tests for linear models are optimized, using Einstein notation & Hadamard products where possible, computing only what is necessary (the diagonal of a matrix only), and fixing the flaws of Statsmodels on notation, speed, memory issues and storage of variables.

#4 Deep learning drop-in modules with PyTorch
Using PyTorch to create Scikit-Learn-like drop-in replacements.

#5 20%+ less code along with cleaner, clearer code
Using decorators & functions wherever possible; intuitive middle-level function names (isTensor, isIterable); and handling parallelism easily through hyperlearn.multiprocessing.

#6 Accessing old and exciting new algorithms
Matrix completion algorithms - Non-Negative Least Squares, NNMF, Batch Similarity Latent Dirichlet Allocation (BS-LDA), Correlation Regression and many more!

Daniel went on to publish some preliminary algorithm timing results covering a range of algorithms from MKL Scipy, PyTorch, and MKL Numpy, plus HyperLearn's methods and Numba JIT-compiled algorithms. Here are his key findings on the HyperLearn statsmodel:

- HyperLearn's pseudoinverse has no speed improvement.
- HyperLearn's PCA shows over a 200% speed boost.
- HyperLearn's linear solvers are over 1x faster, i.e. a 100%+ improvement in speed.

You can find all the details of the test on reddit.com. For more insights on HyperLearn, check out the release notes on GitHub.

Read next:

A new geometric deep learning extension library for PyTorch releases!
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
Introduction to Sklearn


Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ

Bhagyashree R
24 Jul 2019
3 min read
On Monday, the Wall Street Journal reported that Apple is in advanced talks to buy Intel's smartphone-modem business for at least $1 billion, citing people familiar with the matter. The deal, which would cover a portfolio of patents and staff, is expected to be confirmed in the next week.

According to the report, the companies started discussing this deal last summer, around the time Intel's former CEO Brian Krzanich resigned. However, the talks broke off when Apple signed a multiyear modem supply agreement with Qualcomm in April to settle a longstanding legal dispute between the two companies over the royalties Qualcomm charges for its smartphone modems.

After Apple's settlement with Qualcomm, Intel announced its plans to exit the 5G smartphone modem business. Its new CEO, Bob Swan, said in a press release that there is no "path to profitability and positive returns" for Intel in the smartphone modem business. Intel then opened the offer to other companies but eventually resumed talks with Apple, which is seen as the "most logical buyer" for its modem business.

How will this deal benefit Apple?

The move will help Apple jumpstart its efforts to make modem chips in-house. In recent years, Apple has been expanding its presence in the components market to eliminate dependence on other companies for the hardware and software in its devices. It now designs its own application processors, graphics chips, Bluetooth chips, and security chips. Last year, Apple acquired patents, assets, and employees from Dialog Semiconductor, a British chipmaker, as part of a $600 million deal to bring power-management designs in-house. With this deal, the tech giant will get access to Intel's engineering work and talent to help develop modem chips for the crucial next generation of wireless technology known as 5G, potentially saving years of development work.
How will this deal benefit Intel?

The deal will allow Intel to part ways with a business that hasn't been profitable for the company. "The smartphone operation had been losing about $1 billion annually, a person familiar with its performance has said, and has generally failed to live up to expectations," the report reads. After exiting the 5G smartphone modem business, the company wants to focus on 5G network infrastructure.

Read the full story on the Wall Street Journal.

Apple patched vulnerability in Mac's Zoom Client; plans to address 'video on by default'
OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
Apple gets into chip development and self-driving autonomous tech business

Apple releases iOS 12 beta 2 with screen time and battery usage updates among others

Natasha Mathur
20 Jun 2018
3 min read
The second beta of iOS 12 was released by Apple yesterday to registered developers for testing purposes, two weeks after the first beta rolled out following the much-awaited Worldwide Developers Conference. Thanks to the ongoing beta releases, beta 2 includes modifications to many of the new features introduced in iOS 12, such as changes to Screen Time, battery usage, and other smaller tweaks. Let's have a look at the key updates that will change your iPhone or iPad for the better.

Key updates

Battery usage
The usage charts that represent activity and battery level for the past 24 hours have been redesigned in iOS 12 beta 2. Fonts and wordings have also been updated in this section. (Source: macrumors)

Screen Time
The toggle for clearing Screen Time data has been removed, and the interface for adding time limits to apps via the Screen Time screen has been modified. With the first beta, tapping an app went straight into the limits interface; now, tapping an app displays more information about it, including daily average use, developer, category, and more. There is a new splash screen for the Screen Time feature, and new options let you view your activity on either one device or all devices.

Notifications
iOS 12 introduces a feature where Siri suggests limiting notifications from sparingly used apps. With beta 2, the Notifications section of the Settings app has a new toggle that lets you turn off Siri's suggestions for individual apps.

Photos search
With iOS 12 beta 2, Photos supports more advanced searches. If you search for photos taken on a specific date, say May 15, the photos taken on May 15 across all years will appear, which is quite different from the iOS 12 beta 1 behavior.
Also, the font of listings such as "Media Types" and "Albums" has changed: the listings' font size in the Photos app is now much bigger, making it easier for users to read.

Voice Memos
A new introductory splash screen has been added for Voice Memos in iOS 12 beta 2.

Apart from these updates, there are certain minor changes, listed below:

On unlocking content using Face ID, the iPhone X now says "Scanning with Face ID."
iPhone apps opened on the iPad, such as Instagram, are now displayed at a modern device size (iPhone 6) in both 1x and 2x modes.
A new interface has been added for auto-filling a password saved in iCloud Keychain.
The Podcasts app now shows a 'Now Playing' indicator for the currently playing chapters.
Time Travel references have been removed from the Watch app.

The iOS 12 public beta will launch after iOS 12 developer beta 3, around June 26. The release date for the final version of iOS 12 is set for sometime in September 2018. There are also some known issues with the latest iOS 12 beta 2 update that need resolving. Registered developers can check out the release notes for beta 2 on the official Apple developer website.

WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store, Apple News, & more security controls


Valve’s Steam Play Beta uses Proton, a modified WINE, allowing Linux gamers to play Windows games

Bhagyashree R
25 Aug 2018
2 min read
To provide compatibility with a wide range of Windows-only games to all Linux users, a beta version of the new and improved Steam Play is now available. It uses Proton, a modified distribution of Wine, to allow games that are exclusive to Windows to run on Linux and macOS operating systems. Proton is an open source tool, allowing advanced users to alter the code and make their own local builds. The included improvements to Wine have been designed and funded by Valve in a joint development effort with CodeWeavers.

Valve is testing the entire Steam catalog to identify games that currently work great in this compatibility environment and to solve any remaining issues. The games enabled with this beta release include Beat Saber, Bejeweled 2 Deluxe, Doki Doki Literature Club!, DOOM, Fallout Shelter, FATE, FINAL FANTASY VI, and many more.

With Steam Play, gamers can purchase a game once and play it anywhere. Whether you purchased your Steam Play enabled game on a Mac, Windows, or Linux machine, you will be able to play on the other platforms free of charge.

What are the improvements introduced?

You can now install and run Windows games with no Linux version currently available, directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
Improved game compatibility and reduced performance impact, with the DirectX 11 and 12 implementations now based on Vulkan.
Improved support for fullscreen games, allowing them to seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring a virtual desktop.
Improved game controller support, enabling games to automatically recognize all controllers supported by Steam.
Improved performance for multi-threaded games compared to vanilla Wine.
Valve has mentioned that there could be a performance difference for games where graphics API translation is required, but there is no fundamental reason for a Vulkan title to run any slower. You can find out more about the Steam Play beta, the full list of supported games, and how Proton works in the Steam post.

Facebook launched new multiplayer AR games in Messenger
Meet yuzu – an experimental emulator for the Nintendo Switch
What's got game developers excited about Unity 2018.2?


Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!

Vincy Davis
08 May 2019
3 min read
Yesterday, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.1 in a mailing list announcement. The release brings lots of great additions as well as improvements to existing features. The previous version, Linux 5.0, was released two months ago.

"On the whole, 5.1 looks very normal with just over 13k commits (plus another 1k+ if you count merges). Which is pretty much our normal size these days. No way to boil that down to a sane shortlog, with work all over," said Linus Torvalds in the official announcement.

What's new in Linux 5.1?

io_uring: a new Linux I/O interface
Linux 5.1 introduces a new high-performance interface called io_uring, designed as an easy-to-use, hard-to-misuse user/application interface. io_uring has efficient buffered asynchronous I/O support, the ability to do I/O without even performing a system call via polled I/O, and other efficiency enhancements, helping deliver fast and efficient I/O for Linux. The user-space library liburing makes it simpler to use, and Axboe's FIO benchmark has already been adapted to support io_uring. Separately, Linux 5.1 permits safe signal delivery in the presence of PID reuse.

Security
Linux 5.1 adds the SafeSetID LSM module, which provides administrators with security and policy controls. It restricts UID/GID transitions from a given UID/GID to only those approved by a system-wide acceptable list, without granting the full auxiliary privileges associated with CAP_SET{U/G}ID, such as the ability to set up user namespace UID mappings.

Storage
Along with physical RAM, users can now use persistent memory as RAM (system memory). Linux 5.1 also allows booting the system to a device-mapper device without using initramfs, and adds support for cumulative patches in the live kernel patching feature.
This persistent memory can also serve as a cost-effective RAM replacement.

Live patching improvements
Linux 5.1 adds a new live-patching capability called atomic replace. A cumulative patch includes all wanted changes from all older live patches and can completely replace them in one transition. Live patching enables a running system to be patched without the need for a full system reboot.

Users are quite happy with this update. A user on Reddit commented, "Finally! I think this one fixes problems with Elantech's touchpads spamming the dmesg log. Can't wait to install it!" Another user added, "Thank you and congratulations for the developers!"

To download the Linux kernel 5.1 sources, head over to kernel.org. To know more about the release, check out the official mailing list announcement.

Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Announcing Linux 5.0!
Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look
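Because io_uring only exists on kernels 5.1 and newer, applications typically probe for it at startup. A small, hypothetical probe (not part of any official tooling) that checks whether an io_uring instance can actually be created in the current environment, by invoking the io_uring_setup system call directly via ctypes:

```python
import ctypes
import ctypes.util
import os
import platform

# io_uring_setup was assigned syscall number 425 on all architectures
# when it landed in Linux 5.1.
SYS_IO_URING_SETUP = 425


class IoUringParams(ctypes.Structure):
    # struct io_uring_params is 120 bytes on Linux 5.1+; a zeroed blob
    # is a valid "no special flags" request.
    _fields_ = [("raw", ctypes.c_uint8 * 120)]


def io_uring_usable() -> bool:
    """Return True if an io_uring instance can be created here.

    False means the kernel predates 5.1, or the environment (e.g. a
    container seccomp policy) blocks the syscall.
    """
    if platform.system() != "Linux":
        return False
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    params = IoUringParams()
    # Ask for a tiny ring with a single submission-queue entry.
    fd = libc.syscall(SYS_IO_URING_SETUP, 1, ctypes.byref(params))
    if fd < 0:
        return False
    os.close(fd)  # io_uring_setup returns a ring file descriptor
    return True


print(io_uring_usable())
```

Real applications would use liburing (or language bindings over it) rather than raw syscalls; this probe only answers the "is it available here?" question.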

Microsoft is planning to bring Xbox Live gaming to Android, iOS, Nintendo Switch, and more

Sugandha Lahoti
07 Feb 2019
2 min read
Microsoft is reportedly planning to bring Xbox Live cross-platform gaming features to PC, Xbox, iOS, Android, and Nintendo Switch. The news was first reported by Windows Central via a GDC 2019 session schedule on Xbox Live.

"Xbox Live is expanding from 400 million gaming devices and a reach to over 68 million active players to over 2 billion devices with the release of our new cross-platform XDK," says the GDC listing. The GDC session will also offer a first look at the SDK that will enable game developers to connect players across iOS, Android, and Switch, in addition to Xbox and any game in the Microsoft Store on Windows PCs.

Until now, Microsoft has reserved Xbox Live support on iOS, Android, and Nintendo Switch for its own games, but it is now aiming to bring Xbox Live integration to even more gaming titles. This is part of Microsoft's gaming mission to bring software, services, and games to players on platforms beyond its traditional PC and Xbox markets.

Per Windows Central, "Developers will be able to bake cross-platform Xbox Live achievements, social systems, and multiplayer, into games built for mobile devices and Nintendo Switch, as part of its division-wide effort to grow Xbox Live's user base." For developers, this would mean allowing "communities to mingle more freely across platforms. Combined with PlayFab gaming services, this means less work for game developers and more time to focus on making games fun," says the GDC listing.

Microsoft is also building xCloud, a game streaming service that will stream Xbox games to PCs, consoles, and mobile devices later this year.

Twitter users are fairly excited about this news.
https://twitter.com/Avers_G4GMedia/status/1091623967088144384
https://twitter.com/NintendoSwitchC/status/1092560268956233728
https://twitter.com/TannithArt/status/1092675726996844544

Microsoft announces Project xCloud, a new Xbox game streaming service
Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android
Microsoft plans to use Windows ML for game development


Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members

Amrata Joshi
05 Apr 2019
3 min read
Last week Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) to help the company with major issues in AI, such as facial recognition and machine learning fairness. Only a week later, Google has decided to dissolve the council, according to reports by Vox.

In a statement to Vox, a Google spokesperson confirmed that the company has decided to dissolve the panel entirely. The company further added, "It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

The news comes immediately after a group of Google employees criticized the selection of the council and insisted the company remove Kay Coles James, the Heritage Foundation president, for her anti-trans and anti-immigrant views. James's presence on the council had made others uncomfortable too: when Joanna Bryson was asked by a user on Twitter whether she was comfortable serving on a board with James, she answered, "Believe it or not, I know worse about one of the other people."

https://twitter.com/j2bryson/status/1110632891896221696
https://twitter.com/j2bryson/status/1110628450635780097

A few researchers and civil society activists also voiced their opposition to the anti-trans and anti-LGBTQ stance. Alessandro Acquisti, a behavioural economist and privacy researcher, declined an invitation to join the council.

https://twitter.com/ssnstudy/status/1112099054551515138

Googlers also insisted on removing Dyan Gibbens, the CEO of Trumbull Unmanned, a drone technology company, from the board; she has previously worked on drones for the US military.
Last year, Google employees were agitated by the fact that the company had been working with the US military on drone technology as part of the so-called Project Maven. A number of employees resigned over it, and Google later promised not to renew the Maven contract. On the ethics front, Google has also offered resources to the US Department of Defense for a "pilot project" to analyze drone footage with the help of artificial intelligence. The question that arises here is: are Googlers and Google's shareholders comfortable with the idea of their software being used by the US military? President Donald Trump's meeting with Google CEO Sundar Pichai adds more to it.

https://twitter.com/realDonaldTrump/status/1110989594521026561

Though this move by Google marks a victory for the more than 2,300 Googlers and supporters who signed the petition and took a stand against transphobia, Google still faces a tough task in redefining its AI ethics. The company might also have spared itself this turmoil had it selected the council members more carefully.

https://twitter.com/EthicalGooglers/status/1113942165888094215

To know more about this news, check out the blog post by Vox.

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
Amazon joins NSF in funding research exploring fairness in AI amidst public outcry over big tech #ethicswashing