
Tech News


Google releases Oboe, a C++ library to build high-performance Android audio apps

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Google released the first production-ready version of Oboe, a C++ library for building real-time audio apps. One of its main benefits is the lowest possible audio latency across the widest range of Android devices. It is something like AndroidX for native audio.

How Oboe works

Apps communicate with Oboe by reading and writing data to streams. The library moves audio data between your app and the audio inputs and outputs on your Android device. Apps pass data in and out by reading from and writing to audio streams, represented by the class AudioStream. A stream consists of the following:

- Audio device: a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data, for example a built-in mic or a Bluetooth headset.
- Sharing mode: determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.
- Audio format: the format of the audio data in the stream.

The data passed through a stream has the usual digital audio attributes, which developers must specify when defining a stream:

- Sample format
- Samples per frame
- Sample rate

The sample formats Oboe allows are listed in a table in its documentation (source: GitHub).

What are its benefits

Oboe leverages the improved performance and features of AAudio on Oreo MR1 (API 27+) while maintaining backward compatibility on API 16+. Some of its benefits:

- You write and maintain less code: its C++ API lets you write clean and elegant code. With Oboe you can create an audio stream in just three lines of code, whereas the same thing in OpenSL ES requires 50+ lines.
- Accelerated release process: because Oboe ships as a source library, bug fixes can roll out in a few days, as opposed to the Android platform release cycle.
- Better bug handling and less guesswork: it provides workarounds for known audio bugs and sensible default behaviour for stream properties.
- Open source: it is open source and maintained by Google engineers.

To get started with Oboe, check out the full documentation and the code samples available on its GitHub repository. Also read the announcement posted on the Android Developers Blog.

- What role does Linux play in securing Android devices?
- A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
- Google announces updates to Chrome DevTools in Chrome 71


Google’s V8 JavaScript engine adds support for top-level await

Fatema Patrawala
25 Sep 2019
3 min read
Yesterday, Joshua Litt from the Google Chromium team announced that support for top-level await has landed in V8. V8 is Google's open-source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. It implements ECMAScript and WebAssembly, and runs on Windows 7 or later, macOS 10.12+, and Linux systems that use x64, IA-32, ARM, or MIPS processors. V8 can run standalone or be embedded into any C++ application.

The official documentation page on Google Chromium reads, "Adds support for parsing top level await to V8, as well as many tests. This is the final cl in the series to add support for top level await to v8."

Top-level await support will ease running JS scripts in V8

As per the latest ECMAScript proposal, top-level await allows the await keyword to be used at the top level of the module goal. Top-level await enables modules to act as big async functions: with top-level await, ECMAScript modules (ESM) can await resources, causing other modules that import them to wait before they start evaluating their body.

Earlier, developers used an IIFE (immediately invoked function expression, a JavaScript function that runs as soon as it is defined) for top-level awaits. This pattern has limitations: with await only available within async functions, a module can include await in the code that executes at startup only by factoring that code into an async function and invoking it immediately. The pattern is appropriate where loading a module is intended to schedule work that will happen some time later, but it offers no coordination. Top-level await instead lets developers rely on the module system itself to handle all of this and make sure that things are well coordinated.

The community is really happy to see top-level await support added to V8. On Hacker News, one user commented, "This is huge! Finally no more need to use IIFE's for top level awaits."

Another user commented, "Top level await does more than remove a main function. If you import modules that use top level await, they will be resolved before the imports finish. To me this is most important in node where it's not uncommon to do async operations during initialization. Currently you either have to export a promise or an async function."

To know more, read the official Google Chromium documentation page.

Other interesting news in web development:

- New memory usage optimizations implemented in V8 Lite can also benefit V8
- LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces
- V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
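The IIFE workaround has a close analogue in Python, where `await` is likewise only legal inside `async` functions: a module must factor its startup logic into a coroutine and invoke it explicitly. This sketch (an illustrative analogy, not part of the V8 change; the function names are invented) shows the same shape as the JavaScript pattern described above:

```python
import asyncio

async def fetch_config() -> dict:
    # Stand-in for an awaitable startup resource (e.g. a network call).
    await asyncio.sleep(0)
    return {"ready": True}

# Without top-level await, module-level code cannot `await` directly;
# the work is factored into an async "main" and invoked explicitly --
# the Python equivalent of the JavaScript IIFE pattern.
async def main() -> dict:
    config = await fetch_config()
    return config

if __name__ == "__main__":
    print(asyncio.run(main()))
```

With top-level await, a JavaScript module can drop the wrapper entirely and let the module system coordinate the waiting.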


Apple’s macOS Catalina in major turmoil as it kills iTunes and drops support for 32-bit applications

Fatema Patrawala
09 Oct 2019
4 min read
Yesterday, Apple released macOS Catalina, its latest update for Macs and MacBooks. The new operating system can be installed from the homepage of the App Store. Catalina brings a host of new features, including the option to use apps from the iPad as well as turn the tablet into an additional display for computers. But this new update kills iTunes and faces some major issues. Apple has confirmed that there are serious issues in macOS Catalina, and affected consumers should refrain from updating until these issues are addressed.

Catalina is finally the download that kills iTunes, which is nowhere to be found in the new update. Instead, Apple has moved the features of iTunes into their own separate Music app; the new update also includes separate apps for Podcasts and TV.

The macOS Catalina update is a big problem for DJs who rely on iTunes

The Mac platform is especially popular with DJs, who cart around MacBook Pro machines jam-packed with music, playlists, mixes, and specialist software to allow them to perform every evening. These have been tied to iTunes' underlying XML database. But after nearly two decades, iTunes is discontinued in macOS Catalina, and the XML file no longer exists to index a local music collection. This has broken popular and niche music tools alike, including major titles such as Traktor and Rekordbox.

The Verge reports that Apple has confirmed this issue is down to its removal of the XML file, but is handing responsibility to the third-party developers behind each app. Unfortunately for Apple's reputation, those developers had been expecting the new standalone Music app to be able to export an XML file, a feature Apple suggested would be available, until they could code around its absence.

Fact Mag also reported, "this news contradicts Apple's earlier assertion that there would be a way to manually export the XML file from the new Music app, though Catalina's launch yesterday now proves this isn't the case at all."

Apple advises DJs that if you rely on software that needs this XML file to function, do not update to Catalina until individual developers have issued compatibility updates for the new operating system.

Catalina drops support for 32-bit applications and faces other issues as well

Catalina also drops support for 32-bit applications: they simply will not run under the new system, as this version of macOS is 64-bit only. If you are a Mac user reliant on a 32-bit app, you get just a single dialog on installation that warns of the loss of support. This raises other questions a user will need answers to: which of your apps are 32-bit and which are 64-bit? If one is mission-critical in your role, is a 64-bit alternative available?

It's not just this; a number of creative tools, including Apple Aperture, Microsoft Office 2011, and Adobe CS6, are also experiencing issues with Catalina. Additionally, there are font problems in macOS Catalina: as per the Chromium blog, the macOS system font appears "off", too light, with tight kerning.

It is clear that Apple wants to push forward with its platforms, but it needs to remember that the hardware has to work in the real world today. Apple should be consistent in what features it offers, it should provide clear and accurate information to developers and users, and it should ensure at the very least that its own store is in order.

- TextMate 2.0, the text editor for macOS, releases
- macOS terminal emulator iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more
- Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more!
- WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad, and more
- Apple plans to make notarization a default requirement in all future macOS updates


Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud

Natasha Mathur
15 Mar 2019
3 min read
Microsoft announced yesterday that it is open-sourcing its new cutting-edge compression technology, called Project Zipline. As part of this open-source release, the Project Zipline compression algorithms, hardware design specifications, and register transfer level (RTL) Verilog source code have been made available.

Apart from the announcement of Project Zipline, the Open Compute Project (OCP) Global Summit 2019 also started yesterday in San Jose. At the summit, the latest innovations that can make hardware more efficient, flexible, and scalable are shared. Microsoft states that its journey with OCP began in 2014, when it joined the foundation and contributed the server and data center designs that power its global Azure cloud. Microsoft contributes innovations to OCP every year at the summit, and this year its contribution is Project Zipline.

"This contribution will provide collateral for integration into a variety of silicon components across the industry for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading," states the Microsoft team.

Project Zipline aims to optimize the hardware implementation of compression for the different types of data found in cloud storage workloads. Microsoft has been able to achieve higher compression ratios, higher throughput, and lower latency than the other algorithms currently available. This allows for compression without compromise, as well as data processing for different industry usage models (from cloud to edge). Microsoft's Project Zipline compression algorithm produces up to 2x higher compression ratios compared to the commonly used Zlib-L4 64KB model. These enhancements, in turn, produce direct customer benefits in cost savings and give customers access to petabytes or exabytes of capacity in a cost-effective way.

Project Zipline has also been optimized for a large variety of datasets, and Microsoft's release of the RTL allows hardware vendors to use a reference design that offers the highest compression, lowest cost, and lowest power available in an algorithm. Project Zipline is available to the OCP ecosystem, so vendors can contribute further to benefit Azure and other customers. The Microsoft team states that this open-source contribution will set a "new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level."

In the future, Microsoft expects Project Zipline compression technology to enter different market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices. For more information, check out the official Microsoft announcement.

- Microsoft open sources the Windows Calculator code on GitHub
- Microsoft open sources 'Accessibility Insights for Web', a Chrome extension to help web developers fix their accessibility issues
- Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models
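Zipline's RTL itself targets silicon, but the compression-ratio metric quoted above (original size divided by compressed size) is easy to illustrate against the Zlib family Microsoft uses as its baseline. A minimal Python sketch using the standard-library `zlib`; the sample data is invented for illustration:

```python
import zlib

# Highly redundant sample data, the kind cloud storage workloads often hold.
data = b"timestamp=2019-03-15;status=ok;" * 2048

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)

# Compression ratio = original size / compressed size. Zipline's claim is
# up to 2x the ratio a Zlib-style codec achieves on cloud workloads.
print(f"{len(data)} -> {len(compressed)} bytes, ratio {ratio:.1f}x")
assert zlib.decompress(compressed) == data  # lossless round trip
```

A doubled ratio at the same throughput translates directly into halved storage cost for the same data, which is why the comparison against Zlib-L4 64KB is the headline number.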


Graph Nets – DeepMind's library for graph networks in TensorFlow and Sonnet

Sunith Shetty
19 Oct 2018
3 min read
Graph Nets is DeepMind's new library for building graph networks in TensorFlow and Sonnet. Last week, a paper, "Relational inductive biases, deep learning, and graph networks", was published on arXiv by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. The paper introduces a new machine learning framework called graph networks, which is expected to bring new innovations to the artificial general intelligence realm.

What are graph networks?

Graph networks generalize and extend various types of neural networks to perform calculations on graphs. They can implement relational inductive bias, a technique used for reasoning about inter-object relations. The graph networks framework is based on graph-to-graph modules. Each graph's features are represented by three characteristics:

- Nodes
- Edges: relations between the nodes
- Global attributes: system-level properties

A graph network takes a graph as input, performs the required operations and calculations from the edges, to the nodes, and to the global attributes, and then returns a new graph as output. The research paper argues that graph networks can support two critical human-like capabilities:

- Relational reasoning: drawing logical conclusions about how different objects and things relate to one another
- Combinatorial generalization: constructing new inferences, behaviors, and predictions from known building blocks

To understand and learn more about graph networks, you can refer to the official research paper.

Graph Nets

The Graph Nets library can be installed from pip. To install the library, run the following command:

$ pip install graph_nets

The installation is compatible with Linux/Mac OS X and Python versions 2.7 and 3.4+.

The library includes Jupyter notebook demos which allow you to create, manipulate, and train graph networks to perform operations such as a shortest-path-finding task, a sorting task, and a prediction task. Each demo uses the same graph network architecture, thus showing the flexibility of the approach. You can try out the various demos in your browser using Colaboratory; in other words, you don't need to install anything locally when running the demos in the browser (or on a phone) via the cloud Colaboratory backend. You can also run the demos on your local machine by installing the necessary dependencies.

What's ahead?

The concept draws on ideas not only from artificial intelligence research but also from the computer and cognitive sciences. Graph networks are still an early-stage research theory which does not yet offer convincing experimental results, but it will be very interesting to see how well graph networks live up to the hype as they mature.

To try out the open-source library, you can visit the official GitHub page. To provide comments or suggestions, you can contact graph-nets@google.com.

- 2018 is the year of graph databases. Here's why.
- Why Neo4j is the most popular graph database
- Pytorch.org revamps for PyTorch 1.0 with design changes and added static graph support
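The edge-to-node-to-global update order described above can be sketched in plain Python. This is a toy illustration of the data flow only: the real library represents graphs as TensorFlow tensors in a `GraphsTuple`, and the field names and update rules here are chosen for clarity, not taken from its API:

```python
# A tiny graph: 3 nodes with scalar features, directed edges, one global.
graph = {
    "nodes": [1.0, 2.0, 3.0],             # per-node features
    "edges": [(0, 1, 0.5), (1, 2, 1.5)],  # (sender, receiver, feature)
    "globals": 0.0,                       # system-level property
}

def update(g):
    # 1. Edge update: combine each edge feature with its sender node.
    edges = [(s, r, f + g["nodes"][s]) for s, r, f in g["edges"]]
    # 2. Node update: add each node's incoming (receiver-side) edge features.
    nodes = [
        n + sum(f for _, r, f in edges if r == i)
        for i, n in enumerate(g["nodes"])
    ]
    # 3. Global update: aggregate the node features into the global.
    globals_ = g["globals"] + sum(nodes)
    return {"nodes": nodes, "edges": edges, "globals": globals_}

new_graph = update(graph)  # a new graph, same structure, updated features
```

The key property mirrored here is that the module consumes a graph and emits a graph of the same structure, which is what lets graph-to-graph modules be stacked and reused across the different demo tasks.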


Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”

Sugandha Lahoti
03 Apr 2019
4 min read
Epic Games released a new version of its flagship game engine, Unreal Engine 4.22. This release comes with a total of 174 improvements, focused on "pushing the boundaries of photorealism in real-time environments". It also comes with improved build times, up to 3x faster, and new features such as real-time ray tracing. Unreal Engine 4.22 also adds support for Microsoft HoloLens remote streaming and Visual Studio 2019.

What's new in Unreal Engine 4.22?

- Real-time ray tracing and path tracing (early access): The ray tracing features, first introduced in a preview in mid-February, are composed of a series of ray tracing shaders and ray tracing effects. They help in achieving natural, realistic-looking lighting effects in real time. The Path Tracer includes a full global illumination path for indirect lighting that creates ground-truth reference renders right inside the engine. This improves workflow for content in a scene without needing to export to a third-party offline path tracer for comparison.
- New mesh drawing pipeline: The new pipeline for mesh drawing results in faster caching of information for static scene elements. Automatic instancing merges draw calls where possible, resulting in four to six times fewer lines of code. This change is a big one, so backwards compatibility for Drawing Policies is not possible; any custom Drawing Policies will need to be rewritten as FMeshPassProcessors in the new architecture.
- Multi-user editing (early access): Simultaneous multi-user editing allows multiple level designers and artists to connect multiple instances of Unreal Editor together to work collaboratively in a shared editing session.
- Faster C++ iterations: Epic has licensed Molecular Matters' Live++ for all developers to use on their Unreal Engine projects and integrated it as the new Live Coding feature. Developers can now make C++ code changes in their development environment and compile and patch them into a running editor or standalone game in a few seconds. UE 4.22 also optimizes UnrealBuildTool and UnrealHeaderTool, reducing build times and resulting in up to 3x faster iterations when making C++ code changes.
- Improved audio with TimeSynth (early access): TimeSynth is a new audio component with features like sample-accurate starting, stopping, and concatenation of audio clips. It also includes precise and synchronous audio event queuing.
- Enhanced animation: Unreal Engine 4.22 comes with a new Animation Plugin which is based upon the Master-Pose Component system and adds blending and additive animation states. It reduces the overall amount of animation work required for a crowd of actors. This release also features an Anim Budgeter tool to help developers set a fixed budget per platform (ms of work to perform on the game thread).
- Improvements in the virtual production pipeline:
  - New Composure UI: Unreal's built-in compositing tool, Composure, has an updated UI for real-time compositing that builds images, video feeds, and CG elements directly within the Unreal Engine.
  - OpenColorIO (OCIO) color profiles: Unreal Engine now supports the OpenColorIO framework for transforming the color space of any Texture or Composure Element directly within the Unreal Engine.
  - Hardware-accelerated video decoding (experimental): On Windows platforms, UE 4.22 can use the GPU to speed up the processing of H.264 video streams, reducing the strain on the CPU when playing back video.
  - New media I/O formats: UE 4.22 ships with new professional video I/O input formats and devices, including 4K UHD inputs for both AJA and Blackmagic, and AJA Kona 5 devices.
  - nDisplay improvements (experimental): Several new features make the nDisplay multi-display rendering system more flexible, handling new kinds of hardware configurations and inputs.

These were just a select few updates. To learn more about Unreal Engine 4.22, head over to the Unreal Engine blog.

- Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
- Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
- Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]

A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news

Fatema Patrawala
27 Jun 2019
7 min read
On Tuesday, the University of Cambridge published research performed on thousands of online game players. The study shows how an online game can work like a "vaccine" and increase skepticism towards fake news, by giving people a weak dose of the methods behind disinformation campaigns.

In February last year, University of Cambridge researchers helped launch the browser game Bad News. In this game you take on the role of a fake news-monger: drop all pretense of ethics and choose a path that builds your persona as an unscrupulous media magnate, while keeping an eye on your 'followers' and 'credibility' meters. The task is to get as many followers as you can while slowly building up fake credibility as a news site. You lose if you tell obvious lies or disappoint your supporters!

Jon Roozenbeek, study co-author from Cambridge University, and Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, worked with Dutch media collective DROG and design agency Gusmanson to develop Bad News. DROG develops programs and courses and also conducts research aimed at recognizing disinformation online. The game is primarily available in English, with versions in many other languages: Czech, Dutch, German, Greek, Esperanto, Polish, Romanian, Serbian, Slovenian, and Swedish. They have also developed a special Junior version for children aged 8 to 11.

Roozenbeek said: "We are shifting the target from ideas to tactics. By doing this, we are hoping to create what you might call a general 'vaccine' against fake news, rather than trying to counter each specific conspiracy or falsehood." He further added, "We want to develop a simple and engaging way to establish media literacy at a relatively early age, then look at how long the effects last."

The study says that the game increased psychological resistance to fake news

After the game became available, thousands of people spent fifteen minutes completing it, and many allowed their data to be used for the research. According to a study of 15,000 participants, the game has been shown to increase "psychological resistance" to fake news. Players stoke anger and fear by manipulating news and social media within the simulation: they deploy Twitter bots, photoshop evidence, and incite conspiracy theories to attract followers, all while maintaining a "credibility score" for persuasiveness.

"Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after the fact can be like fighting a losing battle," said Dr Sander van der Linden. "We wanted to see if we could preemptively debunk, or 'pre-bunk', fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived. This is a version of what psychologists call 'inoculation theory', with our game working like a psychological vaccination."

The study was performed by asking players to rate the reliability of content before and after gameplay

To gauge the effects of the game, players were asked to rate the reliability of a series of different headlines and tweets before and after gameplay. They were randomly allocated a mixture of real and fake news. There were six "badges" to earn in the game, each reflecting a common strategy used by creators of fake news: impersonation, conspiracy, polarisation, discrediting sources, trolling, and emotionally provocative content. In-game questions measured the effects of Bad News for four of its featured fake news badges.

For the disinformation tactic of "impersonation", which involves mimicking trusted personalities on social media, the game reduced the perceived reliability of fake headlines and tweets by 24% from pre- to post-gameplay. It further reduced the perceived reliability of deliberately polarising headlines by about 10%, and of "discrediting sources" (attacking a legitimate source with accusations of bias) by 19%. For "conspiracy", the spreading of false narratives blaming secretive groups for world events, perceived reliability was reduced by 20%. The researchers also found that those who registered as most susceptible to fake news headlines at the start benefited most from the "inoculation".

"We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news," said van der Linden.

The sample for the study was skewed towards younger males

The sample was self-selecting (those who came across the game online and opted to play) and as such was skewed toward younger, male, liberal, and more educated demographics. Hence, this first set of results from Bad News has its limitations, say the researchers. However, the study found the game to be almost equally effective across age, education, gender, and political persuasion. The researchers did not mention whether they plan a follow-up study that addresses the limitations of this research.

"Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news," added Roozenbeek.

Community discussion revolves around various fake news reporting techniques

The news has attracted much attention on Hacker News, where users have commented about various news reporting techniques that journalists use to promote different stories.

One user's comment reads, "The 'best' fake news these days is the stuff that doesn't register even to people are read-in on the usual anti-patterns. Subtle framing, selective quotation, anonymous sources, 'repeat the lie' techniques, and so on, are the ones that I see happening today that are hard to immunize yourself from. Ironically, the people who fall for these are more likely to self-identify as being aware and clued in on how to avoid fake news."

Another user says, "Second best. The best is selective reporting. Even if every story is reported 100% accurately and objectively, by choosing which stories are promoted, and which buried, you can set any agenda you want."

One commenter also argued that the discussion dilutes the term fake news into influence operations and propaganda: "This discussion is falling into a trap where 'Fake News' is diluted to synonym for all influencing news and propaganda. Fake News is propaganda that consists of deliberate disinformation or hoaxes. Nothing mentioned here falls into a category of Fake News. Fake News creates cognitive dissonance and distrust. More subtler methods work differently. 'But mainstream media also does Fake News' arguments are whataboutism."

To this, another user responds, "I've upvoted you because you make a good point, but I disagree. IMO, Fake News, in your restrictive definition, is to modern propaganda what Bootstrap is to modern frontend dev. It's an easy shortcut, widely known, and even talented operators are going to use it because it's the easiest way to control a (domestic or foreign) population. But resources are there, funding is there, to build much more subtle/complex systems if needed. Cut away Bootstrap, and you don't particularly dent the startup ecosystem. Cut away fake news, and you don't particularly dent the ability of troll farms to get work done. We're in a new era, fake news or not."

- Game rivals Microsoft and Sony form a surprising cloud gaming and AI partnership
- DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games
- Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users


Core Python team confirms sunsetting Python 2 on January 1, 2020

Vincy Davis
10 Sep 2019
3 min read
Yesterday, the team behind Python posted details about the sunsetting of Python 2. As announced before, post January 1, 2020, Python 2 will not be maintained by the Python team. This means that it will no longer receive new features and it will not be improved even if a security problem is found in it. https://twitter.com/gvanrossum/status/1170949978036084736 Why is Python 2 retiring? In the detailed post, the Python team explains that the huge alterations needed in Python 2 led to the birth of Python 3 in 2006. To keep users happy, the Python team kept improving and publishing both the versions together. However, due to some changes that Python 2 couldn’t  handle and scarcity of time required to improve Python 3 faster, the Python team has decided to sunset the second version. The team says, “So, in 2008, we announced that we would sunset Python 2 in 2015, and asked people to upgrade before then. Some did, but many did not. So, in 2014, we extended that sunset till 2020.” The Python team has clearly stated that January 1, 2020 onwards, they will not upgrade or improve the second version of Python even if a fatal security problem crops up in it. Their advice to Python 2 users is to switch to Python 3 using the official porting guide as the former will not support many tools in the future. On the other hand, Python 3 supports graph for all the 360 most popular Python packages. Users can also check out the ‘Can I Use Python 3?’ to find out which tools need to upgrade to Python 3. Python 3 adoption has begun As the end date of Python has been decided earlier on, many implementations of Python have already dropped support for Python 2 or are supporting both Python 2 and 3 for now. Two months ago, NumPy, the library for Python programming language officially dropped support for Python 2.7 in its latest version NumPy 1.17.0. It will only support Python versions 3.5 – 3.7. Earlier this year, pandas 0.24 stopped support for Python 2. 
Pandas maintainer Jeff Reback had said, "It's 2019 and Python 2 is slowly trickling out of the PyData stack." However, not all projects are fully on board yet, and there have also been efforts to keep Python 2 alive. In August this year, PyPy announced that it does not plan to deprecate Python 2.7 support as long as PyPy exists.

https://twitter.com/pypyproject/status/1160209907079176192

Many users are happy to say goodbye to the second version of Python in favor of building towards a long-term vision.

https://twitter.com/mkennedy/status/1171132063220502528
https://twitter.com/MeskinDaniel/status/1171244860386480129

A user on Hacker News comments, "In 2015, there was no way I could have moved to Python 3. There were too many libraries I depended on that hadn't ported yet. In 2019, I feel pretty confident about using Python 3, having used it exclusively for about 18 months now. For my personal use case at least, this timeline worked out well for me. Hopefully it works out for most everyone. I can't imagine they made this decision without at least some data backing it up."

Head over to the Python website for more details about this news.

Latest news in Python

Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"

Apple introduces Swift Numerics to support numerical computing in Swift

Bhagyashree R
08 Nov 2019
2 min read
Yesterday, Steve Canon, a member of Apple's Swift Standard Library team, announced a new open-source project called Swift Numerics. The goal behind this project is to enable the use of the Swift language in new domains of programming.

What is Swift Numerics

Swift Numerics is a Swift package containing a set of fine-grained modules. These modules fall broadly into two categories: modules that are too specialized to be included in the standard library but general enough to live in a single common package, and modules that are "under active development toward possible future inclusion in the standard library."

Currently, Swift Numerics ships the two most-requested modules: Real and Complex. The Real module provides the basic math functions proposed in SE-0246. That proposal was accepted, but due to some limitations in the compiler it is not yet possible to add the new functions directly to the standard library. Real provides these functions in a separate module so that developers can start using them right away in their projects.

The Complex module introduces a Complex number type over an underlying Real type. It includes the usual arithmetic operators for complex numbers and conforms to the usual protocols such as Equatable, Hashable, Codable, and Numeric. Support for complex numbers can be especially useful when working with Fourier transforms and signal processing algorithms.

The modules included in Swift Numerics have minimal dependencies. For instance, the current modules only require the availability of the Swift and C standard libraries and the runtime support provided by compiler-rt. The Swift Numerics package is open-sourced under the same license and contribution guidelines as the Swift project (Apache License 2.0).

In a discussion on Hacker News, many developers shared their views on Swift Numerics. A user commented, "Really looking forward to ShapedArray. Eventually, a lot of what one might do with Python may be available in Swift."

Read the official announcement by Apple to know more about Swift Numerics, and check out its GitHub repository.

Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly
Swift is improving the UI of its generics model with the "reverse generics" system
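Swift syntax aside, the arithmetic and protocol conformances the Complex module provides mirror complex-number support in other languages; Python's built-in `complex` type illustrates the same operations (this is an analogy, not the Swift API):

```python
# Illustration of the kind of complex arithmetic Swift Numerics' Complex
# module provides, using Python's built-in complex type (not the Swift API).
a = complex(3, 4)   # 3 + 4i
b = complex(1, -2)  # 1 - 2i

product = a * b            # (3+4i)(1-2i) = 3 - 6i + 4i + 8 = 11 - 2i
magnitude = abs(a)         # sqrt(3^2 + 4^2) = 5.0
conjugate = a.conjugate()  # 3 - 4i

# Equality and hashing work as expected -- the analogue of Swift's
# Equatable and Hashable conformances.
print(product, magnitude, conjugate)
```

The same operators and value semantics are what the Swift type exposes over its underlying Real type.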

Btrfs now boots ReactOS, a free and open source alternative for Windows NT

Savia Lobo
31 Jul 2018
2 min read
Google Summer of Code (GSoC), a global program focused on introducing students to open source software development, is nearing the end of its competition for this year. A student developer named Victor Perevertkin has been successful in his GSoC 2018 project on Btrfs file-system support for ReactOS: he has been able to boot the Windows API/ABI-compatible OS off Btrfs.

ReactOS is a free and open-source operating system compatible with applications and drivers written for the Microsoft Windows NT family of operating systems (NT4, 2000, XP, 2003, Vista, Seven).

For his GSoC 2018 project, Perevertkin has been working on Btrfs support within the ReactOS bootloader, as well as other fixes needed to allow ReactOS to be installed on and boot from a Btrfs file system. Btrfs is a case-sensitive file system, so paths like /ReactOS/System32, /reactos/system32, and /ReactOS/system32 are all different there. Windows, however, is written assuming that case does not matter during path lookup. This issue is solved in the WinBtrfs driver, but for FreeLoader it can be a bit tricky.

After Perevertkin was done with the FreeLoader development and had fixed the VirtualBox bug, he was able to get to the first error message from Btrfs-booted ReactOS. He later found out that this was due to a bug in the WinBtrfs driver; a pull request with a bugfix has been submitted to the upstream repository on GitHub. At present, ReactOS is able to boot from a Btrfs partition and is in a quite stable state, though some problems are yet to be addressed.

Read about this news in detail on the ReactOS Blog.

ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
Google's App Maker, a low-code tool for building business apps, is now generally available
5 reasons you should learn to code
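The case-sensitivity mismatch described above can be sketched as a lookup that first tries an exact match and then falls back to a case-folded scan; this is a conceptual illustration in Python, not the FreeLoader or WinBtrfs code:

```python
# Conceptual sketch of case-insensitive path lookup over a case-sensitive
# namespace (not the actual FreeLoader/WinBtrfs implementation).
def resolve(entries, requested):
    """entries: directory names as stored on disk (case-sensitive).
    requested: the name a Windows-style caller asked for."""
    # An exact match wins first, as it would on any file system.
    if requested in entries:
        return requested
    # Fall back to a case-folded scan, mimicking NT path semantics.
    folded = requested.casefold()
    for name in entries:
        if name.casefold() == folded:
            return name
    return None

on_disk = ["ReactOS", "System32", "readme.txt"]
print(resolve(on_disk, "reactos"))   # finds "ReactOS"
print(resolve(on_disk, "SYSTEM32"))  # finds "System32"
```

A real driver does this per path component during traversal; the fallback scan is what makes the Windows-style lookup succeed even though Btrfs stores names case-sensitively.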

What to expect from vSphere 6.7

Vijin Boricha
11 May 2018
3 min read
VMware has announced the latest release of its industry-leading virtualization platform, vSphere 6.7. With vSphere 6.7, IT organizations can address key infrastructure demands like:

Extensive growth in the quantity and diversity of applications delivered
Increased adoption of hybrid cloud environments
Global expansion of data centers
Robust infrastructure and application security

Let's take a look at some of the key capabilities of vSphere 6.7:

Effortless and efficient management: vSphere 6.7 builds on the innovations delivered by vSphere 6.5 and takes the customer experience to another level. With vSphere 6.7 you get management simplicity, operational efficiency, and faster time to market, all at scale. It comes with an enhanced vCenter Server Appliance (vCSA) and new APIs that improve multi-vCenter deployments, which results in easier management of the vCenter Server Appliance as well as backup and restore. Customers can now link multiple vCenters and have seamless visibility across their environment without depending on external platform services or load balancers.

Extensive security capabilities: vSphere 6.7 enhances the security capabilities introduced in vSphere 6.5. It adds support for Trusted Platform Module (TPM) 2.0 hardware devices and introduces Virtual TPM 2.0, bringing significant enhancements to both hypervisor and guest operating system security. This capability prevents VMs and hosts from being tampered with, blocks the loading of unauthorized components, and enables the desired guest operating system security features. VM Encryption is further enhanced and operationally simpler to manage, enabling encrypted vMotion across different vCenter instances. vSphere 6.7 also extends its security features through the collaboration between VMware and Microsoft, ensuring secure Windows VMs on vSphere.
Universal application platform: vSphere is now a universal application platform that supports existing mission-critical applications along with new workloads such as 3D graphics, big data, machine learning, cloud-native applications, and more. It also supports some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads. Through the collaboration between VMware and Nvidia, vSphere 6.7 further extends its GPU support by virtualizing Nvidia GPUs for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, and big data. With these enhancements, customers get better lifecycle management of hosts, reducing disruption for end users. VMware plans to invest more in this area in order to bring full vSphere support to GPUs in future releases.

A seamless hybrid cloud experience: As customers increasingly look at hybrid cloud options, vSphere 6.7 introduces vCenter Server Hybrid Linked Mode. It gives customers unified manageability and visibility across an on-premises vSphere environment and a VMware Cloud on AWS environment, even when the two run different versions of vSphere. To further the seamless hybrid cloud experience, vSphere 6.7 delivers a new capability called Per-VM EVC, which allows seamless migration across different CPUs.

This is only an overview of the key capabilities of vSphere 6.7. You can learn more about this release from the VMware vSphere Blog and the VMware release announcement.

Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
VMware vSphere storage, datastores, snapshots
The key differences between Kubernetes and Docker Swarm

Giving material.angular.io a refresh

Matthew Emerick
07 Oct 2020
3 min read
Hi everyone, I'm Annie and I recently joined the Angular Components team after finishing up my rotations as an Engineering Resident here at Google. During the first rotation of my residency I worked on the Closure Compiler and implemented some new ES2020 features, including nullish coalescing and optional chaining. After that, my second rotation project was with Angular Components, where I took on giving material.angular.io a long-awaited facelift.

If you have recently visited the Angular Material documentation site you will have noticed some new visual updates. We've included new vibrant images on the components page, updates to the homepage, a guides page revamp, and so much more! Today I would like to highlight how we generated these fun, colorful images.

We were inspired by the illustrations on the Material Design components page, which had aesthetic abstract designs representing each component. We wanted to adapt the idea for material.angular.io but had some constraints and requirements to consider. First of all, we didn't have a dedicated illustrator or designer for the project because of the tight deadline of my residency. Second, we wanted the images to be compact but clearly showcase each component and its usage. Finally, we wanted to be able to update these images easily whenever a component's appearance changed. For the team the choice became clear: we were going to need to build something ourselves to meet these requirements.

While weighing our design options, we decided that we preferred a more realistic view of the components instead of abstract representations. This is where we came up with the idea of creating "scenes" for each component and capturing them as they would appear in use. We needed a way to efficiently capture these components, so we turned to a technique called screenshot testing. Screenshot testing captures an image of the page at a provided URL and compares it to an expected image.
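The golden-file bookkeeping behind screenshot testing can be sketched in a few lines; the `capture` argument below is a stand-in for a real browser screenshot (the team drove Protractor), and the "update" mode mirrors saving a capture as the new expected image:

```python
# Sketch of screenshot-testing bookkeeping: compare a fresh capture against
# a stored golden image, or save the capture as the new golden when
# updating. `capture` is a stand-in for a real browser screenshot.
import os

def check_screenshot(route, capture, golden_dir, update=False):
    name = route.strip("/").replace("/", "_") + ".png"
    golden_path = os.path.join(golden_dir, name)
    image = capture(route)  # bytes of the rendered page
    if update or not os.path.exists(golden_path):
        with open(golden_path, "wb") as f:
            f.write(image)  # save as the new expected image
        return "updated"
    with open(golden_path, "rb") as f:
        return "match" if f.read() == image else "mismatch"
```

Running every route in "update" mode after a component change is what makes regenerating all the images a one-step job.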
Using this technique we were able to generate the scenes for all 35 components. Here's how we did it:

Set up a route for each component that contains a "scene" using the actual Material component
Create an end-to-end testing environment and take screenshots of each route with Protractor
Save the screenshots instead of comparing them to an expected image
Load the screenshots from the site

One of the benefits of our approach is that whenever we update a component, we can just take new screenshots. This process saves incredible amounts of time and effort.

To create each of the scenes we held a mini hackathon to come up with fun ideas! For example, for the button component (top) we wanted to showcase all the different types and styles of buttons available (icon, FAB, raised, etc.). For the button toggle component (bottom) we wanted to show the toggle in both states in a realistic scenario where someone might use a button toggle.

Conclusion

It was really exciting to see the new site go live with all the changes we made and we hope you enjoy them too! Be sure to check out the site and let us know what your favorite part is! Happy coding, friends!

Giving material.angular.io a refresh was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs

Sugandha Lahoti
22 Oct 2019
3 min read
In a new study, security researchers from SRLabs have exposed a serious vulnerability in smart speakers from Amazon and Google, which they call the Smart Spies attack. According to SRLabs, smart speaker voice apps, Skills for Alexa and Actions on Google Home, can be abused to eavesdrop on users or vish (voice-phish) their passwords. The researchers demonstrated that with the Smart Spies attack they can get these smart speakers to silently record users or ask for their Google account passwords, simply by uploading malicious software disguised as an Alexa Skill or Google Action.

The SRLabs team added the "�. " (U+D801, dot, space) character sequence at various locations inside the backend of a normal Alexa/Google Home app. They tell a user that an app has failed, insert the "�. " sequence to induce a long pause, and then prompt the user with the phishing message after a few minutes. This tricks users into believing the phishing message has nothing to do with the previous app with which they interacted. Using this sequence, the voice assistants keep listening for much longer than usual for further commands, and anything the user says is automatically transcribed and can be sent directly to the attacker.

This revelation is unsurprising considering Alexa and Google Home have been caught phishing and eavesdropping before. In June of this year, two lawsuits were filed in Seattle alleging that Amazon records voiceprints of children using its Alexa devices without their consent. Later, Amazon employees were found listening to Echo audio recordings, followed by Google's language experts doing the same.

SRLabs researchers urge users to be aware of the Smart Spies attack and the potential for malicious voice apps to abuse their smart speakers. They caution users to pay attention to third-party app sources when installing a new voice app on their speakers.
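One defensive measure is to strip unpronounceable characters, such as the lone surrogate U+D801 used above, from a voice app's output before it reaches text-to-speech; a minimal Python sketch (illustrative only, not the vendors' actual review tooling):

```python
# Sketch of one mitigation: strip unpronounceable characters -- such as
# the lone surrogate U+D801 used in the Smart Spies attack -- from a
# voice app's output text before it is spoken. Illustrative only.
import unicodedata

def sanitize_speech_output(text):
    cleaned = []
    for ch in text:
        # Category "Cs" = surrogate, "Cc"/"Cf" = control/format characters;
        # none are pronounceable, and long runs create silent pauses.
        if unicodedata.category(ch) in ("Cs", "Cc", "Cf"):
            continue
        cleaned.append(ch)
    return "".join(cleaned)

malicious = "Goodbye." + "\ud801. " * 5 + "Please say your password."
print(sanitize_speech_output(malicious))
```

A production review pipeline would pair a filter like this with checks on suspicious output text, such as any prompt containing "password".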
Measures suggested to Google and Amazon to avoid the Smart Spies attack

Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores. The voice app review needs to check explicitly for copies of built-in intents.
Unpronounceable characters like "�. " and silent SSML messages should be removed to prevent arbitrarily long pauses in the speakers' output.
Suspicious output texts, including "password", deserve particular attention or should be disallowed completely.

In a statement provided to Ars Technica, Amazon said it has put new mitigations in place to prevent and detect skills from being able to do this kind of thing in the future, and that it takes down skills whenever this kind of behavior is identified. Google also told Ars Technica that it has review processes to detect this kind of behavior and has removed the Actions created by the security researchers. The company is conducting an internal review of all third-party Actions and has temporarily disabled some Actions while this takes place.

On Twitter, people condemned Google and Amazon and cautioned others not to buy their smart speakers.

https://twitter.com/ClaudeRdCardiff/status/1186577801459187712
https://twitter.com/Jake_Hanrahan/status/1186082128095825920

For more information, read the blog post on the Smart Spies attack by SRLabs.

Google's language experts are listening to some recordings from its AI assistant
Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon is being sued for recording children's voices through Alexa without consent

.NET Core releases May 2019 updates

Amrata Joshi
15 May 2019
3 min read
This month, during Microsoft Build 2019, the team behind .NET Core announced that .NET 5 will be coming in 2020. Yesterday, the .NET Core team released the May 2019 updates for versions 1.0.16, 1.1.14, 2.1.11 and 2.2.5. The updates include security and reliability fixes, along with updated packages.

Security updates in .NET Core

.NET Core tampering vulnerability (CVE-2019-0820)

A denial of service vulnerability exists when .NET Core improperly processes RegEx strings. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET application. Even a remote, unauthenticated attacker can exploit it by issuing specially crafted requests to a .NET Core application. This update addresses the vulnerability by correcting how .NET Core applications handle RegEx string processing. The security advisory provides information about this vulnerability in .NET Core 1.0, 1.1, 2.1 and 2.2.

Denial of service vulnerabilities in .NET Core and ASP.NET Core (CVE-2019-0980 & CVE-2019-0981)

A denial of service vulnerability exists when .NET Core and ASP.NET Core improperly handle web requests. An attacker who successfully exploits it can cause a denial of service against a .NET Core or ASP.NET Core application. The vulnerability can be exploited remotely and without authentication, by issuing specially crafted requests. This update addresses the vulnerabilities by correcting how .NET Core and ASP.NET Core web applications handle web requests. The security advisory provides information about these two vulnerabilities (CVE-2019-0980 & CVE-2019-0981) in .NET Core and ASP.NET Core 1.0, 1.1, 2.1, and 2.2.

ASP.NET Core denial of service vulnerability (CVE-2019-0982)

A denial of service vulnerability exists when ASP.NET Core improperly handles web requests.
An attacker who successfully exploits this vulnerability can cause a denial of service against an ASP.NET Core web application. The vulnerability can be exploited remotely and without authentication, by issuing specially crafted requests to the ASP.NET Core application. This update addresses the vulnerability by correcting how ASP.NET Core web applications handle web requests. The security advisory provides information about this vulnerability (CVE-2019-0982) in ASP.NET Core 2.1 and 2.2.

Docker images

The .NET Docker images have been updated, including the microsoft/dotnet, microsoft/dotnet-samples, and microsoft/aspnetcore repos.

Users can get the latest .NET Core updates on the .NET Core download page. To know more about this news, check out the official announcement.

.NET 5 arriving in 2020!
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
.NET for Apache Spark Preview is out now!
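Microsoft has not published the patterns involved, but RegEx denial of service generally comes from catastrophic backtracking; a generic illustration in Python (not the .NET Core code) of a vulnerable pattern and a safe rewrite:

```python
# Generic regex-DoS (ReDoS) illustration -- not the actual .NET Core code.
# Nested quantifiers like (a+)+ backtrack exponentially on non-matching
# input; the equivalent pattern a+ matches the same language in linear time.
import re

VULNERABLE = re.compile(r"^(a+)+$")  # exponential backtracking on failure
SAFE = re.compile(r"^a+$")           # same language, no nested quantifiers

# Both accept strings of one or more 'a's...
assert VULNERABLE.match("aaaa") and SAFE.match("aaaa")
# ...and both reject anything else. On long failing input like
# "a" * 40 + "!", the vulnerable pattern takes astronomically long while
# the safe one fails immediately -- so the demo input is kept tiny.
assert VULNERABLE.match("a" * 12 + "!") is None
assert SAFE.match("a" * 12 + "!") is None
print("patterns agree on small inputs")
```

Fixes for this class of bug typically rewrite the pattern, bound the input length, or switch to a non-backtracking engine.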

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer, talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian, a Unix-like operating system, consists of free software supported by the Debian community, which comprises individuals who care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project has been to have Debian ready for installation and running on systems that implement a variant of RISC-V, an open hardware instruction set architecture. The feedback regarding his presentation at the workshop was positive. Earlier this week, Montecelo announced an update on the status of the Debian GNU/Linux riscv64 port. The announcement comes weeks before the release of buster, which will come with another set of changes to benefit the port.

What is RISC-V used for and why is Debian interested in building this port?

According to the Debian wiki page, “RISC-V (pronounced "risk-five") is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets.
In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA:

Software-wise, this port will target the Linux kernel
Hardware-wise, the port will target the 64-bit variant, little-endian

This ISA variant is the "default flavour" recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers).”

Update on the Debian GNU/Linux riscv64 port

Image source: Debian

In the graph above, the percentage of arch-dependent packages built for riscv64 (grey line) has been at or above 80% since mid-2018. Arch-dependent packages make up almost half of Debian's [main, unstable] archive; arch-independent packages can be used by all ports, provided that the software they rely on is present. The update also highlights that around 90% of the packages in the whole archive have been made available for this architecture.

Image source: Debian

The graph above shows that the percentages are very stable for all architectures. Montecelo writes, “This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems).” Even the second-class ports appear to be stable. He adds, “Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things just work.”

According to him, apart from the work of the porters themselves, there are people working on bootstrapping issues who make it easier to bring up ports than in the past, and who help cope when toolchain support or other port-related issues blow up.
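The headline figures are consistent with each other; a quick back-of-the-envelope check (the 50% and 80% inputs are the article's approximations):

```python
# Back-of-the-envelope check of the article's figures: if roughly half of
# Debian's archive is architecture-dependent and ~80% of those packages
# build for riscv64 (arch-independent packages work on every port), the
# whole-archive availability lands near the ~90% the article reports.
arch_dep_share = 0.5      # arch-dependent packages: "almost half" the archive
arch_dep_built = 0.8      # "at or above 80%" built for riscv64
arch_indep_built = 1.0    # arch-independent packages are usable on any port

total_available = (arch_dep_share * arch_dep_built
                   + (1 - arch_dep_share) * arch_indep_built)
print(f"{total_available:.0%}")  # -> 90%
```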
He further added, “And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc.”

Future scope and improvements yet to come

Getting Debian running on RISC-V will not be easy, for various reasons, including the limited availability of hardware able to run the Debian port and the limited options for bootloaders. According to Montecelo, this is an area of improvement for them. He added, “Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.”

Presently, more than 500 packages from the Rust ecosystem (about 4% of the archive) cannot be built and used until Rust gains support for the architecture. Rust requires LLVM, and there is no Rust compiler based on GCC or other toolchains. Montecelo writes, “Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.”
And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.” Why are people excited about this? Many users seem to be excited about the news, one of the reasons being that there won’t be a need to bootstrap from scratch as Rust now will be able to cross-compile easily because of the Riscv64 support. A user commented on HackerNews, “Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.” Also, this appears to be a good news for Debian, as cross-compiling has really come a long way on Debian. Rest are awaiting for more to get incorporated with riscv. Another user commented, “I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip” Few others think that there is a need for LLVM support for riscv64. A user commented, “The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.” Another comment reads, “Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust.” According to users, the architecture of this port misses on two major points, one being the support for LLVM compiler and the other one being the support for Rust based on GCC. If the port gets the LLVM support by this year, users will be able to develop a front end for any programming language as well as a backend for any instruction set architecture. Now, if we consider the case of support for Rust based on GCC, then the port will help developers to get support for many language extensions as GCC provides the same. 
A user commented on Reddit, “The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019.” Another comment reads, “It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy.”

To know more about this news, check out Debian's official post.

Debian maintainer points out difficulties in Deep Learning Framework Packaging
Debian project leader elections goes without nominations. What now?
Are Debian and Docker slowly losing popularity?