
Tech News

Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser

Bhagyashree R
07 Mar 2019
2 min read
Yesterday, ZDNet reported that Mozilla will be adding a new anti-fingerprinting technique called letterboxing to Firefox 67, which is set to release in May this year. Letterboxing is part of the Tor Uplift project that started back in 2016 and is currently available to Firefox Nightly users. As part of the Tor Uplift project, the team is slowly bringing the privacy-focused features of the Tor Browser to Firefox. For instance, Firefox 55 came with support for a Tor Browser feature called First-Party Isolation (FPI), which prevents ad trackers from using cookies to track user activity by separating cookies on a per-domain basis.

What is letterboxing and why is it needed?

The dimensions of a browser window are a rich source of fingerprintable data for advertising networks. These networks can use browser window sizes to build user profiles and track users as they resize their browser and move across URLs and browser tabs. To protect users' online privacy, this window dimension data has to be masked continuously, even when users resize or maximize their window or enter fullscreen.

Letterboxing masks the real dimensions of the browser window by keeping the reported width and height at multiples of 200px and 100px during a resize operation, and then adds a gray space at the top, bottom, left, or right of the current page (a small sketch of this rounding appears below). Advertising code that tracks window resize events reads the rounded dimensions and sends them to its server; only then does Firefox remove the gray spaces. In this way, the tracking code is tricked into reading incorrect window dimensions. Here is a demo of letterboxing showing how exactly it works: https://www.youtube.com/watch?&v=TQxuuFTgz7M

The letterboxing feature is not enabled by default. To enable it, go to the 'about:config' page in the browser, enter "privacy.resistFingerprinting" in the search box, and toggle the browser's anti-fingerprinting features to "true".

To know more about letterboxing, check out ZDNet's website.

Read next:
Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
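To make the rounding behaviour concrete, here is a minimal TypeScript sketch of the idea described above. It is illustrative only, not Firefox's actual implementation; it simply rounds a window's reported dimensions down to multiples of 200px and 100px, with the leftover space being what the browser would fill with gray margins.

```typescript
// Illustrative sketch of letterboxing's dimension rounding, not Firefox code.
// Width is reported as a multiple of 200px and height as a multiple of 100px;
// the remainder is what the gray margins occupy.
function letterboxedSize(width: number, height: number): { width: number; height: number } {
  return {
    width: Math.floor(width / 200) * 200,
    height: Math.floor(height / 100) * 100,
  };
}

// A tracker reading the window size after a resize sees only the rounded values.
console.log(letterboxedSize(1366, 768)); // { width: 1200, height: 700 }
```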

Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications

Bhagyashree R
30 Oct 2018
2 min read
Yesterday, Google announced that Chrome 70 now supports WebAssembly threads. The WebAssembly Community Group has been working to bring support for threads to the web, and this release is a step towards that effort. Google's open source JavaScript and WebAssembly engine, V8, has implemented all the necessary support for WebAssembly threads.

Why is support for WebAssembly threads needed?

Earlier, parallelism in browsers was supported with the help of web workers. The downside of web workers is that they do not share mutable data between them; instead, they rely on message passing for communication. WebAssembly threads, on the other hand, can share the same Wasm memory. The underlying storage of shared memory is enabled by SharedArrayBuffer, a JavaScript primitive that allows the contents of a single ArrayBuffer to be shared concurrently between workers (a short sketch of this follows the article). Each WebAssembly thread runs in a web worker, but their shared Wasm memory allows them to work as fast as they do on native platforms. Applications that use Wasm threads are therefore responsible for managing access to the shared memory, as in any traditional threaded application.

How you can try this support

To test WebAssembly threads, you need to turn on the experimental WebAssembly threads support in Chrome 70 onwards. First, navigate to the chrome://flags URL in your browser (screenshot source: Google Developers). Next, go to the experimental WebAssembly threads setting (screenshot source: Google Developers). Now change the setting from Default to Enabled and then restart your browser (screenshot source: Google Developers).

The aforementioned steps are for development purposes. If you are interested in testing your application out in the field, you can do that with an origin trial. Origin trials allow you to try experimental features with your users by obtaining a testing token that is tied to your domain.

You can read more about WebAssembly thread support in Chrome 70 on the Google Developers blog.

Read next:
Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Testing WebAssembly modules with Jest [Tutorial]
Introducing Walt: A syntax for WebAssembly text format written 100% in JavaScript and needs no LLVM/binary toolkits
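As a rough illustration of the shared-memory model described above, here is a minimal TypeScript sketch (not the Chrome team's sample) in which the main thread and a web worker operate on the same SharedArrayBuffer. In a real WebAssembly-threads build, the Wasm module's linear memory would be the shared buffer, but the JavaScript-level mechanics are the same; the worker file name is a placeholder.

```typescript
// main.ts: create shared memory and hand it to a worker (the buffer is
// shared between threads, not copied). Note that modern browsers require
// cross-origin isolation headers before SharedArrayBuffer is available.
const shared = new SharedArrayBuffer(4);   // one 32-bit counter
const counter = new Int32Array(shared);

const worker = new Worker('worker.js');    // placeholder worker script
worker.postMessage(shared);

// Later, the main thread observes updates made concurrently by the worker.
setTimeout(() => console.log('counter =', Atomics.load(counter, 0)), 1000);

// worker.js would look roughly like this:
// onmessage = (e) => {
//   const counter = new Int32Array(e.data);
//   for (let i = 0; i < 1000; i++) Atomics.add(counter, 0, 1); // race-free update
// };
```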

Nvidia and AI researchers create AI agent Noise2Noise that can denoise images

Richard Gall
10 Jul 2018
2 min read
Nvidia has created an AI agent that can clean 'noisy' images without ever having seen a 'clean' one. Working alongside AI researchers from MIT and Aalto University, they have created something they've called 'Noise2Noise'. The team's findings could, they claim, "lead to new capabilities in learned signal recovery using deep neural networks." This could have a big impact on a number of areas, including healthcare.

How researchers trained the Noise2Noise AI agent

The team took 50,000 images from the ImageNet database and manipulated them to look 'noisy'. Noise2Noise then ran on these images and was able to 'denoise' them without knowing what a clean image looked like. This is the most significant part of the research: the AI agent wasn't learning from clean data, but was instead learning the denoising process itself (the objective is sketched below). This is an emerging and exciting area in data analysis and machine learning. In the introduction to their recently published journal article, which coincides with a presentation at the International Conference on Machine Learning in Stockholm this week, the research team explain: "Signal reconstruction from corrupted or incomplete measurements is an important subfield of statistical data analysis. Recent advances in deep neural networks have sparked significant interest in avoiding the traditional, explicit a priori statistical modeling of signal corruptions, and instead learning to map corrupted observations to the unobserved clean versions."

The impact and potential applications of Noise2Noise

Because the Noise2Noise AI agent doesn't require 'clean data', or the 'a priori statistical modeling of signal corruptions', it could be applied in a number of very exciting ways. It "points the way to significant benefits in many applications by removing the need for potentially strenuous collection of clean data", the team argue. One of the most interesting potential applications of the research is in the field of MRI scans. An agent like Noise2Noise could produce a more accurate scan than traditional MRI reconstruction, which relies on the Fast Fourier Transform. This could subsequently lead to a greater level of detail in MRI scans, helping medical professionals make quicker diagnoses.

Read next:
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?
How to Denoise Images with Neural Networks
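For readers who want the core idea in one formula: roughly speaking (this is a paraphrase of the paper's setup, not a quotation), Noise2Noise trains a network $f_\theta$ to map one noisy realisation of an image onto another independent noisy realisation of the same image. With an L2 loss and zero-mean noise, the minimiser of this objective coincides with the one obtained by training against clean targets:

$$\min_\theta \; \mathbb{E}_{x,\,n_1,\,n_2}\left[\, \left\lVert f_\theta(x + n_1) - (x + n_2) \right\rVert^2 \,\right]$$

where $x$ is the unobserved clean image and $n_1$, $n_2$ are independent noise samples.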

Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices

Sugandha Lahoti
07 Oct 2019
3 min read
Google's Project Zero disclosed a zero-day Android exploit in popular devices from Pixel, Huawei, Xiaomi, and Samsung last Friday. The flaw unlocks root-level access and requires no or minimal customization to root a phone that is exposed to the bug. A similar Android OS flaw was fixed in 2017 but has now found its way into newer software versions as well. The researchers speculate that use of this vulnerability is attributable to the Israel-based NSO Group. Google has published a proof of concept describing the bug as a kernel privilege escalation that uses a 'use-after-free' vulnerability, accessible from inside the Chrome sandbox.

How does the zero-day Android exploit work?

As described in the upstream commit, "binder_poll() passes the thread->wait waitqueue that can be slept on for work. When a thread that uses epoll explicitly exits using BINDER_THREAD_EXIT, the waitqueue is freed, but it is never removed from the corresponding epoll data structure. When the process subsequently exits, the epoll cleanup code tries to access the waitlist, which results in a use-after-free."

Basically, the zero-day Android exploit can gain arbitrary kernel read/write when running locally. If the exploit is delivered via the web, it only needs to be paired with a renderer exploit, since the vulnerability is accessible through the sandbox. It is exploitable in Chrome's renderer processes under Android's 'isolated_app' SELinux domain, making Binder the vulnerable component.

Affected devices include the Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Huawei P20, Redmi 5A, Redmi Note 5, Mi A1, Oppo A3, Moto Z3, Oreo LG phones, Samsung Galaxy S7, Samsung Galaxy S8, and Samsung Galaxy S9. The vulnerability was earlier patched in Linux kernel version 4.14 and above, but without a CVE. It is now being tracked as CVE-2019-2215.

"This issue is rated as High severity on Android and by itself requires installation of a malicious application for potential exploitation. Any other vectors, such as via web browser, require chaining with an additional exploit," Project Zero member Tim Willis wrote in the post.

Project Zero normally gives developers a 90-day window to fix an issue before making it public, but because this vulnerability was being exploited in the wild, it was published after just seven days: once seven days elapse or a patch is made broadly available (whichever is earlier), the bug report becomes visible to the public.

Google said that affected Pixel devices will have the zero-day Android exploit patched in the upcoming October 2019 Android security update. Other OEMs have not yet acknowledged the vulnerability, but should ideally release patches soon.

Read next:
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
An unpatched vulnerability in NSA's Ghidra allows a remote attacker to compromise exposed systems
A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
New iPhone exploit checkm8 is unpatchable and can possibly lead to permanent jailbreak on iPhones
Google's Project Zero reveals several serious zero-day vulnerabilities in a fully remote attack surface of the iPhone

Unity introduces guiding Principles for ethical AI to promote responsible use of AI

Natasha Mathur
03 Dec 2018
3 min read
The Unity team announced its guide to ethical AI last week, to promote more responsible use of artificial intelligence by its developers, its community, and the company itself. Unity's guide to ethical AI comprises six guiding AI principles.

Unity's six guiding AI principles

Be Unbiased: This principle focuses on designing AI tools in a way that complements the human experience in a positive way. To achieve this, it is important to take into consideration all types of diverse human experiences, which can, in turn, lead to AI complementing experiences for everybody.

Be Accountable: This principle puts an emphasis on keeping in mind the potential negative consequences, risks, and dangers of AI tools while building them. It focuses on assessing the factors that might cause "direct or indirect harm" so that they can be avoided. This ensures accountability.

Be Fair: This principle focuses on ensuring that the AI tools developed do not interfere with "normal, functioning democratic systems of government". So, the development of an AI tool that could lead to the suppression of human rights (such as free expression), as defined by the Universal Declaration, should be avoided.

Be Responsible: This principle stresses the importance of developing products responsibly. It ensures that AI developers don't take undue advantage of the vast capabilities of AI while building a product.

Be Honest: This principle focuses on building trust among the users of a technology by being clear and transparent about the product so that they can better understand its purpose. This, in turn, will lead to users making better and more informed decisions regarding the product.

Be Trustworthy: This principle emphasizes the importance of protecting AI-derived user data. "Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide," reads the Unity blog.

"We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology. With this guide, we are committed to implementing the ethical use of AI across all aspects of our company's interactions, development, and creation," says the Unity team.

For more information, check out the official Unity blog post.

Read next:
EPIC's Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
Teaching AI ethics – Trick or Treat?
SAP creates AI ethics guidelines and forms an advisory panel

Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful, language-agnostic compute environment. This major milestone marks an evolution of Fastly's edge computing capabilities and the company's innovation in the serverless space. https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. They can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages needed. Rather than spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web. Fastly's Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back. We discussed Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, which the company says is 100x faster than any other solution on the market. Additionally, Compute@Edge is powered by Fastly's open-source WebAssembly compiler and runtime, Lucet, and supports Rust as a second language in addition to the Varnish Configuration Language (VCL).

Other benefits of Compute@Edge include:
Code can be computed around the world instead of in a single region, allowing developers to reduce code execution latency and further optimize performance without worrying about managing the underlying infrastructure.
The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage. With a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.
Developers can serve GraphQL from the network edge and deliver more personalized experiences.
Developers can develop their own customized API protection logic.
With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities. To learn more about Fastly's edge computing and cloud services, you can visit its official blog. Developers who are interested in being part of the private beta can sign up on this page.

Read next:
Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
What to expect in Webpack 5?

Bhagyashree R
07 Feb 2019
3 min read
Yesterday, the team behind Webpack shared the updates we will see in its upcoming version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk ID algorithm, and more. For Webpack 5, the minimum supported Node.js version has been updated from 6 to 8. As this version is a major release, it will come with breaking changes, and some plugins may not work.

Expected features in Webpack 5

Removed Webpack 4 deprecated features

All the features that were deprecated in Webpack 4 have been removed in this version. So, when migrating to Webpack 5, ensure that your Webpack build doesn't show any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed

All the versions before Webpack 4 provided polyfills for most of the Node.js core modules, which were automatically applied once a module used any of the core modules. Polyfills make it easy to use modules written for Node.js, but they also increase the bundle size as huge modules get added to the bundle. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithm for deterministic chunk and module IDs

Webpack 5 comes with new algorithms for long-term caching. These are enabled by default in production mode with the following configuration lines: chunkIds: "deterministic", moduleIds: "deterministic" (see the configuration sketch below). These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose to use the old defaults, chunkIds: "size", moduleIds: "size", which will generate smaller bundles but invalidate them more often for caching.

Named chunk IDs algorithm

A named chunk ID algorithm is introduced, which is enabled by default in development mode. It gives chunks and filenames human-readable names instead of the old numeric names. The algorithm determines the chunk ID from the chunk's content, so users no longer need to use import(/* webpackChunkName: "name" */ "module") for debugging. To opt out of this feature, you can change the configuration to chunkIds: "natural".

Compiler idle and close

Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once a compiler is closed, all remaining work should be finished as fast as possible, and then a callback will signal that the closing has been completed.

You can read the entire changelog in the Webpack repository.

Read next:
Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0
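As a quick sketch of how those ID settings would look in practice, here is a minimal configuration. Note that in a webpack config these options live under the optimization key; the file name and the use of webpack's bundled Configuration type are illustrative assumptions.

```typescript
// webpack.config.ts (sketch): long-term-caching ID algorithms in Webpack 5.
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  optimization: {
    chunkIds: 'deterministic',   // production default in Webpack 5
    moduleIds: 'deterministic',  // production default in Webpack 5
    // The older 'size' behaviour yields slightly smaller bundles but
    // invalidates IDs (and cached files) more often:
    // chunkIds: 'size',
    // moduleIds: 'size',
  },
};

export default config;
```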

How Google’s DeepMind is creating images with artificial intelligence

Sugandha Lahoti
28 Mar 2018
2 min read
The research team at DeepMind has been using deep reinforcement learning agents to generate images as humans do. DeepMind's AI agents understand how digits, characters, and portraits are actually constructed, instead of analyzing the pixels that represent them on a screen. The agents interact with a computer paint program, placing strokes on a digital canvas and changing the brush size, pressure, and color.

How does DeepMind generate images?

As part of the initial training process, the agent starts by drawing random strokes with no visible intent or structure. Following the reinforcement learning approach, the agent is then 'rewarded', which 'encourages' it to produce meaningful drawings. To monitor the performance of the first network, DeepMind trained a second neural network, called the discriminator. This discriminator predicts whether a particular drawing was produced by the agent or sampled from a dataset of real photographs. The painting agent is rewarded by how much it manages to "fool" the discriminator into thinking that the drawings are real.

Most importantly, DeepMind's AI agents produce images by writing graphics programs to interact with a paint environment. This is different from how a GAN works, where the generator directly outputs pixels. Moreover, the model can also apply what it has learned in the simulated paint program to re-create characters in other similar environments, because the framework is interpretable in the sense that it produces a sequence of motions that control a simulated brush.

Training DeepMind AI agents

This agent was trained to generate images resembling MNIST digits: it was shown what the digits look like, but not how they are drawn. By attempting to generate images that fool the discriminator, the agent learned to control the brush and to maneuver it to fit the style of different digits. The model was also trained to reproduce specific images on real datasets. When trained to paint celebrity faces, the agent is capable of capturing the main traits of the face, such as shape, tone, and hairstyle, much like a street artist would when painting a portrait with a limited number of brush strokes. (Image source: DeepMind Blog)

For further details on the methodology and experimentation, read the research paper.

Meet ‘Gophish’, the open source Phishing Toolkit that simulates real world phishing attacks

Melisha Dsouza
29 Oct 2018
2 min read
Phishing attacks these days are a common phenomenon. Fraudsters use technical tricks and social engineering to deceive users into revealing sensitive personal information such as usernames, passwords, account IDs, credit card details, and social security numbers through fake emails. Gophish provides a framework to simulate real-world phishing attacks, enabling organizations to run phishing training and make employees more aware of security in their business.

Gophish is an open-source phishing toolkit written in Golang, specially designed for businesses and penetration testers. It ships as a compiled binary, which means that Gophish releases do not have any dependencies. It's easy to set up and run and can be hosted in-house. Here are some of the features of Gophish:

#1 Ease of use

Users can easily create or import pixel-perfect phishing templates and customize them in the browser itself. Phishing emails can be scheduled and sent in the background. Results of the simulation are delivered in near real-time.

#2 Cross Platform

Gophish can be used across platforms like Windows, Mac OSX, and Linux.

#3 Full REST API

The framework is powered by a REST API, and Gophish's Python client makes it really easy to work with the API (a small sketch of an API call follows the article).

#4 Real-Time Results

Results obtained by Gophish are updated automatically. Users can view a timeline for every recipient and track whether the email was opened, links were clicked, credentials were submitted, and more.

Damage caused by phishing in a corporate environment can have dangerous repercussions, such as loss or misuse of confidential data, ruining consumers' trust in the brand, and abuse of corporate network resources. The Gophish framework aims to help industry professionals learn how to tackle phishing attacks with its ease of setup and use and its powerful results. To learn more about how to use Gophish and its benefits, head over to the official blog.

Read next:
Google's Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
Microsoft claims it halted Russian spear phishing cyberattacks
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
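For instance, listing campaigns over the REST API might look like the following TypeScript sketch. The local admin URL, the /api/campaigns/ route, and api_key query-string authentication are assumptions based on the Gophish documentation and should be checked against the docs for your version; all values shown are placeholders.

```typescript
// Sketch: list campaigns from a local Gophish instance over its REST API.
// Assumptions: the admin server listens on https://localhost:3333 (the default)
// and exposes /api/campaigns/ with API-key authentication. Placeholders only.
const GOPHISH_URL = 'https://localhost:3333';
const API_KEY = 'your-api-key';

async function listCampaigns(): Promise<void> {
  const res = await fetch(`${GOPHISH_URL}/api/campaigns/?api_key=${API_KEY}`);
  if (!res.ok) throw new Error(`Gophish API returned ${res.status}`);

  // Each campaign carries its timeline and per-recipient results
  // (email opened, link clicked, credentials submitted).
  const campaigns = await res.json();
  console.log(campaigns);
}

listCampaigns().catch(console.error);
```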

VLC media player affected by a major vulnerability in a 3rd-party library, libebml; updating to the latest version may help

Savia Lobo
25 Jul 2019
4 min read
A few days ago, the German security agency CERT-Bund revealed it had found a Remote Code Execution (RCE) flaw in the popular open-source VLC media player that would allow hackers to install, modify, or run any software on a victim's device without their authority, and could also be used to disclose files on the host system. The vulnerability (listed as CVE-2019-13615) was first announced by WinFuture and received a vulnerability score of 9.8, making it a "critical" problem.

According to a release by CERT-Bund, "A remote, anonymous attacker can exploit a vulnerability in VLC to execute arbitrary code, create a denial of service state, disclose information, or manipulate files." According to Threat Post, "Specifically, VLC media player's heap-based buffer over-read vulnerability exists in mkv::demux_sys_t::FreeUnused() in the media player's modules/demux/mkv/demux.cpp function when called from mkv::Open in modules/demux/mkv/mkv.cpp."

VLC is not vulnerable, VideoLAN says

Yesterday, VideoLAN, the makers of VLC, tweeted that VLC is not vulnerable. They said, "the issue is in a 3rd party library, called libebml, which was fixed more than 16 months ago. VLC since version 3.0.3 has the correct version shipped, and @MITREcorp did not even check their claim." https://twitter.com/videolan/status/1153963312981389312

VideoLAN said a reporter opened a bug on their public bug tracker, which is outside of the reporting policy; it should have been mailed in private to the security alias. "We could not, of course, reproduce the issue, and tried to contact the security researcher, in private," VideoLAN tweeted. VideoLAN said the reporter was using Ubuntu 18.04, an old version of Ubuntu, and "clearly has not all the updated libraries. But did not answer our questions."

VideoLAN says it wasn't contacted before the CVE was issued

VideoLAN is quite unhappy that MITRE Corp did not approach them before issuing a CVE for the VLC vulnerability, which is a direct violation of MITRE's own policies. (Source: CVE.mitre.org) https://twitter.com/videolan/status/1153965979988348928

When VideoLAN complained and asked if they could manage their own CVEs (like another CNA), "we had no answer and @usnistgov NVD told us that they basically couldn't do anything for us, not even fixing the wrong information," they tweeted. https://twitter.com/videolan/status/1153965981536010240

VideoLAN said even CERT-Bund did not contact them for clarification. They further added, "So, when @certbund decided to do their 'disclosure', all the media jumped in, without checking anything nor contacting us." https://twitter.com/videolan/status/1153971024297431047

The VLC CVE on the National Vulnerability Database has now been updated. NVD has downgraded the severity of the issue from a base score of 9.8 (critical) to 5.5 (medium). The changelog also specifies that the "Victim must voluntarily interact with attack mechanism."

Dan Kaminsky, an American security researcher, tweeted, "A couple of things, though: 1) Ubuntu 18.04 is not some ancient version 2) Playing videos with VLC is both a first-class user demand and a major attack surface, given the realities of content sourcing. If Ubuntu can't secure VLC dependencies, VLC probably has to ship local libs." https://twitter.com/dakami/status/1154118377197035520

Last month, VideoLAN fixed two high severity bugs in their security update for the VLC media player. The update included fixes for 33 vulnerabilities in total, of which two were marked critical, 21 medium, and 10 low. Jean-Baptiste Kempf, president of VideoLAN and an open-source developer, wrote, "This high number of security issues is due to the sponsoring of a bug bounty program funded by the European Commission, during the Free and Open Source Software Audit (FOSSA) program". To know more about this news in detail, you can read WinFuture's blog post.

Read next:
The EU Bounty Program enabled in VLC 3.0.7 release, this version fixed the most number of security issues
A zero-day vulnerability on Mac Zoom Client allows hackers to enable users' camera, leaving 750k companies exposed
VLC's updating mechanism still uses HTTP over HTTPS
Amazon introduces S3 batch operations to process millions of S3 objects

Amrata Joshi
02 May 2019
3 min read
Just two days ago, Amazon announced Amazon S3 Batch Operations, a storage management feature that makes it easier to process millions of S3 objects. It is an automated feature that was first previewed at AWS re:Invent 2018. Users can set tags or access control lists (ACLs), copy objects to another bucket, initiate a restore from Glacier, and invoke an AWS Lambda function on each object. Developers and IT administrators can change object properties and metadata and execute storage management tasks with a single API request. For example, S3 Batch Operations allows customers to replace object tags, change access controls, add object retention dates, copy objects from one bucket to another, and even trigger Lambda functions against existing objects stored in S3.

S3's existing support for inventory reports is used to drive the batch operations. With Batch Operations, users no longer need to write code, set up server fleets, or figure out how to partition the work and distribute it to a fleet; they can create a job in minutes with a couple of clicks, and S3 uses massive, behind-the-scenes parallelism to manage the job. Users can create, monitor, and manage their batch jobs using the S3 CLI, the S3 Console, or the S3 APIs (a small sketch of a job-creation call follows the article).

Important terminology for batch operations

Bucket: An S3 bucket can hold a collection of any number of S3 objects, with optional per-object versioning.

S3 inventory report: An S3 inventory report is generated each time a daily or weekly bucket inventory is run. A report can be configured to include all of the objects in a bucket or to focus on a prefix-delimited subset.

Manifest: A manifest is an inventory report, or a file in CSV format, that identifies the objects to be processed in the batch job.

Batch action: The batch action is the desired action to perform on the objects described by a manifest.

IAM role: An IAM role provides S3 with permission to read the objects in the inventory report, perform the desired actions, and write the optional completion report.

Batch job: A batch job references all of the above. Each job has a status and a priority; higher priority (numerically) jobs take precedence over those with lower priority.

Most users are happy with this news, as they think the performance of their projects might improve. A user commented on Hacker News, "This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications." To know more about this news, check out Amazon's blog post.

Read next:
Amazon finally agrees to let shareholders vote on selling facial recognition software
Eero's acquisition by Amazon creates a financial catastrophe for investors and employees
Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector
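To make the manifest, operation, and IAM-role pieces described above concrete, here is a minimal sketch of creating a job programmatically with the AWS SDK for JavaScript v3's S3 Control client (Batch Operations jobs are created through the S3 Control CreateJob API). Every identifier below (account ID, ARNs, ETag, tag values) is a placeholder, and the exact field names should be checked against the CreateJob documentation.

```typescript
import { S3ControlClient, CreateJobCommand } from '@aws-sdk/client-s3-control';

// Sketch: an S3 Batch Operations job that re-tags every object listed in a
// CSV manifest. All identifiers are placeholders.
const client = new S3ControlClient({ region: 'us-east-1' });

const command = new CreateJobCommand({
  AccountId: '111122223333',
  Priority: 10,                                  // numerically higher = runs first
  ConfirmationRequired: false,
  RoleArn: 'arn:aws:iam::111122223333:role/batch-ops-role', // read manifest, write report
  Manifest: {
    Spec: {
      Format: 'S3BatchOperations_CSV_20180820',
      Fields: ['Bucket', 'Key'],                 // columns present in the manifest CSV
    },
    Location: {
      ObjectArn: 'arn:aws:s3:::example-manifests/manifest.csv',
      ETag: 'example-manifest-etag',
    },
  },
  Operation: {
    S3PutObjectTagging: { TagSet: [{ Key: 'project', Value: 'archive' }] },
  },
  Report: {
    Bucket: 'arn:aws:s3:::example-reports',      // optional completion report
    Format: 'Report_CSV_20180820',
    Enabled: true,
    ReportScope: 'AllTasks',
  },
});

client.send(command).then((res) => console.log('Created job', res.JobId));
```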

You can now make music with AI thanks to Magenta.js

Richard Gall
04 May 2018
3 min read
Google Brain's Magenta project has released Magenta.js, a tool that could open up new opportunities in developing music and art with AI. The Magenta team has been exploring a range of ways to create with machine learning, and with Magenta.js they have developed a tool that will open up the very domain they've been exploring to new people. Let's take a look at how the tool works, what the aims are, and how you can get involved.

How does Magenta.js work?

Magenta.js is a JavaScript suite that runs on TensorFlow.js, which means it can run machine learning models in the browser. The team explains that JavaScript has been a crucial part of the project, as they have been eager to bridge the gap between the complex research they are doing and their end users. They want their research to result in tools that can actually be used. As they've said before: "...we often face conflicting desires: as researchers we want to push forward the boundaries of what is possible with machine learning, but as tool-makers, we want our models to be understandable and controllable by artists and musicians."

As they note, JavaScript has informed a number of projects that preceded Magenta.js, such as Latent Loops, Beat Blender, and Melody Mixer. These tools were all built using MusicVAE, a machine learning model that forms an important part of the Magenta.js suite. The first package you'll want to pay attention to in Magenta.js is @magenta/music. This package features a number of Magenta's machine learning models for music, including MusicVAE and DrumsRNN (a short sketch follows the article). Thanks to Magenta.js you'll be able to get started quickly, using a number of the project's pre-trained models, which you can find on GitHub here.

What next for Magenta.js?

The Magenta team is keen for people to start using the tools it develops. It wants a community of engineers, artists, and creatives to help drive the project forward, and is encouraging anyone who develops using Magenta.js to contribute to the GitHub repo. Clearly, this is a project where openness is going to be a huge bonus. We're excited to see not only what the Magenta team comes up with next, but also the range of projects that are built using it. Perhaps we'll begin to see a whole new creative movement emerge? Read more on the project site here.
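As a taste of what working with @magenta/music looks like, here is a minimal TypeScript sketch that samples a short melody with MusicVAE and plays it in the browser. The checkpoint URL is an example based on the project's hosted checkpoints and should be verified against the current checkpoint list in the Magenta.js repository.

```typescript
import * as mm from '@magenta/music';

// Sketch: sample a short melody with MusicVAE and play it in the browser.
// The checkpoint URL is an example; pick one from the checkpoints list in
// the Magenta.js repository.
const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2';

async function playGeneratedMelody(): Promise<void> {
  const model = new mm.MusicVAE(CHECKPOINT);
  await model.initialize();                  // downloads weights, runs on TensorFlow.js

  const [sequence] = await model.sample(1);  // one generated NoteSequence
  await new mm.Player().start(sequence);     // plays via the Web Audio API
}

playGeneratedMelody().catch(console.error);
```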

Goodbye PASS from Blog Posts - SQLServerCentral

Anonymous
23 Dec 2020
1 min read
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it…" Continue reading Goodbye PASS. The post Goodbye PASS appeared first on Born SQL and was syndicated on SQLServerCentral.
MongoDB going relational with 4.0 release

Amey Varangaonkar
16 Apr 2018
2 min read
MongoDB is, without a doubt, the most popular NoSQL database today. Per the Stack Overflow Developer Survey, more developers have wanted to work with MongoDB than with any other database over the last two years. With the upcoming MongoDB 4.0 release, it plans to up the ante by adding support for multi-document transactions and ACID guarantees (Atomicity, Consistency, Isolation, and Durability).

Poised to be released this summer, MongoDB 4.0 will combine the speed, flexibility, and efficiency of document models - the features which make MongoDB such a great database to use - with the assurance of transactional integrity (a sketch of what a transaction looks like in application code follows the article). This new addition should give the database a more relational feel, and would suit large applications with high data integrity needs regardless of how the data is modeled. MongoDB has also ensured that the support for multi-document transactions will not affect the overall speed and performance of unrelated workloads running concurrently.

MongoDB has been working on this transactional integrity feature for over three years now, ever since it incorporated the WiredTiger storage engine. The MongoDB 4.0 release should also see the introduction of some other important features such as snapshot isolation, a consistent view of data, the ability to roll back transactions, and other ACID features.

Per the 4.0 product roadmap, 85% of the work is already done, and the release seems to be on track to hit the market on time. You can read more about the announcement on MongoDB's official page. You can also join the beta program to test out the newly added features in 4.0.
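For a sense of what multi-document transactions look like once 4.0 ships, here is a rough sketch using the MongoDB Node.js driver's session API. The connection string, database, collection, and document values are placeholders, and a replica set (or sharded cluster) is required for transactions.

```typescript
import { MongoClient } from 'mongodb';

// Sketch: a multi-document transaction with the MongoDB Node.js driver.
// Requires MongoDB 4.0+ and a replica set; the URI and names are placeholders.
const client = new MongoClient('mongodb://localhost:27017/?replicaSet=rs0');

async function transferFunds(): Promise<void> {
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const accounts = client.db('bank').collection('accounts');
      // Both updates commit or abort together: atomicity across documents.
      await accounts.updateOne({ name: 'alice' }, { $inc: { balance: -100 } }, { session });
      await accounts.updateOne({ name: 'bob' }, { $inc: { balance: 100 } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}

transferFunds().catch(console.error);
```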

Riot Games is struggling with sexism and lack of diversity; employees plan to walk out in protest

Sugandha Lahoti
30 Apr 2019
7 min read
Update 23rd August 2019: Riot Games has finally settled a class-action lawsuit filed by Riot workers over the sexual harassment and discrimination they faced at their workplace. "This is a very strong settlement agreement that provides meaningful and fair value to class members for their experiences at Riot Games," said Ryan Saba of Rosen Saba, LLP, the attorney representing the plaintiffs. "This is a clear indication that Riot is dedicated to making progress in evolving its culture and employment practices. A number of significant changes to the corporate culture have been made, including increased transparency and industry-leading diversity and inclusion programs. The many Riot employees who spoke up, including the plaintiffs, significantly helped to change the culture at Riot." "We are grateful for every Rioter who has come forward with their concerns and believe this resolution is fair for everyone involved," said Nicolo Laurent, CEO of Riot Games. "With this agreement, we are honoring our commitment to find the best and most expeditious way for all Rioters, and Riot, to move forward and heal."

Update as on 6 May 2019: Riot Games announced early Friday that it will soon start giving new employees the option to opt out of some mandatory arbitration requirements when they are hired. The catch: the change will initially be narrowly focused on a specific set of employees for a specific set of causes.

Riot Games employees are planning to walk out in protest of the company's sexist culture and lack of diversity. Riot has been in the spotlight since Kotaku published a detailed report highlighting how five current and former Riot employees filed lawsuits against the company, citing the sexist culture that festers at Riot. Two of the five employees were women. Per Kotaku, last Thursday Riot filed a motion to force two of those women, whose lawsuits revolved around the California Equal Pay Act, into private arbitration. In their motions, Riot's lawyers argue that these employees waived their rights to a jury trial when they signed arbitration agreements upon their hiring. Private arbitration makes these employees less likely to win against Riot.

In November last year, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment encountered at Google's workplace. This walkout led to Google ending forced arbitration for its full-time employees. Google employees are also organizing a phone drive, announced in a letter published on Medium, to press lawmakers to legally end forced arbitration. Per The Verge, "The employees are organizing a phone bank for May 1st and asking for people to make three calls to lawmakers — two to the caller's senators and one to their representative — pushing for the FAIR Act, which was recently reintroduced in the House of Representatives." https://twitter.com/endforcedarb/status/1122864987243012097

Following Google, Facebook also made changes to its forced arbitration policy for sexual harassment claims. And it is not only sexual harassment: game developers also face unfair treatment in terms of work conditions, job instability, and inadequate pay. In February, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) published an open letter on Kotaku urging video game industry workers to unionize and voice their support for better treatment within the workplace.

Following Riot's motion, employees have organized a walkout demanding that Riot leadership end forced arbitration against the two current employees. The walkout is planned for Monday, May 6. An internal document from Riot employees seen by Kotaku describes the demands laid out by walkout organizers:
a clear intention to end forced arbitration
a precise deadline (within 6 months) by which to end it
a commitment to not force arbitration on the women involved in the ongoing litigation against Riot

Riot's sexist culture and lack of diversity

The investigation conducted by Kotaku last year unveiled some major flaws in Riot's culture, and in gaming companies in general. Over 28 current and former Riot employees spoke to Kotaku with stories of Riot's female employees being treated unfairly and being on the receiving end of gender discrimination. An employee named Lucy told Kotaku that when she thought about hiring a woman for a leadership role, she heard plenty of excuses for why her female job candidates weren't Riot material. Some were "ladder climbers." Others had "too much ego." Most weren't "gamer enough." A few were "too punchy," or didn't "challenge convention," she told Kotaku.

She also shared her personal experiences of discrimination. Often her manager would imply that her position was a direct result of her appearance. Every few months, she said, a male boss of hers would comment in public meetings about how her kids and husband must really miss her while she was at work. Women are often told they don't fit the company's 'bro culture'; an astonishing eighty percent of Riot employees are men, according to data Riot collected from employees' driver's licenses. "The 'bro culture' there is so real," said one female source, who said she'd left the company due to sexism. "It's agonizingly real. It's like working at a giant fraternity."

Among other people Kotaku interviewed, stories were told of women being groomed for promotions, and doing jobs above their title and pay grade, until men were suddenly brought in to replace them. Another woman told Kotaku how a colleague once informed her, apparently as a compliment, that she was on a list getting passed around by senior leaders detailing who they'd sleep with. Two former employees also added that they "felt pressure to leave after making their concerns about gender discrimination known."

Many former Riot employees refused to come forward to share their stories and refrained from participating in the walkout. For some, this was out of fear of retaliation from Riot's fanbase; Riot is the creator of the popular game League of Legends. Others said they were restricted from talking on the record by non-disparagement agreements they signed before leaving the company.

The walkout threat spread far enough that it prompted a response from Riot's chief diversity officer, Angela Roseboro, in the company's private Slack over the weekend, reports Waypoint. In a copy of the message obtained by Waypoint, Roseboro says, "We're also aware there may be an upcoming walkout and recognize some Rioters are not feeling heard. We want to open up a dialogue on Monday and invite Rioters to join us for small group sessions where we can talk through your concerns, and provide as much context as we can about where we've landed and why. If you're interested, please take a moment to add your name to this spreadsheet. We're planning to keep these sessions smaller so we can have a more candid dialogue."

Riot CEO Nicolo Laurent also acknowledged the talk of a walkout in a statement: "We're proud of our colleagues for standing up for what they believe in. We always want Rioters to have the opportunity to be heard, so we're sitting down today with Rioters to listen to their opinions and learn more about their perspectives on arbitration. We will also be discussing this topic during our biweekly all-company town hall on Thursday. Both are important forums for us to discuss our current policy and listen to Rioter feedback, which are both important parts of evaluating all of our procedures and policies, including those related to arbitration."

Tech worker unions, Game Workers Unite, and Googlers for Ending Forced Arbitration have stood up in solidarity with Riot employees: "Forced arbitration clauses are designed to silence workers and minimize the options available to people hurt by these large corporations." https://twitter.com/GameWorkers/status/1122933899590557697 https://twitter.com/endforcedarb/status/1123005582808682497

"Employees at Riot Games are considering a walkout, and the organization efforts have prompted an internal response from company executives," tweeted Coworker.org. https://twitter.com/teamcoworker/status/1122936953698160640

Others have also joined in support. https://twitter.com/theminolaur/status/1122931099057950720 https://twitter.com/LuchaLibris/status/1122929166037471233 https://twitter.com/floofyscorp/status/1122955992268967937

Read next:
#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
DataCamp reckons with its #MeToo movement; CEO steps down from his role indefinitely
Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination